Wednesday, December 26, 2012

Excuses

Tonight I was at home working late on multiple important projects.  I was stressed out, focused, and in a flow state with a mental house of cards carefully built up as I worked through the problems trying to make the best decisions.

My wife wandered in and told me we were out of eggs. My flow state disappeared and I forgot what I was thinking about.  The house of cards collapsed. I got annoyed at the trivial interruption and snipped at her. Offended, she snipped back and then left me alone.

I got back to work and wrapped stuff up in a few hours.  I still felt bad for snipping at her.  I was ashamed of acting like that.  So I went to apologize.  She was already asleep but I woke her up anyway.  There were lots of excuses for me to act like I did and all of them wanted to jump out of my mouth and justify my behavior.  Work stress, long days, cranky baby, tired, the list went on.  I paused and fought back against my excuses.  I'm not ashamed to say that it wasn't easy, some part of me wanted to justify what I did.  But I didn't;  I said I was sorry and there was no excuse for acting like I did.  I asked for her forgiveness.  I was able to go to bed with my conscience clear but I still couldn't sleep.  A question was still bothering me so I got on here to try and write it down.

Why do we make excuses for our behavior?

I can understand the desire to make excuses.  I felt the pull myself tonight.  The justification feels like instant absolution of the wrong, and being absolved feels better than guilt.  But in reality, excuses only serve to justify poor decisions.  By justifying them we are, in a way, claiming they were the right thing to do.  Excuses allow us to hide a "wrong" behind a facade of "right" and lie to ourselves.  Justifying things helps us pretend that they are out of our control.  We don't try to fix our behavior and do better next time.  Justifying poor decisions means that in the same situation in the future we're going to make the same poor choice again.  The only way to really escape the shame and guilt of mistakes is to take a lesson from them.  If not, we've wasted an opportunity.

It's alright to make mistakes.  Don't make excuses or you run the risk of poor decisions following you around.

Wednesday, December 19, 2012

Funny thought experiment about information density

What are the human limits of comprehending numbers?  I think I have an idea of where mine are.  There are some cavernous mental pitfalls when thinking about information density.  This one involves the application of context (which always matters but is always stored outside of the data itself).

How many sentences, words, and sounds can be spoken in five minutes, in any number of languages?  Is the number of possible languages, and combinations of languages, itself infinite?

Intuitively I do think of all the possibilities as infinite.

Some more thinking proves my intuition wrong.  Let's look at the problem from a different angle.  How much data is stored in that five minutes?

60 * 5 = 300 = length of recording (in seconds)
128 = bitrate (in kilobits per second)
z = file size (in kilobytes)
(300 * 128) / 8 = z
38,400 kilobits / 8 = 4,800 kilobytes ≈ 4.8 MB
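
A quick sanity check of the arithmetic, as a small Python sketch (assuming the 128 kbit/s recording above):

import math

seconds = 5 * 60                       # 300 seconds
bitrate_kbps = 128                     # kilobits per second
kilobits = seconds * bitrate_kbps      # 38,400 kilobits
megabytes = kilobits / 8 / 1000        # 4.8 MB
print(megabytes)

# Finite, and even countable in principle: a 4.8 MB file holds 38,400,000 bits,
# so there are "only" 2**38,400,000 possible five-minute recordings.
bits = kilobits * 1000
digits = int(bits * math.log10(2)) + 1
print(digits)                          # a number with roughly 11.6 million decimal digits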

4.8 MB of information is nowhere near infinite.  It's actually pretty damn small.  My intuition was dead wrong... or was it?

Maybe my intuitive perception of infinite languages comes from the same sounds being interpreted differently depending on context.  The infinity is in there somewhere, but it lives in the unbounded context that can surround the same sounds, not in the sounds themselves.

Sunday, December 9, 2012

Taking credit

As a manager, here are some simple rules for when to give and take credit for things:

Rule 1: When things are going well: give all the credit to others.  This includes members of your team and others external but involved with the team (sometimes even competitors).

Rule 2: When things aren't going well: take all the blame.  Then take action to fix it.

Too often I see scenarios where the giving and taking of credit for good ideas or jobs well done is carefully bartered and feelings get hurt easily.  The idea of a manager taking credit for the work of those he manages is so often cited it's now cliché.  It's just a given in some places.  We should strive not to be like that.  Doing so creates resentment, undermines work performance, and damages your credibility.

If you find yourself in a scenario where somebody else has taken credit for your work, I've found it's always best to forgive unconditionally.  Yes, I said always.  Harboring resentment will undermine your own work performance and create barriers to your ability to communicate; never good.  If it's unjust enough that you actually get angry about it, just relax.  Take some time off or go work on something else until you cool down.  Then get over it and get back to work as if it never happened.  This is how to handle something with grace.

I've used the word grace in a few blog posts.  I looked up the definition today.  A funny thing about it, I think that when you give a pardon with grace you're acknowledging that the forgiveness isn't deserved.  They did something wrong and you're consciously making the decision to forget about it. This is grace. 

As I read modern thinking about scenarios like this, the universal advice seems to be to confront the offender.  I think this is a terrible idea and here's why: they know they did it and they already feel guilty about it.  If they don't know they did it, then they were acting without malicious intent so it doesn't matter anyway.  Why people do something is more important than what they do.  The universal sense of what's right and wrong is written into the heart of every person.  It's why we want to seek justice in the first place, because we want to make the other person suffer more in their guilt.  I don't think that's ever the right thing to do.  Better to drop it, move on, and keep trying to live and work as a better example.  Folks will see that.  With enough exposure to it, they will eventually want to emulate it.

Monday, November 26, 2012

Sharing Data for Analysis - appropriate standards and metadata



This was originally posted on Data Tactics' blog.

Sharing Data in 2013 and beyond
Selecting appropriate levels of standardization and the challenges of metadata

Eric Whyne
Technical Manager
Data Tactics Corporation
November 2012

Efforts at sharing data within the US Department of Defense and Intelligence Community, especially on the classified networks, have historically focused on rigid edicts to standardize how data is stored and accessed.  The logic is easy to follow: if data is stored in a standardized manner, then we can all write software to the standards.  If there are only a few ways to access the data, it should reduce the amount of time we spend writing code to access that data.  This approach was more reasonable when sources of data were fewer and there was time to bring interfaces into compliance before the data was published to a wider audience.  This way of thinking, unfortunately, persisted into the fast-paced, mission-focused explosion of data created by the post-9/11 wars in the Middle East, OEF and OIF.  Almost overnight, timeliness became far more important than standards compliance, and rightly so; lives were on the line in real time.  Some national organizations stuck to the old ways at the cost of efficiency, and some organizations adapted well to the new way of thinking.  Some other national organizations, due partly to internal opposing views, found their way to the middle ground.  Names are withheld to protect the guilty.  In the meantime, important data continued to be generated, stored, and shared by individuals and organizations both inside and outside of the intended audiences of the standards.  It is my intent to provide a thoughtful discussion of the more important aspects of data sharing and to offer useful information for engineering decision makers who are planning a data sharing architecture, building one, or caring for and feeding one.

When it was expected that data standards would be adhered to and kept stable, a data ingestion approach known as Extract, Transform, and Load (ETL) was widely adopted.  ETL required knowledge of data formats before data could be brought into the system and made accessible.  Staging areas required awareness of the data in order to maintain its fidelity.  This created a problem when data formats changed (as they always do) or data was published that differed from the standard.  Something as simple as changing an integer value to a decimal (float) could throw a wrench into data sharing, causing days' worth of lost data until the problem was identified and fixed.  Try to address this for hundreds of different data sources and it's easy to see how O&M costs for data sharing architectures can skyrocket and quality can greatly suffer.
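
As a concrete (and entirely hypothetical) illustration of how brittle this is, here is a minimal Python sketch of a strictly typed staging step; the field names and schema are made up for the example:

# Hypothetical rigid ETL staging step: the agreed standard says "height"
# is an integer, so a record published with extra precision is rejected
# outright and data is lost until someone notices and fixes the ingest.
SCHEMA = {"id": int, "height": int}

def stage(record: dict) -> dict:
    for field, expected in SCHEMA.items():
        if not isinstance(record[field], expected):
            raise TypeError(
                f"{field}: expected {expected.__name__}, got {type(record[field]).__name__}"
            )
    return record

stage({"id": 1, "height": 180})    # conforms to the standard, accepted
stage({"id": 2, "height": 180.5})  # raises TypeError: the publisher "broke" the standard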

An ETL cycle typically has the following steps:
1. Initiation (establish requirement, need to know, and memorandums of understanding)
2. Evaluate and Reference
3. Extract (from sources in native format)
4. Validate
5. Transform (clean, apply mission logic, check for data integrity, create aggregates or disaggregates, normalize or denormalize)
6. Stage (in typed-value tabular staging tables)
7. Quality Control
8. Load (to target systems)
9. Archive (staged data, to ensure provenance and for a quality control baseline)
10. Clean up

Rigid standardization of data publication not only increases cost, it reduces the quality of data, prevents it from being shared by raising overhead costs, creates barriers to entry for promising new data collection, and is antithetical to the core tenets of "big data".  Big data architectures have been designed from the ground up to deal with poorly formatted data from heterogeneous systems.  There were two main enablers for this.  The first enabler was the mountain-moving power that distributed computing provided.  Moving computation to the data stores meant that we could spend more processor cycles evaluating each record and still keep systems scalable.  The second enabler was how widely accepted generic untyped data storage formats became.  I know, one moment I'm talking about how standards can be bad and the next I'm extolling the virtues of standardization; bear with me.  Some standardization is good.  XML and JSON provided a way to store data that could be sharded, was not strongly typed, and was completely extensible.  By sharded, I mean that rather than requiring all records to have the same number of fields of a certain data type, we could publish records with only the fields that were applicable and in whatever data type made sense.  In this manner we can generically store data without knowing much about it.  We can figure out how to use it later, and we have the processor cycles to do that in a scalable manner thanks to distributed processing.  Have floats or integers?  Not a problem!  Put whatever in the "height" attribute and we can figure it out later.  "Store now, process later" is how you deal with big data, and it lets us sidestep some of the nasty formatting surprises always lurking in large sets of data.  There has been no widely accepted term for this "store first, then use" method yet.  Some papers and articles have referred to it as ELT (Extract, Load, Transform); others have described it as a SMaQ method (Store, Map, and Query).  Regardless of what it's called, the concepts remain the same.
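
As a minimal sketch of the idea (the records and field names are invented for illustration), storing heterogeneous records exactly as published might look like this in Python:

# "Store now": keep each record exactly as it was published, with whatever
# fields and types it arrived with; nothing is validated or rejected at ingest.
import json

published = [
    '{"id": 1, "height": 180}',
    '{"id": 2, "height": 180.5, "source": "survey"}',
    '{"id": 3, "height": "six feet"}',
]

stored = [json.loads(line) for line in published]   # ingest is just persistence
print(stored)

In a real architecture the list would be a distributed file system or key-value store, but the idea is the same: the shape of each record is whatever the publisher sent.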

An ELT or SMaQ cycle typically has the following steps:
1. Initiation (establish requirement, need to know, and memorandums of understanding)
2. Extract (from sources in native format)
3. Store persistently (on a distributed file system in a denormalized fashion)
4. Transform (index, apply logic, check data integrity, create aggregates or disaggregates)
5. Repeat step 4 ad infinitum as you find new ways to get value from the data.
6. Quality Control

I've listed four fewer steps with this new way of doing things.  Let's talk about why some of the steps are absent.  "Evaluate and Reference" is not needed because we don't need to configure tabular relational data stores to accept strongly typed values.  We don't care.  Because we get rid of that configuration overhead, there's no need to conduct validation on the data.  Poorly formatted data from another system won't break us.  If they accidentally publish text where a number should be, we're fine: our architecture will still store it in whatever form it was published, we'll find it during our later checks on data integrity, and then we'll decide on a course of action.  Having the system break or ignore poorly formatted data is unacceptable.  Oftentimes poorly formatted data turns out to be an update to the previous publishing standard that provides more or better information.  A simple example would be changing an integer to a decimal to increase the fidelity of a measurement.  If they start publishing decimal values that are more accurate, we want those!  Since we go right to persistent storage, we can skip the staging requirement.  Our persistent storage is distributed, and as a consequence we gain the level of fault tolerance built into most distributed systems.  This, combined with our ability to crunch massive amounts of data, means that archiving gets dropped as a needed step as well.  Since we avoid having to validate and stage, there is no need for a separate cleanup or garbage collection step.  I'm sure that in certain cases there are good reasons to add or remove steps, but in general the "store now" and "figure it out later" way of doing things reduces the amount of up-front work for sharing data and allows us to focus on the most important work: extracting value from the data and applying it to our mission.
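
Continuing the hypothetical example from above (field names invented), deferring interpretation to a later transform pass might look like this:

# "Figure it out later": interpret "height" when we actually need it,
# record by record, instead of rejecting odd records at ingest time.
stored = [
    {"id": 1, "height": 180},
    {"id": 2, "height": 180.5},
    {"id": 3, "height": "six feet"},
]

def height_in_cm(record):
    try:
        return float(record.get("height"))   # accepts both the int and the float
    except (TypeError, ValueError):
        return None                          # flag for a later data-integrity review

print([height_in_cm(r) for r in stored])     # [180.0, 180.5, None]

Nothing broke when the publisher changed an integer to a decimal; the more precise values simply flow through, and the genuinely odd record is flagged rather than silently dropped.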

Back to the DoD.  The Department of Defense Discovery Metadata Specification (DDMS) was created in support of the DoD Net-Centric Data Strategy.  The first standard was released in 2003, with multiple changes per year ever since (and everybody pretended standards would be stable).  It has lofty goals and specifies the attributes (fields) that should be used to describe any data or service that is made known to the DoD Enterprise.  I've heard rumors of it working great, but I have yet to experience a useful and working implementation of it when organizations' claims of adoption or increased efficiency are put to the test.  The inability of a strict metadata standard to fix everything is not a statement about the quality of the standard or the competence of the authors.  The metadata approach itself is intrinsically fraught with difficult and sometimes insurmountable problems.  The requirement to standardize metadata amplifies those problems and creates new ones.  Although the frequent changes to the standard have certainly caused problems as time is wasted "catching up" in system implementations, locking the standard or working very hard to fix problems by modifying the standard yet again is not the right answer.  Any approach to comprehensively standardize metadata across heterogeneous systems can't work.  This statement alone can cause heart palpitations and weeping when said in front of the wrong people, so be careful when repeating these ideas.  It's common sense that "standardizing" is a good thing; to go against that wisdom you need to approach its disciples with patience and build your arguments for common-sense engineering over time.  Two years prior to the DDMS standard being published, Cory Doctorow popularized the term Metacrap with his essay "Metacrap: Putting the torch to seven straw-men of the meta-utopia" (http://www.well.com/~doctorow/metacrap.htm).  What the article lacks in tact it makes up for in eloquence.  The essay discusses the following obstacles to reliable metadata.  I've summarized them here and provided some notes in the context of the US Department of Defense and Intelligence Community.

People are lazy: Populating metadata isn't their core mission, so they leave it blank.
People are stupid: People will still misspell classification markings even when there are compelling reasons to get it right.
Mission Impossible, know thyself:  Ever watch a program fill out a survey about itself?  It’s always glowingly positive.  Weird.  Think people are going to be accurate about the quality of their own data?  Think again.
Schemas are not neutral: People have dedicated entire careers to specifying taxonomies, ontologies, and schemas for various aspects of intelligence and warfare.  When they retire, somebody else starts from scratch with their own idea of how it should be done with their own biases derived from their perfectly valid experiences in a different part of the industry.
Metrics influence results: Moving decimal points around or changing units of notation destroys any hope of automating queries across data sets without manual intervention to correct these mistakes.
There's more than one way to describe something: Reasonable people can disagree forever on how to describe something. Was this document the result of a questioning, an interview, or an interrogation?
People lie: Or exaggerate their claims of accuracy or success.  This happens mostly through unintentional bias, but sometimes intentionally.
Data may become irrelevant in time: The language and the things important to analysis evolve as the approaches to the problems change.
Data may not be updated with new insights: Modifying shared data records means that the original information is lost.  Oftentimes pushing updates to the authoritative or original source is not possible.

These obstacles compound with the baseline overhead of learning and addressing the standard before publishing or sharing data, creating insurmountable requirements for sharing data in accordance with the standard.  This is, of course, a serious problem for smaller programs; but even many larger programs have not been able to get close to functioning implementations.
It's easily understood that our unique programs produce data that differs in quantity, format, quality, and comprehensiveness.  Perhaps less intuitively obvious is that we also all use data differently.  This means that somebody else's standard really just exposes an implementation of that data that nobody else cares about.  A tactical user populating a situational awareness tool has a drastically different use case than an analyst constructing a network diagram.  Each of these uses requires the data to be stored and indexed in different ways, not just for performance but to make access to it computationally feasible at all.

Does all of this mean that standardization and metadata should be abandoned?  Of course not!  But a cautious level of prudence needs to be exercised.  When planning your next information sharing architecture, try loosening the standardization reins a little up front.  It will reduce wasted O&M time, increase the provenance and quality of your data, and free you up for success later as innovative folks find new ways to derive value from the data in its natural state.

Monday, November 12, 2012

Stories of survival in tough circumstances


Six years ago I read the book "Life of Pi" on the recommendation of my wife's cousin.  It turned out to be a pretty good read.  With the movie coming out on the 21st, I'd recommend picking it up.  I can't say that I'm eager to see the movie, but it's always good to read the book before too many of the advertisements hit or people start talking about it.
http://en.wikipedia.org/wiki/Life_of_Pi

The book got me on a "stranded at sea" or "survival in extreme circumstances" reading kick.  If you're into that kind of thing here's a list of titles I'd recommend.

"Endurance"
http://en.wikipedia.org/wiki/Endurance:_Shackleton%27s_Incredible_Voyage
If you haven't heard the story of Shackleton and his crew's attempt to cross Antarctica, this book will keep you on edge the whole way through.  There is another book on the topic called "South" that is not as well written (in my opinion) as "Endurance" is.  This is an epic story that makes the little daily trials of life seem silly in comparison.
http://en.wikipedia.org/wiki/Ernest_Shackleton

"In the Heart of the Sea"
http://en.wikipedia.org/wiki/In_the_Heart_of_the_Sea:_The_Tragedy_of_the_Whaleship_Essex
I read this book just after I read "Endurance" by Alfred Lansing.  Shackleton and his crew were so competent that it made me feel sorry for this crew, which seemed to have poor leadership.  The book has good reviews and it deserves them.  The discussions of their plight while adrift were engrossing.

"Moby Dick"
The book isn't as much about the survival topic as the others.  The details of whale hunting were fun to read.  It's a literary classic, and even if you had to read it in school I'd still pick it up again if you have the time.  Expect to take a few weeks to get through it, and get the digital version because the paper version is heavy.  The sea adventure ends on a chapter written more as poetry than action, which I thought was disappointing.

"Into the Wild"
http://en.wikipedia.org/wiki/Into_the_Wild_%28book%29
I originally read this back in 2003, but I re-read it during this kick.  There was a movie which came out in 2007.  The book is still worth reading even if you did see the movie because of how well it documents other "lost in the wild" tales while telling McCandless's story. 

Just last year I read "Unbroken", which I'd put into this group of books as well.
http://en.wikipedia.org/wiki/Unbroken:_A_World_War_II_Story_of_Survival,_Resilience,_and_Redemption
Another amazing WW2 story.  Follow Zamperini as he competes in the Olympics in Germany before WW2 breaks out and then undergoes flight training before deploying to the South Pacific.  The real meat of the book is when his plane malfunctions and goes down, and he ends up as a Japanese POW for the remainder of the war.

Speaking of WW2 survival books... "With the Old Breed"
http://en.wikipedia.org/wiki/With_the_Old_Breed
I'm glad I read this book way after I got done with my combat tours.  I thought we had it rough, but WW2 Marines in the Pacific had it way tougher.  War was hell in the truest sense of the word in this book.  Follow Eugene Sledge, who turned the scribbles he wrote in the margins of his Bible during the Pacific campaign into an amazing book.  I later found out that "The Pacific" miniseries used this as one of its reference memoirs, so I watched it.  It wasn't nearly as emotionally stirring as the book.



Thursday, November 8, 2012

The sun revolves around the earth

Sometimes it's useful to remember the mistakes of the past in the context of today, and exactly how tough it is to learn and know something.  Those who thought the sun revolved around the earth were just as smart as we are today.  It was only after careful study of the nuanced motion of the lights dotting the nighttime sky that we were able to work out that some of them were planets and that some exhibited apparent retrograde motion.  (The image to the right is an illustration of apparent retrograde motion.)  Then, having made those observations, we had to create repeatable descriptions of the experiments to make the case to others and spread the knowledge.

The power of data analysis brought to us by modern computing technology opens up a whole new class of problems to new levels of scrutiny.  (The image to the left is an illustration of the four color theorem, proved by computer in 1976.)  Interestingly, there are still philosophical objections to mathematical proofs that depend partially on exhaustive computer-powered computation.  The main argument is that humans can't verify the result logically, so we can't trust it as proven.  It requires replacing logical deduction with faith in the ability of the computer to operate accurately.  I think the concept of black-box advancement of science is interesting.  If you approach it with a willingness to engage in the process, the risk of computer error in computer-assisted proofs can be mitigated by using heterogeneous systems to replicate the proof.  Variety in architectures and software is something we don't have a shortage of.

Distributed processing and the promise of cloud-based computation (buzzwords aside) bring even more capability and variety to computational proofs.  Being involved in this advancing field, I haven't seen anybody really engage with this yet.  We are barely scratching the surface of what's possible when making computers work together.  The next few years promise to be an exciting time.

Sunday, November 4, 2012

Ruins

This is a follow-up to my previous post where I mentioned the Orson Scott Card book "Pathfinder".  Last week, on October 31st, the next book in that series came out, called "Ruins".  I'm a few chapters into it and it's even better than the last one.  If you are a sci-fi fan, it's definitely worth picking up.

9 Nov 2012 Update: Finished the book.  Great read.  Annoyed at the ending.  There are going to be more books in the series.  Card is probably typing away at his keyboard on the next one as I'm writing this.

Wednesday, October 24, 2012

Time Travel: Many Worlds vs Causality

Time travel, as far as I can tell, will pretty much remain fiction.  But that's not to say it's not fun to conduct thought experiments into the "what-if" scenarios.  Rules of time travel in fiction have varied, but recently I had the fortune to read two great novels back to back that described things in their own clever way.

Neal Stephenson has an interesting talk about his book "Anathem" on YouTube.  The talk covers his involvement with the Long Now clock, which is designed to have a period of ten thousand years.  His book Anathem builds a story around a similar clock in a fictional universe, used to limit scholars' interactions with the outside world and each other for long periods of time.  I ended up buying and reading the book.  I'll avoid spoilers and just recommend it as a great read.  In it he ends up describing a Many Worlds interpretation of time travel.

At some point during the few weeks it took me to read it, I was drinking beer with a friend and ended up talking about how great Anathem was.  During our conversation he recommended an Orson Scott Card book: "Pathfinder".  A few days later he brought me his copy of the book and it sat on my shelf until I finished Anathem.  When I did finally get around to it, I was excited as the plot developed into a causal interpretation of time travel.  What an awesome transition of thought.  All the rules in Anathem were different in Pathfinder.

Reading these two books back to back ended up being interesting and useful.  It was neat to see two of my favorite authors try their hand at the notoriously difficult thematic concept of time travel.  If you have the chance to pick either one of these books up, I'd recommend the experience of reading the other one immediately afterward.

Wednesday, October 17, 2012

BCC

Is blind cc ethical? Ever?

The only time I ever use this button is when I want to send an email to a lot of people who may not want to share their email addresses with the others or future recipients of the chain.

If you want to "keep somebody in the loop," go to your outbox and forward the sent message with an additional note.  This method does not imply deceit, since you're not hiding anything.

Wednesday, October 10, 2012

Computer Timesharing

http://www.youtube.com/watch?v=Anxxe8SdX78

A video from 1961 discussing computer timesharing.  Too cool.

Monday, October 1, 2012

Living generously

Over the last few years my wife and I have met some amazing people through our professional networks and our church. A few of them were at our house recently. I wasn't there at the time but this story was relayed to me by my wife.

During the visit one of them ended up mentioning her cousin, who was having trouble affording formula for her baby. Immediately one of our friends got out some cash and handed it over. It wasn't a huge amount, but it was all they had with them.  The offer was sincere and was followed by an apology that they couldn't give more right then. Others, including us, plan to donate some clothes, toys, and other baby stuff. That made an impression on me. Not so much the giving of stuff to a stranger; we do that all the time dropping things off at Goodwill or elsewhere. What made an impression was the immediate bias for action and generosity.  I'm not certain that, sitting there listening to a story about a person in another state having hard times, I would have immediately reached into my pocket.  But... I think that's what we should do and how we should live.  The generosity was contagious and more giving followed.  Awesome people with big hearts.

When you hear of someone in need, don't hesitate, take action. Live generously.

Update 2013-12-06: Two years of blogging and over 100 posts mostly about technology and this is the most popular post to date. I think that says something good about the people reading my blog and maybe people in general.

3D Printing ...guns

A friend, mprk, sent me this link today: http://www.wired.com/dangerroom/2012/10/3d-gun-blocked/

The article describes the challenges of a group trying to print the first functional gun on a 3D printer. The roadblocks have ranged from the 3D printer company cancelling the lease on the group's printer to interviews by the ATF.  Unfortunately, or fortunately depending on your views, neither the printer company nor the ATF seems to have any awareness of the Streisand effect.

That aside, I think that additive manufacturing is one of the most significant things happening in our world right now. The open source hardware movement, which has focused on hobbyist electronics for now, demonstrates interesting precedents for what is going to happen. As Linux, Apache, and other massive open source projects have demonstrated, community collaboration on large engineering efforts is not only possible, it's powerful. What happens when we see that type of collaboration applied to engineering efforts that result in physical objects? We still can't comprehend the long-term implications for economies in an age of abundant and mostly free material goods produced with no labor. We are on the edge of a true age of abundance. What happens when community projects organize the information required to print more complex objects? Cell phones, clothing (of course in the latest styles), motorcycles, cars, computers, televisions, and... yes, weapons. I think the ability to have insight into the implications of this remains rare in our society. We look at these events with minds from cultures rooted in the ideas of slower times, when both science and technology lacked their current strength and speed. I've cited it before on this blog and I'll cite it again: The Law of Accelerating Returns is something you must read and understand if you care to follow technological progress. A quote from Kurzweil's writing is applicable here: "it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress".

When Leonardo da Vinci was making his sketches of flying machines and attempting unsuccessfully to build them, he was able to point to an analog in nature as proof of the possibility of heavier-than-air flight: birds. As we enter this new age of additive manufacturing, we can point in even more directions. Additive manufacturing is how life is replicated. We ourselves were not mechanically reduced from a larger blob of cells like one of Michelangelo's marble statues, or injection molded like the plastic keyboard I'm typing this on. Just as oak trees replicate themselves from a packet of information and nano-machines, we are going to see buildings and vehicles created in similar fashion someday. When will the Wright Brothers of replication technology show up? Probably sooner than all of us expect.

Back to the whole gun thing. The idea is a neat one. It reminds me of Neal Stephenson's concept of a h.e.a.p. gun from one of my favorite books: Cryptonomicon. US Marines fighting in WW2, math, cryptography, entrepreneurship, and programming; a better book will never exist. But in general, guns have been pretty easy to make at home for as long as they've existed. http://en.wikipedia.org/wiki/Improvised_firearm I don't think guns are a bad thing to have generally available. It's estimated that in the US alone there are 90 guns for every 100 citizens. Strangely, the only areas subject to routine gun violence are the areas where legal ownership of firearms is banned. If you want to lose sleep at night: ANFO scares me; guns, not so much. The real story here is how asinine the regulation is. I'm loving watching the same arguments we had about bits a few years ago apply to real-world objects. Remember cryptography export laws and the discussions about making computer viruses illegal? They all seem silly now. Objects can now be just a stream of bits. You can obfuscate them perfectly through cryptography, archive them essentially forever, and they are as tough to destroy as any idea ever was. As for how that whole restricting-the-export-of-cryptography thing went, here's a full implementation of RSA encryption in three lines of perl:


It became trendy to include it in email signatures, on t-shirts, and on bumper stickers. It's like closing the barn doors after all the animals are out. Except these animals travel at the speed of electromagnetic radiation and are infinitely duplicated using almost no energy for duplication and storage. In the near future we'll see the Streisand effect's influence on physical objects. It will make for interesting times.

This post is already too long. If you're really interested in the topic, check out "Engines of Creation". It was published in 1986 but remains the best book I've ever read on the topic.

Wednesday, September 26, 2012

Leadership by humor

After crying, the first emotion that my newborn son predictably expressed was laughter. It's a pure emotion and possibly the deepest one.

Want to inspire people and be a better leader?
Be funny.

I think that we laugh because we know we don't have to be afraid.  It takes the stress out of situations, it resolves confrontations, and it helps build and strengthen teams, which improves communication (and project performance).  Oh... and it shortens long days and makes work a fun place to be.

Here is my advice for using humor in leadership:

  • Be self-effacing, but never in a way that would give others doubt in your competence.  Humility does not imply incompetence.
  • Never make a joke at the expense of somebody else.  Do the opposite.
  • Situational humor almost always works.  It's ok to laugh at having to work on weekends.
  • When making a joke about a bad situation, never use humor to complain.  As a matter of fact, never complain.  Ever.  Handle everything with grace and strength.
  • Be clever, but not in a way that makes other people feel dumb.
  • It's ok to be a little controversial.  Do so with integrity and never say anything so controversial that you might have to back down from it later (e.g., if it shows up in an article or on the news).
  • If just saying something to be funny or brighten somebody's day, avoid using the reply-all button.  It doesn't work if it's not personal.
  • If circumstances have somebody else in the leadership position and you as a follower, don't use humor to outshine them.
  • Avoid sarcasm.
  • If the joke falls flat, ignore it and drive on.  You messed up the timing or misread their mood, it's nobody's fault.  Keep the conversation and dialog moving.  
  • If you don't think you're funny, no problem.  Experiment and learn from the results of your experiments.  We all do it.  The most experienced and excellent leaders I know can be hilarious to be around and listen to.  This is the result of a lifetime of iterative improvement.  Start now.

Wednesday, September 19, 2012

SYSK: Simpson's Paradox

If you are conducting analysis on a set of data, you may get results that conflict with each other depending on how the data is grouped: a trend that holds in every subgroup can reverse when the subgroups are combined.  This has big implications for data-driven decision making.  Anyone trying to drive decisions by analyzing data needs to understand that things like this can happen and know how to react when they do.
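
A minimal worked example in Python, using the oft-cited kidney stone treatment numbers:

# Treatment A has the higher success rate in each subgroup,
# yet the lower success rate when the subgroups are pooled.
small = {"A": (81, 87),   "B": (234, 270)}   # (successes, patients)
large = {"A": (192, 263), "B": (55, 80)}

def pct(successes, patients):
    return f"{successes / patients:.0%}"

for label, grp in (("small stones", small), ("large stones", large)):
    print(label, "A:", pct(*grp["A"]), "B:", pct(*grp["B"]))  # A wins both: 93% vs 87%, 73% vs 69%

for t in ("A", "B"):
    s = small[t][0] + large[t][0]
    n = small[t][1] + large[t][1]
    print("overall", t, pct(s, n))                            # but overall: A 78%, B 83%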

http://en.wikipedia.org/wiki/Simpson's_paradox

Friday, September 14, 2012

Humble management


There’s a form of knowledge withholding perpetrated by the professional management cadre.  It’s a good thing to have a sharp mind, but if you are in a position of authority, it’s extremely impolite to poke people with it.  It’s a pet peeve of mine when I see folks attempt to withhold knowledge from others or bully them by refusing to speak plainly.  I’m not referring to the professional engaging in a dialog with another specialist, a form of mental sparring that serves to elicit an understanding of each other’s capabilities.  I’m referring to the instances I’ve seen where a supervisor has intentionally withheld information from subordinates in order to derive power from it.

My own experience with this occurred several years after transitioning out of the military.  I was asked to take over as program manager on a large contract (88 employees).  Being young, a technologist, and with my basis of leadership and management coming from the military, I admittedly did not know as much about business as I should have.  Some people I respected had confidence in me.  To my surprise, several people immediately attempted to intimidate me and asked me to back down and not take the job.  Ultimately it was unsuccessful because my self-confidence had been forged in much tougher circumstances.  I took it more as a lesson in human behavior than as something to waste time worrying about.  The overt aggression came primarily from those whose entire career was "managing".  They were also the same folks who, I feel, derived their self-validation from having authority over other people, and they routinely used terms like "work for me".  That term has since become a red flag in my mind.  In my opinion, nobody works for anybody.  People contribute to your project because you are compensating them.  In some circumstances, such as volunteer work or compelling projects, people show up because they enjoy doing it (key word "enjoy", as in self-serving).  With very few exceptions, your only true authority is derived from your employees' desire to serve themselves and their ambitions.  The other type of authority is derived from the threat of violence, such as when you have to obey the police (or they will put you in jail against your will).  I might write more on that in a future post; I'll try to keep this one to project management.

I feel that anyone who seeks out management roles as a way to make themselves feel exceptional in any way should probably not be entrusted with the responsibilities that come with it.  A perfectly executed role of a manager is to recognize the environmental factors that are influencing the project and take actions on the decisions that are practically being made for you.  An environmental factor might be employees leaving, so we increase their compensation or provide opportunities that reduce the risk.  If the risk is running out of money, the opposite action might need to happen.  It wasn’t some virtue of the manager that inspired them to do that.  They were just able to see what the right thing to do was and took action on it.  The right thing to do is always to empower those around you and maintain a level of transparency with your actions.  When making decisions regarding the outcome of a project we are the humble stewards of resources: human, financial, and physical.

In contrast to management, leadership is the empowering of the spirits and confidence of those around you.  Study after study has indicated that employee ownership and motivation are the most influential factors in performance, by a large margin.  If you're not a servant leader who highlights employee accomplishments and motivates them, you are negatively affecting the outcome of the projects you have responsibility for.  I even hesitate to engage in these dialogs because it is a self-correcting behavior.  With very rare exception, those managers with self-serving egos will spend their entire careers being only moderately successful.  The biblical verse Matthew 5:5 is the one that says something along the lines of "the meek shall inherit the earth".  This, I believe unfortunately, has been the translation of the verse that has been popularized.  Other translations word it like this: "God blesses those who are humble, for they will inherit the whole earth."  Meekness implies things that humility does not.  A person can, and should, be both humble and energetic; aggressively pursue good endeavors, but receive and hold the accomplishments with humility and grace.  If you're not inclined to dive into the Bible right now, here are some other quotes from important people throughout history.  Humble leaders are more successful.  h/t to Business Insider for this compilation of quotes (http://articles.businessinsider.com/2011-04-01/strategy/30054050_1_humility-leader-ego)

Ancient China:  “The great leader speaks little.  He works without self-interest and leaves no trace.  When all is finished, the people say: ‘we did it ourselves.’”  Lao-Tzu

Ancient Greece:  The Ancient Greeks had a word for the loss of humility and the triumph of the ego: hubris.  Hubris is the outrageous arrogance where a person in power overestimates his or her own competence and capabilities, gradually loses touch with reality, and (in Greek tragedies) succumbs to a tragic fall.

Ancient Rome:  “To conquer one’s spirit, abandon anger, and be modest in victory… whoever can do this I compare not to the greatest of men but to a god.”  Cicero

Mongol World around 1200:  “The key to leadership is self-control: primarily, the mastery of pride, which is more difficult to subdue than a wild lion.”  Genghis Khan 

Louis XIV France: “Louis’s greatest gift was to maintain his quality of common sense in the midst of constant flattery.  Throughout, the king demanded respect and obedience, not flattery.”  Louis XIV biographer, Olivier Bernier

18th Century Austria: To keep herself humble and ensure that she did what was right and best for the Habsburg Empire, the Archduchess Maria Theresa employed one advisor as her official critic.  It was the formal job of Emmanuel Count Sylva-Tarouca to tell Maria Theresa all of her mistakes.

20th Century America: “To possess self-confidence and humility at the same time is called maturity.”  Jack Welch

What comes before the fall?  Pride.  The line between humility and confidence is a fine one to walk.  When you're certain you're humble is the only time you can be certain you're not.  It's my aspiration to always empower those around me and to always act with humility and transparency.

Wednesday, September 12, 2012

Writing abstract code is avoidance behavior

Don't start off a project by spending 4 months writing  classes, libraries, or other abstract stuff.  Doing so is an insidious form of procrastination.

Write the interfaces first.  Human or otherwise.  Mess with them.  Get trusted opinions.  Some of the time you'll discover that your approach was wrong.  Better to discover that sooner than later.

Optimize and abstract later.  Premature optimization is the root of all evil.

Wednesday, September 5, 2012

Adapting to new stuff

Having an open mind to unfamiliar programming languages and technologies is a good thing.  An open mind is especially useful in a field that changes as quickly and drastically as software.  This seems so obvious to state that it's almost cliché.  But adapting to new tools means more than bringing your old habits with you.  If all you've ever worked with is a hammer and somebody hands you a screwdriver, reserve judgment and do a little research on screws before complaining about how your new tool doesn't pound in nails very well.


Wednesday, August 29, 2012

Historical Exponential View vs The Heart of Man


This essay has probably influenced my thinking about technology more than any other: http://www.kurzweilai.net/the-law-of-accelerating-returns  Ray Kurzweil starts it off by promising that you will get $40 trillion just by reading it.  In it he compares the intuitive linear view of how we expect technology to progress with the historical exponential trend that it has actually demonstrated.  The reason we intuitively think of technological progress as linear is that exponential trends appear to be linear when viewed (and experienced) for a brief period of time.  In general, I think being optimistic about technological progress is the right attitude.  But ten years later, I've embraced a more cautious optimism regarding the concept of the singularity.  Kurzweil goes way off the charts near the end of the essay, following his singularity to a logical conclusion of an ever-expanding existence composed merely of self-organizing knowledge.  Intuitively I've always felt that to be a little off, but since he posits very well early in the essay that intuition can sometimes be wrong, I kind of just shrugged and decided to take the latter part of the essay on authority.  It's not like it likely matters to us anyway: if it happens we can adjust, and if it doesn't… well, then it doesn't matter.

Lately I've been exploring a more logical and objective counterpoint to the "knowledge blob assimilation theory" (my name for it… not his).  The limits of technological advancement might be similar in concept to the limits of functional abstraction in programming.  Think, for example, about writing software libraries.  As we write functionality we can reference, as long as we can understand how we implemented it and remember how to reference it, it empowers us to write new software much faster.  There's a point of diminishing returns as we lose familiarity with the libraries we have built or work with, and it takes some time to catch up again and be as productive.  But in general abstraction makes things happen faster.  The technological singularity might end up giving us what we would now consider extreme capabilities.  For example: being able to tell our car where to take us and then relaxing, having perfect digital memories thanks to implants, and being fully immersed in virtual worlds whenever we choose; but the limits of abstraction kick in at some point.  And the heart of man never changes.  Tying technology abstractions to the intents of the heart of man (who we are and what we want to be or do) will be the upper limit of the advancement.  A different way to articulate the concept is to say that what we will be able to do post-singularity will be limited by our imaginations.  But since our imaginations are limited and imperfect (even when augmented), there will be a limit.  Mike Minter, an intellectual whom I respect highly and routinely get to hear speak, made plain an example of technological advancement vs the heart of man in this short video.  The title of the three-part series (each part is two minutes) is "The Ultimate Contradiction".  It's worth watching.  http://vimeo.com/21296651  The first part is the story I'm referring to.  The second part describes why this happens: "The insatiable desire for a man to be satisfied will always be thwarted by his inability to be satisfied."  Regardless of your perspective and opinions, it's certainly an interesting time to be alive.

Monday, August 27, 2012

Fundamental truths of building things

Sometimes when you are building something it feels like you're uncovering structures that always existed.  Things that are fundamental truths, that were there before you and will be there after for others to enjoy.  Like math: prime numbers, Fibonacci, the golden ratio.  I think this is how the structure of a bit of code can look beautiful.  How the structural simplicity of an arch, eliminating tensile stresses and resolving them into compression, can be pleasing to the eye.  I find it in the bending of a bow and the flight of an arrow.  They feel solid, atomic (non-reducible), and elegant.  When you get to the point where things can't be any simpler, that's when you've made it all the way.  Unfortunately, we don't always make it that far in the real world; there are trade-offs.  But it stands as an admirable goal and hints toward the existence of a higher order of things beyond just us.

One of my favorite paintings, Clairvoyance, is a self-portrait of the artist René Magritte sitting at an easel, looking at an egg while painting a picture of a bird with its wings extended. The title translates to "Perspicacity", which means acuteness of perception, discernment, or understanding; keen vision. The meaning of the painting is indicated by the title. The egg will become a bird. The artist sees what it will become. In the eye of the artist, he already sees the bird. Through understanding of the laws of the universe, the artist knows that the egg will become a bird; that it is already a bird.  There's something to be said for having that clairvoyance about what things can be.  In the case of building things, the egg needs our help to hatch.  So we roll up our sleeves to do battle in the war of art and overcome the resistance holding us back.

Monday, August 13, 2012

Automated vulnerability scanning


A hefty dose of caution and expectations management is in order when it comes to static code analysis and vulnerability scanning tools.

- Scan findings often contain a few thousand items.  The vast majority of them don't matter.
- Scanning tools often do not take into account the attack surface when they scan. This means that vulnerabilities they find should not immediately be described as ones that pose a risk.
- The code libraries commonly in use are open source, and these libraries contain lots of code that never makes it into the final software. This unused code might be included in the scan, and it would be a waste of time to go hacking apart a library just to make the scan results look better.

These reasons and more make the results from static code analysis poor metrics.  It's tempting to use them in this manner.  The number of discovered vulnerabilities is such an easy number to find, and we all expect it to be reduced quickly as we fix things.  But unlike other metrics, static code analysis results, in the real world, never arrive at a state we can call complete, or any status other than "better".  This is troublesome because metrics, almost by definition, are presented to audiences that are not deep subject matter experts in the nuances of their creation.  That often means that the decisions that are made are not driven by an objective review of the results. Instead, non-expert audiences are swayed in their perceptions, positive or negative, by how well the expert presents their arguments.  An inarticulate developer thrown before leadership holding a report claiming 1,000 false positives on a scan is going to fare poorly.  I've seen this scenario play out multiple times and it never ends well.  The loss of confidence that can result from confusing metrics is often countered by adding more developers or oversight to the project.  In the worst of cases it can lead to a shift of responsibilities, effectively destroying the hard-fought knowledge attained by that developer during the course of their work.  In 1975 Fred Brooks coined Brooks's Law, which states that "adding manpower to a late software project makes it later".  I believe it holds true even for projects that weren't late to begin with.  Using bad metrics to report performance can ruin that performance.

With all of the trouble that comes with improper use of their results, scanning tools can still be a powerful part of your arsenal for writing good software.  I believe that the highest level those raw scan numbers should go is the person who directly manages the performance of the developers.  Decisions at that level are mostly about relative personnel performance and developer performance management.  Most importantly, the person in this role often has deep subject matter understanding.  This is also the level where implementation of design takes place. If you don't trust your lead developer to be able to find and mitigate code vulnerabilities, find a new one. Nobody up the management chain from them will have the technical skill to evaluate those decisions, although few will hesitate to try if called on.  When developing status reports for levels higher in the organization than this, the scan results need to be culled down to actionable items.  Typically, they get dropped as bug fixes into a sprint or some other task management system, whatever yours may be.  Those sprint items or the backlog can provide excellent metrics, provided they have an apparent and predictable stability.  Managing requirements coherently is the topic for another post, so I'll leave it at that here.

Software code and vulnerability scanning is a great tool, but it's a two-edged sword... with a few other edges hidden where you don't expect them.

Tuesday, August 7, 2012

Raising kids and the computer scientist as tool smith

In 1996 Fred Brooks put together this great paper. http://www.cs.unc.edu/~brooks/Toolsmith-CACM.pdf
I was reviewing it recently because it had been mentioned in another book I'm reading, Coders at Work.  The main point of the paper is that computer science has been misnamed.  This is funny because Fred Brooks was partly responsible for having named it.  Given the context the paper is written in, I agree. The larger concepts in the paper are still applicable today, as much of his writing is.  Most individuals working with computers are not scientists building things in order to study; we are engineers studying in order to build.  The true metric of our success is how well the tools we create enable our users.  We are more like toolsmiths than scientists, but the semantic discussion need not settle starkly on one description or the other.  It only serves to remind us of where our responsibilities are.

Toward the end of the paper he makes a departure from computer science and into discussing the virtues of how we spend our time in general.  The main question being how much time we spend creating and producing versus wasting time.  This being 1996, the TV serves as the time wasting villain.  Today we have even more effective ways to destroy productivity.  The following paragraphs resonated with me.

"TV fails the beauty test. Although the cinematography is frequently very skillful, the overall effect is
ugliness — bleak slumscapes, ugly violence, and endless car chases.
TV is only occasionally good. The voracious appetite for material means mediocre dramas. The characters are rarely people we should like to have as friends, quite unlike, for example, the people in Neville Shute’s novels. Only rarely would we want our children to take TV characters as their role models.
On a late-life occasion honoring the inventor of the vacuum tube, Lee DeForest, he remarked on how the tube had made radio possible, and he sadly commented, “This is DeForest’s prime evil.” Today he would have a new candidate.  “What did people do before TV?” How did we recreate ourselves?"

Within a few years of 1996 the world wide web had taken over, and there was a new candidate for DeForest's scorn.  The voracious appetite for material continues to drive the creation of ugliness.  Beauty still seems to be the exception online, as it was (and is) on TV.  But that wasn't what gave me the most pause.  It was his point about how we raise our children.  As I look down and watch my ten-month-old crawl around the living room, I see alongside his wandering path the various electronic windows to the world now available: laptops, tablets, TVs, and smartphones.  In his lifetime we will become vastly more connected with new devices: glasses, contact lenses, perhaps even implants.  He will have an appetite for and consume much of that ugly, time-wasting material, as will I.  Material will be set before us and we will seek it out even when we know it's wrong.  The only question left is whether we will have the courage to police ourselves and attempt to seek out only those things that enrich us.  Will we have the awareness to hold things of beauty in higher regard than the rest of it?

Reflecting on the movies I've watched, the video games I play, the blogs I read, and the content streaming into my social media networks, Fred was right.  Very rarely is there a character (real or imagined) in those electronic windows that I would want my child to hold as a role model.  Rarely are they people we should even like to have as friends.  I know I have the courage, and thankfully I still have a few years to figure out the details of how to raise my children right.  In a hyper-connected world, we won't have the option to shelter them.  But we can certainly prepare them.  I need to work hard at managing the material that I let myself consume and also what I let my children watch.  I hope other parents do the same.

Friday, August 3, 2012

Foreign perceptions


Foreign perceptions of things are often drastically different than what we’d expect from listening to our own news.  An Arabic interpreter that once worked for me tipped me off to this little nugget of fun:  If you go to the Wikipedia Arabic version of the Marshall plan (http://ar.wikipedia.org/wiki/%D9%85%D8%B4%D8%B1%D9%88%D8%B9_%D9%85%D8%A7%D8%B1%D8%B4%D8%A7%D9%84) and run it through Google Translate, you’ll find out that the Marshall plan was actually a United States plot to take over Europe.  “Washington succeeds in achieving its control through investments and the purchase of existing projects in these countries, in exchange for a promise of payment in dollars, and for giving creditors certificates to those promises”.

Funny.

Thursday, August 2, 2012

Use use

"Utilize" is overused (not over utilized) because it is a long word.  Use projects the same meaning for fewer letters.  Don't utilize utilize. Use use.

SYSK: The CNN Effect

By focusing instantaneous and ongoing media coverage on a particular conflict, international incident, or diplomatic initiative, the news cycle effectively demands political attention, as governing politicians attempt to demonstrate that they are "on top of" current issues. 

http://en.wikipedia.org/wiki/CNN_effect

Wednesday, August 1, 2012

Innovation Starvation


Neal Stephenson penned a great article after giving an amazing presentation about innovation.

Presentation:
http://www.youtube.com/watch?v=TE0n_5qPmRM

Article:
http://www.worldpolicy.org/journal/fall2011/innovation-starvation


The part of the article that resonated strongly enough with me to make me want to share it:
"Most people who work in corporations or academia have witnessed something like the following: A number of engineers are sitting together in a room, bouncing ideas off each other. Out of the discussion emerges a new concept that seems promising. Then some laptop-wielding person in the corner, having performed a quick Google search, announces that this “new” idea is, in fact, an old one—or at least vaguely similar—and has already been tried. Either it failed, or it succeeded. If it failed, then no manager who wants to keep his or her job will approve spending money trying to revive it. If it succeeded, then it’s patented and entry to the market is presumed to be unattainable, since the first people who thought of it will have “first-mover advantage” and will have created “barriers to entry.” The number of seemingly promising ideas that have been crushed in this way must number in the millions.

What if that person in the corner hadn’t been able to do a Google search? It might have required weeks of library research to uncover evidence that the idea wasn’t entirely new—and after a long and toilsome slog through many books, tracking down many references, some relevant, some not. When the precedent was finally unearthed, it might not have seemed like such a direct precedent after all. There might be reasons why it would be worth taking a second crack at the idea, perhaps hybridizing it with innovations from other fields. Hence the virtues of Galapagan isolation.

The counterpart to Galapagan isolation is the struggle for survival on a large continent, where firmly established ecosystems tend to blur and swamp new adaptations. Jaron Lanier, a computer scientist, composer, visual artist, and author of the recent book You are Not a Gadget: A Manifesto, has some insights about the unintended consequences of the Internet—the informational equivalent of a large continent—on our ability to take risks. In the pre-net era, managers were forced to make decisions based on what they knew to be limited information. Today, by contrast, data flows to managers in real time from countless sources that could not even be imagined a couple of generations ago, and powerful computers process, organize, and display the data in ways that are as far beyond the hand-drawn graph-paper plots of my youth as modern video games are to tic-tac-toe. In a world where decision-makers are so close to being omniscient, it’s easy to see risk as a quaint artifact of a primitive and dangerous past.

The illusion of eliminating uncertainty from corporate decision-making is not merely a question of management style or personal preference. In the legal environment that has developed around publicly traded corporations, managers are strongly discouraged from shouldering any risks that they know about—or, in the opinion of some future jury, should have known about—even if they have a hunch that the gamble might pay off in the long run. There is no such thing as “long run” in industries driven by the next quarterly report. The possibility of some innovation making money is just that—a mere possibility that will not have time to materialize before the subpoenas from minority shareholder lawsuits begin to roll in.

Today’s belief in ineluctable certainty is the true innovation-killer of our age. In this environment, the best an audacious manager can do is to develop small improvements to existing systems—climbing the hill, as it were, toward a local maximum, trimming fat, eking out the occasional tiny innovation—like city planners painting bicycle lanes on the streets as a gesture toward solving our energy problems. Any strategy that involves crossing a valley—accepting short-term losses to reach a higher hill in the distance—will soon be brought to a halt by the demands of a system that celebrates short-term gains and tolerates stagnation, but condemns anything else as failure. In short, a world where big stuff can never get done."