Hillary 2016 as a Failed Agile Transformation – Part II

In Part I we reviewed two scenarios that consistently appear across failed agile transformations, and how they relate to Hillary Clinton’s 2016 Presidential Campaign. Let’s continue the conversation!

1.      Using lagging and/or “bad” metrics

Every initiative wants to know whether what it’s doing is working. To answer that question, teams rely on “metrics”: numerical measurements of a particular “thing.” The idea is that by boiling a complex idea or process down to a one-dimensional representation, we can accurately gauge progress.

When it comes to measuring progress and success in agile, we classify metrics into two types: leading and lagging. Leading indicators show progress toward a goal while the work is still in flight (think burndown charts), whereas lagging indicators are an “after the fact” measure of success or failure (number of defects, etc.). Both types can be powerful if used correctly; used incorrectly, they can be disastrous.
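To make the distinction concrete, here is a minimal sketch (all names and figures are invented for illustration): the burndown check can warn you mid-sprint while there is still time to adjust, whereas the defect count only arrives after the work has shipped.

```python
# Hypothetical sketch: a leading vs. a lagging indicator for one sprint.
# All figures are invented for illustration.

sprint_days = 10
committed_points = 40
# Leading indicator: story points still remaining at the end of each day.
# This is readable *during* the sprint, while there is time to adjust.
remaining_by_day = [40, 38, 36, 35, 33, 31, 30, 28, 27, 26]

for day, remaining in enumerate(remaining_by_day, start=1):
    ideal = committed_points * (1 - day / sprint_days)  # ideal burndown line
    status = "on track" if remaining <= ideal else "behind"
    print(f"Day {day:2}: {remaining} pts left (ideal {ideal:4.1f}) -> {status}")

# Lagging indicator: defects found after release. It can only confirm
# or deny success once the work has already shipped.
defects_after_release = 7
print(f"Post-release defects: {defects_after_release}")
```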

Robby Mook (Clinton’s campaign manager) is a typical modern technocrat: enamored with data and models, and convinced that data mining and efficient use of its findings would equate to success. Common knowledge and “finger on the pulse” intuition were eschewed in favor of numbers and metrics. The problems with this data-centric approach began to rear their head during the Democratic primary.

If the book is to be believed, Clinton never saw Bernie Sanders as a threat to her campaign. Her analytics showed wins by a comfortable margin, so he was viewed as a gadfly rather than a rival. Based on those analytics, the campaign predicted a win in Iowa (the first state in the Democratic primary) by 3-4%. This supposed leading indicator gave the campaign comfort about its prospects of winning the state.

The number was right, but the decimal point was in the wrong place: Clinton did win Iowa…but by only 0.3%.

The fact that the campaign predicted her eventual win was taken as validation of the analytics and data strategy. This lagging indicator told the team that their method of measuring success was correct; the fact that the margin was heart-attack-inducingly close was ignored. The team continued down the same path to their eventual failure in the general election.

When an organization undergoes an agile transformation, one of the first topics of discussion is metrics and how to measure success. Organizations that define hard metrics early often find that the metrics look good while success still escapes them; this ties to the old adage “what gets measured gets managed, and what gets managed gets done.” If you do not create leading indicators that tell you whether you are headed in the right direction, there is a good chance you will end up somewhere you never intended.

2.      Failure to Inspect & Adapt

The most powerful ceremony in the agile repertoire is arguably the retrospective. It has been a primary tenet of iterative and Lean thinking since the days of the Toyota improvement kata (google “salary thief”). Forcing teams to look back on their accomplishments, successes, and failures in the spirit of relentless improvement is a powerful tool for enacting positive change.

Agile teams hold retrospectives at the end of every iteration, be it a 2-4 week sprint or even a 10-week SAFe program increment. The Clinton campaign ran for more than two years if you count the “unofficial” preparatory work done before her official announcement of candidacy in April 2015. That timeline gave the campaign plenty of opportunities to stop, inspect, and adapt, but it never did. For example:

· Democratic primaries: there were states where the projected margin of victory was far lower than expected (Iowa, Illinois, Missouri), as well as states where a projected win was actually a loss (Michigan, Indiana). What went wrong with the model? Each primary should have been an occasion to examine the misses and make adjustments; even a simple predicted-versus-actual check, sketched after this list, would have exposed the pattern.

· Feedback after the primaries: the book contains many accounts of operatives from each state contacting the campaign and begging for budget for paid canvassing, polling of potential voters, and numerous other ways to gauge sentiment and cement her candidacy. Almost every request was ignored. When every group makes the same request, that is usually a sign of a common theme.

· Someone who’s done this before: Bill Clinton remarked numerous times, “We need to visit these rural places and shake hands. We need to get out and sit with the electorate.” His approach was ignored in favor of data analytics pointing the campaign to where it supposedly needed to go. A retrospective might have surfaced this feedback and prompted the campaign to investigate it.
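As a rough sketch of the inspect-and-adapt step the campaign skipped (the Iowa figures echo the numbers quoted earlier; the other margins are invented placeholders), comparing predictions with results makes a systematic bias hard to miss:

```python
# Hypothetical sketch: checking a model's predictions against results.
# Iowa's figures come from this post; the rest are invented placeholders.

results = {
    # state: (predicted margin, actual margin), in percentage points
    "Iowa":     (3.5,  0.3),
    "Missouri": (4.0,  0.2),
    "Michigan": (5.0, -1.5),  # a projected win that became a loss
}

errors = [predicted - actual for predicted, actual in results.values()]
mean_error = sum(errors) / len(errors)

for state, (predicted, actual) in results.items():
    print(f"{state}: predicted {predicted:+.1f}, actual {actual:+.1f}")

# A consistently positive mean error means the model systematically
# overestimates support -- a cue to adapt, not to celebrate narrow wins.
print(f"Mean overestimate: {mean_error:+.1f} points")
```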

These are just a few examples of moments when the campaign would have benefitted from taking a pause to look at where it was, how it got there, and where it was headed. It stands to reason that even one genuine instance of inspecting and adapting during the campaign might have resulted in changes that altered the course of history.

The 2016 American Presidential Election will be studied and debated for a long time, and theories about what could have been will abound. The lesson for practitioners is this: frameworks and processes proliferate in today’s agile world, but to forget the basics is to invite disaster.