During my time as an operations GM at Amazon, we observed a weekly ritual called labor planning.
Leaders of each of my operational departments would predict the next week’s output. They based their predictions on the volume they expected to process, the labor hours available to them, and the processing rate they expected to achieve.
Getting these predictions right meant our building would fulfill every order assigned to us, but without paying for idle manpower or humoring sluggish productivity. Getting them wrong would impose costs upon us and our network.
I didn’t reward my managers for guessing low and over-achieving. That’s called sandbagging. It meant we could have stretched to cover more customer demand. By not committing to and processing our max, we left money on the table.
Nor did I reward them for predicting too aggressively and then under-achieving. Missing on the low side meant calling unplanned overtime to cover demand we committed to processing. A higher labor bill was the penance for excessive boldness.
I rewarded accuracy. And not just accuracy in one week, but consistent accuracy over time. I expected senior managers to be within 2% of their predictions and to get closer with experience. They often achieved this, for reasons we will discuss.
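The labor-planning math described above reduces to a simple check. Here is a toy sketch of it, assuming the implied model (planned output = labor hours × processing rate, judged against a 2% accuracy bar); all the numbers are invented for illustration:

```python
# Toy labor-planning accuracy check. The model is the one implied in the
# text: planned output = labor hours x processing rate, and a plan counts
# as accurate if actual output lands within 2% of it. Figures are made up.

def plan_accuracy(planned_units: float, actual_units: float) -> float:
    """Absolute prediction error as a fraction of the plan."""
    return abs(actual_units - planned_units) / planned_units

hours = 4_000           # labor hours available next week (invented)
rate = 62.5             # units processed per labor hour (invented)
planned = hours * rate  # 250,000 units committed to the network

actual = 246_500        # what the building really shipped (invented)
error = plan_accuracy(planned, actual)
print(f"error: {error:.2%}, within 2%: {error <= 0.02}")
```

The asymmetry in the surrounding paragraphs lives outside this formula: missing high means sandbagging, missing low means overtime, and only the magnitude of the error is symmetric.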
So, who the Hell is Horace Rackham and what does he have to do with Amazon labor planning?
Nothing. Sadly, Horace is long dead.
But this means he won’t mind if his part of the story waits a little longer.
Predictions are often wrong. And their wrongness is often hilarious in retrospect, except when it’s dark and tragic.
In 1995, a columnist for Newsweek named Clifford Stoll told us the internet was a passing fancy. It had no capitalist logic. We would all soon lose interest.
Blockbuster’s c-suite bet their company’s future on the prediction that streaming video was a pipe dream.
Former Microsoft CEO Steve Ballmer predicted the iPhone would be a flop.
“There's no chance that the iPhone is going to get any significant market share.”
Any number of 20th Century economists predicted the Western world would transition to universal income before 2020, with most jobs eliminated by robots. Outside of a few Utopian enclaves, we’ve gone in the opposite direction.
Future-gazing scientists predicted in the 1980s that the human lifespan would be 150 by now. The oldest human so far reached 122 years, and average lifespan is declining in many places.
The housing market would never decline, everyone said prior to 2008. Consumers, banks, politicians, realtors. Up until the crash which nearly took the financial system with it, everyone shared this prediction, feeding on mutual confidence. All except a few, who we will come back to later.
And of course, these are just big predictions. Smaller ones are wrong all the time, as social media can attest.
Lest we spare the politicians, Britain’s Tories predicted a tens-of-billions windfall would flow from leaving the European Union. It didn’t happen, and virtually every line of the UK public budget is now stretched on the rack of an invalidated assumption.
Lest we exclude our personal heroes, airpower visionary Billy Mitchell predicted that air superiority would always and inevitably lead to decisive victory. We know that’s wrong now many times over. This is one reason why Mitchell is less a founding father of our independent Air Force and more a crazy uncle.
Lest we exclude genius billionaires, hedge fund titan Bill Ackman predicted with absolute certainty that Herbalife would go out of business. He starred in a movie about it. He bet his investors’ money on it. And he lost.
In shorting a company that looks and acts like a pyramid scheme but successfully persuades a lot of people it isn’t, Ackman perhaps underestimated the power of stupidity.
Stupidity is the ultimate predictive paradox. It is itself predictable, but can invalidate other predictions which don’t account for how predictable it is.
Which brings me to one of my favorites.
The fellow pictured above is Horace Rackham, who died in 1933. He was Henry Ford’s lawyer, advisor, and investment partner.
In 1903, he advised Ford not to invest in automobiles, predicting they were a flash in the pan.
“The horse is here to stay, but the automobile is only a novelty -- a fad.”
Horace is spared his blushes by the sheer age of his little-known words. Spoken before mass media, they didn’t find many ears to hear them.
Perhaps the slower spread of information back then tells us something about the relative confidence people maintained in one another and their institutions. It was a lot more effort to prove definitively to a wide audience that you were full of shit.
Much easier now.
Bad predictions are so commonplace these days that we think people who get something noteworthy right are always right. We start asking them for advice on everything.
Entrepreneurs make good guesses on how to invest, but they don't know anything about other subjects, like leadership, politics, or space travel. Yet we ask them. Because we think the ability to make money endows authority.
Actors occasionally make good guesses about which films to get involved with. Then we start asking their advice about politics or social problems or money or medical issues, pretending their celebrity status endows wisdom when it often endows the opposite.
Some of them run for high office and win. Then proceed to remove any doubt about how clueless they are.
Christian Bale isn’t necessarily either of these. I just like the meme.
Then again, this is tricky business, understanding how prediction works and what we can infer from getting things right or wrong.
Einstein was wrong over and over and over again before he was right. This is often true of the best among us, the ones who change our world.
The difference, perhaps, is how Einstein applied the lessons of those bad guesses to other guesses. This foreshadows a serious point about predictions.
To navigate into the serious, let’s go a little darker.
The entire US defense establishment was absolutely certain for decades that our next big war would be fought in the Fulda Gap against the USSR. It would be a climactic death rattle enacted out of Soviet economic desperation. A set-piece struggle to end the Cold War.
Policy, resources, diplomacy, intelligence, and national intellect were all channeled toward readiness for this eventuality.
Meanwhile, the Middle East grew less stable. In 1990, we found ourselves in a desert war with a very different foe. Different tactical and logistical challenges. Different weapons. Different risks and knock-on effects.
And we hadn’t thought much about it. Our plans were useless.
We adapted to Desert Storm, albeit with the benefit of some interesting luck and some teething issues.
But we hadn’t thought about how we could get into that war, so we had little idea how to get out. Its life-cycle costs spiraled as we lingered in a far-flung region for a decade, vainly trying to put the genie of regional conflict back in the bottle. All the while sowing the seeds of those knock-on impacts by our very presence.
In the relative global peace, dotted with “peacekeeping operations,” which followed that First Gulf War, we presumed continual stability and discounted non-state actors as “low risk” and/or “low probability” threats. We predicted a “real threat” would eventually show itself, and we’d have time to adjust.
September 11, 2001 proved all of that wrong. The threat had been there all along, festering in the shadows created by the blinding light of our unwarranted certainty.
When you think about how human cognition and confidence operate, it makes sense we get predictions wrong a lot. Our reasoning is riddled with various forms of bias, most of which are unconscious.
We’re overly optimistic about our own thinking.
We believe past experience is a more accurate predictor of future events than it actually is.
We believe what’s worked before will work again.
These are sample forms of egocentrism, which comes in as many varieties as the mind has workings. And the more egocentric and less self-critical we are, the less reliable our predictions are likely to be.
But amid all this, we get things right sometimes. Some of us predict more effectively than others. This tells us there are reasons why, which we can learn and exploit to be right more often.
Those Amazon labor plans were accurate because of deep and direct knowledge of their underlying processes, honed with experience, until a layer of intuition was grafted atop the knowable.
This counts for a lot in forming judgment, which guides predictions. With enough knowledge and experience, a high form of situational awareness is achievable, which permits anticipation instead of reaction.
Watch any professional footballer and you’ll see them going not where the ball is, but where it will arrive, even before it’s been kicked. They know intuitively from thousands of hours of experience what’s going to happen.
Hedge fund manager Michael Burry was one of the few who predicted the US housing market would crash. As famously depicted in The Big Short, Burry bet against the market. And he won, earning his ingrate investors, most of whom had threatened to sue him, billions in returns.
His prediction was based on something simple and boring. He looked at the data. He selected a sample of mortgages and actually reviewed their terms. The evidence was clear to him that banks were taking undue risks. Eventually these mortgages would default, and when that happened, the bonds built on their collective performance would be downgraded.
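Burry’s kind of due diligence can be caricatured in a few lines. This is a loose sketch, not his actual method: the field names, thresholds, and loans are all invented, but the shape of the exercise (sample the loans, count the risky ones, draw a conclusion) is the point:

```python
# Loose sketch of Burry-style bond due diligence. Field names, the 620
# credit-score cutoff, and the 20% tolerance are invented for illustration.

RISKY_SHARE_LIMIT = 0.20  # assumed tolerance before calling the bond unsound

# A tiny made-up sample of loans backing one bond.
loans = [
    {"fico": 580, "adjustable": True,  "interest_only": True},
    {"fico": 710, "adjustable": False, "interest_only": False},
    {"fico": 600, "adjustable": True,  "interest_only": False},
    {"fico": 640, "adjustable": True,  "interest_only": True},
]

def is_risky(loan: dict) -> bool:
    """Low credit score combined with teaser-style loan terms."""
    return loan["fico"] < 620 and (loan["adjustable"] or loan["interest_only"])

risky_share = sum(is_risky(loan) for loan in loans) / len(loans)
print(f"risky share: {risky_share:.0%}")
if risky_share > RISKY_SHARE_LIMIT:
    print("prediction: defaults will cluster and the bond gets downgraded")
```

Nothing here is clever. That is the argument of this passage: the edge came from doing the boring reading that nobody else bothered to do.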
Data and evidence are great bias-breakers. They foster better decisions.
One of the reasons the US military prevailed decisively in Desert Storm is that a rag-tag band of airmen in the nether regions of the Pentagon were engaging in scenario thinking. Their geopolitical gaming sessions led them to consider conflict scenarios beyond the Fulda Gap.
Their leader, Col. John Warden, was particularly interested in the Middle East. He had long believed it would be there, not Europe, where US military force would most likely be next called upon. While everyone else was looking left, he and his team were looking right.
Through their scenario play, they developed a loose outline for a different way of dealing with an adversary. An airpower-centric way of attacking vital centers with speed, stealth, and simultaneity leveraging precision weapons. That outline became the Instant Thunder campaign plan, which in action led to a decisive US victory.
Horace Rackham advised Henry Ford against investing in cars. But he retained the capacity for self-doubt. When Ford went against his advice, Rackham chucked in $5,000 of his own money, becoming one of the original investors in the Ford Motor Company.
16 years later, Rackham sold his share of the company for a cool $29.3M, which is around $530M in today’s dollars. He spent the rest of his life giving away his wealth, mainly to educational institutions. He knew the value of critical thinking.
Our brains are not natural prediction engines. They are bias-addled and impressionistic conflict zones.
We have to trick them with learned skills of prediction.
Critical thinking, self-doubt, scenario thinking, reliance on data and evidence, and development of deep knowledge and experience are just some of the ways we can overcome our biases and predict better.
Organizations bent on efficiency and beset with myopia have a tendency to overlook and minimize these things in their resource models.
But if we make the necessary investment, we can be right in our predictions more often.
Some of us, anyway. Others will continue unabated, making foundationless predictions punctuated by just enough occasional correctness to convince them of their own genius.
But that’s OK. The world needs morons too.
TC is an American and British writer and veteran. He writes The Radar from Manchester, United Kingdom.