80/20 Automation


For over two decades now, the holy grail of robotics and automation has been full autonomy: machines and software that can operate completely on their own, seamlessly navigating the messy, human world around them. From self-driving cars to fully digital “Industry 4.0” factories to humanoid robots poised to replace human labour, we’ve been promised transformative revolutions. Yet, time and again, we find ourselves with capable but limited technologies that deliver incremental gains rather than the seismic shifts we were sold.

In this article, I argue that our cultural obsession with full autonomy blinds us to a more powerful and achievable opportunity: collaborative automation—systems designed around humans and computers each doing what they do best.

My own journey with automation began as a controls engineering co-op in 2003 at the Nemak (formerly Ford) Windsor Aluminum Plant. A foundry is about as messy as it gets, but here was a modern facility built on mid-1990s technology, and a surprising amount of the casting process ran on its own without human intervention. The deterministic robots, gantries, conveyors, and custom-built machines of the day capably and reliably handled 80% of the work. Human operators remained essential in a few tricky areas—such as sand mold assembly, loading aluminum ingots into the melting furnaces, and parts of the finishing process—as well as for maintenance when things went wrong. They handled the other 20%.

During my time there, a big shift was happening in the world of robotics. In 2004, the first DARPA Grand Challenge took place. Not long after, the seminal textbook Probabilistic Robotics was published and became a bible for those of us looking to unlock the next big step in automation. I began my graduate studies in 2006. By the time I had completed my Ph.D. in 2012, the self-driving car boom was under way. Advanced driver assistance systems (ADAS) like adaptive cruise control, automatic emergency braking, and lane keeping assist had been emerging on the market for some years by that point, but research groups like mine at Bosch were working separately on the moonshot of what would come to be classified as SAE Level 5 autonomy. The idea was that the occupant could enter a destination, then check out and do whatever they liked until the car delivered them safely and comfortably. Nothing less would do.

Let’s take a step back to appreciate the difference between these two approaches.

When the Ford plant was constructed, the goal was to produce engine blocks efficiently. Human labour was expensive, so the upfront capital and engineering investment in automation was a means to reduce operating expenses. It only made sense to automate tasks when the cost of automation was lower than the projected cost of human labour over a reasonable time horizon. In theory, nothing would have prevented someone with an unlimited budget in 1996 from building a fully autonomous aluminum engine block factory. Clearly, though, someone decided that automating the remaining operations would have required too much engineering for the juice to be worth the squeeze.

On the other hand, when the DARPA challenges brought the possibility of commercial autonomous vehicles to the foreground, the goal was not (as with ADAS) merely to make driving incrementally safer or more convenient. This was touted as a transformative technology: no longer will personal transportation be limited to able-bodied, sober, alert, licensed adults! Now all that commute time can be reclaimed for sleep, work, or entertainment! Now the car can move itself from place to place without its owner inside! And imagine the parking and road use efficiency, the comprehensive safety gains, the saved labour costs in taxi and delivery services! All of these benefits depended on the promise of fully autonomous vehicles, so we worked on developing fully autonomous vehicles… whatever it took.

Well, it turns out that it took, and is still taking, a lot. As of 2026, no SAE Level 5 vehicle exists on the market. Waymo’s robotaxi service uses what is arguably a Level 4 vehicle, albeit one that still depends on remote operator intervention even within its limited operating domain. The best you can do at a dealership is Level 3, and you have to squint generously there too.

The truth, from the trenches, is that fully autonomous systems are extraordinarily hard to achieve in the real world. Physical environments are unstructured, data is incomplete or ambiguous, and human intentions are unpredictable. At some point, though, it became culturally unfashionable to propose practical automation yielding incremental efficiency gains. We only want the big prize, the disruptor, the end-game tech that will revolutionize the world by taking the human entirely out of the loop.

Autonomous vehicles are just one example, but they’re a good one. Time and again, we achieve 80% automation (the easy 80%) quickly, the demos are impressive, executives make bold promises about 100% being just around the corner, investment pours in, and then people like me get hired to wrestle with the much harder remainder for the next decade or two while promises fail to materialize and people are told to temper their expectations.

Presumably, this mentality would not have been very fruitful for Ford in building its casting plant. A fully autonomous process would surely have been nice to have, but the incremental impact of partial automation was very real, and a much more economically sound investment. This applies to autonomous vehicles, too. While the bluster about full autonomy never really went away, the automotive companies have all quietly focused the bulk of their resources on delivering more practical ADAS products in recent years. It’s a shame that the diversion of resources, talent, and product focus likely slowed them down while we still wait for the payoff.

We may be seeing the same pattern play out again today with humanoid robots and agentic AI. Rapid initial progress, impressive demos, wildly extrapolative promises, unprecedented capital investment. Is everyone ready for the next stage, a long, expensive grind on aggravating problems with a side of disappointment? Perhaps we could instead decide to learn from history, and refocus on a more pragmatic vision for the future of automation.

I learned one of the best lessons of my engineering career at an internship during my graduate studies, in between working at the casting plant and on the driverless cars. I was using concepts from my research to automatically plan a vision system based on task requirements. While the algorithm optimized a complex set of variables better and faster than any human could hope to, it was frustratingly bad at handling a few parameters that an experienced engineer could, conversely, tune very easily. The contrast between academia and my paid internship made me aware that time spent banging my head against this long-tail problem was eating into the profitability of the solution. I therefore declared that this would be a semi-automatic planning system instead.
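The shape of that design is simple to sketch. The code below is purely illustrative (the cost function, parameter names, and grid values are all made up, not the actual planning system): an experienced engineer fixes the few parameters that resist automated tuning, and the solver searches only over the rest.

```python
# Illustrative sketch of semi-automatic planning (all names and values
# here are hypothetical, not the real vision-planning system).
import itertools


def plan_cost(standoff, tilt, exposure, gain):
    # Stand-in for a real vision-planning objective; lower is better.
    return ((standoff - 1.2) ** 2 + (tilt - 15) ** 2 / 100
            + (exposure - 8) ** 2 + (gain - 2) ** 2)


def semi_automatic_plan(human_fixed, search_space):
    """Grid-search the automated parameters, holding the
    human-tuned ones fixed."""
    best_params, best_cost = None, None
    for candidate in itertools.product(*search_space.values()):
        params = dict(zip(search_space.keys(), candidate), **human_fixed)
        cost = plan_cost(**params)
        if best_cost is None or cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost


# The engineer sets exposure and gain by eye...
human_fixed = {"exposure": 8, "gain": 2}
# ...and the machine optimizes the rest.
search_space = {
    "standoff": [0.8, 1.0, 1.2, 1.4],
    "tilt": [0, 5, 10, 15, 20],
}
params, cost = semi_automatic_plan(human_fixed, search_space)
```

The division of labour mirrors the argument: the machine grinds through the combinatorial search it is good at, while the human supplies the judgment calls it is bad at, and neither has to do the other's job.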

There are many better examples of the success of this approach: robotic surgery, industrial AMRs, shared autonomy in mining and agriculture, human-in-the-loop content moderation and medical imaging triage, logistics and dispatch optimization. The broad pattern is that where the machine is good at control, precision, or optimization, but the human is better at intent and exceptions, collaborative automation is the practical design. Full autonomy makes a great story, but we’ll get better technologies sooner if popular culture and capital markets stop overvaluing it. If we can recalibrate our ambitions away from replacing people and toward empowering them, we can get back to building a future where automation actually works.
