Will AI Accelerate Top-down Organizational Culture?
- Michael Amenta
- Nov 14
Summary
Successful adaptation requires two traits: mutation and selection. When executives develop too much confidence in a top-down perspective, they may over-weight selection, resulting in an over-allocation of "Taylorized" components and specialists at the expense of the generalists who excel at mutation.
Executive over-confidence often stems from observer bias: a prejudiced, subjective interpretation of information, demonstrated by an over-emphasis on a limited set of KPIs. Complex systems, however, always contain factors that cannot be measured. When this "illegible margin" is ignored, underperformance follows (e.g. Ford mass production vs. Toyota TQM in the 20th century, first-generation Electronic Medical Records, the planned city of Brasilia).
This risk can be managed by bottom-up decision making, wherein workers are multi-skilled, holistic, autonomous owners of the end product, teams are cross-functional, and there is an emphasis on developing talent.
For timeless insights on software design, I highly recommend "Why Doctors Hate Their Computers" by Atul Gawande, a surgeon.
Gawande recounts the frustrations medical professionals experienced with first-generation Electronic Medical Record (EMR) systems: software intended to make administrative tasks more efficient had the opposite effect. Despite fixing issues like poor penmanship and enabling search, EMRs led physicians to spend less time with patients and heightened their burnout through frustrating, menial labor.
How could digitization lead to such profound inefficiency and human misery? The answer lies in the unchecked application of Taylorization.
Taylorization enables top-down design decisions
Historically, Taylorization dates to Frederick Winslow Taylor's "scientific management," which broke manufacturing work into components of standardized parts. This enabled designers to work independently from builders and created workers who performed highly specialized, efficient plug-and-play labor. In software, we see the same trend reflected in the rise of SDKs, code libraries, low-code solutions, and now AI agents and LLMs.
Unfortunately, the shift from general-purpose development to specialized, plug-and-play labor can create brittleness and organizational silos. Dr. Gawande describes the tradeoff succinctly:
Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.
In other words, using efficient components (selection) can kill the craft, flexibility, and ground-level innovation (mutation) that make work adaptive, meaningful, and impactful.
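To make the dynamic concrete, here is a minimal evolutionary-search sketch in Python. It is purely illustrative and my own construction (the design vectors, fitness function, and evolve loop are toy stand-ins, not anything from Gawande's article): with mutation and selection together, the system steadily improves; with selection alone, it can never exceed the best design it launched with.

```python
import random

def fitness(design: list[float]) -> float:
    """Toy stand-in for 'how well the system serves its users' (peak at all zeros)."""
    return -sum(x * x for x in design)

def mutate(design: list[float], rate: float = 0.1) -> list[float]:
    """Mutation: a worker tweaks one part of the process."""
    tweaked = design.copy()
    i = random.randrange(len(tweaked))
    tweaked[i] += random.gauss(0, rate)
    return tweaked

def evolve(population: list[list[float]], generations: int, allow_mutation: bool) -> float:
    for _ in range(generations):
        if allow_mutation:
            # Mutation: produce variety and deviation.
            population = population + [mutate(d) for d in population]
        # Selection: keep the most functional half.
        keep = max(len(population) // 2, 1)
        population = sorted(population, key=fitness, reverse=True)[:keep]
    return fitness(population[0])

random.seed(0)
start = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
print(evolve(list(start), 50, allow_mutation=True))   # steadily approaches the optimum
print(evolve(list(start), 50, allow_mutation=False))  # frozen at the best initial design
```

The failure mode is the point: when a monolith forbids local mutation, selection has nothing new to choose from, and quality freezes at whatever the launch state happened to be.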
The illegible margin: consequences of executive-first design
When resources are allocated top-down for specific executive goals, such as control, oversight, and reporting, the administrative burden often pours down onto the worker. For example, EMR software allowed hospital leadership to streamline the imposition of rules and to monitor population health risks, but it also burned out doctors.
I experienced similar administrative overhead when my leadership mandated a program management tool called PlanView. The goal was to push standardized progress updates to VPs via dashboards, but the result forced PMs to make a dozen more clicks to update a single field. Despite the added administrative hours, PlanView did not help us realize the goals our organization had defined any more effectively. The experience was echoed years later with OpenAir timesheets.
This is the fallacy of top-down organizations and what we can call "trickle-down software": what benefits the C-Suite does not necessarily benefit the worker or the organization. When the system over-indexes on a few KPIs, it reduces the knowledge worker (e.g. a doctor or a PM) to a cog in the machine. As Dr. Gawande stated:
Artisanship has been throttled, and so has our professional capacity to identify and solve problems through ground-level experimentation... Technology for complex enterprises is about helping groups do what the members cannot easily do by themselves—work in coördination. Our individual activities have to mesh with everyone else’s. What we want and don’t have, however, is a system that accommodates both mutation and selection.
As human processes are replaced by software automations, the ability to complete a task through any sort of creative process (i.e. a mutation) is eliminated. This increases the perception of risk mitigation, but at the cost of removing the potential for innovation.

I say perception of risk because there is a great deal of risk that can't be measured, and over-confidence in finite KPIs can catapult a complex system into failure. This "illegible margin" is why unplanned city blocks, such as those in New York's West Village, outperform highly planned designs, such as the city of Brasilia, as Taylor Pearson brilliantly explains.
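To illustrate the trap, here is a toy simulation of my own (not from Gawande or Pearson, with made-up numbers): a decision rule that chases only the measured KPI produces a better-looking dashboard, while the system's true value, which also depends on an unmeasured factor, quietly collapses.

```python
import random

# Toy model: true value depends on a measured KPI and an unmeasured
# "illegible margin" factor that no dashboard can see.

def true_value(kpi: float, margin: float) -> float:
    return kpi + 2.0 * margin  # the unmeasured factor matters more

def decide(kpi: float, margin: float, chase_kpi_only: bool) -> tuple[float, float]:
    """Each 'management decision' shifts effort between the two dimensions."""
    shift = random.uniform(0.0, 0.1)
    if chase_kpi_only:
        return kpi + shift, margin - shift         # squeeze the margin for KPI gains
    return kpi + shift / 2, margin + shift / 2     # balanced, slower KPI growth

random.seed(1)
for label, chase in [("KPI-only", True), ("balanced", False)]:
    kpi, margin = 1.0, 1.0
    for _ in range(100):
        kpi, margin = decide(kpi, margin, chase)
    print(f"{label:8s} KPI={kpi:5.2f}  true value={true_value(kpi, margin):6.2f}")
```

Run it and the KPI-only policy reports the higher KPI but the lower true value: the dashboard improves while the system degrades.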
These types of unintended consequences were demonstrated by first-gen EMR software. As a result of its brittleness and inefficiency, the medical system adapted by hiring more humans to enter and read data: so-called "medical scribes". As Dr. Gawande describes:
This fix is, admittedly, a little ridiculous. We replaced paper with computers because paper was inefficient. Now computers have become inefficient, so we’re hiring more humans. And it sort of works.
Scribes started as lower-status local workers (medical students), but hospitals eventually found even lower-cost, yet more highly trained, workers in India. This evolution for efficiency's sake has subsequently created laughably inefficient workflows:
A note for a thirty-minute visit takes Rane [the scribe] about an hour to process. It is then reviewed by a second physician for quality and accuracy, and by an insurance-coding expert, who confirms that it complies with regulations—and who, not incidentally, provides guidance on taking full advantage of billing opportunities.
Dr. Gawande describes the scribes as having huge error rates and sky-high turnover, with many lasting only "several months" in their jobs. Combined with ballooning rates of physician dissatisfaction, stress, and depression, he paints a picture in which both the highly paid specialist and the minimum-wage worker are succumbing to misery.
Embracing generalists to avoid "brain drain" attrition and low engagement
Importantly, Dr. Gawande pointed to burnout arising from the “pointlessness” of the added administrative tasks. Work needs meaning.
While AI agents are accelerating specialization at work, the trend has been present for years. When I started my career as a product manager, I was a Swiss Army Knife. I generated product ideas, validated them, submitted them for approval, wrote their specifications, managed their build, tested their quality pre-release, marketed their features for users, and provided customer support. Now, there's often a Product Strategist for defining initiatives, a User Researcher for research, a Business Analyst to write detailed specifications, a Scrum Master to oversee the build, and a Product Marketer to market the release — all reporting to different heads of department. But, despite all these additional resources, I'm not working less. Like the doctors in the story, my work has “shifted”.
Correspondingly, I've noticed brain drain among my colleagues. Whereas it was once the norm for PMs to drive strategy and innovation, my colleagues now increasingly need to be told what to do. Furthermore, what they produce is often more predictable, and more unremarkable.
The reality is that generalist excellence is what helps people excel in their careers, but this shift is forcing workers into specialist roles where big-picture thinking and inter-team collaboration are restricted in favor of task optimization.
Empowering bottom-up organizations
Is this all inevitable? As Gawande notes:
Postwar Japan and West Germany eschewed Taylor’s method of industrial management, and implemented more collaborative approaches than was typical in the U.S. In their factories, front-line workers were expected to get involved when production problems arose, instead of being elbowed aside by top-down management. By the late twentieth century, American manufacturers were scrambling to match the higher quality and lower costs that these methods delivered.
It's a false choice between using top-down selection to get better top-line results and maintaining worker autonomy, meaning, and satisfaction. There is a "both/and" solution: install and protect mechanisms that keep human-led decisions behind both mutation and selection, just as Six Sigma, Lean Manufacturing, and TQM enabled front-line workers to optimize production systems.
Agile development principles were designed, in part, to address this, but enforcement is not guaranteed. Most agree on Agile in spirit, yet there are still plenty of ineffective development teams in practice.
As automation and the use of AI agents continue to grow, we must internalize the manufacturing lessons in which investment in human capital, through a philosophy of continuous improvement, investment in education, and multi-skill training, enabled organizational success. To do this, leaders must explicitly explore how every KPI and executive dashboard might cause unintended consequences for the worker.
The most enduring value of breakthrough technology, including AI, is not its ability to maximize quantity in the short term, but its capacity to empower and elevate the human worker to maximize quality, innovation, and sustained success.