The pitch for artificial intelligence in the corporate world is seductive and simple: “Let the bots do the grunt work.”
We are told that by automating the repetitive, low-value tasks—data entry, summarizing meeting notes, drafting boilerplate emails, debugging simple code—we will “unleash” human potential. Junior employees, freed from the drudgery of the spreadsheet mines, will immediately ascend to higher-level strategic thinking. They will become architects instead of bricklayers.
But there is a flaw in this utopian vision. It assumes that “grunt work” is just waste. It assumes that the only value in writing a summary is the summary itself.
What if the drudgery is the education?
For centuries, the apprenticeship model has defined professional growth. A junior lawyer spends years reading boring contracts not because the partner enjoys torturing them, but because reading a thousand boring contracts is the only way to develop the intuition to spot the one dangerous clause in a complex merger. A junior developer writes unit tests not because it’s glamorous, but because breaking code is how you learn how code works.
If we hand the “bottom rung” of the ladder to an algorithm, we aren’t just saving time. We might be sawing off the only way up.
To understand the risk, we have to distinguish between “explicit knowledge” and “tacit knowledge.”
Explicit knowledge is what you can write down in a manual: Press this button to send the invoice. AI is excellent at this. Tacit knowledge is what you learn by doing: The client sounds hesitant, so I should delay sending the invoice until Monday.
Tacit knowledge cannot be downloaded. It is accumulated through osmosis. It is built during the “boring” hours. When a junior analyst spends three days manually cleaning a messy dataset, they aren’t just fixing commas. They are learning the texture of the data. They are learning that customer IDs from the 2019 merger always have a trailing space. They are learning that the sales team in Chicago tends to overestimate their projections.
If an AI cleans that dataset in four seconds, the dataset is clean, but the analyst is empty. They have the output, but they lack the context. When they are later asked to make a strategic decision based on that data, they will lack the “gut feeling” that comes from having wrestled with the raw numbers.
The current solution proposed by tech evangelists is that juniors will become “editors” or “managers” of AI. Instead of writing the code, they will review the code written by GitHub Copilot. Instead of drafting the press release, they will tweak the draft generated by ChatGPT.
The problem is that you cannot effectively edit what you do not understand.
To be a good editor, you must first be a competent writer. To catch a subtle hallucination in an AI-generated legal brief, you need to have read enough case law to know what sounds wrong. If a junior employee skips the “doing” phase and goes straight to the “reviewing” phase, they are flying blind. They lack the baseline competence to evaluate the machine’s work.
This creates a dangerous loop: The junior employee relies on the AI because they lack experience. The AI makes a subtle error. The employee approves the error because they lack the experience to catch it. The error becomes part of the final product.
This brings us to the demographic crisis: The “Hollow Middle.”
In a decade, the current generation of senior leaders—who learned their craft the hard way—will retire. They will be replaced by the current generation of juniors. But if those juniors spent their formative years merely prompting AI rather than doing the work, they will be “paper tigers.” They will have the titles, but not the skills.
We risk creating a workforce of “prompt engineers” who can initiate a process but cannot fix it when it breaks. If the AI server goes down, or if the model encounters a novel situation it wasn’t trained on, the human “expert” will be helpless. They will be like a GPS-dependent driver lost in a city where the satellites don’t work.
So, is the solution to ban calculators and force everyone to do long division? No. That is Luddism. The efficiency gains of automation are too valuable to ignore.
Instead, companies must intentionally redesign the concept of “on-the-job training.” If the work itself no longer teaches the skills, the training must become simulation.
We may see a shift toward the airline pilot model. Pilots rely on autopilot for 90% of a flight, but they spend hundreds of hours in simulators practicing for the 10% where the autopilot fails.
Corporate training programs will need to move away from “efficiency” and toward “intentional friction.”
We are currently treating business technology as a tool to remove effort. But effort is the currency of learning.
The danger isn’t that AI will replace us. The danger is that AI will stunt us. It will create a generation of professionals who are incredibly productive but deeply shallow. To avoid this, we must stop viewing “drudgery” as a defect to be patched and start viewing it as the gymnasium where professional muscle is built. We can let the robots carry the weight, but we must make sure we don’t forget how to lift. The future belongs not to the person who can prompt the machine, but to the person who knows when the machine is lying.