A recent post on LinkedIn from a fellow engineer has been circulating, and it seems to have crystallized a deep-seated anxiety many in the technical field are feeling.
He describes a future where software engineers are relegated to “nursery management of agentic agents,” becoming “more and more detached from the codebase, tech stack, and design decisions.”
His concern is that this is just another form of offshoring—a promise of cheapness and efficiency that, in his experience, often failed to deliver on quality. This sentiment is not isolated; it reflects a growing apprehension among many seasoned engineers who value their craft.
But this isn’t just a bottom-up anxiety. The move toward an agentic future is being driven by a very different, top-down perspective.
The Two-Sided View of the Agentic Shift
The conversation around this technology seems to be split into two very different camps, defined by what they stand to lose and what they hope to gain.
The Engineer’s Apprehension: A Loss of Craft
For the individual engineer, the fear is one of deskilling. The concern is that the job will shift from creative problem-solving and intricate design to simply writing “detailed hand written markdown specs” and then cleaning up the “mess” a “non-thinking autonomous agent” produces.
This is where the offshoring analogy hits home for many. They’ve seen this pattern before: a task is abstracted away to a cheaper resource, and the result is a mountain of “rework,” integration debt, and a loss of end-to-end quality control. The fear is becoming a “janitor” for AI-generated code, a role that feels more like “childcare” than engineering.
The Leadership Calculation: A Pursuit of Profit
It would be a mistake, in my opinion, to think that senior leadership shares these fears. Their perspective isn’t one of fear; it’s one of financial opportunity.
From a top-level view, the logic seems simple: “Why would we need as many engineers if agentic AI can do the same thing for cheaper?”
This is a classic fiscal calculation, aimed squarely at padding the bottom line by reducing one of the largest costs in tech: engineering salaries. The promise of “agent swarms” producing complex changes on cheap cloud compute is, to them, the next logical step in optimizing the business.
However, this calculation may be overlooking several critical, and costly, blind spots:
- The Talent Pipeline: Where do senior engineers and architects come from? They come from junior engineers who spent years in the codebase, learning the craft, failing, and understanding the “why.” If agents write all the code, that pipeline breaks. This creates a long-term “competency debt,” where the company’s internal ability to innovate or even perform complex maintenance atrophies. Ultimately, the business becomes a brittle shell, wholly dependent on increasingly expensive external talent or the limitations of the very AI it embraced.
- The Real Cost of Management: The “nursery management” the original post describes isn’t free. In fact, it’s likely a new, highly specialized, and expensive role. This means the projected cost savings from reducing “engineer” headcount are simply shifted to a new, even more specialized “agent architect” role. The total cost of ownership for these systems may not decrease at all; it merely transforms from a salary line item into a combination of salaries, complex tooling, and compute costs.
- The Hidden Costs of Responsibility: Leadership is also overlooking the significant, non-negotiable costs of responsible AI development. Ignoring this “responsibility tax” is a direct path to catastrophic brand damage, crippling regulatory fines, and a complete loss of customer trust. The long-term cost of a single major AI failure will dwarf any short-term savings in engineer salaries.
A Familiar Pattern: The Compiler
What makes this entire debate so interesting is that we have seen this exact dynamic before.
In the 1950s, the “real” programmer was a master of machine code. They controlled every register and CPU cycle by hand. It was a highly skilled craft.
Then, pioneers like Grace Hopper and John Backus came along. Hopper built the A-0 system, one of the very first compilers, which led to FLOW-MATIC, a tool that let people program in English-like statements. Backus and his team at IBM created FORTRAN, which allowed scientists to write algebraic formulas instead of raw hardware instructions.
These new tools promised to abstract away the “mechanical detail.”
The “real” programmers of the day were deeply skeptical. They argued that a machine could never write code as optimized as a human (a claim that was, for a very short time, true). They felt they were being “detached from the machine” and that this abstraction would lead to a loss of control and quality.
An Observation on Abstraction
They were, in a way, correct. The specific skill of hand-writing optimized machine code did become obsolete.
But that abstraction didn’t end programming. It was the catalyst for its explosion.
By removing the need to manage individual memory registers, the compiler allowed engineers to stop thinking about the “how” of the machine and start focusing on the “what” of the problem. This shift enabled the creation of operating systems, databases, and global networks—systems of a complexity that would have been completely impossible to build, or even imagine, in raw machine code.
The skill set didn’t disappear; it evolved. It moved from hardware mastery to algorithm design and systems architecture.
We now stand at a similar crossroads. The “agentic” evolution proposes to abstract away the “mechanical detail” of our time—boilerplate code, API integration, unit test generation.
The question is whether leadership’s profit-driven calculation accounts for the full picture. Will this abstraction free us to tackle a new, higher level of complexity? Or will the overlooked costs in talent, management, and responsibility turn this quest for a padded bottom line into a long-term strategic failure?
A Final Thought
To be clear, I’m not writing this to agree or disagree with the original LinkedIn post.
Personally, I feel like it’s still a little too early to tell exactly where this is going to take us. But one thing is for certain: AI is here, and it’s here to stay.
We have to evolve and adapt to work with it.