The Gap We Haven't Named Yet
I'm a gray-haired engineer: I've been writing software professionally for decades. I learned from the gray-haired engineers before me, studied how existing successful software worked, and spent countless hours in unglamorous debugging sessions trying to emulate those patterns. Ultimately, I reached the point where I could synthesize all that knowledge into solutions to problems that hadn't previously existed. Over the decades, I built the deep architectural intuition common among senior developers and architects who have taken similar paths.
But the industry is now at a crossroads: I'm watching a generation of developers enter the field with tools that would have seemed like science fiction when I started. Spending bazillions of hours writing and debugging code is becoming less and less necessary.
Given that, I find myself asking a difficult question:
If the old path to becoming a senior architect is no longer The Way, what's the new one?
How We Got Here
The path that my generation of engineers took to senior roles was simple: write code -- mountains of it, much of it bad. We shipped it, watched it break, debugged it at odd hours, rewrote it, and repeated. Over years -- five, ten, twenty -- we developed a feel for how systems behave under stress, where abstractions leak, and why that "simple" design decision in year one becomes an architectural bottleneck in year three.
The progression looked something like this:
Each phase led into the next. There was no shortcut; you couldn't skip the years of writing code that didn't work, because those failures were how you learned. Your formal education (e.g., a university degree in Computer Science) taught you theory and fundamentals; the years that followed were where you learned to become a practitioner. That was an art unto itself: the path forced you to learn, one hard lesson at a time.
But that path is now changing due to rapid advancements in technology.
The Disruption
Large language models are fundamentally changing the economics of software development. Junior engineers will increasingly use LLMs to write code -- not because they're lazy, but because they'd be foolish not to. In a competitive job market, the developer who can close tickets faster using AI tools will get the offer over the one who is hand-crafting every function. This is already happening.
But if junior engineers aren't spending years writing and debugging code by hand, where does the deep systems knowledge come from?
LLMs are outstanding research tools and personalized tutors. But if junior engineers are instead incentivized to use LLMs solely as devices to close tickets as fast as possible, how do they gain the knowledge that separates someone who can complete a task from someone who can architect a system?
To be clear: the new path has both the same starting point (formal education) and the same destination (senior developers and architects). But the middle is changing:
That gap in the middle is the problem we need to solve.
What I See in the Classroom
I have some visibility into the first step on the path: I teach a senior-level Computer Science Capstone course as an adjunct professor at my local university -- a required class for graduation where students synthesize theory into practice by building something "real." I'm seeing students near the end of their academic journey and the beginning of their professional careers. Over the past few years, I have watched many students accept whatever the LLM gives them without question.
What makes this difficult to address is that students can get excellent grades doing this. The code compiles. The homework is solved. From a grading rubric standpoint, the output is often indistinguishable from what students produced five years ago through manual effort. But something is missing: they're "building" without understanding what they've built, assembling systems from components they can't explain, "solving problems" they haven't actually internalized.
The underlying issue is that students may be bypassing the development of critical thinking and analytical skills. The feedback loop rewards speed and output, not depth. And I'm not saying the students are doing anything wrong -- they're responding rationally to the incentives in front of them. If you can get an A by using the tools available to you, of course you will. That's good engineering instinct.
Whether students should use LLMs is not the real issue -- they should. The challenge is adapting university curricula fast enough to keep pace with the technology, knowing that the shifts we make today may need revising again in two years. Computer Science departments -- and likely all engineering departments -- are wrestling with exactly this tension.
Why This Still Matters
You can reasonably ask: if AI can write the code, do developers still need to understand how systems work?
Yes.
I work at an AI startup. Every day, I see what AI tools can and cannot do. Building large software systems still requires deep architectural knowledge: directing fleets of AI agents to build durable, reliable, large-scale systems takes engineers who understand what they're building at a fundamental level.
Nate B. Jones characterized this well: AI agents are excellent at performing tasks, but they are not capable of doing the entire job. A task is "implement this API endpoint." A job is "design a payment processing system that handles millions of transactions daily, degrades gracefully under load, and meets compliance requirements across twelve jurisdictions." Someone needs to hold the entire picture in their head -- the constraints, the tradeoffs, the second-order effects, the failure modes that only manifest at scale.
The hardware memory limitations of AI systems (the "memory wall") remain a hard problem. Hardware is expensive and slow to develop. In a world of complex geopolitical issues and physical resource constraints, fundamentally solving the AI memory wall issue may take a long time. Until then, humans remain the only entities with sufficient capacity to hold an entire complex system in their heads and reason about it critically.
Humans are needed to do the whole job: we still need senior architects. The demand for that expertise is increasing as systems grow more complex and more AI-assisted. Which brings us back to the original question: what's the path to evolve junior engineers into senior architects?
The Paths That Won't Work
When discussing the "what is the new path?" question with friends and colleagues, here are some answers I have heard multiple times:
- "They just need to learn to use the AI tools." This is the most common response I hear, and the most reflexive. Yes, fluency with AI tools is necessary. But it's not the same as understanding systems. Becoming an expert prompt engineer doesn't teach you why a distributed system needs eventual consistency, or when to choose a message queue over a synchronous call, or how a memory leak in one microservice can cascade into a platform-wide outage. And at the rate AI tools are evolving, "prompt engineering" changes every few months anyway. Put differently: using the tools is a skill. But it operates at a different layer than the architectural knowledge we need to cultivate.
- "Just celebrate the speed." There's a temptation to look at a developer closing a hundred tickets in record time and call that progress. Management loves it because the velocity metrics look fantastic. But is that developer building the mental models they'll need in ten years? Is closing a ticket the same as understanding a system? Speed of task completion is not the same as growth of engineering judgment.
- "The old path still works, just faster." Maybe. But I'm skeptical. The old path worked because the struggle was the teacher. You learned memory management by dealing with segfaults. You learned concurrency by debugging race conditions. You learned API design by building APIs that were painful to use and hearing about it from your colleagues. If the AI absorbs most of that struggle, how do you learn?
- "We don't need to hire junior engineers anymore." Some companies are already acting on this. If AI tools can produce what is perceived to be senior-quality output on individual tasks, the economic logic seems straightforward: why staff a cohort of junior developers when a smaller team of experienced engineers with good tooling can close the same tickets? The velocity metrics support it. The (short-term) cost savings are real. But this reasoning ignores the pipeline. The senior architects of today were the junior engineers of ten or fifteen years ago. Deep systems intuition doesn't transfer laterally -- it accumulates over time, from experience. If the industry collectively stops hiring at the junior level, who fills the senior roles in a decade? There will be no pool to promote from. Eliminating the junior tier doesn't reduce the need for senior architects; it simply defers the reckoning to a point when the people capable of doing the whole job are no longer available. In fact, this reduces to an even more problematic path: an industry with no one left who can do the whole job.
I don't think any of these answers are sufficient.
What Might Work
So how do we define a new path? I have some intuitions about where to look.
- The fundamentals still matter. The undergraduate CS curriculum teaches data structures, algorithms, operating systems, networking, databases. These are the building blocks, and I think they're still generally the right building blocks. Could they use revising to account for an AI-augmented world? Probably. Indeed, that's under active debate in academia. But the underlying goal -- teaching engineers critical thinking and how systems work at a fundamental level -- is more important now, not less. If you're going to direct AI agents to build software, you absolutely need to understand the principles those agents are applying. You cannot evaluate what you do not understand.
- Communication skills have become technical skills. Engineers have long had a reputation for weak writing, if for no other reason than engineering curricula historically didn't emphasize it. But in an AI-augmented world, developers will spend significant time directing LLMs, and LLMs require cohesive, logical, precise language to produce good results. Clear communication -- to both humans and machines -- is no longer a soft skill; it's a core engineering competency. Perhaps it's time we require formal technical writing in the curriculum rather than treating it as an elective afterthought. This one is a little different because it targets the first step on the path, but it still feels relevant to the overall answer.
- The progression might need to be restructured. If the old path was "learn fundamentals, then spend years coding to develop intuition," maybe the new path emphasizes fundamentals differently -- as explicit building blocks that are studied, combined, stress-tested, and reasoned about, rather than absorbed implicitly through years of practice. Less "write a linked list implementation" and more "here's a system design -- explain why it will fail under these conditions."
- Mentorship matters more than ever. If the code-writing apprenticeship is compressed or eliminated, the human-to-human transfer of architectural wisdom needs to be more deliberate. Senior engineers may need to spend more time explicitly teaching why, not just reviewing what. In my Capstone class, over half the projects this semester are "mentorship" projects: students work under the tutelage of a senior-level engineer to build something real-ish solely for the sake of learning how to build it. At the project's conclusion, the students have achieved something concrete, but there isn't necessarily an external deliverable or shippable product.
- We might need new kinds of practice. Not "write this code" but "evaluate this system." Not "implement this feature" but "find the flaw in this architecture." Not "make this test pass" but "explain why this test isn't testing what you think it's testing." The destination on the path hasn't changed, but the exercises that develop the skills to reach it probably need adapting.
Where to Go from Here?
The gap between a newly graduated computer science student and a senior architect is real and, despite the rapid advance in AI tools, it is not shrinking.
The old path across that gap -- write code for a decade, learn through repeated failure, slowly accumulate wisdom -- is eroding, not because it was wrong, but because the ground it was built on is shifting. The destination hasn't changed: we still need people who can internalize the whole system, make the judgment calls, and take responsibility for the outcome.
If the path of least resistance is to keep hitting <TAB> and accepting whatever the LLM suggests, we will produce a generation of engineers with less understanding of why complex systems work and fewer critical thinking skills to maintain them. We gray-haired engineers will retire before long, and we need generations behind us who can take over.
I don't have the answers. I'm not certain I've even framed the problem correctly. But I think this topic deserves a serious conversation -- among educators rethinking their curricula, senior engineers reconsidering how they mentor, and junior developers navigating this transition without a clear map.
How would you define the new path? Send me a message on LinkedIn and share your thoughts.