I Need to Stop Being Amazed
As I mentioned in my prior post, I'm a gray-haired engineer. I've been working on enterprise-grade software for decades: I'm as old-skool as they come.
"Design for robustness!", "Code for stability!", and "Layer functionality deliberately!" — these were our rallying cries.
And they're still correct.
But all of these rallying cries had a subtext: moving carefully meant moving slowly. Delivering must-always-work software to Fortune 500 customers meant agonizing over every line of code and trying to imagine every possible way it could fail. Then you wrote additional code to handle those failures. Then you tested, tested, tested, and tested some more. When tests failed, you pored through the code to find the error.
Rinse, repeat.
This is not another "AI is awesome!" blog post. The part that actually interests me isn't what AI can do. It's why it's so hard to get other experienced engineers to genuinely change how they work. This turns out to be a lot harder than any of the technology.
Where I Started
Last fall, I was using AI the way most of my colleagues were: as a very capable assistant for individual tasks. Help me write this function. Explain what this code is doing. What's the best way to use these libraries together?
And AI was genuinely, demonstrably helpful. But I was essentially using a sports car to drive to the grocery store; I just didn't know it yet.
When I joined Actualyze AI, some colleagues had started a few weeks before me and had already gone far deeper — exploring approaches like spec-driven development and telling me frankly astounding stories about what the tools enabled.
So I started trying some of these approaches myself.
Two Features in Two Weeks
In my last two weeks at my prior job, I prepared for and delivered nearly 20 hours of transfer-of-information meetings — documenting all the intricate details of everything I'd built over the years for my team. Simultaneously, I delivered two brand-new, essentially-coded-from-scratch features that had been in our backlog for over a year. Customer requests we'd never had time to get to.
All of it within the span of two weeks, along with all my other outprocessing logistics!
That was my first "wow" moment. Going whole-hog on AI had enabled me to do massively large things — while still maintaining enterprise-grade quality — in a fraction of the expected time. I was still architecting the results, reviewing everything the AI produced, and testing rigorously. I hadn't handed the wheel to a robot. But the robot had dramatically extended what I could do with it.
I need to tell myself to stop being amazed.
Every Week, Something New
After I went fully AI-first at Actualyze AI, a pattern emerged: every week, something new genuinely astonishes me. A small sample:
"Has this bug been reported before?" During testing, I found a new defect. Before filing it, I asked an AI agent to search our GitHub issue tracker — hundreds of open issues — for anything similar, related, or possibly adjacent to this behavior. Not just keyword matching, but semantic analysis across the full issue history. The agent came back with a comprehensive summary in minutes. That used to be the kind of task nobody actually did properly because it took too long.
"What does my epic touch?" I was working on a large feature epic and wanted to understand the surface area — where my work would interface with other ongoing, completed, and planned epics. I asked an AI agent to analyze all open and closed epic tickets and map the relationships and overlaps. It surfaced dependencies I hadn't considered and identified potential conflicts with three other in-flight efforts. That kind of cross-ticket analysis, done manually, takes days, if it's done at all.
"There might be a race condition in here." A section of our codebase — tens of thousands of lines — was failing intermittently in ways that intuitively felt like a concurrency issue. I gave an AI agent the entire section of code and a batch of runtime logs from both successful and failed runs, and asked it to find the problem. It did. Then it fixed it.
"Make this new code fit the existing patterns." When designing a new feature, I asked an AI agent to analyze the existing code where the feature would be integrated — the existing abstractions, conventions, and infrastructure — and then cross-reference that against our related epics, use cases, and acceptance criteria. The goal was to ensure the new design would be consistent with the rest of the system before we wrote a single line of implementation. The agent synthesized all of that into a design brief that I could react to and refine.
Every time something like this happens, I feel the same thing: a quiet internal voice saying that's remarkable, but please stop being amazed; this is just Tuesday now.
A Friend's Religious Experience
I have a fellow gray-haired friend who had been largely ignoring AI, for reasons that are entirely rational:
- He was (and still is) busy trying to get his actual job done. When you have crushing deadlines, "learning new tools" isn't a luxury you get to budget for.
- He's at the top of his field. He didn't see the value in a glorified search engine when he already knows the answers.
- More to the point: he's such an advanced engineer that he didn't need help writing functions. He already knew the shape of the code he wanted to write; he just had to transcribe it. Tab-completion wasn't his bottleneck.
When I encouraged him to go deeper, he eventually tried Codex.
A few days later, he had what he called a "religious experience."
In six hours, he built infrastructure he'd been wanting for years: logging and analytics embedded deep in firmware, exposed up through multiple stack layers into a modern (responsive!) web dashboard with detailed reports and aggregated statistics.
Six hours == years of wanted capability. He's fully on board now.
I completely understand the feeling.
The Harder Problem
My friend's experience and my own are data points, not a strategy.
The transformation AI enables isn't primarily a technology story — it's a people story. And getting experienced engineers to genuinely change how they work is much harder than it looks.
At my last job, we were heavily encouraged to use AI. Leadership was enthusiastic. But we received almost no direction on how to use it beyond the basics. And as a result, most engineers kept using it the way we always had: "help me write this function," "explain this code," etc. Useful, but not transformative.
Two interventions are commonly attempted, and neither tends to work as intended:
The demo. Manufactured examples of "here's AI doing something remarkable" are genuinely impressive. But when an already-busy, crushing-deadlines senior engineer sees that demo, the typical reaction is: That's cool! OK, back to work. The demo doesn't show them their work, their codebase, their specific problems. The mental translation from "impressive demo" to "this changes how I work" doesn't happen automatically.
The mandate. "You must use AI, or find another job." I've heard from friends at companies — large and small — where management is pushing AI adoption without accompanying guidance. When a developer is told to use AI or else (sometimes even by measuring individual developer token usage), they'll rationally use it in minimal ways to satisfy the requirement. Nobody gets what they want. Mandates produce compliance, not transformation.
What I Think Actually Works
The engineers who've had genuine breakthroughs — my friend, me, my colleagues here — have one thing in common: we each had a personal proving moment. Not a demo someone else showed us. Not a requirement from above. A moment where we gave AI a meaningfully large task and watched it do something we absolutely could have done ourselves, but only in far more time.
I'm focusing on mid- and senior-level engineers here — my earlier post covered the different challenges facing junior engineers. For experienced engineers, the shift isn't about learning to code with AI — they already know how to code. It's about learning to define and direct AI at increasingly large problems: whole subsystems, entire codebases, months of backlogged work. And staying in, or growing into, the architect's seat: evaluating what AI produces, maintaining quality gates, testing rigorously. Results that look fast but aren't durable aren't a win.
That shift doesn't happen by watching demos. It happens when someone you trust sits down next to you, points at your actual open tickets, and says: let me show you what this can do with your specific problem. Then you see it yourself. Then you try it.
And then, if you're anything like me or my friend, you have your own moment of amazement that you need to consciously suppress.
This is hard to scale. The engineers we need to reach — the ones with decades of architectural intuition and the highest leverage in any engineering organization — are also the ones most capable of evaluating what they see. They're not going to change how they've worked for twenty years because of a slide deck.
They need to see it work. In their codebase. On their problems. With their own hands on the keyboard. The bottleneck isn't the technology. It's finding the time and the champions to do this, person by person, team by team.
That's the real frontier, as far as I can tell. How would you approach it? Send me a message on LinkedIn and share your thoughts.