While I’m not usually a fan of posts that promote “Learn AI or else” positions, this post by Adam Jacob hit an important point for me.
If you’re thinking to yourself “this 10x increase in capability to create software doesn’t matter, because writing software was never the bottleneck”, you’re drawing the wrong conclusions from a true statement.
Adam Jacob, CEO and Co-Founder at System Initiative (2026)
Let’s assume for a minute you’re right. That today, your bottlenecks are everywhere else. Good ideas. Your ridiculous SDLC. Deployment. Security. Compliance. This is a reasonable, smart person thing to say! Those things *do* slow you down.
But with 10x increase in capability, anyone unshackled by your existing constraints will break through those things like the Kool-Aid Man broke through your TV on a Saturday Morning. New solutions are possible now at every stage. Those who figure it out will gain massive advantage over incumbents.
You will be over there taking comfort in how important all the mechanisms you have in place today were. Meanwhile new answers will get built around you, and you won’t even know it happened. The true incumbents will be fine – they have the money and the market share to simply acquire their way to the new world after the fact. Everyone else? Serious jeopardy.
If you’re an engineer or an operations person, there is only one move. You have to start working in this new way as much as you can. If you can’t do it at work, do it at home. You want to be on the frontier of this change, because the career risk to being a laggard is incredibly high.
We will rebuild everything around this capability. Everything.
The fact is that the hype around AI became real enough that a lot of organizations are looking across their whole SDLC for where they can insert it. Fixing one part of the flow doesn’t help; you just move the problem elsewhere. We need a holistic approach to automation in software delivery. And people will solve that problem whether you want them to or not.
I am WAY BEHIND the folks who are running whole teams of agents. I’m very much using code assist as a way to not have to worry about the syntax of my changes. Still, I am trying to build with code assist every day, on production code, learning what works and what doesn’t. I am using it to evolve a live application with users and customers safely and securely. I use it to debug performance reports or production error logs. I do some code review and testing. I have it in my planning stages to guide my thought process and point out where I might have missed something.
Every single stage uncovers different limitations and eccentricities in the standard tooling that is out there. And those limitations will change.
I’m not sure how this will all shake out:
- The environmental cost.
- The legal copyright questions.
- The privacy regulations.
- The perceived value of software in a world where software creation feels easy.
- The moral and ethical concerns around an automated anthropomorphized servant class.
- The impact to workers and the job market.
- The shrinking pipeline of experienced folks who understand the lower levels of the software.
- The eventual bottoming-out of the hype.
- The impact of the real price of AI when the costs are no longer subsidized by investors and circular markets.
Putting your head in the sand and saying “AI BAD” won’t get us to where we need to be. We need guardrails on these tools. We need smart people advocating for the responsible implementation of these tools. We need ethics to matter in this space. We need regulations. Most of all, we need everybody to push back and make this better for everyone, not just the top-line folks who benefit financially from all of this.
AI isn’t good enough today. But it could be! And we need the right people at the frontier of this.