Having instructed Cursor to add a new feature, I’m staring at the screen. It’s been running for 2 minutes already. It’s on the 23rd iteration now. Right, the new Claude 3.7 just came out. And… it’s done. Eight hundred new lines of code to integrate calendar support across backend and frontend. The world has changed again. Just a regular Wednesday by now.

Embracing the AI Revolution in Software Engineering

AI is reshaping the technology landscape at breakneck speed, and few industries feel the impact more profoundly than software engineering.

In “The Executive-Coder Experiment: Returning to Code with GenAI After 7 Years”, I shared my personal journey using AI tools for development and concluded that “as an engineering executive, I can’t leave GenAI adoption to chance—the impact is too profound.” The true revolution happens when entire engineering organizations integrate AI—a shift far more complex than any single developer’s journey.

From a broader business perspective, engineering costs represent a significant portion of technology company budgets. Organizations that increase engineering productivity through AI will gain competitive advantages in time-to-market and resource efficiency.

AI adoption isn’t optional—it’s necessary for survival. The question is no longer whether AI will play a role, but how to harness it effectively while transforming our entire approach to software development.

This transformation requires deliberate strategy. Our ways of working will be fundamentally impacted, demanding thoughtful guidance at the organizational level.

In this post, I explore how to systematically adopt generative AI in engineering organizations—focusing not just on the productivity gains, but on the organizational challenges we must address.

The Benefits of GenAI

The productivity gains roughly fall into three categories:

Research support. AI functions as an extraordinary research companion. Engineers can explore topics directly through chat interfaces instead of jumping between documentation, articles, and forums. It excels at synthesizing concepts, highlighting tradeoffs, and exploring new technologies.

Bootstrapping. AI shines when quickly bootstrapping new functionality. Setting up projects, scaffolding modules, or implementing new cross-cutting concerns happens in minutes instead of hours. What once took days of configuration and boilerplate can now be accomplished in a single session.

Code generation. LLMs can generate reasonable code with remarkable speed. They can implement complex functionality, suggest refactorings, and write tests that would previously take significant effort.

AI Isn’t a Silver Bullet

Despite impressive capabilities, today’s AI models still fall short in key areas.

Software development encompasses far more than producing code. Engineers also:

  • Clarify product requirements
  • Design architectures
  • Operate software in production
  • Respond to incidents
  • Resolve conflicting stakeholder priorities
  • Maintain codebase health
  • Manage technical risk
  • Collaborate with other engineers

Engineers constantly navigate alignment challenges and tradeoffs between short- and long-term goals. These concerns grow with organization size—barely noticeable in a five-engineer startup but consuming the majority of effort in a 250-person engineering organization.

Current GenAI isn’t adept at handling these nuanced organizational aspects. For successful organizational adoption, we must integrate AI-assisted workflows with traditional engineering concerns.

The Scatterbrained Senior Engineer

Working with AI can feel like pairing with a scatterbrained senior engineer, hyper-productive but inconsistent and flaky. It accommodates whatever you ask, without pushing back on questionable decisions. It doesn’t consider complex tradeoffs. Nor does it bring intent to its work. It just generates output, with wildly varying consistency.

To properly integrate AI, we need to address both its inherent inconsistency and the lack of intentionality.

The Stabilization Challenge

AI produces inconsistent output because it’s essentially a stochastic process sampling probabilities over tokens. This is partly what makes AI so powerful—it can generate diverse solutions to the same problem. But it’s also a challenge.

If we already use a particular library for a task, we don’t want AI introducing alternatives and causing dependency explosion. If we’ve established a UI style, we don’t want it arbitrarily creating different patterns.

We need to stabilize AI—constrain its outputs to align with established standards and practices.

The Intent Gap Problem

AI tackles problems exactly as presented. It readily assumes, hallucinates, or fabricates information it lacks, while overlooking critical aspects not explicitly mentioned.

AI doesn’t prioritize codebase maintainability, future evolution, or separation of concerns. It doesn’t consider that functionality might already exist elsewhere or that capabilities might belong to different systems. It simply does what you ask, without questioning whether more context is needed.

Earlier, I mentioned that engineers constantly face alignment challenges and short- versus long-term tradeoffs. Current AI completely overlooks these considerations.

To prevent codebases from spiraling into AI-powered spaghetti messes, we must actively address this lack of intentionality.

Key Organizational Concerns for AI Adoption

There are six critical areas¹ that engineering organizations must address for successful AI adoption:

1. Structural Intent. Where should a change happen, and under what constraints? We must ensure AI-generated changes don’t harm maintainability and domain cohesion by adding unnecessary dependencies, duplicating functionality, or weakening separation of concerns.

2. Code Reviews. How do we effectively review AI-assisted changes, validate architectural alignment, and identify bugs, security risks, or testing gaps?

3. Project-Specific Context. How do we ensure AI respects our tech stack choices and coding standards? How do we constrain its output to align with our project’s specific requirements?

4. Knowledge Sharing. How do we share best practices for working with AI? As capabilities evolve rapidly, how do we keep the entire engineering organization current?

5. Feedback Loops. We’re all learning how to work with AI—there’s no definitive playbook. How do we quickly identify and address emerging issues? How do we maintain adaptability as the landscape shifts?

6. Tooling Flexibility. AI tools evolve weekly. We can’t commit to a single tool long-term and must be wary of vendor lock-in. How do we maintain flexibility to change tooling as needed?

Implementation Ideas

Here are practical ideas for addressing these concerns:

1. Structural Intent. This is the most critical organization-wide concern. Implement a lightweight process that defines structural intent before AI-supported development begins. For example, prepare Architecture Decision Records (ADRs), ideally with input from the entire team.
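As a sketch of what such a lightweight artifact might look like, here is an illustrative ADR for the calendar feature from the opening anecdote (the service names, numbering, and decisions are hypothetical, not from any real project):

```markdown
# ADR-012: Calendar support lives in the scheduling service

## Status
Accepted

## Context
We are adding calendar support across backend and frontend. Much of the code
will be AI-generated, so we fix the structural boundaries before work begins.

## Decision
- Calendar logic belongs to the existing `scheduling` service; the frontend
  only consumes its API.
- Reuse the date/time utilities we already depend on; no new date libraries.
- External calendar providers are accessed through a single adapter interface.

## Consequences
AI-generated changes that touch other services or introduce new dependencies
require an explicit review discussion before merging.
```

The point isn't the template itself—it's that the team agrees on where a change lives and under what constraints before the AI starts generating code.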

2. Code Reviews. AI-augmented development increases PR volume. To maintain review quality, implement checklists like Definition of Done. AI can help by summarizing changes and scanning for security issues. A useful test: if an AI summary of a code change lacks coherence, the change itself may have underlying issues.
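A review checklist for AI-assisted changes might look like the following sketch (the items are illustrative starting points, not a definitive list):

```markdown
## Definition of Done — AI-assisted changes

- [ ] The change matches an agreed structural intent (e.g. a referenced ADR)
- [ ] No new dependencies were introduced without discussion
- [ ] No functionality was duplicated that already exists elsewhere
- [ ] Tests cover the new behavior, not just the happy path
- [ ] An AI-generated summary of the diff reads as coherent
- [ ] Security-sensitive code paths were reviewed by a human
```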

3. Project-Specific Context. Many AI tools allow you to provide project-specific context. For instance, Cursor Rules let you encode coding standards and tech stack preferences directly into the AI’s context, helping constrain its output and reduce complexity explosion.
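As an illustrative sketch, a project rules file might encode constraints like the following (Cursor reads rules from project files such as `.cursorrules`; the specific conventions below are hypothetical examples, not recommendations):

```markdown
# Project conventions for AI assistance

- Tech stack: TypeScript, React, and PostgreSQL; do not introduce other
  languages or databases.
- HTTP calls go through our existing API client module; never add a new
  HTTP library.
- Use the established UI component library for all new screens; do not
  hand-roll styled components.
- Prefer extending existing modules over creating parallel implementations;
  if functionality may already exist, flag it instead of duplicating it.
```

Rules like these directly address the stabilization challenge: they narrow the space of outputs the model will produce for this particular codebase.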

4. Knowledge Sharing. Create dedicated spaces for AI knowledge sharing. Combine written resources (prompt collections, best practices) with interactive sessions (demos, discussions, dedicated channels). Early hackathons can accelerate adoption by quickly moving from theory to practice.

5. Feedback Loops. Integrate AI adoption discussions into team retrospectives. Establish community-driven support channels for cross-organizational support. Use periodic deep-dives and surveys to maintain an accurate org-wide understanding of adoption progress.

6. Tooling Flexibility. Maintain flexibility to switch between foundation models. Balance standardization (teams using fewer tools) with exploration of emerging alternatives. Consider adopting a single AI-native IDE like Cursor to provide common workflows while allowing engineers to choose different underlying models.

Adoption Challenges to Anticipate

It’s tempting to overengineer AI adoption. Instead, balance thoroughness, pace, and innovation through iteration. Start with minimal guidelines and refine continuously. Trying to implement too many guardrails while the field evolves rapidly will fail. We’re in uncharted waters—maps help, but we must watch the sea.

Beyond changing workflows, internal resistance will likely be a challenge. Two common objections to AI adoption that I hear:

Job security concerns. Some developers resist using AI tools, fearing they’ll be replaced. So far, jobs aren’t disappearing but evolving. Ironically, those who resist may find themselves less relevant as the industry advances. The best job security might come from mastering these new tools.

Capability skepticism. Often, this stems from negative experiences with less capable models early on. Many developers underestimate just how much AI capabilities have advanced recently.

The Path Forward

Embracing AI for developer productivity isn’t optional. We must do so thoughtfully and deliberately. Fast iterations and tight feedback loops will guide our adoption journey as we discover what works.

“Not all those who wander are lost”


  1. This post focuses on engineering-specific adoption, so challenges with procurement, compliance, and the like are outside its scope.
