What I Learned and Where I Am Going

Part 10 of 10: What I’m Learning Now and What Comes Next

I’ve been working with Claude Code for a while now.

I’ve built a working Pomodoro timer. I’ve developed practices that are validated by research and proven by retrospectives. I’ve learned what works, what doesn’t, and what I’m still figuring out.

But this isn’t the end of the story. It’s barely the beginning.

Because the real questions are just starting to surface.

What I Have Now

Let me be honest about where things stand:

The app: Working. Deployed. Used by real people (though not many). Generating actual feedback. Creating actual value (small, but real).

The process: Documented. Measured. Validated. Reproducible (at least by me). Better than when I started (retrospectives prove it).

The understanding: Deeper. Grounded in experience. Humble about what I don’t know. Curious about what comes next.

What I don’t have: certainty. Who does? Models change weekly, sometimes daily. New features are coming out all the time. I sometimes wish I had the confidence of so many people I see on social media, but more often than not, I am happy to be humble and learn.

I don’t know if this scales beyond solo development. I don’t know if it works for domains I’m not expert in. I don’t know what breaks when you add real team dynamics, real deadlines, real organizational constraints.

But I’m going to find out.

The Experiments I’m Running Now

Experiment 1: Can I teach this?

I have experience as a programmer, manager, architect, professor, and trainer. I have taught hundreds of students a year, built curriculum and delivered it to Fortune 500 companies, and collaborated with incredibly skilled people to positively impact thousands.

So I’m developing curriculum. Not “how to use Claude Code” curriculum. But “how to develop software with agentic AI” curriculum.

I plan to find organizations interested in adopting new practices with an experienced coach. I expect there will be ups and downs, but as my last 30 years of experience tell me, we will make forward progress.

Experiment 2: Can I scale to teams?

I’ve been theorizing about team dynamics for weeks. Time to test those theories.

I’m starting small: pair programming with one other developer, plus Claude. Three-way collaboration.

Although we will build something that provides value, there is another, equally important goal: to stress-test the practices:

  • Can two humans share context with one AI?
  • Do the metrics reveal different issues in team settings?
  • Does code review work differently when multiple people are committing?
  • What breaks first?

I’ll document everything. Including (especially) the failures.

Experiment 3: Can I build in unfamiliar domains?

The Pomodoro timer succeeded partly because I knew the domain cold. I had opinions. I could verify AI output against my own expertise.

What happens when I don’t have that expertise?

I’m planning a second project deliberately outside my comfort zone. Different technology stack. Different problem domain. Different user base.

If the practices only work where I’m already expert, they’re not that useful.

If they help me learn new domains faster… that’s genuinely valuable.

What I’m Still Getting Wrong

Problem 1: Where to focus

These last few weeks have been a whirlwind. I am learning how to write code with Claude. I keep pushing my knowledge of Claude. I am spending time writing and editing to share my experiences. I have prepared presentations for other groups. I am building a product and managing the infrastructure. Soon I will be building curriculum as well.

It is a balance. I still want to have time for family and friends. I still want to do things like go for a run. I don’t want to burn out.

I need to find the right balance. I am still learning.

Problem 2: The context management problem

Two-hour sessions help. But they don’t solve the fundamental issue: AI doesn’t maintain context the way humans do.

I am constantly thinking about how to keep Claude up to speed on how the work should be done. I have 200,000 tokens to explain enough of the project and process to get Claude’s output to align with my expected outcomes.

I run into stale prompts, stale documentation, stale tests, and documentation bloat.

I regularly re-explain project goals, architectural decisions, user needs. Claude “knows” them in the current conversation but doesn’t internalize them.

This works solo. But in teams? When multiple people need to maintain shared context with AI?

The answer, as always, is good communication. Also, as always, that’s a good goal, but not actionable. I need to be Agile, stay optimistic, and try new things. It is frustrating when the new things I try fail, but exhilarating when they work.

Problem 3: The expertise gap

I am very comfortable with code, debugging techniques, testing, architectural strategies, CI/CD, feedback loops, and many other areas.

But there are domains where it is harder to verify AI output because I have less experience: security, accessibility edge cases, browser compatibility issues, UI, and user experience.

How do you use AI in areas where you can’t verify correctness?

Right now: carefully, with expert review. But that’s expensive and slow.

There’s a better answer. This is where teams shine.

What Hiring Managers Should Know

If you’re a technical leader reading this series, here’s what I’d want you to understand:

AI development isn’t faster development. It’s different development.

You’re not hiring developers who code faster. You’re hiring developers who verify faster, review faster, adapt faster.

The skills that matter:

  • Can they write effective verification tests?
  • Can they review AI-generated code for subtle bugs?
  • Can they articulate clear requirements?
  • Can they distinguish good architecture from plausible-looking garbage?
  • Can they learn from metrics and adjust their process?
  • Can they communicate problems and solutions within their teams?

These aren’t the skills we’ve traditionally hired for.

The learning curve is steep and unavoidable.

I’ve been at this for a while with over 20 years of development experience, and I’m still figuring things out.

Your teams will go through similar struggles. No amount of training will skip the valley of despair that comes early on.

Budget for that. Expect productivity to dip before it rises. Create space for experimentation and failure.

Process matters more with AI, not less.

I thought AI would make process less important. “Just ship fast and iterate!”

Wrong.

Good process (retrospectives, review, verification, role clarity) is what makes AI productive instead of just fast.

If your team’s process is weak now, AI will make it worse. Expect working with AI to shine a light on problems that already exist. This mirrors what I have seen when teams adopt Agile: teams hate seeing problems surface, but most of those problems existed all along. Tools like AI and practices like Agile just make them more visible. Accept that our teams were never perfect.

You need new roles.

Someone needs to own:

  • Standards and quick references (what patterns do we want AI to follow?)
  • Retrospectives and learning (are we getting better or just busier?)
  • Context management (how do we share knowledge across AI sessions?)
  • Verification frameworks (how do we know AI output is correct?)

These don’t fit traditional roles cleanly. You might need to create new ones.

What Coaches Should Know

If you’re an Agile coach, trainer, or process consultant:

Your expertise is about to become more valuable.

Teams using AI will struggle with exactly the things Agile addresses:

  • Clear requirements
  • Short feedback loops
  • Continuous improvement
  • Sustainable pace
  • Responding to change

AI makes all of these more important, not less.

But you need to learn the tool.

You can’t coach AI development without using AI yourself.

Not because you need to code. But because you need to understand:

  • Where teams will get stuck
  • What the verification challenges feel like
  • Why retrospectives matter differently with AI
  • Where human judgment is irreplaceable

Get a Claude Code subscription. Build something small. Have fun. Go through the struggle.

Then you can coach from experience, not theory.

The patterns are familiar, but the pace is different.

Everything in Agile happens faster with AI:

  • You can build the wrong thing 10x faster
  • Technical debt accumulates 10x faster
  • Teams can become misaligned 10x faster

But also:

  • You can experiment 10x faster
  • You can course-correct 10x faster
  • You can iterate 10x faster

The principles stay the same. The cadence changes dramatically.

What Comes Next for Me

I’m not done learning. Not even close.

Near term:

  • Test teaching these practices
  • Run team development experiments
  • Build second project in unfamiliar domain
  • Continue refining retrospectives and processes
  • Document everything (including failures)

Medium term:

  • Develop comprehensive curriculum for AI-assisted development
  • Write case studies from student/team experiments
  • Create coaching framework for organizations adopting AI

Long term:

  • Help organizations implement AI development practices
  • Train coaches/teachers on AI-assisted development coaching
  • Build community of practice around evidence-based AI development
  • Keep learning, keep documenting, keep sharing

Why You Should Follow Along

I’m documenting this journey in real-time because I believe we’re in a unique moment.

AI-assisted development is new enough that best practices haven’t ossified. We’re all figuring this out together.

But it’s mature enough that you can start using it productively today—if you learn from others’ mistakes instead of making them all yourself.

I’m sharing my mistakes. My false starts. My metrics that prove what works and what doesn’t.

Not because I’ve figured it all out. Because I’m figuring it out in the open.

If you’re a technical leader trying to understand how AI will affect your teams, following this journey will show you what to expect. The struggles, the breakthroughs, the practices that matter.

If you’re a coach or trainer, you’ll see how Agile principles apply (and sometimes fail) in AI development contexts.

If you’re a developer, you’ll learn from my failures without paying the token bills I paid to discover them.

The App I’m Building (And Why It Matters)

I haven’t talked much about the Pomodoro timer itself in this series. That was intentional.

This series isn’t about the app. It’s about the process of building with AI.

But the app matters because it’s real. It’s deployed. It has users. It’s generating feedback.

It’s not revolutionary. It won’t change the world. It won’t make me rich.

But it proves something important: you can build real, deployable, user-facing software with agentic AI using practices that are documented, measured, and teachable.

That’s the portfolio piece I wanted. And I have it.

If you want to see it, check out https://pomofy.net. I hope to ship releases every week.

If you want to follow the journey: Connect with me on LinkedIn, where I’m documenting this as it happens.

If you want to discuss your own AI development challenges: Reach out. I’m learning as much from others’ experiences as from my own.

The Honest Truth

When I started, I thought I’d discovered a superpower.

Today, I know I’ve discovered something more modest but more valuable: a learnable practice for working productively with agentic AI.

It’s not magic. It’s work. Different work than before, but still work.

It’s not 10x faster. It’s faster for some things, slower for others, dramatically different for everything.

It’s not going to replace developers. It’s going to change what development means.

And we’re all learning together what that change looks like.

I’m grateful you’ve followed along this far.

The journey continues.


*This is part 10 of a 10-part series documenting my journey from Agile coach to agentic AI developer.

About the Author: I’m an Agile coach with 10+ years coaching enterprise teams and a professor who taught 600+ students annually. I have 20 years of experience writing code, architecting, and delivering software. I’m documenting my journey into agentic AI development in real-time—the failures, the breakthroughs, and the practices that actually work. Connect with me on LinkedIn to follow along as the journey continues.


Thank you for reading all 10 parts, and for sharing in my learning.

