March 2026 · ~7 min read

What to build is still the hard part

I have been thinking lately about how we learn AI. Most of the noise is about tools, prompts, frameworks. Almost none of it is about the question that decides whether any of this was worth it.

[Image: Person presenting at a whiteboard to colleagues seated at a table. Photo by Campaign Creators on Unsplash.]

TL;DR

The toolchain is learnable and getting easier. What does not get easier is deciding what deserves to be built, for whom, and why. That decision is still the hard part, and it is where more learning time should go.

We teach the easy story

Look at how people say they are “learning AI.” Piles of attention on which IDE, which API, which vector database. Prompt templates and jailbreak gossip. A new orchestration library every season. None of that is stupid. You need a toolchain.

I just think that pile is less and less the hard part. Not because it is easy, but because it is learnable the way other engineering is learnable. Docs exist. Courses exist. The path from zero to a working demo keeps getting shorter. What does not shrink the same way is the call behind the demo: why this, who it is for, and what has to be true for it to earn its maintenance cost.

AI speeds up building, not deciding

These models are good at turning intent into artifacts. Code drafts, schema sketches, slide outlines, endless variations until your fan spins. What they do not do on their own, not reliably, is tell you which user pain deserves your next six weeks. They can sound sure about direction. They are not a substitute for reality: messy workflows, office politics, the gap between “annoying” and “expensive.”

If you build systems, you know how to make things coherent: boundaries, interfaces, costs you accept. Aim that skill at the wrong problem and you get something I call "elegant wrong": clean insides hooked up to a fuzzy why.

Fast without a compass hurts

Cheap building means more branches, more half-finished experiments, more "one more feature" energy. Motion shows up in git. Choosing not to build rarely gets celebrated, even though it is often the highest-leverage move on the table.

I am not arguing for standing still. I want a short written bet: the problem, who feels it, what better looks like, what you will actually measure. If you need a model name to write that paragraph, you are probably still in solution theater.

What “worth it” looks like for me

The problems I take seriously tend to be boring on paper. Someone can point to time lost, money lost, or risk carried today, not in a persona slide. The constraint is real: compliance, latency, headcount, whatever actually binds. There is a story for why software, sometimes with AI inside, changes behavior and not just headlines.

That is different from "we could automate this" or "we should add an assistant." Maybe true. Not yet a problem. I am trying to burn more calories there, before the stack debate gets emotional, so that implementation becomes a question of execution quality rather than a late realization that you tuned the wrong workflow.

Where my attention is going

I still care about retrieval, tests, reliability, the unglamorous stuff that makes AI features behave in production. That work is not going away. I am reshuffling learning time, though: less chasing novelty for its own sake, more pressure on whether the thing I want to build should exist, for whom, and with what definition of done.

Tools will improve. Prompts will evolve. Frameworks will rebrand. The question underneath is old: what problem is actually worth solving. That is still the hard part. That is the part I want to be sharper on.