The Myth of the AI PM
Everyone Wants to Be an AI PM
Ever since generative AI took over the tech conversation two years ago, every PM has felt the same itch: I need AI on my resume. That’s not wrong. AI is reshaping how we build and think about products. Leadership teams want AI in the roadmap. Users expect it. Founders pitch it. So PMs scramble to catch up.
But somewhere along the way, we started selling the illusion: become an “AI PM” in six weeks. Take a course. Get certified. Add the magic two letters to your title.
Let me be blunt: that’s bullshit.
The tech is still in its infancy.
Outside of OpenAI, Anthropic, and a handful of niche players, most companies haven’t shipped anything meaningful. Yet we’re already drowning in bootcamps promising to teach AI product management. It’s like reading one book on finance and calling yourself an expert trader.
You can’t fast-track this. Not in a space where the fundamentals are still being rewritten every quarter.
AI Changes How We Build (And That’s the Problem)
Here’s the real shift: AI isn’t just changing what we build—it’s changing how we build.
Old-school product management was deterministic. You wrote detailed specs. Engineers implemented logic. QA found bugs. Eventually, it worked—exactly as you described it.
AI is non-deterministic. You don’t always know what the system will do. It might give you different answers for the same input. You’re dealing with probabilities, not rules. It’s a fog, not a blueprint.
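If that contrast feels abstract, a toy sketch makes it concrete: a deterministic function returns the same output for the same input every time, while anything that samples from a probability distribution (which is what generation at a non-zero temperature does) can return something different on every run. The “model” below is obviously fake; only the sampling behavior is the point.

```python
# Toy illustration of non-determinism: same input, different output,
# because the "model" samples from a probability distribution.
import random

def toy_model(prompt: str) -> str:
    # Pretend these are candidate completions and the model's probabilities.
    candidates = ["Sure, here's a summary.", "Here is a short summary.", "Summary: ..."]
    weights = [0.5, 0.3, 0.2]
    return random.choices(candidates, weights=weights, k=1)[0]

for _ in range(3):
    print(toy_model("Summarize this ticket"))  # may print something different each run
```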
A biotech researcher I spoke with recently tried using AI to generate DNA sequences. It failed. Not because the models were weak, but because AI’s probabilistic nature couldn’t meet the hard determinism needed in scientific research. Same thing applies to your product. If your feature needs repeatable, testable behavior, AI may not be the right tool.
And yet, even if you’re not building an “AI product,” you’re already building with AI. Your team is using it in tooling. Your users expect it. Your boss wants to know how it fits the strategy.
That’s the paradox: you have to be good at AI before you’ve had a chance to learn it. Meanwhile, LinkedIn is full of PMs who list “AI” because they connected a UI to OpenAI’s API and called it innovation.
Real Skills Come From the Field, Not the Classroom
So what do you do when you’re already on an AI project… and still learning what AI actually is?
You turn the project into your classroom.
Forget theoretical frameworks. The best way to learn AI as a PM is by shipping. Are you calling an LLM? Great—what’s your cost per inference? How’s the latency? Are the outputs reliable? What breaks in production?
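To make that concrete, here’s a minimal sketch of what “measure it yourself” can look like, assuming the OpenAI Python SDK; the model name, the per-token prices, and the instrumented_call helper are placeholders I made up, not anyone’s standard. The point is simply to log latency and an estimated token cost on every call so the numbers stop being abstract.

```python
# Minimal per-call instrumentation sketch, assuming the OpenAI Python SDK
# (openai >= 1.0). Prices are made up -- check your provider's rate card.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prices in USD per 1K tokens; replace with real numbers.
PRICE_PER_1K_INPUT = 0.0005
PRICE_PER_1K_OUTPUT = 0.0015

def instrumented_call(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Call the model once and return the answer plus latency and estimated cost."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency_s = time.perf_counter() - start

    usage = response.usage
    cost = (usage.prompt_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (usage.completion_tokens / 1000) * PRICE_PER_1K_OUTPUT

    return {
        "answer": response.choices[0].message.content,
        "latency_s": round(latency_s, 2),
        "estimated_cost_usd": round(cost, 6),
    }

print(instrumented_call("Summarize our onboarding flow in two sentences."))
```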
This is where theory fails and fieldwork matters.
Take prompts, for example. You’ll learn fast that tiny tweaks can break everything. Or that users don’t trust responses unless you signal how confident the system actually is. Or that outputs degrade over time without proper feedback loops. These lessons don’t come from a slide deck.
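One cheap habit that helps: keep a tiny regression suite of prompts and expected behaviors, and rerun it whenever someone “just tweaks the wording.” The sketch below is deliberately crude, with substring checks against a hypothetical call_model wrapper and invented test cases, but even something this small catches silent breakage.

```python
# A deliberately crude prompt regression check. `call_model` is a stand-in for
# whatever wrapper your team uses to hit the LLM; the cases are illustrative.
TEST_CASES = [
    {"input": "Cancel my subscription", "must_contain": "cancel"},
    {"input": "What's your refund policy?", "must_contain": "refund"},
    {"input": "Do you ship to Canada?", "must_contain": "canada"},
]

def run_regression(call_model) -> list[str]:
    """Rerun the fixed cases and report which ones no longer pass."""
    failures = []
    for case in TEST_CASES:
        output = call_model(case["input"]).lower()
        if case["must_contain"] not in output:
            failures.append(f"'{case['input']}' missing '{case['must_contain']}'")
    return failures

# Example: plug in any callable that takes a prompt and returns a string.
# failures = run_regression(my_team_wrapper)
# print(failures or "All prompt checks still pass.")
```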
Understand Cost to Serve—or Die Trying
Here’s one topic no AI PM course will cover in enough detail: cost to serve.
Inference costs are real. And they may hurt your bottom line badly if you’re not careful. You might be shipping a magical feature that kills your gross margin.
Want to add business value? Don’t just build features; cut the cost of serving them.
That’s where understanding things like batching, prompt efficiency, and even model quantization starts to matter. You don’t need to be an engineer—but you do need to talk to one. Prioritize the infra work that cuts cost by 30–40% over shipping one more LLM-powered tooltip.
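As a back-of-envelope illustration (every number below is invented), this is the kind of arithmetic that makes the trade-off legible to a CFO: estimate monthly inference spend from traffic, tokens per request, and price, then see what trimming a bloated prompt actually saves.

```python
# Back-of-envelope cost model. Every number is a placeholder --
# substitute your own traffic, token counts, and provider pricing.
def monthly_cost(requests_per_day, input_tokens, output_tokens,
                 price_in_per_1k, price_out_per_1k, days=30):
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return requests_per_day * days * per_request

baseline = monthly_cost(
    requests_per_day=50_000,
    input_tokens=1_200,   # bloated system prompt + context
    output_tokens=300,
    price_in_per_1k=0.0005,
    price_out_per_1k=0.0015,
)

# Same traffic after trimming the prompt to the essentials.
trimmed = monthly_cost(
    requests_per_day=50_000,
    input_tokens=600,
    output_tokens=300,
    price_in_per_1k=0.0005,
    price_out_per_1k=0.0015,
)

print(f"Baseline: ${baseline:,.0f}/month")
print(f"Trimmed:  ${trimmed:,.0f}/month")
print(f"Savings:  {1 - trimmed / baseline:.0%}")
```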
And yes, some technical courses are worth your time. I personally took three Deep Learning courses from Stanford taught by Andrew Ng and his associates. Dry? Absolutely. Hardcore? 100%. But they’ll help you make decisions your CFO will care about.
No One Has It Figured Out (So Don’t Pretend You Do)
If there’s one thing to remember, it’s this: no one has cracked AI product management yet.
We’re all figuring it out as we go. There are no “10 proven frameworks.” There’s no certification that prepares you for how a model fails in production, or how users freak out when the AI says something weird.
You can’t skip the work. You need to build, break things, ask dumb questions, run tests, and slowly develop intuition. That’s the edge. Not the badge on your LinkedIn.
So stop chasing the title. Start chasing understanding. This is a new craft. Embrace the journey!
What About You?
I’m genuinely curious—where are you in your AI journey as a PM?
Have you worked on an AI feature? Struggled to make sense of the hype? Found a course or project that actually helped? Drop a comment. I’d love to hear what’s worked, what hasn’t, and what challenges you’re facing.
If there’s enough interest, I’ll write a follow-up post diving into your questions and experiences. Let’s learn from each other—without the fluff.
I have integrated AI into my work as a PM for a couple of years now. I find that it significantly expands the scope of my thinking about a problem and lets me dive into areas I would previously have ignored because there was never enough time. While I've had success accelerating speed to market for smaller, self-contained features, the biggest speed bump comes from broken organizational processes, wide variation in people's comfort with AI (which shows up as varying degrees of resistance), and a lack of AI enablement across the product development lifecycle. For larger feature sets, I often find myself in hurry-up-and-wait mode.