AI, Locastic, Agile · Apr 13, 2026 · 8 min read

The 10 Levels of AI Adoption for Project Managers

Most conversations about AI agents in software focus on developers. How they write code with AI, how much they trust it, how their workflow changes as they delegate more. But PMs are adopting AI agents too, and the progression looks a bit different. A PM delegates many things simultaneously: documents, calculations, system actions, communication drafts, planning. So the progression is messier, more uneven, and in some ways, more interesting.

I’ve been experimenting with AI agents in my PM workflow for a while now, building custom skills, connecting tools, automating entire pipelines. Based on that experience, and from watching other PMs at various stages of adoption, here’s what I think the levels of adoption look like for our role.

Level 1: No AI

Every spec, estimate, ticket, and email is handcrafted from scratch. The blank page is your starting point for everything. Not long ago, this was just how the job worked.

Level 2: AI as a better search engine

You ask ChatGPT or Claude questions. “What’s a good format for acceptance criteria?” “How should I structure a sprint retrospective?” You’re treating AI the way you used to treat Google, but with answers that actually understand your context. Nothing comes out of it except knowledge. You’re still doing all the work, you just know more before you start.

Level 3: Copy-paste drafting

You paste meeting notes into a chat and get a summary back. You describe a feature and get a spec outline. You dump a client brief and get a proposal skeleton. Right now, the clipboard is your integration layer and you rewrite 60-70% of what comes back, but it beats the blank page. You'd never send the raw output to a client; it doesn't sound like you, and it misses the nuance. But it gets the structure right.

Level 4: The ghost PM

This is where things start to shift. You've refined your prompts and maybe you've written custom instructions that define your voice, your templates, your quality standards. The output comes back and it sounds like you. Internal docs and status updates go out nearly unchanged. The high-stakes stuff (client-facing specs, tricky scope communications) still gets heavy attention, but you notice you're spending more time reviewing than creating. And this is where review fatigue becomes a real risk, because when 8 out of 10 sections are perfect, you start skimming the other 2.

Level 5: AI enters your workspace

This is the big leap: AI connects to Jira, reads your Google Sheets, accesses Drive files, checks the calendar. Instead of you copy-pasting project context into a chat, the AI reads it directly from your systems. "What's overdue on this project?" doesn't require an export; the AI queries Jira. "Update the estimate sheet" modifies your actual spreadsheet. AI stops being a text generator and becomes a tool operator working in your real environment. You start giving it access to things and your trust is visibly climbing.

Level 6: Pipeline workflows with quality gates

Spec to estimation breakdown to Jira tickets isn't three separate conversations anymore; it's one flow. You've built skills that chain together, and each has built-in self-evaluation. The AI checks its own output against your criteria before showing you the result. A generator writes it, an evaluator grades it, and only passing output reaches you. Note: the AI prepares the breakdown (the structure that makes estimation possible), but the actual hours still come from you and the dev team. That boundary matters, and it doesn't move just because the tooling gets better. Your job shifts from producing PM deliverables to defining what good looks like and reviewing results against that bar. You start measuring your impact not in documents written, but in how well you've encoded your judgment into the system. This is also where the paradox hits: a spec that took 2 days now takes 2 hours, and you're not sure what that means for pricing, for proposals, for how you talk about timelines with clients.
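The generator/evaluator loop above can be sketched in a few lines. This is a minimal illustration, not a real pipeline: `generate` and `evaluate` stand in for whatever model calls your tooling makes, and the threshold and retry count are invented numbers.

```python
# Minimal sketch of a quality gate: a generator drafts, an evaluator scores
# against your criteria, and only passing output ever reaches the human.
from typing import Callable, Optional


def quality_gate(
    generate: Callable[[str], str],
    evaluate: Callable[[str], float],  # 0.0-1.0 score against your criteria
    brief: str,
    threshold: float = 0.8,            # illustrative bar, not a real rubric
    max_attempts: int = 3,
) -> Optional[str]:
    """Retry until a draft clears the bar; escalate (return None) if none does."""
    for _ in range(max_attempts):
        draft = generate(brief)
        if evaluate(draft) >= threshold:
            return draft  # passes: show it to the PM
    return None           # pipeline couldn't meet the bar; a human steps in


# Toy stand-ins so the loop runs; a real pipeline would call an LLM here.
draft = quality_gate(
    generate=lambda brief: f"Spec draft for: {brief}",
    evaluate=lambda text: 1.0 if "Spec draft" in text else 0.0,
    brief="checkout flow",
)
```

What matters is the shape: your judgment lives in `evaluate`, and the retry loop means you only ever review output that already cleared your own bar.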

Level 7: Managing an AI team

Now you’re running multiple workflows in parallel across projects. One agent writes a spec section while another builds the estimation breakdown while a third creates Jira tickets from last week’s approved spec. You’re multitasking between AI workflows the way you used to multitask between projects. It feels like managing a team of very fast, very literal junior PMs who never get tired but also never “read the room.” You spend real energy on coordination, making sure what one agent produces is consistent with what another assumed. You start to feel the throughput ceiling: the bottleneck isn’t production anymore, it’s your cognitive bandwidth to review and make judgment calls. You’re more productive than ever, but also more exhausted in a new way.

Level 8: Designing the PM operating system

You’ve moved up a level of abstraction. You’re not using AI tools, you’re designing a system of tools, rules, skills, evaluation criteria, and quality gates that encodes your PM methodology. The AI doesn’t just help with the work; it works the way you would, because you’ve taught it your approach through examples, references, and progressive skill architectures. You spend your actual time on what only a human PM can do: the client call where you have to read between the lines, the scope decision where the spreadsheet says one thing but your gut says another. At this point, what makes you valuable isn’t that you’re fast at PM work, it’s that you’ve built a system that makes PM work fast, and you know how to keep it running. You start thinking about how to teach other PMs to operate at this level.

Level 9: Role-blending (Frontier)

You start doing work that isn’t PM work, and nobody told you to. You need an internal estimation tool, so you build it. Not “write a brief for the dev team to build it”: you build the web app yourself, with AI handling the code. You need a quick prototype to show a client what you mean? You scaffold it yourself. You want to verify the dev team’s implementation matches the spec? You write test scenarios and run them against the staging environment. The AI provides domain expertise you don’t have (frontend frameworks, testing patterns, data visualization), and your PM judgment provides the what and why. You start to notice that the line between “PM task” and “Dev task” and “QA task” is getting thinner.
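A PM-written test scenario against staging might look something like this. Everything here is invented for illustration: the staging URL, the endpoint, the payload shape, and the discount rule. The point is that the scenario encodes the spec as the PM understands it, and the check compares what staging actually returns against that understanding.

```python
# Hedged sketch of a PM-written acceptance check against staging.
# The host, endpoint, payload, and "SPRING10" rule are hypothetical.
import json
import urllib.request

STAGING = "https://staging.example.com"  # placeholder staging host


def expected_total(cart_total: float, code: str) -> float:
    """The spec in code: SPRING10 takes 10% off orders over 50."""
    if code == "SPRING10" and cart_total > 50:
        return round(cart_total * 0.9, 2)
    return cart_total


def check_discount(cart_total: float, code: str) -> bool:
    """Run one scenario against staging; True means implementation matches spec."""
    body = json.dumps({"total": cart_total, "code": code}).encode()
    req = urllib.request.Request(
        f"{STAGING}/api/cart/apply-discount",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return result["total"] == expected_total(cart_total, code)
```

Notice that `expected_total` is the valuable part: it’s your reading of the spec made executable, which is exactly the judgment a PM brings that the AI-generated plumbing around it doesn’t.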

Level 10: The one-person product team (Theoretical)

You can take something from concept to deployed, tested, and measured, alone. Not prototypes, not internal tools, production code for real clients. You orchestrate AI agents across design, development, testing, and deployment while providing the kind of product thinking that usually takes a whole team to get right. The agents handle domain-specific execution; you handle scope, priorities, quality standards, and client relationships. You don’t file requests and wait for capacity; you ship. And at some point you stop thinking in terms of roles and start thinking about outcomes.

A note on Levels 9 and 10

Level 9 is happening now. PMs are building internal tools, scaffolding prototypes, writing test scenarios with AI handling the parts they don’t have expertise in. It works because the stakes are manageable, the user base is small, and “good enough” is good enough. This is the frontier, and it’s already achievable.

Level 10 is different. At the time of writing, it’s genuinely questionable whether production-quality code can be reliably delivered by non-technical people using AI agents. Everything we know right now points in the opposite direction: the biggest productivity gains from AI coding agents go to senior, experienced developers, not to newcomers. The people who benefit most are those who already know what good code looks like, can catch subtle bugs, understand architecture trade-offs, and can course-correct when the agent drifts.

A PM shipping production client code through agents? The review problem from Level 4 applies tenfold: if you can’t recognize a bad architectural decision in generated code, you can’t catch it before it costs the client weeks of rework down the line.

So Level 10 describes a trajectory, not a current reality. The tools are heading there. Whether they arrive, and what “production quality” means when they do, is still an open question. For now, the honest version is: a PM at Level 9 can build things that work, but not yet things that scale, survive edge cases, and pass a senior developer’s code review without one being involved.

What this means for how we define PM seniority

I built a PM Seniority Matrix a while back that defines what’s expected at each level, from Trainee to Lead, across eight competency categories. Looking at it now through the lens of AI adoption, it became clear that the matrix needed a ninth column: AI Proficiency.

The idea is straightforward. At each seniority level, there’s a different expectation for how you use AI agents in your PM work:

  • Trainee uses AI as a knowledge tool, asking questions and learning concepts. No AI-generated content goes out without mentor review.
  • Junior drafts and structures documents with AI, but rewrites and adapts the output. Builds personal prompt patterns that start producing consistent results.
  • Mid has custom instructions and templates that match Locastic quality standards. Connects AI to workspace tools like Jira, Sheets, and Drive. Builds reusable workflows and can explain them to less experienced team members.
  • Senior designs and maintains AI workflows for the whole team. Encodes PM methodology into skills, evaluation criteria, and quality gates. Mentors other PMs on building their own workflows, not just using pre-built ones.
  • Lead sets AI adoption standards for the department. Decides which workflows to automate and which need human judgment at every step. Makes sure the team develops critical thinking and review skills alongside AI proficiency.

Each level also has a 4-step roadmap, the same way the other competencies do. A Junior’s STEP 1 is copy-pasting context and rewriting most of the output. By STEP 4, they’re critically assessing AI output against project requirements and catching most issues before review. A Mid’s STEP 1 is writing custom instructions. By STEP 4, they have quality gates where AI evaluates its own output against defined criteria.

What I like about adding this to the matrix rather than treating it as a separate thing is that it makes AI proficiency part of the job, not an optional extra. It sits right next to Communication, Planning, and Problem-solving because that’s where it belongs. The progression from “uses AI to ask questions” to “designs AI systems for the department” mirrors the same trajectory as every other competency: from following instructions, to working independently, to building systems for others.

I’m aware that AI tools are evolving fast enough that parts of this will probably look different in six months. Some levels might merge, new ones might appear, and the expectations at each seniority tier will shift as the tools mature. But that’s fine, updating the matrix as things change is part of the job. A living framework that evolves with the tools is more useful than a perfect one that’s out of date.

If you’re building something similar for your team, the matrix is public. I’d be curious to hear how you’re thinking about it, and whether 10 levels is too many, too few, or just right. Feel free to reach out or leave a comment.

