
The Three Ps: A Practical Way to Deliver AI Projects That Actually Work
Across industries, the same pattern repeats. Companies invest in AI initiatives with genuine ambition. They test tools, run pilots, and showcase impressive demos internally. Early results look promising. Stakeholders are optimistic. And yet, months later, very little has scaled into something durable.
The initiative does not fail dramatically. It simply stalls.
In most cases, the blocker is not the model. The underlying technology is often capable. What breaks down is adoption, commercial clarity, and operational resilience. Workflows do not truly change. The financial upside is loosely defined. Prototypes collapse under real-world complexity.
AI projects rarely fail because of intelligence. They fail because of delivery.
At Intersect, we use a simple leadership lens to avoid this pattern. We call it the Three Ps. It helps us decide what to build, what to stop, and what to scale. More importantly, it forces clarity before investment rather than after disappointment.
In this article, we will explore:
- What the Three Ps are in practical terms
- Why most AI initiatives stall after the pilot stage
- Where the “sweet spot” lies between adoption, value, and operational reliability
- A 15-minute checklist leaders can apply before approving a build
Why Most AI Projects Stall After the Pilot
AI pilots often begin with energy and urgency. A team identifies a workflow to automate, builds a proof of concept, and demonstrates how the system can generate outputs faster or more accurately. The room nods in agreement. The potential seems obvious.
Then momentum fades.
Three failure patterns appear again and again:
- The workflow does not truly change, so teams continue using legacy processes
- Costs are visible and immediate, but value is vague or not measured
- The solution works in a demo but struggles under messy, real conditions
When this happens, it is tempting to blame the technology. In reality, the issue is structural. AI success is not primarily a model problem. It is a delivery problem.
If adoption is not engineered, if commercial logic is not disciplined, and if operational reality is not tested early, even strong technical systems will stall. The Three Ps exist to prevent these exact failure modes.
What the Three Ps Are
The Three Ps are not a slogan. They are a decision framework.
They ask three simple but demanding questions:
- People – Will teams naturally and consistently use this in their daily workflow?
- Profit – Does the financial logic make sense, and is the value measurable?
- Practicality – Will it hold up under real conditions, not just in theory?
When all three are strong at the same time, you reach the sweet spot. That is where human behaviour, commercial value, and operational reliability align.
If one is weak, the initiative becomes fragile.
People: Adoption Is the First Success Metric
A solution only succeeds when people choose to use it.
This sounds obvious, yet adoption is frequently treated as a training issue rather than a design principle. If a system adds friction, interrupts flow, or feels like oversight rather than support, usage will decline regardless of how impressive the underlying model may be.
From a leadership perspective, the People dimension means looking beyond features and asking whether the solution fits naturally into existing workflows. It means ensuring that the value is visible to the person doing the work, not just to the executive approving the budget. It also requires clear ownership after launch. Someone must be responsible for training, iteration, and continuous improvement.
A common People failure looks like this:
- The tool works technically
- It adds additional steps or fields
- It disrupts established routines
- Usage drops after week two
To prevent this, leaders should apply practical checks:
- Identify the primary daily user and the exact moment of use
- Map the “before vs after” workflow in ten steps or fewer
- Define a clear feedback loop for improvements
- Assign ownership for adoption and training
Adoption is not a secondary metric. It is the first sign that the system belongs in the workflow.
Profit: Commercial Logic or It Is a Hobby
AI initiatives must contribute to financial performance. Without measurable impact, they risk becoming well-intentioned experiments.
Profit does not only mean cost reduction. It can take several forms:
- Reduced cost per task
- Faster cycle times that unlock throughput
- Improved accuracy that reduces rework and exception handling
- Revenue enablement, where relevant
A simple financial lens leaders can reuse is:
Monthly Value = (Hours Saved × Fully Loaded Hourly Rate) + Rework Avoided + Throughput Gains
The discipline lies not just in estimating value, but in defining it before the build begins. Leaders should:
- Establish a baseline for current performance
- Define the single metric that will prove value
- Agree upfront how the organisation will capture that value
If the build cost is high and the annual return modest, it should not proceed. AI without commercial logic is a hobby. Sustainable AI requires disciplined economics.
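The financial lens above can be sketched as a small calculation. The numbers below are hypothetical examples, not figures from any real initiative, and the hours-times-hourly-rate reading of "Time Saved × Fully Loaded Cost" is an assumption:

```python
def monthly_value(hours_saved: float,
                  fully_loaded_hourly_rate: float,
                  rework_avoided: float,
                  throughput_gains: float) -> float:
    """Monthly Value = (Hours Saved x Fully Loaded Hourly Rate)
    + Rework Avoided + Throughput Gains."""
    return (hours_saved * fully_loaded_hourly_rate
            + rework_avoided + throughput_gains)

# Hypothetical example: 120 hours saved per month at an $85/hour
# fully loaded rate, $3,000 of rework avoided, $2,000 of extra
# throughput unlocked.
value = monthly_value(120, 85.0, 3000.0, 2000.0)
print(f"Estimated monthly value: ${value:,.0f}")  # → $15,200
```

Comparing this figure against the build and run cost, before the build begins, is what turns the estimate into a go/no-go decision rather than a retrospective justification.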
Practicality: Can It Survive Reality?
Practicality addresses the gap between demonstration and deployment. A solution may perform flawlessly in a controlled environment yet struggle when exposed to operational complexity.
Real conditions usually include:
- Messy or incomplete data
- Exceptions and edge cases
- Tooling constraints and permission limits
- Evolving processes
- Reliability and incident response expectations
A common Practicality failure unfolds predictably. A prototype works in a clean dataset. Once live data flows in, missing fields create friction. Systems time out. Approval steps stall progress. Confidence drops.
To ensure resilience, leaders should demand:
- Clear scope boundaries defining what the system will not do
- Testing, monitoring, and fallback mechanisms
- Auditability of decisions and actions
- Operational ownership for ongoing maintenance
Practicality ensures the system remains dependable when reality intrudes, as it always does.
The Sweet Spot: Where AI Becomes a Dependable Capability
The Three Ps only work in combination.
- People without Profit creates something enjoyable but not worth funding
- Profit without People never scales
- Practicality without the other two becomes technical success with business failure
The sweet spot is reached when teams adopt the system naturally, financial impact is measurable and meaningful, and the solution operates reliably in real-world conditions.
At that point, AI stops being a pilot and becomes a capability.
A 15-Minute Checklist for Leaders Evaluating Any AI Initiative
Before approving any AI initiative, run this quick assessment.
People
- Who uses this daily?
- What specific step does it remove or simplify?
- What changes in the workflow on day one?
- Who owns adoption and training?
Profit
- What is the baseline metric today?
- What improvement is targeted, and by when?
- How will savings or gains be captured structurally?
Practicality
- What are the top five likely failure modes?
- Where are approvals enforced?
- What is the fallback if confidence drops?
If these answers are unclear, pause. Refine the scope. Then proceed.
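One way to operationalise the checklist is as a simple gate: any unanswered question pauses the initiative. This is an illustrative sketch, not part of the framework itself; the question keys and the all-or-nothing pass rule are assumptions:

```python
# The checklist questions, grouped by the Three Ps.
CHECKLIST = {
    "People": [
        "Who uses this daily?",
        "What specific step does it remove or simplify?",
        "What changes in the workflow on day one?",
        "Who owns adoption and training?",
    ],
    "Profit": [
        "What is the baseline metric today?",
        "What improvement is targeted, and by when?",
        "How will savings or gains be captured structurally?",
    ],
    "Practicality": [
        "What are the top five likely failure modes?",
        "Where are approvals enforced?",
        "What is the fallback if confidence drops?",
    ],
}

def assess(answers: dict) -> str:
    """Return 'proceed' only when every checklist question has a
    non-empty answer; otherwise 'pause' (refine the scope first)."""
    questions = [q for group in CHECKLIST.values() for q in group]
    unanswered = [q for q in questions
                  if not answers.get(q, "").strip()]
    return "proceed" if not unanswered else "pause"

# A partial or empty set of answers should pause the initiative.
print(assess({}))  # → pause
```

The point is not the code; it is the discipline it encodes. If the organisation cannot fill in every answer, the scope is not yet ready for investment.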
Closing Thoughts
Most AI failures are not technological failures. They are delivery failures rooted in misalignment between people, profit, and practicality.
The Three Ps provide a disciplined way to decide what to build, what to stop, and what to scale. They transform AI from a sequence of pilots into a reliable, embedded capability.
If you want support applying the Three Ps to your workflows and turning AI ambition into measurable outcomes, Intersect can help guide assessment through to delivery, with governance and adoption built in from day one.
