Jan 14, 2026
People, Not Platforms: How to Successfully Roll Out New AI Tools and Workflows
Rolling out a new AI platform isn’t a technology challenge.
It’s a people and change management challenge.
You can choose the best AI tools on the market, beautifully designed, powerful, and affordable - and still fail if you don’t bring people with you. Resistance, passive avoidance, “this won’t work for us” energy, or quiet non-adoption will kill momentum long before the tech does.
At the end of the day, if people don’t use the tool or follow the process, it’s impossible to realise the benefits across the company.
This guide walks through a practical approach to implementing AI platforms and workflows in a way that:
Builds genuine buy-in
Surfaces real feedback (not just complaints)
Identifies whether the tool actually saves time or changes behaviour
Creates internal advocates, your “walking billboards”
We’ll focus on how to do this using an AI pilot, drawing from real-world experience running Riff pilots inside organisations, but the principles apply to any AI rollout.
Step 1: Start With a Pilot (Not a Big Bang Rollout)
If you want adoption, start small and intentional.
A pilot gives you:
A safe testing environment
Space to refine workflows
Evidence before scaling
Internal credibility
But who you choose for the pilot matters more than how many people you include.
Choose Two Types of People (Yes, Both)
Your pilot group should deliberately include:
Fast adopters
Curious
Tech-comfortable
Likely to explore features
Often informal influencers
People who are “stuck in their ways”
Experienced
Often sceptical
Deeply familiar with current processes
Likely to articulate what won’t work
Why include both?
Because if you only test with enthusiasts, you’ll miss the real blockers.
And if you win over the sceptics, you create your most powerful advocates.
These people become your walking billboards, not because you told them to promote the tool, but because they trust the conclusion.
Step 2: Set Expectations Up Front (This Is Critical)
Before anyone touches the tool, be explicit about why this pilot exists.
You’re not asking:
“Do you like this tool?”
You’re saying:
“We can’t keep doing things the way we always have for these reasons, so we’re assessing whether modern tools genuinely help us work better. Your role in this pilot is to help us work that out.”
That framing matters.
This does two important things:
It removes the illusion that “doing nothing” is an option
It positions participants as assessors, not passive recipients
People are far more open to change when they feel their judgment matters and when the broader direction is non-negotiable.
If possible, show that there are people behind the tech. At Riff, for example, we love to meet the first pilot group in person. They meet our founders, engineers, and success managers, and they get a sense that we care about their feedback, which often lifts its quality.
Step 3: Don’t Ask for Feedback Casually (It Backfires)
Here’s a common mistake:
A manager sends a Teams message: “How’s the new tool going?”
Or books a 1:1 and asks: “Any feedback?”
What happens next?
You get a complaint session.
That doesn’t mean the feedback is wrong, but it often blurs the line between:
Legitimate usability issues
Natural discomfort with change
General venting about workload
People often resist change by default. Expect that. Design around it by following Step 4.
Step 4: Create a Structured Feedback Process (And Tell Them the Questions)
Instead of ad hoc feedback, treat the pilot like a formal assessment team.
How to Do This Well
Set a fortnightly pilot meeting
Make attendance purposeful, not optional
Tell participants in advance:
What you’re assessing
How value will be measured
The specific questions they’ll be asked
This shifts feedback from emotional reaction to thoughtful evaluation.
During a Riff pilot, we recommend explicitly telling participants:
You’re part of a team assessing whether this tool saves time, improves clarity, and helps ideas move forward more efficiently.
Then share the exact questions upfront.
Step 5: Ask the Right Questions (Benefits vs Problems)
Here are example pilot questions that work because they anchor feedback in real work, not opinion.
Core Pilot Questions
Time & Value
If your manager asked for a written justification or short business case for spending money or approving an idea, do you think Riff saves you time preparing that?
If yes: how much time?
If no: how would you do this without Riff?
Ease of Use
Do you find it easy to use? Why or why not?
Type of Confusion
If you felt confused, was it about:
How to use the tool?
When or why you’d use it?
What problem it’s meant to solve?
Workflow Impact
Did this reduce back-and-forth, rework, or unclear expectations?
Adoption Likelihood
Would you choose to use this again without being asked? Why or why not?
Improvement Suggestions
What would need to change for this to be genuinely useful in your day-to-day work?
These questions help you distinguish between:
“This tool doesn’t work”
“I don’t like changing how I work”
“This needs refinement”
That distinction is key.
Step 6: Anticipate Pushback And Address It Head-On
Don’t pretend learning a new system is fun.
Say the quiet part out loud.
What Good Change Communication Sounds Like
We know learning a new system can be annoying.
That’s exactly why we’re piloting a platform with features designed to make things easier, not harder. Obviously the people using it should be the judge of that, which is why you’re here.
Then connect the tool to real pain points.
Example: How to Frame a Riff Pilot
We’re trialling Riff because it offers things we believe should reduce friction:
Easy to use on mobile for people on site
Voice input if you don’t like typing
Simple screens so it’s not overwhelming
We also know there’s nothing more frustrating than not knowing what process is required to get something approved.
So we’re testing a simple rule:
If you need to spend money or propose an idea, you use Riff to write a short, clear justification.
It helps you understand who needs to approve it and how much detail is required.
Your role in this pilot is to help us confirm whether this genuinely makes it easier to get things done or not.
This reframes the tool as:
A process simplifier
A time-saver
A way to reduce organisational friction
Not “another system.”
Step 7: Run a Separate Pilot for Approvers
AI tools don’t just affect doers; they affect decision-makers too.
Approvers care about:
Governance
Risk
Capital allocation
Quality of information
They should be assessed separately.
Questions for Approvers
Decision Quality
Does this improve the clarity and quality of requests you receive?
Time to Decision
Does it reduce the time it takes to understand, approve, or push back on an idea?
Consistency
Are justifications more consistent compared to before?
Governance
Does this make it easier to meet governance and documentation expectations?
Confidence
Do you feel more confident saying yes or no based on the information provided?
Approver buy-in is often what determines whether a tool scales.
Conclusion: AI Adoption Is a Social System, Not a Software Install
Successful AI rollout isn’t about forcing adoption.
It’s about:
Respecting human resistance
Designing structured evaluation
Making value measurable
Giving people agency without pretending change is optional
Saying the quiet parts out loud so people feel heard
When you:
Start with a pilot
Choose the right people
Set expectations clearly
Ask better questions
Separate feedback from venting
You don’t just implement a tool, you build trust.
And that’s what turns AI from an experiment into part of how work actually gets done.
If you get the people side right, the platform has a real chance to succeed.