
The AI Change Agent Is Not Who You Think
The employees driving the most meaningful AI adoption don't share a job title, a department, or even a comfort level with technology. What they share is a cluster of three traits.

Phil Hamstra has been tracking every Gemini prompt across Meeting Tomorrow since last April. Eighty-six employees. A dashboard that shows who's logging a thousand prompts a month and who's logging two.
The company's number-one adopters? The inbound sales team. They started with the Gmail integration to clean up their writing, then moved into Gemini proper, built custom Gems, and shared them across the group. Nobody mandated any of it.
One of the company's sharpest technical minds, a developer whose professional standard is, in Phil's word, "perfection," barely touched it. AI could write code, but it didn't write code better than he did, didn't write it the way he wanted to, and he couldn't put his stamp on it.
Phil has been at Meeting Tomorrow for over twenty years. He's held nearly every role at the company, and his current title, Director of Systems Technology and Strategy, puts him at the intersection of IT, product innovation, and the CEO's strategic agenda. When the company decided to go all-in on advancing its AI maturity, they did what most companies do: promoted the tools, ran demos, hired a trainer, sent tips-and-tricks emails. Classic rollout.
Then he watched the adoption dashboards. And the people going exponential were not who anyone would have predicted.
The employees driving the most meaningful AI adoption at Meeting Tomorrow, and across the organizations we work with, don't share a job title, a department, or even a comfort level with technology. What they share is a cluster of three traits:
A specific problem that's eating them alive. Not a vague curiosity about AI. Not compliance with a mandate. A real pain point they were already losing sleep over. Meeting Tomorrow's inbound sales team didn't pick up Gemini because someone told them to. They picked it up because they were drowning in prospect research and AI turned hours into minutes.
Self-directed learning instincts. These people don't wait for the training session. They try the tool on a Saturday, break it, fix it, and show up Monday with a working workflow. Phil called them "the best autodidacts" and noted that their impatience is the feature, not the bug. They skip the permission step entirely.
A motor that doesn't stop. Once they solve something, they keep going. They test adjacent use cases. They optimize. And critically, they tell people about it, whether anyone asked them to or not.
This profile cuts across every level and function. The salesperson building Gems. The technical producer rethinking event design. The COO who spent an entire weekend A/B testing Gemini against ChatGPT until she understood exactly when to use which tool.
What united them was never their role. It was their relationship to problems.
Here's what Phil surfaced that most AI adoption frameworks miss: the traits that make someone a great AI explorer are not the same traits that make someone a great AI diffuser. And organizations need both.
The Explorer is the puzzle-solver. They get a deep neurochemical reward from cracking hard problems. Phil compared it to finishing the Saturday New York Times crossword without cheating. These people will stay up until 2 a.m. in a flow state, not because anyone asked them to, but because the problem is right there and they can't leave it unsolved. They go deep. They build sophisticated workflows. They push the tools to their limits.
But they don't always share what they've built. Not because they're hoarding. They're just already on to the next puzzle. As Phil put it: "There are people who are autodidacts and they've got that drive, but they don't like talking. It's not that they're not willing to share. It's just not what they do."
The Evangelist runs on a different fuel. Their satisfaction comes from watching someone else's problem disappear. They see a colleague struggling with a manual process and they can't help themselves. They jump in, build a solution, walk the person through it, and make sure the whole team knows about it. Phil described these people as motivated not by the puzzle itself but by the belief that "if they can solve it, they can help that person."
The Evangelist is the one who creates the reinforcement loop that spreads adoption beyond the early adopters. They translate what the Explorers build into language and use cases the rest of the organization can actually grab onto.
Most companies make one of two mistakes here. They try to turn their Explorers into trainers. Phil tried this and quickly realized it was a mismatch. "They shouldn't be trainers," he said. "We just need them to tell us what's going on." Or they only invest in broad training programs that neither Explorers nor Evangelists need, while the people who do need support get a one-size-fits-all experience that doesn't stick.
The real insight is recognizing these as complementary roles and supporting each one differently.
There's a tempting conclusion to draw from all of this: change agents are a personality type. You either have the wiring or you don't. Find the naturals, resource them, and let everyone else follow.
Phil pushed back on that framing. "I don't think it's that you lack the growth mindset," he said. "I think it's just that you need the instructions. But once you have the instructions, you're done and you're going to be able to see other ways to do it."
This is one of the most important and underappreciated dynamics in AI adoption. The natural Explorers and Evangelists exist in every organization. They're already moving before anyone gives them permission. But there's a much larger group that has the drive, knows their problems, and would adopt if someone met them where they are. They aren't resistant. They're unsupported.
Phil saw this firsthand when Meeting Tomorrow launched a volunteer AI development group. Some participants, the natural autodidacts, took off immediately, becoming Claude Code-level developers who built tools that solved real operational problems. But the rest of the group, people who showed up voluntarily, who clearly had interest and motivation, needed far more hands-on guidance than the team had resourced. "We knew how to support the people who were going to go exponential," Phil said. "I'm still working on how you give everyone the tools and chance to get to the next phase."
The distinction matters for how you invest. If you only resource the naturals, you get pockets of brilliance inside silos. If you also build the scaffolding (guided workflows, one-on-one problem mapping, protected time to practice) you create the conditions for a second and third wave of change agents who are every bit as effective, even if they didn't start on their own.
Leaders see the gap between their power users and everyone else, and they diagnose a skills deficit. More training. Better onboarding. A lunch-and-learn series.
Phil tried all of it. The broad training was, by his own assessment, ineffective. He ran a high-energy demo for managers showing how to build a complete business plan using AI in under an hour. Their feedback was blunt: "I wish you would have just shown us one piece and how to actually do it."
The skills diagnosis isn't wrong. It's incomplete.
What we see across organizations is that people stall for three distinct reasons, and each one requires a different intervention:
They never hit the first win. The employees who tried Gemini twice and stopped simply never experienced the moment where AI solved a real problem for them. Their core work didn't present an obvious overlap with what the tools could do. Meeting Tomorrow's creative team, for instance, produces stunning work for Fortune 100 events. AI didn't make their output better. It made it generic. One underwhelming experience was enough to close the door. These people don't need more training. They need someone to sit with them, understand their actual workflow, and find the one friction point where AI genuinely helps.
They hit a wall on prompting. We see this constantly in client engagements. One leader told us she'd tried ChatGPT early on, had a mediocre experience, and never went back. One shot on goal. Shot missed. Never took another one. Meanwhile, the technology has leaped forward, but she doesn't know that, because her single data point told her it wasn't useful. Tim Sanders, the CIO of G2, put it sharply at a recent Harvard Digital Data Design Institute session: the new 10,000 hours is 10,000 prompts. You need enough reps to discover where AI's capabilities overlap with your actual work. Most people never get past the first few dozen.
AI threatens their professional identity. This is the one nobody talks about, and it's the most powerful blocker. When AI starts performing tasks that define how someone sees themselves professionally, the brain doesn't register it as a productivity tool. It registers it as status erosion.
Phil described this precisely with his NetSuite developer. The developer writes code. AI writes code. But in his primary language and environment, the developer wrote it better, with contextual judgment that AI couldn't match. His professional stamp was perfection, and AI didn't improve perfection. It was only when he had to develop in an unfamiliar language, one where he had no existing identity to protect, that he adopted AI immediately and without hesitation.
This pattern is consistent across every organization we've worked with. The person whose identity is "I write compelling client proposals" doesn't experience AI-assisted drafting as empowerment. They experience it as encroachment on the thing that makes them valuable. This isn't irrational. It's deeply human. And it's why technical training alone fails for these individuals. You're trying to solve an identity problem with a skills intervention.
The people who break through have reframed their professional self-concept from task executor to strategic orchestrator. Instead of "I write reports," they think "I ensure clients get insights that drive decisions." The drafting is a means, not the identity.
Phil's most effective adoption moments weren't the company-wide trainings or the tips-and-tricks emails. They were one-on-one conversations.
"I'd go to them and say, what's your biggest problem? What are you wasting time on? Cool. I'm going to go spin up a demo and show you tomorrow." And then, he said, "immediately they're like: I see. I see what we're doing now."
Across our work, the interventions that actually move people from stalled to self-sustaining share a few things in common.
Start with the person's problem, not the tool's capabilities. The worst AI training opens with "look at everything this tool can do." The best opens with "tell me what's eating your afternoon." Phil's one-on-one approach worked because he started with the workflow, not the technology.
Deliver guided first wins, not open-ended exploration. The Explorers thrive on self-direction. Most people need someone to walk them through their first successful workflow using their actual data, their actual bottleneck. Not a hypothetical exercise. One real win is worth a hundred demos.
Build diffusion into the operating rhythm. Meeting Tomorrow's COO embedded ten minutes into every team meeting for someone to share a technology win. Not formal training, just a standing invitation to say "here's a thing I figured out." It worked because knowledge flowed among people who shared context and trusted each other. But Phil was honest about the limit: cross-silo diffusion remains the unsolved problem. The salesperson building brilliant research Gems is unlikely to share them with the production team, because they don't share meetings, workflows, or problems.
Create space for identity renegotiation. This is the hardest intervention because it can't be solved with a workshop. It requires leaders who can help people see that their value was never the task itself. It was the judgment, the relationships, the contextual knowledge that made the output meaningful.
Phil raised something that stuck with us: "The exponential people are exponentially far ahead of everybody. I'm not sure it's possible in the laws of math and physics to now catch up."
He's right that the gap is compounding, not linear. Someone logging a thousand prompts a month isn't just ahead. They're operating in a fundamentally different relationship with their tools. Their native way of approaching a problem now starts with "how fast can I write this prompt?"
But this doesn't mean the rest of the organization is a lost cause. It means leaders need to stop thinking about AI adoption as a single curve that everyone climbs at different speeds. It's multiple curves. The goal isn't to get everyone to the same point. It's to start as many new exponential trajectories as possible, from wherever people actually are, with whatever combination of Explorer, Evangelist, or made-not-born change agent they happen to be.
Phil has a question he asks at bars, at company meetings, whenever he's trying to figure out if someone has the change-agent wiring: "Tell me about the most interesting problem you've solved recently."
The right person, he says, lights up and talks for thirty minutes. You learn something immediately. The wrong person doesn't have an answer.
The question for leaders isn't just how to find those people. It's how to build an organization where more people have an answer worth giving, and the right kind of support to turn that answer into momentum.
The AI Change Agent is not a hire you make. It's a condition you create.
Wendy Rasmussen, PhD is a clinical psychologist and founder of Alpenglow Insights, specializing in the human dimensions of AI transformation. Jonathan Hansing is the founder of Wallabi, where he works as an AI transformation strategist alongside CXOs navigating the shift from AI-curious to AI-native.