Enterprises have spent the past two years rushing to make their workforces “AI-ready.” But many early training programs — focused on prompt writing and chatbot skills — are proving poorly suited to the realities of AI-powered work.
The reason is simple: the skills that matter most once AI enters real workflows have less to do with interacting with tools and more to do with judgment. The durable capabilities emerging in the AI era include output validation, data literacy, process understanding, and the ability to challenge automated recommendations. Tool-specific skills, by contrast, tend to age quickly as models and interfaces evolve.
“AI-ready is not defined by how many people took training or how many licenses you bought,” said Neal Sample, executive vice president and chief digital and technology officer at electronics retailer Best Buy. “It’s defined by whether you have redesigned real workflows, assigned accountability, and can show the technology is improving outcomes without introducing unmanaged risk.”
That shift — from tool proficiency to operational judgment — is forcing enterprises to rethink how they train employees for AI.
The illusion of AI readiness
The first wave of corporate AI training focused heavily on prompt engineering and basic familiarity with generative AI tools. That approach made sense early on, when employees needed help understanding the technology. But many organizations are discovering those skills have a short half-life.
“Prompt engineering aged the fastest,” said Rebecca Schalber, senior manager for generative AI at cosmetics company cosnova Beauty. As new models and interfaces appear, the effort invested in crafting perfect prompts quickly becomes obsolete.
When cosnova rolled out generative AI across its workforce, Schalber expected training to center on individual capability — understanding large language models, learning prompting techniques, and experimenting with tools. Early adoption looked promising. Within six months, a survey showed employees reporting productivity gains of nearly 10%.
But adoption alone was not enough. “You need broad adoption to move the needle,” Schalber said. “But what really matters is the workflow design.”
Instead of focusing on prompts, cosnova began examining how work actually happens inside teams — what tasks employees perform, where friction exists, and which parts of a workflow could be safely automated or augmented by AI. That shift forced employees to confront a different question: not how to use AI, but how to verify its output and integrate it into real business processes.
When AI hits real workflows
The distinction becomes clear once AI leaves experimental environments and enters operational workflows. In testing, outputs can be compared against known answers. In real business processes, however, the answer often isn’t known in advance. AI systems are deployed precisely because they help employees analyze complex situations, interpret data, or generate insights.
That’s where human oversight becomes critical. “Human oversight is not second-guessing every output from the AI,” said Sample from Best Buy. “It means being explicit about where judgment, escalation, and accountability must remain human.”
The closer a decision comes to customer trust, regulatory obligations, or significant financial risk, the more important that judgment becomes. Organizations deploying AI at scale must build guardrails into workflows and clearly define who is responsible for final decisions.
“For every AI-enabled workflow, you need to know who owns the decision, who handles exceptions, and where a human must intervene before the business takes action,” Sample said.
In other words, the challenge of AI readiness is not teaching employees to interact with a model — it’s teaching them how to supervise it.
From training programs to workflow design
At cosnova, Schalber’s team moved away from generic training sessions toward hands-on workshops where managers and employees map their daily workflows. During these sessions, teams identify tasks that could benefit from AI support and then redesign processes around those opportunities.
When AI was introduced as simply another tool, enthusiasm was limited. But when employees saw how the technology could remove tedious tasks or reduce friction in their work, adoption accelerated.
“It was no longer just another tool that management wanted people to use,” Schalber said. Instead, teams were solving their own problems — removing repetitive tasks or speeding up processes they disliked.
The company also began emphasizing transferable skills that apply across AI tools and models, including critical thinking, workflow design, and data literacy. These capabilities remain valuable even as the technology evolves and have proven far more durable than prompt-writing techniques.
Experimentation before formal training
Some organizations are taking a different approach: experimentation first, formal training later. At AI infrastructure company Turing, Taylor Bradley, vice president of talent strategy, deliberately began the company’s AI upskilling effort by inviting non-technical employees to experiment with generative AI tools.
The goal was to spark curiosity rather than enforce compliance. Bradley compares the process to teaching his daughter to ride a bicycle. “The best way for her to learn was to actually have her ride the bike,” he said.
At Turing, employees experimented with AI through informal activities such as turning photos of pets into “royal portraits” or creating short AI-generated films for internal competitions. The exercises were designed to lower the barrier to experimentation. Once employees became comfortable with the technology, the company introduced practical workshops focused on real work tasks.
Bradley now sits down with teams to examine daily workflows and identify where generative AI could help. Employees often discover that AI can serve as a sounding board for ideas, a drafting assistant, or a way to accelerate communication.
Within weeks, those experiments often evolve into more formal systems. One early project began as a conversational tool helping HR specialists draft responses to employee support tickets before expanding into a broader internal knowledge system.
The key metric, Bradley said, is not course completion but whether teams develop useful AI applications. “We focus on quality use cases with measurable outcomes,” he said.
Learning inside the flow of work
For large enterprises, the challenge of AI skill development is even more complex. Traditional training models — where employees attend courses and then return to their jobs — are poorly suited to a technology evolving as quickly as generative AI.
According to Margaret Burke, talent acquisition and development leader at professional services firm PwC, traditional training programs are inherently episodic. “Employees attend a course, return to work, and may or may not apply what they learned,” she said. “In an AI-accelerating environment, that model breaks down.”
PwC is embedding AI learning directly into everyday work. The firm still runs formal programs but is expanding apprenticeship-style learning and weaving AI capability development into routine business activities.
One example is the company’s “skills days,” where employees explore AI applications relevant to their work. During a recent session with advisory associates, participants documented how they were already using AI — or where they planned to apply it. Hundreds of ideas emerged. PwC then used AI to analyze the inputs, clustering them into categories and redistributing the results across the organization so teams could learn from one another.
Crucially, PwC pairs technical AI capabilities with what Burke calls “human edge” skills, including critical thinking, independent judgment, and storytelling. “We never teach an AI technical skill without teaching the human skill that goes with it,” Burke said.
As AI systems generate more content and analysis, those human capabilities become essential for interpreting results, spotting errors, and explaining insights to colleagues and clients.
Measuring real AI readiness
As organizations rethink AI capability, the metrics used to evaluate training programs are changing. Traditional learning programs often rely on course completion rates or certifications. But those metrics reveal little about whether employees can use AI responsibly inside real workflows.
Instead, organizations are looking for operational signals. Some track how frequently employees develop new AI use cases that improve productivity or decision-making. Others measure how quickly teams adapt when AI tools or models change.
For Bradley at Turing, the key indicator is whether employees continually find new ways to improve their work with AI. “If my team members come to me every week with ideas for improving or expanding AI use cases, that’s the signal that capability is growing,” he said.
From the CIO’s perspective, however, the ultimate measure is operational outcomes. AI readiness only becomes meaningful when organizations integrate AI into real workflows while maintaining accountability for the results.
“The most durable capabilities are not the current best prompt tricks,” said Best Buy’s Sample. “They are judgment, problem framing, systems thinking, and the ability to translate machine output into business action.”
But for CIOs deploying AI across the enterprise, workforce capability is only part of the equation. Organizations must also rethink how leadership defines accountability when AI systems influence decisions.
“An AI-ready workforce without an AI-ready leadership model is likely to stall,” Sample said. “AI can accelerate analysis and recommendations, but accountability doesn’t transfer to the model. Leaders still have to define guardrails, decision rights, and what success looks like.”
As enterprises move beyond early AI experimentation, that leadership clarity may prove just as important as any skill employees learn.
Related reading:
- What AI skills job seekers need to develop in 2026
- 5 things IT managers get wrong about upskilling tech teams
- Two-thirds of jobs will be impacted by AI
- How to keep tech workers engaged in the age of AI
- How to train an AI-enabled workforce — and why you need to