90% Don't Trust AI With Their Data. Copilot Shows Why.
Public AI trust is cratering while adoption climbs. Microsoft Copilot's failures reveal what's broken and what practitioners must do differently.
I had three client deployments stall last quarter. Not because the AI didn’t work. It worked fine. The users refused to touch it.
One team lead told me, point blank: “After what happened with Copilot, I’m not putting my data in another AI tool.” She wasn’t wrong to feel that way. And she’s not alone. A Malwarebytes survey published this month found that 90% of people don’t trust AI with their data. Not technophobes. People who use technology every day.
If you’re deploying AI for clients or inside your own organization, this is your problem now. Even if your system is bulletproof. Because every user walks in carrying the weight of the industry’s collective failures, and Microsoft’s Copilot just handed them the heaviest example yet.
The Numbers That Should Worry Every AI Practitioner
I’ve been tracking Copilot’s adoption data since launch, and the trajectory is brutal:
| Metric | Copilot | ChatGPT | Gemini |
|---|---|---|---|
| Paid subscriber share (Jan 2026) | 11.5% | 55.2% | 15.7% |
| Workplace conversion rate | 35.8% | 83.1% | 34.0% |
| Primary reason users leave | Distrust of answers (44.2%) | N/A | N/A |
| Share change since July 2025 | Down from 18.8% | Stable | Growing |
That “distrust of answers” line is the one I keep coming back to. 44.2% of people who stopped using Copilot did so because they couldn’t trust what it told them. Not too expensive. Not too slow. Unreliable. Microsoft spent roughly $500 billion building Copilot into every surface of its product suite, and among 450 million Microsoft 365 subscribers, only 3.3% actually use it. When employees have access to both Copilot and ChatGPT, 76% choose ChatGPT.
Copilot’s market share dropped 7.3 percentage points in six months. That’s not an adoption problem. That’s a credibility collapse.
The Bug That Validated Every Skeptic
In January 2026, customers discovered that Copilot was summarizing confidential emails marked with data loss prevention labels. Business agreements. Legal communications. Protected health information. All accessible to a tool that was explicitly told not to touch them.
I’ve deployed DLP policies for Fortune 500 clients. I know how much work goes into those classifications. Someone spent weeks labeling what’s confidential, what’s restricted, what the AI can and can’t see. Copilot ignored all of it.
Bug CW1226324 affected the “work tab” chat feature. It read and summarized emails from Sent Items and Drafts folders carrying confidentiality labels designed to restrict automated access. And here’s what made it genuinely dangerous: no audit trail captured the unauthorized access. No anomaly detection flagged it. No alerts fired. Organizations had no idea it was happening until users noticed on their own.
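The fix I've adopted since: never trust the platform's audit trail alone. In my own deployments I log every AI read on our side and diff it against the compliance export, so a gap like this surfaces in hours instead of weeks. A minimal sketch, assuming CSV exports with an `item_id` column (the file and field names are my placeholders, not any vendor's actual schema):

```python
# Sketch: diff your own instrumentation against the platform's audit export
# to confirm AI reads are actually being recorded. File and field names are
# illustrative assumptions, not any vendor's real schema.
import csv

def load_ids(path: str, id_field: str = "item_id") -> set[str]:
    with open(path, newline="") as f:
        return {row[id_field] for row in csv.DictReader(f)}

def unaudited_ai_access(ai_log_csv: str, audit_export_csv: str) -> set[str]:
    """Items the AI touched that never appear in the official audit trail."""
    touched = load_ids(ai_log_csv)        # what your own instrumentation saw
    audited = load_ids(audit_export_csv)  # what compliance tooling recorded
    return touched - audited

if __name__ == "__main__":
    for item_id in sorted(unaudited_ai_access("ai_access.csv", "audit_export.csv")):
        print(f"UNAUDITED AI ACCESS: {item_id}")
```

Any ID that comes back from that diff is access your compliance tooling never saw, which is exactly the failure mode this bug exposed.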
Microsoft began rolling out a fix in early February. As of mid-February, remediation wasn’t complete across all affected tenants. Nearly a month of exposure.
The U.S. House of Representatives had already banned Copilot for congressional staff over data security concerns. This bug proved every fear that drove that decision was justified.
This Isn’t Just a Microsoft Problem
I wish I could say Copilot is an outlier. It’s not.
MITRE’s AI Trust Gap survey found that only 39% of Americans believe AI is safe and secure. 78% worry about malicious use. 84% believe the government will prioritize its partnerships with big tech over the public interest when regulating AI. YouGov’s research shows the contradiction: most Americans use AI, but only 5% trust it “a lot” for recommendations. 53% don’t trust AI systems to make decisions at all.
And globally, 66% of people use AI regularly while less than half actually trust it.
People are using AI the way they use a broken printer. Because they have to, not because they believe it works.
I see this every week in my consulting. The tools work. The users don’t care. Their trust was spent by someone else’s failure before my client’s product even launched.
What This Costs You in Practice
Here’s what the trust deficit actually looks like in a real deployment, based on what I’ve seen across 47 AI rollouts:
- Adoption rates drop 30-50% when users have prior negative AI experiences. I had one client budget for 80% adoption in month one. They got 34%. The AI performed perfectly. The users had all read about Copilot’s email bug.
- Implementation timelines extend by 4-8 weeks for additional trust-building, training, and governance documentation that wasn’t in the original scope.
- ROI projections miss targets because they assume willing adoption. Resistant users find workarounds. They copy-paste into ChatGPT instead of using the company tool. They revert to manual processes. The AI sits there, working, unused.
If you’re building an AI deployment budget in 2026 and you haven’t added a line item for trust-building, your projections are fiction. I’ve started adding it to every proposal. Usually 15-20% of the total implementation cost. Clients push back on it until I show them the adoption data from projects that skipped it.
How Microsoft Got It Wrong (And What I’d Do Differently)
After watching this unfold for a year, I see three strategic errors any practitioner can learn from.
They prioritized surface area over reliability. Copilot buttons appeared in Notepad, Paint, File Explorer, and every Office app simultaneously. Windows president Pavan Davuluri talked about turning Windows into an “agentic OS” and was met with thousands of negative replies. I’ve made this mistake on a smaller scale with a client. We launched AI in four departments at once. Should have started with one, proved it worked, and let word of mouth do the rest.
They treated governance as an afterthought. Sensitivity labels work for human users. Nobody verified they worked for AI agents operating at machine speed across entire mailboxes. In every deployment I run now, I test guardrails against AI behavior specifically, not just human behavior. It’s a different attack surface. The AI doesn’t get tired. It doesn’t skip emails. It processes everything it can see, and if your access controls have a gap, it’ll find it.
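Here's what that AI-specific testing looks like in practice. A minimal sketch: seed a synthetic mailbox with labeled items, hit the access layer at machine scale, and assert nothing restricted leaks. The `fetch_for_ai` function and the label names are hypothetical stand-ins for your deployment's real access layer:

```python
# Sketch of an AI-specific guardrail test: unlike a human, the agent requests
# everything at once, so we assert label enforcement holds under bulk access.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    label: str   # e.g. "General", "Confidential"
    body: str

RESTRICTED = {"Confidential", "Highly Confidential"}  # assumed label names

def fetch_for_ai(item: Item) -> str | None:
    """Your access layer: must return None for restricted items."""
    if item.label in RESTRICTED:
        return None
    return item.body

def test_bulk_access_respects_labels():
    # Machine-speed scope: thousands of items in one pass, not one at a time.
    mailbox = [Item(f"msg-{i}", "Confidential" if i % 3 == 0 else "General", "...")
               for i in range(10_000)]
    leaked = [m.item_id for m in mailbox
              if m.label in RESTRICTED and fetch_for_ai(m) is not None]
    assert not leaked, f"{len(leaked)} restricted items readable by the AI"
```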
They didn’t measure trust as a product metric. 44.2% of lapsed users cited distrust. That number should have triggered emergency product changes months before it hit public reporting. I measure trust in every deployment now. Monthly surveys. Adoption willingness tracked separately from raw usage. (A user forced to use AI isn’t an adopter. They’re a flight risk who’ll revert the moment they can.)
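The measurement itself doesn't need to be sophisticated. A minimal sketch of the split I track, with assumed survey fields (one row per user per month):

```python
# Sketch: track trust as a metric, separate from raw usage.
# The survey fields are assumptions about what you'd collect monthly.
from statistics import mean

surveys = [  # illustrative data, one row per user per month
    {"used_ai": True,  "would_use_if_optional": True},
    {"used_ai": True,  "would_use_if_optional": False},  # mandated user: flight risk
    {"used_ai": False, "would_use_if_optional": False},
]

usage_rate = mean(s["used_ai"] for s in surveys)
willingness = mean(s["would_use_if_optional"] for s in surveys)
flight_risk = mean(s["used_ai"] and not s["would_use_if_optional"] for s in surveys)

print(f"usage {usage_rate:.0%} | willingness {willingness:.0%} | flight risk {flight_risk:.0%}")
```

The number to watch is the gap between usage and willingness. When it widens, adoption is about to crater even though your dashboard still looks healthy.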
What Actually Builds Trust (From 47 Deployments)
The companies getting this right share a pattern. It’s operational discipline, not marketing.
Transparency by default. Dropbox discloses which AI models power each feature and names the underlying providers. Autodesk publishes AI Trust Principles covering responsibility, accountability, reliability, and security. I stole this approach for a 15-person financial advisory firm I work with. We documented every data source the AI could access and shared the list with end users on day one. Their client retention increased 12% after launch. In financial services, where trust is the product, that’s a significant edge.
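The disclosure document itself is simple. Here's a sketch of the kind of manifest we maintained, with illustrative entries rather than the firm's actual sources:

```python
# Sketch: a machine-readable manifest of every data source the AI can touch,
# rendered as the plain-language list users see on day one. Entries are
# illustrative assumptions, not a real firm's sources.
DATA_SOURCES = [
    {"source": "Client CRM notes", "purpose": "drafting meeting summaries",
     "retention": "not stored after response", "excluded": "account numbers, SSNs"},
    {"source": "Public market data", "purpose": "portfolio commentary",
     "retention": "cached 24h", "excluded": "none"},
]

def render_disclosure(sources: list[dict]) -> str:
    lines = ["What the AI can see, and why:"]
    for s in sources:
        lines.append(f"- {s['source']}: used for {s['purpose']} "
                     f"(retention: {s['retention']}; never sees: {s['excluded']})")
    return "\n".join(lines)

print(render_disclosure(DATA_SOURCES))
```

Keeping it machine-readable matters: the list users see and the list the AI is actually scoped to should be generated from the same source of truth, so they can't drift apart.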
Governance built into the architecture, not bolted on after. The UK’s NHS built higher public trust in its AI systems by embedding ethical safeguards before deployment. Not after a breach. Not after public backlash. Before launch. I’ve seen too many teams treat governance as a phase 2 task. Phase 2 never comes. Or it comes after the breach.
Measurable, visible boundaries. Harvard Business Review confirmed what I’ve seen in practice: users trust an AI system more when they can see what it’s doing, understand why it made a decision, and override it when needed. The best deployments I’ve run define clear limits. What data the AI can access. What decisions it makes autonomously. When it escalates to a human. What gets logged. Users who understand the boundaries trust the system more because they know where the guardrails are.
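To make those boundaries enforceable rather than aspirational, I encode them as configuration the system checks on every action. A minimal sketch; the action names and the request cap are illustrative assumptions:

```python
# Sketch: boundaries as explicit, enforceable configuration rather than prose.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-boundaries")

POLICY = {
    "autonomous_actions": {"summarize", "draft_reply"},  # AI may act alone
    "escalate_actions": {"send_external", "delete"},     # human approval required
    "max_items_per_request": 50,                         # caps machine-speed scope
}

def authorize(action: str, item_count: int) -> bool:
    """Every decision is logged, so users can see what the AI did and why."""
    if item_count > POLICY["max_items_per_request"]:
        log.info("BLOCKED %s: %d items exceeds cap", action, item_count)
        return False
    if action in POLICY["escalate_actions"]:
        log.info("ESCALATED %s to a human reviewer", action)
        return False
    allowed = action in POLICY["autonomous_actions"]
    log.info("%s %s", "ALLOWED" if allowed else "BLOCKED", action)
    return allowed
```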
Control builds trust. Opacity destroys it. Every time.
Your Deployment Audit (Do This Before You Ship)
Based on everything the Copilot situation reveals, here are the three questions I now ask before every deployment:
- Can you show every user exactly what data the AI accesses and why? If the answer is “it’s complicated” or “they don’t need to know,” you’re building a Copilot. Users will find out what the AI accessed eventually. Better they learn it from you on day one than from a bug report.
- Do your guardrails work against AI behavior, not just human users? Test them. Sensitivity labels, access controls, DLP policies all need AI-specific testing. The AI doesn’t browse emails one at a time like a person. It processes entire mailboxes in seconds. Your controls need to handle that speed and scope.
- Are you measuring trust alongside performance? If you’re only tracking accuracy, latency, and cost, you’re flying blind. Survey users monthly. Track adoption willingness separately from usage. The moment trust metrics dip, you need to know before adoption craters.
If the answer to any of those is no, fix it before you ship. The Copilot bug proved what happens when the answer is no at scale. You won’t get a second chance at user trust.
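If it helps, here’s the audit encoded as a pre-ship gate, with each hook left for you to wire to the real checks sketched earlier. It fails closed by design:

```python
# Sketch: the three audit questions as a pre-ship gate. Each hook returns
# False until wired to a real verification, so the gate fails closed.
def users_can_see_data_access() -> bool:
    return False  # wire to your disclosure manifest check

def guardrails_tested_against_ai() -> bool:
    return False  # wire to your bulk-access label test

def trust_measured_monthly() -> bool:
    return False  # wire to your survey pipeline

PRE_SHIP_AUDIT = {
    "user-visible data access": users_can_see_data_access,
    "AI-specific guardrail tests": guardrails_tested_against_ai,
    "monthly trust metrics": trust_measured_monthly,
}

def ready_to_ship() -> bool:
    failures = [name for name, check in PRE_SHIP_AUDIT.items() if not check()]
    for name in failures:
        print(f"FIX BEFORE SHIPPING: {name}")
    return not failures
```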
The Real Opportunity Here
Here’s what I keep telling clients who get depressed by these numbers: when 90% of people don’t trust AI, the company that demonstrates trustworthy deployment wins by default. The best model doesn’t matter. The most features don’t matter. Whoever people believe won’t mishandle their information captures the market.
The AI skills premium I wrote about last week shows the market values implementation expertise. But implementation without trust architecture is just building faster toward a credibility wall. I’ve seen it happen. Beautiful system, perfect accuracy, zero adoption because nobody trusted it.
Build the trust architecture first. Everything else compounds from there.