Elevate AI CEO Quoted in TechRadar: What AI Experts Think Will Matter Most in 2026

TL;DR
Thiago E. Ferreira, CEO of Elevate AI Consulting, was quoted in TechRadar's 2026 AI expert roundup. His comments focus on proof over performance: in 2026 the question won't be "Can AI do this?" but "Should I trust this result?" He expects more focus on verification, sources, confidence indicators, and human review—and that AI literacy will become a baseline requirement, like digital or media literacy.
Our Founder and CEO, Thiago E. Ferreira, has been quoted in TechRadar once again, this time in a new article by journalist Becca Caddy.
The article, "'It's time to demand AI that is safe by design': What AI experts think will matter most in 2026", asks experts across AI ethics, psychology, and real-world implementation what they're watching for in 2026—not the next killer feature, but trust, emotional stakes, and whether we can really work alongside AI long-term.
Thiago was quoted in the section on proof over performance at work, which argues that workplace adoption will hinge on trust, and that the big question in 2026 will shift from "Can AI do this?" to "Should I trust this result?"
What Thiago Said in the TechRadar Article
In the piece, Thiago explains that the conversation is flipping:
"The big question in 2026 will no longer be 'Can AI do this?', it will be 'I know AI can do this, but should I trust this result?'"
That shift, he says, could push developers and businesses toward proof over performance.
"I expect more focus on verification, sources, confidence indicators, and human review," Ferreira says in the article. "The winners this year won't be the most impressive models, but the most trustworthy ones."
He also expects AI literacy to become a baseline requirement for many workers:
"Understanding how to work with AI, like how to ask, verify, and apply outputs, will be treated like digital literacy or media literacy."
What the Full TechRadar Article Covers
The roundup doesn't focus on a single trend; it explores the messier, human side of AI in 2026:
- Faster progress, higher emotional stakes — As AI mimics listening and reassurance, people form deeper emotional connections with it; dependency can feel normal before we notice.
- Therapy-adjacent AI — More mental health and support tools; developers need to support people without overpromising or cutting ethical corners.
- Child safety — Experts call for AI that is "safe by design," not just guardrails layered on systems built for engagement.
- Proof over performance at work — Where Thiago is quoted: workplace adoption will hinge on trust, verification, and human review.
- Reality check and creativity — A recalibration of expectations; true creative work and originality may become more valued as AI produces more generic content.
Why This Matters for Your Organization
If you're leading AI adoption, the themes in the article line up with what we see in the field:
Trust and verification matter more as AI touches more decisions. Building in sources, confidence indicators, and clear human-review points isn't just good practice; it's what will separate the tools and teams that scale from those that stall.
AI literacy, knowing how to ask, verify, and apply AI outputs, is becoming a core skill. Training that goes beyond "which button to click" to cover when to trust an output, when to double-check it, and how to use AI responsibly will pay off in both adoption and risk management.
Read the full TechRadar article here →
If you're interested in how we help organizations build trust, verification, and AI literacy into their workflows, explore our AI consulting services, check out our case studies, or book a free consultation to discuss how we can help your organization.