Pichai Says AI Is in Its ‘Jagged’ Phase — and It Still Can’t Spell “Strawberry”

The journey to artificial general intelligence (AGI) is shaping up to be far messier than tech leaders once envisioned. Google CEO Sundar Pichai now describes this moment not as a steady march toward human-level intelligence, but as a bumpy ride filled with brilliant flashes and baffling stumbles — a stage he calls artificial jagged intelligence, or AJI.

Speaking on the Lex Fridman Podcast, Pichai explained that the term AJI captures the uneven development of AI systems, which are capable of astonishing feats one moment and nonsensical mistakes the next. He credits the phrase either to himself or to Andrej Karpathy, the deep learning expert and founding member of OpenAI.

“You see what they can do and then you can trivially find they make numerical errors or [struggle with] counting R’s in ‘strawberry,’ which seems to trip up most models,” Pichai said. “I feel like we are in the AJI phase where dramatic progress [is happening], some things don’t work well, but overall, you’re seeing lots of progress.”
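For context, the counting task Pichai cites is trivial to verify in ordinary code, which is part of what makes the failure so striking. Below is a minimal Python sketch of the ground-truth check; it is purely illustrative and not anything the models themselves run.

```python
# Minimal sketch of the "strawberry" test: computing the ground truth
# that language models often get wrong. Purely illustrative.

def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

if __name__ == "__main__":
    print(count_letter("strawberry", "r"))  # prints 3
```

One commonly cited explanation, though not one Pichai offers here, is that language models read text as subword tokens rather than individual characters, so a character-level count is never directly visible in their input.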

The comment reflects a broader frustration emerging among both developers and users of advanced AI. The problem is not just the occasional quirky mistake; it is the persistence of a deeper flaw known as hallucination.

Hallucination — where AI models confidently generate false or misleading information — remains one of the most serious unresolved issues in the field. Despite billions of dollars in investment and waves of model upgrades, today’s most advanced systems from OpenAI, Google, and Anthropic continue to hallucinate frequently. These errors are not only embarrassing but potentially dangerous in sensitive use cases like legal advice, healthcare, or journalism.

Even OpenAI CEO Sam Altman, who once described GPT-4 as the most useful tool he’s ever used, recently admitted to being surprised by how stubborn hallucinations have been. Speaking at a private event earlier this year, Altman reportedly said he had expected hallucinations to be significantly reduced in newer iterations — but they weren’t.

The Chicago Sun-Times and the Philadelphia Inquirer recently learned this the hard way after publishing an AI-generated summer reading list that included several non-existent books — a mistake that reignited the debate over editorial responsibility and AI oversight.

Pichai Forecasts AGI by 2030 — Or Something Close

When DeepMind launched in 2010, its founders estimated it would take about 20 years to achieve AGI. Google acquired the lab in 2014, and Pichai says that while the timeline might stretch, we’re likely to see “mind-blowing” breakthroughs across several dimensions by 2030 — even if AGI in the strictest sense isn’t yet realized.

“I would stress it doesn’t matter what that definition is because you will have mind-blowing progress on many dimensions,” he said.

However, he emphasized that by then, the world will need clearer systems for labeling synthetic content to help people “distinguish reality” from AI-generated fiction.

Pichai has been one of the most vocal tech leaders pushing for coordinated global regulation of AI. At the UN’s Summit of the Future in September 2024, he outlined four ways AI could significantly benefit humanity: improving access to knowledge in native languages, accelerating scientific breakthroughs, combating climate change, and powering economic growth.

But he has also joined the calls for AI safety, warning that without governance AI could do more harm than good, and stressing the need for frameworks and safeguards. That point was echoed by DeepMind CEO Demis Hassabis, who recently warned that autonomous models could be misused by bad actors.

Speaking at the SXSW festival in London, Hassabis emphasized the need for tighter restrictions on access to powerful AI systems, warning that the world is moving too slowly to regulate tools capable of destabilizing entire economies and societies.

“Both of those risks are important,” Hassabis told CNN’s Anna Stewart during the interview. “But a bad actor could repurpose those same technologies for a harmful end… and so one big thing is how do we restrict access to these powerful systems to bad actors, but enable good actors to do many, many amazing things with it?”

Despite these concerns, Pichai remains optimistic. He sees AJI not as a failure, but as the awkward adolescence of a powerful new technology — still learning, still stumbling, but pushing toward a future that could reshape everything.
