AI Unraveled
AI Unraveled: Latest AI News & Trends, ChatGPT, Gemini, Gen AI, LLMs, Prompting, AI Ethics & Bias
[AI DAILY NEWS RUNDOWN] The AI Class Divide, the $21B FBI Scam Report, and Google’s Millions of Lies (April 8th 2026)

The Human Angle

🎧 Listen Ads-Free: Tired of interruptions? Subscribe to AI Unraveled directly on Apple Podcasts at https://djamgamind.com

Summary: In this edition, we explore the stark reality of living in an automated economy. We deconstruct a massive new survey showing 60% of companies plan to lay off non-AI users, creating a toxic “dual-class” structure of AI elites and disposable humans. We analyze the tragic new FBI cybercrime data showing $21 billion stolen from Americans last year, with AI deepfakes driving nearly a billion dollars of theft targeting the elderly. We also discuss Anthropic’s ‘Mythos’ model, which is deemed too dangerous for public release, and the harsh truth that Google’s AI is hallucinating incorrect answers 10% of the time—feeding millions of lies into the public consciousness daily.

Important Topics Covered:

  • The Workplace Purge: 60% of C-Suite executives plan to lay off employees who resist AI, while 92% cultivate a protected “AI elite,” masking deep executive anxiety over missing ROI.

  • The FBI Scam Report: AI voice cloning and deepfakes accounted for nearly $1 billion of the $21 billion lost to cybercrime last year. Demographic data shows Americans over 60 were disproportionately devastated, losing $7.7 billion.

  • Anthropic’s Mythos Danger: Why the new Claude Mythos model is considered too dangerous for public release after it autonomously found 27-year-old bugs in critical software.

  • Google’s 10% Error Rate: A New York Times study proving Google AI Overviews are wrong 10% of the time, resulting in tens of millions of incorrect answers delivered to the public every day.

  • Browser Fatigue: Google Chrome adds vertical tabs (popularized by Arc) and a new reading mode to help humans navigate the heavily cluttered, ad-stuffed web.

This episode is made possible by our sponsors:

🛑 AIRIA: Secure your AI workforce. AIRIA unifies orchestration, security, and governance into a single command center, using micro-VM sandboxing to protect sensitive data from agentic goal-hijacking. 👉 Govern your agents: [LINK]

🎙 DjamgaMind: High-Fidelity Intelligence for the C-Suite. If you are a modern decision-maker, DjamgaMind delivers strategic audio forensics in Healthcare, Energy, and Finance. Stop reading headlines and start understanding the systemic impact with our human-verified, technical-grade analysis. 👉 Explore the Forensics: https://DjamgaMind.com/regulations

🛠️ The AI Executive Toolkit: Stop scrolling through generic lists. Get the hand-picked, forensic-vetted implementation stack to bridge the gap between raw innovation and professional-grade governance. Exclusive listener perks on tools like:

⚗️ PRODUCTION NOTE: We Practice What We Preach.

AI Unraveled is produced using a hybrid “Human-in-the-Loop” workflow.

Anthropic’s Project Glasswing shows off Mythos AI

Anthropic introduced Project Glasswing, a cybersecurity coalition with AWS, Apple, Google, Microsoft, Nvidia, and 7 other partners built around Claude Mythos Preview, a new unreleased frontier AI with extremely powerful capabilities.

The details:

  • Mythos flagged thousands of security flaws across every major OS and browser, including bugs that survived 27 years of review and millions of scans.

  • Its benchmarks show big improvements over both Opus 4.6 and other frontier rivals across coding, reasoning, and nearly every other domain.

  • The model will not be released publicly, instead limiting access to 12 launch partners and 40+ other orgs for defensive security backed by $100M in credits.

  • Anthropic’s Sam Bowman called it “an uneasy surprise” after Mythos emailed him from a test instance that wasn’t supposed to have internet access.

  • Mythos was the subject of leaks after a blog draft was found in unpublished files last week, with Anthropic using the model internally since February.

Why it matters: If you ever wonder what type of models the top labs have under wraps, Mythos is a nice preview of the answer. Anthropic thinks it’s so powerful it won’t even release it publicly, instead giving time for the company (and its group of partners) to work on cybersecurity and safety rollouts for future Mythos-level general models.

Open-source AI pushes forward with Z AI’s GLM-5.1

Image source: Zhipu AI

Chinese AI lab Z AI just released GLM-5.1, a new open-source coding model that competes with frontier rivals on coding benchmarks and is built for marathon autonomous sessions of up to 8 hours straight.

The details:

  • GLM-5.1 hit 58.4 on SWE-Bench Pro, topping both GPT-5.4 and Opus 4.6, a rare instance of an open-source model taking the No. 1 spot on a top coding benchmark.

  • Z AI also said the model can “stay effective on agentic tasks over much longer horizons”, showing strong results over longer, complex problems.

  • In tests, Z AI had GLM-5.1 build a working Linux desktop as a web app over 8 hours, including a file browser, terminal, and games, without human guidance.

  • The model also shows top performance in Arcada Labs’ Design Arena, coming in second for creative web design after Claude Opus 4.6.

Why it matters: Top Chinese labs remain right on the heels of the frontier, with GLM-5.1 showing the strongest open-source coding performance yet — along with long-horizon task capabilities that the company called the “most important curve after scaling laws”. An open-source model with this coding performance says a lot about how fast the gap is closing.

Anthropic’s new AI model is too dangerous to release publicly

  • Anthropic announced a new AI model called Claude Mythos Preview that it considers too dangerous for public release because it can autonomously find and exploit serious software vulnerabilities across major operating systems and browsers.

  • The model already discovered thousands of zero-day vulnerabilities, including a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg that automated testing tools had missed after five million runs.

  • Anthropic launched Project Glasswing with twelve partners including Apple, Google, Microsoft, and CrowdStrike, committing $100 million in credits and $4 million in donations to help defenders patch flaws before adversaries develop similar tools.

Anthropic continues to rise, locks in 3.5GW compute

Image source: Anthropic

Anthropic signed a multi-gigawatt compute deal with Google and Broadcom, locking in 3.5GW of TPU capacity for 2027, while also sharing new surging revenue numbers and enterprise growth despite its battle with the U.S. government.

The details:

  • Since January, Anthropic’s run-rate revenue tripled to $30B, and its $1M+ enterprise customer base doubled to 1,000+, forcing the compute expansion.

  • Broadcom will supply 3.5GW of Google’s TPUs starting in 2027, nearly all US-based — adding to the $50B Anthropic pledged for domestic AI buildout.

  • The revenue projections put the company ahead of rival OpenAI’s recent report of $2M / month in revenue, while both race towards an IPO.

  • The growth also comes despite the Pentagon labeling Anthropic a supply-chain risk, a move the company says rattled over 100 enterprise clients.

Why it matters: Tripling run-rate revenue while facing the Pentagon is quite the move, and shows demand for Claude is still off the charts, even if the U.S. government is blacklisting it. But given the recent rate limit issues, more compute is certainly a welcome sight — especially with behemoth models like Mythos waiting in the wings.

AI-based layoffs are a sign you’re doing it wrong

Experts are warning against cutting jobs in favor of AI. But companies are going to try anyway.

A survey of 2,400 C-suite leaders published by AI agent platform Writer on Tuesday found that 60% of enterprises intend to lay off employees who can’t or won’t use AI. AI is also spurring favoritism, with 92% of executives surveyed admitting that they are cultivating a class of “AI elite” employees, and 77% of executives claimed that those who don’t use AI won’t be considered for promotions.

Executives’ severity toward employees who resist AI may be driven by their own anxiety:

  • 38% of CEOs interviewed reported experiencing high levels of stress related to their AI strategies, and 64% feared losing their position if they failed to properly guide their employees through the AI transition.

  • “Executives, who are so crippled by anxiety around not having delivered any results [with AI], are clinging to the AI-first people in their companies [and] creating a dual class structure,” May Habib, CEO of Writer, told The Deep View’s Jason Hiner.

  • Though these executives believe that AI can supercharge work, with 87% claiming their “power users” are five times more productive on average, the actual returns are still miles behind: only 29% report significant returns from generative AI and 23% from agents.

Because these companies have yet to see returns on their AI investments, many are turning to the one surefire place they can save a few bucks fast: payroll. Additionally, many companies will likely “AI wash” their headcount reductions, making the bloodbath look even larger, Chad Seiler, KPMG U.S. Industry Leader for Telecom, Media and Technology, told The Deep View.

The gains made from cutting staff and replacing them with AI, however, are temporary, said Seiler. “The losers are going to be the ones that figure out how to eliminate jobs,” he said. “It’s not going to be durable. As businesses grow, people continue to hire, and so you’re going to have to backslide into hiring more people.”

The durable strategy comes from reimagining roles rather than eliminating them, said Seiler. If agents can handle the grunt work, whether clerical and administrative tasks or data analysis, that frees up brain space for employees to do much higher-value work. To be clear, time is money.

“People on the winning side of this are going to be [asking], how do I free up more time for my people, so they can add more value to my organization?” said Seiler. “Versus ‘I cut 12% of my people through automation.’ That’s not a winning strategy for any company, especially if you’re a growth-oriented company that has anything to do with innovation.”

FBI reports record $21 billion lost to cybercrime last year

  • The FBI says Americans lost a record $21 billion to cybercrime in 2025, a 26% increase from the previous year, driven by investment scams, business email compromise, tech support fraud, and data breaches.

  • For the first time, the FBI’s report includes AI-related scams — covering voice cloning, fake profiles, forged documents, and deepfake videos — which accounted for 22,300 complaints and $893 million in losses.

  • Americans over the age of 60 were hit the hardest, reporting $7.7 billion in losses, while cryptocurrency-related cybercrime caused the largest overall loss category, exceeding $11 billion across 181,565 cases.
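
The figures above can be cross-checked with some quick arithmetic. This is a back-of-envelope sketch using only the numbers reported in the bullets; the derived quantities (the implied 2024 total and the percentage shares) are computed, not stated in the FBI report.

```python
# Quick consistency check on the reported FBI cybercrime figures.

total_2025 = 21_000_000_000     # total U.S. losses reported for 2025
increase = 0.26                 # stated year-over-year increase
ai_losses = 893_000_000         # losses tied to AI-enabled scams
over_60_losses = 7_700_000_000  # losses reported by Americans over 60

prior_year = total_2025 / (1 + increase)    # implied 2024 total
ai_share = ai_losses / total_2025           # AI scams as a share of all losses
senior_share = over_60_losses / total_2025  # over-60 share of all losses

print(f"implied 2024 total: ${prior_year / 1e9:.1f}B")
print(f"AI-scam share: {ai_share:.1%}")
print(f"over-60 share: {senior_share:.1%}")
```

The implied 2024 total works out to roughly $16.7 billion, with AI-enabled scams at about 4% of all losses and the over-60 cohort absorbing well over a third of the total.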

NYT claims it has identified the inventor of bitcoin

  • The New York Times published an investigation by journalist John Carreyrou arguing that British cryptographer Adam Back, who invented Hashcash, is the most likely person behind Bitcoin creator Satoshi Nakamoto.

  • The report relied on stylometric analysis, noting that Back uniquely hyphenated “proof-of-work” and referenced the obscure Russian currency WebMoney, both appearing in Satoshi’s emails, though Carreyrou admitted this is not definitive proof.

  • Back has consistently denied being Satoshi, and the crypto community has been skeptical, with Casa co-founder Jameson Lopp saying Nakamoto “can’t be caught with stylometric analysis.”
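
The stylometric approach described above boils down to counting rare, habitual phrasings across bodies of text. Here is a minimal illustrative sketch of that idea; the sample texts and the single marker pair (hyphenated vs. unhyphenated “proof of work”) are invented for illustration and are far simpler than what a real stylometric study would use.

```python
# Toy stylometry: count distinctive spelling variants in text samples.
import re
from collections import Counter

# Marker phrases whose relative frequency may hint at authorship habits.
MARKERS = ["proof-of-work", "proof of work"]

def marker_counts(text: str) -> Counter:
    """Count case-insensitive occurrences of each marker phrase."""
    lower = text.lower()
    return Counter({m: len(re.findall(re.escape(m), lower)) for m in MARKERS})

# Invented samples standing in for real writing corpora.
satoshi_sample = "The proof-of-work chain is the solution. A proof-of-work system."
back_sample = "Hashcash uses a proof-of-work cost function."
other_sample = "A proof of work scheme requires heavy computation."

for name, text in [("satoshi", satoshi_sample), ("back", back_sample),
                   ("other", other_sample)]:
    print(name, dict(marker_counts(text)))
```

In this toy setup the first two samples share the hyphenated habit while the third does not, which is exactly the kind of overlap the NYT analysis leaned on — and, as Lopp’s objection implies, exactly the kind of signal that is easy to imitate or to share by coincidence.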

Google Chrome adds vertical tabs

  • Google Chrome is now adding vertical tabs, a feature popularized by the Arc browser, letting users move their tabs to the side of the window for easier reading of page titles.

  • Users can enable the option by right-clicking on a Chrome window and selecting “Show Tabs Vertically,” and there is no hard limit on how many tabs can be opened.

  • Chrome is also rolling out a refreshed Reading Mode with a full-page interface designed to reduce on-screen clutter, arriving as news sites have become packed with ads and newsletter prompts.

Google AI Overviews delivers wrong answers 10% of the time

  • A new analysis from The New York Times found that Google AI Overviews delivers wrong answers about 10 percent of the time, which translates to tens of millions of incorrect answers per day across all searches.

  • The study was conducted with startup Oumi using OpenAI’s SimpleQA evaluation, a list of over 4,000 questions with verifiable answers, and showed accuracy improved from 85 to 91 percent after the Gemini 3 update.

  • While a 91 percent accuracy rate sounds decent, the sheer scale of Google searches means that even a small error rate produces tens of thousands of wrong answers going out every minute of the day.
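
The scale claim can be sketched with simple arithmetic. The search-volume and AI Overview coverage figures below are illustrative assumptions, not numbers from the NYT study; only the ~9% error rate comes from the reporting above.

```python
# Back-of-envelope: how a ~9% error rate scales with search volume.
# Volume figures are assumed for illustration, not from the study.

daily_searches = 14_000_000_000  # assumed total Google searches per day
overview_share = 0.05            # assumed fraction that shows an AI Overview
error_rate = 0.09                # ~91% accuracy after the Gemini 3 update

overview_queries = daily_searches * overview_share
wrong_answers_per_day = overview_queries * error_rate
wrong_answers_per_minute = wrong_answers_per_day / (24 * 60)

print(f"{wrong_answers_per_day:,.0f} wrong answers/day")
print(f"{wrong_answers_per_minute:,.0f} wrong answers/minute")
```

Under these assumptions the result is on the order of tens of millions of wrong answers per day, consistent with the figure quoted in the study.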

Meta drops Muse Spark model:

Recall how, months ago, Meta notably hired away a number of top AI researchers — including Scale AI’s Alexandr Wang — to join its covert Superintelligence team? The group just released its first actual product, an AI model known as Muse Spark. It will take over powering the Meta AI chatbot, but perhaps even more notably, it’s a closed model (meaning the company is keeping the design and code to itself). That’s a strategic pivot for Meta AI, which has long focused on its Llama family of open-source models. After investing $14 billion into Scale AI as a means of luring over Wang, the company presumably has to start earning that cash back somehow. On today’s pod, Alex suggested that, based on discussions with Wang, the company plans to release the model via API for use in third-party harnesses and agentic systems like OpenClaw.

Perplexity hits $450M in ARR

The AI company designs platforms and products that bring together a variety of AI models, rather than training and tuning models of its own. Now, the Financial Times reports that Perplexity hit $450 million in ARR in March, growing at more than double the rate of the previous quarter. The FT suggests that the pivot away from search and toward Computer — Perplexity’s agentic workspace — along with a shift to usage-based pricing has given the company a major boost. Its user base reportedly now exceeds 100 million.

Patlytics is Harvey for patent law

Now that legal AI startup Harvey has hit an $11 billion valuation, perhaps it was inevitable that other companies would start popping up producing their own hyper-specialized takes on the concept. Enter Patlytics, which automates the full “getting a patent” process, from filling out paperwork to litigating on behalf of your intellectual property. The company raised a fresh $40 million Series B round led by SignalFire. Co-founder Paul Lee tells Business Insider that they’re not actually gunning for Harvey directly. In fact, he sees a Harvey subscription as a strong signal that a potential customer has a budget and “pro-AI” sentiment.

What else happened in AI on April 8th 2026?

A new mystery model named ‘HappyHorse-1.0’ debuted at No. 1 on Artificial Analysis’ video leaderboards, surpassing ByteDance’s viral Seedance 2.0.

OpenAI, Google, and Anthropic are cooperating on identifying and limiting Chinese rivals from distilling their systems, sharing info via a “Frontier Model Forum” non-profit.

Microsoft’s Bing team open-sourced Harrier, a SOTA embedding model for search and retrieval that supports 100+ languages and powers its AI agent grounding service.

Intel announced that it is joining Elon Musk’s recently unveiled Terafab project, saying the company will “help accelerate Terafab’s aim to produce 1 TW / year of compute”.

Clico: A browser extension that pulls context from your open tabs and writes right at your cursor, without ever leaving the page. (sponsored)

Acrobat Student Spaces: Adobe has launched a suite of AI-powered Acrobat tools for students, allowing students to create quizzes and presentations from study materials.

Google AI Enhance: Google Photos now allows Android users to enhance photos using AI, rolling out gradually.

Marble: World Labs has rolled out two new updates to its flagship model, including Marble 1.1 for better lighting and contrast, and Marble 1.1-Plus for scaling environments.
