Last Week in AI #321 - Anthropic & Midjourney Lawsuits, Bad Jobs Data
Judge puts Anthropic’s $1.5 billion book piracy settlement on hold, Warner Bros. Joins Studios’ AI Copyright Battle Against Midjourney, Anthropic endorses California’s AI safety bill SB 53
Top News
Judge puts Anthropic’s $1.5 billion book piracy settlement on hold
A federal judge has paused Anthropic’s proposed $1.5 billion class-action settlement with U.S. authors, citing concerns the deal was crafted “behind closed doors” and could be forced “down the throats of authors.” Judge William Alsup, who previously ruled that training on purchased books can be fair use but not on illegally downloaded works, rejected the agreement for now and pressed for clearer details on the claims process and the exact number of covered titles. The settlement envisioned roughly $3,000 per book for about 465,000 works, but Alsup warned against “hangers on” and insisted class members receive “very good notice.” The dispute has drawn industry pushback—AAP CEO Maria Pallante said the court misunderstands publishing—while authors’ counsel Justin Nelson said they want every valid claim compensated. Alsup set another hearing for September 25, quipping he’ll see if he can “hold [his] nose and approve it.”
Warner Bros. Joins Studios’ AI Copyright Battle Against Midjourney
Warner Bros. sued Midjourney, becoming the third major studio after Disney and Universal to accuse the AI image/video generator of copyright infringement. The complaint alleges Midjourney willfully produces still images and video of protected characters like Superman, Batman, Bugs Bunny, Daffy Duck, Tom and Jerry, Joker, Flash, and Scooby-Doo. WB claims Midjourney recently removed guardrails that previously blocked infringing video creation, and that prompts can “easily” generate near-duplicate character depictions. The suit seeks statutory damages and an injunction to stop alleged infringement, arguing Midjourney could deploy protections to prevent duplication but has chosen not to.
Midjourney’s prior defense in the Disney-Universal case asserted “fair use” for training on copyrighted works and noted its terms of service prohibit users from violating IP rights. The WB filing, brought by the same legal team as the Disney/Universal suits, follows a similar template and highlights Midjourney’s expansion into video and a 24/7 streaming channel (website and YouTube), with a stated plan for “channels” that WB says signals intent to enter TV/streaming markets.
Anthropic endorses California’s AI safety bill, SB 53
Anthropic officially endorsed California’s SB 53, a frontier AI safety bill from Sen. Scott Wiener that would impose transparency and safety requirements on the largest AI developers, including OpenAI, Anthropic, Google, and xAI. The bill would require developers to create safety frameworks and publish public safety and security reports before deploying “powerful” models, and it adds whistleblower protections for employees who raise safety concerns.
SB 53 targets “catastrophic risks,” defined as events causing at least 50 deaths or over $1 billion in damages, focusing on preventing models from enabling biological weapons development or cyberattacks rather than issues like deepfakes. While Anthropic says it prefers federal standards, it argues progress can’t wait; co-founder Jack Clark called SB 53 a “solid blueprint for AI governance.”
A prior version has already passed the California Senate; a final vote is pending before the bill can go to Governor Gavin Newsom, who previously vetoed Wiener’s SB 1047 and has not commented on SB 53. The bill faces pushback from industry groups (CTA, Chamber of Progress), investors (a16z, Y Combinator), and the Trump administration, with critics arguing state action could chill innovation and violate the Constitution’s Commerce Clause by burdening interstate commerce.
Compared to SB 1047, experts describe SB 53 as more modest and technically grounded, and lawmakers recently removed a third-party audit requirement; the goal is to codify existing lab safety practices into enforceable state law with financial penalties for noncompliance.
OpenAI Announces GPT-Realtime, Its Best Voice AI Model Yet
OpenAI is rolling out GPT-Realtime, its most advanced speech-to-speech model, as it graduates the Realtime API from beta to production. Built to reduce latency versus the old transcribe→LLM→TTS pipeline, the updated API processes audio directly and, according to OpenAI, is more reliable, cheaper, and better aligned to real-world use cases after input from customer support, personal assistance, and education experts.
GPT-Realtime promises more natural, expressive speech; stronger instruction following; mid-sentence language switching; and two new voices (Cedar and Marin), plus the ability to pick up nonverbal cues like laughter, interpret images, and shift tone more fluidly. Pricing drops from $40/$80 to $32/$64 per million input/output audio tokens, and OpenAI says thousands of developers already use the API.
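A quick back-of-envelope sketch of what the price change means in practice (the per-million-token rates come from the announcement above; the token counts are hypothetical, chosen purely for illustration):

```python
# Back-of-envelope cost comparison for the Realtime API price change.
# Rates are dollars per 1M audio tokens, as quoted above; the session's
# token counts below are hypothetical values for illustration only.
OLD = {"input": 40.00, "output": 80.00}   # previous pricing
NEW = {"input": 32.00, "output": 64.00}   # gpt-realtime pricing

def session_cost(rates, input_tokens, output_tokens):
    """Dollar cost of one session at the given per-million-token rates."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Hypothetical session: 50k input audio tokens, 20k output audio tokens.
old = session_cost(OLD, 50_000, 20_000)
new = session_cost(NEW, 50_000, 20_000)
print(f"old ${old:.2f}, new ${new:.2f}, saving {1 - new / old:.0%}")
# → old $3.60, new $2.88, saving 20%
```

Since both the input and output rates dropped by the same factor, any session mix works out to a flat 20% reduction.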
AI adoption linked to 13% decline in jobs for young U.S. workers, Stanford study reveals
A new Stanford study using ADP payroll data from millions of U.S. workers finds large, early labor-market effects from generative AI concentrated among entry-level employees. For workers aged 22–25 in AI-exposed occupations—customer service, accounting, and software development—employment fell 13% since 2022, while employment for older workers in the same roles held steady or grew. In less-exposed occupations, such as nursing aides, employment rose across ages, with young health aides growing faster than older counterparts; front-line production/operations supervisors also saw gains for young workers, though smaller than for 35+. The authors attempted to isolate AI’s role by controlling for education, remote work, offshoring/outsourcing, and macroeconomic shifts, and note the paper is not yet peer-reviewed.
The study argues young workers are particularly vulnerable where AI substitutes for “codified knowledge” learned in school, while experience-based tacit knowledge is harder to replace. Impacts are non-uniform: in occupations where AI complements tasks and boosts efficiency rather than substitutes for them, employment changes were muted. The findings help explain stagnant national employment growth among young workers despite resilient overall employment post-pandemic, aligning with recent private-sector analyses pointing to early AI effects in tech and among younger employees. Researchers emphasize that many firms have not yet deployed AI at scale, suggesting these labor market impacts could intensify as adoption expands.
Other News
Tools
Anthropic’s Claude can now make you a spreadsheet or slide deck. Claude can now create and edit actual Excel spreadsheets, PowerPoint decks, documents, and PDFs from uploaded data and research, or from scratch.
Microsoft AI launches its first in-house models. Microsoft introduced its MAI lineup: MAI-Voice-1, a fast speech generator used in Copilot features and available to try in Copilot Labs, and MAI-1-preview, a 15,000‑GPU‑trained text model that’s slated for everyday instruction-following in Copilot.
OpenAI to route sensitive conversations to GPT-5, introduce parental controls. The company plans to detect signs of acute distress and automatically route those chats to slower “reasoning” models like GPT‑5, while adding parental controls—notifications, age‑appropriate behavior rules, and options to disable memory and chat history—as part of a 120‑day safety initiative.
OpenAI tests new tools for effort control and chat branching. An effort selector (with options reportedly going up to a “max” setting of 200 “juice”) and a “branch from here” feature are being tested to help Pro, Enterprise, and some team users control computation per task and explore alternate conversation paths.
DuckDuckGo adds access to advanced AI models to its subscription plan. Subscribers to the $9.99/month plan can use newer, higher-capability models like GPT-4o, GPT-5, Claude Sonnet 4, and Llama Maverick through Duck.ai at no extra fee, with DuckDuckGo pitching a more privacy-focused way to access multiple providers.
Switzerland releases open-weight AI model. Apertus, an open-source LLM (8B and 70B), has its code, weights, and training details on Hugging Face. Trained on public-source data spanning over 1,800 languages, with website opt-outs honored, it was developed to comply with EU copyright rules and voluntary AI guidelines.
Google Vids adds AI avatars to its video editor and launches a consumer version. The editor now supports AI avatars for script-to-video creation, automatic transcript trimming to remove filler words and pauses, and image-to-video generation for short clips, with features rolling out to certain Workspace and Google AI Pro/Ultra subscribers; a limited free consumer tier omits AI tools.
Google’s NotebookLM now lets you customize the tone of its AI podcasts. Users can choose formats like “Deep Dive,” “Brief,” “Critique,” or “Debate,” pick length, and select new AI voices for the generated Audio Overviews, with a global rollout underway.
Business
China’s Unitree heats up humanoid robot race as IPO valuation reportedly hits $7 billion. A planned IPO would follow rapid revenue growth and backing from investors like Geely, Alibaba, and Tencent, potentially making Unitree one of the first profitable Chinese humanoid-robot specialists to list as Beijing pushes industrial-scale development.
Anthropic raises $13B Series F at $183B valuation. Anthropic says the funding will accelerate enterprise adoption, international expansion, and safety research as it scales rapidly—reporting ARR rising from $1B to $5B, over 300,000 business customers, and significant revenue from its Claude Code vibe-coding product.
ASML to invest $1.5 billion in Mistral at over $11 billion valuation. The deal would make ASML the largest shareholder in Mistral, add a board seat, and fund the startup’s expansion, furthering Europe’s push for homegrown AI capabilities.
OpenAI acquires product testing startup Statsig and shakes up its leadership team. In an all‑stock $1.1 billion deal, OpenAI will bring Statsig’s experimentation platform and founder Vijaye Raji—who will become CTO of Applications—into the company to accelerate product engineering for ChatGPT, Codex, and future apps, alongside leadership changes that create new science and B2B-focused roles.
OpenAI reorganizes research team behind ChatGPT’s personality. The roughly 14-person Model Behavior team will be folded into OpenAI’s Post Training group; leader Joanne Jang is leaving to start a new OAI Labs research team focused on alternative AI interfaces.
Apple Plans AI-Powered Web Search Tool for Siri to Rival OpenAI, Perplexity. Apple is developing World Knowledge Answers, an AI-powered web search tool for Siri slated for next year, which may later extend to Safari and Spotlight.
Cognition AI defies turbulence with a $400M raise at $10.2B valuation. Founders Fund led the round, valuing the company amid rapid revenue growth to $73 million ARR and controversy over an intense workplace culture that included layoffs and buyout offers tied to extreme work-hour expectations.
Perplexity Launches Comet Plus, Shares Revenue With Publishers. For $5 a month, Comet Plus directs most subscriber payments to participating publishers based on human visits, search citations, and AI agent actions—while Perplexity retains a small compute fee—and grants subscribers direct site access, answers from partners, and agent workflows tied to the Comet browser.
Amazon has mostly sat out the AI talent war. This internal document reveals why. An internal memo cites strict pay bands, a frugal culture, and rigid return-to-office “hub” rules as factors making Amazon less competitive for top generative-AI talent, prompting plans to adjust recruiting, compensation, and location policies.
You.com Raises $100 Million to Grow AI Search. The funds will expand compute, talent, and scraping/indexing infrastructure to support enterprise and consumer customers using You.com’s AI search and agent APIs at scale.
Waymo expands to Denver and Seattle with its Zeekr-made vans. Initial operations in Denver and Seattle will use Jaguar I-Pace and Zeekr vans with human drivers while autonomous systems are tested; robotaxi service is planned for Denver next year and for Seattle once permitted.
Research
Fantastic Pretraining Optimizers and Where to Find Them. Benchmarking eleven optimizers with careful hyperparameter tuning across model sizes up to 1.2B and four data-to-model ratios finds that reported 2× speedups over AdamW largely stem from weak baselines; matrix-based methods offer modest (~1.3×) gains mainly at smaller scales, with benefits shrinking as scale and data-to-model ratios increase. Optimizer rankings also depend on the training regime, so fair comparisons must be made at the end of training.
An AI system to help scientists write expert-level empirical software. A large language model combined with tree search generates empirical scientific software across domains, producing methods that outperform those developed by humans.
Are bad incentives to blame for AI hallucinations? The authors argue that metrics rewarding exact accuracy encourage confident guessing and thus hallucinations; they recommend scoring schemes that penalize confident errors and reward appropriate uncertainty.
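The incentive argument can be made concrete with a toy scoring comparison (my own illustration, not the paper’s actual setup): under exact-match accuracy, a model that always guesses strictly beats one that abstains when unsure, while adding a penalty for confident errors flips the ordering.

```python
# Toy expected-score model for the incentive argument.
# p is the model's chance that a guess is right on questions it is
# genuinely unsure about (a hypothetical number for illustration).

def expected_score(p, guess, wrong_penalty):
    """Expected score per unsure question.

    guess=True  -> answer anyway:  p * 1 + (1 - p) * (-wrong_penalty)
    guess=False -> abstain ("I don't know"), which always scores 0
    """
    return p * 1 + (1 - p) * -wrong_penalty if guess else 0.0

p = 0.3  # hypothetical: guesses are right 30% of the time

# Exact-match accuracy (no penalty for wrong answers): guessing wins.
print(expected_score(p, True, wrong_penalty=0))   # 0.3 > 0.0 for abstaining

# Penalize confident errors (-1 per wrong answer): abstaining wins.
print(expected_score(p, True, wrong_penalty=1))   # ≈ -0.4 < 0.0 for abstaining
```

Under the first metric, the rational policy is to always guess, which is exactly the behavior that surfaces as hallucination; the penalty makes abstention the better strategy whenever confidence is low.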
RL's Razor: Why Online Reinforcement Learning Forgets Less. The work shows that online RL's on-policy updates bias it toward KL-minimal solutions, i.e., policies that stay close to the base model on the new-task distribution, and that this small policy shift predicts reduced forgetting of prior capabilities.
Concerns
OpenAI targets another nonprofit in surging campaign against critics. Subpoenas demand detailed records of The Midas Project’s funders, communications with Elon Musk, Meta, and others, plus internal work on OpenAI’s governance and restructuring—moves critics call a broad fishing expedition that could chill nonprofit advocacy.
Anthropic will start training its AI models on chat transcripts. Users must opt out by Sept. 28 if they don’t want new chat and coding session content—retained for up to five years—used to train Anthropic’s consumer Claude models.
Therapists are secretly using ChatGPT. Clients are triggered. Reports describe some therapists covertly using ChatGPT during sessions or to draft communications, raising ethical and privacy concerns and, in some cases, damaging patient trust.
Policy
FTC to Review AI Chatbot Risks With Focus on Privacy Harms. The study will examine how chatbot providers collect, store, and share user data and assess privacy risks and other harms—particularly to children—stemming from interactions with services from OpenAI, Google, and Meta.