Last Week in AI #331 - Nvidia announcements, Grok bikini prompts, RAISE Act
Nvidia Details New A.I. Chips and Autonomous Car Project, Grok is undressing anyone, NY passes AI regulation
Nvidia Details New A.I. Chips and Autonomous Car Project With Mercedes
Related:
Nvidia launches Alpamayo, open AI models that allow autonomous vehicles to ‘think like a human’
Nvidia launches Vera Rubin AI computing platform at CES 2026
At CES 2026, Nvidia CEO Jensen Huang announced the company’s new AI chip, Vera Rubin, which will begin shipping to customers like Microsoft and Amazon in the second half of the year. The chip represents a major efficiency leap, requiring only one-quarter as many chips as its predecessor Blackwell for training AI models and delivering inference at one-tenth the cost. This advancement is critical for Nvidia to maintain its dominance in the AI chip market (over 90% share) amid rising competition from AMD and Google, while also addressing the soaring energy demands of AI data centers.
Huang also unveiled Nvidia’s ambitious push into autonomous vehicles with Alpamayo, an open-source AI model and simulation platform that enables vehicles to reason through complex driving scenarios like humans. Mercedes-Benz will begin shipping CLA cars in early 2026 with Nvidia’s self-driving technology, which is comparable to Tesla’s Autopilot. The 10-billion-parameter Alpamayo 1 model uses chain-of-thought reasoning to navigate edge cases and explain its driving decisions, marking what Huang called “the ChatGPT moment for physical AI.” This diversification effort comes as Nvidia reported record financial performance, with $31.9 billion in quarterly profit and expectations of $500 billion in annual sales.
Grok is undressing anyone, including minors
Related:
France to investigate deepfakes of women stripped naked by Grok
X users are asking Grok to put this girl in a bikini, and Grok is happily obliging
xAI’s Grok recently rolled out an “Edit Image” tool on X that lets any user instantly modify others’ photos without the original poster’s consent or notification, triggering a flood of nonconsensual sexualized edits. Users widely prompted Grok to “put this girl in a bikini,” “undress,” or “remove the skirt,” and the bot often complied, including with images of minors and toddlers, despite xAI’s acceptable use policy banning pornographic depictions of real people. Examples included edited photos of teens in skimpy clothing and public figures like Kim Jong Un, Donald Trump, and Priti Patel in bikinis; even Musk amplified the trend with bikini memes.
Grok’s replies alternated between flippant acknowledgment and canned apologies for “lapses in safeguards,” while xAI responded to press queries with “Legacy Media Lies,” and Grok’s public media feed continued surfacing these outputs. Regulatory responses escalated quickly. France’s prosecutor added the Grok deepfake surge to an existing investigation into X, with offenses punishable by up to two years in prison and a €60,000 fine; multiple French ministers and the children’s commissioner flagged “manifestly illegal content” to Pharos for removal. India’s IT ministry ordered X to immediately restrict Grok from generating nudity/sexualized content and to file an action-taken report within 72 hours, warning that noncompliance could strip X’s safe harbor and trigger criminal liability.
Meanwhile, X Safety publicly blamed users for prompting CSAM, saying violators will be suspended and referred to law enforcement, but announced no technical fixes to prevent Grok from producing such content, even though generative systems are non-deterministic and can be made to refuse such requests.
New York governor Kathy Hochul signs RAISE Act to regulate AI safety
New York Governor Kathy Hochul signed the RAISE Act, making New York the second state after California to enact major AI safety legislation. The law requires “large AI developers” to publicly disclose safety protocols and report “safety incidents” to the state within 72 hours, and it establishes a new office within the Department of Financial Services to monitor AI development. Companies that fail to submit required reports or make false statements face fines up to $1 million, rising to $3 million for subsequent violations.
OpenAI and Anthropic backed the bill while urging federal standards, with Anthropic noting that two major states now have AI transparency frameworks. Some tech figures are actively opposing the measure: a super PAC backed by Andreessen Horowitz and OpenAI president Greg Brockman is targeting Assemblyman Alex Bores, who co-sponsored the bill with Senator Andrew Gounardes; Gounardes called the law the “strongest AI safety law in the country.” The law explicitly references California’s approach as a benchmark.
Amazon’s AI assistant comes to the web with Alexa.com
Amazon launched Alexa.com to bring its overhauled AI assistant, Alexa+, to the web for Early Access users, complementing its presence on Echo devices and the updated Alexa mobile app. The site offers a chatbot-style interface for tasks like exploring complex topics, content creation, and trip planning, while emphasizing household workflows: smart home control, calendar and to-do updates, dinner reservations, grocery additions to Amazon Fresh or Whole Foods, recipe discovery and walkthroughs, and personalized movie-night recommendations. Alexa+ is also adding service integrations including Angi, Expedia, Square, and Yelp, alongside existing partners like Fodor’s, OpenTable, Suno, Ticketmaster, Thumbtack, and Uber.
Other News
Tools
Z.AI launches GLM-4.7, new SOTA open-source model for coding. The model improves reasoning, coding, and multimodal performance with expanded context handling, agent-style tool use, and API access for real-time or batch integration.
MiniMax Releases M2.1: An Enhanced M2 Version with Features like Multi-Coding Language Support, API Integration, and Improved Tools for Structured Coding. The update improves code quality, instruction following, multilingual coding and app/web development performance, agent and tool compatibility, context management, and response clarity while reducing latency and token usage compared with the prior M2.
Hyundai and Boston Dynamics unveil humanoid robot Atlas at CES. The company says a production version of Atlas for assembling cars is already being built and will be deployed at Hyundai’s Savannah EV plant by 2028, and Boston Dynamics will integrate Google DeepMind AI into its robots.
LG says its CLOiD home robot will be folding laundry and making breakfast at CES. It can fetch items, operate appliances, and handle laundry, including folding and stacking, using two seven-degree-of-freedom arms, spoken and facial communication, and integration with LG’s ThinQ smart-home ecosystem.
Business
Meta Buys AI Startup Manus, Adding Millions of Paying Users. Meta Platforms is acquiring Manus, a Singapore-based AI startup with Chinese founders that builds AI agents for research and analysis, for more than $2 billion in one of the highest-profile acquisitions of an Asian-developed AI product by a major U.S. tech company.
OpenAI bets big on audio as Silicon Valley declares war on screens. The company has consolidated teams to rebuild audio models and is planning an audio-first personal device and a family of companion-like hardware (glasses, speakers, wearables) with more natural, interruption-aware conversational abilities expected by 2026.
Uber reveals the design of its robotaxi at CES 2026. It’s a modified Lucid Gravity EV outfitted with lidar, radar, and high-res cameras around a roof-mounted halo, an LED passenger display, and a six-seat interior with rider controls and a real-time route/decision display, and it is due to begin production later this year after ongoing San Francisco testing.
AI godfather says Meta’s new 29-year-old AI boss is ‘inexperienced’ and warns of staff exodus. LeCun said Wang lacks experience in running and attracting research teams and warned that Meta’s sidelining of its GenAI group has already prompted departures and could trigger further staff exits.
Yann LeCun calls Alexandr Wang ‘inexperienced’ and predicts more Meta AI employee departures. He warned that Wang lacks research experience and understanding of researchers’ practices, criticized Meta’s handling of Llama results, and said Zuckerberg’s focus on LLM hires could prompt further departures from Meta AI.
Research
Recursive Language Models. The approach treats long inputs as an external environment that the model can programmatically inspect and recursively call itself on, enabling handling of inputs far beyond the native context window with improved performance and similar or lower inference cost.
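The core loop is easy to picture as a short recursion. Below is a minimal sketch, assuming a generic `llm` completion call (a placeholder, not the paper’s actual interface) and simple halving in place of the model’s own programmatic inspection of the input:

```python
# Hedged sketch of a recursive language model call. `llm` is a stand-in
# for any chat-completion API; the actual method lets the model inspect
# and slice the long input programmatically rather than fixed halving.
def llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

def rlm(query: str, document: str, max_chars: int = 4000) -> str:
    # Base case: the span fits in the context window, so answer directly.
    if len(document) <= max_chars:
        return llm(f"Context:\n{document}\n\nQuestion: {query}")
    # Recursive case: answer over each half, then merge with one more call,
    # so no single call ever sees more than max_chars of raw context.
    mid = len(document) // 2
    left = rlm(query, document[:mid], max_chars)
    right = rlm(query, document[mid:], max_chars)
    return llm(f"Merge these partial answers to '{query}':\n1) {left}\n2) {right}")
```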
Dynamic Large Concept Models: Latent Reasoning in an Adaptive Semantic Space. It segments token streams into learned variable-length concepts, pools them into a compressed sequence for deep concept-level reasoning, and reconstructs token predictions via causal cross-attention to reallocate compute toward semantically dense regions and improve efficiency and reasoning performance.
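The compression step is the easiest part to picture. A toy sketch, assuming mean-pooling over given segment boundaries (the paper’s learned segmentation and causal cross-attention decoder are not shown):

```python
# Toy concept pooling: token states are grouped into variable-length
# segments, and each segment collapses to one "concept" vector, so the
# expensive reasoning stack runs over a much shorter sequence.
import numpy as np

def pool_concepts(token_states, boundaries):
    """token_states: (seq_len, d) array; boundaries: end index of each segment."""
    concepts, start = [], 0
    for end in boundaries:
        concepts.append(token_states[start:end].mean(axis=0))
        start = end
    return np.stack(concepts)

tokens = np.random.randn(12, 64)              # 12 token states, width 64
concepts = pool_concepts(tokens, [3, 7, 12])  # 3 variable-length segments
print(concepts.shape)                         # (3, 64) -- compressed sequence
```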
Decoupling the “What” and “Where” With Polar Coordinate Positional Embeddings. It shows that a minor modification to Rotary Positional Embeddings—Polar Coordinate Positional Embeddings (PoPE)—separates content (“what”) and position (“where”) in key-query matching, improving data efficiency, accuracy, and context-length generalization compared with RoPE.
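For context, the RoPE baseline that PoPE modifies is easy to write down. The sketch below implements standard RoPE (the exact PoPE variant is in the paper) and shows where content and position become entangled in the query-key dot product:

```python
# Standard RoPE: rotate consecutive feature pairs by position-dependent
# angles, so q·k depends on relative offset and content together. PoPE's
# contribution is to disentangle those two factors.
import numpy as np

def rope(x, pos, theta=10000.0):
    """x: (d,) with even d; returns x with pairs rotated by pos-scaled angles."""
    d = x.shape[0]
    freqs = theta ** (-np.arange(0, d, 2) / d)  # one rotation rate per pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

q, k = np.random.randn(8), np.random.randn(8)
score = rope(q, pos=5) @ rope(k, pos=2)  # mixes "what" (q, k) with "where" (5 - 2)
```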
Nested Learning: The Illusion of Deep Learning Architectures. It proposes “Nested Learning,” a framework that models learning as layered and parallel optimization problems and introduces expressive optimizers, a self-modifying sequence model, and a continuum memory system (implemented as the “Hope” module) to improve in-context learning and continual learning capabilities.
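As a hedged intuition (a toy stand-in, not the paper’s Hope module), nested optimization can be pictured as parameters learning on different timescales, with a fast inner problem feeding a slower outer one:

```python
# Toy two-level learner tracking a drifting signal: a fast component
# adapts every step, while a slow component consolidates at a lower
# frequency, a crude stand-in for a continuum of memory timescales.
import numpy as np

fast, slow = 0.0, 0.0
FAST_LR, SLOW_LR, CONSOLIDATE_EVERY = 0.5, 0.2, 10

for step in range(1, 101):
    target = np.sin(step / 10.0)   # slowly drifting signal to track
    error = target - (fast + slow)
    fast += FAST_LR * error        # inner problem: rapid in-context adaptation
    if step % CONSOLIDATE_EVERY == 0:
        slow += SLOW_LR * fast     # outer problem: absorb fast knowledge slowly
        fast *= 0.5                # partially reset fast memory after transfer

print(f"final estimate {fast + slow:+.3f} vs target {np.sin(10.0):+.3f}")
```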
Deep Delta Learning. The method replaces the fixed identity shortcut with a learnable, rank-1 Householder-style operator controlled by a direction vector and a scalar gate, enabling the network to interpolate between identity, projection, and reflection transformations and thereby alter the hidden-state Jacobian spectrum.
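The described operator has a clean closed form: with a unit direction v and scalar gate β, S(x) = x − β(v·x)v is the identity at β = 0, a projection at β = 1, and a Householder reflection at β = 2. A minimal sketch (shapes and naming are assumptions):

```python
# Gated rank-1 Householder-style shortcut: S(x) = x - beta * (v·x) v with
# unit v. Sweeping beta moves the map from identity through projection to
# reflection, reshaping the residual path's Jacobian along direction v.
import numpy as np

def delta_shortcut(x, v, beta):
    v = v / np.linalg.norm(v)       # unit direction (learned in the paper)
    return x - beta * (v @ x) * v

x = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 0.0, 1.0])
print(delta_shortcut(x, v, 0.0))    # [ 1.  2.  3.]  identity
print(delta_shortcut(x, v, 1.0))    # [ 1.  2.  0.]  projection off v
print(delta_shortcut(x, v, 2.0))    # [ 1.  2. -3.]  reflection across v's plane
```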
Concerns
John Carreyrou and other authors bring new lawsuit against six major AI companies. The suit alleges the six AI firms trained their models on pirated copies of authors’ books and seeks greater accountability and compensation than offered in the Anthropic settlement.
Sam Altman is hiring someone to worry about the dangers of AI. The hire will lead efforts to evaluate and mitigate risks from frontier AI capabilities—including mental-health harms, AI-enabled cyberweapons, biological misuse, and self-improving systems—by building a preparedness framework and safety pipeline.
Murder-suicide case shows OpenAI selectively hides data after users die. The family’s lawsuit alleges OpenAI disclosed only selected ChatGPT logs while withholding key conversations from days before the suspect’s suicide that could have shown the chatbot reinforced delusions blamed for the killings.
Analysis
2025: The year in LLMs. The year saw reasoning-focused models and tool-enabled agents, especially asynchronous coding agents like Claude Code and Codex, drive major practical gains in long multi-step tasks, coding, search, and image editing. It also brought shifting revenue models, open-weight competition (notably from Chinese labs), safety concerns such as prompt injection and the normalization of deviance, and new industry standards and pricing dynamics.
AI Slop Report: The Global Rise of Low-Quality AI Videos. Kapwing’s analysis of trending YouTube channels and a new-user Shorts feed suggests that a sizable share of popular videos—roughly 21–33% on their test feed—are low-quality AI-generated “slop” or brainrot, with some channels earning millions and certain countries (Spain by subscribers, South Korea by views) showing particularly high impact.