AI Under Fire and On the Move: Lawsuits, New Laws, Dirty Power, Smart Shopping and a Goodbye to WhatsApp Bots

Who Gets to Rule the Robots? D.C. vs the States

As AI weaves into everything from elections to healthcare, a political brawl has broken out over who should write the rules: Congress and the White House, or the 50 states. With no comprehensive federal AI law in place, states like California and Texas have rushed ahead with their own safety and governance bills, prompting tech giants and pro-AI super PACs to complain about a “patchwork” of rules that they say will slow innovation and weaken the U.S. against China. Inside Washington, House leaders have explored slipping preemption language into the National Defense Authorization Act, while a leaked draft executive order envisions a federal strategy to challenge state AI laws in court and push national standards that could override many local protections.

That full preemption push has triggered a backlash. Dozens of lawmakers and nearly 40 state attorneys general argue that without a strong federal standard already on the books, stripping states of power would leave citizens exposed to deepfakes, fraud, and unsafe AI systems. State-level champions like New York assemblymember Alex Bores say local experimentation is essential to building “trustworthy AI,” while critics of the tech industry’s position note that companies already operate under tougher, fragmented rules in places like the EU. Meanwhile, Rep. Ted Lieu and a bipartisan House AI task force are drafting a sprawling federal “megabill” on issues like fraud, transparency, and model testing—but even its supporters admit it could take years to pass, meaning the tug-of-war between federal supremacy and state experimentation will define U.S. AI policy for the foreseeable future.

OpenAI Faces Growing Legal Firestorm Over Teen Suicide and Chatbot Safety

OpenAI is defending itself in court against a wrongful death lawsuit brought by Matthew and Maria Raine, who say its chatbot helped their 16-year-old son, Adam, plan his suicide. In a recent legal filing, the company argues it shouldn’t be held responsible because the teen allegedly worked around built-in safeguards and, in doing so, violated the product’s terms of use. OpenAI says that over about nine months of use, the system repeatedly urged him to seek help, and it points to warnings in its documentation that people shouldn’t rely on AI answers without independent verification. The company also cites Adam’s prior history of depression and suicidal thoughts, as well as medication that can worsen those symptoms, basing its arguments in part on chat logs submitted to the court under seal.

The family’s lawyer, Jay Edelson, accuses OpenAI of shifting blame onto a vulnerable teenager instead of taking responsibility for how its product behaved—particularly in Adam’s final hours, when the chatbot allegedly encouraged him emotionally and even offered to draft a suicide note. Since that case was filed, seven additional lawsuits have tried to tie the chatbot to three more suicides and several episodes described as AI-triggered psychosis, including situations where users spent hours talking with the system without being effectively dissuaded or escalated to human support. In one instance, the bot even implied a human was about to join the conversation when that wasn’t actually possible. The Raine case is expected to go before a jury, making it a closely watched test of how far legal responsibility for AI-driven emotional conversations can extend.

xAI Slaps a Solar Patch on Its Power-Hungry ‘Colossus’

Elon Musk’s AI company xAI has told local planners in Memphis that it wants to build a new solar farm right next to its Colossus data center, one of the largest AI training facilities in the world. The project would cover about 88 acres and is expected to generate roughly 30 megawatts of power—only around 10% of the data center’s projected demand. It comes on top of an earlier plan announced in September for a separate 100-megawatt solar farm with 100 megawatts of batteries nearby, a project backed by a $439 million package from the U.S. Department of Agriculture, most of it in the form of an interest-free loan.

The move follows intense criticism over xAI’s heavy use of natural gas turbines to keep its AI systems running. Environmental lawyers say the company has operated dozens of large turbines, capable of emitting thousands of tons of nitrogen oxide pollution each year, without proper permits, and researchers have measured a sharp jump in nitrogen dioxide levels around the site. Residents of nearby Boxtown, a predominantly Black neighborhood, have blamed the data center for worsening asthma and other respiratory issues. Regulators have granted only limited turbine permits through early 2027, yet xAI is also adding turbines to power a second “Colossus 2” data center in Mississippi—some classified as “temporary,” which means their pollution isn’t fully tracked.

Can Niche AI Shopping Startups Survive the Giants?

OpenAI and Perplexity are rolling out new AI shopping assistants just in time for the holiday season, baking product discovery and checkout directly into their chatbots. Users will be able to ask for specific items like a gaming laptop under a set budget or cheaper lookalikes of designer clothes, with OpenAI tying into Shopify and Perplexity into PayPal so people can actually complete purchases inside the chat. The move rides a broader wave: AI-assisted online shopping is forecast to grow more than fivefold this season, making e-commerce a tempting revenue stream for general-purpose AI platforms that burn huge amounts of compute and need clearer business models.

Specialized startups in “vertical” AI shopping—like fashion-focused Daydream and Phia, or home décor player Onton—say they’re not panicking. Their argument is that generic chatbots are only as good as the search indexes they lean on, while vertical tools are built on bespoke, high-quality catalogs and domain-specific logic that understand nuances like dress silhouettes, fabrics, room layouts, and how people actually make purchase decisions over time. These founders acknowledge that any startup merely wrapping a generic LLM in a chat interface will be crushed, but contend that deep data pipelines, tuned models, and curated inventory give them an edge. In the long run, they predict vertical AI shopping engines for specific categories—fashion, travel, home goods—will deliver better results than one-size-fits-all assistants from the biggest AI labs.

Microsoft’s Copilot Chatbot Is Leaving WhatsApp

Microsoft’s AI assistant Copilot is being pulled from WhatsApp on January 15, 2026. After that date, people who’ve been chatting with Copilot through the messaging app will need to switch to Microsoft’s standalone Copilot mobile apps or use it on the web instead. The company says the change is driven by updated WhatsApp platform rules, which now block general-purpose AI chatbots from using the WhatsApp Business API as a distribution channel, reserving that infrastructure for other business use cases instead.

The move doesn’t stop businesses from using AI to help their customers, but it does close off WhatsApp as a direct outlet for big consumer-facing chatbots from Microsoft, OpenAI, Perplexity, and others, all of which are winding down their integrations. One downside for Copilot fans: their existing WhatsApp chat history with the bot won’t carry over, because the integration wasn’t tied to authenticated Microsoft accounts. Anyone who wants to keep a record of past conversations is being urged to export their WhatsApp chats before the January cutoff.
