OpenAI’s Pentagon Pact: Rushed Deal With Safety “Red Lines” Draws Backlash
OpenAI has publicly shared more details about its recently struck agreement with the U.S. Department of Defense allowing its AI models to be used on classified military networks. CEO Sam Altman admitted the deal was “definitely rushed” and acknowledged the optics could be poor, especially since rival AI firm Anthropic had failed to reach similar terms with the Pentagon and was being pushed out of federal use. In its clarifications, OpenAI said the contract explicitly bans use of its technology for mass domestic surveillance, fully autonomous weapons, and high‑stakes automated decisions such as social‑credit systems. OpenAI also pointed to layered safeguards, including retaining control over the safety stack, deploying via cloud infrastructure, and maintaining oversight by cleared personnel, to enforce those red lines, and argued these protections go beyond typical usage policies.

Despite the company’s emphasis on safety, the deal has sparked significant controversy. Critics point out that the contract’s restrictions center on private data and lean on existing laws and deployment methods rather than imposing strong new limits on the use of publicly available information, raising concerns that the technology could still be applied in ways many people find ethically troubling. The quick announcement and strong reaction, including online “Cancel ChatGPT” campaigns and legal objections from rival Anthropic, highlight the fraught debate over AI’s role in military and national security applications.
Pentagon Spat Boosts Claude: Anthropic’s AI Climbs App Store Charts
Anthropic’s AI chatbot Claude has surged in popularity in the wake of a high‑profile disagreement with the U.S. Department of Defense over how its technology should be used. After the company rejected Pentagon demands to remove contractual safeguards limiting Claude’s use for mass surveillance and fully autonomous weapons, the U.S. government moved to phase out federal use of Anthropic products. That public standoff appears to have driven strong user interest, pushing Claude up the Apple App Store’s free‑apps ranking, where it climbed to the No. 2 spot in the U.S. and briefly challenged OpenAI’s ChatGPT for the top position as downloads spiked.

The sudden rise underscores how broader debates over AI ethics and government contracts can influence consumer behavior: many users rallied behind Claude after the Pentagon dispute became widely discussed online. Analytics data show Claude was well outside the top 100 free apps at the start of February before its rapid ascent in the final week of the month.
Pentagon vs. Anthropic: AI Ethics Clash Escalates Into “Supply Chain Risk” Showdown
The U.S. Department of Defense has moved to label AI startup Anthropic a supply chain risk after months of increasingly tense negotiations over how its AI models should be used in military settings. At the heart of the dispute is Anthropic’s refusal to give the Pentagon unrestricted rights to apply its Claude AI model to uses such as mass domestic surveillance and fully autonomous weapons. The Pentagon, backed by a directive from President Donald Trump, has now announced that federal agencies must phase out Anthropic technology and that, once the designation takes effect, defense contractors and partners can no longer work with the company, a status typically reserved for foreign or adversarial threats. Anthropic says it will challenge the designation in court, calling it legally unsound and dangerous for U.S. tech firms.

The move has sent shockwaves across the tech and defense sectors, as Anthropic was one of the few firms previously approved to provide AI tools to the U.S. military. Critics argue that forcing companies to drop ethical safeguards or face exclusion from government contracts could chill innovation and set a precedent for how private technology providers are treated in national security–related procurement. Meanwhile, major AI companies, Pentagon policy experts, and Silicon Valley observers are watching closely, with some suggesting that the episode could redefine the balance of power between emerging AI developers and the federal government.
ChatGPT Zooms to 900 Million Weekly Users as AI Goes Mainstream
OpenAI’s flagship AI chatbot ChatGPT has achieved a major new milestone, reaching 900 million weekly active users worldwide — a significant jump from the roughly 800 million reported late last year and putting it close to the 1 billion mark. The company also disclosed that it now has about 50 million paying subscribers, underscoring rapid growth both in general use and in revenue‑generating tiers. This surge reflects ChatGPT’s widespread adoption for tasks like writing, planning, learning, and building digital tools, and highlights how central AI chat interfaces are becoming in everyday digital interaction.

OpenAI shared the updated usage numbers alongside news of an unprecedented $110 billion private funding round, led by major investors aiming to support scaling infrastructure and continued product development. The expanding user base and strong monetization signals reinforce ChatGPT’s dominant position in the AI chatbot space, even as competition and use cases evolve rapidly.
Bumble Uses AI to Help You Look Your Best — and Boost Matches
The dating app Bumble is rolling out a suite of AI‑powered tools designed to help users optimize their profiles and improve their chances of meaningful connections. One major feature, AI‑suggested profile guidance, will be available globally and offers personalized, actionable feedback on users’ bios and prompts, helping people write clearer, more engaging descriptions that better reflect their personalities. In the U.S., Bumble is also introducing an AI photo feedback tool that analyzes profile pictures and suggests improvements, such as choosing clearer shots of your face, adding outdoor or group photos, or avoiding images where sunglasses or other elements obscure your features. These tools aim to help users present themselves authentically and confidently, though the suggestions largely reflect common‑sense dating tips many people already pick up informally from friends.

Alongside the AI updates, Bumble is testing a non‑AI feature in Canada called Suggest a Date, allowing users to signal they’re ready to move a conversation offline when chats stall. The changes reflect a broader trend in the online dating industry, as platforms increasingly use AI to help users improve profiles and matchmaking outcomes amid growing competition.
Figma + Codex: A New Era of Seamless Design‑to‑Code Workflows
Design platform Figma has announced a deeper partnership with OpenAI by integrating the AI coding assistant Codex directly into its design environment, expanding on similar work it recently did with Anthropic’s Claude Code. Using Figma’s Model Context Protocol (MCP) server, the integration lets users fluidly move between visual design files and executable code — designers can push designs straight into Codex for implementation, and engineers can bring live, code‑based interfaces back into Figma as editable designs. This bidirectional workflow aims to reduce friction in the traditional design‑to‑development handoff and make both sides more productive by keeping context intact across tools.

Figma’s chief design officer highlighted that the combined setup helps teams “build on their best ideas — not just their first idea” by blending creative design and code execution without forcing users to step outside their preferred environments. Codex’s role in this context expands its reach beyond standalone coding tools into core aspects of product development, allowing multidisciplinary teams to iterate faster and more collaboratively on UI and product experiences.