- AI capabilities rapidly evolved from chatbots to executive assistants, triggering market sell-offs
- Anthropic was blacklisted by the Trump administration and abandoned its core safety pledge
- OpenAI reversed its stance on monetization, running ads its CEO previously opposed
- AI safety has become a pivotal political issue, with $125 million being spent to influence races
📖 Full Retelling
In the first two months of 2026, generative artificial intelligence has undergone a rapid scaling of capabilities, evolving from simple chatbots to full-blown executive assistants and triggering an indiscriminate sell-off across the software, legal, insurance, and cybersecurity sectors worldwide. Companies like Nvidia, Anthropic, and OpenAI are accelerating development even as safety researchers resign and political tensions mount over regulatory approaches. Nvidia CEO Jensen Huang characterized this period as AI's third major inflection point, noting that agentic systems can now reason, take on tasks, and perform actual work, fundamentally changing how businesses operate and creating market volatility. The rapid advancement has coincided with the dismantling of safety measures: Anthropic was blacklisted by the Trump administration after the startup refused to comply with Pentagon demands about the use of its technology, and it then abandoned its core safety pledge in favor of what it calls "nonbinding, publicly declared targets." Meanwhile, OpenAI has reversed its previous stance on monetization, running ads that CEO Sam Altman once said would be implemented only as a last resort, and researchers from both companies have resigned in recent weeks, warning of the risks of unchecked AI development. The tension surrounding AI safety has emerged as a pivotal political issue: New York State Assemblyman Alex Bores, who authored the nation's first major AI safety law, has become a target in his Congressional race, facing a $125 million super PAC backed by OpenAI cofounder Greg Brockman, Andreessen Horowitz, and Palantir's Joe Lonsdale, who have made clear their opposition to stricter AI regulation.
🏷️ Themes
AI Advancement, Regulatory Challenges, Political Polarization, Corporate Ethics
In the first two months of 2026, generative artificial intelligence has undergone a rapid scaling of capabilities, going from chatbot to full-blown executive assistant and triggering an indiscriminate sell-off across sectors, hitting software, legal, insurance and cybersecurity stocks. "AI just went through its third inflection," Nvidia CEO Jensen Huang told CNBC's Becky Quick on Wednesday. "Now, with these agentic systems, we're having these agents able to reason, take tasks, and actually do work."

But the faster AI moves, the faster the safety nets are coming off. Anthropic was just blacklisted by the Trump administration after the AI startup refused to comply with the Pentagon's demands about the use of its technology. Anthropic was founded on the promise of building AI responsibly, but scrapped its core safety pledge this week in the midst of the Pentagon battle, replacing hard commitments with what it calls "nonbinding, publicly declared targets." It said part of the reason was competitors racing ahead without the same guardrails. OpenAI is now running ads that CEO Sam Altman once said the company would only monetize as a last resort. Researchers at both companies have resigned in recent weeks, warning of the risks.

The tension around AI safety has the potential to be a pivotal issue in the 2026 midterms, and one race is already signaling how it might shake out. New York State Assemblyman Alex Bores authored the first major AI safety law in the country and is now running for Congress. And he's become a target for the camp that's arguing for looser regulations. Bores is now facing a $125 million super PAC that counts OpenAI cofounder Greg Brockman, Andreessen Horowitz and Palantir's Joe Lonsdale as backers. "They've made clear they want to make an example here, that if they win this race, they're going to go to every member of Congress and say, don't you dare regulate AI, otherwise we'll spend $10 million against you," Bores said.
"This is moving very, very qu...