NVIDIA unveiled Nemotron 3 Nano Omni, an open multimodal model unifying video, audio, image, and text in a single model rather than chaining separate specialists.
The model tops 6 leaderboards for document intelligence, video, and audio understanding, and is up to 9x more efficient than multi-model pipelines on agent tasks.
Early adopters include Foxconn, Palantir, H Company, Docusign, Oracle, Dell, and Infosys; H Company notes agents can now interpret full HD screen recordings in real time.
OpenAI open-sourced Symphony, a spec that turns an issue tracker (Linear) into an always-on control plane for Codex coding agents.
Teams using Symphony saw a 500% increase in landed PRs by keeping agents continuously aligned to the live backlog without manual task-passing.
Symphony is designed to be tool-agnostic and extends the Coding Agent Wars thread — now with an official open orchestration layer any platform can adopt.
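The pattern Symphony describes can be illustrated with a minimal sketch: poll the live backlog and hand open issues to agents, with no manual task-passing. All names here (`fetch_backlog`, `dispatch_to_agent`, the `Issue` fields) are hypothetical stand-ins, not Symphony's actual spec or the Linear API.

```python
import time
from dataclasses import dataclass

@dataclass
class Issue:
    id: str
    title: str
    status: str  # e.g. "backlog", "in_progress", "done"

def fetch_backlog():
    """Hypothetical stand-in for querying a Linear-like tracker."""
    return [
        Issue("ENG-101", "Fix flaky login test", "backlog"),
        Issue("ENG-102", "Add retry to webhook sender", "backlog"),
    ]

def dispatch_to_agent(issue):
    """Hypothetical stand-in for handing a task to a coding agent."""
    issue.status = "in_progress"
    return f"agent started on {issue.id}: {issue.title}"

def control_loop(poll_once=True):
    """Keep agents aligned to the live backlog, not a manual queue."""
    results = []
    while True:
        # Dispatch every issue still sitting in the backlog.
        for issue in fetch_backlog():
            if issue.status == "backlog":
                results.append(dispatch_to_agent(issue))
        if poll_once:
            return results
        time.sleep(30)  # re-poll so agents track the tracker, not a snapshot
```

The key design point is that the tracker, not a human dispatcher, is the source of truth: agents stay aligned because every loop iteration re-reads the live backlog.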
Elon Musk testified April 28–30 in Oakland federal court, accusing Sam Altman of betraying OpenAI's nonprofit mission by converting it to for-profit.
Cross-examination revealed Musk did not read 'fine print' about the conversion and waited until 2024 to sue, undermining his own timeline of grievances.
The trial exposes founding emails and texts between Musk, Altman, and Brockman, and will define whether AI nonprofits can restructure without donor consent.
Alphabet (+21.8%), Microsoft (+18%), Meta (+33%), and Amazon (AWS +28%) all beat revenue estimates in simultaneous April 29 reports.
Google Cloud hit $20B in quarterly revenue; Microsoft's AI run rate surpassed $37B (+123% YoY); Meta raised its 2026 capex guidance to $145B for superintelligence.
Combined hyperscaler AI infrastructure spending now tracks $650–700B for 2026, with Alphabet guiding 2027 capex to increase further.
Connecticut's Senate passed SB5 32-4, a 71-page bill regulating frontier model developers, companion chatbots, AI in employment decisions, and online platform provenance.
The bill establishes whistleblower protections for frontier model employees and consumer disclosure requirements for AI subscriptions.
Now heading to the state House, SB5 is one of the broadest US state AI regulation attempts since California's SB 1047 and faces federal preemption pressure.
A forensic analysis revealed that HauhauCS's uncensored models (5M+ monthly downloads) are plagiarized forks of Heretic's models, with the AGPL license notices stripped and the weights relicensed under noncommercial terms.
The Heretic author confirmed the theft; the models remain live on HuggingFace pending removal requests.
The case exposes a systematic gray market of license-laundering in the open-weight community and spotlights AGPL enforcement gaps on model hosting platforms.
Launched at Google Cloud Next '26, the Gemini Enterprise Agent Platform consolidates Vertex AI model selection, agent building, DevOps, and governance into one interface.
CEO Sundar Pichai announced 75% of all new Google code is now AI-generated, with the company shifting from assisted coding to fully agentic workflows.
The platform integrates third-party data connectors, identity controls, and agentic SecOps features that reduced a typical 30-minute alert triage to 60 seconds.
Anthropic opened Claude Security (formerly Claude Code Security) to all Enterprise customers, using Opus 4.7 to scan codebases, find vulnerabilities, and generate patches in one session.
Features include scheduled scans, confidence ratings, Slack/Jira webhooks, and CSV/Markdown export — no API integration required.
Security partners embedding Opus 4.7 include CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, TrendAI, and Wiz.
Amazon launched the Quick desktop app — a personal AI that stays connected to local files, calendar, email, Slack, Jira, Google Workspace, and Microsoft 365 across sessions.
Quick can create live dashboards, intelligent apps, presentations, and images; users sign up with just an email address.
Positioned as a direct competitor to Microsoft Copilot and Anthropic's Claude Cowork, Quick is Amazon's entry into the persistent workplace AI assistant market.
OpenAI launched opt-in Advanced Account Security for high-risk users including journalists, activists, government officials, and dissidents.
The mode enables passkey-only login, disables email and SMS recovery, enforces short sessions, and automatically excludes accounts from model training.
The feature also covers Codex and comes as OpenAI faces continued scrutiny after the April macOS security incident.
Anthropic released BioMysteryBench, a new evaluation suite specifically designed to test LLM capabilities on real-world bioinformatics research tasks.
The benchmark is part of Anthropic's broader Claude Science Evaluation push and the first domain-specific eval they have published for life sciences.
Results show Claude performing well on gene annotation and protein structure tasks, extending the AI science tools thread from the GPT-Rosalind launch in April.
Anthropic's Societal Impacts team published the largest qualitative study of AI personal guidance use, covering emotional support, life decisions, and sensitive conversations.
The study surfaces patterns in how users frame personal requests, the contexts where they prefer AI over humans, and the risks of dependency formation.
The results directly inform Anthropic's ongoing work on Claude's character and will shape forthcoming policy updates on how Claude handles mental health topics.