You’d think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case.
Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety-test their AI models, and it hollowed out the regulatory body that did that testing. In September 2024, California spiked a bill that would have forced more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the U.K. in 2023 was rebranded the “AI Action Summit” earlier this year, seemingly driven by a fear of falling behind on AI.
None of this would be so worrying if it weren’t for the fact that AI is showing some bright red flags: behavior that researchers describe as self-preserving and deceptive. Just as lawmakers are losing interest in AI safety, AI is looking harder to control.