The news this week: major new initiatives show governments are taking AI risks seriously — but there’s still a long way to go.
From DC to London and beyond, leaders are waking up to the potential dangers AI poses on the horizon.
Take the White House. This week, President Joe Biden announced a sweeping new executive order to address the risks posed by advanced AI systems, including risks to national security.
The new order includes the following:
- A requirement for AI labs working on the most powerful models to share information about safety tests and training plans
- Direction to the National Institute of Standards and Technology to create standards for red teaming and assessing the safety of powerful new AI models
- Efforts to reduce the risk of AI-related biological threats and to mitigate cybersecurity vulnerabilities
- Provisions on fraud, privacy, equity, civil rights, workers’ rights, and international coordination
Vice President Kamala Harris also announced the creation of the United States AI Safety Institute this week, which will help evaluate the dangerous capabilities of AI models and mitigate the risks they pose.
The US government is also making a big push to hire more AI professionals, and it has extended the deadline for applying to the Presidential Innovation Fellowship as part of that push.
And it’s not just the US that’s taking action on AI risks. In the UK, Prime Minister Rishi Sunak convened the AI Safety Summit this week to bring together national leaders, AI companies, civil society groups, and researchers to collaborate on managing the risks responsibly.
There already appears to be some real progress from these efforts:
- Representatives from the US, the UK, the EU, China, Nigeria, Brazil, Japan, the Philippines, Indonesia, and more than a dozen other countries joined a declaration acknowledging the major risks posed by frontier AI models.
- Yoshua Bengio, a leading scientist in the field of AI, has been announced as the chair of a new “State of the Science” report on the best evidence-based research we have about the risks and capabilities of frontier models.
- Ahead of the summit, the UK government announced that seven of the leading companies developing AI have outlined safety policies across nine different areas.
- Prominent AI scientists from China, the US, the UK, the EU, and elsewhere signed a statement calling for global coordination on AI safety and governance.
These are only preliminary steps; much more work is needed to drive down the risks from AI.
But these are promising signs that mitigating catastrophic AI risk is tractable. We’ve been enthusiastic about people pursuing careers in AI policy and governance for years, so we’re excited to see increasing opportunities and progress in this area.
This blog post was first released to our newsletter subscribers.
Join over 350,000 newsletter subscribers who get content like this in their inboxes every two weeks — and we’ll also mail you a free book!