Google, Amazon, Meta agree to White House AI safeguards

Washington –

Companies leading the development of artificial intelligence technology, including Amazon, Google, Meta, and Microsoft, have agreed to meet a series of AI safeguards brokered by President Joe Biden’s administration.

The White House announced Friday that it has secured voluntary commitments from seven U.S. companies aimed at ensuring the safety of AI products before they go on sale. Some of the promises call for third-party oversight of the operation of commercial AI systems, but don’t specify who will audit the technology or hold companies accountable.

The surge in commercial investment in generative AI tools that can create compelling, human-like text and churn out new images and other media has sparked public fascination as well as concern over their ability to trick people and spread disinformation, among other dangers.

In a statement, the White House said that ChatGPT developer OpenAI and the startups Anthropic and Inflection are committing to security tests “partially conducted by independent experts” to guard against critical risks in areas such as biosecurity and cybersecurity.

The companies are also committing to methods for reporting vulnerabilities in their systems and to using digital watermarks to distinguish real images from AI-generated images, known as deepfakes.

The companies will also publicly report on the flaws and risks of their technology, including its impact on fairness and bias, the White House said.

The voluntary effort is intended to be an immediate means of addressing risks ahead of a long-term effort to get Congress to pass legislation regulating the technology.

Some advocates of AI regulation said Mr. Biden’s move was a start, but more was needed to hold companies and their products accountable.

“History shows that too many tech companies have failed to actually walk the talk on voluntary commitments to act responsibly and support strong regulation,” James Steyer, founder and CEO of the nonprofit Common Sense Media, said in a statement.

Senate Majority Leader Chuck Schumer of New York said he would introduce legislation to regulate AI. He has held numerous briefings with government officials to educate senators on an issue that has attracted bipartisan interest.

Many tech executives have called for regulation, and several visited the White House in May to meet with Mr. Biden, Vice President Kamala Harris and other officials.

But some experts and upstart competitors worry that the type of regulation taking shape could benefit well-funded first movers led by OpenAI, Google and Microsoft, while smaller companies face high costs to bring their AI systems, known as large language models, into compliance with regulatory constraints.

The BSA, a software industry group that counts Microsoft among its members, said Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.

“Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promotes its benefits,” the group said in a statement.

Many countries are looking at ways to regulate AI, including European Union lawmakers, who are negotiating sweeping AI rules for the 27-nation bloc.

UN Secretary-General António Guterres recently said the United Nations is the “ideal place” to adopt global standards, and he has appointed a board to report on options for global AI governance by the end of the year.

The UN secretary-general also said he welcomed calls from some countries to create a new UN agency to support global efforts to govern AI, inspired by models such as the International Atomic Energy Agency and the Intergovernmental Panel on Climate Change.

The White House announced Friday that it is already in talks with many countries about voluntary initiatives.
