Washington, D.C. — In an effort to ensure the safe development and deployment of artificial intelligence (AI) technology, major companies including Amazon, Google, Meta, Microsoft, and others have agreed to a set of voluntary safeguards negotiated by President Joe Biden’s administration.
The White House announced on Friday that these leading tech companies have made commitments to ensure the safety of their AI products prior to release. While some of these commitments call for third-party oversight of commercial AI systems, specific details regarding the auditing process and accountability measures have not been disclosed.
The surge of commercial investment in generative AI tools, which can produce convincingly human-like text, images, and other media, has piqued public interest, alongside concerns about their potential to deceive people and spread disinformation.
The four tech giants—Amazon, Google, Meta, and Microsoft—alongside OpenAI (creator of ChatGPT) and startups Anthropic and Inflection, have all committed to rigorous security testing to mitigate major risks such as biosecurity threats and cybersecurity vulnerabilities. The White House statement emphasizes that these tests will be partially conducted by independent experts.
Investors have shown optimism in AI-related products, leading to stock market gains for Microsoft (MSFT), Google parent company Alphabet (GOOGL), Meta (META), and Amazon (AMZN) throughout the year.
In addition to security measures, these companies have also pledged to adopt methods for reporting system vulnerabilities and implementing digital watermarking to distinguish between genuine content and AI-generated deepfakes.
Moreover, the companies have agreed to public disclosure of flaws and risks associated with their technology, including considerations of fairness and bias.
These voluntary commitments are intended to address immediate risks while Congress pursues a more comprehensive, long-term approach through legislation.
Although some proponents of AI regulation applaud President Biden’s initiative as a step in the right direction, they believe stricter measures are necessary to hold companies accountable for their products.
“Given past experiences, it is evident that many tech companies fail to uphold their voluntary pledges and support robust regulations,” stated James Steyer, founder and CEO of the nonprofit organization Common Sense Media.
Regulating AI: A Global Effort
As the significance of artificial intelligence (AI) continues to grow, lawmakers around the world are recognizing the need for regulation. Senate Majority Leader Chuck Schumer, D-N.Y., plans to introduce legislation to address this issue, hosting briefings to educate senators on the matter. This bipartisan interest has also attracted attention from technology executives who have discussed their concerns with President Biden, Vice President Kamala Harris, and other White House officials.
However, experts and smaller competitors have voiced apprehensions. They worry that proposed regulations could inadvertently favor dominant players such as OpenAI, Google, and Microsoft, since the high cost of regulatory compliance could push smaller companies out of the market.
The software trade group BSA, which counts Microsoft as a member, recently expressed its support for the Biden administration’s efforts to establish rules for high-risk AI systems. In a statement, the group emphasized the importance of addressing risks while promoting the benefits of AI.
Internationally, several countries are exploring AI regulation. European Union lawmakers are negotiating comprehensive AI rules for the 27-nation bloc. Meanwhile, U.N. Secretary-General António Guterres believes the United Nations is the ideal organization to adopt global standards, and he has appointed a board tasked with reporting on options for global AI governance by the end of this year.
Moreover, calls have been made by some countries for the establishment of a new U.N. body specifically focused on supporting global AI governance. This proposal takes inspiration from successful models such as the International Atomic Energy Agency and the Intergovernmental Panel on Climate Change.
To further these efforts, the White House has already engaged in consultations with numerous countries regarding voluntary commitments related to AI regulation.
Overall, regulating AI is a complex task that demands collaboration among governments, organizations, and industry stakeholders. The aim is to strike a balance between reaping the benefits of AI advancements and mitigating potential risks. With ongoing discussions and initiatives at both national and international levels, the path toward comprehensive AI regulation is slowly taking shape.