Tech Giants Commit to AI Safety Measures Outlined by the White House

JJohn July 21, 2023 3:19 PM

Top tech companies such as Amazon, Google, Meta, and Microsoft have agreed to adhere to a set of AI safety precautions laid out by the Biden administration, aimed at ensuring the safety of their AI products before they are released.

Tech giants accept AI safety standards

Leading tech corporations including Amazon, Google, Meta, and Microsoft have voluntarily committed to implementing AI safety measures proposed by the Biden administration. The commitments, aimed at ensuring the safety of their AI products before launch, include third-party oversight of commercial AI systems, although the specific details of who will conduct the audits or hold the companies accountable remain unclear.

The growing commercial investment in generative AI tools, which can produce convincingly human-like text and generate new images and other media, has sparked both public fascination and concern. Fears center primarily on these tools' potential to deceive people and spread disinformation.

Companies commit to enhanced security testing

The companies, which also include OpenAI, Anthropic, and Inflection, have pledged to carry out security testing, conducted in part by independent experts, to guard against major threats to biosecurity and cybersecurity. They have also agreed to establish methods for reporting vulnerabilities in their systems and to use digital watermarking to help distinguish AI-generated images, known as deepfakes, from real ones.

Immediate actions ahead of long-term legislation

These voluntary commitments from major tech companies are viewed as an immediate measure to address potential risks while the longer-term effort of pushing Congress to pass laws regulating AI technology continues. Although the steps are seen as a positive start, advocates for AI regulation argue that more needs to be done to hold these companies and their products accountable.

Some experts and emerging competitors worry that this type of AI regulation could benefit large, well-funded companies like OpenAI, Google, and Microsoft, while smaller players are pushed out by the high cost of making their AI systems, known as large language models, comply with the regulatory requirements.

Global efforts toward AI regulation

Multiple countries, including members of the European Union, are examining ways to regulate AI. UN Secretary-General Antonio Guterres recently stated that the United Nations could be the ideal platform for adopting global standards and has appointed a board to report back on options for global AI governance by the end of the year. He also welcomed the idea of a new UN body to support worldwide efforts to govern AI, inspired by models such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
