OpenAI and Microsoft have been advocating for government regulation of AI technology, even suggesting the establishment of an international body similar to the International Atomic Energy Agency to oversee AI. Alphabet-C, however, opposes placing all AI oversight under a single new government authority and prefers that existing regulatory bodies address the challenges AI poses in their respective fields. Others in the AI field, including researchers, have expressed views similar to Alphabet-C's.
Amid the AI boom sparked by OpenAI's surging popularity, how to regulate AI technology has become a hot topic in the market.
A recent document shows that Alphabet-C and OpenAI, two leaders in the AI field, hold opposing views on how the government should regulate artificial intelligence.
OpenAI CEO Sam Altman has been advocating for government regulation of AI technology, even suggesting the establishment of an international body similar to the International Atomic Energy Agency. In his view, that body should focus on regulating AI technology and issuing licenses to entities that use it. Alphabet-C, by contrast, does not want AI regulation to be entirely under government control; the company prefers a "multilayered, multi-stakeholder approach to AI governance."
Others in the AI field, including researchers, have expressed views similar to Alphabet-C's, arguing that leaving AI oversight to existing regulators may be a better way to protect marginalized communities, whereas OpenAI contends that the technology is advancing too quickly for that approach to work.
Alphabet-C said:
At the national level, we support a hub-and-spoke approach - with a central agency such as the National Institute of Standards and Technology (NIST) informing sectoral regulators that oversee the implementation of AI - rather than a standalone "AI department." AI will present unprecedented challenges in regulated industries such as financial services and healthcare, and those challenges are better addressed by regulators with experience in those industries than by a new, independent department created to oversee them.
Emily M. Bender, professor and director of the Computational Linguistics Laboratory at the University of Washington, said:
I completely agree that so-called AI systems should not be deployed without some kind of certification process. But this process should depend on the purpose of the system. Existing regulatory agencies should maintain their jurisdiction and decision-making power.
Alphabet-C's view contrasts sharply with Microsoft's preference for a more centralized regulatory model. Microsoft President Brad Smith said he supports the creation of a new government agency to regulate artificial intelligence, and OpenAI's founders have also publicly expressed their vision of creating an organization similar to the International Atomic Energy Agency to regulate AI.
Kent Walker, global affairs chief of Alphabet-C, said he "does not oppose" the idea of a new regulatory body overseeing licenses for large language models, but said the government should "take a more comprehensive" look at the technology.
The opposing stances of Alphabet-C and Microsoft show that the debate over AI regulation is intensifying, and that it extends well beyond the question of how much regulation the technology should face.
In May of this year, US Vice President Harris said after meeting with leaders of important companies in the AI industry that the technology "could greatly increase threats to security and safety, violate civil rights and privacy, and erode public trust and confidence in democracy." She added that companies have a responsibility to comply with existing laws, as well as "ethical, moral, and legal obligations to ensure the safety and security of their products."