New Delhi: OpenAI, the creator of ChatGPT, has put forward a proposal for the establishment of a global regulatory body to oversee the development and deployment of artificial intelligence (AI) technologies. OpenAI’s CEO, Sam Altman, highlighted that within the next decade AI systems could possess expert-level skill across most domains and carry out as much productive activity as today’s largest corporations.
In a recent blog post, OpenAI emphasized the need for robust public oversight in governing powerful AI systems and determining their boundaries and defaults. While acknowledging the absence of a clear mechanism for achieving this, the company expressed its intention to explore and experiment with the development of such a mechanism.
Authored by Sam Altman, along with OpenAI President Greg Brockman and Chief Scientist Ilya Sutskever, the blog post drew parallels between the risks posed by AI and those posed by nuclear energy. OpenAI proposed the creation of an authority akin to the International Atomic Energy Agency to address the potential hazards of superintelligent AI systems.
To address the challenges brought forth by AI, OpenAI outlined a three-point agenda:
Coordination among AI developers: OpenAI advocated for coordinated efforts among leading AI developers, including Google (maker of Bard), Microsoft (Bing Chat), Anthropic, and others. The aim is to ensure that progress toward superintelligent AI occurs in a manner that prioritizes safety and facilitates the smooth integration of these systems into society. OpenAI suggested two potential approaches for achieving this coordination: governments worldwide establishing a regulatory framework involving the leading AI developers, or the companies voluntarily agreeing to cap the annual rate of growth in frontier AI capability.
International regulatory body: OpenAI proposed the formation of a new international organization, similar to the International Atomic Energy Agency, to address the existential risks associated with superintelligent AI systems. This regulatory body would possess the authority to conduct inspections, mandate audits, enforce compliance with safety standards, impose restrictions on deployment and security levels, and undertake other necessary measures.
Safer superintelligence: OpenAI reaffirmed its commitment to making increasingly capable AI systems safe and aligned with human values and intentions, noting that it is actively pursuing research and development to ensure such systems follow human intent and to minimize the risks of their deployment.
OpenAI’s proposal reflects a proactive approach to AI governance, emphasizing the importance of public involvement and cooperation among AI developers and regulatory bodies. By addressing these challenges, OpenAI aims to foster the responsible and beneficial development of AI technologies in the years to come.