New Delhi: OpenAI CEO Sam Altman recently testified before the U.S. Senate Committee on the Judiciary, where he made the case for regulating artificial intelligence (AI). Altman, alongside IBM Chief Privacy & Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus, discussed the potential risks of AI and how to address them.
When asked about the best approach to regulating AI, Altman proposed creating a new government agency responsible for setting standards and issuing licenses for AI efforts that exceed a certain capability threshold. The agency would also have the authority to revoke licenses and enforce compliance with safety standards. Altman stressed the importance of independent audits by outside experts, who would evaluate whether AI models meet the required safety thresholds and performance criteria.
Altman, however, said he would not want to lead the proposed agency himself, citing his commitment to his role as CEO of OpenAI. Instead, he called for a dedicated regulatory body that could vet AI technologies much as the U.S. Food and Drug Administration (FDA) reviews pharmaceutical drugs before they reach the market. Professor Marcus supported this view, suggesting that any AI technology intended for widespread use should first undergo a comprehensive safety review.
The envisioned agency would be nimble and keep pace with advances in the AI industry. It would conduct pre-deployment reviews of projects to assess their potential risks, as well as post-release reviews, with the authority to recall a technology if necessary. The panelists also stressed transparency and explainability as essential elements of AI regulation. Montgomery called for defining high-risk AI applications and requiring impact assessments, transparency measures, and disclosures from companies about their work and about how they protect the data used to train AI systems.
Governments worldwide are grappling with the challenges posed by the rapid proliferation of AI technology. In response to these concerns, the European Union has advanced the AI Act, which encourages the establishment of regulatory sandboxes for testing AI technologies before their release. The act also introduces new rules to ensure the ethical, human-centric development of AI systems.
Addressing the question of whether an independent regulatory agency is necessary, Montgomery argued that regulation should not be delayed but should be tailored to address immediate risks. She acknowledged that regulatory authorities already exist in various domains but noted that they are under-resourced and lack adequate powers. Montgomery advocated regulating AI at the point of risk, treating the intersection of technology and society as the focal point for effective regulation.
In conclusion, the hearing underscored the need for a regulatory framework to ensure the safe and responsible development and deployment of AI technologies. The proposed agency, independent of companies such as OpenAI, would set standards, commission independent audits, and enforce compliance with safety thresholds. Transparency, risk assessment, and the protection of user data emerged as critical elements of AI regulation. By addressing these concerns, policymakers can balance fostering innovation with mitigating the potential risks of AI.