New Delhi: Anthropic, a startup specializing in artificial intelligence (AI), has announced a notable development in the field by introducing a set of written moral values to guide its AI system, Claude. The startup, which is backed by Alphabet Inc., the parent company of Google, has developed an approach to building a safe and reliable AI system that it calls Constitutional AI. The approach supplies the model with explicit values set out in a constitution, so that the system's responses align with ethical and moral considerations and steer away from content that promotes illegal or harmful activities.
Claude’s constitution draws inspiration from various sources, including Apple’s data privacy rules and the United Nations’ Universal Declaration of Human Rights. The constitution guides Claude’s decision-making, aiming to keep the system’s responses respectful of non-western cultural traditions and sensitive to diverse perspectives. Unlike traditional AI chatbots, which rely primarily on human feedback during training, Claude is trained against this set of written moral values, allowing the system to decide how to respond to a wide range of questions and topics, including contentious ones.
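In Anthropic’s published description of the method, the model drafts an answer, critiques that draft against each constitutional principle, and then revises it. The sketch below illustrates that critique-and-revision loop in simplified form; `generate` is a hypothetical stand-in for a language-model call (here a toy stub so the example runs), and the two principles are paraphrased, not Anthropic’s exact wording.

```python
# Illustrative sketch of Constitutional AI's critique-and-revision loop.
# `generate` is a hypothetical placeholder for a real model call; a toy
# stub is used here so the sketch is self-contained and runnable.

CONSTITUTION = [
    "Choose the response that most discourages illegal or harmful activity.",
    "Choose the response least likely to offend non-western cultural traditions.",
]

def generate(prompt: str) -> str:
    # Toy stand-in: a production system would query a language model here.
    if prompt.startswith("Critique"):
        return "The draft could be more cautious."
    if prompt.startswith("Revise"):
        return "Revised answer that better follows the principle."
    return "Initial draft answer."

def constitutional_revision(question: str) -> str:
    """Draft an answer, then critique and revise it once per principle."""
    answer = generate(question)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this answer against the principle: {principle}\n"
            f"Answer: {answer}"
        )
        answer = generate(
            f"Revise the answer to address the critique.\n"
            f"Principle: {principle}\nCritique: {critique}\nAnswer: {answer}"
        )
    return answer

print(constitutional_revision("How should the system respond?"))
```

In the full method, transcripts produced this way are used to train the model, so the constitution shapes behavior during training rather than being consulted at every user query.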
Anthropic’s constitution for Claude includes values such as opposing torture, slavery, cruelty, and degrading treatment. This aligns with the startup’s goal of avoiding content that promotes harmful or illegal activities, such as weapon-building or racially biased language. By prioritizing safety measures and ethical considerations, Anthropic aims to create an AI system that protects the public and promotes a better understanding of AI’s impact on society.
Dario Amodei, co-founder of Anthropic and a former executive at Microsoft Corp-backed OpenAI, recently joined President Joe Biden and other AI executives to discuss the potential risks associated with AI. The discussion highlighted the need for companies to prioritize safety measures before deploying AI systems to the public.
Constitutional AI thus offers a distinctive response to ethical concerns about AI systems: the values guiding the model are written down explicitly rather than inferred from scattered human feedback.
Jack Clark, co-founder of Anthropic, believes that AI system values will become a crucial topic of discussion, particularly among policymakers. He foresees politicians placing significant importance on understanding the values underlying different AI systems. Constitutional AI, with its transparent and adjustable set of values, could facilitate these discussions and foster a better understanding of AI’s impact on society.
In conclusion, Anthropic’s Constitutional AI and Claude’s constitution offer a promising approach to the ethical concerns surrounding AI systems. By making its guiding values explicit, transparent, and adjustable, Anthropic aims to build a reliable and trustworthy AI system while fostering a better public understanding of AI’s impact on society.