Superalignment in the field of artificial intelligence is a principle in and approach to AI alignment. It is concerned with ensuring that artificial super intelligence or superintelligent AI systems are aligned with human values and goals. Take note that artificial superintelligence is a hypothetical artificial intelligence system that surpasses artificial general intelligence and human intelligence across all domains. Superalignment researchers ensure that the development and deployment of superintelligent systems are both safe and beneficial.
An Explainer on What Superalignment is in Artificial Intelligence and its Principles and Approaches
The term “superalignment” as it applies in artificial intelligence alignment was first introduced in a blog article with the title “Introducing Superalignment” that appeared on the official website of OpenAI and was posted on 5 July 2023. OpenAI cofounder and chief scientist Ilya Sutskever and head of alignment of Jan Leike co-authored the article. The term “superalignment” was chosen to emphasize superintelligence and capture the unique challenges of aligning advanced AI systems that are better than artificial general intelligence and more intelligent than humans.
Principles: Purpose and Challenges
Sutskever and Leike explained that superintelligence will be the most impactful technology that will be invented. It could help solve the most important and pressing problems of the world and would bring forth a new paradigm. However, because of its enormous capabilities, it could also endanger humans and societies. It could specifically lead to the disempowerment of humankind or even result in human extinction. The central purpose of superalignment is to guarantee that superintelligent artificial intelligence systems will be both safe and beneficial.
The core principle of superalignment is to align artificial superintelligence with human values and goals. This means putting in place measures that would guarantee that superintelligent AI systems would not act nor behave to the detriment of human welfare. These systems are barred from having their own purpose or goals and objectives that supersede the collective interest of humankind. The decision-making processes and reasoning of these superintelligence AI systems should also be transparent and understandable to human operators and users.
Below are the specific principles of superalignment:
• Instilling Diverse Human Values: One of the more specific principles of superalignment is to come up with a clear definition of human values to provide a faultless direction for superintelligent AI systems. This requires deliberating and synthesizing various inputs from diverse groups of stakeholders that represent different perspectives to ensure that the systems are aligned with the values and collective interests of all humanity.
• Protecting Against Exploitation: The development and deployment of superintelligent AI systems should also take into consideration the possibilities of exploitation carried out by malicious actors without hampering continuous learning and improvement. These systems should have the capabilities to anticipate and mitigate risks arising from their own intelligence and adapt to changes in human values and the environment.
• Maintaining Human Oversight: Another principle of superalignment is to ensure that humans retain ultimate control over superintelligent AI systems. This includes infusing mechanisms for setting goals and objectives, monitoring their activities or behaviors, and intervening when needed. The systems are still capable to act or behave on their own with limitations but do not have complete independence from human oversight.
The core principle and more specific principles mentioned above also translate to the importance of superalignment. Remember that this more focused approach to AI alignment aims to safeguard humankind from the risks and potential threats arising from artificial superintelligence while also ensuring that these advanced and more intelligent artificial intelligence systems are developed and deployed for the purpose of benefiting the human race. However, while the principles or the superalignment is straightforward, there are also challenges to meeting its purpose.
Below are the notable challenges of superalignment:
• Defining and Synthesizing Human Values: Remember that superintelligent AI systems must be aligned with the values and collective interests of humans. The problem is that humans or most groups of humans have conflicting values and interests. One of the main challenges of superalignment boils down to choosing which of these values and interests represent humankind as a whole. This is a challenging pursuit.
• Training Superintelligent AI Systems: Another challenge of superalignment centers on value learning or training superintelligent AI systems to comprehend and demonstrate human values. Current AI models are trained on datasets that reflect the knowledge of the world but are not reflective of human values. Teaching superintelligent AI systems to prioritize values over outcomes is a complex undertaking.
• Value Preservation and Value Coordination: It is also important for superintelligent AI systems to preserve human values over time. This is another superalignment challenge because even human values change or evolve. It is also possible for advanced AI systems to encounter new situations or environments as they become more intelligent and capable. Some systems can also have different sets of values.
The challenges of superalignment are also somewhat aggravated by conflicting views on how to best approach the ongoing artificial intelligence revolution. The most significant example of this conflict is the emerging debate between the adherents of effective altruism and supporters of effective accelerationism. Effective altruists believe in the safe and gradual development in the field of artificial intelligence. Effective accelerationists have argued that the best way to benefit from artificial intelligence is to accelerate its development unhinged or undisturbed.
Approaches: Notable Examples
It is still important to come up with a solution to align artificial superintelligence despite the challenges of superalignment. OpenAI has been leading the charge. It has created a specific unit within the organization to focus on researching and developing approaches to superalignment. Other organizations such as retailer and cloud computing services provider Amazon, internet and software giant Google, and fabless chipmaker Nvidia also have dedicated research teams tasked to solve the various AI alignment problems and come up with approaches.
Below are the approaches to superalignment and examples:
1. Human Feedback Training
The most basic approach to superalignment is through continuous reinforcement learning from human feedback. This involves providing a superintelligent AI system with feedback on its output or actions and decisions to enable it to understand and prioritize human values. This is an iterative process that factors in incessant alignment with continuous improvement.
Feedback can take various forms. Specific examples include an upvote and downvote explicit judgment mechanism, explanations of desired behaviors, and correction actions. The downside of this revolves around the fact that it would be hard for humans to supervise a more intelligent AI system and it is impractical to provide and maintain continuous feedback.
2. Scalable Oversight Approach
Researchers have also explored using AI itself to understand the internal workings of AI models. It is important to note that advanced models such as large language models or foundation models can become too large due to the use of advanced architectures and algorithms. Even developers do not understand the exact inner workings of existing large AI models.
Nevertheless, in a superalignment approach called scalable oversight, specific AI systems are developed and deployed to assist in evaluating other AI systems and helping their operators and users understand how these end-use systems work. The downside of this centers on the need to also align these leveraged AI systems with human values and goals.
3. Specific Automated Researcher
OpenAI has developed another approach to aligning superintelligent AI systems. It is pursuing the development and deployment of a human-like automated alignment researcher that can perform various approaches to AI alignment such as human feedback, robustness, scalable oversight or automated interpretability, generalization, and adversarial testing.
The automated alignment researcher is considered a viable approach to superalignment because it involves outsourcing the tasks of evaluating the internal workings of a superintelligent AI system and the potential risks of misalignment to an artificial intelligence agent. This approach automates the entire process of aligning artificial superintelligence.
FURTHER READINGS AND REFERENCES
- Schuett, J., Dreksler, N., Anderljung, M., McCaffary, D., Heim, L., Bluemke, E., and Garfinkel, B. 2023. “Towards Best Practices in AGI Safety and Governance: A Survey of Expert Opinion. arXiv. DOI: 48550/ARXIV.2305.07153
- Strickland, E. 2023. “OpenAI’s Moonshot: Solving the AI Alignment Problem.” IEEE Spectrum. Available online
- Sutskever, I. and Leike, J. 2023. “Introducing Superalignment.” OpenAI Blog. OpenAI. Available online
- Yazdani, A., Novin, R. S., Merryweather, A., and Hermans, T. 2021. “Ergonomically Intelligent Physical Human-Robot Interaction: Postural Estimation, Assessment, and Optimization.” arXiv. DOI: 48550/ARXIV.2108.05971