OpenAI Establishes Specialized Team to Address Superintelligent AI Systems

OpenAI, the organization behind the popular AI chatbot ChatGPT, has announced plans to assemble a team dedicated to mitigating the risks posed by superintelligent AI systems. In a blog post published on July 5, the company said the team's objective is to steer and govern AI systems that surpass human intelligence.

Describing superintelligence as a technology with potentially profound impacts, OpenAI emphasized the need to address the dangers that could accompany it, including the disempowerment or even extinction of humanity. The organization believes such superintelligent systems could arrive within the next decade.

To tackle this challenge, OpenAI pledged to dedicate 20% of the computing power it has secured to date to the effort. The organization's goal is to build a roughly human-level automated alignment researcher that can itself help ensure superintelligent AI systems follow human values and intent.

OpenAI appointed its chief scientist, Ilya Sutskever, and the head of its alignment team, Jan Leike, as co-leads of the initiative, and has invited machine learning researchers and engineers to join the new team.

OpenAI’s announcement coincides with global discussions on the regulation and governance of AI systems. The European Union has made significant progress on this front: the European Parliament recently passed its draft of the EU AI Act, which would mandate disclosure of AI-generated content. Similar deliberations are under way in the United States, where lawmakers have proposed establishing a National AI Commission to shape the nation’s approach to AI. Amid concerns that regulation could constrain innovation, OpenAI CEO Sam Altman has engaged with EU regulators to advocate for balanced rules.

In light of these developments, Senator Michael Bennet recently sent a letter urging major tech companies, including OpenAI, to label AI-generated content. As the AI governance landscape evolves, OpenAI’s proactive measures to address superintelligent AI risks underscore the importance of responsible, thoughtful deployment of advanced AI systems.