
OpenAI forms new team to tackle 'catastrophic' AI risks

OpenAI has recently established a new team, named 'Preparedness', with a mission to assess, evaluate, and mitigate potential "catastrophic risks" related to AI models.

The team will be led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning, who also serves as OpenAI's head of Preparedness.

  • Team Responsibilities: Monitoring and forecasting the potential dangers of future AI systems, and protecting against malicious AI activities such as phishing attacks and malicious code generation. The team's scope also includes investigating a wide range of risk categories, including some that might seem far-fetched, such as "chemical, biological, radiological, and nuclear" threats in relation to AI models.

  • Resource Allocation: OpenAI CEO Sam Altman, who has frequently expressed concern over AI's potential threats to humanity, has now taken a further step by allocating resources to study these risks. The company also expresses interest in investigating "less obvious", yet realistic, areas of AI risk.

  • Open Competition: To encourage community participation, OpenAI has launched a competition offering a $25,000 prize and a job opportunity at Preparedness for the top ten submissions. Participants are asked to imagine being a malicious actor with unrestricted access to OpenAI’s advanced models, and to describe the most unique, probable, and potentially catastrophic misuse of those models.

  • New Policy: In addition, the Preparedness team has been tasked with developing a "risk-informed development policy". This policy will outline OpenAI’s approach to building AI model evaluations and monitoring tools, its risk-mitigation strategies, and its governance structure for the model development process.

OpenAI believes that the highly capable AI models of the future have the potential to greatly benefit humanity but also pose significant risks. The creation of the Preparedness team coincides with a major UK government AI safety summit.