OpenAI establishes a safety council while training its most recent artificial intelligence model.

The US tech startup has announced the formation of a committee that will offer guidance and expertise on crucial decisions pertaining to safety and security.

OpenAI announced that it is establishing a safety and security committee and has started developing a new AI model to replace the GPT-4 system that powers its ChatGPT chatbot.

In a blog post on Tuesday, the San Francisco-based company stated that the committee will provide advice to the full board on “critical safety and security decisions” regarding its projects and operations.

This announcement comes amid ongoing debates about AI safety at OpenAI, which intensified after researcher Jan Leike resigned, accusing the company of prioritizing “shiny products” over safety. OpenAI co-founder and chief scientist Ilya Sutskever also departed, and the “superalignment” team the two co-led, which focused on AI risks, was subsequently dissolved.

OpenAI announced that it has “recently begun training its next frontier model,” claiming that its AI models lead the industry in both capability and safety, without addressing the recent controversy. “We welcome a robust debate at this important moment,” the company stated.

AI models are advanced prediction systems trained on extensive datasets to produce on-demand text, images, video, and human-like conversation. Frontier models represent the most powerful and cutting-edge AI systems.

The newly formed safety committee is made up of company insiders, including OpenAI CEO Sam Altman, board chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, former general counsel of Sony.

The committee’s initial task will be to evaluate and enhance OpenAI’s processes and safeguards, delivering its recommendations to the board within 90 days. OpenAI stated that it will publicly disclose the adopted recommendations “in a manner that is consistent with safety and security.”
