
OpenAI establishes a safety committee while training its latest AI model

By Hazel | May 29, 2024 | 2 Mins Read

    The US tech startup has announced the formation of a committee that will offer guidance and expertise on crucial decisions pertaining to safety and security.

    OpenAI announced that it is establishing a safety and security committee and has started developing a new AI model to replace the GPT-4 system that powers its ChatGPT chatbot.

    In a blog post on Tuesday, the San Francisco-based company stated that the committee will provide advice to the full board on “critical safety and security decisions” regarding its projects and operations.

This announcement comes amid ongoing debates about AI safety at OpenAI, which intensified after researcher Jan Leike resigned, accusing the company of prioritizing “shiny products” over safety. OpenAI co-founder and chief scientist Ilya Sutskever also resigned, leading to the dissolution of the “superalignment” team, which the two had jointly led and which focused on long-term AI risks.

Without addressing the recent controversy, OpenAI announced that it has “recently begun training its next frontier model,” claiming that its AI models lead the industry in both capability and safety. “We welcome a robust debate at this important moment,” the company stated.

    AI models are advanced prediction systems trained on extensive datasets to produce on-demand text, images, video, and human-like conversation. Frontier models represent the most powerful and cutting-edge AI systems.

    The newly formed safety committee comprises company insiders, including OpenAI CEO Sam Altman, chairman Bret Taylor, and four technical and policy experts from OpenAI. It also includes board members Adam D’Angelo, CEO of Quora, and Nicole Seligman, former general counsel of Sony.

    The committee’s initial task will be to evaluate and enhance OpenAI’s processes and safeguards, delivering its recommendations to the board within 90 days. OpenAI stated that it will publicly disclose the adopted recommendations “in a manner that is consistent with safety and security.”
