OpenAI's head of trust and safety is stepping down

OpenAI's head of trust and safety announced on Thursday that he plans to step down from the job.

Dave Willner, who has led the artificial intelligence firm's trust and safety team since February 2022, said in a LinkedIn post that he is "leaving OpenAI as an employee and transitioning into an advisory role" to spend more time with his family.

Willner's exit comes at a crucial moment for OpenAI. Since the viral success of the company's AI chatbot ChatGPT late last year, OpenAI has faced growing scrutiny from lawmakers, regulators and the public over the safety of its products and their potential implications for society.

OpenAI CEO Sam Altman called for AI regulation during a Senate panel hearing in May. He told lawmakers that the potential for AI to be used to manipulate voters and target disinformation is among "my areas of greatest concern," especially because "we're going to face an election next year and these models are getting better."

In his Thursday post, Willner, whose resume includes stops at Facebook and Airbnb, noted that "OpenAI is going through a high-intensity phase in its development" and that his role had "grown dramatically in its scope and scale since I first joined."

A statement from OpenAI about Willner's exit said that "his work has been foundational in operationalizing our commitment to the safe and responsible use of our technology, and has paved the way for future progress in this field." OpenAI's Chief Technology Officer Mira Murati will become the trust and safety team's interim manager and Willner will advise the team through the end of this year, according to the company.

"We are seeking a technically-skilled lead to advance our mission, focusing on the design, development, and implementation of systems that ensure the safe use and scalable growth of our technology," the company said in the statement.

Willner's exit comes as OpenAI continues to work with regulators in the United States and elsewhere to develop guardrails around fast-advancing AI technology. OpenAI was among seven leading AI companies that on Friday agreed to voluntary commitments, brokered by the White House, meant to make AI systems and products safer and more trustworthy. As part of the pledge, the companies agreed to put new AI systems through outside testing before they are publicly released and to clearly label AI-generated content, the White House announced.
