Mark Zuckerberg, CEO of Meta, attends the U.S. Senate Bipartisan Forum on Artificial Intelligence at the U.S. Capitol on September 13, 2023 in Washington, DC.
Stephanie Reynolds | AFP | Getty Images
Meta has disbanded its Responsible AI (RAI) unit, a team dedicated to overseeing the safety of the company's AI systems as they are developed and deployed, according to a Meta spokesperson.
Most RAI team members have been reassigned to the company's generative AI products division, while others will move to the AI infrastructure team, the spokesperson said. The Information first reported the story.
The generative AI team, formed in February, focuses on developing products that generate language and images designed to resemble human-made work. The move comes as companies across the tech industry pour money into machine learning to keep pace in the AI race, and Meta is among the big tech companies racing to catch up as the boom accelerates.
The RAI restructuring comes as Meta nears the end of its "year of efficiency," as CEO Mark Zuckerberg described it during a February earnings call. So far, that has meant a flurry of layoffs, team mergers and reassignments across the company.
Ensuring the safety of artificial intelligence has become a stated priority for leading players in the space, especially as regulators and other officials pay closer attention to the nascent technology's potential harms. In July, Anthropic, Google, Microsoft and OpenAI formed an industry group specifically focused on setting safety standards as AI advances.
Although RAI staff have now been dispersed across the organization, the spokesperson said they will continue to support "responsible AI development and use."
“We continue to prioritize and invest in the development of safe and responsible artificial intelligence,” the spokesperson said.