Meta has disbanded its Responsible AI team, the group responsible for understanding and addressing the harms associated with the AI technology it develops, as the company diverts more resources toward its work in generative artificial intelligence.
The American company said it is distributing the members of the Responsible AI team among other groups in the company, where they will continue to work on preventing harms associated with artificial intelligence, according to Reuters.
The changes are part of a broad reorganization of the company's AI teams announced internally, according to a report published by The Information, citing an internal post it reviewed.
Most of the employees on the Responsible AI team will move to the generative AI team, which the company formed last February to build generative AI products.
A Meta spokesperson noted that other members are moving to the company’s AI infrastructure unit, which develops the systems and tools needed to build and operate AI products.
The company regularly states that it wants to develop artificial intelligence responsibly, and maintains a dedicated page listing its pillars of responsible AI, including accountability, transparency, safety, privacy, and more.
“The company continues to prioritize and invest in the development of safe and responsible AI,” the report quotes Meta spokesperson John Carville as saying, adding that team members will continue to support related efforts across Meta on the responsible development and use of AI.
Earlier this year, the team went through a restructuring that included layoffs, leaving the Responsible AI team a shell of its former self.
The Responsible AI team, in place since 2019, had little autonomy, and its initiatives had to go through lengthy negotiations with other departments.
Meta created the Responsible AI team to identify problems with its AI training approaches, such as whether the company's models were trained on appropriately diverse data, with a focus on preventing issues such as moderation failures across its platforms.
Meta's move comes as governments around the world race to create regulatory guardrails for the development of artificial intelligence. The US government entered into agreements with AI companies, and President Biden later directed government agencies to draw up rules for AI safety.
The European Union, meanwhile, published its principles for artificial intelligence and is still fighting for the passage of its AI Act.