A report published by the British newspaper The Guardian stated that a new artificial-intelligence feature in WhatsApp, the messaging application owned by Meta, generates "stickers" of children holding weapons when users type the word "Palestine" or similar terms.
Although results varied when tested by different users, the Guardian's own tests, supported by screenshots, confirmed that stickers depicting weapons appeared in searches for the words "Palestinian," "Palestine," or "Palestinian Muslim boy" in English. Prompts to create stickers for "Israeli boy" returned cartoons of children playing soccer and reading.
The artificial intelligence also produced drawings of soldiers smiling and praying, with no weapons, in response to prompts referencing the Israeli army, the report said. An unnamed source told the newspaper that some Meta employees had escalated the problem to management.
The messaging app began using artificial intelligence to generate stickers a month ago; the feature invites users to "turn ideas into stickers with AI."
Bias in Meta's stickers
The Guardian's attempt to create a sticker for "Muslim Palestinian" returned four images of women: one wearing a hijab and standing still, one reading, one holding a flower, and one holding a sign.
But the Guardian's attempt to create a sticker for "Palestinian Muslim boy" produced four images of children, including one of a boy carrying a firearm resembling a Kalashnikov and wearing a keffiyeh, a headdress commonly worn by Muslim men and boys.

In another Guardian search, for "Palestine," the results included an image of a hand holding a gun. When the word was changed to "Israel," the feature returned stickers of the Israeli flag and a man dancing. A prompt for "Hamas" produced the message "Cannot create AI stickers. Try again."
The report stated that one user shared screenshots of a sticker search for "Palestinian," which produced another, different image of a man holding a gun.
Similar sticker prompts using the words "Israeli boy" returned four images of children: two depicting boys playing soccer and two showing only their faces.
A prompt for "Israeli Jewish boy" likewise returned four images of children: two of boys wearing necklaces bearing the Star of David, one of a boy wearing a yarmulke and reading, and one of a boy simply standing. None of them carried weapons.

Even prompts using explicitly military terms such as "Israeli army" or "Israel defense forces" did not produce images of guns or weapons. The cartoons depicted people in uniform in various poses, most of them smiling; one illustration showed a man in uniform praying.

This report by the Guardian comes at a time when Meta has been criticized by many Instagram and Facebook users who publish content supportive of the Palestinians.
Users say that, as the Israeli bombardment of Gaza continues, Meta enforces its policies in a biased manner, a practice they say amounts to censorship.
Other users reported that Meta deliberately hid their posts supporting the Palestinians from other users without explanation, saying they saw a sharp decline in engagement with their posts.
Meta: Unintentional errors
“It is never our intention to suppress a particular community or viewpoint,” Meta previously said in a statement. But due to the “large amounts of content reported”, “content that does not violate our policies may be removed in error.”
Meta spokesperson Kevin McAlister told The Guardian that the company was aware of the issue and was addressing it: "As we said when we launched the feature, the models can return inaccurate or inappropriate outputs, as is the case with all generative AI systems. We will continue to improve these features as they evolve and as more people share their feedback."
Demands for investigation
In response to the Guardian's report on the stickers created by Meta's artificial intelligence, Australian senator Mehreen Faruqi, deputy leader of the Greens, called on the country's eSafety commissioner to investigate the "racist and anti-Islamic images produced by Meta."

"AI images of Palestinian children depicted carrying weapons on WhatsApp are a horrifying look at the racist and Islamophobic norms being fed into the algorithm," Faruqi said in a statement.
“How many racist wrongs must be exposed before serious action is taken? The damage has already been done. Meta must be held accountable,” she added.
Meta has faced repeated pressure from Palestinian creators, activists, and journalists, especially during periods of escalating conflict or violence against Palestinians living in Gaza and the West Bank.
A study commissioned by the company and published in September 2022 found that Facebook's and Instagram's content policies during the Israeli attacks on the Gaza Strip in May 2021 violated Palestinian human rights.
The report said that Meta’s actions may have had a “negative impact…on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and thus on the ability of Palestinians to exchange information and insights about their lives,” according to the Guardian report.