Artificial intelligence has so far not been a major factor in social media disinformation about the Israel–Gaza war. Social media is flooded with disinformation, and as the attack on Gaza plays out online, generative AI was expected to drive much of that flood. But that has not been the case. Citing the EU's new Digital Services Act (DSA), the European Commission has asked Meta and TikTok what the companies are doing to stop illegal content and disinformation.
The dominant disinformation threat has been real footage used out of context, writes Alex Mahadevan, director of MediaWise at the Poynter Institute, on the Institute's website. "The vast majority of images and videos fact-checkers have debunked during the war have included footage from other countries like Syria or Türkiye, and the past in Gaza."
The war was the first real test of experts' warnings about the threat of generative AI. Their hypothesis that it would increase the quality and quantity of misinformation has so far not been borne out.
The Commission wants answers from Meta and TikTok on what they are doing to stop misinformation about the Israel–Gaza war.
Based on what Meta and TikTok say, the Commission will assess next steps. Under the DSA, the two companies are required to comply with its full set of provisions, including the assessment and mitigation of risks related to the dissemination of illegal content, disinformation, and any negative effects on the exercise of fundamental rights.
President of the European Commission, Ursula von der Leyen, said: “Hamas’ terrorist attack
has also led to an online assault of heinous, illegal content promoting hatred and terror. With
our Digital Services Act, Europe now has strong rules to protect users, including vulnerable
population groups, from intimidation and to ensure fundamental freedoms online.”
“Major platforms are subject to new obligations to mitigate such risks from their services.
Today’s recommendation will help us to coordinate our responses with Member States and
protect our society.”
“The war between Israel and Hamas has accelerated the spread of misinformation and
broadened its reach due to graphic and emotional visuals, a deeply political conflict whose
repercussions are felt around the world and a wealth of unreliable sources”, Mahadevan
writes in a blog post. He also recommends some rules that can help readers tell accurate information from misinformation.
Mahadevan notes that social media is flooded with out-of-context videos and images that users claim come from Israel or Gaza. Finding the original source is key. Misinformation is rampant, he writes.
“The war between Hamas and Israel is playing out on social media through graphic images
and videos shared on X, formerly known as Twitter, Instagram and TikTok.”
He recommends first asking three questions developed by the Stanford History Education
Group in its study of how fact-checkers navigate the internet:
● Who’s behind the information?
● What’s the evidence?
● What do other sources say?
“While I remain concerned about generative artificial intelligence supercharging the creation
of disinformation, I’ve yet to see any significant AI images or videos. Still, it’s worth
remaining on guard, and checking images for watermarks, warped features, too many
fingers or other inconsistencies”, Mahadevan writes.
The EU Commission's requests for information from Meta and TikTok are based on the EU's new Digital Services Act, meant to create a safer online environment. Markets are closely watching how the implementation of the new and rather strict rules will play out.
DIGITAL SERVICES ACT, a short summary
● Aims to create a safer online space for users, with stricter rules for platforms
● The DSA establishes a “notice and action” mechanism, as well as safeguards, for the
removal of illegal content.
● Online platforms must be transparent about how algorithms work and platforms
should be accountable for decisions they make.
● Measures to counter illegal products, services and content online, including clearly
defined procedures for removals
● Mandatory risk assessments and more transparency over “recommender systems” to
fight harmful content and disinformation
● Online platforms should be prohibited from using deceptive or nudging techniques, known as "dark patterns", to influence users' behaviour
● Targeted advertising: the text provides for more transparent and informed choices for all recipients of services, including information on how their data will be monetised, and better protects minors from direct marketing, profiling and behaviourally targeted advertising for commercial purposes