Who spreads visual generative disinformation, how does it work, and why is it so persuasive?

  • $0 pledged
  • 0% funded
  • $6,200 goal
  • 33 days left

About This Project

This research explores how visual generative AI has been used to spread disinformation over the past two years, identifying key topics, actors, mechanisms, and features. It also experimentally tests the impact of this content on engagement, persuasiveness, and perceived trustworthiness. Unlike previous studies that focused on potential misuse, the project analyzes real-world cases, providing practical and theoretical insights into the impact of visual generative AI.


What is the context of this research?

The rise of visual generative AI has transformed the visual communication landscape. These tools make it possible to create high-quality, photorealistic visuals in seconds and at minimal cost, simplifying visualization more than ever before. This has raised serious concerns about their potential to exacerbate visual mis/disinformation. Recent data show that over 15 billion images were generated with text-to-image models by 2023, a figure that exceeds Shutterstock's entire image library and constitutes almost one-third of all images ever uploaded to Instagram. More recently, we have witnessed several instances in which visual generative AI intensified disinformation and triggered public panic and confusion, particularly during events such as elections and ongoing wars. This underscores the urgent need for a systematic investigation into the harm this content causes, its impact on the public, and possible ways to counter it.

What is the significance of this project?

The project provides a real-world analysis of how visual generative AI has been used to spread disinformation over the past two years. Unlike previous studies that focus on theoretical risks or hypothesized scenarios, this research systematically examines fact-checked visual generative content to identify its key topics, actors, mechanisms, and features.

Additionally, the project experimentally tests the impact of visual generative disinformation on persuasiveness, engagement, and perceived trustworthiness, offering new insights into how audiences process and react to such content. It also experimentally tests whether visual generative literacy can improve the ability to recognize and resist such content.

By providing a comprehensive, evidence-based understanding of visual generative AI and its impact, the research will inform future strategies and policies for mitigating harm, e.g., labeling generative content or training the public on identifying visual generative disinformation.

What are the goals of the project?

  1. Analyze the scope of visual generative disinformation that went viral over the past two years, including the actors behind it, their mechanisms, content features, main themes, and audience engagement. Visual generative disinformation flagged by fact-checking organizations will be scraped for the analysis. Coded mechanisms will include amplification, segmentation, and obfuscation. Coded variables will include intention, target audience, frames, emotional tone, and engagement, among others. Data will be analyzed using NVivo and SPSS.

  2. Experimentally test public perceptions of this content in terms of persuasiveness, perceived trustworthiness, and potential engagement.

  3. Experimentally test whether visual literacy training on identifying visual generative disinformation can enhance the ability to recognize and resist this content. This will help us understand the potential of such training so that media organizations can deliver it at scale.
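The rigor of the coding in goal 1 depends on intercoder reliability, which is typically verified with an agreement statistic such as Cohen's kappa. The project itself plans to use NVivo and SPSS; the Python sketch below is only an illustration of how kappa would be computed for a hypothetical "mechanism" variable coded by two coders (all labels are invented for the example).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who assigned one categorical label to each of the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of items where both coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels for ten fact-checked items (invented for illustration).
a = ["amplification", "obfuscation", "amplification", "segmentation", "amplification",
     "obfuscation", "amplification", "segmentation", "obfuscation", "amplification"]
b = ["amplification", "obfuscation", "amplification", "segmentation", "obfuscation",
     "obfuscation", "amplification", "segmentation", "obfuscation", "amplification"]
print(round(cohens_kappa(a, b), 2))  # → 0.84
```

Values above roughly 0.8 are conventionally read as strong agreement; lower values would trigger another round of coder training before the full dataset is coded.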

Budget

Coders Training
$300
Coders Salary
$1,500
Experiment 1 incentives for participants
$1,200
Experiment 2 incentives for participants
$1,200
Data Analysis tools subscriptions
$500
Conference Presentation and Publishing
$1,500

I am an Egyptian PhD student at City, University of London on a scholarship. My stipend barely covers my living expenses, hindering me from conducting innovative research studies. This project cannot be completed without funding allocated to coders, incentives, data analysis, and publishing.

Coding must be conducted with objective and rigorous criteria. Two coders will undergo extensive training and will be compensated with $750 each for their work on large datasets.

Each experiment will include 400 participants, with an incentive of $3 per participant per experiment, aligned with Prolific's hourly rate.

Premium subscriptions to data analysis tools will expedite the analysis process.

Conference presentations and publishing are important to ensure the global reach of the findings.

Although I am a PhD student, I have received 7 research awards, published 3 Q1 journal articles, and presented at 11 top-tier conferences, demonstrating my capability to successfully conduct the project.

Endorsed by

I believe the topic of this research is not only relevant to current issues created by AI but also seeks to provide realistic data for future reflection. It is a study that is both academically and socially significant. It simply needs to be conducted, and this award-winning researcher is more than qualified to do it.

Project Timeline

The project is expected to be completed within 1-2 years and will be divided into three studies: one content analysis and two experiments. I am committed to maintaining transparency throughout the project and will provide monthly updates on my progress once funding is secured.

Mar 14, 2025

Project Launched

Aug 31, 2025

Content Analysis of Viral Visual Generative Disinformation

Jan 01, 2026

Experiment 1 (Audience Perceptions)

Mar 01, 2026

Experiment 2 (Training and Visual Literacy)

Meet the Team

Menna Elhosary
PhD Student

Affiliates

City St George's, University of London

Team Bio

This research project is solo-authored. For more information about my qualifications, please view my academic profile: https://www.city.ac.uk/about/p...

Menna Elhosary

Menna Elhosary is an award-winning early-career communication scholar and a first-year PhD student in the Department of Journalism at City St George's, University of London. She has over five years of teaching and research experience. Her research agenda focuses on generative AI and news automation, information disorder, and war propaganda. She is the recipient of seven international academic awards from AEJMC, ICA, and the University of Sharjah. Her work has appeared in top journals, including the International Communication Gazette and the International Journal of Communication.

Additional Information

Each experiment will include 400 participants and will receive ethical approval from City, University of London to ensure minimal harm to participants. Experiments will be conducted online via Prolific.


Project Backers

  • 0 Backers
  • 0% Funded
  • $0 Total Donations
  • $0 Average Donation
