
© Adobe / SveM
AI Images: A Threat to Democracy?
Communications scholar Stephanie Geise on new technologies and the German election campaign
The German federal election campaign is facing a new challenge due to the use of AI-generated images and videos. How dangerous is this content for democracy, and what measures can ensure transparency and integrity? Professor Stephanie Geise, head of the ZeMKI Political Communication and Innovative Methods Lab, calls for clear legal requirements for disclosure.
AI-generated images and videos are being used in the current German federal election campaign. What new challenges does this pose for political communication?
The use of AI-generated images, videos, and multimodal content presents political communication with an unprecedented challenge. These technologies allow parties and politicians to create deceptively realistic messages that deliberately evoke emotions and reinforce existing narratives. However, the increasing reach of such content also heightens the risk of undermining democratic discourse and political integrity. Since modern AI tools are easy to use, the technical and financial hurdles for creating synthetic content have become much lower. As a result, images and videos can be disseminated widely, events can be depicted that only ever took place in the minds of their creators, political figures can be portrayed in compromising situations, and certain groups of people can be stereotyped and visualized in a discriminatory manner. Such content not only promotes the spread of misleading information, but also encourages polarization and radicalization, which ultimately endangers social cohesion.
What is particularly problematic about the use of AI-generated content in a political context?
The lack of transparency is particularly problematic. AI-generated content is often so realistic that even trained viewers do not recognize it as fake. Without clear labeling, voters are left in the dark about how content was created, which undermines their trust in political actors and jeopardizes the informed decisions necessary for democratic discourse.
What measures are needed to ensure the integrity of the election campaign in the face of such technologies?
To ensure the integrity of the election campaign, clear legal requirements for labeling synthetic content are needed. Platforms should be obliged to provide technical mechanisms that disclose content of AI origin. In addition, political actors should make strict commitments to avoid synthetic content or to label it clearly. Only transparent standards can ensure that voters are able to make informed decisions based on reliable information. Enforcing these standards is the joint responsibility of politics, platforms, and society. At the very least, however, we should demand that political actors strictly commit to labeling synthetic content.

© Beate C. Koehler
Personal Profile
Professor Stephanie Geise researches how people perceive and process political media content conveyed through images and texts. She is a professor of communication and media studies with a focus on innovative methods at the University of Bremen’s ZeMKI. Her main areas of research include political communication; visual communication; digital communication; methods of empirical communication research, especially computer-based observation methods; media reception and media impact processes; political participation; and political protest.