How artificial intelligence threatens trust in information

Transparency: Editorially created and reviewed.

On May 21, 2025, Finance Minister Bayaz and Prof. Stöcker discussed the influence of AI on trust in information and on democracy.


At a time when artificial intelligence (AI) is increasingly finding its way into the flow of information, the question of truth and of trust in information has become highly topical. In a conversation in which Finance Minister Dr. Danyal Bayaz and the expert Prof. Christian Stöcker took part, it became clear how much the digital public sphere is changing and what challenges this brings for society. FM Baden-Württemberg reports that the erosion of shared public facts and new dimensions of disinformation were central topics of the discussion.

Stöcker and Bayaz discussed, among other things, the role of psychological effects and the measures that both the state and individual citizens can take to counteract this problem. The aim is to promote social resilience in the digital age and strengthen trust in the truth. The podcast on this topic is available free of charge on popular platforms and on the Ministry of Finance's YouTube channel.

Social media manipulation

Access to reliable information is essential for democracy. But information consumption has changed in recent years, creating new social vulnerabilities. According to a report by the Federal Agency for Civic Education, the German population's trust in the media stood at 47 percent last year, a slight decline, while trust in the government fell to 42 percent. This development is accompanied by increasing social polarization and changing media consumption habits.

Artificial intelligence is seen as a global risk, particularly with regard to the spread of misinformation and disinformation. AI technologies enable the automated production of content and the targeted personalization of information, which expands the opportunities for manipulation. Political actors are increasingly using algorithmic capabilities to steer users' online behavior and promote their mobilization. This became particularly noticeable in the super election year of 2024, when social media was systematically manipulated both by political actors and by strategically operating companies.

Emotional activation and mobilization

Content curated by social media algorithms is designed to maximize user engagement. Particularly polarizing content triggers emotional reactions and can therefore significantly increase engagement rates. Influencers have established themselves as important actors in this context and carry high mobilization potential. The Alternative for Germany (AfD) has deliberately used these strategies to spread its messages and increase its reach.

The manipulation strategies include, among others, AI-powered content manipulation and coordinated disinformation campaigns built on emotional activation. The report highlights that the coordination of these activities is increasingly shifting to closed groups on messenger services. This not only affects the visibility of political messages, but also has profound implications for democratic opinion formation.

Recommendations for action and outlook

In view of these worrying developments, recommendations for action are necessary. The Federal Agency for Civic Education recommends the introduction of a code of conduct for parties, public education campaigns and the promotion of a multi-stakeholder approach to combat manipulation. In addition, access to information about data use and algorithms should be improved in order to create transparency and regain public trust.

Overall, both the ministry's account and the Federal Agency's report highlight the key challenges posed by the systematic manipulation of information in the age of AI. It is crucial that politicians, platform operators and civil society organizations work together to address these challenges effectively and maintain a healthy, balanced discourse.