On June 24, 2025, the ECDF hosted another edition of the Industry Forum, this time under the title “Artificial Intelligence (AI) in the Practice of Administration and Local Government – Opportunities and Risks for Businesses and the Public Sector.” The specialist event, initiated jointly with the Research Forum on Public Safety (FOES), brought together representatives from industry, science, administration, and politics. The focus was on how artificial intelligence can be used in municipal structures in the future and what framework conditions are needed to exploit its potential responsibly and effectively.
After a welcome address by ECDF spokesperson Tabea Flügge and FOES director Jochen Schiller, cybersecurity expert and strategy consultant for economic protection Prof. Timo Kob delivered the first of two keynotes with a compelling analysis of the security risks of AI. Felix Nadolni, senior innovation strategist at Bundesdruckerei GmbH, then spoke about the prerequisites for trustworthy AI in public administration. Both presentations provided a concise basis for the subsequent discussions in the panel and at the themed tables.
Timo Kob drew attention to the technical, ethical, and social risks associated with the use of AI systems in public spaces. In addition to specific threat scenarios such as data poisoning and prompt injection, i.e., the targeted manipulation of training data or the smuggling of malicious instructions into a system's inputs, Kob also addressed the danger of disinformation through AI-generated content. He said it was particularly critical that the processes behind AI decisions often remain opaque – both for users and for those responsible in public authorities. "It is important not to lose control over our processes," Kob warned. "If we don't understand how systems make decisions, we lose not only transparency but also trust." Another key point is the quality of the training data: if this data is unbalanced or even distorted by racism or sexism, these biases are transferred directly to the decisions and responses of the AI. Users often accept these AI results blindly, without knowing how they were arrived at.
In the second keynote, Felix Nadolni focused on the question of how trust in AI systems can be built and maintained. He advocated conscious and controlled use: instead of automating as many tasks as possible at once, he said, there needs to be a clear delineation of tasks and the ability to question systems critically. "Integrity, competence, and goodwill are the cornerstones of trust," explained Nadolni, and this holds for AI as well. Trust can only develop if the systems work correctly, transparently, and reliably, and if administrations know exactly what they are using. The competence of the public sector plays a particularly decisive role here: only those who understand how AI works and where its limitations lie can make informed decisions. Close exchange between domain experts and AI specialists is therefore essential, both in public authorities and in companies.
In the subsequent panel discussion, Karen Toppe (Federal Ministry of Digital and Public Service), Prof. Timo Kob (FH Campus Wien), Felix Nadolni (Bundesdruckerei GmbH), Prof. Felix Biessmann (ECDF/Berliner Hochschule für Technik), and moderator Samira Franzel (ECDF) discussed specific applications and challenges of AI in local government and public administration. Karen Toppe emphasized that many administrations are already actively working with AI applications, for example in the form of chatbots that answer citizens' questions, knowledge management, or training programs. These are presented transparently in the AI Opportunities Marketplace (MaKI). For her, exchange with industry is central to innovation, but at the same time data sovereignty must be preserved. Transparency and benefits, as well as the ethical use of AI for citizens, are crucial factors, according to Toppe, not least for acceptance among the population. Felix Biessmann emphasized that technical fairness is not a given: "There are dozens of fairness metrics – but no uniform solution. The choice always depends on the field of application." It is therefore important to consider different perspectives and to identify undesirable developments at an early stage. At the same time, the professor of data science warned against "over-trust," i.e., almost blind trust in artificial intelligence, which leads to results not being questioned.
Afterwards, participants discussed specific issues at themed tables. One table focused on the use of AI in crisis management by public institutions, while another examined data protection and ethical implications. The role of public-private partnerships in the development of AI-based solutions was also a topic of discussion: What does successful cooperation between startups, companies, and public authorities look like? Where are the common interests, and where are the potential conflicts? The guests also discussed economic potential for companies: Which AI products and services are particularly relevant for local authorities? What requirements must providers meet in order to survive in the market for administrative AI? The discussion table on AI, sustainability, and crisis communication in the smart city focused on the question of how technological innovations can be meaningfully linked to social and ecological goals.
Conclusion: Focus, trust, and cooperation as success factors
The Industry Forum made it clear that artificial intelligence can make public administration more efficient, targeted, and future-proof – provided that the systems are used with a sense of proportion, the relevant actors are well connected, and trust is created through transparency and competence. The discussions showed that there are no easy answers, but many good approaches to how digital transformation in the public sector can be shaped together.
