The modern digital landscape allows conspiracy theories and hate speech to spread with unprecedented speed and reach, making it increasingly difficult for NGOs, journalists, and researchers to monitor, analyze, or counteract these phenomena. AI models can be a useful tool in this regard. This talk will show how masked (small) language models and autoregressive large language models can be used to classify short texts from online platforms for the presence of conspiracy theories or antisemitic content. It will also discuss the specific challenges that different platforms and forms of linguistic expression pose for algorithmic recognition, as well as the opportunities and hurdles for the meaningful use of such models in a social context.
The event is part of the bi-weekly HEIBRiDS Lecture Series.
Venue:
Einstein Center Digital Future
Conference room on the 1st floor
Wilhelmstrasse 67
10117 Berlin
The event will take place in hybrid mode and will be held in English. Please register by email to Sandra Pravica (sandra.pravica@tu-berlin.de).