Kaleidoscopic Patterns of Protest: A Fresh Look at Mediated Protest through Stories and Stats
Report by Katharina Kneidinger
The interdisciplinary workshop hosted by the University of Innsbruck and funded by the Austrian Academy of Sciences brought together researchers from computational science, political studies, and the humanities. The workshop marked the conclusion of a two-year project by Gernot Howanitz, Magdalena Kaltseis and Ilya Sulzhytski, and its aim was to gather feedback on their preliminary results from researchers across various fields.
The team opened the workshop with a presentation of their work on analysing an archive of over 300,000 YouTube videos portraying protest movements in Eastern Europe. The research focused on the protest movements in Belarus in 2020, in Ukraine in 2014 and in Russia in 2011/12, and combined large-scale visual data analysis with qualitative narrative studies to decode how protests are mediated in video content. To overcome biases in standard AI models, they trained a custom neural network to detect protest symbols such as flags. Their initial test on the Belarusian documentary “Мы не знали друг друга до этого лета” (“We Didn't Know Each Other Before This Summer”) revealed unexpected complexities, as protest symbols turned out to be more diverse and ambiguous than expected. Shifting strategies, the team employed large language models such as LLaVA to annotate video frames, extracting keywords for topic modelling. This approach was used to identify twelve topics in the protest videos.
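The pipeline described here (LLM annotation of frames, keyword extraction, then topic assignment) can be illustrated with a toy example. Everything below is hypothetical: the two stand-in topics, the keyword sets, and the simple overlap-based assignment are illustrative simplifications, not the team's actual code or their twelve topics.

```python
from collections import Counter

# Hypothetical topic keyword lists (illustrative only; the project's
# twelve topics were derived via topic modelling, not hand-curated).
TOPICS = {
    "street_protest": {"crowd", "flag", "banner", "march"},
    "police_action": {"police", "riot", "arrest", "shield"},
}

def annotate_frames(frames):
    """Stand-in for LLM (e.g. LLaVA) frame annotation: here each
    'frame' is already a list of extracted keywords."""
    keywords = Counter()
    for frame in frames:
        keywords.update(frame)
    return keywords

def assign_topic(keywords):
    """Assign the video to the topic whose keyword set overlaps most
    with the extracted keywords (a crude proxy for topic modelling)."""
    scores = {
        topic: sum(keywords[k] for k in words)
        for topic, words in TOPICS.items()
    }
    return max(scores, key=scores.get)

# One toy "video" of three annotated frames.
video = [["crowd", "flag"], ["flag", "banner"], ["police"]]
print(assign_topic(annotate_frames(video)))  # prints "street_protest"
```

In the actual project, the assignment step would be replaced by a proper topic model fitted over the keyword corpus; the sketch only shows how frame-level annotations aggregate into a video-level label.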
After this initial presentation of Howanitz's, Kaltseis's and Sulzhytski's approach and methods, Yaraslava Ananka and Heinrich Kirschbaum talked about their work on the Belarusian protests of 2020. Ananka reframed the protests through the lens of dilettantism: unlike Russia's vertically organised movements, the Belarusian protests lacked centralised leadership and instead relied on “sketch-like” acts of resistance, making them seem “dilettantish”. Kirschbaum's book “Revolution der Geduld” (“Revolution of Patience”) argued that the Belarusian movement's power lay in its refusal to conform to Western media's “event-centric” narratives, instead cultivating endurance and patience through “non-events”. The Belarusian protests can thus be read like a diary – not as a confessional diary about oneself, but rather as random folds and insignificant details in the form of sketchy markings, resembling a draft.
Elizaveta Gaufmann then presented her work on Russian civil resistance to the war in Ukraine. Because public opinion polls are of questionable value in authoritarian regimes and Telegram audience analyses can only be an estimate, Gaufmann examined everyday practices such as buying, eating, talking and mating. The subjects of the study included car stickers, memes, Putin fan shirts, buckwheat prices, the music Russians listen to, and more. Despite state propaganda, practices such as placing anti-war stickers or wearing clothing in certain colours revealed a “semiotic guerrilla resistance” thriving in this authoritarian space.
This was followed by Mykola Makhortykh's research on the topic “Google, is it a Protester or an Activist? Auditing the Algorithmic Gaze on Different Forms of Civil Resistance”. For this he analysed results from search engines originating in both democracies and autocracies, such as Google, Bing and Yandex, queried from different locations via VPN. Two key axes of representation were examined: visibility (which movements are shown?) and framing (how are they shown?). The search results then gave insight into the algorithmic gaze, that is, the algorithms' ability to characterize, conceptualize and affect users and issues.
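The framing axis of such an audit can be mimicked with a crude toy comparison: given result snippets per engine, count how often each framing label appears. The engine names and snippets below are invented for illustration only; a real audit would query live engines from different VPN locations and analyse visibility as well.

```python
# Invented result snippets per engine (placeholders, not real output).
RESULTS = {
    "engine_a": ["protesters clash with police", "activist group detained"],
    "engine_b": ["activist rally downtown"],
}

# Two framing labels from the talk's title.
FRAMES = ("protester", "activist")

def audit_framing(results):
    """Count, per engine, how many snippets contain each framing label
    (a toy stand-in for the framing axis of an algorithmic audit)."""
    return {
        engine: {f: sum(f in s for s in snippets) for f in FRAMES}
        for engine, snippets in results.items()
    }

print(audit_framing(RESULTS))
```

Comparing these per-engine counts across query locations would then hint at how the “algorithmic gaze” differs between engines and jurisdictions.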
The workshop continued with computational science as Adam Jatowt addressed the highly topical issue of detecting misinformation on the web. The use of large language models has various drawbacks, one of them being an unprecedented surge of false information. Detecting misinformation has thus become more difficult than ever: fake news and rumours are spread rapidly by social media platforms' algorithms, and generative AI hallucinates or deliberately deceives. Jatowt therefore conducted research analysing large language models' capability to detect misinformation on the web. His study tested several LLMs on both distorted authentic news and human-written fake news.
Afterwards, Bernhard Bermeitinger gave an entertaining presentation on how to turn data into action in the context of applied AI. Before going into detail about current research, he answered two questions: Why do we even need AI? And what do we need it for? Bermeitinger then elaborated on how to train a model and why it is so important to test a model's generalization capability. Following this theoretical introduction, he presented his research on the analysis of videos showing fascist symbols and faces such as Adolf Hitler's and Stepan Bandera's. The study demonstrated that fascist symbols co-occur both within and outside their group, and that face detection turned out to have low precision.
Ralph Erwert concluded the lectures by presenting his project “The FakeNarratives Project: Multimodal Computational Analysis of News Videos”, which aims to understand narratives of disinformation in news videos. Film Editing Patterns (FEPs) were used to formalise narrative strategies for the analysis. These patterns from film editing can also be applied to visual news and help obtain information about the techniques used for intensification, objection, fragmentation, evocation of emotions, etc. His team then defined narrative strategies for the analysis of German news videos.
As the last item on the agenda, Gernot Howanitz and Ilya Sulzhytski showcased preliminary results from their research. Unfortunately, a number of videos are missing because they had been removed from YouTube. Nevertheless, the team collected annotated keywords which could then be categorized into topics. Through topic modelling, twelve topics with ample lists of individual protest-related keywords were created, so that the collected YouTube videos can now be analysed. The results, their distribution and their intensity can be visualised in a graph showing the temporal and quantitative occurrence of the topics. The combination of large language model annotation and topic modelling allows the team to trace the narrative of videos. Their next step is to obtain the topic distribution for even more protest videos.
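The temporal graph described here rests on a simple aggregation: summing each topic's weight per time bucket. A minimal sketch, assuming hypothetical per-video topic weights (the records and topic names below are invented, not the project's data):

```python
from collections import defaultdict

# Hypothetical (upload month, topic, weight) triples standing in for
# the per-video topic distributions produced by the pipeline.
records = [
    ("2020-08", "street_protest", 0.7),
    ("2020-08", "police_action", 0.3),
    ("2020-09", "police_action", 0.9),
]

def topic_intensity(records):
    """Aggregate topic weights per month: the numbers behind a graph
    of temporal and quantitative topic occurrence."""
    series = defaultdict(lambda: defaultdict(float))
    for month, topic, weight in records:
        series[month][topic] += weight
    return {month: dict(topics) for month, topics in series.items()}

print(topic_intensity(records))
```

Plotting each topic's monthly totals as a line or stacked area would yield the kind of distribution-and-intensity graph shown at the workshop.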