Explainable Classification of News Bias and Selected Further IARG Research

Leidner, Jochen L. (2026)



LectureChat: Hybrid RAG over Wikipedia and Multilingual Lectures

Dimitsas, Markos; Leidner, Jochen L. (2026)

The 48th European Conference on Information Retrieval (ECIR 2026), Delft, The Netherlands, March 28-April 3, 2026.


Peer Reviewed
 

LectureChat extends the WikiChat conversational AI system by integrating multilingual university lecture transcripts alongside Wikipedia content. The demo showcases a dual retrieval architecture that combines structured encyclopedic knowledge with academic lecture material, leveraging multiple segmentation strategies and cross-index reconciliation to improve retrieval quality. The system maintains separate citation spaces for Wikipedia (numeric) and lectures (alphabetic) and preserves temporal provenance for direct video navigation. We present the overall architecture, interaction flow, implementation details, and a reproducibility plan.
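The separate citation spaces described in the abstract can be illustrated with a minimal sketch. All function and field names below are hypothetical, chosen for illustration; they are not taken from the actual LectureChat implementation.

```python
# Sketch of the dual citation-space idea: Wikipedia passages receive numeric
# markers ([1], [2], ...) while lecture segments receive alphabetic ones
# ([a], [b], ...), so the two sources stay distinguishable in one answer.
import string

def assign_citations(wiki_hits, lecture_hits):
    """Label retrieved passages so each citation reveals its source type."""
    citations = {}
    for i, passage in enumerate(wiki_hits, start=1):
        citations[f"[{i}]"] = passage                   # numeric: Wikipedia
    for letter, segment in zip(string.ascii_lowercase, lecture_hits):
        citations[f"[{letter}]"] = segment              # alphabetic: lectures
    return citations

cites = assign_citations(
    ["Delft is a city in the Netherlands."],
    [{"text": "Today we cover BM25 ranking.", "timestamp": "00:12:30"}],
)
```

Because each lecture segment keeps its timestamp, an alphabetic citation can link directly to the corresponding position in the lecture video, matching the "temporal provenance" idea in the abstract.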


Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics, Rabat, Morocco, March 23-29, 2026 (Volume 3: System Demonstrations)

3.
DOI: 10.18653/v1/2026.eacl-demo.0


Open Access Peer Reviewed

Künstliche Intelligenz im wissenschaftlichen Arbeiten - Chancen, Grenzen und verantwortungsvoller Einsatz

Grosch, Christian (2026)

Talk given during the science week (Wissenschaftswoche) at Frankenwald Gymnasium Kronach, 16 March 2026.



Adapting to Disruption: A Case Study of Project-Based and AI-Integrated Curriculum Innovation at Coburg University

Grosch, Christian (2026)

Zeitschrift für Hochschulentwicklung 21 (1), 321-340.
DOI: 10.21240/zfhe/21-1/16


Open Access Peer Reviewed

The Risks of Better Risk Radars: Considerations Regarding the Use of Artificial Intelligence Technologies in Risk Intelligence

Leidner, Jochen L. (2026)

The 34th Annual Conference of the Society for Risk Analysis – Europe (SRA-E 2026), Alicante, Spain, 26-29 May 2026.


Peer Reviewed
 

Progress in artificial intelligence research, driven by the volume of data available on the World Wide Web, the development of affordable yet highly efficient mathematical processors ("GPUs", graphics processing units), and the discovery of more effective training algorithms for very large models (such as "transformer" neural networks like Google's BERT and OpenAI's GPT, the technology behind ChatGPT), has recently led to a technological convergence that has begun to disrupt many other areas of scientific research, business and life. In this work, we explore some methodological concerns and boundary conditions that arise when applying such advanced technologies to advance the state of the art in software-implementable models for risk intelligence. We examine the potential of these technologies to assist open-ended 360° risk profiling, as well as ethical and governance questions such as dealing with the inherent bias in data, the potentially unknown factuality status of information in datasets, the questionable provenance of datasets, and other factors such as sabotaged models. Borrowing from security engineering, we adopt the concept of the 'attack surface' and introduce a variant of it, the 'risk surface': we posit that a good risk model should be supplemented by a model of its own risks, making limitations such as blind spots and known questionable behavior explicit. Model cards are proposed as a standard type of document to capture the risk profile of the risk model itself.
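The proposal to document a risk model's own risk surface can be sketched as a minimal "risk card" record, in the spirit of model cards. All field names below are assumptions made for illustration; the abstract does not specify a schema.

```python
# Illustrative sketch only: a minimal "risk card" documenting a risk model's
# own limitations (blind spots, known questionable behavior, provenance),
# analogous to a model card but focused on the model's risk surface.
from dataclasses import dataclass, field

@dataclass
class RiskCard:
    model_name: str
    intended_use: str
    blind_spots: list = field(default_factory=list)   # known coverage gaps
    known_issues: list = field(default_factory=list)  # questionable behavior
    data_provenance: str = "unknown"                  # origin of training data

card = RiskCard(
    model_name="demo-risk-radar",
    intended_use="illustration only",
    blind_spots=["no coverage of supply-chain risk"],
    known_issues=["overconfident predictions on sparse data regions"],
)
```

The point of such a record is that the limitations travel with the model: a consumer of the risk model's output can inspect the card before trusting the assessment.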


SALOMO: An Annotation Tool For Complex Annotation Tasks With A Large Number of Labels

Menzner, Tim; Leidner, Jochen L. (2026)

The Fifteenth Biennial Language Resources and Evaluation Conference (LREC 2026), Palma, Mallorca, Spain, 11-16 May 2026.


Open Access Peer Reviewed

Media Manipulation, Bias and Propaganda: An Experience Report on Research-Teaching Integration

Leidner, Jochen L.; Menzner, Tim (2026)

Datenbankspektrum - Special Issue on Trends in Narrative Analysis, with a Focus On Fake News, Misinformation, and Bias 26 (1), 75-80.
DOI: 10.1007/s13222-026-00532-0


Open Access Peer Reviewed
 

In this short experience report, we present our attempt to integrate output from an ongoing research project with a traditional, mostly literature-based, course on media manipulation, bias and fake news, to turn an existing undergraduate course into a more engaging experience for attending students. To this end, we successfully utilized our system BiasScanner for news bias detection and classification (BiasScanner.org).


Detecting and Explaining News Bias and Other Selected Research in Information Access

Leidner, Jochen L. (2025)


 

I present a couple of active research projects of my Information Access Research Group (IARG), including work on the detection, sub-classification, and explanation of sentence-level news bias, methodology support for machine learning projects, and RAG chatbots that assist with tasks ranging from machine learning project work in a company to students learning about artificial intelligence; I conclude with some thoughts on evaluation.


Bias-Mitigating News Search with BiasRank

Menzner, Tim; Leidner, Jochen L. (2025)

Proceedings of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, Marbella, Spain, October 22-24, 2025 (KDIR 2025) 1, 436-447.


Peer Reviewed

Besteuerung von Kapitalanlagen in der betrieblichen Altersversorgung

Demmler, Uwe (2025)

Steuerrecht der betrieblichen Altersversorgung mit arbeitsrechtlicher Grundlegung Lfg. 56 / Oktober 2025 / Band II / Teil 9, 1-120.



Sonderthema KI: "Ist Denken Out?" - Ein Interview mit Prof. Dr. J. Leidner

Leidner, Jochen L. (2025)

Coburger Magazin.


Open Access
 

The role of thinking in the age of AI.



MANDALA.ML: A Life Cycle-Centric and Role-Aware Methodology for Agile Machine Learning Projects

Reiche, Michael; Leidner, Jochen L. (2025)

The 24th International Conference on Intelligent Software Methodologies, Tools, and Techniques (SOMET 2025), Kitakyushu, Japan, September 23-26, 2025.


Peer Reviewed

Large Language Models for the Automated Detection and Classification of Media Bias and Propaganda to foster Media Literacy among News Audiences

Menzner, Tim (2025)

Doctoral Consortium contribution, Proceedings of the Ninth European Conference on Information Literacy (ECIL'25), 22-25 September 2025, Bamberg, Germany.


Peer Reviewed
 

Media bias is an enduring feature of news dissemination, reflecting the subjective perspectives of its creators across history. From ancient records like "The Victory Stele of Naram-Sin" to contemporary news channels, bias permeates media, influencing political, social, and public health narratives. This research investigates the persistent phenomenon of media bias and the potential of large language models (LLMs) (Kojima et al., 2022) for its detection and classification, in order to deploy publicly available software tools that aim to enhance media literacy among news consumers.

Traditionally, media bias served the interests of ruling powers; even with the rise of modern journalism, objectivity is often compromised by commercial pressures and inherent human biases (Rodrigo-Ginés et al., 2024). As media landscapes evolve, bias continues to shape public opinion, impacting democratic processes and public health perceptions, as was evident during the COVID-19 pandemic, where polarized media narratives swayed public health decisions and fueled misinformation (Recio-Román et al., 2023).

Current research on the effects of labeling media bias or propaganda, whether automatically or with human involvement, highlights the complexity of the issue. Depending on different circumstances, labeling can lead to negative outcomes (such as reinforcing filter bubbles by providing means to avoid news with a different perspective), no change in news consumption behavior at all, or, in some cases, an actual improvement in media literacy as intended (Zavolokina et al., 2024).

This research aims to develop a technical solution for the automatic labeling of biased media content, emphasizing several proposals that we hope will lead to a positive effect on media literacy among those presented with the system’s assessments.

These proposals include using a fine-grained taxonomy of bias types rather than simple binary left/right labeling; providing a detailed natural-language explanation for each model decision; marking bias at the sentence level rather than at the article or publication level to provide more insight; fine-tuning autoregressive models like GPT-3.5 (Brown et al., 2020) or Mistral with high-quality examples instead of using "simple" bidirectional models like BERT or non-fine-tuned models; and focusing on the German language, which has not yet been properly explored for such systems.
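The combination of sentence-level labeling, a fine-grained taxonomy, and per-decision explanations can be sketched as a simple record structure. The taxonomy entries and names below are illustrative assumptions, not the actual taxonomy used in this research.

```python
# Hypothetical sketch of a sentence-level bias annotation: each sentence
# carries a fine-grained bias type (not just left/right) plus a
# natural-language explanation of the model's decision.
BIAS_TAXONOMY = {"loaded-language", "cherry-picking", "ad-hominem", "none"}

def label_sentence(sentence, bias_type, explanation):
    """Attach a taxonomy label and a human-readable explanation to a sentence."""
    if bias_type not in BIAS_TAXONOMY:
        raise ValueError(f"unknown bias type: {bias_type}")
    return {
        "sentence": sentence,
        "bias_type": bias_type,
        "explanation": explanation,
    }

record = label_sentence(
    "The so-called experts failed again.",
    "loaded-language",
    "'so-called' casts doubt on the group without providing evidence.",
)
```

Labeling at this granularity is what enables tools such as a browser extension to highlight individual sentences and show the reason for each highlight, rather than flagging a whole article.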

Understanding readers' perceptions when exposed to bias-labeled content is another facet of this research. It will explore how bias labeling influences readers' views on credibility and neutrality and whether real-time bias indicators affect news consumption behaviors. As mentioned, practical applications serve as a cornerstone of this research. One aim is to implement bias detection systems in real-world settings, such as search engines and news aggregators, to promote balanced information consumption. The development of user tools, like browser extensions highlighting media bias, intends to address public need for transparent information evaluation.

In essence, this research contributes to media literacy enhancement by demystifying media bias through advanced computational methods. By refining detection mechanisms, classifying bias more effectively, and implementing practical tools, it aims to fortify democratic discourse and public understanding, thereby addressing the pervasive influence of media bias in today’s interconnected world.

 

References

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems (Vol. 33, pp. 1877–1901). Curran.

Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35, 22199–22213.

Recio-Román, A., Recio-Menéndez, M., & Román-González, M. V. (2023). Influence of Media Information Sources on Vaccine Uptake: The Full and Inconsistent Mediating Role of Vaccine Hesitancy. Computation (Basel). https://doi.org/10.3390/computation11100208

Rodrigo-Ginés, F.-J., Carrillo-de-Albornoz, J., & Plaza, L. (2024). A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it. Expert Systems with Applications, 237, 121641.

 

Keywords: Media Bias, Large Language Models, Bias Detection, Natural Language Processing, Journalism, Public Opinion, Taxonomy


Stellt euch vor, Innovation wäre ein Studiengang der Wirtschaftsinformatik

Grosch, Christian (2025)

Science Slam, 6. MINT Symposium 2025, Nürnberg.



Welcome to the ML Team: A Chat Agent as a Project Management Support Agent

Reiche, Michael; Leidner, Jochen L. (2025)

11th Intelligent Systems Conference 2025 (Intellisys'25), 28-29 August 2025, Amsterdam, The Netherlands.


Peer Reviewed

Attachment Theory in the Digital Age: Exploring the Psychosocial Dimensions of Technology Use

Grosch, Christian (2025)

The Human Side of Service Engineering 182.
DOI: 10.54941/ahfe1006400


Open Access Peer Reviewed

Sprachmodelle und Gefahren verbunden mit ihrem ‚white hat‘- und ‚black hat‘-Einsatz im Bereich ITSEC & (Counter)Propaganda

18. Wissenschaftstag der Europäischen Metropolregion Nürnberg, Amberger Congress Centrum (ACC), Amberg, 11 July 2025.



Learner Models: Design, Components, Structure, and Modelling - A Systematic Literature Review

Böck, Felix; Ochs, Michaela; Henrich, Andreas; Landes, Dieter; Leidner, Jochen L....

User Modeling and User-Adapted Interaction 35, 15.


Open Access Peer Reviewed
 

Learning is at the heart of every progress the human species makes. It is most effective when it considers who we are as individuals, what learning approach we prefer and what we already know to begin with. In the digital age, we strive to capture such information in the form of a digital representation -- the so-called learner model --, to tailor learning-related systems to this information and build upon it to create more personalised learning experiences. Over recent years, the proliferation of diverse models across various educational applications and disciplines has made it challenging to access targeted research.

In this survey, we aim to address this gap, reviewing the latest advances in learner modelling and conducting a comprehensive analysis of the existing approaches, focusing on developments from 2014 to 2023. With the help of a systematic literature review, we want to provide designers and developers of learner models with a structured overview and simplified entrance into the topic and the field of learner models. We investigate the question: What do learner models look like and how are they filled, kept up-to-date, and used?

To this end, we analyse and classify existing approaches. Our findings provide a comprehensive and structured overview of the field of learner modelling, allowing researchers to navigate and understand the diverse approaches more easily and providing developers of learner models or adaptive systems with a practical tool to access relevant information according to their needs.



Fakultät Wirtschaftswissenschaften (FW)

Hochschule Coburg



Publication support
Jakob Hanke
Research Officer
T +49 9561 317 360
jakob.hanke[at]hs-coburg.de