ELSA Lab for AI, Media & Democracy

Published on: 10 April 2022

Machine learning and data analysis can play a role in the entire media production chain. AI systems throughout the chain need to be both transparent and explainable, not merely to comply with regulations but also to boost user confidence.

What social challenge in AI is being tackled?

Disinformation, fake news and polarisation pose real dangers to the functioning and vitality of a democratic society. The rapid rise of AI-based technology means that this danger is becoming ever more acute. The ELSA Lab for AI, Media & Democracy develops models and algorithms to quantify and control the spread of disinformation and polarisation.

Automated recommendation systems can use feedback loops that end up always presenting the same type of content to users, an effect known as a ‘filter bubble’. This ELSA Lab works to improve the diversity and inclusivity of recommendation systems by developing new metrics that are used to evaluate various recommendation algorithms and systems.
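The article does not specify which diversity metrics the Lab develops, but one common, minimal way to quantify the diversity of a recommendation list is the Shannon entropy of the topic categories it covers. The sketch below uses a hypothetical article catalogue purely for illustration:

```python
from collections import Counter
from math import log2

def category_entropy(recommended_items, item_categories):
    """Shannon entropy of the topic categories in a recommendation list.

    Higher entropy means the list spreads more evenly over categories
    (more diverse); a list drawn from a single category scores 0.
    """
    counts = Counter(item_categories[item] for item in recommended_items)
    total = sum(counts.values())
    # p * log2(1/p), summed over the category distribution
    return sum((c / total) * log2(total / c) for c in counts.values())

# Hypothetical catalogue mapping article IDs to topic categories.
catalogue = {"a1": "politics", "a2": "politics", "a3": "sport", "a4": "culture"}

narrow = category_entropy(["a1", "a2"], catalogue)        # one topic only
diverse = category_entropy(["a1", "a3", "a4"], catalogue) # three topics
```

A metric like this can be computed per user and averaged across a system, making it possible to compare recommendation algorithms on diversity rather than on accuracy alone, in the spirit of the evaluation work described above.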

Appropriate legal and ethical guidelines must be developed to make sure that AI-driven solutions are developed and deployed in ways that respect our public values and fundamental rights. The legislative frameworks currently emerging in Brussels and The Hague need to reflect the democratic role of the media (and the real-world nature of media companies), the creative industry, the general public and society at large.

What type of solution is offered to the end user?

Fundamental, theoretical models that let us understand the dynamics of interactions within networks of strategic, self-learning agents (including AI). Models like these can be used to analyse empirical data, which allows important control parameters to be identified and provides guidance when regulations are being drafted. They also provide real-world input for the ethically and socially responsible design and implementation of AI models in practice. This involves, for example, developing prototype solutions for specific issues, as well as guidelines and ethical-legal assessment frameworks. These solutions are developed jointly with media professionals and end users.

What AI methods or technologies are used in the studies?

AI is often a black box. We aim for transparency and explainability in all our systems, and new methods are being developed to achieve this. The ELSA Lab focuses less on the algorithms themselves and more on the transparency of the whole AI pipeline: the selection of training and test data, the pre-processing steps, the algorithm’s parameters and the predictions it makes as a result, as well as how various users can view and query those results.

The techniques used include multi-agent systems, mechanism design, game theory, machine learning, deep learning, multi-agent reinforcement learning, natural language processing, data analytics, knowledge representation and reasoning, human-computer interaction and affective interactive systems.

Is there cooperation with other sectors?

Together with the Utrecht Media Lab and the Cultural AI Lab, we are looking at the differences and similarities between the cultural heritage sector and the media sector, in particular the diversity and inclusivity of recommender systems.

What is the ultimate success this ELSA Lab can achieve?

The development of theoretically sound conceptual frameworks and methodologies that underpin the way factually based, independent media operate, together with broader awareness among media professionals of both the capabilities of AI and the objections to it, and of how such technology can be used to reinforce the democratic function of the media.

Awarded the NL AIC Label

The Netherlands AI Coalition has developed the NL AIC Label to underline its vision for the development and application of AI in the Netherlands. An NL AIC Label formally recognises an activity that is in line with the aims and strategic goals of the NL AIC and/or the quality of that activity. The NL AIC would like to congratulate the ELSA Lab for AI, Media and Democracy.

Awarding ELSA Lab funding

The Netherlands Organisation for Scientific Research (NWO) and the Netherlands AI Coalition launched the NWA call for ‘Human Centric AI for an inclusive society: Towards an ecosystem of trust’. After assessment by an independent NWO evaluation committee, five projects were approved at the end of January 2022, including this ELSA Lab. The NL AIC would like to congratulate all the parties involved in obtaining this funding and wishes them every success in the further development of the Lab.

More information?

If you’re interested in this ELSA Lab, please visit the website. If you would like more information about human centric AI and the ELSA concept, please go to this page.
