A lot of people came on board in NL AIC’s startup phase, but to what extent do they reflect society? Diversity and inclusion are currently receiving extra attention from the Programme Team. Anita Lieverdink and Joris den Bruinen explain why.
The advantages of variety within teams
“When NL AIC was founded, it was a coalition of the willing,” Joris begins. “It brought together people who all wanted to help with the system changes and the social and economic impact that AI brings. In that initial phase, all of that kept us so busy that diversity and inclusion received less attention. Now that we’re a couple of years down the road, it’s time to get serious about the subject – not only because diverse AI teams are better able to prevent bias during the development of algorithms and AI applications, but also because heterogeneous teams make better decisions.”
Openness and transparency
Anita emphasises that it is not merely about quotas based on gender or cultural background. “It’s also important that we create a culture where everyone feels free to contribute in their own way. That there’s space to make mistakes and learn from them, that we speak up if we don’t agree with something and that we know how decisions are made. That requires transparency, which I think is an important issue.”
Raising awareness
“A defining first step is of course the selection,” she continues, “and who you include in your team. But that’s only the beginning. How much we embrace diversity and inclusion is then reflected in the way we work together and communicate. We will no doubt make the odd blunder as well. My point is that we can learn from that and use it to raise our awareness and make progress. A lot still happens subconsciously. That’s why it is important to reflect regularly, and asking each other questions helps. You want people to feel their voice is heard and you want them to feel genuinely involved. If the answer to those questions is in the negative, we need to do something about it.”
Don’t reinvent the wheel
“Exactly,” agrees Joris. “We see ourselves as a learning organisation. I also think it’s important to realise that we don’t have a monopoly on wisdom here. So to avoid us reinventing the wheel, we’d like to collaborate with NL Digital, Women in AI and VHTO on aspects of diversity and inclusion. We’re already in solid discussions with those parties about the use of female role models. And that’s just the start because diversity and inclusion are about much more than just creating gender equality. As NL AIC we want to be an organisation where everyone is welcome. We want to be an organisation that is open to the rich variety of different opinions and visions. By the way, it’s not that I feel we’re doing poorly in that area at the moment, as a coalition. And like Anita already said, it’s not so much about meeting certain quotas and leaving it at that. That would make it just whitewashing to show the outside world how well we’re doing.”
Make it concrete
“But if we really want to take steps in diversity and inclusion, that of course does mean setting goals,” explains Anita. “So we need to know where we are now and where we want to go. Otherwise it’s difficult to make it concrete. Meanwhile, we’ve already taken an important step as the Programme Team by stating that we’re going to embrace this issue and give it more attention. This means that we’re going to promote the importance of more diversity and inclusion and talk to people in the coalition about it.”
An important step: the Strategy Team elections
“An important moment will be the NL AIC Strategy Team elections,” she continues. “That’s because we intend to make our focus on diversity and inclusion clearly visible in the composition of that team.”
“The more diverse our Strategy Team is, the more perspectives will come to the fore during discussions and the better we’ll be able to anticipate social developments,” thinks Joris. “So yes, it would be nice if the make-up of the Strategy Team matches what we’ve got in mind and lets us put our vision on diversity and inclusion into practice immediately. We want to focus on as many aspects as possible. Including a good mix of ages, for example. And whether someone is a sciencey type or a people person. Whether they work in government, business or a knowledge institution. These things are also important for a person’s view of the world.”
The NL AIC has the subtitle: algorithms that work for everyone. You can read below how algorithms are being used to detect sensitivities around diversity and cultural context in texts from the past.
AI technology that takes account of cultural contexts
It is relatively easy to teach an algorithm that the Dutch Golden Age and the Seventeenth Century refer to the same period. But how do you teach an algorithm that there is a great difference in what they signify, and that the Golden Age is a controversial term whose acceptability depends on the context in which it is used? And there are many more words that we used in the past without thinking that we now deem objectionable or even downright offensive. That is why the Rijksmuseum replaced the term ‘slave’ with ‘enslaved person’ throughout its collection.
Cultural AI Lab
Going through everything yourself and manually flagging words or excerpts that might be deemed controversial is a long and tedious task – exactly the kind of chore that artificial intelligence is well suited to. In 2021, the Cultural AI Lab was founded: a joint initiative by the Centrum Wiskunde & Informatica, the KNAW Humanities Cluster, the National Library of the Netherlands, the Netherlands Institute for Sound and Vision, TNO, the University of Amsterdam, VU University Amsterdam and the Rijksmuseum. Under the Lab’s auspices, several projects are using AI technology to study human culture with all its associated complexities and subjectivity.
Two worlds come together
“Developing an algorithm that takes cultural contexts into account is a major challenge,” emphasises Michel de Gruijter, who works with AI at the KB (National Library of the Netherlands) and who is also closely involved with the Cultural AI Lab. “Because words can have different meanings in different contexts, AI must have knowledge covering a range of domains and must learn to interpret cultural expressions correctly. That’s why various specialists are involved in each project, including experts with knowledge about data technology and people who have vast knowledge of the heritage sector. These two worlds come together nicely in the Lab and that yields fresh new ideas.”
Developing AI tools
“We’re very happy with the initial results from our PhD students Ryan Brate and Andrei Nesterov,” adds Michel de Gruijter. “They’re studying what’s needed to train AI systems so that they are better able to recognise cultural contexts. Their goal is to develop AI tools that can be used successfully in cultural institutions. It’s about algorithms that automatically detect controversial terms or textual passages and then ‘do something’ about them. We don’t yet know exactly what. But we are certain that cultural institutions will eventually be able to use AI to find controversial words in their catalogues quickly and then add appropriate explanations for them.”
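To make the idea concrete, the first step such a tool might take can be sketched in a few lines of code. This is a minimal illustration, not the Cultural AI Lab’s actual tooling: the term list, the suggested alternatives and the example record are hypothetical placeholders, and plain keyword matching like this deliberately ignores the contextual interpretation that the Lab’s research is really about.

```python
import re

# Hypothetical list of potentially contested terms, each with a note for
# the human reviewer. A real heritage institution would curate this list.
CONTESTED_TERMS = {
    "slave": "consider 'enslaved person'",
    "golden age": "contested label for the Dutch seventeenth century",
}

def flag_contested(text, terms=CONTESTED_TERMS, window=30):
    """Return (term, note, context snippet) for each match in a record."""
    hits = []
    for term, note in terms.items():
        pattern = r"\b" + re.escape(term) + r"\b"
        for m in re.finditer(pattern, text, re.IGNORECASE):
            # Keep some surrounding text so a reviewer sees the context.
            start = max(0, m.start() - window)
            end = min(len(text), m.end() + window)
            hits.append((term, note, text[start:end].strip()))
    return hits

record = "Portrait of a merchant and his slave, painted in the Golden Age."
for term, note, context in flag_contested(record):
    print(f"{term!r}: {note} | ...{context}...")
```

The sketch only flags candidate passages and attaches a note; deciding what to ‘do something’ about them – adding an explanation, rephrasing, or leaving the historical wording with commentary – remains a human, context-dependent judgement, which is precisely why training systems to recognise cultural context is the harder research problem.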