KIDD stands for "KI im Dienste der Diversität" — artificial intelligence in the service of diversity. The project deals with the use of AI applications and algorithmic decision-making systems, because the repetition and reinforcement of stereotypes and prejudices, bias, and the possible exclusion of people can be harmful for companies.
In recruiting, the risk is considered so high that the EU AI Act imposes special obligations on such systems to ensure that no one is disadvantaged on the basis of gender, origin, or other characteristics. But any other task in which AI is used can also produce incorrect or unhelpful results due to bias.
KIDD aims to provide companies and organizations with a process that accompanies the introduction of AI systems with a focus on diversity. Katja Anclam, one of the project founders, explains in our deep dive how such a process can be designed. First, a team is formed, a so-called diversity panel. This panel should be involved early in the development or selection of the software and point out legal, ethical, and diversity-related aspects.
Anclam says that a lot happens in companies and organizations simply by setting up such a group: awareness grows, and further discussions on the topic of diversity are triggered almost automatically. To ensure the best possible decisions and to initiate the measures negotiated between the panel and the software developers, a so-called KIDD moderator is involved, along with an AI expert who explains technical issues where needed.
Anclam hopes that the project can contribute to the sensible use of AI in society. “We can still help shape things and should do so,” she says. But the time window is limited.
KIDD is funded by the German Federal Ministry of Labor and Social Affairs under the umbrella of the New Quality of Work Initiative (Initiative Neue Qualität der Arbeit). The goal is a standardized KIDD process "that will enable companies to purchase or develop and introduce fair, transparent and understandable software applications."
(emw)