The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) argues that organizations that use artificial intelligence and algorithms in their operations should explain why they do so. This applies to companies and government bodies that process personal data on a serious scale and use AI systems or algorithms that can have a major impact on people's lives.
According to the Autoriteit Persoonsgegevens (AP), the introduction of AI chatbots and a lack of understanding of existing algorithms pose the greatest risks. These include discrimination, unequal opportunities, deception, and a lack of transparency and explainability in algorithms.
Almost all organizations in the Netherlands now work with algorithms. “Many customers mean a lot of customer data in your files,” says AP chairman Aleid Wolfsen. The childcare benefits scandal (toeslagenaffaire) is the best-known example of an algorithm disadvantaging citizens, but more recently the student finance agency DUO and the benefits agency UWV also made mistakes with algorithms.
Wolfsen believes that AI and algorithms can be useful, but that the risks should not be ignored. “Awareness of this is increasing, but many governments and companies are still searching for the right way to do it,” he says. “That’s why clear regulations are crucial now. We call on the caretaker government to continue making progress in this regard.”
AP calls for mandatory completion of register
The Netherlands has an algorithm register in which government organizations can voluntarily record high-risk algorithms, but registration is not yet mandatory. The AP hopes that it will become mandatory within a year. The Dutch public broadcaster NOS previously reported that the register is hardly being filled in.
Since the beginning of this year, the AP has supervised not only privacy compliance but also algorithms. This is the first time the AP has published a report on the topic; the regulator plans to do so every six months.