DSGVO / GDPR

There are numerous constitutional articles, laws, ordinances and regulations that govern companies' daily activities, and their number is constantly increasing. A relatively recent piece of legislation in Europe is the EU's General Data Protection Regulation (GDPR), adopted in 2016. The aim of this transnational regulation is to standardize the collection, processing, storage and deletion of personal data by private and public actors. During a two-year transition period that expired in May 2018, companies had the opportunity to take the steps necessary to bring their daily working practices into line with the new rules. Yet a large majority of them are still lagging behind, even though by now, in 2022, the GDPR should be on the minds of all compliance departments. Besides all areas that process customer data, the HR sector is particularly affected: HR departments now often process data on (potential) employees with the help of tools based on artificial intelligence (AI). As we have already explained, the potential of AI for labor-market-related processes is anything but unlimited, and further regulations on its use are already being planned, for instance in the EU.

As if the GDPR were not complex enough in itself, it poses additional pitfalls and risks for companies when AI comes into play. In the event of non-compliance, businesses face heavy penalties and immense fines [1]. All companies that do not yet comply with the regulation in the HR area (and, as mentioned, there are many of them) should address this issue immediately and, where necessary, take appropriate measures, in particular by procuring compliant, correctly functioning IT.

Not only a lack of knowledge, but also of technology

Several years after the regulation came into force, there is still little case law and therefore virtually no reliable point of reference for how the GDPR is interpreted in legal practice [2]. This confusing state of information about the collection and further processing of personal data creates a great deal of uncertainty among companies. In many cases, especially in companies without a dedicated team of information security officers and IT specialists, ignorance is rife. The uncertainty is further exacerbated by the fact that the GDPR is likely to be tightened in the future, given the poor cooperation between data protection authorities, national disparities in the interpretation and enforcement of the rules, and the in some cases questionable dealings with large digital corporations to date [3]. Furthermore, the GDPR is only the European example of such legislation. Similar regulations that must be followed in an international (labor) market will soon exist in most parts of the world; another prominent example already in place is the California Consumer Privacy Act (CCPA) in the US state of California.

There is another key reason why many companies would currently fail a GDPR compliance test. Beyond ignorance and misinformation, around 80% of HR and labor market management software solutions on the European market simply do not meet the standards set out in the regulation. Moreover, many companies use tools that do not originate from within the GDPR's jurisdiction and therefore rarely comply with European rules in the first place. Such non-conformities often affect the entire setup of a company's HR department, from recruiting to the people analytics processes currently being hyped everywhere. Performance-based decisions about which applicants advance to the next round of the hiring process, or which existing employees are kept in a position, promoted, or dismissed from it, can all too quickly become legally (and ethically) critical when made with the help of data-driven, AI-based systems.

Potential for discrimination only part of the problem

Experience from labor law practice with the consequences of non-compliance with the GDPR can, as mentioned, be counted on one hand. Nevertheless, one thing is already very clear: anyone who bases HR decisions on the results of a software solution whose workings they cannot explain has a huge problem – especially if AI is involved. In such a situation, beyond the obvious lack of transparency, there is also no technical or practical way to intervene in the process. In Germany, for instance, this already runs counter to the guidelines of the national HR Tech Ethics Advisory Board [4]. A well-known example was provided in 2018 by a multinational tech company: its AI-based recruitment tool for pre-selecting applicants exhibited a bias in favor of male applicants that not even the company's own IT department could correct. Beyond this 'classic' example of discrimination in labor-market-related IT processes, a bias could just as conceivably arise along other lines, such as field of study, language or similar, if the data on which the automated program is based is not heterogeneous enough. Strictly speaking, a variant of such discriminatory filters is already in use, namely in so-called 'equal opportunity employment'. Any company that gives preferential consideration to diversity and inclusion aspects in its HR workflow (e.g. in the form of quotas) is, in effect, 'discriminating' against anyone who cannot check any of those boxes. Moreover, the collection and recording of such sensitive information on applicants and employees is not without controversy in itself: it may make people identifiable and thus, in turn, infringe on their personal rights. [2]
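To make the notion of bias more concrete: one common heuristic for surfacing it is the 'four-fifths rule' used in US employment practice, under which a group's selection rate below 80% of the highest group's rate signals potential adverse impact. The following is a minimal sketch of such a check; the group labels and outcomes are purely hypothetical, and the rule itself is a heuristic, not a GDPR test.

```python
from collections import Counter

def selection_rates(candidates):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    Values below 0.8 fail the 'four-fifths rule' heuristic."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (group label, 1 = advanced to next round)
outcomes = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FAIL" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, impact ratio={ratio:.2f} [{flag}]")
```

Checks like this only catch bias along dimensions a company explicitly records; a skewed training corpus can just as easily discriminate along lines no one thought to measure.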

In general, however, the potential for discrimination and the risk of profound personality screening are only part of the problem when it comes to automated decision-making tools in human resources. According to legal experts, in the event of a potential breach of the GDPR, it is also relevant whether the tool is legally permissible at all (as defined in Art. 6 (1b) in conjunction with Art. 88 GDPR). So-called 'prognostic validity', i.e. whether an algorithm captures comprehensible and scientifically verifiable relationships, plays a decisive role here. [2] Unsuccessful (and highly problematic) examples of violations of this principle in human resources include speech analytics solutions that, after initial enthusiastic anticipation, are now collecting negative awards for their 'scientifically dubious, probably illegal and dangerous' technology. Promises that automated facial analysis during a job interview can provide important insights into a candidate's intelligence or the like may sound tempting and exciting. However, they do not stand up to the GDPR's suitability requirement and are closer to snake oil in their usefulness. [5]
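For illustration, prognostic validity can be probed empirically by checking whether a tool's scores actually correlate with later job performance. A minimal sketch, with entirely made-up numbers:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: algorithmic hiring scores vs. later performance ratings.
scores      = [0.91, 0.34, 0.77, 0.52, 0.88, 0.41]
performance = [4.2, 3.9, 4.0, 4.1, 3.8, 4.3]

r = pearson_r(scores, performance)
print(f"predictive validity r = {r:.2f}")
# An r near zero (as one would expect for facial-analysis 'intelligence'
# scores) suggests the tool measures nothing job-relevant.
```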

It is well past high time

The bottom line for companies is this: be aware that it only takes one big lawsuit – spoiler: it will come – before all businesses, whether large, medium or small, have to catch up immediately to escape the heavy penalties for breaching the European rules on the handling of personal data. Waiting until that point, however, will not only leave you ignorant and unprepared; it will simply be impossible to implement compliant, data-driven HR processes in time at that stage. Because, again, it is well past high time: the transition period already expired in 2018. As mentioned, the number of standards regulating this area is not decreasing either, and further, more far-reaching regulations are already being planned in connection with AI, currently for example in the EU.

Anyone who today processes personal data in any form in their HR department with the help of machine-trained algorithms, and who is not absolutely certain of the legality of these automated processes, should act immediately. Otherwise, serious and costly consequences can soon be expected. In Europe, the GDPR is the first port of call for this strongly advised review and update of in-house compliance knowledge and the HR practices affected by it. After reviewing all of these aspects, it is also likely that new IT solutions will have to be acquired, given the high proportion of non-GDPR-compliant HR software on the European market.

At JANZZ.technology, we offer solutions for all HR and labor market management processes that are not only GDPR- and CCPA-compliant, but also unbiased, multilingual and modular. This is made possible by our globally unique approach, which builds our solutions and systems on ontology-based semantic matching. At its heart is the JANZZon! knowledge graph, curated daily by our experts of different ages and diverse backgrounds in terms of language, education, culture and experience, using ISCO-08, ESCO, O*NET and more than 160 other international taxonomies and classifications. Special attention is also paid to the careful selection of data and sources, so that JANZZ can ensure that the information used to maintain JANZZon! is always optimally balanced, representative and diverse. For example, in the area of annotation and machine learning, we do not rely on English-language documents alone – another unique selling point of JANZZ compared to most HR applications in use today. The multilingual nature of our ontology also allows gender- or culture-specific differences between languages and their associated geographic areas to be normalized, and thus bias-free matching results to be achieved. The results of our parser product JANZZparser! and all our other tools are therefore evidence-based, explainable and – in contrast to many other solutions on the market – not only fully GDPR-compliant, but also scalable thanks to continuous training, a foundation of high-quality data and their grounding in JANZZon!. Their use makes it possible to take the factually best decision on every HR issue in a company and to exclude so-called 'false-negative' or 'false-positive' decisions – in short, to act in the best interest of the business using them.
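To illustrate the general idea behind ontology-based (as opposed to purely string-based) matching – a deliberately simplified toy sketch, not JANZZ's actual implementation – multilingual surface forms can resolve to a single concept node, so that matching happens on concepts rather than keywords:

```python
# Toy illustration of concept-based matching (NOT the actual JANZZ system):
# multilingual surface forms resolve to one concept node, so "Krankenpfleger"
# and "nurse" match even though the strings share nothing.
CONCEPTS = {
    "nurse":              "C-NURSE",
    "krankenpfleger":     "C-NURSE",  # German, male form
    "krankenschwester":   "C-NURSE",  # German, female form -> same concept
    "infirmier":          "C-NURSE",  # French
    "software engineer":  "C-SWE",
    "softwareentwickler": "C-SWE",
}

def to_concepts(terms):
    """Map known surface forms to their concept identifiers."""
    return {CONCEPTS[t.lower()] for t in terms if t.lower() in CONCEPTS}

def match_score(job_terms, candidate_terms):
    """Jaccard overlap of concept sets: string- and gender-form-agnostic."""
    a, b = to_concepts(job_terms), to_concepts(candidate_terms)
    return len(a & b) / len(a | b) if a | b else 0.0

print(match_score(["Nurse"], ["Krankenschwester"]))  # 1.0: same concept
```

Because 'Krankenschwester' and 'nurse' resolve to the same concept node, a keyword comparison that would score zero becomes a perfect conceptual match – the basic mechanism by which language- and gender-specific surface forms can be normalized away.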

If you would like to learn more about our services, please contact us at info@janzz.technology or via our contact form, or visit our product page for an overview of all our solutions.

 

[1] Datenschutz.org. 2021. EU-Datenschutzverordnung (DSGVO): Verbindliches Datenschutzrecht für alle! URL: https://www.datenschutz.org/eu-datenschutzgrundverordnung/  
[2] Diercks, Nina. 2021. Der Einsatz von KI in Recruiting-Prozessen – Diskriminierungspotential automatisierter Entscheidungsfindung im HR-Bereich. URL: https://stiftungdatenschutz.org/veranstaltungen/unsere-veranstaltungen-detailansicht/datentag-datenschutz-und-kuenstliche-intelligenz-239
[3] IT-Daily. 2020. Drei Herausforderungen verschärfen die DSGVO-Problematik. URL: https://www.it-daily.net/it-sicherheit/datenschutz-grc/24350-drei-herausforderungen-verschaerfen-die-dsgvo-problematik
[4] Ethikbeirat HR Tech. 2020. Richtlinien für den verantwortungsvollen Einsatz von Künstlicher Intelligenz und weiteren digitalen Technologien in der Personalarbeit. URL: https://www.ethikbeirat-hrtech.de/wp-content/uploads/2020/03/Richtlinien_Download_deutsch_final.pdf
[5] White, Sarah. 2021. AI in hiring might do more harm than good. URL: https://www.cio.com/article/189212/ai-in-hiring-might-do-more-harm-than-good.html