JANZZ.technology offers explainable AI (XAI)

Over the past decade, thanks to the availability of large datasets and more powerful computing, machine learning (ML), and deep learning in particular, has improved dramatically. However, the dramatic success of ML has forced us to tolerate the opacity of Artificial Intelligence (AI) applications: as these systems become increasingly autonomous, they are less and less able to explain their actions to their users.

Nowadays, most AI technologies are developed by private companies that keep their data processing a closely guarded secret. Moreover, many of these technologies rely on complex neural networks that cannot explain how they arrive at their results.

If such a system merely disrupts a customer’s travel plans, the consequences may be minor. But what if it makes the wrong call in an autonomous vehicle, a medical diagnosis, policy-making or someone’s job application? In such cases it would be hard to blindly trust the system’s decisions.

At the beginning of this year, the Organisation for Economic Co-operation and Development (OECD) put forward its principles on AI with the aim of promoting innovative and trustworthy AI. One of its five complementary value-based principles for the responsible stewardship of trustworthy AI states that “there should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.” [1]

Explainable AI (XAI) has recently emerged in the field of ML as a means of addressing “black box” decisions in AI systems. As mentioned above, humans cannot understand how and why most current ML algorithms reach a particular decision, which makes it hard to diagnose those decisions for errors and biases. This is especially true of the popular deep learning neural network approaches. [2]
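To make the idea of a post-hoc explanation concrete, here is a minimal sketch of one widely used technique, permutation importance, which measures how much a black-box model’s accuracy drops when each input feature is shuffled. The example assumes Python and scikit-learn with toy data; it is a generic illustration, not the approach of any particular vendor.

```python
# Minimal sketch: probing a black-box classifier with permutation importance.
# Toy data and model only; not any production system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for an opaque decision task
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A typical "black box": accurate, but its internal logic is hard to read
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure the drop in accuracy;
# features whose shuffling hurts the score most drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Techniques like this do not open the black box itself, but they give users and auditors at least a first handle on which factors a model’s decisions depend on.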

Consequently, numerous regulatory bodies, including the OECD, have urged companies to adopt more explainable AI. The General Data Protection Regulation, which took effect in Europe, gives people in the European Union a “right to a human review” of any algorithmic decision. In the United States, insurance laws require companies to explain their decisions, for instance why they deny coverage to a certain group of people or charge only some customers a higher premium. [3]

There are two main problems associated with XAI. First, it is challenging to define the concept of XAI precisely, and users need to be aware of the limits of their own knowledge. Moreover, if companies had no choice but to provide detailed explanations for everything, intellectual property as a unique selling proposition (USP) would disappear. [4]

The second problem is assessing the trade-off between performance and explainability. Do we need to standardize certain tasks and regulate industries to force them to adopt transparent AI solutions, even if that places a very heavy burden on those industries’ potential?

At JANZZ.technology, we do our best to explain to our users how we match candidates and positions. Our unique matching software excludes secondary parameters such as gender, age or nationality and compares only skills, education/training, specializations, experience, etc. – that is, only the aspects that truly matter for finding the best candidates.
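As a purely illustrative sketch of this idea (the criteria, weights and profiles below are invented for the example and are not JANZZ.technology’s actual data model, algorithm or API), a skills-based comparison that ignores secondary parameters and reports a score per criterion could look like this:

```python
# Hypothetical per-criterion, skills-based matching; all names and weights are
# illustrative only, not JANZZ.technology's actual system.

EXCLUDED = {"gender", "age", "nationality"}   # secondary parameters, never compared
CRITERIA = {"skills": 0.4, "education": 0.2, "languages": 0.2, "availability": 0.2}

def criterion_score(candidate_values, job_values):
    """Share of the job's requirements covered by the candidate (0.0 to 1.0)."""
    required = set(job_values)
    return len(required & set(candidate_values)) / len(required) if required else 1.0

def match(candidate, job):
    """Return a score per criterion plus a weighted overall score."""
    breakdown = {name: criterion_score(candidate.get(name, []), job.get(name, []))
                 for name in CRITERIA if name not in EXCLUDED}
    overall = sum(CRITERIA[name] * score for name, score in breakdown.items())
    return breakdown, overall

candidate = {"skills": ["python", "sql"], "education": ["msc"],
             "languages": ["en", "de"], "availability": ["full-time"],
             "gender": "f", "age": 42}                     # secondary fields are ignored
job = {"skills": ["python", "sql", "spark"], "education": ["msc"],
       "languages": ["en"], "availability": ["full-time"]}

breakdown, overall = match(candidate, job)
print(breakdown)               # per-criterion scores, e.g. skills ≈ 0.67, languages = 1.0
print(round(overall, 2))       # weighted overall score
```

Because each criterion is scored separately, a user can see at a glance why a candidate was or was not proposed, rather than having to take a single opaque number on faith.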

Instead of producing a single matching score, our matching system breaks the result down by criteria such as functions, skills, languages and availability. This gives users a much better understanding of the results and lays the foundation for analyzing where the workforce needs reskilling and upskilling. Would you like to know more about how JANZZ.technology applies explainable AI solutions? Write to us now at sales@janzz.technology


[1] OECD. 2019. OECD Principles on AI. URL: https://www.oecd.org/going-digital/ai/principles/ [2019.9.17].

[2] Ron Schmelzer. 2019. Understanding Explainable AI. URL: https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/#6b4882fa7c9e [2019.9.17].

[3] Jeremy Kahn. 2018. Artificial Intelligence Has Some Explaining to Do. URL: https://www.bloomberg.com/news/articles/2018-12-12/artificial-intelligence-has-some-explaining-to-do [2019.9.17].

[4] Rudina Seseri. 2018. The problem with ‘explainable AI’. URL: https://techcrunch.com/2018/06/14/the-problem-with-explainable-ai/ [2019.9.17].