With the emergence of big data, organizations – private and public alike – are increasingly adopting AI technologies for automation and data-driven decision making in an effort to improve efficiency and drive growth. However, this growing adoption of AI technologies has been accompanied by a steady stream of scandals around unethical deployment. AI assistants like Alexa and Siri – and workers at the companies behind them – listening in on people’s private conversations to gather data for personalized marketing; Facebook’s algorithmic promotion of misinformation, disinformation and other harmful content, its massive leakage of user data, and its racist ‘mislabeling’; Clearview’s unlawful facial recognition surveillance… the list goes on.

And it is not just private businesses. In the Dutch child care benefits scandal, for example, the Dutch tax authorities used a deeply biased algorithm to flag residents with a high risk of child care benefits fraud – with devastating effects on real people. Tens of thousands of families – often with lower incomes or belonging to ethnic minorities – were forced into poverty by the authorities over a mere suspicion of fraud based on the system’s risk profiles. More than a thousand children were taken into foster care, many of them for years. The criteria for the risk profiles were developed by the tax authorities themselves and included having dual nationality or a low income.
As such instances of unethical deployment or outcomes have come to light, discussions of these technology-driven problems have typically turned into debates about ‘bugs’, programming errors or training data, and how to develop and implement ‘ethical AI’. A new version is released with more parameters to mitigate bias, or whatever the issue appears to be. Or the project is ditched entirely, like Amazon’s AI recruiting tool, Microsoft’s racist chatbot Tay, and many other ventures. However, although these problems often manifest as errors and flaws in code, training data or user interfaces, they are almost always the consequence of deeper-lying issues: poor requirements, poor processes, poor governance. Why did Amazon’s recruiting technology fail? It was trained on a dataset of past hires in an organization that clearly had biased recruitment practices. Why have Facebook’s many ‘attempts’ to mitigate the spread of misinformation and hate speech failed? Because the modified algorithms undermined the company’s top (and possibly only) priority of uninhibited growth. And the Dutch authorities? Well, apart from the institutional bias that was fed into the algorithms, it seems that humans delegated decisions that deeply affected people’s lives to an AI system – without questioning the results.

In each case, the key failure was not the unethical AI. Rather, the unethical AI was the inevitable result of an organization-wide failure of ethics. As Dave Lauer so expertly demonstrates in his opinion paper, ethical AI cannot exist in a vacuum, separate from broader ethics. Facebook and many other companies have proven time and again that fundamentally unethical companies cannot or will not deploy ethical AI simply by creating a task force and a few ethical guidelines. To build ethical AI, there needs to be a solid ethical foundation throughout – and possibly beyond – the organization, as well as a clear understanding of how hard it actually is to develop and deploy truly transparent and fair AI systems. On this firm grounding, AI ethics can emerge naturally from a comprehensive ethical approach across the organizations building and deploying AI technologies. In other words: ethical by design.
Fortunately, there are companies in the AI space that appreciate the complexity of designing and deploying ethical AI and have adopted (AI) ethics principles systemically. One example is the PaaS startup Fiddler AI, which provides tools that help developers build ethical AI by monitoring their machine learning models and datasets, detecting bias, and integrating explainability, transparency and fairness. The company’s core purpose is ‘to bring trust and transparency to AI everywhere’, which is also reflected in a culture where ethical AI is everyone’s responsibility, ‘built on the values of humility, respect, kindness, transparency, and collaboration.’ On this foundation, it is hard not to produce ethical AI.
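To make ‘detecting bias’ a little more concrete, here is a minimal, generic sketch of one check that model monitoring tools commonly report: the demographic parity difference, i.e. the gap in positive-decision rates between two groups. It is purely illustrative – it does not represent Fiddler AI’s actual API or methodology, and the function name, example data and groups are assumptions made for this sketch.

```python
from typing import Sequence

def demographic_parity_difference(
    predictions: Sequence[int],   # model decisions: 1 = positive outcome (e.g. shortlisted)
    groups: Sequence[str],        # protected attribute per record (e.g. "A", "B")
    group_a: str = "A",
    group_b: str = "B",
) -> float:
    """Difference in positive-decision rates between two groups (0.0 means parity)."""
    def positive_rate(group: str) -> float:
        idx = [i for i, g in enumerate(groups) if g == group]
        return sum(predictions[i] for i in idx) / len(idx) if idx else 0.0
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical hiring model that shortlists group A far more often than group B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Demographic parity difference: {demographic_parity_difference(preds, groups):.2f}")
# -> 0.60 (group A shortlisted at a rate of 0.80, group B at 0.20)
```

A persistent gap like this in production decisions is exactly the kind of signal such monitoring is meant to surface – though, as argued above, acting on it requires an organization that actually wants to.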
This is also the approach we pursue here at JANZZ. Since the company’s founding in 2008, our mission has been to improve lives by creating software that promotes equal employment opportunities for everyone. We are committed to the ethical and responsible use of technologies and data, developing software systems based on explainable AI that are safe, transparent and unbiased – and thus beneficial to individuals, organizations and communities. To ensure this, we have developed seven core principles for AI ethics in HR tech that we diligently follow – principles that are not only applied across the AI lifecycle, but woven into the fabric of our entire organization.