Largely spurred by the success of ChatGPT, governments worldwide are finally attempting to find a way to regulate AI. While the US is hesitant to impose limits on the technology, fearing that regulation could cause US-based AI to fall behind the global competition, other countries are upping the ante. On June 14, the European Parliament voted to adopt the proposed AI Act, which takes a risk-based approach to regulation. However, this legislation will not take effect for up to three years. To bridge the regulatory gap in the meantime, the EU is now drawing up a voluntary code of conduct for AI through the US-EU Trade and Technology Council, with a draft expected within just a few weeks.
But the EU is not the only one trying to get a firm grip on AI. In April, China announced a draft law on generative AI that is set to be the world’s first legislation on the technology to take effect. And just recently, the Association of Southeast Asian Nations (ASEAN) began drafting an ASEAN Guide on AI Governance and Ethics, which it aims to complete by the end of 2023. Reuters quotes a spokesman for Singapore’s Ministry of Communications and Information as saying that the guide will “serve as a practical and implementable step to support the trusted deployment of responsible and innovative AI technologies in ASEAN.” Of course, non-binding codes of conduct alone are not very effective. However, they do give insight into the regulatory frameworks on the horizon.
In addition, many countries are beginning to scrutinize AI systems under existing laws – and to enforce those laws. In a 2023 White Paper, the UK highlighted that some of the risks posed by AI technologies are already addressed by existing legislation on discrimination, product safety, and consumer protection. In Australia, the eSafety Commissioner has issued legal notices under the Basic Online Safety Expectations compelling tech companies to provide transparency about their online recommendation systems. In the US, the first lawsuits against vendors and firms deploying AI algorithms have been filed under anti-discrimination laws. Italy temporarily banned ChatGPT over GDPR-based privacy concerns, and countries across Asia-Pacific and the Middle East have passed or plan to pass GDPR-style data privacy laws that could also strongly affect AI systems.
Regulation is coming, and it is coming faster than you may think – in key markets across the globe. The common theme of existing and upcoming policies worldwide is Trustworthy AI, with regulators advocating for accuracy and robustness, safety, non-discrimination, security, transparency and accountability, explainability and interpretability, and data privacy. In other words, exactly what we here at JANZZ have been implementing and advocating for over a decade. It’s about time.