Artificial Intelligence (AI) has, over time, become an unavoidable reality, drawing attention not only for its value proposition but also for the threats it poses to humanity. AI regulators, in particular, have been struggling to bring AI technologies under their purview. The European Union has recently proposed the world's first broad standards for regulating artificial intelligence. Beyond shaping how the technology affects the lives of around 450 million citizens across the EU's 27 countries, the proposal will significantly influence how AI is put to use in other parts of the world. The proposed Act is expected to affect up to 35% of AI systems used in Europe, and it applies to both private and public sector enterprises that deal with AI systems.
It comes at a critical time, when digital technologies are developing at lightning speed and framing legislative measures is a tricky proposition. Given the black-box nature of AI algorithms, attributing responsibility, particularly when autonomous systems are embedded and self-learning, is next to impossible.
AI systems carry inherent risks, and the regulation therefore imposes obligations on both providers and users. The task, however, is not as straightforward as it sounds: the Act takes a tiered, risk-based approach, ranging from an outright ban on the riskiest AI systems down to voluntary codes of conduct for those posing minimal risk.
Regulators are particularly concerned about high-risk systems. AI systems already embedded in products in EU-regulated sectors are covered by existing conformity assessment processes. Other high-risk AI systems include those that affect fundamental rights, such as systems prone to algorithmic bias in hiring, employee evaluation, and credit scoring.
For high-risk AI systems, the law requires risk management processes to be implemented throughout the lifecycle. Conforming to data governance standards, documenting the systems in detail, and recording their actions systematically are also part of the regulatory requirements.
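To illustrate what "recording their actions systematically" could look like in practice, the sketch below shows a minimal append-only audit log that a provider of a high-risk system might keep. It is a hypothetical example, not a format prescribed by the Act; the field names, the JSON-lines file, and the credit-scoring scenario are all assumptions made for illustration.

```python
import json
import time
import uuid

# Hypothetical sketch: an append-only audit log a provider of a high-risk
# AI system might keep to document the system's actions over its lifecycle.
# Field names and the JSON-lines format are illustrative assumptions, not
# requirements spelled out in the EU AI Act.
class AuditLog:
    def __init__(self, path="ai_audit_log.jsonl"):
        self.path = path

    def record(self, model_version, inputs_summary, output, operator_id):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),          # when the decision was made
            "model_version": model_version,    # which model produced it
            "inputs_summary": inputs_summary,  # what data it was based on
            "output": output,                  # the decision or score
            "operator_id": operator_id,        # who operated the system
        }
        # Append one JSON line per event so every action is kept with a
        # timestamp and can be reviewed later.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["event_id"]

# Example use: logging a single (hypothetical) credit-scoring decision.
log = AuditLog()
log.record(
    model_version="credit-model-1.2",
    inputs_summary={"income_band": "B", "history_length_years": 7},
    output={"score": 0.73, "decision": "approved"},
    operator_id="analyst-042",
)
```

Even a simple record like this, kept consistently, gives auditors a trail linking each automated decision to the model version and data that produced it, which is the spirit of the documentation and record-keeping obligations described above.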
The UK government is also mulling an AI regulatory framework of its own. Its existing light-touch approach differs from the EU's: UK AI policy rests on principles that give regulators the flexibility to apply them in ways that fit how AI is used in different industries.