Regulators will regulate, and AI is on top of their list.
The EU AI Act will be the world’s first comprehensive legal framework for artificial intelligence and could well become a global standard. It will apply to any business operating within the EU or offering AI systems or services to EU residents.
The Act is intended to ensure the safe and ethical use of AI within the member states, covering all sectors and all types of artificial intelligence, with an exception for military use. The European Commission proposed the AI Act in April 2021, and in December 2023 the European Parliament and the Council reached a provisional agreement on the text. The agreed text must now be formally adopted by both Parliament and Council to become EU law. The Act is expected to be adopted during 2024, with its obligations phasing in over transitional periods after entry into force.
Title I of the proposal defines the subject matter of the regulation and the scope of application of the new rules, which cover the placing on the market, putting into service, and use of AI systems. The definition of an AI system in the legal framework aims to be as technology-neutral and future-proof as possible, taking into account the rapid technological and market developments in AI. The Act takes a risk-based approach, dividing AI systems into four classes with different requirements depending on the level of risk a system poses to human health, safety, or fundamental rights.
- Prohibited artificial intelligence practices (Title II)
Some AI uses are banned outright, such as subliminal manipulation, social credit scoring, and real-time biometric identification.
- High-risk AI systems (Title III)
Permitted, but subject to compliance with AI requirements and conformity assessments. This includes biometric identification, medical devices, law enforcement, critical infrastructure, and many others.
- Transparency obligations for certain AI systems (Title IV)
Permitted, but subject to information and transparency obligations: humans must be notified that they are interacting with an AI system and what it does. This applies to systems that:
– interact with humans,
– are used to detect emotions or determine association with (social) categories based on biometric data, or
– generate or manipulate content (‘deep fakes’).
- Minimal or no risk
Permitted with no restrictions. There are no mandatory requirements, but the Commission proposes that voluntary requirements be developed. This would, for example, include ChatGPT.
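For illustration only, the four-tier structure above can be sketched as a small data model. The tier names, example use cases, and obligation summaries below are informal paraphrases of the list above, not an official classification under the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers (Titles II-IV plus minimal risk)."""
    PROHIBITED = "prohibited"      # Title II: banned outright
    HIGH_RISK = "high_risk"        # Title III: conformity assessment required
    LIMITED_RISK = "limited_risk"  # Title IV: transparency obligations
    MINIMAL_RISK = "minimal_risk"  # no mandatory requirements


# Illustrative mapping of example use cases to tiers, taken from the
# examples in the text above (keys are hypothetical labels).
EXAMPLE_CLASSIFICATION = {
    "social_credit_scoring": RiskTier.PROHIBITED,
    "subliminal_manipulation": RiskTier.PROHIBITED,
    "medical_device": RiskTier.HIGH_RISK,
    "critical_infrastructure": RiskTier.HIGH_RISK,
    "deep_fake_generation": RiskTier.LIMITED_RISK,
    "general_chatbot": RiskTier.MINIMAL_RISK,  # e.g. ChatGPT, per the text
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line informal summary of the obligations for a tier."""
    return {
        RiskTier.PROHIBITED: "placing on the market is banned",
        RiskTier.HIGH_RISK: "conformity assessment before market entry",
        RiskTier.LIMITED_RISK: "users must be informed they interact with AI",
        RiskTier.MINIMAL_RISK: "no mandatory requirements",
    }[tier]
```

The point of the sketch is simply that the same system attracts very different obligations depending on which tier it falls into, which is why classification is the first compliance question a manufacturer must answer.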
With the AI Act, Europe has taken a big step toward a general law on AI, while regulation in the US is still evolving. The FDA has been developing a new regulatory approach for AI/ML-enabled medical devices based on the principle of the “total product lifecycle” (TPLC): the FDA would monitor and evaluate the performance of AI/ML-enabled medical devices throughout their lifecycle, not just at the point of premarket approval. The FDA also plans to establish a Medical AI Evaluation Database (MAIED) to collect real-world data on AI/ML-enabled medical devices.
The lack of common requirements between the EU and the US could lead to one of two situations. The first, and clearly the preferred one, is that US regulators take a close look at the AI Act and align their requirements accordingly. The other scenario, with two diverging sets of rules, would be both cumbersome and costly for industry and healthcare alike.
Time will tell how this evolves, but we can be certain that the use of AI will significantly affect every aspect of our society, in the same way as, and beyond what, the development of computers once did. And much like with computers, if you miss your place in the queue to step aboard, you might miss the train entirely.
In the final episode of this article series, we will try to extrapolate where the AI journey might take us and glance into the future.
If you have any questions, contact us at info@qadvis.com.