After a lengthy period of negotiations, the European Parliament’s Committee on Internal Market and Consumer Protection (IMCO) and Committee on Civil Liberties, Justice and Home Affairs (LIBE) have approved the EU Artificial Intelligence Act (AIA). The next steps are adoption by the European Parliament plenary in April, followed by final endorsement by the Council of the EU.
The act takes a risk-based approach, meaning that AI systems posing a higher risk are subject to more stringent requirements. Given the potential impact of AI devices on health, it is likely that all medical devices will be considered high-risk. Still, negotiations among member states have been contentious due to concerns that the act could restrict AI innovation in Europe.
High-risk AI systems are defined as those that could affect health, safety, fundamental rights, the environment, democracy, or the rule of law. The category also covers ‘essential services’, including healthcare. Furthermore, the act’s text states that medical devices subject to conformity assessment under the medical devices regulation (MDR) or the in vitro diagnostic medical devices regulation (IVDR) should also be deemed high-risk.
Regulatory sandboxes and real-world testing will be established at national level. Both allow innovative AI systems to be tested in a controlled environment under expert supervision, helping small and medium-sized enterprises (SMEs) and start-ups to train innovative AI systems before they are placed on the market. These innovation-support mechanisms form part of the AI package, which offers funding, coaching, and customised initiatives for start-ups and SMEs.
Source: Medtech Insight (an Informa product)