After a lengthy period of negotiations, the European Parliament's Committee on Internal Market and Consumer Protection (IMCO) and Committee on Civil Liberties, Justice and Home Affairs (LIBE) have approved the EU Artificial Intelligence Act (AIA). The next steps are adoption in a European Parliament plenary session in April, followed by a final endorsement from the Council of the EU.

The act takes a risk-based approach, meaning that AI systems posing a high risk will be subject to more stringent requirements. Given the potential impact of AI devices on health, it is likely that all medical devices will be considered high-risk. Still, the negotiations between member states were contentious, owing to concerns that the act could restrict AI innovation in Europe.

High-risk AI systems are described as those that could affect health, safety, fundamental rights, the environment, democracy, and the rule of law. The category also covers 'essential services', for example, healthcare. Furthermore, the act's text states that medical devices subject to conformity assessment under the Medical Devices Regulation (MDR) or the In Vitro Diagnostic Medical Devices Regulation (IVDR) should also be classified as high-risk.

Regulatory sandboxes and real-world testing will be established at national level. These two mechanisms allow innovative AI systems to be tested in a controlled environment under expert supervision, helping small and medium-sized enterprises (SMEs) and start-ups to train innovative AI systems before placing them on the market. These innovation-support mechanisms form part of the wider AI package, which offers funding, coaching, and customised initiatives for start-ups and SMEs.

Source: Medtech Insight (an Informa product)