Artificial intelligence (AI) makes the regulation and management of medical devices more difficult. Beyond this new area of technology, several existing and proposed horizontal regulations overlap with medical devices and IVD medical devices (IVDs) in the EU. The General Data Protection Regulation (GDPR), the AI Liability Directive, the Cyber Resilience Act, the Data Act, the Data Governance Act, the European Health Data Space Regulation (EHDS) and the revised Product Liability Directive all add to this complexity.

Regulatory sandboxes are being encouraged as a safe space for regulators, industry and other stakeholders to experiment with innovative device models and cope with the layers of regulatory guidelines associated with them.

Although sandboxes are a fairly new concept and are not directly documented in Regulations (EU) 2017/745 and 2017/746 on medical devices and in vitro diagnostic medical devices (MDR and IVDR), they are tools included in the context of the EU's proposed AI Act.

The Commission believes that some of the requirements of the proposed AI Act, such as testing, instruction and validation of high-quality data, are among the elements that could be conducted in a regulatory sandbox, as could testing the robustness, accuracy and cybersecurity processes that have been applied to AI. The idea is that these regulatory sandboxes could be used in the pre-market and reassessment phases.

By getting ahead of conventional product concepts and their associated regulations, sandboxes provide an exceptional and safe learning space in which manufacturers can cooperate with regulators and other interested groups to explore new, revolutionary solutions that would otherwise struggle to reach market because of challenges with their system, technology, regulatory or evidence processes. But setting up these sandboxes, which are unlike learning hubs, entails a high level of commitment and an open-minded approach.

However, some EU regulators, including the European Medicines Agency (EMA), are worried about the accountability, transparency and testing of AI systems used during the medical product lifecycle. These apprehensions were voiced in the EMA's reflection paper, which was open for consultation until the end of 2023.

At the global level, the World Health Organization (WHO) has already requested that all governments and regulatory authorities establish robust legal and regulatory frameworks to safeguard the use of AI in the health sector.

Source: Medtech Insight (an Informa product)
