Artificial intelligence (AI) is making the regulation and management of medical devices more difficult. On top of this new area of technology, a range of existing and proposed horizontal regulations now overlap with medical devices and in vitro diagnostic (IVD) medical devices in the EU. The General Data Protection Regulation (GDPR), the AI Liability Directive, the Cyber Resilience Act, the Data Act, the Data Governance Act, the European Health Data Space (EHDS) Regulation and the revised Product Liability Directive all add to this complexity.
Regulatory sandboxes are being encouraged as a safe space in which regulators, industry and other stakeholders can experiment with innovative device models and navigate the layers of regulatory requirements that apply to them.
Although sandboxes are a fairly new concept and are not directly provided for in Regulations (EU) 2017/745 and 2017/746 on medical devices and in vitro diagnostic medical devices (the MDR and IVDR), they are included as tools in the EU's proposed AI Act.
The Commission believes that certain requirements of the proposed AI Act, such as the testing, training and validation of high-quality data, are among the elements that could be carried out in a regulatory sandbox, along with testing the robustness, accuracy and cybersecurity processes applied to AI. The idea is that these regulatory sandboxes could be used in both the pre-market and reassessment phases.
Because they run ahead of conventional product concepts and the regulations that govern them, sandboxes provide an exceptional, safe learning space in which manufacturers can work with regulators and other interested groups to explore new, revolutionary solutions that would otherwise struggle to reach the market because of systemic, technological, regulatory or evidential challenges. But setting up these sandboxes, unlike learning hubs, demands a high level of commitment and an open-minded approach.
However, some EU regulators, including the European Medicines Agency (EMA), remain concerned about the accountability, transparency and testing of AI systems used across the medical product lifecycle. These concerns were voiced in the EMA’s reflection paper, which was open for consultation until the end of 2023.
At the global level, the World Health Organization (WHO) has already called on governments and regulatory authorities to establish robust legal and regulatory frameworks to safeguard the use of AI in the health sector.
Source: Medtech Insight (an Informa product)
On this subject, we recommend the following content on our website:
- The European Commission’s AI Act now in final negotiation phase
- Proposed general product safety regulation is expected to enter into force shortly
- New cybersecurity standard
- BSI develops guidelines for the application of ISO 14971 to artificial intelligence and machine learning