The 8th EAAR Annual Conference on the New Medical Device Regulations

The EAAR Annual Conference on the New Medical Device Regulations (RMD2024) will this year take place in Brussels on 26-27 February 2024.

This is the eighth consecutive year, featuring seasoned speakers from across the medical device field, including the European Commission, competent authorities and notified bodies.

This prestigious two-day event, held this year at the Sheraton, invites attendees to listen, learn and discuss the latest developments in European regulations. QAdvis founder Nils-Åke Lindberg, a member of MDCG Standards, will speak on the latest state of play in the EU harmonisation process for ISO/IEC standards.

You are also very welcome to sign up for one-to-one meetings with Nils-Åke or other speakers, or just connect and discuss during breaks between the sessions.

Read more about the speakers here.

RMD2024 homepage

Registration here

If you have any questions, contact us at info@qadvis.com.

AI systems: The Benefit / Risk balance

Autonomous cars and medical devices are only two examples of autonomous systems: systems that can achieve a specified goal independently of detailed programming. These systems have both risks and advantages that need to be considered by manufacturers, users and regulators alike.

Autonomous systems can increase efficiency and productivity by reducing costs, saving time and optimizing resources. They can also improve safety and security by preventing or mitigating human errors, hazards or threats. For example, self-driving cars can avoid collisions, obey traffic rules and react faster than human drivers. A study on autonomous vehicles published in PLOS ONE shows that, with proper regulation, the benefits outweigh the risks.

Likewise, AI in medical devices can monitor vital signs, suggest diagnoses, administer drugs and perform surgeries with greater precision and accuracy than human clinicians. However, these systems also pose significant risks and challenges. They may fail to deliver the intended therapy or harm patients due to general IT risks such as software bugs, cyberattacks and technical errors. Therefore, autonomous systems need to be designed, tested and monitored to high standards of safety, reliability and security. These types of risks are, however, already well known from other software systems, and strict regulatory requirements exist to handle them.

AI-based software can also introduce new types of errors, biases or harms that may affect patient safety. One challenge is to balance the benefits and risks of an AI-enabled medical device: its complexity makes it difficult to explain or understand how it works or why it makes certain decisions. And since the training data is generally not known to healthcare professionals, it can be difficult to determine the device's efficacy in different patient groups. Is it equally effective regardless of sex, race or other parameters that were not represented in the training data?

A study in The Lancet Digital Health shows that approvals of AI-based medical devices in the USA and Europe have increased significantly in recent years, but that there are still gaps and inconsistencies in the regulatory frameworks and evidence standards. The study suggests that more harmonization and transparency are needed to ensure the safety and effectiveness of AI-based medical devices.

Therefore, it is important to have a clear and consistent framework for assessing and regulating AI-enabled medical devices, one that considers not only their technical aspects but also their ethical implications, so that we can ensure they are trustworthy, beneficial and respectful of human values.

In the next article in this series, we will therefore look at the upcoming AI regulations.


The MedTech and automotive industries have a lot in common when it comes to AI.

In Sweden alone, over 200 people are killed in road accidents every year, a majority of them due to some type of human error. We are all aware of this and more or less accept the risk when we sit down behind the wheel. So why is it such a big issue when a single person is killed in an accident involving an autonomous car?

There is no clear consensus on how to compare the safety of autonomous cars and human drivers. Different sources use different data sets and statistical methods and reach different conclusions. But regardless of whether autonomous cars have a higher or lower accident rate than human drivers, we know that crashes involving self-driving cars currently result in less severe injuries and fewer fatalities than crashes involving human drivers.

So, let's say we replaced all cars in Sweden with autonomous vehicles. What number of fatalities would we accept? A hundred? Fifty? After all, the benefit in saved human lives would still be substantial. But would we accept even one death caused by an autonomous car? And if an autonomous vehicle causes a human death, who is responsible: the car manufacturer or the driver? Currently, car manufacturers try to avoid this liability issue by requiring that the driver monitor the AI's performance and take control if necessary. But if autonomous cars already have less severe accidents, what about the future, when the AI outperforms human drivers?

The situation is quite analogous to AI systems used in health care. According to the Swedish National Board of Health and Welfare (Socialstyrelsen), 1,200 people a year are injured or killed in Sweden due to mistakes made by healthcare staff. Half of those cases are caused by misdiagnosis.
According to a study published in the National Library of Medicine in 2019, AI systems already then had a diagnostic performance comparable to that of medical experts. In image-recognition-related fields, they even surpassed human readers in some cases.

Even more general AIs, such as ChatGPT, developed by OpenAI, have passed the US medical licensing exam. According to Dr. Isaac Kohane, a computer scientist at Harvard and a physician who performed the test, it performed better than many doctors he had observed, and ChatGPT-4 could diagnose rare diseases as well as any real doctor, based on anamneses and test data. So, would we accept an AI doctor that makes only half as many mistakes as humans and saves hundreds of lives a year in the process? Probably not, which in a sense might appear counterproductive. And who is liable if it does make a mistake?

In the next article, we'll be looking at some of the benefits and risks of AI systems.


Holiday greetings

As 2023 draws to a close, we at QAdvis want to take the opportunity to thank all our clients, partners and colleagues for the valued cooperation we have experienced throughout the year. It has been an intensive period filled with challenges, significant opportunities, valuable collaboration and a multitude of enjoyable assignments.

Looking ahead to 2024, we eagerly anticipate engaging in exciting assignments within the regulated and controlled environments we work in. We are pleased to announce the expansion of our team with two new colleagues, who will join in January to further strengthen our capabilities. Stay tuned for more information.

We would like to share our warm wishes for a wonderful holiday season and a Happy New Year.
