AI systems: The Benefit/Risk Balance

Autonomous cars and medical devices are just two examples of autonomous systems: systems that can achieve a specified goal without detailed, step-by-step programming. These systems have both risks and advantages that manufacturers, users, and regulators alike need to consider.

Autonomous systems can increase efficiency and productivity by reducing costs, saving time, and optimizing resources. They can also improve safety and security by preventing or mitigating human errors, hazards, or threats. For example, self-driving cars can avoid collisions, obey traffic rules, and react faster than human drivers. A study on autonomous vehicles published in PLOS ONE indicates that, with proper regulation, the benefits outweigh the risks.

Likewise, AI in medical devices can monitor vital signs, suggest diagnoses, administer drugs, and perform surgeries with greater precision and accuracy than human clinicians. However, these systems also pose significant risks and challenges. They may fail to deliver the intended therapy, or harm patients, due to general IT risks such as software bugs, cyberattacks, and technical errors. Therefore, autonomous systems need to be designed, tested, and monitored to high standards of safety, reliability, and security. These types of risks, however, are already well known from other software systems, and strict regulatory requirements exist to handle them.

AI-based software can also introduce new types of errors, biases, or harms that may affect patient safety. One challenge, for example, is balancing the benefits and risks of an AI-enabled medical device: its complexity makes it difficult to explain or understand how it works or why it makes certain decisions. And since the training data is generally not known to healthcare professionals, it can be difficult to determine the device’s effectiveness in different patient groups. Is it equally effective regardless of sex, race, or other characteristics that were not represented in the training data? One way to probe this is to stratify performance metrics by patient group, as sketched below.
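
To make this concrete, here is a minimal sketch of what such a subgroup check could look like, written in Python with pandas. The column names and the data are entirely hypothetical, chosen for illustration only and not taken from any real device.

```python
# Hypothetical sketch: stratifying a diagnostic model's performance by
# patient group. All column names and data are invented for illustration.
import pandas as pd

# One row per patient: the model's prediction, the confirmed diagnosis,
# and a demographic attribute to stratify by.
results = pd.DataFrame({
    "sex":        ["F", "F", "M", "M", "F", "M", "F", "M"],
    "prediction": [1, 0, 1, 1, 1, 0, 0, 1],
    "diagnosis":  [1, 0, 1, 0, 1, 1, 0, 1],
})

def sensitivity(group: pd.DataFrame) -> float:
    """True positive rate: of the patients who actually have the disease,
    what share did the model correctly flag?"""
    positives = group[group["diagnosis"] == 1]
    return float((positives["prediction"] == 1).mean())

# Compare sensitivity across subgroups; a large gap suggests the model may
# not be equally effective for groups underrepresented in the training data.
for sex, group in results.groupby("sex"):
    print(f"sex={sex}: sensitivity={sensitivity(group):.2f} (n={len(group)})")
```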

A study published in The Lancet Digital Health shows that approvals of AI-based medical devices in the USA and Europe have increased significantly in recent years, but that there are still gaps and inconsistencies in the regulatory frameworks and evidence standards. The study suggests that more harmonization and transparency are needed to ensure the safety and effectiveness of AI-based medical devices.

Therefore, it is important to have a clear and consistent framework for assessing and regulating AI-enabled medical devices, one that considers not only their technical aspects but also their ethical implications, so that we can ensure they are trustworthy, beneficial, and respectful of human values.

In the next episode of this article series, we will therefore look at the upcoming AI regulations.

If you have any questions, contact us at info@qadvis.com.

The MedTech and Automotive Industries have a lot in common when it comes to AI

In Sweden alone, over 200 people are killed in road accidents every year. The majority of these deaths are due to some type of human error. We are all aware of this, and we more or less accept the risk when we sit down behind the wheel. So why is it such a big issue when a single person is killed in an accident involving an autonomous car?

There is no clear consensus on how to compare the safety of autonomous cars and human drivers; different sources use different data sets and statistical methods and reach different conclusions, as the sketch below illustrates. But regardless of whether autonomous cars have a higher or lower accident rate than human drivers, we know that crashes involving self-driving cars currently result in less severe injuries and fewer fatalities than crashes involving human drivers.
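
As a hypothetical illustration of why such comparisons diverge, the Python sketch below shows how the choice of exposure measure, the denominator of the accident rate, can drive the conclusion. All figures are invented for illustration and do not come from any real data set.

```python
# Hypothetical sketch: how the choice of denominator shapes a safety
# comparison. All figures below are invented for illustration.

av_crashes, av_miles = 50, 5_000_000                # autonomous fleet
human_crashes, human_miles = 30_000, 2_000_000_000  # human-driven cars

# Normalize by exposure: crashes per million miles driven.
av_rate = av_crashes / (av_miles / 1_000_000)
human_rate = human_crashes / (human_miles / 1_000_000)

print(f"Autonomous: {av_rate:.1f} crashes per million miles")
print(f"Human:      {human_rate:.1f} crashes per million miles")

# Autonomous fleets often accumulate miles under favorable conditions
# (highways, good weather, geofenced areas). Restricting the human baseline
# to comparable conditions shrinks its denominator and can flip the result,
# which is one reason different studies reach different conclusions.
```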

So, let’s say we replaced all cars in Sweden with autonomous vehicles. How many fatalities would we accept? A hundred? Fifty? The benefit in saved human lives would still be substantial. But would we accept even one death caused by an autonomous car? And if an autonomous vehicle causes a human death, who is responsible: the car manufacturer or the driver? Currently, car manufacturers try to avoid this liability issue by requiring the driver to monitor the AI’s performance and take control if necessary. But if autonomous cars already have less severe accidents, what about the future, when the AI outperforms human drivers?

The situation is quite analogous to AI systems used in health care. According to the Swedish Socialstyrelsen, 1200 people a year are injured or killed due to mistakes made by healthcare staff in Sweden. Half of those cases are caused by misdiagnosis.
According to a study published in the National Library of Medicine in 2019, AI systems already then had a diagnostic performance comparable with that of medical experts. In some image-recognition-related fields, they even surpassed human readers.

Even more general AIs, like ChatGPT developed by OpenAI, have passed the US medical licensing exam. According to Dr. Isaac Kohane, a computer scientist at Harvard and a physician who performed the test, it did better than many doctors he had observed. GPT-4 could diagnose rare diseases as well as any real doctor, based on anamneses and test data. So, would we accept an AI doctor that makes only half as many mistakes as humans? With roughly 600 of the annual Swedish cases attributed to misdiagnosis, halving that error rate could prevent some 300 injuries and deaths a year. Well, probably not, which in a sense might appear counterproductive. And who is liable when the AI does make a mistake?

In the next article, we’ll be looking at some of the benefits and risks of AI systems.

If you have any questions, contact us at info@qadvis.com.

Holiday greetings

Soon 2023 comes to an end, and we at QAdvis want to take the opportunity to thank all our clients, partners, and colleagues for the valued cooperation we have experienced throughout the year. It has been an intensive period filled with challenges, significant opportunities, valuable collaboration, and a multitude of enjoyable assignments.

Looking ahead to 2024, we eagerly anticipate engaging in exciting assignments within our regulated and controlled environment. We are pleased to announce the expansion of our team with the addition of two new colleagues who will join in January to further strengthen our capabilities. Stay tuned for more information.

We would like to share our warm wishes for a wonderful holiday season and a Happy New Year.

If you have any questions, contact us at info@qadvis.com.

NMI – It’s a Swedish thing

National Medical Information Systems (NMI) are information systems that are not medical devices in themselves and are therefore regulated by the provision HSLF-FS 2022:42. The provision is a standalone regulation, though very similar to MDR Annex I.

Since 2014, software systems in Sweden that are not medical devices but are used at a national or regional level for handling data related to patient treatment, diagnostics, or prescriptions have been defined as National Medical Information Systems (NMI). These systems are regulated under provisions issued by the Swedish Medical Products Agency (Läkemedelsverket, LMV). The most well-known examples are the software systems used for sharing and distributing prescriptions and other pharmaceutical information, as well as parts of the 1177.se services.

In the original NMI provisions from 2014 (LVFS 2014:7), the definition was fairly short. Manufacturers of such systems were required to follow applicable medical device requirements and to register the system with the Swedish Medical Products Agency. The required handling of NMIs was very similar to that of class I MDD software, apart from some elements that were not applicable since these products are not classified as medical devices. Most importantly, this means that an NMI shall not be CE marked and that the clinical evaluation requirements do not apply, since the system by definition has no medical purpose.

However, in 2022 a new provision, HSLF-FS 2022:42, replaced the old text. It includes a more extensive definition, and the previous reference to the medical device regulations has been removed. Instead, the provision is a standalone regulation very similar to the MDR Annex I General Safety and Performance Requirements, with the same exceptions as before but with the notable addition of a marking system that replaces the UDI requirement in the MDR. Software NMIs now need to be marked with an NMI-ID, an exclusive identification system defined in an annex of the provision.

Furthermore, the new definition covers a slightly wider range of products, including shared systems at the municipal level, placing more importance on the concept of joint and uniform use. The number of systems registered as NMIs is therefore expected to increase.

The transition period for releasing NMIs according to the requirements in LVFS 2014:7, or putting them into use, has expired. NMIs released and put into service after 1 February 2023 must comply with HSLF-FS 2022:42.

At QAdvis we are seeing an increasing number of questions regarding NMIs. If you need help with the regulatory aspects of your NMI, don’t hesitate to contact us. We can support you with product qualification, technical documentation, quality management systems, and software-specific tasks such as cybersecurity.

For more information, we also recommend visiting the Swedish Medical Products Agency’s section on NMIs (in Swedish).

If you have any questions, contact us at info@qadvis.com.