Autonomous cars and medical devices are just two examples of autonomous systems: systems that can achieve a specified goal without detailed step-by-step programming. These systems carry both risks and benefits that manufacturers, users and regulators alike need to consider.
Autonomous systems can increase efficiency and productivity by reducing costs, saving time and optimizing resources. They can also improve safety and security by preventing or mitigating human errors, hazards and threats. For example, self-driving cars can avoid collisions, obey traffic rules and react faster than human drivers. A study on autonomous vehicles published in PLOS ONE suggests that, with proper regulation, the benefits outweigh the risks.
Likewise, AI in medical devices can monitor vital signs, suggest diagnoses, administer drugs and perform surgeries with greater precision and accuracy than human clinicians. However, these systems also pose significant risks and challenges. They may fail to deliver the intended therapy, or harm patients, due to general IT risks such as software bugs, cyberattacks and technical errors. Autonomous systems therefore need to be designed, tested and monitored to high standards of safety, reliability and security. These risks, however, are already well known from other software systems, and strict regulatory requirements exist to handle them.
AI-based software can also introduce new types of errors, biases or harms that may affect patient safety. One challenge, for example, is balancing the benefits and risks of an AI-enabled medical device. The complexity of these devices makes it difficult to explain or understand how they work or why they make certain decisions. And since the training data is generally not known to healthcare professionals, it can be difficult to determine a device's efficacy across different patient groups. Is it equally effective regardless of sex, race or other parameters that may not have been represented in the training data?
A study published in The Lancet Digital Health shows that approvals of AI-based medical devices in the USA and Europe have increased significantly in recent years, but gaps and inconsistencies remain in the regulatory frameworks and evidence standards. The study suggests that more harmonization and transparency are needed to ensure the safety and effectiveness of AI-based medical devices.
It is therefore important to have a clear and consistent framework for assessing and regulating AI-enabled medical devices, one that considers not only their technical aspects but also their ethical implications, so that they are trustworthy, beneficial and respectful of human values.
In the next episode of this article series, we will look at the upcoming AI regulations.
If you have any questions, contact us at firstname.lastname@example.org.