The MedTech and automotive industries have a lot in common when it comes to AI.

In Sweden alone, over 200 people are killed in road accidents every year. A majority of these deaths are due to some type of human error. We are all aware of this and more or less accept the risk when we sit down behind the wheel. So why is it such a big issue when a single person is killed in an accident involving an autonomous car?

There is no clear consensus on how to compare the safety of autonomous cars and human drivers. Different sources use different data sets and statistical methods and reach different conclusions. But regardless of whether autonomous cars have a higher or lower accident rate than human drivers, we know that crashes involving self-driving cars currently result in less severe injuries and fewer fatalities than crashes caused by human drivers.

So, let’s say that we replaced all cars in Sweden with autonomous vehicles. How many fatalities would we accept? A hundred? Fifty? Either way, the benefit in human lives saved would still be substantial. But would we accept even one death caused by an autonomous car? And if an autonomous vehicle causes a human death, who is responsible: the car manufacturer, or the driver? Currently, car manufacturers try to avoid this liability issue by requiring the driver to monitor the AI’s performance and take control if necessary. But if autonomous cars already have less severe accidents, what happens in the future when the AI clearly outperforms human drivers?

The situation is quite analogous for AI systems used in health care. According to Socialstyrelsen (the Swedish National Board of Health and Welfare), 1,200 people a year are injured or killed in Sweden due to mistakes made by health care staff. Half of those cases are caused by misdiagnosis.
According to a study indexed in the National Library of Medicine in 2019, AI systems already had a diagnostic performance comparable with that of medical experts, and in some image-recognition fields they even surpassed human readers.

Even more general AIs, like OpenAI’s ChatGPT, have passed the US medical licensing exam, and according to Dr. Isaac Kohane, a Harvard computer scientist and physician who performed the test, it performed better than many doctors he had observed. GPT-4 could diagnose rare diseases as well as any real doctor, based on anamnesis (patient history) and test data. So, would we accept an AI doctor that makes only half as many mistakes as humans and saves hundreds of lives a year in the process? Probably not, which in a sense might appear counterproductive. And who would be liable when it does make a mistake?
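
To see where “hundreds of lives a year” comes from, here is a minimal back-of-the-envelope sketch in Python. It uses the article’s own Swedish figures and assumes, purely for illustration, that halving the diagnostic error rate halves the resulting harm:

# Back-of-envelope estimate: harm avoided if an AI doctor makes half as
# many diagnostic mistakes as human staff. The input figures are the
# article's Swedish numbers; the linear scaling is an assumption.
harmed_per_year = 1200       # people injured or killed yearly by health care mistakes
misdiagnosis_share = 0.5     # half of those cases stem from misdiagnosis
ai_error_ratio = 0.5         # assumed: the AI makes half as many diagnostic mistakes

misdiagnosis_harm = harmed_per_year * misdiagnosis_share    # 600 cases per year
avoided_cases = misdiagnosis_harm * (1 - ai_error_ratio)    # 300 cases per year

print(f"Misdiagnosis-related harm today: {misdiagnosis_harm:.0f} cases/year")
print(f"Potentially avoided with an AI doctor: {avoided_cases:.0f} cases/year")

Roughly 300 fewer cases a year, in other words: the “hundreds of lives” the question above refers to.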

In the next article we’ll be looking at some of the benefits and risks of AI systems.

If you have any questions, contact us at info@qadvis.com.