William Fry’s Barry Scannell examines the regulatory landscape of everything from medical devices to remote robotic surgery.
As technology becomes more embedded in healthcare, there are legal implications that the sector will need to consider.
For example, emerging tech such as 5G, AR and VR allows for remote robotic surgery, where a surgeon on one side of the world can operate on a patient on the other.
Scannell said that in Ireland, medical negligence follows the Dunne principles, a legal test set out in case law for identifying when legal liability for malpractice arises.
“The Dunne principles establish the ‘reasonable doctor’ test in Ireland – whereby a doctor will be found guilty of negligence if it is proved that no other doctor of equal specialisation or skill acting with ordinary care would have acted the same,” he said.
“However, there is a proviso to this test whereby if the practice, which is the ‘general and approved practice’ followed by doctors of equal specialisation and skill has inherent defects, then the doctor could still be found negligent if these defects ought to be obvious to any person giving the matter due consideration.”
While these principles are traditionally there to guide human doctors, new technologies such as AI in diagnostic imaging systems can muddy the waters in terms of what a reasonable doctor would do because the tech itself may not be used widely enough to make that call.
Equally, it may be too early to tell whether or not an AI system has defects. What if, for example, an AI system identified a tumour but the doctor ignored its recommendation? What if the AI failed to identify a tumour and a diagnosis was missed as a result?
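Scenarios like these are one reason audit trails matter: recording both the AI system's output and the clinician's final decision makes it possible to reconstruct, after the fact, who decided what. The following is a minimal illustrative sketch in Python, not drawn from any real clinical system; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DiagnosticRecord:
    """Hypothetical audit entry pairing an AI suggestion with a clinician's decision."""
    scan_id: str
    ai_flagged_tumour: bool      # what the AI system reported
    clinician_diagnosis: bool    # what the doctor ultimately recorded
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def discrepancy(self) -> bool:
        # True when doctor and AI disagree - exactly the situations
        # described above that raise liability questions.
        return self.ai_flagged_tumour != self.clinician_diagnosis

# The AI flagged a tumour but the doctor recorded no diagnosis:
record = DiagnosticRecord("scan-001", ai_flagged_tumour=True, clinician_diagnosis=False)
print(record.discrepancy)  # True
```

A log like this does not resolve who is liable, but it preserves the evidence either side of a dispute would need.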
Scannell said medical practitioners, institutions, device manufacturers and even insurers need to work together to ensure that established policies and procedures for the use of emerging tech in healthcare are put in place, so that issues such as these can be avoided.
“It is worth noting that two pieces of EU legislation coming down the tracks, the AI Act and the AI Liability Directive, could have enormous consequences for the healthcare sector. The use of AI systems in medical devices and in-vitro diagnostic medical devices may fall within the high-risk AI categorisation under the AI Act, bringing with it significant legal and regulatory obligations for producers and users of such devices,” he said.
“The AI Liability Directive proposes to fundamentally alter liability law in certain circumstances, by creating a ‘presumption of causality’ for AI systems – which means that if a person is injured as a result of an AI system, a rebuttable presumption will exist that the injury was caused by the AI system.”
Scannell added the recently implemented Medical Devices Regulation (MDR) and In-vitro Diagnostic Medical Devices Regulation (IVDR) represent a “significant development and strengthening of the existing regulatory system for medical devices in Europe”.
Remote robotic surgery
One of the biggest advances in the health-tech sector is remote robotic surgery, the ability to perform surgery from another location with the use of robotic technology.
However, this opens up potential legal issues around battery, the tort of wrongful or harmful unwanted physical contact.
“Any medical procedure could be held to be a battery unless there is written or oral consent or other lawful reason, such as an emergency where the patient is unconscious,” said Scannell.
“Where robotic surgery gets legally complex is, if a surgeon in New York is operating on a patient in Paris, and there was a slip of a scalpel which nicked an artery, causing significant medical complications, is the surgeon causing harmful physical contact on the patient, despite the distance of some 6,000km? Is that a physical contact?”
This, he said, is why informed consent is extremely important. However, when proceedings arise from a failure to obtain informed consent, they usually do so under the tort of negligence rather than battery, which complicates matters further.
“Is an errant scalpel a result of the human surgeon’s negligence in operating the robotic surgeon control system, or was it a product liability defect that caused the machine to misinterpret the surgeon’s inputs? We have all experienced a video call dropping due to connection issues – but what if those connection issues were quite literally a matter of life and death? Is the surgeon liable for a connection dropping, the hospital, the robot manufacturer, or even the telecoms provider?”
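The connection-drop scenario is, at bottom, a question of fail-safe engineering: teleoperated systems typically stop moving when control packets stop arriving. Below is a purely illustrative Python sketch of a heartbeat watchdog; the class name, timeout value and behaviour are assumptions for the sake of the example, not any real surgical protocol.

```python
import time

class TeleSurgeryWatchdog:
    """Hypothetical fail-safe: freeze the instrument if the control
    link goes quiet for longer than the allowed timeout."""

    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.last_packet = time.monotonic()
        self.frozen = False

    def on_control_packet(self):
        # Called whenever a command arrives from the remote surgeon.
        self.last_packet = time.monotonic()

    def check(self) -> bool:
        # Polled regularly; freeze all motion once the link is stale.
        if time.monotonic() - self.last_packet > self.timeout_s:
            self.frozen = True  # stop actuators, hold position
        return self.frozen

wd = TeleSurgeryWatchdog(timeout_s=0.05)
wd.on_control_packet()
wd.check()           # link fresh: keep operating (False)
time.sleep(0.1)      # simulate a dropped connection
wd.check()           # link stale: instrument frozen (True)
```

Even with such a failsafe, the legal question remains: if the hold itself causes harm, the chain of responsibility still runs through surgeon, hospital, manufacturer and network provider.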
Scannell added that doctors need to be conscious of ensuring compliance with their professional regulator. In Ireland, that requires them to be registered with the Irish Medical Council and follow the provisions in the Guide to Professional Conduct and Ethics.
“Professional indemnity cover is also something that needs to be considered. All doctors practising in Ireland must have adequate professional indemnity cover for all healthcare services they provide,” he said.
However, many policies only cover services provided to patients who are resident in Ireland and carried out by a doctor resident in Ireland. “Therefore if something did go wrong in the above example of the remote robotic surgery, the patient could be left in a position where the doctor who carried out the surgery is uninsured.”
Regulating the future of health
Scannell said the EU is leading the way when it comes to addressing the use of emerging technology.
“The EU Regulations on Medical and Diagnostic devices protect patients while at the same time providing users and producers of those technologies with a clear regulatory framework within which to operate. The forthcoming AI Act and AI Liability Directive are focused on protecting people from the harmful aspects of AI technology,” he said.
“Existing laws like the GDPR protect people whose data is used in order to develop drugs and new treatments, etc, as well as the training of AI systems.”
The World Health Organization is also moving forward in this space. In September 2022, it issued a global guidance framework for the responsible use of the life sciences.
The framework calls on leaders and other stakeholders to mitigate biorisks and safely govern dual-use research, which has a clear benefit but can be misused to harm humans, other animals, agriculture and the environment.
However, there is still much work to be done as new and innovative tools emerge in the health space.
“Another issue facing telemedicine and virtual healthcare companies is that healthcare providers and companies offering telemedicine services must comply with a range of legislation and guidance, but there is no specific legislation regulating telemedicine or virtual healthcare. This adds to uncertainty and makes regulatory compliance more difficult.”
Source: www.siliconrepublic.com