AI is transforming orthopedics by enhancing precision in surgical planning, diagnostics, and patient care, but it raises critical liability questions when AI systems fail, especially in India, where regulations remain unclear.
Dr. Madhan Jeyaraman, Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai, Tamil Nadu, India. E-mail: madhanjeyaraman@gmail.com
Abstract: Artificial intelligence (AI) is revolutionizing various sectors, including health care, with orthopedics being no exception. Orthopedic practice, already familiar with technological advancements such as robotic surgery, is rapidly integrating AI into clinical workflows, enhancing precision in surgical planning, diagnostics, and patient care. However, this evolution raises critical questions, particularly regarding liability when AI systems fail and cause harm. This article delves into the role of AI in orthopedics, exploring its current applications and the potential legal implications that come with its adoption. It examines the global landscape, highlighting the lack of clear regulations around AI liability, especially in India, where the topic remains underexplored in medical literature. With insights into how AI is transforming orthopedic practice, the article addresses the pressing concern of who bears responsibility when AI errors occur. This timely discussion explores a new and evolving topic, urging Indian orthopedic surgeons to balance the benefits of AI with the responsibility they hold as the legal framework surrounding AI in health care continues to evolve.
Keywords: Artificial intelligence, orthopedics, liability, medicolegal, responsibility.
The artificial intelligence (AI) revolution has truly begun, and we are witnessing it change more and more aspects of our lives. Health care is one of the key sectors AI is expected to reform, and it has already begun to be used to address unique challenges in the field. Orthopedics is no stranger to technology in clinical practice, with robotic surgery widely used, especially for arthroplasty. Hence, it is one of the medical specialties expected to adopt AI rather quickly. We are already beginning to see this with various innovations and programs that use AI to aid orthopedic surgeons in their practice. However, one question has begun to arise whose answer is not clear: if an AI system makes a mistake that leads to harm to the patient, who will be held responsible?
Before we explore the liability of AI use, we must understand what exactly AI is doing in our field and how it functions. AI has already been used in various scenarios in orthopedic practice and is expected to make further inroads in the near future. There are AI programs that help in pre-operative planning. For example, computed tomography (CT)-based robotic systems used in arthroplasty analyze the patient's CT and help identify the bone cuts that need to be made with great precision. They also suggest implant types and sizes that can be used in the surgery, thereby decreasing the need for a large armamentarium and making intraoperative decision-making easier. There are also AI systems that can analyze patient data to identify potential risks and complications, enabling personalized treatment and timely interventions [1]. AI has also been a boon in orthopedic research, where it can analyze large data sets, identify patterns, and reveal insights. There is also a new system called Ortho AI, developed in consultation with orthopedic surgeons in India, which can effectively act as an assistant to orthopedic surgeons [2]. It is the world's first evidence-based generative AI for orthopedics, built on large language models and cognitive search models, and it functions much like a ChatGPT for orthopedics. It can answer orthopedic questions, provide details of surgical steps, offer troubleshooting advice for intraoperative complications, and perform many other such functions. However, one question will start cropping up in the minds of orthopedic surgeons using these AI systems and the devices that integrate them: what if one does use such a system, but the output it generates is wrong?
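As a purely illustrative aside, the short Python sketch below shows, in general terms, how a risk-prediction model of the kind described above might be built from tabular pre-operative data. It is a hypothetical toy example: the features, the synthetic data, and the choice of model are assumptions made for illustration only and do not reflect the implementation of any product mentioned in this article.

```python
# Purely illustrative sketch of a complication-risk prediction model.
# All feature names, data, and relationships are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical pre-operative features: age, BMI, HbA1c, smoking status.
X = np.column_stack([
    rng.normal(65, 10, n),    # age (years)
    rng.normal(28, 5, n),     # BMI (kg/m^2)
    rng.normal(6.0, 1.0, n),  # HbA1c (%)
    rng.integers(0, 2, n),    # smoker (0 = no, 1 = yes)
])

# Synthetic outcome: toy relationship in which complication risk rises
# with age, BMI, HbA1c, and smoking (for illustration only).
logits = -12 + 0.05 * X[:, 0] + 0.1 * X[:, 1] + 0.5 * X[:, 2] + 0.8 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple classifier and estimate each patient's complication risk.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

print(f"AUC on held-out data: {roc_auc_score(y_test, risk):.2f}")
```

Even in a sketch like this, the model outputs only a probability; deciding how to act on it remains the surgeon's call, which is precisely where the liability question arises.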
With the rate of technological advancement, it is not a question of whether autonomous AI systems will be used in orthopedics, but only a matter of when. It may be in diagnosing conditions, in determining the bone cuts in a knee arthroplasty all by itself, or in many other possible ways. The bigger question, however, will be who is responsible when (rather than if) it goes wrong. The answer to this question is not yet known, not just in India but throughout the world, and not just in orthopedics and medicine but for AI applications in other fields as well. No country has yet enforced an AI-specific law, though many countries are deliberating such legislation. That does not mean that AI systems in use today are free of liability or regulatory control. In the United States, the Food and Drug Administration (FDA) oversees the regulation of AI-based medical devices, ensuring they adhere to safety and effectiveness standards [3]. The FDA has approved or cleared various AI-powered health-care tools, including the IDx-DR system, the first AI system authorized to autonomously diagnose diabetic retinopathy. If such an AI system causes harm, liability may be addressed through existing product liability laws. The European Union has proposed an AI liability directive, which aims to make it easier for victims to claim compensation when AI systems cause harm [4]. In India, the Information Technology Act, 2000 would apply, as it deals with cyber security, data protection, and electronic records, which can be indirectly applied to AI. However, there is still a lack of clarity on what liability for AI used in health care will look like. The answer may come only in the coming years, when dedicated regulatory frameworks are established or courts deliver verdicts in cases involving AI liability, which could then serve as case law and guide future decisions.
At the end of the day, what orthopedic surgeons want to know is this: if they use an AI system and it leads to harm to the patient, is the surgeon responsible? The answer is yes, until proven otherwise. AI systems are currently viewed as tools that help an orthopedic surgeon make decisions or improve patient care, not as truly autonomous, completely self-reliant agents [5]. Orthopedic surgeons must use their discretion in choosing to use AI systems after weighing the risks and benefits. They must also not rely entirely on AI or trust it blindly, as they will still have to face the consequences in a court of law if a complication occurs. While one must adopt and adapt to newer technologies, some caution is also needed to ensure that patient outcomes are never adversely affected. It would be prudent to explore, experiment with, understand, research, and trial AI systems in controlled settings before actually using them in the treatment of patients. Before one hands over all brain activity to AI, one must remember that it is their buttock on the line.
While AI significantly enhances orthopedic practice, it also introduces complex liability issues that need addressing. Indian orthopedic surgeons must balance AI’s benefits with their responsibilities as the legal framework evolves.
- AI is on course to transform orthopedics, offering enhanced precision in diagnostics, surgical planning, and personalized patient care
- The legal responsibility for AI errors in orthopedics remains unclear, with no specific regulations currently in place, particularly in India
- Orthopedic surgeons must use AI cautiously, balancing its benefits with their responsibility, as liability for AI-related mistakes is currently likely to fall, at least in part, on the medical professional.
References
- 1. Myers TG, Ramkumar PN, Ricciardi BF, Urish KL, Kipper J, Ketonis C. Artificial intelligence and orthopaedics: An introduction for clinicians. J Bone Joint Surg Am 2020;102:830-40.
- 2. Sancheti P, Bijlani N, Shyam A, Yerudkar A, Lunawat R. Ortho AI: World's first artificial intelligence in orthopaedics. J Orthop Case Rep 2023;13:178-9.
- 3. Zhou K, Gattinger G. The evolving regulatory paradigm of AI in MedTech: A review of perspectives and where we are today. Ther Innov Regul Sci 2024;58:456-64.
- 4. Busch F, Kather JN, Johner C, Moser M, Truhn D, Adams LC, et al. Navigating the European Union artificial intelligence act for healthcare. NPJ Digit Med 2024;7:210.
- 5. Pai SN, Jeyaraman M, Jeyaraman N, Nallakumarasamy A, Yadav S. In the hands of a robot, from the operating room to the courtroom: The medicolegal considerations of robotic surgery. Cureus 2023;15:e43634.