
For decades, medical malpractice has largely focused on “errors of commission” — the wrong diagnosis, the botched surgery, the incorrect medication. But I see a far more insidious and rapidly emerging threat: “errors of omission.” We are on the cusp of being held accountable not just for what we did wrong, but for what we failed to do, especially when readily available, life-saving technology could have made a difference.
The shifting standard of care
The standard of care in medicine is not static; it evolves with scientific discovery and technological advancement. What was considered cutting-edge yesterday is standard practice today, and what is innovative today will be the expected norm tomorrow. AI is accelerating this evolution at an unprecedented pace. The question is no longer if AI will transform healthcare, but when its absence will be deemed negligent.
Whose job is it to usher in this new era? While it’s a collective responsibility, the Chief Medical Information Officer (CMIO) and Chief Medical Officer (CMO) stand at the vanguard. They are the crucial bridge between clinical practice and technological innovation. Their mandate extends beyond simply maintaining IT infrastructure; it encompasses identifying, vetting, and strategically integrating technologies that demonstrably improve patient care, enhance safety, and drive efficiency. This isn’t just about adopting new tools; it’s about redefining what constitutes optimal care when so many AI tools are on the table.
The cost of missed opportunities: A lung cancer case study
Consider the tragic case of lung cancer. For too long, diagnoses have been made at advanced stages, drastically limiting treatment options and survival rates. Imagine a scenario where a patient, let’s call her Sarah, presents with a persistent cough. Her chest X-ray is deemed “unremarkable.” Months later, she’s diagnosed with Stage III lung cancer. Now, imagine a world — our rapidly approaching reality — where an AI-powered diagnostic tool, integrated into the radiology workflow, could have flagged subtle anomalies on that initial X-ray, prompting further investigation and an early Stage I diagnosis.
The difference between a Stage I and Stage III diagnosis isn’t just a matter of clinical staging; it’s often the difference between life and death, between curative treatment and palliative care. Patients and their families are increasingly aware of these technological advancements. Lawsuits are already emerging where patients allege delayed diagnoses, arguing that hospitals failed to utilize available technologies that could have detected their condition earlier. For instance, legal scholars and medical ethicists are actively discussing the implications of AI’s absence in diagnostic processes, anticipating a rise in “failure to use AI” claims as the technology becomes more pervasive and demonstrably effective.
Just as advanced surgical robotics platforms have become a benchmark for sophisticated treatment, AI is rapidly becoming the benchmark for advanced diagnosis, risk stratification, and proactive intervention. The expectation is shifting: if the data exists, and AI could have analyzed it to prevent harm, why wasn’t it used?
Ethical and financial imperatives
The cost of such omissions extends far beyond legal settlements. There’s the profound ethical burden of preventable suffering and death. There’s the erosion of trust in healthcare institutions that are perceived as slow to adopt innovations that protect their patients. And there are the long-term financial implications: extended hospital stays, readmissions, and more complex, expensive treatments that could have been avoided with earlier intervention.
Investing in AI isn’t just about competitive advantage; it’s about fulfilling our fundamental promise to do no harm and to provide the best possible care. That promise extends beyond the exam room to how the entire system functions. When our providers are held back by outdated tools that delay critical surgeries or slow down the discharge process, the promise of “best possible care” is broken. It is an ethical imperative to provide staff with the technological support they need to deliver on this mission and ensure patients receive timely, high-quality care.
Overcoming the hurdles to AI adoption
Of course, barriers to AI adoption exist: the initial investment, the complexities of integration into legacy systems, the need for robust data governance, and the natural skepticism from clinicians accustomed to traditional methods.
Leading academic institutions such as Stanford (FURM) and Wake Forest (FAIR-AI) have recently published impressive frameworks for evaluating and implementing AI solutions. These aspirational efforts often involve deep technical expertise, multiple governance committees, and multidisciplinary leadership.
However, for every Stanford or Wake Forest, there are dozens of smaller hospitals that simply lack the staff and infrastructure necessary to replicate these processes. Academic medical centers account for less than 5% of US hospitals, meaning the vast majority of patients receive their care in settings where budgets are stretched, IT teams are lean, and governance structures are limited.
Frameworks like FURM and FAIR-AI can be distilled and adapted into lightweight toolkits practical for smaller organizations to adopt. We also need shared resources (e.g., rigorous academic research, governance models, standard evaluation methods) that empower all health systems to efficiently and safely deploy AI to improve patient care.
The call to action: Shaping healthcare’s future
The courtroom scenario of a “failure to use AI” claim is not a distant dystopian fantasy; it is our imminent reality. Healthcare leaders, especially CMIOs and CMOs, must proactively champion the strategic adoption of AI. We must educate our clinicians, invest in the necessary infrastructure, and cultivate a culture that embraces innovation as a cornerstone of patient safety. The time for passive observation is over. The future of medical liability will increasingly hinge on whether we seized the opportunity to leverage AI to improve care, or whether we allowed an error of omission to define our legacy. The lives of our patients, and the integrity of our institutions, depend on our decisive action today.
Dr. David Atashroo is Chief Medical Officer, Perioperative, at Qventus. In this role he leads the design and direction of the Qventus Perioperative Solution, which uses AI and automation to optimize OR utilization and drive strategic surgical growth. He holds a doctorate in medicine from the University of Missouri-Columbia and trained in plastic surgery at the University of Kentucky before completing his postdoctoral fellowship at Stanford University School of Medicine. In addition to his role at Qventus, Dr. Atashroo continues his clinical practice at the University of California-San Francisco.
This post appears through the MedCity Influencers program on MedCity News.
