
The rapid implementation of AI in healthcare decision-making creates complex ethical challenges for compliance and ethics professionals. Responding to a hypothetical scenario inspired by a recent ethics competition, Ask an Ethicist columnist Vera Cherepanova examines the tension between algorithmic efficiency and human-centered decision-making.
“I oversee AI implementation at a major healthcare provider. Our AI system automates medical claim approvals, significantly boosting profitability by quickly denying questionable claims. But lately, concerns have emerged: Patients are frustrated by opaque, automated decisions, and our internal audits suggest vulnerable populations may be disproportionately affected. Senior leadership insists our AI practices are legally compliant and financially justified, but I’m uncomfortable. Should I advocate strongly for more transparency and clinician involvement, even if it means challenging leadership and reducing short-term profits, or should I accept this as the necessary cost of innovation and business efficiency?” This question was inspired by the International Business Ethics Case Competition, where students from across the US learn to discuss ethics in business in a practical, realistic and effective fashion, and where I was proud to be a judge for the fourth year in a row.
You’ve highlighted a tension that sits at the intersection of ethics, technology and business strategy. What you’re wrestling with is something increasingly common as organizations adopt artificial intelligence. Your discomfort is understandable.
Automation promises speed, consistency and financial efficiency, but in healthcare, the growing role of AI in such decisions carries significant ethical implications, particularly when profit-driven automation intersects with vulnerable patients’ needs.
Medical decisions aren’t purely transactional; they directly impact human lives, often profoundly. This is where the tension between efficiency and humanity comes to the forefront.
Legally compliant actions aren’t always ethically sufficient. Your company may have the right to rely on AI-driven denials, but having the right does not inherently make something the right thing to do. When automated decisions lack transparency, patients, especially vulnerable ones, may suffer disproportionately. The complexity of healthcare demands more than algorithmic accuracy: It requires integrity, compassion and fairness, all of which are often featured as healthcare providers’ corporate values and none of which a purely automated system reliably provides.
Your inclination toward more human oversight and transparency is ethically sound and strategically wise. AI itself isn’t the problem here; the real issue is using it without sufficient safeguards or transparency. Short-term financial gains secured through aggressive AI-driven denials are fragile. Public backlash, which can be severe, along with loss of trust and the risk of legal action, can rapidly offset those gains. Ensuring oversight by clinicians and clarity in decision-making aligns closely with your company’s stated mission of promoting healthier lives and a trustworthy healthcare system.
You should speak candidly with leadership, suggesting immediate improvements, such as increased human oversight in claim reviews, clearer explanations of denials and rigorous audit procedures for fairness and accuracy. Make your case by emphasizing that, in a field originally grounded in compassion and human dignity, prioritizing human-centered approaches isn’t just ethical; it’s strategic.
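For readers who want to make “rigorous audit procedures for fairness” concrete, one simple starting point is comparing denial rates across patient groups. The following is a minimal sketch in Python, assuming a claims log with a demographic field; the field names, sample data and the four-fifths threshold (a heuristic borrowed from employment screening) are illustrative assumptions, not a prescribed methodology.

```python
from collections import defaultdict

# Hypothetical claim records; in practice these would come from the
# claims system. Each record carries a group label and the outcome.
claims = [
    {"group": "A", "denied": False},
    {"group": "A", "denied": True},
    {"group": "B", "denied": True},
    {"group": "B", "denied": True},
    {"group": "B", "denied": False},
]

def denial_rates(records):
    """Compute the denial rate for each demographic group."""
    totals, denials = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        denials[r["group"]] += int(r["denied"])
    return {g: denials[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's approval rate to the most-approved group.

    A ratio below 0.8 (the "four-fifths rule," borrowed here as a
    rough heuristic) flags that group for closer human review.
    """
    approval = {g: 1 - r for g, r in rates.items()}
    best = max(approval.values())
    return {g: a / best for g, a in approval.items()}

rates = denial_rates(claims)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: denial rate {rates[group]:.0%}, "
          f"impact ratio {ratio:.2f} [{flag}]")
```

A flagged ratio in a sketch like this is a prompt for clinician review, not a verdict: a real audit would also control for clinical factors, claim complexity and coding quality before drawing any conclusion about bias.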
The ethically sound path is clear, though it might not be the easiest to travel. Advocating for a balanced, patient-centered approach will honor the trust patients place in healthcare providers and protect your organization from the far-reaching risks of unregulated automation. This approach ensures that your company can continue innovating without abandoning the fundamental ethical obligations that underpin healthcare itself.
Readers respond
The previous question came from an ESG manager at a company navigating the ethics of greenhushing — the practice of quietly scaling back ESG disclosures in response to a politically hostile environment. The dilemma revolved around whether discretion was necessary to preserve sustainability efforts, or if silence ultimately amounted to complicity.
In my response, I noted: “If silence allows the company to continue its sustainability work without interference, some might argue that greenhushing is a strategic necessity rather than an ethical failure. However, if enough companies choose discretion over transparency, it reinforces the idea that sustainability is a liability, weakening both corporate credibility and industry-wide progress. Transparency isn’t just a virtue; it’s a responsibility. If the company has made progress, stakeholders — investors, employees, customers — deserve to know it. Investors and shareholders relying on ESG data may make decisions based on incomplete information, which could create ethical and litigation risks of its own. That said, defying political realities isn’t always viable. If the goal is long-term impact rather than short-term signaling, keeping a lower profile may actually preserve sustainability work rather than weaken it. So, it will not come as a big surprise if I say, ‘A balanced approach is needed.’” Read the full question and answer here.
Let’s look at some reader reactions:
Greenhushing is NONCOMPLIANCE — TT
Whatever good intentions this comes from, ESG disclosures aren’t just marketing; they’re a governance tool. Withholding progress can distort markets and deprive stakeholders of the information they need to make informed decisions. — GK
Have a response? Share your feedback on what I got right (or wrong). Send me your comments or questions.