Trial Magazine

Theme Article


Revolutionizing Med Neg Cases With AI

As AI continues to advance, learn how it can help you uncover alternative diagnoses, create visualizations of key data, and pinpoint critical evidence in medical negligence cases.

Brandon Vaughn and Carrie Joines | August 2025

The rapid evolution of technology is revolutionizing the practice of law. For plaintiff attorneys handling medical negligence cases, incorporating advanced tools—especially artificial intelligence—brings both challenges and opportunities.

Though still in its infancy, AI has the power to reshape how we manage these cases, from analyzing and summarizing medical records to researching differential diagnoses and providing expert-level insights. While AI tools offer promising benefits, they also raise significant questions about their limitations, ethical considerations, and the necessity of human oversight.1

This technology can help attorneys streamline the discovery process, but in clinicians’ hands it can also contribute to medical misdiagnoses. While the proliferation of AI offers significant benefits across both the legal and medical fields, it also carries serious risks. Just as AI might generate an inaccurate legal brief for an attorney, it can hinder medical care when health care providers rely on it improperly.

Plaintiff attorneys face additional challenges because some electronic medical record (EMR) systems now incorporate AI, using it to highlight critical results or to detect changes in a patient’s condition based on data entered into the chart. We must examine AI’s role in medical decision-making and consider how it influences the evolving landscape of medical negligence cases.

Unlock Deeper Insights

AI has transformed med neg cases by enhancing the analysis and summarization of medical records beyond the capabilities of traditional keyword searches. For years, attorneys have relied on paralegals or medical experts to sift through extensive medical records to identify critical evidence.

This manual process is not only time-consuming but also prone to human error. AI, particularly natural language processing technologies, surpasses mere keyword identification by offering a deeper contextual understanding of medical information.2

Before interacting with any AI platform, plaintiff attorneys must ensure they are protecting client confidentiality by only using platforms that are compliant with data security regulations, such as HIPAA. Open platforms, such as ChatGPT,3 lack end-to-end encryption and may share user-entered data with third parties. You should never upload or enter confidential data into prompts on these platforms.

Better searches. AI’s advanced algorithms on platforms like Thomson Reuters’s CoCounsel4 allow users to ask specific natural language questions—also known as prompts—about a document. This marks a significant improvement from traditional keyword search methods.

For example, you might ask an AI platform, “How many times was the patient’s oxygen saturation below 90% on 4/29/23?” This is invaluable for plaintiff attorneys reviewing vast medical records or clinical literature to find pertinent evidence supporting their clients’ claims. AI-powered tools can efficiently search for specific conditions, medical treatments, or procedural errors, yielding more targeted results.5
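
If you want to verify the AI’s count rather than take it on faith, the arithmetic is simple enough to reproduce yourself. Here is a minimal Python sketch, assuming the vitals have been exported to a CSV file; the file name and column names are hypothetical.

```python
import csv
from datetime import datetime

# Hypothetical export: vitals.csv with "timestamp" and "spo2" columns,
# e.g. a row like "2023-04-29 08:15,88". Names are illustrative only.
def count_low_spo2(path, date_iso, threshold=90.0):
    count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M")
            if ts.date().isoformat() == date_iso and float(row["spo2"]) < threshold:
                count += 1
    return count

# How many readings fell below 90% on 4/29/23?
print(count_low_spo2("vitals.csv", "2023-04-29"))
```

If the script’s count and the AI’s answer disagree, the discrepancy itself tells you where to dig in the record.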

Some AI platforms can organize a patient’s entire medical history chronologically, allowing legal teams to identify relevant trends in a patient’s condition over time. These platforms can also detect patterns in medical records, such as trends in vital signs, physical assessments, and lab results.
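
The idea behind these chronologies is straightforward even when the tooling is not: gather every dated entry from every source into a single, time-sorted list. A toy Python illustration, with entirely invented records:

```python
from datetime import datetime

# Entirely hypothetical entries drawn from different parts of a record.
records = [
    {"source": "nursing note", "time": "2023-04-29 06:00", "entry": "Patient lethargic, difficult to rouse"},
    {"source": "lab result",   "time": "2023-04-28 22:10", "entry": "Lactate 4.1 mmol/L"},
    {"source": "vitals",       "time": "2023-04-29 02:30", "entry": "HR 118, BP 92/60"},
]

# Merge everything into a single chronology so trends stand out.
for r in sorted(records, key=lambda r: datetime.strptime(r["time"], "%Y-%m-%d %H:%M")):
    print(f'{r["time"]}  [{r["source"]:12}]  {r["entry"]}')
```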

Chronological organization is particularly useful in cases involving extensive data. By providing a comprehensive chronology, AI can assist in determining whether a health care provider’s actions—or inactions—may have caused harm.6 Yet, this technology has limitations. Always verify AI-generated information against the original medical records to ensure accuracy.

AI can streamline the discovery process by highlighting key points that might otherwise have gone unnoticed and offering a thorough and efficient review of medical records. However, it’s important to note that AI should complement, not replace, human judgment.7

While AI can provide valuable insights, attorneys, paralegals, medical consultants, and experts must scrutinize AI-generated conclusions. Medical experts should base their opinions on their independent review of the original medical records, rather than solely on AI-generated summaries.8

Despite its advantages, AI is not infallible. It can occasionally “hallucinate” by fabricating information, including sources, data, and references. This occurs when AI generates content that seems credible but is actually inaccurate or entirely fabricated. To ensure accuracy, always fact-check AI-generated work against original records.
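
One practical fact-check is to confirm that every passage an AI summary presents as a quote actually appears in the underlying record. A minimal Python sketch of that idea (the record text and quotes here are invented):

```python
import re

def normalize(text):
    # Collapse whitespace and case so trivial formatting differences don't matter.
    return re.sub(r"\s+", " ", text).strip().lower()

def unsupported_quotes(quotes, record_text):
    """Return quoted passages from an AI summary that never appear in the record."""
    haystack = normalize(record_text)
    return [q for q in quotes if normalize(q) not in haystack]

record = "04/29/23 02:30  SpO2 88% on room air. MD notified. Will continue to monitor."
quotes = ["SpO2 88% on room air", "patient refused further monitoring"]  # second is fabricated
print(unsupported_quotes(quotes, record))  # -> ['patient refused further monitoring']
```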

Differential diagnoses. Some emerging AI platforms, such as DxGPT,9 can aid in generating differential diagnoses by identifying potential conditions a patient might have based on their symptoms. You can input the clinical data from the medical record to generate differential diagnoses. AI then cross-references the patient’s symptoms with a database of known diseases and conditions, generating a list of possible diagnoses the health care provider should have considered.10

This capability helps demonstrate whether the physician adhered to appropriate standards of care and whether they should have considered alternative diagnoses. If AI identifies conditions that the health care provider overlooked, it could indicate potential negligence.11 While this feature can be helpful, this technology is still fairly new, and any results should be scrutinized.
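
At its core, the cross-referencing step can be pictured as an overlap score between the patient’s documented findings and the characteristic findings of each candidate condition. The Python sketch below is a toy, not a clinical tool; the condition lists are invented for the example.

```python
# A toy illustration only: real platforms use far richer clinical models.
CONDITIONS = {
    "sepsis": {"fever", "tachycardia", "hypotension", "elevated lactate"},
    "pulmonary embolism": {"tachycardia", "hypoxia", "chest pain"},
    "pneumonia": {"fever", "cough", "hypoxia"},
}

def rank_differentials(findings):
    findings = set(findings)
    scored = [(len(findings & sx) / len(sx), dx) for dx, sx in CONDITIONS.items()]
    return sorted(scored, reverse=True)

for score, dx in rank_differentials(["fever", "tachycardia", "hypotension"]):
    print(f"{dx}: {score:.0%} of characteristic findings documented")
```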

Turn Numbers into Narratives

You can also use various AI platforms to generate visual representations of numerical data found in medical records. Patient data such as blood pressure, heart rate, and other vital signs can be difficult to interpret in a raw, tabular form. AI tools analyze these numbers and create charts, graphs, and other visuals that help you and your medical experts quickly identify patterns and anomalies. These visualizations can clarify how the data fits into the overall case analysis. In our experience, the type of AI platform you use will determine the quality of these representations.

The best way to start using these AI features is through experimentation. Beginners can try a free platform, asking easy questions and building on each previous prompt. Ask ChatGPT to explain the standard of care for applying a Band-Aid, then ask it to create a table comparing the benefits of green Band-Aids to purple ones. Tell it to cite its sources in APA format. Then ask it to draw a picture of a tie-dyed Band-Aid wearing a yellow top hat. The more you play around, the easier it becomes to write prompts that produce the outputs you want. From there, you can evaluate which features are most helpful to you and look for a secure platform to meet your case needs.

Visual aids are especially valuable for demonstrating that a physician failed to monitor a patient’s condition or for highlighting significant clinical trends the physician may have missed.

For instance, Microsoft Copilot12 can read a medical record, identify and organize the patient’s kidney function lab results into a table, and then use that table to create a graph showing when kidney function deteriorated or when critical changes occurred. This visualization simplifies the argument that the health care provider failed to intervene appropriately.13 Again, review AI-generated charts carefully to ensure the information is accurate.
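
One way to check such a chart is to rebuild it yourself from the values in the record. A minimal Python sketch using matplotlib, with hypothetical creatinine values standing in for the extracted lab data:

```python
import matplotlib.pyplot as plt

# Hypothetical creatinine values standing in for data pulled from the record.
dates = ["4/25", "4/26", "4/27", "4/28", "4/29"]
creatinine_mg_dl = [1.0, 1.3, 1.9, 2.8, 4.1]

plt.plot(dates, creatinine_mg_dl, marker="o", label="Serum creatinine")
plt.axhline(1.2, linestyle="--", color="gray", label="Upper limit of normal")
plt.xlabel("Date")
plt.ylabel("mg/dL")
plt.title("Kidney function during the admission")
plt.legend()
plt.savefig("creatinine_trend.png")
```

If your reconstruction and the AI’s chart diverge, trace each point back to the record before relying on either version.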

In negligence cases involving extensive numerical data in medical records, AI-generated charts and graphs can serve as compelling evidence. They provide a clear, easily digestible narrative for judges and juries, highlighting the timeline of medical events and presenting a persuasive argument regarding how a health care provider’s actions or inactions led to harm.14

Medical Ethics

There’s always a balancing act between the benefits of technology and the necessity for human judgment. While AI can efficiently summarize medical records or suggest possible treatments, multiple ethical concerns arise when we allow it to make medical decisions or act as a stand-in for clinical judgment.

As with any algorithm, AI outputs are only as reliable as the data and programming behind them. The accuracy of these tools often depends on how the underlying algorithm was designed and trained.

A significant concern is the growing integration of AI into EMR systems. Some EMR systems, such as Epic,15 now use generative AI to draft progress notes16 and responses to patient messages.17 Although these features can save time and allow clinicians to spend more time with patients, they can also introduce various risks such as documentation errors and unclear lines of responsibility.

For instance, how does an AI-generated progress note affect an attorney’s ability to cross-examine a provider at deposition or trial about their documentation? If a provider relies on AI for predictive modeling and the system fails to identify a patient’s deterioration, who bears the responsibility—the AI developer or the health care provider?18

As more health care systems adopt AI, we must examine whether these tools function as decision-makers. Despite low external validation scores,19 Epic already integrates a sepsis prediction model,20 which attempts to identify early signs of developing sepsis. One study found the Epic Sepsis Model was a poor predictor of sepsis and generated a disproportionate number of alerts to hospital staff, which led to alert fatigue.21
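
The alert-fatigue finding follows from basic probability: when a condition is uncommon, even a reasonably accurate model produces mostly false alarms. A back-of-the-envelope calculation makes the point (the numbers below are hypothetical, not the study’s figures):

```python
# Hypothetical numbers for illustration; not the study's figures.
prevalence  = 0.03   # 3% of hospitalized patients develop sepsis
sensitivity = 0.80   # the model flags 80% of true cases
specificity = 0.85   # the model clears 85% of patients without sepsis

true_alerts  = prevalence * sensitivity              # flagged and truly septic
false_alerts = (1 - prevalence) * (1 - specificity)  # flagged but not septic
ppv = true_alerts / (true_alerts + false_alerts)

print(f"Share of alerts that are real: {ppv:.0%}")  # ~14%
```

Under those assumptions, roughly six of every seven alerts are false, which is exactly how staff learn to tune them out.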


Although AI can help detect patterns and offer diagnoses, health care providers must remain vigilant as the ultimate decision-makers, using their expertise and clinical judgment to guide care. Attorneys should approach AI-generated conclusions with caution, recognizing the critical role human oversight still plays in medical decision-making.22

Validate Every Bit

While AI offers significant advantages, it’s crucial to remember that tools like ChatGPT can hallucinate. Many have heard about lawyers submitting briefs created with AI tools that cited nonexistent cases.23 For instance, when prompted with questions about legal matters, Westlaw’s AI tool24 hallucinated about 33% of the time, and GPT-425 hallucinated about 43% of the time.26

Although these incidents reflected poor oversight by the attorneys involved, they also highlighted the broader risks of overreliance on AI. AI-generated output won’t always be free from error. But these tools will likely improve over time, much as the research platforms attorneys already use to find case law and other information have improved.

Before entering information into an AI platform, prioritize client confidentiality. If you need input on confidential data—like medical records—ensure the platform complies with HIPAA and other confidentiality requirements. On some platforms, you can adjust the settings to prevent the system from learning from the information you provide.
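
If you must experiment on a nonsecure platform, work only with fully de-identified text. The Python sketch below shows the flavor of that scrubbing, but it is deliberately labeled illustrative: real de-identification under HIPAA’s Safe Harbor method covers 18 identifier categories and demands far more than a few patterns.

```python
import re

# Illustrative patterns only. Genuine de-identification requires far
# more than a few regexes; treat this as a demonstration, not a tool.
PATTERNS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]"),
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),
    (r"\(?\b\d{3}\)?[ .-]?\d{3}[.-]\d{4}\b", "[PHONE]"),
    (r"\bMRN[:# ]*\d+\b", "[MRN]"),
]

def scrub(text):
    for pattern, token in PATTERNS:
        text = re.sub(pattern, token, text)
    return text

note = "Seen 4/29/23, MRN 448812, callback (555) 867-5309."
print(scrub(note))  # -> Seen [DATE], [MRN], callback [PHONE].
```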

Attorneys should apply the same critical eye when using AI-generated information in med neg cases as they would when reviewing legal citations. Never accept AI-generated information at face value. Verify claims against reliable, authoritative sources for accuracy and integrity in your case preparation.

AI can support legal work effectively, but it remains a tool—not a substitute for human judgment. Fact-checking and professional discernment are essential to prevent the spread of misinformation and avoid critical errors.27

Leverage Tech Wisely

AI has made significant strides in recent years, but many of today’s tools remain in their early stages. Although AI holds tremendous promise for transforming how we handle medical negligence cases, developers still need to refine these systems to fully unlock their potential. AI tools must advance in their ability to analyze complex medical data, interpret clinical context accurately, and deliver reliable, meaningful insights.28

As plaintiff attorneys, we must stay current with developments in AI through continuing education, as well as with the latest medical technology and health care systems. Understanding both the strengths and limitations of these tools allows us to make informed decisions about how to incorporate AI into our practices.


Brandon Vaughn is a partner with Robins Kaplan in Minneapolis and can be reached at bvaughn@robinskaplan.com. Carrie Joines is a senior medical analyst with the firm and can be reached at cjoines@robinskaplan.com. The views expressed in this article are the authors’ and do not constitute an endorsement of any product or service by Trial® or AAJ®.


Notes

  1. Fei Jiang et al., Artificial Intelligence in Healthcare: Past, Present and Future, 2(4) Stroke & Vascular Neurology 230 (2017), https://svn.bmj.com/content/2/4/230.
  2. Eiji Aramaki et al., Natural Language Processing: From Bedside to Everywhere, Y.B. Med. Informatics (June 2, 2022), https://pmc.ncbi.nlm.nih.gov/articles/PMC9719781/.
  3. ChatGPT, https://chatgpt.com.
  4. CoCounsel, https://www.thomsonreuters.com/en/cocounsel.
  5. Abid Haleem et al., Current Status and Applications of Artificial Intelligence (AI) in Medical Field: An Overview, 9(6) Current Med. Rsch. & Prac. 231–37 (2019), https://www.sciencedirect.com/science/article/abs/pii/S235208171930193X; Macon & Joan Brock Va. Health Scis. at Old Dominion Univ., AI Tools for Medical Education and Research (last edited Oct. 4, 2024), https://www.evms.edu/about_us/ai_resources/resources_and_ai_tools/ai_tools_for_medical_education_and_research/.
  6. Ahmed Al Kuwaiti et al., A Review of the Role of Artificial Intelligence in Healthcare, 13(6) J. Personalized Med. 951 (June 5, 2023), https://tinyurl.com/yc4d2wka; Avishek Choudhury & Onur Asan, Role of Artificial Intelligence in Patient Safety Outcomes: Systematic Literature Review, 8(7) J. Med. Internet Rsch. Med. Informatics (July 24, 2020), https://pmc.ncbi.nlm.nih.gov/articles/PMC7414411/.
  7. Bangul Khan et al., Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector, 1 Biomedical Materials & Devices (Feb. 8, 2023), https://pmc.ncbi.nlm.nih.gov/articles/PMC9908503.
  8. Shuroug A. Alowais et al., Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice, 23 BMC Med. Educ. 689 (2023), https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-023-04698-z#Sec8.
  9. DxGPT, https://dxgpt.app.
  10. Takanobu Hirosawa et al., ChatGPT-Generated Differential Diagnosis Lists for Complex Case-Derived Clinical Vignettes: Diagnostic Accuracy Evaluation, 11 J. Med. Internet Rsch. Med. Informatics (Oct. 9, 2023), https://pmc.ncbi.nlm.nih.gov/articles/PMC10594139/.
  11. Gina Kolata, A.I. Chatbots Defeated Doctors at Diagnosing Illness, N.Y. Times (Nov. 17, 2024), https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors-diagnosis.html.
  12. Microsoft Copilot, https://copilot.microsoft.com.
  13. Chieh-Chen Wu et al., Artificial Intelligence in Kidney Disease: A Comprehensive Study and Directions for Future Research, 14(4) Diagnostics (Basel) 397 (Feb. 12, 2024), https://pmc.ncbi.nlm.nih.gov/articles/PMC10887584/.
  14. Md Kazi Shahab Uddin, A Review of Utilizing Natural Language Processing and AI for Advanced Data Visualization in Real-Time Analytics, 1(4) Glob. Mainstream J. (Dec. 17, 2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4993556.
  15. Epic, https://www.epic.com.
  16. Epic, AI for Clinicians, https://www.epic.com/software/ai-clinicians/.
  17. Anna Cacciaglia, Gen AI Saves Nurses Time by Drafting Responses to Patient Messages, EpicShare (Mar. 4, 2024), https://www.epicshare.org/share-and-learn/mayo-ai-message-responses.
  18. Claudia E. Haupt & Mason Marks, AI-Generated Medical Advice—GPT and Beyond, 329(16) JAMA (2023), https://jamanetwork.com/journals/jama/fullarticle/2803077.
  19. Andrew Wong et al., External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients, 181(8) JAMA Internal Med. 1065–70 (June 21, 2021), https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2781307.
  20. Erin McBeth, Reducing Mortality and Saving Money by Identifying Signs of Sepsis Sooner, EpicShare (Jan. 29, 2024), https://www.epicshare.org/perspectives/streamlining-sepsis-prevention-and-care.
  21. Wong, supra note 19, at 1065.
  22. Giorgia Lorenzini et al., Artificial Intelligence and the Doctor-Patient Relationship Expanding the Paradigm of Shared Decision Making, 37(5) Bioethics 424–29 (June 2023), https://pubmed.ncbi.nlm.nih.gov/36964989/.
  23. Faiz Surani & Daniel E. Ho, AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries, Stanford Univ.: Human-Centered Artificial Intelligence (May 23, 2024), https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries.
  24. Westlaw Precision, https://legal.thomsonreuters.com/en/c/westlaw/westlaw-precision-generative-ai.
  25. GPT-4, https://openai.com/index/gpt-4/.
  26. Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, J. Empirical Legal Stud. 10 (Mar. 14, 2025), https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf.
  27. Rami Hatem et al., A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks, 15(9) Cureus (Sept. 5, 2023), https://pmc.ncbi.nlm.nih.gov/articles/PMC10552880/.
  28. Sauliha Rabia Alli et al., The Potential of Artificial Intelligence Tools for Reducing Uncertainty in Medicine and Directions for Medical Education, 10 J. Med. Internet Rsch. Med. Educ. (Nov. 4, 2024), https://tinyurl.com/5t2jw9sn.