Why Large Language Models Fool Busy Lawyers
Generative AI excels at mimicking legal syntax—Bluebook pin cites, Latin parentheticals, the calm cadence of judicial prose. That linguistic polish triggers “automation bias,” says Northwestern University tech‑ethics professor Daniel Linna: “When the memo matches the house style, senior associates assume the facts line up too.”
Compounding the problem is prompt drift: slight changes in wording can shift a model from summarizing real precedent to inventing analogues that do not exist. Firms that fed research queries into private GPT instances gained speed but forfeited the guardrails and citation‑checking layers built into traditional legal databases.
Insurers Turn Anxiety into Endorsements
Professional‑liability carriers, alarmed by an uptick in claim notices tied to AI errors, have started offering (and in some cases requiring) “generative‑AI endorsements.” The riders exclude coverage for losses stemming from “unverified or unreviewed” model output. A new survey by Marsh McLennan’s Law‑Firm Risk Group found premiums rose by an average of 5% for firms that could not demonstrate a written AI‑governance framework.
EPIC Insurance’s annual trends report warns that undisclosed AI use may trigger material‑misrepresentation defenses, letting carriers rescind policies after a malpractice judgment.