
Designing AI Governance for In‑House Legal: From Default Use to Deliberate Design

04.21.26

By Jordan B. Segal

This article originally appeared in the Michigan Business Law Journal’s Spring 2026 issue.

Artificial intelligence is already inside most companies, whether legal likes it or not. Employees test AI tools without guidance or oversight; vendors turn on AI features by default without notice; and business units use AI with no record of inputs or outputs.1 The headline risks—the too frequent, often cringe-worthy instances of attorneys and others caught using AI irresponsibly—get the attention, but the day-to-day legal exposure usually lies in quieter gaps: exposures that go unnoticed until disaster strikes. Yet AI tools and product features have become ubiquitous for a simple reason: they are impressively useful. The task for in-house counsel, then, is not to stop AI but to make its use visible, governed, and defensible; in other words, to mitigate the risk without eliminating the tool. That requires in-house offices first to understand the scope of AI risks and then to adopt policies that mitigate them.

The Big Legal Risk Areas—What AI Can Break (and How)

Debating whether to “allow” AI misses the point. AI is already embedded in SaaS products, piloted by teams, and explored by employees—classic “adoption by default.” When that happens without standards or documentation, risk accumulates in untracked prompts, unvetted outputs, and decisions no one can later reconstruct. The alternative is “adoption by design”: planned, documented, risk-tiered use with clear owners and evidence to back decisions. It can be implemented as a simple program that sets boundaries for use, documents decisions, allocates risk in contracts, and preserves enough context to explain choices to regulators and courts. NIST’s AI Risk Management Framework organizes this work into four plain-English functions—govern, map, measure, and manage—while ISO/IEC 42001 describes the policies, roles, controls, and monitoring an organization should have in place.2

Privilege and Work Product Risk

AI tools can quietly erode privilege and work-product protection if they funnel client confidences through third-party systems, retain prompts by default, or commingle attorney analysis with general business records. Consumer-grade chatbots may reuse or review prompts; collaboration copilots may log attorney edits alongside nonlegal content. Courts have already punished misuse of generative tools in litigation—for example, counsel who relied on AI-fabricated citations drew sanctions and reputational harm.3 Even without bad faith, leaking privileged content into external models or producing AI-assisted drafts without preserving prompt context can fuel waiver arguments and spoliation disputes. Practical risk controls include enterprise deployments with contractual “no-training” terms, access controls, and logging; segregated legal environments; and preservation of prompts and iterations in the legal file where material to the advice.4

Trade Secret and Confidential Information Risk

Generative systems amplify both theft and accidental leakage of crown‑jewel know‑how. Employees can paste source code, pricing logic, deal terms, or product roadmaps into public models that store or learn from those inputs; vendors may enable AI features that create shadow datasets. On the offensive side, scraping and mass text-and-data mining to train or prompt models has triggered IP and unfair-competition suits,5 while automated harvesting of public-facing data has drawn CFAA and tort claims.6 Trade-secret plaintiffs need to be able to reconstruct what data left the enterprise, through which tools, and when.7 For in-house counsel, the risk is two-sided: uncontrolled prompting can waive secrecy; uncontrolled vendor ingestion can taint models and your outputs. Controls include bright-line bans on prompting crown-jewel content into public tools; restricted, logged enterprise instances; DLP on prompt/output channels; and contract terms that preserve input/output ownership and prohibit training on your data.

Records, Retention, and E-Discovery Risk

AI generates ephemeral artifacts—such as system instructions, prompts, weights, selected parameters, and iterative outputs—that can be material to claims or defenses but disappear under default settings. If your retention schedule and legal holds do not name AI systems and vendor logs, you may face completeness challenges or sanctions. Courts and commentaries increasingly expect contextual production from collaboration platforms and dynamic data, emphasizing reproducibility and proportionality.8 The inability to show who approved an AI-assisted statement, the failure to preserve the prompt that shaped a customer communication, and disputes over whether an AI output is a “record” can all be critical failures in a dispute or regulatory action.

Employment and Workplace Risk

Algorithmic tools used for recruiting, screening, performance management, or discipline can create disparate-impact or ADA “screenout” exposure if unvalidated, opaque, or poorly supervised. The EEOC has warned that employers remain responsible for vendor tools and must monitor for adverse impact under the Uniform Guidelines.9 Private litigation is emerging as well, alleging algorithmic bias, accessibility barriers, and age discrimination.10 For example, in Mobley v Workday, the plaintiff contends Workday’s automated, AI-powered resume-screening tools disproportionately excluded older applicants and people with disabilities.

Regulatory Compliance and Marketing/Claims Risk

“AI-powered” claims frequently attract regulatory scrutiny. The FTC has warned against “AI-washing,” brought actions targeting unsubstantiated efficacy or detectability promises, and challenged “robot lawyer” and earnings claims—enforcement activity frequent enough that the agency has branded the effort “Operation AI Comply.”11

A Practical Blueprint For In-House Lawyers

The good news is that there are clear, nontechnical playbooks to protect against these kinds of risks, grounded in widely used standards and recent enforcement trends. For example, NIST’s AI Risk Management Framework provides a practical structure,12 the FTC has issued concrete guidance about “AI-washing” in marketing,13 and the ABA has issued ethics guidance for lawyers’ use of generative AI14 (and the Michigan Bar is following suit with its own guidance).15 Together, these tools point to an approach corporate legal teams can implement quickly and explain easily to business partners.

NIST’s AI Risk Management Framework (RMF) organizes this work into four functions—govern, map, measure, and manage—which together shape the policies, roles, controls, and monitoring an organization should have in place. Govern means set policy, assign roles and accountability, ensure competence and oversight, and drive continual improvement across the organization.16 Map means identify and understand AI uses, data, stakeholders, and risks so decisions are based on facts rather than assumptions.17 Measure means test and monitor systems and uses to generate evidence about performance, bias, privacy, security, and compliance.18 Manage means apply controls, approvals, change management, and incident response to keep risks within acceptable bounds during day-to-day operations.19 NIST further explains: “Functions organize AI risk management activities at their highest level to govern, map, measure, and manage AI risks. Governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.”20

Below are specific, non-technical processes in-house counsel can lead for each function, aligned to NIST’s RMF and informed by its Playbook of suggested actions.

Map: Build a Reliable Picture of AI Across the Enterprise

“Aspects related to context are critical; the MAP function is intended to enhance an organization’s ability to identify risks and broader contributing factors.”21

Start by creating a centralized AI use register that lists every internal tool, embedded vendor feature, and employee trial; capture business purpose, data categories (including personal, privileged, or trade-secret content), model/provider, external exposures, and a simple risk tier. Stand up a lightweight intake form in the procurement or legal portal so new uses and vendor features are registered before they launch. Work with privacy and security to classify data and define prohibited prompts; with IT, link the register to a system-of-record (e.g., GRC or contract lifecycle platforms) so it stays current as vendors add AI. Map critical dependencies (interfaces, subprocessors, data locations) and identify high-impact use cases that trigger enhanced review.
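
To make the register concrete, each use can be captured as one structured record. Below is a minimal sketch in Python; the field names, risk tiers, and review trigger are illustrative assumptions rather than terms from NIST or ISO, and a spreadsheet or GRC platform serves the same purpose.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal-only use, no sensitive data
    MEDIUM = "medium"  # limited sensitive data, internal audience
    HIGH = "high"      # customer-, employee-, or regulator-facing

@dataclass
class AIUseRegisterEntry:
    """One row in the centralized AI use register."""
    use_case: str               # business purpose, in plain language
    business_owner: str         # an accountable person, not a team name
    model_or_provider: str      # vendor and product, or internal model
    data_categories: list[str]  # e.g., ["personal", "privileged", "trade-secret"]
    external_exposure: bool     # True if data leaves the enterprise
    risk_tier: RiskTier
    registered_on: date = field(default_factory=date.today)

def requires_enhanced_review(entry: AIUseRegisterEntry) -> bool:
    """Flag high-impact uses for the enhanced review described above."""
    sensitive = {"personal", "privileged", "trade-secret"}
    return (
        entry.risk_tier is RiskTier.HIGH
        or entry.external_exposure
        or bool(sensitive.intersection(entry.data_categories))
    )
```

However the register is implemented, the point is the same: every tool, embedded feature, and trial becomes one record with an owner, a data classification, and a risk tier that drives the depth of review.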

Measure: Generate Evidence That the Uses Are Safe and Effective

“The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.”22

Adopt right-sized testing protocols before launch and periodically thereafter: spot-check accuracy and stability on representative tasks; perform adverse-impact screening where HR or customer outcomes are involved; review privacy/security controls (e.g., no-training commitments, access logging, data retention); and document failure modes and limits for user guidance. Establish simple acceptance criteria tied to risk tier (for example, higher accuracy, documentation, and human review for customer-facing uses). Preserve testing workpapers, including inputs, outputs, material parameters, and edits, so results are reproducible in audits or litigation. Where feasible, pilot on internal or synthetic data first, and set up lightweight monitoring (sampling or QA checklists) for production uses that affect customers, employees, or regulators.
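
To illustrate, acceptance criteria tied to risk tier reduce to a lookup plus a pass/fail comparison against spot-check results. A minimal sketch in Python; the thresholds are placeholders each organization would set for itself, not values drawn from NIST or any regulator:

```python
# Illustrative acceptance criteria by risk tier; the thresholds below
# are placeholders, not values prescribed by NIST or any regulator.
ACCEPTANCE_CRITERIA = {
    "low":    {"min_accuracy": 0.80, "human_review": False},
    "medium": {"min_accuracy": 0.90, "human_review": True},
    "high":   {"min_accuracy": 0.95, "human_review": True},
}

def passes_acceptance(risk_tier: str, correct: int, sampled: int) -> bool:
    """Compare spot-check accuracy on representative tasks to the tier's bar."""
    return correct / sampled >= ACCEPTANCE_CRITERIA[risk_tier]["min_accuracy"]

# Example: a customer-facing (high-tier) use, sampled on 40 representative tasks.
if not passes_acceptance("high", correct=37, sampled=40):
    print("Hold launch: accuracy below tier threshold; document and remediate.")
```

Whatever thresholds an organization picks, preserving the sample, the results, and the go/no-go decision is what makes the criteria defensible later.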

Manage: Control Day-to-Day Operations and Changes

“The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function, including incident response and continual improvement.”23

Implement a use-case review path that fast-tracks low-risk activities and routes high-risk ones (customer communications, HR analytics, pricing/marketing claims, legal work) for quick legal/compliance signoff with documented conditions. Require enterprise accounts for external models, with access controls, logging, retention settings, and “no-training” terms; prohibit crown-jewel prompts in public tools and confine privileged work to segregated environments. Establish change‑management so substantive model or feature updates (vendor or internal) trigger a short re‑test and, if needed, refreshed approvals and disclosures. Require vendor “model update” notices; silent updates can invalidate prior approvals and disclosures. Define AI-specific incident response (what counts as an AI incident, who investigates, notification/escalation, corrective action), and add AI artifacts (prompts, outputs, logs) to legal-hold and collection playbooks.
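
Expressed as a process rule, the review path and change-management trigger might look like the following sketch; the category labels and outcomes are illustrative, not a definitive taxonomy:

```python
# High-risk categories named above route to legal/compliance sign-off;
# everything else is fast-tracked under standard acceptable-use conditions.
HIGH_RISK_CATEGORIES = {
    "customer_communications",
    "hr_analytics",
    "pricing_or_marketing_claims",
    "legal_work",
}

def review_path(category: str, model_changed_since_approval: bool = False) -> str:
    """Route a proposed or updated AI use to the appropriate review lane."""
    if model_changed_since_approval:
        # A substantive model or feature update triggers a short re-test
        # and, if needed, refreshed approvals and disclosures.
        return "re-test, then refresh approvals and disclosures"
    if category in HIGH_RISK_CATEGORIES:
        return "legal/compliance sign-off with documented conditions"
    return "fast-track with standard conditions"
```

The value is not the code but the consistency: every use follows the same documented path, and a silent vendor update cannot bypass it.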

Govern: Set Policy, Accountability, and Assurance

“The GOVERN function cultivates and implements a culture of risk management…[and] connects technical aspects of AI system design and development to organizational values and principles.”24

Publish a concise acceptable-use standard and role-based playbooks that translate do’s and don’ts into everyday tasks (permitted prompts, prohibited inputs, escalation paths, approved claims/disclosures). Create a cross-functional steering group (legal, privacy, security, compliance, procurement, and business) with a clearly defined RACI matrix; designate legal as convener and policy owner. Deliver short, role-tailored trainings; require attestations for higher-risk roles. Build assurance into routine operations: periodic prompt/output reviews, bias/accuracy spot-checks in sensitive workflows, and post-incident lessons learned with documented remediation. Report program status and key metrics (inventory coverage, reviews completed, incidents, corrective actions) to senior leadership, and refresh policies and templates at least annually to reflect evolving standards and enforcement.
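
The reporting piece can be equally lightweight. A hedged sketch of the leadership roll-up, with illustrative field names and numbers:

```python
from dataclasses import dataclass

@dataclass
class ProgramStatus:
    """Periodic roll-up of the key metrics reported to senior leadership."""
    registered_uses: int        # entries in the AI use register
    estimated_total_uses: int   # known plus suspected uses enterprise-wide
    reviews_completed: int
    incidents: int
    corrective_actions_closed: int

    @property
    def inventory_coverage(self) -> float:
        """Share of known-or-suspected AI uses captured in the register."""
        if self.estimated_total_uses == 0:
            return 1.0
        return self.registered_uses / self.estimated_total_uses

status = ProgramStatus(registered_uses=42, estimated_total_uses=50,
                       reviews_completed=30, incidents=2,
                       corrective_actions_closed=2)
print(f"Inventory coverage: {status.inventory_coverage:.0%}")  # prints 84%
```

A one-page dashboard built on numbers like these is usually enough to show leadership that the program is real and improving.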

Implementing NIST’s Four Functions: A 90‑Day End‑to‑End Plan

This 90‑day plan is about moving from adoption by default to adoption by design. Think of it as four plain questions you already ask in other risk programs: who decides and is accountable (govern), where AI is used (map), what checks happen before and after use (measure), and how daily use and changes are controlled (manage). The steps below are written for legal teams and use ordinary business processes—policies, intake forms, contract terms, training, and simple reviews.

Days 1–30: Set the foundation

Publish an interim acceptable‑use standard with permitted uses, “do‑not‑enter” prompts, and clear escalation paths; convene a small cross‑functional steering group. Require approved enterprise/vendor accounts with logging and retention. Launch a one‑page intake form and central register embedded in procurement/product portals. Run first pre‑launch reviews using short checklists, save workpapers, and define AI‑incident triggers and contacts.

Days 31–60: Operationalize and contract

Convert guidance into role‑based playbooks and embed the review path in procurement/product workflows. Update standard contracts with AI clauses (input/output ownership, “no training on our data,” model‑change notice, evaluation/attestation rights, subprocessor transparency, tailored indemnities). Complete the first‑pass register with risk tiers and data classifications, flag high‑impact uses, and start light production monitoring on a set cadence with a remediation tracker. Run a short tabletop of the AI‑incident process.

Days 61–90: Formalize and assure

Publish the formal AI governance policy; set two or three key performance indicators and a reporting cadence; align legal holds and preservation procedures to named AI systems and vendor logs. Run the first assurance cycle sampling high‑impact outputs (accuracy, bias, policy/contract compliance), assign owners/dates, and finalize the AI‑incident runbook. Close register gaps, add system/subprocessor locations, and establish a standing monitoring plan for high‑impact uses. Require and act on vendor model‑update notices—pause affected workflows, re‑test samples, and refresh approvals/disclosures—and prepare a brief leadership update alongside an AI clause library.

Finally, ensure your processes and safeguards are consistent with the expectations courts and regulators have already established. For discovery, name AI systems in legal holds and work with IT to preserve prompts, outputs, and logs so you can explain important decisions later. For marketing and consumer claims, pre‑clear AI‑related statements and keep simple substantiation files tied to the tests you ran. By Day 90, you should have four concrete artifacts in place: a formal AI policy, a living AI use register, a standard AI clause library, and a standing monitoring plan for high‑impact uses; in other words, a robust adoption-by-design AI program.

Conclusion

A NIST‑anchored program delivers tangible business value by making AI use predictable and defensible: fewer contract cycles through standard AI clauses, faster vendor onboarding with clear diligence expectations, less rework from low‑quality outputs, and stronger positions in audits, investigations, and litigation. Grounding policies and controls in NIST’s functions, and aligning contracts accordingly, signals to counterparties and regulators that the organization meets recognized expectations. At the same time, keep the focus on the core risk: not model “hallucinations,” but unmanaged use—no standards, no documentation, and no clear owners. A concise inventory, a risk‑tiered approval path, targeted data and privilege safeguards, vendor controls, and periodic assurance—mapped to NIST and informed by guidance from the American Bar Association, Equal Employment Opportunity Commission, Federal Trade Commission, and the Sedona Conference—turn AI into a managed capability that withstands discovery and regulatory scrutiny. By framing the work in business terms—speed, predictability, and defensibility—you keep adoption moving while reducing surprise. Start small, iterate, and build the record as you go.


  1. Often resulting in the computing problem originally described by IBM programmer and instructor George Fuechsel as “GIGO,” or “Garbage In, Garbage Out.” ↩︎
  2. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 26, 2023), https://doi.org/10.6028/NIST.AI.100-1; see also NIST, AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework. ↩︎
  3. E.g., Mata v Avianca, Inc, 678 F Supp 3d 443 (SDNY 2023). ↩︎
  4. ABA Standing Comm. on Ethics & Prof’l Responsibility, Formal Op. 512, Generative Artificial Intelligence Tools (July 29, 2024). ↩︎
  5. Thomson Reuters Enter Ctr GmbH v Ross Intelligence Inc, 765 F Supp 3d 382 (D Del 2025). ↩︎
  6. hiQ Labs, Inc v LinkedIn Corp, 31 F4th 1180 (9th Cir 2022). ↩︎
  7. WeRide Corp v Huang, 379 F Supp 3d 834 (ND Cal 2019); Red Wolf Energy Trading, LLC v BIA Capital Mgmt., LLC, 626 F Supp 3d 478 (D Mass 2022). ↩︎
  8. In re Google Play Store Antitrust Litig, 664 F Supp 3d 981 (ND Cal 2023); The Sedona Conf., Commentary on Ephemeral Messaging, 22 Sedona Conf. J. 435 (2021), https://thesedonaconference.org/publication/Commentary_on_Ephemeral_Messaging. ↩︎
  9. U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023), https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial. ↩︎
  10. Mobley v Workday, Inc, 740 F Supp 3d 796 (ND Cal 2024). ↩︎
  11. Benesch, One Year In, FTC’s “Operation AI Comply” Continues (Oct. 21, 2025), https://www.beneschlaw.com/resources/one-year-in-ftcs-operation-ai-comply-continues-under-new-administration-signaling-enduring-enforcement-focus.html. ↩︎
  12. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 26, 2023), https://doi.org/10.6028/NIST.AI.100-1; see also NIST, AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework. ↩︎
  13. Fed. Trade Comm’n, Keep your AI claims in check (Feb. 27, 2023), https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check. ↩︎
  14. ABA Standing Comm. on Ethics & Prof’l Responsibility, Formal Op. 512, Generative Artificial Intelligence Tools (July 29, 2024), https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf. ↩︎
  15. State Bar of Michigan, https://www.michbar.org/AI. ↩︎
  16. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 26, 2023). ↩︎
  17. Id. ↩︎
  18. Id. ↩︎
  19. Id. ↩︎
  20. Id. ↩︎
  21. Id. ↩︎
  22. Id. ↩︎
  23. Id. ↩︎
  24. Id. ↩︎