About this dashboard

The AI Court Rules Tracker is a continuously updating index of U.S. judicial standing orders, general orders, administrative orders, local rules, policies, and guidance that address generative AI.

What does this dashboard track?

Courts across the country are issuing rules about how lawyers and judges may use tools like ChatGPT, Claude, and other generative AI. This dashboard collects those rules in one place, organized into three types:

  • Federal courts — orders and rules from district courts, circuit courts, and specialty courts directed at litigants and attorneys.
  • State courts — orders and rules from trial, appellate, and supreme courts directed at litigants and attorneys.
  • Guidance for courts & judiciary — AI policies directed at judges, court staff, and judicial operations.

Each tracked federal or state rule for litigants is classified along a spectrum — from outright prohibition through disclosure-and-verification regimes to affirmative permission. Notably, some rules treat tools like Lexis and Westlaw differently from other AI, which may disadvantage less well-resourced litigants who lack access to those platforms. Rules are visualized as an interactive U.S. map with search, category filters, and a cumulative-rules-over-time chart. A companion “In the news” tab surfaces reputable reporting on new orders, hallucinated-citation incidents, sanctions, and bar ethics guidance.

Why does this matter?

Judicial approaches to generative AI are still new and rapidly evolving. There is no settled precedent. Different courts are reaching very different conclusions — from outright bans to open permission — and those positions are shifting as judges gain experience with the technology.

By categorizing rules by type and tracking them over time, this dashboard provides a continuously updated empirical picture of how the judiciary is navigating AI. Which kinds of rules are proving popular? Which levels of strictness or openness are spreading across jurisdictions? What may one day become the dominant approach? These are the questions this project aims to help answer.

How does it work?

This project was built with Claude Code, Anthropic’s agentic coding tool. An autonomous agent runs as a GitHub Action on a weekly schedule. On each run, it:

  • Searches the web for new or updated judicial orders addressing AI
  • Reads publicly available court websites and PDFs to extract rule details
  • Classifies each rule along the dashboard’s category taxonomy
  • Validates updates against a strict schema before committing them

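The validation step above can be sketched in a few lines. This is an illustrative example only: the field names, accepted rule types, and checks below are assumptions for demonstration, not the project's actual schema.

```python
from datetime import date

# Hypothetical schema: these field names and type values are
# illustrative assumptions, not the dashboard's real schema.
REQUIRED_FIELDS = {"date", "court", "type", "category", "summary", "source"}
VALID_TYPES = {"federal", "state", "judiciary-guidance"}

def validate_rule(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    errors = []
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if entry.get("type") not in VALID_TYPES:
        errors.append(f"unknown type: {entry.get('type')!r}")
    try:
        # Dates must be ISO 8601 (YYYY-MM-DD) so the timeline chart can sort them.
        date.fromisoformat(entry.get("date", ""))
    except ValueError:
        errors.append(f"bad date: {entry.get('date')!r}")
    if not str(entry.get("source", "")).startswith("https://"):
        errors.append("source must be an https link to the primary document")
    return errors

entry = {
    "date": "2024-06-01",
    "court": "N.D. Cal.",
    "type": "federal",
    "category": "disclosure",
    "summary": "Requires disclosure of AI-drafted filings.",
    "source": "https://example.gov/order.pdf",
}
print(validate_rule(entry))  # → [] (entry passes)
```

Rejecting an entry that fails any check, rather than committing it, keeps a malformed or hallucinated extraction out of the published dataset until a human can review it.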
Every source link aims to take you directly to the primary source — the actual court order or PDF — through publicly accessible websites.

How is accuracy maintained?

Automated data gathering is inherently imperfect. Some court websites block automated access, and AI classification can make mistakes. To address this, there is a human in the loop who performs regular data verification, maintenance, and spot-checks to the best of her ability.

Entries flagged as “unverified” have not yet been reconciled against the primary source. If you spot an error, please reach out.

Disclaimer

This dashboard is informational only and is not legal advice. The information presented here may not be entirely accurate, complete, or current. Always consult the order itself and current court guidance before relying on any entry. This is an ongoing project that is continuously being updated and improved.

Questions, corrections, and suggestions are welcome — Elizabeth Guo, eguo@jd27.law.harvard.edu.

View source on GitHub →

All tracked rules

Date | Court | Judge | Type | Category | Summary | Source