- Confidentiality and Security
Courts must prohibit entering any confidential, personal, or nonpublic data into “public generative AI systems.” This includes names, Social Security numbers, medical records, sealed documents, and similar nonpublic information. Only secure, court-controlled AI tools may be used for sensitive work.
- Bias and Non-Discrimination
AI may not be used to “unlawfully discriminate against or disparately impact individuals or communities” based on age, gender, race, disability, or any protected status.
- Accuracy and Correction
Staff and judges using AI must “take reasonable steps to verify that the material is accurate” and correct any “erroneous or hallucinated output.” Any biased, offensive, or harmful content generated by AI must be removed before use.
- Transparency and Disclosure
If a court-produced work (written, audio, or visual) is generated entirely by AI, public disclosure is mandatory: a label or watermark must explain how AI was used and identify the system that produced the output. Disclosure is not required for every instance of AI assistance, only when AI creates the entire final public-facing product.