NY Judges Are Requiring AI Disclosures. Here's What They Actually Say.
A review of every NY Supreme Court AI disclosure rule reveals wildly inconsistent requirements, vague language, and rules that affect clients more than attorneys. An opinion.
NY state court judges are now requiring attorneys to certify whether they used AI in their filings. I read through the individual part rules for every NY Supreme Court justice I could find who has one. The requirements vary wildly. Some judges want paragraph-level disclosure of which portions were AI-drafted. Others take a lighter approach.
I understand why judges are doing this. Hallucinated case law wastes the court's time and undermines trust. An attorney submitting an unreviewed AI-generated brief is no different from a software engineer submitting an unreviewed AI-generated piece of code, or someone sending you AI-generated meeting notes with action items that make no sense. It is insulting to the recipient and creates more work for everyone.
But the rules as written raise hard questions about where the line is. And the people most affected by these rules are not the attorneys. They are the clients.
A note on my perspective: I'm not a lawyer. I'm a former software engineer who has spent the last six months using AI to do substantive legal research across multiple NY practice areas. I have read hundreds of appellate opinions, published original research on serious injury thresholds, no-fault litigation, and merchant cash advance (MCA) defense, and built a workflow where AI handles the reading and I make the judgment calls. Every claim I publish traces to a specific court opinion with a clickable link. The conversation about AI in legal practice is mostly happening between judges and lawyers, and it is missing two important perspectives. The first is people like me who have seen firsthand what AI can actually do when used carefully for legal work. The second, and more important, is the client's.
There Is No Statewide Rule
First, the baseline. There is currently no statewide rule in New York requiring attorneys to disclose AI use in court filings. No amendment to the CPLR, no rule from the Chief Administrative Judge. The OCA interim AI policy released in October 2025 applies only to judges and non-judicial court personnel, not to attorneys.
The NYC Bar's Formal Opinion 2024-5 concluded there is no blanket disclosure obligation beyond existing duties under Rules 1.1, 3.3, and 8.4 of the Rules of Professional Conduct.
There is a pending Senate bill (S2698) that would amend the CPLR to require disclosure and certification. It has not been enacted.
The only binding AI disclosure requirements come from individual judges' part rules. They apply only in cases before that specific judge. Here is what they actually say.
The Most Prescriptive Rules
Several NY state judges now require detailed, paragraph-level disclosure.
Justice Nancy M. Bannon (Supreme Court, New York County) requires that every submission include a certification stating either that no generative AI was used, or that:
> a generative artificial intelligence program was used but all generated text, including citations, quotations and legal analysis was reviewed for accuracy and approved by an attorney (or the self-represented party). Any generative artificial intelligence program must be identified and the documents which include matter generated by the program must be specified along with which portions of the documents were drafted by the program.
Justice Bannon goes further than any other judge on remote proceedings:
> The use of any artificial intelligence program during a remote oral argument, conference, or any other appearance before the court is strictly prohibited.
Banning AI during a live argument makes perfect sense to me. A judge should not have to argue with an AI in real time.
Justice Aaron D. Maslow (Supreme Court, Kings County) uses nearly identical language to Justice Bannon, requiring that if AI was used, "the program must be identified and the documents which include matter generated by the program must be specified along with which parts of the documents were drafted by the program."
Justice Michael J. Norris (Supreme Court, Niagara County) requires disclosure of "the specific AI tool used," identification of "the portion of the filing drafted by AI," and a certification that the work product was "diligently reviewed by a human being for accuracy and applicability." He also prohibits AI recording or transcription of remote conferences.
Justice Peter A. Weinmann (Supreme Court, Erie County) follows the same pattern: identify the program, identify the AI-drafted portions, certify human review.
The Middle Ground
Justice Grace M. Hanlon (Supreme Court, Chautauqua County) takes a different approach, focusing less on which portions were AI-drafted and more on what the attorney did to verify accuracy. She requires disclosure "if generative AI is used to compose or draft any paper presented for filing," and that "citations of authority have been verified by a human being by using print volumes or traditional legal databases." She is the only judge who specifies how citations must be verified.
The Lightest Touch
The proposed Commercial Division Rule 6(e), which has not been adopted, takes the most permissive approach of all. No separate disclosure requirement. No certification. It simply states:
> any person who files any such material with this Court is certifying the accuracy and reliability of such material and any statements made therein.
Filing the document is the certification. AI is treated like any other tool.
The Vagueness Problem
These rules all use some version of the phrase "used to compose or draft." That language creates real ambiguity for practitioners.
Consider these scenarios:
1. Attorney uses AI to summarize 40 cases during research, then writes the brief themselves, citing 5 of those cases. Was AI "used to compose or draft" the filing? The attorney wrote every word. But the research that informed the brief was AI-assisted. Under Justice Bannon's rule, does this require disclosure? Under Justice Hanlon's, it depends on whether summarizing counts as "composing or drafting."
2. Attorney writes the entire legal argument, then uses AI to fill in facts from uploaded exhibits. Which "portions" were drafted by AI? The legal argument is the attorney's. The factual recitations came from AI reading real documents. How do you draw that line in a certification?
3. Attorney runs a finished brief through AI as a proofreading checklist. Is that "use" of AI in drafting? The brief was already written. The AI just flagged potential issues.
4. Attorney uses AI-powered grammar or spell check built into Microsoft Word. Word has had AI features for years. Does Copilot count? Does the basic grammar checker count? At what point does "software with AI features" become "a generative artificial intelligence program"?
Justice Hanlon's rule says citations must be verified "by using print volumes or traditional legal databases." Does CourtListener count as a traditional legal database? It has been around since 2010 and hosts millions of opinions. It is free, which arguably makes it more accessible than Westlaw or Lexis. But it is not what most people picture when they hear "traditional legal database."
I do think identifying the specific program used is reasonable and helpful. It builds shared understanding across the bar about what tools are out there. But the "which portions were AI-drafted" requirement is going to be genuinely difficult to comply with as AI becomes embedded in more of the tools attorneys already use.
The Voice Nobody Is Hearing
These rules are written from the court's perspective: how do we protect the integrity of filings? That is a legitimate concern. But there is another perspective that is almost entirely absent from this conversation: the client's.
Consider a small business owner who took a merchant cash advance. The business hit a rough patch, the daily ACH debits stopped clearing, and now an MCA funder has filed a summary judgment motion. The business owner is already struggling to make payroll. They need a lawyer, but they are running out of money.
The funder's summary judgment packet is 50 pages. Buried in those pages are evidentiary defects that could defeat the entire motion: a proof of funding that is just an internal email anyone could have created on a computer, a business records affidavit with boilerplate language that is not specific to any particular document, a payment history from a third-party processor with no foundation for that entity's record-keeping practices. In 2024, Kings County judges denied MCA summary judgment motions over and over on exactly these grounds, sometimes even when the defendant filed no opposition at all. I know because I read every one of those decisions and cataloged the specific deficiencies.
An attorney who knows how to use AI can upload that packet and have every CPLR 4518 defect identified in an afternoon: the cross-document inconsistencies, the missing foundation, the gap between what the complaint alleges and what the exhibits actually show. An attorney who does not use AI might spend 8 billable hours doing the same work, hours the client cannot afford. The math is simple: the client who gets the AI-savvy attorney gets a real defense. The client who does not might default because the economics do not work.
The worst outcome for a client is not that their lawyer uses AI and makes a mistake. Mistakes can be caught in review. The worst outcome is that their lawyer refuses to use AI at all, and the client gets no defense because there are not enough hours in the day to make it economical. At some point, I expect it will be considered malpractice to not use AI at all, just as it would be malpractice today to refuse to use email or electronic filing.
Rules that create a chilling effect on AI adoption do not protect clients. They protect a status quo where the limiting factor on quality legal representation is how many hours an attorney has in a day.
I completely agree that attorneys should be responsible for anything they submit, regardless of what tool they used. If a paralegal or associate drafts a brief and the attorney puts their name on it, they are responsible. AI should be no different. The certification that matters is not "did you use AI" but "is everything in this filing accurate and supported." That should be true whether you used ChatGPT, a paralegal, a Word template from 2019, or your own memory.
Banning AI Outright Would Not Work Anyway
Some of the commentary around these rules gestures toward the idea that AI use in legal practice is inherently suspect. It is not, and an outright ban would be both bad for clients and unenforceable.
AI is already everywhere. It is in Word. It is in email. It is in legal research platforms. The line between "AI tool" and "software feature" is disappearing. "Did you use AI?" is becoming the same kind of question as "did you use the internet?" The answer is almost always yes, and the question is not very useful.
The proposed Commercial Division Rule 6(e) has the right idea. Stop policing the tool and start policing the output. You are responsible for what you file. That standard is clear, enforceable, and technology-neutral. It does not require attorneys to track which sentences came from which tool. It does not become obsolete when the next generation of software ships. It just says: if your name is on it, you own it.
What Responsible AI Usage Looks Like
Hallucinated case law is preventable. It comes from asking AI to do research and drafting at the same time, letting the model decide which cases to cite and which arguments to make. Separate the two and the failure mode largely disappears.
I have done this myself across multiple practice areas. Here is how it works. I download court opinions from CourtListener and feed them to AI in small batches, two to four opinions at a time, to keep the context clean and the error rate low. Every task I give the AI is bounded and verifiable: "Is this opinion relevant to the issues in this brief?" "Pull the key quote where the court states its holding." "Does this paragraph in my article accurately reflect this opinion?" These are mechanical tasks that language models handle well. The judgment calls stay with me: which practice area to research, what search queries to write, whether a case is actually relevant.
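For the software-minded, here is roughly what the batching step looks like. This is a minimal sketch, not my exact pipeline: it assumes CourtListener's public search API and the OpenAI Python client, and the endpoint parameters, model name, field names, and prompts are illustrative, so check the current API docs before relying on any of them.

```python
# Minimal sketch of the batching workflow, under the assumptions stated above.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_opinions(query: str, count: int = 4) -> list[dict]:
    """Pull a small batch of opinion search results from CourtListener."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": query, "type": "o"},  # type "o" = opinions
        timeout=30,
    )
    resp.raise_for_status()
    # Field names below ("results", "caseName", "snippet") are from memory
    # of the public API and should be verified against the live response.
    return resp.json()["results"][:count]

def screen_batch(opinions: list[dict], issue: str) -> str:
    """One bounded, verifiable task per call: relevance plus holding quote."""
    docs = "\n\n".join(
        f"OPINION {i + 1}: {op.get('caseName', '')}\n{op.get('snippet', '')}"
        for i, op in enumerate(opinions)
    )
    prompt = (
        "For each opinion below, answer two things only:\n"
        f"1. Is it relevant to this issue: {issue}?\n"
        "2. If yes, quote the sentence where the court states its holding.\n\n"
        f"{docs}"
    )
    result = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content

# Example usage:
# batch = fetch_opinions("CPLR 4518 business records merchant cash advance")
# print(screen_batch(batch, "foundation for third-party payment records"))
```

Keeping each call to two to four opinions is the whole trick: the model never has enough in context to confuse one case with another, and every answer points back to a document you already have in hand.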
When I fact-check, I start a fresh conversation and give the AI a single narrow task: here is one paragraph and here is the full opinion it references; verify that the quote is accurate and the holding is correctly characterized. Reducing the context to just the claim and the source makes the verification reliable. Every claim in every article I have published traces to a specific opinion with a clickable link.
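The fact-checking pass is even simpler to sketch, under the same assumptions as above. The key detail is that each call starts with an empty message history, so the model sees only the one claim and the one source.

```python
# Sketch of the fresh-conversation fact check. Function and variable names
# are mine, and the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def verify_claim(paragraph: str, opinion_text: str) -> str:
    """Fresh conversation, one claim, one source: nothing else in context."""
    prompt = (
        "Below is one paragraph from a draft article and the full text of "
        "the court opinion it cites. Answer two questions only:\n"
        "1. Is every quotation in the paragraph verbatim from the opinion?\n"
        "2. Does the paragraph accurately characterize the holding?\n"
        "Flag any mismatch, however small.\n\n"
        f"PARAGRAPH:\n{paragraph}\n\nOPINION:\n{opinion_text}"
    )
    result = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],  # no prior history
    )
    return result.choices[0].message.content
```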
I did all of this without a law degree and without a Westlaw subscription. The research I produced was detailed enough to identify seven distinct CPLR 4518 attack patterns across a year of Kings County MCA decisions. Now imagine an actual attorney using this same approach. They have something I do not: the training to evaluate legal arguments, the judgment to know what matters in a given case, the license to practice, and the professional obligation to get it right. AI does not replace any of that. It multiplies it.
The principle is simple: do the research yourself, verify it, and then let AI help with the assembly. The attorney makes the judgment calls. The AI handles the repetitive work. That is what these rules should be encouraging.
Where This Is Headed
The courts are a public resource. AI, used well, makes that resource work better for everyone.
But there is a feedback loop that nobody is talking about. If AI makes it faster for attorneys to file motions, courts will see more filings. That is already happening. If judges do not adopt AI tools themselves, they will face an order of magnitude more work without the capacity to handle it. The same arguments for attorney AI usage apply to judicial AI usage. Judges reading briefs, checking citations, and drafting decisions are doing exactly the kind of careful text analysis that AI handles well when given bounded tasks and verified inputs.
The rules will keep evolving. Some of the vagueness I have described will get resolved through practice, through ethics opinions, and eventually through statewide rules or legislation. In the meantime, attorneys practicing before judges with AI disclosure requirements should read their specific judge's part rules carefully. The requirements are not uniform, and a certification that satisfies one judge may not satisfy another.
The spirit of every one of these rules is the same: do not submit work you have not verified. That was true before AI. It will be true after. The tools change. The obligation does not.
Sources
- Greenberg Traurig: Navigating AI Disclosure Rules in New York Courts (Nov. 2025)
- OCA Proposed Commercial Division Rule 6(e) (PDF)
- NYC Bar Formal Opinion 2024-5
- NYC Bar Comment Letter on OCA GenAI Proposal
- NY Senate Bill S2698
- Syracuse Law Review: A Code of Artificial Conduct
- CMM LLP: NY Courts Issue Interim Policy on Judges' Use of AI
Individual Judge Rules
| Judge | Court | Link |
|---|---|---|
| Justice Nancy M. Bannon | Supreme Court, New York County | Part 61 Rules (PDF) |
| Justice Aaron D. Maslow | Supreme Court, Kings County | Part Rules |
| Justice Michael J. Norris | Supreme Court, Niagara County | IAS Rules (PDF) |
| Justice Grace M. Hanlon | Supreme Court, Chautauqua County | IAS Rules (PDF) |
| Justice Peter A. Weinmann | Supreme Court, Erie County | IAS Rules (PDF) |