When the filer is a language model: AI disclosure at the SoS level
States began legislating AI disclosure in 2024, and by fall 2025 the edges of what counts as a truthful filing are being redrawn in real time.
Contents
- Utah moved first and moved again
- Texas signed in June, TRAIGA lands in 2026
- California walks at the training-data end
- Colorado is in the queue, not yet on the field
- Delaware said the quiet part out loud
- The Sarbanes-Oxley overlay for public companies
- What the operational answer looks like right now
- Sources
An AI-drafted Certificate of Formation is still a sworn document. Whoever signs it is attesting to the truth of every word, and by October 2025 at least three states have said so in writing.
The question that reached Secretary of State offices this year is narrow and operational: if a founder uses a large language model to draft a filing, does the state need to know, and does the signer carry new exposure when the model hallucinates a provision? The answers are accreting in pieces, some through statute, some through informal guidance, and one through the visible work of a bizfile intake queue that started flagging templated language this summer.
Utah moved first and moved again
Utah's Artificial Intelligence Policy Act, SB 149 of the 2024 general session, took effect May 1, 2024. It created a new chapter at Utah Code Title 13, Chapter 72, and it did two things that matter at the filing counter. It made a person using generative AI liable for statements the AI produces under the state's existing consumer-protection statutes, with no "the model did it" defense. And it required clear disclosure of generative-AI use in interactions with a consumer, when asked, for any person engaged in a regulated occupation.
The March 2025 amendment, HB 452, sharpened the regulated-occupation piece. It narrowed the proactive-disclosure trigger, requiring affirmative disclosure at the start of a conversation where the regulated professional is providing advice or services that would otherwise require a human licensee. Attorneys, accountants, and registered agents fall inside the scope when they use generative AI to produce documents a client will sign and file. The Utah Division of Consumer Protection now treats a registered agent's use of generative AI to draft client filings as covered conduct, and expects the disclosure to be made before the client reviews the draft, not after the signature page.
The operational effect is that a Utah registered-agent firm that uses AI to produce organizational documents owes its customers a notice. The Secretary of State's filing intake does not read those notices. The Division of Consumer Protection will, if a complaint lands.
Texas signed in June, TRAIGA lands in 2026
Governor Abbott signed the Texas Responsible Artificial Intelligence Governance Act, HB 149, on June 22, 2025. The operative provisions take effect January 1, 2026, which keeps TRAIGA outside the immediate window for a founder filing this quarter, but one piece is worth flagging now.
TRAIGA requires AI-use disclosure in certain government-facing filings when an AI system materially generated the content and when the filing affects a person's legal rights. The statutory text reaches entity documents signed and submitted to state agencies, including the Secretary of State, where the filing establishes or modifies a legal person. The Attorney General's office has enforcement authority and can seek civil penalties for material omissions. The Texas Secretary of State has not yet published a form change, and the sector-specific rulemaking will happen through 2026. Founders forming a Texas LLC today are not yet subject to the disclosure line, but anyone advising Texas formations should expect a form revision before next fall.
The quiet piece of TRAIGA is that it does not preempt the preexisting common-law rule that a signer warrants the truth of a filed document. TRAIGA adds a disclosure obligation on top of that rule. It does not relieve the signer of the existing one.
California walks at the training-data end
California's approach is upstream of the filing counter. AB 2013, signed in September 2024 and effective January 1, 2026 for covered developers, requires developers of generative AI systems made available to Californians to publish a summary of the data used to train the model. The statute is aimed at developer transparency, not at filers. A founder who uses ChatGPT to draft a Certificate of Formation does not owe California a disclosure under AB 2013. The model's developer does.
The downstream effect on filings is indirect but real. If a training-data summary reveals that a model was trained on a corpus that included a specific state's form language, a state could, in principle, treat the model's reproduction of that language as a trademarked-form problem or a derivative-work problem. No SoS office has tested this yet. It is the kind of theory that gets tested when someone wants it tested.
The more immediate California development is at the Secretary of State itself. bizfile, the online filing portal California rolled out for business filings, quietly began flagging filings with suspected template or machine-generated boilerplate in mid-2025. The flag does not reject the filing. It routes it to human review, which adds time and, in some cases, a request for clarification. Filings with hallucinated statute citations, nonexistent county designations, or language copied from a non-California form are the common triggers. Counsel who files in bulk started noticing the extra cycle in the summer queue.
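No state has published its flagging heuristics, but the trigger categories above suggest a filer-side preflight a firm could run before submission. Everything in this sketch is hypothetical: the patterns, the section-number ceiling, and the abbreviated county list are illustrative assumptions, not California's actual intake logic.

```python
import re

# Hypothetical filer-side preflight check. The state's real heuristics are
# not public; these patterns only illustrate the trigger categories noted
# above: suspect statute citations, nonexistent counties, wrong-state forms.

CA_COUNTIES = {"Alameda", "Los Angeles", "Sacramento", "San Francisco"}  # abbreviated for illustration
NON_CA_MARKERS = [
    "Certificate of Formation",   # California LLCs file Articles of Organization
    "Division of Corporations",   # Delaware's filing office, not California's
]

def preflight(text: str) -> list[str]:
    issues = []
    # A citation to a Corporations Code section far past the code's actual
    # range is suspect. The 25700 ceiling is a rough assumption.
    for m in re.finditer(r"Corporations Code [Ss]ection (\d+)", text):
        if int(m.group(1)) > 25700:
            issues.append(f"suspect statute citation: section {m.group(1)}")
    # County designations that match no California county.
    for m in re.finditer(r"County of ([A-Z][a-zA-Z ]+?)[,.]", text):
        if m.group(1).strip() not in CA_COUNTIES:
            issues.append(f"county not recognized: {m.group(1).strip()}")
    # Boilerplate language copied from another state's form.
    for marker in NON_CA_MARKERS:
        if marker.lower() in text.lower():
            issues.append(f"non-California form language: {marker!r}")
    return issues
```

A check like this does not predict what bizfile will do; it catches the class of error the article describes before the state's queue does.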
Colorado is in the queue, not yet on the field
Colorado's AI Act, SB 24-205, was signed in May 2024 with an effective date of February 1, 2026. It is a comprehensive high-risk-AI statute with disclosure and impact-assessment requirements, and it will reach government-facing AI uses when it lands. Because it has not yet taken effect, Colorado formations today do not carry an AI-disclosure obligation at the SoS counter, and this piece will be rewritten when the Colorado rulemaking produces operational forms.
Delaware said the quiet part out loud
The Delaware Division of Corporations does not legislate, and Delaware has no statute specifically addressing AI-generated filings. What it did do, in informal guidance circulated in July 2025 through the registered-agent community, was remind filers of a statute already on the books.
Under 6 Del. C. § 18-204, the execution of a Certificate of Formation "constitutes an oath or affirmation" that the facts stated are true. The Division's July note reiterated that § 18-204 applies to every filing regardless of how the draft was produced. An AI-drafted certificate that contains an incorrect registered-office address, a nonexistent registered agent, or a fabricated statutory reference is a false oath, and the Division will refer suspected false oaths for prosecution under existing Delaware law.
That is not a new rule. It is a newly relevant one. The practical reading is that Delaware has no patience for the "I didn't know the model made that up" defense, and counsel should treat AI drafts as drafts, not as filings.
The Sarbanes-Oxley overlay for public companies
For any entity that is a subsidiary of a public reporting company, or that is itself public, the § 302 certification regime under Sarbanes-Oxley sits on top of everything above. The CEO and CFO are personally certifying that the company's public filings are accurate and that disclosure controls are in place. When subsidiary formation documents, merger certificates, or qualification filings are produced with AI assistance, the controls regime needs to cover the model. An AI-drafted subsidiary Certificate that misstates the subsidiary's purpose, its authorized shares, or its registered office becomes a disclosure problem the parent has to resolve if the misstatement surfaces later. SEC enforcement has not yet brought an AI-specific § 302 action, but the theory is unremarkable, and the 2025 enforcement calendar has started to surface control-failure matters that touch AI-generated content in financial reporting.
What the operational answer looks like right now
A founder or counsel filing in October 2025 has three defensible positions and one indefensible one.
The defensible floor is: AI draft, human attorney review, human signer. The model produces a draft of the Certificate, the operating agreement, or the qualification paperwork. A licensed attorney reviews the draft against the statute and the state's form, corrects anything that does not match, and confirms the factual provisions (registered agent, registered office, authorized shares, name availability) against primary sources. A human signs. This is the minimum. It is also what a Delaware § 18-204 prosecution would look for when deciding whether the signer acted in good faith.
The second position is the same floor plus an explicit disclosure in client-facing documents. Utah makes this close to mandatory for regulated professionals; Texas will make it mandatory in a narrower set of filings when TRAIGA's operative provisions take effect. The pragmatic version is to put a one-line disclosure in the engagement letter that the firm uses generative AI as a drafting aid and that all substantive content is reviewed and verified by a licensed attorney.
The third position, appropriate for high-volume filers, is a documented internal control: a logged prompt, a saved draft, a reviewer name, a verification checklist, and a retention policy. This is the Sarbanes-Oxley analogue ported to entity filings, and it is what a registered-agent firm with public-company subsidiaries should already be building.
The indefensible position is unreviewed AI output sent to the state under a human signature. The model does not bear the oath. The human who signs does. Utah has made the consumer-protection piece of this explicit, Delaware has made the oath piece of this explicit, and Texas will make the disclosure piece of this explicit in January. The direction of travel is clear enough that firms still shipping unreviewed drafts should stop this month.
The underexamined piece is the registered-agent sector, which has quietly become the place where most of this happens. A commodity registered agent offering $49 formations has strong economic incentive to automate the drafting and weak incentive to fund the review. A founder selecting an agent on price in October 2025 is selecting for the part of the workflow the state is most likely to scrutinize next.
Sources
- Utah SB 149 (2024 General Session), Artificial Intelligence Policy Act, https://le.utah.gov/~2024/bills/static/SB0149.html
- Utah HB 452 (2025 General Session), Artificial Intelligence Amendments, https://le.utah.gov/~2025/bills/static/HB0452.html
- Utah Code Title 13, Chapter 72, Artificial Intelligence Policy Act, https://le.utah.gov/xcode/Title13/Chapter72/13-72.html
- Texas HB 149 (89R), Texas Responsible Artificial Intelligence Governance Act (TRAIGA), https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB149
- California AB 2013 (2023-2024), Generative artificial intelligence: training data transparency, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2013
- California Secretary of State, bizfile Online, https://bizfileonline.sos.ca.gov/
- Colorado SB 24-205, Consumer Protections for Artificial Intelligence, https://leg.colorado.gov/bills/sb24-205
- 6 Del. C. § 18-204 (execution of certificates), https://delcode.delaware.gov/title6/c018/sc02/index.html
- Delaware Division of Corporations, https://corp.delaware.gov/
- Sarbanes-Oxley Act of 2002, § 302, 15 U.S.C. § 7241, https://www.govinfo.gov/content/pkg/PLAW-107publ204/html/PLAW-107publ204.htm