HB1898 • 2026

Safety

AN ACT to amend Tennessee Code Annotated, Title 4; Title 10, Chapter 7; Title 47; Title 58 and Title 68, relative to artificial intelligence.

Children • Taxes • Technology
Active

The official status shows this bill as active and awaiting its next formal step.

Sponsor
Zachary, Yager
Last action
2026-04-09
Official status
H. Placed on Regular Calendar for 4/13/2026
Effective date
Not listed (the bill text specifies January 1, 2027)

Plain English Breakdown

Enforcement is vested exclusively in the attorney general and reporter, with civil penalties of up to $1 million to $3 million per violation for large frontier developers and up to $50,000 per violation for large chatbot providers (see Civil Penalties below).

Artificial Intelligence Public Safety and Child Protection Transparency Act

This bill requires large frontier model developers and large chatbot providers operating in Tennessee to publish safety plans on their websites detailing how they assess and manage risks that could cause catastrophic harm or endanger child safety.

What This Bill Does

  • Prohibits frontier model developers and large chatbot providers from making materially false or misleading statements about covered risks or their management of those risks.
  • Requires large frontier model developers (frontier developers with at least $500 million in annual revenue, including affiliates) to publish a detailed public safety plan on their websites, outlining how they assess and manage risks that could cause catastrophic harm.
  • Forbids large chatbot providers (services with at least 1 million monthly active users and at least $25 million in annual revenue, including affiliates) from making materially false or misleading statements about child safety risks.
  • Requires large chatbot providers to publish a detailed child safety plan on their websites, explaining how they assess and manage risks to children's mental health or physical well-being.

Who It Names or Affects

  • Large frontier model developers: frontier developers whose annual revenue, together with affiliates, is at least $500 million.
  • Large chatbot providers: providers of covered chatbots with at least 1 million monthly active users whose annual revenue, together with affiliates, is at least $25 million (see the eligibility sketch below).
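Based on the thresholds above, a minimal eligibility sketch in Python; the function and parameter names are illustrative assumptions rather than statutory terms, and a real determination would turn on the bill's full definitions (affiliate revenue, what counts as a covered chatbot, and so on):

```python
def is_large_frontier_developer(annual_revenue_usd: int) -> bool:
    """Frontier developer whose revenue, together with affiliates, is at least $500M."""
    return annual_revenue_usd >= 500_000_000

def is_large_chatbot_provider(annual_revenue_usd: int,
                              monthly_active_users: int,
                              likely_accessed_by_minors: bool) -> bool:
    """Provider of a covered chatbot (at least 1M monthly active users,
    foreseeably accessed by minors) whose revenue, with affiliates, is at least $25M."""
    covered_chatbot = monthly_active_users >= 1_000_000 and likely_accessed_by_minors
    return covered_chatbot and annual_revenue_usd >= 25_000_000

# Example: a chatbot with 2M monthly users, likely minor access, and $30M revenue
# would meet both numeric thresholds for a large chatbot provider.
print(is_large_chatbot_provider(30_000_000, 2_000_000, True))  # True
```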

Terms To Know

Catastrophic risk
A foreseeable and material risk that a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in damage to or loss of property, arising from a single incident.
Child safety risk
A material and foreseeable risk that a covered chatbot, when interacting with a minor, will engage in behavior that would be deemed to intentionally or recklessly cause death or bodily injury to the minor (including through self-harm) or severe emotional distress.

Limits and Unknowns

  • Civil penalties are capped per violation, but the bill does not define how separate violations are counted or what investigative process precedes an enforcement action.
  • Enforcement is vested exclusively in the office of the attorney general and reporter; the bill does not provide a private right of action.
  • The bill exempts accredited colleges and universities to the extent they develop or use frontier models exclusively for academic research.

Amendments

These notes are based on the official amendment files and metadata from the legislature.

Amendment 1-0 to HB1898

Plain English: The amendment changes definitions and requirements for artificial intelligence models, covered chatbots, and large chatbot providers in Tennessee's existing law.

  • Defines 'artificial intelligence model' as an engineered or machine-based system that, for explicit or implicit objectives, infers from its inputs how to generate outputs that can influence physical or virtual environments.
  • Specifies what qualifies as a 'covered chatbot', including criteria like having at least one million monthly active users and being likely to be accessed by minors, while excluding certain types of software.
  • Defines 'frontier developer' as someone who uses significant computing power for training AI models, but excludes accredited colleges or universities doing research.
  • Adds a definition for 'monthly active users', which counts unique individuals interacting with a service in the past 30 days (see the sketch after this list).
  • The amendment text does not provide full context on how these changes will be implemented or enforced, and it may require additional regulations to clarify its impact.
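
A small sketch of that 30-day counting rule, with an assumed event format (user ID plus interaction date); this illustrates the definition and is not an implementation drawn from the amendment:

```python
from datetime import date, timedelta

def monthly_active_users(events: list[tuple[str, date]], as_of: date) -> int:
    """Count distinct user IDs with at least one interaction in the
    30 days ending on `as_of`."""
    cutoff = as_of - timedelta(days=30)
    return len({user for user, day in events if cutoff < day <= as_of})

# Example: only interactions within the trailing 30-day window count.
events = [("u1", date(2026, 4, 1)), ("u2", date(2026, 3, 5)), ("u1", date(2026, 4, 9))]
print(monthly_active_users(events, as_of=date(2026, 4, 9)))  # 1 (u2 falls outside)
```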

Amendment 2-0 to HB1898

Plain English: The amendment changes who the Tennessee Department of Safety consults with when making rules about federal laws and guidance documents related to artificial intelligence.

  • Adds the Tennessee Bureau of Investigation and the Department of Finance and Administration as entities the department must consult with, in addition to the office of the attorney general and reporter.
  • The amendment text does not specify what changes will be made to the rules or how the additional consultations will affect existing processes.

Amendment 3-0 to HB1898

Plain English: The amendment adds a new chapter to Tennessee law that requires large chatbot providers to create and publish detailed child safety plans.

  • Adds definitions for terms like 'artificial intelligence model', 'child safety incident', and 'covered chatbot'.
  • Requires large chatbot providers to write, implement, comply with, and publicly display a child safety plan on their website.
  • Specifies that the child safety plan must address how potential risks are assessed, mitigated, and monitored.
  • The amendment text is incomplete and does not provide full details of all requirements for large chatbot providers.

Amendment 1-0 to SB2171

Plain English: The amendment changes definitions related to artificial intelligence and chatbots in Tennessee law.

  • Changes the definition of 'artificial intelligence model' or 'AI' to include systems that can make decisions influencing real environments based on human-defined objectives.
  • Introduces a new term, 'covered chatbot,' which refers to services allowing conversations with humanlike responses from foundation models, likely accessed by minors and having at least one million monthly active users.
  • The amendment text does not specify the full implications of these changes on existing laws or regulations.
  • It is unclear how this amendment will be enforced or what specific actions are required from businesses or individuals.

Amendment 2-0 to SB2171

Plain English: The amendment changes who the Tennessee Department of Safety consults with when making rules about federal laws and guidance documents related to artificial intelligence.

  • Adds the Tennessee Bureau of Investigation and the Department of Finance and Administration as entities the department must consult with, in addition to the office of the attorney general and reporter.
  • The amendment text does not specify what changes will be made to the rules or how the additional consultations will affect existing processes.

Amendment 3-0 to SB2171

Plain English: The amendment adds a new chapter to Tennessee law requiring large chatbot providers to create and publish detailed child safety plans.

  • Adds definitions for terms like 'artificial intelligence model', 'child safety incident', and 'covered chatbot'.
  • Requires large chatbot providers with over $25 million in annual revenue to develop, implement, and publicly disclose a child safety plan that addresses potential risks to minors.
  • Specifies that the child safety plan must include assessments of risks, mitigation strategies, use of third-party evaluations, adherence to industry standards, regular updates, incident response procedures, and internal governance practices.
  • The amendment text is incomplete, particularly in subsection (c), which means some details about the requirements for large chatbot providers are not fully specified.

Bill History

  1. 2026-04-10 Tennessee General Assembly

    Placed on Senate Regular Calendar for 4/14/2026

  2. 2026-04-09 Tennessee General Assembly

    H. Placed on Regular Calendar for 4/13/2026

  3. 2026-04-08 Tennessee General Assembly

    Placed on cal. Calendar & Rules Committee for 4/9/2026

  4. 2026-04-07 Tennessee General Assembly

    Recommended for passage with amendment/s, refer to Senate Calendar Committee Ayes 6, Nays 3 PNV 0

  5. 2026-04-07 Tennessee General Assembly

    Sponsor(s) Added.

  6. 2026-04-02 Tennessee General Assembly

    Action def. in Calendar & Rules Committee to 4/9/2026

  7. 2026-04-01 Tennessee General Assembly

    Placed on cal. Calendar & Rules Committee for 4/2/2026

  8. 2026-04-01 Tennessee General Assembly

    Sponsor(s) Added.

  9. 2026-04-01 Tennessee General Assembly

    Placed on Senate Commerce and Labor Committee calendar for 4/7/2026

  10. 2026-03-30 Tennessee General Assembly

    Rec. for pass; ref to Calendar & Rules Committee

  11. 2026-03-25 Tennessee General Assembly

    Placed on cal. Government Operations Committee for 3/30/2026

  12. 2026-03-25 Tennessee General Assembly

    Rec. for pass. if am., ref. to Government Operations Committee

  13. 2026-03-24 Tennessee General Assembly

    Sponsor(s) Added.

  14. 2026-03-24 Tennessee General Assembly

    Recommended for passage with amendment/s, refer to Senate Commerce and Labor Committee Ayes 9, Nays 0 PNV 0

  15. 2026-03-23 Tennessee General Assembly

    Placed on Senate Judiciary Committee calendar for 3/24/2026

  16. 2026-03-23 Tennessee General Assembly

    Action deferred in Senate Judiciary Committee to 3/24/2026

  17. 2026-03-18 Tennessee General Assembly

    Placed on cal. Commerce Committee for 3/25/2026

  18. 2026-03-18 Tennessee General Assembly

    Rec for pass if am by s/c ref. to Commerce Committee

  19. 2026-03-18 Tennessee General Assembly

    Placed on Senate Judiciary Committee calendar for 3/23/2026

  20. 2026-03-17 Tennessee General Assembly

    Reset on Final calendar of Senate Judiciary Committee

  21. 2026-03-16 Tennessee General Assembly

    Placed on Senate Judiciary Committee calendar for 3/17/2026

  22. 2026-03-16 Tennessee General Assembly

    Action deferred in Senate Judiciary Committee to 3/17/2026

  23. 2026-03-11 Tennessee General Assembly

    Placed on s/c cal Banking & Consumer Affairs Subcommittee for 3/18/2026

  24. 2026-03-11 Tennessee General Assembly

    Action Def. in s/c Banking and Consumer Affairs Subcommittee to 3/18/2026

  25. 2026-03-11 Tennessee General Assembly

    Placed on Senate Judiciary Committee calendar for 3/16/2026

  26. 2026-03-04 Tennessee General Assembly

    Placed on s/c cal Banking & Consumer Affairs Subcommittee for 3/11/2026

  27. 2026-02-05 Tennessee General Assembly

    Passed on Second Consideration, refer to Senate Judiciary Committee

  28. 2026-02-04 Tennessee General Assembly

    Assigned to s/c Banking & Consumer Affairs Subcommittee

  29. 2026-02-04 Tennessee General Assembly

    P2C, ref. to Commerce Committee - Government Operations for Review

  30. 2026-02-02 Tennessee General Assembly

    Intro., P1C.

  31. 2026-02-02 Tennessee General Assembly

    Introduced, Passed on First Consideration

  32. 2026-02-02 Tennessee General Assembly

    Filed for introduction

  33. 2026-01-22 Tennessee General Assembly

    Filed for introduction

Official Summary Text

This bill prohibits a frontier model developer or a large chatbot provider from making a materially false or misleading statement about a catastrophic risk or a child safety risk (together, "covered risk") arising from its activities, or about its management of covered risks. A "frontier developer" excludes an accredited college or university to the extent that the college or university is developing or using frontier models exclusively for academic research purposes. A "large chatbot provider" is a person who makes a covered chatbot available in this state and who, together with the person's affiliates, collectively had an annual revenue of at least $25 million; a "covered chatbot" is a service that allows an ordinary person to have conversations where humanlike responses are generated by a foundation model, is foreseeably likely to be accessed by minors, and has at least 1 million monthly active users. Further, a large frontier developer (a frontier developer who, together with its affiliates, collectively had annual revenue of at least $500 million) and a large chatbot provider are prohibited from making a materially false or misleading statement about their implementation of, or compliance with, their public safety plan or child safety plan.

Large Frontier Developers

This bill requires a large frontier developer to implement and clearly publish on its website a public safety plan that describes in detail how the large frontier developer defines and assesses the thresholds it uses to determine whether a frontier model has capabilities that could pose a catastrophic risk. As used in this bill, a "catastrophic risk" means a foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or more than $1 billion in damage to, or loss of, property, arising from a single incident involving (i) providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (ii) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or (iii) evading the control of its frontier developer or user. However, catastrophic risk does not include a foreseeable and material risk from (i) information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form; (ii) lawful activity of the federal government; or (iii) harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.

This bill requires the plan to also describe how the large frontier developer addresses all of the following:

  • Applies mitigations to address the potential for catastrophic risks.
  • Reviews assessments of catastrophic risk as part of the decision to deploy a frontier model or use it internally.
  • Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
  • Implements cybersecurity practices to secure unreleased frontier model weights (numerical parameters in a frontier model that are adjusted through training and that help determine how inputs are transformed into outputs) from unauthorized modification or transfer by internal or external parties.
  • Assesses and manages catastrophic risk resulting from the internal use of the frontier developer's frontier models.
  • Incorporates national standards, international standards, and industry-consensus best practices into the large frontier developer's public safety plan.
  • Revisits and updates the public safety plan.
  • Identifies and responds to "critical safety incidents," which means (i) unauthorized access to, or modification, inadvertent release, or exfiltration of, the model weights of a frontier model; (ii) the death of, or serious injury to, more than 50 people, or more than $1 billion in damage to, or loss of, property, resulting from the materialization of a catastrophic risk; (iii) loss of control of a frontier model that causes death or bodily injury, or that demonstrates materially increased catastrophic risk; or (iv) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit such behavior and in a manner that demonstrates materially increased catastrophic risk. Loss of value of equity does not count as damage to or loss of property.
  • Institutes internal governance practices to ensure implementation of the public safety plan.

This bill requires a large frontier developer to clearly publish any material modifications to its public safety plan, along with a justification for the modification, within 30 days of making the material change.

Before a large frontier developer deploys a new frontier model, this bill requires the large frontier developer to publish summaries of any assessments of catastrophic risks from the frontier model, the results of the assessments, the extent to which third-party evaluators were involved in the assessments, and other steps taken to fulfill the requirements of the public safety plan. This information may be published as part of a larger document, including a system card or model card.

Large Chatbot Providers

This bill requires a large chatbot provider to implement and clearly publish on its website a child safety plan that describes in detail how the large chatbot provider assesses the potential for child safety risks. As used in this bill, a "child safety risk" means a material and foreseeable risk that a frontier developer's foundation model (an artificial intelligence model that is trained on a broad data set, designed for generality of output, and adaptable to a wide range of distinctive tasks), when used as part of a covered chatbot operated by the frontier developer, will engage in behavior when interacting with a minor that, if it had been engaged in by a human, would be deemed to intentionally or recklessly cause death or bodily injury to the minor, including as a result of self-harm, or damage to the mental health of the minor that constitutes severe emotional distress.

This bill requires the plan to also describe how the large chatbot provider addresses all of the following:

  • Applies mitigations to address the potential for child safety risks.
  • Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks.
  • Incorporates national standards, international standards, and industry-consensus best practices into the large chatbot provider's child safety plan.
  • Revisits and updates the child safety plan.
  • Identifies and responds to "child safety incidents," which means a covered chatbot engaging in behavior when interacting with a minor that, if the behavior had been engaged in by a human, would be deemed to intentionally or recklessly cause death or bodily injury to the minor, or damage to the minor's mental health that constitutes severe emotional distress.
  • Institutes internal governance practices to ensure implementation of the child safety plan.

This bill requires a large chatbot provider to clearly publish any material modifications to its child safety plan, along with a justification for the modification, within 30 days of making the material change.

Before a large chatbot provider integrates a foundation model into a covered chatbot, this bill requires the large chatbot provider to publish summaries of any assessments of child safety risks, the results of the assessments, the extent to which third-party evaluators were involved in the assessments, and steps taken to fulfill the requirements of the large chatbot provider's child safety plan.

Confidentiality

This bill authorizes a large frontier developer or large chatbot provider to make redactions to published safety plan documents if the redactions are necessary to protect the large frontier developer's or large chatbot provider's trade secrets, cybersecurity, public safety, or the national security of the United States, or to comply with federal or state law. However, the developer or chatbot provider must describe the character and justification of the redactions and retain the unredacted information for at least five years.

Reporting of Incidents

This bill requires the attorney general to establish a means for reporting a child safety incident or a critical safety incident (together, "safety incident"). The form must allow the report to include, at a minimum, the date of the safety incident, the reasons the incident qualifies as a safety incident, and a short and plain statement describing the safety incident. A frontier developer is required to report a critical safety incident to the attorney general within 15 days of discovery. If a critical safety incident poses an imminent risk of death or serious physical injury, then this bill requires the frontier developer to disclose the incident within 24 hours to an appropriate authority. This bill requires a large chatbot provider to report a child safety incident to the attorney general within 15 days of discovery.

This bill requires the attorney general to establish a mechanism for a large frontier developer to confidentially submit summaries of any assessments of catastrophic risk resulting from internal use of frontier models. Large frontier developers must transmit such summaries to the attorney general beginning on January 1, 2027, and every three months thereafter.

This bill authorizes the attorney general to transmit reports of safety incidents, summaries of assessments of catastrophic risk resulting from internal use, and reports from employees to the general assembly, governor, federal government, or appropriate state agencies. In transmitting such reports, the attorney general may consider any risks related to trade secrets, public safety, cybersecurity, or national security.

This bill requires the department of safety and attorney general to designate one or more federal laws or guidance documents that impose standards or requirements for safety incident reporting equivalent to or stricter than the reporting requirements described above and that are intended to assess, detect, or mitigate catastrophic or child safety risk. If a frontier developer or large chatbot provider intends to comply with the reporting requirements of such a designated federal law or guidance document, then it must declare its intent to do so to the attorney general and department of safety. After declaring such intention, the frontier developer or large chatbot provider is in compliance with this bill to the extent that it meets the requirements of the designated federal law or guidance. However, the failure of a frontier developer or large chatbot provider to comply with the designated federal law or guidance constitutes a violation of this bill.
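
The reporting deadlines above run from discovery of the incident. A minimal sketch of that timing logic in Python, with illustrative names (none of these identifiers come from the bill):

```python
from datetime import datetime, timedelta

def report_deadline(discovered: datetime, imminent_risk: bool) -> datetime:
    """Latest time a safety incident report is due under the summary above.

    Critical safety incidents and child safety incidents must be reported to
    the attorney general within 15 days of discovery; a critical safety
    incident posing an imminent risk of death or serious physical injury must
    be disclosed to an appropriate authority within 24 hours.
    """
    if imminent_risk:
        return discovered + timedelta(hours=24)
    return discovered + timedelta(days=15)

# Example: an incident discovered at noon on 2027-02-01 with no imminent risk
# must be reported by noon on 2027-02-16.
print(report_deadline(datetime(2027, 2, 1, 12, 0), imminent_risk=False))
```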

Civil Penalties

This bill provides that a large frontier developer that violates this bill is subject to a civil penalty of not more than $1 million per violation for a first violation and not more than $3 million per violation for each subsequent violation. A large chatbot provider that violates this bill is subject to a civil penalty of not more than $50,000 per violation. The attorney general has the exclusive right to enforce this bill.
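
A minimal sketch of the penalty caps as arithmetic, assuming each violation is assessed separately; how violations are counted is not defined in the bill, and the names here are illustrative:

```python
def max_civil_penalty(violations: int, is_large_frontier_developer: bool) -> int:
    """Upper bound, in dollars, on civil penalties for a given violation count."""
    if violations <= 0:
        return 0
    if is_large_frontier_developer:
        # $1M cap for the first violation, $3M cap for each subsequent one.
        return 1_000_000 + 3_000_000 * (violations - 1)
    # Large chatbot providers: $50,000 cap per violation.
    return 50_000 * violations

# Example: three violations by a large frontier developer are capped at
# $1,000,000 + 2 * $3,000,000 = $7,000,000.
print(max_civil_penalty(3, is_large_frontier_developer=True))  # 7000000
```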

Rulemaking

This bill authorizes the department of safety, in consultation with the office of the attorney general, to promulgate rules to effectuate this bill.

Applicability

This bill applies to conduct occurring on or after January 1, 2027.

Current Bill Text

The full stored bill text is reproduced below.
SENATE BILL 2171
By Yager

HOUSE BILL 1898
By Zachary
HB1898

AN ACT to amend Tennessee Code Annotated, Title 4;
Title 10, Chapter 7; Title 47; Title 58 and Title 68,
relative to artificial intelligence.

BE IT ENACTED BY THE GENERAL ASSEMBLY OF THE STATE OF TENNESSEE:
SECTION 1. Tennessee Code Annotated, Title 68, is amended by adding the following
as a new chapter:
68-107-101.
This chapter is known and may be cited as the "Artificial Intelligence Public
Safety and Child Protection Transparency Act".
68-107-102.
As used in this chapter:
(1) "Affiliate" means a person controlling, controlled by, or under common
control with a specified person, directly or indirectly, through one (1) or more
intermediaries;
(2) "Artificial intelligence model" means an engineered or machine-based
system that varies in its level of autonomy and that can, for explicit or implicit
objectives, infer from the input it receives how to generate outputs that can
influence physical or virtual environments;
(3) "Catastrophic risk":
(A) Means a foreseeable and material risk that a frontier
developer's development, storage, use, or deployment of a frontier model
will materially contribute to the death of, or serious injury to, more than
fifty (50) people or more than one billion dollars ($1,000,000,000) in
damage to, or loss of, property arising from a single incident involving a
frontier model doing any of the following:
(i) Providing expert-level assistance in the creation or
release of a chemical, biological, radiological, or nuclear weapon;
(ii) Engaging in conduct with no meaningful human
oversight, intervention, or supervision that is either a cyberattack
or, if the conduct had been committed by a human, would
constitute the crime of murder, assault, extortion, or theft,
including theft by false pretense; or
(iii) Evading the control of its frontier developer or user;
and
(B) Does not include a foreseeable and material risk from:
(i) Information that a frontier model outputs if the
information is otherwise publicly accessible in a substantially
similar form from a source other than a foundation model;
(ii) Lawful activity of the federal government; or
(iii) Harm caused by a frontier model in combination with
other software if the frontier model did not materially contribute to
the harm;
(4) "Child safety incident" means a covered chatbot engaging in behavior
when interacting with a minor that, if the behavior had been engaged in by a
human, would be deemed to intentionally or recklessly cause death or bodily
injury to such minor or damage to the mental health of such minor that
constitutes severe emotional distress;
(5) "Child safety plan" means a documented technical and organizational
protocol to manage, assess, and mitigate child safety risks;
(6) "Child safety risk" means a material and foreseeable risk that a
frontier developer's foundation model, when used as part of a covered chatbot
operated by the frontier developer, will engage in behavior when interacting with
a minor that, if it had been engaged in by a human, would be deemed to
intentionally or recklessly cause:
(A) Death or bodily injury to the minor, including as a result of
self-harm; or
(B) Cause damage to the mental health of such minor that
constitutes severe emotional distress;
(7) "Covered chatbot" means a service that:
(A) Allows an ordinary person to have conversations where
humanlike responses are generated by a foundation model;
(B) Is foreseeably likely to be accessed by minors; and
(C) Has at least one million (1,000,000) monthly active users;
(8) "Covered risk" means a catastrophic risk or a child safety risk;
(9) "Critical safety incident" means:
(A) Unauthorized access to, or modification, inadvertent release,
or exfiltration of, the model weights of a frontier model;
(B) The death of, or serious injury to, more than fifty (50) people
or more than one billion dollars ($1,000,000,000) in damage to, or loss of,
property resulting from the materialization of a catastrophic risk;
(C) Loss of control of a frontier model that causes death or bodily
injury, or that demonstrates materially increased catastrophic risk; or
(D) A frontier model that uses deceptive techniques against the
frontier developer to subvert the controls or monitoring of its frontier
developer outside of the context of an evaluation designed to elicit such
behavior and in a manner that demonstrates materially increased
catastrophic risk;
(10) "Deploy":
(A) Means to make a frontier model available to a third party for
use, modification, copying, or combination with other software; and
(B) Does not include making a frontier model available to a third
party for the primary purpose of developing or evaluating the frontier
model;
(11) "Foundation model" means an artificial intelligence model that is:
(A) Trained on a broad data set;
(B) Designed for generality of output; and
(C) Adaptable to a wide range of distinctive tasks;
(12) "Frontier developer":
(A) Means a person who has trained, or initiated the training of, a
frontier model, with respect to which the person has used, or intends to
use, at least as much computing power to train the frontier model as
would meet the technical specifications found in subdivision (13); and
(B) Does not include an accredited college or university to the
extent that the college or university is developing or using frontier models
exclusively for academic research purposes;
(13) "Frontier model" means a foundation model that was trained using a
quantity of computing power greater than 10^26 integer or floating-point
operations; for purposes of determining the computing power, the quantity of
computing power includes computing for the original training run and for any
subsequent fine-tuning, reinforcement learning, or other material modifications
the developer applies to a preceding foundation model;
(14) "Large chatbot provider" means a person who makes a covered
chatbot available in this state and who, together with the person's affiliates,
collectively had an annual revenue of twenty-five million dollars ($25,000,000) or
more;
(15) "Large frontier developer" means a frontier developer who together
with its affiliates collectively had annual revenue of five hundred million dollars
($500,000,000) or more;
(16) "Minor" means an individual who has not yet attained eighteen (18)
years of age;
(17) "Model weight" means a numerical parameter in a frontier model that
is adjusted through training and that helps determine how inputs are transformed
into outputs;
(18) "Person" means an individual, executor, administrator, or other
personal representative, or a corporation, partnership, association, or any other
legal or commercial entity, whether or not a citizen or domiciliary of this state and
whether or not organized under the laws of this state;
(19) "Property" means tangible or intangible property;
(20) "Public safety plan" means a documented technical and
organizational protocol to manage, assess, and mitigate catastrophic risks; and
(21) "Safety incident" means a child safety incident or a critical safety
incident.
68-107-103.
(a)
(1) A large frontier developer shall write, implement, comply with, and
clearly and conspicuously publish on its internet website a public safety plan that
describes in detail how the large frontier developer:
(A) Defines and assesses thresholds used by the large frontier
developer to identify and assess whether a frontier model has capabilities
that could pose a catastrophic risk, including multiple-tiered thresholds;
(B) Applies mitigations to address the potential for catastrophic
risks based on the results of the assessments undertaken pursuant to
subdivision (a)(1)(A);
(C) Reviews assessments of catastrophic risk and adequacy of
mitigations of catastrophic risk as part of the decision to deploy a frontier
model or use it extensively internally;
(D) Uses third parties to assess the potential for catastrophic risks
and the effectiveness of mitigations of catastrophic risks;
(E) Implements cybersecurity practices to secure unreleased
frontier model weights from unauthorized modification or transfer by
internal or external parties;
(F) Assesses and manages catastrophic risk resulting from the
internal use of the frontier developer's frontier models, including risks
resulting from a frontier model circumventing oversight mechanisms;
(G) Incorporates national standards, international standards, and
industry-consensus best practices into the large frontier developer's
public safety plan;
(H) Revisits and updates the public safety plan, including any
criteria that trigger updates and how the large frontier developer
determines when the frontier developer's frontier models are substantially
modified enough to require disclosures pursuant to subsection (d);
(I) Identifies and responds to critical safety incidents; and
(J) Institutes internal governance practices to ensure
implementation of the public safety plan.
(2) A large chatbot provider shall write, implement, comply with, and
clearly and conspicuously publish on its internet website a child safety plan that
describes in detail how the large chatbot provider:
(A) Assesses potential for child safety risks;
(B) Applies mitigations to address the potential for child safety
risks based on the results of the assessments undertaken pursuant to
subdivision (a)(2)(A);
(C) Uses third parties to assess the potential for child safety risks
and the effectiveness of mitigations of child safety risks;
(D) Incorporates national standards, international standards, and
industry-consensus best practices into the large chatbot provider's child
safety plan;
(E) Revisits and updates the child safety plan, including any
criteria that trigger updates and how the large chatbot provider
determines when its foundation models are substantially modified enough
to require disclosures pursuant to subsection (c);
(F) Identifies and responds to child safety incidents; and
(G) Institutes internal governance practices to ensure
implementation of the child safety plan.
(b) If a large frontier developer or large chatbot provider makes a material
modification to the large frontier developer's or large chatbot provider's public safety plan
or child safety plan, then the large frontier developer or large chatbot provider shall
clearly and conspicuously publish the modified public safety plan or child safety plan and
a justification for that modification within thirty (30) days of making the material change.
(c) Before, or concurrently with, integrating a new foundation model, or a version
of an existing foundation model that has been substantially modified, into a covered
chatbot operated by a large chatbot provider, the large chatbot provider shall
conspicuously publish on the large chatbot provider's internet website summaries of:
(1) Assessments of child safety risks conducted pursuant to the large
chatbot provider's child safety plan;
(2) The results of the assessments;
(3) The extent to which third-party evaluators were involved in the
assessments; and
(4) Other steps taken to fulfill the requirements of the child safety plan.
(d)
(1) Before, or concurrently with, deploying a new frontier model or a
version of an existing frontier model that a large frontier developer has
substantially modified, the large frontier developer shall conspicuously publish on
the large frontier developer's internet website summaries of:
(A) Assessments of catastrophic risks from the frontier model
conducted pursuant to the large frontier developer's public safety plan;
(B) The results of the assessments;
(C) The extent to which third-party evaluators were involved in the
assessments; and
(D) Other steps taken to fulfill the requirements of the public
safety plan with respect to catastrophic risks from the frontier model.
(2) A large frontier developer that publishes the information described in
subdivision (d)(1) as part of a larger document, including a system card or model
card, is in compliance with subdivision (d)(1).
(e)
(1) A frontier developer or large chatbot provider shall not make a
materially false or misleading statement or omission about:
(A) Covered risks from the frontier developer's or large chatbot
provider's activities; or
(B) The frontier developer's or large chatbot provider's
management of covered risks.
(2) A large frontier developer or large chatbot provider shall not make a
materially false or misleading statement or omission about its implementation of,
or compliance with, its public safety plan or child safety plan.
(3) Subdivisions (e)(1) and (e)(2) do not apply to a statement that was:
(A) Made in good faith; and
(B) Reasonable under the circumstances.
(f)
(1) When a large frontier developer or large chatbot provider publishes a
document to comply with this section, the large frontier developer or large
chatbot provider may make redactions to the document that are necessary to
protect the large frontier developer's trade secrets or cybersecurity, or public
safety, the national security of the United States, or to comply with federal or
state law.
(2) If a large frontier developer or large chatbot provider redacts
information in a document pursuant to subdivision (f)(1), the large frontier
developer or large chatbot provider shall:
(A) Describe the character and justification of the redaction in any
published version of the document, to the extent permitted by the
concerns that justify redaction; and
(B) Retain the unredacted information for at least five (5) years.
68-107-104.
(a) The attorney general and reporter shall establish a form and means by which
a frontier developer, large chatbot provider, or member of the public may report a safety
incident. The form and means must allow the report to include, at a minimum:
(1) The date of the safety incident;
(2) The reasons the incident qualifies as a safety incident; and
(3) A short and plain statement describing the safety incident.
(b) A frontier developer shall report any critical safety incident pertaining to one
(1) or more of the frontier developer's frontier models to the attorney general and
reporter within fifteen (15) days of discovering the critical safety incident.
(c) If a frontier developer discovers that a critical safety incident poses an
imminent risk of death or serious physical injury, then the frontier developer shall
disclose the incident within twenty-four (24) hours to an authority, including any law
enforcement agency or public safety agency with jurisdiction, that is appropriate based
on the nature of that incident and as otherwise required by law.
(d) A large chatbot provider shall report any child safety incident pertaining to
one (1) or more of the large chatbot provider's covered chatbots to the attorney general
and reporter within fifteen (15) days of discovering the child safety incident.
(e) The attorney general and reporter shall establish a mechanism by which a
large frontier developer may confidentially submit summaries of any assessments of the
potential for catastrophic risk resulting from internal use of the frontier developer's
frontier models.
(f) On January 1, 2027, and every three (3) months thereafter, or pursuant to
another reasonable schedule requested by the large frontier developer, communicated
in writing to the attorney general and reporter and agreed upon by the attorney general
and reporter, a large frontier developer shall transmit to the attorney general and
reporter a summary of any assessment of catastrophic risk resulting from internal use of
the frontier developer's frontier models.
(g)
(1) The attorney general and reporter may transmit reports of safety
incidents, summaries of assessments of the potential for catastrophic risk from
internal use, and reports from employees made pursuant to this section to the
general assembly, governor, federal government, or appropriate state agencies.
(2) The attorney general and reporter may consider any risks related to
trade secrets, public safety, cybersecurity of a frontier developer or large chatbot
provider, or national security when transmitting such reports.
(h) For purposes of subsection (i), the department of safety, in consultation with
the office of the attorney general and reporter, shall promulgate rules designating one (1)
or more federal laws or guidance documents that:
(1) Impose or state standards or requirements for safety incident
reporting that are substantially equivalent to or stricter than those required by this
section for critical safety incidents, child safety incidents, or both; provided, that
the law or guidance document does not need to require safety incident reporting
to this state; and
(2) Are intended to assess, detect, or mitigate catastrophic risk, child
safety risk, or both.
(i)
(1) A frontier developer or large chatbot provider that intends to comply
with all or part of this section by complying with the requirements of, or meeting
the standards stated by, a federal law or guidance document designated
pursuant to subsection (h) shall declare the frontier developer's or large chatbot
provider's intent to do so to the attorney general and reporter and the department
of safety.
(2) After a frontier developer or large chatbot provider has declared its
intent pursuant to subdivision (i)(1), then to the extent that the frontier developer
or large chatbot provider meets the standards of, or complies with the
requirements imposed or stated by, the designated federal law or guidance
document, the frontier developer or large chatbot provider is in compliance with
any obligation under this section that pertains to the following until the frontier
developer or large chatbot provider revokes the declaration provided pursuant to
subdivision (h)(2) to the attorney general and reporter and the department of
safety, or the department of safety removes the applicable federal law or
guidance document from the rules pursuant to subsection (j):
(A) Critical safety incident obligations, if the designated law,
regulation, or guidance document is intended to assess, detect, or
mitigate catastrophic risk; and
(B) Child safety incident obligations, if the designated law,
regulation, or guidance document is intended to assess, detect, or
mitigate child safety risk.
(3) If a frontier developer or large chatbot provider declares an intention
to comply with this section through compliance with a federal law or guidance
document designated pursuant to subsection (i), then the failure by the frontier
developer or large chatbot provider to meet the standards of, or comply with the
federal law or guidance document constitutes a violation of this chapter if the
frontier developer or large chatbot provider is not otherwise in compliance with
this section.
(j) The department of safety shall promulgate rules as necessary to ensure that
a federal law or guidance document designated pursuant to subsection (h) continues to
meet the requirements of subsection (h), and shall update the rules as soon as
practicable to remove a previously designated federal law or guidance document if the
federal law or guidance document no longer meets the requirements of subsection (h).
68-107-105.
(a) A large frontier developer that violates this chapter is subject to a civil penalty
of not more than one million dollars ($1,000,000) per violation for a first violation and in
an amount not to exceed three million dollars ($3,000,000) per each subsequent
violation.
(b) A large chatbot provider that violates this chapter is subject to a civil penalty
of not more than fifty thousand dollars ($50,000) per violation.
(c) Enforcement of this chapter is vested exclusively in the office of the attorney
general and reporter.
68-107-106.
The loss of value of equity does not count as damage to or loss of property for
the purposes of this chapter.
68-107-107.
The duties and obligations imposed by this chapter are cumulative with any other
duties or obligations imposed under another law and do not:
(1) Relieve any party from any duties or obligations imposed under
another law; or
(2) Limit any rights or remedies under existing law.
SECTION 2. Tennessee Code Annotated, Section 10-7-504, is amended by adding the
following as a new subsection:
(gg) A notification submitted under § 68-107-104(b) or (d) or a summary of an
assessment submitted under § 68-107-104(f) is not open for public inspection.
SECTION 3. If any provision of this act or the application of any provision of this act to
any person or circumstance is held invalid, the invalidity does not affect other provisions or
applications of the act that can be given effect without the invalid provision or application, and to
that end, the provisions of this act are severable.
SECTION 4. This act takes effect January 1, 2027, the public welfare requiring it, and
applies to conduct occurring on or after that date.