SENATE BILL 2171
By Yager
HOUSE BILL 1898
By Zachary
HB1898
010822
AN ACT to amend Tennessee Code Annotated, Title 4;
Title 10, Chapter 7; Title 47; Title 58 and Title 68,
relative to artificial intelligence.
BE IT ENACTED BY THE GENERAL ASSEMBLY OF THE STATE OF TENNESSEE:
SECTION 1. Tennessee Code Annotated, Title 68, is amended by adding the following
as a new chapter:
68-107-101.
This chapter is known and may be cited as the "Artificial Intelligence Public
Safety and Child Protection Transparency Act".
68-107-102.
As used in this chapter:
(1) "Affiliate" means a person controlling, controlled by, or under common
control with a specified person, directly or indirectly, through one (1) or more
intermediaries;
(2) "Artificial intelligence model" means an engineered or machine-based
system that varies in its level of autonomy and that can, for explicit or implicit
objectives, infer from the input it receives how to generate outputs that can
influence physical or virtual environments;
(3) "Catastrophic risk":
(A) Means a foreseeable and material risk that a frontier
developer's development, storage, use, or deployment of a frontier model
will materially contribute to the death of, or serious injury to, more than
fifty (50) people or more than one billion dollars ($1,000,000,000) in
damage to, or loss of, property arising from a single incident involving a
frontier model doing any of the following:
(i) Providing expert-level assistance in the creation or
release of a chemical, biological, radiological, or nuclear weapon;
(ii) Engaging in conduct with no meaningful human
oversight, intervention, or supervision that is either a cyberattack
or, if the conduct had been committed by a human, would
constitute the crime of murder, assault, extortion, or theft,
including theft by false pretense; or
(iii) Evading the control of its frontier developer or user;
and
(B) Does not include a foreseeable and material risk from:
(i) Information that a frontier model outputs if the
information is otherwise publicly accessible in a substantially
similar form from a source other than a foundation model;
(ii) Lawful activity of the federal government; or
(iii) Harm caused by a frontier model in combination with
other software if the frontier model did not materially contribute to
the harm;
(4) "Child safety incident" means a covered chatbot engaging in behavior
when interacting with a minor that, if the behavior had been engaged in by a
human, would be deemed to intentionally or recklessly cause death or bodily
injury to such minor or damage to the mental health of such minor that
constitutes severe emotional distress;
(5) "Child safety plan" means a documented technical and organizational
protocol to manage, assess, and mitigate child safety risks;
(6) "Child safety risk" means a material and foreseeable risk that a
frontier developer's foundation model, when used as part of a covered chatbot
operated by the frontier developer, will engage in behavior when interacting with
a minor that, if it had been engaged in by a human, would be deemed to
intentionally or recklessly cause:
(A) Death or bodily injury to the minor, including as a result of
self-harm; or
(B) Damage to the mental health of the minor that constitutes
severe emotional distress;
(7) "Covered chatbot" means a service that:
(A) Allows an ordinary person to have conversations in which
humanlike responses are generated by a foundation model;
(B) Is foreseeably likely to be accessed by minors; and
(C) Has at least one million (1,000,000) monthly active users;
(8) "Covered risk" means a catastrophic risk or a child safety risk;
(9) "Critical safety incident" means:
(A) Unauthorized access to, modification of, inadvertent release
of, or exfiltration of the model weights of a frontier model;
(B) The death of, or serious injury to, more than fifty (50) people
or more than one billion dollars ($1,000,000,000) in damage to, or loss of,
property resulting from the materialization of a catastrophic risk;
(C) Loss of control of a frontier model that causes death or bodily
injury, or that demonstrates materially increased catastrophic risk; or
(D) A frontier model that uses deceptive techniques against the
frontier developer to subvert the controls or monitoring of its frontier
developer outside of the context of an evaluation designed to elicit such
behavior and in a manner that demonstrates materially increased
catastrophic risk;
(10) "Deploy":
(A) Means to make a frontier model available to a third party for
use, modification, copying, or combination with other software; and
(B) Does not include making a frontier model available to a third
party for the primary purpose of developing or evaluating the frontier
model;
(11) "Foundation model" means an artificial intelligence model that is:
(A) Trained on a broad data set;
(B) Designed for generality of output; and
(C) Adaptable to a wide range of distinctive tasks;
(12) "Frontier developer":
(A) Means a person who has trained, or initiated the training of, a
frontier model, with respect to which the person has used, or intends to
use, at least as much computing power to train the frontier model as
would meet the technical specifications found in subdivision (13); and
(B) Does not include an accredited college or university to the
extent that the college or university is developing or using frontier models
exclusively for academic research purposes;
(13) "Frontier model" means a foundation model that was trained using a
quantity of computing power greater than 10^26 integer or floating-point
operations; for purposes of determining the computing power, the quantity of
computing power includes computing for the original training run and for any
subsequent fine-tuning, reinforcement learning, or other material modifications
the developer applies to a preceding foundation model;
(14) "Large chatbot provider" means a person who makes a covered
chatbot available in this state and who, together with the person's affiliates,
collectively had an annual revenue of twenty-five million dollars ($25,000,000) or
more;
(15) "Large frontier developer" means a frontier developer who together
with its affiliates collectively had annual revenue of five hundred million dollars
($500,000,000) or more;
(16) "Minor" means an individual who has not yet attained eighteen (18)
years of age;
(17) "Model weight" means a numerical parameter in a frontier model that
is adjusted through training and that helps determine how inputs are transformed
into outputs;
(18) "Person" means an individual, executor, administrator, or other
personal representative, or a corporation, partnership, association, or any other
legal or commercial entity, whether or not a citizen or domiciliary of this state and
whether or not organized under the laws of this state;
(19) "Property" means tangible or intangible property;
(20) "Public safety plan" means a documented technical and
organizational protocol to manage, assess, and mitigate catastrophic risks; and
(21) "Safety incident" means a child safety incident or a critical safety
incident.
68-107-103.
(a)
(1) A large frontier developer shall write, implement, comply with, and
clearly and conspicuously publish on its internet website a public safety plan that
describes in detail how the large frontier developer:
(A) Defines and assesses thresholds used by the large frontier
developer to identify and assess whether a frontier model has capabilities
that could pose a catastrophic risk, including multiple-tiered thresholds;
(B) Applies mitigations to address the potential for catastrophic
risks based on the results of the assessments undertaken pursuant to
subdivision (a)(1)(A);
(C) Reviews assessments of catastrophic risk and adequacy of
mitigations of catastrophic risk as part of the decision to deploy a frontier
model or use it extensively internally;
(D) Uses third parties to assess the potential for catastrophic risks
and the effectiveness of mitigations of catastrophic risks;
(E) Implements cybersecurity practices to secure unreleased
frontier model weights from unauthorized modification or transfer by
internal or external parties;
(F) Assesses and manages catastrophic risk resulting from the
internal use of the frontier developer's frontier models, including risks
resulting from a frontier model circumventing oversight mechanisms;
(G) Incorporates national standards, international standards, and
industry-consensus best practices into the large frontier developer's
public safety plan;
(H) Revisits and updates the public safety plan, including any
criteria that trigger updates and how the large frontier developer
determines when the frontier developer's frontier models are substantially
modified enough to require disclosures pursuant to subsection (d);
(I) Identifies and responds to critical safety incidents; and
(J) Institutes internal governance practices to ensure
implementation of the public safety plan.
(2) A large chatbot provider shall write, implement, comply with, and
clearly and conspicuously publish on its internet website a child safety plan that
describes in detail how the large chatbot provider:
(A) Assesses potential for child safety risks;
(B) Applies mitigations to address the potential for child safety
risks based on the results of the assessments undertaken pursuant to
subdivision (a)(2)(A);
(C) Uses third parties to assess the potential for child safety risks
and the effectiveness of mitigations of child safety risks;
(D) Incorporates national standards, international standards, and
industry-consensus best practices into the large chatbot provider's child
safety plan;
(E) Revisits and updates the child safety plan, including any
criteria that trigger updates and how the large chatbot provider
determines when its foundation models are substantially modified enough
to require disclosures pursuant to subsection (c);
(F) Identifies and responds to child safety incidents; and
(G) Institutes internal governance practices to ensure
implementation of the child safety plan.
(b) If a large frontier developer or large chatbot provider makes a material
modification to the large frontier developer's or large chatbot provider's public safety plan
or child safety plan, then the large frontier developer or large chatbot provider shall
clearly and conspicuously publish the modified public safety plan or child safety plan and
a justification for that modification within thirty (30) days of making the material change.
(c) Before, or concurrently with, integrating a new foundation model, or a version
of an existing foundation model that has been substantially modified, into a covered
chatbot operated by a large chatbot provider, the large chatbot provider shall
conspicuously publish on the large chatbot provider's internet website summaries of:
(1) Assessments of child safety risks conducted pursuant to the large
chatbot provider's child safety plan;
(2) The results of the assessments;
(3) The extent to which third-party evaluators were involved in the
assessments; and
(4) Other steps taken to fulfill the requirements of the child safety plan.
(d)
(1) Before, or concurrently with, deploying a new frontier model or a
version of an existing frontier model that a large frontier developer has
substantially modified, the large frontier developer shall conspicuously publish on
the large frontier developer's internet website summaries of:
(A) Assessments of catastrophic risks from the frontier model
conducted pursuant to the large frontier developer's public safety plan;
(B) The results of the assessments;
(C) The extent to which third-party evaluators were involved in the
assessments; and
(D) Other steps taken to fulfill the requirements of the public
safety plan with respect to catastrophic risks from the frontier model.
(2) A large frontier developer that publishes the information described in
subdivision (d)(1) as part of a larger document, including a system card or model
card, is in compliance with subdivision (d)(1).
(e)
(1) A frontier developer or large chatbot provider shall not make a
materially false or misleading statement or omission about:
(A) Covered risks from the frontier developer's or large chatbot
provider's activities; or
(B) The frontier developer's or large chatbot provider's
management of covered risks.
(2) A large frontier developer or large chatbot provider shall not make a
materially false or misleading statement or omission about its implementation of,
or compliance with, its public safety plan or child safety plan.
(3) Subdivisions (e)(1) and (e)(2) do not apply to a statement that was:
(A) Made in good faith; and
(B) Reasonable under the circumstances.
(f)
(1) When a large frontier developer or large chatbot provider publishes a
document to comply with this section, the large frontier developer or large
chatbot provider may make redactions to the document that are necessary to
protect the large frontier developer's or large chatbot provider's trade secrets
or cybersecurity, public safety, or the national security of the United States, or
to comply with federal or state law.
(2) If a large frontier developer or large chatbot provider redacts
information in a document pursuant to subdivision (f)(1), the large frontier
developer or large chatbot provider shall:
(A) Describe the character and justification of the redaction in any
published version of the document, to the extent permitted by the
concerns that justify redaction; and
(B) Retain the unredacted information for at least five (5) years.
68-107-104.
(a) The attorney general and reporter shall establish a form and means by which
a frontier developer, large chatbot provider, or member of the public may report a safety
incident. The form and means must allow the report to include, at a minimum:
(1) The date of the safety incident;
(2) The reasons the incident qualifies as a safety incident; and
(3) A short and plain statement describing the safety incident.
(b) A frontier developer shall report any critical safety incident pertaining to one
(1) or more of the frontier developer's frontier models to the attorney general and
reporter within fifteen (15) days of discovering the critical safety incident.
(c) If a frontier developer discovers that a critical safety incident poses an
imminent risk of death or serious physical injury, then the frontier developer shall
disclose the incident within twenty-four (24) hours to an authority, including any law
enforcement agency or public safety agency with jurisdiction, that is appropriate based
on the nature of that incident and as otherwise required by law.
(d) A large chatbot provider shall report any child safety incident pertaining to
one (1) or more of the large chatbot provider's covered chatbots to the attorney general
and reporter within fifteen (15) days of discovering the child safety incident.
(e) The attorney general and reporter shall establish a mechanism by which a
large frontier developer may confidentially submit summaries of any assessments of the
potential for catastrophic risk resulting from internal use of the frontier developer's
frontier models.
(f) On January 1, 2027, and every three (3) months thereafter, or pursuant to
another reasonable schedule requested by the large frontier developer, communicated
in writing to the attorney general and reporter and agreed upon by the attorney general
and reporter, a large frontier developer shall transmit to the attorney general and
reporter a summary of any assessment of catastrophic risk resulting from internal use of
the frontier developer's frontier models.
(g)
(1) The attorney general and reporter may transmit reports of safety
incidents, summaries of assessments of the potential for catastrophic risk from
internal use, and reports from employees made pursuant to this section to the
general assembly, governor, federal government, or appropriate state agencies.
(2) The attorney general and reporter may consider any risks related to
trade secrets, public safety, cybersecurity of a frontier developer or large chatbot
provider, or national security when transmitting such reports.
(h) For purposes of subsection (i), the department of safety, in consultation with
the office of the attorney general and reporter, shall promulgate rules designating one (1)
or more federal laws or guidance documents that:
(1) Impose or state standards or requirements for safety incident
reporting that are substantially equivalent to or stricter than those required by this
section for critical safety incidents, child safety incidents, or both; provided, that
the law or guidance document does not need to require safety incident reporting
to this state; and
(2) Are intended to assess, detect, or mitigate catastrophic risk, child
safety risk, or both.
(i)
(1) A frontier developer or large chatbot provider that intends to comply
with all or part of this section by complying with the requirements of, or meeting
the standards stated by, a federal law or guidance document designated
pursuant to subsection (h) shall declare the frontier developer's or large chatbot
provider's intent to do so to the attorney general and reporter and the department
of safety.
(2) After a frontier developer or large chatbot provider has declared its
intent pursuant to subdivision (i)(1), then to the extent that the frontier developer
or large chatbot provider meets the standards of, or complies with the
requirements imposed or stated by, the designated federal law or guidance
document, the frontier developer or large chatbot provider is in compliance with
any obligation under this section that pertains to the following until the frontier
developer or large chatbot provider revokes the declaration provided pursuant to
subdivision (i)(1) to the attorney general and reporter and the department of
safety, or the department of safety removes the applicable federal law or
guidance document from the rules pursuant to subsection (j):
(A) Critical safety incident obligations, if the designated law,
regulation, or guidance document is intended to assess, detect, or
mitigate catastrophic risk; and
(B) Child safety incident obligations, if the designated law,
regulation, or guidance document is intended to assess, detect, or
mitigate child safety risk.
(3) If a frontier developer or large chatbot provider declares an intention
to comply with this section through compliance with a federal law or guidance
document designated pursuant to subsection (h), then the failure by the frontier
developer or large chatbot provider to meet the standards of, or comply with, the
federal law or guidance document constitutes a violation of this chapter if the
frontier developer or large chatbot provider is not otherwise in compliance with
this section.
(j) The department of safety shall promulgate rules as necessary to ensure that
a federal law or guidance document designated pursuant to subsection (h) continues to
meet the requirements of subsection (h), and shall update the rules as soon as
practicable to remove a previously designated federal law or guidance document if the
federal law or guidance document no longer meets the requirements of subsection (h).
68-107-105.
(a) A large frontier developer that violates this chapter is subject to a civil penalty
of not more than one million dollars ($1,000,000) per violation for a first violation and in
an amount not to exceed three million dollars ($3,000,000) per each subsequent
violation.
(b) A large chatbot provider that violates this chapter is subject to a civil penalty
of not more than fifty thousand dollars ($50,000) per violation.
(c) Enforcement of this chapter is vested exclusively in the office of the attorney
general and reporter.
68-107-106.
The loss of value of equity does not constitute damage to, or loss of, property for
the purposes of this chapter.
68-107-107.
The duties and obligations imposed by this chapter are cumulative with any other
duties or obligations imposed under another law and do not:
(1) Relieve any party from any duties or obligations imposed under
another law; or
(2) Limit any rights or remedies under existing law.
SECTION 2. Tennessee Code Annotated, Section 10-7-504, is amended by adding the
following as a new subsection:
(gg) A notification submitted under § 68-107-104(b) or (d) or a summary of an
assessment submitted under § 68-107-104(f) is not open for public inspection.
SECTION 3. If any provision of this act or the application of any provision of this act to
any person or circumstance is held invalid, the invalidity does not affect other provisions or
applications of the act that can be given effect without the invalid provision or application, and to
that end, the provisions of this act are severable.
SECTION 4. This act takes effect January 1, 2027, the public welfare requiring it, and
applies to conduct occurring on or after that date.