HB4098 - 572R - I Ver
REFERENCE TITLE:
artificial intelligence business; attorney general
State of Arizona
House of Representatives
Fifty-seventh Legislature
Second Regular Session
2026
HB 4098
Introduced by
Representative
Travers
AN ACT
amending title 44, chapter 9, Arizona Revised Statutes, by adding article 27; relating to commerce.
(TEXT OF BILL BEGINS ON NEXT PAGE)
Be it enacted by the Legislature of the State of Arizona:
Section 1. Title 44, chapter 9, Arizona Revised
Statutes, is amended by adding article 27, to read:
ARTICLE 27. ARTIFICIAL
INTELLIGENCE BUSINESS
START_STATUTE
44-1383.
Artificial intelligence business; risk assessment; report;
posting; quarterly audit; violation; civil penalty; definitions
A. An artificial intelligence
business shall conduct a comprehensive risk assessment for each of the
business's high risk AI systems before using or selling a high risk AI system.
B. An artificial intelligence
business shall evaluate a high risk AI system for all of the following:
1. The potential for discrimination
or bias.
2. Any safety risks to children and
vulnerable populations.
3. Any privacy and data protection
risks.
4. Any catastrophic risks.
C. An artificial intelligence
business shall submit a transparency report to the attorney general before
using or selling a high risk AI system. The transparency report must include
all of the following:
1. The purposes and operation of the
artificial intelligence system.
2. The results from any internal risk
assessment evaluations.
3. The measures that were implemented
to mitigate bias and harmful outcomes.
D. The transparency reports shall be
publicly posted on the artificial intelligence business's website.
E. An artificial intelligence
business shall implement bias mitigation measures that include any technical or
procedural methods that were implemented to reduce discriminatory outcomes.
F. An artificial intelligence
business shall conduct a quarterly audit to identify and correct any instances
of discrimination, harmful outputs or unsafe behavior.
G. If a high risk AI system may be
accessed by a child, the artificial intelligence business shall include
parental monitoring tools, content filtering and mechanisms to report harmful
content.
H. The attorney general shall issue a
certification before an artificial intelligence business enters the stream of
commerce.
I. The attorney general shall enforce
this section and review the transparency reports for compliance with this
section. The attorney general may impose a civil penalty of not more than
$50,000 per violation for a violation of this section. Each day a violation
occurs constitutes a separate violation.
J. An artificial intelligence
business that implements recognized standards for and complies with the
children's online privacy protection act of 1998 (P.L. 105-277) advisory
services may receive a reduced civil penalty for inadvertent violations.
K. For the purposes of this section:
1. "Artificial intelligence
business" means an entity that creates, uses, sells or provides access to
artificial intelligence systems that interact with the public and that uses a
frontier model of artificial intelligence that is owned or operated by a large
developer.
2. "Frontier model" means an artificial intelligence model that is trained by either of the following:
(a) Using greater than 10^26 computational operations at compute costs of more than $100,000,000.
(b) Applying knowledge distillation to a frontier model.
3. "High risk AI system" means an artificial intelligence system that:
(a) May impact children, health care, employment, public safety or education.
(b) Is likely to cause discrimination, bias or significant privacy violations.
4. "Large developer":
(a) Means a person or entity that has trained one or more frontier models and has spent more than $5,000,000 in developing the frontier model and more than $100,000,000 in training the frontier model.
(b) Does not include an accredited college or university that engages in academic research involving frontier models.
(c) Creates a foreseeable and material risk that a large developer's development, storage or deployment of a foundation model will result in the death of or serious injury to more than one hundred people by an incident of either of the following:
(i) Creating and releasing a chemical, biological, radiological or nuclear weapon.
(ii) Through the use of a foundation model, engaging in conduct that is performed with limited human intervention and, if the conduct was committed by an individual, would constitute a crime that requires intent, recklessness or gross negligence or the solicitation or aiding and abetting of a crime.
END_STATUTE