Back to Arizona

HB4098 • 2026

artificial intelligence business; attorney general

HB4098 - artificial intelligence business; attorney general

Children, Education, Privacy, Taxes, Technology
Passed Legislature

This bill passed both chambers and reached final enrollment; any subsequent executive action (signature or veto) is not shown here.

Sponsor
Stacey Travers
Last action
2026-02-11
Official status
House second read
Effective date
Not listed

Plain English Breakdown

Checked against official source text during the last sync.

AI Business Rules for Arizona

This bill sets rules for businesses using advanced AI systems in Arizona by requiring them to assess risks, report to the Attorney General, and follow safety measures.

What This Bill Does

  • Requires AI businesses to conduct a risk assessment before selling or using high-risk AI systems.
  • Businesses must submit transparency reports to the Attorney General about their AI systems' purposes, risks, and how they reduce harmful outcomes.
  • AI businesses need to post these reports on their websites for public view.
  • Businesses have to check their AI systems every quarter to fix any issues with discrimination or safety.
  • If an AI system can be used by children, the business must include tools for parents to monitor and report problems.
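As an illustration only (not part of the bill), the obligations listed above can be read as a set of duties that attach once a system is classified as high risk, with an extra duty when children can access it. The names below (`AISystem`, `required_duties`) are hypothetical, chosen for this sketch:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    high_risk: bool
    accessible_to_children: bool

def required_duties(system: AISystem) -> list[str]:
    """Duties the summary above would attach to a given system (illustrative)."""
    duties: list[str] = []
    if system.high_risk:
        duties += [
            "risk assessment before use or sale",
            "transparency report to the Attorney General",
            "public posting of the report on the business's website",
            "quarterly audit for discrimination and safety issues",
        ]
        if system.accessible_to_children:
            duties.append("parental monitoring, content filtering and reporting tools")
    return duties
```

Under this reading, a high-risk system that children can access carries all five duties, while a system that is not high risk carries none.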

Who It Names or Affects

  • AI businesses that create, use, sell, or provide access to advanced AI systems in Arizona.
  • The Attorney General of Arizona who will oversee compliance and issue certifications.

Terms To Know

High risk AI system
An AI system that could affect children, health care, employment, public safety, or education; or is likely to cause discrimination, bias, or significant privacy violations.
Frontier model
An advanced AI model trained using an extremely large amount of computation at very high cost (the bill sets specific numeric thresholds for operations and spending), or produced by applying knowledge distillation to such a model.
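As a rough illustration (not part of the bill), the frontier-model threshold in the statutory text below can be encoded as a simple check. This assumes the "10.26" in the stored bill text is a garbled superscript for 10^26 operations; the function and parameter names are this sketch's own:

```python
def is_frontier_model(operations: float,
                      compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Illustrative reading of the bill's frontier-model definition:
    either the training run exceeds both numeric thresholds, or the
    model is a knowledge distillation of a frontier model."""
    return distilled_from_frontier or (
        operations > 1e26 and compute_cost_usd > 100_000_000
    )
```

For example, a training run of 2 x 10^26 operations costing $200 million would qualify under this reading, while a smaller run would not unless it was distilled from a qualifying model.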

Limits and Unknowns

  • Beyond the civil penalty in the statutory text (up to $50,000 per violation, with each day counted separately), the bill does not detail how the Attorney General's certification and review process will operate in practice.
  • It is unclear how many AI businesses in Arizona this law will actually affect.

Bill History

  1. 2026-02-11 House

    House second read

  2. 2026-02-10 House

    House Rules: None

  3. 2026-02-10 House

    House Artificial Intelligence & Innovation: None

  4. 2026-02-10 House

    House first read

Official Summary Text

HB4098 - artificial intelligence business; attorney general

Current Bill Text

HB4098 - 572R - I Ver

REFERENCE TITLE:
artificial intelligence business; attorney general

State of Arizona

House of Representatives

Fifty-seventh Legislature

Second Regular Session

2026

HB 4098

Introduced by

Representative Travers

AN ACT

amending title 44, chapter 9, Arizona Revised Statutes, by adding article 27; relating to commerce.

(TEXT OF BILL BEGINS ON NEXT PAGE)

Be it enacted by the Legislature of the State of Arizona:

Section 1. Title 44, chapter 9, Arizona Revised
Statutes, is amended by adding article 27, to read:

ARTICLE 27. ARTIFICIAL
INTELLIGENCE BUSINESS

START_STATUTE
44-1383. Artificial intelligence business; risk assessment; report; posting; quarterly audit; violation; civil penalty; definitions

A. An artificial intelligence
business shall conduct a comprehensive risk assessment for each of the
business's high risk AI systems before using or selling a high risk AI system.

B. An artificial intelligence
business shall evaluate a high risk AI system for all of the following:

1. The potential for discrimination
or bias.

2. Any safety risks to children and
vulnerable populations.

3. Any privacy and data protection
risks.

4. Any catastrophic risks.

C. An artificial intelligence business shall submit a transparency report to the attorney general before using or selling a high risk AI system. The transparency report must include all of the following:

1. The purposes and operation of the
artificial intelligence system.

2. The results from any internal risk
assessment evaluations.

3. The measures that were implemented
to mitigate bias and harmful outcomes.

D. The transparency reports shall be
publicly posted on the artificial intelligence business's website.

E. An artificial intelligence
business shall implement bias mitigation measures that include any technical or
procedural methods that were implemented to reduce discriminatory outcomes.

F. An artificial intelligence
business shall conduct a quarterly audit to identify and correct any instances
of discrimination, harmful outputs or unsafe behavior.

G. If a high risk AI system may be
accessed by a child, the artificial intelligence business shall include
parental monitoring tools, content filtering and mechanisms to report harmful
content.

H. The attorney general shall issue a certification before an artificial intelligence business enters the stream of commerce.

I. The attorney general shall enforce this section and review the transparency reports for compliance with this section. The attorney general may impose a civil penalty of not more than $50,000 per violation for a violation of this section. Each day a violation occurs constitutes a separate violation.

J. An artificial intelligence business that implements recognized standards for and complies with the Children's Online Privacy Protection Act of 1998 (P.L. 105-277) may receive a reduced civil penalty for inadvertent violations.

K. For the purposes of this section:

1. "Artificial intelligence
business" means an entity that creates, uses, sells or provides access to
artificial intelligence systems that interact with the public and that uses a
frontier model of artificial intelligence that is owned or operated by a large
developer.

2. "frontier model" means
an artificial intelligence model that is trained by either of the following:

(
a
) using
greater than 10.26 computational operations to compute costs of more than
$100,000,000.

(b) Applying knowledge distillation to a frontier model.

3. "High risk AI system"
means an artificial intelligence system that:

(
a
) May impact
children, health care, employment, public safety or education.

(
b
) Is likely
to cause discrimination, bias or significant privacy violations.

4. "Large developer":

(
a
) Means a
person or entity that has trained one or more frontier models and has spent
more than $5,000,000 in developing the frontier model and more than
$100,000,000 in training the frontier model.

(
b
) Does not
include an accredited college or a university that engages in academic research
involving frontier models.

(c) Creates a foreseeable and material risk that a large developer's development, storage or deployment of a foundation model will result in the death of or serious injury to more than one hundred people by an incident of either of the following:

(i) Creating and releasing a chemical, biological, radiological or nuclear weapon.

(ii) Through the use of a foundation model, engaging in conduct that is performed with limited human intervention and, if the conduct was committed by an individual, would constitute a crime that requires intent, recklessness or gross negligence or the solicitation or aiding and abetting of a crime.
END_STATUTE