Back to Hawaii

SB2967 • 2026

RELATING TO ARTIFICIAL INTELLIGENCE.

Active

The official status shows this bill as active or awaiting another formal step.

Sponsor
MCKELVEY, CHANG, HASHIMOTO, RICHARDS
Last action
2026-02-11
Official status
The committee on LBT deferred the measure.
Effective date
Not listed

Plain English Breakdown

This entry uses the official source text because a generated explanation was unavailable or could not be confirmed against the official bill text.

RELATING TO ARTIFICIAL INTELLIGENCE.

What This Bill Does

  • Establishes consumer protection requirements for the use of artificial intelligence systems in consumer interactions and consequential decisions, including disclosures, documentation, and a right to correction, appeal, and human review (see the illustrative sketch after this list).
  • Makes certain violations an unfair or deceptive act or practice.
  • Requires risk management and impact assessments for high-risk artificial intelligence systems.
  • Requires incident reports to the Executive Director of the Office of Consumer Protection and the Attorney General.
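
As a purely illustrative aside (not part of the bill text), the disclosure duty summarized above could be met with something as simple as a scripted notice at the start of an automated chat. The sketch below is a hypothetical Python fragment; all names in it are assumptions for illustration, and the bill itself does not prescribe any particular implementation.

```python
# Hypothetical sketch only: one way a deployer's chat frontend might surface
# the AI-interaction disclosure contemplated by proposed section 480-B.
# AI_DISCLOSURE, start_ai_session, and handoff_to_human are illustrative
# names, not anything defined by the bill.

AI_DISCLOSURE = (
    "You are interacting with an automated AI system. "
    "You may ask for a human representative at any time."
)

def start_ai_session(send_message):
    """Send the disclosure before any AI-generated reply (cf. 480-B(a))."""
    send_message(AI_DISCLOSURE)

def handoff_to_human(send_message, agent_name):
    """When a human agent takes over, identify them accurately (cf. 480-B(b))."""
    send_message(f"A human representative ({agent_name}) has joined this conversation.")

if __name__ == "__main__":
    start_ai_session(print)
    handoff_to_human(print, "a customer service agent")
```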

Limits and Unknowns

  • This entry is temporarily using official source text because the generated explanation could not be confirmed against the official bill text during the last sync.

Bill History

  1. 2026-02-11 S

    The committee on LBT deferred the measure.

  2. 2026-02-06 S

    The committee(s) on LBT has scheduled a public hearing on 02-11-26 3:00PM; Conference Room 225 & Videoconference.

  3. 2026-01-30 S

    Referred to LBT, CPN.

  4. 2026-01-26 S

    Passed First Reading.

  5. 2026-01-23 S

    Introduced.

Official Summary Text

RELATING TO ARTIFICIAL INTELLIGENCE.
Office of Consumer Protection; Executive Director; Attorney General; Artificial Intelligence; Consumer Protection; Disclosures; Algorithmic Discrimination; Unfair and Deceptive Acts or Practices; Appeals; Risk Management
Establishes consumer protection requirements for the use of artificial intelligence systems in consumer interactions and consequential decisions, including disclosures, documentation, and a right to correction, appeal, and human review. Makes certain violations an unfair or deceptive act or practice. Requires risk management and impact assessments for high-risk artificial intelligence systems. Requires incident reports to the Executive Director of the Office of Consumer Protection and the Attorney General.

Current Bill Text

Read the full stored bill text
SB2967

THE SENATE

S.B. NO.

2967

THIRTY-THIRD LEGISLATURE, 2026

STATE OF HAWAII

A BILL FOR AN ACT

relating to artificial intelligence.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF HAWAII:

SECTION 1.  The legislature finds that artificial intelligence systems, including generative artificial intelligence services, are becoming integral to commerce, education, health care, finance, transportation, and government operations.  When responsibly designed and governed, these technologies can improve productivity, expand access to services, reduce costs, and support faster, better-informed decision-making.

The legislature further finds, however, that the rapid deployment of artificial intelligence systems, often without mature controls, can erode foundational principles of security, oversight, and accountability.  Documented trends include hallucinations and other reliability failures; misuse of accessible data or application of irrelevant or low-quality data; poisoned or contaminated inputs that degrade outputs; and attempts by providers or deploying organizations to shift responsibility to end users or to an automated system to avoid accountability for errors and harms.

The legislature further finds that consumers can face severe financial and reputational impacts when automated tools are used in customer-facing interactions or to support high-impact decisions.  Examples include:

     (1)  Use of automated vehicle damage scanning and scoring processes that can generate disputed or erroneous damage charges in rental-car transactions; and

     (2)  Well-documented harms from algorithmic scoring in housing and tenant screenings that can contribute to unfair outcomes and disparate impacts.

The legislature further finds that basic, common-sense consumer protection standards can curb abuses while supporting responsible innovation, by clarifying accountability, requiring transparent disclosures, ensuring meaningful human review and a right to appeal when automated tools materially affect consumers, and requiring risk management and documentation proportionate to consumer risk.

Accordingly, the purpose of this Act is to establish a technology-neutral framework that:

     (1)  Requires disclosure when artificial intelligence is used in consumer interactions and consequential decisions;

     (2)  Prohibits the use of artificial intelligence as a shield from responsibility for unfair or deceptive practices;

     (3)  Establishes rights to explanation, correction, appeal, and human review for certain automated decisions; and

     (4)  Requires reasonable governance, testing, monitoring, and cybersecurity controls to improve reliability, fairness, and consumer trust.

SECTION 2.  Chapter 480, Hawaii Revised Statutes, is amended by adding a new part to be appropriately designated and to read as follows:

"Part     .  Artificial intelligence consumer protections

§480-A  Definitions.  As used in this part:

"Adverse action" means a denial, reduction, termination, or other materially unfavorable change in, or refusal to provide, a product, service, price, term, or condition that is the result of, or is materially influenced by, a consequential decision.

"Algorithmic discrimination" means the use of an artificial intelligence system or automated decision tool that contributes to unjustified differential treatment or unjustified differential impact that disfavors a person or class of persons on the basis of a characteristic protected by state or federal law.

"Artificial intelligence system" or "AI system" means a machine-based system that, for a given set of objectives, generates outputs such as predictions, recommendations, content, classifications, scores, or similar outputs that influence decisions or behaviors in real or virtual environments, and that operates with varying levels of autonomy.

"Automated decision tool" means a computational process, including one derived from machine learning, statistics, data analytics, or artificial intelligence, that issues an output used to make, inform, or materially influence a consequential decision.

"Consequential decision" means a decision that determines or materially influences a consumer's access to, eligibility for, or the terms, conditions, or pricing of:

     (1)  Housing or rental screening, including a lease, tenancy, or occupancy determination;

     (2)  Credit, lending, or other financial services;

     (3)  Insurance;

     (4)  Employment, including hiring, promotion, termination, scheduling, compensation, or the assignment of work;

     (5)  Education admissions or educational opportunities;

     (6)  Health care services, including access to care, payment, or coverage determinations;

     (7)  Legal services provided to a consumer in a consumer-facing context; or

     (8)  Any other category designated by the director by rule as a consequential decision due to a comparable risk of material financial, reputational, or legal harm.

"Deployer" means a person that uses, operates, or makes available an AI system or automated decision tool in the course of business in the State, including use through a vendor, when the system or tool is used to interact with a consumer or to make, inform, or materially influence a consequential decision.

"Executive director" means the executive director of the office of consumer protection.

"Generative artificial intelligence service" means an AI system that generates, in response to prompts or other inputs, content such as text, images, audio, video, computer code, or other synthetic output.

"High-risk AI system" means an AI system or automated decision tool that is used to make, or is a substantial factor in making, a consequential decision.

"Materially influence" means that an output is used in a manner that could change the outcome of a decision, the terms of a decision, or the process used to reach a decision, including use as a gatekeeping score, recommendation, ranking, classification, or flag.

"Vendor" means a person that provides, licenses, hosts, maintains, or materially modifies an AI system or automated decision tool on behalf of a deployer.

§480-B  Disclosure when interacting with an AI system.  (a)  A deployer that uses an AI system, including a generative artificial intelligence service, to interact with a consumer in a consumer-facing communication shall provide a clear and conspicuous disclosure, at the beginning of the interaction and at reasonable intervals as necessary to avoid deception, that the consumer is interacting with an AI system.

     (b)  Subsection (a) shall not be construed to prohibit the use of a live human agent; provided that when a human agent joins or assumes control of the interaction, the deployer shall not misrepresent the identity of the agent.

     (c)  The disclosure required by this section may be provided by text, audio, or another method reasonably calculated to be understood by the consumer given the communication channel.

§480-C  Disclosures for consequential decisions; notice and explanation.  (a)  If a deployer uses a high-risk AI system to make, inform, or materially influence a consequential decision, the deployer shall provide the consumer, in plain language and in a timely manner:

     (1)  Notice that a high-risk AI system was used to make, inform, or materially influence the consequential decision;

     (2)  A description of the type of information used by the system and the primary factors that materially contributed to the decision; and

     (3)  Information describing how the consumer may request correction of inaccurate information, submit additional information, seek reconsideration, and obtain human review under section 480-D.

     (b)  The notice required by this section shall be provided:

     (1)  At or before the time of communicating an adverse action; or

     (2)  If no adverse action is communicated, upon request by the consumer within a reasonable period after the decision.

     (c)  This section shall not be construed to require a deployer to disclose proprietary source code or trade secrets; provided that the deployer shall provide a meaningful explanation sufficient for a reasonable consumer to understand the basis for the decision and the process to contest it.

§480-D  Right to correction, appeal, and human review.  (a)  A deployer that uses a high-risk AI system to make, inform, or materially influence a consequential decision shall implement a reasonable process by which a consumer may:

     (1)  Request correction of inaccurate personal information used in the decision;

     (2)  Submit relevant information for reconsideration; and

     (3)  Obtain a human review of an adverse action.

     (b)  Human review under this section shall be performed by an individual with authority to overturn the adverse action and with training reasonably related to the subject matter of the consequential decision.

     (c)  A deployer shall provide a written response to a consumer request under this section within a reasonable period, and shall include:

     (1)  The outcome of the reconsideration;

     (2)  If the adverse action is upheld, an explanation of the basis for the determination in plain language; and

     (3)  Any additional steps available to the consumer through internal appeal, customer dispute channels, or applicable external rights.

     (d)  This section shall not be construed to require a deployer to provide human review when doing so would:

     (1)  Prevent the deployer from complying with state or federal law; or

     (2)  Compromise the security or integrity of systems or fraud-prevention processes;

provided that the deployer shall document the basis for invoking this subsection and shall provide the consumer with notice of the limitation and an alternative dispute channel reasonably available to the consumer.

§480-E  AI user agreement; prohibited waivers.  (a)  A deployer that provides a consumer-facing AI interaction shall make available, prior to or at the time of interaction, a clear and conspicuous AI user agreement that describes, in plain language:

     (1)  The nature of the AI interaction and known material limitations of the AI system, including the risk of inaccurate or fabricated output;

     (2)  The categories of data collected from the consumer during the interaction and how the data will be used, retained, and shared;

     (3)  How a consumer may reach a human representative, submit a complaint, and dispute a charge, decision, or other outcome tied to the AI interaction; and

     (4)  Any material ways the deployer uses the AI interaction to make, inform, or materially influence consequential decisions.

     (b)  Any term in an AI user agreement that purports to waive, limit, or disclaim a deployer's obligations under this part, or to waive a consumer's rights or remedies under state law, shall be void.

§480-F  Duty of reasonable care; risk management program for high-risk AI systems.  (a)  A deployer and vendor shall use reasonable care to protect consumers from any known or reasonably foreseeable material risks of:

     (1)  Algorithmic discrimination;

     (2)  Material errors, including systematic reliability failures; and

     (3)  Cybersecurity and data integrity failures that materially affect the reliability or security of outputs.

     (b)  Before deploying a high-risk AI system, and throughout the period of deployment, a deployer shall implement and maintain a written risk management program that is risk-based and proportionate to the nature of the consequential decisions and the degree of potential harm to consumers, and that includes:

     (1)  Governance and accountability for AI system use, including designation of responsible personnel;

     (2)  Documented policies and procedures covering procurement, development, use, monitoring, incident response, and retirement of high-risk AI systems;

     (3)  Data governance controls addressing data quality, relevance, limitations, and reasonably practicable data lineage;

     (4)  Pre-deployment testing and ongoing monitoring designed to detect material errors, model drift, and algorithmic discrimination;

     (5)  Controls addressing vendor and third-party risks, including contract terms requiring reasonable cooperation with oversight, auditing, and consumer dispute handling; and

     (6)  Recordkeeping sufficient to demonstrate compliance with this part.

     (c)  At least annually, and upon any intentional and substantial modification of a high-risk AI system, a deployer shall complete an internal impact assessment that evaluates:

     (1)  The system's intended use and reasonably foreseeable misuse;

     (2)  The categories of data used and material limitations;

     (3)  The reasonably foreseeable risks of material consumer harm, including financial and reputational harm;

     (4)  The steps taken to mitigate material risks, including algorithmic discrimination; and

     (5)  A summary of monitoring results and identified material issues, if any.

     (d)  Impact assessments and risk management program documentation shall be retained for not less than five years after the system is retired or materially modified, whichever is later, and shall be made available to the executive director or the attorney general upon request; provided that confidential commercial information shall be protected to the extent permitted by law.

§480-G  Documentation, consumer dispute records, and access to relevant outputs.  (a)  When an AI system output is used as a substantial factor in an adverse action, the deployer shall maintain documentation sufficient to:

     (1)  Identify the system used, the nature of the output relied upon, and the decision process;

     (2)  Support the disclosures required by section 480-C; and

     (3)  Support meaningful reconsideration and human review under section 480-D.

     (b)  Upon request by a consumer and to the extent reasonably necessary to support dispute resolution, a deployer shall provide the consumer with access to relevant records, including a copy or summary of the information used by the high-risk AI system and the output that was relied upon as a substantial factor; provided that the deployer shall not be required to disclose proprietary source code or trade secrets.

§480-H  Incident reporting.  (a)  A deployer shall notify the executive director and the attorney general within ninety days after discovering:

     (1)  A material violation of this part affecting a class of consumers; or

     (2)  That a high-risk AI system caused or materially contributed to algorithmic discrimination or other material consumer harm.

     (b)  The notification shall include a description of:

     (1)  The nature of the issue;

     (2)  The categories of consumers potentially affected;

     (3)  The deployer's mitigation steps and corrective actions; and

     (4)  Any changes made to prevent recurrence.

     (c)  Nothing in this section shall be construed to limit any obligation to notify consumers or government agencies under other applicable state or federal law.

§480-I  Liability and unfair or deceptive acts or practices.  (a)  A deployer shall not represent, expressly or by implication, that:

     (1)  A consumer is required to accept an AI-generated output as accurate or binding; or

     (2)  The deployer is not responsible for an act or omission because an AI system generated, recommended, or performed the act.

     (b)  A violation of this part shall constitute an unfair or deceptive act or practice under section 480-2.

     (c)  Nothing in this part shall be construed to diminish any obligation under state or federal civil rights, fair housing, consumer credit, insurance, employment, privacy, data security, or other applicable law.

§480-J  Rules.  The executive director may adopt rules pursuant to chapter 91 to implement this part, including rules:

     (1)  Further defining consequential decisions and high-risk AI systems based on consumer risk;

     (2)  Establishing minimum standards for disclosures, timing, and consumer-facing format;

     (3)  Establishing baseline elements for risk management programs and impact assessments proportionate to risk; and

     (4)  Establishing procedures for submission and protection of confidential and proprietary information."

SECTION 3.  If any provision of this Act, or the application thereof to any person or circumstance, is held invalid, the invalidity does not affect other provisions or applications of the Act that can be given effect without the invalid provision or application, and to this end the provisions of this Act are severable.

SECTION 4.  This Act shall take effect upon its approval.

INTRODUCED BY:

_____________________________

Report Title:

Office of Consumer Protection; Executive Director;
Attorney General; Artificial Intelligence; Consumer Protection; Disclosures;
Algorithmic Discrimination; Unfair and Deceptive Acts or Practices; Appeals;
Risk Management

Description:

Establishes consumer protection requirements for the use of artificial intelligence systems in consumer interactions and consequential decisions, including disclosures, documentation, and a right to correction, appeal, and human review.  Makes certain violations an unfair or deceptive act or practice.  Requires risk management and impact assessments for high-risk artificial intelligence systems.  Requires incident reports to the Executive Director of the Office of Consumer Protection and the Attorney General.

The summary description
of legislation appearing on this page is for informational purposes only and is
not legislation or evidence of legislative intent.