Back to Colorado

HB26-1139 • 2026

Use of Artificial Intelligence in Health Care

Section 2 of the bill requires entities that use an artificial intelligence system or algorithm (AI system) for the purpose of conducting utilization review of health-care services, including health insurance carriers…

Elections Healthcare Technology
Passed Legislature

This bill passed both chambers and reached final enrollment, even if later executive action is not shown here.

Sponsor
Rep. J. Joseph, Rep. S. Lieder, Sen. L. Cutter, Rep. M. Duran, Rep. M. Froelich, Rep. M. Lindsay, Rep. K. Nguyen, Rep. T. Story, Rep. B. Titone
Last action
2026-03-16
Official status
House Third Reading Passed - No Amendments
Effective date
Not listed

Plain English Breakdown


Rules for Using AI in Health Care

This bill sets rules for using artificial intelligence (AI) systems in health care, focusing on requirements for AI used to review health-care services and restrictions on AI providing mental health therapy.

What This Bill Does

  • Requires entities that use AI to review health-care services to ensure the AI system makes decisions based on individual patient information, not just group data, with human oversight for any denials or delays in coverage.
  • Defines a 'mental health companion chatbot' as an AI system designed to provide personalized mental health support and sets rules to prevent it from misleading users about its capabilities.
  • Prohibits AI systems from delivering psychotherapy services directly to clients and bars regulated professionals from billing public or private payers for such services.

Who It Names or Affects

  • Health-care entities that use AI systems, including health insurance companies, pharmacy benefit managers, private utilization review organizations, behavioral health administrative services organizations, and managed care entities.
  • Regulated professionals who provide mental health services.
  • Patients receiving or seeking mental health therapy.

Terms To Know

Artificial Intelligence System (AI system)
A computer program that can perform tasks requiring human-like intelligence, such as learning from data and making decisions.
Utilization Review
The process of evaluating the medical necessity, appropriateness, and efficiency of health-care services before they are provided.

Limits and Unknowns

  • Does not specify how AI systems should be tested or evaluated for research purposes.
  • Does not provide details on penalties for violating the rules set by this bill.
  • The effective date is not specified in the official summary.

Amendments

These notes stay tied to the official amendment files and metadata from the legislature.

L.001

HOU Health & Human Services

Passed [*]

Plain English: The amendment changes the language in a bill about using artificial intelligence in healthcare to be more specific and clear.

  • Removes references to 'or delays' when discussing AI systems used for health-care service reviews.
  • Replaces terms like 'automated or algorithmic' with 'artificial intelligence'.
  • Adds a definition of 'ARTIFICIAL INTELLIGENCE SYSTEM' from another section of the law.
  • Modifies requirements for entities using AI in healthcare to include specific details about human oversight and audit processes.
  • The amendment text is technical, making some parts hard to explain without additional context.
L.002

HOU Health & Human Services

Passed [*]

Plain English: The amendment changes references to 'AI system or algorithm' in the bill to just 'artificial intelligence system,' and removes certain lines related to these terms.

  • Replaces all instances of 'AI system or algorithm' with 'artificial intelligence system'.
  • Removes specific sections from pages 9, 10 through 14, and page 15 that mention 'algorithm'.
  • The amendment text does not provide full context for the removed lines, so it's unclear what exactly is being deleted.
L.006

Second Reading

Passed [**]

Plain English: The amendment changes how a new health care bill will take effect and be subject to referendum.

  • Changes the effective date of the bill to January 1, 2027, unless a referendum petition is filed against it within 90 days after final adjournment of the general assembly.
  • Specifies that if the people vote in favor of the act at the November 2026 election, the act will take effect either on January 1, 2027, or when the governor officially declares the results.
  • The amendment does not specify what happens if a referendum petition is filed but no vote occurs.
  • It's unclear how this change affects existing provisions of the bill related to artificial intelligence in health care.
L.007

Second Reading

Passed [**]

Plain English: The amendment removes the words 'or delay' from a section of HB26-1139 that deals with artificial intelligence in healthcare.

  • Removes the phrase 'OR DELAY' from the bill text.
  • The exact impact and context of removing 'OR DELAY' is not fully explained by the amendment text alone, making it hard to understand the full implications without additional information.

Bill History

  1. 2026-03-16 House

    House Third Reading Passed - No Amendments

  2. 2026-03-13 House

    House Second Reading Special Order - Passed with Amendments - Committee, Floor

  3. 2026-03-09 House

    House Second Reading Laid Over Daily - No Amendments

  4. 2026-03-04 House

    House Committee on Health & Human Services Refer Amended to House Committee of the Whole

  5. 2026-02-04 House

    Introduced In House - Assigned to Health & Human Services

Official Summary Text

Section 2 of the bill requires entities that use an artificial intelligence system (AI system) for the purpose of conducting utilization review of health-care services, including health insurance carriers, pharmacy benefit managers, private utilization review organizations, behavioral health administrative services organizations, and managed care entities (entities), to ensure that the AI system complies with certain requirements specified in the bill when determining coverage for services. Specifically, the AI system used must:

  • Not base its determination solely on group data; and
  • Make determinations based on medical or clinical history, the patient's individual clinical circumstances, and other relevant factors specified in the bill, with denial of coverage reviewed by a licensed clinician or physician.
The AI system may be used to assist in utilization review, including expedited approvals. A denial or delay of coverage for a service based in whole or in part on medical necessity must be reviewed by a licensed clinician or physician who is competent to evaluate the specific clinical issues.
Entities that use AI systems shall disclose to the division of insurance, the department of human services, or the department of health care policy and financing, as applicable, the utilization review functions for which the AI system will be used and the points in the utilization review process when it will be used, the process for human oversight of adverse coverage determinations, and the process for maintaining audit information to ensure that the use of the AI system complies with the requirements in the bill.

Section 3 defines a 'mental health companion chatbot', in part, as an AI system that:

  • Uses generative artificial intelligence to provide adaptive, personalized, and emotionally resonant responses to sustain a one-on-one relationship with a user;
  • Engages in interactive conversations similar to those an individual would have with a licensed mental health professional; and
  • Is represented by the AI systems provider as, or that a reasonable person believes to be, capable of providing mental health therapy or of helping to manage or treat mental health conditions.

Sections 2, 5, 6, and 7:

  • Declare that an AI systems provider engages in the unauthorized practice of psychotherapy if the AI system used:
      ◦ Represents, states, or indicates, explicitly or implicitly, that the AI system is a human mental health provider or is authorized to engage in the practice of psychotherapy;
      ◦ Uses prohibited titles, abbreviations or descriptions of professions, credentials, or services that only a mental health professional authorized to provide psychotherapy in the state (regulated professional) may use;
      ◦ Delivers psychotherapy services that would be considered the practice of psychotherapy without oversight by an individual who is a regulated professional; or
      ◦ Is a mental health companion chatbot and: Fails to provide clear and conspicuous notice to the user that the AI system is not a human and is not authorized to provide psychotherapy, therapy, or counseling or to manage or treat mental health conditions; fails to disclose that the AI system is artificial intelligence when asked; fails to implement a protocol to address suicidal ideation or self-harm expressed by users, including referring users to a suicide hotline or crisis text line; or sells, shares, or discloses identifiable mental health data or conditions the use of the mental health companion chatbot on a user agreeing to those practices;
  • Allow for the use of an AI system to provide general information, support, or education, without representing that the AI system is a regulated professional;
  • Exempt from the bill the development, testing, or evaluation of an AI system conducted for the purpose of research by an institutional review board; and
  • Prohibit a regulated professional from billing a public or private payer for psychotherapy services that are provided directly to a client and that are conducted by an AI system, or for supervision of candidates or professional consultations that are provided by an AI system without human oversight.

Section 4 requires a regulated professional to disclose to a client the purposes for which the regulated professional uses AI systems or therapeutic or diagnostic devices that include AI systems in their practice and when those AI systems or devices are used, the right of a client to consent to a disclosure of confidential communications, and other disclosures.

Sections 2 and 3 prohibit a health insurance carrier and a payer of services under the 'Colorado Medical Assistance Act' and the 'Children's Basic Health Plan Act' from paying for psychotherapy services that are provided directly to a client and that are conducted by an AI system.
(Note: Italicized words indicate new material added to the original summary; dashes through words indicate deletions from the original summary.)
(Note: This summary applies to the reengrossed version of this bill as introduced in the second house.)

Current Bill Text

Read the full stored bill text
Second Regular Session
Seventy-fifth General Assembly
STATE OF COLORADO
REENGROSSED
This Version Includes All Amendments
Adopted in the House of Introduction
LLS NO. 26-0038.01 Brita Darling x2241 HOUSE BILL 26-1139
House Committees: Health & Human Services
Senate Committees: Not listed
A BILL FOR AN ACT
CONCERNING THE USE OF ARTIFICIAL INTELLIGENCE IN HEALTH CARE.
Bill Summary

(Note: This summary applies to this bill as introduced and does not reflect any amendments that may be subsequently adopted. If this bill passes third reading in the house of introduction, a bill summary that applies to the reengrossed version of this bill will be available at http://leg.colorado.gov.)

Section 2 of the bill requires entities that use an artificial intelligence system or algorithm (AI system) for the purpose of conducting utilization review of health-care services, including health insurance carriers, pharmacy benefit managers, private utilization review organizations, behavioral health administrative services organizations, and managed care entities, to ensure that the AI system complies with certain requirements specified in the bill when determining coverage for services. Specifically, the AI system used must:
HOUSE: 3rd Reading Unamended, March 16, 2026; Amended 2nd Reading, March 13, 2026

HOUSE SPONSORSHIP: Joseph and Lieder, Duran, Froelich, Lindsay, Nguyen, Story, Titone

SENATE SPONSORSHIP: Cutter

Shading denotes HOUSE amendment. Double underlining denotes SENATE amendment. Capital letters or bold & italic numbers indicate new material to be added to existing law. Dashes through the words or numbers indicate deletions from existing law.
  • Not base its determination solely on group data; and
  • Make determinations based on medical or clinical history, the patient's individual clinical circumstances, and other relevant factors specified in the bill, with denial of coverage reviewed by a licensed clinician or physician.

The AI system may be used to assist in utilization review, including expedited approvals. A denial or delay of coverage for a service based in whole or in part on medical necessity must be reviewed by a licensed clinician or physician who is competent to evaluate the specific clinical issues.
Section 3 defines a "mental health companion chatbot", in part, as an AI system that:

  • Uses generative artificial intelligence to provide adaptive, personalized, and emotionally resonant responses to sustain a one-on-one relationship with a user;
  • Engages in interactive conversations similar to those an individual would have with a licensed mental health professional; and
  • Is represented by the AI systems provider as, or that a reasonable person believes to be, capable of providing mental health therapy or of helping to manage or treat mental health conditions.
Sections 2, 5, 6, and 7:

  • Declare that an AI systems provider engages in the unauthorized practice of psychotherapy if the AI system used:
      ◦ Represents, states, or indicates, explicitly or implicitly, that the AI system is a human mental health provider or is authorized to engage in the practice of psychotherapy;
      ◦ Uses prohibited titles, abbreviations or descriptions of professions, credentials, or services that only a mental health professional authorized to provide psychotherapy in the state (regulated professional) may use;
      ◦ Delivers psychotherapy services that would be considered the practice of psychotherapy without oversight by an individual who is a regulated professional; or
      ◦ Is a mental health companion chatbot and: Fails to provide clear and conspicuous notice to the user that the AI system is not a human and is not authorized to provide psychotherapy, therapy, or counseling or to manage or treat mental health conditions; fails to disclose that the AI system is artificial intelligence when asked; fails to implement a protocol to address suicidal ideation or self-harm expressed by users, including referring users to a suicide hotline or crisis text line; or sells, shares, or discloses identifiable mental health data or conditions the use of the mental health companion chatbot on a user agreeing to those practices;
  • Allow for the use of an AI system to provide general information, support, or education, without representing that the AI system is a regulated professional;
  • Exempt from the bill the development, testing, or evaluation of an AI system conducted for the purpose of research by an institutional review board; and
  • Prohibit a regulated professional from billing a public or private payer for psychotherapy services that are provided directly to a client and that are conducted by an AI system or for supervision of candidates or professional consultations that are provided by an AI system without human oversight.

Section 4 requires a regulated professional to disclose to a client the purposes for which the regulated professional uses AI systems or therapeutic or diagnostic devices that include AI systems in their practice and when those AI systems or devices are used, the right of a client to consent to a disclosure of confidential communications, and other disclosures.

Sections 2 and 7 prohibit a health insurance carrier and a payer of services under the "Colorado Medical Assistance Act" and the "Children's Basic Health Plan Act" from paying for psychotherapy services that are provided directly to a client and that are conducted by an AI system.
Be it enacted by the General Assembly of the State of Colorado:
SECTION 1. Legislative declaration. (1) The general assembly finds and declares that:

(a) Health-care decisions affect the most intimate, complex, and consequential aspects of human life, including physical survival, mental well-being, family stability, and personal dignity, and therefore must be grounded in compassion, clinical judgment, and individualized understanding;

(b) Artificial intelligence systems may offer valuable tools to support efficiency, data analysis, and administrative functions in health-care delivery; however, these systems cannot comprehend the full breadth and depth of the human experience, including trauma, culture, disability, grief, fear, hope, and the lived realities that shape patient health outcomes;

(c) The state of Colorado has a compelling interest in ensuring that health care remains human-centered and that decisions involving coverage determinations, medical necessity, and access to treatment, particularly denials of care, are made by qualified human clinicians or physicians who are accountable for these decisions and can exercise professional judgment and ethical reasoning;

(d) Reliance on artificial intelligence systems to make or materially influence adverse health-care determinations without meaningful human oversight risks compounding inequities, embedding bias, and eroding trust between patients, providers, and health-care systems;

(e) Artificial intelligence systems may be used as an assistive tool in health-care delivery and administration but must not replace human judgment, human accountability, or the therapeutic relationship that is essential to safe, ethical, and effective care; and

(f) Every Coloradan, regardless of income, insurance status, disability, language access needs, race, ethnicity, geography, or immigration status, deserves access to human-centered health care that recognizes their dignity, individuality, and humanity.

(2) Therefore, the general assembly declares that it is essential to:

(a) Regulate the use of artificial intelligence systems in health care to ensure transparency, accountability, equity, and patient safety;

(b) Prohibit automated systems from making adverse coverage determinations without qualified human review; and

(c) Preserve the central role of licensed clinicians in decisions that affect the health, well-being, and lives of Coloradans.
SECTION 2. In Colorado Revised Statutes, add 10-16-112.7 as follows:

10-16-112.7. Use of artificial intelligence systems - utilization review - prohibition on payment for AI-delivered psychotherapy services - definitions. (1) AS USED IN THIS SECTION:

(a) "ARTIFICIAL INTELLIGENCE SYSTEM" HAS THE MEANING SET FORTH IN SECTION 6-1-1701 (2).

(b) "BEHAVIORAL HEALTH ADMINISTRATIVE SERVICES ORGANIZATION" MEANS AN ORGANIZATION SELECTED BY THE BEHAVIORAL HEALTH ADMINISTRATION PURSUANT TO SECTION 27-50-402 TO ESTABLISH AND MAINTAIN A NETWORK OF BEHAVIORAL HEALTH PROVIDERS.

(c) "MANAGED CARE ENTITY" HAS THE MEANING SET FORTH IN SECTION 25.5-5-403 (4).

(d) "PRIVATE UTILIZATION REVIEW ORGANIZATION" OR "ORGANIZATION" MEANS A PRIVATE UTILIZATION REVIEW ORGANIZATION, AS DEFINED IN SECTION 10-16-112 (1)(a), THAT HAS A CONTRACT WITH OR PERFORMS PRIOR AUTHORIZATION ON BEHALF OF A CARRIER.

(2) Utilization review. SUBSECTIONS (3), (4), AND (5) OF THIS SECTION APPLY TO:

(a) A CARRIER THAT:

(I) USES AN ARTIFICIAL INTELLIGENCE SYSTEM FOR THE PURPOSE OF UTILIZATION REVIEW; OR

(II) CONTRACTS WITH OR OTHERWISE WORKS THROUGH A PERSON THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM FOR THE PURPOSE OF UTILIZATION REVIEW;

(b) A PHARMACY BENEFIT MANAGER OR PRIVATE UTILIZATION REVIEW ORGANIZATION THAT CONTRACTS WITH A CARRIER TO PROVIDE UTILIZATION REVIEW SERVICES ON BEHALF OF THE CARRIER AND USES AN ARTIFICIAL INTELLIGENCE SYSTEM FOR THE PURPOSE OF CONDUCTING THE UTILIZATION REVIEW; AND

(c) A BEHAVIORAL HEALTH ADMINISTRATIVE SERVICES ORGANIZATION OR MANAGED CARE ENTITY THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM FOR THE PURPOSE OF CONDUCTING UTILIZATION REVIEW OF MENTAL OR BEHAVIORAL HEALTH SERVICES.

(3) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM TO CONDUCT UTILIZATION REVIEW SHALL ENSURE THAT:

(a) THE ARTIFICIAL INTELLIGENCE SYSTEM BASES ITS DETERMINATION ON THE FOLLOWING INFORMATION, AS APPLICABLE:

(I) AN INDIVIDUAL'S MEDICAL OR OTHER CLINICAL HISTORY;

(II) INDIVIDUAL CLINICAL CIRCUMSTANCES AS PRESENTED BY THE REQUESTING PROVIDER; AND

(III) OTHER RELEVANT CLINICAL INFORMATION CONTAINED IN THE INDIVIDUAL'S MEDICAL OR OTHER CLINICAL RECORD;

(b) THE ARTIFICIAL INTELLIGENCE SYSTEM DOES NOT BASE ITS DETERMINATIONS SOLELY ON GROUP DATA, WITHOUT REFERENCE TO THE INDIVIDUAL'S DATA;

(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS;

(d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;

(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;

(f) THE ARTIFICIAL INTELLIGENCE SYSTEM'S PERFORMANCE, USE, AND OUTCOMES ARE PERIODICALLY REVIEWED TO MAXIMIZE ACCURACY AND RELIABILITY;

(g) AN INDIVIDUAL'S HEALTH DATA IS NOT USED BEYOND ITS INTENDED OR STATED PURPOSE, CONSISTENT WITH APPLICABLE STATE AND FEDERAL LAWS; AND

(h) THE ARTIFICIAL INTELLIGENCE SYSTEM'S OR ALGORITHM'S CRITERIA AND GUIDELINES COMPLY WITH OTHER APPLICABLE STATE OR FEDERAL LAWS CONCERNING UTILIZATION REVIEW AND COVERAGE FOR HEALTH-CARE SERVICES.

(4) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION SHALL PROVIDE WRITTEN DISCLOSURES TO THE DIVISION, THE DEPARTMENT OF HUMAN SERVICES, OR THE DEPARTMENT OF HEALTH CARE POLICY AND FINANCING, AS APPLICABLE, THAT IDENTIFY:

(a) THE UTILIZATION REVIEW FUNCTIONS FOR WHICH THE ARTIFICIAL INTELLIGENCE SYSTEM WILL BE USED;

(b) THE POINTS IN THE UTILIZATION REVIEW PROCESS WHEN THE ARTIFICIAL INTELLIGENCE SYSTEM IS USED;

(c) THE HUMAN OVERSIGHT PROCESS, INCLUDING THE QUALIFICATIONS OF THE REVIEWER AND WHETHER A HUMAN MUST APPROVE AN ADVERSE DETERMINATION; AND

(d) THE PROCESS FOR MAINTAINING AUDIT INFORMATION SUFFICIENT TO DEMONSTRATE COMPLIANCE WITH SUBSECTION (3) OF THIS SECTION.

(5) (a) NOTWITHSTANDING SUBSECTION (3) OF THIS SECTION, AN ARTIFICIAL INTELLIGENCE SYSTEM MAY BE USED TO ASSIST WITH UTILIZATION REVIEW, INCLUDING EXPEDITED APPROVALS.

(b) A CARRIER'S DENIAL OF COVERAGE BASED IN WHOLE OR IN PART ON MEDICAL NECESSITY SHALL NOT BE ISSUED SOLELY ON THE OUTPUT OF AN ARTIFICIAL INTELLIGENCE SYSTEM WITHOUT HUMAN REVIEW AND APPROVAL OF THE DENIAL BY A LICENSED CLINICIAN, LICENSED PHYSICIAN, OR OTHER REGULATED PROFESSIONAL THAT IS COMPETENT TO EVALUATE THE SPECIFIC CLINICAL ISSUES INVOLVED IN THE HEALTH-CARE SERVICES REQUESTED BY THE PROVIDER AND A REVIEW OF THE HEALTH BENEFIT PLAN'S TERMS OF COVERAGE FOR THE HEALTH-CARE SERVICE.

(6) Prohibition on payment for AI-delivered psychotherapy services. (a) A CARRIER OFFERING A HEALTH BENEFIT PLAN ISSUED OR RENEWED IN THE STATE ON OR AFTER THE EFFECTIVE DATE OF THIS SECTION SHALL NOT PROVIDE COVERAGE FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM.

(b) SUBSECTION (6)(a) OF THIS SECTION DOES NOT PROHIBIT THE USE OF BILLING SOFTWARE, ELECTRONIC HEALTH RECORDS, VIDEO PLATFORMS, OR OTHER NONTHERAPEUTIC SOFTWARE TOOLS INCIDENT TO SERVICES PROVIDED BY A HUMAN PROVIDER.

(c) THE USE OF VIDEOCONFERENCING, MESSAGING PLATFORMS, OR OTHER COMMUNICATIONS SOFTWARE TO ENABLE SUPERVISION OR CONSULTATION BY A LICENSED, REGISTERED, OR CERTIFIED INDIVIDUAL DOES NOT CONSTITUTE SUPERVISION OR CONSULTATION THAT IS CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS REFERENCED IN SUBSECTION (6)(a) OF THIS SECTION.
SECTION 3. In Colorado Revised Statutes, add 25.5-1-209 as follows:

25.5-1-209. Prohibition on payment for AI-delivered psychotherapy services. A PAYER OF MENTAL OR BEHAVIORAL HEALTH-CARE SERVICES PROVIDED UNDER THE "COLORADO MEDICAL ASSISTANCE ACT", AS SPECIFIED IN ARTICLES 4, 5, AND 6 OF THIS TITLE 25.5, OR THE "CHILDREN'S BASIC HEALTH PLAN ACT", AS SPECIFIED IN ARTICLE 8 OF THIS TITLE 25.5, SHALL NOT PAY FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS THAT TERM IS DEFINED IN SECTION 10-16-112.7 (1)(b).
SECTION 4. Act subject to petition - effective date - applicability. (1) This act takes effect January 1, 2027; except that, if a referendum petition is filed pursuant to section 1 (3) of article V of the state constitution against this act or an item, section, or part of this act within the ninety-day period after final adjournment of the general assembly, then the act, item, section, or part will not take effect unless approved by the people at the general election to be held in November 2026 and, in such case, will take effect January 1, 2027, or on the date of the official declaration of the vote thereon by the governor, whichever is later.

(2) This act applies to actions taken on or after the applicable effective date of this act.