This is the text of Utah Code § 58-60-118 (Mental health chatbots -- Affirmative defense), published on Counsel Stack Legal Research, which covers Utah primary law. Counsel Stack provides free access to over 12 million legal documents, including statutes, case law, regulations, and constitutions.
(1) As used in this section:
(1)(a) "Mental health chatbot" means the same as that term is defined in Section 13-72a-101.
(1)(b) "Supplier" means the same as that term is defined in Section 13-11-3.
(2) It is an affirmative defense to liability in an action brought under Subsection 58-1-501(1) or Subsection 58-1-501(2) if the supplier demonstrates that the supplier:
(2)(a) created, maintained, and implemented a policy that meets the requirements of Subsection (3);
(2)(b) maintains documentation regarding the development and implementation of the mental health chatbot that describes:
(2)(b)(i) foundation models used in development;
(2)(b)(ii) training data used;
(2)(b)(iii) compliance with federal health privacy regulations;
(2)(b)(iv) user data collection and sharing practices; and
(2)(b)(v) ongoing efforts to ensure accuracy, reliability, fairness, and safety;
(2)(c) filed the policy with the division as described in Subsection (4); and
(2)(d) complied with all requirements of the filed policy at the time of the alleged violation.
(3) A policy described in Subsection (2)(a) must:
(3)(a) be in writing;
(3)(b) clearly state:
(3)(b)(i) the intended purposes of the mental health chatbot; and
(3)(b)(ii) the abilities and limitations of the mental health chatbot; and
(3)(c) describe the procedures by which the supplier:
(3)(c)(i) ensures that licensed mental health therapists are involved in the development and review process;
(3)(c)(ii) ensures the mental health chatbot is developed and monitored in a manner consistent with clinical best practices;
(3)(c)(iii) conducts testing, prior to making the mental health chatbot publicly available and regularly thereafter, to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in therapy with a licensed mental health therapist;
(3)(c)(iv) identifies reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, users that could result from using the mental health chatbot;
(3)(c)(v) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot;
(3)(c)(vi) implements protocols to assess and respond to risk of harm to users or other individuals;
(3)(c)(vii) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions;
(3)(c)(viii) implements protocols to respond in real time to acute risk of physical harm;
(3)(c)(ix) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits;
(3)(c)(x) provides users any necessary instructions on the safe use of the mental health chatbot;
(3)(c)(xi) ensures users understand they are interacting with artificial intelligence;
(3)(c)(xii) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot;
(3)(c)(xiii) prioritizes user mental health and safety over engagement metrics or profit;
(3)(c)(xiv) implements measures to prevent discriminatory treatment of users; and
(3)(c)(xv) ensures compliance with the security and privacy provisions of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including Sections 13-72a-201, 13-72a-202, and 13-72a-203.
(4) To file a policy with the division under this section, a supplier of a mental health chatbot:
(4)(a) shall provide to the division:
(4)(a)(i) the name and address of the supplier;
(4)(a)(ii) the name of the mental health chatbot supplied by the supplier;
(4)(a)(iii) the written policy described in Subsection (3); and
(4)(a)(iv) a fee set in accordance with Section 63J-1-504;
(4)(b) shall file in a manner established by the division; and
(4)(c) may provide to the division:
(4)(c)(i) any revisions to a policy filed under this section; or
(4)(c)(ii) any other documentation the supplier elects to provide.
(5) The division:
(5)(a) shall provide a means for a supplier of a mental health chatbot to file under this section; and
(5)(b) may impose an annual filing fee set in accordance with Section 63J-1-504.
(6) The affirmative defense described in this section applies only in an administrative or civil action alleging a violation of:
(6)(a) Subsection 58-1-501(1); or
(6)(b) Subsection 58-1-501(2).
(7) Nothing in this section shall be construed to:
(7)(a) bar the division from bringing an action under Subsection 58-1-501(1) or Subsection 58-1-501(2) against the supplier of a mental health chatbot; or
(7)(b) recognize a mental health chatbot as a licensed mental health therapist.