My Stance on AI

Most days, I would not take a stance on something I don’t feel confident I can define, and I am in no position to accurately describe AI models as they exist in early 2024. Nor will a single blog post suffice to discuss the ways mental health clinicians currently use AI and how that impacts care. This post is possibly less about what I think AI is and more about what I think healthcare is.

Norms Are Normal but Not Inherently Good

The genius of large language models (LLMs) is that they consistently deliver normal and sensible language output in response to natural language queries. However, normal and sensible can also stand in for normative and average in the statistical sense. In healthcare, and mental health in particular, we should be wary about introducing tools that implicitly favor reiterative and homogeneous output. 

People often seek therapy because they feel outside accepted norms, whether of family, religion, or other social domains. Those norms are not necessarily healthy by default, but they quite often are unexamined by default. Much of individual therapy consists of identifying and reflecting on the hitherto accepted norms in one’s life. Adding a layer of persistent, invisible normalization to our workflow should give us pause: it could plausibly hinder therapeutic work by reinforcing existing norms rather than opening them to examination.

Discourse for the Many, Implementation for the Few

Insofar as AI can serve as a tool for the betterment of society, I am all for systems that transparently assist humans in human pursuits. Unfortunately, the public discourse is notable for its lack of transparency, and both sympathetic and contrarian coverage of AI depicts a technology poised to better the lives of a select few at the expense of society by replacing rather than augmenting human work.

Anyone who has tried to navigate a phone tree would have reason to be skeptical of systems intended to replace human work, and the reasons for a well-informed clinician to be skeptical are too numerous to discuss here. 

Documenting the Lived Experience

For these reasons and others, I do not endorse the use of AI directly in, or tangential to, relation-based or creative work. Though it would be generous to call healthcare documentation a sort of literature (cf. Oliver Sacks¹), when documentation has an impact on human well-being, it should be performed with human oversight. Health documentation serves multiple purposes, among them:

  • It tracks the progress and success of treatment.
  • It describes medical necessity, which is used to justify reimbursement by health insurers.
  • It provides an opportunity for the clinician to reflect on and digest the work of a day. 

Setting aside documentation’s roles in legal protection and remuneration, there remains a human component that helps the clinician provide better care. Documentation gets its bad reputation in healthcare because it is uncompensated by insurance companies, unaccounted for in clinical scheduling, and utterly necessary.²

Documentation makes news when it is stolen or held for ransom, or when it makes life difficult. I am an optimist in that I believe the priorities and expectations laid on documentation are the problem rather than documentation itself. Documentation as a whole is a good thing, and a clinician’s ongoing documentation of treatment benefits the clinical work.

My Pledge

My pledge to you is this: I do not use AI anywhere creativity and individual voice are to be expected (e.g., blogs, articles, essays, the content of my website that describes who I am and what I do, and documentation relying on direct experience). My words are my own. My images are my own. My typos and idiosyncrasies are, for better or worse, my own.

There are certain materials in which AI-derived content may nonetheless appear. Those include:

  • Documents pertaining to laws, regulations, insurance, and other mandates. Because such text is often written by third parties (e.g., legal consultants, EHR platforms, or government organizations), I cannot guarantee that the original document was written without AI. 
  • Stock photography from third-party sites. I occasionally use stock photos for backgrounds, headers, UI and navigation components, and blog photos. Such graphics may contain AI-generated components.
  • Code that makes the website and any attached plug-ins and applications work correctly.

  1. Popova, Maria. “Inside Oliver Sacks’s Creative Process: The Beloved Writer’s Never-Before-Seen Manuscripts, Brainstorm Sheets, and Notes on Writing, Creativity, and the Brain.” The Marginalian, 2017. https://www.themarginalian.org/2017/03/07/oliver-sacks-notebooks/
  2. Kolata, Gina. “According to Medical Guidelines, Your Doctor Needs a 27-Hour Workday.” New York Times, 2023. https://www.nytimes.com/2023/02/14/health/doctors-medical-guidelines.html