AI and Mental Health Care: Issues, Challenges, and Opportunities

Afterword


Paul Dagum, Sherry Glied, and Alan Leshner, project cochairs
 

Over the eighteen months since this project’s inception, the integration of LLM applications into everyday use has progressed at an astonishing pace, leaving numerous unresolved societal and ethical questions. What is clear is that artificial intelligence, as it evolves, will alter many dimensions of our individual and social lives—in large ways and small, for better and worse. Our contributors have laid out a set of considerations for exploring these effects in the context of mental health treatment. Their responses, and our engaged discussions throughout the project, lead us to some overarching reflections about directions forward.

  1. The subject of our work, the use of artificial intelligence in mental health treatment, turns out to comprise a wide range of technologies and situations. One important step in solidifying research, both on the immediate effects of these technologies on individuals and on their broader effects on societies and relationships, is to develop a set of definitions and taxonomies. Classification is often the first step in science and policy, and its absence makes discussion difficult.
  2. One broad grouping within the area of artificial intelligence in mental health treatment consists of applications explicitly designed to play a role in the delivery of such treatment (this might include administrative or monitoring apps, apps used by or in conjunction with professionals, and so on). Developers of these applications will usually conduct some formal assessment of their effectiveness in the specific context for which they are intended. Unlike the case for pharmaceuticals, where a well-established playbook guides FDA approval, these assessments use disparate methodologies and comparison groups (for example, pre-post designs and waitlist controls, or comparisons to psychiatrists delivering individualized psychotherapy or to counselors delivering manualized cognitive behavioral therapy). Developing a minimum set of requirements, including ethical requirements for the conduct of studies in this context, is an essential step forward, and it is particularly important given the wide variation in the actual quality (and availability) of mental health treatment as it is currently delivered in the United States. It will also require a new approach to the design of FDA approval, which currently assumes that each new formulation of a drug or revision of a device is frozen in design and subjected to new evidence from clinical trials. The regulation of systems designed to evolve and improve with every interaction has no established precedent.
  3. The rapid uptake of chatbots and LLMs is happening without research or regulation. But this wide and fast diffusion is not inevitable. As the many recent changes around the use of smartphones in schools suggest, public opinion, persuasive critiques, and policies can affect the pace and nature of adoption. Those decisions—individual and societal—should be informed by as much evidence as can be brought to bear. Currently, that evidence largely takes the form of anecdotes, which are powerful but can be misleading. Opportunities exist to systematically collect information on the use of chatbots for therapeutic purposes (for example, in existing national surveys) and to analyze the effects of that use (for example, by comparing populations who, for exogenous reasons, have had easier or more limited access to LLMs, as has been done for smartphones and television; a minimal sketch of one such comparison follows this list). In doing so, we will also need to think carefully and comprehensively about the nature of the effects we might see. Experts in philosophy, psychology, and sociology can trace the pathways through which negative or positive effects may emerge. Systematic qualitative research can point to outcomes that are unexpected, whether promising or troubling.
  4. LLMs build on the information they collect. This feature generates network externalities (a service becomes more valuable as more users adopt it), and network externalities in turn tend to drive market consolidation (a stylized illustration of this dynamic also follows this list). Without up-front regulation, a dominant mental health care LLM could emerge, as we have seen with other Internet technologies: one on which millions of Americans might rely and that would have access to deeply personal information about them. Such dominance would carry profound economic and ethical implications beyond those inherent in the technology itself.
  5. When a wide range of disciplines and perspectives have seats at the table, the outcome can be tremendously generative and occasionally, and appropriately, uncomfortable. If only computer scientists and mental health providers participate in future discussions of the effects of AI on mental health, a great deal of useful insight will be lost. Such future discussions are essential as LLMs evolve alongside our understanding of their implications for individuals and society.
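
To make concrete the kind of comparison described in point 3, below is a minimal sketch, in Python, of a difference-in-differences estimate. The data, effect sizes, and variable names are hypothetical illustrations, not findings from this project; a real analysis would use survey measures from populations whose differential access to LLMs arose for exogenous reasons.

```python
# A minimal difference-in-differences sketch on simulated data.
# All numbers and names are hypothetical, chosen only to illustrate the design.
import numpy as np

rng = np.random.default_rng(0)

# Simulated well-being scores (0-100) before and after LLM access expands.
# "exposed" gained easy chatbot access; "comparison" did not.
exposed_before = rng.normal(60, 10, 500)
exposed_after = rng.normal(62, 10, 500)       # hypothetical +2-point shift
comparison_before = rng.normal(60, 10, 500)
comparison_after = rng.normal(60.5, 10, 500)  # hypothetical shared trend

# The change in the exposed group, minus the change in the comparison group,
# nets out trends common to both populations.
did = ((exposed_after.mean() - exposed_before.mean())
       - (comparison_after.mean() - comparison_before.mean()))
print(f"Estimated effect of LLM access: {did:.2f} points")
```

The appeal of this design is that the comparison group's change absorbs trends affecting everyone, isolating the change plausibly attributable to access itself.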
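
The link in point 4 between network externalities and consolidation can also be seen in a stylized model. The value function below is a Metcalfe-style assumption introduced here purely for illustration, not a claim made by the contributors.

```latex
% Stylized, illustrative value function: assume each of n users derives
% value from being able to interact with the other n - 1 users.
\[
  V(n) = k\,n(n-1) \approx k\,n^{2},
  \qquad
  \frac{V(2n)}{V(n)} = \frac{2n(2n-1)}{n(n-1)} \xrightarrow[\,n\to\infty\,]{} 4.
\]
```

Because value grows superlinearly under this assumption, doubling a platform's user base roughly quadruples its value, so the largest platform becomes ever more attractive to the next user; that feedback loop is the mechanism behind the consolidation concern raised above.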