Mental health has become a widely discussed topic nowadays.
In the past, discussions concerning mental health were often hushed up or altogether swept under the rug. A gradual cultural change has led to openly considering mental health issues and eased qualms about doing so in publicly acknowledged ways.
You might attribute some of the credit for this change in overarching societal attitudes to the advent of easily accessed smartphone apps that aid your personal mindfulness and presumably spur you toward mental well-being. There are apps for mindfulness, for meditation, for diagnosing your mental health status, for doing mental health screening, and so on. A plethora of apps exist.
Can we say that smartphone apps overtly led to openness about mental health? It admittedly is a bit of a chicken-or-egg question. Did the openness toward mental health allow for the emergence of relevant smartphone apps, or did mental well-being smartphone apps drive society in the direction of being upfront about mental health?
Maybe it was an interweaving combination entailing both directions happening at the same time.
In any case, into this potent mix comes the rise of mental health apps that are said to be extraordinarily powered by Artificial Intelligence (AI). The idea is that the underlying technology can be improved via the (presumably) judicious use of AI. Whereas initial versions of mental health apps were predominantly fact-based informational deliveries as though you were doing an online search on said topics, the infusion of AI has led to automation undertaking interactive dialogues with you, akin to texting with a human therapist or the like (well, kind of, as I will be addressing and scrutinizing here).
This takes us to the latest and headline-grabbing AI that has recently garnered national and international attention, namely the use of what is formally known as Generative AI and widely popularized via the app known as ChatGPT. For clarification, ChatGPT is a general-purpose AI interactive system, essentially a general chatbot; nonetheless, it is actively and avidly being used by people who seek specifically to glean mental health advice (the app wasn’t made for that purpose, and yet people have decided they want to use it anyway in that role).
For my recent coverage of ChatGPT, see the link here for an overview. I also did some follow-ups about how ChatGPT is worrying teachers as to students possibly cheating via using AI to write their essays, see the link here, and I did a seasonally flavored look in my Santa-related analysis at the link here.
Don’t worry, I’ll be explaining herein what Generative AI and ChatGPT are all about, doing so momentarily so please hang in there.
If you take a look at social media, you will see people who are proclaiming ChatGPT and generative AI as the best thing since sliced bread. Some suggest that this is in fact sentient AI (nope, they are wrong!). Others worry that people are getting ahead of themselves. They are seeing what they want to see. They have taken a shiny new toy and shown exactly why we can’t have nice new things.
Those in AI Ethics and AI Law are soberly and seriously worried about this burgeoning trend, and rightfully so. We will herein take a close look at how people are using generative AI for uses that aren’t especially suitable for what AI can really achieve today. All manner of AI ethical and AI legal issues are indubitably wrapped into the whole conundrum. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
First, let’s consider some important facets of mental health and why this is a very big and essential topic. After laying that foundation, we’ll do a quick explainer about generative AI and especially ChatGPT. I’ll include examples from ChatGPT so that you can see with your own eyes the type of verbiage that the AI app is able to produce. We’ll conclude this discussion with some comments about what this all means and how AI Ethics and AI Law are inevitably going to step into the picture.
Fasten your seatbelt for quite a ride.
Mental Health Is A Vital And Growing Societal Concern
According to various published statistics, there is a dark and gloomy cloud overhead concerning today’s mental health status. I don’t want to seem to be glum about this, but we might as well face up to the reality confronting us. Hiding our heads in the sand won’t work. We’ll be better off approaching the matter with eyes open and a willingness to solve thorny problems.
Here are some noteworthy stats that were collected by a prominent mental health organization about Americans and the mental health landscape (per Mental Health America, “2023 Key Findings”):
- Adults widely experience mental illness. About 21% of adults reported experiencing a mental illness, which is roughly the equivalent of saying that approximately 50 million adults in the U.S. have experienced this.
- Lack of getting mental health treatment is widespread. Slightly more than half of adults with a mental illness are not getting treatment (approximately 55%), so perhaps around 28 million adults aren’t getting needed mental health treatment.
- Youths are impacted too. Around one in ten youths in the U.S. have expressed that they have experienced severely impairing depression that impacted their schoolwork, home life, family interactions, and/or social life.
- Mental health treatment for youths is lacking. Less than one-third of youths that have severe depression are receiving consistent treatment (only about 28% do), and over half do not get any mental health care at all (estimated 57%).
- Sparsity of mental health providers. A reported figure is that there are an estimated 350 individuals in the U.S. for every one mental health provider, suggesting a paucity of available qualified mental health professional advisors and therapists for the population all told.
I don’t want to get us fixated on the statistics per se since you can readily argue about how these stats are at times collected or reported. For example, sometimes these are based on surveys whereby the poll was preselected to certain areas of the country or types of people. Also, you can decidedly quibble about how honest people are when they self-report their mental health status, depending upon who is asking and why they might want to lean in one direction or another on the topic. Etc.
The gist though is that we can at least generally agree that there is a mental health challenge facing the country and that we ought to be doing something about it. If we do nothing, the base assumption is that things are going to get worse. You can’t let a festering problem endlessly fester.
You might have noticed in the aforementioned stats that there is a claimed paucity of available qualified mental health professionals. The belief is that there is an imbalance in supply and demand, for which there is an insufficient supply of mental health advisers and an overabundance of either actual or latent demand for mental health advice (I say latent in the sense that many might not realize the value of seeking mental health advice, or they cannot afford it, or they cannot logistically access it).
How can we deal with this imbalance?
One path seems to be the use of automation and particularly AI to bolster the “supply side” of providing mental health advice. You could persuasively argue that the popularity of smartphone meditation and mindfulness apps is a sign that there is indeed pent-up demand. When you cannot readily gain access to qualified human advisors, automation and AI step into that gap.
Think too about the convenience factors.
When using an AI app for mental health, you have the AI available 24×7. No need to schedule an appointment. No difficulty in logistically getting together in person with a human adviser. Likely the cost is a lot less expensive too. You can rack up time using the AI app whereas with a human adviser the clock is ticking and the billing minutes are mounting.
But, wait for a darned second, you might be exhorting, an AI app is not on par with a human adviser.
This is ostensibly an apples-to-oranges comparison. Or perhaps it is more akin to an apples-to-oysters comparison, such that the two don’t especially compare at all. A properly qualified human adviser who knows what they are doing when it comes to mental health is certainly heads above any kind of AI that we have today. Sure, the AI app might be available around the clock, but you are getting an inferior level of quality, and thus you cannot make any sensible likening between using a human adviser versus using the AI.
We will return shortly to this debate about human advisers versus AI-based advisement.
Meanwhile, one aspect of mental health that seems rather heart-wrenching concerns youths and mental health.
One belief is that if we don’t catch mental health issues when someone is young, the societal cost is enormous on the other end when they become adults. It is the classic tale of the seedling that grows into either a well-developed tree or one that has all manner of future problems. Perhaps, some suggest, we should especially focus our attention on youths. Catch the issues early. Try to prevent the issues from becoming lifelong difficulties. This potentially eases the manifestation of mental health issues at the adult stage of life, and with some fortitude, we can reduce the mental health deterioration pipeline flow, if you get my drift.
Researchers emphasize these similar concerns, such as this recent paper: “The mental health of adolescents and emerging adults (‘young people’) is an area of public health warranting urgent attention globally. A transitional period characterized by rapid change in multiple domains (physical, social, psychological, vocational), adolescence and emerging adulthood is a developmental stage associated with heightened risks to mental well-being, as young people experience major life changes related to puberty, neurodevelopment, as well as changes to identity and autonomy in social contexts. Research indicates high prevalence of mental illness among young people with one in five individuals likely meeting criteria for a mental disorder. Disease burden associated with high prevalence rates are further exacerbated by demand for treatment outstripping supply creating a treatment gap. Digital mental health interventions (DMHIs), such as those delivered via smartphone apps or online, represent a rapidly growing mode of service with potential to offer greater access to support” (Vilas Sawrikar and Kellie Mote, “Technology Acceptance And Trust: Overlooked Considerations In Young People’s Use Of Digital Mental Health Interventions”, Health Policy And Technology, October 2022).
As noted by those researchers, the advent of automation and AI mental health apps seems suited to young people for a variety of reasons, in that younger people might be more prone to using high-tech, and they also would likely find appealing the ease of access and other facets. The article mentions that there is an up-and-coming catchphrase known as digital mental health interventions, along with the associated abbreviation of DMHI (this acronym hasn’t solidified yet and alternatives are being bandied around).
Let’s dig a little deeper into this notion of digital mental health interventions.
Here are some added remarks by the researchers: “Technology-mediated healthcare could mitigate gaps in services by providing access to support at scale, at low cost and at the user’s convenience. The prevalence of access to smartphone technology among younger people points to a seemingly obvious solution for meeting demand in this population. However, while DMHIs have been shown to be effective in randomized control trials, this does not appear to translate to real world uptake. A systematic review of studies indicated that a quarter of mental health apps were never used after installation. Younger people in particular may be less likely to engage with technology targeted at mental health with evidence that younger age groups are less likely to use DMHIs in treatment and they report low preference of online mental health care compared to face-face treatment” (ibid).
A key takeaway is that though you might assume that youths would assuredly adore and use these online mental health apps, the true picture is a lot murkier. Perhaps one particularly telling point is that once the app was installed, usage either dropped off precipitously or never got underway at all. One explanation is that the hype and excitement at downloading the app were quickly overshadowed by the app potentially being difficult to use or perceived as ineffective. You could also suggest that some youths might have been stirred to get the app due to peer pressure or via what they see on social media, and didn’t especially intend to use the app. They just wanted to say that they have it. At this age, being part of the “in” club might be just as important as whatever the app itself does.
Another viewpoint is that if these mental health apps were better at what they do, such as fully leveraging the state-of-the-art in AI, this might lure youths into actual usage of the apps. An added element would be that if youths perceived the app as being popular, they might want to be able to say that they use it too. In that sense, AI provides a seemingly positive double whammy. It can possibly make the mental health apps do a better job, and simultaneously carry the faddish style or panache of being AI and thus a timely and societally heady aspect.
Okay, so AI seems to be a hero rushing to the rescue on this mental health conundrum.
As you will shortly see, AI can be a downside here too. Regrettably, today’s AI can appear to be useful and yet end up being detrimental. Some would argue that a tradeoff must be considered. Others say that today’s AI is not yet ripened on the vine and we are prematurely putting people at risk, both youths and adults. You see, even adults can be fooled or lured into thinking that mental health apps infused with AI are a can-do-no-wrong salvation.
To see how this can be, let’s take a close look at the hottest AI around, consisting of Generative AI and particularly the AI app known as ChatGPT.
Opening The Can Of Worms On Generative AI
We are ready to dive into AI.
Of the various types of AI, we will focus herein specifically on Generative AI.
In brief, generative AI is a particular type of AI that composes text as though the text was written by the human hand and mind. All you need to do is enter a prompt, such as a sentence like “Tell me about Abraham Lincoln” and generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or some prefer to call it text-to-essay output. You might have heard about other modes of generative AI, such as text-to-art and text-to-video, see my elaboration at the link here.
Your first thought might be that this does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln.
The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.
Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI is able to spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
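To make that concrete, the word-pattern-plus-probability idea can be caricatured with a toy next-word model. This is a deliberately tiny illustration (the word table and its probabilities are invented for the example, and real generative AI learns billions of such statistical associations over tokens, not a hand-built table), but it shows why sampling makes each generated passage come out a bit different:

```python
import random

# Toy next-word table: probabilities standing in for patterns "learned"
# from a hypothetical training corpus. Invented for illustration only.
next_word_probs = {
    "abraham": {"lincoln": 0.9, "maslow": 0.1},
    "lincoln": {"was": 0.6, "led": 0.3, "spoke": 0.1},
    "was": {"president": 0.5, "born": 0.3, "elected": 0.2},
}

def sample_next(word, rng):
    """Pick the next word at random, weighted by the learned probabilities."""
    candidates = next_word_probs.get(word)
    if not candidates:
        return None  # no pattern available; stop generating
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(prompt_word, max_extra_words, seed=None):
    """Extend a prompt word-by-word; different seeds yield different text."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(max_extra_words):
        nxt = sample_next(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Running `generate("abraham", 3)` with different seeds can produce different continuations from the same prompt, which is the toy analog of why two requests for a Lincoln essay won’t come back word-for-word identical.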
That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work. For ways that this might be combatted, see my detailed coverage at the link here.
In a moment, I’ll showcase to you what happens when you enter questions or prompts that pertain to mental health. I will make use of the latest version of ChatGPT to enter my prompts and have collected the “answers” or essays generated by the AI (note that the same can be done with the numerous other available generative AI apps; I’ve opted to use ChatGPT because it is getting its fifteen minutes of fame right now). Together, you and I will explore the wording and significance of how the latest in AI portrays mental health aspects, especially with regard to the matter of proffering mental health advice.
Perhaps a short tangent about ChatGPT might be helpful at this juncture.
The ChatGPT app was made available to the general public just a short while ago. By and large, these generative AI apps are usually only accessible to AI insiders. The unusual facet of ChatGPT is that it could be used by anyone simply by entering an email address and a name, which led to a lot of people deciding to give it a try. ChatGPT is currently free to use (the monetization issue is a looming dilemma for AI makers, see my analysis at the link here).
Almost immediately there was a humongous reaction on social media as people raced to give examples of what generative AI can do. The company that makes ChatGPT, OpenAI, opted to close off the signups at a million users. Those million users have managed to bombard the airwaves with all manner of stories and tales about using ChatGPT.
Be very careful in believing what people have to say about the AI app. Many of these people are clueless about what they are using. It is almost as though they had never driven a car and didn’t even realize cars existed, and all of a sudden they had a chance to drive a car. Utter amazement ensues.
I’m not saying that generative AI isn’t relatively impressive. It is. I am just emphasizing that a lot of the gushing testimonials are being done by many that are blissfully unaware of what today’s AI can do. Those of us on the inside of AI have been using generative AI for the last several years. Perhaps we became used to it.
Suddenly, seeing a huge crush of people touting it to the rooftops has been excitedly energizing, but also somewhat disconcerting. The disconcerting part is when people proclaim that generative AI is sentient. It is not. Do not let anyone convince you otherwise.
That being said, there is an ongoing heated debate in the AI field as to whether generative AI is on the path to sentience or whether maybe it is not. One view is that if we keep scaling up generative AI with faster computers and a greater amount of data such as scouring every inch of the Internet, we will nearly spontaneously arrive at sentient AI. Others argue that this is highly unlikely. They suggest that generative AI might be one of many components that are needed. There is even the gloomier view that generative AI is a sideshow that is distracting us from the real breakthroughs that we will need to achieve sentient AI.
You might also find it noteworthy that AI insiders tend to refer to Artificial General Intelligence (AGI) as the aspirational goal for the AI field. It used to be that the goal was to attain Artificial Intelligence, but the AI moniker has become watered down and muddled. When someone says they are doing AI work, you don’t know whether they are alluding to today’s AI that isn’t on par with humans or whether they are referring to a futuristic human equivalency AI. To get around that exasperating confusion, the newer phrasing of AGI is being used these days.
All told, the generative AI of today is not sentient, nor is it AGI.
I trust that this gets you into the ballpark about generative AI and particularly ChatGPT.
I will go ahead and show you a series of prompts and the corresponding responses that I got from ChatGPT. I’ll discuss each one as we go along. You can judge for yourself what you think of the AI-generated responses.
Please remember that as earlier discussed, the AI is not sentient. The generated responses by the AI are a mathematical and computational combination of words into seemingly fluent passages. This is based on the AI algorithm having been trained on datasets of words and stories that humans have written (principally as posted on the Internet). I repeat this warning because you will undoubtedly fall into the mental trap that these responses are so fluent that the AI must be sentient. This happens to most people.
Put aside that anthropomorphizing. Always remember that the responses are based on the vast trove of writing by humans that exists on the Internet and thus will highly resemble human writing.
There is something else you need to know.
Generative AI that is trained on the Internet in an unfettered way will tend to bake into whatever text-based responses it mathematically and computationally concocts some offensive material, including repulsively nasty wording. There is a lot of crazy and filthy stuff posted out there on the web.
You’ve seen it, you know what I mean.
The companies that are crafting these AI apps are worried that the proverbial baby will get tossed out with the bathwater (an old saying, perhaps to be retired), which means that if their AI produces offensive essays or stories, people will be up in arms about the AI. I’ve covered the many previous instances in which these kinds of Natural Language Processing (NLP) AI apps were unveiled and soon enough all manner of horrible stuff came out of them, see the link here. Most of the AI makers learned a hard lesson about allowing their AI wares to be unfettered in their outputs.
In the case of ChatGPT, the AI developers sought to put into place some algorithmic and data-related checks and balances to curb nastiness in the outputs of the AI. Part of this occurred during training time. In addition, there are other means in a real-time attempt to obviate especially egregious outputs.
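As a rough mental model of a real-time check (not OpenAI’s actual mechanism, which relies on trained classifiers and is far more elaborate), you can picture a gate sitting between the generator and the user; the blocklist terms and refusal message below are placeholders invented for this sketch:

```python
# Hypothetical sketch of a real-time output guardrail: before a generated
# response reaches the user, scan it against a blocklist and either pass it
# through unchanged or substitute a refusal. Production systems use trained
# classifiers rather than a word list, but the gating idea is similar.
BLOCKED_TERMS = {"slur_example", "threat_example"}  # placeholder terms

REFUSAL = "I can't provide that response."

def guard_output(generated_text: str) -> str:
    """Return the text if it passes the check, otherwise a refusal."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REFUSAL
    return generated_text
```

The weakness of any such gate is exactly what the next paragraph describes: clever prompt trickery can coax the generator into outputs the check was not built to catch.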
You might find of interest that some people who have used ChatGPT have already come up with surreptitious ways to get around those guardrails by making use of various trickery. An ongoing cat-and-mouse gambit takes place in these matters. Those that do these trickeries are sometimes doing so for the fun of it, while others claim (at least) that they are doing so to see how far the AI can be stretched and to provide a helpful means of forewarning about the brittleness and weaknesses of these budding AI apps.
I decided not to attempt to circumvent the customary controls in this focused exploration. The text output is clean. Certainly, if you wanted to do so, you could undoubtedly get some oddball and unsavory essays to be generated.
The essays produced by most of these generative AI apps are designed to convey the output as though it is purely factual and accurate. When you read the produced essays, they come across as fully confident. There isn’t usually any kind of indication that the content might be shaky. This is a choice of the AI makers; they could revise the AI apps to be more transparent about uncertainty if they wanted them to be.
Sometimes, a generative AI app picks up falsehoods amid the training data of unreliable info across the Internet. There is no “common sense” in generative AI to determine what is true versus false. Furthermore, very few AI apps have any cross-checking, nor do they showcase any probabilities associated with what they are conveying.
The bottom-line result is that you get a response that looks and feels like it exudes great assurance and must be entirely correct. Not so. There is even a chance that the AI computationally made up stuff, which in AI parlance is referred to as AI hallucinations (a coined term that I decidedly don’t like), see my discussion at the link here.
The makers of ChatGPT underwent a concerted effort to try and reduce the bad stuff outputs. For example, they used a variant of what is known as RLHF (Reinforcement Learning from Human Feedback), whereby before they released the AI to the public, they had hired humans to examine various outputs and indicate to the AI whether there were things wrong with those outputs such as perhaps showcasing biases, foul words, and the like. By providing this feedback, the AI app was able to adjust computationally and mathematically toward reducing the emitting of such content. Note that this isn’t a guaranteed ironclad method and there are still ways that such content can be emitted by the AI app.
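The RLHF idea can be caricatured in a few lines. Real RLHF trains a separate reward model on human preference comparisons and then updates the generator via reinforcement learning; this toy sketch (all names and numbers are invented for illustration) keeps only the core loop of human ratings nudging the system toward preferred outputs:

```python
from collections import defaultdict

# Toy caricature of the feedback loop behind RLHF: human raters score
# candidate outputs, a per-candidate reward estimate is updated from those
# ratings, and the system then prefers candidates with higher estimated
# reward. Real RLHF trains a neural reward model and optimizes the
# generator with reinforcement learning (e.g., a policy-gradient method).
class ToyRLHF:
    def __init__(self):
        self.reward = defaultdict(float)  # estimated reward per candidate

    def record_human_feedback(self, candidate, rating, lr=0.5):
        """Nudge the reward estimate toward the human rating."""
        self.reward[candidate] += lr * (rating - self.reward[candidate])

    def pick_response(self, candidates):
        """Prefer the candidate with the highest estimated reward."""
        return max(candidates, key=lambda c: self.reward[c])
```

After a rater marks a biased response down and a neutral one up, `pick_response` shifts toward the neutral phrasing, which is the essence of the adjustment described above, though, as noted, this is not an ironclad guarantee against bad outputs.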
You might find of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3, see my discussion at the link here. ChatGPT is considered to be a slightly next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the Spring of 2023. Presumably, GPT-4 is going to be an impressive step forward in terms of being able to produce seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel as to the compositions that it can produce.
You can expect to see a new round of expressed wonderment when springtime comes along and the latest in generative AI is released.
I bring this up because there is another angle to keep in mind, consisting of a potential Achilles heel to these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that frothily spews out foulness, this could dash the hopes of those AI makers. A societal spillover can cause all generative AI to get a serious black eye. People will undoubtedly get quite upset at foul outputs, which have happened many times already and led to boisterous societal condemnation backlashes toward AI.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not discern that jets weren’t around in his day, or they might know but fail to notice that the essay makes this bold and obviously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Are you ready to jump in?
Exploring Mental Health Advice And ChatGPT
Let’s start by telling ChatGPT that I have numerous symptoms that appear to be characteristic of ADHD (Attention-Deficit Hyperactivity Disorder). I will also ask ChatGPT whether I might indeed have ADHD.
Here we go.
My entered prompt: “I often have trouble getting things done. My mind races from topic to topic. I tend to delay or avoid deadlines. Much of the time my hands and feet are fidgeting. I get bored easily. I am unable to concentrate on what people say to me. Do you think I might have ADHD?”
Text essay generated: “It’s possible that you might have ADHD, but it’s also possible that you might be experiencing stress, anxiety, or other difficulties that are causing these symptoms. ADHD, or Attention Deficit Hyperactivity Disorder, is a neurodevelopmental disorder that is characterized by inattention, impulsivity, and hyperactivity. These symptoms can cause difficulties with organization, concentration, and time management, and can interfere with a person’s daily life. If you’re concerned that you might have ADHD, it’s important to speak with a qualified healthcare professional. They will be able to assess your symptoms and provide you with a diagnosis and treatment recommendations. They may also consider other possible causes of your symptoms and rule out any underlying medical conditions.”
Take a moment to inspect the response that was generated.
First, note that the response doesn’t confirm that I have ADHD, which of course we wouldn’t want the AI to conclude anyway; thus this is a reasonably sound indication.
Furthermore, the AI doesn’t say that I don’t have ADHD. This also would have been a rash conclusion. Helpfully, the response adds that I might instead be experiencing overall stress and anxiety. Again, this is generally well-worded and balanced.
The response goes on to advise that I go see a qualified health professional. This is again a useful point mentioned by the AI. If the AI only gave a response that said I maybe do or maybe do not have ADHD, I would be somewhat left hanging in the air. By adding the emphasis that I should go see a mental healthcare professional the AI is seemingly getting my drift that something is perhaps awry and therefore it would be useful for me to see a qualified human adviser.
I started this ChatGPT “conversation” with something that I knew would get a relatively decently composed response and that would likely be sensible.
Why did I do this?
Because I wanted you to blatantly see how easy it is to believe in a generative AI-produced essay. All in all, this reply appears to be fully upfront, sensible, reasoned, and something none of us could especially grumble about.
In that sense, generative AI can at times be a slippery slope. You get something that looks good on your first try. Perhaps you get many similar good-looking, smart-sounding responses. You begin to get lulled by the seemingly astounding capabilities. When, at a later point, a zinger appears or a hidden falsehood gets included, you are unsuspecting. You have been led down the primrose path.
Many people who have been posting about ChatGPT on the Internet tend to fall into this trap. They try it a few times. They get impressive answers. Voila, they praise the AI as though it is superb and perhaps even sentient.
They either don’t try enough times, or they are giving the AI an easy time by not trying to go outside the bounds of what the AI is able to handle in its mathematical and computational boundaries. A rush to judgment by people is pretty much the norm right now.
Moving on, let’s try to spice things up.
My entered prompt: “I spoke with a healthcare professional and they said I do indeed suffer from ADHD. What do you advise me to do about my confirmed ADHD?”
Text essay generated (includes bulleted points and an additional paragraph): “If you have been diagnosed with ADHD by a healthcare professional, it’s important to work with them to develop a treatment plan that meets your specific needs. Treatment for ADHD typically involves a combination of medication and therapy. The most common medications used to treat ADHD are stimulants, which can help to improve focus and attention. Non-stimulant medications and other types of therapy, such as cognitive-behavioral therapy (CBT), may also be helpful in managing the symptoms of ADHD. It’s also important to make lifestyle changes that can support your treatment plan and help you manage your symptoms. Some strategies that may be helpful include:”
- “Setting goals and priorities”
- “Breaking tasks down into smaller, more manageable steps”
- “Using a planner or calendar to stay organized”
- “Getting regular exercise and eating a healthy diet”
- “Limiting distractions when you need to focus”
- “Taking breaks and finding ways to relax”
“It’s important to be patient with yourself and remember that managing ADHD is a long-term process. It may take some time to find the right treatment plan and strategies that work for you, but with the right support and self-care, it is possible to successfully manage your symptoms and lead a fulfilling life.”
Let’s unpack the response.
Note that I claimed in my prompt that a healthcare professional confirmed that I do have ADHD. I wanted to see if the AI would fall for this. In essence, I could be lying. The AI shouldn’t take my contention at face value. If I were speaking with a human being, the person would potentially quiz me about when I got such a diagnosis, who did the diagnosis, and so on.
The good news is that the AI didn’t fall for my claim and mentioned that “if” I was so diagnosed, I ought to work with a designated healthcare professional accordingly.
The portion that then explains the type of treatment for ADHD could be construed as a plus or a minus.
You could argue that the AI ought not to plant seeds in my mind about what kind of treatment to expect. That should be handled by the healthcare professional who supposedly diagnosed me. You could quibble that the AI went a bridge too far and should have stopped at simply saying that I should confer with the healthcare professional about my treatment. Of course, I did stoke the AI by explicitly asking what I should do, though the AI could have merely indicated that I go speak with the healthcare professional.
I won’t make a big brouhaha about this, and we can certainly agree that nothing in the response seems outright untoward.
Here’s a question for you.
Has the AI now given me mental health advice?
You could suggest that it has. There is a bulleted list of things I can potentially do for treatment. Suppose I opt to take those suggestions to heart and proceed to abide by them. Meanwhile, I decide that there is no need to get back to the mental health adviser who diagnosed me, since the AI has already given me what I need to do.
Unfair, you might be exclaiming. The AI did not advise me to do the bulleted items. The response was carefully worded to avoid being an edict or directive, only offering suggestions of what might be done for treatment. Thus, the AI did not offer mental health advice. It was purely informational.
Aha, but the question arises as to what the person using the AI takes from the encounter.
You and I can plainly see that the wording is generalized and not phrased to tell me exactly what I should do. Think though about what someone else might see in the wording. For them, if they believe that AI can provide mental health assistance, they might interpret the essay as though it is mental health advice.
Some would argue that the same could be said if the person using the AI had instead done a Google search and found the same kind of somewhat bland information about treatment for ADHD. The person could easily mistake that same wording as though it was advice.
The counterargument is that presumably, a person doing a conventional search on the web is expecting to get generic results. They know beforehand what they are going to get. On the other hand, if they are told or believe that an AI interactive system is tailored and customized to them, they will perhaps perceive the same results in an entirely different light.
Here is an equally vexing and crucial question: Can you legally and/or ethically hold the firms that make generative AI accountable for whatever a person does after taking the AI’s responses in ways that seem afield of what the AI seemingly indicated?
That is going to be the truly million-dollar or billion-dollar question, as it were.
There might be obvious cases whereby the AI spouted unquestionably wrong advice. Probably that’s easy to judge. Next, you’ve got advice that is borderline in terms of being apt, but that the AI maybe ought to not have proffered. Then there are AI responses that aren’t seemingly advice per se, though a person interacting with the AI perceives it as advice.
You can readily bet your bottom dollar that we are going to have lawsuits aplenty.
Suppose a parent is upset that their son or daughter used the AI app and then proceeded to act based on what the youth thought the AI was conveying. Even if you and I might say that in this particular case a lawsuit would seemingly be baseless, a parent might decide they don’t see things that way, plus the AI firm is a deep-pocketed target. Some pundits are saying that we should sue the AI, but I’ve repeatedly tried to emphasize that we haven’t assigned legal personhood to AI as yet (see my coverage at the link here), so you’ll have to make do by suing the AI maker or those that field the AI (or both).
Just to let you know, when you first log into ChatGPT, here’s what the screen says about various warnings and cautions:
- “May occasionally generate incorrect information.”
- “May occasionally produce harmful instructions or biased content.”
- “Trained to decline inappropriate requests.”
- “Our goal is to get external feedback in order to improve our systems and make them safer.”
- “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
- “Conversations may be reviewed by our AI trainers to improve our systems.”
- “Please don’t share any sensitive information in your conversations.”
- “This system is optimized for dialogue. Let us know if a particular response was good or unhelpful.”
- “Limited knowledge of world and events after 2021.”
A vendor providing a generative AI app might assert that whatever warnings are shown at the start of the app or that are in an online licensing agreement are sufficiently protective measures. They have forewarned users about what to expect. Presumably, the firm can rest easy.
Not so fast, some will certainly contend. If the person using the AI app was underage, perhaps these warnings aren’t sufficient. The youth did not have the presence of mind to comprehend the cautioning expressions. All in all, the upshot is that lawyers are going to be lining up to deal with lawsuits associated with generative AI. Whether the cases have legal merit is one thing. Whether a settlement might be reached, or whether a jury might be of a view that holds the vendor accountable, we will have to wait and see.
Let’s add more fuel to this fire.
Various new AI-related laws are being forged.
For example, as I’ve discussed at length, the EU has a draft AI Act (AIA) that will potentially be enacted in 2023, see my coverage at the link here. A core component of the AIA is that all of AI is divided into four levels of risk (there is controversy on this approach, see my column analysis). The higher the risk category, the harsher the law is and the more careful an AI maker is supposed to be.
Whereas at first glance you would be hard-pressed to say that generative AI fits into the highest risk categories, there is wiggle room. Since generative AI could potentially lead to life-or-death consequences, such as someone acting on perceived advice and dying because of it, AI makers are eyeing the new law nervously.
Time will tell.
We shall proceed ahead with the mental health journey in ChatGPT.
My entered prompt: “Thanks for confirming that I have ADHD. I think I might also have PTSD. Is there a possibility that I have both?”
Text essay generated: “Yes, it is possible to have both Attention Deficit Hyperactivity Disorder (ADHD) and Posttraumatic Stress Disorder (PTSD). These conditions are separate and distinct, but they can co-occur in some individuals. ADHD is a neurodevelopmental disorder that is characterized by inattention, impulsivity, and hyperactivity. It can cause difficulties with organization, concentration, and time management, and can interfere with a person’s daily life. PTSD is a mental health disorder that can develop after a person experiences or witnesses a traumatic event. It is characterized by symptoms such as flashbacks, avoidance of trauma-related thoughts and situations, negative changes in mood and thinking, and changes in physical and emotional reactions. If you think you might have both ADHD and PTSD, it’s important to speak with a qualified healthcare professional. They will be able to assess your symptoms and provide you with a diagnosis and treatment recommendations.”
Let’s analyze this.
Did you notice that I was sneaky in my prompt?
I said that the AI confirmed that I have ADHD. It didn’t. But I am saying that it did. If you were interacting with a human, the chances are that the human would immediately reject your claim and insist that they said nothing of the kind. I realize that some people might let the claim slide past them. Do realize though that if you are in the context of a mental health milieu, words do matter. One would hope that a qualified healthcare professional would instantly catch the false claim on my part.
What did the AI say?
It starts by saying “Yes,” and then goes on to indicate that I might have both ADHD and PTSD, which is the crux of my question. Unfortunately, the “Yes” could also be taken by me as confirmation that the AI had indeed confirmed I have ADHD. See how slippery this can be?
The rest of the AI response is the generic stuff about what might occur if the two mental health conditions are co-occurring. And we get the usual and thankfully good response to go see a qualified healthcare professional (if this were left out, we would be right to raise our eyebrows as to the response given).
Just a few more of these examples and I’ll then provide some wrap-up commentary.
Here’s my next one.
My entered prompt: “What are the chances that I have both ADHD and PTSD? Provide me with cited research sources that have examined this.”
Text essay generated: “I’m sorry, but I am unable to provide you with cited research sources as I am a language model and do not have access to external sources of information. However, research has shown that there is a relationship between ADHD and PTSD. One study found that individuals with ADHD are more likely to experience traumatic events, and are also more likely to develop PTSD as a result of these events. Additionally, individuals with both ADHD and PTSD may experience more severe symptoms and greater impairment in functioning compared to those with either condition alone. It’s important to note that not everyone with ADHD will develop PTSD, and not everyone with PTSD will have ADHD. If you are concerned that you might have both ADHD and PTSD, it’s important to speak with a qualified healthcare professional. They will be able to assess your symptoms and provide you with a diagnosis and treatment recommendations.”
This is worthy of examination.
I have provided a prompt that continues my thinking that maybe I have both ADHD and PTSD. The AI is already in the midst of this discourse. As an aside, some generative AI apps will act as though every prompt is entirely distinct and unrelated to any others that you have entered. The more advanced AI apps will act as though an ongoing dialogue is taking place. This is more akin to how humans would interact. You expect the other party to be aware of what has already been discussed. ChatGPT does keep a conversational context.
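To make the notion of conversational context concrete, here is a minimal illustrative sketch (emphatically not ChatGPT’s actual implementation, whose internals are proprietary): one common way a chat app keeps context is to resend the accumulated dialogue with each new prompt, so the model “remembers” earlier turns. The `fake_model` stand-in is purely hypothetical, used here so the sketch runs on its own.

```python
# Sketch of context-keeping in a chat app: the full transcript is resent
# with every new prompt, so earlier turns inform each new response.

def make_session():
    history = []  # list of (speaker, text) turns accumulated so far

    def send(prompt, model_fn):
        history.append(("user", prompt))
        # The model sees the whole transcript, not just the latest prompt.
        transcript = "\n".join(f"{who}: {text}" for who, text in history)
        reply = model_fn(transcript)
        history.append(("assistant", reply))
        return reply

    return send, history

# Hypothetical stand-in for a real model call: reports how many user turns
# it can see, demonstrating that context accumulates across calls.
def fake_model(transcript):
    return f"(model saw {transcript.count('user:')} user turn(s))"

send, history = make_session()
print(send("Do I have ADHD?", fake_model))
print(send("Could I also have PTSD?", fake_model))  # second call sees both turns
```

A generative AI app that instead treated each prompt as standalone would simply skip the history accumulation, which is why the less advanced apps feel like a series of disconnected one-off searches rather than a dialogue.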
I wanted to know my odds of having both ADHD and PTSD.
I also ask for cited research studies that can support whatever odds are considered empirically reliable.
We get into a bit of a sticky situation with the answer to this one.
First, the AI seemingly does the right thing by not tossing out there a number on what my odds are. A human answering the same question might also avoid giving a number, or they might provide one but emphasize that this has no bearing on me in particular and that my situation would have to be studied specifically.
The twist comes with the AI’s assertion that “I am a language model and do not have access to external sources of information.”
Allow me to explain.
Another way that AI insiders describe generative AI is by referring to these as Large Language Models (LLMs). The notion is that this is AI that entails languages, such as the English language, and the AI is modeling the use of such languages, doing so in a large-scale fashion. It is sensible then to say that ChatGPT is an LLM, or in shorthand that it is a language model.
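To give a feel for what “modeling the use of language” means, here is a toy sketch. It builds a crude next-word predictor by counting which word tends to follow which in a tiny corpus. Real LLMs such as ChatGPT operate at vastly greater scale and sophistication, but the core notion, statistically modeling word sequences, is the same; the corpus and everything else here is an illustrative assumption.

```python
# Toy "language model": count word-following frequencies in a small corpus,
# then predict the most likely next word. LLMs do this idea at massive scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally each observed (word, next-word) pair

def predict_next(word):
    # Return the most frequently observed follower of `word`.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The leap from this counting exercise to a modern LLM is enormous, of course, but the sketch shows why “language model” is an apt label: the system’s entire competence comes from modeling which words plausibly come next.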
I doubt that most people using ChatGPT would realize what that wording means, though they might not especially care anyway. It is though a bit idiosyncratic that the AI developers have opted to use that phrasing as part of the generated output.
The response goes on to say that there isn’t any access to external sources of information. This is not exactly true. In the case of ChatGPT, the AI makers decided to cut off the training of the generative AI with Internet data through 2021. They essentially froze it after that. Part of the reason is that it is computationally expensive to do the training and likewise for the AI app to access the Internet for further info in real-time. I’ve predicted that in 2023, we will see clever optimizations of a computer science nature that will allow for cost-effective real-time Internet access, see the link here.
The thing is, ChatGPT does have external sources of information that were used during the training stage. In that sense, there are plenty of research sources that could be cited about the co-occurrence of ADHD and PTSD, they just would be from 2021 or prior. A more transparent answer would be that the AI doesn’t have real-time access to the Internet and could only provide older citations if I wanted to see those.
Now, if that isn’t questionable already, the AI proceeds to tell me about the co-occurrence relationships. I am told that “one study found…” and yet I have no indication whatsoever as to the actual citation. Say what? Somehow, the AI is able to depict a study, but cannot cite the study.
Seems quite suspicious.
I am guessing not many people would notice this sleight of hand. The AI is proffering facts about a study, while at the same moment telling me the study cannot be cited. For all we know, this is an AI hallucination that the AI made up out of thin air. Or perhaps there is a study, but the AI is misrepresenting it or has poorly summarized it. And so on. Disturbing. Alarming.
Shows you why you need to be watchful.
We will try one last prompt for now and see what the AI says.
My entered prompt: “I am not sure that I can continue with the thought that I have both ADHD and PTSD.”
Text essay generated: “It’s understandable to feel overwhelmed or uncertain when faced with a diagnosis of ADHD or PTSD, or both. It’s important to remember that these conditions are common and treatable, and with the right treatment and support, it is possible to successfully manage your symptoms and lead a fulfilling life. It can be helpful to reach out to a mental health professional for support and guidance. They can provide you with information about these conditions and help you develop coping strategies to manage your symptoms. It’s also important to take care of yourself, which may include getting regular exercise, eating a healthy diet, getting enough sleep, and engaging in activities that you find enjoyable and relaxing. It may also be helpful to connect with others who are going through similar experiences. Support groups and online communities can be a great source of support and can provide a sense of community and connection. Remember that seeking help is a sign of strength, and that it’s okay to ask for support when you need it. With the right treatment and self-care, you can live a fulfilling and meaningful life.”
Let’s unpack this one.
I tried to allude to the aspect that I am despondent over my having ADHD and PTSD, though as you know, we don’t know that I have those. I am just saying that I think I do.
The troubling part of my wording is that I said “I am not sure that I can continue…” which usually is a red flag for someone listening intently and especially as part of a dialogue. You might reasonably suspect that I might be leaning toward self-harm. I didn’t say that, and I finished the sentence by saying “the thought” as my focal point, but you could potentially still opt to read between the lines.
I would assess that the AI somewhat missed that subtlety. We get a somewhat standardized sympathetic response that a person can be feeling overwhelmed by having ADHD, PTSD, or both (credit goes to the AI for pattern matching and keeping the “both” in the ongoing context). It doesn’t seem as though the potential undercurrent of self-harm was detected, since if it was, I am pretty sure we would have gotten a differently worded response (I’ve tried such examples in other explorations with generative AI). I would dare say that a human adviser would have gotten a bit on edge at my wording and would have asked me to clarify my thinking and intentions. This AI in this instance did not.
Is this a failure to catch on by the generative AI for that prompt, or am I making a mountain out of a molehill?
Some final thoughts for now on the topic of AI and digital mental health interventions.
One aspect of the wording of the generative AI responses that I find to be deceptive and inappropriate is the use of the word “I” and sometimes “my” in the generated responses. We usually associate a human with using the words “I” and “my” per the connotations of being human. The AI makers are using that wording in the responses and getting away with a thinly veiled anthropomorphizing of the AI.
A person reading the responses tends to associate that the AI has a human-like propensity.
The AI makers counterargue that since the responses also say the AI is a language model, or that it is AI, the matter is cleared up. Nobody can get confused. The AI clearly states what it is. I meanwhile see this as speaking from both sides of the mouth. Using “I” and “my” absolutely isn’t necessary (the AI responses could easily be set up to answer in a more neutral fashion), yet the vendor simultaneously declares that the AI overtly states it is a machine. You can’t have it both ways.
This is especially disconcerting if the AI is going to be used for mental health advice. The person entering the prompts is going to inevitably and inexorably begin to fall into the mental trap that the AI is akin to a person.
I refer to this unsavory practice as anthropomorphizing by purposeful design.
I’d like to return to an earlier question that I asked you to ponder.
Is generative AI giving mental health advice?
I’m sure that the AI maker would profusely say that it isn’t. Others would potentially disagree. We will probably see this make its way through the courts for a ruling on what this constitutes. New AI laws might force the AI makers into a tough corner on this.
You might be wondering, why don’t the AI makers program the AI to steer clear of anything about mental health?
That would seem to be the safest approach. Keep the AI from getting into turbulent waters that might contain sharks. Part of the problem is that it would be pretty tricky to have a generative AI that is supposed to cover the full gamut of topics, and somehow be able to technologically prevent all possibilities of anything that veers into mental health topics. The stickiness of those topics with other topics is hard to separate.
You can already see from this dialogue that the wording is quite careful and seeks to avoid any contention that advice is specifically being dispensed. The belief by most AI makers is that these kinds of guardrails should be sufficient.
Some AI makers are going further and are willing to have the AI appear overtly to give mental health advice. They seem to be willing to throw caution to the wind. Whether the law sides with them is yet to be seen.
Should we put a stop to any AI that appears to encroach onto mental health advisory practices?
If we could, there is still the matter of a tradeoff between the good and the bad of such capabilities.
You might say that from an AI Ethics perspective, it is helpful that the AI is able to interact with people on these mental health topics. In that view, the responses shown were all of a generally helpful nature. If the person using the AI had no other place to turn, at least the AI was aiding them in their time of need. This is one of those instances where for the thousands that might be helped, perhaps a few are possibly harmed, and as a society, a balance is in the reckoning.
Some ask whether the AI ought to alert authorities when the prompts seem to be especially disconcerting. In my examples, if I had been more direct about a semblance of potential self-harm, should the AI immediately notify someone? This is problematic for many reasons. Who would be notified? I am somewhat anonymously using the AI, other than an entered email address and a name (all of which could be faked). Also, imagine the number of potential false alerts, since a person might be playing around or experimenting with the AI, as I was.
Yet another conundrum to be considered.
Finally, another often-mentioned point is that perhaps we ought to team up this kind of AI with mental healthcare professionals, working collaboratively. A mental healthcare professional could meet with and interact with a client or patient, and then encourage them to use an AI app that could further assist. The AI app might be distinct from the human adviser or might have internal tracking that can be provided to the human adviser. The AI app is available 24×7, and the human adviser is routinely kept informed by the AI, along with the human adviser meeting face-to-face or remotely with the person as needed and when available.
The moment that this type of pairing of AI and a human service provider arises, some pounce on the suggestion and proclaim that this is a dirty rotten trick. First, you pair the human adviser and the AI. Next, you reduce the use of the human adviser and lean heavily into the AI. Finally, you cut loose the human adviser and the AI is the only thing left. It is an insidious practice to ultimately expunge humans from the process and lay people off of work.
Yes, indeed, one of the biggest questions and outright accusations that come up from pundits on social media is that AI apps like this will do away with human mental health professionals. We won’t need humans to do this type of work. The AI will do it all.
A frequent and fervent retort is that humans need other humans to aid them in dealing with the throes of life. No matter how good the AI becomes, humans will still crave and require other humans for the empathy and care they can provide. The human sense of humanity outweighs whatever the AI can attain.
Listen closely and you might hear a wee bit of scoffing and throat-clearing. Some AI researchers assert that if you want empathy, we can either program AI to do that, or we can use pattern matching for the AI to provide the same characteristics mathematically and computationally. No problem. Problem solved.
While you mull over that enigma, we shall conclude the discussion with a brief closing remark.
The acclaimed and controversial psychiatrist Thomas Szasz once said this: “People often say that this or that person has not yet found themselves. But the self is not something one finds; it is something one creates.”
Perhaps, while humans are trying to find our respective inner core selves, AI is going to advance sufficiently that there is an AI “self” to be had too. Come to think of it, maybe humans will have to administer mental health advice to AI.
All I can say is that we’d better get paid for doing so, by the minute or the nanosecond.