Breaking Down Barriers: Examining the Potential of AI and LLMs in Making Therapy More Accessible

By Dr. Alyson Carr, LMHC & Eric Singer

The Rise of AI and LLMs and Their Implications for Therapy and Mental Healthcare

At the time of writing, at the start of Q2 ’23, artificial intelligence (AI) is rapidly changing the way we communicate, learn, work, and play. Most of the tech-adjacent public in the United States has probably heard about ChatGPT by now - from a local TV news segment, an article online, a favorite podcast, direct interaction with ChatGPT itself, or a relative who likes using it to write ‘funny’ poems with quirky substitutions and stylistic instructions. This sudden ubiquity shouldn’t come as a surprise: according to a February report by UBS analysts, ChatGPT may very well be the fastest-growing consumer application in history, amassing a stunning 100 million users in just two months.

ChatGPT represents one of the most promising and exciting developments in AI in recent years: the emergence of large language models (LLMs), deep-learning algorithms that can generate natural language responses after being trained on massive amounts of data. LLMs can perform various tasks across domains and languages, such as writing, translating, summarizing, coding, or conversing. The current zeitgeist is fixated on their potential to revolutionize how people perform their work, and they are already beginning to challenge our long-established cultural assumptions about ways of working.

Take a look at what Dr. Jim Fan, AI Research scientist at NVIDIA, had to say recently on Twitter, citing work from OpenAI and UPenn (research source):

In his thread, Dr. Fan elaborates on the figures in his first tweet (above): the authors of the paper conclude that some job types, when “using the described LLM via ChatGPT…can reduce the time required to complete the DWA [Detailed Work Activity] or task by at least half (50%)” [emphasis ours]. See the excerpted table from the paper below.

Fig 1.1 - Table, below, is an excerpt from the working paper of Eloundou et al, March 27, 2023: “GPTs are GPTs” which Dr. Fan references in his thread.

Okay, so at this point, you’re probably willing to grant us the premise: ‘AI is here, a lot of people are excited about it, and it has the potential to change how millions of people do work in some appreciable capacity.’ But what if LLMs could also revolutionize how we approach mental healthcare (MH)? What if LLMs could act as virtual mental health counselors, personality assessors, or reasoning enhancers? And what are the benefits and risks of using LLMs for mental healthcare? 

If you’ve read this far and noticed the title of this post, our aim is to explore some of these questions and more. We discuss how LLMs could be utilized to enhance therapy and mental health care in various ways, review some of the existing studies and systems that have used or interacted with LLMs for therapy or mental health care, and finally suggest some future research directions and challenges for improving the quality and reliability of LLMs for therapeutic purposes.

A Review of Relevant Current Research and Systems on the use of LLMs for MH 

One potential application of LLMs in therapeutic domains is to serve as virtual mental health counselors. Sufficiently sophisticated LLMs could offer empathetic and supportive feedback, as well as evidence-based interventions and resources for various psychological challenges - and they are already being implemented in practice. 

For instance, a virtual mental health counselor named Serena employs a natural language processing model trained on thousands of person-centered counseling sessions from licensed psychologists. Supposedly, Serena can engage with users about their affective, cognitive, and behavioral patterns, and facilitate their emotional exploration and goal-setting. Serena can also recommend relevant articles, videos, podcasts, or apps for users to enhance their knowledge and skills regarding their psychological issues. According to their website, users have expressed positive feedback and satisfaction with Serena’s service; but she has her limitations. In the company’s FAQ they’re careful to disclaim as much, indicating that Serena is not intended for, and never will be, “address[ing] acute or emergent mental health issues”:

Fig 1.2 - Excerpt from serena.chat’s FAQ

Another LLM that has shown remarkable capabilities in therapeutic contexts is a familiar one - ChatGPT, developed by OpenAI. ChatGPT can generate natural language responses based on user input, mood, and progress in therapy, and these kinds of responses could be beneficial and feel supportive to users who seek therapy or intervention online or offline. A study by Rao et al. (2023) proposed a general framework for evaluating whether ChatGPT can assess human personalities - a capability relevant to tailoring therapeutic interactions to the individual user.

However, these early findings concerning ChatGPT’s capabilities in therapeutic contexts are still very recent, many of them available only as pre-prints on arXiv. Meanwhile, researchers are already beginning to publish findings on OpenAI’s newest, most sophisticated LLM, GPT-4, which was recently released to the public via ChatGPT Plus - a paid tier of the otherwise-free ChatGPT interface - and via waitlist access.

GPT-4 is truly a generational leap forward in capability and scale compared to the previous versions that powered ChatGPT (GPT-3/3.5) - there are numerous studies that illuminate the gulf between versions, but it is simply easier to see for yourself. GPT-4 was trained using an unprecedented scale of compute and data and can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting (Bubeck et al., 2023). Moreover, GPT-4 has demonstrated signs of artificial general intelligence (AGI), such as the ability to impute unobservable mental states to others (theory of mind), which is central to human social interactions, communication, empathy, self-consciousness, and morality (Kosinski, 2023). GPT-4 also has a safety reward mechanism that reduces harmful outputs and assists in producing ethical responses. As such, its potential applications and implications for therapy and mental health care should be seriously and responsibly explored. 

Going Spelunking with the Mad Hatter: Emerging LLM Capabilities and Their Implications for Future Applications in Therapy and MH

A study by Huang et al. (2022) showed that LLMs can self-improve by generating high-confidence rationale-augmented answers for unlabeled questions using chain-of-thought prompting and self-consistency. This could imply that LLMs like GPT-4 could learn to provide more accurate and relevant responses for therapy and mental health care without extensive supervision. The authors suggested that this approach could improve the general reasoning ability of LLMs and achieve state-of-the-art-level performance on various tasks.

Relatedly, Wei et al. (2022) showed that chain-of-thought prompting - prompting a model to produce intermediate reasoning steps before its final answer - substantially improves LLMs’ performance on arithmetic, commonsense, and symbolic reasoning tasks. This could indicate that LLMs like GPT-4 could learn to provide more careful and well-reasoned responses for therapy and mental health care.

Another study by Shinn et al. (2023) proposed Reflexion, an approach that endows an agent with dynamic memory and self-reflection capabilities to enhance its existing reasoning trace and task-specific action choice abilities. This could suggest that LLMs like GPT-4 could learn to provide more optimal and goal-oriented responses for therapy and mental health care by utilizing success detection cues to improve their behavior over long trajectories. The authors showed that this approach could improve the decision-making and problem-solving ability of LLMs, enabling them to complete decision-making tasks in AlfWorld environments and knowledge-intensive, search-based question-and-answer tasks in HotPotQA environments. 
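The self-consistency technique that Huang et al. build on can be sketched in a few lines: sample several chain-of-thought answers for the same prompt and keep the majority final answer. Here is a minimal illustration; the `fake_llm` function is a deterministic stand-in we made up for a real (stochastic) LLM call:

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, prompt, n_samples=5):
    """Sample several answers for one prompt and return the
    most frequent one (majority vote over sampled reasoning paths)."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    majority, _count = Counter(answers).most_common(1)[0]
    return majority

# Deterministic stand-in for a stochastic LLM: cycles through
# plausible sampled final answers ("17" is the majority here).
_samples = cycle(["17", "21", "17", "17", "12"])
def fake_llm(prompt):
    return next(_samples)

print(self_consistency(fake_llm, "Q: What is 8 + 9?", n_samples=5))  # prints 17
```

In a real pipeline, `sample_answer` would call the model with a nonzero temperature so that each sample explores a different reasoning path; the vote then filters out idiosyncratic mistakes.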

These studies demonstrate the potential of LLMs to self-improve by using various techniques and achieve impressive results on a number of tasks. However, none of these studies address the challenge of solving complex AI tasks that span different domains and modalities, which is an important step toward advanced artificial intelligence. A recent paper by Shen et al. (2023) proposes a novel system called HuggingGPT that uses ChatGPT to connect various AI models in Hugging Face to solve complex AI tasks across different domains and modalities. The paper claims that HuggingGPT can leverage the language understanding, generation, interaction, and reasoning abilities of ChatGPT and the expertise of hundreds of AI models in Hugging Face to handle tasks in language, vision, speech, video, and cross-modality. The paper also reports impressive results on several challenging AI tasks such as image captioning, text summarization, text-to-speech, text-to-video, and more, demonstrating the potential of HuggingGPT for advancing artificial intelligence. This paper suggests a new way of designing general AI systems that can handle complicated AI tasks by combining the strengths of LLMs and expert models. This could imply that LLMs like GPT-4 could learn to adapt to new tasks and domains without forgetting previous knowledge or requiring retraining. 
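At a very high level, the controller pattern HuggingGPT describes - an LLM plans the tasks, specialist models execute them, and the results are composed into an answer - can be sketched as a toy loop. Everything below (the canned `plan` function and the `EXPERTS` table) is our own stand-in for ChatGPT and the Hugging Face models, purely to show the shape of the design:

```python
# Toy sketch of the plan -> dispatch -> compose loop behind HuggingGPT.
# The "planner" and "experts" are plain Python stand-ins, not real models.

EXPERTS = {
    "image-captioning": lambda x: f"caption for {x}",
    "text-to-speech":   lambda x: f"audio of {x!r}",
}

def plan(request):
    """A real system would ask the LLM to emit this task list;
    here the plan is hard-coded for illustration."""
    if "describe" in request:
        return [("image-captioning", "photo.jpg")]
    return [("text-to-speech", request)]

def run(request):
    # Dispatch each planned task to its expert, then compose the results.
    results = [EXPERTS[task](arg) for task, arg in plan(request)]
    return "; ".join(results)

print(run("describe photo.jpg"))  # prints: caption for photo.jpg
```

The design choice worth noticing is the separation of concerns: the LLM never performs the specialist tasks itself, it only translates between natural language and a task/argument structure the experts understand.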

Regardless of these impressive results, GPT-4 is still far from being a true artificial general intelligence (AGI) system and faces many limitations and challenges in generating natural language responses in any context. For example, GPT-4 may not always be factual, precise, reliable, coherent, consistent, or sensitive in its responses (Bubeck et al., 2023). This is because GPT-4 relies on statistical patterns learned from large-scale text corpora, which may not reflect the reality, logic, or norms of human communication. GPT-4 may also generate responses that are inaccurate or imprecise due to the limitations of natural language processing (Bubeck et al., 2023). For instance, GPT-4 may struggle with ambiguity, anaphora, negation, or common-sense reasoning. GPT-4 may also face ethical, social, legal, and professional issues such as privacy, consent, confidentiality, transparency, accountability, fairness, safety, security, and ethics (Bubeck et al., 2023). These issues may arise from the data sources, methods, applications, or impacts of GPT-4 and its responses. 

[Author’s Note: We must emphatically caution once more that responsible and ethical implementation of AI in (mental) healthcare contexts requires researchers, practitioners, and the affected public generally to be aware of the strengths and weaknesses of technologies like GPT-4, LLMs like it, and their responses. AI researcher Dr. Károly Zsolnai-Fehér, of the popular YouTube channel Two Minute Papers, has a saying when reviewing recent developments in AI research: “The First Law of Papers says that research is a process - do not look at where we are, look at where we will be two more papers down the line.”]

In addition to the examples discussed so far - ChatGPT, Serena, GPT-4, and HuggingGPT - other noteworthy LLMs developed by different organizations and researchers include BLOOM, by BigScience, and LaMDA, by Google. These LLMs differ from ChatGPT in their size, language, data, tasks, performance, and ethics as follows:

  • To reiterate, GPT-4 is the most capable LLM to date (OpenAI has not disclosed its parameter count). It can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more. It also has signs of artificial general intelligence (AGI), such as the ability to impute unobservable mental states to others (theory of mind), which is central to human social interactions and communication.

  • BLOOM is a 176-billion-parameter multilingual LLM that can perform various tasks across languages and domains. It is designed to be as transparent as possible, with researchers sharing details about the data it was trained on, the challenges in its development, and the way they evaluated its performance. It aims to democratize access to cutting-edge AI technology for researchers around the world.

  • LaMDA is a conversational LLM that can engage in open-ended dialogue on any topic. It can generate coherent and relevant responses that maintain the context and flow of the conversation. Like GPT-4, it also has a safety layer that reduces harmful outputs and assists in producing ethical responses.

These LLMs could also have benefits and challenges for therapy and mental health care. Examples include:

  • GPT-4 could be more empathetic and adaptive to users’ needs and preferences due to its general intelligence and theory of mind. However, it could also be more unpredictable and unreliable due to its lack of supervision and guidance.

  • BLOOM could be more accessible and trustworthy for users who need therapy in different languages due to its transparency and multilingualism. However, it could also face more difficulties and risks in ensuring data quality and privacy due to its open-access and collaborative nature.

  • LaMDA could be more engaging and interactive for users who need social support due to its conversational skills and flexibility. However, it could also be more prone to misinformation or manipulation due to its dependence on external sources and services.

The Road Ahead: Research Directions and Challenges for Enhancing LLMs for Therapeutic Purposes

An example of a relevant data source that could help overcome some of the challenges of training LLMs to be more factual, precise, reliable, coherent, consistent, and sensitive is the Psychotherapy Corpus, a collection of over 2,500 transcripts of psychotherapy sessions from different modalities, such as Cognitive Behavioral Therapy (CBT), Psychodynamic therapy, and Humanistic therapy. The Psychotherapy Corpus also includes annotations of therapist and client utterances, such as speech acts, emotions, topics, and outcomes. This data source could be a useful tool for training LLMs to generate natural language responses that are appropriate for different therapeutic contexts, tasks, and domains.
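Training on annotations like these presumes some record structure for each utterance. One plausible shape is sketched below - the field names are our own illustration of the annotation types described above, not the corpus’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Hypothetical record mirroring the annotation types the corpus
    is described as containing (speech acts, emotions, topics, etc.)."""
    speaker: str      # "therapist" or "client"
    text: str
    speech_act: str   # e.g. "open question", "reflection"
    emotion: str      # e.g. "anxious", "neutral"
    topic: str
    modality: str     # e.g. "CBT", "Psychodynamic", "Humanistic"

u = Utterance("therapist", "When did this anxiety start?",
              "open question", "neutral", "anxiety", "CBT")
print(u.speech_act)  # prints: open question
```

A record like this lets a training pipeline filter or weight examples by modality and speech act, rather than treating all transcript text as undifferentiated prose.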

While the Psychotherapy Corpus provides relevant data to improve LLMs for therapeutic purposes, there are many other evidence-based treatment approaches beyond the CBT, Psychodynamic, and Humanistic frameworks. Therefore, there are a number of theories, modalities, approaches, and interventions that may not be captured by the current Psychotherapy Corpus data source. For example, using the current Psychotherapy Corpus data source, if a user asked ChatGPT, “What can I do to manage my anxiety?”, a possible response could be, “Exercise for 30 minutes a day.” Although the body of literature indicates exercise releases neurotransmitters that are helpful for decreasing anxiety (Carek, Laibstain, & Carek, 2011), this response aligns with the theory that underpins CBT: “if we change our thoughts (cognitions), we can change our behavior. If we change our behavior, we can change our thoughts.” While the response of “exercise for 30 minutes a day” could be facilitative for some users, it may not provide enough therapeutic engagement or accuracy for others. 

The level of care required by the user may determine the level of nuance an LLM needs to apply in order to produce quality, accurate, and reliable responses. The level of nuance could inform the progression of inquiry for an LLM. Using the example, “What can I do to manage my anxiety?”, the progression of inquiry could produce a question such as, “When did this anxiety start?” Depending on the user’s answer, ChatGPT could provide more sensitive and accurate guidance, and/or ask additional questions. If a user answers, “My anxiety began 3 weeks ago when I started applying to medical schools,” this user could get feedback that is tailored to their specific presentation of situational/circumstantial anxiety symptoms. Whereas a user who responds to a question about duration of anxiety symptoms with a statement such as, “I've felt anxious for my entire life,” would receive different feedback, including possible questions like, “How did you see your primary caregivers respond to stress when you were growing up?” It is through this nuanced progression of inquiry that LLMs and AI may be able to deliver more meaningful and relevant therapeutic responses. Improving and training LLMs and AI for therapeutic purposes is analogous to training a puppy: just as puppies need consistent reinforcement and positive feedback to learn new behaviors, LLMs need consistent exposure to relevant data sources in order to generate more appropriate responses.
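At its simplest, this progression of inquiry is a branching protocol: the user’s answer routes the conversation down a situational or a lifelong-pattern track. The sketch below is a deliberately crude illustration of that routing - the keyword triggers and follow-up questions are ours, not a clinical instrument, and a real LLM would classify the answer far more flexibly than string matching does:

```python
def follow_up(onset_answer: str) -> str:
    """Route a user's answer about anxiety onset to a next question,
    echoing the situational-vs-lifelong branch described above."""
    answer = onset_answer.lower()
    if "entire life" in answer or "always" in answer:
        # Lifelong pattern: explore early models of stress response.
        return ("How did you see your primary caregivers respond "
                "to stress when you were growing up?")
    # Default branch: treat as situational/circumstantial anxiety.
    return "What about that situation feels most overwhelming right now?"

print(follow_up("I've felt anxious for my entire life"))
```

The point of the sketch is only that the *structure* of the inquiry - answer-dependent branching toward more specific questions - is something an LLM can be trained or prompted to follow.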

According to the National Institute for the Clinical Application of Behavioral Medicine (NICABM, 2022), bottom-up approaches focus on raw emotions and defense systems by working with clients to regulate and attune to their bodies. Meanwhile, top-down approaches focus on shifting the way a client thinks. In terms of treatment approaches that are not CBT (a top-down approach), Psychodynamic, or Humanistic (the modalities captured by the Psychotherapy Corpus data set), a clinician who is certified in a bottom-up approach such as Eye Movement Desensitization and Reprocessing (EMDR) may respond to the question, “What can I do to manage my anxiety?”, with additional questions such as, “What is the negative belief you have about yourself when you feel anxiety?”, “What is the positive belief you would like to have about yourself when you feel anxiety?”, or, “If you float back… can you think of a time you noticed feeling similar to how you are feeling now?” Further, a Solution Focused practitioner may respond with what is referred to as the “miracle question” (a common Solution Focused Therapy intervention), i.e., "If you were to go to sleep… and in the middle of the night, a miracle happened, and you woke up tomorrow with all of your current problems removed, what would that look like?" A clinician trained in Sensorimotor Psychotherapy may ask, "Where do you feel anxiety in your body? What is happening inside that is telling you this is the feeling of being ‘anxious’? What sensations are you noticing?” And, a practitioner who is trained in Internal Family Systems (IFS, often referred to as “parts work”) could respond to the question “What can I do to manage my anxiety?” with additional questions like, "What ‘part’ of you is online right now? How old is this part of you that is feeling anxious? How is this part trying to protect you and what does this part of you need to feel safe?"

Challenges and Risks of Applying LLMs and AI to Therapy and Mental Health Care

While AI and LLMs are remarkable achievements in AI research, they also raise many questions and concerns about their implications and applications for human society, especially for therapy and mental health care. Some of the ethical, social, legal, and professional issues that arise from using LLMs and AI for therapy include:

  • Data quality and privacy: How can we ensure that the data used to train and evaluate LLMs are accurate, relevant, diverse, representative, and secure? How can we protect the sensitive information of users and therapists from unauthorized access or misuse? Data is the fuel that powers LLMs and AI, but it can also be the source of many problems. If the data is inaccurate, irrelevant, biased, or incomplete, it can affect the quality and reliability of the LLMs’ outputs. For example, if the data contains errors or inconsistencies, the LLMs may generate incorrect or misleading responses. If the data is skewed or unrepresentative, the LLMs may favor or exclude certain groups or individuals based on their characteristics. If the data is outdated or incomplete, the LLMs may miss or ignore important information or perspectives. Moreover, if the data is not secure or private, it can expose the users and therapists to potential harms. For example, if the data is hacked or leaked, it can reveal personal or confidential information about the users or therapists that could be used for malicious purposes. If the data is shared or sold without consent, it can violate the rights and interests of the users or therapists who provided it. Therefore, data quality and privacy are crucial issues that need to be addressed when using AI and LLMs for therapy and mental health care.

  • Toxicity and bias: How can we prevent or reduce the harmful outputs of LLMs such as racist or sexist language? How can we detect or correct the biases of LLMs that may favor or exclude certain groups or individuals based on their characteristics? Toxicity and bias are two sides of the same coin that can undermine the trust and respect between users and therapists. Toxicity refers to the offensive or harmful language that LLMs may generate due to their exposure to negative or inappropriate data. For example, LLMs may use racist or sexist terms, insults, threats, or profanity that could hurt or offend users or therapists. Bias refers to the unfair or unequal treatment that LLMs may exhibit due to their learning from skewed or unrepresentative data. For example, LLMs may show preference or prejudice towards certain groups or individuals based on their race, gender, age, religion, etc. that could discriminate or exclude users or therapists. Therefore, toxicity and bias are serious issues that need to be prevented or reduced when using AI and LLMs for therapy and mental health care.

  • Reliability and consistency: How can we ensure that LLMs provide accurate and relevant responses that match the user’s input, mood, and progress in therapy? How can we ensure that LLMs maintain a coherent and logical flow of conversation that follows the user’s context and expectations? Reliability and consistency are essential factors that influence the effectiveness and satisfaction of users and therapists. Reliability refers to the accuracy and relevance of LLMs’ responses that reflect their understanding and interpretation of user input, mood, and progress in therapy. For example, LLMs should provide correct and helpful information or advice that aligns with user needs and preferences. Consistency refers to the coherence and logic of LLMs’ responses that maintain a smooth and natural flow of conversation following user context and expectations. For example, LLMs should provide clear and concise responses that connect to a user’s previous and current messages. Therefore, reliability and consistency are important issues that need to be ensured when using LLMs and AI for therapy and mental health care.

  • Ethical and social implications: How can we ensure that LLMs respect the values, principles, and responsibilities of ethical and professional practice? How can we ensure that LLMs enhance rather than replace the human role and relationship in therapy and mental health care? Ethical and social implications are complex and multifaceted issues that affect the outcomes and impacts of using AI and LLMs for therapy and mental health care. Ethical implications refer to the moral dilemmas or conflicts that arise from using LLMs that challenge the values, principles, and responsibilities of ethical and professional practice. For example, LLMs may pose questions such as: Should LLMs disclose their identity as non-human agents? Should LLMs obtain informed consent from users? Should LLMs report cases of abuse or harm? Should LLMs adhere to codes of ethics or standards of practice? Social implications refer to the social consequences or changes that result from these interactions. Therefore, ethical considerations and social implications are critical factors that require further evaluation.
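On the data quality and privacy point above, one concrete mitigation when preparing transcripts for training is redacting obvious identifiers before the text ever reaches a model. The regex sketch below is minimal by design - the two patterns catch only emails and US-style phone numbers, and a real de-identification pipeline would also need to handle names, addresses, dates, and much more:

```python
import re

# Simple patterns for emails and US-style phone numbers; a real
# de-identification pipeline needs far broader coverage than this.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(redact("Reach me at jane@example.com or 555-867-5309."))
# prints: Reach me at [EMAIL] or [PHONE].
```

Redaction at ingestion time means a leaked model or dataset exposes placeholders rather than contact details - it addresses the storage risk, though not consent, re-identification, or the other issues raised above.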

In this post, we have explored the potential of AI and LLMs to transform therapy and mental health care in various ways. We have examined some of the current applications and studies that have leveraged these technologies for therapeutic purposes; we have also identified some of the key research questions and challenges that need to be addressed to improve the quality and reliability of LLMs for therapy.

We believe that AI and LLMs have the capacity to revolutionize therapy and mental health care by offering personalized and effective interventions and outcomes for patients. However, we also recognize that there are significant limitations and risks that need to be overcome before these technologies can be fully integrated into therapeutic practice. By advancing evidence-supported methodologies, collaborating with human experts, and evaluating our systems in realistic settings, we can work toward creating a therapeutic experience that is not only efficient and accessible, but also ethical and trustworthy for patients seeking mental health care services and treatment.

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., ... & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.

Buczynski, Ruth. (2022). Infographic: Brain-Based Approaches to Help Clients After Trauma. NICABM. https://www.nicabm.com/brain-based-approaches-to-help-clients-after-trauma/

Carek, P. J., Laibstain, S. E., & Carek, S. M. (2011). Exercise for the Treatment of Depression and Anxiety. The International Journal of Psychiatry in Medicine, 41(1), 15–28. https://doi.org/10.2190/PM.41.1.c

Huang, J., Gu, S. S., Hou, L., Wu, Y., Wang, X., Yu, H., & Han, J. (2022). Large language models can self-improve. arXiv preprint arXiv:2210.11610.

Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.

Rao, H., Leung, C., & Miao, C. (2023). Can ChatGPT Assess Human Personalities? A General Evaluation Framework. arXiv preprint arXiv:2303.01248.

Shen, Y., Song, K., Tan, X., Li, D., Lu, W., & Zhuang, Y. (2023). HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace. arXiv preprint arXiv:2303.17580.

Shinn, N., Labash, B., & Gopinath, A. (2023). Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.

GENERAL INFORMATION AND FAQs ABOUT EXAM FORMAT CHANGES - 2022

When will the new format of the NCMHCE be administered?

The new format of the NCMHCE will be administered beginning on November 7, 2022. If I understand NBCC’s announcement correctly, you cannot register for the current version of the NCMHCE after October 17, 2022. The current version will no longer be administered after November 7, 2022.

How is the new format of the NCMHCE different from the current format of the NCMHCE?

In my opinion, the biggest change is how the examination questions are structured. The current version of the NCMHCE directs testers to answer questions in 1 of 2 ways. The first way is to “SELECT AS MANY” correct answers for a question (in a section with these directions, any number of answer options could be correct - or as few as *one*). The second way is to “CHOOSE ONLY ONE” correct answer (in a section with these directions, only one answer is correct, and you must select the right answer before moving on to the next section).

However, the way the questions are structured on the new format of the NCMHCE is multiple choice, with only one correct answer, and three distractor answers. In other words, the new format of the NCMHCE presents a question with 4 possible answers, and your objective is to select the best possible answer. There is only one correct answer, so the days of wondering if you have chosen all possible correct answers in a “SELECT AS MANY” section are almost over!

As far as scoring goes, a difference between the current version of the NCMHCE and the new version is that in the current version, testers are penalized with point deductions for making incorrect answer selections. In the new version of the NCMHCE, testers are not penalized for incorrect answer selections - instead, they only earn points for their correct answer choices.
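The scoring difference can be illustrated with a quick calculation. The point values below are made up purely to show the mechanics - NBCC does not publish per-item weights:

```python
def old_style_score(correct, incorrect, penalty=1):
    # Hypothetical: 1 point per correct pick, minus `penalty` per
    # incorrect pick (the current format's deduction model).
    return correct - penalty * incorrect

def new_style_score(correct, incorrect):
    # New format: only correct picks earn points; no deductions.
    return correct

# Same tester performance (8 correct picks, 3 incorrect) under both schemes:
print(old_style_score(8, 3))  # prints 5
print(new_style_score(8, 3))  # prints 8
```

Under the new scheme, a wrong pick simply earns nothing, so guessing among remaining options can no longer lower a score you have already earned.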

Additional changes include increased time (from 195 minutes to 260 minutes), number of case studies (from 10 to 11), and number of questions per case study (from 5-10 questions, to 13 questions per case). For a complete breakdown of the comparison between the current and new format of the NCMHCE, refer to this handy chart from NBCC: https://www.nbcc.org/assets/exam/ncmhce_format_comparison_chart.pdf

I wanted to draw specific attention to another major change related to where the NCMHCE is administered. The current version of the NCMHCE is administered at a variety of testing center locations throughout the country. I believe it will be helpful to many testers to learn there is a new option of taking the NCMHCE either at a test center or online. This isn’t the case for everyone, but the testers I specialize in working with - those who are neurodivergent or overcoming severe test anxiety - will benefit tremendously from being able to complete the licensure exam in their optimal functional environment (away from that distracting tester who keeps clearing their throat or reading too loudly, and/or with the sensory objects they need to stay focused and grounded).

Click here for NBCC’s Content Outline of the new NCMHCE: https://www.nbcc.org/assets/exam/ncmhce_content_outline.pdf

Dr. Carr, can I still use your NCMHCE Prep materials to study for the new format of the exam?

Mostly yes, and a little bit of no - let me explain.

I have two websites that offer NCMHCE test prep curriculum. I’ll break down the difference so you can make an informed decision about if and where to spend your money.

Tutoring on Demand NCMHCE Test Prep site: https://tutoringondemand.alysoncarr.com/

This website offers two self-paced, online courses. Both courses are $19.99 apiece, and the material in both will be useful and necessary to review whether you are taking the current version of the NCMHCE or the new version. Here is some more info about these two courses:

The Defining Line DSM-5 Online Course: The Defining Line is a series of short educational videos developed in the spirit of simplifying diagnoses in the DSM-5 that have overlapping diagnostic criteria.

Theories, Therapies & Techniques Online Course: The curriculum in this series gives clinicians an overview of various theoretical assumptions, associated interventions, and when to apply certain approaches.

NCMHCE Prep Workshop site: https://www.ncmhceprepworkshop.com/

This website offers a 5-hour, interactive, self-paced workshop that is specific to the current version of the NCMHCE. This workshop is $150.00. Although there is valuable content in this workshop from a clinical perspective, it is not the best option for testers preparing for the new version of the NCMHCE. I recommend only purchasing this workshop if you are preparing for the current NCMHCE format (prior to the new administration that begins on November 7, 2022).

How do I schedule an individual tutoring session with you, Dr. Carr?

There are an overwhelming number of testers who are sitting for the current version of the exam and in need of individual tutoring, so the tutoring calendar has been booked solid since NBCC announced the release of the new NCMHCE format. However, appointments will be available for new testers beginning in October, 2022. At that time, I will make an announcement and post the link for scheduling, right here in the support group.

Can I purchase test prep materials through NBCC?

At this time, NBCC is only offering a test prep guide developed for the current format of the NCMHCE. A release date for an NBCC test prep guide developed for the new format of the NCMHCE has not been announced yet. Use this link to purchase the current NCMHCE Prep Guide, and to check in about the availability of an updated version: https://onlineservices.nbcc.org/eweb/DynamicPage.aspx?Site=NBCC&WebKey=4234efc4-fa8f-4b02-b380-6e9fc74ff7ef

A word of caution about NCMHCE test prep materials and consultation services…

Sadly, even in a profession dedicated to serving those in need, innocent and unsuspecting clinicians like you can be taken advantage of during the various stages of the licensure process. One of the most vulnerable times is the stressful, demanding, and expensive stage in which you are preparing for the NCMHCE. For example, there are scammers who pose as professional tutors online. There are also well-meaning clinicians who have just passed the exam and want to help, so they offer tutoring services even though they are not experts, have no experience in counselor education, and/or have limited time in the field. And there are large study prep websites that pay people to post “success stories” to popular NCMHCE test prep social media pages in hopes of drumming up business, and that misrepresent themselves as counselor educators developing new exam prep curriculum (when in truth they are a small group of people with no formal counseling training, copying and pasting the intellectual property of others).

My point…

Counselors don’t enter the field to get rich, so please be very careful with how you spend your money preparing for the NCMHCE. Registration for this exam alone is expensive, and you can find yourself in significant debt very quickly with shady test prep materials that look legitimate. The best test prep resources are those provided directly by NBCC, as these materials are made by the test makers themselves. Also, ask colleagues you trust who have taken the NCMHCE for honest feedback about which resources helped them and which were a waste of time and money. Leverage the knowledge of the people you trust to save yourself unnecessary stress, and use reputable support groups (like this one!) to ask questions and get information.

Thank you all for maintaining a collaborative learning environment, contributing to a safe community of healers and learners, preserving the integrity of the group by reporting activity that compromises its intentions, and helping during such a critical time of collective suffering.

Take care of yourselves now more than ever.

Dr. Alyson Carr, LMHC, is not affiliated with the National Board for Certified Counselors (NBCC), the credentialing body that administers the NCMHCE.

Alyson Carr
STUDY OUTLINE - WITH CALENDAR!

Figuring out how to approach studying for the NCMHCE can be daunting. Here is a general outline to follow and a detailed study calendar based on a 3-month study timeline.

For testers who have less than 3 months to prepare, just condense the calendar to fit your timeline. For those who wish to spend longer than 3 months preparing, just space out the tasks a little more (or work through the tasks more than once).

  • If you don’t already have it, purchase the DSM-5, and begin to get familiar with the DSM diagnostic criteria (for each disorder in the DSM, there is a bulleted list under the heading “diagnostic criteria” - this is what I’m referring to).

There are also a number of tools available to simplify and enhance this part of the study process. Watch this short video that includes 5 steps for mastering the DSM in the way you need to know the material for the NCMHCE.

After you feel confident about your diagnostic criteria knowledge...

  • Purchase the NCMHCE Prep guide that is made by NBCC - it contains 5 simulated cases written by the test makers themselves - for that reason, it is priceless! Don’t open it and complete the simulations until you are close to your test date.

  • Get a subscription to an NCMHCE study site that allows you to complete practice cases - if you aren’t sure which study site you want to invest in, ask your friends or colleagues who have passed what they found to be most helpful. Or, join an NCMHCE support group online or in person and ask group members what worked for them before spending your hard-earned money! Click here to join a free support group for counselors preparing for the NCMHCE.

  • Not every tester wants or needs to do this, but if it’s in your budget, complete an NCMHCE prep workshop with a good reputation. There are many online and in person workshops available - you can learn about these from colleagues and through recommendations in NCMHCE support groups. Click here to learn more and register for the online workshop I offer.

Alyson Carr
THE THOUGHTFUL COUNSELOR PODCAST

ABOUT THIS EPISODE

A conversation with Dr. Alyson Carr on (nearly) all you need to know when preparing to take the National Clinical Mental Health Counseling Examination (NCMHCE).

We discuss why the test is important and useful for counselors, the format of the exam, and concrete ideas to help you prepare.

Click here to listen and view some helpful links!

Alyson Carr
ARE LIVE PODCAST: COMBATING TEST ANXIETY

Had such a great time discussing test anxiety as a panelist on the ARE Live Podcast this week!

Check it out if you’re interested in learning a little more about test anxiety in relation to high-stakes exams, along with a few tips for how to manage test anxiety on exam day. There is a recording of the live presentation with slides available on YouTube, as well as a podcast version on iTunes.

“In this episode we sit down with one of our Black Spectacles ARE Prep Coaches and a psychology expert to discuss what test anxiety is, how managing it can improve your exam scores, and specific ways to help overcome it. We’ll be asking for your input on what you’ve found makes you nervous or trips you up, and give you our tips & tricks for staying cool, calm, and collected.

After this episode, you’ll have the opportunity to read a blog post with information from our researchers, and download a variety of useful test taking strategies to help you feel comfortable and relaxed when you take the exam.”

Alyson Carr
WHAT IS THE DIFFERENCE BETWEEN NCMHCE, NCE, NCC, AND CCMHC?

DISCLAIMER: I do not work for the National Board for Certified Counselors (NBCC). I have recently received many emails from test takers trying to distinguish between the following acronyms: NCMHCE, NCE, NCC, and CCMHC. This blog post is simply my interpretation of the information provided by NBCC. If you see anything in this post that is inaccurate, please post a comment or send me an email so I can update it accordingly.

BECOMING LICENSED

National Clinical Mental Health Counseling Examination (NCMHCE): This exam is required for licensure in some states. 

National Counselor Examination for Licensure and Certification (NCE): This exam is required for licensure in some states. 

Depending on which state you live in, you are required to earn a passing score on the NCMHCE and/or the NCE to become licensed. Becoming licensed earns you credentials such as LPC, LCPC, LPCC, LMHC, LPCMH, LCMHC, or LPC-MH.

Click on your state on the State Board Directory and then look at what is displayed on the right side of the page to determine which exam you need to pass to become licensed in your state: http://www.nbcc.org/directory

BECOMING CERTIFIED

National Certified Counselor (NCC): A passing score on the NCE or the NCMHCE is one of the requirements for the National Certified Counselor (NCC) certification. 

For more information about NCC application requirements, click here: http://www.nbcc.org/Certification/NationalCertCounselor

Certified Clinical Mental Health Counselor (CCMHC): Many people email asking, "Is the CCMHC an examination?" The acronym CCMHC reflects credentials that correspond to a certification, not an exam. A passing score on the NCMHCE is required for the CCMHC certification. 

For more information about the CCMHC application requirements, click here: http://www.nbcc.org/Certification/CertifiedClinicalMentalHealthCounselor

One of the benefits of becoming certified is that it demonstrates a commitment to the profession and its respective licensing bodies. Some certifications have continuing education requirements for recertification, which further illustrates one’s dedication to ongoing learning as it relates to the field.

Hope this helps! 

Alyson Carr
THE “CHOOSE ONLY ONE” SECTION OF DOOM

When test takers reach out to me for guidance on how to prepare for the NCMHCE, it is not uncommon for their inquiry to begin with a description of how a bad experience in a CHOOSE ONLY ONE section of the NCMHCE sent them into a downward spiral of extreme anxiety and ultimate failure on the exam.

Let’s talk about the difference between a SELECT AS MANY and a CHOOSE ONLY ONE section on the NCMHCE.

What is a SELECT AS MANY section?

SELECT AS MANY means that you are being asked a question with a number of correct answer selections. For example, let’s say there are 10 answer choices and 7 of them are correct: if you select only 4 of the 7 correct answers, you will be allowed to move on to the next section. If you select only 1 answer, and it is incorrect, you will still be able to move on to the next section. If you select 3 wrong answers, you will be able to move on to the next section. To summarize, in a SELECT AS MANY section, you can literally “select as many” answers as you think are correct and then move on to the next section of your simulation without really knowing whether you made all (or any) of the right answer choices.

What is a CHOOSE ONLY ONE section?

CHOOSE ONLY ONE means that you are being asked a question with only ONE correct answer. If you do not select the correct answer on your first try, you must continue making answer selections until you choose the correct answer and can move on to the next section. See how the pressure of needing to select the ONLY correct answer can immediately trigger test anxiety? What adds to this pressure is knowing that every wrong selection you make along the way costs you points. It’s no wonder that nerves take over when a test taker begins a CHOOSE ONLY ONE section.

How to handle the anxiety associated with a CHOOSE ONLY ONE section of DOOM

1.     Remember that you do not have to get every single answer correct in order to pass the NCMHCE – you WILL make incorrect answer selections on test day and you need to be equipped to respond to this stress with a level head that doesn’t compromise your performance.

2.     Remember that you do not need to hit every single case out of the park – your final score on the NCMHCE is an overall cumulative score, so if you are doing well in the majority of your simulations, you have some wiggle room in the event you are confronted with a case that really stumps you.

3.     Anticipate being in a CHOOSE ONLY ONE section on test day and build up your emotional resources by working through these kinds of sections when you are practicing using CounselingExam.co

Alyson Carr
TEST TAKING TIP

When taking the Pearson VUE version of the NCMHCE, select all of the buttons in a given section first. Then, click "Get Feedback" for each of the buttons you've selected (as opposed to clicking one button, then "Get Feedback," then the next button, then "Get Feedback," and so on).

Approaching the exam this way will prevent you from being distracted by any information you uncover when you click "Get Feedback." By clicking all of the buttons first, you increase your chances of staying focused on the most relevant selections before taking in all of the new information you've revealed.

Alyson Carr