Mental Health Crisis – What’s in a Number?

Dr Les Kertay – Chief Psychologist at The Claim Lab

Is there a mental health crisis?
Yes, I think so. I base that conclusion on the following premises:

1. People are feeling a substantial degree of stress at work, and it is contributing to increased reports of burnout and mental health challenges;
2. People in general, including at work, are talking about mental health more than they used to, particularly since COVID and the simultaneous entry into the workforce of the next generational cohort;
3. There has been an ongoing, steady, and reasonably dramatic increase in the number of diagnosed psychiatric conditions in the US since at least 2015, predating the pandemic and continuing through it;
4. There is a huge problem with access to high quality mental health care in the US; 
5. There is an even greater shortage of work-focused interventions in the mental health field.

In short, we have a problem with mental health and mental health care… and it’s important!

What do we know about the size of the problem?
Now, there is a problem with the numbers I keep reading. I recently read an article, typical among many, stating that about 80% of the workforce reports experiencing “burnout” in the preceding year. I see a lot of head-nods at those figures, but I have to ask: if 80% of the population is experiencing something, is that really pathology or is that simply a normative response (perhaps to current working conditions in a culture that’s sick of the way we’re working)?

I saw another article quoting figures from ComPsych, published earlier this year and often cited since, indicating a 300% increase in mental-health-related leaves of absence between 2017 and 2023, and a 22% increase from 2023 to 2024. According to the study, 10% of all leaves are now mental health related. I’m not surprised that there’s been an increase, and it sure sounds dramatic, but what does it mean?

The problem is that percentages are meaningless without knowing the raw numbers – an increase from 1 to 4 is also a 300% increase; and an increase from 1 out of 10,000 to 1.22 out of 10,000 can also be described as a 22% increase. In both these examples the numbers sound dramatic but are trivial, practically speaking. As to 1 in 10 leaves being related to mental health, that doesn’t surprise me a bit – 23 years ago, in my first job with a large disability insurer, 11% of our block of business had a primary diagnosis of a mental health disorder. Again, if we don’t know the baseline, we have no idea what these numbers mean.
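For readers who like to see the arithmetic, here is a minimal sketch using the same toy numbers as the examples above (they are illustrations, not figures from the studies cited): the point is simply that the relative and absolute views of the same change can tell very different stories.

```python
# Toy numbers mirroring the examples above: a big relative jump can still be
# a tiny absolute change once you know the size of the underlying population.

def describe_change(before, after, population):
    """Print the same change two ways: as a percentage and as a rate per 10,000."""
    relative = (after - before) / before * 100
    per_10k_before = before / population * 10_000
    per_10k_after = after / population * 10_000
    print(f"{before} -> {after} cases: a {relative:.0f}% increase, "
          f"i.e. {per_10k_before:.2f} -> {per_10k_after:.2f} per 10,000 people")

describe_change(before=1, after=4, population=10_000)         # the "300% increase"
describe_change(before=100, after=122, population=1_000_000)  # the "22% increase"
```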

One last example: when I first started to see numbers related to new mental health diagnoses associated with the pandemic, I was seeing figures like a 40-50% increase. Now, knowing that 20% of the population at any given time has a diagnosable psychiatric condition, I have an idea of the baseline, and it’s a large one. But I next asked myself: how were the increases determined? Checking the methodologies of those early studies, it turns out that nearly all of them used one of two methods: some counted new chart references to “depression” or “anxiety,” while others relied on positive screening measures.

The problem with new references in medical charts is that such notations are next to useless in the middle of a pandemic – who among us didn’t sometimes feel “anxious” and “depressed,” especially those who were hospitalized? It’s not a diagnosis, but put the word in the chart and it suddenly becomes one. And the problem with screening measures is accuracy. For example, screening for depressive disorders typically employs the PHQ-2 or PHQ-9. Now these are decent screeners, but screening tools are NEVER diagnostic; instead, they are an indicator that additional clinical evaluation should be undertaken.

I will spare you the details of sensitivity and specificity, base rates, and the relevant calculations, but running the numbers on the PHQ-9, it turns out that over 60% of those scoring positive on the screen are false positives – meaning that the screener says there’s a depressive disorder, but a clinical interview would find there isn’t. If you base your estimates on a measure that produces more false positives than true positives, what does a large increase mean? Sure enough, and unsurprisingly, over time those early dramatic increases that were reported have subsided back toward baseline.
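For those who do want to see where a number like that can come from, here is a minimal sketch of the positive-predictive-value arithmetic. The sensitivity and specificity are assumed values in the range commonly reported for the PHQ-9 at the standard cutoff, and the 8% prevalence is likewise an assumption chosen for illustration; none of these figures comes from the studies discussed above.

```python
# A rough illustration of why positive screens overstate the number of true cases.
# Sensitivity, specificity, and prevalence below are ASSUMED values for illustration,
# not figures taken from this article or the studies it discusses.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screen reflects a true case."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(sensitivity=0.88, specificity=0.85, prevalence=0.08)
print(f"Share of positive screens that are true cases: {ppv:.0%}")
print(f"Share of positive screens that are false positives: {1 - ppv:.0%}")
```

With these particular assumptions, roughly two-thirds of positive screens are false positives – in the same ballpark as the “over 60%” figure above – even though the screener itself is reasonably accurate. That is the quiet power of a low base rate.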

As an aside: a huge number of antidepressants are prescribed in primary care practices based on self-report and a positive screen on the PHQ-9. How many of those are actually false positives? But that’s another article.

What’s the point?
Does this mean that there isn’t a problem with mental health, or that the problems are grossly exaggerated? Maybe, but I don’t think so. I strongly believe we have an important set of problems to solve, and we must find better ways to measure and assess mental health challenges. Moreover, compassion and simple human decency require that we do better at addressing them, at both individual and cultural levels. And certainly, there are important issues specific to the workplace that need to be addressed.

But we have to become better consumers of information, especially in a media environment in which we are constantly bombarded by sound bites and impressive-sounding numbers that we don’t have time to question. Simply put, if we don’t understand the actual size, scope, and content of a problem, we are likely to offer solutions that simply rehash old ideas. Doing more of what doesn’t work has never seemed like a good problem-solving strategy, and it’s never a good use of limited funding.

In a time when science skepticism is rampant, I’m not optimistic about my call to become better consumers of information, coupled with thoughtful discussion of the nature of, and solutions for, serious problems in mental health. Nevertheless, like the fool who rushes in where angels fear to tread, and like Don Quixote tilting at windmills, I persist.

Care to join me?
