Imagine for a moment that you run a mental health center, and you meet someone who claims their new product can help you predict which psychotherapy clients will fail treatment, have an adverse event, or choose to drop out prematurely. You might dismiss that person with the assurance that you employ licensed clinicians who surely need no such help.
You mention it to your senior clinicians, who agree with you and laugh that prediction is for gamblers, not professionals. Your office manager overhears the discussion and asks if you remember how well the experts predicted the 2016 presidential election. They agree that prediction or forecasting in behavioral healthcare is either a well-intentioned waste of time or a scam.
We have the evidence
Evidence from very different sources would suggest you might be passing up a good opportunity, depending on the risk management product. We have learned a great deal from social scientists about forecasting. We have learned even more from clinicians who have monitored clinical indicators for risk during psychotherapy. Let us first review this evidence.
The social psychologist Philip Tetlock spent decades studying the question of prediction. His work is summarized in two landmark books, “Expert Political Judgment” (2005) and “Superforecasting: The Art and Science of Prediction” (2015). His work is based on actual forecasting challenges in which predictions about social events are found to be right or wrong.
The conclusion of Tetlock’s first book is not hopeful. He collected more than 80,000 expert forecasts over many years. He found the average expert is not much better at predicting social phenomena than the proverbial chimpanzee throwing darts. His later book distinguishes good from bad forecasters and analyzes what makes them different.
Tetlock found some people are much better at forecasting than others, and he found that practice can improve forecasting. He divides people into two humorous categories, the hedgehog and the fox, to capture how one’s mindset makes a tremendous difference. While the hedgehog relies on one or two big ideas to understand the world, the fox finds the world too complicated for one or two big ideas.
Some foxes become “superforecasters” based on a lack of dogmatism and an ability to adjust their thinking to new facts. While most people might dismiss new information that does not fit with their beliefs, the top forecasters are not seeking to confirm their biases or hunches, nor to fit data to one or two big ideas. The way they think seems to be more important than what they believe.
The clinical prediction literature is best represented in the work of Michael Lambert. His clinic uses patient self-report questionnaires at every session to track clinical progress. A persistent finding throughout his work is that just under 10% of patients present a risk for a poor outcome. A corollary finding is that clinicians are unable to identify those patients. Yet changing test scores are quite good at identifying them.
His conclusion after repeated validation of these findings is that the standard of care should include the tracking of risk and outcome. We can predict those at risk for adverse events and a failure to benefit from treatment. The question becomes what clinicians should do if risk is identified. Here is where the work of Tetlock dovetails with that of Lambert. A change of mindset is in order.
Lambert’s general recommendation is to do something different. I initially found this unsatisfying, hoping for something more specific. He provides some direction with the suggestion that one should explore the therapeutic alliance, but his main point is comparable to superforecasting. Think differently. Do not discount information. Adapt to the new clinical data suggesting therapy is not working.
Risk management systems
Despite the foregoing evidence, few delivery systems seek to mitigate risk as described. Yet the risk pool is large. We routinely deal with the risk of harm toward self or others, overdose on drugs, and other adverse events. When a patient population includes many patients who are depressed, thinking of suicide, acting impulsively or abusing substances, the early detection of risk is immensely valuable.
Monitoring systems based on patient self-report measures have historically been promoted to clinicians for their clinical value. Yet these systems are also critical for better risk management at the system level. Executives face the consequences of adverse events on many levels, and the return on investment for a better management system can be realized by preventing just a few events.
A fundamental point bears repeating. Clinicians with years of experience fail to detect these patients at high risk for a negative outcome. It is therefore not a matter of better hiring, training or supervision. The reality is that clinical data needs to be gathered, analyzed and presented to the clinician for action. Executives should then develop a way to monitor these risk cases to the point of clinical resolution.
The other source of risk for executives to manage is staffing. Most clinicians are helping patients reduce symptoms and improve functioning. Yet there is a group of outliers consisting of fewer than 10% of clinical staff who get significantly worse results. Like many phenomena distributed on a bell curve, there are small groups of therapists with “super” and “sub” performances on clinical outcome measures.
Executives should monitor those clinicians with the poorest results to better understand what might be driving those outcomes. Again, no specific direction or recommendation emerges from such a finding other than being alert to increased risk. This is an important contribution to a risk management system. Forecasting problems is always preferable to playing catch-up after they appear.
Ed Jones, PhD, is senior vice president for the Institute for Health and Productivity Management.