
An Evidence-Based Appeal to Opening Doors

  • Writer: Chris Dabbs
  • 3 days ago
  • 5 min read

I wrote the following commentary in response to an open solicitation from members of the Canadian Psychological Association for feedback regarding current changes to psychology registration in Ontario. For context, I am an American living in Canada, and I teach master’s-level psychologists in Alberta. I am a doctorally trained psychologist, licensed as a Health Service Provider in Psychology in my home state and registered as a psychologist with the College of Alberta Psychologists. I hold dual memberships in the National Register of Health Service Psychology and the Canadian Register of Health Service Psychology, and I am a dual member of CPA/APA. I am also a licensed mental health counselor, with discrete training as such at the master’s level. My perspective herein does not reflect my professional memberships or workplace; I offer these details only to provide context for my response.


Something that concerns me about the conversations I have seen surrounding the CPBAO changes is the substantial lack of evidence in the arguments against them. I would expect a field that holds the scientific method so closely, with the overcompensatory fervor of a Napoleon complex, to be more resistant to the emotionally laden, status-quo-hugging, pearl-clutching arguments I have seen proliferating in the conversational zeitgeist. I respect that people are worried about the protection of the public — as am I, being a member of the field — but worries can sometimes get away from us. I offer commentary herein on some of the main points of argumentation against select changes that I have encountered in conversation.


First, regarding the Oral Examination: individuals have told me they worry that removing this facet of registration eliminates a significant gatekeeping mechanism. This is objectively false. According to CPBAO’s own reported statistics, only four candidates in the last decade have not passed the Oral Examination (CPBAO Board Minutes dated Sept. 26, 2025) — a pass rate of 99.7%. For those of us familiar with test construction: this is not an examination with strong differential capabilities. Only a minority of licensable jurisdictions in North America retain an oral examination for professional psychology. In reality, there are dozens of oral examination opportunities during training that test ethical and decision-making abilities: group supervision practicum presentations, individual supervision sessions, and thesis/dissertation proposals and defenses, where relevant. Talk about an over-saturation of assessment points! For psychologists heartily invested in the oral examination as an evaluative measure — I would ask that you reflect on how many times your oral conceptualization abilities were tested during training. Is it necessary to retain a final vestige of a bygone, pre-standardized era of psychology training?


Second, regarding clinical efficacy between training levels: doctorally trained psychologists have claimed that of course clinical outcomes are better at the doctoral level than at the master’s level. Are they? I have begged for evidence supporting this claim and have received none (reader, please send me some if you have it). Over and over again, researchers have shown that degree level/level of training is not one of the individual characteristics predictive of client outcomes. Owen and colleagues (2016) found no outcome differences across practicum, predoctoral, and postdoctoral clinicians in more distressed clients (although they did find a very small difference among mildly distressed clients, likely linked to clinician development and not training level). Miller and colleagues (2013) contribute to the ‘common factors’ debate by suggesting a refocus on the therapist’s contributions to outcomes, as therapist qualities (not education level) have been shown to be the most predictive component of outcome efficacy. Hell, even Nyman and colleagues (2010), in a 3-year longitudinal study, showed no clinical efficacy outcome differences between licensed professionals and master’s practicum students within a university counseling training center — a finding that has been replicated many times (Boswell et al., 2010; Christensen & Jacobson, 1994; Lambert & Ogles, 2004; all cited in Miller et al., 2018). The argument that doctoral-level providers have better outcomes than master’s-level providers is a shibboleth indicative of a superiority complex disconnected from reality. Having a doctorate does not make you a better clinical provider, and holding onto that belief in the face of existing evidence is representative of the lack of humility that, in fact, does lead to poor client outcomes.


Finally, regarding the changing jurisprudence exam: I implore providers to observe the environment of jurisprudence exams across North America. Does a high-stakes law exam make sense in our profession? Might it make more sense to presume clinicians have a working rather than encyclopedic knowledge of the mental health laws in their jurisdictions? Providers are worried that a transition to a low-stakes, repeatable jurisprudence exam will contribute to a lawless field rife with malpractice complaints. I value the underlying worry — protection of the public. I’ve taken two jurisprudence exams for professional psychology in two different countries — both were low stakes and repeatable, and both led to good learning outcomes for me. Don’t just take my word for it: the pedagogical and andragogical literature is pretty clear that, if your desired outcome criterion is long-term learning, then low-stakes, mastery-based testing is superior to high-stakes, one-shot examinations. A 2021 meta-analysis indicated that frequent, low-stakes quizzing improves academic performance over the alternative — ironically, with stronger effects in psychology courses. Similarly, Morphew and colleagues (2019) found that mastery-based testing resulted in half as many failing grades and twice as many As. Mastery-based, low-stakes examinations decrease cognitive load, allowing the test-taker to refocus on learning rather than worry about failing (Supriya et al., 2024).


I offer this commentary as a perspective intended not to criticize or insult but to empathize and challenge. There are historical and hegemonic reasons why the health service psychology subfields are predominantly composed of cisgender, heterosexual, able-bodied, able-minded, White men. While we know this lack of diversity does not benefit our clientele, I think some of the pushback against CPBAO’s changes is partially rooted in a deep-seated fear of upsetting the aforementioned status quo. In tandem, I think that many opponents of these changes are arguing from a place of bitterness — “look at all the hoops I had to jump through; it’s not fair that you want something different!” Related to this, and something often left unspoken in our professional environments that privilege politeness, is just how traumatizing doctoral psychology training can be. The doctoral psychology training process is bloated, expensive, overly demanding, privileging of a select few, and full of abusive, nearly untouchable personalities. The reactivity I see in many opponents of these CPBAO changes is, in my view, reminiscent of a visceral traumatic response rooted in bitterness. It’s through this lens that I encourage self-compassion around the reactions these proposed changes bring up, critical reflection on what the evidence suggests about the changes, and exploration of how the field may actually improve in light of changing regulations.

© 2025 by Chris Dabbs, Ph.D.
