Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making it clear that these are my personal opinions and do not represent any position of any university with which I am affiliated, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or the Northeast Georgia Health System.



Tuesday, February 25, 2014

What’s new in Academic Medicine?

There were several interesting studies in Academic Medicine this month…

The first study (1) was led by one of my favorite educational researchers, Dr Geoffrey Norman, one of the foremost researchers in the area of cognitive reasoning. In the current study, his team looked at resident physicians in Canada. Participants were second-year residents from three Canadian medical schools (McMaster, Ottawa, and McGill), recruited in 2010 and 2011, right after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II.

The researchers asked the residents to do one of two things as they completed twenty computer-based internal medicine clinical cases. They were instructed either to go through each case as quickly as possible without making mistakes (Go Quickly group; n=96) or to be careful, thorough, and reflective (Careful group; n=108). The results were interesting. There was no difference in overall accuracy (44.5% v. 45%; p=0.8, effect size (ES) = 0.04). The Go Quickly group did indeed go quickly: they finished each case about 20 seconds faster, on average, than the Careful group (p<0.001). Interestingly, there was an inverse relationship between time spent on a case and diagnostic accuracy: cases that were answered incorrectly took longer to complete.
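As an aside for the statistically curious: an effect size of 0.04 is tiny. Here is a minimal sketch of the Cohen's d arithmetic. The group means and sizes come from the paper, but the pooled standard deviation is a hypothetical value back-solved from the reported effect size, since it is not given here.

    # Back-of-the-envelope look at the accuracy comparison in (1).
    # Means and group sizes are from the study; the pooled SD is a
    # hypothetical value chosen so that d matches the reported 0.04.

    def cohens_d(mean_a, mean_b, sd_pooled):
        """Standardized mean difference between two groups."""
        return (mean_a - mean_b) / sd_pooled

    careful_mean = 45.0   # % accuracy, Careful group (n=108)
    quick_mean = 44.5     # % accuracy, Go Quickly group (n=96)
    assumed_sd = 12.5     # hypothetical pooled SD (not reported here)

    print(f"Cohen's d = {cohens_d(careful_mean, quick_mean, assumed_sd):.2f}")
    # -> Cohen's d = 0.04, a negligible difference by any convention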

Another interesting study about diagnostic errors came out of the Netherlands (2). Dr Henk Schmidt asked an important question: does exposure to information about a disease make doctors more likely to make mistakes on subsequent diagnoses? In this study, internal medicine residents were given a Wikipedia article to review and critique, covering one of two diseases (Legionnaires' disease or Q fever). Half of the residents received the Legionnaires' article, the other half the article on Q fever. Six hours later, they were tested on eight clinical cases in which they were forced to make a diagnosis. Two of the cases (pneumococcal pneumonia and community-acquired pneumonia) were superficially similar to Legionnaires' disease. Two were similar to Q fever (acute bacterial endocarditis and viral infection). The other four were "filler" cases unrelated to either Wikipedia article (aortic dissection, acute alcoholic pancreatitis, acute viral pericarditis, and appendicitis).

The results are a little scary. The mean diagnostic accuracy scores were significantly lower on the cases that were similar to the ones the residents had read about on Wikipedia (0.56 v. 0.70, p=.016). In other words, they were more likely to make a diagnostic error when they had recently read about something that was similar to, but was not, the correct diagnosis. The authors believed that this demonstrates availability bias: the residents were more likely to misdiagnose the cases that resembled the disease they had just read about. Availability bias can also be seen in students. Think of the student who has just come off the cardiology service: every patient they see in clinic with chest pain is having a myocardial infarction.
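If you want to picture the analysis, each resident contributes an accuracy score on the bias-prone cases and on the remaining cases, so a paired comparison is the natural fit. A minimal sketch, with invented scores (only the 0.56 v. 0.70 means come from the paper, and the paired t-test is my assumption about the analysis):

    # Sketch of the within-subject comparison in (2): each resident has
    # an accuracy score on the bias-prone ("similar") cases and on the
    # rest. All scores below are invented for illustration only.
    from scipy import stats

    similar_cases = [0.50, 0.50, 0.75, 0.25, 0.50, 0.50, 0.50, 0.75]
    other_cases   = [0.75, 0.50, 0.75, 0.75, 1.00, 0.50, 0.75, 0.75]

    t_stat, p_value = stats.ttest_rel(similar_cases, other_cases)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")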

The last article that caught my eye was another study out of Canada. The authors, from the University of Calgary, wanted to determine whether students who did their clinical clerkships in a non-traditional longitudinal fashion were learning as much as students in the traditional track. So they looked at all of the students who completed their clinical training in a Longitudinal Integrated Clerkship (n=34) and matched each of them to four students in rotation-based clerkships. Students were matched on grade point average (GPA) and their performance on the second-year medical skills examination.
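If the matching step sounds abstract, here is a rough sketch of what 1:4 nearest-neighbor matching on two variables might look like. Everything in it is hypothetical, and the authors' actual matching procedure may well differ:

    # Illustrative 1:4 nearest-neighbor matching on two scores, loosely
    # modeled on the design in (3). All names and numbers are made up.
    # A real analysis would standardize the two scales before computing
    # distances, since GPA and exam scores have very different ranges.
    import math

    def match_one_to_four(lic_students, control_pool):
        """Match each LIC student to the 4 closest unmatched controls."""
        matches = {}
        available = list(control_pool)
        for sid, (gpa, skills) in lic_students.items():
            # Sort remaining controls by distance in (GPA, skills) space
            available.sort(key=lambda c: math.hypot(c[1][0] - gpa,
                                                    c[1][1] - skills))
            # Take the four closest, without replacement
            chosen, available = available[:4], available[4:]
            matches[sid] = [c[0] for c in chosen]
        return matches

    lic = {"L1": (3.6, 82.0), "L2": (3.2, 75.0)}
    controls = [("R1", (3.5, 80.0)), ("R2", (3.7, 84.0)),
                ("R3", (3.1, 74.0)), ("R4", (3.3, 76.0)),
                ("R5", (3.6, 81.0)), ("R6", (3.0, 70.0)),
                ("R7", (3.2, 77.0)), ("R8", (3.8, 85.0))]
    print(match_one_to_four(lic, controls))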

The outcomes that they studied were Medical Council of Canada Part 1 exam scores, in-training evaluation scores, and performance on the clerkship objective structured clinical examinations (OSCEs). They found no significant differences between the two groups on the Part 1 exam score (p = .8), in-training evaluation (p = .8), or the mean OSCE rating (p = .5). So, at least on these measures, students in a rural longitudinal clerkship did just as well as those who stayed at the university hospital for rotation-based clerkships.


References
(1) Norman G, Sherbino J, Dore K, Wood T, et al. The Etiology of Diagnostic Errors: A Controlled Trial of System 1 Versus System 2 Reasoning. Acad Med 2014; 89(2): 277-284.

(2) Schmidt H, Mamede S, van den Berge K, et al. Exposure to Media Information About a Disease Can Cause Doctors to Misdiagnose Similar-Looking Clinical Cases. Acad Med 2014; 89(2): 285-291.

(3) Myhre D, Woloschuk W, Jackson W, et al. Academic Performance of Longitudinal Integrated Clerkship Versus Rotation-Based Clerkship Students: A Matched-Cohort Study. Acad Med 2014; 89(2): 292-295.

1 comment:

  1. Nice summary, John. Too bad that actual data doesn't seem to change what we believe, or how we teach.
