Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making it clear that these are my personal opinions and do not represent any position of any university that I am affiliated with, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or Northeast Georgia Health System.



Tuesday, February 25, 2014

What’s new in Academic Medicine?

There were several interesting studies in Academic Medicine this month…

The first study (1) was led by one of my favorite educational researchers, Dr Geoffrey Norman. Dr Norman is one of the foremost researchers in the area of diagnostic reasoning. In this study, his team looked at resident physicians in Canada. Participants were second-year residents from three Canadian medical schools (McMaster, Ottawa, and McGill), recruited in 2010 and 2011 right after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II.

The researchers asked the residents to do one of two things as they completed twenty computer-based internal medicine clinical cases. They were instructed either to go through each case as quickly as possible without making mistakes (Go Quickly group; n=96) or to be careful, thorough, and reflective (Careful group; n=108). The results were interesting. There was no difference in overall accuracy (44.5% v. 45%; p=0.8, effect size (ES) = 0.04). The Go Quickly group did just that: they finished each case about 20 seconds faster, on average, than the Careful group (p<0.001). Interestingly, there was an inverse relationship between time spent on a case and diagnostic accuracy: cases that were answered incorrectly took longer for the participants to complete.

Another interesting study about diagnostic errors came out of the Netherlands (2). Dr Henk Schmidt asked an important question: does exposure to information about a particular disease make doctors more likely to make mistakes on subsequent diagnoses? In this study, internal medicine residents were given a Wikipedia article to review and critique. The article was about one of two diseases: half of the residents received an article on Legionnaires' disease, and the other half received one on Q fever. Six hours later, they were tested on eight clinical cases in which they had to commit to a diagnosis. Two of the cases (pneumococcal pneumonia and community-acquired pneumonia) were superficially similar to Legionnaires' disease. Two were similar to Q fever (acute bacterial endocarditis and viral infection). The other four were "filler" cases unrelated to either Wikipedia article (aortic dissection, acute alcoholic pancreatitis, acute viral pericarditis, and appendicitis).

The results are a little scary. Mean diagnostic accuracy was significantly lower on the cases that were similar to the disease the residents had read about on Wikipedia (0.56 v. 0.70, p=.016). In other words, they were more likely to make a diagnostic error when they had recently read about something that was similar but was not the correct diagnosis. The authors interpreted this as availability bias: the residents were more likely to misdiagnose the cases that resembled a disease they had recently read about. Availability bias can also be seen in students. Think of the student who has just come off the Cardiology service: every patient they see in clinic with chest pain is having a myocardial infarction.

The last article that caught my eye was another study out of Canada (3). The authors, from the University of Calgary, wanted to determine whether students doing their clinical clerkships in a non-traditional longitudinal fashion were learning as much as students in the traditional track. They looked at all of the students who completed their clinical training in a Longitudinal Integrated Clerkship (n=34) and matched each of them to four students in rotation-based clerkships. Students were matched on grade point average (GPA) and their performance on the medical skills examination in the second year.

The outcomes they studied were Medical Council of Canada Part I exam scores, in-training evaluation scores, and performance on the clerkship objective structured clinical examinations (OSCEs). They found no significant differences between the two groups on the Part I exam score (p=.8), in-training evaluation (p=.8), or mean OSCE rating (p=.5). So, apparently, students in a rural longitudinal clerkship did just as well as those who stayed at the university hospital for rotation-based clerkships.


References
(1) Norman G, Sherbino J, Dore K, Wood T, et al. The Etiology of Diagnostic Errors: A Controlled Trial of System 1 Versus System 2 Reasoning. Acad Med 2014; 89(2): 277-284.

(2) Schmidt H, Mamede S, van den Berge K, et al. Exposure to Media Information About a Disease Can Cause Doctors to Misdiagnose Similar-Looking Clinical Cases. Acad Med 2014; 89(2): 285-291.


(3) Myhre D, Woloschuk W, Jackson W, et al. Academic Performance of Longitudinal Integrated Clerkship Versus Rotation-Based Clerkship Students: A Matched-Cohort Study. Acad Med 2014; 89(2): 292-295.

Thursday, January 9, 2014

Getting the right students in medical school

Are we getting the right students into medical school?

To answer this question, we first have to decide what we want students to look like when they come out of medical school. There is some debate about this, but not as much as you would think. Once we decide what characteristics we want our graduates to have, we can start looking for those characteristics in the students we admit. Deciding what we want in a graduate is the key to answering the original question. Why is it so important?

That is simple. The output is determined by the input. It is like the old computer adage: garbage in, garbage out.(1) Not that medical students are garbage by any means; in fact, they are a pretty amazing group of young men and women. But if we wanted all of our graduates to be black, and all of the students we admitted were white, how much success would we have? Not much! In the same vein, if we wanted all of our graduates to speak Spanish, what is the easiest way to accomplish that? Clearly, admitting students who already speak Spanish is far easier than teaching them Spanish during medical school.

The same logic applies to many of the characteristics that people think are important in their personal physician. Take communication skills. We can teach students techniques to improve how they communicate and how they take a medical history. But if the students we admit have poor listening skills, or are so shy that they do not like talking to other people, it will be much harder to end up with graduates who listen well.

Another example is professional behavior. I like to call this "not being a jerk," but it is much bigger than that: self-sacrifice, commitment, respect, accountability, and trust.(2) Can we teach those things? I don't think so. I think a student either has those characteristics or they don't; they do not learn them in medical school. We have to admit students who already have them, and then we can teach them how to actualize those characteristics as medical students and physicians.

The funny part of this is that we take this for granted with one attribute: the ability to do well on standardized, multiple-choice tests. No one argues that an incoming medical student should not be good at that skill (and it is certainly a learned skill). So the graduating output is a physician who is good at answering multiple-choice questions. No surprise, but is that what we want? It is clearly part of what we want, right? We believe that doctors should have a basic minimum knowledge base in medicine, but is that the only key to being a good doctor?

We need to start thinking about what we want in our graduates. The US spends a lot of money on educating doctors. State tax revenue supports many schools. Federal research dollars support the infrastructure at many schools. Students pay tuition with the help of federally backed student loan programs. Even graduate medical education is mostly funded through federal Medicare dollars (see the great piece on GME accountability by Kenny Lin, MD, MPH, at Common Sense Family Doctor). But for all of this, we the people have very little say over the output. There has been remarkably little discussion of the accountability of medical schools to the taxpayers for their physician graduates.

Do they have the characteristics that we need from our physicians?

Do they care for the people that need to be cared for?

Are graduates practicing in the geographic regions that we need them to be practicing in?

I think the answer to all of these questions is no, unfortunately. There is this idea that it is OK for medical student outcomes to be market driven. That is rubbish! The people are paying for medical education. We deserve to get the right outcome for our money.

This brings me back to my original question—are we getting the right students into medical school? If we want to change the answers to the three questions above, then we have to change the students that come into medical school.

References
1) http://www.worldwidewords.org/qa/qa-gar1.htm

2) http://www.abimfoundation.org/Professionalism/Medical-Professionalism.aspx