Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making it clear that these are my personal opinions and do not represent the position of any university I am affiliated with, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, and the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or the Northeast Georgia Health System.



Monday, January 31, 2011

The STFM Conference on Medical Student Education

Last week I had the privilege of chairing the 37th annual STFM Conference on Medical Student Education. Until 2010, the conference was known as the STFM Predoctoral Education Conference; we changed the name to the Conference on Medical Student Education. You may not know much about STFM. The Society of Teachers of Family Medicine is my academic and professional home: all of my mentors, my teachers, my peers, and my colleagues are in STFM. It is a great organization. The Conference on Medical Student Education is a premier educational meeting that draws most of the family medicine educators from around the country.

Let me give you some highlights of the meeting. 

We started the meeting with an amazing plenary speaker: Dr. Kevin Eva, Senior Scientist at the Centre for Health Education Scholarship (CHES) at the University of British Columbia in Vancouver, Canada. Dr. Eva gave an invigorating talk about medical decision making. My favorite concept from the talk was that we have to make errors in order to get better and, maybe more importantly, that we as educators have to provide safe environments that allow students to make those mistakes. His talk is posted on FMDRL.

There was a great talk by Stacy Brungardt, CAE (Executive Director of STFM) about the alphabet soup of family medicine. She described several of the organizations that make up the "family" of family medicine (AAFP, CAFM, COGME, etc.). There was also an excellent peer session by Dr. Hannah Maxfield and colleagues describing a study of teaching students the Four Habits model of patient-centered communication. (Full disclosure: Drs. Maxfield, Zaudke, and Chumley are my colleagues at KU.)

Dr. Chumley and I presented some of our data on using artificial neural networks (ANNs) to classify students' information-gathering patterns in making a diagnosis. We looked at 200 students' performance on a standardized patient case with a 22-item checklist. We used the first 100 cases to train the ANN, and then we tested the network on the second 100 cases. We found that the ANN was able to predict whether a student reached the right or wrong diagnosis with 85% accuracy, better than two other standard classifiers, Bayesian and KNN (k-nearest neighbors).
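If you are curious what that kind of comparison looks like in practice, here is a minimal sketch in Python using scikit-learn. The data are random placeholders rather than our checklist data, the network architecture is arbitrary, and I am assuming a naive Bayes model to stand in for the "Bayesian" classifier; treat it as an illustration of the setup, not our actual analysis.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 22))   # 22 yes/no checklist items per student (synthetic)
y = rng.integers(0, 2, size=200)         # 1 = correct diagnosis, 0 = incorrect (synthetic)

X_train, y_train = X[:100], y[:100]      # first 100 cases train the models
X_test, y_test = X[100:], y[100:]        # second 100 cases test them

for name, model in [
    ("ANN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)),
    ("KNN", KNeighborsClassifier(n_neighbors=5)),
    ("Naive Bayes", BernoulliNB()),      # assumed stand-in for the "Bayesian" classifier
]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {acc:.0%} accuracy on the held-out 100 cases")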

There was an awesome dance party on Friday night that brought together faculty (old and young) with medical students.  

The Saturday morning plenary was by Dr. Cathy Pipas of Dartmouth Medical School, where she is Vice Chair of the Department of Community and Family Medicine. She gave a stimulating talk about the transformation of the Dartmouth practices into patient-centered medical homes (PCMHs). The scary part of that talk was that the senior administration at Dartmouth still has not aligned the financial incentives with the clinical practices that are transforming into PCMHs.

Drs. Jana Zaudke and Hannah Maxfield presented an interesting randomized trial of giving students feedback on the Four Habits model of communication after watching them perform with a standardized patient.

On Sunday morning, Dr. Joshua Freeman moderated a special session on social justice and family medicine. There were several medical students at the session, and we had a great discussion after his talk.

The final plenary on Sunday morning was by Dr. Jerry Kruse, Chair of the Department of Family and Community Medicine at Southern Illinois University School of Medicine. I asked Dr. Kruse to talk about his views on health care reform. He said that there are two divergent views of health care reform and its importance to the nation's progress toward the future, and he called the passage of the health care reform bill last year "the triumph of reason over power." Dr. Kruse is famous amongst his friends for his poetry, and he gave the most amazing Seussian rhyme describing the saga of Dr. Michael Klein, the Canadian doctor who studied the routine use of episiotomy. Dr. Kruse gave me permission to post the lyrics of this poem for your edification; look for it in a couple of days.

Dr. Kruse also presented the new COGME report, "Advancing Primary Care," and its recommendations. The most important recommendation was that primary care physicians should make up at least 40% of all physicians.

Overall, this was a great meeting.  Thanks to all of the presenters for your great work. Thanks to all the attendees, including over 200 students attending the national student-run free clinic forum. Thanks to the STFM staff for your hard work, in particular Ray Rosetta, the hardest workin' man in the conference business.  Next year, the meeting will be February 2-5, 2012 in sunny Long Beach, California. The Call for Papers opens in March, so get ready!

Monday, January 17, 2011

Criteria for selecting students in the Match

We are fast approaching a very important day in the academic calendar. On February 23, 2011, residency programs around the country have to enter their rank order lists. This day is the culmination of a lot of work by program directors, by the faculty and residents of each program, and by the students applying to those programs. The actual results of the Match are released about a month later, on March 17, but the work is all done once the lists are in on February 23.
You may not understand this process, so let me walk you through it. At some point during the third year of medical school, students decide which specialty they want to apply to. Over the next several months, they gather letters of recommendation from faculty. Frequently, they will do a fourth-year elective rotation in their specialty of interest, and they have to decide whether to try an "away" rotation at another school. Most students are obligated to enter the National Resident Matching Program: a fourth-year student at an allopathic medical school in the US has no choice but to enter the Match.
Beginning in October, students begin their job interviews. We call these residency interviews, but honestly the students are trying to land a job as a resident in a particular program. Students will have anywhere from 10 to 40 interviews, depending on the competitiveness of the specialty they are applying to. These interviews may be anywhere across the country, but they are mostly in larger cities (that is where the teaching hospitals are).
So, on February 23 the students enter their rank order lists. The program they like best is number 1; their least favorite is last. Residency programs do the same, ranking the students they interviewed from 1 to however many they want to rank. The programs don't have to rank all of the students they interview, and the students don't have to rank all of the programs. But the Match is a binding contract:1 if they rank someone (student or program), they are legally bound to accept the resulting match.
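Under the hood, the NRMP runs an applicant-proposing deferred-acceptance algorithm (the Roth-Peranson variant, which also handles couples and other special cases). Here is a minimal sketch of the core idea in Python; the applicants, programs, and quotas are hypothetical.

def match(applicant_prefs, program_prefs, quotas):
    # applicant_prefs: {applicant: [programs, best first]}
    # program_prefs:   {program: [applicants, best first]}
    # quotas:          {program: number of open positions}
    free = list(applicant_prefs)               # applicants still seeking a tentative match
    next_choice = {a: 0 for a in applicant_prefs}
    tentative = {p: [] for p in program_prefs}
    rank = {p: {a: i for i, a in enumerate(program_prefs[p])} for p in program_prefs}
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                           # list exhausted: applicant goes unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)                     # program did not rank this applicant
            continue
        tentative[p].append(a)                 # tentatively accept the applicant...
        if len(tentative[p]) > quotas[p]:      # ...but bump the least preferred if over quota
            tentative[p].sort(key=rank[p].get)
            free.append(tentative[p].pop())
    return tentative

# Hypothetical example: two programs with one slot each, three applicants.
applicants = {"Ann": ["Mercy", "City"], "Bob": ["City", "Mercy"], "Cam": ["City"]}
programs = {"Mercy": ["Bob", "Ann"], "City": ["Cam", "Ann", "Bob"]}
print(match(applicants, programs, {"Mercy": 1, "City": 1}))
# {'Mercy': ['Bob'], 'City': ['Cam']}  (Ann ends up unmatched)

The "deferred" part is the key property: no acceptance is final until every applicant has either been tentatively held or exhausted their list, which is why the NRMP advises applicants to rank programs in their true order of preference.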
The question for today is how do programs decide how to rank the students that they interview? There are several ways, some good, some really bad! Let’s start with the good ways.
Letters of recommendation can be very helpful if they are written by honest faculty physicians who know the student and have personally worked with them. These letters can be a great assessment of a student's global performance. A 1987 study by Keynan et al.2 compared faculty global ratings to other types of assessment: a multiple-choice question (MCQ) test and an oral examination. They found that "the 'subjective' expert assessment of performance through global rating scales is comparable to that of 'objective' evaluation through written MCQ." They also found, using a stepwise regression analysis, that ratings of 'reliability,' 'knowledge,' 'organization,' 'diligence,' and 'case presentation' were the most predictive of the overall global rating. Chair's letters, which are often written by the chair of a department (who probably does not know the student very well), are generally not much help.
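For the statistically curious, here is a minimal sketch of that kind of forward stepwise analysis in Python with scikit-learn: pick the subscale ratings that best predict the overall global rating. The ratings below are synthetic stand-ins, not the study's data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(2)
names = ["reliability", "knowledge", "organization", "diligence", "case presentation"]
X = rng.normal(3, 1, size=(120, 5))        # five subscale ratings per student (synthetic)
y = X @ np.array([0.4, 0.3, 0.2, 0.2, 0.3]) + rng.normal(0, 0.5, 120)  # overall rating

# Forward selection: greedily add the predictor that most improves the fit.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=3,
                                direction="forward").fit(X, y)
print([n for n, keep in zip(names, sfs.get_support()) if keep])

(Classical stepwise regression uses significance tests to add or drop terms; scikit-learn's sequential selector is a cross-validated cousin of the same idea.)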
Another good way to rank students is through an interview. Skilled interviewers can pick up on many communication and personality issues that probably don't show up on a paper application. Maybe the applicant is very introverted and has difficulty talking during the interview. Or maybe they are a jerk or a racist or a sexist. A personal interview can pick up these problems (not always, but often).
Unfortunately, there are also some bad ways to rank students.  Commonly, grades and boards are used. Frequently, medical school grades and USMLE board scores are the screens that decide whether a program invites a student to interview.
I want to focus on USMLE scores. Grades are quite variable from school to school: some schools have an A-to-F scale, some have Pass/Fail, and others run from Satisfactory to Superior. Preclinical grades don't have much predictive value for clinical grades, and neither is very predictive of performance in residency.
Board scores are just as bad. They seem to be an objective way to compare students: everyone across the country takes the same test. But there is one big problem. The USMLE is designed to measure knowledge and the application of knowledge, and it was created to give state licensing agencies a common evaluation for licensure. There are statistical problems when you try to interpret fine differences in numeric scores from a test built to yield a pass/fail decision. Several studies all show basically the same thing about board scores: performance on the boards does not correlate with performance as a physician.
In 2005, Rifkin and Rifkin3 compared the performance of all the first-year internal medicine residents at a large academic medical center on standardized patient encounters to their scores on USMLE Steps 1 and 2. They found very low correlations: for Step 1 the correlation was 0.2 (df=32, p=0.27), and for Step 2 it was 0.09 (df=30, p=0.61). Remember, a higher number means that the two measures are more strongly related; neither of these correlations was statistically significant.
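A quick way to get a feel for how weak those numbers are is to square them: r-squared is the share of variance in one measure explained by the other, so r = 0.2 means Step 1 scores explain only about 4% of the variance in standardized patient performance. A toy illustration in Python (the scores are simulated, not the study's data):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
step1 = rng.normal(220, 20, size=34)      # hypothetical Step 1 scores (n = 34, df = 32)
skill = 0.2 * (step1 - 220) / 20 + rng.normal(0, 1, size=34)  # weakly related SP ratings

r, p = pearsonr(step1, skill)
print(f"r = {r:.2f}, p = {p:.2f}, r-squared = {r*r:.2f}")  # r near 0.2 gives r^2 near 0.04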
A more recent study is very critical of the use of USMLE scores for the selection of residents. This study by McGaghie and colleagues4 was a research synthesis using a critical review approach.5 They collected and reported correlations between USMLE Step 1 and 2 scores and several reliable measures of clinical skill, including auscultation of the heart, performance of ACLS (Advanced Cardiac Life Support), communication with patients, thoracentesis, and central line placement. They found correlations from -0.05 to 0.29 with Step 1 and from -0.16 to 0.24 with Step 2.
Their conclusion sums it all up. "Use of these scores for other purposes, especially postgraduate residency selection, is not grounded in a validity argument that is structured, coherent, and evidence based. Continued use of USMLE Step 1 and 2 scores for postgraduate medical residency selection decisions is discouraged."
I couldn't agree more. If I need a neurosurgeon to operate on my brain, I want to know that he has a very steady hand, not the highest board score. If I need a radiologist, I want to know that her visual pattern recognition is outstanding, not that she scored well on a multiple-choice question test. And if I need a family doctor, I want to know that his clinical reasoning and communication skills are excellent, not that he scored well on the boards.
References
1. http://www.nrmp.org/res_match/policies/map_main.html

2. Keynan A, Friedman M, Benbassat J. Reliability of global rating scales in the assessment of clinical competence of medical students. Med Educ. 1987;21(6):477-81.

3. Rifkin WD, Rifkin A. Correlation between house staff performance on the United States Medical Licensing Examination and standardized patient encounters. Mt Sinai J Med. 2005;72(1):47-9.

4. McGaghie WC, Cohen ER, Wayne DB. Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Acad Med. 2011;86(1):48-52.

5. Eva KW. On the limits of systematicity. Med Educ. 2008;42:852–853.