This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.

The lawyers have asked that I add a disclaimer that makes it clear that these are my personal opinions and do not represent any position of any University that I am affiliated with including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or Northeast Georgia Health System.

Monday, October 6, 2014

Be FAIR to students

I recently saw a great editorial in Medical Teacher. Medical Teacher is the official journal of AMEE, an international association for all involved with medical and healthcare professions education. The journal “addresses the needs of teachers and administrators throughout the world involved in training for the health professions.” (1)

The editorial by Harden and Laidlaw (2) discussed the FAIR principles that teachers can use to help their students develop and that lead to better learning. As someone who works in faculty development for my department and my institution, I found these principles to be an effective faculty development tool. 

The four principles of FAIR are:

            F: provide appropriate Feedback to students
            A: make learning Active not passive
            I: Individualize learning
            R: ensure the learning is Relevant

Feedback is something that I have written about here in the past (see- Feedback or Compliments? Which is better?)  Feedback is important for learners. It helps students get better by giving the teacher the opportunity to correct mistakes. Dr. Harden quotes a 2007 review by Hattie and Timperley. (3) The authors reviewed twelve large meta-analyses, covering 196 studies, that looked at feedback. They found that the average effect size of feedback on performance across several different contexts was about 0.79. For perspective, this was lower than the effect of direct instruction (0.93) but greater than that of a student’s prior cognitive ability (0.71). Not getting enough feedback is one of the most common complaints from medical students about their teachers. Feedback is “the most powerful single thing that teachers can do to enhance achievement of their students.” (1) An important point from this article is that students need to use feedback for it to be effective. Students should use feedback from preceptors and faculty members to fix deficits through increased practice, reading, and experience.

The second point is that learning needs to be active. Active learning has a lot of advantages for the learner—it keeps them engaged in the process, it allows them to interact with peers in small group peer teacher/learner activities, and it encourages learners to use electronic and other outside resources to enhance their knowledge acquisition. Dr. Harden makes the point that no matter what the context, learning activities should be “designed to be meaningful”. Often students feel that learning activities have no point. For some activities in medical education, I would have to agree.  

This leads into the third principle of FAIR: learning needs to be individualized. It is funny that Facebook, Twitter, and Google have figured this out in less than a decade while in medical education we still don’t do this. The model has remained the same for one hundred years: everyone gets the same curriculum, taught in the same way, with the same assessments at the end. There are data suggesting that students have different learning styles and benefit from individual attention to those styles (see my 2012 blog--Self Regulated Learning and Performance). We get students into medical school from a wide variety of backgrounds and experiences, but pay little attention to these differences. Some students come from a science-heavy background while others come from a more liberal arts background. Some students may have been heavily involved in clinical medicine by volunteering in a free clinic while others have almost no clinical experience. More attention to these differences would maximize the students’ learning.

(reprinted from: Harden RM, Laidlaw JM. Med Teach. 2013; 35(1): 27-31)

The final area of concern in being FAIR to students is relevance. This used to be a big deal in medical education. When I went to medical school, there was very little clinical education before the third year; it was all basic science. As a student you were just trying to get through that so that you could learn to be a doctor later. Now, as a faculty member who not only teaches in the basic science curriculum but also directs a basic science module, I find it easier to provide the relevance. I think that we do a better job of using clinical cases and vignettes to frame basic science knowledge in the clinical context. In this setting of relevance, I believe student learning is enhanced.

I believe that the bottom-line suggestions from this article can be very helpful in structuring our teaching:

1) Recognize the importance of feedback
2) Assess the extent of active engagement of your students
3) Individually tailor the learning environment
4) Ensure the relevance of all learning activities

(2) Harden RM, Laidlaw JM. Be FAIR to students: four principles that lead to more effective learning. Med Teach. 2013; 35(1): 27-31

(3) Hattie J, Timperley H. The power of feedback. Review of Educational Research. 2007; 77: 81–112.

Wednesday, September 10, 2014

Does the experience on a clinical clerkship affect performance?

I found an interesting study this week that I wanted to blog about today. This study was published in Medical Education a couple of years ago. The authors, Dong and colleagues (1), asked an important and very common question: does the experience that a student has during a clinical rotation affect their performance on that rotation? This is important for many reasons. One big reason is that developing and maintaining adequate clinical experiences is an expensive and time-consuming process. It would be nice to know that the experiences we provide for students are having a positive effect.

The authors describe two alternative theories of learning in the clinical arena. One idea is that students need to utilize deliberate practice to learn. In other words, they need specific learning experiences that are led by a qualified mentor. These learning experiences are planned and need to be varied and extensive in order for students to develop expertise. The alternative idea is based on the concept of cognitive load theory. In this theory, medical students may have difficulty learning clinical medicine when they are exposed to multiple patients and clinical problems. Instead, students might learn better if they have more straightforward instructional formats, such as simulated cases.

Clinical clerkships in all specialties spend a lot of time trying to demonstrate that the clinical experiences they provide are similar across different sites and for different students. A previous study of clerkship directors in Internal Medicine found that they use core cases to compare the clinical experiences of multiple students.(2) Many clinical clerkships use paper or electronic logs to track students’ experiences.

This study was done at the Uniformed Services University, which is the military medical school. It is the only federal medical school and draws students from across the country. The authors looked at students on the internal medicine clerkship. The students kept track of all of their patient contacts using a patient log. They tracked how many patients each student saw and the number of patients seen with core problems.

The authors compared students’ intensity of clinical exposure with performance on the clerkship. What they found was a little surprising and maybe a bit counter-intuitive. Student performance was positively correlated with clinical experience, but only weakly. Specifically, after they used a pre-test to control for ability, there was a weak (r = 0.19) but statistically significant association. A student’s clinical score improved by two points for every ten extra patients seen in the outpatient setting. Similarly, the number of core clinical problems that students saw was correlated with their ambulatory clinical score (r = 0.19; p < 0.05). In real terms this means that a student who saw patients with all of the core problems (about 88% of all students) scored less than four points higher in ambulatory clinical points than those who did not.
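For the statistically minded, squaring the correlation coefficient shows just how weak this association is: r-squared gives the share of variance in clerkship scores that clinical exposure accounts for. A quick illustration (my own arithmetic, not the authors’):

```python
# r^2 (the coefficient of determination) is the fraction of variance
# in clerkship scores accounted for by clinical exposure.
r = 0.19
r_squared = r ** 2
print(f"Variance explained: {r_squared:.1%}")  # roughly 3.6%
```

In other words, patient volume explains under 4% of the variation in clerkship performance.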

So what does this all mean? Well, for one thing, we need to think very carefully about how clinical experiences should be structured. More is not necessarily better at least when it comes to number of patients. A targeted approach that is thoughtful and includes more time to think about patients may actually be better.

1) Dong T, et al. Relationship between clinical experiences and internal medicine clerkship performance. Medical Education 2012: 46: 689–697.

2) Denton GD, Durning SJ. Internal medicine core clerkships experience with core problem lists: results from a national survey of clerkship directors in internal medicine. Teach Learn Med  2009; 21: 281–3.

Tuesday, August 26, 2014

Use of Student Satisfaction Data

There has been an interesting change in medical education over the time that I have been a faculty member. I am sure that many people involved in education have seen the same change, and it appears that the phenomenon occurs across multiple different disciplines. The change is in how we use student satisfaction data. More specifically, it is in what importance is put on student satisfaction data, how it is used for curricular change, and how it is used to drive the curriculum.

I had a lot of great experiences when I was a medical student. But I had one really interesting (not-so-great) experience. We had a professor who was very well-liked. He was a great teacher and a student advocate, one of those teachers who was always available to meet with a student who didn’t understand a concept or to do individual tutoring sessions. He was very personable. He was also a great lecturer who always delivered content that was relevant and useful. His test questions were consistent with his lectures and the information they contained.

He was what most of us students thought of as the gold standard for a faculty educator. But he was not a researcher. And in many academic departments and medical schools, you have to be a researcher first and a teacher second. If you don’t get enough research grants, then your job is at risk. That is what happened in this situation: he did not receive tenure, so his contract was not renewed.

My medical school class was the group of students that worked with him the most because of the course that he taught. We were very upset by the school's decision. We wanted to do something to express our dissatisfaction with this result. He was one of our best teachers, and we did not think it was right to get rid of him. So, we wrote a very professional letter to the Dean of the medical school, expressing our feelings about this decision. The entire class signed it. There were over a hundred people in the class, and everyone signed this letter. Then our class president took this letter (you can call it a petition, I guess, but it was really just a letter) to the Dean of the medical school.

What happened next is shocking to me now, but at the time it was not surprising. The Dean basically threw the class president out of his office and told him never to come back. He said something to the effect of “This is my medical school, I’m in charge, and the students don’t have any say in what goes on.” Actually, at the time, there was a story going around that he said, “If you complain about anything else, I’ll break your kneecaps,” or something like that. But that may have just been a story made up by med students afterwards.

Fast forward to 2014...  For every course that we deliver in our medical school (and in every medical school that I know of), we ask the students to comment on the process. We ask them to comment on the policies that are in place. We ask them to comment on the learning objectives and whether the content matched those objectives. We ask them about the content of the course. We ask a lot for their opinion about their satisfaction with the curriculum. There are some really good things about this. Clearly, the dean’s response to my class’s dissatisfaction was not a great one. But I think it is possible for the pendulum to swing too far in the other direction.

In a lot of medical schools, students have this idea that they are helping to determine the curriculum. I think that is a dangerous and, in many ways, nonsensical idea. In addition, the LCME looks at student satisfaction on the Graduate Questionnaire, a survey that is sent out to all medical students after graduation. In the accreditation visits for each medical school, the LCME uses that data to determine accreditation for the medical schools. Again, this is a very dangerous proposition in many ways.

Now, don’t get me wrong. I am very interested in student satisfaction data. But the data I think students should be giving us concern how the content was delivered: How accessible were the faculty? Did they follow the policies that we have set forth for grading? Did the questions on the test match up with the content that was delivered during the lectures? Was the lecture style appropriate? Did students understand the lecture? Were the slides helpful and additive to the lectures? Those are the kinds of questions that student satisfaction data would be pretty useful in answering. And we do some of that.

But often, we ask them other questions. For example: do you think that the information in this course will help you to be a better doctor? That question bothers me a lot. I really have trouble seeing how a medical student would know the answer, or how their answer would change what I want to do with a course that I am teaching. I see medical students write things in their comments like, “the information in this lecture is not important,” or “the information in this course is fluff.” When I think about that, I wonder: does a student have the framework to make that call?

One of the things that is important for faculty and course directors and medical schools to do is to think about what the curriculum should look like. What information do you want the students to learn? To some extent, that is driven by the national examinations, like USMLE. There is content that every student will be tested on in the national standardized tests. The curriculum needs to do a good job of preparing the students to take those tests. But how the other pieces of the curriculum are emphasized should be the judgment of the faculty.

It often seems like students are giving their opinion, but that opinion is based on what they think is important, with little background or context. For instance, we had a student comment that said, “I didn’t like going to this clinical site.” I thought it was an interesting comment. We specifically chose that clinical site for all the students because we wanted them to have the experience. It was an underserved practice. We wanted them to be exposed to a clinic in an underserved area so that they might see how it could be useful in their future practice. Most of the students really got that. But occasionally, there’d be a student who just didn’t think that it was important. So I think: okay, you didn’t think it was important, but I do.

Another thing that students comment on in satisfaction surveys is things that are outside their expertise. It is often comments like “I don’t understand why this test counts for so much of my grade.” I think it should count for that much, and it doesn’t make any difference that the student doesn’t agree with me.

I say all of this to come back to the curricular review and development process. Every course in medical school should be evaluated on a regular basis. The satisfaction surveys that students complete are given a lot of weight. If students make a lot of negative comments, or there are several things that they do not like, there is a push from the Dean’s office to look at those comments. The administration may encourage the director to change the course in some way. That is problematic, because it takes the power of determining the direction of the medical curriculum away from the faculty and gives it to students. The faculty are the most experienced and most able to determine what the curriculum should look like, but this gives curricular control to those least experienced and least able to make those decisions: the medical students.

I’m not advocating that we should go back to the place where the dean can kick you out of his office and tell you never to come back, but I do think that it’s important for the medical school faculty to drive the curriculum. Student comments should be limited to the delivery, the process, and the policies, and should stay out of what the content is and how that content is weighted.

Wednesday, August 13, 2014

What’s new in Academic Medicine this month?

There were several interesting articles in the August issue of Academic Medicine.

The first was a retrospective study by Norcini, et al (1) that actually tries to connect performance on a high-stakes examination (USMLE Step 2 CK) with real patient outcomes. The authors looked at about 61,000 patients who were hospitalized in Pennsylvania for congestive heart failure (CHF) or myocardial infarction (MI). They focused on admitting physicians who were graduates of international medical schools and had taken the Step 2 Clinical Knowledge (CK) examination. The authors found that an increase of one point on the examination was associated with a 0.2% decrease in the mortality of their patients (95% CI: 0.1—0.4%). The authors recommended using Step 2 CK as part of the licensure process, but that seems premature. It would also be interesting to look at physicians who are graduates of US allopathic and osteopathic medical schools.
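To put that estimate in perspective, here is a simple back-of-the-envelope scaling of the reported association (my own arithmetic, assuming a linear effect, which a single point estimate does not guarantee):

```python
# Scale the reported association (0.2% lower mortality per additional
# Step 2 CK point) to a hypothetical 10-point score difference.
# Assumes linearity; illustration only, not a result from the paper.
per_point_decrease = 0.2   # percent decrease in mortality per exam point
score_difference = 10      # hypothetical difference in exam points
total = per_point_decrease * score_difference
print(f"Estimated mortality difference: {total:.1f}%")  # 2.0%
```

Even under this naive reading, a fairly large score gap corresponds to a modest absolute difference, which is one reason the licensure recommendation seems premature.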

The second study, by Nixon, et al (2), evaluated students on the Internal Medicine clerkship at the University of Minnesota. Students were instructed on using educational prescriptions to create PICO-formatted questions (Patient-Intervention-Comparison-Outcome) and then answers to those questions for a bedside case presentation. The content and quality of the questions and answers were then analyzed by the authors. They found that 59% (112/190) of the questions were about therapy, and 19% (37/190) were related to making a diagnosis. They also saw that 61% (116/190) scored 7/8 - 8/8 on the PICO conformity scale. The quality of answers was pretty high, with 37% (71/190) meeting all criteria for high quality.

And finally, a really cool study by Watson (3) that analyzed hand motion patterns using an inertial measurement unit. The author looked at 14 surgical attendings and 10 first- and second-year surgical residents. They were asked to do a simulated surgical procedure while wearing an inertial measurement unit on their dominant hand. They used the pattern of movements to train a classification algorithm with expert and novice patterns. The classification algorithm (which is similar to an artificial neural network) is good at identifying patterns. In this case, when the authors gave the classification algorithm blinded hand motion patterns, it did a pretty good job of classifying them as expert or novice. Its accuracy was 83%, with a sensitivity of 86% and specificity of 80%. The classification algorithm was able to reliably classify surgical hand motion patterns as expert or novice. This could be used in the future to make an objective assessment of procedural or surgical proficiency.
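For readers less familiar with these classifier metrics, here is a small sketch of how accuracy, sensitivity, and specificity fall out of a confusion matrix. The counts below are hypothetical, chosen only to reproduce the figures reported in the paper:

```python
def classifier_metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # expert patterns labeled expert
    specificity = tn / (tn + fp)               # novice patterns labeled novice
    accuracy = (tp + tn) / (tp + fn + tn + fp) # all correct labels
    return sensitivity, specificity, accuracy

# Hypothetical counts matching the reported 86% / 80% / 83% figures
sens, spec, acc = classifier_metrics(tp=43, fn=7, tn=40, fp=10)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, accuracy={acc:.0%}")
```

Here “expert” is treated as the positive class, so sensitivity is the fraction of expert hand-motion patterns correctly flagged as expert, and specificity is the fraction of novice patterns correctly flagged as novice.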

This was a good month in Academic Medicine. Some pretty good studies!

(1)       Norcini J, et al. The Relationship Between Licensing Examination Performance and the Outcomes of Care by International Medical School Graduates.  Acad Med  2014; 89(8): 1157-1162.
(2)       Nixon J, et al. SNAPPS-Plus: An Educational Prescription for Students to Facilitate Formulating and Answering Clinical Questions. Acad Med 2014; 89(8): 1174-1179.

(3)       Watson R. Use of a Machine Learning Algorithm to Classify Expertise: Analysis of Hand Motion Patterns During a Simulated Surgical Task.  Acad Med  2014; 89(8): 1163-1167.

Monday, May 12, 2014

How money influences specialty choice

Does it bother anyone that the top Medicare billers are also in the specialties that students have chosen for the ROAD? If you don’t remember, I wrote about the ROAD in my November 5, 2013 post “The ROAD less traveled or why don’t med students choose primary care?”  The short version: the ROAD comprises the high-pay, nice-lifestyle specialty choices that have become very popular with medical students: Radiology, Ophthalmology, Anesthesiology, and Dermatology.

As many of you have probably already heard, the Centers for Medicare and Medicaid Services (CMS) has, after many years, released billing data from almost all of the physicians who billed Medicare for patient care in 2012. As I read the articles in the Wall Street Journal and in the New York Times, I started thinking about the implications of the data as applied to student specialty interest. The New York Times reported that 100 physicians received a total of $610,000,000 from Medicare. The top biller was an ophthalmologist who received $21M from CMS.

There is a great chart, based on CMS data, that appeared in the NYT article; I have recreated it here. It shows the breakdown of billing. But I would like to look in more detail at the top 2 percent. That group billed CMS for $15.1 billion in 2012. Almost every specialty was represented (except pediatrics, which does not bill Medicare very often).

The specialty that has gotten the most attention was ophthalmology. Maybe for good reason—lots of high billers. But let’s go back to my blog from November, “The ROAD less traveled or why don’t med students choose primary care?”  In it, I wrote about a survey of medical students by Clinite and colleagues.(1) They found that students with a higher interest in primary care specialties were less concerned about average salary and, vice versa, students with less interest in primary care were more concerned about a specialty’s salary. So it might follow that a specialty with a lot of high-billing providers would be more attractive to some students, particularly those who were already more concerned about a specialty’s salary.

With that in mind, we should look more carefully at the top 2 percent of CMS billers again. When we break down the Top 2 percent of Medicare billers we find some striking differences among specialties.

Ophthalmology had 2,995 physicians who were in the top 2 percent of CMS billers. Those physicians were 15.5% of all practicing ophthalmologists.

Dermatology had 1,142 physicians who were in the top 2 percent of CMS billers. Those physicians were 9.3% of all the practicing dermatologists.

On the other end of the spectrum, family medicine had 302 physicians who were in the top 2 percent of CMS billers. Those physicians were about 0.3% of all practicing family doctors.

I am not saying that any of these doctors did anything wrong. I understand that many of the high billers to CMS are practicing in groups, with multiple locations, doing difficult procedures, etc. But the differences are so large that it is hard for a student who is making a decision about what to do with his or her life to ignore. 

Think about this again. In the case of ophthalmology, 15 percent of all its doctors would be considered high billers by any measure. A specialty that has a lot of high-billing providers is more attractive to students who are more concerned about a specialty’s salary. There is some support for this in the NRMP Match data.(2) In the 2013-2014 Match, the average percentage of a specialty’s positions filled by US allopathic seniors was about 62 percent. Family medicine (with a very small number of high-billing providers) was able to fill only 44 percent of its residency spots with US allopathic seniors. On the other hand, 91 percent of ophthalmology positions and 88 percent of dermatology positions were filled by US allopathic seniors. Radiology (68%) and anesthesiology (69%), while not as high, were both above the mean.

At some level this goes back to the admissions process. We have to get the right students into medical school. We (the US taxpayer) pay for this system. We think it is a great system, but it is not really that great. We have created high reward, and thus high demand, for some parts of the system, and low reward, and thus low demand, for other parts. Until the system is readjusted (like the Canadians did a few years ago) (3), there will continue to be a lack of students entering primary care.

1) Clinite KL, et al. Primary Care, the ROAD Less Traveled: What First-Year Medical Students Want in a Specialty. Academic Medicine  2013;88(10):1522-1528.
3) Kruse J. Income Ratio and Medical Student Specialty Choice: The Primary Importance of the Ratio of Mean Primary Care Physician Income to Mean Consulting Specialist Income. Family Medicine  2013;45(4):281-3

Thursday, April 10, 2014

Medicine, Humanism, and Social Accountability: How the Values of the Gold Humanism Honor Society and Community Capstone projects collide

The following is the script of a talk that I gave last night at the induction ceremony for the Florida International University Herbert Wertheim College of Medicine’s 2014 Gold Humanism Honor Society.

Thank you so much for the opportunity to speak to you this evening.

The Gold Foundation was formed in 1988 with the purpose of nurturing and preserving the tradition of the caring physician. There was a concern then (as there is now) that the outcome of our medical education process was doctors who no longer had compassion for patients: the very people who were, for the most part, the reason we went into medicine in the first place.

The Gold Foundation proposed that the humanistic doctor not only has, but displays on a daily basis, certain attributes, described by the acronym IE CARES:

Integrity – the congruence between expressed values and behaviors
Excellence in clinical care
Compassion – awareness of the suffering of others but also working to relieve it
Altruism – the capacity to put the needs of another before your own
Respect – regard for the autonomy and values of another
Empathy – the ability to put oneself in another’s situation
Service – sharing with those in need

These core attributes have real value and meaning in the care that we provide to our patients on a daily basis.
When the Gold Foundation was first formed, they asked several important questions.
1.    Can we identify students who are both scientifically proficient and compassionate?
2.    Are we selecting idealistic and humanistic young people for medical school, but then discouraging their spirit of caring through the education process?
3.    If we select students who don’t have the right characteristics, can we, through education, teach them to develop those characteristics?
I would like to spend the next few minutes talking about each of these questions. As I speak, I believe that you will see the relevance of the Community Capstone project to this discussion.

So, the first question was:

Can we identify students who are both scientifically proficient and compassionate?

Before we answer that question, you need to ask yourself an important question: do you believe that it is important to be both scientifically proficient and compassionate?

Patients definitely do. A Health magazine survey (1) a few years ago found that the number one thing that patients wanted was a doctor who listens to them. Number two was being up to date on the most recent information in the medical field. Did you hear that? A doctor who listens was first, then being up to date.

Patients don’t want to see a really smart doctor who is a jerk.
But they also don’t want a caring doctor who doesn’t know anything.
Both attributes are important.

In February of last year, the Association of American Medical Colleges’ (AAMC) Committee on Admissions endorsed a list of nine core personal competencies that medical students should have prior to beginning medical school.(2) These included ethical responsibility, dependability, a service orientation, social skills, the capacity for improvement, resilience, cultural competence, oral communication and teamwork. These seem like important characteristics that have some commonalities with the Gold characteristics even though it does not specifically mention compassion.

A survey of medical school deans done in 2007 (3) found that 90 percent thought that “Caring Attitudes” were emphasized during the pre-clinical and clinical years. Ninety-three percent of the schools asked admission interviewers to assess the caring attitudes of their applicants. But do they actually do that?

I am sure that you remember the application process to come to medical school here at FIU. We started out looking at your college grades and your MCAT scores. Every school does that. Almost every school in the country also uses an in-person interview to determine if you get into medical school. The interview’s purpose is to try to figure out what kind of person you are. Do you have those other characteristics that will make you a caring and compassionate physician?

A specific type of interview, known as a semi-structured interview, is actually pretty good at predicting performance in medical school. And, importantly, it is better at figuring out whether an applicant has those important personal characteristics such as compassion and ethical attitudes.(4)

So, the answer to the first question is yes, we can identify students who are scientifically proficient and compassionate.

The second question was:

Are we selecting idealistic and humanistic young people for medical school, but then discouraging their spirit of caring through the education process?

To answer this question, I want to tell you a true story that took place 23 years ago. This story is from my medical school experience.

My first rotation was surgery. I was assigned to the VA hospital. It was across the street from the medical school and everyone wanted to go there. Remember, this was back in the old days. We never saw a faculty physician! So as a student you actually got to do a lot while you were working with the residents.

The surgical residents ran everything. And the Chief Residents were like gods. They could do anything, they knew everything, and could handle any problem that came along. We had one surgical chief that everyone was terrified of: Dr. X *. And as luck would have it, I was assigned to his team. The Red surgery team. I have no idea where he is now but back then he was the Chief Resident and ran the entire surgical floor.

Every morning we arrived at about 4:30 to pre-round on our patients. Each student carried five or six patients and we had to present each of them on rounds. At six o’clock AM, we met with the junior residents and went over all of the patients, updating them on any issues from the night before. We went to surgery in the morning and surgical clinic in the OPD in the afternoon.  And then we would wait. We had to wait for Dr. X to finish his case or clinic or his coffee (whatever he was doing) so that we could round again with him.

When he was ready, we would get a page. Back then it was just a voice page (there was no such thing as cell phones or text messaging). The page said “RED DOGS to the ICU, RED DOGS to the ICU”. We were the dogs, the medical students. At this point, it was usually 7 or 8 at night and you had been in the hospital since 4:30 or so.

Rounds with Dr. X were an exercise in terror.  If you talked too long, he would tell you to shut up. If he asked you a question and you got the answer wrong, he would just shake his head and say, “Stupid Medical Student”. If you tried to talk about family history, or what the patient’s spouse was worried about, or gave any social history, he would cut you off: “I’m not interested in that…”

Was that discouraging?  Yes it was.

Experiences like this can cause you to develop a hard external shell. It is like a callus that develops on your psyche to keep you from getting harmed. A survey of medical students found that they identified empathy, communication, integrity, and honesty as the most important qualities of a doctor.(5) But Morley and colleagues at SUNY Upstate in Syracuse, New York, found that idealism begins to decline as early as the end of the first year of medical school.(6) Empathy begins to decline as soon as you start to see patients, and it continues to decline across all of the years of medical school.(7) You start out as caring people, and then the weight of the medical education system begins to wear you down.

So the answer to the second question is yes. We do things during the educational process that can damage you.

The third question was:

If students lose (or never really had) important characteristics, can we, through education, help them to develop or learn them?

From my perspective, this gets to the heart of the issue. 

This is the intersection of the Gold Humanism Honor Society and the Green Family Foundation NeighborhoodHELP program.

If we want students to learn to be caring and compassionate physicians, they need to see patients in the communities where those patients live.  As a student, it is easy to get jaded by the difficulties of the patients around you.
Why did that patient miss his appointment?
Why doesn’t she get out of her house and walk more?
Why don’t they eat more healthy foods?

People smoke. People drink. People are obese, they don’t exercise, they don’t eat right. They don’t take their medicine. It is easy to get to the point where we believe that every medical problem our patients have is their own fault, rather than seeing and understanding that people’s lives are complicated and difficult. More difficult than our lives. Far more difficult than my life.

The only way to see that is to go out into the community.  Get to know patients in their own world: not in the artificial world of the clinic or the hospital, but in their homes.

That is the simple brilliance of the NeighborhoodHELP program. You have the opportunity to see patients and families in their own home. Where they live. In their neighborhood. Their community.

The Community Capstone projects build on that experience. The best Capstone projects, such as some of those honored here tonight, are the ones in which a student or group of students was struck by an issue that they saw in the community. They found a community partner who was also interested in that issue, and they worked together to address it.

We know that the humanistic qualities that are held up by the Gold Foundation can be nurtured through exposure to mentors who also have those qualities. Through experiences that encourage you to care for people not just take care of them. And through opportunities to reflect on those experiences.

When you applied for medical school, all of you said something about how you wanted to help people. You wrote in your personal essays about your motivations for going into medicine. I have read thousands of those essays over the years. Everyone says the right things, but many don’t follow through on those words.

You have the opportunity to do something different.

To care for people

To make a difference in their lives

Congratulations, and thank you again for the opportunity to talk to you today.

(2) Koenig TW, et al. Core Personal Competencies Important to Entering Students’ Success in Medical School: What are they and how could they be assessed early in the Admission process? Acad Med  2013; 88(5): 603-613.
(3) Lown BA, et al. Caring attitudes in medical education: perceptions of deans and curriculum leaders. J Gen Intern Med 2007; 22(11): 1514-1522.
(4) Pau A, et al.  The Multiple Mini-Interview (MMI) for student selection in health professions training - a systematic review. Med Teach. 2013; 35(12): 1027-41.
(5) Hurwitz S, et al. The desirable qualities of future doctors—a study of medical student perceptions.  Med Teach  2013; 35(7): 1332-1339.
(6) Morley CP, et al. Decline of medical student idealism in the first and second year of medical school: a survey of pre-clinical medical students at one institution. Med Educ Online  2013; 18.
(7) Neumann M, et al. Empathy decline and its reasons: a systematic review of studies with medical students and residents. Acad Med. 2011; 86(8): 996-1009. 

* (name removed to protect his identity)

Tuesday, February 25, 2014

What’s new in Academic Medicine?

There were several interesting studies in Academic Medicine this month….

The first study (1) was led by one of my favorite educational researchers, Dr Geoffrey Norman. Dr Norman is one of the foremost researchers in the area of clinical reasoning. In this study, his team looked at resident physicians in Canada. Participants were second-year residents from three Canadian medical schools (McMaster, Ottawa, and McGill), recruited in 2010 and 2011, right after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II.

The researchers asked the residents to do one of two things as they completed twenty computer-based internal medicine clinical cases. They were instructed either to go through each case as quickly as possible without making mistakes (Go Quickly group; n=96) or to be careful, thorough, and reflective (Careful group; n=108). The results were interesting. There was no difference in overall accuracy (44.5% v. 45%; p=0.8, effect size (ES) = 0.04). The Go Quickly group did just that: they finished each case, on average, about 20 seconds faster than the Careful group (p<0.001). Interestingly, there was an inverse relationship between time spent on a case and diagnostic accuracy: cases that were answered incorrectly took longer to complete.

Another interesting study about diagnostic errors came out of the Netherlands (2). Dr Henk Schmidt asked an important question: does exposure to information about a certain disease make doctors more likely to make mistakes on subsequent diagnoses? In this study, internal medicine residents were given a Wikipedia article to review and critique. The article covered one of two diseases (Legionnaires’ disease or Q fever); half of the residents received the Legionnaires’ article, the other half the article on Q fever. Six hours later, they were tested on eight clinical cases in which they were forced to make a diagnosis. Two of the cases (pneumococcal pneumonia and community-acquired pneumonia) were superficially similar to Legionnaires’ disease. Two were similar to Q fever (acute bacterial endocarditis and viral infection). The other four were “filler” cases unrelated to either disease from Wikipedia (aortic dissection, acute alcoholic pancreatitis, acute viral pericarditis, and appendicitis).

The results are a little scary. Mean diagnostic accuracy was significantly lower on the cases that were similar to the disease the residents had read about (0.56 v. 0.70, p=0.016). In other words, they were more likely to make a diagnostic error when they had recently read about something that was similar to, but was not, the correct diagnosis. The authors attributed this to availability bias: the residents were more likely to misdiagnose the cases that resembled something they had recently read about. Availability bias can also be seen with students. Think about the student who has just come off the Cardiology service: every patient they see in clinic with chest pain is having a myocardial infarction.

The last article that caught my eye was another study out of Canada. The authors, from the University of Calgary, wanted to determine whether students doing their clinical clerkships in a non-traditional longitudinal fashion were learning as much as students in the traditional track. They looked at all of the students who completed their clinical training in a Longitudinal Integrated Clerkship (n=34) and matched each of them to four students in rotation-based clerkships. Students were matched on grade point average (GPA) and their performance on the medical skills examination in the second year.

The outcomes that they studied were the Medical Council of Canada Part 1 exam scores, in-training evaluation scores, and performance on their clerkship objective structured clinical examinations (OSCE). They found no significant differences between the two groups on the Part 1 exam score (p = .8), in-training evaluation (p = .8), or the mean OSCE rating (p = .5). So, apparently, students in a rural longitudinal rotation did just as well as those who stayed at the University hospital for rotation-based clerkships.                  

(1) Norman G, Sherbino J, Dore K, Wood T, et al. The Etiology of Diagnostic Errors: A Controlled Trial of System 1 Versus System 2 Reasoning.  Acad Med  2014; 89(2): 277-284.

(2) Schmidt H, Mamede S, van den Berge K, et al. Exposure to Media Information About a Disease Can Cause Doctors to Misdiagnose Similar-Looking Clinical Cases. Acad Med  2014; 89(2): 285-291.

(3) Myhre D, Woloschuk W, Jackson W, et al. Academic Performance of Longitudinal Integrated Clerkship Versus Rotation-Based Clerkship Students: A Matched-Cohort Study.  Acad Med 2014; 89(2), 292–295.

Thursday, January 9, 2014

Getting the right students in medical school

Are we getting the right students into medical school?

To answer this question, we have to decide what we want students to look like when they get out of medical school. There is some debate about this, but not as much as you would think. Once we decide what characteristics we want our graduates to have, we can start looking for those characteristics in the students we admit to medical school. Deciding on those characteristics is the most important step in answering the original question. Why is it so important?

That is simple. The outcome is determined by the input. It is like the old computer adage: garbage in, garbage out.(1) Not that medical students are garbage by any means; in fact, they are a pretty amazing group of young men and women. But if we wanted all of our graduates to be black, and all of the students we admitted were white, how much success would we have?  Not much!  In the same vein, if we wanted our graduates to all speak Spanish, what is the easiest way to accomplish this?  Clearly, admitting students who speak Spanish is far easier than teaching them Spanish during medical school.

This reasoning works for a lot of the characteristics that people think are important for their personal physician to have. Take communication skills. We can teach students techniques to improve their communication skills and their approach to taking a medical history. But if the students we admit to medical school have poor listening skills, or are very shy and do not like to talk to other people, it will be much harder to end up with graduates who are good listeners.

Another example is professional behavior. I like to call this “not being a jerk” but it is much bigger—self-sacrifice, commitment, respect, accountability, and trust.(2) Can we teach those things? I don’t think so. I think a student either has those characteristics or they don’t. They do not learn them in medical school. We have to admit students that have the characteristics and then we can teach them how to actualize them as a medical student and physician.

The funny part of this is that we take this for granted with one attribute: the ability to do well on standardized, multiple-choice question tests. No one argues that an incoming medical student should not be good at that skill (and it is certainly a skill that is learned). So the graduating output is a physician who is good at answering multiple-choice questions. No surprise, but is that what we want? It is clearly part of what we want, right? We believe that doctors should have a basic minimum knowledge base in medicine, but is that the only key to being a good doctor?

We need to start thinking about what we want in our graduates. The US spends a lot of money on educating doctors. State tax revenue supports many schools. Federal research dollars support the infrastructure at many schools. Students pay tuition with the help of federally backed student loan programs. Even graduate medical education is mostly funded through federal Medicare dollars (see the great piece on GME accountability by Kenny Lin, MD, MPH, at the Common Sense Family Doctor blog). But for all of this, we the people have very little say on the output. There has been remarkably little discussion of the accountability of medical schools to taxpayers for their physician graduates.

Do they have the characteristics that we need from our physicians?

Do they care for the people that need to be cared for?

Are graduates practicing in the geographic regions that we need them to be practicing in?

I think the answer to all of these questions is—no, unfortunately.  There is this idea that it is ok for medical student outcomes to be market driven. That is rubbish! The people are paying for medical education. We deserve to get the right outcome for our money.

This brings me back to my original question—are we getting the right students into medical school? If we want to change the answers to the three questions above, then we have to change the students that come into medical school.