Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making it clear that these are my personal opinions and do not represent any position of any University that I am affiliated with, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or Northeast Georgia Health System.



Tuesday, August 26, 2014

Use of Student Satisfaction Data

There has been an interesting change in medical education over the time that I have been a faculty member. I am sure that many people involved in education have seen the same change; it appears to occur across many different disciplines. The change is related to how we use student satisfaction data. And even more specifically, how much importance is put on student satisfaction data, how it is used for curricular change, and how it is used to drive the curriculum.

I had a lot of great experiences when I was a medical student. But I had one really interesting (not-so-great) experience. We had a professor who was very well-liked. He was a great teacher and a student advocate. He was one of those teachers who was always available to meet with a student who didn't understand a concept or to do individual tutoring sessions. He was very personable. He was also a great lecturer who always delivered content that was relevant and useful. His test questions were consistent with the lectures he gave and the information in them.

He was what most of us students thought of as the gold standard for a faculty educator. But he was not a researcher. And in many academic departments and medical schools, you have to be a researcher first and a teacher second. If you don't get enough research grants, then your job is at risk. And that's what happened in this situation: he did not receive tenure, so his contract was not renewed.

My medical school class was the group of students that worked with him the most because of the course he taught. We were very upset by the school's decision. He was one of our best teachers, and we did not think it was right to get rid of him. We wanted to do something to express our dissatisfaction with this result. So we wrote a very professional letter to the Dean of the medical school, expressing our feelings about the decision. The entire class signed it; there were over a hundred people in the class, and every one of them signed. Then our class president took this letter (you could call it a petition, I guess, but it was really just a letter) to the Dean of the medical school.

What happened next is shocking to me now, but at the time it was not surprising. The Dean basically threw the class president out of his office and told him never to come back. He said something to the effect of, "This is my medical school, I'm in charge, and the students don't have any say in what goes on." Actually, at the time, there was a story going around that he said, "If you complain about anything else, I'll break your kneecaps," or something like that. But that may have just been a story made up by med students afterwards.

Fast forward to 2014... For every course that we deliver in our medical school (and in every medical school that I know of), we ask the students to comment on the process. We ask them to comment on the policies that are in place. We ask them to comment on the learning objectives and whether the content matched those objectives. We ask them about the content of the course. We ask them a lot for their opinion about their satisfaction with the curriculum. There are some really good things about this. Clearly, the Dean's response to my class's dissatisfaction was not a great response. But I think it is possible for the pendulum to swing too far in the other direction.

In a lot of medical schools, students have the idea that they are helping to determine the curriculum. I think that is a dangerous and, in many ways, nonsensical idea. In addition, the LCME looks at student satisfaction on the Graduation Questionnaire, a survey sent to all graduating medical students. In the accreditation visit for each medical school, the LCME uses that data to help determine accreditation. Again, this is a very dangerous proposition in many ways.

Now, don't get me wrong. I am very interested in student satisfaction data. But the data I think students should be giving us is about delivery: How was the content delivered? How accessible were the faculty? Did they follow the grading policies that we set forth? Did the test questions match the content that was delivered in the lectures? Was the lecture style appropriate? Did the students understand the lectures? Were the slides helpful and additive to the lectures? Those are the kinds of questions that student satisfaction data would be pretty useful in answering. And we do some of that.

But often, we ask them other questions. For example: do you think that the information in this course will help you to be a better doctor? That question bothers me a lot. I really have trouble seeing how a medical student would know the answer, or how their answer would change what I want to do with a course that I am teaching. I see medical students write things in their comments like, "the information in this lecture is not important," or "the information in this course is fluff." When I think about that, I wonder: does a student have the framework to make that call?

One of the things that is important for faculty, course directors, and medical schools to do is to think about what the curriculum should look like. What information do you want the students to learn? To some extent, that is driven by the national examinations, like the USMLE. There is content that every student will be tested on in the national standardized tests, and the curriculum needs to do a good job of preparing students to take those tests. But how the other pieces of the curriculum are emphasized should be the judgment of the faculty.

It often seems like students are giving their opinion based on what they think is important, but with little background or context. So, for instance, we had a student comment that said, "I didn't like going to this clinical site." I thought it was an interesting comment. We specifically chose that clinical site for all the students because we wanted them to have the experience. It was an underserved practice, and we wanted them to be exposed to a clinic in an underserved area so that they might see how it could be useful in their future practice. Most of the students really got that. But occasionally, there'd be a student who just didn't think that was important. So I think: okay, you didn't think it was important, but I do.

Another thing students comment on in satisfaction surveys is matters outside their expertise. These are often comments like, "I don't understand why this test counts for so much of my grade." I think it should count for that much of your grade, and it doesn't make any difference that the student doesn't agree with me.

I say all of this to come back to the curricular review and development process. Every course in medical school should be evaluated on a regular basis, and the satisfaction surveys that students complete are given a lot of weight. If students make a lot of negative comments, or there are several things that they do not like, there is a push from the Dean's office to look at those comments, and the administration may encourage the course director to change the course in some way. That is problematic, because it takes the power to determine the direction of the medical curriculum away from the faculty and gives it to the students. The faculty are the most experienced and best able to determine what the curriculum should look like, but this hands curricular control to the least experienced and least able to make those decisions: the medical students.


I'm not advocating that we go back to the place where the dean can kick you out of his office and tell you never to come back, but I do think it's important for the medical school faculty to drive the curriculum. Student comments should be limited to the delivery, the process, and the policies, and stay out of what the content is and how that content is weighted.

Wednesday, August 13, 2014

What’s new in Academic Medicine this month?

There were several interesting articles in the August issue of Academic Medicine.

The first was a retrospective study by Norcini et al. (1) that actually tries to connect performance on a high-stakes examination (USMLE Step 2 CK) with some real patient outcomes. The authors looked at about 61,000 patients who were hospitalized in Pennsylvania for congestive heart failure (CHF) or myocardial infarction (MI) and whose admitting physicians were graduates of international medical schools who had taken the Step 2 Clinical Knowledge (CK) examination. The authors found that an increase of one point on the examination was associated with a 0.2% decrease in the mortality of their patients (95% CI: 0.1-0.4%). The authors recommended using Step 2 CK as part of the licensure process, but that seems premature. It would also be interesting to look at physicians who were graduates of US allopathic and osteopathic medical schools.
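To get a feel for what that effect size means, here is a quick back-of-the-envelope sketch in Python. The baseline mortality and point values are made up, and it treats the estimate as a relative decrease per point, which may not match the authors' actual statistical model:

```python
# Back-of-the-envelope illustration, not the authors' regression model.
# The paper reports that each additional Step 2 CK point was associated
# with a 0.2% decrease in patient mortality (95% CI: 0.1-0.4%). Here I
# treat that as a relative decrease per point; the baseline mortality
# and point values below are hypothetical.

def implied_mortality(baseline_mortality, points_above_baseline,
                      relative_decrease_per_point=0.002):
    """Scale a baseline mortality rate down by 0.2% per extra point."""
    return baseline_mortality * (1 - relative_decrease_per_point) ** points_above_baseline

# Hypothetical 10% baseline mortality for these CHF/MI admissions:
for pts in (0, 5, 10, 20):
    print(f"{pts:2d} points above baseline -> {implied_mortality(0.10, pts):.2%} mortality")
```

Under that reading, even a 20-point score difference corresponds to only about a 0.4-percentage-point drop in mortality, which is part of why the licensure recommendation feels premature to me.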

The second study, by Nixon et al. (2), evaluated students on the Internal Medicine clerkship at the University of Minnesota. Students were instructed on using educational prescriptions to create PICO-formatted questions (Patient-Intervention-Comparison-Outcome), and then answers to those questions, for a bedside case presentation. The content and quality of the questions and answers were then analyzed by the authors. They found that 59% (112/190) of the questions were about therapy and 19% (37/190) were related to making a diagnosis. They also saw that 61% (116/190) scored 7 or 8 out of 8 on the PICO conformity scale. The quality of the answers was pretty high, with 37% (71/190) meeting all criteria for high quality.
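For readers who haven't used the format, a PICO question just breaks a clinical question into four labeled parts. Here is a minimal sketch of that structure in Python, with a simplified presence check; the study used an 8-point conformity scale, so this 4-component check is only an illustration, not the authors' rubric:

```python
# A minimal sketch of the PICO structure, with a simplified presence
# check. The example question below is hypothetical.
from dataclasses import dataclass, astuple

@dataclass
class PicoQuestion:
    patient: str       # P: patient, population, or problem
    intervention: str  # I: intervention or exposure
    comparison: str    # C: comparison or alternative
    outcome: str       # O: outcome of interest

    def components_present(self):
        return sum(bool(part.strip()) for part in astuple(self))

# A hypothetical student question from a CHF admission:
q = PicoQuestion(
    patient="adults admitted with acute decompensated CHF",
    intervention="early IV loop diuretics",
    comparison="delayed diuretics",
    outcome="in-hospital mortality",
)
print(f"{q.components_present()} of 4 PICO components present")
```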

And finally, a really cool study by Watson (3) that analyzed hand motion patterns using an inertial measurement unit. The author looked at 14 surgical attendings and 10 first- and second-year surgical residents, who were asked to do a simulated surgical procedure while wearing an inertial measurement unit on their dominant hand. The patterns of movement were used to train a classification algorithm (similar to an artificial neural network) on expert and novice examples. When the author then gave the algorithm blinded hand motion patterns, it did a pretty good job of classifying them as expert or novice: accuracy was 83%, with a sensitivity of 86% and a specificity of 80%. In other words, the algorithm was able to reliably classify surgical hand motion patterns as expert or novice. This could be used in the future to make an objective assessment of procedural or surgical proficiency.
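Those three numbers fit together in a simple way. Here is a small Python sketch showing how accuracy, sensitivity, and specificity fall out of a confusion matrix; the counts are hypothetical, chosen only to reproduce the reported figures, and are not from the paper:

```python
# How the three reported numbers relate for a binary expert-vs-novice
# classifier. The confusion-matrix counts below are hypothetical, chosen
# only to reproduce the reported 83% / 86% / 80%.

def metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # all correct / all trials
    sensitivity = tp / (tp + fn)                 # experts correctly labeled expert
    specificity = tn / (tn + fp)                 # novices correctly labeled novice
    return accuracy, sensitivity, specificity

# Hypothetical: 50 expert patterns (43 labeled expert) and
# 50 novice patterns (40 labeled novice)
acc, sens, spec = metrics(tp=43, fn=7, tn=40, fp=10)
print(f"accuracy={acc:.0%}, sensitivity={sens:.0%}, specificity={spec:.0%}")
# -> accuracy=83%, sensitivity=86%, specificity=80%
```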

This was a good month in Academic Medicine. Some pretty good studies!

References
(1) Norcini J, et al. The Relationship Between Licensing Examination Performance and the Outcomes of Care by International Medical School Graduates. Acad Med 2014; 89(8): 1157-1162.
(2) Nixon J, et al. SNAPPS-Plus: An Educational Prescription for Students to Facilitate Formulating and Answering Clinical Questions. Acad Med 2014; 89(8): 1174-1179.
(3) Watson R. Use of a Machine Learning Algorithm to Classify Expertise: Analysis of Hand Motion Patterns During a Simulated Surgical Task. Acad Med 2014; 89(8): 1163-1167.