This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.

The lawyers have asked that I add a disclaimer that makes it clear that these are my personal opinions and do not represent any position of any University that I am affiliated with including the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine.

Wednesday, September 10, 2014

Does the experience on a clinical clerkship affect performance?

I found an interesting study this week that I wanted to blog about today. This study was published in Medical Education a couple of years ago. The authors, Dong and colleagues (1), asked an important and very common question: does the experience that a student has during a clinical rotation affect their performance on that rotation? This is important for many reasons. One big reason is that developing and maintaining adequate clinical experiences is an expensive and time-consuming process. It would be nice to know that the experiences we provide for students are having a positive effect.

The authors describe two alternative theories of learning in the clinical arena. One idea is that students need deliberate practice to learn. In other words, they need specific learning experiences that are led by a qualified mentor. These learning experiences are planned and need to be varied and extensive in order for students to develop expertise. The alternative idea is based on cognitive load theory, which suggests that medical students may have difficulty learning clinical medicine when they are exposed to many patients and clinical problems at once. Instead, students might learn better from more straightforward instructional formats, such as simulated cases.

Clinical clerkships in all specialties spend a lot of time trying to demonstrate that the clinical experiences they provide are similar across different sites and for different students. A previous study of clerkship directors in Internal Medicine found that they use core cases to compare the clinical experiences of multiple students.(2) Many clinical clerkships use paper or electronic logs to track students’ experiences.

This study was done at the Uniformed Services University, the military medical school. It is the only federal medical school and draws students from across the country. The authors looked at students on the internal medicine clerkship. The students kept track of all of their patient contacts using a patient log, which recorded how many patients each student saw and how many of those patients had core problems.

The authors compared students’ intensity of clinical exposure with their performance on the clerkship. What they found was a little surprising and maybe a bit counter-intuitive. Student performance was positively correlated with clinical experience, but only weakly. Specifically, after using a pre-test to control for ability, there was a weak (r = 0.19) but statistically significant association. A student’s clinical score improved by two points for every ten extra patients seen in the outpatient setting. Similarly, the number of core clinical problems that students saw was correlated with their ambulatory clinical score (r = 0.19; p < 0.05). In real terms, this means that a student who saw patients with all of the core problems (about 88% of all students) scored fewer than four points higher on the ambulatory clinical score than those who did not see all of the core problems.
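To get a feel for how weak r = 0.19 really is, it helps to square the correlation coefficient: r² gives the proportion of variance in one measure that is associated with the other. The r value below is the one reported by Dong and colleagues; the arithmetic is just an illustrative sketch, not part of the paper’s analysis.

```python
# Squaring a correlation coefficient gives the proportion of variance
# in one variable associated with the other (the coefficient of determination).
r = 0.19  # correlation reported by Dong and colleagues

variance_explained = r ** 2
print(f"r = {r}, r^2 = {variance_explained:.4f}")  # r^2 = 0.0361
```

In other words, even though the association is statistically significant, clinical exposure accounts for under 4% of the variance in clerkship scores, which is why the practical effect (a few points at most) is so small.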

So what does this all mean? Well, for one thing, we need to think very carefully about how clinical experiences should be structured. More is not necessarily better, at least when it comes to the number of patients. A targeted approach that is thoughtful and includes more time to think about patients may actually be better.

1) Dong T, et al. Relationship between clinical experiences and internal medicine clerkship performance. Medical Education 2012: 46: 689–697.

2) Denton GD, Durning SJ. Internal medicine core clerkships experience with core problem lists: results from a national survey of clerkship directors in internal medicine. Teach Learn Med  2009; 21: 281–3.

Tuesday, August 26, 2014

Use of Student Satisfaction Data

There has been an interesting change in medical education over the time that I have been a faculty member. I am sure that many people involved in education have seen the same change, and it appears to occur across multiple, different disciplines. The change is related to how we use student satisfaction data: more specifically, what importance is put on student satisfaction data, how it is used for curricular change, and how it is used to drive the curriculum.

I had a lot of great experiences when I was a medical student. But I had one really interesting (not-so-great) experience. We had a professor who was very well-liked. He was a great teacher and a student advocate, one of those teachers who was always available to meet with a student who didn’t understand a concept or to do individual tutoring sessions. He was very personable. He was also a great lecturer who always delivered content that was relevant and useful, and his test questions were consistent with the information in the lectures that he gave.

As a student, he was what most of us thought of as the gold standard for a faculty educator. But he was not a researcher. And in many academic departments and medical schools, you have to be a researcher first and a teacher second. If you don’t get enough research grants, then your job is at risk. That is what happened in this situation: he did not receive tenure, and his contract was not renewed.

My medical school class was the group of students that worked with him the most because of the course that he taught. We were very upset by the school's decision and wanted to do something to express our dissatisfaction. He was one of our best teachers, and we did not think it was right to get rid of him. So we wrote a very professional letter to the Dean of the medical school, expressing our feelings about the decision. The entire class signed it; there were over a hundred people in the class, and everyone signed. Then our class president took this letter (you could call it a petition, I guess, but it was really just a letter) to the Dean of the medical school.

What happened next is shocking to me now, but at the time it was not surprising. The Dean basically threw the class president out of his office and told him never to come back. He said something to the effect of, “This is my medical school, I’m in charge, and the students don’t have any say in what goes on.” Actually, at the time, there was a story going around that he said, “If you complain about anything else, I’ll break your kneecaps,” or something like that. But that may have just been a story made up by med students afterwards.

Fast forward to 2014... For every course that we deliver in our medical school (and in every medical school that I know of), we ask the students to comment on the process. We ask them to comment on the policies that are in place. We ask them to comment on the learning objectives and whether the content matched those objectives. We ask them about the content of the course. We ask them a lot for their opinion about their satisfaction with the curriculum. There are some really good things about this. Clearly, the Dean’s response to my class’s dissatisfaction was not a great one. But I think it is possible for the pendulum to swing too far in the other direction.

In a lot of medical schools, students have the idea that they are helping to determine the curriculum. I think that is a dangerous and, in many ways, nonsensical idea. In addition, the LCME looks at student satisfaction on the Graduation Questionnaire, a survey sent to all medical students as they graduate. In the accreditation visits for each medical school, the LCME uses that data to help determine accreditation. Again, this is a very dangerous proposition in many ways.

Now, don’t get me wrong. I am very interested in student satisfaction data. But the data I think students should be giving us answers questions like these: How was the content delivered? How accessible were the faculty? Did faculty follow the grading policies that we set forth? Did the test questions match the content delivered in the lectures? Was the lecture style appropriate? Did students understand the lectures? Were the slides helpful and additive to the lectures? Those are the kinds of questions that student satisfaction data would be pretty useful in answering. And we do some of that.

But often, we ask them other questions. For example: do you think that the information in this course will help you to be a better doctor? That question bothers me a lot. I really have trouble seeing how a medical student would know the answer, or how their answer would change what I want to do in a course that I am teaching. In their comments, I see medical students write things like, “the information in this lecture is not important,” or “the information in this course is fluff.” When I think about that, I wonder: does a student have the framework to make that call?

One of the things that is important for faculty and course directors and medical schools to do is to think about what the curriculum should look like. What information do you want the students to learn? To some extent, that is driven by the national examinations, like USMLE. There is content that every student will be tested on in the national standardized tests. The curriculum needs to do a good job of preparing the students to take those tests. But how the other pieces of the curriculum are emphasized should be the judgment of the faculty.

It often seems like students are giving an opinion based on what they think is important, but with little background or context. So, for instance, we had a student comment that said, “I didn’t like going to this clinical site.” I thought it was an interesting comment. We specifically chose that clinical site for all the students because we wanted them to have the experience. It was an underserved practice, and we wanted them to be exposed to a clinic in an underserved area so that they might see how it could be useful in their future practice. Most of the students really got that. But occasionally, there’d be a student who just didn’t think that was important. So I think: okay, you didn’t think it was important, but I do.

Students also comment in satisfaction surveys on things that are outside their expertise, with comments like, “I don’t understand why this test counts for so much of my grade.” I think it should count for that much of your grade. It doesn’t make any difference that the student doesn’t agree with me.

I say all of this to come back to the curricular review and development process. Every course in medical school should be evaluated on a regular basis, and the satisfaction surveys that students complete are given a lot of weight in that review. If students make a lot of negative comments, or there are several things that they do not like, there is a push from the Dean’s office to look at those comments, and the administration may encourage the director to change the course in some way. That is problematic, because it takes the power of determining the direction of the medical curriculum away from the faculty and gives it to students. The faculty are the most experienced and most able to determine what the curriculum should look like, but this practice hands curricular control to those least experienced and least able to make such decisions: the medical students.

I’m not advocating that we go back to a place where the Dean can kick you out of his office and tell you never to come back. But I do think it is important for the medical school faculty to drive the curriculum. Student comments should be limited to the delivery, the process, and the policies, and stay out of what the content is and how that content is weighted.