My name is Brook Watts. I am on the faculty of Case Western Reserve School of Medicine, and I'm a practicing general internist at the Cleveland VA Medical Center. I have been involved in quality improvement for a decade now, and I hope to share with you today some of my lessons learned. The course directors have asked that I focus on data, which is fun, because I have lots of stories about the missteps we made as we learned through our data.

The first story that comes to mind is the first improvement project I ever embarked on, right after I finished my postgraduate education. We were tasked with improving care for patients with diabetes and eye problems. The idea was that some patients needed to see the eye doctor more frequently but couldn't get appointments, because so many other patients were being seen that access was clogged. I was very enthusiastic, and I went down to the eye clinic to speak with the schedulers. The first thing they told me was: we have this problem. The primary care providers consult us over and over again on the same patient, so we have to sit there and weed through the consults because we can't find the patient who really needs us. We're doing all this duplicate work, discontinuing consults and figuring out who actually needs us, and it takes up so much time that we can't work on the other problems in scheduling.

As we know, one of the first things we do in quality improvement is try to walk the walk. We gather baseline data so we can understand the scope of the problem. So my first task was to quantify the number of these duplicate consults so that we could come up with an appropriate intervention target. I reviewed a little more than 300 consults by hand, looking for duplicates. These were actual pieces of paper, so I was matching names. I found three duplicates out of 300. To the clerical staff, this had seemed like an insurmountable problem: they saw the duplicates so frequently, and the duplicates took so much time, that the staff couldn't attend to other things. In reality it was quite a small problem, 1% of the whole sample, and probably not the best target for an improvement effort. I think that's a very good lesson: when we start working with data in quality improvement, we quantify the problem.
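To make the arithmetic of that baseline measurement concrete, here is a minimal sketch in Python of counting duplicate consults in a sample and expressing them as a proportion. The record structure and field names are hypothetical; the actual review in the story was done by hand on paper consults.

```python
# A minimal sketch of the baseline measurement described above: counting
# duplicate consults in a sample and expressing them as a proportion.
# The record structure and field names here are hypothetical.
from collections import Counter

consults = [
    {"patient_id": "A123", "service": "ophthalmology"},
    {"patient_id": "B456", "service": "ophthalmology"},
    {"patient_id": "A123", "service": "ophthalmology"},  # a duplicate
    # ... roughly 300 records in the actual review
]

# Each consult beyond the first for a given patient counts as a duplicate.
counts = Counter(c["patient_id"] for c in consults)
duplicates = sum(n - 1 for n in counts.values() if n > 1)

rate = duplicates / len(consults)
print(f"{duplicates} duplicate consults out of {len(consults)} ({rate:.1%})")
```

With the numbers from the story, three duplicates in roughly 300 consults, the rate comes out to about 1%, which is what made the problem a poor improvement target.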
So any time we embark on an improvement effort, our very first step is to do that assessment and figure out where we're starting from. That is a real challenge. In this particular example it was pretty clear what I needed to measure, but when we start quality improvement efforts in our own practices, it's often harder. One tip we've learned over the years is to focus on what is actually measurable. As part of this curriculum, you may have learned what a SMART aim is, and of course the M there is measurable. So pick something in the data that you can actually measure. Often it may not be exactly what we're trying to achieve, but we have to pick something that will show us whether the improvement work we're doing is actually an improvement. When we think about something that's measurable, we also have to differentiate between a process measure and an outcome measure.

Frequently in health care we're trying to achieve some focused, concrete outcome. But in our improvement work, we have to pick a piece that comes a little earlier in the cycle. Those are the process measures: some part of the improvement cycle, early on, that ties to our fundamental outcome but allows us to assess where we're going sooner. That's how we know that the change we're making is an improvement.

Within the healthcare setting, there's a lot of data gathered as part of routine clinical care because of performance measures or regulatory guidance, things for the Joint Commission or for Medicare. If we can align our own data collection with efforts that are already ongoing as part of that performance-measure, regulatory environment, that may make things easier. We have a couple of examples from our own practice. We wanted to work very hard to improve care for our women patients with diabetes. This was an important facility goal, and it also tied to some of the broader performance metrics and reimbursement that we're all familiar with. So we needed to pick data points that were measurable and that we could follow over time. Ultimately, we wanted improvements in hemoglobin A1c, but we had to start at a smaller place, and that smaller place was thinking about exactly how many patients we could reach with our program. Instead of looking every week at the hemoglobin A1c values, we asked: how many patients have we touched this week who needed us, and how can we monitor that measurement over time to know that we're getting the appropriate impact and spread of our program? Ultimately we were able to achieve our goal, but a lot of thought had to go into how we picked these measures in the first place.

Once you've decided on a measure and you have something you can actually measure, what do you do with it? In quality improvement we often use the term control charts, and I want to talk a little bit about what run charts are and what control charts are. A run chart, which I think most of us are familiar with, is really just graphing data over time. We usually have time running across the bottom, and whatever data point we're measuring plotted on the vertical axis. It's very easy for us to understand. The challenge with run charts is that they're not statistically valid; they can't show us that something was a statistically significant change. That's not always important in quality improvement; we want to know where we started and where we're going. So at a very basic, fundamental level, the first tool is a run chart, with the understanding that we often need a lot of baseline data. One of the things you learn when you're approaching healthcare quality improvement is that data change over time for all kinds of reasons we can't always predict, and the only way to see that is to look at as many data points as we possibly can. So I do encourage everyone: even if you're using a run chart, that doesn't mean you have an excuse not to think about what the baseline data look like.
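As a concrete illustration, here is a minimal run chart sketch in Python using matplotlib, plotting a weekly process measure, like the "patients reached" count described above, against its median. The numbers are made up for illustration, not the speaker's actual data.

```python
# A minimal run chart: a weekly process measure plotted over time with
# its median as the reference line. The data are illustrative.
# Requires matplotlib (pip install matplotlib).
import statistics
import matplotlib.pyplot as plt

weeks = list(range(1, 13))
patients_reached = [4, 6, 5, 7, 8, 6, 9, 11, 10, 12, 13, 12]  # per week

median = statistics.median(patients_reached)

plt.plot(weeks, patients_reached, marker="o")
plt.axhline(median, linestyle="--", label=f"median = {median}")
plt.xlabel("Week")
plt.ylabel("Patients reached")
plt.title("Run chart: weekly reach of the program")
plt.legend()
plt.show()
```

Plotting the median rather than the mean is the usual run chart convention, since the standard run rules for detecting shifts are defined relative to the median.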
The more complex version of a run chart is a control chart, and control charts are considered one of the seven core tools of quality improvement. With a control chart, you are again plotting a measurement over time, but these measurements are generally means or proportions. There's a variety of different charts depending on what you're measuring, and there are a lot of good references that guide you through this. The general idea is that you're still plotting something over time, but it's rarely a count; it's more often a mean or a proportion. Using control chart software, the software will calculate standard errors for you and draw what we call upper and lower control limits. Those upper and lower control limits are based on the standard deviation, typically set three standard deviations above and below the center line, and the idea is that just under 100% of the data, about 99.7%, should fall between them. Any point that falls outside the upper control limit or the lower control limit may represent special cause variation (a worked sketch of this calculation appears at the end of this section).

The challenge with control charts is that they require, again, a lot of data. You have to have a substantial baseline to be able to see that any change you have made is a real change. The second challenge is that they require a comfort and sophistication with statistical approaches that many people don't have. There are lots of good tools out there to lead you through this, and if you get to a place where you really want to understand whether or not your data represent a statistical change, this is a great tool to look at. Statistical process control charts can also help you understand variation in your data, which is something I talked about earlier. Because we see so much variability in many healthcare processes from month to month, or even day to day, it is very hard to know when a change is really an improvement.

The last point I wanted to talk about is the importance of thinking from the beginning about sustainability. When we do improvement work, we like to think it's going to stick around, that the changes and improvements we made are going to last. But the reality is that we always need to keep our eye on the ball, and that means we need ways to measure our change over time. So when you focus on data at the beginning of an effort, keep your eye forward on where you're going to be when your improvement effort is done, and on how you're going to continue to measure over time to make sure the improvement work you did is sustained.

Thank you for giving me the opportunity today to talk about data and some of the tools we use to think about data in our improvement work. I sincerely wish you well in your own efforts to improve our healthcare environment.
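Here is the worked sketch referenced above: a minimal Python calculation of control limits for an individuals (XmR) chart, one common control chart type. It estimates sigma from the average moving range (divided by the d2 constant, 1.128 for subgroups of two) and flags points beyond three sigma. The data are illustrative; as the speaker suggests, real work would typically lean on dedicated control chart software.

```python
# A minimal control-limit calculation for an individuals (XmR) chart.
# Sigma is estimated from the average moving range divided by the d2
# constant (1.128 for subgroups of two); the control limits sit three
# sigma from the center line. The data are illustrative.
data = [22, 24, 23, 25, 24, 26, 23, 25, 33, 24, 23, 25]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit
print(f"center = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# Points outside the limits are candidates for special cause variation.
for period, x in enumerate(data, start=1):
    if x > ucl or x < lcl:
        print(f"point {period} ({x}) falls outside the control limits")
```

With these numbers, the ninth point (33) lands above the upper control limit and is flagged, which is exactly the kind of signal the transcript describes as possible special cause variation.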