The Highly Interruptive River of Healthcare
Lessons on Clinical Workflow from EHR implementation
In 2014, the CEO of Athens Regional Medical Center received a letter signed by more than 12 physicians describing a disturbing decline in the level of care at the hospital. They stated that the new Cerner EHR rollout had caused an inpatient to go unseen by a physician for five days, ED wait times long enough to compromise clinical care, and multiple medication errors. The CEO then resigned after a 270-0 vote of no confidence by the staff.
How often have you seen physicians band together to write a letter to a hospital board? Doctors rarely attempt to solve hospital issues with advocacy maneuvers, so when it happens you know it’s bad. Among the physicians’ major complaints was that the rollout was too aggressive and didn’t take the clinical workflow into account.
Lessons from EHR Implementations
There are clearly some lessons learned from EHR implementations that can be applied as AI technology moves forward. So let’s take a short walk down memory lane to the days of EHR implementation. If you’re old enough to have a mortgage and a bad knee, I’m guessing you’re also old enough to have experienced at least 3 EHRs, and possibly as many as 10. You were definitely present for at least one EHR rollout and possibly a few you’ve tried to block out.
In what I can only describe as a glass-half-full approach, that decade (plus) produced a body of literature on EHR rollout failures that we can apply to the upcoming AI implementations.
The razor’s edge of clinical workflow
There’s a reason most successful EHR implementations involve a decrease in clinical volume: to get everything done when it needs to be done, physicians have developed a workflow that maximizes their efficiency. It’s true that most doctors stop learning new ways to efficiently use the EHR once they become proficient (rather than maximally efficient). Still, doctors have to get a lot done in a short period of time, and it’s often a delicately balanced system that can be easily destroyed. Even worse, most doctors realize that their system is fragile, and that makes them protective of the way they do things. They don’t want more steps, more windows to open or more passwords to enter. The AI might be great once you log in, but there’s nothing cutting edge about logging in to yet another software program. It’s time-consuming and knocks you off your razor’s edge of efficiency.
In an ideal world, a project manager would help with process mapping to determine where the platform fits into the existing workflow and which aspects of workflow would need to change. Even better, a short experiment can be run in which a person with a stopwatch times how long activities take with and without the platform. I know those studies are being done now by several institutions related to clinical scribes and other AI platforms.
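The stopwatch experiment reduces to a simple paired comparison: the same clinicians time the same task under both conditions, and the differences tell you what the platform costs (or saves) per task. A minimal sketch of that analysis, with entirely hypothetical numbers:

```python
from statistics import mean, stdev

# Hypothetical stopwatch data (minutes per task) for the same clinicians
# performing the same task under both conditions.
baseline = [4.2, 3.8, 5.1, 4.7, 6.0, 3.5]       # current workflow
with_platform = [5.0, 4.1, 5.9, 4.9, 7.2, 4.0]  # workflow with the platform

# Paired differences: positive values mean the platform added time.
diffs = [w - b for b, w in zip(baseline, with_platform)]

print(f"mean added time per task: {mean(diffs):.2f} min")
print(f"spread (stdev) of added time: {stdev(diffs):.2f} min")
```

Even a handful of paired observations like this, multiplied by a clinician's daily task count, makes the productivity cost concrete in a way a vendor demo never will.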
The steps of clinical workflow analysis are listed below:
The effect of any new AI platform on workflow has to be carefully considered and trialed, preferably with the method above, in order to be successful. Equally important is considering what a failure of an AI system would do to the workflow. These AI platforms have the potential to produce the same decrease in productivity as EHR implementations did, a risk unlikely to be mentioned in the product’s promotional materials.
Why did some EHR implementations fail?
There are many studies that describe what systems should do when new technology is rolled out: have clinical champions, decrease clinical workload, personalize training and support, and understand current workflows. Those are all excellent ideas, if somewhat vague. More interesting to me are the studies that focus on why EHR implementations fail.
One study described two main categories of EHR implementation failure: errors in entering and retrieving information, and errors in communication and coordination.
Errors in the process of entering and retrieving information
Technical errors
In 2012, a woman with an unusual headache went to her primary care physician, who ordered a head CT. The scan was never performed because the EHR failed to transmit the order to the radiology department, so she wasn’t diagnosed until it was too late; she died two months later of a brain aneurysm. I think we can all appreciate how easily this kind of tragedy could happen due to minor technical glitches. In other industries the effect may be a minor delay in payment or service, but in clinical care these kinds of errors can be deadly.
Human errors
In what is possibly my favorite description ever of how a doctor’s day feels, the authors describe “health care’s highly interruptive context”, pointing out that many of the user interfaces couldn’t save work in progress, would close unexpectedly, and generally assumed the person entering information was not being called, texted, paged, or physically pulled away from the computer multiple times while writing a note or entering orders.
They also point out that much of the information being recorded had to be changed by the clinician from unstructured, which is how humans think and how they were trained, to structured. I clearly remember Russ Altman, one of the pioneers of biomedical informatics and generally a brilliant and kind person, lecturing about the computer age back in the mid-2000s. He told our medical school class that either humans would have to think in computer form or computers would have to think in human form, and that humans were going to have to adapt to computers for the time being. That has been true of my entire medical career, and the authors point out that there’s an underappreciated cognitive demand associated with translating unstructured into structured data. Even if it’s not explicitly stated, much of the promise of AI in healthcare is related to decreasing this cognitive load, allowing us to shift away from parsing our thoughts into structured data.
Errors in the communication and coordination process
Another great phrase in this section, “the misrepresentation of health care work as a linear process” describes why clinicians use workarounds so often. Unlike the aerospace and power plant industries to which healthcare is often compared, there are many more “special cases” in which things just don’t go how you expect them to. We’ve all experienced times when a patient needed something simple but it took an additional 20 steps to make the EHR accept that action. Humans are less predictable than machines, both in what can go wrong with them and in what they need to fix them. Workarounds are so rampant that one study even developed a “typology of workarounds”:
My personal favorite type of workaround is “unplanned”, which includes workarounds that are “chosen out of desperation”. The authors note “many of the workarounds we observed fall into this category”, and I can relate very viscerally to being in that circumstance.
Information transfer vs sense-making
Another part of communication errors involved “the misrepresentation of communication as information transfer rather than interactive sense-making”. I really like this concept of interactive sense-making because it puts into words what we’re actually doing when we look through a chart, and more commonly when we call a colleague after reading their note. I don’t just want to know that a cardiologist ordered an echo; I want to know why they ordered it, what they’re worried about and how worried they are. Writing “rule out valvular disease” on the differential doesn’t make sense to me without talking to the cardiologist about the “why” behind that phrase.
When I taught residents, I often caught issues when another specialty’s decisions just didn’t seem to make sense. A short conversation with that team usually clarified the course of action better than the many datapoints in the computer. The authors point out that clinicians have to increase their vigilance for errors with the introduction of EHRs because there was a loss of the previous, more nuanced feedback, such as clinicians actually talking to each other.
What questions can we ask before AI implementation to help it go well?
As we move forward with AI in healthcare settings, we need to ask questions about how the platform affects workflow:
Does this platform allow multiple ways to do this task, in multiple sessions?
Does this platform require physicians to adapt their thinking to a computer?
How flexible is the platform for special cases, and what mechanisms are in place to help the platform learn from those occurrences?
Does the platform optimize for sense-making rather than information quantity?
The usability of the platform is key, and it really needs to be piloted to get a full understanding of how it will actually be used. That can only be done by having clinicians who see a high volume of patients each day try out the software. Any platform that only offers a short demo or screenshot and doesn’t allow clinician use prior to purchase shouldn’t be considered.
Physicians as both users and buyers
People who sell into hospital systems frequently point out that physicians are the users, not the customers, and therefore it doesn’t really matter what they think. Physicians don’t have the money and they’re not the ones ultimately making the decision about whether to purchase a piece of technology. Unless they strenuously object to an IT platform, either vocally or via passive resistance, their opinion often isn’t considered. This applies to nurses as well, and generally to all healthcare workers. Disconnecting the buyer from the user severs the feedback loop between user experience and purchasing decisions, which leads to problems like those at the Athens hospital.
Last week we discussed the shadow of EHR implementation for physicians.
Next week we’ll discuss lessons about the financial impact of EHR implementation and how AI differs.