With an anthropologist’s eye, Duke pioneers a new approach to medical AI

If not for an anthropologist and a sociologist, the leaders of a prominent health innovation hub at Duke University would never have known that the clinical AI tool they had been using on hospital patients for two years was making life far more difficult for the hospital's nurses.

The tool, which uses deep learning to estimate the chances that a hospital patient will develop sepsis, has had an overwhelmingly positive impact on patients. But it required nurses to present its results — in the form of a color-coded risk scorecard — to clinicians, including physicians they'd never worked with before. That disrupted the hospital's traditional power hierarchy and workflow, leaving nurses uncomfortable and doctors defensive.

As a growing number of leading health systems rush to deploy AI-powered tools to help predict outcomes — often on the premise that they will boost clinicians' efficiency, decrease hospital costs, and improve patient care — far less attention has been paid to how the tools affect the people charged with using them: frontline health care workers.

That’s where the sociologist and anthropologist come in. The researchers are part of a larger team at Duke that is pioneering a uniquely inclusive approach to developing and deploying clinical AI tools. Rather than deploying externally developed AI systems — many of which haven’t been tested in the clinic — Duke creates its own tools, starting by drawing from ideas among staff. After a rigorous review process that loops in engineers, health care workers, and university leadership, social scientists assess the tools’ real-world impacts on patients and workers.

The team is developing other strategies as well, not only to make sure the tools are easy for providers to weave into their workflows, but also to verify that clinicians actually understand how they should be used. As part of this work, Duke is brainstorming new ways of labeling AI systems, such as a "nutrition facts" label that spells out what a particular tool is designed to do and how it should be used. The team also regularly publishes peer-reviewed studies and solicits feedback from hospital staff and outside experts.

“You want people thinking critically about the implications of technology on society,” said Mark Sendak, population health and data science lead at the Duke Institute for Health Innovation.

Otherwise, “we can really mess this up,” he added.

Getting practitioners to adopt AI systems that are poorly defined or poorly introduced is arduous work. Clinicians, nurses, and other providers may be hesitant to embrace new tools — especially those that threaten to interfere with their established routines — or they may have had a negative prior experience with an AI system that was too time-consuming or cumbersome.

The Duke team doesn't want to create yet another notification that causes a headache for providers — or one that's easy for them to ignore. Instead, they're focused on tools that add clear value. The easiest starting point: ask health workers what would be helpful.

“You don’t start by writing code,” said