:: Text mining & electronic health records ::

Text mining electronic health records to identify hospital adverse events

Center for Quality, Region of Southern Denmark & SAS Institute, Denmark


   
   

 About this site

Established in 2013

The site contains supplementary information to a poster presentation about text mining electronic health records to identify adverse events in hospitalised patients.


 ► Get the poster [0.4 MB]

 ► See a presentation [in Danish]

 ► A published abstract

 

Background

Using the IHI Global Trigger Tool (GTT) to conduct structured reviews of health records to identify adverse events consumes costly human resources.

We are developing an IT-tool based on natural language processing (NLP) of the unstructured and semi-structured narrative texts in electronic health records to identify common triggers as well as some adverse events (harms).

Which triggers? 

We are building algorithms to identify the following triggers and adverse events:

  • C01–Transfusion of blood or use of blood products

  • C02–Code, cardiac or pulmonary arrest, or rapid response team activation

  • C05–X-Ray or doppler studies for emboli or deep vein thrombosis

  • C07–Patient fall

  • C08–Pressure ulcers [bedsores]

  • C09–Readmission within 30 days [not text mining]

  • C11–Healthcare-associated infections

  • C14–Any procedure complication & C15–Other [selected problems, e.g. with catheters]

  • M10–Anti-emetic administration

  • S01–Return to surgery

  • S11–Occurrence of any operative complication [combining several of the triggers above]

We selected these problems because they are either common or anticipated to be difficult to handle with text analytics.
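As a purely illustrative sketch (Python, not the SAS tools described under Methods), the trigger set above can be kept as a simple code-to-description mapping so that each algorithm module has an explicit target; the descriptions come from the list, everything else is an assumption.

```python
# Trigger codes targeted by the algorithms (descriptions from the list above).
# C09 is identified from structured admission data rather than by text mining.
TRIGGERS = {
    "C01": "Transfusion of blood or use of blood products",
    "C02": "Code, cardiac or pulmonary arrest, or rapid response team activation",
    "C05": "X-ray or Doppler studies for emboli or deep vein thrombosis",
    "C07": "Patient fall",
    "C08": "Pressure ulcers (bedsores)",
    "C09": "Readmission within 30 days",
    "C14": "Any procedure complication",
    "C15": "Other (selected problems, e.g. with catheters)",
    "C11": "Healthcare-associated infections",
    "M10": "Anti-emetic administration",
    "S01": "Return to surgery",
    "S11": "Occurrence of any operative complication",
}
```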

 

Links

IHI Global Trigger Tool

Wikipedia about text mining

A textbook

 

 

 

 

Methods

Data

About 500 randomly selected health records had been manually reviewed from April 2010 through May 2012 as part of the routine use of the GTT to monitor patient safety in a 450-bed acute care hospital.

All narrative texts in these records were extracted into a corpus of XML files.
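As a rough, hypothetical illustration of working with such a corpus (the real export schema and element names are not shown on this site and are assumed here), the sketch below reads one XML file per health record and gathers its narrative note texts for analysis.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def load_corpus(corpus_dir):
    """Collect narrative note texts from a directory with one XML file per record.
    The <note> element name is a hypothetical example, not the real export schema."""
    corpus = {}
    for xml_file in sorted(Path(corpus_dir).glob("*.xml")):
        root = ET.parse(xml_file).getroot()
        notes = [elem.text or "" for elem in root.iter("note")]
        corpus[xml_file.stem] = "\n".join(notes)
    return corpus

# Example: texts = load_corpus("gtt_corpus")  # ~500 records -> {record_id: narrative text}
```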

Software 

We use SAS® Text Miner and SAS® Enterprise Content Categorization to build the algorithms.

We build module-based algorithms from clinically specific word lists combined with Boolean operators.
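To illustrate the general principle (a hedged Python sketch, not our actual SAS Enterprise Content Categorization modules; the word lists are hypothetical English stand-ins for the clinically specific Danish terms), a trigger rule combines word lists with Boolean operators roughly as follows:

```python
import re

# Hypothetical word lists; the real modules use clinically specific Danish terms.
WORD_LISTS = {
    "pressure_ulcer": ["pressure ulcer", "pressure sore", "bedsore", "decubitus"],
    "stage":          ["stage 2", "stage 3", "stage 4", "category ii", "category iii"],
    "negation":       ["no sign of", "without", "not observed"],
}

def contains_any(text, terms):
    """True if any term from a word list occurs in the note (case-insensitive)."""
    return any(re.search(r"\b" + re.escape(t) + r"\b", text, re.IGNORECASE) for t in terms)

def trigger_c08(note):
    """Illustrative Boolean rule for trigger C08: concept AND severity AND NOT negated."""
    return (contains_any(note, WORD_LISTS["pressure_ulcer"])
            and contains_any(note, WORD_LISTS["stage"])
            and not contains_any(note, WORD_LISTS["negation"]))

print(trigger_c08("Sacral pressure ulcer, stage 2, dressing changed."))  # True
print(trigger_c08("No sign of pressure ulcer at discharge."))            # False
```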

Speed

The algorithms typically read 500 records in about 15 seconds.

Results

Bedsores (pressure ulcers)

The poster shows our results for bedsores (C08).

General findings

The positive predictive values vary and can be low for some triggers, but the negative predictive values are high (see below).

Thus, if an algorithm scores a record as negative, human reviewers will usually not find anything either.

All triggers

Pos PV = Positive predictive value; Neg PV = Negative predictive value

Trigger      Pos PV [95% CI]        Neg PV [95% CI]
C01          70% [56% to 80%]       99% [97% to 100%]
C02          45% [36% to 55%]       95% [93% to 97%]
C05          [None found]           100% [99% to 100%]
C07          35% [17% to 59%]       100% [98% to 100%]
C08          56% [42% to 69%]       97% [95% to 98%]
C09          76% [67% to 83%]       96% [94% to 98%]
C11          26% [18% to 37%]       97% [95% to 98%]
C14 + C15    [Not estimated]        [Not estimated]
M10          52% [41% to 62%]       100% [99% to 100%]
S01          60% [36% to 80%]       99% [98% to 100%]
S11          24% [15% to 36%]       97% [96% to 99%]
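For reference, the predictive values above come from comparing algorithm findings with human review in 2 by 2 tables. The sketch below (Python rather than the SAS tools, with invented counts, and a Wilson score interval as one common choice rather than necessarily the method behind the figures above) shows how PPV, NPV and 95% confidence intervals can be computed from such counts.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if n == 0:
        return (float("nan"), float("nan"))
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (centre - half, centre + half)

def predictive_values(tp, fp, fn, tn):
    """PPV and NPV with CIs from a 2 by 2 table of algorithm vs. human review."""
    return {
        "PPV": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "NPV": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Invented counts for illustration only (not study data):
# 35 flagged and confirmed (TP), 27 flagged but not confirmed (FP),
# 6 missed by the algorithm (FN), 432 correctly negative (TN).
print(predictive_values(tp=35, fp=27, fn=6, tn=432))
```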

Tests

We have tested the performance of the algorithms using about 250 new health records from other sections/departments in the same hospital.

The algorithms perform well compared with the findings of human reviewers, in particular as regards negative findings: the negative predictive values are consistently high.

The tests also revealed that disagreements among the 2-3 GTT reviewers of a health record are common, and that the computer algorithms perform well when their results are compared with the consensus eventually reached.

Challenges

The narrative texts. Notes written by physicians and nurses in health records are often informal, telegram-style texts with many acronyms, context-dependent abbreviations, and spelling errors.

Time. Developing good algorithms requires repeated cycles of modifying word lists etc., running the algorithms, and manually checking the findings.

Brought in or acquired? It is difficult, for humans as well as for computer algorithms, to distinguish between conditions present on admission and problems acquired during a hospital stay.

 


 

Who are we?

Authors 

Ulrik Gerdes

Christian Hardahl

Institutions 

Centre for Quality

Institute of Regional Health Research

SAS Institute, Denmark

Sygehus Lillebælt [in Danish]

 

Experiences

Building algorithms

Optimising an analytical algorithm typically requires 50-100 iterations.

We work side by side to do this most efficiently: we modify an algorithm, see what happens to the distribution of health records in the 2 by 2 table, scrutinise the texts, and discuss the results.





We are equipped with good computers and large screens, so that we can simultaneously run algorithms, check the results and make notes of what we are doing.
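A minimal sketch (hypothetical Python, assumed function and variable names) of the bookkeeping behind each iteration: cross-tabulating the algorithm's verdict against the reviewers' verdict per record, so that a change to the word lists can be judged by how records move between the cells of the 2 by 2 table.

```python
from collections import Counter

def two_by_two(algorithm_hits, reviewer_hits, record_ids):
    """Cross-tabulate algorithm vs. reviewer findings for one trigger.
    algorithm_hits / reviewer_hits are sets of record ids flagged as positive."""
    cells = Counter()
    for rid in record_ids:
        cells[(rid in algorithm_hits, rid in reviewer_hits)] += 1
    return {
        "TP": cells[(True, True)],   "FP": cells[(True, False)],
        "FN": cells[(False, True)],  "TN": cells[(False, False)],
    }

# After each change to the word lists, re-run the algorithm and compare, e.g.:
# print(two_by_two(algo_positive_ids, reviewer_positive_ids, all_record_ids))
```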

Project management

We use a Microsoft SharePoint 2010 platform to share internal information about the project.


Perspectives

The text mining algorithms have been incorporated into an IT-system to be used by GTT-teams.

The system can randomly select a number of health records, run the algorithms and show the results, allowing team members to check the findings, accept or override them, make notes etc.

The system can automatically generate summary reports, including graphs showing time trends, distributions of adverse events by departments etc.
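A minimal sketch of that workflow under assumed names (hypothetical Python, not the actual IT-system): randomly sample records, run the trigger algorithms, and hand the findings to the team to accept, override or annotate.

```python
import random

def review_round(all_record_ids, run_triggers, sample_size=20, seed=None):
    """Randomly select records, run the trigger algorithms, and return findings
    for the GTT team to accept, override, or annotate.
    run_triggers(record_id) is assumed to return a list of flagged trigger codes."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(all_record_ids), sample_size)
    findings = []
    for rid in sample:
        for code in run_triggers(rid):
            findings.append({"record": rid, "trigger": code,
                             "accepted": None, "note": ""})  # filled in by the team
    return findings
```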

 

   
Last updated 26 November 2017 19:11