Why big data has so far failed medicine

Published October 5, 2013 | Audrey Quinn

Can big data improve human decision-making? That was the question that MIT’s Irving Wladawsky-Berger put to a panel of IBM clients at the company’s research colloquium on cognitive computing Wednesday. The meat of his discussion came in an exchange with Douglas Johnston, a surgeon at the Cleveland Clinic. We’ll share that exchange here.

“I think that what’s happening now with data science,” said Wladawsky-Berger, “is we can now turn these microscopes on ourselves, systems where the critical components are people, communities, and organizations, and get a level of understanding we didn’t have before.”

“In healthcare in general we’ve been applying data science poorly,” admitted Johnston. “We have a medical literature that is contradictory, and we are relying on 100-year-old transcription technology for our records. We still have to dig through those records to get the data. The results are failing because it’s garbage in, garbage out.”

“And that to me is the challenge for all of you,” Johnston said, addressing the audience of computer scientists. “How do we create a system that makes that body of data robust enough that we can work from it?”

“We keep talking about the ability to monitor people with chronic illnesses,” Wladawsky-Berger said. “I’m assuming that in five to ten years we’ll be able to generate huge amounts of real-time data. Will that change things?”

“I think that will be a huge advance,” Johnston replied. “We need to know how sick someone is when they come in the door, how well we do, and how well they do when they leave. I think we’ve done an okay job with the latter two but not the first. We need a way to look at those outcomes that is not dictated by a doctor into a dictation device.”
