
Meet Léa Deleris, IBM researcher on the Human Behaviour Change Project

By Morgan Williamson

Publication Date: 05/06/2018

Image credit: Léa Deleris

Léa Deleris oversees a team of researchers working on healthcare projects that leverage cognitive computing technologies. Together with Pol Mac Aonghusa, she is leading the computer science programme of research for the Human Behaviour Change Project. She filled us in on her research interests and current HBCP progress.

Can you give me an overview of your career and education?

I have been a researcher at IBM for 11 years. I joined the Thomas J. Watson Research Center in Yorktown Heights, NY (the headquarters of IBM Research) after completing my PhD at Stanford. My focus at that time was decision and risk analysis: investigating methods and models to help people make better decisions in the face of uncertainty. One domain I had looked into was supply chain risk analysis, a topic I pursued when I joined IBM Research. In 2010, when a new research lab was created in Dublin, I took the opportunity to come back to Europe (I am French). I became the principal investigator for its Risk Management Collaboratory, a research effort focused on democratising risk management. This started my interest in using text information, as we developed a system and associated algorithms that mine the medical literature to semi-automatically build risk models and support medical decision making based on those models.

What is your involvement in the Human Behaviour Change Project (HBCP)?

I am the lead researcher for the IBM components of the HBCP knowledge system. The team of computer science researchers that I oversee is developing two systems:

  1. A Natural Language Processing system to find and extract information from research reports, starting with the ‘use case’ of smoking cessation
  2. A set of Machine Learning and Reasoning algorithms that integrate and extrapolate from that information to generate new knowledge and hypotheses about behaviour change

What attracted you to the HBCP?

At its core is the idea that we need a disciplined, computer-assisted way to leverage knowledge stored in text format, academic papers in this case. As a researcher who also produces such papers, I sometimes have a nagging feeling that my work may never find its way to the people who could use it. This is the curse that comes with the tremendous progress we have made in producing and sharing information.

Tobacco use, excessive alcohol consumption, substance use, reckless driving and obesity are all examples of domains in which effective behaviour change policies can make a significant impact. One basic behaviour, handwashing with soap, is effective in reducing the risk of diarrhoea and the spread of respiratory diseases such as pneumonia, ultimately leading to reduced mortality in children under five years of age. This means that our research efforts have the potential to provide invaluable knowledge for developing or selecting behaviour change interventions and improving the well-being of society as a whole. The good news is that recent advances in Artificial Intelligence, in both the treatment of text information and the ability to reason with data, enable us to tackle this challenge: making sure the right information is accessible to the right person at the right time, and using our collective wisdom to recommend novel behaviour change interventions. Guided by the structure of a sound behaviour change intervention ontology, we are developing a system and an interface that enable policy makers and researchers alike to obtain recommendations based on a broad base of up-to-date evidence.

How do you feel the Human Behaviour Change Project is progressing?

At this point we are focusing predominantly on the Natural Language Processing system. Our efforts are targeted at developing an automated feature extraction system to annotate behaviour change intervention evaluation reports using an ontology of behaviour change interventions. The ontology is made up of several entities: context (population and setting), intervention (what was planned) and effects (how well it worked). One challenge lies in the diverse nature of the intervention features to be extracted from reports. Some are fairly well defined (e.g., a minimum age should be a number), but others are more free-form, such as behaviour change intervention or outcome descriptions. Another challenge is that reports typically use tables to summarise information efficiently, so we need to design methods that can also make sense of tabular text data. Within a few months, we have designed a first version of the NLP system, which we will keep refining and improving. This initial system provides sufficient visibility of the data to allow us to start exploring the inference system.
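
To give a flavour of the well-defined end of that spectrum, here is a minimal sketch of a rule-based extractor for one numeric feature, minimum age. The patterns and function names are hypothetical illustrations, not the project's actual pipeline; free-form features such as intervention or outcome descriptions would instead call for trained sequence-labelling models.

```python
import re
from typing import Optional

# Hypothetical patterns for one well-defined ontology feature: minimum age.
# Real reports phrase this in many ways, so hand-written rules like these
# could at best complement learned extraction models.
AGE_PATTERNS = [
    re.compile(r"aged\s+(\d{1,3})\s*(?:years?)?\s*(?:or older|and over)", re.I),
    re.compile(r"(\d{1,3})\s*years?\s+of age\s+or older", re.I),
    re.compile(r"minimum age\s*(?:of|was|:)?\s*(\d{1,3})", re.I),
]

def extract_min_age(sentence: str) -> Optional[int]:
    """Return the minimum age mentioned in a sentence, if any pattern matches."""
    for pattern in AGE_PATTERNS:
        match = pattern.search(sentence)
        if match:
            return int(match.group(1))
    return None

# A sentence typical of a smoking cessation report.
print(extract_min_age("Daily smokers aged 18 years or older were recruited."))  # 18
```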

Have you been working on any other projects? 

A few years ago, my team in Dublin worked on a project whose objective was to help medical professionals make more rational decisions. The tool we developed, called MedicalRecap, extracts information from PubMed's 24 million online citations to create a risk model for doctors. MedicalRecap's semantic module allows doctors to cluster the extracted terms by grouping similar or related terms into concepts. It also has an aggregation module, which allows the user to combine the extracted dependence and probability statements into a dependence graph, also known as a Bayesian network. Imagine a doctor who needs to understand the role of tea and coffee consumption in the incidence of endometrial cancer. Currently, doctors would address this task manually: searching for relevant papers, reading them, taking notes (by hand or by copy-pasting into a spreadsheet), and aggregating the data. MedicalRecap instead presents extracted and aggregated data in an intuitive graphical format, providing a way for the user to trace the summarised information back to the original input. The tool also allows users to edit the output of the algorithms if they encounter an error; these corrections are fed back into the system to improve its knowledge and performance over time. MedicalRecap thus relies on the doctor's expertise, reducing errors by combining the doctor's knowledge with the dependency relationships the tool infers. Ideally, it will reach the same conclusions that the doctor has already reached, so that doctors come to trust the cognitive system more.
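
The dependence graph MedicalRecap assembles is, in essence, a Bayesian network. As a rough illustration of the idea, assuming the open-source pgmpy library and entirely made-up placeholder probabilities (not figures from the tool or the medical literature), the coffee/tea and endometrial cancer example could be encoded and queried like this:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure aggregated from extracted dependence statements:
# coffee and tea consumption both influence endometrial cancer risk.
model = BayesianNetwork([("Coffee", "Cancer"), ("Tea", "Cancer")])

# Placeholder probabilities purely for illustration (state 0 = no, 1 = yes).
cpd_coffee = TabularCPD("Coffee", 2, [[0.6], [0.4]])
cpd_tea = TabularCPD("Tea", 2, [[0.7], [0.3]])
cpd_cancer = TabularCPD(
    "Cancer", 2,
    # Columns: (Coffee, Tea) = (0,0), (0,1), (1,0), (1,1)
    [[0.97, 0.96, 0.95, 0.93],
     [0.03, 0.04, 0.05, 0.07]],
    evidence=["Coffee", "Tea"], evidence_card=[2, 2],
)
model.add_cpds(cpd_coffee, cpd_tea, cpd_cancer)
assert model.check_model()

# The kind of query a doctor might pose: cancer risk given coffee consumption.
print(VariableElimination(model).query(["Cancer"], evidence={"Coffee": 1}))
```

In the actual tool, both the graph structure and the probabilities would come from statements extracted from PubMed citations and from the doctor's own corrections, rather than being entered by hand.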
