
Texts should be adapted to users

source link: https://ehudreiter.com/2021/04/01/texts-adapted-to-users/

In a recent blog I talked about my vision of using NLG to humanise data and AI, and listed several challenges. I’d like to expand on one of these, which is making NLG sensitive to the skills, knowledge, emotion, etc. of users, so that appropriate texts can be generated for very different people.

Example from Text Simplification

This came up recently in a discussion I had with Wei Xu, who does research on text simplification. Wei basically uses a corpus-based approach, where models are trained on aligned corpora of normal and simplified texts. Results are evaluated with metrics and by asking Turkers to rate the resulting texts for simplicity. I.e., pretty much what you would expect from an NLP researcher in 2021.
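
To make the paradigm concrete, here is a minimal sketch of the “train on aligned corpora” setup, assuming a HuggingFace-style seq2seq model; the toy sentence pair stands in for a real aligned corpus such as Newsela or Wikipedia/Simple Wikipedia alignments. This is my illustration, not Wei’s actual code.

```python
# A minimal sketch (not Wei's actual setup): fine-tune a seq2seq model on
# aligned (normal, simplified) sentence pairs. The single pair below stands
# in for a real aligned corpus such as Newsela.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

pairs = [
    ("The physician administered the medication via a nebuliser.",
     "The doctor gave the medicine through a mask."),
]

model.train()
for normal, simple in pairs:
    inputs = tokenizer("simplify: " + normal, return_tensors="pt")
    labels = tokenizer(simple, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Note: one model, one notion of "simple" -- the same output for every reader.
```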

However, if the goal of simplification is to make information more accessible to children, non-native speakers, adults with poor literacy skills, and others with below-average reading ability (which is very much in line with my vision!), then I think we need to acknowledge that such people have very different literacy skills. One person may struggle with complex syntax but know lots of words, another may mostly be OK but have difficulty with referring expressions, a third may have limited reading vocabulary in general but know lots of specialised words about football, etc. Also, adults with poor literacy want information presented in ways which they are used to, and often have low self-esteem and hence are sensitive to being patronised.

To take one simple example, years ago we tried to simplify patient-information material by paraphrasing unusual low-frequency words. But when we replaced “inhaler” in an asthma-information leaflet with “pump”, people just got confused. “Inhaler” is a very low-frequency word in English, but anyone who has asthma will be familiar with it. And although some people do use the word “pump” for asthma inhalers, others do not, and patients who were used to “inhaler” were confused when the term they knew was replaced by “pump”. We saw many other examples of this kind of thing, as well as cases where patients thought we were patronising them when we replaced medical terms with “everyday language” paraphrases.
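
As a thought experiment, here is a hypothetical sketch of how a lexical simplifier could avoid the “inhaler”/“pump” failure: only paraphrase a low-frequency word if this particular reader is not already familiar with it. The frequency values, synonym table, and familiarity set are all invented for illustration.

```python
# Hypothetical sketch: frequency-based word substitution guarded by a
# per-user familiarity check. All numbers and tables below are invented.

WORD_FREQ = {"inhaler": 2.1, "pump": 4.8}   # made-up log-frequencies
SYNONYMS = {"inhaler": "pump"}              # made-up paraphrase table

def simplify_word(word, known_words, freq_threshold=3.0):
    """Paraphrase a rare word, unless this particular reader knows it."""
    if WORD_FREQ.get(word, 0.0) >= freq_threshold:
        return word                   # common word: leave it alone
    if word in known_words:
        return word                   # rare, but familiar to this reader
    return SYNONYMS.get(word, word)   # rare and unfamiliar: paraphrase

# An asthma patient knows "inhaler", so the term is kept for them:
print(simplify_word("inhaler", known_words={"inhaler", "asthma"}))  # inhaler
print(simplify_word("inhaler", known_words=set()))                  # pump
```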

So anyway, from this perspective the problem with a corpus-based approach is that it ignores differences between readers. The model is trying to produce language similar to its training corpus, and the language in the corpus will be a better match for some low-skill readers than for others. Also, evaluating with metrics and Turkers is not going to reveal this issue; the only way to detect it is to evaluate with a varied set of low-skill readers.

Wei agreed, and indeed would like to build interactive systems which learn from users and adapt to their reading needs. But this is **much** harder than simply training a language model! It also requires HCI as well as NLP skills. It may well make sense to initially focus on “generic” simplification via a language model. But ultimately, I think interactive adaptation is a path we will need to take if we truly want to achieve the vision of making information accessible to all.
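
To give a flavour of what such interactive adaptation might look like, here is a purely illustrative sketch: the system remembers which words a particular reader flagged as hard and which they read without trouble, so later texts can be tailored to that reader. None of this is from an actual system.

```python
# Purely illustrative: a per-reader vocabulary model updated from feedback.

class ReaderModel:
    def __init__(self):
        self.hard_words = set()    # words this reader flagged as difficult
        self.known_words = set()   # words this reader read without trouble

    def update(self, shown_words, flagged_words):
        """Record feedback after the reader has seen a text."""
        self.hard_words |= set(flagged_words)
        self.known_words |= set(shown_words) - set(flagged_words)

    def needs_paraphrase(self, word):
        return word in self.hard_words

reader = ReaderModel()
reader.update(shown_words=["inhaler", "prescription"],
              flagged_words=["prescription"])
# Next text for this reader: paraphrase "prescription", keep "inhaler".
```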

User adaptation in NLG

Of course all of the above issues also apply to NLG, as indeed I discovered many years ago while working on a project to generate appropriate texts for adults with poor literacy (Williams and Reiter 2007). There are many other adaptations which also make a lot of sense. For example in the Babytalk project, we tried to generate reports for parents of babies in neonatal intensive care units (NICUs), to give updates on the baby’s condition (Mahamood and Reiter 2011). These reports assumed much less knowledge of medicine (and medical vocabulary) than reports for doctors and nurses (which we also generated in Babytalk). But some of the parents we worked with were very knowledgeable, either because they had a medical background or because they had spent a huge amount of time learning about neonatal medicine after their baby was put into the NICU. For these parents, a more technical report would have been better. Other parents were teenagers who had left school at 16 and sometimes struggled to understand any kind of numerical information; for these parents, our reports were perhaps too technical. Again, some kind of interactive adaptation mechanism would have been really useful.
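
As an illustration of the kind of adaptation this would need, here is a hypothetical sketch of user-model-driven lexical and content choice, loosely in the spirit of the Babytalk reports described above; the profile fields, thresholds, and lexicon entry are my assumptions, not the project’s actual design.

```python
# Hypothetical sketch, not Babytalk code: a user profile drives the choice
# between technical and lay wording, and whether to include raw numbers.
from dataclasses import dataclass

@dataclass
class UserProfile:
    medical_knowledge: float  # 0.0 = layperson .. 1.0 = clinician (assumed scale)
    numeracy: float           # 0.0 = struggles with numbers .. 1.0 = comfortable

LEXICON = {  # illustrative entry
    "bradycardia": {"technical": "an episode of bradycardia",
                    "lay": "a drop in the baby's heart rate"},
}

def realise(concept, value, unit, user):
    register = "technical" if user.medical_knowledge > 0.6 else "lay"
    phrase = LEXICON[concept][register]
    if user.numeracy > 0.5:
        return f"There was {phrase} ({value} {unit})."
    return f"There was {phrase}."   # omit raw numbers for low-numeracy readers

nurse = UserProfile(medical_knowledge=0.9, numeracy=0.9)
parent = UserProfile(medical_knowledge=0.2, numeracy=0.3)
print(realise("bradycardia", 80, "bpm", nurse))   # technical, with numbers
print(realise("bradycardia", 80, "bpm", parent))  # lay, numbers omitted
```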

Adaptation is not only about linguistic complexity and domain knowledge (which change slowly, so we can build up a user model over time). Ideally it should also take into consideration the user’s emotional state and stress level (which can change radically over the course of a day). For example, we may not want to tell someone bad news about a baby if the recipient is very depressed and we think that the bad news could trigger a heart attack or even a suicide attempt (blog). Since we know that stress can reduce cognitive and reading ability, we might want to reduce the linguistic complexity of a text if we know the recipient is very stressed (Balloccu and Reiter 2020). I think these examples just scratch the surface! We know that good doctors communicate with patients in a way that is sensitive to their emotional state. It would be great if NLG systems could do likewise, and I think this is essential for achieving my vision!
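
A tiny sketch of what the stress adjustment might look like: lower the target reading level as the user’s current stress rises. The scale and thresholds are invented for illustration.

```python
# Invented illustration: drop the target reading grade as stress (0..1) rises.

def target_reading_grade(base_grade, stress):
    """Reduce the target reading-grade level by up to 3 grades under stress."""
    return max(3, base_grade - round(3 * stress))

print(target_reading_grade(base_grade=9, stress=0.1))  # 9: calm reader
print(target_reading_grade(base_grade=9, stress=0.9))  # 6: highly stressed reader
```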

There are more subtle effects as well. For example, we know that different people use and interpret words in different ways (Reiter and Sripada 2002), so ideally an NLG system should adapt and “align” its vocabulary usage with each target reader.

I’d love to see more research on this!

Unfortunately, in all honesty I think I am seeing *less* research on user adaptation in NLG (and indeed NLP more generally) than I did 10 years ago. I wonder if this is because this kind of research

  • doesn’t fit well into the “train model on large corpus” paradigm
  • requires doing experiments on children, low-literacy adults, stressed mothers, etc., which is much harder than using metrics and running Turker-based experiments
  • is best done in collaboration with HCI experts, which is less common than it used to be

I hope this changes, because user-adaptation is essential to my vision of using NLG to humanise data and AI!

