
How do I Build an NLG System: Requirements and Corpora

source link: https://ehudreiter.com/2017/02/05/nlg-system-requirements/

As with other kinds of software development, the single most important thing is to get the requirements right. In other words, what does the system do (what inputs does it accept, what outputs does it produce from these inputs), and also “non-functional” requirements such as speed and concurrency. The biggest source of failure in software overall is getting requirements wrong, and this is true of NLG as well.

Unfortunately, getting requirements right for an NLG system is often harder than getting requirements right for databases, payroll systems, patient record systems, etc. This is because NLG is a new technology, which means that people don't understand what it can and cannot do. Someone who is commissioning a new payroll system usually has a pretty good idea about what payroll systems can and cannot do; but someone commissioning an NLG system may be very new to the technology, and not have this kind of understanding.

So how do we decide on the requirements of an NLG system?  Of course there is a large general literature on requirements engineering in software engineering, much of which can be applied to building NLG systems.  But is there anything unique about gathering requirements for NLG systems?

Corpus Analysis

I believe that corpus analysis is a very useful technique for understanding the requirements of NLG systems. By this, I don't necessarily mean gathering a corpus of thousands or millions of example input-output pairs and then using machine learning to build input-output models, although that is certainly a great thing to do if there is sufficient data. My focus here is more on building a collection of 50-100 input-output pairs, where the output texts are generally manually written by subject matter experts (SMEs). Using this corpus, the NLG developers and users can discuss in very concrete terms what the NLG system should do in specific cases.
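To make this concrete, here is a minimal sketch of how such a corpus of input-output pairs might be represented. The weather-report domain, field names, and structure are my own illustrative assumptions, not part of any particular system:

```python
from dataclasses import dataclass, field

@dataclass
class CorpusEntry:
    """One input-output pair: structured input data plus an SME-written target text."""
    entry_id: str
    input_data: dict      # the structured input the NLG system would receive
    target_text: str      # text manually written by a subject matter expert
    author: str           # which SME wrote it; useful when comparing SMEs
    notes: list[str] = field(default_factory=list)  # open questions from analysis

# Illustrative entry for a hypothetical weather-report application.
entry = CorpusEntry(
    entry_id="weather-042",
    input_data={"max_temp_c": 21, "min_temp_c": 9, "rain_mm": 0.0},
    target_text="A dry day, warm in the afternoon but chilly overnight.",
    author="SME-1",
)
```

Keeping the input data and the target text together in one record is the point: it lets developers and SMEs argue about a specific case ("given exactly this data, is this the right text?") rather than about the system in the abstract.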

Building a corpus is usually an iterative procedure. The SMEs write an initial version (perhaps by adapting texts written for other purposes), and the NLG developers analyse the initial texts to identify both content that cannot be generated (usually because the necessary input data is not available) and conflicts and inconsistencies between SMEs (which are pretty much inevitable if more than one SME writes the corpus texts). The NLG developers then discuss these with the SMEs and users (along with other questions, such as how specific edge/boundary cases should be handled). Hopefully the NLG developers, SMEs, and users will converge and agree on a specific set of 50-100 “target texts” and associated input data; this then forms a key part of the requirements specification of the NLG system.
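As one concrete sketch of the analysis step, the check below flags phrases in a target text that have no supporting field in the input data, i.e. content that cannot currently be generated. It reuses the hypothetical CorpusEntry above, and the phrase-to-field mapping is an assumption; in practice such a mapping would be built up by hand during corpus analysis:

```python
def flag_unsupported_phrases(entry, phrase_to_field):
    """Return (phrase, field) pairs where the target text uses a phrase
    whose supporting input field is missing from the input data."""
    text = entry.target_text.lower()
    return [
        (phrase, required_field)
        for phrase, required_field in phrase_to_field.items()
        if phrase in text and required_field not in entry.input_data
    ]

# If an SME wrote "a gusty afternoon" but no wind-gust data is supplied,
# this flags the phrase for discussion with the SMEs:
#   flag_unsupported_phrases(entry, {"gusty": "wind_gust_kmh"})
```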

I have discussed corpus analysis in many publications, some of which are listed below. These are mostly pretty old papers, but I don't think the basics have changed much in the past 20 years. The best place to start would be the corpus analysis section of my book.

Rapid Prototyping and Refinement

Although this is not a pure requirements analysis technique, I have often found that the best way to get functionality right is to build something fairly quickly, even if it probably has the wrong functionality, and then get users and subject matter experts to try out the system and see where they think its functionality needs to change. Of course “rapid prototyping” is very common across the software engineering world, and not something I invented! But I think it is especially appropriate for new and poorly understood technologies such as NLG. We acknowledge that you can't get requirements right for a new technology which users and SMEs have little experience with, so we give them something to play with, and use their feedback to get the system right.
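As a sketch of what such a first prototype might look like in an NLG setting, here is a deliberately crude template-based generator over the hypothetical weather data used above. Everything about it (fields, wording, rules) is an illustrative assumption; the point is speed of construction, not quality:

```python
def generate_report(data: dict) -> str:
    """First-cut generator: hard-coded templates over the input data.
    The goal is to give users and SMEs concrete output to react to,
    not to get the functionality or wording right first time."""
    parts = []
    if data.get("rain_mm", 0) > 0:
        parts.append(f"Rain is expected ({data['rain_mm']} mm).")
    else:
        parts.append("A dry day is expected.")
    if "max_temp_c" in data:
        parts.append(f"Temperatures will reach {data['max_temp_c']} C.")
    return " ".join(parts)

print(generate_report({"max_temp_c": 21, "min_temp_c": 9, "rain_mm": 0.0}))
# -> A dry day is expected. Temperatures will reach 21 C.
```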

I have discussed this a bit under the name “refinement” in my paper on Acquiring Correct Knowledge for Natural Language Generation. One limitation of refinement is that it tends to lead to “local optimisation” (incremental functionality improvements) rather than radically new approaches.

Relevant Papers

E. Reiter and R. Dale (2000). Building Natural-Language Generation Systems. Cambridge University Press. (Amazon)

E. Reiter and R. Dale (1997). Building Applied Natural-Language Generation Systems. Journal of Natural-Language Engineering, 3:57-87. (DOI)

E. Reiter, S. Sripada, and R. Robertson (2003). Acquiring Correct Knowledge for Natural Language Generation. Journal of Artificial Intelligence Research, 18:491-516. (journal link)

S. Williams and E. Reiter (2005). Deriving content selection rules from a corpus of non-naturally occurring documents for a novel NLG application. Proceedings of Corpus Linguistics workshop on using Corpora for NLG.


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK