Apple AI researchers propose a “Plan-Then-Generate” (PlanGen) framework to improve the controllability of neural data-to-text models

In recent years, advances in neural networks have driven significant progress in data-to-text generation. However, these models offer little control over the structure of their output, which limits their use in real-world applications that require specific formatting.

Researchers from Apple and the University of Cambridge propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of data-to-text neural models. PlanGen consists of two components: a content planner and a sequence generator. The content planner first predicts the most likely plan the output should follow. The sequence generator then produces the final text, taking both the data and the content plan as input. A minimal sketch of this two-stage pipeline is shown below.
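
The following Python sketch illustrates the two-stage design only at the pipeline level; the class and function names are hypothetical and do not come from the authors' repository, and the model calls are replaced by placeholders.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Record:
    """A linearized data record, e.g. one table row or a set of RDF triples."""
    data: str


def predict_content_plan(record: Record) -> List[str]:
    """Stage 1 (content planner): predict an ordered list of plan tokens
    that the output text should follow. A trained planner model would be
    called here; the return value below is a hard-coded placeholder."""
    return ["name", "goals", "team"]


def generate_text(record: Record, plan: List[str]) -> str:
    """Stage 2 (sequence generator): condition on both the data and the
    predicted content plan to produce the final text. A trained seq2seq
    model would be called here; the return value is a placeholder."""
    return "John Smith scored 12 goals for Arsenal."


record = Record(data="name : John Smith | team : Arsenal | goals : 12")
plan = predict_content_plan(record)
print(plan)
print(generate_text(record, plan))
```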

To further ensure the controllability of the PlanGen model, the researchers propose a structure-aware reinforcement learning objective that encourages the generated output to adhere to the given content plan. The content plan is represented as an ordered list of tokens, chosen for its simplicity and broad applicability. For tabular data, each content-plan token is a table slot key; for graph data stored in RDF format, each token corresponds to the predicate of a triple. The example below illustrates this representation.
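
As an illustration of the plan representation described above (the records and values here are invented, not taken from the paper), a content plan is simply an ordered list of slot keys for a table, or of predicates for a set of RDF triples:

```python
# Tabular data: each cell is a (slot key, value) pair.
table = [("name", "John Smith"), ("team", "Arsenal"), ("goals", "12")]

# Content plan for tabular data: an ordered list of slot keys, i.e. the
# order in which the cells should be mentioned in the output text.
table_plan = ["name", "goals", "team"]

# Graph data: RDF triples of the form (subject, predicate, object).
triples = [("John_Smith", "playsFor", "Arsenal"),
           ("John_Smith", "goalsScored", "12")]

# Content plan for graph data: an ordered list of triple predicates.
rdf_plan = ["playsFor", "goalsScored"]

print(table_plan)
print(rdf_plan)
```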

Source: https://arxiv.org/pdf/2108.13740.pdf

The researchers validated the proposed model on two benchmarks with different data structures: the ToTTo dataset (tabular data) and the WebNLG dataset (graph data). The model outperforms previous state-of-the-art approaches, as shown by both human and automatic evaluations, and its outputs follow highly controllable structures while maintaining high generation quality.

Key points to remember:

Article: https://arxiv.org/pdf/2108.13740.pdf

Github: https://github.com/yxuansu/plangen

Dataset: https://github.com/google-research-datasets/ToTTo

James G. Williams