Instructors
Muhao Chen,
Hongming Zhang,
Qiang Ning,
Manling Li,
Heng Ji and
Dan Roth.
Date and Time
12:00 PM-3:00 PM, Feb. 03, 2021.
Goal of Tutorial:
This tutorial targets researchers and practitioners who are interested in AI technologies that help machines understand natural language text, particularly real-world events described in the text. These include methods to extract the internal structures of an event regarding its protagonist(s), participant(s) and properties, as well as external structures concerning memberships, temporal and causal relations of multiple events. This tutorial will provide the audience with a systematic introduction to (i) knowledge representations of events, (ii) various methods for automated extraction, conceptualization and prediction of events and their relations, (iii) induction of event processes and properties, and (iv) a wide range of NLU and commonsense understanding tasks that benefit from the aforementioned techniques. We will conclude the tutorial by outlining emerging research problems in this area.
Introduction
Human languages always involve the description of real-world events. Therefore, understanding events plays a critical role in natural language understanding (NLU). For example, narrative prediction benefits from learning the causal relations of events to predict what happens next in a story; machine comprehension of documents may involve understanding events that affect the stock market, describe natural phenomena or characterize disease phenotypes. In fact, event understanding also finds important use cases in tasks such as open-domain question answering, intent prediction, timeline construction and text summarization. Since events are not just simple, standalone predicates, frontier research on event understanding generally faces two key challenges. One challenge is to precisely induce the relations of events, which describe memberships, co-reference, temporal orders and causality of events. The other is to comprehend the inherent structure and properties of an event, concerning its participants, granularity, location and time.
In this tutorial, we will comprehensively review existing paradigms for event-centric knowledge representation in the literature, and focus on their contributions to NLU tasks. Beyond introducing partial-label and unsupervised learning approaches for event extraction, we will discuss recent constrained learning and structured inference approaches for multi-faceted event-event relation extraction from text. We will also review recent data-driven methods for event prediction tasks, including event process induction and conceptualization, and how an event-centric language model benefits narrative prediction. In addition, we will illustrate how distantly supervised approaches help resolve temporal and causal commonsense understanding of events, and how they can be applied to construct a large-scale eventuality knowledge base. Participants will learn about recent trends and emerging challenges in this topic, representative tools and learning resources to obtain ready-to-use models, and how related models and techniques benefit downstream NLU applications.
Tutorial Outline
Introduction [20 min]
handout
We will define the main research problem and motivate the topic by presenting several real-world applications based on event-centric NLU.
Event-Centric Information Extraction (Part I) [40 min]
handout
We will start the tutorial by introducing the essential background knowledge about events and their relations, including their definitions, categorizations, and applications. In the last part of the introduction, we will cover widely used event representation methods, including event schemas, event knowledge graphs, event processes, event language models, and more recent work on event meaning representation via question-answer pairs, event network embeddings and event time expression embeddings. After introducing this background, we will present unsupervised and zero-shot techniques for parsing the internal structures of verb and nominal events from natural language text, which also involve methods for automatic salient event detection. In addition, this part will introduce event extraction technologies in multi-modal and multilingual scenarios.
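As a toy illustration of the trigger-detection step in event extraction (not the trained, unsupervised or zero-shot models the tutorial covers), the sketch below matches verb and nominal event triggers against a small hand-written lexicon; the lexicon entries, event types and example sentence are all invented for this example.

```python
# Illustrative sketch: lexicon-based detection of verb and nominal event
# triggers in a sentence. Real systems learn this mapping from data; the
# lexicon, event types and sentence below are toy examples.

EVENT_LEXICON = {
    "attacked": ("Conflict.Attack", "verb"),
    "attack": ("Conflict.Attack", "nominal"),
    "met": ("Contact.Meet", "verb"),
    "election": ("Personnel.Elect", "nominal"),
}

def detect_triggers(sentence):
    """Return (token, event_type, pos_class) for each lexicon match."""
    triggers = []
    for token in sentence.lower().strip(".").split():
        if token in EVENT_LEXICON:
            etype, pos = EVENT_LEXICON[token]
            triggers.append((token, etype, pos))
    return triggers

print(detect_triggers("Rebels attacked the town before the election."))
# [('attacked', 'Conflict.Attack', 'verb'), ('election', 'Personnel.Elect', 'nominal')]
```

Note that a single sentence can contain multiple triggers, and that nominal triggers ("election") are just as important as verbal ones ("attacked"), which is why the tutorial treats both.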
Event-Centric Information Extraction (Part II) [30 min]
handout
Once events are extracted, the second part of Event-centric IE will discuss methods that identify temporal and causal relations of primitive events, as well as event coreference. Specifically, for data-driven extraction methods, we will present how constrained learning and structured prediction are incorporated to improve these tasks by enforcing logical consistency among different categories of event-event relations. We will also cover various cross-domain, cross-lingual and cross-media structure transfer approaches for event extraction.
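To make the idea of enforcing logical consistency concrete, here is a minimal sketch of structured inference over temporal relations: local classifier scores for each event pair are combined into the jointly highest-scoring label assignment that satisfies transitivity. The events, scores, and brute-force search are invented for illustration; real systems typically use integer linear programming or constrained learning over many more relation types.

```python
# Illustrative sketch: pick the jointly highest-scoring BEFORE/AFTER
# assignment over event pairs that satisfies transitivity. The local
# scores are made-up classifier outputs, not from a real model.
from itertools import product

PAIRS = [("bombing", "rescue"), ("rescue", "trial"), ("bombing", "trial")]

SCORES = {
    ("bombing", "rescue"): {"BEFORE": 0.9, "AFTER": 0.1},
    ("rescue", "trial"):   {"BEFORE": 0.8, "AFTER": 0.2},
    # The local model wrongly prefers AFTER here; joint inference fixes it.
    ("bombing", "trial"):  {"BEFORE": 0.4, "AFTER": 0.6},
}

def consistent(assign):
    """Transitivity: a BEFORE b and b BEFORE c imply a BEFORE c (and dually for AFTER)."""
    order = dict(zip(PAIRS, assign))
    for label in ("BEFORE", "AFTER"):
        if order[PAIRS[0]] == label and order[PAIRS[1]] == label:
            if order[PAIRS[2]] != label:
                return False
    return True

def infer():
    """Brute-force search over consistent assignments (fine for a toy graph)."""
    best, best_score = None, float("-inf")
    for assign in product(["BEFORE", "AFTER"], repeat=len(PAIRS)):
        if not consistent(assign):
            continue
        score = sum(SCORES[p][l] for p, l in zip(PAIRS, assign))
        if score > best_score:
            best, best_score = assign, score
    return dict(zip(PAIRS, best))

print(infer())  # all three pairs come out BEFORE
```

The locally preferred labels (BEFORE, BEFORE, AFTER) violate transitivity, so inference flips the weakest local decision, exactly the kind of global correction the constrained approaches in this part provide.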
Event-centric Prediction [35 min]
handout
We will then present recent work on machine comprehension and prediction of event complexes, where events are understood from several different angles. For example, many efforts have been devoted to modeling event narratives so that missing events in an event sequence can be predicted. Another important angle is conceptualization, which aims at understanding the super-sub relations between events. Last but not least, we will discuss technologies for automatic completion of event process structures, and ways to infer the central goals and intentions of such processes.
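A classic statistical baseline for predicting a missing event in a sequence scores each candidate by its pointwise mutual information (PMI) with the events already observed in the chain. The sketch below implements that idea on a tiny invented corpus of event chains; neural event language models covered in this part replace these counts with learned representations.

```python
# Illustrative sketch: narrative event prediction via co-occurrence
# statistics over event chains. The corpus of chains is invented; real
# systems mine chains from large corpora or use event language models.
import math
from collections import Counter
from itertools import combinations

CHAINS = [
    ["arrest", "charge", "convict", "sentence"],
    ["arrest", "charge", "acquit"],
    ["arrest", "charge", "convict", "appeal"],
    ["apply", "interview", "hire"],
]

unigram = Counter(e for c in CHAINS for e in c)
pair = Counter(frozenset(p) for c in CHAINS for p in combinations(c, 2))
total = sum(unigram.values())

def pmi(a, b):
    """PMI of two events under the chain co-occurrence counts."""
    joint = pair[frozenset((a, b))]
    if joint == 0:
        return float("-inf")  # never co-occurred in any chain
    return math.log(joint * total / (unigram[a] * unigram[b]))

def predict_next(context, candidates):
    """Pick the candidate with the highest total PMI to the context events."""
    return max(candidates, key=lambda e: sum(pmi(c, e) for c in context))

print(predict_next(["arrest", "charge", "convict"], ["sentence", "hire"]))
# sentence
```

Because "hire" never co-occurs with the legal-domain context events, its score is driven to negative infinity and "sentence" wins, mirroring the narrative cloze evaluation setup used for these models.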
Event-centric Knowledge Acquisition [35 min]
handout
Commonsense reasoning is a challenging yet important research problem in the AI community, and one key challenge is the lack of satisfactory commonsense knowledge resources about events. Previous resources typically require laborious and expensive human annotations, which are not feasible on a large scale. In this part, we will introduce recent progress on modeling commonsense knowledge with high-order selectional preference over event knowledge, and demonstrate how to convert relatively cheap event knowledge, which can be easily acquired from raw documents with linguistic patterns, into the kind of commonsense knowledge defined in ConceptNet. Beyond that, we will also introduce how to automatically acquire other event-centric commonsense knowledge, including but not limited to temporal properties, intentions, effects and graph schemas of events.
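To give a flavor of how cheap event knowledge can be acquired from raw text with linguistic patterns, the sketch below matches discourse connectives such as "because" and "so" to link two clause-level eventualities with a tentative relation. The patterns, relation names and sentence are simplified inventions, not the actual acquisition pipeline discussed in this part.

```python
# Illustrative sketch: pattern-based acquisition of eventuality relations
# from raw sentences. The connective patterns and relation labels here
# are toy stand-ins for the linguistic patterns used by real pipelines.
import re

# connective -> relation between (left clause, right clause)
PATTERNS = [
    (re.compile(r"^(.*?), because (.*)$"), "Reason"),
    (re.compile(r"^(.*?), so (.*)$"), "Result"),
]

def extract(sentence):
    """Return an (eventuality, relation, eventuality) triple, or None."""
    sentence = sentence.strip(".")
    for regex, rel in PATTERNS:
        m = regex.match(sentence)
        if m:
            return (m.group(1), rel, m.group(2))
    return None

print(extract("I go to the hospital, because I am sick."))
# ('I go to the hospital', 'Reason', 'I am sick')
```

Triples mined this way are noisy individually, but aggregating them over large corpora yields frequency statistics from which higher-quality commonsense relations can be distilled.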
Emerging Research Problems [20 min]
handout
Event-centric NLU impacts a wide spectrum of knowledge-driven AI tasks, and is particularly intertwined with commonsense understanding. We will conclude the tutorial by presenting challenges and potential research topics in applying eventuality knowledge to downstream tasks (e.g., reading comprehension and dialogue generation), grounding eventuality knowledge in visual modalities, and consolidating events across documents with human-defined schemas. In addition, this part will provide pointers to representative demos and learning resources for event-centric NLU.
Resources
Tutorial syllabus
Tutorial slides (pptx)
Tutorial video presentation
Acknowledgement
This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, and U.S. DARPA AIDA Program No. FA8750-18-2-0014. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
- 2021 AAAI Event-centric NLU Tutorial