Description
Despite many advances in Natural Language Processing, enabling computers to understand the events that we experience remains far from solved. Events that are stereotypically desirable (positive) or undesirable (negative) for their experiencers are called Affective Events. Acquiring knowledge about affective events holds promise for a deeper understanding of narrative texts such as stories and conversations. In this dissertation, I present research on automatically learning two kinds of knowledge about affective events: (1) the affective polarity of an event, which indicates how experiencers are affected by it (e.g., "I broke my leg" is typically undesirable), and (2) the human needs associated with an affective event, which provide a general explanation for why the event is positive or negative (e.g., "I broke my leg" is negative because it violates the human need to be physically healthy).

I designed two graph-based semi-supervised models to identify the affective polarity of events. The first, the Event Context Graph model, identifies affective events using discourse context and event collocation information. The second, the Semantic Consistency Graph model, recognizes the affective polarity of events by optimizing semantic consistency among events. Experimental results show that the Semantic Consistency Graph model outperformed previous methods and learned over 110,000 affective events with over 90% precision for positive events and over 80% precision for negative events.

I also studied a new task of categorizing affective events into human need categories: Physiological, Health, Leisure, Social, Financial, Cognition, and Freedom Needs, which were developed to explain why events are affective. To automatically recognize the human needs associated with affective events, I designed a co-training model that learns from unlabeled data by iteratively training two classifiers in parallel, one based on an event expression view and one based on an event context view. Experimental results demonstrate that the co-training model performed well, outperforming each individual supervised classifier and a self-training model.
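To illustrate the graph-based approach concretely, below is a minimal Python sketch of label propagation over an event graph, in the spirit of the Semantic Consistency Graph model: polarity scores flow from a few labeled seed events to their unlabeled neighbors until the scores stabilize. The event phrases, edge weights, and seed labels are invented for illustration and are not the dissertation's data or exact objective.

```python
# A minimal sketch of graph-based semi-supervised polarity learning via
# label propagation. Nodes are events, weighted edges encode semantic
# relatedness, and seed labels (+1 positive, -1 negative) stay clamped.
# All names and values below are hypothetical illustrations.

from collections import defaultdict

def propagate_polarity(edges, seeds, iterations=50, tol=1e-6):
    """edges: iterable of (event_a, event_b, weight) with weight > 0
    seeds: dict mapping seed events to +1.0 or -1.0 (kept fixed)"""
    neighbors = defaultdict(list)
    for a, b, w in edges:
        neighbors[a].append((b, w))
        neighbors[b].append((a, w))

    # Start every node at its seed label, or 0.0 (unknown) otherwise.
    scores = {node: seeds.get(node, 0.0) for node in neighbors}

    for _ in range(iterations):
        max_delta = 0.0
        for node, nbrs in neighbors.items():
            if node in seeds:            # seed labels stay clamped
                continue
            total_w = sum(w for _, w in nbrs)
            new_score = sum(scores[n] * w for n, w in nbrs) / total_w
            max_delta = max(max_delta, abs(new_score - scores[node]))
            scores[node] = new_score
        if max_delta < tol:              # scores have stabilized
            break
    return scores

# Hypothetical toy graph: edges connect semantically related events.
edges = [
    ("I broke my leg", "I was hospitalized", 1.0),
    ("I was hospitalized", "I got sick", 0.8),
    ("I won the lottery", "I got a raise", 1.0),
    ("I got a raise", "I was promoted", 0.9),
]
seeds = {"I broke my leg": -1.0, "I won the lottery": +1.0}

for event, score in propagate_polarity(edges, seeds).items():
    polarity = "positive" if score > 0 else "negative" if score < 0 else "unknown"
    print(f"{event!r}: {score:+.2f} ({polarity})")
```

The key property this sketch captures is semantic consistency: related events are pushed toward the same polarity, so a handful of labeled seeds can induce labels for many unlabeled events.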
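The co-training model can likewise be illustrated with a generic two-view co-training loop in the style of Blum and Mitchell: one classifier per view, where each classifier promotes the unlabeled events it labels most confidently and both retrain on the growing labeled set. The scikit-learn pipeline, feature views, category ids, and toy data below are assumptions for illustration, not the dissertation's implementation.

```python
# A minimal sketch of two-view co-training for human need categories.
# The views (event expression text vs. event context text), classifiers,
# and toy data are illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def co_train(expr_texts, ctx_texts, labels, n_rounds=5, grow=2):
    """labels: need-category id per event, or -1 for unlabeled events.
    Each round, each view's classifier labels the unlabeled events it is
    most confident about; both views then retrain on the enlarged set."""
    labels = np.array(labels)                              # private copy
    X_expr = CountVectorizer().fit_transform(expr_texts)   # expression view
    X_ctx = CountVectorizer().fit_transform(ctx_texts)     # context view

    for _ in range(n_rounds):
        labeled = labels != -1
        unlabeled = np.flatnonzero(~labeled)
        if unlabeled.size == 0:
            break
        for X in (X_expr, X_ctx):
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[labeled], labels[labeled])
            proba = clf.predict_proba(X[unlabeled])
            conf = proba.max(axis=1)
            # Promote the `grow` most confidently labeled events.
            for i in np.argsort(-conf)[:grow]:
                labels[unlabeled[i]] = clf.classes_[proba[i].argmax()]
            labeled = labels != -1
            unlabeled = np.flatnonzero(~labeled)
            if unlabeled.size == 0:
                break
    return labels

# Hypothetical toy data: each event has an expression and a context string.
expr = ["I ate a tasty dinner", "I got my paycheck",
        "I ate a tasty lunch", "I got my bonus"]
ctx = ["after cooking at home", "at the end of the month",
       "after cooking at home", "at the end of the month"]
labels = [0, 1, -1, -1]   # 0 = Physiological, 1 = Financial, -1 = unlabeled
print(co_train(expr, ctx, labels))
```

The design intuition behind co-training is that the two views give partially independent evidence about each event, so confident labels from one view act as new training signal for the other, which is what lets the model improve on unlabeled data where a single supervised classifier or a self-training loop cannot.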