SCIENCE CHINA Information Sciences, Volume 62, Issue 11: 212101(2019) https://doi.org/10.1007/s11432-018-9833-1

## Event co-reference resolution via a multi-loss neural network without using argument information

• Accepted: Mar 15, 2019
• Published: Oct 9, 2019

### Abstract

Event co-reference resolution is an important task in natural language processing, and nearly all existing approaches to it rely on event argument information. However, such methods tend to suffer from errors propagated from event argument extraction. Moreover, not every event mention contains all of an event's arguments, so argument information can mislead the model when detecting event co-reference in real text. Furthermore, the context of an event is useful for inferring co-reference between events. Thus, to reduce the errors propagated from event argument extraction and to use context information effectively, we propose a multi-loss neural network model for the within-document event co-reference resolution task that does not require any argument information; it achieves significantly better performance than state-of-the-art methods.
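As an illustration only (the function names, the auxiliary task, and the weighting scheme below are assumptions, not the authors' design, which is shown in Figures 2 and 3), a multi-loss objective of this kind can be sketched as a pairwise co-reference loss plus a weighted auxiliary loss, so the pair scorer is trained without any event-argument features:

```python
import math

def binary_cross_entropy(p, y):
    """Loss for one binary prediction p in (0, 1) against a gold label y in {0, 1}."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multi_loss(coref_prob, coref_label, aux_prob, aux_label, alpha=0.5):
    """Hypothetical multi-loss objective: pairwise co-reference loss plus a
    weighted auxiliary loss (e.g., event-mention classification); alpha is an
    assumed trade-off weight."""
    return (binary_cross_entropy(coref_prob, coref_label)
            + alpha * binary_cross_entropy(aux_prob, aux_label))

# A confident, correct prediction on both tasks yields a small combined loss.
loss = multi_loss(coref_prob=0.9, coref_label=1, aux_prob=0.8, aux_label=1)
```

In this kind of joint objective, the auxiliary loss regularizes the shared representation with signal from the easier task, which is one common way to exploit context without extracted arguments.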

### Acknowledgment

This work was supported by National Natural Science Foundation of China (Grant Nos. 61533018, 61806201, 61702512), Independent Research Project of National Laboratory of Pattern Recognition. This work was also supported by CCF-Tencent Open Fund.

### References

[1] Daniel N, Radev D, Allison T. Sub-event based multi-document summarization. In: Proceedings of the HLT-NAACL 03 Workshop on Text Summarization, 2003. 9--16.

[2] Humphreys K, Gaizauskas R, Azzam S. Event coreference for information extraction. In: Proceedings of a Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, 1997. 75--81.

[3] Narayanan S, Harabagiu S. Question answering based on semantic structures. In: Proceedings of the 20th International Conference on Computational Linguistics, 2004.

[4] Bejan C, Harabagiu S. Unsupervised event coreference resolution with rich linguistic features. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 2010. 1412--1422.

[5] Chen Y B, Liu S L, Zhang X, et al. Automatically labeled data generation for large scale event extraction. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017. 409--419.

[6] Cybulska A, Vossen P. Guidelines for ECB+ Annotation of Events and Their Coreference. Technical Report NWR-2014-1, 2014.

[7] Bagga A, Baldwin B. Algorithms for scoring coreference chains. In: Proceedings of the 1st International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, 1998. 563--566.

[8] Luo X Q. On coreference resolution performance metrics. In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, 2005. 25--32.

[9] Vilain M, Burger J, Aberdeen J, et al. A model-theoretic coreference scoring scheme. In: Proceedings of the 6th Conference on Message Understanding, 1995. 45--52.

[10] Pradhan S, Luo X Q, Recasens M, et al. Scoring coreference partitions of predicted mentions: a reference implementation. In: Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, 2014.

[11] Cybulska A, Vossen P. Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution. 2014. http://120.52.51.16/www.lrec-conf.org/proceedings/lrec2014/pdf/840_Paper.pdf.

[12] Kehler A. Coherence, Reference, and the Theory of Grammar. Stanford: CSLI Publications, 2002.

[13] Cardie C, Wagstaf K. Noun phrase coreference as clustering. In: Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.

[14] Stoyanov V, Gilbert N, Cardie C, et al. Conundrums in noun phrase coreference resolution: making sense of the state-of-the-art. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2009. 656--664.

[15] Ahn D. The stages of event extraction. In: Proceedings of the Workshop on Annotating and Reasoning about Time and Events, 2006.

[16] Chen Z, Ji H, Haralick R. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In: Proceedings of the Workshop on Events in Emerging Text Types, 2009. 17--22.

[17] Zeng D J, Dai Y, Li F, et al. Adversarial learning for distant supervised relation extraction. Comput Mater Con, 2018, 55: 121--136.

[18] Chen Z, Ji H. Graph-based event coreference resolution. In: Proceedings of the Workshop on Graph-based Methods for Natural Language Processing, 2009. 54--57.

[19] Liu Z Z, Araki J, Hovy E H, et al. Supervised within-document event coreference using information propagation. 2014. https://pdfs.semanticscholar.org/200d/cd81b19601831915f5b2f184587053933370.pdf.

[20] Lu J, Venugopal D, Gogate V, et al. Joint inference for event coreference resolution. In: Proceedings of the 26th International Conference on Computational Linguistics, 2016. 3264--3275.

[21] Yang B, Cardie C, Frazier P. A hierarchical distance-dependent Bayesian model for event coreference resolution. Transactions of the Association for Computational Linguistics, 2015, 3: 517--528.

[22] Choubey P K, Huang R. Event coreference resolution by iteratively unfolding inter-dependencies among events. arXiv preprint, 2017.

[23] Bird S, Klein E, Loper E. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. Sebastopol: O'Reilly Media Inc., 2009.

• Figure 1

(Color online) Instances of event co-reference resolution.

• Figure 2

(Color online) Structure of feedforward neural network for event mention extraction.

• Figure 3

(Color online) Structure of MLNN for event co-reference detection.

• Table 1   Mentions of event components in ECB+ corpus

| Action | Participant | Time | Location |
|----------|----------------|-----------|----------|
| Shooting | Worker/2 women | 8:30 p.m. | Kraft |
• Table 2   Statistics of ECB+ corpus

|                         | Train | Dev. | Test | Total |
|-------------------------|-------|------|------|-------|
| #Documents              | 462   | 73   | 447  | 982   |
| #Sentences              | 7294  | 649  | 7867 | 15810 |
| #Event mentions         | 3555  | 441  | 3290 | 7268  |
| #WD chains              | 2499  | 316  | 2137 | 4953  |
| Average WD chain length | 2.8   | 2.6  | 2.6  | 2.7   |
• Table 3   Results of within-document event co-reference resolution on ECB+ corpus

| Method | $B^3$ R | $B^3$ P | $B^3$ $F_1$ | MUC R | MUC P | MUC $F_1$ | CEAF$_e$ R | CEAF$_e$ P | CEAF$_e$ $F_1$ | CoNLL $F_1$ |
|---------------------|------|------|------|------|------|------|------|------|------|------|
| LEMMA               | 56.8 | 80.9 | 66.7 | 35.9 | 76.2 | 48.8 | 67.4 | 62.9 | 65.1 | 60.2 |
| HDP-LEX (2010)      | 67.6 | 74.7 | 71.0 | 39.1 | 50.0 | 43.9 | 71.4 | 66.2 | 68.7 | 61.2 |
| Agglomerative (2009)| 67.6 | 80.7 | 73.5 | 39.2 | 61.9 | 48.0 | 76.0 | 65.6 | 70.4 | 63.9 |
| HDDCRP (2015)       | 67.3 | 85.6 | 75.4 | 41.7 | 74.3 | 53.4 | 79.8 | 65.1 | 71.7 | 66.8 |
| Iter-WD/CD (2017)   | 69.2 | 76.0 | 72.4 | 58.5 | 67.3 | 62.6 | 67.9 | 76.1 | 71.8 | 68.9 |
| MLNN                | 87.3 | 71.0 | 78.3 | 69.0 | 57.0 | 62.4 | 66.6 | 76.0 | 70.7 | 70.4 |
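The CoNLL $F_1$ reported in the tables is the unweighted average of the $B^3$, MUC, and CEAF$_e$ $F_1$ scores; computed from the rounded per-metric scores it reproduces the tabulated values to within rounding:

```python
def conll_f1(b3_f1, muc_f1, ceaf_e_f1):
    """CoNLL score: unweighted mean of the three metric F1 scores."""
    return (b3_f1 + muc_f1 + ceaf_e_f1) / 3

# LEMMA row of Table 3: (66.7 + 48.8 + 65.1) / 3 = 60.2
print(round(conll_f1(66.7, 48.8, 65.1), 1))  # -> 60.2
```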
• Table 4   Comparisons of three systems

| Method | $B^3$ R | $B^3$ P | $B^3$ $F_1$ | MUC R | MUC P | MUC $F_1$ | CEAF$_e$ R | CEAF$_e$ P | CEAF$_e$ $F_1$ | CoNLL $F_1$ |
|--------|------|------|------|------|------|------|------|------|------|------|
| C-NN   | 90.2 | 48.8 | 63.3 | 76.8 | 40.0 | 56.0 | 40.2 | 69.7 | 51.0 | 56.8 |
| C-MLNN | 86.8 | 67.7 | 76.0 | 67.6 | 53.3 | 59.6 | 62.3 | 74.5 | 67.9 | 67.8 |
| MLNN   | 87.3 | 71.0 | 78.3 | 69.0 | 57.0 | 62.4 | 66.6 | 76.0 | 70.7 | 70.4 |
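The $B^3$ metric [7] used throughout these tables scores each mention by the overlap between the gold chain and predicted chain containing it. A minimal sketch (assuming, for simplicity, that both partitions cover the same mention set):

```python
def b_cubed(key_chains, response_chains):
    """B-cubed precision/recall/F1 over two partitions of the same mentions.
    For each mention, recall is the fraction of its gold chain recovered by
    its predicted chain; precision swaps the roles of the two partitions."""
    key = {m: frozenset(c) for c in key_chains for m in c}
    resp = {m: frozenset(c) for c in response_chains for m in c}
    mentions = list(key)
    recall = sum(len(key[m] & resp[m]) / len(key[m]) for m in mentions) / len(mentions)
    precision = sum(len(key[m] & resp[m]) / len(resp[m]) for m in mentions) / len(mentions)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Gold chains {1, 2} and {3}; predicted chains {1} and {2, 3}.
p, r, f = b_cubed([[1, 2], [3]], [[1], [2, 3]])
```

Because B³ averages over mentions rather than over links, it rewards getting large chains mostly right, which is one reason the rankings in Tables 3 and 4 differ across metrics.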

Copyright 2020 Science China Press Co., Ltd.