DRE-MIR: A Masked Image Reconstruction Network for Document-level Relation Extraction
Document-level relation extraction aims to extract relations among entities
within a document. Compared with its sentence-level counterpart, document-level
relation extraction requires reasoning over multiple sentences to extract
complex relational triples. Previous work typically performs this reasoning
through information propagation on mention-level or entity-level
document graphs, while ignoring the correlations between relations. In
this paper, we propose a novel Document-level Relation Extraction model based
on a Masked Image Reconstruction network (DRE-MIR), which models inference as a
masked image reconstruction problem to capture the correlations between
relations. Specifically, we first leverage an encoder module to obtain entity
features and construct the entity-pair matrix from these features. We then
treat the entity-pair matrix as an image, randomly mask it, and restore it
through an inference module to capture the correlations between relations.
We evaluate our model on three public
document-level relation extraction datasets, i.e., DocRED, CDR, and GDA.
Experimental results demonstrate that our model achieves state-of-the-art
performance on these three datasets and is highly robust to the noise
introduced during the inference process.
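To make the mask-and-restore idea concrete, the sketch below is a minimal,
illustrative example (not the authors' implementation) of randomly masking
entries of an entity-pair feature matrix and reconstructing them, with a small
Transformer encoder standing in for the inference module. All module names,
dimensions, and the mask ratio are assumptions made for illustration.

    # Illustrative sketch only: treat the entity-pair matrix as an "image",
    # randomly mask entries, and reconstruct them from context so the model
    # can learn correlations between relations. Not the DRE-MIR code.
    import torch
    import torch.nn as nn

    class MaskedMatrixReconstructor(nn.Module):
        def __init__(self, hidden_dim: int = 256, num_layers: int = 2):
            super().__init__()
            # Learnable embedding that replaces masked entity-pair features.
            self.mask_token = nn.Parameter(torch.zeros(hidden_dim))
            # A plain Transformer encoder stands in for the inference module.
            layer = nn.TransformerEncoderLayer(
                d_model=hidden_dim, nhead=4, batch_first=True
            )
            self.inference = nn.TransformerEncoder(layer, num_layers=num_layers)

        def forward(self, pair_matrix: torch.Tensor, mask_ratio: float = 0.3):
            # pair_matrix: (num_entities, num_entities, hidden_dim)
            n, _, d = pair_matrix.shape
            # Flatten the "image" into a sequence of entity-pair tokens.
            flat = pair_matrix.reshape(1, n * n, d)
            # Randomly choose entries to mask.
            mask = torch.rand(1, n * n, device=flat.device) < mask_ratio
            masked = torch.where(
                mask.unsqueeze(-1), self.mask_token.expand_as(flat), flat
            )
            # Reconstruct masked entries from the unmasked context.
            restored = self.inference(masked)
            return restored.reshape(n, n, d), mask.reshape(n, n)

    # Usage: a document with 5 entities and 256-dim pair representations.
    pairs = torch.randn(5, 5, 256)
    model = MaskedMatrixReconstructor()
    restored, mask = model(pairs)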