Web tables form a valuable source of relational data. The Web contains an estimated 154 million HTML tables of relational data, with Wikipedia alone containing 1.6 million high-quality relational tables. Extracting the semantics of Web tables to produce machine-understandable knowledge has become an active area of research.
A key step in extracting the semantics of Web content is entity linking (EL): the task of mapping a phrase in text to its referent entity in a knowledge base (KB). In this paper we present TabEL, a new EL system for Web tables. TabEL differs from previous work by weakening the assumption that the semantics of a table can be mapped to pre-defined types and relations found in the target KB. Instead, TabEL enforces soft constraints in the form of a graphical model that assigns higher likelihood to sets of entities that tend to co-occur in Wikipedia documents and tables. In experiments, TabEL significantly reduces error when compared to current state-of-the-art table EL systems, including over 75% error reduction on Wikipedia tables and 60% error reduction on Web tables. We also make the Wikipedia table corpus and all test datasets publicly available for future work.
Datasets and APIs
- Web_Manual Dataset
- Wiki_Links-Random
- Test tables
  Each line of the file contains two tab-separated fields: a page ID and a table ID, which reference tables in the 1.6M-table dataset. In the JSON files, the corresponding field names are "pgId" and "tableId".
- TabEL_35K
  Will be released later.
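Joining the test-table ID list against the full table dump can be sketched as follows. This is a minimal example, not part of the official release: the file paths and function names are hypothetical, but the per-line format (tab-separated page ID and table ID) and the JSON field names `"pgId"` and `"tableId"` are as described above.

```python
import json

def load_test_ids(path):
    """Read the test-table list: one table per line, with two
    tab-separated fields (page ID, table ID)."""
    ids = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            pg_id, table_id = line.rstrip("\n").split("\t")
            ids.add((int(pg_id), int(table_id)))
    return ids

def filter_tables(dump_path, test_ids):
    """Yield tables from a JSON-lines dump whose ("pgId", "tableId")
    pair appears in the test set."""
    with open(dump_path, encoding="utf-8") as f:
        for line in f:
            table = json.loads(line)
            if (table["pgId"], table["tableId"]) in test_ids:
                yield table
```

Assuming the dump is stored one JSON object per line, this streams through it without loading the full 1.6M tables into memory.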
We are a team from Northwestern University.