
Agenda

• Introduction to Knowledge Graphs

• KG Application & Entity Linking Advances

• Techniques for Knowledge Graph Construction

• Frontier: Personal Knowledge Graphs


Knowledge Graphs
• Knowledge Graph (KG) = Knowledge Base (KB): entities, relations, types, and other properties.

[Figure: from the text world (words) to the semantic world (entities, types, relations). The mention "甲壳虫" ("Beetle") may refer to wiki/The_Beatles (type/乐队, "band") or to wiki/大众新甲壳虫 (Volkswagen New Beetle, type/汽车型号, "car model"), which links to wiki/大众汽车 (Volkswagen) via rel/生产商 ("manufacturer"). Scale: 20M entities in the example KB; 5B entities in Google's KG [1].]

[1] https://blog.google/products/search/about-knowledge-graph-and-knowledge-panels/
Why KGs are useful
• High-quality knowledge for structured understanding & reasoning.

• Question Answering: "Author of Harry Potter?" -> linking("harry potter").follow(/r/Author) -> "J.K. Rowling!" The KB stores J.K. Rowling -author-> Harry Potter (has_occupation: Author; nationality: U.K.), and Harry Potter -series-> Harry Potter I, Harry Potter II.

• Faceted Search: "leather bags under 300?" ->
  all.filter(/material/leather)
     .filter(/type/bag)
     .filter(/price < $300)
     .rerank()
  -> "Here's a list!"

• Query and Document Understanding: e.g. interpreting the product query "Ball Animal at Target".

Applications in Recommender Systems
• Reasoning over KG to identify relevant interests.

[Figure: Video Recommendation. A user likes "Love Story"; the KG path /song/Love_Story_(Taylor_Swift) -artist-> /person/Taylor_Swift -performs-> /event/The_1989_World_Tour (and The Red Tour) supports recommending tour videos; Taylor Swift and Adele are both instance_of the category "American female country singers", supporting a recommendation of Adele content.]

Entity Linking is critical in KB systems
• The crucial step: mapping text to KB entities!

• Question Answering: linking("harry potter").follow(/r/Author) works only if "harry potter" is first linked to the right KB entity (Harry Potter -author-> J.K. Rowling; Harry Potter -series-> Harry Potter I, Harry Potter II).

• Text Understanding: "Ball Animal at Target".

• Video Recommendation: linking content to /person/Taylor_Swift, /song/Love_Story_(Taylor_Swift), /event/The_1989_World_Tour.
Agenda

• Introduction to Knowledge Graphs

• KG Application & Entity Linking Advances

• Techniques for Knowledge Graph Construction

• Frontier: Personal Knowledge Graphs


Entity Linking
• Entity Linking: given a piece of text, find which entities in the KB are mentioned.

❖ e.g. "高尔夫甲壳虫哪个贵" ("Which is more expensive, the Golf or the Beetle?") mentions 2 entities:

• /wiki/大众高尔夫 (Volkswagen Golf), and

• /wiki/大众新甲壳虫 (Volkswagen New Beetle).

• Also known as Entity Disambiguation or Entity Resolution.

Entity Linking: Overview
• Sentence: "高尔夫甲壳虫哪个贵"

• Mentions: 高尔夫 ("Golf"), 甲壳虫 ("Beetle")

• Candidate Generation: propose candidate entities for each mention.

• Scoring: score each candidate (e.g. 0.96 vs 0.04 for the two 高尔夫 candidates; 0.85, 0.14, 0.01 for the three 甲壳虫 candidates).

Entity Linking: Traditional Approach
• Sentence: "高尔夫甲壳虫哪个贵"

• Mentions: 高尔夫, 甲壳虫, found using sequence labeling (NER), rules, entity alias tables… (decoding sketched below)

  Input:   高      尔      夫      甲      壳     …
  Output:  ENT/B   ENT/I   ENT/I   ENT/B   ENT/I
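
A minimal sketch of recovering mention spans from the BIO-style tags above (illustrative, not the production tagger):

    def decode_mentions(tokens, tags):
        """Collect ENT/B ... ENT/I runs into mention strings."""
        mentions, span = [], []
        for token, tag in zip(tokens, tags):
            if tag == "ENT/B":               # a new mention starts
                if span:
                    mentions.append("".join(span))
                span = [token]
            elif tag == "ENT/I" and span:    # the current mention continues
                span.append(token)
            else:                            # outside any mention
                if span:
                    mentions.append("".join(span))
                span = []
        if span:
            mentions.append("".join(span))
        return mentions

    tokens = ["高", "尔", "夫", "甲", "壳", "虫"]
    tags = ["ENT/B", "ENT/I", "ENT/I", "ENT/B", "ENT/I", "ENT/I"]
    print(decode_mentions(tokens, tags))  # ['高尔夫', '甲壳虫']
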
Entity Linking: Traditional Approach
• Mentions: 高尔夫, 甲壳虫

• Candidate Generation: using entity alias tables…

Example Alias Table:

alias | entity
高尔夫 | Volkswagen_Golf
高尔夫 | Golf_(sport)
甲壳虫 | Beetle_(insects)
甲壳虫 | The_Beatles_(band)
甲壳虫 | Volkswagen_Beetle

Is this a good approach?
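
Candidate generation with an alias table is essentially a dictionary lookup; a minimal sketch over the example table:

    from collections import defaultdict

    # alias -> candidate entities, built from the example alias table
    alias_table = defaultdict(list)
    for alias, entity in [
        ("高尔夫", "Volkswagen_Golf"),
        ("高尔夫", "Golf_(sport)"),
        ("甲壳虫", "Beetle_(insects)"),
        ("甲壳虫", "The_Beatles_(band)"),
        ("甲壳虫", "Volkswagen_Beetle"),
    ]:
        alias_table[alias].append(entity)

    def generate_candidates(mention):
        # Exact string match: recall is zero for any surface form
        # that was never mined into the table.
        return alias_table.get(mention, [])

    print(generate_candidates("甲壳虫"))
    # ['Beetle_(insects)', 'The_Beatles_(band)', 'Volkswagen_Beetle']
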
Entity Linking: Traditional Approach
• Mentions: 高尔夫, 甲壳虫

• Candidate Generation: as above.

• Scoring: using rules or shallow ML models over features such as:
❖ 0.96: prior=0.32, full_match_alias, same_type_in_sentence
❖ 0.04: prior=0.05, full_name_match, common_entity

Entity Linking: Typing-based Approach
• Mentions and Candidate Generation: as before.

• Scoring: use context to predict entity types, then combine the mention-conditioned prior with type coherence:
❖ P(E1|M) = X1, TypeCoherence(E1, Context) = Y1 -> 0.96
❖ P(E2|M) = X2, TypeCoherence(E2, Context) = Y2 -> 0.04

[Raiman & Raiman 2018] DeepType: Multilingual Entity Linking by Neural Type System Evolution
Challenges & Limits
• Absence of good alias tables for CandGen
❖ Some entities have no alias at all; ambiguous names abound (e.g. "san jose" -> San_Jose,_California / San_José,_Costa_Rica; "san francisco" -> San_Francisco / San_Francisco_International_Airport / San_Francisco,_Córdoba).
❖ Name mining: challenging, low recall, and error-prone.

• Zero-shot for tail and fresh entities
❖ e.g. new people, products, albums and events show up.
[Figure: long-tail distribution of entity mention frequency (x-axis: 1e3 to 1e7; y-axis: 0 to 1); most entities are tail entities.]

• Massive KBs
❖ e.g. Google KG: billions of entities

• KBs are multilingual

Revisit the traditional approach
• Sentence: "San jose to san francisco"

• Mentions: San jose (LOC), san francisco (LOC)

• CandGen: entity retrieval with dual encoders; featurized entity representations.
❖ "San jose" -> San Jose, California (0.96); San Jose, capital of Costa Rica (0.04)
❖ "san francisco" -> San Francisco, California (0.85); San Francisco International Airport (0.14); San Francisco, Argentina (0.01)

• Scoring: entity scoring with reading comprehension.

Intuition: Deep Retrieval
• Instead of an alias table, use a deep model to retrieve candidates.

• Encode the mention as a vector, and retrieve similar entity vectors via nearest neighbor search.

Mention: "Costa has not played since being struck by the AC Milan forward."
-> encode -> [0.135, -0.047, 0.028, … ] -> ANN search ->
Candidates retrieved:
  Ricardo_Costa (0.588)
  Jorge_Costa (0.572)
  Fernando_Torres (0.508)

[Illustration of nearest neighbor search for entity retrieval]

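A minimal sketch of this retrieval step, assuming an encode() function from a trained model and a matrix of unit-normalized entity vectors (brute force here; a production system would use an approximate nearest neighbor index):

    import numpy as np

    def retrieve(mention_text, entity_ids, entity_vecs, encode, k=3):
        """Return the top-k entities by cosine similarity to the mention."""
        q = encode(mention_text)              # e.g. [0.135, -0.047, 0.028, ...]
        q = q / np.linalg.norm(q)
        scores = entity_vecs @ q              # rows are unit-normalized
        top = np.argsort(-scores)[:k]
        return [(entity_ids[i], float(scores[i])) for i in top]

    # retrieve("Costa has not played since ...", ids, vecs, encode)
    # -> [('Ricardo_Costa', 0.588), ('Jorge_Costa', 0.572),
    #     ('Fernando_Torres', 0.508)]
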
Entity retrieval with dual encoders (DEER)
• Architecture: a mention encoder (over mention span and mention context) and an entity encoder (over entity title, paragraph, and categories), trained so matching pairs score high under cosine similarity.
❖ e.g. mention "Costa has not played since being struck by the AC Milan forward." vs entity Jorge_Costa: title words {jorge, costa}; paragraph words {jorge, paulo, born, 1971, …}; category words {person, …}.

• Train with two losses:
❖ Batch softmax for positives (sketched below)
❖ Cross entropy for hard negatives (hard negative mining: +0.37 R@1)

• Inference: nearest neighbor search over entity embeddings.

[Gillick 2019] Learning Dense Representations for Entity Retrieval
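
A minimal sketch of the batch softmax loss for positives (PyTorch; the temperature value is an assumed hyperparameter). Row i of each batch is a matching mention/entity pair, so every other entity in the batch acts as a random negative:

    import torch
    import torch.nn.functional as F

    def batch_softmax_loss(mention_vecs, entity_vecs, temperature=0.05):
        """mention_vecs, entity_vecs: [B, d]; row i is a positive pair."""
        m = F.normalize(mention_vecs, dim=-1)
        e = F.normalize(entity_vecs, dim=-1)
        logits = m @ e.T / temperature                     # [B, B] cosine scores
        labels = torch.arange(m.size(0), device=m.device)  # diagonal = positives
        return F.cross_entropy(logits, labels)

One common way to add the hard-negative term is to append mined negatives' encodings as extra columns of the logits matrix; the paper's exact setup may differ.
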
Limits to DEER

• Shallow BOW model for both encoders, not very expressive.

• Requires a sophisticated negative mining setup to work well.


Introducing BERT
• BERT was a major advance in NLP in 2018. It fuses these ideas:
❖ Transfer learning

❖ Contextualized word embeddings

❖ Transformers

❖ Pre-training with unsupervised data

• Core idea: use massive unlabeled text to pre-train a good language model.
[Devlin, Chang, Lee, Toutanova 2018] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
BERT tasks
• Input: “This is sentence one. This is sentence two.”
❖ [CLS] this is sentence one . [SEP] this is sentence two . [SEP]
• Training objectives:
• (1) Masked Language Model (MLM):
❖ [CLS] this [MASK] sentence one . [SEP] this is [MASK] two . [SEP]
❖ -> is, sentence
• (2) Next Sentence Prediction (NSP), on the same masked input:
❖ [CLS] this [MASK] sentence one . [SEP] this is [MASK] two . [SEP]
❖ -> is_next_sentence=1
• Key benefit: unsupervised. Model must understand the language well to make these
predictions!
BERT for entity linking?
• Mentions carry lots of entity info. Learn from the BERT representation of mentions?

• Input: "Valentina: first woman in space"
❖ [CLS] [E1] valentina [/E1] : first woman in space [SEP]

• Training objective: the entity linking task (example construction sketched below).
• K% of the time: mask the mention, predict the entity ID
❖ [CLS] [E1] [MASK] [/E1] : first woman in space [SEP]
❖ -> /wiki/Valentina_Tereshkova
• (100-K)% of the time: do not mask the mention, predict the entity ID
❖ [CLS] [E1] valentina [/E1] : first woman in space [SEP]
❖ -> /wiki/Valentina_Tereshkova
• The model must understand the entity to make a prediction!
[Ling, FitzGerald, Shan 2019] Learning Cross-Context Entity Representations from Text
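
A minimal sketch of constructing these training examples (the masking rate and helper names are illustrative assumptions):

    import random

    def make_example(context_left, mention_tokens, context_right,
                     entity_id, mask_rate=0.9):
        """Wrap the mention in [E1]...[/E1]; mask it mask_rate of the time."""
        span = ["[MASK]"] if random.random() < mask_rate else mention_tokens
        tokens = (["[CLS]"] + context_left + ["[E1]"] + span + ["[/E1]"]
                  + context_right + ["[SEP]"])
        return tokens, entity_id   # target: predict the entity ID

    # make_example([], ["valentina"], [":", "first", "woman", "in", "space"],
    #              "/wiki/Valentina_Tereshkova")
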
BERT for Entity Linking: RELIC
• "RELIC" model: Representations of Entities Learned In Context.

• Architecture: a BERT mention encoder reads "[CLS] [E1] Costa [/E1] has not played since being struck by the AC Milan forward . [SEP]", an FFNN projects the output, and the result is compared by cosine similarity against a table of entity embeddings (/wiki/Jorge_Costa, /wiki/Ricardo_Costa, /wiki/Costa_Coffee, …).

• Training:
❖ Batch softmax loss only

[Ling, FitzGerald, Shan 2019] Learning Cross-Context Entity Representations from Text
RELIC Evaluations
• R@1 matches SOTA on EL (CoNLL-AIDA).

• Industry-friendly model (no entity features; no alias table); can be used for retrieval.

• Outperforms DEER without doing negative mining.

R@1 (Accuracy) | CoNLL-AIDA | KBP 2010
Raiman 2018 (SOTA) | 94.9 | 90.9
DEER | - | 87.0
RELIC (ours) | 94.9 | 89.8
Insight: question answering
• RELIC is also an approach to entity-centric QA:
❖ encode the question, retrieve nearest entities in the embedding space as answer candidates.

❖ "closed-book" vs "open-book" QA

Question: "In which Lake District town would you find the Cumberland Pencil Museum? [MASK]"
-> encode -> [0.372, 0.187, -0.408, … ] -> NNS ->
Candidates:
  Keswick (0.638) (correct)
  Hawkshead (0.602)
  Grasmere (0.517)

Recovers 80% R@1 of more complex SOTA models for QA, in a closed-book setting.
[Ling, FitzGerald, Shan 2019] Learning Cross-Context Entity Representations from Text
Limits to RELIC

• No zero-shot ability: needs ~10 mentions of an entity to learn a reasonable embedding.

• Hard to scale to massive KBs: training billions of embeddings is challenging.
Recap: Challenges
• Absence of good alias tables for CandGen
❖ Mining names for entities can be challenging, low-recall, and error-prone.

• Zero-shot for tail and fresh entities
❖ e.g. new people, products, albums and events show up

• Massive KBs
❖ e.g. Google KG: billions of entities

• KBs are multilingual


Recap: Challenges
• Absence of good alias tables for CandGen
❖ Mining names for entities can be challenging, low-recall, and error-prone.

• Zero-shot for tail and fresh entities -> featurized entity representations
❖ e.g. new people, products, albums and events show up

• Massive KBs -> shared parameters between entities
❖ e.g. Google KG: billions of entities

• KBs are multilingual -> multilingual entity representations


Multilingual entity encoder w/ BERT: Model F

• Idea: use a pretrained BERT model to read entity features (title, description, types etc.) to produce the entity encoding (serialization sketched below).

• Dual-encoder architecture under cosine similarity, with FFNN heads on both sides:
❖ Mention encoder (BERT): [CLS] [E1] Costa [/E1] has not played since being struck by the AC Milan forward . [SEP]
❖ Entity encoder (BERT): [CLS] Jorge Paulo Costa Almeida, known as Costa, is a Portuguese retired footballer … [SEP]

• Encoders are multilingual.

• Train with batch softmax + cross entropy for hard negatives.

[Botha, Shan, Gillick 2020] Entity Linking in 100 Languages
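
A minimal sketch of the "featurized entity representation" idea: serialize an entity's features into one input string for the BERT entity encoder (the exact serialization format is an assumption; the marker tokens follow the architecture figure later in the deck):

    def serialize_entity(name, description, types=(), relations=()):
        """Flatten entity features into one entity-encoder input string."""
        parts = [name, description]
        if types:
            parts.append("[TYPE] " + " ; ".join(types))
        if relations:
            parts.append("[REL] " + " ; ".join(relations))
        return " ".join(parts)   # the tokenizer adds [CLS] ... [SEP]

    print(serialize_entity(
        "Jorge Costa",
        "Jorge Paulo Costa Almeida, known as Costa, is a Portuguese "
        "retired footballer",
        types=["person", "football_player"]))

Because the encoding is computed from features rather than looked up in a per-entity embedding table, new and tail entities get representations for free.
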


SOTA on Multilingual EL!

• R@1 outperforms previous SOTA on cross-lingual EL.

• With more entities and languages (a harder setting).

| Tsai+ | Upadhyay+ | Model F (ours)
Stats | 13 langs, 5m entities | 13 langs, 5m entities | 104 langs, 20m entities
R@1 on TR2016-hard (avg) | 0.51 | 0.54 | 0.57
Releasing New Multilingual EL dataset

"Wikinews 9" Dataset | Non-English | English
Languages | ja, de, es, ar, sr, tr, fa, ta (8 langs) | en
# mentions | 208,845 | 80,242
# distinct entities | 43,465 | 38,697
# entities not in English Wikipedia | 8,807 | -
Entity linking in 100 Languages
• Accuracy on a holdout set spanning 100+ languages.

[Figure: per-language Model F R@1 vs alias-table R@1, plotted against log(training size) for each language.]

Uniform improvements over the alias table on all 104 languages, regardless of training size!

[Botha, Shan, Gillick 2020] Entity Linking in 100 Languages

Example multilingual wins
• Mention (German): "…Beiden neuen Bahnen handelt es sich um das Model Tramino von der polnischen Firma Solaris Bus & Coach..." ("...the two new trams are the Tramino model from the Polish company Solaris Bus & Coach...")
-> Entity (Polish): "Solaris Tramino – rodzina tramwajów, które są produkowane przez firmę Solaris Bus & Coach z Bolechowa koło Poznania..."
(Q780281, a family of trams manufactured in Poland)

• Mention (English): "…Federal Bureau of Reclamation to protect threatened fish stopped irrigation pumping to parts of the California Central Valley..."
-> Entity (Japanese): "スプリンクラーは、水に高圧をかけ飛沫にしてノズルから散布する装置 ..." ("A sprinkler is a device that pressurizes water and sprays it from nozzles as droplets...")
(Q998539, a method of irrigating lawns and crops)

[Botha, Shan, Gillick 2020] Entity Linking in 100 Languages


Insight: Zero-shot and few-shot linking
• Model F largely improves R@100 on zero-shot and few-shot entities.

• Model F: feature-based. RELIC: embedding-based.

• Eval bucketed by entity frequency in training data:

R@100 diff (Model F vs RELIC) | holdout | TR2016-hard
Zero-shot [0,1) | +0.84 | +0.38
Few-shot [1,10) | +0.86 | +0.81
Micro-avg | +0.02 | +0.01

Scales to larger KBs because the entity encoder can generalize!
[Botha, Shan, Gillick 2020] Entity Linking in 100 Languages
Example Architecture

1. Training (offline, on GPU): train the BERT dual encoder. Mention encoder input: [CLS] [E1] mention [/E1] context [SEP]; entity encoder input: [CLS] name [TYPE] type [REL] rels [SEP]; FFNN heads on both sides.
2. Encoding entities (offline): run the entity encoder over the KB (e.g. /ent/Mojito_(song), /ent/Mojito_(cocktail), /ent/Glenrothes1990) and build an ANN search index.
3. Mention encoding (live): encode the incoming query mention.
4. Embedding search: ANN search over the index, returning e.g. /ent/Mojito_(song_by_Jay_Chou) (0.64), /ent/Mojito_(song_by_Abigail) (0.43), …
5. Downstream scoring (a sketch of the live path follows).
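
A minimal sketch of the live path (steps 3 to 5), assuming a trained mention encoder, an ANN index built offline over entity encodings, and a downstream rescorer (all names hypothetical):

    def link(mention, context, mention_encoder, ann_index, rescorer, k=100):
        # 3. Mention encoding: run on the live query.
        query_vec = mention_encoder.encode(mention, context)
        # 4. Embedding search: ANN lookup against offline-encoded entities.
        candidates = ann_index.search(query_vec, k)   # [(entity_id, score), ...]
        # 5. Downstream scoring, e.g. cross-attention reranking.
        rescored = [(eid, rescorer.score(mention, context, eid))
                    for eid, _ in candidates]
        return max(rescored, key=lambda pair: pair[1])
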
Recap: approach
• Sentence: "San jose to san francisco"

• Mentions: San jose (LOC), san francisco (LOC)

• CandGen:
❖ "San jose" -> San Jose, California (0.96); San Jose, capital of Costa Rica (0.04)
❖ "san francisco" -> San Francisco, California (0.85); San Francisco International Airport (0.14); San Francisco, Argentina (0.01)

• Scoring: entity scoring with reading comprehension.
Entity Scoring with Reading Comprehension

• Zeshel: a single BERT cross-attention model reads the mention and the entity together (a scorer sketch follows the citation).

• Input: one joint sequence, e.g. "[CLS] [E1] Costa [/E1] has not played since being struck by the AC Milan forward . [SEP] Jorge Paulo Costa Almeida, known as Costa, is a Portuguese retired footballer … [SEP]"; an FFNN on top produces a binary score (e.g. 0.96).

Model | Acc (Zero-shot EL dataset)
Dual-encoder (Model-F like) | 0.58
Cross-Attention | 0.76

Also works well for the Multilingual EL paper setting & Google EL scoring.
[Logeswaran 2019] Zero-Shot Entity Linking by Reading Entity Descriptions
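
A minimal sketch of such a cross-attention scorer using the Hugging Face transformers library (the model choice and the freshly initialized binary head are assumptions; it would need fine-tuning on linking data before the scores mean anything):

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL = "bert-base-multilingual-cased"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL, num_labels=1)   # single-logit binary scoring head

    def score(mention_in_context, entity_description):
        # One joint sequence: [CLS] mention side [SEP] entity side [SEP],
        # so attention flows across mention and entity tokens.
        inputs = tokenizer(mention_in_context, entity_description,
                           truncation=True, return_tensors="pt")
        with torch.no_grad():
            logit = model(**inputs).logits[0, 0]
        return torch.sigmoid(logit).item()
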
Agenda

• Introduction to Knowledge Graphs

• KG Application & Entity Linking Advances

• Techniques for Knowledge Graph Construction

• Frontier: Personal Knowledge Graphs


Where do knowledge bases come from?

• Human Editing?

• Hard Rules?

• Machine Learning Systems?


Common KBC Techniques

• Distant supervision (DS) uses an existing KB or rules to match text, bootstrapping low-resource domains (see the sketch after this list).
❖ KB-based DS: the KB triple J. K. Rowling -author-> Harry Potter (has_occupation: Author; nationality: U.K.) matches the sentence "J. K. Rowling, the 'mother' of Harry Potter, was in an interview…" -> relation: author.
❖ Rule-based DS: dependency patterns such as [A] -appos-> mother -nmod-> [B].

• Reusable ML components: Mention Extraction, Relation Extraction, Entity Linking, Coref.

• Rich textual feature libraries and pre-trained models.

• Tools for error analysis and quality measurement.

• ML frameworks and methods: DeepDive (factor-graph based), Snorkel, BERT, MTB, Transformers…
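
A minimal sketch of KB-based distant supervision: any sentence containing both arguments of a known KB triple is (noisily) labeled with that triple's relation:

    KB = {("J. K. Rowling", "Harry Potter"): "author"}

    def distant_label(sentence):
        """Noisy labels: co-occurrence does not guarantee the relation holds."""
        labels = []
        for (subj, obj), relation in KB.items():
            if subj in sentence and obj in sentence:
                labels.append((subj, relation, obj))
        return labels

    print(distant_label(
        'J. K. Rowling, the "mother" of Harry Potter, was in an interview'))
    # [('J. K. Rowling', 'author', 'Harry Potter')]
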
KBC Walkthrough:
From text to a knowledge base

KBC: overview
• Mention-level data (sentences): "Barack and Michelle are married ..." yields the mention-level relation "Barack" -has_spouse-> "Michelle".

• Entity Linking maps each mention to the entity it refers to.

• Entity-level data (knowledge base): Barack Obama -has_spouse-> Michelle Obama (entity-level relation).
KBC: input

• Unstructured text (e.g. Wikipedia text) and semi-structured tables & lists (e.g. Wikipedia infoboxes and lists).

orig_text: "Taylor Swift Performs in Pouring Rain at New Jersey Concert"

KBC: NLP tagging

• Run NLP tagging (tokens, lemmas, POS, NER, dependency parse) over the input (an off-the-shelf sketch follows the table).

orig_text: "Taylor Swift Performs in Pouring Rain at New Jersey Concert"

idx | tokens | lemmas | POS | NER | Dependency
0 | Taylor | taylor | NNP | PER | compound, 1
1 | Swift | swift | NNP | PER | nsubj, 2
2 | Performs | perform | VBZ | | root
3 | in | in | IN | | mark, 4
4 | Pouring | pour | VBG | | advcl, 2
5 | Rain | rain | NN | | dobj, 4
6 | at | at | IN | | case, 9
7 | New | new | NNP | PROV | compound, 9
8 | Jersey | jersey | NNP | PROV | compound, 9
9 | Concert | concert | NNP | | nmod, 4
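
Comparable annotations can be produced with an off-the-shelf pipeline; a sketch using spaCy (label inventories differ slightly from the table above):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Taylor Swift Performs in Pouring Rain at New Jersey Concert")
    for tok in doc:
        # index, token, lemma, POS tag, NER type, dependency label, head index
        print(tok.i, tok.text, tok.lemma_, tok.tag_,
              tok.ent_type_, tok.dep_, tok.head.i)
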
KBC: mention extraction

• Extract typed mention candidates with confidence scores.

orig_text: "Taylor Swift Performs in Pouring Rain at New Jersey Concert"

-> ("Taylor Swift", PER, 0.97)
-> ("New Jersey", LOC, 0.99)
-> ("New Jersey Concert", EVENT, 0.68)
-> ("Pouring Rain", EVENT, 0.25)
KBC: relation extraction

• Extract mention-level relations between the extracted mentions.

orig_text: "Taylor Swift Performs in Pouring Rain at New Jersey Concert"

-> ("Taylor Swift", "New Jersey", PERFORMS_IN, 0.95)


KBC: coref and linking

• Resolve coreference across mentions and link mentions to a structured KB (e.g. Wikidata).

Text: "Taylor Swift Performs in Pouring Rain at New Jersey Concert. Swift said in a video posted on her Instagram story, which shows her with her mother Andrea Swift."

KBC: coref and linking

• Annotated output:

"Taylor Swift [m1; linking(Q26876)] Performs in Pouring Rain at New Jersey Concert. Swift [m2; coref(m1)] said in a video posted on her [m3; coref(m1)] Instagram story, which shows her with her mother Andrea Swift [m4; linking(Q17319206)]."
KBC: fusion

• Data fusion reconciles the extraction pipeline's output with existing structured KBs (e.g. Wikidata) into one knowledge base (a fusion sketch follows):

<Taylor_Swift, birth_date, "1989-12-13">, 0.98, from Wikidata
<Taylor_Swift, birth_date, "1989-12-14">, 0.85, from CrazyForum
<Taylor_Swift, birth_date, "1989-12-13">, 0.93, from NewsExtractions

-> <Taylor_Swift, birth_date, "1989-12-13">, 0.94, sources: ["Wikidata"]

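A minimal sketch of one fusion strategy: weight each extraction by a per-source trust prior, accumulate support for identical triples, and keep the best-supported value per (subject, predicate). The trust weights are illustrative assumptions, and the deck's calibrated output confidence (0.94) implies a normalization step omitted here:

    from collections import defaultdict

    SOURCE_TRUST = {"Wikidata": 1.0, "NewsExtractions": 0.8, "CrazyForum": 0.3}

    def fuse(extractions):
        """extractions: iterable of (subj, pred, obj, confidence, source)."""
        support = defaultdict(float)
        sources = defaultdict(list)
        for s, p, o, conf, src in extractions:
            support[(s, p, o)] += conf * SOURCE_TRUST.get(src, 0.5)
            sources[(s, p, o)].append(src)
        best = {}
        for (s, p, o), score in support.items():
            if score > best.get((s, p), (None, 0.0))[1]:
                best[(s, p)] = (o, score)
        return {(s, p): (o, sources[(s, p, o)])
                for (s, p), (o, score) in best.items()}

    fuse([("Taylor_Swift", "birth_date", "1989-12-13", 0.98, "Wikidata"),
          ("Taylor_Swift", "birth_date", "1989-12-14", 0.85, "CrazyForum"),
          ("Taylor_Swift", "birth_date", "1989-12-13", 0.93, "NewsExtractions")])
    # {('Taylor_Swift', 'birth_date'):
    #    ('1989-12-13', ['Wikidata', 'NewsExtractions'])}
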
KBC: error analysis

• The full pipeline: NLP tagging -> Mention Extraction -> Relation Extraction -> Coref resolution and Entity Linking -> Data Fusion -> Knowledge Base.

• Error analysis and labeling tooling across all stages: http://deepdive.stanford.edu/labeling
A more modern approach with BERT
• Matching-the-Blanks (MTB): use BERT to read two mentions in context and decide whether they express the same relation (pair construction sketched below).

• Contrastive learning with distant supervision:
  "[A] is [B]'s mother" = "[B] gave birth to [A] on a snowy morning in 1981"
  "[A] is [B]'s mother" ≠ "[C] and [A] were childhood friends"

• Large wins on FewRel, exceeding human performance!
[Baldini-Soares 2019] Matching the Blanks: Distributional Similarity for Relation Learning
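
A minimal sketch of the MTB training-pair construction: two relation statements form a positive pair iff they mention the same entity pair, and entity names are replaced by [BLANK] with probability alpha so the model must rely on context rather than memorized names (span handling simplified):

    import random

    def blank_entities(tokens, span_a, span_b, alpha=0.7):
        """Replace each entity span (start, end) with [BLANK] w.p. alpha."""
        out = list(tokens)
        # Replace later spans first so earlier offsets stay valid.
        for start, end in sorted((span_a, span_b), reverse=True):
            if random.random() < alpha:
                out[start:end] = ["[BLANK]"]
        return out

    def mtb_label(example_1, example_2):
        # Positive iff both statements mention the same entity pair.
        return int(set(example_1["entities"]) == set(example_2["entities"]))
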
Challenges for knowledge base construction

• KBC for low-resource languages.

• Reducing noise in distant supervision.

• Temporal knowledge and events.

• KBC with multiple modalities and sources.

• Personal KG construction and representation.


Agenda

• Introduction to Knowledge Graphs

• KG Application & Entity Linking Advances

• Techniques for Knowledge Graph Construction

• Frontier: Personal Knowledge Graphs


New topic: Personal Knowledge Graphs
• Personal Knowledge Graphs (PKGs) were proposed by researchers at Google in 2019 [Balog 2019].

• User-specific knowledge graphs: a good representation of a user's persona.

• Types of PKGs defined in the paper:

❖ Personalized KGs: a user-specific view of the public KG.

❖ Personal KGs: contain entities and relations not in the public KG.

[Balog 2019] Personal Knowledge Graphs: A Research Agenda
Example of a Personal Knowledge Graph

[Figure: PKG for User_134 (name: "Jane Doe"). Private entities: Guitar_1 (instance of Acoustic Guitar), Guitar_2 (instance of Electric Guitar), with date-purchased attributes (2015-06, 2017-08), and Car_1 (instance of Mazda-6), all connected via "possess". Other relations: parent (John Doe, Jessica Doe), residence (Shenzhen), previous residence (San Francisco, 2015), interests (Folk Music, Basketball, Birds, Baseball). Legend: private entity / public entity / textual attribute.]
User Portrait vs PKG
• Traditional user portraits are collections of (unstructured) labels and attributes:

[Figure: a flat bag of labels: Electric Guitar, Acoustic Guitar, Rock, Imagine Dragons, James, Basketball, Baseball, Ragdoll, Bird.]
User Portrait vs PKG
• PKGs: entities, relations, temporal information; expandable; support reasoning.

[Figure: the same interests as a graph. Acoustic Guitar and Electric Guitar are subclasses of Guitar (related: Taylor, the guitar brand); Imagine Dragons and Muse (band) have genre Rock Music, alongside Folk Music; LeBron James has former employer Miami Heat and employer LA Lakers, in the NBA league, tied to Basketball; Basketball, Baseball, Soccer and Skiing connect to Sports; Ragdoll is a Cat; Birds, Pets and Animals are connected by subclass/instance edges. The structure supports expansion to related interests.]
Stacks for PKG Construction

• Outputs: Personal Knowledge Graphs; Personal Interest Embeddings.

• Reasoning layer: Entity Discovery; Entity Expansion; Temporal reasoning; Interest Understanding; Multimodal.

• Extraction layer: Mention Detection; Relation Extraction; Entity Linking; Behavioral sequence modeling.

• Inputs: Documents; Videos; User Interactions; General KGs.

Industrial Applications of PKGs

• Built on Personal Knowledge Graphs and Personal Interest Embeddings: Personalized Search; Interest Expansion; Personal Assistants; Ads & Shopping; Content Recommendations.

Example usage of a PKG
• Assistant:
  User: "Buy strings for my guitar."
  Assistant: "The acoustic or electric one?" (the PKG knows the user owns both)

• Video Recommender: "You may be interested in these recent SF baseball videos!" (combining the user's previous residence with a baseball interest)
PKG: Challenges and Open Questions
• Engineering PKGs with Privacy

❖ Effective Distant supervision without reliance on labeled personal data;

❖ Knowledge transfer from generic KBC models to personal KBC models

❖ Knowledge distillation and edge computation

❖ Keep user experience and privacy in mind!

• How to model and store an evolving PKG?

❖ New entities and relations; expired entities and relations; state updates (e.g. purchased, watched); evolving interests…

• Explainable, temporal, efficient

[Balog 2019] Personal Knowledge Graphs: A Research Agenda


• Zifei Shan@Tencent WeChat

• (We are hiring in Shenzhen & Shanghai!)
