
Improving Neural Question Generation using Answer Separation

Yanghoon Kim, Hwanhee Lee, Joongbo Shin and Kyomin Jung

Introduction
Neural Question Generation (NQG)
- The task of generating a question from a given passage using deep neural networks.
Motivation
- Previous NQG models suffer from the problem that a significant proportion of the generated questions include words from the target answer.
Approach
- Separately encode the passage (with the answer span masked out; see the sketch below) and the target answer
- Extract key information from the target answer
Outcome
- Significantly reduces the number of improper questions that include the target answer.
- Quantitatively outperforms previous state-of-the-art NQG models.
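
Because the passage is fed to its encoder with the target answer masked out, a small preprocessing step is needed. Below is a minimal sketch of that step, assuming word-level tokenization and a single placeholder token; the mask token string "<a>" and the function name are illustrative assumptions, not the authors' code.

    def mask_answer(passage_tokens, answer_start, answer_len, mask_token="<a>"):
        # Replace the answer span in the passage with one mask token,
        # so the passage encoder never sees the answer words themselves.
        return (passage_tokens[:answer_start]
                + [mask_token]
                + passage_tokens[answer_start + answer_len:])

    # Example: the generated question should not copy "1989" from the passage.
    passage = "the Berlin Wall fell in 1989 after months of protest".split()
    masked = mask_answer(passage, answer_start=5, answer_len=1)
    # masked -> ['the', 'Berlin', 'Wall', 'fell', 'in', '<a>', 'after', 'months', 'of', 'protest']

The masked passage and the raw answer tokens are then encoded separately, as described in the next section.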

Answer-Separated Seq2Seq
Input / Output
- A masked passage and a target answer / A question

Answer-Separated Passage Encoder & Answer Encoder
- Two separate Bi-LSTMs

Answer-Separated Decoder
- LSTM with attention mechanism
- Initialized with the last hidden state of the answer encoder
- Keyword-Net: attention-based key information extractor (equations and sketch below)
- Retrieval-style word generator: adopts the idea of (Ma et al., 2018)

Keyword-Net (L hops, at decoder timestep t)
    o_t^0 = c_t
    p_{tj}^l = Softmax((o_t^{l-1})^T h_j^a)
    o_t^l = Σ_j p_{tj}^l h_j^a
    s_t = LSTM(y_{t-1}, s_{t-1}, c_t, o_t^L)
where
- h_j^a : j-th hidden state of the answer encoder
- s_t : t-th hidden state of the decoder
- c_t : context vector (attention mechanism)
- o_t^l : l-th hop Keyword-Net feature vector for decoder timestep t
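
To make the Keyword-Net equations concrete, here is a minimal PyTorch sketch of the multi-hop extraction for one decoder timestep. The function name, tensor shapes, and the n_hops parameter are illustrative assumptions, and the context vector is assumed to share its hidden size with the answer-encoder states.

    import torch

    def keyword_net(c_t, h_ans, n_hops=2):
        # c_t   : (batch, hidden)          context vector from the attention mechanism
        # h_ans : (batch, ans_len, hidden) hidden states h_j^a of the answer encoder
        o = c_t                                                   # o_t^0 = c_t
        for _ in range(n_hops):
            # p_tj^l = Softmax((o_t^{l-1})^T h_j^a): attend over answer states
            scores = torch.bmm(h_ans, o.unsqueeze(2)).squeeze(2)  # (batch, ans_len)
            p = torch.softmax(scores, dim=1)
            # o_t^l = Σ_j p_tj^l h_j^a: weighted sum of answer states
            o = torch.bmm(p.unsqueeze(1), h_ans).squeeze(1)       # (batch, hidden)
        return o  # o_t^L, fed to the decoder LSTM with y_{t-1}, s_{t-1}, c_t

Each hop re-queries the answer states with the vector produced by the previous hop, so the extractor can iteratively sharpen which answer words matter for the current decoding step.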

Performance Comparison (SQuAD 1.1)


Model              | Split-1                                | Split-2
                   | BLEU         METEOR       ROUGE-L      | BLEU
Du et al., 2017    | 12.28        16.62        39.75        | -
Song et al., 2018  | 13.98        18.77        42.72        | 13.91
Zhou et al., 2017  | -            -            -            | 13.29
ASs2s (ours)       | 16.20±0.32   19.92±0.20   43.98±0.25   | 16.17±0.35

Recall of interrogative word prediction

Model          | what    how     when    which   where   who     why     yes/no
Seq2seq + AP   | 77.3%   56.2%   19.4%   3.4%    12.1%   36.7%   23.7%   5.3%
ASs2s          | 82.0%   74.1%   43.3%   6.1%    46.3%   67.8%   28.5%   6.2%

Target answer inclusion (lower is better)

Model              | Complete   Partial
Seq2seq + AP       | 0.8%       17.3%
Song et al., 2018  | 2.9%       24.0%
ASs2s (ours)       | 0.6%       9.5%
