
HKE’s
Department of Computer Science & Engineering

Question Bank
Sub: Deep Learning (18CS731); Date: 20-1-22
----------------------------------------------------------------------------------------------------

MODULE-1
1. Define the following:
i. Deep learning
ii. ANN
iii. Activation function
iv. Gradient descent
v. Forward/Backward propagation
vi. Learning rate

2. Explain ANN and its layers.


3. Discuss different activation functions/transfer functions in neural networks:
i. Sigmoid function
ii. Tanh function
iii. Rectified Linear Unit (ReLU) function
iv. Leaky ReLU function
v. Exponential Linear Unit (ELU) function
vi. Swish function
vii. Softmax function
4. Describe forward/backward propagation in ANN.
5. Illustrate how an ANN learns.
6. Illustrate debugging of gradient descent with gradient checking.
7. Show a step-by-step implementation of the gradient checking algorithm in Python (see the sketch at the end of this module).
8. Explain gradient checking.
9. Explain the dying ReLU problem.
10. Define a computational graph.
11. What are sessions? Explain how sessions are created in TensorFlow.
12. Differentiate between variables and placeholders.
13. Explain the need for TensorBoard.
14. Discuss math operations in TensorFlow.
15. What is a name scope? Explain how it is created.
16. Explain eager execution.
17. Discuss the important steps for building a model in Keras.
18. Describe the two different APIs used to define models in Keras (Sequential and Functional APIs); see the sketch at the end of this module.
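A minimal study sketch for question 7, assuming a plain NumPy implementation; the function names and the toy objective f(theta) = sum(theta^2) are illustrative, not prescribed by the syllabus:

    import numpy as np

    def numerical_gradient(f, theta, eps=1e-7):
        # Centered-difference approximation of df/dtheta, one coordinate at a time
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            old = theta.flat[i]
            theta.flat[i] = old + eps
            f_plus = f(theta)
            theta.flat[i] = old - eps
            f_minus = f(theta)
            theta.flat[i] = old                      # restore the coordinate
            grad.flat[i] = (f_plus - f_minus) / (2 * eps)
        return grad

    def gradient_check(f, grad_f, theta, eps=1e-7):
        # Relative error between the analytic and numerical gradients;
        # values around 1e-7 or smaller suggest the backprop code is correct
        analytic = grad_f(theta)
        numeric = numerical_gradient(f, theta, eps)
        return (np.linalg.norm(analytic - numeric)
                / (np.linalg.norm(analytic) + np.linalg.norm(numeric)))

    # Toy check: f(theta) = sum(theta**2) has gradient 2*theta
    theta = np.random.randn(5)
    print(gradient_check(lambda t: np.sum(t ** 2), lambda t: 2 * t, theta))

And for question 18, a minimal sketch of the same small model written with both Keras APIs; the layer sizes and the binary-classification setup are assumptions for illustration:

    from tensorflow import keras

    # Sequential API: a plain linear stack of layers
    seq_model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Functional API: layers wired explicitly, which also allows branches,
    # shared layers, and multiple inputs or outputs
    inputs = keras.Input(shape=(10,))
    x = keras.layers.Dense(64, activation="relu")(inputs)
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)
    func_model = keras.Model(inputs=inputs, outputs=outputs)

    # The usual steps after defining the model: compile, then fit/evaluate
    func_model.compile(optimizer="adam", loss="binary_crossentropy",
                       metrics=["accuracy"])

The Sequential API suits plain layer stacks; the Functional API becomes necessary once a model has branches, shared layers, or multiple inputs/outputs.
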
MODULE-2
1. What is the drawback of RNNs? What causes this drawback?
2. With a neat diagram, explain the LSTM cell (input gate, output gate, forget gate).
3. Differentiate between RNN and LSTM networks.
4. Explain RNN and its layers, with a neat diagram.
5. Differentiate between RNNs and feedforward networks.
6. Explain forward propagation in RNN (see the sketch at the end of this module).
7. Explain backpropagation through time (BPTT) in RNN.
8. Illustrate the gradients with respect to the hidden-to-output weights, V.
9. Illustrate the gradients with respect to the hidden-to-hidden weights, W.
10. Illustrate the gradients with respect to the input-to-hidden weights, U.
11. Illustrate the vanishing and exploding gradients problem in RNN.
12. What is gradient clipping? Explain.
13. Explain the steps to generate song lyrics using RNNs.
14. Define the network parameters, placeholders, forward propagation, and BPTT in TensorFlow to generate song lyrics using RNNs.
15. Discuss different types of RNN architectures.
16. Illustrate updating the cell state (and hidden state) in an LSTM network with a suitable example.
17. Illustrate forward/backward propagation in LSTM.
18. Calculate the gradients of the loss with respect to the gates used in the LSTM cell.
19. Calculate the gradients with respect to all the weights (V, W, U) used in the LSTM cell.
20. Write the steps of data preparation in TensorFlow to predict Bitcoin prices using an LSTM model.
21. Define the network parameters, LSTM cell, forward propagation, and backpropagation in TensorFlow to predict Bitcoin prices using an LSTM model.
22. Illustrate training the LSTM model and making predictions with it to predict Bitcoin prices.
23. What are gated recurrent units (GRUs)? Explain the GRU cell (reset gate, update gate).
24. Explain how the update gate and reset gate help in updating the hidden state.
25. Illustrate forward/backpropagation (gradients with respect to weights, gradients with respect to gates) in a GRU cell.
26. Explain the different layers in a bidirectional RNN.
27. Explain language translation using the seq2seq model.
28. Explain encoder and decoder with LSTM or GRU cells in the seq2seq architecture.
29. What is the use of the attention mechanism? Explain.
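A minimal NumPy sketch for questions 6 and 8-10, using the same weight names as the questions (U: input-to-hidden, W: hidden-to-hidden, V: hidden-to-output); the dimensions and random data are illustrative assumptions:

    import numpy as np

    def rnn_forward(inputs, U, W, V, h0):
        # Vanilla RNN: h_t = tanh(U x_t + W h_{t-1});  y_t = V h_t (logits)
        h = h0
        states, outputs = [], []
        for x in inputs:
            h = np.tanh(U @ x + W @ h)   # mix current input with previous state
            states.append(h)
            outputs.append(V @ h)        # hidden-to-output projection
        return outputs, states

    # Toy dimensions: 3-dim inputs, 4-dim hidden state, 2-dim outputs
    rng = np.random.default_rng(0)
    U = rng.normal(size=(4, 3))          # input-to-hidden weights
    W = rng.normal(size=(4, 4))          # hidden-to-hidden weights
    V = rng.normal(size=(2, 4))          # hidden-to-output weights
    sequence = [rng.normal(size=3) for _ in range(5)]
    ys, hs = rnn_forward(sequence, U, W, V, np.zeros(4))
    print(len(ys), ys[0].shape)          # 5 time steps, each output is 2-dim
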
MODULE-3
1. What are CNNs? Explain the different layers of a CNN with suitable examples.
2. Explain the following (see the convolution sketch at the end of this module):
i. Strides
ii. Padding
iii. Pooling layers
iv. Architecture of CNN
v. Forward propagation in CNN
vi. Backward propagation in CNN
3. Define and implement a convolutional network in TensorFlow.
4. Define the helper functions and libraries required to implement a CNN in TF.
5. How do you compute the loss, train the model, and visualize the extracted features in a CNN using TF?
6. Explain different CNN architectures.
7. Explain GoogLeNet in detail.
8. Discuss the different versions of GoogLeNet (v1, v2 and v3).
9. What are capsule networks? Explain capsule networks in detail.
10. Explain the architecture of a capsule network with a neat diagram.
11. Explain the following:
i. Coupling coefficients
ii. Squashing function
iii. Dynamic routing algorithm
12. Define the squash function, dynamic routing algorithm, and decoder in TensorFlow.
13. Compute the accuracy of the capsule model in TensorFlow.
14. Calculate the margin loss, reconstruction loss, and total loss in the capsule model.
15. Differentiate between capsule networks and CNNs.
16. What is factorized convolution in the Inception network?
17. Define pooling. Explain different types of pooling operations.
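For the question on strides and padding above, a minimal NumPy sketch of the convolution a CNN layer performs (strictly speaking, cross-correlation); the averaging kernel and the 5x5 input are illustrative assumptions:

    import numpy as np

    def conv2d(image, kernel, stride=1, padding=0):
        # Naive single-channel 2D convolution (cross-correlation, as in CNNs)
        if padding:
            image = np.pad(image, padding)          # zero-pad the border
        kh, kw = kernel.shape
        oh = (image.shape[0] - kh) // stride + 1    # output height
        ow = (image.shape[1] - kw) // stride + 1    # output width
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[i, j] = np.sum(patch * kernel)  # elementwise product, then sum
        return out

    img = np.arange(25.0).reshape(5, 5)
    k = np.ones((3, 3)) / 9.0                         # simple averaging filter
    print(conv2d(img, k, stride=1, padding=1).shape)  # (5, 5): "same" padding
    print(conv2d(img, k, stride=2, padding=0).shape)  # (2, 2): stride shrinks output

With input size n, kernel size k, padding p and stride s, the output size is floor((n + 2p - k)/s) + 1, which the two printed shapes confirm.
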
MODULE-4

1. What is the word2vec model? Explain the two types of word2vec models for learning the embedding of a word:

a) CBoW model b) Skip-gram model

2. Explain CBoW with a single context word.

3. Explain CBoW with multiple context words.

4. Explain forward and backward propagation of CBoW with a single context word.

5. Explain the skip-gram model with a suitable example.

6. Explain forward/backward propagation in the skip-gram model.

7. Illustrate various training strategies that can optimize and increase the efficiency of the word2vec model.

8. What are negative sampling and subsampling? Explain.

9. Discuss the Doc2vec model.

10. Explain the following:

i. Paragraph Vector - Distributed Memory (PV-DM) model.

ii. Paragraph Vector - Distributed Bag of Words (PV-DBOW) model.

11. Build a word2vec model using the gensim library (preprocessing and preparing the dataset, building the model, evaluating the embeddings); see the sketch at the end of this module.

12. Explain skip-thoughts algorithm with suitable examples.

13. Explain Quick-thoughts algorithm for sentence embeddings.
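A minimal sketch for question 11, assuming gensim 4.x (the older 3.x API used size and iter instead of vector_size and epochs); the toy corpus is an illustrative assumption standing in for the preprocessed, tokenized dataset:

    from gensim.models import Word2Vec

    # Toy corpus; in practice this is the preprocessed, tokenized dataset
    sentences = [
        ["deep", "learning", "uses", "neural", "networks"],
        ["word2vec", "learns", "word", "embeddings"],
        ["skip", "gram", "predicts", "context", "words"],
    ]

    # sg=1 selects the skip-gram model; sg=0 (the default) selects CBoW
    model = Word2Vec(sentences, vector_size=50, window=2,
                     min_count=1, sg=1, epochs=50)

    # Evaluating the embeddings: nearest neighbours in the vector space
    print(model.wv.most_similar("learning", topn=3))
    print(model.wv["word2vec"].shape)    # (50,)
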

MODULE-5
1. Differentiate between discriminative and generative models.

2. Explain the architecture of GANs.

3. Explain how the discriminator and generator learn.

4. Explain the discriminator's loss function in detail (see the loss sketch at the end of this module).

5. Why is the heuristic loss function used? Explain.

6. Explain the DCGAN architecture.

7. Explain conditional GANs with suitable examples.

8. Generate specific handwritten digits using a CGAN.

9. Explain InfoGAN.

10. Explain the architecture of InfoGAN.

11. Differentiate between GAN, CGAN, InfoGAN, and StackGAN.

12. What is mutual information? Explain.

13. Why do we need an auxiliary distribution in InfoGAN? Explain.

14. What is cycle consistency loss? Explain.

15. Explain the role of generators and discriminators in a CycleGAN.

16. Explain how StackGANs convert text descriptions into pictures.

17. Explain the loss function in a CycleGAN.

18. What are StackGANs? Explain.

19. Explain the architecture of StackGANs in detail.
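A minimal TensorFlow 2 sketch for questions 4 and 5: the standard discriminator loss and the heuristic (non-saturating) generator loss, both expressed as binary cross-entropy on logits; the function names are illustrative assumptions:

    import tensorflow as tf

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    def discriminator_loss(real_logits, fake_logits):
        # The discriminator should output 1 for real samples, 0 for generated ones
        real_loss = bce(tf.ones_like(real_logits), real_logits)
        fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
        return real_loss + fake_loss

    def generator_loss(fake_logits):
        # Heuristic (non-saturating) loss: train G to maximize log D(G(z))
        # instead of minimizing log(1 - D(G(z)))
        return bce(tf.ones_like(fake_logits), fake_logits)

The heuristic loss is used because maximizing log D(G(z)) gives the generator much stronger gradients early in training, when the discriminator easily rejects generated samples and log(1 - D(G(z))) saturates.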
