
UNIVERSITY COLLEGE OF ENGINEERING JNTUK NARASARAOPET

B.Tech (CSE) – IV-I SEMESTER


QUIZ-I EXAMINATIONS

Roll No.: SUBJECT: DEEP LEARNING TECHNIQUES


DATE: 27-09-2023 TIME: 20 MINUTES

1. What is the perceptron algorithm used for?


(a) Clustering data points (b) Finding the shortest path in a graph
(c) Classifying data (d) Solving optimization problems
2. What is the most common activation function used in perceptrons?
(a) Sigmoid (b) ReLU (c) Tanh (d) Step
3. Which of the following Boolean functions cannot be implemented by a perceptron?
(a) AND (b) OR (c) XOR (d) NOT
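
Study note for Questions 1-3 (an illustrative NumPy sketch; the helper name train_perceptron is hypothetical): the perceptron learning rule converges on the linearly separable AND function but can never represent XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold the bias into the inputs
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w >= 0 else 0      # step activation
            w += lr * (yi - pred) * xi          # perceptron update rule
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", np.array([0, 0, 0, 1])),
                ("XOR", np.array([0, 1, 1, 0]))]:
    w = train_perceptron(X, y)
    preds = (np.hstack([X, np.ones((4, 1))]) @ w >= 0).astype(int)
    print(name, "learned correctly:", np.array_equal(preds, y))
# Prints: AND True, XOR False -- a single perceptron cannot represent XOR.
```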

4. Which of the following best represents the meaning of the term "Artificial Intelligence"?

(a) The ability of a machine to perform tasks that normally require human intelligence
(b) The ability of a machine to perform simple, repetitive tasks
(c) The ability of a machine to follow a set of pre-defined rules
(d) The ability of a machine to communicate with other machines
5. Which of the following statements is true about error surfaces in deep learning?
(a) They are always convex functions. (b) They can have multiple local minima.
(c) They are never continuous. (d) They are always linear functions.
6. Which of the following theorems states that a neural network with a single hidden layer
containing a finite number of neurons can approximate any continuous function?
(a) Bayes' theorem (b) Central limit theorem (c) Fourier's theorem (d) Universal approximation theorem
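
Study note for Question 6 (an illustrative NumPy sketch of the idea; the width n_hidden and the target function are arbitrary choices here): a single hidden layer of tanh units with randomly drawn weights, plus a least-squares fit of only the output layer, already approximates a continuous function closely:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400).reshape(-1, 1)
target = np.sin(2 * x) + 0.5 * x        # a continuous function to approximate

n_hidden = 200                          # a finite number of hidden neurons
W = rng.normal(scale=2.0, size=(1, n_hidden))   # random hidden weights
b = rng.uniform(-3, 3, size=n_hidden)           # random hidden biases
H = np.tanh(x @ W + b)                  # single hidden layer of tanh units

w_out, *_ = np.linalg.lstsq(H, target, rcond=None)  # fit output weights only
print("max abs error:", float(np.max(np.abs(H @ w_out - target))))
```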
7. What is the derivative of the ReLU activation function with respect to its input at 0?

(a) 0 (b) 1 (c) −1 (d) Not differentiable
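
Study note for Question 7 (a minimal NumPy check): the one-sided difference quotients of ReLU at 0 disagree, which is why it is not differentiable there; frameworks conventionally substitute a subgradient (commonly 0) during backpropagation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

h = 1e-6
left = (relu(0.0) - relu(-h)) / h    # left derivative  -> 0.0
right = (relu(h) - relu(0.0)) / h    # right derivative -> 1.0
print("left derivative:", left, "right derivative:", right)
```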

8. Which of the following activation functions produces outputs that are strictly greater than 0 for every input?
(a) Sigmoid (b) ReLU (c) Tanh (d) Linear
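
Study note for Question 8 (NumPy sketch): evaluating the four activations on sample inputs shows that only the sigmoid stays strictly above 0 everywhere:

```python
import numpy as np

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print("sigmoid:", 1 / (1 + np.exp(-x)))   # all values in (0, 1)
print("relu:   ", np.maximum(0, x))       # exactly 0 for non-positive inputs
print("tanh:   ", np.tanh(x))             # values in (-1, 1), can be negative
print("linear: ", x)                      # unbounded, can be negative
```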
9. Which of the following is well suited for perceptual tasks?
(a) Feed-forward neural networks (b) Recurrent neural networks
(c) Convolutional neural networks (d) Reinforcement learning
10. Which neural network has only one hidden layer between the input and output?
(a) Shallow neural network (b) Deep neural network
(c) Feed-forward neural networks (d) Recurrent neural networks
11. Which of the following methods DOES NOT prevent a model from overfitting to the training set?
(a) Dropout (b) Pooling (c) Early stopping (d) Data augmentation
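
Study note for Question 11 (a self-contained sketch with a simulated validation-loss curve; the numbers are made up): early stopping watches validation loss and halts once it stops improving for `patience` consecutive epochs:

```python
import numpy as np

rng = np.random.default_rng(1)
epochs = np.arange(200)
# Simulated validation loss: falls until ~epoch 60, then rises (overfitting).
val_curve = (epochs - 60) ** 2 / 4000 + 0.5 + rng.normal(0, 0.01, 200)

best_val, best_epoch, patience, wait = float("inf"), 0, 5, 0
for epoch, val in enumerate(val_curve):
    if val < best_val:
        best_val, best_epoch, wait = val, epoch, 0  # improved: reset patience
    else:
        wait += 1
        if wait >= patience:  # no improvement for `patience` epochs in a row
            break             # stop before validation loss climbs further
print("stopped at epoch", epoch, "| best epoch:", best_epoch)
```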
12. Which of the following would have a constant input in each epoch of training a Deep Learning
model?
(a) Weight between input and hidden layer (b) Weight between hidden and output layer
(c) Activation function of output layer (d) Biases of all hidden layer neurons
13. Which of the following techniques performs an operation similar to dropout in a neural network?
(a) Boosting (b) Bagging (c) Stacking (d) None of the above
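
Study note for Question 13 (illustrative NumPy sketch): dropout behaves like bagging an exponentially large ensemble of thinned sub-networks that share weights; each training pass samples a random mask ("inverted dropout" shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: drop each unit with probability p during training."""
    if not training:
        return activations                      # inference: full network
    mask = rng.random(activations.shape) >= p   # sample a thinned sub-network
    return activations * mask / (1 - p)         # rescale to keep the mean

h = np.ones(8)
print(dropout(h))                   # one random sub-network of the layer
print(dropout(h, training=False))   # unchanged at inference time
```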
14. Which of the following statement(s) correctly represents a real neuron?
(a) A neuron has multiple inputs but a single output only
(b) A neuron has a single input and a single output only
(c) All of the above (d) A neuron has a single input but multiple outputs
15. Which of the following statements is true about the bias-variance trade-off in deep learning?
(a) Increasing the learning rate reduces bias (b) Increasing the learning rate reduces variance
(c) Decreasing the learning rate reduces bias (d) None of these
16. Which of the following statements is true about the bias-variance trade-off in deep learning?
(a) Increasing the size of the training dataset reduces bias
(b) Increasing the size of the training dataset reduces variance
(c) Decreasing the size of the training dataset reduces bias
(d) Decreasing the size of the training dataset reduces variance
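
Study note for Question 16 (a simulation under assumed settings: degree-5 polynomial fits, noise level 0.3): variance here means how much the fitted model changes across different training samples, and it shrinks as the training set grows while bias (model mismatch) stays the same:

```python
import numpy as np

rng = np.random.default_rng(0)
x_test = np.linspace(0, 1, 50)

def fit_variance(n_train, trials=200):
    preds = []
    for _ in range(trials):
        x = rng.uniform(0, 1, n_train)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n_train)
        coef = np.polyfit(x, y, deg=5)          # same model, fresh sample
        preds.append(np.polyval(coef, x_test))
    return np.mean(np.var(preds, axis=0))       # spread across refits

for n in (20, 80, 320):
    print(f"n_train={n:4d}  prediction variance={fit_variance(n):.4f}")
# The printed variance shrinks as n_train grows.
```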
17. What is the effect of high bias on a model's performance?
(a) The model will overfit the training data. (b) The model will underfit the training data.
(c) The model will be unable to learn anything from the training data.
(d) The model's performance will be unaffected by bias.
18. How can overfitting be prevented in deep learning?
(a) By increasing the complexity of the model
(b) By decreasing the size of the training data
(c) By adding more layers to the model
(d) By using regularization techniques such as dropout
19. Which of the following regularization techniques is likely to produce a sparse weight vector?
(a) L1 regularization (b) L2 regularization (c) Dropout (d) Data augmentation
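
Study note for Question 19 (NumPy sketch of the proximal/weight-decay update steps, with an arbitrary regularization strength lam): the L1 step soft-thresholds weights to exactly 0, producing a sparse weight vector, while the L2 step only shrinks them toward 0:

```python
import numpy as np

w = np.array([0.8, -0.05, 0.02, -1.3, 0.09])
lam = 0.1

l1_step = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft-thresholding
l2_step = w / (1.0 + lam)                                # uniform shrinkage

print("L1:", l1_step, "zeros:", np.sum(l1_step == 0))    # small weights -> 0
print("L2:", l2_step, "zeros:", np.sum(l2_step == 0))    # none exactly 0
```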
20. We trained different models on the data and then applied the bagging technique. We observe
that the test error reduces drastically after using bagging. Choose the correct option.
(a) All models had the same hyperparameters and were trained on the same features
(b) All the models were correlated.
(c) All the models were uncorrelated (independent).
(d) All of these.
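
Study note for Question 20 (NumPy simulation with an assumed per-model variance s^2 = 1): averaging T estimators with pairwise correlation rho has variance rho*s^2 + (1-rho)*s^2/T, so bagging helps most when the models are (nearly) uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
T, trials, s = 10, 100_000, 1.0

for rho in (0.0, 0.9):
    # Build T correlated errors: a shared component plus independent noise.
    shared = rng.normal(0, 1, (trials, 1))
    indep = rng.normal(0, 1, (trials, T))
    errs = s * (np.sqrt(rho) * shared + np.sqrt(1 - rho) * indep)
    print(f"rho={rho}: variance of bagged average = {errs.mean(axis=1).var():.3f}")
# rho=0 gives ~s^2/T = 0.1; rho=0.9 stays near 0.9*s^2 despite averaging.
```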

Write your answers in the table below:

 1. ___   2. ___   3. ___   4. ___   5. ___   6. ___   7. ___   8. ___   9. ___  10. ___
11. ___  12. ___  13. ___  14. ___  15. ___  16. ___  17. ___  18. ___  19. ___  20. ___
