
How to Fit Artificial Intelligence into Manufacturing
What is holding up AI adoption, and where is it already in use?

Even early concerns about artificial intelligence (AI) have not appeared to slow its
adoption. Some companies are already seeing benefits, and experts say that companies
that do not adopt new technology will be unable to compete over time. Yet despite early
successful case studies, AI adoption seems to be moving slowly.

Why Is AI Moving So Slowly in Manufacturing?

AI is growing, but exact numbers can be difficult to obtain, as the definitions of
technologies such as machine learning, AI, and machine vision are often blurred.
For example, a robotic arm with a camera that inspects parts might be advertised as a
machine learning or AI device. While the device could work well, it might only be
comparing the images it takes to others that were manually added to a library. Some would
argue this is not a machine learning device, as it is making a preprogrammed decision, not
one "learned" from the machine's experience.

Going forward, this article will use general terms when mentioning AI technology. But
when deciding on a design or product, make sure you understand the differences between
terms such as supervised vs. unsupervised learning, and other buzzwords that can be
blurred by sales and marketing efforts.

According to a Global Market Insights report published in February 2019, the
market size for AI in manufacturing is estimated to have surpassed $1 billion in 2018,
and it is anticipated to grow at a CAGR of more than 40% from 2019 to 2025. But other
sources insist that AI is moving more slowly. These sources often compare AI case
studies to the entire size of the manufacturing market, discuss individual companies'
investments, or focus specifically on AI at mass scale. From this perspective, AI growth is
slower, and that is for a few reasons beyond those already mentioned.

AI is still a new technology. Much of its success has come in the form of testbeds, not
full-scale projects. This is because in large companies, one small adjustment could affect
billions of dollars, so managers don't want to test full-scale projects until they've found
the best solution. Additionally, companies of any size need to justify or guarantee a return
on investment (ROI). This leads to smaller projects, a focus on low-hanging fruit, or
projects that can be isolated as a testbed.

While the current investment wave in AI is at an all-time high, high-level adoption
remains low. A 2017 research paper from the McKinsey Global Institute, "Artificial
Intelligence: The Next Digital Frontier?", reported high investment in AI. Early
adopters of AI share common characteristics: digital maturity, larger business models, the
adoption of AI into core activities, the adoption of multiple technologies, a focus on growth
over savings, and C-level support for AI. This diagram highlights areas where money has
been invested in AI R&D. (Courtesy: McKinsey Global Institute)

Smaller or isolated projects might work well as tests, but theoretically, AI should return
greater benefits when operating at larger scales. This generally requires more
connectivity and data to maintain accuracy, which points to the next reason AI might be
moving slowly: scale and connectivity.

Many companies have legacy equipment that does not provide data or a way to send data
to another location. New technologies can retrofit legacy equipment, but design engineers
may then face infrastructure problems. For example, some factories lack easy access to
power for smart sensors, or an IT network to get the data where it can offer greater
benefit.

While AI is growing and, by all accounts, will continue to, maturity, confidence, ROI,
scaling, and connectivity might be slowing mass adoption.

What Can AI Do for Manufacturing and Design?

This section may be the most difficult, as it runs into the blurred lines and buzzwords
previously mentioned. Designers and manufacturers have used CAD tools, machine
vision, and predictive maintenance before. AI is advancing these technologies to new
heights, but where any individual device sits on the AI spectrum can be debated.

AI CAD Tools

Design engineers have specifications they must achieve when developing new parts
and devices. To do this, it is important to understand a plethora of information, from
materials and processing to the applications and needs of the end user. For theoretical
data, CAD programs offer tools such as finite element analysis (FEA), but the design
engineer must add the data manually or select it from a library.

One new tool in CAD technology uses AI to create generative designs. It takes the
specifications and inputs needed for a design and generates the possible materials,
geometries, and even costs. While these new features are user-friendly, the technology is
only as good as the user.

Not only does the user need to know what should go into the specifications and
inputs; the user still needs to review the possibilities to select the best solution. This
type of AI CAD technology amplifies design engineers' abilities and saves time,
because the design engineer doesn't have to manually design multiple iterations.

This multi-material gripper was automatically designed using topology optimization. A
user specifies the desired grip direction and the applied forces. The shape of the part and
the layout of the materials (rigid and elastic) are computed automatically to obtain a
digital representation that can be directly 3D printed.

Currently, generative design will most likely produce a part that isn't easy to manufacture
using traditional processes, though it can work well for 3D printing and other additive
processes. Companies are working on adding variables to the software to account for
traditional, subtractive processes, which should open AI CAD design technology to the masses.

Digital Twins

Moving forward, AI technology is using these CAD and AI tools to build increasingly
accurate models that include both theoretical and real-world data. This combination of
data produces accurate digital twins. Having a digital model lets engineers accurately
predict wear, movement, and interactions with other devices.

AI technology in digital twins gives engineers the ability to see and test parts, entire
machines, production lines, and more, all digitally. With today's ability to rent cloud
computing power, both large and small companies can afford to use AI CAD technology
to find bottlenecks, limitations, mistakes, or better features to accelerate time to market.
Having a mass of data and mapping the interactions of materials, machines, and processes
lets engineers see how everything is connected and interacts. Design engineers will know
how changing design specifications would affect the product, production line, supply
chain, and maintenance.

Predictive Maintenance

A large concern for manufacturers is downtime. While IoT and connectivity are helping
predict and detect problems before they occur, AI technology could keep things running
even more smoothly. For example, an engineer looking at a set of operational data from a
machine might think a vibration change means the cutting tool needs to be replaced or
sharpened soon.

Preventive maintenance agreements can increase system availability. Connectivity gives
specialists the ability to regularly inspect parts, and as AI programs advance, parts
can be monitored around the clock. Software can send notifications to engineers or
specialists to alert them to changes in operation and suggest maintenance to optimize
machine uptime. (Credit: Bosch Rexroth)

It would be difficult for an engineer to know all the information that could be affecting
the vibration of a machine. However, an AI system could instantly take the data, the
machine's history, and other parameters into account to suggest a more informed decision.
In this example, perhaps a material or speed change caused the vibration to increase due to
resonance at the natural frequency of the material. Accuracy is improved by connecting large
datasets, processing data quickly to find patterns (or the lack of them), and using AI to
learn from past and present data to deliver more accurate models that help engineers make
more informed decisions.
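As a concrete illustration of the idea, here is a minimal sketch of unsupervised anomaly detection on vibration data, assuming scikit-learn and synthetic sensor readings (the features and thresholds are hypothetical, not from the article):

```python
# Minimal sketch: flag unusual vibration readings with an Isolation Forest.
# The data here is synthetic; a real system would stream features from sensors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal operation: vibration amplitude (mm/s) and spindle speed (RPM).
normal = np.column_stack([rng.normal(2.0, 0.3, 1000), rng.normal(3000, 50, 1000)])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New readings: one typical, one with elevated vibration (possible tool wear).
new = np.array([[2.1, 3010], [5.5, 2990]])
print(model.predict(new))  # 1 = normal, -1 = anomaly -> suggest inspection
```

A production system would of course fold in many more signals (machine history, material, speed) exactly as the paragraph above describes.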

AI Changes Factories and Education

Eventually, connectivity and AI will grow to the point where a program could update or
improve a design autonomously based on real-world data. Mass adoption of AI technology
can lead to mass customization and greatly increased flexibility. This will not only keep
companies competitive but might have a ripple effect from industry to training and
education.

"We are currently at stage one [of AI adoption], with information on Google taking your
data and making suggestions. Stage two will be more disruptive, replacing some
traditional training and education," said Markus J. Buehler, a materials scientist and
engineer at the Massachusetts Institute of Technology. "As we move to an AI neural-
network approach…future students will only need to know how to work with the AI
programs and the computer will do the physics."

Technology is moving faster, and it is harder to compete if a company falls
behind. Industry doesn't have time for four-year degrees. Education could shift toward
streamlined, employer-focused classes that teach students how to use AI programs. Some
experts say this ripple effect is not only inevitable but necessary for a company's
survival. However, legacy equipment, confidence, a focus on ROI, and other factors are
slowing AI's adoption. According to Forbes Insights research, more than half of
respondents (56%) in the automotive and manufacturing sectors plan to increase AI
spending by less than 10%.

Quality inspection in manufacturing using deep learning based computer vision

Improving yield by removing bad-quality material with image recognition

By Partha Deka

Automation in Industrial Manufacturing:

Today's increased level of automation in manufacturing also demands automation of
material quality inspection with little human intervention. The trend is to reach human-
level accuracy or better in quality inspection with automation. To stay competitive,
modern industrial firms strive to achieve both quantity and quality with automation,
without compromising one for the other. This post takes the reader through a deep
learning use case and shows the need to optimize the full stack (algorithms, inference
framework, and hardware accelerators) to get optimal performance.

Deep Learning for Quality Inspection:

To meet industry standards, quality inspectors in manufacturing firms usually inspect
product quality after the product is manufactured. This is a time-consuming manual effort,
and a rejected product results in wasted upstream factory capacity, consumables, labor,
and cost. In line with the modern trend of artificial intelligence, industrial firms are
looking to use deep learning based computer vision during the production cycle itself to
automate material quality inspection. The goal is to minimize human intervention while
reaching human-level accuracy or better, as well as to optimize factory capacity, labor
cost, etc. The uses of deep learning are varied: from object detection in self-driving cars
to disease detection with medical imaging, deep learning has achieved human-level
accuracy and better.

What is deep learning?

Deep learning is the field of learning deep, structured or unstructured representations of
data. It is the growing trend in AI for abstracting better results when data is large
and complex. A deep learning architecture consists of deep layers of neural networks:
an input layer, hidden layers, and an output layer. The hidden layers are used to understand
the complex structures of the data. A neural network doesn't need to be programmed to
perform a complex task; gigabytes to terabytes of data are fed to the architecture, which
learns on its own. Sample deep neural networks are shown below:

Convolutional Neural Network:

A convolutional neural network is a class of deep neural network commonly applied to
image analysis. Convolution layers apply a convolution operation to the input and pass the
result to the next layer. For example, an image of 1000 by 1000 pixels has 1 million
input features. If the first hidden layer were fully connected with 1,000 neurons, it would
require 1 billion weights. With that many parameters, it is difficult to prevent a neural
network from overfitting on limited data, and the computational and memory requirements
to train it are prohibitive. The convolution operation solves this problem: it reduces the
number of free parameters, allowing the network to be deeper with fewer parameters.
There are two main advantages of using convolution layers over fully connected
layers: parameter sharing and sparsity of connections.
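To make the parameter arithmetic concrete, here is a small sketch (assuming PyTorch, which the case study later uses) comparing the parameter counts of a fully connected layer and a convolution layer on a 1000x1000 single-channel image:

```python
# Sketch: parameter sharing in convolutions vs. a fully connected layer.
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Fully connected: 1,000,000 inputs -> 1,000 neurons.
fc = nn.Linear(1000 * 1000, 1000)
print(n_params(fc))    # 1,000,001,000 weights and biases (~1 billion)

# Convolution: 1,000 filters of size 3x3 on a 1-channel image.
conv = nn.Conv2d(in_channels=1, out_channels=1000, kernel_size=3)
print(n_params(conv))  # 10,000 (9 weights + 1 bias per filter)
```

Because each filter is reused across every spatial position, the convolution layer sees the whole image with five orders of magnitude fewer parameters.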

Convolutional neural networks look for patterns in an image. The image is convolved with
a smaller matrix, and this convolution looks for patterns in the image. The first few
layers can identify lines, corners, edges, etc., and these patterns are passed down into the
deeper layers to recognize more complex features. This property makes CNNs really good
at identifying objects in images.

A convolutional neural network (aka ConvNet) is nothing but a sequence of layers. Three
main types of layers are used to build ConvNet architectures: the convolutional layer,
the pooling layer, and the fully connected layer. These layers are stacked to form a
full ConvNet architecture:
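As an illustration, the following is a minimal sketch of such a stack in PyTorch (the layer sizes are illustrative assumptions in the spirit of LeNet-5, not the architecture used in the case study):

```python
# Sketch: a minimal ConvNet built from the three layer types above.
import torch
import torch.nn as nn

convnet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=3), nn.ReLU(),   # convolutional layer
    nn.MaxPool2d(2),                             # pooling layer
    nn.Conv2d(6, 16, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 2),                    # fully connected layer (2 classes)
)

x = torch.randn(1, 1, 28, 28)  # one 28x28 grayscale image
print(convnet(x).shape)        # torch.Size([1, 2])
```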

Image Source: http://cs231n.github.io/convolutional-networks/

The image below clarifies the concept of a convolution layer:

The image below clarifies the concept of a pooling layer (average or max pooling):

Following is one of the original CNN architectures:

Visualizing a CNN:

Following is an image of a crack on a plain surface:

Two layers each of Conv (one 3x3 filter), ReLU, and Max Pooling (2x2), similar to the
LeNet-5 architecture, are applied to the crack image above. It can be seen below that the
CNN architecture focuses on the blocks of the crack area and its spread throughout the
surface:

Case Study:

To maintain the confidentiality of our work, we present an abstract use case below:

Problem Statement:

Detecting bad-quality material in hardware manufacturing is an error-prone and time-
consuming manual process, and it lets defects slip through (a bad part classified as a
good one). If a faulty component or part is detected at the end of the production line,
there is a loss of upstream labor, consumables, factory capacity, and revenue. On the
other hand, if an undetected bad part gets into the final product, there will be customer
impact as well as market reaction. This could potentially lead to irreparable damage to
the organization's reputation.

Summary:

We automated defect detection on hardware products using deep learning. During our
hardware manufacturing processes, there can be damage such as scratches or cracks that
makes our products unusable for the next processes in the production line. Our deep
learning application detected defects such as cracks and scratches in milliseconds with
human-level accuracy or better, and interpreted the defect area in the image with heat
maps.

Details of our Deep Learning Architecture:

To describe things better, we use the example image below of a circuit board with an
integrated chip on it:

Our first approach:

We adopted a combination of a pure computer vision approach (non-machine-learning
methods) to extract the region of interest (ROI) from the original image and a pure deep
learning approach to detect defects in the ROI.

Why ROI extraction before DL?

While capturing the images, the camera assembly, lighting, etc. covered the whole area
of the circuit (example images below). We inspect only the chip area for defects, not
other areas of the circuit. We found through a few experiments that DL accuracy
increased substantially when the neural networks focused only on the area of interest
rather than the whole image.

First, extract the region of interest (ROI) with computer vision (non-machine-learning methods).
Here, we run the image through multiple processing steps, such as grayscaling and transformations
such as eroding, dilating, and closing, and eventually carve the ROI out of the image based on the
use case type, product type, etc. The basic idea of erosion is just like soil erosion: it erodes away
the boundaries of the foreground object. Dilation is the opposite of erosion: it increases the size of
the foreground object. Normally, for tasks like noise removal, erosion is followed by dilation.
Opening is just another name for erosion followed by dilation; it is useful for removing noise.
Closing is the reverse of opening: dilation followed by erosion. It is useful for closing small holes
or small black points inside the foreground objects. The gradient transformation is the difference
between the dilation and the erosion of an image. Overall, these steps help open up barely visible
cracks and scratches in the original image. Refer to the figure below:
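A minimal sketch of these morphological steps with OpenCV is shown below (the file name and kernel size are illustrative assumptions; the actual pipeline and parameters are product-specific):

```python
# Sketch: the morphological transformations described above, using OpenCV.
import cv2
import numpy as np

img = cv2.imread("circuit.jpg")                     # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # grayscaling

kernel = np.ones((5, 5), np.uint8)                  # structuring element
eroded = cv2.erode(gray, kernel, iterations=1)      # shrink foreground boundaries
dilated = cv2.dilate(gray, kernel, iterations=1)    # grow foreground boundaries

opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)       # erosion then dilation
closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)      # dilation then erosion
gradient = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel) # dilation minus erosion

# The cleaned-up image can then be thresholded and cropped to the chip ROI,
# e.g. with cv2.findContours and cv2.boundingRect.
```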

Secondly, detect defects using deep neural network (CNN)-based models built on proven
CNN topologies such as Inception Net (aka GoogLeNet), ResNet, and DenseNet:

Some other areas where experimentation was necessary to find the optimal architecture:

Data augmentation: We have a few thousand unique images labelled as defective and a few
thousand labelled as good. Augmentation is critical to avoid overfitting the training set. We did X
random crops and Y rotations (1 original image results in X*Y augmented images), as sketched
below. After augmentation we have X*Y thousand defective images and X*Y thousand good
images. Refer to one of the original CNN papers in this context:
https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
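A minimal sketch of such crop-and-rotate augmentation, assuming torchvision (the crop size and rotation angle are hypothetical placeholders for the elided X and Y):

```python
# Sketch: random-crop and rotation augmentation with torchvision.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),   # random rotation
    transforms.RandomResizedCrop(224),       # random crop, resized to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Applying `augment` to each PIL image on every epoch yields a different
# crop/rotation each time, multiplying the effective dataset size.
```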
Initialization strategy for CNN topologies:

We replaced the final fully connected layer with our own FC layer and a sigmoid layer
(binary classification), as shown in the figure below:

Rather than randomly initializing the weights in each layer, we used ImageNet
initialization for each CNN topology; our DL accuracy increased substantially with
ImageNet initialization compared to random initialization.
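A minimal sketch of this head replacement in PyTorch, assuming a torchvision ResNet-50 (the case study does not publish its exact code):

```python
# Sketch: ImageNet initialization with a custom binary-classification head.
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)   # ImageNet-initialized weights

# Replace the final 1000-class FC layer with our own FC + sigmoid head.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 1),    # in_features is 2048 for ResNet-50
    nn.Sigmoid(),                          # outputs defect probability in [0, 1]
)
```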

Loss Function and Optimizer:

Cross Entropy loss: Cross-entropy loss, or log loss, measures the performance of a
classification model whose output is a probability value between 0 and 1. Cross-entropy
loss increases as the predicted probability diverges from the actual label. So predicting a
probability of .01 when the actual observation label is 1 would be bad and result in a high
loss value. A perfect model would have a log loss of 0.
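In symbols (a standard identity, not from the original post): for a true label y ∈ {0, 1} and a predicted probability p,

$$\mathcal{L}(y, p) = -\big[\, y \log p + (1 - y)\log(1 - p) \,\big]$$

which is 0 when p exactly matches the label and grows without bound as p diverges from it.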

SGD and Nesterov momentum: SGD, or stochastic gradient descent, is an iterative
method for optimizing a differentiable objective function (the loss function); it is
stochastic because it takes random samples of the data to perform each gradient descent
update. Momentum is a moving average of the gradients; it is used to update the weights
of the network and helps accelerate the gradients in the right direction. Nesterov
momentum is a variant of momentum that has recently been growing in popularity. A
sketch of this setup appears below.
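A minimal sketch of this loss and optimizer configuration in PyTorch, using the hyperparameters listed in the benchmarks section below (lr = 0.001, momentum = 0.9); the model is assumed to be the sigmoid-headed ResNet-50 from the earlier sketch:

```python
# Sketch: binary cross-entropy loss with SGD + Nesterov momentum.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)  # as in the initialization sketch above

criterion = nn.BCELoss()  # expects sigmoid probabilities and 0/1 float labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9,
                            nesterov=True)

# One training step on a batch (images, labels):
# optimizer.zero_grad()
# probs = model(images).squeeze(1)     # assumes the sigmoid head defined earlier
# loss = criterion(probs, labels.float())
# loss.backward()
# optimizer.step()
```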

Our Second Approach:

Critique of the first approach: extracting regions of interest with pure computer vision
requires rewriting code whenever there are changes in product types, circuit board or
chip types (in our abstract example), camera setups and directions, etc. This is not
scalable.

Solution: We built an end-to-end, two-step DL architecture. In the first step, instead of a
CV approach, we used a DL approach to predict the ROI itself. We manually created a
labelled dataset with a bounding-box tool and trained a DL architecture to predict the
ROI. One downside of this technique is that the labelled dataset has to be explicit and
extensive enough to include all product types, etc. (circuit board and chip types, in our
abstract example) for the deep neural network to generalize well to unseen images.
Refer to the figures below:

CNN ROI generator Loss function:

We initially used a squared-distance-based loss function, as below:
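The loss appears as an image in the original post; a plausible reconstruction, assuming the ROI is a box parameterized by its corner coordinates (x1, y1, x2, y2), is

$$\mathcal{L} = (\hat{x}_1 - x_1)^2 + (\hat{y}_1 - y_1)^2 + (\hat{x}_2 - x_2)^2 + (\hat{y}_2 - y_2)^2$$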

After training a ResNet-50 model for 20 epochs, we achieved the following validation
metrics for average missed area and IoU:

Ave. missed area = 8.52 * 10^-3

Ave. IOU (intersection over union) = 0.7817

We wanted to improve at least the IoU.

· We came up with an area-based loss; please refer to the figure below to get an idea of
how we use basic math to calculate the area of intersection between the ground truth and
the predicted label. In the loss function, we want to penalize both the missed and the
excess area. Ideally, we want to penalize the missed area more than the excess
area:
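The loss itself appears as a figure in the original; a plausible reconstruction, assuming the weights α and β quoted in the results below, is

$$\mathcal{L}_{\text{area}} = \alpha \, A_{\text{missed}} + \beta \, A_{\text{excess}}, \qquad \alpha > \beta$$

where A_missed is the ground-truth area the prediction fails to cover and A_excess is the predicted area outside the ground truth.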

The loss function above is differentiable, so we can run gradient descent optimization on it.

CNN ROI generator augmentation: We simply added 5% margins (both left and right) at
training and test time on our predicted ROIs.

CNN ROI generator results: We used a ResNet-50 (ImageNet initialization) topology and an
SGD + Nesterov momentum optimizer, with α = 2 and β = 1 in the area-based loss described
above. Training the ResNet-50 model for multiple epochs, we want to minimize our avg. missed
area and maximize our avg. IoU (the best IoU is 1). After training for 20 epochs, with the
area-based loss and augmentation described above, we improved our validation metrics for
missed area and IoU:

Ave. missed area = 3.65 * 10^-3

Ave. IOU (intersection over union) = 0.8577

Experiments & Benchmarks:

Total # of images: A few thousand images

Data split: 80-to-10-to-10 split, using unique images only

Framework used: PyTorch & Tensorflow / Keras

Weights Initialization: Pre-trained on ImageNet

Optimizer: SGD with learning rate = 0.001, using Nesterov with momentum = 0.9

Loss: Cross entropy

Batch size: 12

Total # of epochs: 24

Image shape: 224x224x3 (except for Inception V3, which requires 299x299x3)

Criterion: Lowest validation loss

Our benchmarks with the two approaches are quite comparable; the results with the
CV+DL (first) approach are slightly better than those with the DL+DL (second) approach.
We believe our DL+DL approach could do better if we created an extensive and explicit
labelled bounding-box dataset.

Following successful completion of training, an inference solution has to be found to
complete the whole end-to-end pipeline. We used Intel OpenVINO software to optimize
inference on different types of hardware besides the CPU, such as FPGAs and the Intel
Movidius stick.

Inference:

Intel OpenVINO: Based on convolutional neural networks (CNNs), the Intel OpenVINO
toolkit extends workloads across Intel hardware and maximizes performance:

- Enables CNN-based deep learning inference on the edge

- Supports heterogeneous execution across computer vision accelerators (CPU, GPU,
Intel® Movidius™ Neural Compute Stick, and FPGA) using a common API

- Speeds time to market via a library of functions and pre-optimized kernels

- Includes optimized calls for OpenCV and OpenVX*

Refer to the following figures on the OpenVINO architecture:

Two Step Deployment:

- Step one is to convert the pre-trained model into IRs using the Model Optimizer:

§ Produce a valid Intermediate Representation: If this main conversion artifact is not
valid, the Inference Engine cannot run. The primary responsibility of the Model Optimizer
is to produce the two files that form the Intermediate Representation.

§ Produce an optimized Intermediate Representation: Pre-trained models contain
layers that are important for training, such as the dropout layer. These layers are useless
during inference and might increase the inference time. In many cases, these layers can
be automatically removed from the resulting Intermediate Representation. However, if a
group of layers can be represented as one mathematical operation, and thus as a single
layer, the Model Optimizer recognizes such patterns and replaces those layers with one.
The result is an Intermediate Representation that has fewer layers than the original model,
which decreases the inference time.

The IR is a pair of files that describe the whole model:

.xml: Describes the network topology

.bin: Contains the weights and biases binary data

- Step two is to use the Inference Engine to read, load, and infer the IR files, using a
common API across CPU, GPU, or VPU hardware.
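As an illustration of the two steps, here is a sketch assuming the OpenVINO toolkit of that era (the Model Optimizer `mo.py` script and the pre-2022 `IECore` Python API); the model file names are hypothetical:

```python
# Step one (shell): convert a frozen TensorFlow model to IR (.xml + .bin).
#   python mo.py --input_model resnet50_frozen.pb --input_shape [1,224,224,3]

# Step two: load and run the IR with the Inference Engine (pre-2022 API).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet50.xml", weights="resnet50.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # or "GPU", "MYRIAD"

input_blob = next(iter(net.input_info))  # `net.inputs` in older releases
# `image` is a preprocessed NCHW numpy array matching the input shape:
# result = exec_net.infer(inputs={input_blob: image})
```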

OpenVINO documentation: https://software.intel.com/en-us/inference-trained-models-with-intel-dl-deployment-toolkit-beta-2017r3

Inference Benchmarks on sample Image:

It is clear that optimizing the software stack is critical to reducing inference time.
There is a 30x to 100x improvement in latency using OpenVINO software
optimization. In addition, other Intel hardware accelerators, such as the Intel Movidius
stick and an FPGA, were run through the same inference testing. The intent was to see
how much improvement accelerators can offer over a traditional CPU. Some inference
benchmarks on a sample image are below:

We used an Intel Movidius Myriad 1 and converted our ResNet-50 TensorFlow/Keras
model to NCS graphs using the NCS SDK; a Raspberry Pi hosts the images, and inference
is performed on the vision processing unit in the Movidius stick. The Movidius stick has
lower compute power, so this accelerator didn't provide a large performance boost. In
addition, the software framework used is an NCS graph, which may not include all the
performance optimizations (sparsity, quantization, etc.) of a framework like OpenVINO.

* We configured and programmed the FPGA board with OpenVINO on a Linux machine,
using the provided bitstream for our ResNet-50 model. The FPGA acts like a true
accelerator and provides a further ~10x improvement over the CPU with the same
software framework (OpenVINO).

The above performance numbers clearly indicate the need for a holistic view to improve
deep learning performance. Both optimized software stacks and hardware accelerators
are needed for optimal performance.

Visualizing our CNNs with Heat Maps:

Deep neural networks are often criticized for low interpretability, and most deep learning
solutions stop once the labels have been classified. We wanted to interpret our results:
why the CNN architecture labelled an image as good or bad (binary classification in our
case study), and which area of the image the CNN focused on most.

Based on this MIT research (https://arxiv.org/pdf/1512.04150.pdf), a class activation
map in combination with a global average pooling layer has been proposed to localize
class-specific image regions.

Global average pooling usually acts as a regularizer, preventing overfitting during
training. This research establishes that the advantages of the global average pooling
layer extend beyond simply acting as a regularizer: with a little tweaking, the network
can retain its remarkable localization ability up to the final layer. This tweaking allows
the discriminative image regions to be identified easily in a single forward pass for a
wide variety of tasks, even ones the network was not originally trained for.

Following is a heat map interpretation using this technique on the "crack on a plain
surface" image, using a ResNet-50 architecture trained on ImageNet. As we can see, the
heat map focuses on the crack area below, although the architecture was not trained on
such images. A sketch of the computation follows.
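Here is a minimal sketch of the class-activation-map computation for an ImageNet-trained torchvision ResNet-50 (the image file name is hypothetical; the blog's own code is not published):

```python
# Sketch: class activation map (CAM, Zhou et al. 2016) for an ImageNet-trained
# ResNet-50. ResNet-50 ends in global average pooling + FC, so CAM applies directly.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()

features = {}
model.layer4.register_forward_hook(
    lambda mod, inp, out: features.update(conv=out.detach()))  # last conv maps

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("crack.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    cls = model(img).argmax(dim=1).item()           # predicted class

fmap = features["conv"][0]                          # (2048, 7, 7) feature maps
weights = model.fc.weight[cls]                      # (2048,) FC weights for that class
cam = torch.einsum("c,chw->hw", weights, fmap)      # weighted sum over channels
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# Upsample `cam` to 224x224 and overlay it on the image to get the heat map.
```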

Summary & Conclusion:

With deep learning based computer vision, we achieved human-level accuracy and better
with both of our approaches: CV+DL and DL+DL (discussed earlier in this blog). Our
solution is unique in that we used deep learning not only for classification but also for
interpreting the defect area with heat maps on the image itself.

The human factor cannot be completely removed, but we can substantially reduce human
intervention. An optimal model is always a fine balance between FPR (false positive rate)
and FNR (false negative rate), or precision versus recall. For our use case, we successfully
automated defect detection with a model optimized for low FNR (high recall), and we
substantially reduced the human review rate. With our case study we proved that we can
automate material inspection with deep learning and reduce the human review rate.
