
AR, VR AND AI FOR

AGRICULTURE: A MODERN WAY


TO ADDRESS TRADITIONAL
PROBLEMS

G.P. Sandeep
PAMB 0028
II Ph.D
UNIVERSITY OF AGRICULTURAL SCIENCES
College of Agriculture, Gandhi Krishi Vigyan Kendra,
Bengaluru -560065, Karnataka

DEPARTMENT OF AGRICULTURAL EXTENSION

SEMINAR REPORT
ON
AR, VR AND AI FOR AGRICULTURE: A MODERN WAY TO ADDRESS
TRADITIONAL PROBLEMS

Submitted to:
Dr. Y. N. Shivalingaiah (Professor and Head)
Dr. Ganesamoorthi, S. (Assoc. Professor & Head AKMU)
Department of Agricultural Extension
UAS, GKVK, Bengaluru– 560 065

Submitted by:
Guntukogula Pattabhi Sandeep
PAMB0028, II Ph. D (Agri.)
Department of Agricultural Extension
UAS, GKVK, Bengaluru– 560 065
INDEX

SL. NO.    PARTICULARS

1.  Introduction
2.  Augmented Reality
    2.1. Working model of AR
    2.2. Marker-based AR
    2.3. AR without marker
    2.4. Characteristics of Augmented Reality
    2.5. Implications of Augmented Reality in agriculture
    2.6. Advantages of AR technology in agriculture, agricultural education and training
    2.7. Constraints in adopting AR
3.  Virtual Reality
    3.1. VR technology: Basic components required
    3.2. Mode of VR used in Agriculture
    3.3. Major roles VR can be used in Agriculture
    3.4. Advantages of Virtual Reality
    3.5. Disadvantages / constraints in adopting VR
4.  Artificial Intelligence
    4.1. History of AI
    4.2. Types of AI
    4.3. Expert system
    4.4. Drones
    4.5. Robots
    4.6. Internet of Things
    4.7. Other general applications of AI in Agriculture
5.  Literature available / Research studies / Case studies
6.  Conclusion
7.  Bibliography / Literature cited / Research studies
8.  Discussion
9.  Synopsis
10. Enclosed PowerPoint presentation slides


1. Introduction

Agricultural practices and advancements differ globally, since crops themselves differ and location plays a role in their development. But through the exchange of knowledge among agriculturally involved individuals from all over the world, techniques can be improved everywhere. Information Technology (IT) has changed how information is shared, and the ability to use this information for the advancement of the agricultural sector has a positive impact that benefits everyone; IT has become a bridge for people from all over the world. Agriculture in India is the core sector for food security, nutritional security, sustainable development and poverty alleviation. It contributes approximately 16 per cent of GDP. Milestones in agricultural development in India include the Green Revolution, Evergreen Revolution, Blue Revolution, White Revolution, Yellow Revolution, biotechnology revolution and, most recently, the information and communication technology revolution. Information and communication technologies (ICT) play a crucial role in disseminating information to farmers, enabling them to decide on cropping patterns, use of high-yielding seeds, fertilizer application, pest management, marketing, etc. Traditionally, Indian farmers have followed indigenous production methods and relied upon friends, relatives, fellow farmers and input dealers for information regarding agriculture. With the advancement of agricultural science and technology, multiple options to access modern technologies have become available. This is evident from the replacement of indigenous varieties of seeds by high-yielding varieties, and of traditional equipment and practices by power tillers, tractors and other machines.

ICT is used as an overarching term incorporating all modes of transmission, such as electronic devices, networks, mobiles, services and applications, which help to disseminate information with the help of technology. In recent years, ICT has proved extremely beneficial for farmers, including smallholders, marginalized and poor farmers, and has helped them in marketing, precision farming and improving profits. Through ICT, farmers have been empowered to exchange their opinions, experiences and ideas. It has given farmers more exposure and allowed them to use science that looks at agriculture from an integrated perspective. Also, e-Agriculture is one of the action lines identified in the declaration and plan of action of the World Summit on the Information Society. Agriculture has the potential to put India on the higher pedestal of a 'Second Green Revolution' by making the Indian agricultural sector self-sufficient. Availability of timely information and technology has proved very crucial in areas like disease prevalence and drought management, helping farmers not only to avoid crop loss but also to thwart economic loss. This has raised the demand for planning and strategies that could equip farmers with information right from sowing seeds to harvesting. ICT has now become a reliable instrument for improving the quantity and quality of agricultural production.

ICTs are advancing day by day, changing in size and shape. From radio to real-time governance, ICTs have come a long way in a short duration. The present generation is stepping ahead with technology easing human lives, business and agriculture. Augmented Reality (AR), Virtual Reality (VR) and Artificial Intelligence (AI) in agriculture are bringing new hope for the agriculture sector to avoid the coming food crisis (Monique and Glimenez, 2020). Leveraging these reality technologies in agriculture would help improve production.

With this brief background, the current seminar has been conceptualized with the
following objectives:

1. To understand the concept of AR, VR & AI

2. To explore the different types of AR, VR & AI technologies

3. To know the potential areas for applying AR, VR & AI in agriculture

4. To review the relevant research studies

2. Augmented Reality

Augmented reality (AR) is one of the current technological trends and is spreading day by day. This technology allows users to visualise the real-life environment with a digital overlay, a highly visual and interactive method that places digital content such as sounds, videos, graphics and GPS data into real working environments through cameras. AR is a growing area of virtual-reality research. Augmented Reality technology has revolutionised the way digital content is displayed (Templin et al., 2022). The environment around us provides a wealth of information, and Augmented Reality can be used as a technique to show extra information over the real world (Luis et al., 2020). Real-world information is used in virtual environments to understand the surroundings in a better way. The true work on AR started in the 1960s with the efforts of Ivan Sutherland, who developed a see-through HMD to present 3D graphics. Louis Rosenberg developed one of the first known AR systems, called Virtual Fixtures, in 1992. Since then, AR's growth and progress have been remarkable. Augmented reality has been a hot topic in software development circles for a number of years. Augmented reality is a technology that works on computer-vision-based recognition algorithms to augment sound, video, graphics and other sensor-based inputs on real-world objects using the camera of your device (Hurst et al., 2021). In other words, "Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are 'augmented' by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data." Currently, AR can be generated through applications on conventional devices such as smartphones, tablets, HoloLens, etc. Little by little, this technology is finding new application sectors to improve their workflows, especially in view of the arrival of 5G. Google, Facebook and Amazon are some of the giants that use AR software to optimise their productivity; for example, Instagram and Snapchat create fun filters for their users. Thus, augmented reality is defined as an altered form of reality in which computer-generated content is superimposed on the user's real-world view, allowing digital assets to be added to the physical environment.

2.1. Working model of AR: The basic process of creation in augmented reality is to create virtual models that are stored in a database. The model is then retrieved from that database, rendered, and registered into the scene. Sometimes this process presents serious difficulties in many application areas. The virtual content must be stored in the database and also published as printed material containing an index into the database. This link to the database increases the complexity of the final virtual model. To avoid these difficulties it is necessary to fully encode the virtual content in a bar code, which is not understandable to a human without a specific augmented reality system. When captured by an AR system, the virtual models are then extracted from the incoming image.

Embedding —> Acquisition —> Extraction —> Registration —> Rendering

The virtual model is created and printed. This printed representation is then acquired by the augmented reality device. Next, the virtual models are extracted from the acquired image. Finally, the virtual models are registered onto the scene and rendered. Besides adding virtual objects into the real world, AR must be able to remove them. Desirable systems are those that incorporate sound to broaden the augmented experience. Such systems should integrate headsets equipped with microphones to capture incoming sound from the environment, and thus be able to hide real environmental sounds by generating a masking signal.
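The five stages above can be illustrated with a short sketch in which each stage is a plain function. Everything here (the dictionary "database", the function names, the bar-code string) is an invented stand-in for illustration, not part of any real AR toolkit:

```python
# Minimal sketch of the Embedding -> Acquisition -> Extraction ->
# Registration -> Rendering pipeline described above. All names are
# illustrative; a real AR system would work on camera frames and 3D assets.

# Virtual models are stored in a database, indexed by the code that
# will be embedded in the printed material (e.g. a bar code).
MODEL_DB = {"barcode:42": {"shape": "cube", "size": 1.0}}

def embed(model_id):
    """Encode the database index as printable content (the marker)."""
    return {"printed_code": model_id}

def acquire(printed):
    """The AR device captures an image containing the printed code."""
    return {"image": "frame_0", "code": printed["printed_code"]}

def extract(capture):
    """Decode the code from the incoming image and fetch the model."""
    return MODEL_DB[capture["code"]]

def register(model, pose=(0.0, 0.0, 0.0)):
    """Anchor the virtual model at a pose in the real scene."""
    return {"model": model, "pose": pose}

def render(scene_object):
    """Draw the registered model over the live video."""
    return f"rendering {scene_object['model']['shape']} at {scene_object['pose']}"

result = render(register(extract(acquire(embed("barcode:42")))))
print(result)
```

A real system would replace acquire() with camera capture and render() with a graphics overlay on the live video, but the data flow between the five stages is the same.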

However, there are different types of AR and their differences should be known, as each is more suitable for a particular use, although they all share common features. Thus, the main differentiation is between:

2.2. Marker-based AR: Marker-based AR applications use target images (markers) to position
objects in a given space. These markers determine where the application will place the 3D
digital content within the user’s field of view. Early-stage AR technologies were based on
markers.

In other words, these applications are linked to a specific physical image pattern (marker) in a real-world environment in order to superimpose the 3D virtual object on it. The camera must continuously scan the input and recognise the marker's image pattern in order to create its geometry. If the camera is not properly focused, the virtual object will not be displayed.

Consequently, a marker-based image recognition system requires several modules, such as camera, image capture, image processing and marker tracking, among others. Generally, this is a simple and inexpensive system to implement, used for example in filters in custom applications that recognise specific patterns through a camera. Examples of this type of augmented reality are the filters and games of Instagram and Snapchat. This type of AR is therefore already part of the daily life of human beings, through routine social activities.

2.3. AR without markers

In contrast, markerless AR allows virtual 3D objects to be positioned in the real environment by examining the features present in the data in real time. This type of guidance relies on the hardware of any smartphone (the camera, GPS or accelerometer, among others), while the augmented reality software completes the job. With this model there is no need for an object tracking system, owing to recent technological advances in cameras, sensors and AI algorithms. Thus, it works with the digital data obtained by sensors capable of recording a physical space in real time.

Primarily, markerless analysis uses simultaneous localisation and mapping (SLAM) to scan the environment and create appropriate maps on which to place virtual objects. SLAM-based markerless tracking scans the environment and creates 3D maps of where to place virtual objects; the placed objects persist even when they leave the user's field of view, they do not shift when the user moves, and the user does not have to scan new images.

Therefore, this technology is able to detect objects or characteristic points in a scene without prior knowledge of the environment; for example, it can identify walls or intersection points. It is characterised by its association with the visual effect of combining computer graphics with real-world images. The first systems using this type of AR used the location and hardware services of a device to interact with the resources provided by the AR software, in such a way that the user's location and orientation in space were defined.

Another feature of this type of AR is that users can move through a larger range of motion during the experience. Apple's ARKit and Google's ARCore SDKs have made markerless AR available on smart devices. Currently, markerless AR is the preferred image recognition method for applications employing this technology.

Thus, there are four categories of markerless AR: location-based AR, projection-based AR, overlay AR and contour-based AR.

Location-based AR

Location-based markerless AR aims at the fusion of 3D virtual objects into the physical space where the user is located. This technology uses the location services and sensors of a smart device to position the virtual object at the desired location or point of interest. The most representative example of this type of augmented reality is the smartphone game Pokémon GO, which uses markerless, location-based AR to bring the user's environment to life depending on where they look.

This AR links the virtual image to a specific location by reading data in real time from the camera, GPS, compass and accelerometer. As it is based on markerless AR, no image tracking is required for its operation, as it is able to match the data in real time with the user's location. In addition, this type allows interactive and useful digital content to be added to geographies of interest, which is very beneficial for travellers within a specific area, helping them understand the environment through 3D virtual objects or videos.

Projection-based AR

This methodology is used for the delivery of digital data within a stationary context; i.e., projection-based AR focuses on rendering virtual 3D objects within the user's physical space. It allows the user to move freely around a specific area where a fixed projector and a tracking camera are placed. The main use of this technology is to create illusions of the depth, position and orientation of an object by projecting artificial light onto real flat surfaces.

For example, projection-based AR is suitable for simplifying complex tasks in business or industry, eliminating the need for computers because instructions can be projected into a given space. In addition, this technology is able to provide feedback to optimise digital identification processes in manufacturing cycles.

Overlay AR

Typically, this AR is used to replace the original view of an object with an updated
virtual image of that object for the human eye. Overlay AR provides multiple views of a target
object with the option to display additional relevant information about that object.

Contour-based AR

Essentially, this technology uses special cameras to outline specific objects with lines for the human eye, facilitating certain situations. For example, it can be used in car navigation systems to enable safe driving in low-visibility situations.

2.4. Characteristics of Augmented Reality:

1. Haptic Technology: The main goal of AR is interactivity between the user and virtual objects. Haptic technology is the system that allows the user to have tactile experiences within immersive environments. With this system, the user interacts with the virtual environment through an augmented system. To bring realism to these interactions, the system must allow the user to feel the touch of surfaces and textures and the weight and size of virtual objects. With haptic devices, mass can be assigned to virtual elements so that the weight and other qualities of an object can be felt in the fingers. This requires complex computing devices endowed with great power. Furthermore, the system must recognize the three-dimensional location of fiducial points in the real scene.

2. Position-Based Augmented Reality: For correct alignment of the virtual and real images, the system must represent both images in the same frame of reference, using sensitive calibration and measurement systems to determine the different coordinate frames in the AR system. The system measures the position and orientation of the camera with respect to the coordinate system of the real world. These two parameters determine the world-to-camera transform, C. The camera-to-image parameters, P, can be quantified by calibrating the video camera. Finally, the third transform, O, is computed by measuring the position and orientation of the virtual object in the real world, before the object is rendered and combined with the live video.
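As a worked sketch of how the three transforms compose, the following uses 4×4 homogeneous matrices with invented values (the poses and focal length are assumptions for illustration only): a point is carried from object coordinates to world coordinates by O, to camera coordinates by C, and to image coordinates by P.

```python
import numpy as np

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# O: object-to-world -- the virtual object sits 2 m in front of the origin.
O = translation(0.0, 0.0, 2.0)
# C: world-to-camera -- the camera is offset 0.5 m above the world origin.
C = translation(0.0, -0.5, 0.0)
# P: camera-to-image -- a toy pinhole projection with focal length f = 1.
f = 1.0
P = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

point_obj = np.array([0.0, 0.0, 0.0, 1.0])   # point at the object's centre
point_img = P @ C @ O @ point_obj            # compose the three transforms
u, v = point_img[:2] / point_img[2]          # perspective divide
print(u, v)
```

The real transforms would include rotations and a calibrated intrinsic matrix, but the composition order (P after C after O) is exactly the relationship described above.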

3. Computer Vision for Augmented Reality: Augmented Reality uses computer vision methods to improve performance. The system eliminates calibration errors by processing the live video data. Other systems invert the camera projection to obtain an approximation of the viewer pose. Recently, a mixed method has used fiducial tracking combined with a magnetic position tracking system to determine the parameters of the cameras in the scene. Currently, the problems of camera calibration are solved by registering the virtual objects over the live video.

4. Animation: If an AR system is to be credible, it must be able to animate the virtual elements within the scene. We can distinguish between objects that move by themselves and those whose movements are produced by the user. These interactions are represented in the object-to-world transform by multiplication with a translation matrix.
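A minimal NumPy sketch of such an animation, with an invented step size: each frame, the object-to-world transform is multiplied by a small translation matrix, so the virtual object drifts steadily through the scene.

```python
import numpy as np

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Object-to-world transform, updated every frame so the virtual object
# moves 0.1 units along x per frame (illustrative values only).
object_to_world = np.eye(4)
step = translation(0.1, 0.0, 0.0)
for frame in range(5):
    object_to_world = step @ object_to_world

print(object_to_world[0, 3])   # x position after 5 frames
```

User-driven motion works the same way, except that the translation applied each frame comes from the input device rather than a fixed step.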

5. Portability: Since the user may walk through large spaces far from controlled environments, Augmented Reality should pay special attention to the portability of its systems, allowing users to walk outdoors in comfort. This is accomplished by making the scene generator, the head-mounted display and the tracking system autonomous.

6. Others: AR merges the physical and virtual worlds, depends on the situation, is interactive in real time, makes use of all three dimensions, and overlays digital content on the user's real-world view.

2.5. Implications of Augmented Reality in agriculture:

The implications of AR in agriculture include monitoring farms visually, training new farmers, and facilitating tools assessment.

1. Monitoring farms visually: Farmers have to check the fertility of land to select the crop they want to sow in a field. AR can augment fertility inspection. Suppose a farmer wants to inspect land for any infestation; he would need to check each inch of the land manually. But with AR, farmers can visualize the entire farm in a single dashboard and detect the presence of any pest or insect infestation. For instance, a recently developed AR app can assist farmers in land examination: it collects satellite data on the land for constant monitoring, and converges AR with AI and deep learning to identify areas that might require attention.
2. Training new farmers: AR can enable new farmers to get familiar with agricultural equipment without actually having to operate it. With historical accident-related data and knowledge of how accidents occurred, farmers can also train juniors on how to avoid such casualties. AR will help new farmers learn visually how to use complex agricultural tools. It will also facilitate remote collaboration with, and training from, farmers who are experts in advanced agricultural methods. Tips from such experts can help new farmers take an appropriate approach, based on their land conditions, to enhance yield. For instance, an AR app allows farmers to get a glimpse of their farms' inner workings with the help of their smartphones or tablets.
3. Facilitating tools assessment: Agriculture is not only about sowing and harvesting; there is much more to it, such as inspecting land and monitoring crops. Multiple tools are available to perform these different tasks, and indeed multiple tools may be available for a single task. AR devices can help select optimal tools for a specific task based on the needs and requirements of the farmer and the task. For instance, farmers can use a sickle, gandasa or axe for reaping; but when crops are small and close together, an axe might not be the optimal choice as it might damage neighbouring crops. AR can help farmers see how a tool performs in different situations and assist in selecting the optimal tool for each situation.
2.6. Advantages of AR technology in agriculture, agricultural education and training:

1. Access to learning materials. Educational institutions often lack up-to-date teaching materials; many students have to study outdated information or search for information on their own at home. In an AR application, you can download the latest data and display it in an interactive format.
2. Access to virtual equipment. In cases where it is necessary to explore specific
equipment and learn how to use it, an augmented reality application can present the
required 3D model and helpful explanations. This adds practical value to the traditional
learning materials.
3. Higher student engagement. Students study the material more deeply through
immersion, which makes it feel more real and relevant. This is a substantial change of
pace and an exciting experience for many.
4. Faster learning. A new way of presenting information helps reduce the overall
learning time. Subsequently, there is more time for practice and in-depth examination
of niche topics.
5. Safer practice. In such cases as anatomy lessons, students no longer need to dissect
real animals; this can be accurately simulated through software. Students get the same
level of practice without harming anything or working with dangerous tools.
6. Others: Enhanced experience, ease of use, support for many activities, improved technology, increased understanding, learning of spatial structure and function, long-term memory retention, improved physical task performance, improved collaboration and increased student motivation are other advantages of AR.

2.7. Constraints in adopting AR

1. Proper hardware is needed: An AR application cannot be installed on old phone models or those that run OS versions that do not support immersion. Schools are rarely known for having the most modern electronic tools, so purchasing smartphones or smart glasses may be out of the question.

2. Lack of teachers' experience with technology: Some educators do not embrace modern technologies, or even know how to use them. This is a problem when the teacher is expected to show how a device works and help students in case of difficulties.

3. Others: High initial setup cost, privacy and security issues, and user addiction are other constraints in adopting AR.

3. Virtual reality

Nowadays computer graphics is used in many domains of our life. Since the end of the 20th century it has been difficult to imagine an architect, engineer or interior designer working without a graphics workstation. In recent years the stormy development of microprocessor technology has brought faster and faster computers to the market, equipped with better and faster graphics boards, while their prices fall rapidly. It has become possible even for an average user to move into the world of computer graphics. This fascination with a new (ir)reality often starts with computer games and lasts forever. It allows one to see the surrounding world in another dimension and to experience things that are not accessible in real life or not yet created. Moreover, the world of three-dimensional graphics has neither borders nor constraints and can be created and manipulated by ourselves as we wish; we can enhance it with a fourth dimension: the dimension of our imagination. But that is not enough: people always want more. They want to step into this world and interact with it, instead of just watching a picture on a monitor. The technology that has become overwhelmingly popular and fashionable in the current decade is called Virtual Reality (VR). VR is a computer-generated technique using a simulated environment to bring auditory and visual experiences to people (Zhou and Calvo, 2018). VR most commonly employs headsets that rely on stereoscopic displays, spatial audio and motion-tracking sensors to simulate a wholly virtual environment (Joseph and Jeremy, 2021). The very first idea of it was presented by Ivan Sutherland in 1965: "make that (virtual) world in the window look real, sound real, feel real, and respond realistically to the viewer's actions" (Feng et al., 2010). A long time has passed since then and a lot of research has been done; as for the status quo, "Sutherland's challenge of the Promised Land has not been reached yet, but we are at least in sight of it". The following is a glimpse at the last three decades of research in virtual reality and its highlights.

Sensorama: In the years 1960–1962 Morton Heilig created a multi-sensory simulator. A prerecorded film, in colour and stereo, was augmented by binaural sound, scent, wind and vibration experiences. This was the first approach to creating a virtual reality system; it had all the features of such an environment, but it was not interactive.

The Ultimate Display : In 1965 Ivan Sutherland proposed the ultimate solution of virtual
reality: an artificial world construction concept that included interactive graphics, force-
feedback, sound, smell and taste.

The Sword of Damocles: The first virtual reality system realized in hardware, not just in concept. Ivan Sutherland constructed a device considered to be the first Head Mounted Display (HMD), with appropriate head tracking. It supported a stereo view that was updated correctly according to the user's head position and orientation.

GROPE: The first prototype of a force-feedback system realized at the University of North
Carolina (UNC) in 1971.

VIDEOPLACE: An Artificial Reality created in 1975 by Myron Krueger: "a conceptual environment, with no existence". In this system the silhouettes of the users, grabbed by cameras, were projected on a large screen. The participants were able to interact with one another thanks to image processing techniques that determined their positions in the 2D screen space.

VCASS: In 1982 Thomas Furness at the US Air Force's Armstrong Medical Research Laboratories developed the Visually Coupled Airborne Systems Simulator, an advanced flight simulator. The fighter pilot wore an HMD that augmented the out-the-window view with graphics describing targeting or optimal flight path information.

VIVED: VIrtual Visual Environment Display, a stereoscopic monochrome HMD constructed at NASA Ames in 1984 from off-the-shelf technology.

VPL: The VPL company manufactured the popular DataGlove (1985) and the Eyephone HMD (1988), the first commercially available VR devices.

BOOM: Commercialized in 1989 by Fake Space Labs, the BOOM is a small box containing two CRT monitors that can be viewed through eye holes. The user can grab the box, hold it to the eyes and move through the virtual world, as a mechanical arm measures the position and orientation of the box.

UNC Walkthrough project: In the second half of the 1980s an architectural walkthrough application was developed at the University of North Carolina. Several VR devices were constructed to improve the quality of this system, such as HMDs, optical trackers and the Pixel-Planes graphics engine.

Virtual Wind Tunnel: An application developed in the early 1990s at NASA Ames that allowed the observation and investigation of flow fields with the help of the BOOM and the DataGlove.

CAVE: Presented in 1992, the CAVE (CAVE Automatic Virtual Environment) is a virtual reality and scientific visualization system. Instead of using an HMD, it projects stereoscopic images on the walls of a room (the user must wear LCD shutter glasses). This approach assures superior quality and resolution of the viewed images and a wider field of view in comparison with HMD-based systems.

At the beginning of the 1990s development in the field of virtual reality became much more rapid, and the term Virtual Reality itself became extremely popular. We hear about Virtual Reality in nearly all sorts of media; people use this term very often, and they misuse it in many cases too. The reason is that this new, promising and fascinating technology captures greater public interest than, e.g., computer graphics. The consequence is that nowadays the border between 3D computer graphics and Virtual Reality has become fuzzy. Therefore, in the following sections some definitions of Virtual Reality and its basic principles are presented.

The terms Virtual Reality (VR) and Virtual Environments (VE) are used interchangeably in the computer community. These are the most popular and most often used terms, but there are many others; to mention a few of the most important: Synthetic Experience, Virtual Worlds, Artificial Worlds and Artificial Reality. All these names mean the same:

• "Real-time interactive graphics with three-dimensional models, combined with a display technology that gives the user immersion in the model world and direct manipulation." [Fuch92]

• "The illusion of participation in a synthetic environment rather than external observation of such an environment. VR relies on three-dimensional, stereoscopic, head-tracked displays, hand/body tracking and binaural sound. VR is an immersive, multi-sensory experience."

• "Computer simulations that use 3D graphics and devices such as the DataGlove to allow the user to interact with the simulation."

• "Virtual reality refers to immersive, interactive, multi-sensory, viewer-centered, three-dimensional computer-generated environments and the combination of technologies required to build these environments."

• "Virtual reality lets you navigate and view a world of three dimensions in real time, with six degrees of freedom. (...) In essence, virtual reality is a clone of physical reality."

3.1. VR technology: Basic components required

VR requires more resources than standard desktop systems do. Additional input and output hardware devices, and special drivers for them, are needed for enhanced user interaction. But we have to keep in mind that extra hardware alone will not create an immersive VR system: special considerations in designing such systems, and special software (Zyda et al.), are also required. First, let us take a short look at the basic components of immersive VR applications.

Basic components of an immersive VR application.

The above figure depicts the most important parts of the human–computer–human interaction loop fundamental to every immersive system. The user is equipped with a head-mounted display, a tracker and optionally a manipulation device (e.g., three-dimensional mouse, data glove, etc.). As the human performs actions like walking or head rotation (i.e., changing the point of view), data describing his or her behavior is fed to the computer from the input devices. The computer processes the information in real time and generates appropriate feedback that is passed back to the user by means of the output displays. In general, input devices are responsible for interaction, output devices for the feeling of immersion, and software for proper control and synchronization of the whole environment.
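The per-frame loop can be sketched as follows; the tracker, simulation and display here are invented stand-ins for real devices, intended only to show the input, real-time processing and output structure of each frame:

```python
# Illustrative sketch of the human-computer-human interaction loop.
# All device functions are stand-ins; a real system would read a head
# tracker and drive a head-mounted display.

def read_tracker(frame):
    """Stand-in input device: report the user's head pose this frame."""
    return {"yaw": frame * 1.0, "position": (0.0, 0.0, float(frame))}

def simulate(pose):
    """Update the virtual world from the user's pose (the processing step)."""
    return {"view_from": pose["position"], "view_yaw": pose["yaw"]}

def render(scene):
    """Stand-in output display: return what would be drawn this frame."""
    return f"frame from {scene['view_from']} yaw={scene['view_yaw']}"

frames = []
for frame in range(3):
    pose = read_tracker(frame)       # input devices: interaction
    scene = simulate(pose)           # software: control and synchronization
    frames.append(render(scene))     # output devices: feeling of immersion

print(frames[-1])
```

The time-critical nature of the loop comes from the fact that all three steps must complete within one display refresh, or the feeling of immersion is destroyed.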

Input devices: Input devices determine the way a user communicates with the computer.
Ideally, all these devices together should make the user’s control of the environment as
intuitive and natural as possible – they should be practically invisible. Unfortunately, the
current state of technology is not advanced enough to support this, so naturalness can be
achieved only in very limited cases. In most cases we still have to introduce interaction
metaphors that may pose a difficulty for an unskilled user.

Output devices: Output devices are responsible for presenting the virtual environment and its
phenomena to the user – they contribute most to the feeling of immersion. These include
visual, auditory or haptic displays. As with input, the output devices are also
underdeveloped. The current state of technology does not allow human senses to be
stimulated perfectly, because VR output devices are far from ideal: they are heavy,
low-quality and low-resolution. In fact, most systems support visual feedback, and only some
of them enhance it with audio or haptic information.

Software: Beyond input and output hardware, the underlying software plays a very important
role. It is responsible for managing the I/O devices, analyzing incoming data and generating
proper feedback. The difference from conventional systems is that VR devices are much more
complicated than those used at the desktop – they require extremely precise handling and send
large quantities of data to the system. Moreover, the whole application is time-critical and the
software must manage it: input data must be handled promptly, and the system response sent
to the output displays must arrive in time so as not to destroy the feeling of immersion.
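The time-critical input-process-output cycle described above can be sketched as a simple frame loop. This is only an illustrative sketch: the device functions are placeholder stubs (no real tracker or display driver is assumed), and the 90 Hz frame budget is an assumed figure typical of head-mounted displays.

```python
import time

# Hypothetical device stubs standing in for real tracker/display drivers.
def read_tracker():
    """Poll the head tracker: returns (position, orientation)."""
    return (0.0, 0.0, 1.7), (0.0, 0.0, 0.0)

def render_frame(position, orientation):
    """Render the scene from the user's current viewpoint (stub)."""
    pass

TARGET_FPS = 90                   # assumed HMD refresh rate
FRAME_BUDGET = 1.0 / TARGET_FPS   # ~11 ms of work allowed per frame

def run_loop(n_frames):
    """Run n_frames of the input -> process -> output loop,
    counting frames that missed the real-time budget."""
    dropped = 0
    for _ in range(n_frames):
        start = time.perf_counter()
        pos, orient = read_tracker()   # input: user's head pose
        render_frame(pos, orient)      # output: updated view
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:     # late frames break immersion
            dropped += 1
        else:
            time.sleep(FRAME_BUDGET - elapsed)
    return dropped
```

The key design point is the fixed frame budget: any processing that overruns it produces a late frame, which is exactly the loss of immersion the paragraph above warns about.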

What VR makes possible:

• Visualize and manipulate things that you cannot see in the real world

• Take on different perspectives

• Visualize 3D concepts

• Interact in real time

• Explore dangerous situations

• Present realistic or abstract scenarios

• Promote different learning styles and teaching methods

3.2. Modes of VR used in Agriculture

Four modes of VR are used in agriculture: Virtual Practicals, Virtual Theory Programs,
Virtual Courses and Virtual Culture.

Virtual Practicals: Virtual practicals give the user hands-on experience of performing a task.
Virtual Theory Programs: These programs help the user to understand the basic concepts of
a topic through theory classes.
Virtual Courses: Such courses allow learners to earn a certificate or degree in a particular
subject by attending regular online classes. This is one of the most striking applications of
virtual reality in agriculture.
Virtual Culture: The technique of growing plants and saplings with the help of a virtual
environment and conditions is called virtual culture.

3.3. Major roles VR can be used in Agriculture

Virtual reality has applications in many sectors of life, from cinema to robotics. In
agriculture it helps farmers to understand the latest technologies, the latest agricultural
equipment, soil requirements, cropping patterns, harvesting steps, and profit outputs. Virtual
reality plays an important role in strengthening the agro-economic status of the country. Here
are the ways in which virtual reality is changing the agriculture sector.

1. Create 3D visualization: Virtual reality helps in recreating a 3D work environment. This
helps farmers to rehearse pre-plantation activities or harvesting, and further helps them to
identify their strengths and weaknesses in the technique.

2. Less training costs: Virtual training in farming is less expensive than hands-on training.
Modern farming equipment is very costly, and a trainee farmer or agriculturist often cannot
afford it. In such situations virtual training offers an easy way out: it helps new farmers and
agricultural trainees to learn techniques easily at lower cost. A minor disadvantage of virtual
training, though, is that it is somewhat less practical than hands-on experience.
3.Free resources: Most of the resources available on virtual training are free of cost.
4. Help to connect with the global market: Virtual reality, with the aid of the internet, helps
the producer to connect with the global market. This connection helps users to learn about
global technologies and equipment.
5. Learn with the best faculty in the world: Virtual reality helps to connect farming students
with the best global faculty, strengthening their learning and understanding. When a student
learns from the best teachers, he or she helps to build a better nation.
6. Help to build budding agriculturists: Virtual reality, with its advanced technology, has the
power to engage its audience. Virtual learning programs are thus able to attract Gen Z to the
agriculture field and motivate more users to pursue agricultural studies and practices as their
career.
7. Helps in understanding soil profiles: Virtual reality collects data on different geographic
locations and their soil profiles. This data helps in drawing observations about the nature of
the soil, the minerals present in it, its water-retaining capacity, and its deficiencies. Thus,
virtual reality helps farmers to understand which crops are best suited to their fields.
8. Reduces the cost of plant protection tools: After studying agricultural patterns, farmers
can learn more about crops and their protection methods. Such analysis allows farmers to
purchase only the required tools.
3.4. Advantages of Virtual Reality
This technology does come with several benefits, and several areas have been positively
affected by the implementation of this technology. Some of the positive impacts of virtual
reality are listed below:
1. More Than Real: Virtual reality offers high-quality visualizations that give the user the
feeling of being in a different world while playing games, watching scenery, etc. Playing
games in virtual reality gives the user the impression of actually being inside the game,
experiencing every move as if it were real, with all the visual and sound effects and countless
other sensations.

2. Safe Practice / Simulation: Another of the countless benefits of virtual reality technology
is its use for training and practice. It can simulate potentially dangerous real-world operations
such as surgery, combat and flight. One can learn to perform operations, fly a plane, and much
more without risking one’s own or others’ lives.

3. Detailed: Planning a vacation can be tiring work, and if the planned location doesn’t work
out as hoped, it is bound to worsen one’s mood. With virtual reality technology one can
simply get a detailed and sharp view of any tourist location and decide whether the trip is
worth the time and effort.

4. Handy: Virtual reality comes in very handy in day-to-day activities such as shopping.
Suppose you’re shopping for interior designs for your house. Just looking at the designs
doesn’t make it easy to decide on a perfect match, and a confusing situation arises. Such
confusion can easily be resolved by using virtual reality to put the designs together with the
interior of your house for a well-informed decision.

5. Increased Learning Possibilities: Using VR technology, doctors can understand a
medicine’s new qualities and determine its side effects, giving them a clear idea of the
outcome. Fields such as content writing and editing can also benefit from virtual reality
technology, which eases the detection of faults through certain software arrangements.

6. Virtual reality creates a realistic world.
7. It enables users to explore places.
8. Through virtual reality, users can experiment with an artificial environment.
9. Virtual reality makes education easier and more comfortable.

3.5. Disadvantages / constraints in adopting VR
Regardless of the numerous merits offered by virtual reality technology, there are still some
disadvantages of virtual reality that need to be acknowledged when considering this type of
technology. The cons you need to consider are listed below.

1. Intransigent: Unlike real-world settings, virtual reality doesn’t offer flexibility in making
changes to the pre-set program sequence. Say someone in a classroom wants to raise
questions: in the real world they are free to do so and to make suggestions, but in virtual
reality this isn’t feasible.

2. Obsession: Anyone can become obsessed with anything if they engage with it for long
enough, and this is especially so with something as astounding as virtual reality. Many people
like games that allow violence and other illegal activities; if that turns into addiction, it is
possible they will commit such crimes in the real world.

3. Expensive: Regardless of the fun and amazing experience that virtual reality technology
provides, not everyone can afford it, as it does not come cheap. Despite the decrease in its
price over the years, virtual reality is still not cheap enough to be affordable for most of the
population.

4. Isolated: After spending significant amounts of time in virtual reality, users become
addicted to it and tend to enjoy it more than the real world. They then spend more time with
their friends in virtual reality, as in games, which eventually leads them to become isolated
from the real world.

5. Others: The high technical skills required, impacts on the human body, disengagement
from the real world and psychological damage are a few other drawbacks in adopting VR.

4. Artificial Intelligence

Artificial Intelligence is defined as a “field of computer science, which focuses on the


creation of machine systems which behave intelligently and show behavior to the same level
as human beings think and act to achieve human-like performance in all cognitive works and
fields using precise logical reasoning.”

Artificial intelligence (AI) is evolving rapidly, from Siri to self-driving vehicles. It is one of
today’s most discussed technologies. The idea of making AI part of our daily lives may seem
overwhelming to most of us, but the truth is that in most corporate and domestic sectors AI is
already commonplace. Agriculture includes a variety of processes in which goods are
produced from natural resources. The agricultural industry consists of numerous operations,
including harvesting crops and plants, feeding animals, grazing, etc. It involves soil
preparation for maximum returns, crop enhancement, horticultural services, landscaping
services, veterinary services, and labour or farm management. By 2050, the global population
is projected to surpass 9 billion, and agricultural production needs to increase by about 70%
to meet demand. Due to multiple economic, environmental, and sociological pressures, land,
water, and resources are already becoming inadequate. Reduced food production has a
particularly devastating effect on developing countries. In the face of global warming, AI has
the potential to promote more productive agricultural practices, but only with appropriate
regulation of its growth.

In the domain of agriculture, AI is an evolving technology. AI-based equipment and
machines have brought today’s farming system to a new level. This technology has increased
crop productivity and enhanced tracking, harvesting, processing, and marketing in real time.
AI bots can harvest crops at higher volume and faster speed than human workers in the
agricultural sector, and computer vision helps to track weeds and spray them. Artificial
intelligence also allows farmers to find more successful ways of protecting their crops from
weeds. Farmers now use AI for techniques such as precision farming – tracking crop
humidity, soil composition, and temperature in growing areas – allowing them to increase
yields by learning how best to care for their crops and deciding the optimal amount of water
or fertilizer to use. In fact, robots and machine learning are helping to promote new, more
efficient agricultural methods that take agriculture indoors and to new heights to save energy,
reduce pesticides, and shorten time to market. These robots grow food with no farmers,
operating more like a lean factory than a farm.

Agricultural technology has come a long way, from the invention of grain elevators and
artificial fertilizers to the use of satellites. It is too early, however, to speak of a full digital
revolution in this industry. The productivity of the farming industry will be greatly improved
by AI, and the need for better AI solutions in agriculture will inevitably only grow. But to
ensure proper investment and testing, we need collaboration between governments, science,
and businesses. AI is also developing beyond the farming industry, and we can expect to see
more exciting solutions as the AI sector expands at a rapid pace to solve our future problems,
with AI in agriculture paving the way for growth in other industries.

4.1. History of AI: Artificial intelligence (AI) is a young discipline of some sixty years: a set
of sciences, theories and techniques (including mathematical logic, statistics, probability,
computational neurobiology and computer science) that aims to imitate the cognitive abilities
of a human being. Initiated in the wake of the Second World War, its development is
intimately linked to that of computing and has led computers to perform increasingly complex
tasks, which previously could only be delegated to a human.

1940-1960: Birth of AI in the wake of cybernetics

The period between 1940 and 1960 was strongly marked by the conjunction of
technological developments (of which the Second World War was an accelerator) and the
desire to understand how to bring together the functioning of machines and organic beings. For
Norbert Wiener, a pioneer in cybernetics, the aim was to unify mathematical theory, electronics
and automation as "a whole theory of control and communication, both in animals and
machines". Just before, a first mathematical and computer model of the biological neuron
(formal neuron) had been developed by Warren McCulloch and Walter Pitts as early as 1943.

At the beginning of the 1950s, John von Neumann and Alan Turing did not coin the term
AI, but they were the founding fathers of the technology behind it: they made the transition
from 19th-century decimal logic (which dealt with values from 0 to 9) to machines using
binary logic (which relies on Boolean algebra, dealing with chains of 0s and 1s of varying
length). The two researchers thus formalized the architecture of our contemporary computers
and demonstrated that it was a universal machine, capable of executing whatever is
programmed. Turing, for his part, raised the question of the possible intelligence of a machine
for the first time in his famous 1950 article "Computing Machinery and Intelligence" and
described an "imitation game", in which a human should try to distinguish in a teletype
dialogue whether he is talking to a man or a machine. However controversial this article may
be (many experts do not consider the "Turing test" a valid qualification of intelligence), it is
often cited as the source of the questioning of the boundary between the human and the
machine.

The term "AI" can be attributed to John McCarthy of MIT (Massachusetts Institute of
Technology), and Marvin Minsky (Carnegie Mellon University) defined it as "the
construction of computer programs that engage in tasks that are currently more satisfactorily
performed by human beings because they require high-level mental processes such as:
perceptual learning, memory organization and critical reasoning". The summer 1956
conference at Dartmouth College (funded by the Rockefeller Institute) is considered the
founding event of the discipline. Anecdotally, it is worth noting the great success of what was
not a conference but rather a workshop: only six people, including McCarthy and Minsky,
remained consistently present throughout this work (which relied essentially on developments
based on formal logic).

While the technology remained fascinating and promising (see, for example, the 1963 article
by Reed C. Lawlor, a member of the California Bar, entitled "What Computers Can Do:
Analysis and Prediction of Judicial Decisions"), its popularity fell back in the early 1960s.
The machines had very little memory, making it difficult to use a computer language.
However, some foundations still present today had already been laid, such as solution trees
for solving problems: the IPL (information processing language) had made it possible to
write, as early as 1956, the LTM (logic theorist machine) program, which aimed to prove
mathematical theorems. Herbert Simon, economist and sociologist, prophesied in 1957 that
AI would succeed in beating a human at chess within the next 10 years, but AI then entered
its first winter. Simon's vision proved right... 30 years late.

1980-1990: Expert systems

In 1968 Stanley Kubrick directed the film "2001: A Space Odyssey", in which a computer,
HAL 9000 (each letter only one away from those of IBM), embodies the whole sum of ethical
questions posed by AI: does a high level of sophistication represent a good for humanity or a
danger? The impact of the film was naturally not scientific, but it contributed to popularizing
the theme, as did the science fiction author Philip K. Dick, who never ceased to wonder
whether, one day, machines would experience emotions.

It was with the advent of the first microprocessors at the end of the 1970s that AI took off
again and entered the golden age of expert systems. The path had actually been opened at
MIT in 1965 with DENDRAL (an expert system specialized in molecular chemistry) and at
Stanford University in 1972 with MYCIN (a system specialized in the diagnosis of blood
diseases and the prescription of drugs). These systems were based on an "inference engine",
which was programmed to be a logical mirror of human reasoning. By entering data, the
engine provided answers of a high level of expertise.

The promise foresaw massive development, but the craze fell away again at the end of the
1980s and in the early 1990s. Programming such knowledge actually required a lot of effort,
and beyond 200 to 300 rules there was a "black box" effect: it was not clear how the machine
reasoned. Development and maintenance thus became extremely problematic and – above all
– faster, less complex and less expensive approaches became possible. It should be recalled
that in the 1990s the term artificial intelligence had almost become taboo, and more modest
variations, such as "advanced computing", had even entered university language. The success
in May 1997 of Deep Blue (IBM's expert system) at chess against Garry Kasparov fulfilled,
30 years late, Herbert Simon's 1957 prophecy, but it did not support the financing and
development of this form of AI. The operation of Deep Blue was based on a systematic
brute-force algorithm, in which all possible moves were evaluated and weighted. The defeat
of the human remained highly symbolic in the history, but Deep Blue had in reality only
managed to treat a very limited perimeter (that of the rules of chess), very far from the
capacity to model the complexity of the world.
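The brute-force evaluation mentioned above can be illustrated with a minimal minimax sketch over a toy game tree. This is not Deep Blue's actual algorithm (which combined deep search with hand-tuned evaluation functions on specialized hardware), only the underlying idea of exhaustively evaluating and weighting every line of play.

```python
def minimax(node, maximizing):
    """Exhaustively evaluate a game tree: leaves are numeric scores,
    internal nodes are lists of child subtrees. The maximizing player
    picks the highest score, the opponent the lowest."""
    if isinstance(node, (int, float)):   # leaf: static position evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy two-ply tree: the maximizer chooses among the minimizer's replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = minimax(tree, maximizing=True)   # evaluates every position
```

The reason this approach works for chess but not for Go is exactly the combinatorial point made later in the text: exhaustive evaluation is feasible only when the tree of possible moves is small enough to enumerate.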

Since 2010: a new bloom based on massive data and new computing power

Two factors explain the new boom in the discipline around 2010. First, access to massive
volumes of data: to use algorithms for image classification and cat recognition, for example,
it was previously necessary to carry out sampling yourself, whereas today a simple search on
Google can find millions of examples. Second, the discovery of the very high efficiency of
computer graphics card processors in accelerating the calculation of learning algorithms. The
process being very iterative, before 2010 it could take weeks to process an entire sample. The
computing power of these cards (capable of more than a thousand billion transactions per
second) has enabled considerable progress at a limited financial cost (less than 1000 euros per
card). This new technological equipment has enabled some significant public successes and
has boosted funding: in 2011, Watson, IBM's AI, won the games against two Jeopardy!
champions. In 2012, Google X (Google's research lab) succeeded in having an AI recognize
cats in videos. More than 16,000 processors were used for this last task, but the potential is
extraordinary: a machine learned to distinguish something. In 2016, AlphaGo (Google's AI
specialized in the game of Go) beat the European champion (Fan Hui) and the world
champion (Lee Sedol), and then itself (AlphaGo Zero). Let us note that the game of Go has a
combinatorial space far larger than chess (more than the number of particles in the universe)
and that such significant results are not possible through brute force (as with Deep Blue in
1997). Overnight, a large majority of research teams turned to this technology, with
indisputable benefits. This type of learning has also enabled considerable progress in text
recognition but, according to experts such as Yann LeCun, there is still a long way to go to
produce text understanding systems. Conversational agents illustrate this challenge well: our
smartphones already know how to transcribe an instruction but cannot fully contextualize it
or analyze our intentions.

4.2. Types of Artificial Intelligence:

1. Reactive Machines: These are the oldest forms of AI systems and have extremely limited
capability. They emulate the human mind’s ability to respond to different kinds of stimuli.
These machines have no memory-based functionality, meaning they cannot use previously
gained experience to inform their present actions, i.e., they do not have the ability to “learn.”
They can only be used to respond automatically to a limited set or combination of inputs; they
cannot rely on memory to improve their operations. A popular example of a reactive AI
machine is IBM’s Deep Blue, the machine that beat chess grandmaster Garry Kasparov in
1997.

2. Limited Memory: Limited memory machines are machines that, in addition to having the
capabilities of purely reactive machines, are also capable of learning from historical data to
make decisions. Nearly all existing applications that we know of come under this category of
AI. All present-day AI systems, such as those using deep learning, are trained by large volumes
of training data that they store in their memory to form a reference model for solving future
problems. For instance, an image recognition AI is trained using thousands of pictures and their
labels to teach it to name objects it scans. When an image is scanned by such an AI, it uses the
training images as references to understand the contents of the image presented to it, and based
on its “learning experience” it labels new images with increasing accuracy. Almost all present-
day AI applications, from chatbots and virtual assistants to self-driving vehicles are all driven
by limited memory AI.
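The idea of "using training images as references" can be caricatured with a nearest-neighbour sketch: a new sample is labeled by finding the most similar stored example. Real image-recognition systems use deep neural networks rather than this, and the feature vectors and labels below are purely illustrative.

```python
def nearest_neighbor_label(sample, training_set):
    """Label a new feature vector with the label of its closest
    training example (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(training_set, key=lambda ex: dist(sample, ex[0]))
    return best[1]

# Toy 2-D feature vectors standing in for image features (illustrative).
training = [((0.9, 0.1), "cat"), ((0.1, 0.9), "dog")]
label = nearest_neighbor_label((0.8, 0.2), training)   # closest to "cat"
```

The stored `training` list plays the role of the "memory" that distinguishes limited-memory AI from purely reactive machines: the system's answer depends on previously seen, labeled examples.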

3. Theory of Mind: While the previous two types of AI have been and are found in abundance,
the next two types of AI exist, for now, either as a concept or a work in progress. Theory of

mind AI is the next level of AI systems that researchers are currently engaged in innovating. A
theory of mind level AI will be able to better understand the entities it is interacting with by
discerning their needs, emotions, beliefs, and thought processes. While artificial emotional
intelligence is already a budding industry and an area of interest for leading AI researchers,
achieving Theory of mind level of AI will require development in other branches of AI as well.
This is because to truly understand human needs, AI machines will have to perceive humans
as individuals whose minds can be shaped by multiple factors, essentially “understanding”
humans.

4. Self-aware: This is the final stage of AI development, which currently exists only
hypothetically. Self-aware AI is, as the name suggests, AI that has evolved to be so akin to
the human brain that it has developed self-awareness. Creating this type of AI, which is
decades, if not centuries, away from materializing, is and will likely remain the ultimate
objective of all AI research. This type of AI will not only be able to understand and evoke
emotions in those it interacts with, but also have emotions, needs, beliefs, and potentially
desires of its own. And this is the type of AI that doomsayers of the technology are wary of.
Although the development of self-aware AI could potentially boost our progress as a
civilization by leaps and bounds, it could also potentially lead to catastrophe. Once
self-aware, an AI would be capable of ideas such as self-preservation, which may directly or
indirectly spell the end for humanity, as such an entity could easily outmaneuver the intellect
of any human being and plot elaborate schemes to take over humanity.

5. Artificial Narrow Intelligence (ANI): This type of artificial intelligence represents all the
existing AI, including even the most complicated and capable AI that has ever been created to
date. Artificial narrow intelligence refers to AI systems that can only perform a specific task
autonomously using human-like capabilities. These machines can do nothing more than what
they are programmed to do, and thus have a very limited or narrow range of competencies.
According to the aforementioned system of classification, these systems correspond to all the
reactive and limited memory AI. Even the most complex AI that uses machine learning and
deep learning to teach itself falls under ANI.

6. Artificial General Intelligence (AGI): Artificial General Intelligence is the ability of an AI


agent to learn, perceive, understand, and function completely like a human being. These
systems will be able to independently build multiple competencies and form connections and

generalizations across domains, massively cutting down on time needed for training. This will
make AI systems just as capable as humans by replicating our multi-functional capabilities.

7. Artificial Superintelligence (ASI): The development of Artificial Superintelligence will
probably mark the pinnacle of AI research, as ASI will become by far the most capable form
of intelligence on earth. ASI, in addition to replicating the multi-faceted intelligence of human
beings, will be exceedingly better at everything it does because of overwhelmingly greater
memory, faster data processing and analysis, and decision-making capabilities. The
development of AGI and ASI will lead to a scenario most popularly referred to as the
singularity. And while the potential of having such powerful machines at our disposal seems
appealing, these machines may also threaten our existence or, at the very least, our way of
life.

There are broadly four types of technologies in which AI is used: 1. Expert systems, 2. Robots,
3. Drones and 4. IoT (Internet of Things).

4.3. Expert systems: Expert systems are computer applications developed to solve complex
problems in a particular domain, at the level of extraordinary human intelligence and
expertise.

Model of Expert system

Knowledge Base: It contains domain-specific and high-quality knowledge. Knowledge is
required to exhibit intelligence, and the success of any ES depends largely on the collection of
highly accurate and precise knowledge.

Components of the Knowledge Base: The knowledge base of an ES is a store of both factual
and heuristic knowledge.

• Factual Knowledge − It is the information widely accepted by the Knowledge Engineers


and scholars in the task domain.

• Heuristic Knowledge − It is about practice, accurate judgement, one’s ability of evaluation,


and guessing.

Knowledge representation: It is the method used to organize and formalize the knowledge in
the knowledge base. It is in the form of IF-THEN-ELSE rules.

Knowledge Acquisition: The success of any expert system majorly depends on the quality,
completeness, and accuracy of the information stored in the knowledge base. The knowledge
base is formed by readings from various experts, scholars, and the Knowledge Engineers. The
knowledge engineer is a person with the qualities of empathy, quick learning, and case
analyzing skills.

He acquires information from the subject expert by recording, interviewing, and observing
him at work, etc. He then categorizes and organizes the information in a meaningful way, in
the form of IF-THEN-ELSE rules, to be used by the inference engine. The knowledge
engineer also monitors the development of the ES.

Inference Engine: The use of efficient procedures and rules by the inference engine is
essential in deducing a correct, flawless solution. In a knowledge-based ES, the inference
engine acquires and manipulates the knowledge from the knowledge base to arrive at a
particular solution. In a rule-based ES, it: 1. applies rules repeatedly to the facts, which are
obtained from earlier rule applications; 2. adds new knowledge into the knowledge base if
required; 3. resolves rule conflicts when multiple rules are applicable to a particular case.
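The rule-application cycle described above is forward chaining, and can be sketched in a few lines. The crop-advisory rules here are hypothetical examples invented for illustration, not drawn from any real expert system.

```python
def forward_chain(facts, rules):
    """Repeatedly apply IF-THEN rules, adding each conclusion to the
    fact base until no rule can fire (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when all its conditions are known facts.
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)   # new knowledge enters the base
                changed = True
    return facts

# Hypothetical crop-advisory rules (for demonstration only).
rules = [
    (["leaves yellow", "soil waterlogged"], "suspect root rot"),
    (["suspect root rot"], "recommend improved drainage"),
]
derived = forward_chain(["leaves yellow", "soil waterlogged"], rules)
```

Note how the second rule fires only because the first one added "suspect root rot" to the knowledge base: this chaining of earlier conclusions into later rule applications is exactly point 1 of the list above.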

User Interface: The user interface provides interaction between the user of the ES and the ES
itself. It generally uses natural language processing so that it can be used by a user who is
well-versed in the task domain; the user of the ES need not be an expert in artificial
intelligence. It explains how the ES has arrived at a particular recommendation. The
explanation may take the following forms: 1. natural language displayed on screen; 2. verbal
narration in natural language; 3. a listing of rule numbers displayed on the screen. The user
interface makes it easy to trace the credibility of the deductions.

Expert System Limitations: No technology can offer an easy and complete solution. Large
systems are costly and require significant development time and computer resources. The
limitations of ESs include: 1. limitations of the technology; 2. difficult knowledge
acquisition; 3. difficulty of maintenance; 4. high development costs.

4.4. Drones: Drones are more formally known as unmanned aerial vehicles (UAVs) that can
be remotely controlled or fly autonomously through software-controlled flight plans in their
embedded systems, working in conjunction with onboard sensors and GPS.

Agriculture Drones

Mapping/Surveying: Near-infrared camera sensors allow a drone to see the part of the
spectrum that plants absorb for photosynthesis. From this information, using the normalized
difference vegetation index (NDVI), farmers can assess plant health. Software analysis can
adjust values to reflect the specific crop type and even the growth stage a specific crop is in.
In addition to crop health, drones can create detailed GPS maps of the crop field area.
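NDVI is computed per pixel from the near-infrared (NIR) and red reflectance bands as (NIR − Red) / (NIR + Red), giving values between −1 and +1; healthy vegetation reflects strongly in NIR and scores close to +1. A minimal sketch, with illustrative reflectance values rather than real field data:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), ranging from -1 to +1.
    Healthy vegetation absorbs red and reflects NIR strongly,
    pushing the value toward +1."""
    if nir + red == 0:       # guard against division by zero
        return 0.0
    return (nir - red) / (nir + red)

# Example reflectance values (illustrative only, not field data):
healthy = ndvi(nir=0.50, red=0.08)   # dense, healthy canopy -> high NDVI
stressed = ndvi(nir=0.30, red=0.20)  # sparse or stressed vegetation -> low NDVI
```

In practice a drone survey yields whole raster bands, and the same formula is applied element-wise over the NIR and red arrays to produce the per-field NDVI map the paragraph describes.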

Crop Spraying/Dusting: To maintain yields, crops require proper fertilization and pesticide
application. Crop-spraying drones can carry large liquid storage reservoirs, can be operated
more safely (even autonomously), and can be operated and maintained at a fraction of the
cost of crop dusters.

Irrigation management : Drones equipped with thermal cameras can provide excellent insight
into irrigation by highlighting areas that have pooling water or insufficient soil moisture. These
issues can severely affect crop yields and quality. Thermal drones give farmers a better way to
understand their fields through more frequent inspections and surveying.

Livestock Monitoring: Drones give livestock farmers a new way to keep an eye on their
livestock at all times, resulting in greater profits. Drones with thermal imaging cameras allow
a single remote pilot in command to monitor livestock. The operator can check on the herd to
see if there are any injured, missing, or birthing animals.

4.5. Robots: A machine resembling a human being and able to replicate certain human
movements and functions automatically.

Agriculture Robots: Field robots work with respect to their environment and medium and
adapt themselves to the required conditions. Mobile robots are those which possess mobility
with respect to a medium; the entire system moves with respect to the environment. There are
broadly six types of robots: 1. Demeter 2. Weed control robot 3. Robot gantry
4. Tree robot 5. Forester robot and 6. Fruit picking robot.

1. Demeter: Demeter is a robot farmer that can cut crops. It looks like a normal harvester
but can drive by itself without any human supervision. Demeter has cameras that can
detect the difference between crop that has been cut and crop that hasn't.

2. Weed Control Robot: A four-wheel-drive weed-seeking robot was developed; the task of
its weed-removing device is to remove or destroy the weeds. Crops that are grown in
rows can be weeded by running a hoe between the crop rows.

3. Robot Gantry: Traditional sprayers can be efficient, especially when they cover large
areas, but a robotic gantry could apply both liquid sprays and fertiliser while regulating
itself according to current weather conditions. If it became too windy, the gantry could simply
stop and wait until conditions improved.

4. Tree Robot: It is very important in the biology community to understand the interaction
between the atmosphere and the forest environment, but 90% of all interaction between the
environment and atmospheric conditions happens high up in the forest canopy. A fearless
mobile robot is helping scientists monitor environmental changes in forests. The Treebot helps
by being stealthy enough to travel through the forest canopy along specially constructed
cabling, night and day. It consists of networked sensors, a webcam, and a wireless network
link; it is solar-powered and moves up and down special cables to take samples
and measurements for vital analysis.

5. Forester Robot: This is a special type of robot used for cutting up wood, tending trees,
pruning Christmas trees, and harvesting pulp and hardwood in forests. It employs
special jaws and axes for chopping branches.

6. Fruit Picking Robot: Fruit picking robots need to pick ripe fruit without damaging the
branches or leaves of the tree, and must be able to access all areas of the tree being
harvested. The robot can distinguish between fruit and leaves by using video image capture.
The camera is mounted on the robot arm, and the colours detected are compared with properties
stored in memory. If a match is obtained, the fruit is picked. If fruit is hidden by leaves, an air
jet can be used to blow the leaves out of the way so that a clearer view and access can be
obtained. The arm can move in, out, up and down, and in cylindrical and spherical motion
patterns. The pressure applied to the fruit is sufficient for removal from the tree, but not enough
to crush the fruit. The shape of the gripper depends on the fruit being picked.
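The colour-matching step described above (detected colours compared with properties stored in memory) can be sketched as a simple distance test in RGB space. The reference colours and tolerance below are illustrative assumptions, not values from any actual picking robot.

```python
import math

# Reference colours "stored in memory" (illustrative RGB values).
RIPE_TOMATO = (200, 40, 30)
LEAF_GREEN = (60, 140, 50)

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def is_ripe_fruit(pixel, reference=RIPE_TOMATO, tolerance=60):
    """Return True if the pixel colour is within `tolerance` of the
    stored reference colour, i.e. a match is obtained and the fruit
    would be picked."""
    return colour_distance(pixel, reference) <= tolerance

print(is_ripe_fruit((190, 50, 35)))   # pixel close to ripe tomato -> True
print(is_ripe_fruit((65, 135, 55)))   # pixel close to leaf green -> False
```

A real system would run such a test per pixel over the camera frame (typically in a colour space less sensitive to lighting, such as HSV), but the match-against-memory logic is the same.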

4.6. Internet of Things

IoT is an environment in which objects, animals or people are equipped with unique
identifiers and are capable of data transmission over the Internet without the need for
human-to-human or human-to-computer interaction.

Types of the Internet of Things

1. Tagging Things: Real-time item traceability and addressability through RFIDs; widely used
in transport and logistics, and easy to deploy with RFID tags and RFID readers.

2. Feeling Things: Sensors act as primary devices to collect data from the environment.

3. Shrinking Things: Miniaturisation and nanotechnology have enabled smaller things to
interact and connect with smart devices.

4. Thinking Things: Embedded intelligence in devices through sensors forms the network
connection to the Internet.
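In such a network, each "thing" transmits readings tagged with its unique identifier. A minimal sketch of one sensor payload in Python; the field names are illustrative assumptions, not from any IoT standard.

```python
import json
import uuid
from datetime import datetime, timezone

def make_reading(device_id, sensor_type, value, unit):
    """Package one sensor reading with the device's unique identifier,
    ready for transmission over the network as JSON."""
    return json.dumps({
        "device_id": device_id,  # the unique identifier of the "thing"
        "sensor": sensor_type,
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# A soil-moisture node identified by a UUID "tag".
node_id = str(uuid.uuid4())
payload = make_reading(node_id, "soil_moisture", 27.4, "percent")
print(payload)
```

A gateway or farm server receiving such payloads can aggregate them by `device_id` without any human intervention, which is precisely the human-free data transmission the definition above describes.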

4.7. Other general applications of AI in Agriculture

Soil Analysis and Monitoring: AI can be used to monitor soil health with the help of sensors,
cameras, and infrared rays that scan the soil for its nutritional properties (Sennaar, 2019;
Baruah, 2018). This also helps in understanding the reaction of specific seeds to different soils,
the impact of weather changes on the soil, and the probability of the spread of diseases and
pests (Irimia, 2016). With such data in hand, the efficiency of crop inputs is improved, leading
to cost savings and productivity gains for farmers. Currently, an average of 207.56 kg of
chemical fertiliser is used per hectare annually in Haryana (one of the highest among Indian
states). Besides being costly for farmers, fertilisers also introduce harmful substances into the
food chain through crops and the water table (Indian Fertiliser Scenario, 2013).

Crop Sowing: AI in crop sowing is used essentially to drive predictive analytics that determine
when and how to sow. It helps in making predictions on the right time to plant, apply fertilisers,
harvest, bale, till, etc., based on climate data, historical conditions, market conditions for inputs
and outputs, personal information, and so on. Crops can also be sown using AI-aided
machinery at equidistant intervals and at optimal depths.
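As a toy illustration of how such predictive logic might combine climate and soil data, the sketch below encodes sow/wait advice from a few thresholds. The threshold values are hypothetical, not from any cited system; a real system would learn them from climate and yield history.

```python
def sowing_advice(soil_temp_c, soil_moisture_pct, rain_forecast_mm):
    """Toy rule-of-thumb sowing decision from three inputs. The
    thresholds below are purely illustrative assumptions."""
    if soil_temp_c < 15:
        return "wait: soil too cold for germination"
    if soil_moisture_pct < 20 and rain_forecast_mm < 10:
        return "wait: insufficient moisture and no rain expected"
    return "sow: conditions favourable"

print(sowing_advice(22, 35, 5))    # warm, moist soil
print(sowing_advice(12, 40, 20))   # soil still too cold
```

In a deployed system the hard-coded thresholds would be replaced by a model trained on historical weather, soil and yield data, but the decision interface, inputs in, recommendation out, stays the same.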

Weed and Pest Control: Average losses of up to 90% of total crop production have been
reported due to weed infestation (Meena, 2015). Similarly, average losses of up to 19%
have been reported due to pests (Dhaliwal et al., 2015). This leads to greater use of pesticides,
further contaminating the soil and groundwater. As of today, there are 250 identified species
of weeds that have become completely resistant to herbicides (Sennaar, 2018), presenting a
severe threat to the sustainability of crop production. Pesticide resistance is also on the rise.
The purchase of insecticides and pesticides contributes approximately 5% to total input costs
in agriculture, and this cost is rising in both percentage and absolute terms (Price Policy for
Kharif Crops, 2017-18; Price Policy for Rabi Crops, 2014-15).

Crop Harvesting: An estimated 40% of annual agriculture costs go into the employment of
labour, predominantly for sowing and harvesting (Sennaar, 2019). AI-enabled robots for
harvesting can lead to huge cost savings by reducing the need for approximately 4 agricultural
labourers per acre of land (Panpatte, 2018). Furthermore, crops can be sorted according to pre-
identified grades at the time of harvest, saving time and enhancing the quality of crops.
However, AI is likely to change the way labour is employed in agriculture. Although
conventional manual jobs will be replaced, AI presents new opportunities for job creation.

Supply Chain Management: Policymakers have not yet been able to tackle the agricultural
supply chain challenge. On the one hand, farmers do not receive a suitable price for their
produce, which continues to rot in mandis (marketplaces); on the other, food consumers
either end up paying exorbitant prices or are malnourished. Although AI in agricultural supply
chain management is yet to make major inroads, its informed application in supply chain
planning and optimisation, including demand forecasting and logistics, can lead to huge cost
savings for farmers and solve the information asymmetry problem for buyers.

5. Literature available / Research studies / Case studies

1. Learning in virtual reality: Effects on performance, emotion and engagement

- Devon and Adrian (2018)

Introduction: Recent advances in virtual reality (VR) technology allow for potential learning
and education applications. For this study, 99 participants were assigned to one of three
learning conditions: traditional (textbook style), VR and video (a passive control). The learning
materials used the same text and 3D model for all conditions. Each participant was given a
knowledge test before and after learning. Participants in the traditional and VR conditions had
improved overall performance (i.e. learning, including knowledge acquisition and
understanding) compared to those in the video condition. Participants in the VR condition also
showed better performance for ‘remembering’ than those in the traditional and the video
conditions. Emotion self-ratings before and after the learning phase showed an increase in
positive emotions and a decrease in negative emotions for the VR condition. Conversely there
was a decrease in positive emotions in both the traditional and video conditions. The Web-
based learning tools evaluation scale also found that participants in the VR condition reported
higher engagement than those in the other conditions. Overall, VR displayed an improved
learning experience when compared to traditional and video learning methods.

Materials and methods:

All participants were first-year Psychology students at the University of Warwick
(UK), who completed the study for course credit. A total of 99 participants (84 females,
15 males), who were 19 years of age on average, were assigned randomly to one of three
learning conditions: traditional (textbook style), VR and video. The questionnaires and
learning materials were presented on a 19" LCD computer screen (1920 × 1080 pixels,
60 Hz) using Microsoft Word and Qualtrics. Responses were collected through mouse
and keyboard. An HTC Vive (Xindian, New Taipei, Taiwan) (Figure 1) was used for the VR
condition. The headset weighs 550 g and displays a 3D environment via two OLED displays
(1080 × 1200 pixels per eye, 90 Hz) with a field of view of 100 × 110 degrees. Participants
controlled the VR environment with the standard handheld HTC Vive controller. The learning
materials used the same text and 3D model of a plant cell for all three conditions. The VR
condition presented the model from the application ‘Lifeliqe Museum’ on the HTC Vive
headset, allowing the participants to see and interact with a 3D model, with accompanying
descriptive text.

An adapted version of the Differential Emotions Scale (DES, Izard et al. 1974), with nine
emotion categories (interest, amusement, sadness, anger, fear, anxiety, contempt, surprise and
elatedness), was used to measure participants’ mood before and after the learning phase.
Participants were asked to rate to which extent the emotional adjectives, each represented with
three words (e.g., surprised, amazed, astonished), applied to them on a scale from 1 (not at all)
to 5 (very strongly). Five of the categories related to negative emotions, and four related to
positive emotions. The procedure was the same for each participant, starting with a pretest and
the DES, followed by the learning phase.

Results and Discussion:
The knowledge scores were analysed with a mixed-design ANOVA with the between-
subject factor condition (textbook, video, virtual) and the within-subject factor test (pre-,
post). The ANOVA revealed a significant main effect for test, F(1,96) = 273.25, p < 0.001,
ηp2 = 0.740, indicating that knowledge improved overall by 23.2% from pretest to post-test,
and a significant test × condition interaction, F(2,96) = 6.80, p = 0.002, ηp2 = 0.124. The
significant interaction was further analysed with two split-up ANOVAs, separately for the
pretest and for the post-test. The ANOVA on the post-test data revealed a significant condition
effect, F(2,96) = 3.51, p = 0.034, ηp2 = 0.068. Post-hoc least significant difference (LSD)
tests showed that participants in the VR condition scored significantly higher than participants
in the video condition (56.5% vs. 43.9%, respectively; p = 0.009). The pretest ANOVA showed
no significant effect (p = 0.793).
The confidence ratings showed a similar pattern of results as the knowledge
data. The equivalent mixed-design ANOVA revealed a significant effect for test, F(1,96) =
266.96, p < 0.001, ηp2 = 0.736, as a result of participants being more confident in the post-
test than in the pretest (3.24 vs. 2.24, respectively), as well as a significant test × condition
interaction, F(2,96) = 5.80, p = 0.004, ηp2 = 0.108, because of less confidence gain in
the video than in the VR or textbook conditions (0.71 vs. 1.12 and 1.18, respectively).
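The partial eta squared values reported above can be recovered directly from each F statistic and its degrees of freedom via ηp² = F·df_effect / (F·df_effect + df_error); a quick check in Python:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its
    degrees of freedom: eta_p^2 = F*df1 / (F*df1 + df2)."""
    return f_value * df_effect / (f_value * df_effect + df_error)

# Reproduce the effect sizes reported for the study's ANOVAs.
print(round(partial_eta_squared(266.96, 1, 96), 3))  # test effect (confidence)
print(round(partial_eta_squared(6.80, 2, 96), 3))    # test x condition (knowledge)
print(round(partial_eta_squared(3.51, 2, 96), 3))    # condition effect (post-test)
```

Running this reproduces the reported 0.736, 0.124 and 0.068, a useful sanity check when transcribing ANOVA results.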

Table 1. Number of participants (n), knowledge scores (percentage correct) and
confidence ratings (1-5) in the pretest and post-test, separately for the three conditions.

I. Knowledge
Condition   n    Pre-test   Post-test   Difference
Virtual     34   28.1%      56.5%       28.5%
Video       34   27.9%      43.9%       16.1%
Textbook    31   25.3%      50.2%       24.9%

II. Confidence rating
Condition   n    Pre-test   Post-test   Difference
Virtual     34   2.24       3.35        1.12
Video       34   2.33       3.04        0.71
Textbook    31   2.14       3.32        1.18

Results: Overall, VR does seem to be a potential alternative to traditional textbook-style
learning, with similar performance levels and improved mood and engagement. These
benefits may have a longer-term impact on learning, such as improvements resulting from the
learning experience. However, the results may be partially because of the novelty of the VR
equipment, so the improvements may not be sustained over longitudinal studies. Conversely,
these improvements could increase over time, as individuals become more familiar with the
equipment and more able to navigate it easily. Therefore, further longitudinal studies are
needed to address these questions. VR does show enormous potential, not only as an option to
supplement or replace traditional learning methods, but to develop novel learning experiences
that have not been used before.

2. The Impact of an Augmented Reality Application on Learning Motivation of Students

-Khan, Kevin , and Jacques (2019)

Introduction: The research on augmented reality applications in education is still in an early
stage, and there is a lack of research on the effects and implications of augmented reality in the
field of education. The purpose of this research was to measure and understand the impact of
an augmented reality mobile application on the learning motivation of undergraduate health
science students at the University of Cape Town. We extend previous research that looked
specifically at the impact of augmented reality technology on student learning motivation. The
intrinsic motivation theory was used to explain motivation in the context of learning. The
attention, relevance, confidence, and satisfaction (ARCS) model guided the understanding of
the impact of augmented reality on student motivation, and the Instructional Materials
Motivation Survey was used to design the research instrument.

Methodology: The intrinsic motivation theory was used to understand motivation in the context
of learning. The ARCS model of motivational design was used to understand the impact of AR
technology on student motivation towards learning. The impact on student learning motivation
was measured by comparing the learning motivation of students before and after using an AR
mobile application, using pre-usage and post-usage questionnaires.

Results: The results in Table 1 show that there is a positive significant difference in the
motivation dimensions, i.e., attention (Z = 7.03), confidence (2.17), satisfaction (2.44)
and overall (3.06), while the relevance dimension was found non-significant (Z = -0.76).

Table 1 Significance of differences in mean values (n=78)

S. No.   Indicator      Pre-test mean   Post-test mean   Z value
1.       Attention      2.93            3.83             7.03**
2.       Relevance      3.37            3.26             -0.76
3.       Confidence     2.98            3.30             2.17*
4.       Satisfaction   2.96            3.33             2.44*
5.       Overall        3.05            3.49             3.06**
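Z statistics of this kind for pre/post comparisons are commonly obtained from a Wilcoxon signed-rank test on paired ratings (that the authors used exactly this test is an assumption). A pure-Python sketch of the normal-approximation Z, run on synthetic data rather than the study's:

```python
import math

def wilcoxon_z(pre, post):
    """Normal-approximation Z for the Wilcoxon signed-rank test on
    paired samples (zero differences dropped, tied |d| given average
    ranks). Positive Z means post > pre overall."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank absolute differences, averaging ranks over ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j + 2) / 2.0          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return (w_plus - mean) / sd

# Synthetic paired ratings: every participant improves by one point.
pre = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
post = [p + 1 for p in pre]
print(round(wilcoxon_z(pre, post), 2))  # -> 2.8
```

With uniform improvement across all ten synthetic participants the statistic comes out strongly positive, mirroring the significant attention and overall scores in the table.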

Conclusion: The objective of this research was to understand the impact of an AR mobile
application on the learning motivation of undergraduate health science students at UCT. The
literature indicated that there is insufficient research on the impact of using mobile AR in
education, and there is room to explore the potential of AR to improve student learning
motivation and contribute to improved academic achievement. The literature review
summarized various concepts which led to developing the research questions that were based
on the attention, relevance, confidence, and satisfaction (ARCS) model of motivational design.
Augmented reality (AR) was defined as combining real and virtual worlds, supplementing the
real world with computer-generated virtual objects in real-time, and AR was explained in the
context of education. Mobile AR was discussed given that AR may easily be used through
mobile devices and the design involved using the Anatomy 4D mobile application as the
educational AR tool.

6. Conclusion

Above all, AR, VR and AI technologies improve the efficiency of agricultural
production and of comprehensive agricultural resource utilisation; they can simulate
agricultural product market transactions and agricultural production management, and they
can support agricultural and technical education, training, research, etc. They can permeate
various fields of agricultural production. Research on the application of these technologies
in agriculture is therefore of great significance. Life-science research involving space-time
quantitative analysis is extremely complex and has a very long cycle; applying these
technologies can greatly reduce the research cycle and yield direct experimental results.
However, the application of these technologies should follow scientific methods and serve as
an effective supplementary means of scientific research in order to better promote the
development of agriculture.
Digital technologies are arriving in stronger and better ways. It is important for all
stakeholders to understand technologies such as AR, VR and AI for their better implementation
in the field of agriculture. The extension system has more responsibility than other stakeholders,
not only in understanding them but also in conducting applied research on their use in areas
such as content development, content analysis, effectiveness and economic analysis. It is
recommended that institutes that can afford these technologies incorporate AR, VR and AI
in the training of human resources for a better understanding of agricultural technologies.

7. Bibliography/ Literature cited/ Reference

DEVON, A. AND ADRIAN, V. M., 2018, Learning in virtual reality: Effects on performance,
emotion and engagement. Res. Learn. Tech., 26 (1): 1-13.

FENG, Y., ZHANG, J., ZHAO, Y., ZHAO, J., TAN, C. AND LUAN, R., 2010, The research
and application of virtual reality technology in agriculture sciences. Int. Fed.
Info. Proces., 4 (2): 546-550.

HURST, W., MENDOZA, F. R. AND TEKINERDOGAN, B., 2021, Augmented reality in
precision farming: Concepts and applications. Smart Cities, 4 (4): 1454-1468.
DOI: https://doi.org/10.3390/smartcities4040077

JOSEPH, J. AND JEREMY, G., 2021, Augmented reality + virtual reality: Privacy &
autonomy considerations in emerging, immersive digital worlds. Future of Privacy
Forum, Washington D.C., USA, pp: 2.

KHAN, T., KEVIN, J. AND JACQUES, O., 2019, The impact of an augmented reality
application on learning motivation of students. Adv. Hum. Comput. Inter., 2019 (1): 1-
14.

LUIS, M., LOURDES, M. AND MANUEL, D., 2020, Augmented and virtual reality
evolution and future tendency. App. Sci., 10 (1): 1-23. DOI: 10.3390/app10010322.

MONIQUE, E. AND GLIMENEZ, C. C., 2020, Virtual reality and augmented reality
applications in agriculture: A literature review. Symposium report. DOI: 10.1109/
SVR51698.2020.00017.

TEMPLIN, T., DARIUSZ, P. AND RYSZKO, M., 2022, Using augmented and virtual reality
(AR/VR) to support safe navigation on inland and coastal water zones. Rem. Sens.,
14 (1): 1-23.

ZHOU, Y. AND CALVO, J. V., 2018, Virtual reality to boost agriculture in Colombia: A
report. DOI: 10.13140/RG.2.2.10052.48006.

8. Discussion

1. How can extension agent influence small and marginal farmers to adopt these
technologies?

Ans: Yes, these technologies will be helpful in making farmers understand the most complex
tasks in an easier and more appropriate way. It is suggested that videos or video modules be
developed for both AR and VR for a better understanding of the situation. AR enabled by AI
is performing better, especially in mobile applications such as Plantix, Google Lens, etc.

2. Which ICAR institutions developed farm monitoring systems using these
technologies?

Ans: None. There are no such monitoring systems developed by ICAR based on AR and VR.
But under the NAHEP scheme, videos and video modules are being developed in coordination
with the SAUs and research institutes. Wherever data are available, whether archived or live,
they can easily be converted into monitoring systems.

3. Crime rates are increasing with the use of these technologies. How can this be overcome?

Ans: Technologies can always be used in many ways. It is true that digital scams are
increasing day by day, but the means of overcoming them should also be more digital than
traditional. UPI payments have greatly changed the way cash is handled nowadays. We hear
here and there of people losing money through UPI payments, but we cannot give up the
technologies over a few incidents. The phenomenon should be seen in a larger aspect and at
the macro level.

4. Can we have AR and VR in digital evaluation?

Ans: Yes, we can. These technologies are highly helpful in the evaluation of viva and practical
components whose materials are costly or hazardous to handle. They can also be used in
crop evaluation and help us understand the micro crop environment in a better way.

5. Can it be a substitute for the extension system? What about employment
opportunities?

Ans: No, these technologies can never be a replacement for the extension system. But the
extension system needs to adopt them to bridge the gap between information haves and
have-nots. It is the responsibility of extension to harvest the advantages of these technologies
for training and skill development.

Regarding employment, as technologies change and replace old ones, there is a
probability of losing jobs in traditional systems. This can be addressed by upgrading the
human resources of traditional systems with new practices, and new jobs will be created as
per the requirements of new skills.

6. Can KVKs develop content, and will it reach farmers?

Ans: No, for a unit as small as a KVK we should not recommend these technologies. It is
better for ICAR to take up a project for content development, which can then be used by the
entire KVK system. At present we may not recommend that individual farmers buy them, but
they will be highly helpful for institutions in disseminating content to their learners. These are
advanced technologies; in future they may be scaled down and adapted for various purposes,
at which time they can be recommended to farmers.

7. Which is better among AR, VR and AI?

Ans: It is difficult to choose among the three as they are closely related to each other.
Currently VR is less expensive to adopt than AR. But it is also important to understand that
mobile AR is emerging rapidly, as we can see in many applications. The future is mixed reality,
in which all three components work together. AI is also doing well in mobile applications for
crop pest diagnosis.

8. Are these technologies locally available?

Ans: No, they are not locally available as of now. But there are devices that can convert a
mobile phone into a VR headset.

9. Are there any research studies on IoT usage in agriculture in India?

Ans: Yes, many publications are available, especially in the fields of irrigation, precision
farming, soil moisture measurement, etc. But the current seminar focused more on AR, VR
and AI, so those concepts were given more weightage.

10. Are there employment opportunities with these technologies?

Ans: Yes, there is scope for research on how effective these technologies are in learning and
training. They may also help generate employment in content generation, content verification,
content updating, development, etc.

UNIVERSITY OF AGRICULTURAL SCIENCES, BANGALORE
DEPARTMENT OF AGRICULTURAL EXTENSION
COLLEGE OF AGRICULTURE, GKVK, BENGALURU - 560 065

Name : Guntukogula Pattabhi Sandeep Venue : Dwarakinath Hall


Class : II Ph.D. Time : 10:30 am
ID. No. : PAMB 0028 Date : 10-09-2022
Seminar-I
Augmented Reality (AR), Virtual Reality (VR) and Artificial Intelligence (AI) for
Agriculture: A Modern Way to Address Traditional Problems
Agriculture includes a variety of processes in which a number of goods are produced
using natural resources. The agricultural industry consists of numerous operations, which
include harvesting crops and plants, feeding animals, grazing, etc. It involves soil preparation
for maximum returns, crop enhancement, agricultural and horticultural services, landscaping
services, veterinary services, and labour or farm management. By 2050, the global population
is projected to surpass 9 billion, and agricultural production needs to increase by about 70%
to meet demand. Due to multiple economic, environmental, and sociological forces, land,
water, and resources are becoming inadequate, and this has a particularly devastating effect
on developing countries. The traditional techniques of several agricultural procedures are not
keeping pace with the times and are not productive enough to meet the future food demand
of a massive population. Especially in the Indian scenario, the small size of landholdings,
lack of adoption of scientific technologies, lack of optimum use of mechanization, market
instability, and the shortage of human resources in the extension system and of their skill
levels are major constraints in improving the productivity of agricultural produce. Extension
systems of ICAR, SAUs, NGOs and private organizations are working to improve the
capacities of stakeholders and to build human resources for upscaling productivity. Extension
systems are adopting the required communication technologies for better dissemination of
information to the needy.
Information and Communication Technologies (ICTs) are advancing day by day and changing
their size and shape. From radio to real-time governance, ICTs have come a long way in a short
duration. The generation is stepping ahead with tech easing human lives, business, and
agriculture. Augmented Reality (AR), Virtual Reality (VR), and Artificial Intelligence (AI) in
agriculture are bringing new hope for the agriculture sector to avoid the upcoming food crisis.
Leveraging reality tech for agricultural implementations would help in improving production.
With this brief background, the current seminar has been conceptualized with the following
objectives:
1.To understand the concept of AR, VR & AI
2.To explore the different types of AR, VR & AI technologies
3.To know the potential areas for applying the AR, VR & AI for agriculture
4.To review the relevant research studies

Augmented reality: Augmented reality is a technology that works on computer vision-based


recognition algorithms to augment sound, video, graphics, and other sensor-based inputs on
real-world objects using the camera of a device. AR can be grouped broadly into two types,
i.e., marker-based AR and AR without markers. Context-dependent combination of the virtual
and real worlds, real-time interactivity and three-dimensional view are the important
characteristics of AR. In agriculture, AR can be implemented in areas such as monitoring
farms visually, training new farmers and facilitating tool assessment. The advantages of AR
include an enhanced experience, ease of use and support for many activities, whereas
unaffordability, privacy, security and addiction are a few disadvantages of using AR.

Virtual reality: VR is defined as a three-dimensional, computer or electronic device generated


environment, which can be explored and interacted with by a person. Virtual practicals, virtual
theory programmes, virtual courses, and virtual culture are the broad modes of disseminating
agricultural information. Exploring new places and things without visiting them, lower risk,
improved interest and cost effectiveness are the major advantages of using virtual reality tools.
The technical skill required to operate, impact on the body, neglect of the real world, and
psychological damage are a few disadvantages of using VR tools.

Artificial Intelligence: AI is defined as a field of computer science, which focuses on the


creation of machine systems which behave intelligently, thinking and acting at the same level
as human beings to achieve human-like performance in all cognitive tasks and fields using
precise logical reasoning. Reactive machines, limited memory, artificial super intelligence,
etc. are the major types of AI. Expert systems, robots, drones, IoT, etc. are the major
technologies that popularly use AI. Sowing of crops, monitoring, weed and pest management,
harvesting, etc. are the major areas where AI or AI-enabled technologies can be used
efficiently in agriculture.

Research studies

Devon and Adrian (2018) reported that, among the mean differences between pre-test
and post-test for the different content formats, virtual content (28.50%) scored the highest,
followed by textbook content (24.90%) and video (16.10%).

Khan et al. (2019) reported that there is a positive significant difference in the motivation
dimensions, i.e., attention (Z = 7.03), confidence (2.17), satisfaction (2.44) and overall
(3.06), while the relevance dimension was found non-significant (Z = -0.76).

Conclusion: Digital technologies are arriving in stronger and better ways. It is important
for all stakeholders to understand technologies such as AR, VR and AI for their better
implementation in the field of agriculture. The extension system has more responsibility than
other stakeholders, not only in understanding them but also in conducting applied research on
their use in areas such as content development, content analysis, effectiveness and economic
analysis. It is recommended that institutes that can afford these technologies incorporate AR,
VR and AI in the training of human resources for a better understanding of agricultural
technologies.

References

DEVON, A. AND ADRIAN, V. M., 2018, Learning in virtual reality: Effects on performance,
emotion and engagement. Res. Learn. Tech., 26 (1): 1-13.

KHAN, T., KEVIN, J. AND JACQUES, O., 2019, The impact of an augmented reality
application on learning motivation of students. Adv. Hum. Comput. Inter., 2019 (1): 1-
14.
