
AGGRANDIZE: ARTIFICIAL INTELLIGENCE (THE NEXT DIGITAL FRONTIER FOR ENERGY REVOLUTION)

Name: Dhrumil Savalia

Roll no: 17BPE025

Technical essay on artificial intelligence (AI)

Artificial intelligence is one of the most reliable emerging technologies in the world. It is built to simplify human tasks that are time consuming and error prone, i.e. where mistakes are bound to happen. AI technology is able to recognize complex patterns and perform different tasks on the basis of those patterns.

AI can also be referred to as augmented intelligence. It is commonly divided into three types based on the workload it can handle and its efficiency.

Weak or narrow AI is applied to a specific domain, for example language translators, virtual assistants, self-driving cars, AI-powered web search and recommendation engines. This type of AI is not able to learn and perform new tasks.

Strong AI, or generalized AI, can interact with and operate across a wide variety of independent and related tasks. It can learn new tasks to solve new problems, and it does this by teaching itself new strategies.

Super AI, or conscious AI, is AI with human-level consciousness, which would require it to be self-aware; self-awareness itself is not yet precisely defined.

The technology world has offered many definitions of AI. It is described as giving machines cognitive capability, i.e. imparting the ability to think and learn the way humans do. It is also described as a set of technologies that identify patterns in data and apply them to new information.

It can further be seen as a set of mathematical algorithms that enables computers to find very deep patterns that we may not have known existed, without us having to hard-code them manually. A study by PwC suggests that AI could add around $16 trillion to global GDP by 2030.

It aggregates knowledge from different sources into one centralized cloud and provides it in an accessible, understandable manner.

AI systems typically demonstrate behaviours associated with human intelligence: learning, reasoning, problem solving, knowledge representation and perception.

For data scientists, AI is a way of exploring and classifying data to meet specific goals. For everyday users, it is like the assistant we talk to daily on our mobile phones or laptops.

Chatbots have natural language processing capability. In healthcare they are used to question patients and run basic diagnoses like real doctors; in education they provide students with easy-to-use conversational interfaces and on-demand online tutoring. These are among the main uses of AI speech-to-text technology.

Computer vision is a form of AI used to give self-driving cars a view of the street so they can avoid obstructions on the road. It also helps automate tasks such as detecting cancerous moles on the skin and finding symptoms in X-ray and MRI scans.

In banking, it can be used for detecting fraudulent transactions, identifying credit card fraud and preventing financial crimes.

In the medical field, it can help doctors arrive at more accurate preliminary diagnoses, read medical imaging and find appropriate clinical trials for patients.

It has the potential to access enormous amounts of information, imitate humans, make recommendations and correlate data. In the oil and gas industry it can be used to develop petroleum exploration techniques and distinguish between rock samples.

AI works on the concept of cognitive computing, which enables people to create a profoundly new kind
of value, finding answers and insights locked away in volumes of data.

Cognitive computing mirrors some of the key cognitive elements of human expertise: these systems reason about problems the way a human does, using similar processes to reason about information, and they can read and do this at massive speed and scale.

Unlike conventional computing solutions, which can only handle neatly organized structured data such as what is stored in a database, cognitive computing solutions can understand unstructured data, which makes up about 80% of data today.

They rely on natural language, which is governed by rules of grammar, context and culture. Natural language is implicit, ambiguous, complex and a challenge to process; certain idioms, for example, are particularly difficult to parse. Cognitive systems read and interpret text like a person, by structurally discerning meaning from the semantics of the written material.

This is very different from simple speech recognition. These systems try to understand the real intent of the user's language and use that understanding to draw inferences through a broad array of linguistic models and algorithms, and they do this by learning from their interactions with us.

Machine learning is a subset of AI that uses computer algorithms to analyze data and make intelligent decisions based on what it has learned, without being explicitly programmed. Its algorithms are trained with large sets of data, and they learn from examples.

Deep learning is a specialized subset of machine learning (ML) that uses layered neural networks to simulate human decision making. Its algorithms can label and categorize information and identify patterns.

Artificial neural networks are often referred to simply as neural networks. A neural network in AI is a collection of small computing units called neurons that take incoming data and learn to make decisions over time. They are often layered deep, and they are the reason deep learning algorithms become more effective as data sets increase in volume.

AI is different from data science, which involves statistical analysis, data visualization, machine learning (ML) and more.

Data science can use many AI techniques to derive insight from data and draw inferences using ML algorithms, and both fields can handle significantly large volumes of data.

Machine learning relies on defining behavioural rules by examining and comparing large data sets to find
common patterns.

Supervised learning is a type of machine learning in which the algorithm is trained on human-labeled data. Unsupervised learning is another type of machine learning that relies on giving the algorithm unlabeled data and letting it find patterns by itself: the input is provided without labels, and the machine infers their qualities on its own.
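
As a rough illustration of the difference, the sketch below (using scikit-learn and a tiny toy data set invented for this example) fits a supervised classifier on labeled points and then lets an unsupervised clustering algorithm group the same points without any labels.

```python
# Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
# The toy points and labels below are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: the features X come with human-provided labels y.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])                        # labels supplied by a person
clf = LogisticRegression().fit(X, y)              # learn the mapping X -> y
print(clf.predict([[1.2, 1.9]]))                  # predict the label of a new point

# Unsupervised: the same features, but no labels are given.
km = KMeans(n_clusters=2, random_state=0).fit(X)  # the algorithm infers groups itself
print(km.labels_)                                 # cluster assignments it discovered
```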

Clustering, or grouping data, might involve, for example, providing the algorithm with a constant stream of network traffic and letting it independently learn the baseline network activity as well as outlier and possibly malicious behaviour happening on the network.

A third type of ML algorithm is reinforcement learning, which relies on providing the algorithm with a set of rules and constraints and letting it learn how to achieve its goals. The desired goal is defined along with the allowed actions and constraints, and the model learns to reach the goal without the programmer having to explicitly program the solution.
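
To make the trial-and-error idea concrete, here is a minimal tabular Q-learning sketch; the five-state "corridor" environment, the reward of 1 for reaching the last state and all hyperparameter values are assumptions invented only for this illustration.

```python
# Minimal Q-learning sketch: the goal, allowed actions and constraints are fixed,
# and the agent discovers how to reach the goal by trial and error.
# The 5-state corridor environment below is invented purely for illustration.
import random

n_states, actions = 5, [0, 1]              # actions: 0 = move left, 1 = move right
goal = n_states - 1                        # reaching the last state yields a reward
Q = [[0.0, 0.0] for _ in range(n_states)]  # value estimate for each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != goal:
        # explore occasionally, otherwise act greedily on current estimates
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[s][x])
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        reward = 1.0 if s_next == goal else 0.0
        # move the value estimate toward reward + discounted best future value
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(actions, key=lambda x: Q[s][x]) for s in range(n_states)])  # learned policy (1 = move right)
```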

Supervised learning can be divided into three categories: regression, classification and neural networks.

Regression estimates continuous values. A regression model is built by looking at the relationship between the features X and the result Y, where Y is a continuous variable.
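
A minimal sketch of this, assuming scikit-learn and a made-up one-feature data set whose target is roughly twice the feature, is shown below.

```python
# Minimal regression sketch: fit the relationship between a feature X and a
# continuous result Y (numbers invented for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4], [5]])      # a single feature column
Y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])      # continuous target, roughly 2 * X

model = LinearRegression().fit(X, Y)
print(model.coef_, model.intercept_)          # learned slope and intercept
print(model.predict([[6]]))                   # estimated Y for an unseen X
```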

Neural networks, in this setting, are based on a discretized model, assigning discrete class labels Y on the basis of many input features X.

Classification models classify results into two or more categories; they include decision trees, support vector machines, logistic regression and random forests. In the training data, each column is a feature and each row is a data point. Classification is the process of predicting the class of given data points.
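
As a small illustration, the sketch below trains a decision tree on scikit-learn's built-in iris data set (chosen only because it ships with the library); the depth limit is an arbitrary assumption.

```python
# Minimal classification sketch: each row is a data point, each column a feature,
# and the model predicts a discrete class label for each point.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)             # 4 feature columns, 3 flower classes
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict(X[:5]))                    # predicted classes for the first 5 rows
```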

In ML, data sets are typically split into training, validation and test sets. The training subset is the data used to train the algorithm. The validation subset is used to validate results and fine-tune the algorithm's parameters. The test data is used to evaluate how good the model is, through defined metrics such as accuracy, precision and recall.
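
The sketch below illustrates one common way to do this with scikit-learn; the 60/20/20 split proportions and the choice of logistic regression on the iris data are assumptions made only for the example.

```python
# Minimal sketch: split the data into training, validation and test sets, then
# score the model with accuracy, precision and recall (iris data for illustration).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = load_iris(return_X_y=True)
# 60% training, 20% validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))   # used for tuning

y_pred = model.predict(X_test)                       # final evaluation on held-out data
print("test accuracy :", accuracy_score(y_test, y_pred))
print("test precision:", precision_score(y_test, y_pred, average="macro"))
print("test recall   :", recall_score(y_test, y_pred, average="macro"))
```
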
Deep learning algorithms learn from unstructured data such as photos, videos and audio files. These algorithms do not directly map input to output; instead, they rely on several layers of processing units, each of which passes its output to the next layer for further processing.

Developing these algorithms involves configuring the number of layers and the type of functions that connect the output of each layer to the input of the next. The model is then trained with large numbers of annotated examples.

These algorithms keep improving as they are fed more data, unlike traditional ML algorithms, which tend to plateau as data sets grow. Deep learning is used in facial recognition, medical imaging, language translation and driverless cars.

Neural networks are a collection of small units called neurons. Like a biological neural network, these neurons take incoming data and learn to make decisions over time. They learn through a process called back propagation, which uses a set of training data that matches known inputs to desired outputs.

First, the inputs are plugged into the network and the outputs are determined; then an error function measures how far the given output is from the desired output; finally, the weights are adjusted to reduce that error.

Neurons are organized into collections called layers, which take in an input and provide an output. Hidden layers, i.e. the layers other than the input and output layers, take in a set of weighted inputs and produce an output through an activation function.
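
A minimal numpy sketch of this forward, error and adjustment loop is given below; the tiny XOR data set, the single hidden layer of four sigmoid units, the learning rate and the iteration count are all assumptions chosen only to keep the illustration small.

```python
# Minimal back-propagation sketch: push inputs forward through a hidden layer,
# measure the error against the desired outputs, and adjust the weights to
# reduce it.  The XOR truth table below is used only as a tiny example data set.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)        # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)          # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)          # hidden -> output layer
lr = 1.0                                               # learning rate

for step in range(5000):
    H = sigmoid(X @ W1 + b1)                           # forward pass: hidden layer
    Y = sigmoid(H @ W2 + b2)                           # forward pass: output layer
    err = Y - T                                        # how far we are from the target
    dY = err * Y * (1 - Y)                             # back-propagate through output
    dH = (dY @ W2.T) * H * (1 - H)                     # back-propagate through hidden
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)     # adjust weights and biases
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # outputs typically move toward the desired 0, 1, 1, 0
```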

Perceptrons are the simplest and oldest type of neural network: single-layered networks consisting of input nodes connected directly to output nodes. Hidden and output nodes have a property called bias, a special type of weight that is applied to a node after its inputs are considered. The activation function is run against the sum of the weighted inputs and the bias, and the result is forwarded as the output.
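
The sketch below shows a single perceptron computation in numpy; the weight values, the bias and the step activation that makes it behave like a logical AND gate are all invented for illustration.

```python
# Minimal perceptron sketch: weighted inputs plus a bias, passed through a step
# activation function whose result is forwarded as the output.
import numpy as np

def perceptron(x, w, b):
    total = np.dot(w, x) + b           # sum of the weighted inputs and the bias
    return 1 if total > 0 else 0       # step activation produces the output

w = np.array([0.6, 0.6])               # weights on the two input nodes
b = -1.0                               # bias applied after the inputs are summed

# with these values the perceptron behaves like a logical AND gate
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron(np.array(x, dtype=float), w, b))
```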

Convolutional neural networks (CNNs) are multi-layered networks that take inspiration from the animal visual cortex; they are useful in image processing and video recognition.

Convolution is a mathematical operation in which one function is applied to another, and the result is a mixture of the two functions. CNNs are good at detecting simple structures in an image and putting those simple features together to construct more complex features. Convolution occurs in a series of layers, each of which performs a convolution on the output of the previous layer.
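
To show what one such convolution does, the sketch below slides a tiny hand-written edge filter over a made-up 4x4 image in plain numpy; the filter values and image are assumptions chosen only so that the vertical edge stands out.

```python
# Minimal 2-D convolution sketch: slide a small filter over the image and mix
# the two together, producing a strong response where the filter's pattern
# (here, a change in brightness) appears.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the image patch and the filter, summed up
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)     # dark on the left, bright on the right
edge_filter = np.array([[1, -1]], dtype=float)    # responds where brightness changes

print(convolve2d(image, edge_filter))             # non-zero only along the vertical edge
```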

Recurrent neural networks (RNNs) perform the same task for every element of a sequence, with prior outputs feeding into the inputs of subsequent stages.

In a general feed-forward network, an input is processed through a number of layers and an output is produced under the assumption that two successive inputs are independent of each other. RNNs drop this assumption, which is what lets them make use of information in long sequences.
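
A minimal numpy sketch of a recurrent step is given below; the weight shapes, the random initialization and the three-step toy sequence are assumptions made only to show the hidden state being fed forward.

```python
# Minimal recurrent-step sketch: the same weights are applied to every element
# of the sequence, and the previous hidden state (the prior output) is fed into
# the next step alongside the new input.
import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(3, 2))           # input -> hidden weights
W_h = rng.normal(size=(3, 3))           # hidden -> hidden (recurrent) weights
b = np.zeros(3)

sequence = [np.array([1.0, 0.0]),       # three time steps of 2-dimensional input
            np.array([0.0, 1.0]),
            np.array([1.0, 1.0])]

h = np.zeros(3)                         # hidden state starts empty
for t, x in enumerate(sequence):
    h = np.tanh(W_x @ x + W_h @ h + b)  # prior output h feeds the next step
    print("step", t, "hidden state:", np.round(h, 3))
```
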
Google uses AI-powered speech-to-text in its Call Screen feature to handle scam calls and show the user a transcript of the caller's speech in real time. YouTube uses the same technology to provide automatic closed captioning.

With the help of neural networks, synthesizing the human voice is also possible; this is known as speech synthesis.

The field of computer vision focuses on replicating parts of the complexity of the human visual system and enabling computers to identify and process objects in images and videos the same way humans do.

It enables the digital world to interact with the physical world and plays a crucial role in augmented and mixed reality, which allow smartphones, tablets, smart glasses and other devices to overlay and embed virtual objects on real-world imagery.

There are also ethical issues and concerns related to AI that help us understand its potential negative impacts on the human world. AI can be used for nefarious purposes, for example by a dictatorial government to enforce its will on people, arrest dissenters and suppress democracy.

Ethics is not a technological problem; it is a human problem. In self-driving cars, the emerging ethical question is the trolley problem: if a car has to decide which accident to cause, it must pick between running into a sign and hurting the passengers in the vehicle, or running into pedestrians on the side of the road and potentially saving the passengers. This opens up many questions about who is to blame for the accident, the car owner or the car company.

AI-powered risk assessment systems in courts help predict the probability of a person reoffending, and hence provide guidelines for sentencing or granting parole based on the calculated risk of recidivism. There is concern that these systems can be biased against people of color.

A main area of research in AI is solving the bias problem in ML. One technique is to directly modify the data we feed the model, through methods such as data augmentation, so that the data is less biased. Still, this approach has raised plenty of questions among researchers, since by itself it is not an effective method of eliminating bias.

Experts building AI systems must guard against introducing bias, whether gender, social or any other form of bias.

For developers, there are four aspects of AI that help people perceive it as trustworthy. Transparency: people should be aware that they are interacting with an AI system and know what to expect from it. Accountability: any unexpected results should be traceable and able to be undone if required. Privacy: personal information should always be protected. Lack of bias: developers should use representative training data and conduct regular audits to detect any kind of bias creeping in.

AI can be of immense importance in the medical field for early detection of diseases such as cancer, sight loss and other problems, allowing quick treatment before the situation is aggravated. It can also be helpful in the agriculture sector by detecting diseases and other threats to crops early so that they can be kept healthy.

AI in the oil and gas industry is centred around two fields: machine learning and data science. British Petroleum developed a cloud-based geoscience platform known as “Sandy” to interpret geology, geophysics, historic and reservoir project information. The UK's National Data Repository (NDR) holds many terabytes of data on different wellbores, seismic surveys and pipelines, which can be interpreted with AI.

SparkCognition's AI systems will be used in the SparkPredict platform to monitor topside and subsea installations and to analyze sensor data to identify any kind of failure before it occurs.

Shell has also adopted AI software, the Azure-based C3 IoT (Internet of Things) platform, for its offshore operations; it is a similar kind of platform to SparkPredict.

To develop and use AI systems responsibly, AI developers must consider the ethical issues inherent in AI. They must have a realistic view of their systems and their capabilities, and be aware of the different forms of bias potentially present in them. With this awareness, developers can avoid unintentionally creating AI systems that have a negative rather than a positive impact.

Many researchers and scientists have voiced concerns about this issue when anticipating the future of AI. Professor Stephen Hawking said of the future of AI that “the rise of powerful AI will be either the best or the worst thing to happen to humanity; we do not yet know which”. Elon Musk has said that “AI is more dangerous than nuclear weapons”. Hence the future of AI should be decided by us: whether it will be helpful to humanity or a threat.
