
1. Voice Identification:
Speech recognition, also known as automatic speech recognition (ASR) or voice recognition, detects spoken words and phrases and converts them into a machine-readable format.
Recognizing a person by voice is a form of biometric authentication: it identifies a person by a combination of unique voice characteristics and belongs to the dynamic methods of biometrics. Speaker recognition is a technology that automatically identifies a speaker from the speech waveform, which reflects the physiological and behavioural characteristics carried in the speaker's speech parameters. As in traditional recognition systems, speaker recognition has two main stages: training and testing.
Two popular sets of features often used in the analysis of the speech signal are the Mel-frequency cepstral coefficients (MFCC) and the linear prediction cepstral coefficients (LPCC). The most popular recognition models are vector quantization (VQ), dynamic time warping (DTW), and artificial neural networks (ANN).
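As a sketch, dynamic time warping can be implemented directly. The example below compares two toy 1-D feature sequences; a real system would compare sequences of multidimensional MFCC vectors, so the scalar distance here is a simplification:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])       # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

Because DTW stretches the time axis, a sequence aligns with a slowed-down copy of itself at zero cost, which is what makes it useful for comparing utterances spoken at different speeds.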

2. Converting Scanned Documents into Text:


As things get smarter every day, we encounter problems such as having many paper documents that we want in an editable format. The traditional method was to manually type whole documents into a word processor, which was quite a hectic process. AI now plays a very important role in solving this problem: we can easily convert scanned documents into text.
Neural networks are used to convert a scanned document into text. The process has several stages:
• Image pre-processing (skew correction, noise removal)
• Segmentation into characters / text lines
• Feature extraction (shape features, local features)
• Training the neural network on character samples
• Using the trained model to convert individual character images into text
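The final stages above can be illustrated with a toy nearest-template classifier; the 3×3 bitmaps and the template set below are purely hypothetical stand-ins for a trained neural network:

```python
def bitmap_features(bitmap):
    """Flatten a small binary character bitmap (list of strings) into a 0/1 vector."""
    return [1 if ch == "#" else 0 for row in bitmap for ch in row]

def classify_char(bitmap, templates):
    """Classify a character image by Hamming distance to the nearest template."""
    vec = bitmap_features(bitmap)
    best_label, best_dist = None, None
    for label, tmpl in templates.items():
        t = bitmap_features(tmpl)
        dist = sum(v != w for v, w in zip(vec, t))  # count differing pixels
        if best_dist is None or dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical 3x3 templates standing in for a trained model
TEMPLATES = {
    "I": ["###", ".#.", "###"],
    "O": ["###", "#.#", "###"],
}
```

A noisy scan of an "O" with one pixel missing still lands closer to the "O" template than to "I", which is the intuition behind feature-based character matching.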

3. Chatbot:
Humans are always fascinated with making their daily lives easier through new technology. Chatbots are one such technology: they help humans in many ways, for example by increasing sales while providing great customer satisfaction and retention. As chatbots can be applied to almost any service industry, people performing all kinds of tasks can use them to make their work easier.

How do chatbots work?


Chatbots typically work by adopting one of three classification methods:
• Pattern matchers: bots use pattern matching to classify the text and produce a suitable response for the customer. A standard format for these patterns is the Artificial Intelligence Markup Language (AIML).
• Artificial neural networks: neural networks calculate the output from the input using weighted connections, which are refined over repeated iterations on the training data. Each pass through the training data adjusts the weights, making the output more accurate.
• Natural language processing: the chatbot applies a sequence of steps to convert the customer's text or speech into structured data, which is then used to select the relevant answer.
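A minimal sketch of the pattern-matching approach, with a few hypothetical rules standing in for a full AIML rule base:

```python
import re

# Hypothetical (pattern, response) rules; a toy stand-in for AIML categories
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\border\b.*\bstatus\b", re.I), "Let me look up your order status."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye, have a nice day!"),
]

def respond(message):
    """Return the response of the first rule whose pattern matches the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand that."
```

First-match-wins rule ordering keeps the behaviour predictable; anything that matches no rule falls through to a default reply, which real bots often use to hand off to a human agent.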

4. Face Recognition:
A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces. It is typically employed to authenticate users through ID verification services and works by pinpointing and measuring facial features in a given image.
There are several types of face recognition algorithms:
• Eigenfaces
• Local binary pattern histograms (LBPH)
• Fisherfaces
• Scale-invariant feature transform (SIFT)
• Speeded-up robust features (SURF)
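As a sketch of the eigenfaces idea, the principal components of a set of flattened face images can be computed with a singular value decomposition; the random arrays below are stand-ins for real face images:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the mean face and the top-k eigenfaces (principal components).

    images: (n_samples, n_pixels) array of flattened face images.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data: rows of vt are the principal axes (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, components):
    """Represent a face by its coordinates in eigenface space."""
    return components @ (image - mean)
```

Faces are then compared by the distance between their low-dimensional projections rather than pixel by pixel, which is what makes the method fast and tolerant of small variations.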

5. Self-driving car
A self-driving car, also known as an autonomous vehicle, driverless car, or robot car, is a vehicle that can sense its environment and move safely with little or no human input.
Algorithms for self-driving cars:
Regression algorithms that can be utilized in self-driving cars include decision forest regression, neural network regression, and Bayesian regression, among others. Neural networks themselves can be used for regression, classification, or unsupervised learning.
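A minimal illustration of the regression idea: fitting a steering command to a lane-offset measurement by ordinary least squares. The data and the linear mapping are purely hypothetical:

```python
import numpy as np

# Toy data: lane-center offset (m) -> steering angle (deg); hypothetical labels
offsets = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
angles = np.array([-15.0, -7.5, 0.0, 7.5, 15.0])

# Fit angle = w * offset + b by ordinary least squares
X = np.column_stack([offsets, np.ones_like(offsets)])
(w, b), *_ = np.linalg.lstsq(X, angles, rcond=None)

def steering_angle(offset):
    """Predict the steering command for a given lane offset."""
    return w * offset + b
```

Real systems regress from far richer inputs (camera frames, lidar), but the principle is the same: learn a continuous mapping from sensed state to control output.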

6. AlphaGo
Back in 2016, AlphaGo beat the South Korean Go champion Lee Sedol. AlphaGo was created using deep reinforcement learning, and its maker, DeepMind, also taught the system to use imagination-like planning to make predictions.
Before releasing AlphaGo, DeepMind built an AI that could play Atari games. The agent receives input the same way a human does: it sees the image on the screen and the reward after each move, which steers it toward viable decisions.

7. Experience replay
Google research scientists have also given DeepMind's agents a form of "sleep", so that learning can continue while the programmers are away. Through the Deep Q-network (DQN) algorithm, the agent mimics "experience replay": it stores its training data so that it can keep learning from past successes and failures.
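Experience replay is commonly implemented as a fixed-size buffer of past transitions from which random minibatches are drawn; a minimal sketch:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of transitions, sampled uniformly for training."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest experiences evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between consecutive steps
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling randomly from old and new experience, instead of learning only from the most recent step, is what stabilizes DQN-style training.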

8. Neural Networks
In a big way, Google's DeepMind researchers are teaching computers how to learn through neural networks. This is a machine-learning approach that lets computers figure things out on their own. It applies, for example, to self-driving cars that need to make decisions about traffic and how to group certain information. Through this learning algorithm, a computer with sufficient resources can work on a problem, "dream" about it while offline, and, when it comes back online, may then be able to solve the problem it encountered before.
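As a tiny illustration of learning through weighted connections, the single-neuron sketch below fits y = 2x + 1 by gradient descent; the data and hyperparameters are made up for the example:

```python
def train_neuron(data, epochs=500, lr=0.1):
    """Train a single linear neuron y = w*x + b by gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y          # prediction error on this sample
            grad_w += 2 * err * x / n      # gradient of mean squared error w.r.t. w
            grad_b += 2 * err / n          # gradient w.r.t. b
        w -= lr * grad_w                   # step against the gradient
        b -= lr * grad_b
    return w, b
```

Full neural networks repeat exactly this update over millions of weights, with backpropagation carrying the error gradients through many layers.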

9. AlphaZero
AlphaZero is a recent AI system that achieved superhuman performance in chess and took just four hours to learn, given only the rules of the game. It relies on self-play, a form of reinforcement learning, which gives it a broader, more general approach to problem-solving. It plays both chess and shogi (Japanese chess).

10. Diagnosis of eye disease


In time, artificial intelligence could be used in hospitals to diagnose eye disease. This follows a collaboration between Google's DeepMind and Moorfields Eye Hospital, in which thousands of retinal scans were analysed to train AI algorithms to find signs of eye disease. The work was motivated by research showing that the growing world population could lead to a threefold increase in blindness, making cutting-edge technology essential to help solve the problem. The AI algorithm is being trained to spot signs of glaucoma, age-related eye problems, and diabetic retinopathy, which will help reduce the percentage of people who lose their sight.
