
Future applications of CC

• The future of CC will be a combination of evolution and revolution.


• Evolutionary aspects of CC are foundational technologies such as
security, data visualization, ML, NLP, data cleaning, management and
governance.
• Hardware and software innovations.
• Neuromorphic architectures, which are “brain like” and use processing
elements modeled after neurons, will have a profound impact on speed
and portability.
• Neuromorphic hardware will provide a high level of performance and will
allow data to be processed closer to the source, including direct processing on
mobile devices.
• Quantum computing architectures offer great promise for fast processing
of large data sets.

Requirements for the Next generation

• Technologies that speed up the capability to manage and interpret data to gain insights are emerging.

(1) Leveraging CC to improve predictability

• Advanced analytics will be integrated with cognitive solutions. Corpora of data expand with more experience. Advanced analytics are used to analyze data, determine next best actions, or correlate data to find hidden patterns.
• Set of tools that can automate the process of vetting data sources to
ensure data quality.
• After analysis results can be moved into the cognitive systems to update
the ML models.

(2) The new cycle for Knowledge management


Begin with creating a hypothesis for the problem, ingest all the data relevant to the problem, vet the data sources, cleanse them, verify the sources, train on the data, apply NLP and visualization, and refine the corpus.

After the system is put to use, data is continuously analyzed with predictive analytics to understand what is changing. Then the process starts all over again.
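As an illustration only (not from the source), a minimal Python sketch of this cycle; the stub functions are hypothetical stand-ins for real ingestion, vetting, training and predictive-analytics services.

```python
# Illustrative sketch of the knowledge-management cycle; all functions are
# toy stand-ins for real cognitive-platform services.

def vet(doc):
    return bool(doc.strip())            # keep only non-empty sources

def cleanse(doc):
    return doc.lower().strip()          # toy cleansing step

def train(corpus):
    return {"vocab": set(" ".join(corpus).split())}   # toy "model"

def predictive_analytics(model, corpus):
    # placeholder: report what is changing as the corpus grows
    return {"documents": len(corpus), "vocabulary": len(model["vocab"])}

def knowledge_cycle(sources, iterations=2):
    corpus = []
    model = {}
    for _ in range(iterations):                           # the cycle repeats
        corpus += [cleanse(d) for d in sources if vet(d)]  # ingest, vet, cleanse, refine corpus
        model = train(corpus)                              # update the ML model
        print(predictive_analytics(model, corpus))         # monitor what is changing
    return model

knowledge_cycle(["Patient notes: mild fever", "  ", "Sensor log: pump OK"])
```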

(3) Creating Intuitive Human-to-Machine Interfaces

• NLP will continue to be the foundation of how interaction happens with cognitive systems.
• Additional interfaces will be needed depending on the nature of the task, e.g. visualization is required for a researcher to determine where a pattern exists for additional exploration.
• Improvements in voice recognition technology that can detect emotions such as fear through detection of hesitation could be useful for guiding a user and system through a complex process.
• Visual interfaces – an experiment called ‘BabyX’, developed at the University of Auckland at the Laboratory for Animate Technologies, builds live computational models of the face and brain by combining Bioengineering with Computational and Theoretical Neuroscience.

(4) Requirements to increase the packaging of best practices

• Initially there will be a set of foundational services that developers can use.
• Over time, there will be packaged services that have been proven through multiple uses by organizations in similar industries.
• In a packaged CS, a level of transparency is needed: it is critical to understand the assumptions and hypotheses that are built into the models as well as the source of data in the package.
• There will also be packages of ubiquitous practices that become industry standards.

(5) Technical advancements that will change future of CC

• Speed of learning is the area most in need of innovation.


• Real-time processing is the heart of fast learning.
• Software side – data has to be analyzed in real time, especially to process information in data-rich environments such as video, images, voice and signals from sensors.
• Systems will require better clarity and faster identification of the
meaning of these signals.
• Future innovations in software and hardware will transform what today
is complicated and time-consuming for data analytics.
• In the future, ML will become more abstracted into the fabric of the
development environment.
• It will be possible to interact with a system in real time as a pattern or
connection is detected from the data.

• The three facets that will define the future of CC – software innovation,
hardware transformation and availability of refined and trusted data
sources.

• Services that can automatically build ontologies based on deep analysis of text in natural language within a domain.

• A traveler equipped with a cognitive trip system – the system knows the destination, preferences for the way to drive, the gas stations along the way, the health of the car, preferences for food, the types of hotels the traveler would like to stay in, etc.

• Learning will happen in real time, with technologies built into the fabric of CS or platforms rather than assembled from discrete components.
• The system will automatically understand context from events and data.

• A sensor-based device with a sophisticated interface can provide a different level of interaction with people who have trouble interacting in social situations.
• Individuals on the autism spectrum could be helped by a system that learns the best way to interact and has the potential to open new lines of communication that have been blocked.

• It could also help elderly people suffering from Alzheimer’s disease.


https://witanworld.com/article/2018/10/08/cognitive-computing/
According to the IBM Institute for Business Value, the three types of capabilities
for cognitive systems are as follows:
(i) Engagement – these systems fundamentally change the way humans and
systems interact, significantly extending human capabilities by providing
expert assistance and understanding. CS play the role of an assistant that
can consume vast amounts of structured and unstructured data, can
reconcile ambiguous and self-contradictory data, and can learn.
(ii) Decision – decisions made by CS are evidence-based and continually
evolve based on new information, outcomes and actions. Currently
they work as advisors, suggesting a range of options to human users, who
make the final decision. The system relies on confidence scores.
(iii) Discovery – CS can uncover insights that even the most brilliant
human beings might not find. This involves finding insights and
connections and understanding the vast amounts of information
available around the world. While still in the early stages, some
discovery capabilities have already emerged, and the value
propositions for future applications are compelling.
https://www.forbes.com/sites/ibm/2015/02/23/whats-the-future-of-cognitive-computing-ibm-watson/#3ec927fe5e2e
Emerging innovations

(i) Deep QA and Hypothesis generation

• The system needs to generate a series of probing questions for a human to answer so that it can navigate multiple levels of meaning.
• Deep QA requires the system to keep track of all the information that has been provided in previous answers for a session and only ask further questions when the human's answer can help it improve its own performance.
• It will evaluate the possible answers it may give and assign a confidence level to each, but look at what additional evidence could change that confidence to decide whether to ask for additional information.
• For example, for a specific type of skin cancer, QA analysis can help discover the optimal treatment, because enough data exists when aggregated, enough analysis has been done on that data, and it has been vetted by the best experts.
• Much like the scientific method guides the discovery in natural sciences,
discovery through QA and hypothesis generation and testing is likely to
become the default approach for many disciplines.
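As an illustration only (not from the source), the confidence-gated follow-up logic might be sketched as follows; the threshold and the toy candidate scores are assumptions.

```python
# Toy sketch of confidence-gated deep QA: answers are scored, and the system
# only asks a follow-up question if new evidence could change its confidence.
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, not from the source

def best_answer(candidates):
    """candidates: list of (answer, confidence) pairs already scored upstream."""
    return max(candidates, key=lambda pair: pair[1])

def respond(candidates, follow_up_question, session_history):
    answer, confidence = best_answer(candidates)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Answer: {answer} (confidence {confidence:.2f})"
    # low confidence: ask a question whose answer could raise confidence,
    # remembering everything already provided in this session
    session_history.append(follow_up_question)
    return f"Need more information: {follow_up_question}"

history = []
print(respond([("treatment A", 0.55), ("treatment B", 0.62)],
              "Has the lesion changed size in the last month?", history))
```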

(ii) NLP

• Advances in NLP are dramatic, as observed in the capability of IBM Watson to derive meaning from unstructured data under conditions of intentional difficulty.
• NLP systems or services must understand state and conditions that may have been set previously.
• Automatic translation between natural languages that captures deep meaning remains a difficult problem for NLP.
• A key NLP innovation is the identification and emulation of the manual
process used by expert human translators to discover rules or heuristics
they may be applying unconsciously.

(iii) Cognitive training tools

• It is tedious and time consuming to build a corpus today by training a system based on ingested knowledge.
• Much of the training that is human intensive today will become automated as the current generation of CC systems is used.

• Bias in training is one of the most important issues that needs to be addressed.
• With a lot of unstructured data and no standards to understand the data, experts make judgements based on their own experiences, which are biased.
• In the future, cognitive tools will become powerful enough, and apply enough cognitive learning, that it will become easier to determine the source of a bias and point it out to the expert.

(iv) Data integration and Representation

• Today connectors, adapters, encapsulation and interfaces are used to deal with complex data integration.
• Although this is sufficient when there is a good understanding of the data sources and they are well vetted, it becomes difficult when thousands of data sources are brought together.

• Data integration needs to be automated to identify patterns across data sources and detect anomalies, to see whether they represent new, important relationships that were unknown before or problems with an inconsistent data source.
• Today ontologies are created so that performance is acceptable with
current system constraints.
• With sufficient processing power, an ontology would actually be a
system state during execution. It would be generated only on demand if
it were required for auditing purposes, perhaps to understand why a
decision or recommendation was made.

(v) Emerging hardware architecture

• Today traditional hardware systems are used to build CS.


• Although parallel structures are used, systems still use von Neumann architecture computers with CPUs or GPUs.
• In the future, there will be major changes in chip architectures and programming models.
• One hardware direction is modeling neurosynaptic behavior directly in the hardware in so-called ‘neuromorphic chips’, which have many small processing elements tightly interconnected to near neighbours so that they communicate much like human brain neurons.
• Quantum computing is based on quantum mechanics, which explores physical properties at the nano scale. Quantum computers use qubits, which may be in more than one state at any given time.
Neurosynaptic architectures

• The challenge today is to partition the data effectively to funnel it into an architecture that processes 64 bits of data at a time.
• The difficult part is to effectively distribute the workload across similarly architected processors.
• CC requires hypothesis generation, which is inherently parallel. Based on the data it may be desirable to generate hundreds of hypotheses and then process them independently on different processors, cores or threads (a minimal parallel-evaluation sketch follows this list).
• Another task in CC application is real-time image processing in a
manner similar to human vision.
• For most applications today, it is impractical to do large-scale
hypothesis generation and evaluation or real-time video analysis.
• Neurosynaptic hardware approach – IBM’s TrueNorth – a
neurosynaptic chip with 1 million neuron-inspired processing units and
256 million synapses.
• It was developed as part of a program called Systems of Neuromorphic
Adaptive Plastic Scalable Electronics (SyNAPSE).
• One million neuron brain-inspired processor.
• Chip consumes merely 70 milliwatts and is capable of 46 billion
synaptic operations per second, per watt – literally a synaptic
supercomputer in your palm.
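As an illustration only (not from the source), evaluating many hypotheses independently across conventional cores might be sketched as follows; the scoring function is a hypothetical stand-in for real evidence gathering.

```python
# Illustrative sketch: evaluate many hypotheses independently in parallel
# on conventional cores; the scoring function is a toy placeholder.
from concurrent.futures import ProcessPoolExecutor

def score_hypothesis(hypothesis):
    # stand-in for evidence gathering and scoring against a corpus
    return sum(ord(c) for c in hypothesis) % 100 / 100.0

def evaluate_all(hypotheses):
    with ProcessPoolExecutor() as pool:              # one worker per core
        scores = list(pool.map(score_hypothesis, hypotheses))
    return max(zip(hypotheses, scores), key=lambda pair: pair[1])

if __name__ == "__main__":
    candidates = [f"hypothesis-{i}" for i in range(200)]
    print(evaluate_all(candidates))
```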

TrueNorth is the first single, self-contained chip to achieve:

i. One million individually programmable neurons – sixteen times more
than the current largest neuromorphic chip
ii. 256 million individually programmable synapses on chip which is a
new paradigm
iii. 5.4B transistors. By device count, largest IBM chip ever fabricated,
second largest (CMOS) chip in the world
iv. 4,096 parallel and distributed cores, interconnected in an on-chip
mesh network
v. Over 400 million bits of local on-chip memory (~100 Kb per core) to
store synapses and neuron parameters
http://www.research.ibm.com/articles/brain-chip.shtml
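As a back-of-the-envelope check (my arithmetic, not a figure quoted by IBM), the stated efficiency and power figures imply the chip's total synaptic throughput:

\[
46 \times 10^{9}\ \tfrac{\text{synaptic ops/s}}{\text{W}} \times 0.070\ \text{W} \;\approx\; 3.2 \times 10^{9}\ \text{synaptic ops/s}.
\]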
TrueNorth Chip Core Array (Research.ibm)
https://www.evolving-science.com/information-communication-networks-computer-science-technology-hw-sw-systems/truenorth-ibm-s-cognitive-computing-technology-00298

SyNAPSE 16-chip board (Research.ibm)
https://www.evolving-science.com/information-communication-networks-computer-science-technology-hw-sw-systems/truenorth-ibm-s-cognitive-computing-technology-00298
• Researchers have put together 16 TrueNorth chips into one scalable platform, NS16e, with the equivalent parallel processing power of 16 million neurons and 4 billion synapses, which consumes the energy equivalent of a tablet computer.
• This new platform with 16 TrueNorth chips working together will be
tested in practice by the Lawrence Livermore National Laboratory of the
National Nuclear Security Administration (NNSA) of USA.
• The NNSA will evaluate machine learning applications, deep learning algorithms and architectures, as well as conduct general computing feasibility studies in cyber security and stewardship of the nation’s nuclear deterrent and non-proliferation. As common programming languages won’t do for this new platform, the NS16e comes with a programming ecosystem: a simulator, a programming language, an integrated programming environment, and a library of algorithms and tools for composing neural networks for deep learning.
https://www.evolving-science.com/information-communication-networks-computer-science-technology-hw-sw-systems/truenorth-ibm-s-cognitive-computing-technology-00298
• The underlying principle that is modeled in neurosynaptic chips is Hebb’s rule – “cells that fire together, wire together”, i.e., neurons in close proximity that fire together reinforce learning (a minimal weight-update sketch follows this list).
• In the future there may be hybrid solutions in which neuromorphic approaches will be combined with conventional computers.
• Neuromorphic chips integrated with a conventional system will make it possible to take advantage of conventional programming models for much of the required preprocessing.
• Parallelism without partitioning is a huge advantage for neuromorphic
architectures.
• For mobile devices – Qualcomm has a processing chip set called Zeroth that is intended to capture patterns of human behavior based on the usage of the mobile device to provide context-aware devices.
• The architectures operate in parallel efficiently so that the total power
consumption for a unit of work is lower than that of a register-based
architecture.
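As an illustration only (not from the source), a one-step Hebbian weight update in NumPy captures the “fire together, wire together” idea; the spike vectors and learning rate are arbitrary.

```python
# Toy Hebbian learning step: weights between co-active units are reinforced.
import numpy as np

rng = np.random.default_rng(0)
pre = rng.integers(0, 2, size=8)     # pre-synaptic activity (0/1 spikes)
post = rng.integers(0, 2, size=4)    # post-synaptic activity
weights = np.zeros((4, 8))           # synaptic weights, post x pre

learning_rate = 0.1
weights += learning_rate * np.outer(post, pre)   # Hebb's rule: dw = eta * post * pre
print(weights)
```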
Quantum architectures

• The fundamental concept behind a quantum computer is to go beyond a binary state to a multistate unit called the qubit.
• A qubit can take multiple states, including being in a superposition of states simultaneously (see the expression below).
• Quantum computers can be simulated using conventional computers by
mapping each of the possible states to binary states, but the performance
overhead is significant.
• In theory, quantum computers can scale without the artificial register
restriction, which makes them attractive for parallel computations.
• Significant barrier to quantum computing is that it requires physical
materials to actually be in these superposition states, which requires the
processing units to operate at a temperature near absolute zero.
• IBM, Google and D-Wave are among the companies building quantum computers.
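For reference (standard quantum-computing notation, not taken from the source), a single qubit state is a superposition of the two basis states:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

where measurement yields 0 with probability \(|\alpha|^{2}\) and 1 with probability \(|\beta|^{2}\); n qubits can hold a superposition over \(2^{n}\) basis states, which is the source of the parallelism noted above.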
Natural language processing

• NLP is an interdisciplinary field that analyzes and understands human languages.
• Natural languages are used in two forms – written and spoken.
• Text and speech are the media for written and spoken languages respectively.
• Speech signals carry information about the message to be conveyed, language identity, speaker identity, gender and emotion.
• Speech processing is at the intersection of DSP, written-form NLP, ML and information retrieval.
• Core tasks in speech processing include language identification, speaker
recognition, speech recognition and speech synthesis.
• Speech applications require pipelining of core tasks. For example, Siri, a voice-activated personal assistant for iPhones, employs speech recognition and speech synthesis tasks.
• Speech applications development first requires creating a database to
hold speech data.
• Data sources – recording of telephone conversations, radio and
television broadcast signals
• Relevant features are extracted from the data to develop and test speech
algorithms.
• Speech data is partitioned into three non-overlapping groups – training, development and test.
• Training data is used to construct statistical models and estimate their parameters, development data is used to tune the model parameters, and the effectiveness of the model is evaluated using the test data.
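As an illustration only (not from the source), a typical three-way partition can be produced with scikit-learn by splitting twice; the 80/10/10 proportions and the toy features are assumptions.

```python
# Illustrative three-way split of a speech dataset into training,
# development (validation) and test sets using scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

features = np.random.rand(1000, 13)        # e.g. 13 MFCC features per utterance
labels = np.random.randint(0, 10, 1000)    # e.g. 10 word classes

# 80% train, 10% development, 10% test (assumed proportions)
x_train, x_rest, y_train, y_rest = train_test_split(
    features, labels, test_size=0.2, random_state=42)
x_dev, x_test, y_dev, y_test = train_test_split(
    x_rest, y_rest, test_size=0.5, random_state=42)

print(len(x_train), len(x_dev), len(x_test))   # 800 100 100
```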

• An automatic speech recognition (ASR) or speech-to-text system identifies spoken words in speech and converts them to written text.
• ASR systems extract acoustic signal characteristics from speech and determine the words in it by pattern matching. Acoustic and language models are used in developing ASR systems.
• ASR is an attractive alternative user interface for computing devices.
• Applications include call routing, automatic transcriptions, information
searching, data entry, voice dialing and hands free computing for people
with disabilities.
• ASR systems are classified according to – vocabulary size, speaking
style, speaker mode, channel type and transducer type.
• Development of speech recognition systems is a multi-step process:

(i) Relevant features are extracted from the speech signal.
(ii) Reference models are developed using these features. Models are
needed for each sound unit.
(iii) Feature vectors are derived from speech utterances and are presented
to all the reference models. The model which gives the highest
confidence measure indicates the identity of the sound unit. The
sequence of the identified sound units is validated using language
models. Language models are used to convert sequence of sound units
into text.

• Approaches to developing speech recognition systems fall into two types – template based and model based.
• In the template-based approach, the system is initially trained using known speech patterns, e.g. Dynamic Time Warping (DTW) and Vector Quantization (a minimal DTW matching sketch follows this list).
• In the model-based approach, suitable features for each sound unit are extracted from the training data and reference models are developed for each sound unit, e.g. GMM, HMM, ANN and SVM.
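As an illustration only (not from the source), a minimal template-matching recognizer using dynamic time warping over toy 1-D feature sequences; a real system would compare sequences of MFCC feature vectors.

```python
# Toy template-based recognition with dynamic time warping (DTW).
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Templates for two "words" (toy feature sequences, not real speech features).
templates = {"yes": np.array([1.0, 2.0, 3.0, 2.0, 1.0]),
             "no":  np.array([3.0, 3.0, 1.0, 1.0])}

utterance = np.array([1.1, 1.9, 3.2, 2.1, 0.9])   # unknown input
best = min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
print("recognized:", best)   # expected: "yes"
```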
• Many ML based approaches to speech processing critically depend on training and test data.
• Datasets are of different form and size – raw speech corpora, speech
with manual or automatic annotations, words and their relationships,
grammars and parameters of statistical models.
• Extracting useful information from spoken language data is challenging.
• Current research in speech processing is driven more by statistical machine learning approaches than by linguistic theory.
• Statistical models use large training data and validate the view that
simpler models often outperform more complex ones.
• Performance of many speech processing applications depends on the
quality of language models. The amount of training data is one factor
that determines the effectiveness of the models.
• Functional features of Siri, Google Now and Cortana show the state of the art in natural language interfaces.
• Siri is a virtual assistant available for Apple’s iOS devices.


• It uses sequential inference and contextual awareness to understand and
respond to user voice commands.
• Siri understands and speaks over 15 languages including various dialects
of English and Spanish.
• Google Now and Cortana are similar applications for Android and
Windows phone devices.

• Voice search is used for tasks such as seeking driving directions, dictating text messages, making phone calls and checking weather.
Natural language processing in Health care

• Studies show that Natural Language Processing in Healthcare is expected to grow from USD 1030.2 million in 2016 to USD 2650.2 million in 2021, at a CAGR of 20.8 percent during the forecast period.
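As a quick check (my arithmetic, not a figure from the source), the quoted CAGR is consistent with the 2016 and 2021 values over the five-year forecast period:

\[
\left(\frac{2650.2}{1030.2}\right)^{1/5} - 1 \;\approx\; 2.572^{0.2} - 1 \;\approx\; 0.208 \;=\; 20.8\%.
\]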

Applications

(i) Handle the surge in clinical data – increased use of EMR and digital
transformation of medicine has led to an increase in the volume of
data.
(ii) Support value-based care and population health management - need
to tap into the reservoir of unstructured information.
Use of NLP for:
(i) Improving clinical documentation – with speech-to-text dictation, data can be automatically captured at the point of care, freeing doctors from the tedious task of documenting care delivery.
(ii) Making computer-assisted coding more efficient – improving the customer experience significantly.

(iii) Improve patient-provider interactions with EHR – integrating NLP with EHR systems will take workload off doctors and make analysis easier.

(iv) Empower patients with health literacy – by becoming more aware of their health conditions, patients can make informed decisions and keep their health on track by interacting with an intelligent chatbot. Use of NLP in health care can help patients understand EHR portals, opening up opportunities to make them more aware of their health.

(v) Address the need for higher quality of health care – NLP can help in
assessing and improving the quality of health care by measuring physician
performance and identifying gaps in care delivery. It can help in identifying
and mitigating potential errors in care delivery. It can measure the quality
of healthcare and monitor adherence to clinical guidelines.

(vi) Identify patients who need improved care – NLP can help to improve
the care coordination with patients who have behavioral health conditions.
It can mine patient data and detect those who are at risk due to gaps in the healthcare system.
• NLP can help enhance the accuracy and completeness of EHRs by transforming free text into standardized data.

• NLP tools might be able to bridge the gap between the insurmountable
volume of data in health care generated every day and the limited
cognitive capacity of the human brain.

• The key to the success of NLP in healthcare is to develop algorithms that are accurate, intelligent and healthcare-specific, while creating user interfaces that can showcase decision support data in the desired format.

https://chatbotsmagazine.com/top-6-applications-of-natural-language-processing-in-healthcare-6b7a39af797a
Smart city ecosystems have the following characteristics:

(i) Humans need to interact with the systems to provide their feedback.
(ii) Many sensors and smart devices generate data at high speed; the system needs to learn and improve itself from previous experiences.
(iii) They need a general, dynamic and continuous learning mechanism,
as the smart city application is not static and the operating
environment evolves with time.
(iv) The data generated by smart city applications is generally noisy and
has a level of uncertainty.
Smart Cities
Major functions

(1) Law enforcement – great potential, huge volume and variety of data to
be managed and analyzed. Need to identify patterns / anomalies.
Problem of correlating crime data

 Correlating data from hundreds and thousands of data sources.
 Other information sources – unstructured and stored in sources like incident reports, paper files, interviews with witnesses, etc., as well as still and video images and audio data.
 Acoustic event detection
 Facial detection algorithms
 Problem of analyzing real time video data

(2) Smart energy management

 Visual representation – allocating energy production resources for a smart grid or responding to expected changes in demand based on predictive analytics may create situations that can benefit from operator intervention.

• Visual abstractions make it easier for the operator to detect patterns than simply seeing relevant numbers scroll past on devices.
• Visualization is the key in helping the user understand what requires
attention as soon as the system can detect it.

(3) Improving public health with CC services

• Public health in cities is concerned with wellness and medical care.


• Wellness – access to information and preventative care, feedback on
behavior that impacts health and the availability of a full range of health
care when prevention is not enough.
• Commercial health management firms

(4) Smarter approaches to preventative health care

• Preventative care is seen as a social service for economically disadvantaged people.

(5) Building a smarter transportation infrastructure

• As cities expand, managing transportation and traffic flow through better use of information becomes important.
• When adding infrastructure becomes expensive and disruptive, CC can help to guide who can go where and when.
• Traffic management to prevent traffic congestion by managing peak
loads.
• Eg. City of Toronto, Canada – Multi-Agent Reinforcement Learning Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC) – for smarter traffic management (a minimal reinforcement-learning sketch follows this list).
• Delays reduced by 40%.
• Xerox helped in the development and deployment of the system.
• Uses camera images and ML chips to enable real-time communication between traffic lights to detect patterns and dynamically adjust their timing.
• Transportation management – ready availability of sensors and systems
to collect and share data.
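As an illustration only (not the actual MARLIN-ATSC algorithm), a minimal tabular Q-learning loop for a single traffic signal, with a toy queue simulator standing in for real camera and sensor data; the states, actions, rewards and hyperparameters are all assumptions.

```python
# Toy tabular Q-learning for one traffic signal (illustrative, not MARLIN-ATSC).
import random

ACTIONS = ["keep_phase", "switch_phase"]    # signal decisions
alpha, gamma, epsilon = 0.1, 0.9, 0.2       # assumed hyperparameters
q_table = {}                                # (queue_level, action) -> value

def step(queue_level, action):
    """Toy environment: switching the phase helps when the queue is long."""
    if action == "switch_phase" and queue_level > 2:
        next_q = max(0, queue_level - 2)
    else:
        next_q = min(5, queue_level + random.choice([0, 1]))
    reward = -next_q                        # fewer queued cars = better
    return next_q, reward

queue = 3
for _ in range(5000):
    if random.random() < epsilon:                       # explore
        action = random.choice(ACTIONS)
    else:                                               # exploit current policy
        action = max(ACTIONS, key=lambda a: q_table.get((queue, a), 0.0))
    next_queue, reward = step(queue, action)
    best_next = max(q_table.get((next_queue, a), 0.0) for a in ACTIONS)
    old = q_table.get((queue, action), 0.0)
    q_table[(queue, action)] = old + alpha * (reward + gamma * best_next - old)
    queue = next_queue

# learned action per queue level
print({s: max(ACTIONS, key=lambda a: q_table.get((s, a), 0.0)) for s in range(6)})
```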
(6) Using analytics to close the workforce skills gap

• Human capital management – management of significant interdependencies among employment, education and social services.
• Lower unemployment
• Matching skills training to aptitude in an applicant pool is a perfect
opportunity for CC applications.

(7) Creating a cognitive community infrastructure

• CC solutions benefit from retaining knowledge created during operation as hypotheses are tested and refined over time.
• When natural communities communicate they raise the collective
intelligence of the community.
• When professional communities communicate with or without CC
solution, they can amplify the learning.
• Physical communities can also benefit from collaboration via CC as in
case of city residents.
Smart city use cases

• Management and control of the city’s resources is performed through intelligent information systems.
• Need to consider food, energy and water nexus.
• IoT-based systems need to be developed, and the big data they generate is critical for optimal provisioning and efficient utilization of resources.
• Other areas – transportation, health care, convenience, agriculture and
government.

Eg. Water

• California – drought prone area – 2017 - severe drought


• Analytics of big data from city temperature and humidity sensors,
weather forecasts, prediction of water usage and available water
resources.
• Monitoring the level and quality of water in creeks using crowd sensing
data, along with IoT based approaches such as smart water meters can
help to achieve efficient and sustainable water provisioning.

• Automatic identification of trash and its location, and performing the required action.
• Use of smart water meters for fine grained monitoring of water
consumption at house level as well as city level.
• Water consumption levels are analyzed for anomaly and leakage detection.
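As an illustration only (not from the source), a simple z-score check on smart-meter readings can flag abnormal consumption that may indicate a leak; the readings and threshold below are made up.

```python
# Toy leakage/anomaly detection on hourly smart water meter readings.
import numpy as np

readings = np.array([10, 11, 9, 10, 12, 11, 10, 45, 11, 10], dtype=float)  # liters/hour
mean, std = readings.mean(), readings.std()
z_scores = (readings - mean) / std

THRESHOLD = 2.0                       # assumed cut-off for "abnormal"
anomalies = np.where(np.abs(z_scores) > THRESHOLD)[0]
print("suspected leak/abnormal usage at hours:", anomalies)   # -> hour 7
```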

Eg. Energy

• Energy conservation is a daily concern for people and energy utility service providers.
• Energy providers can monitor consumer’s energy usage profile and
provide suitable feedback to decrease the high-peak power load using
smart meters.
• Smart meters can also be connected to their smart home systems to
cooperate with other devices towards energy management at the level of
smart home using Appliance Load Monitoring (ALM).

• Each electrical device is equipped with a smart power outlet.


• Mechanism to turn off the devices.
• This adds extra cost and complexity.
• Non-Intrusive Load Monitoring (NILM) can extract an individual device’s usage from one aggregated electrical measurement at the scope of the whole house. This approach requires one-time training with the consumption data of individual appliances and their events and time stamps (a minimal event-detection sketch follows this list).
• Optimal power usage by controlling when to turn appliances on or off.
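As a drastically simplified illustration of the NILM idea (not a real implementation), step changes in the aggregate power signal can be matched against known appliance signatures learned during one-time training; the signatures and readings below are made up.

```python
# Toy NILM-style event detection: match step changes in aggregate power
# against known appliance power signatures (one-time "training" data).
import numpy as np

signatures = {"kettle": 2000.0, "fridge": 150.0, "tv": 100.0}   # watts, assumed

aggregate = np.array([300, 300, 450, 450, 2450, 2450, 450, 350, 350], dtype=float)
steps = np.diff(aggregate)                    # power change between samples

for t, delta in enumerate(steps, start=1):
    if abs(delta) < 50:                       # ignore small fluctuations
        continue
    # nearest signature to the magnitude of the step
    device = min(signatures, key=lambda d: abs(signatures[d] - abs(delta)))
    event = "on" if delta > 0 else "off"
    print(f"t={t}: {device} switched {event} (step {delta:+.0f} W)")
```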

Eg. Agriculture

• Monitoring the soil parameters (moisture, minerals etc.), powered by decision processes, and consequently performing corrective actions by actuators (adding water or minerals) can lead to improved productivity.

• Plant disease recognition through various measurements, e.g. identifying diseased plants visually through a classification system based on images of crops or their leaves.
• Use of smart devices to identify fruits and crops with anomalies.
• System can recommend remedies or pesticides.

Challenges for smart city project

(1) Integrating big and fast data analytics
(2) Preserving security and privacy
(3) On-device intelligence
(4) Big data shortage
(5) Context awareness
Source: Mehdi Mohammadi & Ala Al-Fuqaha, “Enabling Cognitive Smart Cities Using Big Data and Machine Learning: Approaches and Challenges”, IEEE Communications Magazine.
 Smart city projects have moved beyond collecting data.
 Smart city managers, chief data officers, etc.
 Smart cities make better use of all their resources, and good data is at the heart of these better decisions.
 ML can be effectively used to provide smart city services through data recycling, efficient sampling and scalable models.
