
Data are now woven into every sector and function in the global economy and, like other essential factors of production such as hard assets and human capital, much of modern economic activity simply could not take place without them. The use of Big Data — large pools of data that can be brought together and analyzed to discern patterns and make better decisions — will become the basis of competition and growth for individual firms, enhancing productivity and creating significant value for the world economy by reducing waste and increasing the quality of products and services. Until now, the torrent of data flooding our world has been a phenomenon that probably only excited a few data geeks. But we are now at an inflection point. According to research from the McKinsey Global Institute (MGI) and McKinsey & Company’s Business Technology Office, the sheer volume of data generated, stored, and mined for insights has become economically relevant to businesses, government, and consumers. The history of previous trends in IT investment and innovation and their impact on competitiveness and productivity strongly suggests that Big Data can have a similar power, namely the ability to transform our lives. The same preconditions that allowed previous waves of IT-enabled innovation to power productivity, i.e., technology innovations followed by the adoption of complementary management innovations, are in place for Big Data, and we expect suppliers of Big Data technology and advanced analytic capabilities to have at least as much ongoing impact on productivity as suppliers of other kinds of technology. All companies need to take Big Data and its potential to create value seriously if they want to compete. For example, some retailers embracing big data see the potential to increase their operating margins by 60 per cent.
Big data presents great opportunities, as it helps us develop new creative products and services, for example apps on mobile phones or business intelligence products for companies. It can boost growth and jobs in Europe, and also improve the quality of life of Europeans.
Here are some examples of research projects that have an impact on society and the economy:

Healthcare: enhancing diagnosis and treatment while preserving privacy


Big data offers solutions for improved efficiency in healthcare information processing, which in turn creates value for businesses, the public sector and citizens. The analysis of large clinical datasets can result in the optimisation of the clinical and cost effectiveness of new drugs and treatments, and patients can benefit from more timely and appropriate care. Data interoperability is of utmost importance, since the data is derived from diverse and heterogeneous sources such as bio-signal streams, health records, genomics and clinical lab tests. Privacy-preserving technologies aim at providing access to health data for patients, healthcare professionals and clinical researchers in a uniform way, and in an anonymized and aggregated form, to develop better prevention or treatment options.
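
To make the anonymized, aggregated access idea concrete, here is a minimal Python sketch of k-anonymity-style aggregation over a toy patient table; the column names, the group-size threshold K and the pandas-based approach are illustrative assumptions, not the method of any project listed below.

```python
import pandas as pd

# Toy clinical dataset; all values invented for illustration.
records = pd.DataFrame({
    "age_band":  ["40-49", "40-49", "50-59", "50-59", "50-59", "40-49"],
    "region":    ["North", "North", "South", "South", "South", "North"],
    "diagnosis": ["T2D",   "T2D",   "T2D",   "HTN",   "T2D",   "T2D"],
})

K = 3  # assumed minimum group size before a row may be released

# Aggregate over quasi-identifiers and suppress groups smaller than K,
# so no released row can be traced back to an individual patient.
counts = (records.groupby(["age_band", "region", "diagnosis"])
                 .size()
                 .reset_index(name="patient_count"))
safe = counts[counts["patient_count"] >= K]
print(safe)  # only the (40-49, North, T2D) group of 3 survives
```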

Projects:

 Reducing cost, improving patient outcomes and delivering better access to healthcare
facilities: BigMedilytics - Big data for medical analytics
 Clinical decision support and integrated care: AEGLE
 Anonymisation and blockchain technology solution for healthcare: My Health My Data
 Multilingual medical text analysis, search and machine translation: KConnect
 Evidence-based actionable information provision for public health policies: MIDAS

Data Markets
Information technology has driven, directly or indirectly, much of Europe’s economic growth over the last few decades, as the role of data transitioned from supporting business decisions to becoming a good in itself. An open approach towards data value creation has become critical in the new networked economy, with Europe well placed to nurture this new revolution.

Projects:
 TRUSTS aims to create a federated platform of data marketplaces and to address the issues of platform regulation, standardisation and interoperability.
 i3-MARKET innovates on marketplace platforms and implements industrial demonstrators, using technologies and solutions to achieve a trusted, interoperable and decentralised infrastructure. It enables interoperability of existing and emerging data spaces and marketplaces, ensures the required levels of privacy and confidentiality for sharing data among relevant systems and services, and develops the necessary security and access control measures for secure trading of data. The industrial pilot demonstrators include automotive, manufacturing and healthcare pilots.
 DataPorts develops a Cognitive Ports Data Platform addressing the port transportation and logistics domain. The platform connects to the different digital infrastructures currently existing in digital seaports; sets the policies for trusted and reliable data sharing and trading; and leverages the data collected to provide advanced data analytic services, based on which the different actors in the port value chain can develop novel AI and cognitive applications. The platform will be rolled out in the operational environment of two major ports in Europe.
 Automotive Big Data Marketplace for Innovative Cross-sectorial Vehicle Data
Services: AutoMat. The AutoMat project established a novel and open ecosystem in the
form of a cross-border Vehicle Big Data Marketplace that leverages currently unused
information gathered from connected vehicles.
 Accelerating data to market: DataPitch is a business incubator that supports European entrepreneurs in doing business with industrial data. DataPitch aims to make the European data economy stronger and to support innovation through digital transformation.

Transport: fewer accidents and traffic jams


The transport sector can clearly benefit from big data, collected in particular through sensors, GPS data and social media. A smart use of big data supports governments in optimising multimodal transport and managing traffic flows, making our cities smarter. Citizens and companies can save time through the use of route planning support systems.
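
At the core of such route planning systems is shortest-path search over a road network. Below is a minimal sketch of Dijkstra's algorithm in Python; the toy graph, node names and travel times are invented for the example, and real planners work on far larger, time-dependent networks.

```python
import heapq

# Toy road network: node -> list of (neighbour, travel_time_minutes).
# All nodes and times are invented for illustration.
graph = {
    "A": [("B", 5), ("C", 10)],
    "B": [("C", 3), ("D", 12)],
    "C": [("D", 4)],
    "D": [],
}

def shortest_travel_time(graph, start, goal):
    """Dijkstra's algorithm: cheapest travel time from start to goal."""
    queue = [(0, start)]            # (accumulated time, node)
    best = {start: 0}
    while queue:
        time, node = heapq.heappop(queue)
        if node == goal:
            return time
        if time > best.get(node, float("inf")):
            continue                # stale queue entry, skip it
        for neighbour, cost in graph[node]:
            new_time = time + cost
            if new_time < best.get(neighbour, float("inf")):
                best[neighbour] = new_time
                heapq.heappush(queue, (new_time, neighbour))
    return None                     # goal unreachable

print(shortest_travel_time(graph, "A", "D"))  # -> 12 (A -> B -> C -> D)
```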

Projects:
 Increase operational efficiency, deliver improved customer experience and foster new
business models - TransformingTransport (TT)
 Making fleet management cheap and easy for SMEs (SimpleFleet)

Environment: reduced energy consumption


The big data revolution brings about novel ways of understanding and addressing environmental challenges. Better use of globally available national and local datasets helps scientists in their research and enables policy-makers to make informed, evidence-based decisions on natural disasters like flooding, the fight against climate change, and cost reduction. Smart cities also host data centres that adapt the power consumption of public buildings to the availability of renewable energy and other useful indicators. At the same time, our mobile devices become smarter by integrating analytical tools that reduce our energy consumption and save money.
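
As a minimal sketch of adapting power consumption to renewable availability, the snippet below shifts a deferrable workload to the hours with the highest forecast renewable share; the hourly forecast values and the single-job model are assumptions for illustration.

```python
# Toy renewable-aware scheduler: pick the greenest hours to run a
# deferrable workload. Forecast values are invented for illustration.
renewable_share = {  # hour of day -> forecast share of renewable power
    9: 0.35, 10: 0.42, 11: 0.55, 12: 0.63, 13: 0.61, 14: 0.48, 15: 0.37,
}

def schedule(hours_needed, forecast):
    """Return the hours with the highest forecast renewable share."""
    ranked = sorted(forecast, key=forecast.get, reverse=True)
    return sorted(ranked[:hours_needed])

print(schedule(3, renewable_share))  # -> [11, 12, 13]
```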

Projects:

 Environmentally-friendly data centres: DC4Cities
 Cloud-based analysis of big geospatial data for the simulation of events like flooding: IQmulus

Open Data
Open Data refers to information collected, produced or paid for by public bodies and made freely available for re-use for any purpose. Public sector information is information held by the public sector. The Directive on the re-use of public sector information provides a common legal framework for a European market for government-held data. It is built around the key pillars of the internal market: free flow of data, transparency and fair competition.

The "European Data Portal" was set up by the Commission to improve accessibility and increase
the value of Open Data published by European Public Administrations at all levels of
government.
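
As a quick illustration of re-using open data, here is a minimal Python sketch that downloads a CSV dataset and summarises it with pandas. The URL is a placeholder, not a real European Data Portal endpoint; substitute the download link of any open dataset discovered through the portal.

```python
import pandas as pd

# Placeholder URL: substitute the CSV download link of any open dataset,
# e.g. one discovered through the European Data Portal.
DATASET_URL = "https://example.org/open-data/air-quality.csv"  # hypothetical

# pandas can read a remote CSV directly and give a quick overview.
df = pd.read_csv(DATASET_URL)
print(df.shape)            # rows and columns
print(df.columns.tolist()) # available fields
print(df.describe())       # summary statistics for numeric columns
```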

Projects:

 The Open Data Incubator for Europe ODINE helps companies develop business ideas built on open data by providing expert advice from business mentors, peer networking and support, technology and data sets, high-quality coverage in the Guardian datablog, and introductions to business angels and VCs. The Open Data Incubator for Europe has funded 57 start-ups and SMEs, already generating more than EUR 1 million in additional investment, sales, and jobs.
 TheyBuyForYou: Enabling procurement data value chains for economic development,
demand management, competitive markets and vendor intelligence.
 Lynx: Building the Legal Knowledge Graph for Smart Compliance Services in Multilingual
Europe

Personal Data
 DataVaults aims to develop a framework to securely manage personal data and to
guarantee citizens’ control of their data. 
 KRAKEN creates a marketplace for trading personal data that will allow individuals to keep control over shared data and that will improve trust thanks to regulatory compliance and the use of crypto technologies (a minimal encryption sketch follows this list).
 PIMCITY develops a series of open source components and interfaces, backed up by learning materials and an educational community, to facilitate the adoption of data-exchange platforms within SMEs and to improve personal data sharing management. The planned use cases include several B2B and B2C scenarios with actual users.
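
The crypto technologies behind such marketplaces are not specified here; as a loose illustration of one building block, the sketch below encrypts a personal record with a symmetric key using Python's cryptography library, so only a key holder can read the shared data. The record contents and the simplistic key handling are invented for the example.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in a real system, key management is the hard part.
key = Fernet.generate_key()
cipher = Fernet(key)

# A toy personal record, invented for illustration.
record = b'{"name": "Alice", "heart_rate": 72}'

token = cipher.encrypt(record)    # what a platform could store or share
original = cipher.decrypt(token)  # readable only with the key

assert original == record
print(token[:20], "...")
```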

Data Science Skills


To make sense of the increasing amounts of available data, there is a growing demand for data scientists in Europe. This comparatively new profile combines programming skills, analytics skills and statistical expertise with knowledge of application domains (e.g. health, agriculture or finance, to name just a few).

European initiatives:

 The European project European Data Science Academy (EDSA) helps to develop the needed data skills, designs curricula for data science training and provides courses and training material like Massive Open Online Courses (MOOCs) for self-study in the EDSA Online Courses Portal.
 The European initiative European Network of Big Data Centers of Excellence

Each of these centers alone can only cover a small part of the whole Big Data picture, but together their competences can define the data-driven future of Europe. Among the particular themes discussed were the data skills and education needed for Europe’s Digital Transformation, one of the hottest topics in the big data community. To learn more about the Network’s activities, see the i-Know website.

Agriculture: safer food and increased productivity


A smart use of big data in agriculture can increase productivity, food security and farmer incomes at the same time. Through an intelligent and widespread use of data from sensors and Earth observation, such as the open data from the Copernicus Programme, the way we farm today can be changed entirely for the better. This can lead to a more efficient use of natural resources (including water or sunlight) in our farming practices. With advanced technologies, farmers can access real-time data on how their farm machinery is working, as well as historic weather patterns, topography and crop performance.
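
As a small illustration of turning Earth observation data into an agronomic signal, the sketch below computes NDVI (normalised difference vegetation index) from red and near-infrared reflectance, a standard index derived from Copernicus Sentinel-2-style imagery. The reflectance values here are invented, and real pipelines read whole raster bands.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - RED) / (NIR + RED); higher means denser vegetation."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

# Toy per-pixel reflectance values, invented for illustration.
red_band = np.array([0.10, 0.25, 0.40])
nir_band = np.array([0.60, 0.30, 0.42])

print(ndvi(red_band, nir_band))  # healthy crop ~0.71, sparse ~0.09, bare ~0.02
```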

Projects:
 TheFSM delivers a data marketplace for sharing agri-food information such as food
certification data in a secure way within the agricultural sector, for food safety and
inspection purposes, among others. The marketplace will be an open and collaborative
virtual environment that will facilitate the exchange and connection of data between
different food safety actors interested in sharing information critical to certification.
 Data-driven bioeconomy - producing best possible raw materials from agriculture,
forestry and fishery: DataBio
 Using an open data standards platform to make agriculture more efficient: AgroIT
 A platform for sharing agricultural data to make better farming decisions: FOODIE

Industrial impact / Big Data access technologies / Research


Maximally exploiting available data is increasingly critical to industrial competitiveness. Accessing the relevant data is becoming progressively more difficult due to the explosion in the size and complexity of data sets. Maximally exploiting data requires flexible access, and engineers need to explore the data in ways not supported by current applications. Engineers spend up to 80% of their time on data access problems. Beyond the enormous direct cost, freeing up expert time would lead to even greater value creation through deeper analysis and improved decision making.
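
The flexible-access idea can be pictured with a toy example: loading two heterogeneous sources into one in-memory SQLite database so an engineer can query them declaratively with plain SQL instead of writing per-source access code. The table names, columns and query are invented for illustration and are not the approach of any project listed below.

```python
import sqlite3
import pandas as pd

# Toy heterogeneous sources, invented for illustration.
sensors = pd.DataFrame({"sensor_id": [1, 2], "site": ["plant_a", "plant_b"]})
readings = pd.DataFrame({"sensor_id": [1, 1, 2], "value": [7.2, 7.9, 3.1]})

# Load both into a single in-memory database so they can be queried together.
con = sqlite3.connect(":memory:")
sensors.to_sql("sensors", con, index=False)
readings.to_sql("readings", con, index=False)

# One declarative query instead of ad hoc per-source access code.
query = """
    SELECT s.site, AVG(r.value) AS avg_value
    FROM readings r JOIN sensors s ON r.sensor_id = s.sensor_id
    GROUP BY s.site
"""
print(pd.read_sql(query, con))
```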

Projects:
 smashHit aims to ensure trusted and secure sharing of data streams from personal and
industrial platforms, with the focus on cyber-physical products that generate large
amounts of data, such as vehicles. It will be driven by two industrial business cases
involving several existing industrial and personal data platforms in three different
sectors: automotive, insurance, and smart cities.
 OpertusMundi will deliver a trusted, secure and highly scalable pan-European industrial
geospatial data platform that will act as a single point of entry for the discovery, sharing,
trading, remuneration and use of proprietary and commercial geospatial data assets.
 Big Data for factories: Boost 4.0 aims to lead the construction of the European Industrial
Data Space to improve the competitiveness of Industry 4.0 and will guide the European
manufacturing industry in the introduction of Big Data in the factory, providing the
industrial sector with the necessary tools to obtain the maximum benefit of Big Data.
 Scalable End-user Access to Big Data: Optique brings a unique combination of technologies to bear on Big Data challenges. The project was extremely successful in bringing research results to commercial fruition.

Here are the biggest advantages of using big data.

 Improved business processes: Probably the biggest advantage of big data is that it helps businesses gain a huge competitive advantage. Apart from being able to understand and target customers better, analyzing big data can improve and optimize many facets of business operations. For instance, by mining big data, retailers can not only explore patterns in consumption and production but also improve inventory management, the supply chain and distribution channels, among other things.

 Fraud detection: This advantage of using big data comes from the implementation of machine learning technologies. These help banks and other financial institutions detect fraud, such as fraudulent credit card purchases, often before the cardholder even knows about it (a minimal sketch follows this list).

 Improved customer service: One of the most common goals among big data analytics programs is improving customer service. Today’s businesses capture a huge amount of information from different sources, such as customer relationship management (CRM) systems and social media, together with other points of customer contact. By analyzing this massive amount of information, they learn about users’ tastes and preferences, and with the help of big data technologies they can create experiences that are more responsive, personal, and accurate than ever before.
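
As the minimal sketch promised above, the snippet below flags anomalous transactions with scikit-learn's IsolationForest, one common machine learning approach to fraud detection; the transaction features and the assumed contamination rate are invented, and production systems use far richer features and labelled feedback.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount_eur, hour_of_day]. Values invented.
transactions = np.array([
    [12.0, 10], [25.5, 12], [18.0, 14], [22.0, 16],
    [19.5, 11], [9500.0, 3],           # the last one looks suspicious
])

# IsolationForest isolates outliers; contamination is the assumed fraud rate.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(transactions)   # -1 = anomaly, 1 = normal

for row, label in zip(transactions, labels):
    if label == -1:
        print("flag for review:", row)
```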

Despite these advantages, big data comes with some serious challenges that make its implementation difficult or risky. Here are the biggest disadvantages.

 Privacy and security concerns: Probably the biggest disadvantage of big data is that it can make businesses a softer target for cyberattackers. Even giant businesses have experienced massive data breaches. However, with the implementation of GDPR, businesses are increasingly investing in the processes, protocols, and infrastructure needed to protect big data.

 Need for technical expertise: Working with big data requires a great deal of technical proficiency, which is one of the key reasons big data experts and data scientists are among the most highly paid and coveted roles in the IT landscape. Training existing staff or hiring experts to handle big data can considerably increase a business’s costs.

We have all heard of the 3 Vs of big data: Volume, Variety and Velocity. Yet Inderpal Bhandar, Chief Data Officer at Express Scripts, noted in his presentation at the Big Data Innovation Summit in Boston that there are additional Vs that IT, business and data scientists need to be concerned with, most notably big data Veracity. Other big data Vs getting attention at the summit were validity and volatility. Here is an overview of the 6 Vs of big data.
Volume
Big data implies enormous volumes of data. It used to be that employees created data. Now that data is generated by machines, networks and human interaction on systems like social media, the volume of data to be analyzed is massive. Yet Inderpal states that the volume of data is not as much of a problem as other Vs like veracity.
Variety
Variety refers to the many sources and types of data, both structured and unstructured. We used to store data from sources like spreadsheets and databases. Now data comes in the form of emails, photos, videos, monitoring devices, PDFs, audio, and so on. This variety of unstructured data creates problems for storing, mining and analyzing data. Jeff Veis, VP Solutions at HP Autonomy, presented how HP is helping organizations deal with big data challenges, including data variety.
Velocity
Big Data Velocity deals with the pace at which data flows in from sources like business processes, machines, networks and human interaction with things like social media sites and mobile devices. The flow of data is massive and continuous. This real-time data can help researchers and businesses make valuable decisions that deliver strategic competitive advantages and ROI, if you are able to handle the velocity. Inderpal suggests that sampling data can help deal with issues like volume and velocity.
Veracity
Big Data Veracity refers to the biases, noise and abnormality in data: is the data being stored and mined actually meaningful to the problem being analyzed? Inderpal feels that veracity in data analysis is the biggest challenge compared to things like volume and velocity. In scoping out your big data strategy, you need your team and partners to help keep your data clean, and processes in place to keep ‘dirty data’ from accumulating in your systems.
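
As a small sketch of what keeping data clean can look like in practice, the snippet below removes duplicates, drops rows with impossible values and fills gaps in a toy table with pandas; the column names and validity rules are invented for the example.

```python
import pandas as pd

# Toy raw data with typical 'dirty data' problems; values invented.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "age":         [34, 34, -5, None, 51],   # -5 impossible, None missing
    "spend_eur":   [120.0, 120.0, 80.5, 42.0, 310.0],
})

clean = raw.drop_duplicates()                        # remove exact duplicates
valid_age = clean["age"].between(0, 120) | clean["age"].isna()
clean = clean[valid_age].copy()                      # drop impossible ages
clean["age"] = clean["age"].fillna(clean["age"].median())  # fill gaps
print(clean)
```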
Validity
Related to big data veracity is the issue of validity: is the data correct and accurate for the intended use? Clearly, valid data is key to making the right decisions. Phil Francisco, VP of Product Management at IBM, spoke about IBM’s big data strategy and the tools it offers to help with data veracity and validity.
Volatility
Big data volatility refers to how long data is valid and how long it should be stored. In this world of real-time data, you need to determine at what point data is no longer relevant to the current analysis.
Big data clearly deals with issues beyond volume, variety and velocity, extending to other concerns like veracity, validity and volatility. To hear about other big data trends and presentations, follow the Big Data Innovation Summit on Twitter: #BIGDB