
Policy & Internet, Vol. 9, No. 1, 2017

Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector
Fola Malomo and Vania Sena

The concept of Big Data has become very popular over the last decade, with large technology
companies successfully building their business models around its exploitation. The public sector in
the United Kingdom has tried to follow suit and local governments in particular have tried to
introduce new models of service delivery based on the routine extraction of information from their
own Big Data. These attempts have been hailed as the beginning of a new era for the public sector
where service delivery and commissioning are shaped by data intelligence on local needs and by
evidence on the outcomes. In this article we assess this claim and the extent to which it captures
the way local governments in the United Kingdom use intelligence from Big Data in light of the
structural barriers they face when trying to exploit their data. We also present a case study on the
development and deployment of an integrated data model for children’s services in a large county
council in the South-East of England.
KEY WORDS: Big Data, local government, data ecosystem, integrated data model, public sector,
service delivery

Introduction

As buzzwords go, Big Data is a very successful one. It is now a popular byword
for large volumes of data that are produced routinely by organizations and
are too complex for standard software packages to process (Mayer-Schönberger &
Cukier, 2013). The interest in Big Data and their potential benefits has been
triggered by the exponential increase in the volume of data collected by
organizations (in turn facilitated by the pervasive use of sensors and mobile
devices) and the simultaneous fall in the cost of storing and processing large data
sets.1 As a result, routine extraction of information from Big Data has become
affordable for most organizations and the private sector has led the way in
showing how insights from their data can help solve their business challenges.2
The experience from the private sector showing that data exploitation can
generate tangible benefits to an organization has alerted policymakers to the
potential uses of Big Data within the public sector (Information Commissioner’s
Office [ICO], 2016). Research by the McKinsey Global Institute (Manyika et al.,
2011) has suggested that the deployment of Big Data technologies across the
public sector in Europe can cut its costs by 20 percent, creating €300 billion in
value. In the United Kingdom, local governments have been identified as one
segment of the public sector that can benefit most from the systematic
exploitation of Big Data, with some commentators suggesting it could help them
save up to £25.4 billion over five years (Policy Exchange, 2015).
This estimate is based on the assumption that Big Data exploitation can help
local governments to allocate resources where they will have the biggest impact
and restructure services in such a way that early prevention is prioritized so as to
avoid the need for more expensive interventions (Cebr, 2012; Kim, Trimi, &
Chung, 2014; Yiu, 2012). While most of the benefits generated by the exploitation
of Big Data within local governments tend to be framed around cost savings, a
few commentators have suggested that the routine exploitation of data holdings
through Big Data methodologies can support local governments in their transition
toward a model of service delivery where their choices in terms of quantity and
quality of commissioned services are underpinned by data intelligence on users
and their (current and future) needs (Beresford, 2015; Desouza, 2014; Local
Government Association, 2014).
The purpose of this article is to assess this claim and to identify the main
barriers that stop U.K. local governments from fully benefitting from Big Data.
We will start by reviewing the existing evidence on the type of Big Data projects
that have been implemented by local governments so far and the benefits they
have generated. Our analysis suggests that for a long time the ambition around
the development of Big Data capabilities has not matched the actual use of
analytics in local governments. Indeed, Big Data methodologies have mostly been
employed to develop new digital channels for service delivery, and even if the
financial benefits of these initiatives are well documented, very little is known
about the benefits generated by them for the local communities. While this is
slowly changing as Councils are starting to exploit their own data to support their
commissioning activities, the overall impression, gained from even a cursory
overview, is that local governments are not exploiting their Big Data to their full
potential. We therefore set out to identify the key obstacles within local govern-
ments that may be hampering their efforts in this area. We start by analyzing the
key features of Big Data within the public sector: this is an important step of our
analysis as it will provide a first glimpse into the barriers faced by public sector
organizations when trying to exploit their Big Data. Our view is that the concept
of Big Data has been elaborated within the private sector and therefore its
standard definition tends to emphasize the key features of Big Data generated by
private companies (i.e., volume and variety). However, in the case of Big Data
from the public sector (and in particular local governments), complexity is much
more relevant than volume or variety. In turn, complexity creates the conditions
for structural data silos which, together with factors such as narrowly defined
data access among different authorities, unresolved ethical issues, inadequate
infrastructure and investment in information technology (IT), skills gaps, and
organizational culture may stop local governments from exploiting Big Data.
Finally, we present a case study on the deployment of an integrated data
model (IDM) for children’s services in Kent County Council (KCC). This is an
interesting example of both the benefits that the exploitation of their own Big
Data may generate for local governments and the barriers they face. The structure
of the article is as follows. In the next section we examine the existing evidence
on the benefits that Big Data can offer to local authorities in the context of service
delivery. We then discuss the concept of Big Data within the public sector.
The main barriers to the systematic exploitation of Big Data among local
authorities are then presented, followed by the case study on the implementation
of a Big Data project in KCC. Finally, concluding remarks are offered.

Big Data and Local Governments

There is a lot of discussion in the United Kingdom on the opportunities that
the exploitation of Big Data can offer local authorities. Local authorities are a
nontrivial component of the public sector in the United Kingdom: they deliver
1,500 services to local communities ranging from education and social care to
planning and waste collection (Local Government Association, 2014). At the same
time, this is a sector in a state of flux. Changes in the funding regime for local
governments (started in 2010) mean that they now face the challenge of having
to re-organize their services so that costs can be reduced while simultaneously
managing the demand for their services. As the operating environment changes,
the service delivery model employed by local governments is changing as well.
Traditionally, public services have been delivered in a reactive fashion whenever
and wherever needs have arisen. The pressure on local finances makes this
traditional model unfeasible, with the result that local governments in the United
Kingdom are moving toward a model where they do not directly deliver the
services but rather facilitate their delivery by working with their local communi-
ties (Policy Exchange, 2015). This shift in the service delivery model has been
popularized by the concept of “synaptic public services” (Policy Network, 2014)
and has put the commissioning function at the heart of the service provision. In
reality, with the advent of commissioning, local governments have started to act
as signposts toward services offered by other organizations, to use digital
channels to deliver their services, and most importantly to emphasize early
prevention so that services can be delivered in a proactive way by tackling the
drivers of demand for council services (rather than intervening after the demand
has arisen), with the result that existing resources can be allocated to the most
needy areas and/or services (Local Government Association, 2014). However,
gathering intelligence on the existing (and future) needs in an area requires local
governments to start using their own data differently, that is, to predict where
future needs may arise—rather than to simply monitor the level of past
activities—and to develop the analytical capability to build prediction models
that link policy interventions to future outcomes.

Using data as a prediction tool to inform decision making is very common
within the private sector and has generated tangible benefits to the companies
that have adopted this approach. Indeed, top-performing companies use
analytics five times more than poorly performing companies do, and make
decisions based on rigorous analysis at a rate more than double that of lower
performers (Manzoor, 2015). In addition, research shows that, on average,
companies that use data-driven decision making are 5 percent more productive
and 6 percent more profitable than their competitors (McAfee & Brynjolfsson,
2012). There is no similar evidence for the public sector. Indeed, for a long
time, the desire to use data to drive changes in the delivery of public services
has not been matched by actual practice (HM Government, 2013). In 2013 the
Government pointed out that the public sector in general lacked the skills to be
able to implement these major changes (HM Government, 2013) and indeed for
a while the development of data platforms that allowed data capture (although
not necessarily capture of Big Data) from different databases (stored by local
governments) and that allowed users to access services digitally was considered
an example of Big Data technology deployed within local governments.
There exists some evidence on the benefits generated by these data platforms in
terms of cost savings. For instance, the London Borough of Hammersmith and
Fulham saved £1.15 million annually through its online self-service portal,
registering 70 percent of its households (Local Government Association, 2014).
Spelthorne Borough Council saved approximately £43,800 by using the Engage app
to improve communication between the council and its local citizens, which led to
a 10 percent decrease in calls to customer services (Local Government
Association, 2016). East Riding of Yorkshire Council saved £91,500 over a
three-year period by taking payments through self-service via its website and
self-service stalls (Local Government Association, 2014).
More recently, local governments have started to exploit data differently,
that is, to reconfigure services so as to reduce operating costs (Symons, 2016). For
instance, South Cambridgeshire District Council calculated the optimal routes
for waste collection by using their data on local addresses (Symons, 2016). In
addition a small number of councils have started to exploit their data holdings
to support the commissioning process. KCC is one of those. Its Public Health
Observatory has linked a variety of National Health Service (NHS) data sets3
with data on adult social care and the result is an integrated data set containing
information on activities, costs, levels of staffing, demographics, and location
(Abi-Aad, 2016). The data set has been used to evaluate the commissioned
services and to identify the efficiency and effectiveness of individual services. It
has also been used to develop a new payment system and to decide on service
reconfiguration. Equally, Somerset County Council has shared the data on adult
social care with the local Clinical Commissioning Group to develop a “holistic
data model” which maps all the contacts with the health care and adult social
care systems for each patient. However, evidence on how this model is used
for commissioning is limited (Somerset Clinical Commissioning Group
[CCG], 2014).

The practice of using data to support commissioning is very limited and in
reality very little is known about the actual benefits generated by these initiatives.
What prevents local governments from fully exploiting their data for this
purpose? To be able to answer this question, we believe it is important to start
from an analysis of the nature of Big Data generated by the public sector, as this
will offer a glimpse of the key features that may make their exploitation difficult.
This is the focus of the next section.

Understanding Big Data in the Public Sector

The term Big Data emerged in the private sector around a decade ago,
although there are earlier references to the concept (Laney, 2001). In reality, there
is no agreed definition of Big Data. The most common one (Gartner, 2015) uses a
series of Vs to describe the dimensions of Big Data:

Volume considers the amount of data generated and collected.
Velocity refers to the speed at which data are analyzed.
Variety indicates the different types of data that are collected.
Viscosity measures resistance to flow of data.
Variability measures the change of rate of flow.
Veracity measures biases, noise, and abnormality.
Volatility indicates how long data are valid for and should be stored for.

However, there are additional features of Big Data that are essential to their
nature and make them of interest. For instance, Manyika et al. (2011) use a
subjective definition that emphasizes that the size of the data sets is beyond the
ability of typical database software tools to capture, store, manage, and analyze
them. Other definitions focus on the fact that they can be of different types, and
on the insights they can provide (Forbes, 2014).
To what extent do data produced within the public sector fit the standard
definition of Big Data? Several authors have pointed out that public sector data
cannot be considered to be Big Data because of their volume, variability, and
variety (Aggarwal, 2016a, 2016b; Van Rijmenam, 2013). There are two reasons for
this: first, public sector data tend to be generated out of administrative records of
users4 and therefore tend to be both structured and static. Second, it is typically
assumed that the data produced within the public sector are not sufficiently
granular for analysts to be able to draw a clear picture of a specific phenomenon
(Chambers, Dimitrova, & Pollock, 2012; Roccasalva, 2012; Thakuriah, Tilahun, &
Zellner, 2015). In reality, administrative records have volume, and depending on
the source, they may also have veracity (UNECE, 2013). In the case of local
governments, many of their data holdings easily fit the definition of Big Data in
terms of volume and veracity. For example, county councils have responsibility
over the provision of education and social care locally. In doing so, they
collect data on each pupil enrolled locally and these data may contain both
demographic and contextual information. Equally, data produced by adult social
care teams may be very detailed as they may contain information on both users
and services. In both cases, volume is generated by the granularity of the data
collected and by the amount of contextual information that may have been
collected simultaneously.
As the technology for data capture has changed, the nature of the data collected
by the public sector has changed as well. While the bulk still consists of
administrative data, the fall in the cost of capturing real-time data (Forbes, 2016) has
increased the incentive for the public sector to produce the dynamic and
unstructured data that we tend to associate with the private sector (Manzoor, 2015).
Examples of dynamic data produced by the public sector include data from cameras
and sensors that allow one to monitor traffic in real time as well as footfall or details
of ambulance dispatches in a region,5 while reports from social workers and
geographic information system data are examples of unstructured data stored by
county councils. More interestingly, the level of granularity of the data has become
comparable to what is produced within the private sector (Deloitte, 2011) with the
result that local government analysts can enrich their own data holdings with data
from videos, photos, or sensors, and so create a powerful resource for prediction. For
example, information on traffic flows in a large city that is routinely collected by
closed-circuit television cameras can be used to identify bottlenecks in traffic flows
and change the traffic management system, but can also be used to identify where
investment in infrastructure will be needed in the future.6
As the variety of data produced by the public sector has grown, a few authors
have pointed out that complexity—driven by the volume of data and the large
number of data sources—is the defining feature of Big Data within the public
sector (IBM, 2015) for several reasons (Sisense, 2013). First of all, public bodies
tend to address needs from communities that may cross the administrative
boundaries of a single administrative body (Local Government Association, 2014).
As a result, it is unlikely that any organization in the public sector will have all
the data necessary to get a clear picture of a given phenomenon in the local
community (Manzoor, 2015). The implication is that several authorities are
required to work together to share data.7 However, these may have collected the
data for different purposes. Consider domestic abuse. Information about victims
of domestic abuse is stored by different public sector bodies (Home Office, 2014)
operating under different regulatory frameworks with the result that the data sets
may not necessarily be compatible with each other. Second, data collection locally
may be driven by the need to report activities to the central government with the
result that the choice around the methodology for data collection and the unit of
observation is mostly shaped by the reporting requirements (Policy Exchange,
2015). Third, the physical infrastructure developed for data storage locally may be
built around the reporting requirements and not around the need to facilitate
data linkage across different agencies (Policy Exchange, 2015). The implication is
that data from different parts of the public sector referring to the same
phenomenon may be characterized by a variety of reporting units (individuals,
neighborhoods, boroughs, counties, etc.) with the result that they may not be
compatible if attempts are made to merge them.
Malomo/Sena: Data Intelligence for Local Government 13

Finally, the growth in the variety of data types within the public sector has
not been accompanied by the creation of common standards around data
provenance and governance across different authorities (NHS Information
Standards Board, 2002; HM Government, 2013). Needless to say, the picture becomes
more complicated if relevant data on users is stored by organizations outside the
public sector. Indeed, as more services are commissioned to private companies
and/or charities, local governments need to access the data stored outside their
systems to be able to evaluate the outcomes of the commissioned services.
However, unless data collection standards are agreed beforehand in the
commissioning contract, the data collected by external organizations may not be
compatible with the service data stored by local authorities and therefore they
may not be very useful.8 This is very different from what we observe in the
private sector. In most private organizations, their data holdings may give key
stakeholders an exhaustive understanding of their own customers (EY, 2014;
Microsoft, 2010). For instance, when a customer enters a bank, the data stored by
the bank allow its employees to check the customer’s profile in real time and to
find out which products or services are relevant to them.
Unsurprisingly, the concept of networks may be more helpful when describing
public sector data, as these are stored within large networks that span
organizations and individuals. In our view, a useful concept to represent this
specific dimension of data complexity within the public sector is the “Data
Ecosystem.” The concept of “Data Ecosystem” was introduced by Parsons et al.
(2011) and Pollock (2011) and in turn it is borrowed from the concept of the
“business ecosystem” which is defined as “a dynamic structure which consists of
an interconnected population of organizations” (Peltoniemi & Vuori, 2004, p.
279). A data ecosystem, by contrast, refers to the organizations, people, and
technologies collecting, handling, and using the data and the interactions between
them (Parsons et al., 2011). We adapt this concept to our case and argue that each
organization in the public sector is part of a much larger ecosystem of data
owners (either public or private) and potential users who can double as data
producers as well. In a data ecosystem, no organization has access to all the
data it needs; rather, the data are stored by different organizations and only
when merged together can they provide a powerful picture of the phenomenon
under analysis (Computerworld, 2015).
How do organizations operate in such data ecosystems? Ideally, data should
flow freely among the different parts of the ecosystem. According to Pollock
(2011), organizations should be able to share their data with other organizations
and end users who in turn should be allowed to participate in the data
production process. However, this is not what happens in the public sector.
Fragmentation of functions and competences among different organizations in the
public sector results in different interpretations of what can be shared and what
cannot. Ultimately these different interpretations do not facilitate collaboration
among the different organizations that are part of the ecosystem (OECD, 2014).
Different standards around data collection (not to mention the different require-
ments behind data collection) are also not helpful (GSM Association, 2015). The
result is that each organization operates like a data silo within the ecosystem and
therefore the seamless data flow we should observe in the ecosystem simply does
not happen (GSM Association, 2015). The implications of these silos for the
capability of local governments to exploit their data are explored in the next section.

Barriers to the Implementation of Big Data Solutions in Local Governments

As mentioned in the previous section, data silos are a consequence of the
fragmentation of activities within the public sector and therefore cannot be easily
avoided. However, their adverse impact is amplified by a variety of additional
factors like data access, ethical issues, organizational capabilities and so on. In
this section, we discuss each of them and assess their impact on the capability of
local governments to exploit their data holdings.

Data Access

The main advantage that Big Data technologies offer is their capability of
merging different types of data and mining them for actionable insights
(IBM, 2015). However, while the use of Big Data
approaches to data exploitation assumes that organizations can access all the data
they need, this is not the case in the public sector. While this fragmentation is
really intrinsic to the way the public sector is organized (Policy Exchange, 2015),
in reality its implications for data sharing have not been fully appreciated. Ideally,
this issue could be solved by letting public sector organizations share data among
themselves, and indeed local governments tend to build collaborative networks
exactly for this purpose (Jarqun, 2012). However, the practice behind these “data
sharing agreements” is mixed. The U.K. Data Protection Act (DPA) approved in
1998 identifies a set of principles that underpin data sharing. For instance, the
DPA stipulates that data can be shared for a specific purpose and for a specified
period of time. They can be used only in a way that is adequate for the purpose
of the data sharing and have to be stored securely. Interestingly, the legislation is
not prescriptive about what can be shared but leaves each organization with the onus of
deciding what can be shared and whether the sharing is justified and appropriate.
It is also the responsibility of the organization that plans to share the data to
assess the risks of data sharing. In other words, what can really be shared among
local authorities is left to the subjective interpretation of the local information
governance team (Symons, 2016). For instance, according to the existing legisla-
tion, informed consent is the key requirement for personal data to be shared and
it can be overridden only on specific occasions. While the principle itself is not
controversial, how “informed consent” from data owners can be obtained is
context dependent, as an assessment is required on the impact that data sharing
may have on the individuals whose data are shared (ICO, 2011). The implication
is that a uniform practice on what data can be shared locally has not yet emerged.
Furthermore, there is no solution to the fact that data can span organizations
that are not part of the public sector (HM Government, 2013; Local
Government Association, 2014) and that may therefore be unwilling to share data
with public sector bodies.
De-identifying personal data is another key requirement to fulfill before
personal data can be shared under the terms of the Data Protection Act. It
is usually argued that this last requirement is particularly relevant when trying to
merge small data sets as individuals can be easily re-identified once the data
linkage is completed (Ohm, 2010). However, this risk is still important when
dealing with large and complex data sets. Traditionally, most organizations
de-identify the data before sharing them9 but as the volume of data grows,
“deductive disclosure” becomes a major risk as computer scientists have shown
on a number of occasions that de-identified data can easily be re-identified
(Mochmann & Müller, 1979; Ohm, 2010). Deductive disclosure is facilitated by the
volume of data and its complexity, with the result that it is difficult to fully assess
beforehand the risk of deductive disclosure. Therefore, the only option left to
facilitate the linkage of data sets with personal information is to create a secure
environment where data can be safely de-identified and then matched. Safe
havens have been developed exactly for this purpose (Big Data Public Private
Forum, 2014).
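To make this step concrete, the sketch below illustrates one common pseudonymization pattern that a safe haven or trusted third party might apply before matching: direct identifiers are replaced by a keyed (salted) hash, so that records about the same person can be linked across data sets without the identifiers themselves leaving the secure environment. This is a minimal illustration with invented field names and an invented key, not a description of any specific U.K. facility.

```python
# Minimal sketch of pseudonymization prior to data linkage. Field names and
# the secret key are hypothetical; a real safe haven would wrap this step in
# governance, auditing, and much stricter key management.
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-trusted-third-party"  # hypothetical key

def pseudonym(name: str, dob: str, postcode: str) -> str:
    """Derive a stable linkage key from direct identifiers.

    A keyed hash (HMAC) rather than a plain hash means an outsider cannot
    reproduce the key simply by guessing name/date-of-birth/postcode
    combinations, which reduces the deductive disclosure risk noted above.
    """
    raw = f"{name.strip().lower()}|{dob}|{postcode.replace(' ', '').upper()}"
    return hmac.new(SECRET_KEY, raw.encode("utf-8"), hashlib.sha256).hexdigest()

def deidentify(record: dict) -> dict:
    """Replace direct identifiers with a pseudonymous linkage key."""
    out = dict(record)
    out["link_id"] = pseudonym(out.pop("name"), out.pop("dob"), out.pop("postcode"))
    return out

# Two departments hold records about the same person in different systems.
education = deidentify({"name": "Jane Doe", "dob": "2008-03-01",
                        "postcode": "ME14 1XX", "attendance_rate": 0.86})
social_care = deidentify({"name": "JANE DOE", "dob": "2008-03-01",
                          "postcode": "ME14 1XX", "open_referral": True})

# The two records can now be matched on link_id without names or addresses
# ever being exchanged between the departments.
assert education["link_id"] == social_care["link_id"]
```
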
The concept of a safe haven or “trusted third party” refers to an organization
that has been authorized to manage and process confidential data for a specific
purpose (Administrative Data Liaison Service, 2016). Its key feature is that it only
processes data in accordance with instructions from the data owners and includes
areas where personal data can be managed securely. At the moment, there are
several trusted third-party organizations in place in the United Kingdom but they
operate mostly in the field of health. What is clearly missing is a trusted
third-party organization that can operate across the whole public sector and offer
a safe environment for the de-identification of data. An intermediate solution to
the lack of infrastructure where sensitive data can be matched is the development
of data warehouses where data from local governments and from other parts of
the public sector can be matched and linked. Manchester City Council has
developed such a data warehouse that helps staff to access data from other local
authorities.10

Ethical Issues

There is a huge debate on the ethical challenges posed by the routine
extraction of information from Big Data.11 The key issue is that the capabilities
offered by technology in terms of extraction and manipulation of personal
information cannot be easily reconciled with what is perceived to be ethically
acceptable in this area. For instance, profiling customers of a company for a
marketing campaign can be quite profitable but at the same time, this commercial
practice can be considered intrusive and unethical. The same applies to local
governments. Using predictive modeling to assess the probability that specific
individuals (or groups of individuals) will require support and services from
local governments in the future raises many complex ethical questions. For
instance, individuals need to consent to the use of their personal data for a
purpose that differs from the one they were initially collected for: does this mean
that the development of predictive tools requires explicit consent or can such
exercises be carried out without explicit consent because of the research
exemption contained in the U.K. legislation on the protection of personal data?
Also does the use of predictive analytics adequately capture each individual’s
specific circumstances (Goel, Rao, & Shroff, 2016)? For instance, underspecified
risk models that perform well statistically may validate existing stereotypes. In
addition, the principle of data minimization (which underpins the U.K. legislation
on data protection; see ICO [2016]) limits the volume of personal data that
organizations can store with the result that the risk of missing key variables when
specifying a model is quite real. Additional ethical issues are related to the re-use
of the output of the specific predictive model for other purposes within the public
sector (ICO, 2016).
To exemplify these ethical issues, consider a simple algorithm to identify
which pupils should be entitled to free school meals. Technically, the development
of such a model is straightforward. Still, if the model is wrongly specified,
some children may be mistakenly identified as not being entitled to a free meal.
Given this risk, a few ethical questions immediately emerge: should children and
their parents be made aware of the exercise? Should they provide consent for this
use of their personal data, given that families may not wish to be stigmatized by an
algorithm? This last issue is particularly relevant given the fact that most
predictive analytics algorithms only provide an estimate of the risk of an event.
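As a purely illustrative sketch of why specification matters here, consider the toy model below. The eligibility rule, thresholds, and fields are invented for this example and are not the statutory free school meal criteria; the point is simply that a model built on an incomplete set of variables silently misclassifies the children whose circumstances it does not capture.

```python
# Illustrative only: a deliberately under-specified eligibility model built on
# made-up fields and thresholds, showing how misspecification silently flags
# entitled children as not entitled. Not the statutory free school meal rules.
from dataclasses import dataclass

@dataclass
class Pupil:
    name: str
    household_income: float            # the only feature the naive model uses
    receives_qualifying_benefit: bool  # ignored by the naive model

INCOME_THRESHOLD = 7_400.0  # hypothetical cut-off

def naive_entitled(p: Pupil) -> bool:
    # Under-specified: looks at income alone.
    return p.household_income <= INCOME_THRESHOLD

def better_entitled(p: Pupil) -> bool:
    # Adds the benefit flag that the naive model omits.
    return p.receives_qualifying_benefit or p.household_income <= INCOME_THRESHOLD

pupils = [
    Pupil("A", 6_000.0, False),  # entitled under both rules
    Pupil("B", 9_500.0, True),   # entitled, but missed by the naive rule
]

for p in pupils:
    print(p.name, naive_entitled(p), better_entitled(p))
# Pupil B is wrongly treated as not entitled by the naive model; it is exactly
# this kind of error that raises the consent and stigmatization questions above.
```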

Organizational Changes

Big Data technologies can be used as levers to introduce changes in the way
services are provided as long as key stakeholders in the organization accept that
insights from data will inform service delivery (Bain & Company, 2013).
However, embedding a new technology in an organization is a complex process
that may require radical changes in its processes (Sharma, Reynolds, Scheepers,
Seddon, & Shanks, 2010; Sharma & Shanks, 2011); equally any change requires
support from the senior management within the organization (Symons, 2016).
This is true both in the private and the public sector. Unfortunately, this point is
not sufficiently appreciated by key stakeholders within local governments.
Indeed, it is commonly believed that the deployment of Big Data technologies
simply implies a change in the way data are interrogated and interpreted and
therefore should not have any bearing on the way internal processes are
organized (Sharma, Mithas, & Kankanhalli, 2014).

Investments in Information Technology and Skill Gaps

It is well known that investment in IT has been very uneven between the
private and public sector, and within the private sector as well (Policy Exchange,
2015). Over the last decade, there has been major growth in information and
communications technology (ICT) budgets across the private sector (Policy
Exchange, 2015). For example, the banking sector and the financial services
industry spend 8 percent of their total operating expenditure on ICT (Policy
Exchange, 2015). By contrast, among local authorities, ICT spending makes
up only 3–6 percent of the total budget (Policy Exchange, 2015).
The underinvestment in ICT is not the only issue faced by local governments
when trying to exploit Big Data. Successful deployment of Big Data technologies
needs to be accompanied by the development of internal skills that allow for the
analysis and modeling of complex phenomena, which is essential to the development
of a data-driven approach to decision making within local governments
(IBM, 2015; Policy Exchange, 2015). However, local governments tend to lack
these skills and this skills gap may be exacerbated by the high turnover in the
sector. Another common weakness among local governments is the sector’s
fragmentation in terms of IT provision (Policy Exchange, 2015). This is simply
due to the fact that teams within local authorities work in isolation with the result
that most IT applications are unique to each of them. At the same time, separate
IT systems may rely on data recording standards that may not be compatible
with those employed by other organizations (Policy Exchange, 2015). This
encourages those teams to work in different ways and develop skill sets that may
be useful to interact with specific computing environments but may not be
deployable elsewhere (Policy Exchange, 2015). All this reinforces the structural
silos that prevent local authorities from sharing and exploiting their data.

The Integrated Data Model in the Kent County Council Children’s Service

In the previous sections, we have discussed the challenges faced by local
governments when trying to use their data to support commissioning. To better
illustrate some of these challenges as well as the benefits that Big Data may
generate for local governments, we present and discuss the experience of the
corporate intelligence team in KCC (a large County Council in the South-East of
England) which was tasked with the development of an IDM for the provision of
children’s services. To the best of our knowledge, no other local government has
developed a similar model. To this end, we first provide some background
information on the local area and the activities of the County Council. We then
describe the IDM and its uses. We conclude with a description of the limitations of
the model and of the lessons that can be drawn from the experience of the team at
KCC.

Background

KCC has jurisdiction over a large part of the South-East of England. The
region overseen by the Council is mostly populated by a large number of small
and medium-sized towns, although parts of the region are rural. However, despite
the apparent homogeneity, the socio-economic characteristics of the region vary
considerably; the west is populated by relatively affluent commuter towns that tend to be
well connected to London while remote rural communities occupy the east of the
county. The coast has attracted affluent retirement communities but there are also
significant areas of urban deprivation in coastal towns.
Like many other councils in England, KCC has had to cut its budgets
substantially (following the cuts to local government agreed by the central
government in 2010) while facing an increasing demand for its services, mostly
driven by demographic changes. To minimize the adverse impact of the funding
cuts on users of the services,12 the Council agreed that a radical change in the way
services are delivered was needed. In particular, prevention was emphasized as a
key priority. However, it was quickly realized that to be able to re-design the
delivery of services around early prevention and identify gaps in the existing
provision, commissioners13 based in the Council needed to gain a better
understanding of the drivers of the demand for services across different depart-
ments and their connections: indeed, the overall perception was that commis-
sioners had a good understanding of the needs in their own area of responsibility
but lacked an integrated view of how the services were connected to each other
and how changes in the provision of “upstream” (possibly cheaper) services could
affect the “downstream” (and potentially more expensive) provision.
To develop an integrated model of the services provided by the Council, the
Business Intelligence team focused on “service data” (i.e., data produced
routinely by the Council when delivering their services) and how they could be
used for this purpose. Typically, the analysis and interrogation of data on service
users was carried out by each service in isolation and mostly in a retrospective
fashion; this approach was useful for monitoring the outcomes of a given service.
However, this way of analyzing data was not considered particularly useful by
commissioners given the desire to identify emerging needs that could potentially
be addressed by early intervention. The key issue for the team was therefore to
identify the extent to which existing service data could be used to develop an
integrated model that could highlight the connections among the variety of
services provided by the Council. It immediately became apparent to the Business
Intelligence team that although data were collected separately by each service
within the Council, these data held detailed information on users of multiple services
across the Council, and this feature of the service data was key to the development
of an integrated model which could give commissioners a clear picture of the
links among Council services.

The Development of the Integrated Data Model for Children

To explore and demonstrate the potential benefits that an integrated model of
local services could bring to commissioning, the Business Intelligence team at
KCC worked with its children and young people’s services to develop a
prototype IDM using service data on children. At its heart, the prototype IDM
matches children’s data from service records that are held in many systems across
different services. For each child, there is information on each service they receive
from the Council and the frequency of use. There are multiple services provided
by a Council that can be of interest to children, with the result that a child may
have accessed several of the services on more than one occasion. By identifying
the services children get access to and the journey they follow through the
Council services, it is possible to compute some correlations among the children’s
characteristics, their needs, and the likelihood that they will need to access a
specific service in the future. For instance, a change in the usage of the local
libraries followed by a request to access counseling services may indicate that
there is a change in the personal circumstances of the child which may require
more support from the Council later on.
Names and addresses of children were used as the identifiers that allow all
the data sets to be matched. The model is aimed at children and young people
between 0 and 25 years old, although the most comprehensive data coverage
relates to children aged 4–16. The size of the merged data sets ranges from 1,000
to 250,000 observations. Similarly, the number of variables in each data set varies
but the final data set has around 80 variables. Once the data sets were matched,
individual identifiers were removed as the main purpose of the model was to
identify local needs so that commissioning and service improvement could be
supported with the right information and data—in line with the U.K. DPA.14
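A minimal sketch of this kind of matching is shown below. The data sets, column names, and normalization rules are hypothetical, and the matching used in the actual IDM is likely to be more sophisticated (for example, handling misspellings and address variants): records from two services are joined on a normalized name and address, after which the direct identifiers are dropped, mirroring the sequence described above.

```python
# Sketch of matching service records on name and address, then dropping
# identifiers. Data sets and columns are invented; real matching would need
# fuzzier rules (nicknames, typos, house moves) and careful quality checks.
import pandas as pd

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Add normalized match keys derived from name and address."""
    out = df.copy()
    out["name_key"] = out["name"].str.strip().str.lower()
    out["addr_key"] = (out["address"].str.strip().str.lower()
                       .str.replace(r"\s+", " ", regex=True))
    return out

education = normalize(pd.DataFrame({
    "name": ["Jane Doe", "Sam Roe"],
    "address": ["1 High St, Maidstone", "5 Mill Lane, Dover"],
    "attendance_rate": [0.86, 0.97],
}))

library = normalize(pd.DataFrame({
    "name": ["JANE DOE "],
    "address": ["1 High St,  Maidstone"],
    "visits_last_year": [3],
}))

# Left join so that every child known to education services is retained even
# when there is no matching library record (match rates vary across data sets).
idm = education.merge(
    library[["name_key", "addr_key", "visits_last_year"]],
    on=["name_key", "addr_key"], how="left",
)

# Once matched, remove the direct identifiers, in line with the purpose of the
# model (identifying local needs rather than profiling named individuals).
idm = idm.drop(columns=["name", "address", "name_key", "addr_key"])
print(idm)
```
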
The prototype led to further development of the model,
which now draws on a much larger number of data sets. These include:
1. Information about educational outcomes.
2. Troubled families.
3. Early help notifications.
4. Youth offending.
5. Library membership/usage.
6. Specialist children services.
Additional information about the social context is drawn from the Experian
social segmentation tool Mosaic, which allows a better understanding of the
incidence of specific issues and problems within specific communities.
The IDM has, however, some limitations. Given the variety of the data sets
employed to build the IDM, the match rate among data sets varies15 and this
has so far limited the development of a longitudinal version of the
IDM that would help to evaluate the effectiveness of early interventions on a
variety of outcomes. In addition, the IDM does not include data from external
organizations (like the NHS, Police, Fire Services, etc.). This is not perceived by
the Business Intelligence team to be a major limitation at the moment, as the IDM is
regarded as mostly an internal operational tool that supports the delivery and
commissioning of children’s services.
The main users of the IDM have been commissioners. How has the IDM
been used for commissioning? Operationally, the IDM has led to the develop-
ment of a “risk score” which is used to identify groups of children who are
more likely to use different services. For instance, the risk score may indicate
that some children (with a pattern of irregular school attendance) may be more
likely to use counseling services at some point in time, implying that an analysis
of what drives the irregular school attendance pattern is possible. The score is
only a descriptive indicator that ranks individual children and is not a
predictor of future risk. But the risk score has allowed commissioners to
identify clusters of children with multiple needs and the socio-economic
characteristics of their families while making explicit the different pathways
followed by children when accessing the different services. In turn, this exercise
has helped to identify the services that are under pressure so that the pressure
in one area can be reduced by strengthening the provision in other areas. For
instance, the IDM and the associated risk score have helped to show that most
of the referrals to children’s services could have been avoided with a stronger
provision of support for children’s emotional well-being. In particular, the IDM
has helped to highlight the gap in existing provision around emotional
well-being services with the result that corrective measures were put in place
last year to address the gap.16 Similarly, the IDM has identified a gap in the
provision of mental health support services not only for the child but for their
family as well.
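To illustrate what a purely descriptive score of this kind might look like, the sketch below assigns invented weights to a handful of hypothetical service-contact indicators and ranks children by the resulting total. The weights and indicators do not reproduce the KCC model; the point is that the score ranks children by observed patterns of service use rather than predicting future outcomes.

```python
# Illustrative descriptive risk score: a weighted count of observed indicators
# drawn from matched service data. Weights and indicators are hypothetical and
# do not reproduce the KCC model; the score ranks children, it does not predict.
from typing import Dict, List

WEIGHTS: Dict[str, float] = {
    "irregular_attendance": 2.0,
    "early_help_notification": 3.0,
    "youth_offending_contact": 3.0,
    "lapsed_library_use": 0.5,
}

def risk_score(child: Dict) -> float:
    """Sum the weights of the indicators observed for one child."""
    return sum(w for key, w in WEIGHTS.items() if child.get(key, False))

children: List[Dict] = [
    {"id": "child_001", "irregular_attendance": True, "lapsed_library_use": True},
    {"id": "child_002", "early_help_notification": True, "youth_offending_contact": True},
    {"id": "child_003"},
]

# Rank children by score; commissioners can then look at clusters of
# high-scoring children and the services they touch to spot provision gaps,
# such as the emotional well-being gap discussed above.
for child in sorted(children, key=risk_score, reverse=True):
    print(child["id"], risk_score(child))
```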

Limitations of the IDM

Although the development of the IDM has clearly helped to identify the
gaps in the existing provision of services for children in Kent, it suffers from a
set of limitations that make it a useful descriptive tool for internal planning
but no more than that. The first limitation is that the model was built using
administrative records stored by KCC and therefore only covers children
known to children’s services. Also, the nature of the data implies that many of
the children with multiple needs and from the most deprived communities in
Kent tend to be overrepresented in the data. Therefore, the IDM is not used to
identify needs in the population at large or to assess the impact of interventions
on the whole population. Equally, it cannot be considered to be a tool that
allows prediction of the impact of specific interventions on children’s welfare.
The second limitation is the fact that no external data (apart from data from
Mosaic) have been used to enrich the model. In other words, the model does not
have a lot of contextual information on the children included in the data
underpinning the model. The lack of external data does not dramatically affect
the quality of the model: indeed, validation tests conducted on the IDM show
that the existing version of the model is accurate in identifying children at risk.
Using only data internal to the Council to develop the IDM has been a conscious
choice of the team: indeed, doubts were raised about the usefulness of involving
external partners in the project given the fact that the IDM is first of all an
operational tool of the Council rather than a shared project supported by a group
of external stakeholders. Concerns were also raised about the quality of data held
by external partners and how easily a data sharing agreement among the
interested parties could be made. Interestingly, lack of external data is not
perceived as negatively affecting the quality of the model although it is
acknowledged that this limits the predictive capabilities of the model.

A significant challenge to the development of the IDM was the lack of
common standards. Indeed, when asked about data standards, the Business
Intelligence team at KCC replied as follows:

There have been issues within services as well as between services eg
duplicate UPNs and a lack of a common referencing system between
services that has made the data matching process more challenging . . .
However, working in this way with services can in many cases through
highlighting the benefits to be gained, lead to improvements in the data
quality and data capture systems for the organisation as a whole.

Lessons

The development of the IDM at KCC offers a set of interesting insights and
lessons on how data projects can be successfully embedded into the decision-
making process of a council. First of all, data projects have to be aligned with the
strategic priorities of the organization so that support from the senior manage-
ment and key stakeholders can be easily gained. When asked about the critical
success factors for this model, the Business Intelligence team at KCC replied in
the following way:

There is an operational ambition to improve integration of services for
young people. Therefore, it had the support of senior management,
particularly in the area of earlier intervention and education. We
developed the model as a prototype first and developed it further to
analyse the issues/questions faced by senior management, which helped
to demonstrate its usefulness and applicability to service planning.

It is also believed that the strong initial support for the development of the
IDM is one of the main reasons why it is used by commissioners across the
Council. Indeed:

To date the model has been used to better understand young people and
the communities from which they emerge. It has been used to target the
delivery of services, commission preventative services, inform business
planning and workforce development . . . The development of detailed
locality based mapping that draws on the data has been particularly
helpful for commissioners looking to use resources differently and
develop service delivery solutions.

Second, data projects must be well defined and their main objectives have to
be realistic and shared among the different parties involved in the project. In the
case of the development of the IDM, the team’s decision not to involve external
partners has definitely sped up the development of the model but:

. . .issues of trust and common understanding had to be developed to
make the project happen.

Finally, data projects developed within Councils can be successful if data
analysis is acknowledged to be an important function of the Council. Indeed, in
the Business Intelligence team’s view, the structure of the Council itself has also
helped the development of the IDM:

The success of the project has to be ascribed to the presence of a strong
research corporate function as it enabled to resource the project but
more importantly for the whole services to maintain focus on the
potential gains from an integrated model that balanced the individual
directorate aims.

Conclusions

Local governments collect large amounts of data on their services and users.
To echo a 2015 Capital City Foundation (Copeland, 2015) report on Big Data in
New York:

Cities (local places) are flooded with data, but data by itself is of little
value (a spreadsheet of traffic data does nothing to tackle congestion). To
have impact it needs to be joined up. It requires people with the time,
skills and resources to interpret and seek insights from it. Above all, data
must drive action on outcomes that really matter to citizens. That is why
being data-driven is not primarily a challenge of technology; it is a
challenge of direction and organizational leadership.

However, most local governments are grappling with the issue of how to
exploit the different data they hold and build the necessary analytical capabilities
so that they can move toward data-driven decision making. The purpose of this
article has been to shed some light on how local governments in the United
Kingdom are using data to reshape local services; in particular we were interested
in identifying the benefits and the challenges faced by local governments when
trying to exploit their own data. Our analysis suggests that most local govern-
ments are not exploiting their Big Data to their full potential owing to the
presence of structural data silos and other contextual factors. Given these barriers
to the exploitation of Big Data within local governments, the success of data
projects hinges on the following three key elements:
1. The development of a general legal framework that facilitates data sharing
among local authorities. As the interpretation of the existing legislation may
create significant uncertainty as to what data can be shared, general data
sharing protocols have to be developed well before the need to share data for a
specific purpose arises.

2. The upgrading of the IT infrastructure available to local governments
accompanied by a plan to upgrade the skills of the existing workforce. Our
analysis shows that so far the development of the IT infrastructure in local
authorities has not been planned to facilitate data matching, and that therefore
there is a need to change the general philosophy behind IT investment if there
is a real desire to develop a Big Data approach to data exploitation. However,
these changes cannot be decoupled from the development of new skills among
the workforce in local governments. Arguably this is the most difficult
challenge to tackle given the continuous restructuring exercises local govern-
ments are going through, but still not impossible to solve as long as these
exercises take into account the need to retain staff with the right skills.
3. Support from senior management is a critical success factor for most
data projects, in particular when working across different data silos. Informal
discussions with key stakeholders within local government have shown that
when senior management does not support the use of data in the
decision-making process, it is harder to make a case for it at the middle
management level.
Our view is that once these conditions are fulfilled, it will then be possible for
local governments to fully leverage the benefits offered by Big Data.

Fola Malomo, D. Phil, Essex Business School, University of Essex, Southend-on-Sea, United Kingdom [fmalomo@essex.ac.uk].
Vania Sena, D. Phil, Essex Business School, University of Essex, Southend-on-Sea, United Kingdom.

Notes

1. For instance, the cost of hard drive storage per gigabyte has fallen from $1,120 in 1995 to $0.03
in 2014. The Quantum 2,500 MB hard-drive was available in 1996 for $440 ($207 per gigabyte); the
3,000,000 MB Seagate Barracuda hard-drive was available in 2013 for $129 ($0.043 per gigabyte)
(Komorowski, 2016).
2. Examples of nontechnology companies that have embedded the systematic exploitation of Big
Data into their business model include Macy’s, Sears, UPS, and General Electric, among others. In
the private sector, Big Data technologies tend to mostly support marketing and customer relations
although Big Data technologies are gradually expanding into other functions of the corporation
(like human resource management, supply chain management, etc.).
3. Examples include data relative to general practitioner practices, community health, mental health,
and acute hospital episodes.
4. In the context of local governments, data holdings mirror the services these provide to local
communities. For instance, county councils have responsibility over education, transport, planning,
social care, libraries, waste management, and trading standards while city councils oversee more
local services like rubbish collection, council tax collection, housing, and planning applications.
For each of the services they manage, local authorities collect (as a minimum) information on the
beneficiaries and the delivered services.
5. Novel types of data include audio data captured through networks of auditory sensors (Carr &
Doleac, 2015).
6. The U.S. Department of Transportation has started to use cameras and license plate recognition to
monitor the traffic flow so as to identify where investment is needed (Smart Data Collective, 2015).

7. Historically government departments have worked in isolation when designing, procuring and
managing their own ICT solutions. As a result, the ICT infrastructure within the public sector is
relatively expensive and fragmented; and often replicates solutions across teams with the result
that the ability to share and re-use data is often hindered (HM Government, 2011). The
Department for Communities and Local Government (2016) estimates that the use of common
standards for waste data can generate savings of £505 million for local authorities in England over
14 years. Individual local governments stand to gain between £117,900 and £219,255 per year by
using common data standards in the area of waste services.
8. Initially, commissioning contracts did not include clauses on data collection and data sharing with
the result that local governments could not get access to the data produced by the organization in
charge of the commissioned services and therefore they were not in a position to evaluate the
commissioned services. This situation has, however, changed as clauses on data sharing and data
standards are now included in commissioning contracts.
9. This includes a set of techniques like data masking and aggregation.
10. The data warehouse has been used by social care analysts to review the existing distribution of
needs as well as to identify future needs.
11. It has been argued that Big Data threaten the right to privacy and take away from individuals
the power to define themselves, thus erasing individual and group identities (Richards & King,
2014).
12. In the words of the staff of the Business Intelligence team at KCC “. . .the strategy has been to
place our customers and residents at the very heart of our transformation plans.”
13. Commissioners are public officers in local governments who manage the whole commissioning
process locally.
14. According to the Business Intelligence team, “the research exemption of the Data Protection Act
(Section 33) allowed the project to proceed.”
15. The matching rate varies between 40 and 90 percent of the records.
16. The outcomes of these additional services have not yet been evaluated and so the actual benefits
these services have generated are not known.

References

Abi-Aad, G. 2016. “Health, Social Care and Public Health” [PowerPoint presentation]. Local Datavores
Research Workshop. Available at: https://www.youtube.com/watch?v=zgNowZ_UJAg.
Administrative Data Liaison Service. 2016. ADLS Trusted Third Party Service. http://www.adls.ac.uk/
adls-trusted-third-party-service/.
Aggarwal, A. 2016a. “A Hybrid Approach to Big Data Systems Development.” In Managing Big Data
Integration in the Public Sector, ed. A. Aggarwal. Hershey, PA: IGI Global, 20–37.
Aggarwal, A. 2016b. Managing Big Data Integration in the Public Sector. USA: IGI Global.
Bain and Company. 2013. Big Data: The Organisational Challenge. http://www.bain.com/publications/
articles/big_data_the_organizational_challenge.aspx.
Beresford, M. 2015. “Demystifying Data: The Data Revolution and What It Means for Local
Government.” An NLGN White Paper. http://www.nlgn.org.uk/public/wp-content/uploads/
DEMYSTIFYING-DATA1.pdf.
Big Data Public Private Forum. 2014. Final Version of Sector’s Requisites. http://big-project.eu/sites/
default/files/BIG_D2_3_2.pdf.
Carr, J., and J. Doleac. 2015. “The Geography, Incidence and Underreporting of Gun Violence: New
Evidence Using Shotspotter Data.” Unpublished Manuscript.
Cebr. 2012. Data Equity: Unlocking the Value of Big Data. Report for SAS, April.
Chambers, L., V. Dimitrova, and R. Pollock. 2012. “Technology for Transparent and Accountable
Public Finance.” A Report by the Open Knowledge Foundation. http://community.openspending.
org/resources/gift/pdf/ttapf_report_20120530.pdf.
Computerworld. 2015. The Data Science Ecosystem. http://www.computerworld.com/article/2899647/
the-data-science-ecosystem.html.
Copeland, E. 2015. “Big Data in the Big Apple - The Lessons London Can Learn from New York’s
Data-Driven Approach to Smart Cities.” London: Capital City Foundation.
Deloitte. 2011. “Open Data: Driving Growth, Ingenuity and Innovation.” A Deloitte Analytics Paper.
https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/deloitte-analytics/open-data
-driving-growth-ingenuity-and-innovation.pdf.
Department for Communities and Local Government. 2016. “Making the Case for Data Standards.”
Final Business Case for Local Waste Services. http://www.localdirect.gov.uk/wp-content/
uploads/2016/03/Waste-standards-business-case.pdf.
Desouza, K. 2014. “Big Data Is a Big Deal for Local Government.” International City/County Management
Association Knowledge Network. http://icma.org/en/icma/knowledge_network/blogs/blogpost/
2162/Big_Data_Is_a_Big_Deal_for_Local_Government.
EY. 2014. “Big Data: Changing the Way Businesses Compete and Operate.” In Insights on Governance,
Risk and Compliance. http://www.ey.com/Publication/vwLUAssets/EY_Big_data:_changing
_the_way_businesses_operate/$FILE/EY-Insights-on-GRC-Big-data.pdf.
Forbes. 2014. 12 Big Data Definitions: What’s Yours? http://www.forbes.com/sites/gilpress/2014/09/
03/12-big-data-definitions-whats-yours/.
Forbes. 2016. “Big Data Costs Money. Big Analytics Makes Money.” http://www.forbes.com/sites/
teradata/2016/05/12/big-data-costs-money-big-analytics-makes-money/#698e9d0d4bf9.
Gartner. 2015. What Is Big Data—Gartner IT Glossary—Big Data. http://www.gartner.com/it-glossary/big-data.
Goel, S., J.M. Rao, and R. Shroff. 2016. “Personalised Risk Assessments in the Criminal Justice System.” American Economic Review: Papers and Proceedings 106 (5): 119–23.
GSM Association. 2015. Unlocking the Value of IoT Through Big Data. http://www.gsma.com/
connectedliving/wp-content/uploads/2015/12/cl_iot_bigdata_11_15-004.pdf.
HM Government. 2011. “Government ICT Strategy—Strategic Implementation Plan. Moving From the
‘What’ to the ‘How.’” [Online] https://www.gov.uk/government/uploads/system/uploads/
attachment_data/file/266169/govt-ict-sip.pdf.
HM Government. 2013. “Seizing the Data Opportunity: A Strategy for UK Data Capability.” https://
www.gov.uk/government/uploads/system/uploads/attachment_data/file/254136/bis-13-1250-
strategy-for-uk-data-capability-v4.pdf.
Home Office. 2014. Multi Agency Working and Information Sharing Project Final Report. https://www.
gov.uk/government/uploads/system/uploads/attachment_data/file/338875/MASH.pdf.
IBM. 2015. “Big Data: It’s About Complexity, Not Size.” IBM Center for the Business of Government.
http://www.businessofgovernment.org/blog/business-government/big-data-it%E2%80%99s-about-
complexity-not-size.
Information Commissioner’s Office (ICO). 2011. Data Protection Code of Practice. https://ico.org.uk/for-
organisations.
Information Commissioner’s Office (ICO). 2016. Big Data. https://ico.org.uk/for-organisations/guide-
to-data-protection/big-data/.
Jarqun, P.B. 2012. “Data Sharing: Creating Agreements. In Support of Community-Academic Partnerships.” http://www.ucdenver.edu/research/CCTSI/community-engagement/resources/Documents/DataSharingCreatingAgreements.pdf.
Kim, G.H., S. Trimi, and J.H. Chung. 2014. “Big-Data Applications in the Government Sector.”
Communications of the ACM 57 (3): 78–85.
Komorowski, M. 2016. A History of Storage Cost. http://www.mkomo.com/cost-per-gigabyte.
Laney, D. 2001. “3D Data Management: Controlling Data Volume, Velocity, and Variety. Application
Delivery Strategies.” Meta Group. https://blogs.gartner.com/doug-laney/files/2012/01/ad949-
3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf.
Local Government Association. 2014. Transforming Local Public Services: Using Technology and Digital Tools and Approaches. http://www.local.gov.uk/documents/10180/11553/Transforming+public+services+using+technology+and+digital+approaches/ab9af2bd-9b68-4473-ac17-bbddf2adec05.
Local Government Association. 2016. Spelthorne Borough Council Engage Mobile App. http://www.local.gov.uk/documents/10180/6360115/Spelthorne++Engage+app+case+study+FINAL++Copy.pdf/2c882e49-d3c5-4ca6-a23e-134e01558736.
Manyika, J., M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A.H. Byers. 2011. Big Data: The
Next Frontier for Innovation, Competition, and Productivity (Full Report). http://www.mckinsey.
com/~/media/McKinsey/Business%20Functions/Business%20Technology/Our%20Insights/Big
%20data%20The%20next%20frontier%20for%20innovation/MGI_big_data_full_report.ashx.
Manzoor, A. 2015. “Emerging Role of Big Data in Public Sector.” In Managing Big Data Integration in the
Public Sector, ed. A. Aggarwal. Hershey, PA: IGI Global, 268–88.
Mayer-Schönberger, V., and K. Cukier. 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston, New York: Houghton Mifflin Harcourt.
McAfee, A., and E. Brynjolfsson. 2012. Big Data: The Management Revolution. Harvard Business Review.
https://hbr.org/2012/10/big-data-the-management-revolution.
Microsoft. 2010. Bringing Master Data Management to the Stakeholders. http://download.microsoft.com/
download/d/b/d/dbde7972-1eb9-470a-ba18-58849db3eb3b/bringingmasterdatamanagementtoth
estakeholders.pdf.
Mochmann, E., and P.J. Müller. 1979. Data Protection and Social Science Research: Perspectives From Ten Countries. Ardent Media. https://books.google.co.uk/books?hl=en&lr=&id=6iR3BBut7Z8C&oi=fnd&pg=PA7&ots=Cc8o1TKBHM&sig=bCJuAmhufYVCDJlPRX4XrGhqfPI#v=onepage&q&f=false.
NHS Information Standards Board. 2002. “Capturing Data for NHS Patients Resident in England Treated
in Non-NHS Organisations and Overseas.” NHS Information Authority. http://webarchive.
nationalarchives.gov.uk/; http://www.isb.nhs.uk/library/dscn/dscn2002/472002.pdf.
OECD. 2014. “Data-Driven Innovation for Growth and Well-Being.” Interim Synthesis Report. http://
www.oecd.org/sti/inno/data-driven-innovation-interim-synthesis.pdf.
Ohm, P. 2010. “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymisation.”
UCLA Law Review 57, 1701.
Parsons, M.A., Ø. Godøy, E. LeDrew, T.F. De Bruin, B. Danis, S. Tomlinson, and D. Carlson. 2011.
“A Conceptual Framework for Managing Very Diverse Data for Complex, Interdisciplinary
Science.” Journal of Information Science 37 (6): 555–69. https://dl.dropboxusercontent.com/u/
546900/JIS-1391-v6.pdf.
Peltoniemi, M., and E. Vuori. 2004. “Business Ecosystem as the New Approach to Complex Adaptive
Business Environments.” In FeBR 2004: Frontiers of E-Business Research 2004, Conference Proceedings
of eBRF 2004, eds. M. Seppä, M. Hannula, A. Järvelin, J. Kujala, M. Ruohonen, and T. Tiainen.
Tampere, Finland: Tampere University of Technology and University of Tampere, 267–81.
Policy Exchange. 2015. “Small Pieces Loosely Joined. How Smarter Use of Technology and Data Can
Deliver Real Reform of Local Government.” http://www.policyexchange.org.uk/images/
publications/small%20pieces%20loossely%20joined.pdf.
Policy Network. 2014. Localising Power: A Synaptic Approach to Public Services. http://www.policynetwork.net/pno_detail.aspx?ID=4753&title=Localising+power%3a+A+synaptic+approach+to+public+services.
Pollock, R. 2011. “Building the (Open) Data Ecosystem.” Open Knowledge Blog. http://blog.okfn.org/
2011/03/31/building-the-open-data-ecosystem/.
Richards, N., and J. King. 2014. “Big Data Ethics.” Wake Forest Law Review 49 (2): 393–432.
Roccasalva, G. 2012. “How Big Data Might Induce Learning With Interactive Visualisation Tools.” TERRITORIO ITALIA. http://www.agenziaentrate.gov.it/wps/wcm/connect/04f193804fa19505b81fbd36409091e6/eng_How+big+data+might+induce+learning.pdf?MOD=AJPERES&CACHEID=04f193804fa19505b81fbd36409091e6.
Sharma, R., S. Mithas, and A. Kankanhalli. 2014. “Transforming Decision-Making Processes: A
Research Agenda for Understanding the Impact of Business Analytics on Organisations.”
European Journal of Information Systems 23 (4): 433–41.
Sharma, R., P. Reynolds, R. Scheepers, P. Seddon, and G. Shanks. 2010. “Business Analytics and
Competitive Advantage: A Review and a Research Agenda.” In Bridging the Socio-Technical Gap in
DSS—Challenges for the Next Decade, eds. A. Respicio, F. Adam, and G. Phillips-Wren. Amsterdam,
NL: IOS Press, 187–98.
Sharma, R., and G. Shanks. 2011. The Role of Dynamic Capabilities in Creating Business Value From IS Assets. Americas Conference on Information Systems. http://aisel.aisnet.org/amcis2011_submissions/
135/.
Sisense. 2013. “Business Analytics and the Data Complexity Matrix.” White Paper. http://pages.
sisense.com/rs/601-OXE-081/images/data_complexity_matrix.pdf.
Smart Data Collective. 2015. 6 Incredible Ways Big Data Is Used by the US Government. http://www.
smartdatacollective.com/bernardmarr/343582/6-incredible-ways-big-data-used-us-government.
Somerset Clinical Care Commissioning Group (Somerset CCG). 2014. “Integrated Care in Somerset.”
Presentation at the South and West Commissioning Support Alliance.
Symons, T. 2016. “Datavores of Local Government.” Discussion Paper, NESTA, London, UK.
Thakuriah, P., N. Tilahun, and M. Zellner. 2015. “Big Data and Urban Informatics: Innovations and
Challenges to Urban Planning and Knowledge Discovery.” In Proceedings of the Workshop on Big
Data and Urban Informatics sponsored by National Science Foundation, August 11–12, Chicago, IL,
4–32.
UNECE. 2013. What Does Big Data Mean for Official Statistics? http://www1.unece.org/stat/platform/
pages/viewpage.
Van Rijmenam, M. 2013. “Why the 3v's Are Not Sufficient to Describe Big Data.” Big Data Startup. http://www.bigdata-startups.com/3vs-sufficient-describe-big-data.
Yiu, C. 2012. “The Big Data Opportunity. Making Government Faster, Smarter and More Personal.”
Policy Exchange 1: 1–34.