NORDIC SOCIAL WORK RESEARCH
https://doi.org/10.1080/2156857X.2022.2088606

Proceduralisation of decision-making processes: a case study of child welfare practice
Marina S. Sletten
Faculty of Health, Welfare and Organisation, Østfold University College, Fredrikstad, Norway

ABSTRACT
This article examines the use of a standardized assessment framework (the Kvello Assessment Framework (KF)), and how it guides assessment work, professional discretion and the knowledge base in child welfare practice. The KF is explored as an example of a standardized tool; it is a non-manual-based assessment tool commonly used in Norway. The data stem from fieldwork in two child welfare offices and client documents from one of these offices, which were analysed using thematic analysis. The findings show that the use of the assessment tool led to proceduralisation of assessment work in two areas: first, through requirements for focus and for activities to obtain information; second, through procedural requirements of form-filling, which in turn placed interpretive demands on the professionals that turned interpretations into conclusions. The findings also identified gaps in the professionals’ chain of argument. Based on these findings, I argue that use of this tool influences professionals’ discretionary activity, as it leads to standardization of decision-making and a narrow knowledge base. The tool may increase the level of transparency of decision-making, and thus function as an instrument of control in association with accountability. However, the use of a standardized assessment tool does not seem to enhance child welfare professionals’ analytical skills and thus does not solve the challenges of child welfare practice. The article discusses how these shortcomings may lead to biased assessments, and emphasizes the importance of a transparent decision-making process.

KEYWORDS: Standardization; decision-making; discretion; transparency; child welfare

Introduction
Identifying children at risk and making decisions accordingly is considered paramount in
child welfare work (Munro 2011). Yet the decision-making process is complex and filled with
uncertainty and inadequacies (Fluke et al. 2020). In this regard, child welfare services (CWS)
have been criticized for lack of competence and systematization in their assessment work (e.g.
Vis, Lauritzen, and Fossum 2019) and lack of transparent arguments for their decisions (MCF
2020; Munro 2011). These criticisms are often linked to professionals’ use of discretion, and
are considered a threat to democratic accountability (Brodkin 2008). In response to such
criticism we have witnessed increased use of rule-following approaches and risk assessment
tools (e.g. Munro 2011; Sørensen 2018; Vis, Lauritzen, and Fossum 2019). This article aims to
examine the use of a standardized assessment framework, and how this influences child
welfare professionals’ decision-making processes.

CONTACT Marina S. Sletten marina.sletten@hiof.no



Standardization is influenced by the idea of uniformity, objectivity and quality control by
streamlining processes to ensure efficient, transparent and accountable services (Timmermans
and Berg 2003). Brunsson and Jacobsson (2000) see standards as explicit and formalized rules,
which are connected to norms and function as instruments of control. Moreover, standardization
has commonly been seen as a means to regulate public services (Noordegraaf 2015).
In the Scandinavian countries and other Western countries, the development of standardized
assessment tools in CWS has accelerated alongside the movement towards evidence-based practice
(Timmermans and Berg 2003). The aims are to enhance the use of scientific knowledge (Bergmark
and Lundström 2011), to support frontline professionals in dealing with uncertainty and risky
situations in decision-making (Bartelink et al., 2015; Ponnert and Svensson 2016) and to ensure
legitimate and accountable services (Devlieghere, Bradt, and Roose 2018; Skillmark and Oscarsson
2020). This has led to a debate about the position of systematic approaches in CWS and their
influence in social work practice (Skillmark and Oscarsson 2020; Sletten and Ellingsen 2020; Vis,
Lauritzen, and Fossum 2019). For example, critics have argued that standardized assessment tools
de-professionalize social work (e.g. Ponnert and Svensson 2016; White, Hall, and Peckover 2008) by
relying too much on guidelines and checklists (Almklov, Ulset, and Røyrvik 2017). Additionally, the
tools restrict professionals’ actions, oversimplify complexities, and fit poorly with social work
because they overlook social and structural dimensions of life (Broadhurst et al. 2010; Stanley
2013). Moreover, the development of standardized practices is argued to be a strategy for auditing
and promoting accountability in CWS (Brodkin 2008).
At the same time, extensive literature has identified various challenges when using experience-
based approaches associated with intuitive reasoning (or ‘gut feeling’) (Munro 2011; Munro and
Hardie 2018), or tacit knowledge (Polanyi and Sen 2009 [1966]). An example of such challenges is
that social workers seek to confirm what they already ‘know’ or assume, which may cause a cascade
effect of ‘errors’ (e.g. Benbenishty, Osmo, and Gold 2003; Gambrill 2005). However, confirmation
biases and errors may arise not just from the individual professional’s use of discretion, but from an
interaction of multiple factors (Munro and Hardie 2018). In this sense, social workers’ decisions are
influenced by several factors such as case characteristics, personal preferences and organizational
and external factors (Benbenishty, Osmo, and Gold 2003; Møller 2021). The argument is that
decision variability is related to context (Fluke et al. 2020), and should not be considered an isolated
event (Møller 2021). Similarly, professionals may use their discretion to tinker with tools in various
ways to make them fit their practice (Skillmark and Oscarsson 2020; Sletten and Bjørkquist 2020).
Despite growing interest in decision-making in CWS, there is limited research on how different
factors (e.g. contextual, systemic and biases) influence decision-making processes (Fluke et al. 2020).
How standardized tools are used in practice, and in turn, their impact on practice, has also been
understudied (Gillingham et al. 2017). As described by Møller (2021), there has been much focus on
decision output, and decision-making has thus been viewed as an isolated event. By disregarding the
steps leading to the actual decisions, there is a potential for oversimplifying the complexity of the
decision-making process. In this study on how CWS professionals interact with the standards,
a micro-level perspective is applied with a focus on how procedural assessment frameworks guide
and influence decision-making practices. This article pursues the following question: How does the
Kvello Assessment Framework tool (KF) influence CWS decision-making processes?

Standardization and the KF assessment tool


Although recent policy initiatives have aimed to ensure a more uniform practice in assessment work
(Havnen et al. 2021), there are currently no national guidelines on how to conduct assessments in
Norwegian CWS. Nevertheless, about 50% of Norwegian CWS have adopted the KF in various
forms (Vis, Lauritzen, and Fossum 2019), which in this study constitutes an example of
a standardized assessment form with embedded ‘procedural standards’ (Timmermans and Berg
2003). When CWS offices use standards to create a more uniform practice, CWS professionals, as
frontline workers, need to translate and adapt such guidelines into practice (Lipsky 2010).
According to Lipsky, translating ideals into practice is often difficult due to lack of resources and
limitations of the work structure.
Timmermans and Berg (2003, 26) differentiate between four subtypes of standards: design
standards, terminological standards, performance standards and procedural standards.
Procedural standards imply guidelines for predetermined courses of action, such as describing
how professionals should carry out their assessment work, and thus their decision-making process,
where professional knowledge is embedded in procedures (Brunsson and Jacobsson 2000).
Although these standards are interrelated, this article focuses on procedural standards, as these
attempt to direct the professional’s behaviour and therefore may cause tensions between professional
practice and standardization’s quest for rationality, transparency, objectivity and
accountability (Timmermans and Berg 2003).
The KF is a non-licenced standardized assessment framework for use in decision-making,
developed by a Norwegian psychologist (Kvello 2015). Using a systematic approach, it aims to
identify children at risk, limit arbitrariness, and improve professional reasoning and decision-
making through enhanced competence and transparency (Kvello 2015). The KF shares similarities
with the Swedish BBIC (‘Children’s Needs in the Centre’) and the Danish ICS (Integrated
Children’s System) (Havnen et al. 2021). It entails the use of guidelines and a checklist on how
to conduct assessments that are linked to scientific evidence, and how to report on these
(Kvello 2015); however, it does not qualify as an evidence-based programme (Kjær 2019).
In addition to a textbook (Kvello 2015), the KF consists of an electronic form with predetermined
boxes for different areas to assess by using three sources of information: i) dialogue with child and
parents, ii) information from external parties (e.g. school, doctor), and iii) observation of parents and
child. The broad areas to assess are: living situation, health of child and parents, the child’s
development, ability and opinions, parental functioning, parents’ ability to understand the child
(mentalization), child-parent interaction, and risk and protective factors (Kvello 2015). The KF thus directs the
professionals’ actions, where the theoretical knowledge is stored in the procedures of the standards
(Brunsson and Jacobsson 2000). Accordingly, the standardized assessment form aims to make the
decision-making process more predictable, as process standards are coupled with outcomes.
Moreover, it is recommended to conduct a mentalization interview, which is a certified method.
Kvello (2015) has also provided a checklist of the most relevant factors (32 detailed risk factors and
10 broad protective factors), which aims to determine whether there is a cumulative risk based on
the number and intensity of the risk factors. This, together with the numerical dominance of risk
factors, indicates a strong emphasis on risk in the assessments. However, determining cumulative
risk in child welfare in general on the basis of the KF is ambiguous and therefore contested (Kjær
2019). Additionally, there is no manual describing how to use the framework (and the checklists),
which is considered a limitation (Vis et al. 2020).

Previous research on standardization and decision-making in CWS


Previous studies on standardized tools show conflicting findings regarding their fitness for their
purpose (Benbenishty, Osmo, and Gold 2003; Sletten and Bjørkquist 2020; Sørensen 2018). On the
one hand, research shows that social workers find the tools supportive, suggesting that this increases
their sense of competence and contributes to a common language (Gillingham et al. 2017; Sletten and
Ellingsen 2020; White, Hall, and Peckover 2008). Studies also find that assessment becomes more
structured and focused (Sletten and Ellingsen 2020; Vis, Lauritzen, and Fossum 2019), enhancing
CWS professionals’ analysis of complex cases (Bartelink et al., 2015).
On the other hand, standardized tools are found to be time-consuming, leading to more information
and long reports (Sletten and Ellingsen 2020; Vis, Lauritzen, and Fossum 2019; White, Hall, and
Peckover 2008). Shaw et al. (2009) found that social workers obtained different information when
using the same assessment tool, revealing variation in the information on which assessments are based.
Furthermore, White, Hall, and Peckover (2008) argue that the tools exert descriptive and interpretive
demands on CWS professionals, described as a ‘descriptive tyranny’. They may place an administrative
burden on professionals and function as a control mechanism (Almklov, Ulset, and Røyrvik
2017). At the same time, CWS professionals, by exercising discretion, commonly modify the tools to
fit their particular context (e.g. Skillmark and Oscarsson 2020; Sletten and Bjørkquist 2020).
Moreover, studies found that risk assessments fail to nuance the level of risk on a case-by-case
basis, as the social worker needs to tick off information based on a form (Gillingham 2019). Others
find that vague risk factors cause confusion among CWS professionals (Sletten and Ellingsen 2020;
Vis, Lauritzen, and Fossum 2019). Further, guidelines on how to weight the factors are limited
(Sørensen 2018). Risk may also be assessed differently in different contexts (e.g. Fluke et al. 2020),
which makes it difficult to establish standardized guidelines to determine a child’s level of risk
(Thoburn 2010). Research also suggests that standardized instruments may not necessarily lead to
greater consensus than intuition in determining risk (Bartelink et al., 2015).
As shown, research on decision-making and standardization is conflicting and these studies
offer important insights into how standardized assessment tools may influence child welfare
practice. To complement existing research, the present study contributes in-depth knowledge
on how standardized assessment tools guide professionals’ decision-making processes in Norway.

The conceptual framework: the concept of profession in frontline practice


Analytical perspectives and concepts used in this study draw on the theory of profession (Freidson
2001; Molander 2016).
Discretion is considered unavoidable in decision-making about a child or family, which calls for
applying general knowledge to a particular case (Freidson 2001; Lipsky 2010). Discretion refers to
an area of delegated power where professionals exercise choice between permitted alternatives of
actions based on their own judgment (Molander 2016). Molander (2016) distinguishes between two
dimensions of discretion, discretionary space and discretionary reasoning. Discretionary space refers
to a structural dimension of discretion that constitutes an entrusted, but restricted area (e.g.
through laws and standards) for professionals to exercise discretion. In the debate about standardization
in social work practice, standardization is claimed to restrict professionals’ ability to use discretion
(Ponnert and Svensson 2016). However, this is contested as standards need to be interpreted into
the local context (Molander 2016).
Discretionary reasoning refers to an epistemic dimension of discretion. This denotes a cognitive
activity performed by professionals through use of their expert knowledge and skills when making
reasoned decisions under conditions of uncertainty (Molander 2016). Professional knowledge in
this sense is commonly equated with practice wisdom (Freidson 2001), and resonates with tacit
knowledge: ‘we can know more than we can tell’ (Polanyi and Sen 2009 [1966]). Moreover, discretionary
reasoning is a form of practical reasoning based on professionals’ own judgment, in order to
determine what ought to be done in a particular case. Therefore, discretion is bounded by various
normative expectations and the context, which are considered a burden on discretionary activity.
Knowledge embedded in standardized tools involves formal knowledge that is codified and explicit,
and is thus a mechanism for transparency and accountability to ensure the predictability linked to
decision outcomes (Brunsson and Jacobsson 2000). Accordingly, focusing on the discretionary
activities of CWS professionals and what guides them in their reasoning will provide insight into
how the KF assessment framework influences the decision-making process.

Method
This article uses a qualitative case study design (Yin 2014) to examine how a standardized tool (KF)
influences CWS decision-making processes. Standardized practice in CWS constitutes the case, in
which the KF assessment framework is an example, hence an ‘exemplifying case’ (Bryman 2016). The
study was conducted in two local child welfare offices in different regions of Norway; ‘Office A’ had
used the KF for about a decade, while ‘Office B’ had recently started to use it. Moreover, A was a large
office with a specialized approach, while B was a medium-sized office with a semi-generalist approach.
The combination of these variations increased the likelihood of identifying patterns (Braun and Clarke
2006) in the CWS professionals’ practices emerging from their use of the KF tool.

Participants and data collection


Access to the offices was granted by the management staff. Thirty-two CWS professionals who used the
standard KF tool (20 from Office A and 12 from Office B), including seven in management positions,
consented to participate in this part of the study. They had worked in the CWS from one to 20+ years.
All except one held a bachelor’s degree in social work, although some had additional education.
The data in this article draw on fieldwork (45 days) and client documents (n = 15). The latter
were only connected to Office A due to restricted approval. The fieldwork was carried out at the two
offices over 12 months (April 2017 to March 2018), and included participant observation and
interviews (Spradley 2016). I participated in day-to-day activities, internal meetings and six client
meetings, and conducted interviews with the CWS staff and managers. In addition, I attended
training and guidance given by Kvello in both offices. Data were recorded as handwritten notes the
same day, and some informal talk was recorded and transcribed verbatim. This enabled reflection
and sampling that revealed new areas for further attention. In Office A, I was provided with my own
office in the same corridor as the CWS professionals, which enabled me to encounter key
informants (Bryman 2016). The fieldwork in Office A afforded valuable knowledge of the standardized
tool that made the subsequent fieldwork in Office B more concentrated in terms of participating
in scheduled meetings, in addition to making the interviews more focused. Focus areas in the
observations were how the standardized tool was present in the participants’ daily work, who used it
and how. CWS professionals spend much of the day on casework, which enabled me to talk to them
in the role of ‘conversation partner’. These conversations dealt with their assessment procedure,
including what type of information they sought and their experiences of filling out the KF form.
I therefore gained access (Bryman 2016) into what guided their assessment work, as they willingly
shared ‘backstage’ information.
From Office A, 15 case assessment reports based on the KF were randomly selected. With support
from the manager, the first five reports from three sub-teams in Office A that were completed in
May 2017 were included. The reports provided important insights into how the CWS staff used the
KF form in decision-making processes. This included the type of information emphasized, sources of
information, and how the information was presented and interpreted. The purpose was to explore
the CWS professionals’ focus and how this was expressed in the reports, considering that documents
contain the writers’ point of view (Bryman 2016). Being present in the offices over time, observing
and talking with professionals, together with document analysis, enhanced my understanding of how
they used the KF tool and thus its influence on decision-making. The purpose of this design was to
capture both formal and informal practice and possible discrepancies between these.

Data analysis
The various data sources generated thick data, which were analysed using thematic analysis (Braun
and Clarke 2006), supported by NVivo 11. The dataset was analysed to search for patterns of
common meanings (Krippendorff 2019). In focusing on how the standardized tool influenced
decision-making processes, hence their doings and sayings, it was important to consider how the
tool was actually used by the practitioners in their context, and how it was represented in their daily
talk and activities, and in the documents. Coding and categorization emerged from alternation
between an inductive data-driven approach (Bryman 2016), based on fieldwork data and documents,
and a more deductive approach, based on the theory of profession and the concept of
procedural standardizations, and thus links to theory (Yin 2014). To limit potential misinterpretation,
I discussed the data and its categorization with other researchers during the analysis. The
analysis resulted in 24 categories, which were carefully reviewed and refined, resulting in two broad
themes: i) requirements of the tool and ii) gaps in the chain of argument.

Ethics
This study was approved by the Norwegian Centre for Research Data (project number 53005, dated
16 March 2017). All staff members were informed about the study and all participants signed a written
consent. Moreover, all parents whom I encountered in client observation provided oral consent and
received oral and written study information. For the included documents, which are highly sensitive
case files, special approval was granted by the Norwegian Directorate for Children, Youth and Family
Affairs. Due to the ethical challenges of using such documents, the number of documents was
restricted and limited to only one office, and they were anonymized beforehand by the CWS.

Strengths and limitations


The small sample of documents and the fact that they only came from one office may be regarded as
a limitation. However, considering the ethical challenges involved in using client documents, it is
a strength that I was granted access to them. Furthermore, the fieldwork was completed by the time
I gained access to the documents. It would have been interesting to ask the participants to reflect upon some of the
findings from the documents, which would have nuanced the findings further. Finally, this study
did not include the perspectives of service users, which could have established how far the tools
influence client involvement in the CWS. Nevertheless, few studies have followed casework
ethnographically, which is a strength of this study.

Findings
Two themes were seen to be prominent in the analysis. The first concerns how the tool determined
the CWS professionals’ actions. The second deals with how the tool led to gaps in their chain of
argument, and thus the process of formulating a basis for their decisions. These two themes will be
elaborated in more detail below.

Requirements of the tool for courses of action


The findings revealed patterns of procedural standardization in courses of action in two areas:
firstly, in the process of gathering information about the family situation, and secondly, in reporting
and interpreting the information obtained. The former involved the professionals’ tasks and focus
of attention, and the latter how information was systematized and understood. These patterns were
identified in data from both offices.

Task and focus requirements


Based on the KF, essential activities for obtaining information about the family situation are
observations, mentalization interviews and risk assessments, which involve requirements as to
what to focus on and look for. Such activities were found to be key aspects of the professionals’
daily work in both offices.
Several participants subscribed to observation as a source of valuable information, particularly
when assessing parent-child interaction. Here, attachment and mentalization were strongly emphasized;
however, parents were not necessarily told that they were being observed:

The caseworker states that the mother brought her toddler to the meeting, which enabled her to observe the
interaction between mother and child. She says that she paid attention to how the mother responded to the child
in this situation, which she feels could be a stressful setting. She points out that the mother did not help the child,
which could be related to her culture. (. . .) She explains that she checked the mother’s mentalisation skills, and
therefore asked her to describe her child in 3-5 words. (. . .). She reports not being satisfied with the mother’s reply,
emphasising that the mother struggled to give a good description of the child. (Field notes, conversation with R5)

Even though the professional acknowledged that the mother’s reaction might be related to culture or
stress, she still reasoned with reference to the mother’s mentalization skills. Parental mentalization
abilities were a recurring theme in the professionals’ observations of parents. This was also
prominent in the documents and in client meetings where mentalization interviews were conducted.
However, it was common to exercise discretion to alter the interview by using only
a selection of the mentalization questions with the parents. In several cases, parents had difficulty
in answering such questions, which professionals sometimes related to their culture. However, the
mentalization interview and questions were perceived by the CWS professionals to aid their
professional judgment regardless of cultural background, and thus the tool guided their reasoning
and production of knowledge of the families.
Risk and protective factors were regularly mentioned in talk about assessment work. In case
discussions, comments on risk factors were more frequent than comments on protective factors. In
some cases, participants emphasized that there were no protective factors, as a statement of fact.
Risk and protective factors were ticked off in all documents but one; however, it varied whether
these had been further assessed. Some participants were also concerned about the risk assessment and staff
paying too much attention to risk factors:

It’s very easy to put divorced parents as a risk, but this isn’t necessarily a risk (. . .) In their reports, some
caseworkers just list the risk and protective factors without further descriptions (. . .) and say that it looks more
like an assembly line. (Field note from conversation with supervisor R11)

Considering that risk factors are numerically dominant and described in detail, they may be easier to
detect than protective factors. As the findings demonstrate, risk factors are on the CWS professionals’
agenda and are more commonly addressed in their assessments. A risk-dominated language
thus shapes the professionals’ reasoning and their understanding of family situations.

Form-filling requirements
The other area of procedural standardization concerned how the professionals subscribed to the
way of structuring the information in the predetermined categories in the forms, such as living
situation, or risk and protective factors. Descriptive requirements directed how the information
obtained was presented in written reports. Additionally, there was some evidence that these form-
filling requirements placed interpretive demands upon the professionals. For example, parent-child
interaction, mentalization and risks were commonly assessed, and conclusions were sometimes
presented as facts. However, practical reasoning with descriptions of how they were assessed was
often lacking, and thus subjective normative elements and informal practices were omitted. The
following field note extract exemplifies this; here, three participants filled out the form together:

They start by ticking off type of housing and then they describe its size and how long the family have lived there.
Participant A asks whether they need to put down all this information; participant B replies ‘Yes, we do’, with no
further elaboration. Participant C, who is filling out the form on the computer asks A how the atmosphere was in
the home. A replies: ‘That’s speculation’. C emphasises that it is important to remove speculations, but how this is
done is not elaborated. C then asks about the children’s room. A describes the children’s room and how she
perceived it and repeats that these are speculations. C writes the information in the form. A adds that she felt
concerned about the child, but does not state what that entailed. (. . .) At the end of the meeting they emphasise the
importance of not basing the information on speculation. (Field note from a group meeting with R20, R23 and
R27).

Although one CWS professional questions parts of the form and mentions concerns about speculative
responses, the information is not presented as interpretation in the form. Hence, the
professionals yield to the requirements of the form, and thus the various perspectives are not
included. Moreover, the professionals’ concern, which may be tacit, is not accounted for. Further,
this also illustrates, as supported in the documents, that the reasons for their actions and interpretations
are not stated.
However, the form-filling requirements did also focus attention on the child by making the
child’s voice more explicit, which may strengthen the involvement of children in CWS work. This
suggests that such requirements can enhance children’s participation, at least in terms of listening to
children’s views on their situation. However, there was no clear pattern in the documents as to how
or whether the child’s voice was weighted in the assessments, except for some examples where the
child’s descriptions conflicted with those of the parents, and were then given more weight.

Gaps in the chain of argument


Another strong and consistent theme throughout this study is the lack of transparency of the
reasoning on which conclusions were based. When the participants discussed their cases in groups,
informal conversations or in consultation after a client meeting, suggestions were put forward
without any articulation of the arguments leading to the suggested conclusion. In this sense, they
were exercising discretion, but without making their reasoning explicit. This is illustrated by the
following example from an investigative team discussing new cases transferred from the intake team
for further investigation:
The child welfare professionals are discussing a case involving a family with three children with a concern for only
one of the children. One participant reads from the intake report which concludes that the case needs to be further
investigated, for all three children. The investigative team questions the decision that all three children need to be
included in the investigation, which was not explained in the document. The participant reads on and states that
the report recommends issues the family needs to work on [suggestion of measures]. Another participant says:
“Well, then the case is already concluded, so what’s the point of investigating it”. A third participant replies that
this happens quite often. (Field note, from intake meeting, Office A).

This shows that the reasons for their decision to investigate all three children were inconclusive, and
thus, it was difficult to determine the nature of the case. Further, as seen throughout the fieldwork,
measures are often suggested before a case is fully investigated. Accordingly, conclusions are
presented without knowledge of what arguments or information these are based on, and thus the
professionals define the family’s needs without making it explicit. This suggests use of tacit knowledge
in order to arrive at a justified conclusion. These findings also relate to the observation
that the professionals struggled to make explicit how they interpreted the information
obtained, as explained by one of the supervisors:
When they analyse, they’re supposed to state the reason for their opinion, e.g. why they believe that a risk is
present (. . .) and how the child is affected by this risk factor. (. . .) However, several of the professionals struggle to
differentiate between the analysis of the risk and protective factors and the overall assessment (R18).

Lack of transparent reasoning behind their analysis was also found in the documents. Participants
provided detailed information about the family and child, but it was challenging to discern how
these thick descriptions were interpreted and assessed, thus leaving a gap in their reasoning.
Similarly, inconsistency was detected between the description of the family situation, the CWS
assessment of the situation and their conclusion. For example, topics that were described were not
necessarily assessed and vice versa, and in some documents, new information was presented in the
conclusion. Moreover, one document stated in the descriptive section that the child had special
needs. However, the nature of these special needs was not described. Later in the document,
a report from the school said the child did not have any special needs, and there was no mention of
the child’s special needs in the assessment section. The conclusion section, however, stated that the
child had special needs, but without mentioning the basis for this conclusion. Further, how
conflicting opinions of the child were assessed was not made explicit in the report. The same
tendencies were found in other documents, suggesting regular gaps in the professionals’ chain of
argument. The above findings demonstrate that a synthesis between the rich descriptions obtained,
the risk and protective factors, and conclusions based on practical reasoning, is not accounted for.
This may derive from tacit knowledge; however, when discretionary power is exercised, the
decisions lack transparency. Overall, the findings show that part of the decision-making process
and the CWS professionals’ focus of attention becomes standardized when using the tool; here,
psychological knowledge seemed to be the preferred knowledge base.

Discussion
This study examines how use of the standardized KF influences the decision-making process in
CWS, with an emphasis on discretion and professional knowledge. The analysis shows examples of
proceduralisation of decision-making practice, hence professionals’ actions and focus, when the KF
is used. Moreover, the use of a standardized assessment tool has several, and even conflicting,
implications for the decision-making process, which will be discussed in the following.

Standardization of actions and increased control


The findings show how the CWS professionals’ actions become standardized when following the
procedures. This is particularly seen in their process of gaining information about the family
situation, e.g. the types of information they pursue and their activities in collecting this information,
such as talking with the child. These activities are explicitly expressed and visible in their reporting.
This suggests that the requirements of the KF tool, and thus the procedural practices (Timmermans
and Berg 2003), enhance transparency of their activities in assessment work. This corresponds with
the argument that standardized assessment tools, at least in some sense, help to make the entire
decision-making process in CWS more transparent and explicit (Devlieghere, Bradt, and Roose
2018; Ponnert and Svensson 2016). This form of transparency may be coupled with audit and
accountability in terms of following procedures (Devlieghere and Gillingham 2020). Since the KF
provides rules for assessment work, it functions as a form of regulation, and thus a tool of
procedural accountability (Brunsson and Jacobsson 2000; Timmermans and Epstein 2010). These
developments are referred to as a new mode of accountability, as they entail making the entire
process accountable to a third party (Timmermans and Berg 2003). Consequently, this may
influence the structural dimension of discretion (Molander 2016), as a standardized assessment
framework adds new rules to decision-making practices. Some scholars have raised concerns that
this limits frontline discretion that may be tacit (Brodkin 2008). In turn, this may restrict professionals’
body of knowledge, which needs to be both formal and tacit (Freidson 2001; Polanyi and Sen
2009 [1966]). However, as pointed out by Timmermans and Berg (2003), even the strictest guidelines
allow for the use of professional discretion. Nevertheless, stricter guidelines can be understood
as the creation of new social structures that favour explicit codified knowledge (Sletten and
Ellingsen 2020) and enable increased control of professional practice (Brunsson and Jacobsson
2000). Accordingly, this may weaken professionals’ discretionary power and the requirement of
individualization (Molander 2016).
Since the CWS holds authority over others, transparency may also be considered important to
enable service users to understand CWS work and processes leading to their decisions. Following
Blomgren and Sahlin (2017), procedural standards may be understood as a quest for transparency to
enhance user involvement, which is strongly coupled with democratic accountability, unlike the
efficiency focus found in managerial reforms. Although procedural standards aid transparency of
activities for managers and other professionals, as seen in this study, they do not seem to make
assessments more transparent for service users. Examples are gaps in professionals’ arguments or lack
of information to service users that they were being observed or that their mentalization skills were
being assessed, and thus the professionals define the parents through their discretionary power
(White, Fook, and Gardner 2006). Following Molander (2016, 25), this may constitute a normative
problem, referred to as ‘burdens of discretion’, in which professionals’ reasoning may be exposed to
bias. Further, assessment tools have been found to strengthen the professional’s role through the use
of a more professional vocabulary (Gillingham et al. 2017; Sletten and Ellingsen 2020). Consequently,
this may increase the professionals’ discretionary power and thus make decision-making practice
even less transparent to parents and children, who find it difficult to understand the terminology
used. Therefore, transparency may be an important contribution to making social work practice
more accessible to service users (Devlieghere, Bradt, and Roose 2018), and thus to avoiding deceiving
parents (Gambrill 2005) through the exercise of bias, which has the potential to prevent equal treatment of
families (Molander 2016). Yet unless decision-making practices are made explicit to service users,
transparency will vary according to the audience, and will therefore only be present to a certain
degree (Devlieghere and Gillingham 2020). Although professionals’ actions become standardized as
they seem to demonstrate rather strong loyalty to the tool, this study shows that procedural standards
only to some extent function as a tool for democratic accountability (Brodkin 2008).

Standardization of knowledge
From a knowledge perspective, standardized tools such as the KF contain focus requirements to
produce knowledge about the family situation, which is essential in making decisions, hence what
and how we know. The present findings concur with previous research that shows that professionals
favour using the knowledge base often embedded in standardized tools, namely psychological
knowledge (Sletten and Ellingsen 2020; Stanley 2013). This seems to be reinforced by the
language of the tools, which enables professionals to make this knowledge explicit and thus influences
decision-making practices. In this way, the professionals follow the rules of the standard, and
knowledge production in CWS becomes standardized, as knowledge is stored in the standard
rules, such as prediction of risk in risk assessment (Brunsson and Jacobsson 2000). Hence, risk and
mentalization seem to have become a gold standard for measuring parenting abilities that may pose
new normative constraints on the professionals’ reasoning (Molander 2016). This suggests that
their discretion is affected by the standard that in turn shapes their interpretation of the family
situation. Further, the formal knowledge embedded in the standards is what counts as legitimate
knowledge, and thereby the CWS professionals’ position as experts is under pressure.
Consequently, there is a potential for overlooking other factors that may influence the family
situation, and here the professionals may adhere to a narrow knowledge base in their reasoning.
These findings concur with a recent study that found that reliance on risk assessments could
potentially overlook risk-reducing factors (Krutzinna and Skivenes 2021). The fact that formal
written knowledge is more easily stored may undermine other forms of knowledge that are harder
to translate into specific rules, such as tacit knowledge and knowledge of particular cases (Brunsson
and Jacobsson 2000; Noordegraaf 2015). Accordingly, there is a risk of adopting a narrow approach
in knowledge production in CWS (Havnen et al. 2021; Stanley 2013), which is reinforced by
increased demands for accountability (Munro 2011). From a decision-making perspective, accountability
is essential as procedural standards influence professionals’ discretionary reasoning
(Molander 2016), and may therefore lead to biased decision-making (Munro and Hardie 2018).

Reasoning and handling of uncertainty


CWS professionals found that the tool generated thick descriptions of the family situation, which was
linked to form-filling and descriptive requirements posed by the tool. However, only parts of the
activities and viewpoints were reported. For example, they did not report on considerations they took
in relation to individual clients, and the families’ response was not always accounted for. Moreover,
there was inconsistency in the information analysed, where reasons for the statements were not
presented. Hence, how they interpreted the information obtained and how they handled different
perspectives was not made explicit. Accordingly, there were gaps in their reasoning in their reporting
of assessments. Professionals tend to rely on tacit knowledge and intuitive reasoning when exercising
discretion (Hammond 1996). However, the amount of information generated by the tool makes it
challenging to determine what information is essential to a given case (Vis et al. 2020). These findings
are in keeping with the criticism of the Norwegian CWS by the European Court of Human Rights and
the Norwegian Supreme Court, which pointed out that the CWS lacked clear arguments leading to
their conclusions, and that conflicting viewpoints were not assessed (MCF 2020). A response to this
criticism tends to involve an increased use of standardized CWS assessment tools to ensure qualified
decision-making and accountability. Moreover, from a decision-making perspective it is a common
perception that more information generates good decisions, particularly in cases of uncertainty
(Brunsson and Brunsson 2015), as found in CWS decision-making practice (Fluke et al. 2020).
However, according to Brunsson and Brunsson (2015), this is a misconception and may even increase
decision-makers’ level of uncertainty. This is because decision-makers may find it challenging to
handle large amounts of information, to make sense of the information and to deal with conflicting
viewpoints and perspectives, as seen in this study. Consequently, errors may occur due to uncertainty
(Fluke et al. 2020). In line with Brunsson and Brunsson’s (2015) argument, the use of a procedural
assessment framework may in fact not produce the desired effects, i.e. less uncertainty, improved
quality of discretionary reasoning, and thus better qualified decisions.

Conclusion
This study shows that the use of a standardized assessment tool results in proceduralisation of
CWS assessment work, as CWS professionals’ actions and knowledge become standardized. To
some extent, this increases the level of transparency and accountability, thus enabling increased
control over professional practice with the potential to limit the professionals’ discretionary
space. It is not necessarily a question of whether or not we should use such tools. However, as
this research has shown, a standardized assessment tool does not alone solve the challenges of
CWS practice nor prevent discretionary biases, despite the aim of the tool to improve professionals’
epistemic discretionary reasoning. It may in fact create new challenges. This study
demonstrates that CWS professionals prefer the knowledge base of the standards, in which risk
and mentalization become the gold standard in CWS decision-making practice. The problem
arises if one uses standardized tools blindly and disregards rival perspectives, without critically
reviewing potential biases and conclusions deriving from the standardized procedures. Analysing
information is a complex task. However, the use of standardized assessment tools does not
seem to enhance CWS professionals’ analytical skills nor enable them to articulate their
reasoning. Considering that professionals are entrusted with discretionary power, one should
expect their discretionary activity to be made explicit, since they have an obligation to others.
The aim is not to avoid use of tacit knowledge, but in line with Molander (2016), I argue for
a greater focus on reflective activity in order to enhance professionals’ discretionary reasoning.

Acknowledgments
I would like to thank Professor Ingunn T. Ellingsen at the University of Stavanger, Professor Catharina Bjørkquist at
Østfold University College and my anonymous reviewers for helpful comments and suggestions.

Disclosure statement
No potential conflict of interest was reported by the author(s).

References
Almklov, P. G., G. Ulset, and J. Røyrvik. 2017. “Standardisering Og Måling I Barnevernet [Standardisation and
Measurement in Child Welfare.” In Trangen Til Å Telle: Objektivering, Måling Og Standardisering Som
Samfunnspraksis [The Need to Count: Objectification, Measurement and Standardisation as a Societal Practice],
edited by T. Larsen and E. Røyrvik, 153–183. Oslo: Scandinavian Academic Press.
Bartelink, C., T. A. Van Yperen, and I. J. Ten Berge. 2015. “Deciding on Child Maltreatment: A Literature Review on
Methods that Improve decision-making.” Child Abuse & Neglect 49: 142–153. doi:10.1016/j.chiabu.2015.07.002.
Benbenishty, R., R. Osmo, and N. Gold. 2003. “Rationales Provided for Risk Assessments and for Recommended
Interventions in Child Protection: A Comparison between Canadian and Israeli Professionals.” British Journal of
Social Work 33 (2): 137–155. doi:10.1093/bjsw/33.2.137.
Bergmark, A., and T. Lundström. 2011. “Guided or Independent? Social Workers, Central Bureaucracy and
evidence-based Practice.” European Journal of Social Work 14 (3): 323–337. doi:10.1080/13691451003744325.
Blomgren, M., and K. Sahlin. 2017. “Quests for Transparency: Signs of a New Institutional Era in the Health Care
Field.” In Transcending New Public Management, edited by P. Lægreid and T. Christensen, 167–190. London:
Routledge.
Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2):
77–101. doi:10.1191/1478088706qp063oa.
Broadhurst, K., C. Hall, D. Wastell, S. White, and A. Pithouse. 2010. “Risk, Instrumentalism and the Humane Project
in Social Work: Identifying the Informal Logics of Risk Management in Children’s Statutory Services.” British
Journal of Social Work 40 (4): 1046–1064. doi:10.1093/bjsw/bcq011.
Brodkin, E. Z. 2008. “Accountability in street-level Organizations.” International Journal of Public Administration
31 (3): 317–336. doi:10.1080/01900690701590587.
Brunsson, N., and B. Jacobsson. 2000. A World of Standards. Oxford, UK: Oxford University Press.
Brunsson, K., and N. Brunsson. 2015. Beslutninger [Decisions]. Oslo: Cappelen Damm akademisk.
Bryman, A. 2016. Social Research Methods. 5th ed. Oxford, UK: Oxford University Press.
Devlieghere, J., L. Bradt, and R. Roose. 2018. “Creating Transparency through Electronic Information Systems:
Opportunities and Pitfalls.” The British Journal of Social Work 48 (3): 734–750. doi:10.1093/bjsw/bcx052.
Devlieghere, J., and P. Gillingham. 2020. “Transparency in Social Work: A Critical Exploration and Reflection.” The
British Journal of Social Work. doi:10.1093/bjsw/bcaa166.
Fluke, J. D., M. López López, R. Benbenishty, E. J. Knorth, and D. J. Baumann. 2020. “Advancing the Field of
decision-making and Judgment in Child Welfare and Protection: A Look Back and Forward.” In Decision-making
and Judgment in Child Welfare and Protection. Theory, Research, and Practice, edited by J. D. Fluke, M. L. López,
R. Benbenishty, E. J. Knorth, and D. J. Baumann, 301–317. New York, NY: Oxford University Press.
Freidson, E. 2001. Professionalism: The Third Logic. Cambridge: Polity Press.
Gambrill, E. D. 2005. “Decision Making in Child Welfare: Errors and Their Context.” Children and Youth Services
Review 27 (4): 347–352. doi:10.1016/j.childyouth.2004.12.005.
Gillingham, P., P. Harnett, K. Healy, D. Lynch, and M. Tower. 2017. “Decision Making in Child and Family Welfare:
The Role of Tools and Practice Frameworks.” Children Australia 42 (1): 49–56. doi:10.1017/cha.2016.51.
Gillingham, P. 2019. “Can Predictive Algorithms Assist decision-making in Social Work with Children and
Families?” Child Abuse Review 28 (2): 114–126. doi:10.1002/car.2547.
Hammond, K. 1996. Human Judgement and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable
Injustice. Oxford: Oxford University Press.
Havnen, K., S. Fossum, C. Lauritzen, and S. A. Vis. 2021. “How Does the Kvello Assessment Framework Attend to
Important Dimensions of the Children’s Needs and Welfare? A Comparison with the BBIC and the ICS Frameworks
for Child Welfare Investigations.” Nordic Social Work Research 1–13. doi:10.1080/2156857X.2021.1891959.
Kjær, A.-K. B. 2019. “Risikovurderinger I Barnevernet – Hva Innebærer Det Og Når Trengs Det? [Risk Assessments
in Child Welfare - What Do They Mean and When are They Needed?].” Tidsskrift for familierett, arverett og
barnevernrettslige spørsmål 17 (2): 131–149. doi:10.18261/.0809-9553-2019-02-0.
Krippendorff, K. 2019. Content Analysis: An Introduction to Its Methodology. 4th ed. Los Angeles, CA: SAGE.
Krutzinna, J., and M. Skivenes. 2021. “Judging Parental Competence: A cross-country Analysis of Judicial Decision
Makers’ Written Assessment of Mothers’ Parenting Capacities in Newborn Removal Cases.” Child & Family Social
Work 26 (1): 50–60. doi:10.1111/cfs.12788.
Kvello, Ø. 2015. Barn I Risiko: Skadelige Omsorgssituasjoner [Children at Risk: Harmful Care Situations]. 2nd ed.
Oslo, Norway: Gyldendal akademisk.
Lipsky, M. 2010. Street-level Bureaucracy: Dilemmas of the Individual in Public Services. 30th anniversary expanded
ed. New York: Russell Sage Foundation.
MCF. 2020. Informasjonsskriv om behandlingen av barnevernssaker - nye avgjørelser fra Høyesterett [Information
Letter on the Processing of Child Welfare Cases: New Decisions by the Supreme Court]. Oslo, Norway: Ministry of
Children and Families.
Molander, A. 2016. Discretion in the Welfare State: Social Rights and Professional Judgment. Abingdon, UK:
Routledge.
Munro, E. 2011. The Munro Review of Child Protection: Final Report, a child-centered System. Vol. 8062. London, UK:
Department of Education.
Munro, E., and J. Hardie. 2018. “Why We Should Stop Talking about Objectivity and Subjectivity in Social Work.”
The British Journal of Social Work 49 (2): 411–427. doi:10.1093/bjsw/bcy054.
Møller, A. M. 2021. “Deliberation and Deliberative Organizational Routines in Frontline decision-making.” Journal
of Public Administration Research and Theory 31 (3): 471–488. doi:10.1093/jopart/muaa060.
Noordegraaf, M. 2015. Public Management: Performance, Professionalism and Politics. London: Palgrave Macmillan.
Polanyi, M., and A. Sen. 2009 [1966]. The Tacit Dimension. Chicago: University of Chicago Press.
Ponnert, L., and K. Svensson. 2016. “Standardisation—the End of Professional Discretion?” European Journal of
Social Work 19 (3–4): 586–599. doi:10.1080/13691457.2015.1074551.
Shaw, I., M. Bell, I. Sinclair, P. Sloper, W. Mitchell, P. Dyson, J. Clayden, and J. Rafferty. 2009. “An Exemplary
Scheme? An Evaluation of the Integrated Children’s System.” The British Journal of Social Work 39 (4): 613–626.
doi:10.1093/bjsw/bcp040.
Skillmark, M., and L. Oscarsson. 2020. “Applying Standardisation Tools in Social Work Practice from the
Perspectives of Social Workers, Managers, and Politicians: A Swedish Case Study.” European Journal of Social
Work 23 (2): 265–276. doi:10.1080/13691457.2018.1540409.
Sletten, M. S., and C. Bjørkquist. 2020. “Professionals’ Tinkering with Standardised Tools: Dynamics Involving Actors
and Tools in Child Welfare Practices.” European Journal of Social Work 1–12. doi:10.1080/13691457.2020.1793114.
Sletten, M. S., and I. T. Ellingsen. 2020. “When Standardization Becomes the Lens of Professional Practice in Child
Welfare Services.” Child & Family Social Work 25 (3): 714–722. doi:10.1111/cfs.12748.
Spradley, J. P. 2016. The Ethnographic Interview. Long Grove, IL: Waveland Press.
Stanley, T. 2013. “‘Our Tariff Will Rise’: Risk, Probabilities and Child Protection.” Health, Risk & Society 15 (1):
67–83. doi:10.1080/13698575.2012.753416.
Sørensen, K. M. 2018. “A Comparative Study of the Use of Different risk-assessment Models in Danish
Municipalities.” British Journal of Social Work 48 (1): 195–214. doi:10.1093/bjsw/bcx030.
Thoburn, J. 2010. “Achieving Safety, Stability and Belonging for Children in out-of-home Care: The Search for ‘What
Works’ across National Boundaries.” International Journal of Child and Family Welfare 13 (1): 34–49. Retrieved
from: https://www.scopus.com/record/display.uri?eid=2-s2.0-85055407863&origin=inward
Timmermans, S., and M. Berg. 2003. The Gold Standard. Philadelphia, PA: Temple University Press.
Timmermans, S., and S. Epstein. 2010. “A World of Standards but Not A Standard World: Toward A Sociology of
Standards and Standardization.” Annual Review of Sociology 36 (1): 69–89. doi:10.1146/annurev.soc.012809.102629.
Vis, S. A., C. Lauritzen, and S. Fossum. 2019. “Systematic Approaches to Assessment in Child Protection Investigations:
A Literature Review.” International Social Work. doi:10.1177/0020872819828333.
Vis, S. A., Ø. Christiansen, K. J. S. Havnen, C. Lauritzen, A. C. Iversen, and T. Tjelflaat. 2020. Barnevernets
undersøkelsesarbeid-fra Bekymring Til Beslutning. Samlede Resultater Og Anbefalinger [The Investigative Work of
the Child Welfare Service: From Concerns to Decisions. Overall Results and Recommendations]. Tromsø, Norway:
UiT The Arctic University of Norway.
White, S., J. Fook, and F. Gardner. 2006. Critical Reflection in Health and Social Care. https://ebookcentral.proquest.
com/lib/hiof-ebooks/detail.action?docID=295530
White, S., C. Hall, and S. Peckover. 2008. “The Descriptive Tyranny of the Common Assessment Framework:
Technologies of Categorization and Professional Practice in Child Welfare.” British Journal of Social Work
39 (7): 1197–1217. doi:10.1093/bjsw/bcn053.
Yin, R. K. 2014. Case Study Research: Design and Methods. Thousand Oaks, CA: Sage.
