
Evaluation of Models of Good Practice and Guidance for the Internet Industry

on Chat Services, Instant Messaging and Web Based Services

Lorraine Sheridan and Julian Boon

Index

Key findings

Summary and recommendations

Chapter 1: Introduction and methodology

Chapter 2: User perspectives and experiences (i): online users

Chapter 3: User perspectives and experiences (ii): young people, parents and teachers

Chapter 4: Internet Service Provider Senior Managers’ perspectives

Chapter 5: Moderators’ perspectives

Chapter 6: The objective picture: findings from the desk-based audit

References

Appendices (additional tables and figures)

Key findings

• 93% of the online sample felt that the benefits of using the Internet

outweighed any potential dangers

• No teachers and only 7% of parents felt that the Internet had become more

dangerous for children during 2003

• Under 12% of frequent users of chat, IM and web pages felt that safety levels

had decreased between January 2003 and January 2004

• The majority of regular chatters (82%) felt that safety tools and advice were

clearer, more widespread and more accessible compared with 2003, whilst just

42% of regular IM users felt this was the case for IM-specific items

• Regular chat and IM users were generally found to be utilising safety tools

recommended by the Models and finding them helpful

• Ignore and block features were the safety tools most familiar to parents and

children

• Under half of parents and teachers felt their knowledge of the Internet had

increased between 2003 and 2004, compared with 62% of children

• The online sample indicated IM as the service least likely to give rise to

uncomfortable incidents (15%), and chat as the most likely (40%)

• Half the sample of Moderators had suspected potential child abusers of being

on their sites. Difficulties in ascertaining the intent of over-age chatters were also

highlighted

• Results of a desk-based audit of ISPs indicated generally high levels of

concordance with Model-specific recommendations

• Safety tools and advice were most widely available within IM services (96%

of recordings positive), followed by chat services (74%)

• The desk-based audit also noted a significant lack of safety information and

tools offered by connectivity and hosting providers (25%)

• Quantitative data obtained from the interviews with ISP Senior Managers

indicated a high level of baseline concordance with Model recommendations

(67%), and this was highest with regard to the Model on chat (78%). Despite

this, Model impact was still moderate (18%) and appeared to have most

influence on IM services (28%)

• A minority of children (9%) self-reported that they had been made to feel

threatened, uncomfortable or vulnerable while using chat rooms

• Parents tended to make lower estimates of their children’s usage of Internet

services than did their children, whilst teachers tended to make higher

estimates

• A majority of parents (65%) believed their children to possess higher than

average Internet skills

• Children used safety tools and features more frequently than their parents

estimated, and the vast majority of young people viewed the tools positively

• Parents did not possess high levels of Internet safety knowledge and rarely

helped their children use web safety tools and features. They also requested

safety tools and features that were already commonplace, underlining a need

for education

• Children demonstrated sound safety knowledge, advising that other children

should enjoy using the Internet but exercise caution

• Parents gleaned most of their safety information from the media and believed

their children to obtain theirs from school. In contrast, children reported

receiving most of their knowledge from parents and the media

• Online users, ISP Senior Managers and Moderators all saw education and

increasing awareness as the most effective and enduring means of promoting

safe Internet usage, particularly via the media and in schools

• Generally, a lack of education and parental responsibility was felt to be the

primary cause of child Internet safety problems

• The majority of ISP Senior Managers sampled were aware of the Guidelines

• Resistance to Model recommendations among the Managers was low

• Whilst some Managers acknowledged the costs associated with implementing

the Models, none felt that the net effects on business were negative

• Concerns were raised regarding dissemination: Managers felt that some of the

companies not involved in their development may not have seen the Models

• The majority of Managers familiar with the Models viewed their impact as

having been mainly positive, striking a 'reasonable' to 'perfect' balance

between industry and user needs

• Managers reported the primary positive consequence of the Models to be

increased public and industry awareness

• Inhibiting factors identified included lack of familiarity with the Guidelines,

financial constraints, and an opinion that the Models were sometimes too

specific or irrelevant to the services offered

• Moderator familiarity with the Models was generally lacking

• Baseline concordance with the recommendation to screen Moderators was low

but it did appear that the Models were aiding a shift towards screening

• Moderators in general perceived that user access to Internet safety advice

had improved during 2003, as had user and Moderator safety awareness

Summary and recommendations

The aim of this research was to evaluate the Models of Good Practice and Guidance

for the Internet Industry on Chat Services, Instant Messaging and Web Based

Services. The Models – developed by the Home Office Task Force on Child

Protection on the Internet and published in January 2003 – were designed to establish recommended

practice for the protection of children and other potentially vulnerable users of the

Internet. In evaluating their success and utility for achieving this, the Models were

assessed with regard to their impact on a range of principal criteria. Specifically the

criteria concerned were:

• Establishing the extent to which Model-consistent practices have been

implemented since their publication

• Evaluating the functional utility of the Models in terms of making the use of

the Internet safer – with particular reference to the perspective of children

• Identifying the Models’ successes in achieving their objectives without

impeding the innovative benefits which accrue from commercial competition

• Evaluating the Models’ successes in attaining their objectives without

curtailing the highly beneficial educative and social potential of Internet usage

• Establishing the degree to which the Framework of good practice has

strengthened public confidence by delivering a safer and better service for

users

• Establishing the degree to which the publication of the Models has led to

ensuring clarity of information relating to safety advice, warnings and safety

features

The data sources were selected to provide a composite picture of perspectives on

Internet services, ranging from those of the professionals involved in the provision

and administration of the services through to those of the users themselves. Accordingly, four

data collection methods were employed, comprising:

• An online questionnaire designed to measure awareness and perceptions of

Internet safety among regular users

• A traditional questionnaire study of the Internet experiences and perceptions

of young people, their parents and their teachers

• In-depth interview data exploring the views and perceptions of Internet

Service Provider (ISP) Senior Managers and chat room Moderators

• A desk-based audit conducted with the aim of providing an objective picture

of currently available safety features and tools

The current section will provide an overview of the findings from each of these four

datasets, followed by recommendations for good practice generated by the research,

as well as suggestions as to how good practice may be monitored. The Internet is a

rapidly evolving medium and this must be recognised by any future safety initiatives.

This issue will be discussed prior to the presentation of key findings from the

research.

The online questionnaire study

The online questionnaire was set up to collect data throughout February 2004 – one

year on from the dissemination of the Guidelines. Links to the questionnaire featured

on the sites of two Internet organisations – one on a magazine-style page and the other on

the front page of an ISP's chat services. The bulk of the questionnaire took the form

of pull-down menus for respondents to check responses to questions that ranged from

their general thoughts on web pages and safety on the Internet, through to more

specific questions that concentrated on chat rooms, public profiles and Instant

Messaging. In addition a final section was included to allow respondents the

opportunity to provide free-text qualitative feedback. Overall the questionnaire was

designed to gather a broad picture of the views and perceptions of regular Internet

users. In so doing it tapped into the users’ main sources of Internet safety information,

their perceptions of Internet safety issues, the degree to which users believed the

Internet had improved since the introduction of the Models, the users’ frequency of

usage with respect to chat rooms, Instant Messaging, and web pages in general, the

degree to which they had ever felt uncomfortable, threatened or vulnerable while

using any of these services, the frequencies with which the users had noted the

existence and made use of safety features/tools, and the degree to which they regarded

these features/tools as being helpful. It must be noted that the sample was unlikely to

be representative of all online users, and may have been biased towards those

individuals who had had a negative online experience.

In all, two hundred and sixty-six participants completed the online

questionnaire; responses from twenty-six were discarded due to missing or inconsistent

data. The remaining two hundred and forty participants’ responses revealed both

positive and negative aspects of personal and child Internet safety.

On the positive side, this sample overwhelmingly (93%) considered the

benefits of Internet usage to outweigh the potential dangers and only 12% reported

that they felt safety levels had decreased in the period between January 2003 and

January 2004. For chat services, the majority of regular chatters judged that there was

sufficient safety advice provided and that tools and advice were clearer, more

accessible and more prevalent as compared with 2003. In general too, regular chat and

IM users were found to be utilising the safety tools recommended by the Models and

to be finding them helpful.

On a less positive note, one third of regular chatters reported that they had

experienced an uncomfortable situation that had made them feel threatened, with one

half of these people reporting the individual(s) who caused them to feel that way.

Those chatters who did report such experiences tended to find the ISPs to be helpful

but Moderators to be unhelpful. In other circumstances, though, Moderators were rated

as being predominantly helpful in the provision of generic advice. A mixed picture

emerged too with regard to chat and IM usage. Users were far more likely to have

experienced an uncomfortable incident while using chat as opposed to IM, presumably

because of the markedly greater probability of encountering a stranger in the former.

Furthermore, the regular users perceived the safety features of chat services in a more

positive light than their counterparts in IM – again presumably because they had

greater need to have recourse to the former. In addition to a significant proportion of

the sample having experienced a negative episode while using chat, almost three-

quarters of the sample considered that they were receiving more unwanted,

unsolicited material such as spam e-mail, pop-ups and adult material.

In terms of their general views, almost one-third of the online sample reported

that their primary source of Internet safety information was the media, and their qualitative data

cited the media as the most effective means of promoting public awareness about such

issues. In particular the research found that regular users believed that education for

both parents and children was the single most important safety measure available.

This view was further highlighted with another emergent theme which showed regular

users to believe that a lack of education was the primary cause of child Internet safety

problems.

Overall therefore the online sample judged that the Internet was a positive

entity, that in some areas there had been improvements since the introduction of the

Models in January 2003, that education was the single most important means of

promoting safety, and that currently the media plays the predominant role in

disseminating safety awareness among the public.

User perspectives and experiences: Young people, parents and teachers

Three questionnaires were developed and piloted for distribution among parents,

children (between 9 and 15 years of age) and teachers in order to assess awareness,

experience and understanding of the Internet and various safety tools and features.

The questionnaires included both quantitative and qualitative sections which were

designed to elicit participants’ comments and views on: Internet safety; nature and

frequencies of negative experiences; available assistance in relation to negative

experiences; awareness, use of and familiarity with safety tools and features;

perceptions of change since January 2003; and general comments and opinions

regarding the Internet. One secondary and three primary schools in the Midlands

assisted, and data from 305 children, 174 parents and 33 teachers were analysed in the

current work.

Analyses of the three groups' responses revealed differences in perceptions,

particularly relating to the amount of time children spent on the Internet and where

they acquired information about online safety. Relative to the children’s own

estimates, their parents tended to underestimate the time their child spent on the

Internet. By contrast the teachers, making estimates of a ‘typical child’, tended to

overestimate the time children spent using Internet services. Similarly, parents,

children and teachers tended to have different perceptions as to where they sourced

their information about the Internet, with children reporting that their knowledge came

principally from the media and their parents. This is a potentially important finding

since parents did not exhibit high levels of Internet knowledge and rarely helped their

children use web safety tools and features, and were therefore not well placed to

inform children. The problem is compounded by the fact that parents also tended

to misperceive their children as having gleaned most of their Internet knowledge from

school. More than a third of parents (35%) cited school as the primary source of

Internet knowledge for their children, compared with only 18% of the children when

rating themselves and 12.5% of the teachers when rating a typical child. In addition,

the majority of parents believed that their children possessed higher than average

Internet skills, again demonstrating a lack of awareness of issues relating to their

children’s Internet usage.

Only 7% of parents and none of the teachers took the view that the Internet

had become a more dangerous environment for children during the course of 2003.

Furthermore, although there were self-reported instances of children being made to

feel threatened, uncomfortable or vulnerable while using Internet services, this was

exceptional, with most children not having had such negative experiences.

Nevertheless, one-fifth of the teachers reported instances of having been concerned

about the intentions of an individual communicating with children over the Internet.

Children did, however, demonstrate sound safety knowledge and were capable of

advising peers that they should enjoy using the Internet but exercise caution as they

do so. Children also demonstrated that they could and did use safety tools and features,

and most viewed these positively. In terms of personal growth in Internet knowledge,

almost half the parents and the teachers felt that they did not know any more in 2004

than they knew in 2003. This relatively static picture contrasted sharply with that for

the children – most of who reported knowing a little more or much more than a year

before.

In summary, this study's findings for children are in tune with those for the

regular users of the online study in that the majority of Internet use is seen as positive

with exceptional instances of negative experiences. However, while in general the

children did emerge as being Internet safety aware, parents in particular tended to

misperceive potentially important aspects of their children's Internet usage.

Specifically, parents tended to be overconfident as to their children's Internet skills, to

overestimate the role of the school in teaching Internet knowledge, and to

underestimate the degree to which the children look to them as a source of

information.

The professionals’ perspective

ISP Senior Managers

A 137-item semi-structured interview was developed for use with ISP Senior

Managers. Designed to collect both quantitative and qualitative data, it was divided

into four sections that related to: general information, awareness of the Models,

implementation of the Models, and general feelings relating to the Models. Fifteen

ISP Managers were interviewed in-depth with this instrument, their companies

representing a sample of large, medium and small operators who offered a range of

online communication and Internet services to which the Task Force Guidelines

would be applicable. See page 29 in the full report for further details of the

methodology.

While analyses of the Managers’ responses revealed very disparate views and

perspectives on a number of issues, pockets of general consensus did emerge. In line

with the findings from both the online and schools studies, Managers were very clear

that responsible use of the Internet brought benefits that far outweighed the potential

negative aspects. In addition there was broad consensus that education plays a key

part as an effective and enduring means of promoting safe Internet usage. While this

again to some extent echoes the findings of the previous two studies, the Managers

placed greater emphasis on the need for education not only of children but also of

parents. This tied in with a third strand of broad consensus among Managers that

personal responsibility, common sense and parental supervision (cf. the differing

perceptions of parents above) were very important in fostering children's safety while

using the Internet. The majority of Managers felt that although parents should have

prime responsibility for their children's usage of Internet services, there was still a

secondary educational role to be played by schools. However, there were divergent

views as to the degree to which this should be formally implemented, with five

Managers suggesting that such Internet safety training be

covered in school.

With regard to awareness of the Models and the degree to which they had been

implemented, the Managers' responses revealed widespread differences. Of the fifteen

Managers interviewed, twelve had been aware of the Models prior to being contacted

by the project team. Of the twelve who were familiar with the Guidelines, five

reported having made changes to policy, practice and/or operating structures since

their publication. A further five companies felt that there was no need to make

changes, as many measures were already in place prior to the publication of the

Models, and the remaining two companies were aware that they were generally not

complying with the Guidelines and had no plans to make any changes. This mixed

picture is reflected in the perceptions of Managers in terms of the commercial and

financial costs of implementing the Guidelines, which ranged from readily

implementable through to the exact opposite. Where inhibiting factors were identified

they tended to be associated with a lack of familiarity with the Guidelines, financial

constraints, and/or a view that at times the Models were too specific for or irrelevant

to the services offered. Again reflecting the composition of our sample, there was concern expressed

among Managers regarding the Models’ dissemination and the possibility that some

companies may not be aware of their existence.

Overall, though, resistance to the idea of introducing the Models was low in

the sample, and the majority of the Managers regarded their impact as having been

mainly positive, and to have struck a ‘reasonable’ to ‘perfect’ balance between

industry and user needs.

Moderators

Separate semi-structured interview protocols were devised for chat room and forum

Moderators, aimed at determining the level of their familiarity with the Guidelines, the

impact that the Models had had on their roles, recruitment and training, and at providing

an opportunity to express general opinions relating to the Models. Ten Moderators

were interviewed using this instrument, the sample being selected from differing

backgrounds including: full and part-timers, those working in the commercial sector

and in the non-profit-making sector, and paid and volunteer workers.

The sample of Moderators showed that awareness of the Models was generally

lacking, with only three being familiar with them prior to being contacted by the

research team. All three of the Moderators familiar with the Models felt that they had

had a positive impact on Internet safety. In addition the majority of those unfamiliar

with the Models judged that user access to Internet safety advice had improved during

2003. Likewise the majority judged that both user and Moderator safety awareness

had been noticeably enhanced during 2003.

In terms of their perception of their role, most of the sample saw it as being

primarily protective in nature, helping to ensure a safe environment for the users. The

large majority of Moderators reported having identified inappropriate material, such

as sexually explicit material, abusive and/or threatening material, or racist material.

Some Moderators reported difficulties with classifying material as being offensive, for

example where it may have been intended humorously. Some Moderators, though,

indicated that they had taken steps to identify suspect perpetrators and had either

warned them via e-mail to desist or summarily banned them from the room

concerned.

Finally, most of the ten Moderators held the view that increasing awareness

was the single most important method of promoting safe Internet usage, with five

identifying schools as a suitable place to deliver this information.

Desk-based audit

To obtain an objective picture of safety tools and features, 25 companies were audited

with a desk-based procedure. The companies variously provided one, some or all of

the following services: chat, IM, connectivity and hosting. Those companies (N = 15)

that offered chat services were audited with a nineteen-point schedule and the findings

showed that in general the recommended safety tools and features were in place

(73.9% of the time). However, there were areas where safety measures were not

ideal from the perspective of children. For example, more than half of the eligible

companies audited did not provide safety messages tailor-made to the specific needs

of children. Likewise safety advice was inconsistently delivered with regard to

registration and setting up public profiles. Practices varied from explicitly explaining

the use and purpose of providing personal information to providing no information at

all. Similarly some companies actively encouraged children not to post personal

information on chat sites while others gave no information about public profile data.

The emergent picture for IM was better than for chat, with a 95.5% recorded

presence of safety features and advice. Of the eight companies audited, all provided

information regarding the services offered. In addition all but one of the IM

companies provided information on how to keep safe while using the service. Also all

eight companies had ignore or block features in place together with information on

how to deal with unwanted instant messages. However, as with chat services, there

were weaker areas, such as only half the companies providing warning messages when

a user is about to add a new person to their buddy list. Also only one of the companies

provided safety information on both the IM client itself and the home page for

downloading this service.

The picture for web services was markedly less positive, with the connectivity

and hosting providers offering information on safety and safety tools only 25% of

the time. Some features were well supported, such as all the connectivity providers

giving explicit statements as to the boundaries of acceptable online behaviour.

However, fewer than half the companies audited provided other important information

that parents could have utilized in order to educate their children regarding potential

risks. Likewise, hosting providers all gave information regarding acceptable

behaviour, but only two out of the ten audited gave safety advice for home users who

wanted to create web pages, or provided guidance and advice specifically tailored to

young users.

Overall, IM, followed by chat services, emerged as having safety tools and

advice widely available. There was, however, a deficiency in the provision

of these facilities for connectivity and hosting providers. For all four types of service

there appeared to be greater scope for the provision of advice which is tailor-made for

the needs of young people and their parents.

Recommendations for good practice

The present research has produced a number of key recommendations relating to

education and awareness, issues concerning Internet Service Providers, and ways in which

further links between industry and Government might be fostered.

Although ensuring accessibility to safety tools is a prime concern of the Internet

industry, increasing the level of public – particularly parent – knowledge regarding

Internet safety would provide a more enduring and user-active means of protection. It is

thus suggested that promoting awareness of available resources may be the most

important long-term measure arising from the present findings. Users need to be

made aware of the realities of the various risks associated with Internet use and of the tools and

measures that can help combat these risks; children in particular must be encouraged

to adopt safe online practices. Given that the mass media were perceived to be the

primary source of Internet safety knowledge for both regular and infrequent users, it is

suggested that a media campaign would represent the most appropriate vehicle to

promote safe Internet use.

One apparent obstacle to promoting child online safety may be the perceived locus of

responsibility. The present work has identified that the largest proportion of parents

believed their child was taught safe Internet practice at school. Children, however, said

that most of their safety information was gleaned from their parents and from the media.

Our (small) sample of teachers did not tend to possess a detailed knowledge of Internet

safety tools, features and practice. Comments made by regular users and by ISP Senior

Managers revealed that these groups believed that parents accepted little responsibility

for child Internet safety. A media and/or leaflet campaign targeted at promoting parental

guidance would provide an effective means of dissemination to parents who may not

otherwise have access to such information.

A further possible way of advancing child protection on the Internet would be to

incorporate Internet safety practice into the National Curriculum and, in addition, into

teacher training programmes. Further, existing agencies such as the Internet Watch

Foundation might be promoted more widely and their roles clarified. Measures such as

these would accommodate parents' apparent reliance upon teachers to provide children with

up-to-date information concerning Internet safety. Such measures would, however, also

require regular re-training of teachers as the Internet develops.

Although education is seen as being an essential long-term investment, ensuring

that safety tools, advice and related facilities are in place is also key to Internet safety.

Chat and IM tools and advice should be consistently monitored; connectivity and hosting

providers must address the lack of safety information identified by the current research. It

is noteworthy that online child protection is not limited to online communication, but extends

also to the protection of young people from certain websites or material available on the

Internet. Whilst the potential benefits of a system via which users and providers can label

the content of their sites are obvious, present results suggest that the Model

recommendations regarding site labelling saw the least compliance. It should also be

noted that we would expect low levels of compliance with labelling measures among

those web materials that target children for dubious purposes.

Central to the development of Internet safety campaigns is the relationship

between the Service Providers and the Government. It is recommended that this

relationship be further improved by disseminating the Guidelines more widely, for

instance, with advertisements placed in the computer press.

On-going monitoring of good practice

Once effective safety features and policies are in place, procedures are required to prevent

their attrition. Data from the present research would suggest that the ease of monitoring

varies according to the size and nature of the ISP, with larger user numbers presenting greater

challenges. Hosting providers have more problems monitoring user-generated content

compared with (for instance) moderated chat sites. The ongoing monitoring of good

practice must not only respond to, but also anticipate, change due to the rapidly

developing nature of the Internet. Encouragingly, interviews with ISP Senior Managers in

the present research revealed that all those Managers familiar with the Guidelines had

structures in place for regularly assessing safety policies and practice.

Problems associated with the dissemination of the Home Office Models revealed

a need to develop further effective and dynamic links between the Internet industry and

Government. Further, success in monitoring good practice might be affected by the

production of safety measures applicable only to the UK. A substantial degree of

unsolicited mail originates from other countries and the Internet is a worldwide

phenomenon. It is therefore suggested that on-going monitoring of Internet safety might

need to be designed globally – with links between countries being every bit as important

as links within them.

The introduction of comprehensive educational programmes into the National

Curriculum and teacher training would enable straightforward monitoring of Internet

safety awareness and knowledge in the school system.

Responding to social and technological change

Although the Internet and its multi-faceted services are relatively recent phenomena, they

have had a direct impact on the lives of many people, particularly younger persons.

Information Technology skills are important in today’s job market, and a large proportion

of young people communicate, study, purchase goods and meet new people via the world

wide web. Internet-related technology is growing at an exponential rate, and our findings

would suggest that a substantial proportion of parents have been unable to keep abreast of

major developments that have occurred to improve online safety. In the present work,

children knew more about Internet safety tools and features than did many of their

parents and teachers. Although children may know more about Internet safety than their

parents, this does not necessarily imply that young people should use the Internet

unsupervised. Children may still be naïve to the more serious social dangers associated

with Internet usage. Once again, education may be cited as the greatest potential tool that

individuals have to improve children’s safety online. Although the children in the current

work regarded safety tools as beneficial, the functional utility of such tools is ephemeral,

whereas education and awareness are long term and enduring.

In terms of responding directly to technological developments, any future versions

of the Guidelines would need to extend the current Models’ recommendations to include

safety guidelines for Internet services received via mobile telephones. The results of a

survey published in 2002 (Hayward, Alty, Pearson & Martin, 2002) reported that 92% of

households owned a mobile phone and that 21% of households owned a WAP/3G phone

(able to connect to the Internet). A more recent study suggested that a quarter of British

children aged between seven and 10 owned mobile phones (Mintel, 2004). Further, chat

associated with online multiplayer gaming is not covered by the Guidelines, nor is

interactive digital television. Concerns have been raised about abuses of Bluetooth

technology (an innovation that allows any device with a Bluetooth chip to communicate

by radio instead of cables). ‘Bluesnarfing’, which is the theft of information from a

wireless device through a Bluetooth connection, can be very difficult to detect and can

result in the copying of an individual’s contact lists, calendar and any multimedia objects

associated with these. E-mail systems are not currently covered by the Task Force

Guidelines, and further developments falling outside their scope will undoubtedly

arrive imminently. One particular concern raised by both regular Internet users and by

ISP Senior Managers involved negative material originating from overseas. Spam, pop-ups

(with or without adult content), junk e-mail and text messages, unsolicited pornographic

material and more, are not simply a public nuisance but may also upset vulnerable users

and seriously inconvenience UK businesses (e.g. Potter & Beard, 2004). Increasingly,

spam blocking is viewed as one of the services a good ISP should provide and future

Guidelines could offer a Model recommending spam reduction measures.

Primary findings

• Broad consensus among the professionals that the introduction of the Models has

been a positive exercise

• For professionals and users alike, negative experiences of Internet usage were not

found to be commonplace, and the overall perceptions of the web were positive

• Levels of understanding and knowledge of safety tools and features were found to

vary widely in all groups

• Children demonstrated sound safety knowledge

• Agreement was found among parents, regular users and the Internet professionals

that education and awareness are the most effective and enduring means of

promoting safe Internet use

• A corresponding need was identified – with broad agreement from the professionals –

for greater Internet safety awareness education for parents

• Mass media was recognised as an important influence on Internet safety

knowledge

• Sections of the Industry, regular Internet users and parents identified a need for

Internet Safety awareness education to be formally incorporated into the National

Curriculum

Key points for future action

• Apparent inconsistencies among parents, children and teachers as to where

children currently source their knowledge of the Internet

• Parents tended to be limited in their understanding and knowledge of the Internet,

yet children tended to look upon parents as a main source of knowledge and

advice

• Parents tended to underestimate their role as providers of Internet knowledge to

their children, and overestimate both the current role of the schools in teaching

Internet safety and the level of their children’s Internet skills

Chapter 1: Project overview

Introduction and background to the project

This research aimed to evaluate the Models of Good Practice and Guidance for the

Internet Industry, produced by the Home Office Task Force on Child Protection on the Internet. These

Models included three detailed sections and provided comprehensive safety

recommendations with regard to Chat Services, Instant Messaging and Web based

services. The primary concerns of the current evaluation were fivefold, namely:

• Establishing the extent to which Model-consistent practices have been

implemented since their publication in January 2003

• Evaluating the functional utility of the Models in terms of making the use
of the Internet safer, particularly from the perspective of children
• Identifying the Models’ successes in achieving their objectives without
hampering the innovative benefits to services which are gained through
commercial competition
• Evaluating the Models’ successes in attaining their objectives and establishing
how these have been achieved without curtailing the highly beneficial
educative and social potential of Internet usage
• Evaluating the impact of the four Key Objectives of the Models

Specifically these four Key Objectives were to:


• provide a framework of good practice to deliver a safer and better
service for users
• help the industry to empower children and other users to make
well-informed decisions e.g. about disclosure of personal details
whenever they enter services or move between them
• ensure clarity of information, warning and advice
• strengthen public confidence in the services offered

The present study constitutes a detailed appraisal of whether the Internet industry has
implemented recommendations from the Models and whether the Models have
provided a vehicle to increase public awareness and uptake of safety measures.
Findings were obtained from seven assessments that were conducted by means of an
online survey, traditional questionnaire-based assessments, structured interviews, and
an objective desk-based audit.

Background to the Models


The Office for National Statistics reported that in 2003, 11.9 million UK households
(48%) were able to access the Internet from home. Between 1998 and 2003, the number of
households with home Internet access increased by 9.6 million (ONS, 2003). Further, it has been
estimated that the proportion of young people accessing the Internet rose from 73% in
2001 to 84% in 2002 (Hayward, Alty, Pearson & Martin, 2002). Hayward et al. found
the proportion of young people accessing the Internet from home also rose during this
period from 45% to 56%. In light of the significant increase in Internet access in
residential homes, and following concerns for the safety of children who use the
Internet, a means of ensuring safe use for the online community was required. The
Home Office Task Force on Child Protection on the Internet was assembled in March
2001 as a response to widespread concern about the safety of the Internet for children.
In particular, it was concerned with increasing supervision in chat rooms, and
providing clearer safety messages to online users. The Task Force’s key aims are:

• To make the UK the best and safest place in the world for children to use the
Internet

• To help protect children the world over from abuse fuelled by criminal misuse
of new technologies

The Task Force published the Good Practice Models and Guidance for the Internet
Industry on Chat Services, Instant Messaging and Web Based Services in January
2003, providing separate recommendations and guidance on safety features and tools
for each of these three services. The Models were primarily aimed at the UK Internet
Industry and were collaborative in nature, the Task Force consisting of representatives
of the Internet industry, Government and main opposition parties, as well as law
enforcement and Child Welfare Organisations. Recognising that the Internet is a vast
and important educative and social tool, and that it is a preferred source of

information and day to day communication for millions of UK citizens, the Task
Force’s publication was produced as a means of informing business with realistic,
workable safety recommendations. A Task Force-led media campaign highlighting
safety issues directed at parents/carers and children ran between December 2001 and
Spring 2002, and a similar campaign accompanying the Models' launch occurred early
in 2003. The most recent public awareness campaign was in January 2004 and
included “Keep your child safe on the Internet” leaflets and website updates.

Previous Research

Research on Internet safety and methods to increase Internet awareness has been
conducted mainly during the last four years, in response to the swift rise of ‘Internet
Culture’. The Cyberspace Research Unit (CRU) based at the University of Central
Lancashire has conducted much of this research. The CRU was established in 2000
and aimed to provide research in order to “empower children and young people with
the tools, knowledge and skills they need to navigate safely in cyberspace”
(http://www.uclan.ac.uk/host/cru/about.htm). The Unit was also concerned with the
investigation of criminal behaviour on the Internet.

Research undertaken both before and after the Task Force Models were
published in 2003 has found that although fewer children now become regular
chatters than in the past, and although general risk awareness is relatively prevalent,
more children have reported that they have attended face-to-face meetings with
persons they met online. Further, generally fewer children are aware of specific safety
features than before, especially chatters (O’Connell, Price & Barrow, 2004). It has
also been established that although educational structures that target Internet child
protection issues are in existence, they might not be reaching everyone given that
schools do not allow children to use chat and Instant Messaging services (O’Connell,
Barrow & Sange, 2002). These findings emphasise the need to evaluate the
availability of specific safety features (e.g. alert features, advice on abusive chatters)
to children and the general public when using chat, Instant Messaging, and web based
services in their home environment. Part of the current report will consider the extent
of awareness and experience of safety tools and other safety features among primary
and secondary school children, as well as among their parents and teachers.

Methodology
This section will detail the methodologies employed by the current research. Four
diverse data types were collected: data obtained via an online questionnaire,
traditionally gathered questionnaire data, structured interview data, and objective data
on the frequency of safety features collected via a desk-based audit exercise.
Accordingly, the current section is split into four sub-sections, each describing the
data collection processes associated with individual data collection methods.

1. Online questionnaire data


The online questionnaire was designed to obtain data on the experiences and views of
Internet users. The questionnaire explored: the users’ main source of Internet safety
information, their perceptions of Internet safety and whether they believed it had
improved since the publication of the Models, the frequency with which they used
chat rooms, Instant Messaging and web pages in general, whether or not the user had
ever felt uncomfortable, threatened or vulnerable whilst using any of the above
services (and if so how frequently), how helpful various individuals or agencies had
been in such an event, how frequently users had observed and used various safety
tools, how well the participants understood these various safety features, and how
helpful they found them to be. Respondents were also given an opportunity to provide
general comments regarding Internet safety.
The questionnaire took the form of a web page with drop-down menus for
each item from which respondents selected their answers (except in the final section
where respondents typed in qualitative comments). The questionnaire was divided
into five sections: one to collect general information relating to the user (e.g. age,
occupation etc.); one to gather the participants' general thoughts on web
pages and safety on the Internet in general; and three final sections that concentrated
on chat rooms, public profiles and Instant Messaging respectively, with the user given
the option to skip through any or all of the sections if they never used these services.
Two Internet companies agreed to provide a web link to the questionnaire, one from a
computing magazine-style page and one from the front page of the ISP’s chat
services. Data collection took place during February 2004.
Two hundred and sixty-six participants completed the online questionnaire. A
majority (72.8%) of the sample were female, and the sample had a mean age of 27.5

years (SD = 12.2). Data from 26 of the respondents were discarded due to
inconsistent or missing data, resulting in a final sample of 240.

2. Traditional questionnaire data

Three questionnaires were devised for distribution among parents, children (under 16
years old) and teachers in order to assess awareness, experience and understanding of
the Internet and various safety tools. The questionnaires included both quantitative
and qualitative sections to allow participants to comment on various aspects of
Internet safety, negative experiences, satisfaction with responses from various third
parties where negative experiences did occur, awareness, use of and familiarity with
various safety tools and features, and more general feelings regarding Internet safety.
The parental and the child questionnaires were piloted using five matched
parent-child pairs. Children were aged between 9 and 14. Following
completion, both children and their parents were invited to comment freely on the
structure of the questionnaire and its ease of understanding. Three of the child
questions were simplified and the child questionnaire was reformatted to make it
appear more inviting. The parental questionnaire was not revised as a result of
piloting.

Thirteen schools were approached in Leicester, Leicestershire and Rutland


and three primary and one secondary school agreed to participate in the research. The
other nine schools explained that time constraints would not allow them to participate.
It must be noted that four schools from a relatively small geographical area are
unlikely to represent all UK schools. Time and financial limitations meant that a
convenience sample was chosen, as opposed to a more representative selection of
schools. The child questionnaires were administered simultaneously in a classroom
environment. At least one researcher or teacher was on hand to respond to questions.
Children were informed that not all young people had used the Internet, that they
should answer honestly and that it was normal for many children not to understand
chat rooms, Instant Messaging and web pages. Numerous queries were answered by
the researchers and teachers in order to maximise useable data. Children were given
an envelope containing a correspondingly coded parental questionnaire with a
covering letter that explained to parents the purpose and objectives of the research.

Anonymity was assured. Parents were asked to refer to one child whilst responding to
the questionnaire, namely the child who had brought the questionnaire home from
school, and parent and child questionnaires were coded with identifiers so that parent-
child groups could be matched. Teachers were asked to think about the ‘typical’ child
in their school whilst answering the questions.
The four schools represented a mix of inner city and county schools, and
expected demographics of children and parents varied accordingly. One school, based
in the centre of Leicester, had a high level of immigrant and refugee children, many
of whom did not have English as a first language (teachers took these children
through the questionnaire item by item), whilst the Rutland school recruited from a
catchment area of a large nearby Army base and from local villages. The remaining
primary school and the secondary school recruited pupils from a small and a large
town in Leicestershire respectively. Once the four schools had agreed to participate,
particular classes were selected to ensure a roughly equal gender mix, and to meet age
group criteria. The minimum age for inclusion in the study was nine years, with a
maximum of 15 years.
Three hundred and forty-six children completed the children's questionnaire,
constituting a 100% response rate. However, due to inconsistent data and/or
incomplete data, 41 questionnaires were discarded, leaving a final child sample of
305. Responses from 183 parents were received but nine were excluded from the
analysis due to high levels of missing data or because of inconsistent data. Given that
more children completed and returned useable questionnaires than did their parents,
additional data files were created and these contained only the matched responses
from 165 children and their parents. Where responses between children and their
parents are compared, these data files are used (see the illustrative sketch below). The parental response rate was 53%.
Thirty-three teachers completed a questionnaire and all of these were judged to be
useable. Again, teachers were asked similar questions to children and parents,
allowing three-way comparison data to be computed. The teacher response rate was
43%.
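
To make the matching step concrete, the following minimal sketch (illustrative only; the file names and the 'code' column are hypothetical stand-ins for the coded identifiers described above) shows how matched parent-child data files of this kind can be built:

    import pandas as pd

    # Hypothetical files; each row carries the identifier code written on the
    # questionnaire envelope, so a child and their parent share the same code.
    children = pd.read_csv("child_questionnaires.csv")   # has a 'code' column
    parents = pd.read_csv("parent_questionnaires.csv")   # has a 'code' column

    # An inner join keeps only cases where both a usable child questionnaire
    # and a usable parent questionnaire exist (165 pairs in the present data).
    matched = children.merge(parents, on="code", suffixes=("_child", "_parent"))
    print(len(matched), "matched parent-child pairs")
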
For all three sets of questionnaires, some data are missing. This is because
respondents were asked to leave blank any questions that they did not understand.
Throughout the report, missing data are dealt with in a consistent manner unless
otherwise stated. That is, given that ‘don’t know’ and ‘not applicable’ input options

were usually available, averages have been computed based only on the data present.
All data presented in this report were analysed using SPSS for Windows™ v.11.01.
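
Purely as an illustration (the original analyses were run in SPSS, not in code), the convention of computing averages only on the data present amounts to treating 'don't know', 'not applicable' and blank responses as missing, as in the following Python sketch with a hypothetical column name:

    import numpy as np
    import pandas as pd

    responses = pd.read_csv("child_questionnaires.csv")

    # Recode 'don't know' and 'not applicable' selections as missing so that
    # they, like blank items, are excluded from the averages.
    hours = responses["hours_online"].replace(
        {"don't know": np.nan, "not applicable": np.nan}
    )

    # pandas skips missing values by default, i.e. the average is computed
    # based only on the data present.
    print(pd.to_numeric(hours).mean())
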

3. Semi-structured interviews
Senior Managers from a range of Internet Service Providers and providers of chat
services were contacted in order to explore how successfully the Models had been
implemented and disseminated across the Internet industry since their publication in
January 2003. The main aims of these interviews included identification of the degree
of implementation of safety tools and features, assessment of the impact the Models
had on business, and examination of general views and opinions of the Models within
the industry. Given that a key section of the Models document focused on the
important role that chat room Moderators might play in Internet safety, Moderators
were also interviewed. The chief aims of these interviews were to determine
Moderator familiarity with the Guidelines, the impact that the Models had on their
roles, recruitment and training, and again to explore general opinions regarding the
Models.
Separate semi-structured interview protocols were devised for both ISP Senior
Managers and Moderators, and the interviews were conducted by telephone. The research
protocols were in part based directly on the Models themselves, enabling the research
to record whether specific recommendations had been followed. For
recommendation-specific questions, interviewees were offered five possible
responses: ‘in place pre-Models’, ‘introduced or changed in response to Models’,
‘plan to introduce’, ‘no plans to introduce’ or ‘not applicable’. For the more general
questions, fuller responses were expected. On a few occasions interviewees felt
unable to respond appropriately to a closed question by selecting one of the five
available responses, either because more contextual information was required or
because they could not recall (for instance) whether a feature had been introduced as
a response to the Models or was in place pre-Models. In the former cases, a more
detailed, qualitative response was recorded and data were coded with added
comments, whilst in the latter case data were coded as 'missing'.
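
To illustrate how such codes can be summarised (this is an interpretation of the scoring, not a description of the original analysis), the sketch below tallies the recommendation-specific responses for one interview and derives baseline-concordance and Model-impact percentages of the kind reported in the findings:

    from collections import Counter

    # Coded answers to the recommendation-specific questions of one interview,
    # using the five response options described above (values illustrative).
    answers = [
        "in place pre-Models",
        "in place pre-Models",
        "introduced or changed in response to Models",
        "no plans to introduce",
        "not applicable",
    ]

    counts = Counter(a for a in answers if a != "not applicable")
    applicable = sum(counts.values())

    # One plausible reading: baseline concordance = recommendations already met
    # before publication; Model impact = changes made in response to the Models.
    baseline = counts["in place pre-Models"] / applicable
    impact = counts["introduced or changed in response to Models"] / applicable
    print(f"baseline concordance {baseline:.0%}, Model impact {impact:.0%}")
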
The ISP Interview was divided into four sections and comprised one hundred
and thirty-seven questions: general information (nine questions), awareness of the
Models (four questions), implementation of specific aspects of the Models (100
questions) and general feelings in relation to the Models (24 questions). The

Moderator Interview was divided into five sections and comprised seventy-three
questions: general information (four questions), recruitment and training (14
questions), the Moderator's role (nine questions), experiences (19 questions), and the
impact of the Models (27 questions).

Pilot interviews were undertaken with an ISP Senior Manager and a chat room
Moderator to identify any likely misunderstandings, and ascertain whether the
research protocols were appropriately pitched. As a result of this exercise, several
minor amendments were made to the Senior Manager interview schedule. All
interviewees gave prior agreement for the interview to be taped, and transcription
usually took place immediately after the interview. Few interviewees kept rigidly to
the interview protocol, allowing the collection of rich qualitative data.
Forty-two separate organisations were approached to try and recruit
interviewees and ultimately 15 ISP Senior Managers and 10 Moderators were
interviewed. Difficulties in obtaining willing managers from different companies
may have resulted in a filtering effect whereby managers from those companies more
concerned with Internet safety and with higher levels of familiarity with the Models
may have been more likely to agree to participate. A similar filtering effect may have
occurred with regards to company size. Larger companies have more staff at their
disposal to deal with policy-related research than smaller companies, whose
staff may not be as able to donate time to non-operational matters. Further issues with
the Senior Manager questionnaire arose due to the larger Internet companies often
having different Managers for different areas of their services. This problem arose for
two of the respondents: one interviewee could only comment on the chat services
offered by the company and another respondent could comment on all aspects other
than Instant Messaging. In the case of this second company a second interviewee was
recruited to answer the questions relating specifically to Instant Messaging only (this
participant was not counted as an additional interviewee). In most instances, contact
details for the Moderator sample were provided by ISP Senior Managers who had
already been interviewed.
The duration of ISP Senior Manager interviews ranged between 18 and 90
minutes with an average duration of 41 minutes, whilst the Moderator interviews
lasted between 18 and 45 minutes with an average length of 29 minutes. The Senior
Manager interviews varied in length partially due to organisations offering differing

levels of the services being explored (i.e. chat services, Instant Messaging,
connectivity and web hosting) and both the Senior Manager and Moderator
interviews varied significantly in duration according to the degree of interviewee
familiarity with the Models.

4. The objective exercise


Twenty-five Internet, chat and Instant Messaging Service Providers were
independently assessed using an objective questionnaire developed from the
recommendations stated in the Models document. The research instrument was
devised to independently assess the implementation of safety tools as suggested by
the Models. It comprised three sections: chat services (19 questions), Instant
Messenger services (22 questions), and web services (connectivity and hosting
providers, 16 questions). The instrument was constructed to be as objective as
possible and collected quantitative data on the basis of three possible
responses: 'yes', 'no' or 'not applicable'. Questions were answered 'yes' if features
were reasonably easy to locate, given that the Models not only suggest that a safety
feature be present, but also that it be easily accessed, identified, and readily
interpretable. To further increase objectivity and internal validity, one Researcher
completed all the assessments. Where she was in doubt as to whether a response
should be coded ‘yes’ or ‘no’, two independent raters would separately provide
ratings, and majority ratings were recorded. Companies were selected in order to
identify a diverse range of large, mid-range and smaller companies specialising in
different and combined aspects of the Models being evaluated. The Researchers
attempted to match the responses of those interviewed via the structured interview
protocols with their companies’ objectively assessed performance via the desk-based
audits. This was achievable in 44% of cases in the audit sample (n = 11).
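
The coding rule described above can be illustrated with a minimal sketch in
Python. This is an illustrative reconstruction, not the study's actual procedure
or software: it assumes (as the text implies but does not state) that the
Researcher's provisional coding is pooled with the two independent ratings and
the modal response recorded.

    from collections import Counter

    def record_response(researcher: str, rater_a: str, rater_b: str) -> str:
        """Return the majority coding across three raters.

        Hypothetical helper: with three raters and two categories
        ('yes'/'no'), a strict majority always exists, so no further
        tie-breaking rule is needed.
        """
        votes = Counter([researcher, rater_a, rater_b])
        return votes.most_common(1)[0][0]

    # Example: the Researcher was unsure and provisionally coded 'no',
    # but both independent raters coded 'yes', so 'yes' is recorded.
    assert record_response("no", "yes", "yes") == "yes"
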
Twenty-five assessments were completed. The following breakdown reflects
the range and diversity of services on offer. Of the 25 companies, seven (28%)
provided chat services only, one (4%) provided Instant Messaging only, 9 (36%)
provided connectivity and web hosting, one (4%) provided connectivity, web hosting
and chat, three (12%) provided connectivity, hosting, chat and Instant Messaging
services and four (16%) provided chat and Instant Messaging services only. Due to
companies providing different services, there will be sections of data that are not
applicable as a result of the organisation not providing the service in question.

Structure of the Report

• Chapter 2 details the perspectives and experiences of online users, obtained


via an Internet based survey tool. Both quantitative and qualitative data are
presented.

• Chapter 3 examines the perspectives and experiences of young people (aged


between 9 and 15 years), their parents and teachers, all of whom were
recruited from four different schools in Leicester, Leicestershire and Rutland.
This section was facilitated by the use of traditional style questionnaires.
Again, both quantitative and qualitative data are employed to interpret the
findings. Together, Chapters 2 and 3 detail the impact of the Models in the
community by measuring public awareness of and uptake of safety tools and
features, as well as how useful these features are judged to be.

• Chapters 4 and 5 contain detailed, primarily qualitative accounts, of the views


of 15 Internet Service Provider Senior Managers and 10 chat service
Moderators. Industry responses to and implications of the Models will be
discussed. These findings derive from semi-structured interview protocols.

• Finally, Chapter 6 discusses the results of an objective desk-based audit and


assesses 25 Internet companies with regards to the degree to which
recommendations featured in the Models are in place.

Chapter 2: User perspectives and experiences (i): online users

This chapter first provides a quantitative overview of responses received from a


survey of online users. Attitudes of regular chatters and Instant Messaging users are
then investigated qualitatively, illustrating respondents’ opinions of the most and least
helpful forms of safety tools and advice, as well as their general feelings regarding
measures required to improve Internet safety.

The sample
The mean age of the 240 participants who took part in the online survey was 27.2
years (SD = 11.8, range 14 to 80). However, just over half (50.2%, n = 120) of the
available sample was aged between 18 and 21 years. All were UK residents and 94.6% (n
= 227) reported having a home Internet connection. The majority of respondents
(74.1%) were female. The most frequent occupation was ‘student’, encompassing
57.1% of the sample, followed by ‘homemaker’ (4.6%), ‘professional’ and ‘other’
(both 4.2%) and ‘computer related’, ‘education/training’ and ‘sales/marketing’ (all
3.3%). The most commonly reported Internet Service Provider was BT (26.1%),
followed by NTL and Tiscali (both 19.3%) and then by AOL and Freeserve (both
9.7%).
Table 2.1 shows the respondents’ reported usage of chat, Instant Messaging
and web pages in a typical week. Web pages were most frequently accessed, with
93.3% of the sample using these services at least three times in a typical week and
100% accessing them at least once each week.

Table 2.1.
Respondents’ use of chat services, Instant
Messenger services and web pages
(In a typical week)
Chat services
Never 59.2% (n = 142)
Once or twice 17.5% (n = 42)
Three to five times 5% (n = 12)
At least once per day 18.3% (n = 44)

Instant Messaging
Never 38.3% (n = 92)
Once or twice 21.7% (n = 52)
Three to five times 15% (n = 36)
At least once per day 25% (n = 60)
Web services
Never 0% (n = 0)
Once or twice 6.7% (n = 16)
Three to five times 25.8% (n = 62)
At least once per day 67.5% (n = 162)
N = 240

It must be noted that the sample was self-selecting in nature. Because of this,
responses to some questions may be artificially negative or positive.
For instance, it may be that the current sample were less likely than a more
representative sample of web users to state that they had experienced an unpleasant
incident whilst using the Internet, as such experiences may have led to individuals
abandoning use of the Internet. As such, generalisations to all UK users of chat, IM
and web services may not be made.

Safety and the Internet


The largest group of respondents (28.9%, n = 69) reported that their primary source of
Internet safety information was the mass media, followed by friends (21.3%, n = 51),
Internet Service Providers (18%, n = 43), the Internet itself (13%, n = 31), parents and
family (8.8%, n = 21) and school (3.3%, n = 8). A small number (6.7%, n = 16) of the
sample reported that they received their information from another source, the most
commonly cited being ‘common sense’ and ‘experience’. Respondents were asked
whether they felt that the benefits of using the Internet outweighed its risks and
overwhelmingly, 93% of respondents reported that the benefits were greater than any
potential dangers.
The current section details respondents’ use of and opinions concerning
various Internet safety features recommended by the Models. Separate analyses refer
to chat services, IM and web pages. In order to assess uptake rates and views of users
who were knowledgeable with regard to online communication, only those who

accessed chat or IM services at least once per week were included in the relevant
analyses (all respondents accessed web pages at least once during a typical week).
Similar findings from a more general sample may be viewed in Chapter 3, when
findings from the young peoples’, parents’ and teachers’ questionnaires are assessed.
In the present section, it is supposed that regular online users are a valuable source of
information specifically regarding post-Model safety developments as a result of their
high level of use and assumed high level of knowledge. Findings from chat users will
be detailed first, followed by findings from users of IM services, and finally web
pages.

Chat users
Just over two fifths of the sample (40.8%, n = 98) reported using chat at least once per
week. Of these, 80.4% felt that sufficient clear chat room safety advice existed,
whereas 19.6% indicated a belief that satisfactory advice was lacking. Furthermore,
82.3% of the participants reported that advice on how to safely use chat rooms was
clearer, more widespread and more accessible than it had been 12 months previously.
Although the majority of participants reported that they had never felt
uncomfortable, threatened or vulnerable while using chat services (60.2%, n = 59),
more than a third responded positively to this question (39.8%, n = 39). Of those who
had experienced a problem with another user or users, more than half (56.4%, n = 22)
revealed that they had reported the incident. Table 2.2 indicates to whom these
participants reported their problems and to what extent they found reporting to be
beneficial.

Table 2.2.
Chat services users' negative experiences: who users reported to and whether
reporting was helpful

People/institutions to whom    Did not     Did inform and   Informed but did
a problem was reported         inform      found helpful    not find helpful
Internet Service Provider      40.9% (9)   40.9% (9)        18.2% (4)
Moderator                      31.8% (7)   13.6% (3)        54.5% (12)
Police                         95.5% (21)  /                4.5% (1)
Child Protection Agency        95.5% (21)  4.5% (1)         /
Family member                  95.5% (21)  4.5% (1)         /
Teacher                        100% (22)   /                /
N = 22

Table 2.2 indicates that respondents were most likely to report to Internet Service
Providers and chat room Moderators, finding the former to be largely helpful and the
latter to be largely unhelpful.
Regular chatters were asked whether they had had contact with four safety
features during their chat room experiences. They were also asked to rate whether
they found each feature to be helpful or unhelpful, and to what extent they understood
them. Table 2.3. presents the findings.

Table 2.3.
Chat services users' experiences, perceptions and understanding of safety features
                          Chat room    Ignore or        Alert        Advice on handling
                          Moderators   block features   features     abusive messengers
User had never seen       /            6.2% (6)         34.7% (34)   24.2% (23)
feature
User had seen, but not    /            17.5% (17)       26.5% (26)   37.9% (36)
used, feature
User employed feature     67% (63)     76.3% (74)       38.8% (38)   37.9% (36)
User found feature        81% (47)     89.2% (66)       89.2% (33)   82.9% (29)
helpful
User did not find         19% (11)     10.8% (8)        10.8% (4)    17.1% (6)
feature helpful
User understood feature   82.4% (75)   87.2% (82)       59.8% (55)   88.2% (52)
User did not understand   17.6% (16)   6.4% (6)         40.2% (37)   11.8% (7)
feature
N = 98

Those respondents who had encountered an abusive chatter were more likely to have
employed advice on how to handle abusive chatters when compared with those who
had not. Overall, table 2.3 would suggest that regular chatters are utilising the safety
tools recommended by the Models, and are finding them to be helpful in terms of
personal Internet safety. Increasing understanding of alert features (e.g. a panic
button within a chat room to inform a Moderator or operator about abuse or
discomfort) appears to be an outstanding issue, although some Moderators and ISP
Senior Managers commented that this feature may be redundant in a competently
moderated chat room (see Chapters 4 and 5). Although chatters who had experienced a specific problem
with another user did not tend to find Moderators to be helpful, the vast majority of
those chatters who had more mundane contact with Moderators viewed this contact
positively.

Instant messaging users


One hundred and forty-eight participants used Instant Messaging services at least
once per week. Of these, 81 (55.5%) felt that clear and accessible safety advice was
lacking. Slightly more (58.2%) felt that IM safety advice and access to it had not
improved during the preceding year. These findings demonstrate that online users
viewed chat services more positively than IM in terms of levels and accessibility of
safety advice, and in terms of improving access to safety advice during 2003. Almost
a sixth (14.9%) stated that they had felt uncomfortable, threatened or vulnerable at
least once while using IM (this figure stood at 39.8% for chat services). Of those 22
participants who had reported feeling uncomfortable, just three (13.6%) stated that
they had reported it to someone. Two of these reported to their ISP and found the
experience to be helpful, whilst one reported to the police and stated that this had not
been helpful. Like chatters, IM users were asked about their knowledge, use and
experiences of various safety measures. Table 2.4 details the results.

Table 2.4.
IM users' experiences, perceptions and understanding of safety features
                                Ignore or block   Advice on handling
                                features          abusive messengers
User had never seen feature     7.5% (11)         52.4% (77)
User had seen, but not used,    26.5% (39)        33.4% (49)
feature
User employed feature           66% (97)          14.3% (21)
User found feature helpful      98% (95)          76.1% (16)
User did not find feature       2% (2)            23.8% (5)
helpful
User understood feature         93.6% (134)       92.6% (50)
User did not understand         6.4% (9)          7.4% (4)
feature
N = 148

As with chatters, those who had experienced abuse from another user were more
likely to have looked at advice on how to handle abusive messengers. Instant
Messengers and chatters used ignore or block features and read safety advice at
roughly similar levels.

Public profiles seen in chat or IM


All 240 participants were asked if they had ever seen and used warning advice
referring to the creation of public profiles, and if so, whether they found it helpful.
The majority of the total sample reported they had never seen any warning advice on
public profiles (n = 175, 72.9%); only 27.1% had ever come across and used these
features. Of those 65 participants who had seen and used this advice, most (73.8%)
felt that it was helpful.

Web users
All 240 participants who completed the online survey used web services at least once
each week. The majority of these reported never having felt uncomfortable,
threatened or vulnerable while looking at web sites and web pages. Of the 17.5% (42)
who said that they had, just five reported their experiences to someone else. Three
reported to their ISP with two finding their ISP to be helpful, one reported to the

police and did not find the police to be helpful, and one reported to the Internet Watch
Foundation (IWF), finding this organisation to be helpful. Of course, the location of
safety information may have had a large impact on who individuals reported their
experiences to. Chatters were far more likely than IM and web page users to report
negative experiences. This is likely due to clearer channels for reporting being in
place for chat services, and because chatters are more likely to encounter strangers
than are IM users.
All online participants were asked about their experiences and opinion of
online safety guides, contents ratings and filtering software. Table 2.5 displays the
results.

Table 2.5.
Web users' experiences and perceptions of safety features
                            Online safety   Contents        Filtering software
                            guides          ratings         (parental controls)*
User had never seen         51.8% (118)     Not available   65.4% (157)
feature
User had seen, but not      31.1% (71)      Not available   Not available
used, feature
User employed feature       17.1% (39)      48.3% (100)     26.3% (56)
User found feature          89.4% (34)      58.1% (57)      60.7% (34)
helpful
User did not find feature   10.5% (4)       41.9% (41)      39.3% (22)
helpful
User understood feature     95.5% (63)      Not available   74% (154)
User did not understand     4.5% (3)        Not available   26% (54)
feature
Maximum N = 240, *where applicable

Again, users found the various safety features to be beneficial in the majority of cases.
However, online safety guides were used by few people. Further, contents ratings and
filtering software were not found to be helpful by around 40% of those participants
who used them.

Web users were asked whether they received more or less spam (unsolicited e-
mails) and pop-ups containing adult content than at the same time the previous year,
and whether it was now easier or more difficult to access sites with adult content. The
largest proportion (72.9%) felt there was more spam than a year previously, and just
13.8% felt that there was less (the remaining 13.3% viewed spam levels as remaining
constant). Similarly, when asked if they felt that it was more or less easy to stumble
across sites containing adult material (when not searching for this type of content)
compared with a year ago, 47.3% felt that this was easier and 40.8% thought the
chances were about the same. Only 11.9% of respondents felt that these sites were
better cloaked than a year ago. When asked about the number of pop-ups containing
adult material, 56.2% of respondents felt that there were more than there had been a
year ago and 30% judged there to be the same number. Only 13.8% of respondents
felt that the number of pop-ups containing adult material had decreased in frequency
during 2003. Many measures are available to deal with spam and pop-ups (e.g. pop-up
blockers, junk mail filters) and most major ISPs offer a variety of services. However,
these problems represent a moving target in that to date, each technological measure
put in place has eventually been circumvented. Further, the originators of spam,
pop-ups and junk mail frequently operate from outside the UK. ISPs, whilst able to
provide personal filters and perform some central filtering, cannot safeguard against
all sites, and nor can filters be effective if an adult content provider chooses not to
label their site.

General improvements
All online respondents were asked whether they felt that chat rooms, IM services and
web sites were generally safer than they had been before the Models were introduced
a year earlier. When thinking about web sites, participants were asked to consider
whether sites now carry better descriptions of their content than they did a year
before. Frequency data from these questions can be viewed in table 2.6 below:

Table 2.6.
Online respondents' judgements of whether the Internet was safer in
January 2004 as compared with January 2003
                     Chat Services   IM Services   Web sites
                     (n = 94)        (n = 142)     (n = 218)
Much safer           13 (13.8%)      17 (12%)      5 (2.3%)
A little safer       33 (35.1%)      46 (32.4%)    66 (30.3%)
About the same       37 (39.4%)      73 (51.4%)    129 (59.2%)
A little less safe   7 (7.5%)        2 (1.4%)      11 (5%)
Much less safe       4 (4.3%)        4 (2.8%)      7 (3.2%)
Participants who used a service less than once in a typical week were excluded
from these analyses

From the results shown in table 2.6 it can be seen that the majority of regular
chatters, Instant Messagers and web surfers felt that safety levels had either
improved or remained the same since the Models were introduced a year earlier. For
all three services, under 12% of online users felt that safety levels had decreased
between January 2003 and January 2004.

Attitudes towards safety tools and advice: A qualitative analysis


Online respondents were asked open-ended questions in order to establish what they
felt were the most and least useful safety tools and advice, and they were also invited
to provide general comments concerning Internet safety issues. The current section
considers the opinions of those participants (n = 170) who used chat rooms or IM
at least once per week, given that these individuals were the most likely to have
knowledge and experience of Internet safety tools and advice.

(i) Perceptions of regular chatters or IM users

Most useful safety tools or advice


Eight qualitative themes were derived from regular chat or IM users’ free text
opinions concerning what they felt to be the most useful safety tools or advice. A
majority (116 of the 170, 68.2% of this sub-sample) chose to respond to this open-
ended question and table 2.7 displays the frequency of the primary themes from their
responses. Themes were derived from fragmenting quotes into conceptual parts and
then coding each fragment for meaning. Codings relating to similar meanings were
then compounded to form sub-categories and then further to form themes.
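
This coding procedure can be illustrated with a minimal sketch in Python. The
code-to-theme mapping below is entirely hypothetical and serves only to show the
mechanics of compounding fragment-level codes into the theme counts reported in
table 2.7; it is not the study's actual coding frame.

    from collections import Counter

    # Hypothetical mapping from fragment-level meaning codes to themes.
    CODE_TO_THEME = {
        "media_campaign": "Educational/awareness and advice",
        "school_teaching": "Educational/awareness and advice",
        "common_sense": "User-related",
        "parental_filter": "Controls/blocks (Parental)",
    }

    def tally_themes(coded_fragments):
        """Compound fragment codes into themes and tally theme frequencies."""
        return Counter(CODE_TO_THEME[code] for code in coded_fragments)

    print(tally_themes(["media_campaign", "common_sense", "school_teaching"]))
    # Counter({'Educational/awareness and advice': 2, 'User-related': 1})
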

Table 2.7
Regular chat/IM users' responses to free text question
regarding most useful safety tools and advice
                                       Frequencies and percentages of
                                       themes per derived category
Educational/awareness and advice       34 (29.3%)
User-related                           28 (24.1%)
Controls/blocks (Parental)             23 (19.8%)
Block/Ignore features (User)           19 (16.4%)
and "Tools" in general
Responsibility/Supervisory             19 (16.4%)
Web related security features          12 (10.3%)
Moderator Factors                      11 (9.5%)
Limiting Access                        3 (2.6%)
N = 116

The most common themes associated with most useful safety tools and advice centred
on education and awareness of Internet safety. This would suggest that regular users
of chat and IM perceive user factors as most important in promoting safe Internet use.
Participants referred frequently to the media in relation to increasing awareness (11
references). For instance:

“The most useful safety tools include advert campaigns increasing awareness
of dangers for both parents, children and general users.”

Participants also cited Internet sites as being a primary source of safety advice (10
references), for example:

“The most useful safety advice is found on homepages such as [Internet


Company X] and [Internet Company XX] as it caters for all Internet users.”

“Information on websites - particularly before entering a chat area.”

Other comments concerned general education for parents and for children,
delivered by parents and by schools (8 references):

“Word of mouth advice from teachers, parents and other responsible adults.”

“Teaching use of the Internet in schools including the dangers and safe use.”

“Making people (children) fully realise the dangers, letting them know what
can happen, let them watch the news so they can see for themselves that it
does happen and talk to them about it to make them fully aware that the same
thing could happen to them.”

Of user-related factors, the most frequently cited were common sense (11
references) and cautionary advice about divulging personal information (17
references):

“Common sense. Don’t give out information about yourself over the Internet
to someone no matter how much you think you ‘know’ them, if you’ve only
got what they’ve told you about themselves to judge them by.”

Specific safety tools such as ignore and block features, parental controls and filters
were also considered to be important. In particular, caregivers were perceived to
have a responsibility for employing these tools on behalf of children (8 references):

“Parents being able to block sites from their children.”

“Parental controls.”

A theme associated with parental controls emerged from comments relating to
supervision and responsibility. These comments were directed mainly towards
parents or caregivers and occasionally teachers (19 references):

“Parents taking responsibility for their children’s access to the Internet.”

“If you let your child use the Internet, make sure YOU know how it works!
Make sure YOU can see what your child is doing.”

Linked with tools, chat room moderation and related factors were cited 11 times as the
most useful form of safety tools or advice. References were also made to web related
security features such as firewalls and anti-virus/anti-spam programmes (12
references). A final, relatively infrequent theme emerged from the data and related to
participant opinions on limiting children’s access (three references). For instance:

“Children should have very limited use because we don’t know who else is
out there.”

“Pupils should be barred access to chat rooms from school computers.”

These references might suggest a lack of understanding of the Internet, in particular a


lack of awareness of the existence of safety features for children’s protection in chat
rooms and an assumption that schools freely allow access to chat rooms and teach
children about Internet safety.

Least useful safety features or advice


Ten themes were derived from regular chat or IM users’ opinions concerning what
they believed to be the least useful safety tools or advice. Seventy participants of the
170 (41.2% of this sub-sample) chose to respond to this open question, far fewer than
those who provided comment on the most useful safety features. Table 2.8 displays
the frequency of themes per derived group.

Table 2.8
Regular chat/IM users' responses to
open-ended question regarding least useful
safety tools and advice
                                       Frequencies and percentages of
                                       themes per derived category
Educational/awareness and advice       21 (30%)
Tools (Web related and general)        18 (25.7%)
Tools (Parental)                       7 (10%)
Social factors                         7 (10%)
Responsibility/Supervisory             4 (5.7%)
Moderator Factors                      3 (4.3%)
All safety tools/advice are useful     3 (4.3%)
Chat rooms                             3 (4.3%)
User-related                           2 (2.9%)
N = 70

As when commenting on the most useful sources of Internet safety advice and tools,
education and awareness were the most frequently mentioned issues. This time,
participants commented on a lack of awareness and education concerning tools that
are readily available. So, although participants believed education and awareness to
be the most important issues impacting on Internet safety, they also believed both to
be generally lacking. This apparent contradiction suggests a concern that, although
educative tools exist or have existed, campaigns may not last long enough, may
assume a level of knowledge that excludes young children, or may not be widely
distributed (for example in schools):

“Lack of awareness of information about some of the tools mentioned in this


questionnaire such as alert features etc.”

“Lack of teaching or understanding.”

There seemed to be particular concern regarding education on safety advice and
tools for children, the suggestion being that the guidance and advice provided so far
would seem neither appealing nor informative to children, with the result that young
people do not absorb it:

“Safety guides may be useful to adults but will almost certainly be ignored by
vulnerable children.”

“Warnings…children would rarely pay attention to them.”

“Web pages going on about safety in chat rooms for children. Children won’t
read them as they are boring.”

There was also a concern that education and advice given is not always clear or
accessible (4 references):

“Safety advice in chat rooms is rarely clear enough.”

“Much safety advice is not accessible.”

There were 25 references to Internet tools as being the least useful form of safety
measures. Web based tools, parental control tools and blocking/filtering/ignore tools
were all considered to be of low utility by some respondents. References to web
related tools included spam-control and filters, warnings and web blocks:

“Filters tend to be too clumsy. They can exclude legitimate material.”

“There is nowhere completely safe on the Internet. No amount of filtering can


remove every problem.”

“Warnings on the websites. If you want to look at the site even if you’re under
age the warnings won’t stop you.”

Similarly, comments about blocking mechanisms included concern that they block
too much, or that they are not effective in keeping suspicious or unknown
individuals away from the user:

“Parental controls. The average child can probably outwit them by the time
they are 12.”

“The parental blocks aren’t very useful because they can’t access many normal
sites needed for school work.”

“Blocking tools on Instant Messengers. People can sign in with a different


name and re-contact you.”

The theme of responsibility and supervision centred on the practical difficulty of
supervising children's Internet use, for example:

“Supervising your child seems to be an impossible task for many parents.”

It was also suggested that the media might be responsible for exaggerating the
dangers, though respondents emphasised that parents were ultimately responsible
for supervising their children:

“The current level of safety scares is utterly ridiculous. It’s up to the parents
to determine a child’s access to the Internet, not website owners.”

General opinions of regular chatters and IM users regarding Internet safety


The final question in the online survey aimed to identify general feelings regarding
Internet safety. Again it was open-ended, designed to elicit as much unbiased
qualitative information as possible. Thirty-eight participants chose to comment on
general issues surrounding Internet safety and four major themes emerged. These
covered: education and awareness (9 references), responsibility and supervisory issues
(7), moderation (5) and media related issues (3). Issues surrounding education
highlighted a belief that parental knowledge needed to be increased:

“Too often parents are not aware of parental blocks and other security
measures since they are poorly advertised.”

“We need to be web wise in the same way we are streetwise i.e. don’t open
the door to strangers. My youngest son uses chat but I have talked to him

about the dangers and pitfalls in the same way as we talk about the dangers in
all areas of life.”
Although parental responsibility for educating and protecting children online clearly
emerged as an important factor, references were also made regarding schools (1), the
Government (1), Internet Service Providers (1) and other authorities (4):

“Maybe safety information on the Internet should be incorporated into ICT


lessons at primary school level.”

“The Government needs to target kids and teenagers. I know some younger
friends and relatives have visited over 18 chat rooms and websites so so
easily.”

“ISPs need a greater understanding of and interest in their chat rooms, if that
isn’t possible they need to delegate it to the right people.”

“The public, overall, need better education about the Internet.”

Comments surrounding moderation were generally positive (5), but a need for
Moderators to undergo Criminal Records Bureau checks was noted:

“after all they work with children.”

Mixed opinions were identified with regard to the media (3). One participant believed
that media attention towards Internet safety was very positive:

“The media is doing the right thing by highlighting the dangers of chat
rooms”,

whereas the other participants who commented on the media all felt that it had a
generally negative effect:

“In my experience I have found that the media “sex-up” the dangers to be
found on the Internet… although paedophiles they may be there, they are also
there in real life, but because of the ignorance that people have about the
Internet it is so much easier to convince them that the majority of Internet
users are perverts and sexual predators.”

Overall, the online sample have suggested that existing tools should be refined, and
that people should use ‘common sense’ when using Internet services. The primary
recommendation however concerns increasing education and awareness of existing
tools and sources of advice. Respondents have suggested that parents in particular
need to be educated about the risks that their children may face and how such risks
may be avoided. A small number of participants recommended that certain sites
should be banned completely:

“We need to find a way to prevent certain types of sites ever existing”.

“I took a stalker to court two years ago… I think there are too many sites
which use electoral roll information and are too easily accessed by would be
abusers. I personally think these sites should be illegal.”

Chapter summary
• 93% of the sample reported that the benefits of using the Internet outweighed
the potential dangers.
• Under 12% of frequent users of all three services felt that safety levels had
decreased between January 2003 and January 2004
• IM was the service least likely to culminate in uncomfortable incidents, whilst
chat was the most likely
• Regular chat and IM users were generally found to be utilising safety tools
recommended by the Models and finding them helpful
• The majority of regular chatters felt that safety tools and advice were clearer,
more widespread and more accessible compared with 2003, whilst over half of
regular IM users did not feel this was the case for IM-specific items

• Almost three quarters of participants felt that, compared with January 2003,
they were receiving more spam, and a majority reported seeing more pop-ups
containing adult material

Key points for future action

• The mass media were considered to be the biggest influence on child and
parent Internet safety
• Almost one third of the online sample reported that their primary source of
information on Internet safety was the media
• Education for both parents and children was felt to be the most important
safety measure available. Equally, a lack of education was felt to be the
primary cause of child Internet safety problems

Chapter 3: User perspectives and experiences (ii): young people, parents and
teachers

This chapter describes results obtained from the administration of three traditional style
questionnaires aimed at primary and secondary school children, their parents and
teachers. Similar in content to the online survey, the questionnaires were designed to
measure knowledge and awareness of Internet related safety features in chat room, Instant
Messenger and web based environments. The questionnaires were also designed to
assemble a selection of qualitative information regarding feelings about the Internet and
associated safety issues. The current chapter first describes the quantitative data received
from the three groups of participants, followed by an evaluation of the qualitative data.
This chapter will also discuss any noteworthy similarities and differences between the
regular (online) Internet users discussed in Chapter 2 and the present samples.

The sample
The final child sample numbered 305. Children’s ages ranged from nine to 15 years
(mean age 10.7 years, SD 1.6). Roughly half (52.5%) were female. The largest proportion
of parents completed the questionnaire referring to a female child (59.2%) aged between
nine and 11 years (61.6%). Parents were asked to refer to one child whilst responding to
the questionnaire, namely the child who had brought the questionnaire home from school.
Where both parent and child responses were available a matched-groups data set was
created, but as more parents of female children returned questionnaires than parents of
male children there was a higher proportion of female children in this sub-group
compared with the proportion of female children completing the questionnaire (59.2% to
52.5% respectively). The majority of parents who responded also reported that they had
Internet access at home (90.8%). This factor, more than child gender, is likely to have
motivated parents to respond. Some of the questions in the parents’ questionnaire
matched questions included in the children’s questionnaire, and some questions were also
shared with the teacher’s questionnaire. Given that more children (N = 305) completed
and returned useable questionnaires than did their parents (N = 174), additional data files
were created and these contained only the matched responses from 165 children and their
parents. Where responses between children and their parents are compared, these data
files are used.
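
As an illustrative aside, the construction of such a matched-groups file can be
sketched in a few lines of Python with pandas. The identifiers and column names
here are hypothetical; the sketch simply shows the inner-join logic that keeps
only children whose parent also returned a questionnaire.

    import pandas as pd

    # Hypothetical questionnaire extracts keyed by an anonymised child ID.
    children = pd.DataFrame({"child_id": [1, 2, 3],
                             "uses_chat": ["never", "once", "daily"]})
    parents = pd.DataFrame({"child_id": [1, 3],
                            "est_uses_chat": ["never", "once"]})

    # An inner join keeps only matched parent-child pairs, mirroring the
    # matched-groups data set (N = 165) used for the comparisons below.
    matched = children.merge(parents, on="child_id", how="inner")
    print(len(matched))  # 2
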

Thirty-three teachers from three different primary schools and one secondary school
based in Leicester, Leicestershire or Rutland completed a questionnaire. Fifteen teachers
(45.5%) taught at secondary level and the remainder taught at primary level.

Connectivity details
The majority of the total group of 305 children (75.1%, n = 229) said that they had home
Internet services and most (81.9%, n = 136) said that this had been the case for two or
more years. When the matched group children and parents were compared, this data
corresponded to a high degree – 150 parents said that they had home Internet access,
compared to 144 of their children and 83.7% of matched group parents said they had
been connected for at least two years. Thus, it would be expected that the parent sample
would be reasonably knowledgeable in regard to the Internet, and would demonstrate
some awareness of Internet safety tools.
As was the case with online participants, BT was the most commonly cited
Internet Service Provider (25.8% of the sample had a BT home Internet connection),
followed by AOL (17.6%), Freeserve (17%) and NTL (11.9%).

Child Internet use


In 87.7% of cases (n = 143) parents in the matched group stated that in a typical week,
their child never used chat services and 3.7% said that they were unsure of whether their
child accessed chat rooms. Almost two thirds (64.2%) said that their child never used
Instant Messaging in a typical week (4.3% were not sure). A minority of parents (13%)
said that their child never viewed web pages in a typical week. With regards to web
pages, the largest overall proportion of parents, 47.5%, said that their child looked at
Internet sites once or twice during a typical week (2.5% were unsure). It may be
concluded that the vast majority of parents felt that they had a good idea of how often
their child used chat rooms, IM and web pages. The matched groups’ (N = 165) data were
employed in these calculations and in table 3.1. below in order to provide a comparison
between parental and child responses to questions regarding child Internet usage.
Teachers' estimates of child Internet use, based on the ‘average child’ who attended their
school, are also included in table 3.1.

Table 3.1.
Comparison of parents', children's and teachers' estimates of children's web use in a
typical week
Parents' responses     Don't know   Never   Once    3-5 times   Daily
Chat services          3.7%         87.7%   6.1%    0.6%        1.8%
Instant Messaging      4.3%         64.2%   16%     8.6%        6.8%
Web sites              2.5%         13%     47.5%   25.3%       11.7%
Children's responses   Don't know   Never   Once    3-5 times   Daily
Chat services          N/a          75.6%   15.2%   3%          6.1%
Instant Messaging      N/a          57%     22.2%   10.8%       10.1%
Web sites              N/a          10.4%   43.9%   24.4%       21.3%
Teachers' responses    Don't know   Never   Once    3-5 times   Daily
Chat services          N/a          30%     33%     23.3%       13.3%
Instant Messaging      N/a          27.6%   34.5%   17.2%       20.7%
Web sites              N/a          0       22.6%   35.5%       41.9%
Children N = 165, Parents N = 165, Teachers N = 33

The matched group data shows that children’s estimates of their usage of these
services tended to be greater than estimates obtained from their parents. Perhaps
children are using these services without their parents’ knowledge, for instance at a
friend’s home or at school. Alternatively, some children may be over-estimating their
own usage of these services or simply responding positively due to response bias.
Almost one in ten of all parents (9%) said that their child had never used the
Internet. One third of these parents also stated that they did not have a home Internet
connection. Almost two thirds (65.3%) said that their child had been using the Internet
for more than a year. Of those parents who said their child had used the Internet,
33.5% thought their child to be very proficient at using the Internet, 31.1% quite
proficient, 26.2% average, and 9.1% did not think their child was very proficient.
Thus, the majority of parents believed their child to possess above average Internet
capabilities.
Finally, in contrast to parents, teachers tended to over-estimate the degree and
extent of children’s use of chat, IM and web sites. At least a proportion of this over-
estimation could possibly be explained by the nature of the questions. Children were

asked to answer about themselves, and parents about their own children, but teachers
were asked to think of the average child who attended their school.

Child discomfort
A large majority of parents in the matched group (86.5%) said that their child had
never felt uncomfortable, threatened or vulnerable whilst using chat services.
Presumably, much of this finding may be explained by most parents’ belief that their
child never used chat rooms. Only four parents (2.8%) responded positively to this
question, and 10.6% more were unsure. Of the four who said their child had had a
negative experience whilst using chat services, one parent said that this only happened
once and the other three were not sure of the frequency. Only one parent (0.8%) said
that their child had felt uncomfortable, threatened or vulnerable whilst using IM, and
this parent stated that this had happened “very often”. More parents (seven, or 4.6%)
said that their child had felt uncomfortable, threatened or vulnerable whilst looking at
web sites. Three felt that this had occurred on one occasion and four that this had
happened “a few times”.
Most parents (81%) believed that if their child felt uncomfortable, threatened
or vulnerable whilst using the Internet, then the child would let them know. Although
7.2% said their child would not admit to a negative online experience, 11.8% more
were unsure as to whether their child would tell them. Again, the table below displays
differences in the parental and children’s estimates, and it is evident that children
report higher levels of negative online experiences than are recorded by parents on
their behalf.

Table 3.2.
Comparison of parents' and children's frequency reports of children feeling
uncomfortable, threatened or vulnerable whilst online
Parents' responses     Don't know   Yes     No
Chat services          10.6%        2.8%    86.5%
Instant Messaging      11.5%        0.8%    87.8%
Web sites              6.6%         4.6%    88.8%
Children's responses   Don't know   Yes     No
Chat services          N/a          7.9%    92.1%
Instant Messaging      N/a          4.2%    95.8%
Web sites              N/a          12.1%   87.9%
Children N = 165, Parents N = 165

All 305 children’s self reports concerning the frequency with which they felt
threatened, uncomfortable or vulnerable whilst using Internet services are
summarised in table 3.3.

Table 3.3.
Children’s self-reports of feeling uncomfortable, threatened or vulnerable whilst
online
Never Once A few times Regularly Very often
Chat services 91.4% 5.1% 2.1% 0.3% 1%
Instant Messaging 93.9% 4.8% 1.3% 0 0
Web sites 87.4% 7.3% 4.7% 0.3% 0.3%
Children N = 305

From table 3.3 it is evident that where children have been made to feel uncomfortable
when using Internet services, the majority had only felt this way on one occasion. It is
not known whether a single occasion of discomfort led these children to avoid future
discomfort - for instance by avoiding services or taking additional safety precautions -
or whether abuse on such services is infrequent.
Three of the 33 teachers stated that children had come to them and reported
feeling uncomfortable, threatened or vulnerable whilst using chat rooms. One of these
teachers reported that such reports were a weekly occurrence. Three teachers also
reported that children had mentioned feeling threatened, uncomfortable or vulnerable
whilst using IM, and three reported the same for web pages. The emergent overall
picture therefore was that the large majority of children sampled did not feel
uncomfortable, threatened or vulnerable while using chat services, IM or web sites.
In addition, where they did feel threatened and uncomfortable, such experiences were
unusual rather than commonplace.

Specific incidents
Matched group parents were asked whether their child had ever communicated via the
Internet with a person who was not honest with him or her. The largest proportion
(59.5%) stated that their child did not communicate with others via the Internet. A
similar proportion (57%) of matched groups children said that they did not use IM,
whilst 75.6% said that they did not use chat rooms. Four parents (2.7%) said that their
child had spoken on the Internet to an individual whom they felt had not been honest
with him or her, and 10.8% more said that they did not know if their child had been
deceived. Almost a fifth of the matched group children, however (18.8%), said that
someone had told them lies (e.g. about their identity) over the Internet. It is not known
whether the other individual was an adult or another child. Of the children who
responded positively, 57.1% said that they had only been lied to once, 35.7% said that
this had occurred a few times, 3.5% that it occurred regularly, and 3.5% that they had
been lied to very often.
The largest proportion of children who had been lied to (66.7%) said that they
had told an adult about their experience(s). Of those who informed an adult, the
largest proportion told a parent, and most of these children found telling a parent
helpful. Table 3.4. summarises the findings.

Table 3.4.
Frequencies of children reporting being lied to, as a
function of whom they informed and how helpful
they found the response
            Told   If told, found helpful
Parent      17     12
Friend      7      7
ISP         3      3
Moderator   2      0
Teacher     2      2
Police      1      1
Other       3      2
N = 31

In all bar one case, ‘other’ referred to pets. It must again be noted that 81% of parents
(see previous section) believed that if their child was made to feel threatened,
uncomfortable or vulnerable whilst using the Internet, then the child would tell them.
Of the children who said they had been lied to on the Internet, 66.7% reported telling
an adult, most commonly a parent.
In all, just four parents believed that their child had been lied to by another
individual via the Internet, whilst 31 children said they were worried that someone
had lied to them. Thus, the parent and child matched group figures clearly diverge,
and there are a number of possible explanations for this. The two most likely
involve accuracy of reporting and definitions. Either parents or children may not have
reported accurately, or, parents and children’s definitions of being lied to via the
Internet may diverge considerably. The second explanation is the more probable:
children may have counted lies from a friend on their IM buddy list, whilst parents
would most likely have restricted their responses to suspected ‘grooming’ episodes. In
those four cases where parents believed another person had been dishonest with their
child, two said that this occurred once and two more that it had occurred a few times.
No parent stated that this had happened regularly or very often. Two of the parents
reported their concerns and two did not. Two reported to a teacher and said that the
teacher was helpful, and one reported to the police and found the police helpful. One
reported to a chat room Moderator and found this experience to be helpful, and one to
an Internet company, finding the ISP to be unhelpful. All talked to a friend or partner,
and found positive support.
A fifth of teachers (21.2%, n = 7) said that they had in the past been concerned
about the intentions of an individual communicating over the Internet with a child. In
two cases, teachers reported their suspicions, one to the relevant ISP and one to a
parent. Both teachers found reporting to these sources to be helpful.

Knowledge and uptake of safety tools and features


All parents (N = 174) were asked whether they were aware of six safety features
recommended by the Models that may aid their child whilst using the Internet: chat
room and IM ignore or blocking facilities, chat room Moderators, warning advice in
relation to public profiles, alert features, advice on handling abusive chatters and links
to online Internet safety guides. The most familiar feature was blocking facilities,

though still only 48.8% of parents said they were aware of tools that allowed their
child to block unwanted chatters in a chat room or IM environment, and 32% more
were unsure. The next most familiar feature was chat room Moderators, with 28.6%
of parents stating that they were aware of Moderators as facilitators of safety, with
45.6% unsure. Levels of familiarity with the four other safety features were as
follows: 25.9% of parents said they were familiar with links to online safety guides
(44.9% unsure); 22.6% were familiar with warning advice on public profiles (49.7%
unsure); 20.3% of parents were familiar with alert features (51.9% unsure); 15.2%
were familiar with the location and content of advice on how to handle abusive
chatters (52.5% unsure).
The following table illustrates data collected with the aim of identifying
whether parents were familiar with their child’s use of these safety features, whether
their child had found them useful, and also whether parents had helped their child
with the operation of these safety features. Where a parent had assisted their child,
they were asked how useful they found each safety feature to be. As with all parent-
child comparisons, only the parent and child matched groups data were included in
these tabulated calculations.

Table 3.5.

Comparison of parents', children's and teachers' use of six safety tools and features

Parents' responses       Believe      Believe child    Parent used   Parent found
                         child used   found helpful,   to help       helpful, if
                                      if used          child         used
Ignore or block          13.9%        81%              12.7%         85%
facilities
Online safety guides     4.8%         71.4%            4.8%          80%
Chat room Moderators     3.6%         50%              3%            40%
Warning advice on        3.6%         75%              6%            80%
public profiles
Alert features           3%           57.2%            6%            66.7%
Advice on handling       3%           60%              5.4%          75%
abusive chatters

Children's responses     Child used   Child found helpful, if used
Ignore or block          30.3%        98%
facilities
Online safety guides     8.5%         92.9%
Chat room Moderators     8.4%         78.6%
Warning advice on        20%          84.4%
public profiles
Alert features           16.9%        73.9%
Advice on handling       9.6%         81.3%
abusive chatters

Teachers' responses      Believe average   Believe average child
                         child used        found helpful, if used
Ignore or block          65%               100%
facilities
Online safety guides     40%               77.8%
Chat room Moderators     61.9%             77.8%
Warning advice on        69.7%             88.9%
public profiles
Alert features           65%               77.8%
Advice on handling       35%               77.8%
abusive chatters
Children N = 165, Parents N = 165, Teachers N = 33

The table shows that parents under-estimated the extent to which their children had
used Internet safety tools. Also, parents rarely helped their child to use safety features,
despite 90.9% of the matched group parents stating that they had access to the Internet
at home. The minority of parents who did help their child to become familiar with
safety tools almost always had home Internet access – just one did not. Where parents
did help their children learn about safety features, most thought that the features were
helpful. Only chat room Moderators were not viewed as helpful by a majority of
parents. Children were even more likely than their parents to view all six safety
features asked about as helpful. Ignore and block tools were the most commonly used
by both parents and their children. Almost a third of children had utilised ignore or
block tools, and a fifth had followed warning advice when creating their public
profiles. These results indicate a need for parents to receive, or be made aware of,
greater levels of information about Internet safety tools and about how they can use
these tools to educate and support their child. However, it should also be noted first
that children seem to know about these safety tools and second that they are not only
using them but generally also finding them helpful. Table 3.1. above shows that
teachers over-estimated children's Internet use. This may explain why teachers also
over-estimated the typical child's use of safety tools.
Nearly half of all 305 children said they had seen safety tools whilst using the
Internet. The results for the six safety tools/features are listed in Table 3.6.:

Table 3.6.
Children’s self reports on whether they had seen various safety tools and features
Yes No
Moderators 15.7% 84.3%
Warning advice on public profiles 21.6% 78.4%
Ignore or block features 30.8% 69.2%
Alert features 17% 83%
Advice on handling abusive chatters 14.8% 85.2%
Links to online safety guides 14.1% 85.9%
N = 305

Again, ignore and block features were the most commonly cited safety tool or feature.
Warning advice on creating and managing public profiles was the next most
commonly viewed safety feature, and Moderators were the least commonly viewed.
This is most likely explained by the fact that not all of the children had used chat
rooms. Children were also asked how far they understood these safety tools and
features (regardless of whether or not they had seen them). Table 3.7. displays the
results:

Table 3.7.
Children’s self reports on how far they understood various safety tools and features
                     Don't        Understand   Reasonable      Fully
                     understand   a little     understanding   understand
Moderators           80.4%        9.8%         5.6%            4.3%
Warning advice on    76.1%        10.2%        9.2%            4.6%
public profiles
Ignore or block      68.8%        8.2%         9.8%            13.1%
features
Alert features       89.9%        4.6%         3.6%            2%
Advice on handling   83.6%        5.2%         6.9%            4.3%
abusive chatters
Links to online      83.6%        7.9%         3.6%            4.9%
safety guides
N = 305

Correlations were performed to test whether frequency of use of a safety tool or
feature was related to understanding of that tool or feature. As would be expected,
significant positive relationships were identified for all tools and features. Table 3.8
provides the details.

Table 3.8.
Statistical associations between children's self reports on how
frequently they had used and understood various safety tools
and features                                              R value
Moderators (N = 102)                                      .466**
Warning advice on public profiles (N = 113)               .571**
Ignore or block features (N = 131)                        .717**
Alert features (N = 288)                                  .843**
Advice on handling abusive chatters (N = 97)              .481**
** = p < .001
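
As a worked illustration of the statistic reported in table 3.8, the following
minimal Python sketch computes a Pearson correlation with scipy. The ratings
below are invented for demonstration (the study's raw responses are not
reproduced here); they merely show how paired frequency-of-use and understanding
scores yield an r value.

    from scipy.stats import pearsonr

    # Hypothetical ordinal ratings: frequency of use (0 = never ... 3 = often)
    # and understanding (0 = don't understand ... 3 = fully understand).
    use = [0, 0, 1, 1, 2, 2, 3, 3]
    understanding = [0, 1, 1, 2, 2, 3, 2, 3]

    r, p = pearsonr(use, understanding)
    print(f"r = {r:.3f}, p = {p:.4f}")  # a strong positive association
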

In light of these significant correlations it is unsurprising that ignore and block
features had the highest levels of understanding, given that they appeared to be the
most commonly used tool. Alert features were the least well-understood safety
feature and also the least commonly used tool.
Teachers were asked about their awareness of the six Internet safety tools and
features and table 3.9. provides a breakdown of the results.

Table 3.9.
Teachers' knowledge and understanding of the six safety tools/features
                                Aware of   Fully        Partly       Don't
                                           understand   understand   understand
Moderators                      59.4%      9.1%         39.4%        51.5%
Warning advice on public        28.1%      0            15.2%        84.8%
profiles
Ignore or block features        78.1%      15.6%        56.3%        28.1%
Alert features                  46.9%      0            46.9%        53.1%
Advice on handling abusive      25%        0            25%          75%
chatters
Links to online safety guides   37.5%      15.2%        27.3%        57.5%
N = 33

The levels of understanding of the six safety tools and features among the sample of
33 teachers were generally low. This is particularly interesting considering the
feeling, expressed in the qualitative section of the online questionnaire, that education
is the most important issue impacting on child Internet safety.

Perceptions of knowledge and safety


All 174 parents and the 33 teachers were asked whether they thought that chat rooms,
IM and other Internet services were safer for children in January 2004 as compared
with January 2003. The largest proportion of parents (46.7%) said that they were
unsure, followed by 26% who said these services were no more or less safe than they
had been a year before. Over a sixth (16.7%) believed these services to now be a little
safer, followed by 5.3% who felt them to be a little less safe. The smallest proportions
(3.3% and 2%) believed these services to be much more and much less safe
respectively. Thus, only around 7% of parents felt that the Internet had become more
dangerous for children during the year in which the Models were published. Like
parents, the largest proportion of teachers (45.2%) were also unsure as to whether the
Internet was safer for children in January 2004. One third (32.3%) believed the
Internet to be a little safer, and 22.6% had observed no change. No teachers believed
the Internet to have become less safe during this time period.
Parents, children and teachers were all asked if they considered themselves to
be more knowledgeable about Internet safety than they had been one year before. The
largest group of parents (49.7%) said that their level of knowledge had not altered
during the past year, but 32.9% believed themselves to be a little more
knowledgeable. Twelve parents (7.7%) said that they were much more knowledgeable
than they were a year ago, while 9.7% said that they now knew less than they had
previously. Presumably, this latter sub-group of parents felt unable to remain up to
date with Internet safety issues and developments. The matched groups data in the
table below suggests parents’ perceptions of their own changing levels of knowledge
largely matched with their judgments of their child’s growth of awareness over the
previous year. Children’s judgments of their own knowledge are also included,
revealing that children believe their own awareness to have increased at a greater rate
than do their parents. The majority of teachers (68.8%) said that their knowledge of
Internet safety neither increased nor decreased over the past year, while 28.1% felt
that their knowledge had increased a little. No teachers believed that children’s
awareness had decreased, but only 53.8% felt that children knew more about Internet
safety than they had a year before.

Table 3.10.
Comparison of parents', teachers' and children's judgements of whether they know
more or less about Internet safety than they did one year previously
                      Parents on   Parents on       Children on   Teachers on   Teachers
                      themselves   their children   themselves    themselves    on children
Know much more        7.7%         13.4%            18.6%         3.1%          11.5%
Know a little more    32.9%        34.2%            43.4%         28.1%         42.3%
Know about the same   49.7%        47.7%            26.4%         68.8%         46.2%
Know a little less    4.5%         1.3%             8.5%          0             0
Know much less        5.2%         3.4%             3.1%          0             0
Children N = 165, Parents N = 165, Teachers N = 33

All parties were asked from where they received the most information concerning
Internet safety, and both the parent and teacher samples were also asked to estimate
where they felt children obtained most of their information. The table below
presents these results:

Table 3.11.
Comparison of parents', children's and teachers' stated sources of knowledge
regarding Internet safety
                 Parents on   Parents on       Children on   Teachers on   Teachers on
                 themselves   their children   themselves    themselves    children
Media            41.5%        16.2%            25.6%         61.3%         37.5%
ISP              15.6%        2.9%             6%            3.2%          0
Friends          14.8%        13.2%            9.8%          3.2%          28.1%
Family/partner   11.1%        29.2%            33.1%         9.7%          21.9%
Internet         6.7%         1.5%             5.3%          3.2%          0
School           6.7%         34.6%            18%           9.7%          12.5%
Other            2.2%         0                2.3%          9.7%          0
Don't know       1.5%         2.2%             0             0             0
Children N = 165, Parents N = 165, Teachers N = 33

Table 3.11 would suggest that parents may be over-estimating the degree to which their children learn about Internet safety at school, while under-estimating the impact of the media. The results also suggest that teachers over-estimate the extent to which children’s information is gained from the media, and that there is a clear assumption that children get much of their information from their friends, when in fact children reported this only 9.8% of the time. Interestingly, teachers under-estimated how much information children report getting from school and from their parents.
Teachers were asked if they had received any form of training concerning child safety on the Internet. A minority (13.3%) said that they had. Two said this took the form of a short seminar and two more said their training had taken place via distance learning. The lack of formal training with regard to Internet use and safety is perhaps reflected in the significant over- and under-estimations provided by teachers when asked to assess aspects of Internet safety with regards to themselves and the average child. Overall, it would seem that teacher training with regards to the Internet is largely absent from the educational system, despite its evident importance.

Questionnaire data: A qualitative analysis


In the last section of all three questionnaires, parents, children and teachers were
invited to make their own comments or statements on aspects of the Internet and its

safety for children. This section will discuss and provide examples of the major
themes that emerged from this data.
All parents and teachers were asked whether there were any additional safety
measures relating to chat rooms, IM, and web sites that they would like to see
implemented or addressed. Twenty-five parents (14.4%) and three teachers (9%)
responded via free text. Two of the teachers talked about banning chat rooms, for
instance:

“Chat rooms should be banned, children should have safe and limited access to
the Internet and websites.”

One more demonstrated a lack of knowledge regarding Internet safety features:

“I believe the Local Education Authority screen out dodgy websites before the
children can access the web.”

Parents
From the parents’ responses, five themes were identified and these are summarised in
Table 3.12. below:

Table 3.12.
Themes derived from parents’ comments on
safety issues they would like to see addressed

                                      Frequencies and percentages of themes
                                      per derived category (N = 25)
Internet Service Provider-related     13 (52%)
Ban/Limit                             7 (28%)
School-related                        6 (24%)
Government                            4 (16%)
Supervisory                           4 (16%)

Each theme will now be briefly discussed, beginning with the most dominant theme: issues surrounding ISPs and the provision of security features and information in all Internet services. Most concerns were aimed at chat rooms or forums, with reference to registration issues and general safety features (five references):

“ I would like to see more safety features re: chat rooms in particular to stop
paedophiles.”

“Chat room users should have to register to use this service and provide proof
of who they are on registration.”

One parent felt that ISPs should be responsible for policing their own services:

“More legislation forcing Internet Service Providers to self-police services.”

Another felt that ISPs did not provide sufficient printed information:

“I don’t feel that information is readily available to parents… Service Providers could produce some do’s and don’ts.”

Throughout the current section, it is clear that many parents are requesting measures that are already in place. This finding was mirrored in the results from the qualitative section of the online questionnaire. That is, regular users believed that education was the most beneficial tool for increasing child Internet safety, but that it was only useful if parents and others actually accessed educative sources. Parents also voiced concerns about spam and pop-ups (three references), emphasising that too much unsolicited information appeared on the screens of their home computers. Concern was also expressed with regard to spam containing sexual content. The reference below details this, but again demonstrates that parents are not all IT proficient and that there is an assumption that lack of knowledge is inter-generational:

“SPAM or pop-ups are a menace and as a parent I find the ease of which they appear very worrying because they are sex-related. What defences can the suppliers of computers offer to whole generations of very basic IT users? A PC in our children’s bedroom is more of a concern than a TV. How bad is that?”

The second theme identified from parents’ views on child Internet safety concerned banning children from using the Internet or restricting their usage. Seven parents expressed the view that certain web based services should either be banned or made subject to compulsory limitations. These views ranged from comments relating to pornographic sites and adult material to more general prohibitive views, for instance:

“All pornography should be banned. Any information which could be used by extremist or unstable individuals should be banned… companies which …store such information should be prosecuted.”

“Children should be discouraged from using the Internet.”

“Abolish chat rooms - there will always be a way of breaking into them. They
will never be safe.”

The third parental theme covers education via schools. Although a large proportion of parents (34.6%) stated that they thought most of their child’s knowledge of Internet safety came via school, six references were made regarding schools’ involvement in the educative process. Parents seem to be of the opinion that children will listen to their teachers but not to their parents (four references):

“More talks within schools, children tend to listen to others rather than their
parents.”

“Probably the issue should be addressed more in schools. Many children pay
attention to what a teacher tells them than what a parent says! Internet safety
could be incorporated into IT lessons from an early age so that children grow
up aware of the dangers of surfing the Net and using chat rooms.”

“ I think it would be really useful to have some sort of guide/leaflet available
through schools. I would like to know more about these issues.”

Four more parents directed their comments towards perceived Government responsibility. One commented:

“The Government should persuade all parents to exercise sensible supervision.”

Another parent asked for Government help with regulation:

“Uniform system of regulation of chat rooms, web pages and web sites and
sites to be vetted by organisation/body who is responsible/in charge and able
to withdraw site if it is not suitable.”

The final parental theme involved parental supervision. That is, four parents emphasised that they supervise their children when they use the Internet and that they ensure their children use the Internet solely for educational purposes:

“My son only uses the Internet when he needs references for homework. We
always supervise him.”

Children
All participating children were asked what safety advice they would offer to other children who use the Internet. Of the 305 children who took part in the study, almost one third (N = 101, 33.1%) chose to respond to this question and, generally speaking, these children appeared to be knowledgeable about Internet safety issues. Four themes were apparent, and these are displayed in Table 3.13.

Table 3.13.
Themes derived from children’s comments on how to
keep safe online

                         Frequencies and percentages of themes
                         per derived category (N = 101)
Be careful               85 (84.2%)
Don’t use chat rooms     6 (5.9%)
Be supervised            4 (4%)
Avoid strangers          3 (3%)

The overwhelming theme of “be careful” comprised 84.2% of all comments made. Thus, the majority of children who responded to this question gave cautionary advice to their peers with regard to using the Internet, and demonstrated that children are aware that the Internet can be a dangerous place. This advice included the understanding that they should not give out any of their details online (13 references):

“I would say that if you use chat rooms you should keep it between people you
know and you should never give your name age or address and you should
always use safety panels and should tell someone if you think they are lying.”

“I would say that the main piece of advice is to never give out any information
about yourself to anybody on the Internet. People aren’t always who they say
they are so you shouldn’t tell them anything.”

There were also references to never meeting up with any individual first contacted via the Internet (nine references):

“Don’t meet them unless you really know who they are.”

“Don’t meet people you met on a chat room without a grown-up.”

One child warned of what might happen should someone meet or make a “date” with
a stranger that they had met on the Internet:

“You mustn’t make blind dates. You never know who you’re dealing with. You think you know someone but you could wake up the next morning in a shack in Japan missing a kidney. These people are called Internet felons…[they] say they’re someone that they’re not.”

The remaining references within this theme included telling or asking parents (or a
suitable adult) about anything threatening that has happened when using the Internet,
references to exercising caution if talking to strangers and advice not to enter adult-
related sites.

“If somebody is abusing you via the Internet, you should alert the Moderator of
the site or service as they can be really helpful.”

“Don’t go onto rude websites and don’t look at rude pictures.”

Within the second child theme, six children expressed the view that young children
should not use chat rooms. Those who explained why referred to strangers not telling
the truth:

“Do not use chat rooms for there is a chance you are talking to people who are
lying.”

One child went so far as to suggest closing all chat rooms:

“Don’t use chat rooms or shut them down.”

With regard to the third child theme, four children emphasised a need to be supervised
whilst using the Internet:

“Be sensible. Always ask for your parents’ supervision.”

“ I would tell children to have a parent near them while using the Internet.”

The final theme focussed on the importance of avoiding strangers. Three children
talked about the possible dangers for children of interacting with people they do not
know.

“Don’t talk to anyone you don’t know!”

“Always be in your guard, there might be somebody trying to trick you out.”

In all, participating children demonstrated a clear awareness of the potential hazards of using the Internet. Interestingly, the majority of the young people’s comments identified risks associated with Internet use, advised caution and stressed the importance of supervision. Few children mentioned safety tools in this section, perhaps because children see these features as less helpful than effective supervision and common sense, or perhaps because of low awareness or understanding of such tools. Very few called for bans of Internet services and many mentioned the positive aspects of the Internet:

“Be careful what you say and do. But the Internet can be great if you use it in
the right way.”

Chapter summary
• Parents tended to make lower estimates of their children’s usage of Internet
services than did their children
• Teachers tended to make higher estimates of children’s usage of the Internet
than did children
• A majority of parents believed their children to possess higher than average
Internet skills
• Though there were self-reported instances of children being made to feel
threatened, uncomfortable or vulnerable while using Internet services, most
children had not had such experiences. For those young people that reported
such incidents, these were not usually regular occurrences
• A fifth of teachers reported they had been concerned about the intentions of an
individual communicating with children over the Internet
• Children used safety tools and features more frequently than their parents estimated, and the vast majority of young people viewed the tools positively

• Ignore and block features were the most familiar safety tool for parents and
children
• Around 7% of parents and zero teachers felt that the Internet had become more
dangerous for children during 2003
• Almost half of parents, and the majority of teachers, felt they did not know any more about the Internet in 2004 compared with 2003. The largest proportion of children felt they knew a little more
• Children demonstrated sound safety knowledge, advising that other children
should enjoy using the Internet but exercise caution

Key points for future action


• Parents gleaned most of their safety information from the media and believed
their children to obtain theirs from school. In contrast, children reported receiving
most of their knowledge from parents and the media
• Parents did not possess high levels of Internet safety knowledge and rarely helped
their children use web safety tools and features
• Parents requested safety tools and features that were already commonplace,
underlining a need for education

Chapter 4: Internet Service Provider Senior Managers’ perspectives

The sample
In all, 15 Senior Managers from different Internet companies were interviewed. Where
possible, Managers with primary responsibilities for Internet safety within their
company’s sites were recruited to ensure a more knowledgeable sample and to
provide responses from people in similar positions across the different businesses. A
list of job titles of the respondents is given in table 4.1 (appendix) and the average
length of time respondents occupied this position was 2.9 years (SD = 2.7 years).
The Managers represented five international companies and 10 UK-only businesses. The sample included nine connectivity providers with an average UK user base of 1.34 million subscribers (SD = 1.57 million). Four of these ISPs also provided chat, as did an additional five companies (three of which were chat/community based sites only). Six of the companies offered IM-type services; however, two of these Managers did not wish to comment on their IM clients, for reasons that will not be stated here. Eleven of the companies provided users with their own Internet space and
all but one organisation offered other web based services. In all, the sample
represented large, medium and small Internet companies offering a range of online
communication and Internet services to which the Task Force Guidelines would
apply.
This chapter will first explore qualitative data on the Managers’ perceptions of the risks of the Internet, as well as on how responsible various parties should be for protecting children online. Discussion will then turn to the Models themselves: Guideline dissemination will be explored, along with both qualitative and quantitative data on implementation of the Models, to gain an insight into the uptake of the recommendations against baseline concordance. This section will also explore the effects of the Guidelines on business. Methods for preventing attrition of safety practices and features will then be discussed, before sections on the Managers’ opinions of the most and least useful recommendations and on how they perceived the consequences of the Models with regards to Internet safety. Opinions of the Model facilitation process will be given, followed by perceptions of the balance the Guidelines strike between user and industry needs, before final sections that identify further information the Managers would like, as well as their suggestions for future Guidelines.

Risks to young users

Perceptions of how dangerous the Internet is


Of the 15 Managers, eight responded with a definite ‘yes’ when asked ‘Do you
consider the Internet to be a dangerous place for users under the age of 16?’ A further
two of the interviewees did consider there to be risks related to Internet use, but made
it clear they also believed the advantages outweighed the risks:

“It’s dangerous but very informative…The Internet by itself, yes, as an open source of information can be dangerous to young users but at the same time it is such a great source of information that dealt with correctly it’s far from dangerous.”

With regards to ‘correct use’ of the Internet, the above quote relates to views expressed by two further respondents, who believed that the risks associated with the Internet can be avoided if people are taught how to use it properly:

“I believe it’s a failure through education - that people aren’t informed enough
to make an educated decision about what is good and what isn’t good.”

This feeling reflects the opinions of many online users (see Chapter 2). These two
respondents were part of a group of five interviewees who believed either that the
Internet was not a dangerous place for young users (three cases) or that the Internet
was no more dangerous than the ‘outside world’ (two instances):

“The high street is a dangerous place for young children. In the same way that
the high street is dangerous I would say the Internet is dangerous as well.”
(How do you feel the Internet can be dangerous?) “Well how is a library
dangerous? Because there may be paedophiles in libraries, or the local
swimming pool – are they dangerous places?”

Specific risks
When asked to identify reasons why they felt the Internet could pose a risk to young
people the most commonly cited dangers involved exposure to inappropriate content
or information (five instances) and the ability for contact to be made with ‘dangerous’
individuals (four cases):

“Things like chat rooms etc. You can easily lead people from there as we all
know. I think that’s probably the main aspect.”

“… [there are risks] in terms of young people being preyed upon either by
fraudulent users or by, in incredibly rare events I suppose, by paedophiles.
Obviously though that’s the one that gets the most coverage. But most of all I
think young people happening across inappropriate material is the greatest
threat.”

A risk not unrelated to inappropriate content is that misleading information may be available on the Internet:

“I think there’s a lot of information out there and what I do think is worrying,
slightly more, is the fact that they [young people] think things are bona fide
sites and in actual fact they are not necessarily so, so they could be getting
slightly skewed information. It could be that [the information is] from
something like the Ku Klux Klan. So I think there’s a problem with ‘bona
fide’ sites.”

As stated above, access to inappropriate content or information was cited as a threat to child safety by five of the respondents. Other specific threats to safe child use of the Internet, as perceived by the ISP Managers, included risks associated with giving out personal details (one case); two more interviewees mentioned that the Internet can serve potential abusers as a means of communicating:

“Yes [the Internet is a dangerous place for users under the age of 16] because of the accessibility and the ability to bypass certain mechanisms; to glean information from like-minded people. It’s a breeding ground for those type of things.”

Finally, two respondents who believed the Internet to be dangerous for young people stated that children place themselves at risk through their naivety or rebellious nature. This view is not unrelated to the opinions, given above by three respondents, that the dangers of the Internet often arise from a lack of education or improper use.

Perceived locus of responsibility for Internet Safety

Respondents were asked four questions seeking their views on ISP obligations and
responsibilities for Internet safety. They were then asked further questions to explore
how responsible they felt parents, teachers, the Government and any other
organisation should be in ensuring protection of vulnerable users.

Responsibilities of the Internet industry


Concerning the responsibility of the Internet industry, views ranged from beliefs such
as “I don’t believe it’s the ISPs responsibility at all” to beliefs that the Internet
industry should be “very responsible.” Whilst these two quotes represented extremes
of the opinions expressed by the respondents, the vast majority of the interviewees (13
in all) accepted that the Internet industry had some level of responsibility, but that service providers could not be held wholly responsible and that other parties must also be involved.
Seven Managers provided opinions regarding the limitations of the Internet
industry’s responsibility. The most common view, expressed by four of the
respondents, was that service providers could not be responsible for user behaviour
(and hence could be responsible neither for completely protecting children nor for stopping all potential abusers):

“I think parents see us slightly as a babysitting service where we’re responsible for their kids actions which is impossible.”

Similarly, three of the Managers expressed beliefs that they could not be held responsible for content on the Internet; however, two of these respondents did believe that each company should be responsible for the content “actually being hosted on its servers”. One of these interviewees further believed ISPs to have a responsibility to act upon both inappropriate content and abusive behaviour on their sites once they became aware of it:

“If it was a telephone line would BT be responsible because someone phoned up and hurled abuse at somebody? Well no as long as they did something about it. And the same goes for the ISP: as long as the ISP does something about it when they know about it then I think that’s all they can do. I don’t think they are responsible for the content itself - I think they should act responsibly and take action as soon as they know something.”

Of those two Managers who believed that ISPs should take responsibility for the
content of their own servers, one commented further on specific methods ISPs could
employ to protect young users. This respondent believed that ISPs could “do a certain
amount of central filtering and can have an effective complaints procedure”. The other
Manager mentioned that ISPs have a responsibility to provide technology for safety:

“So of course we have responsibility and we provide the technology …we offer our members, for instance, parental controls and advice.”

Although only two Managers explicitly stated that ISPs should take responsibility for
ensuring the content within their sites is safe, other views expressed included a duty
“to act as good models for common sense” and to “encourage good behaviour”. The
most commonly stated area of responsibility for the Internet industry was their role in
educating and advising users about safe Internet usage. This view was given by four
of the five interviewees who expressed opinions on how ISPs should be responsible:

“I feel we’re responsible for educating them as well, and the parents.”

Whilst a further interviewee did not explicitly state that the ISPs had a responsibility
to educate users, this interviewee also perceived education as the main preventative
measure:

“I think we have to take our responsibility but I think it’s more about
education – teaching them how to help themselves and what to be aware of
and what not to…”

As well as believing ISPs to play a role in educating and advising children on Internet
safety, three of the respondents believed that industry should additionally be
educating parents:

“I think as someone’s ISP we need to be aware of the dangers and be providing the relevant advice and education to parents and children who are using our service.”

Once again, these views tie in with those of the online users who believed education
and awareness to be the most valuable safety tools that young users have at their
disposal.

Responsibilities of parents and other carers


The Senior Managers in general viewed parents and carers as having more responsibility for protecting children than the Internet industry. Expressions of the magnitude of this
duty varied from it being a “shared responsibility” to “100%”. Indeed, six of the 13
interviewees providing qualitative answers concerning parental responsibilities
identified parents as having the most significant duties compared with any other party:

“They should probably have the biggest role because it is their children after
all and no one will care about their children more than they do.”

Seven interviewees provided opinions with regards to specific ways in which parents could help protect young people on the Internet. The above quote highlights the theme that emerged concerning parents showing an interest in, or indeed providing supervision of, their children’s activity online (six instances). Being involved with what their
children are doing on the Internet was the most commonly cited example of how
parents could keep their children safe. Two respondents further commented that
parents have the responsibility to “make sure their child is happy to come to them if
they see something they don’t like.”

One Manager commented on the usefulness of taking preventative measures
such as keeping PCs in common areas of the household, whilst three interviewees
stated that parents should ensure that protective technology is in place, for example:

“I also think parents should have responsibility to… use the technology that’s
available because they can very easily set up parental controls via
[respondent’s company]. If they’re not with us then they can use other filtering
packages and monitoring package.”

While all respondents believed that carers should play a role in protecting young users
on the Internet, five of the Managers expressed concerns that parents may be limited
in performing this role, perhaps due to low awareness of risks and
countermeasures:

“I think in many cases parents are not aware, firstly of the nature of the threat,
and secondly obviously the many solutions which are available to them to
counter the threat including parental control software, firewalls, a selection of
ISPs which do or don’t support news groups, you know, monitoring what their
kids are doing”

Responsibilities of teachers
Opinions concerning teachers’ responsibilities varied from beliefs that teachers should
be “extremely” responsible to “hardly at all.” The majority of Managers (12) did
assign some responsibility to teachers, though only two parties assigned primary
responsibility to this group. As noted above, parents were viewed as having principal
responsibility.
With regards to ways in which teachers should protect children, two
interviewees believed that “it’s important teachers are aware of the dangers of the
Internet”, and one of these same respondents stated that if the Internet is to be used in
schools teachers have “got to make sure there are specific guidelines”. This Manager
also suggested that teachers ensure effective filtering mechanisms are in place on
school computers. Perhaps unsurprisingly, the main responsibility of teachers identified by the Managers was education, and five interviewees cited this as a duty that teachers should (as opposed to ‘could’) fulfil. Three of these Managers

explicitly stated that Internet safety education should be incorporated into the national
curriculum:

“There should be a centrally led campaign and it should be included in the curriculum for children to learn how to protect themselves on the Internet - what to look out for, what the pit-falls are, where things can go wrong and how. Generally I think, how to be careful.”

However one of these three respondents, whilst believing that Internet safety within
the curriculum would be a good idea, also highlighted that “the teachers are under
such pressure obviously without looking at that [Internet safety].”

Four of the Managers played down the role of teachers in favour of parents:

“I think education in school would probably be a good thing, but at the end of
the day if a child is using a computer at home though it should be more up to
the parent.”

One of these four Managers, whilst stating that teachers “should be just as responsible as any parent”, believed that compared with teachers “a parent would get more respect from users under the age of 16”. In Chapter 3, however, four parents felt that children
were more likely to listen to their teachers than their parents. Opinions against
educating children about Internet safety in school were expressed by one respondent
who believed that doing so might have effects opposite to those desired:

“The history of educating people about things they shouldn’t do is like advice
about taking lots of exercise and not smoking – it’s very difficult to teach this
to a class without actually exciting kids about going and investigating these
things.”

When talking about limiting factors on teachers’ responsibilities, only one respondent mentioned that teachers might have a low awareness of the issues. This contrasts with the five Managers who highlighted lack of knowledge as a limiting factor for parents, and with the findings from our (limited) teacher sample detailed in Chapter 3.

Responsibilities of other organisations
Twelve of the Managers, when asked if they could think of any other organisation that should have some degree of responsibility for child safety on the Internet, identified at least one other party. The most commonly cited bodies were the Government and the Task Force (eight respondents), followed by the Police (five interviewees).
The perceived responsibilities of Government included setting guidelines and legislation that “dictates to a large extent what Service Providers do and how enforceable things are” (three cases), as well as providing education to children (two instances), parents (one case) and the media (one respondent). Another role of Government identified by a respondent was to run “campaigns and, you know, advertising, promoting awareness, things like that as well”, whilst another Manager called for a Government initiative “to tackle spammers, the trouble is that most of them are based in China or the States.” This quote relates to the views expressed by
two further respondents who believe that the Government has a duty to apply pressure
on International Governments:

“I think that Government has other functions as well in terms of… acting on
International Governments to control the kind of international influx of
dubious material. Things like pornographic spam. I think that’s a very difficult
thing to crack on a national ISP level when obviously a large part of it is
coming from overseas.”

In terms of the responsibilities of the Police, the most commonly requested duty
involved the provision of a clear mechanism via which abuse cases or complaints
could be channelled. This opinion was expressed by two of the Managers:

“…with regards to the Police there should be a proper mechanism in place which allows [chat] service providers such as ourselves to actually follow through, and a clear specific route on how to actually report matters and points of contact and things like that.”

Linked to the above, one of the respondents expressed a need for the law to be
clarified with regards to Internet abuse:

“I’ve spoken to the Police for the last year or two and even the Police involved
appeared to be a bit sort of hazy on what’s a regulation, what’s a guideline.
You know, perhaps the law needs to be tightened up and made a bit clearer.”

One further Manager stressed the need for the Police to “take all complaints
seriously.”

The global nature of the Internet was highlighted by one interviewee as a factor
limiting the degree to which the Police can impact on Internet safety:

“You could be looking at something completely illegal that happens to be hosted in Cuba and maybe in Cuba it’s perfectly legitimate. And that’s the biggest problem: Yes they [the Police] should take responsibility but there’s only so far that it will go and unless the whole world actually gets together and makes some sort of unilateral decision to police the Internet themselves then I don’t think anything will ultimately happen which is a shame.”

Other bodies identified by the Managers included the Internet Watch Foundation
(IWF), “various groups within the Internet Service Providers Association”, the
Internet Content Rating Association (ICRA), non-Governmental organisations and software manufacturers (each cited by one interviewee). Finally, three of the respondents mentioned that young people should
take responsibility for their own behaviour:

“To a certain degree children should learn to take control of it themselves. Use
common sense. As much as we can turn around and say ‘yes we can protect
you, we can throw this person off if they’re being inappropriate’, at the end of
the day they can protect themselves by a) reporting it and b) using the ignore
feature.”

A fourth respondent gave an example of how shared child and parental awareness of
the dangers and collaboration can make use of the Internet safe:

“There was a case highlighted by the media, of a young girl who had met an
older man on the Internet and she went to meet him… the media had put this
on, and the TV company in their reporting were ‘look at the dangers – that the
young girl went to meet an older man’ but in actual fact what had happened was
that the girl was surfing the net using chat rooms, she had read the safe surfing
guide, so had her mother. Because she told her mother everything she was
doing and her mother showed an interest in what she was doing the mother
knew about the meeting that was about to happen and went with the girl to meet
the man. And as a result she had actually protected her. Now from the way it
was reported it was ‘well look how dangerous all this is’ but actually this was a
brilliant example of how things actually work when youngsters know what to
do, when the younger users know what to do and involve their parents and
when the parents are involved off their own back and actually want to be
involved in what their children do.”

Summary of views on responsibility


Managers assigned the most responsibility for protecting young users on the Internet to parents, via active parental involvement, providing supervision and ensuring protective technology is in place. However, low awareness of the Internet was cited as a factor inhibiting this group’s ability to keep children safe. Whilst teachers were assigned a comparatively lower level of responsibility, beliefs were expressed suggesting that Governmental and law enforcement bodies should also be responsible. A filtering effect (whereby more safety-conscious Managers may have agreed to participate in the present research) may have led to the present sample believing ISPs to have more responsibility than Internet industry Managers in general would. Nevertheless, the present sample on the whole believed that ISPs did have duties to protect children online, and that these responsibilities involved ensuring their own content is appropriate, providing access to protective technology and, most commonly cited, educating users and parents. Throughout all the sub-sections in this segment, education and increased awareness of both (a) the risks associated with the Internet and (b) the technology available to minimise these risks were cited as the main protective measures. This relates to one of the main themes identified in the previous chapter: that the Internet can be safe if individuals both know how to use the Internet safely and choose to do so.

Familiarity with the Models and dissemination

Familiarity with the Guidelines

Twelve of the Managers were familiar with the Guidelines prior to contact by the researchers; a 13th respondent became acquainted with the Models through a Home Office web link sent by the authors. Prior to the researchers’ contact, three of the 15 Managers were unaware of the Models. This figure is made more significant by the possibility of a filtering effect, whereby those companies that place a higher priority on safety would show less resistance and more willingness to participate in the current study. By the time of interview, 13 of the 15 interviewees said they were familiar with the Models and, of these 13, 11 considered themselves to be very familiar with the Guidelines.

Intra-company dissemination
Of the 12 Managers familiar with the Models prior to the present research, 10 stated
that they believed the Models had been disseminated to staff within their organisation.
Of those 10, eight of the Managers believed that staff within their organisation had good access to the Guidelines.

Dissemination to the Internet industry


Problems regarding the distribution of the Models were a recurring theme throughout
the interviews and this issue was highlighted in response to several items. Two
Managers who were unfamiliar with the Guidelines and three who were acquainted
with the Models raised concerns. Two of these five were Task Force members and felt
there should have been more effort in distributing the Guidelines to companies not
involved in creating the recommendations:

“The only thing again would be awareness within the industry itself. I had
quite a role in it [the Models], but had I not I don’t think I would have known
very much about it.”

One of the three Managers familiar with the Guidelines had concerns with the way he
had received the Models:

“We have a regulatory affairs department who deal with all our interactions
with Government and authorities and it [the Models] came to them. There is
an issue there. I picked this up when it came out slightly by accident. It may be
a process issue within this company but our regulatory guides tend not to be
sufficiently closely linked into the day-to-day business, but I’m the guy who’s
operationally responsible for the server, I can’t say that I have a close link with
the Home Office, now perhaps I should have.”

Those Managers not aware of the Guidelines felt their company was not alone in this
regard:

“I didn’t have a copy of the guidelines, I wasn’t aware, or made aware that they
could be downloaded or whatever. That’s my experience I presume that will be
the experience of many of the hundreds of ISPs… I mean I don’t know how
they’ve distributed them, but at least let the general public or ISPs know about
it. I think the danger of a lot of these Task Force things were, you know, it’s a
good idea and so forth, but practically speaking if you’re going to make a real
difference it needs to be higher profile.”

The above respondent felt that publication of the Models should have had a higher profile. As may be seen later, two of the Managers felt that awareness of the Guidelines could have been better facilitated by involving more industry members in the development of the Models, and a further interviewee called for a more dynamic method of dissemination:

“I think sending out a document is OK but if you want to change things in the
real world you need a more active engagement with the ISP community.
We’re all busy 24 hours a day and sending us big fat documents from the
Home Office is not a way to encourage changes of behaviour. We don’t have
people really, well we do have people sitting round reading this sort of stuff
and trying to keep up with it all, you know, and we’re quite a big company
and I imagine smaller ISPs it would have been straight in the ‘to do’ list along
with a huge pile of other consultation papers which have come out from

central Government. It needs a more interventionist style I think, to come
round, talk it through, you know, if you want to make changes in the real
world.”

Implementation of the Models – closed item responses

Implementation of specific aspects of the Models was explored through both semi-
structured items and closed questions. This section will focus on responses to the closed questions, whilst the following section will be concerned with the more qualitative responses from the semi-structured items. For each closed question, five possible responses existed: feature in place before the Guidelines were introduced; feature implemented or changed in response to the Models; feature not in place but plans existed to introduce it; feature not in place and no plans to introduce it; or not applicable.
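
To make the tallying concrete, the sketch below shows how closed-item responses of this kind could be aggregated into the percentages reported in table 4.6. It is illustrative only — the category labels, function name and example data are hypothetical rather than part of the research protocol — but it follows the counting rules described with the table: ‘not applicable’ and missing responses are excluded from the denominator, and ‘Model impact’ is the sum of the two change-related categories.

    from collections import Counter

    # Hypothetical response labels for each closed item (illustrative only)
    PRE = "pre-Models"          # feature in place before the Guidelines
    POST = "post-Models"        # feature introduced/changed in response to the Models
    PLAN = "plan to introduce"  # plans exist to introduce/change the feature
    NONE = "no plans"           # no plans to introduce/change the feature
    NA = "not applicable"       # excluded from percentages, as is missing data

    def summarise(responses):
        """Percentage per category, excluding 'not applicable' and missing data."""
        applicable = [r for r in responses if r in (PRE, POST, PLAN, NONE)]
        counts = Counter(applicable)
        n = len(applicable)
        pct = {cat: 100.0 * counts[cat] / n for cat in (PRE, POST, PLAN, NONE)}
        # 'Model impact' = introduced/changed post-Models + planned changes
        pct["Model impact"] = pct[POST] + pct[PLAN]
        return n, pct

    # Toy data: ten responses to items for one Model; None marks missing data
    example = [PRE, PRE, PRE, POST, PLAN, NONE, PRE, NA, PRE, None]
    n, pct = summarise(example)  # n = 8; the NA and missing responses are excluded
    print(n, pct)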
The results from the closed-item questions are displayed in tables 4.2 to 4.5 in the appendix. Notes for interpreting the results discussed below are also provided in the appendix. The sub-samples of Managers for which useable data were obtained represented eight chat providers, four companies offering IM, eight connectivity providers and nine companies that provided web hosting for home users. Table 4.6 below shows the percentages of response types for each of the Models.

Table 4.6.
Percentages of response types for items across different Models

Model                                    Feature in   Feature introduced/   Plan to            Model impact       No plans to
                                         place        changed in response   introduce/change   (sum of previous   introduce/change
                                         pre-Models   to the Models         the feature        two columns)       feature
Chat (n = 8, responses = 280)            77.5%        5.4%                  10.4%              15.8%              6.8%
IM (n = 4, responses = 129)              53.5%        14.7%                 13.2%              27.9%              18.6%
Connectivity (n = 8, responses = 109)    62.4%        11.9%                 4.6%               16.5%              21.1%
Hosting (n = 9, responses = 99)          61.6%        10.1%                 3.0%               13.1%              25.3%
Total (responses = 617)                  67.3%        9.2%                  8.9%               18.1%              14.7%

Response numbers do not include missing data responses, nor ‘not applicable’ responses. Sub-sample numbers do not include companies for which data is missing.

Relative percentages as shown in table 4.6 were generated only from those respondents who answered ‘pre-Models’, ‘post-Models’, ‘plan to introduce’ or ‘no plans to introduce’ (the number of responses each percentage is derived from is given in the table for each Model). Response numbers include neither missing data nor responses where an item was identified as being ‘not applicable’ to the particular service offered by a company. ‘Not applicable’ answers were coded when the risk that a certain Model recommendation aimed to prevent was not associated with the service in question because other measures were in place (e.g. having no means by which users could enter sensitive material into profiles meant that warnings not to enter such details were not required). Such measures might in some instances be seen to make a site safer than if the Models had been followed to the letter, and in all instances where a ‘not applicable’ answer was coded these measures had existed pre-Models (thus diminishing the relative magnitude of the baseline concordance discussed here).
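
As a worked example of this derivation, the ‘Model impact’ column is simply the sum of the two change-related response categories, expressed against the applicable responses for that Model. Taking the figures for chat from table 4.6:

    Model impact (chat) = ‘introduced/changed post-Models’ + ‘plan to introduce/change’
                        = 5.4% + 10.4% = 15.8%

of the 280 applicable chat responses, with baseline concordance (77.5%) and non-compliance (6.8%) accounting for the remainder.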

Baseline concordance
Across each of the Models baseline concordance with the recommendations was high
with 67.3% of responses stating that features were in place ‘pre-Models’. The highest

proportion of baseline concordance related to recommendations of the Model on chat
(77.5%), although it should be noted that this figure does not include baseline features
of two companies who closed their chat rooms at least partly in response to the
Models and therefore the items on chat were not applicable to these companies. All
seven chat providers for whom user profile questions were applicable said that when
completing profiles, user information that will be in the public domain is highlighted,
whilst all eight respondents claimed to display clear information concerning both the
type of chat service offered and the audience services are aimed at. All companies
said Model items concerning safety advice aimed at and easily understood by young
users were in place prior to the publication of the Guidelines. The recommendation
that service providers should ‘give due prominence to some or all’ suggested safety
tools also met high levels of baseline concordance with all companies having at least
two of these tools in place pre-Models, though the ‘grab and print’ tool showed the
lowest level of baseline concordance, with only one company having this feature in
place pre-Models (see ‘Negative aspects’ sub-section, current chapter). Another chat
recommendation that showed low baseline concordance concerned the screening of
Moderators, with just two companies running checks on their Moderators pre-
Guidelines (see ‘Factors inhibiting implementation of the Models’ sub-section,
current chapter).
Similar levels of baseline concordance were observed for the Model for
connectivity providers (62.4%) and hosting providers (61.6%). With regards to the
former, the highest level of baseline concordance related to recommendations to have
effective mechanisms in place for dealing with complaints, informing users in terms
of service that unacceptable behaviour may lead to withdrawal of service and/or
referral to law enforcement, and warning users that they have legal liability for
whatever they place on the web (seven out of eight companies for each). It is
interesting to note that the last two recommendations divert responsibility away
from the ISP. The lowest level of baseline concordance for this Model involved the
recommendation to provide information that parents may utilise in order to educate
their children regarding the risks of Internet use (two respondents). This finding is of
particular importance as the need for education and the involvement of parents in this
process was highlighted in chapters 2 and 3. Representatives of eight of the nine
hosting providers interviewed claimed to have effective mechanisms in place for
dealing with complaints relating to customers’ websites and effective procedures for

removing illegal content pre-Models. Seven of nine respondents answered ‘pre-
Models’ to four of the remaining nine items, including the item asking if they provide
a link to, or information about the IWF to facilitate reporting of illegal content. The
high level of baseline concordance for this recommendation is surprising, as this was viewed by the Managers as one of the least useful recommendations (see ‘Positive
and negative aspects of the Models’ below). The lowest levels of baseline
concordance for hosting recommendations occurred for the items asking if specific guidance was available for young home users when creating web pages (zero companies) and whether customers were encouraged to provide self-labelling of the content of their sites using Platform for Internet Content Selection (PICS) compatible systems such as ICRA (one company).
The lowest level of baseline concordance occurred in relation to the Model for IM providers, though over half of responses were ‘pre-Models’ (53.5%). The highest level
of concordance occurred for the three items asking if the client had ignore/block tools
that were clearly described, and whether there were clear links to the company’s
privacy policy (four of four respondents for each). Prior to publication of the
Guidelines none of the companies had safety messages in place on the home page for
downloading the IM client (zero of three respondents), nor information on how to
contact law enforcement and child protection agencies, nor links to online safety
guides (this is compared to seven of eight companies that provided such links from
chat rooms pre-Models).

Impact of the Models


As baseline concordance was so high, it is unsurprising to find that the direct impact of the Models in regard to specific points was moderate. Almost one in five responses (18.1%) indicated either that a feature had been changed or introduced in response to the Guidelines, or that there were plans to introduce or alter that feature. While the Model
on IM saw the lowest level of baseline concordance, it also made the greatest impact
with 27.9% of responses indicating that there were plans to alter features or that such
changes had already taken place, although it must be noted that this sub-sample was
small with data available for only four companies. The Guidelines had no reported
effect on just five of the 36 items on IM (13.8%). Items that had the most impact
relate to recommendations to display safety messages regarding communicating with
strangers (three out of four companies) and for prominent safety messages to be

displayed on the home page for downloading IM (all three respondents to which this item was applicable – one client was not downloadable).
Similar levels of direct impact were found for the Models on both connectivity
(16.5%) and chat providers (15.8%), though perhaps the magnitude of the latter is
more surprising considering the comparatively high baseline concordance for this
Model. The impact of the chat Model is also perhaps more significant when it is taken
into consideration that two represented companies (not part of the sub-sample of
eight) closed their chat rooms at least partially as a response to the Guidelines. With
regards to the sub-sample, the Model appeared to have no impact (above and beyond
existing baseline concordance) for 13 of 39 items (33.3%). The recommendations that
have had most impact include the introduction of, or plans to introduce, alert features and/or a grab and print tool (three out of eight respondents for each), whilst four companies had been looking at a system via which users can report problems with Moderators, and also at conducting security checks on Moderators (there had been no baseline concordance for the former recommendation). Only one of 14 items (7.1%)
in the connectivity section of the research protocol showed zero impact and this item
related to users being informed that pornographic, racist and other possibly illegal
material will be reported to the IWF (half the respondents complied with this recommendation pre-Models, half had no plans to do so). The largest impact of the
connectivity Model was found in the item that showed lowest baseline concordance.
This item related to the recommendation to provide information for parents to utilise
in order to educate their young users regarding risks (three of eight companies).
The Model relating to hosting providers appeared to have least overall impact
(13.1%), though zero effects were found for only one of 10 items. This item related to
the recommendation to encourage users to self-label their content using PICS
compatible systems such as ICRA (one company did this pre-Models, seven had no
plans to do so). The biggest impact (two out of eight companies) was found for three
items: do Terms of Service explicitly state the boundaries of acceptable online
behaviour; is clear guidance available for home users when creating web pages; and is
there specific guidance aimed at young users for doing so. There was zero baseline
concordance for this last item.

Non-compliance with the Models
Non-compliance (‘no plans to introduce/change feature to concur with Model
recommendations’) accounted for 14.7% of responses to items applicable to the sub-
samples. The Model to which least resistance was shown by far was that aimed at chat
providers (6.8%) and this can be explained in part by high levels of baseline
concordance. Compliance (or plans to comply) was indicated in responses to 25 of the
39 items (64.1%). The items towards which most resistance was shown related to the
recommendations for a ‘grab and print’ tool (four companies, again see ‘Negative
aspects’ sub-section) and the suggestion that an alert system should specifically be at
the top of each chat page in rooms aimed at young users (three respondents).
The Model for IM providers indicated the second lowest levels of resistance
(18.6% of responses being ‘no plans to introduce’) with complete compliance being
stated for 21 of 36 items (58.3%). Items to which non-compliance was most often
indicated were recommendations to display safety messages when a user considers
adding a person to their buddy list, providing clear information on how to contact law
enforcement and child protection agencies, and providing links to online safety sites
from the IM client (three of four companies had no plans for each). It is interesting to
compare resistance to adding links to online safety guides with the results from the
similar recommendation for chat: all eight chat service respondents said that such
links were in place or that there were plans to create them. The refusal to link from the IM client itself may be due to companies seeing IM as a side offering to other services that already provide such links (as was the case for all four of the present IM sub-sample).
A non-compliance rate of 21.1% was found in response to the items exploring implementation of the connectivity Model, and no resistance at all was shown to only four of 14 items (28.6%). Most frequently cited areas of non-compliance related to
informing users that pornographic, racist and other possibly illegal material may be
reported to the IWF (four cases – see ‘Factors inhibiting implementation of the
Models’ sub-section below), providing information for parents to utilise in educating
their children on risks, encouraging parents to take practical steps in order to minimise
risk (e.g. advised to keep PCs in common areas of the home), and providing safe
surfing guidelines specifically for young users themselves (three of eight instances for
each - See ‘Negative aspects of the Models’ below).

The highest relative levels of resistance appear to be to the good practice
Model for hosting providers, with a non-compliance rate of 25.3%. Complete compliance (or plans to comply) was shown in response to just two of the 11 items (18.2%). These recommendations involved having effective mechanisms for dealing
with complaints relating to customers’ websites, and effective procedures for
removing illegal content as soon as is reasonably possible. Extremely high levels of
non-compliance were shown in response to two of the items with eight of nine
respondents having no plans to encourage users to self-label their content (see
‘Negative aspects’ sub-section below) and seven organisations having no plans to
provide specific guidance to young users on creating web pages, although it must be
noted that one of these seven companies only allowed teenagers to generate content in
their web space.

Implementation of the Models – semi-structured item responses

Implementation in comparison to ‘competition’


Of the 12 Managers that had some familiarity with the Models prior to the research,
nine perceived their company’s implementation of the recommendations to be above
average in comparison with their main rivals. One interviewee however, believed that
the Models had no impact on the way he ran his company and that nothing had been
done to incorporate their recommendations. In commenting on the safety of his
service, this respondent stated “Well, we’ve never had a complaint. We’ve never had
any sort of illegal material, or anything negative at all, ever, on any of our sites. It’s
never been an issue, ever.” Another Manager was unable to comment with regards to
this item due to not having considered how well other sites had implemented aspects
of the Models, whilst the twelfth respondent viewed his company “in the top half of
compliant ISPs” though not “in the top quarter” and this Manager did not believe his
company had “taken aggressive steps to follow up on the kind of peripheral areas.”

Changes to operating structures


“Our focus and our priority has always been safety. We have people constantly
monitoring… So, you know, as long as your heart’s in it, and it always has
been and we haven’t really had to make any changes to accommodate for this
stuff [the Models] because it’s been there from the beginning.”

The above quote is representative of five respondents who could name no major changes to operating structures or to policy and practice in response to the Models, as these respondents claimed to have had most of the recommendations in place before Guideline publication. When considering this and the high rate of baseline
concordance found from the quantitative data presented in tables 4.2 to 4.6, it is
unsurprising to discover that only five of the twelve companies familiar with the
Models altered policy or practice in response to the Guidelines, whilst four of these
five Service Providers made changes to operating structures. A thirteenth respondent
recently acquainted with the Guidelines felt that his company would have to make
alterations to operating structures to accommodate the recommendations.
Perhaps the most drastic change to operating structures was the closure of chat
rooms in the UK by two large Internet companies. One of these companies, however,
stated that this was done only partially as a response to the Guidelines:

“[The Models were] certainly one of the factors in there, yes. But ultimately it
was a case of, you know, trying to protect our users online, and we thought
that [closing the chat rooms] was the safest thing to do.”

By closing chat rooms, companies are not necessarily changing structures to comply
with the Models but are effectively removing the need to meet the chat guidelines.
The Manager who became aware of the Models as a result of this evaluation, whilst
believing that his company already followed most of the recommendations, stated that
in order to fully meet all the requirements they would need to close their ‘open
access’ chat rooms (a service in which users from any ISP can enter and chat, as
opposed to chat rooms that an ISP offers only to its own subscribers):

“I’m not particularly in favour of continuing to run an open access chat room
for instance because we have no control of what happens, you know people
can come in there, do whatever they want with another ISP… It’s just a
resource for abuse, for whoever’s out there and it’s a bit like the wild west in
that, and hopefully we’ll take action to kind of rein in these services in the
next six months.”

Two of the Managers explained that their operating structures had been altered by
slightly modifying the content and/or prominence of safety messages:

“As a result of the Models we took a look at where those safety messages were
and we gave them even higher prominence as a result.”

Two companies made alterations to various product features and services, and one ISP
cut a link to an external IM provider in response to the Guidelines.

Changes to policy and practice in response to the Guidelines


With regard to changes in policy and practice, the most commonly cited example of
change was greater awareness of, or attention directed towards, child protection issues,
including the hiring of a Child Protection Officer in one instance. Three of the five
Managers who said that changes to policy or practice had been made in response
to the Guidelines reported alterations in this regard; however, one of
these respondents, a member of the Task Force, stated that it was less the Models
themselves than the processes involved in formulating the Guidelines that had caused
this shift in focus. Perhaps one of the more significant changes to practice
and policy was the outsourcing of Moderators by one major ISP. Reasons for this are
given below:

“Before we went to this moderation company we had volunteers within the
kids and teen message boards as well and although we did supervise them on a
daily basis to ensure they were safe and there wasn’t any personal or
identifying material on the message board, we can guarantee that with this
moderation company that they’re checking the forums frequently. We have an
actual time schedule now, which you couldn’t really do with volunteers. It’s
made our supervision a lot easier in a lot of ways to have this company
working with us.”

The Models had affected the way another company evaluated its safety measures,
and it had become practice to review its child protection policy on a quarterly basis.
This Manager also explained that, as a result of the Models, his company had “audited
all our applications so that not only do they comply with current recommendations
and Models but also so that they’re actually ahead of them so that we comply with
future Models.”

Factors inhibiting implementation of the Models


Whilst none of the interviewees felt the Models dictated drastic changes from
previous operating structures, only three of the Managers did not identify any factors
they felt would inhibit implementation of the Guidelines. It must, however, be noted
that a total of seven respondents out of 12 could not identify any specific factors when
asked: ‘What particular features of the Models inhibited their use?’, and some of the
inhibiting factors discussed here have been collected from responses to other items. It
would thus appear that, on the whole, most respondents felt that no major
aspects of the Models inhibited their implementation.
Two respondents (including one Manager not familiar with the Guidelines)
felt that the Models were less relevant to their company’s services
because they already had strict guidelines of their own in place or had to abide by
international laws. Another respondent highlighted that some features of the Models
might have been harder to implement than others and, though not working for a
company that offered chat or IM, identified the Models for these services as being the
most difficult to implement:

“If I was looking at another ISP I don’t think it would have been too difficult
to start implementing. The hardest thing I think would be the Models on chat
and Instant Messaging because it’s quite an area to look at obviously but there
is quite a lot more that people can do and should do for chat in terms of
Moderators and obviously the background check on Moderators and such.
And I think that will be harder for a lot of ISPs to actually deal with. But if I
was running a chat room I think it would be difficult to implement but I think
ultimately that is what they should be doing anyway - it’s only common sense
the majority of it.”

Like the Manager quoted above, another respondent believed that the
recommendation that background checks be carried out on Moderators was difficult to
follow. This interviewee felt this was due to the current high demand for such checks:

“[Our Moderators] are screened but as you know there is a huge backlog for
the Criminal Records Bureau and [they have] terrible difficulty keeping up
with themselves, and there’s more work to do in this area - not just for us but
for everybody.”

A further Manager explained that a lack of information had prevented her company
from implementing the recommendation that there should be clear information on how
to contact law enforcement or child protection agencies:

“The only info that we would like, and it’s the same throughout the Internet
community industry, is that we are not sure of who to go to if there ever was a
problem. One of the questions was ‘is there clear information to tell parents
where to go if something serious happens?’ – well there isn’t because we don’t
know who we should be telling them to go and see.”

Two of the most commonly cited inhibiting factors are discussed elsewhere:
problems with the distribution of the Guidelines (five cases – see
‘Dissemination to the Internet industry’ above) and financial factors (three instances –
see ‘Effects on business’ below). With regard to the latter, one Manager, talking
about the Model on chat, stated that:

“It would be very difficult for an ISP to follow those Models throughout and
still be cost effective.”

Another common inhibiting factor was identified by five respondents, who gave
examples of how certain recommendations were not implemented within their
company due to their specificity and irrelevance to the services offered.
Specificity and irrelevance have been grouped together because it was often the case
that the applicability of particular Model features was limited by the Models being
perceived as too specific. Instances where the Guidelines appeared irrelevant differed
between respondents:

“…the reason why I said no to some of the points [concerning implementation
of the Models] is because of the way our site is. The guidelines are aimed at flat
chat and we are a virtual 3D site, we are more of a game so it’s not easy to put
some of the things in place.”

This respondent was also one of two interviewees who felt the recommendation
to have an alert feature at the top of chat pages aimed at children was too specific:

“Some things are a bit too specific. Like the alert button has to be top right,
that doesn’t really help when you have got a graphic-rich site and you can’t
put it top right, or other things like that. It’s just a bit too specific; a better way
of putting that would be: there needs to be a clearly visible alert button.”

Two respondents questioned the relevance of the specific requirement to inform users
that inappropriate material may be reported to the IWF:

“[Warning users that possibly illegal material may be reported to the IWF is]
not really a relevant way to go about these sort of things because if you’re
looking at an organisation the size of ours then in any type of abuse there
would be a number of options that we could use, one would be the police,
another would be the IWF, yet another would be the judiciary system. It could
be any of the above plus others; the IWF is three people and a computer.”

Returning to the Model on chat, one Manager questioned the relevance of certain
safety messages (for example, advice not to post personal details) being in place on his
company’s chat rooms, given the nature of the Moderation they provide:

“The way you’re pre-moderated, there’s obviously massively reduced risk, so
the fact that they are pre-moderated has cut out a very large element of risk
and therefore safety advice in pre-moderated say, is clearly generally not
exactly relevant - words can be knocked out before anybody can see it.”

Finally, one respondent whose organisation had not altered anything in response to the
Models cited a “lack of relevance” to the services his small ISP offered as an
inhibiting factor:

“Well, we have a very strict policy of keeping our own house in order. We
don’t allow this material on full stop, and it’s just not, you know, we provide
services to business primarily, and businesses don’t want to be associated with
this kind of material so that is our rule. And, so, a lot of the Task Force work,
as I said, was more about getting the children’s charities and the politicians to
understand that what they wanted was actually not possible, and what they
should have done, and I mean basically what they want is nobody to ever find
horrid porn on the Internet, that’s what they want, but in practice, that’s not
achievable in the way that they’re trying to do it which is basically criticising
the Internet industry in the UK.”

The belief of some Managers that certain recommendations are irrelevant to
their company suggests that a more general and holistic approach needs to be
taken when proposing Guidelines on how to offer safe services. This
finding also has implications for assessing the safety of sites using the current Models:
where certain recommendations are not being met by a given company they might not
be required, given the nature of the site; conversely, where certain features are
present, these measures might not be as effective as alternative practices utilised by
other companies.

Effects on business
The previous section noted that three Managers identified financial factors as
inhibitors of implementation. Six stated that costs had been incurred when asked a
more specific question concerning whether expenditure had been required to meet the
Guidelines. This number reaches seven if the Manager who had recently become
acquainted with the Guidelines is included, as he felt there would be costs involved if
his company were to employ the Models. The lack of financial costs for seven of the
other eight companies can be partially explained by these companies claiming
not to have altered practice, policy or operating structures in response to the Models.
The financial costs of implementing the Guidelines may differ between individual
Models. As one respondent, whose company did not make any alterations in response
to the Guidelines, stated when asked about cost implications:

“Not for us no. But again having said that if we were to do chat and IM and
such then yes there would be.”

Similarly, four out of the seven respondents who said that there had either been or
would be cost implications related these costs to chat services. For example, two
Managers spoke of the costs of Moderation:

“If we are going to have 24 hour, seven day a week Moderators then there’s
going to be cost implications.”

A representative from another company, who stated that they could in theory implement
the chat recommendation for having recording mechanisms in place, went on to say
they do not do this “because of the size of the file it would produce… Give me enough
money to buy a hard drive then ‘yes’.” This same respondent also highlighted the
absence of an alert feature as a consequence of financial constraints.
Whilst the other three companies who said there had been costs associated with
implementing the Guidelines did not specifically attribute these expenditures to the
Model on chat, all three did offer chat at least prior to the Guidelines. None
of the Managers was able to estimate or provide information regarding the
magnitude of these costs.
Whilst several respondents believed the Models resulted in cost implications for
their businesses, none of the interviewees viewed the overall effects on business as
negative. Of the six Managers whose companies had made (or planned to make)
substantial changes, two viewed the Models as having had both negative and positive
effects on business, whilst the remaining four saw the Guidelines as having had a purely
positive effect on revenue. Negative factors have been discussed above and included
the costs of deploying certain features (both in terms of programming and staff time)
and of losing users due to axing certain services. With regard to positive effects, all
seven respondents felt that if consumers perceive their site to be safer they will be
more likely to stay or to become a user:

“It’s all a question of parents feeling their children are safe and it’s to do with
brand reputation really – if you have good child protection policy in place…
and you educate the public then they perceive you to care about their welfare,
which we do obviously, then it’s good for business.”

One chat provider also commented on a growth of user numbers as a result of other
sites closing their chat rooms in response to the Models.

Internal evaluation and maintenance of safety features


The research protocol contained three items designed to assess existing structures
within companies to prevent attrition of Model-consistent features and to explore
procedures for implementing the Guidelines. Whilst only six respondents believed that
their company had made or would make significant changes in response to the Models, an
additional five companies reported that most of the recommendations were already in
place prior to publication of the Guidelines. This section will therefore focus on
responses from 10 of these 11 companies to explore how they maintain their safety
features (data from one of the 11 could not be obtained due to interviewee time
constraints).
All 10 companies had in place some means of evaluating their safety
measures. In the majority of cases (six) this responsibility fell to one specific person;
in five of these six instances the interviewee was that individual. The size of
these companies varied extensively: whilst one respondent at the smaller end of the
scale believed that he alone was more than capable of maintaining safety features, a
Manager of an ISP at the larger end of the scale believed that this role was difficult.
The following quotes illustrate this dichotomy:

“If anything does get reported obviously it goes through a certain channel but
because we are a small team it’s very easy to see when things are changing,
when things need to be changed without having specific structures involved.”

“…when I was talking about the customer home pages, and the labelling [this
is referring to the recommendation to encourage users to self-label their
content], the reason why we don’t follow the Model on that particular basis is
because there’s such a huge surface area and it’s, you know, just impossible
for one person to do it.”

One large ISP explained that it had a specific team “committed to online safety and
community-related issues”, and a Manager of a chat service explained that a large team
structure ensured safety features were maintained and evaluated. Two respondents
stated that evaluation of safety measures was carried out through regular review and
audit; one of these companies also explained that its policies were evaluated by an
external party as well as through internal reviews.
Whilst all of the 10 companies had assigned responsibility for monitoring
implementation of the Models to an individual, team or review process, four of the
Managers talked about how evaluation procedures were ‘ad hoc’, with safety features
often evaluated and maintained through users highlighting problems as well as through
staff usage of the site. This was the case for one small and one mid-sized ISP as well
as two chat-only providers:

“I suppose it’s just what crops up. You know things will crop up from time to
time but usually it will crop up first on, you know, either a volunteer or a user
will alert us of something, and then we usually have a discussion on the
management mailing list and between us we figure out what would be the best
solution and how to, you know, what reaction we should be looking for.”

With regard to the maintenance of Model features, one respondent was of the view that:

“Once they’re there they’re there. And then we build on them for our next
version.”

The difficulties of maintaining and evaluating safety features will vary not only with
the size of the company and its user numbers, but also with the nature of the services
provided. For instance, in a quote given earlier in this section one Manager talks
about the difficulty of monitoring user home pages due to their sheer volume. However,
with regard to online communication services, once particular messages or tools are in
place the main issues in maintaining good practice may revolve around
Moderation and the supervision of Moderators. For example, one ISP that provides chat
talked about how their Moderators “are given clear guidelines and their performance is
constantly checked by evaluating their chat logs and monitoring their message boards”.
Moderator supervision, training and the provision of job guidelines will be discussed in
chapter five, alongside the roles of Moderators and the impact of the Models upon this
group.

Positive and negative features of the Models

Negative aspects
Interviewees were asked to report on what they perceived to be the most negative
aspects of the Models, as well as which particular features of the Guidelines made them
a success. None of the respondents identified features that they believed to be
particularly negative. However, analysis of qualitative data obtained from other items
found that several participants had commented on the lack of utility of certain
Model recommendations. The most frequently reported feature in this category was the
‘grab and print’ tool recommended in the chat Model. Three respondents thought
this tool was redundant because text could be recorded by the site itself. This
relates to the relevance issue discussed above.
One respondent commented that the specificity of the Models had resulted in the
implementation of a ‘panic button’ that he perceived to be less useful than previous
measures:

“There is a point there that refers to a panic button being available in every
chat room. Now a panic button is a very specific example, or a very specific
requirement. In our case for example we have a facility that allows you to
report abuse but it wasn’t a button and we found ourselves having to bend
over backwards to create a less helpful solution that incorporated the button
because we wanted to comply.”

Two further respondents questioned the utility of encouraging users to self-label their
own content using PICS-compatible systems such as ICRA:

“It’s a very difficult one that I think industry and consumers have been
grappling with for quite a few years and to be honest not many people self-
label, and there doesn’t seem to be any value in it.”
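
Self-labelling of this kind worked by embedding a machine-readable rating in a page’s header, which filtering software could then read and act upon according to parental settings. As a purely illustrative sketch (the service URL and the n/s/v/l categories below follow the widely documented generic RSACi example from the PICS specification, rather than ICRA’s own later vocabulary), a PICS-compatible label took roughly the following form:

	<!-- Illustrative only: the service URL and categories shown are the generic RSACi example -->
	<meta http-equiv="PICS-Label" content='(PICS-1.1
	  "http://www.rsac.org/ratingsv01.html" l gen true
	  for "http://www.example.com/"
	  r (n 0 s 0 v 0 l 0))'>

The low uptake described by respondents undermined this approach as a whole, since filters can only act on labels that publishers actually provide.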

Indeed, the self-labelling recommendation was found to be the least adopted measure in
the present research, with only one Manager advising customers to self-label. This
respondent, however, felt that this recommendation was one of the most positive
provided by the Guidelines. An interesting point was made by one Manager who
believed the alert tool to be more beneficial than the ignore feature on a well-moderated
site, given that the former can help prevent abuse of a number of users rather than just
of the person using the tool:

“We are working to encourage more people, I think because it’s a valuable and
useful tool, to alert rather than block, because if you alert you alert the company
which otherwise someone else might forget, and then you’ve got the possibility
of the Moderator looking at the seriousness, whereas if you just block you’re
saying ‘I don’t want to be disturbed and I don’t mind if you go and do the same
thing to somebody else’. And that’s a cultural thing and I think it’s interesting
to explore.”

With regard to the Models as a whole, four respondents felt that the Models had
missed out certain Internet services such as e-mail. The final theme identified
concerning negative aspects of the Models was their non-mandatory nature.
Two Managers, one from a large and one from a mid-sized ISP, stated that the Models
should perhaps be made statutory, although one of these interviewees commented that
lack of resources might make this difficult for some ISPs:

“And at the end of the day maybe it should be, maybe another measure would
be to make some of the guidelines law. The issue then becomes where does
the money come from. I mean if you’re going to have a chat room and there’s
requirements to monitor that 24 hrs per day, who pays for that?”

Positive aspects
As well as few respondents identifying the least useful recommendations of the Models,
few interviewees gave specific examples of what they felt were the best suggestions,
despite all but one of the Managers viewing the Models as a positive measure
overall. It has been noted above that one of the Managers saw the recommendation to
encourage users to ‘self-label’ as a very positive one, whilst a chat service
provider mentioned the utility of the alert tool and another ISP representative
commented on the importance of having an effective complaints procedure.
Two Managers stated that providing safety advice to users was the most
valuable feature of the Models. A quote illustrating this and the usefulness of a
complaints mechanism is given below:

“I think the focus on educating users and children who are users, and having
an effective complaints mechanism is a very kind of positive approach rather
than imposing lots of draconian legislation and outlaw of content. I think
education and an effective complaints mechanism are the right way to go
about it.”

When asked about the overall benefits of the Models, eight Managers said that an
increase in awareness was the most positive outcome. The next most commonly cited
positive feature was the clarity with which the advice was presented. Such comments
were made by five of the respondents: “I think the Models themselves are well written
and self-explanatory”. Two more respondents commented that the Guidelines provide
a good framework for industry when considering child safety. One of these two
respondents also felt the Models are “a good base for parenting as well as for ISPs”.
One interviewee was encouraged that the publication of the Guidelines reflected the
multi-agency nature of the Models’ development:

“I think the fact that it was collaborative and the fact that it wasn’t just
something that Government churned out with little consultation with the
industry or little understanding of how the industry would view or would be
able to adapt these in a real environment. So it’s the collaborative element that
I found the most beneficial.”

Consequences of the Models


One respondent did not see the Models as “being a success”, another commented that
he did not feel the Models had been “100% successful”, and a third believed the
Guidelines to have had no impact due to problems of dissemination. However, the
majority of the 13 participants familiar with the Models viewed them as having had a
mainly positive impact (11 cases).

As noted above, the most commonly cited positive impact of the Models was a
perceived resultant increase in awareness of safety issues:

“Awareness. I don’t think I can stress that one enough. I think awareness is
probably the biggest thing because most people have got common sense and if
they’re aware of what the problems are and how to deal with them then most
people will.”

Whilst in some instances it was not clear from responses who had become more aware
as a result of the Models, two respondents commented that the Guidelines (or, in one
instance, the process of formulating the Guidelines) had resulted in greater
awareness of and focus on safety within the UK Internet industry:

“From personal experience I think the fact that people have got around a table
and talked about this stuff has made them think. It’s made them more aware of
what other people are doing in the industry; it’s made them aware of the
different attitudes. I think that’s been really helpful, just the process of getting
people talking about it.”

A representative of an international company also spoke of the possible positive effects
the Models may have as a result of being disseminated to international branches of the
business. One more interviewee believed the Models had increased parents’ awareness,
and three respondents expressed the view that the advertising campaigns and media
attention accompanying the Models had resulted in greater public understanding. As
these adverts were aimed at the public, it can be inferred that these respondents
believed that user awareness of safety issues had improved:

“Awareness more than anything else on the positive side. During the course of
writing the Models there was quite a lot of media activity about it and there
was also an awful lot of advertising campaigns and such like that that all came
out to make people aware.”

Related to the theme of greater awareness was the view expressed by one Manager
that the Models had provided “reassurance that we were doing the correct thing”. This
respondent also stated that the Guidelines indicated that child safety had “been taken
seriously.”
Another Manager saw further positive business-related consequences and
expressed the view that the Models have “kept us out of the media for the right
reasons” but, perhaps more significantly, have helped balance the level of responsibility
for child protection placed on ISPs: pushing some companies to be more responsible
whilst taking some responsibility away from those ISPs who had already been
making an effort in this area (presumably because the Task Force assumed some of the
responsibility by generating the Good Practice Guidelines):

“It’s moved the Internet industry forward from a complete lack of any
commercial social responsibility in say 2000 to one where I am quite confident
that I can say what I’ve just said: that it isn’t all our responsibility now.”

This Manager saw this pitching of ISP responsibility as having a further positive
consequence because “theoretically, [the Models] will put all the main ISPs on a level
field, which you know obviously doesn’t make us paragons of virtue and neither does it
leave us on the roadside.” This respondent did, however, point out that the Models were
not mandatory, and he called for them to be made law so that companies could not
opt out.
Related to regulatory issues was one of the two ‘negative consequence’ themes
that emerged from the discourse. One Manager commented that small enterprises may
be discouraged from setting up community projects due to the demands of the
Guidelines “laying down a restrictive standard of moderation which naturally might put
off community projects and community issues if the standard is so high.” This Manager
further felt that the Models’ requirements were so great that they could actually worsen
Internet safety in some circumstances:

“I think the danger of people who don’t necessarily understand the medium
forcing over-prescriptive Models from another world, and trying to require
those Models... I think that can actually be counter-productive and can actually
have the opposite effect.”

Whilst three respondents had commented that the media campaign associated
with the Guidelines was positive, two respondents felt that the press coverage
associated with the Models had the effect of scaring service users unnecessarily:

“Fear is now epidemic in children. I was at Center Parcs last week and
somebody was taking photographs, quite innocently, and my children were
worried that you know, they’d go on this Internet site and they’d be displayed.
That’s what the children said. It was shocking, they shouldn’t think that way at
all.”

Facilitation of the Models


Respondents were asked if they felt that the Models could have been better facilitated.
Eight of the interviewees responded positively, but this number is reduced to five when
those respondents whose main concerns were with publicity and the distribution of the
Guidelines to ISPs are removed. Two of these five respondents, both members of the
Task Force, commented that the drafting of the Models took too long and that they
would have liked to see this process shortened:

“I think the thing that some people don’t realise is the sheer amount of time
that.... the Guidance takes. There are times when I’ve been working for a day
a week, you know, it feels like a day a week, a lot of time on a Model when
they’ve got to be made completely from scratch… I’m not saying they’re not
worth it but it’s a nightmare. A small number of people have invested a lot of
time and again [we] have had to go on doing our normal jobs as well.”

As well as concerns about the time it took to produce the Models, one of these
respondents also felt that involving representatives from more industries in
creating the Guidelines would encourage greater use:

“I think if you want people to accept that the Guidelines apply to them then
it’s not a bad idea to get more people playing a very big part in developing
these Guidelines.”

Related to this was the view expressed by another respondent who felt that his
company “should have been involved in them [the development of the Models] a lot
earlier. Because the minute we got involved we made our utmost to actually
implement them.” Two respondents, both Task Force members, expressed concerns
that during the facilitation of the Guidelines it was not completely clear who the
Models were to be targeted at:

“Because the Guidelines when we were making them, they were a little bit
vague sometimes so we had to actually work out whether we were going to
inform parents or whether we were informing children or carers.”

The most negative views about the facilitation process came from one Manager who
viewed the Models as the product of a PR exercise to meet the demands of children’s
charities in response to negative media coverage concerning the Internet:

“The agendas that were behind the creation of the Task Force models were
wrong-headed… the way this whole process is being conducted is very much a
PR exercise driven by the children’s charities and the politicians and in effect,
not actually improving safety of children at all. The thing is children are being
abused and tortured right now. All over the World. And these models have not
actually done anything at all to rectify that.”

Balance of the Models between industry and user needs


Just two respondents did not view the Models as striking an appropriate balance
between industry and user needs:

“The Models do very little to actually protect children. So I would say that on
one hand they do too little, but it might be better to say they don’t do the right
thing.”

Another respondent was more positive about the Guidelines but felt more could be
done to protect users:

“I mean for us it definitely wasn’t too much, because as you can see we were
doing most of the [recommendations] pre-Models anyway. I think for us it’s a
really good start and I know that there’s a lot of on-going work to do more with
these Models and so I think even more detail would probably be helpful.”

The remaining 11 participants with some familiarity with the Models felt that the
Guidelines struck a “reasonable” to “perfect” balance between industry and user needs:

“Perfect balance. It’s nice that the Home Office consulted with Industry
Members.”

Two Managers acknowledged that the Models represented “a first step and actually
does get people thinking about it” and that “there’s always work to be done, so it’s a
benchmark on which we can actually build on in the future.”

Further information and additional support


Seven of the 15 Managers felt satisfied with the level of information in the Guidelines.
Of the remaining six interviewees who had some familiarity with the Models, the most
commonly cited omission concerned knowing how and to whom to report instances of
serious abuse (four cases):

“I think additional support maybe with regards to what to do in instances
where people seriously abuse the service, so more related to the police… The
specific problem that we had was that we didn’t know the appropriate course
of action of what to do when reporting things to the police and then we were
passed on from one department to another.”

Similar views were aired in the ‘Responsibilities of other organisations’ section above.
One other Manager felt that there could have been more information regarding
support networks for ISPs:

“There are support mechanisms around for ISPs but not a great deal. So I
think it would sort of go back to the same thing of awareness and other ISPs
knowing, first that they’re there, and had I not been involved [in the Task
Force] I’m not sure I’d have known where I had to go to find out.”

One more respondent felt the Models could have gone into more depth on public
profiles:

“I think they could maybe do more on member directory, you know, the
personal profiles. It’s a case of, you know, more information about what we
should and shouldn’t be doing in terms of protecting user information, linking
to photos, not linking to photos, if there’s anything on data verification
systems it would be quite useful, but not only from a UK perspective but from
a global perspective.”

Manager suggestions for future directions


In addition to the view that the Models could have included more on public profiles,
four interviewees requested that future guidelines apply to other Internet communication
services such as message boards and e-mail, as well as mobiles and hand-held devices:

“There do seem to be different parts to this that need to be clarified a bit,
specifically about message boards and forums, because they’re not discussed, it’s
all chat and IM, but a huge amount of time spent online by kids and young teens is
on message boards and I think that those can also be a dangerous place if… proper
safety messages aren’t put in place beforehand.”

One Manager also felt that there should be a “more co-ordinated approach,
Governmental, to tackle spammers” and that police should have more powers to punish
those responsible, whilst another respondent called for future Guidelines to
recommend that ISPs encourage users to protect their PCs from spam and viruses:

“I think there are some issues in two areas. One is around asking ISPs to be
more proactive in encouraging their customers to protect themselves against
viruses in particular by attaching Windows applications with their software,
and I think secondly in terms of protecting their PC against e-mail borne
viruses. And I think ISPs have a role to play in absolutely making clear to
customers good practice in terms of protecting themselves against viruses and
e-mail spam. Those are sources of lots of malicious content.”

Two of the Managers felt more work could be focused on age-related systems or
requirements. The second of these respondents felt that the Model on chat should
recommend that, as well as providing a chat room for the general public, ISPs should
provide separate rooms for children and adults, in order that young people can better
be kept safe from potential adult abusers and that adult material can be drawn away
from general access rooms into age-restricted adult areas:

“Part of the problem with chat for example is that in obvious cases children go
into chat rooms that are aimed at the whole population and we feel that that’s
probably inappropriate that there should always be a separate chat service for
children and hence we created a separate walled garden area that has different
parameters and caters for children, is better moderated, doesn’t allow private
messaging and so-on. So that’s an example of a step we’ve taken that isn’t
actually required by the Guidelines but we feel is essential for the protection
of children. Conversely what we’ve done is we’ve actually created an adult
area because we know that where children are restricted from going they will
go so we try to take a lot of the adult content that happens on our main chat
service to move it into a more restricted environment for adults thereby
cleaning up the main chat service as well so when children do go into it there
is [no] filth in there.”

One of the respondents felt that work on ‘self-labelling’ should be taken further and
called for the originators of all published material to be identifiable:

“One thing that I think would be helpful though would be a requirement for
originators of web content to be required to make their identity known… so if
you go to a website it should be possible to identify who is responsible for that
material. Because if you buy a book, the author, the publisher and the printer
will always be somewhere on the book. There is a legal obligation for the
publisher to do that so that if there is a complaint the complainer knows who to
complain to.”

This respondent was also one of two Managers who felt that resources should be
directed towards setting up sites devoted to educating users about risks:

“What would be nice is that if there is going to be a child safety campaign that
it should be done so everybody could benefit from it, rather than just my
company. I’ve got 300 customers. For me to do a competent exercise in this
area is going to cost me £10,000. The common sense approach here is to have a
single one-stop shop which deals with these issues, and there are many of these
shops already. But signposting to places like IWF is the way to do it, rather than
put the obligation on the ISPs to put their general solutions warning on a
cigarette packet.”

One of this pair stated that he saw Internet safety as “purely an educational issue,”
echoing themes identified earlier concerning the need to educate users and the view
that the Internet is mainly dangerous due to a lack of awareness. Another Manager
praised the idea of using celebrity figures to educate children about the risks of
the Internet, and felt that this was something that should be given more
attention:

“Today’s generation will not understand; they think the Government is telling
them what to do. So they will rebel. If you get celebrities associated with chat
it will get under-age people to listen.”

Whilst two respondents felt the law needed to be tightened in regard
to originators of inappropriate content, four of the Managers acknowledged that the
global and open nature of the Internet limits the impact that UK Guidelines can have on
Internet safety. Two of these Managers went on to call for the Government to apply
pressure on foreign Governments to exercise control over both content suppliers
and distributors:

“I think that Government has other functions as well in terms of acting on
international Governments to control the kind of international influx of
dubious material. Things like pornographic spam. I think that’s a very difficult
thing to crack on a national ISP level when obviously a large part of it is
coming from overseas.”

Chapter summary
• The majority of Managers sampled were aware of the Guidelines though one
fifth had not heard of their existence prior to contact by the project
• ISP Managers held highly diverse views as to responsibility for safety, but
parents were assigned principal responsibility
• While acknowledging the potential dangers of Internet usage, Managers tended
to emphasise the overall benefits of usage and the importance of personal
responsibility and appropriate supervision
• Resistance to Model recommendations within the sample was low
• Echoing earlier findings, Managers too saw education (of both parents and
children) as an effective and enduring means of promoting safe Internet usage
• Of the twelve Managers familiar with the Guidelines:
o Five reported having made changes to policy, practice, and/or
operating structures since their publication
o Five felt there had been no need to make changes as measures were
generally in place pre-Models
o Two felt they did not comply with the Guidelines and had no plans to
do so
• Whilst some Managers acknowledged the costs associated with implementing
the Models, none felt that the net effects on business were negative
• The most commonly cited positive consequence of the Models was an
increased public and industry awareness
• The majority of Managers familiar with the Models viewed their impact as
having been mainly positive - striking a 'reasonable' to 'perfect' balance
between industry and user needs
• Quantitative data indicated a high level of baseline concordance with Model
recommendations (67.3%) and this was highest with regard to the Model for
chat providers (77.5%). Despite this, Model impact was still moderate (18.1%)
and appeared to have most influence on IM services (27.9%). The Model that
saw least concordance was that aimed at hosting providers.

Key points for future action
• Managers tended to see parents as having primary responsibility for helping
ensure their children’s safety, with teachers to varying degrees being seen as
having secondary responsibility
• Factors that inhibited implementation of the Models included lack of
familiarity with the Guidelines, financial constraints, and an opinion that the
Models were sometimes too specific or irrelevant to the services offered
• Concerns were raised regarding dissemination, and Managers felt that some of
the companies not involved in their development may not have seen the
Models

Chapter 5: Moderators’ perspectives

The sample
Ten Moderators were interviewed. The average length of time respondents had been
moderating was 2.8 years (SD = 1.9 years). Nine of the respondents were currently
responsible for just one site, though two had moderated
contemporaneously on several sites in the past. Three of the respondents did not work
for chat sites as such but rather for forums, and a fourth interviewee represented an
independent moderation company whose Moderators moderate chat rooms for several
companies. For the remainder of this text this respondent will be counted as a chat
room Moderator. The remaining six represented chat rooms of varying sizes, both
commercial and non-commercial, including sites that provided only chat and ISPs for
which chat was just one of the services offered.
Four Moderators worked on a part-time voluntary basis, one worked full-time
in a paid role, three moderated part-time within a full-time role, another, though no
longer moderating, worked full-time in a senior position in an independent moderation
company, and the final interviewee worked for a non-profit organisation as a
volunteer. All Moderators who volunteered worked from home.

Moderators’ roles

Descriptions of primary responsibilities


The 10 Moderators were asked to describe what they perceived to be their primary
role whilst moderating. Of the seven chat room Moderators, six emphasised protective
factors such as “making sure the chatters are safe” and “ensuring the safety of the
children who are chatting in the chat room.” Two of the forum Moderators
emphasised their role of chairing debates, whilst the third forum Moderator, as well as
speaking of protective measures involving deleting inappropriate threads and settling
disputes between users, also emphasised her role as “partly technical to make sure
people with any registration problems are sorted out.”

24-hour moderation
The second question in this section asked Moderators whether the sites they
moderate are moderated 24 hours per day. Of the seven chat room Moderators, three
claimed that their sites were moderated continually, and one more provided
moderation at all times the room was open. Three of the sites were not moderated at
all times, but two respondents said that they were moderated most of the time:

“We have got host operators from say the States and Canada so obviously they
are on a different time zone so they are generally there to do the work during
the night. But, no, as a rule we try to get our hosts to host when they’re
needed, so when there aren’t four or five other hosts on, but it’s not 24/7
coverage, no.”

This question was perhaps less relevant to two of the forums, a Moderator from one of
which said that their boards were moderated “as and when any of us are there.” One
of the forums, which took the form of a live debate, was moderated at all times when
open.

Meeting and greeting


Moderators were asked whether they hosted as part of their role: whether they greeted
chatters when they entered the room, and whether they said goodbye when chatters left.
Eight of the 10 Moderators responded yes to this question. The remaining two
Moderators reported that they did not play host to their chatters; however, one of them
explained that Moderators are “easily identifiable by symbols by our
names so that people know we are there… if anyone approaches us on an individual
basis then we also respond to that as well”. The other felt the question
was not applicable because they moderated a forum, but mentioned that if they saw a
new user they would welcome them with a post.

Answering questions and site direction


All 10 respondents answered positively to two items asking whether they answered
user questions and helped people find their way around the site.

Advertising and moderation of links


All 10 Moderators tended to moderate advertisements within chat/posts and seven
respondents explicitly stated that advertising was prohibited (this included the
independent moderation company representative who said that most of their clients do
not permit advertising). Responses to users who did advertise ranged from telling the
abuser “not to do it again” (one case) to banning the advertiser (two respondents). All
three forum Moderators explained that posts containing advertisements would be deleted.
Similarly, all 10 respondents moderated links, three explicitly stating that posting
links was not permitted on their sites.

Moderation of inappropriate behaviour


Nine of the 10 Moderators claimed to moderate inappropriate behaviour. The need for
such Moderation on the adult political forum was greatly reduced, but this respondent
explained that in the very few instances where inappropriate messages were posted,
action was taken:

“The only time our site was really flooded with messages, it was from a
campaigning group… and so we decided instead of deleting them we opened
another section and so all the messages coming from that group were moved
under that section.”

The tenth respondent, a forum Moderator, felt their moderation of inappropriate
behaviour was vague: “We are very loose about it.”

Familiarity with the Models

Of the 10 Moderators, three were familiar with the Good Practice Models prior to
contact, although a fourth respondent became familiar with the Guidelines prior to
interview as a result of a web link the researchers sent to him. Three further
interviewees had glanced at the Guidelines through this link prior to
interview but did not feel able to respond to Model-specific items, and an eighth
respondent had heard of the Guidelines prior to our contact but was not familiar with
them.
One respondent, whilst viewing the concept of the Models as positive, had
complaints about the dissemination of the Guidelines:

“The first I’ve heard of this was two weeks ago. So it might be the best
guidelines ever but if you don’t get it out to the people on the front-line like
myself, and I actively look for these things, and I’ve been in the DFES for
meetings etc. extensive meetings with members of all the partners in the Home
Office etc. and no-one has ever mentioned these Models. And your contact
with [respondent’s organisation] was the first time we’d ever heard of them. I
do now have a set, a photocopy, in my action tray but I’ve not had a chance to
go through them. I obviously can’t comment on their actual detail but they do
strike me as being a good idea.”

Recruitment, training and supervision

Supervision
Seven of the Moderators worked for sites in which there was some form of
supervision (indeed, in five of these cases the interviewee was the main supervisor).
The remaining three Moderators had no formal supervision structures in place while
they were moderating.

Job descriptions
Six of the 10 said that they had a detailed job description for their moderating role.
Four had no specific job description, but two of those respondents did have job
descriptions for other aspects of their work.

Recruitment
Nine of the 10 Moderators had been in their role for more than one year, whilst one
reported having started their position after the Models had been published (and was
recruited through connections with the site owner). Five of the interviewees began their
moderating careers as regular chat room users and were later recruited or headhunted
into a Moderator role. These interviewees either completed online application forms
(two cases) or were approached by site representatives (three cases). Three more of
the interviewees were recruited for a different position for which moderation was part
of their role. The final interviewee reported that he was no longer actively a
Moderator but had set up an independent moderation company as a result of his
experiences.

CRB checks during recruitment
Of the seven Moderators who knew whether or not Criminal Records Bureau checks
had been carried out on them when they were recruited, just one responded positively
(an individual paid to moderate as part of their role). A representative of a non-profit
chat site stated:

“What we have done is that we get to know people over time, and see how
they get on with the chatters and anyone we think is suitable is taken on-board.
But they’re also given an online interview, things like that, but we’re moving
towards a more secure system so, we’re also moving towards taking down
their contact details and we want to do as many checks as possible in the
future.”

Similarly, two respondents for whom no security checks had been conducted said that
this was something their organisations had introduced, or planned to introduce, into the
recruitment process.

Model impact on recruitment


Four Moderators felt that such changes in recruitment policy might have been
encouraged by the Models:

“We have got more stringent checks in place so yes. Before there were just
background checks.”

The founder of the moderation company stated that they carry out CRB checks on all
of their Moderators, and that the Models have had some impact on this process:

“[Company name] have raised its standard of recruitment as a result of the
Models. Moderators are registered with CRB whereas three years ago they
weren’t. We now carry out enhanced checks on all staff – three years ago we
didn’t.”

Whilst two Moderators who were familiar with the Guidelines prior to being
contacted by the researchers felt that the Models had had no impact on Moderator
recruitment processes, two respondents who were not familiar with the Guidelines felt
that their recruitment process had altered in the last year: one felt that their interviews
were now more in-depth, whilst the other felt that their focus had changed to recruiting
Moderators with better people skills:

“We hand select the people, now, we feel have the right attitude... we are now
more focused on people with people skills rather than technical abilities.”

Training
Of the 10 Moderators interviewed, only one respondent said he had received no training
for the Moderator role, though this individual did provide training to other Moderators.
This participant moderated as one aspect of his job and worked for a non-profit,
family-run chat site. The remaining nine interviewees reported that they had received
training to some degree, ranging from two hours to six months. Of these, three
reported that the length of Moderator training in their organisation depended on how
long the trainee took to grasp the technical and, in some cases, human elements of
moderating. One respondent described the length and nature of their own Moderator
training as ad hoc:

“Just as and when, just asking how do I do this and they’ll tell us.”

Training was most commonly provided via distance learning facilitated by chat
rooms, IM and phone conversations. This was the case in several instances including
two Moderators who were trained through a specialist Open University course. One
interviewee in a senior role said that their Moderators were trained face-to-face using
a training programme he developed. Of the nine respondents for whom we have data,
seven reported that their training covered child protection issues.

The Models’ impact on training


As with recruitment, two of the four respondents familiar with the Guidelines believed
the Models to have had an impact on the way they trained Moderators, while two did not.
One of these two respondents explained that this was “mainly because we’re an
independent site and our main concern is with safety and we achieve the end result the
best way we see fit.” Both of the respondents who felt the Guidelines had had an
impact here felt that more time was now spent on training:

“More time is spent on it. Speaking for our Moderators, we need to make sure
those who do moderate are more aware of all the issues that can and can’t
come up through the Guidelines that we’ve set, whereas when we just started
we were just there to watch and we didn’t have a set of Guidelines that we
followed, we were just looking out for any abusive language.”

Of the six Moderators unfamiliar with the Guidelines, three felt their company’s
training had altered within the past year: one believed their training to be more
in-depth, and another felt their guidelines were now more comprehensive. The third
Moderator cited a change in the focus of training from technical issues towards
interpersonal skills:

“I think it used to be about 50% how to use commands etc. but now it’s more
about getting people to calm down and stop swearing and so on. I think it’s
more aimed at that end, you know, getting people to calm down, finding out
what’s going on, what’s going wrong, how we can help them. We try to calm
people down to minimise aggressive commands.”

One Moderator who worked for a site where training had not altered called for a
greater focus on child protection issues, whilst another felt that Moderator
training had altered but believed there should be a greater focus on understanding
children:

“You definitely need to be aware, to remind people, that young people are less
developed than adults and it seems like an obvious thing, so certain allowances
need to be made and certain efforts need to be made when you’ve got young
participants involved.”

Finally, the respondent who completed the Open University course called for more
professional training programmes:

“I think there should be much more training available to Moderators, as far as
we know the OU is the only course available to Moderators, so there should be
more training provided.”

Moderators: Their experiences

Inappropriate material
Respondents were asked 19 questions that referred to their experiences as a
Moderator. They were asked whether, in their experience, some chat users resented
being ‘babysat’. One interviewee reported that this did not occur because he
moderated an adult forum where serious discussions of a political nature took place,
but all other interviewees confirmed that it did. Two respondents went on to explain that it was generally teenagers who resented being moderated, though one noted that sometimes adults also felt this way:

“In the main the people that resent it are usually the teens… they think ‘oh we
don’t need a host’ etc. Also we find it sometimes but not often in the adult
rooms where, in one way it’s more understandable, they say ‘hang on, we’re
adults here, why are you here?’ But it’s quite rare that you get that on a regular
basis.”

A substantial part of the section on Moderators’ experiences was concerned with measuring their exposure to inappropriate behaviour, described as racist, sexist, threatening, abusive, sexually explicit or pornographic, in sites not aimed at an adult audience. Further, interviewees were asked whether this material had been ambiguous or explicit in nature. Table 5.1 details the results.

Table 5.1.
Proportion of Moderators who had seen inappropriate material

                                                      Number of          Respondents who
                                                      respondents who    felt this material
                                                      had seen this      to be ambiguous
                                                      material in their
                                                      chat rooms
Sexually explicit language (in sites not aimed at
an adult audience)                                          9                    1
Abusive material                                            8                    1
Threatening material                                        8                    3
Racist material                                             6                    3
Sexist material                                             6                    2
Links to pornographic sites                                 5                    0
N = 10

Though the above table shows the number of Moderators who had encountered various forms of inappropriate content, it does not indicate how frequently those respondents had come across such material. The qualitative data revealed that links to pornographic sites were comparatively rare, with Moderators stating that such links were seen very occasionally, whilst abusive language was the form of inappropriate material witnessed most often. When comparing the frequency of racist and sexist material, two respondents gave contradictory answers:

[Have you seen sexist material in your chat rooms?] “No, not as much as racist
material. It does appear but not very often. It’s normally aimed at one person,
not very much I would say.”

“Yes, again it’s all over the place. It seems to be that people are more aware of
sexism than any of the remarks like that or any of the issues like that.
Unfortunately it seems to have escalated slightly, I don’t really know why; it’s
just something I’ve noticed.”

When interpreting the table it is also important to note that one Moderator had not seen any dubious material due to the nature of his forum (a site for adult political discussion). Whilst the frequency of specific material cannot be determined, it can be concluded that it is not uncommon for inappropriate material to slip through, despite language filters being in place in many chat rooms. Where respondents were unsure as to what constituted inappropriate material, the highest proportion of doubts related to racist and threatening material. Doubts concerning the latter all stemmed from uncertainty as to whether or not the material was meant to be humorous:

“Sometimes it’s possible it may be done in jest but we don’t take any chances
so it will usually result in a ban from that room.”

Uncertainties with regard to racist material tended to relate to context and a lack of background information:

“There are doubts sometimes, especially some adjectives used especially, you
know, it is a bit borderline and so a lot of these things depend on context, and
sometimes they are in doubt.”

Sexually explicit language and links to pornographic sites prompted the least uncertainty, mainly because of the explicit nature of the language, the ability to follow links through, and the names of the links themselves.
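
As noted above, inappropriate material not uncommonly slips past the language filters that many chat rooms run. A minimal sketch of why, assuming a simple word-list filter (the blocklist, substitution map and function names below are hypothetical illustrations, not any audited provider’s implementation): chatters routinely defeat naive matching with spacing, punctuation and character substitutions, so a filter must normalise text first, and even then it can be evaded.

    import re

    # Hypothetical blocklist; real services maintain far larger, curated lists.
    BLOCKED_WORDS = {"badword", "swear"}

    # Common character substitutions used to disguise words.
    SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}

    def normalise(text: str) -> str:
        """Undo simple evasion tricks: substituted characters plus punctuation
        or spacing inserted inside words."""
        text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())
        return re.sub(r"[^a-z]", "", text)

    def naive_filter(message: str) -> bool:
        """Plain word-list check: misses 'b a d w o r d', 'b@dword' and so on."""
        return any(word in message.lower().split() for word in BLOCKED_WORDS)

    def normalising_filter(message: str) -> bool:
        """Check the normalised text instead; this catches simple obfuscations,
        though deliberate misspellings can still slip through."""
        return any(word in normalise(message) for word in BLOCKED_WORDS)

    print(naive_filter("you b@dword"))        # False - the evasion succeeds
    print(normalising_filter("you b@dword"))  # True  - caught after normalising

Matching over normalised text also produces false positives (the well-known ‘Scunthorpe problem’), which is one reason why automated filtering supplements, rather than replaces, human Moderation.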

Potential abusers
Moderators were asked whether they had ever suspected potential child abusers of
entering their chat sites. Five of the 10 interviewees reported that although it was rare,
they had experienced incidents where they suspected child abusers were using their
services. One interviewee explained that he had only been suspicious once in his
experience. Another highlighted some of the difficulties in trying to ascertain the
intent of older chatters speaking to children on a site:

“I think that we get a lot of people coming into chat with very suggestive
names. We also get quite a few overage chatters that will go to the teens’
room. As a consequence, I know it shouldn’t be assumed that they’re there to
do wrong. I mean someone with a bad name yes that’s horrendous - they are
dealt with, but it might be a child being, you know, being a child. However as far as over age chatters are concerned that is taken very seriously… I am sure
that they, potential paedophiles, are around however I’d like to think that we
are able to spot them. Certainly when it comes to users under the age of 19
they don’t really bother me, but a lot of the overage chatters might be 20, 21,
22 - they are not going in there for bad reasons, they’re often trying to get back
to talk to their friends who aren’t allowed in the other rooms so it has to be
very very carefully done.”

The most serious case that was mentioned involved someone who was “soliciting girls
under 16 and was offering them payment for sex”. This incident was reported to the
Police. One Moderator did not wish to comment on courses of action following suspicion of abusers. The other four described several different courses of immediate (non-reporting) action in response to a suspected abuser: two respondents, including the Moderator who reported the incident to the Police and one who reported suspicious individuals to his Manager, “removed” such chatters:

“Most abusers would probably be kicked out of the system before it would be
allowed to reach that stage but if someone did slip through the net then we’d
obviously be vigilant about things like that.”

One Moderator who was unsure of the intentions of a suspicious character explained
that the suspected individual was contacted:

“I got their account details and tracked down who they were and sent them an
email.”

The final Moderator’s site placed an emphasis on monitoring suspicious individuals and communication between Moderators about such persons during shift handovers:

“Well as we say we monitor anyone who looks like they could be a potential
abuser. The teen chatters are actually very good. They are very aware. And
invariably they might whisper us saying “Oh, Paul has just told me he’s 25”,
so our immediate action is to tell the person to put this Paul on ignore. And also then the ID is made a note of and all that information is then passed on to
our team leaders when we do our next shift.”

Two of the forum Moderators who had not suspected a potential child abuser of
entering their site believed that they would report a suspected individual. One more
said they would report to a line manager, who would then pursue the issue with the relevant authorities; in the first instance, a block would be placed on the suspect user.
The other was unsure of what exact course of action they would take but said they
would research the issue:

“I’d have to do research on it and find out. Because I know there’s a place on
the net that you can report things to.”

Effects of the Models

Recruitment and training


Model impact on Moderator recruitment and training has been discussed in
‘Recruitment, training and supervision’ above.

Chat safety and user awareness


Whilst Model-specific questions were only applicable to four of the respondents who
had some familiarity with the Guidelines, the interview protocol also included items
that focused on general perceptions of chat safety and user awareness, and how these
had altered in the year since the Models were published. With regard to user access to safety information, eight of the 10 respondents felt that this had improved in the last
year, one respondent felt there had been no change, and the tenth stated that she did
not know and that these things were “very hard to judge”. Of those who believed
information had improved, two credited this to safety being “more high profile” and one attributed it to the media.

In discussing user access to safety advice, three respondents also felt that information
was more readily available due to Moderators themselves becoming more aware of the safety issues involved:

“We as Moderators are more aware, not more aware - we’ve always been
aware, but extra aware of safety issues. I think that the chatters don’t actually
pride themselves on being very aware of this.”

Awareness was also a theme identified in responses to an item concerning the frequency with which users gave out personal information. Four Moderators felt that users had become more protective of their information, six felt that this had stayed at a constant rate over the past year, and none of the interviewees felt that the frequency had increased. Two of the respondents who felt that the frequency had not altered identified awareness as a determining factor in whether or not a user chose to give out information:

“They [children] know the difference. I mean I remember the first time I used
a chat room, you know, probably 1995 or something like that, and even then
we were very aware that you had to watch out and stuff like that. I think that
people are aware that, but I think the problem with the likes of certain chat
services is that over an extended period of time wariness can be broken down,
but any initial, I think that kind of stranger-danger thing is at the forefront of
every young person’s mind really. I think that young people aren’t naive
enough to go out at an early stage and have a relationship with somebody
online - to give out information.”

As with the frequency with which users were seen to share their personal information, most respondents felt that chatters contacted Moderators at about the same rate during 2003 as they did during 2002 (five cases). One respondent felt this question was not applicable and three felt that users approached the Moderators more frequently. Reasons cited for this were that the visibility of Moderators had increased (two cases), a new reporting mechanism had been developed (one instance) and safety warnings had become more prominent, resulting in chatters asking questions about things they were not aware of, such as ‘ignore’ features (one respondent).
None of the Moderators felt that chatters’ use of safety tools had decreased over the previous year. Four felt usage had remained constant and a further four felt it had increased (one respondent was not sure and a further interviewee felt this question was not applicable).

Impact on the Internet and chat industry


Moderators familiar with the Guidelines were asked whether they felt that the Models
had had an impact on the Internet industry since they were published. Whilst one of the four Moderators felt that this impact was “not as much as would probably be desirable”, the remaining three felt that the Guidelines had made an impact, and all four felt that the effects of the Models were generally positive. One example given of how the Models have had a positive effect was in helping:

“ISPs to become more responsible towards their user base and to offer more
protection to their user base.”

Two of these four respondents echoed sentiments expressed by Managers in the previous chapter, stating that the Models had made the Internet safer through increasing awareness:

“It’s raised awareness amongst parents as to what can happen in chat rooms.
Parents and kids are more aware to not give personal details out in chat
rooms”.

Specific to chat, one of the Moderators felt that the Models had had no impact on his site, given that his organisation had “always worked within those Guidelines”; this Moderator did, however, see the Models as a positive step. One interviewee explained that the Models would definitely have made those sites that had implemented the Guidelines safer, but went on to comment that, due to the non-mandatory nature of the recommendations, there would still be chat rooms that were not safe:

“Not everyone is obliged to follow them. I think those that do will have a safety
focus, it would actually be benefiting them but there are a considerable amount
of chat rooms that either don’t know about the Models or disregard the Models,
and the result is there are still places that go ahead with unmoderated chat out
there.”

Two of the four believed that the Models had helped make chat safer. Cited effects included more safety messaging, more opportunities for users to call for help, increased user awareness, improved features facilitating better Moderation and, in one case, greater levels of responsibility placed upon both users and Industry for safety:

“Users are more aware of safety issues and the technical features built into the
chat software allows for better moderation. There are more safety messages
available, more opportunities for users to call for help or to raise a problem. The
Home Office Guidelines have encouraged the industry to do more and have
encouraged the users to take more responsibility themselves.”

When considering the views of those seven respondents not familiar with the Models,
one interviewee felt that a greater awareness of safety had developed during 2003 and another reported that he felt chatters had taken on more responsibility:

“It seems that there has been talk about [the Models] and more about the nature
of online safety and people are more aware how they work and what they’re
about.”

“Due to all the news that’s hit the headlines there’s more people on the
lookout for it and a lot more people willing to step forward. There’s more
people coming forward. The users of the chat rooms are more willing to turn
around and say that bloke over there has been talking to that 14 year old girl, I
don’t think in the right way. And the 14 year old girl might not have said
anything.”

However, one further Moderator not familiar with the Guidelines did warn that in the
last year their service “had more overage users in the teen room”.
Whilst overall the four Moderators familiar with the Models viewed them as a
positive thing for chat safety, only two of these were able to name features that had
been put in place as a direct response to the recommendations. These included a reporting and panic mechanism and higher prominence of safety messages in the form
of ‘news-flashes’ within the chat rooms.

Specific content of the Models


All four of the Moderators familiar with the Guidelines felt that they had been pitched
“about right” in their recommendations (i.e. none of the respondents felt they had not
gone far enough or gone too far).
Whilst one of the Moderators felt that there was no particular “stand-out” feature of the Guidelines that constituted the best recommendation, this interviewee did comment that:

“I think all the points made in the Model are actually beneficial, I don’t think
there’s anything… superfluous or irrelevant, everything in there helps safety
but there could be more things in there, but that’s from a personal view
because we’re fully committed to doing more and we’re hoping other people
will follow suit as well.”

One Moderator felt that warnings to not give out personal information during chat or
when completing public profiles were the most useful recommendations whilst
another felt that the safety tools were the most beneficial. The fourth respondent
familiar with the Guidelines had no preferred aspect of the Models. This same respondent was also unable to identify any particular recommendation that he found the least useful or effective, and two of the Moderators felt that all suggestions were useful. The remaining interviewee questioned the usefulness of an alert feature on a properly Moderated site, but accepted that such a tool could be useful in private messaging:

“The alert buttons really aren’t needed if somebody’s moderating it properly because they’ll see what’s going on. I think where they will come into their own right is with private messaging with people which may be missed by Moderators.”

One Moderator unfamiliar with the Guidelines did point out a problem in getting
some chatters to make use of the ‘ignore’ tool:

“Unfortunately we’ve got an ignore feature and as most people want to know
what’s being said about them and what’s being said to others, that doesn’t get
used nearly as often as I would want them to use the facility.”
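
The ignore feature referred to here is conceptually simple: a per-user suppression list consulted when messages are delivered. The sketch below is an illustration only, with hypothetical class and method names, not any particular provider’s implementation.

    class ChatUser:
        """Minimal model of a chatter holding a personal ignore list."""

        def __init__(self, username: str):
            self.username = username
            self.ignored = set()

        def ignore(self, other: str) -> None:
            self.ignored.add(other)

        def receive(self, sender: str, text: str) -> None:
            # Messages from ignored senders are dropped silently; the sender
            # is not notified, so there is nothing to retaliate against.
            if sender in self.ignored:
                return
            print(f"[{self.username}] {sender}: {text}")

    ann = ChatUser("Ann")
    ann.receive("Paul", "hello")   # delivered
    ann.ignore("Paul")
    ann.receive("Paul", "hello?")  # silently suppressed

As the quotation above suggests, the protection is opt-in: it only helps once the harassed chatter chooses to switch it on.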

Benefits and disadvantages for chatters, Moderators and chat providers


One of the four respondents familiar with the Guidelines declined to comment on the items relating to this section, and a second felt unable to comment on any advantages that may have been brought to users. The two remaining Moderators both highlighted the increased safety of the chat rooms themselves as a benefit for chatters, and one of them also mentioned that the Models have:

“Raised awareness of the dangers and given them the reassurance that they can
contact somebody if they’re being harassed etc. Whereas before they might
have not been protected.”

None of the three respondents who commented felt the Models had brought many disadvantages to chatters, though one did mention that he considered Moderation in chat rooms for 16-21 year-olds a disadvantage, presumably because this would in part prevent this group of young people from chatting in an age-appropriate social manner. Similarly, another respondent thought some users might perceive the Models as restricting:

“Although the users may argue that it may limit their freedom. But from my
point of view I don’t see that too much as a downside.”

All three Moderators felt that the Guidelines had provided Moderators with benefits due to the Models providing a frame of reference for good practice:

“Reiterates what we have in place and we know we’re doing the right thing.”

Just one Moderator talked about a downside for Moderators and chat services:

“It would make things more difficult for whoever uses them. Because they’ll
be concerned about breaking the rules.”

Future directions
Moderators were asked if they had any recommendations regarding Internet safety
and how the Guidelines could be further developed. One suggested making the
Guidelines mandatory and having systems and punishments in place for abusers:

“I think, maybe a bit more extreme, maybe even going towards a legislatory
route making sure that everyone who presents a chat room has to do certain
things, most of which are actually suggested by the Guidelines, and maybe
stricter penalties for people who abuse the service, and systems in place for
actually dealing with any abusers or perpetrators.”

Three suggested that more visible policing of sites may cut the risks of abusive
behaviour:

“We think one of the reasons why we don’t see abusive stuff on our site is
because we are very visible on the site. And in our previous research when we
analyse other forums, for example in different Government departments, that
the worst record was in the forums were the Moderators were not known by
name at all but only as administrators. So people who were participating in the
forums reacted very much like “oh there is a big brother” and they were very
much against the administrators for the reason because they didn’t know who
they are. And then later when we interviewed them it turned out they had a
policy of silent moderation which meant they didn’t comment on the site at all
regardless of whatever query it was, which really caused a lot of paranoia on
the site.”

One respondent felt that whilst safety messages are useful, a message provided by the
Government might carry more weight:

“I mean I don’t know if the Government has any plans to produce any type of
standard material to put on the front of our site so that it would be a Government thing. However official and efficient a site runs when something
is seen to be a Government warning it’s taken, it’s something people will look
at more I imagine and all that creates awareness.”

Whilst this opinion contrasts with that expressed by one of the Managers (who felt that
teenagers are more likely to rebel against Government suggestions) another
interviewee also felt the Government could help raise user awareness by providing
“some kind of handout or guide in taking part in online facilities.” All in all,
increasing awareness was viewed by most of the 10 respondents as being of key
importance. Three highlighted the role schools can play in increasing awareness
through education and two Moderators expressed views suggesting that the media
should be involved.

Whilst awareness of safety issues is obviously a key area of concern for most
Moderators, one of the respondents stressed that technology has an important role
also:

“A lot of the improvement in child safety has to do as well with the fact that
there’s more technical restrictions, like if you go into schools - their computers
are choked up to the hilt with preventative measures and firewalls, selective
URLs, it’s all in there. Now you can do that with ISPs as well. And it’s not
just an awareness drive it’s a hardware drive as well and software initiatives.”

As well as providing suggestions for future action, some areas of concern were identified through the respondents’ discourse. One respondent talked about problems associated with private messaging within chat, due to Moderators not being able to monitor such interactions. Another respondent mentioned, however, that their company had a facility to log the discourse of private messaging. One respondent highlighted problems with a ‘whisper’ feature that some sites offer, a function similar to private messaging; however, rather than a separate box or ‘room’ opening up for the private conversation to take place, the text appears in the main room but is visible only to the whisperer and the person being whispered to. As this respondent commented when discussing how obvious it is to identify what constitutes abusive material:

“Everything that has been said or typed up is obviously taken seriously. It’s all
obvious unless somebody goes and starts chatting to another chatter via
whisper. Say for example Paul came in and doesn’t say anything on screen,
but then Ann writes a message on the screen: “get out of my window Paul” or
“Paul you’re being horrible” and what we try and do, bearing in mind we
cannot see whispers, is to advise that the person who’s being harassed or
abused or whatever to use the ignore button… However what can happen is
that sometimes people will have a row. Suppose Ann and Paul have had a row
that day and they’re usually good friends, she knows that Paul would probably
get a little whisper reminder so it may be untrue that he is harassing her.
There’s a lot of little games that get played between chatters and we’re not quite certain how you deal with these things.”

A further problem with the whisper feature was identified when users giving out personal information were discussed: some people may think they are talking via whisper but accidentally let a name or a number slip out into the main room. Future Guidelines may seek to explore recommendations around newer facilities such as whisper features.
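
To make the mechanism concrete, the following is a minimal sketch, under assumed names and a much-simplified data model, of how a whisper feature might route messages: the whispered text stays in the main room’s message stream but is rendered only for the whisperer and the recipient, which is also why Moderators viewing the room cannot see it.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Message:
        sender: str
        text: str
        whisper_to: Optional[str] = None  # None means an ordinary public message

    @dataclass
    class ChatRoom:
        messages: List[Message] = field(default_factory=list)

        def post(self, message: Message) -> None:
            self.messages.append(message)

        def render_for(self, viewer: str) -> List[str]:
            """A whisper sits in the main room's stream but is shown only to
            its sender and recipient; anyone else, including a Moderator,
            sees nothing - the monitoring gap described above."""
            return [
                f"{m.sender}: {m.text}"
                for m in self.messages
                if m.whisper_to is None or viewer in (m.sender, m.whisper_to)
            ]

    room = ChatRoom()
    room.post(Message("Paul", "hi everyone"))
    room.post(Message("Paul", "what school do you go to?", whisper_to="Ann"))
    print(room.render_for("Ann"))        # sees both messages
    print(room.render_for("Moderator"))  # sees only the public message

On this model, the reported accident of letting a name or number slip into the open corresponds to a user posting with whisper_to left unset.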

Chapter summary
• Generally, Moderators of chat rooms had encountered inappropriate behaviour and material unsuitable for children within their chat rooms
• Only half the sample had suspected potential child abusers of being on their sites. Difficulties in ascertaining the intent of overage chatters were also highlighted, and only one Moderator claimed to have experienced a ‘serious’ incident with regard to a potential child abuser
• Those Moderators familiar with the Guidelines felt the Models had had a
positive impact on Internet safety
• Most Moderators felt user access to Internet safety advice improved during
2003, as had user and Moderator safety awareness
• All Moderators familiar with the Guidelines viewed them positively and felt
the pitch of the Models was ‘about right’

• Moderators generally perceived that the Models had brought positive advantages to both users and Moderators

Key points for future action

• Moderator familiarity with the Models was generally lacking

• Baseline concordance with the recommendation to screen Moderators was low but

it did appear that the Models were aiding a shift towards screening

• Increasing awareness was viewed as the most important method of promoting safe

Internet use, particularly via the media and in schools

Chapter 6: The objective picture: findings from the desk-based audit

In order to further explore concordance rates with Model recommendations, 25 Internet companies were audited via a desk-based procedure. The primary aim was to
collate quantitative information on safety facilities that were available to users. The
audit also provided a balance between company opinions and facilities clearly
available to an ‘ordinary user’. Every effort was made to match audited organisations
with those who had been interviewed. The current chapter is divided into three
sections and provides an overview of this more objective aspect of the research. It will
provide separate descriptive analyses of the extent to which features recommended by
the Models are currently in place for chat services, Instant Messaging (IM), and web
based services (connectivity and hosting providers).

Of the 25 companies, seven provided chat services only, one provided IM only, nine
provided connectivity and web hosting, one provided connectivity, web hosting and
chat, three provided connectivity, hosting, chat and IM services and four provided
chat and IM services only (see figure 6.1).

Figure 6.1. Services provided by assessed companies (N = 25). Pie chart: connectivity and hosting only 36%; chat only 28%; chat and IM only 16%; all four services 12%; connectivity, hosting and chat 4%; IM only 4%.

Section one: Chat services


Nineteen questions pertained to those companies who provided chat services. These questions were primarily derived from the following sections of the Models document: The Product, Moderated Chat, Safety Advice, Registration, Public Profiles, and Tools and Reporting. Fifteen of the assessed companies provided chat services (60% of the
total sample) as either their main service, or as part of a combined service. Seven
(46.7%) of those 15 companies provided chat alone (28% of complete sample), whilst
four (26.7%) provided only chat and IM (16% of complete sample). In the subsequent
sections of this chapter relative concordance with various recommendations made by
the Models will be discussed with regard to the present sample.

a) The Product and Moderated Chat


Of the 15 companies who provided chat services, all provided information regarding
the nature of the service offered. Ten companies offered moderated chat services and
six of these had Moderators running all of their chat rooms. Eleven provided
information regarding whether their chat services were or were not moderated.
Of the 10 services that had moderated chat available, all explained the function
of Moderators. In three of these 10 cases, a facility was provided that allowed users to
complain about Moderators.

b) Safety Advice
Ten of the 15 companies with chat services provided clear and accessible safety
messages on chat room front pages and in the chat rooms themselves. However, seven
of the 15 did not include separate safety messages designed for both adults and
children. Two companies specifically stated that their chat rooms were for over 18’s
and therefore it was not expected that they would offer safety messages for children.
As a result, these companies were counted as ‘not applicable’ in relation to this
question. They were included in the study since despite a clear disclaiming statement
that children should not enter their chat services, many of the topics discussed were
considered appealing to under 18’s and the chat rooms could easily have been
accessed by this age group. Six of the 15 companies assessed did include safety
messages aimed separately at adults and children, while all others did at least provide
generic user safety guides. Twelve of the companies provided links to either internal
or external safety guides whilst three did not. Seven companies did not provide user profiles, so this question was ‘not applicable’ for them; of the remaining eight, two provided specific safety messages when the user completed a profile, whilst six did not. This could be partially explained by a tendency for safety messages to be placed in one location, for example in a safety guide. Table 6.1 summarises these findings.

Table 6.1.
Chat room safety advice: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Safety messages on front pages and in chat rooms?                10               5
Separate safety messages designed for adults and children?        6               7
Are links available to online safety guides (internal or
external)?                                                       12               3
Are safety messages available when the user completes
his/her profile?                                                  2               6

N = 15 (where the two columns do not add up to 15, questions were ‘not applicable’ to certain companies)

c) Registration and Public Profiles


Considering only those eight companies who did have a facility for completing user
profiles, half explained the use and purpose of providing personal information at
registration, whilst the other half did not. Furthermore, seven of the eight companies
who provided a facility for creating personal profiles allowed users to limit personal
information, but one did not provide this facility. Three of the eight encouraged
children not to post personal information on chat sites. For three more this question
was classed as ‘not applicable’ as two did not aim directly at a child audience and the
third did not allow users to post personal information on their profiles. Two companies, however, did not explain the use and purpose of public profile data.

d) Tools and Reporting

Of the 15 companies that provided chat services, five had alert systems available at
the top of each page inside chat rooms aimed at children. Four did not whilst this
question was counted as ‘not applicable’ for six companies, given that their services
were not aimed specifically and solely at under 16’s. However, 14 companies had
ignore, alert and/or grab and print facilities available, whilst only one did not. The
Models recommend that in moderated chat rooms specifically aimed at children, alert
systems should be prominent at the top of each page. Our question directly reflected
this recommendation and might have negatively skewed the data. That is, it is not that
companies did not provide systems to protect their users (whatever their age) in a chat
room environment, but rather that alert systems were not always sited at the top of
each page. Eleven of the 15 chat service providers supplied advice regarding how to
handle abusive chatters, whilst four did not. With regard to filtering mechanisms, seven of the 15 providers had a filtering mechanism available that could pick up bad language or prevent children from sharing their e-mail address. Two
companies were excluded from this count as they had labelled their site as being for
over 18’s only. Six companies however, did not appear to have a filtering facility
available. Eleven of the 15 provided a facility to block private chat and IM within a
chat environment, three did not, and one company had recently closed their private
chat facilities within the chat environment, explaining that they felt private messaging
in chat rooms presented a possible risk to personal safety. Twelve of the chat service
providers had a system in place to allow them to respond to reports of incidents. Of
these 12, two companies divulged a target response time, while 10 did not. Table 6.2.
provides an overview.

Table 6.2.
Chat room safety tools and reporting facilities: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Alert systems on the top of all pages aimed specifically
at children?                                                      5               4
Ignore, alert, and/or grab and print facilities anywhere
on the page?                                                     14               1
Advice on handling abusive chatters?                             11               4
Filtering mechanism available?                                    7               6
Facility to block private chat/IM?                               11               3
System to allow incident reporting?                              12               3
If incident reporting possible, target response time
indicated?                                                        2              10

N = 15 (where the two columns do not add up to 15, questions were ‘not applicable’ to certain companies)
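
Regarding the filtering item above, a mechanism to stop children sharing an e-mail address can, at its simplest, be a regular-expression substitution applied before a message is posted. The sketch below is a hypothetical illustration; the pattern and masking text are assumptions, not any audited company’s implementation.

    import re

    # Rough pattern for e-mail addresses; a real filter would also need to
    # catch spelled-out forms such as "jo at example dot com".
    EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

    def mask_email_addresses(message: str) -> str:
        """Replace anything resembling an e-mail address before the message
        reaches the room."""
        return EMAIL_PATTERN.sub("[address removed]", message)

    print(mask_email_addresses("mail me at jo123@example.com!"))
    # prints: mail me at [address removed]!

Pattern-based masking shares the weaknesses of the bad-language filters discussed in Chapter 5: an address spelled out in words passes straight through, which is consistent with the audit treating filtering as one safeguard among several.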

Section two: Instant Messaging


Section two of the questionnaire comprised 22 questions directed at those companies
who provided IM services. Seven assessed companies provided IM as either their
main service or as part of a combined service, and one more provided IM only.

a) The Product and Environment


Of the eight companies who provided IM, all provided information regarding the
service offered, all explained the type of IM environment provided and in all cases
users were able to access information on how to adjust privacy settings (e.g. by
employing block features).

b) Advice
Seven of the eight companies offered information and advice on how to keep safe in
an online environment, while one did not. It should be noted however that although
most of the services did place safety messages on their site, these were not necessarily
situated on the actual IM pages. Further, six of the eight relevant companies did not
provide safety information both on the page for downloading the IM client and on the
actual IM itself, whereas one company did provide this information in both locations.
One company was not considered to be applicable because their IM client was not
downloadable and was an integral part of their chat services.
Five companies who had IM clients provided links to online safety guides from their IM pages, whilst three did not. Almost all the companies did in fact provide some direction to safety guides, but this could not be recorded in the audit, which counted only guides located on the IM pages themselves, not elsewhere on the company website.
Of the eight IM services assessed, three did not have a facility for profile completion (this question was therefore considered ‘not applicable’ for these three companies). Of the five services that did have facilities for IM profiles, two showed safety messages when users completed a user profile, whilst three did not.
Four of the companies provided information regarding those elements of
personal user data that would be placed in the public domain, while three did not. This
question was ‘not applicable’ to one company, as there was no facility to place any
information in the public domain.
Half of the eight IM providers supplied a safety message when the user
considered adding a person to their buddy list, and the other half did not. Table 6.3 provides an overview of the frequency of all IM safety advice features.

Table 6.3.
IM safety advice: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Is there information on how to keep safe in an online
public environment?                                               7               1
Is safety information provided both on the page for
downloading the IM client and the actual IM itself?               1               6
Are there links to Online Safety Guides?                          5               3
Are safety messages available when completing IM
profiles?                                                         2               3
Is information provided on what will be placed in the
public domain?                                                    4               3
Is there a safety message present when a user considers
adding a person not on their buddy list?                          4               4

N = 8 (where the two columns do not add up to 8, questions were ‘not applicable’ to certain companies)

c) Tools and Reporting


All eight companies had ignore or block features in place and all eight supplied
information on how to deal with unwanted Instant Messages. Half of the companies
who provided IM services had features for reporting abuse, allowing users to provide
information about an abusive incident, and half did not. This finding could be
explained by the fact that it might have been possible to report abuse via general
contact e-mails, but there was no specific system in place to encourage users to report
an abusive episode. Five companies provided a facility whereby users could record
evidence of abuse, and three offered no recording facilities. However in at least one of
these cases, there existed a centrally led facility that recorded all conversations that
came through their IM client. In six cases, descriptions and/or examples of what
constituted abuse were supplied, but in two cases no specific information was offered.
Half of the companies distributed information on how to report serious and urgent
incidents and half did not specifically have a facility to report incidents that were
“serious and urgent”. Table 6.4. provides an overview.

Table 6.4.
IM safety tools and reporting facilities: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Are ignore or block features in place?                            8               0
Is information provided as to how to deal with unwanted
IMs?                                                              8               0
Are there visible and easily accessible features for
reporting abuse?                                                  4               4
Is information provided as to what constitutes abuse?             6               2
Is there a facility whereby users can record abuse?               5               3
Is information provided on how to report serious and
urgent incidents?                                                 4               4

N = 8

d) Privacy

Six of the eight audited companies explained the purpose and distribution of personal
information; two did not. Again, six companies informed the user of how their
personal information would be used. However, it might be that the remaining two companies preferred to explain the distribution of personal information in their Terms and Conditions or in other sections of their website, rather than on their actual IM pages. It
appeared that for two of the eight companies, registration details were automatically
transferred to either a public/user profile, or an open member directory. One company
clearly did not do this whereas three companies did not provide any unequivocal
information regarding transfer of registration details. Finally, for two companies this
question was counted as ‘not applicable’ because their IM clients were designed such
that the user did not provide any details other than a chosen username.
Four of the eight companies assessed did not appear to provide advice
regarding the potential risks of publicly accessible profiles/member directories
whereas two did offer advice. For the remaining two companies, this question was
deemed ‘not applicable’ since their users could not place information on user profiles.
This approach does of course reduce risk even further, since potential abusers cannot
search through directories to identify possible targets.

Section three: Web services


The third section of the objective assessment was directed at web services and
included 16 questions. Findings from this section will be presented in two parts:
connectivity providers and hosting providers. Nine of the questions focussed on
connectivity providers and seven questions focussed on hosting providers. Thirteen of the 25 companies (52% of the total sample) provided connectivity and hosting as part
(or all) of their service. All 13 companies who provided connectivity also provided hosting services, whilst nine of these 13 provided only connectivity and hosting services (no chat or IM).

Connectivity providers
Facilities available from connectivity providers were assessed via extensive searches of the companies’ websites. Five of the 13 connectivity and
hosting providers provided their home users with information detailing possible risks
to children who use the web, whilst eight did not specifically do so. Furthermore,
seven of the 13 companies did not provide information that parents might utilise in
order to educate their children on the risks and only four companies directly
encouraged parents to take practical steps in order to minimise risk (by, for example,
recommending placement of home PCs in communal areas rather than in a child’s
bedroom). Information on filtering software and safe surfing Guidelines did not
appear on the majority of connectivity and hosting providers’ web pages. Four of the
13 companies offered their users the option of filtering software, three informed
parents of the advantages and limitations of filtering software, five provided safe
surfing Guidelines aimed at parents, and three directed safe surfing Guidelines to a
child audience. It should be noted, however, that safe surfing Guidelines would probably not be included in ISP connectivity and hosting packages because users must be aged 18 years or over in order to join ISP services.
Five of the 13 companies provided their users with information to allow them
to report websites to the Internet Watch Foundation, whilst eight did not. However,
the question asked was only whether users could report sites to the IWF. It may well
have been the case that users were advised to report unacceptable sites to another
source (e.g. the ISPs’ own abuse contact line). All 13 companies did state in their
Terms and Conditions the boundaries of acceptable online behaviour. Table 6.5
summarises the frequency of all connectivity provider features examined by the
research.

Table 6.5.
Connectivity provider safety features: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Are home users provided with information detailing risks
to children who use the Web?                                      5               8
Is information provided that parents may utilise in order
to educate their children re: risks?                              6               7
Are parents encouraged to take practical steps in order
to minimise risk?                                                 4               9
Are parents informed about the advantages and limitations
of filtering and monitoring software?                             3              10
Are users offered the option of filtering software/
filtered services?                                                4               9
Are safe surfing Guidelines offered for parents?                  5               8
Are safe surfing Guidelines provided for young users
themselves?                                                       3              10
Are users informed that they can report websites to the
Internet Watch Foundation?                                        5               8
Do Terms of Service explicitly state boundaries of
acceptable online behaviour?                                     13               0

N = 13

Hosting providers
The final section of the assessment questionnaire included seven questions directed at
ISPs who offered hosting facilities. As for connectivity providers, 13 assessed
companies provided this resource. However, although one of the 13 hosting providers
advertised web space as part of their package, the researcher could not find any further information regarding this service on the site (even when logged on as a member); therefore all counts in the present section are out of 12 rather than 13.
Two of the 12 assessed hosting companies provided safety advice for users
who wished to create web pages and two also gave specific guidance for young users,
whilst 10 did not provide either of these features. All 12 companies explicitly stated
the boundaries of acceptable online behaviour, though only eight provided complaints
facilities. As regards complaints concerning other users’ websites, only five
companies provided a specific facility to deal with these problems. Nine of the 12
hosting providers assessed warned users that they were legally liable for whatever
they placed on the web, whilst three did not. A negative recording for these three
providers may be explained by the fact that although they stated that user accounts
would be terminated if any unsuitable content were found, the legalities of fault were
not mentioned. Finally, the majority of hosting providers (11) did not provide customers with information on, or encouragement towards, self-labelling their website(s). Table 6.6 provides an overview.

Table 6.6.
Hosting provider safety features: findings from the objective audit

                                                            Frequency of    Frequency of
                                                            positive        negative
                                                            recordings      recordings
Is safety advice available for home users re: creating
web pages?                                                        2              10
Is specific guidance available for home users who are
children and young persons?                                       2              10
Do Terms of Service explicitly state the boundaries of
acceptable online behaviour?                                     12               0
Can complaints be made?                                           8               4
Are users warned that they have legal liability for
whatever they place on the web?                                   9               3
Are effective mechanisms in place for dealing with
complaints relating to customers’ websites?                       5               7
Are customers encouraged to provide self-labelling?               1              11

N = 12

Comparing present data with responses from Senior Managers


As was found from the responses of Senior Managers, non-concordance with Model recommendations was higher for the connectivity and hosting Models than for the chat and IM Models. Again, advice encouraging users to self-label their content and specific guidance to young people for creating web pages were found to have the lowest hosting provider implementation rates. Whilst all companies in this
audit were found to explicitly state the boundaries of acceptable online behaviour in
their Terms and Conditions, only seven of nine hosting provider Managers claimed that this was the case within their company. More significantly, whilst eight out of nine
Managers claimed that their organisation offered safety advice to home users
regarding the creation of web pages, the present objective data indicated that only two
out of 12 companies assessed actually did so. Similarly, whilst only four of nine connectivity providers were found to advise parents to take practical steps in order to protect their children, five out of eight Managers claimed their sites encouraged parents to do this. Likewise, whilst three of seven Managers for whom data were available said that their ISP provided information about the advantages and disadvantages of filtering software, a relatively lower concordance rate was found in the present data (three out of 13). Whilst half of the eight Managers claimed that users were informed they could report websites to the IWF, only five of 13 sites assessed
here were found to do so.
Data from both the Managers and the audit revealed that 100% of the sub-
samples offering IM provided information regarding the service offered, the type of
IM environment and how to adjust privacy settings. Similarly, data from both sub-samples indicated that ignore or block features were in place and information was provided on how to deal with unwanted instant messages. Similar rates of non-
concordance were found for information being available on both the home page for downloading IM and on the IM client itself, with only one of the seven applicable providers in the current sample doing so and none of the four representatives discussed in Chapter Four saying this was the case (though three of these Managers planned to introduce such information). Similarly, only half of each sub-sample had in
place easily accessible features for reporting abuse and information on how to report
serious and urgent incidents at the time of data collection (though one company had
plans to introduce each of these features).
Across both chat sub-samples all companies appeared to provide information
regarding the nature of the chat services offered. Similarly the present data indicates
that only one of 15 chat providers did not have either ignore, alert or grab and print
features in place and all eight of the chat Managers reported having at least two of
these tools in place. Managers, however, tended to report higher levels than those identified by the audit for items enquiring whether there were separate safety messages designed for adults and children (eight of eight Managers reported they provided guides for under 16 year-olds and six claimed they offered guides for parents). This compares to six of 13 applicable sites that were identified by the audit as offering separate guides. Similarly, Managers reported higher levels of concordance
than were identified via the audit on the following measures: easily available filtering
mechanisms that might pick up bad language or prevent young users giving out their
email addresses (8 out of 8 Managers vs. 7 out of 13 on audit), users being informed
as to the time frame of a response when an incident is reported (4 out of 7 Managers
vs. 2 out of 12 audit) and a system via which users can report problems with
Moderators (5 out of 7 Managers vs. 3 out of 10 audit).
Some of the discrepancies between the two data sets can be explained by
differences between the samples – no representative was interviewed from 14 of the
25 companies audited. Where discrepancies arose, these usually occurred in the direction of the Managers stating higher concordance rates than the audit indicated. This could be, firstly, a result of the auditor only marking a feature as being in place if it was both clear and easily accessible. Secondly, those companies who were audited but declined to undertake an interview may have been less safety-conscious than those who agreed to participate (see the discussion regarding a possible filtering effect in the Methodology for the structured interviews).

Chapter summary
• Results indicate generally high levels of concordance with Model specific
recommendations
• Safety tools and advice are most widely available within IM services (95.5%
of majority positive recordings) followed by chat services (73.9%)

Key points for future action

• There is a significant lack of safety information and tools offered by connectivity

and hosting providers (25%)

• The focus of safety advice is general, rather than consisting of separate sets of

advice aimed at children and parents

References

Hayward, B., Alty, C., Pearson, S. and Martin, C. (2002). Young people and ICT 2002: Findings from a survey conducted in Autumn 2002. London: British Educational Communications and Technology Agency.
Home Office (2003). Good practice models and guidance for the Internet Industry on chat services, Instant Messaging (IM) and web based services. London: Home Office.
Mintel Intelligence (2004, April). Marketing for children aged 7 to 10. London: Mintel.
O’Connell, R., Barrow, C. and Sange, S. (2002). Young people’s use of chat rooms: Implications for policy strategies and programs of education. London: Home Office.
O’Connell, R., Price, J. and Barrow, C. (2004). Emerging trends amongst primary school children’s use of the Internet. Cyberspace Research Unit, University of Central Lancashire.
Potter, C. and Beard, A. (2004). Information Security Breaches Survey 2004. London: Department of Trade and Industry.
Office for National Statistics (2003). Individuals accessing the Internet: National Statistics Omnibus Survey. London: ONS.
Office for National Statistics (2003). Family Expenditure Survey. London: ONS.

Acknowledgements

The authors gratefully acknowledge the invaluable assistance of Mr John Morris of Stoneygate School, Mr Tony Hitchman of Edith Weston School, Mr Cliff Ashby of
Medway Primary School and Miss Bridget O’Connor of Loughborough High School.

Appendix

Table 4.1. Senior Manager positions as identified by the respondents

Business Relationship Manager
Community Director
Corporate Responsibility
Director of Internet
General Manager
Head of Customer Security
Managing Director
Owner and Director
Senior Advisor Editorial Department
Senior Editor
Senior Manager, Head of Chat and Community
Service Operations Manager
Technical Director
UK Community Producer
UK Country Manager

Table 4.2. Responses relating to the implementation of chat Model specific recommendations
Safety feature, followed by counts in the order: In place pre-Models; Introduced or changed in response to Models; Plans to introduce; No plans to introduce; Not applicable*; Missing

Product
Clear, prominent information displayed concerning 8 1
the type of service offered
Clear, prominent information displayed concerning 8 1
audience service is aimed at

Safety advice
Clear and prominent safety messages available on chat 5 1 1 1 1
front pages
Clear and prominent safety messages available in the 7 1 1
chat rooms
Safety messages aimed specifically at 5 1 1 1 1
parents/carers/other adults
Safety messages aimed specifically at young users 8 1
under the age of 16
Safety messages can be easily understood by young 8 1
users under the age of 16
Links available to online safety guides, in-house 7 1 1
and/or third party safety sites

Clear and prominent safety messages when a user 4 1 2 2
completes their profile
It is clear what profile information will be in the 7 2
public domain

Registration
Requests for personal information are as limited as 7 1
possible
The purpose of information gathered is clearly 6 1 2
explained
The distribution of registration information is clearly 6 1 2
explained
Are there clear community guidelines about conduct 6 1 1 1

Public profiles
The user is able to limit what information about them 7 2
is made public
Young users are particularly made aware of the need 2 2 5
for caution
Particular care taken to advise young users not to post 2 1 6
their various addresses and telephone numbers

Tools
Ignore features 7 1 1
Alert features 5 2 1 1
Grab and print 1 3 4 1
Reporting mechanisms 7 1 1
Easily available filtering mechanisms that might pick 7 1 1
up bad language or prevent young users giving out their email addresses
Users can block private chat/IM 6 2 1

Reporting
Reporting mechanisms are in place 7 1 1
Reporting mechanisms are clearly described 7 1 1
Users are informed of what to expect (e.g. likely time 3 1 2 1 2
frame of a response)
Recording mechanisms are in place 6 1 1 1

Moderated chat
In chat rooms specifically aimed at young users there 3 1 3 1 1
is an alert system (e.g. panic feature) at the top of each
chat room page
Moderators are easily accessible 7 2
Moderators are screened (e.g. via Criminal Records 2 1 3 1 2
Bureau checks)
Moderators are trained 7 2
Moderators are supervised 7 2
Moderators are given a clear job description 5 1 1 2
There is a system via which users can report problems 4 1 2 2
with Moderators
Moderators can block fake profiles/links from porn 4 3 2
operators
Moderators can block files or links that carry viruses 2 2 3 2
or, e.g. dialler programmes linked to high cost
telephone services
Moderators operate on a 24/7 basis 3 1 2 1 2
* Relevant service not offered by company

Table 4.3. Responses relating to the implementation of IM Model specific recommendations


Safety feature, followed by counts in the order: In place pre-Models; Introduced or changed in response to Models; Plans to introduce; No plans to introduce; Not applicable*; Missing

Product
Clear prominent information displayed concerning type of IM 3 1 3
product offered

Environment
The type of IM environment is clearly described (e.g. open for 3 1 3
people sharing similar interests or for buddies only)
Users can easily access information concerning how to adjust their 3 1 3
settings or preferences in order to increase or decrease privacy (e.g.
users can easily change their audience to buddies only)

Advice
Safety messages available on the home page for downloading IM 3 1 3
Safety messages available on the IM client 2 1 1 3
Clear safety messages are present when a user completes their 1 1 1 1 3
profile
Information that will be in the public domain is highlighted when 1 1 1 1 3
completing profiles
Safety messages in place re: communicating with strangers 1 2 1 3

Safety messages in place re: exchanging personal information 3 1 3
Safety messages visible every time user receives message from a 1 1 1 1 3
person not on their buddy list
Safety messages visible when a user considers adding a person to 1 3 3
their buddy list
These messages can easily be understood by young users 3 1 3
Safety messages aimed specifically at parents/carers/other adults 3
Parents/carers are informed as to the easy access to chat rooms via IM 2 2 3
Safety messages aimed specifically at young users under the age of 16 2 1 1 3
Links are available to online safety guides, in-house and/or third party safety sites 1 3 3

Tools
Ignore or block features are offered 4 3
Ignore or block features are clearly described 4 3
Users are given the option not to receive messages from people not on their buddy list 2 1 1 3
Information is available on how to deal with unwanted Instant Messages 3 1 3
This information is clear and comprehensible 2 1 1 3

Reporting
Reporting mechanisms are in place 3 1 3
Features for reporting abuse are clearly visible 2 1 1 3
Clear information as to what constitutes abuse is given 1 1 2 3
It is easy for users to report abuse (e.g. via archiving, screen grabs) 3 1 3
Users are informed of what to expect (e.g. likely time frame of a response) 2 1 1 3

Reporting serious incidents


Clear information is available on how to report urgent and serious incidents 2 1 1 3
Clear information is available as to what constitutes ‘urgent and serious incidents’ 1 1 1 1 3
Clear information provided on how to contact law enforcement and child protection agencies 1 3 3

Privacy
The purpose and distribution of personal information is clearly explained 3 1 3
Details from registration do not automatically transfer to public profiles or open member directories
There are clear links to the ISP’s privacy policy 4 3
The purpose and distribution of profiles/member directories is clearly explained 2 1 1 3
Users are given clear advice regarding the potential risks of publicly accessible profiles/member directories 1 1 1 1 3
Young users are made particularly aware of the need for caution 1 1 1 1 3
Particular care is taken to advise young users under the age of 16 not to post their various addresses and telephone numbers 1 2 1 3
* Relevant service not offered by company

Table 4.4. Responses relating to the implementation of connectivity Model-specific recommendations
Safety feature | In place pre-Models | Introduced/changed in response to Models | Plans to introduce | No plans to introduce | Not applicable* | Missing

Information
Home users are provided with information detailing the risks to young users under the age of 16 5 1 2 1
Information is provided for parents to utilise in educating their young users under the age of 16 re: risks 2 2 1 3 1
Parents are encouraged to take practical steps in order to minimise risk (e.g. advised to keep PCs in common areas of the home) 3 2 3 1
Parents are given information regarding availability and use of filtering and monitoring software 5 1 1 1 1
Parents are clearly informed of the limitations of filtering and monitoring software 3 2 2 2

Safe surfing
Safe surfing guidelines are offered 5 1 2 1
These guidelines are written in an accessible format (as not all parents and carers are computer literate) 5 1 2 1
Safe surfing guidelines are provided for young users themselves 4 1 3 1

Complaints
Effective mechanisms are in place for dealing with complaints relating to users’ use of the web 7 1 1

Terms of service
Terms of service explicitly state the boundaries of acceptable online behaviour 6 1 1 1
Users are informed that unacceptable behaviour may lead to withdrawal of service and/or referral to law enforcement 7 1 1

Legal liabilities
Users are warned they have legal liability for whatever they place on the web 7 1 1

Filtering
Users in the private home market are offered the option of filtering software or filtered services 5 1 2 1

Reporting users
Users are informed that pornographic, racist and other possibly illegal material may be reported to the IWF 4 4 1
* Relevant service not offered by company

Table 4.5. Responses relating to the implementation of hosting provider Model-specific recommendations
Safety feature | In place pre-Models | Introduced/changed in response to Models | Plans to introduce | No plans to introduce | Not applicable* | Missing

Information and guidance (in relation to home users only)


Clear guidance is available in relation to home users re: creating web pages 6 2 1 3
Specific guidance is available for home users who are young users under the age of 16 or young persons 2 7 3
Home users are warned about publishing personal details on their web pages 5 1 1 2 3

Terms of service
Terms of service explicitly state the boundaries of acceptable online behaviour 5 2 2 3
Users are informed that unacceptable behaviour may lead to withdrawal of service and/or referral to law enforcement 7 1 1 3
Customers are informed they have legal obligations of their own regarding certain types of content 7 1 1 3

Complaints
Effective mechanisms are in place for dealing with complaints relating to customers’ websites 8 1 3

Self-labelling
Customers are encouraged to provide self-labelling of the content of their sites using PICS-compatible systems such as ICRA 1 8 3

Internet Watch Foundation (IWF)


Companies are members of the IWF 7 2 3
A link to, or information about, the IWF is provided in order to facilitate reporting of illegal content 7 1 1 3
Procedures are in place for removing illegal content as soon as it is reasonably possible 8 1 3
* Relevant service not offered by company

Notes for interpretation of data in table 4.6


The percentages given in table 4.6 are derived from responses to the closed-question items exploring implementation of Model-specific recommendations. It should be noted that because some of the Models’ recommendations comprised a number of composite parts, some of the bullet-points in the Models had to be broken down to form the items used in the present study. Consequently, whilst in some cases one item corresponds to a single bullet-point of the Models, in other instances several items relate to a single Model recommendation (e.g. the Model for chat providers recommends that ‘service providers should deploy and give due prominence to some or all of the following safety tools’ and then lists four sub-bullet points, which were broken down into seven items in the present study). The given percentages can therefore only be used as a rough indication of Model uptake within the sample.
It is also worth noting that some items did not relate directly to Model recommendations but were instead additional areas of related interest that the research explored in order to better meet its aims.
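
To make the item-level tallying concrete, the short Python sketch below expands one such recommendation into separate audit items, using the counts from the ‘Tools’ rows of the chat table above. It assumes, plausibly but not verifiably from the flattened tables, that the first figure in each row is the ‘in place pre-Models’ column, and it infers a respondent total of nine from the row sums.

# One chat-Model recommendation ('deploy... safety tools') expanded into
# separate audit items, each tallied on its own. Counts are taken from the
# 'Tools' rows of the chat table, assuming the first figure in each row is
# the 'in place pre-Models' column; the total of nine is inferred.
RESPONDENTS = 9

tool_items = {
    "Ignore features": 7,
    "Alert features": 5,
    "Grab and print": 1,
    "Reporting mechanisms": 7,
}

for item, in_place in tool_items.items():
    share = in_place / RESPONDENTS
    print(f"{item}: {share:.0%} recorded as in place pre-Models")

Because one recommendation contributes several items, a provider meeting some but not all items pulls the percentage down, which is why the figures can only be a rough indication of uptake.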

Some companies offering chat, IM, hosting, or connectivity provided responses that were coded as ‘not applicable’, and these responses are not accounted for in the percentages displayed in table 4.6. Answers were coded as not applicable when the risk that a given Model recommendation aimed to prevent could not arise within the service in question because other measures were in place (e.g. where users had no means of entering sensitive material into profiles, warnings not to enter such details were unnecessary). In some instances such measures might be seen to make the site safer than if the Models had been followed explicitly, and in all instances where a ‘not applicable’ answer was coded these measures had existed pre-Models (thus understating the baseline concordance discussed here).
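
As an illustration of how the exclusion of ‘not applicable’ (and missing) responses affects the figures, the sketch below computes baseline concordance and Model impact for a single item with hypothetical counts. The report does not state its exact formula, so this is one plausible reading rather than the study’s actual calculation.

from dataclasses import dataclass

@dataclass
class ItemResponses:
    """Response counts for one audit item, following the coding in the tables."""
    in_place_pre: int
    changed_for_models: int
    plans_to_introduce: int
    no_plans: int
    not_applicable: int        # excluded from the percentages, as described above
    missing: int = 0           # also excluded

def percentages(r: ItemResponses):
    """One plausible reading: percentages taken over applicable responses only."""
    applicable = (r.in_place_pre + r.changed_for_models
                  + r.plans_to_introduce + r.no_plans)
    baseline = r.in_place_pre / applicable          # baseline concordance
    impact = r.changed_for_models / applicable      # Model impact
    return baseline, impact

# Hypothetical item answered by nine companies, none coded not applicable:
baseline, impact = percentages(ItemResponses(6, 1, 1, 1, 0))
print(f"baseline {baseline:.0%}, Model impact {impact:.0%}")   # 67%, 11%

On this reading, excluding pre-existing ‘not applicable’ measures removes safe-by-design cases from the denominator, which is why the text above notes that the reported baseline concordance likely understates the underlying position.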
In interpreting these results it must also be noted that, as discussed in Chapter 4, two of the respondents had no prior familiarity with the Guidelines. Whilst these respondents did have plans to implement certain Model-specific recommendations, those plans cannot be attributed to the Guidelines. Conversely, the fact that the same companies had no plans to implement certain features at the time of interview could be due to their unfamiliarity with the Guidelines, thus diminishing potential Model impact.
Another factor to take into consideration when interpreting the present quantitative data is the small sub-sample size. In a few instances a single company reported non-compliance with a number of recommendations within a Model, and in others one company might have accounted for a large percentage of Model impact, thus skewing results slightly. The small sample size reflects the researchers’ difficulties in recruiting representatives. It is likely that those who did participate were more confident about the safety of their service(s) and/or more safety conscious than Managers who declined to participate. This filtering effect could have skewed results in two ways. First, and chiefly, the present sample would have been more likely to have had Model recommendations in place pre-Models, thus perhaps under-representing Model impact and non-compliance compared with the rest of the Internet industry. Secondly, where the present companies did not comply, these more safety-conscious services would have been more likely than less responsible businesses to take action to implement recommendations, thus inflating Model impact compared with the general population.
