Article in Library Management, April 2005
DOI: 10.1108/01435120510580889



Telephone survey research for library managers

Philip Calvert
Victoria University of Wellington, Wellington, New Zealand

Adam Pope
Government Actuary's Department, London, UK

Received June 2004; Revised August 2004; Accepted October 2004

Abstract
Purpose – To explore and evaluate the evidence about the value of using telephone surveys,
especially in market research for a library.
Design/methodology/approach – A critical summary and review of the literature in this field.
Findings – This paper demonstrates that there are five major reasons for using this method of
surveying customer preferences: response rates are higher; data can be analysed sooner; the cost of
surveys is lower than alternative methods; calls can be monitored for quality; and the telephone offers
the benefits of spontaneity.
Practical implications – The paper contains extensive information on “best practice” of telephone
surveying including designing the questionnaire and conducting the interview. There is information
on getting a representative sample, plus coping with “no answers”, unlisted numbers and answering
machines.
Originality/value – Library managers with a commitment to using innovative techniques for
market research will find telephone surveys offer a useful and cheap alternative to other survey
methods. No previous paper has examined the use of, and the value of, telephone surveys in libraries.
Keywords Libraries, Market research, Surveys
Paper type Research paper

The use of telephone surveys has not figured largely in LIS research in the past. For
many years they have been used to good effect in the private sector, especially in
market research and customer satisfaction surveys, and this makes it worth asking
why they have not been more widely used by LIS researchers and practitioners. As
library managers pay increasing attention to the library as a product, and its place in a
market, the relevance of telephone surveys should become more apparent. Using a
telephone comes more naturally to people than reading and writing, yet most library
surveys are conducted using printed survey forms that require a written response,
whereas telephone surveys need only the ability to hear and speak.
The survey is still the most common method used in LIS research (Powell, 1999).
Surveys are usually treated as a single class and few writers have separated telephone
surveys from other methods, but one point that becomes clear is that telephone
surveys represent only a very small proportion of all LIS research.
The American Marketing Association regards telephone surveys as equivalent to
mail surveys, a quantitative method (Dutka, 1993), and quantitative research is the
simplest and most efficient for collecting data for manipulation by statistical methods.
On the other hand, qualitative research is best when seeking free-form responses in
which the respondent uses his or her own words. The method allows for the collection
of in-depth information such as personal opinions, the reasons why, where and how
people choose to do things.
When the survey is conducted by telephone, the respondent is immediately
available so the survey can be completed straight away, and clearly this is an
advantage of the method. The person being questioned can only respond with audible
signals, though, hence it is much harder to gauge the mood and demeanor of the
respondent. Nor can the respondent see visual signals such as rating scales, or the
color, size and shape of objects (Alreck and Settle, 1995). This lack of “observational
data” has been cited as a weakness in telephone surveys (Blankenship, 1977, p. 48). For
this reason alone, telephones lend themselves to structured surveys rather more than to
open-ended interviews, though they can be used for the latter by competent
interviewers.
Two techniques that assist telephone surveys should be mentioned at this point,
random digit dialing (RDD) and computer assisted telephone interviewing (CATI).
RDD in its purest form is simply the random selection and dialing of area code,
exchange and suffix numbers. It can be done manually, though there is also computer
software to make this process faster and more accurate. The advantage is that it
ensures all numbers have an even chance of being called, hence it is truly random. The
disadvantage is that many numbers could be out of service, or belong to people or
organizations that are not in the target population. This problem can be limited by the
use of systematic RDD, in which only numbers that meet specified criteria are dialed.
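As a rough sketch, pure RDD can be simulated in a few lines. The area codes, exchange prefixes, and number format below are invented for illustration; restricting the pools of codes and exchanges approximates the systematic RDD the text describes.

```python
import random

def random_digit_dial(n, area_codes, exchanges):
    """Generate n random telephone numbers (pure RDD).

    Systematic RDD is approximated by restricting the pools of
    area codes and exchanges to those known to serve the target
    population; the four-digit suffix is still fully random.
    """
    numbers = []
    for _ in range(n):
        area = random.choice(area_codes)      # e.g. a city's area code
        exchange = random.choice(exchanges)   # exchanges known to be in service
        suffix = random.randint(0, 9999)      # fully random suffix
        numbers.append(f"{area}-{exchange}-{suffix:04d}")
    return numbers

# Hypothetical codes for a local survey
sample = random_digit_dial(5, area_codes=["04"], exchanges=["472", "384"])
```

Out-of-service and out-of-scope numbers still appear in the output, which is exactly the disadvantage noted above; screening the generated list is part of the fieldwork.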
CATI systems integrate traditional telephone survey methods with computer
telephone technology. Typically, each interviewer is equipped with a hands-free
headset and is seated at a computer. The computer dials the number and provides the
interviewer with the appropriate introduction screen. Simple keystrokes bring up the
first screen of questions, and if relevant, possible answer choices, then on to further
screens for alternative paths if Yes or No answers lead to different questions. One
advantage of CATI systems is that several members of a household can be
interviewed, yet only one set of common data need be collected for the family and is
then “shared”, saving the need for multiple calls. A second advantage is that
respondents who must leave during a call can be automatically called back at a set time
and the interview resumed with no loss of data. The major disadvantage of CATI
systems for LIS research is the initial investment they require and their operating
costs. Staff using such systems must possess computer skills in addition to the more
traditional skills of the interviewer. There is a simple alternative for researchers
without access to CATI systems, for data can easily be entered into a spreadsheet such
as Microsoft Excel, or into a statistics package such as SPSS. Using the “freeze panes”
the interviewer to progress from one respondent to the next without losing sight of the
questions.
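The branching logic that CATI screens provide can be approximated without special equipment. The following is a minimal sketch in which a question graph routes the interviewer to the next screen depending on the answer; the questions and branch structure are invented for illustration.

```python
# Each node maps to (question text, {answer: next node}).
QUESTIONS = {
    "q1": ("Have you used the public library in the last month?",
           {"yes": "q2", "no": "end"}),
    "q2": ("Did you borrow any items on that visit?",
           {"yes": "end", "no": "end"}),
}

def run_interview(answers):
    """Walk the question graph, recording responses in order.

    `answers` maps question ids to the respondent's reply, standing
    in for the interviewer's live keystrokes during a call.
    """
    record, node = {}, "q1"
    while node != "end":
        prompt, branches = QUESTIONS[node]  # prompt is read aloud
        reply = answers[node]               # interviewer keys the reply
        record[node] = reply
        node = branches[reply]              # branch to the next "screen"
    return record
```

A respondent who answers “no” to the first question never sees the follow-up, mirroring the alternative paths described above.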

Advantages and disadvantages of telephone surveys


Dutka (1993) lists four advantages of telephone surveys.
(1) Survey forms received through the mail are easy to ignore, but a human calling
on the telephone, particularly one who sounds pleasant, is not so easy to
dismiss, hence the response rate from a telephone survey is usually much
greater than with mail surveys which helps to reduce bias associated with
non-response.
(2) The time to complete a project conducted by telephone survey is shorter than
with either mail surveys or personal interviews, simply because there is no
need to wait for forms that may be received four or five weeks later.
Indeed, the CATI systems mentioned earlier can perform data analysis as soon
as the first data is collected.
(3) The cost of telephone surveys is considerably lower than the cost of personal
interviews, especially if extensive travel might otherwise be involved, and may
not be much greater than with mail surveys once the response rate is taken into
account. Telephone calls can be made from either home or office, and if the
researcher lives within a free-calling area, such as is the case in New Zealand,
local calls can be made with no variable cost. Lower costs can be translated into
a bigger sample size, if so desired.
(4) Surveys conducted over the telephone can be monitored by supervisors, thus
providing for simple and effective quality control (p. 36).
To this list, Ward (2000) adds
(5) the benefits of spontaneity. Although there is more anonymity in a telephone
call compared to an in-person interview, and one might expect the respondent to
be more candid over the telephone, research on this topic is mixed (e.g.
Colombotos, 1969).
Lastly, McGuckin et al. (2001) add
(6) that the telephone allows more contact attempts and a greater variety in the
times contacts are attempted, with a higher response rate as a result.
Similarly, Dutka (1993) has listed three disadvantages of telephone surveys.
(1) The cost may be higher than for mail surveys (depending on response rates),
especially if the researchers have to rent space and equipment and then to hire
and train staff (Ward, 2000).
(2) Some respondents can be difficult to reach by telephone, perhaps due to their
irregular work hours, not being at home much for whatever reasons, or not
having a telephone in the house.
(3) Telephone interviews often generate quick responses, which may seem to be an
advantage, but speed does not allow the respondent an adequate time for
in-depth thinking (p. 36).
Telephone ownership is extensive in most western countries, with telephone
ownership stabilized at about 94 per cent of households in the United States of America
(United States Department of Commerce, 1998). Most West European countries have
enough telephone lines per head to make telephone surveys viable, as do countries such
as Canada, Australia, New Zealand, Israel, Singapore, Japan, and South Korea (ITU).
As East European countries and some East Asian countries develop, they, too, will
become suitable ground for telephone surveys.
Purely telephone surveys interview only people who are accessible via a home
telephone connection (Collins, 1999). This non-response bias is mitigated somewhat as
those without phone connections due to lack of finance are likely to be offset by those
wealthier members of society who use cell-phones as their main means of telephone
communication or who choose to go ex-directory. More will be said about the problem
of non-response later.
The simplicity of telephone surveys gives some advantages over other forms of
collecting data, but the simplicity is itself a barrier to the collection of rich data, as there
is no chance to follow up open-ended or ambiguous responses to probe what
respondents really meant, or what lies behind a response.
Choosing between telephone, mail, and in-person surveys
Telephone interviews and mail questionnaires are the chief methods of collecting data
for customer research (Dutka, 1993, p. 61), and there are times when a choice must be
made between the two alternatives. If the LIS researcher has to make a choice between
using a telephone or a mail survey, criteria that should be considered are:
. the time available for the whole project;
. whether cost is a factor and how much is available for the survey; and
. the desired level of confidentiality for respondents.
There are ten points to consider when choosing between a telephone interview and
in-person interviews.
Reasons to select in-person interviews:
. Must the interview take place at a specific location?
. Is it necessary to use visual clues?
. Is there a quota based on physical appearance or some observed behavior?
. Will the interview take a long time?
Reasons to select telephone interviews:
. Could physical appearance cause bias in the sample?
. Is there a possibility of threat to the interviewer?
. Could respondents’ companions affect the data?
. Can the telephone directory provide a sample? (but if not, choose in-person)
. Are the respondents spread over a large geographic area? (Blankenship, 1977, p. 36)
. Must data be collected quickly? (Alreck and Settle, 1995, p. 38)
This list does not make mention of the respondents’ accessibility, which can be higher
via the telephone, especially if respondents are concerned about personal security when
answering the door or being approached in the street. There are no recent cost
comparisons, but some years ago Groves and Kahn (1979) estimated that telephone
surveys cost only 45 per cent of a similar study done with in-person interviews.

Telephone and internet surveys


Much wider access to the internet has made the use of e-mail surveys more feasible,
especially for research into the use of the internet itself, as Klobas and Clyde (2000)
demonstrated. They argue that such surveys can not only ask questions about internet
use, but also about perceptions and attitudes even if the respondent has not used the
medium. Roselle and Neufeld (1998) compared the use of electronic mail and postal
mail in the follow-up stage of a project using a mailed questionnaire and concluded that
it was as effective, with advantages over postal mail in terms of speed, cost, and even
accuracy of responses. By contrast, comparisons between telephone surveys and e-mail
surveys seem non-existent. Clearly, e-mail surveys become viable in environments
with 100 per cent e-mail use amongst potential respondents, such as universities, but
internet access is not yet sufficient for general surveys, e.g. of public library users.
Web-based surveys require all potential respondents to have internet access. There
are advantages of web-based surveys, yet they have a low response rate and many of
the same limitations as a printed survey (Gunn, 2002).

Problems with telephone surveys


It has to be accepted that certain data are difficult to obtain with telephone interviews,
e.g. psychographic (lifestyle) data are usually obtained using mail or in-person surveys
(Dutka, 1993, p. 62). Yet telephone surveys appear to be very efficient ways of
gathering most kinds of quantitative data, which could be (for example) data about
usage of library materials and media, service and product brand recognition, customer
satisfaction levels, or interest in new information services.
Researchers in the USA relying on telephone surveys have become concerned by the
decline in response rates with the method. The problem of non-response is the largest
challenge for further use of telephone surveys, especially in the United States, though
it may progressively occur in other countries as survey fatigue creeps in. People
seem to be wary of telephone solicitation, and perhaps concerned by privacy issues
(McGuckin et al., 2001). Telephone surveys have four problems of non-response:
refusals, no answers, unlisted numbers, and answering machines (Dutka, 1993, pp. 66-7).
(1) Refusals. McGuckin et al. (2001) found that some refusals could be termed “soft
refusals” and that ringing back later would get a response from the same person
or another member of the household – 28 per cent of the households who had
initially refused eventually gave a completed interview.
(2) No answers. There are higher rates of “no answer” from elderly, retired, and
unemployed people, and mothers with young children. Members of these
groups may have difficulty in coming to the telephone. Callback procedures can
reduce the bias in a survey if some of the population does not respond. Another
group who are potentially inaccessible to phone surveys are those always
connected to the internet using a dial-up connection, so it is useful if the “no
answer” telephone numbers are re-rung with at least an hour between calls.
(3) Unlisted numbers. People who have a confidential or unlisted number are
another group telephone surveys are unable to reach. This varies from country
to country, with 16 per cent of the New Zealand residential market and up to 50
per cent of some major UK cities unlisted[1]. There are a wide variety of reasons
why people may want to have an unlisted or confidential phone number.
Depending upon the topic of the research, this may or may not matter. Much
telephone research is based upon the assumption that the sampled population
matches that of the target population and if, for example, the research is
investigating public awareness of a recent publication, there is no obvious
reason why unlisted numbers will affect the result. By contrast unlisted
numbers may skew a question about a specific library service if it might appeal
to the same market that also chose to have unlisted numbers.
(4) Answering machines. A telephone survey can be conducted at any time, so
answering machines can be avoided if callbacks are used. Up to 10 per cent of
phone calls will go unanswered. Some will only be answered by a voice
messaging system. Saying the introductory patter into home versions of these
can result in successful interviews, as the intended respondent may pick up the
phone if they realise it is just a short survey about libraries. If the response rate
is still too low once the required sample size has been achieved, phoning the
previously unanswered numbers is the best way to increase it. McGuckin et al.
(2001) tried some numbers as often as 19 times and reached a 50.4 per cent
completion rate as a result.
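The callback rules above, re-ringing “no answer” numbers with at least an hour between attempts while leaving completed or refused numbers alone, can be sketched as a small call-sheet tracker. The status codes are invented for the example.

```python
from datetime import datetime, timedelta

MIN_GAP = timedelta(hours=1)   # re-ring no-answers at least an hour apart

class CallSheet:
    """Track attempts per number so no-answers are retried, not dropped."""

    def __init__(self, numbers):
        self.state = {n: {"attempts": 0, "last": None, "status": "pending"}
                      for n in numbers}

    def due(self, now):
        """Numbers eligible to ring: still pending and past the minimum gap."""
        return [n for n, s in self.state.items()
                if s["status"] == "pending"
                and (s["last"] is None or now - s["last"] >= MIN_GAP)]

    def record(self, number, status, now):
        s = self.state[number]
        s["attempts"] += 1
        s["last"] = now
        # 'complete' and 'refused' are terminal; 'no_answer' stays pending
        s["status"] = "pending" if status == "no_answer" else status
```

A number that rang out stays on the sheet and reappears in `due()` an hour later, while a completed interview is never rung again.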

Telephone surveys in LIS research


National surveys have been conducted by telephone. One significant
survey investigated the impact of the internet on the public’s demand for and use of
public libraries (D’Elia et al., 2002). The researchers regarded this as a consumer
survey and segmented the American adult market for information services by use or
non-use of public libraries and of the internet, and by access or lack of it to the internet.
Importantly, they regarded public libraries as operating within a consumer market and
patrons acting as consumers of information and other library services, hence the future
of public libraries will be determined by just such a consumer market in which choices
are available and actively used by public library and internet users. This research
employed a mix of quantitative and qualitative methods, with focus groups used as a
means of preparing the quantitative survey. The focus groups provided a rich pool of
information that assisted with the creation of the questionnaire (p. 805), and pilot
surveys helped with the refining of the method “to achieve a questionnaire that met all
of our data gathering requirements within the time constraints posed by a telephone
survey, that appeared to be understandable to a lay audience, and that appeared to be
error free” (p. 805).
The US Department of Education’s National Center for Education Statistics
conducted a National Household Education Survey in 1996 (NHES-96), gathering data
by telephone. As an example of how telephone surveys can be used, this research
showed that about 44 per cent of US households included individuals who used public
library services in the month prior to the interview and 65 per cent of households used
the public library at some time in the previous year, and it also showed that public
library use was greater when a family member was under 18 years of age. Survey data
also pointed to the most common ways people used public libraries, and the purposes
that lay behind usage of the library, though these data were collected using a checklist
of “ways” and “purposes” pre-determined by the researchers. This example shows how
useful the telephone survey is in research with large samples and for questions with a
finite range of responses. The NHES-96 was a large-scale survey that took from
January to April 1996 to complete. The sample was selected using list-assisted RDD
methods that initially screened 55,838 households, though a few hundred were later
dropped. Data were collected using a CATI system. Gathering data on when someone
in the household last used a public library is quite easy, but using the telephone to
uncover information about ways of using the library and the purposes behind the uses
is most easily done with pre-determined checklists of ways and purposes. This makes
the data collection simple but it does not allow much scope for people to venture
reasons not on the list, hence the survey is unlikely to cause any surprises. Another
limitation of the telephone survey is identifying respondents who can provide useful
answers to questions relating to specific services or products. Market research
companies can use the telephone to ask about washing powders, beer, chocolate and
other common consumables, but could a public library justify a telephone survey to
discover customer responses to a specialized service such as genealogy records,
business information, or a rental fiction collection, for which the potential market
constitutes a small proportion of the general population?
Mossberger et al. (2003) wished to explore what they call “multiple divides” within
society’s use and access to digital information. Their primary source of data was a
national telephone survey conducted by Kent State University’s CATI laboratory. One
national random sample of 1,190 respondents was drawn from all “high-poverty
census tracts” in the 48 mainland states. They achieved a 92 per cent response rate.
A second national random sample of 655 respondents drawn without regard to the
poverty rate of the census tract had a response rate of 88 per cent. Callbacks were used
to increase the response rate. The questionnaire consisted of 50 items and each
telephone survey took an average of 8.5 minutes to complete. The data analysis was
done using multiple regression to identify independent variables, or possible
explanations for each result.

Conducting a telephone survey


As with all research, a problem must be clearly defined, a research question posed, and
a thorough review of the literature conducted before any attempt at deciding on
methodology is made. Once the boundaries of the problem have been delineated,
however, a telephone survey has merits and ought to be considered. As with other
forms of survey, it is first advisable to find out how many people will need to be
surveyed[2].
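One common way to answer “how many people will need to be surveyed” is the standard proportion-based sample size formula with a finite population correction. This is a textbook calculation rather than anything prescribed by the paper; the default values below assume a 95 per cent confidence level and a 5 per cent margin of error.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion.

    z = 1.96 corresponds to 95 per cent confidence; p = 0.5 is the
    most conservative assumption about the true proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# e.g. a city of 10,000 adults needs roughly 370 completed interviews
needed = sample_size(10000)
```

The correction matters for small service populations: a branch library serving 2,000 people needs far fewer interviews than the uncorrected formula suggests.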

The questionnaire
There are no aesthetics in the design and construction of telephone surveys, only utility
(Frey, 1983, p. 89). Ease of administration by the interviewer is a prime concern.
The survey instrument should be designed as a whole, so the introduction, and the
instructions to the interviewer and respondent are of great importance. For
the interviewer: the response categories must be clear; question wording must allow
for a conversational tone; and transitional statements, adequate question introductions,
and branching statements may all be necessary to aid the flow between questions. For
the respondent: the logic of question order makes a difference; long questions simply
become confusing; and the level of motivation must be maintained.
The design of the questionnaire can be divided into three groups of questions.
Initially the survey needs to introduce the surveyor and assess the ability of the
responder to answer the questionnaire, followed by the body of the questionnaire, and
finally the demographic and miscellaneous questions. Leaving the simple demographic
questions until the end helps avoid respondent fatigue (Frey, 1983, p. 110).
The initial sentence requires careful design as it will make or break the response
rate. It requires a friendly greeting, the interviewer’s name, a statement about the
purpose of the call, and how long it will take. At this point the intended recipient makes
the decision whether to participate or not, and “Obtaining a favourable decision on
participation is largely dependent on the nature of the introduction to the survey”
(Frey, 1983, p. 87). Here, a library survey has an advantage because the topic is
non-threatening and for a large part of the population, a service that they mostly like.
Mentioning the library several times during the introduction is likely to raise the
response rate. If the respondents’ interest is raised by the introduction, the first
questions must continue the theme – there is no point in saying the survey will help
improve library services only to then ask the respondent’s age, or how often they use
the internet.
The simplest questions are the best in telephone surveys. Closed questions such as
Yes/No, “How often/how many?”, “Tell me all that you recognise from this list”, and
“Please rate the following on a scale of one to five” are all suitable for the telephone. Open
questions can be asked, of course, but they pose problems in coding, and they are
potentially ambiguous (Blankenship, 1977, p. 93).
Questions are best grouped by topic, and in a way that matches the respondent’s
perception of the relationship between items. Pre-testing may discover that
respondents would not group questions in the same way as the survey’s authors.
Turning to practical interviewing, it is necessary to consider how to get a
satisfactory sample. To incorporate an element of chance into the survey the
interviewer should first ask to speak to the member of the household whose birthday
comes next.
The issue of age needs to be given careful consideration. To interview people under
the age of 18 should require the consent of a parent or guardian, and the questions
must ensure that the expected literacy of youth is accommodated. If people under 18
years of age are nevertheless an important group to be surveyed there will be other
methods of interviewing them, such as through their school.
Those elements of the community with a personal stake in the outcome of the
survey also ought to be excluded to avoid any skewing of the data in their favor. In the
case of a library survey, ask a question early on in the survey, such as “Do you work in
any of the following occupations: library work, information management, or library
supply?”
Order response bias occurs when respondents become fatigued by the repetitive
nature of questions, with interviewees responding with awareness to the first questions
then more carelessly towards the end. To avoid this bias occurring, when half of the
required sample has been interviewed, simply swap portions of the questionnaire
around, bringing the second half of the body to the beginning, for instance.
Demographic questions are almost always a valuable tool – not only to assess how
well the sample fits the demographic of the population, but also to be able to tailor any
resulting solutions to specific age, gender, educational, income or ethnic groups. Some
questions, however, can be disturbing to the respondent, and it is important to phrase
them in such a way as not to offend. To find out the respondent’s age, it might be
asked, “in what year were you born?” – gender is sometimes obvious from the
respondent’s voice – and ethnicity can be determined by asking, “Would you consider
yourself to be a New Zealander?” (for example). If the reply is “yes”, then ask “of Maori
descent? ... European descent? ... Chinese descent? ...” If the respondent answers “no”
to all such questions, then “Which ethnic group do you most identify with?” is the
logical next question.
Income can be divided into tiers by asking, “Is your income above or below $35,000
per year?” If below, ask “... above or below $20,000?” and similarly if above $35,000.
Finally, education should easily be answered with a question such as “What was your
highest educational qualification?”
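The two-step income question is effectively a small decision tree. A sketch follows; the $35,000 and $20,000 thresholds come from the text, while the upper $60,000 split and the band labels are invented, since the text only says “and similar if above $35,000”.

```python
def income_band(answers):
    """Classify a respondent from two yes/no threshold questions.

    `answers` supplies the respondent's reply ('yes'/'no') to each
    threshold actually asked. Band labels are illustrative only.
    """
    if answers["above_35000"] == "no":
        # "...above or below $20,000?"
        return "under_20000" if answers["above_20000"] == "no" else "20000_35000"
    # hypothetical second split for the upper branch
    return "35000_60000" if answers["above_60000"] == "no" else "over_60000"
```

Only two questions are ever asked of any one respondent, which keeps a sensitive topic brief.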
In order to validate that the research has been conducted, respondents can be asked
towards the end of the survey if they are happy to have their phone number released to
a trusted and independent body, such as a supervisor, Justice of the Peace, or member 147
of the Library Board, in order that a third party can verify that the interview took
place. If the respondent declines, their phone number is deleted and this can be
recorded with a code such as FA (full, anonymous), indicating an interview was
conducted but the respondent wished to remain anonymous.
Another question towards the end of the survey asking whether the respondent
would like a summary of the results is important for the profession’s credibility as well
as the ethical rigor of the survey. If the respondent asks for one, the interviewer should
explain that he/she needs to open a separate file independent of their answers and enter
the respondent’s address into it whilst assuring them of confidentiality. E-mail
addresses can be used when possible as this can save a great deal of money and effort
when delivering the final results. Depending on the interest value of the survey to
respondents, about one third will be keen to see a summary.
Finally, thank the respondents for their time and assure them of the usefulness of
their contribution.

Telephone numbers
There are a number of methods of sourcing telephone numbers to ring. Many
commercial market research companies are willing to sell lists of randomly chosen
numbers that have a good chance of being answered, though it is far cheaper to go
through the phone directory and randomly pick phone numbers. It is most advisable to
run a telephone survey shortly after a new directory is published to avoid disconnected
numbers and to catch transient members of the population. Using a ruler and marking
every fifth centimetre on each column, then transposing the telephone number beside
each mark into a spreadsheet is an efficient way to provide a random list. A
computer-based alternative is RDD software, described earlier.
Phone numbers should be stored in a separate, password-protected file to ensure
respondents’ identities are kept apart from their answers, and secure.
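The ruler-and-directory procedure described above is systematic sampling with a random start. A sketch, assuming the directory has been flattened into a single list of numbers:

```python
import random

def systematic_sample(directory, n):
    """Pick n numbers at a fixed interval after a random start.

    Equivalent to marking every fifth centimetre down each directory
    column, but expressed over a flat list of entries.
    """
    step = len(directory) // n       # sampling interval
    start = random.randrange(step)   # random starting offset
    return [directory[start + i * step] for i in range(n)]
```

Because the start is random and the interval fixed, every listed number has the same chance of selection, provided the directory ordering is unrelated to the survey topic.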
If the population needed for the research covers a larger area than a free-calling area
there are a number of demographic factors to consider. To check what the demographic
is, look up the latest census figures. If the demographic in the free-calling area does not
closely match that of the population to be surveyed it will be difficult to claim in the
results that the sample is in any way representative of the larger population.
There are options to solve this. Calling outside of the free-calling area is a possibility
if funds allow. Be sure to check all the options for calling – often pre-paid cards or
other telecommunications companies can offer better deals than the predominant
market player.
Another option is to attempt to replicate the demographic of the population in the
sample. This is simply solved when gender is involved by asking “. . . may I speak to
the lady/gentleman of the house whose birthday comes next please?” However, if
20 per cent of the people in the target population are of Chinese origin, and only
10 per cent of your free-calling area is of Chinese origin then, towards the end of the
survey, it will be necessary to exclude potential respondents of races other than
Chinese. Randomly finding a 60-70 year-old female of Indian descent with no formal
education who is willing to do the survey, for instance, can be particularly vexing. You
may have to settle for “close enough” and mention any demographic limitations in your
results.
The statistical solution is to bring the responses obtained from one segment of the
population in line with their proportion of the demographic. For example, if conducting
a survey on use of the internet within a library, and if all females used the library for
internet services but were only 40 per cent of the sample – and 50 per cent of the
population – the proportion of females answering “Yes” would increase by 25 per cent
(10 per cent of the total “Yes” responses). Where only half of the male population used
the library for internet services but were 60 per cent of the sample, the proportion of
male “Yes” answers would decrease by 16.6 per cent (5 per cent of the total “Yes”
responses) (Tables I and II).
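This post-stratification adjustment can be sketched as a short Python calculation. The function name and data layout are illustrative assumptions, not from the article: each group's observed "Yes" rate is reweighted by its known population share.

```python
# A minimal sketch of the weighting adjustment: reweight each group's
# within-group "Yes" rate by that group's share of the population.
def weight_by_group(yes_by_group, n_by_group, pop_share):
    weighted_yes = 0.0
    for group, n in n_by_group.items():
        rate = yes_by_group[group] / n          # within-group "Yes" rate
        weighted_yes += rate * pop_share[group]  # scale to population share
    return weighted_yes

# The article's example: 60 males (30 "Yes"), 40 females (40 "Yes"),
# against a 50/50 population split.
p = weight_by_group({"m": 30, "f": 40}, {"m": 60, "f": 40},
                    {"m": 0.5, "f": 0.5})
print(round(p * 100))  # weighted percentage answering "Yes" -> 75
```

The result, 75 per cent, matches the weighted total in Table II (50 female "Yes" plus 25 male "Yes" out of 100).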

Response rate
It is important that the response rate of the survey is as high as possible, and there are
a number of ways to increase the willingness of respondents to take part. The depth of
the interviewer’s desire to learn from each potential respondent is immediately obvious
to the person on the line, and it will be strongly reflected in the results.
Perhaps the second biggest deterrent for respondents is the time the interviewer
says the survey will take. Though honesty is the best policy, it is often hard to judge
exactly how long a survey will take from the outset, and the shorter the time quoted at
the beginning, the higher the chance of the respondent accepting the invitation. It is
therefore advisable to quote the average time it takes to conduct an interview after
excluding the longest 20 per cent of interviews, since the respondents in that category
invariably dithered and did not care how long the interview took. As a rough guide, if
the questioner quotes five minutes for the interview at the start, people who answer the
survey speedily will start to question the length of the interview once 12 to 15 minutes
is up. A 15-minute quote, though attracting a lower response rate, will see similar
questions arise after 30 minutes. Any longer than 15 minutes and it is better not to
give a time at all. If queried, issue the respondent a challenge such as “I had one
respondent who managed to knock off a survey within 20 minutes. Care to better her?”

Table I. With 60 per cent of sample male

                      Internet use
                      Yes     No
  60 males             30     30
  40 females           40      0
  100 people           70     30

Table II. After analysis

                      Internet use
                      Yes     No
  50 males             25     25
  50 females           50      0
  Total (per cent)     75     25
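The suggestion of quoting an average interview time after discarding the longest 20 per cent of interviews amounts to a one-sided trimmed mean. A minimal sketch, with hypothetical pilot-round durations:

```python
# Trimmed-average quote: drop the longest 20 per cent of recorded
# interview durations, then average what remains.
def quoted_time(durations_min, trim_frac=0.2):
    kept = sorted(durations_min)
    cut = int(len(kept) * trim_frac)   # number of longest interviews to drop
    if cut:
        kept = kept[:-cut]
    return sum(kept) / len(kept)

# Illustrative durations (minutes) from a hypothetical pilot round.
times = [4, 5, 5, 6, 6, 7, 8, 9, 14, 22]
print(quoted_time(times))  # average once the two longest are excluded
```

Here the two long, discursive interviews (14 and 22 minutes) are excluded, and the remaining eight average 6.25 minutes, so the interviewer could honestly quote "about six minutes".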
Another effective method of increasing the response rate of a survey is to mirror the
language of the respondents. Proficiently adopting the traditional reply to the initial
phone greeting, such as a “Ni hao ma?” when a respondent answers the phone with
“Wei?”, attempting to develop rapport by speaking at a similar pace, using comparable
language and responding with similar intensity all help to produce empathy in the
respondent and so raise the response rate.
Once the interviews are complete, unprotect the phone number spreadsheet and
forward it to the independent body, asking that they phone a random set of the
completed interviews to verify that the interviews took place. Once this is complete the
spreadsheet must be destroyed.

Limitations
Phone surveys are by no means enumerative and this has to be considered as a major
limitation. There are furthermore a number of biases resulting from phone surveys that
need to be taken into consideration, and avoided where and as best as possible.
Telephone surveys can expect refusals in the range of 10-25 per cent of those called.
In comparison to a mail survey, where 90 to 95 per cent refusal rates are not uncommon
(Alreck and Settle, 1995, p. 35), the decision to conduct a telephone survey is still an
efficient one. Nevertheless, when confronted with the introductory patter about a short
survey on library services, some people will say they are not interested. It is a good
idea to have a formalised response ready to draw them back into the survey, such as
“You haven’t used a library in the last year?” Invariably the answer will be “No”, and
the interview can be continued with other questions pertinent to their information
needs and why they do not use a library. This will draw a markedly better response rate.
To make the final analysis clearer and to show any population biases the survey may
have, incorporate into the figures and tables that report demographic data for the
whole sample a column containing equivalent data from the latest census, highlighting
any differences between the population norm and the sample.

The potential of telephone surveys in LIS research
New technology and greater bandwidth will inevitably make remote surveying more
interesting to the respondent. It is possible to imagine scenarios using videophones in
which the interviewers can use the visual aids not possible with traditional telephony,
such as images of book covers, library interiors, screen shots of online databases, and
so on. The cost of toll calls has dropped and is continuing to drop, making even
international surveys relatively affordable.
The most widespread use of telephone surveys to date has been for market research
and the assessment of customer satisfaction. If this were extended to LIS it would
assume that libraries and other information services exist in a consumer market in
which customers have different needs, are offered choices of different information
products and services, and can make informed judgments about which ones they will
use. Library managers need good information about the value customers place upon
library services, such as whether an existing combination of price, place and product
results in a high perceived value, or if some changes to the product would produce a
better customer response. For large-scale market research conducted amongst both
existing library customers and those who do not currently use the library, the
telephone offers a practical solution.

Notes
1. New Zealand data from Darren Lim, a representative of Telecom Yellow Pages Data
Solutions Team. UK data from Noble et al. (1998).
2. In order to calculate the number required for smaller populations, or to find the confidence
ratings of smaller samples, there are look-up tables in many books on statistics (Hernon,
1994). Also try the calculator at www.surveysystem.com/sscalc.htm.
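As an illustration of what such look-up tables and calculators compute, the sketch below uses Cochran's formula with a finite population correction. The note itself does not name a formula, so this choice is an assumption; the function name is hypothetical.

```python
import math

# Cochran's sample-size formula with a finite population correction:
# n0 = z^2 * p * (1 - p) / e^2, then adjusted for population size N.
def sample_size(population, margin=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2          # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite correction

# Respondents needed for a population of 10,000 at 95 per cent
# confidence and a +/-5 per cent margin of error.
print(sample_size(10000))  # -> 370
```

This agrees with the commonly tabulated figure of roughly 370 respondents for a population of 10,000 at the 95 per cent confidence level.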

References
Alreck, P.L. and Settle, R.B. (1995), The Survey Research Handbook: Guidelines and Strategies for
Conducting a Survey, 2nd ed., McGraw-Hill, New York, NY.
Blankenship, A.B. (1977), Professional Telephone Surveys, McGraw-Hill, New York, NY.
Collins, M. (1999), “Editorial: sampling for UK telephone surveys”, Journal of the Royal Statistical
Society, Vol. 162 No. 1, pp. 1-4.
Colombotos, J. (1969), “Personal versus telephone interviews: effect on responses”, Public Health
Reports, September, pp. 773-820.
D’Elia, G., Jorgensen, C., Woelfel, J. and Rodger, E.J. (2002), “The impact of the internet on public
library use: an analysis of the current consumer market for library and internet services”,
Journal of the American Society for Information Science and Technology, Vol. 53 No. 10,
pp. 802-20.
Dutka, A. (1993), AMA Handbook for Customer Satisfaction, NTC Business Books, Lincolnwood,
IL.
Frey, J.H. (1983), Survey Research by Telephone, Sage, Beverly Hills, CA.
Groves, R.M. and Kahn, R.L. (1979), Surveys by Telephone: A National Comparison with Personal
Interviews, Academic Press, New York, NY.
Gunn, H. (2002), “Web-based surveys: changing the survey process”, First Monday, Vol. 7 No. 12,
available at: http://firstmonday.org/issues/issue7_12/gunn/index.html (accessed 12 April
2004).
Hernon, P. (1994), Statistics: A Component of the Research Process, revised ed., Ablex, Norwood,
NJ, p. 121.
Klobas, J.E. and Clyde, L.A. (2000), “Adults learning to use the internet: a longitudinal study of
attitudes and other factors associated with intended internet use”, Library & Information
Science Research, Vol. 22 No. 1, pp. 5-34.
McGuckin, N., Keyes, M.A. and Liss, S. (2001), Hang-Ups: Looking at Non-Response in Telephone
Surveys, available at: www.fhwa.dot.gov/ohim/hang_ups.htm (accessed 27 April 2004),
Federal Highway Administration, US Department of Transportation, Washington, DC.
Mossberger, K., Tolbert, C.J. and Stansbury, M. (2003), Virtual Inequality: Beyond the Digital
Divide, Georgetown University Press, Washington, DC.
Noble, I. et al. (1998), “Bringing it all back home. . .”, Proceedings of the Annual Conference of the
Market Research Society, p. 48.
Powell, R.R. (1999), “Recent trends in research: a methodological essay”, Library & Information
Science Research, Vol. 21 No. 1, pp. 91-119.
Roselle, A. and Neufeld, S. (1998), “The utility of electronic mail follow-ups for library research”,
Library & Information Science Research, Vol. 20 No. 2, pp. 153-62.
United States Department of Commerce (1998), Falling Through the Net: Defining the Digital
Divide, available at: www.ntia.doc.gov/ntiahome/fttn99/part1.html (accessed 1 April 2004),
National Telecommunications and Information Administration, Washington, DC.
Ward, S.M. (2000), “The client satisfaction survey as a tool for evaluating library fee-based
information services”, Journal of Interlibrary Loan, Document Delivery & Information
Supply, Vol. 10 No. 3, pp. 63-76.

Further reading
International Telecommunications Union (ITU) (2004), Main telephone lines, subscribers per 100
people, available at: www.itu.int/ITU-D/ict/statistics/at_glance/main01.pdf (accessed 2 June
2004).
