Atlas.ti, a qualitative data analysis program, was used to discover patterns in respondent comments. While Atlas.ti allows the researcher to manage large amounts of qualitative data, it does not perform the analysis for the researcher, i.e. it is not an artificial intelligence program that sifts through data to discover themes. Rather, Atlas.ti provides a tool for the researcher to organize and document themes within his/her data. In order to provide a better understanding of the analysis and to familiarize readers with the program, the three steps used to conduct the analysis in Atlas.ti will be discussed. This illustration is not meant to show the entire array of features available in Atlas.ti but rather to provide a general understanding of the program.

The first step in the analysis was to prepare and import the primary data. This step is somewhat analogous to data preparation in quantitative analysis, in which the data file must be cleaned and edited before it is imported into a quantitative data analysis program such as SPSS or SAS. For the LibQUAL+™ project, the primary data, the text of the respondents' e-mail, was collected from various libraries. Duplicate e-mails were removed. Additional primary data cleaning included removal of nonsensical characters and symbols associated with the e-mail transmission. One distinct advantage of analyzing e-mail messages in Atlas.ti is that they are already in text format and do not have to be transcribed, as does a tape-recorded interview or focus group. Once the primary data for each institution was cleaned, [...] each quotation, the text within the primary document that the researcher considers to be meaningful. Assigning codes involves first highlighting a particular quotation within the primary document and then creating a code for that quotation, i.e. open coding, or selecting a previously created code, i.e. code by list.

There are several considerable advantages of using Atlas.ti as opposed to a non-computerized method for coding the data. First, once the data have been coded, Atlas.ti allows quick access to the quotations for a particular code. A window can be opened for each code that displays the first few words of every quotation for that code with its location, noting the primary document of the quotation and its line number within the primary document. By clicking on the quotation, the quotation's primary document is retrieved and shown within the primary document window with the particular quotation highlighted. Second, Atlas.ti provides a search feature that lets the researcher find patterns or strings, specified by the researcher, within the primary document. Third, the program allows the researcher to assign more than one code to a quotation. The double coding strategy is often useful in the beginning of the analysis, when the codes have not been thoroughly developed; the researcher may not be able to decide at that point exactly how the data should be coded. Finally, Atlas.ti also allows the researcher to assign more than one quotation to the same piece of text. For instance, the first two-thirds of a sentence could belong to one quotation, while the last
LibQUAL+™ spring 2001 comments
Julie Anna Guidry
Performance Measurement and Metrics, Vol. 3 No. 2, 2002, pp. 100-107
two-thirds could belong to a second quotation. Thus, in this case, the middle third of the sentence would be associated with two different quotations and, most likely, two different codes. If this were attempted manually, the researcher would likely be required to duplicate the sentence so that it could be stored in two locations, one for each code.

In all, 36 codes were created, as shown in Table I. The length of the quotations, for the most part, was approximately one line or sentence long. However, some quotations were as short as one word while others were several paragraphs long. The longer quotations were most often those comments associated with a particular institution's library rather than comments about the survey itself. Because the purpose of this study was to refine the survey instrument, comments about a particular library, as opposed to the survey, most often were not analyzed in detail. Thus, this information was most often left in long strings. Additionally, some long quotations captured specific error messages produced while the respondent was taking the survey. Many respondents cut and pasted these error messages into their e-mails, some of which were up to 35 lines long.
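The coding mechanics described above (code windows listing each quotation with its location, and the double coding strategy) can be pictured with a small data-structure sketch. This is an illustrative model with invented class and field names, not Atlas.ti's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Quotation:
    document: str  # primary document the quotation belongs to
    line: int      # line number within that document
    text: str      # the highlighted passage

@dataclass
class CodeBook:
    # code name -> quotations carrying that code
    codes: dict = field(default_factory=dict)

    def assign(self, code: str, quotation: Quotation) -> None:
        # First use of a name acts like open coding; reusing a name acts
        # like selecting it from the existing code list. Calling assign
        # with two different codes on one quotation is double coding.
        self.codes.setdefault(code, []).append(quotation)

    def code_window(self, code: str, preview_words: int = 5) -> list:
        # Mimics the per-code window: the first few words of every
        # quotation for the code, with its document and line number.
        return [(" ".join(q.text.split()[:preview_words]), q.document, q.line)
                for q in self.codes.get(code, [])]

book = CodeBook()
q = Quotation("univ_a_emails.txt", 42,
              "The survey froze on page four and I lost my answers")
# Double coding: the same quotation filed under two codes.
book.assign("blocked from finishing survey", q)
book.assign("technical problems", q)

print(book.code_window("technical problems"))
```

Because the quotation object is shared rather than copied, filing it under a second code needs no duplication of the underlying text, which is the advantage over a manual cut-and-paste scheme noted above.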
Table I List and frequency of codes generated from LibQUAL+™ 2001 survey comments (columns: Code, Frequency; table body not reproduced)
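A per-code frequency tally of the kind Table I reports is straightforward to reproduce outside Atlas.ti. The sketch below uses a handful of made-up coded comments, not the actual LibQUAL+™ data:

```python
from collections import Counter

# Invented coded comments: (comment id, assigned code). The real
# analysis tallied 1,344 coded comments across 36 codes.
coded_comments = [
    (1, "blocked from finishing survey"),
    (2, "visual problems"),
    (3, "blocked from finishing survey"),
    (4, "survey too long"),
    (5, "visual problems"),
    (6, "blocked from finishing survey"),
]

frequencies = Counter(code for _, code in coded_comments)

# Print a Table I-style listing, most frequent code first.
for code, count in frequencies.most_common():
    print(f"{code}\t{count}")
```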
The third step in the analysis involved synthesizing the codes to form broader categories. This process is then used to develop analytic frameworks for the data collected under a constructivist grounded framework (Charmaz, 2000). Although Atlas.ti provides several methods for organizing data, this analysis was achieved using the following method, which was thought to be the most flexible. Within Atlas.ti, the network view provides a graphical representation of the data for model-building purposes. This view allows the researcher to import the codes that were previously created, which are represented by boxes called nodes. Nodes may represent entities other than codes, such as memos, quotations, or primary documents; however, in this analysis, nodes represented only codes. These nodes can then be related to one another through the use of lines, or links. Invoking grounded theory, each link may show a different type of relation, such as "is associated with", "is part of", "is cause of", etc., that is selected by the researcher.

The general structure of the synthesized data in this analysis forms a hierarchy, in which codes are grouped into categories, categories are grouped into higher-level categories, and so forth. It is important to understand that within the network view, Atlas.ti does not distinguish between these different levels. Rather, these are all considered nodes in Atlas.ti. The researcher, however, can form a hierarchical structure. To do this, the researcher would first create a new node in the network view. The original codes developed during the coding process can then be linked to this new node, or, in the researcher's terms, category. For example, the codes/nodes of "right side of screen not visible" and "graphics hide buttons" were linked to the new node named "visual problems". In order to further synthesize the data, higher-level categories were created in the same fashion, which subsumed codes and/or categories. For example, the higher-level category labeled "technical problems" included both "visual problems", a category, and "other/specific technical problems", a code.

While a hierarchical representation of the data is normally desired, it may be restrictive if complex relationships exist in the data. Fortunately, Atlas.ti allows any node within the network view to be linked with any other node, allowing for a rich representation of the data.

Results

Figure 1 shows the overall dimensions of all e-mail messages using Atlas.ti. Additionally, tallies of each code are provided in Table I. In all, 1,344 comments were coded. It must be emphasized that these comments were unsolicited, in that there was no question in the survey that asked respondents to provide their opinions. Thus, it is likely that these respondents were highly motivated to provide feedback and that less motivated people who did not provide feedback felt similarly.

As can be seen in Figure 1, the comments were broken into two major categories: those related to the survey itself and those related to a particular library. E-mail comments related to a specific library totaled 119, as shown in Table I. Again, because the purpose of this analysis was to improve the LibQUAL+™ survey instrument, comments about a library were not further analyzed.

Those comments related to the survey itself were also broken into two categories: positive comments and problems. Positive comments totaled 13 and reflected varying aspects of the survey. Problems with the survey, on the other hand, accounted for a very large number of comments about the survey itself: 1,212 of 1,225. While this number may seem alarmingly high, it must be kept in perspective: 20,416 people participated in the survey and some people provided more than one comment. Thus, no more than 6 percent (1,212/20,416) of respondents provided negative feedback.

The problems with the survey reported by respondents are likely a reflection of non-sampling error, i.e. error in a survey method design other than sampling error. Sampling error refers to the random error involved in drawing a sample and is the statistically estimated distance between the sample results and the estimated true population results
Figure 1 Dimensions of e-mail comments received during spring 2001 LibQUAL+™ survey (figure not reproduced)
(Hair et al., 2000). Unlike sampling error, however, non-sampling error:
. is systematic;
. is controllable;
. cannot be statistically estimated; and
. is iterative, meaning one type of error often generates another type.

The purpose of the Atlas.ti comment analysis was to determine the sources of non-sampling error, correct the errors and, thus, improve the next iteration of the LibQUAL+™ survey instrument. Although this analysis used a grounded theory approach, in that the literature on non-sampling error was not consulted immediately before the analysis was initiated, allowing patterns in the data to emerge without the influence of previous theory, the results found here are consistent with non-sampling error concepts. Thus, the problems with the LibQUAL+™ survey identified through respondent feedback will be discussed employing a non-sampling error conceptual framework.

The problems with the survey were grouped into five categories: technical problems; survey content – demographic items; survey content – service quality items; administrative and formatting issues; and respondent problems.

Technical problems
The first category of problems, "technical problems", included three sub-categories. "Blocked from finishing survey" referred to several general technical problems that occurred during the survey. These were likely due to Internet traffic, a slow connection, or possible server problems. Another set of technical problems that prevented respondents from completing the survey concerned survey software dynamics. For instance, 37 comments noted that the survey would not move past the fourth page, indicating the respondent had not completed the page even though they had. Additionally, another bug, in which an error occurred after respondents completed the demographic items, was reported 18 times. An incompatibility problem between the respondents' computer system and the survey software was another technical problem that respondents faced. "Visual problems" referred to technical problems that may have caused error although the respondents were still able to complete the survey; this sub-category, e.g. a small font size, was mentioned 26 times. All of the above-mentioned technical problems can be referred to as survey instrument design error, which "represents a 'family' of design or format errors that produces a questionnaire that does not accurately collect the appropriate raw data" (Hair et al., 2000, p. 276). These findings helped the LibQUAL+™ team remove the technical bugs and also motivated them to design a survey with fewer JavaScript applications, which would allow more expedient downloading of the survey, as well as address incompatibility problems.
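The network-view synthesis described in the third step earlier (codes linked into categories, categories into higher-level categories, plus free links between any two nodes) behaves like a small directed graph with typed links. The sketch below uses the "technical problems" hierarchy reported above; the Python representation is invented for illustration and is not how Atlas.ti stores its networks:

```python
# Nodes are codes/categories; links are (source, relation, target) triples.
links = [
    # Hierarchy expressed through "is part of" links.
    ("right side of screen not visible", "is part of", "visual problems"),
    ("graphics hide buttons", "is part of", "visual problems"),
    ("visual problems", "is part of", "technical problems"),
    ("other/specific technical problems", "is part of", "technical problems"),
    # A non-hierarchical cross-link, which a strict tree could not express.
    ("graphics hide buttons", "is cause of", "right side of screen not visible"),
]

def members(category: str) -> set:
    """All codes/categories reachable downward via 'is part of' links."""
    found = set()
    frontier = [category]
    while frontier:
        node = frontier.pop()
        for src, rel, dst in links:
            if rel == "is part of" and dst == node and src not in found:
                found.add(src)
                frontier.append(src)
    return found

print(sorted(members("technical problems")))
```

Because links are typed triples rather than parent pointers, the same structure holds both the hierarchy and the extra "is cause of" cross-link, mirroring the point that any node may be linked with any other node.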
Survey content – demographic items
The second category of problems was labeled "survey content – demographic items", with 27 respondents reporting problems with these items. Within this category, respondents noted that either their "academic discipline", e.g. law, or their "position", e.g. administrator, was not one of the possible responses. This problem is considered scaling measurement error, which "occurs when inaccuracies are designed into the various scale measures used to collect the primary raw data" (Hair et al., 2000, p. 275). A small number of respondents pointed out that the age and gender items were inappropriate or unnecessary. These items may have caused response error, which occurs when respondents unconsciously or deliberately misrepresent their answers (Hair et al., 2000).

Survey content – service quality items
[...] Furthermore, the survey's length may have influenced "yea- and nay-saying" (Hair et al., 2000). For instance, several respondents noted that they began filling out the survey in good faith but, once they realized the survey's length, they began mindlessly completing the survey simply to enter the PalmPilot drawing. These problems were addressed when developing the spring 2002 survey instrument. Based on the results of this qualitative analysis, along with the results of the quantitative analysis, the number of library service quality items was reduced from 56 to 25. The use of the nine-point scale, however, has remained the same in order to properly differentiate between respondents' minimum, perceived, and desired levels of service, as recommended by Parasuraman, Berry and Zeithaml in gap analysis (cf. [...])
[...] did not provide space for respondents' feedback, implying that the survey did not address their particular problems with their library. Both of these problems would suggest construct development error, which occurs when the researcher does not adequately identify important constructs related to the research problem. Furthermore, some respondents noted that the absence of an area to provide feedback also made them feel as though the survey was too impersonal. To rectify these concerns, a free-text comments section was added to the 2002 version of the survey.

The forced-choice format was the final problem associated with the library service quality items. Each item allowed the person to either score all three levels of service quality or choose the "N/A" response. A total of 60 respondents felt that neither of these options allowed them to accurately reflect their attitude. For some items, these respondents could not indicate a perceived level of service since they had not experienced that service, yet they did wish to express attitudes about their minimum and desired levels of service. The problem with forced choice is likely a reflection of sample design error, or "systematic inaccuracies by using a faulty sampling design to identify and reach the selected 'right' respondents . . ." (Hair et al., 2000, p. 276). Thus, those persons who felt a need to express their desired and minimum service levels but did not have attitudes of perceived levels of service were often those new to the library who had not yet had the opportunity to form perceptions of some services.

Administrative and formatting issues
The fourth category of problems pertained to "administrative and formatting issues". Many respondents felt that the processes associated with administering the survey were too intrusive, with 44 persons annoyed that they were continually reminded to complete the survey although they had already done so. Another interesting problem in this category was that several people (26) felt the survey had too many technical requirements, in that respondents were requested to change their computer's configuration, e.g. accepting cookies and modifying their monitor settings. Many of these comments indicate nonresponse error was possible. Another issue related to this fourth category concerned the directions provided, with some claiming that the instructions were not truthful (completing the survey took longer than specified) or were not clear regarding how to begin or finish the survey. The actual format of the survey was also a recurring comment. Some respondents specifically indicated that the number of buttons was "dizzying", while others noted that the font was too small, which was actually related to a technical problem.

Respondent problems
Finally, the last set of problems was labeled "respondent problems", meaning the respondents were either unwilling or unable to complete the survey. For instance, 33 respondents were upset about being sent unsolicited e-mail. Also, several respondents simply did not have time to complete the survey, which may have been related to the survey's length. Both of these comments influenced nonresponse error. In addition to these comments, 81 respondents stated that they either no longer used the library (were retired or had moved) or had recently begun to use the library and, thus, had not yet formed an opinion about particular aspects of it. These comments may reflect sample design error and were related to the forced-choice format problem mentioned previously. These problems reflect issues associated with the Web-based survey design, in which respondents must be contacted via e-mail. Unfortunately, these problems are not easily remedied. However, the advantages of a Web-based survey, such as the convenience afforded to the respondent and the absence of data entry error, were believed to outweigh these problems.

Conclusion

Although the quantitative analysis of the 2001 LibQUAL+™ results helped to refine the survey, that analysis could not reflect all of the non-sampling errors associated with the survey. The major impetus for this qualitative analysis of unsolicited respondent comments was to uncover these non-sampling errors and, thus, to further refine the LibQUAL+™ survey instrument. The most common problem respondents reported was the survey's length, which was due to a combination of factors, such as the number of