

Systematic review of research methods: the case of business instruction

Ann Manning Fiegen
California State University San Marcos, San Marcos, California, USA

Received 12 April 2010
Revised 4 May 2010
Accepted 7 May 2010
Abstract
Purpose – The purpose of this paper is to assess the body of business instruction literature by
academic librarians against evolving models for evidence-based research.
Design/methodology/approach – The paper used systematic review and inter-rater reliability of
the literature of business information research instruction to test two attributes of research quality:
the EBL levels of evidence and the EBLIP critical appraisal checklist.
Findings – Intervention questions and case studies are the most popular question type and
research method on the EBL levels of evidence scale. The majority of articles score below 75 on the EBLIP critical appraisal
checklist. Prediction questions are represented by higher levels of evidence and study quality.
Intervention questions paired with the cohort design and exploratory questions paired with survey
design indicate strong areas of research quality. The case study method, while most popular, showed
lower scores across all question types yet revealed some high-quality benchmark examples.
Research limitations/implications – Error is possible when distinguishing between cohort and
case study – some articles may fall into one or the other study design. Rater training was conducted
only once, and best practices for inter-rater reliability recommend multiple rounds to achieve higher
rater agreement.
Practical implications – Recommendations are presented for ways to improve the evidence base of
research articles, and areas are suggested for professional development opportunities for librarian
researchers wishing to increase the quality of research publications.
Originality/value – The paper goes beyond the narrative review of the literature of business
instruction to measure the research methods employed in those publications against two
evidence-based standards. The results will show where the literature stands as a maturing
discipline and provide recommendations for increasing the levels of evidence for future research.
Keywords Academic libraries, Research methods
Paper type Literature review

Introduction
Evidence-based practice advocates that academic librarians look to the published
literature to find reliable and valid studies as guidance. To inform and guide this
researcher’s practice, an effort was undertaken to locate studies that could offer that
guidance for information literacy instruction for business students. The result of that
search is that business librarians are prolific authors of business information research
instruction, resulting in hundreds of published studies to read and apply to practice.
This wealth of information prompted this researcher to ask: What measure defines
high-quality evidence-based research that can reliably be applied from the many
studies published? What kind of research do business librarians undertake and, taken
in the aggregate, does it suggest a maturing research methodology in the discipline?
And finally, against what standards can this body of literature be measured?
Before the content of the articles could reliably be applied to best practice, the
quality of the studies needed to be ascertained. The questions explored in this report
form a portion of a larger study that analyzes the content of the literature along the
dimensions of study objectives and results, institutional setting, sample population, and
pedagogy (theorist, standard, or model) employed. The focus of this report is only on
the research methods employed. Systematic review offers a model for summarizing
and critiquing the literature to improve future practice and possibly encourage higher
levels of research methods. A systematic literature review of 30 years should reveal
evidence toward a maturing research methodology.
Academic librarians have applied theory to practice and documented improvement
efforts through the published literature of business instruction. The case study is a
popular method for describing best practices and well suited for the action research
needed by librarians. Case studies, however, are a marker of a young discipline and are
considered by some to be less rigorous than higher levels of research. How
then can this discipline strive for higher levels of evidence?
The objective of this study is to assess the body of business instruction literature in
academic libraries against evolving models for evidence-based research. The results
should indicate opportunities for higher levels of research methodology in keeping with
a maturing discipline. Eldredge (2006) challenged librarians to apply and test his
proposed evidence-based librarianship levels of evidence model, which he initially applied
to research studies in medical librarianship. The question is whether the same can be said
for the literature of instruction in business information research. Where on Eldredge’s
proposed matrix is the state of research in this discipline, what if any are the implications
for future research, and what is the case for business instruction literature?
Academic librarians are introduced to research methods through graduate
education and must continue to learn the process essentially independently. The
expertise is gained from continuing education, professional development, mentoring
relationships, and the use of library collections on research methods. Guidelines
for evidence-based research can be used as a professional development tool to guide the
new researcher and as an instrument to assess the quality of existing research. The
medical field has led this effort by developing guidelines that help the researcher
assess the quality of existing research. Examples of “critical appraisal tools” can be
accessed from the International Centre for Allied Health Evidence (2009) web site
among others. Glynn (2006) studied many models to arrive at her instrument, the
EBLIP critical appraisal tool for library research. This study will show how that
instrument can be applied to the critical appraisal of business instruction literature by
testing two attributes of research quality using the business information instruction
literature as a case analysis. A systematic review of the literature will test for levels of
evidence and evidence of research method quality:
H1. Library articles on this subject will overwhelmingly be exploratory case
studies and low on the Eldredge levels of evidence hierarchy.
H2. The majority of articles will score below 75 percent as measured by the EBLIP
critical appraisal checklist.
Literature review
Commonly accepted social science research method definitions vary slightly depending
on the discipline and the author’s objectives. Widely followed for case analysis is
Yin (2003), who categorizes social science research into experimental, survey, archival
analysis, history, and case study. Gorman and Clayton's (2005) handbook for
information professionals divided qualitative research methods into observational,
interviewing, group discussion, and historical study. Fink (2005) emphasized assessing
quality of research studies for literature reviews and categorized studies into either
experimental or quasi-experimental families. Cooper (2010) cautioned about error when
evaluating the quality of studies.
There are a number of studies that have analyzed and critiqued the preferred research
methods used by librarians. Specific examples include Watson-Boone (2000, p. 87) who
examined 24 articles from the Journal of Academic Librarianship (JAL), grouped them into
six research methods, and ranked them by order of frequency of research method used:
survey research, action research, secondary data analysis, and case study, with
evaluation research and experimental tied for last. The typical JAL article emphasized
problem-solving and managerial issues, and therefore, the Watson-Boone categories
cannot be generalized to this study. Most recently, Hildreth and Aytac (2007)
summarized library practitioner articles from 2003 to 2005 into descriptive, exploratory,
explanatory, and evaluative research. A summative approach does not support the
benchmarking objective of this study.
Eldredge (2002) borrows from clinical medicine with the intent of applying his model
for evidence-based librarianship (EBL) levels of evidence to medical librarianship
research. The matrix approach associates three types of research questions (prediction
questions, intervention questions, and exploration questions) with ranked research
methods. The highest ranked level of evidence is systematic review, followed by
meta-analysis, summing up, prospective or retrospective cohort study, qualitative
studies, descriptive study or survey, and case study. He observes that typical library
research studies fall into the lower levels of descriptive survey, case study, and
qualitative methods, areas where error and author bias will more typically occur
(Eldredge, 2002, p. 294). Given (2006) continues a long-standing debate by arguing that
the levels of evidence introduced by Eldredge favor quantitative research methods and
that relegating qualitative research to the lowest level overlooks its appropriate place in
social science, including library science research.
The Eldredge model of 2002 is adopted here as it most closely supports the objectives of
this study.
Edwards (1994) reported on the ratio of research articles to non-research articles
between 1971 and 1991 when reviewing the research of bibliographic instruction.
She ranked frequency of research method used among the research articles and
frequency of library instruction topic. While the Edwards study is not directly
comparable to the present research, its results can confirm or deny a trend. Literature
reviews of library instruction are characterized by the exploratory question (what was
published) and the descriptive narrative review method (summary of trends and
annotation of entries for a given time period). Typical of this genre are Rader (1974, 2002)
and Johnson et al. (2007). Crawford and Feldt (2007) conducted a systematic analysis of
library instruction literature using citations from the ERIC database as their source.
Their study expands the narrative review by including explicit research objectives, statements
of study inclusion and exclusion, and analysis of the articles. Koufogiannakis (2006)
reported on a systematic review and meta-analysis of the most effective method for
teaching information literacy skills to undergraduate students and found that
computer-aided instruction was as effective as traditional instruction and that
traditional and self-directed instruction are more effective than no instruction. She
recommended that further research be conducted using comparative and validated
research methods and suggested additional replication of existing high-quality
studies.
Examples of narrative review articles that summarize the state of information literacy
specific to business students include Jacobson (1993) who summarized the literature of
best practices for business instruction from 1985 to 1992. Most published articles about
business instruction include literature reviews. Indicating a trend toward more
evidence-based information literacy research are Cooney (2005), who surveyed business
instruction librarians at AACSB-accredited colleges to assess the extent of business
instruction in libraries, and a systematic review by Zhang et al. (2007) that compared
the effectiveness of face-to-face and computer-assisted instruction. This study will go beyond
the narrative review to measure the research methods employed against two standards:
the EBL levels of evidence model and the EBLIP critical appraisal checklist.

Methodology
Library and business education bibliographic databases were searched for English
language publications between 1980 and spring 2009. The database was initially created
in 2004, and the searches were repeated thereafter through spring 2009. Databases
searched included EbscoHost Premier, Emerald Fulltext, ERIC, Library Literature and
Information Science, LISA, ProQuest ABI/INFORM Global, and ISI's Web of Knowledge. Hand searches
of cited references in the primary literature were also conducted. File drawer bias is
outside the parameters of the study as this study’s objective was to research only the
published literature. The databases searched replicate those used by Johnson et al. (2007)
in their annual review article of library instruction with the addition of Emerald and ABI
Inform Global. The latter were included to expand the search to internationally published
reports in library science and business management education. Each index was
searched for the terms: library and business and (instruct* or literac* or assess* or
teach*) and (academic or higher education or college or university).
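For illustration only, the reported search string can be expressed programmatically. The following minimal sketch (not part of the original study; the helper and variable names are assumptions) assembles the Boolean query from the term lists given above:

    # Hypothetical reconstruction of the study's Boolean search string.
    # Term lists are taken from the text; the code itself is illustrative only.
    core_terms = ["library", "business"]
    activity_terms = ["instruct*", "literac*", "assess*", "teach*"]
    setting_terms = ["academic", "higher education", "college", "university"]

    def or_group(terms):
        """Join terms into a parenthesized OR group."""
        return "(" + " or ".join(terms) + ")"

    query = " and ".join(core_terms + [or_group(activity_terms),
                                       or_group(setting_terms)])
    print(query)
    # library and business and (instruct* or literac* or assess* or teach*)
    #   and (academic or higher education or college or university)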
Bibliographic records and abstracts were scanned resulting in an initial set of
245 articles about library instruction for business students. Further review resulted in
69 articles as the working set for this study. Criteria for inclusion were as follows:
articles were authored or coauthored by a practicing academic librarian, the subject of
the study was instruction in business research in academic libraries, and the article
was in English and published between 1980 and 2009. Excluded were non-peer-reviewed
articles, articles appearing as columns, studies authored by business faculty or by
faculty teaching in library and information science graduate programs but not
coauthored by a practicing librarian, and business education literature about
information competencies that did not explicitly refer to library instruction with a
librarian.
The bibliographic software EndNote was used to hold the data sets. Each article was
read, coded, and color classified according to one of Eldredge's (2002) three research
question types: prediction, intervention, or exploratory. Table I shows Eldredge's levels of
evidence matrix where each article is categorized into its corresponding question type
and research method of meta-analysis, summing up, prospective or retrospective cohort
study, qualitative studies, descriptive study or survey, and case study.
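To make the classification step concrete, the matrix can also be read as a ranked list of research methods per question type. The sketch below is an assumed encoding for illustration, not an instrument from the study; the rankings follow the levels of evidence described above:

    # Illustrative encoding of Eldredge's (2002) levels-of-evidence matrix.
    # Methods are ordered from highest (index 0) to lowest level of evidence.
    LEVELS_OF_EVIDENCE = {
        "prediction": [
            "systematic review", "meta-analysis",
            "retrospective cohort study", "prospective cohort study",
            "survey study", "case study",
        ],
        "intervention": [
            "systematic review", "meta-analysis", "randomized controlled trial",
            "retrospective cohort study", "prospective cohort study",
            "survey", "case study",
        ],
        "exploration": [
            "systematic review", "summing up", "qualitative studies",
            "survey", "case study",
        ],
    }

    def evidence_rank(question_type, method):
        """Rank of a method for a question type (0 = highest evidence)."""
        return LEVELS_OF_EVIDENCE[question_type].index(method)

    print(evidence_rank("intervention", "case study"))  # 6, the lowest level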
Eldredge (2004) distinguishes higher order levels of evidence from the lower levels by
their “distinct hypothesis and objectives, interventions, and measurable outcomes”.
The definitions in Eldredge’s (2004) inventory of research methods guided the
classification of the data set. Cohort studies used a defined population, indicated some
kind of intervention even if it only described a change from status quo, and had a
measurable outcome. Those studies were further identified as either using prospective
or retrospective data collection methods. Articles defined as case studies described
an experience. According to Eldredge (2004), case studies are distinguished by their
description and analysis of author’s experiences, have multiple sources of evidence, and
will answer how and why questions; refer to Booth and Brice (2004) and Eldredge (2004)
for more complete descriptions of definitions and study design.
The paired response inter-rater reliability method (Fink, 2005) was used to rate the
research quality of the articles against the Glynn (2006) EBLIP Critical Appraisal
Checklist. Six raters were selected to participate in the study, five of whom were authors
writing on the subject in the last ten years. One recent library and information science
(LIS) graduate with prior research methods experience was also invited to participate.
No rater was assigned to their own study, although one rater disclosed frequent
collaboration with the author of one of the assigned articles. This was not deemed enough of a conflict to
warrant exclusion. Permissions to use the Eldredge and Glynn instruments were
obtained from the authors as were publisher supplied reprints of the Glynn article for all
raters. Exempt status was requested and granted by the university review board since
the subjects under study, the published articles, were not human subjects.
A training packet was mailed to the raters that included instructions describing
the expectations and compensation for participation in the study, procedures for using
the rating instrument, a copy of the Glynn article and checklist, and a sample article to
rate that was not included as part of the study. Training sessions were scheduled and
conducted in summer of 2009. Raters were instructed to read the sample article, use the
checklist to rate the article independently by noting any comments or questions,
and return their rating sheet to the principal investigator prior to the scheduled WebEx
training session. At each of the three WebEx training sessions, each pair of raters and
the principal investigator met virtually to review the rating of the training sample
article with the intention of reaching agreement on the checklist elements. Raters were
to be guided by their own experience as practitioner researchers, though some
expressed concern about their own expertise in research methods. The discussion often
centered upon reaching agreement on the definition and interpretation of the checklist
questions as well as how to respond to the categories of yes, no, unclear, or not
applicable. Glynn's (2006) annotations for using the instrument guided the discussion. For
example, a Yes answer would indicate that there was an explicit statement in the
study. A Yes response regarding the use of a validated instrument for data collection
indicated that the instrument was pilot tested, there was evidence of revision prior to
use, or it was based on a previously published study. After the training session, raters
were invited to follow up with the principal investigator on any questions or concerns.

Table I. Business instruction levels of evidence (source: used by permission of the author, Eldredge, 2002)

Prediction (n = 8)               Intervention (n = 22)             Exploration (n = 17)
Systematic review                Systematic review                 Systematic review        1
Meta-analysis                    Meta-analysis                     Summing up
Retrospective cohort study   2   RCT                           2   Qualitative studies
Prospective cohort study     2   Retrospective cohort study    4   Survey                   8
Survey study                 1   Prospective cohort study      3   Case study               8
Case study                   3   Survey
                                 Case study                   13

Raters were sent their full set of articles with instructions to rate and return half the
articles at a three-month midpoint, with the final set due at six months. At the end of
the study period, raters were debriefed and comments summarized for reporting
purposes. The checklists contained no information regarding the identity of raters.
Completed checklists were then paired to the corresponding articles. Paired
responses were recorded, and where there was no agreement, discrepancies in scoring
were broken by a vote from the principal investigator. Glynn's checklist uses a scoring
mechanism where each of the 29 "yes, no, unclear" question responses is added and
averaged along four dimensions (study population, data collection, study design, and
results) to arrive at an average score. Not applicable is a response option but is not used
for scoring. A score of 75 or above indicates a valid research study. She cautions that
numerical scoring is not the sole indicator of the quality of an article.
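As a worked illustration of this scoring mechanism, the sketch below computes a checklist score under two assumptions made for this example only: each Yes counts as 1, each No or Unclear counts as 0, and Not applicable responses are excluded before each section percentage is taken. The example answers and section sizes are hypothetical:

    # Illustrative EBLIP checklist scoring; answers and section sizes are
    # hypothetical, and yes=1 / no=unclear=0 is an assumption of this sketch.
    def section_score(answers):
        """Percent Yes among scored answers; 'na' responses are excluded."""
        scored = [a for a in answers if a != "na"]
        if not scored:
            return None  # a fully not-applicable section is skipped
        return 100.0 * sum(a == "yes" for a in scored) / len(scored)

    def checklist_score(answers_by_section):
        """Average the section percentages; 75 or above suggests a valid study."""
        scores = [s for s in map(section_score, answers_by_section.values())
                  if s is not None]
        return sum(scores) / len(scores)

    example = {
        "population":      ["yes", "yes", "no", "yes", "unclear"],
        "data collection": ["yes", "no", "yes", "yes", "na", "yes"],
        "study design":    ["yes", "yes", "yes", "unclear"],
        "results":         ["yes", "yes", "yes", "yes"],
    }
    print(round(checklist_score(example), 1))  # 78.8, above the 75 threshold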

Analysis and discussion


Once each article was classified into the EBL levels of evidence matrix and then
scored by the EBLIP critical appraisal checklist, patterns emerged showing what kind
of research librarians conduct for business information instruction and indicating the
top, middle, and lower tiers of articles for each category.
A total of 69 studies were sent to raters; of those, 22 scored zero on the checklist and
therefore were eliminated as not meeting the definition of evidence-based research.
A total of 47 articles advanced to analysis and were plotted into the EBL levels of
evidence matrix. The first part of H1, that the majority of articles will be exploratory,
does not hold; rather, the set is led first by intervention questions (n = 22:47, or 46 percent),
second by exploratory (n = 17:47, or 36 percent), and finally by prediction (n = 8:47, or
17 percent). The second part of H1, that the majority will be case studies and low on the
levels of evidence scale, does hold (n = 24:47, or 51 percent). Descriptive or survey studies
constitute n = 9:47 or 19 percent, cohort studies n = 11:47 or 23 percent, randomized
controlled trials form the least used type of study at n = 2:47 or 4 percent, and finally
there is one lone systematic review at n = 1:47 or 2 percent.
When levels of evidence are viewed within each question, some patterns emerge.
Prediction questions tended to favor higher level research. The cohort design (four) is
slightly more likely than the case study (three). The intervention questions favored
case studies (13) followed by seven cohort studies and two randomized control studies.
Exploratory questions split evenly between survey design method (eight) and case
studies (eight). Table I shows how the studies are plotted against the levels of evidence
matrix (Eldredge, 2002, p. 10).
Table II plots the research question against the EBLIP critical appraisal checklist
and indicates how groups of articles were ranked for research rigor. Overall, 15:47 or
31 percent of all studies ranked in the top tier by receiving a score of 75 or above.
These were 5:8 or 62 percent of predictive studies, 4:22 or 18 percent of intervention
studies, and 6:17 or 35 percent of exploratory studies. As a percentage, predictive
studies score higher on the EBLIP critical appraisal scale, though significantly, high
ratings are represented in each research question type.

Table II. Critical appraisal score by type of research question

                     Predictive       Intervention     Exploratory      Total
EBLIP critical       (n = 8)          (n = 22)         (n = 17)         (n = 47)
appraisal score      n     %          n     %          n     %          n     %
75-100               5     62         4     18         6     35         15    31
50-74                2     25         11    50         7     41         20    42
0-49                 1     12         7     31         4     23         12    25

Highly rated articles described clear populations or samples and often used cohorts.
Instructional design was based on validated models or industry standards. Data
collection used validated or piloted survey or assessment measures. Multiple measures
using both quantitative and qualitative data collection methods increased validity or
supplemented self-perception surveys. If longitudinal, the time spans between pre- and
post-test were a semester or longer. Statistical analysis reported more than percentages
and included deviations and significance factors. Explicit articulation of study design,
methodology, and results that could facilitate replication characterized all highly
ranked articles. The Appendix lists all highly ranked articles grouped by level of
evidence and type of study.
Articles showing some, but uneven, research quality in the middle tier scores of 50-74
were 2:8 or 25 percent predictive questions, 11:22 or 50 percent intervention studies, and
7:17 or 41 percent exploratory questions. This middle range represents the largest
number of studies and suggests opportunities for additional training and professional
development to increase validity and rigor for intervention and exploratory questions.
Of those with low scores, 1:8 or 12 percent were predictive questions, 7:22 or
32 percent were intervention questions, and 4:17 or 23 percent were exploratory
questions. This validates the finding that predictive questions score higher on the
EBLIP critical appraisal checklist and suggests that studies using intervention
questions would benefit from additional attention to research rigor.
Figure 1 illustrates the level of evidence against the critical appraisal ranking by
type of question. Prediction questions that use higher levels of evidence represented
by RCT and cohort design as a group score high on the EBLIP critical appraisal scale.
Intervention questions are the most popular question type (n = 22), and when
viewed by EBL levels of evidence, the article set shows that the cohort design method,
scoring 60 and above, outranks the case study method where all but one score below 66.
This suggests that choosing a cohort design over case study tends to increase quality
of research for intervention questions.

[Figure 1. Levels of evidence and critical appraisal: EBLIP critical appraisal scores (0-100) plotted for each study by level of evidence (SR, RCT, cohort, descriptive/survey, case study), shaded by question type (prediction, intervention, exploration)]

An interesting phenomenon appears with exploratory studies: while represented
by fewer articles, six using that methodology are in the top tier of the checklist
(one systematic, two surveys, and three cases), a stronger showing in the top tier than
intervention articles. The descriptive survey design method (n = 8) is generally clustered
high, with two studies in the top tier, five in the middle tier, and only one study in the
lower tier. This validates the hypothesis that librarians are familiar and comfortable
with exploratory survey design methodology. Case studies, on the other hand, were
mixed, with three high scores and two in the middle range, while three scored low, as did
most of the studies deemed ineligible.
Study limitations
Every effort was made to include all published business research instruction by
academic librarians in the initial search but some may have been missed. Owing to the
ambiguity in some of the articles, error is possible when distinguishing between cohort
and case study; some articles may fall into one or the other study design. Some raters,
while among the higher ranked authors, expressed a need for additional training that
time and resources did not allow. Rater training was conducted only once, and best
practices for inter-rater reliability recommend multiple rounds to achieve higher rater
agreement. The reliability of this method may increase with higher knowledge of
research methodology and multiple rounds of training to increase inter-rater reliability.
A kappa statistic was not computed for this study, which relied rather on simple majority.
Raters were not blinded to article authors; therefore, bias is possible. This researcher's
own article and, in another case, an article by a rater's colleague were in the set; nevertheless, the
articles were included. Outside the scope of this research is an important group of
writers of business instruction in libraries. Articles that were solely authored by LIS
professors in graduate schools or business faculty and were not coauthored by
practicing librarians were not included in this data set. Although they contribute
important studies, inclusion of those reports would skew results for this study
population. Case studies of reports of practice are outside the scope of this study but
continue to serve an important function for advancing the practice of library
instruction for business students and were represented in the first large data set. Many
other studies appear as book chapters and outside the indexing and abstracting
services and therefore may have been missed.
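As a pointer for the further training and replication this section suggests, inter-rater agreement is straightforward to quantify in a future study. The following minimal sketch of Cohen's kappa for two raters is an illustration only and was not part of this study's analysis:

    # Cohen's kappa for two raters over the same items (illustrative only).
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
        and p_e is chance agreement from the raters' marginal frequencies."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical checklist responses from one pair of raters:
    a = ["yes", "yes", "no", "unclear", "yes", "no"]
    b = ["yes", "no", "no", "unclear", "yes", "yes"]
    print(round(cohens_kappa(a, b), 2))  # 0.45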

Conclusion and recommendations


This study sought to categorize business information literacy research articles into the
two models. H1 stated that library articles about business information research
instruction would overwhelmingly be represented by exploratory studies and would
use the case study method that is low on the Eldredge EBL levels of evidence
hierarchy. The first part of H1 does not hold, as exploratory studies comprise only
36 percent of studies, less than the 46 percent that are intervention studies, but more
than the 17 percent that are prediction studies. The second part of the statement does hold
since case studies represented fully 51 percent of research designs chosen. The second
most popular method is the cohort study with 11 or 23 percent of articles, followed by
survey design with 9 or 19 percent of articles, and finally only two randomized control
trials and one systematic review.
H2 stated that the majority of articles will score below 75 percent as measured by
the Glynn critical appraisal checklist. H2 holds as the majority of articles scored below
the Glynn definition of high-quality research. In all, 245 articles over the study period of
1980-2009 met the practical screen. Only 47 met the criteria for evidence-based research
studies that comprised the population under study. Of those, only 15 or 31 percent
scored in the top tier of the EBLIP critical appraisal checklist, while 42 percent scored
in the middle tier and 25 percent scored in the lowest tier.
Pursuing prediction questions yielded higher levels of evidence and study quality.
Prediction questions comprised only 8:47 or 17 percent of all studies yet overwhelmingly
scored high for quality in both cohort and case study design. Intervention questions paired with the
cohort design method scored above 60, and this pairing is also suggested for those
conducting this kind of research. For those choosing exploratory questions and using
the descriptive survey method, two articles scored in the top tier, with most of the
remaining articles scoring in the middle tier.
Surprisingly, the case study method (n = 24) scored low across all question types,
with only five exceptions scoring 75 or above. Considering that it represents the most popular
choice for research design, it is an important area for future training and professional
development among potential authors. The five highly ranked case studies are notable
since they show that there is a place for evidence-based research quality case studies
even though they are considered low on the levels of evidence scale. Future
professional development for true research-based case study, as opposed to the more
informal description of practice, would increase the evidence base and validity of the
case studies. This is especially important as the case study is and will continue to hold
an important role in this research genre. One rater commented that:
There is a long tradition of case study articles in librarianship. We all like them, read them,
write them, and we could definitely use a push to provide the hard evidence of success, rather
than just the comfortable assurances that we succeeded.
When examining all articles by the four segments of the EBLIP critical appraisal
checklist, the population segment and results segment were highly rated. The segments
for data collection and to a lesser extent research methodology scored lower. Often, the
lower scoring articles only lacked additional clarity, resulting in a score of “unclear”.
More clarity in data collection reporting and study design would have increased the
quality scores of those articles in the middle tier. Glynn’s points regarding use of
validated instruments, and using questions posed to elicit precise answers, would
increase rigor in data collection. Special attention to explicit documentation about the
use of consent forms, ethics panel clearance and disclaimers, and divulgence about the
role of authors in data collection would increase quality as measured by the checklist.
Generally, these indicators suggest that research reports need only moderate
improvements to become highly rated articles. All raters remarked that the checklist
would serve as a useful planning guide when undertaking and writing research reports,
though they also cautioned that it was not applicable for some kinds of writing.
Hildreth and Aytac (2007) review the quality of LIS practitioner and academic
scholar research. They also cite validity and lack of evidence as limitations in those
articles. They credit journal editors and reviewers for urging higher quality studies.
They suggest collaborations as a way to increase quality and note significantly that
collaborations were more associated with other disciplines than with LIS faculty,
a finding similar to this report. They suggest further research into why there are low
levels of collaborative research, how they can be increased, and what the role is for the
LIS graduate curriculum (Hildreth and Aytac, 2007, p. 255).
Results of this research suggest education and professional development in rigorous
case study method. Collaborations with business faculty as coauthors have a rich
tradition and tend to increase the quality of the research reports, though the evidence
shows that coauthorship with business faculty was not a deciding factor among the
highly ranked set as a group. High-quality studies authored solely by LIS faculty
appeared in the initial phases of data collection yet were out of scope for this study.
Yet, as Hildreth and Aytac (2007) indicated, future collaborations
between LIS graduate faculty and practicing academic librarians could increase the
quality of research.
In summary, business librarians conduct high-quality research at all levels of
evidence. Prediction questions are represented by higher levels of evidence and study
quality. Intervention questions paired with the cohort design and exploratory
questions paired with survey design indicate strong areas of research quality among
the set of articles studied. The case study method, while most popular, showed lower
scores across all question types yet revealed some high-quality benchmark examples.
Authors preferred case studies, though using cohort design tended to increase the
measure of rigor. To continue to raise the levels of evidence, it is recommended that
business librarians conduct more random control trials and more systematic reviews of
existing research. To increase the rigor of research, the results of this study suggest
closer attention to clarity of description in all segments but especially data collection
and research methodology. Education and professional development in the case study
method are indicated for this preferred research method. The EBLIP critical appraisal
checklist or similar guidance is strongly recommended as a planning guide when
undertaking a research project.
This systematic review yielded 15 studies that met the criteria for evidence-based
research (the Appendix). Those studies now provide a set of articles that can reliably be
applied to the practice of business information literacy instruction. The many quality
articles ranked in the middle tier also provide valuable reports of practice. This
systematic review of the literature shows that Eldredge’s EBL levels of evidence and
Glynn’s EBLIP critical appraisal checklist can be used as indicators for the maturity of
business instruction research by academic librarians as a body of research and can
suggest future direction for new evidence-based studies.

Acknowledgements
This study was funded in part by a BRASS Emerald Research Award 2009 and a California State
University San Marcos Faculty Research Grant. The author wishes to acknowledge the
participation of Martha Cooney, Cheryl Delson, Nancy Dewald, Patrick Ragains, Frank Vuotto,
and Diana Wu. Portions of this report were presented at the California Academic and Research
Libraries Conference 2010.

References
Booth, A. and Brice, A. (Eds) (2004), Evidence-based Practice for Information Professionals:
A Handbook, Facet, London.
Cooney, M. (2005), “Business information literacy instruction – a survey and progress report”,
Journal of Business and Finance Librarianship, Vol. 11 No. 1, pp. 3-25.
Cooper, H. (2010), Research Synthesis and Meta-analysis: A Step-by-step Approach, 4th ed., Sage,
Thousand Oaks, CA.
Crawford, G.A. and Feldt, J. (2007), “An analysis of the literature on instruction in academic
libraries”, Reference & User Services Quarterly, Vol. 46 No. 3, pp. 77-87.
Edwards, S. (1994), “Bibliographic instruction research: an analysis of the journal literature from
1977 to 1991”, Research Strategies, Vol. 12 No. 2, pp. 68-78.
Eldredge, J.D. (2002), “Evidence-based librarianship levels of evidence”, Hypothesis, Vol. 16 No. 3,
pp. 10-13.
Eldredge, J.D. (2004), “Inventory of research methods for librarianship and informatics”, Journal
of the Medical Library Association, Vol. 92 No. 1, pp. 83-90.
Eldredge, J.D. (2006), “Evidence-based librarianship: the EBL process”, Library Hi Tech, Vol. 24
No. 3, pp. 341-54.
Fink, A. (2005), Conducting Research Literature Reviews: From the Internet to Paper, 2nd ed.,
Sage, Thousand Oaks, CA.
Given, L. (2006), “Qualitative research in evidence-based practice: a valuable partnership”,
Library Hi Tech, Vol. 24 No. 3, pp. 376-86.
Glynn, L. (2006), “A critical appraisal tool for library and information research”, Library Hi Tech,
Vol. 24 No. 3, pp. 387-99.
Gorman, G.E. and Clayton, P. (2005), Qualitative Research for the Information Professional:
A Practical Handbook, 2nd ed., Facet, London.
Hildreth, C.R. and Aytac, S. (2007), “Recent library practitioner research: a methodological
analysis and critique”, available at: http://myweb.cwpost.liu.edu/childret/practitioner-
research.doc (accessed 15 April 2007).
International Centre for Allied Health Evidence (2009), “Critical Appraisal Tools”, available at:
www.unisa.edu.au/cahe/CAHECATS/ (accessed 29 March 2010).
Jacobson, T.E. (1993), “Another look at bibliographic instruction for business students”, Journal
of Business and Finance Librarianship, Vol. 1 No. 14, pp. 17-24.
Johnson, A.M., Jent, S. and Reynolds, L. (2007), “Library instruction and information literacy
2006”, Reference Services Review, Vol. 35 No. 4, pp. 584-640.
Koufogiannakis, D. (2006), “Effective methods for teaching information literacy skills to
undergraduate students: a systematic review and meta-analysis”, Evidence Based Library
and Information Practice, Vol. 1 No. 3, pp. 3-43.
Rader, H. (1974), “Library orientation and instruction – 1973: an annotated review of the
literature”, Reference Services Review, Vol. 2, pp. 91-3.
Rader, H. (2002), “Information literacy 1973-2002: a selected literature review”, Library Trends,
Vol. 51 No. 2, p. 242.
Watson-Boone, R. (2000), "Academic librarians as practitioner-researchers", Journal of Academic
Librarianship, Vol. 26 No. 2, pp. 85-93.
Yin, R.K. (2003), Case Study Research: Design and Methods, 3rd ed., Sage, Thousand Oaks, CA.
Zhang, L., Watson, E.M. and Banfield, L. (2007), “The efficacy of computer-assisted instruction
versus face-to-face instruction in academic libraries: a systematic review”, The Journal of
Academic Librarianship, Vol. 33 No. 4, pp. 478-84.

Appendix. Top-ranked business instruction articles by practitioner academic
business librarians based on EBL levels of evidence and EBLIP critical appraisal
checklist
Prediction – cohort
Bowers, C.V.M., Chew, B., Bowers, M.R., Ford, C.E., Smith, C. and Herrington, C. (2009),
“Interdisciplinary synergy: a partnership between business and library faculty and its effects on
students' information literacy", Journal of Business and Finance Librarianship, Vol. 14 No. 2,
pp. 110-27.
Magi, T.J. (2003), “What’s best for students? Comparing the effectiveness of a traditional print
pathfinder and a web-based research tool”, Portal – Libraries and the Academy, Vol. 3 No. 4,
pp. 671-86.
Orme, W.A. (2004), “A study of the residual impact of the Texas information literacy tutorial
on the information-seeking ability of first year college students”, College & Research Libraries,
Vol. 65 No. 3, pp. 205-15.
Prediction – survey
Dewald, N.H. (2005), "What do they tell their students? Business faculty acceptance of the web
and library databases for student research", The Journal of Academic Librarianship, Vol. 31 No. 3,
pp. 209-15.

Intervention – RCT
Diamond, T. and McGee, J.E. (1995), “Bibliographic instruction for business writing students:
implementation of a conceptual framework", RQ: Research Quarterly, Vol. 34 No. 3, pp. 340-60.

Intervention – cohort
Fiegen, A.M., Cherry, B. and Watson, K. (2002), “Reflections on collaboration: learning outcomes
and information literacy assessment in the business curriculum”, Reference Services Review,
Vol. 30 No. 4, pp. 307-18.
Lombardo, S.V. and Miree, C.E. (2003), “Caught in the web: the impact of library instruction
on business students' perceptions and use of print and online resources", College & Research
Libraries, Vol. 64, January, pp. 6-24.

Intervention – case
Judd, V., Tims, B., Farrow, L. and Periatt, J. (2004), “Evaluation and assessment of a library
instruction component of an introduction to business course: a continuous process”, Reference
Services Review, Vol. 32 No. 3, pp. 274-83.

Exploratory – systematic review
Song, Y.-S. (2004), “International business students: a study on their use of electronic library
services”, Reference Services Review, Vol. 32 No. 4, pp. 366-71.

Exploratory – survey
Cooney, M. (2005), “Business information literacy instruction – a survey and progress report”,
Journal of Business and Finance Librarianship, Vol. 11 No. 1, pp. 3-25.
Dewald, N.H. (2003), “Anticipating library use by business students: the uses of a syllabus
study”, Research Strategies, Vol. 19 No. 1, pp. 33-45.

Exploratory – case study
Koss, A.I. (1996), “Information needs of Kent State University Masters of Business
Administration students”, Masters thesis – N.P., Kent State University, Kent, OH.
Littlejohn, A.C. and Benson-Talley, L. (1990), “Business students and the academic library:
a second look”, Journal of Business and Finance Librarianship, Vol. 1 No. 1, pp. 65-88.
Thomas, J. (1994), “Faculty attitudes and habits concerning library instruction: how much has
changed since 1982?” Research Strategies, Vol. 12 No. 4, pp. 209-23.

About the author
Ann Manning Fiegen is a Business Librarian at California State University, San Marcos.
She received her MLS from the University of Arizona. Ann Manning Fiegen can be contacted at:
afiegen@csusm.edu
