
Besara, R. (2012). Apps for assessment: A starting point. The Reference Librarian, 53, 304-309.

This short article briefly reviews mobile applications available in 2012, primarily for iOS
devices. Though it does not review the applications this researcher is most likely to use, the
general conclusions drawn are useful in supporting the use of electronic media for gathering qualitative
data:
Mobile devices with well-chosen applications can gather qualitative data through media
recordings and drawings, as well as quantitative data through counters, environmental
meters, and surveys. It is clear that smartphones and tablets are developing into more than
just basic communication and media consumption devices. Whether considering a quick,
informal assessment project or planning a formal, long-term venture, mobile data
gathering offers access to a multiplicity of tools and streamlines data management in
ways previously impossible (Besara, 2012, p. 308).
Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of
Psychology, 54, 579-616.
The work cited above reviews diary study design using several platforms. This annotation will
focus on the electronic modes of collecting data for diary studies (pp. 596-599).
Beyond highlighting the same advantages of electronic collection shared by other researchers
(flexibility in sending/receiving messages, accuracy of transcript, etc.), the authors acknowledge that
mobile communication allows online, interactive, and ongoing contact with participants that includes
media beyond the written word. To implement mobile technologies successfully, researchers should be
careful to moderate the number and type of questions asked. Too many questions or questions asked in
a slow or tedious format may slow participant responses or make the responses less rich. They also
mention that finding the right software to support the collection of such data can be cost-prohibitive and
that it may be necessary to train participants.

Bosnjak, M., Metzger, G., & Graf, L. (2010). Understanding the willingness to participate in mobile surveys:
Exploring the role of utilitarian, affective, hedonic, social, self-expressive, and trust-related factors.
Social Science Computer Review, 28(3), 350-370.
This quantitative study draws conclusions about the named factors (as stated in the title) as they
contribute to participants' willingness to engage in mobile surveys. Though my own theoretical
perspective prevents me from believing that such conclusions could be drawn through quantitative
measures, the literature review presented is helpful to understanding the established research concerning
users' motivation to engage with electronic research platforms.
In their review of relevant literature, the authors present Davis's (1986, 1989, 1993) technology
acceptance model (TAM), a starting point for thinking about users' perceptions of technology (presented
as Figure 1 in the article, not reproduced here). Using this model, we can see that users' perceptions of
ease of use are the catalyst for their perceptions of usefulness, attitude, behavioral intentions, and end
behavior. In a nutshell, the TAM posits that "the easier the technology is to use, and the more useful users
perceive it to be, the more positive their attitude and intention will be about using that technology" (2010,
p. 352). From the dates associated with Davis, it is clear that the model was intended to apply primarily to
computer usage. Davis's model is connected to current mobile devices by more recent research that adds
perceived enjoyment and perceived trustworthiness to the original TAM.
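Since the article's Figure 1 does not reproduce here, the following is a minimal schematic of the TAM's
causal chain as it is conventionally drawn (my own rendering in LaTeX, not the article's notation), where
PU is perceived usefulness and PEOU is perceived ease of use:

\begin{align*}
\mathrm{PU} &= f(\mathrm{PEOU},\ \text{external variables}) \\
\mathrm{Attitude} &= f(\mathrm{PU},\ \mathrm{PEOU}) \\
\mathrm{Intention} &= f(\mathrm{Attitude},\ \mathrm{PU}) \\
\mathrm{Use} &= f(\mathrm{Intention})
\end{align*}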
Another, more current model of technology use is the unified theory of acceptance and use of
technology (UTAUT), which highlights performance expectancy (including perceived usefulness),
effort expectancy (which extends perceived ease of use), social influences (subjective norms and image),
and facilitating conditions (2010, p. 353). The influence of these factors on intentions to use
technology is mediated by demographics, experience, and the degree to which users proceed
voluntarily.
The authors propose the following six-factor model for their work: (a) affective (attitude toward
participation), (b) hedonic (perceived enjoyment), (c) social (subjective norm), (d) self-expressive
aspects (self-congruity), (e) utilitarian aspects (perceived usefulness and perceived costs [Cheong &
Park, 2005]), and (f) trust considerations (perceived trustworthiness).

Commented [MJE1]: As stated above, I do not inherently
trust the results of the testing of the model, as the path diagram
was built using quantitative measures only. My own
ontology prevents me from trusting data that does not
include qualitative interviews to gather experiential data
from users.
Bowker, N., & Tuffin, K. (2004). Using the online medium for discursive research about people with
disabilities. Social Science Computer Review, 22(2), 228-241.
In their study concerning the use of online resources by people with disabilities, Bowker and
Tuffin directly address the use of email and other electronic platforms for conducting interviews and
collecting qualitative data. In its textual format, data collected in this manner has the advantage of being
ready-made for qualitative analysis without the risk of transcript bias. Participants also benefit from
this format because of the great flexibility that allows them to respond at their own pace, in their own
time, and from their own location. Further, online data gathering allows researchers to reach a wider
range of participants regarding both ability and geography.
When considering the data itself, an online platform may offer participants an anonymous,
judgment-free space in which to reveal a greater depth of personal experience and meaning. It is
important to note, though, that this setting also removes the ability of the researcher to observe and
react to the facial expressions and body language visible in face-to-face interactions. Ethically,
researchers should be aware of the online personas and identifiable textual structures that participants
may develop that can remove a degree of anonymity in chat rooms or other multi-user formats. In
addition, verifying the authenticity of participant data and assuring the security of such data is more
difficult in an online setting.
The researchers designed (and reflected upon) a series of online interviews conducted primarily
through email. Researchers sent one or more questions to which participants responded in their own
time. Though the researchers cited many difficulties in the email interview format, all were well within
the bounds of normal interview dilemmas: when to press for more information, how to pose questions
clearly, how to clarify participant responses, etc.
Dowling, S. (2012). Online asynchronous and face-to-face interviewing: Comparing methods for exploring
women's experiences of breastfeeding long term. In J. Salmons (Ed.), Cases in Online Interview
Research (pp. 277-302). Thousand Oaks, CA: SAGE Publications, Inc.
Dowling's chapter (found within Salmons' 2012 work) is a research study conducted to
investigate the experiences of women who have chosen long-term breastfeeding for their children.
Though I am not interested in this topic, the context of her methodological choices is important.
Because her research subjects were mothers with varying schedules in various locations, online (email)
interviewing allowed her to include participants she might not have otherwise been able to interview.
Only four of her ten participants were interviewed online, however, allowing her to contrast those
experiences with the face-to-face interviews she conducted with the remaining six women. It is this
comparison I find especially interesting.
In her email interviews, Dowling found that she and her participants were less distracted by the
presence of children, fatigue, or other ongoing elements. Both researcher and participants could choose
their own time and place for reflection and response, producing deeper reflections. Online interviewing
also reduced the time and cost of the research process, though the author does note that the
correspondence required more time than she had anticipated. Face-to-face interviews, on the other hand,
required interviewer and interviewees to be alert and focused on a specific schedule (not always possible
for nursing mothers). As the interviewer, she commented that she also felt less comfortable disclosing
personal information in the face-to-face interviews. Challenges associated with the email interview
included the additional time required to establish rapport, the frustration of late-to-respond participants,
and the varying degrees of verbal skill amongst the participants.
In addition to these pros and cons, Dowling's chapter provides a number of resources for follow-up.
Franklin, K. K., & Lowry, C. (2001). Computer-mediated focus group sessions: Naturalistic inquiry in a
networked environment. Qualitative Research, 1(2), 169-184.
The authors of this article designed a study that utilized networked computers to create a focus
group setting. Rather than speaking aloud with one another, groups of participants typed their
comments into a computer forum. The purpose was to have student researchers facilitate group
discourse amongst faculty members while possibly improving the objectivity of the data collection and
analysis process, easing the discomfort of publicly discussing a sensitive topic, and managing
challenging participants (2001, p. 173). Student researchers (referred to as "rapporteurs") introduced and
demonstrated the technology, gave participants time to practice with it, and led the group in discussion
using a pre-determined script.
In designing the study, the authors identified several advantages of the electronic format. By
reducing the interaction between mediator and participant, this method compensated for the novice
abilities of the student mediators, keeping the conversation on track without influencing the participants. The choice of
electronic medium also offered a "cloak of anonymity" that encouraged faculty participants to share
their thoughts and opinions more fully. Lastly, the chosen format removed the power paradigm that
exists between student (researcher) and professor (participant) by reducing the interaction between them.
There were several challenges as well. The most prominent difficulty associated with the
computer-mediated focus group was the reduced ability of mediators to mediate the group. They were
not as able to redirect discussion or probe for more information as is the norm in a face-to-face setting.
The researchers also discovered that the transcript (though very easily obtained) was filled with
incomplete thoughts and at times was difficult to analyze. In addition, some participants dominated the
electronic conversation due to their comfort level with the computer keyboard. Lastly, the anonymity of
participants was somewhat compromised by faculty members' familiarity with one another's
opinions and styles of speech.
James, N., & Busher, H. (2009). Online Interviewing. London: SAGE Publications, Inc. Retrieved from
http://srmo.sagepub.com.ezproxy.lib.usf.edu/view/online-interviewing/SAGE.xml
This text covers a wide range of topics surrounding the use of online interviews, including
considering theoretical framework, establishing methodologies, asynchronous vs. synchronous
communication, credibility, equality, online cultures, and disseminating online data. Rather than
discussing each of these at length, this annotation will focus on the information that is different from or
an extension of Salmons' work.
In the introductory section, the authors cite several studies that make use of asynchronous online
interview methods (predominantly email) that may serve as further sources. They are careful, however,
to say that these methods may not be appropriate for all research. Researchers must make decisions
about methodology based upon the topic under investigation and the way knowledge will be best
generated.
In section two, the authors recognize the culture of online communications: "despite the absence
of face-to-face interaction and a physical place in which to ground fieldwork, the development of
research relationships and interactions online are still embedded in everyday lives" (2009, p. 19). They
also cite research to support the claim that online interactions reduce self-awareness and awareness of
others. Regardless, researchers must recognize the culture that influences social identity online.
They also support the idea that asynchronous email communications can allow for an extended and
deliberate sequence of events and give researchers and participants time to digest messages before
replying (2009, p. 24). In this setting, participants can respond at a time that is convenient to them, allowing for
greater reflection before responding. Email correspondence can also allow researcher and participant to
reflect upon previous responses by continuing to add to the existing conversation text. To facilitate this
exchange, researchers should set an interview protocol or framework to be distributed amongst the
participants so that roles are clear.
Disadvantages to asynchronous online communication are also discussed. The online
researcher cannot assume that participants have the freedom to speak freely on the Internet (2009, p.
86). Email isn't anonymous, and the written word is more permanent than the spoken word, so some
participants may feel the need to exercise caution online. Anonymized email accounts or formats may be
helpful to protect participants.
Another disadvantage is the separation participants may feel from the researcher. Without
regular interaction, participants can feel disengaged and drop out of the study. Setting predictable
structures that connect researcher to participant is vital to building a relationship between the two.
Commented [MJE2]: I'm beginning to doubt the level of
rigor in publication with this particular publisher. Many of
the books I've read (including this one) seem to be self-
published and contain a number of grammatical and
typographical errors.
Commented [MJE3]: Crotty 1998, as found in James & Busher, 2009
Commented [MJE4]: Can we answer these questions?

Commented [MJE5]: And I will be following up on the
citations they list.
Commented [MJE6]: Though it's been my goal to limit
these annotations to one page each, I am bending this rule to
include all pertinent information from this book.
Also, participants may lose some of their verbal power online: those less able to express themselves in
the written word may feel that they have less impact. For this reason, it is necessary that the researcher
not impose linguistic or grammatical requirements upon the participants. Doing so shifts the power
structure and diminishes the contributions of the participants. Furthermore, participants'
words should be reported in the vernacular presented to prevent losing the meaning and representation of
participants' true selves.
Lastly, interview topics that may contain sensitive or painful stories require a structure in which
the participant feels both safe and connected to the researcher. For this reason, the aforementioned
structures should be held firmly in place to ensure the safety and comfort of the storyteller.
Commented [MJE7]: Perhaps providing an alternate
means of contributing would be helpful: photographs,
documentation, video, etc.
[Figure: application pricing tiers of $42.85, $13.71, and $1.03]
Merlien Institute. (2014). A quick review of mobile apps for qualitative research. Market Research in the
Mobile World. Retrieved July 3, 2014, from http://www.mrmw.net/news-blogs/295-a-quick-review-of-mobile-apps-for-qualitative-research
This short list of application resources does not include a lot of text and has not been peer
reviewed, but it offers a variety of application ideas that may be useful in collecting qualitative interview
data electronically. I have culled through the list and downloaded several to be tested over the course of
these investigations. I will use this annotation to name those applications I found to be potentially
useful and their features.
QualBoard Mobile is produced by 2020 Research as a means to connect participants in their
day-to-day lives by hosting discussion questions, surveys, and other message boards in a mobile format.
Researchers can pose questions for thought during a certain time frame or encourage participants to
share photographs/documents about particular topics. It is connected to the QualBoard online service
that hosts synchronous and asynchronous focus groups. I have sent an inquiry email about the
service (and its associated costs) and received a phone call from the company providing some
additional information about the program. More information can be found at
http://www.2020research.com/tools/qualboard-mobile/.
Ethos is connected to an online service that identifies itself as an "Ethnographic Observation
System." The mobile app is equipped to do many of the same things listed above for QualBoard, though
it appears to be more heavily focused upon images. Prices are as listed in the figure, and additional
information can be found at https://www.ethosapp.com/#apps.
MyInsights includes features that allow participants to record journal entries, respond to
surveys, and share media (photos/videos) with the researcher. It includes the capability to manage both
individual and group information. Prices are as shown in the figure, and more information can be found
at http://www.mobilemarketresearch.net/myinsights/features.
[Figure: MyInsights pricing tiers of $3.39, $6.78, and $10.18]
Commented [MJE8]: For all of these applications, the
main usefulness would be immediacy. With the ability to
post during the instructional day or directly after a teaching
experience, we may capture more accurate data to represent
the experiences of our preservice teachers.
Commented [MJE9]: 7/15 10:30am:
Web support almost 24/7. Participants can log in on a
computer or the mobile application. The website may allow for
more lengthy (or follow-up) responses. The application's look is
more modern (more like texting usually looks).
Built-in disclaimer for participants to agree to the
protocols/structures put in place online.
Can be either individual or focus-grouped, on a question-by-
question basis. Participants can be grouped (tagged) and
questions can be directed at specific groups.
Questions can be segmented into topics or time intervals.
Researchers can assign specific settings to all questions in a
segment or set them on a question-by-question basis.
Participant responses are not editable, but they may reply
multiple times (only once on the mobile device). Using the
mobile app, participants can only reply to questions (no
unsolicited responses), though they can use their mobile
device's web browser to access the lifenotes journal on the
website.
Cost: the tech-only (time-based) option seems best.
The rep will send access info (for a demo), financial info, and
security info.
Commented [MJE10]: Demo/sandbox available during
the week of July 14.
Training for participants is also available for free.
24-hour support for researchers and participants from trained
staff.
Very speedy response time: After sending the electronic
request for information, received a call in less than 20
minutes.
Commented [MJE11]: Not a survey site. Participants
freely offer feedback through photos, audio, text, and video.
Participants can be subdivided into groups and interact
within those groups. (Potential for interface between profs,
CTs, and PSTs.) We can also track who has contributed (and
when) at a glance.

Unlike MyInsights (below), the Ethos app allows
participants to mark the tasks to which they are responding,
meaning they can classify a single response as answering
multiple tasks.
Commented [MJE12]: I've had the opportunity to play
around with this application on my own. It works rather like
the Ethos app except that participants require prompts to
respond with text and pictures.
O'Connor, H., & Madge, C. (2003). Focus groups in cyberspace: Using the internet for qualitative
research. Qualitative Market Research: An International Journal, 6(2), 133-143.
Though O'Connor and Madge are not educational researchers, their study relates directly to the
use of electronically collected data for qualitative research. Unlike Bowker and Tuffin (2004), these
authors used a focus group approach and required all participants to interact at a specific time. In this
way, the interactions included a sense of immediacy missing from the email format previously
discussed. The authors point out that a listserv format did not suit their purposes because they sought
immediate reactions. Instead, they used a software package called Hotline Connect that was installed
on each participant's computer and used it to create an interview forum at a specified time. This
platform prevented participants from anonymously lurking in the background or entering/leaving the
conversation without notice.
In order to monitor the focus group effectively, the researchers prepared questions ahead of time
that could be copied/pasted into the conversation quickly. Working in tandem, one researcher would
focus on participant responses and feed information to the second researcher, who was responsible for
adding to the conversation textually. The authors also stress the importance of developing a rapport
with participants. Since face-to-face interactions are lost, these authors suggest utilizing self-disclosure
early in the study.
About the data itself, the authors comment that the usual rules of discourse are suspended in the
online format. Rather than being dominated by the largest personalities, online discussions tend to be
led by the best typists. Also removed are the usual ethnic, age, and social status identifiers, opening
conversations to directions that might otherwise be avoided as inhibitions are removed. Participants
must also be more motivated to contribute to the data since the equipment, time commitment, and
intellectual engagement are greater.
Polkinghorne, D. E. (2005). Language and meaning: Data collection in qualitative research. Journal of
Counseling Psychology, 52(2), 137-145.
Polkinghorne's manuscript does not explicitly address the use of electronic formats in qualitative
research. As a novice qualitative researcher, however, I find in his work several themes that will be
immediately useful at this stage of my own work.
First, he is careful to outline the purpose of qualitative research as the study of experience: "A
primary purpose of qualitative research is to describe and clarify experience as it is lived and constituted
in awareness" (Polkinghorne, 2005, p. 138). As such, it is my job to capture participants' experiences
(evidence) as described by those participants.
Next, the gathering of evidence is limited to the ability of my participants to relate their own
stories and my own ability to elicit those stories. These limitations imply that while I must gather data
from participants, those reports do not serve as mirrored reflections of lived experience: "People do not
have complete access to their experiences. The capacity to be aware of or to recollect one's experiences
is intrinsically limited" (2005, p. 139). As the researcher, then, I must be sensitive to the use of
metaphors and other linguistic devices and carefully clarify their meanings.
Polkinghorne discusses interviewing, a professional conversation that requires "a give-and-take
dialectic in which the interviewer follows the conversational threads opened up by the interviewee and
guides the conversation toward producing a full account of the experience under investigation" (2005, p.
142). He stresses the need for a series of interviews rather than a simple one-shot, one-hour session in
order to produce sufficient depth and breadth. This elongated process also allows participants the time
needed to reflect upon their experiences.
For the purposes of bringing this kind of conversation online, I glean from his work the importance
of questions intended to begin dialogue and a platform that allows for more than a one-shot simple
questionnaire or survey. The challenge will be to create questions that allow the researcher to fully
interact with the participants.

Pyke-Grimm, K. A., Kelly, K. P., Stewart, J. L., & Meza, J. (2011). Feasibility, acceptability, and usability of
web-based data collection in parents of children with cancer. Oncology Nursing Forum, 38(4), 428-435.
Though I chose this article because it was billed as a mixed methods study that utilized the
electronic collection of data, I discovered upon reading that only the quantitative data was collected in
this manner. The qualitative portions of the data were collected through traditional interview means. I
include it in this bibliography, however, because Pyke-Grimm and her colleagues were able to conclude
that eighty percent of their participants (n=16) preferred the electronic data collection method and that
all participants were comfortable with it. Though this preference is not necessarily transferable to
qualitative data, it does show a certain level of comfort with web-based methodology. The authors
strengthen their position by drawing upon other web-based studies that garnered similar results.
Ravert, R. D., Calix, S. I., & Sullivan, M. J. (2010). Research in brief: Using mobile phones to collect daily
experience data from college undergraduates. Journal of College Student Development, 51(3), 343-352.
Ravert and his colleagues begin by stating that mobile data collection "provides a way to study
phenomena under the conditions in which they naturally occur and to examine how those phenomena
progress over time or across contexts" (Ravert, Calix, & Sullivan, 2010, p. 343). They outline several
approaches to collecting data over a period of time: fixed-interval reporting, event-based reporting, and
interval-based recording (when prompted), sketched below.
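To make these reporting schedules concrete, here is a minimal Python sketch of how daily prompt times
might be generated under a fixed-interval design versus the random-prompt design Ravert and colleagues
describe; the function names and parameters are my own illustration, not anything specified in the article.

import random
from datetime import date, datetime, time, timedelta

def fixed_interval_prompts(day, start_hour=9, end_hour=21, n_prompts=4):
    # Fixed-interval reporting: prompts at evenly spaced times each day.
    step = (end_hour - start_hour) * 60 // (n_prompts - 1)  # minutes between prompts
    start = datetime.combine(day, time(start_hour))
    return [start + timedelta(minutes=step * i) for i in range(n_prompts)]

def random_prompts(day, start_hour=9, end_hour=21, n_prompts=4):
    # Random prompting: texts arrive at unpredictable times within waking hours.
    start = datetime.combine(day, time(start_hour))
    minutes = sorted(random.sample(range((end_hour - start_hour) * 60), n_prompts))
    return [start + timedelta(minutes=m) for m in minutes]

# Event-based reporting needs no schedule at all: participants initiate a
# report whenever the event of interest occurs.

print(fixed_interval_prompts(date.today()))
print(random_prompts(date.today()))

The random design trades predictability for in-the-moment responses: because participants cannot
anticipate prompts, replies are less likely to be rehearsed or reconstructed from memory.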
There are several advantages to this type of data collection. First, it gathers data in context.
Rather than interviewing participants at the end of a month or semester, participants can contribute their
experiences in the moment they occur. Next, data collected over time can reveal changes in participant
responses. Finally, the data collected is not reliant upon participants' memories, making it potentially
more trustworthy.
The study reviewed investigated the response rates, length and reliability of replies, promptness
of replies, and users' perceptions of text-message data collection. Participants were texted several times
per day (at random times) and asked to respond to a static set of questions. Researchers observed an
89% response rate, and all responses were legible and comprehensible. Due to the medium, responses were
generally short (an average of 42.6 characters per text).
Limitations for use in future studies focused on the restrictions and socially accepted premises
for mobile devices. Because most responses are short, they may be missing a certain amount of context.
Follow-up questions via email, observation, or interview may be required to fill in gaps in researchers'
understanding of participant responses. Also, much like email, responses can be somewhat delayed,
causing researchers to wonder if participants received the prompts or if participants have lost interest in
the study.
The authors recommend that prompt/response structures be well established at the beginning of
the study. Giving clear indications of the types of prompts to be sent and the format of responses will
help the study run more smoothly. Also, it should be understood that mobile responses are one of many
data collection sources to be utilized. Follow-up discussions via email, interview, or focus group will
likely be needed to get a full picture of participants' experiences.
Commented [MJE13]: More than likely, this type of
collection will fit our research best. Participants would be
asked to report when specific events occur: specific types of
conversations, collaborations, or instructional experiences.
Robinson, K. M. (2001). Unsolicited narratives from the Internet: A rich source of qualitative data.
Qualitative Health Research, 11(5), 706-714.
Robinson focuses her work on the collection of unsolicited data from various online resources.
She identifies personal narratives as a possible source of rich qualitative data, "neither forced
nor formulaic. They are naturalistic, revelatory, and authentic" (2001, p. 706). She recognizes that the
narratives freely produced by individuals online "may be difficult for an individual to articulate or
acknowledge in a face-to-face interview" (2001, p. 709).
Because these narratives are the personal thoughts and ideas of unsolicited individuals, Robinson
discusses thoroughly the ethical considerations of using this type of data. She concludes that individuals
who post their information in a public place have forfeited control over that posting, but that researchers
should be careful to seek permission to use information found on secured or even password-protected
sites.
Due to their unsolicited and somewhat untrustworthy creation (after all, who can say who is
really creating these narratives?), researchers should also be wary of using this type of data in isolation.
Rather, internet data is a supplemental source, offering a means of triangulating other (solicited) data.
Commented [MJE14]: This type of data makes me
wonder if the Facebook and Twitter postings of our students
would be supplemental information to triangulate other
(solicited) data.
Salmons, J. (2010). Online Interviews in Real Time. Thousand Oaks, CA: SAGE Publications, Inc.
In this textbook, Salmons begins by establishing the difference between synchronous and
asynchronous online interviewing. Though her work is dedicated primarily to real-time (synchronous)
interviewing in which researcher and participant must be online at the same time, she does offer useful
information about qualitative research in an electronic medium and details several types of
asynchronous interviewing mediums. In the first two chapters of her book, she reminds readers to
"match the characteristics of the media to specific design requirements" (2010, p. 3) for inquiry.
Convenience is certainly a factor in the decision-making process, but it is not the only factor. The type
of presence needed, artifacts desired, and pace of the study are also important aspects to consider.
Echoing the sentiments of countless other researchers, Salmons cites online research as a
medium that has the potential for making participants more comfortable and loquacious during
interviews. She follows this statement with several references that are missing from the work of
the peers I review in these annotations. She also offers researchers a list of other
reasons for using asynchronous communication, like time for reflection, iteration, flexibility,
convenience, simultaneous multiple interviews, and accurate transcripts, each of which is accompanied
by references that may be of use to me.
For asynchronous communication tools, the text lists a number of resources, including email,
forums, threaded discussions, bulletin boards, podcasts, blogs, and wikis. As USF provides us with a
platform that encompasses many of these tools (Canvas), I wonder if we might utilize it in some meaningful
way for collecting such data.
Salmons, J. (2012). Designing and conducting research with online interviews. In J. Salmons (Ed.), Cases in
Online Interview Research (pp. 1-30). Thousand Oaks, CA: SAGE Publications, Inc.
Though much of the opening chapter of Salmons' 2012 text is a reduction of her 2010 work,
there are a few new pieces of information to note. First, Salmons poses questions for the researcher about
the purpose of the online interviews. Specifically, are the interviews used to collect data about online or
real-world experiences? For real-world experiences (like those I seek to study), she comments that an
online format can offer a natural way to share artifacts, provides greater time for reflection, and allows
participants and researcher alike the convenience of choosing the location and (for asynchronous
methods) the time.
She also adds a table that outlines the actions researchers might take (transmitting images,
viewing data, navigating the environment, and generating images) in order to achieve various goals
(communicate visually, elicit responses from visual stimuli, and collaborate visually) when utilizing
visual data. These actions and benefits bring to mind the possible artifacts of our participants (lesson
plans, emails, presentations, etc.) and the ways we might use them to elicit responses to questions about
their experiences.
Lastly, Salmons reiterates the ability of asynchronous communications to allow for "fruitful
exchange with time to think between message and response" (2012, p. 33). Considering college-aged
participants with busy schedules, an asynchronous format may be a more honest way to begin.
Participants can answer in their own time and from their own space.
Shields, C. (2003). Giving voice to students: Using the internet for data collection. Qualitative Research,
3(3), 397-414.
Shields' work concerns an electronic survey that was created to collect qualitative data from a
large number of elementary school students (500+). Though some of the data she collected was
demographic and quantifiable, she also asked students to respond to twenty-one open-ended interview
questions (roughly organized by their importance to the researcher) that allowed lengthy qualitative
answers.
In her literature review she synthesizes past research (primarily from the 1990s) that cites
electronic data collection as offering more truthful and forthcoming responses from participants.
Participants, she said, are less inhibited online and more willing to provide information that they might
otherwise not in face-to-face interactions.
Though the survey itself was administered online, participants completed it at school while being
supervised by a selection of their teachers. Shields does not address the role of the facilitators in this
setting other than to say they provided appropriate surveillance "to ensure that students were responding
individually (and only once), without conferring among themselves" (2003, p. 403). Since she
subsequently claims that she was surprised by the volume of qualitative data she received from the
students who participated, I would be interested in knowing if facilitators also ensured the participants
completed all the questions (a feature not found in most online surveys). Indeed, the purpose of this
particular survey was to give students a voice in expressing their feelings about their teachers, their
classes, and their schools. Whether or not the teachers made completion of the survey compulsory
certainly impacts the survey's ability to do so. Also, as she used many quantitative measures to analyze
the data, I find Shields' analysis less useful than her collection.
Suzuki, L. A., Ahluwalia, M. K., Arora, A. K., & Mattis, J. S. (2007). The pond you fish in determines the
fish you catch: Exploring strategies for qualitative data collection. The Counseling Psychologist, 35(2),
295-327.
Suzuki et al. (2007) explore four strategies for data collection: participant observation,
interviews, physical data, and electronic data. It is the last of these strategies that will receive my
attention in this annotation. Before addressing each strategy in turn, the authors remind the reader that
the data collection process is complex and intentional, requiring the researcher to consider several
factors: (a) the relationship between researcher and subjects of study, (b) sampling criteria, (c) insider
versus outsider perspectives, (d) language and communication, (e) culture shock, and (f) ethical
considerations. For the purposes of gathering data electronically, I found their comments about the
relational aspects especially helpful. Because qualitative research is both reflexive and relational, it is
important that the investigator understand the language of the participants. In an electronic environment
in particular, the researcher must be able to utilize this language to communicate in a clear manner.
Unlike face-to-face interviews, participants and researchers are not always able to ask clarifying
questions to overcome language barriers, so a common language becomes even more essential.
In the section addressing electronic data collection, the authors provide much evidence that
electronic media have been used in qualitative research. They list many benefits: participant access,
anonymity, transcript accuracy, geographic leeway, comfort. In addition, the authors list a great
number of electronic resources available, including email, instant messaging, listservs, bulletin boards,
web pages, chat rooms, and surveys. Most helpful, though, were the references to other authors who
have researched electronic data collection more deeply. I will be using several of them for further
citations.
Commented [MJE15]: A possibility for setting up
communication amongst CT, PST, supervisors, and
professors for Helios.
