Lord Research Design, Data Analysis, Data Gathering
Data analysis is the process of cleaning, transforming, and modeling data to discover useful information for business decision-making. The purpose of data analysis is to extract useful information from data and to make decisions based on that analysis.
Whenever we make a decision in our day-to-day life, we do so by thinking about what happened last time or what will happen if we choose a particular option. This is nothing but analyzing our past or future and making decisions based on it. To do that, we gather memories of our past or dreams of our future. That, too, is data analysis. When an analyst does the same thing for business purposes, it is called data analysis.
To grow your business, or even to grow in your life, sometimes all you need to do is analyze!
If your business is not growing, you have to look back, acknowledge your mistakes, and make a new plan that avoids repeating them. Even if your business is growing, you have to look ahead to make it grow further. All you need to do is analyze your business data and business processes.
Data Analysis Tools
Data analysis tools make it easier for users to process and manipulate data, analyze the relationships and correlations between data sets, and identify patterns and trends for interpretation.
Several types of data analysis techniques exist, depending on the business and the technology involved. The major types of data analysis are:
Text Analysis
Statistical Analysis
Diagnostic Analysis
Predictive Analysis
Prescriptive Analysis
Text Analysis
Text Analysis is also referred to as data mining. It is a method of discovering patterns in large data sets using databases or data mining tools, and it is used to transform raw data into business information. Business intelligence tools on the market use it to support strategic business decisions. Overall, it offers a way to extract and examine data, derive patterns, and finally interpret the data.
Statistical Analysis
Statistical Analysis answers "What happened?" by using past data, often presented in the form of dashboards. It includes the collection, analysis, interpretation, presentation, and modeling of data, applied to a full data set or a sample of it. There are two categories of this type of analysis: descriptive and inferential.
Descriptive Analysis
analyses complete data or a summarized sample of numerical data. It reports the mean and standard deviation for continuous data, and percentages and frequencies for categorical data.
Inferential Analysis
analyses a sample drawn from the complete data. With this type of analysis, you can reach different conclusions from the same data by selecting different samples.
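As an illustrative sketch, the descriptive measures just mentioned can be computed with Python's standard library alone; the values below are invented for the example:

```python
import statistics
from collections import Counter

# Continuous data: report mean and standard deviation.
ages = [23, 35, 31, 42, 28, 35, 39]
print("mean:", statistics.mean(ages))    # central tendency
print("stdev:", statistics.stdev(ages))  # spread (sample standard deviation)

# Categorical data: report frequency and percentage per category.
colors = ["red", "blue", "red", "green", "blue", "red"]
counts = Counter(colors)
for category, n in counts.items():
    print(category, n, f"{100 * n / len(colors):.1f}%")
```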
Diagnostic Analysis
Diagnostic Analysis answers "Why did it happen?" by finding the cause from the insights uncovered in statistical analysis. It is useful for identifying behavioral patterns in data. If a new problem arises in your business process, you can look into this analysis to find similar patterns of that problem, and there is a chance that similar prescriptions can be applied to the new problem.
Predictive Analysis
Predictive Analysis answers "What is likely to happen?" by using previous data. The simplest example: if I bought two dresses last year based on my savings, and this year my salary doubles, then I can buy four dresses. Of course, it is not that easy, because you have to consider other circumstances, such as the chance that clothing prices rise this year, or that instead of dresses you want to buy a new bike, or need to buy a house!
So this analysis makes predictions about future outcomes based on current or past data. Forecasting is just an estimate; its accuracy depends on how much detailed information you have and how deeply you dig into it.
Prescriptive Analysis
Prescriptive Analysis combines the insights from all of the previous analyses to determine which action to take on a current problem or decision. Most data-driven companies utilize prescriptive analysis because predictive and descriptive analysis alone are not enough to improve data performance. Based on current situations and problems, they analyze the data and make decisions.
The data analysis process is nothing but gathering information by using a proper application or tool that allows you to explore the data and find patterns in it. Based on that, you can make decisions or draw final conclusions. The process consists of the following phases:
Data Requirement Gathering
Data Collection
Data Cleaning
Data Analysis
Data Interpretation
Data Visualization
Data Requirement Gathering
First of all, you have to think about why you want to do this data analysis. You need to find out the purpose or aim of doing the analysis and decide which type of data analysis you want to do. In this phase, you have to decide what to analyze and how to measure it; you have to understand why you are investigating and what measures you will use to do this analysis.
Data Collection
After requirement gathering, you will get a clear idea about what things you have to measure and
what should be your findings. Now it's time to collect your data based on requirements. Once
you collect your data, remember that the collected data must be processed or organized for
Analysis. As you collected data from various sources, you must have to keep a log with a
collection date and source of the data.
Data Cleaning
Whatever data is collected may not be useful, or may be irrelevant to the aim of your analysis, so it should be cleaned. Collected data may contain duplicate records, stray white space, or errors, and it should be cleaned until it is error-free. This phase must come before analysis, because the quality of the cleaning determines how close the output of the analysis will be to the expected outcome.
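As a minimal sketch of the cleaning steps just described, duplicates and stray white space can be removed with plain Python; the records are invented:

```python
# Raw records with stray whitespace and duplicates.
raw = ["  alice ", "bob", "alice", "carol ", "bob"]

# Strip whitespace, then drop duplicates while preserving order.
seen = set()
cleaned = []
for record in raw:
    value = record.strip()
    if value not in seen:
        seen.add(value)
        cleaned.append(value)

print(cleaned)  # unique, trimmed records
```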
Data Analysis
Once the data is collected, cleaned, and processed, it is ready for Analysis. As you manipulate
data, you may find you have the exact information you need, or you might need to collect more
data. During this phase, you can use data analysis tools and software which will help you to
understand, interpret, and derive conclusions based on the requirements.
Data Interpretation
After analyzing your data, it is finally time to interpret your results. You can choose how to express or communicate your data analysis: simply in words, or with a table or chart. Then use the results of the data analysis process to decide your best course of action.
Data Visualization
Data visualization is very common in day-to-day life; it often appears in the form of charts and graphs. In other words, the data is shown graphically so that the human brain can understand and process it more easily. Data visualization is often used to discover unknown facts and trends. By observing relationships and comparing data sets, you can uncover meaningful information.
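As a small illustrative sketch (the responses are invented), even a rough text chart makes a frequency pattern easier to grasp than raw numbers:

```python
from collections import Counter

# Responses to a hypothetical one-question survey.
responses = ["yes", "no", "yes", "yes", "maybe", "no", "yes"]
counts = Counter(responses)

# Render a simple horizontal bar chart in text, most frequent first.
for answer, n in counts.most_common():
    print(f"{answer:>5} | {'#' * n} ({n})")
```

In practice a charting library would replace the text bars, but the principle is the same: counts become lengths the eye can compare at a glance.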
Summary:
Data analysis is a process of cleaning, transforming, and modeling data to discover useful information for business decision-making.
The types of data analysis are Text, Statistical, Diagnostic, Predictive, and Prescriptive Analysis.
Data analysis consists of Data Requirement Gathering, Data Collection, Data Cleaning, Data Analysis, Data Interpretation, and Data Visualization.
Academic work is not easy. The quality of the content is determined by its credibility. Therefore, you need plenty of data to justify your viewpoint. Although the internet has made things easy for those who need information, it is important to verify the reliability of the data you collect before using it.
The data gathering procedure you employ in your paper determines whether you produce a piece that is trustworthy or not. Therefore, it is crucial to employ the best procedure to get the best results. It improves the quality of the paper and makes you sound scholarly.
Most people struggle when they need to gather data. While some do not know which data collection methodologies to follow, the majority do not have experience in data handling. Eventually, they prepare papers that only earn them low grades. What is the remedy in such cases? Have a look at a good data gathering procedure example to become well-versed with the procedure that can work for your situation. In the process, you can make your work easier and improve the general quality of the papers you prepare.
The best remedy for those without the skills in data gathering is to hire experts who are proficient in this field. Fortunately, we stand out as the company that can assist you with such issues. We have worked on a variety of papers that require verifiable data and understand what can work perfectly for you. With our assistance, you do not strain with data collection and handling. We follow every stage to ensure that what you receive is perfect.
What Is the Definition of Data Gathering Procedure?
Dissertation writing involves the handling of statistical data. Therefore, you need to know the
best data to use in your paper. The definition of data gathering procedure is that it is the
technique used to obtain the information used in a dissertation to substantiate the claims made by
a writer. To get the perfect outcome, you should use the best procedure. If you are unsure of how
to obtain your data, it is advisable to hire experts in this field to offer assistance. We have data
experts who can help with these tasks.
What are the data collection methods that you can use? They are explained below:
Use of surveys
The method is mainly effective for those who need qualitative data for their academic documents. In surveys, open-ended questions are used. What kind of information can be collected using this method? Examples include people's perception of a product, attitudes towards a government policy, the beliefs people hold, or the knowledge people have of a given issue, among other types of information. To get exactly the information needed, the questions should not be leading and should cover the exact areas the researchers need. The data is later analyzed to reach the required conclusions.
Conducting interviews
This research data gathering procedure is used to obtain information from people on a one-on-one basis. In this case, the researcher should have several predetermined questions. The interview questions can be closed-ended, as in the case where the interviewees are expected to provide 'YES' or 'NO' responses. The interview can also include open-ended questions, in which the respondent has the freedom to provide any response they are comfortable with. To ensure the data collected is rich in the content required, the interviewer should prepare follow-up questions for areas where the respondent may provide ambiguous information.
Interviews can be conducted in different ways. The first is face-to-face: as the respondent provides the answers, the interviewer can record them in writing or on tape. The data collected is later sorted and written into the paper. The other method is the phone conversation: the respondent provides the required answers while you keep a clean record that you can use later to write the paper.
Focus groups
In this case, the interviewer can take a group and get the information from its members. There is a set of predetermined questions that the respondents are asked in turns. The method is effective when different people hold varied opinions on the same issue. Focus groups differ depending on the type of responses required in the probe. To get the most reliable results from this method, the group should have between 5 and 10 people.
Direct observation
This data gathering procedure for qualitative research applies the sensory organs: the eyes to see what is going on, the ears to hear it, and the nose to smell. The method helps the researcher avoid bias in what people say.
Content Analysis
The researcher uses data that is already available and that supports their point of view. Different documents can be used in this case, including reputable newspapers, research articles from known experts, approved government reports, and other online data sources that can help. For reliable data, different sources should be used in the research.
It is up to you to determine the methodology that can work for your case when it comes to data collection. Choosing the wrong procedure may mean obtaining unreliable or irrelevant data. You do not want to face the frustration of presenting data that is unrelated to your topic. Therefore, it is advisable to hire an expert who understands how things work as far as data is concerned. We come in handy in such situations. Do not use faulty data gathering procedures when we can assist you in collecting the best data using our proven collection techniques.
Not all procedures are effective for your paper. What applies to one paper may not be recommended for another. What are the factors to assess in settling on the best procedure? Get the answers:
Different courses require varying procedures when it comes to the collection and handling of data. While there are courses where secondary information sources can work, others need data obtained first-hand. For example, the type of data that is acceptable for those taking engineering courses is not the same as what works for those pursuing psychology. The same applies to the topic: the data needs of different subjects vary. Therefore, you must analyze the needs of your course and topic before selecting a procedure for data gathering.
Your department has its instructions when it comes to the sample of data gathering procedure.
Failure to adhere to what is specified may mean you miss important marks because your paper
may not be as good as what is expected from you. Therefore, it is crucial to be well-versed in your faculty's guidelines. Where the rules seem too strict for you, it is advisable to get experts who are comfortable with the specifications. We are the best company when it comes to adhering to the rules. Our professionals assess all the guidelines you submit to ensure the data obtained meets your specifications.
The convenience of data gathering varies from one person to the next: what one person considers hard may be easy for another. On a personal level, you should opt for a procedure that you are comfortable with. It is you who decides on the topic, settles on the data, analyzes it, and comes up with the conclusion. Therefore, selecting a procedure you are sure can work for you is fundamental. A convenient information gathering procedure saves you from stress.
You should not embark on the data gathering if you are unsure of what is required. The first step
is to analyze and understand the topic you have. The keywords encountered determine whether
you need a quantitative or qualitative type of data. Where you are expected to settle on your own
topic, choose one for which you are sure you can easily obtain data to defend.
The next step is to study the guidelines provided for doing the paper and collecting the data. For example, some professors insist that a student use a given method of data collection. Your grade depends on whether or not you adhere to that specification.
Prepare adequately before you begin the gathering. For instance, you have to settle on a given
method and determine the tools you need for data gathering. You can read an approved data
gathering procedure PDF to understand what to do.
Methodology: research design
Methodology refers to the nuts and bolts of how a research study is undertaken. There are a
number of important elements that need to be referred to here and the first of these is the research
design. There are several types of quantitative studies that can be structured under the headings of true experimental, quasi-experimental, and non-experimental designs (Robson, 2002) (Table 2). Although it is outside the remit of this article, within each of these categories there is a range of designs that will affect how the data collection and data analysis phases of the study are undertaken. However, Robson (2002) states that these designs are similar in many respects, as most are concerned with patterns of group behaviour, averages, tendencies, and properties.
The next element to consider after the research design is the data collection method. In a
quantitative study any number of strategies can be adopted when collecting data and these can
include interviews, questionnaires, attitude scales or observational tools. Questionnaires are the
most commonly used data gathering instruments and consist mainly of closed questions with a
choice of fixed answers. Postal questionnaires are administered via the mail and have the value
of perceived anonymity. Questionnaires can also be administered in face-to-face interviews or in
some instances over the telephone (Polit and Beck, 2006).
After identifying the appropriate data gathering method the next step that needs to be considered
is the design of the instrument. Researchers have the choice of using a previously designed
instrument or developing one for the study and this choice should be clearly declared for the
reader. Designing an instrument is a protracted and sometimes difficult process (Burns and
Grove, 1997), but the overall aim is that the final questions will be clearly linked to the research questions, will elicit accurate information, and will help achieve the goals of the research. This, however, needs to be demonstrated by the researcher.
Data analysis in quantitative research studies is often seen as a daunting process. Much of this is
associated with apparently complex language and the notion of statistical tests. The researcher
should clearly identify what statistical tests were undertaken, why these tests were used, and what the results were. A rule of thumb is that studies that are descriptive in design use only descriptive statistics, while correlational, quasi-experimental, and experimental studies use inferential statistics. The latter are subdivided into tests that measure relationships and tests that measure differences between variables (Clegg, 1990). Inferential statistical tests are used to identify whether a relationship or difference between variables is statistically significant. Statistical significance helps the researcher rule out one important threat to validity: that the result could be due to chance rather than to real differences in the population. Quantitative studies usually identify the lowest level of significance as P ≤ 0.05 (P = probability) (Clegg, 1990). To enhance readability, researchers
frequently present their findings and data analysis section under the headings of the research
questions (Russell, 2005). This can help the reviewer determine if the results that are presented
clearly answer the research questions. Tables, charts and graphs may be used to summarize the
results and should be accurate, clearly identified and enhance the presentation of results (Russell,
2005). The percentage of the sample who participated in the study is an important element in
considering the generalizability of the results. At least fifty percent of the sample is needed to
participate if a response bias is to be avoided (Polit and Beck, 2006).
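As an illustrative sketch of an inferential test (the scores below are invented), Welch's two-sample t statistic can be computed with Python's standard library and compared against the two-tailed critical value corresponding to P ≤ 0.05:

```python
import math
import statistics

# Hypothetical scores from two groups (e.g. control vs intervention).
group_a = [72, 75, 78, 71, 74, 77, 73, 76]
group_b = [80, 83, 79, 85, 82, 81, 84, 78]

# Welch's two-sample t statistic: the difference in means scaled by
# the combined standard error of the two samples.
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
se = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
t = (mean_a - mean_b) / se

# For the 14 degrees of freedom here, |t| > 2.145 corresponds to
# P <= 0.05 (two-tailed critical value from a standard t table).
significant = abs(t) > 2.145
print(f"t = {t:.2f}, significant at P <= 0.05: {significant}")
```

A statistics package would normally report the exact P value; comparing against a tabulated critical value is shown here only to make the P ≤ 0.05 threshold concrete.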
Discussion/conclusion/recommendations
The discussion of the findings should flow logically from the data and should be related back to
the literature review thus placing the study in context (Russell, 2002). If the hypothesis was
deemed to have been supported by the findings, the researcher should develop this in the
discussion. If a theoretical or conceptual framework was used in the study then the relationship
with the findings should be explored. Any interpretations or inferences drawn should be clearly
identified as such and consistent with the results. The significance of the findings should be
stated but these should be considered within the overall strengths and limitations of the study
(Polit and Beck, 2006). In this section some consideration should be given to whether or not the
findings of the study were generalizable, also referred to as external validity. Not all studies
make a claim to generalizability but the researcher should have undertaken an assessment of the
key factors in the design, sampling and analysis of the study to support any such claim. Finally
the researcher should have explored the clinical significance and relevance of the study.
Applying findings in practice should be suggested with caution and will obviously depend on the
nature and purpose of the study. In addition, the researcher should make relevant and meaningful
suggestions for future research in the area (Connell Meehan, 1999).
Data collection is defined as the procedure of collecting, measuring and analyzing accurate
insights for research using standard validated techniques. A researcher can evaluate their
hypothesis on the basis of collected data. In most cases, data collection is the primary and most
important step for research, irrespective of the field of research. The approach of data collection
is different for different fields of study, depending on the required information.
The most critical objective of data collection is ensuring that information-rich and reliable data is
collected for statistical analysis so that data-driven decisions can be made for research.
Essentially there are four choices for data collection – in-person interviews, mail, phone and
online. There are pros and cons to each of these modes.
In-Person Interviews
Phone Surveys
Pros: a high degree of confidence in the data collected; you can reach almost anyone
Web/Online Surveys
Cons: not all your customers may have an email address or be on the internet, and customers may be wary of divulging information online
In-person interviews are always better, but the big drawback is the trap you might fall into if you don't do them regularly. It is expensive to conduct interviews regularly, and not conducting enough interviews might give you false positives. Validating your research is almost as important as designing and conducting it. We have seen many instances where, if the results of the research do not match the "gut feel" of upper management, they are dismissed as anecdotal and a "one-time" phenomenon. To avoid such traps, we strongly recommend that data collection be done on an ongoing and regular basis. This will help you compare and analyze how perceptions change in response to the marketing done for your products and services. The other issue here is sample size: to be confident in your research, you have to interview enough people to weed out the fringe elements.
A couple of years ago there was quite a lot of discussion about online surveys and their statistical
validity. The fact that not every customer had internet connectivity was one of the main
concerns. Although some of the discussions are still valid, the reach of the internet as a means of
communication has become vital in the majority of customer interactions. According to the US
Census Bureau, the number of households with computers has doubled between 1997 and 2001.
In 2001, nearly 50% of households had a computer. Nearly 55% of all households with an income of more than $35,000 have internet access, and this jumps to 70% for households with an annual income above $50,000. This data is from the US Census Bureau for 2001.
There are primarily three modes of data collection that can be employed to gather feedback: mail, phone, and online. The choice of method for data collection really comes down to a cost-benefit analysis. There is no slam-dunk solution, but you can use the table below to understand the risks and advantages associated with each medium:
Survey Medium | Cost per Response | Data Quality/Integrity | Reach (All US Households)
Keep in mind, the reach here is defined as “All U.S. Households.” In most cases, you need to
take a look at how many of your customers are online and make a determination. If all your
customers have email addresses, you have a 100% reach of your customers.
Another important thing to keep in mind is the ever-increasing dominance of cellular phones
over landline phones. United States FCC rules prevent automated dialing of cellular phone numbers, and there is a noticeable trend towards people having cellular phones as their only voice communication device. This makes it impossible to reach cellular phone customers who are dropping home phone lines in favor of going entirely wireless. Even if automated dialing is not used, another FCC rule prohibits phoning anyone who would have to pay for the call.
Multi-Mode Surveys
Surveys where the data is collected via different modes (online, paper, phone, etc.) are another way to go. It is fairly straightforward to run an online survey and have data-entry operators enter the data from phone and paper surveys into the system. The same system can also be used to collect data directly from respondents.
The survey should ask all the right questions about features and pricing, such as "What are the top 3 features you expect from an upcoming product?", "How much are you likely to spend on this product?", or "Which competitors provide similar products?"
To conduct a focus group, the marketing team should decide on the participants as well as the moderator. The topic of discussion and the objective behind conducting the focus group should be made clear beforehand so that a conclusive discussion can be held.
Data collection methods are chosen depending on the available resources. For example,
conducting questionnaires and surveys would require the least resources while focus groups
require moderately high resources.
Feedback is a vital part of any organization's growth. Whether you conduct regular focus groups to elicit information from key players, or your account manager calls up all your marquee accounts to find out how things are going, these are essentially all processes for finding out, through your customers' eyes: How are we doing? What can we do better?
Online surveys are just another medium to collect feedback from your customers, employees and
anyone your business interacts with. With the advent of Do-It-Yourself tools for online surveys,
data collection on the internet has become really easy, cheap and effective.
It is a well-established marketing fact that acquiring a new customer is 10 times more difficult
and expensive than retaining an existing one. This is one of the fundamental driving forces
behind the extensive adoption and interest in CRM and related customer retention tactics.
In a research study conducted by Rice University professor Dr. Paul Dholakia and Dr. Vicki Morwitz, published in Harvard Business Review, the experiment found that the simple act of asking customers how an organization was performing proved, by itself, to be an effective customer retention strategy. In the study, conducted over the course of a year, one set of customers was sent a satisfaction and opinion survey and the other set was not surveyed. Over the following year, the group that took the survey saw twice as many people continuing and renewing their loyalty to the organization.
The research study offered a couple of interesting reasons, based on consumer psychology, for this phenomenon:
Satisfaction surveys boost the customers' desire to be coddled and induce positive feelings. This stems from a part of human psychology that wants to "appreciate" a product or service it already likes or prefers. The survey feedback collection method is simply a medium to convey this: the survey is a vehicle to "interact" with the company, and it reinforces the customer's commitment to the company.
Surveys may increase awareness of auxiliary products and services. Surveys can be considered modes of both inbound and outbound communication. They are generally thought of as a data collection and analysis source, but most people are unaware that consumer surveys can also serve as a medium for distributing information. It is important to note a few caveats here.
In most countries including the US, “selling under the guise of research” is illegal.
Other disclaimers may be included in the survey to ensure users are aware of this fact. For example: "We will be collecting your opinion and informing you about products and services that have come online in the last year…"
Induced judgments: the very act of asking people for feedback can prompt them to form an opinion on something they otherwise would not have thought about. This is a subtle yet powerful effect, comparable to the "product placement" strategy currently used to market products in mass media such as movies and television shows. One example is the extensive and exclusive use of the Mini Cooper in the blockbuster movie The Italian Job. This strategy is questionable and should be used with great caution.
Surveys should be considered a critical tool in the customer journey dialog. The best thing about surveys is their ability to carry "bi-directional" information. The research conducted by Paul Dholakia and Vicki Morwitz shows that surveys not only get you the information that is critical for your business, but also enhance and build upon the established relationship you have with your customers.
Recent advances in technology have made it incredibly easy to conduct real-time surveys and
opinion polls. Online tools make it easy to frame questions and answers and create surveys on
the Web. Distributing surveys via email, website links or even integration with online CRM tools
like Salesforce.com have made online surveying a quick-win solution.
So, you’ve decided to conduct an online survey. There are a few questions in your mind that you
would like answered and you are on the lookout for a fast and inexpensive way to find out more
about your customers, clients, etc. The first and foremost thing you need to decide is what the objectives of the study are. Ensure that you can phrase these objectives as questions or measurements. If you can't, you are better off looking at other means of gathering data, such as focus groups and other qualitative methods. The data collected via online surveys is predominantly quantitative in nature.
Review the basic objectives of the study. What are you trying to discover? What actions do you
want to take as a result of the survey? – Answers to these questions help in validating collected
data. Online surveys are just one way of collecting and quantifying data.
Visualize all of the relevant information items you would like to have. What will the output
survey research report look like? What charts and graphs will be prepared? What information do
you need to be assured that action is warranted?
Rank each topic according to its priority, listing the most important topics first. Revisit these items to ensure that the objectives, topics, and information you need are appropriate. Remember, you can't solve the problem if you ask the wrong questions.
How easy or difficult is it for the respondent to provide information on each topic? If it is
difficult, is there an alternative medium to gain insights by asking a different question? This is
probably the most important step. Online surveys have to be precise, clear, and concise. Given the nature of the internet, if your questions are too difficult to understand, the survey dropout rate will be high.
Create an unbiased sequence for the topics. Make sure that the questions asked first do not bias the answers to the questions that follow. Sometimes providing too much information, or disclosing the purpose of the study, can create bias. Once you have decided on the topics, you have the basic structure of a survey. It is always advisable to add an introductory paragraph before the
survey to explain the project objective and what is expected of the respondent. It is also sensible
to have a “Thank You” text as well as information about where to find the results of the survey
when they are published.
Decide the question type according to the requirement of the answers to meet analysis
requirements. Choose from an array of question types such as open-ended text questions,
dichotomous, multiple choice, rank order, scaled, or constant sum (ratio scale) questions. You
have to consider an important aspect: complex analysis requirements usually lead to a far more
complicated survey design. However, a couple of tools are available to make life easier:
Page Breaks – The attention span of respondents can be very low when it comes to a long
scrolling survey, so add page breaks wherever sensible. Having said that, a single question per
page can also hamper response rates, as it increases the time needed to complete the survey and
raises the chances of dropout.
Branching – Create smart and effective surveys by implementing branching wherever required.
Eliminate text such as “If you answered No to Q1 then answer Q4”; it annoys respondents and
increases survey dropout rates. Design online surveys using branching logic so that the
appropriate questions are automatically routed based on previous responses.
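A branching rule set like the one described above can be sketched in a few lines. This is a minimal illustration, not any particular survey tool's API; the question IDs, wording, and the `next_question` helper are all hypothetical.

```python
# A minimal sketch of survey branching logic: each question carries a routing
# table mapping an answer to the next question ID, so respondents never see
# instructions like "If you answered No to Q1 then answer Q4".
questions = {
    "Q1": {"text": "Have you used our product?", "next": {"Yes": "Q2", "No": "Q4"}},
    "Q2": {"text": "How often do you use it?", "next": {"_default": "Q3"}},
    "Q3": {"text": "What do you like most about it?", "next": {"_default": "Q4"}},
    "Q4": {"text": "Which age group do you belong to?", "next": {"_default": None}},
}

def next_question(current_id, answer):
    """Return the next question ID given the current question and its answer."""
    routing = questions[current_id]["next"]
    return routing.get(answer, routing.get("_default"))

# A respondent who answers "No" to Q1 is routed straight to Q4:
print(next_question("Q1", "No"))   # Q4
print(next_question("Q1", "Yes"))  # Q2
```

With this representation the routing lives in data rather than in instructions to the respondent, which is exactly what keeps the survey short for people to whom a branch does not apply.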
Write the questions. Initially, draft more survey questions than you need, then keep those best
suited to the survey. Divide the survey into sections so that respondents are not confronted with
one long list of questions.
Repeat all of the steps above to find any major holes. Do the questions actually answer what you
set out to discover? Have someone review the survey for you.
Time the length of the survey. A survey should take less than five minutes. At three to four
research questions per minute, you are limited to about 15 questions. One open-ended text
question counts as three multiple-choice questions. Most online survey tools will record the time
respondents take to answer the questions.
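The rule of thumb above (three to four questions per minute, with one open-ended question weighing as much as three multiple-choice questions) can be turned into a rough length estimate. This is a sketch of our own; the function name and the 3.5-questions-per-minute midpoint are assumptions, not a standard formula.

```python
def estimated_minutes(multiple_choice, open_ended, rate_per_minute=3.5):
    """Rough completion-time estimate: an open-ended text question is weighted
    as three multiple-choice questions, and respondents answer roughly 3-4
    simple questions per minute (3.5 is used here as a midpoint)."""
    weighted = multiple_choice + 3 * open_ended
    return weighted / rate_per_minute

# 12 multiple-choice plus 1 open-ended question is 15 weighted questions,
# which keeps the survey just over the four-minute mark:
print(round(estimated_minutes(12, 1), 1))  # 4.3
```

A check like this, run while drafting, makes it obvious when adding "just one more" open-ended question pushes the survey past the five-minute limit.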
Pretest the survey with 20 or more people and obtain their feedback in detail. What were they
unsure about? Did they have questions? Did they have trouble understanding what you wanted?
Did they take a point of view not covered in your answer options or questions?
Include a few open-ended survey questions that support your survey objective. This will serve as
a feedback survey. Email the project survey to your test group, and then email the feedback
survey afterwards. This way, your test group can give their opinion about the functionality and
usability of your project survey by means of the feedback survey.
Online surveys have, over time, evolved into an effective alternative to expensive mail or
telephone surveys. There are, however, a few conditions you must be aware of. If you are trying
to survey a sample that represents the target population, keep in mind that not everyone is
online. Moreover, not everyone who is online is receptive to an online survey; generally,
younger demographic segments are more inclined to respond to online surveys.
Writing great questions can be considered an art, and like any art it requires a significant
amount of hard work, practice, and help from others.
A small change in wording can produce markedly different results. Words such as could, should,
and might are used for almost the same purpose, but may produce a 20% difference in agreement
with a question. For example: “The management could / should / might have shut the factory”.
Intense words such as “prohibit”, which convey control or force, produce similar effects. For
example: “Do you believe that Donald Trump should prohibit insurance companies from raising
rates?”
Sometimes the content is just biased. For instance, “You wouldn’t want to go to Rudolpho’s
Restaurant for the organization’s annual party, would you?”
Misplaced questions
Questions should always appear in their intended context; questions placed out of order or
without purpose should be avoided. Generally, a funnel approach should be used: generic
questions in the initial section of the questionnaire as a warm-up, followed by more specific
ones, with demographic or geographic questions towards the end.
Multiple choice answers should be mutually exclusive in order to provide distinct choices.
Overlapping answer options frustrate the respondent and make interpretation difficult at best.
Also, the questions should always be precise.
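Overlap is easiest to spot with numeric answer brackets, where ranges such as 18–25 and 25–34 both contain 25. As a hedged illustration (the bracket values and the `has_overlap` helper are invented for this sketch), a quick programmatic check might look like:

```python
# A quick check that numeric answer brackets are mutually exclusive.
# Brackets are (low, high) pairs, inclusive on both ends.
def has_overlap(brackets):
    """Return True if any two inclusive (low, high) brackets share a value."""
    ordered = sorted(brackets)
    return any(prev_high >= low
               for (_, prev_high), (low, _) in zip(ordered, ordered[1:]))

bad_options  = [(18, 25), (25, 34), (35, 44)]   # 25 appears in two options
good_options = [(18, 24), (25, 34), (35, 44)]   # distinct, exclusive choices

print(has_overlap(bad_options))   # True
print(has_overlap(good_options))  # False
```

Running such a check while designing the answer options catches the classic "18–25 / 25–34" mistake before any respondent sees it.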
This question is vague. In what terms is the liking for orange juice to be rated – sweetness,
texture, price, nutrition, and so on?
Asking about industry terms such as caloric content, bits, bytes, MBs, and other such terms and
acronyms can confuse respondents. Ensure that the audience understands your language level,
your terminology and, above all, the question you ask.
“What suggestions do you have for improving our shoes?” The question is about quality in
general, but the respondent may offer suggestions about texture, the type of shoes, or other
variants.
Certain questions will always cross privacy boundaries, and since privacy is an important issue
for most people, such questions should either be eliminated from the survey or not made
mandatory. Survey questions about income, family income and status, and religious or political
beliefs should be handled with particular care, as they are considered intrusive and respondents
can choose not to answer them.
Unbalanced answer options in scales such as the Likert scale and the semantic differential scale
may be appropriate in some situations and biased in others. When analysing eating habits, one
study used a quantity scale that placed obese people in the middle of the scale, with the polar
ends reflecting starvation and an irrationally large amount to consume. Similarly, an unbalanced
scale may be justified in settings where we would not normally expect poor service, such as
hospitals.
“What is the fastest and most convenient ISP for your location?” The fastest ISP is likely to be
expensive, and the less expensive ones are likely to be slower. To capture both factors, two
separate questions should be asked.
Dichotomous questions
Dichotomous questions are used when you want a distinct answer, such as Yes/No or
Male/Female. For example, for the question “Do you think Hillary Clinton will win the
election?”, the answer can only be Yes or No.
Long questions will inevitably increase completion time, which generally leads to a higher
survey dropout rate. Open-ended questions are the longest and most demanding to answer, while
dichotomous and multiple-choice questions are the quickest and easiest.
Qualitative vs. Quantitative
- Qualitative goals: to gain insight; explore the depth, richness, and complexity inherent in the phenomenon.
- Quantitative goals: to test relationships; describe; examine cause-and-effect relations.
Phenomenology
- Method
- Outcomes

Grounded theory
- Analysis
  - Concept formation
  - Concept development - reduction; selective sampling of literature; selective sampling of subjects; emergence of core concepts
  - Concept modification & integration
- Method

Historical
- Purpose - describe and examine events of the past to understand the present and anticipate potential future effects
- Method
- Analysis - synthesis of all data; accept & reject data; reconcile conflicting evidence

Case study
- Purpose - describe in-depth the experience of one person, family, group, community, or institution
- Method

Data collection
- Interview with audiotape & videotape
- Direct, non-participant observation
- Participant observation
- Field notes, journals, logs
- Bracketing
- Intuiting
- Can use more than one researcher & compare interpretation and analysis of data

Data analysis
- Living with data
- Cluster & categorize data
- Examine concepts & themes
- Define relationships between/among concepts
Theoretical framework
Following the identification of the research problem and the review of the literature the researcher
should present the theoretical framework (Bassett and Bassett, 2003). Theoretical frameworks are a
concept that novice and experienced researchers find confusing. It is initially important to note that not
all research studies use a defined theoretical framework (Robson, 2002). A theoretical framework can be
a conceptual model that is used as a guide for the study (Conkin Dale, 2005) or themes from the
literature that are conceptually mapped and used to set boundaries for the research (Miles and
Huberman, 1994). A sound framework also identifies the various concepts being studied and the
relationship between those concepts (Burns and Grove, 1997). Such relationships should have been
identified in the literature. The research study should then build on this theory through empirical
observation. Some theoretical frameworks may include a hypothesis. Theoretical frameworks tend to be
better developed in experimental and quasi-experimental studies, and often poorly developed or
non-existent in descriptive studies (Burns and Grove, 1999). The theoretical framework should
be clearly identified and explained to the reader.
b. Research Problem
As the title of the article suggests, cultural synchronization between the teacher
and the students can be influential in discipline. This is illustrated in the authors’
transcription of an interaction among students and the teacher along with commentary.
More specifically, the authors note that Ms. Simpson (the teacher) has shifted her
language to include dialect that is non-standard English but is closer to the students’ own
language. The authors state “Ms. Simpson’s comments, undoubtedly, have altered her
III. Methods
The authors state they used a qualitative case study approach. The teacher
participant was selected based on being an effective teacher (self-report and principal
recommendation) and the class/students were selected by the teacher based on the
The student participants are described in terms of ethnicity/race, gender, age and
socioeconomic backgrounds. The teacher is also described as being 31 years old, African
American, and having 10 years’ experience. Her degree and socioeconomic background
are also given. The authors include examples of actions of Ms. Simpson that illustrate
her culturally responsive nature (e.g., student council advisor, sensitivity to students’
backgrounds), with examples of how Ms. Simpson’s disciplinary style may differ from more
traditional approaches. The school is described in terms of its demographic makeup and the
percentage of students eligible for reduced or free lunch, along with geographical data that
inform the reader the school is a metropolitan, diverse, largely low-income school.
Data collection methods are described in considerable detail, as one would expect with a
qualitative study. The number of field visits was extensive (36), and both formal and informal
interviews were conducted. Some documents were reviewed (teacher handouts concerning
expectations). The first author also maintained a research journal. The field notes, documents,
and interviews were coded and examined for themes.
These are also described in considerable detail. Reliability and validity issues were addressed by
the authors, who noted how they ensured the reliability of the data obtained and the
triangulation of data sources. Field visits and interviews were audiotaped, for example. Ms.
Simpson also reviewed the field notes and interview transcripts for errors or disagreements. The
authors’ colleagues also reviewed and critiqued the study.
IV. Results and Conclusions
The authors begin by relating their findings back to the overall purpose of the study: examining
how cultural synchronization between “teachers and students influences classroom disciplinary
actions.” To that end, they present their findings as a set of themes.
Patterns of cultural humor is the first theme and is illustrated with several
transcriptions of interactions along with commentary explaining how they enabled the
teacher to “build cultural bridges between students’ home and school lives.” They also
explain how the use of dialect allows the teacher to emphasize expectations in
meaningful ways. Her responses also “promoted and reaffirmed solidarity with her
students.” They note that humor may allow the teacher to be more authentic in the
students’ perceptions. The teacher also demonstrated how to use humor in place of direct
reprimands.
Demonstrations of affect and emotion was another major theme noted; transcriptions are also
provided to illustrate the presence of this theme in the teacher’s actions. Of importance is that
the teacher’s reprimands were generally met with solemnity and quiet when they were intended
to get a serious reaction from students. The authors note that such blunt and direct types of
discipline are more common in urban African American students’ homes.
teachers. They assert their case study does help to establish that cultural synchronicity
experience and status, and her relationships within the community and with families.
Finally, the authors make suggestions for future research including empirical studies with
larger sample sizes, use of both male and female teachers, across grade levels and with
teachers from various cultural communities.
The degree to which a sample reflects the population it was drawn from is known as representativeness
and in quantitative research this is a decisive factor in determining the adequacy of a study (Polit and
Beck, 2006). In order to select a sample that is likely to be representative and thus identify findings that
are probably generalizable to the target population a probability sample should be used (Parahoo,
2006). The size of the sample is also important in quantitative research as small samples are at risk of
being overly representative of small subgroups within the target population. For example, if, in a sample
of general nurses, it was noticed that 40% of the respondents were males, then males would appear to
be over-represented in the sample, thereby creating a sampling error. The risk of sampling
errors decreases as larger sample sizes are used (Burns and Grove, 1997). In selecting the sample the
researcher should clearly identify who the target population are and what criteria were used to include
or exclude participants. It should also be evident how the sample was selected and how many were
invited to participate (Russell, 2005).
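To make the idea of a probability sample concrete, the sketch below draws a simple random sample from an invented population and compares a subgroup's share in the sample with its share in the population. The population figures and variable names are hypothetical, not taken from any cited study.

```python
import random

# Probability (simple random) sampling: every member of the target population
# has an equal chance of selection, which is what makes the sample likely to
# be representative. The 10% male / 90% female split is invented here purely
# for illustration.
random.seed(42)  # reproducible draw

population = ["male"] * 100 + ["female"] * 900
sample = random.sample(population, 50)  # each member equally likely to be drawn

population_share = population.count("male") / len(population)
sample_share = sample.count("male") / len(sample)

print(f"male share - population: {population_share:.0%}, sample: {sample_share:.0%}")
```

With a sample this small, the subgroup share can still drift noticeably from the population share, which is exactly the sampling-error risk the paragraph above describes; larger samples pull the two figures together.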
Ethical considerations
Beauchamp and Childress (2001) identify four fundamental moral principles: autonomy, non-
maleficence, beneficence and justice. Autonomy infers that an individual has the right to freely decide to
participate in a research study without fear of coercion and with a full knowledge of what is being
investigated. Non-maleficence implies an intention of not harming and preventing harm occurring to
participants both of a physical and psychological nature (Parahoo, 2006). Beneficence is interpreted as
the research benefiting the participant and society as a whole (Beauchamp and Childress, 2001). Justice
is concerned with all participants being treated as equals and no one group of individuals receiving
preferential treatment because, for example, of their position in society (Parahoo, 2006). Beauchamp
and Childress (2001) also identify four moral rules that are both closely connected to each other and
with the principle of autonomy. They are veracity (truthfulness), fidelity (loyalty and trust),
confidentiality and privacy. The latter pair are often linked and imply that the researcher has a duty to
respect the confidentiality and/or the anonymity of participants and non-participating subjects. Ethical
committees or institutional review boards have to give approval before research can be undertaken.
Their role is to determine that ethical principles are being applied and that the rights of the individual
are being adhered to (Burns and Grove, 1999).
Operational definitions
In a research study the researcher needs to ensure that the reader understands what is meant by
the terms and concepts that are used in the research. To ensure this, any concepts or terms
referred to should be clearly defined (Parahoo, 2006).
What is Critical Thinking?
When examining the vast literature on critical thinking, various
definitions of critical thinking emerge. Here are some samples:
Among the several major approaches to teaching critical thinking skills, the literature
seems to favor infusion-teaching thinking skills in the context of subject matter. This
approach entails integrating content and skills as equally as possible in order to
maintain a balance of the two (Willis 1992). Thinking skills are reinforced throughout
the teaching of the subject and later retained. Research shows that students learn both
skills and subject matter if they are taught concurrently (Beyer 1988).
Influence of Modeling
Metacognition refers to the knowledge and control people have over their thinking
and learning activities (Flavell 1979); it involves "thinking about thinking."
On the other hand, an opinion is one's belief, feeling, or judgment about something. It
is subjective or a value judgment, not something that can be objectively verified. The
teacher describes the reasoning process and presents several examples and non-
examples to help explain the process. Simultaneously, the teacher anticipates the
kinds of problems students may have about when and how to use the reasoning
process, and uses a variety of passages to illustrate the skill.
The teacher can illustrate how to distinguish fact from opinion based on the following
statements:
For example, to determine whether a statement is fact or opinion, the following "inner
dialogue" based on the following passage might be the sequence of mental processes
going on inside the head of the teacher-expert.
Teacher: a) I'm going to pretend I don't know the difference between fact and opinion
in the passage given. See what happens. (Teacher reads and pretends to have trouble
distinguishing fact and opinion.) Hmmm . . . Let me begin by drawing out all the
statements of facts in the passage (Teacher reads).
b) What did we say a fact is? (Teacher refers to the earlier definition of a fact.) Okay,
if that's the case, then the following are facts. The tropical forests of Malaysia are the
oldest and have the largest variety of flora and fauna. I guess this has been proved by
geologists, biologists, and botanists. Perhaps, the diversity of flora and fauna was
compared in relation to the temperate forests. Mount Kinabalu is the highest peak in
Malaysia and South East Asia. Yes, this is a fact. It would have been more believable
if the height of the mountain peak had been stated by the author.
c) Next, let me look at the statements that are likely to be opinions. Now, what's an
opinion? (Teacher refers to the previous definition of an opinion.) The statement that
the tropical forests of Malaysia are reputed to be the most beautiful in the world is
certainly an opinion. Why? Because beauty is in the eyes of the beholder! For me, the
temperate forests are equally beautiful, especially during fall. Oh, the colors!
d) Also, the statement that tourists are attracted to the tropical forests because of the
legends and stories associated with the jungle is another opinion. Surely, this is a
personal preference and a matter of taste which may not be the reason for seeing the
tropical forests.
The teacher checks how the students interpreted the modeling sessions, asking them to
tell when and how to use the reasoning process. If, however, the students still do not
understand, the teacher provides cues in the form of prompts, analogies, metaphors, or
other forms of elaboration that help students refine their understanding of the
reasoning process (Herrmann 1988).
Through modeling, teachers are sharing their thinking through externalizing their
inner dialogue and verbalizing the questions they are asking themselves. By sharing
their strategies, teachers are in fact providing their students with models of mental
processes.
Michael Fay deserved the punishment because he broke the laws of Singapore.
Singapore is the only country in the world that uses caning as a form of punishment.
Student 1: Here's my list. I think they are all facts except the third one because the
word "wrong" is used. While some people consider caning wrong, others consider it
right.
Student 2: You're right. It sounds like an opinion to me. Caning could also be wrong
or right for different reasons. What about number 1, though, isn't that an opinion?
Student 1: Maybe, because many people in the United States don't believe he
deserved to be caned even though he broke Singapore laws. I think you're right, it
seems to be an opinion.
Student 2: Is Singapore the only country that uses the cane for punishment? How do
we find out?
After students have discussed in pairs, they share their answers and ask further
questions before the whole class. The teacher guides their thinking by providing
additional explanations and illustrations in order to help them understand the
differences between facts and opinions. Modeling by the learner involves students
interacting with one another in order to become aware of their thinking processes. The
teacher facilitates the process directly and indirectly.
Conclusion
Teaching learners to think critically is a difficult task and requires a great deal of
patience. But the time and effort are well spent to try to prepare a citizenry capable of
making decisions and solving problems using reflective thought to guide action for the
common good. One approach to teaching critical thinking is the metacognitive
approach, which emphasizes explaining and modeling the thinking strategy. The
metacognitive approach proposed serves as a guide for teachers interested in orienting
their teaching toward helping learners become more analytical and independent
thinkers.
References
Bandura, A. Social Learning Theory. Englewood Cliffs, New Jersey: Prentice Hall, 1979.
Beck, A. T. Cognitive Therapy and Emotional Disorders. New York: International Universities Press, 1976.
Beyer, B. K. Developing a Thinking Skills Program. Boston: Allyn & Bacon, 1988.
Flavell, J. H. "Metacognition and Cognitive Monitoring: A New Area of Cognitive-Developmental Inquiry." American Psychologist 34 (1979): 906-911.
Good, T. L., and J. E. Brophy. Looking in Classrooms, 6th ed. New York: Harper Collins, 1994.
Herrmann, B. A. "Two Approaches for Helping Poor Readers Become More Strategic." The Reading Teacher (1988): 24-28.
Kagan, S. Cooperative Learning. San Juan Capistrano, California: Resources for Teachers, 1992.
Manning, B. H. Cognitive Self-Instruction for Classroom Processes. Albany: State University of New York Press, 1992.
National Council for the Social Studies. "A Vision of Powerful Teaching and Learning in the Social Studies: Building Social Understanding and Civic Efficacy." Social Education 57 (October 1993): 213-233.
Newmann, F. M. "Promoting Higher Order Thinking in Social Studies: Overview of a Study of Sixteen High School Departments." Theory and Research in Social Education 19 (1991): 324-340.
Sanacore, J. "Metacognition and Improvement of Reading: Some Important Links." Journal of Reading (1991): 707-713.
Taylor, N. E. "Metacognitive Ability: A Curriculum Priority." Reading Psychology: An International Quarterly 4 (1983): 269-278.
Wilen, W. W. "Thinking Skills Instruction." In Crucial Issues in Social Studies, K-12, edited by B. Massialas and R. Allen. Belmont, California: Wadsworth Publishing, in press.
Willis, D. Teaching Thinking: Curriculum Update. Alexandria, Virginia: Association of Supervision and Curriculum Development, 1992.

William W. Wilen is Professor of Education at Kent State University, Kent, Ohio.
John Arul Phillips is Associate Professor in the Faculty of Education at the
University of Malaya, Kuala Lumpur, Malaysia.
*This paper is a draft for discussion only, please do not quote or cite without
contacting the authors first
Abstract
The aim of this paper is to search for a model and a method that can help teachers
facilitate critical thinking in school children. Many critical thinking texts fail to
delineate the scope of their individual approaches to critical thinking or neglect to
review their conceptual foundations before offering insights or formulations of their
critical thinking programs. To avoid this shortcoming, the paper first addresses four core issues
in educating for critical thinking through a critical review of the literature: the need for critical
thinking; its definitive characteristics; what a critical thinking programme should aim to teach;
and how teachers might stimulate critical thinking.
Having established a deeper understanding of these core issues, the paper presents a
synthesized model that aims to coherently guide student, curriculum, and professional
development. Justification of the selected components and their relationships central
to the model will be discussed. The feasibility of translating this model into classroom
practice has been examined via a university-school collaborative action research
project conducted in a primary school in Auckland, New Zealand. The final part of the
paper discusses the findings and implications derived from the project.
Introduction
In recent years there has been a shift in many countries in their national educational
curriculum policy statements from an emphasis on knowledge to an emphasis on
higher order thinking skills. Particularly, in North America, critical thinking is
considered to be an important educational goal at all levels of schooling. However,
while there may be agreement about the importance of increasing critical thinking
ability, there appears to be little agreement about what it is, how it should be done, and
how to facilitate it in students. In fact, the major difficulty in educating for critical
thinking is lack of a synthesised understanding about what is meant by the term, and
what an effective critical thinking program should involve. To date, there is still little
research that links conceptual analysis and classroom practice. Many studies stop
short at the conceptual and theoretical level and seldom go further to pursue empirical
support or to examine the feasibility of translating their findings into everyday
classroom practice. On the other hand, as Halonen has observed, many studies fail to
delineate the scope of their individual approaches to critical thinking or neglect to
review their conceptual foundations before offering insights or formulations of their
critical thinking programs. This was the dilemma faced by the authors when
embarking on a university-school collaborative project to develop critical thinking in
primary school children in response to recent New Zealand Ministry of Education
policy initiatives.
The aim of this paper is to outline the rationale for a model and a method that can
effectively help teachers to facilitate critical thinking in school children. To avoid
the shortcoming pointed out by Halonen, the paper first addresses four core issues in
educating for critical thinking by a critical review of the literature on critical thinking
and critical thinking education: Why do we need critical thinking? What are the
definitive characteristics of critical thinking? What should a critical thinking program
aim to teach? What could/should teachers do to stimulate student thinking and
encourage them to think critically about their thinking and learning? After developing
a deeper understanding of these core issues, the paper presents a synthesized model
that aims to provide a coherent guide to student, curriculum, and professional
development. Justification of the selected components and their relationships central
to the model will be discussed.
The question of who needs critical thinking appears simple and straightforward if
critical thinking is considered to be a kind of thinking that helps our students to think
better, so that they can learn better and solve their problems in and out of school more
effectively. It is a general view that the thinking of our students often tends to be
inadequate and faulty. In learning to think in a better way, students will be better
prepared to cope with challenges in their lives, both in and out of school. However,
the authors consider that this applies not only to students. The complexity and the
rapidly changing nature of the contemporary world contribute to the fact that even
adults may be unsure about beliefs and actions. Very often we find what adults
believe or do is based on faulty thinking, and the consequences are often costly for
individuals or for community and society. For this reason, we believe it is of great
practical importance for every member of a community and society to learn to think
better, so that they become more capable of responding to challenges and
opportunities. If critical thinking is a tool to help us do this, then everyone needs it
and should learn how to use it.
Much of the work on critical thinking alerts us to the usual sources of faulty thinking and
suggests ways to overcome our natural tendencies to err as we think. In particular,
there are two well-known accounts of the fallible nature of human thinking. One is
Francis Bacon’s Four Idols of Truth, and the other is Herbert Simon’s Bounded
Rationality. Regardless of our age, gender, ethnicity, culture, nationality, life
experiences or level of education, we are more or less susceptible to produce one kind
or another of these kinds of faulty thinking. A quick review of the sources described in the
two accounts may better explain why learning to think critically about the quality of
our own thinking and that of others should concern not only students but everyone.
Francis Bacon used "four idols of truth" to represent four potential sources of faulty
thinking that all humans tend to have. He alerted his readers that these are limitations
for humans to strive to overcome or compensate for. The idols of the tribe cover the
limitations and tendencies common to human nature. For example, we tend to rely
very much on our senses to give meanings to the world and our experiences, yet our
senses inherently have their limitations and are easily deceived. The idols of the
cave have to do with the belief system and worldview we adopt as a result of our
individual interactions with the particular environments in which we are raised.
Bacon explained that each of us lives in a world of our own, confined, as it were, to a
cave, lacking reliable knowledge of what exists outside it. With our own frame of
reference, we tend to see things not as they are but shaped by our individual and
cultural idiosyncrasies and specialties. The idols of the market place represent all
types of faulty thinking that are caused by the semantic problem of words used in
everyday language. Bacon observed that everyday words are often ambiguous, vague,
and misleading; hence entangle and pervert our judgment. We are often confused
when we think in terms of words whose meanings are unclear or not commonly
shared. The idols of the theatre refer to all kinds of dogmas that have been created
with little or no regard to truth or realities, yet appear to be authoritative and may
deeply influence people’s minds into excesses of dogmatism and denial. Bacon
dismissed this kind of dogmatic truth as fictions of stage plays, which distract
audience from what is, to illusory worlds.
If we appreciate Bacon’s and Simon’s points, and if we want to enhance our ability to
respond to problems and opportunities so that we can improve the quality of
our lives, clearly it is a mistake to treat critical thinking as something only for students
to learn in school. Understanding that the quality of our thinking has a direct impact
on the quality of our lives, we should see the value of making deliberate efforts to
push the boundaries of our bounded rationality, to overcome or counteract our idols
of truth.
It is generally agreed that no single definition available in the literature can
fully capture the complexity of this construct. Yet, for effective teaching and learning
of critical thinking, it is crucial for teachers and students to be able to identify it and
distinguish it from other kinds of thinking. From the earlier discussion, we can
roughly distinguish critical thinking from other kinds of thinking by its functional
characteristic. That is, it serves to deliberately direct one's thinking to examine the
quality of one's own thinking as well as that of others in order to make a good
choice of what to believe and do. This functional characteristic is captured in a widely
cited definition of critical thinking offered by Ennis: "Critical thinking roughly
means reasonable and reflective thinking focused on deciding what to believe or do"
(p. 10).
However, by using this functional characteristic, one can only roughly distinguish
critical thinking from other kinds of thinking; for example, intuitive thinking,
daydreaming, associative thinking, and offering opinions or judgments without providing
good reasons or justifications: none of these falls into the category of critical
thinking. By using this functional characteristic, one still cannot specifically
distinguish critical thinking from a cluster of other types of thinking that are closely
related to it, for example, the concepts of metacognition, problem solving, decision-
making, reflective thinking, and higher order thinking. To establish more precision in
distinguishing critical thinking from other related types of thinking, the three
characteristics described in Matthew Lipman's definition of critical thinking are
more helpful. These three characteristics specify the requirements of how critical
thinking should operate so as to ensure that the thinking process will produce a well-
reasoned judgment of whether certain beliefs or actions should be taken or rejected. A
review of a number of contemporary critical thinking theorists' discussion of each of
them suggests that the three requirements are not only individually justified but also
inter-related.
Lipman defines critical thinking as "skilful, responsible thinking that facilitates good
judgment because it (i) relies upon criteria, (ii) is self-correcting, and (iii) is sensitive
to context." From this definition, the first criterion for critical thinking is that it relies
on external criteria and standards for judging the reasonableness of one's own and
others' claims of certain beliefs and actions. Second, it demands the thinker's
responsibility to self-correct when a certain part of his or her reasoning is found
failing to meet the criteria and standards of reasonableness. Third, the criteria and
standards employed for facilitating judgment of reasonableness must have the
universal normative force but at the same time must be sensitive to the specific
context where the judgment is made.
Without meeting the first requirement, the judgment of the merits of certain beliefs
and actions produced by the thinking process will be arbitrary, undisciplined,
unreliable, and hence, cannot be justified. A thinking process that may involve
problem solving, decision making, reflective thinking, metacognition, or higher order
thinking cannot be qualified as critical thinking if it does not employ external criteria
and standards to facilitate judgment of the merits of certain beliefs and actions. As Paul
argues, responsible critical thinking requires one not merely to be engaged in the
mental processes but demands that one "do these mental processes well, that is, in
accord with the appropriate standards and criteria."
Without meeting the second requirement, the judgment of the merits of certain beliefs
and actions produced by the thinking process will not be accurate. When errors are
identified but not rectified in the thinking process, the judgement generated from the
thinking process will inevitably be faulty. Further, according to Richard Paul, it can
only be called weak sense critical thinking, if the thinker only self-corrects those
neutral, procedural, or technical errors, but not those arising from his or her deep-
seated egocentric thinking, feelings, and desires. Strong sense critical thinkers aim to
identify and self-correct both types of thinking errors.
Without meeting the third requirement, the judgment of the merits of certain beliefs
and actions resulting from the thinking process cannot be a good one because the
criteria and standards employed are treated as absolute, without regard to whether
they are appropriate and responsive to the purpose and needs of a particular context.
By including the requirement that criteria and standards that have universal normative
force and are sensitive to the specific context be employed, the judgment made can
avoid dogmatic, absolutist thinking errors as well as subjective, relativist thinking
errors. As Burbules argues, the person who wants to make a reasoned judgment of
beliefs and actions has a practical problem to solve in a specific social context in
which the person is related to other persons. The person needs to make sense, to be
fair to alternative points of view, to be careful and prudent in taking important
stances, as well as to be willing to admit when he or she has made a mistake.
These three criterial characteristics make critical thinking unique because, as Lipman
says, "it both employs criteria and can be assessed by appeal to criteria." A thinking
process that fails to meet any one of the three requirements is not adequate to
qualify as critical thinking. By using these three criterial
characteristics in addition to the functional characteristic, teachers and students should
be more able to distinguish critical thinking from a cluster of other types of thinking
that are closely related to it. They should also become more able to tell whether they
are, in fact, teaching and learning critical thinking.
It is generally agreed that critical thinking cannot operate unless it rests upon some
proficient abilities and skills that can assure competency. Hence, many standard
critical thinking programs focus on developing thinking abilities and skills in students.
However, controversies arise not only in the justification of the kind of thinking
abilities and skills that should be taught in a critical thinking program, or assessed in a
critical thinking assessment but also in the justification of a "skills-only" approach per
se. Critics of the "skills-only" critical thinking programs argue that apart from having
certain required abilities and skills for critical thinking, one also needs certain
desirable dispositions to accomplish critical thinking. Yet, controversies arise at the
conceptual level with regard to how critical thinking dispositions should be conceived,
as well as at the practical level of how to teach and assess them. This section analyses
how theorists, researchers, and scholars address these issues.
Many theorists realize that it is impossible to form a comprehensive list of the mental
abilities and skills required to accomplish critical thinking because critical thinking
involves both first order and second order thinking (as can be seen from its definitive
characteristics discussed above). Further, it can be called for in a wide range of
domains and situations, so a variety of background knowledge is needed. As Lipman
comments, the list can consist of nothing less than an inventory of all possible kinds
of intellectual powers. To facilitate development of critical thinking curriculum and
assessment tools, many theorists tend to identify sets of generic skills for good
reasoning, assuming that students who have mastered these skills will be able to put
them to use in various domains and contexts. Therefore, it is the standard practice
that only general principles and skills of good reasoning (for example, formal logic,
informal logic, and argumentation) are taught and assessed.
However, research has shown that students' ability to transfer is weak when standard
stand-alone critical thinking programs focus only on training and drilling discrete
generic thinking skills, engaging students in solving context- and content-free
thinking tasks. One major problem of this approach is that students are unable to see
the wood for the trees. That is, they are unable to understand how the discrete skills
can be orchestrated and used as an integrated whole critical reasoning process.
Another problem of this approach is that it leads students to view learning critical
thinking and learning other content knowledge as unrelated to one another, rather than
see them as complementary, with their integration interacting to the advantage of
each.
To promote transfer, many theorists argue, students need both to understand the
general critical thinking principles, criteria, and standards, and have the opportunity to
use them to think critically in all curriculum areas as they acquire domain-specific
knowledge. Moreover, classroom experience of teaching and learning of critical
thinking has to be made relevant to students' lives outside of the classroom setting
so as to create a correspondence between what is required for critical thinking in real-
world situations and what is taught in school programs intended to develop critical
thinking. Most importantly, students should be equipped with self-assessment and
self-correction abilities so that they can think for themselves and render prudent
(right, sound, just, and fair) judgments in dealing with problems and issues in the
course of their everyday life.
3.2 Critical thinking dispositions
Many theorists argue that to accomplish critical thinking, having abilities and skills is
not enough. It also takes certain desirable dispositions. Hence, they offer and delineate
sets of thinking dispositions that serve to alert one to think critically about the
problem or issue at hand, and to strive to raise ordinary thinking and reasoning to a
higher level of quality. However, the term "dispositions" has a complex meaning when
it is used in the context of promoting critical thinking. A standard account of critical
thinking dispositions focuses on whether a person has the willingness and spontaneity
to apply critical thinking abilities and skills in situations where they are called for. A
more substantive account of critical thinking dispositions focuses on whether the
value, attitude, and purpose the person holds are desirable when he or she uses (or
does not use) his or her abilities and skills.
Although both the standard account and a more substantive account argue for a skills-
plus-dispositions conception of critical thinking and pedagogical approaches to
developing critical thinking in students, they consider the importance of developing
critical thinking dispositions in different ways. In a standard account, developing in
students critical thinking dispositions is important because it facilitates students'
transfer of critical thinking skills to other contexts. In a more substantive account,
developing critical thinking dispositions is important because it enhances students'
simultaneous development of intellectual capacities and intellectual character. Its
underlying assumption is that when students become more intellectually efficacious
and more intellectually virtuous in thinking for themselves, they are more able to
make reasonable and responsible judgment of what values, beliefs, and actions to
take. They then also have greater agency to take charge of the nature and quality of
their public and private lives, and to contribute to their community and society.
As shown in Figure 1, the CR-CT model includes both macro- and micro-goals. Its
macro goal is to develop students into better thinkers, learners, and persons. Its micro-
goals include (1) developing critical thinking skills and dispositions, language
abilities, and content knowledge simultaneously so that they reinforce one another and
enhance deep learning; (2) establishing instructional coherence for students so that
they can make sense of and see connection and order among the seeming randomness
of various achievement objectives; and (3) communicating clearly performance
expectations and standards to students so as to enable students to assess their own and
their peers' performance. Student development towards both micro- and macro-goals
is likely to be facilitated by a synthesized instructional approach. The instructional
approach adopted in the model is an integration of three components: (1) an infusion
approach, (2) an enculturation approach, and (3) collaborative reasoning discussion as
an interface for the integration of the first two components. An infusion approach
emphasizes embedding critical thinking in content learning. An enculturation
approach emphasizes creating and sustaining a social learning environment to
instantiate the norms of language, values, expectations, dispositions, and skills of
good thinking within the classroom and school. Learners are initiated into the culture,
and supported to advance to their successful participation in the reasoned discourse
and inquiry practices of the learning community. Collaborative reasoning discussions
are important learning activities that provide participants the opportunities and
platforms to engage in dialogical and dialectical thinking. This component acts as an
interface for the integration of the first two components.
The infusion component in the CR-CT model is based, in large part, on the version
developed by Paul and his colleagues at the Foundation for Critical Thinking. This
version is selected because it is grounded in a more substantive conception of critical
thinking principles, skills and dispositions when compared with other versions
articulated in the literature. A brief summary of Paul's critical thinking principles,
skills and dispositions is in order. Paul strongly emphasizes that good reasoning
should meet some basic universal intellectual standards, for example, clarity,
accuracy, precision, consistency, relevance, sound evidence, logical reasons, depth,
breadth, fairness, etc. Paul argues that these intellectual values, among others, are
universal because they are embedded not only in the history of the intellectual and
scientific communities but also in the self-assessing behaviour of reasonable persons
in everyday life. Hence, these intellectual standards should be reasonable, defensible,
objective, and appropriate to use to assess all kinds of reasoning.
Paul emphasizes that critical thinkers should be able to recognize the fallible nature of
human thinking, and that strong-sense critical thinkers would consciously develop in
themselves a set of intellectual traits: intellectual humility, intellectual integrity,
intellectual perseverance, intellectual empathy, and intellectual self-discipline, among
others. These intellectual traits are interdependent. Take the trait of intellectual
humility as an example. To become aware of the limits of one's knowledge, one
needs the intellectual courage to face one's own prejudices and ignorance. To
discover one's own prejudices, one must intellectually empathize with the reasoning
within points of view with which one fundamentally disagrees. And one, typically,
must persevere intellectually over a period of time, for reasoning within a point of
view against which one is biased takes time and significant effort. One will not invest
that effort and time unless one both sees their justification and has the necessary
confidence in reason to discern what is well-reasoned from what is not in the opposing
viewpoint. Further, one must recognize an intellectual responsibility to be fair to
views that one opposes. One must feel obliged to hear them in their strongest form to
ensure that one is not condemning them out of ignorance or bias. At this point, one
comes full circle back to where one began: the need for intellectual humility.
The first author tried out this version of an infusion approach in her pilot study
conducted in 2001 in a primary school in Auckland with a group of three teachers.
She found that this approach relied too heavily on the individual teacher's
commitment to self-direct development of critical thinking in terms of its application
to classroom practice. Without a sustained school-based professional development and
support from a working environment within the school that valued critical-thinking-
driven self-correction and self-improvement in teaching and learning, this approach
was found to be extremely difficult to implement.
To address this issue, two modifications to Paul et al.'s infusion approach have been
made. First, the modified version has incorporated two other components: an
enculturation approach and collaborative reasoning discussion. These are two well-
researched methods for teaching thinking and learning. Second, it has turned the
teacher's individual and haphazard remodelling work into a university-school
collaborative action research project so the remodelled curriculum plans become more
principle-driven, evidence-based, and more responsive to the constraints and
affordances inherent in the school's particular context. The authors agree that this
modified version of the infusion approach is more able to accommodate the third
precondition for achieving our macro-goals, namely, promoting participation and
contribution in the reasoned discourse of communities of critical inquiry and critical
practice.
With this clarification, incorporating the notion of critical thinking which emphasizes
independent thinking and the notion of knowledge transmission which emphasizes
socialization and conformity to norms and standards of community practices in the
CR-CT model should be complementary rather than contradictory. Nevertheless, we
also note that the third component, collaborative reasoning discussion, serves as an
interface for integrating the two approaches.
Perkins argues that if private thinking is, in significant part, an internalisation of more
public verbal and other interactions, then by making important patterns of thinking
public, by forcing them to occur in public, this begins to teach the person something
about possible patterns of thinking. Such practice encourages the person to view
patterns of thinking that have been made explicit as objects that he or she can adopt or
not adopt, choose among, revise, and so forth.
We incorporated this instructional strategy into the CR-CT model because firstly, we
note its potential strength in facilitating students' development in thinking and in
communicating their thinking. As the works of Piaget and Vygotsky have shown,
thought and language are intimately related. Secondly, we see the potential strength of
collaborative reasoning in facilitating professional development when it is used for
collaborative action by the research team to accomplish the lesson/curriculum plans
remodelling, evaluation, and problem solving together.
Late in the programme, the children were given a writing task. It asked them to write
an ending for an incomplete story (the selected story was Jenny Wagner's John
Brown, Rose and the Midnight Cat). They had to answer four questions to explain in
writing: (i) why they chose to make their character (John Brown) think and act the
way they did in their story ending; (ii) whether the character's decision was good;
(iii) why they preferred to have the story ended in this way; and (iv) what message
they were trying to get across to their readers. The same task was given to two classes
of Year 6 students in a non-participating school, but of the same socio-economic-
status ranking in Auckland. Writing samples collected from both schools were
analysed and assessed in terms of their language skill and reasoning ability.
Table 1 Criteria and standards for assessing the writing of the final part (resolution) of this story
A. Continuity achieved: This writing shows a logical link to the storyline established
in the given part of the story because:
a. The writing of what JB decided to do or actually did is relevant to his
attempt to solve his problem.
b. The writing of whether J.B. eventually solved the problem by doing what he
did is clear.
B. Closure accomplished: This story ending makes the story complete by:
a. Writing clearly whether JB was able to move on from or would triumph over
his primary emotional conflict;
b. Showing there is a theme or the moral of the story;
c. Creating an effect (or mood) that helps make the story theme clear to the
reader.
C. Effective language use shown
a. Language use for explaining cause/effect or sequencing actions/events is
effective.
b. Language devices (e.g., imagery; action words; emotional words; direct
speech/dialogue) are effective in developing characters, actions and settings.
c. Grammar is correct.
d. Punctuation is correct.
e. Spelling is correct.
The task of writing an end to an incomplete story aimed to assess students' language
abilities and reasoning abilities in their creative/critical writing. For scoring these
abilities, a set of criteria and standards was created by the researcher (see Table 1).
Students' reasoning abilities were assessed by the extent to which they met five
criteria that were considered essential for achieving continuity of the story and for
bringing a closure to the story. Students' language abilities were assessed against
five criteria that were considered essential for effective writing. Students'
performance was scored according to a 4-point scale with 30 as the maximum total
score: 0 points for failing to meet the criterion; 1 point for only partially meeting the
criterion; 2 points for mainly rather than fully meeting the criterion; and 3 points
for fully meeting the criterion.
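The arithmetic of this scoring scheme can be sketched in a few lines of Python. This is an illustrative reconstruction under the stated assumptions (ten criteria, five for reasoning and five for language, each scored 0-3), not the researchers' actual scoring procedure; the names used are hypothetical.

```python
# Sketch of the writing-task scoring scheme described above (illustrative
# reconstruction, not the study's actual scoring code).
# Ten criteria, each scored on a 4-point scale:
# 0 = not met, 1 = partially met, 2 = mainly met, 3 = fully met.

REASONING_CRITERIA = 5    # continuity and closure criteria
LANGUAGE_CRITERIA = 5     # effective-language criteria
POINTS_PER_CRITERION = 3  # maximum on the 0-3 scale

MAX_TOTAL = (REASONING_CRITERIA + LANGUAGE_CRITERIA) * POINTS_PER_CRITERION

def total_score(criterion_scores):
    """Sum the ten per-criterion scores, validating the 0-3 range."""
    if len(criterion_scores) != REASONING_CRITERIA + LANGUAGE_CRITERIA:
        raise ValueError("expected ten criterion scores")
    for s in criterion_scores:
        if s not in (0, 1, 2, 3):
            raise ValueError("each criterion is scored 0, 1, 2, or 3")
    return sum(criterion_scores)

# A hypothetical script meeting every criterion fully earns the maximum of 30:
assert MAX_TOTAL == 30
assert total_score([3] * 10) == 30
```

This makes explicit where the maximum total of 30 comes from: ten criteria at three points each.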
Table 2 Criteria for Assessing the Quality of Students'
Analysis of JB's Reasoning in his Decision Making
The student's analysis of JB's reasoning is thorough if the student is able to show in his/her
answer as well as in his/her story the following:
1. JB noticed he had some problems to solve.
2. JB was able to identify the plausible causes of these problems from different points of
view.
3. JB was able to figure out the thinking, feelings, and wants of himself, of Rose, and
of the Midnight Cat.
4. In deciding what to do, JB was able to consider the well-being of Rose (and/or the
Midnight Cat) while he was greatly concerned with his own well-being.
5. JB was clear about what he wanted to achieve by making this decision or taking the
actions.
The set of four questions was designed to examine how students think as story writers. The
first question aimed to assess students' ability to analyse the main character's
behaviour in relation to his thinking, feelings, and desires. A set of five criteria was
created for scoring student performance (see Table 2). Again a 4-point scale was used.
The maximum total score on this measure was 15. The second question aimed to
assess students' ability to evaluate the quality of the main character's decision
making. Performance on this measure was assessed against a set of five criteria (see
Table 3) using a 4-point scale. The maximum total score on this measure was 15. The
last two questions aimed to cross check whether the student had a theme for his/her
story, and whether the student had a deliberate intention to create an effect (mood) for
the story.
An analysis of variance was used to compare student performance in the CR-CT
program and the standard program. As shown in Table 4, when students' writing
performance was assessed in terms of language abilities, children in the standard
program demonstrated significantly higher performance (F = 4.92, df = 1, p < 0.05).
However, when the criterion of reasoning abilities was included in the writing
assessment, students in the CR-CT program did significantly better than students in
the standard program (F = 43.90, df = 1, p < 0.05). In addition, students in the CR-CT
program also performed higher on the analytical reasoning measure (F = 15.73, df = 1,
p < 0.05) and higher on the evaluative reasoning measure (F = 42.61, df = 1, p < 0.05).
Students in the
CR-CT program generally were more able to explain the relationship between
thinking, emotions, and desires behind the character's decisions and actions. Further,
these students were more able to judge, as well as to justify their judgment of, whether
the main character's decision was a good one or a bad one. Overall, these students
were more able to think more deeply and better as they wrote the story.
* p < 0.05
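The between-group comparisons reported above are one-way ANOVAs. The sketch below shows how such an F statistic is computed by hand for two groups; the scores are invented for illustration and are not the study's data.

```python
# Minimal one-way ANOVA F statistic for two groups, computed by hand
# (illustrative only; the scores below are invented, not the study's data).

def f_statistic(group_a, group_b):
    """F = (between-group mean square) / (within-group mean square)."""
    all_scores = group_a + group_b
    grand_mean = sum(all_scores) / len(all_scores)
    groups = [group_a, group_b]
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares; df = k - 1 = 1 for two groups
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    df_between = len(groups) - 1
    # Within-group sum of squares; df = N - k
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical reasoning scores for the two programs:
cr_ct = [12, 14, 13, 15, 11, 14]
standard = [8, 9, 10, 7, 9, 8]
F = f_statistic(cr_ct, standard)
# F is 40.0 here, well above the alpha = 0.05 critical value F(1, 10) ≈ 4.96,
# so the group difference would be declared significant at p < 0.05.
assert F > 4.96
```

A significant F simply indicates that the variation between the two program means is large relative to the variation among students within each program.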
Conclusion
The purpose of this paper is to search for a model and a method that can help teachers
facilitate critical thinking in school children. Before presenting the conceptual model,
we clarified three core issues in educating for critical thinking by a critical review of
the literature, namely, Why do we need critical thinking? What are the definitive
characteristics of critical thinking? What should a critical thinking program aim to
teach? The conceptual model presented in this paper is entitled Collaborative
Reasoning: Critical Thinking Based Learning and Instruction. It is derived from our
critical review of the literature on critical thinking and critical thinking education, as
well as on theoretical and empirical studies on effective teaching and learning. It
espouses a macro educational goal to develop students into better thinkers, learners,
and persons. It adopts a pedagogical approach that enhances Paul et al.'s (1995)
version of an infusion approach by incorporating an enculturation approach using an
interface of collaborative reasoning discussion. The model also has three explicit
micro-goals to achieve: (1) developing critical thinking skills and dispositions,
language abilities, and content knowledge simultaneously so that they reinforce one
another and enhance deep learning; (2) establishing instructional coherence for
students so that they can make sense of and see connection and order among the
seeming randomness of various achievement objectives; and (3) communicating
clearly performance expectations and standards to students so as to enable students to
think critically about their thinking and learning as they assess their own and their
peers' performance. These three micro-goals serve to guide teachers' work in
remodelling their existing curriculum/lesson plans into ones in which critical thinking
principles and skills are incorporated into everyday classroom learning and
instruction.
The feasibility of translating the conceptual model into daily classroom practices has
been examined through a collaborative action research project. Different sources of empirical
evidence suggest that it is practically possible to implement the CR-CT program,
although support (in terms of time, effort, and commitment) from the students,
teachers, principals, and researchers is needed. Thanks to all this support,
implementation of the program in the present study was not only possible, but also
largely successful. Not only did students learn how to think more deeply and better, as
shown in their written work, but the teachers and the researcher involved in the
action research project also learned, in theory and in practice, how to enrich and enhance
the teaching and learning of critical thinking in language arts. We have realized that teaching
students to think critically should go beyond thinking critically about the quality of
the form of the text, to the quality of the meaning and reasoning of the author behind
the text. Further, the research team also learned that simple, picture stories available in
schools and public libraries are, in fact, valuable resources (i) for guiding children to
think critically about conflicting ideas, values, and beliefs that make people do what
they do and say what they say in the stories; and (ii) for guiding children to analyse and
evaluate the thinking, feelings, and desires behind the characters' actions or reactions
to problems and opportunities. The team believed that these learning experiences,
when internalized, will become students' inner voice to guide their own thinking
about their thinking, feelings, desires, actions, and reactions to problems and
opportunities.
The authors note that because of the exploratory nature of this collaborative action
research, the findings cannot be directly generalised to other school contexts.
However, we want to emphasize that the CR-CT conceptual model and method are
theoretically sound because first, it is grounded in a substantive conception of critical
thinking and critical thinking education. Second, its pedagogical approach is
predicated on the current educational theories of effective teaching and learning.
Third, it can be used to coherently guide student, curriculum, and professional
development. Nevertheless, the authors also note that developing critical thinking
abilities and dispositions takes time, which implies that a longitudinal study is needed
if stronger claims about the feasibility and the impact of translating the conceptual
model into everyday classroom practice are to be made.
References
Anderson, R. C., Chinn, C., Chang, J., Waggoner, M., & Yi, H. (1997). On the logical
integrity of children's arguments. Cognition and Instruction, 15(2), 135-167.
Anderson, R. C., Chinn, C., Waggoner, M., & Nguyen, K. (1998). Intellectually
stimulating story discussions. In J. Osborn & F. Lehr et al. (Eds.), Literacy for all:
Issues in teaching and learning (pp. 170-196). New York, NY: The Guilford Press.
Bacon, F. (1605). The advancement of learning. In J. Spedding & R.L. Ellis & D. D.
Heath (Eds.), Works (Vol. 3, pp. 1858-1874). London.
Bailin, S. (1998). Skills, generalizability and critical thinking. Paper presented at the
Twentieth World Congress of Philosophy, Boston, Massachusetts.
Bailin, S., Case, R., Coombs, J. R., & Daniels, L. B. (1999). Conceptualizing critical
thinking. Journal of Curriculum Studies, 31(3), 285-302.
Bailin, S., & Siegel, H. (2003). Critical thinking. In N. Blake & P. Smeyers & R.
Smith & P. Standish (Eds.), The Blackwell guide to the philosophy of education (pp.
181-193). Oxford, UK: Blackwell Publishing.
Barrow, R. (1991). The generic fallacy. Educational Philosophy and Theory, 23(1), 7-
17.
Baumfield, V., & Han, C. (1998). Let's be reasonable: Fostering the common good in
primary school classrooms in the UK and Singapore. Paper presented at the European
Conference for Educational Research, University of Ljubljana, Slovenia.
Brown, A. L., Ash, D., Rutherford, M., Nakagawa, K., Gordon, A., & Campione, J. C.
(1993). Distributed expertise in the classroom. In G. Salomon (Ed.), Distributed
cognitions: Psychological and educational considerations (pp. 188-228). New York:
Cambridge University Press.
Brown, A. L., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of
learning. Educational Researcher, 18(1), 32-42.
Cazden, C., Cope, B., Fairclough, N., Gee, J., Kalantzis, M., Kress, G., Luke, A.,
Luke, C., Michaels, S., & Nakata, M. (1996). A pedagogy of multiliteracies:
Designing social futures. Harvard Educational Review, 66(1), 60-92.
Costa, A. (1991). The school as a home for the mind. Palatine, IL: Skylight
Publishing.
Ennis, R. H. (1989). Critical thinking and subject specificity: Clarification and needed
research. Educational Researcher, 18(3), 4-10.
Foundation for Critical Thinking. (1999). Critical thinking: Basic theory and
instructional structures. Dillon Beach, CA: Author.
Fung, I. Y. Y., Townsend, M. A. R., & Parr, J. M. (2004). Teaching school children
to think critically in language arts: How and why? Paper presented at the British
Educational Research Association Annual Conference (16-18 September 2004),
UMIST, Manchester, UK.
Gutierrez, K. (1994). How talk, context, and script shape contexts for learning: A
cross comparison of journal writing. Linguistics and Education, 5, 335-365.
Hatch, T., & Gardner, H. (1993). Finding cognition in the classroom: an expanded
view of human intelligence. In G. Salomon (Ed.), Distributed cognitions:
Psychological and educational considerations. Cambridge: Cambridge University
Press.
Higgins, E. T. (2000). Social cognition: Learning about what matters in the social
world. European Journal of Social Psychology, 30, 3-39.
Hogan, D., & Tudge, J. R. H. (1999). Implications of Vygotsky's theory for peer
learning. In A. M. O'Donnell & A. King (Eds.), Cognitive perspectives on peer
learning (pp. 39-65). Mahwah, NJ: Erlbaum.
Lipman, M. (1988). Critical thinking: What can it be? Educational Leadership, 46(1),
38-43.
McPeck, J. E. (1990). Teaching critical thinking: Dialogue and dialectic. New York:
Routledge.
Mehan, H. (1979). "What time is it, Denise?": Asking known information questions in
classroom discourse. Theory into Practice, 18(4), 285-294.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ:
Prentice-Hall.
Norris, S. P., & Ennis, R. H. (1989). Evaluating critical thinking. Pacific Grove, CA:
Critical Thinking Press & Software.
Paul, R. W. (1982). Teaching critical thinking in the strong sense: A focus on self-
deception, world views, and a dialectical mode of analysis. Informal Logic
Newsletter, 4(2), 2-7.
Paul, R. W. (1990). Critical thinking: What every person needs to survive in a rapidly
changing world (2nd ed.). Rohnert Park, CA: Sonoma State University, Center for
Critical Thinking and Moral Critique.
Paul, R. W. (1992). Critical thinking: What, why, and how. New Directions for
Community Colleges, 20(1), 3-24.
Paul, R. W. (1995). Critical thinking: How to prepare students for a rapidly changing
world (3rd ed.). Dillon Beach, CA: Foundation for Critical Thinking.
Paul, R. W., Binker, A. J. A., & Weil, D. (1995). Critical thinking handbook: K-3rd
grades. A guide for remodelling lesson plans in language arts, social studies &
science. California, U.S.
Paul, R. W., & Elder, L. (2001). Critical thinking: Tools for taking charge of your
learning and your life. Upper Saddle River, NJ: Prentice Hall.
Paul, R. W., Elder, L., & Bartell, T. (1997). California teacher preparation for
instruction in critical thinking: Research findings and policy recommendations.
Sacramento, CA: California Commission on Teacher Credentialing.
Paul, R. W., & Nosich, G. M. (1992). Using intellectual standards to assess student
reasoning. Santa Rosa, CA: Foundation for Critical Thinking.
Perkins, D. N., Jay, E., & Tishman, S. (1993). Beyond abilities: A dispositional theory
of thinking. Merrill-Palmer Quarterly, 39(1), 1-21.
Salomon, G. (1994). Not just the individual: A new conception for a new educational
psychology. Keynote address presented at the 23rd International Congress of Applied
Psychology, Madrid, Spain.
Sarason, S. (1983). Schooling in America: Scapegoat and salvation. New York: Free
Press.
Siegel, H. (1999). What (good) are thinking dispositions? Educational Theory, 49(2),
207-221.
Simon, H. A. (1957). Models of man: Social and rational. New York: Wiley.
Splitter, L. J., & Sharp, A. M. (1995). Teaching for better thinking: The classroom
community of inquiry. Camberwell, Victoria: ACER Press (ED387454).
Swartz, R. J., & Perkins, D. N. (1989). Teaching thinking: Issues and approaches.
Cheltenham, Australia: Hawker Brownlow Education.
Tharp, R. G., & Gallimore, R. (1988). Rousing minds to life: Teaching, learning, and
schooling in social context. New York: Cambridge University Press.
Tishman, S., Jay, E., & Perkins, D. N. (1993). Teaching thinking dispositions: From
transmission to enculturation. Theory Into Practice, 32(3), 147-153.
Tishman, S., Perkins, D. N., & Jay, E. (1995). The thinking classroom: Learning and
teaching in a culture of thinking. Boston, MA: Allyn and Bacon.
Tudge, J. R. H., & Winterhoff, P. (1993). Can young children benefit from
collaborative problem solving? Tracing the effects of partner competence and
feedback. Social Development, 2, 242-259.
Waggoner, M., Chinn, C., Yi, H., & Anderson, R. (1995). Collaborative reasoning
about stories. Language Arts, 72, 582-589.
Wagner, J. (1977). John Brown, Rose and the Midnight Cat. Victoria, Australia:
Puffin Books.
Wegerif, R., Mercer, N., & Dawes, L. (1999). From social interaction to individual
reasoning: An empirical investigation of a possible socio-cultural model of cognitive
development. Learning and Instruction, 9(6), 493-516.
Weinstein, M. (2000). A frame for critical thinking. High School Magazine, 7(8), 40-
43.
Wells, G. (1999). Dialogic inquiry: Toward a sociocultural practice and theory of
education. Cambridge, UK: Cambridge University Press.
Note:
1. Corresponding author
Postal Address: RCITL, Faculty of Education, University of Auckland
Private Bag 92019, Auckland, NEW ZEALAND
Telephone: (64-9) 3737599 extn 83721
Fax: (64-9) 367-7191
E-mail: i.fung@auckland.ac.nz
One thing that was not discussed in this paper is the literature review. In
previous classes we spent more time talking about statistics than about the
literature review; that's why you'll see some fairly complex explanations of
the data analysis in this paper but no information on the literature review.
Your paper will contain information on the literature review and less
specific information on statistics.
Chris K.
Research Critique 1
This critique examines a study that compared leadership behaviors, measured
using the Revised Leadership for Sport Scale (RLSS), between male and female
coaches. The hypothesis was that male and female coaches would respond
differently to the RLSS, and that differences would also occur among coaching
levels: junior high, high school, and college.
The sample was nonrandom, including 162 coaches who were chosen on a volunteer
basis. Within the sample, 118 (73%) of the coaches were male, while 44 (27%)
were female. With regard to coaching level, 25 (15%) were junior high coaches,
99 (61%) high school, and 38 (24%) college. While this is a good sample size,
the problem lies with the distribution of the sample. The number of junior
high coaches, in particular, is rather low. A larger sample in all categories
would have aided the data analysis.
The instrument utilized was the Revised Leadership for Sport Scale (RLSS),
developed by Zhang, Jensen, and Mann in 1996. This scale is used to measure
six leadership behaviors. Each item was prefaced with "In coaching, I:", and a
Likert scale was then given for each statement (1 = never; 2 = seldom; 3 =
occasionally; 4 = often; 5 = always). The internal consistency for each
section was calculated: 0.84 for training and instruction; 0.66 for
democratic; 0.70 for autocratic; 0.52 for social support; and 0.78 for
positive feedback.
The RLSS uses a Likert scale (ordinal data), yet a MANOVA is most applicable
for normally distributed, quantitative data. The analysis showed there were no
significant differences between male and female coaches overall; when the six
leadership styles were examined separately, there was a significant
difference. Differences among the three levels of coaching (junior high, high
school, and college) with regard to leadership behavior were also examined:
breaking the six behaviors down and examining them individually, an ANOVA was
used to analyze the data. Again, because the data for the RLSS are ordinal, an
ANOVA is not the best analysis tool. The three coaching levels scored
differently on three of the six behaviors.
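Because Likert responses are ordinal, a rank-based test such as the Kruskal-Wallis H test is often suggested in place of a one-way ANOVA. A minimal sketch with made-up scores (not the study's data), assuming SciPy is available:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(42)

# Hypothetical 1-5 Likert scores for one behavior, grouped by coaching level
junior_high = rng.integers(1, 6, size=25)
high_school = rng.integers(1, 6, size=99)
college = rng.integers(1, 6, size=38)

# Kruskal-Wallis compares the groups on ranks rather than raw means,
# so it does not assume normally distributed interval data
h_stat, p_value = kruskal(junior_high, high_school, college)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```

A significant p-value here would indicate that at least one coaching level's responses differ in distribution, without leaning on the normality assumption the critique objects to.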
A MANOVA was again used to analyze the data for any interaction between gender
and coaching level. Again, a more appropriate statistical method could have
been chosen based on the nature of the data collected. Because the sample was
nonrandom and unevenly distributed, the results would not be generalizable
beyond the 162 participants in the study. There was no mention of informed
consent, although coaches were encouraged to respond honestly and
confidentiality was stressed so that the coaches might feel more at ease in
responding. Several potential intervening variables were present in the study,
yet were not addressed by the researchers. Coaching experience would greatly
affect the responses of the participants, yet this was not considered in the
study. The gender of the athletes coached could also matter: coaches of female
athletes, at all levels, may demonstrate more social support than coaches of
male athletes. The nature of the sport could also be critical. Certain
coaching styles are more applicable for individual sports (wrestling, track,
and tennis) than for team sports (football, soccer, and basketball). Some
schools attract better athletes and programs in a particular sport, while
others may not be able to field a full team; the recent success of a program
could also greatly influence responses. If the program has had several losing
seasons in a row, the attitude of the coach could be different from that of a
coach who has recently won a state title.
To account for differences in coaching experience, the researchers may have
been able to use a modified matching system when analyzing the data.
Administering the survey in a common setting could have reduced location
threat; coaches meet seasonally for clinics, and perhaps the survey could have
been administered there. While the study has merit, the methods need to be
re-evaluated. The power of the study