
Question 9: Explain the necessary components involved in a research proposal

Data collection is the methodological process of gathering information about a specific subject.
It’s crucial to ensure your data is complete during the collection phase and that it’s collected
legally and ethically. If not, your analysis won’t be accurate and could have far-reaching
consequences. Here are some data collection methods:
- Physical measurement
- Unobtrusive methods
- Interview: A widely used method of collecting data in business research is to interview
respondents to obtain information on an issue of interest. An interview is a guided,
purposeful conversation between two or more people. There are many different types of
interviews. Individual or group interviews may be unstructured or structured, and
conducted face to face, by telephone, or online.
Key considerations include the differences between unstructured and structured interviews,
important factors to be borne in mind while interviewing, the relative advantages and
disadvantages of face‐to‐face and telephone interviews, and the use of computer‐assisted
interviews.
- Observation: Researchers and managers might be interested in the way workers carry
out their jobs, in how consumers watch commercials, use products, or behave in waiting
areas, or in how a merchant bank trades and operates. A useful and natural technique to
collect data on actions and behavior is observation. Observation involves going into “the
field” – the factory, the supermarket, the waiting room, the office, or the trading room –
watching what workers, consumers, or day traders do, and describing, analyzing, and
interpreting what one has seen. Observation concerns the planned watching, recording,
analysis, and interpretation of behavior, actions, or events.
Observational methods are best suited for research requiring non‐self‐report descriptive
data; that is, when behavior is to be examined without directly asking the respondents
themselves. Observational data are rich and uncontaminated by self‐report bias.
Controlled observation occurs when observational research is carried out under carefully
arranged conditions. Uncontrolled observation is an observational technique that makes
no attempt to control, manipulate, or influence the situation.
- Questionnaires: Questionnaires are generally designed to collect large amounts of
quantitative data. They can be administered personally, distributed electronically, or
mailed to the respondents. Questionnaires are generally less expensive and time-consuming
than interviews and observation, but they also introduce a much larger chance
of nonresponse and nonresponse error.
Types of questionnaires: personally administered questionnaires / mail questionnaires /
electronic and online questionnaires.

Sampling design:
Probability / nonprobability: There are two major types of sampling design: probability and
nonprobability sampling. In probability sampling, the elements in the population have some known,
nonzero chance or probability of being selected as sample subjects. In nonprobability sampling, the
elements do not have a known or predetermined chance of being selected as subjects. Probability sampling
designs are used when the representativeness of the sample is of importance in the interests of wider
generalizability. When time or other factors, rather than generalizability, become critical, nonprobability
sampling is generally used. Each of these two major designs has different sampling strategies. Depending
on the extent of generalizability desired, the demands of time and other resources, and the purpose of the
study, different types of probability and nonprobability sampling design are chosen.
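
The practical difference between the two designs can be shown with a short Python sketch; the customer IDs and sample size below are hypothetical, chosen purely for illustration:

    import random

    # Hypothetical sampling frame of 1,000 customer IDs (illustration only).
    population = [f"customer_{i}" for i in range(1000)]
    sample_size = 50

    # Probability sampling (simple random sampling): every element has a known,
    # equal, nonzero chance of selection, which supports wider generalizability.
    probability_sample = random.sample(population, sample_size)

    # Nonprobability sampling (convenience sampling): we take whichever elements
    # are easiest to reach (here, simply the first 50), so the chance of selection
    # is unknown and representativeness cannot be assumed.
    convenience_sample = population[:sample_size]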
Determining the sample size
Is a sample size of 40 large enough? Or do you need a sample size of 75, 180, 384, or 500? Is a large
sample better than a small sample; that is, is it more representative? The decision about how large the
sample size should be can be a very difficult one. We can summarize the factors affecting decisions on
sample size as:
1. The research objective.
2. The extent of precision desired (the confidence interval).
3. The acceptable risk in predicting that level of precision (confidence level).
4. The amount of variability in the population itself.
5. The cost and time constraints.
6. In some cases, the size of the population itself.
Thus, how large your sample should be is a function of these six factors. A common way to combine the
precision, confidence, and variability factors is the standard sample-size formula sketched below.
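
The sketch below uses the standard formula for estimating a population proportion, n = z² · p(1 − p) / e²; the 95% confidence level, ±5% precision, and p = 0.5 are assumed values for illustration, not figures from the text:

    import math
    from statistics import NormalDist

    confidence_level = 0.95   # acceptable risk in predicting the level of precision
    margin_of_error = 0.05    # the extent of precision desired
    p = 0.5                   # assumed population proportion (maximum variability)

    # Two-sided z-value for the chosen confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)

    # n = z^2 * p * (1 - p) / e^2, rounded up to the next whole respondent
    n = math.ceil(z**2 * p * (1 - p) / margin_of_error**2)
    print(n)  # 385 for these inputs, close to the 384 often quoted in sample-size tables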

Data Analysis
- Feel for data: We can acquire a feel for the data by obtaining a visual summary or by
checking the central tendency and the dispersion of a variable. We can also get to know
our data by examining the relation between two variables.
Depending on the scale of our measures, the mode, median, or mean, and the semi‐
interquartile range, standard deviation, or variance will give us a good idea of how the
participants in our study
have reacted to the items in the questionnaire. These statistics can be easily obtained, and
will indicate whether the responses range satisfactorily over the scale. If the response to
each individual item in a scale does not have a good spread (range) and shows very little
variability, then the researcher may suspect that the particular question was probably not
properly worded. Biases, if any, may also be detected if the respondents have tended to
respond similarly to all items – that is, they have stuck to only certain points on the scale.
Getting a feel for the data is thus the necessary first step in all data analysis. Based on this
initial feel, further detailed analyses may be undertaken to test the goodness of the data.
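
A minimal sketch of getting such a feel for the data in Python, assuming pandas is available; the 5-point Likert responses below are made-up illustration data:

    import pandas as pd

    # Hypothetical responses to one questionnaire item on a 1-5 scale.
    responses = pd.Series([4, 5, 3, 4, 2, 5, 4, 3, 4, 1])

    # Central tendency
    print("mode:  ", responses.mode().tolist())
    print("median:", responses.median())
    print("mean:  ", responses.mean())

    # Dispersion: very little variability here could suggest a poorly worded item
    # or respondents sticking to only certain points on the scale.
    print("range:   ", responses.max() - responses.min())
    print("variance:", responses.var())
    print("std dev: ", responses.std())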

- Goodness of data: The reliability and validity of the measures can now be tested. The reliability of a
measure is established by testing for both consistency and stability. Consistency indicates how well the
items measuring a concept hang together as a set. Cronbach’s alpha is a reliability coefficient that indicates
how well the items in a set are positively correlated to one another. Cronbach’s alpha is computed in terms
of the average intercorrelations among the items measuring the concept.
The closer Cronbach’s alpha is to 1, the higher the internal consistency reliability. Another measure of
consistency reliability used in specific situations is the split‐half reliability coefficient. Since this reflects
the correlations between two halves of a set of items, the coefficients obtained will vary depending on how
the scale is split. Sometimes split‐half reliability is obtained to test for consistency when more than one
scale, dimension, or factor, is assessed. The items across each of the dimensions or factors are split, based
on some predetermined logic (Campbell, 1976). In almost every case, Cronbach’s alpha is an adequate test
of internal consistency reliability.
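
A small sketch of how Cronbach's alpha could be computed, using the standard item-variance form of the coefficient; the respondent-by-item scores are hypothetical:

    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: respondents x items matrix of scale scores
        k = item_scores.shape[1]                              # number of items
        item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
        total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical scores of five respondents on a four-item scale (illustration only).
    scores = np.array([
        [4, 5, 4, 5],
        [2, 3, 2, 3],
        [3, 3, 4, 3],
        [5, 4, 5, 4],
        [1, 2, 1, 2],
    ])
    print(round(cronbach_alpha(scores), 3))  # closer to 1 = higher internal consistency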

- Testing hypotheses: The purpose of hypothesis testing is to determine accurately whether the
null hypothesis can be rejected in favor of the alternate hypothesis. Based on the sample
data the researcher can reject the null hypothesis (and therefore accept the alternate
hypothesis) with a certain degree of confidence: there is always a risk that the inference
that is drawn about the population is incorrect. There are two kinds of errors (or two ways in
which a conclusion can be incorrect), classified as type I errors and type II errors. A type I error, also
referred to as alpha (α), is the probability of rejecting the null hypothesis when it is actually true.
A type II error, also referred to as beta (β), is the probability of failing to reject the null hypothesis given
that the alternate hypothesis is actually true; e.g., concluding, based on the data, that burnout does not affect
intention to leave when, in fact, it does. The probability of type II error is inversely related to the
probability of type I error: the smaller the risk of one of these types of error, the higher the risk of the other
type of error.
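
The mechanics of weighing a p-value against the chosen α (the accepted type I error risk) can be sketched as follows; the example reuses the burnout and intention-to-leave scenario above, but the scores are made up and scipy is assumed to be available:

    import numpy as np
    from scipy import stats

    # Hypothetical intention-to-leave scores for low-burnout and high-burnout employees.
    low_burnout = np.array([2.1, 2.5, 1.8, 2.9, 2.3, 2.0, 2.6, 2.4])
    high_burnout = np.array([3.4, 3.9, 2.8, 4.1, 3.6, 3.2, 3.8, 3.5])

    alpha = 0.05  # accepted probability of a type I error (rejecting a true null)

    # H0: burnout does not affect intention to leave (equal group means).
    # H1: burnout does affect intention to leave (different group means).
    t_stat, p_value = stats.ttest_ind(low_burnout, high_burnout)

    if p_value < alpha:
        print(f"p = {p_value:.4f} < {alpha}: reject H0 in favor of H1")
    else:
        print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0 (risking a type II error)")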
