
Summer 2013, Master of Business Administration (MBA), Semester 3. MB0050: Research Methodology, 4 Credits (Book ID: B1700)

Q1. Explain the process of problem identification with an example.
Ans: Problem identification is not a simple task; one person's problem is another person's satisfactory state of affairs. A problem occurs when there is a difference between what should be and what is, between the ideal and the actual situation. A problem expresses the difference between the hoped-for and the actual situation, and is directly or indirectly related to the health of the population.

Before something can be said to be a problem, you have to: be aware of the discrepancy; be under pressure to take action; and have the resources necessary to take action.

How do you become aware that you have a discrepancy? You have to compare the current state of affairs with some standard. That standard can be past performance, previously set goals, or the performance of some other unit within the organization or in other organizations.

The problem must also put some type of pressure on you to act. Pressure might come from organizational policies, deadlines, financial crises, complaints, expectations from management, a proposed change or a continuing community demand.

Finally, you are not likely to describe something as a problem if you think you do not have the authority, budget, information or other resources necessary to act on it. When you believe you have a problem and are under pressure to act, but feel you have inadequate resources, you usually describe the situation as one in which unrealistic expectations are being placed on you.

Before attempting to solve a problem, you need to describe it in detail, so that you can understand how the problem affects the process being examined, such as the delivery of a health service to a community. For example, a clinic may notice that its average patient waiting time (the actual situation) has risen above its service target (the standard); that gap, together with patient complaints (the pressure to act), identifies the problem to be investigated.

Q2. The interview method involves a dialogue between the interviewee and the interviewer. Explain the interview method of data collection. What are the uses of this technique? What are the different types of interview?
Ans: Interview method of data collection: Interviews are a systematic way of talking and listening to people, and are another way to collect data from individuals through conversations. The researcher, or interviewer, often uses open questions. Data is collected from the interviewee; the researcher needs to remember that the interviewer's own views on the topic are not of importance.

Quantitative and qualitative data collection methods: Quantitative data collection methods rely on random sampling and on structured data collection instruments that fit diverse experiences into predetermined response categories. They produce results that are easy to summarize, compare and generalize. Quantitative research is concerned with testing hypotheses derived from theory and/or estimating the size of a phenomenon of interest. Depending on the research question, participants may be randomly assigned to different treatments. If this is not feasible, the researcher may collect data on participant and situational characteristics in order to statistically control for their influence on the dependent, or outcome, variable. If the intent is to generalize from the research participants to a larger population, the researcher will employ probability sampling to select participants. Typical quantitative data-gathering strategies include: experiments/clinical trials; observing and recording well-defined events (e.g., counting the number of patients waiting in emergency at specified times of the day); obtaining relevant data from management information systems; and administering surveys with closed-ended questions (e.g., face-to-face and telephone interviews, questionnaires, etc.).

Types of interviews:
1. Structured interview: Every single detail of the interview is decided in advance: the questions to be asked, the order in which they will be asked, the time given to each candidate, the information to be collected from each candidate, and so on. A structured interview is also called a standardised, patterned, directed or guided interview. Structured interviews are pre-planned, accurate and precise. All the interviews will be uniform (the same), so there will be consistency and minimum bias.
2. Unstructured interview: There are no specifications in the wording of the questions or their order. The interviewer forms questions as and when required. The structure of the interview is flexible.
3. Group interview: All the candidates, or small groups of candidates, are interviewed together, which saves the interviewer's time. A group interview is similar to a group discussion: a topic is given to the group, and they are asked to discuss it. The interviewer carefully watches the candidates and tries to find out which candidate influences others, who clarifies issues, who summarises the discussion, who speaks effectively, and so on. He tries to judge the behaviour of each candidate in a group situation.
4. Exit interview: When an employee leaves the company, he is interviewed either by his immediate superior or by the HRD manager. An exit interview is conducted to find out why the employee is leaving the company. Sometimes the employee may be asked to withdraw his resignation in return for some incentive. Exit interviews also create a good image of the company in the minds of departing employees, and they help the company to make proper HRD policies, create a favourable work environment, build employee loyalty and reduce labour turnover.
5. Depth interview: This is a semi-structured interview. The candidate has to give detailed information about his background, special interests, etc., as well as detailed information about his subject. A depth interview tries to find out whether the candidate is an expert in his subject. Here, the interviewer must have a good understanding of human behaviour.
6. Stress interview: The purpose of this interview is to find out how the candidate behaves in a stressful situation: whether he gets angry, confused, frightened or nervous, or remains cool. The candidate who keeps his cool in a stressful situation is selected for the stressful job. The interviewer deliberately creates a stressful situation during the interview, for example by asking the candidate rapid questions, criticising his answers, or interrupting him repeatedly.
7. Individual interview: This is a 'one-to-one' interview, a verbal and visual interaction between two people, the interviewer and the candidate, for a particular purpose. Its purpose is to match the candidate with the job. It is two-way communication.
8. Informal interview: An informal interview is an oral interview that can be arranged at any place. Different questions are asked to collect the required information from the candidate. No rigid procedure is followed; it is a friendly interview.
9. Formal interview: A formal interview is held in a more formal atmosphere. The interviewer asks pre-planned questions. A formal interview is also called a planned interview.
10. Panel interview: A panel is a selection or interview committee appointed for interviewing candidates. The panel may include three or five members, who ask the candidates questions about different aspects and give marks to each candidate. The final decision is taken by all members collectively, by rating the candidates. A panel interview is generally better than an interview by a single interviewer, because collective judgement is used to select suitable candidates.

Q3. A study of different sampling methods is necessary because the precision, accuracy and efficiency of the sample results depend on the method employed for selecting the sample. Explain the different types of probability and non-probability sampling designs.
Ans: Non-probability sampling techniques: Non-probability sampling is a sampling technique in which the samples are gathered in a process that does not give all individuals in the population equal chances of being selected.

Reliance on available subjects: Relying on available subjects, such as stopping people on a street corner as they pass by, is one method of sampling, although it is extremely risky and comes with many cautions. This method, sometimes referred to as a convenience sample, does not allow the researcher any control over the representativeness of the sample. It is only justified if the researcher wants to study the characteristics of people passing the street corner at a certain point in time, or if other sampling methods are not possible. The researcher must also take care not to use results from a convenience sample to generalize to a wider population.

Purposive or judgmental sample: A purposive, or judgmental, sample is one that is selected based on knowledge of a population and the purpose of the study. For example, if a researcher is studying the nature of school spirit as exhibited at a school pep rally, he or she might interview people who did not appear to be caught up in the emotions of the crowd, or students who did not attend the rally at all. In this case, the researcher is using a purposive sample because those being interviewed fit a specific purpose or description.

Snowball sample: A snowball sample is appropriate when the members of a population are difficult to locate, such as homeless individuals, migrant workers, or undocumented immigrants. The researcher collects data on the few members of the target population he or she can locate, then asks those individuals to provide the information needed to locate other members of that population whom they know. For example, if a researcher wishes to interview undocumented immigrants from Mexico, he or she might interview a few undocumented individuals that he or she knows or can locate, and would then rely on those subjects to help locate more undocumented individuals. This process continues until the researcher has all the interviews he or she needs, or until all contacts have been exhausted.

Quota sample: A quota sample is one in which units are selected on the basis of pre-specified characteristics, so that the total sample has the same distribution of characteristics as is assumed to exist in the population being studied. For example, if you were a researcher conducting a national quota sample, you might need to know what proportion of the population is male and what proportion is female, as well as what proportions of each gender fall into different age, race or ethnic, and educational categories. The researcher would then collect a sample with the same proportions as the national population.

Probability sampling techniques: Probability sampling is a sampling technique in which the samples are gathered in a process that gives all individuals in the population equal chances of being selected.

Simple random sample: The simple random sample is the basic sampling method assumed in statistical methods and computations. To collect a simple random sample, each unit of the target population is assigned a number; a set of random numbers is then generated, and the units bearing those numbers are included in the sample. For example, let's say you have a population of 1,000 people and you wish to choose a simple random sample of 50 people. First, each person is numbered 1 through 1,000.
Then you generate a list of 50 random numbers (typically with a computer program), and the individuals assigned those numbers are the ones you include in the sample.

Systematic sample: In a systematic sample, the elements of the population are put into a list, and then every kth element in the list is chosen (systematically) for inclusion in the sample. For example, if the population of study contained 2,000 students at a high school and the researcher wanted a sample of 100 students, the students would be put into list form and then every 20th student would be selected. To guard against any possible human bias in this method, the researcher should select the first individual at random; this is technically called a systematic sample with a random start.

Stratified sample: A stratified sample is a sampling technique in which the researcher divides the entire target population into different subgroups, or strata, and then randomly selects the final subjects proportionally from the different strata. This type of sampling is used when the researcher wants to highlight specific subgroups within the population. For example, to obtain a stratified sample of university students, the researcher would first organize the population by college class and then select appropriate numbers of freshmen, sophomores, juniors, and seniors. This ensures that the researcher has an adequate number of subjects from each class in the final sample.

Cluster sample: Cluster sampling may be used when it is either impossible or impractical to compile an exhaustive list of the elements that make up the target population. Usually, however, the population elements are already grouped into subpopulations, and lists of those subpopulations already exist or can be created. For example, suppose the target population in a study were church members in the United States. There is no list of all church members in the country, but the researcher could create a list of churches in the United States, choose a sample of churches, and then obtain lists of members from those churches.
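The simple random and systematic designs described above can be sketched in a few lines of Python. The population of 1,000 numbered people and the sample size of 50 follow the worked example in the text; the variable names are illustrative.

```python
import random

# Population of 1,000 people, each assigned a number 1..1000 (as in the example).
population = list(range(1, 1001))
sample_size = 50

# Simple random sample: generate 50 distinct random numbers and
# include the units bearing those numbers.
simple_random = random.sample(population, sample_size)

# Systematic sample with a random start: choose every kth element,
# where k is the sampling interval (population size / sample size).
k = len(population) // sample_size   # k = 20
start = random.randrange(k)          # random start guards against human bias
systematic = population[start::k]

print(len(simple_random), len(systematic))   # 50 50
```

Both draws return 50 units. Note that a systematic draw is only as good as the ordering of the list, which is why the random start matters.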

Q4. Differentiate between descriptive and inferential analysis of data. Explain, with examples, the various measures of central tendency.
Ans: Descriptive statistics: Let's say you've administered a survey to 35 people about their favorite ice cream flavors. You've got a bunch of data plugged into your spreadsheet, and now it is time to share the results with someone. You could hand over the spreadsheet and say "here's what I learned", or you could summarize the data with some charts and graphs that describe the data and communicate some conclusions. This would certainly be easier for someone to interpret than a big spreadsheet. There are hundreds of ways to visualize data, including data tables, pie charts, line charts, etc. That's the gist of descriptive statistics. Note that the analysis is limited to your data, and that you are not extrapolating any conclusions about a full population. Descriptive statistical reports generally include summary data tables, graphics, and text to explain what the charts and tables are showing. For example, you might supplement the data with the conclusion "vanilla is the most common favorite ice cream among those surveyed". Just because descriptive statistics don't draw conclusions about a population doesn't mean they are not valuable; there are thousands of expensive research reports that do nothing more than descriptive statistics. Descriptive statistics usually involve measures of central tendency (mean, median, mode) and measures of dispersion (variance, standard deviation, etc.).

Inferential statistics: Continuing with the ice cream flavor example, let's say you wanted to know the favorite ice cream flavors of everyone in the world. There are about 7 billion people in the world, and it would be impossible to ask every single person about their ice cream preferences.
Instead, you would try to sample a representative group of people and then extrapolate your sample results to the entire population. While this process isn't perfect and it is very difficult to avoid errors, it allows researchers to make well-reasoned inferences about the population in question. This is the idea behind inferential statistics. As you can imagine, getting a representative sample is really important. There are all sorts of sampling strategies, including random sampling. A true random sample means that everyone in the target population has an equal chance of being selected for the sample. Imagine how difficult that would be for the entire world population, since not everyone in the world is easily accessible by phone, email, etc. Another key component of proper sampling is the size of the sample: the larger the sample, the better, but there are trade-offs in time and money when it comes to obtaining a large sample. When it comes to inferential statistics, there are generally two forms: estimation statistics and hypothesis testing.

The three most commonly used measures of central tendency are the following.
(1) Mean: the sum of the values divided by the number of values, often called the average. Add all of the values together, then divide by the number of values. Example: the mean of 7, 12, 24, 20, 19 is (7 + 12 + 24 + 20 + 19) / 5 = 16.4.
(2) Median: the middle value when the values are arranged in order of size (or the average of the two middle values when there is an even number of values). Example: the median of 7, 12, 24, 20, 19 is 19, since the ordered list is 7, 12, 19, 20, 24.
(3) Mode: the value that occurs most frequently in the data. Example: the mode of 3, 5, 5, 7, 9 is 5, since 5 occurs twice and every other value occurs once.
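The measures of central tendency can be checked with Python's standard statistics module; the first list is the sample from the mean example in the text, and the second list (with a repeated value) is made-up data to illustrate the mode.

```python
from statistics import mean, median, mode

values = [7, 12, 24, 20, 19]      # the sample from the mean example
print(mean(values))               # (7 + 12 + 24 + 20 + 19) / 5 = 16.4
print(median(values))             # sorted: 7, 12, 19, 20, 24 -> middle value is 19

scores = [3, 5, 5, 7, 9]          # illustrative data with a repeated value
print(mode(scores))               # 5 occurs most often
```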

Q5. The chi-square test is widely used in research. Discuss the variations of the chi-square test. Under what conditions is this test applicable?
Ans: Chi-square test: Chi-square is a statistical test commonly used to compare observed data with the data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel's laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the goodness of fit between the observed and expected values. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors?

Applications of the chi-square test:
1. In business: Whatever the business analytics problem, the chi-square test is useful when you are trying to establish or invalidate that a relationship exists between two given business parameters that are categorical (or nominal) data types.
2. In biological statistics: Use the chi-square test for goodness of fit when you have one nominal variable with two or more values (such as red, pink and white flowers).

Conditions for the application of the chi-square test:
- The sample must be large; it should preferably contain 50 or more items, even though the number of cells or class intervals may be small. Aggregation and classification generally reduce the number of cells.
- The N individual items in the sample must have been drawn independently.
- The number of cells must be neither too small nor too large; it is preferable to have the class intervals or cells in the range of 5 to 20.
- The constraints to which the cell frequencies are subjected must be linear. The researcher can exercise his choice in formulating the constraints so as to satisfy the condition of linearity.
- The cell frequencies must not be small: no cell frequency should be less than five, and it is preferable to have 10 or more as the smallest cell frequency. This condition can easily be satisfied by clubbing several classes together and aggregating the corresponding frequencies in cases where they are less than five.
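The goodness-of-fit calculation for the Mendel-style example above (10 males expected out of 20 offspring, 8 observed) can be sketched with the Python standard library alone; for one degree of freedom the chi-square p-value has the closed form erfc(sqrt(x/2)).

```python
import math

# Example from the text: of 20 offspring, 10 were expected to be male,
# but 8 males (and therefore 12 females) were actually observed.
observed = [8, 12]
expected = [10, 10]

# Chi-square statistic: sum of (observed - expected)^2 / expected per category.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With two categories there is 1 degree of freedom, and the p-value
# reduces to the complementary error function erfc(sqrt(chi2 / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))

print(chi2)                  # 0.8
print(round(p_value, 2))     # about 0.37: the deviation is consistent with chance
```

Note that a sample of 20 is below the "preferably 50 or more items" condition stated above, so this tiny example only illustrates the arithmetic, not a properly sized test.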

Q6. What is analysis of variance? What are the assumptions of the technique? Give a few examples where the technique could be used.
Ans: Analysis of variance: Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as variation among and between groups). In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a Type I error. For this reason, ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. Typical examples of use include comparing mean crop yields under several fertiliser treatments, mean test scores of students taught by different methods, or mean recovery times of patients given different drugs.

Assumptions: ANOVA models are parametric, relying on assumptions about the distribution of the dependent variables (DVs) for each level of the independent variable(s) (IVs).

Textbook analysis using a normal distribution: The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses:
- Independence of observations: this is an assumption of the model that simplifies the statistical analysis.
- Normality: the distributions of the residuals are normal.
- Equality (or "homogeneity") of variances, called homoscedasticity: the variance of data in groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically and normally distributed for fixed-effects models, that is, that the errors (the ε's) are independent and normally distributed with mean zero and common variance.

Randomization-based analysis (see also: random assignment and randomization test): In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of unit-treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.

Unit-treatment additivity: In its simplest form, the assumption of unit-treatment additivity states that the observed response y(i,j) from experimental unit i when receiving treatment j can be written as the sum of the unit's response y(i) and the treatment effect t(j), that is,

    y(i,j) = y(i) + t(j).

The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect t(j) on every experimental unit.

The assumption of unit-treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many of its consequences can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit-treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.

Derived linear model

Kempthorne uses the randomization distribution and the assumption of unit-treatment additivity to produce a derived linear model, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence; on the contrary, the observations are dependent. The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.

Statistical models for observational data: However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.

Summary of assumptions: The normal-model-based ANOVA analysis assumes the independence, normality and homogeneity of the variances of the residuals.
The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both analyses require homoscedasticity: as an assumption for the normal-model analysis, and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA.[24] There are no necessary assumptions for ANOVA in its full generality, but the F-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest. Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy them. The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve it. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance.[25] Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.[16][26] According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication into addition.
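As a minimal illustration of the textbook (normal-model) approach, the one-way ANOVA F statistic can be computed by hand in Python. The three treatment groups and their values are made-up numbers, chosen only to show the between-group/within-group partition of the variance.

```python
from statistics import mean

# Illustrative data: yields under three fertiliser treatments (made-up numbers).
groups = {
    "A": [20, 21, 23, 22],
    "B": [25, 27, 26, 28],
    "C": [22, 24, 23, 25],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = mean(all_values)

# Between-group sum of squares: variation of group means around the grand mean.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())

# Within-group sum of squares: variation of observations around their group mean.
ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups.values())

df_between = len(groups) - 1               # k - 1 = 2
df_within = len(all_values) - len(groups)  # N - k = 9

# F is the ratio of the two mean squares; a large F suggests unequal group means.
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 2))   # 15.2
```

In a real analysis one would normally use a library routine (e.g. scipy.stats.f_oneway, if SciPy is available), which also returns the p-value; the hand computation above just makes the variance partition explicit.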