RESEARCH METHODOLOGY ASSIGNMENT SET 1

1. Explain the different types of research.
ANS: Although any typology of research is inevitably arbitrary, research may be classified crudely according to its major intent or its methods. According to intent, research may be classified as follows.

Pure Research: It is undertaken for the sake of knowledge without any intention to apply it in practice, e.g., Einstein's theory of relativity, Newton's contributions, Galileo's contributions, etc. It is also known as basic or fundamental research. It is undertaken out of intellectual curiosity or inquisitiveness. It is not necessarily problem-oriented. It aims at the extension of knowledge. It may lead either to the discovery of a new theory or to the refinement of an existing theory. It lays the foundation for applied research. It offers solutions to many practical problems. It helps to find the critical factors in a practical problem. It develops many alternative solutions and thus enables us to choose the best solution.

Applied Research: It is carried on to find a solution to a real-life problem requiring an action or policy decision. It is thus problem-oriented and action-directed. It seeks an immediate and practical result, e.g., marketing research carried on for developing a new market or for studying the post-purchase experience of customers. Though the immediate purpose of applied research is to find solutions to a practical problem, it may incidentally contribute to the development of theoretical knowledge by leading to the discovery of new facts, the testing of theory or conceptual clarity.

It can put theory to the test. It may aid in conceptual clarification. It may integrate previously existing theories.

Exploratory Research: It is also known as formulative research. It is a preliminary study of an unfamiliar problem about which the researcher has little or no knowledge. It is ill-structured and much less focused on predetermined objectives. It usually takes the form of a pilot study. The purpose of this research may be to generate new ideas, to increase the researcher's familiarity with the problem, to make a precise formulation of the problem, to gather information for clarifying concepts, or to determine whether it is feasible to attempt the study. Katz conceptualizes two levels of exploratory studies: "At the first level is the discovery of the significant variable in the situations; at the second, the discovery of relationships between variables."

Descriptive Study: It is a fact-finding investigation with adequate interpretation. It is the simplest type of research. It is more specific than exploratory research. It aims at identifying the various characteristics of a community, institution or problem under study, and also at a classification of the range of elements comprising the subject matter of the study. It contributes to the development of a young science and is useful in verifying focal concepts through empirical observation. It can highlight important methodological aspects of data collection and interpretation. The information obtained may be useful for prediction about areas of social life outside the boundaries of the research. Descriptive studies are valuable in providing the facts needed for planning social action programmes.

Diagnostic Study: It is similar to a descriptive study but with a different focus. It is directed towards discovering what is happening, why it is happening and what can be done about it. It aims at identifying the causes of a problem and the possible solutions for it. It may also be concerned with discovering and testing whether certain variables are associated. This type of research requires prior knowledge of the problem, its thorough formulation, clear-cut definition of the given population, adequate methods for collecting accurate information, precise measurement of variables, statistical analysis and tests of significance.

Evaluation Studies: An evaluation study is a type of applied research. It is made for assessing the effectiveness of social or economic programmes implemented, or for assessing the impact of developmental projects on the development of the project area. It is thus directed to assess or appraise the quality and quantity of an activity and its performance, and to specify its attributes and the conditions required for its success. It is concerned with causal relationships and is more actively guided by hypotheses. It is also concerned with change over time.

Action Research: It is a type of evaluation study. It is a concurrent evaluation study of an action programme launched for solving a problem or for improving an existing situation. It includes the following major steps: diagnosis, sharing of diagnostic information, planning, developing a change programme, initiation of organizational change, implementation of the participation and communication process, and post-experimental evaluation.

According to the methods of study, research may be classified as follows.

1. Experimental Research: It is designed to assess the effects of particular variables on a phenomenon by keeping the other variables constant or controlled. It aims at determining whether and in what manner variables are related to each other.
2. Analytical Study: It is a system of procedures and techniques of analysis applied to quantitative data. It may consist of a system of mathematical models or statistical techniques applicable to numerical data. Hence it is also known as the statistical method. It aims at testing hypotheses and at specifying and interpreting relationships.
3. Historical Research: It is a study of past records and other information sources with a view to reconstructing the origin and development of an institution, a movement or a system and discovering the trends in the past. It is descriptive in nature. It is a difficult task; it must often depend upon inference and logical analysis of recorded data and indirect evidence rather than upon direct observation.
4. Survey: It is a fact-finding study. It is a method of research involving the collection of data directly from a population or a sample thereof at a particular time. Its purpose is to provide information, to explain phenomena and to make comparisons; a survey concerned with cause-and-effect relationships can be useful for making predictions.

3. Discuss the criteria of a good research problem.

ANS: Horton and Hunt have given the following characteristics of scientific research:
1. Verifiable evidence: That is, factual observations which other observers can see and check.
2. Accuracy: That is, describing what really exists. It means truth or correctness of a statement, or describing things exactly as they are, and avoiding colourful literature and vague meanings.
3. Precision: That is, making it as exact as necessary, or giving exact numbers or measurements, and avoiding jumping to unwarranted conclusions either by exaggeration or fantasizing.
4. Systematization: That is, attempting to find all the relevant data, or collecting data in a systematic and organized way so that the conclusions drawn are reliable.
5. Recording: That is, jotting down complete details as quickly as possible. Since human memory is fallible, all data collected are recorded. Data based on casual recollections are generally incomplete and give unreliable judgments and conclusions.
6. Objectivity: That is, being free from all biases and vested interests. It means observation is unaffected by the observer's values, beliefs and preferences to the extent possible, and he is able to see and accept facts as they are, not as he might wish them to be.
7. Controlling conditions: That is, controlling all variables except one and then attempting to examine what happens when that variable is varied. This is the basic technique in all scientific experimentation, allowing one variable to vary while holding all other variables constant.
8. Training investigators: That is, imparting the necessary knowledge to investigators to make them understand what to look for, how to interpret it, and how to avoid inaccurate data collection.

Describe the procedure used to test a hypothesis.
ANS: To test a hypothesis means to tell (on the basis of the data the researcher has collected) whether or not the hypothesis seems to be valid. In hypothesis testing the main question is: whether to accept the null hypothesis or not to accept the null hypothesis. The procedure for hypothesis testing refers to all those steps that we undertake for making a choice between the two actions, i.e., rejection and acceptance of a null hypothesis.

The various steps involved in hypothesis testing are stated below.

Making a Formal Statement: This step consists in making a formal statement of the null hypothesis (H0) and also of the alternative hypothesis (Ha). This means that the hypotheses should be clearly stated, considering the nature of the research problem. For instance, Mr. Mohan of the Civil Engineering Department wants to test the load-bearing capacity of an old bridge, which must be more than 10 tons. In that case he can state his hypotheses as under:
Null hypothesis H0: µ = 10 tons
Alternative hypothesis Ha: µ > 10 tons
Take another example. The average score in an aptitude test administered at the national level is 80. To evaluate a state's education system, the average score of 100 of the state's students selected on a random basis was 75. The state wants to know if there is a significant difference between the local scores and the national scores. In such a situation the hypotheses may be stated as under:
Null hypothesis H0: µ = 80
Alternative hypothesis Ha: µ ≠ 80
The formulation of hypotheses is an important step which must be accomplished with due care in accordance with the object and nature of the problem under consideration. It also indicates whether we should use a one-tailed test or a two-tailed test. If Ha is of the type "greater than", we use a one-tailed test, but when Ha is of the type "whether greater or smaller", then we use a two-tailed test.

Selecting a Significance Level: The hypotheses are tested on a pre-determined level of significance, and as such the same should be specified. Generally, in practice, either the 5% level or the 1% level is adopted for the purpose. The factors that affect the level of significance are:
• the magnitude of the difference between the sample means;
• the size of the sample;
• the variability of measurements within samples; and
• whether the hypothesis is directional or non-directional (a directional hypothesis is one which predicts the direction of the difference between, say, means).
In brief, the level of significance must be adequate in the context of the purpose and nature of the enquiry.

Deciding the Distribution to Use: After deciding the level of significance, the next step in hypothesis testing is to determine the appropriate sampling distribution. The choice generally remains between the normal distribution and the t distribution. The rules for selecting the correct distribution are similar to those which we have stated earlier in the context of estimation.

Selecting a Random Sample and Computing an Appropriate Value: Another step is to select a random sample (or samples) and compute an appropriate value from the sample data concerning the test statistic, utilizing the relevant distribution. In other words, draw a sample to furnish empirical data.

Calculation of the Probability: One has then to calculate the probability that the sample result would diverge as widely as it has from expectations, if the null hypothesis were in fact true.

Comparing the Probability: Yet another step consists in comparing the probability thus calculated with the specified value for α, the significance level. If the calculated probability is equal to or smaller than the α value in the case of a one-tailed test (and α/2 in the case of a two-tailed test), then reject the null hypothesis (i.e., accept the alternative hypothesis); but if the calculated probability is greater, then accept the null hypothesis. In case we reject H0, we run the risk of (at most the level of significance) committing a Type I error; but if we accept H0, we run some risk of committing a Type II error.

Flow Diagram for Testing a Hypothesis: (the diagram is not reproduced here; it contrasts the risk of committing a Type I error with that of committing a Type II error).
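The aptitude-score example above can be worked through end to end. The following is a minimal Python sketch of the two-tailed test; the population standard deviation of 12 and the use of a z test at the 5% level are assumptions made purely for illustration, since the text does not give them.

```python
# Hypothetical worked example of the hypothesis-testing steps above.
# Assumed values: H0 mean 80, sample mean 75, n = 100,
# population standard deviation 12 (not given in the text), alpha = 0.05.
import math
from statistics import NormalDist

mu_0 = 80.0        # value stated in the null hypothesis H0
x_bar = 75.0       # observed sample mean
n = 100            # sample size
sigma = 12.0       # assumed population standard deviation
alpha = 0.05       # chosen level of significance

# Compute the test statistic from the sampling distribution of the mean.
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Calculate the probability of a result at least this extreme (two-tailed).
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Compare the probability with alpha and decide.
if p_value <= alpha:
    decision = "reject H0 (accept Ha: mu != 80)"
else:
    decision = "accept H0"

print(f"z = {z:.2f}, p-value = {p_value:.4f}, decision: {decision}")
```

With these assumed numbers the z value is about -4.17, the p-value is far below 0.05, and H0 would be rejected.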

4. Write a note on experimental design.
ANS: Principles of Experimental Designs
Professor Fisher has enumerated three principles of experimental designs:
1. The principle of replication: The experiment should be repeated more than once. Thus, each treatment is applied in many experimental units instead of one. By doing so, the statistical accuracy of the experiment is increased. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts and grow one variety in one part and the other variety in the other part. We can compare the yield of the two parts and draw a conclusion on that basis. But if we are to apply the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the data on the yield of the two varieties and draw a conclusion by comparing the same. The result so obtained will be more reliable in comparison to the conclusion we draw without applying the principle of replication. The entire experiment can even be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. However, it should be remembered that replication is introduced in order to increase the precision of a study, that is to say, to increase the accuracy with which the main effects and interactions can be estimated.
2. The principle of randomization: It provides protection, when we conduct an experiment, against the effects of extraneous factors. In other words, this principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of "chance". For instance, if we grow one variety of rice, say, in the first half of the parts of a field and the other variety is grown in the other half, then it is just possible that the soil fertility may be different in the first half in comparison to the other half. If this is so, our results would not be realistic. In such a situation, we may assign the variety of rice to be grown in different parts of the field on the basis of some random sampling technique, i.e., we may apply the randomization principle and protect ourselves against the effects of extraneous factors.

3. The principle of local control: It is another important principle of experimental designs. Under it, the extraneous factor, the known source of variability, is made to vary deliberately over as wide a range as necessary, and this needs to be done in such a way that the variability it causes can be measured and hence eliminated from the experimental error. In other words, according to the principle of local control, we first divide the field into several homogeneous parts, known as blocks, and then each such block is divided into parts equal to the number of treatments. The treatments are then randomly assigned to these parts of a block. In general, blocks are the levels at which we hold an extraneous factor fixed, so that we can measure its contribution to the variability of the data by means of a two-way analysis of variance. This means that we should plan the experiment in a manner that allows a two-way analysis of variance, in which the total variability of the data is divided into three components attributed to treatments, the extraneous factor and experimental error. In brief, through the principle of local control we can eliminate the variability due to extraneous factors from the experimental error, and thus we can have a better estimate of the experimental error.
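The three principles can be made concrete with a small sketch. In the Python illustration below, the number of blocks, the plots per block and the variety names are hypothetical; the point is only that each variety is replicated, that varieties are assigned to plots at random, and that the randomization is carried out separately within each homogeneous block (local control).

```python
# Illustrative sketch of replication, randomization and local control
# for the two-varieties-of-rice example. All numbers are hypothetical.
import random

random.seed(42)                      # reproducible illustration
varieties = ["Variety A", "Variety B"]
n_blocks = 4                         # homogeneous blocks (local control)
plots_per_block = len(varieties)     # one plot per treatment in each block

layout = {}
for block in range(1, n_blocks + 1):
    # Each treatment appears in every block (replication), and the order
    # of plots within the block is shuffled (randomization).
    treatments = varieties[:]
    random.shuffle(treatments)
    layout[f"Block {block}"] = {
        f"Plot {plot}": treatments[plot - 1]
        for plot in range(1, plots_per_block + 1)
    }

for block, plots in layout.items():
    print(block, plots)
```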

Important Experimental Designs
Experimental design refers to the framework or structure of an experiment, and as such there are several experimental designs. We can classify experimental designs into two broad categories, viz., informal experimental designs and formal experimental designs. Informal experimental designs are those designs that normally use a less sophisticated form of analysis based on differences in magnitudes, whereas formal experimental designs offer relatively more control and use precise statistical procedures for analysis.

Informal experimental designs (a small computational sketch of the three calculations follows this list):
• Before-and-after without control design: In such a design, a single test group or area is selected and the dependent variable is measured before the introduction of the treatment. The treatment is then introduced and the dependent variable is measured again after the treatment has been introduced. The effect of the treatment would be equal to the level of the phenomenon after the treatment minus the level of the phenomenon before the treatment.
• After-only with control design: In this design, two groups or areas (test area and control area) are selected and the treatment is introduced into the test area only. The dependent variable is then measured in both the areas at the same time. Treatment impact is assessed by subtracting the value of the dependent variable in the control area from its value in the test area.
• Before-and-after with control design: In this design, two areas are selected and the dependent variable is measured in both the areas for an identical time period before the treatment. The treatment is then introduced into the test area only, and the dependent variable is measured in both for an identical time period after the introduction of the treatment. The treatment effect is determined by subtracting the change in the dependent variable in the control area from the change in the dependent variable in the test area.
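As noted above, here is a small sketch of how the treatment effect is computed under each of the three informal designs. All measurement values are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical measurements illustrating the treatment-effect calculation
# for each informal experimental design described above.

# Before-and-after without control: one test area, measured twice.
before, after = 40.0, 55.0
effect_without_control = after - before                      # 15.0

# After-only with control: test and control areas measured once, after.
test_after, control_after = 55.0, 48.0
effect_after_only = test_after - control_after               # 7.0

# Before-and-after with control: both areas measured before and after.
test_before, test_after = 40.0, 55.0
control_before, control_after = 42.0, 48.0
effect_with_control = (test_after - test_before) - (control_after - control_before)  # 9.0

print(effect_without_control, effect_after_only, effect_with_control)
```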

Formal experimental designs:
1. Completely randomized design (CR design): It involves only two principles, viz., the principle of replication and the principle of randomization. It is generally used when the experimental areas happen to be homogeneous. Technically, when all the variations due to uncontrolled extraneous factors are included under the heading of chance variation, we refer to the design of the experiment as a CR design.
2. Randomized block design (RB design): It is an improvement over the CR design. In the RB design, the principle of local control can be applied along with the other two principles.
3. Latin square design (LS design): It is used in agricultural research. The treatments in an LS design are so allocated among the plots that no treatment occurs more than once in any row or column.
4. Factorial designs: These are used in experiments where the effects of varying more than one factor are to be determined. They are especially important in several economic and social phenomena where usually a large number of factors affect a particular problem.

That is. That is. in case of an individual being the member of a family. ii) The organic motto of action must be socially relevant. the case drawn out from its total context for the purposes of study must be considered a member of the particular cultural group or community. standards and their shared way of life. the social meaning of behaviour must be taken into consideration. The scrutiny of the life histories of persons must be done with a view to identify thee community values.Elaborate the ways of making a case study effective. ? Let us discuss the criteria for evaluating the adequacy of the case history or life history which is of central importance for case study. gradually blossoms forth into a social person. the role of family in shaping his behaviour must never be overlooked. iv) The specific method of elaboration of organic material onto social behaviour must be clearly shown. the man. the action of the individual cases must be viewed as a series of reactions to social stimuli or situation. That is.4. . iii) The strategic role of the family group in transmitting the culture must be recognized. John Dollard has proposed seven criteria for evaluating such adequacy as follows: i) The subject must be viewed as a specimen in a cultural series. In other words. are especially fruitful. That is case histories that portray in detail how basically a biological organism.

v) The continuous related character of experience from childhood through adulthood must be stressed. In other words, the life history must be a configuration depicting the inter-relationships between the person's various experiences.
vi) The social situation must be carefully and continuously specified as a factor. One of the important criteria for the life history is that a person's life must be shown as unfolding itself in the context of, and partly owing to, specific social situations.
vii) The life history material itself must be organised according to some conceptual framework; this in turn would facilitate generalizations at a higher level.

5. What is non-probability sampling? Explain its types with examples.
ANS: Non-probability sampling or non-random sampling is not based on the theory of probability. This sampling does not provide a chance of selection to each population element.
Advantages: The only merits of this type of sampling are simplicity, convenience and low cost.
Disadvantages: The demerits are that it does not ensure a selection chance to each population unit; the selection probability is unknown; it suffers from sampling bias, which will distort the results; and the selected sample may not be a representative one. The reasons for using this sampling are when there is no other feasible alternative due to the non-availability of a list of the population, or when the study does not aim at generalizing the findings to the population.

RESEARCH METHODOLOGY ASSIGNMENT SET 2

5. What are the advantages and disadvantages of secondary data?

ANS: Advantages of Secondary Data
Secondary sources have some advantages:
1. Secondary data, if available, can be secured quickly and cheaply. Once their source documents and reports are located, collection of data is just a matter of desk work. Even the tediousness of copying the data from the source can now be avoided, thanks to Xeroxing facilities.
2. Wider geographical areas and longer reference periods may be covered without much cost. Thus, the use of secondary data extends the researcher's space and time reach.
3. The use of secondary data broadens the data base from which scientific generalizations can be made.
4. Environmental and cultural settings are required for the study.
5. The use of secondary data enables a researcher to verify findings based on primary data. It readily meets the need for additional empirical support. The researcher need not wait for the time when additional primary data can be collected.

Disadvantages of Secondary Data
The use of secondary data has its own limitations:

1. The most important limitation is that the available data may not meet our specific needs. The definitions adopted by those who collected the data may be different; units of measure may not match; and time periods may also be different.
2. The available data may not be as accurate as desired. To assess their accuracy we need to know how the data were collected.
3. The secondary data are not up to date and become obsolete when they appear in print, because of the time lag in producing them. For example, population census data are published two or three years after compilation, and no new figures will be available for another ten years.
4. Finally, information about the whereabouts of sources may not be available to all social scientists. Even if the location of the source is known, accessibility depends primarily on proximity. For example, most of the unpublished official records and compilations are located in the capital city, and they are not within the easy reach of researchers based in far-off places.

6. Explain the prerequisites and advantages of observation.

ANS: The prerequisites of observation consist of:
• Observations must be done under conditions which will permit accurate results. The observer must be in a vantage point to see clearly the objects to be observed. The distance and the light must be satisfactory. The mechanical devices used must be in good working condition and operated by skilled persons.
• Observation must cover a sufficient number of representative samples of the cases.
• Recording should be accurate and complete.
• The accuracy and completeness of recorded results must be checked. A certain number of cases can be observed again by another observer or by another set of mechanical devices, as the case may be. If it is feasible, two separate observers and sets of instruments may be used in all or some of the original observations. The results could then be compared to determine their accuracy and completeness.

Advantages of observation
Observation has certain advantages:
1. The main virtue of observation is its directness: it makes it possible to study behaviour as it occurs. The researcher need not ask people about their behaviour and interactions; he can simply watch what they do and say.
2. Data collected by observation may describe the observed phenomena as they occur in their natural settings.

Other methods introduce elements of artificiality into the researched situation; for instance, in an interview, the respondent may not behave in a natural way. There is no such artificiality in observational studies, especially when the observed persons are not aware of their being observed.
3. Observation is more suitable for studying subjects who are unable to articulate meaningfully, e.g., studies of children, tribals, animals, birds, etc.
4. Observation improves the opportunities for analyzing the contextual background of behaviour. Furthermore, verbal reports can be validated and compared with behaviour through observation. The validity of what men of position and authority say can be verified by observing what they actually do.
5. Observation is less demanding of the subjects and has less biasing effect on their conduct than questioning.
6. It is easier to conduct disguised observation studies than disguised questioning.
7. Observation makes it possible to capture the whole event as it occurs. For example, only observation can provide an insight into all the aspects of the process of negotiation between union and management representatives.
8. Mechanical devices may be used for recording data in order to secure more accurate data and also for making continuous observations over longer periods.

7. Discuss the stages involved in data collection.
ANS: Checking for Analysis
In the data preparation step, the data are prepared in a data format which allows the analyst to use modern analysis software such as SAS or SPSS. When the task is complex, involving several types of instruments being collected for the same research question, the procedures for drawing the data structure would involve a series of steps. In several intermediate steps, the heterogeneous data structures of the individual data sets can be harmonized to a common standard, and the separate data sets are then integrated into a single data set. The major criterion in this is to define the data structure. A data structure is a dynamic collection of related variables and can be conveniently represented as a graph where nodes are labelled by variables. The data structure also defines the stages of the preliminary relationships between variables/groups that have been pre-planned by the researcher. A sample structure could be a linear structure, in which one variable leads to the other and finally to the resultant end variable. Most data structures can be graphically presented to give clarity as to the framed research hypothesis. The identification of the nodal points and the relationships among the nodes could sometimes be a more complex task than estimated. However, the clear definition of such data structures would help in the further processing of data.
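As a rough illustration of the kind of linear data structure described above, the following sketch represents variables as nodes of a graph and the pre-planned relationships as directed edges. The variable names are hypothetical placeholders, not taken from the text.

```python
# A minimal sketch of a linear data structure: nodes are variables,
# directed edges are the pre-planned relationships between them.
# Variable names are hypothetical.
data_structure = {
    "advertising_exposure": ["brand_awareness"],
    "brand_awareness":      ["purchase_intention"],
    "purchase_intention":   ["purchase"],        # leads to the end variable
    "purchase":             [],                  # resultant end variable
}

def end_variables(graph):
    """Nodes with no outgoing edges, i.e. the resultant end variables."""
    return [node for node, successors in graph.items() if not successors]

print(end_variables(data_structure))   # ['purchase']
```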

Editing
The next step in the processing of data is the editing of the data instruments. Editing is a process of checking to detect and correct errors and omissions. Data editing happens at two stages: one at the time of recording of the data, and the second at the time of analysis of the data.

Data Editing at the Time of Recording of Data
Document editing and testing of the data at the time of data recording is done keeping the following questions in mind:
• Do the filters agree or are the data inconsistent?
• Have 'missing values' been set to values which are the same for all research questions?
• Have variable descriptions been specified?
• Have labels for variable names and value labels been defined and written?
All editing and cleaning steps are documented, so that the redefinition of variables or later analytical modification requirements can be easily incorporated into the data sets.

Data Editing at the Time of Analysis of Data
Data editing is also a requisite before the analysis of data is carried out. This ensures that the data are complete in all respects for subjecting them to further analysis. Some of the usual checklist questions that a researcher can use for editing data sets before analysis would be:
1. Is the coding frame complete?
2. Is the documentary material sufficient for the methodological description of the study?
3. Is the storage medium readable and reliable?

4. Has the correct data set been framed?
5. Is the number of cases correct?
6. Are there differences between the questionnaire, the coding frame and the data?
7. Are there undefined and so-called "wild codes"?
8. Has the first counting of the data been compared with the original documents of the researcher?

The editing step checks for the completeness, accuracy and uniformity of the data as created by the researcher.

Completeness: The first step of editing is to check whether there is an answer to all the questions/variables set out in the data set. If there were any omissions, the researcher sometimes would be able to deduce the correct answer from other related data on the same instrument. For example, the approximate family income can be inferred from other answers to probes such as occupation of family members, sources of income, approximate spending, and saving and borrowing habits of family members, etc. If the information is vital and has been found to be incomplete, then the researcher can take the step of contacting the respondent personally again and soliciting the requisite data. If this is possible, the data set has to be rewritten on the basis of the new information. If none of these steps can be resorted to, the marking of the data as "missing" must be resorted to.

Accuracy: Apart from checking for omissions, the accuracy of each recorded answer should be checked. A random check process can be applied to trace the errors at this step. Consistency in responses can also be checked at this step; the cross-verification of a few related responses would help in checking for consistency in responses. The reliability of the data set will heavily depend on this step of error correction. While clear inconsistencies should be rectified in the data sets, responses whose accuracy cannot be established should be dropped from the data sets.
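A minimal sketch of how the completeness and accuracy checks just described, including the check for undefined "wild codes" from the checklist, might be automated is given below. The field names, the coding frame and the records are hypothetical.

```python
# Sketch of simple editing checks: completeness (missing answers) and
# a "wild code" check against the coding frame. All values are hypothetical.
coding_frame = {"gender": {1, 2}, "employment": {1, 2, 3, 9}}  # 9 = others

records = [
    {"id": 1, "gender": 1, "employment": 3},
    {"id": 2, "gender": None, "employment": 2},   # missing value
    {"id": 3, "gender": 2, "employment": 7},      # undefined (wild) code
]

for record in records:
    for field, valid_codes in coding_frame.items():
        value = record[field]
        if value is None:
            print(f"Record {record['id']}: '{field}' is missing")
        elif value not in valid_codes:
            print(f"Record {record['id']}: wild code {value} in '{field}'")
```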

Uniformity: In editing data sets, another keen lookout should be for any lack of uniformity in the interpretation of questions and instructions by the data recorders. For instance, the responses towards a specific feeling could have been queried from a positive as well as a negative angle. While interpreting the answers, care should be taken to record the answer as a "positive question" response or as a "negative question" response with all uniformity, and to check for consistency in coding throughout the questionnaire/interview schedule response/data set.

The final point in the editing of a data set is to maintain a log of all corrections that have been carried out at this stage. The documentation of these corrections helps the researcher to retain the original data set.

Coding
The edited data are then subjected to codification and classification. The coding process assigns numerals or other symbols to the several responses of the data set. It is therefore a pre-requisite to prepare a coding scheme for the data set. The recording of the data is done on the basis of this coding scheme. The responses collected in a data sheet vary: sometimes the response could be a choice among multiple options, sometimes the response could be in terms of values, and sometimes the response could be alphanumeric. At the recording stage itself, if some codification is done to the responses collected, it will be useful in the data analysis. When codification is done, it is imperative to keep a log of the codes allotted to the observations. This code sheet will help in the identification of variables/observations and the basis for such codification. The first coding done to primary data sets is of the individual observations themselves. This response sheet coding gives a benefit to the research, in that any specific queries on a specific response sheet can be clarified.

. The codification can be made at the time of distribution of the primary data sheets itself. most preferable to not preferable. The codes can be alphanumeric to keep track of where and to whom it had been sent. Alphabetic Coding: A mere tabulation or frequency count or graphical representation of the variable may be given in an alphabetic coding. the sheets that are distributed in a specific locality may carry a unique part code which is alphabetic. Professional. Numeric Coding: Coding need not necessarily be numeric.. the verification and editing of recordings and further contact with respondents can be achieved without any difficulty.benefit to the research. any specific queries on a specific responses sheet can be clarified. the codification needs to be carefully done to include all possible responses under “Others. or it could be very specific such as Gender classified as Male and Female. if the data consists of several public at different localities. a numeric code can be attached to distinguish the person to whom the primary instrument was distributed. Certain classifications can lead to open ended classification such as education classification. Others. Graduate. The variables or observations in the primary instrument would also need codification. Even at a latter stage. For instance. To this alphabetic code. especially when they are categorized. The categorization could be on a scale i. In such instances. Please specify.e. then it will be better to create a separate variable for the “Others please specify” category and records all responses as such. It can also be alphabetic. Coding has to be compulsorily numeric. This also helps the researcher to keep track of who the respondents are and who are the probable respondents from whom primary data sheets are yet to be collected. in that. please specify”. when the variable is subject to further parametric analysis. If the preparation of the exhaustive list is not feasible. Illiterate.

Zero coding: A coding of zero has to be assigned carefully to a variable. In many instances, when manual analysis is done, a code of 0 would imply a "no response" from the respondents. Hence, if a value of 0 is to be given to specific responses in the data sheet, it should not lead to the same interpretation as 'non-response'. For instance, there will be a tendency to give a code of 0 to a 'no' answer; in that case, a coding different from 0 should be used for 'no response' in the data sheet.

An "Others, please specify" response could also be treated as a separate variable/observation and the actual response recorded; the new variable could be termed, for example, "other occupation".

Sometimes the researcher might not be able to code the data from the primary instrument itself. He may need to classify the responses and then code them. For this purpose, classification is necessary to code the responses. For instance, the income of the respondent could be an open-ended question; from all the responses, a suitable classification can be arrived at.

In some instances, the data recording is not done by the researcher but is outsourced to a data entry firm or individual. In order to enter the data in the same perspective as the researcher would like to view it, the data coding sheet is to be prepared first, and a copy of the data coding sheet should be given to the outsourcer to help in the data entry procedure. Hence, classification of data is also necessary at the data entry stage. The coding sheet needs to be prepared carefully. An illustration of the coding process of some of the demographic variables is given in the following table.
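The coding table referred to above is not reproduced in this text. As a rough, hypothetical illustration only, a coding scheme for two demographic variables, together with a 'no response' code kept distinct from 0, might look like the following sketch; the categories and the choice of 99 are assumptions, not taken from the source.

```python
# Hypothetical coding scheme for two demographic variables.
coding_scheme = {
    "gender": {"Male": 1, "Female": 2},
    "education": {"Illiterate": 1, "Graduate": 2, "Professional": 3, "Others": 4},
}
NO_RESPONSE = 99   # deliberately not 0, so it is never read as a coded 'no'

def code_response(variable, raw_answer):
    """Translate a raw answer into its numeral using the coding scheme."""
    if raw_answer is None or raw_answer == "":
        return NO_RESPONSE
    table = coding_scheme[variable]
    if raw_answer in table:
        return table[raw_answer]
    # Unlisted answers are clubbed under "Others" where that category exists.
    return table.get("Others", NO_RESPONSE)

print(code_response("education", "Graduate"))   # 2
print(code_response("education", ""))           # 99
print(code_response("education", "Diploma"))    # 4 (clubbed under Others)
```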

Classification
When open-ended responses have been received, classification is necessary to code the responses. A classification method should meet certain requirements or should be guided by certain rules.

First, classification should be linked to the theory and the aim of the particular study. The objectives of the study will determine the dimensions chosen for coding. The categorization should meet the information required to test the hypothesis or investigate the questions.

Second, the scheme of classification should be exhaustive. That is, there must be a category for every response. For example, the classification of marital status into three categories, viz. "married", "single" and "divorced", is not exhaustive, because responses like "widower" or "separated" cannot be fitted into the scheme. Here, the inclusion of the classification "Others" tends to accommodate the scattered, but few, responses from the data sheets. However, the "Others" categorization has to be carefully used by the researcher, because it tends to defeat the very purpose of classification, which is designed to distinguish between observations in terms of the properties under study. The classification "Others" will be very useful when a minority of respondents in the data set give varying answers. For instance, the reading habits of newspaper readers may be surveyed; an open-ended question will be the best mode of getting the responses, and from the responses collected, the researcher can fit a meaningful and theoretically supportive classification. Ninety-five respondents out of 100 could be easily classified into five large reading groups, while five respondents could have given unique answers. These answers, rather than being separately considered, could be clubbed under the "Others" heading for a meaningful interpretation of respondents and reading habits.

Third, the categories must also be mutually exclusive, so that each case is classified only once. This requirement is violated when some of the categories overlap or different dimensions are mixed up.
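The exhaustiveness and mutual-exclusiveness requirements can be sketched in code. The marital-status categories below follow the example in the text, with an added "others" category; the rule functions themselves are hypothetical.

```python
# Sketch of the two formal requirements discussed above: exhaustive
# (every response fits somewhere) and mutually exclusive (exactly once).
categories = {
    "married":  lambda r: r == "married",
    "single":   lambda r: r == "single",
    "divorced": lambda r: r == "divorced",
    "others":   lambda r: r not in {"married", "single", "divorced"},
}

def classify(response):
    matches = [name for name, rule in categories.items() if rule(response)]
    # Exactly one category must match for a valid scheme.
    assert len(matches) == 1, f"scheme fails for {response!r}"
    return matches[0]

for answer in ["married", "widower", "separated"]:
    print(answer, "->", classify(answer))
# 'widower' and 'separated' fall under 'others', keeping the scheme exhaustive.
```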

The number of categories for a specific question/observation at the coding stage should be the maximum permissible, since reducing the number of categories at the analysis level would be easier than splitting an already classified group of responses. However, the number of categories is limited by the number of cases and by the anticipated statistical analyses that are to be used on the observation.

Transcription of Data
When the observations collected by the researcher are not very large, the simple inferences which can be drawn from the observations can be transferred to a data sheet, which is a summary of all responses on all observations from a research instrument. The main aim of transcription is to minimize the shuffling process between several responses and several observations. Suppose a research instrument contains 120 responses and the observations have been collected from 200 respondents; a simple summary of one response from all 200 observations would require shuffling of 200 pages. The process is quite tedious if several summary tables are to be prepared from the instrument. The transcription process helps in the presentation of all responses and observations on data sheets, which can help the researcher to arrive at preliminary conclusions as to the nature of the sample collected, etc. Transcription is hence an intermediary process between data coding and data tabulation.

Methods of Transcription
The researcher may adopt manual or computerized transcription. Long worksheets, sorting cards or sorting strips could be used by the researcher to manually transcribe the responses. Computerized transcription could be done using a database package such as spreadsheets, text files or other databases.

The main requisite for a transcription process is the preparation of the data sheets, where observations are the rows of the database and the responses/variables are the columns of the data sheet. Each variable should be given a label, so that long questions can be covered under the label names.

The choice of manual transcription would be made when the number of responses in a research instrument is very small, say 10 responses, and the number of observations collected is within 100. A transcription sheet of 100×50 (assuming each response has 5 options) rows/columns can be easily managed by a researcher manually. If, on the other hand, the variables in the research instrument are more than 40 and each variable has 5 options, it leads to a worksheet of 100×200 size, which might not be easily managed by the researcher manually. In all other instances, it is advisable to use a computerized transcription process.

Manual Transcription
When the sample size is manageable, the researcher could prefer manual transcription and analysis of responses; if the number of responses is less than 30 and the number of observations collected is within 100, then the worksheet could be attempted manually and the researcher need not use any computerization process to analyze the data. For instance, opinion on consumer satisfaction could be identified through a number of statements (say 10). In this instance the variable names could be given as CS1, CS2, CS3, CS4, CS5, CS6, CS7, CS8, CS9 and CS10, the label CS indicating Consumer Satisfaction and the numbers 1 to 10 indicating the statements measuring consumer satisfaction. The label names are thus the links to specific questions in the research instrument. The data sheet does not contain the details of the statements, but gives a link to the questions in the research instrument through the variable labels. Once the labelling process has been done for all the responses in the research instrument, the transcription of the responses is done.
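A minimal sketch of such a labelled data sheet, using the CS1 to CS10 labels mentioned above, is given below. The option codes recorded for each respondent are hypothetical.

```python
# Observations are rows; each labelled variable is a column.
labels = [f"CS{i}" for i in range(1, 11)]   # CS1 .. CS10

data_sheet = [
    {"respondent": 1, **{label: 4 for label in labels}},   # hypothetical codes
    {"respondent": 2, **{label: 3 for label in labels}},
]

# The label is the link back to the full statement in the instrument.
label_to_statement = {"CS1": "Statement 1 on consumer satisfaction"}  # and so on

print(list(data_sheet[0].keys()))   # ['respondent', 'CS1', ..., 'CS10']
```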

Long Worksheets
Long worksheets require quality paper, preferably chart sheets, thick enough to last several usages. These worksheets are normally ruled both horizontally and vertically, allowing responses to be written in the boxes. The first column contains the code of the observation. The headings of responses, which are the variable names, and their coding (options) are filled in the first two rows. The responses from the research instrument are then transferred to the worksheet by ticking the specific option that the observer has chosen. If a variable cannot be coded into categories, requisite length for recording the actual response of the observer should be provided for in the worksheet. If one sheet is not sufficient, the researcher may use multiple ruled sheets to accommodate all the observations. Transcription can be made as and when the edited instrument is ready for processing. Once all schedules/questionnaires have been transcribed, the frequency tables can be constructed straight from the worksheet. The worksheet can then be used for preparing the summary tables or can be subjected to further analysis of the data. The original research instruments can now be kept aside as safe documents, and copies of the data sheets can also be kept for future reference. As has been discussed under the editing section, the transcribed data has to be subjected to testing to ensure error-free transcription of the data.

Other methods of manual transcription include the adoption of sorting strips or cards. In olden days, data entry and processing were made through mechanical and semi-automatic devices such as key punches using punch cards. The arrival of computers has changed the data processing methodology altogether.

Tabulation
The transcription of data can be used to summarize and arrange the data in a compact form for further analysis. The process is called tabulation. Thus, tabulation is a process of summarizing raw data and displaying them in compact statistical tables for further analysis. It involves counting the number of cases falling into each of the categories identified by the researcher. Tabulation can be done manually or through the computer. The choice depends upon the size and type of study, cost considerations, time pressures and the availability of software packages.

Manual Tabulation
When data are transcribed in a classified form as per the planned scheme of classification, category-wise totals can be extracted from the respective columns of the worksheets. Manual tabulation is suitable for small and simple studies and does not call for any special technical knowledge or skill. A simple frequency table counting the number of "Yes" and "No" responses can be made easily by counting the "Y" response column and the "N" response column in the manual worksheet table prepared earlier. This is a one-way frequency table, and such tables are readily inferred from the totals of each column in the worksheet. Sometimes the researcher has to cross-tabulate two variables, for instance, vehicle ownership by the age group of vehicle owners; this requires a two-way classification and cannot be inferred straight from the one-way column totals. If one wants to prepare a table showing the distribution of respondents by age, a tally sheet showing the age groups horizontally is prepared. Tally marks are then made for the respective group, i.e., 'vehicle owners', from each line of response in the worksheet. After every four tallies, the fifth tally is cut across the previous four tallies; this represents a group of five items and facilitates easy counting of each one of the class groups. An illustration of this tally sheet is presented below.
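The tally-sheet illustration mentioned above is not reproduced in this text. The same one-way frequency count and two-way cross tabulation can be sketched computationally as follows; the responses and the age groups are hypothetical.

```python
# One-way frequency table and a simple cross tabulation, the computational
# equivalents of the tally sheet described above. Data are hypothetical.
from collections import Counter

responses = [
    {"owns_vehicle": "Y", "age_group": "18-30"},
    {"owns_vehicle": "N", "age_group": "18-30"},
    {"owns_vehicle": "Y", "age_group": "31-45"},
    {"owns_vehicle": "Y", "age_group": "31-45"},
]

# One-way frequency table: count of Yes/No.
one_way = Counter(r["owns_vehicle"] for r in responses)

# Two-way classification: vehicle ownership by age group.
two_way = Counter((r["age_group"], r["owns_vehicle"]) for r in responses)

print(one_way)   # Counter({'Y': 3, 'N': 1})
print(two_way)   # counts for each (age group, ownership) cell
```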

Although manual tabulation is simple and easy to construct, it can be tedious, slow and error-prone as the number of responses increases. Computerized tabulation is easy with the help of software packages. The input requirement will be the column and row variables; the software package then computes the number of records in each cell of the row and column categories. The most popular package is the Statistical Package for the Social Sciences (SPSS). It is an integrated set of programs suitable for the analysis of social science data. This package contains programs for a wide range of operations and analyses, such as handling missing data, recoding variable information, simple descriptive analysis, cross-tabulation, multivariate analysis and non-parametric analysis.

4. Briefly explain the types of interviews.
ANS: The interview may be classified into: (a) structured or directive interview, (b) unstructured or non-directive interview, (c) focused interview, (d) clinical interview and (e) depth interview.

Structured or Directive Interview
This is an interview made with a detailed standardized schedule. The same questions are put to all the respondents and in the same order. Each question is asked in the same way in each interview, promoting measurement reliability. This type of interview is used for large-scale formalized surveys.
Advantages: This interview has certain advantages. First, data from one interview to the next are easily comparable. Second, recording and coding the data do not pose any problem, and greater precision is achieved. Lastly, attention is not diverted to extraneous, irrelevant and time-consuming conversation.
Limitations: However, this type of interview suffers from some limitations.

First, it tends to lose the spontaneity of natural conversation. Second, the scope for exploration is limited. Lastly, the way in which the interview is structured may be such that the respondent's views are minimized and the investigator's own biases regarding the problem under study are inadvertently introduced.

Unstructured or Non-Directive Interview
This is the least structured interview. In this type of interview, a detailed pre-planned schedule is not used; only a broad interview guide is used. Questions are not standardized and ordered in a particular way. The interviewer encourages the respondent to talk freely about a given topic with a minimum of prompting or guidance. The interviewer avoids channelling the interview directions; instead he develops a very permissive atmosphere. This kind of interviewing is more useful in case studies than in surveys.
Advantages: This type of interview has certain special advantages. It can closely approximate the spontaneity of a natural conversation. It is less prone to the interviewer's bias. It provides greater opportunity to explore the various aspects of the problem in an unrestricted manner. It is particularly useful in exploratory research where the lines of investigation are not clearly defined. It is also useful for gathering information on sensitive topics such as divorce, class conflict, social discrimination, the generation gap, drug addiction, etc.
Limitations: Though the unstructured interview is a potent research instrument, it is not free from limitations. One of its major limitations is that the data obtained from one interview are not comparable to the data from the next; hence, it is not suitable for surveys. Time may be wasted in unproductive conversations. By not focusing on one or another facet of a problem, the investigator may run the risk of being led up a blind alley. As there is no particular order or sequence in this interview, the classification of responses and coding may require more time.

This type of informal interviewing calls for greater skill than the formal survey interview.

Focused Interview
This is a semi-structured interview where the investigator attempts to focus the discussion on the actual effects of a given experience to which the respondents have been exposed. It takes place with respondents known to have been involved in a particular experience, e.g., seeing a particular film, viewing a particular programme on TV, being involved in a train/bus accident, etc. The situation is analysed prior to the interview. An interview guide specifying the topics relating to the research hypothesis is used. The interview is focused on the subjective experiences of the respondent, i.e., his attitudes and emotional responses regarding the situation under study. The focused interview permits the interviewer to obtain details of personal reactions, specific emotions and the like. The interviewer is also free to choose the sequence of questions and to determine the extent of probing.
Merits: This type of interview is free from the inflexibility of formal methods, yet gives the interview a set form and ensures adequate coverage of all the relevant topics. The respondent is asked for certain information, yet he has plenty of opportunity to present his views.

Clinical Interview
This is similar to the focused interview but with a subtle difference. While the focused interview is concerned with the effects of a specific experience, the clinical interview is concerned with broad underlying feelings or motivations, or with the course of the individual's life experiences. The 'personal history' interview used in social case work, prison administration, psychiatric clinics and in individual life history research is the most common type of clinical interview.

The specific aspects of the individual's life history to be covered by the interview are determined with reference to the purpose of the study, and the respondent is encouraged to talk freely about them.

Depth Interview
This is an intensive and searching interview aimed at studying the respondent's opinions, emotions or convictions on the basis of an interview guide. This requires much more training in interpersonal skills than the structured interview. It deliberately aims to elicit unconscious as well as extremely personal feelings and emotions. It is generally a lengthy procedure designed to encourage the free expression of affectively charged information. It requires probing. The interviewer should totally avoid advising or showing disagreement. Of course, he should use encouraging expressions like "uh-huh" or "I see" to motivate the respondent to continue the narration. Sometimes the interviewer has to face the problem of affections, i.e., the respondent may hide rather than express affective feelings. The interviewer should handle such situations with great care.

5. Describe the principles involved in table construction.
ANS: There are certain generally accepted principles or rules relating to the construction of tables. They are:
1. Every table should have a title. The title should represent a succinct description of the contents of the table. It should be clear and concise. It should be placed above the body of the table.
2. A number facilitating easy reference should identify every table. The number can be centred above the title. The table numbers should run in consecutive serial order.

Alternatively, tables in chapter 1 may be numbered as 1.1, 1.2, ..., in chapter 2 as 2.1, 2.2, 2.3, ..., and so on.
3. The captions (or column headings) should be clear and brief.
4. The units of measurement under each heading must always be indicated.
5. Any explanatory footnotes concerning the table itself are placed directly beneath the table, and in order to obviate any possible confusion with the textual footnotes, such reference symbols as the asterisk (*), the dagger (†) and the like may be used.
6. If the data in a series of tables have been obtained from different sources, it is ordinarily advisable to indicate the specific sources in a place just below the table.
7. Usually lines separate columns from one another. Lines are always drawn at the top and bottom of the table and below the captions.
8. The columns may be numbered to facilitate reference.
9. All column figures should be properly aligned. Decimal points and "plus" or "minus" signs should be in perfect alignment.
10. Columns and rows that are to be compared with one another should be brought close together.
11. Totals of rows should be placed in the extreme right column, and totals of columns at the bottom.

12. The arrangement of the categories in a table may be chronological, geographical, alphabetical or according to magnitude. Numerical categories are usually arranged in descending order of magnitude.
13. Miscellaneous and exceptional items are generally placed in the last row of the table.
14. In order to emphasize the relative significance of certain categories, different kinds of type, spacing and identifications can be used.
15. The table should be made as logical, clear, accurate and simple as possible.
16. Usually the larger number of items is listed vertically; this means that a table's length is greater than its width. Tables should not exceed the page size. Tables that are too wide for the page may be turned sideways, with the top facing the left margin or binding of the script, or reduced by photostating.
17. Abbreviations should be avoided whenever possible, and ditto marks should not be used in a table.
18. Text references should identify tables by number, rather than by such expressions as "the table above" or "the following table".

Where should tables be placed in a research report or thesis? Some writers place both special-purpose and general-purpose tables in an appendix and refer to them in the text by number. This practice has the disadvantage of inconveniencing the reader who wants to study the tabulated data as the text is read. A more appropriate procedure is to place special-purpose tables in the text and primary tables, if needed at all, in an appendix.

6. Write a note on the contents of a research report.
ANS: The outline of a research report is given below:

I. Prefatory Items
• Title page
• Declaration
• Certificates
• Preface/acknowledgements
• Table of contents
• List of tables
• List of graphs/figures/charts
• Abstract or synopsis

II. Body of the Report
• Introduction
• Theoretical background of the topic
• Statement of the problem
• Review of literature
• The scope of the study
• The objectives of the study
• Hypothesis to be tested
• Definition of the concepts
• Models, if any
• Design of the study
• Methodology
• Method of data collection
• Sources of data
• Sampling plan
• Data collection instruments
• Field work
• Data processing and analysis plan
• Overview of the report
• Limitations of the study
• Results: findings and discussions
• Summary, conclusions and recommendations

III. Reference Material
• Bibliography
• Appendix
• Copies of data collection instruments
• Technical details on sampling plan
• Complex tables
• Glossary of new terms used
