
Q.1. Comment on the following statements, bringing out clearly the fallacy occurring in them. (a) In a factory, the labour union asked for an increase in salaries because they claim that 50% of the employees get less than Rs. 800 per month. The management claims that there is no case for an increase in salaries because the average salary of the employees is Rs. 1200 per month. Which view will you support and why? [5 Marks] (b) “The number of bus accidents committed in a city in a particular year by women drivers is much less than that committed by male drivers.” Hence women are safe drivers? [5 Marks]

(a) The average is a dicey measure when the data contain extreme values, and this wage dispute is a typical case of the confusion it can cause between the labour union and the management. Quite plausibly, some of the employees in the organisation are highly paid while others draw very low salaries. Consider the following data points to understand how both claims can hold at once.

Employee   Wages (Rs)
A          3000
B          1900
C          1800
D          1200
E           750
F           750
G           750
H           750
I           650
J           600

The above data are skewed, with high variation between the wages of the employees (the standard deviation is 786). From the union's perspective, 6 out of 10 people are on low wages, so the claim that 50% of employees earn less than Rs. 800 is justified. At the same time, because a few employees earn as much as Rs. 3000, the average salary works out to around Rs. 1200 per employee, so the management's figure is also arithmetically correct. The mean is simply the wrong measure here: it is pulled up by a few extreme salaries, while the median (about Rs. 750) reflects what the typical worker actually earns, so the union's view is the one to support. The management should also examine why the salary structure varies so widely. Is it because of the type of work? A quality analyst, for example, would be on a higher package than an ordinary worker; if so, management should explain this to employees and encourage them to work towards such roles. Other details matter too: a salary could be low because the employee is a part-timer or on contract, and allowances such as medical claim, insurance, housing and conveyance also affect the figures. A transparent HR intervention is needed to make these points clear and reach a conclusion.

(b) From one year's data for a single city we cannot conclude that women are safer drivers. More importantly, the raw counts are misleading: if far fewer women than men drive in the city, women will naturally be involved in fewer accidents even if they are no safer. The proper comparison is the accident rate per driver (or per kilometre driven), and the study should be repeated over several years and across different cities before drawing any conclusion.

Q.2 a. Present the following information in a suitable tabular form supplying the figures not directly given. In 1989, out of 2,000 workers in a factory, 1550 were members of a trade union. The number of women workers employed was 250, out of which 200 did not belong to any trade union. In 1990, the number of union workers was 1725 of which 1600 were men. The number of non-union workers was 380, among whom 155 were women. [5 marks]

Answer 2 a.

                         1989                     1990
                   F      M      Total      F      M      Total
Member of TU       50     1500   1550       125    1600   1725
Non-member of TU   200    250    450        155    225    380
Total              250    1750   2000       280    1825   2105
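The figures not directly given in the question can be derived from the stated totals; a quick sketch (the variable names are mine):

```python
# Derive the missing figures for the trade-union table.
# 1989: 2000 workers, 1550 union members, 250 women of whom 200 are non-union.
total_1989, union_1989, women_1989, women_nonunion_1989 = 2000, 1550, 250, 200

women_union_1989 = women_1989 - women_nonunion_1989       # 50
men_union_1989 = union_1989 - women_union_1989            # 1500
nonunion_1989 = total_1989 - union_1989                   # 450
men_nonunion_1989 = nonunion_1989 - women_nonunion_1989   # 250
men_1989 = total_1989 - women_1989                        # 1750

# 1990: 1725 union members (1600 men), 380 non-union workers (155 women).
union_1990, men_union_1990, nonunion_1990, women_nonunion_1990 = 1725, 1600, 380, 155

women_union_1990 = union_1990 - men_union_1990            # 125
men_nonunion_1990 = nonunion_1990 - women_nonunion_1990   # 225
women_1990 = women_union_1990 + women_nonunion_1990       # 280
men_1990 = men_union_1990 + men_nonunion_1990             # 1825
total_1990 = union_1990 + nonunion_1990                   # 2105
```

Each derived value is just a difference or sum of the marginal totals given in the question.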

**b. What are the characteristics of a good measure of central tendency? [5 Marks]**

The characteristics of a good measure of central tendency are:

• Present mass data in a concise form: The mass data is condensed to make the data readable and to use it for further analysis.
• Facilitate comparison: It is difficult to compare two different sets of mass data, but we can compare them after computing the averages of the individual data sets. While comparing, the same measure of average should be used: it leads to incorrect conclusions when the mean salary of employees is compared with the median salary of the employees.
• Establish relationship between data sets: The average can be used to draw inferences about the unknown relationships between the data sets. Computing the averages of the data sets is helpful for estimating the average of the population.
• Provide basis for decision-making: In many fields, such as business, finance, insurance and other sectors, managers compute the averages and draw useful inferences or conclusions for taking effective decisions.

The following are the requisites of a measure of central tendency:
1. It should be simple to calculate and easy to understand
2. It should be based on all values
3. It should not be affected by extreme values
4. It should not be affected by sampling fluctuation
5. It should be rigidly defined
6. It should be capable of further algebraic treatment

Q.3. The means of two samples of sizes 50 and 100 respectively are 54.1 and 50.3 and the standard deviations are 8 and 7. Find the mean and standard deviation of the sample of size 150 obtained by combining the two samples. [10 marks]

Ans: The combined mean is x̄ = (n1x̄1 + n2x̄2)/(n1 + n2) = (50 × 54.1 + 100 × 50.3)/150 = 7735/150 = 51.57 (approx.).

For the combined standard deviation, let d1 = x̄1 − x̄ = 2.53 and d2 = x̄2 − x̄ = −1.27. Then

σ² = [n1(σ1² + d1²) + n2(σ2² + d2²)]/(n1 + n2) = [50(64 + 6.42) + 100(49 + 1.60)]/150 = 8581.3/150 = 57.21

so the combined standard deviation is σ ≈ 7.56.
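The pooled mean and standard deviation asked for in Q.3 can be checked numerically with the standard pooling formula:

```python
from math import sqrt

# Pool two samples: n1 = 50 with mean 54.1 and sd 8; n2 = 100 with mean 50.3 and sd 7.
n1, m1, s1 = 50, 54.1, 8.0
n2, m2, s2 = 100, 50.3, 7.0

n = n1 + n2
combined_mean = (n1 * m1 + n2 * m2) / n

# Combined variance = [n1(s1^2 + d1^2) + n2(s2^2 + d2^2)] / (n1 + n2),
# where d_i is each sample mean's deviation from the combined mean.
d1, d2 = m1 - combined_mean, m2 - combined_mean
combined_var = (n1 * (s1**2 + d1**2) + n2 * (s2**2 + d2**2)) / n
combined_sd = sqrt(combined_var)
```

This gives a combined mean of about 51.57 and a combined standard deviation of about 7.56.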

Q.4. The coefficients of variation of two series are 66% and 80% and their standard deviations are 20 and 16 respectively. Find their arithmetic means. [10 Marks]

Ans: Since C.V. = (σ/x̄) × 100, we have x̄ = σ × 100/C.V. For the first series, x̄ = 20 × 100/66 = 30.3; for the second series, x̄ = 16 × 100/80 = 20.
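Rearranging the coefficient-of-variation definition gives the means directly; a minimal sketch (the helper name is mine):

```python
# Coefficient of variation: CV = (sd / mean) * 100, so mean = sd * 100 / CV.
def mean_from_cv(sd, cv_percent):
    return sd * 100 / cv_percent

mean_a = mean_from_cv(20, 66)   # first series: CV 66%, sd 20
mean_b = mean_from_cv(16, 80)   # second series: CV 80%, sd 16
```

The first series has a mean of about 30.3 and the second a mean of exactly 20.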


Q5. Explain in detail the different types of index numbers, and what are the uses of index numbers? [10 Marks]

Ans: Index numbers are designed to measure the magnitude of economic changes over time. Because they work in a similar way to percentages, they make such changes easier to compare. Briefly, this works in the following way. Suppose that a cup of coffee in a particular café cost 75p in 1995. In 2002, an identical cup of coffee cost 99p. How has the price changed between 1995 and 2002? The particular time period of 1995, which we've chosen to compare against, is called the base period. The variable for that period, in this case the 75p, is then given a value of 100, corresponding to 100%. The index for the later period of 2002 is then calculated as a proportionate change: (99/75) × 100 = 132.

The index number shows us that there has been a price increase of 32% since the base period. An index number for a single price change like this is called a price relative.

Rule for finding the price relative: if we let p0 be the price in the base period and pn the price in the later period, then the price relative for the price change between these periods is given by (pn/p0) × 100.

The Index of Retail Prices is probably the most generally known of all index numbers. Its aim is to measure the change in price over time of a whole range of widely bought goods and services, and so give a measurement of the cost of living. This measurement can then be used to alter the amounts of the payments in index-linked pensions, for example.

The main uses of index numbers are given below:
• Index numbers are used in the fields of commerce, meteorology, labour, industry, etc.
• They measure fluctuations during intervals of time, group differences of geographical position, of degree, etc.
• They are used to compare the total variations in the prices of different commodities whose units of measurement differ with time and price.
• They measure the purchasing power of money.
• They are helpful in forecasting future economic trends.
• They are used in studying differences between comparable categories of animals, persons or items.
• Index numbers of industrial production are used to measure changes in the level of industrial production in the country.
• Index numbers of import prices and export prices are used to measure changes in the trade of a country.
• Index numbers are used to measure seasonal variations and cyclical variations in a time series.
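The price relative rule can be sketched in a couple of lines (the function name is mine), using the coffee example from above:

```python
# Price relative: index for a single price change relative to a base period.
def price_relative(p0, pn):
    return pn / p0 * 100

coffee_index = price_relative(75, 99)   # 75p in 1995 (base period), 99p in 2002
```

The result, 132, matches the 32% increase computed in the answer.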

6.a. It was found that when a manufacturing process is under control, the average number of defectives per sample batch of 10 is 1.2. What limits would you set in a quality control chart based on the examination of defectives in sample batches of 10?

Ans: This calls for an np-chart (number of defectives per batch). The centre line is n·p̄ = 1.2, so p̄ = 1.2/10 = 0.12. The 3-sigma control limits are n·p̄ ± 3√(n·p̄(1 − p̄)) = 1.2 ± 3√(1.2 × 0.88) = 1.2 ± 3.08. Hence UCL ≈ 4.28, and since the number of defectives cannot be negative, LCL = 0.
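A sketch of the limit calculation, treating the question as an np-chart for the number of defectives per sample of 10:

```python
from math import sqrt

n = 10           # batch size
np_bar = 1.2     # average number of defectives per batch (given)
p_bar = np_bar / n

# np-chart 3-sigma limits: np_bar ± 3 * sqrt(np_bar * (1 - p_bar))
half_width = 3 * sqrt(np_bar * (1 - p_bar))
ucl = np_bar + half_width
lcl = max(0.0, np_bar - half_width)   # a count of defectives cannot be negative
```

This gives an upper limit of about 4.28 and a lower limit clamped to 0.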


b. What are the control limits in the p-chart if the average proportion of defectives in the first 10 samples of size 100 each was observed to be 0.5? [4 Marks]

Ans: In a p-chart, horizontal lines are drawn at the mean proportion of defectives and at the upper and lower control limits. The distribution of the number of defective items is assumed to be binomial, and this assumption is the basis for calculating the control limits:

UCL = p̄ + 3√(p̄(1 − p̄)/N)
LCL = p̄ − 3√(p̄(1 − p̄)/N)

where p̄ is the total number of defects divided by the total number of items and N is the number of items in a given sub-group. Note that this means the control limits can vary with the sub-group; also, zero serves as a lower bound on the LCL value. Here p̄ = 0.5 and N = 100, so the limits are 0.5 ± 3√(0.5 × 0.5/100) = 0.5 ± 0.15, i.e. UCL = 0.65 and LCL = 0.35.

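The p-chart limits can be verified numerically; a minimal sketch using the formulas above:

```python
from math import sqrt

p_bar = 0.5   # average proportion defective over the first 10 samples (given)
n = 100       # sample size of each sub-group

# p-chart 3-sigma limits: p_bar ± 3 * sqrt(p_bar * (1 - p_bar) / n)
sigma = sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)   # clamp at zero, since a proportion cannot be negative
```

With these numbers the clamp never triggers: the limits come out at 0.65 and 0.35.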

Assignment Set 2

**Q.1. What do you understand by degrees of freedom? [3 Marks]**

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary. Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step). Mathematically, “degrees of freedom” is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined. The term is most often used in the context of linear models (linear regression, analysis of variance), where certain random vectors are constrained to lie in linear subspaces, and the number of degrees of freedom is the dimension of the subspace. The degrees-of-freedom are also commonly associated with the squared lengths (or "Sum of Squares") of such vectors, and the parameters of chi-squared and other distributions that arise in associated statistical testing problems.
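The n − 1 divisor in the sample variance reflects exactly this idea: one degree of freedom is spent estimating the mean. A minimal illustration:

```python
# Sample variance uses n - 1 degrees of freedom because the sample mean
# is estimated from the same data: once the mean and any n - 1 deviations
# are known, the last deviation is determined (deviations sum to zero).
data = [4.0, 7.0, 9.0, 12.0]
n = len(data)
mean = sum(data) / n

deviations = [x - mean for x in data]
sum_dev = sum(deviations)   # always 0 (up to rounding): the constraint that costs one df

sample_var = sum(d**2 for d in deviations) / (n - 1)   # unbiased estimator, df = n - 1
population_var = sum(d**2 for d in deviations) / n     # divides by n instead
```

Because of the lost degree of freedom, the sample variance is always a little larger than the same sum of squares divided by n.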

b. From the following table, test whether the colour of the son's eyes is associated with that of the father's.

                            Eye colour of son
                      Not Light    Light    Total
Eye colour  Not Light    230        151      381
of father   Light        148        471      619
            Total        378        622     1000

The value of Chi Square (0.05) = 3.84 for v = 1, and 5.99 for v = 2. [7 Marks]

Let's establish the null and alternate hypothesis statements first.

H0: The row value and column value are independent (i.e. there is no association between the son's and father's eye colour).
H1: The row value and column value are dependent (i.e. there is an association between the son's and father's eye colour).

Let’s take the significance level of Alpha = 0.05 Compute the Expected frequency of the table using the formula Expected Frequency = [(Row Total)*(Column Total)]/ (Table Total)

Expected frequencies:

                      Son: Not Light   Son: Light   Total
Father: Not Light         144.018        236.982      381
Father: Light             233.982        385.018      619
Total                     378            622         1000

**Compute the degrees of freedom for the analysis**

Degrees of freedom = (rows − 1) × (columns − 1) = (2 − 1) × (2 − 1) = 1, since the table has two rows and two columns.

Compute the chi-square statistic:

Chi-square = Σ [(Observed − Expected)² / Expected]

Chi-square = (230 − 144.018)²/144.018 + (151 − 236.982)²/236.982 + (148 − 233.982)²/233.982 + (471 − 385.018)²/385.018
           = 51.333 + 31.196 + 31.596 + 19.201
           = 133.327

Since 133.327 > 3.84 (the critical value at the 5% level for v = 1), the decision is to reject the null hypothesis.

Hence we conclude that there is an association between the father's and son's eye colour.
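The whole test can be reproduced in a few lines of pure Python, using the observed counts from the table:

```python
# Chi-square test of independence for the 2x2 eye-colour table.
observed = [[230, 151],   # father not light: son not light, son light
            [148, 471]]   # father light:     son not light, son light

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        # Expected frequency = (row total * column total) / table total
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (observed[i][j] - expected) ** 2 / expected

dof = (2 - 1) * (2 - 1)   # (rows - 1) * (columns - 1) = 1
# chi_square ≈ 133.33, far above the 5% critical value of 3.84 for v = 1.
```

The computed statistic matches the hand calculation to rounding.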

Q.2. What are the essentials of a good questionnaire? Draft a suitable questionnaire to enable you to study the effects of supermarkets on prices of essential consumer goods. [10 Marks]

A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents. Although questionnaires are often designed for statistical analysis of the responses, this is not always the case. The questionnaire was invented by Sir Francis Galton.

Questionnaires have advantages over some other types of surveys in that they are cheap, do not require as much effort from the questioner as verbal or telephone surveys, and often have standardized answers that make it simple to compile data. However, such standardized answers may frustrate users. Questionnaires are also sharply limited by the fact that respondents must be able to read the questions and respond to them; thus, for some demographic groups, conducting a survey by questionnaire may not be practical. As a type of survey, questionnaires also have many of the same problems relating to question construction and wording that exist in other types of opinion polls.

Points to take into account while drafting a questionnaire: writing an effective questionnaire is not a task for novices. At the very least it requires an understanding of four basics:
• Considering the differences that exist when writing a questionnaire that respondents will fill out themselves, as opposed to when a professional interviewer administers the questionnaire to the respondent.
• Knowing which questions should be asked early in the questionnaire, which in the middle, and which toward the end.
• Understanding how to phrase questions.
• Being sensitive to questionnaire length.

**Q.3. Explain the importance and significance of sampling. [5 Marks]**

A sample is a collection of a few units of a large population, which is the total target market. For example, in research on the market potential of a toothpaste, the population is all households in the country and the sample is a few households from selected cities and villages. If the market potential of a metro city is to be assessed, the whole city is the population and a few selected households are the sample.

Each unit of the population is also regarded as an element of the population.

An ordinary person also uses the sampling method to take decisions and make inferences based on experience of a fraction of items, for example when forming attitudes about different classes, religions and races of people in everyday activities.

The following are the advantages of using sampling over the census method for data collection:
1. Lower cost: Collecting data from all the units of the target market or population gets very costly, whereas collecting data from a few units is comparatively much cheaper.
2. Saves time: Using a sampling technique to collect data saves time in comparison to total enumeration, i.e. the census method.
3. Usefulness of units: The small size of the sample does not affect the estimation of the average of different features of the total population.
4. Few researchers: Trained and competent researchers are usually not available in large numbers. This problem can be avoided by using the sampling method for data collection.

Sampling also has certain limitations:
1. Problem of determining sample units: Deciding the sample type, method and process is usually a difficult task.
2. Problem of determining sample size: Researchers find it difficult to determine what proportion or percentage of the population would be sufficient to represent the total population in the research conducted.
3. Problem of executing data collection: The interview and observation methods of data collection are not easy to operate, as these require a lot of experience and skills such as communication and psychological understanding.
4. Bias in selecting the sample: Marketers often do not make due effort in deciding and implementing the sampling design, which affects the accuracy and reliability of the sample.

**b. Give a comparative account of the various methods of selecting a sample. [5 Marks]**

Before an organisation conducts primary research it has to be clear which respondents it wishes to interview. A company cannot possibly interview the whole population to get their opinions and views; this would simply be too costly and unfeasible. A sample of the population is therefore taken for the research. To select this sample there are again different methods of choosing respondents: a mathematical approach called 'probability sampling' and a non-mathematical approach, simply called 'non-probability sampling'. Let's look at these in a little more detail.

Probability sampling methods

Simple random samples: With this method of sampling, the potential people you want to interview are listed, e.g. a group of 100 is listed and a group of 20 may be selected from this list at random. The selection may be done by computer.

Systematic samples: Out of the 100 people we talked about above, systematic sampling suggests that if we select the 5th person from the list, then we would select every consecutive 5th (the 10th, 15th, 20th, etc.). If the 6th person was selected, then it would be every consecutive 6th.

Multi-stage samples: With this sampling process the respondents are chosen through a process of defined stages. For example, residents within Islington (London) may have been chosen for a survey through the following process: within the UK, the south east is selected at random (stage 1), within the south east, London is selected, again at random (stage 2), Islington is selected as the borough (stage 3), then polling districts from Islington (stage 4), and then individuals from the electoral register (stage 5). As demonstrated, five stages were gone through before the final respondents were selected from the electoral register.

Non-probability samples

Convenience sampling: The researcher questions anyone who is available. This method is quick and cheap; however, we do not know how representative the sample is or how reliable the result.

Quota sampling: Using this method, the sample audience is made up of potential purchasers of your product. For example, if you feel that your typical customers will be male between 18-23 or female between 26-30, then some of the respondents you interview should be made up of these groups, i.e. a quota is given.

Dimensional sampling: An extension of quota sampling. The researcher takes into account several characteristics, e.g. gender, age, income, residence and education, and ensures there is at least one person in the study who represents each of those categories of the population. E.g. out of 10 people you may want to make sure that two people are of a certain gender and two of a certain age group with an income between £25,000 and £30,000; this again ensures the accuracy of the sample frame.

To summarize, there are two types of sampling frames - probability and non-probability - and within these, the six sampling methods discussed above.

Q.4.Distinguish between (i) Biased and Unbiased errors (ii) Sampling and non-sampling errors. [5 Marks] (i) Biased and Unbiased Sampling

The basic idea of statistics is to make statements about a set of things using a subset (a sample) of those things. A sample is "biased" if some members of the population are more likely to be included than others, and "unbiased" if all members of the population are equally likely to be included. Here are two examples. Suppose I want to find out how big a typical fish is in a lake. One way of getting a sample of the fish would be to use a net; but then I will never catch any of the fish that are smaller than the holes in the net, so I will think that all the fish in the lake are big. This sample is biased because the big fish are more likely to be included in my sample; to get an unbiased sample I need to sample the fish in a different way. Now suppose I want to know how long a bear sleeps every day, on average, during the year. If I watched it for 10 days in the winter, each day it would sleep 24 hours, since bears hibernate in the winter. This sample is biased because in winter the bear sleeps all day, while during the rest of the year it is awake part of the day and asleep part of the day. If I want an unbiased sample, I could watch it for one day each month for a year, instead of just in the winter. A sample is unbiased if every individual or element in the population has an equal chance of being selected. Here is one more example: Sameera wants to know how many students in her city use the internet for learning purposes, and she used an email poll. Based on the replies to her poll, she found that 83% of those surveyed used the internet. Sameera's sample is biased because an email poll can only reach students who already use the internet. She should instead have randomly selected a few schools and colleges in the city to conduct the survey.

**(ii) Sampling and non-sampling errors**

Sampling error is a statistical error to which an analyst exposes a model simply because he or she is working with sample data rather than population or census data. Using sample data presents the risk that results found in an analysis do not represent the results that would be obtained from the entire population from which the sample was derived. The use of a sample rather than the entire population is often necessary for practical and/or monetary reasons. Although there are likely to be some differences between sample analysis results and population analysis results, the degree to which these differ is not expected to be substantial. Methods of reducing sampling error include increasing the sample size and ensuring that the sample adequately represents the entire population.

Non-sampling error is a statistical error caused by human error in a specific statistical analysis. These errors include, but are not limited to, data entry errors, biased questions in a questionnaire, biased processing/decision making, inappropriate analysis conclusions and false information provided by respondents. Non-sampling errors are part of the total error that can arise from doing a statistical analysis; the remainder of the total error arises from sampling error. Unlike sampling error, increasing the sample size will not have any effect on reducing non-sampling error. Unfortunately, it is virtually impossible to eliminate non-sampling errors entirely.

**b. Explain (i) Cluster sampling (ii) Multistage sampling [5 Marks]**

(i) Cluster sampling

Cluster sampling is a sampling technique in which the entire population of interest is divided into groups, or clusters, and a random sample of these clusters is selected. Each cluster must be mutually exclusive and together the clusters must include the entire population. After clusters are selected, all units within the selected clusters are included; no units from non-selected clusters are included in the sample. This differs from stratified sampling, in which some units are selected from each group. When all the units within a cluster are selected, the technique is referred to as one-stage cluster sampling. If a subset of units is selected randomly from each selected cluster, it is called two-stage cluster sampling. Cluster sampling can also be made in three or more stages: it is then referred to as multistage cluster sampling.

The main reason for using cluster sampling is that it is usually much cheaper and more convenient to sample the population in clusters than at random. In some cases, constructing a sampling frame that identifies every population element is too expensive or impossible. Cluster sampling can also reduce cost when the population elements are scattered over a wide area. Suppose you want to survey school children of a certain age in a specific area. If you drew a simple random sample of school children, you might have to visit all schools in the area to interview your sample. With cluster sampling you could first select the schools to be included in your sample, and then select school children within each of the selected schools. That would probably reduce the number of schools you have to visit and therefore reduce the cost of data collection. In this example, the schools are what are sometimes referred to as natural clusters. In other cases, the population may be widely distributed geographically; then cluster sampling, where the clusters consist of geographical areas, can reduce the number of areas that need to be visited. A smaller number of areas to visit reduces travel expenses and also makes possible more efficient supervision of the fieldwork.

(ii) Multistage sampling

Multi-stage sampling is a kind of complex sample design in which two or more levels of units are embedded one in the other, for example: geographic areas (primary units), factories (secondary units) and employees (tertiary units). At each stage, a sample of the corresponding units is selected. First, a sample of primary units is selected; then, within each of those selected, a sample of secondary units is selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. The reasons for adopting such a design may be to reduce costs, for example when interviewers are assigned to persons located in a restricted area, or to reduce the sampling error. Multi-stage sampling is sometimes used when no general sampling frame exists. In this case, the first step is to select, at random, a sample of areas, collective units or villages from a list where they are all registered (primary units). Then, for each selected primary unit, a comprehensive enumeration of all units of lower rank is made, thus obtaining a local sampling frame from which a sample of secondary units will be selected.

For example, for each village of the primary sample, a list of all housing units is established, allowing for a selection of a sample of households. Different probabilities can be used at each stage, as well as within one particular stage, for the different units to be selected. Probabilities at the successive stages multiply, so that the resulting probability for selecting one final unit is the product of the probabilities used at each step. The corresponding answers need to be weighted by the inverse of that final probability in order to obtain unbiased estimates. A cluster sample can be seen as a two-stage sample where the secondary probability is 100 percent.

**Q.5.a. Why is Fisher's formula called "ideal"? [4 Marks]**

Irving Fisher suggested a compromise between Laspeyres' and Paasche's formulas by taking the geometric mean of the two. Fisher's formula for the price index is

P01 = √(L × P) × 100 = √[(Σp1q0/Σp0q0) × (Σp1q1/Σp0q1)] × 100

1. This formula takes into account current-year as well as base-year prices and quantities.
2. It is free from bias, upward as well as downward.
3. It satisfies both the time reversal test and the factor reversal test.

That is why it is called an ideal formula.

b. What do you mean by the Time Reversal Test? Show how Fisher's formula satisfies this test. [6 Marks]

A test that may be used under the axiomatic approach which requires that, if the prices and quantities in the two periods being compared are interchanged, the resulting price index is the reciprocal of the original price index; formally, P01 × P10 = 1 (with the indices expressed as ratios, without the factor of 100). When an index satisfies this test, the same result is obtained whether the direction of change is measured forwards in time, from the first to the second period, or backwards, from the second to the first period. The time reversal test requires that the index for the later period based on the earlier period should be the reciprocal of that for the earlier period based on the later period; one of the desirable features of the Fisher "ideal" price and volume indexes is that they satisfy this test (unlike either the Paasche or Laspeyres indexes).
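The time reversal property of Fisher's index can be verified numerically. A sketch on made-up price and quantity data (the numbers below are purely illustrative, not from the question):

```python
from math import sqrt

# Hypothetical data: p0, q0 = base-year prices/quantities; p1, q1 = current-year.
p0 = [10.0, 8.0, 5.0]
q0 = [30.0, 15.0, 20.0]
p1 = [12.0, 10.0, 6.0]
q1 = [25.0, 20.0, 30.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fisher(p_base, q_base, p_curr, q_curr):
    # Geometric mean of the Laspeyres and Paasche price ratios.
    laspeyres = dot(p_curr, q_base) / dot(p_base, q_base)
    paasche = dot(p_curr, q_curr) / dot(p_base, q_curr)
    return sqrt(laspeyres * paasche)

p01 = fisher(p0, q0, p1, q1)   # forwards: period 0 -> period 1
p10 = fisher(p1, q1, p0, q0)   # backwards: periods interchanged

# Time reversal test: p01 * p10 == 1 (indices as ratios, no factor of 100).
```

The product is exactly 1 by construction: interchanging the periods turns each ratio inside the square root into its reciprocal, so Fisher's index passes the test for any data.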

Q.6.a. In a question on correlation the value of r is 0.64 and its P.E. = 0.13274. What was the value of n? [4 Marks]

Ans: P.E. = 0.6745(1 − r²)/√n, so √n = 0.6745(1 − 0.4096)/0.13274 = 0.39822/0.13274 = 3, giving n = 9.

b. A student calculates the value of r as +0.5 when n is 16 and comments that r is significant. Is he correct? [2 Marks]

Ans: P.E. = 0.6745(1 − 0.25)/√16 = 0.1265. By the usual rule, r is considered significant only if r > 6 × P.E. = 0.759; since 0.5 < 0.759, r is not significant, so the student is not correct.
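Both parts follow from the probable-error relation P.E. = 0.6745(1 − r²)/√n; a quick numerical check (variable names are mine):

```python
from math import sqrt

# a. Recover n from r and its probable error:
#    P.E. = 0.6745 * (1 - r**2) / sqrt(n)  =>  n = (0.6745 * (1 - r**2) / P.E.)**2
r, pe = 0.64, 0.13274
n = (0.6745 * (1 - r**2) / pe) ** 2        # comes out very close to 9

# b. Is r = 0.5 significant for n = 16?
#    Common rule of thumb: r is significant if r > 6 * P.E.
r2, n2 = 0.5, 16
pe2 = 0.6745 * (1 - r2**2) / sqrt(n2)
significant = r2 > 6 * pe2                 # False: the student is not correct
```

Part (a) yields n = 9, and part (b) confirms that 0.5 falls short of the 6 × P.E. threshold of about 0.759.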

                              Low          Mean         High
(1) Sales (units)             160          500          960
(2) Price (per unit)          $3,000       $3,750       $4,000
(3) Variable cost (per unit)  3,000        3,000        3,000
(4) Fixed cost                100,000      200,000      400,000
(5) Initial investment        1,000,000    2,000,000    4,000,000

The problem implies that, since PVIFA = 6.447, the discount rate is 8.9% per year. For the mean case,

E[NPV] = −2,000,000 + 6.447 × [500 × (3,750 − 3,000) − 200,000] = −$871,775

Sensitivity analysis allows each variable individually to deviate from its mean value, e.g. with sales at its low value:

NPV = −2,000,000 + 6.447 × [160 × (3,750 − 3,000) − 200,000] = −$2,515,760

Variable          Low            High
Sales             −$2,515,760    $1,352,440
Price             −$3,289,400    −$65,900
Variable costs    −$871,775      −$871,775
Fixed costs       −$227,075      −$2,161,175

B. The economics department has prepared sales projections for three business scenarios: recession, normal, and recovery. Sales under each scenario are expected to be as follows: recession, 300,000 units; normal, 500,000 units; and recovery, 800,000 units. Calculate the return on assets for the two plants under these three scenarios. Answer: ROA = Return/Investment; for Plant 1 (recession): Return = 300,000 × (2 − 1.5) − 200,000 = −50,000, so ROA = −50,000/1,000,000 = −5%.
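Only Plant 1's figures (price $2, variable cost $1.50, fixed cost $200,000, investment $1,000,000) appear in the worked answer above; a sketch of the ROA calculation across all three scenarios using those numbers:

```python
# ROA = (sales * (price - variable_cost) - fixed_cost) / investment,
# using the Plant 1 figures quoted in the answer above.
price, var_cost = 2.0, 1.5
fixed_cost, investment = 200_000, 1_000_000

def roa(units):
    profit = units * (price - var_cost) - fixed_cost
    return profit / investment

scenarios = {"recession": 300_000, "normal": 500_000, "recovery": 800_000}
results = {name: roa(units) for name, units in scenarios.items()}
# recession: -5%, normal: +5%, recovery: +20%
```

The recession case reproduces the −5% from the worked answer; the same formula would apply to Plant 2 once its cost structure is given.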
