# BEE1020 Basic Mathematical Economics

Dieter Balkenborg, Department of Economics, University of Exeter
Week 12, Lecture Thursday 23.01.03
Constrained optimization

1 Objective

We give the "first order conditions" for constrained optimization problems like utility maximization and cost minimization. The Lagrangian method for obtaining these conditions is introduced and its economic interpretation is discussed. The relevant reading for this handout is Chapter 7, Section 4. See also the book by Begg on the consumer optimum and on cost minimization.

2 Constrained optimization

2.1 Utility maximization

The preferences of a consumer are described by a family of indifference curves. A mathematically convenient way to describe such a family is as the level curves of a utility function u(x, y). The utility u(x, y) can be interpreted as a numerical measure of the satisfaction the consumer gets from consuming x units of the first commodity (say, apples) and y units of the second commodity (say, oranges). Along a level curve utility is constant, so all commodity bundles on a level curve bring equal satisfaction; the level curves are hence the indifference curves. If one bundle has more utility than another, the consumer will prefer it. As an example consider the utility function u(x, y) = xy. The indifference curves for u = 1, u = 2 and u = 3 have the form
[Figure: the indifference curves of u(x, y) = xy for u = 1, 2 and 3, plotted for 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5]

where consumption on higher indifference curves yields higher utility. Suppose the price of apples is p, the price of oranges is q and the consumer has a budget b. Then the budget constraint

px + qy ≤ b

must be satisfied for the consumption bundle (x, y) which the consumer buys, i.e., his total expenditure on the two commodities cannot exceed his budget. This is so because by assumption the consumer has no possibility to save and because no other commodities are available. The aim of the consumer is to maximize his utility subject to the budget constraint. This is a constrained optimization problem because the consumer would consume infinite amounts of both commodities if he were not constrained by his budget. u(x, y) is the objective function for this problem because the objective is to maximize this function. We call total expenditure g(x, y) = px + qy the constraining function because it appears in the constraint.

From the Principles lecture you know the essential condition which must hold in an optimum: the budget line must be tangential to the indifference curve. Now the budget line is given by the equation px + qy = b, or

y = b/q − (p/q)x,

so it has the slope −p/q, which, apart from the sign, is the relative price of apples in terms of oranges. (To buy one more apple the consumer must give up p/q units of oranges.) The slope of the indifference curve is, apart from the sign, the marginal rate of substitution. In the previous handout we tried to explain that the marginal rate of substitution is (∂u/∂x)/(∂u/∂y). Thus in the optimum the marginal rate of substitution must be equal to the relative price:

(∂u/∂x)/(∂u/∂y) = p/q   (1)

This is the first equation which characterizes the constrained optimum. For the example u(x, y) = xy we have ∂u/∂x = y and ∂u/∂y = x. The condition becomes

y/x = p/q   (2)

or

qy = px,   (3)

i.e., in the optimum the consumer spends equal amounts on each commodity.

We need a second equation for the optimum. We expect the budget constraint to be binding in the optimum, i.e., we expect the budget inequality to hold with equality in the optimum. This second equation is the budget equation

px + qy = b   (4)

To repeat: the constrained optimum (x*, y*) is the solution of a simultaneous system of two equations in two unknowns. The first equation expresses that in the optimum the marginal rate of substitution is equal to the relative price. The second one states that the consumer spends all his money in the optimum.

In our example condition (1) gives y/x = p/q, i.e., y = (p/q)x. Substituting this into the budget equation gives

px + q(p/q)x = px + px = 2px = b

x* = b/(2p)

y* = (p/q)x* = (p/q) × b/(2p) = b/(2q)

We have found the optimum. For instance, when the budget is b = 100 and the prices are p = 2 and q = 5, the optimal consumption is

x* = 100/(2 × 2) = 25,   y* = 100/(2 × 5) = 10.

2.2 Cost minimization

The cost minimization problem is in many ways dual to the utility maximization problem and leads to very similar conditions. Consider a price-taking firm with production function Q(K, L) = KL. Suppose the firm wants to find the least costly way to produce Q0 units of output given the prices r and w for the inputs. Then she wants to minimize total costs rK + wL subject to the constraint that she produces at least Q0 units:

Q0 ≤ Q(K, L).

Here expenditure on inputs is the objective function whereas the production function is the constraining function. Again from the Principles lecture we know that in the cost minimum the iso-cost line must be tangential to the isoquant. Since the slope of an iso-cost line rK + wL = constant is the negative of the relative price of inputs r/w, it follows that in the optimum the marginal rate of substitution (∂Q/∂K)/(∂Q/∂L) must equal the relative price:

(∂Q/∂K)/(∂Q/∂L) = r/w.

In our example ∂Q/∂K = L and ∂Q/∂L = K, so

L/K = r/w   or   wL = rK
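The worked consumer optimum can be checked numerically. The following sketch (plain Python; the brute-force search over integer bundles is my own device, not part of the handout) confirms that the formulas x* = b/(2p), y* = b/(2q) pick out the best bundle on the budget line:

```python
# Check of the consumer optimum for u(x, y) = x*y with b = 100,
# p = 2, q = 5: the formulas x* = b/(2p), y* = b/(2q) predict (25, 10).
b, p, q = 100, 2, 5

def utility_on_budget_line(x):
    # spend the rest of the budget on oranges: y = (b - p*x)/q
    return x * (b - p * x) / q

# search all affordable integer amounts of apples
x_star = max(range(0, b // p + 1), key=utility_on_budget_line)
y_star = (b - p * x_star) / q
print(x_star, y_star)  # 25 10.0
```

The maximizer agrees with the closed-form solution because the utility along the budget line is a concave quadratic in x with its vertex at x = b/(2p).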

so in the optimum equal amounts are spent on both inputs. This is the first equation. As the second equation we have that in the optimum the firm will produce exactly Q0 units and no more, i.e., the cost-minimizing optimum (K*, L*) must lie on the isoquant for Q0:

Q0 = Q(K, L).

In our example suppose that r = 2, that w = 5 and that the firm wants to produce 250 units of output at the lowest cost. Then the optimum (K*, L*) must satisfy the conditions

L/K = 2/5   and   250 = KL.

Substituting L = (2/5)K in the last equation yields

250 = 2 × 125 = 2 × 5³ = (2/5)K²

5⁴ = K²

K* = 5² = 25

L* = (2/5)K* = 10.

The optimal input combination is (K*, L*) = (25, 10).

2.3 The general problem

In general we want to maximize or to minimize an objective function z = f(x, y) subject to a constraint g(x, y) ≤ c, where c is a constant. We are interested in the case where the constraint is binding in the optimum, i.e., where g(x, y) = c holds at the optimum. (Otherwise the constraint is always satisfied near the optimum and the optimum must be a critical point of f.) The optimum must then solve the two conditions

(∂f/∂x)/(∂f/∂y) = (∂g/∂x)/(∂g/∂y)

g(x, y) = c
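The two optimum conditions of the cost-minimization example, the tangency wL = rK and the output constraint KL = Q0, can be checked quickly in code (plain Python, variable names mine):

```python
# Cost minimization with Q(K, L) = K*L, r = 2, w = 5, Q0 = 250.
# Tangency gives L = (r/w)*K; the constraint K*L = Q0 then gives
# (r/w)*K**2 = Q0, so K = sqrt(Q0*w/r).
r, w, Q0 = 2, 5, 250
K = (Q0 * w / r) ** 0.5   # 25.0
L = (r / w) * K           # 10.0
print(K, L)  # 25.0 10.0
```

Both conditions hold at the computed point: rK = wL = 50 (equal spending on the inputs) and KL = 250.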

3 The Lagrangian approach

The Lagrangian method is an alternative way to derive these two conditions. The method transforms the constrained optimization problem into an unconstrained optimization problem in conjunction with a pricing problem. First we form out of the two given functions f and g a new function which depends on an additional parameter λ, the so-called Lagrange multiplier. This new function, called the Lagrangian, is

L(x, y) = f(x, y) − λ(g(x, y) − c)

Then we look for an unconstrained maximum or minimum of this function and determine the Lagrange multiplier such that the constraint holds with equality in the optimum. We discuss here only the maximization problem.

For a maximization problem the interpretation is as follows: instead of strictly forbidding that the constraint g(x, y) ≤ c ever gets violated, we allow it to be violated, but at a price. This price is λ. It is a fictitious price in a fictitious problem and hence called a "shadow price". Suppose the constraint is violated; g(x, y) − c is then the number of units by which the constraint is violated. When each unit of violation costs λ, the total amount λ(g(x, y) − c) must be paid for the violation. This amount is subtracted from the value of the objective function which we want to maximize. (If the constraint is not violated, there is a reward.)

To find the maximum of the Lagrangian we solve the first order conditions

∂L/∂x = ∂f/∂x − λ ∂g/∂x = 0   or   ∂f/∂x = λ ∂g/∂x   (5)

∂L/∂y = ∂f/∂y − λ ∂g/∂y = 0   or   ∂f/∂y = λ ∂g/∂y   (6)

Notice that if we divide the two equations on the right we get

(∂f/∂x)/(∂f/∂y) = (∂g/∂x)/(∂g/∂y)   (7)

In addition we want the constraint to hold with equality in the optimum:

g(x, y) = c   (8)

Now suppose we have found an unconstrained absolute maximum (x*, y*) of the Lagrangian, and suppose we were able to choose a positive λ such that (x*, y*) satisfies the constraint with equality, g(x*, y*) = c. Then (x*, y*) is automatically an optimum of the constrained optimization problem. We have

L(x*, y*) = f(x*, y*) − λ(g(x*, y*) − c) = f(x*, y*)

because the constraint holds with equality at (x*, y*). Now consider any pair (x, y) that does not violate the constraint, i.e., that satisfies g(x, y) ≤ c. Then −λ(g(x, y) − c) is non-negative, so f(x, y) ≤ L(x, y). Moreover L(x, y) ≤ L(x*, y*) because (x*, y*) is an absolute maximum of the Lagrangian. Overall we obtain

f(x, y) ≤ L(x, y) ≤ L(x*, y*) = f(x*, y*)

for all (x, y) satisfying the constraint. This means that (x*, y*) is a constrained maximum.

We then solve the two equations (7) and (8) for x and y. If the need arises, we can calculate the value of the Lagrange multiplier λ from equation (5) or (6). The Lagrangian was just a tool to derive these equations.

3.1 Utility maximization again

In this case the Lagrangian is

L(x, y) = u(x, y) − λ(px + qy − b) = u(x, y) + λ(b − px − qy)

It can be interpreted as follows: in the original problem saving is not possible. Now suppose it is possible to save and also to go into debt. The savings are

s = b − px − qy

Now suppose there is some future utility from saving because the money can be spent in the future. For simplicity we assume that this future utility is linear in savings and given by λs. Then λ is the marginal utility of saving a penny. One also speaks of the marginal utility of money. The Lagrangian can then be interpreted as the total utility from consumption today and in the future. When we solve the constrained optimization problem with the Lagrangian approach we effectively ask: what would the marginal utility of saving have to be such that it is optimal not to save and not to go into debt?

3.2 Cost minimization again

The firm wants to minimize the costs rK + wL subject to the constraint Q0 ≤ Q(K, L). First we transform the problem so that it fits with our general framework above. Minimizing rK + wL is the same as maximizing −rK − wL; now we have a maximization problem. Next we need the constraint in the form g(x, y) ≤ c with the constant on the right. Therefore we rewrite Q0 ≤ Q(K, L) as −Q(K, L) ≤ −Q0. The Lagrangian becomes

L(K, L) = −rK − wL − λ(−Q(K, L) + Q0) = (λQ(K, L) − rK − wL) − λQ0

We see that, up to a constant which will drop out in the differentiation, the Lagrangian is just the profit function of the firm when the output price is λ. When we solve the cost minimization problem with the Lagrangian method we therefore ask: what would the output price have to be such that it is optimal to produce exactly Q0 units of output?
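As a concrete check of the recipe, the first order conditions (5) and (6) together with the budget equation (8) can be solved by hand for the example u(x, y) = xy. The short sketch below (plain Python; variable names are mine) reproduces the optimum and the shadow price:

```python
# Lagrangian first order conditions for u(x, y) = x*y with
# budget p*x + q*y = b:
#   dL/dx = y - lam*p = 0,  dL/dy = x - lam*q = 0,  p*x + q*y = b.
# Substituting x = lam*q and y = lam*p into the budget gives
# 2*p*q*lam = b, i.e. lam = b/(2*p*q).
p, q, b = 2, 5, 100
lam = b / (2 * p * q)   # shadow price: marginal utility of money
x = lam * q
y = lam * p
print(x, y, lam)  # 25.0 10.0 5.0
```

The solution (25, 10) agrees with the tangency-plus-budget method of Section 2.1, and λ = 5 is the marginal utility of money in this example.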

4 Objective

Two types of optimization problems frequently occur in economics: unconstrained optimization problems like profit maximization and constrained optimization problems like the utility maximization problem of a consumer constrained by a budget. In this handout we discuss the main step in solving an unconstrained optimization problem, namely how to find the two "first order conditions" and how to solve them. The conditions do not tell us whether we have a maximum, a minimum or a saddle point. To find out the latter we will have to look at the second derivatives and the so-called "second order conditions", as will be discussed in a later lecture, after we have discussed constrained optimization problems. The relevant reading for this handout is Chapter 7, Sections 1 and 2 in the textbook of Hoffmann and Bradley. The relevant reading for the previous handout is Chapter 7, Section 3.

5 Optimization problems

5.1 Unconstrained optimization

Suppose we want to find the absolute maximum of a function z = f(x, y) in two variables, i.e., we want to find the pair of numbers (x*, y*) such that f(x*, y*) ≥ f(x, y) holds for all (x, y). If (x*, y*) is such a maximum, the following must hold: if we freeze the variable y at the optimal value y* and vary only the variable x, then the function f(x, y*), which is now a function in just one variable, has a maximum at x*. Typically the maximum of this function in one variable will be a critical point. Hence we expect the partial derivative ∂z/∂x to be zero at the optimum. Similarly we expect the partial derivative ∂z/∂y to be zero at the optimum. We are thus led to the first order conditions

∂z/∂x = 0   and   ∂z/∂y = 0   at (x, y) = (x*, y*)

which must typically be satisfied. We speak of first order conditions because only the first derivatives are involved. The solutions to this system of two equations in two unknowns can be local maxima, minima or saddle points. How to distinguish between these types will be discussed in a later handout.
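The first order conditions can be verified numerically: at an interior maximum both partial derivatives vanish. A minimal sketch with central finite differences (the quadratic test function and step size are my own choices, not from the handout):

```python
# At the maximum of f(x, y) = -(x - 1)**2 - (y - 2)**2, which sits
# at (1, 2), both partial derivatives should be (numerically) zero.
def f(x, y):
    return -(x - 1) ** 2 - (y - 2) ** 2

h = 1e-6
x_star, y_star = 1.0, 2.0
# central difference approximations of the two partials
df_dx = (f(x_star + h, y_star) - f(x_star - h, y_star)) / (2 * h)
df_dy = (f(x_star, y_star + h) - f(x_star, y_star - h)) / (2 * h)
print(abs(df_dx) < 1e-8, abs(df_dy) < 1e-8)  # True True
```

Away from (1, 2) the same differences are visibly non-zero, which is exactly the content of the first order conditions.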

Example 1 Consider a price-taking firm with production function Q(K, L). Let r be the interest rate (the price of capital K), let w be the wage rate (the price of labour L) and let P be the price of the output the firm is producing. When the firm uses K units of capital and L units of labour she can produce Q(K, L) units of the output and hence make a revenue of P × Q(K, L) by selling each unit of output at the market price P. Her production costs will be her total expenditure on the inputs capital and labour, rK + wL. Her profit will be the difference

Π(K, L) = P Q(K, L) − rK − wL.

The firm tries to find the input combination (K*, L*) which maximizes her profit. To find it we want to solve the first order conditions

∂Π/∂K = P ∂Q/∂K − r = 0   (9)

∂Π/∂L = P ∂Q/∂L − w = 0   (10)

These conditions are intuitive. Suppose for instance P ∂Q/∂K − r > 0. Here ∂Q/∂K is the marginal product of capital, i.e., it tells us how much more can be produced by using one more unit of capital. P ∂Q/∂K is the marginal increase in revenue when one more unit of capital is used and the additional output is sold on the market. r is the marginal cost of using one more unit of capital. Thus P ∂Q/∂K − r > 0 means that the marginal profit from using one more unit of capital is positive: it increases profit to produce more by using more capital, and therefore we cannot be in a profit optimum. Correspondingly, P ∂Q/∂K − r < 0 would mean that profit can be increased by producing less and using less capital. So (9) should hold in the optimum. Similarly (10) should hold.

It is useful to rewrite the two first order conditions as

∂Q/∂K = r/P   (11)

∂Q/∂L = w/P   (12)

Thus, for both inputs it must be the case that the marginal product is equal to the ratio of the input price to the output price. When we divide the two left-hand sides and equate them with the quotient of the right-hand sides we obtain as a consequence

(∂Q/∂K)/(∂Q/∂L) = (r/P)/(w/P) = r/w   (13)

so the marginal rate of substitution must be equal to the ratio of the input prices (which is the relative price of capital in terms of labour).

Example 2 Let us be more specific and assume that the production function is Q(K, L) = K^(1/6) L^(1/2) and that the output and input prices are P = 12, r = 1 and w = 3. Then

∂Q/∂K = (1/6) K^(−5/6) L^(1/2),   ∂Q/∂L = (1/2) K^(1/6) L^(−1/2)

(∂Q/∂K)/(∂Q/∂L) = ((1/6) K^(−5/6) L^(1/2)) / ((1/2) K^(1/6) L^(−1/2)) = (1/3) K^(−5/6 − 1/6) L^(1/2 + 1/2) = L/(3K)

The first order conditions (11) and (12) become

(1/6) K^(−5/6) L^(1/2) = 1/12   (14)

(1/2) K^(1/6) L^(−1/2) = 3/12   (15)

This is a simultaneous system of two equations in two unknowns. The implied condition (13) becomes

L/(3K) = 1/3

which simplifies to

L = K

Example 3 A monopolist with total cost function TC(Q) = Q² sells his product in two different countries. When he sells QA units of the good in country A he will obtain the price PA = 22 − 3QA for each unit. When he sells QB units of the good in country B he obtains the price PB = 34 − 4QB. How much should the monopolist sell in the two countries in order to maximize profits? To solve this problem we first calculate the profit function as the sum of the revenues in each country minus the production costs. Total revenue in country A is

TRA = PA QA = (22 − 3QA) QA

Total revenue in country B is

TRB = PB QB = (34 − 4QB) QB

Total production costs are

TC = (QA + QB)²

Therefore the profit is

Π(QA, QB) = (22 − 3QA) QA + (34 − 4QB) QB − (QA + QB)²,

a quadratic function in QA and QB. In order to find the profit maximum we must find the critical points, i.e., we must solve the first order conditions

∂Π/∂QA = −3QA + (22 − 3QA) − 2(QA + QB) = 22 − 8QA − 2QB = 0

∂Π/∂QB = −4QB + (34 − 4QB) − 2(QA + QB) = 34 − 2QA − 10QB = 0

or

8QA + 2QB = 22
2QA + 10QB = 34   (16)

Thus we have to solve a linear simultaneous system of two equations in two unknowns.

6 Simultaneous systems of equations

6.1 Linear systems

We illustrate four methods to solve a linear system of equations using the example

5x + 7y = 50
4x − 6y = −18   (17)

6.1.1 Using the slope-intercept form

We rewrite both linear equations in slope-intercept form:

5x + 7y = 50
7y = 50 − 5x
y = 50/7 − (5/7)x

4x − 6y = −18
4x + 18 = 6y
y = (2/3)x + 3

[Figure: the two lines y = 50/7 − (5/7)x and y = (2/3)x + 3, intersecting at (3, 5)]
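The system (16) can be solved by elimination before reading on; the snippet below (plain Python, my own arrangement) multiplies the second equation by 4 and subtracts the first:

```python
# First order conditions (16): 8*QA + 2*QB = 22, 2*QA + 10*QB = 34.
# Multiply the second equation by 4: 8*QA + 40*QB = 136; subtracting
# the first equation leaves 38*QB = 114.
QB = (4 * 34 - 22) / (4 * 10 - 2)   # 114/38 = 3.0
QA = (22 - 2 * QB) / 8              # (22 - 6)/8 = 2.0
print(QA, QB)  # 2.0 3.0
```

So the monopolist sells 2 units in country A and 3 units in country B; substituting back into (16) confirms both equations.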

The solutions to each equation form a line, which we have now described as the graph of a linear function. At the intersection point of the two lines the two linear functions must have the same y-value. Hence

50/7 − (5/7)x = (2/3)x + 3   | × 3 × 7

150 − 15x = 14x + 63

150 − 63 = 14x + 15x = 29x

87 = 29x

x = 87/29 = 3

We have found the value of x in a solution. To calculate the value of y we use one of the linear functions in slope-intercept form:

y = (2/3) × 3 + 3 = 5

The solution to the system of equations is x* = 3, y* = 5. It is strongly recommended to check the result in the original system of equations:

5 × 3 + 7 × 5 = 15 + 35 = 50
4 × 3 − 6 × 5 = 12 − 30 = −18

6.1.2 The method of substitution

First we solve one of the equations in (17), say the second, for one of the variables, say y:

4x − 6y = −18
4x + 18 = 6y
y = (4/6)x + 3 = (2/3)x + 3   (18)

Then we replace y in the other equation by the result. (Do not forget to place brackets around the expression.) We obtain an equation in only one variable, x, which we solve for x:

5x + 7((2/3)x + 3) = 50

5x + (14/3)x + 21 = 50

(15/3)x + (14/3)x + 21 = 50

(29/3)x = 50 − 21 = 29

x = 29 × 3/29 = 3
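The substitution method translates directly into a short computation (plain Python; the rearranged back-substitution formula is mine, chosen to keep the arithmetic exact):

```python
# Substitution method for 5x + 7y = 50, 4x - 6y = -18:
# from the second equation y = (2/3)x + 3; substituting into the
# first gives (29/3)x = 29, i.e. x = 3.
x = 29 * 3 / 29
# back-substitute; (2x + 9)/3 is the same as (2/3)x + 3
y = (2 * x + 9) / 3
print(x, y)  # 3.0 5.0
```

Checking in the original equations: 5·3 + 7·5 = 50 and 4·3 − 6·5 = −18, as required.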

We have found x and can now use (18) to find y:

y = (2/3)x + 3 = (2/3) × 3 + 3 = 5

Hence the solution is x = 3, y = 5.

6.1.3 The method of elimination

To eliminate x we proceed in two stages. First we multiply the first equation by the coefficient of x in the second equation, and we multiply the second equation by the coefficient of x in the first equation:

5x + 7y = 50   | × 4
4x − 6y = −18   | × 5

20x + 28y = 200
20x − 30y = −90

Then we subtract the second equation from the first equation:

20x + 28y = 200
20x − 30y = −90   | −

0 + 28y − (−30y) = 200 − (−90)

58y = 290

y = 290/58 = 5

Having found y we use one of the original equations to find x:

5x + 7y = 50
5x + 35 = 50
5x = 15
x = 3

Instead of first eliminating x we could have first eliminated y:

5x + 7y = 50   | × 6
4x − 6y = −18   | × 7

30x + 42y = 300
28x − 42y = −126   | +

58x = 174

x = 3

6.1.4 Cramer's rule

This is a little preview of linear algebra. In linear algebra the system of equations is written as

[ 5   7 ] [ x ]   [  50 ]
[ 4  −6 ] [ y ] = [ −18 ]

where

[ x ]        [  50 ]
[ y ]  and   [ −18 ]

are so-called column vectors, i.e., simply two numbers written below each other and surrounded by square brackets, and

[ 5   7 ]
[ 4  −6 ]

is the 2 × 2 matrix of coefficients. A 2 × 2 matrix is a system of four numbers arranged in a square and surrounded by square brackets. With each 2 × 2 matrix

A = [ a  b ]
    [ c  d ]

we associate a new number called the determinant

det A = | a  b | = ad − cb
        | c  d |

Notice that the determinant is indicated by vertical lines, in contrast to square brackets. For instance,

| 5   7 | = 5 × (−6) − 4 × 7 = −30 − 28 = −58
| 4  −6 |

Cramer's rule uses determinants to give an explicit formula for the solution of a linear simultaneous system of equations of the form

ax + by = e
cx + dy = f

or

[ a  b ] [ x ]   [ e ]
[ c  d ] [ y ] = [ f ]

Namely,

x* = (ed − bf)/(ad − bc)   and   y* = (af − ce)/(ad − bc)

Thus x* and y* are calculated as quotients of two determinants. In both cases we divide by the determinant of the matrix of coefficients. For x* the numerator is the determinant of the matrix obtained by replacing in the matrix of coefficients the coefficients a and c of x by the constant terms e and f, i.e., the first column is replaced by the column vector with the constant terms:

| e  b |
| f  d |

Similarly, for y* the numerator is the determinant of the matrix obtained by replacing the coefficients b and d of y by the constant terms e and f, i.e., the second column is replaced by the column vector with the constant terms:

| a  e |
| c  f |

The method works only if the determinant in the denominator is not zero.

In our example,

x* = (50 × (−6) − (−18) × 7) / (5 × (−6) − 4 × 7) = (−300 + 126)/(−30 − 28) = −174/−58 = 3

y* = (5 × (−18) − 4 × 50) / (−58) = (−90 − 200)/(−58) = −290/−58 = 5

Exercise 1 Use the above methods to find the optimum in Example 3, i.e., solve the system (16).

6.1.5 Existence and uniqueness

A linear simultaneous system of equations

ax + by = e
cx + dy = f

can have zero, exactly one or infinitely many solutions. To see that only these cases can arise, bring both equations into slope-intercept form¹:

y = e/b − (a/b)x
y = f/d − (c/d)x

If the slopes of these two linear functions are the same, they describe identical or parallel lines. The slopes are identical when a/b = c/d, i.e., when the determinant of the matrix of coefficients ad − cb is zero. If the intercepts are different, the two equations describe parallel lines. These do not intersect and hence there is no solution. If in addition the intercepts are equal, both equations describe the same line. In this case all points (x, y) on the line are solutions.

¹ We assume here for simplicity that the coefficients b and d are not zero. The argument can easily be extended to these cases, except when all four coefficients are zero. The latter case is trivial: either there is no solution or all pairs of numbers (x, y) are solutions.

Example 4 The system

x + 2y = 3
2x + 4y = 4

has no solution: the two lines

y = 3/2 − (1/2)x
y = 1 − (1/2)x
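Cramer's rule is easy to implement directly; this sketch (the function name is my own) reproduces the worked example and also signals the degenerate zero-determinant case:

```python
# Cramer's rule for the 2x2 system a*x + b*y = e, c*x + d*y = f.
def cramer_2x2(a, b, c, d, e, f):
    det = a * d - c * b          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    x = (e * d - b * f) / det    # first column replaced by (e, f)
    y = (a * f - c * e) / det    # second column replaced by (e, f)
    return x, y

print(cramer_2x2(5, 7, 4, -6, 50, -18))  # (3.0, 5.0)
```

Applied to Example 4 (determinant 1×4 − 2×2 = 0) the function raises an error instead of returning a spurious answer, matching the existence discussion above.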

15 . but admittedly some luck is needed to solve the non-linear equation in one variable. Trying to ¯nd one yields a contradiction 3 1 1 ¡ x = y =1¡ x 2 2 2 3 = 1 2 1 j+ x 2 6. One needs to memorize some tricks which work in special cases. for instance. one linear y2 + x ¡ 1 = 0 1 y+ x = 1 2 Consider. One may have to try both possibilities. This makes the problem simpler.3 Two non-linear equations There is no general method.2 One equation non-linear. Solving the linear equation for x can sometimes give a simpler non-linear equation than solving for y and vice versa. 6. As a result we obtain one non-linear equation in a single unknown. In our example it is convenient to solve for x: 1 x = 1¡y 2 x = 2 ¡ 2y 2 y + (2 ¡ 2y) ¡ 1 = y 2 ¡ 2y + 1 = (y ¡ 1)2 = 0 So the unique solution is y ¤ = 1 and x¤ = 2 ¡ 2 £ 1 = 0. In this case we solve the linear equation for one of the variables and substitute the result into the non-linear equation.are parallel 2 -2 -1 0 -1 1 2 x 3 4 5 There is no common solution.

In the example of profit maximization with a Cobb-Douglas production function we were led to the system

(1/6) K^(−5/6) L^(1/2) = 1/12   (14)

(1/2) K^(1/6) L^(−1/2) = 3/12   (15)

This is a simultaneous system of two equations in two unknowns which looks hard to solve. However, as we have seen, division of the two equations shows that L = K must hold in a critical point. We can substitute this into, say, the first equation and obtain

1/12 = (1/6) K^(−5/6) L^(1/2) = (1/6) K^(−5/6) K^(1/2) = (1/6) K^(−5/6 + 3/6) = (1/6) K^(−1/3)

K^(−1/3) = 1/2

1/∛K = 1/2

∛K = 2

K = 2³ = 8

Thus the solution to the first order conditions (and, in fact, the optimum) is K* = L* = 8. This method is successful, but admittedly some luck is needed to solve the non-linear equation in one variable.

Remark 1 This method works with any Cobb-Douglas production function Q(K, L) = K^a L^b where the exponents a and b are positive and have a sum less than 1.

Remark 2 The first order conditions (14) and (15) form a "hidden" linear system of equations. Namely, if you take logarithms (introduced later!) you get the system

ln(1/6) − (5/6) ln K + (1/2) ln L = ln(1/12)

ln(1/2) + (1/6) ln K − (1/2) ln L = ln(3/12)

which is linear in ln K and ln L. One can solve this system for ln K and ln L, which then gives us the values of K and L.
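Remark 2 can be carried out explicitly. The sketch below (standard-library Python only; the elimination steps are spelled out in the comments) solves the log-linearized system and recovers K* = L* = 8:

```python
# Taking logs of (14) and (15) and moving the constants right gives
#   -(5/6) lnK + (1/2) lnL = ln(1/12) - ln(1/6) = ln(1/2)
#    (1/6) lnK - (1/2) lnL = ln(3/12) - ln(1/2) = ln(1/2)
# a linear system in lnK and lnL.
import math

rhs = math.log(0.5)
# Adding the two equations eliminates lnL:
#   -(2/3) lnK = 2*ln(1/2)  =>  lnK = -3*ln(1/2) = 3 ln 2
lnK = -3 * rhs
# Back-substitute into the first equation: (1/2) lnL = rhs + (5/6) lnK
lnL = 2 * (rhs + (5 / 6) * lnK)
K, L = math.exp(lnK), math.exp(lnL)
print(round(K, 6), round(L, 6))  # 8.0 8.0
```

The log transform turns the multiplicative Cobb-Douglas conditions into ordinary linear equations, which is why it is such a standard trick for this family of production functions.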