[Cover figure: Profits, y, plotted against revenues, x, contrasting the linear law y = hx + c with the power-law y = mx^n; the maximum point on the curve is labeled "Mount Profit".]
Table of Contents

Topic                                                               Page No.
1. Summary                                                                 3
2. Introduction                                                            4
3. The Case for Deducing Simple Mathematical Laws                          5
4. Three Types of Companies: Linear Law
   (e.g., Google, Facebook, ExxonMobil, Ford, Best Buy)                   10
5. Is there a Mount Profit?                                               26
6. Non-linear Behavior: Power-law
   (Acceleration and Deceleration of Profits Growth)                      28
7. Discussion: The Generalized Planck Law                                 32
Conclusions                                                               37
Appendix 1: Brief discussion of cost-cutting                              39
Appendix 2: Hubble's 1929 paper on Hubble's law, the expanding
   Universe, and the Big Bang                                             44
Appendix 3: Facebook IPO debacle discussion                               54
Appendix 4: Quarterly Profits-Revenues data for Ford                      55
My Open Letter to some friends and to all who may be interested:
   Let's build a Profits Engine                                           58
Page 2 of 62
1. Summary
It appears that simple mathematical laws describing the profits-revenues behavior of a company can be conceived by treating energy in physics as equivalent to money in economics. This mathematical analogy permits an extension of Planck's law from physics to economics, with a reinterpretation of the significance of the various mathematical symbols. A careful study of the readily available profits-revenues data for several companies, both large and small, indicates the following:

- There are three types of companies, called Type I, Type II, and Type III, all of which follow the simple linear law y = hx + c, depending on the values of h and c (positive or negative).

- While the large majority of companies follow the linear law y = hx + c, a few follow the simplest type of nonlinear law, the power-law y = mx^n + c. This again yields two possibilities: decelerating profits growth with increasing revenues (n < 1) and accelerating profits growth with increasing revenues (n > 1). The third case, n = 1, yields the linear law.

What is remarkable is that these purely mathematical speculations can be confirmed with examples of all three types of linear behavior and both types of nonlinear behavior. All of these are indeed observed in the real world. This is discussed here by citing the financial data for Facebook, Google, ExxonMobil, Ford Motor Company, Best Buy, and Universal Insurance Holdings, Inc. The last company in this list is currently ranked No. 2 in the Fortune Small Business 100.

These observations lead us to believe that there must be at least a few recorded examples of public companies that operate in the region past (i.e., to the right of) the maximum point of the power-exponential law, i.e., where profits decrease even as revenues increase. Such a decline in profits will eventually become unsustainable, and the companies will head towards bankruptcy. Finally, Mount Profit (the maximum point on the profits-revenues curve) is something we do not want to see. And, if we do see the peak of Mount Profit, we would be better advised to stay on that side of its peak where profits continue to rise.
2. Introduction
All these years, and it seems like such a long, looonnnngggg time, I have resisted becoming a part of the modern social media trend and the friend requests that I get from, yes, sometimes genuine friends and family members too. Sometimes I have even called myself "anti-Facebook" to describe my gut resistance to these extended contacts with not just friends but their friends' friends' friends and God only knows who else. And, yes, I do personally know of a couple of tragic situations that evolved from such extended contacts (I don't mean Facebook specifically, but social media contacts in general).

Little did I know that Facebook would become the poster child for sharing what I have done here about the need for the application of more rigorous scientific methods to the analysis of the huge volumes of financial data that are now being published, on a quarterly and annual basis, for literally hundreds and thousands of companies worldwide. It all started when Facebook launched its IPO and its stock started heading south soon after trading began on Nasdaq on Friday, May 18, 2012. Then, for the first time, I started paying attention to Facebook as a company and to what the Wall Streeters were speculating about. Now, I have even started wondering why we do not see Facebook release monthly, or even weekly, financial statements! Maybe Facebook, Inc. can start a new trend in this as well and start releasing monthly financial statements. Then I would have even more data to analyze and even more MRPs to calculate. And maybe many more can merrily join this new data mining industry. Cheers!

This brings us to an interesting point that we can learn from (or speculate about) the simple mathematical laws that I have been talking about in the two earlier documents on Facebook. We will call them Ref. [1] and Ref. [2] in what follows.

1. The FaceBook Future: Revenues-Profits Analysis, published May 19, 2012. http://www.scribd.com/doc/94103265/The-FaceBook-Future
2. The Future of Facebook I, published May 21, 2012. http://www.scribd.com/doc/94325593/The-Future-of-Facebook-I
In fact, in my doctoral thesis work, when we started analyzing my experimental results (very briefly, we built a special instrument, similar to a parallel-plate viscometer, to determine the viscosity of a new type of partially solid metal alloy, with a novel microstructure, first produced at MIT), we first used a simple linear law to explain our data. The linear law of interest was Newton's law for the viscosity of a fluid, which is usually written as y = ηx, where the constant η is the viscosity and x and y are the two quantities (shear rate, x, and shear stress, y) that we measure in the experiment. There seemed to be an unmistakable deviation from the linear law. This led us to the non-linear law, or the power-law, which became the hallmark of the mathematical analysis of the experimental data presented in my doctoral thesis; see V. Laxmanan and M. C. Flemings, Metallurgical and Materials Transactions A, Volume 11, Number 12 (1980), pp. 1927-1937, DOI: 10.1007/BF02655112, http://www.springerlink.com/content/45uj4gh37051172r/.

The power-law is written as y = mx^n, where n is the power-law index. This is a special case of the more general law for the CMB spectrum (with a = b = 0 and c = 0). For n = 1 we get the linear law y = mx, or y = ηx. We use different symbols for the constants since the numerical values of the constants change when we go from one type of behavior to the other. More generally, the power law is y = mx^n + c, where c is the nonzero intercept made by the curve on the y-axis. In the real world, when x = 0, y is NOT always equal to zero. A nonzero constant must often be added, as also recognized by Planck himself when he developed his expression for the entropy S; see the discussion in Section 7, Generalized Radiation Law, and Appendix 1.
There are many such examples of laws with non-zero intercepts (see Ref. [1]), the most common ones being: the hydrostatic law, which describes how pressure varies with depth below the free surface of water (as in a swimming pool, a huge lake, or the ocean); Charles's law, describing how the volume of a gas increases with temperature; and Einstein's law, which describes the photoelectric effect. The modern photocells that we use widely work on the photoelectric principle.
As we have discussed already, in the world of finance, or economics, the nonzero intercept is related to costs in the simple relation given below, first in words in equation (1) and then restated using math symbols.

Profits = Revenues - Costs .... (1)
P = R - C .... (2)
P/R = 1 - (C/R) .... (3)
Since the cost C is inherently nonzero (see Appendix 1 for a brief aside about how costs affect us all), the profit margin, P/R, can either increase or decrease as revenues increase. Because of the nonzero C, a doubling of the revenues will NOT lead to a doubling of the profits. But this is exactly what we implicitly assume when we use the profit margin P/R to compare different companies, within a sector of the economy, or even across sectors. The higher the profit margin, the better the company. This also means, logically speaking, that we expect a doubling of the revenues to double the profits. But we all know that this is NOT true.

We have actually witnessed companies like GM, Ford, GE, ExxonMobil, Microsoft, and Walmart grow and mature into huge companies reporting hundreds of billions of dollars in revenues and billions of dollars in profits, but never once has anyone bothered to check if a doubling of the revenues did indeed lead to a doubling of the profits! Or to ask why it did not, if it did not.

Consider the data for Ford Motor Company for the period 2000-2011 (see Table 4 in a later section where the Ford data is discussed). In 2009, Ford reported profits of $2.72 billion with revenues of $116.3 billion. In 2010, the profits had more than doubled to $6.6 billion with revenues of $128.95 billion, an increase of only about 11% in revenues. Or, explain the tripling of profits from $6.6 billion in 2010 to $20.21 billion in 2011, with revenues going up by only about 5%. Profits increased by $13.65 billion while revenues went up by only $7.31 billion.
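The arithmetic above can be sketched in a few lines of code. The slope h and intercept c below are purely illustrative values, not constants fitted to Ford's (or any company's) data; the point is only that, under y = hx + c with a negative intercept, doubling revenues more than doubles profits:

```python
# Sketch: under the linear law y = h*x + c, doubling revenues does NOT
# simply double profits. The slope h and intercept c are illustrative
# values only, not fitted to any company's reported data.
def profits(revenues, h, c):
    """Profits under the linear law y = h*x + c."""
    return h * revenues + c

h, c = 0.25, -20.0          # hypothetical slope (MRP) and intercept, $ billions

y1 = profits(100.0, h, c)   # $100B in revenues -> $5B in profits
y2 = profits(200.0, h, c)   # revenues doubled  -> $30B in profits

print(y1, y2, y2 / y1)      # profits grow six-fold because c < 0
```

The ratio y/x is 5% at the first point and 15% at the second: the "profit margin" of the very same company, obeying the very same law, has tripled.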
How do we explain this behavior using the y/x ratio as one of the leading measures of performance and profitability? Yet we continue to use the simple profit margin y/x, or P/R, blithely, every day, to analyze the financial performance of companies. And we use the ubiquitous EPS, which is also a simple ratio, just like the profit margin (PM). And, very soon, companies both good and bad are driven out of business since they cannot meet Wall Street expectations about the continuous increases in the EPS and the PM demanded by Wall Street analysts, along with increasing revenues. These are unreasonable expectations.

Let me repeat what I said in Ref. [2]. It is not all just Wall Street greed. It is also a lot of Wall Street stupidity, since there are unappreciated laws that seem to dictate how a company will grow and mature. This can only be understood if we shed the current obsession with ratio analysis and start using rigorous scientific and mathematically sound approaches to analyzing the large masses of business data that are being compiled each quarter, summarized in annual reports, and in many special reports at the end of each decade, or every 15 years, or 25 years, or 50 years, or 75 years, and so on. To me, as an R&D professional who has spent all of his time at some of the world's topmost research institutions, this seemed like an area that was ripe for research. It seemed like the world of finance and economics was operating like astronomy or physics before the advent of Galileo, Kepler, and Newton.
Just take a look at Hubble's law and the raw data on the speed (V) and the distances (D) of galaxies that Hubble reported (after years of painstaking and meticulous observations) in his 1929 paper, which led to his famous law; see http://www.mpa-garching.mpg.de/~lxl/personal/images/science/hub_1929.html Since this is so important to the understanding of why finance, business, and economics majors must take the whole idea of discovering mathematical laws very seriously, I have actually reproduced the entire 1929 paper by Hubble as Appendix 2 and have also replotted the data from Tables 1 and 2 of Hubble's famous paper, for convenience and clarity. Please do take a look.
Does the astronomical data warrant such a conclusion, as far-reaching as Hubble's, which eventually led to the idea of an expanding universe and the Big Bang theory of creation? No finance major would dare to, or even dream about, proposing such a law based on the quality of the raw data that Hubble presented in 1929. But Hubble did make such a conjecture, and even Einstein accepted it and later called his introduction of the cosmological constant into general relativity (this was before the publication of Hubble's paper in 1929) the biggest blunder of his life.

Let's summarize here some facts to ponder. Hubble's observations were limited to astronomical distances of just a few megaparsecs (Mpc) and maximum galaxy velocities of about 1000 to 2000 kilometers per second (km/s). The slope of the graph, now called the Hubble constant H0, that he deduced was 500 (in units of km/s per Mpc). This has since been revised and is believed to be in the range of 60 to 80, based on observations of galaxies that are several thousands of Mpc away and moving at far higher velocities than those reported by Hubble. (For comparison, the speed of light is 300,000 km/s, or 186,000 miles per second.) This is how science progresses, and astronomy, in particular, has progressed since the days of Kepler and Hubble. In fact, Hubble never received the Nobel Prize since, not so long ago back in the 1930s, astronomy was not considered a discipline worthy of such a high honor. Since then, at least two Nobel Prizes have been awarded for discoveries that confirmed the Big Bang theory and its implications: the 2006 Nobel Prize for Smoot and Mather, for the discovery of the blackbody form and the anisotropy of the cosmic microwave background (CMB) radiation, see http://www.msnbc.msn.com/id/15113168/ns/technology_and_sciencescience/t/americans-win-nobel-big-bang-study/#.T753dcUxeQ4, and the earlier 1978 Nobel Prize for Penzias and Wilson, for the discovery of the CMB radiation, see http://www.bell-labs.com/user/apenzias/nobel.html and http://www.nobelprize.org/nobel_prizes/physics/laureates/1978/.
With this general background, let us now consider at least three different types of companies that one can envision, theoretically, based on this simple mathematical law y = hx + c, which is a restatement of P = R - C.
As discussed in Ref. [1], this is related to the classical breakeven analysis for profitability. The constant c = -a, where a is the fixed cost in the equation

Total Costs = Fixed Costs + Variable Costs, or C = a + bN

where N denotes the number of units of a product that must be sold to generate the revenues R. If k is the unit price, the total revenues R = kN, and hence the profits

P = R - C = kN - (a + bN) = (k - b)N - a .... (4)
This basic relation can be rewritten as P = [(k - b)/k]R - a, since the number of units N = R/k. This yields the linear law between profits and revenues, which can be shown to apply to literally hundreds of companies. (Just prepare the graph for your company of interest and convince yourself!)
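This breakeven algebra can be sketched as follows. The unit price, variable cost, and fixed cost below are hypothetical numbers, not taken from any company's reports:

```python
# The classical breakeven analysis behind the linear law P = h*R + c,
# with h = (k - b)/k and c = -a. The unit economics are hypothetical.
k = 20.0       # unit selling price
b = 15.0       # variable cost per unit
a = 50_000.0   # fixed costs

h = (k - b) / k              # slope of the profits-revenues line (the MRP)
c = -a                       # at zero revenue, "profit" is minus the fixed costs
breakeven_revenue = -c / h   # where P = h*R + c = 0, i.e. R = a*k/(k - b)

def profit(revenue):
    return h * revenue + c

print(h, breakeven_revenue)  # 0.25 200000.0
```

Below the breakeven revenue the company reports a loss; above it, every extra dollar of revenue adds h dollars of profit.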
[Figure 1: Annual profits-revenues data for Google, in $ millions. An initial period of acceleration in the profits-revenues growth (n > 1) was quickly followed by a deceleration; the overall long-term trend is the linear law.]

Notice that revenues have more than doubled since the year ending 2007, from $16.6 billion to $37.9 billion. What about the profits? Amazingly, Google's profits have also more than doubled, from $4.2 billion to $9.74 billion. However, the law describing the profits-revenues data is y = hx + c, NOT y/x = constant. As x increases, the ratio c/x becomes smaller and smaller, and y/x approaches the slope h. Notice that 37.9/16.6 = 2.283 and 9.74/4.2 = 2.319: there is an almost, though not an EXACT, doubling of profits with a doubling of revenues. Here Δx = $21.3 billion and Δy = $5.54 billion. If Google's revenues double again, will we see profits doubling again?
[Figure 2]

Here the slope decreases and the intercept changes from a negative value (due to the fixed costs) to a positive value. This does NOT mean the company has a negative fixed cost. Rather, it means that the company has matured and is now operating with a lower MRP (a lower slope h).
Figure 3: The annual revenues and profits data for ExxonMobil for the period 2004-2011, obtained from their quarterly reports. At its peak, in 2008, the company reported revenues of $477.3 billion and profits of $45.22 billion. The revenues were higher in 2011, at $486.4 billion, but profits were lower, only $41 billion. Nonetheless, the graphical representation here shows that the simple linear law y = hx + c holds, and these differences are essentially small fluctuations that are not statistically significant. The company exhibits what has been called Type I behavior, although it clearly operates at revenue levels well above the cut-off, or breakeven, value. This is given by the positive intercept on the x-axis and equals x0 = -c/h = $79,662 million. If these trends continue (and they will if energy prices keep rising), the company should report revenues of $1 trillion and profits in excess of $100 billion in the next decade.

An exactly similar upward-sloping linear trend (i.e., Type I behavior) is revealed if we consider quarterly data (24 quarters were examined) for ExxonMobil. The same trend is revealed if we consider the increase in revenues and profits during the course of a single year, by considering the cumulative revenues and profits for 3, 6, 9, and 12 months. The linear behavior observed here, with a highly mature company like ExxonMobil and a relatively young, but mature, company like Google, also emphasizes the need to exercise extreme caution in applying non-linear laws to analyze the finances of a young and emerging company like Facebook. While nonlinear laws may seem intellectually and mathematically appealing, their hasty application is fraught with dangers, and the unrealistic predictions can actually destroy emerging companies like Facebook. Google revealed a period of highly accelerated profits before settling down to the overall linear trend revealed in Figure 1.
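One way to deduce h, c, and the breakeven cut-off x0 = -c/h from such a plot is an ordinary least-squares fit. The sketch below uses made-up (x, y) pairs that lie exactly on the line y = 0.125x - 10, not ExxonMobil's reported figures:

```python
# Least-squares fit of the linear law y = h*x + c to a profits-revenues
# table, followed by the breakeven cut-off x0 = -c/h. The (x, y) pairs
# are illustrative numbers, not ExxonMobil's reported figures.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    h = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return h, my - h * mx

revenues = [200.0, 280.0, 400.0, 480.0, 560.0]  # $ billions (hypothetical)
profits  = [15.0, 25.0, 40.0, 50.0, 60.0]       # lie exactly on y = 0.125x - 10

h, c = fit_linear(revenues, profits)
x0 = -c / h      # revenues below x0 imply a loss
print(h, c, x0)  # 0.125 -10.0 80.0
```

With real quarterly or annual data the points scatter about the line, and the fitted h and c are the "local" constants discussed in the text.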
Again, the linear law implies that a fixed Δx always yields a fixed Δy, with Δy = hΔx, always. Predicting the future profits thus becomes as simple as predicting the future position of a car going down a highway at a fixed speed v, as discussed in Ref. [2] above. The future location, or the additional distance traveled, Δs, after some additional time Δt, is always given by the equation Δs = vΔt. The slope h, or the
marginal rate of increase of profit (MRP, for convenience), is like the speed v of a moving vehicle. The Type I and Type II companies differ only because of the magnitude of the slope h, which affects the numerical value of the constant c. Note that we are actually dealing with local values of the constants h and c in profits-revenues space. These values apply over a (limited) range of revenues and profits, much like the situation with Hubble's constant. The exact value of the Hubble constant H0 depends on the distances and the velocities that are being considered in preparing the V-D plot (now called the Hubble diagram). One likes to think that there is a single value of the Hubble constant that applies to ALL galaxies at all distances, and that the universe is and has been expanding uniformly since it was created in a BIG BANG. Yes, some astrophysicists believe this. Others do not. Yet they all believe in Hubble's law and the relation V = H0 D, even if the numerical value of H0 is still a matter of debate. In fact, more recent observations with the Hubble Space Telescope (HST) are leading to a re-examination of the whole idea of uniform expansion: whether there is actually an acceleration, and if so why, and what that all means as far as Einstein's famous blunder is concerned. This is the beauty of astrophysics.

Why can't we start thinking of the growth and maturing of companies, with increasing revenues and profits, in the same way? Let's start thinking about revenues, which range from a few million to several hundreds of billions, much like the vast distance scales that we encounter in the Hubble diagram. Profits are just like the velocities of the galaxies. Like astronomers, we can make accurate observations on a number of companies. These are just like the galaxies that interest an astronomer. Here "accurate observations" means HONEST reporting of financial data, consistent with all rules and regulations. Only then can we deduce laws of far-reaching consequence.
Recall also what Johannes Kepler was able to do with Tycho Brahe's observations on the motion of Mars. This has already been discussed in Ref. [1]. As demonstrated here, the linear law seems to be the most widely observed. What remains, then, is the accurate determination of the slope h, or the MRP.
Revenues, x   Profits, y   Ratio, y/x   Quarter
950           300          0.316        Q4 2011
1060          205          0.193        Q1 2012
1130          145          0.128        Q2 2012
1250          41           0.033        Q3 2012
1340          -37          -0.027       Q4 2012
1500          -175         -0.117       Q1 2013

The mathematical equation describing this behavior is y = -0.864x + 1120.46.
Table 2: Annual and quarterly data for Best Buy (potential Type III behavior)

Fiscal year   Revenues, x    Profits, y    Revenues, x    Profits, y
(ending)      ($, millions,  ($, millions, ($, millions,  ($, millions,
              annual)        annual)       quarterly)     quarterly)
2007          35,934         1,377         12,899         763
2008          40,023         1,407         13,418         737
2009          45,015         1,003         14,724         570
2012          50,705         -1,231        16,630         -1,698
2010          49,694         1,317         16,083         651
2011          49,747         1,277         16,553         779
We see some glimpses of this Type III behavior with Best Buy, which has been struggling to maintain its viability. The annual profits-revenues data, starting with the fiscal year ending March 2007 through 2012, is given here, along with the quarterly data. These suggest a pattern of decreasing profits with increasing revenues, and then a sudden plunge into a loss for 2012. Of course, we are overlooking the data for the two intervening years, 2010 and 2011, in this assessment. Nonetheless, the Type III trend is also evident if we compare the annual data for 2010 and 2011. The higher annual revenue
for 2011 yielded lower profits, although this trend is bucked in the quarterly data; see http://phx.corporate-ir.net/phoenix.zhtml?c=83192&p=quarterlyearnings
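The annual figures in Table 2 can be used to compute the profit margins y/x directly (the fiscal-year pairing follows the discussion above):

```python
# Annual profit margins y/x for Best Buy, from the figures in Table 2
# ($ millions), ordered by fiscal year.
annual = [
    ("FY2007", 35_934, 1_377),
    ("FY2008", 40_023, 1_407),
    ("FY2009", 45_015, 1_003),
    ("FY2010", 49_694, 1_317),
    ("FY2011", 49_747, 1_277),
    ("FY2012", 50_705, -1_231),
]

margins = [100.0 * profit / revenue for _, revenue, profit in annual]

for (year, revenue, profit), margin in zip(annual, margins):
    print(f"{year}: revenue {revenue:6d}, profit {profit:6d}, margin {margin:6.2f}%")
```

The margin erodes from roughly 3.8% at the start of the period to a loss at the end, which is the Type III signature the text describes.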
However, this is NOT sustainable, and the company will eventually be forced into bankruptcy, as happened with GM by June 2009. Indeed, such theoretical speculations regarding the Type III mode, and the simple analysis being proposed, can be a useful tool for assessing the future survivability of a company. There are surely many other examples of such very mature, or old, companies, with unsustainable legacy costs, that went into bankruptcy after being on the verge of extinction for a few years. It is to be hoped that Best Buy is able to overcome its Type III syndrome and return to profitability. (Profit margins are noticeably low and rarely exceeded 2% on an annual basis, even when profitable.)

Another example of what more definitely appears to be Type III behavior is seen with Universal Insurance Holdings, Inc. This is a public company (stock symbol AMEX: UVE), trading at $3.55 as of Friday, May 25, 2012. The company has reported a healthy profit for the period 2008-2011 and is ranked No. 2 in the Fortune Small Business (FSB) 100; see http://money.cnn.com/magazines/fsb/fsb100/2009/full_list/index.html and http://money.cnn.com/magazines/fsb/fsb100/2009/snapshots/2.html
Through various subsidiaries, Universal provides insurance services throughout the state of Florida, including all aspects of insurance underwriting, distribution and claims processing. Its primary business is homeowner insurance, including covering about 360,000 Florida homeowners against hurricane damage. Its subsidiary Universal Property and Casualty Insurance Company recently secured approval to write insurance policies in North Carolina, Georgia, and Hawaii.
Revenues in the above table represent premiums earned plus other operating income. This annual data is plotted in Figure 6a. Notice that revenues increased from $183 million in 2008 to $226 million in 2011, but profits were cut in half, dropping from $40.0 million to $20.1 million. The earnings per share (EPS) also decreased, falling from $0.99 to $0.50.
[Figure 6a: Universal Insurance Holdings, Inc. (FSB 100 Rank 2 in 2012), annual profits-revenues data: potential Type III behavior.]
The annual data points fall on either side of the straight line connecting the data for 2007 and 2011. The arrow indicates (at $247.8 million) the revenues beyond which the company will start reporting a loss, if this annual trend continues. The analysis of the quarterly data (for the ten consecutive quarters ending with Q1 2012), however, shows Type I behavior; see Figure 6b. Nonetheless, the implications of the Type III behavior evident in the 5-year annual data need more careful study, and efforts must be made to address the cost structure to increase profits further.
on the graph in Figure 6b. This method, of deducing the costs-revenues (C-R) equation first in order to deduce the profits-revenues (P-R) equation, is useful when the profits-revenues graph is difficult to interpret.

In this context, it is also of interest to review the financial data for Ford Motor Company, for the twelve-year period 2000-2011. This is summarized in Table 4. Ford is emerging as a stronger company after all of its turmoil of the past decade. The years 2002 and 2003 seem to have been the watershed years, with Ford barely reporting a profit. This implies that Ford was operating close to its breakeven point, with all the revenues going to meet all of its total obligations, or its effective cost: Profits = Revenues - Costs, and profits had all but vanished.
[Figure 7a: Costs, z = (x - y) = Revenues - Profits, plotted against revenues, x, for Ford. The demarcation line z = x (Revenues = Costs) separates losses from profits; a data point falling below this line indicates a profit.]
Then the plunge, or turnaround (??), began, with a slightly higher profit on higher revenues in both 2004 and 2005, followed by the biggest loss in 2006. Between 2009 and 2010, both revenues and profits increased. Likewise, between 2006 and 2007, there was a jump in the revenues with a concurrent reduction in the reported losses. And now the biggest profit ($20.2 billion in 2011) has come with significantly lower revenues ($136.26 billion in 2011, compared to $176.9 billion in 2005, a reduction in revenues of about $40 billion). This means that Ford has actually changed its cost structure very significantly.
The financial data has been sorted intentionally to reveal the trend with increasing revenues. From a staggering loss of $12.6 billion in 2006, Ford has reported an astounding profit of $20.2 billion in 2011. (Yes, more than $20 billion in profit, just about one-half the loan given to GM to keep it alive. Cheers!) At this point, there is no clear mathematical law that appears to describe the rather erratic variation of the profits for Ford. In fact, after eliminating the extreme points (the unusually high profit of 2011, and the massive losses of 2008 and 2006; the
other losses are not eliminated and counterbalance the years of decent profits), it appears that the best one could argue for (from a statistical standpoint) is a Type III behavior for Ford; see the analysis of the quarterly data in Figures 7b and 7c. The raw data analyzed here may be found in Appendix 4 at the end of this document.
A graph of revenues x versus costs z reveals a nice upward trend, with the data points falling above and below the line z = x. This line is like par for the course in golf. Points above the line represent losses, with costs exceeding revenues. These are akin to bogeys in golf. Points below the line are akin to birdies and represent profits, with costs lower than revenues. When Ford begins to operate like a more consistent world-class golfer, we will begin to see a nice linear law in the profits-revenues space, with minimal scatter, with all the data points lining up nicely along a straight line y = hx + c with a positive slope, instead of the negative slope now evident in Figures 7b and 7c. It would be revealing to take a closer look at the profits-revenues data for Ford over, say, the last 20 years, to see if there is indeed a maximum point, as suspected by this analysis.
dy/dx = (n - ax)(y/x)     (generalized law)
dy/dx = n(y/x)            (power law, with a = 0)
dy/dx = (y/x) = h         (linear law, with n = 1 and c = 0)
[Figure 8: Schematic of the generalized profits-revenues curve, with the segments corresponding to Type I, Type II, and Type III behavior marked.]
Local segments of this more general curve reveal the apparently linear behavior that has been emphasized so far. When x = n/a, dy/dx = 0 and the graph has its maximum point. For higher values of x, the derivative dy/dx becomes negative; this is similar to Type III behavior and leads to decreasing profits with increasing revenues. This is certainly not desirable in the financial world, but we do observe it when a company is operating in a crisis mode. (Universal Insurance Holdings, see Figure 6, seems like an exception, since it is NOT in a crisis mode and is reporting a nice profit.)

These ideas could also, perhaps, be extended to the economy as a whole. Revenues now refer to government revenues, and profits are nothing more than the budget surplus. If revenues exceed the outlays (or government expenditures), we have a budget surplus; we could then observe a period of decreasing budget surplus even with increasing government revenues. The US economy, with its budget deficits, must be operating in the negative portion of the Type III line, or the power-exponential curve! The ideas presented here are by no means complete and need further refinement. Nonetheless, it appears that one can start afresh and take a new look at the functioning of the financial and business world, and the economy as a whole, starting with some of the simple mathematical models described here.
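The statement that the maximum occurs at x = n/a can be checked numerically for the power-exponential curve y = m x^n e^(-ax) (with c = 0), whose derivative is dy/dx = (n - ax)(y/x). The constants m, n, and a below are illustrative, not fitted to any company:

```python
import math

# The generalized profits-revenues curve y = m * x**n * exp(-a*x), with
# dy/dx = (n - a*x) * (y/x): profits peak at x = n/a ("Mount Profit").
# The constants m, n, a are illustrative, not fitted to any company.
m, n, a = 1.0, 2.0, 0.5

def y(x):
    return m * x**n * math.exp(-a * x)

def dydx(x):
    return (n - a * x) * y(x) / x

x_peak = n / a                    # the maximum point, here x = 4.0

print(dydx(3.0) > 0)              # rising profits to the left of the peak
print(dydx(x_peak) == 0.0)        # flat at the peak
print(dydx(5.0) < 0)              # Type III: falling profits past the peak
```

Left of Mount Profit the derivative is positive, at the peak it vanishes, and to the right (the Type III region) profits fall even as revenues rise.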
6. Nonlinear behavior
Acceleration and deceleration of profits
As discussed briefly, the power-law, with n < 1 or n > 1, is the simplest type of non-linear behavior that we can envision. The power law and its derivative can be written as equations (8) and (9) below. Again, we have three possibilities.

y = mx^n + c .... (8)
dy/dx = n(y - c)/x .... (9)
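A quick numerical sketch of equations (8) and (9), with illustrative constants m and c (not fitted to any company), shows the accelerating (n > 1) and decelerating (n < 1) regimes:

```python
# Slope dy/dx = n*(y - c)/x along the power law y = m*x**n + c:
# n > 1 -> accelerating profits growth, n < 1 -> decelerating,
# n = 1 -> the linear law. The constants m and c are illustrative.
def slope(x, m, n, c=0.0):
    y = m * x**n + c
    return n * (y - c) / x        # equals m * n * x**(n - 1)

accel = [slope(x, m=0.01, n=1.5) for x in (100.0, 400.0, 900.0)]
decel = [slope(x, m=5.0, n=0.5) for x in (100.0, 400.0, 900.0)]

print(accel)   # the slope grows as revenues grow (n > 1)
print(decel)   # the slope shrinks as revenues grow (n < 1)
```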
[Figure 9: Google's profits-revenues data for the initial period 2001-2005, with labeled data points (in $ millions): (1466, 105.6), (3189.2, 399.2), and (6139, 1465.4).]
Google, more recently, is a manifestation of the negative intercept c made by the straight line describing the profits-revenues relation. With n < 1, dy/dx = n(y - c)/x decreases as revenues x increase. Profits increase, but at a decelerating rate. This was discussed in Ref. [2] with reference to Facebook. However, Facebook is a young and emerging company, and long-term profits-revenues data is lacking. The need for caution, before rushing to a hasty judgment about Facebook (since n < 1, see Ref. [2], profits are rising but at a decelerating rate!), may be appreciated by reconsidering the data for Google Inc. and dividing it into two sub-periods. In Figures 9 and 10 we consider the period from 2001 to 2005, and in Figure 11 the sub-period 2005-2011. With n > 1, dy/dx = n(y - c)/x increases as revenues x increase. Profits will therefore increase at an accelerating rate (see Figures 9 and 10). However, to date, such a sustained acceleration in profits-revenues space does NOT seem to have been reported.
Figure 10: The profits-revenues data for Google, for the initial period 2001-2005, can be modeled using the power-law curve with n = 1.5, as indicated here. The continuous curve is the graph of the equation y = 0.0028x^1.5. The index n = 1.5 = 3/2 is also observed in Kepler's third law for planetary orbits. The index n is fixed first (n = 2 seems unreasonable), and this then dictates the value of m in the power-law. The value m = 0.0028 clearly gives good agreement with the data.
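With the index n fixed at 1.5, the coefficient m can be estimated by least squares from the three data points labeled in Figure 9; the sketch below recovers a value close to the m = 0.0028 quoted in the caption:

```python
# Least-squares estimate of m in y = m * x**1.5 (index n fixed at 1.5),
# using the three Google data points labeled in Figure 9 ($ millions).
points = [(1466.0, 105.6), (3189.2, 399.2), (6139.0, 1465.4)]

# Minimizing sum((y - m*x**1.5)**2) over m gives a closed form:
num = sum((x**1.5) * y for x, y in points)
den = sum(x**3 for x, y in points)        # x**3 == (x**1.5)**2
m = num / den

print(m)   # close to the 0.0028 quoted in the text
```

The estimate depends on which points are used and how the error is weighted, which is presumably why the text settles on the rounder value 0.0028.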
If one denotes the resonators by the numbers 1, 2, 3, ... N and writes these side by side, and if one sets under each resonator the number of energy elements assigned to it by some arbitrary distribution, then one obtains for every complex a pattern of the form (Planck's illustration of a complex in the December 1900 paper):

Resonator No.:  1    2    3    4    5    6    7    8    9    10
Energy units:   7    38   11   0    9    2    20   4    4    5
Here we assume N = 10 and P = 100. Planck uses the term complex (which describes one of the many ways of accomplishing the distribution of P = 100 elementary energy units among N = 10 resonators), following the terminology introduced by Boltzmann, who was the leading proponent (after James Clerk Maxwell, who was the pioneer here and also developed the electromagnetic field theory) of the application of statistics to physics and who largely developed what is now called Statistical Mechanics. There are many such complexions (an alternative term). Entropy S is proportional to the logarithm of the total number of complexions W. The higher the number W, the higher will be the entropy S. Here Planck is illustrating the meaning of a complex and how a fixed total amount of energy U_N is distributed. The total energy equals 100 energy elements (the sum of all the numbers in the bottom row). This is distributed among the N = 10 resonators. In general, the total energy of N resonators is U_N = Pε, where P is a very large integer and ε (epsilon) is some unknown elementary energy element, now called the Planck energy quantum. Later in the 1900 paper, Planck introduces ε = hν, kind of out of the blue, where ν is the frequency at which the resonator is vibrating and h is a constant, now called the Planck constant. The quantity ε is the famous Planck energy quantum. Instead of energy, Planck could just as easily have been distributing a fixed amount of money among N people, or a fixed total amount of revenues among N different products sold by a company, or a fixed amount of revenues among N companies in a sector of the economy, and so on. Hence, it appears that the transition from
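Planck's counting of complexions has a simple closed form: W = (N + P - 1)! / (P! (N - 1)!), the number of ways to distribute P indistinguishable energy elements among N resonators. A small sketch using Python's standard library:

```python
from math import comb

# Number of complexions: ways to distribute P indistinguishable energy
# elements among N resonators (Planck's 1900 combinatorial formula),
# W = (N + P - 1)! / (P! * (N - 1)!), i.e., C(N + P - 1, P).
def complexions(N, P):
    return comb(N + P - 1, P)

# Tiny check: 2 resonators sharing 2 elements -> (0,2), (1,1), (2,0)
assert complexions(2, 2) == 3

# Planck's illustration: N = 10 resonators, P = 100 energy elements.
# The sample pattern in the table above is just one of these W
# complexions; its energy units sum to P = 100.
assert sum([7, 38, 11, 0, 9, 2, 20, 4, 4, 5]) == 100
W = complexions(10, 100)
```

The entropy of the system is then proportional to the logarithm of this (very large) number W, as discussed later in connection with S_N = k ln W.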
quantum physics to economics and finance can be accomplished almost seamlessly, or, to use the nice comforting business cliché, transparently. The correction factor deduced by Planck (by continuing the above to arrive at an expression for the average energy of the N resonators) was needed to modify the Rayleigh-Jeans law which, mathematically speaking, is a power-law. The linear law is a special case of the power law. The power law, in turn, is a special case of the power-exponential laws, conceived first by Wien and later by Planck, who modified Wien's law using the statistical arguments. Also, as noted already, Einstein starts with Wien's law and arrives at a linear law to explain the photoelectric effect, after invoking the essential new idea introduced by Planck to arrive at the correction factor: there is an elementary and indivisible unit of energy, or energy quantum. This, in words, is what Planck's law means. We have already discussed how the linear law and the power-law apply in the financial and business world. We have also been able to conceptualize the idea of three basic types of companies, all of which follow the linear law. The composite of these three types of companies, envisioned in Figure 8, is the company that reveals the elusive maximum point on the profits-revenues curve. Is there such a company? Based on the discussion here, and some potential examples, it appears that there must be. At the very least we can conceptualize the existence of such a company even if we cannot find it. Companies that grow, mature, and eventually die (due to mismanagement, lack of demand for their product, etc.) due to falling profits that eventually push them to bankruptcy must all, it appears, go through this elusive maximum point before they die or disappear or go bankrupt.
It follows that the simplest mathematical law (and one that can be justified using physical and/or statistical arguments) that must describe this situation in the financial world is the generalization of Planck's law as suggested here. Again, with some reinterpretation of the meaning of various mathematical symbols, and nodding our head through concepts such as entropy and
temperature and frequency that appear in Planck's mathematical deliberations (see original paper cited above), we can arrive at equations 10 and 11 below.

y = mx^n [e^(-ax) / (1 + be^(-ax))] + c  ....(10)
y = mx^n e^(-ax)  ....(11)

Planck's law: b = -1, c = 0. Wien's law: b = 0, c = 0.
The detailed arguments can be developed, but this is a purely academic exercise. Of immediate interest is the fact that equations 10 and 11 can be extended to economics, business, and finance by simply invoking the postulated (mathematical) equivalence between energy in physics and money in economics. Many other concepts such as entropy (which is a measure of the extent of chaos in a system) and temperature need to be clarified as well. The power law states that as x increases, y increases indefinitely regardless of the exact value of the index n. If n > 1, there is an acceleration in the rate of increase of y as x increases. If n < 1, again y increases indefinitely with increasing x but will do so at a decelerating rate. In blackbody radiation studies, experiments had already revealed the existence of a maximum point. In the financial world and in economics, we might actually be trying to avoid the appearance of such a maximum point. Better to have profits increase indefinitely, even at a decelerating rate, than to hit a maximum point! Nonetheless, it appears that a fuller understanding of the power-exponential law and the underlying statistical arguments will help us better understand businesses, finance, and even the economy as a whole. Is there a maximum point? If yes, what is the significance of the maximum point? The maximum point arises when the slope dy/dx changes sign and becomes negative. This is what we mean by Type III behavior. If a company changes from Type II to Type III behavior, a maximum point will be observed. Type I behavior is observed when a company is operating close to x = x0. This has NOTHING to do with the absolute values of x in terms of $ figures. Type I behavior is seen with Facebook, Google, and also ExxonMobil. All three companies reveal Type I behavior although their revenues differ by a thousand-fold.
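The maximum point can be located explicitly for the Wien special case of equation 11 (b = 0, c = 0): differentiating y = mx^n e^(-ax) gives dy/dx = mx^(n-1) e^(-ax) (n - ax), which vanishes at x = n/a. A short sketch with illustrative values of m, n, and a (not fitted to any data):

```python
from math import exp

# Wien special case of the power-exponential law (equation 11):
# y = m * x**n * exp(-a*x), with hypothetical m, n, a.
# Since dy/dx = m*x**(n-1)*exp(-a*x)*(n - a*x), the maximum point
# ("Mount Profit") sits at x = n/a.
def y(x, m=1.0, n=1.5, a=0.5):
    return m * x**n * exp(-a * x)

x_max = 1.5 / 0.5   # n/a = 3.0

# y rises before x_max and falls after it
assert y(2.0) < y(3.0)
assert y(4.0) < y(3.0)
```

To the left of x = n/a the curve behaves like the Type I/II region (y still rising); to the right, y falls with increasing x, which is the Type III region discussed below.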
In the business world, the transition from Type I to Type II is inevitable and not entirely adverse. However, the transition from Type II to Type III is NOT desirable and signifies decreasing profits with increasing revenues (we see glimpses of this with Best Buy, a company which is now struggling, and also, it is suspected, with companies like GM before the bankruptcy filing). This will eventually become unsustainable and the company will cease to exist in its present form and will have to reexamine its cost structure. When costs exceed revenues, a company will report a loss, or decreasing profits. Its creditworthiness will allow it to continue operations for a few quarters with increasing revenues producing decreasing profits, or even negative profits. But the day of reckoning will come. We do see real-world examples of Type III behavior. One is Universal Insurance Holdings, Inc., currently ranked No. 2 in the Fortune Small Business 100. The company has been reporting a healthy profit, but its profits decreased by exactly one-half, with a 33.7% increase in revenues, between 2008 ($40 million in profits with revenues of $183 million) and 2011 ($20 million in profits with revenues of $244 million). Ford Motor Company also seems to reveal Type III behavior, but this needs more careful analysis. However, Ford has increased its profits significantly in recent years with reducing revenues. Indeed, its highest profits of $20.2 billion in 2011 came with a reduced revenue level of $136.24 billion, compared to 2005 when it reported $1.44 billion in profits with $176.9 billion in revenues: reducing revenues and increasing profits, an exact reversal of the trend we now see with Universal Insurance Holdings. Finally, Mount Profit (the maximum point on the profits-revenues curve) we do not want to see. But, if we do see the peak of Mount Profit, we would be better advised to stay on that side of its peak where profits continue to rise.
This may be the lesson here for an iconic corporate giant like Ford and also for smaller and growing companies like Universal Insurance Holdings, Inc.
Conclusions
1. An attempt has been made here to extend Planck's radiation law from physics to economics (and perhaps, even beyond) by treating energy in physics as being synonymous with money in economics. A generalized mathematical statement of Planck's blackbody radiation law suggests that it is nothing more than a power-exponential law, which implies a maximum point on the (x, y) graph, where x and y are any two quantities of interest with a stimulus-response type of relationship. This also yields the linear law and the nonlinear power law as special cases, as local segments.

2. Applying these ideas to analyze the profits-revenues behavior of a company suggests three types of linear behavior (called Type I, Type II, and Type III, for simplicity) and three types of nonlinear behavior (accelerating profits and decelerating profits, with fixed velocity, or zero acceleration, the linear law, being the third special case). Amazingly, a careful study of the readily available profits-revenues data from real-world companies yields a PERFECT and stunning confirmation of these ideas.

3. Perhaps the most significant conclusion of all is the speculation regarding the existence of a maximum point on the profits-revenues curve. This obviously has far-reaching implications. Indeed, the existence of the three types of companies, operating under the three types of linear laws, which is readily confirmed, also implies the existence of a company (or companies) that will reveal this speculated theoretical maximum point.

4. It appears that we may also have found this elusive maximum point in Ford Motor Company. The study of the annual and quarterly profits-revenues data for Ford suggests a Type III behavior, which implies that Ford Motor Company may be operating to the right of the maximum point. Reducing revenues further will change Ford from Type III to Type II and eventually to Type I, operating more like ExxonMobil. However, a more careful analysis of Ford's operations and the profits-revenues data
since 2000, and even earlier years, is needed to fully support these conclusions.

5. Like modern heat engines (automotive engines, locomotive engines, aircraft engines, rocket engines, and the engines on smaller household appliances like lawnmowers, snowblowers, other garden tools, etc.) that deliver any desired horsepower consistently, every single time the engine is turned on, perhaps a day will come when we can learn to operate companies like they are meant to be: like a real Profits Engine delivering profits for their stockholders and immense benefits to their employees, the local community, and society at large. It could all begin, amazingly, with a fuller understanding of the significance of the power-exponential law conceived by Planck in 1900 to solve a vexing problem in physics. One of the Two Clouds hanging over 19th century physics, as Lord Kelvin put it, would thus dissipate, at the very dawn of the 20th century. The other cloud dissipated in 1905, with the enunciation of the theory of relativity by Einstein.

y = mx^n [e^(-ax) / (1 + be^(-ax))] + c
As an R&D person all of my professional life, I was fascinated by the large volumes of x and y data being compiled for the North American automotive plants (in the USA, Canada, and Mexico) owned by GM, Ford and Chrysler (the inefficient trio) and their Japanese counterparts (Toyota, Honda, and Nissan, the efficient trio, or the transplants). Here x is the number of vehicles produced and y is the number of workers or labor hours. Labor productivity is the ratio y/x. In such analyses, the productivity metric "number of workers per vehicle", or WPV, was soon replaced by the more politically correct metric of labor hours per vehicle, or HPV.

Labor productivity, y/x = WPV = Number of workers / Number of vehicles  ....(1)
Or, Productivity, y/x = HPV = Labor hours / Number of vehicles  ....(2)

As revenues increase, we expect profits to increase. In the same way, as the number of vehicles produced in a plant, x, increases, the number of workers (or labor hours), y, will also increase. Is there a simple law relating x and y in this problem? Is this law y = hx, with a zero intercept? Or is it the law y = hx + c, with a finite nonzero intercept? What is the significance of a nonzero intercept in this problem, if there is one? That is how I realized the significance of the nonzero intercept c in the linear law y = hx + c, or y = mx + c, or y = ax + b. These are all just different ways of writing the same law. In general, when x = 0, y = c, and c = 0 is a special case of this general law. The ratio y/x can be used for comparisons if and only if c = 0. The automotive labor productivity data (or, more generally, any labor productivity data, compiled monthly by the Bureau of Labor Statistics for the whole economy) can be described by the linear law y = hx + c, not the law y = mx. How many hours does it take to assemble a cellphone, a camera, a TV, a refrigerator, etc.?
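The point about the ratio y/x can be made concrete with a small sketch. The values of h and c below are hypothetical, chosen only to show that when c is nonzero the ratio y/x = h + c/x depends on plant size and is therefore not a fair basis for comparison:

```python
# For the linear law y = h*x + c, the ratio y/x = h + c/x changes
# with x, so plants of different sizes cannot be compared by the
# ratio alone. Here y is labor hours, x is vehicles produced.
# h and c are hypothetical illustration values, not real plant data.
h, c = 30.0, 5000.0   # hours per marginal vehicle, fixed overhead hours

def hpv(vehicles):
    hours = h * vehicles + c
    return hours / vehicles   # the HPV "productivity" ratio

small_plant, large_plant = hpv(100), hpv(10000)

# The bigger plant looks "more productive" purely because the fixed
# overhead c is spread over more vehicles.
assert small_plant > large_plant
```

Only with c = 0 would both ratios equal h exactly, which is why the nonzero intercept is the key to interpreting such data.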
One can keep gathering such data and, before we realize it, we will see that Type I behavior (h > 0 and c < 0) has been replaced by Type II behavior (h > 0 and c > 0). Not too long ago, I distinctly remember, a Wall Street Journal article explored the politically sensitive question: the unemployment rate is going down, but the number of unemployed just keeps going up. This (labor productivity, unemployment data) is an infinitely more complex and politically charged topic and is best avoided.
Just prepare an x-y scatter graph and see if we are observing Type I or Type II behavior. Sometimes, we even observe Type III behavior, as with traffic fatality statistics: another hot, politically charged topic, with Texas having raised its speed limit to 85 mph and soon, who knows, someone will be pushing for a 100 mph speed limit! They already tried NO SPEED LIMIT, or a "prudent" speed limit, in Montana, and the law was quickly overturned in the courts. Some of my writings on this topic (Does Speed Kill?) can be found by those who know how to Google it. There is NO END to the (ab)use of the ratio y/x. It is the easiest thing to calculate. But lurking in the shadow is that nonzero intercept c, or what Einstein so wisely called the Work Function in his 1905 Nobel Prize winning paper on the Photoelectric Effect, that affects the whole social, political, environmental, financial, and economic world around us.

Work Done = Heat In - Heat Out (describes the working of any heat engine)
K = E - W (kinetic energy of the photoelectron K, energy of the photon E = hf, work function W)
Profits = Revenues - Costs
Savings = Income - Expenses
Budget Surplus = Government Revenues - Government Outlays
Number of Unemployed = Total Labor Force - Number of Employed

The first two equations in the above list are simple statements of far-reaching laws that forever changed physics in the 19th and 20th centuries. The equations that follow are also simple statements, all implying that a nonzero intercept, which even Planck recognized in 1900, must always be introduced: the entropy S_N = k ln W + constant. Writes Planck in his December 1900 paper that laid the foundations of Quantum Physics (see the reference cited in [7] to read the English translation of the original paper), We now set the entropy S_N of the system (of N resonators) as proportional
to the logarithm of W (the number of possible complexions), within an arbitrary additive constant, so that the N resonators together have the energy U_N. We cannot overlook the significance of these arbitrary additive constants which, in words, describe what the symbol c is doing in the simple linear law y = hx + c. The correct equation for entropy is S_N = k ln W + S_0, where S_0 is the entropy in the limit when W = 1, that is, when there is only ONE single complexion, or just one single way to distribute energy, or money, or whatever it is that is of interest to us. Physicists have spent a lot of time thinking about the meaning of S_0 and even arrived at what is sometimes called the Third Law of Thermodynamics to finally dismiss this annoying arbitrary additive constant; see the links below. And then they realized that in their theory they had forgotten something and a new law had to be added. But there were three laws of thermodynamics in place already. So, they called their fourth law the Zeroth Law of thermodynamics. It must precede the other three laws.

http://en.wikipedia.org/wiki/Third_law_of_thermodynamics
http://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics

We must understand all these laws thoroughly, and their deeper implications, as we try to extend Planck's mathematical law (and all of its theoretical and statistical underpinnings) from quantum physics to economics and to other systems (such as labor productivity studies, traffic fatality studies, etc.) far beyond where it was found to be useful when first conceived in 1900.
The following, extracted from the above, makes instructive reading: Sommerfeld in 1951 gave the title the "Zeroth Law" to the statement "Equality of temperature is a condition for thermal equilibrium between two systems or between two parts of a single system"; he wrote that this title followed the suggestion of Fowler, made when he was giving an account of a certain book.[22] Sommerfeld's statement took the existence of temperature for granted, and used it to specify one of the characteristics of thermodynamic equilibrium. This is converse to many statements that are labeled as the zeroth law, which take thermal equilibrium for granted and use it to contribute to the concept of temperature. We may guess that Fowler had made his suggestion because the
notion of temperature is in effect a presupposition of thermodynamics that earlier physicists had not felt needed explicit statement as a law of thermodynamics, and because the mood of his time, pursuing a "mechanical" axiomatic approach, wanted such an explicit statement.

The first law of thermodynamics, Work = Heat In - Heat Out, describes the working of a heat engine and is essentially the same as the law of conservation of energy. The second law is more enigmatic and is sometimes stated as follows: heat flows of its own accord only from a body at a high temperature to a body at a lower temperature. The question posed by Sommerfeld is: What is temperature? We all seem to understand it intuitively. We take it for granted. How is it measured? How do we use the idea of a temperature in areas outside physics, as in a hot stock, a hot golfer, a hot political candidate, and so on?
TABLE 1
NEBULAE WHOSE DISTANCES HAVE BEEN ESTIMATED FROM STARS INVOLVED OR FROM MEAN LUMINOSITIES IN A CLUSTER

Object         m_s     r       v       m_t     M_t
S. Mag.        ..      0.032   +170    1.5     -16.0
L. Mag.        ..      0.034   +290    0.5     -17.2
N.G.C. 6822    ..      0.214   -130    9.0     -12.7
598            ..      0.263   -70     7.0     -15.1
221            ..      0.275   -185    8.8     -13.4
224            ..      0.275   -220    5.0     -17.2
5457           17.0    0.45    +200    9.9     -13.3
4736           17.3    0.5     +290    8.4     -15.1
5194           17.3    0.5     +270    7.4     -16.1
4449           17.8    0.63    +200    9.5     -14.5
4214           18.3    0.8     +300    11.3    -13.2
3031           18.5    0.9     -30     8.3     -16.4
3627           18.5    0.9     +650    9.1     -15.7
4826           18.5    0.9     +150    9.0     -15.7
5236           18.5    0.9     +500    10.4    -14.4
1068           18.7    1.0     +920    9.1     -15.9
5055           19.0    1.1     +450    9.6     -15.6
7331           19.0    1.1     +500    10.4    -14.8
4258           19.5    1.4     +500    8.7     -17.0
4151           20.0    1.7     +960    12.0    -14.2
4382           ..      2.0     +500    10.0    -16.5
4472           ..      2.0     +850    8.8     -17.7
4486           ..      2.0     +800    9.7     -16.8
4649           ..      2.0     +1090   9.5     -17.0
Mean                                           -15.5
m_s = photographic magnitude of brightest stars involved.
r = distance in units of 10^6 parsecs. The first two are Shapley's values.
v = measured velocities in km./sec. N.G.C. 6822, 221, 224 and 5457 are recent determinations by Humason.
m_t = Holetschek's visual magnitude as corrected by Hopmann. The first three objects were not measured by Holetschek, and the values of m_t represent estimates by the author based upon such data as are available.
M_t = total visual absolute magnitude computed from m_t and r.
Finally, the nebulae themselves appear to be of a definite order of absolute luminosity, exhibiting a range of four or five magnitudes about an average value M (visual) = - 15.2.[1] The application of this statistical average to individual cases can rarely be used to advantage, but where considerable numbers are involved, and
especially in the various clusters of nebulae, mean apparent luminosities of the nebulae themselves offer reliable estimates of the mean distances. Radial velocities of 46 extra-galactic nebulae are now available, but individual distances are estimated for only 24. For one other, N.G.C. 3521, an estimate could probably be made, but no photographs are available at Mount Wilson. The data are given in Table 1. The first seven distances are the most reliable, depending, except for M 32, the companion of M 31, upon extensive investigations of many stars involved. The next thirteen distances, depending upon the criterion of a uniform upper limit of stellar luminosity, are subject to considerable probable errors but are believed to be the most reasonable values at present available. The last four objects appear to be in the Virgo Cluster. The distance assigned to the cluster, 2 x 10^6 parsecs, is derived from the distribution of nebular luminosities, together with luminosities of stars in some of the later-type spirals, and differs somewhat from the Harvard estimate of ten million light years.[2] The data in the table indicate a linear correlation between distances and velocities, whether the latter are used directly or corrected for solar motion, according to the older solutions. This suggests a new solution for the solar motion in which the distances are introduced as coefficients of the K term, i.e., the velocities are assumed to vary directly with the distances, and hence K represents the velocity at unit distance due to this effect. The equations of condition then take the form

rK + X cos(alpha) cos(delta) + Y sin(alpha) cos(delta) + Z sin(delta) = v.

Two solutions have been made, one using the 24 nebulae individually, the other combining them into 9 groups according to proximity in direction and in distance. The results are
              X (km/s)    Y (km/s)     Z (km/s)    K (km/s per 10^6 parsecs)   A       D       V0 (km/s)
24 objects    -65 +/- 50  +226 +/- 95  -195 +/- 40 +465 +/- 50                 286 deg +40 deg 306
9 groups      +3 +/- 70   +230 +/- 120 -133 +/- 70 +513 +/- 60                 269 deg +33 deg 247
For such scanty material, so poorly distributed, the results are fairly definite. Differences between the two solutions are due largely to the four Virgo nebulae,
which, being the most distant objects and all sharing the peculiar motion of the cluster, unduly influence the value of K and hence of V0. New data on more distant objects will be required to reduce the effect of such peculiar motion. Meanwhile round numbers, intermediate between the two solutions, will represent the probable order of the values. For instance, let A = 277 deg., D = +36 deg. (Gal. long. = 32 deg., lat. = +18 deg.), V0 = 280 km./sec., K = +500 km./sec. per million parsecs. Mr. Stromberg has very kindly checked the general order of these values by independent solutions for different groupings of the data. A constant term, introduced into the equations, was found to be small and negative. This seems to dispose of the necessity for the old constant K term. Solutions of this sort have been published by Lundmark,[3] who replaced the old K by k + lr + mr^2. His favored solution gave k = 513, as against the former value of the order of 700, and hence offered little advantage.
TABLE 2
NEBULAE WHOSE DISTANCES ARE ESTIMATED FROM RADIAL VELOCITIES

Object         v       v_s    r       m_t     M_t
N.G.C. 278     +650    110    1.52    12.0    -13.9
404            -25     65     ..      11.1    ..
584            +1800   75     3.45    10.9    -16.8
936            +1300   115    2.37    11.1    -15.7
1023           +300    10     0.62    10.2    -13.8
1700           +800    220    1.16    12.5    -12.8
2681           +700    10     1.42    10.7    -15.0
2683           +400    65     0.67    9.9     -14.3
2841           +600    20     1.24    9.4     -16.1
3034           +290    105    0.79    9.0     -15.5
3115           +600    105    1.00    9.5     -15.5
3368           +940    70     1.74    10.0    -16.2
3379           +810    65     1.49    9.4     -16.4
3489           +600    50     1.10    11.2    -14.0
3521           +730    95     1.27    10.1    -15.4
3623           +800    35     1.53    9.9     -16.0
4111           +800    95     1.79    10.1    -16.1
4526           +580    20     1.20    11.1    -14.3
4565           +1100   75     2.35    11.0    -15.9
4594           +1140   25     2.23    9.1     -17.6
5005           +900    130    2.06    11.1    -15.5
5866           +650    215    1.73    11.7    -14.5
Mean                                  10.5    -15.3

(The plus/minus signs of the individual v_s entries did not survive in this copy; only their magnitudes are shown.)
The residuals for the two solutions given above average 150 and 110 km./sec. and should represent the average peculiar motions of the individual nebulae and of the groups, respectively. In order to exhibit the results in a graphical form, the solar motion has been eliminated from the observed velocities and the remainders, the distance terms plus the residuals, have been plotted against the distances. The run of the residuals is about as smooth as can be expected, and in general the form of the solutions appears to be adequate. The 22 nebulae for which distances are not available can be treated in two ways. First, the mean distance of the group derived from the mean apparent magnitudes can be compared with the mean of the velocities corrected for solar motion. The result, 745 km./sec. for a distance of 1.4 x 10^6 parsecs, falls between the two previous solutions and indicates a value for K of 530 as against the proposed value, 500 km./sec. Secondly, the scatter of the individual nebulae can be examined by assuming the relation between distances and velocities as previously determined. Distances can then be calculated from the velocities corrected for solar motion, and absolute magnitudes can be derived from the apparent magnitudes. The results are given in Table 2 and may be compared with the distribution of absolute magnitudes among the nebulae in Table 1, whose distances are derived from other criteria. N.G.C. 404 can be excluded, since the observed velocity is so small that the peculiar motion must be large in comparison with the distance effect. The object is not necessarily an exception, however, since a distance can be assigned for which the peculiar motion and the absolute magnitude are both within the range previously determined.
The two mean magnitudes, -15.3 and -15.5, the ranges, 4.9 and 5.0 mag., and the frequency distributions are closely similar for these two entirely independent sets of data; and even the slight difference in mean magnitudes can be attributed to the selected, very bright, nebulae in the Virgo Cluster. This entirely unforced agreement supports the validity of the velocity-distance relation in a very evident matter. Finally, it is worth recording that the frequency distribution of absolute magnitudes in the two tables combined is comparable with those found in the various clusters of nebulae.

Velocity-Distance graphs such as Figure 1 in Hubble's 1929 paper are now called Hubble diagrams. The velocity v is plotted in kilometers per second (km/s) and distances are plotted in astronomical units called Megaparsecs (Mpc), where 1 Mpc = 3.09 x 10^19 km = 3.26 million light years. One light year is the distance light will travel in one year at the fixed speed of 300 million meters per second (or 299,792,458 m/s, exactly). http://heasarc.nasa.gov/docs/cosmic/glossary.html
Figure 1: Radial velocities, corrected for solar motion, are plotted against distances estimated from involved stars and mean luminosities of nebulae in a cluster. The black discs and full line represent the solution for solar motion using the nebulae individually; the circles and broken line represent the solution combining the nebulae into groups; the cross represents the mean velocity corresponding to the mean distance of 22 nebulae whose distances could not be estimated individually.
The results establish a roughly linear relation between velocities and distances among nebulae for which velocities have been previously published, and the relation appears to dominate the distribution of velocities. In order to investigate the matter on a much larger scale, Mr. Humason at Mount Wilson has initiated a program of determining velocities of the most distant nebulae that can be observed with confidence. These, naturally, are the brightest nebulae in clusters of nebulae. The first definite result,[4] v = +3779 km./sec. for N.G.C. 7619, is thoroughly consistent with the present conclusions. Corrected for the solar motion, this velocity is +3910, which, with K = 500, corresponds to a distance of 7.8 x 10^6 parsecs. Since the apparent magnitude is 11.8, the absolute magnitude at such a distance is -17.65, which is of the right order for the brightest nebulae in a cluster. A preliminary distance, derived independently from the cluster of which this nebula appears to be a member, is of the order of 7 x 10^6 parsecs.

The constant K highlighted here is Hubble's estimate of what is now called the Hubble constant: K = 500 km/s/Mpc (km/s per Mpc).
New data to be expected in the near future may modify the significance of the present investigation or, if confirmatory, will lead to a solution having many times the weight. For this reason it is thought premature to discuss in detail the obvious consequences of the present results. For example, if the solar motion with respect to the clusters represents the rotation of the galactic system, this motion could be subtracted from the results for the nebulae and the remainder would represent the motion of the galactic system with respect to the extra-galactic nebulae. The outstanding feature, however, is the possibility that the velocity-distance relation may represent the de Sitter effect, and hence that numerical data may be introduced into discussions of the general curvature of space. In the de Sitter cosmology, displacements of the spectra arise from two sources, an apparent slowing down of atomic vibrations and a general tendency of material particles to scatter. The latter involves an acceleration and hence introduces the element of time. The relative importance of these two effects should determine the form of the relation between distances and observed velocities; and in this connection it may be emphasized that the linear relation found in the present discussion is a first approximation representing a restricted range in distance.
[1] Mt. Wilson Contr., No. 324; Astroph. J., Chicago, Ill., 64, 1926 (321).
[2] Harvard Coll. Obs. Circ., 294, 1926.
[3] Mon. Not. R. Astr. Soc., 85, 1925 (865-894).
[4] These PROCEEDINGS, 15, 1929 (167).

***************** This ends Hubble's original 1929 paper ****************

http://en.wikipedia.org/wiki/Hubble%27s_law
The following is a verbatim quote extracted from the Wikipedia article cited, including the diagram.
Hubble's law
The linear relationship between the velocity V of a galaxy and its distance D can be expressed as V = H0D. The constant H0 is the slope of the straight line and is
now referred to as the Hubble constant. This law was actually first derived from the General Relativity equations by Georges Lemaître in a 1927 article where he proposed that the Universe is expanding and suggested an estimated value of the rate of expansion, i.e., the Hubble constant.[2][3][4][5][6] Two years later Edwin Hubble confirmed the existence of that law and determined a more accurate value for the constant that now bears his name.[7] The recession velocity of the objects was inferred from their redshifts, many measured earlier by Vesto Slipher (1917) and related to velocity by him.[8]
Fit of redshift velocities to Hubble's law; patterned after William C. Keel (2007), The Road to Galaxy Formation, Berlin: Springer, published in association with Praxis Pub., Chichester, UK. ISBN 3-540-72534-2.

Various estimates for the Hubble constant exist. The HST Key H0 Group fitted type Ia supernovae for redshifts between 0.01 and 0.1 to find that H0 = 71 +/- 2 (statistical) +/- 6 (systematic) km s^-1 Mpc^-1,[21] while Sandage et al. find H0 = 62.3 +/- 1.3 (statistical) +/- 5 (systematic) km s^-1 Mpc^-1.[22]
The law is often expressed by the equation v = H0D, with H0 the constant of proportionality (the Hubble constant) between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its velocity v (i.e., the derivative of proper distance with respect to the cosmological time coordinate; see Uses of the proper distance for some discussion of the subtleties of this definition of 'velocity'). The SI unit of H0 is s^-1, but it is most frequently quoted in (km/s)/Mpc, thus giving the speed in km/s of a galaxy 1 megaparsec (3.09 x 10^19 km) away. The reciprocal of H0 is the Hubble time.
A recent 2011 estimate of the Hubble constant, which used a new infrared camera on the Hubble Space Telescope (HST) to measure the distance and redshift for a collection of astronomical objects, gives a value of H0 = 73.8 ± 2.4 (km/s)/Mpc. An alternate approach using data from galactic clusters gave a value of H0 = 67.0 ± 3.2 (km/s)/Mpc.[11][12]
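As a quick check on the numbers quoted above, the Hubble constant can be converted from (km/s)/Mpc to SI units, and its reciprocal gives the Hubble time. A minimal sketch in Python, using the 1 Mpc = 3.09×10^19 km and H0 = 73.8 (km/s)/Mpc values quoted above:

```python
# Convert the Hubble constant from (km/s)/Mpc to s^-1 and compute
# the Hubble time, which is the reciprocal of H0.
KM_PER_MPC = 3.09e19          # 1 megaparsec in km (value quoted above)
SECONDS_PER_YEAR = 3.156e7    # one year in seconds

H0_kms_per_Mpc = 73.8         # HST infrared-camera estimate quoted above
H0_si = H0_kms_per_Mpc / KM_PER_MPC            # in s^-1
hubble_time_s = 1.0 / H0_si                    # in seconds
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

# A galaxy 10 Mpc away recedes at v = H0 * D.
v_10Mpc = H0_kms_per_Mpc * 10                  # in km/s

print(round(hubble_time_gyr, 1))  # roughly 13.3 billion years
print(v_10Mpc)                    # 738.0 km/s
```

The Hubble time that comes out, about 13 billion years, is indeed of the order of the currently accepted age of the Universe.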
[Figures: velocity-distance plots of Hubble's data with the fitted straight line V = 500d; distance d in Mpc on the horizontal axis, velocity V in km/s on the vertical axis.]
Quarterly Revenues, Profits and Costs for Ford Motor Company ($, billions)

Quarter    Revenues, R    Profits, P    Costs, C = R - P    Notes
1Q2012     32.4            1.396        31.004
4Q2011     34.6           13.6          21                  Exceptional
3Q2011     33.1            1.649        31.451
2Q2011     35.5            2.398        33.102
1Q2011     33.1            2.551        30.549
4Q2010     32.5            0.19         32.31
3Q2010     29              1.687        27.313
2Q2010     31.3            2.599        28.701
1Q2010     28.1            2.085        26.015
4Q2009     34.8            0.886        33.914
3Q2009     30.9            0.997        29.903
2Q2009     27.2           -0.638        27.838
1Q2009     24.8           -1.427        26.227
4Q2008     29.2           -5.875        35.075              Exceptional
3Q2008     32.1           -0.129        32.229
2Q2008     38.6           -8.667        47.267              Exceptional
1Q2008     39.4            0.1          39.3
4Q2007     45.5           -2.753        48.253
3Q2007     41.1           -0.38         41.48
2Q2007     44.2            0.75         43.45
1Q2007     43             -0.282        43.282
The costs in column 4 were deduced from the revenues and profits in columns 2 and 3 using the relation Profits = Revenues − Costs. The graph of C versus R is then prepared to deduce the C-R relation and hence the P-R relation. The three exceptional data points were eliminated from the linear regression analysis.
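The procedure just described - deduce C = R − P for each quarter, drop the exceptional points, and fit a straight line C = aR + b, which then implies the linear profits law P = (1 − a)R − b, i.e., y = hx + c - can be sketched in plain Python. The choice of which three quarters to exclude is my reading of the outliers; the variable names are my own:

```python
# Deduce quarterly costs from Ford's revenues and profits (both in
# $ billions, newest quarter first), then fit the linear C-R relation
# by ordinary least squares, excluding the three exceptional quarters.
revenues = [32.4, 34.6, 33.1, 35.5, 33.1, 32.5, 29, 31.3, 28.1, 34.8, 30.9,
            27.2, 24.8, 29.2, 32.1, 38.6, 39.4, 45.5, 41.1, 44.2, 43]
profits  = [1.396, 13.6, 1.649, 2.398, 2.551, 0.19, 1.687, 2.599, 2.085,
            0.886, 0.997, -0.638, -1.427, -5.875, -0.129, -8.667, 0.1,
            -2.753, -0.38, 0.75, -0.282]

costs = [r - p for r, p in zip(revenues, profits)]   # from P = R - C

# Indices of the exceptional quarters (the largest profit/loss
# outliers: 4Q2011, 4Q2008, 2Q2008); an assumption on my part.
exceptional = {1, 13, 15}
R = [r for i, r in enumerate(revenues) if i not in exceptional]
C = [c for i, c in enumerate(costs) if i not in exceptional]

# Ordinary least-squares fit of C = a*R + b.
n = len(R)
mean_r, mean_c = sum(R) / n, sum(C) / n
a = (sum((r - mean_r) * (c - mean_c) for r, c in zip(R, C))
     / sum((r - mean_r) ** 2 for r in R))
b = mean_c - a * mean_r

# The implied linear profits law P = hR + c with h = 1 - a, c = -b.
h, c_intercept = 1 - a, -b
print(f"C = {a:.3f} R + {b:.3f};  P = {h:.3f} R {c_intercept:+.3f}")
```

The same fit can of course be done in a spreadsheet; the point is only that the P-R law follows mechanically once the C-R line is fixed.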
https://resources.oncourse.iu.edu/access/content/group/8f7ba376-1242-4e8a-0048acbde2ffaad8/StudentResources/scans/Macroeconomics.pdf — Changing slope of a nonlinear curve; the cost of producing iPods.
Planck's law, referred to here as the generalized power-exponential law, might actually have many applications far beyond the blackbody radiation studies where it was first conceived. Einstein's photoelectric law is a simple linear law, as we see here, and was deduced from Planck's nonlinear law for describing blackbody radiation. It appears that financial and economic systems can be modeled using a similar approach. Finance, business, economics and management sciences now essentially seem to operate like astronomy and physics before the advent of Kepler and Newton.
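The distinction drawn in this document between decelerating (n < 1), linear (n = 1) and accelerating (n > 1) profits growth follows directly from the slope dy/dx = mnx^(n-1) of the power law y = mx^n + c. A minimal numerical sketch (the constants m, c and the revenue values are illustrative only, not fitted to any company):

```python
# Local slope of the power law y = m*x**n + c at successive revenues,
# illustrating decelerating (n < 1), linear (n = 1) and accelerating
# (n > 1) profits growth. Constants are illustrative only.
m, c = 2.0, -1.0

def slope(x, n):
    """Derivative dy/dx = m*n*x**(n-1) of y = m*x**n + c."""
    return m * n * x ** (n - 1)

revenues = [10, 20, 40]
for n in (0.5, 1.0, 2.0):
    print(n, [round(slope(x, n), 3) for x in revenues])

# n = 0.5: slopes fall as revenues grow (decelerating profits)
# n = 1.0: slope is constant, recovering the linear law y = hx + c
# n = 2.0: slopes rise as revenues grow (accelerating profits)
```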
In particular, I call your attention to the conclusions section on pages 37 and 38, especially the very last conclusion. If anything, this is what motivates me to continue to pursue this.

When a young James Watt was looking for something to do with himself, after completing his studies, the University of Glasgow gave him some laboratory space (for free, of course) and also told him to take a look at the so-called Newcomen engine - a steam engine - that they owned.

http://inventors.about.com/od/wstartinventors/a/JamesWatt.htm
http://inventors.about.com/library/inventors/blsteamengine.htm
http://inventors.about.com/od/indrevolution/ss/Industrial_Revo_4.htm
http://en.wikipedia.org/wiki/Watt_steam_engine

Back in those olden golden days, students would come to college in their best clothes, even sporting a tie. When the students (who were majoring in what was then called Natural Philosophy) were assembled to be shown how an actual steam engine worked, this college-owned engine would invariably huff and puff and quit. A lot of smoke, a lot of noise, but no action. Yes, it did work sometimes, but NOT when the students were assembled for the show. Always an embarrassment.

So, James Watt got busy and studied how this stupid engine worked, or rather, did NOT work. He even talked to a Professor of Physics (as we would now call him), Joseph Black, and learned about a remarkable property of steam which we now call the latent heat. Nobody knew about this but Black. Armed with this valuable information, James Watt figured out how to improve the steam engine and make it work consistently. He started "recycling" the heat in the steam and designed and built what is called the "steam condenser". He also improved the heat insulation. The thermal efficiency of the engine was more than doubled and, more importantly, it worked consistently. This is part one of the story. Now listen to part two.
Then Watt tried to sell his engine to local industrialists. Back in those days, coal mines would frequently get flooded, and the owners would use horses to draw out the water from the flooded mines. James Watt found a business partner (Matthew Boulton, a Birmingham manufacturer) and tried to start a company that would sell his new and improved steam engine to the owners of the coal mines. He told Boulton that his engine could do the work of many horses. The coal miner could save money, since he would not have to feed the horses. Of course, there is a cost associated with replacing the horse by the engine - the cost of the coal needed to operate Watt's steam engine.

Now, Boulton asked James Watt an interesting question: "How much work can your engine do and how many horses will it replace?"

http://www.newton.dep.anl.gov/askasci/phy99/phy99x45.htm

A really fascinating question indeed! Newton conceived the idea of a force and also enunciated the three laws of motion. But even Newton did not say anything about the work done by a force. As we all know today, the work W done by a force F is given by W = Fd, the product of the force F times the distance d over which the force acts. This was James Watt's most far-reaching contribution to physics - and a little-recognized one at that.

Anyway, Boulton asked Watt something he did not have a ready answer for. So, Watt went back (I presume to the flooded coal mines, where he volunteered to do some "work") and got hold of a horse. He tied a big bucket to one end of a rope, passed the rope over a pulley, and tied the other end of the rope to the horse. Then he made the horse walk back and forth as it raised the water from the flooded coal mine. Thus came the idea of work, and eventually of a "horsepower". Work became a precisely measurable quantity: W = Fd.
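The quantities in this story are easy to put in modern units. One horsepower, as Watt standardized it, is 550 foot-pounds of work per second, about 745.7 watts. A small sketch (the bucket and engine figures are made-up examples, not numbers from the historical record):

```python
# Work W = F*d, and Watt's horsepower in modern units.
G = 9.81                 # gravitational acceleration, m/s^2
WATTS_PER_HP = 745.7     # 1 hp = 550 ft-lbf/s, approx. 745.7 W

# Work done raising a (hypothetical) 50 kg bucket of water 20 m:
force = 50 * G                     # weight of the bucket, in newtons
work = force * 20                  # W = F*d, in joules
print(round(work))                 # 9810 J

# How many horses would a (hypothetical) 15 kW engine replace?
horses = 15_000 / WATTS_PER_HP
print(round(horses, 1))            # about 20.1 horses
```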
Even Einstein used this idea later, in 1905, when he described the work done on an electron as it is moved in an electric field under the action of the relativistic electrical force. But even Einstein used Watt's basic equation for the work done by a force. A generation later, James Prescott Joule took the same idea of work and tried to
measure the amount of "heat energy" that is equal to a given amount of "mechanical work". Joule allowed a weight to fall; the weight was attached to a spindle that rotated a shaft, which stirred water in a well-insulated container. The work done by the falling weight produces frictional heat Q in the water, which raises the temperature of the water. Thus, we get the precise relation between heat energy Q and mechanical work W. This also led to the far-reaching law called the law of conservation of energy.

In the old days, they used the unit "calorie" to measure heat energy. We still use it. Control your "calorie" intake, if you want to control your weight. But now we also use the unit called the joule for energy, in honor of this great experimenter. We also use the horsepower, and the units of watts, kilowatts, megawatts, etc., in honor of James Watt. One joule (the unit of energy) equals one watt-second: the energy delivered when a constant power of one watt is applied for one second.

Now, if you understand all this, and are wondering why I am bothering you with this history of engines and science, please do go back and read the last conclusion that I mentioned. Just as we did not know how to build "heat engines" or "steam engines" until Watt did his pioneering work, we do not know how to build "Profits Engines" today. I came to this conclusion as far back as 2000. Now I am restating it with new data, and after we have experienced a total financial collapse in 2008. If we want the economy to improve and people to get back to work, we must learn how to build a good "Profits Engine". The gedanken (thought) experiments, such as those I have described in the above document, may be the first step. I may be at part one of the story, as told above. Now part two has to begin. I need some more associates, people who are passionate about this, to take it to the next level.
A good place to start might be to find people who have connections at the highest levels at Ford Motor Company. In the early 2000s, I knew a few such people and succeeded in making some good connections. But the efforts
failed. We have to try again. Why Ford? Location. Their headquarters is right here in Dearborn, MI. Ford is also interesting in another sense. It seems to be the case of a company that has actually gone past the maximum point of the profits-revenues curve predicted by this extension of the ideas from quantum physics to economics. It still behaves "erratically" and delivers profits erratically, in spite of all the great efforts to get this "Profits Engine" running smoothly since 2000. A whopping $20 billion in profits in 2011, but what about 2012? Who knows? Is anyone willing to make any bets yet? If you agree with this last point, then we need to see whether Mr. Bill Ford Jr. will pay attention to these ideas and get his Profits Engine churning.

As always, my sincere thanks for giving me your valuable time. But now, let's try to do something more, and/or point me in the right direction. With my best regards.

Very sincerely