Wednesday, January 01, 2020
The emphasis of this blog, however, is mainly critical of neoclassical and mainstream economics. I have been alternating numerical counter-examples with less mathematical posts. In any case, I have been documenting demonstrations of errors in mainstream economics. My chief inspiration here is the Cambridge-Italian economist Piero Sraffa.
In general, this blog is abstract, and I think I steer clear of commenting on practical politics of the day.
I've also started posting recipes for my own purposes. When I just follow a recipe in a cookbook, I'll only post a reminder that I like the recipe.
Comments Policy: I'm quite lax on enforcing any comments policy. I prefer those who post as anonymous (that is, without logging in) to sign their posts at least with a pseudonym. This will make conversations easier to conduct.
Friday, May 19, 2017
|Figure 1: Random Patterns in Life and Flip Life|
This post has nothing to do with economics, although it does illustrate emergent behavior. And I have figures that are an eye test. I am subjectively original. But I assume somebody else has done this - that I am not objectively original.
This post is an exercise in combinatorics. There are 131,328 life-like Cellular Automata (CA), up to symmetry.
2.0 Conway's Game of Life
The GoL is "played", if you can call it that, on an infinite plane divided into equally sized squares. The plane looks something like a chess board, extended forever. See the left side of Figure 1, above. Every square, at any moment in time, is in one of two states: alive or dead. Time is discrete. The rules of the game specify the state of each square at any moment in time, given the configuration at the previous instant.
The state of a square does not depend solely on its previous state. It also depends on the states of its neighbors. Two types of neighborhoods have been defined for a CA with a grid of square cells. The Von Neumann neighbors of a cell are the four cells above it, below it, and to the left and right. The Moore neighborhood (Figure 2) consists of the Von Neumann neighbors and the four cells diagonally adjacent to a given cell.
|Figure 2: Moore Neighborhood of a Dead Cell|
The GoL is defined for Moore neighborhoods. State transition rules can be defined in terms of two cases:
- Dead cells: By default, a dead cell stays dead. If a cell was dead at the previous moment, it becomes (re)born at the next instant if the number of live cells in its Moore neighborhood at the previous moment was x1 or x2 or ... or xn.
- Alive Cells: By default, a live cell becomes dead. If a cell was alive at the previous moment, it remains alive if the number of live cells in its Moore neighborhood at the previous moment was y1 or y2 or ... or ym.
The state transition rules for the GoL can be specified by the notation Bx/Sy. Let x be the concatenation of the numbers x1, x2, ..., xn. Let y be the concatenation of y1, y2, ..., ym. The GoL is B3/S23. In other words, if exactly three of the neighbors of a dead cell are alive, it becomes alive for the next time step. If exactly two or three of the neighbors of a live cell are alive, it remains alive at the next time step. Otherwise a dead cell remains dead, and a live cell becomes dead.
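The B3/S23 transition can be sketched in code. This is my own illustrative sketch, not the author's implementation: it approximates the infinite plane with a finite grid whose boundary is treated as permanently dead, and the class and method names are my own.

```java
// A minimal sketch of one time step of the B3/S23 rule on a finite grid.
// Cells outside the grid are treated as dead, so this only approximates
// the infinite plane of the Game of Life.
public class LifeStep {
    // Apply the B3/S23 rule once. grid[i][j] is true when cell (i, j) is alive.
    public static boolean[][] step(boolean[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int live = 0;
                // Count live cells in the Moore neighborhood.
                for (int di = -1; di <= 1; di++) {
                    for (int dj = -1; dj <= 1; dj++) {
                        if (di == 0 && dj == 0) continue;
                        int ni = i + di, nj = j + dj;
                        if (ni >= 0 && ni < rows && nj >= 0 && nj < cols
                                && grid[ni][nj]) live++;
                    }
                }
                // B3: a dead cell with exactly 3 live neighbors is born.
                // S23: a live cell with 2 or 3 live neighbors survives.
                next[i][j] = grid[i][j] ? (live == 2 || live == 3) : (live == 3);
            }
        }
        return next;
    }
}
```

For instance, a "blinker" (three live cells in a row) alternates between a horizontal and a vertical bar under these rules.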
The GoL is an example of recreational mathematics. Starting with random patterns, one can predict, roughly, the distributions of certain patterns when the CA settles down, in some sense. On the other hand, the specific patterns that emerge can only be found by iterating through the GoL, step by step. And one can engineer certain patterns.
3.0 Life-Like Cellular Automata
For the purposes of this post, a life-like CA is a CA defined with:
- A two dimensional grid with square cells and discrete time
- Two states for each cell
- State transition rules specified for Moore neighborhoods
- State transition rules that can be specified by the Bx/Sy notation.
How many life-like CA are there? This is the question that this post attempts to answer.
The Moore neighborhood of a cell contains eight cells. Thus, each of the digits 0, 1, 2, 3, 4, 5, 6, 7, and 8 can appear in Bx. For each digit, one has two choices: either it appears in the birth rule or it does not. Thus, there are 2^9 birth rules.
The same logic applies to survival rules. There are 2^9 survival rules.
Each birth rule can be combined with any survival rule. So there are:
2^9 · 2^9 = 2^18
life-like CA. But this number is too large. I am double counting, in some sense.
4.0 Reversing Figure and Ground
Figure 1 shows, side by side, grids from the GoL and from a CA called Flip Life. Flip Life is specified as B0123478/S01234678. Figure 3 shows a window from a computer program. In the window on the left, the rules for the GoL are specified. The window on the right is used to specify Flip Life.
|Figure 3: Rules for Life and Flip Life|
Flip Life basically renames the states in the GoL. Cells that are called dead in the GoL are said to be alive in Flip Life. And cells that are alive in the GoL are dead in Flip Life. In counting the number of life-like CA, one should not count Flip Life separately from the GoL. In some sense, they are the same CA.
More generally, suppose Bx/Sy specifies a life-like CA, and let Bu/Sv be the life-like CA in which figure and ground are reversed.
- For each digit xi in x, 8 - xi is not in v, and vice versa.
- For each digit yj in y, 8 - yj is not in u, and vice versa.
So for any life-like CA, one can find another symmetrical CA in which dead cells become alive and vice versa.
5.0 Self-Symmetrical CAs
One cannot just divide 2^18 by two to find the number of life-like CA, up to symmetry. Some rules define CA that are the same CA when one reverses figure and ground. As an example, Figure 4 presents a screen snapshot for the CA called Day and Night, specified by the rule B3678/S34678.
|Figure 4: Day and Night: An Example of a Self-Symmetrical Cellular Automaton|
Given rules for births, one can figure out what the rules must be for survival for the CA to be self-symmetrical. Thus, there are as many self-symmetrical life-like CAs as there are rules for births, namely 2^9 = 512.
6.0 Combinatorics
I bring all of the above together in this section. Table 1 shows a tabulation of the number of life-like CAs, up to symmetry.
|Life-Like Rules||2^9 · 2^9 = 262,144|
|Non-Self-Symmetric Rules||2^9 (2^9 - 1)|
|Non-Self-Symmetric Rules, Up to Symmetry||2^8 (2^9 - 1)|
|With Self-Symmetric Rules Added Back||2^8 (2^9 - 1) + 2^9 = 2^8 (2^9 + 1) = 131,328|
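The tabulation can be checked by brute force. In this sketch of mine (the bitmask encoding and names are my own, not from the post), a rule is a pair of 9-bit masks over the neighbor counts 0 through 8; the figure-ground dual is computed digit by digit, self-dual rules are counted, and each rule is paired with its dual.

```java
// A sketch verifying the count of life-like CA up to figure-ground symmetry.
// A rule (b, s) is a pair of 9-bit masks: bit k of b (of s) is set when a
// dead (live) cell with k live Moore neighbors becomes (stays) alive.
public class LifeLikeCount {
    // Figure-ground dual of rule (b, s): birth at k in the dual corresponds
    // to death at 8 - k in the original, and similarly for survival.
    public static int[] dual(int b, int s) {
        int b2 = 0, s2 = 0;
        for (int k = 0; k <= 8; k++) {
            if ((s & (1 << (8 - k))) == 0) b2 |= 1 << k;
            if ((b & (1 << (8 - k))) == 0) s2 |= 1 << k;
        }
        return new int[]{b2, s2};
    }

    // Count rules up to symmetry: pair each rule with its dual, so
    // non-self-dual rules are halved and self-dual rules count once.
    public static long countUpToSymmetry() {
        long selfDual = 0;
        for (int b = 0; b < 512; b++)
            for (int s = 0; s < 512; s++) {
                int[] d = dual(b, s);
                if (d[0] == b && d[1] == s) selfDual++;
            }
        long total = 512L * 512L; // 2^18 rules
        return (total - selfDual) / 2 + selfDual;
    }
}
```

Running this gives 512 self-dual rules and 131,328 rules up to symmetry, and the dual of B3/S23 comes out as B0123478/S01234678, matching Flip Life.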
How many of these 131,328 life-like CA are interesting? Answering this question requires some definition of what makes a CA interesting. It also requires some means of determining whether a given CA is in the set so defined. Some CAs are clearly not interesting. For example, consider a CA in which all cells eventually die off, leaving an empty grid. Or consider a CA in which, starting with a random grid, the grid remains random for all time, with no defined patterns ever forming. Somewhat more interesting would be a CA in which patterns grow like a crystal, repeating and duplicating. But perhaps an interesting definition of an interesting CA would be one that can simulate a Turing machine and thus may compute any computable function. The GoL happens to be Turing complete.
Acknowledgements: I started with version 1.5 of Edwin Martin's implementation, in Java, of John Conway's Game of Life. I have modified this implementation in several ways.
References
- David Eppstein (2010). Growth and Decay in Life-Like Cellular Automata
Saturday, May 13, 2017
|Figure 1: National Income and Product Accounts|
This post contains some speculation about technical progress.
2.0 Non-Random Innovations and Almost Straight Wage Curves
The theory of the production of commodities by means of commodities imposes one restriction on wage-rate of profits curves: they should be downward-sloping. They can be of any convexity. They are defined by high-order polynomials, where the order depends on the number of produced commodities. So no reason exists why they should not change convexity many times in the first quadrant, where the rate of profits is positive and below the maximum rate of profits. The theory of the choice of technique suggests that, if multiple processes are available for producing many commodities, many techniques will contribute to part of the wage-rate of profits frontier.
The empirical research does not show this. When I looked at all countries or regions in the world, I found very little visual deviation from straight lines for most wage curves, for the ruling technique.[1] The exceptions tended to be undeveloped countries. Han and Schefold, in their empirical search for capital-theoretic paradoxes in OECD countries, also found mostly straight curves. And only a few techniques appeared on the frontier.
I have a qualitative explanation of this discrepancy between expectations from theory and empirical results. The theory I draw on above takes technology as given. It is as if economies are analyzed based on an instantaneous snapshot. But technology evolves as a dynamic process. The flows among industries and final demands have been built up over decades, if not centuries.
In advanced economies, technology does not change randomly. Large corporations have Research and Development departments, universities form extensive networks, and the government sponsors efforts to advance Technology Readiness Levels.[2] Sponsored research is not directed randomly. Technical feasibility is an issue, although that changes over time. Another concern is what is costly at the moment, with cost being defined widely. I suggest that a constant effort to lower reliance on high-cost inputs in production processes, over time, results in coefficients of production being lowered such that wage curves become more straight.[3]
The above story suggests that one should develop some mathematical theorems. I am aware of two areas of research in Sraffian economics that seem promising for further inquiry along these lines. First, consider Luigi Pasinetti's structural economic dynamics. I have an analysis of hardware and software costs in computer systems, which might be suggestive. Second, Bertram Schefold has been analyzing the relationship among the shape of wage curves, random matrices, and eigenvalues, including eigenvalues other than the Perron-Frobenius root.
3.0 Innovations Dividing Columns in Input-Output Tables, Not Adjoining Completely New Ones
I have been moping, during my day job, about how I cannot keep up with some of my fellow software developers. I return to, say, Java programming after a few years, and there is a whole new set of tools. And yet, much of what I have learned did not even exist when I received either of my college degrees. For example, creating an Android app in Android Studio or IntelliJ involves, minimally, XML, Java, and virtual machines for testing. Back in the 1980s, I saw some presentations from Marvin Zelkowitz on what might be described as an Integrated Development Environment (IDE). He had an editor that understood Pascal syntax, suggested statement completions, and, if I recall correctly, could be used to set breakpoints and examine states for executing code. I do not know whether this work fed into, for example, Eclipse.
Nowadays, you can specialize in developing web apps.[4] Some of my co-workers are Certified Information Systems Security Professionals (CISSPs). They know a lot of concepts that are sort of orthogonal to programming.[5] I also know people that work at Security Operations Centers (SOCs).[6] And there are many other software specialties.
In short, software should no longer be considered a single industry. Glancing quickly at the web site for the Bureau of Economic Analysis, I note the following industries in the 2007 benchmark input-output tables:
- Software publishers (511200)
- Data processing, hosting, and related services (518200)
- Internet publishing and broadcasting and Web search portals (519130)
- Custom computer programming services (541511)
- Computer systems design services (541512)
- Other computer related services, including facilities management (54151A)
Coders, programmers, and software engineers definitely provide labor inputs in many other industries. Cybersecurity does not even appear above.
What would input-output tables have looked like, for software, in the 1970s? I speculate you might find industries for the manufacture of computers, telecommunication equipment, and satellites & space vehicles. And data processing would probably be an industry.
I am thinking that new industries come about, in modern economies, more by division and greater articulation of existing industries than by suddenly creating completely new products. And this can be seen in divisions and movements of industries in the National Income and Product Accounts (NIPA). One might explore innovation over the last half-century or so by looking at the evolution of industry taxonomies in the NIPA.[7]
4.0 Conclusion
This post suggests some research directions.[8] At this point, I do not intend to pursue either.
Footnotes
1. Reviewers, several years ago, had three major objections to this paper. One was that I had to offer some suggestion why wage curves should be so straight. The other two were that I needed to offer a more comprehensive explanation of how to map from the raw data to the input-output tables I used and that I had to account for fixed capital and depreciation.
2. John Kenneth Galbraith's The New Industrial State is a somewhat dated analysis of these themes.
3. They also move outward.
4. The web is not old. Tools like Glassfish, Tomcat, and JBoss, and their commercial competitors, are neat.
5. Such as confidentiality, integrity, and availability; two-factor authentication; Role-Based Access Control; taxonomies for vulnerabilities and intrusions; Public Key Infrastructure; symmetric and asymmetric encryption; the Risk Management Framework (RMF) for Information Assurance (IA) Certification and Accreditation; and on and on.
6. A SOC differs from a Network Operations Center. Operators of a SOC have to know about host-based and network-based Intrusion Detection, Security Incident and Event Management (SIEM) systems, Situation Awareness, forensics, and so on.
7. One should be aware that part of the growth in the tracking of industries might be because computer technology has evolved. Von Neumann worried about numerical methods for calculating matrix inverses. Much bigger matrices are practical now.
8. I do not think my ideas in Section 3 are expressed well.
Saturday, May 06, 2017
|Figure 1: Blowup of Distribution of Maximum Rate of Profits|
This post extends the results from my last post. I think of the results presented here as providing information about the implementation of my simulation. I do not claim any implications about actually existing economies. I did not have any definite anticipations about what I would see. I suppose it could be of interest to regenerate these results where coefficients of production are randomly generated from some non-uniform distribution.
I continue to use a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.
Each random economy is tested for viability. Non-viable economies are discarded. Table 1 shows how many economies needed to be generated, given the number of produced commodities, to end up with a sample size of 300 viable economies. The maximum rate of profits is calculated for each viable economy. The maximum rate of profits occurs when the wage is zero, and the workers live on air. Thus, labor coefficients do not matter for the calculation of the maximum rate of profits.
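The maximum rate of profits can be sketched as follows. At a zero wage, prices satisfy p = (1 + R) p A, so 1/(1 + R) is the Perron-Frobenius root of the Leontief input-output matrix A, and labor coefficients drop out. This is my own sketch, not the author's code: it estimates the root by power iteration, which converges for the nonnegative matrices generated here.

```java
// A sketch of the maximum rate of profits for a viable economy with
// Leontief input-output matrix A: R = 1/lambda - 1, where lambda is the
// Perron-Frobenius (largest) eigenvalue of A, estimated by power iteration.
public class MaxRateOfProfits {
    // Dominant eigenvalue of a nonnegative square matrix, by power iteration.
    public static double perronRoot(double[][] a) {
        int n = a.length;
        double[] x = new double[n];
        java.util.Arrays.fill(x, 1.0);
        double lambda = 0.0;
        for (int iter = 0; iter < 10_000; iter++) {
            double[] y = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    y[i] += a[i][j] * x[j];
            double norm = 0.0;
            for (double v : y) norm = Math.max(norm, Math.abs(v));
            for (int i = 0; i < n; i++) x[i] = y[i] / norm;
            lambda = norm;
        }
        return lambda;
    }

    // Maximum rate of profits, as a percentage; meaningful when lambda < 1.
    public static double maxRateOfProfits(double[][] a) {
        return 100.0 * (1.0 / perronRoot(a) - 1.0);
    }
}
```

For a viable economy the root lies below one, so the maximum rate of profits is positive.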
|547,527||5||> 2^31 - 1|
I looked at the distribution of the maximum rate of profits, calculated as a percentage, in several ways. Figure 2 presents four histograms, superimposed on one another. Figure 1 expands the left tails of these histograms. I suppose Figure 2 is somewhat easier to make sense of than Figure 1, when you click on the image. Maybe the statistics in Tables 2 and 3 are clearer. One can see, for example, in random economies in which two commodities are produced, the mean of the maximum rate of profits is 43.9%. The minimum, in these 300 random economies, of the maximum rate of profits is about 0.03% and the maximum is 318%. If I wanted to be more thorough, I would have to review how skewness and kurtosis are calculated by default in the Java class org.apache.commons.math3.stat.descriptive.DescriptiveStatistics. The coefficient of variation is the ratio of the standard deviation to the mean. The nonparametric analogy, reported in the last row in Table 3, is the ratio of the Inter-Quartile Range to the median. Anyways, the distribution of the maximum rate of profits, in random viable economies generated by the simulation, is non-Gaussian and highly skewed, with a tail extending to the right.
|Figure 2: Distribution of Maximum Rate of Profits|
|Number of Produced Commodities|
|Coef. of Var.||0.875||0.811||1.10||0.839|
|Number of Produced Commodities|
With the simulation, the maximum rate of profits tends to be smaller, the more commodities are produced. I wish I could extend these results to many more produced commodities. National Income and Product Accounts (NIPAs), at the grossest level of aggregation, have on the order of 100 produced commodities. Even if results with the assumption of an arbitrary probability distribution for coefficients of production could be directly applied empirically, one would like confirmation that trends seen with a very small number of produced commodities continue.
Wednesday, May 03, 2017
|Figure 1: Probability a Random Economy Will Be Viable|
I have begun working towards replicating certain simulation results reported by Stefano Zambelli.
At this point, I have implemented a capability to generate a random economy, where such an economy is characterized by a single technique. A technique is specified by a row vector of labor coefficients and a corresponding square Leontief input-output matrix. The labor coefficients are randomly generated from a uniform distribution on (0.0, 1.0]. Each coefficient in the Leontief input-output matrix is randomly generated from a uniform distribution on [0.0, 1.0). The random number generator is as provided by the class java.util.Random, in the Java programming language. I am running Java version 1.8.
A Monte Carlo simulation, in the results reported here, tests each random economy for viability, where the technique, for each economy, is used to produce a specified number of commodities. A viable economy can reproduce the inputs used up in producing the outputs. If the economy is just viable, nothing is left over to pay the workers and the capitalists. The Hawkins-Simon condition can be used to check for viability.
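The Hawkins-Simon condition can be sketched in code: an economy with input-output matrix A is viable when every leading principal minor of (I - A) is positive. The sketch below is my own (class and method names are mine, not the author's), with each minor evaluated by Gaussian elimination.

```java
// A sketch of the Hawkins-Simon viability test: the economy with Leontief
// input-output matrix A is viable when all leading principal minors of
// (I - A) are positive.
public class Viability {
    // Determinant of the leading k-by-k submatrix of m, by Gaussian
    // elimination with partial pivoting.
    static double leadingMinor(double[][] m, int k) {
        double[][] a = new double[k][];
        for (int i = 0; i < k; i++) a[i] = java.util.Arrays.copyOf(m[i], k);
        double det = 1.0;
        for (int col = 0; col < k; col++) {
            int pivot = col;
            for (int row = col + 1; row < k; row++)
                if (Math.abs(a[row][col]) > Math.abs(a[pivot][col])) pivot = row;
            if (a[pivot][col] == 0.0) return 0.0;
            if (pivot != col) { // row swap flips the sign of the determinant
                double[] t = a[pivot]; a[pivot] = a[col]; a[col] = t;
                det = -det;
            }
            det *= a[col][col];
            for (int row = col + 1; row < k; row++) {
                double f = a[row][col] / a[col][col];
                for (int j = col; j < k; j++) a[row][j] -= f * a[col][j];
            }
        }
        return det;
    }

    public static boolean isViable(double[][] a) {
        int n = a.length;
        double[][] iMinusA = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                iMinusA[i][j] = (i == j ? 1.0 : 0.0) - a[i][j];
        for (int k = 1; k <= n; k++)
            if (leadingMinor(iMinusA, k) <= 0.0) return false;
        return true;
    }
}
```

A Monte Carlo run would then draw random matrices, call isViable on each, and record the fraction passing.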
Table 1 reports the results. The number of Monte Carlo runs, for each row, is 1,000,000,000. The seed is reported so I can replicate my results, if I want. I think I can provide a symmetry argument for why the probability for the first row should be 1/2. I reran the simulation for the last row with 2,000,000,000 runs and the same seed. I still found zero viable economies.
Zambelli suggests randomly specifying a rescaled output, in some sense, for the technology so as to ensure viability. I have a rough conceptual understanding of this step, but I need a better understanding to reduce it to source code. I think I'll go on to further analyses before revisiting the issue of viability. The above results certainly suggest that my analyses will be limited, in the mean time, to economies that produce only two, three, or maybe four commodities.
I think that Zambelli's approach is worthwhile for pursuing the results in which he is interested. One limitation arises with applying a probability distribution to one particular description of technology. In practice, coefficients of production evolve in a non-random manner. Pasinetti's structural dynamics is a good way of exploring technical progress in the tradition of Sraffa.
References
- Stefano Zambelli (2004). The 40% neoclassical aggregate theory of production. Cambridge Journal of Economics 28(1): pp. 99-120.
Thursday, April 20, 2017
This post contains some musing on corporate finance and its relation to the theory of production.
2.0 Investments, the NPV, and the IRR
In finance, an investment project or, more shortly, an investment, is a sequence of dated cash flows. Consider an investment in which these cash flows take place at the end of n successive years. Let C_t, for t = 0, 1, ..., n - 1, be the cash flow at the end of the t-th year, counting back from the last year in the investment. That is, C_{n-1} is the cash flow at the end of the first year in the investment, and C_0 is the last cash flow.
The Net Present Value (NPV) of an investment is the sum of discounted cash flows in the investment. Let r be the interest rate used in time time discounting, and suppose all cash flows are discounted to the end of the first year in the investment. Then the NPV of the illustrative investment is:
NPV_0(r) = C_{n-1} + C_{n-2}/(1 + r) + ... + C_0/(1 + r)^{n-1}
If the above expression is multiplied by (1 + r)^{n-1}, one obtains the NPV of the investment with every cash flow discounted to the last year in the investment:
NPV_1(r) = C_{n-1}(1 + r)^{n-1} + C_{n-2}(1 + r)^{n-2} + ... + C_0
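The two discounting conventions can be sketched in code. This is my own illustration (the class and method names are mine): flows are indexed as in the text, with flows[t] holding C_t and C_{n-1} the temporally first flow, and the identity NPV_1(r) = (1 + r)^{n-1} NPV_0(r) holds term by term.

```java
// A sketch of the two NPV conventions above.  flows[t] is C_t, with
// C_{n-1} the temporally first cash flow and C_0 the last.
public class Npv {
    // NPV with every flow discounted to the end of the first year.
    public static double npv0(double[] c, double r) {
        int n = c.length;
        double sum = 0.0;
        for (int t = 0; t < n; t++)
            sum += c[t] / Math.pow(1.0 + r, n - 1 - t);
        return sum;
    }

    // NPV with every flow accumulated to the end of the last year.
    public static double npv1(double[] c, double r) {
        double sum = 0.0;
        for (int t = 0; t < c.length; t++)
            sum += c[t] * Math.pow(1.0 + r, t);
        return sum;
    }
}
```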
For the next step, I need some sign conventions. Let a positive cash flow designate revenues, and a negative cash flow a cost. Suppose, for now, that the (temporally) first cash flow is a cost, that is, negative. Then (-1/C_{n-1}) NPV_1(r) is a polynomial in (1 + r), with unity as the coefficient for the highest-order term. All the coefficients are real.
Such a polynomial has n - 1 roots. These roots can be real numbers, either negative, zero, or positive. They can be complex. Since all coefficients of the polynomial are real, complex roots enter as conjugate pairs. Roots can be repeating. At any rate, the polynomial can be factored, as follows:
NPV_1(r) = (-C_{n-1})(r - r_0)(r - r_1)...(r - r_{n-2})
where r_0, r_1, ..., r_{n-2} are the n - 1 roots of the polynomial. Note that the interest rate appears only in terms in which the difference between the interest rate and one root is taken. And all roots appear on the right-hand side. I am going to call a specification of the NPV with these properties an Osborne expression for the NPV.
Suppose, for now, that at least one root is real and non-negative. The Internal Rate of Return (IRR) is the smallest real, non-negative root. For notational convenience, let r_0 be the IRR.
3.0 Standard Investments in Selected Models of Production
A standard investment is one in which all negative cash flows precede all positive cash flows. Is there a theorem that an IRR exists for each standard investment? Perhaps this can be proven by discounting all cash flows to the end of the year in which the last outgoing cash flow occurs. Maybe one needs a clause that the undiscounted sum of the positive cash flows does not fall below the undiscounted sum of the negative cash flows.
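The reasoning above can be sketched numerically. In this sketch of mine (not the author's code, and the names are my own), flows are in time order; for a standard investment whose undiscounted positive flows exceed the negative ones, the NPV is positive at r = 0 and approaches the negative first flow as r grows, so a sign change brackets a real non-negative root, which bisection then locates. That the bracketed root is the smallest one rests on the NPV changing sign only once here, an assumption of this sketch rather than a theorem.

```java
// A sketch of finding the IRR of a standard investment by bisection.
// flows are in time order, with the temporally first (negative) flow first.
public class Irr {
    // NPV of the flows, discounted to the time of the first flow.
    public static double npv(double[] flows, double r) {
        double sum = 0.0;
        for (int t = 0; t < flows.length; t++)
            sum += flows[t] / Math.pow(1.0 + r, t);
        return sum;
    }

    public static double irr(double[] flows) {
        double lo = 0.0, hi = 1.0;
        while (npv(flows, hi) > 0.0) hi *= 2.0; // bracket a sign change
        for (int i = 0; i < 200; i++) {         // bisect
            double mid = 0.5 * (lo + hi);
            if (npv(flows, mid) > 0.0) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }
}
```

For example, the standard investment (-100, 60, 60) has an IRR of about 13.07 percent.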
At any rate, an Osborne expression for NPV has been calculated for standard investments characterizing two models of production. As I recall it, Osborne (2010) illustrates a more abstract discussion with a point-input, flow-output example. Consider a model in which a machine is first constructed, in a single year, from unassisted labor and land. That machine is then used to produce output over multiple years. Given certain assumptions on the pattern of the efficiency of the machine, this example is of a standard investment, with one initial negative cash flow followed by a finite sequence of positive cash flows.
On the other hand, I have presented an example for a flow-input, point-output model. Techniques of production are represented as finite series of dated labor inputs, with output for sale on the market at a single point in time. Each technique is characterized by a finite sequence of negative cash flows, followed by a single positive cash flow.
In each of these two examples, the NPV can be represented by an Osborne expression that combines information about all roots of a polynomial. Thus, basing an investment decision on the NPV uses more information than basing it on the IRR, which is a single root of the relevant polynomial.
4.0 Non-Standard Investments and Pitfalls of the IRR
In a non-standard investment, at least one positive cash flow precedes a negative cash flow. Non-standard investments can highlight three pitfalls in basing an investment decision on the IRR:
- Multiple IRRs: The polynomial defining the IRR may have more than one real, non-negative root. What is the rationale for picking the smallest?
- Inconsistency in recommendations based on IRR and NPV: The smallest real non-negative root may be positive (suggesting a good investment), with a negative NPV (suggesting a bad investment).
- No IRR: All roots may be complex.
Berk and DeMarzo (2014) present the example in Table 1 as an illustration of the third pitfall. They imagine an author who receives an advance of $750,000, sacrifices an income of $500,000 in each year of writing a book, and, finally, receives a royalty of one million dollars upon publication. The roots of the polynomial defining the NPV are -1.71196 ± 0.78662 j and 0.04529 ± 0.30308 j. All of these roots are complex; none satisfies the definition of the IRR.
5.0 Issues for Multiple Interest Rate Analysis
Osborne, in his 2014 book, extends his 2010 analysis of the NPV to consider the first and second pitfalls above. I do not know of an Osborne expression for the NPV derived for an example in which the third pitfall arises.
The idea that the pitfalls above for the use of the IRR might be a problem for multiple interest rate analysis was suggested to me anonymously. On even hours, I do not see this. Why should I care about how many roots there are in an Osborne expression for the NPV, their sign, or even if they are complex?
On the other hand, I wonder about how non-standard investments relate to the theory of production. I know that an example can be constructed, in which the price of a used machine becomes negative before it becomes positive. Can the varying efficiency of the machine result in a non-standard investment? After all, the cash flow, in such an example of joint production, is the sum of the price of the conventional output of the machine and the price of the one-year older machine. Even when the latter is negative, the sum need not be negative. But, perhaps, it can be in some examples.
Not all techniques in models with joint production, of the production of commodities by means of commodities, can be represented as dated labor flows. I guess one can still talk about NPVs. Can one formulate an algorithm, based on NPVs, for the choice of technique? How would certain annoying possibilities, such as cycling, be accounted for? Can one always formulate an Osborne expression for the NPV? Do properties of multiple interest rates have implications for, for example, a truncation rule in a model of fixed capital? Perhaps a non-standard investment, for a fixed capital example and one pitfall noted above, always has a cost-minimizing truncation in which the pitfall does not arise. Or perhaps the opposite is true.
Anyway, I think some issues could support further research relating models of production in economics and finance theory. Maybe one obtains, at least, a translation of terms.
Appendix: Technical Terminology
See body of post for definitions.
- Flow-Input, Point-Output Model
- Investment Project
- Internal Rate of Return (IRR)
- Net Present Value (NPV)
- Non-Standard Investment
- Osborne Expression (for NPV)
- Point-Input, Flow-Output Model
- Standard Investment
References
- Jonathan Berk and Peter DeMarzo (2014). Corporate Finance, 3rd edition. Boston: Pearson Education.
- Michael Osborne (2010). A resolution to the NPV-IRR debate? Quarterly Review of Economics and Finance 50(2): pp. 234-239.
- Michael Osborne (2014). Multiple Interest Rate Analysis: Theory and Applications. New York: Palgrave Macmillan.
- Robert Vienneau (2016). The choice of technique with multiple and complex interest rates, DRAFT.
Thursday, April 13, 2017
- Beatrice Cherrier suggests that Paul Samuelson originated the term "mainstream economics", in his textbook. (h/t: I think I found this via Unlearning Economics's twitter feed.)
- Jo Michell reviews The Econocracy, by Joe Earle, Cahal Moran, and Zach Ward-Perkins.
- On twitter, Cameron Murray finds, of the 46 who responded, 78% did not "learn about the Cambridge Capital Controversy at [any] point in [their] degree".
- In the Guardian, Kate Raworth argues that new economics is needed to replace the old economics and its foundation on false laws of physics.