As yours truly has argued – e.g. here, here and here – time irreversibility and non-ergodicity are extremely important issues for understanding the deep fundamental flaws of mainstream neoclassical economics in general.
Ole Peters' presentation at Gresham College gives further evidence of why expectation values are irrelevant for understanding economic systems in particular.
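Peters' point is easy to make concrete with his standard multiplicative coin-toss gamble: heads multiplies your wealth by 1.5, tails by 0.6. The expectation value grows by 5% per round, yet almost every individual trajectory decays over time – the ensemble average and the time average part company. A minimal simulation (my own sketch of Peters' example):

```python
import random

random.seed(42)

def play(rounds: int) -> float:
    """Wealth after `rounds` tosses of Peters' gamble, starting from 1.0."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble average: each round multiplies expected wealth by
# 0.5 * 1.5 + 0.5 * 0.6 = 1.05, i.e. the expectation grows 5% per round.
# Time average: a typical single trajectory shrinks by the factor
# sqrt(1.5 * 0.6) ~ 0.949 per round, i.e. loses about 5% per round.
players = [play(100) for _ in range(20_000)]
ensemble_mean = sum(players) / len(players)
median_wealth = sorted(players)[len(players) // 2]

print(f"mean wealth across players: {ensemble_mean:.2f}")
print(f"median individual wealth:   {median_wealth:.6f}")
```

The mean is propped up by a handful of astronomically lucky players, while the median player is nearly ruined – which is why the expectation value tells you next to nothing about what happens to any given person living through time.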
In Silvio Gesell's Die natürliche Wirtschaftsordnung (1916), the author points to money's tendency to lower the costs of exchanging goods (what in modern terms might be called its ability to lower transaction costs). But money, unlike goods, is also easy to store, since it does not "rust". It thus functions not only as a medium of exchange but also as a store of value.
The usefulness and convenience of money create a demand for having it at one's disposal, which is why interest exists. Gesell argues that gross interest contains a risk premium for the lender, and also a "hausse premium" meant to compensate for the speculative gains (from buying and later selling durable goods) that the lender believes he forgoes by lending out his money instead.
If these two mark-ups are removed, what remains is what Gesell calls the "ur-interest". It corresponds broadly to what is otherwise called pure interest or net interest. Gesell claims historical support for the view that ur-interest has remained roughly the same since the emergence of money, namely 3-5 per cent per year. It rests, namely, on the inherent advantages money confers on its holder.
Interest – which, much as in John Maynard Keynes, is seen primarily as a payment to counteract the tendency to hoard money in the mattress – is not Gesell's primary target. It is instead seen as a symptom of what Gesell regards as the fundamental problem – money's function as capital. Interest is nevertheless a problem because it directs income flows to those who have money. This is considered partly unjust, and partly a cause of the failure to achieve a proper balance between production and consumption in the goods market. People living on interest income hold excess liquidity (a surplus of money) that is not directly used for purchasing goods. This leads to uncertain sales of inventories and thereby to crises with falling goods prices and unemployment.
Like sea-fire a star twinkles, goes out and lights up
and goes out and lights up again. The quivering depths carry it
Thus I have stood at a hundred Land's Ends
and thought about what I want and what I am to do in the world
The one, I suppose, would be: to be as one is
The other, I suppose, is: to strain against the goad
And had I held the sharp goad less dear
I would, more than gladly, be like the others
Some people's essence is: to be. Others': to be without
Roads have no goal. It is paths that lead there
Did you see a window shining? Did you think of knocking on the pane?
Yours is a moonlight road, winding white in the swell.
(On YouTube one can relive Tone Bengtsson's wonderfully fine 1995 TV portrait of Gunnar Ekelöf: http://youtu.be/NwEJCvniooA
h/t Jan Milch)
Jager and Leek may well be correct in their larger point, that the medical literature is broadly correct. But I don’t think the statistical framework they are using is appropriate for the questions they are asking. My biggest problem is the identification of scientific hypotheses and statistical “hypotheses” of the “theta = 0” variety.
Based on the word “empirical” in the title, I thought the authors were going to look at a large number of papers with p-values and then follow up and see if the claims were replicated. But no, they don’t follow up on the studies at all! What they seem to be doing is collecting a set of published p-values and then fitting a mixture model to this distribution, a mixture of a uniform distribution (for null effects) and a beta distribution (for non-null effects). Since only statistically significant p-values are typically reported, they fit their model restricted to p-values less than 0.05. But this all assumes that the p-values have this stated distribution. You don’t have to be Uri Simonsohn to know that there’s a lot of p-hacking going on. Also, as noted above, the problem isn’t really effects that are exactly zero, the problem is that a lot of effects are lost in the noise and are essentially undetectable given the way they are studied.
Jager and Leek write that their model is commonly used to study hypotheses in genetics and imaging. I could see how this model could make sense in those fields … but I don’t see this model applying to published medical research, for two reasons. First … I don’t think there would be a sharp division between null and non-null effects; and, second, there’s just too much selection going on for me to believe that the conditional distributions of the p-values would be anything like the theoretical distributions suggested by Neyman-Pearson theory.
So, no, I don’t at all believe Jager and Leek when they write, “we are able to empirically estimate the rate of false positives in the medical literature and trends in false positive rates over time.” They’re doing this by basically assuming the model that is being questioned, the textbook model in which effects are pure and in which there is no p-hacking.
Indeed. If anything, this underlines how important it is not to equate science with statistical calculations. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. When you are working with misspecified models, the scientific value of significance testing is actually zero – even though you’re making formally valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:
There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.
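The p-hacking worry raised above is easy to make concrete. Here is a small simulation (an illustration of my own, not Jager and Leek's model) of one common questionable research practice – peeking at accumulating data and stopping as soon as p < 0.05. Every individual look is a perfectly valid 5% test, yet the realised false-positive rate on true null effects is far higher than the nominal one:

```python
import math
import random

random.seed(0)

Z_CRIT = 1.96                 # two-sided 5% critical value for a z-test
PEEKS = range(10, 101, 10)    # look at the data after every 10 observations

def one_experiment() -> bool:
    """True null: data ~ N(0,1). Stop and 'publish' at the first peek with p < .05."""
    data = []
    for n in PEEKS:
        while len(data) < n:
            data.append(random.gauss(0.0, 1.0))
        z = abs(sum(data)) / math.sqrt(n)   # known-variance z statistic
        if z > Z_CRIT:
            return True                     # a false positive gets reported
    return False                            # an honest null result

trials = 5000
false_positives = sum(one_experiment() for _ in range(trials)) / trials
print(f"nominal rate: 0.05, realised rate with optional stopping: {false_positives:.3f}")
```

With ten equally spaced looks the realised rate comes out roughly four times the nominal five per cent – in line with the classic repeated-significance-testing results – which is exactly why a mixture model that takes reported p-values at face value is on shaky ground.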
Here they stand now, pompous, dignified,
while the market fiddles rasp –
They are the ten righteous
who once were sought in Gomorrah.
Not at all so easy to get the better of;
they lift themselves up with words.
And had they anything to stand on
they would lift the whole earth.
Wondrously sound in their brains;
They talk of the sun and the stars
Then they bring their own safely onto dry land,
pompous, dignified –
Pompous ten righteous
who petrified God and Gomorrah.
Nils Ferlin: Tio rättfärdiga
Even though the interest may not be reciprocated, it would obviously be a good idea for Greg Mankiw to listen to his Harvard colleague Lawrence Summers, instead of trivializing the problems created by increasing inequality! Summers has some interesting thoughts on why income inequality is on the rise and what to do about it:
Why has the top 1 per cent of the population done so well relative to the rest? The answer probably lies substantially in changing technology and globalisation. When George Eastman revolutionised photography, he did very well and, because he needed a large number of Americans to carry out his vision, the city of Rochester had a thriving middle class for two generations. By contrast, when Steve Jobs revolutionised personal computing, he and the shareholders in Apple (who are spread all over the world) did very well but a much smaller benefit flowed to middle-class American workers both because production was outsourced and because the production of computers and software was not terribly labour intensive …
What then is the right response to rising inequality? There are too few good ideas in current political discourse and the development of better ones is crucial. Here are three.
First, government must be careful that it does not facilitate increases in inequality by rewarding the wealthy with special concessions. Where governments dispose of assets or allocate licences, there is a compelling case for more use of auctions to which all have access. Where government provides insurance implicitly or explicitly, premiums must be set as much as possible on a market basis rather than in consultation with the affected industry. A general posture for government of standing up for capitalism rather than particular well-connected capitalists would also serve to mitigate inequality.
Second, there is scope for pro-fairness, pro-growth tax reform. When there are more and more great fortunes being created and the government is in larger and larger deficit, it is hardly a time for the estate tax to be eviscerated. With smaller families and ever more bifurcation in the investment opportunities open to those with wealth, there is a real risk that the old notion of “shirtsleeves to shirtsleeves in three generations” will become obsolete, and those with wealth will endow dynasties.
Third, the public sector must ensure that there is greater equity in areas of the most fundamental importance. It will always be the case in a market economy that some will have mansions, art and the ability to travel in lavish fashion. What is more troubling is that the ability of the children of middle-class families to attend college has been seriously compromised by increasing tuition fees and sharp cutbacks at public universities and colleges.
At the same time, in many parts of the country a gap has opened between the quality of the private school education offered to the children of the rich and the public school educations enjoyed by everyone else. Most alarming is the near doubling over the last generation in the gap between the life expectancy of the affluent and the ordinary.
Neither the politics of polarisation nor those of noblesse oblige will serve to protect the interests of the middle class in the post-industrial economy. We will have to find ways to do better.
[T]he authors take as their text a principle of Haavelmo that every testable economic theory should provide a precise formulation of the joint probability distribution of all observable variables to which it refers. It can be argued, however, that Haavelmo’s principle is sounder than the program for realizing it worked out in this book. For, as noted above, what we are asked to assume is that the precept can be carried out in economics by techniques which are established for linear systems, serially independent disturbances, error-free observations, and samples of a size not generally obtainable in economic time series today. In view of such limitations, anyone using these techniques must find himself appealing at every stage less to what theory is saying to him than to what solvability requirements demand of him. Certain it is that the empirical work of this school yields numerous instances in which open questions of economics are resolved in a way that saves a mathematical theorem.
Still, there are doubtless many who will be prepared to make the assumptions required by this theory on pragmatic grounds. We cannot know in advance how well or badly they will work, and they commend themselves on the practical test of convenience. Moreover, as the authors point out, a great many models are compatible with what we know in economics – that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters “assumed” are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied. It may be that the approximations involved in the present theory are tolerable ones; only repeated attempts to use them can decide that issue. Evidence exists that trials in this empirical spirit are finding a place in the work of the econometric school, and one may look forward to substantial changes in the methodological presumptions that have dominated this field until now.
Almost a century and a half after Léon Walras founded neoclassical general equilibrium theory, economists still have not been able to show that markets move economies to equilibria.
We do know that – under very restrictive assumptions – equilibria do exist, are unique and are Pareto-efficient. After reading Franklin M. Fisher's masterly article The stability of general equilibrium – what do we know and why is it important? one, however, has to ask oneself – what good does that do?
As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces which lead economies to equilibria – the value of general equilibrium theory is nil. As long as we cannot really demonstrate that there are forces operating – under reasonable, relevant and at least mildly realistic conditions – at moving markets to equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory.
A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up, leaving their Santa Claus economics in the dustbin of history.
Continuing to model a world full of agents behaving as economists – “often wrong, but never uncertain” – while still not being able to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away) is a gross misallocation of intellectual resources and time.
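The instability claim is not just rhetoric. Scarf's classic (1960) three-good exchange economy – three consumers with Leontief preferences, each endowed with one unit of "their" good – has a unique equilibrium at equal prices, yet Walrasian tâtonnement price adjustment orbits around it forever instead of converging. A minimal numerical sketch (my own implementation of the textbook example):

```python
# Scarf's example: consumer i owns 1 unit of good i and wants goods i and i+1
# in fixed 1:1 proportions, so excess demand for good j is
#   z_j = p_j/(p_j + p_{j+1}) + p_{j-1}/(p_{j-1} + p_j) - 1,
# which vanishes only at the unique equilibrium of equal prices.

def excess_demand(p):
    z = []
    for j in range(3):
        nxt, prev = (j + 1) % 3, (j - 1) % 3
        z.append(p[j] / (p[j] + p[nxt]) + p[prev] / (p[prev] + p[j]) - 1.0)
    return z

def tatonnement(p, steps, dt=0.001):
    """Walrasian auctioneer: raise the price of any good in excess demand."""
    for _ in range(steps):
        z = excess_demand(p)
        p = [pi + dt * zi for pi, zi in zip(p, z)]
    return p

def dist_from_eq(p):
    """Distance of relative prices from the equal-price equilibrium."""
    s = sum(p)
    return sum((pi / s - 1.0 / 3.0) ** 2 for pi in p) ** 0.5

p0 = [0.5, 0.3, 0.2]
print("start:", round(dist_from_eq(p0), 4))
for steps in (50_000, 100_000, 200_000):
    print(steps, "steps:", round(dist_from_eq(tatonnement(p0, steps)), 4))
# The distance from equilibrium never shrinks towards zero: prices cycle
# around the equilibrium instead of converging to it.
```

Fisher's survey covers far more general adjustment processes; the point of the sketch is only that even in the simplest textbook dynamics, convergence fails unless "Santa Claus" assumptions are smuggled in.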
In case you think, even for a moment, that drawing this dismal conclusion is just an idiosyncrasy of yours truly and other heterodox economists, you better think twice! Here is what one of the world’s greatest microeconomists – Alan Kirman – writes in his thought-provoking paper The intrinsic limits of modern economic theory:
If one maintains the fundamentally individualistic approach to constructing economic models no amount of attention to the walls will prevent the citadel from becoming empty. Empty in the sense that one cannot expect it to house the elements of a scientific theory, one capable of producing empirically falsifiable propositions …
Starting from ‘badly behaved’ individuals, we arrive at a situation in which not only is aggregate demand a nice function but, by a result of Debreu, equilibrium will be ‘locally unique’. Whilst this means that at least there is some hope for local stability, the real question is, can we hope to proceed and obtain global uniqueness and stability?
The unfortunate answer is a categorical no! [The results of Sonnenschein (1972), Debreu (1974), Mantel (1976) and Mas-Colell (1985)] show clearly why any hope for uniqueness or stability must be unfounded … There is no hope that making the distribution of preferences or income ‘not too dispersed’ or ‘single peaked’ will help us to avoid the fundamental problem …
The problem seems to be embodied in what is an essential feature of a centuries-long tradition in economics, that of treating individuals as acting independently of each other …
To argue in this way suggests … that once the appropriate signals are given, individuals behave in isolation and the result of their behaviour may simply be added together …
The idea that we should start at the level of the isolated individual is one which we may well have to abandon … we should be honest from the outset and assert simply that by assumption we postulate that each sector of the economy behaves as one individual and not claim any spurious microjustification …
Economists therefore should not continue to make strong assertions about this behaviour based on so-called general equilibrium models which are, in reality, no more than special examples with no basis in economic theory as it stands.
From a macroeconomic point of view, the arguments of Fisher and Kirman also show why New Classical, Real Business Cycles, Dynamic Stochastic General Equilibrium (DSGE) and “New Keynesian” microfounded macromodels are such bad substitutes for real macroeconomic analysis.
These models try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent. That is, with something that has absolutely nothing to do with reality. And – worse still – something that is not even amenable to the kind of general equilibrium analysis it is thought to provide a foundation for, since the Sonnenschein-Mantel-Debreu theorem unequivocally showed that there exists no condition by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.
Opting for cloned representative agents that are all identical is of course not a real solution to the fallacy of composition that the theorem points to. After all – as Nobel laureate Robert Solow noted in “The State of Macroeconomics” (Journal of Economic Perspectives 2008:243-249) – “a modern economy is populated by consumers, workers, pensioners, owners, managers, investors, entrepreneurs, bankers, and others, with different and sometimes conflicting desires, information, expectations, capacities, beliefs, and rules of behavior.” So, representative agent models are rather an evasion whereby issues of distribution, coordination, heterogeneity – everything that really defines macroeconomics – are swept under the rug.
Of course, most neoclassical macroeconomists know that to use a representative agent is a flagrantly illegitimate method of ignoring real aggregation issues. They keep on with their business, nevertheless, just because it significantly simplifies what they are doing. It is reminiscent – not a little – of the drunkard who has lost his keys in some dark place and deliberately chooses to look for them under a neighbouring street light, just because it is easier to see there!
The myth is of a dynamic, creative, colourful, entrepreneurial private sector, that at most needs ‘unleashing’ from its constraints from the public sector. The latter is instead depicted as necessary for fixing ‘market failures’ (investing in ‘public goods’ like infrastructure or basic research) but inherently bureaucratic, slow, grey, and often too ‘meddling’. It is told to stick to the ‘basics’ but to avoid getting too directly involved in the economy …
All this fear about the government trying and failing to pick winners is exaggerated. Both Apple and the technologies behind the iPhone were picked! But picking winners is more probable when the state is described as though it is relevant rather than irrelevant …
Today, we see countries that are growing thanks to a courageous public sector and through mission oriented policies. For example, China is spending $1.7 trillion on five key new broadly defined sectors, including ‘environmentally friendly’ technologies. Brazil’s active state investment bank is spending more than $60 billion just this year on green technology. The economics profession doesn’t adequately account for this kind of state-led activity, but only warns of governments ‘crowding out’ private business or failing at picking winners …
The problem is that by not admitting this entrepreneurial risk-taking role that the state provides, we have not confronted a key relationship in finance: the relationship between risk and return. Innovation is deeply uncertain, with most attempts failing. For every Internet there are many Concordes or Solyndras. Yet this is also true for private venture capital (VC). But while private VC is then able to use the profits from the 1 out of 10 successes to fund the 9 losses, the state has not been allowed to reap a return. Economists think this will happen via tax (from the jobs created, and from the profits of the companies), yet so many of the companies that receive such benefits from state funding, bring their jobs elsewhere, and of course we know they also pay very little tax. Thus the return generating mechanisms must be rethought. It could be done through retaining equity, a ‘golden share’ of the intellectual property rights, or through income contingent loans …
What this means is that we have socialized the risk of innovation but privatised the rewards. This dynamic is one of the key drivers of increasing inequality. Because innovation tomorrow builds on innovation today, the ‘capture’ can be very large. This would not be the case if innovation were just a random walk. Policy makers must think very hard about how to make value creation activities (done by all the collective actors in the innovation game) rewarded above value extraction activities (in this sense capital gains taxes are way too low). And since the booty from the latter can be very large, redirecting incentives and rewards towards the value creators is essential. The problem is that some of the ‘extractors’ like to sell themselves as the creators.
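Mazzucato's portfolio argument is, at bottom, simple arithmetic. A stylised sketch with invented round numbers – ten equal stakes, one 20x "Internet", nine write-offs; the figures are illustrative assumptions, not her data – shows why a funder that only collects a modest tax on the winner's profit cannot cover its losses the way an equity-holding venture capitalist can:

```python
# Stylised 10-project innovation portfolio: 1 big hit, 9 failures.
# All numbers are illustrative assumptions, not empirical estimates.
stake = 10.0                            # money put into each project
projects = [20 * stake] + [0.0] * 9     # one 20x success, nine write-offs
invested = stake * len(projects)        # 100 put in overall

# A private VC holds equity: it receives the full exit value of the winner,
# so the single success more than pays for the nine losses.
vc_return = sum(projects)

# A state that funds the same portfolio but is only "repaid" through a tax
# on the winner's profit (20% assumed here) recovers a fraction of its outlay.
tax_rate = 0.20
state_return = tax_rate * (projects[0] - stake)

print(f"invested:                        {invested:.0f}")
print(f"equity-holding funder gets back: {vc_return:.0f}")
print(f"tax-only funder gets back:       {state_return:.0f}")
```

Under these assumptions the equity-holder doubles its money while the tax-only funder recoups well under half of it – which is the arithmetic behind Mazzucato's case for the state retaining equity, a 'golden share' of intellectual property rights, or income-contingent loans.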