Keynes and the principle of insufficient reason (wonkish)

25 Sep, 2012 at 17:01 | Posted in Statistics & Econometrics | Comments Off on Keynes and the principle of insufficient reason (wonkish)

After their first night in paradise, and having seen the sun rise in the morning, Adam and Eve were wondering whether they would experience another sunrise. Given the rather restricted sample of sunrises experienced, what could they expect? According to Laplace’s rule of succession, the probability of an event E happening again after it has occurred n times is

p(E|n) = (n+1)/(n+2).
 
The probabilities can be calculated using Bayes’ rule, but to get the calculations going, Adam and Eve must have an a priori probability (a base rate) to start with. The Bayesian rule of thumb is simply to assume that all outcomes are equally likely. Applying this rule, Adam and Eve’s successive probabilities become 1/2, 2/3, 3/4 …
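For the computationally minded, here is a minimal sketch (in Python; the Beta(1, 1) prior is the standard way of encoding “all success rates equally likely”, and the function name is my own) of the Bayesian updating behind Laplace’s rule:

```python
from fractions import Fraction

def rule_of_succession(n):
    """Posterior predictive probability of another success after
    observing n successes in n trials, starting from a uniform
    Beta(1, 1) prior over the unknown success rate. The posterior
    is Beta(n + 1, 1), whose predictive mean is (n + 1)/(n + 2)."""
    alpha, beta = 1, 1      # uniform prior: all success rates equally likely
    alpha += n              # update on n observed sunrises, no failures
    return Fraction(alpha, alpha + beta)

# Adam and Eve's growing confidence: 1/2, 2/3, 3/4, 4/5, ...
print([rule_of_succession(n) for n in range(4)])
# → [Fraction(1, 2), Fraction(2, 3), Fraction(3, 4), Fraction(4, 5)]
```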
 
 
Now this might seem rather straightforward, but as Keynes (1921) already noted in his Treatise on Probability, there may be a problem here. The problem has to do with the prior probability and where it is assumed to come from. Is the appeal of the principle of insufficient reason – the principle of indifference – really warranted? Elaborating on Keynes’s example, the wine-water paradox that Gillies (2000) presents in Philosophical Theories of Probability shows it may not be so straightforward after all.

Assume there is a certain quantity of liquid containing wine and water mixed so that the ratio of wine to water (r) is between 1/3 and 3/1. What is then the probability that r ≤ 2? The principle of insufficient reason means that we have to treat all r-values as equiprobable, assigning a uniform probability distribution between 1/3 and 3/1, which gives the probability of r ≤ 2 = [(2-1/3)/(3-1/3)] = 5/8.

But to say r ≤ 2 is equivalent to saying that 1/r ≥ 1/2. Applying the principle to 1/r instead, however, gives the probability of 1/r ≥ 1/2 = [(3-1/2)/(3-1/3)] = 15/16. So we get two different answers, both of which follow from the same application of the principle of insufficient reason. Given this unsolved paradox, we have reason – and this is far from the only one (as I have argued e.g. here) – to stick with Keynes and be skeptical of Bayesianism.
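The paradox is easy to verify numerically. The sketch below (a Monte Carlo illustration; sample size, seed and function name are all of my own choosing) draws mixtures under each of the two equally “indifferent” priors and estimates P(r ≤ 2):

```python
import random

def wine_water(n=200_000, seed=1):
    """Estimate P(r <= 2) under two applications of the principle of
    insufficient reason: (a) the wine-to-water ratio r itself uniform
    on [1/3, 3]; (b) the inverse ratio 1/r uniform on [1/3, 3], so
    that r = 1/u. Both encode 'complete ignorance' about the mixture,
    yet they give different answers."""
    rng = random.Random(seed)
    a = sum(rng.uniform(1/3, 3) <= 2 for _ in range(n)) / n      # r uniform
    b = sum(1 / rng.uniform(1/3, 3) <= 2 for _ in range(n)) / n  # 1/r uniform
    return a, b

a, b = wine_water()
print(a, b)   # ≈ 0.625 (= 5/8) versus ≈ 0.9375 (= 15/16)
```

The two estimates bracket exactly the 5/8-versus-15/16 discrepancy derived above: uniformity over r and uniformity over 1/r are simply different distributions.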

EMU and the policy convergence hypothesis

25 Sep, 2012 at 11:04 | Posted in Economics, Politics & Society | Comments Off on EMU and the policy convergence hypothesis

These data suggest that while there is evidence of euro area convergence in terms of long-term interest rates and the real exchange rate (which would be expected given the common currency and regional short-term interest rate), there is surprisingly little evidence of euro area policy convergence in terms of economic growth, employment, and inflation. Helping to explain the lack of economic convergence in terms of these primary policy outcomes, the data also show relatively little euro area convergence, even increased divergence, in terms of government spending, a primary fiscal policy instrument.

In short, a common regional monetary policy has not been sufficient to produce convergence in terms of the most politically important macroeconomic policy outcomes (e.g. growth, employment, and inflation) when there remains divergence among EMU member states in terms of their fiscal policies. Indeed, given the European Union’s (EU’s) inability to enforce the terms of its Stability and Growth Pact (SGP) and varying national preferences for fiscal policy expansion, economic policy divergence seems likely to persist, even expand as more (and less developed) national economies enter into the euro area …

To conclude, what have political scientists learned after ten years of experience with a common currency and monetary policy in Western Europe? Despite all the optimism associated with regional policy convergence during a period of favorable economic conditions in the late 1990s, we have learned that economic policy convergence among the euro area countries has not continued after 1999 and may have even reversed in certain dimensions. For policymakers, the lack of policy convergence means that European monetary union rests on a fragile foundation; indeed, this foundation will likely become even more problematic as the EU’s new less developed national economies formally enter into the euro area. For IPE scholars, the lack of EMU policy convergence means that convergence theory has not demonstrated much validity even in its most favorable empirical domain. Although it may still be too early to declare it as dead, the convergence hypothesis appears to be on life support. Perhaps in another ten years, IPE scholars will have sufficient empirical evidence to make a final judgment as to its empirical validity.

David Bearce Journal of European Public Policy

Romnesia

25 Sep, 2012 at 08:25 | Posted in Politics & Society | 1 Comment

The parasitical ultra-rich often deny the role of others in the acquisition of their wealth – and even seek to punish them for it.

We could call it Romnesia: the ability of the very rich to forget the context in which they made their money. To forget their education, inheritance, family networks, contacts and introductions. To forget the workers whose labour enriched them. To forget the infrastructure and security, the educated workforce, the contracts, subsidies and bailouts the government provided …

As the developed nations succumb to extreme inequality and social immobility, the myth of the self-made man becomes ever more potent. It is used to justify its polar opposite: an unassailable rent-seeking class, deploying its inherited money to finance the seizure of other people’s wealth …

Scarcely a Republican speech fails to reprise the Richard Hunter narrative, and almost all these rags-to-riches tales turn out to be bunkum. “Everything that Ann and I have,” Mitt Romney claims, “we earned the old-fashioned way”. Old fashioned like Blackbeard, perhaps. Two searing exposures in Rolling Stone magazine document the leveraged buyouts which destroyed viable companies, value and jobs, and the costly federal bailout which saved Romney’s political skin.

Romney personifies economic parasitism. The financial sector has become a job-destroying, home-breaking, life-crushing machine, which impoverishes others to enrich itself. The tighter its grip on politics, the more its representatives must tell the opposite story: of life-affirming enterprise, innovation and investment, of brave entrepreneurs making their fortunes out of nothing but grit and wit …

Equal opportunity, self-creation, heroic individualism: these are the myths that predatory capitalism requires for its political survival. Romnesia permits the ultra-rich both to deny the role of other people in the creation of their own wealth and to deny help to those less fortunate than themselves. A century ago, entrepreneurs sought to pass themselves off as parasites: they adopted the style and manner of the titled, rentier class. Today the parasites claim to be entrepreneurs.

George Monbiot The Guardian

Hugh Hudson’s masterpiece at last available on BD

24 Sep, 2012 at 17:56 | Posted in Varia | Comments Off on Hugh Hudson’s masterpiece at last available on BD

 

Instability – not stability – is what defines our economies

24 Sep, 2012 at 12:58 | Posted in Economics | 1 Comment

With an almost religious conviction, many neoclassical economists still seem to think of modern market economies as being in a blissful state of stable equilibrium. How that is even conceivable today – in the fifth year of the economic and financial crisis that started back in 2008 – is really beyond my wildest imagination.

And it was also beyond the imagination of Dan Kervick, who last week wrote an interesting article – Shamanistic Economics – lambasting neoclassical economists for this unfounded theoretical assumption.

Canadian neoclassical economist Nick Rowe was not pleased and wrote a more than usually pompous and condescending comment, telling Kervick that “before writing about this stuff” he really ought to think about the difference between “a system with multiple equilibria … where beliefs about intrinsically irrelevant events can change which equilibrium is the outcome” and a “dynamic system with a unique equilibrium time path, where beliefs about that future equilibrium path are part of what determines the current outcome.”

Kervick was nonplussed. His answer is a good read:

You guys in economics are supposed to be empirical scientists, not philosophers. You are supposed to develop the a priori elements of your science only so that you can produce empirically testable models of the real world, and then bring those models to bear on the world we actually live in. You are also supposed to help develop techniques that are relevant to decision-making and government policy in having predictable outcomes. You need to map the terrain of the actual world in detail, so you can help others navigate through it. To the extent you want to give policy advice that deserves to be taken seriously, your focus needs to be on contingent reality, not a priori possibility.

My criticism is that an awful lot of the policy advice we are getting lately is from theorists who are lost in the clouds of a priori models, and who don’t have a clear understanding of the structure of the actual economic order we live in, based on the functioning of actual, highly contingent and specific economic and political institutions.

If you are trying to navigate your way through a mountain range, you don’t ask a geologist; you ask a guide who has explored the mountain range in detail. If the guide has geological knowledge, that can definitely help, but the geological knowledge itself is not sufficient to guide people through the terrain. If you want to fix a broken airplane engine, you don’t ask a theoretical thermodynamicist; you ask an engineer. The engineer’s knowledge of thermodynamics can help, but the thermodynamical knowledge itself is not sufficient to know how to fix an airplane engine.

It is not enough for you to describe logically coherent possible worlds with possible sets of beliefs about possible equilibria and possible time paths to those equilibria, where possible statements and possible actions have possible effects as a result. You need to show we live in such a world – and this is a task for which you don’t seem to have much patience. When challenged on the score of institutional facts, you have repeatedly retreated back into the construction of other models and thought experiments.

Almost a century and a half after Léon Walras founded general equilibrium theory, economists still have not been able to show that markets lead economies to equilibria.

We do know that – under very restrictive assumptions – equilibria do exist, are unique and are Pareto-efficient.

But after reading Franklin M. Fisher‘s masterly article The stability of general equilibrium – what do we know and why is it important? one has to ask oneself – what good does that do?

As long as we cannot show, except under exceedingly special assumptions, that there are convincing reasons to suppose there are forces leading economies to equilibria, the value of general equilibrium theory is nil. As long as we cannot really demonstrate that there are forces operating – under reasonable, relevant and at least mildly realistic conditions – that move markets towards equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory.

A stability that can only be proved by assuming “Santa Claus” conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up, leaving their Santa Claus economics in the dustbin of history.

Continuing to model a world full of agents behaving as economists – “often wrong, but never uncertain” – while still being unable to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away), is a gross misallocation of intellectual resources and time.
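The gap between existence and stability is easy to illustrate. The toy system below (a deliberately stylized dynamical sketch of my own – not a real economy, and it ignores Walras’s law and other structure an actual excess-demand function must satisfy) has a unique equilibrium, yet the classic tâtonnement adjustment dp/dt = z(p) just circles it forever:

```python
import math

def tatonnement_orbit(steps=1000, dt=0.01):
    """Price adjustment dp/dt = z(p) with an excess demand
    z(p) = A (p - p*) whose matrix A is a pure rotation generator.
    The price path orbits the unique equilibrium p* at a constant
    distance instead of converging: existence and uniqueness of an
    equilibrium say nothing about the adjustment process reaching it."""
    p_star = (1.0, 1.0)
    p = (2.0, 1.0)                       # start away from equilibrium
    dist0 = math.dist(p, p_star)
    for _ in range(steps):
        # exact flow of the rotation field over a small time step dt
        x, y = p[0] - p_star[0], p[1] - p_star[1]
        c, s = math.cos(dt), math.sin(dt)
        p = (p_star[0] + c * x - s * y, p_star[1] + s * x + c * y)
    return dist0, math.dist(p, p_star)

d0, d1 = tatonnement_orbit()
print(d0, round(d1, 6))   # distance to equilibrium is unchanged: 1.0 1.0
```

Stability, in other words, is an extra assumption imposed on the dynamics, not a consequence of an equilibrium existing – which is exactly Fisher’s point.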

How to really reorient economics

23 Sep, 2012 at 18:23 | Posted in Theory of Science & Methodology | 3 Comments

A couple of economic methodology colleagues have been kind enough to share their questioning views on my review of Mary Morgan’s book The World in the Model. What they especially seem to react to is the following claim I made in that review:

Even if epistemology is important and interesting in itself, it ought never to be anything but secondary in science, since the primary questions asked have to be ontological. Only after having asked questions about ontology can we start thinking about what and how we can know anything about the world. If we do that, I think it is more or less necessary also to be more critical of the reasoning by modelling that has come to be considered the only and right way to reason in mainstream economics for more than 50 years now.

Tony Lawson provides a really good plaidoyer for the importance of ontological under-labouring of science in general, and economics in particular, in this video:  
 

Modern Monetary Theory

23 Sep, 2012 at 16:06 | Posted in Economics | 2 Comments

 

OECD Education at a Glance – no fun reading for Swedish teachers

23 Sep, 2012 at 15:20 | Posted in Education & School | 1 Comment

This year’s edition of the OECD’s Education at a Glance makes for dismal reading for Swedish teachers.

Swedish teachers earn on average 42,000 kronor less per year than the OECD average.

Over the period 2000–2010, teacher salaries in Sweden rose by 8% on average, while in the OECD they rose by 22%.

As the chart below shows, salary development after 15 years in the profession was nothing to write home about either:
 

Keynes transcended?

22 Sep, 2012 at 11:12 | Posted in Varia | 1 Comment

Jan Garbarek

22 Sep, 2012 at 10:01 | Posted in Varia | Comments Off on Jan Garbarek

For the benefit of those who still haven’t found their way to Jan Garbarek – here are a couple of his masterpieces!
 

 

 

The World in the Model – How the Model Became the Message in Economics

21 Sep, 2012 at 17:29 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In her new book, The World in the Model, Mary Morgan gives a historical account of how the model became the message in economics. On the question of how models provide a method of enquiry in which they can function both as “objects to enquire into” and as “objects to enquire with”, Morgan argues that model reasoning involves a kind of experiment. She writes:

It may help to clarify my account of modelling as a double method of enquiry in economics if we compare it with two of the other reasoning styles … the method of mathematical postulation and proof and the method of laboratory experiment.

If we portray mathematical modelling as a version of the method of mathematical postulation and proof … models can indeed be truth-makers about that restricted and mathematical small world … But whether they can come to valid conclusions about the behaviour of their actual economic universe is a much more difficult problem …

If we make the alternative comparison with laboratory experiments … the important question of whether the results of the experiment on the model can be transferred to the world that the model represents can be considered an inference problem …

Of course, model experiments in economics are usually pen-and-paper, calculator, or computer, experiments on a model world or an analogical world … not laboratory experiments on the real world …

The experiments made on models are different from the experiments made in the laboratory … because model experiments are less powerful as an epistemic genre. It does make a difference to the power and scope of inference that the model experiment is carried out on a pen-and-paper representation, that is on the world in the model, not on the world itself.

Now, I think it is only fair to say that field experiments, model experiments and laboratory experiments basically face the same problems in terms of generalizability and external validity. They all share the same basic problem: they are built on rather artificial conditions and face a trade-off between internal and external validity. The more artificial the conditions, the greater the internal validity – but also the less the external validity. The more we rig experiments/field studies/models to avoid confounding, the less the conditions are reminiscent of the real target system.

The nodal issue is how economists, using different isolation strategies in different nomological machines, attempt to learn about causal relationships. By contrast with Morgan, I would more explicitly and forcefully argue that the generalizability of all these research strategies does not give us warranted export licenses to the real target system – because the probability is high that causal mechanisms differ across contexts, and because of the lack of homogeneity/stability/invariance.

If we mainly conceive of laboratory experiments, field studies and model experiments as heuristic tools, the dividing line is difficult to perceive. But if we see them as activities that ultimately aspire to say something about the real target system, then the problem of external validity is central. Let me elaborate a little on this point:

Assume that you have examined how the performance of A is affected by B (treatment). How can we extrapolate/generalize to new samples outside the original population? How do we know that any replication attempt succeeds? How do we know when these replicated experimental results can be said to justify inferences made in samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give some really good argument for this being the case, inferences built on P(A|B) do not really say anything about the target system’s P*(A|B).

As I see it, this is the heart of the matter. External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P*(A|B) applies. Sure, if one can convincingly show that P and P* are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is, at least for an epistemological realist, far from satisfactory. And often it is – unfortunately – exactly this that I see when I read neoclassical economists’ models/laboratory experiments/field studies.
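A minimal simulation can make the point concrete. In the sketch below (all numbers and functional forms are invented for illustration), the effect of treatment B depends on a background variable X, so a perfectly randomized – internally valid – estimate from a population where P holds says nothing about a target population where P* holds:

```python
import random

def ate(pop_mean_x, n=200_000, seed=7):
    """Average treatment effect of B on A in a population where the
    effect of B is 2 * X for a background variable X ~ N(pop_mean_x, 1).
    Randomisation within each population gives an internally valid
    estimate, but populations with different X distributions have
    genuinely different effects -- extrapolation fails."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        x = rng.gauss(pop_mean_x, 1.0)
        if rng.random() < 0.5:
            treated.append(2 * x + rng.gauss(0, 1))   # outcome with B
        else:
            control.append(rng.gauss(0, 1))           # outcome without B
    return sum(treated) / len(treated) - sum(control) / len(control)

print(ate(pop_mean_x=1.0))   # ≈ 2.0 in the study population (P)
print(ate(pop_mean_x=0.0))   # ≈ 0.0 in the target population (P*)
```

Nothing in the experiment itself reveals that the effect is carried by X; showing that P and P* are “similar enough” is an extra, substantive claim that has to be argued for.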

By this I do not mean to say that empirical methods per se are so problematic that they can never be used. On the contrary, I am basically – though not without reservations – in favour of the increased use of laboratory experiments and field studies within economics. Not least as an alternative to completely barren “bridge-less” axiomatic-deductive theory models. My criticism is more about aspiration levels and what we believe that we can achieve with our mediational epistemological tools and methods in the social sciences.

Many laboratory experimentalists claim that it is easy to replicate experiments under different conditions and therefore, a fortiori, easy to test the robustness of experimental results. But is it really that easy? If, in the example given above, we run a test and find that our predictions were not correct – what can we conclude? That B works in X but not in Y? That B worked in the field study conducted in year Z but not in year W? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, this would be no critical problem. But the really interesting inferences are those we try to make from specific laboratory experiments/field studies to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And then the population problem is more difficult to tackle.

Nowadays many economists are keen on randomized experiments. But just like most other methods used within neoclassical economics, randomization is basically a deductive method – or, as Morgan calls it, “a deductive mode of manipulation”. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in laboratory models, but what we usually are interested in is causal evidence in the real target system we happen to live in.

Ideally controlled laboratory experiments (still the benchmark even for natural and quasi-experiments) tell us with certainty what causes what effects – but only given the right “closures”. Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence that “it will work here”. Causes deduced in a laboratory experiment still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of “rigorous” and “precise” methods is despairingly small.

Most neoclassical economists want deductively automated answers to fundamental causal questions. But to apply “thin” methods we have to have “thick” background knowledge of what is going on in the real world, and not in (ideally controlled randomized) laboratory experiments or “model experiments”. Conclusions can only be as certain as their premises – and that also goes for methods based on laboratory experiments.

Morgan’s new book – and especially those carefully selected case studies presented – is an important contribution to the history of economics in general, and more specifically to our understanding of how mainstream economics has become a totally model-based discipline. 

However, I haven’t – as you may have surmised – read it without objections.

In her description of how economists have used – and are using – these “reasoning tools” that we call models, Morgan puts too much emphasis – at least for my taste – on modelling as an epistemic genre of “reasoning to gain knowledge about the economic world”. Even if epistemology is important and interesting in itself, it ought never to be anything but secondary in science, since the primary questions asked have to be ontological. Only after having asked questions about ontology can we start thinking about what and how we can know anything about the world. If we do that, I think it is more or less necessary also to be more critical of the reasoning by modelling that has come to be considered the only and right way to reason in mainstream economics for more than 50 years now.

In a way it is rather symptomatic of the whole book that when Morgan gets to the all-important question of external validity in isolationist closed economic models, she most often halts at posing the question as “if those elements can be treated in isolation” and noting that this aspect of models is “much more difficult to characterize than the way economists use models to investigate their ideas and theories”. Absolutely! But this does not put model reasoning as “objects to enquire into” on a par, from a scientific point of view, with the much more important question of whether these models really have export certificates or not. I think many readers would have found the book even more interesting had it offered more argued and critical evaluations of these activities, and not just more or less analytical descriptions.

So, by all means, read Morgan’s book. It is in many ways a superb book. As a detailed and well-informed case-studies-based history, it is definitely proof of great scholarship. I’m sure it will become a classic in the history of modern economics. But to get more on the question of whether economists’ models really give truthful and valid explanations of things happening in the real world, also read two other modern classics – Tony Lawson’s Economics and Reality (1997) and Nancy Cartwright’s Hunting Causes and Using Them (2009).

New Keynesian macroeconomics and involuntary unemployment

21 Sep, 2012 at 08:56 | Posted in Economics | 2 Comments

People calling themselves “New Keynesians” – a gross misnomer – ought to be rather embarrassed by the fact that the kind of microfounded dynamic stochastic general equilibrium models they use cannot incorporate such a basic fact of reality as involuntary unemployment!

Of course, working with representative agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it is only adjustments of the hours of work that these optimizing agents make to maximize their utility.

What did Keynes himself have to say on the issue? In the General Theory (1936, chapter 2), he writes:

The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …

The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.

The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging is de facto made, macroeconomic model building is little more than “hand waving” that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

The profit question redux

21 Sep, 2012 at 08:42 | Posted in Politics & Society | Comments Off on The profit question redux

 

Reality Check on Mitt Romney

21 Sep, 2012 at 08:10 | Posted in Economics, Politics & Society | 2 Comments

 

The man who gave the 1% a face

20 Sep, 2012 at 17:19 | Posted in Politics & Society | Comments Off on The man who gave the 1% a face

 
