## Where did the Greek bailout money go?

20 August, 2016 at 20:19 | Posted in Economics | Leave a comment

This paper provides a descriptive analysis of where the Greek bailout money went after 2010 and finds that, contrary to widely held beliefs, less than €10 billion, or a fraction of less than 5% of the overall programme, went to the Greek fiscal budget. In contrast, the vast majority of the money went to existing creditors in the form of debt repayments and interest payments. The resulting risk transfer from the private to the public sector, and the subsequent risk transfer within the public sector from international organizations such as the ECB and the IMF to European rescue mechanisms such as the ESM, still constitute the most important challenge to the goal of achieving a sustainable fiscal situation in Greece.

## Bayesian rationality — nothing but a probabilistic version of irrationalism

19 August, 2016 at 09:26 | Posted in Economics, Theory of Science & Methodology | 9 Comments

The initial choice of a prior probability distribution is not regulated in any way. The probabilities, called subjective or personal probabilities, reflect personal degrees of belief. From a Bayesian philosopher’s point of view, any prior distribution is as good as any other. Of course, from a Bayesian decision maker’s point of view, his own beliefs, as expressed in his prior distribution, may be better than any other beliefs, but Bayesianism provides no means of justifying this position. Bayesian rationality rests in the recipe alone, and the choice of the prior probability distribution is arbitrary as far as the issue of rationality is concerned. Thus, two rational persons with the same goals may adopt prior distributions that are wildly different …

Bayesian learning is completely inflexible after the initial choice of probabilities: all beliefs that result from new observations have been fixed in advance. This holds because the new probabilities are just equal to certain old conditional probabilities …

According to the Bayesian recipe, the initial choice of a prior probability distribution is arbitrary. But the probability calculus might still rule out some sequences of beliefs and thus prevent complete arbitrariness.

Actually, however, this is not the case: nothing is ruled out by the probability calculus …

Thus, anything goes … By adopting a suitable prior probability distribution, we can fix the consequences of any observations for our beliefs in any way we want. This result, which will be referred to as the anything-goes theorem, holds for arbitrarily complicated cases and any number of observations. It implies, among other consequences, that two rational persons with the same goals and experiences can, in all eternity, differ arbitrarily in their beliefs about future events …

From a Bayesian point of view, any beliefs and, consequently, any decisions are as rational or irrational as any other, no matter what our goals and experiences are. Bayesian rationality is just a probabilistic version of irrationalism. Bayesians might say that somebody is rational only if he actually rationalizes his actions in the Bayesian way. However, given that such a rationalization always exists, it seems a bit pedantic to insist that a decision maker should actually provide it.
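The “anything-goes” point lends itself to a simple numerical sketch (my own illustration, not part of the quoted argument): two perfectly coherent Bayesians who see exactly the same data can end up with posteriors as far apart as we please, simply by choice of prior. Here both observe 7 successes and 3 failures in a standard Beta-Binomial setup:

```python
def beta_posterior_mean(alpha, beta, successes, failures):
    """Posterior mean of a Beta(alpha, beta) prior after observing
    the given successes and failures (Beta-Binomial conjugacy)."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Identical evidence for both agents: 7 successes, 3 failures.
successes, failures = 7, 3

# Agent A starts from a near-uniform prior; Agent B from an extreme
# prior concentrated near zero. Both update by Bayes' theorem.
mean_a = beta_posterior_mean(1, 1, successes, failures)       # about 0.67
mean_b = beta_posterior_mean(1, 10_000, successes, failures)  # below 0.001

print(mean_a, mean_b)
```

Both agents are impeccably coherent, yet their posterior beliefs differ by almost the whole unit interval — coherence alone pins down nothing.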

## The Keynes-Ramsey-Savage debate on probability

18 August, 2016 at 16:23 | Posted in Economics | 2 Comments

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granting this questionable reductionism – do rational agents really have to be Bayesian? As I have argued elsewhere (e.g. here, here and here), there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that, if you are rational, you have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1. That is, in this case – and based on symmetry – a rational individual would have to assign a probability of 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish probabilities grounded in information from probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
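To make the point concrete, here is a small sketch of my own (using a 50/50 case for simplicity): once beliefs are collapsed to a single probability number, a well-grounded estimate and a groundless one become indistinguishable; only something like Keynes’s “weight” of evidence — proxied here, very roughly, by posterior variance — records the difference.

```python
def beta_mean_var(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) belief distribution."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Agent with lots of data: 5,000 'unemployed' out of 10,000 observations,
# starting from a uniform Beta(1, 1) prior.
informed = beta_mean_var(1 + 5_000, 1 + 5_000)

# Agent with no data at all, falling back on the principle of indifference.
ignorant = beta_mean_var(1, 1)

# Both report exactly the same point probability ...
assert informed[0] == ignorant[0] == 0.5
# ... but the single number hides the difference in evidential weight:
print(informed[1], ignorant[1])  # tiny variance vs. 1/12
```

A Bayesian who must always report one sharp probability throws this distinction away, which is precisely the complaint above.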

I think this critique of Bayesianism accords with the views John Maynard Keynes expressed in *A Treatise on Probability* (1921) and the *General Theory* (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief”, beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

Stressing the importance of Keynes’s view on uncertainty, John Kay writes in the Financial Times:

Keynes believed that the financial and business environment was characterised by “radical uncertainty”. The only reasonable response to the question “what will interest rates be in 20 years’ time?” is “we simply do not know” …

For Keynes, probability was about believability, not frequency. He denied that our thinking could be described by a probability distribution over all possible future events, a statistical distribution that could be teased out by shrewd questioning – or discovered by presenting a menu of trading opportunities. In the 1920s he became engaged in an intellectual battle on this issue, in which the leading protagonists on one side were Keynes and the Chicago economist Frank Knight, opposed by a Cambridge philosopher, Frank Ramsey, and later by Jimmie Savage, another Chicagoan.

Keynes and Knight lost that debate, and Ramsey and Savage won, and the probabilistic approach has maintained academic primacy ever since. A principal reason was Ramsey’s demonstration that anyone who did not follow his precepts – anyone who did not act on the basis of a subjective assessment of probabilities of future events – would be “Dutch booked” … A Dutch book is a set of choices such that a seemingly attractive selection from it is certain to lose money for the person who makes the selection.

I used to tell students who queried the premise of “rational” behaviour in financial markets – where rational means based on Bayesian subjective probabilities – that people had to behave in this way because if they did not, people would devise schemes that made money at their expense. I now believe that observation is correct but does not have the implication I sought. People do not behave in line with this theory, with the result that others in financial markets do devise schemes that make money at their expense.

Although this on the whole gives a succinct and correct picture of Keynes’s view on probability, I think it’s necessary to somewhat qualify in what way and to what extent Keynes “lost” the debate with the Bayesians Frank Ramsey and Jim Savage.

In economics it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book *John Maynard Keynes* (SNS, 2007)).

It’s a debated issue in newer research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique of his *A Treatise on Probability* levelled by Frank Ramsey. It has proved exceedingly difficult to present evidence that this was the case.

Ramsey’s critique was mainly that the kind of probability relations Keynes was speaking of in the *Treatise* did not actually exist, and that Ramsey’s own procedure (betting) made it much easier to find out people’s “degrees of belief”. I question this from both a descriptive and a normative point of view.

What Keynes is saying in his response to Ramsey is only that Ramsey “is right” that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

**Patrick Maher**, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory is in some regards invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.

C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests.

Ramsey continued:

“If … we take the simplest possible pairs of propositions such as “This is red” and “That is blue” or “This is red” and “That is red,” whose logical relations should surely be easiest to see, no one, I think, pretends to be sure what is the probability relation which connects them.” (162)

I agree that nobody would pretend to be sure of a numeric value for these probabilities, but there are inequalities that most people on reflection would agree with. For example, the probability of “This is red” given “That is red” is greater than the probability of “This is red” given “That is blue.” This illustrates the point that inductive probabilities often lack numeric values. It doesn’t show disagreement; it rather shows agreement, since nobody pretends to know numeric values here and practically everyone will agree on the inequalities.

Ramsey continued:

“Or, perhaps, they may claim to see the relation but they will not be able to say anything about it with certainty, to state if it is more or less than 1/3, or so on. They may, of course, say that it is incomparable with any numerical relation, but a relation about which so little can be truly said will be of little scientific use and it will be hard to convince a sceptic of its existence.” (162)

Although the probabilities that Ramsey is discussing lack numeric values, they are not “incomparable with any numerical relation.” Since there are more than three different colors, the a priori probability of “This is red” must be less than 1/3 and so its probability given “This is blue” must likewise be less than 1/3. In any case, the “scientific use” of something is not relevant to whether it exists. And the question is not whether it is “hard to convince a sceptic of its existence” but whether the sceptic has any good argument to support his position …

Ramsey concluded the paragraph I have been quoting as follows:

“Besides this view is really rather paradoxical; for any believer in induction must admit that between “This is red” as conclusion and “This is round” together with a billion propositions of the form “a is round and red” as evidence, there is a finite probability relation; and it is hard to suppose that as we accumulate instances there is suddenly a point, say after 233 instances, at which the probability relation becomes finite and so comparable with some numerical relations.” (162)

Ramsey is here attacking the view that the probability of “This is red” given “This is round” cannot be compared with any number, but Keynes didn’t say that and it isn’t my view either. The probability of “This is red” given only “This is round” is the same as the a priori probability of “This is red” and hence less than 1/3. Given the additional billion propositions that Ramsey mentions, the probability of “This is red” is high (greater than 1/2, for example) but it still lacks a precise numeric value. Thus the probability is always both comparable with some numbers and lacking a precise numeric value; there is no paradox here.

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.
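Maher’s inequalities can even be reproduced in a toy exchangeable model (a Pólya-urn sketch of my own, not anything Maher or Keynes commits to — and Keynes’s point is precisely that such probabilities need not have sharp numerical values): with more than three colours and a symmetric prior, the a priori probability of “This is red” falls below 1/3, and observing a red instance raises the predictive probability of red above what observing a blue one does.

```python
def predictive_prob(color, observed, colors, concentration=1.0):
    """P(next item is `color` | observed items), under a symmetric
    Dirichlet(concentration) prior over `colors` (Polya-urn scheme)."""
    count = observed.count(color)
    total = concentration * len(colors) + len(observed)
    return (concentration + count) / total

colors = ["red", "blue", "green", "yellow"]  # more than three colours

# A priori, P("This is red") = 1/4 < 1/3, as Maher argues.
prior_red = predictive_prob("red", [], colors)

# And P(red | that one is red) > P(red | that one is blue).
p_red_given_red = predictive_prob("red", ["red"], colors)
p_red_given_blue = predictive_prob("red", ["blue"], colors)
assert p_red_given_red > p_red_given_blue

print(prior_red, p_red_given_red, p_red_given_blue)
```

The model only shows that the *inequalities* Maher cites are mutually consistent; nothing in it requires the underlying inductive probabilities to be numerically precise.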

Ramsey’s critique made Keynes put stronger emphasis on individuals’ own views as the basis for probability calculations, and less stress on their beliefs being rational. But Keynes’s theory doesn’t stand or fall with his view of the basis of our “degrees of belief” as logical. The core of his theory – when and how we are able to measure and compare different probabilities – he did not change. Unlike Ramsey, he was not at all sure that probabilities are always one-dimensional, measurable, quantifiable or even comparable entities.

Keynes’s analysis of the practical relevance of probability and weight to decision-making provides the basis for a theory of decision under uncertainty that, in its critique of mathematical expectations in the TP, constitutes the grounds on which Benthamite calculation is deemed to be ill-suited to deal with uncertainty in the GT. In his last letter to Townshend, this aspect clearly emerges … As already noted, Keynes reminds Townshend that he is inclined to associate “risk premium with probability strictly speaking, and liquidity premium with what in my Treatise on Probability I called ‘weight’”. Also, the correspondence shows that significant technical aspects of the TP survived Ramsey’s critique and Keynes did not endorse the subjective probability viewpoint suggested by Ramsey … Had he yielded to Ramsey’s view on the possibility of deriving point probabilities from action in every instance, Keynes would not have referred to non-numerical probabilities as so strong an objection to the received analysis of decision-making under uncertainty … Keynes’s analogy in his last letter to Townshend, associating the liquidity premium with “an increased sense of comfort and confidence”, cannot be accommodated within Ramsey’s subjectivist perspective, in which there is no room for a measure representing the degree of reliance on a probability assessment. So he may have been perplexed, in the assessment of his early beliefs, about the significance of defending the epistemological underpinnings of his theory of probability … But the correspondence shows that Keynes never stopped thinking of possible uses of his theory of probability.

## Uskali Mäki and Tony Lawson — different varieties of realism

18 August, 2016 at 11:05 | Posted in Theory of Science & Methodology | Leave a comment

We are all realists and we all—Mäki, Cartwright, and I—self-consciously present ourselves as such. The most obvious research-guiding commonality, perhaps, is that we do all look at the ontological presuppositions of economics or economists.

Where we part company, I believe, is that I want to go much further. I guess I would see their work as primarily analytical and my own as more critically constructive or dialectical. My goal is less the clarification of what economists are doing and presupposing than seeking to change the orientation of modern economics … Specifically, I have been much more prepared than the other two to criticise the ontological presuppositions of economists—at least publicly. I think Mäki is probably the most guarded. I think too he is the least critical, at least of the state of modern economics …

One feature of Mäki’s work that I am not overly convinced by, but which he seems to value, is his method of theoretical isolation (Mäki 1992). If he is advocating it as a method for social scientific research, I doubt it will be found to have much relevance—for reasons I discuss in *Economics and Reality* (Lawson 1997). But if he is just saying that the most charitable way of interpreting mainstream economists is that they are acting on this method, then fine. Sometimes, though, he seems to imply more …

I cannot get enthused by Mäki’s concern to see what can be justified in contemporary formalistic modelling endeavours. The insights, where they exist, seem so obvious, circumscribed, and tagged on anyway …

As I view things, anyway, a real difference between Mäki and me is that he is far less, or less openly, critical of the state and practices of modern economics … Mäki seems more inclined to accept mainstream economic contributions as largely successful, or anyway uncritically. I certainly do not think we can accept mainstream contributions as successful, and so I proceed somewhat differently …

So if there is a difference here it is that Mäki more often starts out from mainstream academic economic analyses accepted rather uncritically, whilst I prefer to start from those everyday practices widely regarded as successful.

## On the irrelevance of Milton Friedman

17 August, 2016 at 17:16 | Posted in Economics | 5 Comments

In producing theories couched in terms of isolated atoms that are quite at odds with social reality, modellers are actually compelled to make substantive claims that are wildly unrealistic. And because social reality does not conform to systems of isolated atoms, there is no guarantee that event regularities of the sort pursued will occur. Indeed, they are found not to …

Friedman enters this scene arguing that all we need to do is predict successfully, that this can be done even without realistic theories, and that unrealistic theories are to be preferred to realistic ones, essentially because they can usually be more parsimonious.

The first thing to note about this response is that Friedman is attempting to turn inevitable failure into a virtue. In the context of economic modelling, the need to produce formulations in terms of systems of isolated atoms, where these are not characteristic of social reality, means that unrealistic formulations are more or less unavoidable. Arguing that they are to be preferred to realistic ones in this context belies the fact that there is not a choice.

What amazed me about the initial responses to Friedman by numerous philosophers and others is that they mostly took the form: prediction is not enough, we need explanation too. Rarely, if ever, was it pointed out that because the social world is open, we cannot have successful prediction anyway.

So my own response to Friedman’s intervention is that it was mostly an irrelevancy, but one that has been opportunistically grasped by some as a supposed defence of the profusion of unrealistic assumptions in economics. This would work if successful prediction were possible. But usually it is not.

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories’, one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between models and their real-world target systems. It is as though explicit discussion, argumentation and justification on the subject isn’t considered necessary.

If the ultimate criterion of a model’s success is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economies.

A scientific theory is, in fact, the embodiment of its assumptions. There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

## Robert Lucas’ umbrella

16 August, 2016 at 13:03 | Posted in Economics | 2 Comments

To understand New Classical thinking about this crucial issue, consider Lucas’s response to the following question: If people know the true distribution of future outcomes, why are autocorrelated mistakes such a common occurrence?

“If you were studying the demand for umbrellas as an economist, you’d get rainfall data by cities, and you wouldn’t hesitate for two seconds to assume that everyone living in London knows how much it rains there. That would be assumption number one. And no one would argue with you either. [But] in macroeconomics, people argue about things like that. (In Klamer 1983, p. 43)”

What Lucas clearly has in mind is a model in which the distribution of outcomes (like the distribution of rainfall in London) is pregiven and independent of agent decisions (about whether or not to carry umbrellas) and agent errors. Future equilibrium states exist prior to and independent of the agent choice process that is supposed to generate them.

Conclusion: umbrellas are not economies. And I guess most people — at least outside the University of Chicago Department of Economics — know that …

## Reasons to dislike DSGE models

15 August, 2016 at 19:16 | Posted in Economics | 1 Comment

There are many reasons to dislike current DSGE models.

First: They are based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms …

Second: Their standard method of estimation, which is a mix of calibration and Bayesian estimation, is unconvincing …

Third: While the models can formally be used for normative purposes, normative implications are not convincing …

Fourth: DSGE models are bad communication devices …

And still Blanchard and other mainstream economists seem to be impressed by the ‘rigour’ brought to macroeconomics by New-Classical-New-Keynesian DSGE models and their rational expectations and microfoundations — Blanchard even hopes that although current DSGE models are ‘flawed’, in the future they can ‘fulfil an important need in macroeconomics, that of offering a core structure around which to build and organise discussions.’

It is difficult to see how.

Take the rational expectations assumption, for example. Rational expectations in the mainstream economists’ world implies that relevant distributions have to be time independent. This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality it is straining one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization *tout court*, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It is — to say the least — very difficult to see any similarity between these modelling assumptions and the expectations of real persons. In the world of the rational expectations hypothesis we are never disappointed in any other way than when we lose at the roulette wheel. But real life is not an urn or a roulette wheel. And that’s also the reason why allowing for cases where agents make ‘predictable errors’ in DSGE models doesn’t take us any closer to a relevant and realist depiction of actual economic decisions and behaviours. If we really want to have anything of interest to say on real economies, financial crises and the decisions and choices real people make, we have to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations than childish roulette and urn analogies.
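A simple simulation (a standard multiplicative-growth example, added here as an illustration rather than taken from any model discussed above) shows why treating an economy as one draw from an ensemble can mislead: the ensemble average of the process below grows every period, yet almost every single realization decays.

```python
import random

random.seed(0)

def wealth_path(steps):
    """One realization: each period wealth is multiplied by 1.5 or 0.6
    with equal probability (a non-ergodic multiplicative process)."""
    w = 1.0
    for _ in range(steps):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

steps, n_paths = 500, 2_000

# Ensemble average per step: 0.5 * 1.5 + 0.5 * 0.6 = 1.05 > 1 ('growth').
# Time-average growth factor: sqrt(1.5 * 0.6) ~= 0.949 < 1 (decay).
paths = [wealth_path(steps) for _ in range(n_paths)]
shrunk = sum(w < 1.0 for w in paths) / n_paths

print(shrunk)  # the overwhelming majority of realizations end below 1
```

Averaging over the ensemble says the process prospers; the single realization any one agent actually lives through almost surely does not. An economy, being one realization and not an ensemble, is much closer to the second situation than the first.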

‘Rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of any kind has been presented.

No matter how precise and rigorous the analysis, and no matter how hard one tries to cast the argument in modern mathematical form, they do not push economic science forwards one single millimeter if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous or certain the inferences delivered inside these models are, they do not *per se* say anything about real world economies.

Proving things ‘rigorously’ in DSGE models is at most a starting-point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.

Blanchard thinks there is a gain from the DSGE style of modeling in its capacity to offer ‘a core structure around which to build and organise discussions.’ To me that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

## On the importance of pluralism

15 August, 2016 at 16:05 | Posted in Economics | Leave a comment

A good illustration of what makes mainstream economics go astray — lack of pluralism (diversity) and like-minded people who are all wrong …

## Truth and validity

15 August, 2016 at 14:01 | Posted in Theory of Science & Methodology | 1 Comment

Mainstream economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — a fact that within mainstream economics internal validity is everything and external validity and truth nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live — is beyond comprehension. Stupid models are of little or no help in understanding the real world.
