Bayesian rationality — nothing but a probabilistic version of irrationalism

19 August, 2016 at 09:26 | Posted in Economics, Theory of Science & Methodology | 9 Comments

The initial choice of a prior probability distribution is not regulated in any way. The probabilities, called subjective or personal probabilities, reflect personal degrees of belief. From a Bayesian philosopher’s point of view, any prior distribution is as good as any other. Of course, from a Bayesian decision maker’s point of view, his own beliefs, as expressed in his prior distribution, may be better than any other beliefs, but Bayesianism provides no means of justifying this position. Bayesian rationality rests in the recipe alone, and the choice of the prior probability distribution is arbitrary as far as the issue of rationality is concerned. Thus, two rational persons with the same goals may adopt prior distributions that are wildly different …

Bayesian learning is completely inflexible after the initial choice of probabilities: all beliefs that result from new observations have been fixed in advance. This holds because the new probabilities are just equal to certain old conditional probabilities …

According to the Bayesian recipe, the initial choice of a prior probability distribution is arbitrary. But the probability calculus might still rule out some sequences of beliefs and thus prevent complete arbitrariness.

Actually, however, this is not the case: nothing is ruled out by the probability calculus …

Thus, anything goes … By adopting a suitable prior probability distribution, we can fix the consequences of any observations for our beliefs in any way we want. This result, which will be referred to as the anything-goes theorem, holds for arbitrarily complicated cases and any number of observations. It implies, among other consequences, that two rational persons with the same goals and experiences can, in all eternity, differ arbitrarily in their beliefs about future events …

From a Bayesian point of view, any beliefs and, consequently, any decisions are as rational or irrational as any other, no matter what our goals and experiences are. Bayesian rationality is just a probabilistic version of irrationalism. Bayesians might say that somebody is rational only if he actually rationalizes his actions in the Bayesian way. However, given that such a rationalization always exists, it seems a bit pedantic to insist that a decision maker should actually provide it.

Max Albert
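
To make the 'anything-goes' point concrete, here is a minimal sketch (my construction, not Albert's; the Beta-Bernoulli setup and all numbers are illustrative assumptions). Two perfectly coherent Bayesians update on exactly the same evidence, and the arbitrary choice of prior alone determines where their beliefs end up:

```python
# Minimal sketch (illustrative assumptions throughout): two coherent
# Bayesians observe the same coin flips but start from different
# Beta priors, and the prior alone fixes where their beliefs land.

def beta_update(alpha, beta, heads, tails):
    """Conjugate Beta-Bernoulli update: returns the posterior parameters."""
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    """Posterior mean, i.e. the agent's degree of belief in 'heads'."""
    return alpha / (alpha + beta)

heads, tails = 7, 3  # the shared evidence

# Agent A starts from a mildly informative symmetric prior, Beta(2, 2).
a_post = beta_update(2, 2, heads, tails)
# Agent B starts from an extreme prior, Beta(1, 1000), dogmatically
# committed to tails. Nothing in the Bayesian recipe forbids this.
b_post = beta_update(1, 1000, heads, tails)

print(f"Agent A believes P(heads) = {beta_mean(*a_post):.3f}")  # ~0.643
print(f"Agent B believes P(heads) = {beta_mean(*b_post):.3f}")  # ~0.008
```

Both agents follow the Bayesian recipe to the letter; nothing in the calculus itself privileges one posterior over the other.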

Uskali Mäki and Tony Lawson — different varieties of realism

18 August, 2016 at 11:05 | Posted in Theory of Science & Methodology | Leave a comment

We are all realists and we all—Mäki, Cartwright, and I—self-consciously present ourselves as such. The most obvious research-guiding commonality, perhaps, is that we do all look at the ontological presuppositions of economics or economists.

Where we part company, I believe, is that I want to go much further. I guess I would see their work as primarily analytical and my own as more critically constructive or dialectical. My goal is less the clarification of what economists are doing and presupposing than seeking to change the orientation of modern economics … Specifically, I have been much more prepared than the other two to criticise the ontological presuppositions of economists — at least publicly. I think Mäki is probably the most guarded. I think too he is the least critical, at least of the state of modern economics …

One feature of Mäki’s work that I am not overly convinced by, but which he seems to value, is his method of theoretical isolation (Mäki 1992). If he is advocating it as a method for social scientific research, I doubt it will be found to have much relevance—for reasons I discuss in Economics and reality (Lawson 1997). But if he is just saying that the most charitable way of interpreting mainstream economists is that they are acting on this method, then fine. Sometimes, though, he seems to imply more …

I cannot get enthused by Mäki’s concern to see what can be justified in contemporary formalistic modelling endeavours. The insights, where they exist, seem so obvious, circumscribed, and tagged on anyway …

As I view things, anyway, a real difference between Mäki and me is that he is far less, or less openly, critical of the state and practices of modern economics … Mäki seems more inclined to accept mainstream economic contributions as largely successful, or anyway uncritically. I certainly do not think we can accept mainstream contributions as successful, and so I proceed somewhat differently …

So if there is a difference here it is that Mäki more often starts out from mainstream academic economic analyses accepted rather uncritically, whilst I prefer to start from those everyday practices widely regarded as successful.

Tony Lawson

Truth and validity

15 August, 2016 at 14:01 | Posted in Theory of Science & Methodology | 1 Comment

Mainstream economics has become increasingly irrelevant to the understanding of the real world. The main reason for this irrelevance is the failure of economists to match their deductive-axiomatic methods with their subject.

It is — sad to say — a fact that within mainstream economics internal validity is everything, and external validity and truth nothing. Why anyone should be interested in those kinds of theories and models — as long as mainstream economists do not come up with any export licenses for their theories and models to the real world in which we live — is beyond comprehension. Stupid models are of little or no help in understanding the real world.

Friedman’s ‘as if’ methodology — a total disaster

30 July, 2016 at 20:12 | Posted in Economics, Theory of Science & Methodology | 1 Comment

The explicit and implicit acceptance of Friedman’s as if methodology by mainstream economists has proved to be disastrous. The fundamental paradigm of economics that emerged from this methodology not only failed to anticipate the Crash of 2008 and its devastating effects; it has also proved incapable of producing a consensus within the discipline of economics as to the nature and cause of the economic stagnation we find ourselves in the midst of today. In attempting to understand why this is so, it is instructive to examine the nature of Friedman’s arguments within the context in which he formulated them, especially his argument that the truth of a theory’s assumptions is irrelevant so long as the inaccuracy of a theory’s predictions is cataloged and we argue as if those assumptions are true …

A scientific theory is, in fact, the embodiment of its assumptions. There can be no theory without assumptions since it is the assumptions embodied in a theory that provide, by way of reason and logic, the implications by which the subject matter of a scientific discipline can be understood and explained. These same assumptions provide, again, by way of reason and logic, the predictions that can be compared with empirical evidence to test the validity of a theory. It is a theory’s assumptions that are the premises in the logical arguments that give a theory’s explanations meaning, and to the extent those assumptions are false, the explanations the theory provides are meaningless no matter how logically powerful or mathematically sophisticated those explanations based on false assumptions may seem to be.

George Blackford

If scientific progress in economics – as Robert Lucas and other latter-day followers of Milton Friedman seem to think – lies in our ability to tell ‘better and better stories,’ one would of course expect economics journals to be filled with articles supporting the stories with empirical evidence confirming the predictions. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these predictive claims. Equally amazing is how little is said about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject are not considered necessary.

If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. To focus scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality do not help us predict, explain or understand real-world economies.

Mainstream economics — going for the wrong kind of certainty

6 July, 2016 at 15:24 | Posted in Economics, Theory of Science & Methodology | 1 Comment

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
————-
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (the explanation) is not logically given, but something we have to justify, argue for, and test in various ways before we can establish it with any degree of certainty. And as always when we deal with explanations, what counts as best is relative to what we know of the world. In the real world all evidence has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing/rival/contrasting potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the fact/evidence better than any other competing explanation — and so it is reasonable to consider/believe the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given that it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusion. We thought we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning of inference to the best explanation — well, then go into mathematics or logic, not science.
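
As a toy illustration only (the hypotheses, predicted observations and plausibility weights below are all invented for the purpose), the eliminative step of inference to the best explanation can be sketched like this:

```python
# Toy sketch of eliminative inference to the best explanation.
# The hypotheses, predicted observations and plausibility weights are
# all invented for illustration; this does not formalize IBE itself.

evidence = {"lawn_wet", "street_dry"}

# Each candidate explanation: (observations it predicts, rough
# context-dependent plausibility given background knowledge).
candidates = {
    "it rained":         ({"lawn_wet", "street_wet"}, 0.60),
    "sprinkler was on":  ({"lawn_wet", "street_dry"}, 0.30),
    "neighbour's prank": ({"lawn_wet", "street_dry"}, 0.05),
}

# Step 1: eliminate explanations whose predictions fail to cover the evidence.
survivors = {h: weight for h, (predicted, weight) in candidates.items()
             if evidence <= predicted}

# Step 2: among the survivors, prefer the most plausible explanation.
best = max(survivors, key=survivors.get)
print(best)  # 'sprinkler was on' -- rain is ruled out by the dry street
```

The elimination step does the logical work; the final ranking of the survivors rests on context-dependent background knowledge, exactly as argued above, and a further observation could overturn it.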

Keynes’ critique of scientific atomism

7 April, 2016 at 19:12 | Posted in Theory of Science & Methodology | Leave a comment

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be much less simple than the bare principle of uniformity. They appear to assume something much more like what mathematicians call the principle of the superposition of small effects, or, as I prefer to call it, in this connection, the atomic character of natural law. The system of the material universe must consist, if this kind of assumption is warranted, of bodies which we may term (without any implication as to their size being conveyed thereby) legal atoms, such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state. We do not have an invariable relation between particular bodies, but nevertheless each has on the others its own separate and invariable effect, which does not change with changing circumstances, although, of course, the total effect may be changed to almost any extent if all the other accompanying causes are different. Each atom can, according to this theory, be treated as a separate cause and does not enter into different organic combinations in each of which it is regulated by different laws …

The scientist wishes, in fact, to assume that the occurrence of a phenomenon which has appeared as part of a more complex phenomenon, may be some reason for expecting it to be associated on another occasion with part of the same complex. Yet if different wholes were subject to laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts. Given, on the other hand, a number of legally atomic units and the laws connecting them, it would be possible to deduce their effects pro tanto without an exhaustive knowledge of all the coexisting circumstances.

Keynes’ incisive critique is of course of interest in general for all sciences, but I think it is also of special interest in economics as a background to much of Keynes’ doubts about inferential statistics and econometrics.

Since econometrics doesn’t content itself with only making ‘optimal predictions’ but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions. The most important of these are the ‘atomistic’ assumptions of additivity and linearity.

These assumptions — as underlined by Keynes — are of paramount importance and ought to be much more argued for — on both epistemological and ontological grounds — if they are to be used at all.
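
To see why additivity and linearity are substantive ontological assumptions rather than harmless conveniences, consider a small simulated sketch (entirely my own construction, not Keynes’s): when the true data-generating process contains an interaction term, an additive linear regression delivers an ‘effect’ of one variable that shifts with the context in which it is estimated.

```python
# Simulated sketch (my construction, not Keynes's): the true process
# contains an interaction, so the 'effect' of x1 estimated by an
# additive linear regression depends on the estimation context.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Non-additive truth: the effect of x1 varies with x2.
y = x1 + x2 + 3.0 * x1 * x2 + rng.normal(scale=0.1, size=n)

def slope_on_x1(mask):
    """OLS of y on [1, x1, x2] in a subsample; returns the x1 coefficient."""
    X = np.column_stack([np.ones(mask.sum()), x1[mask], x2[mask]])
    coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
    return coef[1]

# The same 'law' estimated in two contexts gives two different 'effects':
print(f"x1 effect when x2 > 0: {slope_on_x1(x2 > 0):+.2f}")  # roughly +3.4
print(f"x1 effect when x2 < 0: {slope_on_x1(x2 < 0):+.2f}")  # roughly -1.4
```

The fitted coefficient is not a stable ‘atomic’ effect; it is an artefact of where and when we happen to estimate it.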

Limiting model assumptions in economic science always have to be closely examined. If the mechanisms or causes that we isolate and handle in our models are to be stable — in the sense that they do not change when we ‘export’ them to our ‘target systems’ — we have to be able to show that they hold not only under ceteris paribus conditions. Mechanisms that hold only ceteris paribus are a fortiori of limited value for our understanding, explanation or prediction of real economic systems.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics like Keynes — and yours truly — will continue to consider its ultimate argument a mixture of rather unhelpful metaphors and metaphysics.

The marginal return on its ever-higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations, something Keynes complained about long ago. Firmly stuck in an empiricist tradition, econometrics is concerned only with the measurable aspects of reality, and a rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations.

But real-world social systems are not governed by stable causal mechanisms or capacities. The kinds of ‘laws’ and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms that are atomistic and additive. As Keynes argued, when causal mechanisms operate in the real world, they do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so as a rule only because we have engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent.

Science and truth

21 March, 2016 at 13:07 | Posted in Theory of Science & Methodology | 1 Comment

In my view, scientific theories are not to be considered ‘true’ or ‘false.’ In constructing such a theory, we are not trying to get at the truth, or even to approximate to it: rather, we are trying to organize our thoughts and observations in a useful manner.

Robert Aumann

What a handy view of science.

How reassuring for all of you who have always thought that believing in the tooth fairy makes you understand what happens to kids’ teeth. Now a ‘Nobel prize’ winning economist tells you that whether or not there are such things as tooth fairies doesn’t really matter. Scientific theories are not about what is true or false, but about whether ‘they enable us to organize and understand our observations’ …

Mirabile dictu!

What Aumann and other defenders of scientific storytelling ‘forget’ is that the potential explanatory power achieved in thought-experimental models is not enough for attaining real explanations. Model explanations are at best conjectures, and whether or not they explain things in the real world is something we have to test. Simply believing that thought experiments make you understand or explain things better is not enough. Without a warranted export certificate to the real world, model explanations are pretty worthless. Proving things in models is not enough. Truth is an important concept in real science.

Bayesianism — confusing degree of confirmation with probability

18 March, 2016 at 09:59 | Posted in Theory of Science & Methodology | 5 Comments

If we identify degree of corroboration or confirmation with probability, we should be forced to adopt a number of highly paradoxical views, among them the following clearly self-contradictory assertion:

“There are cases in which x is strongly supported by z and y is strongly undermined by z while, at the same time, x is confirmed by z to a lesser degree than is y.”

Consider the next throw with a homogeneous die. Let x be the statement ‘six will turn up’; let y be its negation, that is to say, let y = ¬x; and let z be the information ‘an even number will turn up’.

We have the following absolute probabilities:

p(x) = 1/6; p(y) = 5/6; p(z) = 1/2.

Moreover, we have the following relative probabilities:

p(x, z) = 1/3; p(y, z) = 2/3.

We see that x is supported by the information z, for z raises the probability of x from 1/6 to 2/6 = 1/3. We also see that y is undermined by z, for z lowers the probability of y by the same amount from 5/6 to 4/6 = 2/3. Nevertheless, we have p(x, z) < p(y, z) …

A report of the result of testing a theory can be summed up by an appraisal. This can take the form of assigning some degree of corroboration to the theory. But it can never take the form of assigning to it a degree of probability; for the probability of a statement (given some test statements) simply does not express an appraisal of the severity of the tests a theory has passed, or of the manner in which it has passed these tests. The main reason for this is that the content of a theory — which is the same as its improbability — determines its testability and its corroborability.
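
Popper’s die arithmetic is easy to verify directly. A trivial check (nothing more than a restatement of his numbers in code):

```python
# Direct check of Popper's arithmetic for a fair six-sided die:
# x = 'six turns up', y = not-x, z = 'an even number turns up'.
from fractions import Fraction

outcomes = set(range(1, 7))
x = {6}
y = outcomes - x          # the negation of x
z = {2, 4, 6}

def p(event, given=None):
    """p(event | given); unconditional when 'given' is None."""
    given = outcomes if given is None else given
    return Fraction(len(event & given), len(given))

print(p(x), p(y), p(z))        # 1/6  5/6  1/2
print(p(x, z), p(y, z))        # 1/3  2/3
# z raises p(x) from 1/6 to 1/3 (support) and lowers p(y) from 5/6 to
# 2/3 (undermining), yet p(x, z) < p(y, z): support is not probability.
```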

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is to have strong evidence. If your evidence is strong, then applying the Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying the Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific rigour and progress.

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution, it is better to acknowledge that simpliciter rather than pretend to a certitude we do not possess.
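
A tiny sketch of that indistinguishability point (my illustration, not a formal argument): a coin known to be physically fair and a coin about which we know nothing at all receive the very same probability assignment, so the representation cannot record the difference between knowledge-based and ignorance-based symmetry.

```python
# Tiny sketch (my illustration): the same numbers represent a coin
# known to be fair and a coin we know nothing about, so the Bayesian
# representation loses the difference between the two epistemic states.
known_fair = {"heads": 0.5, "tails": 0.5}   # symmetry grounded in evidence
no_idea    = {"heads": 0.5, "tails": 0.5}   # symmetry from pure indifference
print(known_fair == no_idea)                # True: the distinction is gone
```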

So why, then, are so many scientists nowadays so fond of Bayesianism? I guess one strong reason is that Bayes’ theorem gives them a seemingly fast, simple and rigorous answer to their problems and hypotheses. But, as Popper already showed back in the 1950s, the Bayesian probability (likelihood) version of confirmation theory is “absurd on both formal and intuitive grounds: it leads to self-contradiction.”

Heckscher-Ohlin and the ‘principle of explosion’

17 March, 2016 at 12:14 | Posted in Theory of Science & Methodology | 4 Comments

The other day yours truly had a post up on the Heckscher-Ohlin theorem, arguing that since the assumptions on which the theorem builds are empirically false, one might, from a methodological point of view, wonder

how we are supposed to evaluate tests of a theorem built on assumptions known to be false. What is the point of such tests? What can those tests possibly teach us? From falsehoods anything logically follows.

Some people have had trouble with the last sentence — from falsehoods anything whatsoever follows.

But that’s really nothing very deep or controversial. What I’m referring to — without going into the intricacies of distinguishing between ‘false,’ ‘inconsistent’ and ‘self-contradictory’ statements — is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.
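
Spelled out in the same notation as the inference schemas above, the derivation behind the principle of explosion needs only elementary rules:

(1) p ∧ ¬p (premise)
(2) p (from (1), conjunction elimination)
(3) p ∨ q (from (2), disjunction introduction, q arbitrary)
(4) ¬p (from (1), conjunction elimination)
(5) q (from (3) and (4), disjunctive syllogism)

Since q is arbitrary, a self-contradictory premise validly yields any conclusion whatsoever, which is exactly Popper’s point about the class of potential falsifiers.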

Ceteris paribus — an alibi fairy

6 March, 2016 at 15:37 | Posted in Theory of Science & Methodology | 2 Comments

When applying deductivist thinking to economics, mainstream economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work — in, e.g., IS-LM and DSGE models — simply don’t hold.

Defending his IS-LMism from the critique put forward by, e.g., Hyman Minsky and yours truly, Paul Krugman writes:

When people like me use something like IS-LM, we’re not imagining that the IS curve is fixed in position for ever after. It’s a ceteris paribus thing, just like supply and demand.

But that is actually just another major problem with the Hicksian construction!

As Hans Albert noted more than fifty years ago:

The law of demand … is usually tagged with a clause that entails numerous interpretation problems: the ceteris paribus clause … The ceteris paribus clause is not a relatively insignificant addition, which might be ignored … The clause produces something of an absolute alibi, since, for every apparently deviating behavior, some altered factors can be made responsible. This makes the statement untestable, and its informational content decreases to zero.

This critique is rehearsed by John Earman et al. in a special issue of Erkenntnis on the ceteris paribus assumption; they conclude that it is bad to admit ceteris paribus laws in science at all, since such laws are untestable:

In order for a hypothesis to be testable, it must lead us to some prediction. The prediction may be statistical in character, and in general it will depend on a set of auxiliary hypotheses. Even when these important qualifications have been added, CP law statements still fail to make any testable predictions. Consider the putative law that CP, all Fs are Gs. The information that x is an F, together with any auxiliary hypotheses you like, fails to entail that x is a G, or even to entail that with probability p, x is a G. For, even given this information, other things could fail to be equal, and we are not even given a way of estimating the probability that they so fail. Two qualifications have to be made. First, our claim is true only if the auxiliary hypotheses don’t entail the prediction all by themselves, in which case the CP law is inessential to the prediction and doesn’t get tested by a check of that prediction. Second, our claim is true only if none of the auxiliary hypotheses is the hypothesis that “other things are equal”, or “there are no interferences”. What if the auxiliaries do include the claim that other things are equal? Then either this auxiliary can be stated in a form that allows us to check whether it is true, or it can’t. If it can, then the original CP law can be turned into a strict law by substituting the testable auxiliary for the CP clause. If it can’t, then the prediction relies on an auxiliary hypothesis that cannot be tested itself. But it is generally, and rightly, presumed that auxiliary hypotheses must be testable in principle if they are to be used in an honest test. Hence, we can’t rely on a putative CP law to make any predictions about what will be observed, or about the probability that something will be observed. If we can’t do that, then it seems that we can’t subject the putative CP law to any kind of empirical test.

The conclusion from the massive critique should be obvious.

Ditch the ceteris paribus fairy!
