Brel

31 October, 2015 at 09:16 | Posted in Varia | Comments Off on Brel

 

A great song of hope.

From Jacques Brel's "Quand on n'a que l'amour":

When all we have is love
To offer to those
Whose only fight
Is to seek the light of day

Then, with nothing
But the strength to love,
We will hold in our hands,
Friends, the whole world
 
 


And did those feet in ancient time

30 October, 2015 at 21:45 | Posted in Varia | Comments Off on And did those feet in ancient time

 

Noam Chomsky on postmodern flimflam

30 October, 2015 at 18:58 | Posted in Theory of Science & Methodology | Comments Off on Noam Chomsky on postmodern flimflam

 

(h/t Jan Milch)

Falsk matematik

30 October, 2015 at 11:49 | Posted in Politics & Society | 1 Comment

 


When I’m 104 (private)

28 October, 2015 at 13:25 | Posted in Varia | Comments Off on When I’m 104 (private)

[h/t Jeanette Meyer]

Macroeconomic uncertainty

28 October, 2015 at 10:11 | Posted in Economics | 7 Comments

The financial crisis of 2007-08 took most laymen and economists by surprise. What was it that went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?

There are many who have ventured to answer this question. And they have come up with a variety of answers, ranging from the exaggerated mathematization of economics, to irrational and corrupt politicians.

But the root of our problem goes much deeper. It ultimately goes back to how we look upon the data we are handling. In “modern” macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — variables are treated as if drawn from a known “data-generating process” that unfolds over time and on which we therefore have access to heaps of historical time-series. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses. And of course it has to. I mean, who really honestly believes that we should have access to this mythical Holy Grail, the data-generating process?

“Modern” macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth of us knowing the “data-generating process” and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.
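
To see what is at stake, here is a minimal Python sketch (my own illustration; the numbers, regimes and variable names are all invented): an analyst estimates mean and variance from a "calm" historical sample, treats them as properties of the one true data-generating process, and is then confronted with a regime the historical series never contained.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Calm" regime: the analyst's historical sample, treated as if it revealed
# the one true data-generating process.
calm = rng.normal(loc=0.02, scale=0.05, size=500)
mu_hat, sigma_hat = calm.mean(), calm.std(ddof=1)

# Risk assessment under the assumed fixed normal DGP:
# roughly a 1-in-1000 worst case at mu - 3.1 * sigma.
assumed_worst = mu_hat - 3.1 * sigma_hat

# Then the world changes: a crisis regime drawn from a different distribution
# that the historical series contained no trace of.
crisis = rng.normal(loc=-0.10, scale=0.15, size=250)
realised_worst = crisis.min()

print(f"estimated mean / std from the calm sample: {mu_hat:.3f} / {sigma_hat:.3f}")
print(f"'1-in-1000' loss implied by the assumed DGP: {assumed_worst:.3f}")
print(f"worst realised outcome after the regime shift: {realised_worst:.3f}")
```

The arithmetic is not the problem; the problem is that the "known urn" was never there to begin with.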

This is like saying that you are going on a holiday trip and know that the chance of the weather being sunny is at least 30%, and that this is enough for you to decide whether or not to bring along your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
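
A numerical version of the sunglasses example (my own sketch, with made-up utilities): with a single known probability the expected-utility calculation gives one answer, but with two equally good models all you get is a range, and the ranking of the actions need not even be stable.

```python
# Invented utilities for the two actions in the two states of the weather.
u = {("bring", "sun"): 10, ("bring", "rain"): 4,
     ("leave", "sun"): 2,  ("leave", "rain"): 8}


def expected_utility(action, p_sun):
    """Expected utility of an action given a probability of sun."""
    return p_sun * u[(action, "sun")] + (1 - p_sun) * u[(action, "rain")]


# Risk: a single known probability gives a single answer.
print({a: round(expected_utility(a, 0.30), 2) for a in ("bring", "leave")})

# Uncertainty: two equally good models say 10% and 40%, so all we get is a
# range of answers, and the ranking of the actions flips between the models.
for p in (0.10, 0.40):
    print(p, {a: round(expected_utility(a, p), 2) for a in ("bring", "leave")})
```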

In the end this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty-type. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist, nor the deciding individual, can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail, and pretend that uncertainty can be reduced to risk. So they construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better it would be, and how much smaller the risk of lulling ourselves into the comforting thought that we know everything, that everything is measurable and that we have everything under control, if we could instead just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing: economic catastrophe!

Ein Volk schreibt Geschichte

27 October, 2015 at 17:32 | Posted in Politics & Society | Comments Off on Ein Volk schreibt Geschichte

 

Time for a new sojourn in my second hometown — Berlin.

Gretl and Hansl — econometrics made easy

27 October, 2015 at 15:29 | Posted in Statistics & Econometrics | 6 Comments

Thanks to Allin Cottrell and Riccardo Lucchetti we today have access to a high quality tool for doing and teaching econometrics — Gretl.

And, best of all, it is totally free!

Gretl is up to the tasks you may have, so why spend money on expensive commercial programs?

The latest snapshot version of Gretl – 2015d – can be downloaded here.

With this new version also comes a handy primer on Hansl — the scripting language of Gretl.

So just go ahead. With Gretl and Hansl, econometrics has never been easier to master!

 

Econometrics and the dangers of calling your pet cat a dog

27 October, 2015 at 13:08 | Posted in Economics | Comments Off on Econometrics and the dangers of calling your pet cat a dog

The assumption of additivity and linearity means that the outcome variable is, in reality, linearly related to any predictors … and that if you have several predictors then their combined effect is best described by adding their effects together …

This assumption is the most important because if it is not true then even if all other assumptions are met, your model is invalid because you have described it incorrectly. It’s a bit like calling your pet cat a dog: you can try to get it to go in a kennel, or to fetch sticks, or to sit when you tell it to, but don’t be surprised when its behaviour isn’t what you expect because even though you’ve called it a dog, it is in fact a cat. Similarly, if you have described your statistical model inaccurately it won’t behave itself and there’s no point in interpreting its parameter estimates or worrying about significance tests of confidence intervals: the model is wrong.

Andy Field
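
As a rough illustration of Field's cat-and-dog point (my own sketch, not his; the data-generating process and coefficients are invented): fit a purely additive linear model to an outcome that is actually driven by an interaction, and the estimated parameters describe an animal that isn't there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# True process (invented): the effect of x1 depends on x2, so no additive story.
y = 1.0 + 0.5 * x1 * x2 + rng.normal(scale=0.5, size=n)

# Fit the additive linear model y ~ x1 + x2 by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("additive-model coefficients (intercept, x1, x2):", np.round(beta, 3))
# Both slope estimates hover around zero: interpreting them, or their
# significance tests, says nothing about the interaction that drives y.
```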

Econometric beasts of bias

27 October, 2015 at 11:26 | Posted in Statistics & Econometrics | 1 Comment

In an article posted earlier on this blog — What are the key assumptions of linear regression models? — yours truly tried to argue that since econometrics doesn’t content itself with only making “optimal” predictions, but also aspires to explain things in terms of causes and effects, econometricians need loads of assumptions — and that the most important of these are additivity and linearity.

Let me take the opportunity to elaborate a little more on why I find these assumptions to be of such paramount importance, and why they ought to be argued for much more carefully — on both epistemological and ontological grounds — if they are to be used at all.

Limiting model assumptions in economic science always have to be closely examined. If we are going to be able to show that the mechanisms or causes that we isolate and handle in our models are stable in the sense that they do not change when we “export” them to our “target systems”, we have to be able to show that they do not hold only under ceteris paribus conditions, for if they did, they would a fortiori be of only limited value for our understanding, explanations or predictions of real economic systems. As the always eminently quotable Keynes wrote (emphasis added) in Treatise on Probability (1921):

The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … Yet there might well be quite different laws for wholes of different degrees of complexity, and laws of connection between complexes which could not be stated in terms of laws connecting individual parts … If different wholes were subject to different laws qua wholes and not simply on account of and in proportion to the differences of their parts, knowledge of a part could not lead, it would seem, even to presumptive or probable knowledge as to its association with other parts … These considerations do not show us a way by which we can justify induction … [p. 427] No one supposes that a good induction can be arrived at merely by counting cases. The business of strengthening the argument chiefly consists in determining whether the alleged association is stable, when accompanying conditions are varied … [p. 468] In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfill its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument as a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps have given supportive evidence for considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics on the whole has not delivered “truth”. And I doubt if it has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a cautious attitude towards probabilistic inferences in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other variables – of vital importance, and although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered for the model. The variables that were included can hence never be guaranteed to be more than potential, rather than real, causes. A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption, however, one has to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.

Real world social systems are not governed by stable causal mechanisms or capacities. As Keynes wrote in his critique of econometrics and inferential statistics already in the 1920s (emphasis added):

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of “laws” and relations that econometrics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in real-world social target systems they only do it in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain they do it (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately that also makes most of the achievements of econometrics – as most of contemporary endeavours of mainstream economic theoretical modeling – rather useless.
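
A minimal simulation of the "export" problem (entirely my own illustration, with invented slopes): estimate a parameter in one spatio-temporal context and carry it, unchanged, into another context where the underlying relation has shifted.

```python
import numpy as np

rng = np.random.default_rng(7)


def sample(slope, n=300):
    """Generate (x, y) data with a given 'true' slope plus noise."""
    x = rng.normal(size=n)
    y = slope * x + rng.normal(scale=0.3, size=n)
    return x, y


# Context A and context B: the 'same' relation, but the slope has shifted.
xa, ya = sample(2.0)
xb, yb = sample(0.5)

slope_a, intercept_a = np.polyfit(xa, ya, 1)   # estimated in context A
slope_b, intercept_b = np.polyfit(xb, yb, 1)   # what B's own data say

rmse_exported = np.sqrt(np.mean((yb - (slope_a * xb + intercept_a)) ** 2))
rmse_local = np.sqrt(np.mean((yb - (slope_b * xb + intercept_b)) ** 2))

print(f"slope estimated in context A: {slope_a:.2f}")
print(f"RMSE in context B with the exported parameter: {rmse_exported:.2f}")
print(f"RMSE in context B with B's own estimate:       {rmse_local:.2f}")
# Unless the targeted causes are stable and invariant across contexts,
# the exported parameter is just a number, not knowledge about context B.
```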

Following our recent post on econometricians’ traditional privileging of unbiased estimates, there were a bunch of comments echoing the challenge of teaching this topic, as students as well as practitioners often seem to want the comfort of an absolute standard such as best linear unbiased estimate or whatever. Commenters also discussed the tradeoff between bias and variance, and the idea that unbiased estimates can overfit the data.

I agree with all these things but I just wanted to raise one more point: In realistic settings, unbiased estimates simply don’t exist. In the real world we have nonrandom samples, measurement error, nonadditivity, nonlinearity, etc etc etc.

So forget about it. We’re living in the real world …


It’s my impression that many practitioners in applied econometrics and statistics think of their estimation choice kinda like this:

1. The unbiased estimate. It’s the safe choice, maybe a bit boring and maybe not the most efficient use of the data, but you can trust it and it gets the job done.

2. A biased estimate. Something flashy, maybe Bayesian, maybe not, it might do better but it’s risky. In using the biased estimate, you’re stepping off base—the more the bias, the larger your lead—and you might well get picked off …

If you take the choice above and combine it with the unofficial rule that statistical significance is taken as proof of correctness (in econ, this would also require demonstrating that the result holds under some alternative model specifications, but “p less than .05” is still key), then you get the following decision rule:

A. Go with the safe, unbiased estimate. If it’s statistically significant, run some robustness checks and, if the result doesn’t go away, stop.

B. If you don’t succeed with A, you can try something fancier. But . . . if you do that, everyone will know that you tried plan A and it didn’t work, so people won’t trust your finding.

So, in a sort of Gresham’s Law, all that remains is the unbiased estimate. But, hey, it’s safe, conservative, etc, right?

And that’s where the present post comes in. My point is that the unbiased estimate does not exist! There is no safe harbor. Just as we can never get our personal risks in life down to zero … there is no such thing as unbiasedness. And it’s a good thing, too: recognition of this point frees us to do better things with our data right away.

Andrew Gelman
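
To illustrate the bias-variance point in Gelman's argument (a sketch of my own, not his code; the true mean, sample size and shrinkage factor are invented): with few observations, a deliberately biased shrinkage estimator can beat the unbiased sample mean on mean squared error.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, n, reps = 0.5, 2.0, 10, 20_000

# Simulate many small samples and compare two estimators of the mean.
samples = rng.normal(true_mu, sigma, size=(reps, n))
unbiased = samples.mean(axis=1)   # the "safe" unbiased sample mean
shrunk = 0.7 * unbiased           # deliberately biased: shrink towards zero


def mse(estimates):
    """Mean squared error around the true value."""
    return np.mean((estimates - true_mu) ** 2)


print(f"MSE of the unbiased sample mean: {mse(unbiased):.3f}")
print(f"MSE of the shrunken estimate:    {mse(shrunk):.3f}")
# The shrunken estimator is biased (its expectation is 0.7 * 0.5 = 0.35),
# yet its variance falls by a factor of 0.7 ** 2, so its total error is smaller.
```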

Economics journals — publishing lazy non-scientific work

27 October, 2015 at 11:00 | Posted in Economics | 1 Comment

In a new paper, Andrew Chang, an economist at the Federal Reserve, and Phillip Li, an economist with the Office of the Comptroller of the Currency, describe their attempt to replicate 67 papers from 13 well-regarded economics journals …

Their results? Just under half of the papers (29 out of the remaining 59) could be qualitatively replicated (that is to say, their general findings held up, even if the authors did not arrive at the exact same quantitative result). For the other half, whose results could not be replicated, the most common reason was “missing public data or code” …

H.D. Vinod, an economics professor at Fordham University … noted that … caution could be outweighed by the sheer amount of work it takes to clean up data files in order to make them reproducible.

“It’s human laziness,” he said. “There’s all this work involved in getting the data together” …

Bruce McCullough said he thought the authors’ definition of what counted as replication – achieving the same qualitative, as opposed to quantitative, results – was far too generous. If a paper’s conclusions are correct, he argues, one should be able to arrive at the same numbers using the same data.

“What these journals produce is not science,” he said. “People should treat the numerical results as if they were produced by a random number generator.”

Anna Louie Sussman

Why non-existence of uncertainty is such a monstrously absurd assumption

26 October, 2015 at 16:04 | Posted in Economics | 1 Comment

All these pretty, polite techniques, made for a well-panelled Board Room and a nicely regulated market, are liable to collapse. At all times the vague panic fears and equally vague and unreasoned hopes are not really lulled, and lie but a little way below the surface.

Perhaps the reader feels that this general, philosophical disquisition on the behavior of mankind is somewhat remote from the economic theory under discussion. But I think not. Tho this is how we behave in the market place, the theory we devise in the study of how we behave in the market place should not itself submit to market-place idols. I accuse the classical economic theory of being itself one of these pretty, polite techniques which tries to deal with the present by abstracting from the fact that we know very little about the future.

I dare say that a classical economist would readily admit this. But, even so, I think he has overlooked the precise nature of the difference which his abstraction makes between theory and practice, and the character of the fallacies into which he is likely to be led.

This is particularly the case in his treatment of Money and Interest.

John Maynard Keynes

Question the monetary system and you’re academically dead!

25 October, 2015 at 18:11 | Posted in Economics | 10 Comments

 

Bernard Lietaer is a former professor of international finance and president at the Central Bank of Belgium.

Did Keynes ‘accept’ the IS-LM model?

25 October, 2015 at 09:02 | Posted in Economics | Comments Off on Did Keynes ‘accept’ the IS-LM model?

Lord Keynes has some interesting references and links for those wanting to dwell upon the question of whether Keynes really “accepted” Hicks’s IS-LM model.

My own view is that IS-LM doesn’t adequately reflect the width and depth of Keynes’s insights on the workings of modern market economies:

1 Almost nothing in the post-General Theory writings of Keynes suggests that he considered Hicks’s IS-LM anything like a faithful rendering of his thought. In Keynes’s canonical statement of the essence of his theory — in the famous 1937 Quarterly Journal of Economics article — there is nothing to even suggest that Keynes would have thought the existence of a Keynes-Hicks-IS-LM-theory anything but pure nonsense. John Hicks, the man who invented IS-LM in his 1937 Econometrica review of Keynes’ General Theory — “Mr. Keynes and the ‘Classics’. A Suggested Interpretation” — returned to it in an article in 1980 — “IS-LM: an explanation” — in the Journal of Post Keynesian Economics. Self-critically he wrote that “the only way in which IS-LM analysis usefully survives — as anything more than a classroom gadget, to be superseded, later on, by something better — is in application to a particular kind of causal analysis, where the use of equilibrium methods, even a drastic use of equilibrium methods, is not inappropriate.” What Hicks acknowledges in 1980 is basically that his original IS-LM model ignored significant parts of Keynes’ theory. IS-LM is inherently a temporary general equilibrium model. However, much of the discussion we have in macroeconomics is about timing and the speed of relative adjustments of quantities, commodity prices and wages — on which IS-LM doesn’t have much to say.

2 IS-LM forces to a large extent the analysis into a static comparative equilibrium setting that doesn’t in any substantial way reflect the processual nature of what takes place in historical time. To me Keynes’s analysis is in fact inherently dynamic — at least in the sense that it was based on real historic time and not the logical-ergodic-non-entropic time concept used in most neoclassical model building. And as Niels Bohr used to say — thinking is not the same as just being logical …

3 IS-LM reduces interaction between real and nominal entities to a rather constrained interest mechanism which is far too simplistic for analyzing complex financialised modern market economies.

4 IS-LM gives no place for real money, but rather trivializes the role that money and finance play in modern market economies. As Hicks, commenting on his IS-LM construct, had it in 1980 — “one did not have to bother about the market for loanable funds.” From the perspective of modern monetary theory, it’s obvious that IS-LM to a large extent ignores the fact that money in modern market economies is created in the process of financing — and not as IS-LM depicts it, something that central banks determine.

5 IS-LM is typically set in a current values numéraire framework that definitely downgrades the importance of expectations and uncertainty — and a fortiori gives too large a role to interest rates as ruling the roost when it comes to investments and liquidity preferences. In this regard it is actually as bad as all the modern microfounded Neo-Walrasian-New-Keynesian models where Keynesian genuine uncertainty and expectations aren’t really modelled. Especially the two-dimensionality of Keynesian uncertainty — both a question of probability and “confidence” — has been impossible to incorporate into this framework, which basically presupposes people following the dictates of expected utility theory (high probability may mean nothing if the agent has low “confidence” in it). Reducing uncertainty to risk — implicit in most analyses building on IS-LM models — is nothing but hand waving. According to Keynes we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the “confidence” or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modeled by “modern” social sciences. And often we “simply do not know.”

6 IS-LM not only ignores genuine uncertainty, but also the essentially complex and cyclical character of economies and investment activities, speculation, endogenous money, labour market conditions, and the importance of income distribution. And as Axel Leijonhufvud so eloquently notes on IS-LM economics — “one doesn’t find many inklings of the adaptive dynamics behind the explicit statics.” Most of the insights on dynamic coordination problems that made Keynes write General Theory are lost in the translation into the IS-LM framework.

Given this, it’s difficult to see how and why Keynes in earnest should have “accepted” Hicks’s construct. 

Bayesianism — a scientific cul-de-sac

24 October, 2015 at 20:27 | Posted in Statistics & Econometrics | 2 Comments

One of my favourite “problem situating lecture arguments” against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians that do not eat turkeys and that every day I see the sun rise confirms my belief.” For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
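
For readers who want to see the turkey's arithmetic spelled out, here is a small sketch (my own illustration; the alternative hypothesis and its likelihood are invented) of the daily Bayesian update.

```python
# The turkey's hypotheses (probabilities invented for illustration):
# H : "people are nice vegetarians who never eat turkeys"
# A : "people fatten turkeys and eat them at Christmas"
p_H = 0.5                      # prior belief in H
p_surv_given_H = 1.0           # under H the turkey always survives the day
p_surv_given_A = 0.99          # under A it also survives, until Christmas

for day in range(1, 301):
    # Bayes' rule: P(H | survived) = P(survived | H) * P(H) / P(survived)
    p_surv = p_surv_given_H * p_H + p_surv_given_A * (1 - p_H)
    p_H = p_surv_given_H * p_H / p_surv
    if day % 100 == 0:
        print(f"day {day}: P(H | evidence so far) = {p_H:.4f}")

# The posterior creeps towards 1: perfectly 'rational' updating,
# right up to the morning of the Christmas dinner.
```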

 
For more on my own objections to Bayesianism:
Bayesianism — a patently absurd approach to science
Bayesianism — preposterous mumbo jumbo
One of the reasons I’m a Keynesian and not a Bayesian
Keynes and Bayes in paradise

‘Le pied dans le plat’

24 October, 2015 at 18:04 | Posted in Varia | Comments Off on ‘Le pied dans le plat’

 

Old love dies hard …

We few, we happy few

24 October, 2015 at 14:14 | Posted in Varia | Comments Off on We few, we happy few

 

Saint Crispin’s Day falls on 25 October and is the feast day of Saint Crispin. It is most famous as the day on which the Battle of Agincourt was fought in 1415.

Paul “Chicolini” Ryan

24 October, 2015 at 10:53 | Posted in Politics & Society | Comments Off on Paul “Chicolini” Ryan

 

“He may talk like an idiot, he may look like an idiot. But don’t let that fool you, he really is an idiot.”

Econometric delusions

22 October, 2015 at 16:42 | Posted in Economics, Statistics & Econometrics | 3 Comments


Because I was there when the economics department of my university got an IBM 360, I was very much caught up in the excitement of combining powerful computers with economic research. Unfortunately, I lost interest in econometrics almost as soon as I understood how it was done. My thinking went through four stages:

1. Holy shit! Do you see what you can do with a computer’s help?
2. Learning computer modeling puts you in a small class where only other members of the caste can truly understand you. This opens up huge avenues for fraud.
3. The main reason to learn stats is to prevent someone else from committing fraud against you.
4. More and more people will gain access to the power of statistical analysis. When that happens, the stratification of importance within the profession should be a matter of who asks the best questions.

Disillusionment began to set in. I began to suspect that all the really interesting economic questions were FAR beyond the ability to reduce them to mathematical formulas. Watching computers being applied to other pursuits than academic economic investigations over time only confirmed those suspicions.

1. Precision manufacture is an obvious application for computing. And for many applications, this worked magnificently. Any design that combined straight lines and circles could be easily described for computerized manufacture. Unfortunately, the really interesting design problems can NOT be reduced to formulas. A car’s fender, for example, cannot be described using formulas—it can only be described by specifying an assemblage of multiple points. If math formulas cannot describe something as common and uncomplicated as a car fender, how can they hope to describe human behavior?
2. When people started using computers for animation, it soon became apparent that human motion was almost impossible to model correctly. After a great deal of effort, the animators eventually put tracing balls on real humans and recorded that motion before transferring it to the animated character. Formulas failed to describe simple human behavior—like a toddler trying to walk.

Lately, I have discovered a Swedish economist who did NOT give up econometrics merely because it sounded so impossible. In fact, he still teaches the stuff. But for the rest of us, he systematically destroys the pretensions of those who think they can describe human behavior with some basic formulas.

Jonathan Larson

Wonder who that Swedish economist could be …

Exile (private)

21 October, 2015 at 21:27 | Posted in Varia | Comments Off on Exile (private)

 

How do we know which explanation is the best?

21 October, 2015 at 13:43 | Posted in Theory of Science & Methodology | Comments Off on How do we know which explanation is the best?

 

If only mainstream economists also understood these basics …

But they don’t!

Why?

Because in mainstream economics it’s not inference to the best explanation that rules the methodological-inferential roost, but deductive reasoning based on logical inference from a set of axioms. Although — under specific and restrictive assumptions — deductive methods may be usable tools, insisting that economic theories and models ultimately have to be built on a deductive-axiomatic foundation to count as economic theories and models will only make economics irrelevant for solving real-world economic problems. Modern deductive-axiomatic mainstream economics is certainly very rigorous — but if it’s rigorously wrong, who cares?

Instead of making formal logical argumentation based on deductive-axiomatic models the message, I think we are better served by economists who more than anything else try to contribute to solving real problems — and in that endeavour inference to the best explanation is much more relevant than formal logic.

Inference to the Best Explanation

20 October, 2015 at 22:23 | Posted in Theory of Science & Methodology | Comments Off on Inference to the Best Explanation

 

In a time when scientific relativism is expanding, it is important to keep up the claim that science should not be reduced to a purely discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it, and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality, which is the object of science, actually looks like.

Science is made possible by the fact that there are structures that are durable and are independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

Instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning — as in mainstream economic theory — it would be so much more fruitful and relevant to apply inference to the best explanation, given that what we are looking for is to be able to explain what’s going on in the world we live in.

Rules of inference — the vain search for the Holy Grail

20 October, 2015 at 20:12 | Posted in Theory of Science & Methodology | 1 Comment

Traditionally, philosophers have focused mostly on the logical template of inference. The paradigm-case has been deductive inference, which is topic-neutral and context-insensitive. The study of deductive rules has engendered the search for the Holy Grail: syntactic and topic-neutral accounts of all prima facie reasonable inferential rules. The search has hoped to find rules that are transparent and algorithmic, and whose following will just be a matter of grasping their logical form. Part of the search for the Holy Grail has been to show that the so-called scientific method can be formalised in a topic-neutral way. We are all familiar with Carnap’s inductive logic, or Popper’s deductivism or the Bayesian account of scientific method.

There is no Holy Grail to be found. There are many reasons for this pessimistic conclusion. First, it is questionable that deductive rules are rules of inference. Second, deductive logic is about updating one’s belief corpus in a consistent manner and not about what one has reasons to believe simpliciter. Third, as Duhem was the first to note, the so-called scientific method is far from algorithmic and logically transparent. Fourth, all attempts to advance coherent and counterexample-free abstract accounts of scientific method have failed. All competing accounts seem to capture some facets of scientific method, but none can tell the full story. Fifth, though the new Dogma, Bayesianism, aims to offer a logical template (Bayes’s theorem plus conditionalisation on the evidence) that captures the essential features of non-deductive inference, it is betrayed by its topic-neutrality. It supplements deductive coherence with the logical demand for probabilistic coherence among one’s degrees of belief. But this extended sense of coherence is (almost) silent on what an agent must infer or believe.

Stathis Psillos

Phelps’ smackdown on Lucas’ rational expectations

20 October, 2015 at 18:47 | Posted in Economics | 2 Comments

The tiny little problem that there is no hard empirical evidence that verifies rational expectations models doesn’t usually bother its protagonists too much. Rational expectations überpriest Thomas Sargent has defended the epistemological status of the rational expectations hypothesis arguing that since it “focuses on outcomes and does not pretend to have behavioral content,” it has proved to be “a powerful tool for making precise statements.”

Precise, yes, but relevant and realistic? I’ll be dipped!

In their attempted rescue operations, rational expectationists try to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis.

But, on this, they are, simply … eh … wrong.
 

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn’t pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company’s competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called “adaptive expectations:” You adapt your expectations to new observations but don’t throw out the past. If inflation went up last month, it might be supposed that inflation will again be high but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

Phelps’ critique is much in line with the one yours truly put forward in On the use and misuse of theories and models in economics.

Go Canada Go!

20 October, 2015 at 12:52 | Posted in Economics | Comments Off on Go Canada Go!

 

Never seek to tell thy love (private)

20 October, 2015 at 11:37 | Posted in Varia | Comments Off on Never seek to tell thy love (private)

 


Neoclassical economics — purchasing validity at the cost of empirical content

19 October, 2015 at 15:12 | Posted in Economics | 3 Comments

The fact that the economics profession was caught unawares in the long build-up to the 2007 worldwide financial crisis and substantially underestimated its dimensions once it started to unfold can be attributed to two factors … first, the lack of an empirical motivation in the essential modelling assumptions, combined with questionable ceteris paribus assumptions; and second, a blatant disregard for the complete absence of the empirical fit of macroeconomic research. Indeed, the deeper roots for this are found in what has been termed the ‘systemic failure of the economics profession’ to design models that take into account key elements that drive economic outcomes in real-world markets. Half a century of research that conveniently disregarded essential institutional and behavioural characteristics of the markets that it supposedly modelled has left its (hardly surprising) imprint on financial market regulation and ultimately on the world economy. Instead of letting theories die, economists preferred experimentation on a worldwide scale.

D. Arnold & F. P. Maier-Rigaud

Denmark’s euro problem

19 October, 2015 at 08:35 | Posted in Economics | 2 Comments

Denmark has combined high taxes and strong social benefits (free college, heavily subsidized child care, and more) with strong employment and high productivity. It shows that strong welfare states can work.

But it is worth noting that Denmark has had a fairly bad run since the global financial crisis, with a severe slump and a very weak recovery. In fact, real GDP per capita is about as far below pre-crisis levels as that of Portugal or Spain, although with much less suffering. What’s going on?

Part of the answer may be high levels of household debt. But Sweden has high private debt, too, and (despite its monetary missteps) has done much, much better.

My interpretation is that Denmark is paying a high price for shadowing the euro — it hasn’t joined, but it runs monetary policy as if it had — and also, for the past few years, for imposing a lot of fiscal austerity despite very low borrowing costs.

None of this has much bearing on the welfare state issue: short-run macro policy is a different subject. But just in case you wanted to think of Denmark as a role model across the board, this is a useful reminder.

Paul Krugman

Jeffrey Sachs — studying neoclassical economics makes students less ‘pro-social’

18 October, 2015 at 19:42 | Posted in Economics | 2 Comments

Students trained in egoistic game theory, notably in university courses in neoclassical economics, are less likely to cooperate in laboratory settings. There is now a large literature on the lower levels of pro-sociality of economics students compared with non-economics students … The findings of low pro-sociality among economics students are robust; the interpretation, however, has differed between those who have identified self-selection as the cause and those who have identified the content of neoclassical economics training as the cause. In short, does economics attract students with low tendencies towards pro-sociality, or does it make them? The answer, after three decades of research, seems to be both. There is an element of self-selection, but there is also a clear “treatment effect,” according to which pro-sociality declines as the result of instruction in mainstream, egoistic game theory and neoclassical economics more generally.

Jeffrey Sachs

(h/t Bo Rothstein)

Manhattan — Berlin

18 October, 2015 at 15:02 | Posted in Varia | 2 Comments

 

