Just playing games? Count me out!

18 Nov, 2015 at 15:41 | Posted in Economics | 2 Comments

I have spent a considerable part of my life building economic models, and examining the models that other economists have built. I believe that I am making reasonably good use of my talents in an attempt to understand the social world. I have no fellow-feeling with those economic theorists who, off the record at seminars and conferences, admit that they are only playing a game with other theorists. If their models are not intended seriously, I want to say (and do say when I feel sufficiently combative), why do they expect me to spend my time listening to their expositions? Count me out of the game.

Robert Sugden

Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in ‘On the use and misuse of theories and models in economics’ there is really no support for this conviction at all. On the contrary. If we want to have anything of interest to say on real economies, financial crises and the decisions and choices real people make, it is high time to place macroeconomic models building on representative actors and rational expectations microfoundations where they belong – in the dustbin.

For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than hand waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place – instead of simply conjuring the problem away by assuming rational expectations and treating uncertainty as if it were possible to reduce it to stochastic risk. That is scientific cheating. And it has been going on for too long now. If that’s the kind of game you want to play — count me out!

The open society and its enemies

15 Nov, 2015 at 16:37 | Posted in Politics & Society | Comments Off on The open society and its enemies

Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them … We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.

Karl Popper The Open Society and Its Enemies (1945)


Bleu Blanc Rouge

14 Nov, 2015 at 22:58 | Posted in Varia | Comments Off on Bleu Blanc Rouge

 

Though I speak with the tongues of angels,
If I have not love…
My words would resound with but a tinkling cymbal.
And though I have the gift of prophesy…
And understand all mysteries…
and all knowledge…
And though I have all faith
So that I could remove mountains,
If I have not love…
I am nothing.

November 13th, 2015 — a date which will live in infamy

14 Nov, 2015 at 15:11 | Posted in Economics | Comments Off on November 13th, 2015 — a date which will live in infamy

 

The verdict of history will be harsh.

Are economic models ‘true enough’?

13 Nov, 2015 at 19:39 | Posted in Theory of Science & Methodology | 2 Comments

Stylized facts are close kin of ceteris paribus laws. They are ‘broad generalizations true in essence, though perhaps not in detail’. They play a major role in economics, constituting explananda that economic models are required to explain. Models of economic growth, for example, are supposed to explain the (stylized) fact that the profit rate is constant. The unvarnished fact of course is that profit rates are not constant. All sorts of non-economic factors — e.g., war, pestilence, drought, political chicanery — interfere. Manifestly, stylized facts are not (what philosophers would call) facts, for the simple reason that they do not actually obtain. It might seem then that economics takes itself to be required to explain why known falsehoods are true. (Voodoo economics, indeed!) This can’t be correct. Rather, economics is committed to the view that the claims it recognizes as stylized facts are in the right neighborhood, and that their being in the right neighborhood is something economic models should account for. The models may show them to be good approximations in all cases, or where deviations from the economically ideal are small, or where economic factors dominate non-economic ones. Or they might afford some other account of their often being nearly right. The models may diverge as to what is actually true, or as to where, to what degree, and why the stylized facts are as good as they are. But to fail to acknowledge the stylized facts would be to lose valuable economic information (for example, the fact that if we control for the effects of such non-economic interference as war, disease, and the president for life absconding with the national treasury, the profit rate is constant.) Stylized facts figure in other social sciences as well. I suspect that under a less alarming description, they occur in the natural sciences too. The standard characterization of the pendulum, for example, strikes me as a stylized fact of physics. The motion of the pendulum which physics is supposed to explain is a motion that no actual pendulum exhibits. What such cases point to is this: The fact that a strictly false description is in the right neighborhood sometimes advances understanding of a domain.

Catherine Elgin

Catherine Elgin thinks we should accept model claims when we consider them to be ‘true enough,’ and Uskali Mäki has argued in a similar vein, maintaining that it could be warranted — based on diverse pragmatic considerations — to accept model claims that are negligibly false.

Hmm …

When criticizing the basic (DSGE) workhorse model for its inability to explain involuntary unemployment, its defenders maintain that later elaborations — especially newer search models — manage to do just that. However, one of the more conspicuous problems with those “solutions” is that they — as e.g. Pissarides’ “Loss of Skill during Unemployment and the Persistence of Unemployment Shocks” QJE (1992) — are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. For how could one even imagine empirically testing assumptions such as Pissarides’ “model 1” assumptions of reality being adequately represented by “two overlapping generations of fixed size,” “wages determined by Nash bargaining,” “actors maximizing expected utility,” “endogenous job openings,” and “job matching describable by a probability distribution,” without coming to the conclusion that this is — in terms of realism and relevance — far from ‘negligibly false’ or ‘true enough’?

Suck on that — and tell me if those typical mainstream — neoclassical — modeling assumptions in any possibly relevant way — with or without due pragmatic considerations — can be considered anything other than imagined model-world assumptions that have nothing at all to do with the real world we happen to live in!

Econometrics — fictions masquerading as science

13 Nov, 2015 at 17:14 | Posted in Statistics & Econometrics | Comments Off on Econometrics — fictions masquerading as science

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great — but first you have to put the rabbit in the hat. And this is where assumptions come into the picture.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we equate randomness with probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, to be strict, do not at all exist – without specifying such system-contexts.

Accepting a domain of probability theory and a sample space of “infinite populations” — which is legion in modern econometrics — also implies that judgments are made on the basis of observations that are actually never made! Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
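A minimal sketch of what that presupposition amounts to (my own illustration, with hypothetical parameters, not anything drawn from the econometrics literature): the long-run coverage claim behind a standard 95 % confidence interval can only be cashed out by literally re-running the sampling from a known ‘probability machine’. In a simulation we can do exactly that; with one non-repeatable historical data set we cannot.

```python
# Sketch: verifying a coverage claim requires a repeatable "probability machine".
# All numbers here are hypothetical; the point is that the verification step
# consists of observations that real-world socio-economic data never supply.
import numpy as np

rng = np.random.default_rng(42)
true_mean, sigma, n, trials = 10.0, 2.0, 50, 10_000

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    low, high = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += low <= true_mean <= high

# Close to 0.95 -- but only because we built the machine and could rerun it at will.
print(f"coverage over {trials} repeated samples: {covered / trials:.3f}")
```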

In his book Statistical Models and Causal Inference: A Dialogue with the Social Sciences David Freedman touches on this fundamental problem, arising when you try to apply statistical models outside overly simple nomological machines like coin tossing and roulette wheels:

Lurking behind the typical regression model will be found a host of such assumptions; without them, legitimate inferences cannot be drawn from the model. There are statistical procedures for testing some of these assumptions. However, the tests often lack the power to detect substantial failures. Furthermore, model testing may become circular; breakdowns in assumptions are detected, and the model is redefined to accommodate. In short, hiding the problems can become a major goal of model building.

Using models to make predictions of the future, or the results of interventions, would be a valuable corrective. Testing the model on a variety of data sets – rather than fitting refinements over and over again to the same data set – might be a good second-best … Built into the equation is a model for non-discriminatory behavior: the coefficient d vanishes. If the company discriminates, that part of the model cannot be validated at all.

Regression models are widely used by social scientists to make causal inferences; such models are now almost a routine way of demonstrating counterfactuals. However, the “demonstrations” generally turn out to depend on a series of untested, even unarticulated, technical assumptions. Under the circumstances, reliance on model outputs may be quite unjustified. Making the ideas of validation somewhat more precise is a serious problem in the philosophy of science. That models should correspond to reality is, after all, a useful but not totally straightforward idea – with some history to it. Developing appropriate models is a serious problem in statistics; testing the connection to the phenomena is even more serious …

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved, although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science.

Making outlandish statistical assumptions does not provide a solid ground for doing relevant social science.
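To make the point concrete, here is a minimal toy example (mine, not Freedman’s) of the classic spurious-regression trap: regress one random walk on another, independently generated one, and ordinary least squares routinely reports sizeable R-squared values and ‘significant’ slopes, even though the two series are unrelated by construction.

```python
# Sketch: spurious regression between two independent random walks.
# The assumptions behind the textbook t-test (stationary, well-behaved errors)
# are violated, and the "findings" are fiction.
import numpy as np

rng = np.random.default_rng(0)
T, reps = 200, 1000
rejections, r2s = 0, []

for _ in range(reps):
    x = np.cumsum(rng.normal(size=T))   # random walk 1
    y = np.cumsum(rng.normal(size=T))   # random walk 2, independent of x
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se_slope = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())
    rejections += abs(beta[1] / se_slope) > 1.96   # nominal 5% t-test
    r2s.append(1 - resid.var() / y.var())

print(f"median R^2 across replications: {np.median(r2s):.2f}")
print(f"share of 'significant' slopes at the 5% level: {rejections / reps:.2f}")
# Both numbers come out far from what the textbook assumptions promise
# (R^2 near zero, rejection rate near 0.05).
```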

Methodological foundations of heterodox economics

12 Nov, 2015 at 21:48 | Posted in Economics | Comments Off on Methodological foundations of heterodox economics

 

[h/t Jan Milch]

Read my lips — regression analysis does not imply causation

11 Nov, 2015 at 18:45 | Posted in Statistics & Econometrics | Comments Off on Read my lips — regression analysis does not imply causation

Many treatments of regression seem to take for granted that the investigator knows the relevant variables, their causal order, and the functional form of the relationships among them; measurements of the independent variables are assumed to be without error. Indeed, Gauss developed and used regression in physical science contexts where these conditions hold, at least to a very good approximation. Today, the textbook theorems that justify regression are proved on the basis of such assumptions.

In the social sciences, the situation seems quite different. Regression is used to discover relationships or to disentangle cause and effect. However, investigators have only vague ideas as to the relevant variables and their causal order; functional forms are chosen on the basis of convenience or familiarity; serious problems of measurement are often encountered.


Regression may offer useful ways of summarizing the data and making predictions. Investigators may be able to use summaries and predictions to draw substantive conclusions. However, I see no cases in which regression equations, let alone the more complex methods, have succeeded as engines for discovering causal relationships …

The larger problem remains. Can quantitative social scientists infer causality by applying statistical technology to correlation matrices? That is not a mathematical question, because the answer turns on the way the world is put together. As I read the record, correlational methods have not delivered the goods. We need to work on measurement, design, theory. Fancier statistics are not likely to help much.

David Freedman

If you only have time to study one mathematical statistician, the choice should be easy — David Freedman.
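A toy simulation (mine, not Freedman’s) of the simplest way regression-as-causal-inference goes wrong: an unobserved common cause z drives both x and y, x has no causal effect on y at all, and yet the fitted equation delivers a precise, strongly ‘significant’ coefficient on x.

```python
# Sketch: a confounder makes a causally inert regressor look like a cause.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)                # unobserved common cause
x = 2.0 * z + rng.normal(size=n)      # x is driven by z
y = 3.0 * z + rng.normal(size=n)      # y is driven by z, NOT by x

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Expect roughly cov(x, y) / var(x) = 6/5 = 1.2, although the true causal effect is 0.
print(f"estimated 'effect' of x on y: {beta[1]:.2f}")
```

Intervening on x while holding z fixed would change y by nothing, yet the regression output invites precisely that counterfactual reading.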

Why do I have a blog?

11 Nov, 2015 at 16:34 | Posted in Varia | Comments Off on Why do I have a blog?

 

Mainstream economics — nothing but pseudo-scientific cheating

10 Nov, 2015 at 17:24 | Posted in Economics | 4 Comments

A common idea among mainstream — neoclassical — economists is that science advances through the use of ‘as if’ modeling assumptions and ‘successive approximations’. But is this really a feasible methodology? I think not.

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world).  All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built assumptions — like “rational expectations” or “representative actors” — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is not the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

The implications that follow from the kind of models that mainstream economists construct are always conditional on the simplifying assumptions used — assumptions predominantly of a rather far-reaching and non-empirical character with little resemblance to features of the real world. From a descriptive point of view there is a fortiori usually very little resemblance between the models used and the empirical world. ‘As if’ explanations building on such foundations are not really explanations at all, since they always conditionally build on hypothesized law-like theorems and situation-specific restrictive assumptions. The empirical-descriptive inaccuracy of the models makes it more or less miraculous if they should — in any substantive way — be considered explanatory at all. If the assumptions that are made are known to be descriptively totally unrealistic (think of e.g. “rational expectations”) they are of course likewise totally worthless for making empirical inductions. Assuming that people behave ‘as if’ they were rational FORTRAN-programmed computers doesn’t take us far when we know that the ‘if’ is false.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

But models do not only face theory. They also have to look to the world. Being able to model a “credible world,” a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or unrealisticness has to be qualified.

One could of course also ask for robustness, but the ‘as if’ worlds, even after having been tested for robustness, can still be far away from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Anyway, robust theorems are exceedingly rare or non-existent in macroeconomics. Explanation, understanding and prediction of real world phenomena, relations and mechanisms therefore cannot be grounded (solely) on robustness analysis. Some of the standard assumptions made in neoclassical economic theory – on rationality, information handling and types of uncertainty – are not possible to make more realistic by de-idealization or successive approximations without altering the theory and its models fundamentally.

If we cannot show that the mechanisms or causes we isolate and handle in our models are stable, in the sense that when we export them from our models to our target systems they do not change from one situation to another, then they only hold under ceteris paribus conditions and a fortiori are of limited value for our understanding, explanation and prediction of our real-world target system.

The obvious shortcoming of a basically epistemic — rather than ontological — approach such as “successive approximations” and ‘as if’ modeling assumptions is that “similarity” or “resemblance” tout court do not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the successive ‘as if’ approximations do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), they are nothing more than ‘substitute systems’ that do not bridge to the world but rather miss their target.

So, I have to conclude that constructing minimal macroeconomic ‘as if’ models, or using microfounded macroeconomic models as “stylized facts” somehow “successively approximating” macroeconomic reality, is a rather unimpressive attempt at legitimizing the use of fictitious idealizations for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in neoclassical macroeconomics are restrictive rather than harmless and can a fortiori not in any sensible sense be considered approximations at all.

Mainstream economics building on such a modeling strategy does not produce science.

It’s nothing but pseudo-scientific cheating.

The thrust of this realist rhetoric is the same both at the scientific and at the meta-scientific levels. It is that explanatory virtues need not be evidential virtues. It is that you should feel cheated by “The world is as if T were true”, in the same way as you should feel cheated by “The stars move as if they were fixed on a rotating sphere”. Realists do feel cheated in both cases.

Alan Musgrave

The ultimate argument for scientific realism

10 Nov, 2015 at 15:03 | Posted in Theory of Science & Methodology | 2 Comments

Realism and relativism stand opposed. This much is apparent if we consider no more than the realist aim for science. The aim of science, realists tell us, is to have true theories about the world, where ‘true’ is understood in the classical correspondence sense. And this seems immediately to presuppose that at least some forms of relativism are mistaken … If realism is correct, then relativism (or some versions of it) is incorrect …

Whether or not realism is correct depends crucially upon what we take realism to assert, over and above the minimal claim about the aim of science.

My way into these issues is through what has come to be called the ‘Ultimate Argument for Scientific Realism’. The slogan is Hilary Putnam’s: “Realism is the only philosophy that does not make the success of science a miracle” …

We can at last be clear about what the Ultimate Argument actually is. It is an example of a so-called inference to the best explanation. How, in general, do such inferences work?

The intellectual ancestor of inference to the best explanation is Peirce’s abduction. Abduction goes something like this:

F is a surprising fact.
If T were true, F would be a matter of course.
Hence, T is true.

The argument is patently invalid: it is the fallacy of affirming the consequent …

What we need is a principle to the effect that it is reasonable to accept a satisfactory explanation which is the best we have as true. And we need to amend the inference-scheme accordingly. What we finish up with goes like this:

It is reasonable to accept a satisfactory explanation of any fact, which is also the best available explanation of that fact, as true.
F is a fact.
Hypothesis H explains F.
Hypothesis H satisfactorily explains F.
No available competing hypothesis explains F as well as H does.
Therefore, it is reasonable to accept H as true …

To return to the Ultimate Argument for scientific realism. It is, I suggest, an inference to the best explanation. The fact to be explained is the (novel) predictive success of science. And the claim is that realism (more precisely, the conjecture that the realist aim for science has actually been achieved) explains this fact, explains it satisfactorily, and explains it better than any non-realist philosophy of science. And the conclusion is that it is reasonable to accept scientific realism (more precisely, the conjecture that the realist aim for science has actually been achieved) as true.

Alan Musgrave

Macroeconomic forecasting

10 Nov, 2015 at 09:16 | Posted in Economics | 3 Comments

Macroeconomic forecasts produced with macroeconomic models tend to be little better than intelligent guesswork. That is not an opinion – it is a fact. It is a fact because for decades many reputable and long standing model based forecasters have looked at their past errors, and that is what they find. It is also a fact because we can use models to generate standard errors for forecasts, as well as the most likely outcome that gets all the attention. Doing so indicates errors of a similar magnitude as those observed from past forecasts. In other words, model based forecasts are predictably bad …

I think it is safe to say that this inability to accurately forecast is unlikely to change anytime soon. Which raises an obvious question: why do people still use often elaborate models to forecast? …

It makes sense for both monetary and fiscal authorities to forecast. So why use the combination of a macroeconomic model and judgement to do so, rather than intelligent guesswork? (Intelligent guesswork here means some atheoretical time series forecasting technique.) The first point is that it is not obviously harmful to do so …

Many other organisations, not directly involved in policy making, produce macro forecasts. Why do they bother? Why not just use the policy makers’ forecast? A large part of the answer must be that the media shows great interest in these forecasts. Why is this? I’m tempted to say it’s for the same reason as many people read daily horoscopes. However I think it’s worth adding that there is a small element of a conspiracy to deceive going on here too …

The rather boring truth is that it is entirely predictable that forecasters will miss major recessions, just as it is equally predictable that each time this happens we get hundreds of articles written asking what has gone wrong with macro forecasting. The answer is always the same – nothing. Macroeconomic model based forecasts are always bad, but probably no worse than intelligent guesses.

Simon Wren-Lewis

Hmm …

Strange. On the one hand Wren-Lewis says that “macroeconomic model based forecasts are always bad,” but, on the other hand, since they are “probably no worse than intelligent guesses” and anyway are “not obviously harmful,” we have no reason to complain.

But Wren-Lewis is wrong. These forecasting models and the organizations and persons around them do cost society billions of pounds, euros and dollars every year. And if they do not produce anything better than “intelligent guesswork,” I’m afraid most taxpayers would say that they are certainly not harmless at all!

Mainstream neoclassical economists often maintain – usually referring to the instrumentalism of Milton Friedman – that it doesn’t matter if the assumptions of the models they use are realistic or not. What matters is if the predictions are right or not. But, if so, then the only conclusion we can draw is – throw away the garbage! Because, oh dear, oh dear, how wrong they have been!

When Simon Potter a couple of years ago analyzed the predictions that the Federal Reserve Bank of New York made for real GDP growth and unemployment for the years 2007-2010, it turned out that the predictions were off by 5.9 and 4.4 percentage points respectively – the latter equivalent to more than 6 million unemployed workers:

Economic forecasters never expect to predict precisely. One way of measuring the accuracy of their forecasts is against previous forecast errors. When judged by forecast error performance metrics from the macroeconomic quiescent period that many economists have labeled the Great Moderation, the New York Fed research staff forecasts, as well as most private sector forecasts for real activity before the Great Recession, look unusually far off the mark.

One source for such metrics is a paper by Reifschneider and Tulip (2007). They analyzed the forecast error performance of a range of public and private forecasters over 1986 to 2006 (that is, roughly the period that most economists associate with the Great Moderation in the United States).

On the basis of their analysis, one could have expected that an October 2007 forecast of real GDP growth for 2008 would be within 1.3 percentage points of the actual outcome 70 percent of the time. The New York Fed staff forecast at that time was for growth of 2.6 percent in 2008. Based on the forecast of 2.6 percent and the size of forecast errors over the Great Moderation period, one would have expected that 70 percent of the time, actual growth would be within the 1.3 to 3.9 percent range. The current estimate of actual growth in 2008 is -3.3 percent, indicating that our forecast was off by 5.9 percentage points.

Using a similar approach to Reifschneider and Tulip but including forecast errors for 2007, one would have expected that 70 percent of the time the unemployment rate in the fourth quarter of 2009 should have been within 0.7 percentage point of a forecast made in April 2008. The actual forecast error was 4.4 percentage points, equivalent to an unexpected increase of over 6 million in the number of unemployed workers. Under the erroneous assumption that the 70 percent projection error band was based on a normal distribution, this would have been a 6 standard deviation error, a very unlikely occurrence indeed.

In other words — the “rigorous” and “precise” macroeconomic mathematical-statistical forecasting models were wrong. And the rest of us have to pay.
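Potter’s ‘six standard deviation’ remark can be reproduced with back-of-envelope arithmetic from the numbers quoted above, using the normal approximation he himself flags as erroneous (a rough reconstruction, not the New York Fed’s own calculation):

```python
# Sketch: implied forecast-error standard deviation and realized error in sigmas.
from scipy.stats import norm

gdp_forecast, gdp_outcome = 2.6, -3.3      # percent real GDP growth for 2008
gdp_error = gdp_forecast - gdp_outcome     # 5.9 percentage points

u_band_halfwidth, u_error = 0.7, 4.4       # unemployment: 70% band half-width and realized error

z70 = norm.ppf(0.85)                       # half-width of a central 70% band, about 1.04 sigma
sigma_u = u_band_halfwidth / z70           # implied standard deviation, about 0.67 points

print(f"GDP forecast error: {gdp_error:.1f} percentage points")
print(f"unemployment error in implied sigmas: {u_error / sigma_u:.1f}")  # in the ballpark of Potter's 6
```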

Potter is not the only one who lately has criticized the forecasting business. John Mingers comes to essentially the same conclusion when scrutinizing it from a somewhat more theoretical angle:

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment which involved eight expert teams, from different universities, analysing the same sets of data each using their own particular methodology. The data concerned the demand for food in the US and in the Netherlands and was based on a classic study by Tobin (1950) augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – clearly vastly different especially when remembering that these were based on the same sets of data. The forecasts were perhaps even more extreme – from a base of around 4000 in 1989 the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!
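A toy sketch of how specification choice alone can move an estimated elasticity by this order of magnitude (hypothetical data and specifications, not Magnus and Morgan’s actual study): two ‘teams’ run defensible regressions on the very same sample and report clearly different income elasticities.

```python
# Sketch: same data, two reasonable specifications, two different "facts".
import numpy as np

rng = np.random.default_rng(3)
n = 500
log_income = rng.normal(9.0, 0.4, n)
log_hhsize = 0.5 * (log_income - 9.0) + rng.normal(0, 0.2, n)   # correlated with income
log_food = 0.4 * log_income + 0.3 * log_hhsize + rng.normal(0, 0.1, n)

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Team A: bivariate Engel curve in logs
a = ols(np.column_stack([np.ones(n), log_income]), log_food)
# Team B: same regression, but controlling for household size
b = ols(np.column_stack([np.ones(n), log_income, log_hhsize]), log_food)

print(f"Team A income elasticity: {a[1]:.2f}")   # roughly 0.55 here
print(f"Team B income elasticity: {b[1]:.2f}")   # roughly 0.40 here
# Forecasts built on either number compound the difference over time.
```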

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform to the assumptions made in the applied statistical and econometric models. Much less is a fortiori predictable than standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies actually makes the models strictly speaking inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better then to accept that as a rule “we simply do not know.”

So, to say that this counterproductive forecasting activity is harmless simply isn’t true. Spending billions upon billions of hard-earned money on an activity that is no better than “intelligent guesswork” is doing harm to our economies.

A couple of years ago Lars E. O. Svensson — former deputy governor of the Swedish Riksbank — was able to show that the bank had conducted a monetary policy — based to a large extent on forecasts produced by its macroeconomic models — that led to far too high unemployment according to Svensson’s calculations. Unharmful? Hardly!

In New York State, Section 899 of the Code of Criminal Procedure provides that persons “Pretending to Forecast the Future” shall be considered disorderly under subdivision 3, Section 901 of the Code and liable to a fine of $250 and/or six months in prison. Although the law does not apply to “ecclesiastical bodies acting in good faith and without fees,” I’m not sure where that leaves macroeconomic model-builders and other forecasters …

Some unfounded expectations of economic theory

9 Nov, 2015 at 21:51 | Posted in Economics | 7 Comments

The weaknesses of social-scientific normativism are obvious. The basic assumptions refer to idealized action under pure maxims; no empirically substantive lawlike hypotheses can be derived from them. Either it is a question of analytic statements recast in deductive form or the conditions under which the hypotheses derived could be definitively falsified are excluded under ceteris paribus stipulations. Despite their reference to reality, the laws stated by pure economics have little, if any, information content. To the extent that theories of rational choice lay claim to empirical-analytic knowledge, they are open to the charge of Platonism (Modellplatonismus). Hans Albert has summarized these arguments: The central point is the confusion of logical presuppositions with empirical conditions. The maxims of action introduced are treated not as verifiable hypotheses but as assumptions about actions by economic subjects that are in principle possible. The theorist limits himself to formal deductions of implications in the unfounded expectation that he will nevertheless arrive at propositions with empirical content. Albert’s critique is directed primarily against tautological procedures and the immunizing role of qualifying or “alibi” formulas. This critique of normative-analytic methods argues that general theories of rational action are achieved at too great a cost when they sacrifice empirically verifiable and descriptively meaningful information.

Armchair theorists

9 Nov, 2015 at 17:32 | Posted in Economics | Comments Off on Armchair theorists

One may conclude that … theoretical analysis still has not yet absorbed and digested the simplest fact establishable by the most casual observation. This is a situation ready-made for armchair theorists willing to make a search for mathematical tools appropriate to the problems indicated. Since the mathematical difficulties have so far been the main obstacle, it may be desirable in initial attempts to select postulates mainly from the point of view of facilitating the analysis, in prudent disregard of the widespread scorn for such a procedure.

Koopmans’ Three Essays was required reading when I took my doctoral course in microeconomics thirty-five years ago. Sad to say, it’s still relevant reading …

Neoclassical distribution theory

9 Nov, 2015 at 17:25 | Posted in Economics | 1 Comment

Walked-out Harvard economist Greg Mankiw has more than once tried to defend the 1 % by invoking Adam Smith’s invisible hand:

[B]y delivering extraordinary performances in hit films, top stars may do more than entertain millions of moviegoers and make themselves rich in the process. They may also contribute many millions in federal taxes, and other millions in state taxes. And those millions help fund schools, police departments and national defense for the rest of us …

[T]he richest 1 percent aren’t motivated by an altruistic desire to advance the public good. But, in most cases, that is precisely their effect.

When reading Mankiw’s articles on the “just deserts” of the 1 % one gets a strong feeling that Mankiw is really trying to argue that a market economy is some kind of moral free zone where, if left undisturbed, people get what they “deserve.”

Where does this view come from? Most neoclassical economists actually have a more or less Panglossian view on unfettered markets, but maybe Mankiw has also read neoliberal philosophers like Robert Nozick or David Gauthier. The latter writes in his Morals by Agreement:
 

The rich man may feast on caviar and champagne, while the poor woman starves at his gate. And she may not even take the crumbs from his table, if that would deprive him of his pleasure in feeding them to his birds.

Now, compare that unashamed neoliberal apologetics with what three truly great economists and liberals — John Maynard Keynes, Amartya Sen and Robert Solow — have to say on the issue:

The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.

John Maynard Keynes General Theory (1936)

 

The personal production view is difficult to sustain in cases of interdependent production … i.e., in almost all the usual cases … A common method of attribution is according to “marginal product” … This method of accounting is internally consistent only under some special assumptions, and the actual earning rates of resource owners will equal the corresponding “marginal products” only under some further special assumptions. But even when all these assumptions have been made … marginal product accounting, when consistent, is useful for deciding how to use additional resources … but it does not “show” which resource has “produced” how much … The alleged fact is, thus, a fiction, and while it might appear to be a convenient fiction, it is more convenient for some than for others …

The personal production view … confounds the marginal impact with total contribution, glosses over the issues of relative prices, and equates “being more productive” with “owning more productive resources” … An Indian barber or circus performer may not be producing any less than a British barber or circus performer — just the opposite if I am any judge — but will certainly earn a great deal less …

Amartya Sen Just Deserts (1982)

Who could be against allowing people their ‘just deserts’? But there is that matter of what is ‘just.’ Most serious ethical thinkers distinguish between deservingness and happenstance. Deservingness has to be rigorously earned. You do not ‘deserve’ that part of your income that comes from your parents’ wealth or connections or, for that matter, their DNA. You may be born just plain gorgeous or smart or tall, and those characteristics add to the market value of your marginal product, but not to your deserts. It may be impractical to separate effort from happenstance numerically, but that is no reason to confound them, especially when you are thinking about taxation and redistribution. That is why we want to temper the wind to the shorn lamb, and let it blow on the sable coat.

Robert Solow Journal of Economic Perspectives (2014)

A society where we allow the inequality of incomes and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed.
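Sen’s ‘special assumptions’ can be made explicit with the standard textbook case (a sketch of the usual derivation, not Sen’s own): only under constant returns to scale and price-taking do marginal-product payments exactly exhaust the product.

```latex
% Cobb--Douglas technology with constant returns to scale, 0 < \alpha < 1:
\[
  F(K,L) = A K^{\alpha} L^{1-\alpha}, \qquad
  \frac{\partial F}{\partial L} = (1-\alpha)\frac{F}{L}, \qquad
  \frac{\partial F}{\partial K} = \alpha\frac{F}{K}.
\]
% If each factor is paid its marginal product under price-taking, Euler's theorem
% for functions homogeneous of degree one gives exhaustion of the product:
\[
  L\,\frac{\partial F}{\partial L} + K\,\frac{\partial F}{\partial K}
  = (1-\alpha)F + \alpha F = F.
\]
```

Relax constant returns or perfect competition and the identity fails, which is exactly why the marginal-product ‘shares’ cannot be read as showing who has produced how much.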

Trickle-down economics — neoliberal mumbo jumbo

9 Nov, 2015 at 09:22 | Posted in Economics | Comments Off on Trickle-down economics — neoliberal mumbo jumbo

Doggie

8 Nov, 2015 at 12:09 | Posted in Varia | Comments Off on Doggie

Hedda is the latest addition to the Meyer-Syll family.
Two years today.
And yes — Ibsen is one of our favourite dramatists …
 

Suspect macroeconomic ideas

7 Nov, 2015 at 15:01 | Posted in Economics | 2 Comments

The advocates of free markets in all their versions say that crises are rare events, though they have been happening with increasing frequency as we change the rules to reflect beliefs in perfect markets. I would argue that economists, like doctors, have much to learn from pathology. We see more clearly in these unusual events how the economy really functions. In the aftermath of the Great Depression, a peculiar doctrine came to be accepted, the so-called “neoclassical synthesis.” It argued that once markets were restored to full employment, neoclassical principles would apply. The economy would be efficient. We should be clear: this was not a theorem but a religious belief. The idea was always suspect.

Joseph Stiglitz

A strange and suspect idea indeed — as is the ‘New Keynesian’ idea that although the economy does not automatically keep itself at a full-employment equilibrium, we do not — unless we are at the Zero Lower Bound — need fiscal policy as long as monetary policy manages to set ‘the’ rate of interest at a level compatible with the targeted inflation.

As Stiglitz has it — this is nothing but “a religious belief.”

Glory to the Father

7 Nov, 2015 at 12:12 | Posted in Varia | Comments Off on Glory to the Father

 
spotify:track:5LVeCnh151n9SCKkgVoqrj

One of my absolute favourites

6 Nov, 2015 at 19:42 | Posted in Theory of Science & Methodology | 5 Comments

Inference to the Best Explanation can be seen as an extension of the idea of `self-evidencing’ explanations, where the phenomenon that is explained in turn provides an essential part of the reason for believing the explanation is correct. For example, a star’s speed of recession explains why its characteristic spectrum is red-shifted by a specified amount, but the observed red-shift may be an essential part of the reason the astronomer has for believing that the star is receding at that speed. Self-evidencing explanations exhibit a curious circularity, but this circularity is benign.

The recession is used to explain the red-shift and the red-shift is used to confirm the recession, yet the recession hypothesis may be both explanatory and well-supported. According to Inference to the Best Explanation, this is a common situation in science: hypotheses are supported by the very observations they are supposed to explain. Moreover, on this model, the observations support the hypothesis precisely because it would explain them. Inference to the Best Explanation thus partially inverts an otherwise natural view of the relationship between inference and explanation. According to that natural view, inference is prior to explanation. First the scientist must decide which hypotheses to accept; then, when called upon to explain some observation, she will draw from her pool of accepted hypotheses. According to Inference to the Best Explanation, by contrast, it is only by asking how well various hypotheses would explain the available evidence that she can determine which hypotheses merit acceptance. In this sense, Inference to the Best Explanation has it that explanation is prior to inference.
