The future — something we know very little about

18 Feb, 2018 at 19:54 | Posted in Economics | 1 Comment

All these pretty, polite techniques, made for a well-panelled Board Room and a nicely regulated market, are liable to collapse. At all times the vague panic fears and equally vague and unreasoned hopes are not really lulled, and lie but a little way below the surface.

Perhaps the reader feels that this general, philosophical disquisition on the behavior of mankind is somewhat remote from the economic theory under discussion. But I think not. Tho this is how we behave in the market place, the theory we devise in the study of how we behave in the market place should not itself submit to market-place idols. I accuse the classical economic theory of being itself one of these pretty, polite techniques which tries to deal with the present by abstracting from the fact that we know very little about the future.

I dare say that a classical economist would readily admit this. But, even so, I think he has overlooked the precise nature of the difference which his abstraction makes between theory and practice, and the character of the fallacies into which he is likely to be led.

John Maynard Keynes

Poland’s Law and Justice — now and then

18 Feb, 2018 at 15:59 | Posted in Politics & Society | 2 Comments

A new law passed by Poland’s ruling Law and Justice Party and signed by President Andrzej Duda on Feb. 6, means that you may end up in prison for three years if you “publicly and against the facts attribute to the Polish nation or the Polish state responsibility or co-responsibility for Nazi crimes committed by the German Third Reich.”

The Polish Parliament ordered a new investigation into the Jedwabne atrocity in July 2000 … Over the course of two years, investigators from the Polish Institute of National Remembrance (IPN) interviewed some 111 witnesses … On July 9, 2002, IPN released the final findings of its two-year-long investigation. In a carefully worded summary IPN stated its principal conclusions as follows:

The perpetrators of the crime sensu stricto were Polish inhabitants of Jedwabne and its environs; responsibility for the crime sensu largo could be ascribed to the Germans. IPN found that Poles played a “decisive role” in the massacre, but the massacre was “inspired by the Germans”. The massacre was carried out in full view of the Germans, who were armed and had control of the town, and the Germans refused to intervene and halt the killings. IPN wrote: “The presence of German military policemen … and other uniformed Germans … was tantamount to consent to, and tolerance of, the crime.”

Wikipedia

The Bayesian folly

16 Feb, 2018 at 18:18 | Posted in Economics | 1 Comment

Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys,” so that every day you see the sun rise confirms your belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and so P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
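The turkey’s daily updating can be sketched in a few lines of code. This is only an illustration under assumptions of my own: the prior P(H) = 0.5 and the survival probability P(e|¬H) = 0.9 under the alternative hypothesis are made-up numbers, chosen just to show how quickly the posterior climbs.

```python
# Bayesian turkey: daily update of P(H) on the evidence e = "not eaten today".
# Hypothetical numbers: prior P(H) = 0.5, P(e|H) = 1, P(e|not-H) = 0.9.
p_h = 0.5              # prior belief in H ("people are nice vegetarians")
p_e_given_h = 1.0      # if H is true, survival is certain
p_e_given_not_h = 0.9  # if H is false, any single day is still probably safe

for day in range(100):                                     # 100 survived days
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # law of total probability
    p_h = p_e_given_h * p_h / p_e                          # Bayes' Rule: P(H|e)

print(p_h)  # after 100 days the posterior is very close to 1
```

Every survived day raises P(H|e), exactly as the text says, and none of it tells the turkey anything about the one day that matters.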

Neoclassical economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational.

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but — even granted this questionable reductionism — do rational agents really have to be Bayesian? As I have been arguing repeatedly over the years, there is no strong warrant for believing so.

The nodal point here is — of course — that although Bayes’ Rule is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. As one of my favourite statistics bloggers — Andrew Gelman — puts it:

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statistics to be ignorant of experimental design and analysis of variance, instead of becoming experts on the convergence of the Gibbs sampler. In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap …

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence …

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16 …

Economics education — teaching cohorts after cohorts of students useless theories

15 Feb, 2018 at 20:17 | Posted in Economics | 4 Comments

Nowadays there is almost no place whatsoever in economics education for courses in the history of economic thought and economic methodology.

This is deeply worrying.

A science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits.

How did we end up in this sad state?

Philip Mirowski gives the following answer:

After a brief flirtation in the 1960s and 1970s, the grandees of the economics profession took it upon themselves to express openly their disdain and revulsion for the types of self-reflection practiced by ‘methodologists’ and historians of economics, and to go out of their way to prevent those so inclined from occupying any tenured foothold in reputable economics departments. It was perhaps no coincidence that history and philosophy were the areas where one found the greatest concentrations of skeptics concerning the shape and substance of the post-war American economic orthodoxy. High-ranking economics journals, such as the American Economic Review, the Quarterly Journal of Economics and the Journal of Political Economy, declared that they would cease publication of any articles whatsoever in the area, after a prior history of acceptance.

Once this policy was put in place, algorithmic journal rankings were then used to deny hiring and promotion at the commanding heights of economics to those with methodological leanings. Consequently, the grey-beards summarily expelled both philosophy and history from the graduate economics curriculum; and then, they chased it out of the undergraduate curriculum as well. This latter exile was the bitterest, if only because many undergraduates often want to ask why the profession believes what it does, and hear others debate the answers, since their own allegiances are still in the process of being formed. The rationale tendered to repress this demand was that the students needed still more mathematics preparation, more statistics and more tutelage in ‘theory’, which meant in practice a boot camp regimen consisting of endless working of problem sets, problem sets and more problem sets, until the poor tyros were so dizzy they did not have the spunk left to interrogate the masses of journal articles they had struggled to absorb.

Methodology is about how we do economics, how we evaluate theories, models and arguments. To know and think about methodology is important for every economist. Without methodological awareness it’s really impossible to understand what you are doing and why you’re doing it. Dismissing methodology is dismissing a necessary and vital part of science.

Already back in 1991, a commission chaired by Anne Krueger and including people like Kenneth Arrow, Edward Leamer, and Joseph Stiglitz, reported from their own experience “that it is an underemphasis on the ‘linkages’ between tools, both theory and econometrics, and ‘real world problems’ that is the weakness of graduate education in economics,” and that both students and faculty sensed “the absence of facts, institutional information, data, real-world issues, applications, and policy problems.” And in conclusion, they wrote that “graduate programs may be turning out a generation with too many idiot savants skilled in technique but innocent of real economic issues.”

Not much is different today. Economics — and economics education — is still in dire need of a remake.

Twenty-five years ago, Phil Mirowski was invited to give a speech on themes from his book More Heat than Light at my economics department in Lund, Sweden. All the mainstream neoclassical professors were there. Their theories were totally mangled and no one — absolutely no one — had anything to say even remotely reminiscent of a defence. Nonplussed, one of them, in total desperation, finally asked: “But what shall we do then?”

Yes indeed — what shall they do when their emperor has turned out to be naked?

More and more young economics students want to see a real change in economics and the way it’s taught. They want something other than the same old mainstream neoclassical catechism. They don’t want to be force-fed with useless mainstream neoclassical theories and models.

Ask the mountains

14 Feb, 2018 at 08:47 | Posted in Economics, Varia | Comments Off on Ask the mountains

 

The problem of extrapolation

14 Feb, 2018 at 00:01 | Posted in Theory of Science & Methodology | 8 Comments

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call the extrapolator’s circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can’t experiment on the real-world target directly. To experiment, economists therefore standardly construct ‘surrogate’ models and perform ‘experiments’ on them. To be of interest to us, these surrogate models have to be shown to be relevantly ‘similar’ to the real-world target, so that knowledge from the model can be exported to the real-world target. The fundamental problem highlighted by Steel is that this ‘bridging’ is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target, but to know what is true of the target we have to know that we have a good model …

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies on theoretical economists who think they have contributed anything of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something of the target system. But purpose-built tractability assumptions — like, e.g., invariance, additivity, faithfulness, modularity, common knowledge, etc. — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view of modeling and theorizing in economics. Some theoretical economists deem it quite enough to consider economics a mere “conceptual activity” where the model is not so much seen as an abstraction from reality, but rather as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, thereby enabling him to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be far away from reality – and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

The arrow of time in a non-ergodic world

13 Feb, 2018 at 09:00 | Posted in Theory of Science & Methodology | 3 Comments

For the vast majority of scientists, thermodynamics had to be limited strictly to equilibrium. That was the opinion of J. Willard Gibbs, as well as of Gilbert N. Lewis. For them, irreversibility associated with unidirectional time was anathema …

I myself experienced this type of hostility in 1946 … After I had presented my own lecture on irreversible thermodynamics, the greatest expert in the field of thermodynamics made the following comment: ‘I am astonished that this young man is so interested in nonequilibrium physics. Irreversible processes are transient. Why not wait and study equilibrium as everyone else does?’ I was so amazed at this response that I did not have the presence of mind to answer: ‘But we are all transient. Is it not natural to be interested in our common human condition?’

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages — and hence in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates real-world economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts — so let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and pay me €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught to be the only thing consistent with being rational. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their probability distribution. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 1/6*€10 billion – 5/6*€1 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting one’s economy is not a very rosy perspective for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon be aware of our assets disappearing, and hence that it would be irrational to accept the gamble.
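The two perspectives on the gamble can be sketched numerically. This is a hedged illustration under assumptions of my own: starting wealth of €1 billion, at most 20 consecutive rolls, and “bust” meaning wealth at or below zero.

```python
import random

random.seed(1)

# Ensemble average of a single roll (in € billion): 1/6 chance of +10, 5/6 of -1.
ensemble_ev = (1 / 6) * 10 + (5 / 6) * (-1)  # = 5/6, about +0.83 per roll

# Time perspective: play repeatedly in one universe, starting with 1.0 (€1bn);
# the first roll that takes wealth to zero or below ends the game for good.
def bust_fraction(trials=10_000, rolls=20):
    bust = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rolls):
            wealth += 10 if random.randint(1, 6) == 6 else -1
            if wealth <= 0:
                bust += 1
                break
    return bust / trials

print(round(ensemble_ev, 2))  # positive expected value per roll
print(bust_fraction())        # yet most trajectories end in ruin
```

The ensemble average is comfortably positive, yet roughly five trajectories out of six are bankrupt after the very first roll, which is exactly the time perspective described above.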

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls by 50% to €50 in one universe (market), and goes up by 50% to €150 in another universe (market), giving an average of €100 ((150+50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150, and then falls by 50% to €75 (0.5*150).

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.
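The €100 asset example reduces to two one-line calculations; a minimal sketch:

```python
price, up, down = 100.0, 1.5, 0.5  # a ±50% move on a €100 asset

# Ensemble average: two parallel markets, one up-move and one down-move.
ensemble_average = price * (up + down) / 2   # (150 + 50) / 2 = 100

# Time average: one market where both moves happen in sequence.
time_average = price * up * down             # 100 * 1.5 * 0.5 = 75

print(ensemble_average, time_average)  # 100.0 75.0
```

Because the moves are multiplicative, the sequential product lands at €75 whichever move comes first, while the cross-sectional average stays at €100 — the gap that only vanishes under ergodicity.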

On a more economic-theoretical level, ​the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility.

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these parallel universes. In our gamble example, it is as if one supposes that various “I”s are rolling the die and that the losses of many of them will be offset by the huge profit one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer, where one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as, e.g., investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When your assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our gamble example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of a repeated bet. What fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can better evaluate an investment (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater is the leverage. But also – the greater is the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, he optimizes the rate of growth on his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. This means that at each investment opportunity he is (according to the so-called Kelly criterion) to invest the fraction 0.6 – (1 – 0.6), i.e. about 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% (0.6 log (1.2) + 0.4 log (0.8))).
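The Kelly numbers in the paragraph above can be checked directly; a short sketch, using natural logarithms (as the ≈2% figure presupposes) and assuming an even-odds payoff:

```python
import math

p = 0.6                       # probability the investor's valuation is correct
kelly_fraction = p - (1 - p)  # 2p - 1 = 0.2: bet 20% of assets each time

# Expected log-growth per bet at the Kelly fraction, for an even-odds payoff:
growth = p * math.log(1 + kelly_fraction) + (1 - p) * math.log(1 - kelly_fraction)

print(round(kelly_fraction, 2))  # 0.2
print(round(growth, 3))          # 0.02, i.e. about 2% per bet
```

Betting more than the Kelly fraction raises the leverage but lowers the long-run growth rate, which is the point of the warning that follows.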

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in a policy to curb excessive risk-taking.

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

Irreversibility can no longer be identified with a mere appearance that would disappear if we had perfect knowledge … Figuratively speaking, matter at equilibrium, with no arrow of time, is ‘blind,’ but with the arrow of time, it begins to ‘see’ … The claim that the arrow of time is ‘only phenomenological,’ or subjective, is therefore absurd. We are actually the children of the arrow of time, of evolution, not its progenitors.

Ilya Prigogine

So what you’re saying is …

13 Feb, 2018 at 08:55 | Posted in Politics & Society | 7 Comments

 

Trump on Britain’s NHS

13 Feb, 2018 at 08:17 | Posted in Politics & Society | 1 Comment

 

How central banks create money

12 Feb, 2018 at 11:10 | Posted in Economics | 5 Comments

Have you ever asked yourself how central banks create money? BBC Radio gives the answer.

The limits of probabilistic reasoning

12 Feb, 2018 at 09:48 | Posted in Statistics & Econometrics | 8 Comments

Probabilistic reasoning in science — especially Bayesianism — reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but, even granted this questionable reductionism, it’s not self-evident that rational agents really have to be probabilistically consistent. There is no strong warrant for believing so. Rather, there is strong evidence for us encountering huge problems if we let probabilistic reasoning become the dominant method for doing research in social sciences on problems that involve risk and uncertainty.

In many of the situations that are relevant to economics, one could argue that there is simply not enough of adequate and relevant information to ground beliefs of a probabilistic kind and that in those situations it is not possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no own experience and no data) you have no information on unemployment and a fortiori nothing on which to ground any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 10% to becoming unemployed and 90% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish between symmetry-based probabilities derived from information and symmetry-based probabilities derived from an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

I think this critique of Bayesianism is in accordance with the views of John Maynard Keynes in A Treatise on Probability (1921) and the General Theory (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by probabilistically reasoning Bayesian economists.

We always have to remember that economics and statistics are two quite different things, and as long as economists cannot identify their statistical theories with real-world phenomena there is no real warrant for taking their statistical inferences seriously.

Just as there is no such thing as a ‘free lunch,’ there is no such thing as a ‘free probability.’ To be able at all to talk about probabilities, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events — in statistics one refers to any process where you observe or measure as an experiment (rolling a die) and to the results obtained as the outcomes or events (the number of points rolled with the die, e.g. 3 or 5) of the experiment — then, strictly speaking, there is no event at all.

Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data-generating processes or structures — something seldom or never done in economics.

And this is the basic problem!

If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous ‘nomological machines’ for prices, gross domestic product, income distribution etc? Only by a leap of faith. And that does not suffice in science. You have to come up with some really good arguments if you want to persuade people into believing in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions! Not doing that, you simply conflate statistical and economic inferences.

The present ‘machine learning’ and ‘big data’ hype shows that many social scientists — falsely — think that they can get away with analysing real-world phenomena without any (commitment to) theory. But — data never speaks for itself. Without a prior statistical set-up, there actually are no data at all to process. And — using a machine learning algorithm will only produce what you are looking for. Theory matters.

Causality in social sciences — and economics — can never solely be a question of statistical inference. Causality entails more than predictability, and really explaining social phenomena in depth requires theory. Analysis of variation — the foundation of all econometrics — can never in itself reveal how these variations are brought about. Only when we are able to tie actions, processes or structures to the statistical relations detected can we say that we are getting at relevant explanations of causation.

Most facts have many different, possible, alternative explanations, but we want to find the best of all contrastive (since all real explanation takes place relative to a set of alternatives) explanations. So which is the best explanation? Many scientists, influenced by statistical reasoning, think that the likeliest explanation is the best explanation. But the likelihood of x is not in itself a strong argument for thinking it explains y. I would rather argue that what makes one explanation better than another are things like aiming for and finding powerful, deep, causal features and mechanisms that we have warranted and justified reasons to believe in. Statistical reasoning — especially the variety based on a Bayesian epistemology — generally has no room for these kinds of explanatory considerations. The only thing that matters is the probabilistic relation between evidence and hypothesis. That is also one of the main reasons I find abduction — inference to the best explanation — a better description and account of what constitutes actual scientific reasoning and inference.

And even worse — some economists using statistical methods think that algorithmic formalisms somehow give them access to causality. That is, however, simply not true. Assuming ‘convenient’ things like ‘faithfulness’ or ‘stability’ is to assume what has to be proven. Deductive-axiomatic methods used in statistics do not produce evidence for causal inferences. The real causality we are searching for is the one existing in the real world around us. If there is no warranted connection between axiomatically derived statistical theorems and the real world, then we haven’t really obtained the causation we are looking for.
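Why ‘faithfulness’ is an assumption and not a fact can be shown with a standard textbook-style counterexample, sketched here in plain Python (the model and its coefficients are hypothetical, chosen only for illustration): X causes Z along two paths — directly, and via Y — and the two effects exactly cancel, so X and Z come out statistically independent even though X genuinely causes Z. A causal-discovery algorithm that assumes faithfulness would read the zero correlation as ‘no causal link’:

```python
import random

random.seed(0)
n = 100_000

# Structural model with two exactly cancelling causal paths:
#   X -> Z directly (coefficient +1.0), and X -> Y -> Z (Y = X, coefficient -1.0)
xs, zs = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = x                                        # X -> Y
    z = 1.0 * x - 1.0 * y + random.gauss(0, 1)   # the two paths cancel
    xs.append(x)
    zs.append(z)

mean_x = sum(xs) / n
mean_z = sum(zs) / n
cov = sum((a - mean_x) * (b - mean_z) for a, b in zip(xs, zs)) / n
print(f"cov(X, Z) = {cov:.3f}")  # close to zero, despite X causing Z
```

The sample covariance hovers around zero, so any procedure that equates statistical independence with causal absence is simply assuming away cases like this one — which is precisely the point: faithfulness has to be argued for on substantive, theoretical grounds, not stipulated.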

New Classical macroeconomists — people having their heads fuddled with nonsense

11 Feb, 2018 at 11:10 | Posted in Economics | 3 Comments

McNees documented the radical break between the 1960s and 1970s. The question is: what are the possible responses that economists and economics can make to those events?

One possible response is that of Professors Lucas and Sargent. They describe what happened in the 1970s in a very strong way with a polemical vocabulary reminiscent of Spiro Agnew. Let me quote some phrases that I culled from the paper: “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope” … I think that Professors Lucas and Sargent really seem to be serious in what they say, and in turn they have a proposal for constructive research that I find hard to talk about sympathetically. They call it equilibrium business cycle theory, and they say very firmly that it is based on two terribly important postulates — optimizing behavior and perpetual market clearing. When you read closely, they seem to regard the postulate of optimizing behavior as self-evident and the postulate of market-clearing behavior as essentially meaningless. I think they are too optimistic, since the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false. The assumption that everyone optimizes implies only weak and uninteresting consistency conditions on their behavior. Anything useful has to come from knowing what they optimize, and what constraints they perceive. Lucas and Sargent’s casual assumptions have no special claim to attention …

It is plain as the nose on my face that the labor market and many markets for produced goods do not clear in any meaningful sense. Professors Lucas and Sargent say after all there is no evidence that labor markets do not clear, just the unemployment survey. That seems to me to be evidence. Suppose an unemployed worker says to you “Yes, I would be glad to take a job like the one I have already proved I can do because I had it six months ago or three or four months ago. And I will be glad to work at exactly the same wage that is being paid to those exactly like myself who used to be working at that job and happen to be lucky enough still to be working at it.” Then I’m inclined to label that a case of excess supply of labor and I’m not inclined to make up an elaborate story of search or misinformation or anything of the sort. By the way I find the misinformation story another gross implausibility. I would like to see direct evidence that the unemployed are more misinformed than the employed, as I presume would have to be the case if everybody is on his or her supply curve of employment. Similarly, if the Chrysler Motor Corporation tells me that it would be happy to make and sell 1000 more automobiles this week at the going price if only it could find buyers for them, I am inclined to believe they are telling me that price exceeds marginal cost, or even that marginal revenue exceeds marginal cost, and regard that as a case of excess supply of automobiles. Now you could ask, why do not prices and wages erode and crumble under those circumstances? Why doesn’t the unemployed worker who told me “Yes, I would like to work, at the going wage, at the old job that my brother-in-law or my brother-in-law’s brother-in-law is still holding”, why doesn’t that person offer to work at that job for less? Indeed why doesn’t the employer try to encourage wage reduction? That doesn’t happen either. Why does the Chrysler Corporation not cut the price? 
Those are questions that I think an adult person might spend a lifetime studying. They are important and serious questions, but the notion that the excess supply is not there strikes me as utterly implausible.

Robert Solow

No unnecessary beating around the bush here.

The always eminently quotable Solow says it all.

The purported strength of New Classical macroeconomics is that it has firm anchorage in preference-based microeconomics, and especially in the decisions taken by inter-temporal utility maximizing ‘forward-looking’ individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics has to have microfoundations – without ever presenting either ontological or epistemological justifications for this claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.

That anyone should take that kind of stuff seriously is totally and unbelievably ridiculous. Or as Solow has it:

Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.

Robert Solow

Hierarchical models​ and clustered residuals (student stuff)

10 Feb, 2018 at 16:05 | Posted in Statistics & Econometrics | Comments Off on Hierarchical models​ and clustered residuals (student stuff)

 

Exaggerated and unjustified statistical​ claims

9 Feb, 2018 at 23:07 | Posted in Statistics & Econometrics | Comments Off on Exaggerated and unjustified statistical​ claims

 

And here’s another very stable GOP genius

9 Feb, 2018 at 15:40 | Posted in Politics & Society | 2 Comments

 
