Apparently, they fixed the insurance piece, and we’re right back on our merry money-creating ways, watching housing prices inflate as buyers get more and more money created by keystroke in the world financial sector …

]]>All mathematics is about mathematics alone, and has nothing directly to say about reality. Any claim that any particular mathematics is applicable to a particular real domain needs a proper scientific justification. In my experience it is always appropriate to apply Bayesian methods, but it is dangerous to suppose that the result is reliable, except where such a view has a reasonable justification.

In the Turkey example there is a further problem that ‘the probability that my hypothesis is false’ is not even well defined, and it is certainly an abuse of the mathematical version of Bayes’ theorem to apply it as Russell does.
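For reference, the mathematical version of Bayes’ theorem at issue relates a hypothesis H and data D:

```latex
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```

Applying it presupposes that the prior P(H) and the marginal P(D) are well defined, which is exactly what the commenter argues fails for a quantity like “the probability that my hypothesis is false”.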

]]>Remember that the first to use Bayesian theory properly was Laplace, in his studies of mechanics and planetary astronomy.

I think that Laplace would be rather stunned if he lived today and saw the heavy misuse of Bayesian methods in all sorts of “sciences”, from political science and ethnology to sociology and, foremost, macroeconomics.

The main point for every scientist in choosing methods, as I see it, should be first to ask oneself: does my method apply? If not, I should of course choose another one.

As C. Wright Mills once stated: as a social scientist you in many cases have to be your own methodologist, simply because in most areas that are of most interest there are no developed methodologies that apply.

For a critique of Bayesianism, see Andrew Gelman, “Objections to Bayesian statistics”, Bayesian Analysis 2008, Volume 3, Number 3, pp. 445-450:

“Bayesian inference is one of the more controversial approaches to statistics. The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference. This article presents a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. The article is intended to elicit elaborations and extensions of these and other arguments from non-Bayesians and responses from Bayesians who might have different perspectives on these issues.

The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings. Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically. In contrast, much of the work in modern non-Bayesian statistics is focused on developing methods that give reasonable answers using minimal assumptions.

The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge. Here the concern from outsiders is, first, that as scientists we should be concerned with objective knowledge rather than subjective belief, and second, that it’s not clear how to assess subjective knowledge in any case.

Beyond these objections is a general impression of the shoddiness of some Bayesian analyses, combined with a feeling that Bayesian methods are being oversold as an all-purpose statistical solution to genuinely hard problems. Compared to classical inference, which focuses on how to extract the information available in data, Bayesian methods seem to quickly move to elaborate computation. It does not seem like a good thing for a generation of statisticians to be ignorant of experimental design and analysis of variance, instead becoming experts on the convergence of the Gibbs sampler.

In the short term this represents a dead end, and in the long term it represents a withdrawal of statisticians from the deeper questions of inference and an invitation for econometricians, computer scientists, and others to move in and fill in the gap.

I find it clearest to present the objections to Bayesian statistics in the voice of a hypothetical anti-Bayesian statistician. I am imagining someone with experience in theoretical and applied statistics, who understands Bayes’ theorem but might not be aware of recent developments in the field. In presenting such a persona, I am not trying to mock or parody anyone but rather to present a strong, firm statement of attitudes that deserve serious consideration.

Here follows the list of objections from a hypothetical or paradigmatic non-Bayesian:

Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.

To put it another way, why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort!

As Brad Efron wrote in 1986, Bayesian theory requires a great deal of thought about the given situation to apply sensibly, and recommending that scientists use Bayes’ theorem is like giving the neighborhood kids the key to your F-16. I’d rather start with tried and true methods, and then generalize using something I can trust, such as statistical theory and minimax principles, that don’t depend on your subjective beliefs.

Especially when the priors I see in practice are typically just convenient conjugate forms. What a coincidence that, of all the infinite variety of priors that could be chosen, it always seems to be the normal, gamma, beta, etc., that turn out to be the right choices?
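For readers unfamiliar with conjugacy, the convenience being criticized can be seen in a small sketch of my own (the numbers are made up for illustration): with a Beta prior on a success probability and binomial data, the posterior is again a Beta, obtained by arithmetic alone, with no integration.

```python
# Beta-Binomial conjugate update: with a Beta(a, b) prior on a success
# probability p and k successes observed in n trials, the posterior is
# Beta(a + k, b + n - k). This closed form is why conjugate priors are
# so often chosen for convenience.

def beta_binomial_update(a, b, k, n):
    """Return the posterior Beta parameters after k successes in n trials."""
    return a + k, b + (n - k)

# A uniform Beta(1, 1) prior updated with 7 successes in 10 trials:
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
posterior_mean = a_post / (a_post + b_post)  # (1 + 7) / (2 + 10) = 2/3
```

The same one-line update works for the other conjugate pairs named in the quote (normal-normal, gamma-Poisson), which is the “coincidence” the critic is pointing at.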

To restate these concerns mathematically: I like unbiased estimates and I like confidence intervals that really have their advertised confidence coverage. I know that these aren’t always going to be possible, but I think the right way forward is to get as close to these goals as possible and to develop robust methods that work with minimal assumptions.

The Bayesian approach, giving up even trying to approximate unbiasedness and instead relying on stronger and stronger assumptions, seems like the wrong way to go.

In the old days, Bayesian methods at least had the virtue of being mathematically clean.

Nowadays, they all seem to be computed using Markov chain Monte Carlo, which means that, not only can you not realistically evaluate the statistical properties of the method, you can’t even be sure it’s converged, just adding one more item to the list of unverifiable (and unverified) assumptions. Computations for classical methods aren’t easy, running from nested bootstraps at one extreme to asymptotic theory on the other, but there is a clear goal of designing procedures with proper coverage, in contrast to Bayesian simulation, which seems stuck in an infinite regress of inferential uncertainty.
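The convergence worry can be made concrete with a toy example of my own (not from the article): a minimal random-walk Metropolis sampler, where the crudest convergence check is to run two chains from dispersed starting points and see whether they agree.

```python
import math
import random

def metropolis(n_steps, start, seed):
    """Random-walk Metropolis chain targeting a standard normal density."""
    rng = random.Random(seed)
    x, chain = start, []
    log_p = lambda v: -0.5 * v * v  # log density up to an additive constant
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, 1.0)          # symmetric proposal
        log_ratio = log_p(prop) - log_p(x)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = prop                             # accept the move
        chain.append(x)
    return chain

# Crude convergence check: chains started far apart should agree after
# discarding burn-in; if they do not, the output cannot be trusted.
chain_a = metropolis(20000, start=-10.0, seed=1)
chain_b = metropolis(20000, start=+10.0, seed=2)
mean_a = sum(chain_a[10000:]) / 10000
mean_b = sum(chain_b[10000:]) / 10000
```

Formal diagnostics such as the Gelman–Rubin statistic generalize this between-chain comparison, but, as the critic notes, agreement between chains is evidence of convergence, never proof.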

People tend to believe results that support their preconceptions and disbelieve results that surprise them. Bayesian methods encourage this undisciplined mode of thinking.

I’m sure that many individual Bayesian statisticians are acting in good faith, but they’re providing encouragement to sloppy and unethical scientists everywhere.

And, probably worse, Bayesian techniques motivate even the best-intentioned researchers to get stuck in the rut of prior beliefs.

As the applied statistician Andrew Ehrenberg wrote in 1986, Bayesianism assumes:

(a) either a weak or uniform prior, in which case why bother?; (b) or a strong prior, in which case why collect new data?; (c) or, more realistically, something in between, in which case Bayesianism always seems to duck the issue.

Nowadays people use a lot of empirical Bayes methods. I applaud the Bayesians’ newfound commitment to empiricism but am skeptical of this particular approach, which always seems to rely on an assumption of exchangeability.

In political science, people are embracing Bayesian statistics as the latest methodological fad. Well, let me tell you something.

The 50 states aren’t exchangeable. I’ve lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly.

Calling it a hierarchical or a multilevel model doesn’t change things; it’s an additional level of modeling that I’d rather not do. Call me old-fashioned, but I’d rather let the data speak without applying a probability distribution to something like the 50 states, which are neither random nor a sample.

So, don’t these empirical and hierarchical Bayes methods use the data twice? If you’re going to be Bayesian, then be Bayesian: it seems like a cop-out and contradictory to the Bayesian philosophy to estimate the prior from the data. If you want to do multilevel modeling, I prefer a method such as generalized estimating equations that makes minimal assumptions.
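To make the “using the data twice” complaint concrete, here is a minimal empirical-Bayes sketch of my own, with made-up numbers: the prior’s variance is estimated from the very group means it is then used to shrink.

```python
# Empirical Bayes sketch: fit a shared prior from the group means, then
# shrink each group estimate toward the overall mean. The "data used twice"
# step is that the prior's parameters come from the same observations the
# prior is then applied to. All numbers are illustrative.

group_means = [4.2, 5.1, 3.8, 6.0, 4.9]  # observed mean in each group
sampling_var = 0.5                        # assumed known within-group variance

grand_mean = sum(group_means) / len(group_means)
total_var = sum((m - grand_mean) ** 2 for m in group_means) / (len(group_means) - 1)
prior_var = max(total_var - sampling_var, 0.0)  # method-of-moments estimate

# Shrinkage factor: how strongly each group mean is pulled to the grand mean.
shrink = sampling_var / (sampling_var + prior_var) if prior_var > 0 else 1.0
shrunk = [grand_mean + (1 - shrink) * (m - grand_mean) for m in group_means]
```

Each shrunk estimate lies between its group mean and the grand mean; the exchangeability assumption the critic objects to is what licenses pooling the groups to estimate the prior at all.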

And don’t even get me started on what Bayesians say about data collection. The mathematics of Bayesian decision theory lead inexorably to the idea that random sampling and random treatment allocation are inefficient, that the best designs are deterministic.

I have no quarrel with the mathematics here; the mistake lies deeper, in the philosophical foundations, the idea that the goal of statistics is to make an optimal decision.

A Bayes estimator is a statistical estimator that minimizes the average risk, but when we do statistics, we’re not trying to “minimize the average risk,” we’re trying to do estimation and hypothesis testing. If the Bayesian philosophy of axiomatic reasoning implies that we shouldn’t be doing random sampling, then that’s a strike against the theory right there.
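The “average risk” claim can be stated precisely: under squared-error loss, the estimator minimizing the posterior expected loss is the posterior mean,

```latex
\hat{\theta}_{\mathrm{Bayes}}
  = \arg\min_{\hat{\theta}} \, \mathbb{E}\!\left[(\theta - \hat{\theta})^{2} \mid D\right]
  = \mathbb{E}[\theta \mid D],
```

so in this particular case the decision-theoretic optimum and a familiar point estimate coincide, which is partly why the two camps talk past each other.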

Bayesians also believe in the irrelevance of stopping times: that, if you stop an experiment based on the data, it doesn’t change your inference. Unfortunately for the Bayesian theory, the p-value does change when you alter the stopping rule, and no amount of philosophical reasoning will get you around that point.
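The stopping-rule point is easy to demonstrate by simulation (a sketch of my own, with illustrative numbers): testing after every new observation and stopping at the first nominal p < 0.05 inflates the false-positive rate well beyond 5%, even though the null hypothesis is true throughout.

```python
import math
import random

def peeks_until_significant(rng, max_n=100):
    """Under a true null (data are N(0,1)), run a z-test on the mean after
    each observation from n = 10 onward; return True if any test 'rejects'."""
    total = 0.0
    for n in range(1, max_n + 1):
        total += rng.gauss(0.0, 1.0)
        if n >= 10:
            z = (total / n) * math.sqrt(n)   # z-test with known variance 1
            if abs(z) > 1.96:                # nominal two-sided 5% level
                return True                  # stop early, declare an effect
    return False

rng = random.Random(0)
trials = 2000
false_positives = sum(peeks_until_significant(rng) for _ in range(trials))
rate = false_positives / trials  # substantially above the nominal 0.05
```

The Bayesian posterior, by contrast, is unchanged by the peeking, which is exactly the disagreement the commenter describes: one side sees that as a virtue, the other as a refutation.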

I can’t keep track of what all those Bayesians are doing nowadays. Unfortunately, all sorts of people are being seduced by the promises of automatic inference through the magic of MCMC, but I wish they would all just stop already and get back to doing statistics the way it should be done, back in the old days when a p-value stood for something, when a confidence interval meant what it said, and statistical bias was something to eliminate, not something to embrace.”

]]>“Math is the language of economics. If you are an NYU undergraduate, studying math will open doors to you in terms of interesting economics courses at NYU and job opportunities afterwards. Start with the basics: take three calculus courses (up to and including multivariable calculus), linear algebra, and a good course in probability and statistics. These basic courses will empower you. After you have these under your belt, you have many interesting options all of which will further empower you to learn and practice economics. I especially recommend courses in (1) Markov chains and stochastic processes, and (2) differential equations.

Superb economists at NYU (e.g., Robert Engle, Xavier Gabaix, Stanley Zin, Jess Benhabib, Douglas Gale, Boyan Jovanovic, David Pearce, Debraj Ray, Ennio Stacchetti, Charles Wilson, and others) have made notable contributions to economics partly because they are very creative but also because they studied more math than others.

My personal opinion is that if you are an undergraduate at Stern or the NYU CAS economics department and you are seriously interested in learning as much rigorous economics as you can at NYU, you will be much better off taking one or two additional math and statistics courses rather than spending time and credits writing an undergraduate honors thesis. This will also look better on your transcript if you plan to apply to graduate school…”

http://www.tomsargent.com/math_courses.html

No doubt the prescription for more superb thinking, widely and deeply informed, about our economic problems in the making.

]]>First do no harm.

I also agree with the other poster, Rob Reno, that there are different markets. I claim financial markets are not subject to assumed constraints imposed by economists imagining physical widgets. Finance has figured out how to relax budget constraints by creating new credit that circulates as money today as if the promise to pay will be kept; when the promise comes due, finance has many ways to roll over or forgive loans, or pay from insurance. Insurance can pay based on future promises circulating as money today. When those future promises come due, the same finance mechanisms deal with default by rolling over loans, forgiving them, or insuring against loss … thus the endless cycle of putting off final settlement for another day continues.

]]>This is the first time I have seen an explicit statement of a thought-intuition I have been entertaining since I began my deep dive into economics. Since I follow Lars on RWER, I have seen posters speak as though the “market” is some single natural entity. This has never sat well with my own experience after disarticulating a specific market (see: https://rwer.wordpress.com/2018/01/26/branko-milanovic-and-the-hypocrites-of-the-world-economic-forum/#comment-131036). My view for now is that there are many different kinds of markets created by many different kinds of actors. A farmers market is not the same as, say, the supply chain created by lead big-tech companies and their “third party vendors” who provide a “contingent” workforce, and the multilayered supply chains of the “third party vendors” who then use RPOs, etc.

Each market must be dealt with (disarticulated) on its own merits. If some share certain similarities, these will emerge with time. But this reified use of the term “market”, in my view, hides more than it reveals and is often used to conceal manipulative and deceptive supply chains and corporate shenanigans.

]]>.

This contrast would make some sense if actual differences of method or approach could be explicitly made out. But if such a contrast cannot be adduced, maybe it is time to give up the metaphor altogether.

.

Perhaps, it would be possible to talk of observation, measurement and — brace yourself! — interpretation? These pedestrian activities depend on using analytic theories and operational models to interact methodically with the phenomenon that is the subject of study.

.

Talking about export warrants and target populations makes it seem as if there are separable realms of . . . what? thought? experience? . . . from which we launch idea missiles. Maybe that is a model of knowledge generation: if so, I would like to see it illustrated by an example of, if not formally operationalized and tested against, some undoubted instance of learning, to see if it adds anything to our understanding of what it is that we do to learn something about the world.

.

Extrapolation is an estimation of a value based on extending or projecting a known sequence of values or facts beyond the area or range that is certainly known. As such, it seems to me that it is in the nature of hypothesis rather than inference, per se. The first doubt arises in me from the presumption that some measured value is really suitable to be treated as a “fact” about the world. In economics, it is common for economists to reason about stylized “facts” that are really just vague generalizations grounded in misconceptions and the accidents of local or recent experience. Should we treat “elasticity of demand” or the macroeconomic “multiplier” as if an estimate were a fact and not just an ephemeral accounting artifact? The pseudo-physics that infects economics gave the name, elasticity, to suggest a fact akin to physical facts of metallurgy, where the elasticity of a metal alloy can be measured methodically and the estimate reflects a property of the metal. When the engineer relies on elasticity estimates for a metal, he is treating as fact not the measurement per se, but the implied property of the metal.

I think an economist would be a fool to treat any estimate of the elasticity of demand or a multiplier for fiscal stimulus spending as if a reliable and stable property of an object in nature were being represented. The implied ontology is wrong. In metallurgy, the metal alloy being measured exists. It is not clear to me that demand for widgets, or aggregate demand in a macroeconomic sense, even “exists” in any sense faintly analogous to the existence of a particular metal alloy. To be sure, our understanding of what it means to be a particular metal alloy was itself a product of the study of metallurgy; the facts of an alloy and its properties are more than a singular estimate.

This problem of extrapolation in social science hides a problem of ontology.

]]>