Insignificant ‘statistical significance’

11 January, 2019 at 10:03 | Posted in Statistics & Econometrics | Leave a comment

We recommend dropping the NHST [null hypothesis significance testing] paradigm — and the p-value thresholds associated with it — as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, rather than allowing statistical significance as determined by p < 0.05 (or some other statistical threshold) to serve as a lexicographic decision rule in scientific publication and statistical decision making more broadly as per the status quo, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with the neglected factors [such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain] as just one among many pieces of evidence.

We make this recommendation for three broad reasons. First, in the biomedical and social sciences, the sharp point null hypothesis of zero effect and zero systematic error used in the overwhelming majority of applications is generally not of interest because it is generally implausible. Second, the standard use of NHST — to take the rejection of this straw man sharp point null hypothesis as positive or even definitive evidence in favor of some preferred alternative hypothesis — is a logical fallacy that routinely results in erroneous scientific reasoning even by experienced scientists and statisticians. Third, p-value and other statistical thresholds encourage researchers to study and report single comparisons rather than focusing on the totality of their data and results.

Andrew Gelman et al.

As shown over and over again when significance tests are applied, people have a tendency to read 'not disconfirmed' as 'probably confirmed.' Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more 'reasonable' to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10% result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.

We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!

In journal articles a typical regression equation will have an intercept and several explanatory variables. The regression output will usually include an F-test, with p − 1 degrees of freedom in the numerator and n − p in the denominator. The null hypothesis will not be stated. The missing null hypothesis is that all the coefficients vanish, except the intercept.

If F is significant, that is often thought to validate the model. Mistake. The F-test takes the model as given. Significance only means this: if the model is right and the coefficients are 0, it is very unlikely to get such a big F-statistic. Logically, there are three possibilities on the table:
i) An unlikely event occurred.
ii) Or the model is right and some of the coefficients differ from 0.
iii) Or the model is wrong.
So?
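To make the point concrete, here is a minimal simulation sketch (my own illustration, not from the quoted text). The fitted linear model is wrong by construction, yet the F-test rejects the all-coefficients-zero null with ease, which is exactly possibility iii):

```python
# Sketch: a 'significant' F-statistic from a misspecified model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.uniform(-3, 3, n)
y = np.exp(x) + rng.normal(0, 1, n)   # the true relation is nonlinear

X = sm.add_constant(x)                # intercept plus one explanatory variable
fit = sm.OLS(y, X).fit()              # the (wrong) linear model
print(f"F = {fit.fvalue:.1f}, p = {fit.f_pvalue:.2g}")
# The tiny p-value rejects 'all slopes are 0'. It says nothing about
# whether the linear specification itself is right.
```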

Handy missing data methodologies

10 January, 2019 at 19:16 | Posted in Statistics & Econometrics | 2 Comments

On October 13, 2012, Manny Fernandez reported in The New York Times that former El Paso schools superintendent Lorenzo Garcia was sentenced to prison for his role in orchestrating a testing scandal. The Texas Assessment of Knowledge and Skills (TAKS) is a state-mandated test for high-school sophomores. The TAKS missing data algorithm was to treat missing data as missing-at-random, and hence the score for the entire school was based solely on those who showed up. Such a methodology is so easy to game that it was clearly a disaster waiting to happen. And it did. The missing data algorithm used by Texas was obviously understood by school administrators; all aspects of their scheme were designed to keep potentially low-scoring students out of the classroom so they would not take the test and possibly drag scores down. Students identified as likely low performing “were transferred to charter schools, discouraged from enrolling in school, or were visited at home by truant officers and told not to go to school on test day.”

But it didn’t stop there. Some students had credits deleted from transcripts or grades changed from passing to failing so they could be reclassified as freshmen and avoid testing. Sometimes, students who were intentionally held back were allowed to catch up before graduation with “turbo-mesters,” in which a student could acquire the necessary credits for graduation in a few hours in front of a computer.
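A toy numerical illustration (mine, not from the quoted text) of why a missing-at-random assumption is so easy to game: if the weakest students are selectively kept away from the test, the average of those who show up is biased upward.

```python
# Sketch: selective absence inflates the mean of the tested group.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(70, 10, 10_000)      # hypothetical true score distribution
print(f"true mean: {scores.mean():.1f}")

# Administrators keep the weakest 20% of students home on test day;
# the scoring algorithm then averages only those who showed up.
shown_up = scores[scores > np.quantile(scores, 0.20)]
print(f"mean of those tested: {shown_up.mean():.1f}")   # biased upward
```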

Groundbreaking study shows parachutes do not reduce death when jumping from aircraft

10 January, 2019 at 16:00 | Posted in Statistics & Econometrics | 1 Comment

Parachute use compared with a backpack control did not reduce death or major traumatic injury when used by participants jumping from aircraft in this first randomized evaluation of the intervention. This largely resulted from our ability to only recruit participants jumping from stationary aircraft on the ground. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials evaluating their effectiveness could selectively enroll individuals with a lower likelihood of benefit, thereby diminishing the applicability of trial results to routine practice. Therefore, although we can confidently recommend that individuals jumping from small stationary aircraft on the ground do not require parachutes, individual judgment should be exercised when applying these findings at higher altitudes.

Robert W Yeh et al.

Yep — background knowledge sure is important when experimenting …

‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems, is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.

RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future and for policy purposes they are as a rule of limited value since they cannot tell us what background factors were held constant when the trial intervention was being made.

RCTs usually do not provide evidence that their results are exportable to other target systems. They cannot be taken for granted to give generalizable results. That something works somewhere for someone is no warrant for believing it will work for us here, or that it works generally.

What counts as evidence?

10 January, 2019 at 15:29 | Posted in Theory of Science & Methodology | Leave a comment

What counts as evidence? I suspect we tend to overweight some kinds of evidence, and underweight others.

Yeh’s paper is a lovely illustration of a general problem with randomized controlled trials – that they tell us how a treatment worked under particular circumstances, but are silent about its effects in other circumstances. They can lack external validity. Yeh shows that parachutes are useless for someone jumping from a plane when it is on the ground. But this tells us nothing about their value when the plane is in the air – which is an important omission.

We should place this problem with RCTs alongside two other Big Facts in the social sciences. One is the replicability crisis … The other (related) is the fetishization of statistical significance despite the fact that, as Deirdre McCloskey has said, it “has little to do with a defensible notion of scientific inference, error analysis, or rational decision making” and “is neither necessary nor sufficient for proving discovery of a scientific or commercially relevant result.”

If we take all this together, it suggests that a lot of conventional evidence isn’t as compelling as it seems. Which suggests that maybe the converse is true.

Stumbling and Mumbling

What is missing in Keynes’ General Theory

10 January, 2019 at 11:55 | Posted in Economics | 1 Comment

The cyclical succession of system states is not always clearly presented in The General Theory. In fact there are two distinct views of the business cycle, one a moderate cycle which can perhaps be identified with a dampened accelerator-multiplier cycle and the second a vigorous ‘boom and bust’ cycle … The business cycle in chapter 18 does not exhibit booms or crises …

In chapters 12 and 22, in the rebuttal to Viner, and in remarks throughout The General Theory, a vigorous cycle, which does have booms and crises, is described. However, nowhere in The General Theory or in Keynes’s few post-General Theory articles explicating his new theory are the boom and the crisis adequately defined or explained. The financial developments during a boom that make a crisis likely, if not inevitable, are hinted at but not thoroughly examined. This is the logical hole, the missing link, in The General Theory as it was left by Keynes in 1937 after his rebuttal to Viner … In order to appreciate the full potential of The General Theory as a guide to interpretation and understanding of modern capitalism, we must fill out what Keynes discussed in a fragmentary and casual manner.

The media’s one-sided reporting on the economy

9 January, 2019 at 08:25 | Posted in Economics | 3 Comments

When Swedish media want to know something about the economy, they ask Arturo Arques. He is the household economist of a big bank, and when the newspaper Flamman last week (4 Jan) examined which economists are quoted in 100 articles in the Swedish press, he turned out to be the profession’s most talkative star …

Six out of ten economists interviewed come from the business sector, above all from employers’ organisations, banks and insurance companies. That is ten times more than from the trade unions and their affiliated organisations, such as think tanks.

Such dominance naturally shapes the very idea of common sense, of what is possible, desirable and natural, and of what is odd and crazy. We also get, as the economist Lars Pålsson Syll tells Flamman, a false picture of a unanimous economics profession and, by extension, of economics as an almost natural-scientific discipline whose tenets cannot be questioned.

Sure, the economists may well disagree becomingly over details, but it is hardly a bold guess that not a single one of them advocates a four-day working week, zero growth, nationalisation of the big banks or confiscatory wealth taxes.

That is, after all, not what they are paid for.

Petter Larsson/Aftonbladet

Don’t forget about me

8 January, 2019 at 18:57 | Posted in Varia | 1 Comment

 

Låt storseglet gå

8 January, 2019 at 18:46 | Posted in Varia | Leave a comment

 

Why I am not a Bayesian

6 January, 2019 at 19:30 | Posted in Theory of Science & Methodology | 2 Comments

No matter how atheoretical their inclination, scientists are interested in relations between properties of phenomena, not in lists of readings from dials of instruments that detect those properties …

Here as elsewhere, Bayesian philosophy of science obscures a difference between scientists’ problems of hypothesis choice and the problems of prediction that are the standard illustrations and applications of probability theory. In the latter situations, such as the standard guessing games about coins and urns, investigators know an enormous amount about the reality they are examining, including the effects of different values of the unknown factor. Scientists can rarely take that much knowledge for granted. It should not be surprising if an apparatus developed to measure degrees of belief in situations of isolated and precisely regimented uncertainty turns out to be inaccurate, irrelevant or incoherent in the face of the latter, much more radical uncertainty.

Richard W. Miller

Although Bayesians think otherwise, to me there’s nothing magical about Bayes’ theorem. The important thing in science is to have strong evidence. If your evidence is strong, then applying Bayesian probability calculus is rather unproblematic. Otherwise — garbage in, garbage out. Applying Bayesian probability calculus to subjective beliefs founded on weak evidence is not a recipe for scientific progress. It is important not to equate science with statistical calculation or applied probability theory. All science entails human judgement, and using statistical models doesn’t relieve us of that necessity. Statistical models are no substitute for doing real science. Although Bayesianism has tried to extend formal deductive logic into real-world settings via probability theory, this is not a viable way forward for science. Choosing between theories and hypotheses can never be a question of inner coherence and consistency alone. Bayesian probabilism says absolutely nothing about reality.
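A small sketch (my own illustration) of the garbage-in, garbage-out point: Bayes’ theorem applied flawlessly to weak evidence simply hands you back your prior.

```python
# Sketch: with a likelihood ratio near 1 (weak evidence), the posterior
# from Bayes' theorem is essentially whatever prior you feed in.
def posterior(prior: float, likelihood_ratio: float) -> float:
    """P(H|E) computed in odds form from P(H) and P(E|H)/P(E|not-H)."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

weak_evidence = 1.1                     # barely favours the hypothesis
for prior in (0.05, 0.50, 0.95):
    print(f"prior {prior:.2f} -> posterior {posterior(prior, weak_evidence):.2f}")
```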

Rejecting probabilism, Popper not only rejects Carnap-style logic of confirmation, he denies scientists are interested in highly probable hypotheses … They seek bold, informative, interesting conjectures and ingenious and severe attempts to refute them.

Deborah Mayo

Cutting wages — the wrong medicine

6 January, 2019 at 16:20 | Posted in Economics | 1 Comment

A couple of years ago yours truly had a discussion with the chairman of the Swedish Royal Academy of Sciences (yes, the one that every year presents the winners of ‘The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel’). What started the discussion was the allegation that the level of employment in the long run is a result of people’s own rational intertemporal choices and that how much people work basically is a question of incentives.

Somehow the argument sounded familiar.

On being awarded the ‘Nobel prize’ in 2011, Thomas Sargent declared that workers ought to be prepared to accept lower unemployment compensation in order to have the right incentives to search for jobs. The Swedish right-wing finance minister at the time appreciated Sargent’s statement and declared it a “healthy warning” for those who wanted to increase compensation levels.

The view is symptomatic. As in the 1930s, more and more right-wing politicians — and economists — now suggest that lowering wages is the right medicine to strengthen the competitiveness of their faltering economies, get the economy going, increase employment and create growth that will get rid of towering debts and create balance in the state budgets.

But intimating that one can solve economic problems with wage cuts and impaired unemployment compensation in these dire times should really be taken as a sign of how low confidence in our economic system has sunk. Wage cuts and lower unemployment compensation save neither competitiveness nor jobs.

What is needed more than anything else is stimuli and economic policies that increase effective demand.

On a societal level wage cuts only increase the risk of more people becoming unemployed. To think that one can solve economic crises this way is to return to the faulty economic theories and policies that John Maynard Keynes conclusively showed to be wrong as early as the 1930s — theories and policies that made millions of people all over the world unemployed.

It’s an atomistic fallacy to think that a policy of general wage cuts would strengthen the economy. On the contrary. The aggregate effects of wage cuts would, as shown by Keynes, be catastrophic. They would start a cumulative spiral of lower prices that would make the real debts of individuals and firms increase, since the nominal debts wouldn’t be affected by the general price and wage decrease. In an economy that has increasingly come to rest on debt and borrowing, this would be the gateway to a debt-deflation crisis with decreasing investments and higher unemployment. In short, it would make depression knock on the door.
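The debt-deflation arithmetic is simple enough to show in a few lines (illustrative numbers only):

```python
# Sketch: nominal debt is fixed, so a general fall in wages and prices
# raises the real burden of the debt.
nominal_debt = 100_000                          # unchanged by deflation
price_before, price_after = 1.00, 0.90          # a 10% general deflation

real_before = nominal_debt / price_before
real_after = nominal_debt / price_after
print(f"real debt: {real_before:,.0f} -> {real_after:,.0f}")  # up ~11%
```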

The impending danger in today’s economies is that they won’t get consumption and investments going. Confidence and effective demand have to be reestablished. The problem of our economies is not on the supply side. Overwhelming evidence shows that the problem today is on the demand side. Demand is — to put it bluntly — simply not sufficient to keep the wheels of the economies turning. To suggest that the solution is lower wages and unemployment compensations is just to write out a prescription for even worse catastrophes.

Why is 0! = 1?

5 January, 2019 at 12:32 | Posted in Education & School | Leave a comment


The single most important factor behind successful education — from kindergarten to university — is, and has always been — having a good teacher!
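For the record, one standard argument for the convention runs as follows (a sketch; the clip above may take a different route):

```latex
% The factorial recursion n! = n(n-1)! forces the value of 0!:
% setting n = 1 gives 1! = 1 * 0!, hence 0! = 1.
\[
n! = n\,(n-1)! \quad\Longrightarrow\quad 1! = 1\cdot 0! \quad\Longrightarrow\quad 0! = 1.
\]
% Equivalently, 0! is an empty product, and an empty product equals 1,
% just as there is exactly one way to arrange zero objects.
```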

Erdogan — the most quickly offended president

4 January, 2019 at 21:53 | Posted in Politics & Society | Leave a comment

Erdoğan is the head of state who, worldwide, is either insulted most often or takes offence most quickly. Since his election, 68,817 people have been investigated for insulting the president; within three years nearly 13,000 trials were opened on that charge and more than 3,000 people were convicted …

Three months ago a beggar was arrested in Antalya for insulting the president. He was released when he made clear at the police station that his wife’s cousin is also called Erdoğan, and it was him he had been cursing, not the president …

The journalist Onur Erem wrote that if you search Google for “murderer and thief”, the autocomplete function suggests “Erdoğan” and “AKP”. For that report he was sentenced to eleven months and twenty days in prison. It would not surprise me if Google, too, were caught up in the epidemic because of these suggestions and convicted of insulting the president.

Can Dündar/Die Zeit

De dagar som blommorna blommar

4 January, 2019 at 21:14 | Posted in Varia | Leave a comment


The best thing Gardell has ever created. Brilliant. Beautiful. Shattering. Gripping. A masterpiece.

Those who talk about money

4 January, 2019 at 17:43 | Posted in Politics & Society | 3 Comments

Those who dominate Swedish economic reporting above all are economists from banks and insurance companies. That is what a review of articles in the Swedish daily press, carried out by Flamman, shows.

We come at things with different values and outlooks, which means we reach rather different conclusions. Trade union economists are quoted in only six per cent of the articles. Employers’ organisations in almost 20. And banks and insurance companies together account for over 40 per cent …

Best represented among Swedish companies in the material Flamman has gone through is Swedbank … Every year Swedbank and other banks spend large sums of money on communication. In Swedbank’s case, they call it “public education”.

But Ola Pettersson at LO rather calls it marketing, and argues that Swedbank’s dominance in Swedish economic coverage is quite simply the result of a targeted effort to be as visible as possible …

Lars Pålsson Syll is an economist and professor of social science at Malmö University … He believes the situation in Sweden is worse than abroad. Here, those whom Pålsson Syll calls mainstream economists are even more dominant than in other countries …
– Sweden is a small country, and there is simply no room for more than one opinion on very many economic questions. What you get are these standard answers, the standard views. And those of us who do research know that there are quite a few different views on the economy that never come through, and that gives the impression that economists are rather like physicists or natural scientists, standing on a common base and almost all thinking the same thing. That is just not how it is.

Flamman

Why Bayesianism has not resolved a single fundamental scientific dispute

3 January, 2019 at 23:10 | Posted in Economics, Theory of Science & Methodology | 3 Comments

Bayesian reasoning works, undeniably, where we know (or are ready to assume) that the process studied fits certain special though abstract causal structures, often called ‘statistical models’ … However, when we choose among hypotheses in important scientific controversies, we usually lack such prior knowledge of causal structures, or it is irrelevant to the choice. As a consequence, such Bayesian inference to the preferred alternative has not resolved, even temporarily, a single fundamental scientific dispute.

Mainstream economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules (preferably the ones axiomatized by Ramsey (1931), de Finetti (1937) or Savage (1954)) — that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately — via some “Dutch book” or “money pump” argument — susceptible to being ruined by some clever “bookie”.
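For readers unfamiliar with the “Dutch book” argument invoked here, a toy sketch (my own illustration): an agent whose degrees of belief in two complementary outcomes sum to more than 1 will accept a pair of bets that together guarantee a loss.

```python
# Sketch: incoherent degrees of belief are exploitable by a bookie.
p_rain, p_no_rain = 0.6, 0.6      # incoherent: the two beliefs sum to 1.2
stake = 1.0

# The agent values a ticket paying `stake` if its event occurs at
# belief * stake, so a bookie sells the agent both tickets at those prices.
paid = (p_rain + p_no_rain) * stake   # agent pays 1.2 in total
payoff = stake                        # exactly one ticket pays, whatever happens
print(f"guaranteed loss: {paid - payoff:.1f}")   # 0.2, rain or shine
```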

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs. But — even granting this questionable reductionism — do rational agents really have to be Bayesian? There are no strong warrants for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in the US is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to base any probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes and that these have to add up to 1 if you are rational. That is, in this case — and based on symmetry — a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities reflecting an absence of information. In these kinds of situations, most of us would rather say that it is simply irrational to be a Bayesian and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.

We live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but rational expectations. Sometimes we ‘simply do not know.’ There are no strong reasons why we should accept the Bayesian view of modern mainstream economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” As argued by Keynes, we rather base our expectations on the confidence or “weight” we put on different events and alternatives. Expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that standardly have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by mainstream economists.

The replicability crisis

3 January, 2019 at 16:07 | Posted in Statistics & Econometrics | Leave a comment

 

Economists telling fairy tales

3 January, 2019 at 00:02 | Posted in Economics | Leave a comment

An important point is implicit. Economics is not hard because of math. The math in even graduate level economics is no greater than in sophomore physics. Classical economics is hard because it can attack social problems in a value-free, cause-and-effect way, and upends the little morality stories that most people use to think about those problems — rents are high because landlords are greedy.

John Cochrane

Mainstream economists “attack social problems in a value-free way”? Only a Cato Institute fellow could come up with such nonsense!

20th anniversary of the euro — no reason for celebration

2 January, 2019 at 17:58 | Posted in Economics | 3 Comments

When the euro was created twenty years ago, it was celebrated with fireworks at the European Central Bank headquarters in Frankfurt. Today we know better. There are no reasons to celebrate the 20-year anniversary. On the contrary.

Ever since its start, the euro has been in crisis. And the crisis is far from over. The tough austerity measures imposed in the eurozone have made economy after economy contract. And they have made things worse not only in the periphery countries but also in countries like France and Germany. Alarming facts that should be taken seriously.

Europe may face a future of growing economic disparities where we will have to confront increasing hostility between nations and peoples. What we’ve seen lately in France shows that protests against technocratic attempts to undermine democracy may turn extremely violent.

The problems — created to a large extent by the euro — may not only endanger our economies, but also our democracy itself. How much whipping can democracy take? How many more are going to get seriously hurt and ruined before we end this madness and scrap the euro?

The euro has taken away the possibility for national governments to manage their economies in a meaningful way — and in country after country, the people have had to pay the true costs of the misguided austerity policies that come with it.

The unfolding of the repeated economic crises in euroland during the last decade has shown beyond any doubt that the euro is not only an economic project but just as much a political one. What the neoliberal revolution of the 1980s and 1990s didn’t manage to accomplish, the euro shall now force on us.

But do the peoples of Europe really want to deprive themselves of economic autonomy, enforce lower wages and slash social welfare at the slightest sign of economic distress? Are increasing income inequality and a federal überstate really the stuff that our dreams are made of? I doubt it.

History ought to act as a deterrent. During the 1930s our economies didn’t come out of the depression until the folly of that time — the gold standard — was thrown into the dustbin of history. The euro will hopefully soon join it.

Economists have a tendency to get enthralled by their theories and models and forget that behind the figures and abstractions there is a real world with real people. Real people that have to pay dearly for fundamentally flawed doctrines and recommendations.

General equilibrium theory — nonsense on stilts

2 January, 2019 at 14:49 | Posted in Economics | Leave a comment

General equilibrium is fundamental to economics on a more normative level as well. A story about Adam Smith, the invisible hand, and the merits of markets pervades introductory textbooks, classroom teaching, and contemporary political discourse. The intellectual foundation of this story rests on general equilibrium, not on the latest mathematical excursions. If the foundation of everyone’s favourite economics story is now known to be unsound — and according to some, uninteresting as well — then the profession owes the world a bit of an explanation.

Frank Ackerman

Almost a century and a half after Léon Walras founded general equilibrium theory, economists still have not been able to show that markets lead economies to equilibria.

We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient.

But after reading Frank Ackerman’s article — or Franklin M. Fisher’s The stability of general equilibrium – what do we know and why is it important? — one has to ask oneself — what good does that do?

As long as we cannot show convincing reasons to suppose there are forces leading economies to equilibria, the value of general equilibrium theory is nil. As long as we cannot demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — to move markets towards equilibria, there cannot be any sustainable reason for anyone to pay attention to this theory.

Stability that can only be proved by assuming Santa Claus conditions is of no avail. Most people do not believe in Santa Claus anymore. And for good reasons. Santa Claus is for kids, and general equilibrium economists ought to grow up, leaving their Santa Claus economics in the dustbin of history.

Continuing to model a world full of agents behaving as economists — “often wrong, but never uncertain” — while still not being able to show that the system converges to equilibrium under reasonable assumptions (or simply assuming the problem away) is a gross misallocation of intellectual resources and time. As Ackerman writes:

The guaranteed optimality of market outcomes and laissez-faire policies died with general equilibrium. If economic stability rests on exogenous social and political forces, then it is surely appropriate to debate the desirable extent of intervention in the market — in part, in order to rescue the market from its own instability.

Statistics is no substitute for thinking

2 January, 2019 at 14:44 | Posted in Statistics & Econometrics | Leave a comment

The cost of computing has dropped exponentially, but the cost of thinking is what it always was. That is why we see so many articles with so many regressions and so little thought.

Zvi Griliches
