The gender pay gap and discrimination

16 Sep, 2022 at 15:26 | Posted in Economics

Spending a couple of hours going through a JEL survey of modern research on the gender wage gap, yours truly was struck almost immediately by how little that research really has accomplished in terms of explaining gender wage discrimination. With all the heavy regression and econometric alchemy used, wage discrimination is somehow more or less conjured away …

Trying to reduce the risk of having established only ‘spurious relations’ when dealing with observational data, statisticians and econometricians standardly add control variables. The hope is that one thereby will be able to make more reliable causal inferences. But if you do not manage to get hold of all potential confounding factors, the model risks producing estimates of the variable of interest that are even worse than models without any control variables at all. Conclusion: think twice before you simply include ‘control variables’ in your models!

That women work in different areas than men, have different educations than men, etc., etc., is not only the result of ‘free choices’ causing a gender wage gap, but is itself, to a large degree, a consequence of discrimination.

The gender pay gap is a fact that, sad to say, to a non-negligible extent is the result of discrimination. And even though many women are not deliberately discriminated against, but rather ‘self-select’ (sic!) into lower-wage jobs, this in no way magically explains away the discrimination gap. As decades of socialization research have shown, women may be ‘structural’ victims of impersonal social mechanisms that in different ways aggrieve them.

Looking at wage discrimination from a graph-theoretical point of view, one can arguably identify three paths between gender discrimination (D) and wages (W):

  1. D => W
  2. D => OCC => W
  3. D => OCC <= A => W

where occupation (OCC) is a mediator variable and unobserved ability (A) is a variable that affects both occupational choice and wages. The usual way to find out the effect of discrimination on wages is to perform a regression ‘controlling’ for OCC to get what one considers a ‘meaningful’ estimate of real gender wage discrimination:

W = a + bD + cOCC

The problem with this procedure is that conditioning on OCC not only closes the mediation path (2) but — since OCC is a ‘collider’ — opens up the backdoor path (3) and creates a spurious and biased estimate. Forgetting that may even result in the gender discrimination effect being positively related to wages! So if we want to go down the standard path (controlling for OCC) we certainly also have to control for A if we want to have a chance of identifying the causal effect of gender discrimination on wages. And that may, of course, be tough going, since A often (as here) is unobserved and perhaps even unobservable …
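To see how badly this can go wrong, here is a minimal simulation sketch of the three paths above (all coefficient values are hypothetical, chosen only to illustrate the collider problem; `ols` is a small helper around numpy's least squares):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical data-generating process mirroring paths (1)-(3):
D = rng.binomial(1, 0.5, n).astype(float)                # discrimination (exposure)
A = rng.normal(size=n)                                   # unobserved ability
OCC = -1.0 * D + 2.0 * A + rng.normal(size=n)            # occupation: mediator on (2), collider on (3)
W = -1.0 * D + 1.0 * OCC + 3.0 * A + rng.normal(size=n)  # wages: direct effect of D is -1

def ols(y, *regressors):
    """OLS with intercept; returns the fitted coefficient vector."""
    X = np.column_stack([np.ones_like(y), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print(ols(W, D)[1])          # ~ -2.0 : the true total effect of D on W
print(ols(W, D, OCC)[1])     # ~ +0.2 : 'controlling' for OCC flips the sign!
print(ols(W, D, OCC, A)[1])  # ~ -1.0 : the direct effect, recoverable only when A is observed
```

The middle regression is exactly the standard procedure above: because conditioning on the collider OCC opens the backdoor path through A, the estimated 'discrimination effect' is not merely attenuated but comes out with the wrong sign.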

How to teach econometrics

16 Sep, 2022 at 15:09 | Posted in Statistics & Econometrics

One way to change the sad state of econometrics — an industry with a huge output but no sales — would perhaps be to follow Ed Leamer’s advice on how to teach it …

To take my students on a journey toward real intelligence, I have given them the following assignment: Your boss walks into your office and says, “I read in the newspaper this morning that interest rates are going to increase this year. Is this right, and do I need to worry about it? I want you to give a presentation to the Board in four weeks.” OMG! What to do; what to do?? These students will need to know regression assumptions and regression properties, but if all they know is mathematical theorems regarding properties of estimators, they will have a hard time taking the first and most important step, which is to better define the question. What’s the industry? What is the time frame? What are the decision options? After that comes a search of the Web for theory and data. And then the data analysis, not carried out by preprogrammed rules, but designed to fit the special circumstances.

Uncertainty

15 Sep, 2022 at 13:47 | Posted in Economics

It may be argued … that the betting quotient and credibility are substitutable in the same sense in which two commodities are: less bread but more meat may leave the consumer as well off as before. If this were so, then clearly expectation could be reduced to a unidimensional concept … However, the substitutability of consumers’ goods rests upon the tacit assumption that all commodities contain something — called utility — in a greater or less degree; substitutability is therefore another name for compensation of utility. The crucial question in expectation then is whether credibility and betting quotient have a common essence so that compensation of this common essence would make sense.

Just as Keynes underlined with his concept of ‘weight of argument,’ Georgescu-Roegen, with his similar concept of ‘credibility,’ underscores the impossibility of reducing uncertainty to risk, and thereby of describing choice under uncertainty with a unidimensional probability concept.

In ‘modern’ macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical, and New ‘Keynesian’ — variables are treated as if drawn from a known ‘data-generating process’ that unfolds over time and on which we, therefore, have access to heaps of historical time-series. If we do not assume that we know the ‘data-generating process’ – if we do not have the ‘true’ model – the whole edifice collapses. And of course, it has to. Who honestly believes that we should have access to this mythical Holy Grail, the data-generating process?

‘Modern’ macroeconomics obviously did not anticipate the enormity of the problems that unregulated ‘efficient’ financial markets created. Why? Because it builds on the myth of us knowing the ‘data-generating process’ and that we can describe the variables of our evolving economies as drawn from an urn containing stochastic probability functions with known means and variances.

This is like saying that you are going on a holiday trip, that you know the chance of sunny weather is at least 30%, and that this is enough for you to decide whether or not to bring your sunglasses. You are supposed to be able to calculate the expected utility based on the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.

But as both Georgescu-Roegen and Keynes convincingly argued, this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
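A minimal sketch shows how quickly the expected-utility calculus runs out of road once ‘the’ probability is replaced by two rival model assessments (the utility numbers below are entirely made up for illustration):

```python
def expected_utilities(p_sun, u_bring=(1.0, -0.2), u_leave=(-0.5, 0.0)):
    """Hypothetical payoffs (if sunny, if not) for bringing vs leaving the sunglasses."""
    eu_bring = p_sun * u_bring[0] + (1 - p_sun) * u_bring[1]
    eu_leave = p_sun * u_leave[0] + (1 - p_sun) * u_leave[1]
    return eu_bring, eu_leave

for p in (0.10, 0.40):  # two 'equally good' weather models
    bring, leave = expected_utilities(p)
    print(f"p(sun) = {p:.2f}: EU(bring) = {bring:+.2f}, EU(leave) = {leave:+.2f}")
# p = 0.10 says leave the sunglasses at home; p = 0.40 says bring them.
```

With a single known probability, the decision is trivial arithmetic. With two equally credible models that disagree, and no probability distribution over the models themselves, no unique ‘rational’ choice falls out. That is the difference between risk and uncertainty.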

In the end, this is what it all boils down to. We all know many activities, relations, processes, and events are of the Georgescu-Roegen-Keynesian uncertainty type. The data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Some macroeconomists, however, still want to be able to use their hammer. So they decide to pretend that the world looks like a nail and that uncertainty can be reduced to risk, and they construct their mathematical models on that assumption. The result: financial crises and economic havoc.

How much better — how much smaller the risk that we lull ourselves into the comforting thought that we know everything, that everything is measurable, and that we have everything under control — if we could instead just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.

Fooling people into believing that one can cope with an unknown economic future the way one plays at the roulette wheel is a sure recipe for only one thing — economic disaster.

Causal inference — what the machine cannot learn

12 Sep, 2022 at 16:55 | Posted in Statistics & Econometrics


The central problem with the present ‘machine learning’ and ‘big data’ hype is that so many — falsely — think that they can get away with analyzing real-world phenomena without any (commitment to) theory. But — data never speaks for itself. Without a prior statistical set-up, there actually are no data at all to process.

Clever data-mining tricks are not enough to answer important scientific questions. Theory matters.
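A toy example of why theory matters: give a pattern-search enough candidate variables and it will always ‘find’ something, even in pure noise (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_candidates = 50, 1000

y = rng.normal(size=n_obs)                  # 'outcome': pure noise
X = rng.normal(size=(n_candidates, n_obs))  # 1000 candidate 'predictors': also pure noise

corrs = np.array([np.corrcoef(x, y)[0, 1] for x in X])
print(f"best |correlation| found: {np.abs(corrs).max():.2f}")  # typically ~0.45, and meaningless
```

Without a theory telling us which of the thousand candidates could matter, and why, the ‘discovery’ is just the expected maximum of a thousand draws from noise.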

If we wanted highly probable claims, scientists would stick to low-level observables and not seek generalizations, much less theories with high explanatory content. In this day of fascination with Big data’s ability to predict what book I’ll buy next, a healthy Popperian reminder is due: humans also want to understand and to explain. We want bold ‘improbable’ theories. I’m a little puzzled when I hear leading machine learners praise Popper, a realist, while proclaiming themselves fervid instrumentalists. That is, they hold the view that theories, rather than aiming at truth, are just instruments for organizing and predicting observable facts. It follows from the success of machine learning, Vladimir Cherkassky avers, that “realism is not possible.” This is very quick philosophy!

Quick indeed!

The danger of teaching the wrong thing all too well

11 Sep, 2022 at 12:46 | Posted in Economics

It is well known that even experienced scientists routinely misinterpret p-values in all sorts of ways, including confusion of statistical and practical significance, treating non-rejection as acceptance of the null hypothesis, and interpreting the p-value as some sort of replication probability or as the posterior probability that the null hypothesis is true …

It is shocking that these errors seem so hard-wired into statisticians’ thinking, and this suggests that our profession really needs to look at how it teaches the interpretation of statistical inferences. The problem does not seem just to be technical misunderstandings; rather, statistical analysis is being asked to do something that it simply can’t do, to bring out a signal from any data, no matter how noisy. We suspect that, to make progress in pedagogy, statisticians will have to give up some of the claims we have implicitly been making about the effectiveness of our methods …

It would be nice if the statistics profession was offering a good solution to the significance testing problem and we just needed to convey it more clearly. But, no, … many statisticians misunderstand the core ideas too. It might be a good idea for other reasons to recommend that students take more statistics classes—but this won’t solve the problems if textbooks point in the wrong direction and instructors don’t understand what they are teaching. To put it another way, it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well.

Andrew Gelman & John Carlin
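Their point about asking statistics to ‘bring out a signal from any data, no matter how noisy’ can be made concrete with what Gelman and Carlin elsewhere call Type M (magnitude) and Type S (sign) errors. A sketch, with made-up numbers for a small true effect measured by a noisy design:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect, se, n_sims = 0.2, 0.5, 100_000   # hypothetical: small effect, noisy study

est = rng.normal(true_effect, se, n_sims)     # sampling distribution of the estimate
significant = np.abs(est / se) > 1.96         # the usual 5% significance filter

print(significant.mean())                # power: only ~ 0.07
print(np.abs(est[significant]).mean())   # ~ 1.2 : significant estimates exaggerate ~6-fold (Type M)
print((est[significant] < 0).mean())     # ~ 0.13: a non-trivial share even get the sign wrong (Type S)
```

Conditioning on statistical significance in a noisy design guarantees that the published ‘signal’ is a gross exaggeration; no tinkering with p-value thresholds changes that.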

Teaching both statistics and economics, yours truly can’t but notice that the statements “give up some of the claims we have implicitly been making about the effectiveness of our methods” and “it’s not that we’re teaching the right thing poorly; unfortunately, we’ve been teaching the wrong thing all too well” obviously apply not only to statistics …

And the solution? Certainly not — as Gelman and Carlin also underline — to reform p-values. Instead, we have to accept that we live in a world permeated by genuine uncertainty and that it takes a lot of variation to make good inductive inferences.

Sounds familiar? It definitely should!

The standard view in statistics — and the axiomatic probability theory underlying it — is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues in his seminal A Treatise on Probability (1921), ‘more of the same’ is not what is important when making inductive inferences. It’s a question of ‘more but different’ — i.e., variation.

Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) doesn’t make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) another evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as when running 10 000 varied experiments — even if the probability values are the same.
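In the notation of the Treatise — where, roughly, V(x|y) denotes the weight of the argument from evidence y to conclusion x — the point can be put compactly (a paraphrase, not a quotation):

$$p(x \mid y) = p(x \mid y \wedge w) \quad \text{and yet} \quad V(x \mid y \wedge w) > V(x \mid y)$$

The probability assigned to x is untouched by the new evidence w, but the weight, i.e. the evidential basis on which we hold that probability, has increased.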

According to Keynes, we live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by ‘modern’ social sciences. And often we ‘simply do not know.’

AfD — Moscow's dumbest column

10 Sep, 2022 at 09:19 | Posted in Varia


Teachers — people who change your life

10 Sep, 2022 at 08:53 | Posted in Education & School


Econometric fundamentalism

8 Sep, 2022 at 19:36 | Posted in Statistics & Econometrics

The wide conviction of the superiority of the methods of the science has converted the econometric community largely to a group of fundamentalist guards of mathematical rigour. It is often the case that mathematical rigour is held as the dominant goal and the criterion for research topic choice as well as research evaluation, so much so that the relevance of the research to business cycles is reduced to empirical illustrations. To that extent, probabilistic formalization has trapped econometric business cycle research in the pursuit of means at the expense of ends.


Once the formalization attempts have gone significantly astray from what is needed for analysing and forecasting the multi-faceted characteristics of business cycles, the research community should hopefully make appropriate ‘error corrections’ of its overestimation of the power of a priori postulated models as well as its underestimation of the importance of the historical approach, or the ‘art’ dimension of business cycle research.

Duo Qin

Remembering Mikhail Gorbachev

7 Sep, 2022 at 21:06 | Posted in Politics & Society


Tuesday Afternoon

7 Sep, 2022 at 18:56 | Posted in Varia


The Lenin Prize? No thanks!

6 Sep, 2022 at 13:14 | Posted in Politics & Society

The prize has been awarded fourteen times, and only three times has anyone declined it: Göran Palm (2012, when it instead went to Sven Lindqvist), Susanna Alakoski (2013, who did not want to be associated with the dictator Lenin), and Åsa Linderborg (2016, when Myrdal went against the board's choice of Linderborg, who then declined).

All the other laureates, among them Mattias Gardell, Sven Wollter, Sven Lindqvist, Mikael Wiehe, Roy Anderson, and, in recent years, the author Nina Björk and the journalist Kajsa Ekis Ekman, have not turned up their noses at the prize.

Nina Solomin/SvD

Econometric FUQs

5 Sep, 2022 at 11:20 | Posted in Statistics & Econometrics

If you can’t devise an experiment that answers your question in a world where anything goes, then the odds of generating useful results with a modest budget and nonexperimental survey data seem pretty slim. The description of an ideal experiment also helps you formulate causal questions precisely. The mechanics of an ideal experiment highlight the forces you’d like to manipulate and the factors you’d like to hold constant.

Research questions that cannot be answered by any experiment are FUQs: fundamentally unidentified questions.

Joshua Angrist & Jörn-Steffen Pischke, Mostly Harmless Econometrics

One of the limitations of economics is the restricted possibility of performing experiments, which forces it to rely mainly on observational studies for knowledge of real-world economies.

But still — the idea of performing laboratory experiments holds a firm grip on our wish to discover (causal) relationships between economic ‘variables.’ If only we could isolate and manipulate variables in controlled environments, we would probably find ourselves in a situation where we could, with greater ‘rigour’ and ‘precision,’ describe, predict, or explain economic happenings in terms of ‘structural’ causes, ‘parameter’ values of relevant variables, and economic ‘laws.’

Galileo Galilei’s experiments are often held up as exemplary of how to perform experiments to learn something about the real world. Galileo’s heavy balls dropped from the tower of Pisa confirmed that the distance an object falls is proportional to the square of time, and that this law (empirical regularity) of falling bodies is applicable outside a vacuum tube when, e.g., air resistance is negligible.

The big problem is to decide or find out exactly for which objects air resistance (and other potentially ‘confounding’ factors) is ‘negligible.’ In the case of heavy balls, air resistance is obviously negligible, but how about feathers or plastic bags?
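A toy simulation makes the negligibility question tangible. The sketch below (a crude Euler integration with linear air drag; the mass, drag coefficient, and drop height are all made-up numbers) compares Galileo's vacuum law with drag-affected falls:

```python
def fall_time(mass, drag, height=50.0, g=9.8, dt=1e-4):
    """Seconds to fall `height` metres under gravity with linear air drag k*v."""
    v = s = t = 0.0
    while s < height:
        v += (g - (drag / mass) * v) * dt   # m*dv/dt = m*g - k*v
        s += v * dt
        t += dt
    return t

vacuum_t = (2 * 50.0 / 9.8) ** 0.5       # Galileo's law: s = (1/2)*g*t**2  =>  t = sqrt(2s/g)
print(vacuum_t)                          # ~ 3.19 s
print(fall_time(mass=5.0, drag=0.05))    # heavy ball: ~ 3.2 s, air resistance negligible
print(fall_time(mass=0.005, drag=0.05))  # 'feather':  ~ 51 s, drag is the whole story
```

For the heavy ball the negligibility assumption buys us Galileo's law almost exactly; for the feather it fails completely, and nothing in the law itself tells us where that boundary runs.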

One possibility is to take the all-encompassing-theory road and find out everything about the possible disturbing/confounding factors — not only air resistance — influencing the fall, and build that into one great model delivering accurate predictions on what happens when the falling object is not only a heavy ball but also a feather or a plastic bag. This usually amounts to ultimately stating some kind of ceteris paribus interpretation of the ‘law.’

Another road to take would be to concentrate on the negligibility assumption and specify the domain of applicability to be only heavy compact bodies. The price you have to pay for this is that (1) ‘negligibility’ may be hard to establish in open real-world systems, (2) the generalization you can make from ‘sample’ to ‘population’ is heavily restricted, and (3) you actually have to use some ‘shoe leather’ and empirically find out how large the ‘reach’ of the ‘law’ is.

In mainstream economics, one has usually settled for the ‘theoretical’ road (and in case you think the present ‘natural experiments’ hype has changed anything, remember that to mimic real experiments, exceedingly stringent special conditions standardly have to obtain).

In the end, it all boils down to one question — are there any Galilean ‘heavy balls’ to be found in economics, so that we can indisputably establish the existence of economic laws operating in real-world economies?

As far as I can see there are some heavy balls out there, but not even one single real economic law.

Economic factors/variables are more like feathers than heavy balls — it is hard to rule out that non-negligible factors (like air resistance and chaotic turbulence) influence the object studied.

Galilean experiments are hard to carry out in economics, and the theoretical ‘analogue’ models economists construct and in which they perform their ‘thought experiments’ build on assumptions that are far away from the kind of idealized conditions under which Galileo performed his experiments. The ‘nomological machines’ that Galileo and other scientists have been able to construct have no real analogues in economics. The stability, autonomy, modularity, and interventional invariance that we may find between entities in nature simply are not there in real-world economies. That’s a real-world fact, and contrary to the beliefs of most mainstream economists, it won’t go away simply by applying deductive-axiomatic economic theory with tons of more or less unsubstantiated assumptions.

By this, I do not mean to say that we have to discard all (causal) theories/laws building on modularity, stability, invariance, etc. But we have to acknowledge the fact that outside the systems that possibly fulfil these requirements/assumptions, they are of little substantial value. Running paper-and-pen experiments on artificial ‘analogue’ model economies is a sure way of ‘establishing’ (causal) economic laws or solving intricate econometric problems of autonomy, identification, invariance, and structural stability — in the model world. But they are mere substitutes for the real thing and don’t have much bearing on what goes on in real-world open social systems. Setting up convenient circumstances for conducting Galilean experiments may tell us a lot about what happens under those kinds of circumstances. But few, if any, real-world social systems are ‘convenient.’ So most of those systems, theories, and models are irrelevant for letting us know what we really want to know.

To solve, understand, or explain real-world problems you actually have to know something about them — logic, pure mathematics, data simulations, or deductive axiomatics don’t take you very far. Most econometrics and economic theories/models are splendid logic machines. But applying them to the real world is a totally hopeless undertaking! The assumptions one has to make in order to successfully apply these deductive-axiomatic theories/models/machines are devastatingly restrictive and mostly empirically untestable — and hence make their real-world scope ridiculously narrow. To fruitfully analyze real-world phenomena with models and theories, you cannot build on assumptions that are patently absurd and known to be false. No matter how much you would like the world to consist entirely of heavy balls, the world is not like that. The world also has its fair share of feathers and plastic bags.

Most of the ‘idealizations’ we find in mainstream economic models are not ‘core’ assumptions, but rather structural ‘auxiliary’ assumptions. Without those supplementary assumptions, the core assumptions deliver next to nothing of interest. So to come up with interesting conclusions you have to rely heavily on those other — ‘structural’ — assumptions.

In physics, we have theories and centuries of experience and experiments that show how gravity makes bodies move. In economics, we know there is nothing equivalent. So instead mainstream economists necessarily have to load their theories and models with sets of auxiliary structural assumptions to get any results at all in their models.

So why then do mainstream economists keep on pursuing this modelling project?

Mainstream ‘as if’ models are based on the logic of idealization and a set of tight axiomatic and ‘structural’ assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the assumptions are true, the conclusions necessarily follow. But it is a poor guide for real-world systems.

The way axioms and theorems are formulated in mainstream economics often leaves their specification almost without any restrictions whatsoever, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream ‘thought experimental’ activities, it may, of course, be very ‘handy,’ but it is totally void of any empirical value.

Some economic methodologists have lately been arguing that economic models may well be considered ‘minimal models’ that portray ‘credible worlds’ without having to care about things like similarity, isomorphism, simplified ‘representationality,’ or resemblance to the real world. These models are said to resemble ‘realistic novels’ that portray ‘possible worlds.’ And sure: economists constructing and working with those kinds of models learn things about what might happen in those ‘possible worlds.’ But is that really the stuff real science is made of? I think not. As long as one doesn’t come up with credible export warrants to real-world target systems and show how those models — often built on idealizations with assumptions known to be false — enhance our understanding or explanations of the real world, they are nothing more than novels. Showing that something is possible in a ‘possible world’ doesn’t give us a justified license to infer that it is therefore also possible in the real world. ‘The Great Gatsby’ is a wonderful novel, but if you truly want to learn about what goes on in the world of finance, I would rather recommend reading Minsky or Keynes and directly confronting real-world finance.

Different models have different cognitive goals. Constructing models that aim for explanatory insight may not make them optimal for (quantitative) prediction or for delivering some other kind of ‘understanding’ of what’s going on in the intended target system. All modelling in science involves tradeoffs. There simply is no ‘best’ model. For one purpose in one context, model A is ‘best’; for other purposes and contexts, model B may be deemed ‘best.’ Depending on the level of generality, abstraction, and depth, we come up with different models. But even so, I would argue that if we are looking for what I have called ‘adequate explanations’ (Syll, Ekonomisk teori och metod, Studentlitteratur, 2005), it is not enough to just come up with ‘minimal’ or ‘credible world’ models.

The assumptions and descriptions we use in our modelling have to be true — or at least ‘harmlessly’ false — and give a sufficiently detailed characterization of the mechanisms and forces at work. Models in mainstream economics do nothing of the kind.

Coming up with models that show how things may possibly be explained is not what we are looking for. It is not enough. We want models that build on assumptions that are not in conflict with known facts and that show how things actually are to be explained. Our aspirations have to be more far-reaching than just constructing coherent and ‘credible’ models about ‘possible worlds.’ We want to understand and explain ‘difference-making’ in the real world and not just in some made-up fantasy world.

No matter how many mechanisms or coherent relations you represent in your model, you still have to show that these mechanisms and relations are at work and exist in society if we are to do real science. Science has to be something more than just more or less realistic ‘story-telling’ or ‘explanatory fictionalism.’ You have to provide decisive empirical evidence that what you can infer in your model also helps us to uncover what actually goes on in the real world. It is not enough to present your students with epistemically informative insights about logically possible but non-existent general equilibrium models.

You also, and more importantly, have to have a world-linking argumentation and show how those models explain or teach us something about real-world economies. If you fail to support your models in that way, why should we care about them? And if you do not inform us about what the real-world intended target systems of your modelling are, how are we to value or test them? Without that kind of information, it is impossible for us to check whether the ‘possible world’ models you come up with actually hold for the one world in which we live — the real world.

I’m a believer

4 Sep, 2022 at 17:47 | Posted in Varia


Donald Rubin on the history of causal inference

3 Sep, 2022 at 13:18 | Posted in Statistics & Econometrics


Summer Wine

3 Sep, 2022 at 13:09 | Posted in Varia

