Top 20 Heterodox Economics Books

28 February, 2014 at 12:04 | Posted in Economics | 29 Comments

 

  • Karl Marx, Das Kapital (1867)
  • Thorstein Veblen, The Theory of the Leisure Class (1899)
  • Joseph Schumpeter, The Theory of Economic Development (1911)
  • Nikolai Kondratiev, The Major Economic Cycles (1925)
  • Gunnar Myrdal, The Political Element in the Development of Economic Theory (1930)
  • John Maynard Keynes, The General Theory (1936)
  • Paul Sweezy, Theory of Capitalist Development (1956)
  • Joan Robinson, Accumulation of Capital (1956)
  • John Kenneth Galbraith, The Affluent Society (1958)
  • Piero Sraffa, Production of Commodities by Means of Commodities (1960)
  • Johan Åkerman, Theory of Industrialism (1961)
  • Axel Leijonhufvud, Keynes and the Classics (1969)
  • Nicholas Georgescu-Roegen, The Entropy Law and the Economic Process (1971)
  • Michal Kalecki, Selected Essays on the Dynamics of the Capitalist Economy (1971)
  • Paul Davidson, Money and the Real World (1972)
  • Hyman Minsky, John Maynard Keynes (1975)
  • Philip Mirowski, More Heat than Light (1989)
  • Tony Lawson, Economics and Reality (1997)
  • Steve Keen, Debunking Economics (2001)
  • John Quiggin, Zombie Economics (2010)

Teaching of economics — captured by a small and dangerous sect

28 February, 2014 at 10:56 | Posted in Economics | 4 Comments

 
The fallacy of composition consists in the false belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case. An adequate analysis of society and economy a fortiori can’t proceed by just adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

This fact shows up when orthodox/mainstream/neoclassical economics tries to argue for the existence of The Law of Demand – when the price of a commodity falls, the demand for it will increase – at the aggregate level. Although The Law can be established for single individuals, it turned out – in the Sonnenschein-Mantel-Debreu theorem, firmly established by the mid-1970s – that it is not possible to extend The Law of Demand to the market level, unless one makes ridiculously unrealistic assumptions, such as all individuals having identical homothetic preferences – which in effect means there is only one type of consumer.

This could only be conceivable if all agents were identical (i.e. if there were in essence only one actor) — the (in)famous representative actor. So, yes, it was possible to generalize The Law of Demand – as long as we assumed that at the aggregate level there was only one commodity and one actor. Some generalization! Does this sound reasonable? Of course not. This is pure nonsense!
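The wealth-effect problem behind the Sonnenschein-Mantel-Debreu result can be made concrete with a minimal sketch (all numbers hypothetical): two Cobb-Douglas consumers each obey the Law of Demand individually, yet their aggregate demand depends on how a fixed total income is distributed – so no single “representative agent” with stable preferences can stand in for them:

```python
def demand(share, income, price):
    """Cobb-Douglas demand for a good: spend a fixed budget share on it."""
    return share * income / price

def aggregate_demand(incomes, price, shares=(0.2, 0.8)):
    """Sum two consumers' demands; the shares differ across consumers."""
    return sum(demand(a, m, price) for a, m in zip(shares, incomes))

price = 2.0
d_equal = aggregate_demand((50, 50), price)   # total income 100, split 50/50
d_skewed = aggregate_demand((90, 10), price)  # same total, different split

print(d_equal, d_skewed)  # 25.0 vs 13.0 — same price, same total income
```

Since aggregate demand at unchanged prices and unchanged total income shifts with the income distribution, there is no stable aggregate demand function of the textbook kind – unless preferences are identical and homothetic, in which case the distribution drops out.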

How has neoclassical economics reacted to this devastating finding? Basically by looking the other way, ignoring it, and hoping that no one sees that the emperor is naked.

Having gone through a handful of the most frequently used textbooks of economics at the undergraduate level today, I can only conclude that the models that are presented in these modern neoclassical textbooks try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent.

That is, with something that has absolutely nothing to do with reality. And — worse still — something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1976) and Gerard Debreu (1974) unequivocally showed that there exists no condition by which assumptions on individuals would guarantee either stability or uniqueness of the equilibrium solution.

So what modern economics textbooks present to students are really models built on the assumption that an entire economy can be modeled as a representative actor and that this is a valid procedure. But it isn’t — as the Sonnenschein-Mantel-Debreu theorem irrevocably has shown.

Of course one could say that it is too difficult at the undergraduate level to show why the procedure is justified, and that the proof is best deferred to masters and doctoral courses. One could justifiably reason that way – if what you teach your students were true, if The Law of Demand were generalizable to the market level and the representative actor were a valid modelling abstraction! But in this case it is demonstrably known to be false, and therefore this is nothing but a case of scandalous intellectual dishonesty. It’s like telling your students that 2 + 2 = 5 and hoping that they will never run into Peano’s axioms of arithmetic.

Once the dust has settled, there is a strong case for an inquiry into whether the teaching of economics has been captured by a small but dangerous sect.

Larry Elliott/The Guardian

28 February 1986 — a date which will live in infamy

28 February, 2014 at 09:06 | Posted in Politics & Society | Comments Off on 28 February 1986 — a date which will live in infamy

Olof Palme. Born 30 January 1927. Murdered 28 February 1986.
 

Robert “The Keynesian” Lucas

27 February, 2014 at 16:28 | Posted in Economics | 1 Comment

In his Keynote Address to the 2003 History of Political Economy Conference, Nobel laureate Robert Lucas said:

Well, I’m not here to tell people in this group about the history of
monetary thought. I guess I’m here as a kind of witness from a vanished
culture, the heyday of Keynesian economics. It’s like historians rushing
to interview the last former slaves before they died, or the last of the
people who remembered growing up in a Polish shtetl. I am going to tell
you what it was like growing up in a day when Keynesian economics
was taught as a solid basis on which macroeconomics could proceed.

My credentials? Was I a Keynesian myself? Absolutely. And does my
Chicago training disqualify me for that? No, not at all. David Laidler
[who was present at the conference] will agree with me on this, and I will
explain in some detail when I talk about my education. Our Keynesian
credentials, if we wanted to claim them, were as good as could be obtained
in any graduate school in the country in 1963.

I thought when I was trying to prepare some notes for this talk
that people attending the conference might be arguing about Axel
Leijonhufvud’s thesis that IS-LM was a distortion of Keynes, but I didn’t
really hear any of this in the discussions this afternoon. So I’m going to
think about IS-LM and Keynesian economics as being synonyms. I remember
when Leijonhufvud’s book came out and I asked my colleague
Gary Becker if he thought Hicks had got the General Theory right with
his IS-LM diagram. Gary said, “Well, I don’t know, but I hope he did,
because if it wasn’t for Hicks I never would have made any sense out of
that damn book.” That’s kind of the way I feel, too, so I’m hoping Hicks
got it right.

Mirabile dictu! I’m a Keynesian – although I haven’t understood anything of what Keynes wrote. But I’ve read another guy who said he had read the book, so I hope for the best and assume he got it right (which Hicks actually didn’t – and was intellectually honest enough to admit, in at least three scientific publications, some twenty years before Lucas’s statement). A truly scientific attitude. No wonder the guy – after having deluded himself into believing (?) he was a Keynesian, although actually only elaborating on a model developed and later disowned by John Hicks – got the “Nobel prize” in economics …

On chance, probability, randomness, uncertainty and all that

26 February, 2014 at 23:04 | Posted in Statistics & Econometrics | 2 Comments

 

How to escape Mankiwian brainwash

25 February, 2014 at 16:56 | Posted in Economics | 1 Comment

 

[h/t Phil Pilkington]

Taking rational expectations seriously? You’ve got to be joking!

25 February, 2014 at 10:53 | Posted in Economics | 1 Comment

If at some time my skeleton should come to be used by a teacher of osteology to illustrate his lectures, will his students seek to infer my capacities for thinking, feeling, and deciding from a study of my bones? If they do, and any report of their proceedings should reach the Elysian Fields, I shall be much distressed, for they will be using a model which entirely ignores the greater number of relevant variables, and all of the important ones. Yet this is what ‘rational expectations’ does to economics.

G. L. S. Shackle

Since I have already put forward a rather detailed theoretical-methodological critique of the rational expectations hypothesis in Rational expectations – a fallacious foundation for macroeconomics in a non-ergodic world (real-world economics review, issue 62, 2012), I will limit myself here to elaborating on a couple of the rather unwarranted allegations that defenders of rational expectations put forward in their attempts at rescuing the hypothesis from the critique.

In a laboratory experiment run by James Andreoni and Tymofiy Mylovanov, the researchers induced common probability priors, and then told all participants of the actions taken by the others. Their findings are very interesting, and say something rather profound about the value of the rational expectations hypothesis in standard neoclassical economic models:

We look at choices in round 1, when individuals should still maintain common priors, being indifferent about the true state. Nonetheless, we see that about 20% of the sample erroneously disagrees and favors one point of view. Moreover, while other errors tend to diminish as the experiment progresses, the fraction making this type of error is nearly constant. One may interpret disagreement in this case as evidence of erroneous or nonrational choices.

Next, we look at the final round where information about disagreement is made public and, under common knowledge of rationality, should be sufficient to eliminate disagreement. Here we find that individuals weigh their own information more than twice that of the five others in their group. When we look separately at those who err by disagreeing in round 1, we find that these people weigh their own information more than 10 times that of others, putting virtually no stock in public information. This indicates a different type of error, that is, a failure of some individuals to learn from each other. This error is quite large and for a nontrivial minority of the population.

Setting aside the subjects who make systematic errors, we find that individuals still put 50% more weight on their own information than they do on the information revealed through the actions of others, although this difference is not statistically significant.

So in this experiment there seem to be some irrational idiots who don’t understand that they are exactly that — idiots. When told that the earth is flat, they still adhere to their own belief that the earth is round. It is as if people thought that the probability that all others are idiots — with irrational beliefs — is higher than the probability that the earth is round.

Now compare these experimental results with rational expectations models, where the world evolves in accordance with fully predetermined models where uncertainty has been reduced to stochastic risk describable by some probabilistic distribution.

The tiny little problem that there is no hard empirical evidence that verifies these models doesn’t usually bother its protagonists too much. Rational expectations überpriest Thomas Sargent has the following to say on the epistemological status of the rational expectations hypothesis (emphasis added):

Partly because it focuses on outcomes and does not pretend to have behavioral content, the hypothesis of rational expectations has proved to be a powerful tool for making precise statements about complicated dynamic economic systems.

Precise, yes, but relevant and realistic? I’ll be dipped!

And a few years later, when asked in an interview in Macroeconomic Dynamics — in 2005 — if he thought “that differences among people’s models are important aspects of macroeconomic policy debates”, Sargent replied (emphasis added):

The fact is you simply cannot talk about their differences within the typical rational expectations model. There is a communism of models. All agents within the model, the econometricians, and God share the same model.

One might perhaps find it odd to juxtapose God and people, but I think Leonard Rapping – himself a former rational expectationist – was on the right track when, interviewed by Arjo Klamer in 1984 for The New Classical Macroeconomics, he said:

Frankly, I do not think that the rational expectations theorists are in the real world. Their approach is much too abstract.

Building models on rational expectations means we are either Gods or Idiots. Most of us know we are neither. So, God may share Sargent’s and Wren-Lewis’s models, but they certainly aren’t my models.

In their attempted rescue operations, rational expectationists try to give the picture that only heterodox economists like yours truly are critical of the rational expectations hypothesis. But, on this, they are, simply, eh, wrong. Let’s listen to Nobel laureate Edmund Phelps — hardly a heterodox economist — and what he has to say (emphasis added):

Question: In a new volume with Roman Frydman, “Rethinking Expectations: The Way Forward for Macroeconomics,” you say the vast majority of macroeconomic models over the last four decades derailed your “microfoundations” approach. Can you explain what that is and how it differs from the approach that became widely accepted by the profession?

Answer: In the expectations-based framework that I put forward around 1968, we didn’t pretend we had a correct and complete understanding of how firms or employees formed expectations about prices or wages elsewhere. We turned to what we thought was a plausible and convenient hypothesis. For example, if the prices of a company’s competitors were last reported to be higher than in the past, it might be supposed that the company will expect their prices to be higher this time, too, but not that much. This is called “adaptive expectations:” You adapt your expectations to new observations but don’t throw out the past. If inflation went up last month, it might be supposed that inflation will again be high but not that high.

Q: So how did adaptive expectations morph into rational expectations?

A: The “scientists” from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let’s be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.

Q: And what’s the consequence of this putsch?

A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted — thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.

Q: So rather than live with variability, write a formula in stone!

A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

Q: One of the issues I have with rational expectations is the assumption that we have perfect information, that there is no cost in acquiring that information. Yet the economics profession, including Federal Reserve policy makers, appears to have been hijacked by Robert Lucas.

A: You’re right that people are grossly uninformed, which is a far cry from what the rational expectations models suppose. Why are they misinformed? I think they don’t pay much attention to the vast information out there because they wouldn’t know what to do with it if they had it. The fundamental fallacy on which rational expectations models are based is that everyone knows how to process the information they receive according to the one and only right theory of the world. The problem is that we don’t have a “right” model that could be certified as such by the National Academy of Sciences. And as long as we operate in a modern economy, there can never be such a model.

Bloomberg

And this is what another non-heterodox economist, Willem Buiter, has to say about the state of the standard macroeconomic theory that builds on the twin assumptions of rational expectations and efficient markets:

In both the New Classical and New Keynesian approaches to monetary theory (and to aggregative macroeconomics in general), the strongest version of the efficient markets hypothesis (EMH) was maintained. This is the hypothesis that asset prices aggregate and fully reflect all relevant fundamental information, and thus provide the proper signals for resource allocation. Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education. But most of the profession continued to swallow the EMH hook, line and sinker, although there were influential advocates of reason throughout, including James Tobin, Robert Shiller, George Akerlof, Hyman Minsky, Joseph Stiglitz and behaviourist approaches to finance. The influence of the heterodox approaches from within macroeconomics and from other fields of economics on mainstream macroeconomics – the New Classical and New Keynesian approaches – was, however, strictly limited.

But let’s see how rational expectations fares as an empirical assumption. Empirical efforts at testing the correctness of the hypothesis have resulted in a series of studies that have more or less concluded that it is not consistent with the facts. In one of the more well-known and highly respected evaluation reviews, Michael Lovell (1986) concluded:

it seems to me that the weight of empirical evidence is sufficiently strong to compel us to suspend belief in the hypothesis of rational expectations, pending the accumulation of additional empirical evidence.

And this is how Nikolay Gertchev summarizes studies on the empirical correctness of the hypothesis:

More recently, it even has been argued that the very conclusions of dynamic models assuming rational expectations are contrary to reality: “the dynamic implications of many of the specifications that assume rational expectations and optimizing behavior are often seriously at odds with the data” (Estrella and Fuhrer 2002, p. 1013). It is hence clear that if taken as an empirical behavioral assumption, the RE hypothesis is plainly false; if considered only as a theoretical tool, it is unfounded and self-contradictory.

But how about the large mainstream literature on learning? Let me briefly address the issue.

The rational expectations hypothesis presupposes – basically for reasons of consistency – that agents have complete knowledge of all the relevant probability distribution functions. And when learning is incorporated into these models – in an attempt to take the heat off some of the criticism levelled against the hypothesis – it is always a very restricted kind of learning that is considered: a learning in which truly unanticipated, surprising, new things never take place, only rather mechanical updatings of existing probability functions that increase the precision of already existing information sets.

Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise, in a strict mathematical-statistical sense, it is not really a probability distribution).

But in the real world it is – as shown again and again by behavioural and experimental economics – common to mistake a conditional distribution for a probability distribution. Such mistakes are impossible to make in the kinds of economic analysis – built on the rational expectations hypothesis – that Levine is such an adamant proponent of. On average, rational expectations agents are always correct. But truly new information will not only reduce the estimation error; it may actually change the entire estimation and hence the decisions made. To be truly new, information has to be unexpected. If not, it would simply be inferred from the already existing information set.

In rational expectations models new information is typically presented as something only reducing the variance of the parameter estimated. But if new information means truly new information it actually could increase our uncertainty and variance (information set (A, B) => (A, B, C)).
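The point that truly new information can increase rather than reduce uncertainty can be shown with a toy calculation (the payoffs are made up for illustration): learning that a previously unimagined outcome C is possible widens the forecast variance, something mechanical updating within a fixed state space can never do:

```python
def mean_var(outcomes, probs):
    """Mean and variance of a discrete random variable."""
    m = sum(x * p for x, p in zip(outcomes, probs))
    v = sum(p * (x - m) ** 2 for x, p in zip(outcomes, probs))
    return m, v

# Known state space {A, B}: payoffs 0 and 1, equally likely.
_, var_before = mean_var([0, 1], [0.5, 0.5])

# Truly new information: a previously unthought-of state C (payoff 10)
# enters the picture, and the three states are now seen as equally likely.
_, var_after = mean_var([0, 1, 10], [1/3, 1/3, 1/3])

print(var_before, var_after)  # 0.25 vs ≈20.2 — uncertainty went UP
```

In a rational expectations model the support of the distribution is fixed from the start, so this kind of revision – the information set (A, B) becoming (A, B, C) – simply cannot occur.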

Truly new information gives birth to new probabilities, revised plans and decisions – something the rational expectations hypothesis cannot account for with its finite-sampling representation of incomplete information.

In the world of rational expectations, learning is like getting better and better at reciting the complete works of Shakespeare by heart – or at hitting the bull’s eye when playing darts. It presupposes that we have a complete list of the possible states of the world and that, by definition, mistakes are non-systematic (which, strictly seen, follows from the assumption that “subjective” probability distributions equal the “objective” probability distribution). This is a rather uninteresting and trivial kind of learning: a closed-world learning, synonymous with improving one’s adaptation to a world which is fundamentally unchanging. But in real, open-world situations, learning is more often about adapting to, and trying to cope with, genuinely new phenomena.

The rational expectations hypothesis presumes consistent behaviour, where expectations do not display any persistent errors. In the world of rational expectations we are always, on average, hitting the bull’s eye. In the more realistic, open systems view, there is always the possibility (danger) of making mistakes that may turn out to be systematic. It is because of this, presumably, that we put so much emphasis on learning in our modern knowledge societies.

So, where does all this leave us? I think John Kay sums it up pretty well:

Prof Sargent and colleagues appropriated the term “rational expectations” for their answer. Suppose the economic world evolves according to some predetermined model, in which uncertainties are “known unknowns” that can be described by probability distributions. Then economists could gradually deduce the properties of this model, and businesses and individuals would naturally form expectations in that light. If they did not, they would be missing obvious opportunities for advantage.

This approach, which postulates a universal explanation into which economists have privileged insight, was as influential as it was superficially attractive. But a scientific idea is not seminal because it influences the research agenda of PhD students. An important scientific advance yields conclusions that differ from those derived from other theories, and establishes that these divergent conclusions are supported by observation. Yet as Prof Sargent disarmingly observed, “such empirical tests were rejecting too many good models” in the programme he had established with fellow Nobel laureates Bob Lucas and Ed Prescott. In their world, the validity of a theory is demonstrated if, after the event, and often with torturing of data and ad hoc adjustments that are usually called “imperfections”, it can be reconciled with already known facts – “calibrated”. Since almost everything can be “explained” in this way, the theory is indeed universal; no other approach is necessary, or even admissible …

Rational expectations consequently fail for the same reason communism failed – the arrogance and ignorance of the monopolist.

David Ricardo

25 February, 2014 at 08:56 | Posted in Economics | 1 Comment

Tomorrow Sveriges Radio P1 broadcasts – at 21:03–21:37 – the second programme in a series on the history of economics.

Yours truly takes part.
 

It is said that the stockbroker David Ricardo became an economist because he was bored on holiday. This may be a tall tale, but it is true that his way of reasoning logically and building models of reality set the style for the discipline.

Hicks on the inapplicability of probability calculus

24 February, 2014 at 18:48 | Posted in Economics, Statistics & Econometrics | 11 Comments

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.

John Hicks, Causality in Economics, 1979:121

To simply assume that economic processes are ergodic – and a fortiori in any relevant sense timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.
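A minimal sketch of why ergodicity matters (an Ole Peters-style multiplicative gamble, offered here as illustration rather than as anything from Hicks): the ensemble average of the process grows, while the growth rate experienced along time by almost every single trajectory shrinks, so time averages and ensemble averages need not coincide:

```python
import math

# A repeated gamble: each round, wealth is multiplied by 1.5 or 0.6,
# each with probability one half (hypothetical numbers).
up, down, p = 1.5, 0.6, 0.5

# Ensemble perspective: the expected multiplication factor per round.
ensemble_growth = p * up + (1 - p) * down

# Time perspective: the almost-sure per-round growth factor experienced
# along a single trajectory (exponential of the expected log-factor).
time_growth = math.exp(p * math.log(up) + (1 - p) * math.log(down))

print(ensemble_growth)  # 1.05   -> the "average agent" gets richer
print(time_growth)      # ~0.949 -> almost every actual agent gets poorer
```

If the probabilities that ruled the past need not rule the future, even this kind of time-average calculation is optimistic; the sketch only shows that averaging over an imagined ensemble can mislead even in a stationary world.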

Added 25 February: Commenting on this article, Paul Davidson writes:

After reading my article on the fallacy of rational expectations, Hicks wrote to me in a letter dated 12 February 1983 in which he said “I have just been reading your RE [rational expectations] paper … I do like it very much … You have now rationalized my suspicions and shown me that I missed a chance of labeling my own point of view as nonergodic. One needs a name like that to ram a point home.”

On abstraction and idealization in neoclassical economics

23 February, 2014 at 20:55 | Posted in Economics | 2 Comments

Of course macroeconomists research many things, and only a minority are using New Keynesian models, and probably even some of those do not really need the New Keynesian bit. That is the great thing about abstraction. Working with what can be called ‘flex price’ models does not imply that you think price rigidity is unimportant, but instead that it can often be ignored if you want to focus on other processes.

Simon Wren-Lewis

It would, of course, be interesting to know on what reasoning Wren-Lewis based this rather unsubstantiated view.

When applying deductivist thinking to economics, neoclassical economists like Wren-Lewis usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations (what Wren-Lewis, wrongly, calls abstractions) necessary for the deductivist machinery to work – e.g. “flex price” models – simply don’t hold.

If the real world is fuzzy, vague and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? The logic of idealization is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap.

Or as Hans Albert has it on the neoclassical style of thought:

In everyday situations, if, in answer to an inquiry about the weather forecast, one is told that the weather will remain the same as long as it does not change, then one does not normally go away with the impression of having been particularly well informed, although it cannot be denied that the answer refers to an interesting aspect of reality, and, beyond that, it is undoubtedly true …

We are not normally interested merely in the truth of a statement, nor merely in its relation to reality; we are fundamentally interested in what it says, that is, in the information that it contains …

Information can only be obtained by limiting logical possibilities; and this in principle entails the risk that the respective statement may be exposed as false. It is even possible to say that the risk of failure increases with the informational content, so that precisely those statements that are in some respects most interesting, the nomological statements of the theoretical hard sciences, are most subject to this risk. The certainty of statements is best obtained at the cost of informational content, for only an absolutely empty and thus uninformative statement can achieve the maximal logical probability …

The neoclassical style of thought – with its emphasis on thought experiments, reflection on the basis of illustrative examples and logically possible extreme cases, its use of model construction as the basis of plausible assumptions, as well as its tendency to decrease the level of abstraction, and similar procedures – appears to have had such a strong influence on economic methodology that even theoreticians who strongly value experience can only free themselves from this methodology with difficulty …

Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …

The connections sketched out above are part of the general logic of the sciences and can thus be applied to the social sciences. Above all, with their help, it appears to be possible to illuminate a methodological peculiarity of neoclassical thought in economics, which probably stands in a certain relation to the isolation from sociological and social-psychological knowledge that has been cultivated in this discipline for some time: the model Platonism of pure economics, which comes to expression in attempts to immunize economic statements and sets of statements (models) from experience through the application of conventionalist strategies …

Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …

A further possibility for immunizing theories consists in simply leaving open the area of application of the constructed model so that it is impossible to refute it with counter examples. This of course is usually done without a complete knowledge of the fatal consequences of such methodological strategies for the usefulness of the theoretical conception in question, but with the view that this is a characteristic of especially highly developed economic procedures: the thinking in models, which, however, among those theoreticians who cultivate neoclassical thought, in essence amounts to a new form of Platonism.

Quite simply superb

23 February, 2014 at 14:56 | Posted in Varia | 1 Comment

 

The housing bubble just keeps on growing in Sweden

20 February, 2014 at 11:55 | Posted in Economics | 6 Comments

[Chart: index of real prices of houses and co-op apartments, 1952–2013]

As Cornucopia shows in the chart above, real prices of co-op apartments (bostadsrätter) have risen by almost 900 per cent over the last 30 years!

If this is not a bubble, I don’t know what would be.

But perhaps it is best to ask L E O Svensson first …

Added: Martin Flodén and Flute also have some thought-provoking observations on the bubble.

[h/t Erik Hegelund]
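
A 900 per cent real price rise over 30 years means roughly a tenfold increase, which compounds to about 8 per cent real growth per year. A back-of-the-envelope check (illustrative numbers only, not taken from Cornucopia’s data):

```python
# A ~900% rise over 30 years means prices ended at roughly 10x their
# starting level. The implied average annual real growth rate g solves
# (1 + g)**30 = 10.
start, end, years = 1.0, 10.0, 30          # 900% increase => factor of 10
g = (end / start) ** (1 / years) - 1
print(f"implied average annual real growth: {g:.1%}")  # about 8% per year
```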

Macroeconomic challenges

20 February, 2014 at 10:15 | Posted in Economics | 8 Comments

In discussing macroeconomics’ Faustian bargain, Simon [Wren-Lewis] asks:

“By putting all our macroeconomic model building eggs in one microfounded basket, have we significantly slowed down the pace at which macroeconomists can say something helpful about the rapidly changing real world?”

Let me deepen this question by pointing to five newish facts about the “real world” which any good, useful macro theory should be compatible with.

1. The unemployed are significantly less happy than those in work. This doesn’t merely provide the justification for an interest in macroeconomics. It also casts grave doubt upon RBC-style theories in which unemployment is voluntary …

2. Price and wage stickiness is over-rated … Price stickiness isn’t universal …

3. The failure of a handful of organizations can have massive macroeconomic consequences … We need models in which micro failures generate macro ones …

4. Supply shocks do happen. It’s improbable that all productivity fluctuations are due merely to labour hoarding in the face of demand shocks …

5. Interactions between agents can magnify fluctuations. We know there are expenditure cascades, which occur because consumers copy other consumers …

These facts are a challenge to both RBC and New Keynesian models. But they have something in common. They stress the heterogeneity of agents … This, I fear, means that the problem with conventional macro isn’t so much its microfoundations per se as the assumption that these microfoundations must consist in representative agents.

Chris Dillow

Yes indeed, the assumption of representative agents is a critical one in modern macroeconomics — as is the insistence on microfoundations.

The purported strength of New Classical and “New Keynesian” macroeconomics is that they have a firm anchorage in preference-based microeconomics, and especially in the decisions taken by intertemporal utility-maximizing “forward-looking” individuals.

To some of us, however, this has come at too high a price. The almost quasi-religious insistence that macroeconomics must have microfoundations – without ever presenting either ontological or epistemological justifications for the claim – has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy on the basis of an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if – after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem – these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations, assumed somehow to know the true structure of our model of the world (how that could even be conceivable is beyond my imagination, given that the ongoing debate on microfoundations, if anything, shows that not even we economists can agree on a common model).

Microfoundations are thought to give macroeconomists the means to fully predetermine their models and come up with definitive, robust, stable answers. In reality we know that the forecasts and expectations of individuals often differ systematically from what materializes in the aggregate, since knowledge is imperfect and uncertainty – rather than risk – rules the roost.

And microfoundations allegedly get around the Lucas critique by focusing on “deep”, structural, invariant parameters of optimizing individuals’ preferences and tastes. This is an empty hope without solid empirical or methodological foundation.

The kind of microfoundations that “New Keynesian” and New Classical general equilibrium macroeconomists base their models on is not – at least from a realist point of view – plausible.

Without export certificates, models and theories should be considered unsold. Unfortunately this understanding has not informed modern neoclassical economics, as can be seen in the profuse use of so-called representative-agent models.

A common feature of modern neoclassical macroeconomics is to use simple (dynamic stochastic) general equilibrium models where representative actors are supposed to have complete knowledge, zero transaction costs and complete markets.

In these models, the actors are all identical. Of course, this has far-reaching analytical implications. Situations characterized by asymmetrical information – situations most of us consider to be innumerable – cannot arise in such models. If the aim is to build a macro-analysis from micro-foundations in this manner, the relevance of the procedure is highly questionable (Robert Solow has even considered the claims made by protagonists of rational agent models “generally phony”).

One obvious critique is that representative-agent models do not incorporate distributional effects – effects that often play a decisive role in macroeconomic contexts. Investigations into the operations of markets and institutions usually find that there are overwhelming problems of coordination. These are difficult, not to say impossible, to analyze with the kind of Robinson Crusoe models that, e.g., Real Business Cycle theorists employ – models which exclude precisely those differences between groups of actors that are the driving force in many non-neoclassical analyses.

The choices of different individuals have to be shown to be coordinated and consistent. This is obviously difficult if the macroeconomic models don’t give room for heterogeneous individuals (this failure to appreciate the importance of heterogeneity is perhaps especially problematic for the modelling of real business cycles in dynamic stochastic general equilibrium models). Representative-agent models are certainly more manageable; from a realist point of view, however, they are also less relevant and have a lower explanatory potential.

Both the “Lucas critique” and Keynes’s critique of econometrics showed that it is inadmissible to project history onto the future. Consequently an economic policy cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models could get hold of correlations between different “variables” was not enough. If they could not get at the causal structure that generated the data, they were not really “identified”. Lucas himself drew the conclusion that the problem of unstable relations should be met by constructing models with clear microfoundations, in which forward-looking optimizing individuals and robust, deep, behavioural parameters could be assumed to remain stable even under changes in economic policy.

In microeconomics we know that aggregation really presupposes homothetic and identical preferences – something that almost never exists in real economies. The results obtained under these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And as if this were not enough, there are obvious problems also with the kind of microeconomic equilibrium to which one tries to reduce macroeconomics. Decisions on consumption and production are described as choices made by a single agent. But then who sets the prices on the market? And how do we justify the assumption of universal consistency between the choices?
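
A toy illustration (hypothetical incomes and budget shares, not a result drawn from the SMD literature itself) of why aggregation needs identical preferences: with two Cobb-Douglas consumers who spend different fixed budget shares on a good, aggregate demand depends on how a given total income is distributed – so no single representative consumer can reproduce it:

```python
# Two Cobb-Douglas consumers spending fixed budget shares on good 1
# (hypothetical shares and incomes). Each individual demand is perfectly
# well-behaved, yet aggregate demand depends on the income DISTRIBUTION,
# not just on total income -- so no representative agent can stand in.
def demand_good1(share, income, price):
    return share * income / price

def aggregate(incomes, shares=(0.2, 0.8), price=1.0):
    return sum(demand_good1(s, m, price) for s, m in zip(shares, incomes))

print(round(aggregate((100.0, 100.0)), 1))  # equal split of 200        -> 100.0
print(round(aggregate((150.0, 50.0)), 1))   # same total, redistributed -> 70.0
```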

Models that are critically based on particular and odd assumptions – and are neither robust nor congruent with real-world economies – are of questionable value.

And is it really possible to describe and analyze all the deliberations and choices made by individuals in an economy? Does not the choice of an individual presuppose knowledge and expectations about choices of other individuals? It probably does, and this presumably helps to explain why representative-agent models have become so popular in modern macroeconomic theory. They help to make the analysis more tractable.

One could justifiably argue that one might just as well accept that it is not possible to coherently reduce macro to micro, and accordingly that it is perhaps necessary to forswear microfoundations and the use of rational-agent models altogether. Microeconomic reasoning has to build on macroeconomic presuppositions. Real individuals do not base their choices on operational general equilibrium models, but rather use simpler models. If macroeconomics needs microfoundations, it is equally necessary that microeconomics needs macrofoundations.

The microeconomist Alan Kirman has maintained that the use of representative-agent models is unwarranted and leads to conclusions that are usually both misleading and false. It’s a fiction basically used by some macroeconomists to justify the use of equilibrium analysis and a kind of pseudo-microfoundations. Microeconomists are well aware that the conditions necessary to make aggregation to representative actors possible, are not met in actual economies. As economic models become increasingly complex, their use also becomes less credible.

Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, as we have already argued, even these microfoundations aren’t immutable. Lucas and the New Classical economists’ deep parameters – “tastes” and “technology” – are not really the bedrock of constancy that they believe (pretend) them to be.

In microfounded-rational-expectations-representative-agent macroeconomics the economy is described “as if” it consisted of one single agent – either by inflating the optimization problem of the individual to the scale of a whole economy, or by assuming that it is possible to aggregate different individuals’ actions by simple summation, since every type of actor is identical.

It would be better to just face the truth — it is impossible to describe interaction and cooperation when there is essentially only one actor.

Economics for Dummiez

20 February, 2014 at 09:23 | Posted in Economics | 1 Comment

 

Adam Smith and the “invisible hand”

20 February, 2014 at 09:21 | Posted in Economics | Comments Off on Adam Smith and the “invisible hand”

Yesterday Sveriges Radio P1 broadcast the first programme in a series on the history of economics:

Adam Smith made economics a modern science in 1776 with his book The Wealth of Nations.

His thesis is that self-interested behaviour – the so-called “invisible hand” – benefits society as a whole in the long run. Professor Lars Pålsson Syll gives his view of this theory.

How long will we allow greedy big banks to line their pockets at our expense?

19 February, 2014 at 20:35 | Posted in Economics, Politics & Society | 2 Comments

The four big banks’ record profits of nearly 92 billion kronor have upset many. In a stagnant Swedish economy, a small minority makes big money off everyone else. But how is it done? Two parts of the banks’ business generate the excess profits. The first is called net interest income. The second is fees. Both mean that money is moved from those who have little to those who have more …

Together the banks set a net interest income record of 116 billion kronor last year. This is money they collect without really doing anything for it. It is simply the difference between the bank’s cost when people save and its income when people borrow …

The banks raise their net interest income because they can. Not because it is needed.

The other large source of income is fees. On debit cards, on loans, on moving loans, on cash withdrawals, on payments – indeed, on the customers’ very existence …

The digital society means that most things can be handled by computer or mobile phone. That fees can nevertheless keep rising says something about the powerlessness customers face.

Fees rise while the bank’s effort shrinks.

Net interest income is politically unregulated in Sweden … Politically, this freedom is justified by the claim that competition between banks for customers will lower costs. In reality the opposite holds. The banks extract excess profits collectively. Because they can. And because they are allowed to.

Johan Ehrenberg & Sten Ljunggren

[h/t Jan Milch]
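
“Räntenetto” (net interest income) is simply interest earned on lending minus interest paid on deposits. A toy calculation (all figures invented for illustration) shows how a widening spread raises income with no extra service provided:

```python
# Net interest income = interest earned on loans - interest paid on deposits.
# All balance-sheet figures and rates below are made up for illustration.
def net_interest(loans, lending_rate, deposits, deposit_rate):
    return loans * lending_rate - deposits * deposit_rate

# Same balance sheet, wider spread -> higher net interest income.
print(round(net_interest(1000, 0.04, 800, 0.01), 2))   # 40 - 8 -> 32.0
print(round(net_interest(1000, 0.05, 800, 0.005), 2))  # 50 - 4 -> 46.0
```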

Big data forecasting and the conundrum of unknown unknowns

19 February, 2014 at 17:54 | Posted in Economics | 1 Comment

Short-term weather forecasting is possible because most of the factors that determine tomorrow’s weather are, in a sense, already there … But when you look further ahead you encounter the intractable problem that, in non-linear systems, small changes in initial conditions can lead to cumulatively larger and larger changes in outcomes over time. In these circumstances imperfect knowledge may be no more useful than no knowledge at all.

Much the same is true in economics and business. What gross domestic product will be tomorrow is, like tomorrow’s rain or the 1987 hurricane, more or less already there: tomorrow’s output is already in production, tomorrow’s sales are already on the shelves, tomorrow’s business appointments already made. Big data will help us analyse this. We will know more accurately and more quickly what GDP is, we will be more successful in predicting output in the next quarter, and our estimates will be subject to fewer revisions …

Big data can help us understand the past and the present but it can help us understand the future only to the extent that the future is, in some relevant way, contained in the present. That requires a constancy of underlying structure that is true of some physical processes but can never be true of a world that contains Hitler and Napoleon, Henry Ford and Steve Jobs; a world in which important decisions or discoveries are made by processes that are inherently unpredictable and not susceptible to quantitative description.

John Kay
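
Kay’s point about non-linear systems is easy to demonstrate. In the chaotic logistic map – a standard textbook example, not one Kay himself uses – two starting points differing by one part in a billion soon follow completely different trajectories, so slightly imperfect knowledge of initial conditions eventually forecasts nothing:

```python
# Logistic map x -> 4x(1-x), a standard example of chaotic dynamics.
# Two initial conditions differing by 1e-9 end up completely divergent.
def trajectory(x0, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest gap within 60 steps: {gap:.3f}")  # order 1, not 1e-9
```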

Ulysses’ Gaze

19 February, 2014 at 17:28 | Posted in Varia | Comments Off on Ulysses’ Gaze

 

For a school beyond pupils’ time-bound and contingent facticity

19 February, 2014 at 09:59 | Posted in Education & School | Comments Off on For a school beyond pupils’ time-bound and contingent facticity

Impulses for school-pedagogical reform should eventually stop relying on the motives, meanings and semantics of the seventies. Otherwise it will be like an orchestra that imperturbably keeps playing the old familiar tunes without noticing that the audience left the hall long ago. It is time to play something new; one ought to dare a new beginning. Farewell to the seventies!
Thomas Ziehe, “Adjö till sjuttiotalet” [“Farewell to the Seventies”], KRUT 2/1998

Year after year, alarming reports appear on the sorry state of the Swedish school. PISA and other studies show unequivocally that Swedish pupils are performing worse and worse. And those of us who work in the university world notice that our students increasingly lack the prior knowledge needed to pursue serious studies.

Year after year we see the desire to become a teacher declining. In the early 1980s there were almost eight applicants per place in primary-school teacher training. Today there is one applicant per place. This is a social catastrophe we ought to talk about. In a world where everything depends on knowledge, making the teaching profession attractive again is, in the long run, crucial for the Swedish economy.

Year after year we see teachers’ salaries being eroded. Only yesterday the OECD presented a report claiming to show that successful school nations tend to prioritize high teacher salaries. Teacher salaries as a share of GDP per capita are substantially lower in Sweden than in the countries at the top of the PISA studies.

Year after year we see inequality increasing in many areas, not least with regard to income and wealth. Differences in living conditions between groups in terms of class, ethnicity and gender are unacceptably large.

Year after year we can observe that in the world of the school, family background evidently still matters greatly for pupils’ performance. This can obviously only be regarded as a total failure for a school with compensatory aspirations.


Year after year we can note that, contrary to all the promises of progressive pedagogy, it is above all children from homes without academic traditions who have lost out in the shift in how schooling has been viewed over the last half-century. Today – with school vouchers, free school choice and independent schools – this development has, contrary to all compensatory promises, only further strengthened highly educated parents’ ability to steer their own children’s schooling and future. It is hard to see who, in today’s school, will be able to make the kind of “class journey” that so many of my generation made.

All this is well known. But I believe there are also other – and perhaps even more important – deep structural causes behind the decline of the Swedish school in recent decades.

Instead of the school’s society-wide functions, I would here like to focus on some cultural and subject-theoretical aspects of the school as an institution. How should the school relate to society at large? What can and should the school do and be?

We now live in a society where identity is something created, and not, as it used to be, something largely inherited. This can be experienced both as a liberation and as a burden. Many young people today feel ambivalent about the freedom that the de-traditionalization of our social lives has brought, and this is very much expressed in school. Much of the disorientation young people voice is connected with the dissolution of norms and values brought about by the thorough transformation of modern social life. In the past one was more clearly defined by an inherited social identity. Today young people picture life much more as a matter of what they themselves, as individuals, want to make of it. Each individual creates his or her own identity. But this also means there is no backrest to lean on. It is “free” movement under one’s own responsibility, with one’s own anguish and rootlessness.

When I grew up and went to school in the 1960s and 70s, there was still a fairly clear line of demarcation between the school – which took care of our cognitive skills – the family – which took care of the rising generation’s socialization and emotional needs – and society at large.

What was once new grows old. Progressive pedagogy’s revolt against an ossified school institution with its outdated norms and teaching styles was new and radical in the 1970s. Today reality has overtaken it. When everyone already feels free to do as they please and to take their own experiences and lives as their starting point, it is no longer radical to tell pupils: “Do as you like.” The answer then often becomes: “Can’t we be spared having to do as we like? We already do so much of that in our everyday lives and in the modern school. It no longer gives us any emancipatory experience. It is just more of the same slack and indifferent laissez-faire spirit!”

The innovative force of de-formalization and subjectivization exhausted its potential long ago. To press demands in today’s school for more informality, closeness and subjectivity is hopelessly out of date. Taking pupils’ reality as a starting point must not, in any sense, be equated with embracing and affirming it without remainder. On the contrary, one must hold up a counter-image, an alternative. Not identity, but difference.

The school should teach for a life other than the one lived in school. Above all it should make sure to realize fully every pupil’s potential to live a life other than the one they live today. When continuity, stability and traditions lose their significance and self-legitimating aura, this becomes all the more important. The school must cultivate the “principle of hope”. Without a forward-looking hope for a better world, the school cannot fulfil its task. Therefore socialization-theoretical explications of how one became what one is cannot be the starting point either. In the world of the school, the formative question of how one may in the future become what one has the potential to be must be the guiding starting point.

Today, when the life-world, everyday life and subjectivity are fully affirmed in the world of the school, more of the same is not needed. Progressive pedagogy has been so successful that it is now falling on its own sword. Subjectivization reached saturation long ago, and saturation has made the marginal utility negative. The euphoria of novelty has turned into the discomfort of surfeit. It then no longer helps to invoke phrases from another, unsated, time in which these demands could perhaps work emancipatorily. Today the response from pupils is usually just: “We’ve been there. We’ve seen that. We’ve done that. Point us to something beyond ourselves that lets us grow instead!” What was the height of radicalism in the 70s, when I studied at the teacher-training college – “take your own interest in mopeds as a starting point and write the history of technology from it” – today carries only an air of ridicule. When everyone is already writing their “moped history”, something else is needed.

Distance and separation give perspective. The school should be an island in a sea of societal routines. Insisting that the school keep moving closer to pupils’ horizons of experience is no longer viable. The school must not be a reality-show world in which people expose their private and intimate lives to satisfy exhibitionistic impulses of self-affirmation. What we need today is not more levelling between pupils’ life-world and the school. On the contrary, we need more respect for the necessary difference between these worlds. To function as a forward-looking bridge, learning in school must take as its acknowledged starting point the difference between pupils’ life-world and the school itself.

School is not society or family. The school should not be an extension of pupils’ lives outside it. On the contrary, it should be an alternative. Something else. In its otherness it should create the conditions for a future, not lock pupils’ horizon of experience into the present. The school should not give pupils self-affirmation for what they are, but help them towards what they can become.

The future is uncertain. And precisely for that reason it is so important that the school orient itself towards the future and not the present. That is also why the school should be something different, not a mirror of society. Through its otherness the school can provide a readiness to act in a new life and a new world. When we talk about what the school needs to give pupils, this must be at the centre.

One often asks what kind of pupils society needs. I think that question is wrongly posed. The demand must be a demand of rights – one that transcends pupils’ own horizons of experience and is future-oriented, aiming at pupils’ potential rather than their contingent facticity of the moment.

Progressive pedagogy nourished the view that the distance between school and society should disappear and that an identity of the two should prevail. It still sees itself as progressive (often on the same unreflected premises as identity politics at large in our modern multicultural societies). I see it the other way round. Hold on to the difference and the distance between school and society. Just as Adorno once spoke of “false intimacy”, I would say that what we have here is “false identity”. The school should be an entity of its own, with its own rules and norms. There we enter with part of our identities for part of our lives. The school should make demands on pupils and challenge them. That can make them grow and realize their potential. Stroking them with the grain and building false intimacy and identity hinders rather than helps these future-oriented aspirations.

To be able to learn, one must be able to “take in” differences. That also implies a demand for distancing. Pupils are not helped by a school that merely mirrors and affirms their own life-world. It should broaden and deepen that world. Here theoretical knowledge also becomes important. Without it, no deepening of, or perspective on, one’s own life-world can take place. An identity of school and society would cement pupils in the present instead of preparing them for an unknown and uncertain future.

The school should be the fixed point in young people’s existence to which they – like their teachers – can come and temporarily withdraw from the storms of family and social life. The school should be an island in a world of intense change. Learning requires concentration and the possibility of shutting things out. In our hyper-mediatized world, the latter is perhaps especially important. Amid the constant digital noise with which young people surround themselves around the clock, islands of distance and calm are needed, with opportunities for reflection, for sifting the noise and for working information up into knowledge. Not in order to withdraw permanently from the worries and troubles of the life-world, but in order – with the strength, skills and perspectives that a school grounded in knowledge and citizenship can give – to better tackle the constant real transformations that characterize our lives.

The division of labour is a precondition of civilization. That also applies to the school in relation to society. The school cannot and should not solve or accommodate all the problems that a constantly changing, open world creates. The school cannot compensate for all the risks of the modernization process. Family and parents – like society at large – have a responsibility that the school cannot simply be expected to step in and replace or compensate for. Many social researchers describe family, norms and social life as dissolving everywhere today. Precisely for that reason it is so important that the school not function first and foremost as a compensatory social institution. Then it loses its soul. Then it loses its aura and its capacity to function as an energizing dream that everything can be different – and that the school can help make it so.

We all have several different identities. We all have different aspirations, backgrounds and dreams. But in school we should meet as equals. Different, but equal. When we walk through the school gate we are all equals. Entering the school also means that you (temporarily) leave family and society and step into a bounded space with its own rules and goals. For part of the day we enter a world in which we jointly create ourselves as citizens and practitioners of knowledge.

If pupils are to learn something new in school, it must be something other than an extension of their everyday lives. If the school is to catalyse and change, it must be something else, not identical with its surroundings. In today’s society the school must be allowed to function as something different – an alternative to the eroding market forces that now threaten the social fabric by reducing citizens to consumers. Knowledge is a necessary precondition for resisting this development. When family and society do not stand firm, the school must be able to stand up and safeguard the rising generation’s genuine emancipatory interests.

A good school – not least in our meritocratic knowledge society – is an important precondition for young people to be able to realize their dreams of improving their conditions in the future. It should help pupils out of the self-centred fixation on identity in which, developmentally, they find themselves. Today’s over-confidence in a school that takes its starting point in pupils’ individual, highly private and subjective life-world then works very badly. This over-belief in individualization now creates more problems than it solves.

The school should foster knowledge-seeking citizens. A school with religious, ethnic or profit-making motives is not a good school. The school should meet pupils on the basis of what they can become, not of what they are. It should equip pupils with a compass for the landscape of the future, so that they can learn to navigate uncertain waters. To fulfil the principle of hope, the school must be allowed to be an island of good otherness, unfettered by identity politics in all its forms. The more the school is drawn into society’s cumulative dynamics, the more it loses its necessary status and its own logic.

If the school is to realize every pupil’s potential rather than his or her time-bound and contingent facticity, it must be able to build bridges to the pupil’s life-world while preserving the distance between school and society.

The income trend of the power elite

18 February, 2014 at 11:21 | Posted in Economics, Politics & Society | 2 Comments

The average pre-tax income of the power elite in 2012 corresponded to 16.8 industrial workers’ wages. This means that the power elite’s relative incomes have remained at the same level for the last three years …

The income we measure for the power elite is earned income plus income from capital and business activity. In 2012 the average total pre-tax income of the power elite was 5.5 million kronor, compared with an average industrial worker’s wage of 328,000 kronor.

In 1950, the first year covered by this survey, the power elite’s average income amounted to 11.1 industrial workers’ wages. In 1980, the year when the income gap between the power elite and industrial workers was at its smallest, the power elite’s average income amounted to 4.9 industrial workers’ wages …

Gender equality is advancing very slowly in the economic power elite. Before 2000 there was not a single woman in this group, and in 2012 only three of the business sector’s 50 leading executives are women …

In 2012 the average income of the women in the power elite as a whole is about 32 per cent of the men’s. The share has remained at this level since 2004, when it was 45 per cent. While the number of women in the power elite is increasing somewhat, men’s incomes continue to be considerably higher.

Landsorganisationen i Sverige

[h/t Jan Milch]
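
The headline ratio is easy to check against the report’s own figures: 5.5 million kronor divided by 328,000 kronor gives roughly 16.8:

```python
# Checking the LO report's headline ratio: average elite income divided
# by an average industrial worker's wage (figures from the quoted report).
elite_income = 5_500_000   # SEK, 2012
worker_wage = 328_000      # SEK, 2012
ratio = elite_income / worker_wage
print(f"{ratio:.1f} industrial-worker wages")  # ≈ 16.8
```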

And according to newly revised data from Statistics Sweden (SCB), disposable income per consumption unit (excluding capital gains, by decile, all persons 1995–2010, mean values in thousands of kronor per consumption unit at 2010 prices) has developed as follows in recent years:

And it is even worse if one looks at the development of wealth. In many ways the trend testifies to the return of the class society. One wonders what role tax cuts for top earners and reduced benefits for low-income earners, the sick and the unemployed have played …
