Anders Borg barks – but who cares?
29 Feb, 2012 at 12:19 | Posted in Varia | Comments Off on Anders Borg barks – but who cares?
ECB declares Greece bankrupt!
28 Feb, 2012 at 12:01 | Posted in Economics | Comments Off on ECB declares Greece bankrupt!
A press release from the ECB now makes clear that Greece is, in effect, being declared bankrupt. This comes after the latest downgrade by the credit rating agency Standard & Poor's, which last night announced that Greece's rating is being cut to "selective default" from the previous CC long-term and C short-term:
28 February 2012 – Eligibility of Greek bonds used as collateral in Eurosystem monetary policy operations
The Governing Council of the European Central Bank (ECB) has decided to temporarily suspend the eligibility of marketable debt instruments issued or fully guaranteed by the Hellenic Republic for use as collateral in Eurosystem monetary policy operations. This decision takes into account the rating of the Hellenic Republic as a result of the launch of the private sector involvement offer.
At the same time, the Governing Council decided that the liquidity needs of affected Eurosystem counterparties can be satisfied by the relevant national central banks, in line with relevant Eurosystem arrangements (emergency liquidity assistance).
Private equity and welfare
28 Feb, 2012 at 10:31 | Posted in Politics & Society | Comments Off on Private equity and welfare
What is satire and what is reality? Sometimes you have to wonder. One person who certainly knows how to do satire, at any rate, is Max Gustafson. Brilliant!
Today's scarce commodity – competent ministers
27 Feb, 2012 at 20:58 | Posted in Economics | 8 Comments
New figures from Arbetsförmedlingen, the Swedish Public Employment Service, show that today fewer than four out of ten unemployed people receive unemployment insurance benefits. In a more than usually confusing interview on the Swedish Radio news programme Ekot the other day, Minister for Employment Hillevi Engström commented on the figures. The always readable Storstad took the time to check the facts – unlike the minister, who just let one piece of empty babble follow another in a steady stream:
Hillevi Engström: "My big task is, after all, to make sure that people can enter the labour market and earn their own living."
The facts: Unemployment in September 2006, when the centre-right government took over: 6.1 per cent. In January 2012: 8.0 per cent.
Over the same period the employment rate has fallen from 65.8 per cent to 63.7 per cent. Source: SCB/AKU.
Hillevi Engström: "Ultimately we do have a safety net in society that works, and that is how it should be."
The facts: In 2006, 70 per cent of the unemployed had unemployment insurance. Today the figure is 36 per cent, DN reports.
Unemployment and sickness insurance no longer function as a safety net. Instead people have to turn to the last outpost, social assistance: "Four out of ten recipients of financial assistance are job-seekers […] People who are on sick leave or receive sickness or activity compensation and need financial assistance amount to 14 per cent," SKL wrote in a report in October 2010. "Those who cannot support themselves because of social problems make up only eleven per cent."
Hillevi Engström: "But unemployment insurance itself is an income-loss insurance, meant to insure your income between two jobs."
The facts: Since 2006, the share of those with unemployment insurance who receive 80 per cent of their previous income has fallen from about 40 per cent to 12 per cent.
Isn't it just great to live in a country governed by such competent and well-informed ministers? Or is it?
Freedom of choice
27 Feb, 2012 at 17:14 | Posted in Politics & Society | Comments Off on Freedom of choice
Real freedom is not getting to choose.
Real freedom is being able to choose which choices are to be made.
Wash your ears with Aurora
26 Feb, 2012 at 17:57 | Posted in Varia | Comments Off on Wash your ears with Aurora
In these times – when the soundscape is drowned in the self-satisfied verbal diarrhoea of commercial radio and the utterly vacuous drivel of Melodifestivalen – one has almost given up. But there is light in the darkness! Every morning on the radio channel P2 runs Aurora, a programme of refreshment and serious music. Wonderful. So seize the chance to start the day with a musical ear-wash and clear your ear canals of any remaining musical slag. On Aurora you can, for example, listen to Arvo Pärt's "Spiegel im Spiegel" from his "Portrait". Listening to such music for eight minutes gives the mind peace and makes hope return. Thank you, public service radio! Thank you, Arvo!
Of course we loooove Melodifestivalen
25 Feb, 2012 at 23:20 | Posted in Varia | Comments Off on Of course we loooove Melodifestivalen
"Musical entertainment" is for many a way of escaping everyday life through synthetic daydreams. With programmes like Melodifestivalen, culture has renounced its autonomy in order to – as Adorno and Horkheimer put it – "proudly take its place among the consumption goods." This kind of "music programme" – where nowadays even tone-deaf authors can make their fortune – is one of the most visible signs of culture's barbaric decay under the market's colonisation of our lifeworld. Here the non plus ultra of superficiality parades as the undifferentiated mush it is.
Three million flies can't be wrong – eat shit!
Fun with Statistics
25 Feb, 2012 at 18:35 | Posted in Statistics & Econometrics | Comments Off on Fun with Statistics
When giving courses in statistics and econometrics I usually – especially at introductory and intermediate levels – encourage my students to use the web as a complement to the textbooks. A fun way to help you learn statistics is to use YouTube. Check out, for example, StatisticsFun, where you can watch instructive videos on regression analysis, hypothesis testing, ANOVA and a lot more in the statistical toolbox.
Profit in the welfare sector – redux
24 Feb, 2012 at 16:24 | Posted in Politics & Society | Comments Off on Profit in the welfare sector – redux
The Social Democratic journal of ideas and debate, Tiden, has long been dull, uninteresting and lacking in intellectual vigour.
But with the new editor, Daniel Suhonen, something seems to have happened. Good! And this picture surely shows that the journal really is prepared to take up the fight with the gravediggers of the welfare state …
Welfare cannot be bought
24 Feb, 2012 at 14:01 | Posted in Politics & Society | Comments Off on Welfare cannot be bought
The chair of the Social Democratic women's association, Lena Sommestad, recently published a blog post, interesting from the point of view of principle, on whether welfare is something that can or should be bought on a "welfare market":
I share the view that better transparency is needed in the welfare companies. It would be excellent if we had both the principle of public access to official documents and freedom of communication for employees. But I do not share the view that increased transparency is a sufficient measure to solve the problems created in the competition-exposed welfare markets.
…
My question is: should the quality of welfare services be determined by the individual's ability to monitor the providers and their quality? Is it really the person who is best at choosing who should get the best schooling and care?
My view is that citizens in Sweden should be offered education, health care and social care of equal worth and good quality, regardless of what resources and what competence the individual citizen possesses. Welfare is a social right, not a service whose quality should depend on how capable and well-informed you are as a customer on the welfare market.
Probabilistic econometrics – science without foundations (part I)
21 Feb, 2012 at 15:31 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 5 Comments
Modern probabilistic econometrics relies on the notion of probability. To be at all amenable to econometric analysis, economic observations allegedly have to be conceived of as random events.
But is it really necessary to model the economic system as a system where randomness can only be analyzed and understood when based on an a priori notion of probability?
Where do probabilities come from?
In probabilistic econometrics, events and observations are as a rule interpreted as random variables as if generated by an underlying probability density function, and a fortiori – since probability density functions are only definable in a probability context – consistent with a probability. As Haavelmo (1944:iii) has it:
For no tool developed in the theory of statistics has any meaning – except, perhaps, for descriptive purposes – without being referred to some stochastic scheme.
When attempting to convince us of the necessity of founding empirical economic analysis on probability models, Haavelmo – building largely on the earlier Fisherian paradigm – actually forces econometrics to (implicitly) interpret events as random variables generated by an underlying probability density function.
This is at odds with reality. Randomness obviously is a fact of the real world. Probability, on the other hand, attaches to the world via intellectually constructed models, and a fortiori is only a fact of a probability generating machine or a well constructed experimental arrangement or “chance set-up”.
Just as there is no such thing as a “free lunch,” there is no such thing as a “free probability.” To be able to talk about probabilities at all, you have to specify a model. If there is no chance set-up or model that generates the probabilistic outcomes or events – in statistics any process one observes or measures is referred to as an experiment (rolling a die) and the results obtained as the outcomes or events of that experiment (the number of points rolled with the die, e.g. 3 or 5) – then, strictly speaking, there is no event at all.
Probability is a relational element. It always must come with a specification of the model from which it is calculated. And then to be of any empirical scientific value it has to be shown to coincide with (or at least converge to) real data generating processes or structures – something seldom or never done!
And this is the basic problem with economic data. If you have a fair roulette-wheel, you can arguably specify probabilities and probability density distributions. But how do you conceive of the analogous nomological machines for prices, gross domestic product, income distribution, etc.? Only by a leap of faith. And that does not suffice. You have to come up with some really good arguments if you want to persuade people to believe in the existence of socio-economic structures that generate data with characteristics conceivable as stochastic events portrayed by probabilistic density distributions!
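To make the notion of a chance set-up concrete, here is a minimal Python sketch of the kind of fully specified "nomological machine" the text has in mind – a fair die. The seed and the number of rolls are arbitrary illustrative assumptions, and the contrast with macroeconomic data is spelled out only in the comments:

```python
import random

# A minimal sketch of a fully specified chance set-up ("nomological machine"):
# a fair six-sided die whose model probabilities are 1/6 per face by construction.
# The seed and the number of rolls are arbitrary illustrative choices.
random.seed(1)

MODEL_PROB = 1 / 6
rolls = [random.randint(1, 6) for _ in range(100_000)]

for face in range(1, 7):
    freq = rolls.count(face) / len(rolls)
    print(f"face {face}: relative frequency {freq:.3f} vs model probability {MODEL_PROB:.3f}")

# The frequencies track the model probabilities only because we ourselves built
# the machine generating the outcomes. For prices, GDP or income distribution no
# such explicitly specified chance set-up is handed to us; the analogous
# probabilities have to be argued for, not assumed.
```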
From a realistic point of view we really have to admit that the socio-economic states of nature that we talk of in most social sciences – and certainly in econometrics – are not amenable to analysis in terms of probabilities, simply because in the real-world open systems that the social sciences – including econometrics – analyze, there are no probabilities to be had!
The processes that generate socio-economic data in the real world cannot just be assumed to always be adequately captured by a probability measure. And, so, it cannot really be maintained – as in the Haavelmo paradigm of probabilistic econometrics – that it even should be mandatory to treat observations and data – whether cross-section, time series or panel data – as events generated by some probability model. The important activities of most economic agents do not usually include throwing dice or spinning roulette-wheels. Data generating processes – at least outside of nomological machines like dice and roulette-wheels – are not self-evidently best modeled with probability measures.
If we agree on this, we also have to admit that probabilistic econometrics lacks a sound justification. I would even go further and argue that there really is no justifiable rationale at all for this belief that all economically relevant data can be adequately captured by a probability measure. In most real world contexts one has to argue one’s case. And that is obviously something seldom or never done by practitioners of probabilistic econometrics.
What is randomness?
Econometrics and probability theory are intertwined with randomness. But what is randomness?
In probabilistic econometrics it is often defined with the help of independent trials – two events are said to be independent if the occurrence or non-occurrence of either one has no effect on the probability of the occurrence of the other – such as drawing cards from a deck, picking balls from an urn, spinning a roulette wheel or tossing coins; trials which are only definable if somehow set in a probabilistic context.
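In standard probability notation, the independence invoked here is simply the requirement that the joint probability factorizes; for two events A and B defined on the same probability space:

\[
P(A \cap B) = P(A)\,P(B), \qquad \text{or equivalently} \qquad P(A \mid B) = P(A) \ \text{ whenever } P(B) > 0 .
\]

Note that the definition presupposes that a probability measure P – and hence a sample space – has already been specified, which is precisely the point at issue.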
But if we pick a sequence of prices – say 2, 4, 3, 8, 5, 6, 6 – that we want to use in an econometric regression analysis, how do we know that the sequence of prices is random and, a fortiori, can be treated as generated by an underlying probability density function? How can we argue that the sequence is a sequence of probabilistically independent random prices? And are they really random in the sense most often applied in probabilistic econometrics – where X is called a random variable only if there is a sample space S with a probability measure and X is a real-valued function over the elements of S?
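As an illustration of how little a mechanical check can establish here, the following minimal Python sketch computes the lag-1 sample autocorrelation of the seven prices above. Even formulating the check presupposes the very probability model that is in question:

```python
# A minimal sketch: probing the seven prices above for serial dependence.
# Note that even formulating this check presupposes a probability model for the
# data, which is exactly the problem at issue; the number below settles nothing
# about whether the prices "really" are independent random draws.

prices = [2, 4, 3, 8, 5, 6, 6]

n = len(prices)
mean = sum(prices) / n
var = sum((p - mean) ** 2 for p in prices) / n

# Lag-1 sample autocorrelation: a value near 0 is consistent with independence,
# but with only seven observations it cannot possibly demonstrate it.
acf1 = sum((prices[t] - mean) * (prices[t + 1] - mean) for t in range(n - 1)) / (n * var)
print(f"lag-1 sample autocorrelation: {acf1:.2f}")
```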
Bypassing the scientific challenge of going from describable randomness to calculable probability by just assuming it is of course not an acceptable procedure. Since a probability density function is a “Gedanken” object that does not exist in a natural sense, it has to come with an export license to our real target system if it is to be considered usable.
Among those who at least honestly try to face the problem, the usual procedure is to refer to some artificial mechanism operating in some “game of chance” of the kind mentioned above, which generates the sequence. But then we still have to show that the real sequence somehow coincides with the ideal sequence that defines independence and randomness within our – to speak with science philosopher Nancy Cartwright (1999) – “nomological machine”, our chance set-up, our probabilistic model.
As the originator of the Kalman filter, Rudolf Kalman (1994:143), notes:
Not being able to test a sequence for ‘independent randomness’ (without being told how it was generated) is the same thing as accepting that reasoning about an “independent random sequence” is not operationally useful.
So why should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of – and actually, strictly speaking, do not exist at all – without specifying such system contexts (how many sides the dice have, whether the cards are unmarked, etc.).
If we do adhere to the Fisher–Haavelmo paradigm of probabilistic econometrics, we also have to assume that all noise in our data is probabilistic and that errors are well-behaved – something that is hard to justifiably argue for as a real phenomenon rather than just an operationally and pragmatically tractable assumption.
Maybe Kalman’s (1994:147) verdict that
Haavelmo’s error that randomness = (conventional) probability is just another example of scientific prejudice
is, seen from this perspective, not far-fetched.
Accepting Haavelmo's domain of probability theory and sample space of infinite populations – just as Fisher's (1922:311) “hypothetical infinite population, of which the actual data are regarded as constituting a random sample”, von Mises' “collective” or Gibbs' “ensemble” – also implies that judgments are made on the basis of observations that are actually never made!
Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
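To make the point about observations never made concrete, here is a minimal Python sketch, built on an invented and purely illustrative data generating process, contrasting the single sample we actually observe with the thousands of hypothetical repetitions that frequentist sampling theory implicitly reasons about:

```python
import random

# A minimal sketch (with an invented, purely illustrative data generating
# process) of the "hypothetical infinite population" that frequentist inference
# reasons about: the sampling distribution of a mean refers to repetitions that,
# for a historical socio-economic time series, are never actually carried out.
random.seed(2)

def draw_sample(n=50):
    return [random.gauss(0, 1) for _ in range(n)]

# What we actually have: one realized sample and its mean.
observed = draw_sample()
print("the single observed mean:", round(sum(observed) / len(observed), 3))

# What the inference implicitly appeals to: thousands of re-runs of history.
hypothetical_means = [sum(s) / len(s) for s in (draw_sample() for _ in range(10_000))]
grand_mean = sum(hypothetical_means) / len(hypothetical_means)
spread = (sum((m - grand_mean) ** 2 for m in hypothetical_means) / len(hypothetical_means)) ** 0.5
print("spread of the mean over 10,000 hypothetical repetitions:", round(spread, 3))
```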
As David Salsburg (2001:146) notes on probability theory:
[W]e assume there is an abstract space of elementary things called ‘events’ … If a measure on the abstract space of events fulfills certain axioms, then it is a probability. To use probability in real life, we have to identify this space of events and do so with sufficient specificity to allow us to actually calculate probability measurements on that space … Unless we can identify [this] abstract space, the probability statements that emerge from statistical analyses will have many different and sometimes contrary meanings.
Just as e.g. Keynes (1921) and Georgescu-Roegen (1971), Salsburg (2001:301f) is very critical of the way social scientists – including economists and econometricians – uncritically and without arguments have come to simply assume that one can apply probability distributions from statistical theory to their own area of research:
Probability is a measure of sets in an abstract space of events. All the mathematical properties of probability can be derived from this definition. When we wish to apply probability to real life, we need to identify that abstract space of events for the particular problem at hand … It is not well established when statistical methods are used for observational studies … If we cannot identify the space of events that generate the probabilities being calculated, then one model is no more valid than another … As statistical models are used more and more for observational studies to assist in social decisions by government and advocacy groups, this fundamental failure to be able to derive probabilities without ambiguity will cast doubt on the usefulness of these methods.
Some wise words that ought to be taken seriously by probabilistic econometricians are also offered by mathematical statistician Gunnar Blom (2004:389):
If the demands of randomness are not at all fulfilled, you only do damage to your analysis by using statistical methods. The analysis acquires an air of science around it that it does not at all deserve.
Richard von Mises (1957:103) noted that
Probabilities exist only in collectives … This idea, which is a deliberate restriction of the calculus of probabilities to the investigation of relations between distributions, has not been clearly carried through in any of the former theories of probability.
And obviously not in Haavelmo's paradigm of probabilistic econometrics either. It would have been better if one had heeded von Mises' warning (1957:172) that
the field of application of the theory of errors should not be extended too far.
This importantly also means that if you cannot show that the data satisfy all the conditions of the probabilistic nomological machine – including, e.g., that the distribution of the deviations corresponds to a normal curve – then the statistical inferences used lack sound foundations! And this really is the basis of the argument put forward in this essay – probabilistic econometrics lacks sound foundations.
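As a concrete, hedged illustration of the kind of condition that would at least have to be checked rather than assumed, here is a minimal Python sketch on made-up data. The seed, the sample size and the choice of the Shapiro-Wilk test are all illustrative assumptions, not anything prescribed in the text:

```python
import numpy as np
from scipy import stats

# A minimal sketch, on made-up data, of checking (rather than assuming) one
# condition of the probabilistic nomological machine: that the deviations
# (regression residuals) are compatible with a normal curve. The seed, sample
# size and the choice of the Shapiro-Wilk test are all illustrative assumptions.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = 1.5 * x + rng.standard_t(df=3, size=x.size)  # deliberately fat-tailed errors

slope, intercept = np.polyfit(x, y, 1)           # ordinary least squares fit
residuals = y - (intercept + slope * x)

stat, p_value = stats.shapiro(residuals)         # Shapiro-Wilk normality test
print(f"Shapiro-Wilk p-value: {p_value:.4f}")
# A small p-value signals that the normality condition is not met, and with it
# the usual justification for normal-theory inference. Passing the test would,
# of course, still not establish that a chance set-up exists at all.
```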
References
Blom, Gunnar et al. (2004), Sannolikhetsteori och statistikteori med tillämpningar. Lund: Studentlitteratur.
Cartwright, Nancy (1999), The Dappled World. Cambridge: Cambridge University Press.
Fisher, Ronald (1922), On the mathematical foundations of theoretical statistics. Philosophical Transactions of The Royal Society A, 222.
Georgescu-Roegen, Nicholas (1971), The Entropy Law and the Economic Process. Harvard University Press.
Haavelmo, Trygve (1944), The probability approach in econometrics. Supplement to Econometrica 12:1-115.
Kalman, Rudolf (1994), Randomness Reexamined. Modeling, Identification and Control 3:141-151.
Keynes, John Maynard (1973 (1921)), A Treatise on Probability. Volume VIII of The Collected Writings of John Maynard Keynes, London: Macmillan.
Pålsson Syll, Lars (2007), John Maynard Keynes. Stockholm: SNS Förlag.
Salsburg, David (2001), The Lady Tasting Tea. Henry Holt.
von Mises, Richard (1957), Probability, Statistics and Truth. New York: Dover Publications.
Of course she was a “bloody old hag”
19 Feb, 2012 at 10:47 | Posted in Varia | 2 Comments
Mikael Wiehe could not quite contain himself on Nyhetsmorgon today. At the thought of the Iron Lady's infamous “There is no such thing as a society”, it all boiled over and he simply had to add: “Jävla kärring” – “bloody old hag”.
And on the substance he is of course right. For what else is one to think of the dictatorship-hugger Margaret Thatcher? Possibly Wiehe also had other things in mind. Such as Thatcher's undisguised ability to look the other way when it came to apartheid in South Africa and the dictatorship in Chile. Or that she advocated reintroducing the death penalty. Or the introduction of legislation forbidding public authorities from portraying homosexuality in a positive light. Or that the super-lady, during her eleven years in power, appointed one single female cabinet minister. Or her hate-filled fight against the trade unions.
Wiehe has admittedly never been a musical favourite of yours truly, but today of all days I am happy to listen to him for a while:
Randomness and ergodic theory in economics – what went wrong?
18 Feb, 2012 at 12:23 | Posted in Statistics & Econometrics, Theory of Science & Methodology | 6 Comments
Ergodicity is a difficult concept that many students of economics have problems understanding. In the very instructive video below, Ole Peters – from the Department of Mathematics at Imperial College London – has made an admirably simplified and pedagogical exposition of what it means for the probability structures of stationary processes and ensembles to be ergodic. Using a progression of simulated coin flips, his example shows the all-important difference between time averages and ensemble averages for this kind of process:
To understand real-world ”non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty – where real historical time rules the roost – the probabilities that ruled the past are not those that will rule the future.
When we cannot accept that the observations, along the time-series available to us, are independent … we have, in strict logic, no more than one observation, all of the separate items having to be taken together. For the analysis of that the probability calculus is useless; it does not apply … I am bold enough to conclude, from these considerations that the usefulness of ‘statistical’ or ‘stochastic’ methods in economics is a good deal less than is now conventionally supposed … We should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand. Very often they are not … The probability calculus is no excuse for forgetfulness.
John Hicks, Causality in Economics, 1979:121
Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages – which are, a fortiori, in any relevant sense timeless – is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.
Added I: Some of my readers have asked why the difference between ensemble and time averages is of importance. Well, basically because when you assume the processes to be ergodic, ensemble and time averages are identical. Let me give an example even simpler than the one Peters gives:
Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and later falls by 50%. The ensemble average for this asset would be 100 € – because here we envision two parallel universes (markets), where the asset price falls by 50% to 50 € in one universe (market) and rises by 50% to 150 € in the other, giving an average of 100 € ((150+50)/2). The time average for this asset would be 75 € – because here we envision one universe (market) where the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5*150).
From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.
Assuming ergodicity there would have been no difference at all.
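Here is a minimal Python sketch of the example. The seed, the 100,000 parallel "markets" and the 1,000 time steps are illustrative choices, not anything given in the original:

```python
import random

# A minimal sketch of the example above: an asset at 100 euro that each period
# either rises or falls by 50% with equal probability (a multiplicative process).
# The seed, the number of parallel "markets" and the number of time steps are
# arbitrary illustrative choices.
random.seed(4)

START = 100.0

def one_step(price):
    return price * (1.5 if random.random() < 0.5 else 0.5)

# Ensemble perspective: many parallel universes (markets), each taking one step.
universes = [one_step(START) for _ in range(100_000)]
print("ensemble average after one step:", round(sum(universes) / len(universes), 1))

# Time perspective: one single market followed over many successive steps.
price = START
steps = 1_000
for _ in range(steps):
    price = one_step(price)
growth_factor = (price / START) ** (1 / steps)   # ~ sqrt(1.5 * 0.5), roughly 0.87
print("average growth factor along one path:", round(growth_factor, 3))
```

The ensemble average stays at roughly 100 €, while the single market followed through time shrinks by roughly 13 per cent per period; the two perspectives disagree precisely because the process is not ergodic.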
Added II: Just in case you think this is merely an academic quibble without repercussions for our real lives, let me quote from an article by Peters in the Santa Fe Institute Bulletin from 2009 – On Time and Risk – which makes it perfectly clear that the flaw in thinking about uncertainty in terms of “rational expectations” and ensemble averages has had real repercussions on the functioning of the financial system:
In an investment context, the difference between ensemble averages and time averages is often small. It becomes important, however, when risks increase, when correlation hinders diversification, when leverage pumps up fluctuations, when money is made cheap, when capital requirements are relaxed. If reward structures—such as bonuses that reward gains but don’t punish losses, and also certain commission schemes—provide incentives for excessive risk, problems arise. This is especially true if the only limits to risk-taking derive from utility functions that express risk preference, instead of the objective argument of time irreversibility. In other words, using the ensemble average without sufficiently restrictive utility functions will lead to excessive risk-taking and eventual collapse. Sound familiar?
Added III: Still having problems understanding the ergodicity concept? Let me cite one last example that hopefully will make the concept more accessible on an intuitive level:
Why are election polls often inaccurate? Why is racism wrong? Why are your assumptions often mistaken? The answers to all these questions and to many others have a lot to do with the non-ergodicity of human ensembles. Many scientists agree that ergodicity is one of the most important concepts in statistics. So, what is it?
…
Suppose you are concerned with determining what the most visited parks in a city are. One idea is to take a momentary snapshot: to see how many people are this moment in park A, how many are in park B and so on. Another idea is to look at one individual (or few of them) and to follow him for a certain period of time, e.g. a year. Then, you observe how often the individual is going to park A, how often he is going to park B and so on.
Thus, you obtain two different results: one statistical analysis over the entire ensemble of people at a certain moment in time, and one statistical analysis for one person over a certain period of time. The first one may not be representative for a longer period of time, while the second one may not be representative for all the people. The idea is that an ensemble is ergodic if the two types of statistics give the same result. Many ensembles, like the human populations, are not ergodic.
Yours truly goes international
18 Feb, 2012 at 09:55 | Posted in Economics, Theory of Science & Methodology | Comments Off on Yours truly goes international
Last Tuesday yours truly posted an article here on the blog about one of the cornerstones of modern neoclassical macroeconomic theory – the rational expectations hypothesis. The article – David K. Levine is totally wrong on the rational expectations hypothesis – attracted international attention and was published the day before yesterday in the Real-World Economics Review. For those with an interest in economic theory (yes, believe it or not, such people exist) the subsequent discussion can be followed here.