The poverty of deductivism

17 March, 2018 at 17:52 | Posted in Theory of Science & Methodology | 4 Comments

The idea that inductive support is a three-place relation among hypothesis H, evidence e, and background factors Ki rather than a two-place relation between H and e has some drastic philosophical implications, which partly explains why philosophers of science have been so reluctant to endorse it. The inductivist program … aimed at doing for inductive inferences what logicians had done for deductive ones … Once the Ki enter the picture, the issue of inductive support becomes contextualized: one cannot answer it by merely looking at the features of e and H. An empirical investigation is necessary in order to establish whether the context is ‘right’ for e to be truly confirming evidence for H or not … Scientists’ knowledge of the context and circumstances of research is required in order to assess the validity of scientific inferences.


Scientific realism and inference to the best explanation

17 March, 2018 at 09:14 | Posted in Theory of Science & Methodology | 11 Comments

In a time when scientific relativism is expanding, it is important to keep up the claim for not reducing science to a pure discursive level. We have to maintain the Enlightenment tradition of thinking of reality as principally independent of our views of it and of the main task of science as studying the structure of this reality. Perhaps the most important contribution a researcher can make is to reveal what this reality that is the object of science actually looks like.

Science is made possible by the fact that there are structures that are durable and largely independent of our knowledge or beliefs about them. There exists a reality beyond our theories and concepts of it. It is this independent reality that our theories in some way deal with. Contrary to positivism, I would as a critical realist argue that the main task of science is not to detect event-regularities between observed facts. Rather, that task must be conceived as identifying the underlying structure and forces that produce the observed events.

Instead of building models based on logic-axiomatic, topic-neutral, context-insensitive and non-ampliative deductive reasoning — as in mainstream economic theory — it would be much more fruitful and relevant to apply inference to the best explanation.

People object that the best available explanation might be false. Quite so – and so what? It goes without saying that any explanation might be false, in the sense that it is not necessarily true. It is absurd to suppose that the only things we can reasonably believe are necessary truths …

People object that being the best available explanation of a fact does not prove something to be true or even probable. Quite so – and again, so what? The explanationist principle – “It is reasonable to believe that the best available explanation of any fact is true” – means that it is reasonable to believe or think true things that have not been shown to be true or probable, more likely true than not.

Alan Musgrave

Abduction — the induction that constitutes the essence of scientific reasoning

15 March, 2018 at 17:15 | Posted in Theory of Science & Methodology | 3 Comments

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
————-
p

or, in instantiated form

(1) ∀x (Gx => Px)

(2) Pa
————
Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And as always when we deal with explanations, what counts as best is relative to what we know of the world. In the real world, all evidence is relational (e only counts as evidence in relation to a specific hypothesis H) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
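To make the schematic idea concrete, here is a minimal toy sketch in Python — not any formal account of inference to the best explanation, and with purely illustrative hypotheses and numbers of my own choosing — in which candidate explanations are scored by how much of the evidence they account for, with background plausibility (given contextual assumptions) breaking ties:

```python
# A toy sketch of picking the "best" explanation: each candidate hypothesis is
# scored by how much of the evidence it accounts for, with background
# plausibility (given contextual assumptions) breaking ties. All hypotheses
# and numbers are illustrative assumptions, not anything from the text.

evidence = {"wet streets", "umbrellas out", "low clouds"}

candidates = {
    "it rained":             {"accounts_for": {"wet streets", "umbrellas out", "low clouds"},
                              "plausibility": 0.8},
    "street-cleaning truck": {"accounts_for": {"wet streets"},
                              "plausibility": 0.5},
    "burst water main":      {"accounts_for": {"wet streets"},
                              "plausibility": 0.1},
}

def best_explanation(evidence, candidates):
    """Return the hypothesis accounting for most of the evidence,
    with higher background plausibility as tie-breaker."""
    def score(item):
        _, hypothesis = item
        return (len(hypothesis["accounts_for"] & evidence), hypothesis["plausibility"])
    name, _ = max(candidates.items(), key=score)
    return name

print(best_explanation(evidence, candidates))   # -> it rained
```

Accepting the winning hypothesis remains, of course, a fallible step: the ranking depends entirely on which competitors and which background assumptions we put into the comparison.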

Accepting a hypothesis means that you believe it does explain the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i. e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives who use inference-to-the-best-explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

The problem of extrapolation

14 February, 2018 at 00:01 | Posted in Theory of Science & Methodology | 8 Comments

There are two basic challenges that confront any account of extrapolation that seeks to resolve the shortcomings of simple induction. One challenge, which I call the extrapolator’s circle, arises from the fact that extrapolation is worthwhile only when there are important limitations on what one can learn about the target by studying it directly. The challenge, then, is to explain how the suitability of the model as a basis for extrapolation can be established given only limited, partial information about the target … The second challenge is a direct consequence of the heterogeneity of populations studied in biology and social sciences. Because of this heterogeneity, it is inevitable that there will be causally relevant differences between the model and the target population.

In economics — as a rule — we can’t experiment on the real-world target directly. To experiment, economists therefore standardly construct ‘surrogate’ models and perform ‘experiments’ on them. To be of interest to us, these surrogate models have to be shown to be relevantly ‘similar’ to the real-world target, so that knowledge from the model can be exported to the real-world target. The fundamental problem highlighted by Steel is that this ‘bridging’ is deeply problematic — to show that what is true of the model is also true of the real-world target, we have to know what is true of the target, but to know what is true of the target we have to know that we have a good model …

Most models in science are representations of something else. Models “stand for” or “depict” specific parts of a “target system” (usually the real world). A model that has neither surface nor deep resemblance to important characteristics of real economies ought to be treated with prima facie suspicion. How could we possibly learn about the real world if there are no parts or aspects of the model that have relevant and important counterparts in the real-world target system? The burden of proof lies with theoretical economists who think they have contributed something of scientific relevance without even hinting at any bridge enabling us to traverse from model to reality. All theories and models have to use sign vehicles to convey some kind of content that may be used for saying something about the target system. But purpose-built tractability assumptions — such as invariance, additivity, faithfulness, modularity, common knowledge, etc. — made solely to secure a way of reaching deductively validated results in mathematical models, are of little value if they cannot be validated outside of the model.

All empirical sciences use simplifying or unrealistic assumptions in their modeling activities. That is (no longer) the issue – as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.

Theories are difficult to directly confront with reality. Economists therefore build models of their theories. Those models are representations that are directly examined and manipulated to indirectly say something about the target systems.

There are economic methodologists and philosophers who argue for a less demanding view of modelling and theorizing in economics. To some theoretical economists it is deemed quite enough to consider economics a mere “conceptual activity” where the model is seen not so much as an abstraction from reality as a kind of “parallel reality”. By considering models as such constructions, the economist distances the model from the intended target, demanding only that the models be credible, and thereby enabling himself to make inductive inferences to the target systems.

But what gives license to this leap of faith, this “inductive inference”? Within-model inferences in formal-axiomatic models are usually deductive, but that does not come with a warrant of reliability for inferring conclusions about specific target systems. Since all models in a strict sense are false (necessarily building in part on false assumptions) deductive validity cannot guarantee epistemic truth about the target system. To argue otherwise would surely be an untenable overestimation of the epistemic reach of surrogate models.

Models do not only face theory. They also have to look to the world. But being able to model a credible world, a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way. The falsehood or lack of realism has to be qualified (in terms of resemblance, relevance, etc.). At the very least, the minimalist demand on models in terms of credibility has to give way to a stronger epistemic demand of appropriate similarity and plausibility. One could of course also ask for a sensitivity or robustness analysis, but the credible world, even after having been tested for sensitivity and robustness, can still be a long way from reality — and unfortunately often in ways we know are important. Robustness of claims in a model does not per se give a warrant for exporting the claims to real-world target systems.

Questions of external validity — the claims the extrapolation inference is supposed to deliver — are important. It can never be enough that models somehow are regarded as internally consistent. One always also has to pose questions of consistency with the data. Internal consistency without external validity is worth nothing.

The arrow of time in a non-ergodic world

13 February, 2018 at 09:00 | Posted in Theory of Science & Methodology | 3 Comments

For the vast majority of scientists, thermodynamics had to be limited strictly to equilibrium. That was the opinion of J. Willard Gibbs, as well as of Gilbert N. Lewis. For them, irreversibility associated with unidirectional time was anathema …

I myself experienced this type of hostility in 1946 … After I had presented my own lecture on irreversible thermodynamics, the greatest expert in the field of thermodynamics made the following comment: ‘I am astonished that this young man is so interested in nonequilibrium physics. Irreversible processes are transient. Why not wait and study equilibrium as everyone else does?’ I was so amazed at this response that I did not have the presence of mind to answer: ‘But we are all transient. Is it not natural to be interested in our common human condition?’

Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and concentrate on ensemble averages — and hence in any relevant sense timeless — is not a sensible way of dealing with the kind of genuine uncertainty that permeates real-world economies.

Ergodicity and the all-important difference between time averages and ensemble averages are difficult concepts — so let me try to explain the meaning of these concepts by means of a couple of simple examples.

Let’s say you’re offered a gamble where on a roll of a fair die you will get €10 billion if you roll a six, and have to pay €1 billion if you roll any other number.

Would you accept the gamble?

If you’re an economics student you probably would, because that’s what you’re taught is the only thing consistent with being rational. You would arrest the arrow of time by imagining six different “parallel universes” where the independent outcomes are the numbers from one to six, and then weight them using their probability distribution. Calculating the expected value of the gamble – the ensemble average – by averaging over all these weighted outcomes, you would actually be a moron if you didn’t take the gamble (the expected value of the gamble being 1/6 · €10 billion – 5/6 · €1 billion ≈ €0.83 billion).

If you’re not an economist you would probably trust your common sense and decline the offer, knowing that a large risk of bankrupting one’s economy is not a very rosy prospect for the future. Since you can’t really arrest or reverse the arrow of time, you know that once you have lost the €1 billion, it’s all over. The large likelihood that you go bust weighs heavier than the 17% chance of becoming enormously rich. By computing the time average – imagining one real universe where the six different but dependent outcomes occur consecutively – we would soon be aware of our assets disappearing, and realize that it would be irrational to accept the gamble.
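To make the contrast concrete, here is a minimal simulation sketch in Python. The starting wealth of €1 billion and the number of simulated players are my own illustrative assumptions: the ensemble average of the gamble is comfortably positive, yet five out of six individual players are ruined on their very first roll and can never roll again.

```python
# A minimal simulation sketch (starting wealth of EUR 1 bn is an assumed,
# purely illustrative parameter): the ensemble average of the gamble is
# positive, yet most individual players go bust on their first roll.

import random

random.seed(1)

WIN, LOSS = 10.0, -1.0        # payoffs in billions of euros
N_PLAYERS = 100_000
START_WEALTH = 1.0            # billions of euros

# Ensemble average: average payoff over the six "parallel universes"
ensemble_ev = (1 / 6) * WIN + (5 / 6) * LOSS   # ~ +0.83 bn per roll

ruined = 0
total_wealth = 0.0
for _ in range(N_PLAYERS):
    wealth = START_WEALTH + (WIN if random.randint(1, 6) == 6 else LOSS)
    if wealth <= 0:           # bust: in real, irreversible time the game ends here
        ruined += 1
    total_wealth += wealth

print(f"ensemble expected value per roll: {ensemble_ev:+.2f} bn")
print(f"fraction ruined after one roll:   {ruined / N_PLAYERS:.1%}")   # ~ 83%
print(f"average wealth across players:    {total_wealth / N_PLAYERS:.2f} bn")
```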

Why is the difference between ensemble and time averages of such importance in economics? Well, basically, because when assuming the processes to be ergodic, ensemble and time averages are identical.

Assume we have a market with an asset priced at €100. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be €100 – because we here envision two parallel universes (markets) where the asset price falls by 50% to €50 in one universe (market), and rises by 50% to €150 in another, giving an average of €100 ((150+50)/2). The time average for this asset would be €75 – because we here envision one universe (market) where the asset price first rises by 50% to €150, and then falls by 50% to €75 (0.5 · 150).
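The €100 example can be written out in a few lines of Python; the sketch below simply reproduces the arithmetic of the paragraph above.

```python
# The EUR 100 asset from the text: one up-50% move and one down-50% move.

price = 100.0
up, down = 1.5, 0.5

# Ensemble average: two parallel one-period markets, one up and one down, averaged
ensemble_average = (price * up + price * down) / 2   # 100.0

# Time perspective: one market followed through both moves in sequence
end_of_path = price * up * down                      # 75.0

print(ensemble_average, end_of_path)                 # 100.0 75.0
```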

From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen. Assuming ergodicity there would have been no difference at all.

On a more economic-theoretical level, the difference between ensemble and time averages also highlights the problems concerning the neoclassical theory of expected utility.

When applied to the neoclassical theory of expected utility, one thinks in terms of “parallel universes” and asks what the expected return of an investment is, calculated as an average over these “parallel universes”. In our gamble example, it is as if one supposes that various “I”s are rolling the die and that the losses of many of them will be offset by the huge profit one of these “I”s makes. But this ensemble average does not work for an individual, for whom a time average better reflects the experience made in the “non-parallel universe” in which we live.

Time averages give a more realistic answer: one thinks in terms of the only universe we actually live in, and asks what the expected return of an investment is, calculated as an average over time.

Since we cannot go back in time – entropy and the arrow of time make this impossible – and the bankruptcy option is always at hand (extreme events and “black swans” are always possible) we have nothing to gain from thinking in terms of ensembles.

Actual events follow a fixed pattern of time, where events are often linked in a multiplicative process (as e. g. investment returns with “compound interest”) which is basically non-ergodic.

Instead of arbitrarily assuming that people have a certain type of utility function – as in the neoclassical theory – time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that in a parallel universe they could conceivably have been replenished is of little comfort to those who live in the one and only possible world that we call the real world.

Our gamble example can be applied to more traditional economic issues. If we think of an investor, we can basically describe his situation in terms of such a repeated bet. What fraction of his assets should an investor – who is about to make a large number of repeated investments – bet on his feeling that he can evaluate an investment better (p = 0.6) than the market (p = 0.5)? The greater the fraction, the greater the leverage. But also – the greater the risk. Letting p be the probability that his investment valuation is correct and (1 – p) the probability that the market’s valuation is correct, he optimizes the growth rate of his investments by investing a fraction of his assets equal to the difference between the probability that he will “win” and the probability that he will “lose”. This means that at each investment opportunity (according to the so-called Kelly criterion) he is to invest the fraction 0.6 – (1 – 0.6), i.e. 20% of his assets (and the optimal average growth rate of investment can be shown to be about 2% per bet: 0.6 log(1.2) + 0.4 log(0.8)).
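As a small check of the numbers in the paragraph above, here is the Kelly calculation for an even-money bet believed to win with probability p = 0.6 (the log is the natural logarithm):

```python
# The Kelly fraction and logarithmic growth rate for the even-money bet in the text.

import math

p = 0.6                        # investor's assessed probability of being right
f = p - (1 - p)                # Kelly fraction for an even-money bet: 0.2

# Expected logarithmic growth rate per bet when staking the fraction f
growth = p * math.log(1 + f) + (1 - p) * math.log(1 - f)

print(f"Kelly fraction:      {f:.0%}")       # 20%
print(f"growth rate per bet: {growth:.2%}")  # ~ 2.01%
```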

Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage – and risks – creates extensive and recurrent systemic crises. A more appropriate level of risk-taking is a necessary ingredient in any policy that seeks to curb excessive risk-taking.

To understand real world “non-routine” decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not necessarily those that will rule the future.

Irreversibility can no longer be identified with a mere appearance that would disappear if we had perfect knowledge … Figuratively speaking, matter at equilibrium, with no arrow of time, is ‘blind,’ but with the arrow of time, it begins to ‘see’ … The claim that the arrow of time is ‘only phenomenological,’ or subjective, is therefore absurd. We are actually the children of the arrow of time, of evolution, not its progenitors.

Ilya Prigogine


Mainstream economics gets the priorities wrong

20 January, 2018 at 17:46 | Posted in Theory of Science & Methodology | Comments Off on Mainstream economics gets the priorities wrong

There is something about the way economists construct their models nowadays that obviously doesn’t sit right.

The one-sided, almost religious, insistence on axiomatic-deductivist modelling as the only scientific activity worth pursuing in economics has still not given way to methodological pluralism based on ontological considerations (rather than formalistic tractability). In their search for model-based rigour and certainty, ‘modern’ economics has turned out to be a totally hopeless project in terms of real-world relevance.

If macroeconomic models – of whatever ilk – build on microfoundational assumptions of representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that model-based conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to real-world target systems is obviously not justifiable. The incompatibility between actual behaviour and the behaviour in macroeconomic models built on representative actors and rational-expectations microfoundations shows the futility of trying to represent real-world target systems with models flagrantly at odds with reality. As Robert Gordon once had it:

Rigor competes with relevance in macroeconomic and monetary theory, and in some lines of development macro and monetary theorists, like many of their colleagues in micro theory, seem to consider relevance to be more or less irrelevant.

The real harm done by Bayesianism

30 November, 2017 at 09:02 | Posted in Theory of Science & Methodology | 2 Comments

The bias toward the superficial and the response to extraneous influences on research are both examples of real harm done in contemporary social science by a roughly Bayesian paradigm of statistical inference as the epitome of empirical argument. For instance the dominant attitude toward the sources of black-white differential in United States unemployment rates (routinely the rates are in a two to one ratio) is “phenomenological.” The employment differences are traced to correlates in education, locale, occupational structure, and family background. The attitude toward further, underlying causes of those correlations is agnostic … Yet on reflection, common sense dictates that racist attitudes and institutional racism must play an important causal role. People do have beliefs that blacks are inferior in intelligence and morality, and they are surely influenced by these beliefs in hiring decisions … Thus, an overemphasis on Bayesian success in statistical inference discourages the elaboration of a type of account of racial disadvantages that almost certainly provides a large part of their explanation.

For all scholars seriously interested in questions on what makes up a good scientific explanation, Richard Miller’s Fact and Method is a must read. His incisive critique of Bayesianism is still unsurpassed.

One of yours truly’s favourite ‘problem-situating lecture arguments’ against Bayesianism goes something like this: Assume you’re a Bayesian turkey and hold a nonzero probability belief in the hypothesis H that “people are nice vegetarians who do not eat turkeys”, and that every day you see the sunrise confirms this belief. For every day you survive, you update your belief according to Bayes’ Rule

P(H|e) = [P(e|H)P(H)]/P(e),

where evidence e stands for “not being eaten” and P(e|H) = 1. Given that there do exist other hypotheses than H, P(e) is less than 1 and a fortiori P(H|e) is greater than P(H). Every day you survive increases your probability belief that you will not be eaten. This is totally rational according to the Bayesian definition of rationality. Unfortunately — as Bertrand Russell famously noticed — for every day that goes by, the traditional Christmas dinner also gets closer and closer …
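A short Python sketch of the turkey’s daily update is given below. The prior and the survival probability under the rival hypothesis are illustrative assumptions of mine, not anything given in the argument; the only point is that repeated application of Bayes’ Rule pushes the belief in H steadily upwards.

```python
# A toy sketch of the Bayesian turkey. Prior and likelihoods are assumed,
# illustrative numbers; e = "not eaten today", with P(e|H) = 1.

prior_H = 0.5                  # initial belief in H ("people are nice vegetarians")
p_e_given_H = 1.0              # surviving the day is certain if H is true
p_e_given_not_H = 0.99         # even a doomed turkey survives most days

belief = prior_H
for day in range(300):
    # Bayes' Rule: P(H|e) = P(e|H) P(H) / P(e)
    p_e = p_e_given_H * belief + p_e_given_not_H * (1 - belief)
    belief = p_e_given_H * belief / p_e

print(f"belief in H after 300 uneventful days: {belief:.3f}")   # ~ 0.95, and climbing
# ... right up until the traditional Christmas dinner.
```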

For more on my own objections to Bayesianism, see my Bayesianism — a patently absurd approach to science and One of the reasons I’m a Keynesian and not a Bayesian.

Emma Frans and the art of distinguishing science from nonsense

24 November, 2017 at 19:15 | Posted in Theory of Science & Methodology | 2 Comments

Emma Frans’s justly award-winning book Larmrapporten is a funny, knowledgeable and oh-so-necessary reckoning with all the pseudo-scientific nonsense that washes over us in the media these days. Not least in social media, a lot of ‘alternative facts’ and nonsense is being spread.

Although I have warmly recommended the book to students, friends and acquaintances, I cannot refrain from pointing out here — among mostly academically trained readers — that the book has one small weakness. It concerns the treatment of evidence-based knowledge, and in particular the picture of what is usually called the ‘gold standard’ of scientific evidence — randomized controlled trials (RCTs).

Frans writes:

RCTs are the type of study generally considered to have the highest evidential value. This is because chance decides who is exposed to the intervention and who serves as control. If the study is large enough, chance will see to it that the only significant difference between the groups being compared is whether or not they have been exposed to the intervention. If a difference in outcome can then be observed between the groups, we can feel confident that it is due to the intervention.

This is a fairly standard presentation of the (alleged) advantages of RCTs (among their proponents).

The only problem is that, from a strictly scientific point of view, it is wrong!

Let me explain why I think so with an illustrative example from the world of schooling.

Continue Reading Emma Frans and the art of distinguishing science from nonsense…

Randomization — a philosophical device gone astray

23 November, 2017 at 10:30 | Posted in Theory of Science & Methodology | 1 Comment

When giving courses in the philosophy of science, yours truly has often had David Papineau’s book Philosophical Devices (OUP 2012) on the reading list. Overall it is a good introduction to many of the instruments used when performing methodological and science-theoretical analyses of issues in economics and other social sciences.

Unfortunately, the book has also fallen prey to the randomization hype that scourges the sciences nowadays.

The hard way to show that alcohol really is a cause of heart disease is to survey the population … But there is an easier way … Suppose we are able to perform a ‘randomized experiment.’ The idea here is not to look at correlations in the population at large, but rather to pick out a sample of individuals, and arrange randomly for some to have the putative cause and some not.

The point of such a randomized experiment is to ensure that any correlation between the putative cause and effect does indicate a causal connection. This works because the randomization ensures that the putative cause is no longer itself systematically correlated with any other properties that exert a causal influence on the putative effect … So a remaining correlation between the putative cause and effect must mean that they really are causally connected.

The problem with this simplistic view on randomization is that the claims made by Papineau on behalf of randomization are both exaggerated and invalid:

• Even if you manage to do the assignment to treatment and control groups ideally randomly, the sample selection certainly is — except in extremely rare cases — not random. Even if we make a proper randomized assignment, if we apply the results to a biased sample, there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization a fortiori does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideally random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Although we get the right answer that the average causal effect is 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening (see the small simulation sketch after this list).

• Since most real-world experiments and trials build on performing a single randomization, knowing what would happen if you kept on randomizing forever does not help you to ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually do perform. It is indeed difficult to see why thinking about what you know you will never do would make you happy about what you actually do.
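Here is the small simulation sketch of the heterogeneity point referred to in the list above. All numbers are illustrative assumptions of mine: half the population has an individual causal effect of +100, the other half -100, and an ideally randomized experiment correctly estimates the average effect as roughly 0 while saying nothing about the split.

```python
# A small sketch: an ideally randomized experiment recovers the average causal
# effect (~0) but masks the underlying +100 / -100 heterogeneity. All numbers
# are illustrative assumptions.

import random

random.seed(42)

N = 10_000
# Individual causal effects: +100 for half the units, -100 for the other half
effects = [100] * (N // 2) + [-100] * (N // 2)
baseline = [random.gauss(50, 10) for _ in range(N)]

# Ideal random assignment to treatment (True) or control (False)
treated = [random.random() < 0.5 for _ in range(N)]

# Observed outcomes: baseline plus the individual effect if treated
outcome = [b + e if t else b for b, e, t in zip(baseline, effects, treated)]

treat_mean = sum(y for y, t in zip(outcome, treated) if t) / sum(treated)
ctrl_mean = sum(y for y, t in zip(outcome, treated) if not t) / (N - sum(treated))

print(f"estimated average causal effect: {treat_mean - ctrl_mean:+.1f}")     # close to 0
print("individual causal effects actually present:", sorted(set(effects)))  # [-100, 100]
```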

Randomization is not a panacea — it is not the best method for all questions and circumstances. Papineau and other proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and best method on the market. It is not.
