Random walks model thinking

9 May, 2015 at 09:48 | Posted in Statistics & Econometrics | 1 Comment




  1. Professor Page’s presentation needs an ontology. He has a model and he has that pedagogical favorite, the counter-intuitive insight afforded by the model. But, he makes a leap that he should not make without careful explication, and by making a leap he doesn’t acknowledge, he leaves his students critically confused.
    The modelled concept of “random” is introduced by the example of flipping a fair coin, and further explained by reference to casino games, games of chance. But then he makes the leap to basketball, a game of skill, and, finally, to results in business. He never acknowledges the leap, or stops to reflect on what makes the pattern of outcomes from a competitive game of skill like the pattern of outcomes in a casino. Instead, he focuses his pedagogy on the pose of superior knowledge attached to rejecting spurious pattern recognition, which is fine as far as it goes, but I think it is likely to leave the student a bit confused.
    The presumption behind the example of flipping a perfectly fair coin is that no one could have a skill in coin-flipping. I’m not sure that’s actually true; I wouldn’t put it past a magician or determined con artist to develop, by dint of great effort and lots of practice, some kind of skill that would introduce a strong bias into the outcomes of flipping even a fair coin. A professional basketball player actually does have a practiced skill, and it is only the residual variation in outcomes that can be considered random. The concept of a process under control and the randomness of residual variation is one that should have been acknowledged in Professor Page’s presentation. It is an ontological question: in what sense is a basketball game like a casino game? Why are we justified in modeling a process approaching a limit as if it were akin to flipping a coin? What is being abstracted away?
    It is important, I think, in building a foundation for critical thinking, to contemplate the implications of what is, after all, a strategic decision about what to abstract. The actual basketball player, coach or fan may well be interested in whether the limit or set point has changed. Is the player’s free throw percentage .75 or .78 tonight? If the player has a cold, or had an argument with his wife earlier in the day, or is feeling a bit “off”, maybe the limit of his skill is the lower figure on this particular occasion. The statistician should not declare, as Professor Page seems to do, that such variations in the limit, as distinct from variations in the residual, do not occur, on the basis that the model establishes a “fact” (which no model is epistemologically capable of doing). The right question for what is, after all, a set of tools for measurement is: how sensitive is the measurement instrument?
    The test Professor Page proposes for the “hot hand” — whether success on the first throw is predictive of a higher rate of success on the second throw — doesn’t really test anything. It just confirms what we, presumably, already know: that the player is playing at his limit (and the same limit) in any given free-throw opportunity, and that the residual variation in outcomes can usefully be modelled as random. What we’d like to know is how much the limit varies over time. Could we tell? Could we distinguish, from a finite series, a change in the limit from .75 to .80 from one week to the next?
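    To make that last question concrete, here is a minimal simulation sketch (not from the post; the .75 and .80 figures come from the discussion above, while the sample sizes and trial counts are arbitrary illustrative assumptions). It shows both points: for an i.i.d. shooter, the make rate after a make matches the overall rate, so Page’s test merely confirms independence; and whether a .75 limit can be told apart from a .80 limit depends heavily on how many throws we get to observe.

```python
import random

random.seed(42)

def simulate_shooter(p, n):
    """Simulate n free throws for a shooter whose true make probability is p."""
    return [random.random() < p for _ in range(n)]

# 1. The "hot hand" check for an i.i.d. shooter: the make rate after a make
# should be close to the unconditional rate, because the throws are independent.
throws = simulate_shooter(0.75, 100_000)
after_make = [cur for prev, cur in zip(throws, throws[1:]) if prev]
rate_after_make = sum(after_make) / len(after_make)
overall_rate = sum(throws) / len(throws)
print(f"overall: {overall_rate:.3f}, after a make: {rate_after_make:.3f}")

# 2. Can a finite series distinguish a limit of .75 from .80?
# Crude power estimate: simulate many samples of n throws from each shooter
# and ask how often the .80 shooter's sample rate actually comes out higher.
def can_distinguish(p1, p2, n, trials=5_000):
    """Fraction of trials in which the p2 sample rate exceeds the p1 sample rate."""
    wins = 0
    for _ in range(trials):
        r1 = sum(random.random() < p1 for _ in range(n)) / n
        r2 = sum(random.random() < p2 for _ in range(n)) / n
        if r2 > r1:
            wins += 1
    return wins / trials

for n in (20, 100, 500):
    print(f"n = {n:4d}: P(sample rate of .80 shooter > .75 shooter) "
          f"~ {can_distinguish(0.75, 0.80, n):.2f}")
```

A week’s worth of free throws is closer to n = 20 than n = 500, which is why the question of sensitivity is not idle: at small n the two limits are largely indistinguishable.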
    The practical questions we might want to investigate will often involve changes in the set-point limits. A casino goes to a lot of trouble to make sure its games of chance remain games of chance; it will deal blackjack from a four-deck stack of cards and reshuffle frequently to prevent skilled card counters from beating the house, and it will want to investigate when its stratagems appear to have been overcome. How will it know? What does it observe and measure?
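    One standard answer to “how will it know?” is a control-chart-style monitor on the outcome stream — exactly the process-under-control framing invoked above. The sketch below uses a one-sided CUSUM; the win rates and alarm threshold are purely illustrative assumptions, not real blackjack odds.

```python
import math
import random

random.seed(7)

def cusum_alarm(outcomes, p0, p1, threshold):
    """One-sided CUSUM on a sequence of win/loss outcomes.

    Accumulates log-likelihood-ratio evidence that the player-win
    probability has shifted from p0 (process under control) to p1
    (e.g. a card counter at work). Returns the index of the first
    hand at which the alarm fires, or None if it never does."""
    llr_win = math.log(p1 / p0)               # evidence added by a player win
    llr_loss = math.log((1 - p1) / (1 - p0))  # evidence removed by a loss
    s = 0.0
    for i, won in enumerate(outcomes):
        s = max(0.0, s + (llr_win if won else llr_loss))
        if s > threshold:
            return i
    return None

# Illustrative numbers only: suppose the player wins 49% of hands while the
# house's shuffling regime holds, and 55% once counting has overcome it.
fair_play = [random.random() < 0.49 for _ in range(2000)]
counting = [random.random() < 0.55 for _ in range(2000)]

print("first alarm under fair play:", cusum_alarm(fair_play, 0.49, 0.55, 8.0))
print("first alarm while counting:", cusum_alarm(counting, 0.49, 0.55, 8.0))
```

The same trade-off from the free-throw example reappears here: a lower threshold detects the shift sooner but raises false alarms on honest variation, which is precisely a question about the sensitivity of the measurement instrument.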
    We need to be clearer about what the power of statistical tests is in a realistic, investigatory context. And we need to be clear about the ontology by which we conditionally model the residual variation of processes under control as random.
