Monday, January 6, 2014

Statistics and Philosophy.

This is an old exchange between me and Professor David Johnstone (Editor of the Australian journal Abacus). I am posting it here because it has not dated:

Jagdish:

Your call for a dialogue between statistics and philosophy of science is very timely, and extremely important given the prominence that statistics, in both its probabilistic and non-probabilistic incarnations, has gained since the computational advances of the past three decades or so. Let me share a few conjectures about the cause of this schism between statistics and philosophy, and point to a few areas where the two can share in mutual reflection. However, reflection in statistics, as in accounting of late and unlike in philosophy, has been in short supply for quite a while. And it is always easier to pick the low-hanging fruit. Albert Einstein once remarked, "I have little patience with scientists who take a board of wood, look for the thinnest part and drill a great number of holes where drilling is easy."

1.
Early statisticians were practitioners of the art, most serving as consultants of sorts. Gosset worked for Guinness, G. E. P. Box did most of his early work for Imperial Chemical Industries (ICI), Fisher worked at Rothamsted Experimental Station, Loeve was an actuary at the University of Lyon... As practitioners, statisticians almost always had their feet in one of the domains of science: Fisher was a biologist, Gosset was a chemist, Box was a chemist, ... Their research was down to earth, and while statistics was always regarded as the turf of mathematicians, their status within mathematics was the same as that of accountants in liberal arts colleges today, slightly above that of athletics. Of course, the individuals with stature were expected to be mathematicians in their own right.
All that changed with the work of Kolmogorov (1933, Moscow State, http://www.socsci.uci.edu/~bskyrms/bio/readings/kolmogorov_theory_of_probability_small.pdf), Loeve (1960, Berkeley), Doob (1953, Illinois), and Dynkin (1963, Moscow State and Cornell). They provided mathematical foundations for the earlier work of the practitioners, and Kolmogorov in particular provided axiomatic foundations for probability theory. In the process, their work unified statistics into a coherent body of knowledge. (Perhaps there is a lesson here for us accountants.) A collateral effect was the schism in the field between the theoreticians and the practitioners (of which we accountants must be wary) that has continued to this day. We can see a parallel between accounting and statistics here too.
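For reference, the axiomatic foundation amounts to three short requirements on a probability measure $P$ defined on a collection of events $\mathcal{F}$ over a sample space $\Omega$:

$$P(A) \ge 0 \ \text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \ \text{for pairwise disjoint } A_1, A_2, \ldots$$

Everything else in Kolmogorov's treatment, conditional probability, independence, the limit theorems, is built on top of these.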

2.
Early controversies in statistics had to do with embedding statistical methods in decision theory (Fisher was against it, Neyman and Pearson were for it), and with whether the foundations of statistics had to be deductive or inductive (frequentists were for the former, Bayesians for the latter). These debates were not just technical; they had underpinnings in philosophy, especially the philosophy of mathematics (after all, the early contributors to the field were mathematicians: Gauss, Fermat, Pascal, Laplace, de Moivre, ...). For example, while the Fisher versus Neyman-Pearson debates raged, Neyman was invited by the philosopher Jaakko Hintikka to write a paper for the journal Synthese ("Frequentist probability and frequentist statistics", 1977).
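To make the deductive-versus-inductive contrast concrete, here is a small sketch in Python, with the data and the prior invented purely for illustration: the frequentist summarises binomial data with a confidence interval justified by the long-run behaviour of the procedure, while the Bayesian updates a prior to a posterior and reports a credible interval.

```python
# Illustrative only: the same binomial data, two inferential traditions.
# The counts and the uniform prior are assumptions made for this sketch.
import numpy as np
from scipy import stats

successes, trials = 12, 40  # hypothetical data

# Frequentist: normal-approximation 95% confidence interval for the proportion,
# justified by how the procedure behaves over repeated samples.
p_hat = successes / trials
se = np.sqrt(p_hat * (1 - p_hat) / trials)
print(f"95% confidence interval: ({p_hat - 1.96 * se:.3f}, {p_hat + 1.96 * se:.3f})")

# Bayesian: a uniform Beta(1, 1) prior updated to a Beta posterior,
# summarised by a 95% credible interval (a direct probability statement).
posterior = stats.beta(1 + successes, 1 + trials - successes)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval:   ({lo:.3f}, {hi:.3f})")
```

For data like these the two intervals come out numerically similar, but their justifications differ in kind, which is exactly where the philosophy enters.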

3.
Since the early statisticians were practitioners, their orientation was usually normative: sampling theory, regression, design of experiments, .... The mathematisation of statistics, and the later work of people like Tukey, raised the prominence of the descriptive (especially the axiomatic) in the field. However, the recent developments in data mining have swung the balance again in favour of the normative.

4.
Foundational issues in statistics have always been philosophical. And the treatment of probability has been profoundly philosophical (see, for example, http://en.wikipedia.org/wiki/Probability_interpretations).

Regards,
Jagdish

Reply from David Johnstone:
Dear Jagdish, as usual your knowledge and perspectives are great to read.
In reply to your points:
(1) The early development of statistics by Gosset and Fisher was as a means to an end, i.e. to design and interpret experiments that helped resolve practical issues, such as whether fertilizers were effective and whether different genetic strains of crops were superior. This left results testable in the real-world laboratory, by the farmers, so the pressure was on to get it right rather than just to publish. Gosset, by the way, was an old-fashioned English scholar who spent as much time fishing and working in his workshop as doing mathematics. This practical bent comes out in his work.
(2) Neyman's effort to make statistics "deductive" was always his weak point, and he went to great lengths to evade this issue. I wrote a paper on Neyman's interpretations of tests, as in trying to understand him I got frustrated by his inconsistency and evasiveness across his many papers. In more than one place, he wrote that to "accept" the null is to "act as if it is true", and to reject it is to "act as if it is false". This is ridiculous in scientific contexts, since if something were decided 100% you would never draw another sample; your work on that hypothesis would be done.
(3) On the issue of normative versus descriptive, as in accounting research, Harold Jeffreys had a great line in his book: he said that if we observe a child add 2 and 2 and get 5, we do not change the laws of arithmetic. He was very much against learning about the world by watching people rather than by doing abstract theory. BTW I own his personal copy of his 3rd edition. A few years ago I went to buy this book on Bookfinder and found it available in a secondhand bookshop in Cambridge. I rang them immediately when I saw that the listing said whose copy it was, and they told me that Mrs Jeffreys had just died, that Harold's books had come in, and that the 1st edition had been sold the day before.
(4) I adore your line that "foundational issues in statistics have always been philosophical". So must they be in accounting, in relation to how to construct income and net asset measures that are sound and meaningful. Note, however, that just because we accept that something needs a philosophical footing does not mean that we will find, or agree on, that footing. I recently received a comment on a paper of mine from an accounting referee. The comment was basically that the effect of information on the cost of capital "could not be revealed by philosophy" (i.e. by probability theory etc.); rather, the referee held, it is an empirical issue. Apart from ignoring all the existing theory on this matter in accounting and finance, the comment is symptomatic of the way that "empirical findings" have been elevated to the top shelf, while theory, or worse, "thought pieces", are treated as not really science. There is so much wrong with this extreme but common view, including of course that every empirical finding stands on a model or a priori view. Indeed, remember that every null hypothesis that was ever rejected might have been rejected because the model (not the hypothesis) was wrong. People naively believe that a bad model or bad experimental design just reduces power (makes it harder to reject the null), but the mathematical fact is that it can go either way: error in the model or the sample design can make rejection of the null almost certain.
Thank you for your interesting thoughts, Jagdish,
David
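
A postscript from me on David's final point: that a misspecified model can make rejection of a true null almost certain, rather than merely reducing power, is easy to see in a small simulation. In the sketch below (mine, not David's, with the clustered setup and every parameter value chosen purely for illustration) the null hypothesis of zero mean is true, but the analyst wrongly treats observations that are positively correlated within clusters as independent and runs an ordinary one-sample t-test.

```python
# Illustrative sketch only: the null (zero mean) is TRUE, but the analyst's
# model wrongly assumes independent observations when the data in fact share
# cluster-level random effects. All parameter values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(n_clusters=30, cluster_size=20, cluster_sd=1.0, noise_sd=1.0):
    # True data-generating process: overall mean 0, but observations within
    # a cluster share a random effect, so they are not independent.
    cluster_effect = rng.normal(0.0, cluster_sd, size=n_clusters)
    x = np.repeat(cluster_effect, cluster_size) + \
        rng.normal(0.0, noise_sd, size=n_clusters * cluster_size)
    # Analyst's (misspecified) model: treat all observations as i.i.d.
    # and test the true hypothesis that the mean is 0 at the 5% level.
    _, p_value = stats.ttest_1samp(x, 0.0)
    return p_value < 0.05

n_sims = 2000
rejections = sum(one_trial() for _ in range(n_sims))
print(f"Nominal size: 0.05, actual rejection rate: {rejections / n_sims:.2f}")
```

With these (arbitrary) parameter choices the true null is rejected in roughly half of all trials rather than 5% of them, and the inflation grows with the strength of the within-cluster correlation; the error in the model makes rejection more likely, not less.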
