Sunday, March 8, 2015

On writing

One of the most productive researchers in Operations Research I have met in my life once told me that he writes a page or two every day. When I asked him what he writes about, he said that he wrote about anything that came to mind, often just a passing idea. Many of those pages did not go anywhere, but the few ideas he pursued were gems.

I have tried that, but I am not disciplined enough, and so most of the ideas that come to my mind never see the light of day.

When I was in junior high, my teachers required me to write a page every day. When I was doing my doctorate at Pittsburgh, I took two courses from one of the most profoundly scholarly mathematical sociologists I have ever known. He required us to write essays of about 4-5 pages EVERY week. Even today, after forty years, I have every bit of paper I wrote in those courses, and I look at them every now and then. I look at all the red ink on those notes and realise that I will never be able to repay my debt to that professor. And over the decades almost every paper I have written has been influenced profoundly by what I learnt in those courses. None of the dozens of courses I took in my student days comes close in its impact on me.

PS: I would be remiss if I did not name the professor. It is Professor Tom Fararo (http://en.wikipedia.org/wiki/Thomas_Fararo).

Thursday, February 12, 2015

On Deirdre McCloskey's address at the American Accounting Association Annual Meeting in 2012.

What a wonderful speaker Deirdre McCloskey is! She reminded me of J.R. Hicks, who was also a stammerer. For an economist, she has a deep and remarkable understanding of statistics that amazed me.
It was nice to hear about Gosset, perhaps the only human being who got along well with both Karl Pearson and R.A. Fisher; getting along with the latter was itself a Herculean feat.
Gosset was helped in the mathematical derivation of small-sample theory by Karl Pearson, but Pearson did not appreciate its importance; that was left to his nemesis, R.A. Fisher. It is remarkable that Gosset could work with these two giants, who could not stand each other.
In later life Fisher and Gosset parted ways: Fisher was a proponent of randomization of experiments, while Gosset was a proponent of systematic planning of experiments and in fact proved decisively that balanced designs are more precise, powerful and efficient than Fisher's randomized experiments (see http://sites.roosevelt.edu/sziliak/files/2012/02/William-S-Gosset-and-Experimental-Statistics-Ziliak-JWE-2011.pdf).
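To make the precision point concrete, here is a small simulation sketch of my own (it assumes Python with numpy, and is only an illustration, not Gosset's or Ziliak's actual analysis): when plots vary widely in underlying fertility, a balanced (paired) layout that puts treatment and control side by side within each pair estimates the treatment effect with far less variability than assigning treatments completely at random across the field.

# A toy simulation (my illustration, not Gosset's or Ziliak's work).
# Adjacent plots share a "fertility" level; the treatment adds a fixed effect.
# Compare the spread of the estimated treatment effect under (a) a balanced
# paired design and (b) complete randomization over all plots.
import numpy as np

rng = np.random.default_rng(1)
n_pairs, reps, effect = 20, 5000, 1.0
est_balanced, est_random = [], []

for _ in range(reps):
    fert = np.repeat(rng.normal(0.0, 3.0, size=n_pairs), 2)  # pairs share fertility
    eps = rng.normal(0.0, 1.0, size=2 * n_pairs)              # plot-level noise

    # (a) Balanced design: within each pair, one plot (chosen at random) is treated.
    which = rng.integers(0, 2, size=n_pairs)
    t_bal = np.zeros(2 * n_pairs, dtype=bool)
    t_bal[np.arange(n_pairs) * 2 + which] = True
    y_bal = fert + effect * t_bal + eps
    est_balanced.append(y_bal[t_bal].mean() - y_bal[~t_bal].mean())

    # (b) Completely randomized design: half of all plots treated, ignoring pairing.
    t_crd = np.zeros(2 * n_pairs, dtype=bool)
    t_crd[rng.choice(2 * n_pairs, size=n_pairs, replace=False)] = True
    y_crd = fert + effect * t_crd + eps
    est_random.append(y_crd[t_crd].mean() - y_crd[~t_crd].mean())

print("SD of estimated effect, balanced design  :", round(float(np.std(est_balanced)), 3))
print("SD of estimated effect, randomized design:", round(float(np.std(est_random)), 3))

With these made-up numbers the balanced design's standard error comes out at roughly a third of the randomized design's, because the fertility differences cancel within pairs; that is the intuition behind Gosset's preference.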

I remember my father (who designed experiments in horticulture for a living) telling me about the virtues of balanced designs at the same time that my professors in school were extolling the virtues of randomisation.

In Gosset's writings we also find seeds of Bayesian thinking. While I have always had a great regard for Fisher (a visit to the tree he planted at the Indian Statistical Institute in Calcutta was for me more of a pilgrimage), I think his influence on the development of statistics was less than ideal.

On Statistics and Philosophy

This is an exchange between Professor David Johnstone (University of Sydney) and me some time ago.

Jagdish Gangolly:
Your call for a dialogue between statistics and philosophy of science is very timely, and extremely important considering the importance that statistics, in both its probabilistic and non-probabilistic incarnations, has gained since the computational advances of the past three decades or so. Let me share a few of my conjectures regarding the cause of this schism between statistics and philosophy, and consider a few areas where they can share in mutual reflection. Reflection in statistics, however, like in accounting of late and unlike in philosophy, has been in short supply for quite a while. And it is always easier to pick the low-hanging fruit. Albert Einstein once remarked, "I have little patience with scientists who take a board of wood, look for the thinnest part, and drill a great number of holes where drilling is easy."
1. Early statisticians were practitioners of the art, most serving as consultants of sorts. Gosset worked for Guinness, GEP Box did most of his early work for Imperial Chemical Industries (ICI), Fisher worked at Rothamsted Experimental Station, Loeve was an actuary at the University of Lyon... As practitioners, statisticians almost always had their feet in one of the domains of science: Fisher was a biologist, Gosset was a chemist, Box was a chemist, ... Their research was down to earth, and while statistics was always regarded as the turf of mathematicians, statisticians' status within mathematics was the same as that of accountants in liberal arts colleges today, slightly above that of athletics. Of course, the individuals with stature were expected to be mathematicians in their own right.
All that changed with the work of Kolmogorov (1933, Moscow State, http://www.socsci.uci.edu/~bskyrms/bio/readings/kolmogorov_theory_of_probability_small.pdf), Loeve (1960, Berkeley), Doob (1953, Illinois), and Dynkin (1963, Moscow State and Cornell). They provided mathematical foundations for the earlier work of the practitioners, and Kolmogorov in particular provided axiomatic foundations for probability theory. In the process, their work unified statistics into a coherent body of knowledge. (Perhaps there is a lesson here for us accountants.) A collateral effect was the schism in the field between the theoreticians and the practitioners (of which we accountants must be wary) that has continued to this day. We can see a parallel between accounting and statistics here too.
2. Early controversies in statistics had to do with embedding statistical methods in decision theory (Fisher was against it, Neyman and Pearson were for it), and with whether the foundations of statistics had to be deductive or inductive (frequentists were for the former, Bayesians for the latter). These debates were not just technical; they had underpinnings in philosophy, especially the philosophy of mathematics (after all, the early contributors to the field were mathematicians: Gauss, Fermat, Pascal, Laplace, de Moivre, ...). For example, while the Fisher versus Neyman-Pearson debates raged, Neyman was invited by the philosopher Jaakko Hintikka to write a paper for the journal Synthese ("Frequentist probability and frequentist statistics", 1977).
3. Since the early statisticians were practitioners, their orientation was usually normative: sampling theory, regression, design of experiments, .... The mathematisation of statistics and the later work of people like Tukey raised the prominence of the descriptive (especially the axiomatic) in the field. However, the recent developments in data mining have swung the balance again in favour of the normative.
4. Foundational issues in statistics have always been philosophical. And the treatment of probability has been profoundly philosophical (see, for example, http://en.wikipedia.org/wiki/Probability_interpretations).
____________________________________

David Johnstone:
In reply to your points: (1) The early development of statistics by Gosset and Fisher was as a means to an end, i.e. to design and interpret experiments that helped to resolve practical issues, like whether fertilizers were effective and whether different genetic strains of crops were superior. This left the results testable in the real-world laboratory, by the farmers, so the pressure was on to get it right rather than just publish. Gosset, by the way, was an old-fashioned English scholar who spent as much time fishing and working in his workshop as doing mathematics. This practical bent comes out in his work.
(2) Neyman's effort to make statistics "deductive" was always his weak point, and he went to great lengths to evade this issue. I wrote a paper on Neyman's interpretations of tests, as in trying to understand him I got frustrated by his inconsistency and evasiveness over his many papers. In more than one place, he wrote that to "accept" the null is to "act as if it is true", and to reject it is to "act as if it is false". This is ridiculous in scientific contexts, since to act as if something were decided 100% would mean never drawing another sample; your work on that hypothesis would be done.
(3) On the issue of normative versus descriptive, as in accounting research, Harold Jeffreys had a great line in his book: if we observe a child add 2 and 2 to get 5, we do not change the laws of arithmetic. He was very much against learning about the world by watching people rather than doing abstract theory. By the way, I own his personal copy of the 3rd edition. A few years ago I went to buy this book on Bookfinder and found it available in a secondhand bookshop in Cambridge. I rang them instantly when I saw that they said whose copy it was, and they told me that Mrs Jeffreys had just died and Harold's books had come in, and that the 1st edition had been sold the day before.
(4) I adore your line that "foundational issues in statistics have always been philosophical". So must they be in accounting, in relation to how to construct income and net-asset measures that are sound and meaningful. Note, however, that just because we accept that something needs philosophical footing doesn't mean that we will find or agree on that footing. I recently received a comment on a paper of mine from an accounting referee. The comment was basically that the effect of information on the cost of capital "could not be revealed by philosophy" (i.e. by probability theory etc.); rather, it is an empirical issue. Apart from ignoring all the existing theory on this matter in accounting and finance, the comment is symptomatic of the way "empirical findings" have been elevated to the top shelf, while theory, or worse, "thought pieces", are seen as not really science. There is so much wrong with this extreme but common view, including of course that every empirical finding stands on a model or a priori view. Indeed, remember that every null hypothesis that was ever rejected might have been rejected because the model (not the hypothesis) was wrong. People naively believe that a bad model or bad experimental design just reduces power (makes it harder to reject the null), but the mathematical fact is that it can go either way, and error in the model or sample design can make rejection of the null almost certain.
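Johnstone's last point is easy to see in a simulation. The sketch below is my own illustration, not part of the original exchange (it assumes Python with numpy and scipy, and the omitted-confounder setup and variable names are hypothetical): x has no effect on y at all, but both depend on an omitted variable z, and the misspecified regression of y on x rejects the true null far more often than the nominal 5%.

# A toy illustration (mine, not from the Johnstone-Gangolly exchange):
# the true effect of x on y is zero, but both depend on an omitted confounder z.
# Fitting the misspecified model y ~ x makes rejection of the TRUE null
# H0: slope = 0 almost certain, rather than merely "reducing power".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 200, 2000, 0.05
rejections = 0

for _ in range(reps):
    z = rng.normal(size=n)              # unobserved confounder
    x = z + rng.normal(size=n)          # x is correlated with z
    y = z + rng.normal(size=n)          # y depends only on z: x has NO effect on y

    # Ordinary least squares of y on x, omitting z (the model error).
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t_stat = beta[1] / se_slope
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    rejections += p_value < alpha

print(f"Rejection rate of the true null: {rejections / reps:.2f}")  # far above 0.05

With these numbers the true null is rejected in essentially every replication, which is exactly the "can go either way" point: the 5% error rate the test advertises is a property of the model, not of the data alone.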

Thursday, January 1, 2015

Alan Turing's contributions to Econometrics.

David Giles' blog on Econometrics is one of my favourite blogs, and I read it all the time. And this post on Turing is the most informative one from Giles.

While I am quite familiar with Econometrics, having overdosed on it in graduate school, and am also familiar with Turing's work in Computing, I didn't have the foggiest idea about Turing's contributions to Econometrics until I read this post.

Proving the Central Limit Theorem, developing sequential analysis, and working out LU decomposition for matrix inversion, all before the age of 35! Turing was the Mozart of Computing, Statistics, and Mathematics. Life was not kind to him.

It is tragic that I never heard of Turing when I was a Statistics/Mathematics undergraduate back in the early sixties, even though I studied the CLT, sequential analysis, and matrix inversion techniques. I became aware of his work in computing only in the 1970s, when seeing the book "Introduction to the Theory of Sequential Machines" by Hartmanis and Stearns in the stacks of Hillman Library at Pitt piqued my interest enough to borrow and read it. A decade later it turned out that Stearns was a very senior colleague of mine at Albany; I was fortunate enough to sit in on his course on Game Theory, but unfortunate enough not to have worked with him.

I plan on watching "The Imitation Game" this weekend.