Monday, January 6, 2014

On Journal reviewers.

This is also an old post from another listserv, undated. It is an exchange between me and Professor Paul Williams of North Carolina State University. Steve (Kachelmaier) was the editor of The Accounting Review when this was initially posted, and Bob (Jensen) is the Jesse Jones Professor Emeritus at Trinity University.
______________________________
I have two basic comments.
The first has to do with the competence of accounting reviewers with minimal statistical (and econometric) training passing judgment on what is essentially econometric work. The second has to do with the Vernon Smith cite in Steve's letter. I state these two with no pernicious intent, but in a friendly spirit of intellectual inquiry. In what follows, I'll concentrate on the Vernon Smith cite.

If I know Vernon personally and can vouch for his integrity, then when Vernon says it is 11:03 I would take it at face value, heavily discounting possibilities such as his having doctored his watch because he is hungry and we had an 11 a.m. lunch appointment, or that he wants to get rid of me for some reason and his 11 a.m. appointment with someone else is his alibi. In the case of journal submissions with blind reviews, one cannot discount such possibilities if a Pons-Fleischmann situation is to be avoided at all costs.
The point I am making is that with time we can all agree on the US time server as the arbiter, and so avoid calibration issues. With most empirical social sciences, on the other hand, the sampling problem is somewhat like Turing's halting problem in computation: it is undecidable. That being the case, for most "empirical" work in accounting, replication with more data, different data, or data from a different regime must be encouraged. Ignorance is not bliss, and we do not know how many Pons-Fleischmann situations exist in accounting.
Laws in the social sciences hold only in a probabilistic sense, and reviewers' acceptance decisions are point estimates of such probabilities. In no science do you accept probability numbers based on a single estimate (or two). If Steve thinks one can, he must provide arguments; his communitarian argument holds no water in this context. In the social sciences, truth is socially constructed, but truth values are physically obtained.
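To make the point concrete, here is a minimal sketch in Python (the true probability and sample sizes below are invented for illustration): a "law" that holds with some probability p, estimated from one small study, yields a point estimate that can land far from p; only repeated sampling narrows it down.

```python
# Hypothetical illustration: how far a single study's point estimate of a
# probability can stray from the truth, versus pooling many replications.
import random

random.seed(42)
TRUE_P = 0.6   # assumed true probability with which the "law" holds
N = 30         # a typical single-study sample size (invented)

def one_study(n=N, p=TRUE_P):
    """Point estimate of p from one sample of n observations."""
    return sum(random.random() < p for _ in range(n)) / n

single = one_study()
replications = [one_study() for _ in range(1000)]
pooled = sum(replications) / len(replications)

print(f"true p                      : {TRUE_P:.3f}")
print(f"one study's point estimate  : {single:.3f}")
print(f"range over 1000 replications: "
      f"{min(replications):.3f} .. {max(replications):.3f}")
print(f"pooled estimate             : {pooled:.3f}")
```

Run it with different seeds and a single study's estimate wanders widely around the true value, which is exactly why one or two point estimates settle nothing.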
Regards,
Jagdish S. Gangolly
Reply from Paul Williams:
Bob and Jagdish,
I pretty much exhausted myself debating with Steve before. Talking to a wall is productive only for someone who is insane and, believing I'm not there yet, I have given up on him. Steve simply doesn't hear you.
Jagdish, your observation about accountants' pretensions to econometric rectitude is well said. In this vein I would suggest that Bob add to the list of references an excellent article by Jon Elster, "Excessive Ambitions," Capitalism and Society, 4(2), 2009, Article 1. The article takes to task the "excessive ambitions" of the social sciences as quantitative sciences. One section is devoted to data analysis. He observes about social science empirical work: "In the absence of substantive knowledge -- whether mathematical or causal -- the mechanical search for correlations can produce nonsense. I suggest that a non-negligible part of empirical social science consists of half-understood statistical theory applied to half-assimilated empirical material (emphasis in the original)."
He goes on to describe a study by David Freedman, a statistician who selected six research papers from among the American Political Science Review, the Quarterly Journal of Economics, and the American Sociological Review and analyzed them for statistical errors of all kinds. Needless to say, they were loaded with them to the point of being meaningless.
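Elster's warning about the mechanical search for correlations is easy to demonstrate. The following minimal sketch (all numbers invented for illustration) screens 200 pure-noise predictors against a pure-noise outcome in a small sample and "discovers" strong correlations by chance alone:

```python
# Hypothetical illustration: mechanically screening many candidate predictors
# against pure noise will turn up "strong" correlations by chance.
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N_OBS, N_PREDICTORS = 25, 200       # small sample, many variables to try
outcome = [random.gauss(0, 1) for _ in range(N_OBS)]
predictors = [[random.gauss(0, 1) for _ in range(N_OBS)]
              for _ in range(N_PREDICTORS)]

rs = sorted((abs(pearson(p, outcome)) for p in predictors), reverse=True)
print("strongest |r| found in pure noise:", round(rs[0], 3))
print("predictors with |r| > 0.4:", sum(r > 0.4 for r in rs))
```

With 200 candidate variables and only 25 observations, the strongest chance correlation typically exceeds 0.5 -- precisely the kind of "finding" that half-understood statistical theory will happily report as significant.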
This is reminiscent of our days at Florida State University, when Ron Woan (with a master's in statistics and 11 years at the University of Illinois as a statistics consultant) would conclude every seminar with a devastating deconstruction of the statistical flaws in every paper. The issue goes well beyond replication -- what point is there in replicating studies that are nonsense to start with?
This kind of academic community, as Elster concludes, doesn't just produce useless research, but harmful research. In 40 years of "rigorous" empirical accounting research, we have not produced anything that meets even minimal standards of "evidence." One comment Elster made that would really piss off Steve: "Let me conclude on this point by exploring a conjecture alluded to earlier: we may learn more about the world by reading medium-prestige journals than by reading high-prestige and low-prestige journals."
Amen to that.
Paul Williams
North Carolina State University
