Data stinks

Have you been following the flap over the National Jewish Population Survey? If not, see the editorial by J.J. Goldberg, the editor of the Forward, in the New York Times.

I'm not privy to the motivations of the UJC or the NJPS researchers in comparing two incomparable studies. I'm willing to believe that their ideological tendencies might have encouraged them to skew their presentation to the lay public in one direction or another.

However, those who have squawked the loudest over this study (Goldberg included) should realize that every study of this nature, and especially a series of such studies carried out over time, is susceptible to the problems of the NJPS: lack of comparability between results; results that differ widely among different sub-groups; over- or underrepresentation of different strata in the study population; and, every researcher's favorite, data that's too skimpy and error bars that are too wide. Everyone who does research on populations (epidemiologists, physicians, sociologists, demographers) has to face these problems with a mixture of dread and brazenness, or they'd never get any work done. Such problems explain the ephemeral nature of so many ballyhooed developments in medicine, say, or epidemiology, which merit a Gina Kolata article but are retracted or revised months later with nary a ripple on A1 of the Times.
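To make the "skimpy data, wide error bars" point concrete, here's a minimal sketch (not from the NJPS itself; the 30% figure and sample sizes are hypothetical) of how the margin of error on an estimated proportion balloons when a subgroup sample gets small:

```python
import math

def margin_of_error(p, n, z=1.96):
    # Half-width of an approximate 95% confidence interval for a
    # proportion p estimated from a simple random sample of size n.
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical survey estimating a 30% rate at various sample sizes:
for n in (100, 1000, 10000):
    moe = margin_of_error(0.30, n)
    print(f"n={n:5d}: 30% +/- {moe * 100:.1f} percentage points")
```

With a hundred respondents in a subgroup, the estimate is 30% give or take nine points, which is to say it barely tells you anything; the precision only improves with the square root of the sample size.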

In short, it would be nice if the NJPS controversy (for the several hundred people who care about the game of Jewish demographics) would lead to a greater appreciation of the slipperiness of population science. I would be reluctant to ascribe the NJPS's problems to the ideological tendentiousness of its researchers. It's a lot more likely to be something simple and unavoidable: studies are hard, are always mistaken, and never tell you what you want to know.
