The whiner who writes this blog has already mentioned here, several times, the problem of the lack of detailed information in the technical sheets of polls conducted in Portugal, and has even done so in academic articles. But so as not to be only a whiner, I have also given examples of good practice, especially from Marktest. And let me note that the problem is not only ours, far from it. A 2002 article already addressed the problem in the United States. Consider, for example, what is said on Mark Blumenthal and Charles Franklin's Pollster.com:
Over the last few months I have written a series of posts that examined the remarkably limited methodological information released about pre-election polls in the early presidential primary states (here, here and here, plus related items here). The gist is that these surveys often show considerable variation in the types of "likely voters" they select yet disclose little about the population they sample beyond the words "likely voter." More often than not, the pollsters release next to nothing about how tightly they screen or about the demographic composition of their primary voter samples.
Why do so many pollsters disclose so little? A few continue to cite proprietary interests. Some release their data solely through their media sponsors, which in the past limited the space or airtime available for methodological details (limits now largely moot given the Internet sites now maintained by virtually all media outlets and pollsters). And while none say so publicly, my sense is that many withhold these details to avoid the nit-picking and second-guessing that inevitably come from unhappy partisans hoping to discredit the results.
Do pollsters have an ethical obligation to report methodological details about who they sampled? Absolutely (and more on that below), and as we have learned, most will disclose these details on request as per the ethical codes of the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP). Regular readers will know that we have received prompt replies from many pollsters in response to such requests (some pertinent examples here, here, here and here).
So I have come to this conclusion: Starting today we will begin to formally request answers to a limited but fundamental set of methodological questions for every public poll asking about the primary election, released for now in a limited set of states (Iowa, New Hampshire, and South Carolina) or for the nation as a whole. We are starting today with requests emailed to the Iowa pollsters and will work our way through the other early states and national polls over the next few weeks, expanding to other states as our time and resources allow.
These are our questions:
* Describe the questions or procedures used to select or define likely voters or likely caucus goers (essentially the same questions I asked of pollsters just before the 2004 general election).
* The question that, as Gary Langer of ABC News puts it, "anyone producing a poll of 'likely voters' should be prepared to answer:" What share of the voting-age population do they represent? (The specific information will vary from poll to poll; more details on that below).
* We will ask pollsters to provide the results of the demographic questions and key attribute measures among the likely primary voter samples. In other words, what is the composition of each primary voter sample (or subgroup) in terms of gender, age, race, etc.?
* What was the sample frame (random-digit dial, registered voter list, listed telephone directory, etc.)? Did the sample frame include or exclude cell phones?
* What was the mode of interview (telephone using live interviewers, telephone using an automated, interactive voice response [IVR] methodology, in-person, Internet, mail-in)?
* And in the few instances where pollsters do not already provide it, what was the verbatim text of the trial heat vote question or questions?
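Langer's question above has a simple arithmetic core: the incidence of the likely-voter screen, i.e. what fraction of the voting-age adults contacted actually pass it. A minimal sketch of that calculation, with entirely made-up numbers (the variable names and figures are illustrative, not from any real poll):

```python
# Hypothetical illustration: how tightly a "likely voter" screen filters
# a sample, expressed as a share of the voting-age adults contacted.

adults_contacted = 1200   # voting-age adults reached (made-up figure)
passed_screen = 420       # respondents classified as "likely voters" (made-up)

# The screen's incidence: the share of the adult sample it retains.
incidence = passed_screen / adults_contacted
print(f"Likely-voter incidence: {incidence:.0%} of voting-age adults contacted")
```

A tight screen (low incidence) and a loose one (high incidence) can produce very different "likely voter" populations from the same raw sample, which is exactly why the post argues this number should be disclosed alongside the results.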
Our goal is both to collect this information and to post it alongside the survey results on our poll summary pages, as a regular ongoing feature of Pollster.com. Obviously, some pollsters may choose to ignore some or all of our requests, but if they do, our summary table will show it. We are starting with Iowa, followed by New Hampshire, South Carolina and the national surveys, in order to keep this task manageable and to determine the feasibility of making such requests for every survey we track.