The 2012 Sight & Sound Poll: Don’t Call it a Consensus
On August 1, British film magazine Sight & Sound released some initial results of its Greatest Films poll, a once-a-decade event begun in 1952. The poll is the largest and most cited of its kind, and this year the editors cast their net wider than ever before, collecting ballots from nearly 850 critics, scholars, archivists, and programmers—a nearly sixfold increase over the 2002 poll.
The poll was eagerly anticipated and the meaning of its results hotly debated—a phenomenon exacerbated by the boundless (and exhausting) opportunities for discussion online. The “news” this year was that Citizen Kane, after fifty years at the top, was replaced by Vertigo. In addition, Dziga Vertov’s Man with a Movie Camera made the celebrated top ten for the first time.
Many critics and journalists leapt to attribute these developments to major shifts in critical tastes and priorities. Sight & Sound’s Nick James argues that Vertigo’s victory “reflects changes in the culture of film criticism. The new cinephilia seems to be not so much about films that strive to be great art, such as Citizen Kane […] but more about works that have personal meaning to the critic.” Peter Bradshaw of The Guardian speculates that the presence of three silent films in the top ten might be due to the influence of 2011’s pseudo-silent, Oscar-winning film The Artist. Jim Emerson suggests that the first-ever disappearance from the list of Battleship Potemkin is “bigger news than the spot-swapping of Kane and Vertigo.”
Such claims are hasty and overblown. If you inspect the runners-up in the 2012 and previous polls, it becomes evident that many of the apparent shifts are actually slight reshufflings of the deck—attributable more to chance than any definitive turn of the zeitgeist. Potemkin received just one fewer vote than the number ten entry, and another silent, The Passion of Joan of Arc, has been in or near the top ten for all seven editions of the poll. In 2002, Vertigo received only five fewer votes than Kane.
The Sight & Sound poll does reflect changes—in critical and scholarly priorities, in availability and exposure—but tends to do so glacially, not in sudden shifts (the most prominent exception is L’Avventura, which placed second in 1962, scarcely over a year after its European release). And to trace these, one has to look beyond the top ten and add a bit of historical context. Combing through back issues, we found that Tokyo Story—number three in this year’s poll—is a case in point. The 1953 film was “discovered” in the West largely on the basis of a 1958 showing at London’s National Film Theatre among a program of classic Japanese films. It so impressed a small group of British critics that six voted for it in the 1962 poll. The film continued to receive a handful of votes in 1972 and 1982 before making the list at number three in 1992—aided, like so many other films, by increased availability on home video, as well as by influential monographs like Donald Richie’s (1977) and David Bordwell’s (1988).
This year’s poll-crasher, Man with a Movie Camera, had a similarly long and curious ascent. Vertov’s films were championed largely by documentary-film specialists before they were embraced in the late 1960s by radical critics and filmmakers as a model of anti-bourgeois film practice. Movie Camera received a single vote (from Time Out’s Verina Glaessner) in 1972, but that year editors of the radical French journal Cinéthique voted for several other Vertov features alongside films from Maoist China and works by the “Groupe Dziga Vertov” collective (which included Jean-Luc Godard). Through 1992, the film received votes largely from those with commitments to radical left-wing and/or modernist filmmaking, such as Annette Michelson. Her writings gave the film increased salience among scholars, whose fascination with self-reflexivity made Vertov’s film supremely teachable. It became a staple of undergraduate film courses; most of the votes it received in the 2002 poll were from university professors. It’s possible that some of the additional votes in 2012 were from their students.
Ten years ago, Movie Camera appeared on 5% of the ballots; this year, it entered the top ten with just 8%. What these small numbers suggest is that the “ten greatest films of all time” are not quite the recipients of the general acclaim that the popular media—and Internet grumblers—assume. This year’s critics voted for over 2,000 films. Within that huge number, Vertigo only needed to appear on 19% of ballots to place first (its actual share was 23%). It is likely that, as in 2002, the majority of films in the poll received only one vote. Many had speculated that the rise of the Internet would promote a fracturing of critical consensus, and initially this year’s results appear to bear this out.
But in fact the poll has rarely been characterized by genuine consensus. Diversity, rather than unanimity, was the dominant feature almost from its beginnings. Since 1962, no winner has ever been placed on more than 37% of total ballots. The threshold for placing in the top ten in 1972 was just 10%. That this tendency toward statistical dispersion is intensifying is beyond doubt. But it is doing so slowly, less due to the Internet than to the increasing numbers of those polled—and to their increased geographical, generational, and professional diversity.
What does this all mean? While the top ten gets all the press, beneath it there has always been a much larger, more heterogeneous body of films just as—if not more—revealing of trends in film culture (and culture in general): the rediscovery of a pre-revolutionary Chinese cinema, the emergence of a contemporary Iranian art cinema, and the explosion of scholarly interest in films of the 1900s and 1910s.
Excellent post! When engaging in conversation about this list, I’ve been eager to point out that 78% of voters clearly indicated that Vertigo is not one of the top ten greatest films. It’s like noting that Sunday Night Football is the top show on American television, but more than 80% of viewers do not watch it. There is no ‘critical consensus’ here – just clusters of marginal comparative approval.
I look forward to part two…
This is a fascinating evaluation of the poll itself, its history, and the conclusions that tend to be hastily drawn from its results. I am entirely convinced that any full assessment of the poll’s significance should consider the percentage of votes and the shifts among those films that don’t make the top ten.
But since your quibble seems to some extent to be with terminology–“S&S’s poll reveals the current *consensus*”–I wonder if one ought to give up the fight–jettison the term–so quickly. It’s worth remembering that the term consensus comes from the Latin for “agreement,” and agreements don’t always reflect a majority. In ordinary language a “consensus” can simply mean “a judgment arrived at by those concerned,” according to one source.
This isn’t just a detail or a semantic issue, of course. Since the context here is canon-formation in film history, which is an exercise of judgment (as well as an exercise in cultural politics, of course), a solid case could be made for viewing the S&S poll as one among many ways to measure critical consensus. To repeat, the poll is a judgment arrived at by a group of willing participants who agree that the question itself matters and who agree to the rules of the measurement, even if (predictably) the results will not represent the majority.
One more point, which follows from this. Especially given the odds against KANE and VERTIGO (and others) rising to the top–since those polled are permitted to select any film, ever–the fact that KANE and VERTIGO have secured so many votes consistently over time does reflect something about the aesthetic judgments shaped in cinephilic culture globally. I’d be curious to see if KANE and VERTIGO do reflect global cinephilia, though, based on who voted for them.
In short, to use unpopular language, I think there’s plenty of reason to believe that the S&S poll has arrived at a consensus–a judgment.
I can’t revise what I just posted, but I did want to boil my point down a bit more, because the distinction I’m making may come off as a “legal” or overly refined one to some. You seem to be making a case against viewing the poll as the voice of a MAJORITY of critics and scholars, which may NOT be the same thing as a CONSENSUS. A distinction with a difference, in my view.
First, thanks for the excellent post Andrea and Jonah. It’s a nice contrast to all of the typing that generally greets this sort of thing. The Sight & Sound polls seem to be rife with data, and I hope that more of those who want to talk about the results really dive into that data.
Second, I agree that it’s useful to distinguish a CONSENSUS from a MAJORITY, but I also think it’s fair to pooh-pooh, or at least qualify, the use of CONSENSUS to describe the ranking, partly because a more precise term, PLURALITY, is available. Additionally, while I’m no expert in voting or decision-making processes (and I’d love to hear from one), I think that CONSENSUS implies, even in common usage, an attempt at agreement-making that goes beyond the opting-in to a set of non-negotiable rules offered to the participants in this poll. CONSENSUS, it seems to me, is being used to validate the results (or writing about the implications of the results), rather than to accurately reflect the process used to reach them and/or the level of agreement by the participants. I take Colin’s point, though, that a significant contingent of film culture seems to have agreed that the Sight & Sound poll is an important judgment of the greatest films. But maybe that’s largely because reaching a “truer” consensus where there’s more agreement would be so much more effortful? In any case, I’m not sure that should keep us from pushing back against the use of the term CONSENSUS here.
What I’d also be interested in seeing is what experts in statistics might do with the poll results, beyond a fairly simple ranking, if they’re tasked with analyzing them in meaningful ways to yield insights into group preference. Are the results really even statistically significant, given the diversity of answers, and if so how significant are they?
Mark–thanks for this! I hadn’t thought of the term PLURALITY (to continue with the scare CAPS), which I very much like. It has many benefits–among them, accuracy.
I, too, would love to hear from statistics experts, incidentally. It may turn out that these results, especially over the course of decades, are significant, given the “rules of the game” that S&S established, here, as well as the number of people polled.
Still, the more I think of it, the more I like one aspect of CONSENSUS–an aspect lost if we speak merely of a PLURALITY. The term consensus in some usages explicitly denotes a *judgment* rendered by a group (and implied here is that the majority view may or may not be reflected, but this does not factor into the legitimacy of the judgment). Something tells me that if the S&S people and those who participated were concerned with a process that revealed a MAJORITY opinion, then they would have argued for a different process altogether.
At least implicitly, then, consensus is what S&S was after, not a majority, even if they also see “consensus” as beneficial because, rhetorically, it legitimizes the results AND the process.
I think you’re right that S&S was probably after consensus (although they may have just been after a list) rather than a majority, and they certainly had no interest in announcing Vertigo as winner by plurality. But they took no steps to ensure that the judgment rendered was one the group meaningfully agreed to. I’m not sure we can take the decision to opt-in to the poll as an endorsement of the legitimacy of the judgment. The only other option would have been to opt-out, thereby ensuring the stamping of the dreaded celebrated film X as “consensus greatest.” Particularly because S&S promotes the results as having “achieved a consensus” it does seem fair to ask how CONSENSUS-Y the consensus really is. We might poll the respondents, asking them whether they consent to S&S describing the poll as having achieved a consensus.
I should also say, though, that I don’t think S&S’s poll deserves some of the blanket disdain it gets (and I’m not referring to anyone posting here), even on the basis of whether it achieves consensus. A more consensus-y consensus might require a very tiresome process that leads potential participants to opt-out. As a spectator to Oberlin College dining co-ops’ lengthy consensus procedures, I can certainly appreciate the downside to intense consensus-seeking. And, like I said, the poll certainly seems to have produced some interesting raw data.
That said, I like to imagine a beautiful world where the editors of S&S call up a group of social scientists, or what have you, and say “here’s our data (or better yet, here’s our question), can you help?” Maybe that’s what Ian Christie’s “full essay on changing fashions” will look like when published in September, but I sort of doubt it.
I replied before I saw your latest post, Mark. Your point about process reminds me of the American Film Institute’s “100 Films, 100 Years” (or was it “100 Years, 100 Films”?) list. Some folks at the AFI drew up a list of 400 films (by what criteria we don’t know). A mixed group of critics, industry insiders, “average Americans” (!), and exceptional figures such as former presidents were then asked to choose their ten favorites from among those pre-selected 400. The results are the hokey list we’ve come to know and shrug our shoulders at. The whole process reminds me of the way we select presidents.
As to whether participation in the poll constitutes agreement that “the question itself matters”–I’d be curious to hear more thoughts on that. The solicitation that Sight & Sound sent out to critics and directors pointedly disavows any singular meaning to the poll, any stable set of aesthetic or other criteria. Participants were invited to interpret the remit however they saw fit. Many likely submitted ballots with no concern as to how or whether they affected the general result. If the few individual ballots they’ve published from this year and the ones from 2002 still viewable online are any indication, a lot of ballots were sent with caveats insisting that they were not meant to designate the “greatest” but simply films of personal resonance. But more on this in a few weeks…
I do think that in ordinary language “consensus” typically implies a majority. As an example, when one points out that a consensus has emerged among scientists regarding anthropogenic climate change, one isn’t likely to mean that a closely-knit and vocal minority has made itself heard (which would have been the case decades ago) but rather that a sizable majority of scientists are in substantial agreement.
Of course polls like this aren’t likely to reveal consensus in that sense, so the question is whether it makes sense to lower the bar a bit. I’m not too concerned with this, despite the title of this blog post. Our main point was simply that what is in and out of the top ten is not all that significant, especially taken out of the context of the whole poll and its history.
It’s possible, even likely, that aside from the pseudo-consensus of the Sight & Sound top ten, there is some general agreement among film critics and scholars about the worthiness of Vertigo, Tokyo Story, and so forth. I’m not convinced that this is something the poll results can tell us.
Yeah, Jason is right: all the hand-wringing about VERTIGO being marginally more popular than KANE misses the fact that the vast majority of people who were polled listed neither film. I think these polls do serve a valuable function as a means of tracking taste, and in the best cases, introducing readers to clusters of films they might otherwise have missed.
Here’s a lengthy response to the poll that makes some similar points re. Man with a Movie Camera: http://www.photogenie.be/blog/?p=280
Todd McCarthy’s response: http://www.hollywoodreporter.com/news/vertigo-citizen-kane-hitchcock-welles-best-movie-357911