policy – Antenna (http://blog.commarts.wisc.edu) – Responses to Media and Culture

Net Neutrality is Over— Unless You Want It
Fri, 17 Jan 2014

On Tuesday, the DC Circuit Court of Appeals tore out the heart of net neutrality. In the landmark Verizon v. FCC decision, the court struck down the FCC’s Open Internet rules— the hard-fought regulations passed in 2010 that prohibited broadband providers from blocking or discriminating against internet traffic. Without these protections, network operators like Verizon are legally empowered not only to interfere with the online activities of their users but also to alter the fundamental structure of the internet and change the terms on which users communicate and connect online. The court threw out the no-blocking and nondiscrimination rules but left intact the transparency provision, so now the company you pay to get on the internet can mess with your traffic as much as it wants, as long as it tells you so. The ruling is not a surprise, but not because the Open Internet rules were not legitimate or net neutrality is a bad idea. It comes down to this: broadband providers are common carriers, but the FCC can’t regulate them as common carriers because it didn’t call them common carriers. (I’ll explain in a second.) So if we want net neutrality, what should we do? Well, tell the FCC to call broadband providers common carriers. It really is that simple— not easy, but simple.

First, what’s actually at stake here? Well, the end of the open public internet and the beginning of separate but unequal private internets, under the control of the giant phone and cable companies in possession of the pipes and airwaves we depend upon for access. The FCC’s Open Internet rules left much to be desired but they were minimum protections to count on and a significant beachhead in the net neutrality battle. Without them, what do we get now? A network where Verizon can charge extra to prioritize traffic and block any service that refuses to pay a toll to reach its users (that’s what it said it would do if it won this case). A network where Comcast can derail video distribution that threatens its cable television business (that’s what it did when it blocked BitTorrent and what it does in favoring its Xfinity service— even though it’s obligated to abide by net neutrality until 2017 as a condition of its merger with NBC-U). A network where AT&T can cut deals with the biggest content providers to exempt their apps from counting against monthly data caps but squeeze out the innovative startups that can’t afford to pay (which it just announced last week with its new Sponsored Data plans). Networks — with pay-to-play arrangements, exclusive fast lanes, unfair competition, and prepackaged access tiers— where that independently-produced web video series, that nonprofit alternative news site, or your own blog are left behind in favor of those that can pay protection money to network operators. In other words, a network that is not the internet as we’ve come to know it— an open network where users can be participants in the creation and circulation of online culture, rather than a closed content delivery system for corporate media. While net neutrality proponents’ rhetoric might seem a bit overblown, we are much closer to a “nightmare scenario” than most realize.

The DC Circuit’s ruling was not against net neutrality itself, but rather the twisted way the FCC attempted to enforce it. The majority opinion actually went out of its way to describe why net neutrality regulations are necessary to curb abuses of power by network operators. It ruled that the Open Internet rules themselves were sound— they were just implemented the wrong way. Coming into the case, the FCC’s authority to regulate broadband at all was in doubt, after the agency was handed its hat by the same court in the 2010 Comcast case. The FCC tried it again this time with a slightly different tack (“even federal agencies are entitled to a little pride,” the majority wrote— federal appeals court humor, folks) and, amazingly, the court bought it this time around (while Verizon called the FCC’s argument a “triple-cushion-shot,” the judges pointed out that in billiards it doesn’t matter how much of a stretch the shot is if you actually make it). However, even though the court affirmed the FCC’s legal ability to regulate broadband, it found that it can’t regulate it the way the Commission wanted to in the Open Internet rules.

The court ruled that the FCC’s net neutrality policy treated broadband providers as common carriers, but that it couldn’t do that because it didn’t have those services classified in the common carriage portion of its legal framework. Basically, it all goes back to the FCC using the term “information service” rather than “telecommunications service” to define broadband starting in 2002. That’s it— this is a case where the importance of discourse, and the power to dominate discourse in the policy sphere, could not be more plain.

Net neutrality is essentially an update to common carriage, the centuries-old principle of openness and nondiscrimination on publicly essential infrastructure for communication and transportation. The FCC has regulated general purpose networks of two-way communication as common carriers since its inception with the 1934 Communications Act (at that time the focus was telephone service). Beginning in the 1980s as part of its influential Computer Inquiries, and as legally formalized in the 1996 Telecommunications Act, the FCC has distinguished between these basic networks, defined as Title II “telecommunications services” (think pipes), and the content made available over those networks, defined as Title I “information services” (think water flowing inside those pipes). Under this framework, the FCC regulated internet access (the connectivity) as common carriage to ensure equality and universality, but could not regulate the internet itself (the content). As telecommunications services, internet access providers’ job is to pass communications back and forth to the internet, while the information services on the internet are publishers with editorial rights to control content. This all changed during a deregulatory binge at the FCC in the 2000s: cable companies called their broadband connections “information services” (pay no attention to their actual cables), conspicuously not subject to regulation, and then-FCC-Chairman Michael Powell was happy to define broadband that way, too (he’s now the head of the NCTA, the cable industry’s trade group, by the way).

Now, because broadband internet access is not classified as “telecommunications,” it cannot be regulated as common carriage. This means that, as the DC Circuit recognized, since net neutrality is basically common carriage, it cannot be implemented as long as broadband is still defined as an “information service.” So, even though broadband is now the essential general purpose communications infrastructure of our time, there can be no openness and nondiscrimination protections for it until the FCC is willing to change the label it has applied to it in its regulatory terminology. The answer, then, is reclassification: the FCC just needs to call broadband the telecommunications service that it is before we can have enforceable net neutrality policy. The policy really is that simple— it’s the politics that are difficult. The reason that the FCC built the Open Internet rules on legal quicksand is that it lacked the political will to go through with its reclassification proposal amidst a firestorm of pressure from the telecom industry and its allies in Washington.

If we want net neutrality, we should put our own pressure on the FCC. We don’t have the money and the lobbyists that the telecom industry does and we can’t count on the clout of any big corporations whose interests overlap with the public’s on the issue— Google already sold out to Verizon and other big online content providers are now backing away from it (the Amazons and Facebooks of the world have deep enough pockets to dominate the payola market of the future, so they seem willing to play ball at this point). It’s up to us, then, to push the FCC to do net neutrality right this time.

Why Verizon v. FCC Matters for Net Neutrality— and Why It Doesn’t
Fri, 06 Sep 2013

The battle over net neutrality (the vital principle that internet access providers should not interfere with what users do online) is heating back up. The FCC’s 2010 Open Internet rules ostensibly established net neutrality principles in policy (we’ll get to how effective they have actually been in practice…) but Verizon has been seeking to overturn the regulations. On Monday, September 9, the DC Circuit Court will hear oral arguments in Verizon v. FCC, focused on whether the FCC has the legal authority to implement the Open Internet rules.

This post will give you some background on the Verizon case and what’s at stake in it. Whether the FCC’s Open Internet rules stand or not is pivotal for net neutrality and the future of the internet— but in another sense it isn’t. While net neutrality protections are essential for internet users, the FCC’s Open Internet rules in particular are quite problematic. In some ways net neutrality would be better off with these rules, and in some ways it could be better off without them.

Here’s why Verizon v. FCC matters:

1. The rules prohibit the most egregious net neutrality violations. The FCC’s Open Internet rules are based on a deeply compromised version of net neutrality and are far from the strongest protections we could hope for (they were essentially written by Google and— ironically enough— Verizon). In spite of this, though, they are definitely better than nothing. The Open Internet rules bar wired internet access providers from blocking online content, services, applications, and devices or from unreasonably discriminating in internet traffic. For instance, this stops Comcast from making youtube.com disappear from your browser (or redirecting it to nbc.com, for that matter) and from throttling Netflix’s video streams. The Open Internet rules are actually stronger than they immediately appear and have the potential to be robust safeguards if the FCC enforces them properly.

2. The rules are an important foothold against total deregulation. Underlying the fight over the Open Internet rules is the question of whether the FCC can regulate broadband at all. During a wave of deregulation in the 2000s, the FCC removed almost all of its oversight of internet access, and now the agency is left with a shaky legal foundation for the Open Internet rules— which Verizon asserts is not enough authority. The Open Internet rules are important, then, because striking them down would eliminate virtually the last remaining public interest protections for internet access. Beyond that, though, if the courts buy Verizon’s argument in its Open Internet challenge, it would set a very troubling precedent for enforcing net neutrality in policy: the telecom operator claims that it has a First Amendment right to “edit” the internet as it sees fit. If the free speech rights of “corporate persons” are allowed to trump the free speech rights of actual people, it doesn’t bode well for the future of the online public sphere.

And here’s why Verizon v. FCC doesn’t matter:

1. The rules haven’t been very effective. Even if the Open Internet rules are allowed to stand, they’re weak enough to allow a lot of net neutrality violations anyway— and for just the sort of activities especially key to the future of the internet. Most glaringly, most of the rules don’t even apply to mobile broadband (which is poised to soon become the dominant means to access the internet and already is primary among the underprivileged). This is why we see AT&T allowed to block FaceTime on the iPhone. Further, the rules don’t apply to “specialized services” (such as IPTV or any other managed service a network operator provides over broadband that isn’t regular internet access). Comcast calls Xfinity a “specialized service,” supposedly separate from the “public internet,” so it’s allowed to favor its own video streaming service by not counting Xfinity-on-Xbox traffic against users’ data caps. In other words, there are many net neutrality abuses not covered by the Open Internet rules.

2. Overturning the rules could actually lead to getting better ones. Paradoxically, there is a possibility that having the Open Internet rules struck down could be for the best in the long run— blowing up the whole thing and starting from scratch may be the only way to get truly effective net neutrality policy. Specifically, if the courts find that the FCC did in fact deregulate itself into oblivion and no longer has any statutory authority to address broadband, the agency could be forced to re-regulate broadband if it wants to actually remain relevant. (To get policy wonky: what the FCC needs to do is reclassify broadband as a “telecommunications service” under Title II of the Communications Act, where it has more authority to implement “common carriage”-based rules like net neutrality than on Title I “information services” where broadband is now). Counting on this outcome is very risky, though, because it’s impossible to know what the FCC will be like under incoming Chairman Tom Wheeler (an enigmatic figure who has inspired both hope and disgust from public interest advocates).

So, protecting net neutrality isn’t as simple as just upholding the FCC’s Open Internet rules— net neutrality could be better off with or without them. It really depends more on what the FCC does— and, crucially, what we as citizens push them to do— after Verizon v. FCC.

SOPA: Just Say NOPA
Thu, 22 Dec 2011

Whatever you’ve been doing on the internet in the last few weeks, chances are you ran across something about SOPA. Whether it was in blacked-out tweets and status updates, at the top of Reddit, or ‘blocked’ access to Tumblr, online protests in opposition to the Stop Online Piracy Act being debated in the US House of Representatives have been all over the internet recently. And for good reason— SOPA is a big, big deal and it deserves the attention and action of anyone who cares about the future of the internet. In fact, SOPA— along with its companion bill in the Senate, the PROTECT IP Act— might just be the most dangerous internet legislation the US government has ever considered.

So what’s the big deal? What makes this bill so much worse than all of Congress’s other “anti-piracy” measures? Well, it would put in place an entire system of internet censorship that would empower the US government and corporations to block any website. The Department of Justice would have a blacklist of foreign “rogue sites” which fit a vague definition of enabling intellectual property infringement and would block American users from accessing these sites, in addition to cutting off the sites’ revenues from US-based advertising services and payment processors. All of this would happen within five days of the accusation of infringement, without any judge, any two-sided hearing, or any due process for the accused site. In fact, it further encourages pre-emptive “voluntary action” by offering immunity for internet service providers, browser producers, and search engines that block sites without even any infringement claims.

SOPA’s corporate backers in the recording and film industries focus on overseas sites that they refer to as “dedicated to intellectual property theft,” despite the fact that, for instance, targeted one-click file-hosting services like Rapidshare have been found legal in both American and European courts. In addition to plowing over such “rogue sites” that actually have substantial non-infringing uses, SOPA would also ensnare domestic sites that link to any infringing material or any “rogue site”— and would block the entire domain for even one link on one page. This means that any social media platform that hosts user-generated content— everything from Facebook, Twitter, and YouTube to Reddit, Tumblr, and Wikipedia— would become liable for everything their users post. SOPA, then, would overturn over a decade of precedent for internet law in the “safe harbor” provisions of the Digital Millennium Copyright Act that protect internet intermediaries from liability for what users do (an example of how prior copyright expansion legislation at least included some reasonable limitations).

SOPA would have a huge impact on freedom of expression, creativity, and innovation online. Doing away with safe harbor protections would place a massive burden on online services to police their users and more actively censor what they do online. This would have chilling effects on the free expression and creativity of users by encouraging self-censorship and would stifle innovative new start-ups with limited resources. Further, if whole platforms disappear from US access, the free expression of all other users becomes collateral damage. Of course, these very powerful tools for shutting down online activities hold great potential for abuse— especially when held by industries with a long history of using the law to expand their control and protect them from disruptive innovators.

Further, SOPA flies in the face of the principles of net neutrality and internet freedom that the US government evangelizes everywhere else around the world. While the US extols the virtues of free and open internet connectivity globally, SOPA would institute the same kind of technical censorship system used by China, Iran, Syria, and similarly repressive regimes. The only difference is that the American censorship system would instead be used to protect corporate profits— intellectual property now trumps all other rights. In addition to undermining American credibility in calling out authoritarian states’ internet censorship, SOPA would also set a precedent for other liberal democracies to further filter and block internet content. On top of all this, SOPA involves mucking around with the fundamental technical workings of the internet, with serious consequences for the stability and security of critical internet resources like the Domain Name System. By interfering with the connections between site addresses and the servers they are designed to connect to, SOPA’s blocking system would undermine the next-generation DNSSEC system being developed by the US government’s own internet security experts, along with all other internet protocols that depend on DNS working in a universally consistent way.
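To make the DNS-blocking mechanism concrete, here is a minimal sketch of what a SOPA-style filtering resolver does and why DNSSEC-validating clients would reject its answers. Everything here is hypothetical: the domain names, addresses, and blocklist are invented for illustration, and a real resolver is of course far more complex.

```python
# Illustrative sketch of SOPA-style DNS filtering.
# All names, addresses, and the blocklist are hypothetical.

BLOCKLIST = {"rogue-site.example"}

# A toy zone: the "real" answers a resolver would normally return.
ZONE = {
    "rogue-site.example": "203.0.113.5",
    "indie-blog.example": "198.51.100.7",
}

def filtered_resolve(name):
    """Return the IP for `name`, or None if the resolver censors it."""
    if name in BLOCKLIST:
        # A filtering resolver withholds (or forges) the answer instead of
        # returning the real record. Under DNSSEC, answers are signed by
        # the zone owner, so a validating client treats a missing or
        # unsigned response as tampering and fails the lookup entirely --
        # the incompatibility the article's security critique points to.
        return None
    return ZONE.get(name)

print(filtered_resolve("indie-blog.example"))   # 198.51.100.7
print(filtered_resolve("rogue-site.example"))   # None -- censored
```

The point of the sketch is that filtering works by making the resolver lie, and DNSSEC exists precisely to make such lies detectable; the two cannot coexist on the same lookup.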

SOPA is now in markup in the House Judiciary Committee, where the hearings have been laughably lopsided and the representatives have openly admitted their ignorance of the constitutional, economic, and technical implications of what they’re proposing. The bill’s sponsors were rushing for a vote before the holidays, but, after some last-minute jerking around with on-again-off-again sessions this week, it has now been delayed until some time in the new year. (PIPA has already made it out of committee and will be coming to the Senate floor in the new year.) This is a positive development: they weren’t able to ram it through committee around the holidays while fewer people are paying attention. However, SOPA’s supporters are surely counting on the large opposition effort losing momentum. If you find any of the above scary— if you don’t want to see your Facebook feed blacked out for real soon— you should help keep the pressure on Congress to stand up for freedom online.


Thoughts on the Intersection of Communication Research and Policy
Tue, 13 Sep 2011

Over the past few years, the field of communication has been engaged in a variety of collaborative initiatives, and quite a bit of self-examination, related to the issue of the field’s relevance to communications policymaking.  These activities stem from a persistent concern that the field has not achieved sufficient prominence or influence in the policy arena, despite presumably having quite a bit to offer policymakers in their efforts to effectively address the wide range of policy issues and concerns confronting them.

I’ve been involved in a number of these discussions and activities over the years, probably because in my career I’ve demonstrated both a willingness and (fortunately) an ability to play an active role in various dimensions of the communications policymaking process.  This is, I assume, also why I was invited to contribute to this topic here.  So I’ll try to offer a few observations and opinions about the place of communication research in communications policymaking.

First, it seems reasonable to say that the relevance and stature of our field, both within academia and beyond, is to some extent a function of our ability to meaningfully engage with – and influence – the relevant policy issues of our day.  Such engagement, I believe, can pay dividends within academia in a variety of forms, including faculty lines, faculty salaries, research funds, and doctoral student support.  I believe it can similarly contribute to increased external stakeholder support, such as grants, consulting opportunities, media coverage, and non-academic employment opportunities.  So, I think it’s in the long-term best interests of the field on a variety of fronts that we work to play a more prominent role in the policy arena.

If we don’t, other fields will.  Actually, whether we do or not, other fields will, as the increasingly inter-disciplinary nature of contemporary communications policy questions is attracting scholars from other fields and disciplines, such as economics, science and technology studies, and computer science.  So, the reality is we are already in competition with other fields and disciplines to play a meaningful role in communications policymaking.  All the more reason to be proactive.

The irony, though, is that while it may be in the long-term best interests of the field to try to play a more prominent role in policymaking, it’s not entirely clear that it’s in the short-term best interests of the individual faculty member. For instance, one risks getting tagged with the dreaded “applied” researcher label by one’s colleagues (this remains a bit of a scarlet A at some institutions).  As such, you might find your career trajectory affected – both within your current academic unit and in terms of your efforts to move up the ladder to higher-quality institutions.  The communication field is, of course, incredibly diverse; and so there are plenty of schools and departments in our field for which this kind of engagement with policymaking has never been seen as part of their identity or mission.

Also, being an engaged policy researcher frequently involves responding to research questions or issues raised by policymakers; and in so doing you run the risk of being criticized for not being very proactive in your research agenda, or for allowing your research to be constrained by the (possibly misguided) assumptions or priorities held by policymakers.  Again, this is something that can affect your career trajectory.

And finally, not all policy research projects lend themselves to publication in academic journals.  Consequently, there is the possibility that you’ll end up spending a significant amount of time on a project that won’t pay too many dividends in your academic career.

And so, while I think the field as a whole is coming around to recognizing the importance of being more policy relevant, it’s going to take quite a while for the academic faculty incentive and reward system to line up accordingly.  No one’s ever accused academia of moving quickly.

So, for the foreseeable future, the incentives to engage as a researcher in the policymaking process need to be largely self-generated.  And there are a variety of incentives worth mentioning, ranging from the opportunity to make a few extra bucks from time to time; to the opportunity to reach and inform an audience beyond the fairly small and narrow audience that reads our journals and books; to the satisfaction that comes from trying to contribute to the solving of interesting real world problems.  Personally, I’ve found some of these incentives to be quite powerful; but of course it ultimately all depends on what exactly it is you want out of your academic career.

Assuming, then, that engaging with policymaking is something you want to do, the next question then is how do you do it? How does one inject one’s (presumably policy-relevant) research into the policymaking process?

There are a lot of potential paths that can be traveled here. Personally, I’ve found linking up with individuals or organizations that do this sort of thing regularly to be an effective strategy.  That is, be sure to disseminate your research not only to your academic colleagues, but to members of the public interest and advocacy communities, foundations, NGOs, and relevant industry associations.  These folks are not only good at getting relevant research in front of policymakers (presuming it’s research that supports their policy position), they’re also better than we are at recognizing which aspects of our research have the greatest policy relevance. I’ve experienced a number of instances in which a finding I didn’t see as particularly interesting was seen by a member of a public interest organization as quite important.

Also, keep an eye out for calls for papers from those small, often DC-based conferences that a select few academic units, research centers, foundations, and NGOs sponsor with the specific goal of bringing policy researchers, policy advocates, and policymakers together.  These conferences typically don’t follow any kind of predictable timetable.  They emerge out of a particular funding opportunity or in response to a particular policy issue.  You’ve probably noticed that you don’t see too many FCC or congressional staffers at ICA.

And, of course, make sure your research is accessible online, through your university or personal web page, or through hubs like SSRN or Academia.edu. Policy-relevant research generally doesn’t have as long a shelf life as some other types of academic research; and as we all know, academic journals are incredibly slow.  Moreover, neither policymakers nor policy advocates have the time to read these journals.  So don’t rely on journals to help inject your work into the policymaking process.

And, a final note of caution: be ready to have your work torn to shreds by whichever stakeholder(s) come out on the short end of your findings/conclusions.  These criticisms may not be fair, and they might even get personal.  But the stakes are a lot higher in policymaking than they are in academia, so a very thick skin is a must.

What We Talk About When We Talk About Net Neutrality
Wed, 27 Oct 2010

It’s been one year now since the FCC opened up the official policymaking process for net neutrality regulations on internet access.  A lot has changed with the issue since then, but perhaps the biggest change is what “net neutrality” actually means to many of those who talk about it.  Despite its reputation as a wonky and bewildering issue, net neutrality actually boils down to a pretty simple principle with wide support: the internet should remain open, allowing universal and equal access to whatever users want on the network.  It’s important to point out, then, that a lot of those who are talking about “net neutrality” these days aren’t actually talking about this.

A few major events have dominated the net neutrality front in the last several months.  The FCC’s policy proposal process was interrupted in April when the Comcast v. FCC case put the commission’s legal authority to regulate internet access at all in serious doubt (the legacy of Bush-era deregulation).  Over the summer, then, the FCC held meetings to negotiate a compromise on net neutrality regulations— meetings held behind closed doors, with only representatives of the cable and telecom industries and internet content and service companies at the table.  Then, in August, Verizon and Google reached an agreement and announced their plan for how to enact net neutrality policy, as covered here on Antenna by Mark Hayward and here by Jennifer Holt.

Net neutrality as a term, while it has been translated different ways, ultimately has been articulated to a particular principle of openness and nondiscrimination— like common carriage, the public obligations of private infrastructure owners.  The concept of net neutrality is constructed discursively: while the term has absorbed different values and interests from various stakeholders, a common sense of its meaning has coalesced and it has become relatively stable as a discursive formation.  The term started its life as a technical principle, coined by Tim Wu to describe the most efficient network design to encourage innovation.  It has also been taken up by internet content and service companies like Google (pre-Googizon era) and Skype, mostly as an economic principle describing the most fair marketplace for their content and services to compete.  Public interest organizations like Free Press have used it to describe a civic principle of freedom of expression and democratic participation.  Despite the differing interests of these groups, they formed a sort of alliance that came together in support of a few basic tenets: infrastructure control should be kept separate from content control, so that internet access providers should not interfere with or give preferential treatment to any particular content, service, application, or device based on who owns it.

Now that one of its biggest “supporters” is Verizon, and now that Google, once the loudest pro-neutrality voice, has committed itself to a very compromised position, what net neutrality actually means is changing very quickly.  This is evident especially in the recent (eventually shelved) net neutrality bill introduced into the US House by Rep. Henry Waxman and its resemblance to the Verizon/Google vision of net neutrality.  Judging from this, “net neutrality” means something very different now.  First, openness rules apply to the “public internet,” but there are no such requirements on “differentiated services,” which means that this “private internet” would become a de facto fast lane for only the content and services owned by cable and telecom companies (hello Comcast-NBCU) or those who can afford to cut deals with them (goodbye Antenna).  Further, there are no nondiscrimination rules at all for wireless internet access, which is especially troubling since it’s easy to see that mobile devices will very soon be the dominant way to access the internet.  Finally, the FCC is left to investigate bad actors only on a case-by-case basis and has no rulemaking authority over internet access, which is clearly an indication of how corporations like Verizon and Google can cut regulators out of the policymaking process altogether and just do it themselves.

Clearly, then, there is a difference between the principles behind net neutrality and the way it is now talked about, especially in the policy sphere, where discourse has the power to shape the technical structures in question.  One undeniable reason for the progress that had been made toward enforceable net neutrality was the support of big internet corporations, especially Google.  However, the alliance that came together around the issue is beginning to splinter, as Google moves away from the overlapping interests that once held it together and toward new interests, especially its relationship with Verizon in the mobile device market.  The compromise reached between “both sides,” then, is a compromise between two competing sets of corporate interests, and the definition of the net neutrality situation is left to those with the biggest profits to gain.  As Bill Kirkpatrick detailed here on Antenna recently in regard to ACTA, this is yet another way of obscuring the powers at play in making policy: when the party at the negotiating table that comes closest to representing the public interest is just another big corporation, and the public interest and the corporate interest inevitably split, we’re all left out of the process.

Report From Internet Research 11 http://blog.commarts.wisc.edu/2010/10/25/report-from-internet-research-11/ http://blog.commarts.wisc.edu/2010/10/25/report-from-internet-research-11/#comments Mon, 25 Oct 2010 12:32:10 +0000 http://blog.commarts.wisc.edu/?p=6966

[Image: two people with laptops sit near a wall of windows. Photo credit: Wrote's Flickr stream]

Last week, I joined 250 international scholars in Gothenburg, Sweden for Internet Research 11, the 2010 conference of the Association of Internet Researchers. The conference theme – Sustainability, Participation, Action – carried over into the emphasis on producing a greener academic conference, with programs available on USB sticks and an all-organic menu. IR 11 is wildly interdisciplinary, tied together largely by research topic, leading to a number of fascinating connections, disjunctures, and challenges. With up to seven concurrent sessions, however, my experience is obviously only a partial view of the work being done.

On Thursday, I began with a panel on user-generated culture, which included presentations on material fan practices that cross into the online, Chinese fans of US television, YouTube memes, and fan-made film trailers. Limor Shifman drew our attention to the role of the internet not just as paradise for memes, but “paradise for meme researchers,” who can more easily follow the flows of cultural material, and Kathleen Williams expanded on this theme by using spatial frameworks to understand the expansion of fan-made film trailers. Afterwards, the roundtable on “Sustainable Entertainment” offered a range of academic and industrial perspectives on what makes entertainment media sustainable, and how online content and distribution channels may contribute to sustainability. Featuring Jean Burgess, Mia Consalvo, Patrick Wikström, Martin Thörnkvist of music label Songs I Wish I Had Written, and Wenche Nag of Telenor, and convened by Nancy Baym, the roundtable addressed the prospects of the music industry, particularly the possible roles for services such as Spotify in sustaining entertainment careers, as well as the changing games industry, the challenges of creating the variety and change that sustain systems of culture, and the persistent question of e-waste and the materiality of the digital.

This question of sustainable ICT in terms of devices, hardware, and labor was revisited in Friday’s keynote by Peter Arnfalk of Lund University in Sweden. Arnfalk discussed both the greening of IT (making technology more environmentally friendly) and greening through IT (using technology to reduce the environmental impact of other activities). The audience, via Twitter (hashtag #ir11), appeared struck by the statistics that attempted to concretize the impact of technology – one Google search emits 0.2 grams CO2, and ICTs account for 2% of total CO2 emissions. Arnfalk went on to discuss the European and Swedish experiences with addressing green IT through business and policy channels in the past 15 years.

The final keynote featured Nancy Baym discussing the evolution of media production, distribution, and fandom in light of the internet. Starting from her personal interest in, appropriately, Swedish pop music, she went on to address the “Swedish Model” more broadly, as music labels attempt to work with the decentralization in music, sharing their products freely rather than clinging to old models. This involves looking for new possible revenue streams, including making money from sources other than listeners and fans, and seems to foster the rise of a “middle-class musician,” who can attain mild success in a diverse mediascape. The role of social media was addressed in terms of the relative rewards for fans and audiences, and Baym closed by asking whether this new landscape means academics need to interrogate our notions of “fans,” “audiences,” and “community.”

Facebook was omnipresent – presenters addressed surveillance, identity and personal branding, community, and of course, privacy. Michael Zimmer proposed that the “laws of social networking,” and its business imperatives, lead to a desire to “make privacy hard,” and thus attempting to influence Facebook’s privacy policies may no longer be possible. Instead, he suggested moving forward through channels of government regulation, developing alternate technologies, or encouraging media literacies. Alice Marwick and danah boyd discussed their recent fieldwork with American teenagers, who increasingly treat Facebook as a hyper-public space (“shouting to a crowd”) and turn to hiding their messages from parents while sharing with peers (steganography), or using alternatives such as private Twitter accounts (“talking in a room”).

AoIR also seems to be developing a robust games community, featuring a handful of dedicated sessions and the inclusion of games research in a variety of panels. While MMOs, particularly World of Warcraft, were central to many of the papers presented, scholars also addressed casual online gaming, game cultures that extend beyond the game space, digital distribution of games, gaming in social networking sites, and console games such as Left 4 Dead. Questions of identity, affect, and power within gaming spaces crossed between panels and conversations.

Finally, Friday’s roundtable on the futures of academic publishing combined exciting possibilities and success stories with warnings about the inherent unsustainability of the current US system of journal and book publishing as it relates to defunding university libraries, tenure and promotion, and the peer review system. Participants included Nicholas Warren Jankowski, Clifford Tatum, Steve Jones, Alex Halavais, Kathleen Fitzpatrick, Siva Vaidhyanathan, and Stu Shulman. Projects such as Media Commons and open peer review were discussed, as were the possibly changing roles of journals and books in an era of self-publishing and alternative forms of scholarly conversation. Panelists advocated writing for more popular audiences – or at least making scholarship accessible outside the academy – as well as the necessity of senior scholars participating in new publishing forms and of all of our active participation in reforming journal hierarchies and the standards by which tenure and promotion are determined.

The ACTA Retreat: Their Ignorance, And Ours http://blog.commarts.wisc.edu/2010/10/21/the-acta-retreat-their-ignorance-and-ours/ http://blog.commarts.wisc.edu/2010/10/21/the-acta-retreat-their-ignorance-and-ours/#comments Thu, 21 Oct 2010 17:14:30 +0000 http://blog.commarts.wisc.edu/?p=6942

Last week the U.S. apparently “caved” on the Anti-Counterfeiting Trade Agreement (ACTA), intended to protect corporate intellectual “property.”  Though the retreat is only partial, it’s exciting that the content industries were denied their full wish list of mandatory three-strikes provisions, etc., and will have to settle for a few stocking stuffers like watered-down restrictions on DRM circumvention.

But the intriguing part of this weaker ACTA is:  we’re not exactly sure whom to thank. Canadian professor Michael Geist, tenacious as a wasp in keeping public pressure on negotiators?  The less gung-ho countries who, tired of being Yank-ed around (or perhaps just not seeing what was in it for them), insisted on scaling back the agreement’s ambitions? Maybe the U.S. Trade Representative blundered by pursuing the most undemocratic path available; as James Love suggested, the USTR’s circumvention of Congress may have cost ACTA important legislative buy-in. Or perhaps we should thank “You”—the Time magazine Person-of-the-Year You—for all Your watchfulness and activism.

But no matter to which address we should gratefully ship the Chivas, the ACTA retreat is indicative of a larger crisis in how the policy sphere works today. Specifically: we have no idea how the policy sphere works today.

Once upon a time, it was possible to imagine that we understood policymaking.  There was an official policy sphere composed of the state (in the U.S., Congress, the FCC, etc.), business, and the public (either public interest groups or individual citizens making their wishes and displeasures known).  Policy emerged from these players working out differences using the (unequal) power at their disposal. To effect policy change was to work through established channels of regulatory authority.

It was never as tidy as that, of course, but the fact remains that today, the legible official policy sphere has been blown all to hell through a combination of new players, differently empowered old players, new technologies for policy, and new technologies of policy.

We also have, importantly, new ignorances. With previous technologies, policymakers may not have understood the technical details but they could usually grasp the basics of the questions they were grappling with, and even some of the implications of those questions.

Today, not so much.  Ted Stevens’ “series of tubes” became a sensation because it was the perfect metaphor for the profound ignorance driving policy today.  The recent Ninth Circuit opinion in Vernor v. Autodesk reaffirms that our policymakers—in this case, judges—are dangerously ignorant of fundamental technological and even legal distinctions. In the other direction, ACTA demonstrates the ignorance of negotiators who, incredibly, believed they could hammer out their agreement in absolute secrecy in this day and age.

But we need to acknowledge our own ignorance of policymaking power as well.  Instead of imagining that we still understand the policy sphere, we need new models, new metaphors for contemporary policymaking. Our old conception of a legible policy sphere won’t cut it anymore.

Some contenders:

The Fraserized Policy Sphere:  Remember how Habermas theorized a unified public sphere for democratic deliberation, and then Nancy Fraser pointed out the existence of subaltern counterpublics?  Maybe that’s what happened to policy: we need to contend with a proliferation of new (or newly visible) subaltern policymaking bodies, from local school boards getting into media regulation, to programmers building policy into their products (“code is law” and all that), to spammers driving policy from the bottom up.

The Networked Policy Sphere:  Borrowing from Yochai Benkler’s own reworking of Habermas, perhaps the better model analyzes networks and nodes of policymaking authority.  Like the Fraserized Policy Sphere but more complex and webby.

The Pains of Policy Stretch:  As discussed by Danny Kimball at the recent Flow Conference, we’re using legacy policy formulated for one set of technologies to govern a new set of technologies, and the resulting legal and regulatory contortions are dislocating a lot of joints.  Maybe we need new regulatory calisthenics to maintain policy fitness, to overextend a metaphor.

The “Spinning Pool Table” Model:  The added complexities of distributed power and technological multiplicity have led to an explosion of unintended consequences, and no one understands where the billiard balls are going, or even where the pockets are.  This is the case in the wrong-wrong-stupid-wrong decision in Vernor, which in the most dramatic interpretation just separated legal transfer from ownership.  That’s what we call a scratch.  How might we begin to establish a new physics of policy so that we can at least regain our ability to estimate the consequences of our policy shots?

I could go on, but the point is that, even if we figure out what really happened to ACTA, we’re left in a state of profound confusion about the range of forces at work in policymaking today.  May those working for the public interest be the first ones to figure it out.

[The photo above is modified from an original by Paul Goyette, released under a Creative Commons Attribution-Noncommercial-ShareAlike license.]

Of Pigs and BullSh*t: Fox Television Stations, Inc. v. FCC http://blog.commarts.wisc.edu/2010/08/06/of-pigs-and-bullsht-fox-television-stations-inc-v-fcc/ http://blog.commarts.wisc.edu/2010/08/06/of-pigs-and-bullsht-fox-television-stations-inc-v-fcc/#comments Fri, 06 Aug 2010 13:30:09 +0000 http://blog.commarts.wisc.edu/?p=5434

The narrowly decided 1978 Pacifica decision was, from one perspective, a battle over pig metaphors.  The majority decision, penned by Justice Stevens, sanctioned the FCC’s indecency policy on the ground that the broadcast medium was “uniquely pervasive”; therefore it was permissible to restrict broadcasters from airing indecent content during the hours when children were most likely to be in the audience.  In Pacifica, indecent content was not being forbidden, just “rezoned” to a time when kids were less likely to be exposed to it.  Likening the policy to public nuisance laws, the Court reasoned that it “may be merely the right thing in the wrong place, — like a pig in the parlor instead of the barnyard.”  In his strongly worded dissent, Justice Brennan drew on another pig metaphor, this one derived from a 1957 indecency case, in which he claimed that the policy endorsed by the majority was like burning “the house down to roast the pig,” a far too excessive response that could have dire consequences for free speech.

Sadly, there are no new pig metaphors in Fox Television Stations, Inc. v. FCC, the July 13 appellate court decision that ruled the FCC’s indecency policy to be unconstitutional, though the Pacifica case looms large in the decision.  In the latest of a series of decisions over the Commission’s indecency enforcement, the Fox court argued that the FCC’s indecency policy was “impermissibly vague,” had a dangerous chilling effect on speech, and violated the First Amendment. According to the court, the current policy had produced too much uncertainty as to what was or was not indecent and thus encouraged broadcasters to adopt overly cautious practices of self-censorship to avoid Commission penalties.

Though the court acknowledged it lacked the authority to overturn the Pacifica precedent, it indicated that the “uniquely pervasive” rationale at its center seemed like something of a relic, an anachronism in an era of online video, expanded cable television, social networking sites, and technologies like the v-chip that allow parents to block the very content the indecency rules were designed to shield from children. Though this was not the meat of the court’s decision, it has been the part of the Fox case that has received a good deal of praise and attention. Commentators ranging from the New York Times editorial board to former FCC chair Michael Powell have suggested that Fox reinforces a marketplace approach to media regulation, one that interprets all content restrictions as outdated, unconstitutional, and unnecessary in a world of media plenty. With so many potential pigs in the parlor, to treat broadcasting as uniquely pervasive no longer seemed tenable.

Yet to me this is the wrong takeaway from Fox.  The appellate court did not only gesture to how the majority decision in Pacifica may no longer be salient, but also importantly implied that the view offered in Brennan’s dissent was perhaps right after all.  The crux of the Fox decision hinged on the vagueness of the Commission’s indecency rules, which had led to contradictory and discriminatory outcomes.  How does it make sense, the court wondered, that the Commission deemed the word “bullshit” patently offensive, but not “dickhead”?  Perhaps more importantly, why were expletives permissible when uttered by the fictional characters in Saving Private Ryan, understood as necessary to the verisimilitude of the film, but not when spoken by actual musicians interviewed for the documentary The Blues?  Such inconsistencies, the court surmised, potentially reflect the biases of the commissioners themselves and “it is not hard to speculate that the FCC was simply more comfortable with the themes in ‘Saving Private Ryan,’ a mainstream movie with a familiar cultural milieu, than it was with ‘The Blues,’ which largely profiled an outsider genre of musical experience.”

The vagueness of the rules, in other words, not only made it very difficult for broadcasters to anticipate when and whether the use of terms like  “douchebag” (my example, not the court’s) would be ruled indecent, but provided latitude for the Commission to privilege its own mores and values in determining what should be permissible in the public sphere. It is a view that echoed Justice Brennan’s dissent in Pacifica, in which he warned that the majority decision would sanction the “dominant culture’s efforts to force those groups who do not share its mores to conform to its way of thinking, acting, and speaking,” and in which he accused his colleagues of an “ethnocentric myopia,” the Pacifica decision itself an imposition of the justices’ own “fragile sensibilities” on a culturally pluralistic society.

Brennan’s concern was not that the Commission’s and the Court’s indecency policy was imprecise, but that its intent seemed all too transparent: a way to silence speech that offended their sense of decorum, exposed taboos they’d prefer remain hidden, or articulated political and social values they found unpalatable.  Not only were they burning the house to roast the pig, but the distinctions they were to draw between pearls and swine would sanction their own presumptions about aesthetics, ethics, and respectability.

And this I think is Fox’s important referendum on Pacifica and the indecency policy it had sanctioned: not that the media marketplace is now a haven for free speech, but that broadcasting policy too often can operate as a cudgel to privilege the sensibilities and perspectives of particular sectors of the community over others in the guise of seemingly neutral regulations.  It’s not, in other words, that our contemporary parlors are overrun with pigs, but that to many of us those pigs in the parlor may never have been pigs after all.

Winning Some Battles in the Copyfight http://blog.commarts.wisc.edu/2010/08/02/winning-some-battles-in-the-copyfight/ http://blog.commarts.wisc.edu/2010/08/02/winning-some-battles-in-the-copyfight/#comments Mon, 02 Aug 2010 14:42:53 +0000 http://blog.commarts.wisc.edu/?p=5377

Some good news came from the battlefield that is media and technology policy recently: some important fair use rulings that help to hold off the ever expanding clutches of copyright.  Through a nice (if small) corrective built into the generally heinous Digital Millennium Copyright Act, every three years the Library of Congress rules on exemptions to the anti-circumvention clause that makes it illegal to break technological protections on copyrighted material.  Here are the new exemptions:

  • Ripping clips of DVDs for educational purposes and use in documentary and noncommercial works is now allowed under the law.  This extends the previous exemption enjoyed only by lucky film and media studies instructors and their classroom uses to recognize more instructors, non-classroom uses, and students.  Make sure to check out Jason Mittell’s posts here and here for details and what this means for academics.  Beyond that, this ruling is also a big victory for those documentary filmmakers and remix video artists who have to crack encryptions on the existing material they transform for criticism and commentary.
  • It’s also now legal to jailbreak your phone, opening up its operating system for other mobile networks and applications.  This ruling is most specifically about unlocking the iPhone for use with carriers other than AT&T and to run apps other than those available on Apple’s tightly controlled iTunes App Store, which is an important limitation on the power that device producers like Apple can have over users.
  • Users also now have the right to crack digital rights management in order to run screen-readers on ebooks.  Many publishers technologically restrict the use of text-to-speech functions on computers and devices like the Kindle, so allowing for getting around this is especially key for promoting accessibility for people with print and visual disabilities.
  • The ruling also allows for academic security research on video game DRM, in response to concerns over some specific security vulnerabilities.

The legal recognition of these fair uses is a very encouraging development— the result of a lot of great work by organizations like the Electronic Frontier Foundation, American University’s Center for Social Media, the Organization for Transformative Works, the Society for Cinema & Media Studies, and others.  As Jonathan Zittrain and others point out, though, there are still a number of technological and legal hurdles that remain in the way of these uses— not least, of course, are the technical skills necessary to pick the locks in the first place.  And this is all only good for another two years, when these exemptions will have to be defended at the next review.

The Library of Congress’s ruling is even more encouraging, though, when taken along with two other recent court decisions on copyright.  The first case made some headlines: in June, a federal court threw out Viacom’s $1 billion copyright infringement lawsuit against YouTube.  The summary judgment ruling took a good strong reading of the “safe harbor” provision of the DMCA, finding that Google only hosts others’ content on YouTube and therefore can’t be held liable for the actions of its users.  The second case went relatively unnoticed: the day after the LOC announced its exemptions, a federal appeals court ruled that breaking DRM just to access a piece of software isn’t illegal under the DMCA’s anti-circumvention rules.  While a rather abstruse case involving medical system software and dongles (yes, dongles), the decision sets a pretty clear and substantial fair use precedent: breaking technological protections on a work is legal as long as the use you make of it is legal.  These cases, though, are likely far from over— expect to see appeals to the Supreme Court in both.  Nonetheless, in the fight for a more balanced approach to copyright regulation (and especially in light of some really scary stuff on the horizon), it’s nice to have some victories to celebrate.

Retransmission Consent as Awards Show http://blog.commarts.wisc.edu/2010/03/13/retransmission-consent-as-awards-show/ http://blog.commarts.wisc.edu/2010/03/13/retransmission-consent-as-awards-show/#comments Sat, 13 Mar 2010 15:01:25 +0000 http://blog.commarts.wisc.edu/?p=2517

[Image: a poster from an earlier retransmission consent battle]

Last Sunday was a double nail-biter for Oscar enthusiasts who subscribed to Cablevision in the NYC metro area. Not only were they nervously awaiting the envelope (please) but anxiously wondering whether they would receive the telecast at all. The 3.3 million Cablevision subscribers were the most recent victims of a retransmission consent dispute between a cable operator and the owner of a local broadcast TV station. A 1992 statute dictates that every three years a broadcast TV station must decide either to demand that its local cable system carry its signal without compensation (called must-carry) or to negotiate some sort of compensation for carriage (called retransmission consent). In this case, WABC, the ABC network’s NYC affiliate, owned by the Disney Corporation, wanted Cablevision to pay Disney $1 per subscriber to carry WABC. Cablevision ran ads lambasting the greedy Disney Empire while Disney encouraged viewers to switch TV service to a satellite or telecom carrier. Early Sunday morning Disney pulled the plug on WABC’s signal to Cablevision, prompting the parties to reach a tentative agreement; the signal was not restored until 14 minutes into the Academy Awards broadcast.

One impetus for the must-carry and retransmission consent rules was the recognition that local broadcasting was in some way a public good — that the form of television that was locally produced and freely available for all who owned a television receiver was something worth preserving. Must-carry and retransmission consent were policies to ensure that local broadcasting would remain economically viable as cable and satellite forms of nationally distributed programming expanded – they were meant to provide local broadcasters with some measure of leverage in contract negotiations with cable operators. But when media conglomerates grew to own TV stations, broadcast networks, cable networks and other media properties, the must-carry/retransmission rules became tools for leveraging corporate power in carriage negotiations. In the 1990s, FOX used retransmission consent to get cable operators to carry its FX cable network, ABC did so to leverage ESPN2, and NBC leveraged CNBC. More recently, in addition to leveraging carriage of conglomerates’ non-broadcast networks, broadcasters have used retransmission consent disputes to negotiate direct per-subscriber fees for broadcast TV carriage. For example, FOX negotiated per-subscriber fees for its owned-and-operated broadcast TV stations, while other large multi-station groups, such as Sinclair, have obtained per-subscriber payments for their broadcast stations as well. With more precedents for per-subscriber fees for broadcast TV station carriage, cable and satellite operators have formed a group to lobby the FCC to arbitrate these disputes and prevent broadcasters from pulling their signals during contract negotiations.

But rather than push these public disputes behind closed doors, perhaps now that we, the subscribers, are in effect paying directly for local broadcast TV, we should have more say about programming decisions and corporate practices. If corporate conglomerates used the privilege of retransmission consent (a policy derived from the foundational principle that the airwaves are a public resource) to leverage their corporate interests in negotiations, why can’t we, the subscribers, use this policy to leverage our demands for more corporate transparency and a voice in programming decisions?

Well, for inside-the-beltway folks this would be just silly. But we can imagine, and even begin to organize, a way to make these retransmission consent disputes more publicly relevant beyond missing Alec Baldwin and Steve Martin’s 14 minutes of opening shtick. As the next retransmission consent dispute inevitably looms (quite possibly coming to a neighborhood near you), perhaps what we need is a Retransmission Consent Awards Show that allows viewers to express their viewing desires and hold the conglomerates accountable for their corporate practices. Let the subscribers have a voice in how their money is allocated, and in deciding which corporate entity is more worthy of compensation.

An Awards Show for this latest dispute would have subscribers vote in several categories: one award for the least egregious practices in compensating executives, another for records on labor relations, another for merchandising practices, and perhaps one for campaign contributions. The Show might include dramatic reenactments of corporate activities, such as when Disney pressured the Harvard-affiliated Judge Baker Children’s Center to evict the Campaign for a Commercial-Free Childhood after the advocacy group proved Disney’s Baby Einstein videos had no educational value and persuaded Disney to refund customers who purchased these videos. There might be a sports award that allowed viewers to comment on how Disney’s ESPN and the Dolans’ MSG franchises cover sports. I for one miss ESPN’s Playmakers, the scripted show that was critical of NFL culture, which is likely why it was short-lived. I’ve also become a fan of women’s softball, but get tired of waiting until the Olympics to watch it on TV (and now not even that, since softball was dropped from the 2012 games). Indeed, perhaps others would want more coverage of women’s sports in general from these conglomerates, especially given that ESPN charges cable systems close to $4 per subscriber for carriage. Cable subscribers might also have something to say about how much of their per-subscriber fee for a local broadcast channel actually gets allocated to the local station, rather than to the station’s affiliated network or conglomerate. I watch local TV news in the morning (and indeed, studies show that local TV news is still the leading media source for news) and enjoy the weather reports and puff pieces on community events from dog shows to what not. But I would appreciate it if the station had more investigative personnel to cover city hall and local commerce — as I’m sure local news producers would like more resources to do so as well.

It’s our airwaves and increasingly our direct payments. What would other subscribers like to see exposed, talked about and shared in the coming retransmission disputes?
