Textual Analysis & Technology: Information Overload, Part II (22 July 2015)
Post by Kyra Hunting, University of Kentucky

This post is part of Antenna’s Digital Tools series, about the technologies and methods that media scholars use to make their research, writing, and teaching more powerful.

In my last post, I discussed how I stitched together a system built primarily from simple office spreadsheet software to help me with the coding process used in my dissertation. As I moved into my first year post-diss, with new projects involving multiple researchers and multiple kinds of research (examining not only television texts but also social media posts, survey responses, and trade press documents), I realized that my old methods weren't as efficient or as accessible to potential collaborators as I needed them to be. This realization started a year's worth of searching for a great software solution that would help me with the different kinds of coding I found myself doing as I embarked on new projects. While I discovered a number of great qualitative research software packages along the way, ultimately nothing was "just right."

The problem with most of the research-oriented software I found was that it rests on at least one of two assumptions about qualitative research: 1) that researchers have importable (often linguistic, text-based) materials to analyze, and/or 2) that we already know what we are looking for or hoping to find. Both of these assumptions presented limitations as I tried to find the perfect software mate for my research.

The first software I tried was NVivo, a qualitative research software platform that emphasizes mixed media types and mixed methods. This powerful software was great in many ways, not the least of which was that it counted for me. I first experimented with NVivo for a project I am doing (with Ashley Hinck) looking at Facebook posts and tweets as a site of celebrity activism, and in this context the software has acquitted itself admirably. It allowed me to import PDFs of these posts into the system and then code them one by one. I found the ability to essentially have a pull-down list of items to consider very convenient, and I appreciated that I could add "nodes" (tags for coding) as I discovered them and could connect them to other broader node categories.

Sample Node List from NVivo

The premise behind my dissertation had been to set up a system that allowed unexpected patterns to emerge through data coding, and I wanted to carry that approach into my new work. NVivo supports that goal well, counting how many of the 1,600+ tweets being coded were associated with each node and allowing me to easily see patterns emerge in terms of which codes were most common and which were rare. An effective query system allows researchers to quickly find all the instances of any given node (e.g., all tweets mentioning a holiday) or group of nodes (e.g., all tweets mentioning a holiday and including a photo). While the format of my data meant I wasn't able to use NVivo's very strong text-search query, its capability to search for text within large amounts of data, including transcripts, showed great potential. NVivo seemed to be the answer I was looking for, until I tried to code a television series.

Sort for most frequent nodes from my project with Ashley Hinck
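To make the node logic concrete, here is a minimal sketch in Python (my own illustration, not NVivo's data format or API) of what this kind of coding and querying amounts to; the node names and tweet IDs are hypothetical.

```python
# Minimal sketch of node-style coding: each coded tweet carries a set of node tags.
# Node names and tweet IDs are hypothetical, for illustration only.
from collections import Counter

coded_tweets = {
    "tweet_0001": {"holiday", "photo", "charity_appeal"},
    "tweet_0002": {"holiday"},
    "tweet_0003": {"photo", "fan_reply"},
}

# Count how often each node appears across the corpus (the kind of tally NVivo reports per node).
node_counts = Counter(node for nodes in coded_tweets.values() for node in nodes)
print(node_counts.most_common())

# Query: all tweets coded with BOTH "holiday" and "photo" (an AND query on nodes).
matches = [tid for tid, nodes in coded_tweets.items() if {"holiday", "photo"} <= nodes]
print(matches)
```

Counting nodes and running an AND query are just set operations once each item carries its tags; the value of a package like NVivo is doing this at scale, across 1,600+ items, without any programming.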

For social media, my needs had actually been relatively simple: I was simply marking whether any one of a few dozen attributes was present in relatively short social media posts. But with film and television my needs increased. It wasn't just a matter of x, y, or z being present; if x (say, physical affection) was present, I also needed to be able to note how many times, between which people, and to add descriptive notes. This is not what NVivo is built to do. NVivo imagines that researchers are doing three different things as distinct and separate steps: coding nodes, searching text, and linking a single memo to a single source. NVivo is great at doing these things, and I expect it will continue to serve me well when working with survey and text-based data. But for the study of film and television shows, I found NVivo demanded that I simplify the questions I asked in ways that were inappropriate. After all, in the complex audio-visual-textual space of film and television it isn't just that a zebra is present but whether it is live-action or animated, whether it talks or dances, how many zebras are around it, what sound goes with it, and so on. Memos let you add notes, but NVivo allows only one memo per source, and the memos are awkwardly designed and hard to retrieve alongside the nodes.
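To illustrate what that richer coding has to capture, here is a hypothetical sketch of a single coded observation for a television episode; the field names and example values are stand-ins of my own, not NVivo's (or any other package's) data model.

```python
# Hypothetical sketch of one coded observation from a TV episode: not just
# whether an attribute is present, but how often, between whom, and with notes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedObservation:
    episode: str                                        # e.g., "S02E05"
    attribute: str                                      # e.g., "physical_affection"
    count: int = 1                                      # how many times it occurs
    participants: List[str] = field(default_factory=list)  # free-text character names
    notes: str = ""                                     # descriptive memo attached to this code

obs = CodedObservation(
    episode="S02E05",
    attribute="physical_affection",
    count=3,
    participants=["Character A", "Character B"],
    notes="Two familial hugs, one tense handshake; all in the hospital set.",
)
print(obs)
```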

I found that NVivo competitor Dedoose gave me a bit more flexibility in the ways I could code, but it did not handle my need to simply add episodes as codable items. I was unable to import the episodes themselves, and simply typing in an episode's title and coding as I watched was much harder than I expected. Like NVivo, Dedoose seems to imagine social scientists who work with focus groups, surveys, oral histories, and the like as its primary market. Trying to use Dedoose without an existing spreadsheet or set of transcripts to upload proved unwieldy. In the analysis of film and television, coding while you are collecting data is possible, even desirable, but the notion that data collection and the coding of data are two separate acts is built into this system.

If Dedoose's limitation was the assumption of importable data, Qualtrics' was the assumption that I would have already decided what I would find. I quickly discovered that while Qualtrics was wonderful at setting up surveys about each episode and effectively calculated the results, it did not facilitate discovery. If, for example, I wanted to code for physical affection and sub-code for gentle, rough, familial, or sexual, it could manage that well. But if I wanted to add which characters were involved, this too needed to be a predetermined list to select from; I couldn't simply type in the characters' names and retrieve them later. Imagine the number of characters involved in physical affection over six years of a prime-time drama and you can see why a survey list (instead of simply typing in the names) would quickly become unwieldy.

That is how I found myself falling back on enterprise software, this time the database software FileMaker Pro. FileMaker Pro doesn't do a lot of things. It doesn't allow you to search the text of hundreds of Word documents, it doesn't visualize or calculate data for you, and it doesn't automatically generate charts. But what it does do is give you a blank slate for whatever types of information you need in each database and help you create a clear interface for inputting that information. Would I like to code using a set of check boxes indicating all the themes that I have chosen to trace in a given episode? No problem! Need a counter to input the number of scenes in a hospital or police station? Why not?! Need to combine a checkbox with a textbox so I can note both what happened and who it happened to? Sure! And since it is a database system, finding all of the episodes (entries) with the items that were coded for is simple and straightforward. This ability to not only code external items but to code them in multiple ways, for multiple types of information, using multiple input interfaces proved invaluable. So did its ability to let me continue coding on an iPad as well as a laptop, which allowed me to stream video on my computer at work or while traveling and code simultaneously.
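FileMaker layouts are built through its visual tools rather than in code, but the underlying idea is simply a flexible record per episode plus easy retrieval. Here is a rough stand-in sketched with SQLite, chosen purely for illustration; the field names and values are hypothetical.

```python
# Rough stand-in for a FileMaker-style episode database, sketched with SQLite.
# Field names and values are hypothetical; FileMaker itself is configured through
# its visual layout tools, not SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE episodes (
        title            TEXT,      -- typed in while watching
        hospital_scenes  INTEGER,   -- counter field
        has_affection    INTEGER,   -- checkbox (0/1)
        affection_notes  TEXT       -- textbox: what happened and to whom
    )
""")
conn.execute(
    "INSERT INTO episodes VALUES (?, ?, ?, ?)",
    ("Pilot", 4, 1, "Reconciliation hug between the two leads in the final scene."),
)

# Finding all coded episodes with a given item is a simple query.
for row in conn.execute("SELECT title, affection_notes FROM episodes WHERE has_affection = 1"):
    print(row)
```

The point is not the SQL but the shape of the record: a checkbox, a counter, and a free-text note sit side by side in the same entry and remain retrievable together.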

FileMaker Pro has its limitations, too. It does not connect easily with other coders unless everyone has access to the expensive FileMaker Server, and since I have just begun using FileMaker I may find myself still paying for a month of Dedoose here and there to visualize data I collected in FileMaker or importing the notes from my database into NVivo to make a word tree. But at the end of the day what characterizes textual analysis is its interpretive qualities. The ability to add new options as you proceed, to combine empirical, descriptive, numerical, linguistic and visual information, and to have a platform that evolves with you is invaluable.

While I didn't find the perfect software solution, I found a lot of useful tools and I discovered something important: as powerful as the qualitative research software out there is, none of it is currently well suited to textual analysis. The textual analysis that media studies researchers do creates unique challenges. While transcripts of films and television shows can be easily imported (if they can be obtained), the visual and aural elements of these texts are essential, so many researchers in this area will want to code items without importing them as transcripts into the software. Furthermore, the different ways to approach media – counting things, looking for themes, describing aesthetic elements – necessitate multiple ways to input and retrieve information (similar to Elana Levine's discussion of incorporating thousands of sources in multiple formats for historiographical purposes). The potential need to have multiple people coding television episodes or films requires a level of collaboration that is not always easily obtained outside of social-science-oriented software like Qualtrics. Early film studies approaches often combined reception with description, and these two actions remain important in contemporary textual analysis. Textual analysis requires collecting, coding, analyzing, and experiencing simultaneously (particularly given the difficulties in going back to retrieve a moment from hours and hours of film or television). It is an act of multiplicity, experiencing what you watch in multiple ways and recording the information in multiple ways, that current software does not yet facilitate. The audio-visual text requires a different kind of software, one that does not yet exist: one that would not only allow for all these different kinds of input and analysis but also let you easily associate codes with timestamps, count shots or scene lengths, and link them with themes. While the perfect software is not out there, I found that combining software like FileMaker Pro, NVivo, and Dedoose with simple tools like Cinemetrics could still help me dig more deeply into media texts.
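To picture what such timestamp-aware coding might look like, here is a purely hypothetical sketch of codes anchored to time ranges and linked to themes; it describes no existing tool.

```python
# Purely hypothetical sketch of timestamp-aware coding for audio-visual texts:
# each code is anchored to a time range and can be linked to broader themes.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TimedCode:
    episode: str
    start_seconds: float
    end_seconds: float
    code: str
    themes: Tuple[str, ...] = ()

    @property
    def duration(self) -> float:
        return self.end_seconds - self.start_seconds

clips = [
    TimedCode("S01E01", 312.0, 340.5, "zebra_animated_talking", ("animals", "fantasy")),
    TimedCode("S01E01", 905.0, 918.0, "hospital_scene", ("institutions",)),
]

# Total screen time per theme: the sort of tally such software could automate.
totals = {}
for clip in clips:
    for theme in clip.themes:
        totals[theme] = totals.get(theme, 0.0) + clip.duration
print(totals)
```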

Digital Tools for Television Historiography, Part III (9 June 2015)
Post by Elana Levine, University of Wisconsin-Milwaukee

This is the third in a series of posts on my use of digital tools for a television history project. See part 1 and part 2.

Many of the digital research and writing needs I have been discussing in previous posts might apply to any historical project. Anyone who is grappling with thousands of sources in multiple formats might find data management and writing software useful to their task. But the work of managing audio-visual sources is more specific to media history. Television historiography, in particular, can be especially challenging in this regard, for series television includes episode after episode and season after season of programming — a lot of material for any researcher to take on.

In the case of my history of American daytime television soap opera, from its beginnings in the early 1950s to the present, I face a task even more daunting than most TV history, for the genre I am studying has run 15-minute, half-hour, or hour-long episodes each weekday, 52 weeks a year, with individual series continuing this schedule for more than 50 years. Of course there is no way to watch all of it, or even to sample it in some representative way. Much of it no longer exists: all soaps were broadcast live initially, and many of those that started to be shot on video in the 1960s did not preserve the episodes; producers erased the tapes on an established rotation schedule. As far as I know, no public archive anywhere holds all of the episodes of any US soap, although some of the shorter-lived ones do exist in complete form in the hands of media distribution companies or fan collectors. Fan archivists have preserved and uploaded to user-generated streaming video sites a massive amount of their private collections, taped off-air from the beginnings of the home VCR era to the present; there is more than one researcher could ever consume from the post-'80s era alone.

But my point here is not to marvel at the voluminous output of soap creators and soap fans (although, wow!), nor to lament the disappearance or inaccessibility of so much of this crucial form of American popular culture (although, what a loss!). Instead I’d like to explain what I watch and, more specifically, how I watch, for that is entirely dependent on digital tools.

For the past 7 years, I have been integrating the viewing of past soap episodes into my daily routine. My choices of what to watch have been directed largely by availability. Other than episodes I have been able to see in museums and archives, my viewing has been focused on the limited number of soaps I have access to otherwise, of which I have tried to watch as many episodes as are available. Because I have been a soap viewer since the early 1980s, I have been less concerned with seeing programs from my own viewing history, although I am gradually integrating more material from the user-generated streaming archive over time. Instead, I have focused on the one soap that has been released in its entirety on DVD, Dark Shadows, and on soaps that have been rerun in recent years on cable channels, mostly the now-defunct SOAPnet, and on the broadcast network RetroTV, which is carried primarily by local affiliates' digital sub-channels.

In addition to daily reruns of just-aired soaps, SOAPnet reran select past episodes from a number of programs, and also aired a full run of ABC's Ryan's Hope from its 1975 debut through 1981 (the show originally aired until 1989). It also reran several years' worth of Another World episodes from the late 1980s and early '90s, and Port Charles' telenovela-style 13-week arcs of the early 2000s. There have been other such instances, as in Sci-Fi's rerun of Passions' first few months in 2006. These repeats began airing around 2000, so I started recording them well before I was actively working on this project. As these repeats aired, I saved them first to VHS and then, once I abandoned those clunky old tapes, to DVD. DVD is a poor archival medium, but when I started doing this, the digital recording and storage options we now have did not exist. As with many other technological tools, what I did worked for me, so I kept doing it.

I’ve watched much of this material over the past 7 years and am watching more every day. The recent addition of RetroTV’s rerun of the Colgate-Palmolive NBC soap, The Doctors, beginning with the late 1967 episodes, has further contributed to my archive. But how I do my viewing is where I employ digital video tools.

The author's two-screen work set-up.

Because most of my episode archive is on DVD-Rs I have burned over the years, my process is to convert these DVDs to mp4 files. Software like Handbrake accomplishes this on my Mac, as did the now-defunct VisualHub. For content I access through user-generated streaming sites, I use downloading software, some of which is available for free online. I also use iSkysoft iTube Studio for its ability to download from a range of such sites and to convert those files to iPad-ready mp4s. Managing the files through iTunes, I transfer them to my iPad in week-long viewing chunks, moving them off my limited-capacity first-generation iPad after I watch. This multi-step process can be a bit cumbersome, but it achieves some key goals that have allowed me to watch a lot of content over time.
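For anyone who wants to script the DVD-to-mp4 step, here is a minimal batch-conversion sketch that assumes the command-line version of Handbrake (HandBrakeCLI) is installed; the folder paths and preset name are placeholders to adapt, and discs holding multiple episodes may need a separate pass per title with the --title option.

```python
# Minimal batch-conversion sketch using HandBrakeCLI (the command-line version of Handbrake).
# Folder paths and the preset name are assumptions to adapt; run `HandBrakeCLI --preset-list`
# to see the presets your install offers, and add `--title N` for multi-episode discs.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("~/soap_dvds").expanduser()   # hypothetical folder of ripped DVD images
OUTPUT_DIR = Path("~/soap_mp4s").expanduser()   # hypothetical destination for mp4 files
OUTPUT_DIR.mkdir(exist_ok=True)

for disc in sorted(SOURCE_DIR.glob("*.iso")):
    target = OUTPUT_DIR / (disc.stem + ".mp4")
    if target.exists():
        continue  # skip discs already converted
    subprocess.run(
        ["HandBrakeCLI", "-i", str(disc), "-o", str(target), "--preset", "Fast 1080p30"],
        check=True,
    )
```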

One goal was that my episodes be viewable in an off-line and mobile capacity to increase my ability to watch any time and anywhere (such as airplanes and my community fitness center gym, which did not have wifi until the past few years). Another goal was for the episodes to be on a screen separate from my main computer screen not only for portability but so that I could multitask as I watch. My pattern for years has been to watch three episodes of half-hour soaps or two of hour-long soaps each working weekday. Skipping commercials, this means spending 1–1 ½ hours of my day watching. I rarely take the time to do that in a concentrated way. Instead, I watch the episodes each day while dealing with email or other lower-attention work tasks, and in a host of other times when I find pockets for viewing — doing my hair, making dinner, cleaning a bathroom, waiting for a kid to fall asleep — these, I assure you, are all excellent times to watch soaps. I also watch at the gym and occasionally in the living room, with earbuds, when someone in my household is watching something else (e.g., Teen Titans Go!) on the “big” TV.

I take notes on the shows when I notice revealing moments (in DevonThink), but daytime soaps were not made for one's full attention at all times. They are excellent at using audio and video cues to signal narrative significance. When I was watching Dark Shadows (perhaps the slowest of the soaps despite vampires, werewolves, and time travel) I knew exactly when to pay close attention because of the predictable music cues. Each of the soaps I watch has its own such patterns, which I have picked up through my regular viewing.

The work of television historiography is distinct in multiple respects, but surely the volume of content one might consider is especially notable. While watching the programs one studies is a central part of our research, cultural studies has helped us to understand that processes of production and reception are equally significant. Still, this de-centering of the text may be puzzling to those more accustomed to traditional forms of cultural analysis. For my soap research, my often-partial attention to the text has become an unintentionally revealing experience. I’ve come to understand my viewing as the 21st century digital version of the 1960s housewife glancing back and forth at the set as she irons, starts dinner, or moderates between squabbling siblings, an experience hilariously portrayed in a 1960 TV Guide Awards sketch. There may be no more fitting research strategy for a TV genre that has long served as a daily companion to its audience’s lives.

On Wearing Two Badges: Indifference and Discomfort of a Scholar Fan (LeakyCon Portland), 31 July 2013

This is the second of a seven-part series about the 4th LeakyCon convention, held in Portland, Oregon, June 27-30, 2013. Part I and the rest of the series can be found here.

 

LeakyCon should have been a paradise for me. As a Ph.D. student interested in industry/consumer relationships, I saw a convention unified by Harry Potter(!) that celebrates reading, writing, creation, and general enthusiasm for nerdy girl culture as the perfect place to explore my own fandom and experiment with fan ethnography.

Despite the anticipation leading up to Portland, I found myself, initially, surprisingly indifferent about the experience. As I attended panels and walked the exhibit room, I felt out of place. LeakyCon created a world within the Oregon Convention Center that constantly went out of its way to remind me that loving nerdy things was awesome, being nerdy was awesome, I was awesome, everyone around me was awesome, and we would all become lifelong friends for sharing this awesome experience. So why didn't I feel awesome?

As part of this project, I acquired a press badge in addition to my attendee one.  In a space marked by collecting ribbons to exhibit one’s fan identities, I was marked as both a fan and an academic. At first, this seemed inconsequential.  Wearing these two badges articulated my identity at LeakyCon as much as wearing Hogwarts robes expressed the identities of con attendees.  Yet, I felt serious reservations about my place at LeakyCon because my academic interest and training made me an interloper and because I wasn’t a big enough fan.  The burdens of both badges made me feel that I wore neither of them well. Through my unease, epistemological questions plagued me: As an academic, can one accurately describe fans, fandoms, and conventions without being a fan?  As a fan, can one keep enough distance to provide an accurate assessment of other fans?  Does that type of academic work constitute an act of fandom or tarnish the worlds that fans create with one another?

The first day of the con, for example, consisted of a series of “meet-ups”.  In planning which of these to attend, I instinctively approached the schedule as a reporter, but methodological and ethical questions soon arose. Should I attend this con as a fan and try to experience it for myself?  Or should I collect information as an ethnographer to understand the world around me?  The easy solution seemed to be both.  However, bridging the gap between academic and fan, participant and observer proved difficult. By not being a true participant, how could I fully understand and communicate the fan experience? Moreover, I felt guilty for intruding on spaces intended for people with genuine commonalities, concerned that I could negatively affect their con experiences.

With these insecurities in mind, I decided to shift gears and try the con as a fan. However, I quickly felt inadequate. Although Harry Potter unifies LeakyCon, Rowling's world also serves as a common space for creating more specific micro-communities based on other fandoms I did not share, such as Doctor Who, Sherlock, and the Starkids. Although my fannish love of Harry Potter and my academic interests brought me to the conference, I was only really excited for the panels about my current obsession – The Lizzie Bennet Diaries (LBD).


In addition to not sharing most of these fandoms, I don't share in many of the fan practices that would bring one to a convention in the first place. Although I am interested in community as an extension of individual fandom, it's not something I seek out myself: I don't know the acronyms, the references, or how to use Tumblr. My fan love is largely isolated and offline. I don't want fan fiction that expands the world, nor do I want to post gifs representing the moments I love most. The world of the text itself is enough for me. However, it was not enough at LeakyCon. My lack of extratextual currency made me feel ambivalent about the experience, and I disliked feeling distanced from those around me.

Frustrated with my indifference, I decided to do something I have never cared to do otherwise: I bought an LBD poster and got in the autograph line. Although this experience did not erase the divide completely, removing my academic badge helped me enjoy more of the con as an attendee. I felt part of the community because I did something fans do, and I connected with my own fandom and friend community. However, my best experiences of the con are hard to document in academically worthwhile ways because they are far from academic: reconnecting with my childhood best friend who attended the con, chatting with Mary Kate Wiles (who plays Lyd-dee-ah in LBD), and the impromptu singing of the theme to The Fresh Prince of Bel-Air along with half the cast of LBD, two Glee Warblers, and a Starkid on the train to the hotel.

These experiences led me to an epiphany.  One of the most striking aspects of LeakyCon was how, by virtue of their youth, the attendees defined the space as one of identity exploration. I realized that I had that in common with them, because the con represented an important moment in my own becoming, as someone who is currently negotiating my new identity as a scholar-fan.  In fact, struggling to bridge the gaps between the badges is what I have always done in my life. I’m the only academic in a blue-collar family, one of the few television students in my department, and the lone scholar at my industry internship.


Upon further reflection, the distance I felt from the conference theme of celebrating one's "authentic" identity, a result of my position between these two worlds, was not, in fact, inauthentic at all. I think I need to adjust my expectations and recognize that this discomfort in trying to reconcile being a fan and an academic doesn't make me less of either. I hope that acknowledging this divide for what it is will, instead, make me a truer, dare I say more authentic, researcher and fan, without compromising too much of what makes each of these identities so awesome.

For more on LeakyCon 2013, read:

– Part one (“Where the Fangirls Are“)
– Part three (“Fans and Stars and Starkids“)
– Part four (“From LGBT to GSM: Gender and Sexual Identity among LeakyCon’s Queer Youth“)
– Part five (“Inspiring Fans at LeakyCon Portland“)
– Part six (“Redefining the Performance of Masculinity“)
– Part seven (“Embracing Fan Creativity in Transmedia Storytelling“)
