Teaching with Arclight and POE
http://blog.commarts.wisc.edu/2015/10/12/teaching-with-arclight-and-poe/
Mon, 12 Oct 2015

Project Arclight began two years ago with an idea: if researchers can use Twitter analytics to study trends in discussions of contemporary media, what if we treated historic trade papers and fan magazines like a giant Twitter stream and explored trends in film and media history? We refined this idea, received a grant, kept working on it, and — just today! — publicly launched our software at http://search.projectarclight.org.

Arclight searches the nearly 2-million-page collection of the Media History Digital Library (MHDL) and graphs the results. To provide one example — and one very much inspired by this month’s baseball postseason, football season, and basketball preseason — here is a visualization of how those three sports trend across the MHDL corpus. Note: because the MHDL’s collections primarily encompass out-of-copyright works, the results largely cut off after 1964.

[Figure: Arclight sports line graph, raw page counts per year]

The team that developed the Arclight software — acknowledged at the end of this post — is working on a series of journal articles that model how Arclight and the method of Scaled Entity Search can be applied to large-scale research questions. However, we also hope that Arclight will be valuable as a classroom tool for teachers of film and broadcasting history, especially those keen to expose students to digital humanities methodologies and engage them in active learning. Here are a couple of suggestions for how film and media educators can use Arclight with their students.

The POE Strategy

For over twenty years, the POE strategy (which stands for Predict, Observe, Explain) has been a highly effective teaching method in the sciences. After being presented with a set of circumstances, students are asked to predict what will happen, observe an experiment, and then explain why it happened and compare their prediction to the outcome. When implemented well, it’s an exercise that actively engages 100% of students in the classroom — not just the two or three who might raise their hands to answer a question that the teacher asks aloud.

The POE strategy is ideal for science experiments that play out relatively quickly. What will happen when we mix these two chemicals together? Make your prediction, then observe and explain why the result occurred. But POE can be challenging to implement in a history classroom. If a teacher says, “and guess what happened next?,” that can certainly prompt student prediction. But it also perpetuates two unfortunate dynamics. First, the instructor, who reveals the correct answer, is reinforced as the authority on the historical record. Second, history is presented as a fixed narrative, rather than as a set of assumptions and arguments that we are always testing against the available evidence. We need ways to engage students more actively in their learning. And active learning, in a film and media history classroom, means that students get to spend class time doing the work of a film and media historian.

Arclight offers one means of integrating POE and active learning into a film or media history classroom. To use my earlier graph example, a teacher might ask, “How did the discourse of sports change from 1900 to 1960 in books and magazines about American entertainment and media?” Students could write down their predictions, then get to work on their computers or phones running queries for baseball, basketball, football, and other terms in Arclight. Something might immediately jump out at them. For me, it was the decline of both baseball and football during the years 1942 through 1945. Based on this observation, I would offer the explanation that the decline occurred because of World War II and the enlistment of athletes into the armed forces.

But really, this explanation based on distant reading is a new prediction that invites closer inspection, observation, and analysis. Put another way, Arclight is best used with the POEPOE or POEPOEPOE strategy. Any explanation a student offers can and should be further tested. Does the rise of football in the late 1920s and 1930s have more to do with radio coverage or with football’s popularity in short films? We need to dig deeper to find the answer.

In developing Arclight, we felt it was important to give users the ability to easily and fluidly access the underlying texts. We were able to achieve this by integrating Arclight with the MHDL’s search engine, Lantern. Students can click through the Arclight graph and access the underlying materials within Lantern. Teachers will also want to encourage students to consult primary sources that are NOT indexed within Lantern, like archival manuscript collections and historical newspapers. Still, Arclight and Lantern provide a fast, user-friendly way for students to actively engage in historical research and analysis within a classroom.

The SES Interpretive Triangle and Changing Graph Views

There is another interpretive method that teachers may want to consider alongside POE.

The line graphs in Arclight are not arguments. They are simply visualizations of how many MHDL pages a given term appears in per year. To help students think through more fully what they are seeing, teachers might ask them to consider the relationships between the terms they are searching, the books and magazines they are searching within, and the impact of digitization and search algorithms on the process. We call this the Scaled Entity Search Interpretive Triangle. And if this sounds big and confusing, see Kit Hughes’ blog posts on the Scaled Entity Search (SES) technical method and interpretive method.

On an interactive level, students can change their visualization and reflect on the corpus and digitization by clicking on the dotted line icon and/or the percentage icon. The dotted line icon graphs the MHDL’s entire corpus, revealing that some years have far more pages indexed than others. The percentage icon helps correct for this by “normalizing” the data being visualized: dividing the number of page hits per year by the total number of pages per year. In other words, the normalization feature accounts for the fact that more pages were scanned from some years than others. And in the case of the sports visualization, the trend lines change quite a bit — especially in the stability of the lines from the late 1940s through the mid-1950s:

[Figure: Arclight sports line graph, normalized]
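Concretely, the normalization is a per-year division. Here is a minimal Python sketch of the calculation; the counts below are invented placeholders, not real MHDL numbers.

```python
# Hypothetical counts, standing in for real Arclight/MHDL query results.
hits_per_year = {1942: 310, 1943: 275, 1944: 260, 1945: 290}        # pages matching "baseball"
total_pages_per_year = {1942: 41000, 1943: 38000, 1944: 36500, 1945: 40200}

# Normalize: the share of all scanned pages, per year, that mention the term.
normalized = {year: hits / total_pages_per_year[year]
              for year, hits in hits_per_year.items()}

for year, share in sorted(normalized.items()):
    print(f"{year}: {share:.3%} of indexed pages mention the term")
```

Dividing out the corpus size is what keeps a year that simply had more magazines scanned from looking like a year in which the term was genuinely more prominent.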

Ultimately, no visualization is perfect, nor should it be. By offering users a variety of visualization options and the ability to access the underlying data, we hope to drive home the understanding that all graphs are incomplete abstractions, best used as jumping-off points for further analysis. Yet they are valuable precisely because they may lead us toward analyses and questions we would otherwise never have considered. And we hope that you and your students may even have fun creating, changing, and playing with these visualizations.

Please give Arclight a try with your students and let us know how it goes. We hope that it allows students to playfully engage in historical exploration and come away with valuable lessons about digital technology too. We are all living in a big data world. We have long trained our students in how to closely read a singular text. We need to complement this with more teaching activities that encourage analyzing many texts at a large scale — and dealing with all the uncertainty and messiness that goes along with it.

Acknowledgements:

This project was developed by teams at the University of Wisconsin-Madison and Concordia University and sponsored by a Digging into Data grant from the United States’ Institute of Museum and Library Services and Canada’s Social Sciences and Humanities Research Council. Additional support came from the University of Wisconsin-Madison’s Office of the Vice Chancellor for Research and Graduate Education and Concordia University’s Media History Research Centre.

The Arclight Software Development Team:

Project Directors: Charles Acland and Eric Hoyt

Interface Design and Programming: Kevin Ponto and Alex Peer

Search Index Development: Eric Hoyt, Kit Hughes, Derek Long, Peter Sengstock, Tony Tran

Thank you also to the broader team and community who contributed to Project Arclight.

One final thanks…

The author wishes to thank the Madison Teaching and Learning Excellence (MTLE) program. Without MTLE, this media scholar would never have learned about POE or adopted the strategies of active learning.

Textual Analysis & Technology: Information Overload, Part II
http://blog.commarts.wisc.edu/2015/07/22/textual-analysis-technology-information-overload-part-ii/
Wed, 22 Jul 2015

Post by Kyra Hunting, University of Kentucky

This post is part of Antenna’s Digital Tools series, about the technologies and methods that media scholars use to make their research, writing, and teaching more powerful.

In my last post, I discussed how I stitched together a system built primarily from simple office spreadsheet software to help me with the coding process used in my dissertation. As I moved into my first year post-diss, with new projects involving multiple researchers and multiple kinds of research (examining not only television texts but also social media posts, survey responses, and trade press documents), I realized that my old methods weren’t as efficient or as accessible to potential collaborators as I needed them to be. This realization started a year’s worth of searching for software that would support the different kinds of coding I now found myself doing. While I discovered a number of great qualitative research packages, ultimately nothing was “just right.”

The problem with most of the research-oriented software I found was that it is built on at least one of two assumptions about qualitative research: 1) that researchers have importable (often linguistic, text-based) materials to analyze, and/or 2) that we know what we are looking for or hoping to find. Both assumptions presented limitations when trying to find the perfect software mate for my research.

The first software I tried was NVivo, a qualitative research software platform that emphasizes mixed media types and mixed methods. This powerful software was great in many ways, not the least of which was that it counted for me. I first experimented with NVivo for a project I am doing (with Ashley Hinck) looking at Facebook posts and tweets as a site of celebrity activism, and in this context the software has acquitted itself admirably. It allowed me to import the PDFs into the system and then code them one by one. I found the ability to essentially have a pull-down list of items to consider very convenient, and I appreciated that I could add “nodes” (tags for coding) as I discovered them and could connect them to other broader node categories.

Sample Node List from NVivo

The premise behind my dissertation had been to set up a system that let unexpected patterns emerge through data coding, and I wanted to carry this approach into my new work. NVivo supports that goal well, counting how many of the 1,600+ tweets being coded were associated with each node and allowing me to easily see which codes were most common and which were rare. An effective query system allows researchers to quickly find all the instances of any given node (e.g., all tweets mentioning a holiday) or group of nodes (e.g., all tweets mentioning a holiday and including a photo). While the format of my data meant I wasn’t able to use NVivo’s very strong text-search query, its capability to search for text within large amounts of data, including transcripts, showed great potential. NVivo seemed to be the answer I was looking for, until I tried to code a television series.

Sort for most frequent nodes from my project with Ashley Hinck

For social media, my needs had been relatively simple: I was marking whether any of a few dozen attributes appeared in relatively short posts. But with film and television my needs grew. It wasn’t just a matter of x, y, or z being present; if x (say, physical affection) was present, I also needed to note how many times, between which people, and to add descriptive notes. This is not what NVivo is built to do. NVivo imagines that researchers are doing three different things as distinct and separate steps: coding nodes, searching text, and linking a single memo to a single source. NVivo is great at these things, and I expect it will continue to serve me well with survey and text-based data. But for the study of film and television, I found that NVivo demanded I simplify my questions in ways that were inappropriate. After all, in the complex audio-visual-textual space of film and television, it isn’t just that a zebra is present but whether it is live-action or animated, whether it talks or dances, how many zebras are around it, what sound goes with it, and so on. Memos let you add notes, but NVivo allows only one memo per source, and the memos are awkwardly designed and hard to retrieve alongside the nodes.

I found that NVivo competitor Dedoose gave me a bit more flexibility in how I could code, but it did not handle my need to simply add episodes as codable items; I was unable to import the episodes themselves. And simply typing in an episode’s title and coding as I watched proved much harder than I expected. Like NVivo, Dedoose seems to imagine social scientists who work with focus groups, surveys, oral histories, and the like as its primary market. Trying to use Dedoose without an existing spreadsheet or set of transcripts to upload proved unwieldy. In the analysis of film and television, coding while you collect data is possible, even desirable, but the notion that data collection and coding are two separate acts was built into this system.

If Dedoose’s limitation was the notion of importable data, Qualtrics’ was the notion that I would have already decided what I would find. I quickly discovered that while Qualtrics was wonderful at setting up surveys about each episode and effectively calculated the results, it did not facilitate discovery. If, for example, I wanted to code for physical affection and sub-code for gentle, rough, familial, sexual, it could manage that well. But if I wanted to add which characters were involved, this too needed to be a list to select from. I couldn’t simply type in the characters’ names and retrieve them later. Imagine the number of characters involved in physical affection over six years of a prime-time drama and you can see why a survey list (instead of simply typing in the names) would quickly become unwieldy.

That is how I found myself falling back on enterprise software, this time the database program FileMaker Pro. FileMaker Pro doesn’t do a lot of things. It doesn’t let you search the text of hundreds of Word documents, it doesn’t visualize or calculate data for you, and it doesn’t automatically generate charts. But what it does do is give you a blank slate for the variety of types of information you need in each database and help you create a clear interface for inputting that information. Would I like to code using a set of checkboxes indicating all the themes I have chosen to trace in a given episode? No problem! Need a counter to input the number of scenes in a hospital or police station? Why not?! Need to combine a checkbox with a textbox so I can note both what happened and who it happened to? Sure! And since it is a database system, finding all of the episodes (entries) with the items that were coded for is simple and straightforward. This ability to code items in multiple ways, for multiple types of information, using multiple input interfaces proved invaluable. So did its ability to let me continue coding on an iPad as well as a laptop, which meant I could stream video on my computer at work or while traveling and code simultaneously.

FileMaker Pro has its limitations, too. It does not connect easily with other coders unless everyone has access to the expensive FileMaker Server, and since I have just begun using FileMaker I may find myself still paying for a month of Dedoose here and there to visualize data I collected in FileMaker or importing the notes from my database into NVivo to make a word tree. But at the end of the day what characterizes textual analysis is its interpretive qualities. The ability to add new options as you proceed, to combine empirical, descriptive, numerical, linguistic and visual information, and to have a platform that evolves with you is invaluable.

While I didn’t find the perfect software solution, I found a lot of useful tools and I discovered something important: As powerful as the qualitative research software out there currently is, no software currently is well suited to textual analysis. The textual analysis that media studies researchers do creates unique challenges. While transcripts of films and television shows can be easily imported (if they can be obtained), the visual and aural elements of these texts are essential and so many researchers in this area will want to code items without importing them as transcripts into the software. Furthermore, the different ways to approach media – counting things, looking for themes, describing aesthetic elements – necessitate the ability to have multiple ways to input and retrieve information (similar to Elana Levine‘s discussion about incorporating thousands of sources in multiple formats for historiographical purposes). The potential need to have multiple people coding television episodes or films requires a level of collaboration that is not always easily obtained outside of social-science-oriented software like Qualtrics. Early film studies approaches often combined reception with description and these two actions remain important in contemporary textual analysis. Textual analysis requires collecting, coding, analyzing, and experiencing simultaneously (particularly given the difficulties in going back to retrieve a moment from hours and hours of film or television). It is an act of multiplicity, experiencing what you watch in multiple ways and recording the information in multiple ways, that current software does not yet facilitate. The audio-visual text requires a different kind of software, one that does not yet exist, one that would not only allow for all these different kinds of input and analysis but also allow you to easily associate codes with timestamps, count shots, or scene lengths and link them with themes. While the perfect software is not out there, I found that combining software like Filemaker Pro, NVivo, Dedoose, and simple tools like Cinemetrics could still help me dig more deeply into media texts.

Textual Analysis & Technology: In Search of a Flexible Solution, Part I
http://blog.commarts.wisc.edu/2015/07/15/textual-analysis-technology-in-search-of-a-flexible-solution-part-i/
Wed, 15 Jul 2015

Post by Kyra Hunting, University of Kentucky

This post is part of Antenna’s Digital Tools series, about the technologies and methods that media scholars use to make their research, writing, and teaching more powerful.

2,125… or was that 2,215? When I was working on my dissertation and said I was trying to look at the entire runs of several television series, one question came up again and again: “How many episodes did you look at total?” It was a perfectly reasonable question, and yet I often wasn’t sure of the exact number when asked. After a certain point, what was another 50 episodes or so? And if I couldn’t easily remember the number of episodes, I knew remembering the details of each one wasn’t going to be possible. As a result, finding a way to code and take notes on the shows I was examining, and to make those notes searchable later, was one of the first steps I took during my dissertation process. Four years later, as the research approach I developed in my dissertation has become increasingly important to my work, I am still in search of the perfect software.

When I began looking for a solution for my dissertation, I ran into three problems that I suspect are pretty common: 1) I had never worked on a project that size and was not aware that there were software solutions out there; 2) the software I had heard of could cost several hundred dollars; and 3) (most importantly) I wasn’t sure exactly what I was looking for. My dissertation began largely from an interest in finding a different way to approach television texts, investigating how the form of different television genres intersected with a number of different themes of representation. As a result, when I sat down with my first stack of teen drama DVDs, I didn’t know quite what I wanted to code. It was through the process of coding, thinking about what information I would need and want to be able to go back to, that I learned what I was looking for. Only after six episodes did I realize that acts of physical affection were something I wanted to code, with a simple y/n and character names. It turned out that a shorthand for demographic information (e.g., WASM for White Adult Straight Man) would be important for medical dramas and crime dramas, to denote the demographics of criminals, victims, suspects, and patients, although it had been entirely unnecessary for teen dramas. Coding, for me, was a learning process — something that recorded information, made it accessible, and helped me discover what I was looking for. That process of discovery through research won’t be foreign to most academics. After all, there is joy in finding that unexpected piece of the puzzle in an archive or watching a focus group coalesce in an unexpected way. However, as I found half a dozen or more software demos later, that is not quite how most academic research software works. Most of the software I experimented with wanted me to know what I was looking for, or at least to already have what I was looking at (e.g., interview transcripts, survey results) in a concrete form.

Because of this core issue — the fact that what information I was recording, how much, and in what form was constantly evolving — I found then, and again three years later, that enterprise (read: business) software, not academic software, best suited my needs. During my dissertation, it was the relatively straightforward Numbers spreadsheet software that did the job. For each genre, I set up a different spreadsheet with the unique sets of information I needed. For crime dramas, for example, there was a column for each of the following: demographics of victim(s), demographics of perpetrator(s), demographics of suspect(s), motive, outcome, religion, non-heteronormative sexuality, gender themes, police behavior, and the nebulous “notes” section that inspired new columns and code shorthands as things evolved.

What made Numbers work was that I was transparently typing in words (the shorthand I evolved to stand in for the boxes on a traditional “coding” sheet) and numbers (episode numbers, number of patients, etc.). I could always change what I coded and how. Every few episodes, I would ask, ‘Is there something important and new I want to track?’ If there was, I could word-search my notes and assign shorthands; so, as time went on, I needed fewer notes and could shorthand much more of my fiftieth ER episode’s notes than I could my fifth. The spreadsheets seemed disorderly and overwhelming to my partner when he peeked at my work (see image below), but they had the advantage of elasticity, changing as I learned what I was doing and what I was looking for.

[Image: the author’s Numbers coding spreadsheet]

Numbers didn’t have any assumptions (like a lot of more powerful software does) about what information I would be inputting and how I would use it. Therefore, when it came time to sort that information it also leant itself well to finding the relevant episodes and connections. The filter function allowed me to pick any column and any search term and would show me only the rows (episodes) that were relevant. Every episode that contains the word “jealousy” in the motive column but not the words “anger” or “angry” and the religion code “CH” (for Christian) was only a few filter clicks away.

Like Elana Levine, I found that the software that was available couldn’t do the whole job itself. Numbers didn’t really recognize the information I was putting in as something it should count, so if I wanted to know how many white male victims of crimes there were (hint: a lot) I was on my own to physically count them up. As a result I discovered that Zotero, a research material collection system (similar to Scrivener) that I had been using for reading notes and collecting PDFs also helped me analyze those thousands of episodes. After filtering the information using Numbers, I would create files in Zotero where I would list all the episode numbers that discussed Buddhism, or in which a lesbian character appeared, or in which a patient died. I’d then count up the numbers of episodes in a given category. Because Zotero was so searchable, it made it quick and easy to find all the “important themes” a given episode dealt with and calculate all kinds of relationships that I hadn’t originally expected to look at (percentage of patient deaths that were pregnant women? Alcoholics? Coming right up!).
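Had those category lists lived in a spreadsheet export instead, the tallying could be scripted as well. A hypothetical pandas sketch of the kind of count described above, with file and column names again invented:

```python
import pandas as pd

episodes = pd.read_csv("medical_drama_codes.csv")

# Count episodes per coded outcome.
print(episodes["outcome"].value_counts())

# Share of patient deaths whose notes mention a pregnant patient.
deaths = episodes[episodes["outcome"] == "patient death"]
share = deaths["notes"].str.contains("pregnant", case=False, na=False).mean()
print(f"{share:.1%} of coded patient deaths involved pregnant women")
```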

Spreadsheets and a digital version of a filing cabinet (my best way of describing Zotero) are not necessarily the high-tech solutions I might have initially sought, but their content agnosticism and searchability made them perfect fits for the work I was doing at the time. Just the other day, I pulled up one of my old spreadsheets looking for the sort of thing I hadn’t coded but likely would have kept in the episode notes, and found an episode of a medical drama featuring an elementary school teacher in mere moments. When I started my new job and embarked on new research projects, including those that required collaboration, I started to feel that spreadsheets just wouldn’t do the job anymore and went in search of the perfect software. One year, several meetings with my college’s IT staff, and quite a few demo downloads later, I still haven’t found it. My new, better spreadsheet alternative has turned out to be yet another business solution: FileMaker Pro. The shoe still doesn’t quite fit, but more on that later (stay tuned for Part II next week).

While I might not have discovered the perfect piece of software, what I have discovered is that the creative use of open-ended software can serve the study of texts well. However, the available research software is not yet designed for the diversity of information, multiplicity of data input types, and unique twists and turns that accompany the study of media texts.

Digital Tools for Television Historiography, Part III
http://blog.commarts.wisc.edu/2015/06/09/digital-tools-for-television-historiography-part-iii/
Tue, 09 Jun 2015

Post by Elana Levine, University of Wisconsin-Milwaukee

This is the third in a series of posts on my use of digital tools for a television history project. See part 1 and part 2.

Many of the digital research and writing needs I have been discussing in previous posts might apply to any historical project. Anyone who is grappling with thousands of sources in multiple formats might find data management and writing software useful to their task. But the work of managing audio-visual sources is more specific to media history. Television historiography, in particular, can be especially challenging in this regard, for series television includes episode after episode and season after season of programming — a lot of material for any researcher to take on.

In the case of my history of American daytime television soap opera from its beginnings in the early 1950s to the present, I face a task even more daunting than most TV history, for the genre I am studying has run 15-minute, half-hour, or hour-long episodes each weekday, 52 weeks a year, with individual series continuing this schedule for more than 50 years. Of course there is no way to watch all of it, or even to sample it in some representative way. Much of it no longer exists, for all soaps were initially broadcast live, and many of those that started to be shot on video in the 1960s did not preserve the episodes — producers erased the tapes on an established rotation schedule. As far as I know, no public archive anywhere has all of the episodes of any US soap, although some of the shorter-lived ones do exist in complete form in the hands of media distribution companies or fan collectors. Fan archivists have preserved and uploaded to user-generated streaming video sites a massive amount of their private collections, taped off-air from the beginnings of the home VCR era to the present — there is more than one researcher could ever consume from the post-‘80s era alone.

But my point here is not to marvel at the voluminous output of soap creators and soap fans (although, wow!), nor to lament the disappearance or inaccessibility of so much of this crucial form of American popular culture (although, what a loss!). Instead I’d like to explain what I watch and, more specifically, how I watch, for that is entirely dependent on digital tools.

For the past 7 years, I have been integrating the viewing of past soap episodes into my daily routine. My choices of what to watch have been directed largely by availability. Other than episodes I have been able to see in museums and archives, my viewing has been focused on the limited numbers of soaps I have access to otherwise, of which I have tried to watch as many episodes as are available. Because I have been a soap viewer since the early 1980s, I have been less concerned with seeing programs from my own viewing history, although I am gradually integrating more material from the user-generated streaming archive over time. Instead, I have focused on the one soap that has been released in its entirety on DVD, Dark Shadows, and on soaps that have been rerun in recent years on cable channels, mostly the now-defunct SOAPnet, and on the broadcast network, RetroTV, which is carried primarily by local affiliates’ digital sub-channels.

In addition to daily reruns of just-aired soaps, SOAPnet reran select past episodes from a number of programs, but also aired a full run of ABC’s Ryan’s Hope from its 1975 debut through 1981 (the show aired originally until 1989). It also reran several years’ worth of Another World episodes from the late 1980s and early ’90s, and Port Charles’ telenovela-style 13-week arcs of the early 2000s. There have been other such instances, as in Sci-Fi’s rerun of Passions’ first few months in 2006. These repeats began airing around 2000, so I started recording them well before I was actively working on this project. As these repeats aired, I saved them first to VHS and then, once I abandoned those clunky old tapes, to DVD. DVD is a poor archival medium. But when I started doing this there were not the digital recording and storage options we now have. As with many other technological tools, what I did worked for me so I kept doing it.

I’ve watched much of this material over the past 7 years and am watching more every day. The recent addition of RetroTV’s rerun of the Colgate-Palmolive NBC soap, The Doctors, beginning with the late 1967 episodes, has further contributed to my archive. But how I do my viewing is where I employ digital video tools.

The author’s two-screen work set-up.

Because most of my episode archive is on DVD-Rs I have burned over the years, my process is to convert these DVDs to mp4 files. Software like HandBrake accomplishes this on my Mac, as did the now-defunct VisualHub. For content I access through user-generated streaming sites, I use downloading software, some of which is available for free online. I also use iSkysoft iTube Studio for its ability to download from a range of such sites and to convert those files to iPad-ready mp4s. Managing the files through iTunes, I transfer them to my iPad in week-long viewing chunks, moving them off my limited-capacity first-generation iPad after I watch. This multi-step process can be a bit cumbersome, but it achieves some key goals that have allowed me to watch a lot of content over time.
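For readers facing a similar DVD-R backlog, HandBrake’s command-line version can batch the conversion step. Below is a rough Python sketch under assumed conditions: one ripped disc folder per DVD, paths of my own invention, and a preset name that varies by HandBrake version (run HandBrakeCLI --preset-list to see what yours offers).

```python
import subprocess
from pathlib import Path

SOURCE_DIR = Path("~/soap-dvds").expanduser()   # one folder per ripped DVD-R (assumed layout)
OUTPUT_DIR = Path("~/soap-mp4s").expanduser()
OUTPUT_DIR.mkdir(exist_ok=True)

for disc in sorted(SOURCE_DIR.iterdir()):
    out_file = OUTPUT_DIR / f"{disc.name}.mp4"
    if out_file.exists():
        continue  # skip discs converted on an earlier run
    subprocess.run(
        ["HandBrakeCLI", "-i", str(disc), "-o", str(out_file),
         "--preset", "Fast 1080p30"],   # preset name is an assumption; adjust to your version
        check=True,
    )
```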

One goal was that my episodes be viewable in an off-line and mobile capacity to increase my ability to watch any time and anywhere (such as airplanes and my community fitness center gym, which did not have wifi until the past few years). Another goal was for the episodes to be on a screen separate from my main computer screen, not only for portability but so that I could multitask as I watch. My pattern for years has been to watch three episodes of half-hour soaps or two of hour-long soaps each working weekday. Skipping commercials, this means spending 1–1½ hours of my day watching. I rarely take the time to do that in a concentrated way. Instead, I watch the episodes each day while dealing with email or other lower-attention work tasks, and at a host of other times when I find pockets for viewing — doing my hair, making dinner, cleaning a bathroom, waiting for a kid to fall asleep — these, I assure you, are all excellent times to watch soaps. I also watch at the gym and occasionally in the living room, with earbuds, when someone in my household is watching something else (e.g., Teen Titans Go!) on the “big” TV.

I take notes on the shows when I notice revealing moments (in DevonThink), but daytime soaps were not made for one’s full attention at all times. They are excellent at using audio and video cues to signal narrative significance. When I was watching Dark Shadows (perhaps the slowest of the soaps despite vampires, werewolves, and time travel) I knew exactly when to pay close attention because of the predictable music cues. Each of the soaps I watch has its own such patterns, which I have picked up through my regular viewing.

The work of television historiography is distinct in multiple respects, but surely the volume of content one might consider is especially notable. While watching the programs one studies is a central part of our research, cultural studies has helped us to understand that processes of production and reception are equally significant. Still, this de-centering of the text may be puzzling to those more accustomed to traditional forms of cultural analysis. For my soap research, my often-partial attention to the text has become an unintentionally revealing experience. I’ve come to understand my viewing as the 21st century digital version of the 1960s housewife glancing back and forth at the set as she irons, starts dinner, or moderates between squabbling siblings, an experience hilariously portrayed in a 1960 TV Guide Awards sketch. There may be no more fitting research strategy for a TV genre that has long served as a daily companion to its audience’s lives.

Share

]]>
Digital Tools for Television Historiography, Part II
http://blog.commarts.wisc.edu/2015/06/02/digital-tools-for-television-historiography-part-ii/
Tue, 02 Jun 2015

Post by Elana Levine, University of Wisconsin-Milwaukee

This is the second in a series of posts detailing my use of digital tools in a television history project. Read Part I here.

When I set out to manage all of the research materials for my history of US daytime television soap opera digitally, I was mainly concerned with having a system for storing PDFs, notes, and website clippings in a way that made them easily searchable. But after I had decided to use DevonThink as my data management system, migrated existing materials into the program, and began taking new notes with the software, I had to face the second part of my process—converting research materials into chapter outlines.

As I described in my previous post, my earlier method for this stage involved a floor and piles of papers. It also involved blank notecards, on which I would write labels or topics for the different piles as I sorted them into categories, and then a legal pad and pen, upon which I would sketch an outline of my chapter, figuring out the connections across the piles/categories, and testing ideas for the big picture arguments to which the piles built. Having gone digital, however, there were no physical piles of paper to organize. I needed a digital means of conducting that analog process. I needed digital piles.

For a while, I was resistant to considering writing software as the answer to this dilemma. Writing was not the problem. I had been writing digitally for a long time. (No, you need not remind me of the typewriter I took to my freshman year of college). Because I did so much planning and thinking before writing, I had no problem using conventional word-processing software to write. In fact, I like to write in linear fashion; it helps me construct a tight argument and narrate a coherent story. It was the outlining—the pile making, the planning and thinking—that I had to find a way to digitize. Then I saw the corkboard view on Scrivener with those lined 3X5 index card-like graphics. A virtualization of my piles, beaming at me from the screen instead of surrounding me on the floor!

The “Binder” feature in Scrivener.

The “Corkboard” view in Scrivener.

So began my experimentation with Scrivener, which has now become an integral part of my process. Scrivener is writing software and, like DevonThink or any other digital tool, has many uses. As with my use of DevonThink, I have been learning it as I go, so I am far from expert in all of its features. Because I needed the software to help me to categorize my research materials and outline my chapters, I mainly use its “binder” feature to sort my materials into digital piles. The hierarchical structuring of folders and documents within the Scrivener binder provides me with a way of replicating my mental and, formerly, haptic labor of sorting and articulating ideas and information together in a digital space.

I began by reading through all of the materials in DevonThink associated with the 1950s. As I read, I categorized, figuring out what larger point the source spoke to, or what circumstance it served as evidence of. I created what Scrivener calls “documents” for any piece of research, or connected pieces of research, that I thought might be useful in my chapters. Early on, I realized I had multiple chapters to write about the ‘50s and ended up outlining three chapters at once as I moved through my materials. I gradually began to group documents into folders labeled with particular themes or points. This is the equivalent of putting an index card with a label or category on top of a pile of papers, a way of understanding a set of specific pieces of information as contributing to a larger point or idea. These folders became sub-folders of the larger chapter folders. But it is the way I integrate this process with DevonThink that allows me to connect specific pieces of my archive to my argument. In DevonThink I am able to generate links to particular items in the database. I paste those links into the Scrivener documents I create.

How does this look in Scrivener? Sometimes a Scrivener document is just my link, the text of which is the name of my DevonThink item, such as “SfT timeline late ‘50s/early ‘60s,” which holds my notes on story events from Search for Tomorrow during that period. But Scrivener’s “Inspector” window, which can appear alongside the document on the screen, is a useful space for me to jot down notes about that document, reminding myself of the information it offers or indicating what I see as most relevant about it. The synopsis I create here is what I see if I look at my documents in the corkboard view.

The “Portia and Walter relationship” document in Scrivener.

Other times my Scrivener documents include a number of DevonThink links that feed into the same point. For example, a document called “Portia and Walter relationship” includes links to five different items in DevonThink, four of which are notes on Portia Faces Life scripts; the fifth is notes on memos from the show’s ad agency producer to writer Mona Kent. In my synopsis notes on this document, I reminded myself that these were examples of the ways that married couple Portia and Walter talked to each other as equals, and how this served as a contrast to another couple on the show, Kathy and Bill. This ability to link to my DevonThink archive has allowed Scrivener to serve as my categorizing and outlining system.

While I have written sentences here and there in Scrivener to help me remember the ideas I had about particular materials, I have not yet found need to actually write chapters within it—I use a conventional word processing program for that. I know this is unlike the typical use of the software, but working this way has helped me to manage an otherwise unwieldy task. Scrivener provides a way to include research materials within its structure, but does not have the functionality for managing those materials that I get with DevonThink.

The "free form text editor" Scapple.

The “free form text editor” Scapple.

This system is working well for me, but at times I do find the Scrivener binder structure to be too linear. The ability to move my paper piles around, to stack them or spread them apart, was a helpful feature of my analog methods. As a result, I have begun experimenting with Scapple, a “free form text editor” similar to mind-mapping software and created by Scrivener’s publishers, as a way to digitally reimagine the fluidity of the paper piles. Like Scrivener, Scapple allows me to link to DevonThink items and has met my desire for a non-linear planning system. I can connect examples and items from my archive to larger points and, through arrows and other forms of connection, note the relationship of particular pieces of data to multiple concepts. I’m not yet convinced it is essential to my workflow, but I am intrigued by its possibilities and eager to keep experimenting within the generous trial window (which Scrivener and DevonThink offer as well).

My use of these digital tools is surely quite idiosyncratic, but in ways more specific to me than to my object of study or the field of television historiography. More particular to the history of soaps and to media history in general are the challenges of managing video sources. Tune in next time for that part of my story.

Share

]]>
http://blog.commarts.wisc.edu/2015/06/02/digital-tools-for-television-historiography-part-ii/feed/ 7
Digital Tools for Television Historiography, Part I
http://blog.commarts.wisc.edu/2015/05/26/digital-tools-for-television-historiography-part-i/
Tue, 26 May 2015

Post by Elana Levine, University of Wisconsin-Milwaukee

This is the first in a series of posts detailing my use of digital tools in a television history project.

When I was researching and writing my dissertation at the turn of the 21st century, analog tools were my friend. Because my project was a history of American entertainment television in the 1970s, I drew upon a wide range of source materials: manuscript archives of TV writers, producers, sponsors, and trade organizations; legislative and court proceedings; popular and trade press articles; many episodes of ‘70s TV; and secondary sources in the form of scholarly and popular books and articles. The archive I amassed took up a lot of space: photocopies and print-outs of articles, found in the library stacks or on microfilm; VHS tape after VHS tape of episodes recorded from syndicated reruns; and stacks and stacks of 3X5 notecards, on which I would take notes on my materials. I gathered this research chapter by chapter and so, as it would come time to write each one, I would sit on the floor and make piles in a circle around me, sorting note cards and photocopies into topics or themes, figuring out an organizing logic that built a structure and an argument out of my mountains of evidence. It. Was. Awesome.

As I turned that dissertation into a book over the following years, and worked on other, less voluminous projects, I stuck pretty closely to my tried-and-true workflow, though the additions of TV series on DVD and, eventually, of YouTube began to obviate my need for the stacks of VHS tapes. Around 2008, I began to research a new historical project, one that I intended to spend many years pursuing and that promised to yield a larger archive than I’d managed previously. This project, a production and reception history of US daytime television soap opera, would traverse more than 60 years of broadcast history and would deal with a genre in which multiple programs had aired daily episodes over decades. Still, as I began my research, I continued most of my earlier methods, amassing photocopies and notes, which I was by then writing as word-processor documents rather than handwritten index cards. By late 2012, I was thinking about how to turn these new mountains of research materials into chapters. And I freaked out.

Sitting amidst piles of paper on the floor seemed impractical—there was so, so much of it—and I was technologically savvy enough to realize that printing out my word-processed materials would be both inefficient and wasteful. So I began to investigate tools for managing historical research materials digitally. Eventually, I settled on a data management system called DevonThink. I chose DevonThink for a number of reasons, but mostly because it would allow me to perform optical character recognition (OCR) to make my many materials fully searchable. This was a crucial need, especially because I would be imposing a structure on my research after having built my archive over years and from multiple historical periods. It would be impossible for me to recall exactly what information I had about which topics; I needed to outsource that work to the software.

This required that I digitize my paper archive, which I did, over time, with help. My ongoing archival research became about scanning rather than photocopying (using on-site scanners or a smartphone app, JotNot, that has served me well). And I began to generate all of my new notes within DevonThink, rather than having to import documents created elsewhere. Several years into using DevonThink, I still have only a partial sense of its capabilities, but I see this not as a problem but as a way of making the software fit my needs. (Others have detailed their use of the software for historical projects.) I have learned it as I’ve used it and have only figured out its features as I’ve realized I needed them. There are many ways to tag or label or take notes on materials, some of which I use. But, ideally, the fact that most of my materials are searchable makes generating this sort of metadata less essential. I rely heavily on the highlighting feature to note key passages in materials that I might want to quote from or cite. And I’ve experimented with using the software’s colored labeling system to help me keep track of which materials I have read and processed and which I have not.

Because I have figured out its utility as I’ve gone along, I’ve made some choices that I might make differently for another project. I initially put materials into folders (what DevonThink calls “Groups”) before realizing that was more processing labor than I needed to expend. So I settled for separating my materials into decades, but have taken advantage of a useful feature that “duplicates” a file into multiple groups to make sure I put a piece of evidence that spans time periods into the various places I might want to consider it. I have settled into some file-naming practices, but would be more consistent about this on another go-round. I know I am not using the software to its full capacity, but I am making it work in ways that supplement and enable my work process, exactly what I need a digital tool to do.

In many respects, my workflow remains rather similar to my old, analog ways, in that I still spend long hours reading through all of the materials, but now I sort them into digital rather than physical piles (a process that involves another piece of software, which I will explain in my next post). In writing media history from a cultural studies perspective, one necessarily juggles a reconstruction of the events of the past with analyses of discourses and images and ideas. I don’t think there is a way to do that interpretive work without the time-consuming and pleasurable labor of reading and thinking, of sorting and categorizing, of articulating to each other that which a casual glance—or a metadata search—cannot on its own accomplish.

But having at my fingertips a quickly searchable database has been invaluable as I write. Because I have read through my hundreds of materials from “the ‘50s,” for instance, I remember that there was a short-lived soap with a divorced woman lead. Its title? Any other information about it? No clue. But within a few keystrokes I can find it—Today Is Ours—and not just access the information about its existence (which perhaps an internet search could also elicit) but find the memo I have of the producers discussing its social relevance, and the Variety review that shares a few key lines of dialogue. OCR does not always work perfectly—it is useless on handwritten letters to the editor of TV Guide—but my dual processes of reading through everything and of using searches to find key materials have made me confident that I am not missing sources as I construct my argument and tell my story. It’s a big story to tell, and one that may be feasible largely due to my digital tools.
