Antenna: Responses to Media and Culture (http://blog.commarts.wisc.edu)

Value Creation Through Digital Commons: Complicating the Discourse
http://blog.commarts.wisc.edu/2015/10/21/value-creation-through-digital-commons-complicating-the-discourse/
Wed, 21 Oct 2015
Cosmos Laundromat by Blender Institute, 2015.

Post by Julia Velkova, Södertörn University

This post is part of a partnership with the International Journal of Cultural Studies, where authors of newly published articles extend their arguments here on Antenna.

By the end of 2015, the number of open content works online licensed under a Creative Commons license is estimated to surpass one billion[1]. Approximately one-tenth of them are hosted on major sites such as YouTube, Wikipedia, Flickr, the Public Library of Science, Scribd and Jamendo, and represent music, photos, animation films, online comics, texts and illustrations that are made explicitly available by their authors with the intent to encourage their circulation, use and the creation of derivative works.

One widespread assumption about these forms of copyleft-licensed media is that they are the result of unpaid, volunteer labor, often done with the intention of disrupting broader capitalist structures or circuits of production through new forms of organising. Another common view is that while individuals give away their artifacts to the public, the market economy exploits this content by integrating it into the circuits of capitalist production and extracting value from it without contributing back. These portraits, while true to a certain extent, do not do justice to the much more complex interactions between market and commons that inform the creative practices and intentions of producers engaged in creating digital commons, ranging from free and open source software development and hacking, to producing media of industry quality or within the creative industries themselves. Not least, these views disregard the possibility that the producers of commons are experimenting with models to financially sustain and capitalize on their own work, not necessarily in opposition to the markets. At the same time, it is admittedly difficult to grasp how economic value is generated through commons by looking only at the final artifacts that are made – software or media – since they are often shared free of charge.

Image: Morevna Project, Anastasia Mayzhegisheva, 2015

In the period between 2013 and 2015 I ethnographically observed the creation of two large-scale animation film projects anchored in the domain of the commons, and the techno-artistic communities surrounding them. One of the films was Cosmos Laundromat[2], part of the larger project Gooseberry by the Blender Institute[3] in Amsterdam, Netherlands; the other was The Beautiful Queen Marya Morevna[4], part of the larger Morevna project[5] by an informal collective located in Gorno-Altaysk, a town of 40,000 inhabitants in Southern Siberia, Russia. The projects were distinctive in several ways: they gathered artists and programmers to develop and improve free and open source software for computer graphics production, namely the widely popular program Blender for 3D animation, and Synfig for 2D vector animation. These programs are the non-proprietary equivalents of software such as 3D Studio Max, Adobe After Effects and Maya. At the same time, the production process and the short animation films were made public and shared as commons, together with all their assets.

Looking at the main software used in the productions, Blender and Synfig, both have remarkably gone through a process of de-commodification in the course of which they were converted from proprietary to free software programs, and their development was taken over by their user communities. This suggests that not only does industry appropriate products of the commons; reverse processes also occur. De-commodification helped Blender grow significantly over the course of about fifteen years to an average of 300,000 user downloads a month(ref), and Synfig to about 20,000 a month over slightly more than six years. In both cases this growth has been largely due to the organisation of software development as a practice-driven process that emerges from making animation films through a public process anchored in sharing the code, the production process and the components (assets) of the films made. This approach has so far been applied to five[6] open animation films by the Blender Institute, allowing the Blender software user community to grow. As a result, Blender has also been incorporated into the core of the professional production practice of many small commercial animation studios across the world. It has also inspired others, such as the Morevna project, to experiment with making open films in a different animation genre. The approach has likewise led large hardware manufacturers (such as Intel and Dell), actors from the game industry (Valve Corporation) and IT corporations such as Google to support Blender in the form of powerful hardware, monthly monetary donations[7] or contributions targeted at the development of specific features through, for example, the Google Summer of Code program. The direct industry support provided for developing Blender has contributed to the establishment of an independent technical infrastructure that has in turn enabled the creation of open animation films.
Within the production frameworks of the films themselves, further funds are secured with the help of institutions of public cultural funding, philanthropic foundations, private companies, other open source communities, as well as individuals. The financial resources that are generated therefore do not stem from sales of media, but result from combining multiple production practices – films, software, and training materials – that cut across the interests of multiple organisations and, as a result, attract financial support through an increased range of beneficiaries.

These interactions ultimately help the software and film projects to sustain themselves economically, and to develop software as well as media owned by the public. The latter could at the same time be regarded as a form of critique, from within, of the prevailing systems of cultural production – one anchored more in pragmatic considerations about the possibility of exercising craft autonomy within the cultural industries than in ideologies of the political left.

[1] https://stateof.creativecommons.org/report/

[2] http://gooseberry.blender.org/

[3] https://www.blender.org/institute/

[4] http://morevnaproject.org/anime/

[5] http://www.morevnaproject.org

[6] http://www.blender.org/features/projects/

[7] http://www.blender.org/foundation/development-fund/

[For the full article, see Julia Velkova and Peter Jakobsson, “At the Intersection of Commons and Market: Negotiations of Value in Open-Sourced Cultural Production,” forthcoming in International Journal of Cultural Studies. Currently available as an OnlineFirst publication: http://ics.sagepub.com/content/early/2015/08/06/1367877915598705.abstract. More information on related research can be found at Julia Velkova’s blog: http://phd.nordkonst.org/]

Digital Tools for Television Historiography, Part II
http://blog.commarts.wisc.edu/2015/06/02/digital-tools-for-television-historiography-part-ii/
Tue, 02 Jun 2015

Post by Elana Levine, University of Wisconsin-Milwaukee

This is the second in a series of posts detailing my use of digital tools in a television history project. Read Part I here.

When I set out to manage all of the research materials for my history of US daytime television soap opera digitally, I was mainly concerned with having a system for storing PDFs, notes, and website clippings in a way that made them easily searchable. But after I had decided to use DevonThink as my data management system, migrated existing materials into the program, and began taking new notes with the software, I had to face the second part of my process—converting research materials into chapter outlines.

As I described in my previous post, my earlier method for this stage involved a floor and piles of papers. It also involved blank notecards, on which I would write labels or topics for the different piles as I sorted them into categories, and then a legal pad and pen, upon which I would sketch an outline of my chapter, figuring out the connections across the piles/categories, and testing ideas for the big picture arguments to which the piles built. Having gone digital, however, there were no physical piles of paper to organize. I needed a digital means of conducting that analog process. I needed digital piles.

For a while, I was resistant to considering writing software as the answer to this dilemma. Writing was not the problem. I had been writing digitally for a long time. (No, you need not remind me of the typewriter I took to my freshman year of college). Because I did so much planning and thinking before writing, I had no problem using conventional word-processing software to write. In fact, I like to write in linear fashion; it helps me construct a tight argument and narrate a coherent story. It was the outlining—the pile making, the planning and thinking—that I had to find a way to digitize. Then I saw the corkboard view on Scrivener with those lined 3X5 index card-like graphics. A virtualization of my piles, beaming at me from the screen instead of surrounding me on the floor!

The “Binder” feature in Scrivener.

The “Corkboard” view in Scrivener.

So began my experimentation with Scrivener, which has now become an integral part of my process. Scrivener is writing software and, like DevonThink or any other digital tool, has many uses. As with my use of DevonThink, I have been learning it as I go, so I am far from expert in all of its features. Because I needed the software to help me to categorize my research materials and outline my chapters, I mainly use its “binder” feature to sort my materials into digital piles. The hierarchical structuring of folders and documents within the Scrivener binder provides me with a way of replicating my mental and, formerly, haptic labor of sorting and articulating ideas and information together in a digital space.

I began by reading through all of the materials in DevonThink associated with the 1950s. As I read I categorized, figuring out what larger point the source spoke to, or what circumstance it served as evidence of. I created what Scrivener calls “documents” for any piece of research, or connected pieces of research, that I thought might be useful in my chapters. Early on, I realized I had multiple chapters to write about the ‘50s and ended up outlining three chapters at once as I moved through my materials. I gradually began to group documents into folders labeled with particular themes or points. This is the equivalent of my putting an index card with a label or category on top of a pile of papers, a way of understanding a set of specific pieces of information as contributing to a larger point or idea. These folders became sub-folders of the larger chapter folders. But it is the way I integrate this process with DevonThink that allows me to connect specific pieces of my archive to my argument. In DevonThink I am able to generate links to particular items in the database. I paste those links into the Scrivener documents I create.

How does this look in Scrivener? Sometimes this means that a Scrivener document is just my link, the text of which is the name of my DevonThink item, such as, “SfT timeline late ‘50s/early ‘60s,” which is my notes on story events from Search for Tomorrow during that period. But Scrivener’s “Inspector” window, which can appear alongside the document on the screen, is a useful space for me to jot down notes about that document, reminding myself of the information it offers or indicating what I see as most relevant about it. The synopsis I create here is what I see if I look at my documents in the corkboard view.

The “Portia and Walter relationship” document in Scrivener.

Other times my Scrivener documents include a number of DevonThink links that feed into the same point. For example, a document called “Portia and Walter relationship” includes links to five different items in DevonThink, four of which are notes on Portia Faces Life scripts; the fifth is notes on memos from the show’s ad agency producer to writer Mona Kent. In my synopsis notes on this document, I reminded myself that these were examples of the ways that married couple Portia and Walter talked to each other as equals, and how this served as a contrast to another couple on the show, Kathy and Bill. This ability to link to my DevonThink archive has allowed Scrivener to serve as my categorizing and outlining system.

While I have written sentences here and there in Scrivener to help me remember the ideas I had about particular materials, I have not yet found the need to actually write chapters within it—I use a conventional word processing program for that. I know this is unlike the typical use of the software, but working this way has helped me to manage an otherwise unwieldy task. Scrivener provides a way to include research materials within its structure, but does not have the functionality for managing those materials that I get with DevonThink.

The "free form text editor" Scapple.

The “free form text editor” Scapple.

This system is working well for me, but at times I do find the Scrivener binder structure to be too linear. The ability to move my paper piles around, to stack them or spread them apart, was a helpful feature of my analog methods. As a result, I have begun experimenting with Scapple, a “free form text editor,” similar to mind-mapping software and created by Scrivener’s publishers, as a way to digitally reimagine the fluidity of the paper piles. Like Scrivener, Scapple allows me to link to DevonThink items and has met my desire for a non-linear planning system. I can connect examples and items from my archive to larger points and, through arrows and other forms of connection, note the relationship of particular pieces of data to multiple concepts.  I’m not yet convinced it is essential to my workflow, but I am intrigued by its possibilities and eager to keep experimenting within the generous trial window (which Scrivener and DevonThink both have, as well).

My use of these digital tools is surely quite idiosyncratic, but in ways more specific to me than to my object of study or the field of television historiography. More particular to the history of soaps and to media history in general are the challenges of managing video sources. Tune in next time for that part of my story.

Digital Tools for Television Historiography, Part I
http://blog.commarts.wisc.edu/2015/05/26/digital-tools-for-television-historiography-part-i/
Tue, 26 May 2015

Post by Elana Levine, University of Wisconsin-Milwaukee

This is the first in a series of posts detailing my use of digital tools in a television history project.

When I was researching and writing my dissertation at the turn of the 21st century, analog tools were my friend. Because my project was a history of American entertainment television in the 1970s, I drew upon a wide range of source materials: manuscript archives of TV writers, producers, sponsors, and trade organizations; legislative and court proceedings; popular and trade press articles; many episodes of ‘70s TV; and secondary sources in the form of scholarly and popular books and articles. The archive I amassed took up a lot of space: photocopies and print-outs of articles, found in the library stacks or on microfilm; VHS tape after VHS tape of episodes recorded from syndicated reruns; and stacks and stacks of 3X5 notecards, on which I would take notes on my materials. I gathered this research chapter by chapter and so, as it would come time to write each one, I would sit on the floor and make piles in a circle around me, sorting note cards and photocopies into topics or themes, figuring out an organizing logic that built a structure and an argument out of my mountains of evidence. It. Was. Awesome.

As I turned that dissertation into a book over the coming years, and worked on other, less voluminous projects, I stuck pretty closely to my tried and true workflow, though the additions of TV series on DVD and, eventually, of YouTube, began to obviate my need for the stacks of VHS tapes. Around 2008, I began to research a new historical project, one that I intended to spend many years pursuing and that promised to yield a larger archive than I’d managed previously. This project, a production and reception history of US daytime television soap opera, would traverse more than 60 years of broadcast history and would deal with a genre in which multiple programs had aired daily episodes over decades. Still, as I began my research, I continued most of my earlier methods, amassing photocopies and notes, which I was by then writing as word-processor documents rather than handwritten index cards. By late 2012, I was thinking about how to turn these new mountains of research materials into chapters. And I freaked out.

Sitting amidst piles of paper on the floor seemed impractical—there was so, so much of it—and I was technologically savvy enough to realize that printing out my word-processed materials would be both inefficient and wasteful. So I began to investigate tools for managing historical research materials digitally. Eventually, I settled on a data management system called DevonThink. I chose DevonThink for a number of reasons, but mostly because it would allow me to perform optical character recognition (OCR) to make my many materials fully searchable. This was a crucial need, especially because I would be imposing a structure on my research after having built my archive over years and from multiple historical periods. It would be impossible for me to recall exactly what information I had about which topics; I needed to outsource that work to the software.

This required that I digitize my paper archive, which I did, over time, with help. My ongoing archival research became about scanning rather than photocopying (using on-site scanners or a smartphone app, JotNot, that has served me well). And I began to generate all of my new notes within DevonThink, rather than having to import documents created elsewhere. Several years into using DevonThink, I still have only a partial sense of its capabilities, but I see this not as a problem but as a way of making the software fit my needs. (Others have detailed their use of the software for historical projects.) I have learned it as I’ve used it and have only figured out its features as I’ve realized I needed them. There are many ways to tag or label or take notes on materials, some of which I use. But, ideally, the fact that most of my materials are searchable makes generating this sort of metadata less essential. I rely heavily on the highlighting feature to note key passages in materials that I might want to quote from or cite. And I’ve experimented with using the software’s colored labeling system to help me keep track of which materials I have read and processed and which I have not.

Because I have figured out its utility as I’ve gone along, I’ve made some choices that I might make differently for another project. I initially put materials into folders (what DevonThink calls “Groups”) before realizing that was more processing labor than I needed to expend. So I settled for separating my materials into decades, but have taken advantage of a useful feature that “duplicates” a file into multiple groups to make sure I put a piece of evidence that spans time periods into the various places I might want to consider it. I have settled into some file-naming practices, but would be more consistent about this on another go-round. I know I am not using the software to its full capacity, but I am making it work in ways that supplement and enable my work process, exactly what I need a digital tool to do.

In many respects, my workflow remains rather similar to my old, analog ways, in that I still spend long hours reading through all of the materials, but now I sort them into digital rather than physical piles (a process that involves another piece of software, which I will explain in my next post). In writing media history from a cultural studies perspective, one necessarily juggles a reconstruction of the events of the past with analyses of discourses and images and ideas. I don’t think there is a way to do that interpretive work without the time-consuming and pleasurable labor of reading and thinking, of sorting and categorizing, of articulating to each other that which a casual glance—or a metadata search—cannot on its own accomplish.

But having at my fingertips a quickly searchable database has been invaluable as I write. Because I have read through my hundreds of materials from “the ‘50s,” for instance, I remember that there was a short-lived soap with a divorced woman lead. Its title? Any other information about it? No clue. But within a few keystrokes I can find it—Today Is Ours—and not just access the information about its existence (which perhaps an internet search could also elicit) but find the memo I have of the producers discussing its social relevance, and the Variety review that shares a few key lines of dialogue. OCR does not always work perfectly—it is useless on handwritten letters to the editor of TV Guide—but my dual processes of reading through everything and of using searches to find key materials have made me confident that I am not missing sources as I construct my argument and tell my story. It’s a big story to tell, and one that may be feasible largely due to my digital tools.
