Review of Re-collection. Art, New Media, and Social Memory
by Richard Rinehart and Jon Ippolito
MIT Press, Cambridge, MA 2014
Annet Dekker, Piet Zwart Institute, Rotterdam
Republished from Computational Culture, a journal of software studies
Re-collection. Art, New Media, and Social Memory by Richard Rinehart and Jon Ippolito asks how our increasingly digital civilisation will persist beyond our lifetimes. The authors focus on the preservation of new media art as a case study in new media’s broader challenge to social memory. Rinehart and Ippolito are well-known names in the field of media art preservation; both have worked in the field as curators, teachers and initiators of various documentation models. Re-collection has been in the making for quite some time and can be seen as a collection of their previous writings, supplemented by futurological perspectives and practical guidelines as ‘a recipe for cultural permanence’ (p. 29). In this sense the book resembles a manifesto or a polemical essay. This is not to say that it lacks argumentation, but that it provides a single narrative. Better said, a dual narrative, because not only do the two authors alternate the chapters, they also at times insert comments in each other’s texts. Although this is not an uncommon practice and could be said to vaguely mimic today’s digital culture of commenting, in this case it works well, as the comments exemplify a multiplicity of perspectives that is also in the nature of the works they discuss. However, at times the comments function more as alternative notes to upcoming sections, and they lose their strength as complementary and critical observations.
Rinehart and Ippolito stress the use and importance of the controversial term ‘new media art’. Whereas they resist indicating or defining a new genre, according to them, in the context of preservation, the term makes it easier to refer to the medium-specific elements of artworks (p. 21). These medium-specific elements, although never made explicit, primarily concern the obsolescence of hardware and software. Nevertheless, as they argue, the preservation challenges of these traits are not restricted to new media art and can also be found in other non-traditional art forms such as earth art, performance art, conceptual art and installation art. As such, new media art has digital art at its centre and other non-traditional art forms at its edges (p. 21). Hence, it is no surprise that the book starts with two very telling examples of the connections between new media art and other art forms, in particular in relation to the challenges these works pose for conservators: the artworks Expanded Expansion (1969) by Eva Hesse and Wall Drawing 146 (1972) by Sol LeWitt. Both former painters, Hesse and LeWitt are better known as artists whose artworks became connected with process art (Hesse) and conceptual art (LeWitt). Whereas Hesse’s work has crumbled over time into something that is hardly recognisable, due to the nature of the material she used (a combination of fiberglass, polyester resin, latex, and cheesecloth), Sol LeWitt’s drawings continue to be exhibited, not because he used a more sustainable material but because his works consist of meticulous notes that can be executed by anyone who can read and interpret the instructions. As Ippolito argues, in these cases and similar works, ‘fixity equals death’ (p. 7). He continues that it is only through variability that such works can survive and be experienced in the centuries to come.
Although it remains questionable whether Hesse actually wanted her work to stand the test of time, in this book it serves as an example of how museums tend to handle artworks: preserve against all odds.
Variability is a key term for the authors, and connects to what is known as the variable media approach: a mode of thinking about media initiated in the 1990s by Ippolito, amongst others, developed at the Variable Media Initiative at the Guggenheim Museum in the late 1990s, and carried forward from 2002 by the Variable Media Network (VMN), in which several North American cultural organisations took part. 1
As explained by Ippolito, the variable media approach ‘encourages creators to define a work in medium-independent terms so that it can be translated into a new medium once its original format is obsolete’ (p. 11). The term variable media seems to perfectly cover all kinds of work that deal with obsolescence and change. It therefore seems strange that the term is not used throughout the book, instead of the more restrictive term ‘new media art’. Whereas the latter has strong references to digital or computational media as well as the polemical ‘new,’ 2 variable media refers to a characteristic of the media that ignores the type of media as well as a historical dimension of time. For example, the VMN tried to come up with a methodology that works across mediums and can therefore still be recognised in the far future, a future where someone might not understand the term ‘U-matic’ (a videocassette format used in the 1970s and 1980s) but will recognise the meaning of the term ‘installed’. According to the VMN, shifting the focus to a work’s behaviour says something about the presentation and perception of the work: can the work be installed, performed, reproduced, duplicated, made interactive, encoded, networked, or contained? Traditional methods for describing an artwork, by contrast, stay closer to ‘object-dependent’ terminology – the name of the artist(s), the date of the work, the medium used, and the dimensions (height, width, and depth) – which reveals little about its functioning. In response to a comment on Rinehart and Ippolito’s insistence on ‘new media’, Ippolito argues on the Yasmin_discussion list that the ‘new’ does not refer ‘to the latest gizmos available now but to expressive technologies of any period that outpace their culture’s ability to control them’. Whether or not the term variable media would advance the discussion is hard to say.
What could speak in favour of the term variable media over new media is that it would gain more rhetorical traction, since the term ‘new media’ is, for some people in the art domain, reason enough to not get involved. That said, as Rinehart remarks in the same discussion, one should be aware that the current ubiquity of different terms that relate in some way or other to computational culture (“computational” being one of them) might, in this sector, soon be ‘replaced by “contemporary art” and the whole fascinating history of new media art will be discursively swept under the museum rug’ 3
It may seem strange to step out of the contents of the book into a discussion that takes place elsewhere; however, it speaks to the aims of the authors to be transparent about their methods and views. So, even before its European release and immediately after its North American publication, a discussion of the book’s main arguments took place, by invitation, on the Yasmin_discussion mailing list (moderated by Roger F. Malina, also the executive editor of Leonardo Publications, which published the book). The timing might be questioned, or feel more like a marketing effort than a serious attempt to enable discourse. However, this would downplay their intentions and critical efforts to make these issues available to a wider public. As such, it could be seen in the light of a transmedia approach in which the authors try, through different means and platforms – from books and conference presentations to Twitter and mailing lists – to get their message across. Altogether, and perhaps not coincidentally, this type of circulation can also be said to be typical of many of the works they try to safeguard for the future. The book itself is thus part of a wider discursive process.
To return to the content of the book: the authors argue that there are three main threats contributing to the obsolescence and death of contemporary culture (p. 8) – technology, institutions and law – and they emphasise that these three can also be turned around to become its saviours. The three parts open with the provocative titles ‘Death by Technology’, ‘Death by Institution’ and ‘Death by Law’. With these titles they want to show how an artwork’s materials [technology], the database systems used in museums [institution] and copyright infringements [law] are ushering in a ‘digital dark age’. These ‘facts’ are followed by analyses of how these particular causes are motivated and how they can be reversed. In the final chapters of the book the authors present their ‘recipes for cultural permanence’ (p. 29).
The first section, on technology, follows the paradigm of conservation – looking at, analysing and finding a strategy to maintain the material components of an artwork – to see in what way technology is to blame for a quickly disappearing culture. It is no secret that companies use the method of built-in obsolescence, so that some artists are faced with failing hardware or software after a certain period. However, and as acknowledged by the authors, several artists specifically choose to work with or within these restrictions. A solution for the preservation of these technology-based artworks is, according to the authors, that ‘society has to move from preserving media to preserving artworks’ (p. 46). I will come back to this assumption shortly, but to continue their argument: they suggest that we ‘view change not as an obstacle but as the means of survival’ (p. 46). This type of change is explained in the next chapter as an inherent characteristic of digital media. Through the example of Alan Turing’s theories of computation they argue that variability is built into digital technology and as such should be embraced rather than fought against. Up to this point the argumentation holds strong (although I believe museums are not unaware of these challenges); however, they continue by comparing the performative qualities of new media (in that it exhibits variable forms) to music. Similar to new media, a musical score can be performed several times with different kinds of instruments – that is, they continue, as long as it is performed ‘within appropriate parameters’ (p. 48).
This is the point that clearly illustrates two of the main issues in this book. First of all, the kind and amount of change ‘within appropriate parameters’ is never really made explicit. What can be changed, and to what degree, is one of the main questions that conservators have been dealing with for decades. The notion of change is something that every conservator faces whenever they handle an artwork – and it is openly discussed among museum professionals at conferences and meetings, something the authors neglect to mention. To assume that conservation only tries to ‘freeze’ an artwork sends the practice and theory of conservation plummeting back to its early days, and ignores any progress that has been made at least since the 1980s, and arguably long before. The assumption that museums are institutions incapable or unwilling to change or adapt to changing conditions permeates the entire book. Although there is some truth in this statement, since museums are not at the forefront of new technologies and practices, it forecloses any discussion and shuts out both the successful and the unsuccessful attempts being made by museums. This closure is striking, the more so because both authors have themselves done many inspiring things while working in museums. The picture they paint of the museum is very generalised. Paradoxically, they emphasise only one role of the museum when talking about the future of new media: the museum as the place for the preservation of new media art. This is surprising, not only because of their disdain for the museum but more so because the book presents many interesting examples of how people outside of museum structures, amateurs and professionals alike, are preserving digital culture.
While the importance of these ‘outside’ tactics is recognised and emphasised, and while attempts are made to think from those strategies, the possibility that other means, institutions or networks may better suit the art form is not explored. According to Rinehart, in the aforementioned Yasmin discussion, a reason for holding on to the museum is that he is not ready to ‘[give up] on museums just yet, I think they/we deserve strong critique and have a lot of work to do, but I also have hope that they can adapt to history again and serve even this art well’ 4
Their vision of the museum of the future is an ‘open museum’. The idea of the open museum is based on five criteria that come down to: preserving ‘born-digital new media artworks’; allowing ‘unprecedented access’ to the artworks; supporting or following ‘innovative legal, economic and cultural frameworks’; exploring ‘participatory culture’; and experimenting with usage (p. 107). The latter is proposed in various ways. There are the obvious ones – emphasising the new possibilities of digital sources, including new possibilities for the traditional arts; using custom-built open source software tools; developing compatibilities between open systems to forge dynamic network registries; and using existing museum categories alongside folksonomies – but they also suggest reusing artworks via an adapted ‘Open Art License’, and that museums should compensate artists early on and commission more artworks, rather than paying only after artists are established. Although several museums already follow these tactics, as the authors admit, according to them it is only when a museum abides fully by the open museum model that a whole field can move forward (p. 111). It is in this section that their thinking already seems outdated. First of all, as they acknowledge, many museums are already following some, or many, of these tactics. Secondly, the idea of a shared and networked heritage museum structure has proven to be one of the largest bottlenecks, a prime example being the Europeana project. Thirdly, the notion of openness has become conflated with neoliberal politics, so, before suggesting more ‘openness’, the quality and consequences of ‘open’ should be critically analysed. Perhaps, instead of adding onto old systems and methods, the open museum could depart from the materiality of the artworks and implement those in its structures. 5
For example, with artworks that thrive on re-use and distribution, rather than acquiring a ‘final object’ a museum could become part of the process by stimulating development. The process and the development are what artists are paid for, and the outcome is for everyone else to use. This means that a museum’s economic ‘acquisitions’ are tied to an engagement with the practice, and not to an outcome of that process. Put into practice, this would extend the role of the museum to one of producer, or facilitator, of artworks. As such, this would be a call for discussing, analysing and stimulating alternatives. 6
This faith in the museum brings me to my second point of critique: the comparison that is made between the variability of new media and music. At first sight the comparison makes sense, since both music performances and new media exemplify complex relationships, and their materiality is often open to interpretation. However, there are also important differences, which, when one system is applied onto the other, can become diffused and conflate the specificities of each. For example, from a computational point of view the comparison falls short on two accounts. First of all, each time a file is accessed on a computer a new version is stored to memory. As Matthew Kirschenbaum explains:
One can, in a very literal sense, never access the “same” electronic file twice, since each and every access constitutes a distinct instance of the file that will be addressed and stored in a unique location in computer memory. 7
Thus, as argued by Kirschenbaum in the same article, ‘preservation is creation – and recreation’. In other words, the distinction between creation and preservation collapses: the copy is seen as the result of a process of copying. In this case the notion of variability may not be very helpful, because it is questionable whether the copy is an instantiation of the original or something new. What, then, is a copy in digital environments? 8 My second point is related, and considers the copy in relation to the product instead of the process. How faithful is the copy to the original? Even in cases where artists insist on keeping the ‘original’ code, additional code needs to be written to enable outdated code to function properly. Although this could be argued to be variability, in most cases the ‘original’ code will change. On a practical level, an element of an artwork that no longer functions because of browser settings could be made to work by adding a patch that translates the code. This means that instead of being variable, the work is always in process. In other words, any transformation of the code gives it a different meaning. 9 By translating the code, the language changes as well as the acquired meaning. Furthermore, it follows that code attains meaning in relation to specific contexts, for instance when combined with that which lies outside of the code. In other words, the notion of variability is more complicated when used with software-based artworks than with, say, analogue installation art. This said, although variability in the true sense of the word (i.e. instantiations based on the same score/code) might not be possible, digital documents contain remarkable amounts of historical information, through which saved metadata can be accessed. 10
Both these points are emblematic of the book. They indicate a lack of thinking-through that can also be found in the subsequent sections on institutions and law. At times this leads to an over-simplification of the issues at hand, and can leave the reader at a loss, wondering whether the statements of the authors really make sense, since references are only briefly commented upon. Clearly the authors have years of experience and know their respective fields very well, but flattening out differences, conflating the stakes, or simply being imprecise is a problem that makes it difficult to see the value of the book for professionals in the field of conservation, whether in practical or theoretical terms.
Although this is acknowledged by the authors, who direct those in the know to the last parts of the sections and the concluding chapters of the book (which are indeed an interesting read, and of which the chapter ‘Variable Organism’ deserves special mention for its provocative and intriguing speculations), I believe it is a missed opportunity. Perhaps it is too early for the first book on new media art conservation to talk about all the nitty-gritty that makes new media different from, or similar to, more traditional artworks; a book that would sharpen and focus current discussions on conservation, and bring them to other levels. On a positive note, regarded as a first entry or a guidebook, the manifesto could lead to very interesting future research that digs deep into the issues that are sketched here. In the end, with all their catchy statements, Rinehart and Ippolito set up a wealth of topics, a spectrum of references and practical and analytical examples that is at times overwhelming, or frustrating, but makes me confident that we can expect interesting times ahead.
Annet Dekker is an independent researcher and curator. She is currently Researcher in Digital Preservation at Tate, London, a part-time Postdoctoral Research Fellow at London South Bank University & The Photographers’ Gallery, and core tutor at the Piet Zwart Institute, Rotterdam (Master Media Design and Communication, Networked Media and Lens-Based Media). Previously she worked as Web curator for SKOR (Foundation for Art and Public Domain, 2010–12), was programme manager at Virtueel Platform (2008–10), and head of exhibitions, education and artists-in-residence at the Netherlands Media Art Institute (1999–2008). Between 2008 and 2014 she wrote her Ph.D. thesis, Enabling the Future, or How to Survive FOREVER. A study of networks, processes and ambiguity in net art and the need for an expanded practice of conservation, at the Centre for Cultural Studies, Goldsmiths, University of London. http://aaaan.net
1. For more information about the Variable Media Approach and network see: http://www.variablemedia.net.
2. See, for example, the edited anthology New Media, Old Media by Wendy Chun and Thomas Keenan, which provides a comprehensive volume of classic essays that explore the tensions of old and new in digital culture. Chun, Wendy Hui Kyong, and Thomas Keenan, eds., New Media, Old Media: A History and Theory Reader. Cambridge, MA: The MIT Press, 2005.
3. Rinehart, Richard, Yasmin_discussions, ART, NEW MEDIA, AND SOCIAL MEMORY, 11 July 2014, http://estia.media.uoa.gr/pipermail/yasmin_discussions/20140711/001937.html.
4. Rinehart, Richard, Yasmin_discussions, ART, NEW MEDIA, AND SOCIAL MEMORY, 11 July 2014, http://estia.media.uoa.gr/pipermail/yasmin_discussions/20140711/001937.html.
5. I am referring to materiality as described by N. Katherine Hayles (2002:32-3). Hayles, N. Katherine, Writing Machines. Cambridge, MA: The MIT Press, 2002.
6. As part of the Yasmin_discussion, Rinehart also emphasises the need for such alternative practices, however as he argues, ‘museums are already in this game, trying to preserve this art, so let them do it well. And, collectively, they command huge resources that we would all do well to tap in our collective efforts’. Rinehart, Richard, Yasmin_discussions, Fwd: Fwd: Dragan critique on Yasmin?, 7 August 2014, http://estia.media.uoa.gr/pipermail/yasmin_discussions/20140807/001982.html.
7. Kirschenbaum, Matthew G., “The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary”, Digital Humanities Quarterly 7, no. 1 (2013), http://www.digitalhumanities.org/dhq/vol/7/1/000151/000151.html.
8. Computer scientist David Levy clearly describes the process of copying in different media and the consequences it has for code and programming. See Levy, David M., Scrolling Forward: Making Sense of Documents in the Digital Age. New York, NY: Arcade Publishing, 2001.
9. This process is perfectly exemplified by experiments on a single line of vintage computer code, 10 PRINT, an extremely concise BASIC program for the Commodore 64. In: Montfort, Nick, Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark C. Marino, Michael Mateas, Casey Reas, Mark Sample and Noah Vawter, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. Cambridge, MA: The MIT Press, 2013.
10. Similarly, Kirschenbaum concludes, ‘computer operating systems are characterised less by their supposedly ephemeral nature than by the exquisite precision of their internal environments’ (2008:204). Kirschenbaum, Matthew G., Mechanisms. New Media and the Forensic Imagination. Cambridge, MA: The MIT Press, 2008.