Yochai Benkler. The Wealth of Networks, 2006

Yochai Benkler assesses a shift from the “Industrial Information Economy” to the “Networked Information Economy” (31–32). The former was characterized by the high cost of the means of producing and sharing content (the media), which buttressed a highly centralized and concentrated production of content (TV, newspapers, etc.). The latter is characterized by a dramatic decrease in these costs. It opened the door to a democratic, participatory, and rhizomatic production of content, and to the flourishing of “nonmarket production” (56); that is, to a world in which individuals fully retrieve their power to create. Through his advocacy of the holy trinity “information, knowledge, culture,” Benkler outlines the technological affordances personal computers and Internet access have bestowed, while responding to problems that were coming to a head at the time he was writing. Indeed, in the 2000s, under pressure from the music and movie industries, many countries were enacting repressive regulations aimed at sanctioning and circumscribing the sprawl of copyright infringement that the Internet made possible. Taking a firm stance against such institutional, vertical forms of regulation, Benkler supports horizontal forms of self- or peer-regulation (he illustrates his point through the examples of Wikipedia (71–74), Slashdot (76–80), and Amazon (!), if I remember correctly). Benkler also demonstrates the economic sustainability of the model of free culture, and how “libre” knowledge fosters further innovation.

Although optimistic in tone, Benkler cautions us that “there is no guarantee that networked information technology will lead to the improvements in innovation, freedom, and justice that I suggest are possible. That is a choice we face as a society” (18)… It is good to be reminded that it is at least partly thanks to idealists and altruists that we are able to share our thoughts on this blog, among many other things. Yet from our 2017 standpoint, Benkler’s “techno-future” sounds in many ways like a path not taken. As Tim Berners-Lee – hardly suspected of prejudice against the Internet – bluntly puts it in a recent interview: “The system is failing.” This failure is arguably due to phenomena that may not have been predictable back in 2006, but also to problems embedded in the ideological tenets of “hacker culture” and stemming from the blind spots of the idealistic views on which Benkler draws. I especially take issue with the ideas that voluntarily creating for free is a practice everyone can equally afford, and that sharing is an inherently altruistic practice. I shall summarize some of these problems below. Please feel free to comment, correct, and add any thoughts.

“Common knowledge,” “semiotic democracy and transparency,” and “participatory culture” are all predicated on a generous view of humanity. Granted, Benkler does mention the possibility of misinformation (“gobbledygook”) and even trolling (75-77), but these are posited as problems that peer production will easily redress. And… well. I will not dwell on this point, for we are all too aware of what has happened: the destabilizing of traditional forms of knowledge hierarchization and validation has given way to the rampant spread of fake news and propagandistic forms of disinformation, to an extent comparable only to wartime, and in the face of which peer processes of accreditation have seemed mostly powerless. It is disheartening to see that faith in humanity can be proven so wrong. But perhaps this error of judgment is related to the way the enemies were picked…

Indeed, Benkler posits liberal states – jointly with traditional mass industries – as the main villain of the story. An “anarchic/libertarian” view, as he puts it. From what I can remember, the debate on intellectual property was very much constructed in these terms in the 2000s. Those terms, and the enemy thus picked, seem to have made Benkler (along with everyone involved at the time) oblivious to the fact that digital sharing would be done through private, for-profit intermediaries. Material ones (cables, physical storage aka “the cloud,” etc.) bring up the issue of net neutrality. Digital intermediaries, such as web search engines, social media, and social networking services, have generated a whole set of problems of which we are now all too aware as well. Perhaps “nonmarket behavior is becoming central to producing our information and environment” (56), but it has also turned out that intermediaries found ways to turn those “nonmarket” exchanges into commodities. The advent of social media and social networking services has thus dramatically reshaped the terms of the discussion over the last decade. The opaque processing of citizens’ data, subsequent manipulations of public opinion, the looming threat over democracies, etc., spring from the ways companies have taken hold of the sharing culture the Networked Information Economy fosters (see Adeline Koh). A bit of wariness toward the very private companies (Google, Amazon, etc.) that sounded so nice back then, and a bit of state regulation of them instead of laissez-faire (and instead of targeting and criminalizing individuals), might have been an answer. This is an easy thing to say now…

Keeping in mind the terms of the debate in which Benkler was engaged also helps account for the insufficiencies of the binary he builds on to think about intellectual property. I exaggerate a little, but the big picture is: copyright = big bad industries protected by big bad states vs. commons/copyleft = universally good model for altruistic individuals. Benkler promotes unbridled access to, reworking of, and sharing of creative content. His view is buttressed by a representation of “information, knowledge, culture” as non-rival goods, i.e., goods whose value does not decrease when shared, just like love or friendship (vs. rival ones, 35-36). At the individual level, that framework leaves out some important issues. In a capitalist context, one risk is that “libre” content creation becomes a privilege granted to those who are sufficiently stable and safe, economically and emotionally, to take risks, and/or who come from a culture that encourages risk-taking, and/or who are insiders with a sense of which kinds of creation are worth investing time and energy in. In the meantime, the rest are silenced or pressed to take more or less informed risks in the hope that their time and energy will eventually generate earnings. More bluntly: how do individuals make a living from their creations in the reign of unbridled “cultural freedom”?

This American Life’s latest episode illustrates some of the limits entailed in Benkler’s radical stance:

The story began in 2011, when the wildlife photographer David J. Slater got stunning photos of monkeys in Indonesia. The twist is that these pictures were selfies, which led Wikipedia to claim them as public domain. Wikipedians got pretty nasty about it, publicly ridiculing Slater when he asked for compensation (asking nicely, not suing). One thing led to another, and the photographer ended up being sued for copyright infringement by one of the monkeys… It is an interesting case for various reasons, notably for the way it blurs the concept of “authorship” (is the monkey the author for having pressed the button?). Here, the point is that if we were to follow Benkler’s altruistic, voluntary, and disinterested conception of individual content creation, this photographer could not make a living and pay for the months he spent in Indonesia. Perhaps Benkler would respond that Slater should instead become a “Joe Einstein” (43). But then, how does one reach such status in the first place? I felt that Fred Benenson‘s framework (fungible vs. nonfungible work) and the nuances he draws between different models, along with Lewis Hyde‘s historically grounded definition of the commons (workable in a “stinted market, one constrained by moral concerns,” Hyde: 36), address these issues in a much more effective way.

“Democracy for anyone with a fifteen-hundred-dollar computer”

Lawrence Lessig is one of the most important figures in shaping the movement for free (libre) technology and media. Since my time as a teenager playing with Linux, he and other figures of the FOSS and Creative Commons movements have instilled an image of a better society where everyone can be free of state oppression and collaborate freely, contributing to the commons according to their ability and in turn getting what they need. This all culminates in their vision of a new political future of… sensibly regulated capitalism?

It pains me to criticize these heroic figures, but it pains me even more to read the liberalism in their work. Endorsement of the marketplace of ideas (“certain fantastic ideas will win in this cultural debate”) and similar sentiments reduce creativity and speech to mere inevitabilities of greater economic incentives. His sentiments are far kinder than the status quo, but throughout he defends the interests of those with access against those without. Like many family members I will be seeing this holiday weekend, I just want to shake Lawrence and say, “I love you, but please wake up.”

This, but unionically

This post and all images are covered by the Creative Communism license. Use anything according to your needs, provided you participate in group solidarity, resist state oppression, and embody trans-inclusive feminism(s).


how to access Fred Benenson, “On the Fungibility and Necessity of Cultural Freedom”; and Michael Mandiberg, “Giving Things Away is Hard Work: Three Creative Commons Case Studies” in Mandiberg, The Social Media Reader, Part V: Law.

I was trying to access the articles Fred Benenson, “On the Fungibility and Necessity of Cultural Freedom,” and Michael Mandiberg, “Giving Things Away is Hard Work: Three Creative Commons Case Studies,” in Mandiberg, The Social Media Reader, Part V: Law. However, when I clicked the link to the e-book, it required me to log in by selecting my institution from a list, and CUNY GC doesn’t seem to be there. So I was wondering how to access the file. Thank you.


Klein & Manovich: a Textual Analysis

As someone in academia who cares about non-academics’ access to information, the first thing that strikes me is that Lauren Klein‘s article is published in a paywalled academic journal, American Literature, published by the prestigious Duke University Press. I’m disappointed that she did not contribute the article to her institutional repository. Naughty DH scholar! In contrast, Lev Manovich‘s article is published on his personal website. Self-publishing may lead readers to regard Manovich’s writing as less authoritative than a refereed journal like American Literature. Also, Manovich’s article is full of usage errors, beginning with the title, “What is Visualization?,” which proper usage would render “What Is Visualization?” The usage errors prompt me to observe that Manovich is a non-native English speaker, having immigrated to the US when he was 21. At the same age, more or less, Klein was graduating from Harvard. Fancy!

Graphs, Maps, Trees and Distant Reading (Franco Moretti)


Before coming to the Graduate Center’s Theatre Program, I studied in an English program (in Korea) for about six years. Back then, “distant reading” was not part of the academic curriculum (I’m not sure if it is now), and I remember how canonical “the rise of the novel” (Ian Watt) was. My provocation is partly based on my journey from literary to theatre studies, though the two are not always mutually exclusive, so please do correct my response and add comments if there is any misreading or misinterpretation (as I am reading it alone for the first time).

Graphs, Maps, Trees by Franco Moretti

In Graphs, Maps, Trees: Abstract Models for Literary History, Franco Moretti provides an alternative, radical methodology for doing literary studies, which has traditionally been based on close reading of individual texts. Moretti’s interdisciplinary approach proposes “distant reading” as a new form of knowledge, based not on individual (canonical) texts but on three “deliberately” abstract models: graphs (from quantitative history), maps (from geography, though closer to geometry), and trees (from evolutionary theory). The book was first published in 2005 and developed out of three essays that Moretti wrote for New Left Review.

The Polemics?

The recent New York Times article on Moretti (as well as the book cover of GMT) states that he is “famous for urging his colleagues to stop reading books.” Moretti may have been more polemical and radical in proposing his views elsewhere (please add comments if you know more about it), and I also understand that this may have become a signature of distant reading. However, I think a more helpful way to read GMT is as learning different ways of engaging with “books” to find patterns, structures, and relations that are independent of, or unavailable to, interpretation (close reading).

I find it worth noting that Moretti’s work received criticism for comparing natural evolution with cultural change and for not providing connections with other fields of study. In addition, Harold Bloom’s dismissive reaction toward Moretti, described in another New York Times article published in 2004, is telling: Bloom said, “with an audible shudder,” that he is interested in reading and that is all he is interested in. Bloom’s definition of “reading” here is that of interpretive reading, a traditional way of engaging with books (remember: he is the author of The Western Canon).

Based on the assumption that people in our class have varying degrees of acceptance or rejection of Moretti’s argument, I would like to ask the following questions: Do you buy Moretti’s concept of distant reading in literary studies? How about in other fields? Have the scholars in your field of study accepted, criticized, or wholly abandoned this type of reading? Is Moretti’s argument helpful in understanding and expanding last week’s discussion of “what is text” and “what is data”? Reading GMT in 2017, I wonder whether digitization and databases (in the context of data visualization) have played (or will play) an important role in circulating and/or expanding Moretti’s models. As far as I know, distant reading has not been a big thing in theatre studies (other than Shakespeare-related work), but as I have recently discovered an example of “distant watching” (the Visualizing Broadway project), I would like to hear about any interesting projects in your fields.

The Canon, the Genre, and the Model?

Rather than discussing the specific examples and figures Moretti provides, I would like to focus more on a methodological perspective. On the one hand, Moretti’s GMT can be understood as a political project (or can it?), as the approach problematizes a literary history written out of “the one per cent of the canon and the ninety-nine of forgotten literature” (77). Since the 1960s, feminist, queer, and postcolonial theories (to name but a few) have challenged the construction of the Western canon, but in a way they have also created other sets of canons that are now frequently part of the curriculum. I think Moretti’s approach is fundamentally different because it is not about evaluating the aesthetic quality of the canon or the would-be canon, but about teasing out the relations between canonical and non-canonical works (either by abolishing all qualitative difference or by articulating that very difference). Is there, then, a place for aesthetic connoisseurship as such? How will it change the status of the canon (or of the “high” and “low” forms)?

Moretti’s analysis in GMT is grounded in literary genres. By discovering the patterns and devices of genres and cycles, he often aims to understand a structural whole that is larger than the sum of its individual parts (in the case of the literary maps, 53, and also in the New York Times articles). I am wary of speaking the language of “the whole” and wonder if this seemingly “scientific” view would provoke any backlash. Although larger sample sizes can offer more accurate analyses, we should also be mindful of distinguishing large “samples” from “the whole.” After all, as an extension of last week’s discussion, what is available in the archive or database can influence the directions and results of the studies Moretti proposes. Is it always better, then, to map “world literature” as such?


In sum, I like that Moretti does not propose his models (“materialist concept of form”) as “the ultimate” model for a rational literary history. He states in the last sentence of GMT that “opening new conceptual possibilities seemed more important than justifying in every detail” (92). Are we still at the stage of opening new possibilities, or is it now time to justify the details? Is what Moretti described as a “dream” in 2004, “a literary class that would look more like a lab than a Platonic academy,” still a dream? How can you (or do you want to) use the methods of distant reading in your classroom?

Have a great weekend!

Database and Narrative

Lev Manovich notes that traditional GUIs use elements of the “real” workplace to make their interfaces more readily understandable: files for storing, a trash can for deleting, etc. However, he notes that if elements from our physical environment first migrated into the computational sphere, now the conventions of computation are migrating back into our physical reality. It is an essentially bidirectional movement: just as we first used elements from the physical world to understand and represent computerized space, elements from computerized space are now being used to understand and represent the physical world. The “database” is one such conception: it is, as Manovich describes it, the “symbolic form” of the computer age, a particular way of making meaning out of the world that fundamentally opposes traditional forms of meaning-making (in particular, narrative). In short, the fundamental idea seems to be that changes in ways of thinking are the direct result of changes in the technology we design and use.

Manovich suggests that database and narrative are “natural enemies.”  In his view, databases are non-linear while narratives are linear, and narratives focus on rigid processes of selection while at the heart of database logic lies unlimited combination and juxtaposition.  Yet the very issue of order here demonstrates how messy such distinctions can get. While an impetus to cause-effect principles may lurk in the background of any narrative, it must be said that narrative form often complicates or calls into question such chronological or teleological order. And while databases always offer the possibility of re-ordering or relocating their records, any given representation of a database must, finally, present its contents in some order.   A database needs to be able to both collect and store new data and yet retain a certain (relatively) changeless underlying structure for organizing and presenting that data.

Katherine Hayles, I think more correctly, describes these two modes (database/narrative) as symbiotic and not antagonistic. As the various authors we read point out, the human impetus to “collect data” has always been present.  We see this, for instance, in the case of Whitman and his many catalogues. But typically, the mere collection of data is only the means to an end–to a certain interpretation of that data.  And in the fundamental logic of database design itself, the selection, collection, description, organization, and presentation of a database’s contents almost always involves narrative structures, including assumptions about how the information will be manipulated and used through the interface. Narrative and database, it seems, cannot easily be disentangled.  But I do think Manovich is on to something interesting when he considers the need for an “info-aesthetic” understanding of the database itself: what happens when a database becomes not merely the means to an end but an end in itself?

Drawing upon his experience as co-editor of The Walt Whitman Archive (headquartered in my hometown of Lincoln, NE), Folsom observes that traditional notions of genre are too rigid—or have been deployed too rigidly—to do justice to the “rhizomorphous” nature of many authors, Whitman in particular. Folsom happily argues that the database resists this rigidity of traditional notions of genre; indeed, he declares that the database is itself a “new genre, the genre of the twenty-first century” (1576), one that fundamentally challenges two traditional cultural forms, narrative and archive. Yet if we consider the database as a concrete form of cultural expression, I wonder if conceptually pairing it with narrative only serves to efface its own idiosyncrasies. For undoubtedly, our ways of accessing narratives (e.g., reading, listening, watching) are strikingly different from our ways of accessing databases. You may browse or search the information contained within a database (i.e., the metadata), but you certainly don’t read that information in the very important sense of the word in which we read (or view or listen to) more traditional literary narratives. It seems to me that the difference is not really one of genre but of access or engagement, and of the very different positional attitudes each mode asks us to assume. Folsom does indeed point out that “Leaves of Grass as a database is a text very different from Leaves of Grass contained within covers” (1578); however, what is key is not the different ontologies of the two texts, but rather the fundamentally different experiences of engaging with them (and this is perhaps the point, for if the database is to be an entirely new form of aesthetics, it cannot, and should not, simply replicate digitally the experience of reading a printed book).


McGann and “The Rationale of Hypertext”

In his essay “The Rationale of Hypertext,” Jerome McGann makes the case for de-centered, hypertext editions of literary works. According to McGann, who is a textual scholar, current critical editions (editions that offer authoritative versions of texts and include critical commentary or variants) are limited by the book form: “The logical structures of the ‘critical edition’ function at the same level as the material being analyzed. As a result, the full power of the logical structures is checked and constrained by being compelled to operate in bookish format” (Radiant Textuality: Literature After the World Wide Web 56). Basically, books establish formal limits to the study of literature. Because the book determines how the scholar engages with the text, it constrains his analysis. Throughout the essay, McGann offers several examples of nontraditional texts that cannot be adequately represented in book form—poems set to music, based on pictures or paintings, or relying on the specifics of inscription and medium. In the example of Emily Dickinson, who often created her poems to fit the scraps of paper available, McGann explains that it would be difficult to combine the “facsimiles” (exact copies, or images of the text) with appropriate scaffolding and criticism in book form. The result would be too vast and unwieldy. McGann concludes that hypertext editions offer an opportunity to present texts in a more flexible way. He compares hypertext editions to libraries (which are collections of texts, rather than a single text) and to the internet (where information is connected through a network). He argues for presenting texts and all their variants, components, and critical materials in non-centralized form, so that “when one goes to read a poetical work, no documentary state of the work is privileged over the others. All options are presented for the reader’s choice” (Radiant Textuality: Literature After the World Wide Web 73). This dramatically breaks open the traditional text to new ways of reading, and therefore of analysis.

I have one major question about McGann’s proposal. It’s obvious from his essay that his main audience consists of other textual scholars, or literary scholars in general. I’m wondering how he might present such a proposal to students of literature, or to people in other disciplines. Re-reading this text (I first read it several years ago, when I was much more idealistic and less familiar with teaching), I was struck by the pedagogical implications, or lack thereof. I’m not sure how students would interact with these “non-centered” texts. How would undergraduates, especially those who don’t have much experience handling books, or much experience with literature in the first place, have the confidence to confront and navigate a hypertext edition? How could we scaffold the experience in a way that doesn’t constrain them? To spur your thinking, I’m going to link to the Rossetti Archive, which is McGann’s project. I’m also going to link to one of my favorite online editions, of Virginia Woolf’s To The Lighthouse. Both of these resources are non-centralized, and it’s up to the user to determine her engagement with them. How might today’s students (who are largely familiar with hypertext, but less so with literature) interact with these resources?

Attending the ITP skills lab on Monday, 11/6? Please bring your laptop!

To those of you attending next Monday’s (11/6) “Low Level Superpowers” lab:

If you have a personal laptop, please bring it to use during the lab. The Mac Lab where we meet has 12 computer terminals and we have a few more people than that registered–so bringing a laptop will ensure you have a machine to work on.

Thanks for your help!

Provocation on James Paul Gee’s What Videogames Have to Teach Us about Learning and Literacy

First, I apologize for my belated provocation. James Paul Gee’s What Videogames Have to Teach Us about Learning and Literacy is inspiring to me in ways I cannot even articulate. I might joke that, after reading his theories in chapters 1, 2, 4, and 7, I have gone through a “tacit learning” process, the results of which are very valuable to personal learning and growth despite their underappreciation in traditional education, where, according to Gee, evaluation and assessment are based on students’ explicit performance. I grew up in a small, conservative place in China, where playing games was considered a sin for students, whose only task was to study hard and do well on standardized tests. Video games were seen by parents as an evil monster and “a waste of time,” much as the grandfather says in Gee’s book, because in their view gaming is “playing” and involves no “learning” at all. This binary thinking stems from a very particular historical background in China. It was only in 1977, a year after Mao’s death, that the Chinese government restored the College Entrance Examination system, which had come to a complete halt during the Cultural Revolution, when the “extreme leftists” claimed that the educational system was “capitalist,” and educators were tortured and beaten to death as “rightists” and “capitalists.” During those years, only a small group of people were admitted to universities in China: “revolutionary students,” a synonym for economically poor students whose ancestors were all peasants, accepted through “recommendations” from people in power in the Communist Party. This small group, together with the students accepted through the College Entrance Examinations in the following years, later became the most successful and influential people in China, and many of them still play very important roles in various fields domestically and internationally.
Ordinary Chinese people thus suddenly found a path to success other than becoming a faithful Chinese Communist Party member and climbing the social ladder set up by Chairman Mao and his entourage, a route that had never been clear at all. Kids around China were then expected to perform extremely well on standardized tests, which were seen as the “fairest” means of social mobility. Everything related to “play” was seen by “good” parents and educators as the enemy of their children’s academic and ultimate “success” in life. Arcades were deemed places where “bad kids” went, and there was never a lack of stories of such kids stealing money from their parents to play video games in the arcades, where they were picked up by “bad people.” Video games were also considered “addictive” and conducive to lower academic performance and declining moral standards in children. Seldom did anyone associate video games with “learning,” and even by the time I went to college, when online games became popular with the development of the internet, media coverage of such games was still very negative: online gamers were portrayed as so addicted that they died of exhaustion in “internet cafes” from excessive sleeplessness. Nowadays, there has been a tremendous turn in attitudes toward “gaming” in China: after neoliberalism took hold of people’s lives and the pressure to make money left everyone exhausted, games have become not only a tool for releasing the tension and anxiety of daily life but also a wealth-generating industry that has produced rich CEOs of gaming companies as well as winners of international gaming competitions. However, it seems that this change in public attitudes has been driven mostly by the counterforce of “capital,” which happens to balance the negative impacts of the excessive political dictatorship reigning over China.
A book like What Videogames Have to Teach Us about Learning and Literacy, which thoroughly analyzes “gaming” from the perspectives of learning and education, is very inspiring to me because it is a calm and rational articulation of the values and advantages games hold for learning and education, and of how we could use those features to remedy the deficiencies of the modern education system. In Chapter 2, Gee uses the concept of the “semiotic domain” as a basis for characterizing video games. I think this is a great concept because it neither demonizes nor glorifies video games. Rather, it treats video games like any other “semiotic domain,” with its own “content” and “design grammar,” which one needs sufficient “literacy” to understand and use, just as one learns a language system. Biases toward games, whether positive or negative, stem from a failure to recognize this commonality that games share with other “semiotic domains.” To be literate in games, one has to learn to “read” multimedia sources and how they interact within the gaming environment, and thus the principles behind such literacy. Gaming is therefore not only “playing” but also “learning,” in a way that differs from traditional learning. According to Gee, games have an advantage over traditional learning because they encourage active learning by creating embodied experiences that let players participate in a process in which they are interested. To achieve their purposes, which can be multiple in the games they play, players must actively learn how to use the tools the games provide to solve problems through their own experience.
Players are also willing to articulate the results of this “tacit learning” (how they solved those problems) in communities, by writing strategy guides that other gamers with similar experiences read critically; their feedback may strengthen existing theories of play, and thus a “probe-hypothesize-reprobe-rethink” cycle, similar to the process of conducting scientific research, forms naturally and organically. This process helps a child become a “self-teacher.” It differs significantly from traditional learning, where students passively absorb knowledge of which they have very limited embodied experience and memorize answers they do not understand in order to complete standardized tests consisting mostly of multiple-choice questions. Such testing seriously neglects the precious fruits of students’ tacit learning and instead focuses on the “performance” of answering questions about imposed knowledge that students can hardly digest, for lack of embodied experience. Above all, I love the analysis of the educational features of gaming, and I highly recommend the learning principles summarized from the author’s own play at the end of each chapter of this book. They are useful not only for understanding the educational features of video games but also for instructional design with or without them, because they are essentially about activating students’ agency and engaging them in an active, embodied, and communal learning experience that will kindle the fire of learning and motivate them with a burning desire to learn for the rest of their lives.