Kathleen Fitzpatrick. Planned Obsolescence, 2011

Fitzpatrick, Kathleen. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press, 2011.

Kathleen Fitzpatrick makes a compelling case for the need to reform university publishing. She starts with ways to revise the peer review system, advocating for post-publication, “transparent” review rather than pre-publication, anonymous review. She navigates multiple levels throughout the book: broad forms of institutional resistance – epitomized by the motto “This Is How We Have Always Done It” – but also individual anxieties triggered by the destabilization of the notion of authorship. I also appreciated her relatively balanced view of technological innovation, as she addresses the potential, but also the flaws, of different initiatives (Slashdot, Philica, MediaCommons, etc.). The politics of it all come to the fore in the last chapter, where she expounds the problems related to the trade-oriented model of US university publishing that came to prevail in the 20th century, and what could be done to reconnect university presses to the university community.

I shall focus here on the way Planned Obsolescence came into existence. In her conclusion, Kathleen Fitzpatrick explains the peculiar genesis of her book. After submitting her manuscript to the press, she posted it online. It thus received both “traditional” and online peer review. By engaging in such a process (and reporting on it), Fitzpatrick crossed the bridge between theory and practice and directly put to the test some of the ideas expounded in her book. While going over the online draft of the book, I was struck by the overall quality and supportive tone of the comments it received there. Fitzpatrick’s dual peer review experiment managed to reconcile both models of peer review for the best, perhaps because they were articulated so as to be complementary – sharing the same purpose rather than competing. A key element – and perhaps a limit of the experiment – is that Fitzpatrick got to decide on the “first circle” of reviewers. It is likely that this selection at the outset shaped the ensuing audience, paving the way for “sharp, thoughtful criticism to make the project better” (190). As a matter of fact, the online comments she received seemed overall much nicer than the anonymous journal feedback I have read (and, needless to say, much, much nicer than the average comment you get online these days). Yet does such a process of cherry-picking one’s reviewers really count as open peer review?

I also could not help thinking of Fitzpatrick’s early example of dual peer review in light of the recent Hypatia transracialism controversy. Instead of fostering “sharp, thoughtful criticism to make the project better”, the social-media response that followed the publication of an article on transracialism in a peer-reviewed journal (Hypatia) plainly aimed at having the article retracted. This was a textbook case of technological disruption, insofar as I properly understand the concept. Some counter-commentators contended that the campaign’s arguments were mostly baseless, and that there was evidence some of the most outspoken opponents had not even read the article they wanted taken down. Yet the screenshot below shows how hastily a part of Hypatia’s board of associate editors yielded to the bashing (which also included online shaming targeting an untenured assistant professor). Was that a glimpse of the darker side of open peer review?

Hypatia’s Facebook page. Part of an apology, posted by Hypatia’s board of associate editors on 1 May 2017, for the publication of one of the journal’s peer-reviewed articles. At the top is a statement, added 25 May 2017, that the apology does not reflect the views of the editor or board of directors (Source: Wikipedia page)


Yochai Benkler. The Wealth of Networks, 2006

Yochai Benkler assesses a shift from the “Industrial Information Economy” to the “Networked Information Economy” (31–32). The former was characterized by the high cost of the means of producing and sharing (the media), which buttressed a highly centralized and concentrated production of content (TV, newspapers, etc.). The latter is characterized by a dramatic decrease in these costs. It opened the door to a democratic, participatory, and rhizomatic production of content, and to the flourishing of “nonmarket production” (56); that is, to a world in which individuals fully retrieve their power to create. Through his advocacy of the holy trinity of “information, knowledge, culture”, Benkler outlines the technological affordances that personal computers and access to the Internet have bestowed, while responding to problems that were coming to a head at the time he was writing. Indeed, in the 2000s, under pressure from the music and movie industries, many countries were enacting repressive regulations aimed at sanctioning and circumscribing the spread of copyright infringement that the Internet made possible. Taking a firm stance against such institutional, vertical forms of regulation, Benkler supports horizontal forms of self- or peer-regulation (he illustrates his point through the examples of Wikipedia (71–74), Slashdot (76–80), and Amazon (!), if I remember correctly). Benkler also demonstrates the economic sustainability of the free culture model, and how “libre” knowledge fosters further innovation.

Although optimistic in tone, Benkler cautions us that “there is no guarantee that networked information technology will lead to the improvements in innovation, freedom, and justice that I suggest are possible. That is a choice we face as a society” (18)… It is good to be reminded that it is at least partly thanks to idealists and altruists that we are able to share our thoughts on this blog, among many other things. Yet from our 2017 standpoint, Benkler’s “techno-future” sounds in many ways like a path not taken. As Tim Berners-Lee – hardly suspected of having a prejudice against the Internet – bluntly puts it in a recent interview: “The system is failing”. This failure is arguably due to phenomena that may not have been predictable back in 2006, but also to problems embedded in the ideological tenets of “hacker culture”, stemming from the blind spots of the idealistic views on which Benkler draws. I especially take issue with the idea that voluntarily creating for free is a practice everyone can equally afford, and that sharing is an inherently altruistic practice. I shall summarize some of these problems below. Please feel free to comment, correct, and add any thoughts.

“Common knowledge”, “semiotic democracy and transparency”, and “participatory culture” are all predicated on a generous view of humanity. Granted, Benkler does mention the possibility of misinformation (“gobbledygook”) and even trolling (75–77), but these are posited as problems that peer production will easily redress. And… well. I will not dwell on this point, for we are all too aware of what has happened: the destabilization of traditional forms of knowledge hierarchization and validation has given way to the rampant spread of fake news and propagandistic forms of disinformation, to an extent only comparable to what happens during wartime, and in the face of which peer processes of accreditation have seemed mostly powerless. It is disheartening to see that faith in humanity can be proven so wrong. But perhaps this error of judgment is related to the way the enemies were picked…

Indeed, Benkler posits liberal states – jointly with traditional mass-media industries – as the main villain of the story. An “anarchic/libertarian” view, as he puts it. From what I can remember, the debate on intellectual property was very much framed in these terms in the 2000s. Those terms, and the enemy thus picked, seem to have made Benkler (along with everyone involved at the time) oblivious to the fact that digital sharing would be done through private, for-profit intermediaries. Material ones (cables, physical storage aka “clouds”, etc.) bring up the issue of net neutrality. Digital intermediaries, such as web search engines, social media, and social networking services, have generated a whole set of problems of which we are now all too aware as well. Perhaps “nonmarket behavior is becoming central to producing our information and environment” (56), but it has also turned out that intermediaries found ways to turn those “nonmarket” exchanges into commodities. The advent of social media and social networking services has thus dramatically reshaped the terms of the discussion over the last decade. The opaque processing of citizens’ data, the subsequent manipulation of public opinion, the looming threat to democracies, etc., all spring from the ways companies have taken hold of the sharing culture that the Networked Information Economy fosters (see Adeline Koh). A bit of wariness towards the very private companies (Google, Amazon, etc.) that sounded so nice back then, and a bit of state regulation instead of laissez-faire towards them (rather than targeting and criminalizing individuals), might have been an answer. This is an easy thing to say now…

Keeping in mind the terms of the debate in which Benkler was engaged also helps account for the insufficiencies of the binary he builds on to think about intellectual property. I exaggerate a little, but the big picture is: copyright = big bad industries protected by big bad states vs. commons/copyleft = universally good model for altruistic individuals. Benkler promotes unbridled access to, reworking of, and sharing of creative content. His view is buttressed by a representation of “information, knowledge, culture” as non-rival goods, i.e. goods whose value does not decrease when they are shared, just like love or friendship (vs. rival ones, 35–36). At the individual level, that framework leaves out some important issues. In a capitalist context, one risk is to have “libre” content creation reduced to a privilege granted to those who are sufficiently stable and safe, economically and emotionally speaking, to take risks, and/or who come from a culture that encourages risk taking, and/or who are somehow insiders with a sense of which kinds of creation are worth investing time and energy in. In the meantime, the rest are silenced or pressed to take more or less informed risks in the hope that their time and energy will eventually generate earnings. More bluntly: how do individuals make a living from their creations under the reign of unbridled “cultural freedom”?

This American Life’s latest episode illustrates some of the limits of Benkler’s radical stance:

The story began in 2011, when the wildlife photographer David J. Slater took stunning photos of monkeys in Indonesia. The twist is that these pictures were selfies, which led Wikimedia to claim them as public domain. Wikipedians got pretty nasty about it, publicly ridiculing Slater when he asked for compensation (asking nicely, not suing). One thing led to another, and the photographer ended up being sued, for copyright infringement, by one of the monkeys… It is an interesting case for various reasons, notably for the way it blurs the concept of “authorship” (is the monkey the author for having pressed the button?). Here, the point is that if we were to follow Benkler’s altruistic, voluntary, and disinterested conception of individual content creation, this photographer could not make a living and pay for the months he spent in Indonesia. Perhaps Benkler would respond that Slater should rather become a “Joe Einstein” (43). But then, how does one reach such a status in the first place? I felt that Fred Benenson’s framework (fungible vs. non-fungible work) and the nuances he draws between different models, along with Lewis Hyde’s historically grounded definition of the commons (workable in a “stinted market, one constrained by moral concerns”, Hyde: 36), address these issues in a much more effective way.