Digital Commons

Sorry for the lengthy lull in activity (busy IRL).

I’ve translated a part of a recent open letter/petition initiated by the Conseil National du Numérique in France and signed by 75 prominent professors and leaders of research and cultural organisations. The petition was published in Le Monde on 10 September 2015, and the full text is available on the Conseil’s website. I think there are some acutely made points here about the threats from the privatisation of knowledge by a parasitic knowledge economy, and about the general ‘situation’ of knowledge today in the digital transition. I’m not so sure the rosy picture of open access in other Western countries is borne out in practice at present (and the more nationalistic tones of the petition are understandable but a little regrettable in their competitiveness), but the potential of open access policy and law is clearly an important subject for today.

Promoting the dissemination of culture and knowledge

The commons will soon make their entrance into French law, on the occasion of the forthcoming digital law announced by Manuel Valls after consultation with the Conseil National du Numérique. This is to be welcomed: the commons have always fed the practices of exchange and sharing that underlie scientific production and cultural creation.

Science has always been understood as a common. Historically, the scientific method involves a collective construction of knowledge, organized around verification and validation by peers. The massive influx of digital technology into most fields of human activity is creating new situations. Networks facilitate the emergence of large distributed communities which can mobilize to create and share their knowledge. These knowledge commons are so many reservoirs built up from the initiatives, creativity and engagement of individuals working toward a collective goal. In a broader perspective, concerned with safeguarding a form of shared ownership and the collective management of resources according to a “communal” model, they take their place alongside the natural resources managed by all the members of a community. The digital has reactivated this notion, bringing together the dynamics of the two major transitions that our world is undergoing: the protection of the informational commons as part of the digital transition and the protection of the natural commons as part of the ecological transition.

It is therefore time to give the commons a proper legal basis for their preservation, and to adapt the law to existing practices….

… Public domain information is made up of what cannot be, and what is no longer, covered by intellectual property law. At present its protection is not very effective. Indeed, it is defined only negatively, in the hollows of the intellectual property code, which makes it impossible to fight effectively against abusive IP claims over a work – this is what is meant by the term ‘copyfraud’. Examples are numerous: it is common for the scanning of a public domain work, or even simply photographing it, to be used as a justification for claiming copyright on that work! Is it not astonishing – and this is a euphemism – that the département of the Dordogne could claim a copyright on reproductions of the cave of Lascaux, 17,000 years after the death of its creators? Because it limits the distribution and reuse of works that make up the public domain, copyfraud constitutes an infringement of the rights of the whole community.

Creating a positive law of the public domain is also the way to protect against the misappropriation of items that cannot be the subject of an intellectual property right, such as information, facts, ideas and principles. For instance, Amazon has successfully filed a patent on photography against a white background….


Note: see this post about the Amazon patent:


… Open Access, already adopted by our neighbors, particularly in Germany and the UK, enshrines in law the possibility for researchers who so wish to publish, in open access, research articles that have been funded by public money, after a short embargo period. This measure aims to limit the dependence of public research institutions on the major scientific publishers. Currently these institutions are subject to a system of double payment, even though since 2012 the European Commission has invited member states to enshrine open access in their legislation.

In fact, researchers funded by public money are mostly obliged, for reasons of visibility and career, to publish in prestigious scientific journals. They are therefore in a situation of dependency on scientific journals that now belong to an oligopoly consisting of a few large publishers (Elsevier, but also Springer, Wiley, Nature). In order to publish in these journals, the authors are forced to give up their copyright. These same researchers also provide their expertise to shape the editorial choices of the journals. In this respect, the increase in journal subscription prices does not appear justified, especially as the transition to digital has significantly decreased publication costs.

Meanwhile, higher education and research institutions spend more than €80 million annually on access to electronic resources, and access prices have risen continuously, by 7% per year for the past ten years…. This situation severely limits the advancement of research while weighing on public finances.

But open access does not only aim to reduce the expenditure of public institutions – it has a real impact on the advancement of research, and in some cases on the preservation of public health: the team in charge of Liberia’s response to the threat of the Ebola virus was unable to access certain articles because of their high cost, which hampered its ability to identify the virus early and to develop prevention and care measures more quickly.

Other measures are needed to build an open digital environment conducive to research, innovation and creation. The provision enabling the automated searching of text and data (text and data mining) allows automated search methods to be applied to very large volumes of text or data; through this process it is possible to obtain results that would not have been discovered by any other method. This would give new impetus to French research’s entry into the age of big data and allow very significant productivity gains. Other countries such as the UK, Japan and the United States have outpaced us in this area.

The real enhancement of cultural heritage occurs through its open use by the greatest number. This is also the historic mission of public libraries, which will benefit greatly from these provisions. The circulation of open science helps us to face the transitions that confront us. A positive definition of the public domain and its incorporation into law will benefit the influence and standing of science and culture in the digital age. The United States, the United Kingdom and Germany have already understood this. What are we waiting for before we too benefit from a wider diffusion of this open science and the new audiences and new reputation it will bring?

List of the 75 signatories on the version published in Le Monde:

 Pierre LESCURE, Président du Festival de Cannes, Journaliste

 Bruno CHAUDRET, Directeur de Recherches CNRS, Président du conseil Scientifique du CNRS, Académicien

 Denis PODALYDES, Acteur, metteur en scène, scénariste et écrivain français, et sociétaire de la Comédie-Française

 Bruno LATOUR, Directeur scientifique de Sciences Po

 Benoît THIEULIN, Président du Conseil national du numérique

 Marc TESSIER, Président de Video Futur Entertainment Group et membre du Conseil national du numérique

 Alain BENSOUSSAN, Avocat à la Cour d’appel de Paris

 Michel WIEVIORKA, Sociologue, Président de la FMSH, Directeur d’études à l’EHESS

 Paul JORION, Anthropologue, essayiste

 Judith ROCHFELD, Professeur de droit privée à l’Ecole de droit de la Sorbonne, Université Panthéon-Sorbonne (Paris 1)

 Patrick WEIL, Directeur de recherche au CNRS, Président de Bibliothèques sans Frontières

 Yann MOULIER BOUTANG, Professeur des Universités en sciences économiques UTC

 Antoine PETIT, Président et Directeur Général de l’INRIA

 Nathalie MARTIAL-BRAZ, Professeur de droit Privé, Université Paris Descartes

 Melanie DULONG DE ROSNAY, Chargée de recherche au CNRS, Responsable du pôle Gouvernance de l’Information et des Communs de l’Institut des Sciences de la Communication du CNRS/Paris-Sorbonne/UPMC, Co-fondatrice de l’association Communia pour le domaine public

 Valérie PEUGEOT, Présidente de l’association Vecam

 Bernard STIEGLER, Philosophe, président de l’association Ars Industrialis et Directeur de l’Institut de Recherche et d’Innovation (IRI) du Centre Georges Pompidou

 Sophie PENE, Professeur à l’Université Paris Descartes

 Daniel KAPLAN, Délégué général de la Fondation pour l’Internet Nouvelle Génération (la FING)

 Serge ABITEBOUL, Directeur de recherche à Inria et Professeur affilié à l’ENS Cachan

 Pierre MUTZENHARDT, Président de l’université de Lorraine et Président de la commission recherche de la Conférence des Présidents d’Université

 Dominique BOULLIER, professeur de sociologie, médialab Sciences Po

 Camille DOMANGE, Chargé d’enseignement Sorbonne Paris I.

 Christine BERTHAUD, directrice du CCSD, CNRS.

 Claude KIRCHNER, Directeur de recherche Inria, Conseiller du président d’Inria, Président du comité de pilotage du Centre pour la Communication Scientifique Directe

 Jean-François ABRAMATIC, Informaticien et ancien président du W3C

 Brigitte VALLEE, Directrice de Recherche émérite au CNRS, rattachée au laboratoire GREYC (Caen Normandie)

 François TADDEI, Généticien, Directeur du Centre de recherches interdisciplinaires

 Albertine MEUNIER, Artiste

 Claire LEMERCIER, directrice de recherche au CNRS en histoire, présidente du conseil scientifique d’Openedition, membre du conseil scientifique du CNRS.

 Francis ANDRE, Chargé de mission Données de la recherche, Direction de l’information scientifique et technique, CNRS

 Alexandre MONNIN, philosophe, chercheur chez Inria, membre du réseaux d’experts d’Etalab

 Colin DE LA HIGUERA, Société informatique de France, directeur adjoint du Laboratoire informatique LINA à Nantes

 Christine OLLENDORFF, Directrice de la Documentation et de la Prospective, Arts et Métiers ParisTech

 Nicolas CATZARAS, Secrétaire général de la FMSH

 Maurice RONAI, Membre de la CNIL, Chercheur à l’École des Hautes Études en Sciences Sociales (EHESS)

 Fabienne ORSI, Economiste, chercheuse à l’Institut de Recherche pour le Développement

 Pierre GINER, Artiste

 Christian PHELINE, Membre de la Commission d’accès aux documents administratifs (CADA), ancien directeur du développement des médias

 Valérie BERTHE, Directrice de recherche CNRS et membre du conseil scientifique du CNRS

 Jean-Pierre FINANCE, président de COUPERIN et du CA de l’ABES, ancien président de la CPU

 Virginia CRUZ, designer, directrice adjointe de l’agence IDSL et enseignante à l’Ecole Polytechnique

 John STEWART, Chercheur en sciences cognitives, Université de Technologie de Compiègne

 Cécile MEADEL, Professeure de l’Université Panthéon Assas (Paris II)

 Brigitte PLATEAU, Professeur des Universités, Administratrice Générale de l’Institut Polytechnique de Grenoble

 Jean-François BALAUDE, Professeur de philosophie, Président de l’Université Paris Ouest Nanterre La Défense

 Hervé LE CROSNIER, Université de Caen Normandie

 Anne VERNEUIL, Présidente de l’Association des Bibliothécaires de France

 Sophie ROUX, Professeur d’histoire et de philosophie des sciences, ENS

 Serge BAUIN, Université Sorbonne Paris Cité, chargé de mission libre accès aux publications scientifiques au CNRS

 Margot BEAUCHAMPS, coordinatrice du Groupement d’intérêt scientifique M@rsouin

 Michel BIDOIT, Directeur de l’Institut des Sciences de l’Information et de leurs Interactions, CNRS

Being and Space

Sam Kinsley, former colleague and fellow technophile, now at Exeter Uni, recently published ‘The Matter of “Virtual” Geography’ in Progress in Human Geography. It gives a comprehensive overview of the history of formulations of virtual spaces and realities, from the heady days of the 1990s articulations of cyberspace up to recent approaches to ideas of coded and networked spatialities. Sam perceptively mobilises Stiegler’s work, including his use of Simondon and Heidegger, to propose a way of describing and analysing digitally enabled spatial and temporal refigurations of contemporary existence and sociality.

I wanted to add a gloss on this mobilisation of Stiegler’s notion of technicity, as a point that seemed to me to touch on an important element in Stiegler’s critical adoption of Heidegger’s Being and Time — hence the ‘Being and Space’ title. Sam has this to say about Stiegler’s positioning of humans as, in a way, always already preceded by technics:

“Culture”, he writes, “can accordingly be thought of as metastable systems of retention, of exteriorized thought: ‘A new born child arrives into a world in which tertiary retention [data, images, writing and so on] both precedes and awaits it, and which, precisely, constitutes the world as world’ (Stiegler, 2010a: 9, original emphasis). The ongoing creation of shared knowledge, and thus a shared memory and history, is in large part mediated by technology (with the notable exceptions of practices of oral history and storytelling).”

Absolutely, and here Stiegler’s take on, and taking from, Heidegger’s notion of Dasein’s ‘thrownness’ is evident. Dasein, the being for whom its being is a question, ‘falls’ into time and encounters a facticity already there. This paradoxical futurity of what precedes Dasein in a sense programmes (though this word evokes Stiegler’s Heidegger much more than Heidegger himself) the questioning of being that characterises Dasein, along with the tension between an intratemporal business with everyday things seeking to avoid the question and an authentic encounter with it via (in Heidegger) an assuming of the heritage of the collective past as progenitor and horizon of Dasein’s future possibilities.

In the latter part of Technics and Time 1 Stiegler ‘deals with’ Heidegger, identifying this notion of a thrownness into an already existent facticity as his major insight, while also identifying quite precisely the point in Being and Time (at a certain moment in the famous chapter on historicality and temporality) where Heidegger turns away from the implications of this constitutive factical technicity of Dasein and towards the more problematic notion of a history of being as expressed in the community of the Volk — the people — thought separately as a spiritual continuity, somehow transcendent of a facticity now relegated to the status of an intratemporal covering over of the former. For Stiegler, as Sam’s account indicates, technics is an irreducible dimension of individual and collective being, and any ‘authentic’ reflection on or encounter with the question of one’s being, or of being in general (in philosophy, religion, politics, etc.), develops on the basis of and out of conditions that are factical, that pre-exist the one reflecting, and that also make possible the transmission and communication of that reflecting to others to come after.

One more note: the oral history and storytelling that is part of the “ongoing creation of shared knowledge” Sam describes is also mediated technically, if not ‘technologically’ (though perhaps today few instances of mediation pass completely to one side of the pervasive electronic media milieu). Oral transmission is always part of a linguistic technicity; it is always undertaken in conjunction with certain rituals and gestures associated with the cultural event of story recital; and often these will include the production of graphics of various kinds – rupestral, sand-painting, bodily inscription and so forth. That minds retain these forms and conventions and rites testifies to the profound interdependence of organic and non-organic spatial memory supports in the maintenance and evolution of individual and cultural identity.


Event-ization gloss

I recently posted about a symptomatic episode in the recent history of military drone R&D that involved the licensing of proprietary software developed by ESPN for its media coverage of American football (cited in Chamayou’s Théorie du drone). I referred there rather breezily to Stiegler’s notion of ‘event-ization’ (événementialisation), a term which deserves some further unpacking to explore its relevance to these developments, in which a media coverage software system is being deployed in a different context. So, here goes…

I took the term from Technics and Time 2: Disorientation, where it is discussed in ch. 3 on the ‘industrialization of memory’ (cf. p. 100 and p. 115ff). It resembles similar formulations (Derrida, Virilio, Baudrillard and McLuhan come immediately to mind) concerning the way that mass, industrial technological mediation has affected the production of ‘historical reality’, through, first, the speed of electronic transmission of events and, second, the extent to which many events are ‘co-produced’ to be media at the same time as they are ‘events’ covered by media. In terms of the first aspect, the collapse of the delay between the event and its mediated reproduction as ‘story’, report, analysis and record is what characterises the industrial, electronic media’s impact on the production of experience. The reduction of the delay between an event and its representation and interpretation in some kind of media (from oral account, print, newsreel, radio and TV news, to blogging, live coverage and tweets) challenges thought to comprehend the event as something that can be placed in an individual’s or collective’s memory in a way that enables it to contribute to the understanding of reality and the evolution of one’s historical/cultural identity. Instead, events seem to appear today as already assigned a significance and an impact via their immediate processing in and as a composed, selectively synthesized mediated transmission. Stiegler calls this a ‘short-circuiting’ of the ‘transindividuation’ that otherwise passes (in longer circuits) between individuals in the collective negotiation of significance, value, identity and so on.

In terms of the latter aspect of co-production of event/media coverage, sporting events assume something of an exemplary status inasmuch as those pro-sports that are heavily mediated become thoroughly permeated (in terms of rules, scheduling, ‘monetisation’ of talent, merchandising, audience, player and fan culture, etc) by commercial media logics and prerogatives. But also, since Walter Benjamin’s acute analysis of the fascist aestheticization of politics, the mediatization of parliamentary and presidential democratic politics has increasingly imposed itself as a question and a crisis of ‘liberal democracy’. And so it goes for much of social and cultural ‘experience’ which today is subject to ever-increasing and ever more pervasive industrial mediation.

What makes Stiegler’s account of event-ization different is his characterization of this overlapping and preemption of experience by media (here I would refer you to my book Gameplay Mode, which develops this theme of pre-emption) as a singular transformation of what is the very basis of human spatio-temporal experience in the production and interpretation of exterior memory supports. In Stiegler’s view spatiotemporality is historically and culturally conditioned, which is also to say technically conditioned. It is always already a technically mediated (from flint stone, to cave or sand graphics, to play, book, radio and video, to computer) processing of an always already exteriorised memory, exterior forms being co-constitutive of what we like to understand as our species-specific interior consciousness.

Let me say two things, then, about how ‘eventization’ – which concerns mainstream media’s impact on lived culture/experience – relates here to the military adoption of a mainstream media programming of sporting eventfulness. 1. The ‘audience’ here is initially restricted to the military drone operator/command and personnel and to those reviewing the footage in the field or higher up in the military-political complex (even if these videos, and ones simulating them, also populate video-sharing sites – something which certainly needs to be addressed as a further aspect of the transformation of eventfulness, that is, of historical reality and the production of its political significance… but not for this post…). So this eventization may not initially be the production of war as media in any general, propagandistic manner, but it is about accumulating ‘audience credit’ for what is a major military-industrial business. Drones in operation are always also part of major R&D cycles of testing and improvement, maximising the enormous capitalization advantages provided by government investment in these automatic weapons systems. ‘Audience credit’ is what Stiegler identifies as the lynchpin of contemporary commercial eventization; securing the attention and belief of consumers (and in this case of innovators and tech speculators) is, in Stiegler’s analysis, at the heart of the unprecedented and problematic domination of the mediation of eventfulness by capitalist (and here militaro-corporate) interests today. (We should add, without developing this further here, that this also has routes into major military-entertainment leverage potential in virtual entertainments of all kinds – to go with the military-entertainment dimensions of the drone developments in general.)

2. What Stiegler characterizes as the ‘forceful recounting’ of the event in contemporary electronic, realtime eventization – by which events are forcefully produced according to the logics of audience capture/management noted above – takes on a particular sense with the drone eventizing of the overflown territory. And this is one which insists, with lethal force, on its pre-interpretation of the human activity subject to surveillance and action as a counter-insurgent/counter-terrorist instance. This not only has effects on the lives of those targeted – and this is without even getting into the hotly contested arguments about the numbers of ‘civilian’ versus ‘insurgent’ or ‘terrorist’ casualties – but also operates as a powerful determinant of the experience of living under the permanent and would-be ubiquitous surveillance that requires the mobilization of such an eventization software package, one so well suited to the pro football arena. The ‘experiential costs’ of the thoroughgoing mediatization of the ‘arena’ are more difficult to quantify, but they are no less significant for people who must live with the forceful eventizing of their existence as falling within an ‘insurgent’- or terrorist-inhabited battle arena. See, for instance, this story publicising a recent visit to the US Congress, sponsored by politicians sympathetic to human rights initiatives against drone use, by civilian victims of a drone strike in Pakistan.

Accustomisation to Lethal Autonomous Robots

To get a sense of how the development of autonomously acting robot weapon systems is becoming an established notion in the U.S. and allied military-political-media contexts, take a look at freelance journalist and former Pentagon staffer Joshua Foust’s article in the National Journal last week: ‘Soon, Drones May Be Able to Make Lethal Decisions on Their Own’. In fact the article argues that this is not going to happen that soon at all; rather, it discusses how LARs (Lethal Autonomous Robots) would solve some problems while creating others for military planners and political leaders. The headline performs the principal task of introducing a coming technological development; this is the key bit of ‘news’: something new is coming down the pipeline and we need to have a think about what to do with it. Deploying LARs may be the best means of preventing the hacking of drones, suggests Foust (a former intelligence analyst and Defense One writer), by reducing the communications avenues into the robotic system. But the complexity of ‘asymmetrical conflicts’ is a formidable challenge to their successful deployment, ‘political issues aside’. Syria is just too complicated for drones, or even human warfighters, to figure out, according to one military academic cited. But, the reader assumes, they’re working on it, and soon the decision-making gap between these two military assets will be narrowed.

Drones, sport and ‘eventization’

This post is to start circulating some ideas from work I am increasingly becoming preoccupied with, concerning military robotics and AI as a particular (and, in many ways, particularly important) case of the automatizing technologies emerging today. This is a big topic attracting an increasing amount of critical attention, notably from people like Derek Gregory (whose Geographical Imaginations blog is a treasure trove of insights, lines of inquiry and links on much of the work going on around this topic), and Lucy Suchman, who is part of the International Committee for Robot Arms Control and brings a critical STS perspective to drones and robotics on her Robot Futures blog.


I’m reading French CNRS researcher Grégoire Chamayou’s Théorie du drone, a book which has made a powerful start on the task of philosophically (as he has it) interrogating the introduction of these new weapons systems, which are transforming the conduct, conceptualisation and horizon of war, politics and the technocultural global future today. There are many riches in there, but I just read (p. 61) that the U.S. Air Force Intelligence, Surveillance and Reconnaissance Agency, looking for ways to deal with the oceans of video data collected by drones constantly overflying territory with unblinking eyes, obtained a version of software developed by ESPN and used in its coverage of American football. The software provides for the selection and indexing of clips from the multiple-camera coverage of football games, enabling their rapid recall and use in the analysis of plays (which, as anyone who watches NFL or college football coverage knows, takes up much more time than the play itself in any given broadcast). The software is able to archive footage (from the current or previous games) in a manner that makes it immediately available to the program director in compiling material for comparative analysis, illustration of player performance or of the tactical/strategic traits of a team, and so on. The player and the key play can be systematically broken down, tracked in time, identified as exceptional or as part of a broader play style, and so forth.
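To make this kind of clip indexing a little more concrete, here is a minimal, purely illustrative sketch in Python. The names (Clip, ClipIndex, the tag strings) are hypothetical and reflect nothing of ESPN’s or the Air Force’s actual systems; it simply shows, under those assumptions, how tagged footage might be archived and then recalled by attribute.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """A single segment of footage with descriptive tags attached."""
    game_id: str
    start_seconds: float
    end_seconds: float
    tags: set = field(default_factory=set)  # e.g. {"player:12", "play:screen-pass"}

class ClipIndex:
    """Archive of clips, searchable by tag for rapid recall during analysis."""

    def __init__(self):
        self._clips = []    # clips in the order they were archived
        self._by_tag = {}   # tag -> list of positions in the archive

    def archive(self, clip: Clip) -> None:
        """Store a clip and register it under each of its tags."""
        position = len(self._clips)
        self._clips.append(clip)
        for tag in clip.tags:
            self._by_tag.setdefault(tag, []).append(position)

    def recall(self, *tags: str) -> list:
        """Return clips carrying all of the given tags, in archive order."""
        if not tags:
            return list(self._clips)
        candidate_sets = [set(self._by_tag.get(t, [])) for t in tags]
        positions = sorted(set.intersection(*candidate_sets))
        return [self._clips[p] for p in positions]

# Usage: archive plays from several games, then pull every clip of one player.
index = ClipIndex()
index.archive(Clip("week3", 120.0, 131.5, {"player:12", "play:screen-pass"}))
index.archive(Clip("week4", 411.0, 425.0, {"player:12", "play:interception"}))
index.archive(Clip("week4", 630.0, 641.0, {"player:88", "play:screen-pass"}))
print(len(index.recall("player:12")))  # -> 2
```

In the military repurposing discussed below, one would presumably point the same kind of mechanism at surveillance footage rather than game film, with tags for places, individuals and ‘key acts’ rather than players and plays.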

These capacities are precisely what makes the software desirable to the US Air Force, inasmuch as the strategic development of drone operations deals with effectively the same analytical problem: the player and the key play, the insurgent/terrorist and the key act (IED, ambush, etc.). The masses of video surveillance of the vast ‘gridded’ battlespace, a vast ‘arena’ similarly zoned in precisely measurable slices (but in 3D), must be selectable, taggable and recoverable in such a way as to be usable in the review of drone operations. And the logic (or logistic, as Virilio would immediately gloss it) of this treatment of ‘battlespace’ is realised in what has recently emerged unofficially from the Obama administration-Pentagon interface as the strategic deployment of drones by the CIA (which runs a significant and unreported proportion of drone operations globally). This targeting strategy is based precisely on pattern analysis, both in tracking known or suspected enemies of the state and in identifying what are called ‘signature targets’ (the signature referring to the ‘data signature’ of otherwise unidentified individuals, one that matches the movements and associations of a known insurgent/terrorist — see Gregory’s post on this in Geographical Imaginations).

The ethical and juridical-political dimensions of this strategy are coming under increasing and much-needed scrutiny (more to come on this). As a media/games theorist, what strikes me about this felicitous mutuality of affordances between pro sport mediatisation technics and those in development for the conduct of drone operations is the reorientation to space it not only metaphorically suggests (war, become game, now steering the metaphoric vehicle back in the other direction) but enacts through an ‘eventization’ (Stiegler) operating in the very constitution of the ‘event’ of war or counter-insurgency (or what James Der Derian called ‘post war warring’). While there are many complicit actors benefiting from the profitable mediatized evolution of American football into a protracted, advertising-friendly broadcast, no such ‘partnership’ exists between the key players ‘on the ground’ and those re-processing their data trails.

Memory and Space


Some interesting reflections on Stiegler’s theorisation of the event, as process, in the ‘Industrialisation of Memory’ chapter of T&T2. In particular, this post usefully points to the (almost?) undiscussed, Virilio-inspired similarities between Stiegler’s ‘collapse of distance’ between the input and reception of an event and David Harvey’s ‘time-space compression’. This also happens to be something I’m currently thinking about for a paper…

Originally posted on The Semaphore Line:


Over the last week or so I’ve returned to reading some Stiegler, after a break of maybe 6 months, as a result of editing a book chapter. I’ve used him as a key reference point to talk about human access to and cognition of an event. I’ve argued that social media works to construct the nature of a protest event – and have claimed that differently bundled accounts of an event bring individualizing conclusions. For example, that each account of an incident brings a different ‘spin’, which when brought together on a media platform moulds unique perceptions of the event. The nuances in language between accounts are telling of this ‘spin’ – some are detailed, some are satirical, some are instructive, some are rote.

I’ve also said that social media has brought a new orientation to events. I’ve played on the spatial dynamic of Stiegler’s use of the term…


Living books about life, Open Humanities Press

A year on from the publication of the ‘Paying Attention’ theme issue of Culture Machine, the excellent journal published by Open Humanities Press, I’ve been poking around the various linked websites and stumbled on the Living Books About Life site, which is really interesting.

The JISC-funded, OHP-published website offers 24 open access ‘living books’, curated by a range of innovative scholars to ‘bridge the gap between the humanities and the sciences’:

All the books in the series are themselves ‘living’, in the sense that they are open to ongoing collaborative processes of writing, editing, updating, remixing and commenting by readers. As well as repackaging open access science research — along with interactive maps, visualisations, podcasts and audio-visual material — into a series of books, Living Books About Life is thus engaged in rethinking ‘the book’ itself as a living, collaborative endeavour in the age of open science, open education, open data and e-book readers such as Kindle and the iPad.

In the series there’s a Bioethics™ book, curated by Joanna Zylinska, with a range of open access readings thematically organised, ‘biomanufacturing and biopatenting’ for example, as well as some artistic reflections in video and text form. There are also living books curated by David Berry on ‘Life in code and software‘, Steven Shaviro on ‘Cognition and decision‘ and Claire Colebrook on ‘Extinction‘. There’s lots to explore and I would encourage people to take a look… perhaps there should be one on ‘attention’?!

Translation – “Is not all creation a transgression?” – Gilbert Simondon Interview (1983) “Save the Technical Object”


Andrew Iliadis has translated a really interesting interview with Simondon from 1983, originally in the magazine Esprit. Simondon covers creativity, novelty, alienation (in technics as the originary relation of the human-technical) and invention:
“Technics are never completely and forever in the past. They contain a power that is schematic, inalienable, and that deserves to be conserved and preserved.”

Originally posted on ethics & philosophy of information:



Interview with Gilbert Simondon

Translated by Andrew Iliadis

The following is an English translation of a 1983 interview that Simondon gave to the French magazine Esprit (Esprit 76:147-52. 04/1983).

[Simondon makes references to a variety of individuals here, including Ducrocq, Maxwell, DuMont, Illich, Stephenson, and Faraday. Albert Ducrocq was a French scientist and writer who specialized in robotics. James Clerk Maxwell was a Scottish theoretical physicist. Allen B. DuMont was an American scientist and inventor specializing in cathode ray tubes. Ivan Illich was an Austrian philosopher. Robert Stephenson was an English civil engineer specializing in locomotive and railway engineering. Michael Faraday was an English scientist specializing in electromagnetism and electrochemistry.]

Anita Kechickian: In 1958 you wrote about alienation produced by non-knowledge of the technical object. Do you always have this in mind as you continue your research?

Gilbert Simondon: Yes, but I amplify it by…


Biography of Gilbert Simondon

[Reposted from my personal blog]

Jussi Parikka has highlighted the translation of a biography of the philosopher Gilbert Simondon [the original was written by Nathalie Simondon], who was a key influence, of course, on the work of Bernard Stiegler and also Gilles Deleuze. In his post, Parikka highlights the hands-on nature of Simondon’s practice – the fact that he built a television in the basement of his school – and the resonances with Friedrich Kittler’s building of a synthesiser. There is also a link here, as Philippe Petit highlights in his introduction to the book of interviews Économie de l’hypermatériel et psychopouvoir, with Bernard Stiegler, whose father worked for Radiodiffusion-Télévision Française, the French national broadcaster, between 1939 and 1964, and built their first TV. Parikka picks out the neologism ‘thinkerer’ (commingling ‘tinkerer’ and ‘thinker’), coined by Erkki Huhtamo to describe Simondon, a term that might also be applied to Stiegler for his various means of practising philosophy.

The biography demonstrates what an extraordinary, and sadly relatively short, career Simondon had, including a fairly meteoric rise from teaching at a lycée in Tours (1953-55) to being appointed to the Chair of Psychology B at the Sorbonne (1965). Simondon worked with Bachelard and Hyppolite as a postgraduate, and his thesis was examined by Jean Hyppolite, Raymond Aron, Georges Canguilhem, Paul Ricoeur and Paul Fraisse. Quite something!

The biography also includes very interesting quotes from letters to Bachelard and Hyppolite as well as fantastic summaries of Simondon’s key works. The experimental spirit of Simondon’s work is strongly evoked throughout, with a clear commitment to a collaborative methodology (across and between science and philosophy):

[He] chose a path of reflection where philosophy might inform science. Such collaboration between science and philosophy, he wrote in 1954 to his future supervisor [Hyppolite], must be carried out not in the results, which would be “an invasion of thought by unworthy followers, as shown in scientistic time,” but in the method: “At the level of method, science is never a feudal lord ruling over a vassal philosophy; rather, it is a relation between the spontaneous and the reflective. The spontaneous governs the reflective, as in scientism, only when the reflective activity is not contemporaneous with the spontaneous activity.”

The biography makes for essential reading not only for those interested in the philosophy of technology and technics, but also for those with a broader interest in the history of ideas, particularly as it relates to the development of what we call continental philosophy.

Memory programmes: the retention of mediated life

Following on from Patrick’s post, I thought I’d also put up a post concerning the Conditions of Mediation conference held at Birkbeck on 17 June 2013. I thought the conference was an excellent, if very condensed, occasion for a variety of people interested in media theory, philosophies of/for media and, in particular, phenomenological understandings of mediation.

There was a series of interesting, and rather diverse, keynotes, from Graham Harman, Shaun Moores and Lisa Parks, plus two slots of parallel paper sessions. I was pleased to be able to give a paper as part of this really interesting event, in the ‘Technics, Interface and Infrastructure’ paper session.

I spoke in the same session as James Ash, who presented a great paper synthesising a reading of Graham Harman’s Object-Oriented Ontology with optics to interrogate understandings of ‘interface’. I was also hoping to speak alongside Patrick, because our papers complement one another as a kind of meditation on Bernard Stiegler’s reading of Husserl in relation to understandings of the perception of time and the processes of memory. Patrick has posted his excellent paper here on this blog.

For those interested, I have reposted below, from my own blog, a slightly cleaned up, and referenced(!), version of my paper.