This is What Happens When You Map a Deleuze and Guattari Book

a thousand plateaus map

Gilles Deleuze and Félix Guattari’s “A Thousand Plateaus” is arguably the duo’s most important work, fleshing out concepts like the rhizome across its 600-plus pages. It also employs a non-linear writing style that casually injects phrases like “God is a lobster” into some of the most thought-provoking philosophy of the 20th century.

Now Justin Joque, a grad student at the European Graduate School, has decided to map the duo’s language. Any work on Deleuze and Guattari is rife with terms like “assemblage,” “molar,” “becoming” and so on. The map illustrates this jargon as it appears, disappears, and reappears in the text.

“The red bars represent the number of occurrences of key phrases that appear, build and disappear throughout the text. The vertical grey lines mark the given division of the text into sections,” Joque writes of his method. The 1987 English translation was used for analysis.
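The basic idea behind this kind of map is easy to sketch in Python. The toy function below is purely illustrative, not Joque’s actual code: it tracks single words rather than the multi-word key phrases the real map follows, and the sample text and term list are made up for the example.

```python
import re

def term_positions(text, terms):
    """Map each term to the token offsets at which it occurs in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    positions = {t: [] for t in terms}
    for i, tok in enumerate(tokens):
        if tok in positions:
            positions[tok].append(i)
    return positions

sample = "A rhizome has no beginning or end; a rhizome is an assemblage."
print(term_positions(sample, ["rhizome", "assemblage", "molar"]))
# → {'rhizome': [1, 8], 'assemblage': [11], 'molar': []}
```

Plotting each term’s offsets as tick marks along the length of the book gives exactly the sort of bars-that-build-and-disappear picture the map shows.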

Deleuze and Guattari (quoted by Joque) describe their work in “A Thousand Plateaus”:

We are writing this book as a rhizome. It is composed of plateaus. We have given it a circular form, but only for laughs. Each morning we would wake up, and each of us would ask himself what plateau he was going to tackle, writing five lines here, ten there. We had hallucinatory experiences, we watched lines leave one plateau and proceed to another like columns of tiny ants. We made circles of convergence. Each plateau can be read starting anywhere and can be related to any other plateau (22)

Check out Justin Joque’s page here.

  • Antonio Mauryshmelly

    Ladies and gentlemen of the jury, with this graph we can show, with 99% certainty, that the semen found at the site of the buggery matched the semen of D&G. The prosecution rests, your Honor.

  • And this is what happens when you do not map the previous posts already published on the question, just to blog and blog blindly without any previous research (blogging is definitely not what it used to be). Recently posted on the Deleuze FB group: “Maybe not very original after all; this same idea was developed 8 years ago by R. Stuart Geiger. “This was my final project for an Information Studies class I took back in 2006, when I was an undergraduate at the University of Texas. Our assignment was to transform information from one form to another, and I chose to perform this analysis of Deleuze and Guattari’s A Thousand Plateaus. I scanned and OCRed the entire book and did a visual frequency representation of certain words.” Justin seems to have remade it, but with a better presentation and with some other concepts and terms included. It would be interesting to match the two maps to see the coincidences. Here are some links: the map …/milmesetastablascomp… Stuart’s original post …/words-and-things-a-de-re-sub…/ the PDF of his work his FB page the post I published in my old blog

  • Hello all, I’m Stuart Geiger. I’ll be writing this comment up into a blog post shortly, because I think it hits on some interesting issues.

    To Naxos and others: I don’t mind at all that Joque made this work, which is great! It looks much better than mine, although my rough aesthetic was intentional — I was indeed trying to replicate the look of a DNA sequence test. Anyway, I wouldn’t judge Joque too harshly for not finding my earlier version in a literature review. There is an interesting epistemological infrastructure issue here: what term do you search for to find something like this? Anything about visualization, structure, diagram, map, frequency, distribution, representation, count, etc. might work when querying for any other book (like Harry Potter), but not with ATP. All of those terms we use to indicate abstract representations of elements in a book… well, those terms are all present in the book, if not in the commentary around the book. Try a search for “map a thousand plateaus word frequency” and the top hit isn’t even this post or his blog — it is the book itself, which is actually a quite relevant hit for this query because it contains quite a bit about maps, words, and frequency. My original post is also very hard to find on Google, as text from ATP dominates every conceivable result. I think this is clearly a case of independent discovery.

    And another epistemological infrastructure issue: they look the same and operate in the same manner, but not because I think Joque saw mine and copied it. Instead, we were both probably using the same tools: the open source Natural Language Toolkit for Python, one of the most popular and powerful tools for natural language processing and visualization. Both Python and modules like NLTK operate under the ideology that as much code as possible should be written behind the scenes and imported in packages as needed, rather than being written from scratch. That’s what I love about Python: you can get a simple web server running in 15 lines of code. Of course, this black-boxing doesn’t merely make tasks easier; it supports particular kinds of approaches to problems and not others. The black boxes are already there, so why not use them?
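That web-server claim holds up using nothing but the standard library. This is a generic sketch of the idiom, not code from either project; the ephemeral port and the default file-serving handler are illustrative choices:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port; a fixed port like 8000 works too.
server = HTTPServer(("localhost", 0), SimpleHTTPRequestHandler)
print("would serve files from the current directory on port",
      server.server_address[1])
# server.serve_forever()  # uncomment to actually handle requests (blocks)
server.server_close()
```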

    If either Joque or I had started with pencil and paper and had to draw out a diagram involving the frequency of words in ATP, they would certainly look different. If our hand-written diagrams separated by 8 years looked similar, then it would be suspicious. But since we both probably loaded up the most common suite of software code for analyzing and visualizing texts and started exploring, then some similarity is to be expected.

    And one of the simplest ways to visualize text using NLTK is the dispersion plot — so simple that it is included in NLTK’s official book. The dispersion plot function, like most, is implemented so well that you just need a few lines of code to generate it. The code required to generate an essentially identical visualization (except for Sense and Sensibility, with the characters’ names) is below:

    from nltk.corpus import gutenberg
    from nltk.draw.dispersion import dispersion_plot

    words = ['Elinor', 'Marianne', 'Edward', 'Willoughby']
    dispersion_plot(gutenberg.words('austen-sense.txt'), words)

  • Steve Fuller

    Aren’t you re-inventing the wheel here?