Bush, Vannevar. “As We May Think.” The Atlantic Monthly. July 1945. Reprinted in Life magazine September 10, 1945.

Main Claims / Executive Summary

Before starting, it is interesting to note that Bush’s article was published before and reprinted after the dropping of the atomic bombs on Hiroshima and Nagasaki.

While acknowledging that professionals in the life sciences will continue to work toward better understandings and cures to improve the general human condition, Bush wonders what will become of the hard scientists involved in war-making technologies in the postwar era.  Considering the vast amount of information/knowledge accumulated in the rapid expansion of the hard sciences between WWI and the conclusion of WWII, Bush hopes future scientists will develop new information technologies capable of surmounting the silos of specialized knowledge while extending human cognition through rapid information recall.

Situating this information revolution at a particularly opportune time, Bush notes that new ways of recalling knowledge are made possible by leaps in mechanized production and interchangeability.  In the paragraphs that follow, Bush surveys existing technologies – photography, facsimile, television, microfilm, hard drives, speech-to-text and text-to-speech technologies, random-access memory – and imagines plausible improvements that extend the human capacity for information storage, replication, and recall.

Thankfully, Bush sees the need for creativity and invention in the collective creation of his technocratic utopia – and he recognizes the importance of “manipulative processes” and symbolic logic in achieving that world.  Bush also recognizes the problems with hierarchical, arboreal thinking.  By noting how humans actually make meaning associationally – much like D&G describe the rhizome – Bush anticipates theories of hypertextuality and the practice of hyperlinking in his “memex.”  He extends these ideas when describing the associative encyclopedias of the future – what we would call hyperlinked Wikipedia articles – and the “new profession of trail blazers” who account for multiplicity by unearthing new associative routes long buried under the well-worn paths of History.

In closing, Bush veers into cyborg territory: he imagines closing the spatial gap of the human-technology interface with integrated, assistive optic technologies that allow for neurological impulse recognition and subsequent physical representation.

Key Words/Phrases/Concepts


associational thinking / linking

Key Citations

“A mathematician is not a man who can readily manipulate figures; often he cannot.  He is not even a man who can readily perform the transformations of equations by the use of calculus.  He is primarily an individual who is skilled in the use of symbolic logic on a high plane, and especially he is a man of intuitive judgment in the choice of the manipulative processes he employs.”

“Selection by association, rather than indexing, may yet be mechanized.  One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage.”

“Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.”
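Bush’s contrast between rigid indexing and associative trails can be sketched in a few lines of code – a toy illustration of my own (the item names and structure are invented, not from the article):

```python
# Hierarchical indexing: each item lives in exactly one place, reached by
# tracing down from subclass to subclass along one rigid path.
index = {
    "science": {
        "physics": {"optics": "Newton's Opticks"},
        "biology": {"genetics": "Mendel's paper"},
    }
}

def lookup(path):
    """Follow the single sanctioned path; any other route fails."""
    node = index
    for key in path:
        node = node[key]
    return node

# Associative trails: any item may link to any other, and one item can
# sit on many trails at once -- no need to "emerge from the system and
# re-enter on a new path."
trails = {
    "Newton's Opticks": ["Mendel's paper", "Goethe's color theory"],
    "Mendel's paper": ["Newton's Opticks"],
}

print(lookup(["science", "physics", "optics"]))  # one fixed route in
print(trails["Newton's Opticks"])                # many routes out
```

The point of the sketch: in the dictionary hierarchy an item is findable only via its one filing path, while in the trail structure the same item participates in arbitrarily many associations.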


  1. If anything, I find it incredible that many of Bush’s future-visions have since been far surpassed.  In the age before silicon transistors and integrated circuits, a mathematical machine capable of tabulating at 100X the speed of punch machines seemed fantastic.  The $1 calculator at the dollar store now far exceeds that processing capacity.
  2. Though Bush seems to fall on the side of the associational link and against indexing, both seem important to the functioning of contemporary hypertext environments.  Obviously Bush couldn’t have anticipated the importance of both in a technology 50 years in the future.
  3. The interaction between reading and writing – via linking or “trails” – seems to draw the processes of reading and writing much closer together.  So, in a sense, Bush is talking about a new literacy.
  4. I’m not sure how to phrase this question, but the process of linking seems to put a premium on data/information.  Because the user is the author of the hyperlinked text – or maybe a bricoleur/schizoproducer is a better term – the originary structure of the information (like in a book) is no longer important.  I wonder what sort of implications this has for textual ownership/authority and authorship.


Below this line are just notes for the rest of week two’s readings.


Tim Berners-Lee “Information Management:  A Proposal”

  • Before beginning, it’s important to note that Berners-Lee invented the WWW.  Yep.  He INVENTED it.  Specifically, he specified the hypertext transfer protocol (HTTP) and layered it over the existing transmission control protocol (TCP) and the domain name system.  WOW.
  • Berners-Lee takes up Bush and Wells and attempts to put their desires to collect, retain, and recall information electronically across space into practice.  To do so, he uses a hypertext system described by Bush in his discussion of the memex.
  • Even in his graphic organizer of the project you can see a rejection of hierarchical systems and an adoption of associational thinking.  Pretty sweet.  Here’s a pic:

  • The observed working structure at CERN – what Berners-Lee calls a “web” – is the model he finds most successful at creating meaningful communication.
  • Like Bush and Wells, B-L is also interested in how to recall recorded information.
  • The logic that B-L sets up between nodes and links is associational and very loose (just like real links!).
  • Again, D&G prove insightful and forward-looking in their discussion of trees and rhizomes in Capitalism and Schizophrenia.  B-L indicts arborescent/hierarchical structures in his section called “The problem with Trees.”  He describes early Usenet newsgroups as very arborescent; in that sense, they don’t allow for direct linking.
  • Really, B-L’s problem with the tree system is one of definition.  Because definitions vary so widely, hierarchies are subjective – or doubly subjective when trying to use them.  Tough.
  • Relying on Nelson, B-L defines hypertext and hypermedia (linking of multimedia) in this piece.
  • CERN needs for a hypertextual environment:
    • Remote access across networks
    • Heterogeneity (use across OS’s).
    • Non-Centralisation – new nodes on each system should be added without a complete reconfiguration of the old system.
    • Archiving
    • Private links – the ability to customize links and nodes privately.
    • Bells and whistles – graphics!
    • Data analysis – a quick way to sift information
    • Live links – links remain static while content is dynamic.
    • Non requirements – copyright?  Nah, here at CERN it’s secondary!
  • B-L makes reference to the need for easily usable “browsers” or human-internet interfaces in this piece.
  • First client/server interface for web content distribution:

      First client/server diagram

  • The accessibility of existing data is what makes this system particularly useful.  This gets at the encyclopedia that Wells discusses and Bush’s memex.
  • The process of web content generation is exponential.  B-L says that the result “should be sufficiently attractive to use that . . . the information contained would grow past a critical threshold, so that the usefulness [of] the scheme would in turn encourage its increased use.”
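The loose node-and-link logic of the proposal can be sketched as a minimal graph in which links point to stable node names rather than to frozen content, so a “live” link keeps working while the content changes.  The node names, contents, and function names below are my own illustration, not B-L’s notation:

```python
# Minimal node/link sketch of a hypertext "web": no hierarchy required,
# and new nodes can be added without reconfiguring the old ones
# (B-L's "non-centralisation" requirement).
nodes = {}   # name -> content (content may change; the name is stable)
links = {}   # name -> set of names it points to

def add_node(name, content):
    nodes[name] = content
    links.setdefault(name, set())

def add_link(src, dst):
    # A link is just a loose association between names.
    links[src].add(dst)

add_node("memex", "Bush's imagined desk-sized associative store")
add_node("hypertext", "Nelson's term for non-sequential writing")
add_link("memex", "hypertext")

# "Live" link: updating the node's content does not break the link.
nodes["hypertext"] = "Nelson's term, revised"
assert "hypertext" in links["memex"]
```

Notice that adding a third node would touch nothing in the existing dictionaries – which is exactly the anti-arborescent property B-L wants.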

Paul Ceruzzi “The Advent of Commercial Computing, 1945-1956”

  • C. claims that the story of computing after 1945 is about how a small group opened up computing “to new markets, new applications, and a new place in the social order” (14).
  • The computer moved from a business invention, to an interactive device to augment intellect (calculator?), to a personal appliance, to a commercialized software product, to a communicative medium.
  • Computers replaced extraordinarily complex punch card mechanisms.  The core component – among many – that distinguished the computer from punch card machines is the use of electronic storage for sets of instructions and data – we commonly call this RAM today.
  • The “First Draft of a Report on the EDVAC” by John von Neumann from 1945 is considered the first document of modern computing.  The von Neumann architecture is significant because the processing and storing of data occur in two different places (think processors and memory/RAM today).  The fetch-decode-execute model of single-processor machines was central to early computer architecture.  Multi-threaded and multi-processor machines are later examples of parallel-processing designs.
  • First commercial sale of a computer was between the Eckert-Mauchly division of Remington Rand and the US Census Bureau in March 1951.
  • The UNIVAC’s processor ran at 2.25 MHz! HA!  That’s great!
  • UNIVAC was perceived as an information processing system, not a calculator; hence, it replaced “not only existing calculating machines, but also people who tended them” (30).
  • The computer signaled the end of the “push button age” because the “buttons now pushed themselves” (32).
  • Utopias enabled by computers – mentioned halfway through 33.
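The stored-program idea Ceruzzi describes can be made concrete with a toy fetch-decode-execute loop: instructions and data sit in one shared memory, and the machine simply walks through it.  The three-opcode instruction set here is invented for illustration, not taken from the EDVAC report:

```python
# A toy fetch-decode-execute loop for a stored-program machine.
memory = [
    ("LOAD", 5),   # put 5 in the accumulator
    ("ADD", 7),    # add 7 to the accumulator
    ("HALT", 0),   # stop
]
acc = 0   # accumulator
pc = 0    # program counter

while True:
    op, arg = memory[pc]   # fetch
    pc += 1
    if op == "LOAD":       # decode + execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print(acc)  # 12
```

Because the program lives in the same memory as its data, reprogramming the machine means rewriting memory rather than rewiring hardware – the “buttons now push themselves.”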

Licklider & Taylor – “The Computer as a Communication Device” April 1968

  • This article argues for an interpretation of information as living – not simply passive but constructed in the process of engaging with it – in other words, we bring something to information simply by our interaction with it.
  • They call this the “creative aspect of communication”: it happens when minds interact, and it is made possible by the computer, which allows both access to information resources and the processes for making use of said resources.
  • The authors define communication as “cooperative modeling” in that effective and successful communication relies on models (chartsengrafs) to convey meaning and focus discussion.
  • The authors point to the subjectivity of data when they note that “To each participant [in a research project], his own collections of data are interesting and important in and of themselves. . . . They are strongly influenced by insight, subjective feelings, and educated guesses.”  A bit of the rhetoric of communication here perhaps?
  • The F2F-through-a-computer section discusses a presentation of text, image, research results, etc., integrated into a televisual output.  So, like a screencast of sorts.  This allowed for a couple of new things – namely, greater depth was possible through the computer display of chartsengrafs.
  • Group decision making is the result of this new form of communication.  This communication will occur – according to the authors – over phone lines rather than on tape drives mailed via the postal service.
  • The authors note the problems in infrastructure that prevent widespread internet telephony at the time.  Problems include cost and speed of data transfer (some things never change).
  • The “supercommunity” is really the internet for the authors.  It means a densely woven network of users across geographic space.
  • Group collaborative work – on the order of Google Docs – is anticipated in the section entitled “Message Processing.”  File permissions – or “keys” – are also discussed in this section.
  • On-line communities are discussed in the piece – they are identified as geographically disparate users collected in virtual space because of a common interest.
  • Computers as integrated office machines are envisioned toward the end of this work – this complete integration into a single piece of technology is the future of the computer according to the authors.  It will perform many, many tasks.
  • The OLIVER in this article seems to anticipate cookies and “learning” computer technologies (like the “you should buy” recommendations from Google) . . . this is really a discussion of browsing habits and the data associated with those habits toward sales/marketing/capitalist ends.
  • The authors argue that life will be better because: 1) people will naturally affiliate with folks with whom they share common interests; 2) communication will be more effective and enjoyable; 3) communication will be highly responsive, supplementary to one’s own abilities, and capable of representing more complex ideas without complete restructuring of existing systems.
  • In the end, the authors wonder whether having an internet connection will be a privilege or a right in the future.  A great question!  According to the authors, if only a small segment of the population gets to interact with the net, then the network might further social inequality; however, if used toward educational ends, it would be positive for humanity because unemployment would disappear (OK, so a little optimistic!).

5 Responses to “CCR760 – As We May Have Thought: Dreaming Technological Hypertextuality”

  1. Missy

    Ha! I love how we both pulled the same quote out from Licklider and Taylor, relating it to a rhetorician’s view of communication. Great minds :).

    Also, I was interested in your question posed about how hyperlinked texts allow the user/reader to author the construction/organization/presentation of material. This further complicated Barthes’ claim in “The Death of the Author” that readers’ social, historical, and cultural contexts significantly affect the reading of texts and make the reader a much more influential agent in communication that once acknowledged. If we are authors of our own reading experiences, how might the texts original authors and website designers work to gain more agency in how texts are read? –or, Should they gain this agency?

  2. Missy

    Oops! A few errors:

    *complicated: complicates
    *that once acknowledged: than once acknowledged
    *the texts original: the text’s original

  3. Luce

    It’s really interesting to see the readings translated through a frame accustomed and comfortable with doing so; or rather, of orientating it to other technological know-how. You pulled a lot of stuff from the readings that I didn’t glean and also related them to aspects of my computing that I ignored or took for granted pre-your-blog-post.

    I’m curious if you see the type of “creative aspect of communication” happening in Licklider & Taylor as organic learning?

    Also, do you see Licklider & Taylor as a good representative example of collaboration without tying it to feminism (as per Anna’s comment last class)? There is a strong repetition of collaboration and I’m curious how you orientated that for yourself…

  4. Anna

    I like the comment you made in #3 of your questions/challenges/observations section. It’s interesting to think about hypertext bringing writing and reading together in a new way to form a new kind of literacy. I feel like so much of what I’ve read about hypertext in the past has really only emphasized the way hypertext disrupts linear patterns of reading and, to a lesser extent, linear patterns of composing with a particular concern for the way hypertext disrupts relationships of power between reader and writer–in other words, the stuff I’ve read in the past has treated the reading and writing of hypertext separately, rather than considering the dynamic relationship between them. And while I don’t feel like I have any good way of articulating this thought at the moment, this all reminds me of Slack et al’s piece from last week about different communication theories. Maybe there’s some connection there that might also relate to your last question about authorship?

  5. justin

    @Amber – I’m not sure what I would call Licklider & Taylor’s creative aspect of communication. I suppose it’s as organic as it can be . . . maybe organocyborg learning? I tend to shy away from organic as a name because of the work being done in “organic computing” – sensors and automated systems that act as natural mechanisms in computer interfaces. It sounds crazy, but I think some Japanese research institutes are working toward these sorts of self-learning computer systems . . . but how do you program stimulus response and adaptation? AI is so hard to wrap my head around.

    @Anna – I think that I may have oversimplified the idea of hypertextual reading as a – to steal an idea from Barthes – process of creating writerly texts. I do think the dialogic process of meaning making enabled by hypertext is promising, interesting, and disruptive to traditional authorial power. . . but it’s replaced by another form of authorial power I guess, even if it’s the wreader’s own (did that even make sense? sorry!)
    I think the thing that I found most interesting about Slack’s piece is wrapped up in this idea of power and fixing multiplicity. Slack seems to argue that power manages to stabilize the free play of signifiers (bad language, sorry) or at least the subjectivities that occur in the multiple. So, in a sense, authorial power is reconfigured toward the reader in the process of reading . . . but the code still fixes the meaning. Someone still made the link intentionally and – despite ideas to the contrary (liberating, wonderful, a way to escape the linear text) – you’re still following predetermined paths with traditional linking. There are web browser plug-ins that allow hypertextual searching over any word, and I suppose this might change that fixed articulation . . . but I don’t know? Anyhow, I don’t think that made much sense! Sorry!

    @Missy: I’m not sure about how to negotiate the question. I think this is a confusing issue – should we respect the primacy of authorial intention or use our new found (but hypertextually determined) power to pawn the text. . . it’s really confusing! 🙂
