What follows is a short book detailing the mechanisms by which computers have thwarted our sense of reality and children’s sense of embodiment, with receipts. The narrative centers Terry A. Davis, creator of TempleOS, as self-reporting on the effects of cyberspace on children; cyberspace as designed and implemented in order to sell computers to adults.
Peekaboo! ICQ!
Peekaboo is a game we play with infants in order for them to learn what child psychologist Jean Piaget termed object permanence.
A world composed of permanent objects constitutes not only a spatial universe but also a world obeying the principle of causality in the form of relationships between things, and regulated in time, without continuous annihilations or resurrections. Hence it is a universe both stable and external, relatively distinct from the internal world and one in which the subject places himself as one particular term among all the other terms. (Piaget, The Construction of Reality in the Child)
In a world full of sensible, physical objects, one’s every waking moment experiencing the world teaches one that there are rules to reality. As our bodies interact with all the things large enough to see and feel, loud enough to hear, and pungent enough to smell and taste, we are perpetually affirming physical constants. Heavy things strain our muscles. Liquids maintain their volume, regardless of shape. Fluids swirl about and resettle according to density. And, central to all this, is the fact that things don’t stop existing just because you aren’t looking at them right now.
Whether object permanence is learned during infancy or is innate is beside the point. The experience of embodied life in the physical world, as we crawl and flail, stumble and grab, walk and carry, affirms the dynamics of physicality. Long before we learn the mathematics to weigh and measure and calculate speed or mass or trajectory, we have developed a robust reliance on intuitions about physics. At least, physics at the human scale—the scale which is directly proportioned to our bodies. We might not be able to guess the weight of mountains, the distance to the sun, or the speed of a hardball thrown by a professional pitcher, but we can largely anticipate all the phenomena we are familiar with.
And culturally, since Piaget’s ideas about object permanence became central to our appreciation of early childhood development, the game of peekaboo has become its prime cultural symbol.
Objects at Rest
Peekaboo is an extremely simple game played with infant children. It’s more like a trick played on them, actually. Someone older than them merely looks the infant in the face, establishing eye-contact. Once a friendly feeling of personal connection is established, the older person suddenly blocks the line-of-sight with their hands, apparently disappearing to the infant. The infant gets confused, having suddenly lost the sensation of empathy with an other being. Where did they go? What happened?
And then, magically, the older person reappears by removing their hands and unblocking their face, declaring “peekaboo! I see you!” at the same time. Usually the infant is shocked, or laughs in happy surprise at the return of social connection with the friendly face.
That’s it. That’s the game.
Eventually, infants grow bored of peekaboo once they learn the trick. And that’s the point, as it were. To learn the trick is to have grown, matured, developed, and now be ready to play more complex games. The trick in peekaboo is that things persist. And mom or dad or whoever didn’t disappear or abandon baby just because baby can’t see them for a few seconds.
While they will be centrally relevant to our subject later on, philosophical treatises about the impermanence of all things are not relevant now to understanding the intended lesson of peekaboo. Wyndham Lewis spilled hundreds of thousands of words (Lewis 1926, 1927) fighting against the retorts of “everything is change!” and “nothing lasts forever!” which were blurted out every time he insisted that static conceptions of space are important to sanity and civilized identity. If you’re inclined toward the same Bergsonian or Heraclitean line of thought, you’re free to replace the word “permanent” with “persistent.” Perhaps a better way to reconcile static space and fluxing time, however, lies in physics.
The first clause of Isaac Newton’s first law of motion reads “Corpus omne perseverare in statu suo quiescendi…” Published in 1687, it translates to “Every body perseveres in its state of rest…” Newton’s physics was the Western world’s central theory of physics for centuries. Why? Because it was good enough to be taken for granted as universally true. In fact, it was so good that you’d have to have been crazy—or have been Albert Einstein—to question it.
Newton’s physics is overwhelmingly obviously true. True, in fact, in virtually all cases that virtually all sober, sane people experience while awake for virtually their entire lives. It provides for us “a world obeying the principle of causality in the form of relationships between things, and regulated in time…,” to return again to our opening Piaget quote.
Newton’s physics, I might argue on these Piagetian grounds, is reality. At least as much as we can hope for as social beings in a world after the destruction of positivist science. The handful of people who have historically dedicated their lives to participating in multi-generational experiments to record and predict celestial mechanics are, of course, the exceptions which prove this rule. Most people are not involved in such endeavours as meticulously measuring and recording the motion of planets, and so most people don’t notice when Newton’s physics fails to apply.
The fit of our embodied selves to the external physical world is founded on the habituated experience of the general truth of the world captured within Newton’s system of physics. It’s right in the name: systems of “physics” describe physical things. Hockey pucks and limbs, cannonballs and waterfalls, galaxies and self-important gnats and everything in between. “Peekaboo!” Daddy’s still there.
The dynamics expressed in Newton’s system of physics hold true for virtually all of our experience with the various material objects lying about us. We take it entirely for granted that it is so. It is, in fact, so taken for granted that it is weird for me to point it out. I’m practically wasting your time spelling it out as deliberately as I am. Yet it is necessary to do so.
When the World Breaks
I do so to make the case that many of the “events” in our lives, the things worth noticing and talking about, entail the exceptional cases to this fundamental state of being. In our normal state, we safely ignore everything that is boring. What’s boring is what’s constant and unchanging and uninteresting and mundanely true ninety-nine times out of one hundred. But in those rare moments when books are suddenly leaping from my bookshelf, or my wallet has magically ceased its material existence in my coat-pocket, I must first be quite sure that I’m not experiencing a violation in the laws of physics.
Even the most rational person sometimes needs to re-establish the laws of physics in the moments after experiencing a sufficiently surprising situation. Everyone occasionally must reassure themselves that the universe itself is not breaking down. That the fundamentals of reality have not undergone cosmic revision.
In the case of flying books, or my dematerialized wallet, I can usually be quite certain that I am not experiencing “magick,” if you like. What’s more likely is that an earthquake is underway, or that my house is otherwise in some state of collapse. Or, in the second case, that I’m the victim of either my own absent-mindedness or someone else’s sticky fingers.
Breakdowns of our expectations occur when our sense of reality is tricked, momentarily, into thinking that the impossible has happened. But exceptions to our steady and constant experience of inviolable physical truths, such as object permanence, are not contradictions to reality. They are the perceptual signs of emerging situations requiring our attention. When apparent violations occur of what our bodies have been trained, since birth, to know is inviolably true, our conscious mind is forced to step in and regain an understanding of what’s going on. In the worst case, these breakdowns become the very thing Piaget, above, claimed object permanence protects us against: “continuous annihilations or resurrections.”
In a morbid turn from Marshall McLuhan’s maneuver of turning “breakdowns into breakthroughs,” twenty-first century observers notice only annihilation in the breakdowns of our age of meaning. For instance, Vervaeke, Mastropietro, and Miscevic see only Zombies in Western Culture (2017). Nevertheless, in moments of breakdown, we are driven—sometimes against all hope—to attempt to regain control and reaffirm the constancy of reality, in order to reconcile everything back into coherency. If someone left your cake out in the rain, you must invent the universe to have the recipe again. Oh no.
Abstract and Falsely-Concrete Bases
One under-examined area which may explain society’s contemporary breakdowns is our relation to media. We’ll let one example, among many, suffice.
There exists, within our intuitively grasped and socially constructed understanding of our physical world today, a hole the size of an early mainframe computer. You could drive an entire IBM System/360 Model 91 through this hole, if the Model 91 were an Apollo rocket on an oversized flatbed truck and not an oversized computer delivered to NASA in 1967.
This [w]hole, or gap, is papered over by what today is bandied about as the “virtual,” or virtuality. I could trace the thread of illusions and confusions which media plays upon our senses, but I’d rather cut straight to the answers as directly as possible. The goal of this piece is to direct a single bee in a bee-line straight through the hands of peekaboo, lifting them to reveal the faces beneath. That’s where the honey is: the human faces of our technological history.
These faces stand in contrast to the conceptual and abstracted bases upon which cultural theories and psychologies have attempted to premise our human understanding. Structuralism, mythological archetypes, and cognitive models which render our brains into information processors all tend toward hive-mind conceptions of psychology, given their abstraction and generality. I’d like to side-step these bases without much comment, in order to flesh out the nature of our human-crafted environment and the many histories which converge within its ecology. We will learn to see our material world even when it is cluttered over with abstract ideas without faces or stories!
There’s no single entrance to a hive, so let’s jump in anywhere!
Pouring the Knowledge In
In The Nurnberg Funnel, John M. Carroll summarizes ten years of research he performed at IBM in computer interface design and user training throughout the 1980s.
Computer usage was exploding, and software companies which had formerly produced software for a limited number of business and scientific users were now expected to service the needs of tens of thousands of people at every level of an institution. Home users, even! On average, it took two phone-calls to a 1-800 customer support line to wipe out the profits made upon the sale of that software to the customer in question. In order to stay profitable amidst an explosion of popularity, software companies needed an entirely new paradigm for training new users at scale.
After observing dozens of new computer users struggling to learn software, Carroll and his team reached a diagnosis of the problem. Computer manuals, he says, are too boring. So nobody reads them. He opens The Nurnberg Funnel with these early studies, performed on users learning both the text-based word-processor program IBM DisplayWriter and Apple’s graphical interface for the Lisa desktop computer. New users tended to jump around, miss tutorial steps, and frequently mess up the state of the computer, rendering documentation useless. They’d rather play around and experiment straight away than read several chapters of preparatory concepts and terms in a fat bound book. Hell, if a manual included an instruction such as “Read this entire page before following the next step!”, users would ignore it more often than not, and get completely lost.
The solution Carroll and his team of researchers arrived at was inspired largely by video games. Namely, the text-based video game Adventure, a well known relic of computer history. Why is it, asked Carroll, that people would spend hours trying to learn how to play and win a video game, but would give up after a few minutes trying to learn word-processing software? The answer lay in what we’ll call an element of play. People’s bad habits of ignoring instruction and just experimenting with software could be turned into a strength if documentation was written to facilitate such open-ended exploration.
The title of The Nurnberg Funnel is actually something of an anti-title. The “legendary Funnel of Nurnberg [was] said to make people wise very quickly. The right knowledge could be poured in, as it were. Sad to say, there is no Nurnberg Funnel. Indeed, given much of what we understand about human intelligence and learning, the very idea of ‘pouring’ material into the human mind seems ill-conceived.” The Nurnberg Funnel, then, is another metaphor for the Shannon–Weaver model of communication, wherein messages “flow” from transmitter to receiver.
Making Sense of Minimalism
From this point we might depart into a continued meditation on “pouring” information by considering the notion of “instructionism,” against which Piaget’s colleague Seymour Papert developed his computerized classroom pedagogy of constructionism. Where both Piaget and, subsequently, Papert drew on structuralist, mathematical principles for cognition, McLuhan’s annotations in Structuralism, a primer by Piaget, provide a passageway further into McLuhan’s thoughts on cliché and archetype, figure/ground, and center/margin associations. Taking a different tack, we might also chase metaphors of information fluidity in the naive ways in which contemporary language reifies information into a physical substance which flows about the world on a wire (or Welt am Draht, if you prefer) (Galouye, 1964; Fassbinder, 1973; Rusnak, 1999). But let’s stick with Carroll for now.
He remarks that, unlike children, adults are already experts in many domains before they first encounter a computer. The Nurnberg Funnel was written in 1990, when computers were still a rare sight in most settings; an exotic object only encountered by a still-growing minority of occupations and classes. When encountering a high-level computer interface for the first time, adults brought a great deal of pre-existing knowledge to the work of making sense of the system and its metaphors. What they needed, then, were hints and solutions to common problems—not long introductory readings on theory that come before ever touching the machine, or tedious step-by-step instructions.
The approach toward documentation developed by Carroll’s research team was called “minimalism,” named so for the method’s abandonment of detailed explanations and structures of systematic instruction. Computer manuals, he said, should sacrifice comprehensive coverage in favour of presenting a series of simple missions or tasks reflecting the sorts of things users might want to do with the software they’re learning. The documentation should also be non-linear—the systems-style approach of beginning with fundamentals, and then building upon those toward more advanced tasks, should be abandoned. End users, at scale, did not have the discipline or time to follow along with rigorously structured education. They’re going to want to start playing around straight away—all a computer manual can hope to accomplish is to toss them a few clues, like a video game hint-book. It should be structured to facilitate that style of experimentation and immediate use.
He began confirming this approach by developing decks of helpful cards which offered hints for the sorts of tasks users most likely wanted to accomplish, along with ways out of the most common problems.
Another approach deployed was the creation of a simplified interface with fewer options, in order to give beginners fewer chances to make mistakes or get lost. Every contemporary video game opens with such tutorial missions, only slowly enabling features as players get comfortable with the basics of game play.
The work by Carroll and his team at IBM is among the most cited in computer interface instruction guides. His name recurs often in the bibliographies not only of IBM user interface design guides, but also those of Apple Computer. We can certainly credit minimalism, in large part, for the present state and style of all computer software design. Especially when we are encouraged, with only a few hints, to hop in and start playing around with a new piece of software. We all live in the digital world shaped by John Carroll’s research at IBM.
Carroll, in turn, frequently cites Jean Piaget, among many other educational scholars like John Dewey and Jerome Bruner. From such work, Carroll generalizes the “paradox of sense-making” as the issue to be solved.
It is surprising how poorly the elegant scheme of systems-style instructional design actually works, as detailed in chapters 2 and 3 [which covered users struggling to learn IBM DisplayWriter and LisaPlan on the Apple Lisa]. Everything is laid out for the learner. All that needs to be done is to follow the steps. But as it turns out, this may be both too much and too little to ask of people. The problem is not that people cannot follow simple steps; it is that they do not. People are thrown into action they can understand only through the effectiveness of their actions in the world… In a word, they are already too busy learning to make much use of the instruction. This is the paradox of sense-making… (Carroll, 74)
A paradox is a problem utterly refractory to a simple, comprehensive, logical treatment. Human learning is in this sense paradoxical. No simple, comprehensive, logical treatment of the paradox of sense-making is possible. The tension between personally meaningful interaction and guidance by a structured curriculum entails a priori limitations on how much we can ever accelerate learning. It entails a sort of Heisenberg Uncertainty Principle of instruction: one cannot design training that is both usable and comprehensive… (Carroll 93)
There is no instantaneous step across the paradox of sense-making. The project of minimalist instruction is to make a direct attempt to narrow this step, indeed to try and exploit the sense-making capabilities and propensities people bring to a learning situation, in order to produce more efficient learning. (Carroll 102)
The use of the words “exploit” and “efficient” should give us pause. The problem driving this research in the ’80s was the sudden need to train adult office-workers at scale, hundreds of thousands of whom were encountering desktop computers for the first time in their professional and private lives. What was efficient in achieving the transition to the computerized office, however, has had unforeseen ramifications down the line.
These office workers in the ’80s did not encounter make-believe objects behind a screen in infancy—they came to such a miraculous illusion having already developed their relation to material, physical space in childhood. (And they watched television and movies, but that’s outside of our scope except to quote McLuhan in Understanding Media: “Indeed, the world of the movie that was prepared by the photograph has become synonymous with illusion and fantasy, turning society into what Joyce called an ‘all night newsery reel,’ that substitutes a ‘reel’ world for reality.” Computers have to be understood within this greater context—John Carroll’s first book was on the semiotics of film.) To understand the gap being leaped by adults, and not by children, we’ll have to go a little deeper down the stack.
OOPsy Daisy!
A study on learning computers by Campbell, Brown, and DiBello in the early ’80s dealt not with end-users, but with experienced computer programmers. Using “Piagetian structural interviews and longitudinal tape diaries” they tracked the experts’ progress in learning a new programming language, Smalltalk. To be more accurate, what they really studied was the entry of these experts into an entirely new computer programming paradigm.
Smalltalk was developed at Xerox PARC, contemporaneously with the graphical user interface design which was famously copied by Apple for the Lisa and Macintosh computer lines. The object-oriented programming paradigm it popularized represented a radical break from programming methods of the past. In fact, nearly everyone today “in tech,” writing and editing screens full of text and symbols in Python or JavaScript or CSS or HTML, is not a computer programmer—at least not in the sense in which the participants in this study styled themselves. The phrase “learn to code” is an empty marketing hook for mall colleges offering certifications matched to job postings; this story we are telling here takes place long before the dawn of that time, but in some ways marks its beginning.
The participants were learning Smalltalk, the object-oriented computer language used to build the uniform point-and-click interface at Xerox PARC which inspired Steve Jobs and the Macintosh. Object-oriented programming provided an answer central to the needs of a field called “software engineering,” which sought to standardize software production.
When computers first went on the commercial market, they came with languages like FORTRAN, BASIC, and COBOL. Corporate customers would then hire programmers and develop their own software in-house. With the introduction of the System/360 line in 1964, IBM offered a compatible family of machines and languages like APL, allowing programs to be moved between computers and hardware to be upgraded with relative ease. Nevertheless, the field of “software engineering” was inaugurated in the late 1960s to address “the software crisis”: problems of standardization, quality, and maintenance of software. The production of home-grown, incompatible, and non-standardized software had grown completely unwieldy.
With the right “engineering,” and the re-use of standard parts, software programs could be planned in advance and assembled the way buildings are, or commodities produced on assembly lines. The dream of “software engineering” was precisely this ease of mass-production, and the elimination of reliance on specific programmer expertise or the monopolization of knowledge by the human creators. Ford, after all, isn’t going to shut down and lose expert knowledge of how its cars are made just because workers on the assembly line retire or quit—why should software be any different?
What the programmers in the study encountered in Smalltalk was a requirement to abandon attempts to understand what their code was really doing. Understanding what was going on wasn’t necessary, only understanding the OOP-way of doing things, and then doing things that way.
Being a programmer, I have this burden of trying to understand all the bits and bytes. That’s my disadvantage, that I was dealing with languages that are, compared to Smalltalk, lower level languages. I hope somebody you randomly catch on the street, who doesn’t know programming, would actually have less of a headache trying to understand. I guess most people would just, in that case, skip getting into details trying to understand… I guess I would have to get used to the different kind of work that Smalltalk demands… You don’t have to understand all the stuff that you are using.
He’s complaining that he doesn’t need to understand what he’s doing to program in Smalltalk? Yup. This is because, in Smalltalk, the programmer was piecing together their programs from a massive collection of ready-made libraries of objects, or classes. All the programmer had to do was design the logic of how instances of those objects, or derivatives of those objects, were to be called into existence and how those objects would communicate, or message one another. The learning curve wasn’t even in learning the language syntax—it was more so in just navigating the sheer volume of abstract, magical “objects” one had to work with and evoke by name.
I find I make a lot of depth-first regressions through the code. I’ll be looking at a method and I try to look at things that are new that I haven’t seen… Sometimes that takes me through several layers deep … The main frustration in trying to do that is just locating where the methods are… You can guess fairly easily what the class of an object is that is receiving a message. But when you go there in the Class Hierarchy Browser, and the method isn’t there, you have to look through the superclass chain in order to find it, and that can be pretty frustrating.
I guess the reason I’m using this depth-first approach is exactly because I feel that I know most of the syntax. So at this point… it is a matter of learning the whole hierarchy, and what’s there, and what the classes do, and then the details of what the methods are, and the protocols for the classes… It makes more sense to start with something concrete, so that … it sort of builds a more natural understanding of what the things are doing.
Traditional procedural programming, as in APL and C, entailed learning how the computer worked, and then stepping the computer through procedures line by line. Higher-level languages like FORTRAN, COBOL, and BASIC emulated this process, but with the rough edges smoothed away.
Smalltalk, by contrast, was incantatory. Alchemical. The fundamental units with which one worked were not algebraic variables encoded within the computer memory—they were fully-formed complex entities one declared into existence within an imaginary space totally removed from the materiality of the computer. The sensation of coding within the object-oriented paradigm is the programming equivalent to the user experience of the modern GUI desktop full of objects and icons and surfaces, in emulation of the physical world, which one can click and press and drag and drop. It’s what Sherry Turkle calls the Macintosh Aesthetic.
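To make that contrast concrete, here is a minimal sketch of the two stances. It is written in Python rather than Smalltalk, and the Statistician class is a made-up stand-in for the kind of ready-made library object a Smalltalk programmer would evoke by name; nothing here is drawn from the study itself.

```python
# Procedural style: the programmer walks the machine through the work,
# one explicit step at a time, and can account for every value along the way.
def average_procedural(numbers):
    total = 0
    count = 0
    for n in numbers:
        total = total + n
        count = count + 1
    return total / count

# Object-oriented style: the programmer conjures an instance of a ready-made
# class and sends it a message, trusting the object to do whatever it does.
class Statistician:
    """A stand-in for a library 'object' whose internals the caller never reads."""

    def __init__(self, numbers):
        self.numbers = list(numbers)

    def average(self):  # a "message" the object knows how to answer
        return sum(self.numbers) / len(self.numbers)

print(average_procedural([3, 5, 7]))      # 5.0
print(Statistician([3, 5, 7]).average())  # 5.0, same answer, different stance
```

Both produce the same answer; what changes is the programmer’s stance toward the machine.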
The difference is that programmers are not spared the documentation, or the trials of ease-of-use, the way users are. They’d constantly have to run experiments to learn how the system worked, rather than simply knowing how the computer worked. And those experiments would often end in even deeper confusion.
I completely change the content of array a, but I didn’t change the values of pointers b and c. So this == should still return true because we have again just one object involved, and all these pointers point to the same thing. Instead, it returns false, so I don’t know what to think about it.
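In Smalltalk, == asks whether two expressions name the very same object (identity), while = asks whether their values match. We cannot see this participant’s actual code, but a minimal Python analogue of the trap he seems to describe looks like this, with Python’s `is` standing in for Smalltalk’s `==`; the variable names follow the quote, and everything else is illustrative.

```python
# Two extra names bound to one and the same array object.
a = [1, 2, 3]
b = a
c = a

# Mutating the array in place changes its contents but not its identity:
a[:] = [9, 9, 9]
print(a is b, b is c)  # True True  (still one object, so an identity test holds)

# Rebinding the name, however, points it at a brand-new object:
a = [7, 7, 7]
print(a is b)          # False  (identity broken, though nothing visibly "broke")
print(b, c)            # [9, 9, 9] [9, 9, 9]  (the old array lives on)
```

Whether an assignment mutates the old object or quietly conjures a new one is precisely the sort of detail the object world decides for you.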
Figuring out what the computer was actually doing, then, was largely a waste of time, compared to just doing the work and trusting the objects to do whatever they do. Today, “coding” involves learning how to string together many different pre-made libraries and objects and frameworks, each with their own mountain of documentation and examples to copy and paste from. It’s a Frankenstein stitch-up of the Smalltalk paradigm—and it made no sense to adults who already knew how computers worked. So what does it mean that it intuitively does make sense to young people who don’t, and who learn the object oriented paradigm first?
Tentatively, we ought to agree that it means that even at the low level of programming, beneath the graphical interface, there is a magical world of objects in cyberspace lying behind the glass of the computer screen. Not a “computer” in any sense that the subjects of this study appreciated.
On top of Spaghetti…
Piaget tested the infant’s development of object permanence by covering their current play-thing—a toy, or sometimes a matchbook or Piaget’s pocket watch—with a blanket. Until the child learned to lift the blanket to find the play-thing, Piaget could assume that the child had not yet developed an understanding of the object’s continued existence. It had rather, for them, disappeared from the universe.
The first exceptions to this naive understanding of the world occurred when objects dropped out of sight. Sometimes the child’s eyes would follow along the direction the object had travelled, until it disappeared, and then move to try and see where it had gone—say, over the edge of their bassinet. It is easier for a child to extrapolate where an object might have gone when it is moving than when it has passively been covered by something else. Both cases, however, are exploratory gestures, an attempt to get back in sensory contact with something which only just disappeared. They depend on a belief that there is something beyond the horizon of their senses, which one can get in touch with through one’s own efforts and work. It is an act of imagination.
If the adult can lend the quality of objects to bodies whose trajectory he does not know or to bodies he has seen for only a moment, it is by analogy with others of whose displacements he is already aware, whether these are absolute or related to the movements of the body itself. But, sooner or later, representation and deduction enter into this knowledge. With regard to the baby at the fifth stage, to the extent that he does not know how to imagine or to deduce the invisible displacements of bodies he remains incapable of perceiving these bodies as objects truly independent of the self. A world in which only perceived movements are regulated is neither stable nor dissociated from the self; it is a world of still chaotic potentialities whose organization begins only in the subject’s presence. Outside the perceptual field and the beginnings of objectivity which are constituted by the organization of perceived movements, the elements of such a universe are not objects but realities at the disposal of action and consciousness.—Piaget, The Construction of Reality in the Child
The seasoned computer programmers had difficulty giving up their sense of being in touch with the actual computer while learning Smalltalk. They would spend hours trying to dig through the code or the documentation to get in touch with what was really going on beneath the high-level imaginary mechanics of the object-oriented paradigm. The paradigm had lifted their work away from the material world within which their entire professional lives had been spent operating. True, portable languages like C and APL had abstracted away many of the particularities of specific machines, allowing them to write code at a remove from any particular machine. But at least they were still programming a computer in C, albeit an abstracted, generic one.
The GUI covered up the low level, hiding something which was unsightly and aesthetically unpleasing. But those who grew up since the dawn of the GUI have, for the most part, never grappled in any meaningful way with—never learned an embodied relation to—those low levels at all. They don’t exist within the universe of these young generations at all. Don Norman wrote The Invisible Computer while working at Apple in 1998. Since then, his designs have become ubiquitous. The computer isn’t an object to deal with (or an object to think with). It was hidden away before they were born, never to be uncovered.
If Not Computers, What Are They?
Car metaphors are always useful in these matters. Each specific car is also a generic car in the abstract. All cars have steering wheels, pedals, etc. In the eyes of the law, one acquires a license to drive all cars by learning to drive just one. And every time you drive, you are developing a tactile sense of the road through the car’s extensions of your embodied sensibility through its wheel, pedals, and seat. In both your specific car, and across all the different models of car you have driven, you experience the potential energies released by the pressures you apply to the wheel and pedals. All wheels and pedals—powered or not—are felt to correspond to material, electro-mechanical systems which you develop a feel for, and over time experience an embodied merger with.
A mere passenger, to the contrary, is in a far more helplessly abstract relation to the vehicle. In Object Oriented Programming, the programmer becomes a passenger to their own computer. A vehicular passenger’s interface with the car is their dialogue with the driver. The passenger’s sense of control is to be on good terms with them, such that they end up where they want to be in the end. What is consulted is the expertise of the driver—“do you know where you’re going? Why are you driving so fast? Why are you going this way?” The control over the car, then, is entirely performative—a social relation.
The development of easy-to-use computers is, likewise, a social relation between designers and end-users. What goes right or wrong is entirely within the artistic design of the high-level interface as it communicates its affordances (a term popularized by Don Norman) and inner logic through the human interface of input and output devices. Users are asking themselves “who wrote this? Can I trust my data to these people? Did my document save, or will I lose everything when I turn this off? Did my email go through?” and the answers to these questions lie in how well the software communicates these facts, not within the end-user’s ability to check for themselves. As Sherry Turkle wrote in Life on the Screen,
The simulated desktop that the Macintosh presented came to be far more than a user-friendly gimmick for marketing computers to the inexperienced. It also introduced a way of thinking that put a premium on surface manipulation and working in ignorance of the underlying mechanism. Even the fact that a Macintosh came in a case that users could not open without a special tool (a tool which I was told was only available to authorized dealers) communicated the message. The desktop’s interactive objects, its anthropomorphized dialogue boxes in which the computer “spoke” to its user—these developments all pointed to a new kind of experience in which people do not so much command machines as enter into conversations with them. People were encouraged to interact with technology in something resembling the way they interact with other people. We project complexity onto people; the Macintosh design encouraged the projection of complexity onto the machine.
The swapping of one’s direct embodied relation to the material world of objects, in exchange for a social relation, characterizes the means by which computers became so ubiquitous in our lives. Furthermore, the way in which computers’ high-level interfaces present themselves as virtualized worlds of pseudo-physical objects for our embodied manipulation drags the transformation one step further. Not only are we out of touch with the physical machine, relying upon the intercession of specialists to deal with it for us; that which we are in touch with and learning is of an entirely illusory, yet ever-more convincing, materiality. And even that materiality is slipping away as interfaces move away from ’80s and ’90s conceptions of desktop applications toward the social relations of digital agents and AI voice assistants.
Peekaboo is both a social relation with the parent and a direct material relation between faces and bodies. Parents are gauging the emotional state of the child while playing. They are detecting the confusion in the loss of the parent’s visage, and the joy in its reappearance. The hands of the parent are manipulating the line-of-sight—the child’s eyes are fixated on the parent’s face and also searching the space where it last was when obscured. After this sense of trust in the parent develops, what follows is the child’s ability to use their own hands and eyes to uncover what is covered. To push aside the hands, as it were, which are covering what one knows or believes lays behind. The permanence of the world outside develops, then, from a social situation into a situation of embodiment and direct material control.
Personally, I don’t want to be in a social relationship with my computer operating system. Nor do I want to rely on a social relationship with a technician who repairs my computer for me. I only feel secure, and in touch with my material world, by understanding and fixing and using and programming my own computer. That is the precondition for my continued sensation of the permanence of my material world, beneath the shrouds of abstraction which always threaten to distance me from it. Only from this sense of security am I then free to willingly enter into new trustworthy social relations in new domains of personal development. Self-sufficiency, in this sense, is not derived out of political ideology, but basic existence as a dignified, embodied human subject. It’s not anti-social, but a developmental prerequisite for growth; both social growth and the growth of proficiency and competence over the material affairs for which I am personally responsible.
Life on the Modding Scene
In Life on the Screen, Turkle makes a differentiation between hackers, hobbyists, and users. This makes sense for her project and for the time in which she’s working—she needs a categorization for the different “aesthetics” of human-computer relations. She says hobbyists care about the low level, and hackers care about building crazy, unwieldy, high-level things. This makes sense at a time when the low level is still the ugly thing which is always getting in the way, hard to hide.
But for the past 20 years the low level has vanished. It’s not an elephant in the room, as was the early home computer on the dining-room table. It’s not a massive stain on the floor you’re laying a rug over top of. It’s actually just gone. It is now Norman’s Invisible Computer. No more blue screens. No more hard drive thrashing, tick tick tick, as you max out your RAM and begin violently swapping data. Not even the spinning up of anything physical besides, perhaps, a fan. No more text-editing configuration files, or entering port assignments or anything technical—just user names and passwords. No more strange technical words to learn—everything is an icon and a gesture now, for ease of internationalization.
Hobbyists have moved to Arduinos and GNU/Linux. And, on the flip side, they’ve become video game modders, creating new virtual environments for play out of existing works by adding and changing skins, levels, and rule sets. Hackers today work, as far as I can tell, to counteract artificial barriers such as proprietary drivers and firmware, copy-protection schemes, and closed protocols and standards. In other words, they are working down the stack, not building up. The drama of exploring what was possible and new with computers gave way in the ’90s to the challenge of confronting the forces which thwarted the dreams of computer and information freedom. Hacking as a creative force, exploring potential, created the so-called utopian dream which has long since given way to cynicism. And, so, hacking has become remedial and corrective—a fight to get back to ground, back to reality.
The image of the hacker in McKenzie Wark’s A Hacker Manifesto (as in paragraphs 17–20, 74, and 75) is precisely wrong insofar as it posits that the hacker works toward ever-increasing levels of abstraction. Up may certainly be a direction one can hack, but it is not the only one. As I argued in the close to my speech to the Free Software Foundation at LibrePlanet 2023, the hacker today predominantly hacks down the layers of abstraction, back to material reality. In doing so, the hacker digs up means to potentially bring culture back to ground. The hacker reveals what lies underneath once what is underneath is no longer something intrusive which we wish to hide; when it has become something which has been missing so long that it has become mysterious again.
In her earlier book, The Second Self, Turkle includes more examples of hackers moving down the stack. Like Steven Levy in his 1984 book Hackers, she not only features Richard Stallman (founder of the Free Software Foundation, and for Levy the final protagonist of the archetype), but also includes the hackers’ propensity for breaking into locked rooms and closed systems. This is, obviously, the precise opposite of elaborating higher abstractions and systems. Framing these goals in terms of mastery over a system, or of winning, implies gender stereotypes which tend to cloud what I believe to be the real phenomenon at play. What’s really going on is escape.
Escape Room
While I was in Boston for LibrePlanet, I visited the campus book store at MIT. (I also, I’ll admit, infiltrated the Media Lab.)
But while at the bookstore, I inquired about the availability of Seymour Papert’s foundational book Mindstorms. It was not available. Instead, I browsed around and picked up a title by Brendan Keogh, A Play of Bodies: How We Perceive Videogames, released in 2018. In fact, what the book offers is an approach toward a formal criticism of computer games as a medium, just as cinema and literature have their own schools of criticism. It is a capstone to the internet culture-war battles of #GamerGate ten years ago, summarizing and justifying the approach of academic game theorists and journalists.
Keogh’s basis for criticism is the cybernetic circuit between embodiment and game content, or game world. These concepts are derived largely from the work of N. Katherine Hayles in her 1999 book How We Became Posthuman, which is the story of cybernetics, or computer control systems, as they came to regulate and enchant the world we live in, penetrating the human subject with feedback circuits spliced into their cognition.
What’s interesting to me is that everything he says about video games, as a video game critic, is true too of every level of computer interface. What video game criticism does is very clearly, very deliberately handle and manage the “splice” between reality/actuality and the virtual. In other words, critics know that there is a massive discontinuity which is being dynamically interfaced. Academics and games journalists look at games “the wrong way around” (from my point of view) in games criticism because they intentionally want to start with the illusion and then go in, rather than start with the real and work their way up. This is, after all, the direction which newcomers to games approach them, young children included.
It is my stance that there is nothing to be taken absolutely for granted about the discontinuities over which the post-human subject is spliced. The post-human subject is deprived, by design, of the continuity of tactile interface with material reality necessary for integral embodiment. The post-human subject as an “assemblage” of “heterogeneous” elements, constantly co-constituted in cybernetic feedback, could theoretically be remedied simply by removing the medium. In other words, if we had no computers, this problem wouldn’t exist at the scale it does. Obviously. However, as my friend Brian Taylor jokes, such a “cure takes years, but the side effects are immediate.” Going cold turkey on anything is hard, even the media which has co-constituted the preponderance of one’s identity since childhood. And the interfaces into which we are cybernetically spliced are environmental and ubiquitous—moving to the woods and leaving society is just not a scalable approach.
This is why study of the whole stack, and hacking through its enveloping discontinuities or gaps, seems the only viable approach. At least if one sees, as I do, the uneasy sutures of a cybernetic split within the core of one’s identity and appreciation of space as a problem to be resolved. What I’m calling for, however, goes well beyond what’s needed for a video game critic and player alike, for whom immersion is the entire point, and who just want to play and talk about video games.
The attempt to directly put on, or merge with, media-content is the entire matter of what makes post-humanism an interesting intellectual field. It’s about the victims of society’s mad rush to close the “digital divide.” I mean the masses who have been rushed into illusion directly, rather than entering in the materially continuous way, from the ground up, the way it was built. If illusions are “high level,” or at the top of the “stack,” the way to integrate them coherently into one’s worldview is from the bottom up, which entails historical knowledge of the medium itself. All Keogh’s arguments about how virtuality is “enfolded” into materiality are attempts to rationalize the discontinuity without bothering to understand the medium.
On the third page of Keogh’s A Play of Bodies we read
How we come to feel embodied in videogame play is much more complicated than simply stepping out of one world and skin and into others. David Sudnow had it right in 1983 when he called the spaces depicted by videogames “microworlds”: small worlds under glass that we don’t inhabit in any straightforward manner but that we engage with from a distance, our actions reflected back to us, translated by lights and color and sound.
Keogh returns often to the concept of microworlds throughout the book, always citing Sudnow’s book Breakout: Pilgrim in the Microworld, about his obsessive study of the eponymous Atari game.
However, the term microworld is Seymour Papert’s—his usage predating Sudnow’s by years. It had been popularized by Papert’s own breakout constructionist text Mindstorms: Children, Computers, and Powerful Ideas, covering his research with children playing with Logo, mathematics education, and robotic Turtles.
This incorrect attribution extends beyond Keogh—he is guilty only of trusting his sources, it seems. The 2008 edition of New Media: A Critical Introduction, which Keogh cites elsewhere in his book, also mistakenly credits microworlds to Sudnow rather than Papert. Papert’s Mindstorms is even referenced elsewhere in New Media; its authors don’t share the same excuse of ignorance of its real source. It’s worth stating that Sudnow never mentions in Breakout where he got the term from, so there were no easy leads there.
This issue is one example of a problem throughout Keogh’s book: the history of computers, when it is brought up at all, is largely made up to fit. A short section detailing the origins of the QWERTY keyboard, as compared with Doug Engelbart’s chorded keyboard, relies on a single source. The conversation then jumps immediately to game input devices—as though decades of human-computer interaction studies had no bearing upon devices like joysticks or gamepads whatsoever. The very notion that control interfaces from industry, aeronautics, or any other non-gaming field might have informed video games is conspicuously absent. To be fair, it is not a book on the history of computers. However, the references outside of the tightly-circumscribed domain are so minimal and fleeting that the agenda to imbue gaming with its own decontextualized history becomes glaringly apparent. There is an obvious attempt throughout the book to delineate “video games” from computing, as though games were a completely different medium.
Of course, video games aren’t just related to computing—they unequivocally are computer applications. (With the possible exception of the extremely rare and obscure purpose-built, digital-circuit games without processors or software. But those don’t come up in the book, so it’s a moot point.)
To carve gaming out of its larger body of origin, Keogh begins his incision at the point in computer history when individual passion and play became the prime motivator for software development: the history of hackers. He quotes Sherry Turkle describing hacker culture in her 1984 book The Second Self, where she writes, “Their culture supports them as holders of an esoteric knowledge and defenders of the purity of computation seen not as a means to an end but as an artist’s material whose internal aesthetic must be protected…They are caught up in an intense need to master their medium… To win over computation is to win, period.” From there, Keogh writes
Through a desire to appreciate and master computers for their own sake, hackers gave birth to the videogame form. What better way to appreciate computers’ innate aesthetics than to play with them?
However, hackers also imbued in videogames an ethos, attitude, and culture “that [was] produced by the conjunction of particular kinds of young men, technology and the mathematical systems of coding that are the language of computing” (Dovey and Kennedy 2006, 38). The mythos of hackers building up technology in campus dorm rooms and garages reinscribes a dominant masculinity that feminist scholars of technology have traced.
This is the flavour of the entire book. What Turkle calls the “modernist” “aesthetic” of winning power over, or controlling, or understanding computers is, in the wake of her research, always coded in male terms, and opposed to women as cyborgs who more easily acquiesce to merger with the high-level screen world. As Keogh states near the book’s beginning
Rather than our bodies existing before the incorporation of objects such as feathers or cars or sticks or typewriters (or, indeed, videogames), it is the intercorporeal assemblage of feathers-and-bodies, cars-and-bodies, sticks-and-bodies, typewriters-and-bodies (and, indeed, videogames-and-bodies) that modulates our being in the world. The move is semantic, but it decenters the human body as the stable locus that the universe orbits to make way for a diversity of human and nonhuman bodies that are constantly made and remade in their relations with other human and nonhuman bodies.
If we are to understand how bodies produce meaning in videogame play, then, we cannot start with an essential player body and peripheral tools. We have instead a cybernetic amalgam of material and virtual artifacts across which the player’s perception and consciousness are distributed and transformed and from which the player’s embodied experience of playing a particular videogame emerges. The conceptual shift that feminist theorists afford here is important: the player cannot be considered before or distinct from the videogame but instead reflexively as producing the videogame experience that in turn produces the player.
There is plenty to disentangle in just this quote. If one wants to criticize videogames, then one ought to understand that gamers are all different, and that yes, of course, many gamers are merging with their game rather than approaching it with an intent toward mastery and submission. To me, how this is coded in gendered terms is a rather distracting sideshow, a legacy of the origins of scholarly discourse on embodiment within feminism and its project of offering alternatives to domineering maleness as a cultural norm.
What rubs me the wrong way, however, is the equivalency between beating a video game and controlling and owning one’s computer. The first case is about how to understand video games; the second comes down to issues of great importance regarding the ability to resist tyranny imposed by states, monopolistic corporations, oppressive ideological propaganda, or any other form of coercion or government the 21st century might manifest. Politically speaking, if we don’t control our computers, they control us—that’s not a relic of male chauvinism in the same way that complaining about “casual” video games designed for women might be.
The overlaps between posthumanism and my own McLuhan studies are overwhelming. McLuhan’s answer to Keogh’s point about the distinction between our bodies and objects our bodies incorporate can be found in Understanding Media‘s chapter on the Photograph:
But amputation of such extensions of our physical being calls for as much knowledge and skill as are prerequisite to any other physical amputation.
What strikes me as interesting is how Keogh ignores the generations for whom all these objects—certainly not feathers or sticks, of course—were, in fact, new things. John Carroll’s book The Nurnberg Funnel is all about introducing computers to office workers who were used to typewriters. They were not co-constituted with desktop computers. Against Keogh’s claim, they were, in fact, “existing before the incorporation” of computers into their lives and, yes, bodies.
Even more pertinent here: one conclusion which John Carroll reached in The Nurnberg Funnel was that learning a word processor should be more like learning to play videogames. He built that chapter of the book out of his 1982 paper The Adventure of Getting to Know a Computer. You should read it; the tagline reads, “Making text editors more like computer games may seem ridiculous on the surface, but these ‘games’ use basic motivational techniques—something designers of application systems have overlooked.” Since then, the gamification of the real world has hurried along steadily. Shoppers collect loyalty points on every card in their wallet, and transit riders grind their passenger score sharing information about their bus rides with colour graphics. The sort of merging with games which Keogh presents is far too uncomfortable when taken out of the scope he tries so hard to keep it within. But the constant allusions to cultural change, the changing conception of the “gamer” identity, and the like, keep me from staying within his box.
Again, video games are not a different medium than computers—they are computers. Passive spectating and merging with high-level graphics and sound, and an unwillingness to understand what’s going on beneath the surface, are not “feminine”; they’re a style of play which is fine in videogames—and childlike and self-destructive in real life. The imposition of gender norms onto these forms of play is insulting to both genders, which is one big reason the internet exploded in 2014 in the first place.
Just as feminist theorists demonstrate how lived, situated experience accounts for a fuller spectrum of human experience than the patriarchal and colonialist transcendence of objective consciousness, cybernetic conceptualizations of videogame play highlight how the oft-privileged but niche pleasure of mastering a videogame—of dominating and “beating” it—is but a rationalist subset of the broader embodied pleasures of participating with a videogame. An active player is not the essence of videogame play that differentiates it from other media. A videogame player is at times afforded and at times constrained so that both activity and passivity—a flow of agency—give shape to the videogame experience…
The important point is not that posthumanist cyborgs are a hybrid of machine and organism, whereas rationalist hackers are not, but that the dualisms that allow the hacker to be seen as distinct from the machine—Nature distinct from Culture, Man distinct from Machine, mind distinct from body, gamer distinct from nongamer—are themselves constructed mythologies at the service of dominant and hegemonic values.
All this only holds true if one ignores the fact that videogames, unlike movies or paintings, are being produced on one’s own privately owned computer, and that to the degree that one is merging with only the surface of the game, one is leaving the computer as a black box within one’s subjective being. The book tries so hard to ignore the existence of the medium of which video games are the content, to make videogames the medium, that it becomes nonsensical at times. Sure, I’ve often surrendered my agency to a video game—but I always had the option to walk away. To turn it off. When the real world itself, when my society, my smartphone, my company computer, my university education, my child’s elementary school education, the electronic payment processor at the store, the system which evaluates our credit scores and insurance premiums and eligibility for social inclusion all begin to operate like a video game, then we’re in a different situation. Perhaps Keogh doesn’t realize how everything he writes, when taken out of the highly-artificial sandbox of videogames-as-medium which he writes within, is a paean to happy enslavement. But that’s how the internet took it in 2014: the “game” of reality became an escape room.
A Play of Bodies is an essential book for appreciating the internet milieu immediately preceding the election of Trump, when everyone’s mind was on the lack of boundaries between video games, educational software, and the gamification of all business and commerce at large. Unlike film criticism or music criticism, video game criticism just will not stay in its box—the protean medium beneath it won’t allow it to be so.
We’ve gotten into heavy territory here, so we might as well sink to the bottom. McLuhan often opened his books and speeches with references to Edgar Allan Poe’s ‘A Descent into the Maelström.’ What the sailor did to survive the wrecking of his ship in the massive whirlpool was to go with the flow of the current, study its dynamics, and lash himself to objects likely to float up and outward rather than sink to the bottom. What follows is the example of the one person I know who developed and implemented precisely the opposite approach to survival, which I like to think of as the diving bell strategy.
Terry A. Davis, Prophet
Lord, how can man preach thy eternall word?
He is a brittle crazie glasse:
Yet in thy temple thou dost him afford
This glorious and transcendent place,
To be a window, through thy grace.
—George Herbert, ‘The Windows,’ 1633
The following is the story of a courageous man with cool shades, grappling with mental illness.
Terry A. Davis (1969-2018), born in Wisconsin, was a gifted child. He was reared on the first generation of commercial microcomputers of the 1980s—first the Apple II at school, and then his own personal Commodore 64. With his copy of Mapping the Commodore 64 by Sheldon Leemon, he learned every nuance and memory address of his machine in only a few years of dedicated study and play.
At age twenty, he began work as an operating systems developer at Ticketmaster, only a year before the corporation acquired its largest competitor and came to dominate the computerized ticket-sales market. He held this job while completing his master’s degree in Electrical Engineering at Arizona State University, which he received in 1994. Though born into a Catholic family, he had become an atheist during this time.
(Coincidentally, Marshall McLuhan transferred from Engineering into English in his first term at the University of Manitoba. He also famously told an interviewer in 1977 that the Antichrist “is the greatest electrical engineer. Technically speaking, the age in which we live is certainly favourable to an Antichrist. Just think: each person can instantly be tuned to a ‘new Christ’ and mistake him for the real Christ.” McLuhan was a convert to Catholicism.)
An exposé on Terry in Vice explains, “‘I thought the brain was a computer,’ Davis says, ‘And so I had no need for a soul.’ He saw himself as a scientific materialist; he believes that metaphor—the brain as a computer—has done more to increase the number of atheists than anything by Darwin.”
After six years designing hardware and systems with Ticketmaster, he left to seek new challenges in aerospace. Soon afterwards, however, he was overtaken by extreme paranoia. He tried unsuccessfully to convince himself that the shadowy figures he sensed stalking him were merely employed by the defense contractors to whom he had sent job applications. Secret messages which he felt were undeniably intended personally for him were hidden in the subtext of the stations he tuned into on his car radio. A rapid, violent breakdown resulted, leading to hospitalization, jail, and a diagnosis of schizophrenia which would mark him for the rest of his life.
Davis renounced his atheism and focused his runaway capacity for sense-making on sola scriptura biblical study. The text spoke to him in direct relation to immediate issues and questions. He decided that the special, conversational style of personal meaning his mind was making out of the Biblical stories—often chosen at random—was a form of direct communion with God. He also redoubled his dedication to low-level programming, and over the course of ten years of full-time work, single-handedly produced a singular, culturally-unassimilable relic: a completely novel, tightly-integrated computer operating system for the brand-new amd64 CPU architecture. According to him, its specifications had been laid down in exact detail from on high. The God of Abraham had commissioned Terry, he said, to build his OS as the new Temple of Solomon. It was a task he dutifully fulfilled, to the letter.
Since its announcement under the name TempleOS in 2012, curiosity about its origins and its creator has grown. Documentary film-maker Fredrik Knudsen’s feature-length film on Davis and TempleOS has received well over 5 million views in under three years. (I was fortunate to interview Knudsen just before the episode’s release.) Knudsen’s sympathetic exploration has brought Davis’ life, work, and story to the mainstream and has inspired many more creators to explore the topic, cementing its place in digital folklore. The popular YouTube channel ‘Linus Tech Tips’ racked up 2.4 million views in just one year with a short exposé on Terry’s software project. Many online communities have sprung up dedicated to installing, studying, and improving TempleOS.
Terry regularly live-streamed videos of himself in the last few years of his life. His intention was to demonstrate features of his software projects. Unfortunately, in these videos his mental illness often takes center stage—most jarringly in his angry use of hateful, racist epithets to name the forces he perceived to be stalking and discouraging him. This was the form his psychological warfare took against all suspected agents of the all-controlling U.S. Central Intelligence Agency. The visceral toxicity of these public, paranoid lapses into racist slurs presents a difficult challenge for anyone seeking to discuss his remarkable project responsibly and respectfully. Anyone first introduced to the character of Terry Davis through such displays would understandably question the social or educational value of bringing any attention to him whatsoever.
Perusal of the comments made on these videos, however, suggests there is ample justification for digging a little deeper. More often than not, content related to Terry Davis proves to be an exception to the general rule that comment sections on the internet are always terrible. Those who have fallen down the TempleOS rabbit hole demonstrate a surprisingly mature, humane sympathy for someone clearly in the grip of a debilitating disease. Comments on one recorded livestream of Terry during a coding session in the last year of his life include, “terry the original cyber punk, fixing bugs from his van on the run from the cia. RIP,” “a schizophrenic programmer on the run fixing bugs in his van… can’t make this shit up :D,” “damn nothing more depressing than watching a man hunting bugs on an OS he built in a van in a plaza so he can have some internet to get through the night goddamn this is sad as hell,” “When I watch these vids i feel like im hanging out with my buddy Terry. What a guy!”
Naturally, a significant fraction of his online audience goaded, manipulated, and provoked the vulnerable Terry in the worst possible ways. They would encourage his use of hateful rhetoric, or masquerade as interested romantic partners. His mental condition deteriorated, and his rambling monologues took on the form of free associations and wild leaps of logic expressing totally delusional beliefs—many of which he had been fooled into by those interacting with him online. Yet long moments of lucidity and clear thinking would shine through when the talk turned either to computers or to explaining his state of utter confusion about the nature of reality.
Uh— for quite a number of years I have had confusion on my exact reality.
Umm, umm— an example of the— okay, uh— The Truman Show reflects an example of a uh— of a strange reality. You know? He’s like, you know, he’s like— gradually picks up that, “something’s not quite right about my life.” You know what I’m saying? “My reality is just not quite right.” Then eventually he discovers he’s in a little bubble, and he’s on TV.
So— Terry—me—there’s just something not right about my reality and I don’t have any— I don’t have any—
You know maybe, one day he came up with the conception, “Hey what if I’m like in a bubble and there’s cameras—” and he had this conception that explained his reality. What I need is some concept of what my reality is, you know what I mean? For the Truman Show guy, his answer was: he’s on a TV show and they’re taking cameras of him. For Terry, I have zero conception of what my reality is: I don’t know….
So uh, there’s always the desire to make it real you know? But you know after 20 years it doesn’t seem to get real. And the whole time there’s been a— a verge of “making it big.” For the last 20 years I’ve been like, “Oh great! Now I’m finally— they’re finally understanding it. I have a space alien. The whole world will know.” So uh— yeah I’m really— let’s just say I’m walking in the air and— somehow I’m floating in the way up in the air—and I have no idea what’s under me. I mean, just— just imagine you’re just like walking around up in like the sky or some crazy shit, and you say, “Why am i walking around in the sky?”
On August 11th, 2018, Terry Davis recorded a short, relatively cogent, tranquil video outside a public library he had been frequenting in The Dalles, Oregon, and uploaded it to YouTube. He was struck and killed by a train several hours later.
Analysis of Davis and TempleOS
As an electrical engineer with a strong understanding of how computers work, Davis was unusually grounded within the contemporary material world. Socially, however, Davis had no means of talking about this vantage or relation to computing with others. Our popular culture has not traditionally afforded “hackers” who eschew corporate work much more than a politically-radical cyberpunk aesthetic or a shamanic, techno-wizard mystique for performative social existence. Neither archetype affords a social existence grounded within mutual understanding of the actual material world—only a society which had normalized the kind of computer expertise Davis possessed could have truly included him. And so he worked to enable the creation of such a society.
The Commodore 64 takes its name from its 64 kilobytes of RAM. It was, however, an 8-bit computer: its CPU handled data eight bits at a time, over eight parallel bus lines on the motherboard, while a 16-bit address bus let it reach 2^16, or 65,536, bytes of memory, exactly the 64 kilobytes the machine is named for. To stick with Intel’s line of processors for the sake of brevity and reader familiarity, the 8-bit world into which Davis’ childhood Commodore 64 was born gave way to the 16-bit world of the Intel 286 architecture. The 286 world was soon swept away by the 32-bit Intel 386 architecture, which was dominant in some form through the 486 and Pentium-class architectures of the ‘90s. Intel’s architectural dominance was finally broken when Advanced Micro Devices (AMD) won the popular market for 64-bit microprocessors in 2003.
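To make the arithmetic concrete, here is a minimal sketch in Python of how address-bus width maps to addressable memory. The bus widths listed are the usual textbook figures for each era, not claims about any particular board or chipset:

```python
# Rough map from address-bus width to addressable memory.
# Note that an "8-bit" machine like the C64 still had a 16-bit
# *address* bus; the 8 bits refer to data width.
WIDTHS = {
    "Commodore 64 (16-bit address bus)": 16,
    "Intel 286 (24-bit address bus)": 24,
    "Intel 386/486/Pentium (32-bit)": 32,
    "amd64 (64-bit architecture)": 64,
}

def human(n_bytes: int) -> str:
    """Render a byte count in the largest convenient binary unit."""
    units = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB"]
    i = 0
    while n_bytes >= 1024 and i < len(units) - 1:
        n_bytes //= 1024
        i += 1
    return f"{n_bytes} {units[i]}"

for name, bits in WIDTHS.items():
    print(f"{name}: 2^{bits} = {human(2 ** bits)} addressable")
```

Each doubling of the address width squares the number of addressable locations; that is the scale of the leap from the machine Davis memorized as a boy to the one he later wrote an operating system for.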
In the twenty years between 8-bit machines and 64-bit machines, the personal computer went from hobbyist novelty to popular household commodity, thanks to the techniques of interface design and computer application training we explored above. The actual machine became completely submerged within high-level abstractions and was only accessible indirectly, through social relations with the priestly class of commercial software developers and technicians. As stated above, contrary to McKenzie Wark’s formulation in The Hacker Manifesto, the computer hacker’s primary drive is not to create abstraction, but to demolish and collapse it. The Hackers of Steven Levy’s celebrated book stole power over the machines back from the abstractions of centralized and socially-mediated control in the ‘60s and ‘70s, putting the bare metal back into the hands of those who’d undertake to learn and master it. In other words, to hack is to pull aside the hands in the game of commercial peekaboo being played both by hardware and software vendors and by the square administrators of institutional IT departments. The computer hacker’s shamanic mystique derives entirely from an intuitive, embodied relation to real material systems which exist, unperceived by most, behind the appearances of our user-friendly technological world. But for Terry, the parent hiding behind the hands he pulled away was the Old Testament God.
The last hacker of Levy’s book, Richard Stallman, operated within the social arena of law in order to secure the rights of computer users to understand the materiality of their own computers. The cornerstone of his Free Software Foundation, the GNU General Public License governing the Linux kernel, among other things, has forever changed our world. Stallman did his work within an existing hacker culture at MIT, where cooperation, collaboration, and sharing between hackers was already established. Stallman’s first computers were shared mainframes and minicomputers in universities.
By contrast, Davis’ interests were of a more pedagogical sort, derived from his own experiences as a child, and a presumably lonely child at that. Davis’ lessons were privately won. Davis didn’t care to legislate freedom for an existing group or tribe. Rather he sought to enable freedom for the individual, self-motivated learner. All Davis had needed as a child was a computer and a good book—no larger culture or social existence was necessary for hacking the personal computer.
TempleOS was designed explicitly to create the perfect learning environment for the precocious child to play around in and learn the amd64 architecture. As Davis often said, “The Commodore 64 is our ancestral land.” The tribe whose ancestry he speaks of, however, is that of “precocious boys,” an identity which Morgan Ames dismisses as a nostalgic fantasy in The Charisma Machine. But whereas the XO Laptop of the OLPC Project, originating from MIT’s Media Lab, was designed to appeal to a new generation of child hackers, its programming never led down to the machine. The Sugar environment which Ames rightly critiques failed because children would hack through its child-like, high-level interface down to the underlying OS, where they could then install “real” software. Ames’ critique of the actual machine unfortunately stops there, pivoting away from the machine toward an uncharitable and personal critique—or attack—of the OLPC management and of the theoretical figurehead of the project, Seymour Papert. By substituting the social arena for the material arena where laptops and the children who own them exist, she fails to derive the necessary lessons from the failed OLPC project.
Even though the OLPC was inspired by the work of Seymour Papert, an actual colleague and successor of Jean Piaget, Terry Davis’ own childhood experience put him on a better path toward designing a learning environment suitable for children. After all, he was the actual kid growing up with the computer. Papert’s first projects with Cynthia Solomon teaching children computer programming in the late ‘60s led him to focus upon their embodied relation to the machine, in the form of a robot “turtle” as an “object to think with.” When he had taught them to create word games and math games in Logo, the children struggled. By teaching them to move the robot turtle around, he allowed the children to “hack through” to the machine from top to bottom, going into the high-level interface and back out into the physical world. By contrast, the Sugar interface on the OLPC machine was a cul-de-sac into abstraction. One could see the high-level source code, but children were not given a way back out into material reality. There was no programming the actual CPU at the bottom of the machine—for reasons which are obvious. Well, obvious to everyone except Terry A. Davis.
Papert used the turtle as a proxy for the child’s actual body navigating our physical space. The children then learned to code by learning how to command the turtle in language which also described their own bodies. From that basis, they could develop the syntax of programming languages in general, using variables, loops, logical operators such as “if/then” constructs, algebraic functions, recursion, and all the other features of computation. All the things, that is, which low-level programmers have traditionally told a CPU to do in memory space. Unlike Keogh’s cyborg, the child learning to manipulate the turtle incorporated with the computer at its base, not at the top-level interface of someone else’s pre-programmed plans for them. Papert built an understanding of computers from the near-bottom up, rather than throwing kids into the high-level interface with no way of swimming down.
In his own idiosyncratic way, this is what Terry Davis did with TempleOS. Terry thought that, as a prophet of God, he could use divine power to have TempleOS installed in ROM chips, at the factory, in all new 64-bit microcomputers. That way, if there was no OS on the hard drive, the machine could fall back to booting it, just as IBM machines would boot into a BASIC interpreter until the early ‘90s. This way, the low-level architecture of the machine could be easily messed around with, in private, by anyone. And thus the machine itself, the computer, would be back in the hands of all who wanted it or needed it.
Terry was insane. But the idea behind the project whose strange execution consumed the last decade of his life was, I truly believe, completely sound. We should be able to touch our computers.
At the center of the Maelstrom
Let’s learn a little more about Seymour Papert, whom I’ve brought up several times so far. Papert studied with the preeminent child psychologist Jean Piaget in Geneva from 1958 to 1963. Papert’s contribution to Time Magazine’s ‘The Century’s Greatest Minds’ special issue, on Jean Piaget, is worth the read regardless—not least of all because in it Papert ranks Piaget’s respect for children alongside that of Dewey, Montessori… and Paulo Freire! But it doesn’t end there, folks.
After that tenure in Switzerland, Papert co-founded the Artificial Intelligence lab at MIT with “the father of AI,” Marvin Minsky. He worked in that domain his entire life—but that work is only adjacent to our explorations here.
It is his work in child psychology which is important now. Observing young children learning the programming language BASIC, Papert noticed a major flaw in its usefulness as a first language: BASIC gives programmers no easy means to create their own commands, routines, or functions. Without that ability, children can’t build up from simple steps into complicated assemblages—they can’t grow a program the way they learn to scale up any other project or skill.
With Wally Feurzeig and Cynthia Solomon, Papert designed the first educational programming language for children in 1967. It was called Logo, after the Greek term Logos. Logo was based on LISP, a programming language designed entirely around the ability to define, and then use, smaller functions which can be built up into larger and larger structures of increasing complexity.
Teaching alongside Solomon, Papert ran a trial Logo curriculum that replaced a year-long seventh-grade math class at a middle school in Lexington, Massachusetts. The students excelled in their math exams.
Turtles All the Way Down
In the middle-school trial, Papert and Solomon had focused the lessons around word games. The computer was remote, and the input/output devices available were, thus, paper teletype machines. If all you have is an alphanumeric keyboard, then programming projects like simple guessing games and pig-latin translators are atural-nay irst-fay eps-stay. Plus, programming languages are languages, so words seem the natural entry point.
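The payoff of Logo’s LISP inheritance shows up even in those word games. Here is a rough sketch of the idea in Python rather than Logo; the function names and rules are my own illustration, not a reconstruction of Papert and Solomon’s actual exercises:

```python
# Define one small command, then build a bigger command entirely
# out of it. The rules here are a simplified pig latin, chosen only
# to illustrate composition.

def pig_latin_word(word: str) -> str:
    """Move any leading consonants to the end and add 'ay'."""
    vowels = "aeiou"
    for i, letter in enumerate(word):
        if letter.lower() in vowels:
            return word[i:] + word[:i] + "ay"
    return word + "ay"  # no vowels at all: just tack on the suffix

def pig_latin_sentence(sentence: str) -> str:
    """A larger command assembled from the smaller one."""
    return " ".join(pig_latin_word(w) for w in sentence.split())

print(pig_latin_sentence("natural first steps"))
# prints: aturalnay irstfay epsstay
```

The second function is exactly the move BASIC made awkward: wrap a working idea up, give it a name, and use it as a brick in something larger.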
But Papert realized that something more concrete was better suited to children. Kids need something physical to play with. Something that moves. Geometry is math, but it also takes place within physical space. And, thus, the turtle was born.
I first attended the Media Ecology Association annual convention in 2019 with a paper titled Toppling the Pillars of Cyberspace: A McLuhan-Syntonic Approach to Computing. In that paper I made the case that simple, procedural programming languages treat the Central Processing Unit as an agent to be directed. Programming is treated like a game of snakes and ladders, with the CPU as the game-piece moving through its RAM, following the instructions that it finds. BASIC works on that paradigm: most BASIC programs contain `GOTO` keywords which tell the interpreter where to jump next in the program.
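A toy sketch of that snakes-and-ladders picture, in Python: the three-instruction machine below is hypothetical, my own invention rather than any real CPU or BASIC dialect, but the program counter plays the role of the game-piece.

```python
# A toy "CPU as game-piece" machine: a program counter walks a list
# of instructions in order, and a JUMP sends it to another square,
# much as GOTO re-routes a BASIC interpreter.

program = [
    ("PRINT", "hello"),  # 0
    ("PRINT", "again"),  # 1
    ("JUMP", 1),         # 2: slide back to instruction 1, forever
]

def run(program, max_steps=6):
    pc = 0  # the program counter: which square the game-piece is on
    for _ in range(max_steps):
        op, arg = program[pc]
        if op == "PRINT":
            print(arg)
            pc += 1      # step forward to the next square
        elif op == "JUMP":
            pc = arg     # the snake (or ladder): jump elsewhere

run(program)
# prints: hello, again, again, ... until max_steps cuts it off
```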
The language of LISP—and so the language of Logo—is a simple convolution of that paradigm, insofar as newly defined commands contain other commands, all the way down to the same linearity. It’s one extra layer of abstraction, letting you tuck away multiple commands inside of one single, newly defined command. Beneath it all, there is still a single identity being attributed to whomever you are addressing, or directing, or instructing as a programmer. That recipient of your instructions is, naturally, the computer. The computer’s actual CPU, or an interpreter of a higher-level language which the CPU is executing, or…
Or a robot with a pen which drops out of its bottom! This is Logo as most people are familiar with it. Logo lessons begin with instructing a “turtle” to move about in a 2-dimensional plane and draw things. Most students of the past fifty years who were introduced to Logo in school but never progressed past the introductory stages—like myself—don’t know that there’s anything more to the language than that. In cultural memory, the turtle is Logo.
But no. Practically speaking, Logo is LISP, and LISP is an extremely powerful and widely-applicable programming language in use in many fields—especially A.I. And that’s what Logo became when taught, once the kids were hooked. Kids who stuck with Logo, who learned it beyond the initial stages, were able to apply it to all sorts of things which adults and educators found astounding.
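Python’s standard-library turtle module is a direct descendant of Logo’s turtle, so the flavour of those first lessons is easy to sketch. The figure and numbers below are my own, not one of Papert’s exercises: primitive moves get wrapped into a named command, and that command becomes a part in a bigger one.

```python
# Primitive moves (forward, left) become a named command (square),
# and square becomes a building block for a bigger command (flower).
import turtle

t = turtle.Turtle()

def square(side):
    """A new command defined from the primitive moves."""
    for _ in range(4):
        t.forward(side)
        t.left(90)

def flower(petals, side):
    """A bigger command built entirely out of square."""
    for _ in range(petals):
        square(side)
        t.left(360 / petals)

flower(12, 80)   # twelve squares, each rotated a further 30 degrees
turtle.done()    # keep the drawing window open
```

It is the same compositional idea as the pig-latin sketch above, only now the output is a drawing traced out in physical space.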
What the turtle was, to Papert, was “an object to learn with.” He saw embodiment as central to learning, and like Jean Piaget, he understood that abstract thinking in children grows out of experience with physical, tangible objects and concrete situations. We aren’t born into this world knowing language or math or silently pondering symbolic structures in our minds—we crawl and chew and hit and feel and yell. We’re in our skin. And like children, so is the turtle.
The sensation caused by watching children program computers—a domain previously thought arcane and abstract—was phenomenal. Over the course of two decades, Logo became a fixture of classroom education.
And unlike the imaginary worlds of videogames, it was towards science and belonging in the material world that the child’s embodiment was being massaged. As Papert wrote in the introduction to his 1980 book Mindstorms,
It is in fact easy for children to understand how the Turtle defines a self-contained world in which certain questions are relevant and others are not. The next chapter discusses how this idea can be developed by constructing many such “microworlds,” each with its own set of assumptions and constraints. Children get to know what it is like to explore the properties of a chosen microworld undisturbed by extraneous questions. In doing so they learn to transfer habits of exploration from their personal lives to the formal domain of scientific theory construction.
However, the world moved on from Papert’s LISP machine for children. As I first detailed in my paper for the Media Ecology Association in 2019, more and more layers accrued atop the rudimentary programming experience associated with computers. Alan Kay, who worked closely with Papert, went on to Xerox PARC to develop the object-oriented programming language Smalltalk, as well as graphical user interfaces. These layers, along with the file-system metaphor for disk-based program and document “file” storage, completely obscured the nature of computing beneath layers of “cyberspace.” This cyberspace became the hands of peekaboo which were never to be lifted, were computers to remain an easy-to-use commodity device.
Throughout the ‘80s, as these high-level concepts grew in ubiquity, Logo faded away. More importantly, the experience of a single computer as a closed, singular, addressable agent also went away. The contents of the computer screen became an orchestra of agents, and with networking opened onto an infinite world of objects and layers to glom onto.
On the Default Friend substack, I argued that our individual fragmentation is mirrored economically by the division of labour amongst specialisms, but that something more fundamental than that was at play. Our relation to the book, the printed page, was the deeper culprit. As an “object to learn from,” the book is a portal to worlds large and small—a portal to anything and everything. But books don’t move. Their contents stay abstract, needing to be teased out within our minds as images and, behind that, human voices.
Computers are the constitutive medium today. And beneath the changes in I/O devices, the gradual increases in specs, and the ever-changing layers of surface software, they’re still largely the same as they’ve been since the end of World War II. The basic principles are the same—and yet how and where those principles take place today, in the smart devices which surround us, is beyond our ability to touch, to reach, to play with, to learn from.
What Logo offered was an educational curriculum rebuilt around the newest technology, just as education had been built up around books for centuries past.
Conclusion
By targeting the mainstream, computer interface designers made the conscious decision to pivot away from comprehensive documentation of computer software, toward a rough-and-tumble learn-as-you-go approach. As with all one-to-many approaches, the effect has been a normalization toward mediocrity.
What may be missed, however, is just how deep the cost runs of “exploiting” our human sense-making apparatus in this way, not only in computer interfaces but in all aspects of industrial design, product design, architecture, and the other fields of environmental and affordance creation. Children are born into these environments. And, while far from being blank slates, children do not approach contemporary environments with pre-existing expert knowledge, or even cultural knowledge, or even a stable sense of the nature of physical, material reality, to be exploited. Instead, these environments form the physical, material world from which children learn those fundamentals. A world designed to hide away things adults don’t understand becomes, for kids, a world where those things don’t even exist to their senses. These kids are, as it were, playing peekaboo against a bad-faith adversary. And to get at the world, they must cheat.
The act of learning what our world is made of is one-and-the-same as the act of learning who and what one’s self is. Says Piaget,
Through an apparently paradoxical mechanism whose parallel we have described apropos of the ego-centrism of thought of the older child, it is precisely when the subject is most self-centered that he knows himself the least, and it is to the extent that he discovers himself that he places himself in the universe and constructs it by virtue of that fact. In other words, egocentrism signifies the absence of both self-perception and objectivity, whereas acquiring possession of the object as such is on a par with the acquisition of self-perception.
The symmetry between the representation of things and the functional development of intelligence enables us from now on to glimpse the directional line of the evolution of the concepts of object, space, causality, and time. In general it may be said that during the first months of life, as long as assimilation remains centered on the organic activity of the subject, the universe presents neither permanent objects, nor objective space, nor time interconnecting events as such, nor causality external to the personal actions. (Piaget, xii-xiii)
The consequences of exploiting the child-like learning capacity of adults in order to get them to become useful office-workers in the ‘80s must be followed through to their logical conclusions. The sense-making faculties which the videogame-ification of virtual office equipment exploits in adults are, obviously, the very faculties affected when a child is immersed in such virtual environments. Children have traditionally relied on those faculties to build a stable sense of the physical, material world and, thus, of their embodied relation to that world.
We are now in a position to observe what happens to a child’s sense of embodiment, and sense of the world, when those early-childhood processes are hijacked by virtual environments expressly designed to exploit those processes.
The Forest from the Trees
Outside of critical theory of a post-modern, feminist, or queer variety, I have seen no contemporary literature on user interface design, virtual environments, screen time, or any equivalent framing of media usage that breaks through to these fundamental issues of embodiment and subjective identity. And those critical theory texts seldom attempt a comprehensive approach focused on how user interfaces evolved, or on what computers physically are. Instead those texts largely premise their observations upon the supposition of non-physical, post-Newtonian subjective spaces wherein the boundaries of embodiment break down. I’d like to respect the sensation of living in such spaces as they are constituted for the post-modern subject, while also asserting their fictive and unreal nature except as phenomenal illusions of modern media.
This piece started with a discussion of Newtonian space—the physical world of objects we all learned exists as kids. Well, are kids still learning that? Or, even better, were they even learning it within the past century? As McLuhan pointed out, the instantaneous nature of electrical communication short-circuited causal reality itself. The Newtonian universe imploded as wires stretched across the surface of the earth. Maybe that seems like a stretch—but have you heard the old farmer’s joke about the city kid who thinks food comes from the supermarket? Is it entirely unbelievable? Cyberspace is a far more tactile “place” for our experience to develop in than farms are—at least for city kids. The world behind the glass of a screen is one we touch every day; it’s more real than most places on earth. And, as I hope I’ve demonstrated, the language we use to discuss it is entirely misleading. If it weren’t, then the postmodernist and posthumanist language of cybernetic splices would make perfect sense to everyone, right off the bat.
So long as we are in a social relationship with our machines, rather than a material one, we are not grounded in the physical world.
In short, if people learned what reality was made of, that which only seems to be reality—or is socially constructed to be reality—wouldn’t feel so real. And, paradoxically, what now seems so real would no longer also feel so fake at the same time.
What I hope to demonstrate here is the viability of an entryway into appreciating the post-modern condition, and contemporary issues of uncertain embodiment and identity, through a rigorous and comprehensive analysis of the scientific texts and material designs which make up our technological world. Put simply: we must tell the history of computers, and the history of the world as constituted in relation to computers, at human scale.
The tack of analyzing texts is insufficient. The objects themselves—in this case, computers—must be understood.
Taking a very different tack from my own, Michael Black’s 2022 book Transparent Designs tries to make the case that “user friendly” sales rhetoric was a social construct of computer marketing departments. Whether or not computers were, in fact, user friendly was rather beside the point. His book is greatly weakened by relying more upon academic scholarship and textual analysis in a critical-theory vein than on any attempt to flesh out a comprehensive understanding of the unfriendly nature of computers as they are.
Uncle Jean
The quotation from Jean Piaget at the opening of this piece continues:
A universe without objects, on the other hand, is a world in which space does not constitute a solid environment but is limited to structuring the subject’s very acts; it is a world of pictures each one of which can be known and analyzed but which disappear and reappear capriciously.
Could a better formulation of the world presented to us on the other side of our screens be written today?
From the point of view of causality it is a world in which the connections between things are masked by the relations between the action and its desired results; hence the subject’s activity is conceived as being the primary and almost the sole motive power.
“I don’t care about computers, I just want to use them to achieve my ends and goals. I need applications of the computer to my life.”
As far as the boundaries between the self and the external world are concerned, a universe without objects is such that the self, lacking knowledge of itself, is absorbed in external pictures for want of knowing itself; moreover, these pictures center upon the self by failing to include it as a thing among other things, and thus fail to sustain interrelationships independent of the self.
There is more in these closing lines than can be unpacked in any commentary which might fit here.
Great piece, although I think the immersion itself is a rearview mirror idea leftover from the TV. Hence, the tendency among scholars who grew up with TV to attribute a similar desire for immersion to video games. You can see how this is objectively false if you try to trace it in the history of gaming itself. Whenever a company introduced a product meant to increase the immersion factor, it either flopped or was briefly entertained as a technological novelty only to be discarded and forgotten with no lasting impact (Virtual Boy, Microsoft Kinect, Playstation VR). The recent failure of Metaverse to take off can be thought of as another example. The analog interfaces of video games or computers in general create a far more detached relation between user and the screen than the previous era.
You highlight a very important difference between the media, which is that television is basically a disembodied experience for the watcher, while video games put the player inside the frame somehow. As Keogh points out, mastery of the controller is essential for forgetting its existence and merging with the experience, just as learning to choreograph turn signals and acceleration and braking and steering and mirror checks and road-sign comprehension into a singular activity (part-to-whole gestalt formation) is essential for a driver merging with their vehicle. Mobile phone games have the simplest interfaces yet, so you see them everywhere that people are killing time. I suspect that immersion isn’t less for video games, but that the level of immersion they already give—a level similar to TV—is sufficient, and so the VR headsets and funky controller technology are excessive and unnecessary for the experienced gamers they were marketed to. Since games do need some kind of embodiment, though, the shape and form of that relation is everything. (Remember the hype over the “hand-eye coordination” which video game controllers bestowed upon youth, as though that were some kind of particular, generalizable gift? Did all the kids who played Mortal Kombat become expert whittlers and puppeteers?)