It is not enough, to understand computers, to know only that they are very complex and very fast; we also need a sense of their proportions and scales. But they have been very complex and very fast for about half a century now, and it seems culture has all but given up on retaining any sense of scope for computers relative to human experience or meaning. They no longer exist within our subjective universe.
Full-stack media ecology is not just an explanation of what goes on between the top and the bottom of the computer stack; that is, between the high-level, easy-to-use interfaces and the bare metal and silicon. It’s about building the historical context for the development and growth of the stack upward and downward, as a narrative about our lived environment, our culture, and who and what we are as humans. We are embodied beings, and computers were designed in our proportions—for reasons direct and indirect.
From this history, we can glean the same few repeating principles which, over and over, underlie all the changes that led to the complexity of our modern systems.
Here are some fun examples. Telephone lines had long been optimized to capture the range of the human voice—it was a waste of bandwidth to transmit, with any fidelity, bass and treble frequencies outside the mid-range of speech. So when modems—data modulator-demodulators—came along, they too had to transmit binary data within the pitch range of human speech. Recall the sound of a dial-up modem—does it make any noises you can’t roughly imitate with your mouth? Try it! No wonder the judge didn’t let Kevin Mitnick have access to a cellphone in prison!
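To make that concrete: the earliest dial-up standards simply mapped each bit to one of two audible tones sitting well inside the voice band (the Bell 103 standard used 1,270 Hz and 1,070 Hz for its originate channel). Here is a minimal sketch of that kind of frequency-shift keying in Python; the tone frequencies and baud rate follow Bell 103, but the rest is purely illustrative, not a faithful modem implementation.

```python
import math
import struct
import wave

# Bell 103 "originate" tones (Hz): both sit comfortably inside the range of human speech.
MARK = 1270   # binary 1
SPACE = 1070  # binary 0
BAUD = 300    # 300 bits per second
RATE = 8000   # samples per second, roughly telephone quality

def fsk_modulate(bits, rate=RATE, baud=BAUD):
    """Turn a string of '0'/'1' characters into a list of audio samples (FSK)."""
    samples = []
    samples_per_bit = rate // baud
    for i, bit in enumerate(bits):
        freq = MARK if bit == "1" else SPACE
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / rate
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

# Write the result out as a WAV file you can actually listen to --
# it warbles like the opening of a dial-up handshake.
if __name__ == "__main__":
    audio = fsk_modulate("0100100001101001")  # the ASCII bits of "Hi"
    with wave.open("fsk.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in audio))
```

Play the resulting file back and you will hear exactly the sort of two-tone warbling you can, indeed, roughly whistle.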
And before computer sound cards featured digital audio—that is, before they could record and play back “real” audio the way audio tape could—the first standardized computer soundcard, by the Canadian company Ad Lib, Inc., was a 9-channel synthesizer, each channel with two oscillators. Basically, then, it was a Yamaha synth keyboard, as you might have heard in many popular ’80s songs. Today, software simply emulates such a circuit, or plays back digital-audio samples of synth sounds. Your 21st-century computer certainly doesn’t have any actual synthesizer microchips on its motherboard.
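What “two oscillators per channel” bought you is FM synthesis: one oscillator (the modulator) wobbles the phase of the other (the carrier), producing bright, bell-like or brassy tones out of almost nothing. A toy sketch of the idea in Python; the frequency ratio, modulation index, and envelope below are invented for illustration and have nothing to do with the AdLib chip’s actual registers.

```python
import math

RATE = 44100  # samples per second

def fm_tone(freq, seconds, ratio=2.0, index=3.0, rate=RATE):
    """Two-operator FM: a modulator oscillator bends the phase of a carrier.

    freq  -- pitch of the note in Hz
    ratio -- modulator frequency as a multiple of the carrier
    index -- how hard the modulator pushes (more index = brighter, brassier)
    """
    samples = []
    for n in range(int(seconds * rate)):
        t = n / rate
        env = max(0.0, 1.0 - t / seconds)          # simple decaying volume envelope
        modulator = math.sin(2 * math.pi * freq * ratio * t)
        carrier = math.sin(2 * math.pi * freq * t + index * env * modulator)
        samples.append(env * carrier)
    return samples

# A 440 Hz "A" with a metallic, FM-synth character:
note = fm_tone(440.0, 1.0)
```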
Now consider how both modems and soundcards relate to audio! In digital audio, there is a fixed “speed,” or sampling rate, at which the state of each momentary “frame” of sound is captured or re-created. The Audio CD standard, for instance, stores audio at 44,100 samples per second (44.1 kHz). Telephone modems, by contrast, go as fast as the phone-line quality will allow them to go. Every phone call might take a different route from end to end, and factors like line length matter. That’s why it takes 10 seconds of strange noises to establish a dial-up connection: there are test signals going back and forth, feeling out the thresholds of how fast data can be transmitted without losing bits. So whereas digital audio has to deal properly with all the sounds in the human range of hearing at high fidelity, every dial-up modem connection is tailor-fit to the precisely-measured dimensions of quality in that particular end-to-end phone circuit. That’s because the sound doesn’t matter—the preservation and transmission of uncorrupted 1s and 0s is what’s important in data modulation into, and demodulation from, a carrier medium.
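That “feeling out” is, in effect, an empirical measurement of what information theory already promises: a channel’s maximum reliable bit rate depends on its bandwidth and on how clean it is. A back-of-the-envelope sketch using the Shannon–Hartley formula; the bandwidth and signal-to-noise figures are rough, typical values for a voice circuit, not measurements of any real line.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley: the maximum error-free bit rate of an analogue channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A clean voice circuit: ~3,100 Hz of usable bandwidth, ~35 dB signal-to-noise.
print(round(shannon_capacity(3100, 35)))  # ~36,000 bits/s

# A noisier line lowers the ceiling, which the handshake would discover and adapt to:
print(round(shannon_capacity(3100, 20)))  # ~20,600 bits/s
```

It is no coincidence that the first number lands just above 33.6 kbit/s, the practical ceiling of late analogue modems; 56k modems only got further by exploiting the digital portion of the network.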
Over the past half-century, the telephone systems of the world converted from analogue to digital. Now consider how modems communicate *through* networks of digital audio! Or how phone calls are, conversely, sent through data networks via modems! We’re constantly nesting and burying every basic aspect of our technology within another, at every layer.
Sometimes actual purpose-dedicated digital microchips would be doing the work, and sometimes microprocessors—like your CPU—would just run the logic of those microchips as software. When microcomputer designers wanted to take the load off of a CPU, they’d build a custom chip to do specific work. That’s what video accelerator cards were—they drew lines and boxes and filled areas with colour faster than the CPU could (only much later would video cards become responsible for tasks like rendering 3D environments or parallel processing). In the other direction, if you wanted to save money, you’d take the work that a dedicated microchip or circuit used to do and have the CPU do it in software. That’s what software modems, also known as “Winmodems,” were—without the drivers, which were only ever made for Microsoft Windows, the modems were useless. That’s because the software did all the modulating and demodulating—the so-called “modems” were merely cheap digital audio processors. That’s right, they were soundcards! Soundcards with a telephone jack instead of headphone and microphone jacks. In 1997, Intel released the AC’97 audio codec specification, folding software-modem capability into onboard sound designs and eliminating discrete modem circuitry altogether, save for the physical phone jack.
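To see what “taking the load off the CPU” meant in the video case, here is the kind of work an accelerator’s fill-and-blit hardware did, written as the naive software loop the CPU would otherwise have to run: nothing but repetitive writes into a framebuffer. A deliberately simplified sketch; real accelerators handled packed pixel formats, lines, and block copies as well.

```python
WIDTH, HEIGHT = 320, 200  # a classic VGA-era resolution

# The framebuffer is just a flat array of colour values in memory.
framebuffer = [0] * (WIDTH * HEIGHT)

def fill_rect(x, y, w, h, colour):
    """Fill a rectangle the slow way: the CPU touches every pixel itself."""
    for row in range(y, min(y + h, HEIGHT)):
        for col in range(x, min(x + w, WIDTH)):
            framebuffer[row * WIDTH + col] = colour

# 64,000 pixel writes for a full-screen fill -- exactly the busywork an
# accelerator card performed without bothering the CPU at all.
fill_rect(0, 0, WIDTH, HEIGHT, 0x0F)
```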
To finish off our story about video accelerator cards: the manipulation of video was also a task where computer technology evolved around and within pre-existing infrastructure. The CPU of the Amiga line of microcomputers worked at a speed precisely useful for manipulating television video signals—so too did the first Apple computer, in a much simpler way. Digital video required a great deal of signal compression before it was small enough for computers to handle, and MPEG became the compression “codec” of choice (another made-up word like modem, derived from coding and decoding) in early DVD players and early video cards. This was done by discrete microchips; today, video compression codecs usually run on the GPU in PCs, while smartphones, webcams, and televisions still use dedicated MPEG-4 encoding and decoding chips. But this all began 30 years ago, when television providers, both cable and satellite, used MPEG internally to route their video. Eventually, over the past 20 years, digital cable boxes were installed in everyone’s living room to end analogue cable transmission entirely. So what about cable internet? That’s right! Early cable internet systems would modulate and demodulate data within digital video streams. Digital cable modems did their modeming within the MPEG codec!
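The core of MPEG-style compression is small enough to sketch: chop the picture into little blocks, transform each block into frequency components with a DCT, and discard the fine detail the eye barely notices. A toy version of that single step, using NumPy and SciPy; the crude keep-the-largest-coefficients thresholding stands in for real MPEG quantization tables, and there is no motion compensation or entropy coding here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=10):
    """Toy MPEG-style step: DCT an 8x8 block, keep only the largest coefficients."""
    coeffs = dctn(block, norm="ortho")        # spatial pixels -> frequency coefficients
    flat = np.abs(coeffs).flatten()
    threshold = np.sort(flat)[-keep]          # magnitude of the keep-th largest coefficient
    coeffs[np.abs(coeffs) < threshold] = 0    # throw away the small (fine-detail) ones
    return idctn(coeffs, norm="ortho")        # back to pixels: close, but not identical

# A fake 8x8 block of pixel brightnesses with a soft gradient:
block = np.outer(np.linspace(50, 200, 8), np.linspace(1.0, 1.2, 8))
reconstructed = compress_block(block, keep=6)
print(np.round(block - reconstructed, 1))     # small errors: the detail we chose to lose
```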
In this fun story (at least, I find it fun!), we can see how the complexity of our computing world can actually be discussed, given fundamentals and given history. And, further, that it is all proportional, at bottom, to us as humans, and built out of things we made toward human ends. Stories like these give a home to the more technical details. Technical details alone, as abstract engineering specifications, are alienating and inhuman. They seem to exist outside of history, in a world behind the computer screen, divorced from our warm world of flesh and blood and tangible objects. But circuits and microchips are tangible and real, just like telephones and satellites. And they can get pretty hot, too—which is why “laptop” computers were uniformly rebranded as “notebook” computers over the past twenty years. Ouch!
This small snippet also sits embedded in the larger course of technology. Instead of modems, we might have started with radio. Imagine children learning how to make a simple AM radio circuit, understanding radio waves with their hands and eyes. Embed that in a history of FM transmission, explain just what “modulation” means in so many contexts, and you’d be off to the races! Stories about technology are environmental, and belong in the context of our entire material environment and culture.
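Since modulation is the thread running through all of this, amplitude modulation is the simplest place to start: the message signal just scales the loudness of a much faster carrier wave. A minimal sketch; the frequencies are arbitrary round numbers chosen for illustration (a real AM broadcast carrier sits up around a megahertz).

```python
import math

RATE = 100_000  # samples per second, fast enough to represent the carrier

def am_modulate(audio_freq, carrier_freq, seconds, depth=0.5, rate=RATE):
    """Amplitude modulation: the message rides on the carrier's loudness."""
    samples = []
    for n in range(int(seconds * rate)):
        t = n / rate
        audio = math.sin(2 * math.pi * audio_freq * t)      # the message (e.g. a voice tone)
        carrier = math.sin(2 * math.pi * carrier_freq * t)  # the fast radio wave
        samples.append((1 + depth * audio) * carrier)       # carrier loudness follows the message
    return samples

# A 440 Hz tone carried on a 10 kHz "radio" wave:
signal = am_modulate(440, 10_000, 0.01)
```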
Should everything abstract and technical in our lives be told in this style? With historical context and human form? I’d like to see it done—we’d shed a great deal of helplessness and anxiety by grounding more of the technical within the stories of our lives and our world.