January 28, 2018

Ten Minutes Ahead of My Time, Part 415


I wasn’t always as suspicious of (and resistant to) the digital age as I am now. When I was a junior in high school (this was circa 1981), I signed up for the first computer class my school ever offered. Basic Computer Concepts, or BCC, it was called, and it was offered through the math department. It was taught by a small, balding, nervous man who generally taught algebra and seemed to sense from the beginning he was in over his head.

            There was only one section of the class offered that first year, and about fifteen of us signed up. We were all geeks, we all knew each other from other geeky classes, and we saw little choice but to sign up, given there was nothing geekier than computers. The first primitive home computers had only been on the market a couple of years at that point, and remained for the most part either glorified video game consoles or typewriters with pretensions, but they were all the rage.

            That early on there was only so much the teacher, Mr. Snow, could do. So he taught us how to make flow charts. Then he cleverly morphed that groundwork into writing simple BASIC programs. Then he began working in some symbolic logic until the BASIC programs morphed into FORTRAN programs. That was as far as we got. We spent precious little time on computers themselves (mostly Apple IIs hooked up to TV monitors), and our final project involved designing a very simple X-and-Y axis game along the lines of Battleship. Mine, I remember, involved a hunt for the mythical Spambeast, while my friend Norb’s was a search for the remains of a dead baby.

            As the rest of us moved on, two or three kids from that class got seriously hooked on computers, spending hours every day after school fiddling with programs and new games. Mr. Snow mentioned to me in the hall once that they’d already moved way beyond what he could teach them.

            Well, it was good timing. Come the next semester, demand was such that Mr. Snow was forced to offer four sections of BCC, with the class size swelling to twenty-five or thirty students in each. The semester after that it was the only course he was teaching. Then he had a nervous breakdown and someone else had to take over.

            I wasn’t terribly smitten with computers at that point, even turning down my parents’ offer to get me a Commodore 64 before I headed off to college. Still, as I read more philosophy I was intrigued by the potential of the coming digital age, especially as some contemporary philosophers began describing the human thought process in computer terms. While I never bought into that, considering that historically human thought had always been described in terms of the latest dominant technology (hydraulics, electricity), I did get the sense computers were about to become much more central to our daily lives than any of us realized.

            After graduation I went to the University of Chicago thinking I was going to study physics. Realizing shortly thereafter I was mistaken about that, I switched to philosophy. Then at the beginning of my second year I was presented with a third option. The U of C offered an independent study program in which you could design your own interdisciplinary degree. The basic idea was you would come up with a question or idea, do a ton of research involving a variety of fields, and write a long thesis about it, and that would earn you a degree. One of my professors pointed me in that direction, and it sounded like a pretty good and easy way out of more math classes, so I decided to give it a shot.

            Now, here’s the thing. I have no idea where this idea came from. This was 1984. Apple had just introduced the Macintosh, which struck me as a new gizmo and little more. Forget smartphones—no one had heard of the Internet or the World Wide Web or email or even computer bulletin boards yet. Nobody in the mainstream had, anyway. But somehow and for some reason I had a hunch. Maybe I vaguely recalled that reference in Mr. Pynchon’s novel V. to a future in which we’d all have access to a connected network of information machines, but I doubt I was that clever. Maybe it was simply because I remembered one tiny section of that high school computer class involved working on a small network of stations, and discovering we could send brief private messages to classmates if we knew where they were sitting. It was a primitive form of email, but maybe that was all it took to see something bigger. I simply don’t remember anymore.

            Anyway, when it came time to make my pitch to the head of the independent study program, I stepped into a dusty and cluttered office illuminated by three tall, arched, unwashed windows, and took a seat across the table from an enormous bearded professor with unkempt hair and a heavy brown sweater. The ashtray next to him was full. He’d been sitting there all morning listening to proposals from dorks like me, and it was clear he had yet to be impressed. His head was leaning on his hand, and as I spoke he just stared, his features drooping with existential boredom.

            Only slightly daunted by this, I went on ahead without notes to explain that up to that point anarchist ideals had only been realized on a very small scale, usually taking the form of a collective farm. Anything bigger than that just seemed to fall apart. I then hypothesized that the coming computer age offered the potential, for the first time in history really, for an honestly viable form of anarchism that would not only work, but work on a global scale. You get enough interconnected computers, and you open the door to a real grassroots movement in which people all over the world would be in contact, would have a voice, would be able at last to undercut governments and corporations to make their own policy decisions. It was possible that the evolution of this new technology in the hands of the people would give rise to a true democracy, without any need for chosen leaders or overarching power structures.

            Yes, it was my last burst of anarchist idealism. To be honest I was much more interested in the anarchist angle than the technological angle, but the latter seemed a necessary hook that would get the idea over the transom. I had no understanding of the technology involved, but learning about it would be a central element of the paper I had in mind.

            My pitch lasted about ten minutes during which his expression never changed, his jowls never left his palm, and he didn’t say a word. He just stared. When I was finished he said, “Okay, thanks,” and that was that. I thanked him for his time and left the office to make way for the next loser. I felt like an idiot. Maybe that was the crux of their whole game, right? It was a dream program, simple as pie with a U of C degree at the end of it, so they were going to make you feel like an imbecile for even trying.

            Never did learn whether or not I had been accepted into the program, because before those decisions came down it became clear I could no longer afford to continue at the U of C, so I transferred to the much cheaper University of Wisconsin. I think the humiliation of that pitch purged the whole stupid “anarchism in the digital age” idea clear out of my head, and once I got to Madison I embraced the much more rational nihilist stance instead. Anarchists were so fucking boring anyway, what with their stupid boring collective farms.

            A decade later in the early Nineties when my friend and editor Derek started telling me about the wonders of this still-embryonic “world wide web” thing, I never for a moment recalled that old thesis proposal. Derek went on to post what might have been the Web’s first general interest mag, and other friends started telling me about these crazy message boards they were reading. Despite that dim glimmer of something I’d had ten years earlier, I dismissed it all as some kind of fad.

            When the Internet finally exploded and people everywhere were touting it as a true democratic tool, something that would give everyone a voice, a platform, an equal say, I just thought, “Yeah, and how long will it be before governments and big corporations move in and crack down hard once they learn how to make money off it?” Shortly thereafter the paranoia and contempt set in. What are you all gonna do, I asked repeatedly, when the power grid goes down?

            I had completely forgotten about that old stupid idea of mine until a couple of nights back, when I heard a radio interview with Jonathan Taplin, author of the new book, Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy.

            “Hey, wait a second . . . ” I started to think when the host announced the book’s title.

            The first thing Taplin brought up was how a big consumer electronics and computer convention in Vegas the previous week had been thrown into chaos when the power went out.

            “Hey, wait a second . . . ” I thought again.

            Most of the interview, however, focused on those early idealistic anarchist dreams of a wide-open internet, before they were crushed by government and corporate interests.

            That’s when that old would-be paper came back to me for the first time in nearly thirty-five years, and for an instant I wondered if this Taplin fellow was actually the fat and bored professor who listened to my hapless pitch. Then I decided that was extremely unlikely.

            Now that I remember that short-lived proposal, I can’t help but think if I could have afforded to stay at Chicago, and if I’d made it into that program, and if I’d written that paper, why, I might be considered some kind of tech prophet these days, ruling over some multi-billion dollar empire instead of being another idiot Luddite hack. As always.


You can contact Jim Knipfel at this address:

With occasional exceptions Slackjaw generally appears weekly. For email notification of other Jim Knipfel publications (books, etc.) and events please join the Slackjaw email list here.