Incomplete Essays

Approaching the Singularity (Eventually)

This is a work-in-progress. I am still torn between the many directions this piece could go. Moreover, there are a number of factual errors within this piece that I hope to clean up eventually.

Wheels keep spinning around and round… As I sleep, I dream of electric sheep. Their bleating is that of popping toasters, bleeping Facebook messages, and the wailing sirens of police cruisers. Each has a programmed sign etched into its ivory fleece. These signs are made entirely of neon-colored LEDs, which burn bright images of familiar numbers into my brain. 1, 2, 3, 4, 5… They keep moving through my mind's eye, jumping over an imaginary fence for my counting pleasure. I know that somewhere in the background is a factory pumping these babies out—mass producing them for my own benefit—but I don't hear, smell, or see this factory. Cake jams out in the background, making counting less of a monotonous task: And the muscular cyborg German dudes dance with sexy French Canadians / While overweight Americans wear their patriotic jumpsuits / Wheels keep on spinning round.

My sleeping mind wonders if sentient machines will dream of such things. Will a machine worry about the next car payment? Will a machine worry about its blood pressure or about its subpar sex life, while masturbating and using the last of the hot water in the shower? These are questions I have about the post-Singularity world. Will a machine commute to a nine-to-five cubicle job, wishing, hoping for a mass shooting to take place in the office? Will the machine have to ask the boss for a raise in order to feel like it has gotten somewhere in the world? Will a machine doubt climate change because it snows outside?

The Singularity is described in a number of ways by today's futurists, but it boils down to a few basic ideas. The Singularity is often seen as a point at which technological progress has a snowball effect on human civilization and on the human body itself. This theoretical event suggests that our technological progress will allow us to cheat death, build sentient machines, and live on a planet that can be modified or hacked to work more efficiently and effectively. The Singularity will be a pivotal moment in our species' evolutionary history—or so the futurists say. Some suggest the Singularity could very well destroy human beings altogether, making us obsolete and paving the way for something better, stronger, faster, and more efficient than the older models. Instead of walking around in organic meat sacks, we'll be cruising around in sleek machine bodies or digital representations of our former selves. Neal Stephenson points out that the Singularity has a number of quasi-religious undertones, making it harder to take this theoretical event seriously.

I can never get past the structural similarities between the Singularity prediction and the apocalypse of St. John the Divine. This is not the place to parse it out, but the key thing they have in common is the idea of a rapture, in which some chosen humans will be taken up and made one with the infinite while others will be left behind. (Some Remarks, 25-6)

Might we suppose that the Singularity, like any period of technological change, will exclude certain peoples or nations across the globe? In other words, will the Singularity unfold unevenly, taking hold in more developed nations before spreading to less developed countries? If this is the case, does the Singularity really change anything? Do we still face the same politico-economic paradigms that dominate our modern world?

So what do we call these things that come out of the Singularity? Will they be humans? Will they be machines? Or will they be a mixture of the two? The likely answer is that those coming out of the Singularity will be a mixture of inorganic and organic components, making the survivors of this event neither human nor machine. Ray Kurzweil calls these individuals Singularitarians. It's a label that has yet to catch on. Maybe it needs a T-shirt and a TV show to gain some traction. Whatever happens, those coming out of the Singularity alive won't be recognizable. They'll be aliens with new cultural norms and bodies straight out of the coolest sci-fi flick. If flesh-and-blood humans still exist, they'll be viewed as flesh-and-blood fossils. I'm sure these humans will find homes in the new museums and living-history exhibits built by the Singularitarians.

When exactly this technological event will take place appears to be a significant point of contention. In 1993, Vernor Vinge, a computer scientist and science fiction writer, claimed that the Singularity was at most thirty years away (i.e., by 2023). Ray Kurzweil pegs it at 2045. Thus, we're left scratching our heads over when exactly this event will take place—if it occurs at all. I believe Neal Stephenson offers the best answer concerning the technological singularity described by Kurzweil, Vinge, and those transhumanist or extropian thinkers who frequent science and science fiction conventions across the nation.

My thoughts are more in line with those of Jaron Lanier, who points out that while hardware might be getting faster all the time, software is shit […] And without software to do something useful with all that hardware, the hardware’s nothing more than a really complicated space heater. (Some Remarks, 26)

To best illustrate what Stephenson is talking about, just consider HEALTHCARE.GOV. The original website reportedly contained some 500 million lines of code, and it was complete shit. It didn't work like it was supposed to. Even the best programmers couldn't fix it. Programmers working on operating systems and video games see similar problems. Code is full of bugs. Moreover, code is expensive and takes far too long to create. The technical know-how needed for coding also limits what code can do and where it will go. All of this makes it even harder for me to imagine a not-too-distant future in which human bodies have been upgraded with the newest hardware or swapped out for a sleek machine. How does this hardware work if the software is complete shit? Can you imagine having hardware in your body that has software issues? Can you imagine a world where vital organs start crashing left and right because of a software glitch? The very idea of the Singularity hinges on software developing at the same clip as hardware. However, recent experience suggests that software still needs to catch up. Maybe the Singularity will be full of cool hardware but shitty software. Maybe that means we'll need a Singularity 2.0 that will bring kickass software to the party.

Friedrich Nietzsche once observed that man is a slave to the state. If Nietzsche were alive today, he might say that humans have found a new taskmaster: information-communication technology, or ICT. We are bound to the literal and proverbial whims of this new taskmaster. Our world has become increasingly dependent on its mediation. ICT permeates every crevice of modern society, including manufacturing, warfare, politics, advertising, pornography, shopping, dating, healthcare, and, now, learning in public and private institutions. Calling ICT simply the new taskmaster doesn't seem to capture what is really occurring in the world we humans have been building for more than thirty years. Die Informationstechnologie ist das Opium des Volkes: information technology is the opium of the people. It has sunk its roots into everything, leeching off and growing in every place imaginable—from developed nations like the United States to the hermit kingdom of the Democratic People's Republic of Korea.

In order to understand the mass adoption of IC technology, we need to understand how technology develops. To laypersons looking at the development of computers and the rise of the Internet, it can all seem like a coincidence—an unexplainable series of events outside of time and space. However, this view dehistoricizes IC technology. Like all technology, ICT was built on the backs of much older technologies.

The most significant technological development in human history happened between 100,000 and 50,000 years ago, when humans developed speech. It is no coincidence that humans also happened to develop tools during this early period in our history. This was followed, some 5,000 years ago, by the development of writing. The first writing was stored on clay tablets, wood, or special wax. This was humankind's second information-communication breakthrough. The third boom took place in Egypt with the development of papyrus, which made writing cheaper and more accessible. In essence, the Egyptians developed the first inexpensive proto-hard drive, storing information in a verbose, archaic language that is inefficient compared to today's languages. It was not long afterwards that bound manuscripts became popular. However, these manuscripts were severely limited: 1) copying a single manuscript took days, weeks, or possibly months; 2) it required a significant number of handmade resources such as ink, vellum, or early paper; and 3) it needed an army of trained experts to produce a single volume. The development of movable type, some 500 to 1,000 years ago, swept away the inefficient methods of the handwritten manuscript. Books could now be produced at a fraction of the cost and effort. The number of books skyrocketed, with over a billion printed in Europe alone during the nineteenth century. But books could not be easily transported. They required extensive roads, secure sea passages, and things like horses, trains, and sailboats. Although books were state of the art, they still suffered from data transportation costs and speeds. The invention of the telegraph (a European invention, not an American one) solved this problem, to an extent. You could now transport information over wires using electrical signals.

These electrical signals moved faster than anything else in existence. However, much like early writing, Morse code was verbose and inefficient, making it difficult to use. Moreover, telegraph operators needed to be trained experts. Thus, much like in the days of scribes, telegraph operators existed as a specialized class tasked with handling the world's communication. The ability to send more and more information over wires or the airwaves soon followed. These methods were more efficient and even faster than the telegraph, which had been state of the art in its day. One could listen to broadcasts, audio or video or both, from the comfort of home. You could speak to friends or family members with a little help from something called the telephone. However, these technologies were still limited in the amount of information they could transport. Phone lines were tightly controlled by telecom corporations, which overcharged for things like long distance. Even audio and video broadcasts were limited by the very components that allowed them to operate. The age of silicon was just around the corner.

It's the manipulation of silicon that brings about one of the most important technological booms in human history. Silicon is cheap. You can find silicon almost anywhere. Moreover, silicon does things that vacuum tubes could never do. You can miniaturize electronics. You can produce cheaper, faster, and more efficient processors, making way for computers. Silicon-based transistors make it possible for the personal computer to exist. They allow our smartphones to overpower and run laps around the high-tech computers of the 1960s and 1970s. Our Texas Instruments calculators—the ones used in college and high school classrooms across the country—have computing power that surpasses anything the United States military possessed during much of the Cold War.

To better illustrate what has happened during the Silicon Age, we need to focus on Moore's Law. Moore's Law was formulated by Gordon Moore, a chemist by training and a pioneer in the ICT industry. Though first published back in 1965 in Electronics magazine, the law still haunts the ICT industry to this day. Under Moore's Law, computing technology advances along a simple, predictable exponential curve: every eighteen to twenty-four months, the industry doubles the number of transistors on an integrated circuit, increasing computational power while driving down the price per transistor. That steady doubling made the modern computer what it is today. Since the Internet went public in the 1990s, it too has benefited from this predictable progress, creating a transnational network for communication, information, and business unlike anything in history.
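The compounding at the heart of Moore's Law is easy to sketch. The snippet below projects transistor counts under an assumed fixed two-year doubling period, starting from the Intel 4004 of 1971 (roughly 2,300 transistors); the baseline and the doubling period are illustrative assumptions, not figures from this essay.

```python
# A back-of-the-envelope sketch of Moore's Law, not a model of any real
# product line. Assumed baseline: Intel 4004 (1971), ~2,300 transistors.

def moores_law(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count, doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011):
    print(f"{y}: ~{moores_law(y):,.0f} transistors")
```

Twenty years of doubling every two years is a factor of 2^10, roughly a thousandfold; forty years is about a millionfold. It is that compounding, rather than any single breakthrough, that the futurists lean on.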

However, Moore's Law has a downside. We're currently watching the law run up against the laws of physics. It is getting harder and harder to build smaller transistors, thanks to quantum effects and heat dissipation at near-atomic scales. Moreover, building smaller transistors in increasingly denser clusters is costing billions in R&D dollars. This could mean companies like IBM decide to call it quits. Maybe we'll stop making smaller transistors—and if that's the case, the miniaturization of electronics could be put on hold. Inexpensive computing technology could go the way of the dodo, meaning the Singularity could be dead on arrival before it manages to gain traction.

When I was in high school, Internet Explorer (IE) still ruled the Web with an iron fist. I began to see it as proprietary bloatware, with more vulnerabilities and leaks than a Third World navy. So I changed over to Mozilla Firefox. Firefox was the cool kid on the block. It was the browser my high school technology office hated. Everyone wanted to install Firefox. The installation of Firefox on network computers became a heated battle between IT technicians and end users. Before too long, Firefox could be seen on many of our school's desktops, sharing space with Internet Explorer. Then there was a collective shift toward Firefox. People stopped using IE altogether. It was like a mini revolution had taken place on everyone's computers. IE was easily toppled in this bloodless coup, with little hope for an IE-inspired countercoup. People started using Firefox for everything. On occasion, people still fell back on Internet Explorer, especially for websites that were incompatible with Firefox. However, there was a general feeling that Firefox was the future. It had cool add-ons; it ran like a smooth-running Cadillac; it was open source; and it didn't have the same bugs that IE had.

For me, Mozilla Firefox was a way of resisting what others wanted me to use. My father hated Firefox. He kept telling me to use Internet Explorer. He stuck with IE when everyone else fled to Firefox, insisting IE was vastly superior. I decided that installing and using Firefox was a way of telling the old man, "Up yours!" Firefox opened doors to the seedier, darker parts of the Internet—or at least what counts as seedier and darker for most teenagers. I pirated movies and music with relative ease. I could modify my browser without having to deal with the nanny OS. It also allowed me to delete my Internet history with a click of a mouse, making it harder for my parents to see what I was up to. Firefox gave me my first taste of anarchism. It was great.

My love affair with Mozilla Firefox didn't start to wane until early in graduate school. The browser was getting slower and slower. Its vulnerabilities were beginning to show. Firefox, it seemed to me, was becoming the new IE: complacent about its vulnerabilities and security errors. The add-ons seemed to be hacked or attacked on a daily basis. I felt disillusioned with the software that gave me my early freedom on the Internet. Then, last month, I started noticing problems with my computer. Firefox kept getting slower and slower, reaching glacial speeds that seemed unnatural for the Internet Age. It would often crash or leave me with a white screen and nothing else. It began sucking up more and more of my computer's resources. The very thing that gave me my Internet freedom was beginning to feel like a parasite, one that was draining my enthusiasm for the Internet. It was becoming harder and harder to watch videos and publish blog posts. Even going on Facebook became tedious.

I stuck with Mozilla Firefox as long as I could, but I finally decided to change browsers. It was almost heart-wrenching to give up Firefox. It had been my closest companion on the Internet. It provided a safe place for me to explore the Web's weird nooks and crannies. I finally settled on SlimBrowser. I can already feel the differences. Still, I miss that orange Firefox icon. It has been a familiar sight throughout the years.

SlimBrowser feels like something pulled out of a time machine. It doesn't have the aesthetic that came with Firefox. I feel like I've gone backwards in technology, like I've dug up a dinosaur to cruise the Internet Superhighway. I guess this might portend what the future holds for technology, or even for technological advancement. If we're constantly going back to what simply works, where's the incentive to push toward the bleeding edge? Is the Singularity doomed? And, going back to the software issue, does the Singularity have a software problem?

During my sophomore year in college, I bought an Apple computer. I still have no idea why I bought a MacBook. I'm guessing it had to do with my naivety concerning the Cult of Jobs and Apple's incessant declarations that viruses and spyware don't target its machines. I dished out two grand on my MacBook, adding all of the bells and whistles Apple offered. It was the most expensive computer I've ever purchased. I opened the white box and had the computer booted up in seconds. It was my first foray into all things Apple. I was introduced to the closed garden that is Mac OS X. There were Apple apps for everything. I didn't go to Microsoft for word processing, nor did I go to Adobe for a PDF reader. Apple had everything I needed. The closed garden Apple created was oddly comforting to me. Even Apple's audacity in charging for new versions of its software that offered little in the way of upgrades didn't alienate me. I enjoyed updating my MacBook for $29.99. I even believed the party line that Apple computers never experienced problems with viruses or spyware.

My MacBook was a sort of quasi-authoritarian state. I accepted what Apple had to say. I didn't question Apple's logic. I used Apple-approved software, as third-party software wasn't supported and was often blocked or hindered. I refused to download and install antivirus software. I didn't look under the hood. I stayed with the apps I was most familiar with.

When my MacBook finally gave out, I could see the light, so to speak. It opened my eyes to something I'd never thought about. I began seeing technology as a closed box, especially the proprietary hardware that dominates the market. This made me rethink my position on the Singularity. I had been an avid supporter of the movement. I wanted to cheat death and have my body uploaded to a computer or a cool-looking robot. But who was going to build the Singularity? Apple? Microsoft? Google? Would the Singularity offer only a closed (controlled) garden for consumers? Would I be able to move around with relative ease and even dabble in hacking my own body or mind? Or would I have to deal with proprietary hardware and software restrictions?
