The Telegraph vs. The Nanosecond

Perhaps one of the most poignant examples of a technology not only altering the behavior of a populace, but also having a permanent effect on the trajectory of the “thought-world” of its time, is the telegraph. James Carey, in his essay “Technology and Ideology: The Case of the Telegraph,” presents an excellent argument for that innovation as America’s tipping point into the industrial revolution — both an enabler of so many other advances and a catalyst in a shift of human ideology. The telegraph, argues Carey, was responsible for a reconceptualization of what a business looked like, what language sounded like, and what role electricity would play in the consumer goods market. Once railroads no longer depended on riders on horseback to change signals along the rails, trains could move much faster and at greater frequency. Humans could communicate with one another from across the globe, and do so, for the first time, without requiring any major travel by any party. A global synchronization began — from railroad timetables to city halls to financial markets.

In 1858, Charles F. Briggs and Augustus Maverick explored the meaning of the telegraph to humanity. They asserted (as cited by Carey) that “It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for an exchange of thought between all nations of the earth.” The “annihilation of time and space” inspired more than just new business ventures. It inspired hope.

Further, it enabled speed. Humans could travel faster, communicate faster, even trade faster. As Carey notes, “time has been redefined as an ecological niche to be filled down to the microsecond, nanosecond, and picosecond — down to a level at which time can be pictured but not experienced.” This is the product of a global time synchronization — itself a result of our relatively new ability to travel quickly across time zones, delineations developed only after the introduction of the telegraph itself — and of the synchronization of financial markets. Carey calls out the example of commodities traders in the American Midwest in the early and mid-19th century. Markets whose prices lagged “two years behind Eastern markets” in the 1820s soon lagged by only months, thanks to the speed with which prices traveled from New York or England. By 1857, the price shifts were virtually instantaneous.

Today, the inevitable requirement to keep apace with the speed of electronic markets has led to the advent of algorithmic trading — the tracking, buying, and selling of financial products via preprogrammed equations based on the logic of these markets. In the 1970s and 1980s, as banks and other firms began to turn their focus to the electronic modeling of these equations, it became imperative to put the technology to do so into the hands of individual employees. Thus, when Dan Bricklin and Bob Frankston released the first electronic spreadsheet software, VisiCalc, in 1979, the change catalyzed by the telegraph shifted into high gear.
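To make the mechanics concrete, the sketch below shows the shape such a “preprogrammed equation” can take. It is a minimal illustration only, assuming a classic moving-average crossover rule; the window sizes and price series are invented for this example and do not represent any firm’s actual strategy.

```python
# A toy "preprogrammed equation" in the spirit of algorithmic trading.
# Illustrative assumptions only: the crossover rule, window sizes, and
# price history are invented, not any firm's actual strategy.

def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def trading_signal(prices, short_window=5, long_window=20):
    """'buy' when the short-term average rises above the long-term
    average, 'sell' when it falls below, 'hold' otherwise."""
    if len(prices) < long_window:
        return "hold"  # not enough history to evaluate the rule
    short_avg = moving_average(prices, short_window)
    long_avg = moving_average(prices, long_window)
    if short_avg > long_avg:
        return "buy"
    if short_avg < long_avg:
        return "sell"
    return "hold"

# Feed in a made-up, steadily rising price history.
history = [100 + 0.1 * i for i in range(25)]
print(trading_signal(history))  # -> "buy": the recent average leads the rise
```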

Along with the spreadsheet came the ability to complete “better analysis faster” (Wain). With repetitive cells, built-in equations, and instantaneous cross-references, calculations “were automatically checked by the computer” (Wain). Soon, the complexity of the models the software (and its successors, Lotus 1-2-3, Microsoft Excel, et al.) allowed became a measure of value: models with more complexity were heralded as more legitimate. Financial markets began speculating based on these models, depending on sensitivities to convey a sense of sureness or proof. As one investment banker put it, “it was this idea that we could sit with [a client’s] CEO and say ‘we thought of everything’” (Saunders).
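The “sensitivities” such models depend on are, at bottom, repeated recalculation: change one assumption and watch every linked cell update. The sketch below imitates that spreadsheet behavior in a few lines; the toy valuation formula and the growth and discount rates are invented assumptions, not drawn from any real model.

```python
# A spreadsheet-style "what-if" sensitivity table, reduced to a sketch.
# The valuation formula and all rates here are invented for illustration.

def projected_value(revenue, growth_rate, discount_rate, years=5):
    """Present value of `years` of revenue growing at `growth_rate`,
    discounted at `discount_rate` (a deliberately simple model)."""
    total = 0.0
    for year in range(1, years + 1):
        cash_flow = revenue * (1 + growth_rate) ** year
        total += cash_flow / (1 + discount_rate) ** year
    return total

# Recompute the model across a grid of assumptions, as linked cells would,
# to show how much the headline number moves with each input.
for growth in (0.02, 0.05, 0.10):
    for discount in (0.08, 0.12):
        value = projected_value(1_000_000, growth, discount)
        print(f"growth={growth:.0%} discount={discount:.0%} -> ${value:,.0f}")
```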

This complexity required a new kind of thinking, and individuals with the ability to do it. Firms began to bring in physicists to build models on a proprietary basis. One such individual was Emanuel Derman, a physicist who began his career at Bell Labs but quickly became “bored” and moved on to Goldman Sachs. In his 2004 autobiography he writes of taking a “many-sided view of risk” that was a “never ending enterprise.” He notes that “So much of financial modeling is an exercise of the imagination” (257). Nearly ten years later, it is not uncommon to read of “flash crashes” within the market, brief but significant dips caused by a bug within algorithmic trading code (Nanex). Certainly, the financial collapse of late 2008 can be blamed in part on an overconfidence in complexity within our markets (The White House). These occurrences are the product of the marriage of imagination and risk, especially in the context of our Technopoly — a place where our markets conform to the technology available to them. As Steven Levy wrote in a 1984 Harper’s feature on electronic spreadsheets, “those who use them have a tendency to lose sight of the crucial fact that the imaginary businesses they create on their computer are just that — imaginary.”

Actual businesses constantly depend on the kind of “imaginary” modeling performed with devices such as the electronic spreadsheet — of patents, customer data, sales forecasts, and so on — to aid decision making. The increased capability of our technology allows us to consume and process more information, information which is being produced by that same technology. But are we put in a better position to make decisions on our own, or do our instruments interfere with our ability to do so? That depends on the forces acting on those tools.

Ted Turner vs. The Sensational Media

“I wanted to start the Cable News Network,” noted business mogul and CNN founder Ted Turner in 1979, “because I felt that America needed an in-depth voice in what’s going on in the news” (Oprah.com). One year later Turner launched the 24-hour cable news channel as a way to reach individuals who were not able to watch the news on the existing schedule (5 p.m., 6 p.m., and 11 p.m.) of the “big three” networks: ABC, CBS, and NBC (Oprah.com). Having already built his television empire by taking advantage of the UHF broadcast spectrum (one the FCC had recently made available; all of the major networks were on the legacy VHF), Turner understood what it meant to pioneer a new industry via a new technology. “With satellites today,” he noted, “there’s no reason why people should be ignorant about what’s going on in the rest of the world” (National Constitution Center). With little programming to speak of on most cable providers, CNN broke ground by broadcasting what it considered to be valuable information 24 hours per day.

With costs for both physical (shipping and traveling) and virtual (teleconferencing and Internet-enabled sharing) connections so low thanks to new technologies, companies (including media networks) began expanding rapidly throughout the globe. Eventually, after expanding globally via CNN International, Turner’s network was sharing information across the planet, part of a much larger globalization trend seen throughout the business world. Turner and his contemporaries heralded this expanded network in strikingly similar fashion to the telegraph’s advocates — the sharing of information would be a means through which humanity would find common causes and reduce violence and strife. As Turner put it: “I am the right man in the right place at the right time, not me alone, but all the people who think the world can be brought together by telecommunications” (Henry).

The means to communicate and share information gathered via satellite and broadcast into homes was a realization of Marshall McLuhan’s “Global Village,” a place where “electric speed [was] bringing all social and political functions together in a sudden implosion [that] heightened human awareness of responsibility to an intense degree” (5). Turner, at the helm for the network’s first 20 years, believed that CNN could be a catalyst in this heightening through its objective and “fact-based” reporting on national and international affairs. As the media landscape has changed, however, an emphasis on profit making has forced cable news into a different place than Turner envisioned.

At the halfway point of 2012, CNN sat low in the ratings for 24-hour cable news networks, trailing far behind both Fox News and MSNBC, two networks that had emerged years after CNN’s initial launch. Much commentary was written about the “fall” of the once heralded cable news giant, often quoting official CNN spokespersons espousing the value of the network as a source for “non-partisan, quality journalism” (Byers). Certainly, this defense stemmed from the organization’s attempt to differentiate itself from the ideologically driven Fox News and MSNBC (conservative- and liberal-leaning, respectively). But further criticism has been lobbed at the network for veering away from the type of “news-gathering” for which it was originally known (Byers). Even Turner himself has stated in recent years that he would like to see less “fluffy” news and “more environmental news and more international news . . . a little more substantive” (Shea).

Whether the root of the CNN’s ratings woes is based on the network’s lack of focus on “international news” or is too focused on staying “non-partisan,” the fact remains that it is unable to garner the share of audience it once could. “There’s tremendous pressure to get people to watch these channels so they can sell advertising for a higher price,” Turner told Tom Brokaw in 2011. “So they go to more sensational — to me, trivial — programming” (US Zeitgeist 2010). And, as the media industry consolidated, the type of disruption which Turner was able to accomplish in 1980 was no longer possible. In the July/August 2004 issue of Washington Monthly, Turner writes, “What will programming be like when it’s produced for no other purpose than profit? What will news be like when there are no independent news organizations to go after stories the big corporations avoid?”

As with the telegraph, cable and satellite communication was viewed as a possible bridge between global parties unfamiliar with one another’s cultures, and the new technology could well have facilitated informing these groups about one another. The considerations of capital, however, complicated and compromised the original vision. Cable news is also an inherently one-way communication. The evolution of technology has since led to the design and curation of “interactive” experiences — those which allow us to both consume and create, letting us realize more complex discourses across a plethora of channels.

The Memex vs. Facebook

Vannevar Bush’s and Mark Zuckerberg’s visions shared a lofty goal: the betterment of mankind. One innovator, having just finished developing the atomic bomb, felt that the emerging technology of the time could usher in a level of enlightenment, one that would encourage humans to grow in the wisdom of experience — rather than regress and kill. The other, with his penchant for tinkering with the latest software and networking tools, believed that a society with liberal policies on sharing personal information would encourage connections between neighbors and international strangers alike. Each of these innovators used similar building blocks to conceive and construct his respective tool — Zuckerberg’s Facebook having the luxury of nearly 60 years of technological developments on Bush’s never-built “Memex.”

As Bush emerged from his work on the Manhattan Project in 1945, he sought ways to apply the era’s rapidly developing technological advances to academic, rather than military, objectives. He focused on research techniques, fueled by his belief that people were not equipped with the proper tools to collect, consult, and share findings. As he wrote in The Atlantic Monthly, “Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems” (47).

Bush was mainly concerned with using technology to replicate — or supplement — processes of the human brain. As such, the collective value of the individual technologies was fully illustrated once they came together in his vision of the Memex, a desk-enclosed mechanized system for the collection, retrieval, review, and consultation of research. Central to the Memex concept were trails of data: associative, rather than alphabetical or otherwise indexed, content. Modeled on the mind’s process of thought association, individual pieces of research were collected from disparate sources, transferred to and from a storage medium (microfilm), and tagged with unique identifying “addresses.” Records of trails could also be shared or duplicated so that peers and colleagues would have the ability to view and annotate each other’s research (44–46).
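Bush described the Memex in mechanical terms, but its trail structure translates naturally into a modern data structure. The sketch below is one speculative rendering: the class names, fields, and addresses are assumptions made for illustration, not anything Bush specified.

```python
# A speculative rendering of Bush's associative "trails" as a data structure.
# Class names, fields, and addresses are illustrative assumptions only.

class Record:
    """One piece of research stored at a unique address
    (the analogue of a microfilm frame)."""
    def __init__(self, address, content):
        self.address = address
        self.content = content

class Trail:
    """An associative chain of record addresses that can be
    annotated, duplicated, and shared with colleagues."""
    def __init__(self, name):
        self.name = name
        self.addresses = []    # order of association, not alphabetical order
        self.annotations = {}  # address -> marginal note

    def link(self, record):
        self.addresses.append(record.address)

    def annotate(self, address, note):
        self.annotations[address] = note

    def duplicate(self):
        """Copy the trail so a peer can view and extend it independently."""
        copy = Trail(self.name + " (copy)")
        copy.addresses = list(self.addresses)
        copy.annotations = dict(self.annotations)
        return copy

# Associate two sources, annotate one, then share the whole trail.
a = Record("frame-001", "Briggs and Maverick, 1858")
b = Record("frame-002", "Carey, 'Technology and Ideology'")
trail = Trail("telegraph research")
trail.link(a)
trail.link(b)
trail.annotate(a.address, "utopian rhetoric around the telegraph")
shared_copy = trail.duplicate()
```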

While never fully realized as a physical product, many features of the Memex have been implemented throughout today’s technological advances. Voice recognition software, camera and lens miniaturization, and Sir Tim Berners-Lee’s Hypertext Markup Language (HTML) can all be tied back to proposals Bush makes in his Atlantic article, entitled “As We May Think.” Perhaps one of the most encompassing contemporary examples of the Memex, however, is Facebook, launched in 2004. Just as Bush hoped that his Memex trails would lead to the elevation of “man’s spirit,” Zuckerberg’s vision for Facebook was one in which “the world [is] more open and transparent, which . . . will create greater understanding and connection.” Zuckerberg and his team believed they could achieve this by encouraging the use of their technology to promote “Social Value,” “Common Welfare,” and “One World” — among other principles (“Facebook Principles”).

A Facebook user’s ability to upload thoughts (or “notes”), photographs, and videos mimics Bush’s vision of a multi-media recording tool for researchers. The site’s “Like” buttons allow users to mark any content they find online and place a reference to it on a Facebook profile page. These buttons, along with comment threads, act as consultation tools, allowing connections to declare whether they share the same opinions on, or approve of, a piece of content. Finally, users are encouraged to share content via the Facebook platform, either with the general public (via the News Feed) or with a specific connection via a Facebook message. The Facebook platform (encompassing Facebook.com, mobile applications, and any technology utilizing its application programming interface) uses all of the data to which it has access to create its own version of Bush’s Memex trails. If Berners-Lee’s World Wide Web is an implementation of “a sea of interactive shared knowledge” inspired by Bush’s Memex (1995 Vannevar Bush Symposium), then Facebook’s intelligent recommendation and filtering engine is a Memex trail created on the fly.

It is this engine that drives the core of Facebook’s value. Consider Neil Postman’s information glut versus Clay Shirky’s “publish-then-filter.” When the flow of information goes through a central algorithm that categorizes, prioritizes, and presents that information, there is no glut, and no need for filtering on the human’s side. For Facebook as a corporation, building and maintaining this engine provides a twofold benefit: users are presented with what they believe to be the most personalized information, ensuring traffic stays high, a boon to advertisers; those same advertisers can draw on the data moving through the engine to better customize their ads to their target audiences (Pariser 194). Both of these selling points result in higher revenue possibilities.
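What such a filter-then-present engine looks like can be sketched in a few lines. The scoring weights, item fields, and interest lists below are invented assumptions for illustration; Facebook’s actual ranking algorithm is proprietary and far more elaborate.

```python
# A toy "filter-then-present" feed: score every published item for a user,
# then surface only the top few. Weights and fields are invented assumptions;
# this is not Facebook's actual (proprietary) ranking algorithm.

def score(item, user_interests):
    """Toy relevance score: topic overlap dominates, popularity breaks ties."""
    overlap = len(set(item["topics"]) & set(user_interests))
    return overlap * 100 + item["likes"]

def build_feed(items, user_interests, limit=3):
    """Rank all candidate items and keep only the most 'relevant' ones."""
    ranked = sorted(items, key=lambda item: score(item, user_interests),
                    reverse=True)
    return ranked[:limit]

items = [
    {"id": 1, "topics": ["politics"], "likes": 40},
    {"id": 2, "topics": ["music", "friends"], "likes": 5},
    {"id": 3, "topics": ["friends"], "likes": 12},
]
feed = build_feed(items, user_interests=["friends", "music"], limit=2)
print([item["id"] for item in feed])  # -> [2, 3]: the glut never reaches the user
```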

Bush, of course, never references revenue opportunities in “As We May Think.” This fundamental difference is where the continuum Facebook and the Memex share — from mechanics (collect, consult, share) to philosophical goals (make the world a better place) — branches: where the Memex was focused on sharing information for the benefit of the academy and its endeavors, Facebook is focused on gathering, filtering, and presenting information at the behest of its advertisers. Does serving its paying customers counteract the “greater understanding and connection” promised?

A Technopolian Critique

The struggle to understand technology’s progression is not a new one. Sigmund Freud, in his 1929 work, Civilization and Its Discontents, highlights the dialectic between the problems solved by technology and those new ones raised by it:

But here the voice of pessimistic criticism makes itself heard and warns us that most of these satisfactions follow the model of the ‘cheap enjoyment’ extolled in the anecdote — the enjoyment obtained by putting a bare leg from under the bedclothes on a cold winter night and drawing it in again. If there had been no railway to conquer distances, my child would never have left his native town and I should need no telephone to hear his voice; if traveling across the ocean by ship had not been introduced, my friend would not have embarked on his sea-voyage and I should not need a cable to relieve my anxiety about him (61).

To understand the effect of any of these technological tools — the telegraph, cable television, social trails — on humanity as a whole is not in the scope of this document. Nor is the point of this chapter to claim that these tools have never facilitated world-altering events with positive outcomes. In fact, Facebook and other social networks have been credited with providing the channels through which the oppressed participants in the “Arab Spring” built revolutions to overthrow their dictatorial governments (Zuckerman “Civic Disobedience”). But the events in Tunisia, Libya, Egypt, and other nations beginning in late 2010 and early 2011 have also raised significant questions about what might happen as governments become aware of and move onto these platforms (Zuckerman “Morozov”). While the networks hosted the content which motivated Egyptian revolutionaries, it was the shutting off of the country’s Internet connection that brought “protesters in larger numbers to the street” to participate in the Tahrir Square protests (Howard, et al.). And what of the commercial entities (some with relationships to governments) that are the gatekeepers of the data and content used within these revolutions? To whom do they ultimately answer?

These and other questions illustrate the importance of considering the forces that have acted on these tools and the resulting implications for the way society is able (or willing) to use them. When the nanosecond becomes a unit of measurement for trade, a real estate boom occurs around the optical fiber that brings data to buildings in so small an amount of time (Miller). When the news is always on, it is paid for by advertising driven by the fact that someone must always be watching — even if that means tailoring the news to a specific ideology. And when, as Vannevar Bush notes, “there will always be plenty of things to compute in the detailed affairs of millions of people doing complicated things” (41), it becomes imperative to begin to compute even the details of the uncomplicated.

In considering these forces, two themes punctuate the case studies explored in this chapter. First, each of the utopian visions driving these technological innovations was predicated on the belief that sharing information between strangers would result in the betterment of humanity. According to danah boyd, this is, perhaps, a curious belief:

Exposure to new people doesn’t automatically produce tolerance. When explorers traversed the earth looking for opportunity, they pillaged and plundered even before they began colonizing. Fear ruled the seas. And let’s be honest . . . exposure to other people during great explorations did not magically produce tolerance. It bred anger, distrust, and hatred.

Second, as Carey notes in his analysis of the various ideological changes traced back to the invention of the telegraph, “Technology as such is too abstract a category to support any precise analysis; therefore, changes in technology go unanalyzed except for classifying them within various stages of capitalist development.” Here, he is calling out a fault in most analyses surrounding technology, but also bringing to light one of the themes of both Postman’s work and this chapter: the requirement to contextualize a critique of technology with a review of the capitalist influences upon it along the way. Each of the case studies presented above does precisely that.

Understanding Design’s Susceptibility to Technopolian Forces

That the focus here has been at the crossroads of knowledge transfer and commerce is certainly no coincidence. Surely, today’s “information” society places much value on both. From the roots of technological communication to today’s most populous virtual network, these are examples of communication systems in action, seen through the lens of their designed purpose, eventually establishing their place in Technopoly.

Each of the technologies discussed was heralded for the promised outcome of its implementation and use, as designed by its creators. But what of design itself as a Technopolian technique? Design, after all, represents the planned conversion of the conceptual into the concrete. This conversion is observed most distinctly in the information design field, where the concepts described by textual and numerical data are presented in a seemingly neutral manner.

No matter the final form, however, that conversion must undergo scrutiny by interested parties. The conversion is, to reference boyd’s mention of Melvin Kranzberg, neither good nor bad, nor is it neutral, and it provides for a rich analysis of its susceptibility to Technopolian forces, especially when applied to a use case steeped in human motives and belief systems. As such, an examination of information design used in social movements — arguably, highly motivated parties with strongly varied interests based on specific points of view — brings to light an important question: Is a consideration of the forces acting on design inherently built into, or excluded from, the way the field is understood today?