Introduction
IN JANUARY 1983, Time magazine selected the personal computer as its “Machine of the Year,” and public fascination with the computer has continued to grow ever since. That year was not, however, the beginning of the computer age. Nor was it even the first time that Time had featured a computer on its cover. Thirty-three years earlier, in January 1950, the cover had sported an anthropomorphized image of a computer wearing a navy captain’s hat to draw readers’ attention to the feature story, about a calculator built at Harvard University for the US Navy. Sixty years before that, in August 1890, another popular American magazine, Scientific American, devoted its cover to a montage of the equipment constituting the new punched-card tabulating system for processing the US Census. As these magazine covers indicate, the computer has a long and rich history, and we aim to tell it in this book.
In the 1970s, when scholars began to investigate the history of computing, they were attracted to the large one-of-a-kind computers built a quarter century earlier, sometimes now referred to as the “dinosaurs.” These were the first machines to resemble in any way what we now recognize as computers: they were the first calculating systems to be readily programmed and the first to work with the lightning speed of electronics. Most of them were devoted to scientific and military applications, which meant that they were bred for their sheer number-crunching power. Searching for the prehistory of these machines, historians mapped out a line of desktop calculating machines originating in models built by the philosophers Blaise Pascal and Gottfried Leibniz in the seventeenth century and culminating in the formation of a desk calculator industry in the late nineteenth century. According to these histories, the desk calculators were followed in the period between the world wars by analog computers and electromechanical calculators for special scientific and engineering applications; the drive to improve the speed of calculating machines during World War II led directly to the modern computer.
Although correct in the main, this account is not complete. Today, research scientists and atomic weapons designers still use computers extensively, but the vast majority of computers in organizations are employed for other purposes, such as word processing and keeping business records. How did this come to pass? To answer that question, we must take a broader view of the history of the computer as the history of the information machine.
This history begins in the early nineteenth century. Because of the increasing population and urbanization in the West resulting from the Industrial Revolution, the scale of business and government expanded, and with it grew the need to collect, process, and communicate information. Governments began to have trouble enumerating their populations, telegraph companies could not keep pace with their message traffic, and insurance companies had trouble processing policies for the masses of workers.
Novel and effective systems were developed for handling this increase in information. For example, the Prudential Assurance Company of England developed a highly effective system for processing insurance policies on an industrial scale using special-purpose buildings, rationalization of process, and division of labor. But by the last quarter of the century, large organizations had turned increasingly to technology as the solution to their information-processing needs. On the heels of the first large American corporations came a business-machine industry to supply them with typewriters, filing systems, and duplication and accounting equipment.
The desk calculator industry was part of this business-machine movement. For the previous two hundred years, desk calculators had merely been handmade curiosities for the wealthy. But by the end of the nineteenth century, these machines were being mass-produced and installed as standard office equipment, first in large corporations and later in progressively smaller offices and retail establishments. Similarly, the punched-card tabulating system developed to enable the US government to cope with its 1890 census data gained wide commercial use in the first half of the twentieth century, and was in fact the origin of IBM.
Also beginning in the nineteenth century and reaching maturity in the 1920s and 1930s was a separate tradition of analog computing. Engineers built simplified physical models of their problems and measured the values they needed to calculate. Analog computers were used extensively and effectively in the design of electric power networks, dams, and aircraft.
Because of the exigencies of World War II, the military was willing to pay whatever it would take to develop the kinds of calculating machines it needed. Millions of dollars were spent, resulting in the production of the first electronic, stored-program computers – although, ironically, none of them was completed in time for war work. The military and scientific research value of these computers was nevertheless appreciated, and by the time of the Korean War a small number had been built and placed in operation in military facilities, atomic energy laboratories, aerospace manufacturers, and research universities.
Although the computer had been developed for number crunching, several groups recognized its potential as a data-processing and accounting machine. The developers of the most important wartime computer, the ENIAC, left their university posts to start a business building computers for the scientific and business markets. Other electrical manufacturers and business-machine companies, including IBM, also turned to this enterprise. The computer makers found a ready market in government agencies, insurance companies, and large manufacturers.
The basic functional specifications of the computer were set out in a report written by John von Neumann in 1945, and these specifications are still largely followed today. Decades of continuous innovation, however, have built on that original conception. These innovations are of two types. One is the improvement in components, leading to faster processing speed, greater information-storage capacity, improved price/performance, better reliability, less required maintenance, and the like: today’s computers are literally millions of times better than the first computers on almost all measures of this kind. These innovations were made predominantly by the firms that manufactured computers.
The second type of innovation was in the mode of operation, but here the agent for change was most often the academic sector, backed by government financing. In most cases, these innovations became a standard part of computing only through their refinement and incorporation into standard products by the computer manufacturers. There are five notable examples of this kind of innovation: high-level programming languages, real-time computing, time-sharing, networking, and graphically oriented human-computer interfaces.
While the basic structure of the computer remained unchanged, these new components and modes of operation revolutionized our human experiences with computers. Elements that we take for granted today – such as having a computer on our own desk, equipped with a mouse, monitor, and disk drive – were not even conceivable until the 1970s. At that time, most computers cost hundreds of thousands, or even millions, of dollars and filled a large room. Users would seldom touch or even see the computer itself. Instead, they would bring a stack of punched cards representing their program to an authorized computer operator and return hours or days later to pick up a printout of their results. As the mainframe became more refined, the punched cards were replaced by remote terminals, and response time from the computer became almost immediate – but still only the privileged few had access to the computer. All of this changed with the development of the personal computer and the growth of the internet. The mainframe has not died out, as many predicted it would, but computing is now available to the masses.
As computer technology became increasingly less expensive and more portable, new and previously unanticipated uses for computers were discovered – or invented. Today, for example, the digital devices that many of us carry in our briefcases, backpacks, purses, or pockets serve simultaneously as portable computers, communications tools, entertainment platforms, digital cameras, monitoring devices, and conduits to increasingly omnipresent social networks. The history of the computer has become inextricably intertwined with the history of communications and mass media, as our discussion of the personal computer and the internet clearly illustrates. But it is important to keep in mind that even in cutting-edge companies like Facebook and Google, multiple forms and meanings of the computer continue to coexist, from the massive mainframes and server farms that store and analyze data to the personal computers used by programmers to develop software to the mobile devices and applications with which users create and consume content. As the computer itself continues to evolve and acquire new meanings, so does our understanding of its relevant history. But it is important to remember that these new understandings do not refute or supersede the earlier histories but rather extend and deepen them, making them even more relevant.
WE HAVE ORGANIZED the book in five parts. The first covers the way information processing was handled before the arrival of electronic computers. The next two parts describe the mainframe computer era, roughly from 1945 to 1980, with one part devoted to the computer’s creation and the other to its evolution. The fourth part discusses the origins of personal computing and the internet. The fifth part examines the computer as it has become more ubiquitous and more global, as it has grown increasingly intertwined with telecommunications, and as it has raised new legal questions about the distribution of content and the control of people. In this fourth edition of the book, we have streamlined but not substantially changed the material that appears in the first four parts. The fifth part presents entirely new material, providing extended coverage of some of the most important developments related to computers in the twenty-first century.
Part One, on the early history of computing, includes three chapters. Chapter 1 discusses manual information processing and early technologies. People often suppose that information processing is a twentieth-century phenomenon; this is not so, and the first chapter shows that sophisticated information processing could be done with or without machines – slower in the latter case, but equally well. Chapter 2 describes the origins of office machinery and the business-machine industry. To understand the post-World War II computer industry, we need to realize that its leading firms – including IBM – were established as business-machine manufacturers in the last decades of the nineteenth century and were major innovators between the two world wars. Chapter 3 describes Charles Babbage’s failed attempt to build a calculating engine in the 1830s and its realization by Harvard University and IBM a century later. We also briefly discuss the theoretical developments associated with Alan Turing.
Part Two of the book describes the development of the electronic computer, from its invention during World War II up to the establishment of IBM as the dominant mainframe computer manufacturer in the mid-1960s. Chapter 4 covers the development of the ENIAC at the University of Pennsylvania during the war and its successor, the EDVAC, which was the blueprint for almost all subsequent computers up to the present day. Chapter 5 describes the early development of the computer industry, which transformed the computer from a scientific instrument for mathematical computation into a machine for business data processing. In Chapter 6 we examine the development of the mainframe computer industry, focusing on the IBM System/360 range of computers, which created the first stable industry standard and established IBM’s dominance.
Part Three presents a selective history of some key computer innovations in the quarter century between the invention of the computer at the end of the war and the development of the first personal computers. Chapter 7 is a study of one of the key technologies of computing, real-time processing. We examine this subject in the context of commonly experienced applications, such as airline reservations, banking and ATMs, and supermarket barcodes. Chapter 8 describes the development of software technology, the professionalization of programming, and the emergence of a software industry. Chapter 9 covers the development of some of the key features of the computing environment at the end of the 1960s: time-sharing, minicomputers, and microelectronics. The purpose of the chapter is, in part, to redress the commonly held notion that computing transformed from the mainframe to the personal computer in one giant leap.
Part Four gives a history of the developments of the last quarter of the twentieth century when computing became “personal.” Chapter 10 describes the development of the microcomputer from the first hobby computers in the mid-1970s up to its transformation into the familiar personal computer by the end of the decade. Chapter 11’s focus is on the personal-computer environment of the 1980s, when the key innovations were user-friendliness and the delivery of “content,” by means of CD-ROM storage and consumer networks. This decade was characterized by the extraordinary rise of Microsoft and the other personal-computer software companies. Chapter 12 begins a discussion of the internet and its consequences. The chapter describes the creation of the internet and the World Wide Web, their precedents in the information sciences, and the ever-evolving commercial and social applications.
Part Five discusses the growing ubiquity of the computer, both as more parts of the world actively participate in computing and as computing spreads into ever more aspects of people’s personal and work lives. It is indicative of these recent changes that Time magazine selected Apple’s iPhone as its “Invention of the Year” for 2007. Chapter 13 discusses globalization. Topics include the off-shoring of software and services, the outsourcing of manufacturing to other countries, and innovation both in Asia and beyond. Companies receiving attention for the first time in this book include Tata, Foxconn, Lenovo, Huawei, and the Taiwan Semiconductor Manufacturing Company (TSMC). When we wrote the first edition, we had no idea how the internet would unfold. It has exceeded every expectation – both good and bad. In Chapter 14 we discuss this unfolding of the internet, focusing on the platformization of commerce and culture. We discuss such topics as the ascendancy of mobile devices and “cloud computing,” surveillance capitalism, fake news and the online threat to democracy, threats to brick-and-mortar businesses from online enterprises, the creation of new forms of labor in the gig economy, the disruptions to media delivery created by streaming, and the rise of cryptocurrencies. Chapter 15 discusses recent computer developments in the context of the environment, law, and politics. Topics include the environmental impacts of manufacturing and online services, intellectual property (IP) concerns, unfair competition, online political organization and activism, and privacy and surveillance issues for individuals and nations in a networked society.
We have included notes at the end of each chapter and a bibliography toward the end of the book.
Note on sources
Between the extremes of a popular treatment with no citations at all and full scholarly annotation, we have decided on a middle course in which we have attempted to draw a line between pedantry and adequacy – as John Kenneth Galbraith eloquently put it. In the notes to every chapter, we have given a short literature review in which we have identified a dozen or so reliable secondary sources. Where possible, we have used the monographic literature, falling back on the less accessible periodical literature only where the former falls short. The reader can assume, unless otherwise indicated, that every assertion made in the text can be readily found in this secondary literature. This has enabled us to restrict citations to just two types: direct quotations and information that does not appear in the monographs already cited.
Full publication data is given in the bibliography. The reader should note that its purpose is largely administrative – to put all our sources in one place. It is not intended as a reading list. If a reader should want to study any topic more deeply, we recommend beginning with one of the books cited in the literature review in the notes to each chapter.