During the 1960s, computers were large mainframes stored in rooms, what we call a “data center” today. They were costly, and businesses could rent space on a mainframe to fulfill specific functions. During the 1980s, the microcomputer boom brought computers into wide office use. When the dot-com bubble arrived in the 1990s, data centers boomed with it. Businesses needed a quick way to establish a presence on the Internet, and companies like Rackspace met that need by opening data centers. Check out this infographic to see how data centers have evolved over time.
<a href="http://www.rackspace.com/whyrackspace/network/datacenters/">Data Center Evolution Timeline</a>
Prior to 1960 (in 1945), the Army developed a huge machine called ENIAC (Electronic Numerical Integrator and Computer):
• Weighed 30 tons
• Took up 1,800 sq ft of floor space
• Required 6 full-time technicians to keep it running
• Performed 5,000 operations per second
Up until the early 1960s, computers were primarily used by government agencies. They were large mainframes stored in rooms, what we call a “datacenter” today.
Starting in 1960, computers moved from vacuum tubes to solid-state devices such as the transistor, which lasts much longer and is smaller, more efficient, more reliable, and cheaper than an equivalent vacuum tube device.
In the early 1960s, many computers cost about $5 million each, and time on one of these machines could be rented for $17,000 per month.
By the mid-1960s, commercial computer use had developed, with a single machine’s time shared among multiple parties.
American Airlines and IBM teamed up to develop a reservation program called the Sabre® system. It ran on two IBM 7090 computers in a specially designed computer center in Briarcliff Manor, New York, and processed 84,000 telephone calls per day.
Computer memory slowly moved away from magnetic core devices to solid-state static and dynamic semiconductor memory, which greatly reduced the cost, size and power consumption of computing devices.
In 1971, Intel released the world’s first commercial microprocessor: the 4004.
Datacenters in the US began documenting formal disaster recovery plans in 1973. If disaster did strike, it wouldn’t necessarily affect business operations, as most functions handled by computers were after-the-fact bookkeeping duties. These functions were batch operations and not complex in nature.
In 1978, SunGard™ developed the first commercial disaster recovery business, leasing 30,000 square feet of space at 401 Broad Street, where the company is still located.
In 1973, the Xerox Alto was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.
In 1977, ARCnet, the world’s first commercially available local area network, was put into service at Chase Manhattan Bank in New York as a beta site. It was the simplest and least expensive type of local area network, using a token-passing architecture, supporting data rates of 2.5 Mbps, and connecting up to 255 computers.
Mainframes required special cooling, and in the late 1970s air-cooled computers began moving into offices. Consequently, dedicated datacenters fell out of favor.
During the 1980s, the computer industry experienced the boom of the microcomputer era thanks to the birth of the IBM Personal Computer (PC).
Computers were installed everywhere, and little thought was given to the specific environmental and operating requirements of the machines.
Starting in 1985, IBM provided more than $30 million in products and support over the course of 5 years to a supercomputer facility established at Cornell University in Ithaca, New York.
In 1988, IBM introduced the IBM Application System/400 (AS/400), which quickly became one of the world’s most popular business computing systems.
As information technology operations started to grow in complexity, companies grew aware of the need to control IT resources.
Microcomputers (now called “servers”) started to fill the old computer rooms, which came to be known as “data centers.”
With inexpensive networking equipment available, companies began setting up server rooms inside their own walls.
The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet.
Many companies started building very large facilities to provide businesses with a range of solutions for systems deployment and operation.
Rackspace Hosting opened its first datacenter to businesses in 1999.
As of 2007, the average datacenter consumes as much energy as 25,000 homes.
There are 5.75 million new servers deployed every year.
The number of government data centers has gone from 432 in 1999 to 1,100+ today.
Data centers account for 1.5% of US energy consumption and demand is growing 10% per year.
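The figures above (a 1.5% share of US energy consumption, growing 10% per year) imply a strikingly short doubling time. A quick back-of-the-envelope sketch, illustrative only and assuming the growth rate holds constant:

```python
import math

# Compound growth: demand(t) = demand(0) * (1 + r)^t
# Solve (1 + r)^t = 2 for the doubling time t.
growth_rate = 0.10  # 10% per year, from the figure above

doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Demand doubles roughly every {doubling_time:.1f} years")
# Demand doubles roughly every 7.3 years
```

At that pace, all else being equal, data centers’ 1.5% share of US energy consumption would reach about 3% in just over seven years, which is why efficiency efforts like the one below matter.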
Facebook launched the Open Compute Project, publishing the specifications of its Prineville, Oregon data center, which uses 38% less energy to do the same work as the company’s other facilities while costing 24% less.
As online data grows exponentially, there is an opportunity (and a need) to run more efficient data centers.