Computer to PC- A Brief History From Distant Past to Present Day

Computers

Computers truly came into their own as great inventions in the last two decades of the 20th century. But their history stretches back more than 2500 years to the abacus: a simple calculator made from beads and wires, which is still used in some parts of the world today. The difference between an ancient abacus and a modern computer seems vast, but the principle—making repeated calculations more quickly than the human brain—is exactly the same.

Can you imagine your life without a computer? Think about all of the things you wouldn't be able to do: send an email, shop online, or find an answer to a question instantly.

Before the true power of computing could be realized, the naive view of calculation had to be overcome. The inventors who laboured to bring the computer into the world had to learn that the thing they were inventing was not just a number cruncher, not merely a calculator.

For example, they had to learn that it was not necessary to invent a new computer for every new calculation and that a computer could be designed to solve numerous problems, even problems not yet imagined when the computer was built. They also had to learn how to tell such a general problem-solving computer what problem to solve. In other words, they had to invent programming.

They had to solve all the heady problems of developing such a device, of implementing the design, of actually building the thing. The history of the solving of these problems is the history of the computer. That history is covered in this section, and links are provided to entries on many of the individuals and companies mentioned. In addition, see the articles computer science and supercomputer.

The first counting devices were used by early humans, who relied on sticks, stones and bones as counting tools. As the human mind and technology advanced, more capable computing devices were developed. Some of the most notable computing devices, from the earliest to the most recent, are described below.

The Abacus

A Japanese abacus (soroban)

The earliest known calculating device is probably the abacus. It dates back at least to 1100 BCE and is still in use today, particularly in Asia. Now, as then, it typically consists of a rectangular frame with thin parallel rods strung with beads. Long before any systematic positional notation was adopted for the writing of numbers, the abacus assigned different units, or weights, to each rod.

This scheme allowed a wide range of numbers to be represented by just a few beads and, together with the invention of zero in India, may have inspired the invention of the Hindu-Arabic number system. In any case, abacus beads can be readily manipulated to perform the common arithmetical operations—addition, subtraction, multiplication, and division—that are useful for commercial transactions and bookkeeping.

The abacus is a digital device; that is, it represents values discretely. A bead is either in one predefined position or another, representing unambiguously, say, one or zero.
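To make that concrete, here is a minimal sketch in Python (the bead counts are invented for illustration) of how weighted rods turn a handful of discrete bead positions into a number.

```python
# Minimal sketch: an abacus as a list of rods, least-significant rod first.
# Each rod holds a digit (0-9) expressed as a count of beads pushed "up".
# The value of the whole frame is the weighted sum of the rods.

def abacus_value(rods):
    """Return the integer represented by bead counts, one per rod."""
    total = 0
    for position, beads in enumerate(rods):
        total += beads * 10 ** position   # each rod is worth ten times the last
    return total

# Example: rods holding 7, 0, 3 beads (units, tens, hundreds) represent 307.
print(abacus_value([7, 0, 3]))  # -> 307
```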

The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool that was in use in the ancient Near East, Europe, China, and Russia, centuries before the adoption of the written Hindu–Arabic numeral system. The exact origin of the abacus is still unknown. Today, abacuses are often constructed as a bamboo frame with beads sliding on wires, but originally they were beans or stones moved in grooves of sand or on tablets of wood, stone, or metal.

The Antikythera mechanism: the first known computing device
The Antikythera mechanism is an ancient hand-powered Greek analogue computer, often described as the earliest known example of such a device. It was used to predict astronomical positions and eclipses, for calendar and astrological purposes, decades in advance.

The oldest known complex computing device, called the Antikythera mechanism, dates back to about 87 B.C.; it's surmised the Greeks used this gear-operated contraption (found in a shipwreck in the Aegean Sea early in the 20th century, though its workings weren't fully understood until studies published in 2006) to calculate astronomical positions and help them navigate through the seas.

Computing took another leap in 1843, when English mathematician Ada Lovelace wrote the first computer algorithm, in collaboration with Charles Babbage, who devised a theory of the first programmable computer. But the modern computing-machine era began with Alan Turing's conception of the Turing Machine, and three Bell Labs scientists' invention of the transistor, which made modern-style computing possible and landed them the 1956 Nobel Prize in Physics.

For decades, computing technology was exclusive to the government and the military; later, academic institutions came online, and Steve Wozniak built the circuit board for the Apple I, making home computing practicable. On the connectivity side, Tim Berners-Lee created the World Wide Web, Marc Andreessen built a browser, and that's how we came to live in a world where our glasses can tell us what we're looking at. With wearable computers, embeddable chips, smart appliances, and other advances in progress and on the horizon, the journey towards building smarter, faster and more capable computers is clearly just beginning.

Engines of Calculation

Neither the abacus nor the mechanical calculators constructed by Pascal and Leibniz really qualified as computers. A calculator is a device that makes it quicker and easier for people to do sums—but it needs a human operator. A computer, on the other hand, is a machine that can operate automatically, without any human help, by following a series of stored instructions called a program (a kind of mathematical recipe). Calculators evolved into computers when people devised ways of making entirely automatic, programmable calculators.

Punched cards: Herman Hollerith
Photo: Herman Hollerith perfected the use of punched cards and paper tape to store information and feed it into a machine. Here's a drawing from his 1889 patent Art of Compiling Statistics (US Patent #395,782), showing how a strip of paper (yellow) is punched with different patterns of holes (orange) that correspond to statistics gathered about people in the US census. Picture courtesy of US Patent and Trademark Office.

The first person to attempt this was a rather obsessive, notoriously grumpy English mathematician named Charles Babbage (1791–1871). Many regard Babbage as the "father of the computer" because his machines had an input (a way of feeding in numbers), a memory (something to store these numbers while complex calculations were taking place), a processor (the number-cruncher that carried out the calculations), and an output (a printing mechanism)—the same basic components shared by all modern computers.

During his lifetime, Babbage never completed a single one of the hugely ambitious machines that he tried to build. That was no surprise. Each of his programmable "engines" was designed to use tens of thousands of precision-made gears. It was like a pocket watch scaled up to the size of a steam engine, a Pascal or Leibniz machine magnified a thousand-fold in dimensions, ambition, and complexity. For a time, the British government financed Babbage—to the tune of £17,000, then an enormous sum.

But when Babbage pressed the government for more money to build an even more advanced machine, they lost patience and pulled out. Babbage was more fortunate in receiving help from Augusta Ada Byron (1815–1852), Countess of Lovelace, daughter of the poet Lord Byron.

An enthusiastic mathematician, she helped to refine Babbage's ideas for making his machine programmable—and this is why she is still, sometimes, referred to as the world's first computer programmer. Little of Babbage's work survived after his death. But when, by chance, his notebooks were rediscovered in the 1930s, computer scientists finally appreciated the brilliance of his ideas. Unfortunately, by then, most of these ideas had already been reinvented by others.

Babbage had intended that his machine would take the drudgery out of repetitive calculations. Originally, he imagined it would be used by the army to compile the tables that helped their gunners to fire cannons more accurately. Toward the end of the 19th century, other inventors were more successful in their effort to construct "engines" of calculation.

American statistician Herman Hollerith (1860–1929) built one of the world's first practical calculating machines, which he called a tabulator, to help compile census data. Then, as now, a census was taken each decade but, by the 1880s, the population of the United States had grown so much through immigration that a full-scale analysis of the data by hand was taking seven and a half years.

The statisticians soon figured out that, if trends continued, they would run out of time to compile one census before the next one fell due. Fortunately, Hollerith's tabulator was an amazing success: it tallied the entire census in only six weeks and completed the full analysis in just two and a half years. Soon afterwards, Hollerith realized his machine had other applications, so he set up the Tabulating Machine Company in 1896 to manufacture it commercially. A few years later, it changed its name to the Computing-Tabulating-Recording (C-T-R) company and then, in 1924, acquired its present name: International Business Machines (IBM).

The Mechanical Era (1623-1945)

Babbage's Difference Engine

The idea of using machines to solve mathematical problems can be traced at least as far as the early 17th century. Mathematicians who designed and implemented calculators capable of addition, subtraction, multiplication, and division included Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz. Charles Babbage began work on his Difference Engine, a special-purpose machine for computing mathematical tables, in 1823, but it was never completed.

A more ambitious machine was the Analytical Engine, designed in 1842 and generally regarded as the first design for a multi-purpose, i.e. programmable, computing device; unfortunately, it too was only partially completed by Babbage. Babbage was truly a man ahead of his time: many historians think the major reason he was unable to complete these projects was that the technology of the day was not reliable enough. In spite of never building a complete working machine, Babbage and his colleagues, most notably Ada, Countess of Lovelace, recognized several important programming techniques, including conditional branches, iterative loops and index variables.

A machine inspired by Babbage's design was arguably the first to be used in computational science. George Scheutz read of the difference engine in 1833, and along with his son Edvard, Scheutz began work on a smaller version. By 1853 they had constructed a machine that could process 15-digit numbers and calculate fourth-order differences.

Their machine won a gold medal at the Exhibition of Paris in 1855, and later they sold it to the Dudley Observatory in Albany, New York, which used it to calculate the orbit of Mars. One of the first commercial uses of mechanical computers was by the US Census Bureau, which used punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith's company merged with a competitor to found the corporation which in 1924 became International Business Machines.

Replica of W. Schickard's calculator (1592-1635)

In 1623 the German astronomer and mathematician Wilhelm Schickard built the first calculator. He described it in a letter to his friend the astronomer Johannes Kepler, and in 1624 he wrote again to explain that a machine he had commissioned to be built for Kepler was, apparently along with the prototype, destroyed in a fire. He called it a Calculating Clock, which modern engineers have been able to reproduce from details in his letters. Even general knowledge of the clock had been temporarily lost when Schickard and his entire family perished during the Thirty Years' War.

But Schickard may not have been the true inventor of the calculator. A century earlier, Leonardo da Vinci had sketched plans for a calculator that were sufficiently complete and correct for modern engineers to build a working machine from them.

A Pascaline signed by Pascal in 1652

The first calculator or adding machine to be produced in any quantity and actually used was the Pascaline, or Arithmetic Machine, designed and built by the French mathematician-philosopher Blaise Pascal between 1642 and 1644. It could only do addition and subtraction, with numbers being entered by manipulating its dials. Pascal invented the machine for his father, a tax collector, so it was the first business machine too (if one does not count the abacus). He built 50 of them over the next 10 years.

The Jacquard loom

Calculators such as the Arithmometer remained a fascination after 1820, and their potential for commercial use was well understood. Many other mechanical devices built during the 19th century also performed repetitive functions more or less automatically, but few had any application to computing. There was one major exception: the Jacquard loom, invented in 1804–05 by a French weaver, Joseph-Marie Jacquard.

The Jacquard loom was a marvel of the Industrial Revolution. A textile-weaving loom, it could also be called the first practical information-processing device. The loom worked by tugging various-coloured threads into patterns by means of an array of rods. By inserting a card punched with holes, an operator could control the motion of the rods and thereby alter the pattern of the weave.

Moreover, the loom was equipped with a card-reading device that slipped a new card from a pre-punched deck into place every time the shuttle was thrown, so that complex weaving patterns could be automated.

The Jacquard loom. Credit: National Museums Scotland

What was extraordinary about the device was that it transferred the design process from a labour-intensive weaving stage to a card-punching stage. Once the cards had been punched and assembled, the design was complete, and the loom implemented the design automatically. The Jacquard loom, therefore, could be said to be programmed for different patterns by these decks of punched cards.

For those intent on mechanizing calculation, the Jacquard loom provided important lessons: the sequence of operations that a machine performs could be controlled to make the machine do something quite different; a punched card could be used as a medium for directing the machine; and, most important, a device could be directed to perform different tasks by feeding it instructions in a sort of language—i.e., making the machine programmable.
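As a rough modern analogy of those lessons, the sketch below (Python, with an invented card format, not anything Jacquard actually used) treats a deck of "punched cards" as the program: the machine stays the same, and swapping the deck changes the pattern.

```python
# Sketch of the Jacquard idea: the machine is fixed, the deck of cards is the program.
# Each "card" is a row of holes (True = hole, False = no hole); a hole lifts a thread.

def weave(deck):
    """Render one woven row per card: lifted threads as '#', lowered threads as '.'."""
    for card in deck:
        row = "".join("#" if hole else "." for hole in card)
        print(row)

# Two different decks, one machine, two different patterns.
stripes = [[True, False] * 4 for _ in range(3)]
checks = [[(r + c) % 2 == 0 for c in range(8)] for r in range(3)]

weave(stripes)
print()
weave(checks)
```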

It is not too great a stretch to say that, in the Jacquard loom, programming was invented before the computer. The close relationship between the device and the program became apparent some 20 years later, with Charles Babbage's designs for the first computer.

Electro-Mechanical Computers (the 1930s)

Electro-mechanical computers generally worked with relays and/or vacuum tubes, which could be used as switches.

George R. Stibitz with Model K

George Stibitz, shown above with his Model K relay adder, was also responsible for the first remote-access computing, demonstrated at a conference at Dartmouth College in New Hampshire. He took a teleprinter to the conference, leaving his computer in New York City, then took problems posed by the audience, entered them on the teleprinter's keypad, and received the answers back over the line.

It was during the development of these early electro-mechanical computers that many of the technologies and concepts still used today were first developed. The Z3, a descendant of the Z1 developed by Konrad Zuse, was one such pioneering computer. The Z3 used floating-point numbers in computations and was the first program-controlled digital computer. Other electro-mechanical computers included the Bombes, which were used during WWII to decrypt German codes.

First Generation Electronic Computers (1937-1953)

Three machines have been promoted at various times as the first electronic computers. These machines used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. In principle, the electronic switches would be more reliable, since they would have no moving parts that would wear out, but the technology was still new at that time and the tubes were comparable to relays in reliability. Electronic components had one major benefit, however: they could "open" and "close" about 1,000 times faster than mechanical switches.

The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students solve systems of partial differential equations. By 1941 he and graduate student Clifford Berry had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. However, the machine was not programmable, and was more of an electronic calculator.

The Colossus computer at Bletchley Park, Buckinghamshire, England, c. 1943. Funding for this code-breaking machine came from the Ultra project. Geoff Robinson Photography/Shutterstock.com

A second early electronic machine was Colossus, built in 1943 for British codebreakers at Bletchley Park and designed principally by the engineer Tommy Flowers, drawing on the codebreaking work of Alan Turing and his colleagues. This machine played an important role in breaking codes used by the German military in World War II. Turing's main contribution to the field of computer science was the idea of the Turing machine, a mathematical formalism widely used in the study of computable functions. The existence of Colossus was kept secret until long after the war ended, and the credit due to its designers for building one of the first working electronic computers was slow in coming.

The first general-purpose programmable electronic computer was the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Work began in 1943, funded by the Army Ordnance Department, which needed a way to compute ballistics during World War II. The machine wasn't completed until 1945, but then it was used extensively for calculations during the design of the hydrogen bomb. By the time it was decommissioned in 1955 it had been used for research on the design of wind tunnels, random number generators, and weather prediction.

Eckert, Mauchly, and John von Neumann, a consultant to the ENIAC project, began work on a new machine before ENIAC was finished. The main contribution of EDVAC, their new project, was the notion of a stored program. There is some controversy over who deserves the credit for this idea, but none over how important the idea was to the future of general-purpose computers.

ENIAC was controlled by a set of external switches and dials; to change the program required physically altering the settings on these controls. These controls also limited the speed of internal electronic operations. Through the use of a memory that was large enough to hold both instructions and data, and using the program stored in memory to control the order of arithmetic operations, EDVAC was able to run orders of magnitude faster than ENIAC.

By storing instructions in the same medium as data, designers could concentrate on improving the internal structure of the machine without worrying about matching it to the speed of external control.
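To illustrate the stored-program idea in miniature (this is an invented toy instruction set, not EDVAC's), the following sketch keeps instructions and data in one memory array and runs a simple fetch-decode-execute loop over it.

```python
# Toy stored-program machine: one memory array holds both instructions and data.
# Instruction format (invented for illustration): (opcode, operand_address).

memory = [
    ("LOAD", 5),    # 0: accumulator <- memory[5]
    ("ADD", 6),     # 1: accumulator += memory[6]
    ("STORE", 7),   # 2: memory[7] <- accumulator
    ("HALT", 0),    # 3: stop
    None,           # 4: unused
    40,             # 5: data
    2,              # 6: data
    0,              # 7: result goes here
]

acc, pc = 0, 0
while True:
    opcode, addr = memory[pc]          # fetch the next instruction from memory
    pc += 1
    if opcode == "LOAD":
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "STORE":
        memory[addr] = acc
    elif opcode == "HALT":
        break

print(memory[7])  # -> 42; changing the program means changing memory, not rewiring
```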

ENIAC Computer
Marlyn Wescoff [left] and Ruth Lichterman were two of the female programmers of ENIAC. Photo: Corbis/Getty Images

Regardless of who deserves the credit for the stored-program idea, the EDVAC project is significant as an example of the power of interdisciplinary projects that characterize modern computational science. By recognizing that functions, in the form of a sequence of instructions for a computer, can be encoded as numbers, the EDVAC group knew the instructions could be stored in the computer's memory along with numerical data.

The notion of using numbers to represent functions was a key step in Gödel's incompleteness theorem of 1931, work with which von Neumann, as a logician, was quite familiar. Von Neumann's background in logic, combined with Eckert and Mauchly's electrical engineering skills, formed a very powerful interdisciplinary team.

Software technology during this period was very primitive. The first programs were written out in machine code, i.e. programmers directly wrote down the numbers that corresponded to the instructions they wanted to store in memory. By the 1950s programmers were using a symbolic notation, known as assembly language, then hand-translating the symbolic notation into machine code. Later programs known as assemblers performed the translation task.
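A minimal sketch of what such an assembler does, using an invented mnemonic set: it mechanically translates symbolic instructions into the numeric codes a programmer would otherwise have written out by hand.

```python
# Toy assembler: translate symbolic assembly into numeric machine code.
# The opcode numbers below are invented for illustration.

OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 4}

def assemble(source_lines):
    """Turn lines like 'ADD 6' into (opcode, operand) number pairs."""
    program = []
    for line in source_lines:
        mnemonic, *operand = line.split()
        program.append((OPCODES[mnemonic], int(operand[0]) if operand else 0))
    return program

source = ["LOAD 5", "ADD 6", "STORE 7", "HALT"]
print(assemble(source))
# -> [(1, 5), (2, 6), (3, 7), (4, 0)]  (the numbers early programmers wrote by hand)
```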

As primitive as they were, these first electronic machines were quite useful in applied science and engineering. Atanasoff estimated that it would take eight hours to solve a set of equations with eight unknowns using a Marchant calculator and 381 hours to solve 29 equations for 29 unknowns. The Atanasoff-Berry computer was able to complete the task in under an hour.
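For a sense of scale, the sketch below solves a random system of 29 equations in 29 unknowns with ordinary Gaussian elimination; on a modern machine it finishes in a fraction of a second, which is the class of problem the Atanasoff-Berry computer cut from hundreds of hours to under an hour.

```python
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting for a dense n x n system."""
    n = len(b)
    for col in range(n):
        # pick the largest pivot in this column to keep the arithmetic stable
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    x = [0.0] * n
    for row in reversed(range(n)):
        x[row] = (b[row] - sum(a[row][k] * x[k] for k in range(row + 1, n))) / a[row][row]
    return x

n = 29
a = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
b = [random.uniform(-1, 1) for _ in range(n)]
print(solve(a, b)[:3])  # first few unknowns of the 29-unknown system
```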

The first problem run on the ENIAC, a numerical simulation used in the design of the hydrogen bomb, required 20 seconds, as opposed to forty hours using mechanical calculators. Eckert and Mauchly later developed what was arguably the first commercially successful computer, the UNIVAC; in 1952, 45 minutes after the polls closed and with 7% of the vote counted, UNIVAC predicted Eisenhower would defeat Stevenson with 438 electoral votes (he ended up with 442).

Some of the popular first-generation computers are:

  • ENIAC (Electronic Numerical Integrator and Computer)
  • EDVAC (Electronic Discrete Variable Automatic Computer)
  • UNIVAC I (Universal Automatic Computer)
  • IBM-701
  • IBM-650

Second Generation Computers (1954-1962)

The second generation saw several important developments at all levels of computer system design, from the technology used to build the basic circuits to the programming languages used to write scientific applications.

Electronic switches in this era were based on the discrete diode and transistor technology with a switching time of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic cores which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an acoustic wave that passed sequentially through the medium and could be accessed only when the data moved by the I/O interface.

TRAnsistor DIgital Computer
AT&T Bell Laboratories announces the completion of the first fully transistorized computer, TRADIC. TRADIC, which stood for TRAnsistor DIgital Computer, contained nearly 800 transistors.

Important innovations in computer architecture included index registers for controlling loops and floating-point units for calculations based on real numbers. Prior to these, accessing successive elements in an array was quite tedious and often involved writing self-modifying code (programs which modified themselves as they ran). At the time this was viewed as a powerful application of the principle that programs and data were fundamentally the same; the practice is now frowned upon as extremely hard to debug and is impossible in most high-level languages.
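A loose modern analogy of what index registers bought programmers: one short instruction sequence reused on every pass of the loop, with only an index changing, instead of spelling out or patching a separate address for each element.

```python
# Modern analogy of an index-register loop: one instruction sequence, reused each pass,
# with only the index changing, so there is no need to modify the program's own addresses.

data = [3, 1, 4, 1, 5, 9, 2, 6]

total = 0
index = 0                 # plays the role of the index register
while index < len(data):
    total += data[index]  # same "instruction", different element each pass
    index += 1

print(total)  # -> 31
```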

Floating-point operations were performed by libraries of software routines in early computers but were done in hardware in second-generation machines. During this second generation, many high-level programming languages were introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704 and its successors, the 709 and 7094. The latter introduced I/O processors for better throughput between I/O devices and main memory.

Livermore Advanced Research Computer (LARC)

The second generation also saw the first two supercomputers designed specifically for numeric processing in scientific applications.

The term "supercomputer" is generally reserved for a machine that is an order of magnitude more powerful than other machines of its era. Two machines of the 1950s deserve this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early examples of machines that overlapped memory operations with processor operations and had primitive forms of parallel processing.

The Microchip and the Microprocessor

The microchip (or integrated circuit) is one of the most important advances in computing technology. Microchip-based and transistor-based computers overlapped considerably throughout the 1960s, and even into the early 1970s.

The microchip spurred the production of minicomputers and microcomputers, which were small and inexpensive enough for small businesses and even individuals to own. The microchip also led to the microprocessor, another breakthrough technology that was important in the development of the personal computer.

The Intel C4004, the very first commercially available microprocessor

There were three microprocessor designs that came out at about the same time. The first was produced by Intel (the 4004). Soon after, models from Texas Instruments (the TMS 1000) and Garrett AiResearch (the Central Air Data Computer, or CADC) followed.

The first processors were 4-bit, but 8-bit models quickly followed by 1972.

16-bit models were produced in 1973, and 32-bit models soon followed.

Bell Labs created the first fully 32-bit single-chip microprocessor, which used 32-bit buses, 32-bit data paths, and 32-bit addresses, in 1980. The first 64-bit microprocessors were in use in the early 1990s in some markets, though they didn't appear in the PC market until the early 2000s.

Some of the popular second-generation computers are:

  • IBM 1620
  • IBM 7094
  • CDC 1604
  • CDC 3600
  • UNIVAC 1108

Third Generation Computers (1963-1972)

The third generation brought huge gains in computational power. Innovations in this era include the use of integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component), semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing, and the introduction of operating systems and time-sharing.

The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit (or "chip"), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits were developed and core memory was replaced by faster, solid-state memories.

Computer designers began to take advantage of parallelism by using multiple functional units, overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and the data stream. In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use functional parallelism.

CDC 6600
The CDC 6600 was a mainframe computer from Control Data Corporation, first delivered in 1964.

By using 10 separate functional units that could operate simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a computation rate of 1 million floating-point operations per second (1 Mflops). Five years later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was capable of executing at 10 Mflops.

The IBM 360/91, released during the same period, was roughly twice as fast as the CDC 6600. It employed instruction lookahead, separate floating-point and integer functional units, and a pipelined instruction stream. The IBM 360/195 was comparable to the CDC 7600, deriving much of its performance from very fast cache memory.

The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department of Defense and the University of Illinois, were representative of the first parallel computers. The Texas Instruments Advanced Scientific Computer (TI-ASC) and CDC's STAR-100 were pipelined vector processors that demonstrated the viability of that design and set the standards for subsequent vector processors.

Early in the third generation, Cambridge and the University of London cooperated on the development of CPL (Combined Programming Language, 1963). CPL was, according to its authors, an attempt to capture only the important features of the complicated and sophisticated ALGOL.

However, like ALGOL, CPL was large, with many features that were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967). In 1970 Ken Thompson of Bell Labs developed yet another simplification of CPL called simply B, in connection with an early implementation of the UNIX operating system.

Some of the popular third-generation computers are:

  • IBM-360 series
  • Honeywell-6000 series
  • PDP (Programmed Data Processor)
  • IBM-370/168
  • TDC-316

Fourth Generation Computers (1972-1984)

The next generation of computer systems saw the use of large scale integration (LSI – 1,000 devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) in the construction of computing elements. At this scale, an entire processor could fit onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per gate.

Semiconductor memories replaced core memories as the main memory in most systems; until this time the use of semiconductor memory in most systems was limited to registers and cache. During this period, high-speed vector processors, such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high-performance computing scene. Computers with large main memory, such as the CRAY 2, began to emerge.

A variety of parallel architectures began to appear; however, during this period the parallel computing efforts were of a mostly experimental nature and most computational science was carried out on vector processors. Microcomputers and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.

CRAY X-MP/22 Supercomputer

Developments in software include very high level languages such as FP (functional programming) and Prolog (programming in logic). These languages tend to use a declarative programming style as opposed to the imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical specification of what should be computed, leaving many details of how it should be computed to the compiler and/or runtime system.

These languages are not yet in wide use, but are very promising as notations for programs that will run on massively parallel computers (systems with over 1,000 processors). Compilers for established languages started to use sophisticated optimization techniques to improve code, and compilers for vector processors were able to vectorize simple loops (turn loops into single instructions that would initiate an operation over an entire vector).
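A small Python illustration of the contrast: the first sum spells out how to iterate (imperative), while the second states what is wanted and leaves the stepping to the language, which is the spirit of the declarative style and of what a vectorizing compiler does to a simple loop.

```python
# Imperative style: say exactly how to compute, step by step.
values = [1.5, 2.5, 3.0, 4.0]
total = 0.0
for v in values:
    total += v

# Declarative-leaning style: say what is wanted; the runtime handles the iteration.
total_declarative = sum(v for v in values)

assert total == total_declarative
print(total)  # -> 11.0
```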

Two important events that bridged the third and fourth generations were the development of the C programming language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the design goals of CPL and generalize Thompson's B, developed the C language. Thompson and Ritchie then used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different computers, relieving users from having to learn a new operating system each time they changed computer hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.

An important event in the development of computational science was the publication of the Lax report. In 1982, the US Department of Defense (DOD) and National Science Foundation (NSF) sponsored a panel on Large Scale Computing in Science and Engineering, chaired by Peter D. Lax.

The Lax Report stated that aggressive and focused foreign initiatives in high-performance computing, especially in Japan, were in sharp contrast to the absence of coordinated national attention in the United States. The report noted that university researchers had inadequate access to high-performance computers. One of the first and most visible of the responses to the Lax report was the establishment of the NSF supercomputing centres.

Phase I of this NSF program was designed to encourage the use of high-performance computing at American universities by making cycles and training on three (and later six) existing supercomputers immediately available. Following this Phase I stage, in 1984-1985 NSF provided funding for the establishment of five Phase II supercomputing centres.

The Phase II centres, located in San Diego (San Diego Supercomputing Center); Illinois (National Center for Supercomputing Applications); Pittsburgh (Pittsburgh Supercomputing Center); Cornell (Cornell Theory Center); and Princeton (John von Neumann Center), have been extremely successful at providing computing time on supercomputers to the academic community.

In addition, they have provided many valuable training programs and have developed several software packages that are available free of charge. These Phase II centres continue to augment the substantial high-performance computing efforts at the National Laboratories, especially the Department of Energy (DOE) and NASA sites.

Some of the popular fourth-generation computers are:

  • DEC 10
  • STAR 1000
  • PDP 11
  • CRAY-1 (supercomputer)
  • CRAY X-MP (supercomputer)

Fifth Generation Computers (1984-1990)

The development of the next generation of computer systems is characterized mainly by the acceptance of parallel processing. Until this time parallelism was limited to pipelining and vector processing, or at most to a few processors sharing jobs. The fifth-generation saw the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace – by 1990 it was possible to build chips with a million components – and semiconductor memories became standard on all computers.

A photograph featuring a DEC VAX-11/780 computer in operation (August 1980)

Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Prior to 1985 large scale parallel processing was viewed as a research goal, but two systems introduced around this time are typical of the first commercial products to be based on parallel processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor had its own local cache).

The machine was designed to compete with the DEC VAX-780 as a general-purpose Unix system, with each processor working on a different user's job. However Sequent provided a library of subroutines that would allow programmers to write programs that would use more than one processor, and the machine was widely used to explore parallel algorithms and programming techniques.

Intel iPSC/860 (1990)

The Intel iPSC-1, nicknamed "the hypercube", took a different approach. Instead of using one memory module, Intel connected each processor to its own memory and used a network interface to connect processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems (using more processors) could be built.

The largest iPSC-1 had 128 processors. Toward the end of this period, a third type of parallel processor was introduced to the market. In this style of machine, known as data-parallel or SIMD, there are several thousand very simple processors. All processors work under the direction of a single control unit; i.e. if the control unit says "add a to b", then all processors find their local copy of a and add it to their local copy of b. Machines in this class include the Connection Machine from Thinking Machines, Inc., and the MP-1 from MasPar, Inc.
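A toy rendering of the SIMD idea (the per-processor "local memories" here are just invented Python dictionaries): one instruction is broadcast from the control unit and every processor applies it to its own local copies of a and b.

```python
# Data-parallel (SIMD) sketch: many simple processors, one control unit.
# Here each "processor" is a dict holding its own local copies of a and b.

processors = [{"a": i, "b": 10 * i} for i in range(8)]  # 8 stand-in processors

def broadcast(instruction):
    """The control unit issues one instruction; every processor executes it locally."""
    for p in processors:
        instruction(p)

# Control unit says "add a to b": all processors do it to their local data.
broadcast(lambda p: p.update(b=p["a"] + p["b"]))

print([p["b"] for p in processors])  # -> [0, 11, 22, 33, 44, 55, 66, 77]
```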

IBM Personal Computer XT (Type 5160). Credit: Engelbert Reineke

Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector processors introduced parallel models, but there were very few (two to eight) processors in these parallel machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN) technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing environment toward a distributed computing environment in which each user has their own workstation for relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources such as file servers and supercomputers.

RISC technology (a style of the internal organization of the CPU) and plummeting costs for RAM brought tremendous gains in computational power of relatively low-cost workstations and servers.

Some of the popular fifth-generation computers are:

  • Desktop
  • Laptop
  • NoteBook
  • UltraBook
  • Chromebook

Sixth Generation Computers (1990-?)

Transitions between generations in computer technology are hard to define, especially as they are taking place. Some changes, such as the switch from vacuum tubes to transistors, are immediately apparent as fundamental changes, but others are clear only in retrospect. Many of the developments in computer systems since 1990 reflect gradual improvements over established systems, and thus it is hard to claim they represent a transition to a new "generation", but other developments will prove to be significant changes.

In this section, we offer some assessments about recent developments and current trends that we think will have a significant impact on computational science.

This generation is beginning with many gains in parallel computing, both in the hardware area and in improved understanding of how to develop algorithms to exploit diverse, massively parallel architectures. Parallel systems now compete with vector processors in terms of total computing power and most expect parallel systems to dominate the future.

Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has announced plans to build a system with over 200 of its high-end vector processors. Manufacturers have set themselves the goal of achieving teraflops (10^12 arithmetic operations per second) performance by the middle of the decade, and it is clear this will be obtained only by a system with a thousand processors or more.

Workstation technology has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel processing. As a result, it is now possible to purchase a desktop workstation for about $30,000 that has the same overall computing power (100 megaflops) as fourth-generation supercomputers. This development has sparked an interest in heterogeneous computing: a program started on one workstation can find idle workstations elsewhere in the local network to run parallel subtasks.

One of the most dramatic changes in the sixth generation will be the explosive growth of wide-area networking. Network bandwidth has expanded tremendously in the last few years and will continue to improve for the next several years. T1 transmission rates are now standard for regional networks, and the national "backbone" that interconnects regional networks uses T3.

Networking technology is becoming more widespread than its original strong base in universities and government laboratories as it is rapidly finding application in K-12 education, community networks and private industry. A little over a decade after the warning voiced in the Lax report, the future of a strong computational science infrastructure is bright.

The federal commitment to high-performance computing has been further strengthened with the passage of two particularly significant pieces of legislation: the High-Performance Computing Act of 1991, which established the High-Performance Computing and Communication Program (HPCCP) and Sen. Gore's Information Infrastructure and Technology Act of 1992, which addresses a broad spectrum of issues ranging from high-performance computing to expanded network access and the necessity to make leading-edge technologies available to educators from kindergarten through graduate school.

In bringing this encapsulated survey of the development of a computational science infrastructure up to date, we observe that the President's FY 1993 budget contains $2.1 billion for mathematics, science, technology and science literacy educational programs, a 43% increase over FY 90 figures.

Timeline of how computers evolved

1801: In France, Joseph Marie Jacquard invents a loom that uses punched wooden cards to automatically weave fabric designs. Early computers would use similar punch cards.

1822: English mathematician Charles Babbage conceives of a steam-driven calculating machine that would be able to compute tables of numbers. The project, funded by the English government, is a failure. More than a century later, however, the world's first computer would actually be built.

1890: Herman Hollerith designs a punch card system to help tabulate the 1890 census, accomplishing the task in just three years and saving the government $5 million. He establishes a company that would ultimately become IBM.

1936: Alan Turing presents the notion of a universal machine, later called the Turing machine, capable of computing anything that is computable. The central concept of the modern computer was based on his ideas.

1937: J.V. Atanasoff, a professor of physics and mathematics at Iowa State University, attempts to build the first computer without gears, cams, belts or shafts.

1939: Hewlett-Packard is founded by David Packard and Bill Hewlett in a Palo Alto, California, garage, according to the Computer History Museum.


1941: Atanasoff and his graduate student, Clifford Berry, design a computer that can solve 29 equations simultaneously. This marks the first time a computer is able to store information on its main memory.

1943-1944: Two University of Pennsylvania researchers, John Mauchly and J. Presper Eckert, build the Electronic Numerical Integrator and Computer (ENIAC). Considered the grandfather of digital computers, it fills a 20-foot by 40-foot room and has 18,000 vacuum tubes.

1946: Mauchly and Eckert leave the University of Pennsylvania and receive funding from the Census Bureau to build the UNIVAC, the first commercial computer for business and government applications.

1947: William Shockley, John Bardeen and Walter Brattain of Bell Laboratories invent the transistor. They discover how to make an electric switch with solid materials and no need for a vacuum.

1953: Grace Hopper develops one of the first compilers, translating written instructions into computer code; her later language work forms the basis of COBOL. Thomas Johnson Watson Jr., son of IBM CEO Thomas Johnson Watson Sr., conceives the IBM 701 EDPM to help the United Nations keep tabs on Korea during the war.

1954: The FORTRAN programming language, an acronym for FORmula TRANslation, is developed by a team of programmers at IBM led by John Backus, according to the University of Michigan.

1958: Jack Kilby and Robert Noyce unveil the integrated circuit, known as the computer chip. Kilby was awarded the Nobel Prize in Physics in 2000 for his work.

1964: Douglas Engelbart shows a prototype of the modern computer, with a mouse and a graphical user interface (GUI). This marks the evolution of the computer from a specialized machine for scientists and mathematicians to technology that is more accessible to the general public.

1969: A group of developers at Bell Labs produce UNIX, an operating system that addressed compatibility issues. Written in the C programming language, UNIX was portable across multiple platforms and became the operating system of choice among mainframes at large companies and government entities. Due to the slow nature of the system, it never quite gained traction among home PC users.

1970: The newly formed Intel unveils the Intel 1103, the first Dynamic Random Access Memory (DRAM) chip.

1971: Alan Shugart leads a team of IBM engineers who invent the "floppy disk," allowing data to be shared among computers.

1973: Robert Metcalfe, a member of the research staff for Xerox, develops Ethernet for connecting multiple computers and other hardware.

1974-1977: A number of personal computers hit the market, including the Scelbi, the Mark-8, the Altair, the IBM 5100, Radio Shack's TRS-80 — affectionately known as the "Trash 80" — and the Commodore PET.

1975: The January issue of Popular Electronics magazine features the Altair 8800, described as the "world's first minicomputer kit to rival commercial models." Two "computer geeks," Paul Allen and Bill Gates, offer to write software for the Altair, using the new BASIC language. On April 4, after the success of this first endeavour, the two childhood friends form their own software company, Microsoft.

1976: Steve Jobs and Steve Wozniak start Apple Computers on April Fool's Day and roll out the Apple I, the first computer with a single-circuit board, according to Stanford University.

1977: Radio Shack's initial production run of the TRS-80 was just 3,000. It sold like crazy. For the first time, non-geeks could write programs and make a computer do what they wished.

1977: Jobs and Wozniak incorporate Apple and show the Apple II at the first West Coast Computer Faire. It offers color graphics and incorporates an audio cassette drive for storage.

1978: Accountants rejoice at the introduction of VisiCalc, the first computerized spreadsheet program.

1979: Word processing becomes a reality as MicroPro International releases WordStar. "The defining change was to add margins and word wrap," said creator Rob Barnaby in an email to Mike Petrie in 2000. "Additional changes included getting rid of command mode and adding a print function. I was the technical brains — I figured out how to do it, and did it, and documented it. "

1981: The first IBM personal computer, code-named "Acorn," is introduced. It uses Microsoft's MS-DOS operating system. It has an Intel chip, two floppy disks and an optional colour monitor. Sears & Roebuck and Computerland sell the machines, marking the first time a computer is available through outside distributors. It also popularizes the term PC.

1983: Apple's Lisa is the first personal computer with a GUI. It also features a drop-down menu and icons. It flops but eventually evolves into the Macintosh. The Gavilan SC is the first portable computer with the familiar flip form factor and the first to be marketed as a "laptop."

1985: Microsoft announces Windows, according to Encyclopedia Britannica. This was the company's response to Apple's GUI. Commodore unveils the Amiga 1000, which features advanced audio and video capabilities.

1985: The first dot-com domain name is registered on March 15, years before the World Wide Web would mark the formal beginning of Internet history. The Symbolics Computer Company, a small Massachusetts computer manufacturer, registers Symbolics.com. More than two years later, only 100 dot-coms had been registered.

1986: Compaq brings the Deskpro 386 to market. Its 32-bit architecture provides speed comparable to mainframes.

1990: Tim Berners-Lee, a researcher at CERN, the high-energy physics laboratory in Geneva, develops HyperText Markup Language (HTML), giving rise to the World Wide Web.

1993: The Pentium microprocessor advances the use of graphics and music on PCs.

1994: PCs become gaming machines as "Command & Conquer," "Alone in the Dark 2," "Theme Park," "Magic Carpet," "Descent" and "Little Big Adventure" are among the games to hit the market.

1996: Sergey Brin and Larry Page developed the Google search engine at Stanford University.

1997: Microsoft invests $150 million in Apple, which was struggling at the time, ending Apple's court case against Microsoft in which it alleged that Microsoft copied the "look and feel" of its operating system.

1999: The term Wi-Fi becomes part of the computing language and users begin connecting to the Internet without wires.

2001: Apple unveils the Mac OS X operating system, which provides protected memory architecture and pre-emptive multi-tasking, among other benefits. Not to be outdone, Microsoft rolls out Windows XP, which has a significantly redesigned GUI.

2003: The first 64-bit processor, AMD's Athlon 64, becomes available to the consumer market.

2004: Mozilla's Firefox 1.0 challenges Microsoft's Internet Explorer, the dominant Web browser. Facebook, a social networking site, launches.

2005: YouTube, a video sharing service, is founded. Google acquires Android, a Linux-based mobile phone operating system.

2006: Apple introduces the MacBook Pro, its first Intel-based, dual-core mobile computer, as well as an Intel-based iMac. Nintendo's Wii game console hits the market.

2007: The iPhone brings many computer functions to the smartphone.

2009: Microsoft launches Windows 7, which offers the ability to pin applications to the taskbar and advances in touch and handwriting recognition, among other features.

2010: Apple unveils the iPad, changing the way consumers view media and jumpstarting the dormant tablet computer segment.

2011: Google releases the Chromebook, a laptop that runs the Google Chrome OS.

2012: Facebook gains 1 billion users on October 4.

2015: Apple releases the Apple Watch. Microsoft releases Windows 10.

2016: The first reprogrammable quantum computer was created. "Until now, there hasn't been any quantum-computing platform that had the capability to program new algorithms into their system. They're usually each tailored to attack a particular algorithm," said study lead author Shantanu Debnath, a quantum physicist and optical engineer at the University of Maryland, College Park.

2017: The Defense Advanced Research Projects Agency (DARPA) is developing a new "Molecular Informatics" program that uses molecules as computers. "Chemistry offers a rich set of properties that we may be able to harness for rapid, scalable information storage and processing," Anne Fischer, program manager in DARPA's Defense Sciences Office, said in a statement. "Millions of molecules exist, and each molecule has a unique three-dimensional atomic structure as well as variables such as shape, size, or even colour. This richness provides a vast design space for exploring novel and multi-value ways to encode and process data beyond the 0s and 1s of current logic-based, digital architectures." [Computers of the Future May Be Minuscule Molecular Machines]

And now where?

What of the future? The power of computers (the number of components packed on a chip) has doubled roughly every 18 months to 2 years since the 1960s. But the laws of physics are expected to bring a halt to Moore's Law, as this idea is known, and force us to explore entirely new ways of building computers. What will tomorrow's PCs look like? One long-touted idea is that they'll be using particles of light—photons—instead of electrons, an approach known as optical computing or photonics.
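As a back-of-the-envelope check on that doubling claim (assuming a clean two-year doubling period, which real chips only roughly follow), a few lines of Python compound the growth from the early 1970s onward.

```python
# Rough Moore's-law arithmetic: double the component count every 2 years.
components = 2_300          # order of the Intel 4004's transistor count (1971)
year = 1971
while year < 2021:
    components *= 2         # one doubling per assumed 2-year period
    year += 2

print(f"{components:,}")    # ~7.7e10 after 25 doublings: tens of billions of transistors
```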

Currently, much of the smart money is betting on quantum computers, which deploy cunning ways of manipulating atoms to process and store information at lightning speed. There's also hope we might use spintronics (harnessing the "spin" of particles) and biomolecular technology (computing with DNA, proteins, and other biological molecules), though both are in the very early stages of research. Chips made from new materials such as graphene may also offer ways of extending Moore's law. Whichever technology wins out, you can be quite certain the future of computing will be just as exciting as the past!

Source: https://matrixdisclosure.com/computers-to-pc-a-brief-history-from-distant-past-to-present-day/
