Monday, September 24, 2018

Origin of the C programming language


C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs,[6] and used to re-implement the Unix operating system.[7] It has since become one of the most widely used programming languages of all time,[8][9] with C compilers from various vendors available for the majority of existing computer architectures and operating systems. C has been standardized by the American National Standards Institute (ANSI) since 1989 (see ANSI C) and subsequently by the International Organization for Standardization (ISO).

C is an imperative procedural language. It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program that is written with portability in mind can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code. The language has become available on a very wide range of platforms, from embedded microcontrollers to supercomputers.
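As a rough illustration of that portability claim (a sketch added here, not part of the original text), the following program relies only on the standard library, so any conforming C compiler, whether it targets a small microcontroller or a supercomputer, should accept it unchanged:

#include <stdio.h>

/* A minimal, standards-compliant C program. It uses nothing beyond the
   standard library, so it should build unmodified with any conforming
   compiler (gcc, clang, MSVC, an embedded toolchain, and so on). */
int main(void)
{
    printf("C was first developed at Bell Labs between 1969 and 1973.\n");
    return 0;
}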



Overview 
Dennis Ritchie (right), the inventor of the C programming language, with Ken Thompson.
Like most imperative languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion, while a static type system prevents many unintended operations. In C, all executable code is contained within subroutines, which are called "functions" (although not in the strict sense of functional programming). Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements.
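Here is a small sketch (added for illustration, not from the original text) of those two points: arguments are always copied, and passing a pointer is the conventional way to let a function change the caller's variable. The semicolon-terminated statements and brace-delimited blocks mentioned above are also visible.

#include <stdio.h>

/* C always copies arguments: changing 'n' here leaves the caller's
   variable untouched. */
void bump_by_value(int n)
{
    n = n + 1;
}

/* Passing a pointer (the address of the variable) lets the function
   modify the caller's data -- the usual way to simulate pass-by-reference. */
void bump_by_pointer(int *n)
{
    *n = *n + 1;
}

int main(void)
{
    int x = 10;

    bump_by_value(x);      /* x is still 10 */
    printf("after bump_by_value:   %d\n", x);

    bump_by_pointer(&x);   /* x becomes 11 */
    printf("after bump_by_pointer: %d\n", x);

    return 0;
}

Running it prints 10 after the first call and 11 after the second, which is exactly the difference between copying a value and passing its address.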

The C language also exhibits the following characteristics (several of which come together in the short code sketch at the end of this overview):

  • There is a small, fixed number of keywords, including a full set of control flow primitives: for, if/else, while, switch, and do/while. User-defined names are not distinguished from keywords by any kind of sigil.
  • There are a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc.
  • More than one assignment may be performed in a single statement.
  • Function return values can be ignored when not needed.
  • Typing is static, but weakly enforced: all data has a type, but implicit conversions may be performed.
  • Declaration syntax mimics usage context. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list.
  • User-defined (typedef) and compound types are possible.
  • Heterogeneous aggregate data types (struct) allow related data elements to be accessed and assigned as a unit.
  • A union is a structure with overlapping members; only the last member stored is valid.
  • Array indexing is a secondary notation, defined in terms of pointer arithmetic. Unlike structs, arrays are not first-class objects; they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11].
  • Enumerated types are possible with the enum keyword. They are freely interconvertible with integers.
  • Strings are not a separate data type, but are conventionally implemented as null-terminated arrays of characters.
  • Low-level access to computer memory is possible by converting machine addresses to typed pointers.
  • Procedures (subroutines not returning values) are a special case of function, with an untyped return type void.
  • Functions may not be defined within the lexical scope of other functions.
  • Function and data pointers permit ad hoc run-time polymorphism.
  • A preprocessor performs macro definition, source code file inclusion, and conditional compilation.
  • There is a basic form of modularity: files can be compiled separately and linked together, with control over which functions and data objects are visible to other files via static and extern attributes.
  • Complex functionality such as I/O, string manipulation, and mathematical functions is consistently delegated to library routines.
While C does not include some features found in some other languages, such as object orientation or garbage collection, such features can be implemented or emulated in C, often by way of external libraries (e.g., the Boehm garbage collector or the GLib Object System).
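To make a few of those characteristics concrete, here is a small sketch added for illustration (the drink struct, its fields, and the printing functions are invented, not from the original text). It combines a typedef'd struct, an enum, a null-terminated string, and a function pointer used for ad hoc run-time polymorphism.

#include <stdio.h>
#include <string.h>

/* Enumerated type: freely interconvertible with integers. */
enum temperature { COLD, WARM, HOT };

/* Heterogeneous aggregate (struct) given a user-defined name via typedef.
   The 'name' field is a conventional null-terminated character array. */
typedef struct {
    char name[16];
    enum temperature served_at;
} drink;

/* Two interchangeable routines; a function pointer picks between them
   at run time. */
static void print_plain(const drink *d)
{
    printf("%s (temperature code %d)\n", d->name, (int)d->served_at);
}

static void print_shouting(const drink *d)
{
    printf("%s!!!\n", d->name);
}

int main(void)
{
    drink tea;
    void (*printer)(const drink *) = print_plain;

    strcpy(tea.name, "green tea");   /* string handling via the library */
    tea.served_at = HOT;             /* enum value used like an integer */

    printer(&tea);                   /* call through the function pointer */
    printer = print_shouting;
    printer(&tea);

    return 0;
}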

Relations to other languages
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript, Limbo, LPC, Objective-C, Perl, PHP, Python, Rust, Swift, and Verilog (hardware description language)[5]. These languages have drawn many of their control structures and other basic features from C. Most of them (with Python being the most dramatic exception) are also very syntactically similar to C in general, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.

Friday, September 21, 2018

Grow Your Site in Five Days

Two months ago, the Sumos were sitting around having tacos and whiskey (typical lunch here), and a question came up:
“If you had to grow a blog from scratch with no existing audience or mailing list, how would you grow it to 10,000 visitors per month, in just a few months?”
Over 350,000 sites have Sumo installed, but very few have reached this traffic tipping point. Maybe you’re still working to hit that 10,000 mark.
To help you get there, I want to show you how to grow blog traffic by sharing the strategy I used to increase traffic to my personal site, nateliason.com, to 50,000+ visits a month:
Screenshot of a google analytics graph
And I’m going to show how my site’s traffic increased steadily over the course of December, despite publishing only one article and not emailing my list:
Screenshot showing traffic and nat eliason
The reason I saw so much traffic with so little time spent? SEO.
Yes, you’ve read unsatisfying articles about SEO before, but I’m going to show you the exact steps I follow to take advantage of it and actually get results.
I’ll walk you through how I’d use the same strategies that have worked on nateliason.com (and some other things that I’d do if I had more time for it) to grow a brand new website to 10,000 visitors a month, all through a focus on SEO.
And, as promised, I’ll talk about how I’d do it assuming I had no existing audience, no big blogs (like this one!) to link from, no connections in the industry, NADA.
  1. Pick the Topic
  2. Create an Initial Article List
  3. Prioritize Your Articles
  4. Pick Your Top 5-10 Articles
  5. Create a Schedule
  6. Create a List of Similar Blogs for Guest Posting
  7. Write The First Article
  8. Create a Content Upgrade
  9. Promote Your Article
  10. Create a List of Guest Topics
  11. Start Guest Posting
  12. Repeat Steps 7 through 11!

STEP 1: PICK THE TOPIC

(Skip this section if you already have your site built)
It’s popular to recommend going through Google, BuzzSumo, Facebook groups, etc. to try to find the perfect topic to build a high traffic site around… but it’s all a waste of time.
Why would you build a site about anything you’re not passionate about?
Meme of a cat planning to start a puppy training blog
For the sake of this article, I’m going to start a site called Nat Likes Tea at natlikestea.com.
I like tea, mention it at least once in most of my articles, and all the good “tea” puns are taken: honest tea, ingenuitea, insanitea…
Nat Likes Tea it is.

STEP 2: CREATE AN INITIAL ARTICLE LIST

With the topic in mind, I’d come up with an initial list of articles that I could write, and keep them all in a spreadsheet.
By doing this, I can make sure that I’m writing about the best topics to go after, not just picking things willy-nilly. The first couple ideas I have might be fine, but by spending some time to put all the options out there I can be sure that I’m spending my time as effectively as possible.
The easiest way to make this list is to come up with a few article styles, then mix and match styles and content to build a huge list.
For Nat Likes Tea (NLT from now on), I might have these categories:
Screenshot showing different content categories in a google spreadsheet
Then, I’d make a “generator” column like this one:
Screenshot showing content generators on a google spreadsheet
And then I’d use that with the “CONCATENATE” function to make up article topics:
GIF showing a spreadsheet being edited
And then add those to a new Worksheet for my topics:
Screenshot showing a google spreadsheet being used to plan marketing
Pro tip: If you paste them and get a #REF! error, use "Paste values only" instead.
Then I’d repeat that for all the other combinations of things I could write about, until I have an initial list of possible topics:
Screenshot showing possible topics on a google spreadsheet
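If you'd rather script the mix-and-match step than wrangle spreadsheet formulas, the idea is just a pair of nested loops. Here's a toy sketch in C (the generator phrases and topics below are invented for this example); it prints every generator/topic combination, which is all the CONCATENATE column is doing:

#include <stdio.h>

/* Toy sketch of the mix-and-match step: pair every "generator" phrase
   with every topic, exactly what the CONCATENATE column does in the
   spreadsheet. The phrases here are made-up examples. */
int main(void)
{
    const char *generators[] = { "Best", "Guide to", "How to brew" };
    const char *topics[]     = { "green tea", "oolong", "herbal tea" };
    size_t ng = sizeof generators / sizeof generators[0];
    size_t nt = sizeof topics / sizeof topics[0];

    for (size_t g = 0; g < ng; g++)
        for (size_t t = 0; t < nt; t++)
            printf("%s %s\n", generators[g], topics[t]);

    return 0;
}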
Now, I can start listing some of the more meta topics that I can think of. These are the ones that would be longer, more in-depth posts, and will likely have higher SEO value.
This is why you should write about something you’re already passionate about. It’s way easier to come up with these big meaty topics when you know the subject matter already, and odds are that friends have already asked you questions related to some of them:
Screenshot showing meaty topics that can be used for later
Then I just take those and add them to the list, and now we have our initial list of things I can write about!
Screenshot showing a plethora of topics for content on a google spreadsheet, along with doge
All that’s left is lock myself in a Starbucks and start doing lines of ground coffee beans until they’re all written.
Or, for the sake of my nose and adrenal glands, we could prioritize them.

STEP 3: PRIORITIZE YOUR ARTICLES

Now that we have all of our possible topics, we need to figure out which ones it makes sense to focus on for our site (in order to get that sweet sweet SEO traffic), and which ones are fine to give to other people as guest posts.
To do that, we’re going to rank our posts by Depth, then SEO value.

RANKING BY DEPTH

I’m going to go through and assign a score of 1 to 3 for depth to each topic on the list:
  • 1: Shorter, fun, one-off post. Probably < 1,000 words
  • 2: Somewhere in the middle, 1,000-2,000 word guides
  • 3: Massively useful in-depth guide on a topic, likely 2,000+ words
Don’t sell any of your posts long, if they could fit into a bigger post (within reason) then they’re a 1 or 2.
Think of it like a pyramid, with your 3s as a base that the other article ideas build on top of or expand on.
Screenshot showing depth ranking on a google spreadsheet for planning content
Then, take all the posts you ranked a “1” and put them in another Worksheet labeled “Later Posts”:
Screenshot showing a google spreadsheet with a content plan

COMPARE SEO VALUE

With the 2s and 3s, we’re going to figure out how valuable they are from an SEO perspective. This means assessing how many visitors we could potentially get to them from Google as a result of people searching for those topics.
First, go to Google Keyword Planner.
Then, take a topic you came up with, and plug it in as you have it:
Screenshot showing the google keyword planner in use
See what the results are:
Screenshot showing the google keyword planner
In this case, there are almost no monthly searches for that topic, so we can plug some variations into that search bar at the top until we find a keyword (it’s a keyword even if there are multiple words) with a high volume:
Screenshot showing keyword search results on the keyword planner
GIF showing a military man saying "oooooh that
Now I just take that Keyword and the Keyword Volume, and add it to our spreadsheet:
Screenshot showing a google spreadsheet about content planning
As you’re researching, you’ll come up with more ideas. For example, when I was looking for a good keyword for “high caffeine teas,” I noticed that there was a lot of searches for the amount of caffeine in specific types of tea:
Screenshot showing keyword search results for water fasting
So I made a note to myself to add more topics around that later.
Screenshot showing search volume on a google spreadsheet
And then you just need to repeat this for all of your topics, or at least all of your “3” ranked ones.
Screenshot of a spreadsheet being used to plan content

DISQUALIFY TOUGH COMPETITION

Once I have my list of keywords, I’d go through and make sure that none of them are so competitive that I shouldn’t bother trying to rank for them (yet).
This does NOT mean to look at the competitiveness rating in Keyword Planner.
Screenshot showing how you can ignore tough competition
That competitiveness ranking is how competitive the ads are. Since we’re not buying ads, we don’t have to worry about how many other people are buying ads.
What I mean by competitiveness is who else is ranking for this keyword in Google right now.
All I have to do is take each keyword, plug it into Google, and see what comes up. I’ll also use the Moz Toolbar to tell me how highly ranked the pages are.
If there’s a few major sites competing on a keyword, I’ll highlight it in red, but if it looks like smaller sites that I can definitely compete against then I’ll move on.
Screenshot showing competitors on google search
You might see results from social media, Amazon, and other “big” sites in the results, but don’t let those scare you off.
Since these sites cover such a wide variety of topics you can still compete with them, and if all you’re seeing are social media results that’s a good sign that no one has written a good article on the topic.

STEP 4: PICK YOUR TOP 5-10 TOPICS

With your list of topics (excluding the ones that seem too competitive), it’s time to pick the 5-10 that you’ll put on your site.
With just 5-10 articles, you can easily reach 10k visitors a month. I currently get ~2,500 visitors per day from just 4 of my articles, so if just one of your articles reaches that level then you’ll have hit the goal.
In this case, I would go with:
  • Drink these teas for weight loss
  • A guide to the different types of tea
  • Tea vs. Coffee
  • Best tea for your skin
  • The best decaf teas
  • New to Tea? Start here
  • The best low caffeine teas to go to sleep
They’re all not too competitive, have a large enough search volume to each get me 5,000+ visitors a month (anything over 1,000 is a safe bet), and they’re topics I want to write about.

Important note: the “keyword volume” is not an absolute, it’s only an indicator. For example, the keyword “water fasting results” has a monthly keyword volume of 2,400, but my article targeting “water fasting results” gets 15,000-30,000 visitors a month.

This is because 80% or more of searches are unique. They’re long highly-specific searches (e.g. “what happens to your body if you don’t eat and only drink water for 5 days”) that are rarely repeated, and Google just makes its best guess to match those to easier keywords.
10,000 visitors per month is roughly 333 per day, so each article only needs to get 30-60 visits per day. Not a crazy high bar.
In fact, you can plug in how many posts you want to write to see what traffic amount you need on the spreadsheet. Just set your monthly goal and how many posts you want to write:
Screenshot showing a google spreadsheet outlining how many articles need to be written for a goal
Just so you know though, it won’t be an even spread. A few articles will get substantially more traffic than others (the 80/20 rule at work), so when you’re picking topics, pick ones that could be massively popular (volume of 1,000/month or higher).
Here’s the relative traffic amounts for the top 20 articles on my site to give you an idea:
Piechart showing relative traffic counts for different articles

STEP 5: MAKE A SCHEDULE

Before you start writing, create a schedule in Google Calendar for how often you’ll publish, both on your site and other sites.
Aim for a minimum of two posts per month on your site, and one post a week on other sites. If you want to post more, put more time into posting on other sites. That’ll have a bigger ROI for you in the short term.
But the most important thing is that once you make the schedule… stick to it. Lock it in, and make sure that you’re getting your articles out when you say you will. That’s the only way you succeed at this.
Screenshot showing the schedule for articles for a blog

STEP 6: CREATE A LIST OF SIMILAR BLOGS FOR GUEST POSTING

We’re almost to the writing! I promise! This is the last step before you buckle down with your typewriter.
You need a list of blogs that you can guest post on and that you'd like to have linking back to you. The reason we're doing this now is that once you have your list, you can include links to other people's sites in your articles to earn good karma with them.
Think about it. What’s more appealing:
“Hey I just linked to your article about XYZ from my article about ABC. Seems like we have a lot of topics in common, would love to put something together for your audience about DEF if that’s interesting to you :)”
Or
“Hey can I write stuff on your site to promote my site?”
Obviously the first one, since you’ve already done them a favor by giving them a link whether or not they let you guest post!
And ideally, you’re doing this over separate emails (so one that says “hey I linked to you” then a follow up later asking about guest posting), which means that in your first communication you’re not asking for anything, just making their life better.
Add a worksheet like this one to your content plan:
Screenshot showing a content planning worksheet
And fill it in with the blogs in a similar niche as you.
The best ways to find these blogs are to:
  • See what comes up in Google right now for your topics
  • Search on BuzzSumo for posts that performed well related to your topic
  • Search on Twitter for people sharing articles from related blogs
If you’re having trouble finding their email addresses, Connectifier or Email Huntercan usually take care of it for you.
Now, with list in hand, it’s time to start writing :)

Tuesday, September 18, 2018

History of Computers

Cogs and Calculators

It is a measure of the brilliance of the abacus, invented in the Middle East circa 500 BC, that it remained the fastest form of calculator until the middle of the 17th century. Then, in 1642, aged only 18, French scientist and philosopher Blaise Pascal (1623–1662) invented the first practical mechanical calculator, the Pascaline, to help his tax-collector father do his sums. The machine had a series of interlocking cogs (gear wheels with teeth around their outer edges) that could add and subtract decimal numbers. Several decades later, in 1671, German mathematician and philosopher Gottfried Wilhelm Leibniz (1646–1716) came up with a similar but more advanced machine. Instead of using cogs, it had a "stepped drum" (a cylinder with teeth of increasing length around its edge), an innovation that survived in mechanical calculators for 300 years. The Leibniz machine could do much more than Pascal's: as well as adding and subtracting, it could multiply, divide, and work out square roots. Another pioneering feature was the first memory store or "register."
Details of the user interface and internal mechanism of Blaise Pascal's Pascaline.
Artwork: Pascaline: Two details of Blaise Pascal's 17th-century calculator. Left: The "user interface": the part where you dial in numbers you want to calculate. Right: The internal gear mechanism. Picture courtesy of US Library of Congress.
Apart from developing one of the world's earliest mechanical calculators, Leibniz is remembered for another important contribution to computing: he was the man who invented binary code, a way of representing any decimal number using only the two digits zero and one. Although Leibniz made no use of binary in his own calculator, it set others thinking. In 1854, a little over a century after Leibniz had died, Englishman George Boole (1815–1864) used the idea to invent a new branch of mathematics called Boolean algebra. In modern computers, binary code and Boolean algebra allow computers to make simple decisions by comparing long strings of zeros and ones. But, in the 19th century, these ideas were still far ahead of their time. It would take another 50–100 years for mathematicians and computer scientists to figure out how to use them (find out more in our articles about calculators and logic gates).

Engines of Calculation

Neither the abacus, nor the mechanical calculators constructed by Pascal and Leibniz really qualified as computers. A calculator is a device that makes it quicker and easier for people to do sums—but it needs a human operator. A computer, on the other hand, is a machine that can operate automatically, without any human help, by following a series of stored instructions called a program (a kind of mathematical recipe). Calculators evolved into computers when people devised ways of making entirely automatic, programmable calculators.
How punched cards were used in early computers. A drawing from Herman Hollerith's Art of Compiling Statistics Patent, January 8, 1889.
Photo: Punched cards: Herman Hollerith perfected the way of using punched cards and paper tape to store information and feed it into a machine. Here's a drawing from his 1889 patent Art of Compiling Statistics (US Patent #395,782), showing how a strip of paper (yellow) is punched with different patterns of holes (orange) that correspond to statistics gathered about people in the US census. Picture courtesy of US Patent and Trademark Office.
The first person to attempt this was a rather obsessive, notoriously grumpy English mathematician named Charles Babbage (1791–1871). Many regard Babbage as the "father of the computer" because his machines had an input (a way of feeding in numbers), a memory (something to store these numbers while complex calculations were taking place), a processor (the number-cruncher that carried out the calculations), and an output (a printing mechanism)—the same basic components shared by all modern computers. During his lifetime, Babbage never completed a single one of the hugely ambitious machines that he tried to build. That was no surprise. Each of his programmable "engines" was designed to use tens of thousands of precision-made gears. It was like a pocket watch scaled up to the size of a steam engine, a Pascal or Leibniz machine magnified a thousand-fold in dimensions, ambition, and complexity. For a time, the British government financed Babbage—to the tune of £17,000, then an enormous sum. But when Babbage pressed the government for more money to build an even more advanced machine, they lost patience and pulled out. Babbage was more fortunate in receiving help from Augusta Ada Byron (1815–1852), Countess of Lovelace, daughter of the poet Lord Byron. An enthusiastic mathematician, she helped to refine Babbage's ideas for making his machine programmable—and this is why she is still, sometimes, referred to as the world's first computer programmer. Little of Babbage's work survived after his death. But when, by chance, his notebooks were rediscovered in the 1930s, computer scientists finally appreciated the brilliance of his ideas. Unfortunately, by then, most of these ideas had already been reinvented by others.
Babbage had intended that his machine would take the drudgery out of repetitive calculations. Originally, he imagined it would be used by the army to compile the tables that helped their gunners to fire cannons more accurately. Toward the end of the 19th century, other inventors were more successful in their effort to construct "engines" of calculation. American statistician Herman Hollerith (1860–1929) built one of the world's first practical calculating machines, which he called a tabulator, to help compile census data. Then, as now, a census was taken each decade but, by the 1880s, the population of the United States had grown so much through immigration that a full-scale analysis of the data by hand was taking seven and a half years. The statisticians soon figured out that, if trends continued, they would run out of time to compile one census before the next one fell due. Fortunately, Hollerith's tabulator was an amazing success: it tallied the entire census in only six weeks and completed the full analysis in just two and a half years. Soon afterward, Hollerith realized his machine had other applications, so he set up the Tabulating Machine Company in 1896 to manufacture it commercially. A few years later, it changed its name to the Computing-Tabulating-Recording (C-T-R) company and then, in 1924, acquired its present name: International Business Machines (IBM).

Bush and the bomb

The history of computing remembers colorful characters like Babbage, but others who played important—if supporting—roles are less well known. At the time when C-T-R was becoming IBM, the world's most powerful calculators were being developed by US government scientist Vannevar Bush (1890–1974). In 1925, Bush made the first of a series of unwieldy contraptions with equally cumbersome names: the New Recording Product Integraph Multiplier. Later, he built a machine called the Differential Analyzer, which used gears, belts, levers, and shafts to represent numbers and carry out calculations in a very physical way, like a gigantic mechanical slide rule. Bush's ultimate calculator was an improved machine named the Rockefeller Differential Analyzer, assembled in 1935 from 320 km (200 miles) of wire and 150 electric motors. Machines like these were known as analog calculators—analog because they stored numbers in a physical form (as so many turns on a wheel or twists of a belt) rather than as digits. Although they could carry out incredibly complex calculations, it took several days of wheel cranking and belt turning before the results finally emerged.
Photo of Differential Analyzer c.1951 by NASA
Photo: A Differential Analyzer. The black part in the background is the main part of the machine. The operator sits at a smaller console in the foreground. Picture courtesy of NASA on the Commons (where you can download a larger version of this photo).
Impressive machines like the Differential Analyzer were only one of several outstanding contributions Bush made to 20th-century technology. Another came as the teacher of Claude Shannon (1916–2001), a brilliant mathematician who figured out how electrical circuits could be linked together to process binary code with Boolean algebra (a way of comparing binary numbers using logic) and thus make simple decisions. During World War II, President Franklin D. Roosevelt appointed Bush chairman first of the US National Defense Research Committee and then director of the Office of Scientific Research and Development (OSRD). In this capacity, he was in charge of the Manhattan Project, the secret $2-billion initiative that led to the creation of the atomic bomb. One of Bush's final wartime contributions was to sketch out, in 1945, an idea for a memory-storing and sharing device called Memex that would later inspire Tim Berners-Lee to invent the World Wide Web. Few outside the world of computing remember Vannevar Bush today—but what a legacy! As a father of the digital computer, an overseer of the atom bomb, and an inspiration for the Web, Bush played a pivotal role in three of the 20th-century's most far-reaching technologies.

Turing—tested

Many of the pioneers of computing were hands-on experimenters—but by no means all of them. One of the key figures in the history of 20th-century computing, Alan Turing (1912–1954) was a brilliant Cambridge mathematician whose major contributions were to the theory of how computers processed information. In 1936, at the age of just 23, Turing wrote a groundbreaking mathematical paper called "On computable numbers, with an application to the Entscheidungsproblem," in which he described a theoretical computer now known as a Turing machine (a simple information processor that works through a series of instructions, reading data, writing results, and then moving on to the next instruction). Turing's ideas were hugely influential in the years that followed and many people regard him as the father of modern computing—the 20th-century's equivalent of Babbage.
Although essentially a theoretician, Turing did get involved with real, practical machinery, unlike many mathematicians of his time. During World War II, he played a pivotal role in the development of code-breaking machinery that, itself, played a key part in Britain's wartime victory; later, he played a lesser role in the creation of several large-scale experimental computers including ACE (Automatic Computing Engine), Colossus, and the Manchester/Ferranti Mark I (described below). Today, Alan Turing is best known for conceiving what's become known as the Turing test, a simple way to find out whether a computer can be considered intelligent by seeing whether it can sustain a plausible conversation with a real human being.

The first modern computers

The World War II years were a crucial period in the history of computing, when powerful gargantuan computers began to appear. Just before the outbreak of the war, in 1938, German engineer Konrad Zuse (1910–1995) constructed his Z1, the world's first programmable binary computer, in his parents' living room. The following year, American physicist John Atanasoff (1903–1995) and his assistant, electrical engineer Clifford Berry (1918–1963), built a more elaborate binary machine that they named the Atanasoff Berry Computer (ABC). It was a great advance—1000 times more accurate than Bush's Differential Analyzer. These were the first machines that used electrical switches to store numbers: when a switch was "off", it stored the number zero; flipped over to its other, "on", position, it stored the number one. Hundreds or thousands of switches could thus store a great many binary digits (although binary is much less efficient in this respect than decimal, since it takes up to ten binary digits to store a three-digit decimal number). These machines were digital computers: unlike analog machines, which stored numbers using the positions of wheels and rods, they stored numbers as digits.
The first large-scale digital computer of this kind appeared in 1944 at Harvard University, built by mathematician Howard Aiken (1900–1973). Sponsored by IBM, it was variously known as the Harvard Mark I or the IBM Automatic Sequence Controlled Calculator (ASCC). A giant of a machine, stretching 15 m (50 ft) in length, it was like a huge mechanical calculator built into a wall. It must have sounded impressive, because it stored and processed numbers using "clickety-clack" electromagnetic relays (electrically operated magnets that automatically switched lines in telephone exchanges)—no fewer than 3304 of them. Impressive they may have been, but relays suffered from several problems: they were large (that's why the Harvard Mark I had to be so big); they needed quite hefty pulses of power to make them switch; and they were slow (it took time for a relay to flip from "off" to "on" or from 0 to 1).
Photo of analog computer c.1949 by NASA
Photo: An analog computer being used in military research in 1949. Picture courtesy of NASA on the Commons (where you can download a larger version).
Most of the machines developed around this time were intended for military purposes. Like Babbage's never-built mechanical engines, they were designed to calculate artillery firing tables and chew through the other complex chores that were then the lot of military mathematicians. During World War II, the military co-opted thousands of the best scientific minds: recognizing that science would win the war, Vannevar Bush's Office of Scientific Research and Development employed 10,000 scientists from the United States alone. Things were very different in Germany. When Konrad Zuse offered to build his Z2 computer to help the army, they couldn't see the need—and turned him down.
On the Allied side, great minds began to make great breakthroughs. In 1943, a team of mathematicians based at Bletchley Park near London, England (including Alan Turing) built a computer called Colossus to help them crack secret German codes. Colossus was the first fully electronic computer. Instead of relays, it used a better form of switch known as a vacuum tube (also known, especially in Britain, as a valve). The vacuum tube, about as big as a person's thumb and glowing red hot like a tiny electric light bulb, had been invented in 1906 by Lee de Forest (1873–1961), who named it the Audion. This breakthrough earned de Forest his nickname as "the father of radio," because the tubes' first major use was in radio receivers, where they amplified weak incoming signals so people could hear them more clearly. In computers such as the ABC and Colossus, vacuum tubes found an alternative use as faster and more compact switches.
Just like the codes it was trying to crack, Colossus was top-secret and its existence wasn't confirmed until after the war ended. As far as most people were concerned, vacuum tubes were pioneered by a more visible computer that appeared in 1946: the Electronic Numerical Integrator And Calculator (ENIAC). The ENIAC's inventors, two scientists from the University of Pennsylvania, John Mauchly (1907–1980) and J. Presper Eckert (1919–1995), were originally inspired by Bush's Differential Analyzer; years later Eckert recalled that ENIAC was the "descendant of Dr Bush's machine." But the machine they constructed was far more ambitious. It contained nearly 18,000 vacuum tubes (nine times more than Colossus), was around 24 m (80 ft) long, and weighed almost 30 tons. ENIAC is generally recognized as the world's first fully electronic, general-purpose, digital computer. Colossus might have qualified for this title too, but it was designed purely for one job (code-breaking); since it couldn't store a program, it couldn't easily be reprogrammed to do other things.
ENIAC was just the beginning. Its two inventors formed the Eckert Mauchly Computer Corporation in the late 1940s. Working with a brilliant Hungarian mathematician, John von Neumann (1903–1957), who was based at Princeton University, they then designed a better machine called EDVAC (Electronic Discrete Variable Automatic Computer). In a key piece of work, von Neumann helped to define how the machine stored and processed its programs, laying the foundations for how all modern computers operate. After EDVAC, Eckert and Mauchly developed UNIVAC 1 (UNIVersal Automatic Computer) in 1951. They were helped in this task by a young, largely unknown American mathematician and Naval reservist named Grace Murray Hopper (1906–1992), who had originally been employed by Howard Aiken on the Harvard Mark I. Like Herman Hollerith's tabulator over 50 years before, UNIVAC 1 was used for processing data from the US census. It was then manufactured for other users—and became the world's first large-scale commercial computer.
Machines like Colossus, the ENIAC, and the Harvard Mark I compete for significance and recognition in the minds of computer historians. Which one was truly the first great modern computer? All of them and none: these—and several other important machines—evolved our idea of the modern electronic computer during the key period between the late 1930s and the early 1950s. Among those other machines were pioneering computers put together by English academics, notably the Manchester/Ferranti Mark I, built at Manchester University by Frederic Williams (1911–1977) and Thomas Kilburn (1921–2001), and the EDSAC (Electronic Delay Storage Automatic Calculator), built by Maurice Wilkes (1913–2010) at Cambridge University.

The microelectronic revolution

Vacuum tubes were a considerable advance on relay switches, but machines like the ENIAC were notoriously unreliable. The modern term for a problem that holds up a computer program is a "bug." Popular legend has it that this word entered the vocabulary of computer programmers sometime in the 1950s when moths, attracted by the glowing lights of vacuum tubes, flew inside machines like the ENIAC, caused a short circuit, and brought work to a juddering halt. But there were other problems with vacuum tubes too. They consumed enormous amounts of power: the ENIAC used about 2000 times as much electricity as a modern laptop. And they took up huge amounts of space. Military needs were driving the development of machines like the ENIAC, but the sheer size of vacuum tubes had now become a real problem. ABC had used 300 vacuum tubes, Colossus had 2000, and the ENIAC had 18,000. The ENIAC's designers had boasted that its calculating speed was "at least 500 times as great as that of any other existing computing machine." But developing computers that were an order of magnitude more powerful still would have needed hundreds of thousands or even millions of vacuum tubes—which would have been far too costly, unwieldy, and unreliable. So a new technology was urgently required.
A FET transistor on a printed circuit board.
Photo: A typical transistor on an electronic circuit board.
The solution appeared in 1947 thanks to three physicists working at Bell Telephone Laboratories (Bell Labs). John Bardeen (1908–1991), Walter Brattain (1902–1987), and William Shockley (1910–1989) were then helping Bell to develop new technology for the American public telephone system, so the electrical signals that carried phone calls could be amplified more easily and carried further. Shockley, who was leading the team, believed he could use semiconductors (materials such as germanium and silicon that allow electricity to flow through them only when they've been treated in special ways) to make a better form of amplifier than the vacuum tube. When his early experiments failed, he set Bardeen and Brattain to work on the task for him. Eventually, in December 1947, they created a new form of amplifier that became known as the point-contact transistor. Bell Labs credited Bardeen and Brattain with the transistor and awarded them a patent. This enraged Shockley and prompted him to invent an even better design, the junction transistor, which has formed the basis of most transistors ever since.
Like vacuum tubes, transistors could be used as amplifiers or as switches. But they had several major advantages. They were a fraction of the size of vacuum tubes (typically about as big as a pea), used no power at all unless they were in operation, and were virtually 100 percent reliable. The transistor was one of the most important breakthroughs in the history of computing and it earned its inventors the world's greatest science prize, the 1956 Nobel Prize in Physics. By that time, however, the three men had already gone their separate ways. John Bardeen had begun pioneering research into superconductivity, which would earn him a second Nobel Prize in 1972. Walter Brattain moved to another part of Bell Labs.
William Shockley decided to stick with the transistor, eventually forming his own corporation to develop it further. His decision would have extraordinary consequences for the computer industry. With a small amount of capital, Shockley set about hiring the best brains he could find in American universities, including young electrical engineer Robert Noyce (1927–1990) and research chemist Gordon Moore (1929–). It wasn't long before Shockley's idiosyncratic and bullying management style upset his workers. In 1957, eight of them—including Noyce and Moore—left Shockley Transistor to found a company of their own, Fairchild Semiconductor, just down the road. Thus began the growth of "Silicon Valley," the part of California centered on Palo Alto, where many of the world's leading computer and electronics companies have been based ever since.
It was in Fairchild's California building that the next breakthrough occurred—although, somewhat curiously, it also happened at exactly the same time in the Dallas laboratories of Texas Instruments. In Dallas, a young engineer from Kansas named Jack Kilby (1923–2005) was considering how to improve the transistor. Although transistors were a great advance on vacuum tubes, one key problem remained. Machines that used thousands of transistors still had to be hand wired to connect all these components together. That process was laborious, costly, and error prone. Wouldn't it be better, Kilby reflected, if many transistors could be made in a single package? This prompted him to invent the "monolithic" integrated circuit (IC), a collection of transistors and other components that could be manufactured all at once, in a block, on the surface of a semiconductor. Kilby's invention was another step forward, but it also had a drawback: the components in his integrated circuit still had to be connected by hand. While Kilby was making his breakthrough in Dallas, unknown to him, Robert Noyce was perfecting almost exactly the same idea at Fairchild in California. Noyce went one better, however: he found a way to include the connections between components in an integrated circuit, thus automating the entire process.
Inside a typical microchip. You can see the  integrated circuit and the wires that connect to the terminals around its edge.
Photo: An integrated circuit seen from the inside. Photo by courtesy of NASA Glenn Research Center (NASA-GRC).
Integrated circuits, as much as transistors, helped to shrink computers during the 1960s. In 1943, IBM boss Thomas Watson had reputedly quipped: "I think there is a world market for about five computers." Just two decades later, the company and its competitors had installed around 25,000 large computer systems across the United States. As the 1960s wore on, integrated circuits became increasingly sophisticated and compact. Soon, engineers were speaking of large-scale integration (LSI), in which hundreds of components could be crammed onto a single chip, and then very large-scale integration (VLSI), in which the same chip could contain thousands of components.
The logical conclusion of all this miniaturization was that, someday, someone would be able to squeeze an entire computer onto a chip. In 1968, Robert Noyce and Gordon Moore had left Fairchild to establish a new company of their own. With integration very much in their minds, they called it Integrated Electronics or Intel for short. Originally they had planned to make memory chips, but when the company landed an order to make chips for a range of pocket calculators, history headed in a different direction. A couple of their engineers, Federico Faggin (1941–) and Marcian Edward (Ted) Hoff (1937–), realized that instead of making a range of specialist chips for a range of calculators, they could make a universal chip that could be programmed to work in them all. Thus was born the general-purpose, single-chip computer or microprocessor—and that brought about the next phase of the computer revolution.

Personal computers

By 1974, Intel had launched a popular microprocessor known as the 8080 and computer hobbyists were soon building home computers around it. The first was the MITS Altair 8800, built by Ed Roberts. With its front panel covered in red LED lights and toggle switches, it was a far cry from modern PCs and laptops. Even so, it sold by the thousand and earned Roberts a fortune. The Altair inspired a Californian electronics wizard named Steve Wozniak (1950–) to develop a computer of his own. "Woz" is often described as the hacker's "hacker"—a technically brilliant and highly creative engineer who pushed the boundaries of computing largely for his own amusement. In the mid-1970s, he was working at the Hewlett-Packard computer company in California, and spending his free time tinkering away as a member of the Homebrew Computer Club in the Bay Area.
After seeing the Altair, Woz used a 6502 microprocessor (made by an Intel rival, MOS Technology) to build a better home computer of his own: the Apple I. When he showed off his machine to his colleagues at the club, they all wanted one too. One of his friends, Steve Jobs (1955–2011), persuaded Woz that they should go into business making the machine. Woz agreed so, famously, they set up Apple Computer Corporation in a garage belonging to Jobs' parents. After selling 175 of the Apple I for the devilish price of $666.66, Woz built a much better machine called the Apple ][ (pronounced "Apple Two"). While the Altair 8800 looked like something out of a science lab, and the Apple I was little more than a bare circuit board, the Apple ][ took its inspiration from such things as Sony televisions and stereos: it had a neat and friendly looking cream plastic case. Launched in April 1977, it was the world's first easy-to-use home "microcomputer." Soon home users, schools, and small businesses were buying the machine in their tens of thousands—at $1,298 a time. Two things turned the Apple ][ into a really credible machine for small firms: a disk drive unit, launched in 1978, which made it easy to store data; and a spreadsheet program called VisiCalc, which gave Apple users the ability to analyze that data. In just two and a half years, Apple sold around 50,000 of the machine, quickly accelerating out of Jobs' garage to become one of the world's biggest companies. Dozens of other microcomputers were launched around this time, including the TRS-80 from Radio Shack (Tandy in the UK) and the Commodore PET.
Apple ][ microcomputer in a museum glass case Sinclair ZX81 microcomputer 
Photos: Microcomputers—the first PCs. The Apple ][ and the Sinclair ZX81, a build-it-yourself microcomputer that became hugely popular in the UK when it was launched in 1981. Both of these machines live in glass cases at Think Tank, the science museum in Birmingham, England.
Apple's success selling to businesses came as a great shock to IBM and the other big companies that dominated the computer industry. It didn't take a VisiCalc spreadsheet to figure out that, if the trend continued, upstarts like Apple would undermine IBM's immensely lucrative business market selling "Big Blue" computers. In 1980, IBM finally realized it had to do something and launched a highly streamlined project to save its business. One year later, it released the IBM Personal Computer (PC), based on an Intel 8088 microprocessor, which rapidly reversed the company's fortunes and stole the market back from Apple.
The PC was successful essentially for one reason. All the dozens of microcomputers that had been launched in the 1970s—including the Apple ][—were incompatible. All used different hardware and worked in different ways. Most were programmed using a simple, English-like language called BASIC, but each one used its own flavor of BASIC, which was tied closely to the machine's hardware design. As a result, programs written for one machine would generally not run on another one without a great deal of conversion. Companies who wrote software professionally typically wrote it just for one machine and, consequently, there was no software industry to speak of.
In 1976, Gary Kildall (1942–1994), a teacher and computer scientist, and one of the founders of the Homebrew Computer Club, had figured out a solution to this problem. Kildall wrote an operating system (a computer's fundamental control software) called CP/M that acted as an intermediary between the user's programs and the machine's hardware. With a stroke of genius, Kildall realized that all he had to do was rewrite CP/M so it worked on each different machine. Then all those machines could run identical user programs—without any modification at all—inside CP/M. That would make all the different microcomputers compatible at a stroke. By the early 1980s, Kildall had become a multimillionaire through the success of his invention: the first personal computer operating system. Naturally, when IBM was developing its personal computer, it approached him hoping to put CP/M on its own machine. Legend has it that Kildall was out flying his personal plane when IBM called, so missed out on one of the world's greatest deals. But the truth seems to have been that IBM wanted to buy CP/M outright for just $200,000, while Kildall recognized his product was worth millions more and refused to sell. Instead, IBM turned to a young programmer named Bill Gates (1955–). His then tiny company, Microsoft, rapidly put together an operating system called DOS, based on a product called QDOS (Quick and Dirty Operating System), which they acquired from Seattle Computer Products. Some believe Microsoft and IBM cheated Kildall out of his place in computer history; Kildall himself accused them of copying his ideas. Others think Gates was simply the shrewder businessman. Either way, the IBM PC, powered by Microsoft's operating system, was a runaway success.
Yet IBM's victory was short-lived. Cannily, Bill Gates had sold IBM the rights to one flavor of DOS (PC-DOS) and retained the rights to a very similar version (MS-DOS) for his own use. When other computer manufacturers, notably Compaq and Dell, started making IBM-compatible (or "cloned") hardware, they too came to Gates for the software. IBM charged a premium for machines that carried its badge, but consumers soon realized that PCs were commodities: they contained almost identical components—an Intel microprocessor, for example—no matter whose name they had on the case. As IBM lost market share, the ultimate victors were Microsoft and Intel, who were soon supplying the software and hardware for almost every PC on the planet. Apple, IBM, and Kildall made a great deal of money—but all failed to capitalize decisively on their early success.
Photo of mainframe computer c.1990 by NASA
Photo: Personal computers threatened companies making large "mainframes" like this one. Picture courtesy of NASA on the Commons (where you can download a larger version).

The user revolution

Fortunately for Apple, it had another great idea. One of the Apple II's strongest suits was its sheer "user-friendliness." For Steve Jobs, developing truly easy-to-use computers became a personal mission in the early 1980s. What truly inspired him was a visit to PARC (Palo Alto Research Center), a cutting-edge computer laboratory then run as a division of the Xerox Corporation. Xerox had started developing computers in the early 1970s, believing they would make paper (and the highly lucrative photocopiers Xerox made) obsolete. One of PARC's research projects was an advanced $40,000 computer called the Xerox Alto. Unlike most microcomputers launched in the 1970s, which were programmed by typing in text commands, the Alto had a desktop-like screen with little picture icons that could be moved around with a mouse: it was the very first graphical user interface (GUI, pronounced "gooey")—an idea conceived by Alan Kay (1940–) and now used in virtually every modern computer. The Alto borrowed some of its ideas, including the mouse, from 1960s computer pioneer Douglas Engelbart (1925–2013).
Back at Apple, Jobs launched his own version of the Alto project to develop an easy-to-use computer called PITS (Person In The Street). This machine became the Apple Lisa, launched in January 1983—the first widely available computer with a GUI desktop. With a retail price of $10,000, over three times the cost of an IBM PC, the Lisa was a commercial flop. But it paved the way for a better, cheaper machine called the Macintosh that Jobs unveiled a year later, in January 1984. With its memorable launch ad for the Macintosh inspired by George Orwell's novel 1984, and directed by Ridley Scott (director of the dystopic movie Blade Runner), Apple took a swipe at IBM's monopoly, criticizing what it portrayed as the firm's domineering—even totalitarian—approach: Big Blue was really Big Brother. Apple's ad promised a very different vision: "On January 24, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like '1984'." The Macintosh was a critical success and helped to invent the new field of desktop publishing in the mid-1980s, yet it never came close to challenging IBM's position.
Ironically, Jobs' easy-to-use machine also helped Microsoft to dislodge IBM as the world's leading force in computing. When Bill Gates saw how the Macintosh worked, with its easy-to-use picture-icon desktop, he launched Windows, an upgraded version of his MS-DOS software. Apple saw this as blatant plagiarism and filed a $5.5 billion copyright lawsuit in 1988. Four years later, the case collapsed with Microsoft effectively securing the right to use the Macintosh "look and feel" in all present and future versions of Windows. Microsoft's Windows 95 system, launched three years later, had an easy-to-use, Macintosh-like desktop and MS-DOS running behind the scenes.
Photo of IBM Blue Gene supercomputer at Argonne National Laboratory.
Photo: The IBM Blue Gene/P supercomputer at Argonne National Laboratory: one of the world's most powerful computers. Picture courtesy of Argonne National Laboratory published on Flickr in 2009 under a Creative Commons Licence.

From nets to the Internet

Standardized PCs running standardized software brought a big benefit for businesses: computers could be linked together into networks to share information. At Xerox PARC in 1973, electrical engineer Bob Metcalfe (1946–) developed a new way of linking computers "through the ether" (empty space) that he called Ethernet. A few years later, Metcalfe left Xerox to form his own company, 3Com, to help companies realize "Metcalfe's Law": computers become useful the more closely connected they are to other people's computers. As more and more companies explored the power of local area networks (LANs), so, as the 1980s progressed, it became clear that there were great benefits to be gained by connecting computers over even greater distances—into so-called wide area networks (WANs).
An iPod Touch touchscreen.
Photo: Computers aren't what they used to be: they're much less noticeable because they're much more seamlessly integrated into everyday life. Some are "embedded" into household gadgets like coffee makers or televisions. Others travel round in our pockets in our smartphones—essentially pocket computers that we can program simply by downloading "apps" (applications).
Today, the best known WAN is the Internet—a global network of individual computers and LANs that links up hundreds of millions of people. The history of the Internet is another story, but it began in the 1960s when four American universities launched a project to connect their computer systems together to make the first WAN. Later, with funding from the Department of Defense, that network became a bigger project called ARPANET (Advanced Research Projects Agency Network). In the mid-1980s, the US National Science Foundation (NSF) launched its own WAN called NSFNET. The convergence of all these networks produced what we now call the Internet later in the 1980s. Shortly afterward, the power of networking gave British computer programmer Tim Berners-Lee (1955–) his big idea: to combine the power of computer networks with the information-sharing idea Vannevar Bush had proposed in 1945. Thus was born the World Wide Web—an easy way of sharing information over a computer network, which made possible the modern age of cloud computing (where anyone can access vast computing power over the Internet without having to worry about where or how their data is processed). It's Tim Berners-Lee's invention that brings you this potted history of computing today!

Find out more


Videos

There are some superb computer history videos on YouTube and elsewhere; here are three good ones to start you off:
  • The Difference Engine: A great introduction to Babbage's Difference Engine from Doron Swade, one of the world's leading Babbage experts.
  • The ENIAC: A short Movietone news clip about the completion of the world's first programmable electronic computer.
  • A tour of the Computer History Museum: Dag Spicer gives us a tour of the world's most famous computer museum, in California.