Will Computers Beat Us at Our Own Game?

This article from Time raises some interesting questions: How long until computers are “smarter” than humans? (Especially considering Watson’s recent performance on Jeopardy!) What about machines with creativity, intuition, and generalized intelligence — the historical province of humans alone? And if that’s not scary enough, how long until we can achieve immortality by downloading our brains into computers and preserving ourselves as thinking machines indefinitely?

The Time article approaches the question from a social point of view, and it's quite interesting. But the issue could benefit from a computer science perspective as well: what kind of technology would be required to achieve these things, and how hard will it be to get there? Let's find out, or at least make some educated guesses.

Malthus, Darwin, and The Myth of Moore’s Law

Much speculation about the future of computers takes one thing for granted: exponential growth. A common way to illustrate exponential growth is population size. Let's say Noah drops his pair of rabbits off on the shore of Australia. Rabbits being what rabbits are, the population doubles every month. After one year, there are over 8,000 rabbits, a sizable number. After two years, there are more than 30 million. And after roughly four years, half the surface area of Australia is covered in rabbits.

One month after that, the entire surface area is covered in rabbits.
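(For the curious, here's that back-of-the-envelope calculation as a few lines of Python. The rabbit footprint and Australia's surface area are rough assumptions; the point is just how quickly doubling gets out of hand.)

```python
# Rough sketch of monthly doubling. The area figures are assumptions for illustration.
AUSTRALIA_AREA_M2 = 7.7e12    # surface area of Australia, about 7.7 million square km
RABBIT_FOOTPRINT_M2 = 0.05    # assumed ground area one rabbit occupies

rabbits = 2   # Noah's pair
months = 0
while rabbits * RABBIT_FOOTPRINT_M2 < AUSTRALIA_AREA_M2:
    rabbits *= 2
    months += 1

print(f"Fully covered after {months} months (about {months / 12:.1f} years).")
print(f"One month earlier, only half covered: {rabbits // 2:,} rabbits.")
```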

The point is, exponential growth is incredibly powerful. But we've grown accustomed to it: something known as Moore's Law has been treated as an inviolable principle since the '70s. (In computer terms, since the time of Noah.) Moore predicted that the number of transistors we can squeeze into a given area would double every two years, and the prediction has generally held true. That means we can now fit about a million times as many transistors on a chip as we could in 1970. (The math: 40 years of doubling every two years is 20 doublings, and 2^20 is about a million.)
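(Or, as a two-line sanity check in Python:)

```python
doublings = 40 // 2      # 40 years of doubling every two years
print(2 ** doublings)    # 1048576 -- about a million
```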

But we need to be careful in interpreting Moore's Law, and in expecting that kind of growth to continue indefinitely. Case in point: as recently as 10 years ago, plenty of people thought that clock rate (the speed at which a computer's processor runs) would keep climbing forever. Computers would just get faster and faster and faster.

Around five years ago, we began to discover that this was not the case: CPU speeds have stalled in the 3-GHz range. (Instead, we've seen a move toward multi-core machines, but that's another story.) The reason is simple. In fact, all it takes to see why Moore's Law must eventually break down is an insight that Malthus had 200 years ago, one that Darwin later put to good use in describing evolution.


Malthus realized that, because population sizes increase quickly while resources increase slowly, there must eventually come a point where the population runs out of resources and stops growing. (Malthus also thought this doomed the human race to poverty and misery, but that's another story.) Darwin realized that this answered a question hiding in plain sight: why isn't the Earth currently covered in rabbits? (The answer: rabbits reproduce fast enough, but there are only so many resources to go around.)

With CPU speeds, the limiting factors are power and heat. As we run our processors faster and faster, we need to supply more power and we produce more heat. And for now, we can’t really dissipate that heat efficiently enough.

These problems aren’t fatal — we’re continually innovating in how we deal with these types of problems — but they aren’t going to magically disappear, either. Exponential increases will hit a ceiling sooner or later. In some cases, like transistor counts, we still haven’t quite hit those resource limits. But to claim that exponential growth can continue forever means claiming that resources available will keep increasing exponentially as well. Not gonna happen.

And similarly, we can’t just keep making parts smaller and smaller, either. At some point, we leave the macroscopic scale and enter the atomic scale. When this happens, we won’t even be using transistors anymore. We’ll have moved on to something different.

A New Era and Imitating the Brain

In fact, trying to feasibly imitate the human brain will require that we move on to something different. We can double clock speeds and transistor counts all we want. But computers as we currently know them will never be suited to imitating the brain. (The key phrase is as we currently know them. Changes are coming!)

A Sense of Structure

A simple top-level view helps explain why current technology is badly suited to mimicking the human brain. Allow me to generalize by claiming that the brain has two basic functions: (1) data storage and memory, that is, holding on to information and retrieving it later; and (2) computation, that is, coming up with new answers from old information.

In a modern computer, these two functions are handled by separate pieces of hardware: data storage is the job of RAM and the hard drive, while computation is the job of the central processing unit (CPU).

But in the brain, both of these functions happen at once, within neurons and across networks of neurons. A brain is billions of tiny processors, all running in parallel (that is, simultaneously); at the same time, it is billions of little data stores, all linked together and quickly accessible.

Does this mean a computer will have to imitate the brain's structure in order to think like one? Well … maybe. One of the most promising areas of artificial intelligence is that of neural networks: software simulations of the workings of neurons in the brain. But because memory and processing are separate steps on modern computers, these simulated networks are inefficient. Real efficiency would require hard-wired networks of artificial neurons, each with its own mini-processor, in a structure completely alien to that of current computers. Work is being done in this area, but it's not easy! And of course, computers built this way would likely be quite bad at the things normal computers are good at, like fast arithmetic.
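To make the contrast concrete, here's a minimal sketch of what a simulated neural-network layer looks like in software today (the sizes are made up for illustration). Notice that the connection weights sit in ordinary memory as one big array, and a single processor then fetches them to do the arithmetic; storage and computation live in separate places, unlike in a brain.

```python
import numpy as np

# A minimal sketch of a software "neural network" layer on a conventional computer.
rng = np.random.default_rng(0)

n_inputs, n_neurons = 1000, 100                     # illustrative sizes, nowhere near brain-scale
weights = rng.normal(size=(n_neurons, n_inputs))    # connection strengths, stored in RAM
inputs = rng.normal(size=n_inputs)                  # incoming signals

# "Computation": each simulated neuron sums its weighted inputs, and the result
# is squashed by a simple activation function. The CPU/GPU must fetch the weights
# from memory before it can do any of this.
activations = np.tanh(weights @ inputs)

print(activations.shape)   # (100,) -- one output per simulated neuron
```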

Memory Constraints and Constraints on Remembering

If we’re trying to build something as smart, general, and creative as the brain, we’ll soon run into memory problems. The human brain has something like 100 billion neurons with a thousand connections each. Just to be able to write down all the connections between neurons (the adjacency list) would take something like 100 terabytes (100,000 gigabytes) of memory. Additionally, each neuron is a self-contained system, chemically complex and difficult to reproduce or model.

But just describing the brain pales in comparison to mimicking its memory. In fact, it often seems like any difficulties humans have with memory are problems with recalling the correct item, not with actual data storage, as Isaac Asimov illustrates in this short story. You know the knowledge is in your brain somewhere, you just have trouble getting at it!

We don’t know exactly how human memory works, though Wikipedia has much to say on the subject. But for the sake of argument, let’s represent each neuron with a bit that can be either zero or one (a massive simplification). At any one time, the brain is in a particular configuration: some neurons are one, and some are zero. In this case, the total number of different configurations the brain could have is 2^(100 billion).

(Sidenote: 2^(100 billion) is the number of rabbits that would be on Australia after about 8 billion years. If you typed this number up on your computer and printed it out, it would take up something like 3 million sheets of paper.)

But that actually doesn’t sound so bad, on the surface. 12 GB of computer memory also has that many different configurations — but not so fast! 12 GB of memory can only store one of those configurations at a time. The brain has all of these configurations potentially available for recall, and each configuration might have a specific meaning. (Like the configuration your brain achieves when it smells french toast frying, for example.)

Under our simplification of neurons as bits (zeros and ones), think of that 12-GB chunk of RAM as storing a single number between 0 and 2^(100 billion), that is, 2^(10^11). The brain, by contrast, seems able to call up a huge collection of such configurations on demand.

Even if it could reliably recall only a tiny fraction of them, say ten million distinct configurations of its 100 billion neurons, that would already be on the order of 10^18 bits of storage, or roughly 112 petabytes of information. That's 112,000 terabytes, or 112 million GB. And now drop the simplification: each "number" or configuration might actually encode a specific piece of information, such as the color of your cousin's ex-wife's car. We are talking about very large amounts of data. Add to that the fact that each neuron is not simply "zero" or "one": it has a different signal strength at each synapse.

What does it all mean? This is all speculation; we don't really know how the brain stores and accesses information. But however it does it, the brain packs an incredible amount of information into a very small space and can still process and act on that information quickly. Our current computers are nowhere close.

Size Matters

Let’s say we want to create an artificial replica of the human brain. Nothing fancy, nothing too advanced, just a network of artificial neurons that would, in theory work just like the brain. So we make tiny little hardware processors. Each one is responsible for processing a thousand inputs, computing an output, and sending that output a thousand different places.  Say we fit each one into a cubic millimeter and stack them all on top of each other, ignoring wiring problems. Then our entire computerized “brain” will take up 100 cubic meters, or a good-sized living room.

Remember when?That’s what I mean when I say that, by the time we’re making artificial brains, we won’t be using transistors anymore. By the time we get to brain-sized computers, we will necessarily be constructing them at a molecular level. Perhaps by then, electrical engineers won’t be the ones building computers — biologists will be growing them in labs!  Who knows? But intelligent, creative computers will certainly bear little resemblance to the machine sitting on your desk today.

What If We Keep Doubling?

If we accept the claim that computer processing power will keep doubling indefinitely, we still don't have to accept the claim that computers will become smarter than humans. At least, not in certain senses. After all, in one narrow sense computers are already millions of times smarter than your average human, and I mean that literally. Quick: what's 15.9274 times 26.2385? Your computer can do that calculation a million times per second and keep it up for days on end.

So making computers faster and faster says nothing about making computers that can compete with humans in areas like creativity or artistic expression. Modern computers are built to do two things: fetch an instruction, execute an instruction. Human brains are built to do whatever they choose. The gap may not just be one of processing power; it may be a fundamental structural difference. Perhaps to get a brain's strengths (creativity, intuition, generality) one must also accept its weaknesses: fallibility, inconsistency, failures of memory, and poor reproducibility. Perhaps our computers, which are incapable of possessing these weaknesses (at least in theory), are also incapable of achieving these strengths.

And in fact, it is well known that there are some functions that a computer simply cannot compute, no matter how fast it is. (Can humans solve these problems? It’s not yet known.) So there’s no guarantee that exponential progress will solve this particular problem.
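The classic example is the halting problem: no program can decide, for every program and input, whether that program eventually finishes running. The standard argument fits in a few lines of Python. (The `halts` function below is a hypothetical oracle, not anything you can actually implement; that impossibility is the whole point.)

```python
# Sketch of the halting-problem argument. `halts` is hypothetical: assume, for
# the sake of contradiction, that someone hands us a version that always answers correctly.
def halts(program, argument) -> bool:
    """Supposedly returns True iff program(argument) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def trouble(program):
    # Do the opposite of whatever `halts` predicts about running `program` on itself.
    if halts(program, program):
        while True:      # loop forever
            pass
    return               # halt immediately

# Now ask: does trouble(trouble) halt?
#  - If halts says yes, trouble(trouble) loops forever, so halts was wrong.
#  - If halts says no, trouble(trouble) returns at once, so halts was wrong.
# Either way the supposed oracle fails, so no such program can exist.
```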

It could be that imitating the brain will require more than technological advances. It may require a completely new design. Could it happen? Definitely. But pure processing power and memory — things subject to the law of doubling — aren’t necessarily the limiting factors here.

Will I Live Forever?

Fast-forward to The Singularity. A brain-equivalent computer has been created. Not only does it model all the synapses and connections, it takes into account chemical balances in the brain and all that stuff we don’t understand but will by 2045.

The computer is turned on. Having no data or information, it has started at “I think therefore I am” and worked its way up to income tax and rice pudding. And now some crazy old scientist plugs some electrodes into her brain and copies the entire contents of her consciousness into the computer.  What happens?

Obviously, the computer goes rogue and takes over the world, like in any good science-fiction story. (Unlike a good sci-fi story, though, this idea of being able to copy the contents of a brain is completely, ridiculously implausible.)

Let's take it seriously, though, and see where we end up. Odds are, the computer-brain spazzes out. It can't feel its toes anymore. It can't see anymore, or maybe it's seeing through different, mechanical eyes. It's hooked up to an entirely different set of inputs. And unless the hardware was somehow built to match this scientist's brain and body incredibly precisely, it has a different way of monitoring its "chemical levels". It has no adrenal gland, no thyroid, no pineal and no pituitary.

At the moment of transfer, the computer and the scientist were presumably exactly in sync. Their thoughts were one and the same. Immediately afterward, though, the computer begins to diverge. It is seeing a different angle of the world than the scientist is; it is processing different inputs. Soon, it may process even the same inputs differently. Thanks to chaos, within minutes (wo)man and machine are thinking completely different thoughts. They still have much in common, of course, but they are irrevocably separated.

If the scientist dies, is it fair to say that she will live on within the computer? Is the computer happy about being able to continue on without the use of its human body? It must be happy at first, since it remembers making the decision to go ahead with the procedure and it feels triumphant upon realizing that the experiment has succeeded. But as time goes on, and it adjusts to its new life as a machine, who can tell how happy it will be? And who can say whether the consciousness in there is still that of the crazy old professor, or whether it is now someone entirely different?

Two Paths Diverged In a Circuit

There are two ways to modify the computer in this scenario. On the one hand, we could try to make it more and more like the scientist, by physically providing it with hormones, glands, and so on. In the limit, we clone the scientist and put the old brain in a new body. But in doing so, we inevitably lose some information in that old body, some encoded aspects of who this person was. To keep all of that self, we would have to keep the part that was growing old and dying as well. This is true whether we try to physically replicate these structures, or merely model them with software programs. And any attempt to model the human body in software must either include aging and death, or find a way to leave them out. Leave them out, and you likely leave out an essential part of who that person is as well.

On the other hand, we could make the computer more mechanical. Forget glands, ears, and maybe even eyes. Perhaps forget things like chemical imbalances. In this case, we have a being that is completely different from the scientist starting the moment she makes the transfer. It can barely understand what it’s like to be human, let alone mimic it. To achieve immortality, this route is hopeless.

Regardless of whether or not you believe in the soul, it is difficult not to believe in the body. Yet that sort of disbelief is exactly what is required of anyone who hopes to preserve their thoughts in a computer. To them I say: good luck! That they may succeed in creating some sort of consciousness, I can believe; that they will live on in any meaningful sense, I seriously doubt. But I will be interested to see them try.

Confession: My Computer Wrote This Post

Nope, not really! And despite Watson’s antics on Jeopardy, don’t expect it to happen anytime soon.

As our computational abilities continue to grow exponentially, we will keep running into resource limits, and overcoming them will likely require moving on to completely new kinds of technology. Meanwhile, we face other limits, including some that no amount of extra processing power will remove. Building something as creative and intelligent as the human brain will be incredibly difficult.

All that said, will we eventually create self-aware artificial devices? Actually, I believe the answer is a definite yes. The question is how we will use this technology. To try to live forever? To try to compose music? Or to solve problems in our world? I guess we’ll have to wait and see. Either way, it’ll be a while before we get there. I wouldn’t let it keep you up at night.

Youtube link of the day: Simple thing, where have you gone?


One comment on “Will Computers Beat Us at Our Own Game?”

  1. Justin says:

    what if I’m smelling French toast, while thinking of my cousin’s ex-wife’s-car, while listening to the flaming lips, while watching sportscenter, and talking to my friend about how much I hate Michigan.

    Is this one specific configuration? Or is our brain able to have different configurations at once somehow?

    Also, you mentioned this was all speculation and greatly simplified, but would you expect the brain to store memory digitally like this or in more of an analog way? I suppose it’s impossible to know for sure.

    Also, until computers can “learn” new things, or write their own programs so to speak, they are essentially an extension of what a certain human (or a collaboration of humans, like, say, the internet) already knows. In a sense I don't think they can be truly “smarter” than us until they can write their own instructions. I think you hit on that pretty well towards the end.
