I got my first computer when I was a teen growing up in a middle-class family, and it was a really cool device. You could play games with it. You could process a lot of things at the same time. And I was fascinated. So I went to the library to figure out how this thing worked. I read about how the CPU is constantly shuffling data back and forth between the memory, the RAM, and the ALU, the arithmetic and logic unit. And I thought to myself, this CPU really has to work like crazy just to keep all this data moving through the system.
But nobody was really worried about this. When computers were first introduced, they were said to be a million times faster than neurons. People were really excited. They thought they would soon outstrip the capacity of the brain. This is a quote, actually, from Alan Turing: “In 30 years, it will be as easy to ask a computer a question as to ask a person.” This was in 1946. And now, in 2016, it’s still not true. And so, the question is, why aren’t we really seeing this kind of power in computers that we see in the brain?
What people didn’t realize, and I’m only just beginning to realize right now, is that we pay a huge price for the speed that we claim is a big advantage of these computers. Let’s take a look at some numbers.
This is Blue Gene, the fastest computer in the world. It’s got 120,000 processors; they can basically process 10 quadrillion bits of information per second. That’s 10 to the sixteenth. And they consume one and a half megawatts of power. So that would be really great, if you could add that to the production capacity in Tanzania. It would really boost the economy. Just to go back to the States, if you translate the amount of power or electricity this computer uses to the amount of households in the States, you get 1,200 households in the U.S. That’s how much power this computer uses.
Now, let’s compare this with the brain. How much computation does the brain do? I estimate 10 to the 16 bits per second, which is actually very similar to what Blue Gene does. So that’s the question: they are doing a similar amount of processing, on a similar amount of data, so how much energy or electricity does the brain use? And it’s actually as much as your laptop computer: just 10 watts. So what we are doing right now with computers, with the energy consumed by 1,200 houses, the brain is doing with the energy consumed by your laptop.
So the question is, how is the brain able to achieve this kind of efficiency? Let me just summarize. The bottom line: the brain processes information using 100,000 times less energy than we do right now with the computer technology that we have. How is the brain able to do this? Let’s take a look at how the brain works, and then I’ll compare that with how computers work.
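Those figures can be sanity-checked with a few lines of arithmetic. The wattage and bit-rate numbers are the rough ones quoted above, so the exact ratio is only ballpark:

```python
# Energy per bit processed, using the rough figures quoted above.
blue_gene_watts = 1.5e6   # 1.5 megawatts
brain_watts = 10.0        # about what a laptop draws
bits_per_second = 1e16    # ~10 quadrillion bits/s for both systems

joules_per_bit_computer = blue_gene_watts / bits_per_second
joules_per_bit_brain = brain_watts / bits_per_second

ratio = joules_per_bit_computer / joules_per_bit_brain
print(f"computer energy per bit: {joules_per_bit_computer:.1e} J")
print(f"brain energy per bit:    {joules_per_bit_brain:.1e} J")
print(f"ratio: about {ratio:,.0f}x")
```

With these round numbers the ratio comes out around 150,000, the same order of magnitude as the roughly 100,000-fold figure.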
So, this clip is from the PBS series, “The Secret Life of the Brain.” It shows you these cells that process information. They are called neurons. They send little pulses of electricity down their processes to each other, and where they contact each other, those little pulses of electricity can jump from one neuron to the next. That point of contact is called a synapse. You’ve got this huge network of cells interacting with each other, about 100 billion of them, sending about 10 quadrillion of these pulses around every second. And that’s basically what’s going on in your brain right now as you’re reading this.
In the computer, you have all the data going through the central processing unit, and any piece of data basically has to go through that bottleneck, whereas in the brain, what you have is these neurons, and the data just flows through a network of connections among the neurons. There’s no bottleneck here. It’s really a network in the literal sense of the word. The net is doing the work in the brain. If you just look at these two pictures, these kinds of words pop into your mind. One is serial and rigid, like cars on a freeway, where everything has to happen in lockstep, whereas the other is parallel and fluid: information processing is very dynamic and adaptive.
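The contrast can be sketched in a toy way (the data, the network, and the update rule here are made up purely for illustration): a CPU touches each piece of data in turn, while in a network every node computes from its own connections in the same step.

```python
# Toy contrast: serial (everything funnels through one unit) vs. parallel
# (each node updates from its local connections) processing.

data = [1, 2, 3, 4]

# Serial: one "CPU" touches every piece of data, one item at a time.
result_serial = []
for x in data:               # every item passes through this bottleneck
    result_serial.append(x * 2)

# Network-style: each "neuron" computes from its neighbours. The list
# comprehension only models simultaneity; real neurons truly run at once.
connections = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = [1, 2, 3, 4]
next_state = [sum(state[j] for j in connections[i]) for i in range(len(state))]
print(next_state)  # each node summed its neighbours' values in one step
```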
It’s actually a really remarkable convergence between the devices that we use to compute in computers and the devices that our brains use to compute. The device that computers use is called a transistor. This electrode here, called the gate, controls the flow of current from the source to the drain, these two electrodes. And that current, electrical current, is carried by electrons, just like in your house and so on. And what you have here is that when you turn on the gate, you get an increase in the amount of current, and you get a steady flow of current. And when you turn off the gate, there’s no current flowing through the device. Your computer uses this presence of current to represent a one, and the absence of current to represent a zero.
Now, what’s happening is that as transistors are getting smaller and smaller and smaller, they no longer behave like this. In fact, they are starting to behave like the device that neurons use to compute, which is called an ion channel. It’s a little protein molecule, and neurons have thousands of these. It sits in the membrane of the cell and it’s got a pore in it. And these are individual potassium ions that are flowing through that pore. Now, this pore can open and close. But when it’s open, because these ions have to line up and flow through one at a time, you get a sporadic, not steady, flow of current. And even when you close the pore, which neurons can do (they open and close these pores to generate electrical activity), because these ions are so small, a few can actually sneak through at a time. So what you have is that when the pore is open, you get some current sometimes. These are your ones, but you’ve got a few zeros thrown in. And when it’s closed, you have a zero, but you have a few ones thrown in.
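A toy simulation makes the difference concrete. The flip probability below is invented purely for illustration; the point is only the qualitative behavior: a clean switch versus a leaky, sporadic one.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def transistor_read(gate_on):
    # An ideal transistor: gate on -> steady current (1), gate off -> none (0).
    return 1 if gate_on else 0

def ion_channel_read(pore_open, flip_prob=0.1):
    # An ion channel: mostly follows the pore state, but ions queue up
    # (sporadic current when open) and leak through (stray current when
    # closed), so any single reading flips with some probability.
    bit = 1 if pore_open else 0
    if random.random() < flip_prob:
        bit = 1 - bit
    return bit

# Read an "open" channel 20 times: mostly ones, with the occasional zero.
print([ion_channel_read(True) for _ in range(20)])
# Read a "closed" channel 20 times: mostly zeros, with the occasional one.
print([ion_channel_read(False) for _ in range(20)])
```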
Now, this is starting to happen in transistors. And the reason is that, right now, in 2016, with the technology that we are using, a transistor is big enough that several electrons can flow through the channel simultaneously, side by side. In fact, about 12 electrons can all be flowing through this way. And that means that a transistor corresponds to about 12 ion channels in parallel. Now, in a few years’ time, by 2020, we will have shrunk transistors even further. This is what Intel does to keep adding more cores onto the chip. Or the memory sticks that you have now can carry one gigabyte of stuff on them, where before it was 256 megabytes. Transistors are getting smaller to allow this to happen, and technology has really benefited from that.
But what’s happening is that by 2020, the transistor is going to become so small that only one electron at a time can flow through the channel, and that corresponds to a single ion channel. And you start having the same kind of traffic jams that you have in the ion channel. The current will turn on and off at random, even when it’s supposed to be on. And that means your computer is going to get its ones and zeros mixed up, and that’s going to crash your machine.
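How quickly those random flips become fatal can be seen with a little arithmetic. If each bit of a 64-bit word flips independently with probability p, the word is corrupted with probability 1 − (1 − p)^64, which is roughly 64p for small p. The p values below are invented for illustration:

```python
# Probability that a 64-bit word is corrupted, if each bit flips
# independently with probability p (p values are illustrative only).
WORD_BITS = 64
for p in (1e-9, 1e-6, 1e-3):
    p_corrupt = 1 - (1 - p) ** WORD_BITS   # ~64 * p for small p
    print(f"p = {p:.0e}  ->  P(word corrupted) = {p_corrupt:.2e}")
```

Even a tiny per-bit flip rate multiplies across every word the machine handles every cycle, which is why mixed-up ones and zeros crash it.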
So I have been reading a few papers on how to make a computer think like a brain, and haven’t found the time to will myself into the Game Maker Studio mood… All personalities aside, what I am gonna do is, after I manage to take the long-overdue placement test, put up a full-fledged game with levels and move on to Cinema4D from next week… Well, that’s the plan right now… I do hope that I stop getting distracted lol