h+ Magazine

Winter 2009.


RK: ...could be that complicated. The design of the brain is in the genome. The genome — well, it's 800 million bytes. But back up and take a look at that. It's 3 billion base pairs, 6 billion bits, 800 million bytes before compression — but it's replete with redundancies. Lengthy sequences like Alu are repeated hundreds of thousands of times. In The Singularity Is Near, I show that if you apply lossless compression, you get down to about 50 million bytes. About half of that is the brain, so that's about 25 million bytes. That's about a million lines of code. That's one derivation. You could also look at the amount of complexity that appears to be necessary to perform functional simulations of different brain regions. You actually get about the same answer, about a million lines of code. So with two very different methods, you come up with a similar order of magnitude.

There just aren't trillions of lines of code — of complexity — in the design of the brain. There are trillions, or even thousands of trillions, of bytes of information, but that's not complexity, because there's massive redundancy. For instance, the cerebellum, which comprises half the neurons in the brain and does some of our skill formation, has one module repeated 10 billion times, with some random variation in each repetition within certain constraints. And there are only a few genes describing the wiring of the cerebellum, comprising a few tens of thousands of bytes of design information. As we learn skills like catching a fly ball, it gets filled up with trillions of bytes of information. But just as we don't need trillions of bytes of design information to design a trillion-byte memory system, there are massive redundancies, repetition, and a certain amount of randomness in the implementation of the brain. It's a probabilistic fractal. If you look at the Mandelbrot set, it is an exquisitely complex design.
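To retrace the byte-count arithmetic in the answer above, here is a minimal sketch in Python. The 2-bits-per-base encoding, the 50-million-byte compressed size, and the 25-bytes-per-line conversion are assumptions chosen to match the figures Kurzweil cites, not independent measurements.

```python
# Back-of-the-envelope version of the "about a million lines of code" estimate.
base_pairs = 3_000_000_000            # ~3 billion base pairs in the human genome
raw_bytes = base_pairs * 2 // 8       # 2 bits per base -> ~750 MB (rounded to 800 MB above)

compressed_bytes = 50_000_000         # assumed size after lossless compression
brain_bytes = compressed_bytes // 2   # roughly half of the genome specifies the brain

bytes_per_line = 25                   # assumed average bytes per line of source code
lines_of_code = brain_bytes / bytes_per_line

print(f"raw genome: ~{raw_bytes / 1e6:.0f} MB")
print(f"brain design, compressed: ~{brain_bytes / 1e6:.0f} MB")
print(f"equivalent code: ~{lines_of_code / 1e6:.1f} million lines")
```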
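The Mandelbrot set mentioned above is a convenient picture of the same point: a rule that fits in one line produces arbitrarily intricate structure when iterated. The sketch below is purely illustrative and is not a model of the cerebellum; the grid size and iteration limit are arbitrary choices.

```python
# The Mandelbrot rule: iterate z -> z^2 + c and see whether |z| stays bounded.
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterations before |z| exceeds 2; max_iter means it never escaped."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# A tiny ASCII rendering: a few bytes of "design information", endless detail.
for row in range(20):
    line = ""
    for col in range(60):
        c = complex(-2.0 + 3.0 * col / 59, -1.2 + 2.4 * row / 19)
        line += "#" if escape_time(c) == 100 else "."
    print(line)
```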
So: So you're saying the initial intelligence that passes the Turing test is likely to be a reverse-engineered brain, as opposed to a software architecture that's based on weighted probabilistic analysis, genetic algorithms, and so forth?

RK: I would put it differently. We have a toolkit of AI techniques now. I actually don't draw that sharp a distinction between narrow AI techniques and AGI techniques. I mean, you can list them — Markov models, different forms of neural nets and genetic algorithms, logic systems, search algorithms, learning algorithms. These are techniques. Now, they go by the label AGI. We're going to add to that toolkit as we learn how the human brain does it. And then, with more and more powerful hardware, we'll be able to put together very powerful systems. My vision is that all the different avenues — studying individual neurons, studying brain wiring, studying brain performance, simulating the brain either by doing neuron-by-neuron simulations or functional simulations — and then all the AI work that has nothing to do with direct emulation of the brain — it's all helping. And we get from here to there through thousands of little steps like that, not through one grand leap.

Global Warming & GNR Technologies

So: James Lovelock, the ecologist behind the Gaia hypothesis, came out a couple of years ago with a prediction that more than 6 billion people are going to perish by the end of this century, mostly because of climate change. Do you see the GNR technologies coming online to mitigate that kind of catastrophe?

RK: Absolutely. Those projections are based on linear thinking, as if nothing's going to happen over the next 50 or 100 years. It's ridiculous. For example, we're applying nanotechnology to solar panels. The cost per watt of solar energy is coming down dramatically. As a result, the amount of solar energy is growing exponentially. It has been doubling every two years, reliably, for the last 20 years. People ask, "Is there really enough solar energy to meet all of our energy needs?" It's actually 10,000 times more than we need. And yes, you lose some with cloud cover and so forth, but we only have to capture one part in 10,000. If you put efficient solar collection panels on a small percentage of the deserts in the world, you would meet 100% of our energy needs. And the same kind of progress is being made on energy storage to deal with the intermittency of solar. There are only eight doublings to go before solar meets 100% of our energy needs. We're awash in sunlight, and these new technologies will enable us to capture it in a clean and renewable fashion. And then there's geothermal — you have the potential for incredible amounts of energy there. Global warming — regardless of what you think of the models and whether or not it's been human-caused — has only been one degree Fahrenheit in the last 100 years. There just isn't dramatic global warming so far. I think there are lots of reasons we want to move away from fossil fuels, but I would not put greenhouse gases at the top of the list. These new energy technologies are decentralized, and they're relatively safe. Solar energy, unlike, say, nuclear power plants and other centralized facilities, is safe from disaster and sabotage, and it's non-polluting. So I believe that's the future of energy, and of resource utilization in general.
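A minimal sketch of the doubling arithmetic in that answer: the assumed current solar share (1/256 of total demand) is chosen so that eight doublings reach 100%, consistent with the figures given above; it is not an independently sourced number.

```python
# Exponential-growth arithmetic behind the "eight doublings to go" claim.
current_share = 1 / 256        # assumed fraction of energy demand met by solar today
doubling_time_years = 2        # stated doubling time for solar capacity

share, doublings = current_share, 0
while share < 1.0:
    share *= 2
    doublings += 1

print(f"doublings to reach 100%: {doublings}")                                  # 8
print(f"years at a {doubling_time_years}-year doubling time: {doublings * doubling_time_years}")  # 16

# The "one part in 10,000" point: total sunlight is ~10,000x current demand,
# so meeting demand requires capturing only 1/10,000 of it.
capture_fraction = 1 / 10_000
print(f"fraction of sunlight needed: {capture_fraction:.4%}")
```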
