Why is the singularity bad?

Now, the first thing you need to know about the singularity is that it is an idea mostly believed by people not working in artificial intelligence: people like the philosopher Nick Bostrom and the futurist and inventor Ray Kurzweil. Most people working in AI, myself included, have a healthy skepticism about the idea of the singularity. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement. There are many technical reasons why the singularity might never happen.

We might simply run into some fundamental limits. Every other field of science has fundamental limits; perhaps there are fundamental limits to how smart you can be. Or perhaps we run into engineering limits.

Intel, for example, is no longer looking to double transistor counts every 18 months. And machines have no desires or goals other than the ones that we give them; a computer is certainly not going to wake up and decide to take over the planet.

There is also the pace of brain science itself to consider. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations.

We see this in neuroscience with discoveries such as long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. The foregoing points to a basic issue with how quickly a scientifically adequate account of human intelligence can be developed.

We call this issue the complexity brake. As we go deeper and deeper into our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways.

Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level.

The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors.

The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of neurons and operates according to physical principles.

But for the foreseeable future, it is the complexity brake and the arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity. So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress.
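
To make "exponentially accelerating" concrete, here is a small, purely illustrative Python sketch (all numbers are arbitrary placeholders, not real data) comparing a quantity that doubles every 18 months, in the spirit of the Law of Accelerating Returns, with steady linear improvement at a comparable starting rate:

    # Illustrative only: arbitrary numbers, no real data behind them.
    # A quantity that doubles every 18 months grows roughly a hundredfold
    # in a decade; linear progress at a comparable initial rate does not.

    def exponential(years, doubling_period=1.5):
        return 2 ** (years / doubling_period)

    def linear(years, rate_per_year=0.5):
        return 1 + rate_per_year * years

    for years in (0, 5, 10, 15):
        print(f"year {years:>2}: exponential x{exponential(years):7.1f}, "
              f"linear x{linear(years):4.1f}")

Doubling every 18 months compounds to roughly a hundredfold gain per decade, which is precisely the kind of curve that irregular, insight-driven progress in brain science has not followed.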

But suppose scientists make some brilliant new advance in brain scanning technology. Kurzweil suggests that such scanners would most likely operate from inside the brain via millions of injectable medical nanobots.

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. By analogy, a complete diagram of a bird's anatomy is not enough to build software that simulates its flight; to fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made, using many different organisms, to chain together simulations of different neurons along with their chemical environment, and an adequate simulation turns out to require a vast amount of knowledge about the functional role each neuron and connection plays, not just a map of the wiring.

Without this information, it has proven impossible to construct effective computer-based simulation models.
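
To give a feel for what that missing functional information amounts to, here is a minimal, hypothetical sketch in Python (parameters entirely made up, not taken from any real simulation project): three leaky integrate-and-fire style neurons wired in a simple chain. The wiring diagram is identical in both runs below; only the assumed synaptic weights differ, and the circuit's behavior differs with them.

    # Hypothetical toy model, not any real project's code. Three neurons in a
    # chain: neuron 0 receives external drive, and a spike in neuron i passes
    # charge to neuron i+1. All constants are invented for illustration.

    DT = 1.0          # time step (ms)
    TAU = 20.0        # membrane time constant (ms), assumed
    THRESHOLD = 1.0   # firing threshold, arbitrary units

    def simulate(weights, drive=0.15, steps=300):
        """Return spike counts for a three-neuron chain with the given synaptic weights."""
        v = [0.0, 0.0, 0.0]      # membrane potentials
        spikes = [0, 0, 0]
        for _ in range(steps):
            inputs = [drive, 0.0, 0.0]
            for i in range(3):
                # Leaky integration: decay toward rest, plus whatever input arrived.
                v[i] += (DT / TAU) * (-v[i]) + inputs[i]
                if v[i] >= THRESHOLD:
                    spikes[i] += 1
                    v[i] = 0.0                       # reset after firing
                    if i < 2:
                        inputs[i + 1] += weights[i]  # spike propagates downstream
        return spikes

    # Same wiring diagram, different assumed synaptic strengths:
    print(simulate(weights=[0.2, 0.2]))   # downstream neurons stay silent
    print(simulate(weights=[0.9, 0.9]))   # downstream neurons fire readily

Real neurons are vastly more complicated than this toy, but the point survives: a map of the connections alone does not tell you what the circuit will do.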

Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential.

Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is getting harder. Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition.

More hardware does not automatically mean more intelligence, and this would be true of an intelligent neural network as well. In other words, a superintelligent computer could upgrade itself, and that would probably be handy, but it wouldn't necessarily be getting exponentially more intelligent just because it was snapping more and better Pentiums onto its motherboard. So a singularity in which sentient robots with guns march down the post-apocalyptic highways repeating "kill all humans" is probably Hollywood bullshit.

But the dawn of any form of what Vernor Vinge called "superhuman intelligence" is still scary for other reasons, the philosopher Peter Asaro told me. Even if the intelligent robots that take over our lives are friendly and only ever want to protect us, they might put us in peril—economic and social peril.

Sure, Silicon Valley fetishizes so-called disruptive technologies that show up and slap the corded phones out of our hands, toss our CD collections out the window, and annihilate the taxi business. But disruptions have downsides. For instance, the rise of Uber, he pointed out, "economically benefitted one company at the expense of many hundreds or thousands of companies."

So when AI comes, Asaro worries, it could just be one more in a long line of technologies that show up in society and toss aspects of our lives that we value into the dumpster of obsolescence.

And after some form of a singularity, if AI itself is guiding innovation and the adoption of new technologies, the rapid march of progress that brings about those innovations will become less of a rapid march and more of a tornado, with "no individual and no group of people [able to] really guide it anymore."

Maybe extermination by an army of self-aware machines isn't in humanity's future, but that doesn't mean we should be complacent.


