What is the Technological Singularity?
The idea that human history is approaching a “singularity”—that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both—has moved from the realm of science fiction to serious debate.
Some singularity theorists predict that if the field of artificial intelligence (AI) continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. Murray Shanahan offers an introduction to the idea of the singularity and considers the ramifications of such a potentially seismic event.
The concept of a Technological Singularity, a runaway acceleration in processing power culminating in a super-mind, has captivated people for decades.
Will the Technological Singularity end life as we know it?
“Life as we know it will end in 2045.” This isn’t the prediction of a conspiracy theorist, but of Google’s chief of engineering, Ray Kurzweil.
Kurzweil has said that the work happening now “will change the nature of humanity itself”.
And it’s all down to the many complexities of artificial intelligence (AI).
AI is currently limited to voice assistants like Siri or Alexa that learn from humans, Amazon’s ‘things you might also like’ recommendations, machines like Deep Blue, which beat world chess champion Garry Kasparov in 1997, and a few other examples.
But the Turing test, in which a machine exhibits intelligence indistinguishable from a human’s, has still not been convincingly passed.
What we have at the moment is known as narrow AI, intelligent at doing one thing or a narrow selection of tasks very well.
General AI, in which machine intelligence is comparable to a human’s, is expected to see breakthroughs over the next decade.
Such systems would be adaptable, able to turn their hand to a wide variety of tasks, in the same way that humans have areas of strength but can accomplish many things outside those areas.
That is when the Turing test will truly be passed.
What will the singularity look like?
For those who accept the possibility of the singularity, it is seen as a date in the not-so-distant future when machine intelligence outstrips our own and goes on to improve itself at an exponential rate.
The most basic tools for learning are the instructions contained in our genes, honed over billions of years of evolution. Machine learning techniques can mimic the brain, but they miss the deeper elements that help us learn. Some believe the only way to get close to a mind that learns as well as us is to repeat human evolution.
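A toy way to picture the “repeat evolution” idea is a genetic algorithm. The sketch below is a minimal, hypothetical illustration (it has nothing to do with any real brain-evolution project): it evolves a population of random bit strings toward a target string using selection, crossover, and mutation, the same loop evolution runs on genes.

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.02, generations=300):
    """Evolve random bit strings toward `target` via selection,
    one-point crossover, and bit-flip mutation.
    Fitness = number of bits matching the target."""
    length = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:      # perfect match found early
            return pop[0], gen
        parents = pop[: pop_size // 2]     # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)                 # one-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return pop[0], generations
```

Even this crude loop reliably rediscovers a 20-bit target in a few dozen generations; the open question raised above is whether anything short of evolution’s full billion-year search can produce a mind.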
For many the singularity is better thought of as an acceleration of human progress, despite being fuelled by a near-future technological breakthrough. For them, it’s all about putting human and artificial minds together to solve real-world problems.
To begin with, these are likely to be relatively mundane. Berlin-based Leverton is using natural language processing to speed up the task of wading through huge volumes of written documents, a job that large firms typically employ many people to do.
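Leverton’s actual pipeline is not public. As a minimal sketch of the general idea, the hypothetical function below triages a pile of documents by counting mentions of terms of interest, so that reviewers read the most relevant files first; a production system would use trained language models rather than keyword counts.

```python
import re
from collections import Counter

def triage_documents(docs, terms):
    """Rank documents by how often they mention the given terms.
    `docs` maps a document name to its raw text."""
    ranked = []
    for name, text in docs.items():
        # Lowercase word tokens; Counter returns 0 for absent terms.
        counts = Counter(re.findall(r"[a-z]+", text.lower()))
        score = sum(counts[t] for t in terms)
        ranked.append((name, score))
    # Highest-scoring documents first.
    return sorted(ranked, key=lambda item: item[1], reverse=True)
```

For example, `triage_documents({...}, ["rent", "termination"])` would surface a lease agreement ahead of an unrelated memo, which is exactly the kind of mundane sifting that frees people for harder judgment calls.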
Can the technological singularity be avoided?
Maybe it won’t happen at all. Consider the symptoms we should expect to see if no Singularity develops.
There are the widely respected arguments against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question “How We Will Build a Machine that Thinks”.
As you might guess from the workshop’s title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds.
However, there was much debate about the raw hardware power that is present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain.
The majority of the participants agreed with Moravec’s estimate that we are ten to forty years away from hardware parity. And yet there was another minority who conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as _ten_ orders of magnitude short of the equipment we carry around in our heads.
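These gap estimates are easy to translate into time under an assumed doubling rate. The sketch below is my illustration, not a figure from the workshop: the 18-month doubling period is a Moore’s-law-style assumption. Under it, a three-order-of-magnitude gap closes in roughly 15 years of steady doubling, consistent with the “ten to forty years” majority view, while a ten-order gap takes about half a century.

```python
import math

def years_to_close(orders_of_magnitude, doubling_years=1.5):
    """Years of exponential growth needed to close a performance gap,
    assuming capability doubles every `doubling_years` years
    (a Moore's-law-style assumption, not a workshop figure)."""
    doublings = math.log2(10 ** orders_of_magnitude)
    return doublings * doubling_years

# A 3-order gap needs ~10 doublings (~15 years);
# a 10-order gap needs ~33 doublings (~50 years).
```

The point of the arithmetic is how sensitive the timeline is to the starting estimate: if single neurons really are far more capable than assumed, parity recedes by decades, not years.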
If this is true, we might never see a Singularity. Instead, we would find our hardware performance curves beginning to level off, caused by our inability to automate the complexity of the design work necessary to support the hardware trend curves.
We’d end up with some _very_ powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up” and there would never be the intellectual runaway which is the essence of the Singularity. It would likely be seen as a golden age … and it would also be an end of progress.
Can we prevent a technological Singularity?
But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue.
In fiction, there have been stories of laws passed forbidding the construction of “a machine in the form of the mind of man”. In fact, the competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.