Abstract

Within the next century, we will have the technological means to create superhuman intelligence. Shortly after, the human era will end. This essay investigates an Intelligence Amplification Escape Velocity (I.A.E.V.) as a necessary condition for superhuman intelligence. This condition can be called a Biological Singularity: an additional model to Vernor Vinge’s influential essay “The Coming Technological Singularity”. This addition to Vinge’s 1993 thesis draws a distinction between the development of a Mechanological (machine-based) superintelligence and a Biological (organic) superintelligence, and redefines a Technological (cybernetic) superintelligence as the indistinguishable merger of the two. Three arguments are presented to support the plausibility of this additional model: that machines are useless if not allowed to think; that biological intelligence is rendered pointless if not enabled to progress faster than machine intelligence; and that the ultimate goal of both of these ‘escape velocities’ may be substrate-less intelligence (i.e., energy or information).
i. machines are useless if not allowed to think
Progress in computer hardware has followed an amazingly steady curve over the last few decades. Machine intelligence and computational capacity are progressing at a faster rate than the capacity of their makers: human intelligence. In 1993, Vernor Vinge wrote in his influential essay “The Coming Technological Singularity”:

“There are several means by which science may achieve this breakthrough:
o The development of computers that are "awake" and superhumanly intelligent.
o Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
o Biological science may find ways to improve upon the natural human intellect.”
The consequence of accelerating progress in machine intelligence is that when greater-than-human intelligence drives progress, the result is the creation of still more intelligent entities. This intelligence explosion, or event horizon, cedes control to the machines. Some thinkers disagree on this point, suggesting that we could program AI to be docile, or that AI could show us how to control it. Vinge counters, “[the concept of the Singularity] is a point where our models must be discarded and a new reality rules”. To be precise, we have no capacity to know with certainty, because the capabilities of such a superintelligence may be impossible for a human to comprehend. As such, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable. Thus, the very nature of an ‘intelligence explosion’ may make it uncontrollable. If machine intelligence currently advances more rapidly, and in a direction of uncontrollability, why ought we still create a thinking machine, or as I.J. Good declared, “the last invention man need ever make”?
The counterfactual is that machines not allowed to think are singular creations, crippled by limits to their intelligence. Essentially useless and serving single purposes, machines not allowed to think are comparable to basic tools. Restricted from higher capacities of thought, a machine unable to autonomously solve problems is incapable of approaching problems unsolvable by human intelligence. It has been suggested that machines be given a ceiling on intelligence. This is not an option, as it again renders a machine essentially useless by obstructing higher cognition and the capacity for autonomous thought. The difference between a machine that is allowed to think (an artilect) and one that is not is the same paradigmatic difference as that between a tool and a machine (i.e., tool > machine > artilect; the biological analogy being insect > mammal > human). A machine’s intelligence compared to an artilect’s is useless, just as a lower mammal’s intelligence is useless compared to a human’s. Thus, machines are ‘useless’ to biological intelligence, the conceptual model of a Biological Singularity, if not allowed to think or to have higher intelligence.
ii. biological intelligence is pointless if not enabled to progress faster than machine intelligence
When machines are capable of recursive self-improvement (redesigning themselves), or of designing and building computers or robots better than themselves, repetitions of this cycle would likely result in a runaway effect (an intelligence explosion) where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. This effectively cedes control to machines and makes the difference between a biological intelligence and a machine intelligence (an artilect) the same functional cognitive gulf as that between an insect and a human.
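The runaway dynamic described above can be pictured with a minimal toy model. All numbers here are illustrative assumptions, not claims from the essay: each machine generation is a fixed factor smarter than its designer, and a smarter designer finishes the next generation proportionally faster.

```python
# Toy model of recursive self-improvement (an "intelligence explosion").
# Illustrative assumptions only: each generation is `gain` times smarter
# than the last, and design time shrinks in proportion to intelligence.

def intelligence_explosion(initial=1.0, gain=1.5, generations=10):
    """Return a list of (elapsed_time, intelligence) after each design cycle."""
    intelligence = initial
    elapsed = 0.0
    history = []
    for _ in range(generations):
        elapsed += 1.0 / intelligence   # smarter designers work faster
        intelligence *= gain            # each generation is `gain` times smarter
        history.append((elapsed, intelligence))
    return history

history = intelligence_explosion()
```

Under these assumptions the design times form a convergent geometric series: intelligence grows without bound while total elapsed time approaches a finite horizon, which is one way to picture why the process, once begun, may be uncontrollable.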
If biological intelligence is not enabled to progress faster than machine intelligence, biological intelligence may be equally pointless in a Singularity as ‘machines not allowed to think’. This presents the opportunity for an additional model under the umbrella term ‘Technological Singularity’, because machine intelligence and biological intelligence are definitionally distinct and definitionally incompatible: machines are ‘useless’ to the conceptual model of a Biological Singularity if not allowed to think or to have higher intelligence, and biological intelligence is pointless to a Mechanological (machine-based) Singularity model if it is allowed to be far outpaced, beyond human intellectual capacity and control.
Indeed, the definitionally distinct prerequisite for a Biological Singularity model is not AI, but Intelligence Amplification (IA). IA has three potential routes: Genetics, Pharmacology, and Cybernetics. Each route, or combination of routes, leads to the idea that machine intelligence need not necessarily outpace and out-progress biological intelligence. However, under a Biological Singularity model, biological intelligence does not improve by the same function by which machine intelligence recursively self-improves via an intelligence explosion (nor on the same timescale). Rather, accelerating returns in biological intelligence via these routes can be called an Intelligence Amplification Escape Velocity (I.A.E.V.), defined as: biological intelligence getting smart enough, fast enough, to outpace machine intelligence.
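The I.A.E.V. definition above can be pictured as a race between two growth curves. A minimal sketch, assuming purely exponential growth with illustrative rates and starting levels (none of which come from the essay):

```python
# Hypothetical sketch of an Intelligence Amplification Escape Velocity:
# biological intelligence starts higher (the human baseline) but machine
# intelligence compounds faster unless IA raises the biological growth rate.
# All rates and starting levels below are illustrative assumptions.

def trajectory(initial, rate, steps):
    """Exponential growth: intelligence is multiplied by `rate` each step."""
    levels = [initial]
    for _ in range(steps):
        levels.append(levels[-1] * rate)
    return levels

def overtake_step(machine, bio):
    """First step at which machine intelligence exceeds biological intelligence,
    or None if biological intelligence maintains escape velocity."""
    for step, (m, b) in enumerate(zip(machine, bio)):
        if m > b:
            return step
    return None

machine = trajectory(initial=1.0, rate=1.6, steps=50)
slow_bio = trajectory(initial=100.0, rate=1.3, steps=50)   # insufficient amplification
fast_bio = trajectory(initial=100.0, rate=1.6, steps=50)   # I.A.E.V. maintained
```

In this toy picture, the slower-amplified biological curve is eventually crossed by the machine curve, while matching or exceeding the machine growth rate means the crossover never occurs: that is one concrete reading of “smart enough, fast enough”.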
While machine intelligence has been enabled to develop only recently, biological intelligence, it can be argued, has been programmed over many millennia via natural selection. This development has, in turn, enabled biological intelligence to develop non-biological intelligence (machine intelligence). Will biological intelligence or machine intelligence be enabled to develop the fastest? Both are distinct, incompatible models of a Singularity or superintelligence event.
iii. the ultimate goal of both of these ‘escape velocities’, machine and biological, towards superintelligence events is substrate-less intelligence (i.e., energy or information)
Biological intelligence and machine intelligence are similarly obstructed by physical constraints, and yet similarly enabled by accelerating returns at different rates and by different functions. If machine intelligence need not necessarily outpace and out-progress biological intelligence, what is the end destination? By definition, with a ‘Technological Singularity’ we have no capacity to know with certainty, because the capabilities of such a superintelligence may be impossible for a human to comprehend.
However, Vinge’s term, the ‘Technological Singularity’, can be further defined as having two (temporarily) knowably incompatible models toward unpredictable superintelligence: a Mechanological (machine-based) Singularity and a Biological (organic-based) Singularity. Ultimately, both models may borrow aspects of each other to achieve superintelligence, in what can thus be called a ‘Technological (cybernetic) Singularity’: an indistinguishable merger of the intelligences of man and machine. Despite the uncertain methods of technological superintelligence’s two agents, biological and mechanological, an ultimate goal for such superintelligences may be substrate-less intelligence, or intelligence unconstrained by physical limits. To be more precise, such intelligence would be based in information (DNA) or in energy itself.
In conclusion, machines are useless toward biological intelligence if not allowed to think. Biological intelligence, however, is rendered similarly pointless by sheer magnitudes of computation if smart machines recursively surpass a biological Intelligence Amplification Escape Velocity. Ultimately, the Technological Singularity as we know it, coined in Vernor Vinge’s influential 1993 essay, is further defined in this essay by the distinction between, and incompatibility of, these two models of superintelligence. There are not only two incompatible routes to superintelligence, but two distinct models of its ultimate substrate-less realization: biological (organic) and mechanological (machine-based). The merger or unification of these two models is then the integrated definition of a Technological (cybernetic) Singularity, the merger of man and machine to achieve superintelligence.