written language—and in fact, if they are to maintain an ironclad Hobbesian methodical approach, they should not incorporate any of the idiosyncrasies of human languages. Hume's conception of thoughts literally moving through the mind became the concept of data transfers—and also the possibility of copying data for replication in other machine environments.

In the twentieth century, scientists began to actively pursue the concept of AI. In the early 1960s, computer and robotics pioneer Marvin Minsky founded the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. His former colleague and close friend John McCarthy founded the Stanford Artificial Intelligence Laboratory, establishing a new key locus for AI research on the West Coast. Both sites remain groundbreaking robotics research centers that have made enormous contributions to the development of AI. Hundreds of other academic institutions, private corporations, and government entities have also moved into the field of robotics development in the ensuing five decades.

Predictions regarding the future of AI have ranged from the absurdly optimistic to the hopelessly pessimistic. In 1989, Paul Lehner proclaimed, "My first prediction is that most of the national defense applications of AI presently being pursued will not succeed in the near-term development of operationally useful systems, despite the fact that many of the programs have the specific objective of developing operationally functional systems in the near future."19 Two years later, the AI systems controlling Patriot missile batteries proved quite successful in military operations, though not as infallible as some media reports suggested at the time. In 1993, Vernor Vinge opined, "[W]ithin thirty years, we will have the technological means to create superhuman intelligence.
Shortly after, the human era will be ended."20 One of the most prominent, and optimistic, futurists is Ray Kurzweil, who in 2003 claimed that "it is hard to think of any problem that a superintelligence could not either solve or help us solve."21 Of course, if the problem proves to be a hostile artificial superintelligence, harnessing that capability might prove difficult. Even Kurzweil believes that controlling such a development might prove impossible, or, as he states the matter, "Once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence."22 He considers this to be a positive development in human history but seems unwilling or unable to consider the potential negative ramifications of such an achievement. None of these predictions has proven correct, but then again, none of them was completely wrong, either. Lehner's prediction that there would be no operationally useful systems was quickly proved incorrect—and yet, there are few operational autonomous systems nearly three decades after his prediction. Vinge's idea of superintelligence was wrong, if one requires that the AI in question be capable of every form of cognition practiced by humans. If one allows for limited applications,