January 2006
Fiftieth Anniversary Issue
Introduction
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
These words begin the PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE, written by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. The Dartmouth Summer Project on Artificial Intelligence itself took place in 1956. In this issue of the Newsletter, I celebrate its 50th anniversary.
I've taken eighty-nine people - mainly AI researchers - from the 1920s to the present day, and collected from the Web their memories of, views on, and predictions for, AI. There are also a few memorabilia, and assorted papers to demonstrate the diversity of approaches. Some are very personal approaches, where the author has strong views about how to advance AI by striking away from current fashion.
Some papers come from before the phrase "artificial intelligence" was coined. Others might not have been considered AI by the Project's founders. John McCarthy, for example, says one of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics". However, I have included some cybernetics work, because not only is it fascinating, but it demonstrates the spirit of its time, and has almost certainly influenced some AI research.
I'm obviously restricted to what's on the Web. Unfortunately, I found myself more restricted still, because much of what I wanted was there, but not on its authors' Websites, and not free. Bobrow and Brady's Artificial Intelligence 40 years later I found only on the site of its journal, Artificial Intelligence, at a cost of $30. That's for four pages. To discover how, in the 1940s and 1950s, Bobrow's colleague Berkeley was applying symbolic logic and building electromechanical computers, I'd have had to pay $35 for Edmund Berkeley, Computers, and Modern Methods of Thinking. Thus I've had to ignore some fine papers which I could find only at sites such as ACM Portal, IEEE, and Project Muse. Even if I'd been able to afford the several hundred pounds needed to examine them all, linking to them would have been unfairly tantalising to readers without institutional subscriptions.
This was irritating, unhelpful, a waste of search and composition time, and deeply frustrating. Any researchers reading this - please, get copies of your papers onto your Web sites. Some organisations are very good at doing so. I knew about MIT's wonderful OpenCourseWare resource; now I've discovered that the MIT Computer Science and Artificial Intelligence Laboratory has a digital archive. (Coincidentally, this starts with a paper by Bobrow, on his classic 1964 Student program for solving algebra problems stated in English.) Such archives are just what I want when teaching - which is, after all, what I'm doing in this Newsletter.
There are other resources on the Web. Good starting points are the AAAI's pages on History, Interviews and Oral History, and Brief History. The list in this Newsletter isn't intended to be comprehensive; it's just what I could glean in a reasonable time. Send me links you think I should include, and when I've received enough, I may publish an updated version.
I have a bet with another member of St Peter's, my college, that we shall achieve AI capable of human-equivalent conversation within 50 years. We are to meet in the College Bar, and the loser has to hand over money equivalent to what was £10 when the bet was made. If I remember correctly, that was 18th June 1986. I hope the next 30 years will enable me to raise a few pints to AI.
Eighty-nine People, Three Programs, and a Computer
Philip E. Agre
https://en.wikipedia.org/wiki/Philip_E._Agre - Wikipedia page for Philip E. Agre
https://pages.gseis.ucla.edu/faculty/agre/ - Agre's home page.
Intelligent Agents: Theory and Practice by Michael Wooldridge and Nick Jennings:
"At about the same time as Brooks was describing his first results with the subsumption architecture, Chapman was completing his Master's thesis, in which he reported the theoretical difficulties with planning ... and was coming to similar conclusions about the inadequacies of the symbolic AI model himself. Together with his co-worker Agre, he began to explore alternatives to the AI planning paradigm. Agre observed that most everyday activity is 'routine' in the sense that it requires little - if any - new abstract reasoning. Most tasks, once learned, can be accomplished in a routine way, with little variation. Agre proposed that an efficient agent architecture could be based on the idea of 'running arguments'. Crudely, the idea is that as most decisions are routine, they can be encoded into a low-level structure (such as a digital circuit), which only needs periodic updating, perhaps to handle new kinds of problems. His approach was illustrated with the celebrated PENGI system. PENGI is a simulated computer game, with the central character controlled using a scheme such as that outlined above."
www.stanford.edu/group/SHR/4-2/text/agre.html
the soul gained and lost - artificial intelligence as a philosophical project, from Constructions of the Mind: Artificial Intelligence and the Humanities. Volume 4, Issue 2 of Stanford Humanities Review, 1995:
"To watch the dynamics of this process unfold, it will help to consider one final chapter: the STRIPS program. The purpose of STRIPS is to automatically derive "plans" for a robot to follow in transporting objects around in a maze of rooms. The program constructs these plans through a search process modeled on those of Newell and Simon. ... To those who have had experience getting complex symbolic programs to work, the STRIPS papers make intense reading. Because the authors were drawing together so many software techniques for the first time, the technically empathetic reader gets a vivid sense of struggle: the unfolding logic of what the authors unexpectedly felt compelled to do, given what seemed to be required to get the program to work."
polaris.gseis.ucla.edu/pagre/critical.html
Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI, online version of Agre's chapter in Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, edited by Geof Bowker, Les Gasser, Leigh Star, and Bill Turner, 1997.
James F. Allen
https://en.wikipedia.org/wiki/James_F._Allen - Wikipedia page for James F. Allen
https://www.cs.rochester.edu/u/james/ - Allen's home page.
Page linking to AI Growing Up: The Changes and Opportunities, edited transcript of AAAI Keynote Address in Providence, 1997; AI Magazine Volume 19, Issue 4, Winter 1998:
"AI has always been a strange field. Where else could you find a field where people with no technical background feel completely comfortable making claims about viability and progress? We see articles in the popular press and books regularly appear telling us that AI is impossible, although it is not clear what the authors of these publications mean by that claim. Other sources tell us that AI is just around the corner or that it's already with us. Unlike fields such as biology or physics, apparently you don't need any technical expertise in order to evaluate what's going on in this field. But such problems are not limited to the general public. Even within AI, the researchers themselves have sometimes misjudged the difficulty of problems and have oversold the prospects for short-term progress based on initial results. As a result, they have set themselves up for failure to meet those projections. Even more puzzling, they also downplay successes to the point where, if a project becomes successful, it almost defines itself out of the field. An excellent example is the recent success of the chess-playing program DEEP BLUE, which beat the world chess champion in 1997. Many AI researchers have spent some effort to distance themselves from this success, claiming that the chess program has no intelligence in it, and hence it is not AI. I think this is simply wrong and will spend some time trying to argue why. So how can we explain this strange behaviour?"
John R. Anderson
http://act-r.psy.cmu.edu/peoplepages/ja/ - Anderson's home page.
act-r.psy.cmu.edu/papers/97/ACT.ASimpleTheory.pdf
ACT: A simple theory of complex cognition. American Psychologist, Volume 51, 1996:
"We (e.g., Anderson, Boyle, Corbett, & Lewis, 1990; Anderson, Corbett, Koedinger, & Pelletier, 1995; Anderson & Reiser, 1985) have created computer-based instructional systems, called intelligent tutors, for teaching cognitive skills based on this kind of production-rule analysis [of students learning to write recursive programs]. By basing instruction on such rules, we have been able to increase students' rate of learning by a factor of 3. Moreover, within our tutors we have been able to track the learning of such rules and have found that they improve gradually with practice .... Our evidence indicates that underlying the complex, mystical skill of recursive programming is about 500 rules like the one above, and that each rule follows a simple learning curve .... This illustrates the major claim of this article: All that there is to intelligence is the simple accrual and tuning of many small units of knowledge that in total produce complex cognition. The whole is no more than the sum of its parts, but it has a lot of parts. The credibility of this claim has to turn on whether we can establish in detail how the claim is realized in specific instances of complex cognition. The goal of the ACT theory, which is the topic of this article, has been to establish the details of this claim. It has been concerned with three principal issues: How are these units of knowledge represented, how are they acquired, and how are they deployed in cognition?"
Anthony J. Bell
Bell's home page (last updated 2000):
"My long-term scientific goal is to try to work out how the brain learns (self-organises). This took me in directions of Information Theory and probability theory for neural networks. This provides a hopelessly crude and impoverished model (called redundancy reduction) of what the brain does and how it lives in its world. Unfortunately, it's the best we have at the moment. We have to do some new mathematics before we reach self-organisational principles that will apply to the physical substrate of the brain, which is molecular: ion channels, enzyme complexes, gene expression networks. We have to think about dynamics, loops, open systems, how open dynamical systems can encode and effect the spatio-temporal trajectories of their perturbing inputs."
https://cnl.salk.edu/~tony/ptrsl.pdf
Bell's home page continues with his invited contribution to the special Millennial issue of the Philosophical Transactions of the Royal Society of London, in which contributors were asked to predict the future of their research and its effects on society: Levels and loops: the future of artificial intelligence and neuroscience, Philosophical Transactions of the Royal Society of London B, Volume 354, 1999.
Given the many levels of explanation required in neuroscience, from neural spike signals down to the effect of proteins' electrical interactions, can we separate the brain's hardware level from its software level, or is this kind of hardware/software independence impossible? Bell concludes that:
"AI and neuroscience are exactly placed where the deaths of dualism and feed-forward thinking are scheduled to take place. If these disciplines choose to participate in this shift, rather than cling to concepts that are not empirically supported, then there will be many interesting PhD theses to write."
Randall D. Beer
https://en.wikipedia.org/wiki/Randall_Beer - Wikipedia page on Randall D. Beer
vorlon.cwru.edu/~beer/ - Beer's home page.
www.ecs.soton.ac.uk/~harnad/Tp/robot.html
Is it an ant? A cockroach? Or Simply a Squiggle?, by Stephen Strauss, The Toronto Globe and Mail, c. 1990.
Popular feature on the work of Beer and others:
"'Even the simplest animals are better at changing their behaviour to cope with the real world than the most sophisticated robot today', says Randall Beer, a computer scientist at Case Western Reserve University in Cleveland. To many researchers, this interest in animal models has a clear evolutionary rationale. 'Nature never evolved a brain and then built the first body around it. That is nonsense' says University of Waterloo engineer Mark Tillen."
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/neural/systems/nerves/0.html
NERVES: Nervous System Construction Kit, from the CMU AI Repository.
This directory contains NERVES, the nervous system construction kit. It includes a computer simulation of the real-time behavior of a simplified cockroach, based on Beer's work.
vorlon.cwru.edu/~beer/Papers/TICS.pdf
Dynamical approaches to cognitive science, Trends in Cognitive Sciences, Volume 4, Issue 3, 2000.
Beer reviews three contrasting examples of work on dynamical ideas in cognition, using these to articulate the main differences between dynamical approaches and the symbolic and connectionist approaches:
"In a second experiment, Elman extended this work to longer and more complex sentences with long-distance dependencies involving number agreement, verb argument structure, and relative clauses. The dynamics of a network trained on the prediction task was then examined by plotting projections of the trajectories of hidden-unit activation produced by sample sentences. The trajectory produced by the sentence 'Boy who chases boy chases boy' is shown ... Note that occurrences of the same word at different points in the sentence leave the network in different states. These differences in network state correspond to the network's memory of the information required to process long-distance dependencies correctly. Because the local dynamics at each point determine the effect that subsequent words can have on the network state, grammatical constraints are manifested in the structure of the network dynamics itself. Thus, as a sentence is processed, each word drives the network along one of the different trajectories allowed by the dynamics at that point, with context manifested as variations in state that influence subsequent processing."
Stafford Beer
https://en.wikipedia.org/wiki/Stafford_Beer - Wikipedia page on Stafford Beer
www.soc.uiuc.edu/people/CVPubs/pickerin/cybernetics.pdf
Cybernetics and the Mangle: Ashby, Beer and Pask, by Andrew Pickering, University of Illinois, 2002.
This paper about the three cyberneticists contains a fascinating account of Beer's work:
"Beer, perhaps more than anyone else, believed that the homeostat held out the promise of constructing superhuman brains, and I first want to emphasise the variety of materialisations of homeostat-type set-ups that Beer contemplated, and often built, in the 1950s and early 1960s. In 1956, for example, he devised a game for solving simultaneous linear equations ... The key feature of this game was that it could be played by children who did not know the relevant mathematics. The children would make selections from various alternative moves, and their choices would be encouraged or discouraged by what Beer called algedonic feedback - in this case, coloured lights signifying pleasure or pain at whatever moves the children made. In effect, the children were the material basis of an adaptive or self-organising system that could be trained to perform the relevant calculations without having to be explicitly 'programmed' to do so. Beer then moved on from children to mice, thinking that mice could be trained to solve simultaneous equations, too. It is not clear whether this worked or not, but I do believe this mouse-computer eventually moved into popular culture, having a role in Douglas Adams' book The Hitch-Hiker's Guide to the Universe. It certainly features in a recent novel in Terry Pratchett's Discworld series. A rather general point here, I suppose, is that cybernetics had a sense of humour - one of its many differences from the classical science paradigm."
He apparently also tried to use the water-flea Daphnia and the single-celled Euglena as homeostatic systems.
www.chroniclesofwizardprang.com/
Beer's Chronicles of Wizard Prang, 1989. From the Cwarel Isaf Institute for management cybernetics.
"Wizard Prang was threatened by toast. He knew that he was supposed to eat breakfast. It was good for him. But eating was exactly what he did not want to do early in the morning."
https://www.theguardian.com/technology/2003/sep/08/sciencenews.chile
Santiago dreaming, by Andy Beckett, The Guardian, 8th September, 2003.
"During the early 70s, in the wealthy commuter backwater of West Byfleet in Surrey, a small but rather remarkable experiment took place. In the potting shed of a house called Firkins, a teenager named Simon Beer, using bits of radios and pieces of pink and green cardboard, built a series of electrical meters for measuring public opinion."
The story of Beer's Project Cybersyn in Chile, an attempt to "implant" an electronic "nervous system" into an entire country, not long before Allende was deposed.
Edmund C. Berkeley
Berkeley was co-editor with Daniel Bobrow of the classic The Programming Language LISP: Its Operation and Applications.
A Berkeley timeline suggests that he had an interest in symbolic computing much earlier, having written memoranda on the applications of symbolic logic in 1941. In 1948, he organised Berkeley Enterprises, Inc., which began as a consulting firm and later sold construction kits for building robots and computing devices as well as publications on logic and cybernetics. One of these kits was "Simon", described in the Fact Sheet on "Simon" as "A very simple model, mechanical brain - the smallest complete mechanical brain in existence".
The fact sheet continues: "We shall now consider how we can design a very simple machine that will think... Let us call it Simon, because of its predecessor, Simple Simon... Simon is so simple and so small in fact that it could be built to fill up less space than a grocery-store box; about four cubic feet... It may seem that a simple model of a mechanical brain like Simon is of no great practical use. On the contrary, Simon has the same use in instruction as a set of simple chemical experiments has: to stimulate thinking and understanding, and to produce training and skill. A training course on mechanical brains could very well include the construction of a simple model mechanical brain, as an exercise". In fact, in his conclusion, Berkeley hopes that Simon might start a fad of building baby mechanical brains, similar to the crystal-set fad of the 1920s. Amongst the points that make Simon unique are: "it can be carried around in one hand (and the power supply in the other hand)"; "it is a mechanical brain that has cost less than $1,000"; and "it is an excellent device for teaching, lecturing and explaining". From the pages of Berkeley's 1949 book Giant Brains or Machines That Think reproduced at www.newbegin.com/html/misc__item_detail_5.html, we see that Simon was a relay machine using paper tape for input and lights for output.
What originally drew me to include Berkeley in this newsletter was his 1956 SMALL ROBOTS - REPORT. It's an interesting and fairly detailed description of a number of electromechanical machines. These include Squee (named after "squirrel"), a robot squirrel which will hunt and pick up tennis-ball "nuts". Squee uses phototubes and contact switches as sensors, three motors as effectors, and half a dozen relays as "brain". Most of the robots are games, intended as show-stopping demonstrations of electronic wonder: there's a maze-solving robot with magnetic-drum memory, a Noughts-and-Crosses machine, and a Divorce Mill with Bigamy Alarm.
The robots were not intended only as show-stoppers. Berkeley says he has a second and perhaps more scientific purpose: to explore the intelligent behavior of machines and master their techniques.
community.computerhistory.org/scc/projects/LISP/book/III_LispBook_Apr66.pdf
The Programming Language LISP: Its Operation and Applications, edited by Bobrow and Berkeley, 1964, 1966.
www.blinkenlights.com/classiccmp/berkeley/
Edmund Berkeley timeline.
www.blinkenlights.com/classiccmp/berkeley/simonfaq.html
Fact Sheet on "Simon".
www.newbegin.com/html/misc__item_detail_5.html
Cover and four pages of Berkeley's 1949 book Giant Brains or Machines That Think.
www.blinkenlights.com/classiccmp/berkeley/report.html
Berkeley's SMALL ROBOTS - REPORT, 1956.
Tim Berners-Lee
https://en.wikipedia.org/wiki/Tim_Berners-Lee - Wikipedia page on Tim Berners-Lee
www.w3.org/People/Berners-Lee/ - Berners-Lee's home page.
The Semantic Argument Web - What really scares me, by David Weinberger, 14th June, 2002.
The author fears that, although Berners-Lee claims the Semantic Web is not AI, it is dragging him into one of AI's stickiest morasses - knowledge representation.
www.w3.org/DesignIssues/Semantic.html
Semantic Web Road map, 1998.
Berners-Lee talks about RDF as a logic language. This page links to What the Semantic Web can represent, https://www.w3.org/DesignIssues/RDFnot.html, which states that "A Semantic Web is not Artificial Intelligence". I suspect not everyone will agree.
Daniel G. Bobrow
https://en.wikipedia.org/wiki/Daniel_G._Bobrow - Wikipedia page on Daniel G. Bobrow
www2.parc.com/spl/members/bobrow/ - Bobrow's home page.
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-051.pdf
METEOR: A LISP Interpreter for String Transformations. Memo 51, Artificial Intelligence Projects, RLE and MIT Computation Center, April 24, 1963. This is an earlier version of the two articles The LISP Program for METEOR and METEOR: A LISP Interpreter for String Transformations printed in The Programming Language Lisp: Its Operation and Applications.
Meteor was a pattern-matching extension to Lisp, inspired by Yngve's Comit language. In it, Bobrow wrote Student, his classic program for solving simple algebra problems stated in English. The Meteor paper notes that lists containing more than 16,000 atoms would not fit into the 7090.
www.lcs.mit.edu/specpub.php?id=573
Natural Language Input for a Computer Problem Solving System, MIT-LCS Technical Report 001, by Bobrow, 1964.
The original report about Student. From its number, this looks like the starting point of an important MIT series.
https://people.eecs.berkeley.edu/~bh/v3ch6/ai.html
Artificial Intelligence, by Brian Harvey, University of California, Berkeley. A chapter from Beyond Programming, volume 3 of Harvey's Computer Science Logo Style (2nd edition), 1997.
This chapter analyses Student and translates it into Logo.
Nick Bostrom
https://en.wikipedia.org/wiki/Nick_Bostrom - Wikipedia page on Nick Bostrom
https://www.nickbostrom.com - Bostrom's home page.
https://www.nickbostrom.com/superintelligence.html
How Long Before Superintelligence? 1997, 1998; postscripts added 2000 and 2005. Originally published in International Journal of Future Studies, Volume 2, 1998. To be reprinted in Linguistic and Philosophical Investigations, March 2006.
Bostrom, director of the Future of Humanity Institute at Oxford University, outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century.
Rodney Brooks
https://en.wikipedia.org/wiki/Rodney_Brooks - Wikipedia page on Rodney Brooks
http://people.csail.mit.edu/brooks/ - Brooks's home page.
http://people.csail.mit.edu/brooks/papers/AIM-1293.pdf
Intelligence Without Reason, MIT AI Lab Memo 1293 (1991), prepared for Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-91):
"Computers and Thought are the two categories that together define Artificial Intelligence as a discipline. It is generally accepted that work in Artificial Intelligence over the last thirty years has had a strong influence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong influence on our models of thought. The Von Neumann model of computation has lead Artificial Intelligence in particular directions. Intelligence in biological systems is completely different. Recent work in behavior-based Articial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation."
http://people.csail.mit.edu/brooks/papers/nature.pdf
The relationship between matter and life, Nature, Volume 409, January 2001.
The paper above was an early account of the behaviour-based approach to AI. Ten years later, Brooks asks why, although they are much more lifelike than the pure engineering artefacts of traditional AI, the systems built under the behaviour-based and Artificial Life approaches do not seem as alive as we might hope. Why, despite the computer power we now have at our disposal, are we not good at modelling living systems? What is the fundamental gap in our understanding?
Paul Brown
http://www.paul-brown.com - Brown's home page.
http://www.paul-brown.com/WORDS/CR2003.PDF
The Idea Becomes a Machine: AI and Alife in Early British Computer Arts, 2003.
Brown discusses the influence of AI and Artificial Life on the computer arts in the UK up to around 1980, focussing on work at the Slade School of Art, and including the 1968 Cybernetic Serendipity exhibition at the Institute of Contemporary Art. This essay was written as part of the CACHe (Computer Arts, Contexts, Histories, etc) project, looking at the early days of the computer arts in the UK.
David Chalmers
https://en.wikipedia.org/wiki/David_Chalmers - Wikipedia page on David Chalmers
http://consc.net - Chalmers's home page.
http://consc.net/papers/matrix.html
The Matrix as Metaphysics.
If one thing is certain, it's that AI will give rise to philosophical questions about consciousness. If another thing is certain, it's that Hollywood will never understand AI (*). But we can at least use one to teach about the other:
"This paper was written for the philosophy section of the official Matrix website. As such, the bulk of the paper is written to be accessible for an audience without a background in philosophy. At the same time, this paper is intended as a serious work of philosophy, with relevance for central issues in epistemology, metaphysics, and the philosophy of mind and language. A section of 'philosophical notes' at the end of the article draws out some of these connections explicitly."
Online papers on consciousness, compiled by Chalmers.
https://fragments.consc.net
Chalmers's blog.
(*) Or any other science.
Ron Chrisley
www.cogs.susx.ac.uk/users/ronc/ - Chrisley's home page.
www.cogs.susx.ac.uk/users/ronc/papers/ai.txt
Artificial intelligence, an entry by Chrisley for The Oxford Companion to the Mind (second edition), edited by Richard Gregory, 2004:
"Since the mid-1980s, there has been sustained development of the core ideas of artificial intelligence, e.g., representation, planning, reasoning, natural language processing, machine learning, and perception. In addition, various sub-fields have emerged, such as research into agents (autonomous, independent systems, whether in hardware or software), distributed or multi-agent systems, coping with uncertainty, affective computing/models of emotion, and ontologies (systems of representing various kinds of entities in the world) - achievements which, while new advances, are conceptually and methodologically continuous with the field of artificial intelligence as envisaged at the time of its modern genesis: the Dartmouth conference of 1956.
However, a substantial and growing proportion of research into artificial intelligence, while often building on the foundations just mentioned, has shifted its emphasis. ...
The new developments, which have their roots in the cybernetics work of the 40s and 50s as much as, if not more than they do in mainstream AI, can be divided into two broad areas: adaptive systems, and embodied/situated approaches."
Chrisley goes on to survey these two areas, and the relevance of AI to understanding the mind.
William J. Clancey
https://en.wikipedia.org/wiki/William_Clancey - Wikipedia page on William J. Clancey
https://billclancey.name/index.html - Clancey's home page.
Notes on "Epistemology of a Rule-based Expert System", Artificial Intelligence, Volume 59, 1993 - special issue Artificial Intelligence in Perspective.
Clancey writes about how his time with Mycin led him to go beyond the simple backward-chaining explanations of conclusions given by it and many other expert systems.
Andy Clark
https://en.wikipedia.org/wiki/Andy_Clark - Wikipedia page on Andy Clark
Artificial Intelligence and the Many Faces of Reason, in The Blackwell Guide To Philosophy Of Mind, edited by S. Stich and T. Warfield, 2003.
"I shall focus this discussion on one small thread in the increasingly complex weave of Artificial Intelligence and Philosophy of Mind: the attempt to explain how rational thought is mechanically possible. This is, historically, the crucial place where Artificial Intelligence meets Philosophy of Mind. But it is, I shall argue, a place in flux. For our conceptions of what rational thought and reason are, and of what kinds of mechanism might explain them, are in a state of transition. To get a sense of this sea change, I shall compare several visions and approaches, starting with what might be termed the Turing-Fodor conception of mechanical reason, proceeding through connectionism with its skill-based model of reason, then moving to issues arising from robotics, neuroscientific studies of emotion and reason, and work on 'ecological rationality'. As we shall see there is probably both more, and less, to human rationality than originally met the eye."
Towards a Cognitive Robotics (with Rick Grush), Adaptive Behavior, Volume 7, Issue 1, 1999:
"Contemporary cognitive science, it is fair to say, displays a deep-seated commitment to a representational view of the mind. According to such a view, intelligence is largely a matter of problem solving, and problem- solving is carried out via computations defined over internal representations of salient real-world structures, facts and hypotheses. ...
This picture may be dubbed, without malice, the same old story (SOS). Classic statements of SOS include, e.g., Pylyshyn, 1987; Fodor 1975, 1987. But the same broad outline applies equally to the bulk of work in connectionism and neural networks (Rumelhart, McClelland, & The PDP Research Group, 1986; Smolensky, 1988; Elman, 1993; Churchland & Sejnowski, 1992). Nevertheless, scepticism concerning SOS is undoubtedly on the rise. In particular, there is a definite challenge in the air regarding the pivotal notion of internal representation itself."
Dave Cliff
https://en.wikipedia.org/wiki/Dave_Cliff_(computer_scientist) - Wikipedia page on Dave Cliff
Dave Cliff: On "Articulate Rebels With Brains", HP Labs Featured Inventor, February 2003.
Cliff has worked in artificial life and evolutionary robotics. He joined Hewlett-Packard six years ago; in this HP inventor profile, he talks about inspiring children to be inventors.
http://news.bbc.co.uk/2/hi/uk_news/england/1658983.stm
Scientists invent electronic DJ, BBC. Friday, 16 November, 2001.
Cliff's robot disc jockey.
Alain Colmerauer and Philippe Roussel
https://en.wikipedia.org/wiki/Alain_Colmerauer - Wikipedia page on Alain Colmerauer
The birth of Prolog, November 1992. The authors describe how Prolog was invented - in a project aimed not at producing a programming language, but at processing natural language.
La naissance de Prolog, July 1992.
The original, French, version of this paper.
Jared Darlington
Boston, 2001.
An account of the early 1960s, a heady time:
"... there was a definite feeling of being where it's happening. At MIT, time-sharing on IBM 7090s and 7094s was getting off the ground, allowing users effectively to run and debug programs on-line, and other applications of such large new mainframes were being explored. Under Marvin Minsky's direction, research on artificial intelligence was well under way at Tech Square amid speculation as to how soon we will 'have AI'."
(Page is in HTML but lacks an appropriate extension, so your browser may display the HTML verbatim.)
Daniel Dennett
https://en.wikipedia.org/wiki/Daniel_Dennett - Wikipedia page for Daniel Dennett
Dennett's home page. See it not only for Dennett's research and publications - The Mind's I, coedited with Douglas Hofstadter, is a well-known book - but also for the photos of a 1950s French robot dog. Dennett would be pleased to receive any substantiated information about its provenance.
When HAL Kills, Who's to Blame? Computer Ethics, in Hal's Legacy: 2001's Computer as Dream and Reality, edited by David Stork, 1997:
"If we want to trace the skein of moral responsibility in the actions of HAL, recent fiction has provided us with models - from RoboCop, to Max Headroom, to Blade Runner, which may help us understand the kinds of issues which we need to face in this kind of study. What issues would come in to play if we make moral judgements on HAL's behavior?"
Review of Hofstadter et al., "Fluid Concepts and Creative Analogies", Complexity, 1995.
"Hofstadter has numerous important reflections to offer on 'the knotty problem of evaluating research,' and one of the book's virtues is to draw clearly for us 'the vastness of the gulf that can separate different research projects that on the surface seem to belong to the same field. Those people who are interested in results will begin with a standard technology, not even questioning it at all, and then build a big system that solves many complex problems and impresses a lot of people.' He has taken a different path, and has often had difficulties convincing the grown-ups that it is a good one: 'When there's a little kid trying somersaults out for the first time next to a flashy gymnast doing flawless flips on a balance beam, who's going to pay any attention to the kid?' A fair complaint, but part of the problem, now redressed by this book, was that the little kid didn't try to explain (in an efficient format accessible to impatient grown-ups) why his somersaults were so special."
Are your somersaults special?
Edsger W. Dijkstra
https://en.wikipedia.org/wiki/Edsger_W._Dijkstra - Wikipedia page for Edsger W. Dijkstra
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EWD448.html
Trip report E.W.Dijkstra, Edinburgh and Newcastle, 1 - 6 September 1974.
"On Sunday 1st September 1974 I flew via London from Amsterdam to Edinburgh ... Monday morning I passed at the Department of Machine Intelligence, Hope Park Square ... The more I hear about Artificial Intelligence the more ridiculous becomes its often heard defense that Artificial Intelligence has contributed so much to programming technology - for both, the claim and its debunking, see the various contributions to the Lighthill Report. I see more and more reason to characterize its contribution as 'adding to the confusion'".
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD665.html
Trip report E.W.Dijkstra, U.K. - Bahamas - U.S.A., 11-30 April 1978.
"On our 'afternoon off' I did not tour the countryside - although I knew it to be very beautiful - but had a long discussion with prof. R.M.Burstall from Edinburgh. ... The reason was that earlier this year I had had to referee a couple of papers with a strong flavour of Artificial Intelligence. In both cases I had recommended rejection because, according to my scientific standards, they did not have enough 'meat' in them. After I had done so a number of times in succession, I got a little worried, and wanted to know whether the superficiality observed was only characteristic of the authors in question, or was typical for the whole field. As I was fearing the latter I wanted to give Artificial Intelligence a last chance before definitely rejecting it..."
Hubert Dreyfus
https://en.wikipedia.org/wiki/Hubert_Dreyfus - Wikipedia page on Hubert Dreyfus
ist-socrates.berkeley.edu/~hdreyfus/ - Dreyfus's home page.
http://www-formal.stanford.edu/jmc/reviews/dreyfus/dreyfus.html
Review of Hubert Dreyfus's book What Computers Still Can't Do, by John McCarthy.
"In the first edition of Dreyfus's book there were some challenges to AI. Dreyfus said computers couldn't exhibit 'ambiguity tolerance', 'fringe consciousness' and 'zeroing in'. These were left so imprecise that most readers couldn't see any definite problem at all. In the succeeding 30 years Dreyfus has neither made these challenges more precise nor proposed any new challenges, however imprecise. It's a pity, because AI could use a critic saying, 'Here's the easiest thing I don't see how you can do'."
McCarthy's review includes sections on progress in, and the future of, logic-based AI; and on common sense in Lenat's work - "one of the few workers in AI at whose recent work Dreyfus has taken a peek". He quotes Dreyfus: "While representationalists have written programs that attempt to deal with each of these problems [in representing and reasoning with common-sense knowledge], there is no generally accepted solution, nor is there a proof that these problems cannot be solved. What is clear is that all attempts to solve them have run into unexpected difficulties, and this in turn suggests that there may well be in-principle limitations on representationalism. At the very least these difficulties lead us to question why anyone would expect the representationalist project to succeed."
Why, as McCarthy says, should one expect such work to be easy?
www.pbs.org/newshour/bb/entertainment/jan-june97/big_blue_5-12.html
BIG BLUE WINS, May 12, 1997. Transcript of a discussion on PBS between Garry Kasparov, Frederic Friedel (Kasparov's adviser), C. J. Tan (Deep Blue programmer), Daniel Dennett, Dreyfus, Jim Lehrer, Paul Solman, and Margaret Warner.
Dreyfus claims that although in a chess world, the computer will always beat people, "in a world in which relevance and intelligence play a crucial role and meaning in concrete situations, the computer has always behaved miserably, and there's no reason to think that that will change with this victory."
https://slate.com/news-and-politics/1997/05/artificial-intelligence-2.html
https://slate.com/news-and-politics/1997/05/artificial-intelligence-3.html
Artificial Intelligence - mails between Hubert Dreyfus and Daniel Dennett, in Slate's E-mail debates of newsworthy topics, 1997. These emails concern the PBS AI debate above.
Greg Egan
https://en.wikipedia.org/wiki/Greg_Egan - Wikipedia page for Greg Egan
http://www.gregegan.net/index.html - Egan's home page.
http://www.gregegan.net/MISC/ORACLE/Oracle.html
Oracle. First published in Asimov's Science Fiction, July 2000.
Under Turing, I link to Andrew Hodges's Turing Day lecture on what Turing would have done had he lived beyond 1954. Oracle reshapes Turing's life in a different way. In a slightly alternate universe, it evokes the era when the British Government made Turing's life a sin. But Egan knows his maths, physics and computing as well as his history. There's a BBC debate between the alternate Turing and an alternate C. S. Lewis, in which appears a proof of the unsolvability of the Halting Problem. And Egan shows what might have happened if, with the help of some very advanced AI, we were able to advance 20th Century technology as much as, for the good of humanity, he must wish we were able to.
Eliza, Parry, and Racter
www.stanford.edu/group/SHR/4-2/text/dialogues.html
dialogues with colorful personalities of early ai, by Güven Güzeldere and Stefano Franchi, from Constructions of the Mind: Artificial Intelligence and the Humanities. Volume 4, Issue 2 of Stanford Humanities Review, 1995:
Sample dialogues from these three famous interactive programs.
Richard Ennals
https://www.amazon.co.uk/exec/obidos/ASIN/047191293X/026-0473756-9838829
Star Wars: A Question of Initiative, 1987.
Ennals managed information technology research in the UK Government's Alvey Programme, but resigned when the Pentagon sought to use this research for the Strategic Defense Initiative. In this book, he argues that current political systems are inadequate for coping with the advanced technology - computing and AI included - used in projects such as SDI.
https://www.atarimagazines.com/creative/v9n11/220_Logic_and_recursion_the_.php
Logic and recursion: the prolog twist, by Jesse M. Heines, Jonathan Briggs, and Richard Ennals. Creative Computing, Volume 9, Number 11, November 1983.
This early 1980s feature introduces Prolog and recursion to users of such microcomputers as the Sinclair Spectrum, the BBC Micro, the Apple, and the Commodore 64.
Andrei Ershov
http://ershov.iis.nsk.su/ru/archive/subgroup?nid=763243
Academician A. Ershov's archive.
Ershov was one of the visitors to the 1958 Symposium on the Mechanization of Thought Processes at the National Physical Laboratory in Britain. This archive contains a variety of memorabilia, including talks given by Ershov, luggage tickets in English and Russian, a demonstration of English Electric's Deuce computer, questions composed by Members of Parliament, and a souvenir programme of Norman Wisdom in Where's Charley?
Oren Etzioni
https://en.wikipedia.org/wiki/Oren_Etzioni - Wikipedia page for Oren Etzioni
https://homes.cs.washington.edu/~etzioni/ - Etzioni's home page.
Moving up the information food chain: deploying softbots on the World Wide Web, AI Magazine, Summer 1997.
"I view the World Wide Web as an information food chain. The maze of pages and hyperlinks that comprise the Web are at the very bottom of the chain. The WEBCRAWLERS and ALTAVISTAS of the world are information herbivores; they graze on Web pages and regurgitate them as searchable indices. Today, most Web users feed near the bottom of the information food chain, but the time is ripe to move up."
Edward Feigenbaum
https://en.wikipedia.org/wiki/Edward_Feigenbaum - Wikipedia page for Edward Feigenbaum
http://ksl-web.stanford.edu/people/eaf/ - Feigenbaum's home page (last updated 1998).
https://www.forbes.com/global/1998/1130/0118096a.html#574d6f7f7828
Artificial intelligence gets real, by Daniel Lyons, Forbes Global, November 30, 1998.
"On a recent visit to the doctor, Edward Feigenbaum had the eerie experience of seeing one of his inventions used in a way he never expected: His 25-year-old concept was being used to diagnose a problem with his own breathing. 'It's using artificial intelligence,' the doctor patiently explained about the spirometer, which measures airflow. 'Oh, I see,' said Feigenbaum."
The feature tells of the birth of expert systems:
"Feigenbaum [in contrast to researchers who were teaching computers to solve logic puzzles and play chess] succeeded by thinking small. Unlike his rivals, he didn't set out to recreate all of human intelligence in a computer. His idea was to take a particular expert - a chemist, an engineer, a pulmonary specialist - and figure out how that person solved a single narrow problem. Then he encoded that person's problem-solving method into a set of rules that could be stored in a computer."
http://infolab.stanford.edu/pub/voy/museum/feigentree.html
Tree (incomplete) of Feigenbaum's students, 2005, from the Stanford Computer History exhibits.
Walter J. Freeman
Tutorial On Neurobiology: From Single Neurons To Brain Chaos, International Journal of Bifurcation and Chaos, Volume 2, Number 3, 1992.
Chaos theory became fashionable in the late 1980s. In cognitive science, a famous chaos-related paper was Skarda and Freeman's How brains make chaos in order to make sense of the world, Behavioral and Brain Sciences, Volume 10, 1987. The paper linked here is a tutorial by Freeman on the subject:
"This review opens a window onto an approach through neurobiology to nonlinear brain dynamics. It was written with a deep conviction that collaboration between mathematicians, physicists, engineers and biologists holds the key to understanding some of the most fascinating secrets of the central nervous system. It follows a path from the elementary unit of the brain, the neuron, to one of the more complicated networks of neural populations, the cerebral cortex. It emphasizes the need for a two-level approach to brain function. The neuron is seen as the microscopic element for integration and transmission, whereas the neural population is viewed as the macroscopic element for the organization of behavior. As we proceed from neurons toward networks of populations we gain a hierarchical perspective that enables us to understand how chaotic activity can exist at multiple levels: subcellular organelles, neurons, networks, populations, and brain systems, all of which are found in the cerebral cortex. Information can be exchanged across levels at differing time and distance scales. What roles might chaos play in brain function? We will conclude that chaotic dynamics makes it possible for microscopic sensory input that is received by the cortex to control the macroscopic activity that constitutes cortical output, largely owing to the selective sensitivity of chaotic systems to small fluctuations, and their capacity for rapid state transitions."
The Importance of Chaos Theory in the Development of Artificial Neural Systems, by Dave Gross, probably written in 1991 or 1992.
A short summary of chaos as Skarda and Freeman apply it.
https://en.wikipedia.org/wiki/Chaos_theory
Wikipedia on chaos theory. The Chaos Hypertextbook linked at https://hypertextbook.com/chaos/, up to the section on strange attractors, seems a reasonable follow-up. You needn't be a mathematician, but you do need to be happy with multidimensional real functions.
https://www.wolframscience.com/reference/notes/971c
Stephen Wolfram's history of chaos theory, up to James Gleick's popular-science book Chaos.
Irving John ("Jack") Good
A conversation with I. J. Good, by David L. Banks, Statistical Science, Volume 11, Number 1, 1996.
"The Perceptron, and a 1949 book by the psychologist Donald Hebb, provoked me to write an article called 'Speculations Concerning the First Ultraintelligent Machine,' based on the concept of artificial neural networks and what I called a subassembly theory of the mind. I thought neural networks, with their ultraparallel working, were as likely as programming to lead to an intelligent machine, but brains use both methods; they have parallel architecture and also use language and reasoning. So we can learn from our brains as well as with them. When discussing complex systems, like brains and other societies, it is easy to oversimplify: I call this Occam's lobotomy. Evolution is opportunist; it doesn't have to choose when a compromise works better."
http://ei.cs.vt.edu/~history/Good.html
Biography of I. J. Good, by A. N. Lee, 1994. This describes him as the "Overlooked Father of Computation". Overlooked because of the secrecy surrounding his work at Bletchley Park.
Good featured, together with Minsky, in Arthur C. Clarke's 2001: A Space Odyssey (1968). According to the above biography, this contains the quote:
"In the 1980's, [Marvin] Minsky and [Jack] Good had shown how neural networks could be generated automatically - self replicated - in accordance with any arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of a human brain."
Steve Grand
https://en.wikipedia.org/wiki/Steve_Grand_(roboticist) - Wikipedia page for Steve Grand
Grand's home page at Cyberlife, describing the origins of the evolving neural-net driven animats in his game "Creatures".
The history of "Creatures" by Creature Labs, now a subsidiary of Gameware.
Moving AI Out of its Infancy: Changing Our Preconceptions, IEEE Intelligent Systems, November/December 2004.
Grand talks about his work on the robot orang-utan Lucy:
"The other day, it was my turn to answer stupid questions about the movie I, Robot. 'Do you think it's about time we started incorporating Asimov's three laws into real robots?' a journalist asked. I replied that Asimov's laws are about as relevant to real robotics as leechcraft is to modern medicine. Yes, before anyone writes me smug emails, I know that leeches are very useful in modern medicine, but I said 'leechcraft.' Leeches might be useful, but the paradigm of thought that originally led to their use is a ridiculous anachronism. The same is true for Asimov's laws."
www.akri.org/ai/steveg.htm
Artificial Intelligence: Steve Grand "Machines Like Us". Transcript from Grand's presentation at Applied Knowledge Research and Innovation's Biennial Seminar on 17th October 2002.
"Anyway what went wrong with A.I.? Well its all this guy's fault and I presume most of you recognise Alan Turing."
The emotional machine, by Suzy Hansen, Salon, 2nd January, 2002.
"Steve Grand, designer of the artificial life program Creatures, talks about the stupidity of computers, the role of desire in intelligence and the coming revolution in what it means to be 'alive.'"
Stevan Harnad
https://en.wikipedia.org/wiki/Stevan_Harnad - Wikipedia page for Stevan Harnad
https://www.southampton.ac.uk/~harnad/Hypermail/Foundations.Cognitive.Science2001/0158.html
Immediate future of AI, a newsgroup discussion with a reporter for Smart Business Magazine, 28th July 2001.
Harnad writes about where AI went wrong, and on how robotics has made its way to centre-stage: as a way to "ground" the symbols that John Searle's Chinese Room inhabitant merely shuffles without understanding. The mail links to several Harnad publications, including his classic The Symbol Grounding Problem.
Harry Harrison
https://en.wikipedia.org/wiki/Harry_Harrison_(writer) - Wikipedia page for Harry Harrison
www.harryharrison.com/ - Harrison's home page.
Harrison is a well-known science-fiction writer. Less well-known is that he co-authored a book with Marvin Minsky. The Turing Option, 1993, is about the inventor of a new AI who is shot through the brain, destroying crucial neural connections. But, using advanced AI techniques based upon Minsky's Society of Mind (and, I believe, expert systems and Lenat's Cyc), his brain is repaired and his memories restored. This link reviews the book.
https://web.media.mit.edu/~minsky/papers/option.chapters.txt
Two unpublished chapters of The Turing Option, on Minsky's site.
Examining the Society of Mind, by Push Singh, MIT, October 2003.
The author looks at some of the AI history behind Society of Mind, breaks the theory down into its component ideas, and examines some implementation problems.
Donald O. Hebb
https://en.wikipedia.org/wiki/Donald_O._Hebb - Wikipedia page for Donald O. Hebb
https://www.southampton.ac.uk/~harnad/Archive/hebb.html
D. O. Hebb: Father of Cognitive Psychobiology, 1904-1985, by Stevan Harnad, 1985.
Harnad's personal recollection and appreciation of Hebb:
"But then Hebb reminded us of the problem anew, first through suggestive accounts of his work with Penfield on the localization of memories in the brain, and then from the viewpoint of his own specific hypothesis that thoughts could actually be the activity of reverberating circuits of neurons called 'cell-assemblies.' I don't think his idea had its full impact on me at the moment he described it. Rather, it was after the lecture, as I thought about it, and thought that my thoughts may well consist of those physical things I was thinking about, that I realized what a radically different world view such a theory represented, and that it all had a ring of reality to it that made the Freudian notions I had been flirting with sound like silly fairy tales. Here were the real unconscious processes underlying our thinking, instead of the anthropomorphic machinations of some Freudian 'unconscious mind,' which now began to look rather like a supernumerary and supererogatory alter homunculus: One mind/body problem was enough!"
http://psychclassics.yorku.ca/Hebb/
Drives and the C.N.S. (Conceptual Nervous System), Psychological Review, Volume 62, 1955. Online at Christopher D. Green's Classics in the History of Psychology:
"The problem of motivation of course lies close to the heart of the general problem of understanding behavior, yet it sometimes seems the least realistically treated topic in the literature. In great part, the difficulty concerns that c.n.s., or "conceptual nervous system," which Skinner disavowed and from whose influence he and others have tried to escape. But the conceptual nervous system of 1930 was evidently like the gin that was being drunk about the same time; it was homemade and none too good, as Skinner pointed out, but it was also habit-forming; and the effort to escape has not really been successful. Prohibition is long past. If we must drink we can now get better liquor; likewise, the conceptual nervous system of 1930 is out of date and - if we must neurologize - let us use the best brand of neurology we can find.
Though I personally favor both alcohol and neurologizing, in moderation, the point here does not assume that either is a good thing. The point is that psychology is intoxicating itself with a worse brand than it need use. Many psychologists do not think in terms of neural anatomy; but merely adhering to certain classical frameworks shows the limiting effect of earlier neurologizing."
James A. Hendler
https://en.wikipedia.org/wiki/James_Hendler - Wikipedia page for James A. Hendler
www.cs.umd.edu/~hendler/ - Hendler's home page.
http://www.cnn.com/chat/transcripts/1999/12/hendler/index.html
A chat about the future of artificial intelligence, from CNN's @2000 chat series, January 1, 2000.
Did you know Microsoft's paperclip used Bayesian belief networks? Hendler answers audience questions about AI.
www.cs.umd.edu/users/hendler/funding-talk/
How to get that first grant: A young scientist's guide to (AI) funding in America. Slides from a tutorial presented at the Fifteenth National Conference on Artificial Intelligence (AAAI-98), July 1998.
Geoffrey E. Hinton
https://en.wikipedia.org/wiki/Geoffrey_Hinton - Wikipedia page for Geoffrey E. Hinton
http://www.cs.toronto.edu/~hinton/ - Hinton's home page.
http://www.cs.toronto.edu/~hinton/talks/gentle.ppt
A "very gentle after-dinner version" of Hinton's IJCAI-2005 Research Excellence Award Lecture Can computer simulations of the brain allow us to see into the mind?.
Preface to the special issue on connectionist symbol processing, Artificial Intelligence, Volume 46, 1990.
www.cs.rhul.ac.uk/NCS/vol1_3.pdf
A Brief History of Connectionism by David A. Medler, Neural Computing Surveys, Volume 1, 1998.
A detailed history of connectionism in cognitive science. It mentions the 1981 book Parallel Models of Associative Memory by Hinton and J. A. Anderson, saying that in many ways, this book acts as a bridge between the "Old Connectionism" of the Perceptron and the "New Connectionism" of fully-trainable and computationally powerful networks.
Douglas R. Hofstadter
https://en.wikipedia.org/wiki/Douglas_Hofstadter - Wikipedia page for Douglas R. Hofstadter
https://prelectur.stanford.edu/lecturers/hofstadter - Hofstadter's home page.
www.stanford.edu/group/SHR/4-2/text/hofstadter.html
On seeing A's and seeing As, from Constructions of the Mind: Artificial Intelligence and the Humanities. Volume 4, Issue 2 of Stanford Humanities Review, 1995.
Hofstadter argues that logic-based AI may have reached a dead-end, being brittle and too little concerned with perception. But, in contrast to some researchers' views of it, perception is itself a highly abstract act - even a highly abstract art - in which intuitive guesswork and subtle judgments play starring roles. He illustrates with pictures of Bongard pattern-recognition problems, and concludes:
"As Heinz Pagels reports in his book The Dreams of Reason, one time the mathematician Stanislaw Ulam and his mathematician friend Gian-Carlo Rota were having a lively debate about artificial intelligence, a discipline whose approach Ulam thought was simplistic. Convinced that perception is the key to intelligence, Ulam was trying to explain the subtlety of human perception by showing how subjective it is, how influenced by context. He said to Rota, 'When you perceive intelligently, you always perceive a function, never an object in the physical sense. Cameras always register objects, but human perception is always the perception of functional roles. The two processes could not be more different.... Your friends in AI are now beginning to trumpet the role of contexts, but they are not practicing their lesson. They still want to build machines that see by imitating cameras, perhaps with some feedback thrown in. Such an approach is bound to fail...'"
John Holland
https://en.wikipedia.org/wiki/John_Henry_Holland - Wikipedia page for John Holland
https://www.sciencedaily.com/releases/2003/02/030214075837.htm
Falling Prey To Machines? Adapted from a news release issued by University Of Michigan College Of Engineering, ScienceDaily, 14th February 2003:
"For Holland, the crucial leap in machine intelligence will be when computers start thinking like human beings, rather than just reaching the same results as them with different processes. This kind of advanced artificial intelligence would involve learning new skills, adapting to unforeseen circumstances and using analogy and metaphor like humans do. To make these breakthroughs possible, researchers will need an overarching theory that can shape the field of artificial intelligence in the same way that Maxwell's theory of electromagnetism shaped modern physics."
Jim Howe
www.inf.ed.ac.uk/people/staff/James_Howe.html - Howe's home page.
http://www.inf.ed.ac.uk/about/AIhistory.html
Artificial Intelligence at Edinburgh University: a Perspective, 1994.
Both a perspective on, and a history of, AI at Edinburgh.
Mark Humphrys
https://computing.dcu.ie/~humphrys/ - Humphrys's home page.
AI is possible .. but AI won't happen: The future of Artificial Intelligence. Talk given at the "Next Generation" symposium, the "Science and the Human Dimension" series, Jesus College Cambridge, August 1997.
In this and a later talk (The Hardest Problem in the History of Science, 2000), Humphrys claims that although AI is possible in theory, it is impossible in practice. One reason: we can't expect to get anywhere by building a single isolated Artificial Intelligence alone in the lab; our AIs must have the opportunity to engage in repeated social interactions and evolve a rich culture.
Alan Kay
https://en.wikipedia.org/wiki/Alan_Kay - Wikipedia page for Alan Kay
The Early History of Smalltalk, 1993:
"I will try to show where most of the influences came from and how they were transformed in the magnetic field formed by the new personal computing metaphor. It was the attitudes as well as the great ideas of the pioneers that helped Smalltalk get invented. Many of the people I admired most at this time - such as Ivan Sutherland, Marvin Minsky, Seymour Papert, Gordon Moore, Bob Barton, Dave Evans, Butler Lampson, Jerome Bruner, and others - seemed to have a splendid sense that their creations, though wonderful by relative standards, were not near to the absolute thresholds that had to be crossed. Small minds try to form religions, the great ones just want better routes up the mountain."
http://people.cs.uchicago.edu/~mark/51050/lectures/lecture.4/lecture.4.pdf
There's a nice anecdote by Kay, recalled in J. Mark Shacklette's lecture notes on object-oriented programming, linked here:
"One little incident of LISP beauty happened when Allen Newell visited PARC with his theory of hierarchical thinking and was challenged to prove it. He was given a programming problem to solve . . . given a list of items, produce a list consisting of all the odd indexed items followed by all of the even indexed items. Newell got into quite a struggle to do the program [with his IPL-V like language]. In 2 seconds I wrote down oddsEvens(x) = append(odds(x), evens(x)). This characteristic of writing down many solutions in declarative form and have them also be the programs is part of the appeal and beauty of this kind of language. Watching a famous guy much smarter then I struggle for more than 30 minutes to not quite solve the problem his way (there was a bug) made quite an impression."
Robert Kowalski
https://en.wikipedia.org/wiki/Robert_Kowalski - Wikipedia page for Robert Kowalski
http://www.doc.ic.ac.uk/~rak/ - Kowalski's home page.
http://www.doc.ic.ac.uk/~rak/history.pdf
Robert Kowalski: A Short Story of My Life and Work, April 2002.
Kowalski's memories of school, Stanford, logic, computing, and the heady days of Prolog, working with Alain Colmerauer and others. On arriving at Edinburgh University's Meta-mathematics Unit and seeing the sign 'Department of Computer Science':
"My heart sank. I hated computers, but I decided I would stick it out, get my PhD as quickly as possible, and resume my search for truth."
He describes how he helped develop microProlog for schools, working with Frank McCabe and Richard Ennals. Three years later, he became the most senior academic in Britain arguing the case for logic programming in Britain's response to the Japanese Fifth Generation Project:
"It was chaos. Academics argued with academics, industrialists with both academics and fellow industrialists - all presided over by the British civil service. We all wanted to carve out a slice of the action for ourselves. Some of us went further by arguing that we should follow the lead of the Fifth Generation Project and focus on logic programming to the detriment of other areas. That was a big mistake."
Ray Kurzweil
https://en.wikipedia.org/wiki/Ray_Kurzweil - Wikipedia page for Ray Kurzweil
www.kurzweilai.net/ - Kurzweil's KurzweilAI.net page.
"The Singularity is near"!
https://www.theguardian.com/science/2005/nov/21/academicexperts.elearning
The ideas interview: Ray Kurzweil, Guardian, 21st November, 2005.
"'By 2020, $1,000 (£581) worth of computer will equal the processing power of the human brain,' he says. 'By the late 2020s, we'll have reverse-engineered human brains.'"
Christopher G. Langton
https://en.wikipedia.org/wiki/Christopher_Langton - Wikipedia page for Christopher G. Langton
What is Artificial Life?, Zooland site.
Karl S. Lashley
https://en.wikipedia.org/wiki/Karl_Lashley - Wikipedia page for Karl S. Lashley
http://psychclassics.yorku.ca/Lashley/neural.htm
Basic Neural Mechanisms in Behavior, Psychological Review, Volume 37, 1930. Online at Christopher D. Green's Classics in the History of Psychology.
"Among the systems and points of view which comprise our efforts to formulate a science of psychology, the proposition upon which there seems to be most nearly a general agreement is that the final explanation of behavior or of mental processes is to be sought in the physiological activity of the body and, in particular, in the properties of the nervous system. The tendency to seek all causal relations of behavior in brain processes is characteristic of the recent development of psychology in America. Most of our text-books begin with an exposition of the structure of the brain and imply that this lays a foundation for a later understanding of behavior. It is rare that a discussion of any psychological problem avoids some reference to the neural substratum, and the development of elaborate neurological theories to 'explain' the phenomena in every field of psychology is becoming increasingly fashionable.
In reading this literature I have been impressed chiefly by its futility. The chapter on the nervous system seems to provide an excuse for pictures in an otherwise dry and monotonous text. That it has any other function is not clear; there may be cursory references to it in later chapters on instinct and habit, but where the problems of psychology become complex and interesting, the nervous system is dispensed with."
Douglas B. Lenat
https://en.wikipedia.org/wiki/Douglas_Lenat - Wikipedia page for Douglas B. Lenat
https://www.cyc.com/about-us/leadership-team - Lenat's home page.
www.cyc.com/cyc/technology/halslegacy.html
From 2001 to 2001: Common Sense and the Mind of HAL, in Hal's Legacy: 2001's Computer as Dream and Reality, edited by David Stork, 2001.
Lenat explains how to build HAL in three easy steps:
- Prime the pump with the millions of everyday terms, concepts, facts, and rules of thumb that comprise human consensus reality - that is, common sense.
- On top of this base, construct the ability to communicate in a natural language, such as English. Let the HAL-to-be use that ability to vastly enlarge its knowledge base.
- Eventually, as it reaches the frontier of human knowledge in some area, there will be no one left to talk to about it, so it will need to perform experiments to make further headway in that area.
My January 2005 entry on Lenat, including links concerning his common-sense reasoning project Cyc, which will supply these millions of everyday terms, concepts, facts, and rules of thumb.
My January 2005 entry on OpenCyc, the open-source version of Cyc.
Robert A. Levinson
www.cse.ucsc.edu/personnel/faculty/levinson.html - Levinson's home page.
www.ucsc.edu/oncampus/currents/97-05-05/chess.htm
"Deep Blue" inspires deep thinking about artificial intelligence by computer scientist, by Robert Irion, 5th May, 1997.
Levinson criticises Deep Blue for its lack of meta-reasoning and learning:
"'Deep Blue is a powerful entity, and it represents a wonderful engineering effort,' Levinson said this week as he looked forward to following the games live on the Internet. 'I do agree that it sits somewhere on the scale of ''intelligence.'' But even if it proves the most successful approach toward beating the world champion in chess, it's a long way from artificial intelligence. What it really lacks is autonomy and adaptability.'"
http://satirist.org/learn-game/projects/morph.html
Levinson's Morph project, his learning chess program which the feature contrasts with Deep Blue.
James Lighthill
https://en.wikipedia.org/wiki/James_Lighthill - Wikipedia page for James Lighthill
http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html
Review of "Artificial Intelligence: A General Survey", by John McCarthy, 1973 or 1974.
Lighthill was commissioned by the British Science Research Council, the main funding body for university research, to write a report which would help them decide on future requests for funding. The report criticised a lot of AI research, and many believe it was responsible for the large cuts in funding that took place after 1973, causing researchers to leave for the U.S., and a British "AI Winter" that lasted until the expert systems boom and the response to the Fifth Generation project in the early 1980s. Here, McCarthy reviews the report; while he finds fault with Lighthill's approach to AI, he also identifies faults in AI research itself, including the "look ma, no hands" disease.
http://www-formal.stanford.edu/jmc/reviews/bloomfield/bloomfield.html
The Question of Artificial Intelligence, by John McCarthy, 2000:
"We make a final remark about the Lighthill report... When a physicist is forced to think about AI he generally reinvents the subject in his individual way. Some expect it to be easy and others impossible. Lighthill was in the latter category. In the 1974 BBC debate, I thought I had a powerful argument and asked Lighthill why, if the physicists hadn't mastered turbulence in 100 years, they should expect AI researchers to give up just because they hadn't mastered AI in 20. Lighthill's reply, which BBC unfortunately didn't include in the broadcast, was that the physicists should give up on turbulence. Hardly any physicists would agree with Lighthill's statement, and maybe he didn't mean it."
http://www-formal.stanford.edu/jmc/reviews/lighthill-20/lighthill-20.html
Lessons from the Lighthill Flap, by John McCarthy:
"This is a review of Martin Lam's The Lighthill Report - 20 years after: Martin Lam gives us a British civil servant's view of the Lighthill report and subsequent developments. My comments concern some limitations of this view that may be related to the background of the author - or maybe they're just a scientist's prejudices about officials.
Lam accepts Lighthill's eccentric partition of AI research into Advanced Automation, Computer-based Studies of the Central Nervous System and Bridges in between. This classification wasn't accepted then and didn't become accepted since, because it almost entirely omits the scientific basis of AI."
Chris Lucas
http://www.calresco.org/lucas/selforg.htm
Self-Organization and Human Robots, International Journal for Advanced Robotic Systems, March 2005.
A speculative proposal to apply complexity, self-organisation and attractors to robot design.
George F. Luger
https://www.cs.unm.edu/~luger/ - Luger's home page.
https://www.cs.unm.edu/~luger/ai-final/preface.html
This page links to the preface of Luger's book Artificial Intelligence: Structures and Strategies for Complex Problem Solving, now in its 5th edition. In his preface, written in 2004, Luger talks about how earlier editions have dated as AI developed. One change is that stochastic methods such as Bayesian networks and Markov models are much more important. More generally, the debate between the neats and the scruffies has given way to dozens of other debates between diverse interests. This diversity is to be welcomed:
"Our original image of AI as frontier science where outlaws, prospectors, wild-eyed prairie prophets and other dreamers were being slowly tamed by the disciplines of formalism and empiricism has given way to a different metaphor: that of a large, chaotic but mostly peaceful city, where orderly bourgeois neighborhoods draw their vitality from diverse, chaotic, bohemian districts."
Chris Malcolm
www.dai.ed.ac.uk/homes/cam/ - Malcolm's home page.
www.dai.ed.ac.uk/homes/cam/WRRTW.shtml
Why Robots Won't Rule the World, 2000:
"This is a general resource page for arguments against the idea that robots (or some other superintelligent machines) will supersede us as the dominant 'life' form and take over the world from us. These arguments have received a lot of publicity in the national press of many countries, on TV and radio, and in popular science journals such as Scientific American. I'm surprised that so many well-educated people take the ideas seriously. Since they do, it is worth while explaining why these ideas are silly."
www.dai.ed.ac.uk/homes/cam/RWR_comments.shtml
Robots Won't Rule, 2000:
"It was rumoured in some of the UK national press of the time [a bit more than 10 years after Lighthill] that Margaret Thatcher watched Professor Fredkin being interviewed on a late night TV science programme. Fredkin explained that superintelligent machines were destined to surpass the human race in intelligence quite soon, and that if we were lucky they find human beings interesting enough to keep us around as pets. The rumour is that Margaret Thatcher decided on seeing that that the 'artificial intelligentsia' whom she was just proposing to give lots of research funds under the Alvey Initiative were seriously deranged. Her answer was to double the amount of industrial support required by a research project in order to be eligible for Alvey funding, hoping thereby to counterbalance their deranged flights of fancy with industrial common sense."
David Marr
https://en.wikipedia.org/wiki/David_Marr_(neuroscientist) - Wikipedia page for David Marr
http://kybele.psych.cornell.edu/~edelman/marr/marr.html
David Marr: a short biography, International Encyclopaedia of Social and Behavioral Sciences, by Shimon Edelman and Lucia M. Vaina, 2001:
"A consummation of this three-pronged effort to develop an integrated mathematical-neurobiological understanding of the brain would in any case have earned Marr a prominent place in a gallery, spanning two and a half centuries (from John Locke to Kenneth Craik), of British Empiricism, the epistemological stance invariably most popular among neuroscientists. As it were, having abandoned the high-theory road soon after the publication of the hippocampus paper, Marr went on to make his major contribution to the understanding of the brain by essentially inventing a field and a mode of study: computational neuroscience. By 1972, the focus of his thinking in theoretical neurobiology shifted away from abstract theories of entire brain systems, following a realization that without an understanding of specific tasks and mechanisms - the issues from which his earlier theories were 'once removed' - any general theory would be glaringly incomplete."
John McCarthy
https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) - Wikipedia page for John McCarthy
http://www-formal.stanford.edu/jmc/ - McCarthy's home page.
http://www-formal.stanford.edu/jmc/whatisai/whatisai.html
What is Artificial Intelligence?, 2004.
"This article for the layman answers basic questions about artificial intelligence. The opinions expressed here are not all consensus opinion among researchers in AI".
http://www-formal.stanford.edu/jmc/robotandbaby.html
The Robot and the Baby, 2004.
McCarthy's first science fiction story, which partly illustrates his opinions about what household robots should be like. To be contrasted with the film AI, of which McCarthy notes in his introduction: "There is no more of the science of AI in the movie than there is in the Pinocchio story of more than 100 years ago. One should also not take seriously any of the ideas of the movie of what robots might really be like."
http://www-formal.stanford.edu/jmc/history/lisp/lisp.html
History of Lisp, 1979.
http://infolab.stanford.edu/pub/voy/museum/pictures/display/1-7.htm
"In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI."
A 1971 quote from McCarthy, visible in one of the photos of the Stanford AI Lab pictured here.
http://www-formal.stanford.edu/jmc/mcc59/mcc59.html
Programs with Common Sense, 1959.
About McCarthy's Advice Taker program.
http://infolab.stanford.edu/pub/voy/museum.html
Tree (incomplete) of McCarthy's students, from the Stanford Computer History exhibits.
Drew McDermott
https://en.wikipedia.org/wiki/Drew_McDermott - Wikipedia page for Drew McDermott
http://cs-www.cs.yale.edu/homes/dvm/ - McDermott's home page.
McDermott's road to Lisp, via the assembly-like list-processing language IPL-V. From the Association of Lisp Users wiki:
"In the 1950s list processing seemed like a radical innovation in an array-oriented world."
(Herbert Simon talks briefly about using IPL-V for the Logic Theorist in his memories of Allen Newell, which I've linked from that section.)
An Interview with Drew McDermott, by Kentaro Toyama, ACM Crossroads, 1996:
"Q: Do you think there are advances in other fields that might propel AI forward?"
"A: When I was in grad school, there was a tendency to believe AI was a paradigm competing with other paradigms. So we would say, we're not going to use Kalman filters [from engineering], we'll use AI. Nowadays, AI simply absorbs those techniques. Those techniques will continue to be of great importance. Anything at all that might be considered a part of a theory of control of an organism would be used by AI."
Donald M. MacKay
https://en.wikipedia.org/wiki/Donald_MacCrimmon_MacKay - Wikipedia page for Donald M. MacKay
An I Behind the Eye: Donald MacKay's Gifford Lectures, by W. R. Thorson. Perspectives on Science and Christian Faith, Volume 44, March 1992.
This essay is a review of Donald MacKay's Behind the Eye, edited by Valerie MacKay, 1991. The book originated with MacKay's 1986 Gifford Lectures in Natural Theology, and relates cognitive science - connectionism included - to Christian belief, spirituality, death, and a hereafter.
Responding to the Word, Peter J. Blackburn, 1999.
Sermon concerning Christianity in a Mechanistic Universe and other essays, edited by Donald M. MacKay, 1965:
"A number of years ago, Donald M. MacKay, then Professor of Communication at the University of Keele, wrote an article in which he was discussing the relationship between faith and science. In the course of the article he describes what he calls 'the fallacy of nothing buttery.' The phrase catches the attention and lodges in the memory - but has nothing to do with the kitchen or dining table. He was trying to highlight the danger of assuming that, because we grasp some of the truth, we therefore know all the truth - in particular, the inference that the spiritual dimension can be ruled out because of our insights into the physical world, the tendency to say that reality is 'nothing but' the physical world."
Donald Michie
https://en.wikipedia.org/wiki/Donald_Michie - Wikipedia page for Donald Michie
http://www.aiai.ed.ac.uk/~dm/ - Michie's home page.
http://www.aiai.ed.ac.uk/events/ccs2002/CCS-early-british-ai-dmichie.pdf
Recollections of early AI in Britain: 1942-1965. Transcript of the video for the BCS Computer Conservation Society's October 2002 Conference on the history of AI in Britain.
Michie's life in AI, from friendships with Turing and Good at Bletchley Park, up to the Edinburgh project on FREDERICK - "Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information and the Collation of Knowledge".
http://www.doc.ic.ac.uk/~shm/MI/Michie.ps
The "Machine Intelligence" series, a note "prepared in response to a suggestion made at the recent York meeting of the new MI Board".
Michie writes about the requirements for maintaining this series, and the boldness needed to start it:
"Identifying and fostering new departures demands a correspondingly radical editorial style. Paradigm-spotting in science is in spirit closer to maritime exploration than it is to the administration of the settled landfalls that follow. Atypically in human affairs boldness is all, both in new sightings and in immediate follow-up. With equal boldness, editorial leadership must select contributors and themes for each new Workshop on one single criterion: are they likely to engender the proliferation of influential novelty? In this spirit the first Machine Intelligence Workshop was convened in 1965. That year had seen a potential revolution in mechanizable aids to reasoning. Thanks to the vigilance of the young Rod Burstall, it was instantly spotted and given prominence in this first volume, namely Robinson's resolution principle."
George A. Miller
https://en.wikipedia.org/wiki/George_Armitage_Miller - Wikipedia page for George A. Miller
The cognitive revolution: a historical perspective, Trends in Cognitive Sciences Volume 7, Number 3, March 2003:
"Cognitive science is a child of the 1950s, the product of a time when psychology, anthropology and linguistics were redefining themselves and computer science and neuroscience as disciplines were coming into existence. Psychology could not participate in the cognitive revolution until it had freed itself from behaviorism, thus restoring cognition to scientific respectability. By then, it was becoming clear in several disciplines that the solution to some of their problems depended crucially on solving problems traditionally allocated to other disciplines. Collaboration was called for: this is a personal account of how it came about."
The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, The Psychological Review, Volume 63, 1956.
Marvin L. Minsky
https://en.wikipedia.org/wiki/Marvin_Minsky - Wikipedia page for Marvin L. Minsky
https://web.media.mit.edu/~minsky/ - Minsky's home page.
AI Founder Blasts Modern Research, by Mark Baard, Wired, 13 May 2003.
"'AI has been brain-dead since the 1970s,' said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy."
The article reports disagreement between Minsky, Brooks, and other researchers over which problems are important and which are mere fads.
It's 2001. Where Is HAL? Transcript of a 23rd May Dr. Dobb's TechNetCast for the Game Developers Conference 2001, previewing Minsky's book The Emotion Machine.
This is a long and detailed, but easy to read, account of AI's history and some of the faults in current research. Worth reading for the views on multiple representations and the need to keep knowledge explicit - and for the views on immortality.
Scientist on the Set: An Interview with Marvin Minsky, by David Stork, in Hal's Legacy: 2001's Computer as Dream and Reality, edited by David Stork, 2001.
https://web.media.mit.edu/~minsky/papers/CausalDiversity.html
Future of AI Technology, Toshiba Review, Volume 47, Number 7, July 1992.
Why People Think Computers Can't, AI Magazine, Fall 1982.
https://web.media.mit.edu/~minsky/papers/steps.html
Steps Toward Artificial Intelligence, 1960.
A detailed survey by one of the Dartmouth Project's founders, fascinating to read for its perspectives on search, learning, and other topics.
www.stanford.edu/group/mmdd/SiliconValley/Levy/Hackers.1984.book/Chapter6.html
Winners and losers, online version of Chapter 6 of David Levy's 1984 book Hackers:
"Gosper wanted to go all the way, have the robot geared to move around and make clever shots, perhaps with the otherworldly spin of a good Gosper volley. But Minsky, who had actually done some of the hardware design for the ball-catching machine, did not think it an interesting problem. He considered it no different from the problem of shooting missiles out of the sky with other missiles, a task that the Defense Department seemed to have under control. Minsky dissuaded Gosper from going ahead on the Ping-Pong project and Gosper would later insist that that robot could have changed history."
Allen Newell
https://en.wikipedia.org/wiki/Allen_Newell - Wikipedia page for Allen Newell
stills.nap.edu/readingroom/books/biomems/anewell.html
Allen Newell. March 19, 1927 - July 19, 1992. By Herbert Simon.
carbon.cudenver.edu/~mryder/itc_data/cogsci.html#newell
Links to writings by and about Newell, from Celebrities in Cognitive Science, by Martin Ryder, University of Colorado at Denver.
Nils J. Nilsson
https://en.wikipedia.org/wiki/Nils_John_Nilsson - Wikipedia page for Nils J. Nilsson
ai.stanford.edu/users/nilsson/bio.html - Nilsson's home page.
http://ai.stanford.edu/users/nilsson/OnlinePubs-Nils/General Essays/OtherEssays-Nils/hlai.pdf
Considerations Regarding Human-Level Artificial Intelligence, 2002:
"AI researchers have several overlapping objectives. Among these are: to build systems that aid humans in intellectual tasks; to build agents that can function autonomously in circumscribed domains; to build a general science of intelligence as manifested in animals, humans, and machines; and to build versatile agents with human-level intelligence or beyond. In these notes, I list what I think are some important considerations for those working toward building humanlevel AI agents."
http://www.ai.sri.com/shakey/
Shakey, at SRI's AI Center.
Nilsson was one of several to work on the classic mobile robot nicknamed "Shakey". This SRI page links to a number of SRI technical reports on Shakey, previously hard to come by.
Peter Norvig
https://en.wikipedia.org/wiki/Peter_Norvig - Wikipedia page for Peter Norvig
http://www.norvig.com - Norvig's home page.
http://www.norvig.com/Lisp-retro.html
A Retrospective on "Paradigms of AI Programming", 1997, 2002.
A look at how Lisp and AI programming have changed since Norvig finished writing his book Paradigms of AI Programming.
Brian Oakley
The Age of Intelligent Machines: Intelligent Knowledge-Based Systems - AI in the U.K., from Ray Kurzweil's book The Age of Intelligent Machines, 1990.
Oakley on AI in the U.K.: paradise lost with the Lighthill report (I wonder whether there's a typo in saying this was commissioned at the end of the 1950s); paradise regained at the start of the 1980s. Oakley was director of the Alvey Committee which coordinated Britain's response to the Fifth Generation project, and explains why the phrase "(intelligent) knowledge-based systems" was used rather than "Artificial Intelligence": in essence, to allay fears about a computer takeover.
Gordon Pask
https://en.wikipedia.org/wiki/Gordon_Pask - Wikipedia page for Gordon Pask
To evolve an ear: epistemological implications of Gordon Pask's electrochemical devices, by Peter Cariani, Systems Research, Volume 10, Number 3, 1993.
Cariani describes how, in the late 1950s, Pask tried to develop self-organising homeostatic systems which would adaptively construct their own sensors by electrochemical deposition of metal fibres. As Warren McCulloch says in the preface to Pask's Approach to Cybernetics:
"With this ability to make or select proper filters on its inputs, such a device explains the central problem of epistemology. The riddles of stimulus equivalence or of local circuit action in the brain remain only as parochial problems."
The following quote is from Cariani's paper, about Stafford Beer:
"Some close friends of Pask's, like Stafford Beer, were attempting to use populations of biological organisms (such as the water flea Daphnia) to compute complex functions. The advantages of biologically-based elements revolve around their ability to self-regulate and self-proliferate; their disadvantages involve the difficulties of steering such elements in directions contrary to their natural homeostatic tendencies. ... Whether biological or inorganic, it was important that the elements could be grown in great numbers so that large scale adaptive networks (analog and/or digital) could potentially be built. This strategy would start with a plastic medium with a rich set of possible structures and let the medium self-organize guided by appropriately structured reward system. The elements could proliferate themselves and the reward constraints could then mold their connections to form a functioning device. At the time there were also people who were contemplating the prospects of having to wire up extremely large computing machines and were looking for cheap, 'self-wiring' analog elements which could be grown to do the job."
David L. Parnas
https://en.wikipedia.org/wiki/David_Parnas - Wikipedia page for David L. Parnas
www.cas.mcmaster.ca/sqrl/parnas.homepg.html - Parnas's home page.
klabs.org/richcontent/software_content/papers/parnas_acm_85.pdf
Software Aspects of Strategic Defense Systems, Communications of the ACM, Volume 28, Number 12, December 1985. Richard Ennals was not the only software researcher to argue against SDI:
"On 28 June 2985, David Lorge Parnas, a respected computer scientist who has consulted extensively on United States defense projects, resigned from the Panel on Computing in Support of Battle Management, convened by the Strategic Defense lnitiative Organization (SDIO). With his letter of resignation, he submitted eight short essays explaining why he believed the software required by the Strategic Defense Initiative would not be trustworthy. Excerpts from Dr. Parnas's letter and the accompanying papers have appeared widely in the press. The Editors of American Scientist believed that it would be useful to the scientific community to publish these essays in their entirety to stimulate scientific discussion of the feasibility of the project. As part of the activity of the Forum on Risks to the Public in the use of computer systems the Editors of Communications are pleased to reprint these essays."
Roger Penrose
https://en.wikipedia.org/wiki/Roger_Penrose - Wikipedia page for Roger Penrose
http://www-formal.stanford.edu/jmc/reviews/penrose1/penrose1.html
Review of The Emperor's New Mind by Roger Penrose. By John McCarthy, 1998:
"Penrose doesn't believe that computers constructed according to presently known physical principles can be intelligent and conjectures that modifying quantum mechanics may be needed to explain intelligence. He also argues against what he calls ``strong AI''. Neither argument makes any reference to the 40 years of research in artificial intelligence (AI) as treated, for example, in Charniak and McDermott (1985). Nevertheless, artificial intelligence is relevant, and we'll begin with that."
psyche.cs.monash.edu.au/v2/psyche-2-06-moravec.html
Roger Penrose's Gravitonic Brains - A Review of "Shadows of the Mind" by Roger Penrose, Hans Moravec, PSYCHE, Volume 2, Issue 6, May 1995.
https://math.ucr.edu/home/baez/penrose.html
A Chat With Penrose, John Baez, 10th June, 1996.
One needs a deep understanding of quantum theory, as well as computing, to do justice to this book. Who better than John Baez?
Raj Reddy
https://en.wikipedia.org/wiki/Raj_Reddy - Wikipedia page for Raj Reddy
http://www.rr.cs.cmu.edu - Reddy's home page.
http://www.rr.cs.cmu.edu/aaai.pdf
Foundations and Grand Challenges of Artificial Intelligence, 1988 AAAI Presidential Address. Published in AI Magazine, Winter 1988.
A detailed, wide-ranging but easy to read article on the history of AI (including commercial applications), and the lessons learnt so far, which we can use as design principles for AI programs. Reddy also talks about the future:
"As the size of investment in AI rises above the noise level, we can no longer expect people to fund us on blind faith. We are entering an era of accountability. Rather than being concerned, I think we should view this as a challenge and lay out our vision for the future."
http://infolab.stanford.edu/pub/voy/museum/jmctree.html#reddy
Tree (incomplete) of Reddy's students, from the Stanford Computer History exhibits.
Howard Rheingold
https://en.wikipedia.org/wiki/Howard_Rheingold - Wikipedia page for Howard Rheingold
http://rheingold.com - Rheingold's home page.
http://www.rheingold.com/texts/tft/13.html
Knowledge engineers and Epistemological Entrepreneurs, Chapter 13 of Rheingold's book Tools for Thought.
Rheingold calls his book an exercise in retrospective futurism: "that is, I wrote it in the early 1980s, attempting to look at what the mid 1990s would be like". The chapter I've linked to is a mid-1980s popular-science view of expert systems and the expert systems boom.
Dennis M. Ritchie
https://en.wikipedia.org/wiki/Dennis_Ritchie - Wikipedia page for Dennis M. Ritchie
https://www.bell-labs.com/usr/dmr/www/ - Ritchie's home page.
https://www.bell-labs.com/usr/dmr/www/hopl.html
Five Little Languages and How They Grew: Talk at HOPL [History of Programming Languages]
We don't all program our AIs in Lisp and Prolog. Ritchie looks at the history of some other languages:
"A paper on the development of C was presented at the second ACM History of Programming Languages conference in Cambridge, Mass. in 1993. It was printed in History of Programming Languages, ed. T. Bergin and R. Gibson ... The paper itself has been available for some time; here I record the transcript of the talk I gave at the time. Unlike the paper, it doesn't talk about C's history, but instead concentrates on its relationships with other contemporary languages that are at heart similar to C but have some characteristic differences."
SAIL
https://en.wikipedia.org/wiki/Stanford_University_centers_and_institutes#Stanford_Artificial_Intelligence_Laboratory - Wikipedia page for SAIL
http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html
TAKE ME, I'M YOURS - The autobiography of SAIL. 1991.
The Stanford AI Lab computer says goodbye to robotics, Foonly, FINGER and SOS.
http://infolab.stanford.edu/pub/voy/museum.html
Computer History Page which exhibits Stanford's role in the history of computing. This is the result of collaboration between Stanford computer scientists and the Computer History Museum.
Robert J. Sawyer
https://en.wikipedia.org/wiki/Robert_J._Sawyer - Wikipedia page for Robert J. Sawyer
https://www.sfwriter.com/index.htm - Sawyer's home page.
https://www.sfwriter.com/precarn.htm
AI and Sci-Fi: My, Oh, My! Keynote address presented 31st May, 2002, at the 12th Annual Canadian Conference on Intelligent Systems, Calgary, Alberta.
Robots and AI in SF, from Čapek to Greg Egan.
Murray Shanahan
https://en.wikipedia.org/wiki/Murray_Shanahan - Wikipedia page for Murray Shanahan
http://www.doc.ic.ac.uk/~mpsha - Shanahan's home page.
http://www.doc.ic.ac.uk/~mpsha/shakey.pdf
Reinventing Shakey, in Logic-Based Artificial Intelligence, edited by Jack Minker, 2000:
"In the late Sixties, when the Shakey project started, the vision of robot design based on logical representation seemed both attractive and attainable. Through the Seventies and early Eighties, however, the desire to build working robots led researchers away from logic to more practical but ad hoc approaches to representation. This movement away from logical representation reached an extreme in the late Eighties and early Nineties when Brooks jettisoned the whole idea of representation, along with the so-called sense-model-plan-act architecture epitomised by Shakey. However, the Shakey style of architecture, having an overtly logic-based deliberative component, seems to offer researchers a direct path to robots with high-level cognitive skills, such as planning, reasoning about other agents, and communication with other agents. Accordingly, a number of researchers have instigated a Shakey revival, and are aiming to achieve robots with these sorts of high-level cognitive skills by using logic as a representational medium."
Anderson Silva
https://linuxgazette.net/issue50/silva2.html
Artificial Intelligence and Linux (2nd Edition), published in Linux Gazette, Issue 50, February 2000.
One student's enthusiastic account of learning AI in the 21st century:
"For the first time in the history of my school, there was going to be offered an Artificial Intelligence (AI) class. I was very excited about this class because you hear a lot about AI, but you don't really see a lot of material for it on magazines and online articles."
Herbert Simon
https://en.wikipedia.org/wiki/Herbert_A._Simon - Wikipedia page for Herbert Simon
https://pages.gseis.ucla.edu/faculty/agre/simon.html
Hierarchy and History in Simon's "Architecture of Complexity", by Philip E. Agre, Journal of the Learning Sciences, Volume 12, 2003:
"Herb Simon came to artificial intelligence from organizational studies in New Deal-era public administration, and only now, it seems, after Simon's sad passing in early 2001, are we in position to place this development in historical context."
www.acm.org/crossroads/dayinlife/bios/herbert_simon.html
A Day in the Life of... Herbert A. Simon, ACM Crossroads:
"What I do to mentor those who work for me: Correct the grammar in the papers they submit; show them how to live patiently in confusion, thinking on it constantly until the answer comes."
www.psy.cmu.edu/psy/faculty/hsimon/hsimon.html
Herbert A. Simon 1916-2001.
His departmental web pages in 2001. There are links to obituaries.
Aaron Sloman
https://en.wikipedia.org/wiki/Aaron_Sloman - Wikipedia page for Aaron Sloman
https://www.cs.bham.ac.uk/~axs/ - Sloman's home page:
"There is no need for a university to imprison young minds in a Microsoft universe: instead we should teach them to fly in many directions, and design new systems for the future." [YES!!! - Ed.]
https://www.cs.bham.ac.uk/research/projects/cogaff/misc/talks/ase03-slides.pdf
Talk 20: When will real robots be as clever as the ones in the movies?
This was originally a presentation at the 2003 Conference of the Association for Science Education held at The University of Birmingham, January 2003.
https://www.cs.bham.ac.uk/research/projects/cogaff/Sloman.eace-interview.html
Patrice Terrier interviews Aaron Sloman for EACE Quarterly, European Association for Cognitive Ergonomics, August 1999:
"Terrier: To what extent is Poplog, the programming language you have developed since 1980, linked to the development of human-like agents?"
"Sloman: Poplog is a very flexible and powerful toolkit for use in research and applications in Artificial Intelligence. ... It supports interactive, incremental, development of software using multiple programming paradigms (e.g. list processing, pattern matching, rule based programming, functional programming, conventional procedural programming, logic programming and object oriented programming). It is supplied with incremental compilers for four languages (Pop-11, Lisp, Prolog and ML) all of them implemented via Pop-11.
It is inherently extendable, and with colleagues in Birmingham I have extended it with the Sim_agent toolkit, which conveniently combines a number of paradigms (including rule-based programming, object-oriented programming, and conventional AI programming) to support the exploration of designs for interacting objects and agents each of which is able to sense others and communicate with others, while running within itself a number of 'concurrent' mechanisms (e.g. perception, motive generation, planning, plan execution, reasoning, emergency detection, etc.). ...
It would be possible to implement something like Sim_agent in a Lisp environment, but I believe that doing it in one of the more popular languages, such as C, C++, or Java would be far more difficult."
There is more on Poplog linked via the above feature and Sloman's home page.
https://www.cs.bham.ac.uk/research/projects/cogaff/crp/
The Computer Revolution in Philosophy: Philosophy, science and models of mind. An online version of this classic book, originally published in 1978. At the start of this version, Sloman notes:
"Some parts of the book are dated whereas others are still relevant both to the scientific study of mind and to philosophical questions about the aims of science, the nature of theories and explanations, varieties of concept formation, and to questions about the nature of mind.
In particular, Chapter 2 analyses the variety of scientific advances ranging from shallow discoveries of new laws and correlations to deep science which extends our ontology, i.e. our understanding of what is possible, rather than just our understanding of what happens when.
Insofar as AI explores designs for possible mental mechanisms, possible mental architectures, and possible minds using those mechanisms and architectures, it is primarily a contribution to deep science, in contrast with most empirical psychology which is shallow science, exploring correlations.
This 'design stance' approach to the study of mind was very different from the 'intentional stance' being developed by Dan Dennett at the same time, expounded in his 1978 book 'Brainstorms', and later partly re-invented by Allen Newell as the study of 'The Knowledge Level' (see his 1990 book 'Unified Theories of Cognition'). Both Dennett and Newell based their methodologies on a presumption of rationality, whereas the design stance considers functionality, which is possible without rationality, as insects and microbes demonstrate well. Functional mechanisms may provide limited rationality, as Herb Simon noted in his 1969 book 'The Sciences of the Artificial'."
Paul Smolensky
https://www.microsoft.com/en-us/research/people/psmo/ - Smolensky's Microsoft page.
"Precise theories of higher cognitive domains like language and reasoning rely crucially on complex symbolic rule systems like those of grammar and logic. According to traditional cognitive science and artificial intelligence, such symbolic systems are the very essence of higher intelligence. Yet intelligence resides in the brain, where computation appears to be numerical, not symbolic; parallel, not serial; quite distributed, not as highly localized as in symbolic systems. Furthermore, when observed carefully, much of human behavior is remarkably sensitive to the detailed statistical properties of experience; hard-edged rule systems seem ill-equipped to handle these subtleties. My research attempts to identify the proper roles within a unified theory of cognition for symbolic computation, numerical neural computation, and statistical computation.
...
More specifically, the basic questions driving this research include: What are the central general principles of computation in connectionist - abstract neural - networks? How can these principles be reconciled with those of symbolic computation? Addressing these questions over the past two decades, my work has led to a new computational architecture for cognition which integrates connectionist and symbolic computation.
...
The connectionist conception of intuitive knowledge as a collection of conflicting soft constraints, interacting via optimization of well-formedness or Harmony, led in joint research with Géraldine Legendre to the connectionist-based formalism of Harmonic Grammar."
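As a rough illustration of "optimization of well-formedness or Harmony", here is a minimal sketch of soft-constraint settling in a tiny network. The weights and the greedy flip rule are my simplifications for the example, not Smolensky and Legendre's actual formalism:

def harmony(w, a):
    # Harmony sums how well each pairwise soft constraint is satisfied.
    n = len(a)
    return sum(w[i][j] * a[i] * a[j] for i in range(n) for j in range(i + 1, n))

def settle(w, a):
    # Flip any +1/-1 unit whose flip raises Harmony, until no flip helps.
    improved = True
    while improved:
        improved = False
        for i in range(len(a)):
            flipped = a[:i] + [-a[i]] + a[i + 1:]
            if harmony(w, flipped) > harmony(w, a):
                a, improved = flipped, True
    return a

# Units 0 and 1 support each other (+1); unit 2 conflicts with both (-1).
w = [[0, 1, -1],
     [0, 0, -1],
     [0, 0, 0]]
print(settle(w, [1, 1, 1]))   # [1, 1, -1]: the conflicting unit turns off

The constraints are soft in that any one of them can be violated if doing so satisfies enough of the others; hard-edged rule systems have no comparable notion of graceful compromise.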
www.cog.jhu.edu/faculty/smolensky/what_i_learned_from_dave_rumelhart-no-photo.pdf
What I learned from Dave Rumelhart - the fundamentals of PDP.
Rough transcript of a contribution to the David Rumelhart Celebration held at Carnegie-Mellon University, October 15-17, 1999. Ten excellent principles. Remember, crap doesn't come until a transfinite ordinal.
Luc Steels
arti.vub.ac.be/~steels/ - Steels's home page.
arti.vub.ac.be/previous_projects/krest/robot/alife.ps
The artificial life roots of artificial intelligence, Artificial Life Journal, Volume 1, Issue 1, 1994.
Gives an overview of the field of behavior-oriented AI. Steels says this is still a reasonable overview, despite its age.
arti.vub.ac.be/steels/space.ps - A self-organizing spatial vocabulary, Artificial Life Journal, Volume 2, Issue 3, 1996.
An experiment whereby agents spontaneously develop a common vocabulary to talk about spatial relations. This is a first application of the lexicon formation process described in arti.vub.ac.be/steels/mi15.ps, The Spontaneous Self-organization of an Adaptive Language, Machine Intelligence 15, edited by Stephen Muggleton, 1996:
"We now focus on how such a vocabulary may emerge spontaneously through a self-organizing process. Self-organization is a common phenomenon in certain types of complex dynamical systems. A complex dynamical system is a system where there are many elements that exhibit a dynamic behavior without a central control source. To support self-organization such a system must exhibit a series of spontaneous fluctuations and a feedback process that enforces a particular fluctuation so that it eventually forms a (dissipative) structure. The feedback process is related to a particular condition in the environment, for example an influx of materials that keeps the system in a non-equilibrium state. As long as the condition is present, the dissipative structure will be maintained. Some standard examples of self-organization are the Bhelouzow-Zhabotinsky reaction, morphogenetic processes, or the formation of a path in an ant society or a termite nest."
arti.vub.ac.be/miscellaneous/brochure/brochure.html
Ten years VUB Artificial Intelligence Laboratory. Author not stated.
The history of the VUB AI Lab from 1983 to 1993, with snapshots taken along each of the Lab's two routes towards AI, the symbolic paradigm and the dynamic paradigm.
www.csl.sony.fr/downloads/papers/2003/manuel-03a.pdf
Creating a Robot Culture: An Interview with Luc Steels, by Tyrus L. Manuel, IEEE Intelligent Systems, May/June 2003:
Manuel: "Your new theories diverge from the common concept that AI breakthroughs must be achieved by building more advanced machines. Your approach runs parallel with how humans learn and develop, so why has much of the AI community met your ideas with resistance and skepticism?"
Steels: "What we need today (and I think most people in AI would agree) is not really more powerful or novel hardware but new ideas. New ideas will always be received with skepticism. If there is no resistance, the idea is simply not revolutionary enough. In the earliest phases of AI, there was a greater openness and much more variety and freedom of thinking than today. Many AI researchers are too focused on short-term applications."
David S. Touretzky
www.cs.cmu.edu/~dst/ - Touretzky's home page.
www.cogsci.rpi.edu/CSJarchive/1988v12/i03/p0423p0466/MAIN.PDF
A Distributed Connectionist Production System, Touretzky and Hinton, Cognitive Science, Volume 12, 1988.
The authors describe a production system which represents explicit rules, but uses a distributed connectionist representation, thus gaining advantages over the standard symbolic implementation. A nice experiment in unifying two of AI's main concerns at the time.
Alan Turing
www.turing.org.uk/philosophy/lausanne1.html
What would Alan Turing have done after 1954? Lecture at the Turing Day, Lausanne, 2002, by Andrew Hodges.
Speculations on Turing's research had he lived:
"The computer scientist John McCarthy would have invited Turing to Dartmouth College in 1956, for what is wrongly thought of as the conference that began Artificial Intelligence. What would Turing have said? Well, I hope he would have been living witness to the fact that Artificial Intelligence had started well before 1956, as Prof. Copeland rightly said in his talk. I like also to think he would have advocated avoiding the separation of 'top-down' from 'bottom-up' research that was in fact to develop so strongly for the next 30 years (as Christof Teuscher brought out so clearly in his talk.) In contrast, Turing in 1948 and again in 1950 described both approaches together, saying that both approaches should be tried."
www.turing.org.uk/philosophy/iwm.html
Alan Turing at the Imperial War Museum and Europride, talk by Andrew Hodges, part of the programme of the lesbian and gay Europride week, August 2003. Also in Gay and Lesbian Humanist, Summer 2004.
www.teuscher.ch/alanturing/index1.php?content=turing_day_home
Teuscher's Turing Day page: Computing science 90 years from the birth of Alan M. Turing, Friday, 28th June, 2002.
Vernor Vinge
www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
Vernor Vinge on the Singularity. The original version of this article was presented at the VISION-21 Symposium, March 1993. A slightly changed version appeared in Whole Earth Review, Winter 1993. Vinge explains the essence of the Singularity thus:
"In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's 'tool' - any more than humans are the tools of rabbits or robins or chimpanzees."
(The quote by Good is from Speculations Concerning the First Ultraintelligent Machine, Advances in Computers, Volume 6, 1965.)
Or, more briefly and more apocalyptically, in Vinge's abstract:
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
David L. Waltz
www1.cs.columbia.edu/~waltz/ - Waltz's home page.
www.cs.washington.edu/homes/lazowska/cra/ai.html
Artificial Intelligence: Realizing the Ultimate Promises of Computing in Computing Research: A National Investment for Leadership in the 21st Century, Computing Research Association, 1997. Reprinted in AI Magazine, Volume 18, Issue 3, Fall 1997.
Some of AI's impressive achievements and a short historical perspective.
www.poplog.org/docs/popdocs/pop11/teach/waltz
TEACH WALTZ by Aaron Sloman, January 1981.
A very clear explanation of Waltz filtering, the famous constraint-satisfaction algorithm for filtering out impossible interpretations of lines in an image. Waltz developed this from line-labelling approaches to image understanding which Huffman and Clowes devised in the 1970s.
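For readers who want the algorithmic core without the TEACH file, here is a minimal sketch of the filtering idea as generic constraint propagation over junction labellings. The two-junction "scene" and the consistency test are invented for illustration; the real algorithm propagates over the Huffman-Clowes junction catalogues.

def waltz_filter(candidates, neighbours, consistent):
    # candidates: {junction: set of candidate labellings}
    # neighbours: {junction: list of adjacent junctions}
    # consistent(a, la, b, lb): can labellings la at a and lb at b share a line?
    changed = True
    while changed:
        changed = False
        for j, labels in candidates.items():
            for label in list(labels):
                # Delete any labelling that some neighbour can no longer agree with.
                if any(not any(consistent(j, label, n, lb) for lb in candidates[n])
                       for n in neighbours[j]):
                    labels.remove(label)
                    changed = True
    return candidates

# Toy scene: two junctions sharing one line, whose label ('+' convex,
# '-' concave) must match at both ends.
candidates = {"A": {"+", "-"}, "B": {"+"}}
neighbours = {"A": ["B"], "B": ["A"]}
consistent = lambda a, la, b, lb: la == lb
print(waltz_filter(candidates, neighbours, consistent))   # {'A': {'+'}, 'B': {'+'}}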
Kevin Warwick
www.kevinwarwick.com/ - Warwick's home page.
www.wired.com/wired/archive/8.02/warwick.html
Cyborg 1.0 - Kevin Warwick outlines his plan to become one with his computer, Wired, Issue 8.02, February 2000:
"I was born human. But this was an accident of fate - a condition merely of time and place. I believe it's something we have the power to change. I will tell you why. ... Since childhood I've been captivated by the study of robots and cyborgs. Now I'm in a position where I can actually become one. Each morning, I wake up champing at the bit, eager to set alight the 21st century - to change society in ways that have never been attempted, to change how we communicate, how we treat ourselves medically, how we convey emotion to one another, to change what it means to be human, and to buy a little more time for ourselves in the inevitable evolutionary process that technology has accelerated."
www.kevinwarwick.com/photogallery.htm
Photos of Professor Warwick, some of the robots he's been involved with, and images from his two cybernetic implant procedures.
Warren Weaver
www.essex.ac.uk/linguistics/clmt/MTbook/HTML/node7.html
A Bit of History in Machine Translation: An Introductory Guide by Doug Arnold, Lorna Balkan, Siety Meijer, R.Lee Humphreys, and Louisa Sadler, Essex University, 1993:
"The actual development of MT [Machine Translation] can be traced to conversations and correspondence between Andrew D. Booth, a British crystallographer, and Warren Weaver of the Rockefeller Foundation in 1947, and more specifically to a memorandum written by Weaver in 1949 to the Rockerfeller Foundation which included the following two sentences.
I have a text in front of me which is written in Russian but I am going to pretend that it is really written in English and that it has been coded in some strange symbols. All I need to do is strip off the code in order to retrieve the information contained in the text.
I was amused to encounter this quote in a Russian text (together with its translation) about the history of machine translation at www.transinter.ru/articles/266.
ourworld.compuserve.com/homepages/WJHutchins/Weaver49.htm
Warren Weaver Memorandum, July 1949, from MT News International, July 1999.
A more detailed account of Weaver's proposals for machine translation. Much more early MT history may be found from the home page for this site, by John Hutchins.
Norbert Wiener
www.nybooks.com/articles/18112
The Tragic Tale of a Genius, by Freeman J. Dyson. Review of Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics by Flo Conway and Jim Siegelman, in The New York Review of Books, Volume 52, Number 12, 14th July, 2005.
Dyson compares this new biography of Wiener with two others. He gives an account of Wiener's antiaircraft control system and other work, of how Wiener's wife may have broken up his friendship with Warren McCulloch and dashed his hopes of unifying cybernetics with biology, and of why cybernetics seems to have disappeared after Wiener's death. The digital overcame the analogue.
pespmc1.vub.ac.be/CYBSHIST.html
History of Cybernetics and Systems Science by J. de Rosnay, from Principia Cybernetica Web. Rosnay follows thirty years of the cybernetic thread at MIT from its start with Norbert Wiener, Warren McCulloch and Jay Forrester.
scholar.lib.vt.edu/ejournals/SPT/v7n3/hong.html
Man and Machine in the 1960s by Sungook Hong, University of Toronto and Seoul National University. In Techné: Research in Philosophy and Technology, Volume 7, Spring 2004.
Hong writes about new conceptions of the relationship between man and machine in the 60s:
"In 1960, the father of cybernetics Norbert Wiener published a short article titled 'Some Moral and Technical Consequences of Automation' in Science. Wiener distinguished here between industrial machines in the time of Samuel Butler (1835-1902, the author of the novel on the dominance of humans by machines, Erehwon) and intelligent machines of his time. Machines circa 1960 had become very effective and even dangerous, Wiener stated, since they possessed 'a certain degree of thinking and communication' and transcended the limitations of their designers. Describing in detail gameplaying and learning machines, he contemplated a hypothetical situation in which such cybernetic machines were programmed to push a button in a 'push-button' nuclear war. Simply by following the programmed rules of the game, Wiener warned, these machines would probably do anything to win a nominal victory even at the cost of human survival."
Frederic Calland Williams
www.alanturing.net/turing_archive/pages/Reference%20Articles/BriefHistofComp.html
A Brief History of Computing, Jack Copeland, 2000:
"The first working AI program, a draughts (checkers) player written by Christopher Strachey, ran on the Ferranti Mark I in the Manchester Computing Machine Laboratory."
This program formed the basis for Samuel's well-known checkers program. The Ferranti Mark I on which it ran used the cathode-ray-tube memory developed by Williams and Tom Kilburn. Copeland's article explains its history, and that of other early computers from Babbage's Difference Engine to the IBM 705.
www.computer50.org/mark1/williams.html
Frederic Calland Williams (1911 - 1977).
Manchester University page on Williams's work.
Terry Winograd
hci.stanford.edu/winograd/ - Winograd's home page.
pcd.stanford.edu/winograd/acm97.html
From Computing Machinery to Interaction Design, in Beyond Calculation: The Next Fifty Years of Computing, edited by Peter Denning and Robert Metcalfe, 1997:
"Today's popular press plays up efforts like those of Pattie Maes and her research group at the MIT Media Laboratory, where they have produced agents to help people browse the web, choose music, and filter email. In fact, a notable indicator of the current trajectory is the ascendancy of MIT's Media Lab, with its explicit focus on media and communication, over the AI Laboratory, which in earlier days was MIT's headline computing organization, one of the world centers of the original AI research.
With hindsight, of course, it is easy to fault early predictions and quixotic enterprises, such as Lenat's attempt to produce common sense in computers by encoding millions of mundane facts in a quasi-logical formalism. But we can sympathize with the optimistic naivete of those whose predictions of future computing abilities were based on projecting the jump that led us from almost nothing to striking demonstrations of artificial intelligence in the first twenty-five years of computing. A straightforward projection of that rate of advance seemed certain to lead, within another few decades, to fully intelligent machines.
But there is something more to be learned here than the general lesson that curves don't always continue going up exponentially (a lesson that the computing field in general has yet to grapple with). The problem with artificial intelligence wasn't that we reached a plateau in our ability to perform millions of LIPS (logical inferences per second), or to invent new algorithms. The problem was in the foundations on which the people in the field conceived of intelligence."
www-db.stanford.edu/pub/voy/museum/winogradtree.html
An (incomplete) tree of Winograd's students, from the Stanford Computer History exhibits.
Patrick Henry Winston
people.csail.mit.edu/phw/index.html - Winston's home page.
people.csail.mit.edu/phw/optimism.html
Why I am Optimistic:
"From the engineering perspective, Artificial Intelligence is a grand success. Programs with roots in Artificial Intelligence research perform feats of mathematical wizardry, act as genetic counselors, schedule gates at airports, and extract useful regularities from otherwise impenetrable piles of data.
From the scientific perspective, however, not so much has been accomplished, and the goal of understanding intelligence, from a computational point of view, remains elusive. Reasoning programs still exhibit little or no common sense. Today's language programs translate simple sentences into database queries, but those language programs are derailed by idioms, metaphors, convoluted syntax, or ungrammatical expressions. Today's vision programs recognize engineered objects, but those vision programs are easily derailed by faces, trees, and mountains.
Why so little progress?"
people.csail.mit.edu/phw/aaai99.ppt
Why We Should Start Over, slides used in Winston's keynote address at the conference of the American Association for Artificial Intelligence, July 1999.
Amongst the slides are the negative lessons we have learned:
    Nobody cares about saving money
    Using cutting-edge technology
    To replace expensive experts
and the positive lessons:
    Everybody cares about
    New revenues
    Saving a mountain of money
    Increasing competitiveness
and the things we must not do:
    Lose our faith
    Waste time arguing
    Squander our capital.
Don Woods
www.icynic.com/~don/
Woods's home page. According to his Wikipedia entry, he may be best known for the Colossal Cave Adventure game, which he stumbled across by accident one day in 1976 at Stanford and went on to expand at the Stanford Artificial Intelligence Lab. He contacted the original author, Will Crowther, by sending an e-mail to crowther@sitename for every sitename then on the Internet...
www.avventuretestuali.com/interviste/woods_eng.html
Interactive Fiction? I prefer Adventure, interview with Don Woods, on L'avventura è l'avventura ("Adventure is adventure" - the site for interactive-fiction enthusiasts: stories to play), 2001.
Don Woods tells of his discovery.
www.rickadams.org/adventure/a_history.html
A history of Adventure, starting with Will Crowther's original Colossal Cave game. It is rumoured, the history concludes, that as a result of Adventure, many college seniors did not graduate that year.
Victor Yngve
humanities.uchicago.edu/depts/linguistics/faculty/yngve.html - Yngve's home page, 1998:
"On graduation in 1953, I went to MIT to become the second person to be employed full time on the problem of machine translation. Important papers of that era introduced the three-step transfer model of machine translation and specified the architecture of the COMIT programming language designed for use by linguists. COMIT was later used by Bell Labs as a basis for their language SNOBOL. With the COMIT language we were able to write computer programs that would produce sentences to order at random following the rules of a grammar. The method of random generation proved very effective in writing complex grammars that were internally consistent and testing them against what informants would accept as grammatical. This work led to the gradual realization that linguistic theory was not advanced enough as a science."
Lotfi Zadeh
www.cs.berkeley.edu/~zadeh/ - Zadeh's home page.
www.azer.com/aiweb/categories/magazine/24_folder/24_articles/24_fuzzylogic.html
Interview with Lotfi Zadeh, Creator of Fuzzy Logic, by Betty Blair, for Azerbaijan International, Volume 2, Issue 4, Winter 1994:
Blair: "Back in 1965 when you published your initial paper on Fuzzy Logic, how did you think it would be accepted?"
Zadeh: "Well, I knew it was going to be important. That much I knew. In fact, I had thought about sealing it in a dated envelope with my predictions and then opening it 20-30 years later to see if my intuitions were right."
Immortality Limited
I read the following in Minsky's It's 2001. Where Is HAL?:
I was once giving some lectures on longevity and immortality. I noticed that people didn't like the idea much, so I actually took a poll of a couple of audiences. I asked how many of you would like to live for 200 years. Almost no one raised their hand. They said because you'd be so crippled and arthritic and amnesiac that it would be no fun. So I changed the question: how would you like to live 200 or 500 years in the same physical condition that you were in at half your age? Guess what, almost nobody raised their hand. But when I tried the same question with a technical audience, scientific people, they all raised their hands. So I did ask both groups. The ordinary people, if you'll pardon the stereotype, generally said that they thought human lifetime was just fine. They'd done most of the things they wanted to do. Maybe they wanted to visit the Buddhist statues in Afghanistan, but they could live without that. And surely another 100 years would be terribly boring.
Can you understand the attitude of the first group? I can't. Nor, like Minsky, do I understand why, as he says, "everyone isn't very excited about this" - he's talking about replicating brain structure on the computer - "and puts money and research into it so that they can live forever".
Nick Bostrom wrote a fable on the same theme, The Fable of the Dragon-Tyrant. He doesn't understand the attitude either.
Baking Logic By the Pound
We believe that if the 'complexity barrier' is to be broken, a major revolution in production and programming techniques is required, the major heresies of which would mean weakening of machine structural specificity in every possible way. We may as well start with the notion that with 10,000,000,000 parts per cubic foot (approximately equal to the number and density of neurons in the human brain), there will be no circuit diagram possible, no parts list (except possibly for the container and the peripheral equipment), not even an exact parts count, and certainly no free and complete access with tools or electrical probes to the "innards" of our machine or for possible later repair... We would manufacture 'logic by the pound', using techniques more like those of a bakery than of an electronics factory.
From Electrochemically active field-trainable pattern recognition systems by R.M. Stewart, in IEEE Transactions on Systems Science and Cybernetics, SSC-5, 1969. Quoted in the paper on Pask's electrochemical ear.