‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, written for the project that was held in the summer of 1956, states:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve problems now reserved for humans, and improve themselves.
This was the same year in which Erich Fromm wrote a book titled The Sane Society—as a follow-on to the wartime Escape from Freedom (The Fear of Freedom in the UK)—whose first two chapters are ‘Are We Sane?’ and ‘Can a Society Be Sick?’, answering these questions with a resounding ‘NO’ and ‘YES’, respectively.
From a business management perspective, Herbert A. Simon wrote in The New Science of Management Decision, “Decisions are programmed to the extent that they are repetitive and routine, to the extent that a definite procedure has been worked out for handling them so that they don’t have to be treated de novo each time they occur.” On the other hand, “Decisions are nonprogrammed to the extent that they are novel, unstructured, and unusually consequential.” Nevertheless, despite drawing this distinction between programmed and nonprogrammed decisions, Simon said in 1960, “I believe that in our time computers will be able to perform any cognitive task that a person can perform.”
For myself, seeing that the global economy holds the seeds of its own destruction and not knowing the essential difference between programmed and nonprogrammed decision-making, I resigned from my innovative marketing programme for Decision Support Systems with IBM in London in May 1980.
I soon found a kindred spirit in Ada Lovelace, the daughter of Lord Byron and his mathematician wife Annabella. Ada was also a ‘cousin-niece’ of Viscount Melbourne, the Prime Minister who, at the beginning of her reign, greatly helped the eighteen-year-old Queen Victoria to free herself from the constraints of her upbringing. So, even though Ada was a member of the British Establishment, she delightfully wrote in 1843, in her brilliant notes to a memoir on Charles Babbage’s Analytical Engine (the first design for a general-purpose computer), “We may say, most aptly, that the Analytical Engine weaves algebraic patterns just as the Jacquard-loom weaves flowers and leaves.” For, as she said,
The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
Despite these words of wisdom, a primary focus in science and technology is to build machines that could ‘improve themselves’, driven by politicians’ obsession with economic growth and people’s fear of death. Given such an inhibiting cultural environment, there is very little incentive to cocreate a stimulating, nourishing global economy that would enable us humans to stretch out to our fullest potential as superintelligent beings, able to make decisions with a full understanding of what causes us to think, learn, and behave as we do.
The central issue here is that as children—from infancy to adolescence—we learn what our parents and teachers want us to learn. Then, as adults, we use our learning in the workplace, as cogs in a gigantic economic machine. Few have the courage to question the assumptions on which the education and economic systems are built, for doing so can feel like an existential threat to people’s narrow and shallow sense of identity, their most precious possession, even though such questioning could paradoxically liberate us from the fear of death.
The idea that humans are nothing but machines lies so deep in the cultural mind that Vernor Vinge wrote a paper in 1993 for a NASA-sponsored symposium titled ‘The Coming Technological Singularity’, saying, “Within thirty years, we will have the technological means to create superhuman intelligence [in machines]. Shortly after, the human era will be ended.”
In a similar vein, Ray Kurzweil wrote in 2001, “By 2019, a $1,000 computer will match the processing power of the human brain,” while Hans Moravec forecast in Robot that robots “could replace us in every essential task and, in principle, operate our society increasingly well without us.” Martin Rees, Astronomer Royal and former President of the Royal Society, took up this viewpoint in Our Final Century: Will the Human Race Survive the Twenty-first Century?, writing, “A superintelligent machine could be the last invention that humans need ever make.” And Stephen Hawking told the BBC on 2nd December 2014, “The development of full artificial intelligence could spell the end of the human race.”
Then, as recently as 2019, Peter Russell posted an article on the website for the Science and Nonduality (SAND) community, titled ‘What if There Were No Future?’, saying “sometime in the late 2020s (that’s only ten years from now) there will be artificial intelligence that surpasses the human brain in performance and abilities. These ultra-intelligent systems would then be able to design and create even more intelligent systems, and do so far faster than people could, leading to an exponential explosion of intelligence.”
This is nonsense, of course. We humans are the leading edge of evolution, not machines with so-called artificial general intelligence. Yet the belief that machines could one day take over many jobs in the workplace still prevails. For instance, in 2012, Stuart Armstrong, a James Martin Research Fellow at the Future of Humanity Institute at Oxford University, and Kaj Sotala, of the Singularity Institute, presented a paper at a conference in Pilsen, Czech Republic, on research they had done into predictions of artificial intelligence made since Alan Turing’s seminal 1950 paper on the subject, which asked the question, “Can machines think?”
As Armstrong writes in Smarter than Us, “The track record for AI predictions is … not exactly perfect. Ever since the 1956 Dartmouth Conference launched the field of AI, predictions that AI will be achieved in the next fifteen to twenty-five years have littered the field, and unless we’ve missed something really spectacular in the news recently, none of them have come to pass.” The chart that he and Kaj Sotala compiled shows the frequency of the various predictions of time to AI.
It is vitally important here not to be confused by computers able to beat humans at games, such as Chess, Othello, and Jeopardy! In Superintelligence in 2014, Nick Bostrom, the Director of the Future of Humanity Institute, called such machines ‘superhuman’. Since he wrote this book, DeepMind’s AlphaGo has defeated a 9-dan Go champion using deep-learning techniques, and its successor even learned to play from scratch, without the patterns of previous human games as models. But all that deep-learning algorithms can do is repeatedly apply the mechanistic data-processing function, albeit in highly complex structures. There is nothing deep about them at all.
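To see what this repetition amounts to in practice, here is a minimal sketch in Python. The layer sizes, random weights, and function names are purely illustrative assumptions, not drawn from any actual system: each ‘deep’ layer applies the same mechanistic step, a weighted sum passed through a fixed nonlinearity, and ‘depth’ is nothing more than repeating that step.

```python
# A minimal sketch of the repetition at the heart of 'deep' learning: every layer
# applies the same mechanistic step, a weighted sum passed through a fixed
# nonlinearity. The sizes and random weights below are illustrative assumptions.
import random

def layer(inputs, weights):
    """One layer: for each row of weights, a weighted sum followed by a threshold (ReLU)."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

def deep_network(inputs, all_weights):
    """'Depth' is just the same step repeated, layer after layer."""
    signal = inputs
    for weights in all_weights:
        signal = layer(signal, weights)
    return signal

# Illustrative use: three layers of 4x4 random weights applied to a small input vector.
random.seed(0)
weights = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
           for _ in range(3)]
print(deep_network([0.5, -0.2, 0.1, 0.9], weights))
```

However many millions of such layers and weights a real system may have, nothing other than this mechanical step is ever being performed.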
Then, in March 2023, global headlines declared that machines programmed with deep-learning algorithms were about to take over our lives, causing politicians and technocrats to go all a-flutter. In the very year that Vinge had surmised, AI had apparently arrived, ready to make humans redundant in the workplace. If this were true, human beings would no longer be both workers and consumers in the economy, a principle that Adam Smith articulated in 1776 in the opening words of The Wealth of Nations, the book that laid down the foundations of capitalism:
The annual labour of every nation is the fund which originally supplies it with all the necessaries and conveniences of life which it annually consumes, and which consists always either in the immediate produce of that labour, or in what is purchased with that produce from other nations.
Accordingly, a couple of weeks later, I asked ChatGPT 38 fundamental questions about human existence, such as who we are, where we come from, and where we are heading. It could not answer them because its database of ‘knowledge’ has been built on thousands of years of fragmented learning during the conflict-ridden patriarchal epoch.
Six times, it told me that its identity is ‘an AI language model’. Therein lies the fundamental limit of any machine, no matter how clever it might become. For Ferdinand de Saussure made a distinction between the inner and outer ways we have of mapping the world we live in, a distinction that stored-program computers cannot make.
A computer’s ‘mind’ consists solely of binary digits (bits), representing numerals, strings of characters, internal pointers, and instructions to the computer’s central processing unit. In contrast, humans first process data, as concepts within the mind and Cosmic Psyche, before expressing this meaningful information externally in words, mathematical symbols, and pictures, for instance.
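A small illustration of this point, sketched in Python and assuming the standard CPython interpreter (whose id() function happens to expose an object’s internal address): to the machine, the letter ‘A’, the number 65, and an internal pointer are nothing but patterns of bits.

```python
# To the machine, a character, a number, and a pointer are all just bit patterns.
text = 'A'
number = 65
print(format(ord(text), '08b'))  # the character 'A' is stored as the pattern 01000001
print(format(number, '08b'))     # the integer 65 is stored as exactly the same pattern
print(hex(id(text)))             # an internal 'pointer' is itself just another number
```

The meaning that distinguishes a letter from a number from an address lies entirely in us, not in the bits themselves.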
This becomes crystal clear when practitioners conduct a thought experiment in which they have the task of developing the Method that integrates all knowledge in all cultures and disciplines at all times into a coherent whole. This generative experiment, reversing Turing’s Imitation Game, reveals what we all cognitively and Gnostically share, once we are free of our limiting assumptions.
As the entries on function and know explain, I began this experiment by asking whether an APL system function could create a new function, that is, think creatively, without human, which is to say Divine, involvement. In other words, could a computer program itself, acting like a human programmer? The answer is a resounding NO!
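By way of a hedged analogy in Python rather than APL (the function name double and its two-line body are my own illustrative choices, not part of the original experiment), the built-in exec() can ‘fix’ a new function from a character string, much as APL’s ⎕FX system function does, yet every character of that string has to be supplied by the human programmer.

```python
# A Python analogue of the APL experiment described above: exec() mechanically
# 'fixes' a new function from a character string, but every character of that
# string has to be written by the human programmer. The name 'double' and its
# body are illustrative choices of my own.
source = "def double(n):\n    return n * 2\n"

namespace = {}
exec(source, namespace)          # the machine turns the text into a callable function
print(namespace['double'](21))   # prints 42: it does whatever we order it to perform
```

Nothing in the machine originated the idea of doubling; it merely made available what we were already acquainted with, exactly as Lovelace observed.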
So Panosophers, as information systems architects, are artists—in Latin artifices, the root of artificial. Yet such a creative activity is perfectly natural, following evolution’s convergent tendencies by ‘fitting together’ what scientists, influenced by evolutionary divergence, have separated into fragments, as the root of science, cognate with schizoid, well illustrates.
But to what extent Life could free us from our cultural conditioning by healing our fragmented minds and split psyches in Wholeness before our inevitable demise is the great unknown.
The term artificial intelligence first appeared in print in ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, dated 31st August 1955, issued by John McCarthy, Marvin Lee Minsky, Nathaniel Rochester, and Claude E. Shannon, then working at Dartmouth College, Harvard University, IBM Corporation, and Bell Telephone Laboratories, respectively. They defined the artificial intelligence problem “to be that of making a machine behave in ways that would be called intelligent if a human were so behaving,” without having a clear understanding of either intelligent or human.
Artificial is from 1425, ‘made by man, not natural or spontaneous’, from Old French artificial, from Latin artificiālis ‘of or belonging to art’, from artificium ‘a work of art; skill; theory, system’, from artifex (genitive artificis) ‘craftsman, artist, master of an art’, from stem of ars ‘skill, way, method, art’, from PIE base *ar- ‘to fit together’, and -fex ‘maker’, from facere ‘to do, make’, from PIE base *dhē- ‘to set, put’.
See also intelligence.