Why the latest artificial intelligence technology scares us: Google co-founder Sergey Brin’s take on the AI future partly explains it, but it’s movies about AI that got us here, and mind farts keep distorting the meaning of AI
Last updated on June 6th, 2017 at 01:10 pm
Recently, I became less critical of my own artificial intelligence know-how when revered tech figure, Google co-founder Sergey Brin, admitted that AI had sneaked up on him.
At Davos 2017, Brin confessed to having been wrong in dismissing artificial intelligence as a flash in the pan.
“I didn’t pay attention to it at all, to be perfectly honest,” he offered. “Having been trained as a computer scientist in the 90s, everybody knew that AI didn’t work. People tried it, they tried neural nets and none of it worked. [Now] the revolution in deep nets has been very profound. It definitely surprised me, even though I was sitting right there.”
While it’s already unsettling to process such a pronouncement from one of the few we can count on to be in permanent beast mode on tech matters, Sergey did not let it end there. Not only did the Google co-founder label himself a Luddite, he also confirmed our fears of the future in an AI-driven 21st century:
“What can these things do? We don’t really know the limits… It has incredible possibilities. I think it’s impossible to forecast.”
What is AI? A Cake Mixture Analogy
To better appreciate this fear of an AI future, consider this analogy: take Sergey Brin’s thoughts on artificial intelligence as a premixed cake mixture. Pour this cake mixture into a bowl [rationality]. Then whisk it up using an electric hand mixer as per the recipe you desire [your imagination]. What do you end up with?
Keeping with the cake analogy, at this point all one needs is some sort of controlled heat to bake a perfect cake. But in our attempts to understand a 21st-century AI future, the uncertainty expressed by Sergey [cake mixture] + limited rationality [bowl] + fickle imagination [electric hand mixer] = frisson and trepidation.
Why this fear? Why does artificial intelligence scare us?
Ever since stone age man discovered the flint hand axe, the relationship between man and machine has been a curious coexistence. On one hand humanity strives to better harness the utility, alleviation of human suffering and human advancement that comes with machines. On the other hand, man entertains a morbid fear of the dawn of an Armageddon where machines will take over the world and violate his autonomy.
With the advent of artificial intelligence in the 21st century, the fears in this queasy marriage morph into a firm reality. With every passing day, new evidence of machine learning fills public space. That machines have learned to think is now common knowledge.
As a consequence of machine learning, machines have become as intelligent as, or more intelligent than, man. Exactly how we got here is lost in the details. All we know is that it happened too fast and has caught us flat-footed.
Like the proverbial ostrich, burying our heads in the sand is an attractive proposition, as ignorance is indeed bliss. But we choose otherwise. First we acknowledge that there is a problem. Then we set out to define that problem as the first step towards an AI future.
The Artificial Intelligence Problem
First, allow us to christen the challenge ahead of us “the artificial intelligence problem”. A good place to start in defining this problem is to take a nuanced look at Google co-founder Sergey Brin’s Davos 2017 pronouncements on artificial intelligence.
Doing so, we find ourselves standing on the shores of a phenomenon in human advancement. Before us is a marvel that holds power unbounded, as vast as the sea. It is a frontier we can barely appreciate, yet early signs point towards humanity attaining a powerful tool to cure it of its sickness, hunger, and injustice.
Having appreciated the anatomy of the problem, we seek to qualify our definition of the AI problem this way: artificial intelligence will be at the root of all ethical questions of the 21st century. It is a ‘problem’ that will only grow in complexity, as AI is a product of the essence of man’s existence, i.e. man’s ceaseless pursuit of happiness.
The meaning of AI as the Pursuit of Happiness
Here, we define happiness in the manner of the Greek philosopher Aristotle: as the supreme good. This means that for man, who unlike animals is a rational creature, happiness cannot be as simple a thing as giving a dog a bone.
Sure, being fed or having a plaything can make one happy. But having food whilst imprisoned, for instance, is not ideal. What this means is that the pursuit of pleasure, wealth or honor cannot constitute happiness. While each of these has some value, none of them can occupy the place of the chief good for which humanity should aim.
These things are not worthy aims, as the primary end in their pursuit is to make one happy. Why not just aim for happiness rather than a seemingly endless list of vain things?
We might pursue money to access things that make us happy. But as you well know, money can’t buy happiness. So instead, man might be tempted to pursue pleasure. But Maslow’s hierarchy of needs implores us to desist from such a strategy. Either way, the pursuit of any goal other than happiness leaves us at the mercy of the Ouroboros cycle.
Therefore, to find happiness we have to define that which is uniquely human and find ways of perfecting it. To get there, we search for that one quality in man that appeals to his essence. By essence, we mean that if we were to strip man of non-unique qualities like locomotion, what would remain?
What remains is rationality. Therefore, happiness would mean living a life that enables us to use and develop our reason. The whole spectrum of realities that arises from machine learning, namely augmented intelligence and artificial intelligence, works to do exactly that.
How We Got Here: The Role of AI Movies
Whenever man is faced with a new phenomenon, he relies on his brain for answers. Though this seems a natural thing to do, behavioral science informs us that it is not always the wise thing. Sounds confusing? An oxymoron, maybe? To settle why we shouldn’t always trust our neurons, we start with the blame first.
We trace the blame to Metropolis, a 1927 movie whose plot is set in 2026. The storyline is of industrial oligarchs who reside in skyscrapers and lord over the city of Metropolis.
The power of these men stems from the sweat of workers toiling in slave-like conditions: a precariat who dwell in underground dungeons, slaving away at never-ending, repetitive work. Sounds familiar? See my life.
Well, contextualizing Metropolis today calls for Mike Savage’s book: Social Class in the 21st Century.
To the precariat in Metropolis, we add the established middle class, technical middle class, new affluent workers, traditional working class and emergent service workers. We do this to mirror the disenfranchisement of that era in today’s realities.
While Metropolis was acclaimed for its artistic merits, today it has attained cult status as a pioneer of sci-fi / artificial intelligence films.
But for all its acclaim, it is our view that Metropolis, even if in its time it was an artistic illustration of the tension between science, technology and society, unfavorably set the tone for a decades-long rift.
Moreover, Metropolis makes it evident how the Armageddon narrative of man’s end via machines got going. The bandwagon effect from the success of Metropolis is responsible for our negative perceptions of artificial intelligence. Perceptions distorted by the artistic liberties of Hollywood.
While it is tempting to discount the impact of Hollywood on your perceptions of life, these things matter. Here is why.
How the Limits of Human Reason Make AI an Attractive Proposition: Behavioral Science Helps Reveal the Blind Spots Created by Brain Farts
You see, behavioral studies inform us that the mind often tricks us. It tricks us when there is too much information to process. Mind farts occur when the brain can’t find enough meaning in stuff. These cognitive biases also occur when the pressure to act fast is on.
Our brains also trick us because they process memory in disparate ways. For instance, at times our brains store memories based on how we experienced them. Not what they are.
Other times, while creating memories, we discard specifics to form generalities, kind of favoring an inductive approach over a deductive one. Conversely, in some instances, we reduce events and lists to their key elements.
The product of such a repertoire of brain ‘tricks’ is up to 180 cognitive biases.
I find the ambiguity effect and the Semmelweis reflex to be among the cognitive biases with the most significant play on our perceptions of applied AI. The ambiguity effect is the tendency to avoid options for which missing information makes the probability seem “unknown”. The Semmelweis reflex is the tendency to reject new evidence that contradicts an established paradigm.
A cursory look at these cognitive biases implores us to appreciate our rational limits in making sense of AI [remember the cake mixture analogy].
However, even as we practice reflexive thinking by accounting for the different cognitive biases, we could still be none the wiser.
The Linda Problem Reveals That Even The Most Intelligent Humans are no match for AI applications
“Linda is 31 years old, single, outspoken and very intelligent. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.”
Which is more probable:
Linda is a bank teller
Linda is a bank teller and is active in the feminist movement
What’s your answer?
In studies conducted in the 1970s by professors Kahneman and Tversky, most subjects chose option 2. In doing so, they fell prey to what behavioral scientists have characterized as the conjunction fallacy, or the Linda problem.
This is the belief that the co-occurrence of two events is more likely than the occurrence of one of the events alone. In Linda’s case, whilst bank tellers can be feminists, not all bank tellers are feminists.
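The arithmetic behind the fallacy can be sketched in a few lines of Python. The percentages below are hypothetical, assumed purely for illustration; the point is that the product rule guarantees the conjunction is never more probable than either event on its own.

```python
# The conjunction rule: for any two events A and B, P(A and B) <= P(A).
# The numbers below are hypothetical, chosen purely for illustration.

p_teller = 0.05                  # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.40   # assumed: P(feminist | bank teller)

# Probability of the conjunction, via the product rule
p_teller_and_feminist = p_teller * p_feminist_given_teller

# Whatever numbers we plug in, the conjunction can never exceed
# the probability of either single event on its own:
assert p_teller_and_feminist <= p_teller
```

Subjects who pick option 2 are, in effect, ranking the smaller product above one of its own factors.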
However, there is something even more astonishing about the Linda problem experiments: people with higher IQs were more likely to fall prey to the conjunction fallacy than those of lower intelligence.
This misstep served to illustrate that intelligence doesn’t equate to rationality. Instead, rationality is a function of reflexive thinking: the ability to actively step back from one’s thinking, then audit and correct one’s fallacies.
Leading artificial intelligence theorist Eliezer Yudkowsky argues that rationality can be learned. If rationality can be learned by machines, the morality of artificial intelligence applications becomes a concern for the AI community.
What this translates to is that, ultimately, the morality of cyber-physical systems, AI robotics and artificial intelligence software is linked to the morality of those who design and build them. This is a reality echoed in the seminal Asilomar AI ethical principles.
What’s most striking in the Linda problem experiment has been the realization that in spite of man being wired to be rational, we often act in irrational ways.
Increasing Evidence of Artificial Intelligence Applications in Society Today
Putting all this together reveals artificial intelligence’s femme fatale nature: a character that makes it attractive yet scary, even to a figure as revered as Sergey Brin.
Not only have machines become intelligent, it is increasingly evident that humans can’t beat applied artificial intelligence in a straight contest. Today, thanks to complex artificial intelligence algorithms, machines can also learn to be more rational than we can ever be.
The latest artificial intelligence technology has prompted observers to proclaim the 21st century the age of the fourth industrial revolution: a revolution driven by big data, the ubiquity of processing power, stored energy and artificial intelligence that will cause system-wide changes. What this means is that machines and machine-man hybrids will be able to do work more efficiently and effectively than ever.
Think driver-less cars/trucks and the transport system. One of the more exciting areas of investment in ai development is medical diagnosis and medical monitoring.
Further afield, AI advances promise better use of distributed ledger technology: blockchain, bitcoin and smart contracts in the financial and legal systems.
However, the impact of recent developments in artificial intelligence on employment and in disrupting the demand and supply side of business are areas of grave concern.