“If we all die,” said Dr. Ben Goertzel, lead scientist of Hong Kong hedge fund management firm Aidyia, “it would keep trading.”
His words marked the launch of a project four years in the making. Goertzel and his team had adapted artificial general intelligence (AGI) programs to make trades on global financial markets.
AGI is a branch of artificial intelligence that aims to mimic the human brain’s capacity for understanding. Goertzel says the algorithms share our ability to recognize mathematical patterns in data and decide what those patterns mean, but without the influence of memory, emotion, bias, or mood.
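Aidyia’s actual models are proprietary and not described here, but as a loose illustration of what “recognizing a mathematical pattern in price data” can mean in the simplest case, here is a toy sketch that fits a straight line to recent prices and turns the slope into a trading signal. The function name, window, and threshold are invented for this example and have nothing to do with Aidyia’s system.

```python
# Toy illustration only: detect a simple linear trend in a price series
# and turn it into a trading signal. This is NOT Aidyia's method; the
# threshold is a made-up value for demonstration.

def trend_signal(prices, threshold=0.5):
    """Fit a least-squares line to the prices; return 'buy', 'sell', or 'hold'."""
    n = len(prices)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(prices) / n
    # Slope of the ordinary least-squares fit: covariance / variance.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope > threshold:
        return "buy"
    if slope < -threshold:
        return "sell"
    return "hold"

print(trend_signal([100, 101, 103, 104, 106]))  # rising trend -> "buy"
print(trend_signal([100, 100.1, 99.9, 100.0]))  # flat -> "hold"
```

Real systems look at far richer patterns than a straight line, of course; the point is only that a rule, once found, fires the same way every time, with no mood or bias attached.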
Aidyia is not the first financial firm to use this kind of technology, but it is among the loudest advocates of using it to replace human influence on the trading floor.
“Algorithmic trading already dominates high frequency trading decisions, leaving humans highly disadvantaged,” said Ken Cooper, the company’s CEO. “Now is the time for longer-term decisions to be disrupted in the same way.”
As of today, there is not enough data to evaluate the CEO’s claim conclusively.
Cooper’s confidence is rooted in the belief that humanity can create machines greater than itself. As an idea, it is as old as our drive to unlock the mystery of life and, by proxy, create immortality. In short, we have been obsessing over this for a long time.
In the age of the Greeks, philosophers mulled over myths about robots built by human hands. The question they kept returning to: would our creations remain loyal servants, or rise to compete with us the moment they became self-aware?
Archaeologists have uncovered automatons built in ancient China and Egypt, so our species-wide desire for an automated future has clearly been ingrained since antiquity.
Flash forward to 1956, when members of the international scientific elite gathered at Dartmouth College to decide on a name for this field of study. The name they settled on was “artificial intelligence.”
With that snappy new title attached, a renaissance was born. The post-war citizenry ate up every story of robots and atomic-age realities that publishers and Hollywood movie studios could churn out. Just like today.
The optimists in the crowd dream of intergalactic travel and lives of leisure provided by robots programmed to do every unpleasant task. They picture shiny butlers made from blinking lights and metal, motherboards and spools of wire. Meanwhile, others predict a future of every horror imaginable, from Dr. Frankenstein’s monster of reanimated flesh through The Matrix’s murderous machines to James Cameron’s indestructible Terminator.
These works of fiction are set in dystopian futures where human creations return to destroy us. In the last of these, we are nearly out of hope as a militarized robotic intelligence manipulates time to finish us off.
As I write this, we are nearly 20 years past the date Skynet was originally “scheduled” to go live, take control, and run amok in the Terminator’s fictional universe. The truth is, our reality is perhaps more vulnerable to a systemic attack than ever before: mass transit, freight delivery, and warfare all rely increasingly on drones.
We buy our groceries, get our news, seek legal advice and health care, and interact with government institutions, all while coming into contact with some form of artificial intelligence. It may not be the rise of the machines yet, but remember: these systems have been hacked before and can be hacked again, with disastrous implications.
Now more than ever, we exist in the algorithms that control what we see, how we live, and how we spend.
The risks haven’t stopped the big-name companies from getting in on the bonanza.
Tech giants Amazon (AMZN), IBM (IBM), Google (GOOG, GOOGL), Apple (AAPL), Microsoft (MSFT), and Facebook (FB) have all placed major bets on the successful future of artificial intelligence.
The trick is to find the right applications for the technology. One of the most attention-grabbing is the driverless car, which Google is pushing forward as the new normal. Critics have argued that driving, like many other skills we have handed off to technology, is a skill we will lose.
In a recent article for The Guardian, Tim Harford examines the crash of Air France Flight 447 as an example of our inability to reassume control after we’ve handed the rudder to an autopilot.
The tragedy, in which 228 passengers and crew died instantly on impact with the Atlantic Ocean, was originally attributed to human error. It later became clear the pilots had been unable to determine whether they or a sophisticated automated flight-control system known as “fly-by-wire” was in control of the plane.
A traditional aeroplane gives the pilot direct control of the flaps on the plane – its rudder, elevators and ailerons. This means the pilot has plenty of latitude to make mistakes. Fly-by-wire is smoother and safer. It inserts itself between the pilot, with all his or her faults, and the plane’s mechanics. A tactful translator between human and machine, it observes the pilot tugging on the controls, figures out how the pilot wanted the plane to move and executes that manoeuvre perfectly. It will turn a clumsy movement into a graceful one…
Earl Wiener, a cult figure in aviation safety, coined what is known as Wiener’s Laws of aviation and human error. One of them was: “Digital devices tune out small errors while creating opportunities for large errors.” We might rephrase it as: “Automation will routinely tidy up ordinary messes, but occasionally create an extraordinary mess.” It is an insight that applies far beyond aviation.
Chances are, our frustrations will peak when our robotic counterparts misunderstand our commands. A machine does exactly what it thinks it has been told to do; if it misunderstands the request, it will not do what we need. Apple’s Siri often mishears my requests, and don’t get me started on having to repeat myself to directory assistance and other “automated” systems.
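As a playful illustration of that literal-mindedness (the commands and canned replies below are invented, and real assistants like Siri are vastly more sophisticated), here is a toy interpreter that responds only to phrasings it has been explicitly told about:

```python
# Toy illustration: a literal-minded command interpreter. It handles exactly
# the phrases it has been given and nothing else; any other wording, even
# with the same intent, falls through to a failure response.

KNOWN_COMMANDS = {
    "call mom": "Calling Mom...",
    "set alarm": "Alarm set for 7:00 AM.",
}

def assistant(request):
    """Return a response only for an exact, known command (case-insensitive)."""
    normalized = request.strip().lower()
    return KNOWN_COMMANDS.get(normalized, "Sorry, I didn't understand that.")

print(assistant("Call Mom"))        # exact match -> "Calling Mom..."
print(assistant("ring my mother"))  # same intent, different words -> fails
```

The human intent behind “ring my mother” and “call mom” is identical, but a system that matches instructions rather than meaning can only satisfy one of them.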
Now imagine this same confusion applied to the financial destiny of you and every other human being drawing breath.