AI solutions from Sci-fi that are already here


Part 3

One of the key AI-related themes in sci-fi is the creation of a malevolent, god-like entity that surpasses human intelligence and turns against its creator. The motif derives directly from Frankenstein and the myth of the Golem – and here they are.

One would expect to see SkyNet here, and it would be a justified expectation – SkyNet, known from the Terminator series, is an iconic example of rogue AI. But this series focuses on technology that already exists in some form, and luckily, no willingly malevolent AI is among it.

But the fear is here, and it is called the Frankenstein Complex.

The Frankenstein Complex – Robot series (Isaac Asimov)

While Mary Shelley explores the motif of an artificial being brought to life by science gone mad, the being’s intelligence, morals, and motives matter less than the madness of its creator. Yet the gothic horror classic delivered a framework for Isaac Asimov’s Robot series.

The Robot Laws

While harnessing the aesthetic of androids, the series deals with human attitudes toward thinking machines. As the stories unfold, that attitude is largely negative, with Asimov’s famed Laws imposed to ensure humanity’s safety. These Laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The development of ML-based solutions comes with an increasing need for a legal framework that ensures the security of users, guarantees the fairness of the solutions themselves, and assigns responsibility for their results. The rules above were guidelines enough for fiction, but the real world needs something more.
That’s why the EU is currently working on draft AI regulations that aim to limit the ability of AI-based solutions to profile users and thus block the possibility of surveillance. The regulation is comparable to the GDPR framework, which brought multiple new responsibilities for data-processing companies.

The Tech Resistance

The whole robot-related series penned by Asimov explores the consequences of various relations between robots, society, and robot manufacturers. The main theme revolves around humanity’s suspicion of androids, the PR struggles of manufacturers, and the actual behavior of the robots – sometimes ending with the termination of a machine (as in “Little Lost Robot” or “Robot Dreams”).

This fear of autonomous, AI-controlled entities is already visible in reality. The New York Times reports attacks on Waymo autonomous vehicles: citizens of the cities where the cars are being tested have thrown rocks at them and slashed their tires.

“They said they need real-world examples, but I don’t want to be their real-world mistake,” said one of the citizens quoted by The New York Times.

Evidently, uncertainty about AI-based solutions is not rooted solely in fear and a lack of knowledge. Even when not designed to cause harm, ML-based solutions need to be verified and validated carefully so they don’t produce unexpected and undesired results. One of the best-known examples of AI going wrong was an Amazon-built tool meant to screen resumes. Trained on a database of the software engineers actually working for Amazon, the model concluded that women are not good coders, simply because of the underrepresentation of women among those engineers. The absurdity of this would be obvious to any human, yet a machine that lacks both common sense and human experience assumed it to be correct based on the available data.

The AI was scrapped in much the same way non-compliant robots were terminated in Asimov’s short stories.
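The mechanism behind such failures is simple enough to sketch. The following toy example is purely illustrative – it is not Amazon’s actual system, and all the resume strings and the scoring rule are invented – but it shows how a naive, data-driven scorer trained on a skewed set of past hires ends up penalizing tokens that are irrelevant to skill:

```python
from collections import Counter

# Hypothetical training data: past hires skew toward one profile.
past_hires = [
    "python java backend",
    "python c++ systems",
    "java algorithms backend",
    "python algorithms c++",
]
rejected = [
    "python java women's-chess-club",
    "java backend women's-chess-club",
]

hire_counts = Counter(w for r in past_hires for w in r.split())
reject_counts = Counter(w for r in rejected for w in r.split())

def score(resume: str) -> int:
    # Higher = more resembles past hires. The model has no notion of
    # which tokens are fair to use; it only mirrors the data it saw.
    return sum(hire_counts[w] - reject_counts[w] for w in resume.split())

print(score("python java backend"))             # → 3
print(score("python java women's-chess-club"))  # → 0, penalized for a
                                                # token unrelated to skill
```

The second resume lists the same technical skills as the first, yet scores lower only because a gender-coded token never appeared among past hires – the same pattern, at toy scale, that sank the real system.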

The Overhuman AI – Golem XIV

While the fear of AI harming people through a glitch or malfunction is one issue, the fear of creating an AI that will eventually surpass human abilities is another challenge entirely.

While human–machine conflict is a relatively common motif, the concept was explored in an interesting manner by Stanisław Lem in his work Golem XIV.

The titular Golem XIV began as a military-funded supercomputer intended to serve as a weapon – a superhuman tactician and strategist to be deployed against the enemies of the Pentagon. Armed with self-development algorithms, the computer soon reached a superhuman level and lost interest in humanity.

The lack of a hostile attitude toward humanity makes for a significantly different story than the Terminator series, where fighting a rogue AI is the main plot. Golem XIV not only developed far beyond human capabilities but also willingly paused its development to interact with human beings for their mutual benefit. The computer can thus be seen as friendly, if in a disinterested manner.

What makes this somewhat present in today’s world is the existence of AI solutions that vastly outperform humans at certain tasks. When IBM’s Deep Blue beat Garry Kasparov in 1997, it was considered a breakthrough for computer science. Not long after, chess software reached a level of sophistication entirely unattainable for human players. How unattainable, to be precise?

Chess players use the Elo rating system, which is calculated from a player’s results against rated opponents. The key assumption is that two players with the same Elo rating should win and lose against each other an equal number of times. A player rated 1200 or below is considered a novice; mastery begins at around 2000 points. Chess legends usually score above 2700, with chess prodigy Bobby Fischer reaching 2785, Garry Kasparov 2851, and world champion Magnus Carlsen peaking at 2882.

A neural network-based engine called Leela Chess Zero (Lc0), trained with reinforcement learning, is estimated to have reached an Elo rating of up to 3490. The concept was originally forged by DeepMind as AlphaZero and was later recreated by the community to make the engine available to players around the world willing to polish their skills.
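The gap between 3490 and Carlsen’s peak 2882 can be made concrete with the standard Elo expected-score formula, E = 1 / (1 + 10^((R_opponent − R_player) / 400)). A minimal Python sketch, using the ratings quoted above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected fraction of points player A scores against player B
    under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Two equally rated players split the points evenly.
print(expected_score(2000, 2000))  # → 0.5

# Lc0 (est. 3490) vs. Carlsen's peak rating (2882):
print(round(expected_score(3490, 2882), 2))  # → 0.97
```

In other words, a 608-point gap means the engine is expected to take roughly 97% of the points – a level of dominance no human pairing has ever shown.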

AlphaGo is yet another example of an AI that is invincible at a board game – Go. Due to its sophistication and complex gameplay, the game had long been considered as great a challenge for computers as chess once was. AlphaGo defeated the Go master Lee Sedol in 2016. The human player declared AI invincible and decided to retire.

“Even if I become the number one, there is an entity that cannot be defeated,” he said.

AlphaGo serves here as an Honest Annie – a superhuman computer that has reached transcendence, described in one of Golem XIV’s lectures. It is not malevolent, harmful, or aggressive. But it is there, and it is immortal: assuming the program is not deleted or otherwise lost, it will be there forever, while human masters age and die.

The number of fields where AI outperforms humans is growing. AI systems are predicted to become better drivers than humans, and expert systems are reaching outstanding accuracy in detecting cancer. So there already are, and will be more, tiny Golems that dwarf human experts in their respective fields.

The crucial difference between the entities described by Stanisław Lem in Golem XIV and modern superhuman AI solutions is that Lem’s machines represented a general artificial intelligence, while their existing counterparts are extremely narrow. A skilled human chess or Go player would have little trouble switching to checkers or Scrabble. But Deep Blue, Leela, and AlphaGo would be useless – these models were not designed to work in any other field.

Summary – Clarke’s Laws and Horse Manure

Arthur C. Clarke came up with three laws, of which the most quoted one states: 

“Any sufficiently advanced technology is indistinguishable from magic.”

This law applies perfectly to how our world would appear to people from 200 years ago, with our automated domestic helpers, machines we can talk to and get sensible responses from, and “carts with no horses or coachmen.”

Yet the problem with science fiction is that it is deeply rooted in the world of its day. The same was true of an 1894 article in The Times, in which the newspaper predicted that “in 50 years, every London street will be buried under nine feet of horse manure.” With over 11,000 hansom cabs and several horse-drawn buses, the problem appeared to be the impending doom of urbanized life.

But urbanization held strong, and the problem solved itself in a way that appears simply obvious today – because the prediction was rooted entirely in the journalist’s own present.

From this point of view, Verne’s declaration that “that is a mere coincidence, and is doubtless owing to the fact that even when inventing scientific phenomena I always try and make everything seem as true and simple as possible,” comes as an ideal summary.
