Should I Worry About... the philosophy behind AI?
Gary Parkinson

What's the problem?

In 1950, pioneering computer genius Alan Turing proposed that a machine could be said to possess artificial intelligence if it could fool a human into thinking it was human by mimicking human responses under specific conditions. But what if human responses are greed, hatred and ruthlessly self-serving dominance over others?

At the heart of that question is a fear of technology, one which science fiction writer Isaac Asimov had already attempted to pacify with his 1942 Three Laws of Robotics, the first of which states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

The question is, do we - the public - trust that artificial intelligence will not be used by authorities or corporations for their own benefit rather than ours?

As writer and broadcaster Paul Mason tells CGTN, "The basic philosophical problem posed by artificial intelligence is this: On whose behalf are we developing this stuff? Who does it serve? And what kind of society does it assume it's going to create?"  

A survey of thousands of Britons in February 2020 illustrates the divisions: 57 percent said they feared AI would cost more jobs than it created, but 28 percent said they would trust it to care for an elderly relative and 16 percent said they would even trust it to make a legal ruling.

In the same month, a report by the UK government's Committee on Standards in Public Life urged greater transparency around AI's potential use in the public sector in order to earn the public's trust.

Luciano Floridi, director of the Digital Ethics Lab at the University of Oxford's Internet Institute, tells CGTN that "AI poses open problems and some of these problems are ethical: trust, transparency, privacy." He compares the novelty to walking into a darkened room: "You may fear bumping into the furniture. Maybe there are monsters there. Maybe AI is going to be robo-killing people. My advice is: turn on the light."

Mason contends that in order to develop AI, we first need to decide what humanity really is: "This needs a theory of human beings. In other words, it needs a moral philosophy. This is something that religions are very good at understanding, but when it comes to businesses, the idea of moral philosophies would frighten them."

This telebot, demonstrated in 2014, has two high definition web cameras in its eyes to generate a live, three-dimensional view. A group of students at FIU's Discovery Lab was testing to see if it could help disabled police and military personnel serve as patrol officers. Wilfredo Lee/AP Photo

What's the worst that could happen?

At its extreme, a nightmarish mash-up of the worst future dystopias the human imagination has created. Think Orwellian authoritarianism and Kafkaesque confusion, and add a little of WALL-E's vision of humanity as an underused, overweight evolutionary dead-end.

"In a digital world, privacy is freedom," says Mason. "If I can protect my identity and control who understands the link between my identity and my behavior, I can protect my ability to act free of oppression, free of surveillance, free of algorithmic control by states or corporations."

That fear of control goes beyond AI into surveillance and Big Data, but AI adds another layer by automating the response to the data collected. What if your iPhone counts your steps and informs the government that you're too active to claim sickness benefits? 

Mason acknowledges that there is a balance to be struck. "I want to interoperate with millions of other human beings who are using the healthcare system or the transport system of my city," he says. "I want to be able to share my data – where I'm traveling, what my health problems are – if that makes things better for the rest of humanity.

"But I want to be able to do it on my terms. And therefore, the default has to be that none of my data is automatically owned by the state or a large corporation. And that's where our fundamental rights to privacy have to be maintained."

Floridi adds another potential problem: "Autonomy. We are creating these marvelous agents. They're doing things for us, instead of us, better than us. Are we getting lazy? Are we going to be relying on these technologies more than we should?"

George Orwell's novel 1984 predicted an authoritarian future in which the state controlled all information. Russell Contreras/AP Photo

What do the experts say?

Despite the warning notes sounded above, Mason thinks AI can be a great force for good – if harnessed correctly. "I believe that human beings are technologists, imagineers, collaborators, linguists by their nature, and in that nature lies the possibility of freeing ourselves from all the bad old conditions that people lived in, in previous generations. The whole dominance of the belief that one cannot escape one's surroundings is, for me, what has to be blown away."

Mason notes that 17th-century political philosopher Thomas Hobbes compared the state to an artificial man in his book Leviathan, just as capitalism was entering its stride. "We created market economies 200 years ago and then we handed control to the market," he says.

"By creating autonomous systems that sometimes oppress us, we've prepared ourselves for the ultimate oppression in the 21st century: oppression by the intelligent machine. I want us to understand we created all these things. All we need to do is prevent them from overpowering us."

Like Mason, Floridi thinks these ideals can be "baked in" to the core of AI – but only to an extent. "When we design and develop and deploy AI, we start immediately thinking about designing it in an ethical manner," he says. "We can put things in place in terms of design, of security, safety, so that we do not hurt ourselves.

"But that's a low bar. You can still do a lot of wrong things. Yes, we can do a bit about ethical design, but we should not be overenthusiastic and have too many illusions about how many problems that can solve."

For Mason, "AI is both a threat and a possibility. The possibility is that through automation, robotics and artificial intelligence and machine learning, we can radically transform life and reduce the amount of work we need to do to sustain ourselves on the planet: in fact, I hope 'Work less and save the planet' will be the slogan of the 21st century."

Floridi insists that "Technology is a force for good. We may misuse it, we may underuse it, we may overuse it, but those are mistakes. We can do a lot of good things if we use it properly."

In essence, this boils down to the classic response to those who fear technology: it will follow the instructions fed into it by humans. By that rationale, Mason sees AI as the chance to recalibrate humanity for the greater good. 

"My fear is that in the process of creating big artificial systems, we have already handed controls to states, to business elites and the rich, and that the last gasp of our control is our ability to control the machine itself," he says. But that makes the threat a promise: "We take back control by subjecting states, corporations and indeed machines to human control, rights and norms."
