SUPER AI

Humans risk being overrun by artificial superintelligence in 30 years

A MACHINE with human-level intelligence could be built in the next 30 years and could represent a threat to life on Earth, some experts believe.

AI researchers and technology executives like Elon Musk have spoken openly about the risk of human extinction caused by machines.

The next stage of artificial intelligence could be the last humans develop

The tiers of artificial intelligence

Smart computers make smarter computers

The Law of Accelerating Returns is a concept popularized by futurist Ray Kurzweil which holds that the rate of technological improvement is not steady but accelerating.

As technology gets more advanced, society and industry are better equipped to improve technology faster and more drastically.

"With more powerful computers and related technology, we have the tools and the knowledge to design yet more powerful computers, and to do so more quickly," Kurzweil wrote in his famous 2001 .


This is visible when checked against the past: the first numbered United States patent was issued in 1836, and the millionth was issued 75 years later, in 1911.

The US had two million patents by 1936; it took just 25 years to match the production, ingenuity and creativity of the previous 75.

At today's pace, one million patents are issued every three years, and it's only getting faster.
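
To make the arithmetic concrete, here is a minimal Python sketch of the patent milestones quoted above; the shrinking gap between each millionth patent is exactly the accelerating pattern Kurzweil describes.

```python
# A minimal sketch of the patent arithmetic above, using the milestone
# years quoted in this article (the numbered series began in 1836).
milestones = [(0, 1836), (1_000_000, 1911), (2_000_000, 1936)]

# Compare each milestone with the previous one to see the interval shrink.
for (p0, y0), (p1, y1) in zip(milestones, milestones[1:]):
    print(f"Patents {p0:,} -> {p1:,}: {y1 - y0} years")

# Output:
#   Patents 0 -> 1,000,000: 75 years
#   Patents 1,000,000 -> 2,000,000: 25 years
# Per the article, today's pace is roughly one million every three years.
```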

Apply this improvement principle to the artificial intelligence revolution and you can see why scientists think AI could become very capable, and possibly threatening, in our lifetimes.

Present and future threats

According to Dr Lewis Liu, CEO of an AI-driven company, some artificial intelligence has already "gone dark".

"Even the 'dumb, non-conscious' models we have today may have ethical issues around inclusion," Dr Liu told The US Sun. "That kind of stuff is already happening today."

Research from Johns Hopkins University shows that artificial intelligence algorithms tend to exhibit biases that could unfairly target people of color and women.

The American Civil Liberties Union also warns that AI could deepen existing discrimination as more selective processes like hiring and housing become automated.

"General AI or AI Superintelligence is just going to be a much broader, larger propagation of these problems," Dr Liu said.

Experts do not rule out an all-out, Terminator-style war of man versus machine, either.

A poll cited in philosopher Nick Bostrom's book Superintelligence found that almost 10% of experts believe a computer with human-level intelligence would be a life-threatening crisis for humanity.

One of the misconceptions about AI is that it's confined to a black box that can simply be unplugged if it intends to hurt us.

"It's much more likely that AGI is going to emerge in the Web itself and not from some human constructed box just because of the requirement of complexity," Dr Liu said. "And if that is the case, then you can't unplug the Web."

Relatedly, several military programs are intertwined with AI, and "killer robots" capable of taking a life without human input have already been developed.

Some experts believe threat assessments should already account for sentient AI, because we can't know for sure when it will come online or how it will react to humans.

Preventing Judgement Day

Dr Liu sadly conceded "it's going to be a pretty s***y world" if we achieve artificial superintelligence with today's lax style of technology regulation.

He advises developing oversight in which the data that powers AI models is scoured for bias, along the lines of the sketch below.
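
What might that scouring look like in practice? Here is a minimal, hypothetical Python sketch of one common audit: comparing outcome rates across groups in a training dataset before it ever reaches a model. The column names ("group", "hired") and the toy data are assumptions for illustration, not from any real system Dr Liu described.

```python
# A minimal sketch of a pre-training bias audit: compare positive-outcome
# rates across a protected attribute in the raw data. Column names and
# data are hypothetical, chosen only to illustrate the check.
from collections import defaultdict

def outcome_rates(rows, group_key="group", outcome_key="hired"):
    """Return the positive-outcome rate for each group in the data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

# Toy historical hiring data: a skewed record like this, fed to a model
# unexamined, is exactly how past bias gets automated.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
print(outcome_rates(data))  # {'A': 0.67, 'B': 0.33} -> a gap worth flagging
```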


If the data training a model is sourced from the public, then programmers should have to gain users' consent to use it.

Regulation in the US stops short of requiring "a human check on the outputs", but it has begun to highlight keeping artificial intelligence under human control.
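
As a rough illustration of what such a check could mean, here is a minimal Python sketch in which low-confidence or high-stakes model outputs are routed to a person instead of being applied automatically. The threshold, function names and examples are assumptions for illustration, not drawn from any regulation or real deployment.

```python
# A minimal sketch of "a human check on the outputs": auto-apply only
# confident, low-stakes model decisions and escalate everything else.
REVIEW_THRESHOLD = 0.90  # illustrative cut-off, not a regulatory number
review_queue = []

def apply_decision(prediction: str, confidence: float, high_stakes: bool) -> str:
    """Gate a model output: escalate it to a human unless it is both
    low-stakes and above the confidence threshold."""
    if high_stakes or confidence < REVIEW_THRESHOLD:
        review_queue.append((prediction, confidence))
        return "escalated to a human reviewer"
    return f"auto-applied: {prediction}"

print(apply_decision("approve loan", 0.95, high_stakes=True))   # escalated
print(apply_decision("flag as spam", 0.99, high_stakes=False))  # auto-applied
```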
