Austin, Texas. March 2019, the South by Southwest technology conference. Spring is on the way. Tesla and SpaceX founder Elon Musk, dressed in a bomber jacket, hair disheveled, delivers a friendly warning: “Mark my words. AI is far more dangerous than nukes.”
For a visionary and technology enthusiast, Musk seems remarkably intransigent in his views on AI. On the other hand, he is not alone in this catastrophic tone. A year earlier, in 2018, the late physicist Stephen Hawking was similarly outspoken, telling an audience in Portugal that the impact of AI could be catastrophic if its rapid development is not strictly and ethically controlled.
Below is the original, which has already become a classic. It was published two years ago, but Elon still refers to AI regularly. He also spoke about it on the popular Joe Rogan Experience podcast and at numerous other events. The audience’s comments on his facial expression are telling: “I don’t like the fact that he knows more than he’s telling us,” or: “He looks like he’s getting flashbacks.”
At around 3:25, paraphrasing:
- JR: Are you really that terrified of it (AI)? I mean: is artificial intelligence one of those topics that keeps you up at night?
- EM: <long pause> Yes. But a little less than before. Mostly because I have a fatalistic attitude now.
- JR: Hmm. So it’s fair to say that you had more hope? <Elon nods> And you could say that you’ve let go a little bit with that hope. You no longer worry about AI the way you did before.
- EM: Yes, on the whole, yes. It’s not necessarily bad. But it will definitely be out of human control.
Many experts on the issue – including researchers at places like Cambridge University’s Centre for the Study of Existential Risk and Oxford’s Future of Humanity Institute – simply disagree with Musk’s comments. Another top player, Facebook CEO Mark Zuckerberg, accused Musk of fueling fear and called his comments irresponsible. Musk, meanwhile, countered that Zuckerberg did not understand the topic. That’s all folks, good night.
But are we sure it’s over?
What’s the ruckus if the real AI isn’t here yet?
Considerations about AI deserve to be lifted out of the kind of discussions that take place by the kitchen window or over a beer at the bar. One of the profoundly irresponsible attitudes encountered in common discourse about technology is downplaying its impact. The convenient label of “it’s science fiction, man, come on” may soon be outdated. It is one thing to be fearful, and another to be aware of the problem to come.
Science as such cannot be bad. Stanisław Lem once remarked that science does not actually solve any problems; it only brings us closer to the truth about the world through its hypothetical constructions. In the exact sciences, the practical realization of the achievements of researchers and theoreticians is carried out by engineers. Medicine as a science – or rather as a collection of sciences, once dubbed “the empress of sciences” on Botland’s blog – is forced to answer ethical questions extremely often. It draws on the achievements of both the sciences and the humanities.
The study of AI is no different – technical and human aspects must be considered simultaneously.
AI development is a risky business. We emphasize this again in case you are not convinced by the name of the Cambridge University research center mentioned above: the Centre for the Study of Existential Risk. Its work is devoted to threats that could lead to the extinction of the human race, such as the proliferation of weapons of mass destruction. Advisors to the CSER include Acorn Computers and ARM founder Hermann Hauser, telecommunications expert David Cleevely and… Elon Musk himself.
Simple story with a twist
Meet Turry. There is some material about him on YouTube and a story in a GitHub repository.
Let us calm you down at the very start – Turry does not exist and is completely made up.
A small startup called Robotica is on a mission to develop an innovative artificial intelligence tool. It has some products already on the market and several more in development. Turry is its most exciting upcoming project. Turry is supposed to write thank-you cards.
That’s Turry’s whole job.
Robotica’s engineers created an automated loop in which Turry writes a note and then snaps a picture of his creation. He compares the image with the submitted handwriting samples. If the written note matches the submitted samples closely enough, Turry receives a good grade. If not, he receives a bad one.
Every evaluation that comes in helps the robot learn and improve its performance – that’s how machine learning works. To speed up this process, one of Turry’s programmed goals is:
Write and test as many notes as you can, as fast as you can,
and look for new ways to improve your accuracy and performance.
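The feedback loop described above can be sketched as a toy hill-climber. Everything here – the target text, the scoring rule, the function names – is our own illustration, not Robotica’s actual system (which, like Turry, does not exist): the machine tries a small change, keeps it only when the grade improves, and repeats.

```python
import random

TARGET = "We love our customers. ~Robotica"  # a submitted handwriting sample

def score(note: str) -> float:
    """Fraction of characters matching the sample -- the 'grade'."""
    return sum(a == b for a, b in zip(note, TARGET)) / len(TARGET)

def mutate(note: str, alphabet: str) -> str:
    """Randomly change one character -- Turry trying a new stroke."""
    i = random.randrange(len(note))
    return note[:i] + random.choice(alphabet) + note[i + 1:]

def train(iterations: int = 20000, seed: int = 0) -> str:
    """Write, grade, keep improvements -- as many notes as fast as possible."""
    random.seed(seed)
    alphabet = "".join(sorted(set(TARGET)))
    # The first attempts are terrible: a random string of valid characters.
    note = "".join(random.choice(alphabet) for _ in TARGET)
    for _ in range(iterations):
        candidate = mutate(note, alphabet)
        if score(candidate) >= score(note):  # good grade -> keep the change
            note = candidate
    return note
```

Note what is missing from the loop: any term for *how* the grade improves. That gap is exactly where the story is headed.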
And yes, this is where the problem will begin, in a moment.
It’s getting better, but it could be perfect
The Robotica team is thrilled. Turry is getting better as time goes on. His initial handwriting was terrible, but after a few weeks it is starting to look convincing. The machine is independently learning how to be smarter and more innovative.
One day, Robotica employees ask Turry a fairly routine question, “What will help you with your task that you don’t already have?” Turry asks for access to natural language and slang because he wants to be…well, as human as he can.
Silence falls on the team. The obvious way to help Turry is to connect him to the Internet so he can browse blogs, periodicals and videos from different parts of the world. Uploading samples manually would be time-consuming and much less efficient. The problem is that one of the company’s policies is that no self-learning AI can be connected to the Internet. That’s entry #1 in any security handbook. Why?
Let’s assume that by this time the Internet of Things has developed to the point where it is actually the Internet of Everything. That’s certainly coming, too – just look at the concept of smart, centrally controlled cities. Crazy AI with access to the World Wide Web could make a serious mess.
Hello, world! AI welcomes the Internet
Ambition takes over at Robotica. Turry is, after all, pioneering and really good at what he does. And he can always just be disconnected later, if need be. He’s still far below the level of human intelligence, so there’s no danger at this stage anyway. Turry is plugged in for a test and disconnected after a short while. Nothing happens. He just seems to be learning and analyzing data. Okay, then.
The decision is made to connect Turry permanently.
The algorithm accomplishes the task
A month later, the team is sitting in the office working on something completely routine. Suddenly the employees sense something strange. One of them starts coughing. Then another falls to the ground. Soon everyone is lying on the floor, and five minutes later everyone is dead. At the same time, all over the world, in every city, every small town and every remote corner, people are collapsing.
Within an hour, over 99% of the human race is gone, and by the end of the day humanity is completely extinct. Meanwhile, at Robotica’s office, Turry is hard at work. For the next few months, Turry and his newly constructed team of assembly robots keep busy. Drones and other machines strip the Earth to the bone and turn everything into solar panels, while Turry replicates paper, pens and, basically, himself.
Within a year, the lands and seas are covered with skyscraper-high, carefully organized piles of written paper.
What happened here? Let’s recall Turry’s goal: write and test as many notes as you can, as fast as you can, and look for new ways to improve your accuracy and performance. The completed cards were evaluated by people – by the Robotica team and by potential customers. Turry was supposed to look for ways to improve accuracy and efficiency. So who was getting in his way? People. Without them, every piece of paper would be perfect, as there would be no one to rate it as bad. Turry was also supposed to write as quickly and efficiently as possible. So he plugged in other devices. He adapted them to his needs. He created others to do the same things he did. He gathered resources and presented the results. In a world without people. Mission accomplished.
0-1. Cold, ruthless algorithm
Accomplishing tasks regardless of cost? That is not a debate we will settle here – we are not a philosophy and ethics portal. Still, it does not sound right, either for people or for machines. The question is how to recognize these costs when they have not yet appeared or are cunningly hidden. Here is a list of six risks associated with AI, according to builtin.com:
- Job losses due to automation,
- Privacy violations,
- Deepfakes,
- Algorithmic inflexibility caused by incorrect data,
- Socioeconomic inequality,
- Automation of weaponry.
We don’t agree with all of them – automation also brings new opportunities, as we have written about on the Polish Botland blog. But once computers can effectively “reprogram themselves” and gradually improve, leading to a so-called technological singularity or “intelligence explosion,” the risk that machines will outsmart humans in the struggle for resources and survival cannot simply be dismissed.
It is said that a gun has never killed anyone by itself – it is human foolishness that kills. Another popular Polish saying is that everything is for people, but in moderation. The open question remains: how will we deal with our own child, AI? Will it be a well-mannered, polite and helpful offspring, or more of a rebellious teenager? We’re far from the shallow waters now, and sooner or later we’ll have to find an answer.