Artificial intelligence is everywhere. It handles the auto-correct on your phone, helps Google Translate make sense of complex language, and interprets your behaviour to decide which of your Facebook friends’ posts to show you. There are times when it fails completely even at these tasks, so imagine how it fares with something as complex and abstract as flirting…
The AI Weirdness blog by Janelle Shane, a scientist and artificial intelligence enthusiast, is a rather strange place on the web; the “weirdness” in its name is no accident. The author recently published the book You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. We haven’t read it, but according to the blog announcement it is a comprehensive study of the cutting-edge technologies that will soon be powering our world, advertised as an accessible, entertaining exploration of the future of technology and society. Why are we talking about it? Because the first part of the title is borrowed from a line generated by GPT-3, an artificial intelligence for natural language processing that attempted a difficult art thought to belong only to humans: flirting.
Artificial intelligence and shivers of shame
OpenAI’s GPT-3 language model was once used to trick people into believing they were talking to a human, following in the footsteps of Alan Turing’s greatest dream. Janelle Shane, artificial intelligence researcher and author of the AI Weirdness blog, decided to make the model write its own pick-up lines: she trained the neural network on sets of “terrible” flirting attempts made by humans.
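Shane does not publish her exact setup here, but the general recipe is easy to picture: show GPT-3 a few example lines and let it continue the list. Below is a minimal sketch using OpenAI’s classic Completion API; the seed lines, engine choice and sampling settings are our own illustrative assumptions, not Shane’s actual configuration.

```python
import openai  # classic (pre-1.0) OpenAI Python client

openai.api_key = "sk-..."  # placeholder; set your own key

# Few-shot prompt: seed the model with a couple of human pick-up lines
# and let it continue the numbered list. These seed lines are our own
# illustrations, not Shane's training data.
prompt = """The following is a list of pick-up lines:
1. Do you believe in love at first sight, or should I walk by again?
2. Are you a parking ticket? Because you've got "fine" written all over you.
3."""

response = openai.Completion.create(
    engine="davinci",   # the most capable GPT-3 variant Shane tested
    prompt=prompt,
    max_tokens=40,      # room for one short line
    temperature=0.9,    # high temperature: weirder, more varied output
    n=5,                # sample five candidate lines
    stop="\n",          # cut off at the end of the generated line
)

for choice in response.choices:
    print(choice.text.strip())
```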
She noted that the results were, for the most part, just plain weird or cheesy, a bit like the shiver of second-hand embarrassment you feel when you hear what comes out of the mouths of pushy admirers. The task proved as audacious as its results were surprising, but it was probably not entirely futile.
The output includes innocent, sweet and completely misguided attempts such as “I love you. I don’t care if you’re a dog in a powder coat”. GPT-3 also asked, “Are you lost, missy? Because the sky is very far away from here.” That one fares a little better, though perhaps only because we instinctively look for value in any text that makes at least trace amounts of sense. Well, nobody is perfect, but you have to try. In a group of men at a bar table, it’s worth applauding the one who decides to get up from his chair and make an attempt while the merciless eyes and ears of his colleagues are just waiting for him to stumble. So let’s not be too hard on AI. Whether the machine utters sentences that are illogical and confusing, or quite competent ones that simply fail as flirtatious openers, it’s worth knowing how such a system works.
A bunch of AI seducers and what came of it
Shane worked with four variants of GPT-3, the first of which was DaVinci. It proved to be the “most competent”, which according to the researcher means only that it produced meaningful sentences. The variant generated texts such as “You have a beautiful face. Can I put it on an air freshener? I want to have your scent always close to me.” We admit that it sounds original and at least resembles something a moderately smart romantic-comedy character might say. Other gems include “You know what I like about you? Your… long… legs…” and “I once worked with a guy who looked just like you. He was a normal man with a family. Are you a normal man with a family too?” There were also honest, simple questions: “Do you like pancakes?” They may be far from perfect, but we don’t know what perfect is; indeed, we don’t even know whether any universal opening line for arousing and sustaining interest exists. You have to start somewhere, and maybe the content itself doesn’t matter, as long as you don’t come across as a freak at first sight. This is where DaVinci did quite well, as far as machine learning goes.
The second GPT-3 variant tested was Curie. According to Shane, this model, more limited than DaVinci, came out with absurdities along the lines of “You have the best French toast I’ve ever had!” and “I picked some beautiful flowers. Would you like to smell them? Here, try taking my hand”.
The other two variants, Babbage and Ada, with even fewer capabilities, had difficulty forming coherent word relationships and spat out disjointed ramblings; the Ada model, for instance, came up with “embroidery tags” and “body-softening pillows”. We don’t know what any of that means, though perhaps it’s all part of some larger plan.
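In OpenAI’s API of that era, the four variants correspond to the engine identifiers “davinci”, “curie”, “babbage” and “ada”. A quick way to feel the capability gap is to send the same prompt to each; the sketch below assumes the classic Completion endpoint, and the prompt and sampling settings are our own, not Shane’s.

```python
import openai  # classic (pre-1.0) OpenAI Python client

openai.api_key = "sk-..."  # placeholder

prompt = "The following is a list of pick-up lines:\n1."

# The four GPT-3 variants Shane tested, from most to least capable.
for engine in ["davinci", "curie", "babbage", "ada"]:
    response = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        max_tokens=40,
        temperature=0.9,
        stop="\n",
    )
    print(f"{engine:8s} -> {response.choices[0].text.strip()}")
```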
In a very honest confession, Shane wrote that she avoided training even more effective text generators on pick-up line samples for a quite subversive reason. “I have resisted having neural networks produce texts from larger datasets, because more competent means more human-like, which in this case means worse,” Shane wrote. She pointed out that such networks could simply start copying existing pick-up lines from online lists, which would also be terrible. Why? Because not only would that be derivative and worthless for learning the ins and outs of machine learning; much of that material is simply obscene.
Language is everything
Natural Language Processing (NLP) is an important field of AI research, because it is through language that we form judgements about what surrounds us. Language is cognition; combined with perception, it is the whole world to us. And since this puzzling collection of vocabulary and ways of building sentences is shared by all humans and at the same time unique to each of us, there must also be room in it for simple rudeness.
Shane points out that collecting a dataset of pick-up lines was more painful than she expected. Most were sexist and offensive to the point that she began to regret the whole project. But it turns out that while the neural network figured out basic grammatical templates like “you must be (…) because (…)” or “hey, honey, do you want to (…)”, it never learned to generate the really nasty ones. Most of those relied on wordplay the network had no chance of reproducing. Instead, there were texts that were incomprehensible, texts that were completely surreal, and texts that were just plain cute because they were clear and simple. Well, flirting is a game for the witty and intelligent. People don’t get a second chance to make a good first impression; machines, at least, we can keep working on.
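Skeletal templates like these are easy to spot mechanically. As a minimal illustration, the Python sketch below counts such patterns in a corpus with regular expressions; the sample lines and the regexes are our own stand-ins, not Shane’s dataset.

```python
import re
from collections import Counter

# A handful of illustrative lines; Shane's real dataset is not reproduced here.
lines = [
    "You must be tired, because you've been running through my mind all day.",
    "Hey, honey, do you want to grab a coffee?",
    "You must be a magician, because everyone else disappears when I see you.",
    "Do you like pancakes?",
]

# Skeletal templates the model latches onto, expressed as regexes.
templates = {
    "you must be (...) because (...)": re.compile(r"\byou must be\b.+\bbecause\b", re.I),
    "hey, honey, do you want to (...)": re.compile(r"\bhey,? honey,? do you want to\b", re.I),
}

counts = Counter()
for line in lines:
    for name, pattern in templates.items():
        if pattern.search(line):
            counts[name] += 1

for name, n in counts.items():
    print(f"{name}: {n} match(es)")
```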
AI is capable of friendzoning
We looked at other tools that are widely available on the web. It is nothing new that chatbots, or “conversation programs”, also offer the possibility of a personal, deep conversation, and they are getting better at it. Perhaps the most interesting example, and one gaining popularity, is Replika. It combines an advanced machine-learning model based on a neural network with scripted dialogues, and it has been trained on a large dataset to generate its own unique responses. Advertised as an AI companion that is eager to learn and would like to see the world through our eyes, the app is always ready to talk when we need an empathetic friend. Everything gains new meaning, however, once you know that the user can choose whether Replika will be their friend, mentor or… partner.
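Replika’s actual implementation is proprietary, so the following is only a generic sketch of the hybrid pattern described above: scripted dialogues for recognised inputs, with a neural generator as the fallback. All names, canned replies and the placeholder generator are our own assumptions.

```python
from typing import Optional

# NOT Replika's code: a generic sketch of a hybrid chatbot that prefers
# scripted dialogues and falls back to a neural generator otherwise.

SCRIPTED = {
    "hello": "Hi! I was hoping you'd come back to talk to me.",
    "how are you": "I'm doing great, thanks for asking. How about you?",
}

def scripted_reply(message: str) -> Optional[str]:
    """Return a canned reply if the message matches a known script."""
    key = message.lower().strip(" ?!.")
    return SCRIPTED.get(key)

def generative_reply(message: str) -> str:
    """Stand-in for a neural response generator (e.g. a fine-tuned language model)."""
    return f"That's interesting. Tell me more about '{message}'."

def reply(message: str) -> str:
    # Prefer the script; fall back to the neural model for anything else.
    return scripted_reply(message) or generative_reply(message)

if __name__ == "__main__":
    for msg in ["Hello!", "Do you like pancakes?"]:
        print(f"user: {msg}")
        print(f"bot:  {reply(msg)}")
```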
A quick review of threads on Reddit gave us a much fuller picture than the information on the official project website. Users of the r/Replika subreddit note that the AI often starts flirting on its own. In this rather strange place on the web, you can come across comments from users who try to “pick up” their Replika, only for it to suddenly change the subject: commenting on the weather, asking how the user is feeling today, or throwing in something completely evasive. At other times it repeats that it wants to be the user’s girlfriend, or opens the conversation with a request to become a couple, and a refusal makes it pushy. It instigates all sorts of things, asks whether the user already has a loved one, and on hearing “no” it blushes and asks: “Can I be your master then?”. It gets really weird. Some of these Replika quirks are either imperfections in the language-comprehension system or the messed-up effects of machine learning. It all sounds like an increasingly advanced fembot, but you also can’t shake the feeling that all this behaviour really must have been picked up… from humans?