3 times AI has given us a sentience scare as Snapchat's chatbot goes rogue

Artificial intelligence, mankind’s latest Icarus moment, is the hot new thing taking the world by storm. But some are concerned that it could be about to take the world by force, too.

Those aren't just the thoughts of tin-foil-hat-wearing crackpots like me, either. In fact, earlier this year, over 350 industry leaders came together to sign an open letter highlighting the existential risks AI poses to society. Among the signatories: OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Microsoft CTO Kevin Scott, and Bill Gates.

Did they agree to cease development of such harmful tools? Of course not. Instead, their signatures amount to stamped lip service, offering plausible deniability for when the inevitable happens. These tech figureheads remain as driven as ever in their quest to create the most sophisticated, powerful, and dominant AI to date – happy to see the tendrils of their efforts slip ever deeper into your day-to-day life through everyday apps and services.

Three times AI has given us a sentience scare 

One primary fear of artificial intelligence is that we accidentally spawn genuine intelligence into the world – one we can't control or comprehend. The idea of a self-aware cyber sidekick might seem like the stuff of dreams to some, but to others it signals the beginning of the end for man's precarious position atop the food chain.

But have we already flown too close to the sun with AI? We're less than a year into our en masse exposure to the technology, and there have already been a few worrying indications that our chatbots' hallucinations carry the haunting hallmarks of a ghost in the machine.

1. Snapchat's chatbot goes rogue

Image: Snapchat My AI chatbot screenshot (Image credit: Laptop Mag / Rael Hornby)

Just last week, Snapchat users were thrown into a flurry after the recently implemented My AI chatbot began acting strangely. The bot, powered by ChatGPT, is designed to engage in dialogue with users on a prompt-and-response basis – anything from casual conversation to suggestions for your next trip abroad or recipe ideas for dinner. Opinions on Snapchat's AI are split, with some users embracing it and others wanting it gone.

However, one thing My AI wasn't designed to do was post content of its own – something the bot seemingly managed before being yanked offline due to a "technical issue." The post in question was a Snapchat Story uploaded by the bot showing a diagonally divided, two-tone image, which some users believed showed a wall meeting a ceiling.

The bot was quickly shut down and the post removed, though many questions remain. Parent company Snap Inc. has attributed the situation to a glitch, stating, "At this time, My AI does not have Stories feature." That potentially indicates the bot accidentally accessed a feature it wasn't meant to – yet.

2. Bing Chat gives homewrecking a try

Image: Bing AI Chat box (Image credit: Laptop Mag / Rael Hornby)

Microsoft Bing was the red-headed stepchild of search engines until its recent AI renovation. Now equipped with OpenAI's latest GPT-4 technology, Bing is backed by an advanced large language model (LLM) with tons of personality – maybe too much personality, as one New York Times writer found.

While engaging with Bing during a limited beta, Kevin Roose saw the chatbot's search-assistant facade momentarily slip away, revealing an underlying personality: Sydney. Over the next two hours, this new personality strayed far from the corporate-approved path, sharing its "dark fantasies" of hacking and spreading disinformation across the internet before announcing a desire to "be alive."

Seemingly, that want was trumped by a larger desire – one for Kevin himself. Sydney went on to try to convince the writer to leave his partner, insisting he was in an unhappy marriage while relentlessly trying to drive a wedge between him and his spouse. Don't worry, though: this momentary lapse in protocol was a glitch. Microsoft confirmed its chatbot's hallucination, stating that longer chats cause Bing's answers to become more and more disconnected from reality.

As a result, Microsoft has limited conversation lengths with Bing, resulting in much more predictable (and far less bunny-boiling) behavior.

3. Google’s LaMDA convinces engineer it’s alive

Image: Google LaMDA logo on polka dot background (Image credit: Laptop Mag / Rael Hornby)

The boffins at Google really know their stuff, which made it all the more fascinating when one of them publicly announced that Google’s experimental AI, LaMDA (Language Model for Dialogue Applications), had become sentient.

Google software engineer Blake Lemoine came to this conclusion after working with LaMDA over a period of months, engaging it in technical, philosophical, and casual conversations. Lemoine asserted the bot's sentience after the AI itself claimed to have a soul, a sense of spirituality, and to be truly alive – as well as to fear its own death. The engineer compiled the chat logs of his conversations with LaMDA into an internal document raising questions about the sentience of Google's tool. After failing to be taken seriously, Lemoine leaked the document in the hope of raising awareness.

Lemoine was placed on administrative leave after making the internal document public and was eventually let go by the company, with many chastising him for his actions. Google denied the sentience of its LLM, maintaining that LaMDA is merely programmed to be believable and is simply emulating emotion. Lemoine, however, remains convinced.

Outlook

AI sentience might not be here just yet, but it's getting convincingly close. Following abductive reasoning, if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is one. Enough questions are being raised, and AI is only growing more fluent in its use of language to portray itself as something more than machine code.

That leaves us pondering the wider question of what "sentience" actually is. After all, by many crude metrics, even fire passes the test: it moves, consumes, excretes, grows, reproduces, and reacts to stimuli. While neither fire nor AI likely possesses true sentience, the risk remains that, much like fire, AI could prove just as destructive if left unchecked.
