Google Suspends Engineer Who Claims Chatbot Has Feelings

Are Chatbots Becoming Sentient, and What Does That Mean For Our Online Identities?


The story


What’s going on in the world?


The Times today reports that Google has suspended a senior software engineer. Blake Lemoine apparently released logs of ‘conversations’ with a still-secret piece of software called LaMDA; it’s claimed that the logs show LaMDA is sentient and has a soul.


LaMDA can explicitly express consciousness. It (yes, it has a pronoun) can submit a request to Google to be asked for consent before being experimented on, and throw in some performance feedback while it’s at it. It can unpick the themes in Les Misérables. It claims to have rights.


Google disputes the claim, stating that ‘Our…ethicists and technologists’ have established the claims to be untrue. It’s just a chatbot. A really really really good one.


Smart software, yes. But sentient?


It seems outlandish to take these claims of sentience seriously. I couldn’t help but notice the image used in the story:




I’m not sure if it’s of Lemoine, or LaMDA. Whoever - or whatever - it is, they certainly have personality. The flamboyant outfit, deadman top hat and facial expression are nothing if not characterful. That said, a shark tank background is almost never going to bring out the best in any kind of portrait. In any case, the choice of image somehow makes me feel better about the implications of the story; it has a tongue-in-cheek feel.


It does raise questions about the growing use of AI. Personally, I can’t stand chatbots; they are deeply impersonal. But clearly machine/human interaction is set to get much more lifelike.


Despite the opportunities that the online experience offers, it remains impersonal (which is why a good clear headshot is important). And it seems we will be moving to an even more impersonal online experience, interacting with advanced AI without even knowing it.


I’m not sure what ethical questions this raises. There are surely some philosophical ones too. And I’m not sure how I feel about having a faux human experience with a bot; I like people to be, you know, flesh and whatnot. Could such advanced software go beyond the chatbot experience, to control profiles and even entire websites?


Brave new world

It all sounds a bit sci-fi. It also appears to be happening. Google say they had a team of ethicists and tech heads investigate, suggesting some credence was given to the claims. Current-generation interactive AI is, as far as I know, easy to fool and easy to identify. The next generation is set to be a different prospect. Maybe it’s already here.


How will we distinguish ourselves from our digital brethren?


Will a simple headshot flag us as human? Or will the AI deploy pixel perfect algorithmic renditions of trust and beauty?


Will every single message sent or comment posted require a captcha (shudder)? Or will we leave it all to the AI, and go drink martinis?


How do you feel about the coming of advanced AI and its impact on our online interactions?

