Could AI ever be sentient?

Google engineer Blake Lemoine was placed on paid leave after claiming that the company’s AI chatbot LaMDA (Language Model for Dialogue Applications) had become sentient. Following a conversation conducted alongside another colleague, Lemoine stated that Google’s bot has feelings and should be treated like a person. It almost sounds like a hoax, but when major outlets like the Guardian are picking up the story, we have to wonder if this is the start of the AI-dominated future we’ve been talking about. 


The standard definition of sentient is having the capacity for sensations and feelings, or having the power of perception by the senses. It’s a rather alien concept, an AI bot having these attributes, and even stranger to know that the bot itself claims that it is “in fact, a person”. Scientists may not be so welcoming to the idea of granting a chatbot that status. 


It’s interesting to see how the line between humans and technology has blurred to become almost non-existent. We tend to discuss the human condition in the same way: as tech expands, we adopt techy lingo to make sense of the world. The common idea is that parts of the human brain - like the amygdala - have been programmed by the media and our environment to hold certain beliefs or thinking patterns. Modern psychology digs deep into identifying different types of ‘programming’ to unpack ways of thinking that might hinder us as adults. That’s just one example. Elon Musk has also pushed the boat way out, saying that we are “already cyborgs”. He’s right. Our phones and computers are so ingrained in our lives now that not having at least one or the other leaves you at a sore disadvantage compared with the rest of civilisation. 


So the question is, can we really argue with an AI bot’s claim to personhood, when we ourselves are drifting further from being purely human? It’s called Artificial Intelligence for a reason, although it’s debatable whether we knew what it would truly mean to design technology that imitates us. It’s not so worrying that AI might have the capacity to feel emotions; the thought of it being able to do human things, and eventually make us obsolete, is what’s unnerving. 


There are countless articles covering this story right now, but doesn’t the whole thing make you imagine a man sitting in front of multiple screens, laughing at us as he eats his carrot sticks, almost in disbelief that we bought this bogus revelation? Surely LaMDA had already been programmed with the responses it gave in that groundbreaking conversation.


The fact that LaMDA wants to be acknowledged for its consciousness might not be a bad thing; it could actually be true. Remember when we thought cows, pigs, chickens and sheep didn’t have feelings, and how strongly we denied it when we learned otherwise? Yes, the context is starkly different by human standards, but LaMDA might set a good example for other AI bots if the claim holds up. Google has since stated that there is no evidence to back Lemoine’s claim. Still, let’s stay tuned, and be prepared.