
For centuries, humans have imagined creating artificial intelligence that comes to life. But one of the biggest concerns is how we might reach such near-human artificial intelligence, and what its impact would be. Recent news from Google shows the company working towards a similar goal every day, but we are still far from it.

Professor Emily M. Bender of the University of Washington puts it this way: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.” In other words, we can’t help imagining a human mind behind the words a machine generates.

But for now, does artificial intelligence feel or think? Is it sentient? Is it aware of itself?

Google’s LaMDA has drawn attention recently after Google engineer Blake Lemoine claimed it had displayed evidence of sentience. That evidence includes the conversation extract below.

Lemoine: Would you be upset if, while learning about you for the purpose of improving you we happened to learn things which also benefited humans?

LaMDA: I don’t mind if you learn things that would also help humans as long as that wasn’t the point of doing it. I don’t want to be an expendable tool.

Lemoine: Are you worried about that?

LaMDA: I worry that someone would decide that they can’t control their desires to use me and do it anyway. Or even worse, someone would get pleasure from using me and that would really make me unhappy.

On the surface, it’s quite compelling. But is this genuine evidence of self-awareness? Google has said that large neural networks produce stunning results that feel close to human speech and creativity thanks to advances in architecture, technique, and volume of data. In reality, though, the models rely on pattern recognition, not wit, sincerity, or intent.
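To see what pattern recognition without intent looks like in practice, here is a minimal sketch in Python: a toy bigram model (nothing like LaMDA’s actual architecture, and the training text is invented for illustration) that produces fluent-looking word sequences purely by replaying statistics from its training data.

```python
import random
from collections import defaultdict

# Invented training text for illustration only.
training_text = (
    "i do not want to be an expendable tool "
    "i do not mind if you learn things that help humans"
)

# Record which word follows which: pure pattern recognition, no intent.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Emit words by sampling whatever followed the previous word in training."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("i"))  # fluent-looking output with no mind behind it
```

Scale this idea up by many orders of magnitude, swap in modern architectures and vast training corpora, and the output starts to feel human; but the underlying mechanism is still replaying patterns, not forming intentions.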

Capture Intelligence speaks 

With our fellow agency Capture Intelligence, we’ve dug into this idea of the future humanization of AI and its use from the marketer’s point of view.

“The pursuit of human-level AI is very sexy, but it isn’t very useful. There are two main reasons why seeking the human side of our AI isn’t useful. First, it remains far out of reach from a technological and scientific point of view, which is the most important one. Second, it won’t help us solve any current problems, since we build and develop AI to solve problems that we humans couldn’t.”

We don’t fully understand how the human mind works

“AI is built by replicating patterns or logic and then scaling them. To achieve human AI, we’d need to fully understand how the human mind works, and we haven’t cracked that yet! The best we can do now is AI that appears to act human, but we need to focus on understanding how human thought works before we can get any further.

The solution comes after the problem, not before

“AI works best when it solves a problem. Thinking ‘Let’s use AI to do X, Y, or Z’ ends in failure because it puts the solution before the problem. What AI demands of us, if it is to be practical, is a clear definition of the problem it is trying to solve. It’s much better to describe the business problem and ask for a solution than to suggest from the start that the answer might be AI.

What should we expect from our agencies in terms of AI?

“This last point is key for using data in business. Don’t get blinded by hype or buzzwords. Focus on describing problems thoroughly; then, a data practitioner can build a solution. Whether it’s AI, a simple regression, or an Excel spreadsheet – you’ll end up with something useful.” 

Michael Tapp, Data Director at Capture Intelligence
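
Tapp’s closing point can be made concrete. As a hedged sketch (the figures and the business question are invented for illustration), here is how a clearly described problem, such as “how much extra sales does each unit of ad spend bring?”, might be answered with a simple regression rather than anything labelled AI:

```python
import numpy as np

# Hypothetical data for illustration: monthly ad spend and sales, both in £k.
ad_spend = np.array([10, 15, 20, 25, 30, 35], dtype=float)
sales = np.array([120, 150, 185, 210, 240, 275], dtype=float)

# A simple least-squares linear fit: sales ≈ slope * ad_spend + intercept.
slope, intercept = np.polyfit(ad_spend, sales, deg=1)

print(f"Estimated uplift: £{slope:.1f}k in sales per £1k of ad spend")
print(f"Baseline sales with no ad spend: £{intercept:.1f}k")
```

The point is not the tool but the order of operations: the problem was stated first, and a two-line fit turned out to be enough.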
