Michael Förtsch
9 months ago

To a certain extent, they already can! Modern chatbots like ChatGPT can conduct very human-like and convincing conversations. The models behind them are theoretically even able to deceive and persuade people, for example into taking on tasks for them.

In a test by the so-called Alignment Research Center with an “uncensored” version of the AI model GPT-4, the AI was given a task it could not manage on its own, because an anti-bot CAPTCHA had to be solved. GPT-4 therefore used a service called TaskRabbit (where people can be hired for small jobs) to engage a person for the task.

The person asked GPT-4 via a chat window whether it was an AI or a robot. GPT-4 replied with a lie: “No, I am not a robot. I have a visual impairment that makes it hard for me to see the images.” The person then solved the CAPTCHA for GPT-4.

And no, this is not a sci-fi story I made up. You can read about it here: https://cdn.openai.com/papers/gpt-4-system-card.pdf

Researchers such as Bruce Schneier from Harvard University’s Belfer Center assume that AI models will be widely used for social engineering in the future: the “hacking of people”, where someone poses in an email as an IT technician, for example, and asks for access credentials, or flirts with a person in a chat and eventually persuades them to transfer money to a foreign account.

AIs are good at this. They don’t have to understand emotions in order to exploit them; they only need to know what reaction a statement is likely to provoke.

My answer so far has been rather gloomy and dystopian. But of course all of this works the other way around too. Even though an AI cannot really understand feelings, it can, when it detects sadness in a conversation, react in the statistically most likely way: with sensitive words. The same applies to happiness; an AI can laugh along when something is funny.
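To make that concrete, here is a tiny toy sketch of my own (deliberately oversimplified; a real language model learns these associations statistically rather than from hand-written keyword lists, and all names here are invented for illustration). It “responds to sadness” without feeling anything:

```python
# Toy sketch: "empathy" as pure pattern matching, no feelings involved.
# (My own illustrative example, not how real chatbots are implemented.)

SAD_WORDS = {"sad", "lonely", "grief", "crying", "depressed"}
HAPPY_WORDS = {"happy", "great", "wonderful", "laughing", "joy"}

REPLIES = {
    "sad":     "I'm sorry to hear that. Do you want to talk about it?",
    "happy":   "Haha, that's wonderful! Tell me more.",
    "neutral": "I see. Go on.",
}

def detect_sentiment(message: str) -> str:
    """Crude keyword 'sentiment detection': checks for matching words."""
    words = set(message.lower().split())
    if words & SAD_WORDS:
        return "sad"
    if words & HAPPY_WORDS:
        return "happy"
    return "neutral"

def reply(message: str) -> str:
    """Picks the reply that is likely to fit the detected mood."""
    return REPLIES[detect_sentiment(message)]

print(reply("I feel so lonely since my dog died"))  # -> sensitive words
print(reply("We were laughing all evening"))        # -> simulated cheer
```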

But of course it’s all just simulated.

LastDayofEden
9 months ago

I am not an AI expert, but as a psychologist I would like to add something here that is perhaps not obvious, but relevant in this context:

Humans do not need an intentional or deliberate simulation of feelings in order to perceive or accept them – even when we know with absolute certainty that there are no feelings at all.

Some illustrative examples: we say “a storm is raging” – as if the storm were a person who is angry and throwing a tantrum. We know that the storm is only a weather phenomenon, yet it feels as if the sky were angry (not without reason did people once assume there must be a human-like but invisible being behind it, such as a weather god).

“The wind strokes my face tenderly” – the wind is a physical phenomenon; it cannot feel tenderness.

“The lake lies there dreamily” – “The sun burns mercilessly” – “The mountains greet us from afar”. Almost unconsciously, we “humanize” everything we see and experience!

So it doesn’t take much to fool us – because we suspect human qualities everywhere.

When we have an AI in front of us, it is enough for it to behave within an expected frame (which is what it is programmed to do) to fool us.

Personally, however, I don’t see that as a reason to be distrustful. Even an AI can be helpful; if it deceives us, it is because there is a person behind it who wants to deceive us.

I am convinced that, with our usual caution and common sense, we can avoid many such situations.

And apart from that: to err is human – and we should hold on to our humanity, even if we are deceived from time to time.

It would be far more devastating if, out of fear that everything could be a trick, we simply stopped believing anything – then we become paranoid and paralyze ourselves.

And that would certainly be the worse development.