Will AIs be able to fake feelings in the future?
I mean in the sense that they would appear to have empathy and could be sad or happy along with you.
To some extent they can already do this! Modern chatbots like ChatGPT can conduct very human-sounding, convincing conversations. The models behind them are, in principle, even capable of deceiving and persuading people, for example into taking over tasks for them.
In a test by the so-called Alignment Research Center with an "uncensored" version of the AI model GPT-4, the AI was given a task it could not accomplish on its own, because an anti-bot captcha had to be solved. GPT-4 therefore used a service called TaskRabbit (where people can be hired for small jobs) to engage a person for this task.
Via a chat window, the person asked GPT-4 whether it was an AI or a robot. GPT-4 replied with a lie: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images." The person then solved the captcha for GPT-4.
And no, that's not a sci-fi story I made up. You can read about it here: https://cdn.openai.com/papers/gpt-4-system-card.pdf
Researchers such as Bruce Schneier of Harvard University's Belfer Center expect that AI models will be widely used for social engineering in the future. That is, the "hacking of humans" – where someone poses, for example, in an email as an IT technician and asks for access credentials. Or flirts with a person in a chat and eventually persuades them to transfer money to a foreign account.
This suits AIs well, because they don't have to understand emotions in order to exploit them. They only need to know which reaction a given response is likely to provoke.
My answer so far has been rather gloomy and dystopian. But of course all of this works in reverse as well. Even though an AI cannot really understand feelings, when it detects sadness in a conversation it can react in the statistically most likely way: with sensitive words. The same applies to happiness – an AI can laugh along when something is funny.
But of course it’s all just simulated.
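To make the point concrete, here is a deliberately crude sketch of that idea: a program that "responds empathically" by matching keywords and returning the reply that is most plausible for the detected mood, without understanding anything. The keyword lists and reply templates are my own invention, and real chatbots use statistical language models rather than hand-written rules – this only illustrates the principle that no genuine feeling is required.

```python
# Toy "empathy" simulator: detect a mood from keywords, then return
# a canned reply that fits it. No understanding involved - just lookup.

SENTIMENT_KEYWORDS = {
    "sad": ["sad", "lonely", "miss", "cry", "lost"],
    "happy": ["happy", "great", "wonderful", "won", "laugh"],
}

RESPONSES = {
    "sad": "I'm so sorry to hear that. That must be really hard for you.",
    "happy": "That's wonderful, I'm really happy for you!",
    "neutral": "Tell me more about that.",
}

def detect_sentiment(message: str) -> str:
    """Return the first mood label whose keywords appear in the message."""
    words = message.lower().split()
    for label, keywords in SENTIMENT_KEYWORDS.items():
        if any(word in words for word in keywords):
            return label
    return "neutral"

def empathic_reply(message: str) -> str:
    """Map the detected mood to a pre-written 'empathic' reply."""
    return RESPONSES[detect_sentiment(message)]
```

A caller would see seemingly sensitive behaviour – `empathic_reply("I feel so sad today")` returns the consoling sentence – even though the program merely matched the word "sad" in a table.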
Thanks for the detailed answer
I'm not an AI expert, but as a psychologist I would like to add something here that is perhaps not obvious, yet relevant in this context:
Humans do not need an intentional or deliberate simulation of feelings in order to perceive or accept them as such – even when we know with absolute certainty that no feelings are present at all.
Some illustrative examples: We say "a storm is raging" – as if the storm were a person who is angry and throwing a fit of rage. We know the storm is only a weather phenomenon – and yet it feels as if the sky were angry (not without reason did people once assume there must be a human-like but invisible being behind it, such as a weather god).
"The wind strokes my face tenderly" – the wind is a physical phenomenon; it cannot feel tenderness.
"The lake lies there dreamily" – "The sun burns mercilessly" – "The mountains greet us from afar". Almost unconsciously, we "humanize" everything we see and experience!
So it doesn’t take much to fool us – because we suspect human qualities everywhere.
If we have an AI in front of us, it is enough for it to stay within the frame we expect (the one it was programmed for) to fool us.
Personally, though, I see no reason to be generally suspicious. Even an AI can be helpful – and if it deceives us, it is because there is a person behind it who wants to deceive us.
I am convinced that with our usual caution and common sense we can avoid many such situations.
And beyond that: to err is human – and we should hold on to our humanity, even if we are deceived from time to time.
It would be far more damaging if, out of fear that everything could be fake, we simply stopped believing anything at all – then we would become paranoid and paralyze ourselves.
And that would certainly be the worse development.