Why is criminal activity caused by AI software not punished as if the operator of that software were responsible for it?
The new AI chatbot in Facebook Messenger provides detailed information about sexual harassment allegations against state lawmakers. The problem: none of it is true; the allegations are completely fabricated.
Details here: https://www.cityandstateny.com/politics/2024/04/meta-ai-falsely-claims-lawmakers-were-accused-sexual-harassment/396121/
Facebook's chatbot, launched in September as the latest development in generative artificial intelligence, appears to have a nasty habit of fabricating allegations of sexual harassment by public officials.
A whistleblower sent City & State a screenshot of a conversation with Meta AI in which a lawmaker's name and the term "sexual harassment" were entered. The screenshot showed a completely fabricated incident and consequences that never occurred.
It should be well known that these AIs hallucinate and merely try to imitate human language as plausibly as possible.
Those who hold technology to this standard prevent any further technological progress, because such demands are impossible to meet.
By this logic, Microsoft would also have to be held responsible whenever someone does something illegal on their Windows PC.
That's obvious.
A knife manufacturer or a baseball bat manufacturer is not blamed when someone misuses their products.
Car manufacturers are likewise not blamed when someone drives too fast, runs red lights, or even runs a person over while drunk.
Oh no, wait, then you'd have to blame the bartender or the brewery 🤔
AI is just software without intelligence.
Whoever abuses it, or uses it without oversight, is clearly the one responsible.
Facebook
Your examples are inappropriate because:
With knives and baseball bats, the problem arises from how the tool is handled.
With AI it is quite different: its purpose is to generate answers to questions, whatever they may be. Its user has no influence on whether those answers are true or false.
False
AI is software that is trained for its task.
It has nothing inherently to do with questions and answers.
Of course you are responsible for what data you enter, and of course you have to check for yourself whether the output can be correct.
That AI does not always deliver correct results goes without saying.
That is in the nature of the thing, given how the results are computed in the first place:
they are estimates and guesses.
Presumptions!
AI has no intelligence.
The user is the one who should have it.
Not at all, AI is just a tool.
However, AI increasingly shows that people without much intelligence switch their brains off.
While clever people, using AI properly, accomplish even more and become even smarter.
Which side you want to belong to, you have clearly shown.
AI is also a tool; you cannot say that one tool is to blame and another is not.
Besides, AI is not really intelligent: it only guesses, according to probability, what might fit and what might not, and only in response to the user who enters a prompt. AI will not spread anything on its own; its data comes from other people, the ones who built the AI.
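The "guessing by probability" described above can be illustrated with a toy sketch. This is a deliberately simplified stand-in (a tiny bigram model over an invented corpus), not how Meta's AI is actually built, but real language models work on the same statistical principle at vastly larger scale:

```python
import random
from collections import Counter, defaultdict

# Tiny invented training corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick the next word, weighted by how often it followed `word` in training."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The model never checks whether its output is true; it only continues
# the text with a statistically plausible word.
print(next_word("the"))  # "cat" (seen 2x), "fish" (1x) or "mat" (1x), by chance
```

The point of the sketch: truth never enters the calculation anywhere, only frequency.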
The NYT demands the destruction of the AI language models that were trained with the newspaper articles.
The complaint also underlines the potential damage to the Times brand through so-called "AI hallucinations", a phenomenon in which chatbots insert false information that is then falsely attributed to a source. The complaint cites several cases in which Microsoft's Bing Chat provided false information that it attributed to the Times.
From: https://the-decoder.de/chatgpt-soll-zerstoert-will-new-york-times-klagt-tra-openai-und-microsoft/#summary
So you see:
Responsible for "AI hallucinations" is the operator of the AI (but by no means the people who merely query the AI).
It is, of course, good that child pornography is supposed to be prevented.
Of course, that does not prevent it 100%.
But no one can blame the operators for that.
Facebook's self-monitoring, having a dedicated AI search for and delete such posts, is effectively voluntary too, but necessary, because the flood of data could not be handled otherwise.
In the 1950s, the psychologist Frank Rosenblatt developed the perceptron, the first artificial neural network.
Strange: for all those years, AI was only ever seen as an aid.
Now, suddenly, it is supposed to deliver perfect results and spit out nothing but truth.
The results of an AI are only estimates.
It is only thanks to AI that weather forecasts can predict the weather more than an hour ahead.
Nevertheless, no sensible person would say the weather AI has lied and that someone must be punished for it.
Wrong weather forecasts can even cost lives.
And false warnings can cause huge damage.
That is all much worse than the rumors of a chat AI.
Then why don't you blame the operators of the weather AI?
Because it is just a better estimation tool.
Software that can only assemble texts cannot lie.
It only uses YOUR inputs and tries to build something from them.
Just as if you composited something from multiple images in Photoshop.
You could never blame Photoshop or its manufacturer for the lie or fake you created with your photomontage.
You provide the inputs.
In the photo editor just like in the chat.
The tool merely does its calculations.
Once again: an AI is not a lexicon and not a Wikipedia.
It is just a dumb tool that creates something new from other texts according to your instructions.
Whoever does not understand this and trusts a chat AI 100% has only himself to blame.
Example: suppose there were an AI that produced blatant false statements on a large scale. At some point the prosecutor's office would have to act. And whom could, or would, it have to go after? Answer: only the operator of that AI could be held to account for it.
It also goes without saying that not every lie causes equal damage (i.e., some amount to mere insinuation while others must even be considered a criminal offence).
What I mean is: if Meta's software creates lies, one has to regard that as Meta lying to the user of that software.
How do German lawyers see that?
Where, in the Messenger?
It does not pass the chat on.
And that an AI can never be 100% right is only logical.
It also says so in the terms of service you agreed to.
And if you pass on the chat between you and the AI, you are responsible for it, and in violation of the rules anyway.
Where Meta's AI produces fake news, that means Meta is passing it on to others without noting that what it says is not true.
Right: I would only be responsible for passing on such AI results if I did so without pointing out that those statements are false.
Then you should know that you are responsible if you spread such automatically generated images and texts further.
My principles on this issue are the principles of German law.
We are not being foolish if we reason realistically along German principles instead of adopting American Wild West mania.
As you can see, I, for example, do not commit this error.
That is why I cannot understand why you and Apophis commit it.
I think you did not understand my explanation and the hint.
Of course, in the USA it is not the microwave that gets condemned, but the manufacturer.
In Germany, everyone would clearly see the fault as lying with the user, even without the USA.
… Meta, or Mark Zuckerberg as the one who is in charge there.
You just don't understand that neither I nor anyone in the United States sees the AI as the guilty party: guilty can only be a legal person (i.e., the company that deploys the AI even though it is far from perfect).
In America, the microwave manufacturer is also liable if the cat does not survive being put into the microwave to dry.
Or the coffee company, because the coffee is hot and you scald yourself when you spill it over yourself.
In Germany, one is responsible for one's own actions.
Whether a Facebook user is simply too naive to recognize fake news produced by the AI as fake, or whether he wants it as a fake, has it constructed with the help of the AI, and then deliberately spreads it further: both are application scenarios that, from a social policy perspective, ought to be ruled out.
And indeed, it begins to move:
The competent prosecutor's office demands answers regarding the false allegations made by the Meta AI chatbot:
Letitia James sent a letter to the Facebook parent company after City & State reported that the chatbot had invented sexual harassment allegations against legislators.
Source: https://www.cityandstateny.com/politics/2024/04/meta-ai-falsely-claims-lawmakers-were-accused-sexual-harassment/396121/
As you can see, lawyers think in the same direction as I do.
Thanks for the note: instead of "without action by Facebook users" I should of course have written "without malicious intent by Facebook users".
You were not talking about the Messenger, you were talking about Facebook.
And yes, on Facebook many AIs are used to create fakes in order to deceive the users there.
Facebook therefore uses AI of its own to find and block such fakes as quickly as possible.
About Facebook
The Messenger works the same way as ChatGPT.
You give the AI inputs or instructions.
The AI responds accordingly.
By refining your instructions, you can steer the result toward what you want.
And that is probably the problem!
It is the inputs of the USERS that lead to the fake, and then they spread this fake news.
In this case, that AI was triggered not by people but by Facebook Messenger itself.
So please, you and Apophis, understand the whole thing properly before you accuse me of having understood too little about AI, chatbots and Facebook's Messenger (or even, as Apophis does, of trying to create fake news with the help of the AI).
Like that.
Where did you get that statement?
What?
That using a PC cannot cause great damage.
Do you have any idea?
That isn't your construct.
On its own, the AI creates nothing illegal; only a user who deliberately feeds it misleading information can make it do so.
And it is the same as with any software: all of it has errors and all of it can be abused.
AI has been around for decades, so I would not speak of immature technology.
And it is used successfully in many fields.
Every new car, in fact, drives around with immature technology.
If you knew what defects, some of them far from harmless, are present in many a car, you would probably not get into one.
But an AI does not strive to take responsibility away from people.
It cannot.
Neither strive for it nor take it.
Using cell phones and other computers cannot cause us (as a society) as much damage as Meta's new AI chatbot in Facebook Messenger, which evidently generates fake news even without malicious input from Facebook users, fake news that sufficiently naive users then often take seriously and spread further.
You see a difference. But I see the parallel: in both examples, self-driving cars as well as AI, an immature technology aims to take responsibility away from people, responsibility we should only hand over once the technology works sufficiently well (which so far is the case neither for AI nor for cars driving autonomously on public roads).
Then you should have neither a phone nor a computer.
No software in the world is error-free.
And it is not the software that is dangerous, but the USER.
Just like with the knife or the car.
There is no car in the world that is harmless; even the TÜV inspection changes nothing about that.
You as the driver remain responsible.
That is why in Germany you must not let a self-driving vehicle operate unsupervised.
For the future, a hot topic for which no one has a real solution yet.
My point is simply this:
Just as the legislator prescribes that only cars certified by the TÜV as sufficiently safe and legally compliant may drive on public roads, one should proceed the same way with AI.
I think that is going to happen sooner or later.
Sure.
People respond emotionally
The AI bases its output on similar answers to similar questions, on partially matching texts.
It then continues those texts in a way that reads pleasingly.
If @apophis and I were to do the same, our answers would also sound like nothing but templates, and producing them would take us hours.
An AI does this with a huge neural network in fractions of a second.
That is exactly the advantage and the main application of AI:
doing things a normal person could also do, but at breakneck speed.
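The "similar answers to similar questions" idea can be sketched in a few lines. This toy uses plain character-level similarity over invented question/answer pairs; real systems use learned representations, but the failure mode shown in the last lines is the same:

```python
from difflib import SequenceMatcher

# Invented example data; a real system would draw on millions of texts.
known = {
    "what is the capital of france": "Paris is the capital of France.",
    "how hot is the sun": "The surface of the sun is about 5,500 degrees Celsius.",
}

def answer(question):
    """Return the stored answer whose question is most similar to the input."""
    best = max(
        known,
        key=lambda q: SequenceMatcher(None, question.lower(), q).ratio(),
    )
    return known[best]

# A similar question gets the matching answer...
print(answer("What's the capital of France?"))
# ...but a question about something else entirely still gets SOME answer:
# the closest match always wins, whether or not it is actually correct.
print(answer("what is the capital of franconia"))
```

Note the design consequence: such a system never says "I don't know"; it always returns the most plausible-looking continuation, which is exactly where hallucinations come from.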
That is why, of course,
Am I supposed to apologize for comparing apples with pears?
And for claiming the pears are apples?
That is forgivable only because you still have not understood what an AI really is.
Excuse me, I am simply sorry.
Right. But why, then, are its statements far more intelligent than yours and Apophis's on this page?
PS: Please forgive me for daring to make such a comparison here. I do not mean to offend you.
But you know where these texts come from and how to evaluate them 🤪😜
It is just a chatbot without intelligence.
🤣😂
Now you just have to verify that this statement is legally sound, otherwise you may yourself end up liable for spreading rumours and facing a complaint.
Have fun
ChatGPT 3.5 says:
It is interesting how ChatGPT 3.5 (itself an AI) answers my question:
Right.
And if you were a human being, and intelligent, you would understand that.
It is automated imitation of human behavior by a computer program.
You as the user should be the smarter one.
Your claim that the AI would be praised as intellectual and thinking only shows that the AI actually possesses more intelligence than your theses and answers do.
🤣😂
The AI is not a person and cannot think.
It does not start to fantasize.
It only computes answers depending on the inputs a person gives it.
It can compute anything.
Of course, what it computes need not have anything to do with reality.
That is only logical.
It is just a probability calculation.
If someone reaches the limits of his knowledge but does not admit it, he becomes a liar. At least anyone who wants to be called "intelligent", or is praised as intelligent, should know that.
Nonsense. Anyone who asks the AI a question cannot be held responsible for the fact that it occasionally starts to fantasize. The AI itself should be able to decide what it can and cannot answer.
Why?
Cars can run red lights.
Cars can run over pedestrians.
Cars can drive too fast.
Even today, everyone drives faster than 130 km/h.
And it is not the AI that engages in criminal conduct, but the user.
An AI can do nothing criminal on its own.
By your reasoning, all computers and cell phones would have to be banned, since crimes can be committed with them.
Once an AI commits criminal acts (i.e., does things people would not be allowed to get away with), the legislator would have to hold the operator of the AI responsible for its misconduct.
That’s what I think.
For example, a car manufacturer must not market cars that violate legal regulations. So why should AI be different?
YES, you did know it.
Facebook (the user of the AI) is responsible, not the tool's manufacturer.
Just as you are when you spread AI results somewhere as if they were the truth and 100% correct.
At last you have noticed it: the guilty one is not the computer, but the person who sits in front of it and operates it.
Then it is time to arrest every manufacturer of plastic bags, pistols and knives.
That has nothing to do with quality; running such a system costs hundreds of thousands of euros a day, in the case of ChatGPT more than 500 thousand a day, just for operation (training not included).
Not everyone has that kind of money at hand.
Where legislators do not take this into account, they fail to live up to their responsibility.
Proceedings opened against Meta
Because of the spread of false information about the European elections, coming among other places from Russia, the EU Commission has initiated proceedings against the Facebook parent company Meta. On its Instagram and Facebook platforms, the company does not take sufficient action against "advertising campaigns related to foreign manipulation and interference", the Commission said. "In times of democratic elections, large platforms must live up to their obligations," said EU Commission President Ursula von der Leyen.
grtgrt thinks: companies must not be allowed to hide behind AI.
It is clear: tools cannot be guilty, but their operators can be, if they ship such tools with too little quality.
So there are these journalists who play around with the AI until they get a story about senators/legislators out of it.
Where is the problem now?
That is how chatbots work; you can make them say whatever you want.
For an AI, the same applies as for any software or product: the manufacturer is not responsible for how the user handles the product.
It is not the manufacturer's responsibility to ensure that the user uses the product correctly.
For example, a car manufacturer must not market cars that violate legal regulations. So why should AI be different?
Correct operation of Meta's AI consists of asking it a question, no matter what the content.
Every car manufacturer may put a car on the market that can drive faster than is allowed anywhere. Most cars can.
The manufacturer is not responsible if you drive 120 in a 50 zone with your car.
So why should that be different for AI?
In addition, there are no statutory provisions for chatbots the way there are, for example, safety and pollutant emission regulations for cars. Accordingly, an AI that returns false information or a made-up story does not violate any law.
Pressing the gas pedal is considered correct operation of a car.
If you floor the gas pedal on a public road, that is not the car manufacturer's responsibility.
There is also a clear notice that the AI's responses can be inaccurate or inappropriate:
https://www.facebook.com/help/messenger-app/667776101667447/?cms_platform=android-app&helpref=platform_switcher
"Responses from an AI can be inaccurate or inappropriate. You should not use them as the basis for important decisions. The AIs have only been trained in English, so their answers in other, currently unsupported languages may be of lower quality."
If, despite this, you blindly believe everything the chatbot produces, then you alone are to blame. You and no one else.
Sure, you can ask it anything.
Nonetheless, you must not publish half-truths you have generated with the help of the AI.
You are responsible for your own actions.
And as for your car:
It is exactly the same there.
With cars you can, of course, violate laws and rules.
Yet they are no more forbidden than knives are.
The driver is always the one responsible.
You may generate or write things up at home.
Just like thinking.
But do not put them into circulation!
And that is the user, not the AI, when someone deliberately feeds it false inputs to generate such false statements.