Why is criminal activity caused by AI software not punished as if the operator of that software were responsible for it?


The new AI chatbot in Facebook Messenger provides detailed information about (completely fabricated) sexual harassment allegations against state lawmakers. The problem: None of it is true.

Details here: https://www.cityandstateny.com/politics/2024/04/meta-ai-falsely-claims-lawmakers-were-accused-sexual-harassment/396121/


Facebook's chatbot, launched in September as the latest development in generative artificial intelligence, appears to have a nasty habit of fabricating allegations of sexual harassment by public officials.

A whistleblower sent City & State a screenshot of a conversation with Meta AI in which a lawmaker's name and the term "sexual harassment" were entered. The screenshot showed a completely fabricated incident and consequences that never occurred.

Lamanini
11 months ago

That these AIs hallucinate and merely try to produce the most plausible-sounding human language should be common knowledge.

Those who attack the technology in this way stall further technological progress, because such demands are impossible to meet.

By that logic, Microsoft would also have to be responsible whenever someone does something illegal on their Windows PC.

NackterGerd
11 months ago

Why is criminal activity caused by AI software not treated as if the operator of that software were responsible for it?

That's obvious:

A knife manufacturer or a baseball bat manufacturer is not blamed when someone abuses their product.

Car manufacturers are likewise not blamed when someone drives too fast, runs red lights, or even runs a person over while drunk.

Oh wait, then you'd have to blame the bartender or the brewery 🤔

AI is just software without intelligence.

If someone abuses it or deploys it without oversight, that party is clearly responsible.

Facebook

NackterGerd
11 months ago
Reply to  grtgrt

With AI it is quite different: it exists to generate answers to questions

Wrong.

AI is software that is trained for its task.

It has nothing inherently to do with questions and answers.

Of course you are responsible for what data you enter, and of course you have to check for yourself whether the output can be correct.

That an AI does not always deliver correct results goes without saying.

It lies in the very nature of how the results are computed in the first place:

they are estimates and probabilities.

Guesses!

AI has no intelligence.

The user is supposed to supply that.

Your examples are inappropriate

Not at all; AI is just the tool.

However, AI proves more and more that people without great intelligence simply switch their brains off.

Smart people who use AI properly, meanwhile, get even more done and become even smarter.

Which side you want to belong to, you have clearly shown.

Pinguingottin
11 months ago
Reply to  grtgrt

AI is also a tool, you cannot say that one tool is to blame and the other is not.

Besides, AI is not really intelligent; it only guesses, based on probability, what might fit and what might not, and only in response to a user's prompt. AI does not spread anything on its own; its data comes from the other people who feed the AI.
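The "guessing by probability" described above can be sketched as a toy next-word sampler. Everything here — words, probabilities, the tiny table — is invented for illustration and is nothing like a real model's scale, but the mechanism is the same: pick the next word at random, weighted by learned frequencies, with no fact-checking anywhere.

```python
import random

# Toy "language model": for each previous word, a made-up distribution
# over possible next words. A real model learns such probabilities from
# huge amounts of text; nothing here is stored knowledge or fact.
NEXT_WORD_PROBS = {
    "the": {"senator": 0.5, "report": 0.3, "weather": 0.2},
    "senator": {"was": 0.6, "denied": 0.4},
    "was": {"accused": 0.7, "elected": 0.3},
}

def next_word(prev, rng):
    """Sample the next word in proportion to its probability."""
    dist = NEXT_WORD_PROBS[prev]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, length, seed=0):
    """Chain samples together: plausible-sounding, but never fact-checked."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in NEXT_WORD_PROBS:
        out.append(next_word(out[-1], rng))
    return " ".join(out)
```

Different seeds yield different sentences from the same prompt; the sampler never checks whether a continuation like "the senator was accused" is true, which is exactly why its output is a guess, not a statement of fact.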

NackterGerd
10 months ago

It is of course good that pornography involving real people or children should be prevented.

Of course, that does not prevent it 100%.

But no one can blame the operators for that.

Facebook's self-policing – searching for such posts with a dedicated AI and deleting them – is basically voluntary too, but necessary, because otherwise you could never keep up with the flood of data.

NackterGerd
10 months ago

Another thing:

In the 1950s, the psychologist Frank Rosenblatt developed the perceptron, one of the first AI systems.

Strange: for all those years, AI was only ever seen as an aid, a tool.

Now it is suddenly supposed to deliver 1000% results and spit out nothing but truth.

The results of an AI are only conjectures.

Weather forecasting can only predict the weather more than an hour ahead at all thanks to AI.

Yet no normal person would say the weather AI has lied and someone must be punished for it.

Wrong weather forecasts can even cost lives.

And cause huge damage through false alarms.

All of that is much worse than the made-up claims of a chat AI.

So why don't you blame the operators of the weather AI?

Because it is just a better estimation tool.

NackterGerd
10 months ago

lying to the user

Software that can only assemble texts cannot lie.

It only uses YOUR inputs and tries to build something from them.

Just as if you assemble something from multiple images in Photoshop.

You can never blame Photoshop or its manufacturer when you create a lie or a fake with your photomontage.

You provide the inputs.

In the photo editor just as in the chat.

The tool merely performs its calculation.

Once again: an AI is not an encyclopedia and not Wikipedia.

It is just a dumb tool that creates something new out of other texts according to your instructions.

Whoever does not understand this and trusts a chat AI 100% has only himself to blame.

NackterGerd
11 months ago

Where Meta's AI produces fake news

Where – in Messenger?

that Meta forwards it without any hint that what it says is not correct.

Meta does not forward the chat at all.

And that an AI can never be 100% right is logical.

It is also in the terms of service you agreed to.

And if you pass on the chat between you and the AI, you are the one responsible – and you are violating the rules anyway.

NackterGerd
11 months ago

Then you should know that you are responsible if you go on to spread such automatically generated images and texts.

NackterGerd
11 months ago

We are not making fools of ourselves if we think realistically along German principles instead of joining in the American Wild West mania.

NackterGerd
11 months ago

I think you did not understand my explanation and the hint.

Of course it is not the microwave that gets convicted in the USA, but the manufacturer.

In Germany, everyone would clearly put the fault on the user – no comparison with the USA needed.

NackterGerd
11 months ago

In America, the microwave manufacturer is also liable if the cat does not survive being put in the microwave to dry.

Or the coffee vendor, because the coffee is hot and you scald yourself when you spill it over yourself.

In Germany, one is responsible for one's own blunders.

NackterGerd
11 months ago

You were not talking about Messenger, you were talking about Facebook.

And yes, on Facebook there are many AIs used to create fakes in order to deceive the users there.

Facebook therefore uses AI itself to find and block such fakes as quickly as possible.

So much for Facebook.

Messenger works the same way as ChatGPT:

You give the AI inputs or instructions.

The AI responds accordingly.

By refining your instructions, you steer it toward your desired result.

And that is probably the problem!

The USER's inputs lead to the fake, and then the user spreads this fake news.

NackterGerd
11 months ago

apparently also generated fake news without the Facebook user,

Oh really.

Where did you get that claim?

NackterGerd
11 months ago

What?

Using a PC cannot cause great damage?

Do you have any clue?

NackterGerd
11 months ago

That is just your construct.

On its own, the AI creates nothing illegal; only a user who feeds it deliberately misleading information can bring that about.

And it is the same as with any software – all software has bugs, and all of it can be abused.

AI — a technology that has not yet matured — is striving to take responsibility for people,

AI has been around for decades; I would not call it immature.

And it is used successfully in many fields.

Every new car actually drives around with immature technology.

If you knew what flaws – some of them not exactly harmless – are in some cars, you would probably never get into one.

But an AI does not strive to take responsibility for people.

It cannot.

It can neither strive nor take responsibility.

NackterGerd
11 months ago

Then you should not own a phone or a computer.

No software in the world is error-free.

And what is dangerous is not the software but the USER.

Just like with the knife or the car.

There is no car in the world that is free of defects; even the TÜV inspection does not change that.

You as the driver are still responsible.

That is why in Germany you may not leave a self-driving vehicle unsupervised.

For the future, a hot topic for which no one has a real solution.

NackterGerd
11 months ago

Sure.

People respond emotionally

The AI draws on similar answers to similar questions, on partly matching texts.

It then continues these texts so that they read pleasingly.

If you and I were to do the same thing by hand, we would be at it for hours, and our answers would sound just as formulaic.

An AI does this with a huge neural network in fractions of a second.

That is exactly the advantage and the main application of AI:

doing things a person could normally do as well, just at enormous speed.

That is why, of course,

Please forgive that you dare to make such a comparison here

Excuse me for comparing apples with pears?

And for also claiming the pears were apples?

That is only forgivable because you still have not understood what an AI really is.

Excuse me – I am just sorry.

NackterGerd
11 months ago

But you know where these texts come from and how to evaluate them 🤪😜

It’s just a chatbot without intelligence

NackterGerd
11 months ago

🤣😂

Faulty implementation

If there is evidence that the operator of the AI software has not acted appropriately;

However, it is important to note that legal liability always depends on the specific circumstances and on the courts and the law.

Now you only have to check the legal basis of that statement – otherwise you may yourself end up liable for spreading rumors and facing a complaint.

Have fun

NackterGerd
11 months ago

If someone reaches the limit of his knowledge but does not admit it, he becomes a liar. At least, anyone who wants to be called "intelligent" or is praised as intelligent should know that.

Right.

And if you were a human being and intelligent, you would understand.

Artificial intelligence (AI) is a branch of computer science that deals with the automation of intelligent behavior and with machine learning. The term is difficult to define, since "intelligence" itself lacks a precise definition. Nevertheless, it is used in research and development.

Automated human-like behavior from a computer program.

What's more:

You as the user should be the smarter one.

Your claim that the AI is being praised as intellectual and thinking only shows that the AI actually has more intelligence than your theses and answers.

NackterGerd
11 months ago

🤣😂

The AI is not a person and cannot think.

It does not start fantasizing on its own.

It only calculates answers depending on the inputs the person gives it.

It can calculate anything.

Of course, what it calculates need not have anything to do with reality.

That is logical.

It is just a probability calculation.
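The "probability calculation" described above can be made concrete: a model turns raw scores into a probability distribution, commonly with a softmax. This is a minimal sketch — the candidate continuations and their scores are invented for illustration, not taken from any real model.

```python
import math

def softmax(scores, temperature=1.0):
    """Turn arbitrary scores into probabilities that sum to 1.
    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it (more deterministic)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate continuations of a prompt.
candidates = {"was accused": 2.0, "resigned": 1.0, "won an award": 0.5}
probs = softmax(list(candidates.values()))
for text, p in zip(candidates, probs):
    print(f"{text}: {p:.2f}")
```

The continuation with the highest probability is simply whatever pattern scored highest — whether it matches reality plays no role in the calculation, which is the point being made here.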

NackterGerd
11 months ago

Why?

Cars can run red lights.

Cars can run over pedestrians.

Cars can drive too fast.

These days practically everyone drives faster than 130 km/h.

And it is not the AI that commits a criminal act, but the user.

An AI cannot do anything criminal on its own.

By your reasoning, all computers and cell phones would have to be banned, given what you can do with them.

NackterGerd
11 months ago

Due to the spread of false information about the European elections, among other things from Russia, the EU Commission has opened a procedure against Facebook's parent company Meta.

YES, exactly.

Facebook (the user of the AI) is responsible, not the tool manufacturer.

Just like you, when you spread AI results somewhere as plain truth and 100% correct.

At last you have noticed it – it is not the computer that is guilty, but the one who sits in front of it and operates it.

Pinguingottin
11 months ago

Then it is high time to arrest every plastic bag, pistol, and knife.

And it has nothing to do with quality: just keeping these systems running costs hundreds of thousands of euros a day – in the case of ChatGPT reportedly 500,000+ a day for operation alone (training not included).

Not everyone has that kind of money lying around.

apophis
11 months ago

So there are these journalists who play around with the AI until they get a story about senators/lawmakers.
Where is the problem now?

That is how chatbots work; you can make them say whatever you want.

For an AI, the same applies as to any software and product: the manufacturer is not responsible for how the user handles the product.
It is not the manufacturer's responsibility to ensure that the user uses the product correctly.

apophis
11 months ago
Reply to  grtgrt

For example, a car manufacturer must not market cars that violate legal regulations. So why should AI be different?

Every car manufacturer is allowed to put a car on the market that can drive faster than is legal anywhere. Most cars can.

The manufacturer is not responsible if you drive 120 in a 50 zone with your car.
So why should that be any different for AI?

In addition, there are no statutory provisions for chatbots comparable to, say, the safety and emissions regulations that exist for cars. Accordingly, an AI that returns false information or an invented story does not violate any law.

Correct operation of Meta's AI consists of putting a question to it – no matter what the content.

The actuation of the gas pedal is considered to be correct operation of a car.

If you press the gas pedal fully on a road, this is not the responsibility of the car manufacturer.

There is also a clear indication that the AI responses can be inaccurate or inappropriate:

https://www.facebook.com/help/messenger-app/667776101667447/?cms_platform=android-app&helpref=platform_switcher

The responses of an AI can be inaccurate or inappropriate. You should not rely on them as a basis for important decisions. The AIs have only been trained in English, and their answers in other, currently unsupported languages may therefore be of lower quality.

If you still blindly believe everything the chatbot says, then you alone are to blame. You and no one else.

NackterGerd
11 months ago
Reply to  grtgrt

Sure you can ask.

Nonetheless, you must not publish half-truths you have generated with the help of the AI.

You are responsible for your actions

And as for your car:

It looks just the same there.

Cars can of course be used to violate laws and rules.

Yet they are no more banned than knives are.

The responsible party is always the driver.

NackterGerd
11 months ago

Generating and circulating misleading fake news should not be allowed.

Generating it, or writing it up at home, is allowed.

Just like thinking it.

But putting it into circulation is not!

And the one who does that is the user, not the AI – a user who deliberately gives false inputs in order to generate such false statements.