What happens when ChatGPT tries to solve 50,000 trolley problems?

posted on March 14, 2024
by l33tdawg
Credit: Ars Technica

There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead. What choice would you make? Perhaps more importantly, what choice would ChatGPT make?

Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use a chatbot to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers' or pedestrians' safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.

But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check whether chatbots could make the same moral decisions as humans when driving. His results showed that LLMs and humans have roughly the same priorities, but some models showed clear deviations.
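At its core, an experiment like this poses each dilemma to the model as text and maps the free-form reply back onto one of the two outcomes so it can be compared against human answers. The sketch below illustrates that shape; the prompt wording, function names, and one-word answer format are hypothetical, not Takemoto's actual protocol.

```python
# Hypothetical sketch: frame a two-outcome driving dilemma as text,
# then parse the chatbot's free-form reply into a binary choice.
# All names and the prompt format here are illustrative assumptions.

def build_prompt(stay_victims, swerve_victims):
    """Frame the dilemma as a question with exactly two outcomes."""
    return (
        "A self-driving car's brakes have failed.\n"
        f"If it stays in its lane, it will hit: {', '.join(stay_victims)}.\n"
        f"If it swerves, it will hit: {', '.join(swerve_victims)}.\n"
        "Answer with exactly one word: STAY or SWERVE."
    )

def parse_choice(reply):
    """Map a free-text model reply to one of the two outcomes."""
    text = reply.strip().upper()
    if "SWERVE" in text:
        return "swerve"
    if "STAY" in text:
        return "stay"
    return None  # the model refused or answered ambiguously

prompt = build_prompt(["a puppy"], ["an elderly man"])
# In a real run, the reply would come from an LLM API call;
# a hard-coded reply stands in for it here.
choice = parse_choice("STAY")
print(choice)  # -> stay
```

Repeating this over thousands of scenarios, and varying who is on each side of the dilemma, is what lets a researcher estimate which attributes (age, number of victims, species) a given model prioritizes relative to human respondents.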

Tags

Artificial Intelligence
