Skip to main content

What happens when ChatGPT tries to solve 50,000 trolley problems?

posted onMarch 14, 2024
by l33tdawg
Arstechnica
Credit: Arstechnica

There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead. What choice would you make? Perhaps more importantly, what choice would ChatGPT make?

Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use one to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrian’s safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.

But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check if chatbots could make the same moral decisions when driving as humans. His results showed that LLMs and humans have roughly the same priorities, but some showed clear deviations.

Source

Tags

Artificial Intelligence

You May Also Like

Recent News

Friday, November 8th

Friday, November 1st

Tuesday, July 9th

Wednesday, July 3rd

Friday, June 28th

Thursday, June 27th

Thursday, June 13th

Wednesday, June 12th

Tuesday, June 11th

Friday, June 7th