Hacker News | new | past | comments | ask | show | jobs | submit | ps747's comments

> Unlike widely used Reinforcement Learning (RL)-based and Search-based approaches, GPT-3.5 not only interprets scenarios and actions but also utilizes common sense to optimize its decision-making process

Relying on LLMs for reasoning seems dangerous due to the risk of hallucinations, especially in a safety-critical setting like self-driving. I have other problems with this paper as well: for example, the comparison to RL is limited to zero-shot, and this technique will struggle to run in real time due to the slow inference speed of LLMs.

Maybe there is some potential for LLMs to work as a fall-back mechanism in new situations or to help predict the behavior of humans and other cars, but I doubt that LLMs will become central to decision making in self-driving cars.


Hallucinations have become a bit too much of a boogeyman.

One should not rely on LLMs as any sort of authoritative representation of training data where data integrity is critical.

But there's generally very little propensity for hallucination from in-context information you are feeding into them live.

Additionally, even just a second pass with a fine-tuned classifier checking for hallucinations between provided data and output can significantly reduce the degree to which they occur.
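A minimal sketch of that second pass. A real system would use a fine-tuned entailment/NLI classifier; here a toy heuristic stands in: flag numbers and proper names in the output that have no support in the provided context (the function name and the example strings are invented for illustration):

```python
import re

def unsupported_spans(context: str, output: str) -> list[str]:
    """Toy stand-in for a hallucination classifier: return numbers and
    capitalized names in `output` that never appear in `context`.
    A real second pass would use a fine-tuned entailment model."""
    context_tokens = set(re.findall(r"[A-Za-z0-9]+", context.lower()))
    candidates = re.findall(r"\b(?:[A-Z][a-z]+|\d+(?:\.\d+)?)\b", output)
    return [c for c in candidates if c.lower() not in context_tokens]

ctx = "The vehicle ahead is braking; speed limit is 65 mph."
out = "The vehicle ahead is braking, so slow below 65 mph near Exit 12."
print(unsupported_spans(ctx, out))  # ['Exit', '12'] — not grounded in ctx
```

Anything the checker flags can then be stripped, regenerated, or escalated rather than acted on.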

Summarizing massive sets of training data, the low-hanging fruit when these models were first released, is definitely an area where hallucinations have been a problem. But arguably the greater value going forward is that they have been turned into informal logic engines of increasing caliber.

In that application, hallucinations are far less of a concern unless the context extensively overlaps with training data. Even then, the hiccups can generally be broken by replacing tokens with representative placeholders: if an LLM working on a variation of the goat, wolf, and cabbage problem keeps hallucinating details from the standard form, use different nouns, or emojis (🐐, 🐺, 🥬) in their place.
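That placeholder trick can be mechanized: swap the loaded nouns for neutral tokens before prompting and restore them in the model's answer. A hedged sketch (the `PLACEHOLDERS` mapping and the example sentence are my own, for illustration):

```python
# Nouns that trigger memorized solutions, mapped to neutral placeholders.
PLACEHOLDERS = {"goat": "item_A", "wolf": "item_B", "cabbage": "item_C"}

def mask(text: str) -> str:
    """Replace each loaded noun with its neutral placeholder."""
    for noun, token in PLACEHOLDERS.items():
        text = text.replace(noun, token)
    return text

def unmask(text: str) -> str:
    """Restore the original nouns in the model's response."""
    for noun, token in PLACEHOLDERS.items():
        text = text.replace(token, noun)
    return text

puzzle = "A farmer must ferry a goat, a wolf, and a cabbage across a river."
masked = mask(puzzle)
# The masked prompt no longer pattern-matches the classic puzzle text,
# so the model has to reason from the stated constraints instead.
print(masked)
print(unmask(masked) == puzzle)  # round-trip is lossless: True
```

The same idea generalizes to any context that collides with well-known training examples.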

The issue of speed is much more salient. But I could definitely see LLMs, combined with the generative tech stacks emerging in 3D generation, being used to create large swaths of synthetic scenario data for edge cases unlikely to occur (or be captured) in real-world driving, which would in turn train faster and more comprehensive in-vehicle self-driving models.


It’s fair, though, to say that in safety-critical situations where human lives are at stake…

> can reduce the degree to which they occur significantly

May not be good enough, unless you can quantify the degree to which they happen.

1/10? 1/1000? 1/1000000? More in situations like fog or rain? Perfectly safe in normal conditions?

The problem here isn't hallucinations per se, correct. All systems are unsafe to some degree.

The problem is that the degree to which it is a problem is (afaik) quite difficult to quantify.

It’s not OK to just vaguely wave your hand and say it doesn’t happen that much, or that you can mitigate it to some degree by doing such and such. How much?

You have to actually be able to articulate the degree of risk involved.

There is a risk. Fact.

Is it acceptable? That’s the question, and no one seems to be able to really answer it clearly.


I think the valuable parts are XAI (explainability) and enabling interaction between the REAL model and the human.


I really appreciate this point, and I wonder if this is a flaw of the continuously updated model of cloud-based software. While it integrates well with agile and bugs are (in theory) quickly fixed, it's much harder to make breaking changes. Contrast that with the versioned software model, where Microsoft Word could make major changes like introducing the ribbon. I wonder if it's a good idea to do some sort of versioning in the cloud-based setting.


I think you're severely underestimating the impact of California's investment in education. UC Berkeley and Stanford (private, but accepts state research grants) are top institutions that have for decades attracted some of the brightest minds (professors and students) to the area. These people helped shape the field, from early work on the ARPANET and operating systems to the Big Data and AI innovations of today. The expertise is passed on to students, which creates not only a highly talented labor pool but also trains the next generation of innovators.

Considering this concentration of talent, I think it's no surprise that the Bay Area is a hub for innovation, and the impressive list of companies founded by Berkeley [1] and Stanford [2] alumni is a testament to this. The concentration of talent is the environment that attracts funding and VCs, and will ensure that the Bay Area will remain the place to be in tech.

And Seattle, your "perfect counterfactual", has University of Washington, another state-funded top institution.

[1] https://en.wikipedia.org/wiki/List_of_companies_founded_by_U... [2] https://en.wikipedia.org/wiki/List_of_companies_founded_by_S...


Disagree. I love SF. The Symphony is phenomenal, love the Opera and Ballet too.

The parks are beautiful, and you can get out into nature in about an hour's drive.

Living in SF is expensive, but salaries are high too. Especially if you're in tech like most on HN, you can have a very nice lifestyle here.


I like SF, but symphonies, operas, ballets, and beautiful parks are like the common denominator of major US cities.

But anyways I agree SF is beautiful in many areas, and no one should listen to any opinions of SF from anyone who calls it "San Fran" :)


Yes, I agree that most major cities have these things, but I truly believe the SF Symphony is special, almost as brilliant as the NY Philharmonic.

They regularly host top performers from around the world -- Itzhak Perlman, Gustavo Dudamel, Yuja Wang, and so many more. And Michael Tilson Thomas is a treasure. Anyway, I can ramble about this forever and I appreciate the response :)


I wish I had gone to one; my wife was slated to go since her company was sponsoring the symphony. I'm a casual classical music fan: I play some violin and have played in a symphony before, watch TwoSet, and watch players like Ray Chen and Hilary Hahn. I wanted to join one of those public drop-in symphonies too.


There is a huge difference between being a classical music fan in SF and being a classical music fan in Denver or Kansas City. We're talking world-class performances.


That's just not true. The CDC estimates that between 10% and 70% of cases are asymptomatic, with a best estimate of 40%. https://www.cdc.gov/coronavirus/2019-ncov/hcp/planning-scena...

As we saw in Italy and New York, COVID-19 has the potential to overwhelm hospitals. Officially, almost 170,000 Americans have died. NYTimes estimates that COVID-19 already caused over 200,000 deaths by comparing against the expected number of deaths if we weren't in a pandemic. So yes, COVID-19 is a clear and present danger to our lives.


Wow, this is incredibly damning and all but blames Boeing and ineffective FAA oversight for the tragic crashes.

I hope this leads to significant fines for Boeing and jail time for the more egregious actors involved; according to the report, their negligence directly led to the loss of lives. Knowing how important Boeing is to national security and the economy, I'm skeptical that enough will be done...

