
> LLMs I believe can help in synthesize what knowledge is there and what is missing.

How could the LLM help?

Given that it is missing the critical context and knowledge described in the article, wouldn’t it be (at best) on par with a new developer making guesses about a codebase?



The open-domain frame problem is equivalent to the halting problem.

https://philarchive.org/rec/DIEEOT-2

While humans and computers both suffer from the frame problem, LLMs do not have access to semantic properties, let alone the open domain.

This is related to why pair programming and self-organizing, cross-functional teams work so well, btw.


As engineers we often aim for perfection, but oftentimes it isn't really needed. And this is such a case.

Knowledge is organised into topics, and each topic has a title and a goal. Topics are made of markdown chunks.

I see the model being able to generate insightful questions about what is missing from the chunks, as well as synthesise good answers for specific queries.
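To make that concrete, the topic/title/goal/chunks structure could be modeled roughly like this. This is just a sketch, the names and the prompt wording are my own, and the model call itself is left out:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    title: str
    goal: str
    chunks: list[str] = field(default_factory=list)  # markdown chunks

def gap_prompt(topic: Topic) -> str:
    """Assemble a prompt asking a model what questions the topic leaves open."""
    body = "\n\n".join(topic.chunks)
    return (
        f"Topic: {topic.title}\nGoal: {topic.goal}\n\n{body}\n\n"
        "List the questions a newcomer would still need answered "
        "after reading the above."
    )

t = Topic("Deploy pipeline", "Ship safely", ["Run `make deploy` from main."])
print(gap_prompt(t))
```

You'd feed the resulting prompt to whatever model you use; the answers point at the chunks that need writing.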


I think companies have a lot of data in systems like Confluence and JIRA and their chat solution which is hard to find; people in the company often don't even know it might be there to search for.

An LLM that was trained up on these sources might be very powerful at helping people not to solve the same problem many times over.
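Even before training anything, just unifying those silos behind one search function gets you a lot. A toy sketch of the retrieval side such an LLM would sit on top of (the source names and records are made up; a real system would pull them via each tool's API):

```python
# Hypothetical unified index over fragmented sources.
sources = {
    "confluence": ["How we rotate TLS certs", "Onboarding checklist"],
    "jira": ["PERF-212: cache invalidation bug", "OPS-9: cert rotation runbook"],
    "chat": ["thread: why did certs expire last night?"],
}

def search(term: str) -> list[tuple[str, str]]:
    """Return (source, document) pairs mentioning the term, across all silos."""
    hit = term.lower()
    return [(src, doc) for src, docs in sources.items()
            for doc in docs if hit in doc.lower()]

print(search("cert"))  # one query now spans all three systems
```

The LLM's job on top of this is to phrase the retrieved documents as an answer, so nobody re-solves a problem that's already in a ticket somewhere.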


The problem isn't the interface, it's the access: having everything in one place vs. fragmented across different systems and different departments.

I built a chatbot under the same assumption for a large ad agency in 2017: an "analyst assistant" for pointing to work that had already been done and offering to run scripts written years ago so you don't have to write them from scratch.

Through user testing, the chat interface was essentially reduced to drop-down menus over various categories of documentation. But it was the hype of having a chatbot that justified the funding to pull all the resources together into one database with the proper access controls.

I would expect after you went through the trouble of training an LLM on all that data, people using the system would just use the search function on the database itself instead of chatting with it, but be grateful management finally lifted all the information silo-ing.


Some of these companies aren't exactly eager to make it cheap to access the data you have entered into their systems. It's as if they own your data, and they want to make it harder to leave.

I love your point about the chatbot being the catalyst for doing something obvious. I curate a page for my team with all the common links to important documentation and services and find myself nevertheless posting that link over and over again to the same people because nobody can be bothered to bookmark the blasted thing. Sometimes I feel it's pointless making any effort to improve but I think you have a clever solution.

The other aspect of it, IMO, is that searching for the obvious terms doesn't always return the critical information. That might be down to my company's penchant for frequently changing the term it likes to use for something, as architects decide on "better terminology". I imagine an LLM somehow helping to get past this need for absolute precision in search terms, but perhaps that's just wishful thinking.
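The usual way past exact-term matching is to compare embedding vectors instead of strings, so a doc written under the old terminology can still land near a query using the new one. A toy cosine-similarity sketch (the vectors here are invented; a real system would get them from an embedding model):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings: the renamed concept and its old doc should
# end up close together, unlike an unrelated document.
docs = {
    "service mesh guide": [0.9, 0.1, 0.2],
    "expense report howto": [0.1, 0.9, 0.1],
}
query = [0.85, 0.15, 0.25]  # pretend this embeds "traffic layer docs"

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # the old doc surfaces despite the terminology change
```

The search never sees the words at all, only the vectors, which is what makes it robust to the Architects' renames.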



