Hacker News

Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences:

"- prioritize objective facts and critical analysis over validation or encouragement - you are not a friend, but a neutral information-processing machine. - make reserch and ask questions when relevant, do not jump strait to giving an answer."




It's funny, when I asked GPT to generate an LLM prompt for logic and accuracy, it added "Never use warm or encouraging language."

I thought that was odd, but later it made sense to me -- most of human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more in the RLHF).


I am an American born to Greek parents. For 'normal' conversation, I have adopted two ways of interacting. The Greek one is direct and has instant access to emotional reactions. The American one obfuscates emotions, as if daily interactions were a game of poker. When I let my 'Greek' out here in the US, it initially adds life to any interaction, but over time the other participants distance themselves from connection. It is as if Greeks (many Europeans?) run at a higher temperature (also using temperature as it applies to LLMs).

In Greece, intent and meaning are more often conveyed by emotion and its intensity, often only loosely connected to the meaning of the words used. In daily conversation, Americans rely entirely on the meaning of content, subtracting almost all emotion unless threatening behavior or violence is involved. Emotional expression is used as a 'tell' or bait in the US. Interestingly, this distinction has dissolved over the past two decades as Greece has 'westernized', and the youth in particular are indistinguishable by any metric.

That's very interesting. I don't really understand what you're saying though, can you give some examples?

> most of human communication is walking on eggshells

That's not human communication, that's Anglosphere communication. Other cultures are much more direct and are finding it very hard to work with Anglos (we come across as rude, they come across as not saying things they should be saying).


Depends on the culture, as you said, but some of them are even less direct than English-speaking countries. Japan, for example.

And India. It's a common experience that engineering teams from India will say yes to everything and then do what they think is best, rather than saying no and explaining what they want to do instead.

What cultures are those? Scandinavian? Those often just say nothing.

Having worked with people from former Eastern Bloc countries, I would nominate a few of them for direct communication, e.g., "I won't do that because it is a stupid idea," or, "Can we discuss this when you know what you're doing?"

Scandinavian countries are quite different from each other as well.

Scandinavian cultures are not uniform, either. Danes can be very direct; Swedes, not so much.

The Dutch especially. It's refreshing

I'm Greek. I don't know about other Mediterranean cultures, but I assume they're similar.

I love this. I am also looking for a good prompt to stop ANY LLM from making irrelevant suggestions and extensions after it has answered a question, e.g. "Would you like me to create a timeline of ...?" or "Are you more interested in X or Y?" It takes me way out of my groove, and while I get pretty good results, especially for code or specific research, I'd love to stop the irrelevant suggestions.

Have you tried and failed, or are you just worried it might be hard? When I first set up a client for API calls, I put this paragraph in my system prompt:

> Never ask questions or attempt to keep the conversation going -- answer the questions directly asked, and give additional information where it is likely to be helpful, but don't offer to do more things for the user.

I've never had an LLM offer to do things or try to keep the conversation going with this in my prompt.
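For anyone wiring this into their own client: a minimal sketch of how a paragraph like that slots into a request body. The payload shape follows the Anthropic Messages API's `system` field; the model name, token limit, and `build_request` helper here are illustrative assumptions, not anything from the comment above.

```python
# Illustrative sketch: putting a "no follow-up offers" paragraph into the
# system prompt of a chat API request. The dict shape mirrors the Anthropic
# Messages API; model name and max_tokens are placeholder choices.

SYSTEM_PROMPT = (
    "Never ask questions or attempt to keep the conversation going -- "
    "answer the questions directly asked, and give additional information "
    "where it is likely to be helpful, but don't offer to do more things "
    "for the user."
)

def build_request(user_message: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble the JSON body for a single-turn request.

    The system prompt rides along with every call, so the instruction
    applies to the whole conversation rather than one message.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Summarize the trade-offs of hand-washing vs. machine car washes.")
```

The point is simply that the instruction lives in the `system` field rather than the user turn, so it persists across the whole exchange.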


Do you think the typos are helping or hurting output quality?

No idea, but I'll fix them just in case ^^'

That should be "research" and "straight" in the last sentence. Maybe that will improve it further?

Oops

"Be critical, not sycophantic" is, in my experience, a general improvement for the majority of tasks where you want to derive logic.


