Hacker News



I consider LLMs to be a very useful tool and use them every day. But if I sign a slip of paper saying I won't use them for some project, and then use them anyway, not merely using them but copying without even the pretense of putting it into my own words, then that's fraud. LLMs being a tool is completely orthogonal to this fraud.


This comment doesn't seem to fit the discussion at all?

The discussion is not about humans using LLMs to write papers. It is about humans who agreed not to use LLMs in reviewing papers, then did exactly that.


There's a lot of irony in a defensive comment being written based on misreading / inattentive reading of a post about reviewing papers (requiring attentive reading).


It might be that the paper's authors required others not to use LLMs when reviewing their work. Then, by the rule of reciprocity, they shouldn't use LLMs when reviewing others' work. The article is unclear on whether this implied reciprocity rule was explicitly stated.


It was. More details here: https://icml.cc/Conferences/2026/LLM-Policy

In particular: "Any reviewer who is an author on a paper that requires Policy A must also be willing to follow Policy A."


In addition to being a reviewer, they also submitted their own research to this conference. That raises the question: if they were willing to cheat on the review side, where the incentive is small, why wouldn't they cheat on the submission side, where the incentive is much larger?

(Meaning, your career doesn’t get boosted much for reviewing papers, but much more so for publishing papers)


A hammer can be used to build a house or to kill a person. We have a lot of history, law, and culture (and likely more) around using tools like hammers, so we know what counts as good use versus bad use. The same applies to many other tools.

LLMs can be very useful tools. However we also know there are a lot of bad uses and we are still trying to figure out where there are problems and where there are none.


This has nothing to do with whether it is ok to use AI or not, it is about whether it is ok to lie about using it.


They agreed to the no LLM policy.


> what's the problem?

Read the article. They self-selected into the no-LLM group and then copy/pasted from an LLM. Not only dishonest but just not smart.


Reading the article is exhausting. If I can leave a comment just as well without reading the article, then what's the problem? If I got something wrong, other people will point it out. That's a more efficient use of my time.

/s


Not to water down the snark, but isn't the cause of the situation described in the article exactly the mentality you are mocking?


I believe that's the joke, yes.


The issue is not the tool use - research is a small community and violating submission terms is gonna get you stuck in the naughty corner.



