Hacker News | bsza's comments

They were right back then because these tools didn't exist yet, and they're right today because they do now.

What even is your point? Are you... mad because the truthiness of a statement can change over time?


It definitely doesn’t help that prints from filament printers are very porous, 100% infill or not. Maybe sealing it with epoxy after printing would help?

This seems like another case where the hobby has discovered the 3D printer hammer and forgotten that CNC tools (lathes, milling machines) are often better and faster for the job. Or, if plastic is what you want, injection molding is something you can do on a hobby scale, and it is much better (though unlike the others, it isn't something where you can go straight from CAD to widget).

In my experience it is very rarely the case that setting up machine tools is faster than 3d printing. And even when it's faster, it's not less trouble. And you have to go and acquire materials in suitable shapes and sizes, and deal with cleaning up chips and offcuts, and deal with deburring and cleaning the part after it's finished.

The 3d printer is always ready and always has material in the right shape. It doesn't make a lot of noise, it doesn't make a lot of waste, the parts come off the machine clean and dry and ready to use. It's really hard to overstate the convenience of 3d printing.


It's also probably easier to migrate to if you have complex workflows as Forgejo Actions is designed to be similar to GH Actions. (Never actually tried it myself though, I switched to Woodpecker long ago.)

Nice find. I'm going to print this and put it on my wall.

haha, great one.

As captured by the Leunig cartoon "TV sunrise"

https://images.squarespace-cdn.com/content/v1/68809bfcd88dbd...


> ChatGPT equalizes intelligence

Yes, I love living in communism too. Imagine if you had to pay money for it or something. The wealthiest people would get unrestricted access to intelligence while the poor would get none. And the people in the middle would eventually find themselves unable to function without a product they can no longer afford. Chilling, huh? Good thing humans are known for sharing in the benefits of technological progress equally. /s


Huh?

Before ChatGPT it cost ~$100,000 to acquire intelligence good enough to solve this Erdős problem; now it costs ~$200.

I'm really confused at what you are even taking an issue with.


His core issue is jealousy and fear. I don't think these types of people are at the top of the intelligence curve (closer to the bottom, really), but that is orthogonal to my point. What I'm saying is that his personality archetype makes him think (keyword) he's at the top of the intelligence curve, and an equalization means, personally to him, that he's losing his edge.

More specific to HN is the archetype of: "I have spent years honing my craft as an expert programmer; my identity is predicated on being an expert programmer, in which high intelligence is causal and associated positively with my identity." That's why, ironically, most of HN was completely wrong about AI. They were wrong about driverless cars, they claimed vibe coding was trash. It's the people who think (keyword) they're stupid/average (aka the general public) who got it right... because perceptually they stand to gain from the equalization.

Anyway... this fear and jealousy is not something most humans can admit to themselves. Nobody will actually realize that these emotions drive their thinking. They have to lie to themselves and rationalize a different reality. That's why you get absurdist takes like this.

To everyone reading: it is obvious that ChatGPT does not equalize intelligence to the point of 100%. That statement is obviously not saying that. Everyone knows this. You want proof?

Look at the Declaration of Independence... without getting too pedantic: "All Men are created equal" is not saying all males are 100% equal. Everyone knows this. First off, no one is 100% equal... and second, the statement in a modern context is obviously not referring to only men. It is referring to women and men, and clearly men and women are nowhere near equal.

So if you all know this about the Declaration of Independence... how can you not see the same nuance in "ChatGPT equalizes intelligence"? First ask yourself... do you think you're smart? If you do, then the self-delusion I just described is likely happening with you.


What? The post is literally titled "Amateur armed with ChatGPT solves an Erdős problem". Stop spreading FUD about unaffordability.

They used ChatGPT Pro to solve it. Over 50% of people in the world couldn't afford ChatGPT Pro ($200/mo) even if they spent more than half of their income on it. [1]

What was that about "spreading FUD about unaffordability"?

[1] https://ourworldindata.org/grapher/share-living-with-less-th...


They didn't buy ChatGPT Pro themselves. You could've done the same as the students in the article and gotten a free subscription if you were interested in this instead of trolling.

> You could've done the same

Please show me the steps to get a $200 subscription for free that works 100% of the time regardless of who you are. I'm listening.


ChatGPT flattened the difference between top .0001 percentile mathematician and an amateur. This is the definition of making intelligence more available.

You are exaggerating the situation by essentially claiming that since some people can't afford 200 dollars, ChatGPT is not democratising intelligence. It's a bit strange to claim this, because according to you it only becomes affordable when the maximal number of people can afford it. It's a bit childish.

Directionally it is democratising. Are more people able to afford higher level intelligence? Yes.


> ChatGPT flattened the difference between top .0001 percentile mathematician and an amateur

It flattened the difference between a top epsilon percentile mathematician and an amateur with money. It didn't flatten the difference between an amateur with a little money and an amateur with a lot of money. It widened it. That's the part I'm scared about.

You are shrugging this off because it currently isn't that expensive. But we're talking about the massively subsidized price here, which is bound to get orders of magnitude higher when the bubble pops. Models are also likely to get much better. If it gets to a point where the only way to obtain exceptionally high intelligence is with an exceptionally high net worth and vice versa, how is that going to democratize anything?


What you are saying is similar to saying "computers and the internet don't democratise intelligence and access to information because some supercomputers exist". It's pedantic and frankly childish.

This is the most pedantic argument ever.

"All men are created equal" is obviously not literally saying all humans are 100% equal. Just like how "ChatGPT equalizes intelligence" is not saying ChatGPT equalizes the intelligence of all humans to a level of 100%.

I'm not going to spell out what I meant by: "ChatGPT equalizes intelligence". You can likely figure it out for yourself, because the problem doesn't have anything to do with your reading comprehension. The problem is more akin to self delusion, you don't want to face reality so you interpret the statement from the most absurdist angle possible.

The admins at HN actually noticed this tendency among people and encoded it into the rules: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."


It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out. If your response to that is to wave away more than 4 billion people, then you’re not even trying to look like you care about reality, you’re just trying to make yourself feel better with some made-up nonsense.

You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides. You don’t and you aren’t. You are at the mercy of trillion-dollar private companies that owe you nothing. Their products’ intelligence is not your intelligence. Whatever profits you’re seeing from it, it’s currently losing them money. And when that changes, so will your image of them as benefactors of humanity who make intelligence available to all.


It is fucking absurdist and pedantic when I hear this drivel coming out of the mouth of a hypocrite. You're already part of the privileged few. Every single thing that you do, from drinking clean water to writing your bullshit on the internet, is the result of your own arguments about distributing technology among the top percentage. And as a recipient of such benefits you should have the intelligence to see that even that much matters. Why don't you raise your shit against the assholes who are really making things unequal: internet service providers and their astronomical fees, which don't equalize the world enough for homeless people to have access to the internet. That's society's real problem according to your genius logic... so stop your tirade against AI, as there are bigger fish to fry.

> You seem to be under the misconception that you somehow “own” ChatGPT or are entitled to the insight it provides.

Right now, for the price of a new car, I can definitely get enough hardware to run a local LLM of ChatGPT quality at home. And this is just the status quo. The demand for this technology and the projected improvement in prices point to a future where you can run one for the price of a new computer. Wake up.

But who the fuck cares? The point is that AI is equalizing intelligence, and you're just throwing in tangents and side branches to try to disentangle the obvious general truth, which I will repeat: AI is fucking equalizing intelligence, and if you don't agree, you're absurd.


You open with an insult directed at the HN community. Then you call me names. Then you lecture me about HN guidelines. Then you post this.

Flagging because this kind of language has no place on HN.


Oh, if you're so butthurt by this, go ahead and call me names if you want. "Hypocrite" is not really that much of an insult, and it's true. You called someone absurd as well.

> Then you lecture me about HN guidelines.

Not a lecture. An example of how it's a well-known issue. I'm obviously not a rule follower myself, and your content is not really fit for HN either. Once you flag, the entire conversation is over. I don't really care, but if I were you I'd rather end the argument by being right instead of running away and tattling to the authorities. Up to you.

Maybe the admins come in and block the convo, delete it, and/or ban me. Who knows. I don't care. The fact of the matter is... I'm right, and you know it. Everything I said here is true, and you're turning to this tactic to end it because you can't face it.


<meta> You're incredibly rude but at the same time... 100% right. On first reading it was quite off-putting, but your conclusions are solid. Emotions take over rationality, and people, just like thinking models, reverse-engineer a logical-sounding explanation for their actions; they don't "expose" their internal chain of thought.

Maybe the models are closer to us than we're comfortable admitting.


> It is not “absurdist” to call out a baseless claim that doesn’t take into account over half of humanity, a percentage that will grow even further once investor money inevitably runs out.

I love the confidence behind this claim. You can run open models on your laptop today that are comparable to the best models from two years ago. But sure, spread your FUD about investor money running out.


I have nothing against open weight models, my issue is more with these mega-corporations posing as saviors of humanity. That said, how is your consumer hardware going to out-compete a datacenter when it has more mouths to feed per token than a datacenter? Who is going to give you money to run anything when a machine can do everything you can do?

No matter how you spin it, we humans are now becoming thermodynamically less efficient versions of LLMs. We contribute nothing of value to the system, so economics dictates we have no place in it except as investors. Skill is nothing now, and ownership is everything. So yeah, I'm afraid of the future. Call it FUD or whatever, I don't care.


> 5x5 isn't enough to draw "e" properly if you also want lowercase letters to have less height than uppercase

It can be enough if you "cheat" and make use of the horizontal space. This is how I did it in my font:

   ##
  # #
  ##  #
   ###
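If anyone wants to tinker with the trick, here's a minimal sketch (illustrative only; `GLYPH_E` and `render` are made up for this comment, not the font's actual storage format) that encodes the rows above as bitmasks and prints them back:

```python
# Each glyph row is a 7-bit mask, MSB = leftmost column. The "cheat"
# is that the lowercase "e" borrows horizontal space beyond a strict
# 5-column cell while staying short of the uppercase height.
GLYPH_E = [
    0b0001100,  # "   ##"
    0b0010100,  # "  # #"
    0b0011001,  # "  ##  #"
    0b0001110,  # "   ###"
]

def render(rows, width=7):
    """Render bitmask rows as '#'/' ' text, trimming trailing spaces."""
    return "\n".join(
        "".join("#" if (row >> (width - 1 - col)) & 1 else " "
                for col in range(width)).rstrip()
        for row in rows
    )

print(render(GLYPH_E))  # prints the same "e" glyph as the art above
```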

They also lead the world in EV production on paper, but in practice a large portion of those numbers might be driven by government pressure, not actual demand [1].

I’d personally take this data with a big grain of Goodhart’s law.

[1]: https://www.bloomberg.com/features/2023-china-ev-graveyards/


I’m guessing this comment was intended for a different post or as a reply to someone else’s comment.


I don’t see the relevance; the discussion is about whether boilerplate text that occurs intermittently in the output, purely for the sake of linguistic correctness or sounding professional, is of any benefit. Chain of thought doesn’t look like that to begin with; it’s a contiguous block of text.


To boil it down: chain of thought isn’t really chain of thought, it’s just more token generation output to the context. The tokens are participating in computations in subsequent forward passes that are doing things we don’t see or even understand. More LLM generated context matters.


That is not how CoT works. It is all in context, all influenced by context. This is a common and significant misunderstanding of autoregressive models, and I see it on HN a lot.
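To illustrate the autoregressive loop in the abstract (a toy sketch; `next_token` is a made-up stand-in for a real model's forward pass, not any actual API): every generated token, "reasoning" or otherwise, is appended to the context and conditions every later step.

```python
def next_token(context):
    # Placeholder: a real LLM would run a forward pass over the
    # entire context here. We just tag tokens by position.
    return f"t{len(context)}"

def generate(prompt, n):
    """Autoregressive generation: each step sees all prior tokens."""
    context = list(prompt)
    for _ in range(n):
        # CoT tokens accumulate in context just like any other token,
        # so they influence every subsequent prediction.
        context.append(next_token(context))
    return context

print(generate(["q"], 3))  # → ['q', 't1', 't2', 't3']
```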


"I don't see the relevance" -- and casually dismiss years of research without even trying to read those papers.


Then what is it? I'm seeing 4x5 transform matrices in the code, looks 4D enough to me.

