Hacker News | MindSpunk's comments

NAT is not a security device. The firewall, which will be part of any sane router alongside its NAT implementation, is a security device. NAT is not a firewall, though the two are usually bundled together.

Any sane router also uses a firewall for IPv6. A correctly configured router will deny unsolicited inbound traffic for both v4 and v6. You are not less secure on IPv6.
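To make that concrete, here's a minimal sketch of what such a default-deny policy looks like in nftables; the `inet` family applies the same rules to IPv4 and IPv6. Interface names here are placeholders:

```nft
# Sketch of a router's default-deny forward policy (illustrative only).
# "lan0" is a hypothetical LAN interface name.
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept   # replies to outbound connections
    iifname "lan0" accept                 # traffic originating on the LAN
  }
}
```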


A misconfigured firewall is a gaping hole. Misconfigured NAT just stops data from outside getting into your local network.

So a firewall is actually worse than NAT.


Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.

Personally I'd count "your security thing doesn't actually do the thing it's supposed to do" as being pretty bad on the security scale. At least people understand firewalls.


> Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.

Yes, that's called port forwarding, and it is a normal thing. You actually want that.


It will let them in without a port forward in place. The port forward just rewrites the destination IP and port on an incoming connection, nothing more.
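For illustration, that rewriting is all a port forward is at the packet level. An nftables sketch (addresses and ports are made up):

```nft
# DNAT sketch: an inbound connection to WAN port 8080 gets its destination
# rewritten to an internal host. Addresses here are hypothetical examples.
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat;
    tcp dport 8080 dnat to 192.168.1.10:80
  }
}
```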

Only if you can reuse an already-opened connection, but that will work with a firewall too.

You don't need any tricks like that. Regular new connections will work.

No it won't, because that's not how NAT works.

It will, and if you test it then it does.

NAT doesn't apply to inbound connections if you don't have a matching port forward rule, so it kind of doesn't matter how NAT works here. This is pure routing, not NAT.


All the names for waves come from different hardware and software vendors adopting names for the same or similar concept.

- Wavefront: AMD, comes from their hardware naming

- Warp: Nvidia, comes from their hardware naming for largely the same concept

Both of these were implementation details until Microsoft and Khronos enshrined them in the shader programming model, independent of the hardware implementation, so you get

- Subgroup: Khronos' name for the abstract model that maps to the hardware

- Wave: Microsoft's name for the same

They all describe mostly the same thing so they all get used and you get the naming mess. Doesn't help that you'll have the API spec use wave/subgroup, but the vendor profilers will use warp/wavefront in the names of their hardware counters.
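For a concrete anchor on what all these names describe: each API exposes lane-wide operations over that hardware group, e.g. a sum across all lanes is subgroupAdd in GLSL, WaveActiveSum in HLSL, and a warp-level reduction intrinsic in CUDA. Here's a toy scalar model of those semantics (plain CPU Rust, purely illustrative, not real GPU code):

```rust
// Toy model of a lane-wide reduction: every lane in a wave/subgroup/warp
// receives the sum of the values held by all lanes. On real hardware this
// happens in lockstep across the group; here we just broadcast the total.
fn subgroup_add(lanes: &[u32]) -> Vec<u32> {
    let total: u32 = lanes.iter().sum();
    vec![total; lanes.len()]
}

fn main() {
    // A 4-lane "wave" for readability; real waves are typically 32 or 64 wide.
    let lanes = [1, 2, 3, 4];
    let result = subgroup_add(&lanes);
    assert!(result.iter().all(|&x| x == 10)); // every lane sees the same sum
    println!("{:?}", result);
}
```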


You can add to this the Apple terminology, which is simdgroup. This reinforces your point – vendors have a tendency to invent their own terminology rather than use something standard.

Rule #1 in not getting involved in any patent lawsuit: don't use the same terminology as your competitors.

I have to give it to Apple though in this case. Waves or warps are ridiculously uninformative, while simdgroups at least convey some useful information.

Not recommending the Steins;Gate anime adaptation is pretty wild; it's an incredibly highly rated anime series. The storytelling languages of a VN and an anime are very different, so it's no surprise they don't perfectly capture the complexities of the other medium. They don't have to be the same to be worth watching.

fwiw: no idea on the other anime adaptations' quality


Oh yeah, the Steins;Gate animes are perfectly fine, to be clear! I was only thinking of the non-SG animes, which are... pretty messy!

The Chaos;Head anime barely makes any sense, even having read the VN, because it only got 12 episodes. I haven't finished Chaos;Child or Robotics;Notes, which seem fine so far; for C;C anyway, it doesn't quite feel the same.

There is Occultic;Nine, which is anime-only in the sense that the VN was never localised; plus the game itself isn't on Steam, so there's nothing to base a patch off of.

Anyway, take my thoughts with a grain of salt and not like some correct stance haha


Do you think the Chaos;Head anime could have been good like SG if it had had more episodes? Or is the fundamental nature of a VN story hard to adapt to a linear anime format?


It's actually the third highest-rated anime TV show of all time by user rating: https://myanimelist.net/topanime.php

And for a long stretch in the 2010s, it was #1.


Well, it's more because of its cult status (especially on imageboards) than actual objective appreciation.

Personally, I saw it again a year or two ago and it was good, but clearly (and not in a good way) a VN adaptation. Still worth watching.


I came into Steins;Gate completely cold. I watched it when it came out and I only just realised there's more to the universe. It's a ridiculously good anime, probably a top 10 for me. It's got a really cool storyline with loads of plot twists, interesting characters and deep mystery.


> Well, it's more because of its cult status (especially on imageboards) than actual objective appreciation.

No way. There are nearly 3M user ratings on that website. How many of those would you say belong to regular imageboard users? A few tens of thousands, tops? And when was the last time you saw a Steins;Gate meme?

Also, absolute meme shows which are imageboard fodder don't tend to rank very highly. See, e.g.: https://myanimelist.net/anime/32615/Youjo_Senki (#808 on the all-time list.)

People really liked the Steins;Gate anime on its merits.


This may well be, but Japanese imageboards are huge.


I get the feeling he was more so referring to the others, because the others besides S;G are pretty meh.


The hard part of Linux ports isn't the first 90% (using the Linux APIs). It's the second 90%.

Platform bugs, build issues, distro differences, implicitly relying on the behavior of Windows. It's not just "use the Linux API"; there's a lot of effort to ship properly. Lots of effort for a tiny user base. There are more users now, but Proton is probably a better target than native Linux for games.


It’s not really about OS differences - as the GP said, games don’t typically use a lot of OS features.

What they do tend to really put a strain on is GPU drivers. Many games and engines have workarounds and optimizations for specific vendors, and even driver versions.

If the GPU driver on Linux differs in behavior from the Windows version (and it is very, very difficult to port a driver in a way that doesn’t), those workarounds can become sources of bugs.


As if you don't get a jumble of UI frameworks on Linux too.

You can run KDE, but depending on the app and how it's containerized you'll get a Qt environment, a Qt environment that doesn't respect the system theme, random GTK apps that don't follow the system theme, or random GTK apps that only follow a light/dark mode toggle. The GTK apps render their own window decorations too. Sometimes the cursor will change size and theme depending on the window it's on top of.


Sure, but it's not baked into the system utilities like in Windows.


> Converting directX into Vulkan (potentially very large performance gains)

That's not at all how that works. DirectX12 isn't slow by any stretch of the imagination. In my personal and professional experience Vulkan is about on par depending on the driver. The main differences are in CPU cost, the GPU ultimately runs basically the same code.

There's no magic Vulkan can pull out of thin air to be faster than DX12, they're both doing basically the same thing and they're not far off the "speed of light" for driving the GPU hardware.


Not all games are DX12 though.


Emulating DX11 and below, as well as OpenGL, using Vulkan does not confer any performance benefits. In fact, it’s really hard to surpass them that way.

The performance benefits of Vulkan and DX12 come from tighter control over the hardware by the engine. An engine written for older APIs needs to be adapted to gain anything.


Being so absolutist is silly, but their counterargument is very weak. Can I invalidate any memory-safe language by dredging up old bug reports? Java had a bug once, so I guess it's over, everyone back to C. The argument is so thin it's hard to tell what they're trying to say.

It's just as reductive as the person they're replying to.


> Being so absolutist is silly but their counter argument is very weak.

The entire point is that being so absolutist is silly.

The comment reflects the previous poster's logic back at them so they (or others) can hopefully see how little sense it makes.

You seem to be trying to see some additional argument about rust being bad/invalid, but there isn't one... The reason that argument is, indeed, "very weak" and "so thin", as you say, is that it isn't even there at all.


> The entire point is that being so absolutist is silly.

You're misinterpreting what Rust people are telling you.

- Rust is safe lang

- Nah, C is safe if you're good

- Rust evangelist gestures towards billions of CVEs brought on by overly-sure C programmers

- Yeah, well, a version of Rust was unsafe for a few months ten years ago. Besides, Zig prevents more bugs than C and is the successor

- Rust person points to Bun's abysmal record

- Stop being absolutist.

The issue is that in C or Zig, few people can write mostly UB-free code. In Rust, anyone can write UB-free code as long as they don't reach for unsafe.
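As a small illustration of the difference (my own example, not anything from the thread): the same out-of-bounds access that is undefined behavior in C is either a compile error or a deterministic, handleable failure in safe Rust.

```rust
fn main() {
    let v = vec![1, 2, 3];
    let i = 10;
    // In C, reading v[i] with i out of range is undefined behavior: anything
    // can happen. In safe Rust, indexing is bounds-checked (a panic at worst),
    // and the non-panicking accessor turns the failure into an ordinary value.
    match v.get(i) {
        Some(x) => println!("v[{}] = {}", i, x),
        None => println!("index {} is out of bounds, caught safely", i),
    }
}
```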


It seems odd to me to put this much effort into misunderstanding what people are saying. You just end up talking past everyone, essentially talking to no one about nothing.


If it wasn't obvious from my ramble, Rust concerns are pragmatic, not absolutist. The only absolutism is that for memory safety to be truly upheld, you can't half-ass it (Zig) or ignore it (C).

Some properties are like that.


fwiw, the M139 engine they're putting on those AMGs is completely insane.

It's a production 2.0L 4-cylinder engine making (in the most powerful config) 350 kW. From the factory. Insane.


Toyota with the G family and JZ family + Nissan with the RB family too. They were prolific in RWD cars.

Daewoo put one in a FWD car in the mid 2000s for some reason too.


What is a safe ABI? An ABI can't control whether the parties on either end of the interface are honest.

You can't have safe dynamic linking; dynamic linking requires you to trust the library you load, with no ability to verify it.


> An ABI can't control whether one or both parties either end of the interface are honest.

You are aware that Rust already fails that without dynamic linking? The wrapper around the C getenv functionality was originally considered safe, despite every bit of documentation on getenv calling out thread safety issues.


Yes? That's called a bug. The standard library incorrectly labelled something as safe, and then changed it. The root cause was an unsafe FFI call which was incorrectly marked as safe.

It's no different than a bug in an unsafe pure Rust function.

I'm choosing to ignore that libc is typically dynamically linked, but linking in foreign code and marking it safe is a choice to trust the code. Under dynamic linking anything could get linked in, unlike static linking. At least a static link only includes the code you (theoretically) audited and decided is safe.
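For reference, here's roughly how the fix looks from the caller's side today (a sketch; the exact behavior depends on your toolchain and edition). Reading the environment stays a safe API, while mutating it was reclassified as unsafe in the 2024 edition, because libc's setenv can race with concurrent getenv calls.

```rust
use std::env;

fn main() {
    // Reading the environment is still exposed as a safe API.
    let path_len = env::var("PATH").map(|p| p.len()).unwrap_or(0);

    // Mutating it is the part that was reclassified: under the 2024 edition
    // this call must be wrapped in `unsafe`, because the underlying libc
    // setenv can race with getenv calls running on other threads.
    unsafe { env::set_var("EXAMPLE_VAR", "1") };

    println!("PATH bytes: {}, EXAMPLE_VAR: {:?}", path_len, env::var("EXAMPLE_VAR"));
}
```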

