NAT is not a security device. A firewall, which will be part of any sane router's NAT implementation, is a security device. NAT is not a firewall, but is often part of one.
Any sane router also uses a firewall for IPv6. A correctly configured router will deny inbound traffic for both v4 and v6. You are not less secure on IPv6.
Even a correctly-configured NAT will let connections in from outside, and a lot of people don't understand this.
Personally I'd count "your security thing doesn't actually do the thing it's supposed to do" as being pretty bad on the security scale. At least people understand firewalls.
NAT doesn't apply to an unsolicited inbound connection if you don't have a matching port-forward rule, so it kind of doesn't matter how NAT works here. This is pure routing, not NAT.
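For concreteness, here's a minimal sketch of what "deny inbound for both v4 and v6" looks like as an nftables ruleset. The table name and the `wan0` interface name are placeholders; the `inet` family covers IPv4 and IPv6 in one table, which is the point being made above.

```
# Minimal default-deny inbound ruleset (nftables sketch, names are placeholders)
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # replies to traffic we initiated
    iifname "lo" accept                   # loopback
    # No accept rule for unsolicited traffic arriving on wan0:
    # it falls through to the drop policy, for v4 and v6 alike.
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept   # LAN-initiated connections can return
  }
}
```

Note there's no NAT in this snippet at all; the filtering is the firewall's job regardless of whether a `nat` table exists.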
All the names for waves come from different hardware and software vendors adopting names for the same or similar concept.
- Wavefront: AMD, comes from their hardware naming
- Warp: Nvidia, comes from their hardware naming for largely the same concept
Both of these were implementation details until Microsoft and Khronos enshrined them in the shader programming model, independent of the hardware implementation, so you get:
- Subgroup: Khronos' name for the abstract model that maps to the hardware
- Wave: Microsoft's name for the same
They all describe mostly the same thing so they all get used and you get the naming mess. Doesn't help that you'll have the API spec use wave/subgroup, but the vendor profilers will use warp/wavefront in the names of their hardware counters.
You can add to this the Apple terminology, which is simdgroup. This reinforces your point – vendors have a tendency to invent their own terminology rather than use something standard.
I have to give it to Apple though in this case. Waves or warps are ridiculously uninformative, while simdgroups at least convey some useful information.
Not recommending the Steins;Gate anime adaptation is pretty wild; it's an incredibly highly rated anime series. The storytelling language of a VN and an anime are very different, so it's no surprise they don't perfectly capture the complexities of the other medium. They don't have to be the same to be worth watching.
fwiw: no idea on the quality of the other anime adaptations
Oh yeah, the Steins;Gate animes are perfectly fine, to be clear! I was only thinking of the non-SG animes, which are... pretty messy!
The Chaos;Head anime barely makes any sense even if you've read the VN, because it only got 12 episodes. I haven't finished Chaos;Child or Robotics;Notes, which seem fine so far; for C;C anyway, it doesn't quite feel the same.
There is Occultic;Nine, which is anime-only in that the VN was never localised; plus the game itself isn't on Steam, so there's nothing to base a patch off of.
Anyway, take my thoughts with a grain of salt and not like some correct stance haha
Do you think the Chaos;Head anime could have been good like SG if it had had more episodes? Or is the fundamental nature of a VN story hard to adapt to a linear anime format?
I came into Steins;Gate completely cold. I watched it when it came out and I only just realised there's more to the universe. It's a ridiculously good anime, probably a top 10 for me. It's got a really cool storyline with loads of plot twists, interesting characters and deep mystery.
> Well, it's more because of its cult status (especially on imageboards) than actual objective appreciation.
No way. There are nearly 3M user ratings on that website. How many of those would you say belong to regular imageboard users? A few tens of thousands, tops? And when was the last time you saw a Steins;Gate meme?
The hard part of Linux ports isn't the first 90% (Using the Linux APIs). It's the second 90%.
Platform bugs, build issues, distro differences, implicitly relying on Windows behavior. It's not just "use the Linux API"; there's a lot of effort to ship properly. Lots of effort for a tiny user base. There are more users now, but Proton is probably a better target than native Linux for games.
It’s not really about OS differences - as the GP said, games don’t typically use a lot of OS features.
What they do tend to really put a strain on is GPU drivers. Many games and engines have workarounds and optimizations for specific vendors, and even driver versions.
If the GPU driver on Linux differs in behavior from the Windows version (and it is very, very difficult to port a driver in a way that doesn’t), those workarounds can become sources of bugs.
As if you don't get a jumble of UI frameworks on Linux too.
You can run KDE, but depending on the app and how it's containerized you'll get a Qt environment, a Qt environment that doesn't respect the system theme, random GTK apps that don't follow the system theme, or random GTK apps that only follow a light/dark mode toggle. The GTK apps render their own window decorations too. Sometimes the cursor will change size and theme depending on the window it's on top of.
> Converting directX into Vulkan (potentially very large performance gains)
That's not at all how that works. DirectX12 isn't slow by any stretch of the imagination. In my personal and professional experience, Vulkan is about on par, depending on the driver. The main differences are in CPU cost; the GPU ultimately runs basically the same code.
There's no magic Vulkan can pull out of thin air to be faster than DX12, they're both doing basically the same thing and they're not far off the "speed of light" for driving the GPU hardware.
Emulating DX11 and below, as well as OpenGL, using Vulkan does not confer any performance benefits. In fact, it’s really hard to surpass them that way.
The performance benefits of Vulkan and DX12 come from tighter control over the hardware by the engine. An engine written for older APIs needs to be adapted to gain anything.
Being so absolutist is silly, but their counter-argument is very weak. Can I invalidate any memory-safe language by dredging up old bug reports? Java had a bug once; I guess it's over, everyone back to C. The argument is so thin it's hard to tell what they're trying to say.
It's just as reductive as the person they're replying to.
> Being so absolutist is silly but their counter argument is very weak.
The entire point is that being so absolutist is silly.
The comment reflects the previous poster's logic back at them so they (or others) can hopefully see how little sense it makes.
You seem to be trying to see some additional argument about Rust being bad/invalid, but there isn't one... The reason that argument is, indeed, "very weak" and "so thin", as you say, is that it isn't even there at all.
It seems odd to me to put this much effort into misunderstanding what people are saying. You just end up talking past everyone, essentially talking to no one about nothing.
If it wasn't obvious from my ramble, Rust concerns are pragmatic, not absolutist. The only absolutism is that for memory safety to be truly upheld, you can't half-ass it (Zig) or ignore it (C).
> An ABI can't control whether one or both parties at either end of the interface are honest.
You are aware that Rust already fails that without dynamic linking? The wrapper around the C getenv functionality was originally considered safe, despite every bit of documentation on getenv calling out thread-safety issues.
Yes? That's called a bug? The standard library incorrectly labelled something as safe, and then changed it. The root was an unsafe FFI call which was incorrectly marked as safe.
It's no different than a bug in an unsafe pure Rust function.
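The shape of that bug can be sketched like this (illustrative names only, not the actual std source): a safe-looking signature over an `unsafe` block that carries a hidden precondition the wrapper never enforces, exactly analogous to std originally exposing the environment as if libc's getenv/setenv were thread-safe.

```rust
// Illustrative only: a wrapper incorrectly marked safe, analogous to the
// getenv case. The `unsafe` block has a precondition the signature hides.
fn first_byte(bytes: &[u8]) -> u8 {
    // BUG: undefined behavior when `bytes` is empty. A safe fn promises
    // soundness for *all* inputs, so marking this safe is a bug in the
    // wrapper, not a failure of Rust's safety model itself.
    unsafe { *bytes.as_ptr() }
}

// The fix mirrors what std did with the environment APIs: either mark the
// function `unsafe fn` (push the contract to the caller) or enforce the
// precondition internally, as here:
fn first_byte_fixed(bytes: &[u8]) -> Option<u8> {
    bytes.first().copied()
}

fn main() {
    assert_eq!(first_byte(b"abc"), b'a');        // fine: precondition holds
    assert_eq!(first_byte_fixed(b""), None);     // fixed version handles bad input
    println!("ok");
}
```

Fixing it means changing the label, not the language: the unsound thing was always the claim of safety, which is precisely why the stdlib's correction was "just a bug fix".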
I'm choosing to ignore that libc is typically dynamically linked, but linking in foreign code and marking it safe is a choice to trust the code. Under dynamic linking anything could get linked in, unlike static linking. At least a static link only includes the code you (theoretically) audited and decided is safe.