Tinfoil hat mode: a competitor wants to exploit copy.fail on some ubuntu servers, and is DDoSing canonical so that they can't update and thus patch the vuln
Double tinfoil hat mode: an attacker learned of my plan to finally update my personal computer out of 20.04 today and is DDoSing canonical so I can't do that and I remain vulnerable to the backdoors they've found.
If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly, it is not a special bug. If anyone gets compromised by that vuln their node architecture was broken anyway, patching copy.fail doesn't help.
Yeah you need native code execution, and if you have AF_ALG access there is clearly no sandboxing in place. At that point it's game over on Linux, there are too many bugs. Even if you fix all the known ones in the current kernel, by the time the version with those fixes is qualified and released (not to mention, the machine must reboot), new LPEs have been discovered.
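To make the "AF_ALG access" point concrete: the question is simply whether an unprivileged workload can create a kernel crypto API socket at all. A minimal probe (a sketch, not an exploit; the `af_alg_reachable` helper name is mine, and it only checks socket creation, not any specific vuln):

```python
import socket

def af_alg_reachable() -> bool:
    """Probe whether this process can open an AF_ALG socket,
    i.e. whether the kernel crypto API surface is exposed to it."""
    if not hasattr(socket, "AF_ALG"):
        return False  # non-Linux platform or a Python build without AF_ALG
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except OSError:
        return False  # blocked by seccomp, an LSM policy, or kernel config
    s.close()
    return True

print("AF_ALG reachable:", af_alg_reachable())
```

If this prints `True` for a workload that is supposed to be sandboxed, that workload can reach exactly the kind of obscure kernel attack surface being discussed here; a sandbox worth the name (seccomp filter, SELinux policy, or a VM boundary) makes it print `False`.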
Look at the CVE database. Most of those UAFs are LPE. Many of the OOBs and many of the race conditions too. These are fixed in Linus' master but you are running an old kernel.
Then look at the KASAN reports on the syzkaller dashboard. Many of them are LPEs. Many of the WARNs and crashes are revealing an underlying bug that is also an LPE. Most of these never get fixed.
Then try pointing your LLM at the codebase and saying "find an LPE". It will find as many as you want (you will exhaust your tokens long before it stops finding bugs). 99.99% of them will be bogus, so you need a way to evaluate them at scale; currently this is the weakest part of the approach, but we'll get better at it.
I can't actually point you to a list of confirmed LPEs coz the only way they get confirmed is when someone exploits them, but there aren't enough exploit authors to do this for all of them. If inference gets really cheap and someone builds a really good agent harness we might start to see it get automated at some point.
My mind immediately went to chaining this with another recent vulnerability in the Ninja Forms - File Upload plugin [0]
> This makes it possible for unauthenticated attackers to upload arbitrary files on the affected site's server which may make remote code execution possible.
So, upload and execute a script that exploits copy.fail and even if you're only executing as www-data or another restricted user that "can't" sudo -- suddenly, uid=0!
Yes but what I'm saying is that copy.fail is a minor detail in this scenario.
If you are running Ninja Forms you need to run it in its own VM so that if it gets compromised _you don't care if it has uid=0_.
You need to do that regardless of copy.fail. Now that you've patched copy.fail, there are loads and loads of other vulns that can be used the same way.
In what way is it "not a special bug"? It's a publicly known root-from-RCE exploit. Those cannot be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected, and whose patching could be delayed. I could find any place running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
It's not RCE, it's an LPE in an obscure corner of the kernel attack surface that no sensible application depends on. They are absolutely a dime a dozen.
Even just in AF_ALG there have been several such vulns fixed in 2026 already. Kernel wide probably hundreds. It's true that most of them will be harder to exploit than this one but that just means you need to prompt your AI a bit harder to get an exploit. (To be fair, in a lot of cases it's gonna be hard to escalate privs without crashing the machine).
Ubuntu has userns restrictions now, which take away the main sources of LPEs (random qdiscs, nftables, all that garbage), but there are still huge numbers of these vulns.
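For anyone wanting to check their own box: the restrictions in question are exposed as sysctl knobs, and which knobs exist depends on distro and kernel version. A quick probe (the knob paths are the standard procfs locations; the `userns_knobs` helper is just illustrative):

```python
from pathlib import Path

# Knobs that gate or limit unprivileged user namespaces.
# Which of these exist depends on the distro and kernel version.
KNOBS = [
    "/proc/sys/kernel/apparmor_restrict_unprivileged_userns",  # Ubuntu 23.10+
    "/proc/sys/kernel/unprivileged_userns_clone",              # Debian patch / older Ubuntu
    "/proc/sys/user/max_user_namespaces",                      # upstream limit knob
]

def userns_knobs() -> dict:
    """Return each knob's current value, or None if it isn't present."""
    return {
        k: Path(k).read_text().strip() if Path(k).exists() else None
        for k in KNOBS
    }

for knob, value in userns_knobs().items():
    print(knob, "=", value if value is not None else "(not present)")
```

On a recent Ubuntu you'd expect `apparmor_restrict_unprivileged_userns = 1`, which is what cuts off the userns-gated LPE sources mentioned above.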
This is why platforms that do native untrusted code execution have extreme sandboxing. Note Android and ChromeOS aren't affected coz they already knew this code was broken and hide it from unpriv workloads.
You can't run untrusted code on Linux without either a very very carefully designed sandboxing layer (like Android/ChromeOS) or virtualization. copy.fail is just one among tens of thousands of reasons for this, and it's a pretty uninteresting one at that.
What is "special" depends on your usecase but for my job it's mostly about stuff that's exposed to KVM guests. Biggest source of concerning vulns for us is probably vhost. I expect there are also lots of undiscovered and scary vulns in places like virtiofs, vfio, DAX, and wherever we do device passthrough.
> I could find any place running containerized services and exfiltrate secrets from parallel services, no?
Yes. Regardless of copy.fail. Cloud providers don't do that without a VM layer. (If yours does, you need to switch).
The cope of some people is insane. Why even have UID:GID? All you need is 0:0. I always tell people to run everything as root because there is literally no point.
Well, there's still value in users and namespaces! Just, it's not a strong security boundary.
Also even if it's not strong, it doesn't mean it's entirely worthless. You can't rely on it, but it's usually free and it still buys you time / increases attack cost.
Like, if you leave 100k cash in a car on the street in SF, that's dumb. If you really need to do that for some strange reason, you should hire a security guard to watch your car, because cars are not a good security boundary. BUT, that doesn't mean you would leave the car unlocked just coz someone's watching it!
They're not a dime a dozen exactly, but LPE bugs in Linux (and common Linux distros) are easily common enough that nobody sane relies on user isolation as a serious security boundary.
Clouds use VMs as the security barrier, which is also not always 100% perfect, but is much better.
It could be useful as part of an exploit chain but generally once you've got to local code execution it's not going to be difficult to get further.
A "special" bug would be something that defeats a security barrier that people actually use, e.g. something that works remotely, or as you say - a hypervisor hack.
Seems reasonable to assume it's something to do with the recently publicized exploits. More likely, this could be an extortion attempt by criminals rather than a competitor.
Or the fact that if they sell all their RAM without putting it in devices, they won’t be able to sell devices, and some portion of their customer base will leave their ecosystem, possibly forever.
And you think this is the first sign that they’ve decided they’re going to spend the next few years being a RAM reseller before starting to sell consumer products again?
No, but "shipping less RAM" is clearly on that spectrum. The point wasn't about literal product strategy, it's that there's a limit to what actions are financially feasible and it's set by "what else could you do with that junk?"
That's my whole point. M3 Max 128GB -> M3 Ultra 512GB. M5 Max 128GB -> M5 Ultra 512GB. But if M5 Max 192GB -> M5 Ultra 768GB, i.e. Ultra having 4x the memory of Max.
Shouldn't futures contract sellers be smart enough to take these aspects into account? You might not pay the spot price, but overall you will cover it, since those selling futures are there to make money: they charge more than they pay for the power.