https://github.com/doy/rbw is a Rust alternative to the Bitwarden CLI. Although the Rust ecosystem is moving in NPM's direction (very large and very deep dependency trees), you still need to trust far fewer authors in your dependency tree than is common for JavaScript.
326 packages right now when doing a build. Seems large in general, but for a Rust project, not abnormal.
Takes what, maybe 15 seconds to compile on a high-core machine from scratch? Isn't the end of the world.
Worse is the scope of having to review all those things. If you'd like to use it for your main passwords, that'd be my biggest worry. Luckily most are well established already, as far as I can tell.
Why are you talking about compile times in a thread about supply chain security?
326 packages is approximately 326 more packages than I will ever fully audit to a point where my employer would be comfortable with me making that decision (I do it because many eyes make bugs shallow).
It's also approximately 300 more than the community will audit, because it will only be "the big ones" that get audited, like serde and tokio.
I don't see people rushing to audit `zmij` (v1.0.19), despite it having just as much potential to backdoor my systems as tokio does.
"326 seems large, but not abnormal" was the state of JS in the past as well.
Chance of someone auditing all of them is virtually zero, and in practice no one audits anything, so you are still effectively blindly trusting that none of those 326 got compromised.
It is baffling to me that a language that is as focused on safety/security as Rust decided to take the JavaScript approach to their ecosystem. I find it rather contradictory.
I doubt Microsoft's kernel/system Rust code is pulling in a lot of crates. The Linux kernel sure isn't, and Android's Bluetooth stack doesn't seem to either.
Using crates is a choice. You can write fully independent C++ or you can pull in Boost + Qt + whatever libraries you need. Even for C programs, I find my package manager downloading tons of dependencies for some programs, including things like full XML parsers to support a feature I never plan to use.
JavaScript was one of the first languages to highlight this problem with things like left-pad, but the xz backdoor showed that it's also perfectly possible to carry out the same attack on highly audited programs written in a systems language that doesn't even have a package manager.
That's because you're mixing things up. "Rust the language" isn't the one starting new projects and adding new dependencies that have hundreds of dependencies of their own; this is the doing of developers. And the developers who built Rust with a focus on safety and security aren't the same developers.
Design decisions have predictable consequences. Large masses of people, who make up an ecosystem like that of a programming language community, respond predictably to their environment. Each individual programmer has a choice, sure, but you can't just "individual responsibility" your way out of the predictable consequences of incentive structures.
That's true. But it does seem like a logical result of having no real standard library. That one fact has kept me away from Rust for real projects, because I don't want to pull in a bunch of de-facto-standard-but-not-official dependencies for simple tasks. That's probably a large contributor to the current state of dependency bloat.
Yeah, it does require you to be meticulous about what you depend on. Personally I stick with libraries that don't use hundreds of other crates, and having tried and reviewed various libraries over the years, you end up with a "toolkit" of libraries you know are well built and whose internals you understand.
Ultimately, in any language you get the sort of experience you build for yourself with the environment you set up. It is possible in most languages to be more conservative and minimal even if the ecosystem at large is not, but it does require more care and time.
'No real standard library' doesn't seem entirely fair. Rust has a huge standard library. What it does have is a policy of only including "mature" things with little expected API evolution in the standard library, which leaves gaping holes where a JSON parser, an HTTP client, or a logging library should be. Those are exactly the de-facto-standard-but-not-official dependencies.
It's a sign that they learned from Python more than anything else. Better to be conservative than to end up in Python's situation: multiple versions of common functionality in the stdlib that almost nobody uses, with everyone going for 3rd-party libraries anyway. Is that a better state of affairs?
The Rust vs. Node comparison seems very shallow to me, and it seems to require a lot of eye squinting to work.
People have beef with Rust in other, more emotional ways, and welcome the opportunity to pretend they dislike it on seemingly-rational grounds a la "Node bad amirite lol".
I’m not the person you asked, but given the choice I avoid a language without JSON parsing officially supported because I need that frequently. It’s the reason I never picked up Lua, despite being interested in it.
Interesting, thanks for sharing your anecdote. Upvoted.
I am openly admitting I don't care. Such libraries are in huge demand, and every programming language ecosystem gains them quite early. So to me the risk of malicious code in them is negligibly small.
To me it’s not just the risk of malicious code, but also convenience. For example, if I’m using a scripted language and sharing it in some form with users, I don’t want to have to worry about keeping the library updated, and fight with the package manager, and ship extraneous files, and…
I think there's actually a decent compromise one could make here (not that Rust did, mind), and it's what I'm planning for my own language, in big part to avoid the Rust/NPM/etc situation.
TL;DR, the official libraries are going to be split into three parts:
---
1) `core.*` (or maybe `lang.*` or `$MYLANGUAGE.*` or w/e, you get the point) this is the only part that's "blessed" to be known by the compiler, and in a sense, part of the compiler, not a library. It's stuff like core type definitions, interfaces, that sort of stuff. I may or may not put various intrinsics here too (e.g. bit count or ilog2), but I don't know yet.
Reserved by the compiler; it will not allow you to add custom stuff to it.
There is technically also a "pseudo-package" of `debug.*` ("pseudo" in the sense that you must always use it in the full prefixed form, you can't import it), which is just going to be my version of `__LINE__` and similar. Obviously blessed by compiler by necessity, but think stuff like `debug.file` (`__FILE__`), `debug.line` (`__LINE__`), `debug.compiler.{vendor,version}` (`__GNUC__`, `_MSC_VER`, and friends). `debug` is a keyword, which makes it de-facto non-overridable by users (and also easy for both IDEs and compiler to reason about). Of course I'll provide ways of overriding these, as to not leak file paths to end users in release builds, etc.
(side-note: since I want reproducible builds to be the default, I'm internally debating even having a `debug.build.datetime` or similar ... one idea would be to allow it but require explicitly specifying a datetime [as build option] in such cases, lest it either errors out, or defaults to e.g. 1970-01-01 or 2000-01-01 or whatever for reproducibility)
---
2) `std.*`, which is minimal, 100% portable (to the point where it'd probably even work in embedded [in the microcontroller sense, not "embedded Linux" sense] systems and such --- though those targets are, at least for now, not a primary goal), and basically provides some core tooling.
Unlike #1, this is not special to the compiler ... the `std.*` package is de jure reserved, but that's not actually enforced at a technical level. It's bundled with the language, and included/compiled by default.
As a rule (of thumb, admittedly), code in it needs to be inherently portable, with maybe a few exceptions here or there (e.g. for some very basic I/O, which you kind of need for debugging). Code is also required to have no external (read: native/upstream) dependencies whatsoever (other than maybe libc, libdl, libm, and similar things that are really more part of the OS than any particular library).
All of `std.*` also needs to be trivially sandboxable --- a program using only `core.*` & `std.*` should not be able to, in any way, affect anything outside of whatever the host/parent system told it that it can.
---
3) `etc.*`, which actually work a lot like Rust/Cargo crates or npm packages in the sense that they're not installed by default ..... except that they're officially blessed. They likely will be part of a default source distribution, but not linked to by default (in other words: included with your source download, but you can't use them unless you explicitly specify).
This is much wider in scope, and I'm expecting it to have things like sockets, file I/O (hopefully async, though it's still a bit of a nightmare to make it portable), downloads, etc. External dependencies are allowed here --- to that end, a downloads API could link to libcurl, async I/O could link to libuv, etc.
---
Essentially, `core.*` is the "minimal runtime", `std.*` is roughly a C-esque (in terms of feature count, or at least dependencies) stdlib, and `etc.*` are the Python-esque batteries.
Or to put it differently: `core.*` is the minimum to make the language run/compile, `std.*` is the minimum to make it do something useful, and `etc.*` is the stuff to make common things faster to make. (roughly speaking, since you can always technically reimplement `std.*` and such)
I figured keeping them separate allows me to provide for a "batteries included, but you have to put them in yourself" approach, plus clearly signaling which parts are dependency-free & ultra-sandbox-friendly (which is important for embedding in the Lua/JavaScript sense), plus it allows me to version them independently in cases of security issues (which I expect there to be more of, given the nature of sockets, HTTP downloads, maybe XML handling, etc).
Cargo made its debut in 2014, two years before the infamous left-pad incident, and three years before the first large-scale malicious typosquatting attacks hit PyPI and NPM. The risks were not as well understood then as they are today. And even today it is very far from being a solved problem.
You can write a simple HTTP server or REST client with the stdlib in Go. No need to include tokio, serde, and a hundred other crates that constantly break things. Apps I wrote in Go more than a decade ago still work the same with a recent version of Go, whereas I've had issues getting a few-year-old Rust projects from GitHub to compile and work.
> 326 packages right now when doing a build. Seems large in general, but for a Rust project, not abnormal.
That's a damning indictment of Rust. Something as big as Chrome has, IIRC, a few thousand dependencies. If a simple password manager CLI has hundreds, something has gone wrong. I'd expect only a few dozen.
Does this take into account feature flags when summing LOC? It's common practice in Rust to really only use a subset of a dependency, controlled by compile-time flags.
My experience has been that while there's significant granularity in terms of features, in practice very few people actively go out of their way to prune the default set because the ergonomics are kind of terrible, and whether or not the default feature set is practically empty or pulls in tons of stuff varies considerably. I felt strongly enough about this that I wrote up my only blog post on this a bit over a year ago, and I think most of it still applies: https://saghm.com/cargo-features-rust-compile-times/
For a given tool, I'd expect the Rust version to have even more deps than the JS version because code reuse is more important in a lower-level language. I get the argument that JS users are on average less competent than Rust users, but we're talking about authors who build serious tools/libs in the first place.
Frustratingly, they're not by default though; you need to explicitly use `--locked` (or `--frozen`, which is an alias for `--locked --offline`) to avoid implicit updates. I've seen multiple teams not realize this and get confused about CI failures from it.
The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
> The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
Not quite: "1.2.3" = "^1.2.3" = ">=1.2.3, <2.0.0" in Cargo [0], and "1.2" = "^1.2.0" = ">=1.2.0, <2.0.0", so you get the "1.x.x" behavior either way. If you actually want the "1.2.x" behavior (e.g., I've sometimes used that behavior for gmp-mpfr-sys), you should write "~1.2.3" = ">=1.2.3, <1.3.0".
I don't know how I got this wrong because I literally went and looked at that page to try to remind myself, but I somehow misread it, because you're definitely right. This probably isn't the first time I've gotten this wrong either.
From thinking it through more closely, it does actually seem like it might be a little safer to avoid specifying the patch version; it seems like putting 1.2.3 would fail to resolve any valid version in the case that 1.2.2 is the last non-yanked version and 1.2.3 is yanked. I feel like "1.2.3" meaning "~1.2.3" would have been a better default, since it at least provides some useful tradeoff compared to "1.2", but with the way it actually works, it seems like putting a full version with no operator is basically worse than either of the other options, which is disappointing.
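For reference, the operator semantics discussed above (per the Cargo book) can be summarized in a Cargo.toml fragment; the crate names here are made up for illustration:

```toml
[dependencies]
# "1.2.3" is shorthand for "^1.2.3": >=1.2.3, <2.0.0
example-a = "1.2.3"

# "1.2" is shorthand for "^1.2.0": >=1.2.0, <2.0.0 (same major-version range)
example-b = "1.2"

# Tilde restricts updates to the patch level: >=1.2.3, <1.3.0
example-c = "~1.2.3"

# Exact requirement: only version 1.2.3 will ever be selected
example-d = "=1.2.3"
```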
Are we talking about `cargo build` here? Because my understanding is that if a lockfile is present and `Cargo.toml` hasn't changed since the lockfile was created then the build is guaranteed to use the versions in the lockfile.
If however `Cargo.toml` has changed then `cargo build` will have to recalculate the lockfile. Hence why it can be useful to be explicit about `cargo build --locked`.
I haven't heard anything about this, but I really wish it was there by default. I don't think the way it works right now fits anyone's expectations of what the lockfile is supposed to do; the whole point of storing the resolved versions in a file is to, well, lock them, and implicitly updating them every time you build doesn't do that.
Since you're here, and you happened to indirectly allude to something that seems to have become increasingly common in the Rust world nowadays, I can't help but be curious about your thoughts on libraries checking their lockfiles into version control. It's not totally clear to me exactly when or why it became widespread, but it used to be relatively rare for me to see in open source libraries in the first few post-1.0 years of Rust, whereas at this point I think it's more common for me to see than not.
Do you think it's an actively bad practice, completely benign, or something in between, where it makes sense in some cases but probably should still be avoided in others? Offhand, the only variable I can think of that might influence a different choice is closed-source packages being reused within a company (especially if trying to interface with other package management systems, which I saw firsthand when working at AWS but I'm guessing is something other large companies would also run into), but I'm curious if there are other nuances I haven't thought of.
Sure, but according to semver it's also totally fine to change a function that returns a Result to start returning Err in cases that used to be Ok. Semver might be able to protect you from your Rust code not compiling after you update, but it doesn't guarantee it will do the same thing the next time you run it. While changes like that could still happen in a patch release, I'd argue that you're losing nothing by forgoing new API features if all you're doing is recompiling the existing code you have without making any changes, so only getting patches and manually updating for anything else is a better default. (That said, one of the sibling comments pointed out I was actually wrong about the implicit behavior of Cargo dependencies, so what I recommended doesn't protect from anything, but not for the reasons it sounds like you were thinking.)
Some people might argue that changing a function to return an error where it didn't previously would be a breaking change; I'd argue that those people are wrong about what semver means. From what I can tell, people having their own mental model of semver that conflicts with the actual specification is pretty common. Most of the time when I've had coworkers claim that semver says something that actively conflicts with what it says, after I point out the part of the spec that says something else, they end up still advocating for what they originally had said. This is fine, because there's nothing inherently wrong with a version schema other than semver, but I try to push back when the term itself gets used incorrectly because it makes discussions much more difficult than they need to be.
For any of my Rust projects I really don't bump my deps unless dependabot shows a serious vulnerability or I want to use a newly added feature. Outside of that, my deps are locked to the last known good version I use.
Does it support autofill for other apps on mobile? I'd argue that putting passwords in your phone clipboard could itself be risky (although for someone who's extremely security conscious, maybe discouraging using apps isn't a downside)
Yes, weirdly enough at the time there was no reply button, I thought HN comments had a maximum nested depth, but now it has a reply button and so does yours. Weird.
Ah, no worries! Replies seem to get throttled sometimes when the site detects a lot of nested replies quickly and it intentionally delays the ability to reply a bit. I've always assumed that it's intended as a way to try to mitigate threads that potentially are devolving into flamewars.
Different things. "Rust is safer" generally means memory safety i.e. no double-free, no use-after-free, no buffer-/under-flows, and the like. The safety you seem to have in mind is "minimal dependency count".
That's my concern too. Rust has the same dependency concerns, which is how hackers get into code. VaultWarden has the same Rust dependency concern. Ironically, we're entering an age where C/C++ seems to have everything figured out from a dependency standpoint.
A few months ago I tried to build a .NET package on Linux, and the certificate revocation checks for the dependencies didn't complete even after several minutes. Eventually I found out about the option `NUGET_CERTIFICATE_REVOCATION_MODE=offline`, which managed to cause the build to complete in a sane amount of time.
It's hard for me to take seriously any suggestion that .NET is a model for how ecosystems should approach dependency management based on that, but I guess having an abysmal experience whenever there are dependencies is one way to avoid risks. (I would imagine it's probably not this bad on Windows, or else nobody would use it, but at least personally I have no interest in developing on a stack that I can't expect to work reliably out of the box on Linux.)
Property-based testing is nice, but making it coverage-driven is a game changer. It will explore code paths that naive random inputs will not trigger in a thousand years. In Rust this works very well with libFuzzer and the Arbitrary crate to derive the generators.
If you run your Hegel tests in Antithesis, you get this for free (along with various sorts of “non-local” assertions, perfect reproducibility even for concurrent or distributed code, etc.).
But yeah, not hard to hack together basic coverage guidance outside Antithesis. That works well for large classes of programs, just not a majority of them.
I contacted the EU DMA team about my concerns and got a real reply within 24 hours. Not just an automated message, it looked like a real human read my message and wrote a reply. I'd urge other EU citizens to do the same.
Great idea, I just did the same. I encourage other EU citizens to do the same. Keeping at least one of the two major mobile ecosystems open is important.
(And install GrapheneOS, the more successful open Android becomes, the better.)
True. I'm really happy that they are working with an OEM to bring an alternative in 2027. Until then:
- A refurbished Pixel works (except some weird Verizon locking that I heard about the other day).
- Pixels get really heavily discounted near the end of the cycle (e.g. the 9a currently). Google probably doesn't make much on one if you are opting out of their ecosystem.
Still, you are stopping the extraction of analytics, which probably brings Google much more revenue over the longer term, and which is not possible to disable on regular Android phones.
Remember that on every certified Google Android phone, Google Play Services runs with system-level privileges. On GrapheneOS, it is sandboxed like pretty much any other app (if you choose to install Play Services) and you can make it 'blind' by revoking most privileges.
Same for Pixel Camera, etc., I just block network access.
Done! I wrote up both my concerns about this and how it affects app/app-store market competition, and how limitations like Play Integrity encourage apps to block usage on non-Google approved devices as well, since that's anti-competitive within the mobile device & OS market (blocking GrapheneOS, Waydroid, etc).
Supporting free competition with and within the Android market is in theory what these teams are all about so hopefully with enough voices they'll push harder on it. I'd love to see a shift here that makes non-Google/Apple-controlled mobile a possible option (even if it's a Linux-on-desktop-style niche for the foreseeable future)
Well, Google has marketed Android as an open source operating system (AOSP) and touted the system's openness [1], and encouraged manufacturers and developers to build on it based on that premise of openness and, of course, of being "free". People advocated for Android because it was open source compared to the alternatives. But with this change they are simply ending that openness. The people who developed F-Droid and other alternative stores have contributed to the platform's value (such as making it possible to de-Google your phone), and the same goes for the many other developers who have spent countless hours developing for Android.
To say they don't owe you anything seems like a betrayal of the promise that Android was an open platform (and open source).
> You are free to not use their products or start a company to compete
That's not as simple an option as you are making it out to be. For a user, switching means buying a new phone, repurchasing apps (if you bought any), and maybe some apps won't even be available on the new system; for developers it means all their knowledge about the system is gone. Building a mobile operating system requires millions if not billions of dollars, years of work, and convincing developers and businesses (hardware makers) to use your operating system. The barrier to entry is so high that telling people to just compete with Google is not a realistic solution.
Party A does not owe Party B the right to sell in Party A's legal area.
Party B is allowed to choose not to sell in the EU. If you wanna sell in the EU, you have to comply with EU rules. If you wanna sell in the US, you have to comply with US laws. That simple.
Maybe "intellectual property" is really imaginary property, given how the same big companies just gobble up data from other people and companies without permission to feed their AI models (Facebook with books, recently NVIDIA with millions of videos from YouTube).
I guess they would not do that if they really believed a questionable synthetic construct like "intellectual property" really existed?
That is not how the European Union works. One of the core goals of the EU is to guarantee the European single market. One of the core principles of the single market is the Freedom to establish and provide services [1]. The Apple/Google duopoly have effectively created a market within the single market where the core principles of the single market do not apply anymore.
Tech has a strong tendency to favor outcomes with only a handful large players that make competition impossible due to network effects, etc., distorting the market. The Digital Markets Act was made to address this problem.
IANAL, but Google's Android changes seem like a fairly clear violation of the DMA.
This is typically hard for people from the US to grasp (I saw that you are not originally from the US though). In Europe, capitalism is not the end goal, the goal of capitalism is to serve the people and if that fails, it needs to be regulated.
---
As an aside, the lengths people go to defend a company with $402.836B yearly revenue :).
Yes. I am effectively asking you what the moral justification for the DMA is. I understand that lawmakers can make whatever law they want. I understand they made it. I am curious how people who agree this should be possible think of it from a moral angle, especially as engineers who make their living by creating intellectual property and probably wouldn't want to see control of it seized randomly.
I'd ask the inverse of the question: morally, should a single gatekeeper have the right to deny two consenting parties the ability for one to run the other's software?
Especially when that ability has been established practice and depended upon for decades? And the gate-kept device in question is many users' primary gateway to the modern world?
There's nuance here, of course - I'm not morally obliged to help you run Doom on your Tamagotchi just because you want to do so. But many people around the world rely on an Android device as their only personal computing device (and this is arguably more true for Android than it is for iOS). And to install myself as an arbiter of what code they can and cannot run, with full knowledge that I could at any time be required to leverage that capability at the behest of a government those worldwide users never agreed to be dependent on? That would be a morally fraught system for me to create.
At some point free markets become fiction. There's no financially viable way to start competing businesses in markets as entrenched as mobile OSes; otherwise it would have already happened. And if that becomes anti-consumer, then the consumers start changing the rules the companies operate under. Because in a democracy we have more consumers than CEOs, so they vote with the majority.
(This obviously simplifies things, but ultimately we as humans still haven't found the one and only true philosophy or morality, and maybe that's not even possible. I'm no philosopher.)
The moral justification is that I am a citizen, and can demand the laws I want. When enough people think like me, we can actually make it a law. By holding the smartphone OS oligopoly these companies hold a lot of power on the people. I do not like that. Hence I like laws that try to change that.
> especially as engineers who make their living by creating intellectual property and probably wouldn’t want to see control of it seized randomly
If these people try to use their intellectual property to control my device and hence my ability to do things, I want to have a say what they do. Yes, that is what software is: directions to machines. I own the machine, hence I want a say what it does. You are free to keep your intellectual property for yourself, if you want to.
The moral argument is that vertically integrated monopolies threaten the rights of consumers, who are human beings. Corporations are legal fictions and their "rights" are another convenient fiction to align incentives. They carry zero moral weight.
> especially as engineers who make their living by creating intellectual property and probably wouldn’t want to see control of it seized randomly
The premise of your question seems surprising.
1. In what sense does the DMA enable seizure of control of intellectual property? I haven't heard of seizure being part of this.
2. In what sense does the DMA do so randomly? The DMA's rules seem to be written down, not random. Where are you seeing randomness?
Also:
3. One intention of regulation is that you don't want one (or a few) entities, regardless of what they are, to gain too much power over your citizens' lives. They want power to be distributed, just as America's 3 branches of federal government were designed to distribute power. Could you explain what specifically you find difficult to understand about people finding it immoral to give a single entity too much power?
There are no absolute morals. But I think in general healthy societies are arranged around the idea that people should have the basics of living (housing, food, vacation, and some luxury), agency, and equal opportunities.
It should be clear that having a small number of companies murder all competition and personal freedoms (like doing what you want with something you own, like a phone) is in contrast to these basic values.
---
Or the alternative, more blunt answer: it does not require a moral justification. EU citizens directly elected the EP, and the EP ratified the DMA. So Google can either comply or leave the EU as a market (which they won't do, because it's too large and others would be happy to take it).
The moral argument is that private companies aren't elected and Google/Apple aren't supposed to have the power they have, they aren't government bodies.
Are we still talking about massive companies with power to arbitrarily decide how billions of people use the personal computers they bought? Who's doing the feeling? Why would we presume all of their conduct to be moral?
> Party B owes you nothing. You are free to not use their products or start a company to compete.
When 99% of government/banks/etc require you to use a certain service to access basic services, you need some way of ensuring you don't have to sell your soul to use it. Alternatives would be really great, but Google is part of a duopoly.
Just because you build the rails doesn't mean you get to decide who gets to use the trains.
That is not their fault, though. I can see how you could complain to the people who mandate that you use B's products. Otherwise, what you're saying is that control of any intellectual property can be stolen from its owners simply by it becoming popular outside of their control.
It is, though. They are actively working on increasing their market share; that doesn't happen by accident. They have chosen to place the interests of the corporation over the interests of their fellow people. They are fine to do that, because we separated that responsibility: corporations can chase profit only because we have governments that make the rules, so that chasing profits stays in the interests of the people.
Maybe you don't like that, and that is fine for you, although I don't like that you don't like that. Maybe you want a society where might makes right. However a lot of people don't feel that way, hence why we outsourced that world model to the government.
People don't like that their neighbor is stronger than them and takes their stuff, so they pay feudal lords. Then the feudal lords want some security, so they outsource that to elected emperors. After a while the feudal lords misuse their power, so parliaments are invented. Eventually people have enough and demand voting rights. The elected leaders betray the people by sending them to war, so they create multinational institutions that try to prevent this (EU). They haven't used their power to betray the people enough, so we are still fine with them.
"Wealth comes with obligations" is literally in my country's constitution. You may not like that, but I do, and I think a lot of other people do as well. How much is of course always up for discussion.
It kind of is their fault, because of the Google Play Integrity API. They are effectively developing tools designed to make their product mandatory. There wouldn't be such a big backlash if we could just unlock our bootloaders and run a patched version of Android.
> any [] property can be [taken by the state] from its [original] owners simply by [those owners becoming more powerful than the state wants]
When rephrased like the above, I think what you’re describing is pretty common in history. Many industries and assets have been nationalized when it serves the state’s interests.
IMO the moral justification is that there is no ownership or private property except that which is sanctioned by the state (or someone state-like) applying violence in its defense. In this framing, there’s little moral justification for the state letting private actors accrue outsized power that harms consumers/citizens.
People outsource the brutality (to the government), so that they don't need to deal with it in their daily life. If we couldn't force companies to act in ways we want through a formal system, then the world would look much more brutal.
I can ban persons from doing things I'd rather they not do. Companies are legal persons, so why shouldn't this apply to them? At some point ignoring behaviour does not make it go away; it needs to be actively worked against, otherwise it will become (practically) mandatory.
the core problem with banning is who is doing it and why, right? once we allow it, it goes into the hands of the “politicians”, and then books get banned today, ice cream gets banned tomorrow, math gets banned the next day…
Which is why the more serious consequences a law has, the harder it is to change and the more people need to sign off on it. There is stuff that needs a simple majority, stuff that is in the constitution and requires a supermajority, stuff that can't be changed short of abolishing the current state, and stuff that can't be changed at all, because it is just an assertion that holds independent of anyone asserting it.
This is kind of a "solved*" thing in theory, not so much in practice of course.
*solved meaning we have a proper process established
Not really stealing their IP, just putting limits on how much they can shaft their customers. If they don't like it, they can leave the EU, and others will take their place. It's like saying your company is losing its railway tracks if people of the wrong colour are allowed on the trains that run on your tracks. There's no need to get hysterical.
So you're saying that if I open a restaurant I'm free to poison the food and you can just decide whether to eat there or not and no government should be able to forbid me to do this?
Google is engaging in immoral business practices. Since they are immoral, it is morally justified to say they must be stopped.
> how would you feel if you were on the receiving end of such a dictum?
I continue to be astounded how people still just flat out assume that everyone must be a capitalist.
If I were on the receiving end of a dictum aimed at stopping immoral behavior, I would cease my immoral behavior. But I won't be on the receiving end, because I don't aim to do immoral things in the first place.
My moral justification is that my right to do with the physical property I have in my physical hand is more important than any noncorporeal corporation's right to do anything with their noncorporeal intellectual property.
The truth is, I gave party C money for a product. Party B does not get to say anything about what party C gave me. And they absolutely do owe me something, and that is the use of the product they gave me for my money. Whatever their terms of service say about licensing versus owning should not trump the fact that I made a one-time purchase and I have physical ownership that they cannot revoke. This is not a car lease where I have a contract with the dealership and they can repossess the car if I don't make the payments.
And you can use it. You can, in fact, keep using the software that shipped on it. What you want is access to further intellectual property they develop (updates, features) that just so happens to run on your hardware, and the ability to shepherd it in a direction you want and they don’t.
> that just so happens to be able to run on your hardware
the hardware is specifically locked down with "trusted computing" features to facilitate this. It's not a random coincidence. The problem here lies in the network effects and the use of trusted computing. If my bank app mandates that I use "real deal 100% certified android", then I can't just develop my own OS. So it's an antitrust situation.
If every company in the world teamed up with MegaCorp and made their services contingent on wearing a MegaCorp shock collar powered by trusted computing, would you wear it? You are free to not use the collar... and starve to death in the woods I suppose.
I don't usually even care about intellectual property. It's a hack granting a temporary exclusive monopoly as a way to incentivize R&D. The R&D in this case is just solving the question of "how do we establish a larger monopoly". So why should the public be forced to uphold it?
Asking me if I am willing to violate intellectual property in this situation is like if I was being lowered into a pit of liquid hot magma and in order to get out I had to break the flag code or jaywalk or something.
> What you want is access to further intellectual property they develop (updates, features), that just so happens to be able to run on your hardware and ability to shepherd it in a direction you want and they don’t.
Well yeah, I am paying them with money (and data), and thereby with power, and expect them in turn to provide directions for my device so that it does what I want. That's kind of the deal. If they don't want to provide that, then they can just not accept my money (and data). They can of course produce devices that do what they want, and ask me to carry them around, but then they'd better pay me.
If they use the power I gave them against me, then I will demand that my power-projection-as-a-service provider (a.k.a. the government) project power in my interest.
I'll gladly take that trade, either:
- They lose the right to their "intellectual property" and I'll accept that they owe me nothing.
or:
- They continue to enjoy "intellectual property" protections granted by the state, but the state subdues them into actions which are for the benefit of the public.
I'd be happy to make that offer to any of the parties that build closed ecosystems, but none of them will take the offer since closed ecosystems are almost always built with the intent of misusing the copyright system to create a state-enforced monopoly and bloodsuck value produced by real economic activity.
The moral justification is the same anyone else employs. I have a tool to create an outcome and I'm going to use that tool to produce that outcome. It's that simple.
It's not complex, in the sense that the rules are simple, but simple rules can still lead to complicated emergent behavior that is difficult for humans to understand, even if each of the 153 steps that the typechecker took to arrive at the result were easy to understand individually.
It's not any different from having 153 steps in any other computational sense. Even limiting ourselves to elementary arithmetic, horrendous opaqueness arises with 153 operations spanning the whole set. Are we going to pretend that arithmetic is systemically problematic because of this? Any non-trivial formal construct is potentially dangerous.
If you're having trouble reasoning about how variables are unified, it's either because you never actually built a strong gut intuition for it, or it's because you're writing Very Bad Code with major structural issues that just so happen to live in the type system. In this case it's the latter. For an HM type system, 153 choice points for an expression is ludicrous unless you're doing heavy HKT/HOM metaprogramming. The type system, and more broadly unification, is a system to solve constraints. Explosive choice indicates a major logical fault, and most probably someone naively trying to use a structural type system like a nominal one and/or a bit too much unsound metaprogramming.
Thankfully, you can simply specify the type and tell the compiler exactly what it should be using. But that doesn't really resolve the issue; the code still sucks at the end of the day.
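To make the annotation point concrete, here's a minimal Rust sketch (my own toy illustration, not from the code being discussed): `collect` is generic over its target container, so the result type is an open variable the unifier must solve from context, and a single annotation collapses the choice points to one.

```rust
use std::collections::HashSet;

fn main() {
    let nums = vec![3, 1, 2, 3];
    // Without the annotation this line would not compile: the unifier
    // has no way to pick Vec<i32>, HashSet<i32>, or something else.
    // Writing the type up front prunes the search to a single solution.
    let unique: HashSet<i32> = nums.iter().copied().collect();
    assert_eq!(unique.len(), 3);
}
```

The same pruning applies in any HM-style system: every annotation is a constraint the solver gets for free instead of having to derive.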
Now higher order unification? That's an entirely different matter.
I tried Zed for some time. Then it had a regression which broke it completely on my laptop. (Zed can't start any more, logging a PlatformNotSupported error even though earlier versions worked fine.) I carefully bisected it, and it turned out to be due to an intentional change in Blade. The issue was acknowledged, and confirmed by several other users. Then it got converted into a "discussion" because there was nothing actionable to do according to the devs. Then the discussion got closed because they are "directing all support questions to Discord going forward". Then Discord announced mandatory age verification.
Give https://rcl-lang.org/#intuitive-json-queries a try! It can fill a similar role, but the syntax is very similar to Python/TypeScript/Rust, so you don’t need an LLM to write the query for you.
RCL (https://github.com/ruuda/rcl) pretty-prints its output by default. Pipe to `rcl e` to pretty-print as RCL (which has slightly lighter key-value syntax, good if you only want to inspect it), while `rcl je` produces JSON output.
It doesn’t align tables like FracturedJson, but it does format values on a single line where possible. The pretty printer is based on the classic A Prettier Printer by Philip Wadler; the algorithm is quite elegant. Any value will be formatted wide if it fits the target width, otherwise tall.