Says this:
"Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead."
Again, it seems Anthropic prefers to bill at API token rates (in the long run), not at the effective per-token rates of a subscription.
It seems clear that Anthropic wants users to pay API rates for tokens used programmatically, not subscriber rates for tokens used from code. As a user, I'd like to pay subscription rates with -p, but it seems they want to block that.
I don't claim to understand the factors behind this, but a lack of security, with valuables exposed at unlocked doors in a pre-opening mall in Shenzhen, China at 8 in the morning without incident, is very curious.
I totally buy this as someone located in the US, but what is everybody else using? It can't be WhatsApp? Is everyone sending all their connection graph data to Meta?
A lot of SMBs use Instagram to connect with their clients, so Instagram's built-in messenger is a default option for a lot of people (especially women) in many parts of the world.
Some places have regional messengers that are very entrenched, like Line in Japan or KakaoTalk in Korea.
WhatsApp is the default option in a large number of countries, including most of the Middle East, parts of Europe, Brazil, most of Africa, and South Asia. To me it is surprising too, because out of all the messaging options, WhatsApp seems like the least developed and least ergonomic.
And yes, this does mean that most people share whatever data Big Tech wants. They use Meta to talk to each other, auto-upload their photos to Google, click "accept" to every cookie banner so that thousands of no-name companies around the world know where they are and what they are doing at all times.
It’s WhatsApp. No one thinks about sending data to Meta. The world is much bigger than the HN bubble; outside it, almost no one thinks about the privacy implications.
Absolutely this. No one cares about privacy. 99.9% of the population has no clue how tech works. “Oh, it’s an app on my phone.” That’s what the typical consumer understands. How text travels from one phone to another is something magical.
I got WhatsApp because there is no other channel to communicate with customers. It’s literally used by everyone, without exception. Really scary.
This looks useful. But it's interesting how the backend world and frontend world keep diverging. I must admit, I had no idea what this was from the title. "CLI framework"? In backend-land, these would typically be called "argument parsers" or "command-line argument parsers". But maybe I am missing some of the functionality.
Both in the frontend and the backend, I've usually used "If it calls your code, it's a framework; if you call its code, it's a library", and that would seem to fit here too. An argument parser you'd call from your main method, then do stuff with what it returns. In Crust, it seems you instead set up the command plus what happens when it's called, then let the framework call your code.
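The distinction can be sketched in a few lines of TypeScript. To be clear, none of the function names below come from Crust or any real parser; this is a hypothetical minimal version of each style, just to show where control flow lives:

```typescript
// Library style: you call its code. A parse() helper returns the
// parsed arguments, and your own main logic decides what to do next.
function parse(argv: string[]): { command: string; args: string[] } {
  const [command = "help", ...args] = argv;
  return { command, args };
}

const parsed = parse(["greet", "world"]);
if (parsed.command === "greet") {
  console.log(`hello ${parsed.args[0]}`);
}

// Framework style: it calls your code. You register handlers up
// front, then hand control to run(), which dispatches to them.
const handlers = new Map<string, (args: string[]) => void>();

function command(name: string, handler: (args: string[]) => void): void {
  handlers.set(name, handler);
}

function run(argv: string[]): void {
  const [name = "help", ...args] = argv;
  handlers.get(name)?.(args);
}

command("greet", (args) => console.log(`hello ${args[0]}`));
run(["greet", "world"]);
```

In the first half the control flow is yours; in the second, you only supply callbacks and the dispatcher owns the main loop, which matches the "it calls your code" definition of a framework.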
we’re using “framework” intentionally because it goes beyond argument parsing. crust handles parsing, but also:

- type inference across args + flags, end to end
- compile-time validation (so mistakes fail before runtime)
- a plugin system with lifecycle hooks (help, version, autocomplete, etc.)
- composable modules (prompts, styling, validation, build tooling)
- auto-generated agent skills and modules from the CLI definitions

so it sits a layer above a traditional arg parser like yargs or commander, closer to something like oclif, but much lighter and bun-native.
Hi, I called it a "CLI framework" because it is more of an ecosystem of modules that contains everything you need to build a CLI; argument parsing is just part of it. The @crustjs/core module is the argument parser, and there are more modules such as @crustjs/skills, which derives agent skills from your command definitions, @crustjs/store, which persists state, and so on.
That’s a great idea. I think I’ll restructure the entire project to be based around a collection of community-managed rules, a UI generator that builds a custom text file from those rules, and an LLM skill so people can evolve their policies themselves. The Bash script will remain in the background as one implementation, but it shouldn’t be the only way.
If you look at the first image in the article, the one with a floppy on a serving tray, it looks like an 8-inch floppy to me. I think the floppy disks in the boardroom might also be 8-inch floppies.
It is rumored heavily on HN that when the first employee of Google, Craig Silverstein was asked about his biggest regret, he said: "Not pushing for ECC memory."
One of the points Linus Torvalds made a few years back was that enthusiasts/PC gamers should be pissed that consumer availability and support for ECC is spotty, because as mentioned up-thread they're the kind of user that will push their system, and if memory is the cause of instability there will be a smoking gun (and they can then set the speed within its stable capacity). Diagnosing bad RAM is a pain in the rear even if you're actively looking for a cause, never mind trying to get a general user to go further than blaming software, or gremlins in the system, for weirdness however frequently it occurs.
It's true that in the very early days Google used cheap computers without ECC memory, and this explains the desire for checksums in older storage formats such as RecordIO and SSTable, but our production machines have used ECC RAM for a long time now.
One of the nicest guys I have met. I was an intern at Google at the time; firing off MapReduces back then (2003-2004) was quite a blast. The Peter Weinberger theme T-shirt, too.