At the very least I'd add release cadence and the quality of releases to it. Mature, good software will have hotfixes and patch releases every now and then, but not in every release, and certainly not making up 50% of the changes. In the same vein, I often look at the effort put into changelogs. If they took the effort to put things into categories, write about possible breaking changes, etc., that's a possible indicator of some level of quality. At the very least I'll have a lot more faith in software with good changelogs than in something that is just a list of the last N commit messages.
To be honest, these days I have more faith in an application or library with a moderate development pace, where the last commit wasn't two seconds ago and co-authored by Claude (in the most blatant examples).
The same is true for the number of commits, the type of commits, the release cadence, and the number of fixes and hotfixes in releases. I don't feel like being a glorified alpha tester, so I look for maturity in a project.
Which more often than not means that, yes, there needs to be activity. But it is also fine if the last commit was two days ago and there is a clear sign of the same pattern over a longer period, combined with a stable release cycle, sane versioning, and clear changelogs that aren't just a list of the last 10 commit messages.
On your point about stars: I think they used to be a valid metric in a similar category, namely the community behind the software. But it has been a while since that was true. Ever since I saw those star-tracking graphs pop up on repos, I knew there was no sense in paying attention to them anymore.
There is truth in that. A lot of Claude co-authored repos look frantic and unstable. It still depends on the contributors managing things properly to maintain stability and not succumb to AI addiction and insanity.
> community behind the software
Right. You can't just look at stars. You have to look to see that there is an actual community, along with other contributors.
> That’s not to say that there is no microplastics pollution, the U-M researchers are quick to say.
>
> “We may be overestimating microplastics, but there should be none. There’s still a lot out there, and that’s the problem,”
And with some actual numbers, when digging in further:
> They found that on average, the gloves imparted about 2,000 false positives per millimeter squared area.
> Clough prepared the substrates while wearing nitrile gloves, which is recommended by the guidance of literature in the microplastics field. But when she examined the substrates to estimate how many microplastics she captured, the results were many thousands of times greater than what she expected to find.
The reason this is important is that one flawed dataset reports a hopeless situation; the other at least provides an "if we stop now" message.
I opened the article and read the first paragraph. Then skimmed the rest.
As others pointed out: the fact that you can do this in CSS tells you everything you need to know, if you consider what CSS is for, even without ever looking at the spec or understanding how it came to be.
I don't see what you mean? It's a rendering technology.
I guess if you're someone still stuck on the "web browsers are for displaying static documents" and "CSS is for prettifying markup" thing, then sure, I bet what you said sounds real witty.
Sure they do: computers repeatedly, quickly, and predictably do what they are programmed to do, which includes any human errors in that programming.