Hacker News | krooj's comments

My man, all these fuckers use the same parasitic management consultancies. That's why all this shit looks the same.

Time to watch Office Space again?

"I'm an agent person. I'm good at dealing with the agents! Can't you understand that?! What the hell is wrong with you people?! I'm good at dealing with the agents!"

That line is so damn good I recommend everyone who doesn't get it to watch Office Space.

Honestly, Office Space looks like a dream job compared to normal practices. Imagine getting your own cubicle! That would be amazing!

"Management consultancies" that make sense. I thought that maybe all these higher-up people know each other and share a common "Slack group"

Cloudflare has never made a profit.

Is your stance that shareholders should perpetually subsidize it out of the goodness of their hearts?


My stance is this: Fine, maybe you need to restructure for profit reasons. If that is the case, then it is also incumbent upon the people doing the layoffs to understand their responsibility in that.

In an ideal world, a layoff of this scale would also require a shakeup of the management that let it get this bad in the first place.

What's more, the higher up the chain, the less onerous the layoff for the individual getting laid off.


Why should people who are profitable to employ be laid off as well?

It just sounds like you're upset and want to hurt whoever you feel is responsible for making you upset. That's not a productive stance to have on important topics.


What an odd view of what I said.

I'm not asking for the people who hurt me to be hurt. I am asking that the responsibility for the actions that management layers took be considered in layoffs.

For instance - if overhiring happened, how is this not at least a little bit on the individual who approved the hiring spree? Why is it that they should be able to wield a baton that hurts the workers they hired, without having to actually bear the brunt of those decisions?

If a business is still unprofitable, a business that touches so much of the internet like Cloudflare, then that is also a strategic failure and should be punished as such.

I feel like your tone in this response was also so condescending.


>For instance - if overhiring happened, how is this not at least a little bit on the individual who approved the hiring spree? Why is it that they should be able to wield a baton that hurts the workers they hired, without having to actually bear the brunt of those decisions?

Do you think shareholders do not consider their employees' performance when deciding whether to hire/keep them?

Do you think CEOs don't do that when it comes to their executive team?

Do you think the executive team doesn't consider that?

It all comes flowing down.

I can assure you that, as a shareholder, I am 100% focused on getting a return, and I will fire (or vote to fire) any executives who I believe are doing a bad job, or who accept that their underlings do a bad job.

Hiring people and then firing them some time later is not intrinsically the same as doing a bad job, nor does it mean there was "overhiring".

Also. "hurts workers"? What?

Workers receive the payments that were agreed to, for the period that was agreed to. No more, no less.

You are no more entitled to a job than the supermarket is entitled to my patronage, and me choosing to no longer purchase from you, whether it be groceries or labor, is not me hurting you.


This is how the elites actually feel, though. They think they can do no wrong; it's not their fault that they don't know how to run a business, but you should please give them another chance and not change corporate law to stop benefiting them over workers.

It's a mindset that enables neoliberalism to flourish while vast majorities suffer immensely to benefit the few.

It's a system worth questioning, as the material lives of hundreds of millions of Americans are getting objectively worse every year, while we are always told there is no money for healthcare or childcare but there are always trillions lying around for imperialistic activities like data center expansion and war.


Let's throw the elites in jail, so that more elites can come in and do the same thing?

There's a limited pool of execs to run companies. It's a pretty homogeneous group of people with similar skill sets; some have varying philosophies on how to run companies, but the majority of them will likely make the same decision given identical sets of circumstances.

I get triggered when people start calling out "elites" and other boogeymen - what does it mean to have companies run by non-elites? What even is an "elite"? Are they elite simply because they are employed as an exec? Is it possible to have a non-elite executive?

Using "elites" in this context makes it feel like an emotional complaint about the world rather than anything rooted in logic.


None of these elites are operators -- they're liberal-arts educated, and their primary skill is in using words, and the lack thereof, to achieve status gains -- nothing more.

In many traditional industries, companies are built and run by the most senior members of whichever discipline. Tech is different because most of its skilled members intentionally cede soft power because of personal (imo short-sighted) predilections and the exorbitant amount of money flowing in has caused a mad dash from other disciplines.

The "elites" are people trained either institutionally or personally (from their relationships with others) to understand power dynamics and utilize them for personal gain.

Navel gazing is a great way to achieve nothing. Being lorded over by someone who couldn't even figure out how to build and host a simple webapp is ludicrous. Should the CEO of my hospital not understand that the mitochondria is the powerhouse of the cell? Should the partners at big M&A law firms have no idea how to read a contract? No, because that's fucking ludicrous -- yet it persists in tech because its technicians have zero training, education, or even sense for elite ways of thinking.


No, it's more like we have undemocratically selected people in positions of power who want to act like dictators, when in reality these people made a mistake that is costing the company billions of dollars, and their ineptitude means they should be removed from those positions.

I thought Silicon Valley was all about meritocracy? Why should corporate shills who do not know how to profit from an entity that controls 25% of internet traffic be allowed to keep their jobs, while the actual people providing real value, the workers, aren't?

That is a system that doesn't benefit humanity. It selfishly benefits the few.


CEOs are democratically elected. And if they do a bad job they are democratically removed from office.

What are you talking about??


Cloudflare has never made a profit? The thread commenter said the product he maintained was 95% profit.

My org's products, yes. Others, probably not. There is a lot of waste they could have targeted and gone after, starting with Dane's idiotic, incomplete messes: declaring them done and leaving people to clean his garbage up.

Leadership is terrible and they're out of ideas. AI is going to be the future but it can't even review code properly

Their "no profit" is entire accounting trickery


Interesting - I wonder if this isn't a case of theft of a refresh token that was minted by a non-confidential 3LO flow w/PKCE. That would explain how a leaked refresh token could then be used to obtain access, but does the Vercel A/S not implement any refresh token reuse detection? i.e.: if you see the same R/T more than once, you nuke the entire session, b/c it's assumed the R/T was compromised.
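For anyone unfamiliar with the pattern, here's a minimal sketch of refresh token rotation with reuse detection. All names are hypothetical - this is not Vercel's actual implementation, just the general idea:

```python
import secrets

class TokenFamilyStore:
    """Tracks refresh-token 'families': each rotation mints a new token in the
    same family. Presenting an already-retired token implies theft, so the
    whole family (i.e. the session) gets revoked."""

    def __init__(self):
        self._families = {}    # family_id -> currently valid refresh token
        self._seen = {}        # any token ever minted -> its family_id
        self._revoked = set()  # revoked family_ids

    def mint(self, family_id=None):
        token = secrets.token_urlsafe(32)
        family_id = family_id or secrets.token_urlsafe(16)
        self._families[family_id] = token
        self._seen[token] = family_id
        return family_id, token

    def rotate(self, presented):
        family_id = self._seen.get(presented)
        if family_id is None or family_id in self._revoked:
            raise PermissionError("unknown or revoked token")
        if self._families[family_id] != presented:
            # Reuse of a retired token: assume compromise, nuke the session.
            self._revoked.add(family_id)
            raise PermissionError("reuse detected; family revoked")
        _, fresh = self.mint(family_id)
        return fresh
```

The key move is that presenting a retired token doesn't just fail - it revokes the entire family, since at that point either the attacker or the legitimate client is holding a stolen token and the A/S can't tell which.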


Question - from the perspective of the actual silicon, are these NPUs just another form of SIMD? If so, that's a laughable sleight of hand and the circuits will be relegated to some mothball footnote in the same manner as AVX512, etc.

To be fair, SIMD made a massive difference for early multimedia PCs for things like music playback, gaming, and composited UIs.


> circuits will be relegated to some mothball footnote in the same manner as AVX512

AVX512 is widely used...


Not on Intel consumer CPUs.


NPUs are a separate accelerator block, not in-CPU SIMD. The latter exists for matrix compute, but only in the latest version of AVX which has yet to reach consumer CPUs.


> The latter exists for matrix compute, but only in the latest version of AVX which has yet to reach consumer CPUs.

As far as I am aware, AMD has implemented many parts of AVX-512 in their consumer CPUs since Zen 4:

https://en.wikipedia.org/w/index.php?title=AVX-512&oldid=133...

On the other hand, Intel still does not support AVX-512 in Raptor Lake, Meteor Lake, or Arrow Lake:

> https://en.wikipedia.org/wiki/Raptor_Lake

> https://en.wikipedia.org/wiki/Meteor_Lake

> https://en.wikipedia.org/wiki/Arrow_Lake_(microprocessor)


The comment in lines 163 - 172 makes some claims that are outright false and/or highly A/S-dependent, to the point where I question the validity of this post entirely. While it's possible that an A/S could be pseudo-generated from lots of training data, each implementation makes very specific design choices; e.g.: Auth0's A/S allows for a notion of "leeway" within the scope of refresh token grant flows to account for network conditions, but other A/S implementations may be far more strict in this regard.

My point being: even assuming you have the RFCs (which leave A LOT to the imagination) and some OSS implementations to train on, each implementation usually has too many highly specific choices for it to be safe to assume an LLM could cobble something together without an amount of oversight effort approaching simply writing the damned thing yourself.


This is one of those cases where I would hope that extremely strong federalism is exercised from Ottawa: essentially, Alberta could be dissolved, stripped of its provincial status and relegated to a territory. From that point, allow for further subdivision to the various First Nations people, allowing reformation into other territories or offer provincial status. The rest of it could be federally administered - see how they like that.

As much as it pains me to say it, Canada's diversity is also its weakness, and there needs to be a precedent - perhaps not as severe as in the US - that you do NOT leave the dominion.


Putting aside that this isn't that popular a position in reality, why do you think such actions from the federal government would go over well with not only Albertans, but the rest of us in the rest of the country?


Yep - I remember the CCAT from 4th grade that resulted in my being placed into a different class for 5th. AFAIK, we were given this test "cold" (no prep) and I remember it being timed.


> In short, Open ID Connect is quite accurately described as an Authentication standard. But OAuth 2.0 has little to do with Authorization. It allows clients to specify the "scope" parameter, but does not determine how scopes are parsed, when user and client are permitted to request or grant a certain scope and what kind of access control model (RBAC, ABAC, PBAC, etc.) is used. That's ok, since it leaves the implementers with a lot of flexibility, but it clearly means OAuth 2.0 is not an authorization standard. It only concerns itself with requesting authorization in unstructured form[3].

This misses the mark - scopes are abstractions for capabilities granted to the authorized bearer (client) of the issued access token. These capabilities are granted by the resource owner, let's say, a human principal, in the case of the authorization_code grant flow, in the form of a prompt for consent. The defined capabilities/scopes are specifically ambiguous as to how they would/should align with finer-grained runtime authorization checks (RBAC, etc), since it's entirely out of the purview of the standard and would infringe on underlying product decisions that may have been established decades prior. Moreover, scopes are overloaded in the OAuth2.0/OIDC ecosystem: some trigger certain authorization server behaviours (refresh token, OIDC, etc), whereas others are concerned with the protected resource.

It's worth noting that the ambiguity around scopes and fine-grained runtime access permissions is an industry unto itself :)

RFC 9396 is interesting, but naive, for a couple of reasons: 1) it assumes that this information should be placed on the front-channel; 2) it does not scale in JWT-based token systems without introducing heavier back-channel state.

I personally do not view OIDC as an authentication standard - at least not a very good one - since all it can prove is that the principal was valid within a few milliseconds of the iat on that id_token. The recipient cannot and should not take receipt of this token as true proof of authentication, especially when we consider that the authorization server delegates authentication to a separate system. The true gap that OIDC fills is the omission of principal identification from the original OAuth2.0 specification. Prior to OIDC, many authorization servers would issue principal information as part of their response to a token introspection endpoint.
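To make the freshness point concrete, here's a hedged sketch of the claim checks a relying party might run on an already-signature-verified id_token, including a max-age cutoff on iat. Names and thresholds are illustrative, not from any particular implementation:

```python
import time

def check_id_token_claims(claims, *, expected_iss, expected_aud,
                          max_age_seconds=300, now=None):
    """Minimal OIDC id_token claim checks; signature and nonce verification
    are assumed to have happened elsewhere. The max_age check reflects the
    point above: an id_token only evidences an authentication event near its
    iat, so it should be treated as stale quickly."""
    now = time.time() if now is None else now
    if claims.get("iss") != expected_iss:
        raise ValueError("issuer mismatch")
    aud = claims.get("aud")
    auds = aud if isinstance(aud, list) else [aud]
    if expected_aud not in auds:
        raise ValueError("audience mismatch")
    if now >= claims.get("exp", 0):
        raise ValueError("token expired")
    if now - claims.get("iat", 0) > max_age_seconds:
        raise ValueError("authentication event too old for this use")
    return True
```

Note that exp alone says nothing about how recently the principal actually authenticated - which is exactly why treating receipt of an id_token as standing proof of authentication is shaky.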


Linus always has a great way of summarizing what others might be thinking (nebulously). What's being said in the article is really mirrored in the lost art of DDD, and when I say "lost" I mean that most developers I encounter these days are far more concerned with algorithms and shuttling JSON around than figuring out the domain they're working within and modelling entities and interactions. In modern, AWS-based designs, this looks like a bunch of poorly reasoned GSIs in DDB, anemic objects, and script-like "service" layers that end up being hack upon hack. Maybe there was an implicit acknowledgement that the domain's context would be well defined enough within the boundaries of a service? A poor assumption, if you ask me.

I don't know where our industry lost design rigor, but it happened; was it in the schools, the interviewing pipeline, lowering of the bar, or all of the above?


I’d argue software design has never been taken seriously by industry. It’s always cast in negative terms, associated with individuals seen as politically wrong/irrelevant, and brings out a ton of commenters who can’t wait to tell us about this one time somebody did something wrong, therefore it’s all bad. Worse, design commits the cardinal sin of not being easily automated. Because of this, people cargo cult the designs that tools impose on them, and chafe at the idea that they should think further on what they’re doing. People really want to outsource this thinking to The Experts.

It doesn’t help that it isn’t really taught but is something you self-teach over years, and that it is seen as less real than code (ergo, not as important). All of these beliefs are ultimately self-limiting, however, and keep you at the advanced-beginner stage in terms of what you can build.

Basically, programmers collectively choose to keep the bar as low as possible and almost have a crab-like mentality on this subject.


I can see a swing finally starting. It isn’t “huge” by any stretch, but at the same time

“deVElOpErS aRe MoRE EXpEnSivE tHaN HArDwaRE”

Commenters are no longer just given free internet points. This is encouraging as these people controlled the narrative around spending time on thinking things through and what types of technical debt you should accept for like 20 YEARS.

I think maybe people are finally sick of having 128 gigs of ram being used by a single 4kb text file.


There is some truth to the idea that developer time is expensive and can dwarf the monetary gains from micro-optimization.

I agree that some people took the idea to mean "what's a profiler?" and that is why our modern machines still feel sluggish despite being mind-bogglingly fast.


This might be driven by the cost per computation being vastly lower while the benefit has remained mostly constant. There is little incentive to make a text editor that runs in 10k of memory, because there is no benefit compared to one that runs in 10 megabytes or, soon, 10 gigabytes.

I spend a lot of my day in VScode and PyCharm and the compute resources I consume in an hour are more than what the Apollo program consumed over its full existence. Our collective consumption at any given decade is most likely larger than the sum of computing resources consumed up until that point in our history.


> most developers I encounter these days are far more concerned with algorithms and shuttling JSON around than figuring out the domain they're working within and modelling entities and interactions

The anemic domain model was identified as an anti-pattern quite a long time ago[1]. It usually shows up along with Primitive Obsession[2] and results in a lot of code doing things to primitive types like strings and numbers, with all kinds of validation and checking code all over the place. It can also result in a lot of duplication that doesn't look obviously like duplication because it's not syntactically identical, yet it's functionally doing the same thing.

1 https://martinfowler.com/bliki/AnemicDomainModel.html

2 https://wiki.c2.com/?PrimitiveObsession
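As a tiny illustration of the difference (hypothetical domain, Python for brevity), compare passing a bare string around versus a value object that is valid by construction:

```python
from dataclasses import dataclass

# Primitive obsession: every call site re-validates a bare string,
# and this check gets duplicated wherever the string travels.
def ship_order_anemic(email: str) -> str:
    if "@" not in email:
        raise ValueError("bad email")
    return f"shipped, notifying {email}"

# Value object: validation lives in one place; an EmailAddress
# that exists is known to be valid, so downstream code just uses it.
@dataclass(frozen=True)
class EmailAddress:
    value: str

    def __post_init__(self):
        if "@" not in self.value:
            raise ValueError("bad email")

def ship_order(email: EmailAddress) -> str:
    return f"shipped, notifying {email.value}"
```

The second form is what kills the "functionally duplicated but not syntactically identical" checks: the type system carries the invariant instead of each call site re-asserting it.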


The industry predominately rewards writing code, not designing software.

I think the results of bad code aren't as obvious. A bad bridge falls down; bad code has to be... refactored/replaced with more code? It goes from one text file that execs don't understand to a different text file that execs don't understand.

And once something works, it becomes canon. Nothing is more permanent than a temporary hack that happens to work perfectly. But 1000 temporary hacks do not a well-engineered system make.

I believe that maturing in software development is focusing on data and relationships over writing code. It's important to be able to turn it into code, but you should turn those into code, not turn code that works into a data model.


> The industry predominately rewards writing code, not designing software.

The sad part of this is that code is absolutely a side-effect of design and conception: without a reason and a reasonable approach, code shouldn't exist. I really think that the relative austerity happening in the industry right now will shine a light on poor design: if your solution to a poorly understood space was to add yet another layer of indirection in the form of a new "microservice" as the problem space changed over time, it's likely there was a poor underlying understanding of the domain and a lack of planning for extensibility. Essentially, code (bodies) and compute aren't as "cheap" as they were when money was free, so front-loading intelligent design and actually thinking about your space and its use-cases becomes more and more important.


> The industry predominately rewards writing code, not designing software.

This also stems from most of the code being written at any given moment being to solve problems we already solved before and doing or supporting mundane tasks that are completely uninteresting from the software design point of view.


> anemic objects

I have yet to come across a compelling reason why this is such a taboo. Most DDD functions I have seen are also just verbose getters and setters. Just because a domain entity can contain all the logic doesn't mean it should. For example, if I need to verify that a username already exists, how do I go about doing that within a domain entity that "cannot" depend on the data access layer? People commonly recommend things like "domain services," which I find antithetical to DDD because now business logic is spread across multiple areas.

I quite enjoy DDD as a philosophy, but I have the utmost disdain for "Tactical DDD" patterns. I think too many people think Domain-Driven Design == Domain-Driven Implementation. I try to build rich domains where appropriate, which is not in all projects, but I try not to get mired up in the lingo. Is "Name" type a value object or an aggregate root? I couldn't care less. I am more concerned about the bounded contexts than anything else. I will also admit that DDD can sometimes increase the complexity of an application while providing little gains. I wouldn't ever dare say it's a silver-bullet.

I will continue to use DDD going forward, but I can't help but shake this feeling that DDD is just an attempt at conveying, "See? OOP isn't so bad after all, right?" Of which, I am not sure it accomplishes that goal.
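For what it's worth, the usual shape of the "domain service" answer to the username example is to define a port in the domain layer and let the data-access layer implement it, so the uniqueness rule stays out of the entity without the domain depending on infrastructure. A hedged sketch with hypothetical names:

```python
from abc import ABC, abstractmethod

class UsernameRepository(ABC):
    """Port declared in the domain layer; the data-access layer
    supplies the concrete implementation."""
    @abstractmethod
    def exists(self, username: str) -> bool: ...

class RegistrationService:
    """A 'domain service': uniqueness is a cross-entity rule that no
    single User entity can answer, so it lives here instead."""
    def __init__(self, repo: UsernameRepository):
        self._repo = repo

    def register(self, username: str) -> str:
        if self._repo.exists(username):
            raise ValueError(f"username taken: {username}")
        return username  # a real system would construct and persist a User here

class InMemoryUsernames(UsernameRepository):
    """Test double standing in for the data-access layer."""
    def __init__(self, taken):
        self._taken = set(taken)

    def exists(self, username: str) -> bool:
        return username in self._taken
```

Whether that counts as "spreading business logic around" or as putting a cross-entity rule in its natural home is, of course, exactly the disagreement in this thread.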


If you replace the Object-Oriented mechanism for encapsulation with some other mechanism for encapsulation then there's probably no reason for this taboo.

But in 99.999999% of real-world projects, anemic object-oriented code disregards encapsulation completely, and so business logic (the core reason why you're building the software in the first place) gets both duplicated and strewn randomly throughout the entire project's code.

Or in many cases, if the team disregards encapsulation at the type level then they're likely to also disregard encapsulation at the API/service/process level as well.


Ok, I see where you are coming from, and I agree. However, I would like to add that poorly implemented DDD can be just as awful.


With decades of exponential growth in CPU power, and memory size, and disk space, and network speed, and etc. - the penalties for shit design mostly went away, so you could usually get away with code monkeys writing crap as fast as they could bang on the keyboards.


Weird - this is the first place I saw the "internet" on display as a kid. Shame to see it close in such an unceremonious way.


You'd be surprised at how little cloud vendors give a shit about security internally. Story time: I recently went ahead and implemented key rotation for one of our authz services, since it had none, and was reprimanded for "not implementing it like Google". Fun fact: Google's jwks.json endpoint claims to be "certs" from the path (https://www.googleapis.com/oauth2/v3/certs). They are not certs - there is no X.509 wrapper, no stated expiration, no trust hierarchy. Clients are effectively blind when performing token validation with this endpoint, and it's really shitty.

Other nonsense I've seen: leaking internally signed tokens for external use (front-channel), JWTs being validated without a kid claim in the header - so there's some sketchy coupling going on, skipping audience validation, etc...

Not much surprises me anymore when it comes to this kinda stuff - internally, I suspect most cloud providers operate like "feature factories" and security is treated as a CYA/least-concern thing. Try pushing for proper authz infrastructure inside your company and see what kinda support you'll get.
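For illustration, here's a stdlib-only sketch of the two pre-flight checks mentioned above (kid-based key selection and audience validation) that I keep seeing skipped. Assume actual signature verification happens afterwards with whichever key the kid selects:

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def preflight_jwt(token: str, expected_aud: str, known_kids: set):
    """Checks that the header carries a kid mapping to a published key and
    that the aud claim matches. Without the kid check, validators end up
    sketchily coupled to 'whatever key the issuer currently uses'."""
    header_b64, payload_b64, _sig = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    kid = header.get("kid")
    if kid is None or kid not in known_kids:
        raise ValueError("missing or unknown kid; cannot select a key")
    aud = payload.get("aud")
    auds = aud if isinstance(aud, list) else [aud]
    if expected_aud not in auds:
        raise ValueError("audience mismatch")
    return kid, payload
```

The kid check is also what makes key rotation safe: old and new keys can coexist in the JWKS while tokens drain, instead of validators guessing which key signed what.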


Are there any large companies that don't operate like feature factories? It seems to be such a common issue and the natural result of the incentive structure.


Although this is a valid insight, it reduces the detail of the conversation to "yes or no" on a topic that is not a "yes or no" topic. It is behavior and messaging among a dozen critical functions of a business, and almost every business is different in its mix. Perhaps, faced with similar rhetoric, the law says "show me an example, then we can discuss" instead of "classify all examples, then apply them to a situation".

