I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained; they would never, for example, use a 300 MB JavaScript library for a profile page.
John Carmack praised resource restraint when recalling his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.
I think this extends to other parts of life, too. I still fondly remember playing a game over and over back in high school, when I didn't have the Internet and had to borrow CDs from my friends. But once I went to university and had free access to pretty much every game on the intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best thing for me. That probably includes money, too.
As a percentage of good to mediocre, maybe.
Engineers of 40 years ago were probably better than engineers of 20 years ago. There were fewer of them, and they had more constraints to deal with.
Democratization of technology makes it accessible to more people. That applies to programming as much as to simply using a computer.
I never buy these examples. Being a good engineer is more than purely resource optimization. I can think of many times over my career where resource optimization mattered but it’s not always a valuable undertaking.
You're missing a step in the middle: it's not resource optimization itself, it's working under a constraint that forces you to learn, get creative, and figure things out. The investigation, attempts, failures, detours: all of it teaches you more about the language and system you're working on. That's where the experience and improved skills come from.
Referring to it just by the end result of "resource optimization" is overly simplistic, along the lines of "painting is no big deal, it's just a bunch of colors".
Why do folks like yourself jump to such dull and cheap comments?
> “Painting is no big deal, it’s just a bunch of colors”
I don't think anyone was saying such a thing. The original post stated that engineers of yesterday were so much better because of their resource use: nobody would install a large JavaScript library.
My counter is that I don't believe these arguments ever truly hold up. Times change, and engineers are just as good as they once were, just in different attributes. Sure, constraints are great in work and in the market, but that was not the original thesis.
20 years ago we were complaining about Steam being bloated and unnecessary, we were six months off Vista being a bloated mess, and the Office Ribbon debacle was in full swing. PC games were often half-baked console ports with atrocious performance, filled with game-breaking bugs. Software was super rigid: there was no real cross-platform support. We were just heading into the Core 2 Duo era, and it was a mess.
Not sure why this is hitting the home page right now but people may also be interested in Mujoco Playground [1] which is the latest RL environment wrapper of mujoco, implementing both classic deepmind-control benchmarks, and some very new interesting ones!
Snowden's document leaks happened in 2013 (implying the surveillance state was set up well before then). So this is more a leisurely stroll than a sprint.
Room 641A was leaked in 2006. To some extent, this all started in the 1940s with the Enigma and JN-25 code breaks. After that, everyone knew that intelligence was the future of power.
Unfortunately while evocative, it doesn't really make sense.
A Zamboni has a "conditioner" at the rear that contains a sharp horizontal blade that shaves the ice as the machine runs across it. The blade is a bit like a very wide wood plane; it is sharp and held a little below the current surface of the ice. The shavings are moved to a waste tank by internal horizontal and vertical augers.
You usually couldn't get near enough to the blade for it to harm you. However, I'm guessing a Zamboni could hurt you in other ways.
Disclaimer: I only skimmed the details... I'm sure applying the right amount of ingenuity could discover harmful means.
Anyone who had read Bamford's books on the NSA in the years before 2013 took a look at the info that came out and thought, "this is nothing new at all."
I see your point, and think it's valid, but here is a counter:
Content is graded on both instant appeal (e.g. rotten tomatoes "popcornmeter") and artistic appeal (e.g. rotten tomatoes "tomatometer").
I firmly believe that AI generated content cannot have any artistic appeal, because I believe art is fundamentally an invocation of human expression. This might be fine in some contexts, but in general I'd prefer consuming content from groups that I trust to strike a good balance between these types of appeal (e.g. A24 movies).
> Content is graded on both instant appeal (e.g. rotten tomatoes "popcornmeter") and artistic appeal (e.g. rotten tomatoes "tomatometer").
I understand the distinction, but I don’t find the examples compelling. The difference between the popcorn and tomato meters, as I understand it, is just the source. The latter are critics’ opinions while the former are “regular people” opinions. Professional critics may have some concern for the artistic value of a movie, but their job is to help you decide “should you spend your time with this” and the entertainment value is a primary consideration. Furthermore, a critic can have early access and needs to write their review fast. An audience member, who has no such obligation, can let it ruminate and have their opinions evolve. In that sense, a critic’s opinion may be more influenced by initial appeal.
Very interesting discrepancy in the attached example:
- "removing the kettlebell" led to removing the visual representation of the kettlebell as well as the deformation it makes on the pillow
- "removing the hands" removed the child's hands from the tops, but did not then lead to the tops falling over!
Others like the colliding cars are in some weird gray area between the two.
One should note that as these tools proliferate, there is a lot of artistic expression we are giving up to these imprecise natural-language parsing engines.
I think we'll regain that artistic expression and control in the future if the tools become popular.
We saw this with Stable Diffusion. First it was just text-to-image. Now we have ControlNets, which can control the precise placement of objects. We've got depth maps, body positioning, all kinds of things. I can even sketch on my tablet and turn that into an image.
I'm sure we can add options for what this physically affects or doesn't affect.
What does this even mean? Who is being "genuine"? This is far too naive a take for a company that's burning through hundreds of millions of dollars and constantly striving to set the tone of AI and assert its own supremacy.
I share this dilemma too. Just a thought: I feel less okay with AI processing "data made for humans" (e.g. the images themselves, audio recordings of speech) and more okay with it processing "data made for software" (EXIF data, Shazam logs).
The issue often manifests in victim blaming. People assume that because something bad has happened to someone, that person must be guilty of some transgression. It's often done on an unconscious level, and we have to check ourselves to make sure we're not doing it.