
Doesn't make any sense; this would be like an audio engineer getting rid of their studio monitors and mixing on AirPods because that's what people listen on.

What they do instead is use their multi-thousand-dollar speaker and room setup for mixing (because it is better!) but then check the end result on AirPods, car stereos, etc.

Just test on the resolutions you care about, no need to cripple your development setup full time.



Maybe if audio engineers constrained themselves to what people listened on, I would actually be able to hear what people are saying in modern movies.


This is the correct take for Netflix and Christopher Nolan. The answer is that they used to.

I read a forum post where an engineer working on a TV show would play it in his living room, bedroom, and guest bedroom, walking around the house making sure the sound was balanced everywhere, driving his wife insane.

It's not engineers making that decision; it's Nolan overreaching: "If you see it particularly in an IMAX theater, projected, it’s pretty remarkable". source -> https://www.indiewire.com/2020/11/christopher-nolan-director...

Not sure what his deal is. 'Nolan also admitted in a 2017 interview with IndieWire that his team decided “a couple of films ago that we weren’t going to mix films for substandard theaters,” adding, “We’re mixing for well-aligned, great theaters.”' source -> https://www.indiewire.com/2020/09/tenet-sound-mixing-backlas...

It's madness. Idk what Netflix's excuse is. Their spec sheet is 2600+ words, but I think the issue is this line: "5.1 audio is required and 2.0 is optional." My best guess is that if you have five speakers and a subwoofer it's fine, but if you're on cheap headphones or a laptop, good luck.
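
For context on what "5.1 required, stereo optional" means in practice: when a service ships only a 5.1 mix, the playback device has to fold it down to stereo itself. Here's a minimal sketch of the common ITU-style downmix (the exact coefficients vary by spec; the -3 dB values below are an assumption on my part) showing why dialogue suffers: the centre channel, which carries most of the dialogue, is attenuated and then has to compete with everything folded in from the surrounds.

    import numpy as np

    def downmix_5_1_to_stereo(fl, fr, c, lfe, sl, sr):
        # Fold a 5.1 channel set down to stereo. Centre (mostly dialogue)
        # comes in 3 dB down; LFE is typically dropped entirely.
        left = fl + 0.707 * c + 0.707 * sl
        right = fr + 0.707 * c + 0.707 * sr
        return left, right

    # Toy example: dialogue only in the centre channel, at full scale.
    silence, dialogue = np.zeros(4), np.ones(4)
    left, right = downmix_5_1_to_stereo(silence, silence, dialogue,
                                        silence, silence, silence)
    print(left)  # [0.707 ...] -- dialogue lands 3 dB quieter than it was mixed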


Nope, I can’t understand the dialogue even in IMAX. I think Nolan has whatever the opposite of auditory processing disorder is.


Nolan's stuff sounds like shit on my fully calibrated 5.1 setup.

Also sounded like shit when I saw it in a (non-IMAX, but "X-Treme audio" or whatever it was) cinema.

Guess I'd need to buy my own IMAX cinema to make it watchable.

Think I'll just pass and watch something from a competent director instead.


Netflix sound is just crap. It has nothing to do with a "cinematic experience". Dialogue is not intelligible even at very high volume.


Realistically, they'd need two mixes: one for people who care about audio quality and have invested in a proper home theatre, and another stereo mix for the large majority of movie enjoyers.


Yes? That's what we had on DVDs back in the day: one 5.1 or 7.1 audiophile mix (or sometimes both!) and one simple stereo mix for the other 99% of the population, both distinct from the cinema mix.

There was never a reason to get rid of that in favour of the current "one size fits none" approach, other than cheapness.


Check Vox's "Why we all need subtitles now"

https://youtu.be/VYJtb2YXae8


This was actually a very frustrating video. The audio engineer goes on to say that we _need_ dynamic range to enjoy the movie, which is just not true. I remember when the dynamic range debate (really, about the lack of it) was going on with regard to music. I can see _how_ the argument would apply to movies and TV, but I really hate when I have to keep adjusting the volume because I don't want to damage my hearing. It's like the worst of both worlds: music remains heavily compressed, while movies have incomprehensibly quiet dialogue.
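
For anyone who hasn't followed the loudness-war debate: a dynamic range compressor narrows the gap between the loudest and quietest parts of a signal. A minimal sketch (instantaneous gain only, no attack/release smoothing; the threshold and ratio are made-up illustrative values):

    import numpy as np

    def compress(signal, threshold_db=-20.0, ratio=4.0):
        # Static compressor: attenuate anything above the threshold by the
        # given ratio. Real compressors smooth gain changes over time with
        # attack/release envelopes; this sketch skips all of that.
        level_db = 20 * np.log10(np.abs(signal) + 1e-10)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        return signal * 10 ** (gain_db / 20.0)

    # An explosion at full scale vs dialogue at -30 dB:
    loud, quiet = np.array([1.0]), np.array([0.0316])
    print(compress(loud), compress(quiet))  # the 30 dB gap shrinks to ~15 dB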


Any proper mix engineer will do exactly this. They will mix on studio monitors but also on shitty laptop speakers, bookshelf speakers, mono Bluetooth speakers, etc. You have to design for the medium, and most people have shitty displays. Something might look great on a Retina screen and have no contrast at all on a $50 LG.
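
The display half of this is easy to check in code. A quick sketch of the WCAG 2.x contrast-ratio formula (the grey-on-white colours below are just illustrative) shows how a design that reads fine on a good panel can sit below accessibility thresholds:

    def srgb_to_linear(c):
        # WCAG relative-luminance linearisation for one sRGB channel in [0, 1].
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def contrast_ratio(rgb1, rgb2):
        # WCAG 2.x contrast ratio between two 8-bit RGB colours.
        def luminance(rgb):
            r, g, b = (srgb_to_linear(v / 255.0) for v in rgb)
            return 0.2126 * r + 0.7152 * g + 0.0722 * b
        hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
        return (hi + 0.05) / (lo + 0.05)

    # Subtle grey text on white, the kind that looks fine on a good panel:
    print(contrast_ratio((120, 120, 120), (255, 255, 255)))  # ~4.4, below AA's 4.5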


Don’t forget the famous car test! A mix is not complete until it sounds good in an old Toyota Camry from the 90s.


Sure shitty laptop speakers are how most people listen to music, but they each listen on different shitty laptop speakers with completely different sound characteristics. This would be like a cobbler wearing the shoes he makes for his customers but only ever trying on the size 9 shoes, because they are the most common, while having a size 11 foot.


> Sure shitty laptop speakers are how most people listen to music, but they each listen on different shitty laptop speakers with completely different sound characteristics.

The canonical shitty speaker that almost every studio used for years was the Yamaha NS10.

They were used because of their limited frequency response and dynamic range. When mixing, the engineer would switch between the good speakers and the Yamahas. The reasoning is that if it sounds good on the bad speakers, it will sound good anywhere.

It’s not necessary to test mixes on every conceivable speaker, but a couple of different types should satisfy you that it’s in the pocket.


I remember reading a paper that analysed the Yamaha NS10 and came to the conclusion that they were not just some shitty speakers but actually had some exceptional qualities like their transient response. It probably was the one linked in this article: https://www.soundonsound.com/reviews/yamaha-ns10-story


Thanks, that’s a good article. I liked this observation which I think applies to multiple fields.

> Misunderstanding also tends to breed misinformation, which is often disseminated by well-meaning amateurs: those whose knowledge of a subject is sketchy are always prey to the intuitively plausible but utterly wrong explanation for one phenomenon or another.


I thought an overwhelming majority listen to music on headphones/earphones?

As for the rest, it seems to me that cheaper (“non-smart”) HomePod-style speakers are pretty prevalent, at least in higher-income countries.


This [1] was a multiple-choice poll in 2015 that had Computer Speakers (55%) and Headphones (41%), with mostly other speaker-related options.

[1] https://www.strategyanalytics.com/strategy-analytics/news/st...


>Doesn't make any sense; this would be like an audio engineer getting rid of their studio monitors and mixing on AirPods because that's what people listen on.

Skrillex, an EDM producer who became popular about a decade ago, did approximately this. They were iPod headphones back then, not AirPods. Perhaps designing for the vast majority of users is effective enough?


He took a music genre that you couldn't really listen to unless you had semi-professional equipment and modified the basses to make them 'hearable' for the masses on cheap equipment, or even loud on their phones.

I can kinda see that this worked for him. Not sure if that translates well to other producers.


It actually makes sense. I struggle to see how one might enjoy Burial's debut album on a set of cheapo headphones. There's a lot going on below 100 Hz.

While I bet Bangarang was most listened to worldwide on that awful set of white Apple headphones everybody had.

I know Skrillex produces another "genre" of music, but Burial's is one of the seminal albums in the dubstep genre, which Skrillex is part of.

(I've just learned the mid-heavy dubstep popularised by Skrillex & co. is called brostep)


The trouble with that, specifically for audio, is that if you never test on anything else, you might miss things like crackles that only happen on speakers capable of reproducing that frequency range.


He specifically talked about an audio engineer. Yes, as a composer/producer you can get by without audio monitoring hardware if you have an audio engineer taking care of the mixing/mastering after you. Those are different roles.


There is a caveat that a studio may have a cheap set of speakers around to ensure everything sounds acceptable on consumer equipment. But yes, that is in addition to expensive monitors, and maybe a subwoofer the size of a minifridge.


>then check the end result on AirPods, car stereos, etc.

And when it sounds like shit on AirPods, ship it anyway?


There are exceptions to every rule! Ludacris made at least one album from the ground up listening only to stock Apple headphones.


I mean, you think it's dumb, but I honestly think that mixing with AirPods would be the better option.

Same with icons, where a high-resolution pixel graphic can look horrible when scaled down to icon size, versus an icon drawn from the get-go at the resolution it will be shown at.

It reflects the problem with software development today as well.

Most of the time developers are using top-end machines and massive displays with fast internet, and they build bloated things that work nicely and 'fast enough' for them but are unusable once they run on low- or mid-range laptops and phones.

In the 90s programmers didn't have that luxury, all hardware was the same, so what they were programming on was what the end user would be on, and since hardware was limited anyway, every bit and cycle counted. That's why you get games like Doom/Quake that run on 75 MHz and 8 MB of RAM and are buttery smooth, whereas today something like Flappy Bird needs 512 MB minimum!
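
You can recreate a bit of that constraint today. As a sketch, assuming Playwright's Python bindings driving Chromium (the 6x rate and the URL are arbitrary placeholders), you can throttle the CPU before profiling your own app:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Chromium DevTools protocol: make the CPU ~6x slower, roughly
        # approximating a budget phone instead of a dev workstation.
        cdp = page.context.new_cdp_session(page)
        cdp.send("Emulation.setCPUThrottlingRate", {"rate": 6})
        page.goto("https://example.com")  # placeholder URL
        browser.close()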


This whole comment section is making me think some people aren't being very diligent with their testing.

A professional mix absolutely requires testing across multiple grades of devices. Whether or not someone solely produces on AirPods is really personal preference, so long as they verify their mix on many types of gear before shipping. A good development shop should be testing across screen sizes and performance profiles for the same reasons.
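
On the web side, a minimal sketch of that kind of screen-size pass (Playwright's Python bindings again; the viewport list and URL are placeholders I've picked):

    from playwright.sync_api import sync_playwright

    # A big desktop, the ubiquitous cheap 1366x768 laptop, and a phone.
    VIEWPORTS = [(2560, 1440), (1366, 768), (390, 844)]

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for width, height in VIEWPORTS:
            page.set_viewport_size({"width": width, "height": height})
            page.goto("https://example.com")  # placeholder URL
            page.screenshot(path=f"layout-{width}x{height}.png")
        browser.close()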


Lack of testing is part of it; most places do less testing than is ideal.

But there’s much more to it than that. From the article, the author’s resolution is 1366×768. This was designed and tested for! The problem wasn’t that it was broken at that res; the problem was that it was incorrectly classified as a tablet-only resolution.

The issue here is their designers and developers being out of touch with their users, and using constrained hardware helps with that.
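
To make that misclassification concrete, here's a hypothetical sketch of width-only breakpoint logic (the thresholds are invented, not taken from the article):

    def device_class(width_px: int) -> str:
        # Hypothetical width-only breakpoints; the numbers are made up.
        if width_px < 768:
            return "phone"
        if width_px < 1440:  # bug: sweeps in 1366x768 laptops as well
            return "tablet"
        return "desktop"

    print(device_class(1366))  # "tablet", but it's a very common laptop width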


>In the 90s programmers didn't have that luxury, all hardware was the same

I mostly agree with you. However, the 90s were actually a period when computer hardware became obsolete rapidly. The 486-to-Pentium transition, for example, meant that lots of people had PCs far less powerful than the PCs used by games developers. Quake is a case in point: it came out in 1996 and would not have run on a typical PC bought only two or three years earlier. A few years after that, many games would only run acceptably on PCs with 3D graphics cards (which were by no means universal).



