I just finished adding AV1 support to Pion WebRTC. What AV1 is able to do is pretty amazing! I spent a lot of my time with H264 because of legacy devices/hardware acceleration. Seeing the jump in quality and the reduction in bandwidth usage, it is going to be hard to go back.
As far as I can tell, VP9 hardware decoding should be available[0][1], but Safari will fall back to software decoding[2]. I can't find a definitive source for Apple's supported codecs.
The device's hardware encoders/decoders are already licensed. Unless you're shipping your own codec, you're unlikely to have to pay any royalties. If you're so successful that the MPEG-LA comes a' knockin', you should have the money to pay the royalties.
The whole purpose of the MPEG-LA is to be a single entity to approach about the patent licenses held by a bunch of companies. Avoiding paying for licenses means avoiding the codecs best supported by mobile hardware.
You end up with artificially limited numbers of simultaneous encode streams, at least with NVENC (I think they doubled the limit at the beginning of the pandemic, though).
What is the target use case of Pion? Until AV1 support is added to libwebrtc, how can this be used in a way that wouldn't be better served by using RTP directly?
So when I started building Pion, the target use case was to make it easier to build scalable servers. Instead of interacting with a WebRTC server's REST API to query information and load balance, I wanted to have it all in one code base. It's also really useful to have media and transport decoupled. Lots of use cases I didn't anticipate grew out of that.
Thanks, that's helpful. Does Pion support RTCP? I've tried sending pre-recorded media over WebRTC before. The issue I ran into is that I needed to send keyframes too often, leading to a bitrate that was too high for users with slower internet connections. For whatever reason, not all browsers seemed to support keyframe intra refresh, so there were large bitrate spikes whenever a keyframe was sent. RTCP would have solved this problem by allowing clients to request additional keyframes when needed, rather than relying on a short keyframe interval.
We also provide https://github.com/pion/interceptor. Interceptor provides implementations of common RTCP workflows: things like congestion feedback, NACK generation, and congestion control. The idea is that you can use the ones we provide, or bring your own!
That’s cool. Do you have any examples of doing real-time media streaming? I guess Pion is just the transport layer, so you need a separate media library?
I'm looking to take a pre-recorded video file and stream it to a WebRTC client. I use an SFU, but I don't think that's the important part. The main issue is transcoding the video into a format that the browser supports, in a way that can respond to RTCP feedback signals. For instance, you can generate an RTP SDP file to use as the output of an FFmpeg CLI command, but the FFmpeg CLI has no support for RTCP. You had mentioned that Pion was used to stream pre-recorded media over WebRTC, so I was hoping you had an example that properly handled RTCP.
I see, so these examples still suffer from the keyframe issue. As far as I know there is no good solution, but I'd be interested in hearing how Pion's users handle it. Streaming a file from disk is ideal because it's cheap and doesn't require any video transcoding, but as soon as you need to insert additional keyframes, at least some portion of the source video has to be transcoded again.
You can respond if you click the timestamp of the comment :).
I understand that Pion is a transport library; I was mostly wondering if you've seen anyone solve this issue. To give you a little background, we built an app that allowed a movie containing a children's story to be read aloud with the participants. The movie streamed from our servers over WebRTC. It worked well, but we found that users in Europe often have DSL, and a keyframe interval of 3 seconds was too much for their connections to handle. Increasing the keyframe interval led to situations where users saw nothing when they first connected. We eventually switched to synchronized local playback for media, but it's much more difficult to time perfectly, and people have noticed it's more out of sync than before.
Just a heads up, in future conversations you may want to mention the DSL part at the beginning. WebRTC over DSL is definitely not impossible, but you’re going to be dealing with whole classes of issues most WebRTC devs aren’t running into on a regular basis. It’s also unfortunately true that many modern use cases for WebRTC are only possible because cable, fiber, and 3G/4G/5G have increased the bandwidth available to most users. So it may be the case that what you’re looking for does not in fact exist.
But keep looking! Just make sure to mention that you’re targeting DSL so people know not to recommend things you’ve already ruled out.