just wanted to give you a quick update. I’m still working on the feature list. There is one very important item I’m finalizing now: adaptive bitrate, and I’d like to share a few words on it. My goal is to cover both protocol families: TCP (used by RTMP and RTSP) and UDP (used by SRT, WebRTC, etc.).
The simplest approach is: measure frame loss, or packets dropped over a period of time, and decrease the bitrate accordingly. This sounds simple, but once you try to implement it, it’s not:
- Z1 connects through WiFi to a mobile phone or cellular hotspot.
- Both the Z1 and the hotspot/mobile phone may be moving, so the network link between the Z1 and the phone or hotspot can also drop frames and lose packets, not only because of bandwidth changes on the cellular network.
- The math needs to stay simple: it must run inside the Z1 while it is live streaming at the same time.
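To make the "simple" approach above concrete, here is a minimal sketch of a loss-based bitrate controller: sample sent/lost packet counts over a window and step the bitrate down (or slowly back up). The function name, thresholds, and step factors are all my own illustrative assumptions, not a final design.

```python
# Hypothetical loss-based bitrate adjuster. Thresholds and step
# factors (0.7, 0.9, 1.05) are illustrative assumptions.

def adjust_bitrate(current_kbps: int, packets_sent: int, packets_lost: int,
                   min_kbps: int = 8000, max_kbps: int = 25000) -> int:
    """Return a new target bitrate based on loss in the last window."""
    if packets_sent == 0:
        return current_kbps
    loss_ratio = packets_lost / packets_sent
    if loss_ratio > 0.05:        # heavy loss: back off hard
        new_kbps = int(current_kbps * 0.7)
    elif loss_ratio > 0.01:      # mild loss: back off gently
        new_kbps = int(current_kbps * 0.9)
    else:                        # clean window: probe upward slowly
        new_kbps = int(current_kbps * 1.05)
    return max(min_kbps, min(max_kbps, new_kbps))
```

Even this toy version shows the problem from the bullets above: it reacts only after loss has already happened, and it cannot tell WiFi loss apart from cellular bandwidth changes.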
Realistic goal:
- If we measure available bandwidth and set the video bitrate accordingly, live streaming should be much smoother, with far fewer interruptions than streaming without adaptive bitrate.
- For 360 live streaming, we should predefine a bitrate range, 8–25 Mbps, and make adaptive bitrate work within it. If a user sets the video bitrate to 18 Mbps with adaptive bitrate enabled, the bandwidth still must not drop below 8 Mbps; live streaming at 8 Mbps or below makes no sense. And on the platforms I tested, going above 25 Mbps makes no sense either, because it adds little visible quality on the viewer’s end.
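The range rule in the last bullet can be sketched in a couple of lines: the adaptive controller may move between 8 and 25 Mbps, but never above what the user asked for. The names here are assumptions for illustration.

```python
# Illustrative range rule: floor is fixed at 8 Mbps, ceiling is the
# user's requested bitrate capped at 25 Mbps.

RANGE_MIN_MBPS = 8
RANGE_MAX_MBPS = 25

def effective_range(user_mbps: float) -> tuple:
    """Range the adaptive controller is allowed to operate in."""
    ceiling = min(user_mbps, RANGE_MAX_MBPS)
    return (RANGE_MIN_MBPS, max(RANGE_MIN_MBPS, ceiling))
```

So a user setting of 18 Mbps gives the controller an 8–18 Mbps range, and anything the user requests above 25 Mbps is capped at 25.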
- An optimal solution would avoid packet loss during live streaming entirely, but that would mean playing it “safe” and keeping the actual bitrate more than 25% below the available bandwidth. Isn’t that a waste of bandwidth and quality? With the “simple” approach of measuring only packet loss, we can’t make this work; we need a much more complex approach: constantly measure bytes sent over a period of time, as often as possible, and compute statistics on the available buffer and its actual size. This gives us a good hint, especially if we try to predict it for the next 24–48 frames. Because of the nature of how live streaming works, we can’t take the “optimal” approach without predictions, which will again cost us CPU time and generate heat. I still think this will work and minimize interruptions.
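A rough sketch of the prediction idea above: keep a short history of bytes-sent-per-interval samples and extrapolate a linear trend a step or two ahead, so the encoder can be told to drop its bitrate before the send buffer fills. The class, window size, and the mapping of 24–48 frames onto sampling steps are all assumptions on my part, not a worked-out design.

```python
# Hypothetical throughput predictor. Samples are bytes/sec per
# measurement interval; prediction extrapolates a least-squares
# linear trend `steps_ahead` sampling intervals into the future
# (one step would cover the next 24-48 frames if we sample roughly
# once per second at 24-48 fps -- an assumption, not a spec).

from collections import deque

class ThroughputPredictor:
    def __init__(self, window: int = 32):
        self.samples = deque(maxlen=window)  # recent bytes/sec values

    def record(self, bytes_sent: int, interval_s: float) -> None:
        if interval_s > 0:
            self.samples.append(bytes_sent / interval_s)

    def predict_bps(self, steps_ahead: int = 1) -> float:
        """Predicted throughput in bits/sec, steps_ahead intervals out."""
        n = len(self.samples)
        if n < 2:  # not enough history for a trend
            return self.samples[-1] * 8 if self.samples else 0.0
        # least-squares slope over the sample index
        mean_x = (n - 1) / 2
        mean_y = sum(self.samples) / n
        cov = sum((x - mean_x) * (y - mean_y)
                  for x, y in enumerate(self.samples))
        var = sum((x - mean_x) ** 2 for x in range(n))
        slope = cov / var
        future = mean_y + slope * ((n - 1 - mean_x) + steps_ahead)
        return max(0.0, future * 8)  # convert bytes/sec to bits/sec
```

The least-squares fit is cheap enough to run on-camera, which matters given the CPU-time and heat concerns above; whether a linear trend is a good enough predictor over 24–48 frames is exactly what still needs testing.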
Work in progress…