stream_http: add libcurl-based HTTP/HTTPS backend #17879
Conversation
I tested this PR and confirmed it works, including HTTP/3 👍. mingw test build: https://github.com/Andarwinux/mpv-winbuild/releases/download/2026-05-07-cfd818bcae/mpv-x86_64-v3-20260507-git-27cd42ff65.7z
```c
void *curl_data = NULL;
int r = AVERROR(ENOSYS);
if (priv->opts->prefer_curl)
    r = mp_curl_avio_open(demuxer, pb, &curl_data, url, flags);
```
There should be some logging for which backend was used; it would be useful for debugging purposes. It would also be nice to have more logging in the curl backend itself.
It would also be nice to have more logging in the curl backend itself
What logging do you need? Headers can be enabled with msg-level=curl=trace, and the rest is just a few errors; opening logging is handled by common code.
There should be some logging for which backend was used
It's already logged, you will see either lavf or curl, depending on the logging parent.
[ 2.748475] curl: Opening ...
[ 2.794586] curl: Mime-type: 'video/mp4'
headers can be enabled with msg-level=curl=trace
There are no logs for me with the "curl" module on your branch.
It's already logged, you will see either lavf or curl, depending on the logging parent.
[http] status=206 compressed=0 size=372656696 seekable=1
[http] resize stream to 131072 bytes, drop 0 bytes
[http] Mime-type: 'video/mp4'
[http] Stream opened successfully.
This is all I see. The only way I can tell it's using the curl backend is the first line.
I've renamed it to curl (in your version it would be msg-level=http=trace), but you could tell it was being used, because the http tag was used exclusively by this stream.
Renamed to stream_curl. Resolved review comments. I still have some local changes in a few areas; will update later.
Added ftp/ftps support. This is significantly more stable than the lavf one.
Somewhat confusing with ... when the certificate is actually fine and valid. But that seems to be up to curl.
```lua
end
if chunk_size < math.huge then
    stream_opts = append_libav_opt(stream_opts, "request_size", tostring(chunk_size))
    mp.set_property_native("file-local-options/curl-max-request-size", chunk_size)
```
Not entirely related here but:
I do wonder if we're doing ourselves a favor with this option by default.
Google is throttling video (IMO) not because they hate downloaders, but to save themselves bandwidth (-> money) when people are eagerly preloading¹ an entire 20 minute video and then quit after minute 3.
If every software now implements this workaround the next likely thing to happen is that Google will plug the hole and we are back to square one.
I tried reverting the commit locally and with the curl backend interactive seeking feels fast enough in YT videos for me, so it's not strictly needed.
¹ Notice how the YT web player preloads only the next 30s, and only after a bit of watching does that duration increase. mpv's default demuxer cache is huge
Google is throttling video (IMO) not because they hate downloaders, but to save themselves bandwidth (-> money) when people are eagerly preloading¹ an entire 20 minute video and then quit after minute 3.
Correct. However, they don't care about "downloaders" in this context, imho. I can only speculate, because I don't have insider info, but to me it is clear this is CDN-level download throttling that is used across the stack. The idea is quite simple: on any new request/seek you get burst speed to pre-cache some data; after that, any sustained playback just downloads at a rate that is enough for smooth playback.
If every software now implements this workaround the next likely thing to happen is that Google will plug the hole and we are back to square one.
I don't think they care this much, especially not about mpv, where playback is organic. Maybe if someone is trying to download hundreds of videos with ytdl, but they will get IP-banned quite quickly.
I tried reverting the commit locally and with the curl backend interactive seeking feels fast enough in YT videos for me, so it's not strictly needed.
Correct. This hack/workaround is no longer needed¹ in practice; seeking works well enough.
I do however think it's mostly inconsequential in mpv, where the default cache max bytes is 150MB. We won't hammer the servers that much. I also don't want to give this option more attention than it has; it is just something that comes from ytdl...
¹ Note: in some cases sustained download is not fast enough (without the hack) to keep up with best-quality playback. (I don't know if that's still the case; in my location I never had this issue, the download was throttled, but generally fast enough.) Which is very likely also CDN load balancing on their side. The problem is that mpv doesn't change playback quality dynamically based on cache health, which the first-party player does, so it can react when the download is slower.
To summarize: yes, throttling at the CDN level is load balancing and not targeted at "bad actors", as many people imply. At the same time, we don't provide an auto-quality option with dynamic switching, so it may be problematic in some scenarios.
mpv's default demuxer cache is huge
Really? It's 150MB forward only. This is not that huge.
Really? It's 150MB forward only. This is not that huge.
I should have said "by comparison". With some random video I tested, mpv's default cache could hold 10 minutes, which was ¼ of the entire video.
max-request-size is very much needed to play at speed > 1.
sfan5 left a comment
works, tested:
- http with lighttpd
- https with nginx (http2 works too)
- youtube
- twitch (HLS)
stream-lavf-o[cookies] only feeds the lavf stream backend. Instead of side-loading it, use the core `--cookies-file` option.
Will be useful for future commits
Introduces stream_http, an internal libcurl-driven backend for http:// and https:// URLs. It runs all transfers on a dedicated curl multi thread with a producer-side ring buffer, and exposes HTTP/2 multiplexing, QUIC/HTTP/3 (when libcurl supports it), HSTS and TCP keep-alive. It should generally be more stable and faster than FFmpeg's HTTP/1.1 implementation; additionally, connections are kept alive across file opens, so if you open a playlist of network files and navigate through them, it will reuse the same connection. The build is gated on the new 'libcurl' meson option (auto). When disabled or unavailable, mpv silently falls back to FFmpeg's HTTP implementation.
Hook AVFormatContext.io_open so HLS/DASH segment fetches and other nested FFmpeg I/O go through stream_http when libcurl is available. This lets segments inherit the connection reuse, HTTP/2 multiplexing, mid-stream resume and conditional compression of the top-level stream. Falls back to FFmpeg's default I/O for non-HTTP URLs and when the curl backend is not built in.
Allow only a subset of the protocols supported by libcurl.
This is for stream_curl.
At the initial file open, the buffer may be populated with more data than is later requested by the decoder, but don't drop this data. This fixes the initial open to allow multiple reads of the same area, especially for images that are likely already fully read for probing.
It's significantly more stable than lavf. Fixes seeking out of cached ranges. Additionally, correctly marks demuxer_is_network for ftp too.
Added
What happened to
It was confusing and no longer needed, so it has been removed.
I think this should be kept as an option for debugging and testing purposes, without needing to recompile mpv, especially considering there is some custom handling in stream_curl that has not been extensively tested. mpv has several other options like this. It does not cause any drawback if curl is preferred by default.
Note that the option you refer to was not doing what you think it was: it only handled the nested transfer, hence it was removed to avoid confusion. I can add
Yes, that was what I meant.
Added
Disabled by default on libavformat < 62.15.101 due to the https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/23082 issue.