In case anyone's wondering, this website's syntax highlighting color scheme is called "gruvbox", which I quite like but took an embarrassingly long time to track down
Fair point, we could answer that more directly on the site. Besides the comparison were there other things that make it seem oriented to people already familiar with it?
Generally, the video tag is great and has come a very long way from when Video.js was first created. If the way you think about video is basically an image with a play button, then the video tag works well. If at some point you need Video.js, it'll become obvious pretty quick. Notable differences include:
* Consistent, stylable controls across browsers (browsers each change their native controls over time)
* Advanced features like analytics, ABR, ads, DRM, 360 video (not all of those are in the new version yet)
* Configurable features (with browsers UIs you mostly get what you get)
* A common API to many streaming formats (mp4/mp3, HLS, DASH) and services (YouTube, Vimeo, Wistia)
Of course many of those things are doable with the video tag itself, because (aside from the iframe players) video.js uses the video tag under the hood. But to add those features you're going to end up building something like video.js.
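For context, the baseline that list is comparing against is just the bare tag (a minimal sketch; the source path is a placeholder):

```html
<!-- Native baseline: browser-supplied controls, one progressive MP4.
     Everything in the list above is what you'd be adding on top of this. -->
<video src="movie.mp4" controls preload="metadata" width="640"></video>
```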
It just doesn't work in every environment. Every browser version has its own issues and edge cases. If you need a stable video player or want streaming features, you should use it.
P.S. I built a movie streaming and TV broadcasting player for the country of Georgia and supported environments from 2009 LG Smart TVs to modern browsers.
You think it's solid until you want customization and old browser support. It should work fine if you just want to autoplay a small mp4 file on mute.
Ah...you're scratching at some scabs with this totally reasonable question.
We learned some tough lessons with media-chrome[1] and Mux Player, where we tried to just write web components. The React side of things was a bit of a thorn, so we created React shims that provided a more idiomatic React experience and rendered the web components...which was mostly fine, but created a new set of issues. The reason we chose web components was to not have to write framework-specific code, and then we found ourselves doing both anyway.
With VJS 10 I think we've landed on a pretty reasonable middle ground. The core library is "headless," and then the rendering layer sits on top of it. Benefit is true React components and nice web components.
Web components sound neat until you try to make styling and SSR behave across a mess of app setups, and then you're burning time on shadow DOM quirks, hydration bugs, and framework glue instead of the player itself. Most users do not care. A plain JS lib with a decent API is easier to drop into an old stack, easier to debug, and less likely to turn us into free support for someone's ancient admin panel.
Is it not a web component, per se? Per the article, all the React stuff does seem to bake down to HTML Custom Elements, which get wired up by some client-side JS registering them. That client-side JS is still a "web component", even if it's embedded inside a React SPA code bundle, no?
If you mean "why do I need React / any kind of bundling; why can't I just include the minified video.js library as a script tag / ES6 module import?" — I'm guessing you can, but nobody should really want to, since half the point here is that the player JS that registers to back the custom elements, is now way smaller, because it's getting tree-shaken down to just the JS required to back the particular combination of custom elements that you happen to use on your site. And doing that requires that, at "compile time", the tree-shaking logic can understand the references from your views into the components of the player library. That's currently possible when your view is React components, but not yet possible (AFAIK) when your view is ordinary HTML containing HTML Custom Elements.
I guess you could say, if you want to think of it this way, that your buildscript / asset pipeline here ends up acting as a web-component factory to generate the final custom-tailored web-component for your website?
Probably not the base case, but a quick test to replace my audio player (currently using Plyr) turned up the following gaps for me, at least with the out-of-the-box code.
1. No playback rates under 1
2. No volume rocker on mobile
3. Would appreciate having seek buttons on mobile too
4. No (easily apparent) way to add an accent color, stuck with boring monochrome
5. Docs lacked clear example/demo/playground so I wasn't sure what it would look like until implemented
All solid feedback, thanks! I'm making sure these get captured as issues. Otherwise we're closely tracking feature parity with Plyr (and other players) and our goal is to have full parity by GA, aiming for the middle of the year.
I'm not familiar with video hosting, but I have played with the HTML5 video player, and I have this question: on the server side, do I have to host a specific endpoint that serves chunks of video? Let's say I take a 720p video @ 800MB and chunk it into 2MB pieces with ffmpeg. So I have a folder somewhere (webserver, CDN, blob storage) with the original 4K video, then generate downscaled versions for 1440p, 1080p, 720p, so I end up with 4 large files, and then for each of those, I chunk them into reasonable sizes that align with bitrates / key frames. And then some thumbnail generation. Any advice on what the "best" way would be to chunk/host video files so that videojs runs the best and smoothest? I feel that I should build a very lean/fast chunk & thumbnail server, just one or two endpoints. Or is it best to let the webserver do the lifting? Or off-the-shelf media servers (like in the self-hosting community)?
Just convert it to HLS, which is naturally chunked at 1-2 second intervals, and serve all the pieces from nginx. No dynamic content needed. I do this with videojs and it works great. Added bonus of HLS is that my LG TV supports it natively from <video> tags.
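As a sketch, the conversion is a single ffmpeg invocation (the input name, codecs, and segment length here are placeholders to adjust):

```shell
# Transcode once into 2-second HLS segments plus a playlist,
# all static files that nginx can serve as-is.
ffmpeg -i input.mp4 \
  -c:v libx264 -c:a aac \
  -hls_time 2 \
  -hls_playlist_type vod \
  -hls_segment_filename 'seg_%03d.ts' \
  playlist.m3u8
```

Point the player at `playlist.m3u8` and it fetches the segments on its own.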
If you don't need to switch versions at runtime (ABR), you don't even need to chunk it manually. Your server has to support range requests and then the browser does the reasonable thing automatically.
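If you want to confirm a server honors range requests, a quick check from the command line (the URL is a placeholder):

```shell
# A range-capable server answers "206 Partial Content" with a Content-Range
# header; a plain 200 with the full body means seeking will be inefficient.
curl -s -D - -o /dev/null -r 0-1023 https://example.com/video.mp4
```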
The simplest option is to use some basic object storage service and it'll usually work well out of the box (I use DO Spaces with built-in CDN, that's basically it).
Yes, serving an MP4 file directly into a <video> tag is the simplest possible thing you can do that works. With one important caveat: you need to move the "MOOV" metadata to the front of the file. There are various utilities for doing that.
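ffmpeg can do it as a quick remux with no re-encode (filenames are placeholders):

```shell
# Copy the streams untouched but rewrite the container with the moov
# atom at the front, so playback can begin before the download finishes.
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```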
It's not quite as simple as that because the chunks should be self-contained; they need to start with an IDR keyframe, which fully resets the decoder. That allows the player to seek to the start of any chunk.
That means when you're encoding the downscaled variants, the encoder wants to know the size of the file segments so it can insert those IDR frames. Therefore it's common to do the encoding and segmentation in a single step (e.g. with ffmpeg's "dash" formatter).
You can have variable-duration or fixed-duration segments. Supposedly some decoders are happier with fixed-duration segments, but it can be fiddly to get the ffmpeg settings just right, especially if you want the audio and video to have exactly the same segment size (here's a useful little calculator for that: https://anton.lindstrom.io/gop-size-calculator/)
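The arithmetic behind that calculator is simple enough to sketch (the function name is made up, and the flags mentioned are for libx264):

```javascript
// For fixed-duration segments, every segment must begin on an IDR frame,
// so the keyframe (GOP) interval in frames has to equal the segment length.
function gopSize(segmentSeconds, fps) {
  const frames = segmentSeconds * fps;
  if (!Number.isInteger(frames)) {
    throw new Error("segment length must be a whole number of frames");
  }
  return frames;
}

// 4-second segments at 30 fps need a keyframe every 120 frames, which for
// libx264 would look like: -g 120 -keyint_min 120 -sc_threshold 0
// (-sc_threshold 0 stops the encoder inserting extra scene-cut keyframes)
console.log(gopSize(4, 30)); // 120
```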
For hosting, a typical setup would be to start with a single high-quality video file, have an encoder/segmenter pipeline that generates a bunch of video and audio chunks and DASH (.mpd) and/or HLS (.m3u8) manifests, and put all the chunks and manifests on S3 or similar. As long as all the internal links are relative they can be placed anywhere. The video player will start with the top-level manifest URL and locate everything else it needs from there.
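The resulting layout on the bucket might look something like this (names are illustrative):

```
s3://my-bucket/video-123/
├── master.m3u8        # top-level manifest: lists the renditions below
├── 1080p/
│   ├── index.m3u8     # media playlist, relative links to its own segments
│   ├── seg_000.ts
│   └── seg_001.ts
└── 720p/
    ├── index.m3u8
    └── seg_000.ts
```

Because every link is relative, the whole folder can be copied to any host or CDN without rewriting anything.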
Just want to say, thanks for the comprehensive blog post and for not treating the reader like a child. You did a great job explaining the differences & changes. I wish more product/project releases were done this well.
Congrats Steve! I haven't touched video since I was at JW Player a million years ago, but I was always inspired by the simplicity of video.js (especially the theming).
Hope this new iteration is exceptionally successful.
Oh hi Zach! Blast from the past. Hope you’re doing well and thanks for the well wishes. Always enjoyed chatting with you and the JW team at FOMS and conferences. The water’s warm back here in video tech if you ever want to jump back in!
So fun seeing all these familiar names pop up in a single thread. I haven't been active in video since leaving Kaltura, but I have fond memories of FOMS/FOSDEM and meeting all of you!
Genuinely didn't expect 88% — what was the biggest win? Guessing it was the plugin system since that thing was a mess. Also curious if you broke any of the major integrations during the rewrite or managed to keep them intact.
I was just lamenting the other day about the size of video.js, which is used in my legacy web app, and looking for a way to improve that. Very keen to explore how we could migrate to v10!
This is amazing. We also kind of created a Player context provider and were using it to maintain/mutate player state globally. If it's possible to also share any examples related to player events and the new way to register plugins in v10, that would also help better understand the overall picture.
Hey there, I'm on the Video.js team! Sounds like your context provider approach is already in the right ballpark!
Some background: our store[1], which was inspired by Zustand[2], is created and passed down via context too. This is the central state management piece of our library and what we imagine most devs will build on when extending and customizing to their needs.
Updates are handled via simple store actions like `store.play()`, `store.setVolume(10)`, etc. Those actions are generally called in response to DOM events.
On the events side of things, rather than registering event listeners directly, in v10 you'd subscribe to the store instead. Something like `store.subscribe(callback)`, or in React you'd use our `usePlayer`[3] hook. The store is the single source of truth, so rather than listening to the underlying media element directly, you're observing state changes.
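In plain JS, the pattern described above can be sketched like this (all names are illustrative, not the actual v10 API):

```javascript
// Minimal Zustand-style store: one state object, actions mutate it,
// subscribers are notified on every change.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe function
    },
  };
}

const store = createStore({ paused: true, volume: 1 });

// Actions are the only writers; in a real player they'd also call into
// the underlying media element (video.play(), video.volume = ...).
const actions = {
  play: () => store.setState({ paused: false }),
  setVolume: (volume) => store.setState({ volume }),
};

// UI observes the store, not the <video> element directly.
store.subscribe((s) => console.log("paused:", s.paused, "volume:", s.volume));
actions.play();
actions.setVolume(0.5);
```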
---
So far with v10 we haven't been thinking about "plugins" in the traditional sense either. If I had to guess at what it would look like, it'd be three things:
1. Custom store slices[4] so plugins can extend the store with their own state and actions
2. A middleware layer that plugs into the store's action pipeline so a plugin could intercept or react to actions before or after they're applied, similar to Zustand middleware, or even in some ways like Video.js v8 middleware[5]
3. UI components that plugins can ship which use our core primitives for accessing the store, subscribing to state, etc.
I believe that'd cover the vast majority of what plugins needed in v8. We haven't nailed down the exact API yet but that's the direction we're leaning towards. We're still actively working on both the library and our docs so I don't have somewhere I can link to for these just yet (sadly)! We're likely targeting sooner, but GA (end of June) is the deadline.
I should also add... one thing we prototyped early on that may return: tracking end-to-end requests through the store. A DOM event triggers a store action like play, which calls `video.play()`, which then waits for the media event response (play, error, etc.). It worked really well and lines up nicely with the middleware direction.
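A rough sketch of how that middleware idea could compose (every name here is hypothetical; nothing is the real v10 API):

```javascript
// Wrap every store action so middleware can observe or intercept it
// before and after it runs, Zustand-middleware style.
const log = [];

function withMiddleware(actions, before, after) {
  const wrapped = {};
  for (const [name, fn] of Object.entries(actions)) {
    wrapped[name] = (...args) => {
      before(name, args);
      const result = fn(...args);
      after(name, result);
      return result;
    };
  }
  return wrapped;
}

const actions = withMiddleware(
  // The real action would call video.play() and await the media event.
  { play: () => log.push("video.play()") },
  (name) => log.push(`before:${name}`), // a plugin could intercept here
  (name) => log.push(`after:${name}`)
);

actions.play();
// log: ["before:play", "video.play()", "after:play"]
```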
Looking great. I'll give it a try later on once things stabilize a bit.
In the meantime, does anyone know what's going on in this space? It seems like a lot has changed over the past year. E.g. React Player has a new version and was taken over by Mux. I also realized Video.js is sponsored by Mux, and seemingly different companies are working together.
OP and Mux co-founder here, so I have all the context on this. A lot has changed. Mux stepped in to help maintain React Player a few years ago. It wasn't getting frequent updates, and Mux has a vested interest in the whole OSS player ecosystem (even if we didn't build it) because Mux Video (hosting) is player agnostic, and we get support requests for all of them. @luwes from Mux did the work to get to the new version, while making it possible to use Media Chrome media elements with React Player and consolidating some development efforts. We're still a tiny player team so that was important.
There are no immediate plans to deprecate React Player and I think it holds a special place in the ecosystem, but there will be overlap with video.js v10 and if there's specific features you care about or feel are missing, or if you think we're doing a bad job, please voice it here.
It was a similar story with Vidstack and Plyr, with Mux first sponsoring the projects. That's how I met Rahim and Sam, and how we got talking about a shared vision for the future of players.
I am curious, why would anyone pick HLS over DASH these days?
Granted, my knowledge on the matter is rather limited, but I had some long-running streams (weeks), and with HLS the playlist became quite large, while with DASH, the MPD was as small as it gets.
This is true, and the whole iOS/iPadOS/tvOS ecosystem supports HLS natively making it much easier to work with on that platform. In addition, Chrome recently added support for HLS[1] (and not DASH), so the native browser support for HLS is getting pretty wide.
HLS also has newer features that address the growing manifest issues you were seeing. [2]
All that said, I think a lot of people would feel more comfortable if the industry's adaptive streaming standard wasn't completely controlled by Apple.
Thank you! I’m on the Video.js team, and we’d love for you to try the library out and share your feedback. We’re especially eager to hear from developers who used or tried v8 in the past.
We’re taking a new approach to the library with a lot of new concepts, so your feedback would help us a ton during Beta as we figure out what’s working well and what isn’t.
I'm on the Video.js team, just wanted to say thank you! Means a lot and we'd be eager to hear your experience trying it out. Feel free to drop a GitHub issue or discussion post if you ever get a chance :)
For me, this is a massive relief after we just deployed a bunch of videos to Vimeo. The next week they were bought.
I'm a one-man operation. In the order of hundreds of videos served a week. All I want is control over my own destiny. If this and a VPS can do that, that'll be amazing. Thank you for doing this.
It’s largely because (1) the React runtime is not bundled so it’s technically not apples to apples, (2) the Web Component includes CSS as well since we’re using Shadow DOM.
Basically a few kB for CSS and a few kB for a thin “framework” layer for managing attr-to-prop mapping, simple lifecycle, context, and so on.
We are designing with the goal of supporting more frameworks like Svelte and Vue specifically, even as far as React Native! We just don’t know when exactly yet but a large part of our approach in v10 is to make sure we can deliver the best possible experience to each frontend framework. It’s important for us that the integrations don’t feel like wrappers but truly idiomatic.
In the meantime, we’re hoping our custom elements will act as a good stopgap. Most frameworks including Svelte support them well, and we’re pouring love into the APIs so they feel good to use regardless of which framework.
If you’re interested in peeking under the hood, architecturally we’re taking a similar approach to TanStack and separating out a shared core from the beginning, but with one added step of splitting out the DOM as well to aid in supporting RN one day.
Did the private equity buy the domain videojs.org (did it take control of the project and you somehow regained control after selling) or was this domain (and the project) always under your control?
Can anyone recommend a good, battle-tested "slider" solution for playing videos as well as displaying images from a single gallery? Ideally capable of handling huge galleries (hundreds of items) with lazy loading.
Not a today answer, but this is something I'm excited to build within the new Presets concept of video.js v10, where we can build specific "video interfaces" beyond a standard player using the composable architecture.
We currently already use video.js, and our framework is used all over the place, so we’d be the perfect use case for you guys.
How would we use video.js 10 instead, and for what? We would like to load a small video player, for videos, but which ones? Only mp4 files, or can we somehow stream chunks via HTTP without setting up ridiculous streaming servers like Wowza or Red5 in 2026?
That's great! It looks like you have a pretty extensive integration with the prior version of Video.js, so migrating will take some work, but I think it's worth it when you can make the time. That said, for Beta it works with browser-supported formats and HLS, with support for services like YouTube and Vimeo close behind as we migrate what we have in the Media Chrome ecosystem[1]. So if that's what you need, maybe hold your breath for a few weeks.
What are you supporting today that requires Wowza or Red5? The short answer is Video.js is only the front-end so it won't help the server side of live streaming much. I'm of course happy to recommend services that make that part easier though.
Thank you for your feedback. Yep, I definitely understand that Video.js is just the front end. I want to avoid using Wowza / Red5 and just serve chunks of video files, essentially buffering them and appending them to the "end of the stream", laying down tracks ahead of the video.js train riding over them.
So I'm just wondering whether we can do streaming that way, and video.js can "just work" to play the video as we fetch chunks ahead of it ("buffering" without streaming servers, just basic HTTP range requests or similar).
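i.e. the plain progressive setup, sketched (the path is a placeholder):

```html
<!-- The browser issues Range requests on its own as the user seeks;
     the server only needs to support Accept-Ranges: bytes. -->
<video src="/videos/movie.mp4" controls preload="metadata"></video>
```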
You should check out HLS and DASH. If you're already familiar and you're not using them because they don't meet your requirements, then apologies for the foolish recommendation. If not, this could solve your problem.
88% smaller is remarkable, but what really stands out to me is the decision to come back after 16 years and actually do the rewrite. Most abandoned projects just stay abandoned.
Video handling on the web is still surprisingly painful in 2026 -- between codec fragmentation, adaptive bitrate, and accessibility requirements. Having a maintained, lightweight player that handles the hard parts is genuinely valuable. Looking forward to trying this on a couple of projects where I am currently using a bloated custom setup.
Sibling comment didn't elaborate, but I think they might be onto something.
It happened to me personally - LLMs and agentic coding tools enabled me to pick up old side projects and actually finish them. Some of these projects were in the drawer for years, and when Sonnet 4 released I gave them another try and got up to speed really quickly. I suspect this happened to many developers.
Absolutely the case for me. Small fun projects that would take a few hours to round off a feature can now be done in an hour. Why wouldn't I finish it off?
Hey, VJS core contributor here. We definitely feel that concern and we also don't yet have a silver bullet formalized. I suspect we'll need some kind of alternate implementations or feature augmentation at some point. We're currently doing things in a bit more of an ad hoc way, such as the interrelationship between PiP and Fullscreen (see, e.g.: https://github.com/videojs/v10/blob/main/packages/core/src/d...).
One other thing to note: because the features are "composed", we at least have a lot of flexibility here that makes me feel pretty good about the fundamentals and not "coding ourselves into a corner" here.
Yeah, the composability buys you a lot of room. One central store with events, inject it into each feature, and they stay decoupled without painting yourself into a corner.
https://github.com/morhetz/gruvbox
I had one question I couldn't answer reading the site: what makes this different from the native HTML video element?
AFAICT just the transport controls?
(And why does that matter? Dynamic bitrate adjustment. The chunks are slightly easier to cache as well.)
Most can, via Media Source Extensions.
[1] https://github.com/muxinc/media-chrome
I hope the plugin directory gets an overhaul too, and a prominent place on the webpage. The plugin ecosystem was a huge benefit of Video.js for me.
Even though some of the plugins are outdated, they were a good source of inspiration.
https://github.com/videojs/v10/discussions
[1]: https://github.com/videojs/v10/tree/main/packages/store
[2]: https://github.com/pmndrs/zustand
[3]: https://videojs.org/docs/framework/react/reference/use-playe...
[4]: https://zustand.docs.pmnd.rs/learn/guides/slices-pattern#sli...
[5]: https://legacy.videojs.org/guides/middleware/
[1] https://caniuse.com/http-live-streaming
[2] https://www.mux.com/blog/low-latency-hls-part-2
Might need to consider bandwidth and the usual mitigations against scrapers if you're serving video unauthenticated.
We'll be moving to videojs 10 when it hits GA.
https://videojs.org/docs/framework/react/concepts/presets
Throws `Uncaught (in promise) TypeError: AbortSignal.any is not a function` in volume-slider-data-attrs.BOpj3NK1.js
[1]: https://github.com/videojs/v10/issues/1120
https://github.com/Qbix/Platform/blob/main/platform/plugins/...
[1] https://github.com/muxinc/media-elements