
vf_d3d11vpp: fix deinterlacing #17413

Merged
kasper93 merged 6 commits into mpv-player:master from kasper93:d3d11-deint
Feb 18, 2026

Conversation

@kasper93
Member

@kasper93 kasper93 commented Feb 15, 2026

This fixes deinterlacing issues in #15197

@kasper93
Member Author

/cc @softworkz

Frankly, I didn't go deep into the issues; I just made it pass the required number of reference frames, which mostly resolves the broken deinterlacing. We can work on other fixes if we have good examples of when it is not working.

@softworkz
Contributor

/cc @softworkz

Frankly, I didn't go deep into the issues; I just made it pass the required number of reference frames, which mostly resolves the broken deinterlacing. We can work on other fixes if we have good examples of when it is not working.

Hi,

Thanks a lot for looking into this. It took me a moment to refresh my memory, but at a glance it makes sense.

But what this does is change from one hard-coded mode to a different fixed mode, and it can be made better with a few simple changes:

  1. The modes "blend" and "ivtc" can be removed from code and docs, because they are not compatible with the implementation - which expects frame doubling
  2. What remains is BOB, ADAPTIVE and MOTION_COMPENSATION
    I don't know whether it's possible to switch between the latter two, but what I do know is that you get BOB when you don't specify reference frames (like before), so the easy change would be:
  • If BOB is selected, do not provide reference frames (like before)
  • If ADAPTIVE or MOTION_COMPENSATION is specified, provide the frames like in this PR

This would make the mode param effective (for the first time) and retain the existing behavior, which might be useful, for example, when a driver/GPU has a problematic/undesired implementation of ADAPTIVE or MOTION_COMPENSATION; then one can still switch back to Bob.

sw

@kasper93 kasper93 force-pushed the d3d11-deint branch 2 times, most recently from 27bc7b0 to e4ffc4b on February 16, 2026 15:02
@kasper93
Member Author

The modes "blend" and "ivtc" can be removed from code and docs, because they are not compatible with the implementation - which expects frame doubling

I don't want to remove those, as they don't do any harm. IVTC cadence is still detected correctly; we just don't support custom rates to drop duplicated frames. My GPU doesn't support custom rates, so I'm not sure how else we could support that. Without proper feedback from the driver, I'm not sure how we could even do manual frame selection. Anyway, it works as well as it can now. Maybe someone will step in and improve that.

I don't know whether it's possible to switch between the latter two, but what I do know is that you get BOB when you don't specify reference frames (like before), so the easy change would be:

It is not really possible to select any mode, unless they were to be exposed as separate video processors. But they are not, at least on my hardware. In that case we just give the driver frames and it selects the "best" strategy.

I've added a "force" bob mode, where we don't pass reference frames, this can be used for debugging if anything else, but frankly adding ref frames gives way better results.

@softworkz
Contributor

It is not really possible to select any mode, unless they were to be exposed as separate video processors. But they are not, at least on my hardware. In that case we just give the driver frames and it selects the "best" strategy.

No. You select the mode by the way you use it. You switch to blend by setting output rate = input rate (without reference frames), and you select Bob by specifying a double output rate and by not providing reference frames. That's pretty clear.
There's also an example which shows exactly how to do the 3:2 pulldown for inverse telecine - again by how the frames are provided and how the output rate is set.
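
For illustration, a minimal sketch of that selection mechanism in terms of the D3D11 API (names and structure are assumptions, not mpv's actual code): the driver has no explicit mode knob, so the output rate together with the presence of reference frames is what steers its choice.

```c
#define COBJMACROS
#include <d3d11.h>
#include <stdbool.h>

// Illustrative sketch only, not mpv's actual code.
static void setup_deint_stream(ID3D11VideoContext *vctx, ID3D11VideoProcessor *vp,
                               const D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS *caps,
                               D3D11_VIDEO_PROCESSOR_STREAM *stream,
                               bool double_rate, bool send_refs)
{
    // Blend: output rate == input rate (HALF for interlaced input), no refs.
    // Bob:   doubled output rate (NORMAL), still no refs.
    // Adaptive/mocomp: doubled rate plus the refs the caps ask for.
    ID3D11VideoContext_VideoProcessorSetStreamOutputRate(vctx, vp, 0,
        double_rate ? D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_NORMAL
                    : D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_HALF,
        TRUE, NULL);

    stream->PastFrames   = send_refs ? caps->PastFrames   : 0;
    stream->FutureFrames = send_refs ? caps->FutureFrames : 0;
    // ppPastSurfaces / ppFutureSurfaces must be sized to match these counts.
}
```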

With this PR, you have hardcoded adaptive/mocomp, which means that the mode parameter is totally useless - it doesn't have any effect. But at the same time, you don't want to remove values (or the mode param entirely)?

These two things don't go together. It confuses everybody, and often enough it has upset me when using software with such "anomalies" that only a handful of people know about, while the rest of the world remains confused/frustrated/whatever - except those who find it out the hard way and are even more frustrated then.

@kasper93
Member Author

kasper93 commented Feb 17, 2026

No. You select the mode by the way you use it. You switch to blend by setting output rate = input rate (without reference frames), and you select Bob by specifying a double output rate and by not providing reference frames. That's pretty clear.

No, it's not possible to select the mode. You can infer whether BOB or BLEND will be used, but the driver can do whatever it wants. It can use BOB even when passing frames. However, the code currently assumes that without frames BOB will always be used, which is implied by docs.

It also depends on the hardware. On my Intel system, BLEND (not yet included in the PR) works correctly, but on an AMD system it outputs only one field without blending the other one. (I need to test more though; maybe it's fancy and has a blending threshold, but I doubt that.)

There's also an example which shows exactly how to do the 3:2 pulldown for inverse telecine - again by how the frames are provided and how the output rate is set.

As I said, my hardware doesn't support custom rates, so I won't implement full 60i to 24p conversion. It's actually quite clear how to make it work, but if no vendor supports it, it's completely pointless to implement. IVTC itself works correctly, just with repeated progressive frames.

With this PR, you have hardcoded adaptive/mocomp, which means that the mode parameter is totally useless - it doesn't have any effect. But at the same time, you don't want to remove values (or the mode param entirely)?

I didn't hardcode anything. It might have an effect if the driver exposed separate processors or rate converters. In practice, no one implements that, but the point here is that every mode works fine. Hardware implementations can detect the telecine cadence and correctly reconstruct frames, just without the frame-dropping part, so there is duplication.

As I said, there is no way to force the mode except by changing the rate itself. I implemented this locally for half-rate, because why not, but not for any custom rate... I would like to see it in action first before committing the code.


While playing with this, I found the root cause of some weirdness in behavior in some scenarios. Now everything works exactly as the documentation describes. I will commit the remaining changes later.

I'm kind of surprised that no one has complained before about the deinterlacing quality. Sorry it took me so long to get into this topic, and thanks for bringing this up.

@kasper93 kasper93 changed the title from "vf_d3d11vpp: pass more required amount of past/future frames" to "vf_d3d11vpp: pass required amount of past/future frames" on Feb 17, 2026
@softworkz
Contributor

No, it's not possible to select the mode. You can infer whether BOB or BLEND will be used, but the driver can do whatever it wants. It can use BOB even when passing frames. However, the code currently assumes that without frames BOB will always be used, which is implied by docs.

That's what I mean: We can force it to do BOB by specifying a double output rate and by not supplying reference frames.

Then, we can (more or less) safely assume that it will use Adaptive or Mocomp if (1) it supports that and (2) we provide the required reference frames.

So, what I'm proposing is simply to:

  • Change the default to adaptive (and provide ref frames)
  • But when a user specifies BOB explicitly, to not provide reference frames in order to force BOB

Doesn't that make sense?

It also depends on the hardware. On my Intel system, BLEND (not yet included in the PR) works correctly, but on an AMD system it outputs only one field without blending the other one. (I need to test more though; maybe it's fancy and has a blending threshold, but I doubt that.)

That observation matches the D3D11 processor caps that I'm seeing on my system. I can easily compare because here I have 3 GPUs: Intel integrated, Nvidia RTX and Radeon - all working in parallel (try that on Linux 🤣).
You can see the D3D11 processor caps in the 3rd post in the other conversation, where Intel and Nvidia indicate support for Blend while AMD does not.

Also notable: AMD has separate processors for Interlaced and progressive sources while Intel and Nvidia have a single (combined) one.

Intel and AMD want 1 past and 1 future frame and Nvidia wants 2 past and 1 future frame.

There's also an example which shows exactly how to do the 3:2 pulldown for inverse telecine - again by how the frames are provided and how the output rate is set.

As I said, my hardware doesn't support custom rates, so I won't implement full 60i to 24p conversion. It's actually quite clear how to make it work, but if no vendor supports it, it's completely pointless to implement. IVTC itself works correctly, just with repeated progressive frames.

I think you are mixing up two things:

Framerate Conversion

This is operating on progressive frames and meant to be an upcoming feature where intermediate frames are calculated based on motion interpolation (even for conversion to lower rates, interpolation is needed for smooth motion).

=> No vendor is supporting this yet (that's what you are referring to)

Inverse Telecine

This is working on interlaced fields and it is not about "repeated progressive frames" but it is about removing the "right" fields/frames which have been inserted by telecine conversions (film => TV).

=> All three vendors are supporting this via their D3D11 processors

They are even indicating the exact patterns which they are supporting for that:

[screenshot: reported inverse-telecine cadence caps per vendor]

I would like to see it in action first before committing the code.

This page has a pseudo-code snippet illustrating IVTC from 60i -> 24p (the lower half of the snippet).

Of course, I'm not saying you should do this. Obviously this only makes sense when you have a display connected which is running at 24 Hz and you want to restore the original "cinematic experience".

While playing with this, I found the root cause of some weirdness in behavior in some scenarios. Now everything works exactly as the documentation describes. I will commit the remaining changes later.

I'm kind of surprised that no one has complained before about the deinterlacing quality. Sorry it took me so long to get into this topic, and thanks for bringing this up.

I'm very surprised as well, especially since MPV users are typically very picky and critical regarding playback quality.

Even more, I'm very glad and thankful that you found the time to take a look at this. 👍

@kasper93
Member Author

kasper93 commented Feb 18, 2026

That's what I mean: We can force it to do BOB by specifying a double output rate and by not supplying reference frames.

Then, we can (more or less) safely assume that it will use Adaptive or Mocomp if (1) it supports that and (2) we provide the required reference frames.

So, what I'm proposing is simply to:

Change the default to adaptive (and provide ref frames)
But when a user specifies BOB explicitly, to not provide reference frames in order to force BOB

Doesn't that make sense?

This has already been done in this PR since my first post, so I'm not sure where the issue is?

I think you are mixing up two things:

I'm not.

Framerate Conversion

This is operating on progressive frames and meant to be an upcoming feature where intermediate frames are calculated based on motion interpolation (even for conversion to lower rates, interpolation is needed for smooth motion).

=> No vendor is supporting this yet (that's what you are referring to)

While you are right that FRC can be used for various processing, I'm only talking in the context of interlaced/telecine content. I'm not interested in FRC "smooth motion" at this point in time. AMD doesn't support this through d3d11; it's implemented as --vf=amf_frc. NVIDIA does support it, but only at rates that are not interesting here (for ivtc).

[screenshot]

Inverse Telecine

This is working on interlaced fields and it is not about "repeated progressive frames" but it is about removing the "right" fields/frames which have been inserted by telecine conversions (film => TV).

=> All three vendors are supporting this via their D3D11 processors

It's about reconstructing frames from fields and then dropping the repeated ones. As you can see, NVIDIA doesn't support IVTC at all, at least on the 5090, while none of the vendors support the custom rate required to drop frames for full IVTC support.

This page has a pseudo-code snippet illustrating IVTC from 60i -> 24p (the lower half of the snippet).

Of course, I'm not saying you should do this. Obviously this only makes sense when you have a display connected which is running at 24 Hz and you want to restore the original "cinematic experience".

Like I said, it's obvious how to implement this. But none of the vendors support the 4/5 custom rate that would be required to implement it.

It's all explained on the page you linked.

Interlaced format at 4/5 custom rate (3:2 inverse telecine, OutputFrames=4 and InputFrameOrField=10):

DRV: Exports CustomRate=4/5, OutputFrames=4, InputInterlaced=TRUE, InputFramesOrFields=10.
APP: Creates VPGuid[1] video processor and set output rate to 4/5 custom rate.
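
Purely for illustration, the app side of that quoted flow would look roughly like the following sketch (helper and variable names are assumptions, error handling omitted; in practice no tested driver exports such a rate):

```c
#define COBJMACROS
#include <d3d11.h>

// Illustrative only: find a rate converter exporting a 4/5 custom rate
// (the "VPGuid[1]" index in the quoted docs) and create the processor.
static HRESULT create_ivtc_processor(ID3D11VideoDevice *vdev,
                                     ID3D11VideoProcessorEnumerator *venum,
                                     ID3D11VideoProcessor **out_vp,
                                     UINT *out_type_index, UINT *out_rate_index)
{
    D3D11_VIDEO_PROCESSOR_CAPS caps;
    ID3D11VideoProcessorEnumerator_GetVideoProcessorCaps(venum, &caps);

    for (UINT i = 0; i < caps.RateConversionCapsCount; i++) {
        D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS rc;
        ID3D11VideoProcessorEnumerator_GetVideoProcessorRateConversionCaps(venum, i, &rc);
        for (UINT j = 0; j < rc.CustomRateCount; j++) {
            D3D11_VIDEO_PROCESSOR_CUSTOM_RATE cr;
            ID3D11VideoProcessorEnumerator_GetVideoProcessorCustomRate(venum, i, j, &cr);
            // "DRV: Exports CustomRate=4/5, OutputFrames=4, InputInterlaced=TRUE"
            if (cr.InputInterlaced && cr.CustomRate.Numerator == 4 &&
                cr.CustomRate.Denominator == 5) {
                *out_type_index = i;
                *out_rate_index = j;
                return ID3D11VideoDevice_CreateVideoProcessor(vdev, venum, i, out_vp);
            }
        }
    }
    return E_NOTIMPL; // no driver tested here exports this
}
```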

@softworkz
Contributor

Doesn't that make sense?

This has already been done in this PR since my first post, so I'm not sure where the issue is?

Really? The original commit that I had reviewed has this:

[screenshot of the original code]

and this:

[screenshot of the original code]

Since you were saying "no" to everything I said, I hadn't looked at it again. Now I see that you have changed it to this:

[screenshot of the updated code]

and this:

[screenshot of the updated code]

Which makes me wonder why you didn't just say that you've changed it - it's all good now.

@kasper93
Member Author

In comment #17413 (comment):

I've added a "force" bob mode, where we don't pass reference frames, this can be used for debugging if anything else, but frankly adding ref frames gives way better results.

@softworkz
Contributor

In comment #17413 (comment):

I've added a "force" bob mode, where we don't pass reference frames, this can be used for debugging if anything else, but frankly adding ref frames gives way better results.

Sorry, my bad, must have overlooked it.

@softworkz
Contributor

softworkz commented Feb 18, 2026

Regarding the rate conversions, Nvidia driver is just wrong.

It says it supports IVTC, but then it specifies custom rate conversions (which it indicates not to support):

[screenshot]

From the docs, it's very clear:

[screenshot: excerpt from the D3D11 docs]

There is D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS, which has:

  • a member ITelecineCaps, corresponding to D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS; this capability is indicated by D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_INVERSE_TELECINE
  • a member CustomRateCount, which can be used to call GetVideoProcessorCustomRate(); this capability is indicated by D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION

Nvidia indicates D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_INVERSE_TELECINE and not D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION, even though it returns values for custom rate conversions.
The docs say "The video processor can convert the frame rate by interpolating frames." - which Nvidia clearly cannot do. But what it can do is IVTC - it just doesn't indicate it correctly.

Intel and AMD are reporting their IVTC capabilities correctly - with values from the enum D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS, so they are surely able to do this.
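
A minimal sketch of how these caps fit together when probing one rate converter (illustrative only, assuming the caps for one TypeIndex were already queried via GetVideoProcessorRateConversionCaps):

```c
#include <stdio.h>
#include <d3d11.h>

// Illustrative only: interpret one queried rate-conversion caps struct.
static void report_deint_caps(const D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS *rc)
{
    // Reference frames this processor wants (what the PR now provides).
    printf("past=%u future=%u\n", rc->PastFrames, rc->FutureFrames);

    // IVTC (field matching) capability plus the supported cadences.
    if (rc->ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_INVERSE_TELECINE)
        printf("ivtc: 3:2=%d 2:2=%d\n",
               !!(rc->ITelecineCaps & D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_32),
               !!(rc->ITelecineCaps & D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_22));

    // Interpolated frame rate conversion is a separate capability; the custom
    // rate entries (if any) list the exact conversions the driver exports.
    if (rc->ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION)
        printf("frc: %u custom rates\n", rc->CustomRateCount);
}
```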

@kasper93
Member Author

Regarding the rate conversions, Nvidia driver is just wrong.

I noticed this too :)

This was a typo. Although this changes the user option name, this
filter, and especially this option, is so niche that I'm fixing it without a
deprecation period.
@softworkz
Contributor

Some of our users have been asking whether we can implement rate doubling with motion interpolation. One has a very adventurous setup going through VapourSynth and some specialized software (the name of which I don't remember), but I said we won't go crazy about it now, as it's just a matter of time until GPU vendors offer it in a way where it can easily be enabled with just a few changes - and that capability (D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION) is at least how MS thinks it should be made available. So let's hope that GPU vendors won't be cooking their own soup - like so often...

@kasper93 kasper93 changed the title from "vf_d3d11vpp: pass required amount of past/future frames" to "vf_d3d11vpp: fix deinterlacing" on Feb 18, 2026
@kasper93
Member Author

Updated PR with all changes. Should resolve all issues now.

  • BOB selection is supported by not sending reference frames
  • BLEND selection is supported by setting half-rate and not sending reference frames
  • ADAPTIVE, MOTION_COMPENSATION and IVTC are up to the driver to decide
  • IVTC outputs at normal rate, as none of the vendors support the custom rates required to implement frame dropping
  • Added more logging for mode selection / custom rates
  • Fixed frame parity; there was a hack to alternate frame parity to make it "work" with incorrect API use
  • A few other minor fixes around mode selection

So let's hope that GPU vendors won't be cooking their own soup - like so often...

Like, they already do. See #17148; it's available through AMF only, while it could easily be exposed in d3d11vpp too, especially since it's a trivial 2x converter.

NVIDIA at least supports some FRC in d3d11. I might actually look into supporting that at some point, though I'm not a big fan of rate converters and interpolation, to be honest.

Regarding the rate conversions, Nvidia driver is just wrong.

AMD is also wrong, because for the progressive rate converter they say FRC is supported, but no custom rates are exposed, so it's not possible to configure any rate, except maybe half rate, but that doesn't work for progressive.
[screenshot]

@softworkz
Contributor

softworkz commented Feb 18, 2026

Updated PR with all changes. Should resolve all issues now.

  • BOB selection is supported by not sending reference frames
  • BLEND selection is supported by setting half-rate and not sending reference frames
  • ADAPTIVE, MOTION_COMPENSATION and IVTC are up to the driver to decide
  • IVTC outputs at normal rate, as none of the vendors support the custom rates required to implement frame dropping
  • Added more logging for mode selection / custom rates
  • Fixed frame parity; there was a hack to alternate frame parity to make it "work" with incorrect API use
  • A few other minor fixes around mode selection

Awesome!

  • IVTC outputs at normal rate, as none of the vendors support the custom rates required to implement frame dropping

ITelecineCaps is part of the D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS.
At least Intel and AMD are setting all the flags like D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_32 - which means

The video processor can reverse 3:2 pulldown (as per docs)

Reversing a 3:2 pulldown from 60i gives you 24p
Because 3:2 pulldown means that frames are mapped to 2 fields, 3 fields, 2 fields, 3 fields, etc. - on average: 2.5 fields
And 60 / 2.5 = 24

So, why do you think that "custom rates" would be needed for this? Custom is custom (= non-standard), and the D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_* values are the standard ones when dealing with fields.
So by my interpretation, those "custom rates" do not need to be supported for IVTC - also because custom rate conversions have a separate capability flag (D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION).

So let's hope that GPU vendors won't be cooking their own soup - like so often...

Like, they already do. See #17148; it's available through AMF only, while it could easily be exposed in d3d11vpp too, especially since it's a trivial 2x converter.

Nice!

Regarding the rate conversions, Nvidia driver is just wrong.

AMD is also wrong, because for the progressive rate converter they say FRC is supported, but no custom rates are exposed, so it's not possible to configure any rate, except maybe half rate, but that doesn't work for progressive.

In theory, they might set this capability as supported without specifying custom rates, in case they support arbitrary rate conversions. The docs are not clear in this regard - unlikely the case, though...

@kasper93
Member Author

kasper93 commented Feb 18, 2026

Reversing a 3:2 pulldown from 60i gives you 24p

Not without rate conversion; reversing the 3:2 pulldown alone doesn't give you 24p.

You can think of this like using software ffmpeg filters: the fieldmatch filter can match fields and reconstruct progressive frames, while on top of that you have to use the decimate filter to drop duplicated frames.

D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_* means field matching, while a custom rate would allow dropping the duplicated reconstructed frames.

Because 3:2 pulldown means that frames are mapped to 2 fields, 3 fields, 2 fields, 3 fields, etc. - on average: 2.5 fields
And 60 / 2.5 = 24

So, why do you think that "custom rates" would be needed for this? Custom is custom (= non-standard), and the D3D11_VIDEO_PROCESSOR_ITELECINE_CAPS_* values are the standard ones when dealing with fields.
So by my interpretation, those "custom rates" do not need to be supported for IVTC - also because custom rate conversions have a separate capability flag (D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_FRAME_RATE_CONVERSION).

This is pretty clear if you read the docs; the example at the bottom has two cases for performing Inverse Telecine (IVTC) on 3:2 pull-down, 30 frames (60 fields) per second interlaced content. The first is 60i -> 60p at normal rate (this one we support), the second is 60i -> 24p at a custom rate.

From the API point of view, it would not be possible to support this without a custom rate and more reference frames, because we actually need to drive the processing at the output rate. NORMAL_RATE means that we drive processing at field rate, repeating the same frame twice; HALF_RATE means that we drive at frame rate, sending both fields once. This API makes sense. For IVTC we would need to send the same input frame 4 times and give more reference frames, none of which is exposed by the caps reported by the driver. Therefore it's not supported.
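
A rough sketch of what "driving at the output rate" looks like in API terms (names are assumptions, not mpv's actual code):

```c
#define COBJMACROS
#include <d3d11.h>

// Illustrative only: NORMAL rate submits the same input frame twice,
// selecting output 0 and 1 via OutputIndex; HALF rate submits it once.
static HRESULT emit_outputs(ID3D11VideoContext *vctx, ID3D11VideoProcessor *vp,
                            ID3D11VideoProcessorOutputView **out_views,
                            D3D11_VIDEO_PROCESSOR_STREAM *stream,
                            D3D11_VIDEO_PROCESSOR_OUTPUT_RATE rate,
                            UINT *output_frame /* running output counter */)
{
    UINT n = (rate == D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_NORMAL) ? 2 : 1;
    for (UINT i = 0; i < n; i++) {
        stream->OutputIndex = i;  // which output of this input frame we want
        HRESULT hr = ID3D11VideoContext_VideoProcessorBlt(vctx, vp,
            out_views[i], (*output_frame)++, 1, stream);
        if (FAILED(hr))
            return hr;
    }
    return S_OK;
}
```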

Also, I understand why it's not supported in practice. Normal-rate processing can internally detect the cadence and reconstruct frames as it wishes, whereas rate conversion would need preexisting knowledge for us to know how to configure the filter - which we don't have, because the hardest part of telecine is detecting the cadence in the first place, and the content can be mixed telecine/interlaced. Normal-rate processing just works; custom rate would be a pain to support and likely not used by anyone.

EDIT: Well, it would be easy to support with an output_rate option manually set by the user...

@kasper93
Member Author

Updated to output half-rate in ivtc mode, as this is likely more expected. Also added an example with the decimate filter to the docs.

@softworkz
Contributor

This is pretty clear if you read the docs; the example at the bottom has two cases for performing Inverse Telecine (IVTC) on 3:2 pull-down, 30 frames (60 fields) per second interlaced content. The first is 60i -> 60p at normal rate (this one we support), the second is 60i -> 24p at a custom rate.

I was under the assumption that an indication for 3:2 IVTC would imply support for the corresponding rate conversion. After more reading, I see that this might be wrong. Have you ever tried to specify D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_CUSTOM and 4/5 as custom rate?

Updated to output half-rate in ivtc mode, as this is likely more expected. Also added an example with the decimate filter to the docs.

Nice! And yes, that fits better.

@kasper93
Member Author

I was under the assumption that an indication for 3:2 IVTC would imply support for the corresponding rate conversion. After more reading, I see that this might be wrong. Have you ever tried to specify D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_CUSTOM and 4/5 as custom rate?

Yes, I have. It fails with an invalid parameter error on VideoProcessorBlt. I also tried OutputIndex > 1, but that fails too, because it's limited to the number of output frames produced (NORMAL==2, HALF==1).

I was initially under the impression that it should be supported, but it really needs the custom rate to be exposed. I wonder if it was ever supported on some hardware and got cut down over the years.
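
For the record, the attempt boils down to something like the following sketch (illustrative names only; on the hardware tested above, the subsequent VideoProcessorBlt then fails with an invalid parameter error):

```c
#define COBJMACROS
#include <d3d11.h>

// Hypothetical attempt at the 4/5 custom rate described above.
static void try_ivtc_custom_rate(ID3D11VideoContext *vctx, ID3D11VideoProcessor *vp)
{
    DXGI_RATIONAL rate = { .Numerator = 4, .Denominator = 5 };
    ID3D11VideoContext_VideoProcessorSetStreamOutputRate(vctx, vp,
        0 /* stream index */, D3D11_VIDEO_PROCESSOR_OUTPUT_RATE_CUSTOM,
        TRUE /* RepeatFrame */, &rate);
    // The later VideoProcessorBlt rejects this unless the driver exports
    // a matching custom rate in its rate-conversion caps.
}
```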

@softworkz
Contributor

Yes, I have. It fails with an invalid parameter error on VideoProcessorBlt. I also tried OutputIndex > 1, but that fails too, because it's limited to the number of output frames produced (NORMAL==2, HALF==1).

I was initially under the impression that it should be supported, but it really needs the custom rate to be exposed. I wonder if it was ever supported on some hardware and got cut down over the years.

I bet it was - at least at some point. It's atypical for MS to expose APIs like this which have never been tested or implemented by at least one vendor (with whom they typically collaborate).

For DXVAHD, there's even a private stream data struct for IVTC state information: DXVAHD_STREAM_STATE_PRIVATE_IVTC_DATA where they're explicitly describing the scenario:

[screenshot: DXVAHD_STREAM_STATE_PRIVATE_IVTC_DATA documentation]

Perhaps they sensed that it's too rarely used (who switches a desktop to 24Hz for watching a video?).
The only other theory I'd have is that they might have added this for Xbox and only the GPU driver on Xbox has it...

@softworkz
Contributor

LGTM - as far as I can tell. Just typos.

@kasper93
Member Author

For DXVAHD, there's even a private stream data struct for IVTC state information: DXVAHD_STREAM_STATE_PRIVATE_IVTC_DATA where they're explicitly describing the scenario:

Ah, that would be perfect. This is exactly the thing I would want when implementing IVTC. Otherwise, without proper feedback, it's just manually guessing the parameters of a video stream that can be mixed content.

I bet it was - at least at some point. It's atypical for MS to expose APIs like this which have never been tested or implemented by at least one vendor (with whom they typically collaborate).

Right, most likely it was used in some places. Actually, I wonder why it's not exposed on current hardware. If they (AMD and Intel) already detect the cadence, it would just be a matter of detecting which frame to drop and some buffering of frames. It's not like there is much silicon needed, if any, for that.

The only other theory I'd have is that they might have added this for Xbox and only the GPU driver on Xbox has it...

Yeah, maybe. There was at some point a push for the Windows Media Center interface too, where all those things like IVTC would make sense. But I guess watching TV on a PC died, or never really existed.

@kasper93 kasper93 force-pushed the d3d11-deint branch 2 times, most recently from a82f3f2 to 043ec5a on February 18, 2026 09:25
@softworkz
Contributor

softworkz commented Feb 18, 2026

For DXVAHD, there's even a private stream data struct for IVTC state information: DXVAHD_STREAM_STATE_PRIVATE_IVTC_DATA where they're explicitly describing the scenario:

Ah, that would be perfect. This is exactly the thing I would want when implementing IVTC. Otherwise, without proper feedback, it's just manually guessing the parameters of a video stream that can be mixed content.

There's a chance that it's working. The replacement for private stream_state data is ID3D11VideoContext::VideoProcessorSetStreamExtension / VideoProcessorGetStreamExtension. This also takes a GUID, so maybe(!) it's available by using the same GUID and the same structure...
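
A speculative sketch of that probe (assuming the DXVAHD GUID and structure can be reused as-is; as noted further down, it turned out not to work):

```c
#define COBJMACROS
#include <d3d11.h>
#include <dxvahd.h>   // DXVAHD_STREAM_STATE_PRIVATE_IVTC(_DATA); needs INITGUID
                      // in one translation unit for the GUID symbol

// Speculative probe only: ask for the legacy DXVAHD IVTC private state
// through the D3D11 stream extension, reusing the DXVAHD GUID and struct.
static HRESULT probe_ivtc_state(ID3D11VideoContext *vctx, ID3D11VideoProcessor *vp)
{
    DXVAHD_STREAM_STATE_PRIVATE_IVTC_DATA data = { 0 };
    return ID3D11VideoContext_VideoProcessorGetStreamExtension(vctx, vp,
        0 /* stream index */, &DXVAHD_STREAM_STATE_PRIVATE_IVTC,
        sizeof(data), &data);  // would carry the detected telecine cadence
}
```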

Right, most likely it was used in some places. Actually, I wonder why it's not exposed on current hardware. If they (AMD and Intel) already detect the cadence, it would just be a matter of detecting which frame to drop and some buffering of frames. It's not like there is much silicon needed, if any, for that.

Yeah, it's cheap to do...

There was at some point a push for the Windows Media Center interface too,

That was before all those APIs appeared. WMC was DirectShow-based, and 24 Hz displays didn't even exist at that time.

where all those things like IVTC would make sense. But I guess watching TV on a PC died,

Life after death (little side project): https://www.youtube.com/watch?v=Ykfq9ILPsyM

@softworkz
Contributor

I think you can remove the word "mostly" from the OP now 😉

@kasper93
Member Author

There's a chance that it's working. The replacement for private stream_state data is ID3D11VideoContext::VideoProcessorSetStreamExtension / VideoProcessorGetStreamExtension. This also takes a GUID, so maybe(!) it's available by using the same GUID and the same structure...

Not working. Maybe in another universe :)

@kasper93 kasper93 force-pushed the d3d11-deint branch 2 times, most recently from cd0f108 to 335b81f on February 18, 2026 17:02
There were multiple issues in API usage: not setting the correct
OutputIndex, lying about frame field parity to make it work without
OutputIndex, and not passing reference frames, which always forced BOB mode.

Fix all issues and allow forcing BLEND/BOB mode by not passing frames.
Blend mode also sets half-rate output. Half rate could be a separate
option too, but there is not much gain from it except for debugging,
really.

Fixes: mpv-player#15197
The optimal way would be to use a custom rate based on the telecine cadence,
but it's not supported by any driver in practice. So instead, allow the
driver to field match and output duplicated frames, which can then be
removed using the decimate filter.

Note that in practice all GPU vendors use a single video processor for
all deinterlacing, so IVTC will be done even if the mode is not explicitly
selected - at least the field matching part, because correct output rate
is not supported. The main difference is that when selecting this mode we
output at video frame rate, not field rate.
@kasper93 kasper93 merged commit c2c081f into mpv-player:master Feb 18, 2026
30 checks passed
@kasper93 kasper93 deleted the d3d11-deint branch February 18, 2026 17:13