/mpv/ - the /g/reatest media player

Installation:
mpv.io/installation/

Wiki:
github.com/mpv-player/mpv/wiki

Manual:
Stable: mpv.io/manual/stable/
Git: mpv.io/manual/master/

User Scripts:
github.com/mpv-player/mpv/wiki/User-Scripts

Configuration Files:
mpv.io/manual/master/#configuration-files
mpv.io/manual/master/#files

High quality video output profile (goes into mpv.conf):

profile=gpu-hq
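
A common extension of that profile for smoother motion (all standard mpv options; whether interpolation is worth the GPU cost is a matter of taste):

# sync video to the display so interpolation has a stable target
video-sync=display-resync
interpolation=yes
tscale=oversample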


Input.conf:
github.com/mpv-player/mpv/blob/master/etc/input.conf
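
Example lines showing the syntax (the bindings here are illustrative; the commands are from the manual):

# KEY command arguments
RIGHT seek 5
LEFT seek -5
# toggle the stats overlay (a default binding)
I script-binding stats/display-stats-toggle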

Windows Builds:
Stable: mpv.srsfckn.biz/
Git: sourceforge.net/projects/mpv-player-windows/files/

Evaluating mpv's upscaling algorithms:
artoriuz.github.io/blog/mpv_upscaling.html

NEWS:
>AMD FidelityFX Super Resolution ported to mpv:
gist.github.com/agyild/82219c545228d70c5604f865ce0b0ce5
>Artoriuz came back from the grave.
>FSR still king in performance/power.
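
Assuming you saved the gist as FSR.glsl in mpv's shaders directory (name and path are just an example), loading it is one mpv.conf line:

glsl-shaders-append="~~/shaders/FSR.glsl"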

Attached: 1562319845530.png (586x314, 80.68K)

scale=box

swallow the redpill mpvbros

>FSR

OH NO NO NO NO NONONONONONO FSRCNNXBROS WE'RE OVER

Attached: violet-lpips.png (756x2441, 238.35K)

You do realise lower is better, right?

>21/03/2022: Replacing NIQE with LPIPS in the upscaling test, LPIPS is a perception-based full-reference image quality metric.

Finally Artoriuz did a genuinely objective job with those metrics. I was getting tired of the scaler shills in this thread.

imagine barely being able to compete with a lanczos variant, winning by a slim margin while being 3 times more resource hungry
>b-but my AI

what fucking graph are you reading?
FSRCNNX is head and shoulders above the other algos
literally no reason to use anything else unless you have noise or power consumption problems, basically only if you have a laptop.

im presently running FSRCNNX_16 to upscale some old 720p sherlock holmes
my rx560 is not even audible.
see picrel
(i tried the 32unit version, sadly my crappy gpu can't keep up. will try again after upgrading...)
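
for anyone who wants to try it: loading igv's shader is one mpv.conf line (the filename follows igv's release naming, check what you actually downloaded):

glsl-shaders="~~/shaders/FSRCNNX_x2_16-0-4-1.glsl"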

Attached: patrician_shit_you_wouldnt_get_it.jpg (1915x1077, 683.2K)

>30000 fresh
ok I won't lose a sec more trying to instruct you since you're just a plain fool

>I won't lose a sec more trying to instruct you
you shouldn't, you are not knowledgeable enough to be instructing anyone

instead, i will instruct you:
30000μs = 30ms
the content is 24hz
1sec = 1000ms
1000ms / 24 ≈ 42ms
it takes me 30ms to render each frame
so i still have 10ms or so to consume per frame before i start to get stuttering

the 32 unit version i mentioned does stutter, since it takes about 50ms per frame...
i could try it again without krigbilateral, and i've already removed adaptive-sharpen (can't really see a difference), but honestly i doubt it would be noticeable...
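
if you want to compare timings yourself: the stats overlay (I by default) shows render times in μs, and you can A/B a shader at runtime with a toggle in input.conf (the path below is just my setup):

# input.conf
CTRL+f change-list glsl-shaders toggle "~~/shaders/FSRCNNX_x2_16-0-4-1.glsl"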

>ewa_robidouxsharp
Our time has come robidoux bros.

Artoriuz single-handedly making these threads extra cancerous

Attached: violet-average.png (756x2441, 244.38K)

>Our time has come robidoux bros.
It was always better, at least for live-action content, since it's a windowed polar scaler. The average Artoriuz fan has been going full retard since he made the first metrics.

I remember it was also proven here on /g/ with comparisons years ago, but who cares about delusionals.
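
for reference, trying it is two mpv.conf lines (both built-in filter names):

scale=ewa_robidouxsharp
cscale=ewa_robidouxsharp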

>Artoriuz single-handedly making these threads extra cancerous
>oh no oh fuck

Attached: mpvchad.jpg (219x230, 8.65K)

>github.com/mpv-player/mpv/issues/9999#issuecomment-1073377206

I bet this is the average /g/ user.

It's the disconnect between distortion metrics (PSNR, SSIM, etc) and the actual way we humans perceive quality.

This was studied extensively in arxiv.org/abs/1711.06077 and it's basically the reason why most SotA SRCNNs are using adversarial training and VGG feature extraction.

It's fine for the output to be objectively different if we still perceive it as realistic (pic related).

In any case I apologise for any pointless discussions my early results might've created, I need to revamp the page at some point.

The current version still has a few glaring issues:

1) There's only a single image under test.
2) There are too many distortion metrics (we really don't need 3 flavours of SSIM).
3) Updating the page still requires some manual labour, which is the primary reason I don't do it often. I simply don't have the motivation when Lanczos is honestly fine in most cases.

Attached: esrgan.png (1346x512, 1.28M)

based

hi artoriuz
thank you for your work

im assuming the model the published mpv shaders are based on was trained on a publicly available (and legal) dataset
do you think there would be a benefit in retraining FSRCNNX with a larger dataset?
like training it on a much larger dataset derived from screenshots of 4k blurays (or, more realistically, remuxes)

What happened to this and the grid user that compared scalers in every thread?

Attached: 1497992620169.png (768x1228, 642.51K)

Larger datasets almost always help as long as the data isn't redundant. I think DIV2K should be good enough to train a CNN of this size, although the closest "anime" alternative we have is Manga109.

I don't really think there's any point in using frames from 4k BDs, the images in DIV2K are already extremely well detailed.

There might be gains to be made by changing the training dataset, but I think what igv managed to achieve with this model is probably close to what the model is capable of with normal distortion training.

The problem with distortion training, however, is that the model ends up smoothing things way too much, which is perceived as soft-looking or blurry. This happens because there are multiple valid HR outputs for each LR input, and the only way to minimise the metric is to pick something in between all valid HR outputs (which ends up lacking high-frequency components, the drastic pixel to pixel variations that would diverge between these valid HR outputs).

The most obvious improvement from here would be to offer a GAN version of the same network, although I'm not sure if it would work at all at this small size.

>I think DIV2K should be good enough to train a CNN of this size
this size meaning the 16unit version?
igv also has a 32unit available in some past releases
presumably you can do even more, as long as your gpu can do the work fast enough to avoid stuttering

>The most obvious improvement from here would be to offer a GAN version of the same network
im pretty sure there's a gan branch in igv's repo
not sure what's happening there and if actual shaders will be forthcoming

>I don't really think there's any point in using frames from 4k BDs
i was thinking 4k BDs since i expect a very common use case is to upscale 1080p to 4k
does the actual resolution used in training not matter?

i've read some of your newer posts on other superresolution techniques, do you expect any of them to be usable as mpv shaders?
i thought the one that was specifically learning high frequency components was quite interesting

it would also be interesting to see RNNs being used in some way, hopefully mpv shaders will soon offer access to previous/next frames