A fantastic repository, thank you.
I'm just getting started with it, but I thought I'd reach out and ask whether you'd accept a pull request that trains for human perceptual quality rather than MAE / PSNR in future?
I'm thinking a simple way to get a partial solution is to retrain on images in a colourspace like OKLab, where perceptual uniformity is baked in: perceived colour difference is roughly just Euclidean distance, rather than a monstrous formula like CIEDE2000. A rough sketch of what I mean is below.
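Here's a minimal sketch, assuming the training loop is PyTorch and that inputs are linear sRGB in [0, 1] (if the dataset is gamma-encoded, it would need linearising first). The matrices are the published OKLab conversion from Björn Ottosson's reference post; everything else is hypothetical and just illustrates computing L1 in OKLab instead of RGB:

```python
import torch

# Linear sRGB -> LMS and LMS' -> OKLab matrices from
# https://bottosson.github.io/posts/oklab/
_RGB_TO_LMS = torch.tensor([
    [0.4122214708, 0.5363325363, 0.0514459929],
    [0.2119034982, 0.6806995451, 0.1073969566],
    [0.0883024619, 0.2817188376, 0.6299787005],
])
_LMS_TO_OKLAB = torch.tensor([
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
])

def rgb_to_oklab(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) linear-sRGB tensor to OKLab."""
    lms = torch.einsum("ij,njhw->nihw", _RGB_TO_LMS.to(rgb), rgb)
    lms = lms.clamp(min=1e-6).pow(1.0 / 3.0)  # cube-root non-linearity
    return torch.einsum("ij,njhw->nihw", _LMS_TO_OKLAB.to(rgb), lms)

def oklab_l1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 in OKLab, where distance tracks perceived colour difference."""
    return (rgb_to_oklab(pred) - rgb_to_oklab(target)).abs().mean()
```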
I was also thinking an 'edge loss' could help: add extra channels for the horizontal and vertical image gradients during training, apply L1 loss to them, and discard them afterwards so inference is unchanged (see the sketch below).
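Something along these lines, again just a hedged sketch in PyTorch. It uses simple finite differences rather than a learned operator, and the suggested `0.1` weight is an arbitrary placeholder:

```python
import torch
import torch.nn.functional as F

def gradient_l1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 on horizontal and vertical finite differences (a simple edge loss).

    pred/target: (N, C, H, W). The gradient channels only exist inside this
    function, so nothing changes at inference time.
    """
    def grads(x):
        dh = x[..., :, 1:] - x[..., :, :-1]   # horizontal differences
        dv = x[..., 1:, :] - x[..., :-1, :]   # vertical differences
        return dh, dv

    pred_h, pred_v = grads(pred)
    tgt_h, tgt_v = grads(target)
    return F.l1_loss(pred_h, tgt_h) + F.l1_loss(pred_v, tgt_v)

# Hypothetical usage: combine with the existing pixel loss via a small weight.
# total = F.l1_loss(pred, target) + 0.1 * gradient_l1_loss(pred, target)
```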
A Rolls-Royce solution might be an adversarial loss, perhaps guided by a secondary network, or a perceptual metric like Netflix's VMAF?
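For what it's worth, VMAF itself isn't a differentiable network out of the box, so the usual route is a small learned discriminator (or a frozen feature network, LPIPS-style). A minimal, purely illustrative sketch of the adversarial term, assuming PyTorch:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Tiny PatchGAN-style discriminator producing per-patch real/fake logits."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

bce = nn.BCEWithLogitsLoss()

def generator_adv_loss(disc: PatchDiscriminator, fake: torch.Tensor) -> torch.Tensor:
    """Non-saturating GAN loss term for the restoration network."""
    logits = disc(fake)
    return bce(logits, torch.ones_like(logits))

def discriminator_loss(disc, real, fake):
    """Standard real-vs-fake loss for updating the discriminator."""
    real_logits, fake_logits = disc(real), disc(fake.detach())
    return 0.5 * (bce(real_logits, torch.ones_like(real_logits))
                  + bce(fake_logits, torch.zeros_like(fake_logits)))
```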
If there are any resources related to perceptual quality rather than PSNR, please do point me in the right direction :-)