
We Need to Properly Handle Gamma Correction #6383

Closed
WestLangley opened this issue Apr 11, 2015 · 46 comments

@WestLangley
Collaborator

As discussed previously, colors are assumed by default to be in linear color space, and color textures in sRGB color space (all other textures in linear color space). We can add a setting renderer.sRGBInput = true if a user wants to specify colors in sRGB space.

For gamma-correcting the final result, we currently have the renderer.gammaOutput flag, which causes rendered colors to be converted to sRGB space within the shader -- that is, with each draw call. This is OK for opaque scenes, but not correct for scenes having transparent objects.

For scenes with transparent objects, it is required that the gamma-correction be applied at the end -- after the rendering is complete. We should be using this approach, instead, as it is always correct.

Consequently, we need to modify the renderer so it (1) always renders to a texture, and then (2) optionally applies gamma-correction via a post-processing pass. We already have the GammaCorrectionShader implemented.

I envision a few new renderer properties:

renderer.sRGBInput // default false
renderer.outputToScreen // default true
renderer.sRGBOutput // default true

BTW, another approach is to move the composer logic into the library.

In any event, design suggestions and/or PRs are welcome to implement this.

Whatever we do should be backwards compatible, so renderer.render( scene, camera ) still works as expected.
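
For reference, a minimal sketch of the post-processing approach using the current examples-level passes (THREE.EffectComposer, THREE.RenderPass, THREE.ShaderPass):

var composer = new THREE.EffectComposer( renderer );

// 1. render the scene to a render target, keeping colors in linear space
composer.addPass( new THREE.RenderPass( scene, camera ) );

// 2. gamma-correct the completed image in a single full-screen pass
var gammaPass = new THREE.ShaderPass( THREE.GammaCorrectionShader );
gammaPass.renderToScreen = true;
composer.addPass( gammaPass );

// replaces renderer.render( scene, camera ) in the animation loop
composer.render();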

@satori99
Contributor

Consequently, we need to modify the renderer so it (1) always renders to a texture, and then (2) optionally applies gamma-correction via a post-processing pass

If the renderer always renders to a texture, don't we lose the ability to draw antialiased lines? My understanding is that only works when rendering direct to a canvas.

@mrdoob
Owner

mrdoob commented Apr 12, 2015

That is correct. We lose antialias (not only lines).

@WestLangley
Collaborator Author

That is correct. We lose antialias (not only lines).

Well, in that case, can we use the current approach for opaque scenes, and the postprocessing approach only if required?

@Usnul
Contributor

Usnul commented Apr 20, 2015

@satori99 there's always FXAA; it's somewhat approximate, but it's quite fast, I think, and it does antialiasing as well.

I think this would be a good direction, as it would open up the post-processing pipeline in general; right now it is quite uncomfortable to use. Also, as far as AA goes, I'm not really convinced it's as much of an issue as it was a few years ago; pixels have become quite a bit smaller (or did I just grow up?).

@WestLangley
Collaborator Author

/ping @benaadams @gero3 @bhouston

I feel we need to bring the composer logic into the core of the library -- or at a minimum, support a gamma-correction post-processing pass.

I'd appreciate help from anyone who has knowledge of the WebGLRenderer codebase, and who would be willing to have a go at implementing it.

@benaadams
Contributor

Input gamma-to-linear could be dropped from the shaders and moved to the texture loaders? Either by passing through a canvas and correcting, or by using the sRGB extension when supported (40.9% of devices)? The canvas route wouldn't work with compressed textures, though.

For output, as @bhouston pointed out in issue #6296, there are a number of output options beyond gamma correction, a lot of which are already captured in the WebGLDeferredRenderer.

Could have it so that for straight rendering you only get gamma correction as an option, and if you move to 'pipelined', composer-type rendering, you get more options?

Thinking about some way of unifying post-processing/composer options/deferred rendering... not sure what the correct approach would be at the moment.

@benaadams
Contributor

Traditional transparent objects are... a little tricky; but you probably don't want to have to use a double pass if you don't use them? (still thinking...)

@benaadams
Contributor

Referencing #6245 for GammaCorrectionShader

@WestLangley what happens with transparent objects now? They appear darker due to the lack of a gamma step? (Which obviously can't be applied, as it would be doubly applied to the other objects.)

@WestLangley
Collaborator Author

We are working on getting the flow correct... Colors are supposed to be specified in linear color space -- except certain textures, which are in sRGB color space but are converted to linear space either via an extension or via software.

So all colors are in linear space during the drawing operation. In the case of colors with alpha, we use normal alpha-blending to blend the source and destination colors.

At the end, the drawing buffer needs to be converted to sRGB color space (which is lighter) so it will not appear too dark when rendered on the screen. This is the gamma-correction step, for which the GammaCorrectionShader would be used.

The only time the gamma-correction step would not be applied would be if the output was to be used for further processing of some sort -- or if the output was to be used as a texture, for example.
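
For concreteness, the two conversions involved, using the simple gamma-2.2 approximation (the exact sRGB transfer curve adds a small linear segment near black):

// applied once, at the very end, before display
function linearToGamma( c ) {
    return Math.pow( c, 1.0 / 2.2 );
}

// applied to sRGB inputs such as color maps
function gammaToLinear( c ) {
    return Math.pow( c, 2.2 );
}

// e.g. a linear-space mid-grey of 0.5 displays as pow( 0.5, 1 / 2.2 ) ≈ 0.73,
// which is why an uncorrected linear render looks too dark on screen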

@benaadams
Contributor

Something like this?

Texture Load:

  • Correct via canvas, or via the sRGB extension when supported, on GPU load, when the texture is marked as being in sRGB space (default?).
  • Compressed textures assumed to be in linear space.

Automatic:

  • If you use no transparent objects, it corrects output in the shaders and you keep the option of anti-aliasing (just gamma).
  • If you use a transparent object, no correction is applied in the shaders; it renders to texture and you lose anti-aliasing. Creating the context with anti-aliasing is still possible, but just a waste (just gamma). A rough sketch of this branching follows the list.

Pipelined:

  • No correction applied in shaders, choice of output correction: Gamma, Tone map, Reinhard, Filmic, Uncharted
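
A rough sketch of the "Automatic" branching (hypothetical; a real implementation would track transparency while sorting render items rather than re-traversing the scene):

// hypothetical helper: does the scene contain any transparent material?
function hasTransparentObjects( scene ) {
    var found = false;
    scene.traverse( function ( object ) {
        if ( object.material && object.material.transparent ) found = true;
    } );
    return found;
}

if ( hasTransparentObjects( scene ) ) {
    // render to texture, then gamma-correct as a pass; native AA is lost
    composer.render();
} else {
    // gamma-correct per draw call in the shaders; native AA still works
    renderer.render( scene, camera );
}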

@benaadams
Contributor

Though that may do weird things if you add and remove transparent objects from the scene at runtime :-/

@WestLangley
Collaborator Author

Good point. Maybe we can find out how other rendering engines do it.

Here are the assumptions. I am not sure about compressed textures.

@benaadams
Contributor

I am not sure about compressed textures.

Compressed textures are expensive to recompress and fiddly to decompress (possible patent issues in doing so, too?), so translating through a canvas wouldn't be an option. But equally, if you are using compressed textures you'll already be applying extra processing to prepare them, so also moving them into linear space isn't a big ask, versus someone just trying things out.

@benaadams
Contributor

As you suggest, always rendering to texture and then to screen would sort the problems.

Concerns:

  • No anti-aliasing option
  • 1080p goes from 2,073,600 pixels per frame to 4,147,200 pixels rendered per frame, which may kill performance in a lot of the mobile space?
  • More complex default pipeline; render-to-texture confuses debug tools like WebGLInspector

Perhaps pipelined by default, with a renderer initialisation switch to move to straight-to-screen? (Gamma would then be up to the user to fix, with some docs around the switch explaining this.)

@benaadams
Contributor

Tone mapping and anti-aliasing with FXAA could be combined into a single shader, so it wouldn't be too bad...

@WestLangley
Collaborator Author

Yes, we need to support tone mapping for scenes with HDR. Presumably, we need FXAA, too. And I am pushing for GammaCorrection. And users may want any number of post-processing effects.

I should say that if we don't do this, and gamma-correct as we go (as we are doing now), it will not be correct for transparent objects, but it won't be the end of the world, either. I'd still prefer to make it correct, of course.

Is this doable, in your opinion?

@benaadams
Contributor

Yes, and it probably ends up being exactly what you suggested: bringing the composer into the renderer :)

Can have a go unless anyone wants to jump in first? Won't be for a week or so. What's the general consensus? Bring multi-stage frame compositing into the renderer? /ping @mrdoob @gero3 @bhouston

@mrdoob
Owner

mrdoob commented Apr 28, 2015

Losing "native" antialias is quite a concern for me...

@mrdoob
Owner

mrdoob commented Apr 28, 2015

What about waiting for WebGL2 to have gamma corrected transparent objects?

@wrr
Contributor

wrr commented Apr 28, 2015

Wouldn't gamma correction as a separate pass require rendering to a float target or some other HDR format?

If the renderer outputs linear colors to a 1B/channel texture and then the colors are converted to gamma as a separate step, the final output probably won't be able to fully utilize the 1B range; there will be fewer distinct dark colors.

A similar problem exists if input textures are converted to linear space as a separate step and stored in a 1B/channel format; some information about differences between dark colors will be lost.

@WestLangley
Collaborator Author

What about waiting for WebGL2 to have gamma corrected transparent objects?

@mrdoob Can you provide a reference for that?

What would be great would be an extension that converts the framebuffer to sRGB before compositing.

@wrr Conversion from floating-point in the GPU to sRGB should not be a problem. Conversion from sRGB to floating point in the GPU should not be a problem either, because sRGB is specifically designed to retain reasonable precision in the dark colors.

@wrr
Contributor

wrr commented Apr 28, 2015

@WestLangley what I mean is that if you set renderer.sRGBOutput to false and render a scene to a 1B/channel texture, you will lose precision in the dark colors. A gamma-correction pass won't be able to regain this precision, because the information is already lost (if the first pass outputs to a 4B/channel float texture, this isn't a problem).

Current flow is:

[linear color in 4B/channel float] =gamma correction=> [sRGB color in 4B/ch float] =output to screen=> [sRGB color in 1B/ch int, more precision in the dark range, less precision in the bright range]

Proposed flow is:

[linear color in 4B/ch float] =output to texture=> [linear color in 1B/channel int, the same precision across the whole range] =gamma correction pass=> [sRGB color in 1B/ch, not able to regain the dark color precision, because input has only 1B/ch]
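
A quick numeric illustration (rounding details aside):

var linear = 0.002; // a dark linear-space value

// current flow: gamma-correct while still in float, then quantize for the screen
Math.round( Math.pow( linear, 1 / 2.2 ) * 255 ); // ≈ 15

// proposed flow with an 8-bit intermediate: quantize first, correct later
var stored = Math.round( linear * 255 ) / 255; // 1/255 ≈ 0.0039
Math.round( Math.pow( stored, 1 / 2.2 ) * 255 ); // ≈ 21

// every linear value between 0 and 1/255 collapses to one of two codes before
// the correction pass runs, so the first ~20 sRGB output levels become unreachable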

@WestLangley
Collaborator Author

if the first pass outputs to 4B/channel float texture, this isn't a problem.

@wrr I see. So, can we render the scene to a floating-point texture, and then apply the gamma-correction pass?

@wrr
Contributor

wrr commented Apr 28, 2015

So, can we render the scene to a floating-point texture, and then apply the gamma-correction pass?

@WestLangley Yes. RGBM or a similar encoding can also be used, because it allows increased precision in the [0, 1] range and uses less memory than a floating-point texture.
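
For reference, a minimal sketch of the RGBM idea (maxRange is the chosen multiplier, e.g. 16 for RGBM16):

// the alpha channel stores a shared multiplier, extending the range to [0, maxRange]
function encodeRGBM( r, g, b, maxRange ) {
    var m = Math.min( Math.max( r, g, b ) / maxRange, 1.0 );
    m = Math.ceil( m * 255 ) / 255; // quantize the multiplier upward
    if ( m === 0 ) return [ 0, 0, 0, 0 ];
    var d = m * maxRange;
    return [ r / d, g / d, b / d, m ];
}

function decodeRGBM( texel, maxRange ) {
    var d = texel[ 3 ] * maxRange;
    return [ texel[ 0 ] * d, texel[ 1 ] * d, texel[ 2 ] * d ];
}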

@benaadams
Contributor

RGBE, RGBM, RGBD and LogLuv HDR texture format references: #5687

@WestLangley not very good with linking specs but will give it a go:

https://www.khronos.org/registry/gles/specs/3.0/es_spec_3.0.3.pdf#nameddest=section-4.4.2

4.1.8 sRGB Conversion
If the value of FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING for the framebuffer attachment corresponding to the destination buffer is SRGB (see section 6.1.13), the R, G, and B values after blending are converted into the non-linear sRGB color space by computing [...]

Also https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.4

3.7.4 Renderbuffer objects
void renderbufferStorageMultisample(GLenum target, GLsizei samples, GLenum internalformat, GLsizei width, GLsizei height)

Where GLenum internalformat can be https://www.khronos.org/files/opengles3-quick-reference-card.pdf

internalformat: {R,RG,RGB}8, RGB{565, A4, 5_A1, 10_A2},
RGB{10_A2UI}, R{8,16,32}I, RG{8,16,32}I, R{8,16,32}UI,
RG{8,16,32}UI, RGBA, RGBA{8, 8I, 8UI, 16I, 16UI, 32I, 32UI},
SRGB8_ALPHA8, STENCIL_INDEX8, DEPTH{24, 32F}_STENCIL8,
DEPTH_COMPONENT{16, 24, 32F}

I think those are the right bits?
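
Put together, the WebGL2 path would be roughly this (a sketch, untested; canvas, width and height assumed to exist):

var gl = canvas.getContext( 'webgl2' );

// a multisampled sRGB renderbuffer: blending happens in linear space,
// and the hardware applies the sRGB conversion on write
var fb = gl.createFramebuffer();
var rb = gl.createRenderbuffer();

gl.bindRenderbuffer( gl.RENDERBUFFER, rb );
gl.renderbufferStorageMultisample( gl.RENDERBUFFER, 4, gl.SRGB8_ALPHA8, width, height );

gl.bindFramebuffer( gl.FRAMEBUFFER, fb );
gl.framebufferRenderbuffer( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rb );

// render the scene here, then blit to the default framebuffer to resolve the samples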

@bhouston
Contributor

@wrr wrote:

@WestLangley Yes. RGBM or similar encoding can be also used, because it allows to increase precision in the [0-1] range and uses less memory than floating-point texture.

BTW clara.io now supports RGBM, but it can only be created if you start with floating point, so it is not an option to use in the rendering pipeline (such as in this case), except for source textures -- and even then it actually sucks. One should just use true floating-point textures, as RGBM causes artifacts in mipmaps (but that is a different topic).

@benaadams
Contributor

Though waiting for WebGL2 might mean a year or more before it is fixed in the mainstream?

@bhouston
Contributor

@WestLangley :

This is OK for opaque scenes, but not correct for scenes having transparent objects.

I didn't realize that transparent objects weren't combined properly in the current pipeline that does immediate shader-based gamma correction. Given that I didn't notice this bug until now, I do not think it is the most serious one around.

Seriously, I would remove this concern from this discussion as it is pretty minor.

@bhouston
Contributor

I brought up with @WestLangley off list that we should integrate the EffectComposer into the renderer, so I do support this. It would make it a lot easier to use, rather than something extra you have to add.

@wrr
Contributor

wrr commented Apr 29, 2015

An EffectComposer integrated into the three.js core could also be useful for #6251. It would allow support for the cubemap convolution that is needed for light probes used on rough materials.

@mrdoob
Owner

mrdoob commented Apr 29, 2015

I brought up with @WestLangley off list that we should integrate the EffectComposer into the renderer, so I do support this. It would make it a lot easier to use, rather than something extra you have to add.

But, again, we lose antialias if we do that...

@bhouston
Contributor

@mrdoob wrote:

But, again, we lose antialias if we do that...

I think it should be optional to use the EffectComposer, for sure, but I do believe we should promote it into core and have it be just a flag on WebGLRenderer to enable -- thus it is very streamlined and integrated.

You are completely right that anti-aliasing is important. Anti-aliasing of lines is especially important to Clara.io's editor (lots of wireframes!) so we do not use the EffectComposer there, but in our viewer/player we do. Thus I completely agree that there is a need for both approaches.

(Although I do currently have the problem that you cannot preview post effects in the editor while you are tweaking their parameters... argh!)

@bhouston
Contributor

I'll be okay if we do not promote EffectComposer to core and we do not add it to WebGLRenderer. I personally think it is the "right thing to do", but I do not feel it is something that MUST be done.

@WestLangley
Collaborator Author

OK. In spite of the fact that it is not proper to perform gamma-correction prior to alpha-blending (as we are doing), I think given the comments above, we should leave things as they are, and continue to gamma-correct in the shader (and not switch to a post-processing approach).

The consequences -- other than the colors being somewhat off -- are that transparent objects appear more transparent than they should, and transparency does not vary linearly with the opacity parameter.

As I said earlier, opaque objects are rendered properly regardless of the approach.


For the record, another consequence of our current implementation is the clear-color is not gamma-corrected when gamma-correction is enabled, since we only gamma-correct renderable objects.

@Usnul
Contributor

Usnul commented Apr 30, 2015

AA is an important use case, one that maybe can be solved with an extra pass, maybe not. To me, AO and HDR are important. I'd like to ask the same question I asked in the webaudio discussion: what are we making? And if the answer is still "something like Unity", then why are we choosing to avoid a deferred pipeline?

@mrdoob
Owner

mrdoob commented Apr 30, 2015

And if the answer is still "something like unity" then why are we choosing to avoid deferred pipeline?

I guess a deferred pipeline requires WebGL2? Once WebGLRenderer is cleaned up, it would be nice to deal with WebGLDeferredRenderer, and I guess that one could be as gamma-corrected as it gets 😉

@benaadams
Contributor

Going back to the original topic; has gamma correction for input textures been removed from the shaders? Or is that still to be done?

@bhouston
Contributor

I guess deferred pipeline requires WebGL2

It will be a lot easier with WebGL2 to create a proper deferred renderer, so we might as well wait for it rather than waste time trying to fight the limits of WebGL1.

@WestLangley
Collaborator Author

Going back to the original topic; has gamma correction for input textures been removed from the shaders?

Previously, I said:

The following maps should be assumed to be in sRGB space, but have an override option:

  • material.map
  • material.envMap (except HDR maps -- they need to be decoded, but are in linear space)
  • material.specularMap (future, if/when it is changed to a 3-channel map)
  • material.emissiveMap (future)
  • material.iblMap (IBL map, future)

Currently, the following shader chunks call inputToLinear():

map_fragment.glsl
envmap_fragment.glsl
color_vertex.glsl // this is not correct - colors should be assumed to be in linear color space
shadowmap_fragment.glsl // not sure about this one

These conversions currently occur only if renderer.gammaInput is true. I think gammaInput should be removed, and the conversion should be automatic -- with a texture-specific override option.

How to best implement this is up for discussion.

@bhouston
Contributor

What I have done for Clara.io is add an encoding type to each texture. And instead of inputToLinear, I now have decodeTexel that takes an encodingType.

The encodings I think one should support are Linear (e.g. None but Linear is a more accurate term), sRGB, RGBM16 (you need to specify the multiplier), RGBE, LogLuv...

But I will warn you: it turns out that doing live HDR decoding from 8-bit-per-channel RGBA textures is highly problematic. I should have just converted textures with HDR encodings to true half-float (fp16) textures on load, rather than doing live decoding.

@bhouston
Contributor

A corollary would be that we add an output encodingType for shader results, which could be Linear or sRGB (or the others, but I do not see too many use cases for them). :) And we write a similar encodeTexel function that takes an RGBA value and an encodingType.

@WestLangley
Collaborator Author

I think it would be nice if the texture encoding was auto-detected by the renderer based on the file type and its intended use, unless pre-specified by the user. Unless, of course, such an approach leads to too many complications...

A corollary would be that we add an output encodingType for shader results

In my initial post, I suggested

renderer.sRGBOutput // default true

@bhouston
Contributor

I agree with the defaults, but I would implement it like this:

texture.encoding = THREE.sRGBEncoding;
renderer.shaderOutputEncoding = THREE.sRGBEncoding; // or a shorter other variable name

@mrdoob
Owner

mrdoob commented Apr 30, 2015

Have you guys considered having all the shader code in linear space and having some classes in the javascript side that do the conversion? Something like...

texture.convertToLinear();

@WestLangley
Collaborator Author

OK. Thinking out loud...

By default, have texture.encoding set to undefined. Users should not need to set it if the renderer is smart enough.

If the renderer finds texture.encoding undefined, it must set it to something, because the shader will need to know it.

The rule the renderer will follow is to set it to THREE.LinearEncoding, unless the texture is for a map or environmentMap, in which case it sets it to THREE.sRGBEncoding. (Data textures, HDR textures, etc., I expect will complicate this logic.)

Note that if a user creates a texture using something like the method generateTexture() in the examples, then the user will have to set texture.encoding to THREE.LinearEncoding if linear colors were intended.
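
In code, the rule might look something like this (a sketch; the function name and slot argument are hypothetical):

// resolve an undefined texture.encoding from how the material uses the texture
function resolveEncoding( texture, slot ) {
    if ( texture.encoding !== undefined ) return texture.encoding; // user override

    if ( slot === 'map' || slot === 'envMap' ) {
        texture.encoding = THREE.sRGBEncoding;
    } else {
        texture.encoding = THREE.LinearEncoding; // data textures, HDR, etc.
    }

    return texture.encoding;
}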


The alternate approach is to require texture.encoding to be set by the user all the time. Personally, this is the approach I would prefer.

@bhouston
Contributor

bhouston commented Mar 3, 2016

This has been implemented as part of PR #8117; this issue can now be closed.
