Actually this is where I was caught by surprise. Prior to this exercise, I always had the impression that madVR does a much better job of preserving shadows etc. Are we seeing an improved dynamic range? I don’t see any improvement in dynamic range.
When we say dynamic range, we are talking about the range from the whitest of whites to the blackest of blacks. That span between the whitest and the blackest is the dynamic range. Think of it as a pipeline: how wide is that pipeline?
When I observe the clouds, where smoke rises to the sky after an explosion, that smoke is no longer black smoke. If dynamic range is preserved, the smoke will look inky black whilst the face of the boy can still be seen clearly. Then we can safely conclude that, yes, we have better dynamic range, because black smoke looks like black smoke whilst the white clouds are still pure and we are not missing details on the boy’s face. But the above doesn’t seem to be the case: the smoke doesn’t look as black anymore… it seems to be giving out very punchy mid-tones… that kind of look…
The madVR version of the explosion looks like an explosion that happened quite some time ago, with the black smoke slowly disappearing into thin air, whilst the LG and Lumagen versions give the impression that the explosion just occurred and we have thick black smoke as a result…
This is the fun part, if we didn’t break it down to analyse it, we wouldn’t notice all these…
Ya definitely the case. We will all need to wait for the original 4k HDR release from 20th century to determine which is the correct colour…
In the meantime, I think it only makes sense to compare it in SDR 709 against 20th century, to see if there is any differences in colour tone and if any, how big a gap that is…
Actually in my view, it should not deviate much…
Will find out soon when I compare with the SDR version…
Why would we want to preserve shadows? MadVR lifts shadows so you can see the shadow detail. That’s the whole point of improving dynamic range. In very bright scenes, in order to eliminate highlight clipping, LG’s DTM darkens the image by moving the whole window down. This preserves the highlight detail, but the darkening crushes the blacks since the projector’s window is just 50 nits or so. MadVR does the same thing, but then restores the crushed blacks, preserving the dynamic range.
Yes. A calibrated projector’s dynamic range is only about 0-50 nits so you have to map the 0-4,000 nits to fit. If you don’t adjust this window intelligently, you either blow out the whites or crush the blacks. The challenge is to simulate a human’s eyesight, which has a huge dynamic range, when your display device doesn’t. If you observed someone cradling their son’s head on a bright beach, would you think that their son’s head would be that black? Usually not because we can see a wide dynamic range. Also, as I said, compare to an OLED.
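To make the mapping idea concrete, here is a minimal Python sketch of a static tone-map curve: dark tones pass through 1:1, and everything above a knee is rolled off so that 0–4,000 nit content fits a ~50 nit projector. The knee position and the roll-off shape are invented for illustration; this is not any vendor’s actual algorithm, which adapts frame by frame.

```python
def tone_map(scene_nits, display_peak=50.0, knee_frac=0.75):
    """Toy static tone map: pass shadows through untouched, then softly
    compress highlights so they never exceed the display's peak.
    Illustrative only; real DTM curves are far more sophisticated."""
    knee = knee_frac * display_peak          # below this, no compression
    if scene_nits <= knee:
        return scene_nits
    excess = scene_nits - knee               # how far above the knee we are
    headroom = display_peak - knee           # display range left for highlights
    # Reinhard-style soft clip: output approaches display_peak asymptotically
    return knee + headroom * excess / (excess + headroom)
```

With these assumed numbers, everything below about 37 nits is untouched (shadow detail survives), while a 4,000-nit highlight lands just under 50 nits instead of clipping.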
If you like the darker sky or think it is more realistic, then you can actually adjust MadVR’s sky processing to get what you want. The default processing was generated by the MadVR author, Madshi, by garnering the opinions of hundreds of people on AVS Forum who fed back to him how they wanted the scenes to look. Generally skies in MadVR look clean and blue, and clouds are distinct, due to the specific sky processing. If you don’t like that, just turn it off. I’ve never tried, though, because I like it.
Here is the screenshot from the 4K SDR Rec 709 trailer. This is YouTube playing on my Mac; I just froze the frame and took a screenshot, so no camera is involved. The colors are clearly less vibrant…
Hmmm… the smoke seems the same in MadVR and the HDR-X screen capture. You guys can do a screen capture on your PCs and see what it looks like from the HDR-X file…
The EOTF parameter typically controls the amount of compression applied to the image. The HDR signal was encoded at a higher peak brightness than the projector can reproduce, so it needs to be compressed to fit within the projector’s lower light output, a process called tone mapping (this is what the LG PJ does). Similar to dynamic-range compression in music, it reduces the difference between the lowest and highest portions of the content. The more you compress, the brighter the overall image becomes, because the difference between the highs and lows is smaller. The less compression you use, the more dynamic range is preserved, but the darker the overall average image is.
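The trade-off described above (more compression = brighter average picture, less compression = wider tonal spread but darker overall) can be sketched with a toy curve. The `strength` control and the exponent formula below are invented for illustration; this is not madVR’s actual EOTF parameter.

```python
def eotf_rolloff(scene_nits, strength, peak_in=1000.0, peak_out=50.0):
    """Toy model of an EOTF 'compression' control, strength in [0, 1].
    Higher strength lifts mid-tones (brighter average picture);
    lower strength keeps more of the original tonal spread but looks darker."""
    x = min(scene_nits, peak_in) / peak_in   # normalize content to 0..1
    exponent = 1.0 - 0.6 * strength          # more strength -> lower exponent -> lift
    return peak_out * (x ** exponent)
```

Feeding the same set of frame luminances through this at high versus low strength shows the effect: the average output brightness rises with strength, while peak white stays pinned at the display’s limit.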
I believe you have gotten it mixed up
A lot of people think they want higher dynamic range when they really want the image to be brighter; they associate HDR with a brighter image. So, they tend to set the EOTF control in madvr for more compression to give the overall image more pop, which differentiates it more from SDR imagery.
It generally comes down to two factors: how compressed (that is, how bright) the person wants the content to look, versus which preset provides the most balanced approach across a variety of content.
I think it’s wise to compare using content that is very dark without a lot of dynamic range, content with moderate APL and content with high APL, and stack them up to see if anything is clipped or over-processed. I don’t think we have a right answer for this at the moment, as we do not know the director’s intent.
I believe it’s good to preserve shadows as much as possible, but what I’m seeing is that other, lighter areas are lifted as well (the smoke). This is not an indication of improved dynamic range courtesy of madVR; it is the reverse, i.e. higher compression applied with the least amount of dynamic range, which is why all the images look so bright in the madVR version. The luminance levels are changing the shades of blue and green, as we can see from the Avatar: you have a lighter-blue Avatar.
Because each manufacturer’s tone-mapping algo is different, calibrating HDR is a big challenge. However, with the LG, this tone mapping can be turned off prior to the HDR calibration. When calibrating HDR with tone mapping turned off, we get very accurate grayscale tracking because no roll-off is applied to the EOTF target. We are essentially calibrating the system for HDR using gamma 2.2.
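For reference, the gamma-2.2 target mentioned above is straightforward to compute for a grayscale sweep. A small Python sketch (the 100-nit peak is just an assumed example; use whatever your display actually measures at 100% white):

```python
def gamma22_target(stimulus_pct, peak_nits=100.0):
    """Target luminance for a gamma-2.2 grayscale sweep, i.e. what the
    meter should read at each stimulus level with tone mapping switched off."""
    return peak_nits * (stimulus_pct / 100.0) ** 2.2

# Hypothetical 10-point sweep: stimulus % -> expected nits
targets = {pct: round(gamma22_target(pct), 2) for pct in range(10, 101, 10)}
```

Comparing measured luminance against these targets at each step is what “accurate grayscale tracking down to 0%” means in practice.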
As far as what I’m seeing with the LG’s approach to HDR calibration, it’s pretty darn accurate for grayscale. There is no risk of losing shadow detail in the majority of dark movies; I use Game of Thrones season 8, episode 3 to verify this. This is why I didn’t see the need to dive into madVR, as I’m not overly obsessed with seeing every single detail in dark scenes. I know the calibrated grayscale tracking is accurate down to 0%, and I leave the frame-by-frame tone mapping to LG’s algorithm. If the LG’s EOTF roll-off is 60%, that’s fine: the image is darker, but the picture is a lot more inky.
The image does look really “over processed” on the madvr IMHO…
But I’m not saying it’s wrong; it’s just probably the way you prefer to watch the majority of your content.
Bryan, I think I understand what you are trying to say. Equipment has a certain dynamic range in terms of f-stops or nits, and you are pointing to changing the EOTF function to compress the image, with the brighter images indicating misuse and over-processing. I’d like to suggest instead that LG DTM is mis-processing the image and losing detail.
I’m talking about the dynamic range of the image itself, and I’m quite familiar with it from photography, where I lift shadows and lower highlights a lot when processing images. For me, improving dynamic range means creating a larger number of tones in an image so that you can see both the highlights and the shadows. Photographers do this because the dynamic range of an image sensor cannot capture all the tones, from bright to dark, that a human sees. Hence we often use a process called stacking: take multiple images at different brightness levels and use software to combine them into a single image, on digital or other media, that reflects the higher dynamic range of human vision. Such photographs are called High Dynamic Range photos. Alternatively, some RAW files contain enough information that perceived dynamic range can be improved simply by lowering highlights and raising shadows.
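The stacking idea above can be sketched in a few lines of Python. This naive merge assumes a linear sensor response and ignores image alignment and noise modeling, so it is only a toy model of what real HDR-merge software does:

```python
def merge_exposures(shots):
    """Naive HDR merge in the spirit of exposure stacking.
    Each shot is (pixel_value_0_to_1, exposure_time). Assuming a linear
    sensor, value / time estimates scene radiance; mid-tones get the most
    weight because values near 0 or 1 are noisy or clipped."""
    num = den = 0.0
    for value, t in shots:
        w = 1.0 - abs(2.0 * value - 1.0)   # hat weight: peaks at mid-gray
        num += w * (value / t)
        den += w
    return num / den if den else 0.0
```

For one pixel captured in three brackets, e.g. `[(0.15, 0.5), (0.3, 1.0), (0.6, 2.0)]`, all three shots agree on the same underlying radiance, and the merge recovers it; in real scenes, the short exposure contributes the highlights and the long exposure the shadows.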
Without comparing with MadVR, whose processing you don’t like, I can point out LG DTM’s problem with the loss of shadow detail. Looking at the image below, the Avatar’s chest, various parts of his body and the boy’s head are so dark that you cannot see any shadow detail. This is black crush to me. In order to preserve detail in the very bright sky of an image with a large dynamic range, LG DTM brought the average content light level down so much that it crushed the detail in the shadows.
Compare this with a screen capture of the same image playing in the YouTube app on a MacBook Pro, which has an HDR display of around 700 nits. You can verify this image by playing the HDR-X video on any PC with a decent HDR display. It is obvious to me that the shadow detail in the chest area, parts of the body and the boy’s head is preserved, unlike with the LG DTM.
Note that in this image the smoke looks almost exactly like in the MadVR image, so it’s not that MadVR is lifting the smoke too much, as you suggest. It’s that LG DTM has darkened the image so much that the smoke in the LG image is too dark and not representative of the HDR-X trailer.
Night mode photography and DTM, the uncanny resemblance
I can understand the arguments from both sides. This reminds me of astrophotography with “Night Sight” on Pixel phones. The basic premise of that technology is to capture stars in the night sky by keeping the camera sensor open as long as possible and then using Google’s proprietary computational algorithm to compose and stitch the images into a starry night. It is phenomenal, and ever since then other phone manufacturers have pushed night-mode photography to the extent that it lifts shadow details and sometimes accentuates highlights in a rather “unnatural” way, so much so that a night shot NO LONGER appears to have been shot at night.

DTM, when used judiciously, will make HDR content very pleasant to the eyes without cranking too much of this and that. The Goldilocks zone of DTM is anything but a settled standard. There is simply nothing to benchmark against, as DTM is an arbitrary control that can go either way when different movie scenes are compared. I bet Lumagen and MadVR do certain scenes better than LG’s built-in DTM, and there are times the latter performs remarkably better than the former. So what say you guys that we start a thread on DTM featuring other movie scene shots worth looking into, instead of using Avatar as the ONLY piece to lay the foundation of what a good DTM should look like?
Preference vs Reference
In the past, I always believed that video calibration was a standard, and irrefutable proof that calibrating a display would ensure the content is as near to the director’s intent as possible. With this DTM thing added into the video chain, I’m afraid this is no longer the case. But if we put our DTM “magnifying glass” aside and look at the picture as a whole, everyone is happy to see a great HDR and SDR image on their display compared to OOTB. Everyone wins. It all boils down to personal taste, and I can’t believe I am saying this: DTM has slowly evolved to make video calibration like audio calibration. Most of us subscribe to the idea that when it comes to audio, it is a matter of preference over reference. That is why we push ourselves so hard to improve the subwoofer response in the room and make every speaker “sing”. But we know reference level can be “dull” and “unexciting” if it flattens the curve, so let’s add some “flavour” in the form of a “house curve”, which is pretty similar to DTM in the video department, and now I have my own “house curve” which I think is way better than the rest. But wait, there is NO STANDARD in DTM to begin with…
Guys, you know what, I think we can stop comparing. This is not going to lead us anywhere near a definite answer. At the end of the day, so long as your display has been properly calibrated with well-known and recognized calibration s/w, I think we will be just fine. No need to lose sleep over this. This has been a good exercise, but it has reached a point where we are now comparing dick size (tongue-in-cheek, not in the literal sense) and that’s just unhealthy.
Can’t wait to watch avatar 2 (2D) this week! Booked my ticket at shaw plq.
Hope the non-imax laser projector at shaw plq is good.
Maybe I should watch Avatar 1 at home before that and enjoy/familiarise myself with the SDR Avatar colors, so that I can do a better visual comparison for Avatar 2. The last time I watched Avatar 1 was on my Panny plasma!
Somehow, my eyes still prefer a well-calibrated SDR video over HDR most of the time, especially SDR with the wider BT.2020 gamut. I feel the picture is more natural and causes less eye fatigue.
The problem with HDR10/10+ is that the DTM has no fixed standard to follow, as desray and a few of us mentioned. Manufacturers can do their own “rendition”, which results in a different presentation of the image. Worse still, subtitles in HDR are a pain in the ass (PITA)! I really can’t stand glaring subtitles, which are a problem for HDR10/10+ content viewed in a dark room. The creators of HDR need to solve this.
But DV is nice: much more pleasant to the eyes, and the overall image is much more balanced and less glaring than HDR10/10+. Hope they release Avatar 2 on disc with DV.
Personally, I hope color standards won’t go down that route of preference determining a good image. But with consumer taste evolving, this could be highly possible.
Our projectors produce only about 50 nits calibrated, so they are less fatiguing than bright TVs, especially when viewing the headlights of cars.
If you can set the color of the subtitles in your player, set it to gray; much less glaring. Of course this will not work if the subtitles are baked into the video from some download…
I find the DV stream better encoded so even though I am tonemapping with MadVR, I still feed it LLDV