susanstone2022 wrote: ↑Fri Jan 26, 2024 6:05 pm
Let's say one makes a content with 1000nits master display and does 100nits trim for SDR. When the content gets playback on a 1000nits display, it will need to use 100nits trim? This does not make sense to me.

Re-read my previous post and the following section in the Dolby docs. You're focusing too much on the numbers and labels, and not on how Dolby designed their pipeline. The processing interpolates from the 100-nit SDR trim to higher-nit targets, and for this reason Dolby recommends monitoring the SDR trim preview on the same HDR display, or on a display with similar backlight technology.

Obviously, if 100-nit SDR trims were not improving the image (whether for an SDR deliverable or not), they would just be static or removed. Instead they are almost always present, whether automatically generated or, as is nearly always the case, manually adjusted by the colourist. That means Dolby Vision processing does some maths with these values and interprets them for every possible end-user setup. Studios also usually require them for delivery, although not 100% of the time; Netflix, for example, says they are optional.

There are still open questions about implementation, CMv2.9 vs. CMv4.0, but this is a major point of confusion now resolved, at least in my mind. Finally, as RESET_9999 pointed out, you really need to watch his comparison and testing clips, or just test on your own setup.
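To make the interpolation point concrete, here is a minimal sketch of the idea. To be clear, Dolby's actual CM algorithm is proprietary and far more sophisticated than this; the anchor points, the trim fields (lift/gain/gamma/saturation), and the log-luminance weighting below are my own assumptions, purely to illustrate how a 100-nit trim can influence displays well above 100 nits.

```python
import math

# Hypothetical trim passes keyed by target nits. At the mastering peak
# (1000 nits in this example) no adjustment is needed, so every control
# sits at its neutral value.
TRIMS = {
    100:  {"lift": 0.02, "gain": 0.95, "gamma": 1.05, "saturation": 0.90},
    1000: {"lift": 0.00, "gain": 1.00, "gamma": 1.00, "saturation": 1.00},
}

def interpolated_trim(target_nits: float) -> dict:
    """Blend the two nearest trim anchors in log-luminance space for a
    display with the given peak brightness."""
    anchors = sorted(TRIMS)
    # Clamp to the authored range; real players do more than this
    # sketch outside it.
    target_nits = min(max(target_nits, anchors[0]), anchors[-1])
    lo = max(a for a in anchors if a <= target_nits)
    hi = min(a for a in anchors if a >= target_nits)
    if lo == hi:
        return dict(TRIMS[lo])
    # Position between the anchors on a log-nit axis.
    t = (math.log(target_nits) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return {k: (1 - t) * TRIMS[lo][k] + t * TRIMS[hi][k] for k in TRIMS[lo]}

print(interpolated_trim(600))   # a 600-nit TV gets a partial blend of the 100-nit trim
print(interpolated_trim(1000))  # a display at the master peak gets the neutral (no) trim
```

Note how this answers the quoted question: a 1000-nit display playing a 1000-nit master applies essentially none of the 100-nit trim, while everything in between gets a weighted blend of it, which is exactly why the colourist's adjustments to the SDR trim still matter on HDR displays.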
