I keep getting numpties telling me that 4K 4:2:0 8-bit footage does not translate to 4:4:4 10-bit on an HD timeline. Don’t take my word for it, let’s hear from renowned expert Barry Green…
Barry “One excellent benefit of downconverting UHD/4K footage to 1080 HD in post is that you can realize an increase in proportional color resolution and a notable increase in bit depth. The AG-DVX200 records 4K or UHD footage at 8 bits per pixel and utilizes 4:2:0 color sampling. After downconversion, the resulting footage has 10 bits per pixel and 4:4:4 color sampling! Yes, you can convert 3840×2160 8-bit 4:2:0 recorded footage into 1920×1080 10-bit 4:4:4 footage in post.
To understand the color sampling advantage, you’d have to first understand that the camera records its footage in 4:2:0 color sampling. That means (simply put) that there is one color sample for every 2×2 block of pixels. In any given 2×2 block of pixels there are four different “brightness” samples, but they all share one “color” sample. Effectively, within the 3840 x 2160 frame, there is a 1920 x 1080 matrix of color samples, one for every 2×2 block of pixels. During the downconversion to HD, each block of 2×2 brightness samples is converted into one HD pixel, creating a 1920 x 1080 matrix of brightness pixels. This 1920 x 1080 “brightness” (luminance) matrix can be effectively married to the originally-recorded 1920 x 1080 “color” matrix, resulting in one individual and unique color sample for each and every brightness pixel. The result is 4:4:4 color sampling at high-definition resolution.
In terms of pixel depth, the original recorded footage is quantized and recorded at an 8-bit depth, providing for up to 256 shades per pixel. Other formats, like Panasonic’s own AVC-Intra, quantize and record at a 10-bit depth, for up to 1,024 shades per pixel. Having deeper bit depth provides the ability for finer shading and more subtle transitions, especially apparent on smooth gradients (such as in a clear blue sky).
Generally 8-bit cameras perform fine for most images, but extensive image manipulation in post can reveal the limitations of 8-bit encoding and cause “banding” and “stair-stepping” from one shade to the next. 10-bit footage minimizes those effects because there are up to four shades for every one shade in 8-bit footage. When downconverting UHD/4K footage to 1080p HD, you also get the benefit of converting 8-bit pixel depth into 10-bit pixel depth! Since each 2×2 block of UHD/4K pixels will be summed together to create a single 1×1 pixel in 1080p HD, the individual pixel values and gradations from the source footage can be retained in the downconverted footage.
Imagine a smooth gradient of medium gray, gradually getting brighter from left to right. In 8-bit pixel data, a medium gray might be represented by a pixel value of 128, and the next brighter shade might be 129. In 10-bit pixel data, that same medium gray (128) might be represented by a pixel value of 512 (128 x 4) and that brighter shade (129) might be represented in 10-bit by a value of 516 (129 x 4). The obvious difference here is that an 8-bit camera can’t represent any difference between 128 and 129, but the 10-bit camera (looking at the exact same gradient) could represent a smoother transition from 512 through 513, 514, and 515 before eventually reaching 516. Having 10 bits of data provides the ability to retain and discern between finer shades of grey (or color). So what happens when we downconvert our 8-bit UHD footage to 10-bit 1080p HD footage? As each 2×2 block of pixels is summed together, those subtle differences in shade are retained, and we end up being able to represent shades that the 8-bit footage couldn’t have.”
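HDW : If you want to see Barry’s arithmetic for yourself, here is a minimal sketch in Python/NumPy of the block-sum idea, using a made-up 4×4 luma plane and its 2×2 chroma plane (all values are illustrative, and real downconverters filter rather than simply sum):

```python
import numpy as np

# 8-bit UHD luma: one sample per pixel (values 0-255)
uhd_luma = np.array([[128, 129, 130, 131],
                     [128, 129, 130, 131],
                     [132, 133, 134, 135],
                     [132, 133, 134, 135]], dtype=np.uint16)

# 8-bit UHD chroma (4:2:0): one sample per 2x2 block of pixels
uhd_cb = np.array([[100, 101],
                   [102, 103]], dtype=np.uint16)

# Sum each 2x2 luma block: four 0-255 samples give a 0-1020 range,
# i.e. roughly one 10-bit value per HD pixel.
hd_luma = (uhd_luma[0::2, 0::2] + uhd_luma[1::2, 0::2] +
           uhd_luma[0::2, 1::2] + uhd_luma[1::2, 1::2])

print(hd_luma)                        # [[514 522] [530 538]]
print(hd_luma.shape == uhd_cb.shape)  # True: one chroma sample per HD pixel
```

Every HD pixel now has its own chroma sample, which is the 4:4:4 part of the claim; whether the summed luma really carries 10 bits of information is exactly what the comments below argue about.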
HDW : Remember, this only works as long as you set up a 10-bit timeline.
Does this happen automatically on the timeline in Premiere Pro CS6? Or do you have to convert the footage in other software before import? Is any special treatment required to get this benefit?
HDW : I don’t use Premiere, but in FCPX you have the option to choose ProRes 422, which is 10-bit, or ProRes 4444. There’s not much point editing 10-bit footage on an 8-bit timeline.
Hi, Philip . . .
I can see that if you continue to post material on the AG-DVX200 I am going to have to start wearing a bib to avoid drooling on my keyboard.
I want one of these cams!
On the point you made regarding FCPX that’s in the export settings, right? Not the initial choice of timeline. Unless there’s a setting I’m overlooking when creating a new timeline?
Presumably this effect also happens when viewing UHD on a UHD TV from normal viewing distances, as our visual acuity cannot resolve the individual pixels when viewing from more than 8 to 10 feet away. If so, 4:2:0 at 8-bit starts sounding not so bad in UHD. Bob
I transcode my 4K material using MPEG Streamclip to ProRes 422 before editing on a 10 bit HD timeline. Works great on my older/slower computer.
Again, this is wrong!! Please don’t confuse people with this “8-bit UHD becomes 10-bit FHD” nonsense.
I won’t waste many words on color sampling, since that becomes more complex and is harder to explain. The theory is simply not true even for black/white recordings, so the theory is not true in general.
B/W gives the best possible precondition, because every sample (pixel) can get an individual value, in contrast to color values, which are reduced by 4:2:0 and even 4:2:2 subsampling.
I refute the theory with an easy-to-understand example – a grey ramp. The values (left to right) describe the luminance (Y) value; color (UV) is always 0 in B/W, so it is not listed:
On a real 10-bit FHD camera a grey ramp (horizontal line/block) could look like: 508-509-510-511-512
An 8-bit UHD camera sees this as: 127.127-127.127-128.128-128.128-128.128 (twice as many samples due to twice the horizontal resolution, but coarse 8-bit values).
Downscaled to 10-bit FHD this becomes: 508-508-512-512-512 (get it? There is NO benefit at all!!! The “steps” are as coarse as in 8-bit; there is no more information than in 8-bit!)
8-bit UHD camera shifted one pixel to the left: 127.127-127.127-127.128-128.128-128.128
Downscaled to 10-bit FHD this becomes: 508-508-510-512-512 (here the maximum, purely statistical benefit appears: a “gain” to 9-bit between two pixels, but not across the whole ramp!)
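HDW : Roland’s noiseless ramp is easy to check. A quick sketch of his worked example in Python (his values; assuming every sample in a 2×2 block sees the same scene value and the camera rounds to the nearest 8-bit code):

```python
scene = [508, 509, 510, 511, 512]   # the "true" 10-bit ramp values

# The 8-bit camera quantises each sample to the nearest 8-bit code:
uhd_8bit = [round(v / 4) for v in scene]
print(uhd_8bit)                     # [127, 127, 128, 128, 128]

# Downscale: sum the four identical samples in each 2x2 block.
fhd_10bit = [4 * v for v in uhd_8bit]
print(fhd_10bit)                    # [508, 508, 512, 512, 512]
# The steps are still four codes wide: nothing finer was recovered.
```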
A real 10-bit camera would record the slowly rising values of a ramp smoothly, with individual values; no 8-bit camera can do this with the same resolution in brightness and color information!!
All this “8-bit UHD becomes 10-bit FHD” is nonsense. The feared “banding” only becomes visible in slowly changing brightness/color gradations (especially after “grading”); smooth transitions are the only benefit of 10-bit!
I would recommend removing the article because it is simply wrong and misleading.
My god, even Panasonic publishes this wrong theory – how paltry is that?!:
http://pro-av.panasonic.net/en/dvx4k/pdf/ag-dvx200_tech_brief_vol1_en.pdf
Do they expect it to increase the sales of their UHD cameras?!
There’s much that is true in that sheet, but this “8-bit becomes 10-bit” is only “may happen” statistics followed by interpolation, and it will never help where a real 10-bit camera kicks in!
Why on earth should this weird pixel allocation and UHD-to-FHD transition happen in the way shown on p. 4?! It happens due to noise and several other circumstances, but not as a recording of the subject.
Crazy that the Panasonic guys publish this nonsense – they simply didn’t read it and/or do not understand it!!
@Roland Schulz: While it’s true that a single 10-bit pixel has the theoretical potential to be more accurate, in very specific circumstances, than four 8-bit pixels, the practical difference will be very small, and your example is flawed regarding the sampling.
First of all, yes, UHD has twice the horizontal resolution of 1080, but it is also twice the vertical resolution. Therefore, there are four 8-bit samples in UHD for every 10-bit sample at 1080, not two. Assuming that the four UHD pixels occupy the same surface area as the single 1080 pixel (and therefore receive the same number of photon hits), the sum of the four UHD pixel values should equal the value of a single 1080 pixel.
546 = 132 + 145 + 127 + 142
Occasionally, there will be deviations, but on average this should be true. The only exception would be the unlikely chance that one (or three) of the four UHD pixels was bombarded by far more photons than its three neighbors and became saturated, or “full”, and therefore could not record any more information. In this case the total of the four UHD pixels’ values would be less accurate than a single 1080 pixel. This would only occur in very particular conditions.
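HDW : A small illustration of that summing-and-saturation point, with made-up values (the first block matches the figures above):

```python
# Four UHD samples sharing the light of one 1080 pixel:
normal = [132, 145, 127, 142]
print(sum(normal))              # 546, as in the example above

# If one sample is driven past full scale it clips at 255, and the
# block sum then underestimates the true value:
hot = [132, 300, 127, 142]      # 300 is what the sensor "should" have read
clipped = [min(v, 255) for v in hot]
print(sum(hot), sum(clipped))   # 701 vs 656: information lost to clipping
```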
@Roland Schulz
Your reasoning seems mostly correct but as always the devil is in the details.
Where you state that 508-509 becomes 127.127-127.127, this is simply not true. The reality is that your 509 (10-bit) will not translate into 4×127 (8-bit) values but into a random series of values between 127 and 128, probably with a 75-25% distribution, thus 127+127+127+128=509.
See? That oversampling is where the 2 extra bits come from, and how you get from 8-bit to 10-bit, provided you exposed correctly to protect the highlights.
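HDW : Duarte’s oversampling argument is essentially dithering, and it is easy to simulate. A rough sketch, assuming Gaussian sensor noise of about half an 8-bit code ahead of the quantiser (the noise level and the 509 test value are illustrative):

```python
import random

random.seed(1)
true_level = 509 / 4            # 127.25 in 8-bit code units

def sample_8bit(noise_sd=0.5):
    # quantise one noisy reading to the nearest 8-bit code, clipped to 0-255
    v = round(true_level + random.gauss(0, noise_sd))
    return max(0, min(255, v))

# Sum 2x2 blocks of noisy samples and average over many blocks:
blocks = [sum(sample_8bit() for _ in range(4)) for _ in range(100_000)]
print(sum(blocks) / len(blocks))   # ~509: the sub-code level is recovered
```

With the noise removed you are back to Roland’s case – four identical 127s – so the “extra” bits only appear when there is noise to spread samples across the 127/128 boundary, and when nothing clips.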
Not quite sure why this topic has just resurfaced, but I’m afraid Barry Green is completely missing a very vital point – if he is talking about compressed video. (And since he says “the original recorded footage is quantized and recorded at an 8-bit depth”, I assume he is?)
That’s down to the fundamentals of how video compression systems work: basically, taking macroblocks and average values, then coding pixel differences from the average. In essence, compression works by rounding those differences and thereby saving data.
What Barry says *MAY* have some validity in an uncompressed 8 bit original, where statistical variations in the 8 bit samples *MAY* be used to generate a higher bit depth effect on downconversion. But such variations are exactly what a compression system is likely to throw away to reduce the data rate! In other words, simple “8 to 10 bit” magic won’t happen!
Another way to think of it is to consider what happens to such a gradient as the compression level increases, leaving bit depth alone. (It changes from smooth to a step pattern, similar to a reduction in bit depth. This is very easily demonstrable in Photoshop by saving a gradient as JPEGs with gradually harsher and harsher levels of compression – try it!) This is a similar mechanism, though in this case compression is STOPPING or reducing any increase in bit depth, rather than giving an apparent reduction.
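HDW : You don’t need Photoshop to try this. A rough equivalent with Python and Pillow (assuming Pillow is installed; file names are illustrative):

```python
from PIL import Image

# Build a smooth horizontal 8-bit gradient, 1024 x 256.
grad = Image.new("L", (1024, 256))
grad.putdata([x // 4 for _ in range(256) for x in range(1024)])

# Save at progressively harsher JPEG compression; at low quality the
# gradient develops visible banding even though bit depth is unchanged.
for quality in (95, 50, 10, 2):
    grad.save(f"gradient_q{quality}.jpg", quality=quality)
```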
I won’t deny such a downconversion will likely give SOME improvement, but, depending on a host of factors, it will certainly be far less than would be expected from a simple statement of “4K 4:2:0 8bit footage will translate to 4:4:4 10bit on an HD timeline”.
That statement, *as it stands*, is simply not true, and by ignoring compression matters Barry’s analysis above becomes hopelessly simplistic. (And wrong.) Sorry.
I was discussing this with a colleague after the post above, and they made a couple of other points which may be worth sharing.
Even if you ignore my comments above about compression factors, then whilst in terms of chroma sampling it may be true there will indeed be a colour-space improvement, the resultant colour-difference signals will remain solidly 8 bit.
The other point made to me touches on what Duarte Bruno said above – that the basic premise relies on statistical variation. (i.e. Noise!!)
Think of it this way. Imagine a perfectly uniform input such that a 10 bit coder would code each Y pixel as 513. An 8 bit coder would therefore have to code each pixel as either 128 or 129 (equivalent to 512 or 516), and in this ideal situation it would round them all to the closest 8 bit value – 128. Do the 4 pixel to 1 logic and in this ideal situation they all then become 512 – not 513.
Move away from the ideal, and (as Duarte suggests) you won’t have a situation of all 128s, but (hopefully) will have a few of value 129, which is what the theory relies on. But think what’s happening. The whole premise is reliant on a level of random noise to give such variation, and it’s highly unlikely to be a simple 25% 129, 75% 128 split, and even more unlikely that any such split will always be exactly 1:3 for each block of 4 being combined. Statistically, there are also likely to be some of value 127, and some of 130.
If you like, this means that to stand any chance of (theoretically) working as hoped, the system will have a level of noise where any inherent advantage of 10 bit values is lost anyway! And this is before even thinking of the points I made previously about compression issues.
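HDW : The earlier dither sketch makes this point too, if you look at individual 2×2 blocks instead of the long-run average (same toy model, 513 as the true level, noise level illustrative):

```python
import random
from collections import Counter

random.seed(2)
true_level = 513 / 4            # 128.25 in 8-bit code units

def block_sum(noise_sd=0.7):
    # one downconverted pixel: the sum of four noisy 8-bit samples
    return sum(round(true_level + random.gauss(0, noise_sd))
               for _ in range(4))

counts = Counter(block_sum() for _ in range(100_000))
print(sorted(counts.items()))
# The sums average ~513, but individual blocks scatter several codes
# either side of it: each "10-bit" pixel is noisier than one real
# 10-bit code step, which is the point being made above.
```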
This is not to say it’s a bad idea to put a signal captured as 8 bit on a 10 bit timeline for post – far from it. But it’s not because the 8 bit UHD signal will become 10 bit HD; rather, it will be more tolerant of subsequent processing.
Basically, Barry’s statement above – “After downconversion, the resulting footage has 10 bits per pixel and 4:4:4 color sampling!” – is fundamentally incorrect. His reasoning is correct as far as it goes, but unfortunately he is ignoring several very key facts, which largely negate the whole premise. A little knowledge can sometimes be a dangerous thing…
Regarding the whole compression issue, it may be worth sharing this image from Wikipedia (close the image viewer to see the Wikipedia article): https://en.wikipedia.org/wiki/JPEG#/media/File:Felis_silvestris_silvestris_small_gradual_decrease_of_quality.png
It’s obvious that the quality progressively decreases from right to left, and the increasing banding normally makes most people think it’s likely due to a reduced bit depth – a reasonable assumption?
In fact, it’s down to increasing the JPEG compression as you go to the left – nothing to do with bit depth – and it is a good example of what I said in the fourth paragraph of my original post.
There is a current obsession with 4:2:2 as a “must have” colour space – but in a bandwidth-limited system, the photo also illustrates why 4:2:0 **and lower overall compression** may be a better compromise, certainly in a progressive system.
8-bit colour (UHD or otherwise) is sampled at a fixed depth, whatever the chroma sampling (4:2:0, 4:2:2 etc.). 10-bit samples from a greater depth, 12-bit from a greater depth still. These are constants and can never change, no matter where the data ends up. The same goes for audio, and for any process that involves packing and unpacking digital data.
Here are the facts – all you can gain from processing 8-bit UHD into a 10-bit HD workflow is that the inferior colour depth of 8-bit will be squeezed closer together as you downscale – possibly rendering some perceivable IQ increase – but it’s the SAME 8-bit colour palette. You will not increase or introduce ‘new’ colours or expand your 8-bit palette beyond what you captured in the fixed 8-bit palette.
Your colour palette is fixed from source.
What you can gain from working in a 10-bit workflow, even with 8-bit source material, is that anything you artificially introduce AFTERWARDS will be at the 10-bit depth – e.g. additional gradients, VFX work etc. – which is why companies like Atomos recommend it.