When 4:2:0 8-bit becomes 4:4:4 10-bit, from Barry Green



I keep getting numpties telling me that 4K 4:2:0 8-bit footage does not translate to 4:4:4 10-bit on an HD timeline. Don’t take my word for it; let’s hear from renowned expert Barry Green…


Barry: “One excellent benefit of downconverting UHD/4K footage to 1080 HD in post is that you can realize an increase in proportional color resolution and a notable increase in bit depth. The AG-DVX200 records 4K or UHD footage at 8 bits per pixel and utilizes 4:2:0 color sampling. After downconversion, the resulting footage has 10 bits per pixel and 4:4:4 color sampling! Yes, you can convert 3840×2160 8-bit 4:2:0 recorded footage into 1920×1080 10-bit 4:4:4 footage in post.

To understand the color sampling advantage, you’d have to first understand that the camera records its footage in 4:2:0 color sampling. That means (simply put) that there is one color sample for every 2×2 block of pixels. In any given 2×2 block of pixels there are four different “brightness” samples, but they all share one “color” sample. Effectively, within the 3840×2160 frame, there is a 1920×1080 matrix of color samples, one for every 2×2 block of pixels. During the downconversion to HD, each block of 2×2 brightness samples is converted into one HD pixel, creating a 1920×1080 matrix of brightness pixels. This 1920×1080 “brightness” (luminance) matrix can be effectively married to the originally-recorded 1920×1080 “color” matrix, resulting in one individual and unique color sample for each and every brightness pixel. The result is 4:4:4 color sampling at high-definition resolution.

In terms of pixel depth, the original recorded footage is quantized and recorded at an 8-bit depth, providing for up to 256 shades per pixel. Other formats, like Panasonic’s own AVC-Intra, quantize and record at a 10-bit depth, for up to 1,024 shades per pixel. Having deeper bit depth provides the ability for finer shading and more subtle transitions, especially apparent on smooth gradients (such as in a clear blue sky).


Generally 8-bit cameras perform fine for most images, but extensive image manipulation in post can reveal the limitations of 8-bit encoding and cause “banding” and “stair-stepping” from one shade to the next. 10-bit footage minimizes those effects because there are up to four shades for every one shade in 8-bit footage. When downconverting UHD/4K footage to 1080p HD, you also get the benefit of converting 8-bit pixel depth into 10-bit pixel depth! Since each 2×2 block of UHD/4K pixels will be summed together to create a single 1×1 pixel in 1080p HD, the individual pixel values and gradations from the source footage can be retained in the downconverted footage.

Imagine a smooth gradient of medium gray, gradually getting brighter from left to right. In 8-bit pixel data, a medium gray might be represented by a pixel value of 128, and the next brighter shade might be 129. In 10-bit pixel data, that same medium gray (128) might be represented by a pixel value of 512 (128 x 4) and that brighter shade (129) might be represented in 10-bit by a value of 516 (129 x 4). The obvious difference here is that an 8-bit camera can’t represent any difference between 128 and 129, but the 10-bit camera (looking at the exact same gradient) could represent a smoother transition from 512 to 513, 514, 515, and then eventually 516. Having 10 bits of data provides the ability to retain and discern between finer shades of grey (or color). So what happens when we downconvert our 8-bit UHD footage to 10-bit 1080p HD footage? As each 2×2 block of pixels is summed together, those subtle differences in shade are retained, and we end up being able to represent shades that the 8-bit footage couldn’t have.”
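
Both of Barry’s points, the chroma pairing and the extra bit depth, can be sketched in a few lines of Python. This is only an illustration of the arithmetic, using planar YUV arrays filled with random stand-in data; it is not any camera’s or NLE’s actual downscaling pipeline:

```python
import numpy as np

# Hypothetical planar 4:2:0 UHD frame; random data stands in for real footage.
Y = np.random.randint(0, 256, (2160, 3840), dtype=np.uint16)  # luma, full resolution
U = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)   # chroma: one sample per 2x2 luma block
V = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)

# Sum each 2x2 luma block into one HD value.  Four 8-bit samples sum to 0..1020,
# which is a 10-bit range, so finer gradations can survive the downscale.
Y_hd_10bit = Y[0::2, 0::2] + Y[0::2, 1::2] + Y[1::2, 0::2] + Y[1::2, 1::2]

# The chroma planes are already 1920x1080, so every HD luma pixel now has its
# own U and V sample: 4:4:4 relative to the HD frame.
print(Y_hd_10bit.shape, U.shape, V.shape)   # (1080, 1920) three times

# Barry's gradient example: a 2x2 block whose samples straddle codes 128 and 129.
block = np.array([[128, 128],
                  [128, 129]], dtype=np.uint16)
print(block.sum())   # 513 -- between 512 (128*4) and 516 (129*4)
```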

HDW: Remember, this only works as long as you set up a 10-bit timeline.
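
A minimal sketch of why that note matters, carrying over the block value from the example above: on an 8-bit timeline the summed result has to be requantized back to 8 bits, which throws the recovered shade away, while a 10-bit timeline keeps it.

```python
ten_bit_value = 513                      # summed from a 2x2 UHD block, as above

eight_bit_timeline = ten_bit_value // 4  # 128 -- an 8-bit timeline rounds the shade off again
ten_bit_timeline = ten_bit_value         # 513 -- a 10-bit timeline preserves it

print(eight_bit_timeline, ten_bit_timeline)   # 128 513
```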

For all your video production needs in Scotland, get in touch with Small Video Productions

10 comments on this post

  1. Steve says:

    Does this happen automatically on the timeline in Premiere Pro CS6? Or do you have to convert the footage in other software before import? Is any special treatment required to get this benefit?

    HDW: I don’t use Premiere, but in FCPX you have the option to choose ProRes 422 HD, which is 10-bit, or ProRes 4444 10-bit. There’s not much point editing 10-bit footage in an 8-bit timeline.

  2. Hi, Philip . . .

    I can see that if you continue to post material on the AG-DVX200 I am going to have to start wearing a bib to avoid drooling on my keyboard.

    I want one of these cams!

  3. Phil says:

    On the point you made regarding FCPX, that’s in the export settings, right? Not the initial choice of timeline. Unless there’s a setting I’m overlooking when creating a new timeline?

  4. Bob says:

    Presumably this effect also happens when viewing UHD on a UHD TV from normal viewing distances, as our visual acuity cannot resolve the individual pixels when viewing from more than 8 to 10 feet away. If so, 4:2:0 at 8-bit starts sounding not so bad in UHD. Bob

  5. Tim says:

    I transcode my 4K material using MPEG Streamclip to ProRes 422 before editing on a 10-bit HD timeline. Works great on my older/slower computer.

  6. Roland Schulz says:

    Again, this is wrong!! Please don’t confuse people with this “8-bit UHD becomes 10-bit FHD” nonsense.

    I won’t waste many words on color sampling, since that is more complex and harder to explain: the theory is simply not true even for black/white recordings, so it is not true in general.
    B/W gives the best possible precondition, because every sample (pixel) can get an individual value, in contrast to the color values, which are reduced by 4:2:0 and even 4:2:2 subsampling.

    I refute the theory with an easy-to-understand example, a grey ramp. The values (left to right) describe the luminance (Y) value; color (UV) is always 0 in B/W, so it is not listed:

    On a real 10-bit FHD camera, a grey ramp (a horizontal line/block) could look like: 508-509-510-511-512

    An 8-bit UHD camera sees this as: 127.127-127.127-128.128-128.128-128.128 (twice as many samples due to twice the horizontal resolution, but coarse 8-bit values).
    Downscaled to 10-bit FHD this becomes: 508-508-512-512-512 (get it? There is NO benefit at all!!! The “steps” are as coarse as in 8-bit; there is no more information than in 8-bit!)

    The 8-bit UHD camera shifted one pixel to the left: 127.127-127.127-127.128-128.128-128.128
    Downscaled to 10-bit FHD this becomes: 508-508-510-512-512 (here the maximum, purely statistical benefit appears: a “gain” to 9 bits between two pixels, but not across the whole ramp!)

    A real 10-bit camera would record the slowly rising values of a ramp smoothly, with individual values; no 8-bit camera can do this with the same resolution in brightness and color information!!

    All this “8-bit UHD becomes 10-bit FHD” is nonsense. The feared “banding” only becomes visible in slowly changing brightness/color gradations (especially after “grading”); smooth transitions are the only benefit of 10-bit!

    I would recommend removing the article because it is simply wrong and misleading.

  7. Roland Schulz says:

    My god, even Panasonic publishes this wrong theory. How paltry is this?!

    http://pro-av.panasonic.net/en/dvx4k/pdf/ag-dvx200_tech_brief_vol1_en.pdf

    Do they expect to increase the sales of their UHD cameras?!

    There’s much that is true in that sheet, but this “8-bit becomes 10-bit” is only “may happen” statistics followed by interpolation, and it will never help where a real 10-bit camera kicks in!

    Why on earth should this weird pixel allocation and UHD-to-FHD transition happen in the way shown on p. 4?! It happens due to noise and several other circumstances, but not as a recording of the subject.

    Crazy that the Panasonic guys publish this nonsense; they simply didn’t read it and/or do not understand!!

  8. Andrew W says:

    @Roland Schulz: While it’s true that a single 10-bit pixel has the theoretical potential, in very specific circumstances, to be more accurate than four 8-bit pixels, the practical difference will be very small, and your example is flawed regarding the sampling.
    First of all, yes, UHD has twice the horizontal resolution of 1080, but it is also twice the vertical resolution. Therefore, there are four 8-bit samples in UHD for every 10-bit sample at 1080, not two. Assuming that the four UHD pixels occupy the same surface area as the single 1080 pixel (and therefore receive the same number of photon hits), the sum of the four UHD pixel values should equal the value of a single 1080 pixel.

    546 = 132 + 145 + 127 + 142

    Occasionally there will be deviations, but on average this should be true. The only exception would be the unlikely chance that one (or three) of the four UHD pixels was bombarded by far more photons than its three neighbors and became saturated, or “full”, and therefore could not record any more information. In that case the total of the four UHD pixels’ values would be less accurate than a single 1080 pixel. This would only occur under very particular conditions.

  9. Duarte Bruno says:

    @Roland Schulz
    Your reasoning seems mostly correct, but as always the devil is in the details.
    Where you state that 508-509 becomes 127.127-127.127, this is simply not true. The reality is that your 509 (19bit) will not translate into 4×127 (8-bit) values but into a random series of values between 127 and 128, probably with a 75-25% distribution, thus 127+127+127+128=509.
    See? That oversampling is where the 2 extra bits come from and how you get from 8-bit to 10-bit, provided you exposed correctly to protect the highlights. (See the sketch after the comments.)

  10. Duarte Bruno says:

    Note: Where it says 19bit it should read 10bit.
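
For what it’s worth, the disagreement between Roland and Duarte comes down to whether the four 8-bit samples in a 2×2 block carry any sub-LSB variation. Here is a minimal sketch with invented sample values showing both cases; which one real footage resembles depends on sensor noise, dithering and in-camera processing:

```python
import numpy as np

# Invented sample values: the true scene luminance for one HD pixel is 509
# on a 10-bit scale, i.e. 127.25 in 8-bit terms.

# Roland's case: all four co-sited UHD samples quantize to the same 8-bit code,
# so the sum can only ever land on multiples of 4.
flat_block = np.array([127, 127, 127, 127], dtype=np.uint16)
print(flat_block.sum())       # 508 -- the intermediate value 509 is lost

# Duarte's case: noise or dithering spreads the sub-LSB detail across the four
# samples (three at 127, one at 128), and the sum recovers the finer value.
dithered_block = np.array([127, 127, 127, 128], dtype=np.uint16)
print(dithered_block.sum())   # 509 -- here the extra precision is real
```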
