SAR and DAR: reconciling the pure pixel.

peter-d wrote on 8/24/2020, 11:56 PM

DISPLAY SIZE:
The Pixel Aspect Ratio is reported as 1.0921; the nominal PAL VCD value is 12/11 ≈ 1.0909.
The video width is displayed at 352 x 12/11 = 384.
So the stored video is stretched for display to 384x288.

This raises some questions about processing, given that the video is always stored at 352x288.
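
The display-size arithmetic above can be sketched in a few lines of Python. This is only an illustration, assuming the nominal PAL-VCD pixel aspect ratio of 12/11; tools may report slightly different figures such as the 1.0921 quoted here.

```python
from fractions import Fraction

# Assumed nominal PAL VCD pixel aspect ratio: 12/11 (~1.0909).
par = Fraction(12, 11)

stored_width, height = 352, 288            # storage frame (SAR)
display_width = round(stored_width * par)  # width after the display stretch

print(f"{stored_width}x{height} displays as {display_width}x{height}")
# -> 352x288 displays as 384x288
```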


MY THINKING:
Logic tells me this process of stretching and reducing width is degrading.
When I open the unaltered source at its SAR of 352x288, it should be in its purest state.
Perhaps I should carry out all colour edits that are unaffected by the 352x288 SAR state.
With the smaller frames, this might make the difference for a few troublesome plug-ins working.
Then save the video permanently at 384x288 before de-noising and any complex HSL edits.

The final render will be lossless AVI.


THE QUESTIONS:
Was the original camera footage shot at 384x288?
How and when was the 352x288 version created?
Your general thoughts?

Comments

Yelandkeil wrote on 8/25/2020, 1:31 AM

Sir, nobody will be narky, but would you go to the link below and tell us your QUESTIONS after reading --
https://en.wikipedia.org/wiki/Video_CD

I'm more than 60 years old and long out of school, but still living in the 21st CENTURY.

ASUS TUF Gaming B550plus BIOS3202: 
- Thermaltake TOUGHPOWER GF1 850W 
- ADATA XPG GAMMIX S11PRO; 512GB/sys, 2TB/data 
- G.SKILL F4-3200C16Q-64GFX 
- AMD Ryzen9 5950x + LiquidFreezer II-240 
- XFX Speedster-MERC319-RX6900XT <-AdrenalinEdition 24.12.1
Samsung 2xLU28R55 HDR10 (300CD/m², 1499Nits/peak) ->2xDPort
ROCCAT Kave 5.1Headset/Mic ->Analog (AAFOptimusPack 6.0.9403.1)
LG DSP7 Surround 5.1Soundbar ->TOSLINK

Lumix DC-GH6/H-FS12060E: HLG4k60p, AWBw, shutter=200, ISO=auto (250 - 6400)
DJI Mini4 Pro: HLG4k60p, AWB, shutter=auto, ISO=auto
HERO5: ProtuneFlat2.7k60pLinear, WB=4800K, Shutter=auto, ISO=800

Win11Pro: 24H2-26100.4484; Direct3D API: 12.2
VEGASPro22 + XMediaRecode/Handbrake + DVDArchi7 
AcidPro10 + SoundForgePro14.0.065 + SpectraLayersPro7 
K-LitecodecPack17.8.0 (MPC Video Renderer for HDR10-Videoplayback on PC) 

peter-d wrote on 8/25/2020, 2:19 AM

THE QUESTIONS:
Was the original camera footage shot at 384x288?
Based on what I know, it was shot at 384x288.

How and when was the 352x288 version created?
By some type of compression algorithm at the time.
I should add that my footage has never had audio.

The questions around editing are not affected by what I have read.
Your general thoughts?

Marco. wrote on 8/25/2020, 2:42 AM

It is unlikely a camera shoots or shot such a frame size. More likely it was shot with 576 lines of vertical scan and then downsized to VCD size later.
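
The downsize Marco describes can be pictured as a crude 2:1 decimation on both axes. A toy sketch, assuming a 704x576 source (720x576 is also common for full PAL); real resizers filter before decimating rather than simply dropping pixels.

```python
# Hypothetical full-PAL frame, built as a synthetic gradient.
src_w, src_h = 704, 576
frame = [[(x + y) % 256 for x in range(src_w)] for y in range(src_h)]

# Keep every second pixel on both axes -> VCD storage size.
vcd = [row[::2] for row in frame[::2]]

print(len(vcd[0]), len(vcd))  # -> 352 288
```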

peter-d wrote on 8/25/2020, 2:58 AM

I am interested in what people think about colour correction at the Storage Aspect Ratio.
Also, colour correction before Neat Video de-noise?
I believe that many people de-noise first?


Musicvid wrote on 8/25/2020, 8:07 AM

This was explained in a previous post. Pixels are Pixels. "Stretching" is a visual metaphor.

352 "stretched" really means mapped to 384. This happens during encoding. Yes, there is a loss of horizontal information, about 10%; it is a tradeoff.
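
The "mapped" idea can be made concrete: each output pixel samples the input row at a proportionally scaled position. A minimal linear-resampling sketch on a short toy scanline (11 samples mapped to 12, mirroring the 12:11 ratio); this is an illustration, not the resampler any particular encoder uses.

```python
def resample_row(row, out_width):
    """Linearly resample one scanline to a new width -- a toy model of
    the horizontal remap applied between storage and display sizes."""
    in_width = len(row)
    out = []
    for x in range(out_width):
        # Map the output pixel position back into input coordinates.
        src = x * (in_width - 1) / (out_width - 1)
        i = int(src)
        frac = src - i
        j = min(i + 1, in_width - 1)
        out.append(row[i] * (1 - frac) + row[j] * frac)
    return out

row = [0, 110, 220, 110, 0, 110, 220, 110, 0, 110, 220]  # 11 samples
stretched = resample_row(row, 12)                         # 12 samples
print(len(stretched))  # -> 12
```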

peter-d wrote on 8/25/2020, 8:58 AM

Yes, I have not forgotten.

I tend to use metaphors to understand general science principles.
I cannot see any difference between "remap" and "stretch/contract" as ways of grasping the reduction from DAR to SAR.

Learning that this compression acts only on width has really come as a surprise.
My thinking is that colour correction in the SAR or DAR state may be very similar.

If you do not mind the question, do you have a preference of order for De-noise and Colour correction?

fr0sty wrote on 8/25/2020, 9:04 AM

I denoise after color correction, as that step can introduce noise if you need to push levels a bit to brighten a scene.

Systems:

Desktop: AMD Ryzen 7 1800x 8 core 16 thread at stock speed, 64GB 3000 MHz DDR4, Geforce RTX 3090, Windows 10

Laptop: ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

peter-d wrote on 8/25/2020, 9:15 AM

I have been starting with brightness reduction.
Then de-noise, followed by HSL edits and a Selective Hue edit where brightness is also increased.

I will try putting de-noise last, believing that it should be applied at the DAR.

I should just add that I am still questioning whether de-noising in the SAR or DAR state would change the result.
When DAR is compressed to SAR the image should sharpen, and this cannot be reversed without softening and loss of detail. I wonder if de-noising in the SAR state, before aspect correction, might even improve the result for some low-quality footage.

Musicvid wrote on 8/25/2020, 2:03 PM

As I mentioned, pixel mapping to the output always happens in the encoder, meaning last; a preview is what it is.

You have no choice but to apply any and all of your effects before display mapping, if any, takes place. And no, it wouldn't make a gnat's a** worth of difference. You will learn far more from testing at this point, rather than letting yourself get stuck at the overthinking stage.

fifonik wrote on 8/25/2020, 8:04 PM

I'm using Neat Video for de-noising and always apply it first, as per their recommendations (section 3) and the FAQ "Is processing via Neat Video best done before or after any other processing (i.e. tonal/color correction)?".

Some adjustments (including CC) do not change noise characteristics dramatically, so they can be applied before de-noising as well (though NV will require you to build/use another noise profile).

Other changes (crop/resize/blur/sharpen/temporal changes) change the noise dramatically, so you'd better de-noise first so it will not affect further filters in the chain.
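
A rough illustration of why resizing changes the noise itself: averaging neighbouring pixels (a crude 2:1 downscale) reduces independent noise by about 1/sqrt(2) per 2-tap average, so a noise profile built before the resize no longer matches afterwards. This is a toy model, not Neat Video's actual analysis.

```python
import random

random.seed(0)
# Simulated per-pixel noise with standard deviation 1.
noise = [random.gauss(0, 1.0) for _ in range(100_000)]

def std(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Crude 2:1 downscale: average each pair of neighbouring samples.
downscaled = [(a + b) / 2 for a, b in zip(noise[::2], noise[1::2])]

# The downscaled noise std drops to roughly 1/sqrt(2) of the original.
print(round(std(noise), 3), round(std(downscaled), 3))
```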

Last changed by fifonik on 8/26/2020, 9:52 PM, changed a total of 1 time.

Camcorder: Panasonic X1500 + Panasonic X920 + GoPro Hero 11 Black

Desktop: MB: MSI B650P, CPU: AMD Ryzen 9700X, RAM: G'Skill 32 GB DDR5@6000, Graphics card: MSI RX6600 8GB, SSD: Samsung 970 Evo+ 1TB (NVMe, OS), HDD WD 4TB, HDD Toshiba 4TB, OS: Windows 10 Pro 22H2

NLE: Vegas Pro [Edit] 11, 12, 13, 15, 17, 18, 19, 22

Author of FFMetrics and FFBitrateViewer

peter-d wrote on 8/25/2020, 8:12 PM

You will learn far more from testing at this point, rather than letting yourself get stuck at the overthinking stage.

Yes, I am good to go now.
I have rendered one video perhaps 100 times, making changes and keeping notes.
The way I look at it, the knowledge will be applied to other videos, or even to something unrelated.
Thanks

Einstein: The definition of insanity is doing the same thing and expecting a different result.


peter-d wrote on 8/25/2020, 8:59 PM

I'm using Neat Video for de-noising and always apply it first, as per their recommendations (section 3).

Yes, de-noising before adding to the problem should be best.
If we consider the history of the final edit as a copy of an edited copy of the original, it comes down to the processing in between.
I am rendering to TIFF after each individual process, due to PC limits.
I have been weighing whether such additional noise can make this really bad footage worse.

One of the biggest reasons for me to like putting de-noise first is the clarity it gives for editing colour.

Musicvid wrote on 8/25/2020, 9:04 PM

I have rendered one video perhaps 100 times, making changes and keeping notes.

All that for half-resolution, boneyard VCD? You do need to get out more.

That wasn't Einstein, but the self-recognition is admirable.


peter-d wrote on 8/25/2020, 9:19 PM

I need to get out and read more :)

http://www.miraizon.com/support/info_aspectratio.html