I was thinking about a world with one video codec that works, versus everyone using different electric wall sockets in their houses. I am so glad someone standardized the American electric plug. Just think about how many adapters you would need just to plug in your laptop; it would be as much fun as video editing.
How about audio too?
Even now, we still have a chance to simply choose Wave, but we keep insisting on adding Dolby. WHY?
It's simple. It's because we are all gullible, dumbass lemmings who believe everything we read, and we are so cheap that we'll keep compressing it as long as we're told, in print, that for some reason it's better or OK.
We (the consumers) are mostly using Dolby just for bragging rights, not because anybody can hear whether it is better or worse. In today's age, we aren't saving very much space by using Dolby. We are just handing them our money.
Like I said: WE ARE DUMB ASSES. Dumb asses, because we have to pay extra to compress it one more time. Dumb asses, because you'll still continue to do it. Dumb asses, because you would still rather brag that you HAVE Dolby than a superior WAVE.
Dolby has suckered the world into continually paying for nonsense. And you gladly will.
Alright, let's hear the idiot Dolby defenders whine.......... (they probably own stock .... and why not, you can't lose)
Well, TGS, that's quite the rant and for both practical and technical reasons, I don't agree with you.
You say that "we have to pay extra to compress it one more time," but PCM audio is uncompressed. How do you figure that we're compressing it "one more time"?
PCM stereo WAV audio runs at 1536 kbps, which takes up a significant portion of the available bitrate when you're trying to squeeze 2 hours onto a single-sided DVD. Using 192 kbps Dolby AC3 audio, you can use an average video bitrate of about 4800 kbps; if you use uncompressed PCM audio, that video bitrate drops to about 3400 kbps. More often than not, that 1400 kbps difference will be very noticeable.
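If anyone wants to check the arithmetic, here's a back-of-the-envelope sketch in Python. The 5% allowance for muxing/filesystem overhead is my own rough assumption, not anything from the DVD spec, but it lands close to the numbers above:

```python
# Rough bitrate budget for a 4.7 GB single-layer DVD holding a 2-hour
# program. The 5% mux/filesystem overhead is an assumed figure.
DISC_BYTES = 4_700_000_000
RUNTIME_SEC = 2 * 60 * 60
OVERHEAD = 0.05

total_kbps = DISC_BYTES * 8 / RUNTIME_SEC / 1000 * (1 - OVERHEAD)

for label, audio_kbps in [("AC3 @ 192 kbps", 192), ("PCM @ 1536 kbps", 1536)]:
    print(f"{label}: ~{total_kbps - audio_kbps:.0f} kbps left for video")
# -> AC3 @ 192 kbps: ~4769 kbps left for video
# -> PCM @ 1536 kbps: ~3425 kbps left for video
```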
I typically use PCM audio on Blu-ray discs because they're 25 GB and have a maximum combined bitrate of 40 Mbps. PCM audio can potentially sound better to people with the ears and equipment to discern the difference.
What exactly is your problem with Dolby that would cause a somewhat profane post like the one above?
Now, regarding the "unified codec theory," that would be fine if we all had just one destination format, but we don't. It's part of our job to know what codec to use for a specific situation. It separates us from the lower primates which don't typically edit or watch television. (Except maybe for these guys.)
There IS a video standard: MPEG-1. There IS an audio standard: WAV.
I was using those back on my old Tandy 1000. Not my fault everybody wanted to change. It was a standard then and it is a standard now, but you won't see anyone here recommend them, for various reasons.
Dolby is compressed. Super compressed. That is how we are compressing it one more time. How did you get that wrong?
I've never been a Dolby fan and I never will be. They've permanently welded their foot in the door and we keep paying a premium for another compression. Yes, I use it too; I already paid for the rights. But I'd also just use MP3s if I needed to save space. That is, if I could.
The funny thing is, Dolby hasn't figured out how to stick their foot into the door of the strictly audio-only formats since we've gone all digital. Not yet. Give 'em time... you'll gladly pay for the bragging rights to say you're using "Digital Dolby Audio" and not those horrid MP3s or iTunes. You will be suckered by Dolby's nonsense again. They have enough money to mount a successful campaign and too big of an already-suckered fan base not to succeed.
It's supposed to be this FANTASTIC audio compression and it isn't even used in the audio market. It's in the video realm.
That should be a red flag.
"That is how we are compressing it one more time." How did you get that wrong?"
Let's see... WAVE audio is uncompressed, and if I compress it to AC3 then I have compressed it once, not "one more time." How did you get that wrong?
You are curiously angry and bitter about Dolby and I'm pretty sure you're the only one who understands why. Did someone named Dolby steal your girlfriend in junior high school?
I don't use Dolby AC3 for bragging rights, that's ridiculous. I use it because it works and, in my opinion, it sounds better at a given bitrate than MP3. Besides, MP3 is not part of the DVD audio spec, so in my opinion, your rant is pointless.
A point of note: the North American plug isn't REALLY standardized. There's:
two-prong
polarized two-prong
three-prong
polarized three-prong
115-volt (or 110, or 120, it depends)
220-volt (or 230, or 240, it depends)
a special 220 heavy-duty plug with a round shape so it can't accidentally be plugged into a 115
I'm not counting anything but the plug itself here. :)
That anti-Dolby rhetoric is frighteningly similar to what goes on all the time over at the Hydrogenaudio forums. If that bunch had their way, all of our DVD and Blu-ray video discs would use LAME-encoded MP3 audio. Why? Because it's "transparent," and besides, it's what's best for mankind (the Freeman doctrine).
Of course, 5.1 MP3 was a latecomer and really never caught on, certainly not as a delivery format. But darn those fools who wrote the DVD and BD specs without mandating MP3 over that "inferior" AC-3 codec. Then we could have had some real standardization! (all jealousies aside).
I'll continue to use AC-3 for my video discs and LAME for my non-CD audio projects, thank you. ;?)
Wasn't Dolby supposed to cancel tape hiss? All it did was attenuate the highs, so nobody wanted it, because we blasted our music so loud you couldn't hear any hiss anyway.
One of those solutions without a problem, so they probably greased somebody somewhere to fashion it into a standard.
"Wasn't Dolby supposed to cancel tape hiss, but all it did was attenuate the highs"
Yes, it did attenuate tape hiss, and no, it didn't attenuate the highs. Broadly, Dolby increased the recording level of the highs and restored them to the proper level on playback. And it worked fine. But if you blasted your music, your impression of reduced highs probably resulted from hearing loss.
If you're just talking about your house, one type of plug is fine. When you get into commercial infrastructure, it's a very different world of mixed plugs and specs. Similar to the codec world: figuring out which one works with the correct specs and equipment is half the fun/challenge I deal with designing building network infrastructure, data centers and data closets. Equipment manufacturers use different plugs on their gear all the time. You really have to get the specs correct or the equipment won't power up, and the electrician will be back rewiring receptacles, then processing the change orders and added cost, etc. Ugh.
Actually, Dolby copied the old Lenkurt (California) mux equipment channel noise reduction techniques (3 dB per channel reduction) used in their telephone exchanges in the 1950s.
JJK
Dolby "B", the most common consumer noise reduction system, applied level dependent high frequency emphasis on the encode side and did the opposite on playback. It could achieve up to 9 db of noise reduction and the audio cassette needed all the help it could get. It worked really well when properly calibrated, unfortunately cheap consumer cassette recorders were not rarely calibrated. I hated cassettes.
Aren't you that elitist character who thinks a Wave is uncompressed because the word "uncompressed" is always associated with it? Are you still patting yourself on the back over that one, Eincline?
I can also get AVIs that say uncompressed that are 13 GB per hour, and I can get AVIs that say uncompressed that are about 92 GB per hour.
When are you going to tell all these people that they are wasting their time recording audio at 24/96, since we already have the perfect Wave codec at 16/44.1 that, according to your logic, and since it's uncompressed, can perfectly capture every single thing the human ear can detect?
Linear PCM is uncompressed in that it is sampled at a given sample rate and bit depth, and no psychoacoustic tricks have been played on it to reduce the file size by throwing out information, like what happens with WMA or MP3. For all practical purposes, a WAVE file is uncompressed. An 8 kHz/8-bit WAVE file is also uncompressed, but it also has very limited resolution and fidelity. What I'm calling "uncompressed" is audio or video to which no further data reduction has been applied beyond its original sampling rate and bit depth.
Anyone who claims that a 13 GB/hour DV .AVI file is uncompressed is flat-out wrong. Like audio, the video has been sampled at a specific sample rate and bit depth, but that data has been further reduced by applying lossy DVC-format video compression.
Recording audio using a sample rate of 96k is a bit of overkill, but sampling at a bit depth of 24 bits is not. Generally speaking, 24 bit audio sounds better than 16 bit audio. However, virtually no one can reliably tell the difference between sampling at 48k vs. 96k. The only thing that can capture "every single thing the human ear can detect" is the human ear, everything else is an approximation at best.
24 bits/sample X 96,000 samples/sec X 2 channels = 4,608,000 bits/sec
16 bits/sample X 48,000 samples/sec X 2 channels = 1,536,000 bits/sec
Those are both at 1:1 compression ratio, which is uncompressed by definition.
It is also fifth grade math . . .
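Here it is in Python for anyone who wants to extend the table; the helper name is mine, not from any library:

```python
# Uncompressed PCM data rate: bits/sample * samples/sec * channels.
def pcm_bits_per_sec(bit_depth, sample_rate, channels=2):
    return bit_depth * sample_rate * channels

for depth, rate in [(16, 48_000), (24, 96_000)]:
    bps = pcm_bits_per_sec(depth, rate)
    gb_per_hour = bps * 3600 / 8 / 1_000_000_000
    print(f"{depth}-bit/{rate} Hz: {bps:,} bits/sec, ~{gb_per_hour:.2f} GB/hour")
# -> 16-bit/48000 Hz: 1,536,000 bits/sec, ~0.69 GB/hour
# -> 24-bit/96000 Hz: 4,608,000 bits/sec, ~2.07 GB/hour
```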
TGS,
If you are going to rewrite the definition of audio compression for the rest of us, where would you put the bit depth/sample rate endpoints?
Bit depth: > 0 and <= ?
Sample rate: > 0 and <= ?
That is eighth grade math . . .
Or are you arguing that only infinite bitrate is "truly" uncompressed? Follow that line of thought to its logical conclusion . . .
John Cline is 100% correct. By sampling above the Nyquist rate at 16 bits or greater, it would be difficult to find anyone who could tell the differences consistently in a controlled test. Uncompressed digital does not equate to pristine reproduction, no matter what you think.
I have a hard time telling a difference between 24/96 and 16/48 prime source recordings of live music. I ran a few listening tests when I first got my H4. But then, I don't have any trouble with putting AC-3 on my DVD videos, either, even though PCM has more clarity.
I guess it's one of the many advantages of being an old fart.
Much of the FUD on this topic comes from information that is decades out of date.
In the early days of digital audio, 44.1 kHz sounded pretty woeful, 48 kHz sounded better, and 96 kHz better still. The issue back then was that the A/D converters ran at the target sample rate, and sampling audio at 44.1 kHz is not a very good idea at all, as it makes for very serious issues in wrangling aliasing, thanks to Mr. Nyquist.
Today, all modern A/D converters oversample, most at probably 128 kHz. Then they do the necessary work to digitally filter and resample down to the target sample rate. Arguably, they may do a better job than your software can. This could even lead to the opposite outcome from what would seem obvious: recording at 44.1 kHz when delivering 44.1 kHz may produce better audio than recording at 128 kHz and downsampling to 44.1 kHz.
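For the curious, software downsampling usually means a polyphase filter. A minimal sketch with SciPy, assuming a 96 kHz source (the array here is just random noise standing in for a recording); whether this beats the converter's own decimation is exactly the open question above:

```python
import numpy as np
from scipy.signal import resample_poly

rate_in, rate_out = 96_000, 44_100        # 44100/96000 reduces to 147/320
audio_96k = np.random.randn(rate_in * 2)  # stand-in for 2 s of recorded audio

audio_44k = resample_poly(audio_96k, up=147, down=320)  # filter + resample
print(len(audio_96k), "->", len(audio_44k))             # 192000 -> 88200
```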
Bit depth is another matter entirely. No negative impact recording at 24 bits compared to 16 bits; I always do if I can. Just how much it really gains you is another matter. The extra dynamic range could well be obliterated by poor mic preamps; only the best are quiet enough for 24-bit to make much of a difference.
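The textbook numbers behind that: the usual rule of thumb for quantization SNR with a full-scale sine is 6.02 dB per bit plus 1.76 dB, and real converters and preamps land well below these figures, which is the point:

```python
# Theoretical quantization SNR for a full-scale sine: 6.02*N + 1.76 dB.
for bits in (16, 24):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB theoretical SNR")
# -> 16-bit: ~98 dB
# -> 24-bit: ~146 dB
```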
If you really want to push the limits of digital audio fidelity, Sony's Direct Stream Digital seems to be the duck's guts. From what I read, it's a bear to deal with.
"When are you going to tell all these people that they are wasting their time recording audio at 24/96, since we already have the perfect Wave codec at 16/44.1 that, according to your logic, and since it's uncompressed, can perfectly capture every single thing the human ear can detect?"
Something tells me you don't quite understand the mechanics of dynamic range, sampling frequency, bit depth, and the ear.
Do you know why Dr. Stockham didn't want 44.1 as a standardized audio sampling frequency, and fought it very hard for a long period of time? Do you know why 44.1 was chosen?
Some people are very happy with low resolution video files, others prefer to work with higher resolutions. Same with audio.
You might want to google "Nyquist" for some additional information.
To the OP, a single codec has appeal, but at the end of the day...codecs are what drive development, and man o' man...have we seen some super development thanks to codecs.
Although I am very partial to J2K as a scalable, single form model.
"Recording at 44.1 kHz when delivering 44.1 kHz may produce better audio than recording at 128 kHz and downsampling to 44.1 kHz."
Duhh! Glad someone finally stated this in plain English.
"No negative impact recording at 24 bits compared to 16 bits."
Agreed. The only negative collateral is a 50% increase in file size. If the delivery format is 24-bit, then recording at that bit depth is a no-brainer. But if one is delivering 16-bit, what is the point, beyond the already-boring fx vs. dithering discussion?
"at the end of the day...codecs are what drive development, and man o' man...have we seen some super development thanks to codecs."
I don't know who could have said this better. Thanks!
I think it's because the first digital audio system recorded to VHS tape, and that was the highest bitrate they could use to write data to the tape.
No, I didn't cheat and Google :)
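For what it's worth, the arithmetic usually cited (and as I recall, the early PCM adaptors actually rode on U-matic and Beta video decks rather than VHS) is that 44,100 Hz fits the line structure of both video systems:

```python
# 44.1 kHz falls out of storing 3 audio samples per usable video line on
# a PCM adaptor; the line/field counts are the figures commonly cited.
ntsc = 3 * 245 * 60  # samples/line * usable lines/field * fields/sec
pal  = 3 * 294 * 50
print(ntsc, pal)     # -> 44100 44100
```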
My answer is that 44,100 Hz is slightly above the Nyquist rate for human hearing (twice 20 kHz is 40 kHz, and the extra margin leaves room for the anti-aliasing filter), and it also fit the analog hardware capabilities at the time it was developed.
It was a challenge, but it was achieved. Oversampling came later.
Well, the common answer has to do with fitting a certain piece of music onto the first commercial CD. I think that was the reason they decided to stay with 44.1 kHz, even though by the time it was ready for commercial use, tape systems could achieve higher data rates.