24 bit, 48000Hz?

Status
Not open for further replies.

dirtyminded

Distinguished
Aug 22, 2011
14
0
18,510
Hi,
I noticed when i was screwing around with the sounds settings in my new computer there was a feature that had different HZ frequencies. The default was set to 24 bit, 48000 Hz (studio quality) ?? There was an option to go up to 24 bit, 96000 Hz or even higher at 24 bit, 192000 Hz?

What is the difference? And if i set it to the highest Hz frequency will my music and movies i play sound better?

Any info would be great. Thanks.
 

diellur

Distinguished
Apr 7, 2011
1,345
0
19,460
Frequency = 1/Time. So the higher the frequency, the smaller the time interval between samples when recording the source data, the better the sound quality of the recording, and the larger the source file.

I'm not sure how it equates to playback... I think it means your sound card can replay audio sampled at up to that rate. However, I'd think it can't replay at a higher quality than the audio was sampled at in the first place.
 
Basically, it's a sampling rate. A digital representation of an analog signal is not exact, but the more samples you take over a set period of time, the more accurate the output is. In theory, the higher the sample rate, the more correct the output will be.

However, for compatibility reasons, 99% of all audio is encoded at 16-bit 44.1 kHz or 16-bit 48 kHz, so even if you output at a higher sample rate, you won't hear any change in audio quality.

Also note, some speakers can't play back higher sample rates properly.
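The reasoning above rests on a rule of thumb worth making explicit (the Nyquist limit, which the posts don't name): a given sample rate can only capture frequencies below half that rate. A quick sketch:

```python
def nyquist_limit(sample_rate_hz):
    """Highest frequency a given sample rate can represent (half the rate)."""
    return sample_rate_hz / 2

for rate in (44100, 48000, 96000, 192000):
    print(rate, "Hz ->", nyquist_limit(rate), "Hz max")
# 44,100 Hz already covers the ~20,000 Hz ceiling of human hearing,
# which is why higher playback rates don't audibly improve 44.1/48 kHz sources.
```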
 

ulillillia

Distinguished
Jul 10, 2011
551
0
19,010
Sample rate is the number of samples (data points) per second of audio. In WAV files, this is stored in bytes 0x18 to 0x1B. Much above 44,100 (used with CDs) doesn't have much use outside editing.

To explain the effect this has, consider a pure tone of 16,000 Hz, near the upper limit of human hearing. At a 48,000 Hz sample rate, 3 samples per cycle are used to reproduce this: one at 0, the next at 0.866, and the third at -0.866. The waveform looks like a series of spikes. At a 96,000 Hz sample rate, 6 samples per cycle are used for the same pitch - 0, 0.866, 0.866, 0, -0.866, and -0.866 - giving the look of a series of mesas and valleys. At 192,000 Hz, 12 samples per cycle are used: 0, 0.5, 0.866, 1, 0.866, 0.5, 0, -0.5, -0.866, -1, -0.866, and -0.5. This looks like a natural sine curve. A lower-pitched waveform, such as 5,000 Hz, is quite accurately reproduced at 44,100 Hz. The higher settings are best used when you intend to edit the source. I typically use 48,000 Hz myself, even though I could use 192,000.
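The sample values listed above can be checked with a short script (just a sketch; it evaluates a unit sine wave at each sample instant for a 16,000 Hz tone):

```python
import math

def sample_tone(tone_hz, sample_rate, n_samples):
    """Return n_samples of a unit sine wave at tone_hz, sampled at sample_rate."""
    return [round(math.sin(2 * math.pi * tone_hz * n / sample_rate), 3)
            for n in range(n_samples)]

# One full cycle of a 16,000 Hz tone at each sample rate:
print(sample_tone(16000, 48000, 3))    # [0.0, 0.866, -0.866]
print(sample_tone(16000, 96000, 6))    # [0.0, 0.866, 0.866, 0.0, -0.866, -0.866]
print(sample_tone(16000, 192000, 12))  # 0, 0.5, 0.866, 1, 0.866, 0.5, 0, -0.5, ...
```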

Bit depth controls how many bits are used for each sample. CDs typically use 16-bit; 24-bit provides much finer amplitude resolution, which matters most for quiet, low-amplitude audio. 16-bit audio has a theoretical dynamic range of about 96 dB, while 24-bit audio has about 144 dB.
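Each extra bit doubles the number of amplitude levels, adding roughly 6.02 dB of dynamic range. These are standard figures for linear PCM, easy to verify:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20 * log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3  (CD audio)
print(round(dynamic_range_db(24), 1))  # 144.5 (24-bit audio)
```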

By changing the sample rate without changing the waveform, you can change the speed of your audio. Cut the sample rate in half and the song plays at half speed and 12 semitones lower in pitch. Use "TargetSpeed * TrueSpeedSampleRate = TargetSampleRate" to find the sample rate for any speed you want. For the pitch change this causes, use "log(TargetSampleRate / TrueSpeedSampleRate) / log(2) * 12", which outputs semitones.
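Those two formulas can be written out directly (a sketch; the function names are my own):

```python
import math

def target_sample_rate(true_rate, target_speed):
    """Sample rate needed to play the audio at target_speed times normal speed."""
    return target_speed * true_rate

def pitch_shift_semitones(target_rate, true_rate):
    """Pitch change, in semitones, caused by resampling without altering the waveform."""
    return math.log(target_rate / true_rate) / math.log(2) * 12

print(target_sample_rate(44100, 0.5))       # 22050.0
print(pitch_shift_semitones(22050, 44100))  # -12.0 (one octave down, as described)
```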

Although 1.5 million Hz is playable, this puts considerable demand on the system, which has to convert it in real time to something the sound card can handle. 192,000 Hz is typically the upper limit for sound cards (scientific research, such as studying bat sonar, may call for 384,000, though I'm unsure of that).

I'm not an expert with audio, but I work with audio regularly from video editing.

Edit #1: expanded details on speed changes from changing sample rate.

Edit #2: Added details on how bit depth affects audio.
 