Author Topic: audiofools...maybe not so much  (Read 39222 times)


Offline linux-works

  • Super Contributor
  • ***
  • Posts: 1999
  • Country: us
    • netstuff
Re: audiofools...maybe not so much
« Reply #50 on: February 08, 2014, 04:40:15 am »
I was demo'ing some audio gear (including spkrs) at a show and the spkrs were from someone else.  he told me that the red/black posts may not be right.  I connected it and used the spkrs anyway.

one guy came by and said he thought the absolute polarity was wrong.  I don't believe he was there when the spkr owner explained that to me.

You're destroying the bass by connecting them in anti-phase. I'm sure a lot of people won't notice, but it's not something you need golden ears to recognize.

I don't think that's correct.

reversed absolute polarity will not have the initial 'push' at you, but one half cycle later, it will!

what you are probably thinking about is one spkr being correct and the other being out of phase.  in THAT case, bass will cancel, but if they are both using the same polarity, the difference is really only on attack and only on the first cycle.  it's something very few people can truly detect with reliability.
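a minimal numpy sketch of the difference (summing the two channels is only a crude stand-in for how the two speaker outputs combine at the listening position, but it shows the cancellation):

Code:
import numpy as np

fs = 44100
t = np.arange(fs) / fs                # one second of time
bass = np.sin(2 * np.pi * 60 * t)     # 60 Hz tone, identical in L and R

def rms(x):
    return np.sqrt(np.mean(x ** 2))

left = bass
right = bass

print(f"normal hookup:  {rms(left + right):.3f}")   # ~1.414, bass present
print(f"both inverted:  {rms(-left - right):.3f}")  # ~1.414, bass still present
print(f"one inverted:   {rms(left - right):.3f}")   # ~0.000, bass cancels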

Offline PetrosA

  • Frequent Contributor
  • **
  • Posts: 625
  • Country: us
Re: audiofools...maybe not so much
« Reply #51 on: February 08, 2014, 05:33:32 am »
Some thoughts I had reading through this thread:

I read some time back that testing was done on musicians to see if they have "better ears" than non-musicians. One of the things the research uncovered is that musicians have a much stronger ability to "fill in" missing sounds when listening to music on bad/cheap speakers. As a result, many musicians don't buy super high-end stereo equipment for their homes.

Someone wrote, and others suggested, that "a scope will 'hear' everything." Yes and no. Human beings experience sound with their entire body. Each cavity resonates at a certain frequency, and as we age and change, our perception of sound will change as well. We have stomachs, lungs, sinuses, skin, etc. that all react to sound waves of different frequencies, and they react differently depending on whether you've eaten or not, whether you have a cold or not, and so on. In my younger days I walked out of a number of concerts because the bass drums were so strong that I was getting queasy from the thumping in my torso. A scope can only show you what its inputs record, so unless you can figure out a way to deliver all the sensory information directly from a human brain into a scope, you're only getting part of the picture, usually limited to what a microphone can detect.
I miss my home I miss my porch, porch
 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #52 on: February 08, 2014, 08:41:13 am »
What I said is that good stereo can replicate everything 5.1 can. I've heard some pretty damn amazing stereo...
I've heard amazing stereo and amazing multichannel, too. A stereo can only replicate the front field; it's like standing in the entrance of a concert hall with the doors open to the actual hall, vs. being inside the hall (multichannel). For some music, like club jazz, the front field can be enough. And in rock, no one cares about trying to reproduce the concert hall reverberation. But stereo can't hold a candle to equal-quality multichannel sound, especially with classical music.
http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #53 on: February 08, 2014, 09:57:46 am »
A scope will "hear" everything.
Just to clarify: a scope is utterly useless in audio engineering, other than for troubleshooting. Healthy human hearing is at least 1000x better than any scope I know of. But we do have microphones that are more sensitive or can handle more sound pressure than a human, and we have audio analyzers that can put a number to a variable and, in many respects, measure smaller details than we might be able to hear. So "measurements" is a better word.

Quote
There are companies that use solid science to make good-sounding devices (such as Dolby), and there are industries where quality is paramount and professionals who can hear a channel out of phase (sound technicians are generally an object of my admiration).
A channel out of phase is a blatantly obvious error, btw.
Quote
But all this has nothing to do with audiophilia, which has absolutely no relation to reality, or anything measurable.
A very insulting generalization. Sure, there are con men in the industry and fools to be fooled; there always are, wherever money moves. However, in my experience most of the companies really do solid engineering and research, and many consumers do know what they are talking about.

#2: Real-life situations are very seldom blind. And the hearing sensation is real, even if it might only partly come from the movement of air molecules. We know (as proven, scientific facts) that color TVs don't sound as loud as B&W TVs but they have better sound; black speakers sound better than red ones; big speakers have better bass, etc.
Lol. When I pay for audio equipment, I am interested in sound quality, not in how its looks (and price) will affect my subjective perception of the sound. If hi-fi stuff was marketed honestly, like saying "this black mahogany supertextured finish and the impressive size of the unit will be a highlight of your living room", I'd say it's okay. But instead they spout shit like "our scientifically engineered mahogany finish absorbs subspace particle resonations and evens out the warp field for an optimal listening experience."
Lol yourself: which is the fool, the one who understands what is going on and can be wary of (or appreciate) non-technical issues, or the one who thinks he's immune to things no human is? And as for your marketing language examples, the former sounds much more familiar than the latter. There are con men, but not that many. Sounds like you believe a spec sheet tells you everything there is to know about a car?  >:D
Quote
Are you by any chance involved in selling this snake oil stuff? You sound awfully apologetic. :P
Not apologetic; I try to be realistic. But yes, I have experience in designing high quality audio stuff; no snakes harmed! Are you by any chance running a store selling run-of-the-mill "hifi"?

Quote
- Measurements: I'm an engineer, so I believe in measurements. I also believe that we don't have enough knowledge about how humans hear to know what to measure. Besides, I haven't yet seen any audio equipment with different designs that measure equally(!).
There are lots of honest companies who have put in years of research into finding out exactly what to measure. Dolby, once again... And we have practical and really cool stuff to thank that research for, for example mp3 compression. ;)
Sure, we know a lot. I tried to point out that we don't know it all. At the moment, Dolby (and others) are hiring more researchers to find out more; they are not closing the research department because everything has been found out.

To take a very, very simple example: it is known and accepted that channel separation in stereo equipment has an effect on sound field reproduction (and that is why most reviews measure it). However, we don't know precisely how: I can measure the symmetry of the leakage (is L to R the same as R to L) and the distortion and frequency response of the leaked signal very accurately. But we don't know what effect those details might have on the perceived sound. (And that is why you seldom see detailed channel separation specs.)

One time, in an informal A-B test (we knew what we were listening to, but the point was to find out if there was a difference or not) we had two otherwise identical devices (or with changes that we thought should be inaudible); same basic design and parts, slightly different circuit topology: one had better but uneven channel separation, the other was worse in numbers, but even across frequency. I thought I might have heard a minor difference; a colleague was sure there was one. But which one was better? We couldn't say, and the industry does not have much research nor a consensus on the subject either. And did I mention that channel separation is about as simple as it gets?

Most of the issues are much more complex.
Quote
I doubt the ones selling the magical $10000 amps do the science.
You'd be surprised. You'd be very surprised.
Quote
Plus, for an engineer, you're making far too many excuses not to trust measurements. I'm sensing an ulterior (financial?) motive here.
Oh, I do trust measurements, I'm an engineer. But I'm experienced enough to know that I don't know it all. As said, no two devices measure equally, so I'm not surprised that there is an entire (if small) industry around minute differences. What is slightly surprising is that we don't have a clear correlation between the measured differences and perceived sound quality. Therefore we have endless debate about the merits of some equipment. Either we don't know what to measure, or human hearing is much more sensitive than we realize. Personally, I'd vote for both.

Quote
which I still believe should be inaudible by a wide margin. Still, the press (sometimes in comparison tests) gave comments about the other product's high end, like "Nothing is missing, but it is a bit smoother than X", far too many for me to believe those were random guesses.
Of course it's not random. The press is trying to "impress" its sponsors. ;) I'd like to see the same result in double blind tests. (That said, it's not impossible that such a minute difference is actually audible; after all, we're talking about a pretty specific, measurable difference.)
In this case, the reporters did not know that the products used the same circuit boards. At least one magazine had not opened the products, and I'm rather sure the others did not find out either, otherwise they would surely have written about it. I believe the reporters genuinely heard a difference. What I find perplexing in this anecdote is that the difference should be inaudible by a wide margin. I think that on the "keeping up the illusion" level, minute details matter, even if you might not be able to pinpoint what they are. Certainly, 0.1dB at 20kHz is inaudible as a frequency response deviation, but in the right conditions, it seems to be enough to trigger the brain between "this [is]/[is not] really real". Or they wrote what they wrote to fill the pages - but I don't think so.

You can see all there is to see from most windows, just like you can hear all there is to be heard with most hifi. But it takes an incredibly good and clean window to make you believe there is no glass.
http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

Offline scientist

  • Frequent Contributor
  • **
  • !
  • Posts: 317
  • Country: 00
  • User banned.
Re: audiofools...maybe not so much
« Reply #54 on: February 08, 2014, 08:29:25 pm »
:palm:
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: audiofools...maybe not so much
« Reply #55 on: February 09, 2014, 05:52:07 pm »
Just to clarify: a scope is utterly useless in audio engineering, other than for troubleshooting. Healthy human hearing is at least 1000x better than any scope I know of.

I agree that an 8-bit DSO misses the subtleties of an audio signal, but given that the source media is usually a stream of 16-bit samples at 44,100 samples per second (i.e. CD), there is only a limited set of information in the source.

I had half written a full reply, but then decided that leaving just a few links would be sufficient:

http://forums.audioreview.com/cables/best-optical-cable-you-have-found-25693.html
Quote
No, it doesn't sound weird because I can hear the difference and as my post said, there HAS to be something lost in the optical cable.


Contrast with http://archimago.blogspot.co.nz/2013/05/measurements-toslink-optical-audio.html - actual measurements of actual cables.

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9032
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: audiofools...maybe not so much
« Reply #56 on: February 10, 2014, 01:02:47 am »
Those audiofools who think coax has better bandwidth than fiber really are audiofools. Why would networking companies spend a fortune on fragile fiber cables if they could just use RG6? I was recently a contractor at a networking company, and it's no surprise that fiber will easily go much further than any kind of copper cable. All the high speed copper runs were restricted to a few feet, while fiber does hundreds or thousands of feet no problem, even at 10Gbps.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline BravoV

  • Super Contributor
  • ***
  • Posts: 7547
  • Country: 00
  • +++ ATH1
Re: audiofools...maybe not so much
« Reply #57 on: February 10, 2014, 01:08:54 am »
Those audiofools who think "feel" coax has better bandwidth than fiber really are audiofools.

There is your mistake, corrected, they just don't do that.  :-DD

Offline linux-works

  • Super Contributor
  • ***
  • Posts: 1999
  • Country: us
    • netstuff
Re: audiofools...maybe not so much
« Reply #58 on: February 10, 2014, 02:46:41 am »
Those audiofools who think coax has better bandwidth than fiber really are audiofools. Why would networking companies spend a fortune on fragile fiber cables if they could just use RG6? I was recently a contractor at a networking company, and it's no surprise that fiber will easily go much further than any kind of copper cable. All the high speed copper runs were restricted to a few feet, while fiber does hundreds or thousands of feet no problem, even at 10Gbps.

because, well, REALTIME AUDIO is not packet data!

jitter does matter (to some degree) in digital audio over spdif and over i2s.  otoh, a packet audio format (like UAC2 audio, asynch usb audio class 2, which is a 'pull' data model instead of the usual 'push') does not suffer due to line jitter.  in UAC2, the clock source is not the pc, it's the DAC, and there is buffering and windowing and reliability due to the packet and buffer model (and that data is ack'd, unlike spdif).

but with spdif, yes, you can have jitter over different wires and it's then up to the spdif receiver chip to filter that out.  many modern ones can, but you can still argue it's better not to have jitter on the line to start with.

tl;dr: bits is not bits when timing is embedded.
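a toy model of that 'pull' idea (emphatically not the real usb audio class 2 protocol, just the feedback principle): the dac drains a fifo on its own fixed clock and asks the host for more or fewer samples per packet, so packet arrival timing never appears anywhere in the conversion timing:

Code:
FS = 48000             # DAC sample rate; the only clock that matters here
NOMINAL = FS // 1000   # nominal samples per 1 ms packet
TARGET = 2 * NOMINAL   # keep ~2 ms of audio buffered
fifo = TARGET - 10     # start slightly off-target to show the feedback work

for ms in range(1000):
    # feedback: ask for one sample more/fewer when the FIFO drifts
    if fifo < TARGET:
        request = NOMINAL + 1
    elif fifo > TARGET:
        request = NOMINAL - 1
    else:
        request = NOMINAL
    fifo += request    # packet arrives; WHEN it arrives doesn't matter
    fifo -= NOMINAL    # the DAC drains at its own fixed rate

print(f"FIFO after 1 s: {fifo} samples (pinned to the {TARGET}-sample target)")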

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: audiofools...maybe not so much
« Reply #59 on: February 10, 2014, 03:56:51 am »
Hi AmpDoctor:

Would you agree with this statement from http://members.chello.nl/~m.heijligers/DAChtml/PLL/PLL1.htm :

Quote
It is reported that timing errors in the order of 100 ps are perceptible [Watk94]. Some others claim jitter to be audible down to the few picosecond range, also depending upon the spectral content of the jitter.

Although it sounds plausible, it is untrue. If you are using a CD for a source, a full range 20kHz signal will slew at most about 0.3 LSB in 100ps.  Jitter would need to be three times this level, and also be periodic (unlike the truly random noise one would expect), and this could at best inject a -96dB signal into the audio output.

Have a look at page 8 of http://peufeu.free.fr/audio/extremist_dac/files/jitter92.pdf, where the jitter used was 200x what it is claimed some people can hear - a -20dB 20kHz tone is 'corrupted' with a perfectly un-random 20ns of jitter, resulting in a tone of about -84dB being injected into the sound.

And 20ns is very, very poor. Even I can engineer to < 10ns jitter without performing any PLL-based clock recovery. Having implemented S/PDIF interfaces and injected random errors just for fun, I can assure you that even if a bit error occurs only once in a million events, it is still very obvious as clicks - and recovery of the data at 2.8MHz isn't that hard. Checksumming and acking data packets is also a crock - DVI-D works at 500x that speed with no feedback, no error correction, and I don't see any random pixels flashing on my screen. Heck, fibre channel pushes 16Gb/s down a fiber, checksums all the packets, and I've seen port error counts sit at zero for days on end.
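Both figures check out on the back of an envelope; a quick sanity check, assuming 16-bit/44.1k source material, reading the paper's 20ns as peak-to-peak sinusoidal jitter, and using the standard small-angle sideband approximation:

Code:
import math

# 1) how far can a full-range 20 kHz CD signal slew in 100 ps?
A = 32767                       # peak amplitude in LSBs, 16-bit full range
f = 20e3                        # signal frequency, Hz
max_slew = A * 2 * math.pi * f  # LSB/s at the zero crossing
print(f"slew in 100 ps: {max_slew * 100e-12:.2f} LSB")  # ~0.4 LSB, same ballpark

# 2) sideband level for sinusoidal jitter: 20*log10(2*pi*f*Tj_peak/2) dBc
Tj_peak = 10e-9                 # 20 ns peak-to-peak -> 10 ns peak
rel = 20 * math.log10(2 * math.pi * f * Tj_peak / 2)
print(f"sideband: {rel:.0f} dB below the tone")          # ~ -64 dBc
print(f"for a -20 dB tone: {rel - 20:.0f} dB absolute")  # ~ -84 dB, as cited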

These statements about hearing <1 ns jitter are as silly (and common) as saying that a higher quality HDMI cable will give you richer colours and better sound.

I better stop now and do something productive...
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #60 on: February 10, 2014, 08:37:21 am »
A few notes about jitter:

- First and foremost: as linux-works said, bits are not bits when timing is embedded: a "digital" audio signal is actually half analog. We are not talking about data retrieval, which is indeed trivial; besides, any data errors would mean a non-working system. We are talking about the purity of the timing of the samples going out. There, the unit is indeed picoseconds.
These statements about hearing <1 ns jitter are as silly (and common) as saying that a higher quality HDMI cable will give you richer colours and better sound.
You are mixing apples and oranges: for a digital display, the jitter needs to be small enough that the pixel data in the signal hits the correct pixel in the display (a huge simplification, but you get the point). For "digital" audio, the "pixel" has to hit the exact center.

- The audibility depends very much on the receiver circuitry, and maybe even more on the DAC chip. I've measured chips where I have no doubt that a few tens of picoseconds would be audible, maybe even single digits. Ok, not lately: now that jitter is a known issue, designers know what they are fighting and how. I have also measured products where 10ns (which is huge, about the limit of losing sync) does not make a measurable change in the output. So we should expect a wide range of opinions and experiences. (The link where somebody had measured different cables did show very minor jitter differences, but apparently the receiver/DAC was rather good.) Any anecdotes and measurements are valid for that particular system only.

- Coax has indeed much more bandwidth in consumer stuff than optical. The limiting factor is not the cable, it is the receiver module. The comparison to telecom is irrelevant: the telecom companies use a bit more money in their systems than your typical CD manufacturer, like 9 cents vs. $900 for the receiver component.

- The differences in optical cables are mainly in the mechanical tolerance of the connectors, i.e. how well the light hits the sensor. The purity of the plastic might have an effect too, but I haven't done much experimenting here (see the last point). And sadly, the tolerance of cheaper receiver components is meaningful. Again, a particular brand of cable with a particular brand of player and DAC can give different results, depending on the individual parts.

- Last: jitter is an issue, but in high quality equipment, not a very big one nowadays. Just about any manufacturer aiming for quality gets that part right. On the other hand, just about any manufacturer aiming for low price cuts some corners, and there are issues. But somebody buying stuff by price, by definition, doesn't care that much. Besides, the other equipment likely isn't that great either and there are bigger issues with sound quality. Therefore, jitter isn't that big a deal there either.
http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

Offline ConKbot

  • Super Contributor
  • ***
  • Posts: 1386
Re: audiofools...maybe not so much
« Reply #61 on: February 10, 2014, 12:04:34 pm »
Note to self: start selling various dielectric coaxes and SPDIF fibers to audiophiles.   I wonder what sounds better, pure teflon? Polyethylene? Foamed PE? Maybe the people with real taste would buy into an air-core coax?  Oooohh, and after that I can start selling dry N2 generators to purge the heliax, and also prevent oxidation.
And if someone prefers optical, I guess I could sell SPDIF to multimode ST converters so you can use real glass fiber. Maybe a coax SPDIF to single-mode media converter  :D
guess who's retiring early boys  8)

 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #62 on: February 10, 2014, 12:07:34 pm »
You are on to something... ;)
http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

Offline linux-works

  • Super Contributor
  • ***
  • Posts: 1999
  • Country: us
    • netstuff
Re: audiofools...maybe not so much
« Reply #63 on: February 10, 2014, 03:59:05 pm »
I was over at a friend's house who was doing some research on various coax cables, testing for jitter.  he ran what is known as 'jtest' (see the DIYaudio forum) and we really could see the differences (on a spectrum analyser, pc software) between various coax types.  I saw it with my own eyes and I know this was not a faked setup.

now, whether those jitter peaks will affect modern spdif receiver chips, that I don't know.  10 or 20 yrs ago, yes, they would.  now, with reclocking receivers (such as the wolfson) it's not really an issue anymore.  without a reclocker at the rx side, the cable would start to matter again.

now, I refuse to believe that usb cables would matter at all, ever.  from usb to the dac is always a packet exchange, it's not a one-way pipe like spdif is.  the timing is derived from the pc (and not the dac, if UAC1 style) but still, a whole packet of data has to be received by the dac before it's burst out, and so it does have local buffering.  timing is controlled by the pc, but the per-bit timing during the exchange is not relevant.

when I started in digital audio, maybe 25 yrs or so ago, I was into DAT audio recording and a bunch of us were working on a digital copy box that would let us transfer spdif from one DAT to another, without that stupid scms nonsense getting in the way.  we defeated it, and then there was talk about how much 'loss' there would be if you took a DAT tape and made 100 copies (bit for bit) from machine to machine.  would jitter accumulate in this situation?

no, it would not.  the bit-time is not relevant until it's turned into analog.  if you are going digital to digital, it really is 'bits == bits'.  as long as you can recover all the bits (ie, no harsh bit losses) then only the final stage of dat PLAYBACK would matter.  the 100 or so back-to-back copies would not change the quality of the audio.
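the copy-chain point is easy to demonstrate: as long as each generation recovers the bits, timing noise between machines never touches the data, so copy 100 is bit-identical to the original.  a trivial sketch, with random bytes standing in for the audio samples:

Code:
import hashlib
import os

original = os.urandom(1 << 16)    # stand-in for the audio data
tape = original
for generation in range(100):
    # each 'machine' re-latches the bits; inter-machine jitter has no
    # place to hide in a byte string
    tape = bytes(tape)

assert hashlib.sha256(tape).digest() == hashlib.sha256(original).digest()
print("100 generations later: bit-identical")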


Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: audiofools...maybe not so much
« Reply #64 on: February 10, 2014, 04:32:05 pm »
I think the argument goes that there is some relationship between jitter at the SPDIF receiver, and jitter at the pin of the DAC which actually causes audio data to be updated.

You're quite right that only that very last SPDIF link could ever possibly have an impact on the audio quality, assuming that previous steps didn't actually malfunction (ie. introduce bit errors). Up until that point, there simply is no timing information to use or maintain. If, for example, I play a WAV file, it'll sound exactly the same as if I email you a copy of it and you play it on identical equipment.

It may well be an issue in some DAC designs, if (and only if) they attempt to directly recover a 'master' 44.1kHz clock (or some multiple thereof) from the SPDIF data stream, and actually connect that clock to the DAC chip itself - thereby providing a route whereby timing issues at the SPDIF input can have an influence.

It's not a very good design technique, though, and its implications (both real and fictitious) have been argued to death on audiophool forums.
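For a feel of why that recovered-clock route matters: to input jitter, a simple PLL behaves as a low-pass filter on phase, so wander below the loop bandwidth passes straight through to the DAC clock while fast jitter is attenuated. A first-order sketch (the 1 kHz loop bandwidth is an illustrative guess, not a figure from any particular receiver):

Code:
import math

def jitter_gain(f_jitter_hz, loop_bw_hz):
    """|H| of a first-order low-pass on phase: passes slow wander, rejects fast jitter."""
    return 1.0 / math.sqrt(1.0 + (f_jitter_hz / loop_bw_hz) ** 2)

loop_bw = 1e3    # assumed loop bandwidth, Hz
for fj in (10, 100, 1e3, 1e4, 1e5):
    g = jitter_gain(fj, loop_bw)
    print(f"{fj:8.0f} Hz jitter -> {20 * math.log10(g):6.1f} dB at the DAC clock")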

Offline Sigmoid

  • Frequent Contributor
  • **
  • Posts: 488
  • Country: us
Re: audiofools...maybe not so much
« Reply #65 on: February 10, 2014, 05:41:08 pm »
- Coax has indeed much more bandwidth in consumer stuff than optical. The limiting factor is not the cable, it is the receiver module. The comparison to telecom is irrelevant: the telecom companies use a bit more money in their systems than your typical CD manufacturer, like 9 cents vs. $900 for the receiver component.
Correct me if I'm wrong, but a digital signal is discrete in both the time and value domains. I'd expect each component of the signal chain to have its own clock, in order to correct any timing issues that each channel introduces.
So the only clock rate that really matters should be that of the DAC - unless of course a bit "skips a beat", which would naturally result in some kind of corruption.

For the stuff you're saying to make any sense, the time signal should be coming from the signal source (say, the rotation of the CD itself), and be propagated throughout the system up to the DAC.

Some audiophool designs do seem to imply this - like the precision drive CD players with just the mechanical parts costing several grand... So while I'm not saying this is impossible, it sounds like an extremely stupid design, and I cannot imagine anyone seriously designing an electronic disaster like this.

If someone actually did build it, then it's a vastly inferior design for a vastly superior price.

- The differences in optical cables are mainly the mechanical tolerance of the connectors, i.e. how well does the light hit the sensor. The purity of the plastic might have an effect too, but I haven't done much experimenting here (see last point). And sadly, the tolerance of cheaper receiver components is meaningful. Again, a particular brand of cable with a particular brand of player and DAC can give different results, depending on the individual parts.
These things start becoming an issue when the signal starts to break down. The whole idea of digital is that it's channel-agnostic. The only time you notice the channel is when the channel's all fscked up. Unlike in analog, in digital, there's no difference (NO difference) between a perfect and a good enough channel.
So unless that slightly misaligned sensor results in bit errors, it shouldn't matter at all. And if we have bit errors, we aren't talking about quality anymore, we're talking about bit error squeaks and blips in the audio.

Unless, of course, see above. In which case, the designer is clearly an idiot.

I read some time back that testing was done on musicians to see if they have "better ears" than non-musicians. One of the things the research uncovered is that musicians have a much stronger ability to "fill in" missing sounds when listening to music on bad/cheap speakers. As a result, many musicians don't buy super high-end stereo equipment for their homes.
Yep. Well, it's basically a question of musical literacy: whether you're interested in the music (the pattern of code composed of tempo, volume, chords and intervals), or in the holistic experience of the record, with background noise, reverb, etc.
Being a purist, I prefer the former, but do understand the latter too. That said, I think the vast majority of the high-end, $1000+ pricetag industry isn't really approaching that goal head on, and is instead lost in chasing ghosts and squeezing money out of select schlimazels.
« Last Edit: February 10, 2014, 06:06:25 pm by Sigmoid »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: audiofools...maybe not so much
« Reply #66 on: February 10, 2014, 06:08:53 pm »
You're absolutely correct, except that in some designs, the timing of the transitions in the SPDIF signal is used to derive the clock which actually controls the timing of the updates to the DAC.

In such designs, the SPDIF signal is being used in two ways. The first is as a carrier of digital information, in which case bits == bits, and as long as the interface is error-free, it's doing its job perfectly well. A USB, Ethernet or wireless connection could do exactly the same.

The issue comes when that same signal is used as a source of timing information. Then, depending on how the circuit works, there may be a route by which timing jitter in the SPDIF stream - which wouldn't in any way affect the correct recovery of the data bits - does nevertheless affect the stability of the recovered clock.

The distinction between the two is what confuses some audiophools.

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: audiofools...maybe not so much
« Reply #67 on: February 10, 2014, 08:52:49 pm »
The issue comes when that same signal is used as a source of timing information. Then, depending on how the circuit works, there may be a route by which timing jitter in the SPDIF stream - which wouldn't in any way affect the correct recovery of the data bits - does nevertheless affect the stability of the recovered clock.

Can you expand a bit more because I must be missing something?

Here's what I know for sure - the S/PDIF packet for stereo is 64 bits long, and has headers every 32 bits. The header for the channel A data word is 00010111 (and will always be followed by a 0) or its inverse 11101000 (and will always be followed by a 1). The timing of the final transition can be used to recover the clock very accurately, and used to present the previously received samples to the DAC.

The thing that amazes me is that audio travels 330 micrometers per microsecond, so even if the system was perfect except for jitter wider than an entire S/PDIF bit (300ns, so bad that the system couldn't actually work), the difference would only be equivalent to moving your ears 100 micrometers at 20Hz - about half the width of a human hair. When my S/PDIF project rebuffers and sends it off to a DAC with < 10ns of jitter, it is equivalent to 3.3 micrometers of head movement!
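Spelling that head-movement arithmetic out, with sound at roughly 330 m/s:

Code:
v_sound = 330.0   # m/s, approximately
for label, jitter_s in (("one whole S/PDIF bit (~300 ns)", 300e-9),
                        ("rebuffered output (< 10 ns)", 10e-9)):
    meters = v_sound * jitter_s
    print(f"{label}: {meters * 1e6:.1f} micrometers of head movement")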

Unless highly periodic, jitter will result in unwanted high frequency noise, the majority of which will be above 20kHz and at a very low level. Exactly how such high frequency components get past the anti-aliasing filter, through the speaker's tweeter and into your ears - which themselves are designed to receive wavelengths from about 16m (20Hz) to 16mm (20,000Hz) - is also a mystery to me (i.e. I can not see any valid physical mechanism).
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: audiofools...maybe not so much
« Reply #68 on: February 10, 2014, 09:24:20 pm »
I don't think you're missing anything, and I completely agree that the amount of jitter which could possibly exist in a working system equates to tiny errors in terms of physical movement.

For the sake of completeness, though, it's worth also considering the amplitude error that 10ns of jitter could be equivalent to. The greatest possible slew rate between one sample and the next is a change of 65536 counts in 1/44100 sec, which is about 2.89 billion counts per second, so that 10ns timing error equates to 28.9 counts. That's actually not quite so insignificant IMHO, and I can certainly believe it might be audible.
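The same arithmetic, spelled out:

Code:
counts_per_second = 65536 * 44100          # worst-case slew, ~2.89e9 counts/s
error_counts = counts_per_second * 10e-9   # counts traversed in 10 ns
print(f"worst-case slew:    {counts_per_second:.3g} counts/s")
print(f"10 ns timing error: {error_counts:.1f} counts")  # ~28.9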

As to what it sounds like, the only specific symptom I've ever heard mentioned is a negative effect on stereo imaging - which seems plausible enough, I guess.

Offline Rufus

  • Super Contributor
  • ***
  • Posts: 2095
Re: audiofools...maybe not so much
« Reply #69 on: February 10, 2014, 09:27:41 pm »
The thing that amazes me is that audio travels 330 micrometers per microsecond, so even if the system was perfect except for jitter wider than an entire S/PDIF bit (300ns, so bad that the system couldn't actually work), the difference would only be equivalent to moving your ears 100 micrometers at 20Hz - about half the width of a human hair. When my S/PDIF project rebuffers and sends it off to a DAC with < 10ns of jitter, it is equivalent to 3.3 micrometers of head movement!

Consider a signal that is a square wave at half the sample rate. Then alternately jitter the clock +/- a quarter of a bit time (half a bit peak-to-peak). The output is 1.5 clocks high and 0.5 clocks low. That jitter introduced a 1/2 maximum amplitude dc level on the output and a lot of high frequency harmonics.  Jitter isn't a time or phasing problem; it aliases with the content to modify it.
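A numeric check of that worst case: a zero-order-hold output whose update instants land alternately a quarter period early and late, so each high sample is held for 1.5T and each low for 0.5T:

Code:
import numpy as np

T = 1.0                                        # sample period
n = np.arange(1001)
jitter = np.where(n % 2 == 0, -T / 4, +T / 4)  # alternate early/late updates
edges = n * T + jitter                         # jittered DAC update instants
levels = np.where(n % 2 == 0, 1.0, -1.0)       # square wave at fs/2

durations = np.diff(edges)                     # how long each level is held
dc = np.sum(levels[:-1] * durations) / np.sum(durations)
print(f"DC offset: {dc:.3f} x full amplitude")  # ~0.500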

 

Offline DrGeoff

  • Frequent Contributor
  • **
  • Posts: 794
  • Country: au
    • AXT Systems
Re: audiofools...maybe not so much
« Reply #70 on: February 10, 2014, 09:33:51 pm »
Now compare all that with what is actually used to record and encode the source material...
Was it really supposed to do that?
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: audiofools...maybe not so much
« Reply #71 on: February 10, 2014, 10:04:29 pm »
Consider a signal that is a square wave at half the sample rate. Then alternately jitter the clock +/- a quarter of a bit time (half a bit peak-to-peak). The output is 1.5 clocks high and 0.5 clocks low. That jitter introduced a 1/2 maximum amplitude dc level on the output and a lot of high frequency harmonics.  Jitter isn't a time or phasing problem; it aliases with the content to modify it.

That is completely true - but such a signal would never be fed into the DAC or ADC in the first place. Apart from being bang on the Nyquist limit of the system, it has lots of harmonics well over the Nyquist limit.  If you were to use a signal with realistic rise times, the effect of jitter is far less than your analysis hints at.

As jitter can be introduced at any stage, it was more an experiment of "if the jitter was introduced at the listener's head, by introducing random vibrations, what would it actually be like" - trying to get a physical appreciation for what is going on.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline edavid

  • Super Contributor
  • ***
  • Posts: 3385
  • Country: us
Re: audiofools...maybe not so much
« Reply #72 on: February 10, 2014, 10:16:17 pm »
For the sake of completeness, though, it's worth also considering the amplitude error that 10ns of jitter could be equivalent to. The greatest possible slew rate between one sample and the next is a change of 65536 counts in 1/44100 sec, which is about 2.89 billion counts per second, so that 10ns timing error equates to 28.9 counts. That's actually not quite so insignificant IMHO, and I can certainly believe it might be audible.

Think about what you are saying... that would be a 0.05% amplitude error in a signal which you can't hear anyway.


 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #73 on: February 11, 2014, 09:01:56 am »
Guys, do your research!

Point 1: Consider a simple SPDIF signal to a DAC. The DAC timing has to run in sync with the input. Not just at the same nominal rate, but in sync. If you think about it, there is no other way. (There are schemes where the DAC provides timing to the source, but those are rare, and if you have that, you know it.) Re-clocking etc. are just fancy names for advanced PLLs, multistage PLLs, whatever. But the timing comes from the incoming SPDIF signal. You can attenuate timing errors, but you have to live with the timing of the input signal.

Point 2: SPDIF is one wire. The timing rides on the bit edges(!). It uses biphase mark coding (see the sketch at the end of this post). To get the timing exactly right with that would require infinite bandwidth(!!).

Point 3: Jitter (noise in the DAC timing) is not like moving your head or the speaker; that is a false analogy. Jitter (usually) manifests as non-harmonic distortion, which the ear is very sensitive to.

--

Jitter is a very real issue. Ok, a known one; we can deal with it, and designers make the compromises between desired quality, price, complexity, etc.
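Since point 2 leans on it, here is a minimal sketch of biphase mark coding: the line toggles at every bit boundary, and a '1' adds an extra mid-bit toggle, so the receiver's PLL has nothing but edge times (an analog quantity) from which to recover the clock:

Code:
def bmc_encode(bits, level=0):
    """Biphase mark coding: two half-bit line levels per data bit."""
    out = []
    for b in bits:
        level ^= 1             # mandatory transition at every bit boundary
        first = level
        if b:
            level ^= 1         # a '1' also toggles in the middle of the bit
        out += [first, level]
    return out

print(bmc_encode([1, 0, 1, 1, 0, 0]))
# -> [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1]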
http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

Offline JuKu

  • Frequent Contributor
  • **
  • Posts: 566
  • Country: fi
    • LitePlacer - The Low Cost DIY Pick and Place Machine
Re: audiofools...maybe not so much
« Reply #74 on: February 11, 2014, 09:47:29 am »
For the sake of discussion, let's ignore the rare and fancy systems where the final DAC runs the timing, giving the sync signal to the source. Those are rare, expensive, and if you have one, you know it. Almost no home system has that (although asynchronous USB DACs are becoming more common). This is for ordinary digital audio:
Correct me if I'm wrong, but a digital signal is discrete in both the time and value domains. I'd expect each component of the signal chain to have its own clock,
That basic assumption is wrong, and therefore most of your conclusions (and especially, your snorts about high-end products) are also without basis. Consider: each stage must give out samples at the exact rate they are coming in. Otherwise you have to start inventing samples (local clock is fast), which would be awful. Or, if the local clock is slow, you'll lose lip sync, music plays for a while after you hit stop on the player, and no matter how big your buffer, eventually you run out of room in the buffer and have to start throwing out data. That is not good either. You have to run each stage in sync with the incoming signal.
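Putting a number on that drift, with a hypothetical free-running local clock a mere 100 ppm off:

Code:
fs = 44100              # incoming sample rate, Hz
ppm = 100               # assumed local clock error
drift = fs * ppm / 1e6  # samples gained or lost per second
buffer_samples = fs     # a generous one-second buffer
minutes = buffer_samples / drift / 60
print(f"drift: {drift:.2f} samples/s")
print(f"a one-second buffer over/underruns after ~{minutes:.0f} minutes")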

Quote
For the stuff you're saying to make any sense, the time signal should be coming from the signal source (say, the rotation of the CD itself), and be propagated throughout the system up to the DAC.
And that is indeed the case. There is no way around it.

In reality, CD players have a small buffer and the timing comes from a local oscillator. (Which, in mediocre products, is polluted by having a motor in the same box and not taking adequate precautions to protect the purity of the oscillator.)
Quote
Unlike in analog, in digital, there's no difference (NO difference) between a perfect and a good enough channel.
That is a very common misunderstanding. ;) We are so used to data transmission that not many realize that digital audio relies on analog timing information carried along with it (on the bit edges).

I'm bored and there's not much to do before the courier brings in the new parts, so I have time: first, see http://en.wikipedia.org/wiki/Differential_Manchester_encoding, and look at the second picture. This is how the signal in SPDIF is transmitted. There is no separate clock signal; it is derived from the bit edges by a PLL. There is very good reading about jitter at http://nwavguy.blogspot.fi/2011/02/jitter-does-it-matter.html. Do look at least at the second image, which shows how the encoding and the limited bandwidth of the cable generate jitter. At the bottom are very real-life examples: a motherboard (truly awful), a couple of popular cheap external USB DACs (better, but still audible) and a $1600 DAC (something for the designer to be proud of). Btw: I'm not the NwAvGuy, and I have no affiliation with any of those products.
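In the same spirit as that second image, a small numpy experiment: biphase-mark-encode random bits, run the waveform through a crude one-pole low-pass standing in for cable/receiver bandwidth, and the threshold crossings move around depending on the preceding data - data-dependent jitter out of nothing but limited bandwidth. The filter constant is an arbitrary illustrative choice, deliberately slow enough that the line doesn't fully settle within a half-bit:

Code:
import numpy as np

rng = np.random.default_rng(0)
OS = 64                               # samples per half-bit
bits = rng.integers(0, 2, 400)

# biphase mark encode into half-bit line levels
levels, level = [], 0
for b in bits:
    level ^= 1                        # toggle at every bit boundary
    levels.append(level)
    if b:
        level ^= 1                    # extra mid-bit toggle for a '1'
    levels.append(level)
wave = np.repeat(np.array(levels, dtype=float), OS)

# crude one-pole low-pass standing in for cable/receiver bandwidth
a, acc = 0.03, 0.0
y = np.empty_like(wave)
for i, x in enumerate(wave):
    acc += a * (x - acc)
    y[i] = acc

# where does the filtered wave cross the 0.5 threshold, vs. the half-bit grid?
cross = np.nonzero(np.diff((y > 0.5).astype(int)))[0]
err = (cross % OS).astype(float)      # offset from the ideal boundaries
err[err > OS / 2] -= OS               # wrap to +/- half a cell
print(f"edge timing spread: {err.std():.2f} samples "
      f"(std dev, {err.std() / OS * 100:.1f}% of a half-bit)")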

http://www.liteplacer.com - The Low Cost DIY Pick and Place Machine
 

