Frequencies and muxing

Status
Not open for further replies.

MrChips

Joined Oct 2, 2009
30,706
This is false for linear systems. Only nonlinear systems will produce intermodulation distortion. The linear sum of multiple sine waves (or any other waveforms) is indeed perfect.


It's all in the physics; the math is a model.
Thanks for being pedantic.
So how are you going to mix (or add) two electrical signals without using electronic components?
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
The reason this works is because of the defining characteristic of waves, i.e., the physical property of waves to combine in linear superposition. That is, when two or more waveforms combine, they interfere with each other constructively and destructively to create a new waveform. Because this process is linear, the component waveforms retain their individual properties (except in the degenerate case of perfect cancellation).

As a simple analogy, bricks have a similar property, though they only combine constructively. If we build a house with bricks, each individual brick retains its properties in the combination (the house). Mathematically, we say that combining bricks is an arithmetic sum (always constructive interference), whereas combining waves is an algebraic sum (constructive and destructive interference).

There's nothing surprising about bricks retaining their "information" in the construction of a house, right? It's the same principle with waves, but since waves can interfere destructively, the resulting "house" doesn't look much like the individual components. But there's nothing spooky going on -- just arithmetic with positive and negative values instead of just positive values.
Right, and I get that with respect to something such as a mono-timbral signal, where we can apply filters and isolate individual "notes", like that Melodyne software does.

But the information retention part is what I'm stuck on.
They're not. If the frequency components of the 900 different instruments are interfering (mixing) with each other, then we will never be able to perfectly recover the individual instruments. This is why multiplexing systems must separate the bandwidths of each channel -- if two channels shared the same bandwidth, we'd have no way of knowing which part belonged to which.
Right! But in the case of a single coax with a wave comprising 900 different timbres, we have a demux on the receiving end which separates the subcarrier frequencies for each conversation, correct? 1 kHz carries convo 1, 2 kHz carries convo 2, 3 kHz carries convo 3.

On convo 1 someone’s playing a sax, on convo 2 someone is playing a piano, on convo 3 a dog is barking.

Each convo has LOADS of associated overtones to make up those timbres. When we demux the 3 convos, all timbre-defining overtones are “tagging along” losslessly and discretely to the subcarrier from 1 single aggregate wave?
 

bogosort

Joined Sep 24, 2011
696
Thanks for being pedantic.
So how are you going to mix (or add) two electrical signals without using electronic components?
Pedantic? What you claimed is false. Put two sine waves through an RC network and tell me: how many frequency components are in the output?
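If it helps to make that concrete, here's a rough numpy/scipy sketch of the experiment (my own toy illustration, not anyone's bench setup -- the 300/400 Hz tones, sample rate, and 1 kHz cutoff are all made up): sum two sine waves, push them through a first-order low-pass standing in for the RC network, and look at the output spectrum.

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000                                 # sample rate, Hz (arbitrary)
t = np.arange(0, 1.0, 1 / fs)              # one second of signal
x = np.sin(2*np.pi*300*t) + np.sin(2*np.pi*400*t)   # linear sum of two tones

# first-order low-pass (-3 dB at 1 kHz) standing in for an RC network
b, a = butter(1, 1000, fs=fs)
y = lfilter(b, a, x)

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(np.unique(np.round(freqs[spectrum > 0.01 * spectrum.max()])))
# -> [300. 400.]  the network scales and phase-shifts the tones, but adds nothing new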
 

MrChips

Joined Oct 2, 2009
30,706
You are correct if we can mix the two signals with a perfectly linear mixer?
What happens if the mixer is not linear?

Besides, when I play two musical notes, how come I can hear a third note?
 

bogosort

Joined Sep 24, 2011
696
Right! But in the case of a single coax with a wave comprising 900 different timbres, we have a demux on the receiving end which separates the subcarrier frequencies for each conversation, correct? 1 kHz carries convo 1, 2 kHz carries convo 2, 3 kHz carries convo 3.

On convo 1 someone’s playing a sax, on convo 2 someone is playing a piano, on convo 3 a dog is barking.

Each convo has LOADS of associated overtones to make up those timbres. When we demux the 3 convos, all timbre-defining overtones are “tagging along” losslessly and discretely to the respective subcarriers from 1 single aggregate wave?
You're missing the most important part -- each conversation has to have its own bandwidth, and all of the overtones in the conversation must lie within that segregated bandwidth. Otherwise, there's no way for the demux to reliably separate the conversations.

The opposite thing happens with a typical microphone recording. A microphone captures an orchestra and all of the harmonic information occurs within a single bandwidth. As such, we can't reliably separate the individual components. We can make guesses about which overtones belong to which instrument, but there's no way to do this that's intrinsic to the signal itself.
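To put a number on the "no way to know" part, here's a tiny numpy sketch (the signals are entirely invented) where a sax-ish and a voice-ish component both contain energy at 1 kHz. Once they land in the same band of one recording, the 1 kHz bin is a single combined value, and nothing in the recording says how to split it back up.

import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)

# two made-up components that share the same band (both have a 1 kHz partial)
sax   = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*2000*t)
voice = 0.7*np.sin(2*np.pi*1000*t + 1.0) + 0.3*np.sin(2*np.pi*1700*t)

mic = sax + voice            # one microphone, one shared bandwidth

# with a 1 s window, the 1 kHz bin is index 1000; it holds one combined phasor
bin_1k = np.fft.rfft(mic)[1000]
print(abs(bin_1k), np.angle(bin_1k))
# one magnitude and one phase -- infinitely many (sax, voice) splits could have produced it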
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
You are correct if we can mix the two signals with a perfectly linear mixer?
What happens if the mixer is not linear?

Besides, when I play two musical notes, how come I can hear a third note?
You're going to force me to lock this thread if you don't stop arguing.
You are correct if we can mix the two signals with a perfectly linear mixer?
What happens if the mixer is not linear?

Besides, when I play two musical notes, how come I can hear a third note?
You’re not hearing a third note, though—you’re hearing an interval, unless you want to see intervals and chords as notes, which is an interesting thought.
 

bogosort

Joined Sep 24, 2011
696
You are correct if we can mix the two signals with a perfectly linear mixer?
What happens if the mixer is not linear?
I specifically said nonlinearity leads to intermodulation distortion, so obviously a nonlinear mixer will create IM products. But I'm sure you'll agree that it's very easy to make linear mixers. A summing network should never produce IM distortion, unless of course you've overdriven the circuit.

Besides, when I play two musical notes, how come I can hear a third note?
Seriously? Maybe you have hearing damage, because when I play two musical notes I hear exactly two tones.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
You're missing the most important part -- each conversation has to have its own bandwidth, and all of the overtones in the conversation must lie within that segregated bandwidth. Otherwise, there's no way for the demux to reliably separate the conversations.

The opposite thing happens with a typical microphone recording. A microphone captures an orchestra and all of the harmonic information occurs within a single bandwidth. As such, we can't reliably separate the individual components. We can make guesses about which overtones belong to which instrument, but there's no way to do this that's intrinsic to the signal itself.
The “segregated bandwidth” is my problem. If a sax and a voice share all sorts of overtone frequencies in the 1K region, we can “put the sax” on 30K and the voice on 40K, and create a “new wave,” but the demuxer is only seeing 30K and 40K. Where are the neighboring lossless overtones that make up the individual instruments after aggregation??
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
In short, I see how a single wave is composed of Fourier components, and we can observe those "wavelings" spectrographically as they evolve in time.

But I do not see an explanation for lossless bandwidth integration and nested, localized subcarrier-specific overtone data-retention within the finalized single aggregate wave. The post-processing “metadata” to which I’m referring lies in this specific notion.
 

bogosort

Joined Sep 24, 2011
696
The “segregated bandwidth” is my problem. If a sax and a voice share all sorts of overtone frequencies in the 1K region, we can “put the sax” on 30K and the voice on 40K, and create a “new wave,” but the demuxer is only seeing 30K and 40K. Where are the neighboring lossless overtones that make up the individual instruments after aggregation??
No, the demux is seeing the bandwidth of the entire signal, which is much larger than any individual conversation's bandwidth. Suppose we decide to give the sax and voice 6 kHz of bandwidth each, and we put the sax signal centered at 40 kHz and the voice at 50 kHz. The demux knows this, so it uses a 6 kHz bandpass filter centered at 40 kHz to get the sax (37 kHz to 43 kHz), and a 6 kHz bandpass filter centered at 50 kHz to get the voice (47 kHz to 53 kHz). Before the mux, we low-pass filter the sax and voice so that, once modulated onto their carriers, neither conversation spills outside its 6 kHz slot.

Here's a masterpiece rendition of the frequency domain from the demux's point of view:

[attached image: frequency-domain sketch of the two conversations]
The black triangles represent the bandwidth of each "conversation", and the green rectangles represent the bandpass filters used to get the conversations. Notice that if the two triangles were to blend into each other (mixing), then there'd be no way for the demux to distinguish the blended signals.
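In case code reads better than my masterpiece drawing, here's a rough scipy sketch of the same idea. Everything in it is invented for illustration (the overtone frequencies, the 40/50 kHz carriers, the 6th-order Butterworth filters); it's not how a real FDM system would be engineered, but it shows the overtones riding up with their subcarrier and coming back down intact.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 200_000                              # sample rate, comfortably above the 50 kHz slot
t = np.arange(0, 0.05, 1 / fs)

# two "conversations", each band-limited below 3 kHz (a few made-up overtones)
sax   = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*880*t) + 0.25*np.sin(2*np.pi*1320*t)
voice = np.sin(2*np.pi*300*t) + 0.4*np.sin(2*np.pi*600*t) + 0.2*np.sin(2*np.pi*2400*t)

# mux: shift each conversation into its own 6 kHz slot and add them on one "coax"
wire = sax * np.cos(2*np.pi*40_000*t) + voice * np.cos(2*np.pi*50_000*t)

def bandpass(sig, lo, hi):
    return sosfiltfilt(butter(6, [lo, hi], btype='bandpass', fs=fs, output='sos'), sig)

def back_to_baseband(sig, fc):
    lp = butter(6, 3_000, fs=fs, output='sos')
    return 2 * sosfiltfilt(lp, sig * np.cos(2*np.pi*fc*t))

# demux: bandpass each slot, then mix back down and low-pass
sax_hat   = back_to_baseband(bandpass(wire, 37_000, 43_000), 40_000)
voice_hat = back_to_baseband(bandpass(wire, 47_000, 53_000), 50_000)

print(np.corrcoef(sax, sax_hat)[0, 1])      # close to 1: the sax overtones came through
print(np.corrcoef(voice, voice_hat)[0, 1])  # close to 1: so did the voice's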
 

MrChips

Joined Oct 2, 2009
30,706
I specifically said nonlinearity leads to intermodulation distortion, so obviously a nonlinear mixer will create IM products. But I'm sure you'll agree that it's very easy to make linear mixers. A summing network should never produce IM distortion, unless of course you've overdriven the circuit.


Seriously? Maybe you have hearing damage, because when I play two musical notes I hear exactly two tones.
It is a fact that my hearing is damaged. Hence I cannot objectively rely on my hearing.
So I just conducted a test. I have two sine wave generators, one (A) set to 400Hz and the other (B) set to 300Hz.
The output is to a loudspeaker.
I have an audio frequency spectrum analyzer.
The result is three peaks, 300Hz, 400Hz, 500Hz.
I am not able to explain the 500Hz which happens to be (A) + (A-B).
 

Audioguru again

Joined Oct 21, 2019
6,671
I worked with a large, high-quality intercom system called Stentofon Pamex. It used a mux scheme based on Pulse Amplitude Modulation, where 4 low-distortion, wide-bandwidth voice or music channels shared a single wire. There were actually 12 time slots in each cycle, because the single wire was muted before and after each sound's time slot to avoid interference.
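For anyone curious, here's a toy numpy sketch of PAM-style time-slot muxing (just my illustration of the general idea with made-up numbers, not the actual Stentofon design): four channels get one slot each out of twelve, the other slots stay muted as guard time, and the receiver simply reads its slot back out.

import numpy as np

fs = 8000                               # per-channel sample rate (made up)
t = np.arange(0, 0.01, 1 / fs)
channels = [np.sin(2*np.pi*f*t) for f in (440, 550, 660, 880)]  # 4 toy "voices"

SLOTS = 12                              # 12 time slots per frame
ASSIGN = [0, 3, 6, 9]                   # channel i transmits in slot ASSIGN[i];
                                        # the remaining slots stay muted (guard time)

# mux: build one frame per sampling instant, one amplitude value per slot
frames = np.zeros((len(t), SLOTS))
for ch, slot in enumerate(ASSIGN):
    frames[:, slot] = channels[ch]
wire = frames.ravel()                   # the slot stream sent down the single wire

# demux: the receiver knows the slot assignment and reads its slot back out
recovered = [wire[slot::SLOTS] for slot in ASSIGN]
print(all(np.allclose(recovered[i], channels[i]) for i in range(4)))  # True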
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
No, the demux is seeing the bandwidth of the entire signal, which is much larger than any individual conversation's bandwidth. Suppose we decide to give the sax and voice 6 kHz of bandwidth each, and we put the sax signal centered at 40 kHz and the voice at 50 kHz. The demux knows this, so it uses a 6 kHz bandpass filter centered at 40 kHz to get the sax (37 kHz to 43 kHz), and a 6 kHz bandpass filter centered at 50 kHz to get the voice (47 kHz to 53 kHz). Before the mux, we low-pass filter the sax and voice so that, once modulated onto their carriers, neither conversation spills outside its 6 kHz slot.

Here's a masterpiece rendition of the frequency domain from the demux's point of view:

View attachment 239829
The black triangles represent the bandwidth of each "conversation", and the green rectangles represent the bandpass filters used to get the conversations. Notice that if the two triangles were to blend into each other (mixing), then there'd be no way for the demux to distinguish the blended signals.
Very clear as always, thanks :) Btw, MrChips, your explanations are very clear as well. My questions are at the intersection of physics and metaphysics, so I apologize if they seem unclear; there are times when the question is exploring the very bleeding edge of that area, and sometimes getting the question phrased just right to get at the information makes me sound inadvertently trollish on an engineering forum! But you guys are the only people who get this sh*t, that's why I ask here.

So....The demux sees the entire signal, yes, and the overtone data is confined to its own bandwidth sandbox.

I have an issue with the sandboxes themselves aggregating so that they maintain addressability after superimposing into one parent sandbox. How are they maintaining nested, recoverable organization within their own sandbox?
 

bogosort

Joined Sep 24, 2011
696
It is a fact that my hearing is damaged. Hence I cannot objectively rely on my hearing.
So I just conducted a test. I have two sine wave generators, one (A) set to 400Hz and the other (B) set to 300Hz.
The output is to a loudspeaker.
I have an audio frequency spectrum analyzer.
The result is three peaks, 300Hz, 400Hz, 500Hz.
I am not able to explain the 500Hz which happens to be (A) + (A-B).
Almost certainly the speaker is causing the IM distortion. Connect an oscope with FFT function to the circuit output before the speaker and you will see exactly two peaks.

Speakers are notoriously nonlinear devices, even $10,000 speakers produce significant distortion. The ubiquitous small, cheap speakers that come with hobby kits are millions of times (60+ dB) worse.
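To show where that 500 Hz could come from, here's a small numpy sketch (the cubic term is a made-up stand-in for whatever the speaker is doing, not a model of any particular driver): push 300 Hz + 400 Hz through a mildly nonlinear transfer function and the 2A-B = 500 Hz product pops out, along with the other odd-order IM products.

import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2*np.pi*300*t) + np.sin(2*np.pi*400*t)

# a mild cubic nonlinearity standing in for an overdriven speaker/amp stage
y = x + 0.1 * x**3

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
strong = freqs[spectrum > 0.001 * spectrum.max()]
print(np.unique(np.round(strong)))
# -> 200, 300, 400, 500, 900, 1000, 1100, 1200 Hz:
#    the 2A-B = 500 Hz product appears, along with the other third-order terms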
 

bogosort

Joined Sep 24, 2011
696
I have an issue with the sandboxes themselves aggregating so they maintain addressability after superimposing into one parent soundbox. How are they maintaining nested recoverable organization within their own sandbox?
Generally speaking, they aren't. If you put three distinct sine waves in the sandbox, they are easily separated because each sine wave has a bandwidth of exactly 0. But if you put a complex waveform in the sandbox, and if that waveform is comprised of harmonically-rich components that are mixing with each other (constructive and destructive interference), then there is no way to accurately separate the individual component signals, each of which is itself a complex waveform, just from the information present in the signal.

In the most extreme example, if two components each include a 5 kHz harmonic but they are of opposite polarity, then the composite signal will not have any 5 kHz at all. Just from the composite signal itself, we cannot tell if 5 kHz is missing because it was destructively cancelled, or if it had been filtered out of each of the components, or if it never existed in the first place.
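A quick numpy illustration of that extreme case (all values invented): two components each carry a 0.5-amplitude 5 kHz harmonic with opposite polarity, and the composite ends up with literally nothing at 5 kHz.

import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)

# two made-up component signals that each contain a 5 kHz harmonic, opposite polarity
a = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*5000*t)
b = np.sin(2*np.pi*1500*t) - 0.5*np.sin(2*np.pi*5000*t)

composite = a + b            # the 5 kHz parts cancel exactly

spectrum = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (1000, 1500, 5000):
    idx = np.argmin(np.abs(freqs - f))
    print(f, spectrum[idx])  # the 5000 Hz bin is ~0: the composite carries no trace of it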
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Generally speaking, they aren't. If you put three distinct sine waves in the sandbox, they are easily separated because each sine wave has a bandwidth of exactly 0. But if you put a complex waveform in the sandbox, and if that waveform is comprised of harmonically-rich components that are mixing with each other (constructive and destructive interference), then there is no way to accurately separate the individual component signals, each of which is itself a complex waveform, just from the information present in the signal.

In the most extreme example, if two components each include a 5 kHz harmonic but they are of opposite polarity, then the composite signal will not have any 5 kHz at all. Just from the composite signal itself, we cannot tell if 5 kHz is missing because it was destructively cancelled, or if it had been filtered out of each of the components, or if it never existed in the first place.
Exactly!

So how is the one, single complex waveform representing hundreds of nested, “flattened” complex waveforms?? Down a coax, the 1 complex waveform can, at the receiving end, be parsed into all of the sub-aggregate complex waveforms, retaining their individual sandboxed data?
 

MrChips

Joined Oct 2, 2009
30,706
Almost certainly the speaker is causing the IM distortion. Connect an oscope with FFT function to the circuit output before the speaker and you will see exactly two peaks.

Speakers are notoriously nonlinear devices, even $10,000 speakers produce significant distortion. The ubiquitous small, cheap speakers that come with hobby kits are millions of times (60+ dB) worse.
We can agree that a nonlinear system would result in IM distortion.
What about beat frequencies then, for example an IF or an SSB BFO?
If I play 440Hz and 442Hz I can hear 2Hz.
 

bogosort

Joined Sep 24, 2011
696
Exactly!

So how is the one, single complex waveform representing hundreds of nested, “flattened” complex waveforms?? Down a coax, the 1 complex waveform can, at the receiving end, be subsequently parsed into all of the sub-aggregate complex waveforms, losslessly retaining their individual sandboxed data.
I'm saying the opposite -- the receiving end cannot "de-aggregate" a complex waveform within a single bandwidth if the component waveforms mix within that bandwidth. An FFT can tell you that 5 kHz is present, but it cannot tell you where it came from.
 

Deleted member 115935

Joined Dec 31, 1969
0
@Jennifer Solomon

it must be very hard for you to take in all these different people approaching frequency muxing from different directions,

I have taught this lesson a few times in class now,
do you want to PM us for an off-forum conversation?
 

bogosort

Joined Sep 24, 2011
696
We can agree that a nonlinear system would result in IM distortion.
So then our only disagreement is whether a linear system introduces new frequencies? Consider that the basic engineering test for linearity is indeed to put two sine waves through a device -- if the output consists of only two sine waves, then the device is linear.

What about beat frequencies then, for example an IF or an SSB BFO?
If I play 440Hz and 442Hz I can hear 2Hz.
Beat frequencies are a psycho-acoustic phenomenon: you hear the slow swelling and fading of the sum's amplitude envelope as a beat, but there is no component at the beat frequency in the signal itself.
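A quick numpy check of that claim (tone frequencies and lengths made up): sum 440 Hz and 442 Hz and look at the spectrum. The envelope of the sum swells and fades twice a second, which is what you hear as the beat, but the 2 Hz bin itself is empty.

import numpy as np

fs = 48000
t = np.arange(0, 2.0, 1 / fs)          # two seconds so 2 Hz gets its own bin
x = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*442*t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (2, 440, 442):
    idx = np.argmin(np.abs(freqs - f))
    print(f, spectrum[idx])            # energy at 440 and 442 only; the 2 Hz bin is ~0

# the "beat" is just the outline of the rectified sum rising and falling twice a second,
# not a component of the signal
envelope = np.abs(x)                   # crude envelope for plotting, if you're curious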
 