Frequency-division multiplexing

Status
Not open for further replies.

Ya’akov

Joined Jan 27, 2019
9,163
A Fourier Transform doesn't provide information about the source components of the signal it operates on, only that signal's composition as sine waves of various frequencies and amplitudes.

You can recover the result of adding the original source waveforms by adding the sine waves identified by the FT, but you can't use the FT to decide what the components looked like before they were added and resulted in the waveform upon which the FT is performed.

No process that relies only on the signal under analysis, without state information from outside that signal, can recover the complex waveforms that were added together to produce it.

For example, an FT used to deconstruct the complex sound of, say, a trumpet and a violin played at the same time will produce a set of sine waves of various frequencies and amplitudes which, when recombined, reproduce the original signal. No listener could tell you whether the signal was produced by the two instruments or by a set of sine waves as specified by the FT; that information is not in the signal.
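A quick numpy sketch of the point, using toy stand-in partials (assumed values, not real instrument spectra): the inverse FFT gives back the sum exactly, but nothing in the spectrum labels which peak came from which source.

```python
import numpy as np

# Toy stand-ins for two instruments: a couple of partials each.
t = np.arange(4096) / 4096.0
trumpet = np.sin(2*np.pi*233*t) + 0.5*np.sin(2*np.pi*466*t)
violin = np.sin(2*np.pi*330*t) + 0.7*np.sin(2*np.pi*660*t)
mix = trumpet + violin

spectrum = np.fft.rfft(mix)                   # frequencies, amplitudes, phases
rebuilt = np.fft.irfft(spectrum, n=len(mix))
assert np.allclose(rebuilt, mix)              # the *sum* is recovered perfectly
# But `spectrum` is just four unlabeled peaks; no bin says "trumpet" or "violin".
```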
 

Ya’akov

Joined Jan 27, 2019
9,163
To me it’s akin to having a multi-layer Photoshop file where each layer is loaded with complex pixel arrangements. We flatten all the layers into one, but then how could we “recall” the individual layers with absolutely no metadata on which specific pixels belonged to which layers?
To put a fine point on my previous comment, you are completely correct in this, but you are confused if you think that the signal upon which the FT is performed is in any way equivalent to the PSD file. The FT is performed on a screen capture, not the multilayer source file.

The signal you are trying to dissect is the result, like the display of the file on the screen, not the source, like the PSD file interpreted by PS to display on the screen.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
To put a fine point on my previous comment, you are completely correct in this, but you are confused if you think that the signal upon which the FT is performed is in any way equivalent to the PSD file. The FT is performed on a screen capture, not the multilayer source file.

The signal you are trying to dissect is the result, like the display of the file on the screen, not the source, like the PSD file interpreted by PS to display on the screen.
Yes, that’s exactly my issue—I’m aware the FT is performed on a screen capture.

Using AI, it seems software such as moises.ai can analyze a “flattened” analog signal, extracting the original layer “groups” and presenting them as discretely addressable elements, to continue with that PSD analogy. What beyond the FT is providing the capacity to identify the individual timbres and recover them as grouped entities?
 

Ya’akov

Joined Jan 27, 2019
9,163
Yes, that’s exactly my issue—I’m aware the FT is performed on a screen capture.

Using AI, it seems software such as moises.ai can analyze a “flattened” analog signal, extracting the original layer “groups” and presenting them as discretely addressable elements, to continue with that PSD analogy. What beyond the FT is providing the capacity to identify the individual timbres and recover them as grouped entities?
See my first answer: information stored in the AI, which is being used as a way to recover the information suggested by the input signal. The information in the input signal is not the contents of an FT or anything like that; it's the relationship between the input signal and the stored information being used in conjunction with it.

That is to say: the letters D, O, and G have nothing about them that makes a dog, and combining them owes nothing to anything essential about dogs. When we communicate the word "dog," it is used in conjunction with information already existing at the receiving end of the communication channel to instantiate the idea of a dog in whatever way the communicator intends.

No dogs were sent, and there is nothing that is dog in the signal; it is just information relating to something already known to both sides: a dog.
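A deliberately trivial sketch of the same point in code (the lookup table is a hypothetical stand-in for the receiver's stored knowledge): the signal is only a key, and the meaning lives at the receiving end.

```python
# Nothing dog-like travels down the channel; only the key "dog" does.
stored_knowledge = {
    "dog": "domesticated canine, four legs, barks",  # receiver-side information
    "cat": "domesticated feline, four legs, meows",
}
received = "dog"                        # the entire content of the signal
meaning = stored_knowledge[received]    # meaning = signal + stored information
```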
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
See my first answer: information stored in the AI, which is being used as a way to recover the information suggested by the input signal. The information in the input signal is not the contents of an FT or anything like that; it's the relationship between the input signal and the stored information being used in conjunction with it.

That is to say: the letters D, O, and G have nothing about them that makes a dog, and combining them owes nothing to anything essential about dogs. When we communicate the word "dog," it is used in conjunction with information already existing at the receiving end of the communication channel to instantiate the idea of a dog in whatever way the communicator intends.

No dogs were sent, and there is nothing that is dog in the signal; it is just information relating to something already known to both sides: a dog.
Yes, I’m 100% on all that, and good nod to the mega-thread on this with the “dog”-onics. :)

Each instrument could be considered a “group of sines.” Somehow that software is doing a Fourier transform that can be parsed as “groups” from within a “flattened” waveform. Which means somehow the AI is able to delineate crosstalk and redundancy of the groups and know how to apply which sine to which group, such that they can reconstruct the groups as discrete entities: in effect, reconstituting the group “metadata.” Once the group metadata is reconstituted, they can then match the spectrographic data in that group to an existing stored group, performing a kind of mini “Shazam” ID on it, so they can label it “Rhodes,” “violin,” “drums,” etc.

I want to understand how they’re parsing the groups beyond a normal EQ, band-pass filter, etc. (Ultimately the conscious mind is doing something similar, and then going a step beyond to relate that data to true 3D forms in “reality” that it has a database of.)
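For reference, the usual technique behind such separators (moises.ai's internals aren't public, so this is an assumption) is time-frequency masking: take an STFT, multiply it by a per-source mask, and invert. In real systems the mask is predicted by a trained network; here it is a hand-made stand-in:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 22050
t = np.arange(fs) / fs
mix = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*1760*t)  # two toy "instruments"

f, frames, Z = stft(mix, fs=fs, nperseg=1024)
mask = (f < 1000.0)[:, None].astype(float)  # stand-in for a model-predicted mask
_, low = istft(Z * mask, fs=fs, nperseg=1024)          # ~ the 220 Hz source
_, high = istft(Z * (1 - mask), fs=fs, nperseg=1024)   # ~ the 1760 Hz source
```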
 

Ya’akov

Joined Jan 27, 2019
9,163
Yes, I caught that, and good nod to the mega-thread on this with the “dog”-onics. :)

Each instrument could be considered a “group of sines.” Somehow that software is doing a Fourier transform that can be parsed as “groups” from within a “flattened” waveform. Which means somehow the AI is able to delineate crosstalk and redundancy of the groups and know how to apply which sine to which group, such that they can reconstruct the groups as discrete entities: in effect, reconstituting the group “metadata.” Once the group metadata is reconstituted, they can then match the spectrographic data in that group to an existing stored group, performing a kind of mini “Shazam” ID on it, so they can label it “Rhodes,” “violin,” “drums,” etc.

I want to understand how they’re parsing the groups beyond a normal EQ, band-pass filter, etc. (Ultimately the conscious mind is doing something similar, and then going a step beyond to relate that data to true 3D forms in “reality” that it has a database of.)
No. They are not depending on the signal for all the information; some of the information is stored on the receiving end. They are recognizing patterns. The information about how the sounds relate to each other is not in the signal; it is stored locally. The signal is just a fingerprint of an event that can be understood historically. Without that historical information, it is impossible to derive the information you want; it's simply not there.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
No. They are not depending on the signal for all the information; some of the information is stored on the receiving end. They are recognizing patterns. The information about how the sounds relate to each other is not in the signal; it is stored locally. The signal is just a fingerprint of an event that can be understood historically. Without that historical information, it is impossible to derive the information you want; it's simply not there.
That has an incongruous implication, though, because there are essentially infinite combinations of timbres at any interval. I get that they're doing hella pattern matching, but there's no way they can reconstruct that level of discreteness unless some enhanced Fourier-esque algorithm is allowing them some kind of frequency collation around multiple fundamentals. Otherwise, how are they able to literally bounce the individual instruments intact, with minimal crosstalk, with all the overtone nuances maintained in their relevant “groups”? I've tried the app; it's downright spooky how accurate it is.
 

Ya’akov

Joined Jan 27, 2019
9,163
That has an incongruous implication, though, because there are essentially infinite combinations of timbres at any interval. I get that they're doing hella pattern matching, but there's no way they can reconstruct that level of discreteness unless some enhanced Fourier-esque algorithm is allowing them some kind of frequency collation around multiple fundamentals. Otherwise, how are they able to literally bounce the individual instruments intact, with minimal crosstalk, with all the overtone nuances maintained in their relevant “groups”? I've tried the app; it's downright spooky how accurate it is.
If the app is using deep learning, there is no explicit algorithm; there is a neural network. It's not a mathematical analysis.
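A hedged sketch of what that looks like in practice (hypothetical architecture; the app's actual design is not public): a network maps spectrogram frames to per-source masks, and everything it "knows" about timbres lives in the learned weights rather than in any formula.

```python
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    """Maps magnitude-spectrogram frames to per-source masks in [0, 1]."""
    def __init__(self, n_bins=513, n_sources=4):
        super().__init__()
        self.n_sources = n_sources
        self.net = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, n_bins * n_sources), nn.Sigmoid(),
        )

    def forward(self, mag):                  # mag: (frames, n_bins)
        masks = self.net(mag)
        return masks.view(-1, self.n_sources, mag.shape[1])

mix = torch.rand(100, 513)                   # stand-in magnitude spectrogram
stems = MaskNet()(mix) * mix.unsqueeze(1)    # (frames, sources, bins) per source
```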
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Deep learning is a process that uses training data to create a structure that can recognize things. It's complex, of course; the Wikipedia article (https://en.wikipedia.org/wiki/Deep_learning) might be helpful.
I would still think some kind of Fourier deconstruction would be required before the pattern-matching is performed, though. This is not like a monophonic matching of a single timbre. This is nested timbre identification and extraction. The components have to exist as separate things on some level before it can do the matching.
 

Ya’akov

Joined Jan 27, 2019
9,163
I would still think some kind of Fourier deconstruction would be required before the pattern-matching is performed, though. This is not like a monophonic matching of a single timbre. This is nested timbre identification and extraction. The components have to exist as separate things on some level before it can do the matching.
You have already pointed out that an FT is not a method for delineating timbres. Even if the deep learning used an FT as a method of identifying candidates, it is not the output of the FT that has the information concerning what the FT "means"; it is the stored information derived from the learning set, where candidates are already classified. The match to a possible FT output is not based on working out what is in it, but on how it relates to the "experience" of other FT outputs.

The information that makes it possible to apparently extract more information than is in the signal has been precalculated and stored locally. A naive analysis of the FT output has no way to obtain information that is not there. The neural net has information, not present in the signal, about what signals like the instant one "mean".

One more time: there is no analysis that can be performed on the FT, in the absence of some other source of information, that can derive information not stored in it. There is no occult information in there; the FT is just one input that, combined with the stored history of other signals, can be used to work out what the source must have been like.
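In code terms, that classification step might look like this sketch (toy random "library"; a real system stores learned templates or network weights): the FFT contributes only a fingerprint, and the label comes entirely from the stored, pre-classified entries it is compared against.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pre-classified library: label -> stored spectral template.
library = {name: rng.random(64) for name in ("violin", "trumpet", "snare")}

def classify(signal):
    mag = np.abs(np.fft.rfft(signal, n=126))      # 64-bin fingerprint
    mag = mag / (np.linalg.norm(mag) + 1e-12)
    # The decision uses the stored templates; the FFT alone names nothing.
    return min(library, key=lambda name: np.linalg.norm(library[name] - mag))

print(classify(np.random.default_rng(1).random(500)))  # matches some stored label
```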
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
You have already pointed out that an FT is not a method for delineating timbres. Even if the deep learning used an FT as a method of identifying candidates, it is not the output of the FT that has the information concerning what the FT "means"; it is the stored information derived from the learning set, where candidates are already classified. The match to a possible FT output is not based on working out what is in it, but on how it relates to the "experience" of other FT outputs.

The information that makes it possible to apparently extract more information than is in the signal has been precalculated and stored locally. A naive analysis of the FT output has no way to obtain information that is not there. The neural net has information, not present in the signal, about what signals like the instant one "mean".

One more time: there is no analysis that can be performed on the FT, in the absence of some other source of information, that can derive information not stored in it. There is no occult information in there; the FT is just one input that, combined with the stored history of other signals, can be used to work out what the source must have been like.
Yes, “Even if the deep learning used an FT as a method of identifying candidates, it is not the output of the FT that has the information concerning what the FT ‘means’”. Yes, of course... the meaning is a function of the contextuality and the experiential magnitude mapping to an existing source. There certainly is an “occultic” information gap, however: a snare sound is a unique timbre. You might match that timbre and then output “snare sound.” Further, one might ask, “which snare sound in physical space?” Now you're talking about ones in 3D physical space, independent of the “1D data match.” What is “physical space” to a wave/bit processor, and what is the set of all 3D objects from which sounds emanate, and which are readily correlated to said waves and the bits reflecting and interrelating them? Undefined as of yet. This is part of a model for consciousness, reason, meaning, and life Itself.
 
Last edited:

MrAl

Joined Jun 17, 2014
11,480
Question for any physicists here:

A waveform is a 1D informational element. It has one degree of freedom to oscillate at a given frequency over time t.

At any given range of time, this waveform transmitted down a wire or through the air represents a composite of not just sine waves, but modulated sine waves.

But how can just one solitary modulated wave carry multiple carrier frequencies to contain unique “sub-modulated” channels of A/V information and form a single composite waveform, but also maintain discrete addressability of A/V information with only 1 degree of informational freedom in that single wave at any Δt? If we take two modulated waves and combine them, we get a third unique modulated wave. How do we know how many modulated waves comprise that final wave, and which of those modulated “wavelings” correlate to a clarinet sound, a conversation, a dog bark, or one or more of the lines on a TV display?
Hello there,

I think the key concept here is how we interpret measurements, static or dynamic.

Information is passed not only by amplitude and/or frequency but also by the change of that amplitude and/or frequency. That means at least the first derivative also has significance and there can be more than that (second, third, etc.).

Given a constant-frequency, constant-amplitude sine wave, the only information that could be passed is the frequency and the amplitude. If that frequency and amplitude never change, how could any other information be passed? Once you allow a change in either or both, the Shannon Limit becomes the limiting criterion, and I am sure you know that allows for quite a bit of information to be passed before it starts to restrict anything. In fact, that in itself may constitute an argument.

There are also some byproducts that will be recognizable by a regular trend simply because of the way in which regular patterns appear when mixing sine waves of different frequencies. These we could call sidebands. They of course appear because of the mathematical relationships between two or more waves that interfere with each other.

The most important fact, though, is that when a wave changes, information can be passed by way of the derivatives. "One if by land, two if by sea" relies on a static measurement: the number of lanterns. "One followed by two if by land, two followed by one if by sea" relies on a dynamic change in more than one static measurement.

With maybe a few more than 26 different changes we could convey the entire works of Shakespeare :)
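The sideband point is easy to verify numerically: multiplying two sines produces components at the sum and difference frequencies. (And the Shannon limit mentioned above is C = B log2(1 + S/N) bits per second, for bandwidth B and signal-to-noise ratio S/N.) A minimal sketch:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # exactly one second of samples
product = np.sin(2*np.pi*1000*t) * np.sin(2*np.pi*100*t)

spectrum = np.abs(np.fft.rfft(product)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1/fs)
print(freqs[spectrum > 0.1])                 # -> [ 900. 1100.]  the sidebands
```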
 

RH3

Joined Nov 14, 2016
1
Question for any physicists here:

A waveform is a 1D informational element. It has one degree of freedom to oscillate at a given frequency over time t.

At any given range of time, this waveform transmitted down a wire or through the air represents a composite of not just sine waves, but modulated sine waves.

But how can just one solitary modulated wave carry multiple carrier frequencies to contain unique “sub-modulated” channels of A/V information and form a single composite waveform, but also maintain discrete addressability of A/V information with only 1 degree of informational freedom in that single wave at any Δt? If we take two modulated waves and combine them, we get a third unique modulated wave. How do we know how many modulated waves comprise that final wave, and which of those modulated “wavelings” correlate to a clarinet sound, a conversation, a dog bark, or one or more of the lines on a TV display?
Prior to digital transmission many of the world’s telephony networks used Frequency Division Multiplex to carry hundreds of telephone calls over the same coaxial cable.
Each voice channel was modulated onto a specific range of carrier frequencies, independent of the others. All the resulting modulated channels were carried within the bandwidth of whatever form of cable was required, and at the receiving end the individual channels were demodulated and filtered to extract the original voice calls in their natural vocal range. At all times during transmission the individual channels were entirely independent of each other in the frequency domain, but all within the same time domain.

Multiple channels could be modulated again by a single carrier of a much higher frequency than the original channel carriers to create what was known as a ‘group’. An even higher range of carrier frequencies could then be used to combine multiple groups of channels into a ‘super group’, and the process could be repeated to combine multiple super groups into a ‘hyper group’. A hyper group could contain 960 entirely independent telephone voice calls simultaneously.
The secret to the success of this system was very accurate bandpass filters and an extremely stable national reference carrier signal of 60 kHz from which each individual carrier frequency was generated.

As an earlier contributor mentioned, Frequency Division Multiplex is like the radio signals on an FM radio receiver. To listen to a particular channel or station you only need to know the carrier frequency that the channel of choice is being broadcast on.

And then everything changed as the networks moved to a digital system, but this time using the time domain.
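A minimal sketch of that scheme, assuming two toy "voice" channels and ideal conditions: each channel rides its own carrier, and mixing back down with that carrier followed by a low-pass filter recovers it, with the other channel rejected because it occupies a different band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48000
t = np.arange(fs) / fs
voice1 = np.sin(2*np.pi*300*t)               # stand-ins for two voice channels
voice2 = np.sin(2*np.pi*440*t)

# Multiplex: each channel on its own carrier, summed onto one "cable".
line = voice1*np.cos(2*np.pi*8000*t) + voice2*np.cos(2*np.pi*16000*t)

# Demultiplex channel 1: mix down with its carrier, then low-pass (~3.4 kHz).
b, a = butter(6, 3400 / (fs / 2))
recovered = 2 * filtfilt(b, a, line * np.cos(2*np.pi*8000*t))
# recovered ≈ voice1; voice2's energy lands near 8 kHz and 24 kHz and is filtered out.
```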
 

Deleted member 115935

Joined Dec 31, 1969
0
" A waveform is a 1D informational element. It has one degree of freedom to oscillate at a given frequency over time t. "

There are a few things basically wrong with this statement.

Why do you say the waveform is one-dimensional? It can expand in X, Y and Z over time.

Why do you say a waveform is an information element? A waveform has no knowledge of information.

Why do you say one degree of freedom to oscillate?
A sound wave is a compression wave in three dimensions;
an electrical wave is a TEM wave that moves as coupled electric and magnetic fields, as well as in three directions.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Prior to digital transmission many of the world’s telephony networks used Frequency Division Multiplex to carry hundreds of telephone calls over the same coaxial cable.
Each voice channel was modulated onto a specific range of carrier frequencies, independent of the others. All the resulting modulated channels were carried within the bandwidth of whatever form of cable was required, and at the receiving end the individual channels were demodulated and filtered to extract the original voice calls in their natural vocal range. At all times during transmission the individual channels were entirely independent of each other in the frequency domain, but all within the same time domain.

Multiple channels could be modulated again by a single carrier of a much higher frequency than the original channel carriers to create what was known as a ‘group’. An even higher range of carrier frequencies could then be used to combine multiple groups of channels into a ‘super group’, and the process could be repeated to combine multiple super groups into a ‘hyper group’. A hyper group could contain 960 entirely independent telephone voice calls simultaneously.
The secret to the success of this system was very accurate bandpass filters and an extremely stable national reference carrier signal of 60 kHz from which each individual carrier frequency was generated.

As an earlier contributor mentioned, Frequency Division Multiplex is like the radio signals on an FM radio receiver. To listen to a particular channel or station you only need to know the carrier frequency that the channel of choice is being broadcast on.

And then everything changed as the networks moved to a digital system, but this time using the time domain.
Thanks for the reply...

I can understand how a single frequency can be modulated to contain the fundamental frequency and associated overtones of one particular “voice” on a single wire. I'm not yet cognizing how hundreds of timbre modulations can be kept discretely “parsable” within a parent wave. There's massive redundancy just in the 1 kHz range.
 
Last edited:

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
" A waveform is a 1D informational element. It has one degree of freedom to oscillate at a given frequency over time t. " There are a few things basically wrong with this statement,

Why do you say the waveform is one-dimensional? It can expand in X, Y and Z over time.
Yes, those are spatial dimensions, of which there are three. But the data is informationally 1D (really, “non-D”). At any given t you can capture a 1D voltage value of its 3D oscillation. Technically there are infinitely many values in it as well.

Why do you say a waveform is an information element? A waveform has no knowledge of information.
Because it’s a spatial “traveling database” of organized values that mathematically correlate to an “analog” of a 3D something's (undefined) historical status in space. Light is a self-modulating wave that “records” the state of a 3D object off which it reflects.

As to whether or not it has “knowledge” of information, which implies being “conscious of” it, I'm actually not yet sure how much we can speak to that, due to the absence of a model of Reason and Human Thought that adequately delineates the role and behavior of consciousness, or the distinction between information and what it describes.

A ”wave” and a “frequency” are not so much things unto themselves; they are both conditions of “something,” and this is undefined to the human mind at present. We are semantically used to using both terms improperly as “things” rather than states or conditions thereof.

In a brain-mind duality model, if that something is some kind of 5D conscious substrate in (undefined) Reality which can qualitatively feel or experience its own oscillation as an elementary property, each distinct wave may be some kind of self-embodying, self-organizing conscious force for all we know. Whatever is “waving” knows how to keep fully discrete in some kind of localized aether, and “knows” how to losslessly combine and integrate with other waves. Newton, Descartes and many other individuals actually subscribed to similar thoughts.
 
Last edited:

Deleted member 115935

Joined Dec 31, 1969
0
For those not up to speed (as I wasn't) on brain-mind duality:

https://plato.stanford.edu/entries/dualism/

" In the philosophy of mind, dualism is the theory that the mental and the physical – or mind and body or mind and brain – are, in some sense, radically different kinds of thing "


Not certain what that has to do with All About Circuits, but thank you.
Do you have any questions on electronics or other topics to do with circuits?

Maybe we could take it in small steps so we can follow where you're going.

For instance, you ask,

" I can understand how a single frequency can be modulated to contain the fundamental freq. and associated overtones of one particular “voice” on a single wire. I have a hard time seeing how hundreds can be kept discretely ”parsable” within a parent wave."

Now that's something we can talk about. Are you up for that being the basis of this discussion?
 