Frequency-division multiplexing


Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Question for any physicists here:

A waveform is a 1D informational element. It has one degree of freedom to oscillate at a given frequency over time t.

At any given range of time, this waveform transmitted down a wire or through the air represents a composite of not just sine waves, but modulated sine waves.

But how can just one solitary modulated wave carry multiple carrier frequencies, each containing unique “sub-modulated” channels of A/V information, and form a single composite waveform while also maintaining discrete addressability of that A/V information with only one degree of informational freedom in that single wave at any Δt? If we take two modulated waves and combine them, we get a third, unique modulated wave. How do we know how many modulated waves comprise that final wave, and which of those modulated “wavelings” correlate to a clarinet sound, a conversation, a dog bark, or one or more of the lines on a TV display?
 

Deleted member 115935

Joined Dec 31, 1969
0
Easy answer

https://en.wikipedia.org/wiki/Frequency-division_multiplexing

Longer answer:
you need to look at a signal in both the time and frequency domains.

Mathematically the two are linked by the Fourier transform.

The phrase "division multiplexing" causes confusion; it's just a way of saying that multiple independent signals are combined into one signal.

The most common, though now dated, example of FDM you're used to is probably radio,
where multiple channels are broadcast over the air.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Easy answer

https://en.wikipedia.org/wiki/Frequency-division_multiplexing

Longer answer:
you need to look at a signal in both the time and frequency domains.

Mathematically the two are linked by the Fourier transform.

The phrase "division multiplexing" causes confusion; it's just a way of saying that multiple independent signals are combined into one signal.

The most common, though now dated, example of FDM you're used to is probably radio,
where multiple channels are broadcast over the air.
Thanks for the reply... but I’m not seeing the “how” in that info. It doesn’t address the physics of the simultaneity of multiple modulations existing concurrently within a single 1D range. A Fourier transform reduces a signal to multiple component sine waves, but the preserved modulations of those component waves are what I’m after. Others have posited a “stacked” concept, where a wave is 3D with “thickness,” but that makes no sense over a wire.
 

Deleted member 115935

Joined Dec 31, 1969
0
Did you read the wiki link and follow-up article?

So,
lose this idea of one dimension; it does not exist.
Signals exist in both the frequency and the time domain; you can look at a signal in either or both, as any signal appears in both. It's just how you describe it.

So, for instance, take a signal, say voice;
in the frequency domain it occupies roughly the range 300 to 3300 Hz.

If you take two voice signals, they will overlap in frequency.

So multiply one voice signal by, say, 100 kHz and the other by 200 kHz,
and filter;
you now have one signal at 100,300 to 103,300 Hz and the other at 200,300 to 203,300 Hz.

The two now do not interfere, as they are at separate frequencies.
In the time domain it's a very complex waveform, but the two signals are still in there, separately.

To recover the 100 kHz channel,
multiply the combined signal by 100 kHz and filter;
out will pop the baseband of the original signal.

How do you think you would recover the one based at 200 kHz?


If in doubt, do the maths:

Sine wave 1 at, say, 1 kHz;
sine wave 2 at, say, 2 kHz.

Multiply one by 100 kHz, one by 200 kHz,
and add. What equation do you get?
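Worked through numerically, the recipe above looks roughly like this. A minimal sketch: the tone and carrier frequencies are the ones used above, while the sample rate and the low-pass filter are illustrative assumptions, not part of any particular system.

```python
# Rough FDM sketch: two baseband tones, two carriers, one composite wire signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000_000                        # sample rate (Hz), well above the 200 kHz carrier
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of time

m1 = np.cos(2 * np.pi * 1_000 * t)    # "voice" 1: a 1 kHz tone
m2 = np.cos(2 * np.pi * 2_000 * t)    # "voice" 2: a 2 kHz tone
c1 = np.cos(2 * np.pi * 100_000 * t)  # 100 kHz carrier
c2 = np.cos(2 * np.pi * 200_000 * t)  # 200 kHz carrier

composite = m1 * c1 + m2 * c2         # one waveform, one voltage at each instant

# Recover channel 1: mix with its carrier again, then low-pass filter.
# cos(a)cos(b) = 0.5[cos(a-b) + cos(a+b)], so mixing puts half of channel 1
# back at baseband and pushes channel 2 out to 100 kHz / 300 kHz, where the
# 5 kHz low-pass removes it.
b, a = butter(4, 5_000, btype="low", fs=fs)
recovered1 = 2 * filtfilt(b, a, composite * c1)

# Away from the filter's edge transients this matches the original 1 kHz tone.
print(np.max(np.abs(recovered1[2000:-2000] - m1[2000:-2000])))  # small residual
```

Recovering the other channel is the same operation with the 200 kHz carrier.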
 

crutschow

Joined Mar 14, 2008
34,280
The "physics" is that for multiple frequencies in one signal, the signal voltage at any one instant in time represents the composite value of all the signals.
But at any instant in time, there are no frequencies. Frequency is given in hertz (cycles per second), so you need to look at the signal over time to see all the frequencies.
So, if you look at the signal with an oscilloscope, it will likely look like a jumble of amplitudes (even like noise if there are a lot of frequencies),
but the separate frequencies can be seen if you look at the signal with a spectrum analyzer (which basically shows the Fourier components).
You can also recover the individual signals with a narrow band-pass filter tuned to each of them.
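As a rough numerical illustration of that spectrum-analyzer view (the tone frequencies and amplitudes here are arbitrary, chosen only for the example):

```python
# Three tones summed into one time-domain "jumble"; the FFT magnitude separates them again.
import numpy as np

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)

signal = (1.0 * np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 1_000 * t)
          + 0.2 * np.sin(2 * np.pi * 5_000 * t))

spectrum = 2 * np.abs(np.fft.rfft(signal)) / len(signal)  # amplitude per frequency bin
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

print(freqs[spectrum > 0.1])   # -> [ 440. 1000. 5000.]
```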
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Did you read the wiki link and follow-up article?

So,
lose this idea of one dimension; it does not exist.
Signals exist in both the frequency and the time domain; you can look at a signal in either or both, as any signal appears in both. It's just how you describe it.

So, for instance, take a signal, say voice;
in the frequency domain it occupies roughly the range 300 to 3300 Hz.

If you take two voice signals, they will overlap in frequency.

So multiply one voice signal by, say, 100 kHz and the other by 200 kHz,
and filter;
you now have one signal at 100,300 to 103,300 Hz and the other at 200,300 to 203,300 Hz.

The two now do not interfere, as they are at separate frequencies.
In the time domain it's a very complex waveform, but the two signals are still in there, separately.

To recover the 100 kHz channel,
multiply the combined signal by 100 kHz and filter;
out will pop the baseband of the original signal.

How do you think you would recover the one based at 200 kHz?


If in doubt, do the maths:

Sine wave 1 at, say, 1 kHz;
sine wave 2 at, say, 2 kHz.

Multiply one by 100 kHz, one by 200 kHz,
and add. What equation do you get?
Yes, I’m familiar with all the issues.

Many learned people on here would argue that a waveform has one degree of freedom, and that there is no actual reality-based “z-depth” as observed in the time domain. This was discussed at length in a thread in the Off-Topic area that I believe you contributed to. I believe there has to be some kind of z-depth phenomenon to account for the myriad multimedia information that can be embedded over any duration.

At any given range, there are “embedded modulations” and ”sub-modulations” that are kept distinctly addressable and identifiable by the mind.

As crutschow said above, “The ’physics’ is that for multiple frequencies in one signal, the signal voltage at any one instant in time represents the composite value of all the signals.” That’s 1D all day long, akin to a flattened Photoshop PSD file.

The voltage at any given moment is a numeric value, high or low, and yet it carries way more than just that 1D data.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
The "physics" is that for multiple frequencies in one signal, the signal voltage at any one instant in time represents the composite value of all the signals.
But at any instant in time, there are no frequencies. Frequency is given in hertz (cycles per second), so you need to look at the signal over time to see all the frequencies.
So, if you look at the signal with an oscilloscope, it will likely look like a jumble of amplitudes (even like noise if there are a lot of frequencies),
but the separate frequencies can be seen if you look at the signal with a spectrum analyzer (which basically shows the Fourier components).
You can also recover the individual signals with a narrow band-pass filter tuned to each of them.
Thanks for the reply. Understood, and my question relates to how much data can be organized in a given range, retrieved, and identified by the mind. This is the crux as posted by a user in another thread:

“Although the human ear and brain can recognize individual instruments in an orchestra, a Fourier analysis cannot. The reason is that each instrument produces its own unique harmonics and overtones which we can identify with that instrument. A Fourier analysis separates an audio signal into all of the sinusoidal waveforms that make up the fundamental tone and its associated harmonics. Once they are separated it is not possible to identify which signal is associated with a specific instrument unless the spectral identity of the instrument is known.”​

So the brain is NOT just doing a Fourier analysis to “get at” all that spectral data. What is it doing, and where is the data?
 

Ya’akov

Joined Jan 27, 2019
9,069
No frequency in any context can be determined by an instantaneous measurement. Frequency has two dimensions that are inextricably linked: amplitude and time. To measure a frequency you have to sample the amplitude for a time of at least 1/ƒ.
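A small numerical illustration of that point (the two frequencies here are arbitrary assumptions): a single sample of each waveform is ambiguous, while a window of samples is not.

```python
# One instantaneous sample carries no frequency information; a window of samples does.
import numpy as np

fs = 100_000
t = np.arange(0, 0.02, 1 / fs)          # 20 ms window

s1 = np.sin(2 * np.pi * 1_000 * t)      # 1 kHz
s2 = np.sin(2 * np.pi * 2_000 * t)      # 2 kHz

print(s1[0], s2[0])                     # 0.0 0.0 -- identical at this single instant

def dominant_freq(x, fs):
    """Frequency of the largest FFT bin over the whole window."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs[np.argmax(np.abs(np.fft.rfft(x)))]

print(dominant_freq(s1, fs), dominant_freq(s2, fs))   # 1000.0 2000.0
```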

As far as human perception goes, it is a mistake to compare the system of the human sensorium and brain to a single sensor such as a microphone. Part of the information you are looking for is stored in the brain. Human perception is not equivalent to a recording of sensor data; it is constructed by the mind from many inputs and historical data. The mind adds in missing data that is "expected" and ignores or normalizes data that is unexpected.

Taken together, these two things explain all the problems you raise. The time domain cannot be considered independently of the amplitude domain, and human perception is a construction, not a recording or presentation of the instantaneous state of things "outside" the person.
 

Deleted member 115935

Joined Dec 31, 1969
0
Hi
You seem to have gone a long way off topic, talking about how a human perceives sounds, from the original question of how to frequency-multiplex.

Maybe a new topic is in order.
 

Ya’akov

Joined Jan 27, 2019
9,069
Hi
You seem to have gone a long way off topic, talking about how a human perceives sounds, from the original question of how to frequency-multiplex.

Maybe a new topic is in order.
Please see #7

Also, I am directly responding to "where is the information?" since the TS has conflated stored and derived information with transmitted information.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Hi
You seem to have gone a long way off topic, talking about how a human perceives sounds, from the original question of how to frequency-multiplex.

Maybe a new topic is in order.
Yes, you’re correct... it’s a department in the same department store, if you will. I will try to steer it back.

To Yaakov, thanks for your reply... yep, you’re also correct, and I’m aware time is required for that process beyond instantaneity (though I’m “Cartesian” with respect to the brain-mind duality when it comes to the actual spatial elements not stored in the brain; also, yes, most certainly the brain is a complex processor and the microphone a simple “sensor” — definitely worlds apart to that end). :—)

At present, I’m trying to plumb the notion of exactly how much data can be stored over a certain time in a single waveform, how discretely addressable those data are, and how they are “organized” and kept as discrete A/V within the Fourier components. The mind’s ability and intention to seamlessly cannibalize and process those waves is a separate issue.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
(I’m also investigating a de facto definition for information itself (other than “that which reduces uncertainty”), and the type of discrete countable data that undergird the notion of ℕ vs. the continuous, indissociable data that undergird ℝ as it relates to wave processing, reason, meaning, and the distinction between spatial (reality) geometry vs. vectorial (simulated) geometry)
 

Ya’akov

Joined Jan 27, 2019
9,069
Though some in IT are beginning to question the generality of Shannon, so far it hasn’t been effectively challenged as a general theory, and Landauer’s insight into the physical nature of information (https://yaakov.me/landauer.pdf) combined with the strong correlation between thermodynamics and Shannon channels tends to suggest information is simply stuff arranged usefully.
 

BobaMosfet

Joined Jul 1, 2009
2,110
Thanks for the reply. Understood, and my question relates to how much data can be organized in a given range, retrieved, and identified by the mind. This is the crux as posted by a user in another thread:

“Although the human ear and brain can recognize individual instruments in an orchestra, a Fourier analysis cannot. The reason is that each instrument produces its own unique harmonics and overtones which we can identify with that instrument. A Fourier analysis separates an audio signal into all of the sinusoidal waveforms that make up the fundamental tone and its associated harmonics. Once they are separated it is not possible to identify which signal is associated with a specific instrument unless the spectral identity of the instrument is known.”​

So the brain is NOT just doing a Fourier analysis to “get at” all that spectral data. What is it doing, and where is the data?
@Jennifer Solomon
I'll PM you.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Though some in IT are beginning to question the generality of Shannon, so far it hasn’t been effectively challenged as a general theory, and Landauer’s insight into the physical nature of information (https://yaakov.me/landauer.pdf) combined with the strong correlation between thermodynamics and Shannon channels tends to suggest information is simply stuff arranged usefully.
I believe it is physical, but it’s extra-dimensional and co-located, and it needs to be carefully triangulated via a reductio ad absurdum approach.

Bits are discrete, and no one bit knows or cares what any other bit is doing. It is consciousness that is the basis of attributing indissociable form to that which has no innate arrangement. The “arrangement” is the entire issue. A 2D array in RAM is not 2D to the computer. But we, as conscious living entities, insist on a major distinction between 1D and higher dimensions that are not just “informatic,” vectorial dimensions, but “real” continuous, contiguous, indissociable dimensionality and shape. Our bodies and brains are forms in space independent of discrete bits on a disc. But if we are just a mechanical info-processing computer, we don’t and can’t “know” that. We can “match” bits that correspond to the “actual formal knowledge“ only. But consciousness is that which is doing that very differentiation.
 

Deleted member 115935

Joined Dec 31, 1969
0
" reductio ad absurdum" , is one way of approaching science,


You seem @Jennifer Solomon to keep coming back to the head / body question,
now your talking about 2D ram ,

I don't see how that's related to your question of frequency division multiplexing,

may be its time to start a new question ?
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
" reductio ad absurdum" , is one way of approaching science,


You seem @Jennifer Solomon to keep coming back to the head / body question,
now your talking about 2D ram ,

I don't see how that's related to your question of frequency division multiplexing,

may be its time to start a new question ?
I was just addressing the one tangent above.

Per the original question, I understand that sinusoidal Fourier components comprise any section of a waveform. But what I’m not closing on is how we can take two complex waveforms, p and q, combine them to create waveform r, and then perform a Fourier transform on r that theoretically reveals the original, intact p and q. And of course this extends to combining more than two. There is extreme crosstalk between all those waves in the final, aggregate waveform. It’s the “discreteness” and organization of the time-domain spectrographic elements I’m trying to delineate (something the mind ultimately does effortlessly and instantaneously, beyond any current machine).

To me it’s akin to having a multi-layer Photoshop file where each layer is loaded with complex pixel arrangements; we flatten all the layers into one, but then how could we “recall” the individual layers with absolutely no metadata on which specific pixels belonged to which layers?
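For what it’s worth, here is that comparison as a rough sketch (the frequencies are arbitrary assumptions, not from any post above): filtering can “un-flatten” r only when p and q occupy disjoint frequency bands, which is exactly what FDM arranges; when their spectra overlap, the sum really is a flattened layer with no metadata.

```python
# Sketch with assumed frequencies: separation works only when p and q occupy disjoint bands.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(0, 0.5, 1 / fs)

# Case 1: disjoint bands -- p sits at 1 kHz, q at 10 kHz.
p = np.sin(2 * np.pi * 1_000 * t)
q = np.sin(2 * np.pi * 10_000 * t)
r = p + q

b, a = butter(6, 2_000, btype="low", fs=fs)   # keep everything below 2 kHz
p_hat = filtfilt(b, a, r)                     # "un-flattens" r back into p
print(np.max(np.abs(p_hat - p)[1000:-1000]))  # tiny residual away from the edges

# Case 2: overlapping bands -- both "layers" sit at 1 kHz.
p2 = np.sin(2 * np.pi * 1_000 * t)
q2 = 0.5 * np.sin(2 * np.pi * 1_000 * t + 1.0)
r2 = p2 + q2
# r2 is itself just another 1 kHz sinusoid: no filter or Fourier transform can
# recover p2 and q2 from it without outside knowledge of what they were.
```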
 