Frequencies and muxing

Status
Not open for further replies.

Deleted member 115935

Joined Dec 31, 1969
0
As a start

let's keep it simple:
do we agree that the equation for the amplitude of a sine wave at any time t is V = A * sin( wt ),
where w is 2 * pi * f, f being the frequency,
and sin takes its argument in radians?
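
For anyone who wants to plug numbers into that, here is a minimal Python sketch (the 1 kHz frequency, 1 V amplitude, and the sample times are assumptions chosen purely for illustration):
Code:
import math

A = 1.0        # amplitude in volts (assumed for illustration)
f = 1000.0     # frequency in Hz (assumed for illustration)
w = 2 * math.pi * f

for t in (0.0, 0.000125, 0.00025):   # a few instants in seconds
    V = A * math.sin(w * t)          # instantaneous amplitude V = A * sin(wt)
    print(f"t = {t:.6f} s  ->  V = {V:+.3f} V")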
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
I'm saying the opposite -- the receiving end cannot "de-aggregate" a complex waveform within a single bandwidth if each of the component waveforms mix within that bandwidth. An FFT can tell you that 5 kHz is present, but it cannot tell you where it came from.
Right, but I’m talking “inter-bandwidth”: the single wave can be parsed with FFT into disassociated waveling components, but these are not “bandwidth grouped.” But the demuxer is effectively parsing groups of complex waveforms, one from another. How are the groups losslessly maintaining their individual overtones in their respective bandwidth sandboxes? I’m seeing correlated group meta-data effectively vanish (flatten) and re-materialize.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
A Photoshop file can have any number of groups of layers. The layer data per each group is localized to the group. If we flatten the group, the group metadata disappears, and is unrecoverable. We can bounce that to a flattened JPEG or GIF as a “flattened waveform.”

If we take 10 complex audio waveforms, each of them considered a group of layers, and we add them together, we have 1 waveform representing all the layers, not the (bandwidth) groupings. When we demux the final waveform, and we hear all of the data associated with a discrete group on a single frequency, we have resurrected the vanished group metadata in the Photoshop file. Yes?

We have nested complex waveforms maintaining discrete addressability after flattening them. FFT doesn’t yield the child complex waveforms, only the dissociated sine waves composing the flattened parent wave.
 

Deleted member 115935

Joined Dec 31, 1969
0
do we agree that the equation for the amplitude of a sine wave at any time t is V = A * sin( wt ),
where w is 2 * pi * f, f being the frequency,
and sin takes its argument in radians?
 

bogosort

Joined Sep 24, 2011
696
Right, but I’m talking “inter-bandwidth”: the single wave can be parsed with FFT into disassociated waveling components, but these are not “bandwidth grouped.” But the demuxer is effectively parsing groups of complex waveforms, one from another. How are the groups losslessly maintaining their individual overtones in their respective bandwidth sandboxes? I’m seeing correlated group meta-data effectively vanish (flatten) and re-materialize.
The demuxer can only separate the inter-bandwidth groups if the bandwidth of each group is distinct. Within each group -- intra-bandwidth -- there is no general way to classify all the components. A given 10 kHz harmonic component may be the first overtone of a 5 kHz component, or it may be the tenth overtone of a 1 kHz component. There is no way to know using just the information present in the signal.
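
To see that numerically, here is a minimal numpy sketch (the frequencies, amplitudes, and sample rate are assumptions for illustration): two different groupings of the same components sum to an identical waveform, so nothing in the summed signal can say which grouping produced it.
Code:
import numpy as np

fs = 48000                      # sample rate in Hz (assumed)
t = np.arange(fs) / fs          # one second of sample times

def tone(f):
    return np.sin(2 * np.pi * f * t)

# Grouping 1: source X = 5 kHz plus its 10 kHz overtone, source Y = 1 kHz alone
mix1 = (tone(5000) + tone(10000)) + tone(1000)
# Grouping 2: source X = 5 kHz alone, source Y = 1 kHz plus a 10 kHz component
mix2 = tone(5000) + (tone(1000) + tone(10000))

print(np.allclose(mix1, mix2))                      # True: identical waveforms
print(np.allclose(np.abs(np.fft.rfft(mix1)),
                  np.abs(np.fft.rfft(mix2))))       # True: identical spectra, no group info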
 

bogosort

Joined Sep 24, 2011
696
A Photoshop file can have any number of groups of layers. The layer data per each group is localized to the group. If we flatten the group, the group metadata disappears, and is unrecoverable. We can bounce all layers to a flattened JPEG or GIF as a “flattened waveform.” Group data vanishes.
But our eyes cannot distinguish between the image as a group of layers and the same image after it's been flattened. The metadata is strictly for the program and not present in the light.

If we take 10 complex audio waveforms, each of them considered a group of layers, and we add them together, we have 1 waveform representing all the layers, not the (bandwidth) groupings.
But that's not what a muxer does. In frequency-division muxing, the 10 groups have to be shifted in frequency so that they occupy distinct bandwidths. In time-division muxing, the 10 groups have to be shifted in time so that they occupy distinct time slices. If we want to keep the groups separate, we must physically separate them.
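
As a rough illustration of the frequency-division case, here is a minimal numpy sketch (the carrier frequencies, the toy baseband signals, and plain double-sideband amplitude modulation are all assumptions chosen for simplicity, not how any particular real system does it):
Code:
import numpy as np

fs = 200_000                            # sample rate in Hz (assumed)
t = np.arange(fs) / fs                  # one second of sample times

# Two baseband "groups", each only a few kHz wide (toy stand-ins for real audio)
group1 = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
group2 = np.sin(2 * np.pi * 300 * t)

# Shift each group into its own slice of spectrum by multiplying with a carrier
carrier1, carrier2 = 20_000, 40_000     # Hz, far enough apart that the bands never overlap
muxed = (group1 * np.cos(2 * np.pi * carrier1 * t)
         + group2 * np.cos(2 * np.pi * carrier2 * t))

# 'muxed' is a single 1-D waveform, but the groups now occupy distinct bands
# around 20 kHz and 40 kHz, which is what lets a demuxer pull them apart later.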

When we demux the final waveform, and we hear all of the data associated with a discrete group on a single frequency, we have resurrected the vanished group metadata in the Photoshop file. Yes?
Not sure what you mean by a single frequency. The demuxer cannot resurrect what's not already there -- this is why the muxer is responsible for physically separating the signals, so that the demuxer can simply pick out the separated groups.
 

Deleted member 115935

Joined Dec 31, 1969
0
do we agree the equation of a sine wave amplitude at any time is V = A * sin( wT )
where w is 2 * pi * f , f being frequency
and sin is in the radians
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
But our eyes cannot distinguish between the image as a group of layers and the same image after it's been flattened. The metadata is strictly for the program and not present in the light.
But is there not implied metadata in the demuxing process?

The FDM demuxer sees a single complex input, and then when it demuxes the signal into 10 independent bandwidth groups, all of the associated wavelings are in each group, correct?

I just don't see how the groups are losslessly kept intact upon aggregating, flattened into one wave, and the demuxer is now uncovering nested complex waveform modulations that aren't anywhere in the parent actual wave shape.
 

bogosort

Joined Sep 24, 2011
696
But is there not implied metadata in the demuxing process?
Yes, certainly. The metadata is the encoding system the designer of the system chose for the mux, e.g., "each group will have a bandwidth of 6 kHz, and be separated by 10 kHz". The demux process uses this metadata to parse out the desired data.
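
In code form that metadata is nothing more than a channel plan both ends agree on beforehand; a toy version using the numbers above might look like this (purely illustrative):
Code:
# Toy channel plan shared by muxer and demuxer (numbers from the example above)
PLAN = {"group_bandwidth_hz": 6_000, "carrier_spacing_hz": 10_000}

def band_for_group(k, plan=PLAN):
    """Return the (low, high) frequency edges the demux should filter for group k."""
    center = (k + 1) * plan["carrier_spacing_hz"]
    half = plan["group_bandwidth_hz"] / 2
    return center - half, center + half

print(band_for_group(0))   # (7000.0, 13000.0)
print(band_for_group(1))   # (17000.0, 23000.0)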

The FDM demuxer sees a single complex input, and then when it demuxes the signal into 10 independent bandwidth groups, all of the associated wavelings are in each group, correct?
Yup.

I just don't see how the groups are losslessly kept intact upon aggregating, flattened into one wave, and the demuxer is now uncovering nested complex waveform modulations that aren't anywhere in the parent actual wave shape.
Let's organize this into a hierarchy:
Code:
signal A
 - signal A1
   * signal A1.1
     + sine A1.1.1
     + sine A1.1.2
   * signal A1.2
     + sine A1.2.1
     + sine A1.2.2
   * signal A1.3
     + sine A1.3.1
     + sine A1.3.2

 - signal A2
   * signal A2.1
     + sine A2.1.1
     + sine A2.1.2
   * signal A2.2
     + sine A2.2.1
     + sine A2.2.2
   * signal A2.3
     + sine A2.3.1
     + sine A2.3.2
Signal A is the single, wide-band signal that the demux sees. Signals A1 and A2 are the groups, which we presume have some fixed bandwidth and are well-separated from each other in frequency space.

Each group is composed of three complex signals. The first group has A1.1, A1.2, and A1.3. Each of these complex signals is composed of two sinusoids, here labeled as, e.g., A1.1.1 and A1.1.2.

So, the demux sees signal A and uses external metadata to know where to filter the signal to extract signals A1 and A2. Once it has done this, we now have two separate complex signals A1 and A2. Can we use another round of demuxing to get the sub-signals within each group? Nope. For example, the complex signal A1.1 shares the same bandwidth as the complex signal A1.2. We can run A1 through an FFT and see all of the sine wave components, but we cannot tell which sine wave belongs to which signal (A1.1 or A1.2).
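
Here is a minimal numpy sketch of that first demux step (toy signals, assumed carrier placement, and a crude FFT "brick-wall" band filter rather than anything a real receiver would use): filtering A around each group's band recovers A1 and A2, but no further filtering can split the components that share a band.
Code:
import numpy as np

fs = 200_000
t = np.arange(fs) / fs

# Toy groups already sitting in distinct bands (assumed placement around 20 kHz and 40 kHz)
A1 = np.sin(2 * np.pi * 20_500 * t) + 0.5 * np.sin(2 * np.pi * 21_000 * t)
A2 = np.sin(2 * np.pi * 40_200 * t)
A = A1 + A2                                   # the single wide-band signal the demux sees

def band_filter(signal, lo, hi, fs):
    """Crude brick-wall band-pass: zero every FFT bin outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

A1_recovered = band_filter(A, 17_000, 23_000, fs)   # band edges come from external metadata
A2_recovered = band_filter(A, 37_000, 43_000, fs)
print(np.allclose(A1_recovered, A1))                # True: the group comes back intact
print(np.allclose(A2_recovered, A2))                # True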

To make it all concrete, imagine that A1 is the recording of a musical performance by a guitar and piano duet, and that A2 is the audio from a football game broadcast. Because A1 and A2 occupy distinct bandwidths, the demuxer can keep them separate (the music listeners won't hear an NFL play-by-play happening at the same time). Further, suppose that A1.1 is the microphone feed from the guitar, and A1.2 is the mic feed from the piano. Since A1.1 occupies the same bandwidth as A1.2, the demuxer cannot separate the guitar track from the piano track. The best we can do is use special software to analyze the data and take an educated guess at which sine waves belong to which track (this is what our brains do), but we cannot get this information from signal A1 alone.
 

xox

Joined Sep 8, 2017
838
But is there not implied metadata in the demuxing process?


The FDM demuxer sees a single complex input, and then when it demuxes the signal into 10 independent bandwidth groups, all of the associated wavelings are in each group, correct?


I just don't see how the groups are losslessly kept intact upon aggregating, flattened into one wave, and the demuxer is now uncovering nested complex waveform modulations that aren't anywhere in the parent actual wave shape.

Think of a recording, a symphony for example. To the sound card it appears as a sequence of N samples per second, where N is the sample rate. In this example, keep it simple and only use powers of two for N. That just guarantees that we can perform a "perfect" FFT on the signal. Now you just have to decide the size of window to "draw" around it. Often this will be N, but it can be larger or smaller, and that defines the resolution of the FFT. Too small and you will encounter distortions (UNLESS you can somehow increase your sample rate; see Nyquist frequency for reference). Too large and adjoining "notes" in a signal can be inadvertently merged into "phantom frequencies". Digital signal processing can be tricky!
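
To make the window-size trade-off concrete, here is a minimal numpy sketch (the 440/445 Hz test tones and the two window lengths are assumptions chosen for illustration): with a short window the two tones blur into a single spectral lump, while a longer window resolves them into separate peaks.
Code:
import numpy as np

fs = 8192                                     # sample rate in Hz (a power of two, per the above)
t = np.arange(int(fs * 2.0)) / fs             # two seconds of sample times
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 445 * t)   # two closely spaced tones

def peak_count(window_len):
    """FFT one window of the signal and count distinct magnitude peaks between 400 and 500 Hz."""
    mags = np.abs(np.fft.rfft(signal[:window_len]))
    freqs = np.fft.rfftfreq(window_len, d=1 / fs)
    band = mags[(freqs > 400) & (freqs < 500)]
    return sum(1 for i in range(1, len(band) - 1)
               if band[i] > band[i - 1] and band[i] > band[i + 1]
               and band[i] > 0.1 * band.max())

print(peak_count(1024))    # short window: ~8 Hz bins, 440 and 445 Hz merge into 1 peak
print(peak_count(16384))   # long window: ~0.5 Hz bins, the two tones resolve into 2 peaks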

Anyway, once we have all the frequencies they are then squashed down somewhat, throwing away the many unused frequencies from any given spectrum. Modulation is simply the mapping from some larger, low-entropy space to a much smaller, high-entropy one. Demodulation is the reverse. The individual signals are separated, extracted, expanded and finally converted back into sound (the original symphony).

It should be noted that not all audio signals are modulated. Certainly for radio, television, and cable they will be. But "raw" audio formats exist where no modulation is necessary, or even desired.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Yes, certainly. The metadata is the encoding system the designer of the system chose for the mux, e.g., "each group will have a bandwidth of 6 kHz, and be separated by 10 kHz". The demux process uses this metadata to parse out the desired data.


Yup.


Let's organize this into a hierarchy:
Code:
signal A
- signal A1
   * signal A1.1
     + sine A1.1.1
     + sine A1.1.2
   * signal A1.2
     + sine A1.2.1
     + sine A1.2.2
   * signal A1.3
     + sine A1.3.1
     + sine A1.3.2

- signal A2
   * signal A2.1
     + sine A2.1.1
     + sine A2.1.2
   * signal A2.2
     + sine A2.2.1
     + sine A2.2.2
   * signal A2.3
     + sine A2.3.1
     + sine A2.3.2
Signal A is the single, wide-band signal that the demux sees. Signals A1 and A2 are the groups, which we presume have some fixed bandwidth and are well-separated from each other in frequency space.

Each group is composed of three complex signals. The first group has A1.1, A1.2, and A1.3. Each of these complex signals is composed of two sinusoids, here labeled as, e.g., A1.1.1 and A1.1.2.

So, the demux sees signal A and uses external metadata to know where to filter the signal to extract signals A1 and A2. Once it has done this, we now have two separate complex signals A1 and A2. Can we use another round of demuxing to get the sub-signals within each group? Nope. For example, the complex signal A1.1 shares the same bandwidth as the complex signal A1.2. We can run A1 through an FFT and see all of the sine wave components, but we cannot tell which sine wave belongs to which signal (A1.1 or A1.2).

To make it all concrete, imagine that A1 is the recording of a musical performance by a guitar and piano duet, and that A2 is the audio from a football game broadcast. Because A1 and A2 occupy distinct bandwidths, the demuxer can keep them separate (the music listeners won't hear an NFL play-by-play happening at the same time). Further, suppose that A1.1 is the microphone feed from the guitar, and A1.2 is the mic feed from the piano. Since A1.1 occupies the same bandwidth as A1.2, the demuxer cannot separate the guitar track from the piano track. The best we can do is use special software to analyze the data and take an educated guess at which sine waves belong to which track (this is what our brains do), but we cannot get this information from signal A1 alone.
Yes, I can see that it definitely does do this. My specific problem is essentially "how" with respect to the data organization:

If we capture a single instrument on a microphone like a piano, I can see how all the sine waves combine to create an aggregate waveform representing just the piano. Then we can use FFT and perhaps AI and parse the notes some and even shift them a la Melodyne. It could be said that each note is a fundamental sub-carrier that the overtones integrate with to create the rich piano timbre and respective pitches.

It's clear to see how the resulting complex waveform is composed of 100's of overtones at any given moment that "sound as one polyphonic piano-timbre chord."

But my issue here is in taking signal A1 above—which is a piano AND guitar, with countless overtones concurrently comprising it—and this is seen as a complex waveform unto itself with n-bytes of data describing it at every point... It itself is a modulated wave.

...and signal A2 above, which is the audio of a football game broadcast—it too with countless overtones concurrently comprising it—and this TOO is seen as a complex waveform unto itself with n-bytes of data describing it... It itself is a modulated wave.

...and now taking signal A which is some parent carrier frequency and modulating it with A1 and A2, and if you were to physically examine signal A, you'd see just another complex wave and have NO IDEA how its parts comprise it or are organized in tiers within it! It's just a shape!

You could FFT it and see how the sine waves create that wave. But what we can't see or even mathematically describe (that I can tell?) is how A is maintaining the discreteness of A1 and A2 in the waveform itself—organized "within it"—when ALL you are literally dealing with is x-y, vanilla oscillations. The wave is morse code in reality. Every point is a unique number — we can take a digital snapshot of the wave at 16 or 24-bit and 44.1kHz or 192kHz or whatever. And we just have numbers at every point. There are untold tiers of nested, grouped, polyphonic data being represented at every single discrete voltage oscillation.

And that's just A1 and A2. With a wave comprising 900 unique phone conversations at every single point, you are only dealing with a single voltage number.

Let's say A1 has 4GB of wave data and A2 has 5GB of wave data at every point.

Precisely where is this nested data at every point, when you simply have a one-dimensional numeric voltage snapshot?

Technically by parsing out the individual complex waveforms, you are "pulling additional data" out of nowhere. I do not see precisely where it is described in the wave. I see the math exists for us to GET to the individual sub-carriers and "out pops their modulated selves" upon filtering. But I do not see physically where or how it is retained and physically organized.

A complex wave has a voltage level associated with each point of it and it alone. That number, when "demodulated" is being extrapolated into tons of other organized and grouped numbers at every point.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
For the record to any moderators, I opened another thread (this one) per the advice of andrewmm to remain specifically on point with the parent question that does in fact involve electronics, muxing, and information representation in physical space.

To Wendy, per your reply to the other thread: Anyone who looks beyond 5% of my posts knows I'm no troll. A troll is someone with disingenuous aim to "make a deliberately offensive or provocative online post with the aim of upsetting someone or eliciting an angry response from them." I have no such aim and there is clear proof of this.

The posters who repeatedly reply with extensive, helpful answers (bogosort, xox, MrAl, andrewmm, Delta Prime, WBahn, Yaakov, MrChips and others) know I'm no troll, despite my having initially "appeared" that way in the Theory of Everything thread I started last year, which now enjoys over 31K views and adds 1K views every 2 weeks.

It is chock-full of thousands of hours of cogent interaction and study from myself and others (principally bogosort later on), and every thread I post asking questions has that parent aim in mind. That is NOT the activity of a troll.

I've made it abundantly clear what I'm looking for on every sub-topic, and I'm asking deeper-than-everyday questions in the information-theory, signal-processing, logic, hardware, and other spaces that do at times ride the edge of philosophy—as did the thinking of the venerable George Boole, Stephen Kleene, John von Neumann, and Claude Shannon, which in some way ultimately informs all of the circuitry-speak on this board—in order to incorporate and cognize things at a much more fundamental level than just textbook answers, and with an aim to understand the "grey-matter bio circuit" we are using to "talk about circuits."

I am making a pointed effort to keep the topic on-point with the created subject, because I myself respect contextual adherence. Sometimes the topic slightly veers in order to glean a deeper awareness of the parent topic—but I desire to keep it specifically within the established topic.
 

bogosort

Joined Sep 24, 2011
696
It's clear to see how the resulting complex waveform is composed of 100's of overtones at any given moment that "sound as one polyphonic piano-timbre chord."
As a factual aside, there are maybe a dozen perceptible overtones in a loud, low-register piano or guitar note. There are fewer than 30 in a middle-register major triad (many of the overtones are repeated). Doesn't change anything, but it's good to have a sense of the scale of things.

...and now taking signal A which is some parent carrier frequency and modulating it with A1 and A2, and if you were to physically examine signal A, you'd see just another complex wave and have NO IDEA how its parts comprise it or are organized in tiers within it! It's just a shape!
From most angles, an apple orchard looks like a random group of trees. But if you stand at the right spot, it becomes immediately apparent that the trees are organized, arranged in rows. Likewise, if you didn't know anything about signal A and looked at it on a frequency analyzer, you'd immediately notice that it was carrying information, organized in two distinct bands.

Precisely where is this nested data at every point, when you simply have a one-dimensional numeric voltage snapshot?
This is the key concept, right here. You're absolutely correct that from the perspective of the receiver, signal A is just a sequence of changing voltage levels, a one-dimensional signal. If we take a snapshot of this signal, as we do when we digitally sample it, all we have is a single number. How in the world does a single number encode all of the information from the piano, the guitar, the room they were recorded in, the boozy commentators in the NFL broadcasting booth? Simple, it doesn't. Instead, all of that juicy information is encoded in how the voltage level changes over time.

A single voltage level transmits precisely zero bits of information. In the absence of noise, a time-varying voltage level can transmit infinitely many bits of information. In between those two extremes, we have a noisy channel with a time-varying voltage that can transmit some finite number of bits (see Shannon for details). If the signal-to-noise ratio is good enough, we can transmit a lot of information with a simple one-dimensional voltage signal. Hopefully you grok how the simple addition of change over time allows far more information to be sent than a single number could ever hold.
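
To put a rough number on that, here is the Shannon–Hartley formula in a tiny Python sketch (the 6 kHz bandwidth and 30 dB signal-to-noise ratio are assumed values, picked only to match the scale of the earlier examples):
Code:
import math

def capacity_bits_per_s(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g. one 6 kHz group on a channel with a 30 dB signal-to-noise ratio
print(f"{capacity_bits_per_s(6_000, 30):,.0f} bits/s")   # roughly 59,800 bits/s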

The other part is the encoding -- how we organize the data within the signal, or, if you prefer, how we nest the data. There are millions of encoding schemes, but I think the thing that you're wondering about is how we can squeeze all those nests of nests of information into a stupid one-dimensional voltage signal. Again, hopefully you can see that so long as we don't exceed the channel's information capacity, we're free to structure the data in any way we see fit.
 

bogosort

Joined Sep 24, 2011
696
It's very clear that Jennifer Solomon is genuinely interested in these topics and is not a troll. The subject of transmission and representation of information is quintessentially on topic in an engineering forum. Frankly, many engineers don't even realize these questions need to be asked in the first place. They either forget or never knew who Hartley, Shannon, and Gabor are -- some of the greatest engineers ever -- who asked the very same types of questions.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
As a factual aside, there are maybe a dozen perceptible overtones in a loud, low-register piano or guitar note. There are fewer than 30 in a middle-register major triad (many of the overtones are repeated). Doesn't change anything, but it's good to have a sense of the scale of things.


From most angles, an apple orchard looks like a random group of trees. But if you stand at the right spot, it becomes immediately apparent that the trees are organized, arranged in rows. Likewise, if you didn't know anything about signal A and looked at it on a frequency analyzer, you'd immediately notice that it was carrying information, organized in two distinct bands.


This is the key concept, right here. You're absolutely correct that from the perspective of the receiver, signal A is just a sequence of changing voltage levels, a one-dimensional signal. If we take a snapshot of this signal, as we do when we digitally sample it, all we have is a single number. How in the world does a single number encode all of the information from the piano, the guitar, the room they were recorded in, the boozy commentators in the NFL broadcasting booth? Simple, it doesn't. Instead, all of that juicy information is encoded in how the voltage level changes over time.

A single voltage level transmits precisely zero bits of information. In the absence of noise, a time-varying voltage level can transmit infinitely many bits of information. In between those two extremes, we have a noisy channel with a time-varying voltage that can transmit some finite number of bits (see Shannon for details). If the signal-to-noise ratio is good enough, we can transmit a lot of information with a simple one-dimensional voltage signal. Hopefully you grok how the simple addition of change over time allows far more information to be sent than a single number could ever hold.

The other part is the encoding -- how we organize the data within the signal, or, if you prefer, how we nest the data. There are millions of encoding schemes, but I think the thing that you're wondering about is how we can squeeze all those nests of nests of information into a stupid one-dimensional voltage signal. Again, hopefully you can see that so long as we don't exceed the channel's information capacity, we're free to structure the data in any way we see fit.
Yes, I definitely grok the time element being fundamental to the information storage and transmission. (I have a serious problem using the word "time" and not really knowing precisely what it is we're discussing other than an experiential illusion, but that's a digression).

Thinking more deeply about it, I've finally crystallized the issue I'm having: the physical representation of time-based superposition.

A sax is a complex waveform, as is the piano, the dog barking, and each of them are a highly complex wave unto themselves. When they all sound at the same time, they all constructively combine, but they do so at some rate and self-organization over time™. 1 second™ goes by, and a microphone over that 1 second is "hearing" a snapshot of all of those waves with their associated overtones, including the room's reverb.

They're causing the diaphragm to create a new additive wave with the 1D voltage fluctuations representing all of the superpositions which can theoretically be retrieved. Every point does have information in my estimation—but it's the "over time" element where the meaning is, which is the experiential ascription to that information. To someone who can hear 4ms difference in a snare hit, there could be meaning at that granularity. To someone who can hear only 500ms difference, there's different meaning at that perception level.

1D fluctuations over 2 seconds, for example — manifested as a pressure wave, we're dealing only with pressure changes. Where is the immense timbre qualia encoded, "muxed" into 1D voltage oscillations manifested as pressure waves over time? The mind begins to process the wave instantly upon hitting the ear. It immediately begins registering the sound the moment the ear-drum begins to vibrate. Within 1 second, there's sufficient information to identify the contents. I'm not seeing how 1D voltage fluctuations over 1-2 seconds can communicate all of the timbre information embedded in the wave, much less how 88,000 16-bit values correspond to that data.
 

Deleted member 115935

Joined Dec 31, 1969
0
Since you keep asking, I'll disagree. :)

To uniquely describe a sine, we need a few more parameters: \[ x(t) = A \sin(\omega_0 t + \phi) + b \]
As we all know, you're dead right, @bogosort.

I was trying to talk with @Jennifer Solomon

They, as they have said, are not like us scientists and/or engineers;
they seem to be more into the metaphysical / philosophical sciences.

I really want to help them, and at the start they asked us to keep this mathematical and implied not to go off topic.

I don't know about you, @bogosort, but when I teach students I start with the basics and get those agreed, so we can move on.

For @Jennifer Solomon

The equation I quoted is the basic form of a sine wave, giving the instantaneous amplitude at any time t:

V = A sin( wt )


As @bogosort has said, that can be taken further.

They have V = A sin( wt + p ) + c

Where c is a DC offset, and p is a phase offset.

What this means is that the "perfect" / base equation is for a sine wave of amplitude A, which starts at zero at time t=0 and is symmetrical around zero, swinging +-A in amplitude.
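
A tiny Python sketch of that difference, with assumed values for A, f, p, and c purely for illustration, shows the offset version no longer starting at zero or staying symmetric about zero:
Code:
import math

A, f = 1.0, 1000.0                  # amplitude and frequency (assumed)
w = 2 * math.pi * f
p, c = math.pi / 2, 0.5             # phase offset and DC offset (assumed)

for t in (0.0, 0.00025, 0.0005):
    base = A * math.sin(w * t)              # V = A sin(wt): starts at 0, swings +/- A
    shifted = A * math.sin(w * t + p) + c   # V = A sin(wt + p) + c: starts at A + c
    print(f"t = {t:.5f} s   base = {base:+.3f}   shifted = {shifted:+.3f}")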

That's all very true,
and thank you @bogosort for reminding us all of that,
but not what I would have covered in the first few lessons unless a student asked about DC offset or phase shift.


@Jennifer Solomon, if you want to engage in discussion about the topic, given your expressed desire to keep it mathematical, I am more than willing to work with you via PM.
 

Deleted member 115935

Joined Dec 31, 1969
0
But is there not implied metadata in the demuxing process?

The FDM demuxer sees a single complex input, and then when it demuxes the signal into 10 independent bandwidth groups, all of the associated wavelings are in each group, correct?

I just don't see how the groups are losslessly kept intact upon aggregating, flattened into one wave, and the demuxer is now uncovering nested complex waveform modulations that aren't anywhere in the parent actual wave shape.

The maths will show you this @Jennifer Solomon

An open forum is great for some things,
but the problem I see with an open forum is that it is like having many teachers all teaching their own lessons at you at the same time;
they might all be leading to the same place, but by different routes, and each teacher has a different style.
 

Deleted member 115935

Joined Dec 31, 1969
0
Yes, I definitely grok the time element being fundamental to the information storage and transmission. (I have a serious problem using the word "time" and not really knowing precisely what it is we're discussing other than an experiential illusion, but that's a digression).

Thinking more deeply about it, I've finally crystallized the issue I'm having: the physical representation of time-based superposition.

A sax is a complex waveform, as is the piano, the dog barking, and each of them are a highly complex wave unto themselves. When they all sound at the same time, they all constructively combine, but they do so at some rate and self-organization over time™. 1 second™ goes by, and a microphone over that 1 second is "hearing" a snapshot of all of those waves with their associated overtones, including the room's reverb.

They're causing the diaphragm to create a new additive wave with the 1D voltage fluctuations representing all of the superpositions which can theoretically be retrieved. Every point does have information in my estimation—but it's the "over time" element where the meaning is, which is the experiential ascription to that information. To someone who can hear 4ms difference in a snare hit, there could be meaning at that granularity. To someone who can hear only 500ms difference, there's different meaning at that perception level.

1D fluctuations over 2 seconds, for example — manifested as a pressure wave, we're dealing only with pressure changes. Where is the immense timbre qualia encoded, "muxed" into 1D voltage oscillations manifested as pressure waves over time? The mind begins to process the wave instantly upon hitting the ear. It immediately begins registering the sound the moment the ear-drum begins to vibrate. Within 1 second, there's sufficient information to identify the contents. I'm not seeing how 1D voltage fluctuations over 1-2 seconds can communicate all of the timbre information embedded in the wave, much less how 88,000 16-bit values correspond to that data.
@Jennifer Solomon

Regarding your comment about the "experiential":

You must remember, everything we as engineers and scientists do is based upon theories.
You mention time, implying the question of whether it exists.

Space-time is the current theory / model we have, and it works.

As we discussed in the last thread, which went way off topic but does have a catchy title, so it gets lots of views:
we once believed Newton, till Einstein came along,
etc.

But that has nothing to do with your explicit desire to keep this mathematical, and on topic about Frequencies and Multiplexing.

In the current models of space-time, to have a frequency as per your question you have to have time, since by definition a frequency is cycles per second.
 