Theory of Everything

bogosort

Joined Sep 24, 2011
696
How is it that the numbers, representing simple voltage fluctuations, permit a full analog reconstruction of the 3D wave so as to be able to access constituent parts of it post-recording??
The “sequence” of numbers is literally representing waves that should technically “not exist” after the mic picked it up, save for extra-dimensional rationale.
It feels like we're having one conversation in two threads. See my last post in the other thread for the answer to this question. The key point is that the 1D voltage (and the 1D sample sequence) does not permit reconstruction of the original 3D wave as it was in the room. What it permits is the reconstruction of the 1D acoustic wave recorded by the microphone, which is a "slice" of the full in-room 3D acoustic wave of the performance. What's missing in the 1D slice is extra information added by the room itself: if we subtract the 1D slice from the 3D original, we'd mostly hear "room sound", such as reverberations and resonances.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
It feels like we're having one conversation in two threads. See my last post in the other thread for the answer to this question. The key point is that the 1D voltage (and the 1D sample sequence) does not permit reconstruction of the original 3D wave as it was in the room. What it permits is the reconstruction of the 1D acoustic wave recorded by the microphone, which is a "slice" of the full in-room 3D acoustic wave of the performance. What's missing in the 1D slice is extra information added by the room itself: if we subtract the 1D slice from the 3D original, we'd mostly hear "room sound", such as reverberations and resonances.
Yeah, we are. :) See my response over at the other thread... we can stay over there if it's easier, but I think I got what I'm looking for out of our conversation.

Yes, you're correct... it's a different 3D wave, entirely. But my conclusion is still the same:

We have 3D to 1D to 3D representation, and at that final stage, we can address much of the discrete 3D spectrographic info in a new wave after it went through 1D!!
 

bogosort

Joined Sep 24, 2011
696
“Call the f*ckin paranormal hotline for further information” as to how those “dimensionless numbers” are describing voltage fluctuations that are reconstructing a new 3D wave with a speaker diaphragm(!) that has much of the multi-stem data intact that can be parsed and addressed, is what I’m getting at here!!
For simplicity, and without loss of generality, let's consider the monophonic case. The DAC produces a 1D voltage signal, which drives a speaker (a piston) that physically pushes the air in the room to and fro. In other words, the speaker creates a 3D acoustic wave. This new wave comprises the information that was present in the 1D voltage signal, plus the new information imparted by the listening room. That is, the new 3D signal -- the acoustic wave -- has more information than the 1D signal that produced it.

Listening to the same song in different environments makes this abundantly clear: a song sounds very different in the car, in a movie theatre, or over the grocery store's PA system. When we listen to a recording, we're hearing extra information that wasn't present in the original signal, namely, the information imparted by the listening environment. (There's also extra stuff from imperfect electronics and speakers, but we can ignore those in this context.)

Consider the signal flow:

original performance -> sound in a room -> microphone -> recording -> speakers -> sound in a room

What gets recorded? The microphone's perspective of the acoustic wave at a single point in the room. This 1D signal carries one dimension of information, representing how the original performance sounded at that point in space. How can complex, multi-point sources of information be encoded in one dimension? Two reasons. First, the carriers of those sources of information are waves, and waves are linear: they mix additively. Mix 1,000 different waveforms together and you'll end up with a single wave, with no loss of information. From that single wave, the original thousand components can be recovered.
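If you want to see this concretely, here's a minimal Python/numpy sketch -- five arbitrary sine components standing in for the 1,000 waveforms -- showing that the components survive the mix:

```python
import numpy as np

fs = 8000                     # sample rate in Hz, arbitrary for the demo
t = np.arange(0, 1, 1 / fs)   # one second of time samples

# Five component "instruments" standing in for the 1,000 waveforms:
freqs = [220.0, 330.0, 440.0, 587.0, 880.0]   # Hz, arbitrary choices
components = [np.sin(2 * np.pi * f * t) for f in freqs]

# Linear mixing: the single composite wave is just the sum.
mix = np.sum(components, axis=0)

# Recovery: the FFT of the 1D mix shows a distinct peak at each
# component frequency -- nothing was lost in the superposition.
spectrum = np.abs(np.fft.rfft(mix))
bins = np.fft.rfftfreq(mix.size, 1 / fs)
print(bins[spectrum > 0.25 * spectrum.max()])   # ~[220. 330. 440. 587. 880.]
```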

Second, though acoustic waves have three physical degrees of freedom, one is enough to characterize auditory information. (Note that this must be true; otherwise we wouldn't be able to understand speech with one ear.) Those two extra dimensions of the acoustic wave only carry information about the room. The information of the performance is a one-dimensional signal. If this is hard to picture, imagine that the instruments are electronic rather than acoustic. Each instrument puts out a time-varying voltage (1D), which can be mixed electronically to produce a superposition of the component voltages. This superposition is a complex signal, but it's still one-dimensional. What makes the musical performance become 3D is the room, which is distinct from the performance.

This is why we can encode a complex musical performance in a 1D sequence of samples. If we're interested in capturing the room information as well as the performance, we can use two microphones to record two correlated, one-dimensional signals. That gives us an extra degree of freedom (phase) with which to encode the effects of the room. Notice that even here, one of the dimensions in the 3D acoustic wave is redundant.
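As a sketch of that extra degree of freedom (the geometry and numbers below are invented for illustration), two 1D mic signals that differ only by a delay let us recover that delay -- the phase information a mono recording doesn't carry:

```python
import numpy as np

fs = 48000                                # sample rate in Hz
t = np.arange(0, 0.1, 1 / fs)
source = np.sin(2 * np.pi * 500 * t) * np.exp(-20 * t)  # a decaying burst

# Hypothetical geometry: mic B sits ~0.5 ms farther from the source than
# mic A, so it hears a delayed copy (levels and room sound ignored).
delay = int(0.0005 * fs)                  # 24 samples
mic_a = source
mic_b = np.concatenate([np.zeros(delay), source[:-delay]])

# Cross-correlating the two 1D signals recovers the inter-mic delay.
xcorr = np.correlate(mic_b, mic_a, mode="full")
lag = xcorr.argmax() - (len(mic_a) - 1)
print(lag, delay)                         # both print 24
```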
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Mix 1,000 different waveforms together and you'll end up with a single wave, with no loss of information. From that single wave, the original thousand components can be recovered.
But there is magic—for lack of a better term—happening here, because "recovered" is a very general term. We're going 3D to 1D back to an "approximated" 3D facsimile, but a spectrographic piece of software is working with 1D numbers on a hard drive that are representing the 3D information that was technically not captured at the prior 1D stage! You can then "photoshop" out various constituent waves and isolate the original data from this new wave, as if it was actually captured as a 3D wave with the spectrographic information. A wave carrying all those other sub-waves is cross-sectionally "stacked", as can be seen in the 3D animated video nsaspooks posted earlier. It is a 3D "thick" thing that is represented on a 2D screen from 1D information!

This very phenomenon is what I'm getting at. The new 3D wave is being reconstructed from a 1D source, and that new approximated 3D wave is maintaining much of the original's discretized elements. It's the discretized and addressable elements of that new 3D wave that are the mystery. They are addressable, isolatable. The microphone resonated to pick up the consolidated wave only, with no dimensional spectrographic elements, and it did not parse the wave before it was captured by the ADC.

This is real-time proof of something more going on... because 3D -> 1D -> 3D WITH the original's 3D "sub-wave" elements broadly maintained, almost like a DAW multitrack (seriously!) storing on independent channels, is not "business-as-usual" science, in my humble estimation.
 

bogosort

Joined Sep 24, 2011
696
But there is magic—for lack of a better term—happening here, because "recovered" is a very general term.
There's no magic. Suppose we have three electronic musical instruments performing together. We can model instrument A's performance as a 1D function of time: a(t). Likewise, we have b(t) and c(t) for instruments B and C. Mixing them together is equivalent to

x(t) = a(t) + b(t) + c(t)

Note that x(t) is a 1D signal, and that each of the component signals can easily be recovered. Most importantly, note that all of the performance information is one-dimensional.

We send x(t) to a speaker, which results in an acoustic wave y(r, t), where r is a three-vector over space.

We place a microphone in the middle of the room to record the performance. This mic captures a 1D signal m(t), corresponding to y(r, t) at a specific coordinate, i.e., the location of the mic.

Sampling the microphone signal m(t) gives us a discrete sequence m[n], which is stored on a computer. Later, you decide to play back the recording, so a DAC turns m[n] back into m(t), which is sent to a speaker, producing an acoustic wave y2(r, t).

If the recording and playback are in different rooms, what is common between the original acoustic wave y(r, t) and the playback acoustic wave y2(r, t)? The original 1D performance, x(t).

This is why we can record just the 1D part and get everything we need. Your "3D -> 1D -> 3D" characterization is missing the most important link in the chain, the original 1D performance. This is what the dimensional chain actually looks like:

1D -> 3D -> 1D -> 3D

As far as the performance information is concerned, the 3D parts are extraneous! There's no magic in saving 1D to 1D and getting back 1D.
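Here's a minimal Python sketch of that chain, with the 3D room legs omitted since they add nothing to the performance information (the instrument frequencies and the 16-bit "recording" are arbitrary stand-ins):

```python
import numpy as np

fs = 44100
t = np.arange(0, 1, 1 / fs)

# Three "electronic instruments" as 1D functions of time:
a = 0.3 * np.sin(2 * np.pi * 440 * t)
b = 0.3 * np.sin(2 * np.pi * 660 * t)
c = 0.3 * np.sin(2 * np.pi * 880 * t)
x = a + b + c                        # the 1D performance x(t)

# ADC: quantize to 16 bits (x is already sampled here).
m = np.round(x * 32767).astype(np.int16)

# DAC: back to a continuous-amplitude 1D signal.
x2 = m / 32767.0

# Recover instrument B alone from the playback signal with an
# idealized bandpass (a brick-wall FFT mask around 660 Hz).
spectrum = np.fft.rfft(x2)
bins = np.fft.rfftfreq(x2.size, 1 / fs)
b_rec = np.fft.irfft(spectrum * ((bins > 600) & (bins < 720)), n=x2.size)

print(np.max(np.abs(b_rec - b)))     # tiny: quantization noise only
```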
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
There's no magic. Suppose we have three electronic musical instruments performing together. We can model instrument A's performance as a 1D function of time: a(t). Likewise, we have b(t) and c(t) for instruments B and C. Mixing them together is equivalent to

x(t) = a(t) + b(t) + c(t)

Note that x(t) is a 1D signal, and that each of the component signals can easily be recovered. Most importantly, note that all of the performance information is one-dimensional.

We send x(t) to a speaker, which results in an acoustic wave y(r, t), where r is a three-vector over space.

We place a microphone in the middle of the room to record the performance. This mic captures a 1D signal m(t), corresponding to y(r, t) at a specific coordinate, i.e., the location of the mic.

Sampling the microphone signal m(t) gives us a discrete sequence m[n], which is stored on a computer. Later, you decide to play back the recording, so a DAC turns m[n] back into m(t), which is sent to a speaker, producing an acoustic wave y2(r, t).

If the recording and playback are in different rooms, what is common between the original acoustic wave y(r, t) and the playback acoustic wave y2(r, t)? The original 1D performance, x(t).

This is why we can record just the 1D part and get everything we need. Your "3D -> 1D -> 3D" characterization is missing the most important link in the chain, the original 1D performance. This is what the dimensional chain actually looks like:

1D -> 3D -> 1D -> 3D

As far as the performance information is concerned, the 3D parts are extraneous! There's no magic in saving 1D to 1D and getting back 1D.
But you're essentially saying there's no "z" metric of measurable depth to the acoustic m(t) — as in, no waves have "depth" in time and space. But this is not true according to this video nsaspooks posted earlier, which shows that the information is essentially "stacked". It would have to be in order for spectrographic analysis to even work. There has to be literal depth of dimension to the wave in space. Video here:


Otherwise, how do you propose that a wave traveling from one point to another has "all of the nuances of every instrument simultaneously 'represented'?" We're talking every instrument, effects, nuances over something perceptible like 2 seconds? No possible way unless the wave has a z component.

There has to be "depth" to the wave to carry "multiple waves" of information.

This is the "magic" — better, how about "mystery" — to which I'm speaking... the z element is "flattened" once the mic's diaphragm "telegraphs" the wave's contents to a medium, then "reconstructed" from 1D numeric information, and then somehow affording access to discretized elements of it that are devoid of that original signal's z element, which contained the potentially thousands of overtones for each audible element.
 

bogosort

Joined Sep 24, 2011
696
But you're essentially saying there's no "z" metric of measurable depth to the acoustic m(t) — as in, no waves have "depth" in time and space.
Yup, and there's no 'y' component, either. Remember, the performance signal x(t) is one-dimensional. An example might be

x(t) = 0.5 * sin(400t - 0.1) + 0.1 * sin(800t - 0.25) + 0.3 * sin(1000t)

That's a complex waveform made up of three sinusoidal components.
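If you want to poke at that exact formula, here's a small numpy sketch (sample rate and duration are arbitrary; note the frequencies are angular, in rad/s, so 400t is about 63.7 Hz). Projecting the 1D composite onto each frequency hands back each component's amplitude:

```python
import numpy as np

fs = 2000                          # Hz, comfortably above Nyquist here
t = np.arange(0, 2, 1 / fs)        # two seconds

x = (0.5 * np.sin(400 * t - 0.1)
     + 0.1 * np.sin(800 * t - 0.25)
     + 0.3 * np.sin(1000 * t))

# Correlate the 1D composite against each component frequency; the
# projection returns that component's amplitude -- no y or z needed.
for w in (400, 800, 1000):
    amp = 2 * np.abs(np.mean(x * np.exp(-1j * w * t)))
    print(w, round(amp, 3))        # ~0.5, ~0.1, ~0.3
```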

But this is not true according to this video nsaspooks posted earlier, which shows that the information is essentially "stacked". It would have to be in order for spectrographic analysis to even work. There has to be literal depth of dimension to the wave in space.
I haven't watched the video, but I know the math. There's definitely no requirement for a signal of time to be multidimensional in order for it to be decomposed into its frequency components. You can easily confirm this by looking up the definitions of Fourier transform and Fourier series, which are naturally defined for functions of a single variable. You can of course do Fourier analysis on multivariable functions, but this is in no way a requirement. In fact, functions of two dimensions tend to be called pictures.

Sound is fundamentally one-dimensional.

Otherwise, how do you propose that a wave from one point to another, has "all of the nuances of every instrument simultaneously 'represented'?" We're talking every instrument, effects, nuances over something perceptible like 2 seconds? No possible way unless the wave has a z component.

There has to be "depth" to the wave to carry "multiple waves" of information.
Just because you can't yet imagine it does not make it so.

Open this link (Desmos, a popular online graphical calculator):
https://www.desmos.com/calculator/gplajvoa09

You'll see a function u(t) that is a complex wave, full of nuance. Click the circles on the left of each row to enable/disable the component parts, which provide all that nuance.

These are all 1D signals; no z-depth is required.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Yup, and there's no 'y' component, either. Remember, the performance signal x(t) is one-dimensional. An example might be

x(t) = 0.5 * sin(400t - 0.1) + 0.1 * sin(800t - 0.25) + 0.3 * sin(1000t)

That's a complex waveform made up of three sinusoidal components.


I haven't watched the video, but I know the math. There's definitely no requirement for a signal of time to be multidimensional in order for it to be decomposed into its frequency components. You can easily confirm this by looking up the definitions of Fourier transform and Fourier series, which are naturally defined for functions of a single variable. You can of course do Fourier analysis on multivariable functions, but this is in no way a requirement. In fact, functions of two dimensions tend to be called pictures.

Sound is fundamentally one-dimensional.


Just because you can't yet imagine it does not make it so.

Open this link (Desmos, a popular online graphical calculator):
https://www.desmos.com/calculator/gplajvoa09

You'll see a function u(t) that is a complex wave, full of nuance. Click the circles on the left of each row to enable/disable the component parts, which provide all that nuance.

These are all 1D signals; no z-depth is required.
Ok — I agree, actually (though the video says otherwise) — but in reality, this information makes the issue exponentially more unicorn-esque, then, as I'm sure others in here will agree.

Because take any wave, and zoom in on any group of points over a second, and you now have thousands upon thousands of data-points from the original performance being "signified" by that one 1D segment. The wave is essentially a vibration of air only. And yet we have a multi-stem choir, drums, speech, reverb, etc. represented here within a small amount of what can be converted to binary sequences, entirely agnostic to what they are sampling. You're essentially saying there are *organized*, collated, multi-stem-reflecting numbers that are discretely embedded within that small 1D section (and throughout the wave, of course).

Where are the spectrographic wavelets/sub-waves coming from within that section, if not literally dimensionally represented with a z-component in space and time?

Every last data-point of everything in that wave must be represented somewhere. Technically you have infinite points within that section. You're saying that somehow a spectrographic analyzer and parser is able to pull all of the waves out of that single space and discretize their numbers for independent addressability?

That to me is unicorns right there, and it was to nsaspooks earlier when I brought this up; I was even considered a troll for asking such a "basic question," and he said "you're thinking 1-dimensionally." It's considered *basic* to others even here that there is a y- and z-component! Wha?!

People are not thinking this is a 1D phenomenon because it is not at all intuitive to get your head around thousands of derivative waves occupying the same space. That's how I originally approached this problem. But the question still remains, and is even now more difficult if 1D — where is the numeric data stored, because what's incredible is, over 2 seconds you can take a 16-bit, 44.1kHz sample, and arrive at a discrete number of zeroes and ones that fairly accurately represent a waveform with somewhat addressable individual stem timbres, then feed those zeroes and ones back into a DAC and recreate a facsimile of the original 1D signal.

This does not sound "4D" to me whatsoever.
 

MrAl

Joined Jun 17, 2014
11,496
It's not just about recovering the signal; it's about matching it to known patterns.
It's like encoding an audio dictionary with several people reading just the target words from different sections of the book all at the same time and sending it to someone. They have to separate the words and match them to known words in order to get the meaning.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
It's not just about recovering the signal; it's about matching it to known patterns.
It's like encoding an audio dictionary with several people reading just the target words from different sections of the book all at the same time and sending it to someone. They have to separate the words and match them to known words in order to get the meaning.
Sure... but this "unicorn element" of how the signal actually exists in reality is the $64,000 question.

There’s only one wave moving the microphone diaphragm in 1D. And yet we have the superposition of potentially infinite discrete waves carrying all of the timbre and overtone information. The mic isn’t directly “seeing” these waves, because only the one principal acoustic wave shifting the air is then moving the mic’s diaphragm to create the signal.

Only the parent wave resonates the mic diaphragm in 1D. The other waves are “where” again? Even if there was a “z” element hypothetically, the sh*t is flattened at the mic.

It’s as crazy as quantum entanglement.

I’ll take, “Poltergeist V” for $2000, Alex.
 

bogosort

Joined Sep 24, 2011
696
There’s only one wave moving the microphone diaphragm in 1D. And yet we have the superposition of potentially infinite discrete waves carrying all of the timbre and overtone information. The mic isn’t directly “seeing” these waves, because only the one principal acoustic wave shifting the air is then moving the mic’s diaphragm to create the signal.
This is wrong, and it's a crucial concept to understand. The "principal" wave, as you call it, is the linear combination of all the "timbre and overtone" waves (which are not discrete, by the way). This literally means that the principal wave is all the timbre and overtone waves.

Here's an analogy to help you develop the intuition: If I give you three $1 bills, three quarters, two dimes, and five pennies, I've given you $4. The $4 is the "principal wave" and the three dollars, etc., are the component (timbre) waves. We can talk about the principal wave as its own thing -- you may thank me for giving you $4 -- but we can always recover the component waves from the principal wave. The component waves never lose their identity in the superposition.

If this weren't the case, then filters could never work! Suppose we have a bandpass filter that's tuned to allow only 1 kHz to pass through it. We can put in a single, complex wave made up of 100 Hz, 1 kHz, and 10 kHz components, and out will come just the 1 kHz component. Returning to the money analogy, applying a "penny filter" to your $4 would return five pennies.
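For the skeptical, here's a hedged sketch of exactly that experiment with scipy's stock filter tools (the filter order and band edges are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
x = (np.sin(2 * np.pi * 100 * t)        # 100 Hz component
     + np.sin(2 * np.pi * 1000 * t)     # 1 kHz component
     + np.sin(2 * np.pi * 10000 * t))   # 10 kHz component

# 4th-order Butterworth bandpass around 1 kHz, run forwards and
# backwards (sosfiltfilt) so the passband has zero phase shift.
sos = butter(4, [800, 1250], btype="bandpass", fs=fs, output="sos")
y = sosfiltfilt(sos, x)

# y is very nearly the bare 1 kHz component: the "five pennies".
target = np.sin(2 * np.pi * 1000 * t)
print(np.max(np.abs(y - target)[1000:-1000]))   # small vs. unit amplitude
```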

In terms of the microphone, saying that its diaphragm responds to the principal wave is exactly the same as saying that it responds to the component waves. Stretching the money analogy a bit, the microphone is like a calculator: it counts what I've given you, which we can describe as $4 or three dollars, three quarters, etc. (The analogy here breaks down because any superposition of waves is unique, whereas there are many ways to make $4.)
 

MrAl

Joined Jun 17, 2014
11,496
Well, I wonder if we can really consider it one-dimensional. After all, it is not just a constant voltage or current; it is voltage and current and time. Time implies a derivative, so not only a single quantity but a quantity that changes one way at point A and another way at point B. Time is not Euclidean, but it is often considered another dimension, and in the block universe it is actually another dimension, like length and width and height.
It is interesting that in graphics processing an algorithm can enhance an image with a special filter. I think this implies that there is more information there than meets the eye, and certain mathematical operations can decipher it and put it to use. I'll see if I can find some representative images later.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
This is wrong, and it's a crucial concept to understand. The "principal" wave, as you call it, is the linear combination of all the "timbre and overtone" waves (which are not discrete, by the way). This literally means that the principal wave is all the timbre and overtone waves.

Here's an analogy to help you develop the intuition: If I give you three $1 bills, three quarters, two dimes, and five pennies, I've given you $4. The $4 is the "principal wave" and the three dollars, etc., are the component (timbre) waves. We can talk about the principal wave as its own thing -- you may thank me for giving you $4 -- but we can always recover the component waves from the principal wave. The component waves never lose their identity in the superposition.

If this weren't the case, then filters could never work! Suppose we have a bandpass filter that's tuned to allow only 1 kHz to pass through it. We can put in a single, complex wave made up of 100 Hz, 1 kHz, and 10 kHz components, and out will come just the 1 kHz component. Returning to the money analogy, applying a "penny filter" to your $4 would return five pennies.

In terms of the microphone, saying that its diaphragm responds to the principal wave is exactly the same as saying that it responds to the component waves. Stretching the money analogy a bit, the microphone is like a calculator: it counts what I've given you, which we can describe as $4 or three dollars, three quarters, etc. (The analogy here breaks down because any superposition of waves is unique, whereas there are many ways to make $4.)
Understood, the filters wouldn't work. Very good point, and well said. But as MrAl said in his reply above, "I wonder if we can really consider it 1D." Let me vocalize my take on why...

A fundamental freq + overtones create a "unique voice/timbre" such as a sax, a piano, a drum hit. (And forgive me for the imperfect lexicon: "logical thinking" is my first language; I'm not a semanticist with the physics, because it's one of many disciplines I'm researching for a synthesized aim. You're able to track my language well here; feel free to clarify terms at any point.)

In a room, we have multiple "unique voices", and when all are performed in unison, a "primary wave" is "created" that reflects all of those voices, including any effects, such as acoustic or software-based reverb.

This means, in say, the app you posted earlier, where you can toggle on and off the overtone functions on the graph, that in the same 1D space, we'd now be dealing with literally potentially millions or billions of waves at different frequencies.

There's only ONE fluctuation of space permitted at a time to get to that microphone diaphragm.

A wave is a disturbance in a medium, but you're actually implying here that the wave itself is a *storage medium*. Because you can pull apart the other waves from it. And obviously there's truth to this on "some kind of level", but "how" this is happening is my question:

HOW are the other waves STORED in that principal wave? No two objects can occupy the same space and also remain discrete as addressable entities. They are "effectively" discrete if they can be subsequently "addressed" by software or the brain in any way independently after being componental in the creation of the principal wave.

One cannot simply say that a "zig zag" in air pressure over 2 seconds can represent all of the timbres in the room at that moment and ALSO permit discretizing them further from the same wave into sub-waves after being recorded. There's more mystery going on, or people wouldn't attempt to create theories such as the y- and z-component within space, because 1D makes ZERO intuitive sense. I mean zero, at least for me, and, I believe, for other very intelligent folks here.

In summary, and bold and italics are the equivalent of speaking with Italian hands:
There is absolutely no way that millions or billions of waves representing all of the nuances of every discrete timbre are simultaneously creating the wave BUT also remaining in any way discrete at the same time, as well as occupying the same 1D space, where one final wave ends up being the "acoustic pressure"—that can subsequently be reduced to "numeric signification" of its constituents—and then also permitting a reconstructed facsimile of itself via, say, another external recording of it, while still providing any access to all the "constituent waves" through VERY limited binary information that is agnostic to the source.

There are only a fixed number of binary sequences that can describe a space of, say, 2 seconds, depending on sample rate and bit depth. Those binary sequences are being used to discretize information that simply isn't addressable without some other explanation.

The principal wave is nothing more than a series of numbers describing fluctuations. The constituent waves are also series of numbers describing fluctuations. Only one fluctuation can move the diaphragm.

If there was no way to access the constituent waves, I would buy the 1D theory here.
 

MrAl

Joined Jun 17, 2014
11,496
Understood, the filters wouldn't work. Very good point, and well said. But as MrAl said in his reply above, "I wonder if we can really consider it 1D." Let me vocalize my take on why...

A fundamental freq + overtones create a "unique voice/timbre" such as a sax, a piano, a drum hit. (And forgive me for the terminology: "logical thinking" is my first language; I'm not a semanticist with the physics, because it's one of many disciplines I'm researching for a synthesized aim. You're able to track my language well here; feel free to clarify terms at any point.)

In a room, we have multiple "unique voices", and when all are performed in unison, a "primary wave" is "created" that reflects all of those voices, including any effects, such as acoustic or synthetic reverb.

This means, in say, the app you posted earlier, where you can toggle on and off the overtone functions on the graph, that in the same 1D space, we'd now be dealing with literally potentially millions or billions of waves at different frequencies.

There's only ONE fluctuation of space permitted at a time to get to that microphone diaphragm.

A wave is a disturbance in a medium, but you're actually implying here that the wave itself is a *storage medium*. Because you can pull apart the other waves from it. And obviously there's truth to this on "some kind of level", but "how" this is happening is my question:

HOW are the other waves STORED in that principal wave? No two objects can occupy the same space and also remain discrete as addressable entities.

One cannot simply say that a "zig zag" in air pressure over 2 seconds can represent all of the timbres in the room at that moment and ALSO permit discretizing them further from the same wave into sub-waves after being recorded. There's more mystery going on, or people wouldn't attempt to create theories such as the y- and z-component within space, because 1D makes ZERO intuitive sense. I mean zero, at least for me, and, I believe, for others here.

In summary, and bold and italics are the equivalent of speaking with Italian hands:
There is absolutely no way that millions or billions of waves representing all of the nuances of every discrete timbre are simultaneously creating the wave BUT also remaining in any way discrete at the same time, as well as occupying the same 1D space, where one final wave ends up being the "acoustic pressure"—that can subsequently be reduced to "numeric signification" of its constituents—and then also permitting a reconstructed facsimile of itself via, say, another external recording of it, while still providing any access to all the "constituent waves" through VERY limited binary information that is agnostic to the source.

There are only a fixed number of binary sequences that can describe a space of, say, 2 seconds, depending on sample rate and bit depth. Those binary sequences are being used to discretize information that simply isn't addressable without some other explanation.

The principal wave is nothing more than a series of numbers describing fluctuations. The constituent waves are also series of numbers describing fluctuations. Only one fluctuation can move the diaphragm.

If there was no way to access the constituent waves, I would buy the 1D theory here.
What filters? What are their specifications? Have you been able to judge "all filters" without knowing every one of them, or are you a filter expert? :)
There are filters that most people here have probably never heard of, as some of them are very new and some of them are only understood in digital form.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
Well, I wonder if we can really consider it one-dimensional. After all, it is not just a constant voltage or current; it is voltage and current and time. Time implies a derivative, so not only a single quantity but a quantity that changes one way at point A and another way at point B. Time is not Euclidean, but it is often considered another dimension, and in the block universe it is actually another dimension, like length and width and height.
It is interesting that in graphics processing an algorithm can enhance an image with a special filter. I think this implies that there is more information there than meets the eye, and certain mathematical operations can decipher it and put it to use. I'll see if I can find some representative images later.
"I think this implies that there is more information there than meets the eye "

Operative phrase! Because after a wave is digitized, there are numeric sequences there that are clearly able to describe more than one scenario. Over 2 seconds of binary information describing an audio wave, there are a discrete number of voltage fluctuations that can be signified, yet essentially infinite combinations of sounds it could be reflecting in reality. There's no way a 16-bit, 44.1kHz snapshot over 2 seconds should be able to account for every combo, and yet it can. The numbers in that section have relevance to the parent wave they "inhabit", which in my estimation is a multi-dimensional "housing" for the "wavelings" ("sub-waves", "constituent" waves, whatever term) within it.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
What filters? What are their specifications? Have you been able to judge "all filters" without knowing every one of them, or are you a filter expert? :)
There are filters that most people here have probably never heard of, as some of them are very new and some of them are only understood in digital form.
Any and all filters which allow you to discretize/isolate audio information from a single wave...
 

MrAl

Joined Jun 17, 2014
11,496
Any and all filters which allow you to discretize/isolate audio information from a single wave...
That statement is way, way, way too general.
That's like saying the filters that don't work are all the filters that don't work. That statement does not prove anything; it just repeats what has been said and not yet proved.
The original question was meant to shed light on just how varied and complex filters can be. It's impossible to characterize every filter ever conceived, and any filter that will be invented in the future, as to their applicability for a given application. Digital filters have been tried and proven to do all kinds of magical things that seem impossible.
The term "filter" also has very wide usage. With audio transmission, even a wall of some room in some house can be viewed as a filter. Even a decoder.
Hey, we can see almost to the edge of the universe with special signal processing. Who would have thought that could be possible until it was done?
However, I think the kind of 'filter' we would need here would have to have some stored memory with different patterns and be able to match them up to measured signals.

But anything we talk about here will probably be too general anyway. More like coffee-house talk. To make any real progress we'd have to do some research, do some math, and do some experiments. This subject does not interest me enough to get that deep, though, I'm afraid. I have other things on the table that have to come first.
I have dealt with image processing in the past, however, and I can say for sure that some of the ways of processing images seem like magic sometimes. It doesn't seem possible until you do it or see it done.
I meant to show some examples but haven't gotten to that yet, apologies.
 

Thread Starter

Jennifer Solomon

Joined Mar 20, 2017
112
That statement is way, way, way too general.
That's like saying the filters that don't work are all the filters that don't work. That statement does not prove anything; it just repeats what has been said and not yet proved.
The original question was meant to shed light on just how varied and complex filters can be. It's impossible to characterize every filter ever conceived, and any filter that will be invented in the future, as to their applicability for a given application. Digital filters have been tried and proven to do all kinds of magical things that seem impossible.
The term "filter" also has very wide usage. With audio transmission, even a wall of some room in some house can be viewed as a filter. Even a decoder.
Hey, we can see almost to the edge of the universe with special signal processing. Who would have thought that could be possible until it was done?
However, I think the kind of 'filter' we would need here would have to have some stored memory with different patterns and be able to match them up to measured signals.

But anything we talk about here will probably be too general anyway. More like coffee-house talk. To make any real progress we'd have to do some research, do some math, and do some experiments. This subject does not interest me enough to get that deep, though, I'm afraid. I have other things on the table that have to come first.
I have dealt with image processing in the past, however, and I can say for sure that some of the ways of processing images seem like magic sometimes. It doesn't seem possible until you do it or see it done.
I meant to show some examples but haven't gotten to that yet, apologies.
But are you seeing my statement in the full context of our long parent discussion concerning the nature of the signal?

If so, in that context, the only thing that matters with respect to the "filter" question is "any filter that can pull apart a wave into its constituent parts." I'd argue, with all levity, that the micro-details of which filter and how the filter works are accessorial to the trajectory of the conversation, and that is found in the sentence you wrote above, "there is more information there than meets the eye," when it comes to where that info is stored with respect to waves within reality...

Correct me if you feel I'm wrong, but how are any one filter's mechanics going to shed any light on the core question of the billions of waves occupying an apparent 1D space as a "single acoustic wave with post-recording accessibility of its constituent parts?"
 

bogosort

Joined Sep 24, 2011
696
Well, I wonder if we can really consider it one-dimensional. After all, it is not just a constant voltage or current; it is voltage and current and time.
The voltage or current is one-dimensional because it has precisely one degree of freedom: it can go up or it can go down.
 

bogosort

Joined Sep 24, 2011
696
HOW are the other waves STORED in that principal wave? No two objects can occupy the same space and also remain discrete as addressable entities. They are "effectively" discrete if they can be subsequently "addressed" by software or the brain in any way independently after being componental in the creation of the principal wave.
On a lexical note, I see now that by "discrete" you mean "parts of the whole that are independently accessible". I'd suggest using discrete strictly as an antonym to continuous, usually in reference to the domain or codomain of a function, which is the conventional usage in this context. Note that the underlying continuum -- i.e., real numbers -- has parts that are independently accessible, e.g., the number 42, or the number pi. This is why we can speak coherently about a point (in the continuum) of space, or an instant (in the continuum) of time. Note also that a wave, once discretized (as by sampling in time), ceases to be a wave.

Not a big deal, but perhaps useful to consider in striving for clarity.

One cannot simply say that a "zig zag" in air pressure over 2 seconds can represent all of the timbres in the room at that moment and ALSO permit discretizing them further from the same wave into sub-waves after being recorded. There's more mystery going on, or people wouldn't attempt to create theories such as the y- and z-component within space, because 1D makes ZERO intuitive sense. I mean zero, at least for me, and, I believe, for other very intelligent folks here.
To this point, I'd say that the universe is under no compulsion to behave in ways that seem intuitive to us. Our intuition is built from an absurdly limited scope of experience, and any argument that tries to stand on such footing is doomed to fall (see quantum mechanics for plenty of examples).

Essentially, you're saying "I can't believe it", but -- I'm sure you'd agree -- that's neither a logical nor scientific argument. And though there's nothing wrong with healthy skepticism, I'd suggest that you weigh your intuitive disbelief against the two-plus centuries of scientific research on these phenomena. To put it plainly, there's no "gee whiz" aspect of linear wave mixing for scientists, and that should tell you something. I don't mention this to dismiss your skepticism, rather to help you re-calibrate it.

There are only a fixed number of binary sequences that can describe a space of, say, 2 seconds, depending on sample rate and bit depth. Those binary sequences are being used to discretize information that simply isn't addressable without some other explanation.
Suppose we have a CD-quality 2-second recording: the sample rate is 44.1 kHz, with a quantization level of 16 bits per sample. Each sample has \( 2^{16} = 65{,}536 \) possible values. If we have two samples, we can write them as a single 32-bit number: simply use the first sample as the low 16 bits, and the second as the high 16 bits. Thus, with n samples, we have \( 2^{16n} \) possible recordings. Now, in a two-second recording, we'll get \( 44{,}100 \times 2 = 88{,}200 \) samples. In this case, our fixed number of binary sequences is "only" \( 2^{16 \times 88200} \), which greatly exceeds the number of atoms in the universe.
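A few lines of Python make the size of that number concrete:

```python
import math

samples = 44100 * 2                  # two seconds at CD rate: 88,200 samples
bits = 16 * samples                  # 1,411,200 bits in the recording
digits = math.floor(bits * math.log10(2)) + 1
print(digits)    # 424814: the count of possible recordings has ~425,000
                 # decimal digits; atoms in the observable universe ~ 10^80
```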

Of course, the actual recording will represent precisely one of those sequences, but hopefully you can see that the number of possible two-second, CD-quality recordings is far, far, far greater than anything we can conceive of. And each second we add to the recording grows that crazy-land number exponentially. Hopefully this eases your skepticism that discrete representations can adequately capture nominally continuous phenomena.

The principal wave is nothing more than a series of numbers describing fluctuations. The constituent waves are also series of numbers describing fluctuations. Only one fluctuation can move the diaphragm.
A sequence of numbers is not a wave, though it can describe a wave. More to the point, every wave has a unique representation as a sequence of numbers (modulo choice of basis, obeying Nyquist criteria, etc.). We can call this sequence the wave's signature, and -- much like factorizing a composite number into its prime factors -- we can factor out these signatures from a composite wave.

That 1D component waves can be accessed from 1D composite waves is really no more magical than the fact that the number 6 "carries around with it" the numbers 3 and 2.
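A tiny demonstration of that "signature" idea (the 2 Hz and 3 Hz components are arbitrary): the composite's spectrum is exactly the union of its factors' spectra.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
w = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 3 * t)  # "6" = "2" * "3"

# Round away numerical dust; the surviving bins are the signature.
sig = np.round(np.abs(np.fft.rfft(w)) / (t.size / 2), 6)
print(np.nonzero(sig)[0])    # [2 3]: the composite "carries" its factors
```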
 