U. Grad Math Major looking For Research Ideas

Thread Starter

sgiallourakis

Joined Mar 1, 2021
2
Hi Everyone,

I am about to graduate with my degree in applied mathematics. I am in the capstone course for my major. I am very interested in studying the mathematics behind circuits and circuits in general. I am trying to refine my research topic, but I really do not know where to start.

The project is all about creating a mathematical model of a real world situation.

Does anyone have any suggestions?

Many Thanks,

Steven
 

bogosort

Joined Sep 24, 2011
696
Hi Everyone,

I am about to graduate with my degree in applied mathematics. I am in the capstone course for my major. I am very interested in studying the mathematics behind circuits and circuits in general. I am trying to refine my research topic, but I really do not know where to start.

The project is all about creating a mathematical model of a real world situation.

Does anyone have any suggestions?
An interesting topic is modeling nonlinear circuits (e.g., modulators), or the nonlinear aspects of nominally linear circuits (e.g., linear amplifiers near saturation). Researching Volterra series would be a good start.
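
If it helps to see what that looks like in practice, here is a minimal sketch in Python (with made-up kernel values, purely illustrative): the output of a weakly nonlinear amplifier is written as a linear convolution term plus a second-order Volterra term, which is what generates the harmonics you see near saturation.

```python
import numpy as np

# Toy kernels (assumed values, purely illustrative): a short linear
# impulse response h1 plus a small 2nd-order kernel h2 that adds the
# kind of distortion a "linear" amplifier shows near saturation.
h1 = np.array([0.9, 0.3, 0.1])        # 1st-order (linear) kernel
h2 = 0.05 * np.outer(h1, h1)          # 2nd-order kernel, 3x3

def volterra_2nd_order(x, h1, h2):
    """Discrete-time Volterra series truncated after the 2nd-order term."""
    M = len(h1)
    y = np.zeros_like(x)
    for n in range(len(x)):
        # delayed inputs x[n], x[n-1], ..., x[n-M+1] (zero before t = 0)
        xd = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = h1 @ xd + xd @ h2 @ xd  # linear term + quadratic term
    return y

t = np.arange(0.0, 1.0, 1e-3)
x = np.sin(2 * np.pi * 5 * t)          # 5 Hz test tone
y = volterra_2nd_order(x, h1, h2)      # output gains a 10 Hz harmonic and a DC offset
```

Fitting the kernels h1 and h2 to measured input/output data from a real circuit would itself be a nicely sized capstone modeling problem.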
 

ApacheKid

Joined Jan 12, 2015
1,609
Hi Everyone,

I am about to graduate with my degree in applied mathematics. I am in the capstone course for my major. I am very interested in studying the mathematics behind circuits and circuits in general. I am trying to refine my research topic, but I really do not know where to start.

The project is all about creating a mathematical model of a real world situation.

Does anyone have any suggestions?

Many Thanks,

Steven
Analog or hybrid computers. These are around; they're discussed in various places but are not yet a huge focus for major firms.

The fact is that an analog computer can often solve complex differential equations in close to real time, a hugely reduced time compared to digital computational techniques.

There could be scope for making serious money in this arena...
 

bogosort

Joined Sep 24, 2011
696
Analog or hybrid computers. These are around; they're discussed in various places but are not yet a huge focus for major firms.

The fact is that an analog computer can often solve complex differential equations in close to real time, a hugely reduced time compared to digital computational techniques.

There could be scope for making serious money in this arena...
Analog computing is much older than digital computing, and there are good reasons why we rarely use analog computation anymore. In theory, a noise-free analog computer can solve NP-complete problems in polynomial time, making them more powerful than a Turing machine. But noise is inevitable, and so analog computing is no more or less powerful than digital computing. And since they have equivalent computational power, the more economical machines -- digital -- win.

Sure, you can set up a hydraulic system that will quickly solve a parametrized set of differential equations by turning a few valves, but it will be enormous (relative to a CPU) and, more importantly, it will be fixed -- it won't be able to solve a different set of differential equations without significantly changing the machine. It's the analog version of an ASIC without the economies of scale.

Then there's the issue of setting up the machine to solve a problem. A general purpose CPU may not be as fast as a hydraulic system (or ASIC) can be at computing a specific type of problem, but it's far easier to program a CPU than it is to set up a mechanical system correctly. Even the error analysis of such a machine -- which must include a host of factors that aren't relevant in digital, such as the type, lengths, and masses of materials -- is orders of magnitude more difficult in the analog realm.

The reality is that every digital computer is an electromagnetic analog computer that simulates digital computation. This provides the intuition why analog and digital have the same computational power (every digital computer is an analog machine), but it also speaks to a critical fact: the precision of mechanical analog computing runs into thermodynamic barriers long before the precision of electromagnetic digital does. We can essentially increase digital precision to any arbitrary degree (1024-bit arithmetic!), whereas every analog computation is limited by the worst-case noise (which at best is on the order of some 20 bits of precision). Even if analog machines were economical, their low-precision results make them niche, at best.
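
As a concrete (if toy) illustration of that last point, here is a sketch using Python's standard decimal module to push digital precision to roughly a thousand bits, a software-only knob with no analog counterpart:

```python
from decimal import Decimal, getcontext

# Digital precision is a software setting: request ~300 significant
# decimal digits (roughly 1000 bits), far beyond anything a physical
# analog signal path can represent.
getcontext().prec = 300
two = Decimal(2)
root2 = two.sqrt()            # sqrt(2) to 300 significant digits
print(root2)
print(root2 * root2 - two)    # any residual sits in the 300th digit (~1e-299), not the 6th
```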
 

ApacheKid

Joined Jan 12, 2015
1,609
Analog computing is much older than digital computing, and there are good reasons why we rarely use analog computation anymore. In theory, a noise-free analog computer can solve NP-complete problems in polynomial time, making them more powerful than a Turing machine. But noise is inevitable, and so analog computing is no more or less powerful than digital computing. And since they have equivalent computational power, the more economical machines -- digital -- win.

Sure, you can set up a hydraulic system that will quickly solve a parametrized set of differential equations by turning a few valves, but it will be enormous (relative to a CPU) and, more importantly, it will be fixed -- it won't be able to solve a different set of differential equations without significantly changing the machine. It's the analog version of an ASIC without the economies of scale.

Then there's the issue of setting up the machine to solve a problem. A general purpose CPU may not be as fast as a hydraulic system (or ASIC) can be at computing a specific type of problem, but it's far easier to program a CPU than it is to set up a mechanical system correctly. Even the error analysis of such a machine -- which must include a host of factors that aren't relevant in digital, such as the type, lengths, and masses of materials -- is orders of magnitude more difficult in the analog realm.

The reality is that every digital computer is an electromagnetic analog computer that simulates digital computation. This provides the intuition why analog and digital have the same computational power (every digital computer is an analog machine), but it also speaks to a critical fact: the precision of mechanical analog computing runs into thermodynamic barriers long before the precision of electromagnetic digital does. We can essentially increase digital precision to any arbitrary degree (1024-bit arithmetic!), whereas every analog computation is limited by the worst-case noise (which at best is on the order of some 20 bits of precision). Even if analog machines were economical, their low-precision results make them niche, at best.
You raise some good points, but it surprises me that you dismiss this so quickly; Bernard Ullman would disagree with much of what you say here.

Today's most powerful computers consume truly huge amounts of power because they require a huge number of algorithmic steps, steps that do not appear in the physical laws they are simulating.

Consider the work being done at Sendyne; this paper, for example, is very interesting.

With digital computation you still get approximation errors, and these can accumulate, as I'm sure you know; calculating with large numbers of digits is done for that reason. Even a digital implementation of a simulation requires raw data to be input, and that raw data will be analog.
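
As a quick sketch of that accumulation (nothing to do with the Sendyne work specifically), here is a naive digital sum drifting away from the true value, which is exactly why extra digits or compensated algorithms get used:

```python
import math

# Summing 0.1 ten million times: each addition incurs a tiny rounding
# error, and in a naive loop those errors accumulate.
n = 10_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1

# math.fsum tracks the lost low-order bits and returns a correctly
# rounded total -- the same idea as carrying extra digits.
compensated = math.fsum(0.1 for _ in range(n))

print(abs(naive - 1_000_000))        # accumulated drift, roughly 1e-4
print(abs(compensated - 1_000_000))  # only the error of representing 0.1 in binary, under 1e-9
```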

The management team at Sendyne are listed here.

Look at this (note that it's a logarithmic graph)

[Attached: graphs comparing an analog computing IC with a microcontroller (logarithmic scale)]

It just strikes me as a fascinating area that won't replace digital computation but could lead to superior solutions to certain problems from a cost/time perspective.
 

Thread Starter

sgiallourakis

Joined Mar 1, 2021
2
Analog or hybrid computers. These are around; they're discussed in various places but are not yet a huge focus for major firms.

The fact is that an analog computer can often solve complex differential equations in close to real time, a hugely reduced time compared to digital computational techniques.

There could be scope for making serious money in this arena...


You raise some good points, but it surprises me that you dismiss this so quickly; Bernard Ullman would disagree with much of what you say here.

Today's most powerful computers consume truly huge amounts of power because they require a huge number of algorithmic steps, steps that do not appear in the physical laws they are simulating.

Consider the work being done at Sendyne; this paper, for example, is very interesting.

With digital computation you still get approximation errors, and these can accumulate, as I'm sure you know; calculating with large numbers of digits is done for that reason. Even a digital implementation of a simulation requires raw data to be input, and that raw data will be analog.

The management team at Sendyne are listed here.

Look at this (note that it's a logarithmic graph)


It just strikes me as a fascinating area that won't replace digital computation but could lead to superior solutions to certain problems from a cost/time perspective.
Thank you for all this information. I will definitely read that article.
 

bogosort

Joined Sep 24, 2011
696
You raise some good points, but it surprises me that you dismiss this so quickly; Bernard Ullman would disagree with much of what you say here.
Who is Bernard Ullman and why should I care if he disagrees with well-known information theory? :)

Today's most powerful computers consume truly huge amounts of power because they require a huge number of algorithmic steps, steps that do not appear in the physical laws they are simulating.
I'm not sure what you're trying to say here, but analog computers require far more "steps" than digital computers to perform the same calculation. A differential equation characterizes the dynamics of a system at every instant of time dt. In the digital realm, we replace differential equations with difference equations that tick along on some time increment Δt. Note that Δt / dt → ∞ is the defining difference between discrete-time and continuous-time processing. Now consider that for any physical process f(t) there will always be some minimum Δt for which f(t) ≈ f(t + Δt) to within the required precision, and it's easy to see why digital computation is more efficient.

Another way to look at it is that time is a non-negotiable quantity in analog computing -- if we're trying to model tide dynamics, we have to let the computation run for as long as the physical process itself takes (hours to days). In contrast, time is an abstract quantity in digital computation. Since we literally control clock speed, we can simulate a century's worth of tides in 5 minutes or whatever.
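
To make both points concrete, here's a minimal sketch using a plain RC discharge instead of a tide model (component values are just assumptions): the differential equation dV/dt = -V/(RC) becomes the difference equation V[n+1] = V[n]·(1 - Δt/(RC)), and an hour of circuit time is stepped through in a fraction of a second of wall-clock time.

```python
import time

# Forward-Euler discretization of an RC discharge: dV/dt = -V / (R*C)
# becomes the difference equation V[n+1] = V[n] * (1 - dt / (R*C)).
R, C = 10e3, 100e-6           # 10 kΩ, 100 µF  ->  time constant R*C = 1 s
dt = 1e-2                     # 10 ms step: the Δt of the difference equation
sim_seconds = 3600            # one hour of circuit time
steps = int(sim_seconds / dt)

v = 5.0                       # initial capacitor voltage
t0 = time.perf_counter()
for _ in range(steps):
    v += dt * (-v / (R * C))  # difference-equation update
wall = time.perf_counter() - t0

# Simulated time is decoupled from wall-clock time.
print(f"simulated {sim_seconds} s of circuit time in {wall:.3f} s of wall clock")
print(f"final capacitor voltage: {v:.3e} V")
```

An analog RC circuit, by contrast, would have to physically discharge for the full hour.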

Consider the work being done at Sendyne; this paper, for example, is very interesting.
A couple of years ago at work we were approached by a company offering a similar digitally-controlled analog fabric to use in one of our product lines. It's neat at first glance, but setting up the model is the real work. It took them about a month to approximate our DSP, though with less precision. The deal breaker was how difficult it is to tweak and parametrize the model for our different customers' needs, which is trivially done in code.

With digital computation you still get approximation errors, and these can accumulate, as I'm sure you know; calculating with large numbers of digits is done for that reason. Even a digital implementation of a simulation requires raw data to be input, and that raw data will be analog.
I don't think you realize how difficult it is to get analog precision anywhere near what digital can do. In the analog domain, noise is an unavoidable source of error, and -- short of employing expensive cryogenic containers -- the best we can reasonably do is about 20 bits' worth of precision (on the order of ±1 μV of total noise). The universe is a noisy place. In contrast, run-of-the-mill computers have 64-bit hardware, which -- out of the box -- provides 53 bits of floating-point precision. This corresponds to an equivalent analog noise floor of ±0.000111 pV (less than one thousandth of a trillionth of a volt). And since precision is limited only by available RAM, we can easily go many orders of magnitude further.
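
For reference, here is the arithmetic behind those figures as a back-of-the-envelope sketch, assuming a 1 V full-scale range (the assumption implied by the numbers above):

```python
# Back-of-the-envelope comparison, assuming a 1 V full-scale range.
full_scale = 1.0                            # volts

analog_bits = 20                            # realistic analog precision limit
digital_bits = 53                           # IEEE-754 double-precision mantissa

analog_lsb = full_scale / 2**analog_bits    # ≈ 9.5e-7 V, about ±1 µV
digital_lsb = full_scale / 2**digital_bits  # ≈ 1.1e-16 V, about ±0.000111 pV

print(f"20-bit analog resolution : {analog_lsb:.3e} V")
print(f"53-bit double resolution : {digital_lsb:.3e} V")
print(f"ratio                    : {analog_lsb / digital_lsb:.3e}")  # 2**33 ≈ 8.6e9, ~10 decimal digits
```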

In mixed signal circuits, the goal is to get the frail analog signal into robust digital form ASAP. The more processing that is done in the analog domain, the noisier the signal becomes. This is the entire reason digital was invented in the first place, for its noise immunity, because noise plagues analog circuits.

Look at this (note that it's a logarithmic graph)
Lol. That graph compares a 2016 analog IC to a 16-bit microcontroller from 1992. I'm sure there's a reason they didn't put it up against a 2016-era ARM A-series chip. :D

Also note that they didn't tell us the experimental setup. What were the boundary conditions? What was the error tolerance for a solution? What algorithm did they use for the digital case? Clearly the total run time was significantly less than 1 second, which means that their boundary conditions were ideal for the analog case. What would the graphs look like if they had tried to solve for stability after, say, 12 hours?

Perhaps most damning of all, the result had 8 bits of precision! Behold the power of analog! (Sorry, couldn't resist.)

It just strikes me as a fascinating area that won't replace digital computation but could lead to superior solutions to certain problems from a cost/time perspective.
It strikes me as hype without substance. Information theory is definitive here: there is no difference in computational power between an analog and a digital computer. From an engineering/economics perspective, it's abundantly clear that mechanical analog computing can never compete with digital. The companies that sell analog computing know this well, so they offer electromagnetic analog computers. But electromagnetic analog computing forsakes the best parts of digital computation -- noise immunity, generalized re-programmability -- in an attempt to be "novel", and using a bunch of op-amps to perform calculations doesn't seem novel to me.
 

402DF855

Joined Feb 9, 2013
271
1. So-called artificial intelligence (i.e. advanced algorithms)
2. Autonomous vehicles
3. Cryptocurrency (or crypto in general)
4. Someday genetics and microbiology will be lucrative, though I'm not sure when. Things like CRISPR may change that.
5. Quantum computers

(I'm not advocating these, just noting they may become popular and provide fertile career choices.)
 