Interrupt: longer or shorter?

Thread Starter

richiechen

Joined Jan 1, 2012
93
Hey everyone,


I am using a dsPIC30F.

I started a thread on this before, which is not open anymore:
http://forum.allaboutcircuits.com/showthread.php?t=64772

I am trying to analyze a system: a 1kHz sine wave is fed into the system, the returned sine wave is analyzed, and a DC voltage is output based on the result. So the MCU needs to generate a sine wave, receive a sine wave, and analyze it.

The signal output by the DAC:
It includes a DC part (which is determined by the MCU's calculations) and a 1kHz sine wave. I plan to update the DAC at 50kHz, which means 50 samples/period for the sine wave. The DC part actually changes only slowly.
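
Roughly, what I have in mind for forming each DAC sample is something like this (just a sketch; the table contents and names are placeholders):

Code:
#define N_SAMPLES 50
const int sine_table[N_SAMPLES] = {0 /* fill with one period of the 1kHz sine */};

volatile int dc_offset;          /* updated slowly by the analysis code */
static unsigned int idx = 0;

int next_dac_sample(void)
{
    int s = dc_offset + sine_table[idx];   /* DC part plus sine part */
    if (++idx >= N_SAMPLES)                /* wrap after one full period */
        idx = 0;
    return s;
}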

The input signal at the ADC:
A 1kHz sine wave. The MCU's ADC (12 bits) needs to convert at 50kHz, which again means 50 samples/period.

Here is my question: if I use a timer-triggered ADC interrupt and update the DAC at the same time, the interrupt will be too big (start the ADC conversion, put the ADC value into an array, load the DAC value, write it to the output ports).

The interrupt latency will also be long.

If a big interrupt is a bad idea, is there any way to improve this?


Regards
Richie
 

Georacer

Joined Nov 25, 2009
5,182
I apologize that you can't post in your thread anymore. Things got a bit out of hand there, and you weren't around at the time to be asked your opinion regarding your thread's fate.

But since this seems to be a new question, it will be better treated in a thread of its own anyway.
 

John P

Joined Oct 14, 2008
2,025
I looked up the dsPIC30F, and it says "2 Msps conversion speed at 5V". That seems to be easily fast enough to convert at 50K/second. Maybe you should work out a timing budget for all the things that the processor needs to do during each cycle, and see if it's all possible within a 20usec period. If it seems workable, write the code, and go through it with a pencil and add up the execution time. If it's too much, then the project just won't work and you'll have to think again.
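
For a rough number: the fastest dsPIC30F parts run at 30 MIPS, so a 20usec period gives you at most about 600 instruction cycles per sample, and everything (reading the A/D, writing the DAC, and all the analysis math) has to fit inside that.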
 

Thread Starter

richiechen

Joined Jan 1, 2012
93
Don't worry about that, Georacer.

Thank you, John. You're always helping others.

The timing should be OK since, as you said, the supported ADC rate is actually quite high.

Could I ask more about interrupts, as I am not familiar with them?
1. Can an interrupt change global variables? I need to store the value returned by the ADC, but I have read somewhere that doing so is not recommended.
2. I also need to load the value into the DAC, so the whole interrupt may be large.
And many people suggest that interrupts be as short as possible.

Thanks
Richie
 

atferrari

Joined Jan 6, 2004
4,764
Hola richiechen

The interrupt latency will also be long.
Latency is the time it takes to start the ISR.

I have no experience with that family of micros, but I understand they all have a well-defined latency, inherent to the design.

You know beforehand how many cycles it takes to start the respective ISR.

If I do not use a big interrupt, any way to improve?
I understand that by "big" you mean "long", right?

Whatever can be done outside the ISR (calculations, clearing of flags, data output, data transfer or whatever) is better moved to the main loop.

Leave inside only what is timing-sensitive.

If MPSIM works with your micro, use the stopwatch function to find the timing of any part of your program.
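
As a sketch of that split (C30 syntax, assuming a Timer1 interrupt and the ADC set to auto-convert; the names are mine):

Code:
#include <p30fxxxx.h>            /* substitute your exact device header */

volatile int adc_value;
volatile char sample_ready = 0;

/* ISR: only the timing-sensitive work stays here. */
void __attribute__((__interrupt__, no_auto_psv)) _T1Interrupt(void)
{
    adc_value = ADCBUF0;         /* grab the finished conversion */
    ADCON1bits.SAMP = 1;         /* start the next one (auto-convert ends it) */
    sample_ready = 1;            /* hand off to the main loop */
    IFS0bits.T1IF = 0;           /* clear the interrupt flag */
}

int main(void)
{
    /* ... timer, ADC and port setup ... */
    while (1)
    {
        if (sample_ready)
        {
            sample_ready = 0;
            /* calculations, data transfer: everything non-critical here */
        }
    }
}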
 

ErnieM

Joined Apr 24, 2011
8,377
Richie,

Damn, I wrote a reply yesterday and never sent it.

You have the following tasks to do 50,000 times a second. I usually call the period a "tick," so you have 50,000 ticks each second to do the following:

A) Start an A2D conversion
B) wait for (A) to complete
C) Read A2D result
D) process result from (C)
E) Output computation result from (D)

The most undesirable thing here is (B), the waiting for the cycle to complete. So don't wait, do this instead:

A) Read A2D result
B) Start an A2D conversion
C) process result from (A)
D) Output computation result from (C)

What is happening here is that at tick (N) you are reading the result of the conversion started on the previous tick (N-1). The conversion is always a fixed 1 tick of time behind.

Thus your time limitation is just the computation of the new output.
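
In rough C, one tick looks like this (my names: process() and write_dac() stand in for whatever your computation and output code are, and I'm assuming the ADC is configured to auto-convert once sampling starts):

Code:
#include <p30fxxxx.h>               /* substitute your exact device header */

extern int  process(int sample);    /* your analysis math */
extern void write_dac(int value);   /* your DAC output routine */

/* One tick, 50,000 times a second (sketch only): */
void tick(void)
{
    int previous = ADCBUF0;         /* A) result started on the previous tick */
    ADCON1bits.SAMP = 1;            /* B) kick off the next conversion */
    int result = process(previous); /* C) crunch the previous sample */
    write_dac(result);              /* D) ship the new output */
}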
 

Thread Starter

richiechen

Joined Jan 1, 2012
93
You know beforehand how many cycles it takes to start the respective ISR.
Does latency depend on whether an instruction is one-cycle or two-cycle?
What exactly are one-cycle and two-cycle instructions?

Thanks
Richie
 

Thread Starter

richiechen

Joined Jan 1, 2012
93
1. Then why does an interrupt need to be short?
2. If an interrupt is triggered by a timer, will the timer continue to run while the interrupt is executing?


Thanks
Richie
 

ErnieM

Joined Apr 24, 2011
8,377
Does latency depend on whether an instruction is one-cycle or two-cycle?
Latency is the time between the event signal tripping the interrupt and the very first useful instruction you can execute. It is the sum of several factors, but the largest will be the code C needs to silently insert to make even the simplest statement (i.e., a simple addition) work. It can be several times longer than what you will see in the device data sheet.

Some seemingly ordinary things such as calling a function from an interrupt can cause this time to grow very significantly. Do read your compiler manual carefully to see what you can and can't control by settings in C or how you write your code.

What exactly are one-cycle and two-cycle instructions?
Unless you are writing the tightest possible hand-coded assembly program, you do not care. (On the dsPIC, most instructions execute in one instruction cycle; the main exceptions are instructions that change the program flow, which take extra cycles.)

1. Then why does an interrupt need to be short?
Generally, interrupt handlers are kept short to allow time for other code to execute. If this function is all your micro is doing, then for you "short" means shorter than a tick interval.

2. If an interrupt is triggered by a timer, will the timer continue to run while the interrupt is executing?
Yes, it will run continuously.
 

MrChips

Joined Oct 2, 2009
30,711
If you are accessing the ADC repeatedly to digitize an input waveform and outputting to a DAC, it is most important that the sampling period is held constant. If the sampling period is allowed to vary, this introduces a timing error called "jitter". The end result is almost the same as adding input noise to the sampled data.

Since you are doing repeated conversions, you do not have to test for ADC completion. You do have to do the timing analysis, which is straightforward. You need to know the conversion time of the ADC. For example, let us say the conversion time is 8us. This means the fastest rate at which you can access the ADC is 125ksps. So we set the sampling rate lower, say at 100kHz.

To accomplish this, we set a hardware timer to cause an interrupt every 10us.
On every timer interrupt, you read the ADC, start a new conversion immediately and store the data into a buffer. Depending on the tasks of the project you may set a flag. If you are outputting to a DAC also, you can do that here. A goal is to keep the interrupt routine to the bare minimum.

All other tasks are performed in the main program.
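
A sketch of that arrangement in C30 (register values from the dsPIC30F data sheet, but verify them; Fcy = 20MHz is assumed for the period arithmetic, and the buffer name and size are arbitrary):

Code:
#include <p30fxxxx.h>               /* substitute your exact device header */

#define BUF_LEN 64                  /* power of two, size is arbitrary */
volatile int buffer[BUF_LEN];
volatile unsigned int head = 0;
volatile char data_ready = 0;

void timer1_init(void)
{
    TMR1 = 0;
    PR1 = 199;                      /* (20MHz x 10us) - 1 */
    T1CON = 0x8000;                 /* timer on, internal clock, 1:1 prescale */
    IFS0bits.T1IF = 0;
    IEC0bits.T1IE = 1;              /* enable the Timer1 interrupt */
}

void __attribute__((__interrupt__, no_auto_psv)) _T1Interrupt(void)
{
    buffer[head++ & (BUF_LEN - 1)] = ADCBUF0;  /* read the last result, store it */
    ADCON1bits.SAMP = 1;            /* start the next conversion immediately */
    /* the DAC output would go here too */
    data_ready = 1;                 /* flag for the main program */
    IFS0bits.T1IF = 0;              /* clear the flag for the next tick */
}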
 

blind2_man

Joined May 12, 2008
33
There is another way: set up the ADC with DMA, so that you don't take an interrupt for each sample but only when the transfer to the RAM area is done. But I do not know if your dsPIC30F has DMA.
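
On parts that do have it (the dsPIC33F family, for instance), the idea looks roughly like this; sketch only, with register names from the 33F documentation, so verify everything against your device:

Code:
#include <p33Fxxxx.h>               /* dsPIC33F generic header */

#define BLOCK 64
int buf[BLOCK] __attribute__((space(dma)));  /* buffer in DMA-accessible RAM */

void dma_init(void)
{
    DMA0CON = 0;                    /* word transfers, post-increment, continuous */
    DMA0PAD = (int)&ADC1BUF0;       /* peripheral address: the ADC1 result */
    DMA0REQ = 13;                   /* request source: ADC1 convert done */
    DMA0STA = __builtin_dmaoffset(buf);  /* buffer start inside DMA RAM */
    DMA0CNT = BLOCK - 1;            /* interrupt after BLOCK transfers */
    IFS0bits.DMA0IF = 0;
    IEC0bits.DMA0IE = 1;
    DMA0CONbits.CHEN = 1;           /* enable the channel */
}

void __attribute__((__interrupt__, no_auto_psv)) _DMA0Interrupt(void)
{
    /* a whole block of samples is now sitting in buf[] */
    IFS0bits.DMA0IF = 0;
}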
 

ErnieM

Joined Apr 24, 2011
8,377
Imagine someone testing a new circuit. They may use a function generator to drive it in various ways while they look at the output on an oscilloscope. Something similar can be done with your code too.

Inside the same MPLAB where you write your code there is a simulator called MPLAB SIM. It is located up on the Debugger menu item.

Do learn how to use it, and use it here to experiment with the various ways you can handle the interrupt, with the baseline being to just clear the flag and return. You are looking to see what the stopwatch tells you in terms of time.
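
The baseline case is literally just this (C30 syntax, assuming a Timer1 interrupt):

Code:
/* Baseline for the stopwatch test: clear the flag and return. */
void __attribute__((__interrupt__, no_auto_psv)) _T1Interrupt(void)
{
    IFS0bits.T1IF = 0;
}

Time that first, then add your real work back a piece at a time and watch how the cycle count grows.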

If you drop to the disassembly listing and trace code that way you can see every instruction C has inserted for you. Some things insert LOTS of code and perhaps should be avoided.

My catch phrase, learned from the school of hard knocks, has always been "if it don't sim then you can't win."
 

John P

Joined Oct 14, 2008
2,025
In general you need to keep interrupts brief so that, first, they'll be able to complete before it's time to start the next one. If you fail there, you've lost hope of keeping the interrupts correctly timed, and you'll probably lose some interrupts completely. Second, the main() routine has its own work to do, and you have to allow time for that. If every interrupt gives main() some new task, then between interrupts there must be time to execute that task.

There might be a situation where (let's say) data comes in as a burst, where there's time to buffer it but not process it all, and in that case you could let the data accumulate and deal with it as you're able to. But then you'd need to take a careful look at how much room was in the buffer, and how fast data was being added versus how fast it was being removed.
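
For that buffering case, a simple ring buffer is the usual tool. A bare-bones sketch (my names; a power-of-two size keeps the wrap-around cheap):

Code:
#define BUF_SIZE 128                /* power of two */
volatile int rb[BUF_SIZE];
volatile unsigned int wr = 0, rd = 0;

/* producer, called from the interrupt */
void rb_put(int sample)
{
    rb[wr & (BUF_SIZE - 1)] = sample;
    wr++;
}

/* consumer, called from main(); returns 1 if a sample was available */
int rb_get(int *sample)
{
    if (rd == wr)
        return 0;                   /* buffer empty */
    *sample = rb[rd & (BUF_SIZE - 1)];
    rd++;
    return 1;
}

/* headroom check: how far ahead is the producer? */
unsigned int rb_count(void)
{
    return wr - rd;                 /* valid even when the counters wrap */
}

rb_count() is the "how much room is in the buffer" check: if it ever gets close to BUF_SIZE, data is being added faster than it is being removed.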

If your dsPIC were like the 16F series processors, you'd have to fire the A/D in one interrupt and get the result the next time through. With its fast A/D, you can get the result almost immediately, so that's not necessary. But the issue is still the same: how much processing time do you need for each of those 50K/sec interrupts? You've got to work this out, or set it up and check it experimentally.
 