How? PIC calculate/output simultaneously

Status
Not open for further replies.

Eric007

Joined Aug 5, 2011
1,158
There is nothing to agree or disagree about, and no point to argue. Either you are limiting the capability of your designs, or your designs are not involved/intricate/complex enough to fully exercise the silicon you choose to use.

And this is the perfect place to discuss it.
Yes Sir!

The OP wanted to know if two things can be done on a PIC simultaneously. The correct answer is yes. In fact, generally quite a few more than two things.
Wow...I'm impressed!
 

thatoneguy

Joined Feb 19, 2009
6,359
I believe in using more than one interrupt source; that's the only way to get a real-time response in some cases.

However, I rarely do any processing in the interrupt loop other than setting a flag or a port, and never call a function. I violate this guideline sometimes when the purpose of the firmware is to have a high priority on a function, and will then make it as short as possible, running it in the actual interrupt, preferably with the long parts continually pre-calculated in the main loop where a delay would normally be used.

Then the flags are checked in the main loop and processed. Main loop shouldn't have any delays in it, or only very short ones if bit banging.

We've had "Real Time Computing" for a lot longer than we've had dual-core computers. It's accomplished with semaphores/flags, interrupts, and being aware of how much time something is taking. Never use a bit-banged serial interface if you can use the onboard UART or SPI, even if slightly hacked (see PIC VGA Driver).
 

ErnieM

Joined Apr 24, 2011
8,377
If a single interrupt were all that was necessary, and the best way to do things, then that is how microcontrollers would work. But the trend is just the opposite. If you go up to the PIC32 devices you will find 96 interrupt sources and 64 vectors with 7 priority levels.

Interrupts are your friend. Properly used they give your app very fast response to many things (seemingly) all at once.
 

ErnieM

Joined Apr 24, 2011
8,377
Another point: you don't always need an interrupt to do two things at the same time. One product I designed required a pin to be exercised, then a pulse train to be counted during a defined window, and finally the output to be changed based on the pulse count.

The issue? The next measurement cycle had to begin as soon as the previous one was complete, so there was just no time to do a few dozen cycles in a single time frame.

The solution was to accumulate the pulses into the hardware Timer thru an external pin. The sample window was opened by enabling the timer to count and clearing it, and closed just by disabling the timer input. That value was saved, and then processed during the next measurement frame during the open window time.

Thus two things were literally happening at the same time, due to using the hardware to do one task. Something similar could be done by, say, using the time the A2D takes to do a conversion to do some other task.
 

John P

Joined Oct 14, 2008
2,025
Oh, well then.

I think sometimes the way we do programming has something to do with a person's character, or mental outlook. We're attracted to do things in a certain way, even if there are other methods that also work.

If you use a PIC processor, all the interrupts jump to the same place, and then you, or the compiler in some cases (CCS does; BoostC doesn't), send the program off to handle each individual interrupt. By using only one timer interrupt, you can eliminate this process, but then you have to check flags to see which of the various hardware features of the processor need service: A/D converter, UART, timers other than the one that generated the interrupt, etc.

There are obviously going to be delays in servicing most of the peripherals, as they all have to wait for a timer. But in general that isn't a problem, as long as service comes quickly enough. For a UART, that means you have to check at least as often as characters can come in; at 115200 baud that works out to checking at about 12 kHz, and in fact I often do use an interrupt at that rate.

What I feel is gained by having just the one interrupt is that first, you have a timer that runs at the speed you set it for, rock solid. I often set a pin high when the interrupt first begins, then clear it as the last action taken when returning to the main program. Then I can look at this on a scope, and use it to gauge how much time the interrupt takes--and I can observe the occasional longer pulses when one feature or other has been demanding action, and assure myself that if it has to do the maximum amount of work there's still time to do it all (a critical issue, obviously). It's comforting to see that output at a constant frequency.

If each interrupt is part of a chaotic process(!!) in principle there can be faster service, but it's service with unpredictable delays, caused by one interrupt having to wait for another to complete. In fact if the multiple-interrupt scheme hits its worst-case condition, with everything happening at once, it runs inefficiently because each interrupt must start, store registers, do its work, restore registers and then return, followed by an immediate entry into another interrupt. With the single timer scheme, a single interrupt can serve everything, with considerable saving in time at the most crucial moment.

The cost for this is that when nothing is happening, the timer will still be generating interrupts at a fast rate. My response is to say yes, but when nothing is happening, why should we care? You don't have to pay the processor extra money if you waste its time; it processes instructions at the same rate regardless of what it's doing.

Having written all this out, I like my method even more. But if you don't agree, that's your privilege. We don't all have to do things the same way.
 

THE_RB

Joined Feb 11, 2008
5,438
Here is a PIC project with source code that makes a precision frequency and precision waveshape 1kHz sine;
http://www.romanblack.com/onesec/Sine1kHz.htm



It does not need interrupts; it generates a 50 kHz loop period using the PIC TMR2 module and makes the sine using TMR2 and PWM from CCP1.

If you are happy to use 25 ADC samples per 1 ms (per one sine cycle) then all you need to do is use that code and sample the ADC every 2 loops.

Otherwise it is easy enough to adapt to a sine made from 40 samples but you need to generate a new sine waveform table.
 

ErnieM

Joined Apr 24, 2011
8,377
John: well, no.

By using a single interrupt source to check various flags you have traded a quite predictable system for one where a race condition is the norm.

For a system where each service generates its own interrupt, one can compute the sum of the individual frequencies of service requests times the fraction of each request's instruction cycles required per unit time. If the sum of these for all peripherals shows significant daylight your system should work. You still have priorities to make sure the most important tasks happen first.

When you use a single interrupt trigger you can have several sources all requiring service get jammed together, as they have to wait until the timer fires. Your timing requirement then becomes the sum of all requests' instruction cycles over the timing rate, as you have to plan to service everything during the short time of the fast-rate timer.
 

MMcLaren

Joined Feb 14, 2010
861
For a system where each service generates its own interrupt, one can compute the sum of the individual frequencies of service requests times the fraction of each request's instruction cycles required per unit time. If the sum of these for all peripherals shows significant daylight your system should work. You still have priorities to make sure the most important tasks happen first
Ernie,

Can you rephrase this, please? I don't understand what you're trying to say. Are there missing commas, words, or sentences? Also, what does "shows significant daylight" mean?

I'd really like to understand what you're trying to say and I apologize for my language shortcomings.
 

John P

Joined Oct 14, 2008
2,025
He's bogged down in confusion caused by multiple inputs all clamoring for attention.

Clarity, clarity is what we need. In software as in life.
 

MrChips

Joined Oct 2, 2009
30,720
I think we are ignoring an important distinction. We have to pay attention to the difference between fixed periodic interrupts and random interrupts.

If we have a situation where the sample interval of an ADC or DAC is critical and no interference is tolerable then a single timer interrupt is the solution where all ADC and DAC processing is handled by the timer interrupt handler alone. All other devices, such as random UART RX, must be polled.

If this situation does not exist then it is common practice to allow multiple random interrupts from different sources. The recommended approach is to spend as little time as possible within each interrupt handler, either by executing a simple straightforward task or by setting a semaphore (flag).

I design and build waveform digitizers and arbitrary waveform generators with sample frequencies of 20 Msps. Here the sample streams must not be interrupted by random interrupts.
 

ErnieM

Joined Apr 24, 2011
8,377
Ernie,

Can you rephrase this, please? I don't understand what you're trying to say. Are there missing commas, words, or sentences? Also, what does "shows significant daylight" mean?

I'd really like to understand what you're trying to say and I apologize for my language shortcomings.
Yeah as I was writing it I was afraid it was a "write-only" sentence.

Say I have an interrupt source #1 that takes T1 to service and occurs N1 times a second. The time to service that would just be S1 = N1*T1 per second.

Do that for every source, and add them up for St = S1 + S2 + ...Sn.

As long as St < 1 second you have a good chance of everything working.

If I get St << 1 second ("significant daylight") I stop worrying, though I will still look at the sources that need the fastest service to ensure that if several things all hit at the same time they still get responsive service.
 

MMcLaren

Joined Feb 14, 2010
861
Ernie,

Thank you for the clarification and thank you for constructing a proper argument with a conclusion and premises. I don't necessarily believe your conclusions are correct but at least you've made an "argument".

Unlike your arguments, some comments from another poster are a bit disturbing;

Then you are seriously limiting what you can do in any one program.
Either you are limiting the capability of your designs, or your designs are not involved/intricate/complex enough to fully exercise the silicon you choose to use.
These comments aren't backed up by any premises so they're really not much more than an opinion. Both comments seem to require a suspension of logic or at least a leap of logic. There's simply no way the author can know John's capabilities and, based on my experience, it seems the author might not be aware of the capabilities of a carefully designed event-to-time-domain interrupt proc'. I believe the comments are unfounded and offensive and I believe John deserves an apology.

Regards, Mike
 

John P

Joined Oct 14, 2008
2,025
Thanks MMcL, but it's OK.

I think MrChips has made an important point. Suppose you have a processor like the PIC16Fx which has only one interrupt priority level, and you have to operate some output, or read an input, at a particular rate which cannot change. Obviously, you set up a periodic interrupt at that frequency. But then what if there are other things happening which might require an interrupt, like characters coming in from a UART?

If you normally allocate an independent interrupt for each of these peripherals, you start to fret at this point. But if you routinely use a timer as your only interrupt, you don't have to do anything special. You do the time-based action first, to make sure that nothing requiring branches of varying lengths gets in the way. Then you check the UART, or you check for user operation of a keypad or you increment a count for a delay in driving an LCD, or send the next data there if the count has run down. Or possibly set flags to call for the main() routine to do some of those things--sometimes that's a judgment call. You either do all these things every time, just because it's easiest, or you divide the clock down and do them every tenth or hundredth time, or whatever's reasonable. But the key point is, there is only one interrupt and it's based on a fixed frequency.

I'm not saying everyone should do it this way. But the silicon and me, we get along really well.
 

ErnieM

Joined Apr 24, 2011
8,377
These comments aren't backed up by any premises so they're really not much more than an opinion.
True dat. You should also realize those comments were in response to a post which actually stated it is merely an opinion:

I'm a strong believer in having just one interrupt source active, where that one is a fast repetitive clock.
There are many ways to achieve a given task, but running everything thru a single interrupt vector is (to coin a phrase) "seriously limiting what you can do in any one program."

Sure, instead of an interrupt you can check flags and such, as long as you don't mind spending a lot of time checking flags instead of doing something possibly more useful.

When things have to be handled in some specific order one can use the priority feature to perform the essential tasks first, and less critical functions second. If your PIC doesn't have that feature then perhaps you have the wrong device.
 

MMcLaren

Joined Feb 14, 2010
861
Thanks MMcL, but it's OK.

I think MrChips has made an important point. Suppose you have a processor like the PIC16Fx which has only one interrupt priority level, and you have to operate some output, or read an input, at a particular rate which cannot change. Obviously, you set up a periodic interrupt at that frequency. But then what if there are other things happening which might require an interrupt, like characters coming in from a UART?

If you normally allocate an independent interrupt for each of these peripherals, you start to fret at this point. But if you routinely use a timer as your only interrupt, you don't have to do anything special. You do the time-based action first, to make sure that nothing requiring branches of varying lengths gets in the way. Then you check the UART, or you check for user operation of a keypad or you increment a count for a delay in driving an LCD, or send the next data there if the count has run down. Or possibly set flags to call for the main() routine to do some of those things--sometimes that's a judgment call. You either do all these things every time, just because it's easiest, or you divide the clock down and do them every tenth or hundredth time, or whatever's reasonable. But the key point is, there is only one interrupt and it's based on a fixed frequency.

I'm not saying everyone should do it this way. But the silicon and me, we get along really well.
I agree. The type of task scheduling and task balancing method you're describing can often result in performance and capabilities beyond what would be possible by using another method (or multiple interrupts). I just think it's unfair for others to criticize the method, or in this case, criticize you for suggesting the method, simply because they're not familiar or proficient with the method themselves.
 

Eric007

Joined Aug 5, 2011
1,158
I'm enjoying reading this thread...Big boyz are talking here!!!

But woh woh I can also feel some 'tensions' up in here...take it easy guyz!
 

joeyd999

Joined Jun 6, 2011
5,237
Ernie,

Unlike your arguments, some comments from another poster are a bit disturbing;

[snip...]

These comments aren't backed up by any premises so they're really not much more than an opinion. Both comments seem to require a suspension of logic or at least a leap of logic. There's simply no way the author can know John's capabilities and, based on my experience, it seems the author might not be aware of the capabilities of a carefully designed event-to-time-domain interrupt proc'. I believe the comments are unfounded and offensive and I believe John deserves an apology.
Mike,

You may call me out by name...I've no problem with that.

Again, I stand by my comments (you and I have been through this before). I write what I intend to say, and how I intend to say it. The only apology I will offer is that I am sorry if my 'matter-of-fact' writing style offends some.

Please keep in mind that my 'opinions' come from many years of experience not only writing code, but 'repairing' (read: rewriting!) code that others have written.

I do not have time to write out a full dissertation regarding the effective utilization of silicon vs. various interrupt schemes. And if I did, it would be a book. What I know is, most of the code I write would either not work for a particular piece of silicon, or require a substantially more powerful piece of silicon to work, if I limited myself to one single interrupt.

As far as PICs go, I started writing for the PIC16C54 in 1991. These had *no* interrupts. I made do (quite successfully), but I felt considerably cramped by the limited capability. As more interrupt facilities became available over time, the amount of work that could be done by a similarly capable (MHz, ROM, RAM) part increased exponentially.

The most significant problem with John P's approach is that his main program loop must always complete within one interrupt period, which I believe is at least partly what ErnieM was referring to. If not, then he will begin missing signals at some point, and the application will become unreliable.

Therefore, to increase complexity of his main body code, he will need to either decrease the interrupt rate, break up his main code into smaller, illogical parts, or upgrade the silicon. This is what I meant wrt 'limiting the capability of his designs'.

I never need to worry about the execution time of my main loop. All I need to ensure is that I have a reasonable percentage of execution time available to the loop ('daylight' as ErnieM put it). I wrote on another thread somewhere that, on a particular app, 90% of my instruction cycles were consumed in interrupts driving not 1 but 2 PID loops at a 20 kHz rate. The remaining 10% of instruction cycles were used by the main loop for state machine processing, UI, and communications (hardware driver also interrupt driven). This would simply be *impossible* to do with only one interrupt and polling of hardware services.

Back to my writing style: I do consider myself an 'expert' in embedded programming. Jeeze, I've been doing it successfully for 30 years! I've put in the hours. I've done things that others have told me are impossible. Heck, I always design-in at least one impossibility in everything I do. Take what I say for what it's worth (or what you paid for it). Feel free to ignore me, if you like. Or debate the point (I love these kinds of debates). But expect no apologies, because none are forthcoming.
 

MMcLaren

Joined Feb 14, 2010
861
There are many ways to achieve a given task, but running everything thru a single interrupt vector is (to coin a phrase) "seriously limiting what you can do in any one program."
You'll have to forgive me Ernie... I had a Philosophy class last semester which has affected the way I look at language. Take your statement above for example. You didn't preface it with "In my opinion" and you didn't provide any supporting premises, so I'm not sure if it's an opinion, a conclusion, or an authoritative fact. I don't see how you could defend such a sweeping conclusion, but I have an open mind and you're certainly welcome to try to convince me.
 

John P

Joined Oct 14, 2008
2,025
...
The most significant problem with John P's approach is that his main program loop must always complete within one interrupt period, which I believe is at least partly what ErnieM was referring to. If not, then he will begin missing signals at some point, and the application will become unreliable.
...
If you believe this, then I can understand why you're a bit scornful. But it's not true at all. The main() routine and its subroutines can be doing whatever they need to do when the one and only interrupt fires. Why wouldn't they? Perhaps you haven't fully understood that the one and only interrupt checks flags to see if the other peripherals (which with other software schemes would get their own interrupts) are needing service. Of course this can't happen at the main() level. The two caveats are that the peripheral flags must be checked often enough to do whatever they need, and the "jackpot condition", when every peripheral wants service, mustn't take longer than one clock interval. However you arrange things, that's always going to be the worst case. But as I said, I claim my method is most efficient there, because the cost of entering and leaving the interrupt only has to be paid once.

How about considering the situation MrChips brought up, where there's some device that needs service at an exactly regular interval but there are other devices that also have to be dealt with? If it's possible to solve that problem with a single interrupt, shouldn't it be possible to use the same technique always?
 