How? PIC calculate/output simultaneously


joeyd999

Joined Jun 6, 2011
5,237
If you believe this, then I can understand why you're a bit scornful.
Scornful? Really? Really????

Just here talkin' man. No emotional involvement whatsoever.

But it's not true at all. The main() routine and its subroutines can be doing whatever they need to do when the one and only interrupt fires. Why wouldn't they?
Then I made the serious error in assuming that your main program might actually have to do something as a result of that interrupt! I apologize!

Perhaps you haven't fully understood that the one and only interrupt checks flags to see if the other peripherals (which with other software schemes would get their own interrupts) are needing service. Of course this can't happen at the main() level. The two caveats are that the peripheral flags must be checked often enough to do whatever they need, and the "jackpot condition", when every peripheral wants service, mustn't take longer than one clock interval.
My gosh, it's worse than I thought! You're concerned about the impact of interrupt overhead (I assume in terms of the branch/return and context save/restore), yet you poll multiple devices in an interrupt whether service is needed or not!

So, let's get this straight:

Say you've got a SPI EEPROM, to which you want to read/write 1024 bytes in a reasonable amount of time, so you set your SPI clock to, say, 500 kHz, for a total data transfer time of about 16 ms (ignore write time, please).

The TX buffer will empty every 16 microseconds, or 80 instruction cycles at 20 MHz (one instruction cycle is Fosc/4, i.e. 5 MIPS).

For your scheme to work, your 1 interrupt will need to occur, on average, every 16 microseconds, whether your SPI is operating or not.

Consequently, all your polling and servicing must be complete within 80 instruction cycles, or your data rate will drop (during SPI read/write), first to 50%, then to 33%, and so on.

The timing issues compound rapidly when you have multiple things going on at once. Typically, I'll have a main system clock, an EUSART, a SPI or IIC (with multiple devices), a keypad, a direct-drive LCD (whose timing simply will not wait for you to get around to it) or a multiplexed LED display, one or more periodic signals to synchronously drive the analog components, multiple channels of A/D synchronized with the aforementioned driving signal, a sound generator driving multiple tones to a speaker, multiple PWMs, etc.

Trying to tie this all together on 1 interrupt source, and making it work reliably and predictably is my idea of physical and mental torture!

And don't tell me to operate my EEPROM in the main loop! I refuse to halt my code for anything for 16 ms + cumulative write times!
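For anyone keeping score, here is roughly what that one-interrupt scheme looks like in practice. This is a minimal sketch only, assuming an enhanced-midrange PIC16 under XC8; the handler names are hypothetical placeholders and exact flag names vary by device:

[CODE]
// The one-interrupt scheme: a periodic timer tick polls every
// peripheral flag, needed or not. Minimal sketch, assuming an
// enhanced-midrange PIC16 under XC8; handler names are hypothetical.
#include <xc.h>

void ServiceSpi(void);      // hypothetical placeholders
void ServiceUartRx(void);
void ServiceAdc(void);

void __interrupt() TickIsr(void)
{
    TMR0IF = 0;     // the one and only enabled interrupt: the TMR0 tick

    // Peripheral flags still set even with their interrupts disabled,
    // so each tick checks them all.
    if (SSP1IF) { SSP1IF = 0; ServiceSpi(); }   // SPI byte finished?
    if (RCIF)   { ServiceUartRx(); }            // RCIF clears on RCREG read
    if (ADIF)   { ADIF = 0; ServiceAdc(); }     // conversion done?

    // The budget problem: at 500 kHz SPI a byte completes every 16 us,
    // which is 80 instruction cycles at 20 MHz (Fosc/4 = 5 MIPS). The
    // tick, the context save/restore, and ALL the checks above must fit
    // in that window, even on ticks where nothing needs service.
}
[/CODE]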

However you arrange things, that's always going to be the worst case. But as I said, I claim my method is most efficient there, because the cost of entering and leaving the interrupt only has to be paid once.
OK, I get it. I spend my time writing code, you spend yours arranging it.

How about considering the situation MrChips brought up, where there's some device that needs service at an exactly regular interval but there are other devices that also have to be dealt with? If it's possible to solve that problem with a single interrupt, shouldn't it be possible to use the same technique always?
I do this all the time, and usually more than one device. The magic is in the hardware. As I mentioned somewhere previously, lots of the PIC hardware is double buffered: the SPI, EUSART, PWM, CCPs, etc. If I need an accurate output signal, where no jitter is tolerable, I will use a built-in hardware solution, such as the special event trigger, also tied to an interrupt. It is difficult (actually, kludgey), but not impossible, to create a jitter-free signal in any other way (without using additional hardware).
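To show what I mean, here is a minimal sketch of the special event trigger setup, assuming a PIC16F877A-class part under XC8; the 1 ms period is an arbitrary example and exact register bit names vary by device:

[CODE]
// Jitter-free periodic event via the CCP1 special event trigger:
// the compare match resets Timer1 in hardware, so the period is exact
// no matter how late the ISR is entered. Sketch for a PIC16F877A-class
// part under XC8; check your datasheet for exact bit names.
#include <xc.h>

void Ccp1Init(void)
{
    T1CON   = 0x00;         // Timer1: Fosc/4 clock, 1:1 prescale, off
    TMR1H   = 0;
    TMR1L   = 0;
    CCPR1H  = 5000u >> 8;   // compare at 5000 cycles, about 1 ms at
    CCPR1L  = 5000u & 0xFF; // 20 MHz (5 MIPS); arbitrary example period
    CCP1CON = 0x0B;         // compare mode, special event trigger:
                            // hardware resets TMR1 and sets CCP1IF
    CCP1IF  = 0;
    CCP1IE  = 1;            // ISR latency can delay the *service*,
    PEIE    = 1;            // but never the period itself
    GIE     = 1;
    TMR1ON  = 1;            // go
}
[/CODE]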

I do not deny that what you do works. You wouldn't be here debating this if it didn't. My point is that you will never get close to 100% utilization of any particular silicon you use. Maybe you don't need to. I often do.
 

ErnieM

Joined Apr 24, 2011
8,377
There are many ways to achieve a given task, but running everything thru a single interrupt vector is (to coin a phrase) "seriously limiting what you can do in any one program."
You'll have to forgive me Ernie... I had a Philosophy class last semester which has affected the way I look at language. Take your statement above for example. You didn't preface it with "In my opinion" and you didn't provide any supporting premises, so I'm not sure if it's an opinion, a conclusion, or an authoritative fact. I don't see how you could defend such a sweeping conclusion, but I have an open mind and you're certainly welcome to try to convince me.
Humpty Dumpty said:
‘When I use a word, it means just what I choose it to mean, neither more nor less.’
My statement, and the supporting evidence I made in that post are indeed my opinions, my conclusions, and to the best of my knowledge an authoritative fact.

Not to pick on anyone, but when I see statements such as: "All other devices, such as random UART RX, must be polled" I feel I must comment. A UART must be read at the maximum data rate to prevent an overrun and loss of data. Thus it must be polled at the max data rate lest data be missed. If you would instead handle that in a low priority vector and leave your time-sensitive work in the high priority vector, you get a more responsive system without wasting processing time and design sweat doing frequent polling for events that typically are not there.

To poll the device at a fast rate you necessarily have to have a main loop of code that completely executes in this time frame. And THAT is one serious limitation.
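To make the two-vector idea concrete, here is a minimal sketch, assuming a PIC18 under XC8 (only PIC18-class parts have the two priority levels; register and bit names follow the PIC18 datasheets, and the EUSART setup is assumed done elsewhere):

[CODE]
// Prioritized interrupts: time-critical work on the high vector, the
// EUSART RX on the low one, so a received byte is handled when it
// arrives, with no polling. Sketch for a PIC18 under XC8; EUSART
// setup is assumed done elsewhere.
#include <xc.h>

volatile unsigned char lastRx;

void PriorityInit(void)
{
    RCONbits.IPEN   = 1;    // enable the two-level priority scheme
    IPR1bits.RCIP   = 0;    // EUSART receive -> low priority
    PIE1bits.RCIE   = 1;
    INTCONbits.GIEL = 1;    // low-priority interrupts on
    INTCONbits.GIEH = 1;    // high-priority interrupts on
}

void __interrupt(high_priority) HiIsr(void)
{
    // time-critical device service goes here; it preempts LoIsr
}

void __interrupt(low_priority) LoIsr(void)
{
    if (PIR1bits.RCIF)
        lastRx = RCREG;     // reading RCREG clears RCIF
}
[/CODE]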

HAHAHAH...LOL Poor richiechen...he's probably lost!
I do hope that is not the case! While we are having a good time hijacking his thread it is my sincere intent to present some fundamental concepts for enlightenment, both for "poor" richiechen and anyone else who may be following the thread.
 

MrChips

Joined Oct 2, 2009
30,712
If I may jump into the fray, one method is not better than the other. There are situations where multiple random interrupts are tolerable. There are other situations where it would be intolerable.
 

MMcLaren

Joined Feb 14, 2010
861
If I may jump into the fray, one method is not better than the other. There are situations where multiple random interrupts are tolerable. There are other situations where it would be intolerable.
The "voice of reason". Thank you MrChips...

I wish some other members were as open-minded as you Sir...

Cheerful regards, Mike
 

joeyd999

Joined Jun 6, 2011
5,237
The "voice of reason". Thank you MrChips...

I wish some other members were as open-minded as you Sir...

Cheerful regards, Mike
I am open-minded. Just not quite so diplomatic.

Edit: And, Mike, you may refer to me as 'He Whose Name Must Not Be Spoken' if you wish.
 

ErnieM

Joined Apr 24, 2011
8,377
If I may jump into the fray, one method is not better than the other. There are situations where multiple random interrupts are tolerable. There are other situations where it would be intolerable.
Agreed, except it should read "There are situations where multiple random interrupts are desirable and necessary."

The next sentence "There are other situations where it would be intolerable" needs to show a single instance where such is true.
 

John P

Joined Oct 14, 2008
2,025
I would have the statement say "Experienced programmers disagree on whether there are situations where multiple random interrupts are desirable and necessary". I've seen some opinions strongly expressed, but no proof of anything.

MrChips' scenario of an output or input that has to take place at a constant rate, without exception, is an example where you can't tolerate other interrupts. At least not in a processor like the PIC16F series which has only one level of interrupt. I'm ready to hear this refuted, but so far it hasn't been.

I have yet to be convinced that my plan of a single timer-based interrupt can ever fail, if other methods succeed. One operation that would worry me would be if a PIC processor tried to read the output of an incremental encoder on two pins of PortB, maybe using the interrupt-on-change feature. You'd want to get the data as rapidly as possible, and it might work better if the interrupts could come very fast. But then, if the encoder made a brief forward move and then immediately reversed, would that risk losing a second interrupt while the first was being serviced? Reading the input on a timed basis might actually work better at rejecting transients, with the usual caution that the reading rate has to be faster than the step rate.
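For illustration, here is roughly what that timed reading would look like: a minimal sketch, assuming a PIC16 under XC8, with the encoder on RB4/RB5 (the pin choice and the tick source are arbitrary assumptions):

[CODE]
// Timed-basis encoder read: sample both quadrature pins each timer
// tick and decode by state transition. Minimal sketch, PIC16 under
// XC8, encoder on RB4/RB5 (arbitrary); call EncoderPoll() from the
// periodic timer ISR.
#include <xc.h>

volatile int position;
static unsigned char prevState;

void EncoderPoll(void)
{
    unsigned char state = (PORTB >> 4) & 0x03;  // read A and B together

    // Gray-code transition table, indexed by (previous, current):
    // +1 = one step CW, -1 = one step CCW, 0 = no move or an illegal
    // two-bit jump (meaning a step was missed entirely).
    static const signed char step[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };
    position += step[(prevState << 2) | state];
    prevState = state;

    // The caveat: any edge shorter than one tick period is never seen
    // at all, and each missed edge is a permanent count error. The
    // tick must outrun the fastest possible step rate.
}
[/CODE]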

The problem using the EEPROM is indeed an awkward thing, and I can't see a perfect solution using any method. It would depend on what the most important criterion was--accomplishing all the writes in minimum time, or disrupting the main() level code the least. I'd want to know the longest tolerable stretch that the processor could spend giving its attention to the EEPROM rather than executing other code in main() before I'd care to answer this, and I'd also ask how lengthy the software loop in main() is, both fastest and slowest if it's variable. This might well be best tackled without using an interrupt--at least doing it that way saves the overhead of entering and leaving the interrupt.
 

joeyd999

Joined Jun 6, 2011
5,237
I would have the statement say "Experienced programmers disagree on whether there are situations where multiple random interrupts are desirable and necessary". I've seen some opinions strongly expressed, but no proof of anything.
Again, I don't have time to provide rigorous proof to you. And I have no desire to. You are free to believe what you want.

MrChips' scenario of an output or input that has to take place at a constant rate, without exception, is an example where you can't tolerate other interrupts. At least not in a processor like the PIC16F series which has only one level of interrupt. I'm ready to hear this refuted, but so far it hasn't been.
You ask for proof, yet you have not demonstrated such a scenario exists. Please present a realistic scenario, and I'll tell you how I would do it.

I have yet to be convinced that my plan of a single timer-based interrupt can ever fail, if other methods succeed.
Then I seriously question your level of experience.

One operation that would worry me would be if a PIC processor tried to read the output of an incremental encoder on two pins of PortB, maybe using the interrupt-on-change feature. You'd want to get the data as rapidly as possible, and it might work better if the interrupts could come very fast. But then, if the encoder made a brief forward move and then immediately reversed, would that risk losing a second interrupt while the first was being serviced? Reading the input on a timed basis might actually work better at rejecting transients, with the usual caution that the reading rate has to be faster than the step rate.
You mean, like this?

http://forum.allaboutcircuits.com/showthread.php?t=64318

Reading on the 'timed basis' would absolutely miss transients that would be captured by the above code. And every transition you miss represents a cumulative misalignment of your encoder. So your code would not 'work better at rejecting transients'.

The problem using the EEPROM is indeed an awkward thing, and I can't see a perfect solution using any method.
So, in other words, I present a likely and common scenario that would 'fail' with your method, and you dismiss it by saying there is "no perfect solution using any method?"

There is a perfect solution. Use the freakin' interrupt and be done with it!
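For anyone following along, a minimal sketch of what that looks like, assuming an enhanced-midrange PIC16 under XC8: the MSSP is assumed already configured as an SPI master, the chip-select pin is hypothetical, and the EEPROM command/address phase is omitted for brevity:

[CODE]
// Interrupt-driven SPI: each MSSP interrupt means the previous byte
// has shifted out, so the ISR loads the next one and returns. main()
// never blocks. Sketch for an enhanced-midrange PIC16 under XC8; the
// MSSP is assumed configured as SPI master, EE_CS is a hypothetical
// chip select, and the EEPROM command/address phase is omitted.
#include <xc.h>

#define EE_CS LATCbits.LATC2    // hypothetical chip-select pin

static const unsigned char * volatile txPtr;
static volatile unsigned int txLeft;

void SpiStartWrite(const unsigned char *buf, unsigned int len)
{
    txPtr   = buf + 1;
    txLeft  = len - 1;      // bytes remaining after the first
    EE_CS   = 0;            // select the EEPROM
    SSP1IF  = 0;
    SSP1IE  = 1;
    SSP1BUF = buf[0];       // first byte; the ISR sends the rest
}

void __interrupt() Isr(void)
{
    if (SSP1IF) {
        SSP1IF = 0;
        (void)SSP1BUF;          // previous byte done; discard the read
        if (txLeft) {
            SSP1BUF = *txPtr++; // a few microseconds of ISR work every
            txLeft--;           // 16 us, instead of 16 ms of blocking
        } else {
            EE_CS  = 1;         // deselect: transfer complete
            SSP1IE = 0;
        }
    }
}
[/CODE]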

It would depend on what the most important criterion was--accomplishing all the writes in minimum time, or disrupting the main() level code the least. I'd want to know the longest tolerable stretch that the processor could spend giving its attention to the EEPROM rather than executing other code in main() before I'd care to answer this, and I'd also ask how lengthy the software loop in main() is, both fastest and slowest if it's variable. This might well be best tackled without using an interrupt--at least doing it that way saves the overhead of entering and leaving the interrupt.
John P, please keep doing what you are doing...less competition for me!
 

joeyd999

Joined Jun 6, 2011
5,237
...Can PIC do the two tasks at the same time? Or I have to use two PICs...
Ok guys, knock it off now. Enough already.
Why? This discussion has been entirely on topic. Granted, richiechen has gotten more than he bargained for. But I think some of these issues are quite important.

There are a lot of new programmers here on this forum, and lots of bad programming examples. I recognize I will never convince John P. of the error of his ways, but I am not writing for him nor for his benefit. I am writing for those who have yet to choose a programming methodology in the hopes that they might pick up some good habits, or at least recognize there are some better ways.

I agree MrC, this is embarrassing.
For you, maybe. Like I said earlier, I enjoy these discussions/debates. You are free not to participate.
 

Eric007

Joined Aug 5, 2011
1,158
I can see the battle is still on...
Anyway I'm learning from the above discussion...

Guyz, let's get this straight! Why not discuss all your beliefs and methods on a real given problem... and each one of you will have to demonstrate his method as well as its performance!

I wish I was already *a big boy* to discuss as well but it ain't my level yet!

I can put $1000 here so errbody going to defend his theory on a real complex problem from the discussion, and the winner will get the cash... sounds good!?

Guess y'all know "Yo mama" show...the winner always leave with $1000...
Ima make my own show now called "Yo design!"

Regards
 

ErnieM

Joined Apr 24, 2011
8,377
MrChips' scenario of an output or input that has to take place at a constant rate, without exception, is an example where you can't tolerate other interrupts. At least not in a processor like the PIC16F series which has only one level of interrupt. I'm ready to hear this refuted, but so far it hasn't been.
No need to refute that, and no one has stated such either. We have stated the correct use of prioritized interrupts for such a situation, which requires a PIC capable of such.

As long as you can meet all your requirements on the least expensive PIC with the fewest features, then by all means go for it. After all, you may save pennies in the final BOM (which is something I fully support).

I have yet to be convinced that my plan of a single timer-based interrupt can ever fail, if other methods succeed. One operation that would worry me would be if a PIC processor tried to read the output of an incremental encoder on two pins of PortB, maybe using the interrupt-on-change feature. You'd want to get the data as rapidly as possible, and it might work better if the interrupts could come very fast. But then, if the encoder made a brief forward move and then immediately reversed, would that risk losing a second interrupt while the first was being serviced? Reading the input on a timed basis might actually work better at rejecting transients, with the usual caution that the reading rate has to be faster than the step rate.
No it would not. joeyd999 has posted code that has no loss of data from an event occurring while another is being processed. The reason behind that is somewhat subtle and depends on a full understanding of the interrupt-on-change feature of the PIC. Basically, by reading the PORTB register once and only once, the interrupt trigger is reset, so it will catch a glitch that occurs while the handler code is running.
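A minimal sketch of that read-once detail, assuming a classic PIC16 (RB4-RB7 interrupt-on-change) under XC8, with the encoder pins and decode table as arbitrary assumptions:

[CODE]
// The "read PORTB once and only once" point as code. Classic PIC16
// under XC8, encoder on RB4/RB5 with interrupt-on-change. Reading
// PORTB ends the mismatch condition; if the pins change again while
// the handler is still running, the mismatch re-asserts RBIF and the
// ISR simply runs again on exit, so the glitch is not lost.
#include <xc.h>

volatile int position;
static unsigned char prevState;

void __interrupt() Isr(void)
{
    if (RBIF) {
        unsigned char now = PORTB;  // the one and only read
        RBIF = 0;                   // clear only AFTER the read

        unsigned char state = (now >> 4) & 0x03;
        static const signed char step[16] = {   // same decode table
             0, +1, -1,  0, -1,  0,  0, +1,     // as the timed version
            +1,  0,  0, -1,  0, -1, +1,  0
        };
        position += step[(prevState << 2) | state];
        prevState = state;
    }
}
[/CODE]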

Reading the input on a timed basis may provide a false sense of security as it may only mask a tendency to miss glitches. It is still vulnerable to glitches that occur during the period a timed sample is being taken. And yes, it is possible to code against that, but first you need to recognize the problem exists.

Such a rotary decoder also makes another important point of ours: a timed sample needs to be RUNNING SO DAMN FAST ALL THE TIME IT CAN NEVER MISS A CHANGE. Even when nothing is happening. Always and forever.

If that decoder is, say, on the human interface, it normally isn't being used, but hey, a timed sample is still burning a significant number of precious CPU cycles just to see that nothing is happening. Cycles that might be better used to, say, process the last command the user gave it. The interrupt approach only uses cycles when that work is being called for.

Now the big secret here is that an interrupt "RUNNING SO DAMN FAST ALL THE TIME" is the reasoning behind the statement "seriously limiting what you can do in any one program."

Here endeth the lesson.
 

Georacer

Joined Nov 25, 2009
5,182
This discussion started as a very constructive one and could have evolved even further. But it seems that you insist on arguing with another man whom you will most likely never meet in your life about a subject that will never affect you.

It is really a pity.
 