C programming loop question main() vs. while(1)


Joined Jun 26, 2012
This is precisely the case. I have had occasion to dissect several C startup files for about a dozen embedded processor/compiler combinations. IMHO the ones that do the best job actually force a hardware RESET by hook or by crook when there is a return from main. In most cases it is essential to draw as much attention to this behavior as possible. That said I can imagine a mission critical application in which a softer recovery mechanism might be employed. It is ALWAYS to the benefit of the embedded engineer to understand precisely what the canned compiler code is doing to(for) you. Only in that fashion can you inoculate yourself and your code.
I couldn't agree more. In the years-old post above, I didn't observe a hard RESET in any of the compilers I described, but that would be the best course for the reasons PB noted. Besides clubbing the programmer for being sloppy, it would at least ensure that the system started from the same state. In a lot of the examples I cited, though, the processor was just the CPU, and jumping to 0000h restarted everything, since the first thing the code did was access the external peripherals and configure them. That is not the case with microcontrollers that have lots of on-board peripherals. Jumping to zero is just that: it does not re-initialize the peripherals, or the stack. Even a 'reset' instruction or hanging the 'dog is subtly different from a power-on reset, on PICs at least, and there are ample reasons why you wouldn't want to leave such things to the whims of the compiler.

Here's a real-world example. A while ago, I wrote a PIC-based CAN controller that had to talk to some old industrial CAN I/O cards. During testing, I got complaints that after days of testing, cards would fall out of the system and be inert until a hard power-on. After much head-scratching (and a few custom scripts) it became apparent that sending the RESET command sequence to the cards 41 times caused them to crash and lock up. We didn't have access to the source, but it was pretty clear that someone just jumped to 0 at stack level 3, so each soft reset leaked 3 of the 128 stack levels: 41 resets x 3 levels + normal overhead = BOOM! The programmer clearly relied on the power-up clearing of the stack pointer without realizing that jumping to 0 did not do that. Again, BOOM!

I didn't write the card's code, but figuring all of that out cost the project a lot of time and the client a lot of money. So yeah... Use the danged while(1). And while you're at it, explicitly initialize everything - don't rely on power-on defaults. The reasons should be clear.


Joined Mar 31, 2012
It is ALWAYS to the benefit of the embedded engineer to understand precisely what the canned compiler code is doing to(for) you. Only in that fashion can you inoculate yourself and your code.
And, to segue a bit, this is why it is important for all forms of engineers (as well as most professions, really) to understand as much of the under-the-hood behavior of their field as possible. Sure, few people, if any, can really take that "all the way," but the better you understand how things in your bailiwick work, the better off you are. Sadly, today, we are moving faster and faster away from that. It wasn't all that long ago that an embedded programmer had no choice but to understand all of that stuff, simply because they had to write all of it themselves. Today people simply open a box, install some software on their PC, program in a high-level language, and let the tools do all the low-level thinking for them. Is that progress? In a lot of ways the answer is clearly "Yes," but not in every way, including some pretty important ways that will come home to roost (and already are in the broader context).


Joined Feb 24, 2006
That is hardly the best long-term solution. You should now be able to arrange things so that you can enable the watchdog again and kick it periodically in the non-interrupt section of the code. Good design practice says you should use it if you have it.