# Peripheral conflict avoidance

#### ApacheKid

Joined Jan 12, 2015
1,089
Other than running an OS, how do MCU developers protect against inadvertent resource overlapping? For example if I call some library and have it use some ADC or TIMER on my board, but somewhere else, try to use that ADC (or the pin previously assigned to that ADC) or TIMER for some other purpose, there seems to be nothing to prevent that and it could lead to mind numbing debug sessions.

In an OS this is usually implemented (for this very reason) through ideas like "handles" that represent "ownership" of some resources, so I suppose without an OS on the MCU there's not a lot of protection.

It seems that code I write for an MCU has unrestrained freedom to manipulate any aspect of the hardware it likes, with no policing of any kind. Yes, that means there are no costs slowing me down, but there's also no protection against human error or subtle bugs, possibly bugs that cannot easily be found during normal tests.

Of course I'm assuming here that an MCU OS does typically provide such features, I might be wrong of course...

Thoughts?


#### nsaspook

Joined Aug 27, 2009
10,698
How do MCU developers protect against inadvertent resource overlapping?

Easy, don't do it. Have the source to know what every single line of code does.
I usually have a comment in a header/source file somewhere about resource usage.

C:
/*
 * Standard program units:
 * Voltage in (uint32_t/uint16_t) millivolts
 * Current in (int32_t) hundredths of amps
 * Power in (uint32_t) watts
 *
 * R: structure, real values from measurements
 * C: structure, calculated values from measurements or programs
 * B: structure
 * V: structure, volatile variables modified in the ISR in a possibly non-atomic fashion
 *
 * USART1       Data Link channel, 38400
 * USART2       host comm port, 38400
 * Timer0       1 second clock
 * Timer1       not used
 * Timer2       not used
 * Timer3       work thread, background I/O clock ~20 Hz
 * Timer4       PWM period clock
 *
 * Analog channels 0..8 are active:
 * adc0         systemvoltage   PIC controller 5 VDC supply voltage
 * adc1         motorvoltage    24 VDC PS monitor from relay
 * adc3         VREF from 5 VDC reference chip REF02AP
 * adc5         rawp1 X pot RF0
 * adc6         rawp2 Y pot RF1
 * adc7         rawp3 Z pot RF2
 * adc_cal[11-14]  current-sensor zero offsets stored in EEPROM: 11=x, 12=y, 13=z, 14=future
 * cal table with checksum as last data item in adc_cal[]
 *
 * PORTB        HID Qencoder and switch inputs
 * PORTC        HID LEDs
 * PORTD        configuration switch input
 * PORTE        motor control relays
 * PORTG        alarm and voice outputs
 * PORTH0       run-flasher LED onboard, 4x20 LCD status panel
 * PORTJ        alarm and diag LEDs
 */
There are some controllers with configuration lock sequences that only work once to prevent reconfiguration of hardware by intention or random code failure but in general, there is no spoon.
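Where the silicon doesn't provide such a lock sequence, the same write-once behaviour can be mimicked in software. A minimal sketch, with invented names (real parts latch this in hardware, e.g. via lock or fuse bits):

```c
#include <stdbool.h>
#include <stdint.h>

/* Software mimic of a write-once configuration register: the first
 * call latches the value, every later call is rejected. Names are
 * illustrative only, not from any vendor header. */
static uint32_t pin_mux_config;
static bool     pin_mux_locked = false;

/* Returns true only on the first successful configuration. */
bool configure_pin_mux_once(uint32_t value)
{
    if (pin_mux_locked)
        return false;   /* already latched, by intention or by runaway code */
    pin_mux_config = value;
    pin_mux_locked = true;
    return true;
}

uint32_t read_pin_mux(void)
{
    return pin_mux_config;
}
```

Any later attempt to reconfigure, deliberate or from a random code failure, fails loudly instead of silently reprogramming the hardware.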


#### geekoftheweek

Joined Oct 6, 2013
916
I usually define names that map to registers and use them instead of the normal register names. Say, instead of T2CON for Timer 2 I'll call it light_timer_con and use that instead.
I also do most everything in asm still so I tend to use macros as much as I can to separate hardware interactions from the rest of the program.

#### ApacheKid

Joined Jan 12, 2015
1,089
How do MCU developers protect against inadvertent resource overlapping?

Easy, don't do it.
Not doing "it" is not easy, though; otherwise we'd not need tools like debuggers and so on. It's actually hard, and that's part of the reason we get paid for it.

This is why C compilers don't just take the source and generate code; they also painstakingly check for mistakes and contradictions in our code.

Anyway, consider the lowly GPIO pin: if I could request use of it rather than just use it, we could avoid such scenarios more easily.

For example:

Code:
bool flag = LockResource(GPIOA, GPIO_PIN_0);

if (flag == false) {
    // warn the user - the pin is somehow already in use...
}

...

// Relinquish the pin now:

flag = UnlockResource(GPIOA, GPIO_PIN_0);

if (flag == false) {
    // warn the user - we didn't hold the lock in the first place!
}
You get the idea anyway; this is a tad simplistic, but it strikes me as something that could reasonably be done at very low runtime cost.

I see that the ARM CPU family (unsurprisingly) includes the instructions LDREX, STREX and CLREX, and these are available as intrinsics in cmsis_gcc.h, on my setup anyway. They facilitate atomic read/modify/write operations, more or less.

One can use those operations to test for a lock and/or obtain a not-yet-held lock, so in principle this is not hard to write.
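The claim/release API sketched above maps neatly onto a bitmask guarded by an atomic read-modify-write. A minimal portable sketch using C11 atomics in place of the raw LDREX/STREX intrinsics; the port bitmap and function signatures here are invented for illustration, not a vendor API:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* One ownership bitmap per GPIO port: bit n set means pin n is claimed.
 * Hypothetical name - this is not a hardware register. */
static atomic_uint_fast32_t gpioa_claims;

/* Try to claim a pin; false means someone already owns it.
 * atomic_fetch_or gives the read-modify-write guarantee that an
 * LDREX/STREX retry loop provides on ARM. */
bool LockResource(atomic_uint_fast32_t *port, unsigned pin)
{
    uint_fast32_t bit  = (uint_fast32_t)1 << pin;
    uint_fast32_t prev = atomic_fetch_or(port, bit);
    return (prev & bit) == 0;   /* true only if we were the ones to set it */
}

/* Release a pin; false means we never held it in the first place. */
bool UnlockResource(atomic_uint_fast32_t *port, unsigned pin)
{
    uint_fast32_t bit  = (uint_fast32_t)1 << pin;
    uint_fast32_t prev = atomic_fetch_and(port, ~bit);
    return (prev & bit) != 0;   /* true only if it was actually set */
}
```

On a Cortex-M the compiler lowers these atomics to the LDREX/STREX sequences mentioned above; on a single-core part with no RTOS, plain interrupt masking around the test-and-set would do as well.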

Have the source to know what every single line of code does.
I usually have a comment in a header/source file somewhere about resource usage.

(resource-usage header comment quoted in full above)
There are some controllers with configuration lock sequences that only work once to prevent reconfiguration of hardware by intention or random code failure but in general, there is no spoon.
Docs are good. I now have a policy of keeping a <project>.doc.h file as a routine header; that's my "go to" place for that. But as we all know, docs are easily sidelined, easy to overlook, and quick to fall out of date.

Anyway, this is just something that came up as I was reading about the ARM interrupt design.

#### nsaspook

Joined Aug 27, 2009
10,698
It's easier for those that think in hardware and translate that to code. Attention to detail is rule #1 in hardware engineering.

Things like GPIO locks are IMO a waste of resources; they just create source-code clutter and smells when you have no protection or privilege levels on the single-CPU, single-thread processor you're writing code for.

#### BobaMosfet

Joined Jul 1, 2009
2,053
Other than running an OS, how do MCU developers protect against inadvertent resource overlapping? For example if I call some library and have it use some ADC or TIMER on my board, but somewhere else, try to use that ADC (or the pin previously assigned to that ADC) or TIMER for some other purpose, there seems to be nothing to prevent that and it could lead to mind numbing debug sessions.

In an OS this is usually implemented (for this very reason) through ideas like "handles" that represent "ownership" of some resources, so I suppose without an OS on the MCU there's not a lot of protection.

It seems that code I write for an MCU has unrestrained freedom to manipulate any aspect of the hardware it likes, no policing of any kind, yes that means there are no costs slowing me down but also no protection against human error or subtle bugs, possibly bugs that can be easily be found during normal tests.

Of course I'm assuming here that an MCU OS does typically provide such features, I might be wrong of course...

Thoughts?
You keep a linked list of what is using any given device. You can quickly iterate through the list at any time to see if something else is already tied to that resource. You can also use a flag that is high when the resource is in use, and low when it isn't. If the resource is being used by one process another can't use it. This is a form of lazy arbitration.

A 'Handle' isn't a means of monitoring something. A 'Handle' is a memory construct: in its correct usage, it is a double dereference to a memory block, which allows a memory manager to move the block without invalidating access to it via the handle.
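The lazy-arbitration idea above, a flag per resource plus a record of who holds it, can be sketched with a small claim table. The device IDs and module names here are made up for illustration; a real linked-list version would additionally let one device carry several users:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative device table for lazy arbitration: each entry records
 * whether the peripheral is in use and which module took it. */
enum { DEV_ADC0, DEV_TIMER2, DEV_UART1, DEV_COUNT };

struct dev_claim {
    bool        in_use;   /* the "flag that is high when in use" */
    const char *owner;    /* module name, for debug dumps */
};

static struct dev_claim claims[DEV_COUNT];

/* Returns true and records the owner if the device was free. */
bool claim_device(int dev, const char *owner)
{
    if (claims[dev].in_use)
        return false;             /* someone already owns it */
    claims[dev].in_use = true;
    claims[dev].owner  = owner;
    return true;
}

void release_device(int dev)
{
    claims[dev].in_use = false;
    claims[dev].owner  = NULL;
}

/* Who owns a device, or NULL if it is free - handy in a fault handler
 * when iterating the table to see what is tied to what. */
const char *device_owner(int dev)
{
    return claims[dev].in_use ? claims[dev].owner : NULL;
}
```

A debug build can assert on a failed claim_device() and name the existing owner, turning a "mind numbing debug session" into a one-line diagnostic.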


#### ApacheKid

Joined Jan 12, 2015
1,089
You keep a linked list of what is using any given device. You can quickly iterate through the list at any time to see if something else is already tied to that resource. You can also use a flag that is high when the resource is in use, and low when it isn't. If the resource is being used by one process another can't use it. This is a form of lazy arbitration.

A 'Handle' isn't a means of monitoring something. A 'Handle' is a memory construct: in its correct usage, it is a double dereference to a memory block, which allows a memory manager to move the block without invalidating access to it via the handle.
I used the term "handle" in the context of Windows OS internals; I didn't make that clear. In the Windows kernel there is an object manager, and access to and manipulation of kernel objects is done via handles. That's what I was thinking of when I said that; see here.

#### ApacheKid

Joined Jan 12, 2015
1,089
It's easier for those that think in hardware and translate that to code. Attention to detail is rule #1 in hardware engineering.

Things like GPIO locks are IMO a waste of resources and just create source code clutter and smells when you don't have protection or privilege on the single cpu/thread processor you're writing code for.
Perhaps, but this is all about trade-offs; engineering (hardware or software) is about compromise if it's about anything. If one is hugely concerned about resources, one might use assembler and forget about a compiled language altogether; one can argue that the high-level language is "clutter" too.

It's also, I'd argue, hard to prove that something is a waste of resources. The only way to establish that is to show that the costs outweigh the benefits, and unless one actually compares the two approaches it's just speculation.

If some system is implemented two different ways, and one uses 5% more memory and 2% more CPU than the other, but was developed at 50% of the cost, had 50% fewer bugs, and took 20% less time to test, then tell me: which of these shows waste?

I doubt one could write even a simple OS for an MCU without adopting some kind of resource manager. If any code, system or user, can access or update things it should not, you'll never get the OS off the ground; it would crash and lock up all the time and we'd have no idea why. Some event ten seconds ago might create an invalid state that leads to a problem now, and that the cause was ten seconds ago will be rather hard to determine without some kind of management system.

These are real problems. They are among the reasons that most avionics software, including the space shuttle's systems, was written in Ada and not C, and the reason that Nvidia recently made this announcement:

Software defines what moves us. From mobility apps and real-time maps to increasingly automated vehicles, lines of code have become fundamental to the world of transportation.

As this software becomes more complex, there’s a greater chance for human error, opening up more potential for security and safety risks.

To ensure that this vital software is secure, NVIDIA is working with AdaCore, a development and verification tool provider for safety and security critical software. By implementing the Ada and SPARK programming languages into certain firmware elements, we can reduce the potential for human error.

Both languages were designed with reliability, robustness and security in mind. Using them for programming can bring more efficiency to the process of verifying that code is free of bugs and vulnerabilities.

For industries that have strong safety, reliability and security standards, like aerospace and automotive, these benefits can translate to nearly 40 percent cost and time savings from enhanced software verification, according to a study by consultancy VDC Research.
and

That firmware will not be written in C though. It is being done in SPARK, a provable subset of the Ada programming language. Ada 2012 added contracts to that language and SPARK takes advantage of this feature. It allows programmers to specify details like the characteristics of procedure inputs and outputs. The compiler can then enforce these rules for calls to the procedure as well as how the results will be used.
Ada and RISC-V Secure Nvidia’s Future.

I think engineers in these areas should be open to change.

Consider further:

Rohrer continues, “We wanted to emphasize provability over testing as a preferred verification method.” Fortunately, it is possible to prove mathematically that your code behaves in precise accordance with its specification. This process is known as formal verification, and it is the fundamental paradigm shift that made NVIDIA investigate SPARK, the industry-ready solution for software formal verification.

Now, I regard Nvidia as a firm that understands hardware, embedded, and performance; there's a lot going on these days.


#### BobaMosfet

Joined Jul 1, 2009
2,053
I used the term "handle" in the context of Windows OS internals, I didn't make that clear. In the Windows kernel we have a kernel object manager and access and manipulation of these objects is done via handles, this is what I was thinking about when I said that, see here.
Yes, that's why I described what a 'handle' actually is. Windows O/S (which I also write software for) isn't so much an O/S as an application and it absolutely in no uncertain terms has no ability or understanding of how to manage memory and what memory is. As with all things Microsoft, the only thing it got right was DOS- and even that, Gates basically stole.

Microsoft makes device management a nightmare. Real device management is simple. There is no peripheral you can come up with that requires more than 5 overall commands to control it.

#### BobTPH

Joined Jun 5, 2013
6,080
Yes, that's why I described what a 'handle' actually is. Windows O/S (which I also write software for) isn't so much an O/S as an application and it absolutely in no uncertain terms has no ability or understanding of how to manage memory and what memory is. As with all things Microsoft, the only thing it got right was DOS- and even that, Gates basically stole.

Microsoft makes device management a nightmare. Real device management is simple. There is no peripheral you can come up with that requires more than 5 overall commands to control it.
That is nonsense. Windows is a full blown OS that manages memory. Try writing a Windows app that references memory not allocated to it.

#### BobaMosfet

Joined Jul 1, 2009
2,053
That is nonsense. Windows is a full blown OS that manages memory. Try writing a Windows app that references memory not allocated to it.
Really? Do a search online for all the people getting Out of Memory warnings from windows when they still have Gigabytes of real RAM left unused. Happens all the time.


#### ApacheKid

Joined Jan 12, 2015
1,089
Yes, that's why I described what a 'handle' actually is. Windows O/S (which I also write software for) isn't so much an O/S as an application and it absolutely in no uncertain terms has no ability or understanding of how to manage memory and what memory is. As with all things Microsoft, the only thing it got right was DOS- and even that, Gates basically stole.
That's an unusual view. NT (which is, to all intents and purposes, what Windows is these days) was designed by an experienced minicomputer OS team headed by Dave Cutler, the respected architect of DEC's VMS. You'd also have to elaborate on what it is about the memory manager that you find problematic. I've done extensive work with NT and shared/mapped memory myself, which entailed gaining a detailed knowledge of the kernel's memory manager, and I saw nothing but first-rate engineering design in there.

Microsoft makes device management a nightmare. Real device management is simple. There is no peripheral you can come up with that requires more than 5 overall commands to control it.
I've no direct personal experience of working with Windows at the device/hardware level so I'll have to refrain from comment.

#### ApacheKid

Joined Jan 12, 2015
1,089
Really? Do a search online for all the people getting Out of Memory warnings from windows when they still have Gigabytes of real RAM left unused. Happens all the time.
You mean 16 bit Windows 3.1 I presume?

#### BobaMosfet

Joined Jul 1, 2009
2,053
You mean 16 bit Windows 3.1 I presume?
No, 64-bit: Windows 7 on up to the current Windows 10. This is a known problem.