Anyone here using C to compose 'object-based' Microcontroller programmes?

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
I'll let you in on a little secret. Everyone uses 'the OBject-based method' as you have described it once they become familiar with the C language and intuitively develop a mental picture of the program that needs to be created. It might not look that way at first, because functionality comes first, but when you refactor for style points it naturally flows into structured methods.

I'm not fond of using the C preprocessor's #define for function-like macros because it eventually becomes an additional hidden mini-language ('X really means Y') instead of a C helper.
https://gcc.gnu.org/onlinedocs/cpp/Macro-Arguments.html

or of using #define for computed (as opposed to purely symbolic) 'magic numbers'.
Using the const keyword for fixed values (there are still times when #define must be used) adds C scoping to programs and results in better compile-time error checking with static type safety.
C:
#ifndef MAGIC_H
#define MAGIC_H

#include <stdint.h>    /* uint8_t, uint32_t */

#ifdef __cplusplus
extern "C" {
#endif

static const double rps = 0.0174532925; // degrees per second -> radians per second (no 'f' suffix, so the double keeps full precision)
static const uint8_t CAL_DIS_MS = 1; // calibration data element screen display time
static const char *build_date = __DATE__, *build_time = __TIME__;
static const char imu_missing[] = " MISSING \r\n";
/*
* NVRAM storage page variable
* const volatile here means it is stored in FLASH for read-only program data access in normal program flow, but
* it is also volatile because the FLASH write controller can modify the contents outside normal program flow,
* so the compiler must NOT optimize out reads from the actual memory locations via read-only data caching
* optimizations
*/
#include "nvram_page.h"
const volatile uint32_t myflash[4096] __attribute__((section("myflash"), space(prog), address(NVM_STARTVADDRESS)));

#ifdef    __cplusplus
}
#endif

#endif    /* MAGIC_H */
I thought structured programmes could take various forms. And how about the HOOD elements (as in ESA-HOOD), which require OBject-named source files, a statement of child OBjects, clear English descriptions of their purpose(s) and relationships, and function prototypes in plain sight, separated into required and provided operations rather than out of sight in header files (and, I suspect, a preference for declaring/defining 'static' variables outside functions)? I can't read C code easily, but I don't think I see those explicit features, which IMHO play such a major part in quickly yielding stable, clearly related OBjects and stimulating specific and general insights.
 

nsaspook

Joined Aug 27, 2009
7,854
My point is that OBjects and clear English descriptions of their purpose(s) and relationships don't guarantee good code that's bug-free in design or implementation. Most of that information can be generated automatically today by running auto-documenting (Doxygen), refactoring and call-graphing tools on horribly written, bug-filled code.
 

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
I was fascinated to hear that the Curiosity Mars Rover's code is basically C, uses Wind River's VxWorks RTOS, was largely generated automatically and had a target blue screen interval of 15 years. Can you give us an idea of how that fits into the picture? Doesn't sound like a scene where bugs are welcome.
 
Last edited:

nsaspook

Joined Aug 27, 2009
7,854
I was fascinated to hear that the Curiosity Mars Rover's code is basically C, uses Wind River's VxWorks RTOS, was largely generated automatically and had a target blue screen interval of 15 years. Can you give us an idea of how that fits into the picture? Doesn't sound like a scene where bugs are welcome.
I've read a little about it. Millions of lines of C code in a project like that is about the same as saying millions of lines of assembler code were generated by compiling a C word-processing program.
C was likely used as a high-level assembly language because it was the common abstract machine for the several types of designed and tested software platforms that needed to be integrated. There are plenty of commercial C static source-code-analysis tools for rule-checking. Several types of model-based engineering programs like Simulink generate C code that uses code styles designed for correctness (lots of simplistic static structures and functions) at the expense of verbosity. Specialized communication/telemetry hardware devices can also have source-generation back-ends that auto-generate C code for a given protocol.

https://en.wikipedia.org/wiki/Simulink
https://www.mathworks.com/company/newsletters/articles/the-joy-of-generating-c-code-from-matlab.html


Part of the problem with systems like that is that correctness over long-duration uptimes goes beyond logical software correctness into the domain of hardware correctness and concurrency-related issues. The controller I'm currently using has several hardware error-correcting capabilities: ECC flash, background CRC, automatic retries for on-the-fly flash writes, hardware traps for spurious interrupts, and so on.
A space rated CPU and processing system takes that up several notches.
 
Last edited:

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
My point is that OBjects and clear English descriptions of their purpose(s) and relationships don't guarantee good code that's bug-free in design or implementation. Most of that information can be generated automatically today by running auto-documenting (Doxygen), refactoring and call-graphing tools on horribly written, bug-filled code.
I'd only claim that OBjects, clear English descriptions of their purpose(s) and relationships (and highly visible messaging) can greatly help bring order to and ease the explaining of core tasks.

While writing that another perhaps naive thought formed: the method encourages using function prototypes as problem-solving elements. IMHO, a function definition is the worst place to work on a problem but I suspect it does happen.
 
Last edited:

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
I've read a little about it. Millions of lines of C code in a project like that is about the same as saying millions of lines of assembler code were generated by compiling a C word-processing program.
C was likely used as a high-level assembly language because it was the common abstract machine for the several types of designed and tested software platforms that needed to be integrated. There are plenty of commercial C static source-code-analysis tools for rule-checking. Several types of model-based engineering programs like Simulink generate C code that uses code styles designed for correctness (lots of simplistic static structures and functions) at the expense of verbosity. Specialized communication/telemetry hardware devices can also have source-generation back-ends that auto-generate C code for a given protocol.

https://en.wikipedia.org/wiki/Simulink
https://www.mathworks.com/company/newsletters/articles/the-joy-of-generating-c-code-from-matlab.html


Part of the problem with systems like that is that correctness over long-duration uptimes goes beyond logical software correctness into the domain of hardware correctness and concurrency-related issues. The controller I'm currently using has several hardware error-correcting capabilities: ECC flash, background CRC, automatic retries for on-the-fly flash writes, hardware traps for spurious interrupts, and so on.
A space rated CPU and processing system takes that up several notches.
Thank you again, nsaspook, for yet again revealing new horizons. To me, 'correctness, simplistic static structures and functions' sounds very attractive and I'm OK with verbosity; would that be in the identifiers or something more? But it'd be out of the C frying pan into the Simulink/MATLAB fire. I intend to subject my code to Cppcheck's C syntax checking but wonder whether you could suggest anything free and easily learnt which scrutinises one's C code for 'correctness' (and suggests simplistic alternatives)?
 
Last edited:

nsaspook

Joined Aug 27, 2009
7,854
I'd only claim that OBjects, clear English descriptions of their purpose(s) and relationships (and highly visible messaging) can greatly help bring order to and ease the explaining of core tasks.

While writing that another perhaps naive thought formed: the method encourages using function prototypes as problem-solving elements. IMHO, a function definition is the worst place to work on a problem but I suspect it does happen.
That's correct. It's clear the organisation of data flow dictates procedural design even in the simple case of stdin -> stdout. There are times when you just know the masterpiece is in the block of stone so you start using intuition instead of a formal design. The function prototypes are just major features you know must be there. The art is fleshing out the details.
 

nsaspook

Joined Aug 27, 2009
7,854
Thank you again, nsaspook, for yet again revealing new horizons. To me, 'correctness, simplistic static structures and functions' sounds very attractive and I'm OK with verbosity; would that be in the identifiers or something more? But it'd be out of the C frying pan into the Simulink/MATLAB fire. I intend to subject my code to Cppcheck's C syntax checking but wonder whether you could suggest anything free and easily learnt which scrutinises one's C code for 'correctness' (and suggests simplistic alternatives)?
Most non-obvious critical bugs will never be found using static checking. Usually the problem is in the wetware.
 

bogosort

Joined Sep 24, 2011
566
I intend subjecting my code to Cppcheck's C syntax checker but wonder whether you could suggest anything free and easily learnt which scrutinises one's C code for 'correctness' (and suggests simplistic alternatives)?
As nsaspook said, a static syntax checker isn't useless, but it won't come remotely close to checking for correctness. That's a much harder problem: it's at least as hard as the halting problem, which has been proved undecidable (a program can only be correct if it halts when its specification says it should). So if we can't even write a general algorithm that determines whether an arbitrary program will halt, there's no hope of writing one that determines the correctness of an arbitrary program.

In its truest form, checking for correctness would require knowing the programmer's intent, with corresponding mind-reading capability. Short of that, the checker would have to parse the code's semantics in addition to its syntactic structure. And though the context-free grammars of most programming languages are designed to be relatively easy to transliterate (e.g., into equivalent machine code), semantic translation is a big jump in computational complexity.

That's fine, you say, our computers are essentially Turing machines and can decide context-sensitive grammars (most compilers already do). But let's make a counting argument.

Let \(f:\mathbb{Z} \to \{0, 1\}\) be the fabled decision function that maps arbitrary computer programs (encoded as integers in ℤ) to a correct (1) or not-correct (0) state. Let \(D\) be the set of all such candidate decision functions, which has cardinality \[ \begin{align} |D| &= |\{0,1\}|^{|\mathbb{Z}|} \\ &= 2^{\aleph_0} \\ &= |\mathbb{R}| \end{align}\] In other words, the set of possible correctness deciders has the uncountable cardinality of the continuum, while the set of computable functions is necessarily countable. So almost every function in \(D\) is uncomputable, and for correctness in particular the counting intuition is confirmed by Rice's theorem, which says no non-trivial semantic property of programs is decidable: \(f\) really is one of the uncomputable ones, and there can be no correctness-deciding algorithm.

Of course, even though we can't in general prove the correctness of any program, we can increase our confidence about the correctness of a particular program. There are many practical methods (e.g. unit testing) with varying levels of code coverage. None of them are perfect, and they all require development effort, but it's usually better than hoping that a complex program behaves correctly under arbitrary inputs.
 

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
That's correct. It's clear the organisation of data flow dictates procedural design even in the simple case of stdin -> stdout. There are times when you just know the masterpiece is in the block of stone so you start using intuition instead of a formal design. The function prototypes are just major features you know must be there. The art is fleshing out the details.
At some point the C standard decreed that function declarations 'must be there' (C99 dropped implicit function declarations), but keeping the prototypes in open view, as ESA-HOOD does, rather than out of sight in header files can help avoid messaging tangles and invites further development of message control.
 

BobaMosfet

Joined Jul 1, 2009
1,290
At some point the C standard decreed that function declarations 'must be there' (C99 dropped implicit function declarations), but keeping the prototypes in open view, as ESA-HOOD does, rather than out of sight in header files can help avoid messaging tangles and invites further development of message control.
No.. no... it didn't just 'decree' for kicks. Learn something about your symbol table and linker. Your statement above is painfully short on actual understanding, and also shows a complete misunderstanding of how header files are supposed to be used (not your fault, 90% of developers today don't know how to use them properly).

RANT WARNING: Just a General Rant to no one in particular-- No... don't get me started. If I had $1 for every idiot who thought they understand C but don't understand the proper use of 'struct' I'd be rich.
 

Thread Starter

Hugh Riddle

Joined Jun 12, 2020
72
No.. no... it didn't just 'decree' for kicks. Learn something about your symbol table and linker. Your statement above is painfully short on actual understanding, and also shows a complete misunderstanding of how header files are supposed to be used (not your fault, 90% of developers today don't know how to use them properly).

RANT WARNING: Just a General Rant to no one in particular-- No... don't get me started. If I had $1 for every idiot who thought they understand C but don't understand the proper use of 'struct' I'd be rich.
You're right to spot that I'm still uncertain about how header files are normally used and that I don't know much about C, despite having seen examples of symbol tables, read what I could find about how compilers effect linkage, and gathered that having to state prototypes was primarily to plug a gaping hole in C. And I certainly find keeping function prototypes in view helpful. If 90% of C developers' understanding is faulty in that respect, I'd be interested to know whether you consider there are comparable levels of misunderstanding around other languages. Despite reading quite a lot, I've often found it very difficult to arrive at certainty with C usage.
 
Last edited: