The Imperium Programming Language - IPL

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
Just for fun, I'm interested in what kind of language capabilities experienced MCU developers would like to see in an ideal programming language for these devices.

Just dump your thoughts into two sections

Likes

Dislikes


Under likes, mention the things you found helpful in any language; under dislikes, do the same for things you found unhelpful, with a view towards how those features apply to MCU development.

For example, something I like about C++ is namespaces, and something I like about Pascal is nested procedures; something I dislike about Pascal is the lack of the static concept (though this might not be a bad thing!), and something I dislike about C is the lack of strings as a native language data type.

So let's hear it; I'm seriously interested in this subject...
 
Last edited:

xox

Joined Sep 8, 2017
838
Wait, aren't there like dozens of machine-code formats to support? Also, strictly speaking a compiled language is rather limited compared to a (dynamic) "scripted" language. If written in pure standard C, something of the latter type could then be compiled for just about any platform, including MCUs. Just saying...

One thing I would love to see is a cleaner syntax. Most programming languages are just over the top with weird constructs. Keep it simple, I say.

Code:
print "Hello World!"
Another thing is deterministic constructors/destructors, one of the things that makes C++ so powerful. Almost 100% of memory allocations can be managed directly without the help of a "garbage collector" (itself only needed for certain special situations). Not to mention the applications for things like managing file handles (i.e. automatic closing), defining recursive objects, and so on.

Exceptions are hard to implement correctly, but done right they make the language much more flexible.

Built-in multithreading/multitasking would be nice, but that may be asking a lot of a program running on a micro-controller.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
Wait, aren't there like dozens of machine-code formats to support? Also, strictly speaking a compiled language is rather limited compared to a (dynamic) "scripted" language. If written in pure standard C, something of the latter type could then be compiled for just about any platform, including MCUs. Just saying...

One thing I would love to see is a cleaner syntax. Most programming languages are just over the top with weird constructs. Keep it simple, I say.

Code:
print "Hello World!"
Yes, I have some opinions myself on this, on the grammar as it's known in programming language theory.

Another thing is deterministic constructors/destructors, one of the things that makes C++ so powerful. Almost 100% of memory allocations can be managed directly without the help of a "garbage collector" (itself only needed for certain special situations). Not to mention the applications for things like managing file handles (i.e. automatic closing), defining recursive objects, and so on.

Exceptions are hard to implement correctly, but done right they make the language much more flexible.

Built-in multithreading/multitasking would be nice, but that may be asking a lot of a program running on a micro-controller.
Some past languages have incorporated features like multiple execution threads and even IO; the trend over the past thirty years or so has been to decouple the language from IO, but it could be something to consider. Several people here have recently talked about memory allocation, and I've read some articles that explain the risks of a generic heap allocate/free model on MCUs, so I've been thinking about this.

For example, a heap (I've developed these before, so I have some deep insight into this) can be something created at runtime by code and then used for allocating/freeing with some strategy. For example, there was a system that needed to allocate/free potentially millions of small structures, all the same size. We addressed this by creating a "memory pool": a special, simple heap that can only allocate blocks of X bytes or less and that uses a bitmap to record busy/free blocks. This could be an API or a native language feature too.
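Something along these lines, as a minimal sketch (the block count, block size and function names are all invented for illustration, not production code):

Code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define POOL_BLOCKS     32                    /* 32 blocks, so one 32-bit bitmap */
#define POOL_BLOCK_SIZE 16                    /* every allocation is exactly 16 bytes */

struct pool {
    uint32_t bitmap;                          /* bit n set => block n is busy */
    uint8_t  storage[POOL_BLOCKS * POOL_BLOCK_SIZE];
};

static void pool_init(struct pool *p)
{
    p->bitmap = 0;
    memset(p->storage, 0, sizeof p->storage);
}

static void *pool_alloc(struct pool *p)       /* linear scan, no fragmentation */
{
    for (int i = 0; i < POOL_BLOCKS; ++i) {
        if (!(p->bitmap & (1u << i))) {
            p->bitmap |= (1u << i);           /* mark busy */
            return &p->storage[i * POOL_BLOCK_SIZE];
        }
    }
    return NULL;                              /* pool exhausted */
}

static void pool_free(struct pool *p, void *blk)
{
    int i = (int)(((uint8_t *)blk - p->storage) / POOL_BLOCK_SIZE);
    p->bitmap &= ~(1u << i);                  /* mark free again */
}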

OK, these are interesting points. I hadn't thought about interpreted languages, so it's good you raised that.

I'd been primarily thinking of a compiled language as that's where I have experience, but a hardware oriented interpreted language could be interesting to think about.

Programming language designs of all sorts have been developed, and the translation to target CPU instructions is well covered in the literature. For example, some designs target an abstract CPU (such as Microsoft's IL or Java bytecode) and then define code generators for each target machine, converting the generic abstract instructions into CPU-specific code; some of these are even simply table driven.
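To make the table-driven part concrete, here's a toy sketch: an enum of abstract opcodes plus one mnemonic table per target CPU, so retargeting the back end is mostly a matter of swapping the table (the opcodes and mnemonics are invented for illustration):

Code:
#include <stdio.h>

enum abstract_op { OP_LOAD, OP_ADD, OP_STORE, N_OPS };

/* One table like this per target CPU; retargeting means swapping the table. */
static const char *const target_mnemonic[N_OPS] = {
    [OP_LOAD]  = "ldr",
    [OP_ADD]   = "add",
    [OP_STORE] = "str",
};

int main(void)
{
    /* "Abstract instructions" as the front end might emit them. */
    enum abstract_op program[] = { OP_LOAD, OP_ADD, OP_STORE };

    for (size_t i = 0; i < sizeof program / sizeof program[0]; ++i)
        printf("%s ...\n", target_mnemonic[program[i]]);  /* operands omitted */

    return 0;
}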
 
Last edited:

KeithWalker

Joined Jul 10, 2017
3,091
There is one language that will always remain my favorite. It is a graphical language. It was developed by Hewlett Packard electronic engineers and is based on pure logic. It was originally called HP Vee. After the HP computer/test and measurement company split it was called Agilent Vee and after the second split it is now called Keysight Vee.
To build a program in Vee, you use top-down design to make a layered flowchart of what the program must do. It can be multi-core and multi-threading. The flowchart is interpreted and compiled into an efficient run-time program. It can interact with other programming languages using the built-in ActiveX Automation Server.
This is not a commercial for Keysight Vee. It is just a brief description of a language that was made by engineers for use by engineers that I found to be totally intuitive and logical. That is something that I cannot say about most languages designed by computer programmers.
 

nsaspook

Joined Aug 27, 2009
13,262
That's the problem with embedded programming and designing languages. It's too important to be left to computer language designers. Hardware engineers who program want the equivalent of a wire, solder and a soldering iron available at all times in a language, but the programming guys say that's too dangerous, you might burn something, use this idealized sandbox instead.

We say:
(image attachment)
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
There is one language that will always remain my favorite. It is a graphical language. It was developed by Hewlett Packard electronic engineers and is based on pure logic. It was originally called HP Vee. After the HP computer/test and measurement company split it was called Agilent Vee and after the second split it is now called Keysight Vee.
To build a program in Vee, you use top-down design to make a layered flowchart of what the program must do. It can be multi-core and multi-threading. The flowchart is interpreted and compiled into an efficient run-time program. It can interact with other programming languages using the built-in ActiveX Automation Server.
This is not a commercial for Keysight Vee. It is just a brief description of a language that was made by engineers for use by engineers that I found to be totally intuitive and logical. That is something that I cannot say about most languages designed by computer programmers.
Quite interesting; is this an example?

https://people.ece.ubc.ca/cad/local/html/test/ls165vee.html
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
That's the problem with embedded programming and designing languages. It's too important to be left to computer language designers. Hardware engineers who program want the equivalent of a wire, solder and a soldering iron available at all times in a language, but the programming guys say that's too dangerous, you might burn something, use this idealized sandbox instead.
Well clearly such people can't really be accurately described as "hardware engineers" if they are designing, writing, debugging, testing and maintaining software.

A goal of any software system should be to behave predictably at all times, that is, to never get into an unintended or undesired state. As soon as that arises, the system becomes unpredictable, and who can tell what the consequences might be? In a word processor we get a messed-up document; in a car steering control system, people get killed.

So it seems to me that by its very nature embedded software needs to manage "dangers" more than typical software.

But I'm curious what kind of things are regarded as "dangerous"?
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
Code:
{
    uint8_t regval;
}
Two things I'd like to see as standard in such a language are

Binary literals and macro namespaces.

The latter means being able to write something like this, if it were C:

Code:
#define MODE
{
    SYNC 0
    ASYNC 1
}
This would let us create meaningful names with no risk of name collisions with other #defines at compile time:

Code:
{
set_device_mode(MODE.SYNC);
}
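For comparison, one partial workaround available in standard C today is an enum with a common prefix; it isn't a real namespace, but the constants are typed and grouped (MODE_SYNC and set_device_mode below are made-up names for illustration):

Code:
enum mode { MODE_SYNC = 0, MODE_ASYNC = 1 };  /* the prefix does the "namespacing" by hand */

static void set_device_mode(enum mode m)      /* hypothetical driver call, stubbed out */
{
    (void)m;
}

int main(void)
{
    set_device_mode(MODE_SYNC);               /* typed constant, no #define involved */
    return 0;
}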
Also worth noting:

"A proposal to add binary constants was rejected due to lack of precedent and insufficient utility."

From: Rationale for International Standard - Programming Languages - C, page 51.
 
Last edited:

xox

Joined Sep 8, 2017
838
That's the problem with embedded programming and designing languages. It's too important to be left to computer language designers. Hardware engineers who program want the equivalent of a wire, solder and a soldering iron available at all times in a language, but the programming guys say that's too dangerous, you might burn something, use this idealized sandbox instead.

We say:
(image attachment)
Ha! Well at least C lets you shoot yourself in the foot as much as you care to. (C is available for pretty much any MCU, right?)
 

nsaspook

Joined Aug 27, 2009
13,262
Last edited:

xox

Joined Sep 8, 2017
838
Well clearly such people can't really be accurately described as "hardware engineers" if they are designing, writing, debugging, testing and maintaining software.

A goal of any software system should be to behave predictably at all times, that is, to never get into an unintended or undesired state. As soon as that arises, the system becomes unpredictable, and who can tell what the consequences might be? In a word processor we get a messed-up document; in a car steering control system, people get killed.

So it seems to me that by its very nature embedded software needs to manage "dangers" more than typical software.

But I'm curious what kind of things are regarded as "dangerous"?
Probably things like: setting a pointer to an EXACT address (e.g. a screen buffer), invoking interrupts, direct memory allocation, et cetera.
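To make the first one concrete, something like this, with the register name and address invented for illustration (it only makes sense on the right hardware):

Code:
#include <stdint.h>

/* Hypothetical memory-mapped register address, purely for illustration. */
#define GPIO_OUT (*(volatile uint8_t *)0x40000010u)

void led_on(void)
{
    GPIO_OUT |= 0x01u;   /* set bit 0: raw hardware access a "safe" language would forbid */
}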
 

nsaspook

Joined Aug 27, 2009
13,262
Well clearly such people can't really be accurately described as "hardware engineers" if they are designing, writing, debugging, testing and maintaining software.

A goal of any software system should be to behave predictably at all times, that is, to never get into an unintended or undesired state. As soon as that arises, the system becomes unpredictable, and who can tell what the consequences might be? In a word processor we get a messed-up document; in a car steering control system, people get killed.

So it seems to me that by its very nature embedded software needs to manage "dangers" more than typical software.

But I'm curious what kind of things are regarded as "dangerous"?
Sure, they are hardware engineers when software is just another tool to control a machine, like mechanical gears, rather than a profession. It's possible to be very proficient in both, and to hold both roles officially, if you stick to programming closely related to hardware.
 

xox

Joined Sep 8, 2017
838
Yes, you can shoot your foot with C or blow your entire leg off with something like C++. Humans will not be stopped from making dumb mistakes by a programming language.
Basically it just boils down to how "safe" the developers of the language want it to be. Modern languages mostly tend to err on the safe side. In "the good old days" utility was key, and programmers were expected to be sophisticated enough to understand the dangers.
 

xox

Joined Sep 8, 2017
838
Some past languages have incorporated features like multiple execution threads and even IO; the trend over the past thirty years or so has been to decouple the language from IO, but it could be something to consider.

Threads are a must, and something that most interpreted languages don't even deal with properly. But not only can it be done, it could also be built directly into the interpreter: chopping each subroutine into pieces and then executing "task switches" just like a CPU does. IO would be trivial to implement.
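As a minimal sketch of that idea in plain C rather than inside an interpreter: a round-robin loop that simply calls each task in turn (the task names and tick count are invented for illustration):

Code:
#include <stdio.h>

typedef void (*task_fn)(void);               /* each "task" runs briefly and returns */

static void blink_task(void)  { puts("blink");  }
static void sensor_task(void) { puts("sensor"); }

int main(void)
{
    task_fn tasks[] = { blink_task, sensor_task };
    const int ntasks = (int)(sizeof tasks / sizeof tasks[0]);

    for (int tick = 0; tick < 6; ++tick)     /* bounded here; an MCU main loop runs forever */
        tasks[tick % ntasks]();              /* the "task switch" is just calling the next one */

    return 0;
}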


Several people here have recently talked about memory allocation, and I've read some articles that explain the risks of a generic heap allocate/free model on MCUs, so I've been thinking about this.


For example, a heap (I've developed these before, so I have some deep insight into this) can be something created at runtime by code and then used for allocating/freeing with some strategy. For example, there was a system that needed to allocate/free potentially millions of small structures, all the same size. We addressed this by creating a "memory pool": a special, simple heap that can only allocate blocks of X bytes or less and that uses a bitmap to record busy/free blocks. This could be an API or a native language feature too.
Sure, there are many ways to approach that. I would also include the ability to "attach" the pool to any chunk of memory whatsoever: a buffer declared on the local stack, for example. A simple method such as what you've described would work well for that kind of situation. But garbage collection and/or RAII facilities (i.e. constructors/destructors) would also be worth having.
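For example, a bump-style arena, simpler than the bitmap pool described above, can be "attached" to any buffer, including one on the local stack. All names below are invented for illustration:

Code:
#include <stddef.h>
#include <stdint.h>

struct arena {
    uint8_t *base;                           /* the memory we were "attached" to */
    size_t   size;
    size_t   used;
};

static void arena_attach(struct arena *a, void *mem, size_t size)
{
    a->base = mem;
    a->size = size;
    a->used = 0;
}

static void *arena_alloc(struct arena *a, size_t n)
{
    if (a->used + n > a->size)
        return NULL;                         /* the attached buffer is full */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void example(void)
{
    uint8_t scratch[256];                    /* a plain buffer on the local stack */
    struct arena a;

    arena_attach(&a, scratch, sizeof scratch);
    uint8_t *msg = arena_alloc(&a, 64);      /* carved straight out of the stack buffer */
    (void)msg;
}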

I'd been primarily thinking of a compiled language as that's where I have experience, but a hardware oriented interpreted language could be interesting to think about.


Programming language designs of all sorts have been developed, and the translation to target CPU instructions is well covered in the literature. For example, some designs target an abstract CPU (such as Microsoft's IL or Java bytecode) and then define code generators for each target machine, converting the generic abstract instructions into CPU-specific code; some of these are even simply table driven.
You make it sound so simple! I do understand the principles, but again, it just seems like a massive undertaking. Let's say you compile the source code to some sort of IL. Then, for each and every platform, you output correct productions for the given CPU, and they all have different register arrangements, memory bus widths, etc. How many CPUs do you expect to be able to support? If ALL, then just how MANY is that? I imagine such a project could take years to complete!
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
Threads are a must, and something that most interpreted languages don't even deal with properly. But not only can it be done, it could also be built directly into the interpreter: chopping each subroutine into pieces and then executing "task switches" just like a CPU does. IO would be trivial to implement.




Sure, there are many ways to approach that. I would also include the ability to "attach" the pool to any chunk of memory whatsoever: a buffer declared on the local stack, for example. A simple method such as what you've described would work well for that kind of situation. But garbage collection and/or RAII facilities (i.e. constructors/destructors) would also be worth having.



You make it sound so simple! I do understand the principles, but again, it just seems like a massive undertaking. Let's say you compile the source code to some sort of IL. Then, for each and every platform, you output correct productions for the given CPU, and they all have different register arrangements, memory bus widths, etc. How many CPUs do you expect to be able to support? If ALL, then just how MANY is that? I imagine such a project could take years to complete!
Well, it isn't magic; that's how I perceived it for years until I developed an interest in programming languages. Sure, it isn't simple, but it is not the big mystery it's sometimes portrayed as.

For example, the source is not itself "seen" by the compiler proper. If you look at source code and ponder the systematic consumption of it, it can look daunting; it did to me for years, despite being a competent programmer.

What the parser sees is in fact much, much simpler than what we see in the source code: the parser sees only tokens, small structures that might look like this in C:

Code:
struct token
{
   int token_kind;    /* which kind of token this is (see the enum below) */
   int line_number;   /* where it appeared, for error reporting */
   int line_column;
   char * spelling;   /* the original text of the token */
};
The parser only sees a sequence of these identically structured items; every token in the source is represented by one of these structs.

The token_kind is just an integer, one value for every distinct kind of token; it could be an enum in fact:

Code:
enum token_kinds
{
   WHITESPACE,
   COMMENT,
   IF,
   ELSE,
   INT,
   LONG,
   WHILE,
   DO,
   UNTIL,
   FOR,
   RETURN,
   VOID,
   LPAR,
   RPAR,
   /* ... and so on, one entry per distinct token kind */
};
So the parser almost never looks at source code as such, only at the stream of tokens. The process of converting raw text into tokens is easily done with an FSM: a simple 2D table that jumps from state to state as it reads each character. And yes, the source is generally read one character at a time; it really isn't read line by line, which again is not obvious to people.
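Here's a toy sketch of that kind of table-driven scanner, reading one character at a time; the states and character classes are invented for illustration and it only recognizes identifiers and numbers:

Code:
#include <ctype.h>
#include <stdio.h>

enum state  { S_START, S_IDENT, S_NUM, N_STATES };     /* S_START = not inside a token */
enum cclass { C_ALPHA, C_DIGIT, C_OTHER, N_CLASSES };

/* next_state[current state][class of the next character] */
static const enum state next_state[N_STATES][N_CLASSES] = {
    /* START */ { S_IDENT, S_NUM,   S_START },
    /* IDENT */ { S_IDENT, S_IDENT, S_START },
    /* NUM   */ { S_IDENT, S_NUM,   S_START },
};

static enum cclass classify(int c)
{
    if (isalpha(c) || c == '_') return C_ALPHA;
    if (isdigit(c))             return C_DIGIT;
    return C_OTHER;
}

int main(void)
{
    const char *src = "count1 = 42;";
    enum state s = S_START;
    const char *tok = src;

    for (const char *p = src; ; ++p) {                 /* one character at a time */
        enum state next = *p ? next_state[s][classify((unsigned char)*p)] : S_START;

        if (next != s) {
            if (s != S_START)
                printf("token: %.*s\n", (int)(p - tok), tok);   /* a token just ended */
            if (next != S_START)
                tok = p;                                        /* a new token begins */
        }
        s = next;
        if (*p == '\0')
            break;
    }
    return 0;
}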

The parser's job involves checking that the sequence of tokens conforms to the grammar (syntax checking); that is easily done with an FSM plus a stack, a so-called pushdown automaton. I've written what are called "recursive descent" parsers by hand myself, and that is in fact still a respected way to do it today, despite there being tools for generating parsers from grammar definitions.
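And a very small hand-written recursive-descent parser, for the toy grammar expr := NUMBER ('+' NUMBER)*; it works directly on a string rather than a token stream just to keep the sketch short:

Code:
#include <ctype.h>
#include <stdio.h>

static const char *p;                 /* cursor into the input */

static int number(void)               /* NUMBER */
{
    int v = 0;
    while (isdigit((unsigned char)*p))
        v = v * 10 + (*p++ - '0');
    return v;
}

static int expr(void)                 /* expr := NUMBER ('+' NUMBER)* */
{
    int v = number();
    while (*p == '+') {
        ++p;                          /* consume '+' */
        v += number();
    }
    return v;
}

int main(void)
{
    p = "12+30+5";
    printf("%d\n", expr());           /* prints 47 */
    return 0;
}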

Really, once you've done this once it becomes much less mystifying. The interesting area for me is the grammar, the exact syntax used by a language. I've worked with many languages and seen many grammars, and there are things that could be used more in modern languages but are not; the C grammar seems to dominate how people think, and that is IMHO a handicap when trying to design a new language.

A good example of a bad decision in C is the syntax of declarations: the name is the last token and can be prefixed by multiple optional attribute tokens. It is pretty much recognized that this was a poor decision; ideally the name should appear before any attributes, and those attributes should be allowed in any order. Then we have useless silliness like "void", which to this day appears in countless derivative languages.
 
Last edited:

xox

Joined Sep 8, 2017
838
Well, it isn't magic; that's how I perceived it for years until I developed an interest in programming languages. Sure, it isn't simple, but it is not the big mystery it's sometimes portrayed as.


For example, the source is not itself "seen" by the compiler proper. If you look at source code and ponder the systematic consumption of it, it can look daunting; it did to me for years, despite being a competent programmer.

Yes, tokenizing and parsing is fairly straightforward. I once wrote a Java bytecode generator which could parse the entire language, producing valid bytecode sequences as a result. So I do have a bit of experience there.

But generating code for a micro-controller, that is another matter altogether. CPUs have very complex processing semantics. You have to translate that "intermediate-level" format to raw microcode. All registers have to be set up properly and maintained. Maybe the CPU uses segmented addressing. Et cetera.

Ok, so you implement the work for THAT particular platform. But then what? Dozens of platforms will remain. Again, it could take years to support all of them.

Or maybe I am just overthinking things. Please enlighten me. Is generating machine code for all of the various platforms really such a trivial task?
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,609
Yes, tokenizing and parsing is fairly straightforward. I once wrote a Java bytecode generator which could parse the entire language, producing valid bytecode sequences as a result. So I do have a bit of experience there.

But generating code for a micro-controller, that is another matter altogether. CPUs have very complex processing semantics. You have to translate that "intermediate-level" format to raw microcode. All registers have to be set up properly and maintained. Maybe the CPU uses segmented addressing. Et cetera.

Ok, so you implement the work for THAT particular platform. But then what? Dozens of platforms will remain. Again, it could take years to support all of them.

Or maybe I am just overthinking things. Please enlighten me. Is generating machine code for all of the various platforms really such a trivial task?
The STM32 family uses ARM CPUs, for example:

https://ocw.aoc.ntua.gr/modules/document/file.php/ECE102/Σημειώσεις Μαθήματος/ARM_Programmer_s_Model.pdf

That's "just" a cpu, not trivial, but a cpu is a cpu.

Compare with:

https://www.amd.com/system/files/TechDocs/24592.pdf
 