The Imperium Programming Language - IPL

xox

Joined Sep 8, 2017
838
The STM32 family uses an ARM CPU:


https://ocw.aoc.ntua.gr/modules/document/file.php/ECE102/Σημειώσεις Μαθήματος/ARM_Programmer_s_Model.pdf


That's "just" a cpu, not trivial, but a cpu is a cpu.


compare with


https://www.amd.com/system/files/TechDocs/24592.pdf
Right, that's several hundreds of pages of documentation which all needs to be gone through, and more importantly, understood well enough to be able to generate semantically correct microcode. So again, considering the vast number of possible architectures, that just doesn't seem like a viable project. Maybe you could, say, target only the most popular ones? Then you'd be looking at maybe half a dozen or so to implement.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Right, that's several hundreds of pages of documentation which all needs to be gone through, and more importantly, understood well enough to be able to generate semantically correct microcode. So again, considering the vast number of possible architectures, that just doesn't seem like a viable project. Maybe you could, say, target only the most popular ones? Then you'd be looking at maybe half a dozen or so to implement.
Well, remember I'm asking about what people want to see in an "ideal" language. I'm not advocating or proposing to develop anything like a compiler; that's not really what I'm asking about here. So yes, generating code for some target CPU is an effort, but not something to be concerned with at this point.

But on that note I can show you the code generator I developed for a version of PL/I that generates code for the 32-bit x86 CPU family. The code generation phase handles:


It's a lot of work, but not as daunting as it might first appear; it seemed huge when I began, but it soon became manageable as I learned more about the CPU's low-level details.

The code gen phase parallels the parser in a sense. Just as the parser has a recursive-descent function to process some language construct, the codegen has a function to translate the same construct, e.g. we have parse_assignment and tran_assignment.

For example, here's the code that translates a "procedure" definition (the equivalent of a C prototype such as uint8_t* HandleEvent (long X, char * Y)):

Code:
     /* ENTER frame-size, 0 : allocate this procedure's stack frame */
     Inst.opcode     = ENTER;
     Inst.target.imm = (short)ptr->proc->stack;
     Inst.target.len = WORD_PTR;
     Inst.source.imm = 0;
     Inst.source.len = BYTE_PTR;
     CpuGenerate();

     /* MOV DWORD PTR [EBP - frame-size], ECX : save ECX in the new frame */
     Inst.opcode     = MOV;
     Inst.source.reg = _ECX;
     Inst.target.bas = _EBP;
     Inst.target.len = DWORD_PTR;
     Inst.target.dis = -((short)(ptr->proc->stack)); /* save CX at offset 0 in frame */
     CpuGenerate();
There's a struct dedicated to representing any instruction; the code gen populates that and calls an emitter function (CpuGenerate) to convert the instruction struct into raw executable machine bytes.

Code:
typedef struct {
    Opcode   opcode;
    Address  source;
    Address  target;
    char     text[256];
} Instruction;
where
Code:
typedef struct {
    long   imm;   /* used for JMP                    */
    char   len;   /* analogous to WORD PTR/BYTE PTR  */
    char   scale;
    Reg    reg;
    long   dis;
    Reg    idx;
    Reg    bas;
    Reg    seg;   /* used if segment overrides reqd. */
} Address, * Address_ptr;
Converting instances of these structs to machine bytes is done via table lookups: we index into a per-opcode table using the details of the addressing modes (immediate, reg to reg, reg to mem, mem to reg, etc.). So long as the tables are correctly populated from the CPU reference manual this works very well; here's the table for the CALL instruction:

Code:
static opcode_map  call_map = {
/*--------------------------------------------------------------------------*/
/*             IMM08 IMM16 IMM32 REG08 REG16 REG32 MEM08 MEM16 MEM32 MEM64  */
/*--------------------------------------------------------------------------*/
/* IMM08  */ { NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* IMM16  */ { _E8,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* IMM32  */ { _E8,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* REG08  */ { NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* REG16  */ { _FF,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* REG32  */ { _FF,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* MEM08  */ { NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* MEM16  */ { _FF,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* MEM32  */ { _FF,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
/* MEM64  */ { NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP,  NOP },
};
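As a rough illustration of how such a table is consumed (a minimal sketch only; the enum, the typedef, and the exact row/column convention are my assumptions here, not the actual emitter code):

Code:
/* Hypothetical sketch: classify each operand into one of the table's
   categories, then index the per-opcode map to obtain the opcode byte.  */
typedef enum { IMM08, IMM16, IMM32, REG08, REG16, REG32,
               MEM08, MEM16, MEM32, MEM64, ADDR_KINDS } AddrKind;

typedef unsigned char opcode_map[ADDR_KINDS][ADDR_KINDS];

static unsigned char lookup_opcode(const opcode_map map,
                                   AddrKind row, AddrKind col)
{
    return map[row][col];   /* e.g. call_map[IMM32][IMM08] yields _E8 */
}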
Once one gets immersed in the problem it becomes less daunting. A key thing I found is that once the parse tree and the symbol table tree have been created, there's a huge amount of information and detail there, and that information is what the code generation phase relies on.
 
Last edited:

xox

Joined Sep 8, 2017
838
Well, remember I'm asking about what people want to see in an "ideal" language. I'm not advocating or proposing to develop anything like a compiler; that's not really what I'm asking about here
Ah, so just an "academic exercise". Got it.

But on that note I can show you the code generator I developed for a version of PL/I that generates code for the 32 bit X86 CPU family

....

Once one gets immersed in the problem it becomes less daunting. A key thing I found is that once the parse tree and the symbol table tree have been created, there's a huge amount of information and detail there, and that information is what the code generation phase relies on.
I don't know. I remember slogging through x86 assembly language back in the day, Ralf Brown's interrupt list in hand. Man, that was tedious work! I guess I just don't appreciate low-level details as much.

Regardless, I do love compiler technology and it is nice to see a project that was able to mature to such a level. Many years ago, I took up a MiniBasic interpreter project at the request of a tech-aficionado who was dying of cancer. He couldn't run his programs on his new system. So I offered to help. Within a few weeks two others volunteered to assist with fine tuning. Within a month's time it was actually working well and running every program that we threw at it. What a great feeling that was! So yes, there is something satisfying about creating that sort of automation.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Ah, so just an "academic exercise". Got it.



I don't know. I remember slogging through x86 assembly language back in the day, Ralf Brown's interrupt list in hand. Man, that was tedious work! I guess I just don't appreciate low-level details as much.

Regardless, I do love compiler technology and it is nice to see a project that was able to mature to such a level. Many years ago, I took up a MiniBasic interpreter project at the request of a tech-aficionado who was dying of cancer. He couldn't run his programs on his new system. So I offered to help. Within a few weeks two others volunteered to assist with fine tuning. Within a month's time it was actually working well and running every program that we threw at it. What a great feeling that was! So yes, there is something satisfying about creating that sort of automation.
Well, thanks. I'd never heard of that interrupt list; I'm looking forward to reading about it. It's wonderful you were able to help that unfortunate gentleman. That does sound like a real project; I think you might be understating your skills a little!

That PL/I compiler was done over many years, in phases; I refactored frequently and even completely rewrote bits from time to time. I learned C the hard way, made frequent errors and design mistakes. One hugely helpful aspect of that type of work is that, unlike an OS, you can't crash your system.

Also unit testing becomes a matter of running the compiler with your own set of test source files, a list that grows as one works.

I could make some low-level change or fix, then just run a .BAT file that compiled thirty or so test files; 90% of the time that would serve as a solid test, so one could work cyclically and get quite productive.

I used no source control system and worked alone; today, with GitHub and open-source collaboration, such a project would move more quickly. I got to the point of generating DOS-linkable OBJ files and runnable code working solely on DOS; only later did I update the project to generate NT COFF DLLs.

Today I'd likely write such a system in C#, since I've used that professionally for some twenty years, and it now produces totally portable code since the advent of .NET Core. Basically that whole experience taught me a great deal and gave me high confidence when discussing the subject or starting such work.

Although perhaps not interesting to you, you might find this thought-provoking: I recently got involved in lengthy discussions with some people on the C# team about the shortcomings of the C# grammar and how it ultimately restricts flexibility. That resulted in me writing a few detailed blog posts about this, and later an experimental new grammar, one that could do what C# does but was free of these burdens, as I see them.

Those posts are here and the grammar experiments are here. That has a lexical analyzer and a parser; incomplete, but not a toy, and of course this was mainly a "what if" kind of experiment for me. In fact that experimental grammar is the kind of thing I'd be drawn towards if designing a new language for MCUs. The grammar is primarily new, a cross between PL/I and C, in that it leverages things from each grammar that seem truly helpful and discards things that are burdensome or idiosyncratic (like PL/I's infamous "PIC" data type specifier).

Though rather skeletal and "gappy", that work took me about four weeks; Git makes it just so much quicker to work safely and incrementally. Curiously, now that I'm looking at it, it was almost exactly one year ago too!

Something truly unusual emerged too. It arose during a discussion of how best to extend C# to allow string constants to contain any character without making it look unwieldy. The Microsoft team settled in the end on """ as the delimiter, which works and is fine I guess, but I devised an alternative: a way of defining new tokens at compile time, that is, we can modify the lexical analyzer itself as we compile the source.

That used a compiler directive I defined (again, this is all "just for fun") called #add_delimiter: when the scanner sees it, it uses the supplied character sequence as the new string delimiter. This was easy to do and I've never seen this kind of thing done before. It means a project could decide ahead of time that their strings will always be delimited by, say, '" (apostrophe followed by quote), on the understanding that the team will never use that sequence inside any of their strings.

The definition is stacked, so to speak, so one can "pop" the delimiter when done, meaning we could use it in small blocks of code to define strings that would otherwise need escape sequences. (I think I named these #delimiter.push and #delimiter.pop in the end.) In a sense the chosen solution for C# does seem to use this idea, because at one point in the code """ can be the delimiter but a few lines later """" can become the delimiter and """ can be literally embedded in the string; that is, they are in fact defining new tokens at compile time.
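Purely to illustrate how that might read in source (this is only a sketch of the idea in a C-like syntax, using the directive names mentioned above; none of it is an implemented feature):

Code:
#delimiter.push '"     /* strings are now delimited by '" (apostrophe + quote)   */
    msg = '"He said "just use three quotes" and left.'";
#delimiter.pop         /* back to the ordinary string delimiter from here on     */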

The huge interest area for me is to devise something that is truly user-friendly, symmetric, expressive, and free, as far as it can be, of odd restrictions and gotchas. Some of this is hard to avoid, but a lot of it is nothing more than convention, just presumed from the outset; as soon as designers say "OK, we'll borrow from the C language here" they instantly close a number of doors that can never be reopened.

So in this thread I'm seeking input from others, experts in MCU use and development, who can "step back" and state the kinds of things they'd like to see in a language for this kind of work.

For example, the absence of a "bit" data type with that name, in a language where manipulating bits is fundamental, is inexcusable IMHO; things do not have to be this way!
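For comparison, the closest standard C gets is the bit-field, which gives you the packing but not a first-class "bit" type (the register layout below is invented purely for illustration):

Code:
/* C bit-fields pack flags tightly, but they are not a real "bit" type:
   you cannot take the address of one, and the exact layout is
   implementation-defined, which matters when mapping hardware registers. */
typedef struct {
    unsigned ready   : 1;
    unsigned error   : 1;
    unsigned channel : 3;
    unsigned         : 3;   /* unused padding up to the byte boundary */
} status_flags_t;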
 
Last edited:

xox

Joined Sep 8, 2017
838
As far as source control goes, I do use it in projects where it's absolutely necessary. But personally, just making a backup of the file before changes is usually good enough.

C# is pretty cool. I've done some stuff with WMI connectivity in the past. I like to think of it as a "better Java". It has much more sophisticated capabilities and API support across platforms is more uniform, I think. But it is bloated software, so running the whole subsystem on an MCU might be problematic. The other problem is the lack of user control over .NET applications. Running such applications can therefore be somewhat of a security concern. What I would like to see is a language where the runtime itself is subject to user permissions at EVERY LEVEL.

Another idea that came to mind is an interpreted, high-level language that could be invoked from different contexts, whether that be an assembly language program, one written in C, or Python, or whatever. That way, the developer can write the low-level stuff however they want, which could then be "glued" to the scripting language by way of API calls that attach low-level callback functions to its corresponding higher-level interface. The script could return actual results to the caller (a double, an int, an array, etc.).

Best of all, the MCU wouldn't even need to parse the source; the raw op-code data could be embedded within the program and then executed by a much smaller interpreter.
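Just to sketch what I mean (everything here, the op-codes, the callback table, the names, is invented for illustration; a real design would differ):

Code:
#include <stdio.h>

/* Invented-for-illustration glue layer: the low-level side is ordinary C,
   the "script" is pre-compiled byte-code embedded in the image, and a tiny
   interpreter dispatches it, calling back into registered C functions.    */
typedef double (*host_fn)(double);

enum { OP_PUSH, OP_CALL, OP_RET };        /* made-up op-codes               */

static host_fn callbacks[8];              /* table of registered callbacks  */

static double read_sensor(double channel) /* the low-level C side           */
{
    return channel * 2.0;                 /* stand-in for real hardware I/O */
}

static double run(const unsigned char *code)
{
    double stack[16];
    int sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH: stack[sp++] = *code++;                             break;
        case OP_CALL: stack[sp - 1] = callbacks[*code++](stack[sp - 1]); break;
        case OP_RET:  return stack[--sp];
        }
    }
}

int main(void)
{
    /* "script": push 3, call callback 0 (read_sensor), return the result */
    static const unsigned char program[] = { OP_PUSH, 3, OP_CALL, 0, OP_RET };
    callbacks[0] = read_sensor;
    printf("%f\n", run(program));         /* prints 6.000000 */
    return 0;
}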
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
As far as source control goes, I do use it in projects where it's absolutely necessary. But personally, just making a backup of the file before changes is usually good enough.

C# is pretty cool. I've done some stuff with WMI connectivity in the past. I like to think of it as a "better Java". It has much more sophisticated capabilities and API support across platforms is more uniform, I think. But it is bloated software, so running the whole subsystem on an MCU might be problematic. The other problem is the lack of user control over .NET applications. Running such applications can therefore be somewhat of a security concern. What I would like to see is a language where the runtime itself is subject to user permissions at EVERY LEVEL.

Another idea that came to mind is an interpreted, high-level language that could be invoked from different contexts, whether that be an assembly language program, one written in C, or Python, or whatever. That way, the developer can write the low-level stuff however they want, which could then be "glued" to the scripting language by way of API calls that attach low-level callback functions to its corresponding higher-level interface. The script could return actual results to the caller (a double, an int, an array, etc.).

Best of all, the MCU wouldn't even need to parse the source; the raw op-code data could be embedded within the program and then executed by a much smaller interpreter.
Let me tell you here and now that Git is insanely helpful; it is to programming what a top-of-the-range scope is to an electronics engineer, it's that helpful.

I used to back up (ZIP the folder and add it to a backup drive) a very large C and C# codebase every evening when I was doing some specialized consultancy; had I known about and understood Git and GitHub in 2012, my life might have gone differently!

People often hate Git (as I once did), primarily because of the cryptic command-line BS that many developers live by. Well, forget that: I haven't used the Git command line in something like six years, and I use Git very heavily and routinely; all of my work, both professional and hobby, sits "on top of" Git.

I'm no Linux or Unix fan, but I will say without hesitation that Git, devised by Torvalds, is superbly designed; I doubt I could do a better job if I tried really, really hard.

Anyone who wants to use Git and not get abused by the command-line BS need only look at SmartGit, which is also free for non-commercial use. I can and do work with Git 100% through its rich GUI.

It is written in Java and runs very well indeed on Windows, Mac, Linux etc.

You and anyone else are welcome to ask me any questions you like about Git and GitHub and SmartGit, I'm happy to help partly because I enjoy helping others but also because any developer that isn't using Git has absolutely no idea how disadvantaged they are.

In essence Git is "aware" of the file system, the folder tree that constitutes a "repository". Do anything to a file in that folder tree and Git "sees" it; one file or a hundred, it sees it all, and you can snapshot that state instantly at any time, either as one snapshot for all files or perhaps two or three if one wants to break it up. Once committed, it's there.

Imagine frantically coding some change for an hour or so, maybe altering five or six files, headers, source code, etc., and then reaching the point we all do: "Oh sh*t, this is really not the way to go; I had a suspicion, but it's clear that if I do that then this can't happen..." Well, in such cases one just looks at the SmartGit GUI, selects all the files that have changed (and only changed files appear), right-clicks "Discard", and all the changes in all the selected files vanish, and you're safely back where you were!

You can peruse the history tree and pick some arbitrary point, say two days ago, then "checkout" a specific commit and Git will reconstitute the folder tree as it was at the time of that commit. You can "go back in time" and rerun some app to see why it worked then but doesn't now, for example.

Anyway, enough: Git is very good, SmartGit is superb, and I encourage you and others here to "take the plunge".
 
Last edited:

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
After discussing something recently in a different thread, it strikes me that an HPL could support an "entrypoint" feature, allowing a function/procedure (whatever it be termed) to have multiple invocation points:

Code:
void init_system()
{

     int data;

     data = do_phase_1_init();
     queue_sleep_request(100, resume);
     return;
   
entry resume:

     do_phase_2_init(data);
}
The only way to code this today in C is to pass a function pointer into queue_sleep_request, which breaks the linear flow of code into two distinct functions and offers no closure, so the resumed code cannot access data after the function has returned; ordinarily, in most languages, the return unwinds the stack and the data in the stack frame is lost.
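For contrast, here's roughly how the same thing has to be written in C today (do_phase_1_init, do_phase_2_init and queue_sleep_request are the same assumed helpers as above):

Code:
extern int  do_phase_1_init(void);
extern void do_phase_2_init(int data);
extern void queue_sleep_request(int ms, void (*callback)(void));

/* The continuation must become a separate function, and any state it needs
   has to be moved out of the stack frame (here into a file-scope static),
   because the original frame is gone by the time the callback runs.       */
static int saved_data;

static void resume(void)                /* the "second half" of init_system */
{
    do_phase_2_init(saved_data);
}

void init_system(void)
{
    saved_data = do_phase_1_init();
    queue_sleep_request(100, resume);   /* callback invoked after the sleep */
}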

This is just an idea, an improved way to express certain constructs that one might want to use if writing code that does some kind of multitasking or scheduling or asynchronous processing.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
I've recently been thinking that a proof-of-concept compiler for the language could be developed by generating C source code rather than target-specific machine code. This would afford an opportunity to explore a new grammar while still producing runnable code, and runnable on a microcontroller at that.

It's too soon to start such a thing, but it would be reasonably achievable once enough of a language definition began to exist.
 

xox

Joined Sep 8, 2017
838
I've recently been thinking that a proof of concept language compiler could be developed by generating C source code rather than target specific machine code. This would afford an opportunity to explore a new grammar that could result in runnable code, and runnable on a microcontroller.


It's too soon to start such a thing, but it would be reasonably achievable once enough of a language definition began to exist.
Oh ok, so a transpiler. I was actually thinking the same thing. It could get pretty complicated translating higher-level constructs down to C. (Although I think the first C++ compilers did just that.) Also might make dynamic loading a bit tricky.

One of these days I do think I will get involved in another programming language project. These days I have too much on my plate. Still dealing with a "project from hell" with no end in sight. (God I hate modern web-interface requirements!)
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Oh ok, so a transpiler. I was actually thinking the same thing. It could get pretty complicated translating higher-level constructs down to C. (Although I think the first C++ compilers did just that.) Also might make dynamic loading a bit tricky.

One of these days I do think I will get involved in another programming language project. These days I have too much on my plate. Still dealing with a "project from hell" with no end in sight. (God I hate modern web-interface requirements!)
Oh, so you're doing web dev work. I just spent the past couple of months evaluating Blazor (we are a .NET shop anyway) and I am very, very pleased overall with this technology.

Blazor can run either in-browser or on-server, and the programming model is almost identical in each case. Just a single HTTP GET to launch the app, and thereafter all communication is "real time" SignalR, a secure, fast channel.

This makes it possible to code the UI almost as if it were WPF, or close to it anyway.

Regarding an HPL, I do have a pretty solid, if basic, parser for the imaginary "novus" language, an "idealized" grammar I was exploring as an alternative to C#'s grammar.

That already implements some of the things I'd expect in a new hardware language, a variant of C, so I could branch that and see how it goes...
 

xox

Joined Sep 8, 2017
838
Oh so you're doing web dev work. I just spent the past couple months evaluating Blazor (we are a .Net shop anyway too) and I am very very pleased overall with this technology.


Blazor can run either in-browser or on-server, and the programming model is almost identical in each case. Just a single HTTP GET to launch the app, and thereafter all communication is "real time" SignalR, a secure, fast channel.


This makes it possible to code the UI almost as if it were WPF, or close to it anyway.


Regarding HPL I do have a pretty solid if basic, parser for the imaginary "novus" language, an "idealized" grammar I was exploring as an alternative to C#'s grammar.


That already implements some of the things I'd expect in a new hardware language, a variant of C so I could branch that and see how that goes...

Blazor does look pretty impressive. Not sure how well it would do on a Linux distro though. Are page-load times acceptable? I read that it could be a little laggy in that respect. Also, why does it need to download an embedded copy of the .NET runtime every time it loads an application? THAT'S kind of crazy!

This particular project is a node.js application which interacts with a Redis server. All client-side code is generated "on the fly" and is subject to various authentication measures (and as such we have very low spambot/fake account rates). Rather than use something like Angular, Vue, or Svelte for UI templating, I just generate all of the content directly from javascript.

The back-end is very stable and is designed around a simple JSON API. It can answer (sanitized) queries, spit out HTML content, even images. So that's all working well. But the UI demands just seem to increase by the day. Everybody wants more and more features. (Oh, if they only knew how hard it is to get CSS right!)


I think it goes without saying too, that any new language will not require forward declarations.
Depends on whether the language is interface-based or type-based. In the former case, as long as certain interface criteria are met, the structure of any given object doesn't really need to be visible anyway. Otherwise the type-system has to do some VERY EXPENSIVE currying in order to handle arbitrarily placed declarations.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Blazor does look pretty impressive. Not sure how well it would do on a Linux distro though. Are page-load times acceptable? I read that it could be a little laggy in that respect. Also, why does it need to download an embedded copy of the .NET runtime every time it loads an application? THAT'S kind of crazy!
One can have client code run in the browser or on the server; it's a choice you get when you create the initial Blazor project. There is a lag when using WebAssembly: the client side contains JavaScript code that bootstraps WebAssembly and must pull down a bunch of stuff. Once pulled, though, you can pretty much disconnect; the app is close to a native app at that stage in terms of responsiveness.

But with Blazor Server the UI code runs at the server; it reacts in real time to UI events like button clicks, checkbox selection, etc. The local event is packaged and sent to the server over a fast, secure SignalR link. The programming model is unaware of this, it is invisible; you write the UI code just as if it were running on a desktop. It's very fast, but all UI processing is done on the server, and UI updates are determined and sent back to the browser for rendering.

You'd use Blazor WebAssembly for something like Google Earth or a game, where there's a lot of graphics processing to do (Google Earth in fact uses WebAssembly; this is an industry standard now).

This particular project is a node.js application which interacts with a Redis server. All client-side code is generated "on the fly" and is subject to various authentication measures (and as such we have very low spambot/fake account rates). Rather than use something like Angular, Vue, or Svelte for UI templating, I just generate all of the content directly from javascript.

The back-end is very stable and is designed around a simple JSON API. It can answer (sanitized) queries, spit out HTML content, even images. So that's all working well. But the UI demands just seem to increase by the day. Everybody wants more more features. (Oh, if they only knew how hard it is to get CSS right!)

Depends on whether the language is interface-based or type-based. In the former case, as long as a certain interface criteria is met, the structure of any given object doesn't really need to be visible anyway. Otherwise the type-system has to do some VERY EXPENSIVE currying in order to handle arbitrarily placed declarations.
I looked closely at Angular; it is pretty good but quite complex. So my overall assessment for us (we are a large university) is that, given the benefits of replacing JavaScript with C# for client code, having the same language across front and back ends, and the superb responsiveness of SignalR in Blazor Server, the developer effort is much lower.

Unless you're rendering a fast game or a very rich UI with images etc., Blazor Server is excellent. The fact that the client code runs in the browser in Blazor WebAssembly or on the server in Blazor Server is 99.9% invisible; UI manipulation like C# event handlers, two-way binding, etc. all works identically. The main difference is in fact in the app startup code, not the UI code.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Something else that came up in another thread is how assembler allows us to embed multiple NOP instructions in cases where we just want to waste a little CPU time.

High-level languages really don't expose that idea, so a new hardware language could support it, perhaps with a keyword:

Code:
if a > 100 then
   nop(10); // Note this is NOT C, nop here is a keyword like goto or switch, not a function call.
Where the compiler literally emits 10 NOP instructions (or an equivalent loop etc).

Every MCU and CPU to my knowledge has a NOP instruction so this idea seems like a portable, if simple, abstraction that is rarely, if ever, supported by high level languages.
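For comparison, the closest one gets in C today is compiler-specific inline assembly; a rough sketch using the GCC/Clang extension (which is exactly the kind of non-portable detail a built-in keyword would hide):

Code:
/* Execute ten NOPs using GCC/Clang inline assembly.  "volatile" prevents the
   optimiser from deleting them; the syntax is compiler-specific, and other
   toolchains spell it differently (e.g. a __nop() intrinsic).              */
static inline void nop10(void)
{
    __asm__ volatile (
        "nop\n\t" "nop\n\t" "nop\n\t" "nop\n\t" "nop\n\t"
        "nop\n\t" "nop\n\t" "nop\n\t" "nop\n\t" "nop"
    );
}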
 

xox

Joined Sep 8, 2017
838
Something else that came up in another thread is how assembler allows us to embed multiple NOP instructions in cases where we just want to waste a little CPU time.

High level languages really don't expose that idea, so a new hardware language could support that, perhaps a keyword

Code:
if a > 100 then
   nop(10); // Note this is NOT C, nop here is a keyword like goto or switch, not a function call.
Where the compiler literally emits 10 NOP instructions (or an equivalent loop etc).

Every MCU and CPU to my knowledge has a NOP instruction so this idea seems like a portable, if simple, abstraction that is rarely, if ever, supported by high level languages.
If the language is compiled, then yes, that would certainly be doable. While you are at it, might as well provide a means to generate all/most of the other assembly instructions as well. Now for a scripted language, that would be exceedingly difficult to pull off. You'd have to have some sort of sophisticated JIT mechanics going on there!

OK, but why the NOP instruction? For timing-related issues, I assume?
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
If the language is compiled, then yes, that would certainly be doable. While you are at it, might as well provide a means to generate all/most of the other assembly instructions as well. Now for a scripted language, that would be exceedingly difficult to pull off. You'd have to have some sort of sophisticated JIT mechanics going on there!

Ok, so but why the NOP instruction? For timing-related issues, I assume?
It came up in a different discussion.

https://forum.allaboutcircuits.com/...const-char-strings.190219/page-6#post-1777237

If some people do use NOP to achieve some observable outcome, then it struck me as possibly being a relevant abstraction. Because every CPU/MCU has a NOP instruction, I guess one would simply translate it to a native NOP; the only platform dependency might be the elapsed time of a NOP, which is hardware dependent and so would make the code hardware specific, less portable.

In the case of NOP it does literally nothing; it never changes status registers or flags or anything, so it could even be "emulated" if some processor really didn't have it.

Basically it would be a processor-independent, time-consuming operation that has no effect on CPU status, a bit like a

Code:
while(1);
but more readable: no loop condition or anything, just a way of simply wasting a few cycles. I guess we could even name it something other than "nop", perhaps "idle" or something.

What's interesting to me is that it does reflect a true functional need; sometimes there is a need to do exactly nothing for a short time. No high-level language does this; it's simply a pointless thing to do from a traditional language-design standpoint.

To my knowledge no high-level language offers this construct; they leave you to make do with simple spin loops and so on, but these tend to get optimized away in any real language. These are exactly the kinds of ideas I'm interested in capturing.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,610
Taking this idea a step further, I think almost all MCU/CPU devices also have a PC and an SP; these are inherent and common to all devices, be they 8-bit or 64-bit. So perhaps the language could expose these in some way for some needs.

Since automatic variables (in C and other languages) are memory locations inside a stack frame, we could expose a machine-independent means of walking back through stack frames. Yes, the frame layout does vary from CPU to CPU, but that could be abstracted away.

OS schedulers, for example, manipulate the stack to effect context switches; that kind of code is almost always written in assembler for this reason.

In a pre-emptive scheduler, a timer interrupt occurs and the running "thread" has its registers saved on the stack, including the return address. The interrupt handler overwrites the stack: it saves the current return address and replaces it with the return address of the next "thread", then the handler just does a return from interrupt.

This magically results in some other thread resuming; this is the clever way schedulers achieve what they do. Perhaps that kind of code could be written in a new high-level language...
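Just to sketch the shape of that tick handler (save_context and restore_context are invented names standing in for the few lines of assembler a real port would need):

Code:
/* Hypothetical pre-emptive tick: save the interrupted thread's registers,
   pick the next runnable thread, and "return" into its saved context.     */
typedef struct { void *sp; } thread_t;

extern thread_t threads[];
extern int      current;

extern int   pick_next_thread(void);
extern void *save_context(void);         /* invented: push regs, hand back the SP        */
extern void  restore_context(void *sp);  /* invented: pop regs, return-from-interrupt    */

void timer_interrupt(void)
{
    threads[current].sp = save_context();
    current = pick_next_thread();
    restore_context(threads[current].sp);
}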
 

joeyd999

Joined Jun 6, 2011
5,283
Because every CPU/MCU has a NOP instruction I guess one would just simply translate that to a native NOP, the only platform dependency might be the elapsed time for a NOP that is hardware dependent and so would make the code hardware specific, less portable.
A NOP for timing purposes on anything other than a single-threaded, single-core, non-instruction-cached CPU is pretty much useless.

On small CPUs/MCUs with predictable instruction cycle timing, a NOP takes a finite and predictable amount of time to execute (sans any interrupt processing that may occur before or after the instruction). But the execution time is dependent on clock speed, which is not the same for all applications, and may not be the same even within one single application.

Therefore, the construct NOP(n) is also non-portable.

The proper, portable way is to have a macro called something like NOP_ns(n), where the preprocessor would generate the proper number of NOPs based on a predefined clock speed.

This is similar to how the macros Delay_us() and Delay_ms() work.
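Something along these lines (just a sketch; F_CPU and the __NOP() intrinsic are toolchain-specific assumptions, and a production macro would also account for loop overhead or unroll entirely):

Code:
/* Sketch of NOP_ns(n): convert the requested delay into whole CPU cycles at
   the configured clock and execute that many NOPs.                         */
#define NS_PER_CYCLE  (1000000000UL / F_CPU)
#define NOP_ns(n)                                                     \
    do {                                                              \
        unsigned long _c = ((n) + NS_PER_CYCLE - 1) / NS_PER_CYCLE;   \
        while (_c--) __NOP();                                         \
    } while (0)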
 