The Imperium Programming Language - IPL

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
A NOP for timing purposes on anything other than a single threaded, single core non-instruction-cached CPU is pretty much useless.

On small CPUs/MCUs with predictable instruction cycle timing, a NOP takes a finite and predictable amount of time to execute (sans any interrupt processing that may occur before or after the instruction). But the execution time is dependent on clock speed, which is not the same for all applications, and may not be the same even within one single application.

Therefore, the construct NOP(n) is also non-portable.

The proper, portable way is to have a macro called something like NOP_ns(n), where the preprocessor would generate the proper number of NOPs based on a predefined clock speed.

This is similar to how the macros Delay_us() and Delay_ms() work.
Why do you think predictability is not possible on a multithreaded, preemptively scheduled system? I agree that a nop or idle implementation would likely include a nanosecond interval; the implementation is then down to the target compiler and runtime.
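For reference, the Delay_us()/Delay_ms() mechanism mentioned above already works roughly this way on small parts. A minimal sketch, assuming an XC8-like toolchain for an 8-bit PIC, where the delay macros expand at compile time from a predefined clock value (the 20 MHz figure is just an assumed example):

Code:
/* Sketch of the macro approach quoted above, assuming an XC8-like
   toolchain for an 8-bit PIC. _XTAL_FREQ is the predefined clock the
   delay macros key off; NOP() emits a single nop instruction. */
#define _XTAL_FREQ 20000000UL   /* assumed 20 MHz oscillator */
#include <xc.h>                 /* provides NOP(), __delay_us(), __delay_ms() */

void settle_before_read(void)
{
   NOP();           /* one instruction cycle: 200 ns at Fosc/4 = 5 MHz */
   NOP();
   __delay_us(1);   /* expands to the cycle count implied by _XTAL_FREQ */
}
The catch is that the "portability" lives in whoever defines _XTAL_FREQ for each build, not in the call sites.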
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
And if preemption occurs during the execution of NOPs?
You prevent that from happening: the language or OS provides a way to do so, something like:

Code:
void setup_system()
{

   reset_all_devices();

   atomic // pause scheduling
   {
      start_transmitter();
      idle 40; // idle for exactly 40 nanoseconds before starting RF IO.
   }

   start_sending_data();

}
So your (legitimate) concern now becomes another goal! A new hardware-oriented language must include a means of suspending a scheduler. Of course, an OS itself can just expose an API for this too.

Of course, additional interrupts might occur and might or might not be masked, but basically there's no fundamental rule that says we cannot prevent preemption.

This is an example of why I think it's important to step back from C. It's a bit like what Edward de Bono, the guy who coined the term "lateral thinking", warned about: we get stuck in a particular way of looking at things (the C language) and that then prevents us from seeing alternatives; sometimes we don't even know that we're stuck.

An OS itself can suspend scheduling: in Windows, for example, if a user thread has been interrupted and is now running kernel-mode code, the scheduler will not, and cannot, reschedule it.
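For comparison, a rough sketch of how the same sequence might be approximated in C today on a Cortex-M class part with CMSIS available; start_transmitter() and friends are the same placeholders as above, CYCLES_FOR_40NS is an assumed value that would have to be derived from the actual core clock, and the NOP loop is only a crude stand-in for "idle 40":

Code:
#include <stdint.h>
#include "cmsis_compiler.h"   /* CMSIS intrinsics: __disable_irq(), __enable_irq(), __NOP() */

extern void reset_all_devices(void);   /* placeholder device routines */
extern void start_transmitter(void);
extern void start_sending_data(void);

#define CYCLES_FOR_40NS 4u   /* assumed cycle count for ~40 ns at this clock */

void setup_system(void)
{
   reset_all_devices();

   __disable_irq();                          /* the "atomic" block: no preemption */
   start_transmitter();
   for (uint32_t i = 0; i < CYCLES_FOR_40NS; ++i)
   {
      __NOP();                               /* crude stand-in for "idle 40" */
   }
   __enable_irq();

   start_sending_data();
}
The contrast with the pseudocode above is the point: everything the language expressed directly becomes toolchain intrinsics and a hand-derived cycle count.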
 
Last edited:

joeyd999

Joined Jun 6, 2011
5,287
You prevent that from happening: the language or OS provides a way to do so, something like:

Code:
void setup_system()
{

   reset_all_devices();

   atomic
   {
      start_transmitter();
      idle 40; // idle for exactly 40 nanoseconds before starting RF IO.
   }

   start_sending_data();

}
So your (legitimate) concern now becomes another goal! A new hardware-oriented language must include a means of suspending a scheduler. Of course, an OS itself can just expose an API for this too.

Of course, additional interrupts might occur and might or might not be masked, but basically there's no fundamental rule that says we cannot prevent preemption.

This is an example of why I think it's important to step back from C. It's a bit like what Edward de Bono, the guy who coined the term "lateral thinking", warned about: we get stuck in a particular way of looking at things (the C language) and that then prevents us from seeing alternatives; sometimes we don't even know that we're stuck.
You make me feel like I'm arguing with my cat.

And I don't even have a cat.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
IMO you will eventually recreate one of the Wirthian languages: academically pure but dead on arrival in practice.
https://www.theregister.com/2022/03/29/non_c_operating_systems/
Well, I'm certainly not developing a language, just discussing the kinds of goals people deal with in the world of MCUs. That article also does a good job of showing C in its proper context: one language of many, and far from the most innovative.

My problem with C is that it influences/stifles the very way we think. This is true of human languages too: we use language to convey our thoughts, but we also form those thoughts within a language. So many languages just presume the C grammar and syntax; they don't question it, and if they do, they quickly dismiss the alternatives because "we want to make our new language easy to pick up".

As soon as they do that, doors close, doors the language designers likely didn't even know existed.
 

nsaspook

Joined Aug 27, 2009
13,312
Well, I'm certainly not developing a language, just discussing the kinds of goals people deal with in the world of MCUs. That article also does a good job of showing C in its proper context: one language of many, and far from the most innovative.

My problem with C is that it influences/stifles the very way we think. This is true of human languages too: we use language to convey our thoughts, but we also form those thoughts within a language. So many languages just presume the C grammar and syntax; they don't question it, and if they do, they quickly dismiss the alternatives because "we want to make our new language easy to pick up".

As soon as they do that, doors close, doors the language designers likely didn't even know existed.
That influence may stifle you, but for most of us it's not a show-stopper, because we don't think in C or any other programming language. We think about physical machines, physics, and mechanical properties that are much more complex, complicated, and perplexing than the computer-language misfeatures that need to be considered during an implementation of those ideas.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
That influence may stifle you, but for most of us it's not a show-stopper, because we don't think in C or any other programming language. We think about physical machines, physics, and mechanical properties that are much more complex, complicated, and perplexing than the computer-language misfeatures that need to be considered during an implementation of those ideas.
You raise some interesting philosophical points. So far as I can see we actually think in terms of abstractions, models. Physics is all about models, mathematical models.

I also don't know how you measure complexity in order to say "much more complex, complicated and perplexing than computer language misfeatures".

A hallmark of sound thinking is the ability to reduce apparent complexity: to seek out underlying patterns and behaviors that reduce complexity, to identify symmetries, and so on. Striving for such things in programming languages is no exception, in my view anyway.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
As for stifling, here's a nice little challenge for the C devotees here. It's called the sum of digits: given a sequence of decimal digits, repeatedly add the digits until only one digit is left.

e.g.

  • 165 => 1+6+5 => 12 => 1+2 => 3
  • 8036 => 8+0+3+6 => 17 => 1+7 => 8

There is no defined bound on the length of the initial sequence; it cannot be empty and will contain only the digits 0 through 9.

Write me some C code to do this; that's the challenge. If we pass 8036 we get 8, if we pass 165 we get 3. Use any data types you like.
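For concreteness, here is one straightforward C sketch; it treats the sequence as a string of digit characters precisely because no fixed-width integer type can hold a sequence of unbounded length:

Code:
#include <stdio.h>

/* Reduce a string of decimal digits to a single digit by repeatedly
   summing the digits ("sum of digits" as described above). Assumes the
   digit count is small enough that the sum fits in an unsigned long. */
static unsigned digit_sum(const char *digits)
{
   unsigned long sum = 0;

   /* first pass: add up every digit character in the string */
   for (const char *p = digits; *p != '\0'; ++p)
   {
      sum += (unsigned long)(*p - '0');
   }

   /* keep reducing until a single digit remains */
   while (sum > 9)
   {
      unsigned long next = 0;
      while (sum > 0)
      {
         next += sum % 10;
         sum  /= 10;
      }
      sum = next;
   }
   return (unsigned)sum;
}

int main(void)
{
   printf("%u\n", digit_sum("165"));    /* prints 3 */
   printf("%u\n", digit_sum("8036"));   /* prints 8 */
   return 0;
}
Whether C expresses that elegantly is, of course, a different question.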
 

nsaspook

Joined Aug 27, 2009
13,312
You raise some interesting philosophical points. So far as I can see we actually think in terms of abstractions, models. Physics is all about models, mathematical models.

I also don't know how you measure complexity in order to say "much more complex, complicated and perplexing than computer language misfeatures".

A hallmark of sound thinking is the ability to reduce apparent complexity: to seek out underlying patterns and behaviors that reduce complexity, to identify symmetries, and so on. Striving for such things in programming languages is no exception, in my view anyway.
Yes, but how we think about those models is based on the foundations of learning we have. I'm not saying you're wrong; I'm saying it's mostly irrelevant which language (like Python, with all of its warts) we use to express those physical models in a manner compatible with computer hardware. Software provides a needed piece of functionality in a system, just like a gearbox. There's nothing special about it in the big picture of embedded processing on machines that cost millions in a factory that costs billions.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
Yes, but how we think about those models is based on the foundations of learning we have. I'm not saying you're wrong; I'm saying it's mostly irrelevant which language (like Python, with all of its warts) we use to express those physical models in a manner compatible with computer hardware. Software provides a needed piece of functionality in a system, just like a gearbox. There's nothing special about it in the big picture of embedded processing on machines that cost millions in a factory that costs billions.
Well, the thread is simply a discussion of programming language features that might be useful to an MCU programmer, so the relevance of anything I say should be judged against that stated topic.
 

nsaspook

Joined Aug 27, 2009
13,312
Well, the thread is simply a discussion of programming language features that might be useful to an MCU programmer, so the relevance of anything I say should be judged against that stated topic.
MCUs are the glue that holds it all together today. Part of the problem with writing hardware-interface MCU software for larger systems is making the source code understandable to future hardware engineers, who will open an archive folder containing the complete hardware/software PCB plans to build the old version of X when a new version of X is needed. This is why the field is very conservative about changes to language (or hardware) unless absolutely necessary.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
A NOP for timing purposes on anything other than a single threaded, single core non-instruction-cached CPU is pretty much useless.
Well that's an odd thing to say because you recently argued for its use for timing purposes.

On small CPUs/MCUs with predictable instruction cycle timing, a NOP takes a finite and predictable amount of time to execute (sans any interrupt processing that may occur before or after the instruction). But the execution time is dependent on clock speed, which is not the same for all applications, and may not be the same even within one single application.
A NOP's execution time is no less deterministic on a 64-bit CPU than it is on an 8-bit CPU. I also never said it was not dependent on clock speed; a portable mechanism could factor the clock speed into the implementation of the NOP operation.

Therefore, the construct NOP(n) is also non-portable.
The construct doesn't exist yet, remember; we're talking about something that could in principle be supported by a language, available to the programmer if and when they see a use for it. To make it portable, we'd include the processing necessary to compute the number of NOPs from the clock speed, the supplied nanosecond argument, and so on.

The proper, portable way is to have a macro called something like NOP_ns(n), where the preprocessor would generate the proper number of NOPs based on a predefined clock speed.

This is similar to how the macros Delay_us() and Delay_ms() work.
That's not portable, though, because the macro would need to be changed for different clock speeds. A portable way to implement it would be to incorporate the clock speed into the implementation of the NOP feature.

Look, if a NOP has a legitimate use in assembly-language code for some device, then it potentially has a use in code written in any language for that same device.
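To make that concrete, here is a minimal sketch in plain C of what "incorporating the clock speed" might look like. F_CPU_HZ and NOP_ONE() are assumptions for illustration; a real implementation would have to unroll the loop or subtract its own overhead (at nanosecond scales the loop costs more than the NOPs), and it assumes one NOP really consumes one instruction cycle, which is not guaranteed on every core:

Code:
#include <stdint.h>

#define F_CPU_HZ   64000000UL                  /* assumed core clock */
#define NS_PER_CYC (1000000000UL / F_CPU_HZ)   /* ns per instruction cycle */
#define NOP_ONE()  __asm__ volatile ("nop")    /* GCC-style inline asm */

/* Idle for at least `ns` nanoseconds by issuing NOPs; rounds the
   cycle count up so we never idle for less than requested. */
static inline void nop_ns(uint32_t ns)
{
   uint32_t cycles = (ns + NS_PER_CYC - 1u) / NS_PER_CYC;
   while (cycles--)
   {
      NOP_ONE();   /* loop overhead ignored in this sketch */
   }
}
The call sites, nop_ns(40) and so on, stay the same across targets; only F_CPU_HZ and NOP_ONE() change, which is the portability argument in a nutshell.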
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
"A NOP's execution time is no less deterministic on a 64 bit CPU than it is on an 8 bit CPU"
https://developer.arm.com/documentation/dui0473/m/arm-and-thumb-instructions/nop


and there are DCache or ICache timing-related issues.
Very well, I stand corrected. So on some CPUs, the way we'd implement a fixed-time "do nothing" would not simply be a stream of NOP instructions.

The goal was to be confident that we could delay the next statement by some number of nanoseconds or microseconds. In Joey's case, on his 8-bit devices, he has achieved that with a few NOP instructions; on other devices we might need to do it some other way.

Short settling delays seem to crop up often as I read about these MCUs, and since they are a problem-domain concept, supporting them at the language level seems reasonable, to me anyway. If we want a language designed for programming hardware like this, then the language should, IMHO, map to that problem domain; that's all this thread is really about.
 

joeyd999

Joined Jun 6, 2011
5,287
Very well, I stand corrected. So on some CPUs, the way we'd implement a fixed-time "do nothing" would not simply be a stream of NOP instructions.

The goal was to be confident that we could delay the next statement by some number of nanoseconds or microseconds. In Joey's case, on his 8-bit devices, he has achieved that with a few NOP instructions; on other devices we might need to do it some other way.

Short settling delays seem to crop up often as I read about these MCUs, and since they are a problem-domain concept, supporting them at the language level seems reasonable, to me anyway. If we want a language designed for programming hardware like this, then the language should, IMHO, map to that problem domain; that's all this thread is really about.
Going up the ladder, things tend toward hardware solutions rather than software. A hardware timer uses no CPU time, yet it can time intervals far shorter, and far more accurately, than software can.
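A sketch of the simplest form of that idea, with TIMER_COUNT and TICKS_PER_US as hypothetical stand-ins for whatever free-running counter and tick rate the target MCU actually provides; this busy-wait version still occupies the CPU, whereas a compare-match interrupt on the same timer would free it entirely:

Code:
#include <stdint.h>

extern volatile uint16_t TIMER_COUNT;   /* hypothetical free-running 16-bit counter */
#define TICKS_PER_US 8u                 /* assumed: timer clocked at 8 MHz */

/* Wait for a given number of timer ticks. Unsigned wrap-around
   arithmetic keeps this correct even if the counter overflows
   between the snapshot and the deadline.
   e.g. wait_ticks(10u * TICKS_PER_US) waits roughly 10 microseconds. */
static void wait_ticks(uint16_t ticks)
{
   uint16_t start = TIMER_COUNT;
   while ((uint16_t)(TIMER_COUNT - start) < ticks)
   {
      /* could sleep or do useful work here instead of spinning */
   }
}
The accuracy now comes from the timer's clock rather than from counting instructions, which is the point being made above.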
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
Going up the ladder, things tend toward hardware solutions rather than software. A hardware timer uses no CPU time, yet it can time intervals far shorter, and far more accurately, than software can.
Joey, you are the person who wrote:

New question -- with a simple example:

Let's say I want to skip a few instruction cycles, say a series of 10 NOPs. I can write in my code Nop(); 10 times, obviously.

In .asm, I can also write 10 NOPs in succession, but I can write a macro like:

<snipped for brevity>

And the macro expands at assembly time.

How do I do similar in C?
So use a timer, use whatever you like; all the talk of "nop" or "idle" becoming a possible language feature arose from your question. You use it in assembler, and you'd like to be able to use it in C, in a high-level language!
 

joeyd999

Joined Jun 6, 2011
5,287
Joey, you are the person who wrote:



So use a timer, use whatever you like; all the talk of "nop" or "idle" becoming a possible language feature arose from your question. You use it in assembler, and you'd like to be able to use it in C, in a high-level language!
Again: my work is on small, 8-bit CPUs.

Aside from NOPs, there is no other way to generate nanosecond timing.

You are looking for general cases from small micros to supercomputers -- from a software perspective they all look the same to you. From a hardware perspective, they are not.
 

Thread Starter

ApacheKid

Joined Jan 12, 2015
1,617
Again: my work is on small, 8-bit CPUs.

Aside from NOPs, there is no other way to generate nanosecond timing.

You are looking for general cases from small micros to supercomputers -- from a software perspective they all look the same to you. From a hardware perspective, they are not.
I'm not looking for anything. You asked whether C could provide nanosecond delays like a NOP does in assembler, and I reasoned that you'd find it helpful if it did.

You were asking for a way for a language to give you nanosecond delays; we were talking about programming languages.

Finally, please do not speculate that "they all look the same to you"; your opinion of how things look to me has no place here. We were discussing programming languages.
 