How do you schedule tasks?

djsfantasi

Joined Apr 11, 2010
9,163
How about a cooperative multi-processing system, where a task is broken into the time needed for one command, plus a parameter for a required pause before the next command is given control? Thus, the CPU rotates through a list of concurrent tasks. Each task is allowed to complete one instruction before the next step/instruction is allowed to execute, and there is a "wait period" before the next instruction is executed.

This works as long as the maximum time (including delay) for a single step is greater than the "normal" time needed to execute each of the other steps.

It also works if higher-priority tasks are allowed to execute between each pair of lower-priority tasks.
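As a toy illustration (not any particular RTOS), the scheme above might look like this in C, with a simulated millisecond counter standing in for the hardware timer and trivial invented task bodies:

```c
#include <stdint.h>

#define NTASKS 3

typedef struct {
    void (*step)(void);     /* one "command" of the task             */
    uint32_t wait_ms;       /* required pause before the next step   */
    uint32_t next_due;      /* earliest tick at which step may run   */
} Task;

static uint32_t now_ms;     /* simulated tick; a real system would read a timer */

static int runs[NTASKS];
static void step0(void) { runs[0]++; }
static void step1(void) { runs[1]++; }
static void step2(void) { runs[2]++; }

static Task tasks[NTASKS] = {
    { step0,  5, 0 },
    { step1, 10, 0 },
    { step2, 20, 0 },
};

/* One rotation through the task list: each task whose pause has
 * expired executes exactly one step, then must wait wait_ms before
 * its next step is given control. */
static void scheduler_tick(void)
{
    for (int i = 0; i < NTASKS; i++) {
        if (now_ms >= tasks[i].next_due) {
            tasks[i].step();
            tasks[i].next_due = now_ms + tasks[i].wait_ms;
        }
    }
    now_ms++;   /* in real hardware this advance comes from a timer ISR */
}
```

Each pass gives every ready task one step, so as long as all the other steps fit inside one task's wait period, no task starves.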
 

Papabravo

Joined Feb 24, 2006
21,228
It was a challenge for me 40 years ago: Modula-2 with 68000 assembly register operations for a scheduler module in some OS code I was writing.
Code:
(*$P- *)
PROCEDURE       newtrap; (* IOTRANSFER executes this code before its *)
BEGIN                    (* normal vector. *)

        CODE(46fcH,2700H);  (* stop interrupts *)
        CODE(8b9H,intnum,0ffffH,0fa11H); (* BCLR intnum, ISRB *)
        CODE(8b9H,intnum,0ffffH,0fa15H); (* BCLR intnum, MASKB *)
        CODE(817H,5);                 (* BTST 5, (a7) check supermode *)
        CODE(6700H+2);                (* BEQ.S over rte *)
        CODE(4e73H);                  (* RTE supermode return *)
        CODE(48e7H,0fffeH);   (* save regs movem  *)
        CODE(204fH);    (* move.l ssp,a0 *)
        currentprocess^.ssp:=REGISTER(8)+66;

        IF (currentprocess^.ssp#sspval)
           AND (currentprocess^.ssp#accsuperstack) THEN
           CODE(4cdfH,7fffH); (* restore regs movem *)
           CODE(4e73H);       (* rte *)
        ELSE

           IncSlice;   (* cpu time update *)
           s0:=currentprocess;
           LOOP                    (* find next process store in s0 *)
              s0:=s0^.next;
              WITH s0^ DO
                IF (gemsave[14]#0) THEN (* If flags set then process *)
                   CPFlagptr:=ADR(gemsave[14]);

                   IF sleep IN CPFlagptr^ THEN (* sleep flag *)
                      IF (gemsave[13]#0) AND
                         (ADDRESS(hz200) >= gemsave[13]) THEN
                         active:=TRUE;
                         gemsave[13]:=0;
                         EXCL(CPFlagptr^,wait);  (* clear wait flag *)
                         EXCL(CPFlagptr^,sleep); (* clear sleep flag *)
                      END;
                   END;
                   IF wait IN CPFlagptr^ THEN (* wait flag *)
                      IF (waitloc^ = LONGCARD(LAnd(gemsave[11],gemsave[12]))) THEN
                         active:=TRUE;
                         gemsave[13]:=0;
                         EXCL(CPFlagptr^,wait);  (* clear wait flag *)
                         EXCL(CPFlagptr^,sleep); (* clear sleep flag *)
                      END;
                   END;

                END;
                IF (active) AND (wsp#NIL) THEN
                   IF misc[0]>=highpri THEN
                      IF pri>highpri THEN highpri:=pri END;
                      misc[0]:=pri;
                      EXIT;
                   ELSE
                      INC(misc[0]); (* age process til it's ready to run *)
                   END;
                END;
              END; (* with *)
           END; (* end LOOP *)

        (* Swap GEM pointers for the processes *)
           WITH currentprocess^ DO
           gemsave[0]:=gemsaveGvec^;
           gemsave[1]:=linea;
           gemsave[2]:=gemdos;
           gemsave[3]:=gsxgem;
           gemsave[4]:=tbios;
           gemsave[5]:=xbios;
           gemsave[6]:=linef;
           gemsave[7]:=level2;
           gemsave[8]:=level4;
           gemsave[9]:=shellp;
           biosval:=biospointer;
           termvec:=ptermvec;
           END;
           WITH s0^ DO
           biospointer:=biosval;
           ptermvec:=termvec;
           gemsaveGvec^:=gemsave[0];
           linea:=gemsave[1];
           gemdos:=gemsave[2];
           gsxgem:=gemsave[3];
           tbios:=gemsave[4];
           xbios:=gemsave[5];
           linef:=gemsave[6];
           level2:=gemsave[7];
           level4:=gemsave[8];
           shellp:=gemsave[9];
           END;

           SetTimer;
           s0^.slice:=hz200; (* set next cycle start *)
           SETREG(8,oldtrap);  (* move IOTRANSFER trap adr *)
           CODE(43faH,10); (* lea 12(pc),a1 *)
           CODE(2288H); (* move.l a0,(a1) *)
           CODE(4cdfH,7fffH); (* restore regs movem *)
           CODE(4ef9H,0,0) (* jmp back to routine *)
        END;
END     newtrap;
(*$P+ *)
Way cool!
 

Papabravo

Joined Feb 24, 2006
21,228
How about a cooperative multi-processing system, where a task is broken into the time needed for one command, plus a parameter for a required pause before the next command is given control? Thus, the CPU rotates through a list of concurrent tasks. Each task is allowed to complete one instruction before the next step/instruction is allowed to execute, and there is a "wait period" before the next instruction is executed.

This works as long as the maximum time (including delay) for a single step is greater than the "normal" time needed to execute each of the other steps.

It also works if higher-priority tasks are allowed to execute between each pair of lower-priority tasks.
There is no lack of possible strategies to meet a given set of constraints. It is one consequence of the lack of unique solutions. The best way to make a robust system is by design, you certainly can't do it by testing.
 

BobaMosfet

Joined Jul 1, 2009
2,113
Hi,
I'm trying to understand how a preemptive scheduler works. I did a Google search and saw many tutorials, but I still do not understand how a preemptive scheduler works.

If someone can explain by giving a specific example, it would be a great help.

I am taking an example. It may be wrong or unrelated. It's just my attempt to understand a preemptive scheduler.

Let's assume we have a 1 ms or 10 ms tick.

There may be several periodic tasks:

Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms

Typically the 15 ms task is the highest priority, then the 50 ms task, then the 100 ms task.

This is a situation I have created to learn the basics. It has nothing to do with any real project.

I hope someone will look at this topic and help with a specific example.
@aspirea5745 Start on a simpler problem. Create a clock using an interrupt. I don't mean just make an interrupt happen every so often; I mean make a clock using an interrupt, something you can use to keep track of relative time. This is THE basis for all operating systems, and the very first thing you create. Everything else hangs off of this.

Don't make it more difficult than it is. You keep trying to bite off too big a chunk. Stop. Do small things, and then voila... understand the small things, and the big things will become understandable. You're eager, I get that. But when you want to go fast, go SLOW.
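The "clock from an interrupt" idea can be sketched in a few lines of C. Here a hypothetical 1 ms timer ISR does nothing but count, and all relative timekeeping derives from that count (the names `timer_isr`, `millis`, and `elapsed` are invented for the sketch):

```c
#include <stdint.h>

/* The tick count is the whole clock. volatile because it is written
 * from interrupt context and read from the main loop. */
static volatile uint32_t ticks;

/* In real hardware this body would be the timer interrupt handler,
 * fired once per millisecond. */
void timer_isr(void) { ticks++; }

uint32_t millis(void) { return ticks; }

/* Elapsed ms since a saved timestamp; unsigned subtraction keeps
 * this correct even across counter wrap-around. */
uint32_t elapsed(uint32_t since) { return millis() - since; }
```

A task then saves `uint32_t t0 = millis();` and checks `elapsed(t0) >= 15` instead of touching the timer hardware; everything else in a scheduler hangs off this one counter.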
 
Last edited:
The background task could be "Wait for interrupt"

On the older systems, it was rotate the CPU lights. For fun one day, someone wrote an assembly program that would rotate the lights the wrong way. It was in user space.
 

Thread Starter

aspirea5745

Joined Apr 17, 2019
99
@aspirea5745 Start on a simpler problem. Create a clock using an interrupt. I don't mean just make an interrupt happen every so often; I mean make a clock using an interrupt, something you can use to keep track of relative time. This is THE basis for all operating systems, and the very first thing you create. Everything else hangs off of this.
@BobaMosfet I'm trying to understand how a preemptive scheduler works, with an example. I don't think I have tried to attempt any difficult tasks yet. I wasn't asking for any code; I just wanted to understand the concept.

What @Papabravo explained in the post #12 example was an excellent description; it was exactly the type of explanation with an example that I was looking for.
 

click_here

Joined Sep 22, 2020
548
I suggest that you look at "Patterns in C" by Adam Petersen, and work your way through to part 4 "OBSERVER".

This series looks at some object-oriented approaches using pure C.

It's basically a neat way to create a watch using the "Observer" design pattern in C.

In the example, when an event ("msTick") happens, all of the attached processes (known as "observers") are executed.

What you can do is extend this to create priority levels and manage how frequently those attached observers are fired, i.e. if an observer is due, run it; otherwise skip it.
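A rough sketch of that extension, with a per-observer period bolted onto the tick dispatch (names and the skip-if-not-due policy are invented here, not taken from Petersen's articles):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    void (*notify)(void);   /* called when this observer is due    */
    uint16_t period_ms;     /* fire once every period_ms ticks     */
    uint16_t countdown;     /* ticks remaining until the next fire */
} Observer;

#define MAX_OBSERVERS 8
static Observer *observers[MAX_OBSERVERS];
static size_t n_observers;

void attach(Observer *o)
{
    if (n_observers < MAX_OBSERVERS) {
        o->countdown = o->period_ms;
        observers[n_observers++] = o;
    }
}

/* The subject: called once per millisecond tick. Due observers run;
 * the rest are skipped until their countdown reaches zero. */
void msTick(void)
{
    for (size_t i = 0; i < n_observers; i++) {
        if (--observers[i]->countdown == 0) {
            observers[i]->countdown = observers[i]->period_ms;
            observers[i]->notify();
        }
    }
}

/* Two example observers with different rates. */
static int fast_runs, slow_runs;
static void on_fast(void) { fast_runs++; }
static void on_slow(void) { slow_runs++; }
static Observer fast_obs = { on_fast, 10, 0 };
static Observer slow_obs = { on_slow, 50, 0 };
```

Priority levels could be added by keeping the observer array sorted, so higher-priority observers are always notified first within a tick.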

On a side note, are you the TS here?
https://www.microchip.com/forums/m/tm.aspx?m=1174397
 

nsaspook

Joined Aug 27, 2009
13,315
I hate design patterns and OOP for this type of software. OOP has its place, but here at the pure machine level, excess abstraction of hardware into some human-behavior framework is a sin IMO.
The main reason C is still popular at this level is that what you write at the source level is a pretty close match to what a non-optimizing compiler will produce in assembly, if you know the machine architecture and the HLL system well. This makes debugging at the source level a hell of a lot easier.
 
Last edited:

Papabravo

Joined Feb 24, 2006
21,228
I hate design patterns and OOP for this type of software. OOP has its place, but here at the pure machine level, excess abstraction of hardware into some human-behavior framework is a sin IMO.
The main reason C is still popular at this level is that what you write at the source level is a pretty close match to what a non-optimizing compiler will produce in assembly, if you know the machine architecture and the HLL system well. This makes debugging at the source level a hell of a lot easier.
I'm with you on this one. The whole OOP concept is bankrupt, and some of the follow-on detritus, like agile development, was even worse.
 

click_here

Joined Sep 22, 2020
548
I'm on the other side of the fence on this one; OOP has very much been the product of its own success.

I think that coding C as a direct rewording of assembly language leads to premature optimization.

Although I am primarily a C programmer, I do a lot of coding in C#, so I understand first-hand that the simplicity and maintainability of OOP in large programs is much better than non-OOP (and I know just how painful trying to update someone else's assembly program can be...).

Things like polymorphism for reusing and extending code are just wonderful to use and lead to fewer problems when making updates.

Loose coupling and levels of indirection make your code more resilient when changes are made, especially across a team. Nothing is more frustrating than making one change and having an unexpected bug pop up somewhere completely different because it was tightly coupled to something else!
 

Thread Starter

aspirea5745

Joined Apr 17, 2019
99
@Papabravo

Hi, I need clarification about the example you gave in post #12. I have spent time trying to figure things out myself before asking you.

The mystery for me is: when we know each task's deadline and priority, how do you schedule the tasks?

In our example, we know both the deadline and the priority of each task.

Specifically, I need to understand the following paragraph, which I do not understand:

What is 300 ms in the example? How did you arrive at those timings?

Now if I take a 300 ms interval (the least common multiple of 15, 50, and 100), I will require 30 1 ms time slices for Task C, 30 for Task B, and 40 for Task A, so the total number of time slices needed in 300 ms is 100. This is a system load that can easily be handled.

So my strategy is, on every 1 ms timer tick, to run the highest-priority task that is "Ready to Run". If no task is "Ready to Run", I run the do-nothing background task.
 

click_here

Joined Sep 22, 2020
548
Bad programmers make bad programs, and they don't need any specific tool to do it.

The author of the article from that link does have a passionate opinion, though, starting with "I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased."

A lot of the opinions he stated did not line up with my experience, so I remain unconvinced.

Reception of the opinion piece doesn't seem to show a big following - "(5) Question" https://www.quora.com/Is-senior-ful...And-is-OOP-in-fact-a-trillion-dollar-disaster

But we can all agree to disagree, right?
 

MIS42N

Joined Jan 25, 2013
22
The OP gives the example of:
Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms
From a hardware perspective, the ability to switch between tasks relies entirely on interrupts. If each of A, B, and C is driven by its own hardware interrupt (e.g. each running a timer that interrupts at the required interval), then there may be no need for specific scheduling code. If there is no event (a timer expiring), the processor sits in an idle loop (e.g. an instruction that says Loop GOTO Loop). When an interrupt occurs, the processor saves the state of the idle loop (which may be as little as the address of the current instruction) and initiates the code associated with that interrupt. The simplest processors just disable interrupts while the interrupt code is running and enable them again when the interrupt code indicates it is finished. It is the equivalent of [disable interrupts and call interrupt code] interrupt code [return to interrupted task and enable interrupts].

Immediately there is a constraint: the total time for servicing a B and a C interrupt must be less than 15 ms, otherwise Task A may miss its 15 ms slot. This is solved if one or both of B and C enable interrupts early in their own interrupt code, allowing A to run when needed. When A returns, instead of returning to the idle loop it returns to the interrupt code of B or C. When B or C complete, they just [return to interrupted task] and don't need to enable interrupts, because they did it early in their own code.

But what if there is only one timer? In that case, scheduling code is required. The timer may be set to 5 ms; when the scheduler is activated by the interrupt, it has to decide whether to run A, B, or C, or just return. It maintains a queue of tasks (e.g. at 150 ms all three tasks need to run), and when any task completes, it returns to the queue handler, which starts the next task in the queue or, if the queue is empty, returns to the idle code. It is necessary that the scheduler code allow interrupts, and that tasks A, B, and C do not interfere with each other.
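The single-timer case might be sketched like this, assuming a hypothetical 5 ms tick and the 15/50/100 ms tasks from the thread. For brevity the due tasks run directly in the tick handler instead of being queued and run with interrupts re-enabled, which a real system would need so Task A can't miss its 15 ms slot:

```c
#include <stdint.h>

#define TICK_MS 5   /* assumed single hardware timer period */

typedef struct {
    void (*run)(void);
    uint16_t period_ms;     /* 15, 50 or 100 in the example */
    uint16_t elapsed_ms;    /* time since the task last ran */
} PTask;

static int ran[3];          /* run counters for the demo tasks */
static void taskA(void) { ran[0]++; }
static void taskB(void) { ran[1]++; }
static void taskC(void) { ran[2]++; }

static PTask table[] = {    /* ordered highest priority first */
    { taskA,  15, 0 },
    { taskB,  50, 0 },
    { taskC, 100, 0 },
};

/* Called from the 5 ms timer interrupt: advance each task's elapsed
 * time and run every task that has become due, highest priority first. */
void on_timer_tick(void)
{
    for (unsigned i = 0; i < 3; i++) {
        table[i].elapsed_ms += TICK_MS;
        if (table[i].elapsed_ms >= table[i].period_ms) {
            table[i].elapsed_ms = 0;
            table[i].run();
        }
    }
}
```

Over one 300 ms frame (60 ticks) this yields exactly the 20/6/3 run counts the periods demand.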

As a real world example, I have a processor that has a timer interrupt every 250us and also a software UART to receive data at 9600 baud. There is a background task that runs every second. The timer interrupt counts to 40000 and sets a flag. The background task is looping, wait for flag set, when it is, it clears the flag and runs the once a second task, goes back to looping. Meanwhile the software UART is driven by two interrupts, both of which are only a few instructions long. When a character is assembled, the state of the interrupted process is saved and interrupts enabled. This allows the 250us timer to be serviced. The character is put in a buffer, and when a message is completed, state is saved in another place and message processing initiated.

Effectively, either a timer or soft UART interrupt can be serviced (and the sum of both is less than 250us or otherwise one would lock out the other), interrupting a process that is moving the previous character into a buffer and testing for a complete message, which may have interrupted the processing of a previously completed message, which interrupted the background task. As long as each task doesn't change something another task relies on, and there is enough processor cycles to complete all tasks, they play nicely together.

Each type of processor has a different way of handling interrupts, so the details of how to implement a scheduler vary. But I hope this gives some clue as to what goes on behind the scenes.
 

Papabravo

Joined Feb 24, 2006
21,228
A little bit of mathematics background. Given a set of numbers you can compute two other useful numbers called the Greatest Common Divisor and the Least Common Multiple. We need a common multiple of the scheduling times (15, 50 and 100) because we wish to see if there is enough time to do the task. The trivial answer is to multiply the three times together which gives us 75,000 ms. Question: is there a smaller number that will do the job? The answer is yes and it involves looking at the prime factors of each number in the set. You can see for yourself that 300 is a common multiple and take my word for it that it is the smallest one. This is important because we need to analyze the execution of an integer number of tasks that execute an integer number of times, each for a maximum amount of time in that fixed 300 ms. interval.
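Concretely, 15 = 3 · 5, 50 = 2 · 5^2, and 100 = 2^2 · 5^2, so the least common multiple is 2^2 · 3 · 5^2 = 300. A small sketch of the usual GCD-based computation:

```c
#include <stdint.h>

/* Euclid's algorithm for the greatest common divisor. */
static uint32_t gcd(uint32_t a, uint32_t b)
{
    while (b != 0) {
        uint32_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* lcm(a,b) = a*b / gcd(a,b); dividing first avoids overflow. */
static uint32_t lcm(uint32_t a, uint32_t b)
{
    return (a / gcd(a, b)) * b;
}
```

Folding it over the three periods, `lcm(lcm(15, 50), 100)`, gives the 300 ms analysis interval.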

I specifically assumed that there were no hardware interrupts associated with those tasks. It is the system timer tick alone that is responsible for driving the scheduler.

The mechanics of picking "the highest priority task that is ready to run" is a matter of managing a constantly changing dataset. So you have a set of tasks. Each task has a "present" state; this state variable is updated each time the scheduler runs. Let's just pick a set of states:
  1. Running - the task which currently has control of the processor
  2. Ready To Run - all of the tasks which can run and are not blocked for some reason
  3. Blocked - all of the tasks that are waiting for an event
  4. Suspended - all of the tasks that have recently run and are not blocked
This example is not intended to be complete or exhaustive; it is for illustrative purposes. Imagine that there are three queues, one for each of the states Ready To Run, Blocked, and Suspended. Each queue can hold one or more tasks. When the scheduler is invoked as the result of a timer tick, the Running task moves to Suspended, and some other tasks may move from Suspended or Blocked to Ready To Run. You then go through the Ready To Run queue, pick the highest-priority task on it, and make it the Running task.
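A minimal sketch of that state shuffle in C; the state names, the flat array, and the linear priority scan are illustrative only, not from any particular RTOS:

```c
#include <stddef.h>

typedef enum { RUNNING, READY, BLOCKED, SUSPENDED } State;

typedef struct {
    int priority;           /* higher number = higher priority */
    State state;
} Tcb;                      /* invented "task control block"   */

#define NUM_TASKS 4
static Tcb tasks[NUM_TASKS];
static Tcb *running;

/* One scheduler invocation (timer tick): the Running task moves to
 * Suspended, then the highest-priority Ready task becomes Running.
 * The "policy" of moving Blocked/Suspended tasks back to Ready is
 * event-driven and deliberately omitted here. */
Tcb *schedule_tick(void)
{
    if (running != NULL) {
        running->state = SUSPENDED;
        running = NULL;
    }
    Tcb *best = NULL;
    for (int i = 0; i < NUM_TASKS; i++) {
        if (tasks[i].state == READY &&
            (best == NULL || tasks[i].priority > best->priority)) {
            best = &tasks[i];
        }
    }
    if (best != NULL) {
        best->state = RUNNING;
        running = best;
    }
    return running;
}
```

Real schedulers keep per-state queues instead of scanning an array, so the pick is O(1), but the state transitions are the same.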

There is a great deal of "policy" and decision making that I have glossed over for the sake of this example. There are books by many authors who have better, deeper, and more rigorous descriptions along with code you can follow but this is the gist of it.
 
Last edited:

Thread Starter

aspirea5745

Joined Apr 17, 2019
99
@Papabravo Now it is possible that you will be angry with me, but this is my last question, because I did not understand why you selected 30, 30, and 40.

I will require 30 1 ms time slices for Task C, 30 for Task B, and 40 for Task A, so the total number of time slices needed in 300 ms is 100. This is a system load that can easily be handled.
I'm also reading books, but I'm sorry if you are having trouble with my questions.

Warm regards
aspire
 

Papabravo

Joined Feb 24, 2006
21,228
Task C needs to run for 10 ms (maximum), three times in 300 ms.
Task B needs to run for 5 ms (maximum), six times in 300 ms.
Task A needs to run for 1.5 ms (maximum), rounded up to 2 ms because you can't have half a slice, 20 times in 300 ms.

This is why the LCM (least common multiple) is important in this problem: it quickly reveals how close you are to the theoretical limit. If, summed over all tasks, the maximum task time multiplied by the number of times each task must run exceeds the LCM interval, then you are in BIG TROUBLE. I have never tried to push an actual system to the theoretical limit, but if I had to guess, somebody has done exactly that just to see how close they could get, and with what level of confidence. The result would be stated as a probability, and as we all know, probability 0 doesn't mean it won't happen and probability 1 doesn't mean it will. You have to know the variance.
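The 30/30/40 budget can be checked mechanically. This tiny helper (names invented for the sketch) computes, for one LCM frame, slices needed = (frame / period) runs times worst-case slices per run:

```c
typedef struct {
    int period_ms;      /* how often the task must run           */
    int worst_ms;       /* worst-case 1 ms slices per run        */
} TaskSpec;

/* Slices a task consumes in one frame of frame_ms milliseconds.
 * Assumes the frame is an exact multiple of the period, which is
 * what choosing the LCM guarantees. */
int slices_needed(int frame_ms, TaskSpec t)
{
    return (frame_ms / t.period_ms) * t.worst_ms;
}
```

For the 300 ms frame: Task C gives 3 x 10 = 30, Task B gives 6 x 5 = 30, Task A gives 20 x 2 = 40, a total of 100 of the 300 available slices, i.e. roughly one-third load.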
 
Last edited:

nsaspook

Joined Aug 27, 2009
13,315
A little bit of mathematics background. Given a set of numbers you can compute two other useful numbers called the Greatest Common Divisor and the Least Common Multiple. We need a common multiple of the scheduling times (15, 50 and 100) because we wish to see if there is enough time to do the task. The trivial answer is to multiply the three times together which gives us 75,000 ms. Question: is there a smaller number that will do the job? The answer is yes and it involves looking at the prime factors of each number in the set. You can see for yourself that 300 is a common multiple and take my word for it that it is the smallest one. This is important because we need to analyze the execution of an integer number of tasks that execute an integer number of times, each for a maximum amount of time in that fixed 300 ms. interval.
...
There is a great deal of "policy" and decision making that I have glossed over for the sake of this example. There are books by many authors who have better, deeper, and more rigorous descriptions along with code you can follow but this is the gist of it.
Your excellent backgrounder reminded me of the old Linux scheduler.

A description of the original Linux scheduler design concepts.
https://github.com/bdaehlie/linux-cpu-scheduler-docs/blob/master/linux_cpu_scheduler.pdf

https://www.linuxpromagazine.com/Online/News/Con-Kolivas-Introduces-New-BFS-Scheduler
After two years deep into Linux, the Australian Con Kolivas has emerged with a new scheduler that above all should provide significantly better performance on dual and quad processors.
 

nsaspook

Joined Aug 27, 2009
13,315
Bad programmers make bad programs, and they don't need any specific tool to do it.

The author of the article from that link does have a passionate opinion, though, starting with "I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased."

A lot of the opinions he stated did not line up with my experience, so I remain unconvinced.

Reception of the opinion piece doesn't seem to show a big following - "(5) Question" https://www.quora.com/Is-senior-ful...And-is-OOP-in-fact-a-trillion-dollar-disaster

But we can all agree to disagree, right?
For me it's not a disagreement about the usability of OOP in some programming arenas; we just shouldn't sell it as the solution for all of them. In low-level embedded and systems bit-banging, OOP causes, IMO, a loss of clarity in a programming problem that is usually very precise and detailed, and that at times has only one correct solution: a set of sequences and procedures structured by fixed hardware. Programming in C while thinking about these sequence requirements is not premature optimization; it is a requirement for correctness of sequencing on each platform.
 