How do you schedule tasks?

Thread Starter

aspirea5745

Joined Apr 17, 2019
99
Hi,
I'm trying to understand how a preemptive scheduler works. I did a Google search and saw many tutorials, but I still do not understand how a preemptive scheduler works.

If someone can explain with a specific example, it would be a great help.

I am taking an example; it may be wrong or not related. It's just my attempt to understand a preemptive scheduler.

Let's assume we have a 1 ms or 10 ms tick.

There may be several periodic tasks:

Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms

Typically the 15 ms task is the highest priority, then the 50 ms task, then the 100 ms task.

This is a situation I have created to learn the basics. It has nothing to do with any real project.

I hope someone will look at this topic and help with a specific example.
 

BobTPH

Joined Jun 5, 2013
3,332
Why not start by understanding how a preemptive scheduler would handle 3 equal-priority tasks that are not real-time? Then add the priority and real-time requirements to see what additional logic is needed.

Bob
 

atferrari

Joined Jan 6, 2004
4,332
Why not start by understanding how a preemptive scheduler would handle 3 equal-priority tasks that are not real-time? Then add the priority and real-time requirements to see what additional logic is needed.
Bob
@BobTPH
Genuinely interested in this thread; not intending to derail or interfere with the OP.

What is "not real-time" in this context?
 

Papabravo

Joined Feb 24, 2006
16,188
@BobTPH
Genuinely interested in this thread; not intending to derail or interfere with the OP.

What is "not real-time" in this context?
It means there is no constraint on the future time at which a particular task runs. In a real-time context, certain tasks must be run within certain time periods. Failure to meet one of those constraints would result in a "system" failure.
 

atferrari

Joined Jan 6, 2004
4,332
It means there is no constraint on the future time at which a particular task runs. In a real-time context, certain tasks must be run within certain time periods. Failure to meet one of those constraints would result in a "system" failure.
Could I say then, "at any time, any duration" for a certain task?
 

Papabravo

Joined Feb 24, 2006
16,188
Hi,
I'm trying to understand how a preemptive scheduler works. I did a Google search and saw many tutorials, but I still do not understand how a preemptive scheduler works.

If someone can explain with a specific example, it would be a great help.

I am taking an example; it may be wrong or not related. It's just my attempt to understand a preemptive scheduler.

Let's assume we have a 1 ms or 10 ms tick.

There may be several periodic tasks:

Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms

Typically the 15 ms task is the highest priority, then the 50 ms task, then the 100 ms task.

This is a situation I have created to learn the basics. It has nothing to do with any real project.

I hope someone will look at this topic and help with a specific example.
Rather than looking in from the outside it might be easier to consider what happens from the inside.

  1. In a cooperative multitasking environment the scheduler picks one of the tasks that is in the "Ready To Run" state and "runs" it by ceding control of the processor to that task. Once control has been given to a scheduled task, it runs to completion. Interrupts can still be serviced, including the timer interrupt, but control always returns to the interrupted task; that is, the task that was running at the time of the interrupt.
  2. In a preemptive multitasking environment, the scheduler still picks one of the tasks that is in the "Ready To Run" state and "runs" it by ceding control of the processor to that task. Once control has been given to a scheduled task, it may or may not run to completion. When an interrupt, of ANY type, comes along, control does not return to the interrupted task; it goes to the scheduler, which then scans all the queues of "Waiting" tasks for the highest-priority task that is in the "Ready To Run" state. It places that task in the "Run" state and cedes control of the processor to that task. The new task also has no guarantee that it will be allowed to run to completion. One consequence of this algorithm is that there must ALWAYS be at least one task in the "Ready To Run" state. This is usually satisfied by having a "Background Task" that is always "Ready To Run" and does absolutely nothing.
Maybe this helps and maybe it doesn't, but I thought it might be worth a shot.
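The "scan for the highest-priority Ready-To-Run task" step above can be sketched in a few lines of C. This is a minimal illustration, not code from any real kernel; the task table, priority numbers, and names are all invented. Task 0 plays the role of the always-ready background task, so selection can never fail.

```c
#include <assert.h>

/* Task states, reduced to the two that matter for the selection step. */
enum state { WAITING, READY };

#define NTASKS 4

struct task {
    enum state st;
    int        prio;        /* bigger number = higher priority */
};

/* Task 0 is the do-nothing background task: always READY, lowest priority. */
struct task tasks[NTASKS] = {
    { READY,   0 },         /* background task */
    { WAITING, 3 },         /* "Task A" */
    { WAITING, 2 },         /* "Task B" */
    { WAITING, 1 },         /* "Task C" */
};

/* Return the index of the highest-priority READY task. */
int pick_next(void)
{
    int best = 0;                       /* background task as fallback */
    for (int i = 1; i < NTASKS; i++)
        if (tasks[i].st == READY && tasks[i].prio > tasks[best].prio)
            best = i;
    return best;
}
```

In a real preemptive kernel this scan runs inside the interrupt path, and the chosen task's saved context is restored instead of the interrupted task's.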
 

Papabravo

Joined Feb 24, 2006
16,188
Could I say then, "at any time, any duration" for a certain task?
Yes, that would be accurate. If there are no time constraints, then a simple scheduler could run a "list" of tasks in an arbitrary order and allow each of them to run to completion. I don't think I have ever encountered such a situation, however. As soon as you introduce a peripheral device that sends and receives data, you suddenly have time constraints that you must satisfy or data will be lost. That is tantamount to "system failure".
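For the unconstrained case just described, the "scheduler" really can be a plain list of functions run to completion in order. A sketch in C; the task functions and the counter are invented purely so the example can demonstrate itself.

```c
#include <assert.h>

static int work_done = 0;

/* Stand-ins for real task bodies. */
static void task_a(void) { work_done++; }
static void task_b(void) { work_done++; }
static void task_c(void) { work_done++; }

/* The whole "scheduler": a list of task functions. */
static void (*task_list[])(void) = { task_a, task_b, task_c };

void run_list_once(void)
{
    for (unsigned i = 0; i < sizeof task_list / sizeof task_list[0]; i++)
        task_list[i]();     /* each task runs to completion, in list order */
}
```

The moment any of these tasks must finish by a deadline, this structure stops being adequate, which is exactly the point being made above.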
 

Papabravo

Joined Feb 24, 2006
16,188
Out of interest: preemptive scheduling has saved a few bits of hardware over the years.
https://en.wikipedia.org/wiki/Apollo_Guidance_Computer
This software design was a forerunner of the time sharing systems that made their appearance in the middle 1960s on machines such as the DEC PDP-6 & PDP-10 and the SDS-940 and the Sigma 7
They even had equivalents of the POODOO. It was better than just crashing.
 

Thread Starter

aspirea5745

Joined Apr 17, 2019
99
It means there is no constraint on the future time at which a particular task runs. In a real-time context, certain tasks must be run within certain time periods. Failure to meet one of those constraints would result in a "system" failure.
@Papabravo

I already said my example might be wrong; it may not be related to this. It is easier for me to understand theory with examples.

Does it make sense now?

Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms

Task A: high-priority task
Task B: middle-priority task
Task C: low-priority task

The deadline for Task A is 20 ms; if it's missed, it may cause damage to the system.

The deadline for Task B is 60 ms; if it's missed, it would be a failure of the system.

The deadline for Task C is 130 ms; if the deadline is missed for whatever reason, bad things will happen.

Maybe I made a mistake setting the priorities.

If the example is not suitable for my question, please suggest an example of your choice.
 

BobTPH

Joined Jun 5, 2013
3,332
Are you trying to understand the concept of preemptive multitasking, or to understand how one would schedule the tasks in your example?

Bob
 

Papabravo

Joined Feb 24, 2006
16,188
@Papabravo

I already said my example might be wrong; it may not be related to this. It is easier for me to understand theory with examples.

Does it make sense now?

Task A runs once every 15 ms
Task B runs once every 50 ms
Task C runs once every 100 ms

Task A: high-priority task
Task B: middle-priority task
Task C: low-priority task

The deadline for Task A is 20 ms; if it's missed, it may cause damage to the system.

The deadline for Task B is 60 ms; if it's missed, it would be a failure of the system.

The deadline for Task C is 130 ms; if the deadline is missed for whatever reason, bad things will happen.

Maybe I made a mistake setting the priorities.

If the example is not suitable for my question, please suggest an example of your choice.
I expressed no opinion on your example. In my message that you quoted, I was answering @atferrari, who asked a related question. Your example made sense and I understood it, but I was trying to offer a view from the inside rather than the outside, because there are many possible solutions to the example, depending on the particular internals that you choose. I was trying to paint a picture of what is happening inside so you would be in a better position to understand how the processing of individual tasks can be broken up into discrete chunks of time. Unless you have some familiarity with the internals of a scheduler and interrupts, it is hard to look at a concrete example. Maybe you process things differently; I don't know anything about your capabilities or your experience and don't want to assume anything.

It may take me some time to figure out the minimum scaffolding needed to implement your example. Someone else may have a quicker answer.
Can I assume that the run time of each task is very much less than the period between required runs? For example, less than 10% of the period:
e.g. Task A runs in less than 10% of 15 ms, or 1.5 ms; Task B runs in less than 5 ms; and Task C runs in less than 10 ms.
In a round-robin scheduler the total run time of 1.5 ms + 5 ms + 10 ms exceeds the required 15 ms interval between runs of Task A, so clearly a round-robin scheduler would not work.

Now if I take a 300 ms interval (the least common multiple of 15, 50, and 100), I will require 30 1 ms time slices for Task C (3 runs of 10 ms), 30 1 ms time slices for Task B (6 runs of 5 ms), and 30 1 ms time slices for Task A (20 runs of 1.5 ms), so the total number of time slices needed in 300 ms is 90 1 ms slices. This is a system load that can easily be handled.

So my strategy is, on every 1 ms timer tick, to run the highest-priority task that is "Ready To Run". If no task is "Ready To Run", I run the do-nothing background task.
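That strategy can be sketched in C with a per-task countdown re-armed each period. This is an illustration, not code from any particular RTOS, and it abstracts each task's work down to a single tick so the scheduling decision stands out: every 1 ms tick, any task whose countdown expires becomes ready, and the highest-priority ready task runs (A before B before C).

```c
#include <assert.h>

#define NTASKS 3    /* index 0 = Task A (highest priority), 2 = Task C */

static const int period[NTASKS] = { 15, 50, 100 };  /* ms, per the example */
static int countdown[NTASKS]    = { 15, 50, 100 };
static int ready[NTASKS];
static int runs[NTASKS];        /* how many times each task has run */

/* Called once per 1 ms timer tick.  Returns which task ran (-1 = idle,
 * i.e. the do-nothing background task). */
int tick(void)
{
    /* Phase 1: mark tasks whose period has elapsed as ready. */
    for (int i = 0; i < NTASKS; i++)
        if (--countdown[i] == 0) {
            ready[i] = 1;
            countdown[i] = period[i];   /* re-arm for the next period */
        }

    /* Phase 2: run the highest-priority ready task (lowest index). */
    for (int i = 0; i < NTASKS; i++)
        if (ready[i]) {
            ready[i] = 0;
            runs[i]++;
            return i;
        }
    return -1;      /* nothing due: background task */
}
```

Note what happens when two tasks come due on the same tick (e.g. at 100 ms, B and C): only the higher-priority one runs, and the other stays ready and runs on the next tick. That deferral is priority scheduling in miniature.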
 
Last edited:

Papabravo

Joined Feb 24, 2006
16,188
I expressed no opinion on your example. In my message that you quoted, I was answering @atferrari, who asked a related question. Your example made sense and I understood it, but I was trying to offer a view from the inside rather than the outside, because there are many possible solutions to the example, depending on the particular internals that you choose. I was trying to paint a picture of what is happening inside so you would be in a better position to understand how the processing of individual tasks can be broken up into discrete chunks of time. Unless you have some familiarity with the internals of a scheduler and interrupts, it is hard to look at a concrete example. Maybe you process things differently; I don't know anything about your capabilities or your experience and don't want to assume anything.

It may take me some time to figure out the minimum scaffolding needed to implement your example. Someone else may have a quicker answer.
Can I assume that the run time of each task is very much less than the period between required runs? For example, less than 10% of the period:
e.g. Task A runs in less than 10% of 15 ms, or 1.5 ms; Task B runs in less than 5 ms; and Task C runs in less than 10 ms.
In a round-robin scheduler the total run time of 1.5 ms + 5 ms + 10 ms exceeds the required 15 ms interval between runs of Task A, so clearly a round-robin scheduler would not work.

Now if I take a 300 ms interval (the least common multiple of 15, 50, and 100), I will require 30 1 ms time slices for Task C (3 runs of 10 ms), 30 1 ms time slices for Task B (6 runs of 5 ms), and 30 1 ms time slices for Task A (20 runs of 1.5 ms), so the total number of time slices needed in 300 ms is 90 1 ms slices. This is a system load that can easily be handled.

So my strategy is, on every 1 ms timer tick, to run the highest-priority task that is "Ready To Run". If no task is "Ready To Run", I run the do-nothing background task.
I forgot to mention that one of the drawbacks of this strategy comes into play when you consider the "cost" of task switching. In the dawn of the computer age, depending on the architecture, this cost could range from minimal all the way up to a deal breaker. If switching tasks every millisecond becomes too expensive, then you have to compromise by switching tasks every other clock tick, or by adjusting the interval of the clock tick. When you see how small details have a major impact on performance in numerous non-obvious ways, you realize the importance of not reinventing wheels.
 

Papabravo

Joined Feb 24, 2006
16,188
Pearls of Wisdom:

If you want to delve deeper into OS construction I suggest a small exercise to test your knowledge of the processor and the assembler.
  1. In the assembly language of your favorite processor, construct three non-trivial subroutines. Call them A, B, C.
  2. Develop a way to call each subroutine indirectly through a data word in memory.
  3. Review the RETURN-from-subroutine instruction.
  4. Call each subroutine in turn by creating a suitable stack frame (including register contents) and executing a RETURN instruction.
  5. If you did this correctly, each subroutine will return to the instruction following the calling RETURN and proceed to the setup and subsequent call-by-RETURN.
If you can wrap your head around that piece of coding, congratulations: you may be a potential OS developer.

Extra Credit:
Repeat the above exercise with a RETURN-from-interrupt instruction.
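The exercise is inherently assembly-level, but the core idea, control arriving somewhere other than the instruction after a normal call, has a portable C analogy in setjmp/longjmp. This is only an analogy, not the exercise itself: setjmp records a resume point, and longjmp "returns" there from arbitrary depth, much as a crafted stack frame plus RETURN redirects control.

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf resume_point;
static int trace[3];
static int n = 0;

static void deep_work(void)
{
    trace[n++] = 1;
    longjmp(resume_point, 42);  /* never returns normally to its caller */
}

int run_demo(void)
{
    switch (setjmp(resume_point)) {
    case 0:                     /* first pass: resume point recorded */
        trace[n++] = 0;
        deep_work();            /* control will NOT come back here */
        break;
    case 42:                    /* reached via longjmp, not via RETURN */
        trace[n++] = 2;
        break;
    }
    return n;                   /* number of trace entries recorded */
}
```

An OS context switch does the same trick one level down: it rebuilds the saved registers and stack of a different task and "returns" into it.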
 

nsaspook

Joined Aug 27, 2009
8,394
I forgot to mention that one of the drawbacks of this strategy comes into play when you consider the "cost" of task switching. In the dawn of the computer age, depending on the architecture, this cost could range from minimal all the way up to a deal breaker. If switching tasks every millisecond becomes too expensive, then you have to compromise by switching tasks every other clock tick, or by adjusting the interval of the clock tick. When you see how small details have a major impact on performance in numerous non-obvious ways, you realize the importance of not reinventing wheels.
Yes, that's why many processors designed to be used with preemptive software systems have shadow registers for each priority level.

https://microchipdeveloper.com/32bit:mz-arch-isa-cpu-shadow-registers
The PIC32 processor implements one or more copies of the General Purpose Registers (GPR) for use by high-priority interrupts. The extra banks of registers are known as shadow register sets. When a high-priority interrupt occurs, the processor automatically switches to a shadow register set without software intervention. This reduces overhead in the interrupt handler and reduces effective latency.
 

nsaspook

Joined Aug 27, 2009
8,394
Pearls of Wisdom:

If you want to delve deeper into OS construction I suggest a small exercise to test your knowledge of the processor and the assembler.
  1. In the assembly language of your favorite processor, construct three non-trivial subroutines. Call them A, B, C.
  2. Develop a way to call each subroutine indirectly through a data word in memory.
  3. Review the RETURN-from-subroutine instruction.
  4. Call each subroutine in turn by creating a suitable stack frame (including register contents) and executing a RETURN instruction.
  5. If you did this correctly, each subroutine will return to the instruction following the calling RETURN and proceed to the setup and subsequent call-by-RETURN.
If you can wrap your head around that piece of coding, congratulations: you may be a potential OS developer.

Extra Credit:
Repeat the above exercise with a RETURN-from-interrupt instruction.
That's a hard one. ;)
https://forum.allaboutcircuits.com/...han-their-datatype-allows.149495/post-1277914
 

nsaspook

Joined Aug 27, 2009
8,394
Everybody needs a little challenge from time to time.
It was a challenge for me 40 years ago: Modula-2 with 68000 assembly register operations for a scheduler module in some OS code I was writing.
Code:
(*$P- *) 
PROCEDURE       newtrap; (* IOTRANSFER executes this code before its *)
BEGIN                    (* normal vector. *)

        CODE(46fcH,2700H);  (* stop interrupts *)
        CODE(8b9H,intnum,0ffffH,0fa11H); (* BCLR intnum, ISRB *)
        CODE(8b9H,intnum,0ffffH,0fa15H); (* BCLR intnum, MASKB *)
        CODE(817H,5);                 (* BTST 5, (a7) check supermode *)
        CODE(6700H+2);                (* BEQ.S over rte *)
        CODE(4e73H);                  (* RTE supermode return *)
        CODE(48e7H,0fffeH);   (* save regs movem  *)
        CODE(204fH);    (* move.l ssp,a0 *)
        currentprocess^.ssp:=REGISTER(8)+66;

        IF (currentprocess^.ssp#sspval)
           AND (currentprocess^.ssp#accsuperstack) THEN
           CODE(4cdfH,7fffH); (* restore regs movem *)
           CODE(4e73H);       (* rte *)
        ELSE

           IncSlice;   (* cpu time update *)
           s0:=currentprocess;
           LOOP                    (* find next process store in s0 *)
              s0:=s0^.next;
              WITH s0^ DO
                IF (gemsave[14]#0) THEN (* If flags set then process *)
                   CPFlagptr:=ADR(gemsave[14]);

                   IF sleep IN CPFlagptr^ THEN (* sleep flag *)
                      IF (gemsave[13]#0) AND
                         (ADDRESS(hz200) >= gemsave[13]) THEN
                         active:=TRUE;
                         gemsave[13]:=0;
                         EXCL(CPFlagptr^,wait);  (* clear wait flag *)
                         EXCL(CPFlagptr^,sleep); (* clear sleep flag *)
                      END;
                   END;
                   IF wait IN CPFlagptr^ THEN (* wait flag *)
                      IF (waitloc^ = LONGCARD(LAnd(gemsave[11],gemsave[12]))) THEN
                         active:=TRUE;
                         gemsave[13]:=0;
                         EXCL(CPFlagptr^,wait);  (* clear wait flag *)
                         EXCL(CPFlagptr^,sleep); (* clear sleep flag *)
                      END;
                   END;

                END;
                IF (active) AND (wsp#NIL) THEN
                   IF misc[0]>=highpri THEN
                      IF pri>highpri THEN highpri:=pri END;
                      misc[0]:=pri;
                      EXIT;
                   ELSE
                      INC(misc[0]); (* age process til it's ready to run *)
                   END;
                END;
              END; (* with *)
           END; (* end LOOP *)

        (* Swap GEM pointers for the processes *)
           WITH currentprocess^ DO
           gemsave[0]:=gemsaveGvec^;
           gemsave[1]:=linea;
           gemsave[2]:=gemdos;
           gemsave[3]:=gsxgem;
           gemsave[4]:=tbios;
           gemsave[5]:=xbios;
           gemsave[6]:=linef;
           gemsave[7]:=level2;
           gemsave[8]:=level4;
           gemsave[9]:=shellp;
           biosval:=biospointer;
           termvec:=ptermvec;
           END;
           WITH s0^ DO
           biospointer:=biosval;
           ptermvec:=termvec;
           gemsaveGvec^:=gemsave[0];
           linea:=gemsave[1];
           gemdos:=gemsave[2];
           gsxgem:=gemsave[3];
           tbios:=gemsave[4];
           xbios:=gemsave[5];
           linef:=gemsave[6];
           level2:=gemsave[7];
           level4:=gemsave[8];
           shellp:=gemsave[9];
           END;

           SetTimer;
           s0^.slice:=hz200; (* set next cycle start *)
           SETREG(8,oldtrap);  (* move IOTRANSFER trap adr *)
           CODE(43faH,10); (* lea 12(pc),a1 *)
           CODE(2288H); (* move.l a0,(a1) *)
           CODE(4cdfH,7fffH); (* restore regs movem *)
           CODE(4ef9H,0,0) (* jmp back to routine *)
        END;
END     newtrap;
(*$P+ *)
 