Before we dig deep into the internal workings of real-time operating systems, let us first go over some fundamental concepts you need to understand in order to build your own RTOS. We are going to test and debug our custom RTOS on a microcontroller, more specifically an ARM Cortex-M microcontroller, so we also need to be aware of the features provided by the MCU (microcontroller unit). So let's get started.
SOME BASIC OPERATING SYSTEM CONCEPTS
There are some fundamental concepts that every operating system follows, and they are equally important for a real-time OS. Let's take a look at some of these fundamentals:
1. Thread
A thread is a single unit of execution and is part of a process. Even on a single-core CPU, which is common in microcontroller-based embedded systems, many applications require multitasking. Each task you need to perform is programmed as a single sequential flow, and several such threads are coded to carry out multiple tasks, as in the sketch below.
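To make this concrete, here is a minimal sketch (not taken from any particular kernel) of two tasks written as plain C functions. The helper names led_toggle, uart_send and task_delay are hypothetical placeholders for real driver and kernel calls.

#include <stdint.h>

/* Hypothetical helpers -- placeholders for real driver/kernel calls. */
static void led_toggle(void)           { /* toggle a GPIO pin */ }
static void uart_send(const char *s)   { (void)s; /* write a string to the UART */ }
static void task_delay(uint32_t ticks) { (void)ticks; /* let other tasks run */ }

/* Each task is one sequential flow that never returns. */
void task_blink(void *arg)
{
    (void)arg;
    for (;;) {
        led_toggle();
        task_delay(500);        /* sleep ~500 ticks */
    }
}

void task_logger(void *arg)
{
    (void)arg;
    for (;;) {
        uart_send("hello\r\n");
        task_delay(1000);
    }
}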
2. Task Scheduler
A task scheduler is a piece of code that decides which thread/task to run next. There are multiple methods you can use to decide which thread should be loaded and executed next; these 'methods' are called scheduling algorithms. A scheduling algorithm consults the task control block during its decision-making process, as in the sketch below.
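As a rough sketch, assuming a hypothetical, much-reduced TCB with just a ready flag and a priority field, a priority-based scheduler can be as simple as scanning a task table for the highest-priority ready task:

#include <stdint.h>

#define MAX_TASKS 8

/* Hypothetical, stripped-down TCB used only for this sketch. */
typedef struct {
    uint8_t ready;      /* 1 if the task is ready to run */
    uint8_t priority;   /* lower number = higher priority */
} tcb_t;

static tcb_t task_table[MAX_TASKS];

/* Return the index of the highest-priority ready task, or -1 if none. */
int pick_next_task(void)
{
    int best = -1;

    for (int i = 0; i < MAX_TASKS; i++) {
        if (task_table[i].ready &&
            (best < 0 || task_table[i].priority < task_table[best].priority)) {
            best = i;
        }
    }
    return best;
}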
3. Context switching
The process of storing the state of a task in memory so that it can be resumed later is known as context switching. Context switching allows the CPU to run multiple tasks seemingly simultaneously and is an essential part of an RTOS kernel. A conceptual sketch is given below.
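On a Cortex-M the actual switch is normally done in the PendSV exception handler and written in assembly, because the CPU registers have to be pushed and popped around it. The C fragment below is only a conceptual sketch of the bookkeeping involved, using a hypothetical TCB that stores the task's stack pointer.

#include <stdint.h>

/* Hypothetical minimal TCB: only the saved stack pointer matters here. */
typedef struct {
    uint32_t *sp;    /* where this task's stack pointer was when it was paused */
} tcb_t;

static tcb_t *current_task;   /* task that is running now */
static tcb_t *next_task;      /* task the scheduler picked to run next */

/* Conceptual context switch: save where the old task stopped, then make
 * the chosen task the current one. The real handler also saves/restores
 * R4-R11 and resumes execution from the new task's stack. */
void context_switch(uint32_t *saved_sp)
{
    current_task->sp = saved_sp;
    current_task     = next_task;
    /* execution then resumes using current_task->sp */
}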
4. Thread/task control block
A task control block (TCB) is a data structure used by kernels to maintain information about a task. To give a sense of a typical TCB, below is the task control block of Micrium µC/OS-III.
struct os_tcb {
CPU_STK *StkPtr;
void *ExtPtr;
CPU_STK *StkLimitPtr;
OS_TCB *NextPtr;
OS_TCB *PrevPtr;
OS_TCB *TickNextPtr;
OS_TCB *TickPrevPtr;
OS_CHAR *NamePtr;
CPU_STK *StkBasePtr;
OS_TLS TLS_Tbl[OS_CFG_TLS_TBL_SIZE];
OS_TASK_PTR TaskEntryAddr;
void *TaskEntryArg;
OS_PEND_DATA *PendDataTblPtr;
OS_STATE PendOn;
OS_STATUS PendStatus;
OS_STATE TaskState;
OS_PRIO Prio;
CPU_STK_SIZE StkSize;
OS_OPT Opt;
OS_OBJ_QTY PendDataEntries;
CPU_TS TS;
OS_SEM_CTR SemCtr;
OS_TICK TickRemain;
OS_TICK TickCtrPrev;
OS_TICK TimeQuanta;
OS_TICK TimeQuantaCtr;
void *MsgPtr;
OS_MSG_SIZE MsgSize;
OS_MSG_Q MsgQ;
CPU_TS MsgQPendTime;
CPU_TS MsgQPendTimeMax;
OS_REG RegTbl[OS_TASK_REG_TBL_SIZE];
OS_FLAGS FlagsPend;
OS_FLAGS FlagsRdy;
OS_OPT FlagsOpt;
OS_NESTING_CTR SuspendCtr;
OS_CPU_USAGE CPUUsage;
OS_CPU_USAGE CPUUsageMax;
OS_CTX_SW_CTR CtxSwCtr;
CPU_TS CyclesDelta;
CPU_TS CyclesStart;
OS_CYCLES CyclesTotal;
OS_CYCLES CyclesTotalPrev;
CPU_TS SemPendTime;
CPU_TS SemPendTimeMax;
CPU_STK_SIZE StkUsed;
CPU_STK_SIZE StkFree;
CPU_TS IntDisTimeMax;
CPU_TS SchedLockTimeMax;
OS_TCB *DbgNextPtr;
OS_TCB *DbgPrevPtr;
CPU_CHAR *DbgNamePtr;
};
The above block holds all the information a scheduler needs to perform context switching.
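Our own kernel will not need anywhere near that many fields. As a hypothetical sketch, a minimal TCB for a small custom RTOS could hold just the saved stack pointer, the stack area, a state flag, a priority and a link to the next task:

#include <stdint.h>

/* Hypothetical minimal TCB for a small custom kernel (not µC/OS-III). */
typedef struct tcb {
    uint32_t   *sp;          /* saved stack pointer, updated on every context switch */
    uint32_t   *stack_base;  /* start of this task's stack memory */
    uint32_t    stack_size;  /* stack size in words */
    uint8_t     state;       /* e.g. READY, RUNNING, BLOCKED */
    uint8_t     priority;    /* scheduling priority */
    struct tcb *next;        /* next TCB in the ready list */
} tcb_t;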
5. Scheduling algorithms
The scheduling decision can be as simple as executing the tasks one after the other (round robin). The complexity of the scheduling method depends on the application requirements. Listed below are some common scheduling methods for a real-time kernel:
First-Come, First-Served (FCFS) Scheduling
The first-come, first-served (FCFS) scheduling algorithm schedules jobs according to their arrival time: the job which arrives first in the queue gets the CPU first.
Shortest-Job-Next (SJN) Scheduling
Shortest job next (SJN) is a scheduling policy that selects the process with the smallest execution time. This is a non-pre-emptive method.
Priority Scheduling
Priority scheduling schedules tasks based on their priority: in this method, the scheduler chooses the task to be executed next according to its priority.
Shortest Remaining Time
This is a scheduling method which uses the remaining completion time to decide the next task. It is essentially a pre-emptive version of the shortest-job-next algorithm: the job with the shortest remaining time is executed first.
Rate monotonic Scheduling
This is a static priority assignment algorithm used in real-time operating systems (RTOS). It assigns priorities to tasks according to their periodicity and run time, i.e. a task with a shorter period is assigned a higher priority.
Round Robin Scheduling
Round robin is a pre-emptive algorithm that switches between the tasks one after the other in a circular fashion. This is the simplest scheduling algorithm, as sketched below.
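As a minimal sketch of round robin (with a hypothetical TCB reduced to just a circular link), the scheduler simply advances to the next task in the ring at every scheduling point:

#include <stddef.h>

/* Hypothetical TCB with only the field round robin needs: a circular link. */
typedef struct tcb {
    struct tcb *next;   /* next task in the circular ready list */
    /* stack pointer, state, etc. would live here in a real kernel */
} tcb_t;

static tcb_t *current_task;   /* set when the kernel starts */

/* Round robin: just move on to the next task in the ring. */
tcb_t *round_robin_next(void)
{
    if (current_task != NULL) {
        current_task = current_task->next;
    }
    return current_task;
}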
6. Pre-emptive and Non Pre-emptive algorithm
Pre-emption refers to the temporary interruption and suspension of a task. Tasks are pre-empted because sometimes it is important to serve a higher-priority task first: the running task is interrupted and resumed later, once the higher-priority task has finished its execution. In non-pre-emptive scheduling, a task cannot be interrupted and is executed till completion.
7. Semaphore
Semaphores are integer variables used to solve the critical-section problem. They use the wait and signal operations for process synchronization, and both are atomic operations. First of all, what are atomic operations? They are instructions which cannot be interrupted during their execution.
Next, what is the critical section problem?
The part of the program which accesses a shared resource is known as the critical section. The resource may be anything in the system: a memory location, a data structure, or an I/O peripheral (UART, I2C, GPIOs etc.). The problem that arises when two or more tasks try to access the same resource at the same time is known as the critical-section problem. To avoid it, semaphores are used as a signalling mechanism to notify other threads that the resource is currently in use, as in the sketch below.
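Below is a minimal sketch of the wait/signal idea, not a production implementation. The names sem_wait, sem_signal, enter_critical and exit_critical are hypothetical; on a Cortex-M the critical-section helpers would typically disable and re-enable interrupts briefly so that the counter update cannot be interrupted.

#include <stdint.h>

typedef struct {
    volatile int32_t count;   /* number of times the resource can still be taken */
} sem_t;

/* Hypothetical placeholders: would briefly disable/enable interrupts. */
static void enter_critical(void) { }
static void exit_critical(void)  { }

void sem_wait(sem_t *s)
{
    for (;;) {
        enter_critical();
        if (s->count > 0) {      /* resource available: take it */
            s->count--;
            exit_critical();
            return;
        }
        exit_critical();
        /* resource busy: a real kernel would block this task here
         * instead of spinning */
    }
}

void sem_signal(sem_t *s)
{
    enter_critical();
    s->count++;                  /* release the resource */
    exit_critical();
}

/* Usage around a shared UART (the critical section): */
static sem_t uart_sem = { 1 };

void task_log(void)
{
    sem_wait(&uart_sem);         /* wait before entering the critical section */
    /* ... use the shared UART ... */
    sem_signal(&uart_sem);       /* signal that the UART is free again */
}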
8. Inter process communication
Inter-process communication (IPC) refers to a mechanism whereby the operating system allows various processes to communicate with each other. This can be done either by passing messages between the tasks or by using shared memory, as in the message-queue sketch below.
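Here is a small sketch of message passing, assuming a hypothetical single-producer, single-consumer queue; a real kernel queue would also block a reader when the queue is empty and guard against concurrent access from both sides.

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_LEN 8

/* Hypothetical fixed-size message queue shared between two tasks. */
typedef struct {
    uint32_t buf[QUEUE_LEN];
    volatile uint32_t head;   /* next slot to write */
    volatile uint32_t tail;   /* next slot to read  */
} msg_queue_t;

/* Producer side: returns false if the queue is full. */
bool queue_post(msg_queue_t *q, uint32_t msg)
{
    uint32_t next = (q->head + 1U) % QUEUE_LEN;
    if (next == q->tail) {
        return false;
    }
    q->buf[q->head] = msg;
    q->head = next;
    return true;
}

/* Consumer side: returns false if the queue is empty. */
bool queue_get(msg_queue_t *q, uint32_t *msg)
{
    if (q->tail == q->head) {
        return false;
    }
    *msg = q->buf[q->tail];
    q->tail = (q->tail + 1U) % QUEUE_LEN;
    return true;
}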
9. CPU utilization
CPU utilization is used to estimate system performance: it is the amount of workload handled by the CPU. The goal of the operating system is to utilize 100% of the CPU's capability, although this is not practically achievable. CPU utilization can be calculated using a simple formula:
U = (R/C) * 100%
U = utilization
R = BUSY TIME (the amount of time the CPU is doing something)
C = BUSY TIME + IDLE TIME (where idle time is the amount of time the CPU is not doing anything)
For example:
BUSY TIME = R = 4567
BUSY TIME + IDLE TIME = C = 4567 + 2644 = 7211
CPU utilization is
(4567 / 7211) * 100% ≈ 0.63 * 100% = 63%
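The same calculation in code, assuming the kernel keeps (hypothetical) busy and idle tick counters:

#include <stdint.h>
#include <stdio.h>

/* U = (busy / (busy + idle)) * 100%, done in integer arithmetic. */
uint32_t cpu_utilization_pct(uint32_t busy_ticks, uint32_t idle_ticks)
{
    uint32_t total = busy_ticks + idle_ticks;
    if (total == 0U) {
        return 0U;
    }
    return (busy_ticks * 100U) / total;
}

int main(void)
{
    /* Numbers from the example above: busy = 4567, idle = 2644. */
    printf("CPU utilization: %u%%\n",
           (unsigned)cpu_utilization_pct(4567U, 2644U));   /* prints 63% */
    return 0;
}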