
Embedded Systems

Module 3

IO subsystem: IO ports are subsystems of the OS device-management system. IO instructions depend on the hardware platform, and IO systems differ from one OS to another. The subsystems of a typical IO system form a hierarchy of layers, with actions passing between them:

1. Application having an IO system. There may also be a sub-application layer between the application and the basic IO functions.
2. IO basic functions: device-independent OS functions, for example file-system functions for read and write, buffered IO, or file (block) read and write functions.
3. IO device-driver functions: device-dependent OS functions. A driver may interface with a set of library functions, for example for serial communication.
4. Device hardware or port: a serial device, a network, or an IO interface card.

There are two types of IO operations: synchronous and asynchronous. An RTOS may provide separate functions for synchronous and asynchronous operations. Synchronous IO functions run at certain fixed data-transfer rates; a task or process therefore blocks till completion of an IO operation. Asynchronous IO operations run at variable data-transfer rates; they provide for a high-priority process or task not being blocked during IOs.

Interrupt routine handling in RTOS: ISR functions are as follows:
- ISRs have the highest priorities, over the OS functions and the application tasks.
- An ISR does not wait for a semaphore, mailbox message or queue message.
- An ISR does not wait for a mutex; else it would have to wait for other critical-section code to finish before the critical-section code in the ISR could run.

Three alternative ways for an OS to respond to a hardware source call from an interrupt are:
1. Direct call to an ISR by an interrupting source, with the ISR sending an ISR enter message.
2. RTOS first interrupting on an interrupt, then the OS calling the corresponding ISR.
3. RTOS first interrupting on an interrupt, then the RTOS initiating the ISR, and then an IST.
Department of ECE, VKCET Page 1


Direct call to an ISR by an interrupting source and ISR sending an ISR enter message: The steps are shown below:

Step 1: On an interrupt, the process running on the CPU is interrupted and the ISR corresponding to that source starts executing.
Step 2: The hardware source calls the ISR directly, and the ISR sends an ISR enter message (ISM) to the OS. The message informs the OS that an ISR has taken control of the CPU.
Step 3: The ISR code can send messages into a mailbox or message queue.
Step 4: A task waiting on that mailbox or message queue does not start before the return from the ISR.

Two sets of functions, the ISRs and the OS functions, reside in two memory blocks. The i-th interrupt source causes ISR_i to execute. The routine sends an ISR enter message to the OS, which is stored in the memory allotted for OS messages. When the ISR finishes, it sends an exit message to the OS; on return, either the interrupted process resumes execution or the processes are rescheduled. Multiple ISRs may be nested, and each lower-priority ISR sends the higher-priority ISR's interrupt message (ISM) to the OS, to facilitate the return to it on completion of, and return from, the higher-priority interrupt.

RTOS first interrupting on an interrupt, then the OS calling the corresponding ISR: The steps are shown below:

Step 1: On interrupt of a task, OS first gets the hardware source call.



Step 2: The OS initiates the corresponding ISR after context saving.
Step 3: The called ISR executes.
Step 4: During execution, the ISR can post one or more outputs for the events and messages into mailboxes or queues.
Step 5: Return from the ISR, and the task context is retrieved so the task continues.

RTOS first interrupting on an interrupt, then the RTOS initiating the ISR, and then an IST: The RTOS can provide two levels of ISRs: a fast-level ISR (FLISR) and a slow-level ISR (SLISR). The FLISR is also called the hardware-interrupt ISR and the SLISR the software-interrupt ISR. In Windows CE (an RTOS), the FLISR is simply called the ISR and the SLISR is called an interrupt service thread (IST). The use of an FLISR reduces both the latency and the jitter of an interrupt service. The steps are shown below:

Step 1: On an interrupt, the OS first gets the hardware source call and initiates the corresponding ISR.
Step 2: The ISR starts after the OS finishes its current critical section, reaches a preemption point and saves the context.
Step 3: The ISR executes the device- and platform-dependent code.
Step 4: During its execution, the ISR can send one or more outputs for the events and messages into mailboxes and queues for the ISTs. The ISTs execute the device- and platform-independent code.
Step 5: Just before the end of the ISR, it unmasks further preemption from the same or other hardware sources.
Step 6: There are a number of ISRs and a number of ISTs. An ISR can post messages into a FIFO for the ISTs after recognizing the interrupt source and its priority. The ISTs that have received messages from the ISRs through the FIFO execute as per their priority.
Step 7: When no ISR or IST is pending execution in the FIFO, the interrupted task runs on return.

The ISRs must be short, run only critical and necessary code, and simply send the initiating calls or messages for the ISTs into the FIFO. The ISTs run in kernel space, do not lead to priority inversion, and have a priority-inheritance mechanism.

RTOS: A multitasking OS for applications that need to meet time deadlines and function under real-time constraints. The OS services of RTOS software are:
1. Basic OS functions: process management, resource management, device management, IO-device subsystem and network-device subsystem management.
2. Process priorities management (priority allocation): user-level priority allocation. The real-time priorities are higher than the priorities dynamically allocated to the OS functions, and the idle priority is allotted to low-priority threads.
3. Process management (preemption): the RTOS kernel preempts a lower-priority process when a message or event for which a higher-priority process is waiting takes place.
4. Process priorities management (priority inheritance): priority inheritance lets a low-priority task holding a shared resource run at the priority of a waiting higher-priority task.
5. Process predictability: predictable timing behaviour of the system and predictable task synchronization with minimum jitter.
6. Memory management (protection): in an RTOS, threads of an application program can run in kernel space; the real-time performance then becomes high.
7. Memory management (MMU): memory locking stops page swapping between physical memory and disk when the MMU is disabled. This makes RTOS task latencies predictable and reduces jitter.
8. Memory allocation: fixed-length memory-block allocations, which are fast.
9. RTOS scheduling and interrupt-latency control functions.
10. Timer functions and time management.
11. Asynchronous IO functions: permit asynchronous IOs without blocking a task.
12. IPC synchronization functions.
13. Spin locks.
14. Time slicing: execution of processes that have equal priority.
15. Hard and soft real-time operability.

RTOS task scheduling models: Some common scheduling models used by schedulers are:
1. Cooperative scheduling model.
2. Cyclic and round-robin-with-time-slicing scheduling models.
3. Preemptive scheduling model.
4. Model for critical-section service by a preemptive scheduler.
5. Earliest-deadline-first (EDF) precedence and rate-monotonic scheduler (RMS) models.
6. Fixed real-time scheduling model.

Cooperative scheduling model: Consider an automatic washing machine. The system can be partitioned into multiple tasks A1 to AN, of which the first three are A1, A2 and A3; the following figure shows the tasks of the multiple-process embedded software.

The scheduler first starts the A1 waiting loop and waits for a message from A1.

Task A1: Resets the system and switches on the power if the door of the machine is closed and the power switch is pressed once and released to start the system. The task-A1 waiting loop terminates after detection of two events: (a) door closed and (b) power switch pressed by the user. Finally, task A1 sets a flag start_F, which is the message from A1 for the scheduler to start task A2. This message can be sent using the semaphore function OSSemPost(start_F).

Task A2: The scheduler waits for the message from A1, the setting of start_F. The waiting can be done by the semaphore function OSSemPend(start_F). When start_F is posted, task A2 starts: a bit is set to signal water into the wash tank, and the task repeatedly checks the water level. When the water level is adequate, the flag water_stage1_F is set, which is the message from A2 for the scheduler to start task A3 executing its code. This message can be sent using the semaphore function OSSemPost(water_stage1_F).

Task A3: The scheduler waits for the A2 message; the waiting can be done using the semaphore function OSSemPend(water_stage1_F). When the message arrives, task A3's wait ends and it starts: a bit is set to stop the water inlet and another bit is set to start the wash-tank motor. Then a flag motor_stage1_F is set, which is the message from A3 for the scheduler to start the next task executing its code. The message can be sent using the semaphore function OSSemPost(motor_stage1_F).

The cooperative scheduling model is shown below:

The scheduler inserts the ready tasks into a list.


The task program context at various instances are shown below:

Cooperative scheduling of ready tasks in a circular queue: it closely relates to function-queue scheduling. The model in which the scheduler inserts the ready tasks into a list for sequential cooperative execution is shown below:

The PC changes at different times are shown below:

The worst-case latency is the same for each task. If there are n tasks in the ready list, the worst-case latency, including the ISR execution times, is

Tworst = {(dti + sti + eti)1 + (dti + sti + eti)2 + ... + (dti + sti + eti)n-1 + (dti + sti + eti)n} + tISR = ttotal + tISR

where dti is the event-detection time when an event is brought into the list, sti is the switching time from one task to another, eti is the task execution time, and tISR is the sum of all execution times of the ISRs. Tworst should always be less than the deadline Td of any task in the list.


Cooperative scheduling of ready tasks using an ordered list as per precedence constraints: Cooperative priority-based scheduling, with the ISRs executed in the first layer and the priority-based ready tasks of an ordered list executed in the second layer, is shown below:

The PC switches at different times are shown below:

The scheduler calls the ISRs and the corresponding tasks of an ordered list one by one. Using a priority parameter taskPriority, the scheduler orders the list of tasks. The scheduler executes only the first task of the ordered list, and ttotal equals the period taken by that first task. After the first task is executed, it is deleted from the list and the next task becomes the first. The insertions and deletions that maintain the order are made at the beginning of the list. In the first layer, the ISRs hold the short code sections that have to be executed immediately. Cooperative means that each task cooperates to let the running task finish; none of the tasks blocks anywhere in between, from the ready to the finished state. The next round of scheduling then starts from among the ready tasks in the priority-based list.

The worst-case latency is not the same for every task. It varies from

{(dti + sti + eti) + tISR}   to   {(dti + sti + eti)p1 + (dti + sti + eti)p2 + ... + (dti + sti + eti)pm-1 + (dti + sti + eti)pm + tISR}

where tISR is the sum of all execution times of the ISRs, pm is the priority of the task that has the maximum execution time, and p1, p2, ..., pm-1, pm are the priorities of the tasks in the ordered list, with p1 > p2 > ... > pm. With this scheduler it is easier, though not guaranteed, to keep Tworst less than the deadline Td; so the programmer assigns the task with the lowest Td the highest priority.

Cyclic and round-robin-with-time-slicing scheduling model:

Cyclic scheduling of periodic tasks: Assume three periodically occurring tasks. In the time frames allotted to the first task, the task executes at t1, t1 + Tcycle, t1 + 2Tcycle, ...; the second task's frames are at t2, t2 + Tcycle, t2 + 2Tcycle, ...; and the third task's at t3, t3 + Tcycle, t3 + 2Tcycle, .... The start of a time frame is the scheduling point for the next task in the cycle. Tcycle is the cycle for repeating the execution of the tasks in order 1, 2, 3 and equals the interval from the start of the task-1 time frame to the end of the task-3 frame; it is the period after which each task's allotted time frame repeats. Each of the N tasks in a cyclic scheduler completes within its allotted time frame when the time frame is based on the deadline. The scheduler repeats the schedule decided after computations based on the periods of occurrence of the task instances. Each task has the same priority for execution in the cyclic mode. An example: video and audio signals reaching the ports of a multimedia system and being processed. Video frames arrive at the rate of 25 per second, so a cyclic scheduler is used to process video and audio with Tcycle = 40 ms or a multiple of 40 ms.

Round-robin time-slicing scheduling: A task may not complete within its allotted time frame. Round robin means that each ready task runs in turn, only in a cyclic queue, for a limited time slice Tslice, with Tslice = Tcycle / N, where N is the number of tasks. This is a widely used model in traditional OSes.
Let a stream of coded messages reach port A every 20 ms; each message is decrypted and then retransmitted to port B after encoding. Let the five tasks C1, C2, C3, C4 and C5 in the multiple processes be:


The context of five tasks in five time schedules is shown below:

The different time schedules and the process and context switching are:
1. At the first instance, the context is C1 and task C1 is running.
2. At the second instance, after 4 ms, the OS switches the context to C2; task C1 is finished and C2 is running. As task C1 is finished, nothing is saved on the task-C1 stack.
3. At the third instance, the OS switches the context to C3 on the next timer interrupt, which occurs 8 ms from the start of task C1. Task C1 is finished, C2 is blocked and C3 is running. Context C2 is saved on the task-C2 stack because C2 is blocked.
4. At the fourth instance, the OS switches the context to C4 on the timer interrupt 12 ms from the start of task C1. Task C1 is finished, C2 and C3 are blocked and C4 is running. Contexts C2 and C3 are on the task-C2 and task-C3 stacks respectively.
5. At the fifth instance, the OS switches the context to C5 on the timer interrupt 16 ms from the start of task C1. Task C1 is finished, C2, C3 and C4 are blocked and C5 is running. Contexts C2, C3 and C4 are saved on the task-C2, C3 and C4 stacks respectively.
6. On the timer interrupt at 20 ms, the OS switches the context to C1. As C5 is finished, only the contexts C2, C3 and C4 remain on the stacks. Task C1 runs as per its schedule.

When the pth task has a high execution time etp, the worst-case latency of the lowest-priority task can exceed its deadline. This problem can be solved by defining a lower time slice for each task.


The programming model of cyclic round robin time-slice scheduling is shown below:

The PC on context switching, when the scheduler calls the tasks in two consecutive time slices, is shown below:

Each task is allotted a maximum time interval of tslice; the timer interrupts every tslice seconds and initiates a new context switch. The OS completes the execution of all ready tasks in one cycle, within N × tslice. Let Tworst be the sum of the maximum times for all the tasks; then N × tslice must be at least Tworst, with

Tworst = {(dti + sti + eti)p1 + (dti + sti + eti)p2 + ... + (dti + sti + eti)pN-1 + (dti + sti + eti)pN} + tISR

If N × tslice equals the sum of the maximum times of the tasks, each task is executed once and finishes in one cycle. When a task finishes execution before the maximum time it can take, there is a waiting period between the two cycles. The worst-case latency for any task is N × tslice, and a task may need execution periodically. An alternative model strategy is the decomposition of a task that takes an abnormally long time to execute: the decomposition is into two or four or more tasks, and one set of these tasks (the odd-numbered ones) can run in one time slice tslice and the other set (the even-numbered ones) in another time slice tslice.

Another strategy is the decomposition of a long-executing task into a number of sequential states: one of its states runs in the first cycle, the next state in the second cycle, and so on. This reduces the response time of the remaining tasks, which are executed after a state completes.

Preemptive scheduling model strategy by a scheduler: Some difficulties with cooperative and cyclic scheduling of tasks are:
- A disadvantage of the cooperative scheduler is that a long execution time of a low-priority task makes a high-priority task wait at least until it finishes.
- Another disadvantage is that the cooperative scheduler is cyclic but without a predefined tslice; the worst-case latency then equals the sum of the execution times of all tasks.

In preemptive scheduling of tasks, the OS schedules such that a higher-priority task, when ready, preempts a lower-priority one by blocking it. This solves the problem of large worst-case latency for high-priority tasks. Consider an example with five tasks B1 to B5:

The actions of the kernel's preemptive scheduler are shown below:


A higher-priority task takes control from a lower-priority task: it switches into the running state after blocking the lower-priority task.

Step 1: At the first instance, the context is B3 and task B3 is running.
Step 2: At the second instance, the context switches to B1: context B3 is saved on the interrupt at port A, and task B1 is of higher priority. Now task B1 is in the running state and task B3 is in the blocked state; context B3 is on the task-B3 stack.
Step 3: At the third instance, the context switches to B2 on an interrupt that occurs only after task B1 finishes. Task B1 is in the finished state, B2 is running and task B3 is still blocked; context B3 is on the task-B3 stack.
Step 4: At the fourth instance, context B3 is retrieved and the context switches to B3. Tasks B1 and B2, both of higher priority than B3, are in finished states. Task B3 changes from the blocked to the running state.
Step 5: At the fifth instance, the context switches to B4. Tasks B1, B2 and B3, all of higher priority than B4, are in finished states. B4 is now in the running state.
Step 6: At the sixth instance, the context switches to B5. Tasks B1, B2, B3 and B4, all of higher priority than B5, are in finished states. B5 is now in the running state.
Step 7: At the seventh instance, the context switches to B1: context B5 is saved on the interrupt at port A, and task B1 is of the highest priority. Now task B1 is running and task B5 is blocked; context B5 is on the task-B5 stack.
Step 8: At the eighth instance, the context switches to B2 on an interrupt that occurs only after task B1 finishes. Task B1 is in the finished state, B2 is running and task B5 is still blocked; context B5 is still on the task-B5 stack.
Step 9: At the last instance, the context is B3 and task B3 is running; tasks B1 and B2 are in finished states.
RTOS-managed preemptive scheduling: Processes execute such that the scheduler provides for preemption of a lower-priority process by a higher-priority one. Assume the priority order task_1 > task_2 > task_3 > task_4 > ... > task_N. Each task has an infinite loop from start (idle state) up to finish. The last instruction of task 1 points to the next pointed address, *next; in the case of the infinite loop, *next points back to the start of task 1.


Preemptive scheduling of the tasks readied in order of priority is shown below:

The program-counter assignments on a scheduler call to preempt task_2, when the priority of task_1 > task_2 > task_3, are shown below:

Worst-case latency: Not the same for every task. The highest-priority task has the smallest latency and the lowest-priority task the highest; it differs from task to task in the ready list. For the pth task,

Tworst = {(dti + sti + eti)1 + (dti + sti + eti)2 + ... + (dti + sti + eti)p-1 + (dti + sti + eti)p} + tISR

where tISR is the sum of all execution times of the ISRs, and, for the ith task, dti is the event-detection time when an event is brought into the list, sti the switching time from one task to another, and eti the task execution time, with i = 1, 2, ..., p-1 when the number of higher-priority tasks is p-1 for the pth task.


Inter-process communication (IPC): Exchange of data between two or more separate, independent processes, threads, tasks, ISRs or the scheduler. OSes provide the following IPC functions:
- Signals for another process to start
- Semaphores (as token or mutex) or counting semaphores for inter-task communication between tasks sharing a common buffer
- Queues and mailboxes
- Pipe and socket devices
- Remote procedure calls (RPC) for distributed processes

Shared-data problems: Problems of sharing data between multiple tasks and routines arise because some data is common to different processes or tasks. Examples:
1) Time, which is updated continuously by one process, is also used by a display process in the system.
2) Port input data, which is received by one process and further processed and analyzed by another process.
3) Memory-buffer data, which is inserted by one process and further read (deleted), processed and analyzed by another process.

Assume that at an instant, while a variable is being operated on, only a part of the operation is complete and another part remains incomplete, and at that moment an interrupt occurs. Assume there is another function that also shares the same variable. The value of the variable may then differ from the one expected had the earlier operation been completed. Whenever another process shares the same partly operated-on data, a shared-data problem arises. For example, an interrupt may change some of the bits while a 32-bit CPU is processing 128-bit data: the 32-bit CPU operation is atomic, but the 128-bit variable operation is non-atomic.

Let x be a 128-bit variable b127 ... b0, and let a shift-left-by-2 (multiply by 4) operation OPsl be performed non-atomically by four atomic sub-operations OPAsl, OPBsl, OPCsl and OPDsl on b31...b0, b63...b32, b95...b64 and b127...b96 respectively. Assume that at some instant OPAsl, OPBsl and OPCsl are complete, OPDsl is incomplete, and an interrupt causes some function to access the variable x and modify it as x = b'127 ... b'0. On return from the interrupt, OPDsl operates on b'127 ... b'96 of the new value rather than on b127 ... b96 of the original.

Steps for the elimination of the shared-data problem:
1) Use the modifier volatile in the declaration of a variable that is changed by an interrupt routine.
2) Use re-entrant functions, with atomic instructions in the section of a function that needs complete execution before it can be interrupted; this section is called the critical section.
3) Put a shared variable in a circular queue. A function that requires the value of this variable always deletes (takes) it from the queue front, and another function, which inserts (writes) the value of this variable, always does so at the queue back.
4) Disable the interrupts (DI) before a critical section starts executing and enable the interrupts (EI) on its completion. DI is powerful but a drastic option: an interrupt, even one of higher priority than the critical function, gets disabled. A software designer usually does not use this drastic option in all the critical sections, except in software such as automobile systems.
5) Use lock() as a critical section starts executing and unlock() on its completion.
6) Use IPC (inter-process communication).
7) Use a semaphore as a mutex for the shared-data problem.

Use of semaphores does not eliminate the shared-data problem completely. The problems that arise when using semaphores are:
- Sharing two semaphores can create a deadlock problem.
- If a taken semaphore is never released, a time-out mechanism (watchdog timer) must reset the processor.
- If a semaphore is not taken, another task may use the shared variable.
- When using multiple semaphores, an unintended task taking a semaphore creates a problem.
- The priority-inversion problem.

Concept of semaphores: Use of a semaphore as an event-signalling or notifying variable: the OS provides the use of a semaphore for signalling or notifying an action, and for notifying the acceptance of the notice or signal. A type of semaphore called a binary semaphore uses a binary variable s to represent the semaphore. The operations on s signal or notify the occurrence of an event and the taking note of the event. It is like a token: release of the token is the occurrence of the event, and acceptance of the token is taking note of that event. If s is 0, it is assumed the token has been taken (accepted); when it is 1, it has been released and no task has taken it yet.

An example of how to use a binary semaphore for signalling and notifying occurrences of an event from a task or thread, a chocolate-delivery task: after the task delivers the chocolate, it has to notify the display task to display "Collect nice chocolate. Thank you". Assume OSSemPost() is an OS function for IPC by posting a semaphore, and OSSemPend() is another IPC function for waiting on a semaphore. Let sdispT be a binary semaphore posted from the delivery task and taken by Task_Display for displaying the message, with initial value sdispT = 0. The code using the semaphore is:

static void Task_Deliver (void *taskPointer)
{
    ...
    while (1) {
        ...
        /* Codes for delivering chocolate */
        ...
        OSSemPost(sdispT);  /* Post the semaphore sdispT; the OS function
                               increments it in the event control block */
        ...
    }
}

static void Task_Display (void *taskPointer)
{
    ...
    while (1) {
        ...
        OSSemPend(sdispT);  /* Wait for sdispT until it is posted and becomes 1.
                               When it becomes 1, the wait is over and the OS
                               function decrements sdispT in the event control
                               block */
        /* Code for displaying the message */
        ...
    }
}


Use of a semaphore as a resource key and for a critical section: The OS provides the use of a single semaphore as a resource key and for running the code of critical sections. A task A that wants to access a resource (e.g. a printer, file or network) notifies the OS to take the semaphore; the OS returns the semaphore as taken (accepted) by decrementing it from 1 to 0. Now task A accesses the resource, and after completing the access it notifies the OS that it has posted that semaphore; the OS returns the semaphore as released by incrementing it from 0 to 1. Another task, B, can access the same resource using OSSemPend() if it is waiting for that semaphore, and it posts the semaphore using OSSemPost() after completing its access of the resource. The use of the semaphore between tasks A and B, with five sequential actions at five different times, is shown below:


An example: Consider Update_time_task, which updates the time information t in the timing device from the system clock on a clock-tick interrupt, and notifies the Read_time task to run a waiting section of code that reads t from the timing device; after Read_time reads t, it notifies Update_time_task to take note of it. Code:

static void Task_Update_time (void *taskPointer)
{
    while (1) {
        OSSemPend(supdateT);  /* supdateT is a binary semaphore; wait on it.
                                 The OS function decrements it in the
                                 corresponding event control block, and it
                                 becomes 0 at T2 */
        /* Codes for writing the date and time into the timing device */
        ...
        OSSemPost(supdateT);  /* Post the semaphore supdateT; the OS increments
                                 it in the corresponding event control block,
                                 and it becomes 1 at T3 */
        ...
    }
}

static void Task_Read_time (void *taskPointer)
{
    while (1) {
        OSSemPend(supdateT);  /* Wait for supdateT; when it becomes 1, the OS
                                 function decrements it to 0 at T4 in the event
                                 control block */
        ...
        /* Code for reading the date and time from the timing device */
        ...
        OSSemPost(supdateT);  /* Post the semaphore; the OS increments it to 1
                                 in the corresponding event control block */
    }
}

Mutex: When a binary semaphore is used at the beginning and end of the critical sections in two or more tasks, such that at any instant only one section's code can run, the semaphore is called a mutex.


Use of multiple semaphores for synchronizing tasks in multitasking operations: an example shows the use of two semaphores for synchronizing tasks I, J and M and tasks J and L:

Number of tasks waiting for the same semaphore, different cases:
i) In certain OSes, the semaphore is given to the task of highest priority among the waiting tasks.
ii) In certain OSes, the semaphore is given to the longest-waiting task, in FIFO mode.
iii) Certain OSes provide a selectable option: the semaphore is given in either priority or FIFO mode.

Counting semaphores: The OS provides counting semaphores, which can be unsigned 8-, 16- or 32-bit integers. The count value of the semaphore controls the blocking or running of a task's code. The count increments when the semaphore is released by a task and decrements each time the semaphore is taken.

P and V semaphores: For efficient synchronization, the semaphore standard POSIX 1003.1b, an IEEE standard, is used (POSIX: Portable OS Interfaces, with roots in UNIX). In this standard, P and V represent the semaphore by an integer variable, accessed by two standard atomic operations. P is the wait operation and stands for 'proberen', meaning to test; V is the signal (notify) operation and stands for 'verhogen', meaning to increment. The P semaphore function signals that the task requires a resource and, if it is not available, waits for it. V signals from the task to the OS that the resource is now free for other users.


Consider the P semaphore as a function P(&sem_1); when called in a process, it does the following actions:
1) Decrement the semaphore variable: sem_1 = sem_1 - 1;
2) If sem_1 < 0, send a message to the OS by calling waitCallToOS, because sem_1 < 0 means some other process has already executed the P function on sem_1. Whenever there is a return from the OS, execution resumes.
Example code: if (sem_1 < 0) { waitCallToOS(sem_1); }

Consider the V semaphore as a function V(&sem_2); when called in a process, it does the following actions:
1) Increment the semaphore variable: sem_2 = sem_2 + 1;
2) If sem_2 <= 0, send a message to the OS by calling the function signalCallToOS. Control of the process transfers to the OS, because sem_2 <= 0 means some other process has already executed the P function on sem_2 and is waiting.
Example code: if (sem_2 <= 0) { signalCallToOS(sem_2); }

Use of the P and V semaphore functions with a signal or notification property: Let sem_s be a semaphore variable; the P and V semaphore functions are used in two processes, task1 and task2, as follows:


Use of the P and V semaphore functions with a mutex property: Let sem_1 and sem_2 be the same variable, sem_m, used as follows:

The following figure shows the use of the P and V semaphores in tasks I and J with the scheduler:


Assignment of the program counter (PC) to a process or function when using P and V semaphores is shown below:

Use of P and V semaphore functions as counting semaphores: consider three examples:
i) A task transmits bytes to an IO stream, filling the available places in the stream.
ii) A process writes an IO stream to a printer buffer.
iii) A task produces chocolates.
The following situations create the need for counting semaphores:
i) In example 1, another task reads the IO stream bytes from the filled places and creates empty places.
ii) In example 2, an IO stream prints from the printer buffer after a buffer read; after printing, more empty places are created.
iii) In example 3, a consumer consumes the chocolates produced, creating more empty places.
As an example, assume two tasks, task3 and task4, using P and V semaphore functions. Let sem_c1 and sem_c2 be two counting variables that represent the number of filled places created by task3 and the number of empty places created by task4, respectively.


Priority inversion problem and deadlock situations:
Assumptions: the priorities of the tasks are in an order such that task I has the highest priority, task J a lower priority and task K the lowest. Only tasks I and K share data; J does not share data with K. Tasks I and K alone share a semaphore sik; J does not.
Priority inversion problem:
At instant t0, suppose task K takes sik. This blocks only task I, not task J, because only tasks I and K share the data; moreover, I is blocked at t0 waiting both for some message and for sik. Consider the problem that now arises from this selective sharing between K and I.
At the next instant t1, let task K become ready first on an interrupt. Assume that at instant t2 task I becomes ready, on an interrupt or on getting the awaited message. At this instant K is inside the critical section, so task I cannot start.
Now, at instant t3, some event causes the unblocked task J (of higher priority than K) to run. After t3, the running task J does not allow the highest-priority task I to run, because K is not running and therefore cannot release the sik that it shares with I. Further, the design of task J may be such that even when sik is released by task K, it may not let I run (J runs its code as if it were in a critical section all the time after executing DI, disabling interrupts).
The J action is now as if J had higher priority than I. This is because when K entered the critical section and took the semaphore, and the OS then let J run, the OS did not share the priority information about I, namely that task I is of higher priority than J. The priority of the higher-priority task I should have been inherited temporarily by K, since K makes I wait while J does not, and J runs while K has still not finished the critical section code.
This did not happen because the given OS did not provide for temporary priority inheritance in such situations. The above situation is called the priority inversion problem.
OS provision for temporary priority inheritance: some OSes provide priority inheritance in these situations, so the priority inversion problem does not occur when using them. A mutex should be a mutually exclusive Boolean function by which the critical section is protected from interruption in such a way that the problem of priority inversion does not arise.

A mutex is automatically provided in certain RTOSes so that the priority inversion problem does not arise. In certain OSes, a mutex is automatically provided with priority inheritance by the task taking it, so that the priority inversion problem does not arise; certain OSes provide a choice between priority inheritance and priority ceiling options.

Deadlock situation:
Assumptions: the priorities of the tasks are such that task H has the highest priority, task I a lower priority and task J the lowest. There are two semaphores, SemTok1 and SemTok2. Tasks I and H share a resource through SemTok1 only. Tasks I and J share two resources through the two semaphores SemTok1 and SemTok2.
Let task J run on an interrupt at instant t0 and first take both semaphores, SemTok1 and SemTok2. In the interval between t0 and t1, SemTok1 is released during the run of task J, but SemTok2 is not; the latter does not matter to H, as tasks H and J do not share SemTok2. At the next instant t1, task H, now being of higher priority, interrupts tasks I and J after it takes the semaphore SemTok1, and thus blocks both I and J.
At instant t2, H releases SemTok1 and lets task I take it. Even then I cannot run, because it is also waiting for task J to release SemTok2. Task J, in turn, is waiting at instant t3 for either H or I to release SemTok1, because it needs it to enter a critical section again. After instant t3, neither task I nor task J can run: a circular dependency is established between I and J. This is called a deadlock situation.
The solution: on the interrupt by H, task J, before exiting the running state, should have been placed at the queue front so that it would later take SemTok1 first, with task I queued next for the same token; then the deadlock would not have occurred.
Signal functions: one way of messaging is to use the OS function signal(), provided in UNIX, Linux and several RTOSes. A signal is the software equivalent of a flag in a register that is set on a hardware interrupt. Unless masked by a signal mask, the signal allows execution of the signal-handling function, just as a hardware interrupt allows execution of an ISR.

A signal is an IPC used for signalling from a process A to the OS to enable the start of another process B. A signal is a one- or two-byte IPC from a process to the OS and provides the shortest communication. The signal() call sends a one-bit output for a process, which unmasks the signal mask of a process or task called the signal handler.
The handler has code similar to that of an ISR and runs in a way similar to a highest-priority ISR. An ISR runs on a hardware interrupt provided the interrupt is not masked; the signal handler likewise runs on a signal provided the signal is not masked.
signal() forces the signalled process or task (the signal handler) to run. On return from the signalled (forced) task or process, the process that sent the signal resumes its code, as happens on return from an ISR. The OS connects a signal to a process or ISR j (the signal handler function) and resets the signal mask of j; j then runs after all processes (or ISRs) of priority higher than j finish. An OS provision for the signal as an IPC function means a provision for an interrupt message from one process or task to another.
Some signal-related IPC functions are:
1. SigHandler() - creates a signal handler corresponding to a signal identified by its signal number and defines a pointer to the signal context; the signal context saves the registers on a signal.
2. A function to connect an interrupt vector to a signal number, with the signal handler function and signal handler arguments; the interrupt vector provides the program counter value for the signal handler function address.
3. A function signal() to send a signal, identified by a number, to a signal handler task.
4. A function to mask the signal.
5. A function to unmask the signal.
6. A function to ignore the signal.
Comparison between signal and semaphore: some OSes provide both the signal and semaphore IPC functions; every OS provides semaphore IPC functions. When the IPC functions for signals are not provided by an OS, the OS employs semaphores for the same purpose.


An example: task A sends a signal sB to initiate task B (the signal handler) to run.

Task A sends a semaphore as an event flag, sem_i, to initiate a task section that waits to take sem_i before it can run.

Advantages of signals:
Unlike semaphores, a signal takes the shortest possible CPU time. Signals are the flags, or one- or two-byte messages, used as IPC functions for synchronizing the concurrent processing of tasks.
A signal is the software equivalent of a flag in a register that is set on a hardware interrupt. It is sent on some exception or on some condition that can be set during the running of a process, task or thread. Sending a signal is the software equivalent of throwing an exception in a C++ or Java program.
Unless masked by a signal mask, the signal allows execution of the signal-handling process, just as a hardware interrupt allows execution of an ISR. A signal is identical to setting a flag that is shared and used by another interrupt-servicing process. A signal raised by one process forces another process to interrupt and to catch that signal, provided the signal is not masked at that process.

Drawbacks of using a signal:
A signal is handled only by a very high priority process (service routine), which may disrupt the usual schedule and the usual priority inheritance mechanism.
A signal may cause a reentrancy problem (the process does not return to a state identical to the one before the signal handler executed).
Semaphore functions:
A semaphore serves as a notice or token for an event occurrence. Semaphores can also be a P and V semaphore pair in the POSIX-standard semaphore IPC. Some functions used in µC/OS-II are:
1. OSSemCreate - creates a semaphore and initializes it.
2. OSSemPost - sends the semaphore to an event control block; its value increments on event occurrence (used in ISRs as well as in tasks).
3. OSSemPend - waits for the semaphore from an event; its value decrements on taking note of that event occurrence (used in tasks).
4. OSSemAccept - reads and returns the present semaphore value; if it shows the occurrence of an event (a non-zero value), it takes note of that and decrements the value (no wait; used in ISRs and tasks).
5. OSSemQuery - queries the semaphore for an event occurrence or non-occurrence by reading its value; it returns the present semaphore value and a pointer to the data structure OSSemData. The semaphore value does not decrease.
Mutex, lock and spin lock:
A mutex blocks a critical section in one task while another task's critical section holds the mutex through the OS. A kernel function lock() locks a process to the resources until that process executes unlock(). Using lock() and unlock() involves little overhead compared with the OSSemPost and OSSemPend functions of a mutex.
Consider this situation: a task is running and little time is left for its completion - less than the time that would be taken in blocking it. The OS handles this situation with a spin lock, a powerful tool.
Spin lock: the scheduler's locking process for a task I waits in a loop to cause the blocking of the running task, first for a time interval t, then (t - dt), then (t - 2dt) and so on. When this time interval spins down to 0, the task that requested the lock of the processor unlocks the running task I and blocks it from further running.


Message queue functions: features of the message queue IPC are:
1. The OS provides for inserting and deleting messages or message pointers.
2. Each queue for messages or message pointers needs initialization before the kernel queue functions are used.
3. Each created queue has an ID.
4. Each queue has a user-definable size (an upper limit on the number of bytes).
5. When an OS call inserts a message into the queue, the bytes are as per the pointed number of bytes.
6. When a queue is full, an error-handling function handles the condition.
The functions for queues in an OS are shown below:

A queue message block with the messages or message pointers is shown below:

The OS functions for a queue, for example in µC/OS-II, are:
1. OSQCreate - creates a queue and initializes it.
2. OSQPost - sends a message into the queue at the queue tail pointer; it can be used by tasks as well as ISRs.
3. OSQPend - waits for a message in the queue, then reads and deletes it when received.
4. OSQAccept - deletes the present message at the queue head after checking its presence; after the deletion the queue head pointer increments.
5. OSQFlush - deletes all messages from the queue head to the tail.

6. OSQQuery - queries the message block; the message is not deleted.
7. OSQPostFront - sends a message to the front pointer; used if the message is urgent or of higher priority than all the messages previously posted into the queue.
Mailbox functions:
A mailbox is an IPC message that can be used only by a single destined task. It holds a message pointer, or it can hold the message itself. The source (sender) is a task that sends the message pointer to the mailbox; at the destination, the OSMBoxPend function waits for the mailbox message and reads it when received. Three mailbox types found in different RTOSes are shown below:

The initialization and other functions for a mailbox in an OS are shown below:

A mailbox permits one message pointer per box, whereas a queue permits multiple messages or message pointers. The RTOS mailbox functions of µC/OS-II are:
1. OSMBoxCreate - creates a box and initializes the mailbox contents with a NULL pointer.
2. OSMBoxPost - sends (writes) a message to the box.
3. OSMBoxWait - waits (pends) for a mailbox message.
4. OSMBoxAccept - reads the current message pointer after checking its presence and empties the mailbox when read (no wait).
5. OSMBoxQuery - queries the mailbox; the message is read but not deleted.
An ISR can post to, but not wait on, the mailbox of a task.


Pipe functions: pipe functions are similar to the ones used for devices like files. A pipe is a device for inserting (writing) and deleting (reading) bytes between two interconnected tasks or two sets of tasks. Writing to and reading from a pipe is like using the C function fwrite with a file name to write into a named file and fread with a file name to read it. Writing to the pipe is at the back pointer address *pBACK and reading is at the front pointer address *pFRONT.
A pipe is unidirectional: one thread or task inserts into it and the other deletes from it. The functions for a pipe are:
1. pipeDevCreate - creates a device that functions as a pipe.
2. open() - opens the device to enable its use from the beginning of its allocated buffer.
3. connect() - connects the thread or task inserting bytes to the thread or task deleting bytes from the pipe.
4. write() - inserts bytes starting from the bottom of the empty memory space in the buffer allotted to the pipe.
5. read() - deletes (reads) bytes from the pipe from the bottom of the unread memory spaces in the buffer filled by writing into the pipe.
6. close() - closes the device; it can be used again from the beginning of its allocated buffer only after opening it again.
The pipe functions in an OS are shown below:

Pipe messages in a message buffer are shown below:


Socket functions: some problems with pipe functions are:
i) Bidirectional communication between tasks is not possible.
ii) Address information (ID or port) of both source and destination is needed for communication.
To solve these problems, protocols are provided.
Connectionless protocol: an example is the User Datagram Protocol (UDP). It requires a UDP header, which contains the source and destination port numbers of the processes, the length of the datagram and a checksum for the header bytes. The numbers specify the processes; connectionless means there is no connection establishment between source and destination before the actual transfer of the data stream takes place. A datagram is a set of independent data that need not be in sequence with previously sent data. The checksum is a sum over the bytes that enables checking for erroneous data transfer.
Connection-oriented protocol: an example is the Transmission Control Protocol (TCP). There must first be a connection establishment between source and destination before data transfer takes place, and at the end the connection must be terminated.
A socket provides a device-like mechanism with bidirectional communication and provides a protocol between source and destination processes for data transfer. A socket provides connection establishment and the closing of a connection. A socket may provide listening from multiple sources or multicasting to multiple destinations. Two tasks at two distinct places may interconnect locally through a socket; multiple tasks at multiple distinct places may interconnect through sockets to a socket at a server process. The socket domain may be TCP or UDP.


An example: the figure shows the initialized sockets between a client set of tasks and a server set of tasks in an OS.

The byte stream between client and server is shown below:

Applications of sockets are:
i) Connecting tasks in the distributed environment of an embedded system; for example, a network interconnection, or a card process connecting to a host-machine process.
ii) A TCP/IP socket for an internet connection; for example, a task receiving a byte stream over the TCP/IP protocol on a mobile internet connection.
iii) A task writing into a file at a computer or on a network using the Network File System (NFS) protocol.
iv) Interconnection of a task, or a section in a source set of tasks, in an embedded system with another task in a destined set of tasks.
Some OS functions for sockets in UNIX are the following:
a) socket() - gives the socket descriptor and enables its use from the beginning of its allocated buffer at the socket address.
b) unlink() - called before bind().
c) bind() - binds the thread or task inserting bytes into the socket to the thread or task deleting bytes from the socket.
d) listen() - listens for up to 16 queued connections from client sockets.
e) accept() - accepts a client connection.

f) recv() - deletes (receives) bytes from the socket from the bottom of the unread memory spaces in the buffer.
g) send() - inserts (sends) bytes into the socket from the bottom of the memory spaces in the buffer filled by writing into the socket.
h) close() - closes the device to enable its use again from the beginning.
RPC functions: a remote procedure call (RPC) is a method of connecting two remotely placed functions by first using a protocol for connecting the processes. It is used in the case of distributed tasks. OSes provide the use of RPCs, and these permit a distributed environment for embedded systems. An RPC provides IPC when one task is at system 1 and another task is at system 2; the OS IPC function allows a function or method to run in another address space on a shared network or another remote computer.
Study of µC/OS-II: basic functions and types of RTOSes:
A complex multitasking embedded system design requires the following:
1. IDE
2. Task functions in embedded C or embedded C++
3. RTC-based hardware and software timers
4. Scheduler
5. Device drivers and device manager
6. Functions for IPCs using signals, event flag groups, semaphore-handling functions, and functions for queues, mailboxes, pipes and sockets
7. Additional functions such as TCP/IP, USB, Bluetooth, Wi-Fi, IrDA or GUI
8. Error- and exception-handling functions
9. Testing and system-design software for testing the RTOS as well as the developed embedded application
The basic functions from the kernel of an RTOS are shown below:


Some features of an RTOS are:
1. Basic kernel functions and scheduling
2. Priority definitions for the tasks and ISTs
3. Priority inheritance feature
4. Limit on the number of tasks
5. Task synchronization and IPC functions
6. IDE consisting of editor, platform builder, GUI and graphics software, compiler, debugging and host-target support tools
7. Device imaging tool and device drivers
8. Support for clock, time and timer functions, POSIX, asynchronous IOs, memory allocation and deallocation, file systems, flash systems, TCP/IP, network, wireless and network protocols, an IDE with Java, and componentization, which leads to a small footprint (small-sized RTOS code placed in the ROM image)
9. Support for a number of processor architectures
There are two approaches to real-time (or non-real-time) application development:
1. Host-target approach: a host machine (PC) uses a general purpose OS (Windows or UNIX) for system development. The target connects over a network protocol during the development phase. The developed code and the target RTOS functions are first downloaded to the connected target; the target, with the downloaded code containing a small-footprint RTOS, finally disconnects.
2. Self-host approach: the same system, with a full RTOS, is used for development, and the application runs on it.
Types of RTOS:
In-house developed RTOS: has code written for a specific need, application or product, customized to the in-house design needs. Generally either a small-scale application developer or a big research company with a group of engineers and system integrators uses this.
Broad-based commercial RTOS: some advantages of such a system are:
1. Availability of self-tested and debugged RTOS functions
2. Development tools such as source-code engineering, testing, simulation, debugging and error-handling capability
3. Support for many processor architectures such as ARM, x86, MIPS and SuperH
4. Support for many devices, graphics, network connectivity protocols and file systems
5. Provision of error- and exception-handling functions
6. Saving of a large amount of development time on RTOS tools and in-house documentation
7. Saving of maintenance cost
General purpose OS with RTOS: embedded Linux or Windows XP is a general purpose OS; such OSes are not componentized, the footprint is not reducible, and tasks are not
assignable priorities. They offer a powerful GUI and rich multimedia interfaces, and have low cost.
Special-focus RTOS: used with specific processors such as ARM, 8051 or a DSP.
RTOS µC/OS-II:
A popular RTOS for embedded system development; freeware for non-commercial use, available from Micrium. µC/OS stands for Micro-Controller Operating System; it is also known as MUCOS, MicroCOS or UCOS. It is a portable, ROMable, scalable, preemptive, real-time, multitasking kernel.
Applications: automotive, avionics, consumer electronics, medical devices, military, aerospace, networking and SoC development. An advantage of using this RTOS is full source-code availability; the code is in C, with a few CPU-specific modules in assembly.
µC/OS has a real-time kernel with some additional features:
1. µC/BuildingBlocks - embedded building blocks for hardware peripherals
2. µC/FL - embedded flash memory loader
3. µC/FS - embedded memory file system
4. µC/GUI - embedded GUI platform
5. µC/Probe - embedded real-time monitoring tool
6. µC/TCP-IP - embedded TCP/IP stack
7. µC/CAN - embedded CAN bus
8. µC/MOD - embedded Modbus
9. µC/USB - embedded USB device framework
Source files: there are two types of source code:
1. Processor-dependent source files: two header files at the master are (a) os_cpu.h, the processor definition header file, and (b) os_cfg.h, the kernel building configuration file. A further two C files, os_cpu_c.c and os_tick.c, are for the ISRs and the RTOS timer.
2. Processor-independent source files: the header and C files ucos_ii.h and ucos_ii.c. The files for the RTOS core, timer and tasks are os_core.c, os_time.c and os_task.c. The memory-partitioning, semaphore, queue and mailbox code are in os_mem.c, os_sem.c, os_q.c and os_mbox.c respectively.
System-level functions of µC/OS-II:
void OSInit (void) - called at the beginning, prior to OSStart(). It initializes the OS before the RTOS functions are used and is compulsory before calling any OS kernel function.

void OSStart (void) - called after OSInit() and the task-create function. It starts the RTOS multitasking functions and runs the tasks. Its use is compulsory for the multitasking OS kernel operations.
void OSTickInit (void) - called in the first task function, which executes only once; it initializes the system timer tick. It starts the RTOS system clock, which ticks and interrupts at regular intervals as per OS_TICKS_PER_SEC, predefined while configuring the OS. Its use is compulsory for the multitasking OS kernel operations when timer functions are used.
void OSIntEnter (void) - notifies the OS that an ISR is being processed, which allows the OS to keep track of interrupt nesting. OSIntEnter() is used in conjunction with OSIntExit(). It sends a message to the RTOS kernel to take control and is compulsory to let the multitasking OS kernel control the nesting of ISRs in case of multiple interrupts.
void OSIntExit (void) - notifies the OS that an ISR has completed, which allows the OS to keep track of interrupt nesting. OSIntExit() is used in conjunction with OSIntEnter(). It sends a message to the RTOS kernel to quit control of the nesting loop and is compulsory for quitting the ISR from the nested loop of ISRs.
OS_ENTER_CRITICAL - macro to disable interrupts; used at the start of a critical section in a task or ISR.
OS_EXIT_CRITICAL - macro to enable interrupts; used just before the return from the critical section, and compulsory to let the OS kernel quit the section and enable interrupts to the system.
OSSchedLock ( ) - locks scheduling of the tasks; it disables preemption by a higher-priority task.
OSSchedUnlock ( ) - unlocks scheduling of the tasks and enables preemption by a higher-priority task.
Task service and time functions in µC/OS-II:
INT8U OSTaskCreate(void (*task)(void *pd), void *pdata, OS_STK *ptos, INT8U prio);
Passing arguments:
1. task: a pointer to the task's code.
2. pdata: a pointer to an optional data area used to pass parameters to the task when it is created.
3. ptos: a pointer to the task's top of stack. The stack is used to store local variables, function parameters, return addresses and CPU registers during an interrupt.
4. prio: the task priority. A unique priority number must be assigned to each task; the lower the number, the higher the priority.

Returned value: one of the following error codes:
1. OS_ERR_NONE: if the function is successful.
2. OS_ERR_PRIO_EXIST: if the requested priority already exists.
3. OS_ERR_PRIO_INVALID: if prio is higher than OS_LOWEST_PRIO.
4. OS_ERR_NO_MORE_TCB: if µC/OS-II doesn't have any more OS_TCBs to assign.
5. OS_ERR_TASK_CREATE_ISR: if you attempted to create the task from an ISR.
INT8U OSTaskSuspend(INT8U prio); - suspends (blocks) execution of a task unconditionally.
Passing arguments:
1. prio: specifies the priority of the task to suspend.
Returned value: one of the error codes:
1. OS_ERR_NONE: if the call is successful.
2. OS_ERR_TASK_SUSPEND_IDLE: if you attempt to suspend the µC/OS-II idle task, which is not allowed.
3. OS_ERR_PRIO_INVALID: if you specify a priority higher than the maximum allowed.
4. OS_ERR_TASK_SUSPEND_PRIO: if the task you are attempting to suspend does not exist.
5. OS_ERR_TASK_NOT_EXIST: if the task is assigned to a Mutex PIP.
INT8U OSTaskResume(INT8U prio); - resumes a task suspended through the OSTaskSuspend() function.
Passing arguments:
1. prio: specifies the priority of the task to resume.
Returned value: one of the error codes:
1. OS_ERR_NONE: if the call is successful.
2. OS_ERR_TASK_RESUME_PRIO: if the task you are attempting to resume does not exist.
3. OS_ERR_TASK_NOT_SUSPENDED: if the task to resume has not been suspended.
4. OS_ERR_PRIO_INVALID: if prio is higher than or equal to OS_LOWEST_PRIO.
5. OS_ERR_TASK_NOT_EXIST: if the task is assigned to a Mutex PIP.
void OSTimeSet(INT32U ticks); - sets the system clock.
Passing arguments:
1. ticks: the desired value for the system clock, in ticks.
INT32U OSTimeGet(void); - obtains the current value of the system clock.
Returned value: the current system clock value (in number of ticks).


Time delay functions in µC/OS-II:
void OSTimeDly(INT32U ticks); - allows a task to delay itself for an integral number of clock ticks. Rescheduling always occurs when the number of clock ticks is greater than zero. Valid delays range from 1 to 2^32 - 1 ticks.
Passing arguments:
1. ticks: the number of clock ticks to delay the current task.
INT8U OSTimeDlyResume(INT8U prio); - resumes a task that has been delayed through a call to either OSTimeDly() or OSTimeDlyHMSM().
Passing arguments:
1. prio: specifies the priority of the task to resume.
Returned value: one of the error codes:
1. OS_ERR_NONE: if the call is successful.
2. OS_ERR_PRIO_INVALID: if you specify a task priority greater than OS_LOWEST_PRIO.
3. OS_ERR_TIME_NOT_DLY: if the task is not waiting for time to expire.
4. OS_ERR_TASK_NOT_EXIST: if the task has not been created or has been assigned to a Mutex PIP.
void OSTimeDlyHMSM(INT8U hours, INT8U minutes, INT8U seconds, INT16U ms); - allows a task to delay itself for a user-specified amount of time, specified in hours, minutes, seconds and milliseconds.
Passing arguments:
1. hours: the number of hours the task is delayed; valid range 0 to 255.
2. minutes: the number of minutes the task is delayed; valid range 0 to 59.
3. seconds: the number of seconds the task is delayed; valid range 0 to 59.
4. ms: the number of milliseconds the task is delayed; valid range 0 to 999.
Returned value: one of the error codes:
1. OS_ERR_NONE: if you specify valid arguments and the call is successful.
2. OS_ERR_TIME_INVALID_MINUTES: if the minutes argument is greater than 59.
3. OS_ERR_TIME_INVALID_SECONDS: if the seconds argument is greater than 59.
4. OS_ERR_TIME_INVALID_MS: if the milliseconds argument is greater than 999.
5. OS_ERR_TIME_ZERO_DLY: if all four arguments are 0.
6. OS_ERR_TIME_DLY_ISR: if you called this function from an ISR.
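The system-level, task-create and delay functions described above combine as in the following sketch of a minimal µC/OS-II application. This will not run standalone - it assumes the µC/OS-II source and a processor port (ucos_ii.h, os_cpu.h, os_cfg.h); the stack size, priority 5 and the top-of-stack expression are illustrative, and the stack growth direction is port-specific.

```c
#include "ucos_ii.h"            /* µC/OS-II kernel header (port required) */

#define TASK_STK_SIZE 128
static OS_STK Task1Stk[TASK_STK_SIZE];

static void Task1(void *pdata)
{
    (void)pdata;
    OSTickInit();                       /* once, in the first task to run */
    for (;;) {
        /* ... periodic task work ... */
        OSTimeDlyHMSM(0, 0, 1, 0);      /* delay 1 s; lower tasks may run */
    }
}

int main(void)
{
    OSInit();                           /* before any other kernel call   */
    OSTaskCreate(Task1, NULL,
                 &Task1Stk[TASK_STK_SIZE - 1],  /* top of a descending stack */
                 5);                    /* priority 5 (lower = higher)    */
    OSStart();                          /* start multitasking; no return  */
    return 0;                           /* never reached                  */
}
```

Every task body follows this shape: an infinite loop that does its work and then yields the CPU through a delay or a pend call, so that lower-priority tasks get to run.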


Memory allocation-related functions of µC/OS-II:
OS_MEM *OSMemCreate(void *addr, INT32U nblks, INT32U blksize, INT8U *perr); - creates and initializes a memory partition. A memory partition contains a user-specified number of fixed-size memory blocks.
Passing arguments:
1. addr: the address of the start of a memory area that is used to create the fixed-size memory blocks.
2. nblks: the number of memory blocks available from the specified partition.
3. blksize: the size (in bytes) of each memory block within a partition. A memory block must be large enough to hold at least a pointer, and its size must be a multiple of the size of a pointer.
4. perr: a pointer to a variable that holds an error code. OSMemCreate() sets *perr to:
OS_ERR_NONE: if the memory partition is created successfully.
OS_ERR_MEM_INVALID_ADDR: if you specify an invalid address (i.e., addr is a NULL pointer) or the partition is not properly aligned.
OS_ERR_MEM_INVALID_PART: if a free memory partition is not available.
OS_ERR_MEM_INVALID_BLKS: if you don't specify at least two memory blocks per partition.
OS_ERR_MEM_INVALID_SIZE: if you don't specify a block size that can contain at least a pointer variable, or it is not a multiple of the size of a pointer.
Returned value: a pointer to the created memory-partition control block if one is available. If no memory-partition control block is available, OSMemCreate() returns a NULL pointer.
void *OSMemGet(OS_MEM *pmem, INT8U *perr); - obtains a memory block from a memory partition.
Passing arguments:
1. pmem: a pointer to the memory-partition control block returned to your application by the OSMemCreate() call.
2. perr: a pointer to a variable that holds an error code. OSMemGet() sets *perr to one of the following:
OS_ERR_NONE: if a memory block is available and returned to your application.
OS_ERR_MEM_NO_FREE_BLKS: if the memory partition doesn't contain any more memory blocks to allocate.
OS_ERR_MEM_INVALID_PMEM: if pmem is a NULL pointer.
Returned value: a pointer to the allocated memory block if one is available. If no memory block is available from the memory partition, OSMemGet() returns a NULL pointer.
INT8U OSMemQuery(OS_MEM *pmem, OS_MEM_DATA *p_mem_data); - obtains information about a memory partition.
Passing arguments:
1. pmem: a pointer to the memory-partition control block that is returned to your application from the OSMemCreate() call.
2. p_mem_data: a pointer to a data structure of type OS_MEM_DATA, which contains the following fields:
   void *OSAddr;      /* Points to beginning address of the memory partition */
   void *OSFreeList;  /* Points to beginning of the free list of memory blocks */
   INT32U OSBlkSize;  /* Size (in bytes) of each memory block */
   INT32U OSNBlks;    /* Total number of blocks in the partition */
   INT32U OSNFree;    /* Number of memory blocks free */
   INT32U OSNUsed;    /* Number of memory blocks used */
Returned value: one of the following error codes:
   OS_ERR_NONE: if *p_mem_data was filled successfully.
   OS_ERR_MEM_INVALID_PMEM: if pmem is a NULL pointer.
   OS_ERR_MEM_INVALID_PDATA: if p_mem_data is a NULL pointer.

INT8U OSMemPut(OS_MEM *pmem, void *pblk); - returns a memory block to a memory partition.
Passing arguments:
1. pmem: a pointer to the memory-partition control block that is returned to your application from the OSMemCreate() call.
2. pblk: a pointer to the memory block to be returned to the memory partition.
Returned value: one of the following error codes:
   OS_ERR_NONE: if the memory block was returned to the memory partition.
   OS_ERR_MEM_FULL: if the memory partition cannot accept more memory blocks.
   OS_ERR_MEM_INVALID_PMEM: if pmem is a NULL pointer.
   OS_ERR_MEM_INVALID_PBLK: if pblk is a NULL pointer.

Semaphore-related functions of µC/OS-II

OS_EVENT *OSSemCreate(INT16U value); - creates and initializes a semaphore.
Passing arguments:
1. value: the initial value of the semaphore, which can be between 0 and 65,535. A value of 0 indicates that a resource is not available or an event has not occurred.
Returned value: a pointer to the event control block allocated to the semaphore. If no event control block is available, OSSemCreate() returns a NULL pointer.

void OSSemPend(OS_EVENT *pevent, INT32U timeout, INT8U *perr); - used when a task wants exclusive access to a resource, needs to synchronize its activities with an ISR or a task, or is waiting until an event occurs.
Passing arguments:
1. pevent: a pointer to the semaphore. This pointer is returned to your application when the semaphore is created.
2. timeout: allows the task to resume execution if the semaphore is not signaled within the specified number of clock ticks. A timeout value of 0 indicates that the task waits forever for the semaphore. The timeout value is not synchronized with the clock tick.
3. perr: a pointer to a variable used to hold an error code. OSSemPend() sets *perr to one of the following:
   OS_ERR_NONE: if the semaphore is available.
   OS_ERR_TIMEOUT: if the semaphore is not signaled within the specified timeout.
   OS_ERR_EVENT_TYPE: if pevent is not pointing to a semaphore.
   OS_ERR_PEND_ISR: if you called this function from an ISR and µC/OS-II has to suspend it.
   OS_ERR_PEND_LOCKED: if you called this function when the scheduler is locked.
   OS_ERR_PEVENT_NULL: if pevent is a NULL pointer.

INT16U OSSemAccept(OS_EVENT *pevent); - checks to see if a resource is available or an event has occurred, without blocking.
Passing arguments:
1. pevent: a pointer to the semaphore that guards the resource. This pointer is returned to the application when the semaphore is created.
Returned value: when OSSemAccept() is called and the semaphore value is greater than 0, the semaphore value is decremented, and the value of the semaphore before the decrement is returned to your application. If the semaphore value is 0 when OSSemAccept() is called, the resource is not available, and 0 is returned to your application.

INT8U OSSemPost(OS_EVENT *pevent); - a semaphore is signaled by calling OSSemPost(). If no task is waiting for the semaphore, the semaphore value is incremented and OSSemPost() returns to its caller. If tasks are waiting for the semaphore to be signaled, OSSemPost() removes the highest-priority task pending for the semaphore from the waiting list and makes this task ready to run. The scheduler is then called to determine if the awakened task is now the highest-priority task ready to run.
Passing arguments:
1. pevent: a pointer to the semaphore. This pointer is returned to your application when the semaphore is created.
Returned value: one of these error codes:
   OS_ERR_NONE: if the semaphore is signaled successfully.
   OS_ERR_SEM_OVF: if the semaphore count overflows.
   OS_ERR_EVENT_TYPE: if pevent is not pointing to a semaphore.
   OS_ERR_PEVENT_NULL: if pevent is a NULL pointer.

INT8U OSSemQuery(OS_EVENT *pevent, OS_SEM_DATA *p_sem_data); - obtains information about a semaphore.
Passing arguments:
1. pevent: a pointer to the semaphore.
2. p_sem_data: a pointer to a data structure of type OS_SEM_DATA, which contains the following fields:
   INT16U OSCnt;                            /* Current semaphore count */
   #if OS_LOWEST_PRIO <= 63
   INT8U OSEventTbl[OS_EVENT_TBL_SIZE];     /* Semaphore wait list */
   INT8U OSEventGrp;
   #else
   INT16U OSEventTbl[OS_EVENT_TBL_SIZE];    /* Semaphore wait list */
   INT16U OSEventGrp;
   #endif
Returned value: one of these error codes:
   OS_ERR_NONE: if the call is successful.
   OS_ERR_EVENT_TYPE: if you don't pass a pointer to a semaphore.
   OS_ERR_PEVENT_NULL: if pevent is a NULL pointer.
   OS_ERR_PDATA_NULL: if p_sem_data is a NULL pointer.