Join this video course on Udemy. Click the below link
https://www.udemy.com/mastering-rtos-hands-on-with-freertos-arduino-and-stm32fx/?couponCode=SLIDESHARE
>> The Complete FreeRTOS Course with Programming and Debugging <<
"The biggest objective of this course is to demystify RTOS practically using FreeRTOS and STM32 MCUs"
A STEP-by-STEP guide to port/run FreeRTOS using a development setup which includes:
1) Eclipse + STM32F4xx + FreeRTOS + SEGGER SystemView
2) FreeRTOS + Simulator (for Windows)
Demystifying the complete architecture-related (ARM Cortex-M) code of FreeRTOS, which will massively help you put this kernel on any target hardware of your choice.
PART-3 : Mastering RTOS FreeRTOS and STM32Fx with Debugging
1. Mastering RTOS: Hands on FreeRTOS
and STM32Fx with Debugging
Learn running/porting the FreeRTOS real-time operating system on
STM32F4x and ARM Cortex-M based Microcontrollers
Created by :
FastBit Embedded Brain Academy
Visit www.fastbitlab.com
for all online video courses on MCU programming, RTOS and
embedded Linux
PART-3
2. FastBit Embedded Brain Academy is an online training wing of Bharati Software.
We leverage the power of the internet to bring online courses to your fingertips in the domains of embedded
systems and programming, microcontrollers, real-time operating systems, firmware development and
embedded Linux.
All our online video courses are hosted on the Udemy e-learning platform, which enables you to
exercise a 30-day, no-questions-asked money-back guarantee.
For more information please visit : www.fastbitlab.com
Email : contact@fastbitlab.com
About FastBitEBA
6. Queue
Task A Task B
15
Queue
Task A Task B
When a queue is created it does not contain anything, so it is empty.
Task A writes a value into the queue. The value is sent to the front. Since the queue was
previously empty, the value is now both the first and the last value in the queue:
7. 12 15
Queue
Task A Task B
12 15
Queue
Task A Task B
Task A sends another value. The queue now contains the previously written value and this newly
added value. The previous value remains at the front of the queue while the new one is now at its
back. Three spaces are still available.
Task B reads a value from the queue. It will receive the value which is at the front of the queue.
8. 12
Queue
Task A Task B
Task B has removed an item. The second item is moved to be the one at the front of the queue.
This is the value Task B will read the next time it tries to read a value. Four spaces are now available:
9. Main Uses of Queues in RTOS
1. Synchronization between Tasks or Interrupts
2. Inter-task communication
10. Queues in Synchronization and inter-Task
communication
Blocked while
accessing the
empty Queue
I need some data to consume,
but I am waiting for Task A to produce some
data in the queue
Task A Task B
Data Producer
Empty
Queue
Data Consumer
11. Queues in Synchronization and inter-Task
communication
I'm unblocked!
Looks like data is
available in
the queue
Task A Task B
Queue
Data Produced
Unblocks Task B
Data Producer Data Consumer
25. Exercise
Design a FreeRTOS application which implements the below commands
LED_ON,
LED_OFF,
LED_TOGGLE_START
LED_TOGGLE_STOP
LED_STATUS_READ
RTC_DATETIME_READ
The command should be sent to the board via UART from the user.
26. Command Format
struct APP_CMD
{
uint8_t COMMAND_NUM;
uint8_t COMMAND_ARGS[10];
};
/* Let's use this data
structure to store the
command number and
its associated
arguments. */
30. Hardware Vs Software Timers
Hardware timers:
- Handled by the TIMER peripheral of the MCU
- No FreeRTOS APIs; you have to create your own functions to manage the timer peripherals
- Microsecond/nanosecond resolutions are possible
Software timers:
- Handled by the FreeRTOS kernel code
- FreeRTOS APIs are available
- Resolution depends on configTICK_RATE_HZ (the RTOS tick rate)
34. A semaphore is a kernel object or you can say kernel
service, that one or more threads of execution can
acquire or release for the purpose of synchronization or
mutual exclusion.
40. Task-A
producer
Task-B
Consumer
t1: Task A runs first and waits for data from the device driver
t2: Task B runs, which means Task A is pre-empted
t3: Task A runs again and finds the device driver has not yet given any data
t4: No data is available, so again it doesn't take any action
Task A and Task B are not synchronized for the
production and consumption of data
42. Kernel Objects which can be used for
Synchronization
Events (or Event Flags)
Semaphores (counting and binary)
Queues and Message Queues
Pipes
Mailboxes
Signals (UNIX-like signals)
Mutexes
FreeRTOS supports
semaphores, queues and
mutexes
44. Concluding points
Synchronization is nothing but aligning a number of tasks to achieve a desired behaviour,
whereas mutual exclusion is preventing a task from executing a critical section which is
already owned by another task for execution.
Typically, semaphores are used to implement synchronization between tasks and
between tasks and interrupts.
Mutexes are the best choice for implementing mutual exclusion, that is, protecting access to
a shared item.
Semaphores can also be used to implement mutual exclusion, but that introduces
some serious design issues which we will see later.
45. Let's see how we can use semaphores
for synchronization
47. Creating a Semaphore
SCB
Value
(Binary or a Count)
Task-1 Task-2 Task-3
Task-Waiting-List
This value determines how many
semaphore tokens are available.
Keys or tokens
Semaphore id or
name
49. Creating a Semaphore
A single semaphore can be acquired a finite number of times
by the tasks, depending upon how you first initialize the
semaphore.
51. Semaphore
Semaphores are kernel objects, or you can say
kernel services which you can use to achieve the
synchronization and mutual exclusion in your
project.
55. Binary semaphore use cases
1. Synchronization
That is synchronization between tasks or synchronization
between interrupts and tasks.
2. Mutual Exclusion
Binary semaphore can also be used for Mutual Exclusion, that is to
guard the critical section
57. Counting semaphore use cases
1. Counting Events
In this usage scenario an event handler will 'give' a semaphore each time an event occurs,
causing the semaphore's count value to be incremented on each give. A handler task will
'take' the semaphore each time it processes an event, causing the semaphore's count
value to be decremented on each take.
2. Resource Management
In this usage scenario the count value indicates the number of resources available. To
obtain control of a resource, a task must first obtain the semaphore, decrementing the
semaphore's count value. When the count value reaches zero there are no free
resources. When a task finishes with the resource it 'gives' the semaphore back,
incrementing the semaphore's count value.
59. void interrupt_handler(void)
{
do_important_work(); /* this is very short code */
sema_give(&sem); /* "give" means release the key */
} /* exit from the interrupt */
void task_function(void)
{
/* if taking the key is unsuccessful then this task will be blocked until the key is available */
while ( sema_take(&sem) ) /* "take" means trying to take the key */
{
/* it comes here only if taking the key is successful */
/* do the time-consuming work of the ISR */
}
}
62. Let's first see how a binary semaphore can be
used for synchronization between 2 tasks.
The main use of a binary or counting semaphore is synchronization. The
synchronization can be between tasks or between a task and an interrupt.
63. Binary sema to Synchronize between
Tasks
Task-1
Data
Producer
Task-2
Data
Consumer
Sema_key
Increments the key
when data is produced;
unblocks Task-2
if it was blocked due to
non-availability of the key
64. void task1_running(void)
{
if ( TRUE == produce_some_data() )
{
/* This is a signal for task2 to wake up if it is blocked due to non-availability of the key */
sema_give(&sema_key); /* the "GIVE" operation will increment the semaphore value by 1 */
}
}
void task2_running(void)
{
/* if sema_key is unavailable then task2 will be blocked. */
/* if sema_key is available, then task2 will take it and sema_key becomes unavailable again */
while ( sema_take(&sema_key) )
{
/* Task will come here only when the sema_take operation is successful. */
/* let's consume the data */
/* since the sema_key value is zero at this point, the next time task2 tries to
take it, it will be blocked. */
}
}
65. /* declaring a semaphore object */
Semaphore sema_key;
int main()
{
/* Create Task-1 */
/* Create Task-2 */
/* Create a binary semaphore */
sema_key = create_bin_sema();
/* schedule both the tasks */
}
66. Exercise
Create 2 tasks: 1) Manager task 2) Employee task,
with the manager task having the higher priority.
When the manager task runs it should create a "Ticket id", post it to the
queue and signal the employee task to process the "Ticket id".
When the employee task runs it should read from the queue and process the
"Ticket id" posted by the manager task.
Use a binary semaphore to synchronize between the manager and employee tasks.
68. Synchronization between Interrupt & Task
The binary semaphore is very handy and well
suited to achieving synchronization between
an interrupt handler's execution and a task
handler's execution.
69. void interrupt_handler(void)
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
do_important_work(); /* this is very short code */
xSemaphoreGiveFromISR(sem, &xHigherPriorityTaskWoken); /* "give" means release the key */
portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
} /* exit from the interrupt */
/* I am a helper task for the interrupt! I do the time-consuming work on behalf of the interrupt handler */
void helper_task(void *params)
{
/* if taking the key is unsuccessful then this task will be blocked until the key is available */
while ( xSemaphoreTake(sem, portMAX_DELAY) ) /* "take" means trying to take the key */
{
/* it comes here only if taking the key is successful */
/* do the time-consuming work of the ISR */
}
}
70. Synchronization between an Interrupt and a Task
using a binary semaphore
1) The task is in the blocked state initially: it called xSemaphoreTake(),
but the semaphore is not available (NA), so the task is blocked waiting
for the semaphore.
2) An interrupt occurs: the ISR calls xSemaphoreGiveFromISR(), which
"gives" the semaphore, so the semaphore is now available (A).
71. 3) Task unblocked & tries to take the semaphore: the interrupt's
xSemaphoreGiveFromISR() unblocks the task (the semaphore is now
available (A)), and the task calls xSemaphoreTake() again.
4) Task took the semaphore: the task now successfully "takes" the
semaphore, so it is unavailable (NA) once more.
74. Binary sema to Synchronize between
interrupt and task.
1. An interrupt occurred while the task was in the blocked state.
2. The ISR executed and gave the semaphore, due to which the task was
unblocked.
3. The task executed and took the semaphore.
4. The task performed the intended work and tried to take the semaphore once
again.
5. It entered the blocked state again, if the semaphore was not immediately
available.
78. Binary semaphore can latch at most 1 event
The semaphore is not available: the task called xSemaphoreTake() and is
blocked waiting for the semaphore. An interrupt occurs that "gives" the
semaphore via xSemaphoreGiveFromISR(), making it available again.
80. Another interrupt occurs while the task is still processing
the first event. The ISR "gives" the semaphore again via
xSemaphoreGiveFromISR(), effectively latching the event so the event
is not lost.
When processing of the original event completes, the task calls
xSemaphoreTake() again. Because another interrupt has already occurred,
the semaphore is already available, so the task takes the semaphore
without ever entering the blocked state.
81. Concluding Points on Latching Events
1. When the interrupts/events happen relatively slowly, the binary
semaphore can latch at most only one event.
2. If multiple interrupts/events trigger back to back, then the
binary semaphore will not be able to latch all the events, so some
events will be lost.
3. How to solve the above issue? Welcome to the world of the
"Counting Semaphore".
86. Concluding points on Counting semaphore
You can use a counting semaphore to count events and process
them serially, one by one, using another task.
The counting semaphore can also be used for resource
management, that is, to regulate access to multiple identical
resources.
87. Exercise
Create 2 tasks:
1) Handler task
2) Periodic task
The periodic task's priority must be higher than the handler task's.
Use a counting semaphore to latch events from fast-triggering
interrupts, and process the latched events one by one in the
handler task.
89. Access to a resource that is shared either between tasks or between tasks and
interrupts needs to be serialized using some technique to ensure data
consistency.
Usually a common code block which deals with a global array, variable or
memory address has the possibility of getting corrupted when many tasks or
interrupts are racing around it.
Mutual Exclusion using Binary Semaphore
90. #define UART_DR *((unsigned long *) (0x40000000))
/* This is a common function which writes to the UART DR */
int UART_Write( uint32_t len, uint8_t *buffer )
{
for ( uint32_t i = 0; i < len; i++ )
{
/* wait until the Data Register is empty, then write to it */
while( !is_DR_empty() );
UART_DR = buffer[i];
}
}
This is Thread-Unsafe code
This code is absolutely fine in a non-multitasking scenario (only one task exists per
application). But in a multitasking scenario, this function is thread-unsafe. That means
there is a possibility of a race condition, since the critical-section code is not protected.
93. Mutual Exclusion by Binary semaphore
#define UART_DR *((unsigned long *) (0x40000000))
/* This is a common function which writes to the UART DR */
int UART_Write( uint32_t len, uint8_t *buffer )
{
for ( uint32_t i = 0; i < len; i++ )
{
/* wait until the Data Register is empty, then write to it */
while( !is_DR_empty() );
UART_DR = buffer[i];
}
}
94. Two ways we can implement mutual
exclusion in FreeRTOS
1. Using Binary semaphore APIs
2. Using Mutex APIs
96. #define UART_DR *((unsigned long *) (0x40000000))
/* This is a common function which writes to the UART DR */
int UART_Write( uint32_t len, uint8_t *buffer )
{
for ( uint32_t i = 0; i < len; i++ )
{
sema_take_key( bin_sema );
/* wait until the Data Register is empty, then write to it */
while( !is_DR_empty() );
UART_DR = buffer[i]; /* critical section */
sema_give_key( bin_sema );
}
}
97. Two tasks each want to access a resource guarded by a binary
semaphore, but a task is not permitted to access the resource unless
it is the token holder.
Task A attempts to take the semaphore with xSemaphoreTake(). Because
the semaphore is available, Task A successfully becomes the semaphore
holder, so it is permitted to access the resource.
98. Task B executes and attempts to take the same semaphore with
xSemaphoreTake(). Task A still has the semaphore, so the attempt fails
and Task B is not permitted to access the guarded resource!
Task B opts to enter the blocked state to wait for the semaphore,
allowing Task A to run again. Task A finishes with the resource, so it
"gives" the semaphore back with xSemaphoreGive().
99. Task A giving the semaphore back causes Task B to exit the blocked
state (the semaphore is now available). Task B can now successfully
obtain the semaphore with xSemaphoreTake(), and having done so is
permitted to access the resource.
When Task B finishes with the resource, it too gives the semaphore back
with xSemaphoreGive(). The semaphore is now once again available to
both tasks.
102. FreeRTOS Mutex Services
Mutex is derived from the phrase "Mutual Exclusion"!
A mutex is also a kind of binary semaphore, one that includes a priority-inheritance
mechanism which minimizes the effect of priority inversion.
Binary semaphores are the better choice for implementing synchronization
(between tasks or between tasks and an interrupt); mutexes are the better
choice for implementing simple mutual exclusion.
103. Advantages of Mutex Over Binary
semaphore.
Priority Inheritance
Mutexes and binary semaphores are very similar; the only major difference is that
mutexes automatically provide a basic "priority inheritance" mechanism.
Priority inheritance is a technique by which a mutex minimizes the negative effects
of priority inversion. A mutex cannot fix the priority-inversion problem
completely, but it surely lessens its impact.
104. Advantages of Mutex Over Binary
semaphore.
Priority Inheritance
Most RTOSes, including FreeRTOS, implement the priority-inheritance
feature in their mutex implementation.
Since a mutex has all these features to avoid priority inversion, the memory
consumed by the mutex service may be higher than that of a binary semaphore.
105. Priority inheritance timeline with Low-Priority [LP],
Medium-Priority [MP] and High-Priority [HP] tasks:
1) The LP task takes a mutex before being preempted by the HP task.
2) The HP task attempts to take the mutex but can't, because it is
still being held by the LP task. The HP task enters the blocked state
to wait for the mutex to become available.
3) The LP task is preventing the HP task from executing, so it inherits
the priority of the HP task. The LP task cannot now be preempted by the
MP task, so the amount of time that priority inversion exists is
minimized. When the LP task gives the mutex back, it returns to its
original priority.
4) The LP task returning the mutex causes the HP task to exit the
blocked state as the mutex holder. When the HP task has finished with
the mutex, it gives it back. The MP task only executes when the HP task
returns to the blocked state, so the MP task never holds up the HP task.
107. Mutex Disadvantage
If your system is very simple, with a small number of manageable
tasks, or if you are working in a memory-constrained environment,
then it is better to avoid using mutexes by opting them out of the
compilation, because a mutex will surely eat up more code space
than a binary semaphore.
110. Ways to protect the Critical Section
1. Binary semaphore
2. Mutex
3. Crude way (disabling interrupts of the
system, either globally or up to a specific
interrupt priority level)
111. Advantages of Mutex Over Binary
semaphore.
A mutex automatically provides a basic "priority inheritance" mechanism.
Priority inheritance is a technique by which a mutex minimizes the negative
effects of priority inversion. A mutex cannot fix the priority-inversion
problem completely, but it surely lessens its impact.
113. Good Task:
"Alright! Let's have an
agreement! No one
accesses UART_DR
without acquiring the
mutex."
Arrogant Task:
"No! I don't
agree to this
agreement"
UART_DR
(shared data)
120. Priority register in Cortex M3/M4
Bit 7 Bit 6 Bit 5 Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
8-bit interrupt priority register
In Cortex-M based processors every interrupt and system exception has an 8-bit interrupt priority
register to configure its priority.
So, ideally there are 2^8 (256) interrupt priority levels, from 0x00 to 0xFF,
where 0x00 is the highest priority and 0xFF is the lowest priority.
121. Priority Register
Microcontroller Vendor XXX:
Bit 7 Bit 6 Bit 5 | Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
Implemented | Not implemented (write has no effect)
8 priority levels:
0x00, 0x20, 0x40, 0x60,
0x80, 0xA0, 0xC0, 0xE0
(e.g. TM4C123G, with 3 implemented priority bits)
Microcontroller Vendor YYY:
Bit 7 Bit 6 Bit 5 Bit 4 | Bit 3 Bit 2 Bit 1 Bit 0
Implemented | Not implemented (write has no effect)
16 priority levels:
0x00, 0x10, 0x20, 0x30, 0x40, 0x50,
0x60, 0x70, 0x80, 0x90, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0
(e.g. STM32F4xx and AT91SAM3X8E, with 4 implemented priority bits)
122. Example 1 : Setting Priority
Let's say you want to configure the priority of interrupt number 8 (IRQ8) to be 5.
Bit 7 Bit 6 Bit 5 Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
Implemented | Not implemented
AT91SAM3X8E
Interrupt priority register corresponding to IRQ8
So, now you want to write the value 5 into this
register. How do you write it?
123. Example 1 : Setting Priority
Let's say you want to configure the priority of interrupt number 8 (IRQ8) to be 5.
AT91SAM3X8E
Interrupt priority register corresponding to IRQ8
Writing the value directly targets the unimplemented lower bits, where writes have no effect:
Bit 7..Bit 0 = 0 0 0 0 0 1 0 1
Priority_register = priority_value
Shifting the value into the implemented upper bits works:
Bit 7..Bit 0 = 0 1 0 1 0 0 0 0
Priority_register = priority_value << ( 8 - __NVIC_PRIO_BITS )
126. FreeRTOS Stack and Heap
RAM (low address to high address):
- Global data, arrays, static variables, etc.
- FreeRTOS heap (configTOTAL_HEAP_SIZE), which holds the dynamically created kernel objects:
  - xTaskCreate() -> Task-1's TCB-1 and STACK-1
  - xTaskCreate() -> Task-2's TCB-2 and STACK-2
  - xSemaphoreCreateBinary() -> the semaphore's SCB (Semaphore Control Block) holding its value (binary or a count)
  - xQueueCreate() -> the queue's QCB (Queue Control Block) and its item list
127. FreeRTOS Heap Management Schemes
FreeRTOS APIs and applications allocate through pvPortMalloc() and vPortFree(),
which are implemented by one of the following schemes. The application uses any
one of these schemes according to its requirements:
- heap_1.c : pvPortMalloc() only
- heap_2.c : pvPortMalloc() and vPortFree()
- heap_3.c : pvPortMalloc() and vPortFree()
- heap_4.c : pvPortMalloc() and vPortFree()
- heap_5.c : pvPortMalloc() and vPortFree()
- your_own_mem.c : pvPortMalloc() and vPortFree()
128. heap_1.c
- Simplest implementation among all the heap management implementations
- In this implementation you can only allocate heap memory; you cannot free it
- Can be used if your application never deletes a task, queue, semaphore, mutex, etc. (which actually covers the majority of applications in which FreeRTOS gets used)
- This implementation is always deterministic (always takes the same amount of time to execute) and cannot result in memory fragmentation
129. heap_2.c
- heap_2.c is implemented using a "best fit" algorithm
- This scheme allows freeing of memory, unlike heap_1.c
- Combining adjacent free blocks into a single large block is not possible with this scheme
- This scheme can be used when the application repeatedly deletes tasks, queues, semaphores, mutexes, etc., accepting the possibility of memory fragmentation
- Should not be used if the memory being allocated and freed is of random size
- It is not deterministic, but this scheme is much more efficient than most standard C library malloc implementations
130. heap_3.c
- heap_3.c just implements a wrapper around your standard library memory allocation functions such as malloc and free
- The wrapper simply makes the malloc() and free() functions thread-safe
- This implementation requires the linker to set up a heap, and the compiler library to provide malloc() and free() implementations
- Not deterministic
- Not optimized for embedded systems, so it consumes more code space
- The configTOTAL_HEAP_SIZE setting in FreeRTOSConfig.h has no effect when heap_3 is used
131. heap_4.c
- This scheme uses a "first fit" algorithm and, unlike scheme 2, it does combine adjacent free memory blocks into a single large block
- Can be used even when the application repeatedly deletes tasks, queues, semaphores, mutexes, etc.
- Even when the memory being allocated and freed is of random size, it is less likely to result in a heap space that is badly fragmented into smaller useless blocks
- Much more efficient than most standard C library malloc implementations
- heap_4.c is particularly useful for applications that want to use the portable-layer memory allocation scheme directly in the application code (rather than just indirectly by calling API functions that themselves call pvPortMalloc() and vPortFree())
132. heap_5.c
- This scheme uses the same first-fit and memory-coalescence algorithms as heap_4, and allows the heap to span multiple non-adjacent (non-contiguous) memory regions