Chapter 8: Memory Management Strategies
1. Memory Management Strategies
Background
Swapping
Contiguous Memory Allocation
Paging
Structure of the Page Table
Segmentation
1. Background
1.1 Basic Hardware
• Main memory and registers are the only storage the CPU can access directly
• Register access takes one CPU clock cycle, but main memory access can take many cycles
• A cache sits between main memory and the CPU registers
• Protection of memory is required to ensure correct operation
• A pair of base and limit registers defines the logical address space
Loganathan R, CSE, HKBKCE 2
1. Background Contd…
• The CPU hardware compares every address generated in user mode with the registers
• An attempt by a program executing in user mode to access OS memory or other users' memory results in a trap to the OS, which treats the attempt as a fatal error
• This prevents a user program from accidentally or deliberately modifying the code or data structures of either the OS or other users.
[Figure: Hardware address protection with base and limit registers. The CPU-generated address is checked against the base register (address ≥ base?) and against base + limit (address < base + limit?). If either check fails, the hardware traps to the OS monitor with an addressing error; otherwise the access proceeds to memory.]
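The two comparisons made by the protection hardware can be sketched in a few lines. This is a simulation in Python; `check_access` is a hypothetical name, and the base/limit values are illustrative — the real check is performed by hardware on every user-mode access.

```python
# Simulate the base/limit check the hardware performs on every
# user-mode memory access (illustrative function and values).
def check_access(address, base, limit):
    """Return True if the access is legal, False if it would trap."""
    # Legal iff base <= address < base + limit.
    return base <= address < base + limit

# A process with base=300040 and limit=120900 may touch
# addresses 300040 .. 420939 (illustrative values).
print(check_access(300040, base=300040, limit=120900))  # first legal address
print(check_access(420940, base=300040, limit=120900))  # one past the end: would trap
```

In real hardware both comparisons happen in parallel, and a failed check raises a trap rather than returning a value.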
1. Background Contd…
1.2 Address Binding
• A user program goes through several steps before being executed, as shown
• Address binding of instructions and data to memory addresses can happen at three different stages:
– Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes (e.g., MS-DOS .COM programs)
– Load time: relocatable code must be generated if the memory location is not known at compile time
– Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another; needs hardware support for address maps (e.g., base and limit registers)
1. Background Contd…
1.3 Logical vs. Physical Address Space
• Logical address – generated by the CPU; also referred to as a virtual address
• Physical address – the address seen by the memory unit, i.e., the one loaded into the memory-address register
• Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme
• The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU)
• In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory
• The user program deals with logical addresses; it never sees the real physical addresses
[Figure: Dynamic relocation using a relocation register]
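The relocation-register mapping is a single addition, sketched below. The function name and the register value 14000 are illustrative; the real addition is done by the MMU hardware.

```python
# Simulate dynamic relocation: the MMU adds the relocation register
# to every logical address (illustrative names and values).
RELOCATION_REGISTER = 14000

def mmu_translate(logical_address, relocation=RELOCATION_REGISTER):
    """Map a logical (virtual) address to a physical address."""
    return logical_address + relocation

# The user program only ever sees logical address 346;
# the memory unit sees physical address 14346.
print(mmu_translate(346))  # 14346
```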
1. Background Contd…
1.4 Dynamic Loading
• All routines are kept on disk in a relocatable load format; the main program is loaded into memory and executed
• A routine is not loaded until it is called
• The relocatable linking loader is called to load the desired routine
• Better memory-space utilization, since an unused routine is never loaded
• Useful when large amounts of code are needed to handle infrequently occurring cases
• No special support from the operating system is required; dynamic loading is implemented through program design
1.5 Dynamic Linking and Shared Libraries
• Linking is postponed until execution time
• A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
• The stub replaces itself with the address of the routine, and executes the routine
• The operating system is needed to check whether the routine is in the process's memory address space
• Dynamic linking is particularly useful for libraries
• Such a system is also known as shared libraries
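The stub idea can be illustrated in Python: a placeholder callable locates the real routine on first use and replaces itself, so later calls go straight to the routine. This is a loose analogy (real dynamic linkers patch addresses in a process image, not dictionary entries), and the `Stub` class is entirely hypothetical.

```python
import importlib

class Stub:
    """Stand-in that loads the real routine on first call, then
    replaces itself in the owning namespace (illustrative sketch)."""
    def __init__(self, namespace, name, module, attr):
        self.namespace, self.name = namespace, name
        self.module, self.attr = module, attr

    def __call__(self, *args, **kwargs):
        # Locate the memory-resident routine, then overwrite the stub
        # so subsequent calls bypass this lookup entirely.
        routine = getattr(importlib.import_module(self.module), self.attr)
        self.namespace[self.name] = routine
        return routine(*args, **kwargs)

routines = {}
routines["sqrt"] = Stub(routines, "sqrt", "math", "sqrt")

print(routines["sqrt"](9.0))   # first call resolves math.sqrt -> 3.0
print(routines["sqrt"])        # now the real function, not the Stub
```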
2. Swapping
• A process can be swapped temporarily out of memory to a backing store,
and then brought back into memory for continued execution
• Similar to the round-robin CPU-scheduling algorithm: when a quantum expires, the memory manager swaps out that process and swaps another process into the memory space that has been freed
[Figure: Swapping of two processes using a disk as a backing store]
2. Swapping Contd…
• Backing store – fast disk large enough to accommodate copies of all memory
images for all users and provide direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority process
can be loaded and executed
• The swapped-out process will be swapped back into the same memory space it occupied previously, due to the restriction imposed by the method of address binding (assembly or load time)
• A process can be swapped into a different memory space if execution-time binding is used, since physical addresses are computed during execution time
• System maintains a ready queue of ready-to-run processes which have
memory images on disk
• The dispatcher swaps out a process in memory if there is no free memory
region and swaps in the desired process from a ready queue
• The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
• Example: a user process is 10 MB, and the backing store is a hard disk with a transfer rate of 40 MB per second
Transfer time = 10 MB / 40 MB per sec = 250 milliseconds
Swap time = transfer time + seek time (latency 8 ms) = 258 milliseconds
Total swap time = swap out + swap in = 516 milliseconds
• Modified versions of swapping are found on many systems (i.e., UNIX, Linux,
and Windows)
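The swap-time arithmetic above can be reproduced directly (the function name is illustrative; the numbers are the slide's own):

```python
# Reproduce the slide's swap-time example: a 10 MB process,
# a 40 MB/s backing-store disk, and 8 ms of average latency.
def swap_time_ms(process_mb, transfer_mb_per_s, latency_ms):
    transfer_ms = process_mb / transfer_mb_per_s * 1000  # pure transfer time
    return transfer_ms + latency_ms                      # plus seek latency

one_way = swap_time_ms(10, 40, 8)   # 250 ms + 8 ms
total = 2 * one_way                 # swap out + swap in
print(one_way, total)               # 258.0 516.0
```

The formula makes the slide's point explicit: transfer time dominates, and it scales linearly with the amount of memory actually swapped.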
3. Contiguous Memory Allocation
• Main memory is usually divided into two partitions:
– Resident operating system, usually held in low memory with interrupt vector
– User processes then held in high memory
3.1 Memory Mapping and Protection
• Relocation registers used to protect user processes from each other, and from changing
operating-system code and data
– Base register contains value of smallest physical address
– Limit register contains range of logical addresses – each logical address must be less
than the limit register
• The MMU maps logical addresses dynamically by adding the value in the relocation register
[Figure: Hardware support for relocation and limit registers]
3. Contiguous Memory Allocation Contd…
3.2 Memory Allocation
• The simplest method is to divide memory into several fixed-sized partitions: when a partition is free, a process is loaded into it, and when the process terminates, the partition becomes available for another process
• Multiple-partition allocation (generalization of the fixed-partition)
– Hole – block of available memory; holes of various sizes are scattered throughout memory
– OS maintains information about the allocated partitions and free partitions (hole)
– When a process arrives, it is allocated memory from a hole large enough to accommodate it
– If the hole is too large, it is split into two parts; one part is allocated to the arriving process, and the other is returned to the set of holes
– If the new hole is adjacent to other holes, the adjacent holes are merged to form one larger hole
• Dynamic storage allocation problem, which concerns how to satisfy a request of size n
from a list of free holes and the solutions are:
• First-fit: allocate the first hole that is big enough; search from the beginning, or from where the previous first-fit search ended
• Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size; produces the smallest leftover hole
• Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
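The three placement strategies can be sketched over a free-hole list as follows. Holes are modeled as `(start, size)` pairs; the function names and the sample hole list are illustrative.

```python
# Minimal sketch of the three dynamic-storage-allocation strategies
# over a list of free holes, each modeled as (start, size).
def first_fit(holes, n):
    """Return the first hole big enough for a request of size n."""
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    """Return the smallest hole that is still big enough."""
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    """Return the largest hole."""
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (300, 500), (1000, 200), (1500, 600)]
req = 150
print(first_fit(holes, req))   # (300, 500): first hole big enough
print(best_fit(holes, req))    # (1000, 200): smallest leftover hole
print(worst_fit(holes, req))   # (1500, 600): largest leftover hole
```

A full allocator would also split the chosen hole and return the remainder to the list, as the slide describes.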
3. Contiguous Memory Allocation Contd…
3.3 Fragmentation
• External Fragmentation – total memory space exists to satisfy a
request, but it is not contiguous, i.e. storage is fragmented into a large
number of small holes
• Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a
partition, but not being used
• External fragmentation can be reduced by:
1. Compaction
– Shuffle memory contents to place all free memory together in one large block
– Compaction is possible only if relocation is dynamic and is done at execution time
2. Permitting the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever it is available
4. Paging
• Permits the physical address space of a process to be noncontiguous
4.1 Basic Method
• Divide physical memory into fixed-sized blocks called frames (size is power of 2)
• Divide logical memory into blocks of same size called pages
• The backing store is divided into fixed-sized blocks of size of frames
• Hardware support for paging, i.e., a page table, translates logical addresses to physical addresses
4. Paging Contd…
• An address generated by the CPU is divided into:
– Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
– Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit
• For a given logical address space of size 2^m and page size 2^n, the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset:

page number | page offset
     p      |      d
   m − n    |      n

where p is an index into the page table and d is the displacement within the page
[Figure: Paging model of logical and physical memory]
4. Paging Contd…
Paging example for a 32-byte (2^5) memory with a 16-byte (2^4) logical address space and 4-byte (2^2) pages, i.e., m = 4 and n = 2:
• Logical address 0 is page 0, offset 0; indexing into the page table, page 0 is in frame 5, so logical address 0 maps to physical address 5 (frame number) × 4 (page size) + 0 (offset) = 20
• Logical address 3 (page 0, offset 3) maps to physical address 5 × 4 + 3 = 23
• Logical address 13 (page 3, offset 1): indexing to page 3 finds frame number 2, which maps to physical address 2 × 4 + 1 = 9
[Figure: 32-byte memory and 4-byte pages]
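The worked example can be checked in code. The frames for pages 0 and 3 (5 and 2) come from the slide; the frames assumed for pages 1 and 2 (6 and 1) are illustrative fill-ins.

```python
# Reproduce the slide's paging example: m = 4, n = 2 (4-byte pages).
PAGE_SIZE = 4                  # 2**n bytes per page
page_table = [5, 6, 1, 2]      # pages 0,3 -> frames 5,2 per the slide;
                               # frames for pages 1,2 are assumed.

def translate(logical):
    # High-order m-n bits give the page number, low n bits the offset.
    p, d = divmod(logical, PAGE_SIZE)
    return page_table[p] * PAGE_SIZE + d

print(translate(0))    # frame 5 * 4 + 0 = 20
print(translate(3))    # frame 5 * 4 + 3 = 23
print(translate(13))   # page 3 -> frame 2: 2 * 4 + 1 = 9
```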
4. Paging Contd…
• Paging has no external fragmentation: any free frame can be allocated to a process that needs it. It may, however, have some internal fragmentation, i.e., the last frame allocated may not be completely full
• If the process requires n pages and n frames are available, they are allocated
• The first page of the process is loaded into one of the allocated frames, and the frame number is put in the page table for this process; the next page is loaded into another frame, its frame number is put into the page table, and so on
• The OS keeps track of which frames are allocated, which frames are available, how many total frames there are, and so on, in a frame table
[Figure: Free-frame list, before and after allocation]
4. Paging Contd…
4.2 Hardware Support
• In the simplest case, the page table is implemented as a set of high-speed dedicated registers
• Alternatively, the page table is kept in main memory, and:
– a page-table base register (PTBR) points to the page table
– a page-table length register (PTLR) indicates the size of the page table
• The CPU dispatcher reloads these registers; instructions to load or modify the page-table registers are privileged
• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
• The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
• A TLB entry consists of a key (or tag, the page number) and a value (the frame number); when the TLB is presented with an item, the item is compared with all keys simultaneously
4. Paging Contd…
• When the page number from a CPU address is presented to the TLB and found there, its frame number is immediately available and is used to access memory
• If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made, and the page number and frame number are added to the TLB
• If the TLB is full, the OS selects one entry for replacement; replacement policies range from LRU to random
• Some TLBs allow entries (e.g., for kernel code) to be wired down, so that they cannot be removed from the TLB
[Figure: Paging hardware with TLB]
4. Paging Contd…
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID identifies each process and is used to provide address-space protection for that process
• Without ASIDs, the TLB must be flushed (or erased) on a context switch to ensure that the next executing process does not use the wrong translation information
• The percentage of times that a particular page number is found in the TLB is called the hit ratio
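The TLB lookup, miss handling, random replacement, and hit ratio described above can be modeled as a small cache. This is an illustrative simulation: a real TLB compares all keys in parallel in hardware, and the class and sizes below are hypothetical.

```python
import random

class TLB:
    """Toy TLB: a bounded page->frame cache with random replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}              # key: page number, value: frame number
        self.hits = self.lookups = 0

    def lookup(self, page, page_table):
        self.lookups += 1
        if page in self.entries:       # TLB hit: frame immediately available
            self.hits += 1
            return self.entries[page]
        # TLB miss: reference the in-memory page table, then cache the pair.
        if len(self.entries) >= self.capacity:
            self.entries.pop(random.choice(list(self.entries)))
        frame = page_table[page]
        self.entries[page] = frame
        return frame

    def hit_ratio(self):
        return self.hits / self.lookups

page_table = {p: p + 100 for p in range(8)}   # hypothetical mapping
tlb = TLB(capacity=4)
for page in [0, 1, 0, 1, 2, 0]:               # repeated pages hit in the TLB
    tlb.lookup(page, page_table)
print(tlb.hit_ratio())                         # 3 hits / 6 lookups = 0.5
```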
4.3 Protection
• Memory protection is implemented by associating protection bits with each frame; a valid–invalid bit is attached to each entry in the page table:
– "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page
– "invalid" indicates that the page is not in the process's logical address space
4. Paging Contd…
[Figure: Valid (v) or invalid (i) bit in a page table]
4. Paging Contd…
4.4 Shared Pages
• An advantage of paging is the possibility of sharing common code
• If the code is reentrant (pure) code, it can be shared: one copy of the read-only code is shared among processes (e.g., text editors, compilers, window systems)
• Shared code must appear in the same location in the logical address space of all processes
• Each process keeps its own separate copy of the data
• The pages for the private code and data can appear anywhere in the logical address space
[Figure: Sharing of code in a paging environment]
5. Structure of the Page Table
Techniques for structuring the page table: 1. Hierarchical Paging, 2. Hashed Page Tables, 3. Inverted Page Tables
5.1 Hierarchical Paging
• Break up the logical address space into
multiple page tables
• A simple technique is a two-level page table
• A logical address (on a 32-bit machine with a 1K page size) is divided into a page number consisting of 22 bits and a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into a 12-bit outer page number and a 10-bit inner page number:

page number | page offset
  p1 |  p2  |      d
  12 |  10  |     10

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
[Figure: Two-level page-table scheme]
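Extracting p1, p2, and d from a 32-bit address is a matter of shifts and masks, as sketched below (the function name is illustrative):

```python
# Split a 32-bit logical address into p1 (12 bits), p2 (10 bits),
# and d (10 bits), matching the two-level scheme above.
def split(addr):
    d = addr & 0x3FF               # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF      # next 10 bits: inner page number
    p1 = addr >> 20                # high 12 bits: outer page number
    return p1, p2, d

addr = (3 << 20) | (5 << 10) | 7   # build an address with p1=3, p2=5, d=7
print(split(addr))                 # (3, 5, 7)
```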
5. Structure of the Page Table Contd…
[Figure: Address-translation scheme for a two-level 32-bit paging architecture]
[Figure: Three-level paging scheme — a 64-bit logical address divided using two-level and three-level paging schemes]
5. Structure of the Page Table Contd…
5.2 Hashed Page Tables
• Common in address spaces > 32 bits
• The virtual page number is hashed into a page table. This page table contains
a chain of elements hashing to the same location.
• Virtual page numbers are compared in this chain searching for a match. If a
match is found, the corresponding physical frame is extracted.
[Figure: Hashed page table; p and q are page numbers, s and r are frame numbers]
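A hashed page table with chaining can be sketched as a list of buckets, each holding (virtual page, frame) pairs; the lookup walks the chain comparing virtual page numbers until a match. Bucket count and page/frame values below are hypothetical.

```python
# Sketch of a hashed page table: each bucket holds a chain of
# (virtual page, frame) pairs that hash to the same slot.
NUM_BUCKETS = 8

def bucket_of(page):
    return hash(page) % NUM_BUCKETS

table = [[] for _ in range(NUM_BUCKETS)]

def insert(page, frame):
    table[bucket_of(page)].append((page, frame))

def lookup(page):
    # Walk the chain, comparing virtual page numbers until a match;
    # if none matches, the page is not resident (page fault).
    for p, frame in table[bucket_of(page)]:
        if p == page:
            return frame
    return None

insert(4, 11)      # hypothetical page/frame numbers
insert(12, 3)      # 12 hashes to the same bucket as 4, so it chains
print(lookup(4))   # 11
print(lookup(99))  # None: not mapped
```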
5. Structure of the Page Table Contd…
5.3 Inverted Page Table
• One entry for each real page of memory
• Entry consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page
• Decreases memory needed to store each page table, but increases time
needed to search the table when a page reference occurs
• A hash table can be used to limit the search to one, or at most a few, page-table entries
[Figure: Inverted page table architecture]
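The inverted layout can be sketched as one list indexed by physical frame: translation searches for the (process, virtual page) entry, and the frame number is the index where it is found. Process IDs and the table contents below are hypothetical.

```python
# Sketch of an inverted page table: one entry per physical frame,
# each holding (process id, virtual page) of the frame's occupant.
inverted = [("P1", 0), ("P2", 0), ("P1", 1), (None, None)]  # 4 frames

def translate(pid, page):
    # Linear search: the frame number is the *index* of the match,
    # which is why lookups are slower than in a per-process table.
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None        # not resident: page fault

print(translate("P1", 1))   # 2: found at index 2
print(translate("P2", 5))   # None
```

The hash table mentioned above would replace this linear scan, keyed on (process id, virtual page).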
6. Segmentation
• A memory-management scheme that supports the user view of memory, i.e., a collection of variable-sized segments, with no necessary ordering among segments
[Figure: User's view of a program]
6.1 Basic Method
• Segments are numbered and are referred to by a segment number, i.e., a logical address consists of a two-tuple:
< segment-number, offset >
• A program is a collection of segments; the compiler constructs separate segments for:
– the code
– global variables
– the heap, from which memory is allocated
– the stacks used by each thread
– the standard C library
6. Segmentation Contd…
6.2 Hardware
• Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
– base – contains the starting physical address where the segment resides in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment table's location in memory
• Segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR
[Figure: Segmentation hardware]
6. Segmentation Contd…
• Segment 2 is 400 bytes long and begins at location 4300; thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353
• Segment 3 begins at location 3200, so byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052
• A reference to byte 1222 of segment 0 would result in a trap to the operating system, as this segment is only 1,000 bytes long
[Figure: Example of segmentation]
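The segment-table lookup and limit check can be verified in code. The (base, limit) pairs for segments 2 and 0's limit come from the slide; segment 0's base and segment 3's limit are illustrative assumptions.

```python
# Reproduce the slide's segmentation example with a segment table
# of (base, limit) pairs. Segment 0's base (1400) and segment 3's
# limit (1100) are assumed for illustration.
segment_table = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset beyond the segment's length: hardware traps to the OS.
        raise MemoryError("trap: addressing error")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
try:
    translate(0, 1222)     # segment 0 is only 1,000 bytes long
except MemoryError as e:
    print(e)               # trap: addressing error
```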