An OS is a program that controls the execution of
application programs and acts as an interface between
the user of a computer and the computer hardware.
Operating System Services
Program creation
Program execution
Access to I/O devices
Controlled access to files
System access
Error detection and response
Accounting
The Operating System (OS) typically provides services in the following areas:
• Program creation: The OS provides a variety of
facilities and services, such as editors and debuggers, to
assist the programmer in creating programs. Typically,
these services are in the form of utility programs that are
not actually part of the OS but are accessible through the OS.
• Program execution: A number of tasks need to be
performed to execute a program. Instructions and data
must be loaded into main memory, I/O devices and files
must be initialized, and other resources must be prepared.
The OS handles all of this for the user.
• Access to I/O devices: Each I/O device requires its own
specific set of instructions or control signals for
operation. The OS takes care of the details so that the
programmer can think in terms of simple reads and writes.
• Controlled access to files: In the case of files,
control must include an understanding of not only
the nature of the I/O device (disk drive, tape drive) but
also the file format on the storage medium. Again, the
OS worries about the details. Further, in the case of a
system with multiple simultaneous users, the OS can
provide protection mechanisms to control access to the files.
• System access: In the case of a shared or public
system, the OS controls access to the system as a
whole and to specific system resources. The access
function must provide protection of resources and
data from unauthorized users and must resolve
conflicts for resource contention.
• Error detection and response: A variety of errors
can occur while a computer system is running. These
include internal and external hardware errors, such as
a memory error, or a device failure or malfunction; and
various software errors, such as arithmetic overflow,
attempt to access forbidden memory location, and
inability of the OS to grant the request of an
application. In each case, the OS must make the
response that clears the error condition with the least
impact on running applications. The response may
range from ending the program that caused the error,
to retrying the operation, to simply reporting the error
to the application.
• Accounting: A good OS collects usage statistics for
various resources and monitors performance
parameters such as response time. On any system, this
information is useful in anticipating the need for
future enhancements and in tuning the system to
improve performance. On a multiuser system, the
information can be used for billing purposes.
A computer is a set of resources for the movement,
storage, and processing of data and for the control of
these functions. The OS is responsible for managing these resources.
Types of Operating System
Single program (Uni-programming)
In an interactive system, the user/programmer
interacts directly with the computer, usually
through a keyboard/display terminal, to request
the execution of a job or to perform a transaction.
A batch system is the opposite of interactive.
The user’s program is batched together with
programs from other users and submitted by a
computer operator. After the program is
completed, results are printed out for the user.
Multiprogramming keeps the processor as busy as
possible, by having it work on more than one program
at a time. Several programs are loaded into memory,
and the processor switches rapidly among them.
The alternative is a uniprogramming system that
works on only one program at a time.
Late 1940s to mid 1950s
No Operating System
Programs interact directly with hardware
These early systems presented two main problems:
• Scheduling: Most installations used a sign-up sheet to
reserve processor time.
Typically, a user could sign up for a block of time in multiples of
a half hour or so. A user might sign up for an hour and finish in
45 minutes; this would result in wasted computer idle time. On
the other hand, the user might run into problems, not finish in
the allotted time, and be forced to stop before resolving the problem.
• Setup time: A single program, called a job, could involve
loading the compiler plus the high-level language program
(source program) into memory, saving the compiled program
(object program), and then loading and linking together the
object program and common functions. Each of these steps
could involve mounting or dismounting tapes, or setting up card
decks. If an error occurred, the hapless user typically had to go
back to the beginning of the setup sequence.
Thus a considerable amount of time was spent just in setting up
the program to run.
Simple Batch Systems
Resident Monitor program
Users submit jobs to operator
Operator batches jobs
Monitor controls sequence of events to process batch
When one job is finished, control returns to Monitor
which reads next job
Monitor handles scheduling
Simultaneous Peripheral Operation Online, or
SPOOLing, is the simplest form of multiprogramming.
Multiprogramming systems: extend the idea of
spooling and batch processing to allow several
executing programs to be in memory concurrently.
This is achieved by cycling through processes, allowing
each one to use the CPU for a specific slice of time.
Monitors were able to handle multiprogramming to a
certain extent and could start jobs, spool operations,
perform I/O, switch between user jobs, and give some
protection between jobs.
Time Sharing Systems
Allow users to interact directly with the computer
Multi-programming allows a number of users to
interact with the computer
In a timesharing system, the CPU switches between
user sessions very quickly, giving each user a small slice
of processor time.
This procedure of switching between processes is
called context switching. The operating system
performs these context switches quickly, in essence,
giving the user a personal virtual machine.
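The time-slicing behavior described above can be sketched with a toy round-robin scheduler. This is an illustrative model only; the process names, burst times, and the `round_robin` helper are made up for the example, not part of any real operating system:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling. `bursts` maps a process name to
    its remaining CPU time; `quantum` is the time slice each process gets
    before the OS performs a context switch. Returns the order in which
    processes receive the CPU, one entry per time slice."""
    ready = deque(bursts.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)          # this process runs for one quantum
        remaining -= quantum
        if remaining > 0:              # not finished: context-switch it out,
            ready.append((name, remaining))  # back of the ready queue
    return timeline

# Three "user sessions" sharing one CPU, quantum = 2 time units.
print(round_robin({"A": 4, "B": 2, "C": 6}, quantum=2))
```

Because the switches happen quickly relative to human reaction time, each user appears to have the machine to themselves.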
Timesharing permits many users to share the same
CPU. By extending this idea, a system can allow many
users to share a single application.
As with timesharing systems, large interactive system
users are unaware of the other users on the system.
Memory split into two
One for Operating System (monitor)
One for currently executing program
“User” part is sub-divided and shared among active processes
Problem: I/O is so slow compared with CPU that even
in multi-programming system, CPU can be idle most
of the time
Increase main memory
Leads to larger programs
What is Swapping?
Long term queue of processes stored on disk
Processes “swapped” in as space becomes available
As a process completes it is moved out of main memory
If none of the processes in memory are ready (i.e. all are blocked waiting for I/O)
Swap out a blocked process to intermediate queue
Swap in a ready process or a new process
But swapping is an I/O process…
Splitting memory into sections to allocate to processes
(including Operating System)
May not be equal size
Process is fitted into smallest hole that will take it (best fit)
Some wasted memory
Leads to variable sized partitions
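The best-fit placement just described can be sketched as follows. The hole sizes and the `best_fit` helper are illustrative assumptions for the example only:

```python
def best_fit(holes, request):
    """Return the index of the smallest hole that can hold `request`,
    or None if no hole is large enough. `holes` is a list of free
    partition sizes (in KB, say)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    size, i = min(candidates)   # smallest adequate hole wins
    return i

holes = [300, 600, 350, 200]
i = best_fit(holes, 220)       # smallest hole >= 220 is 300, at index 0
leftover = holes[i] - 220      # the wasted fragment left in that partition
print(i, leftover)
```

The `leftover` value is exactly the "some wasted memory" noted above; accumulating fragments like this is what motivates variable-sized partitions.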
Real-time, Multiprocessor, and Distributed/Networked Systems
Real-time systems are used for process control in manufacturing plants,
assembly lines, robotics etc.
Real-time systems have severe timing constraints. If specific deadlines
are not met, physical damage or other undesirable effects to persons or
property can occur. Because these systems must respond to external
events, correct process scheduling is critical.
In hard real-time systems (with potentially fatal results if deadlines aren't
met), there can be no errors.
In soft real-time systems, meeting deadlines is desirable, but does not
result in catastrophic results if deadlines are missed.
Multiprocessor systems have more than one processor that must be
scheduled. In multiprocessing environment, the CPUs cooperate with
each other to solve problems, working in parallel to achieve a common
goal. System synchronization requirements determine whether the
processors are designed using tightly coupled or loosely coupled architectures.
Tightly coupled multiprocessors share a single centralized memory,
which requires that an operating system must synchronize processes
very carefully to assure protection. Symmetric multiprocessors (SMPs)
are a popular form of tightly coupled architecture. These systems have
multiple processors that share memory and I/O devices. All processors
perform the same functions, with the processing load being distributed
among all of them.
Loosely coupled multiprocessors have a physically distributed
memory; such systems are also known as distributed systems.
A distributed collection of workstations on a LAN, each with its own
operating system, is typically referred to as a networked system. These
systems were motivated by a need for multiple computers to share
resources. A network operating system includes the necessary
provisions, such as remote command execution, remote file access, and
remote login, to attach machines to the network.
Network file systems are one of the most important applications of
networked systems. These allow multiple machines to share one logical
file system, although the machines are located in different geographical
locations and may have different architectures and unrelated operating systems.
Operating System Design
The operating system controls the basic functions of the
computer, including memory management and I/O, not to
mention the "look and feel" of the interface. An operating system
differs from most other software in that it is event driven,
meaning it performs tasks in response to commands, application
programs, I/O devices, and interrupts.
Four main factors drive operating system design:
Two components are crucial in operating system design:
the kernel and
the system programs.
The kernel is the core of the operating system and is used by the
process manager, the scheduler, the resource manager, and the I/O
manager and is responsible for scheduling, synchronization,
protection/security, memory management, and dealing with interrupts.
It has primary control of system hardware, including interrupts,
control registers, status words, and timers; loads all device drivers;
provides common utilities; and coordinates all I/O activity.
The kernel must know the specifics of the hardware to combine all of
these pieces into a working system.
Operating System Services
The operating system oversees all critical system
management tasks, including memory management,
process management, protection, and interaction with
I/O devices. The operating system also determines how
the user interacts with the computer, serving as a
buffer between the user and the hardware. Each of
these functions is an important factor in determining
overall system performance and usability.
The Human Interface
The operating system provides a layer of abstraction between the
user and the hardware of the machine. The operating system
provides an interface to hide the details of the bare machine.
Operating systems provide three basic interfaces, each providing
a different view for a particular individual. Hardware developers
are interested in the operating system as an interface to the
hardware. Applications developers view the operating system as
an interface to various application programs and services.
Ordinary users are most interested in the graphical interface,
which is the interface most commonly associated with the term
operating system.
Operating system user interfaces can be divided into two categories:
command line interfaces
graphical user interfaces (GUIs).
Command line interfaces provide a prompt at which the
user enters various commands, including those for copying
files, deleting files, providing a directory listing, and
manipulating the directory structure.
GUIs provide a more accessible interface for the casual user.
They include features such as icons and other graphical
representations of files that are manipulated using a mouse.
Process Management
Process management includes everything from creating
processes (setting up the appropriate structures to store
information about each one), to scheduling processes' use
of various resources, to deleting processes and cleaning up
after their termination.
The operating system keeps track of each process, its status
(which includes the values of variables, the contents of
CPU registers, and the actual state—running, ready, or
waiting—of the process), the resources it is using, and
those that it requires.
The kernel is managing a collection of processes,
consisting of user processes and system processes.
Process scheduling is a large part of the operating
system's normal routine.
Multitasking (allowing multiple processes to run
concurrently) and multithreading (allowing a process
to be subdivided into different threads of control)
provide interesting challenges for CPU scheduling.
A thread is the smallest schedulable unit in a system.
Threads share the execution environment of their parent
process, including its address space and page table, while
each thread has its own CPU registers and stack.
Depending on the degree of concurrency required, it is
possible to have one process with one thread, one process
with multiple threads, multiple single-threaded processes,
or multiple multithreaded processes. An operating system
that supports multithreading must be able to handle all of these cases.
In addition to process management, the operating system
manages system resources.
For example, multiple processes can share one processor,
multiple programs can share physical memory, and
multiple users and files can share one disk.
There are three resources that are of major concern to the
operating system: the CPU, memory, and I/O. Access to the
CPU is controlled by the scheduler. Memory and I/O access
requires a different set of controls and functions.
Security and Protection
Resource sharing, however, creates a multitude of exposures, such as the
potential for unauthorized access or modification of data.
Concurrent processes must be protected from each other, and operating
system processes must be protected from all user processes.
Without this protection, a user program could potentially wipe out the
operating system code for dealing with, say, interrupts.
Multiuser systems require additional security services to protect both shared
resources (such as memory and I/O devices) and nonshared resources (such as
personal files). Memory protection safeguards against a bug in one user's
program affecting other programs or a malicious program taking control of the
entire system. CPU protection makes sure user programs don't get stuck in
infinite loops, consuming CPU cycles needed by other jobs.
Positional Numbering Systems
The general idea behind positional numbering systems is that a
numeric value is represented through increasing powers of a
radix (or base). This is often referred to as a weighted
numbering system because each position is weighted by a
power of the radix.
The set of valid numerals for a positional numbering system is
equal in size to the radix of that system. For example, there are
10 digits in the decimal system, 0 through 9, and 3 digits for the
ternary (base 3) system, 0, 1, and 2. The largest valid numeral in
a radix system is one smaller than the radix, so 8 is not a valid
numeral in any radix system smaller than 9. To distinguish
among numbers in different radices, we use the radix as a
subscript, such as in 33₁₀ to represent the decimal number 33.
Three numbers represented as powers of a radix:
243.51₁₀ = 2 × 10² + 4 × 10¹ + 3 × 10⁰ + 5 × 10⁻¹ + 1 × 10⁻²
212₃ = 2 × 3² + 1 × 3¹ + 2 × 3⁰ = 23₁₀
10110₂ = 1 × 2⁴ + 0 × 2³ + 1 × 2² + 1 × 2¹ + 0 × 2⁰ = 22₁₀
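The weighted-powers idea can be checked mechanically. This small helper (an illustrative sketch, not from the text) evaluates a digit sequence against increasing powers of the radix:

```python
def from_radix(digits, radix):
    """Evaluate a positional numeral: each digit is weighted by a power
    of the radix, from radix**(n-1) down to radix**0 for n digits."""
    value = 0
    for d in digits:
        value = value * radix + d   # Horner's rule: same weighted sum
    return value

print(from_radix([2, 1, 2], 3))        # 2 x 3^2 + 1 x 3^1 + 2 x 3^0
print(from_radix([1, 0, 1, 1, 0], 2))  # 1 x 2^4 + 1 x 2^2 + 1 x 2^1
```

Running it reproduces the worked examples above: 212₃ evaluates to 23 and 10110₂ to 22.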
Signed Integer Representation
Signed numbers require additional issues to be addressed.
When an integer variable is declared in a program, many
programming languages automatically allocate a storage
area that includes a sign as the first bit of the storage
location. By convention, a "1" in the high-order bit indicates
a negative number. The storage location can be as small as
an 8-bit byte or as large as several words, depending on the
programming language and the computer system. The
remaining bits (after the sign bit) are used to represent the value of the number.
How this number is represented depends on the method
used. There are three commonly used approaches. The most
intuitive method, signed magnitude, uses the remaining bits
to represent the magnitude of the number.
Up to this point, we have ignored the possibility of binary representations
for negative numbers. The set of positive and negative integers is referred
to as the set of signed integers. The problem with representing signed
integers as binary values is the sign—how should we encode the actual
sign of the number?
Signed-magnitude representation is one method of solving this problem.
As its name implies, a signed-magnitude number has a sign as its left-
most bit (also referred to as the high-order bit or the most significant bit)
while the remaining bits represent the magnitude (or absolute value) of
the numeric value.
For example, in an 8-bit word, –1 would be represented as 10000001, and
+1 as 00000001. In a computer system that uses signed-magnitude
representation and 8 bits to store integers, 7 bits can be used for the
actual representation of the magnitude of the number. This means that
the largest integer an 8-bit word can represent is 2^7 − 1, or 127 (a zero in the
high-order bit, followed by 7 ones). The smallest integer is 8 ones, or
−127. Therefore, N bits can represent −(2^(N−1) − 1) to 2^(N−1) − 1.
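The encoding rule above can be sketched directly; the function name is an assumption made for this illustration:

```python
def to_signed_magnitude(n, bits=8):
    """Encode n in signed magnitude: the high-order bit is the sign
    (1 = negative) and the remaining bits hold the magnitude."""
    if abs(n) > 2 ** (bits - 1) - 1:
        raise OverflowError("magnitude does not fit in the available bits")
    sign = '1' if n < 0 else '0'
    return sign + format(abs(n), f'0{bits - 1}b')   # sign + 7 magnitude bits

print(to_signed_magnitude(1))     # sign 0, magnitude 0000001
print(to_signed_magnitude(-1))    # sign 1, magnitude 0000001
print(to_signed_magnitude(-127))  # the smallest 8-bit value: 8 ones
```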
Number theorists have known for hundreds of years that one decimal
number can be subtracted from another by adding the difference of the
subtrahend from all nines and adding back a carry. This is called taking
the nine's complement of the subtrahend, or more formally, finding the
diminished radix complement of the subtrahend. Let's say we wanted to
find 167 − 52. Taking the difference of 52 from 999, we have 947. Thus,
in nine's complement arithmetic we have 167 − 52 = 167 + 947 = 1114. The
"carry" out of the hundreds column is dropped and added back to the units
place, giving us the correct 167 − 52 = 115. This method was commonly called
"casting out 9s" and has been extended to binary operations to simplify
computer arithmetic. The advantage that complement systems give us
over signed magnitude is that there is no need to process sign bits
separately, but we can still easily check the sign of a number by looking
at its high-order bit.
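The nine's complement procedure above can be written out step by step; this sketch assumes fixed three-digit operands, as in the worked example:

```python
def nines_complement_subtract(minuend, subtrahend, digits=3):
    """Subtract using the diminished radix (nine's) complement:
    take the subtrahend's difference from all nines, add it to the
    minuend, then add the end-around carry back into the units place."""
    all_nines = 10 ** digits - 1
    complement = all_nines - subtrahend        # e.g. 999 - 52 = 947
    total = minuend + complement               # 167 + 947 = 1114
    carry, low = divmod(total, 10 ** digits)   # split off the end carry
    return low + carry                         # 114 + 1 = 115

print(nines_complement_subtract(167, 52))      # same as 167 - 52
```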
Another way to envision complement systems is to imagine
an odometer on a bicycle. Unlike cars, when you go
backward on a bike, the odometer will go backward as well.
Assuming an odometer with three digits, if we start at zero
and end with 700, we can't be sure whether the bike went
forward 700 miles or backward 300 miles! The easiest
solution to this dilemma is simply to cut the number space
in half and use 001–500 for positive miles and 501–999 for
negative miles. We have, effectively, cut down the distance
our odometer can measure. But now if it reads 997, we
know the bike has backed up 3 miles instead of riding
forward 997 miles. The numbers 501–999 represent the
radix complements (the second of the two methods
introduced below) of the numbers 001–500 and are being
used to represent negative distance.
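The odometer analogy maps directly onto the radix (ten's) complement. This sketch (function names are illustrative) encodes negative distances into the upper half of the three-digit number space and decodes them back:

```python
def tens_complement_encode(n, digits=3):
    """Map a signed value onto a 3-digit 'odometer': non-negative values
    map to themselves, negative values to 10**digits - |n|, so -3 -> 997."""
    return n % (10 ** digits)

def tens_complement_decode(code, digits=3):
    """Read codes in the upper half of the number space (501-999 for
    three digits) as negative distances."""
    half = 10 ** digits // 2
    return code if code <= half else code - 10 ** digits

print(tens_complement_encode(-3))    # 997: backed up 3 miles
print(tens_complement_decode(997))   # -3
print(tens_complement_decode(700))   # -300: the ambiguity is resolved
```

Cutting the number space in half, exactly as with the odometer, is the price paid for an unambiguous sign.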
Error Detection And Correction
Regardless of the coding method used, no communications
channel or storage medium can be completely error-free. It is
a physical impossibility. As transmission rates are increased,
bit timing gets tighter. As more bits are packed per square
millimeter of storage, flux densities increase. Error rates
increase in direct proportion to the number of bits per
second transmitted, or the number of bits per square
millimeter of magnetic storage.
A parity bit could be added to an ASCII byte to help
determine whether any of the bits had become corrupted
during transmission. This method of error detection is
limited in its effectiveness: Simple parity can detect only an
odd number of errors per byte. If two errors occur, we are
helpless to detect a problem.
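The limits of simple parity can be demonstrated directly. This sketch (the data byte chosen is arbitrary) computes an even-parity bit for a 7-bit ASCII value and shows one error being caught while two slip through:

```python
def parity_bit(byte, even=True):
    """Compute the parity bit for a data value so that the total number
    of 1s (data bits plus parity bit) is even (or odd)."""
    ones = bin(byte).count('1')
    return ones % 2 if even else (ones + 1) % 2

def check(byte, bit, even=True):
    """Re-derive the parity bit; a mismatch signals an odd number of
    flipped bits. An even number of errors goes undetected."""
    return parity_bit(byte, even) == bit

data = 0b1010001                 # 7-bit ASCII 'Q'
p = parity_bit(data)             # 1, making four 1s in total

corrupted = data ^ 0b0000001     # single-bit error: detected
double = data ^ 0b0000011        # two-bit error: slips through
print(check(corrupted, p), check(double, p))
```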
You should keep in mind that just as it is impossible to
create an error-free medium, it is also impossible to detect
or correct 100% of all errors that could occur in a medium.
Error detection and correction is yet another study in the
tradeoffs that one must make in designing computer
systems. The well-constructed error control system is
therefore a system where a "reasonable" number of the
"reasonably" expected errors can be detected or corrected
within the bounds of "reasonable" economics.
Cyclic Redundancy Check
Checksums are used in a wide variety of coding systems, from bar codes
to International Standard Book Numbers (ISBNs). These are self-
checking codes that will quickly indicate whether the preceding digits
have been misread. Cyclic redundancy check (CRC) is a type of
checksum used primarily in data communications that determines
whether an error has occurred within a large block or stream of
information bytes. The larger the block to be checked, the larger the
checksum must be to provide adequate protection. Checksums and
CRCs are a type of systematic error detection scheme, meaning that the
error-checking bits are appended to the original information byte. The
group of error-checking bits is called a syndrome. The original
information byte is unchanged by the addition of the error-checking bits.
The word cyclic in cyclic redundancy check refers to the abstract
mathematical theory behind this error control system. Although a
discussion of this theory is beyond the scope of this text, we can
demonstrate how the method works to aid in your understanding of its
power to economically detect transmission errors.
Arithmetic Modulo 2
You may be familiar with integer arithmetic taken over a
modulus. Twelve-hour clock arithmetic is a modulo 12 system
that you use every day to tell time. When we add 2 hours to
11:00, we get 1:00. Arithmetic modulo 2 uses two binary
operands with no borrows or carries. The result is likewise
binary and is also a member of the modulus 2 system.
Because of this closure under addition, and the existence of
identity elements, mathematicians say that this modulo 2
system forms an algebraic field.
The addition rules are as follows:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0
Find the sum of 1011₂ and 110₂ modulo 2:
1011 + 110 = 1101 (columnwise addition with no carries).
This sum makes sense only in modulo 2.
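Since modulo-2 addition has no carries or borrows, it is simply bitwise XOR, which is how CRC hardware implements it. A minimal sketch:

```python
def mod2_add(a, b):
    """Add two binary strings modulo 2: columnwise, with no carries
    or borrows, which is exactly bitwise XOR."""
    width = max(len(a), len(b))
    return format(int(a, 2) ^ int(b, 2), f'0{width}b')

print(mod2_add('1011', '110'))   # reproduces the worked example
```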
Instruction Sets: Characteristics and Functions
The operation of the processor is determined by the
instructions it executes, referred to as machine
instructions or computer instructions. The
collection of different instructions that the processor
can execute is referred to as the processor's instruction set.
Elements of a Machine Instruction
Each instruction must contain the information
required by the processor for execution.
The figure above shows the steps involved in
instruction execution and, by implication, defines
the elements of a machine instruction which are:
• Operation code: Specifies the operation to be
performed (e.g., ADD, I/O). The operation is
specified by a binary code, known as the operation
code, or opcode.
• Source operand reference: The operation may
involve one or more source operands, that is,
operands that are inputs for the operation.
• Result operand reference: The operation may
produce a result.
• Next instruction reference: This tells the processor
where to fetch the next instruction after the execution
of this instruction is complete.
The address of the next instruction to be fetched could be either a
real address or a virtual address, depending on the architecture.
Source and result operands can be in one of these areas:
Main or virtual memory: As with next instruction references,
the main or virtual memory address must be supplied.
Processor register: With rare exceptions, a processor contains
one or more registers that may be referenced by machine
instructions. If only one register exists, reference to it may be
implicit. If more than one register exists, then each register is
assigned a unique name or number, and the instruction must
contain the number of the desired register.
Immediate: The value of the operand is contained in a field in
the instruction being executed.
I/O device: The instruction must specify the I/O module and
device for the operation. If memory-mapped I/O is used, this is
just another main or virtual memory address.
Opcodes are represented by abbreviations, called mnemonics, that
indicate the operation. Common examples include
LOAD Load data from memory
STOR Store data to memory
Operands are also represented symbolically. For example, the instruction
ADD R, Y
may mean add the value contained in data location Y to the contents of
register R. In this example, Y refers to the address of a location in
memory, and R refers to a particular register. Note that the operation is
performed on the contents of a location, not on its address.
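The distinction between an address and its contents can be made concrete with a toy register/memory model. The names R and Y come from the example above; the Python model itself is only an illustration:

```python
def execute_add(registers, memory, reg, addr):
    """ADD R, Y: add the *contents* of memory location Y to register R.
    The operation uses the value stored at Y, not the address Y itself."""
    registers[reg] += memory[addr]
    return registers[reg]

registers = {'R': 5}
memory = {'Y': 7}
print(execute_add(registers, memory, 'R', 'Y'))   # contents 5 + 7 = 12
```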
Types of Operands
Machine instructions operate on data. The most important
general categories of data are addresses, numbers, characters, and
logical data. Some machines also define specialized data
types or data structures. For example, there may be
machine operations that operate directly on a list or a
string of characters.
All machine languages include numeric data types.
Even in nonnumeric data processing, there is a
need for numbers to act as counters, field widths, and so forth.
The distinction between numbers used in ordinary
mathematics and numbers stored in a computer is
that the latter are limited in two
ways. First, there is a limit to the magnitude of
numbers representable on a machine, and second,
in the case of floating-point numbers, a limit to
their precision.
Three types of numerical data are common in
computers:
Binary integer or binary fixed point
Binary floating point
Decimal
Intel x86 and ARM Data Types
The x86 can deal with data types of 8 (byte), 16
(word), 32 (doubleword), 64 (quadword), and 128
(double quadword) bits in length. To allow
maximum flexibility in data structures and
efficient memory utilization, words need not be
aligned at even-numbered addresses; doublewords
need not be aligned at addresses evenly divisible
by 4; and quadwords need not be aligned at
addresses evenly divisible by 8, and so on.
However, when data are accessed across a 32-bit
bus, data transfers take place in units of
doublewords, beginning at addresses divisible by 4.
The processor converts a request for misaligned
values into a sequence of requests for the bus transfer.
The byte, word, doubleword, quadword, and double
quadword are referred to as general data types.
Nb: ARM is a RISC design that relies heavily on
Types of Data and descriptions
General: Byte, word (16 bits), doubleword (32 bits), quadword (64 bits),
and double quadword (128 bits) locations with arbitrary binary contents.
Integer: A signed binary value contained in a byte, word, or
doubleword, using twos complement representation.
Ordinal: An unsigned integer contained in a byte, word, or
doubleword.
Unpacked binary coded decimal (BCD): A representation of a BCD
digit in the range 0 through 9, with one digit in each byte.
Packed BCD : Packed byte representation of two BCD digits; value in
the range 0 to 99.
Near pointer: A 16-bit, 32-bit, or 64-bit effective address that
represents the offset within a segment. Used for all pointers in a
nonsegmented memory and for references within a segment in a
segmented memory.
Far pointer : A logical address consisting of a 16-bit
segment selector and an offset of 16, 32, or 64 bits. Far
pointers are used for memory references in a segmented
memory model where the identity of a segment being
accessed must be specified explicitly.
Bit field : A contiguous sequence of bits in which the
position of each bit is considered as an independent unit. A
bit string can begin at any bit position of any byte and can
contain up to 32 bits.
Bit string: A contiguous sequence of bits, containing from
zero to 2^32-1 bits.
Byte string: A contiguous sequence of bytes, words, or
doublewords, containing from zero to 2^32-1 bytes.
Packed SIMD (single instruction, multiple data): Packed
64-bit and 128-bit data types
Intel x86 data types
The packed SIMD (single-instruction-multiple-data) data
types were introduced to the x86 architecture as part of the
extensions of the instruction set to optimize performance
of multimedia applications, which included MMX
(multimedia extensions) and SSE (streaming SIMD extensions).
The basic concept is that multiple operands are packed into
a single referenced memory item and that these multiple
operands are operated on in parallel.
The data types are as follows:
Packed byte and packed byte integer: Bytes packed into
a 64-bit quadword or 128-bit double quadword, interpreted
as a bit field or as an integer
Packed word and packed word integer: 16-bit words
packed into a 64-bit quadword or 128-bit double quadword,
interpreted as a bit field or as an integer
Packed doubleword and packed doubleword integer:
32-bit doublewords packed into a 64-bit quadword or 128-
bit double quadword, interpreted as a bit field or as an integer
Packed quadword and packed quadword integer: Two
64-bit quadwords packed into a 128-bit double quadword,
interpreted as a bit field or as an integer
Packed single-precision floating-point and packed
double-precision floating-point: Four 32-bit floating-
point or two 64-bit floating-point values packed into a 128-
bit double quadword
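The packed-operand idea can be simulated in software. This sketch is only an illustration of the concept (real SSE/MMX code is written with intrinsics or assembly): two 64-bit quadwords are added as eight independent byte lanes, with no carry crossing a lane boundary:

```python
def packed_add_bytes(a, b):
    """Add two 64-bit quadwords lane-by-lane as eight packed unsigned
    bytes: each lane wraps modulo 256 and carries never propagate into
    the neighboring lane, mimicking a packed-byte SIMD add."""
    out = 0
    for lane in range(8):
        shift = 8 * lane
        x = (a >> shift) & 0xFF
        y = (b >> shift) & 0xFF
        out |= ((x + y) & 0xFF) << shift   # per-lane wraparound add
    return out

print(hex(packed_add_bytes(0x0102030405060708, 0x1010101010101010)))
print(hex(packed_add_bytes(0xFF, 0x02)))   # lane 0 wraps; lane 1 untouched
```

One referenced memory item, eight additions performed "in parallel": that is the basic SIMD concept stated above.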
Types of Operations
The number of different opcodes varies widely from
machine to machine. However, the same general
types of operations are found on all machines. A
useful and typical categorization is the following:
Data transfer
Arithmetic
Logical
Conversion
I/O
System control
Transfer of control
Some common Arithmetic Operations
ADD: Perform the arithmetic sum of two operands
SUBTRACT: Perform the arithmetic difference of two operands
MULTIPLY: Perform the product of two operands
DIVIDE: Perform the division of two operands
INCREMENT: Add one to the contents of an operand
DECREMENT: Subtract one from the contents of an operand
Some Common Logical Operations and Shift/Rotate Operations
AND: Perform the logical ANDing of two operands
OR: Perform the logical ORing of two operands
EXOR: Perform the XORing of two operands
NOT: Perform the complement of an operand
COMPARE: Perform logical comparison of two operands
and set flag accordingly
SHIFT: Perform logical shift (right or left) of the content of a register
ROTATE: Perform logical shift (right or left) with
wraparound of the content of a register
Some Transfer of Control Operations
BRANCH-IF-CONDITION: Transfer of control to a
new address if condition is true
JUMP : Unconditional transfer of control
CALL: Transfer of control to a subroutine
RETURN: Transfer of control to the caller routine
Examples of Condition Flags
Negative (N): Set to 1 if the result of the most
recent operation is negative, it is 0 otherwise
Zero (Z): Set to 1 if the result of the most recent
operation is 0, it is 0 otherwise
Overflow (V): Set to 1 if the result of the most
recent operation causes an overflow, it is 0 otherwise
Carry (C) : Set to 1 if the most recent operation
results in a carry, it is 0 otherwise
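How an ALU derives these flags after an ADD can be sketched for 8-bit two's-complement values; this model is an illustration, not any particular processor's logic:

```python
def add_with_flags(x, y, bits=8):
    """Perform a two's-complement add and derive the N, Z, C, V flags,
    as a processor sets them after an ADD instruction."""
    mask = (1 << bits) - 1
    raw = (x & mask) + (y & mask)
    result = raw & mask
    sign = 1 << (bits - 1)
    flags = {
        'N': int(bool(result & sign)),   # high-order bit set: negative
        'Z': int(result == 0),           # result is zero
        'C': int(raw > mask),            # carry out of the top bit
        # V: the operands share a sign that the result does not
        'V': int(bool(~(x ^ y) & (x ^ result) & sign)),
    }
    return result, flags

print(add_with_flags(0x7F, 0x01))   # 127 + 1 overflows signed 8-bit: V set
print(add_with_flags(0xFF, 0x01))   # -1 + 1 = 0: Z and C set, no overflow
```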
Arithmetic
Most machines provide the basic arithmetic
operations of add, subtract, multiply, and divide.
These are invariably provided for signed integer
(fixed-point) numbers.
Often they are also provided for floating-point and
packed decimal numbers.
Other possible operations include a variety of single-
operand instructions like:
Absolute: Take the absolute value of the operand.
Negate: Negate the operand.
Increment: Add 1 to the operand.
Decrement: Subtract 1 from the operand.
Intel x86 and ARM Operation Types
The x86 provides a complex array of operation types, including a
number of specialized instructions. The intent was to provide tools for
the compiler writer to produce optimized machine language
translation of high-level language programs.
Typical operations fall in the following categories:
High level language support
Mandatory: explain the following operations
High level language support
A CPU can understand and execute machine instructions.
An assembly language program is composed of:
Integer constants can be written in decimal,
hexadecimal, octal or binary, by adding a radix (or
number base) suffix to the end .
– d decimal (the default)
– h hexadecimal
– q or o octal
– b binary
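The radix-suffix convention above can be sketched with a small parser; the helper name is an assumption for this illustration, and it follows the suffix table as listed:

```python
def parse_int_constant(token):
    """Parse an assembler-style integer constant by its radix suffix:
    h = hexadecimal, q or o = octal, b = binary, d (or no suffix) = decimal."""
    radixes = {'h': 16, 'q': 8, 'o': 8, 'b': 2, 'd': 10}
    suffix = token[-1].lower()
    if suffix in radixes:
        return int(token[:-1], radixes[suffix])
    return int(token, 10)    # no suffix: decimal is the default

print(parse_int_constant('1Ah'),    # hexadecimal 1A
      parse_int_constant('17q'),    # octal 17
      parse_int_constant('1011b'),  # binary 1011
      parse_int_constant('42'))     # decimal, the default
```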
An integer expression is a mathematical
expression involving integer values and integer
operators.
The expression must be one that can be stored in
32 bits (or less).
( ) Expressions in Parentheses
+, - Unary Plus and minus
*, /, Mod Multiply, Divide, Modulus
+, - Add, Subtract
Real Number Constants
There are two types of real number constants:
– Decimal reals, which contain a sign followed by
a number with decimal fraction and an exponent:
+3.0 -44.2E+05 26.E5
– Encoded reals, which are represented exactly as
they are stored: 3F800000r
A character constant is a single character enclosed in
single or double quotation marks.
The assembler converts it to the equivalent value in the
binary ASCII code:
A string constant is a string of characters
enclosed in single or double quotation marks:
“This isn’t a test”
‘Say “Goodnight, ” Gracie’
Reserved words have a special meaning to the
assembler and cannot be used for anything other
than their specified purpose.
– Instruction mnemonics
– Operators in constant expressions
– Predefined symbols such as @data which return
constant values at assembly time.
Identifiers are names that the programmer
chooses to represent variables, constants,
procedures or labels.
– can have 1 to 247 characters
– are not case-sensitive
– begin with a letter, underscore, @ or $ and can
also contain digits after the first character.
– cannot be reserved words
Directives are commands for the assembler, telling it how
to assemble the program.
Directives have a syntax similar to assembly language but
do not correspond to Intel processor instructions.
Directives are also case-insensitive:
An instruction in Assembly language consists of a
name (or label), an instruction mnemonic,
operands and a comment
The general form is:
[name] [mnemonic] [operands] [; comment]
Statements are free-form; i.e., they can be written in
any column with any number of spaces between
operands, as long as they fit on one line and do
not extend past column 128.
Labels are identifiers that serve as place markers
within the program for either code or data.
These are replaced in the machine-language
version of the program with numeric addresses.
We use them because they are more readable:
mov ax, 
mov ax, MyVariable
Instruction mnemonics are abbreviations that
identify the operation carried out by the instruction:
mov - move a value to another location
add - add two values
sub - subtract a value from another
jmp - jump to a new location in the program
mul - multiply two values
call - call a procedure