For any query visit us at: http://www.siliconmentor.com/
Floating Point Unit (FPU)
1. FLOATING POINT UNIT (FPU)
Digital signal processing can be divided into two subcategories: fixed point and floating point.
These formats describe how numbers are stored and manipulated within a device.
In computing, floating point is a method of representing an approximation of a real number in a
way that supports a trade-off between range and precision. A number that can be represented
exactly is of the form significand × base^exponent; for example, 1.101 × 2^3 in binary is 1101,
i.e. 13 in decimal.
There are three types of floating point numbers: half precision, single precision and double
precision.
Half precision - Half-precision floating point is a 16-bit binary floating-point interchange
format. Of the 16 bits, bit 15 is the sign bit, bits 14 to 10 hold the exponent, and the
remaining 10 bits hold the significand.
Single precision - A total of 32 bits are used to represent a single-precision floating point
number. Bit 31 is the sign bit, bits 30 to 23 hold the exponent, and the remaining 23 bits hold
the significand.
Double precision - A total of 64 bits are used to represent a double-precision floating point
number. Bit 63 is the sign bit, bits 62 to 52 hold the exponent, and the remaining 52 bits hold
the significand.
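The single-precision layout above can be checked directly in software. The sketch below (a minimal Python illustration, not part of the original article) packs a number into its 32-bit IEEE 754 representation and extracts the three bit fields:

```python
import struct

def float32_fields(x):
    """Split a number into its IEEE 754 single-precision
    sign, exponent, and significand bit fields."""
    # Pack as a 32-bit float, then reinterpret the 4 bytes as an integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # bit 31
    exponent = (bits >> 23) & 0xFF   # bits 30-23 (biased by 127)
    significand = bits & 0x7FFFFF    # bits 22-0
    return sign, exponent, significand

# -6.5 = -1.101 x 2^2, so the stored exponent is 2 + 127 = 129
print(float32_fields(-6.5))  # (1, 129, 0b10100000000000000000000)
```

The exponent field is stored with a bias of 127, which is why 2^2 appears as 129 in the extracted field.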
2. Floating point unit - A floating point unit is an IC designed to perform arithmetic
operations on floating point numbers or fractions. This unit is fully dedicated to floating
point operands and nothing else. A basic microprocessor cannot manipulate floating point
numbers quickly, so a separate, specialized floating point unit is designed as a coprocessor.
This unit is designed to perform five basic operations:
1. Addition
2. Subtraction
3. Multiplication
4. Division
5. Square root
As an example of an FPU application, consider an image processing system that uses a lossy
biorthogonal 9/7 lifting DWT technique; this involves complex computations with floating point
numbers. Such a system requires additional hardware to handle the floating point computations,
which leads to the design of a separate floating point unit.
Example of floating point addition:
Suppose we have two 5-digit binary numbers:
  2^4 × 1.1001
+ 2^2 × 1.0010
Step 1 - Find the difference between the larger and smaller exponent.
E_l = 4, E_s = 2, difference = 4 − 2 = 2
Step 2 - Make the exponents equal by shifting the fraction with the smaller exponent right by
the difference (2 bits), then add both fractions:
1.1001 000
+0.0100 100
1.1101 100
Step 3 - Round the result to the nearest even, keeping four fraction bits:
1.1110
Result = 2^4 × 1.1110
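The three steps above can be sketched in software. The following Python function (an illustrative sketch, not the article's hardware design; it omits normalization when the sum overflows past 2.0) takes two (exponent, fraction) pairs, aligns them, adds, and rounds to nearest even:

```python
def fp_add(exp_a, frac_a, exp_b, frac_b):
    """Add two binary floating-point numbers given as (exponent, fraction)
    pairs, where the fraction is an integer encoding 1.xxxx scaled by 2^4.
    Mirrors the three steps of the worked example."""
    # Step 1: ensure a has the larger exponent; compute the difference.
    if exp_a < exp_b:
        exp_a, frac_a, exp_b, frac_b = exp_b, frac_b, exp_a, frac_a
    diff = exp_a - exp_b
    # Step 2: align by shifting the smaller fraction right by `diff`,
    # keeping three extra guard bits for rounding, then add.
    guard = 3
    total = (frac_a << guard) + ((frac_b << guard) >> diff)
    # Step 3: round to nearest even on the guard bits.
    half = 1 << (guard - 1)
    rem = total & ((1 << guard) - 1)
    total >>= guard
    if rem > half or (rem == half and total & 1):
        total += 1
    return exp_a, total

# 2^4 * 1.1001  +  2^2 * 1.0010, fractions scaled by 2^4
exp, frac = fp_add(4, 0b11001, 2, 0b10010)
print(exp, bin(frac))  # 4 0b11110  ->  2^4 * 1.1110
```

Note how the tie case 1.1101|100 rounds up to 1.1110 because the last kept bit of 1.1101 is odd, exactly as in the worked example.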
Other operations can be computed by the FPU in a similar way. The FPU has operational
switches; on the basis of the input given to the switch, it decides which operation is to be
performed. If the add opcode is fed to the switch, the addition operation takes place; the same
procedure is followed for subtraction, multiplication, division and square root.
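The switch-based selection can be modeled as a dispatch table. The snippet below is a hypothetical software analogy (the opcode names and the `fpu_execute` helper are illustrative, not taken from any actual FPU design), mapping each of the five operations to the corresponding computation:

```python
import math

# Hypothetical opcode table: the operational switch selects which of
# the five FPU operations is applied to the operands.
OPERATIONS = {
    "add":  lambda a, b: a + b,
    "sub":  lambda a, b: a - b,
    "mul":  lambda a, b: a * b,
    "div":  lambda a, b: a / b,
    "sqrt": lambda a, _b: math.sqrt(a),  # unary: second operand ignored
}

def fpu_execute(opcode, a, b=0.0):
    """Dispatch on the opcode, as the FPU's operational switch does."""
    return OPERATIONS[opcode](a, b)

print(fpu_execute("mul", 1.5, 2.0))  # 3.0
print(fpu_execute("sqrt", 9.0))      # 3.0
```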
This FPU can be used in many VLSI applications, such as a power-optimized image processing
system. To reduce power consumption, a logarithm-based FPU can be used instead of a
conventional one. Today's image acquisition tools are generally battery operated, so power
optimization is a major concern. Using an LNS (logarithmic number system) in the arithmetic
unit results in reduced power.
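The core idea behind the power saving is that in an LNS each value is stored as its logarithm, so multiplication and division reduce to addition and subtraction, which are much cheaper in hardware than a full floating-point multiplier. A minimal numeric sketch of this idea (sign handling and the harder LNS addition case are omitted for brevity):

```python
import math

def to_lns(x):
    """Encode a positive value as its base-2 logarithm."""
    return math.log2(x)

def lns_mul(lx, ly):
    """Multiplication in LNS is just addition: log(x*y) = log x + log y."""
    return lx + ly

def from_lns(lx):
    """Decode back from the log domain."""
    return 2.0 ** lx

x, y = 8.0, 4.0
product = from_lns(lns_mul(to_lns(x), to_lns(y)))
print(product)  # 32.0
```

The trade-off is that addition and subtraction become harder in the log domain, which is why LNS units pay off mainly in multiply-heavy workloads such as DWT-based image processing.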