


                           Chapter 3 :  Information Theory



Section 3.5 :

Ex. 3.5.3 :     Consider a telegraph source having two symbols, dot and dash. The dot duration is
                0.2 seconds; and the dash duration is 3 times the dot duration. The probability of the dot
                occurring is twice that of the dash, and the time between symbols is 0.2 seconds.
                Calculate the information rate of the telegraph source.                   .Page No. 3-14.
Soln. :
Given that :        1.   Dot duration : 0.2 sec.
                    2.   Dash duration : 3 × 0.2 = 0.6 sec.
                    3.   P (dot) = 2 P (dash).
                    4.   Space between symbols is 0.2 sec.
                         Information rate = ?

1.     Probabilities of dots and dashes :
      Let the probability of a dash be “P”. Therefore the probability of a dot will be “2P”. The total
probability of transmitting dots and dashes is equal to 1.
       ∴       P (dot) + P (dash)        = 1
       ∴       P + 2P                    = 1 ∴          P = 1/3
       ∴       Probability of dash       = 1/3
       and probability of dot            = 2/3                                                      …(1)

2.     Average information H (X) per symbol :

         ∴      H (X) = P (dot) · log2 [ 1/P (dot) ] + P (dash) · log2 [ 1/P (dash) ]
         ∴      H (X) = (2/3) log2 [ 3/2 ] + (1/3) log2 [ 3 ] = 0.3899 + 0.5283 = 0.9182 bits/symbol.

3.     Symbol rate (Number of symbols/sec.) :
       The total average time per symbol can be calculated as follows :
               Average symbol time      Ts = [ TDOT × P ( DOT )] + [ TDASH × P (DASH) ] + Tspace
                                    ∴   Ts = [ 0.2 × 2/3 ] + [ 0.6 × 1/3 ] + 0.2 = 0.5333 sec./symbol.
       Hence the average rate of symbol transmission is given by :
                                          r = 1/Ts = 1.875 symbols/sec.


4.     Information rate (R) :

                                             R = r × H ( X ) = 1.875 × 0.9182 = 1.72 bits/sec.            ...Ans.
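
       The arithmetic above can be cross-checked numerically. The following Python sketch (a minimal
check only, with illustrative variable names) reproduces H (X), Ts, r and R :

import math

# Ex. 3.5.3 : telegraph source with dots and dashes
p_dot, p_dash = 2/3, 1/3                 # probabilities found above
t_dot, t_dash, t_space = 0.2, 0.6, 0.2   # durations in seconds

H = p_dot * math.log2(1/p_dot) + p_dash * math.log2(1/p_dash)   # bits/symbol
Ts = p_dot * t_dot + p_dash * t_dash + t_space                  # sec/symbol
r = 1 / Ts                                                      # symbols/sec
R = r * H                                                       # bits/sec
print(round(H, 4), round(Ts, 4), round(r, 3), round(R, 2))      # 0.9183 0.5333 1.875 1.72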

Ex. 3.5.4 :        The voice signal in a PCM system is quantized in 16 levels with the following
                   probabilities :
                   P1 = P2 = P3 = P4 = 0.1                              P5 = P6 = P7 = P8 = 0.05
                   P9 = P10 = P11 = P12 = 0.075                         P13 = P14 = P15 = P16 = 0.025
                   Calculate the entropy and information rate. Assume fm = 3 kHz.                  .Page No. 3-15
Soln. :
      It is given that,
1.    The number of levels = 16. Therefore number of messages = 16.
2.    fm = 3 kHz.

(a)    To find the entropy of the source :
       The entropy is defined as,
                        H = Σ (k = 1 to M) pk log2 (1/ pk)                                                       ...(1)
               As       M = 16,        Equation (1) gets modified to,
                        H = Σ (k = 1 to 16) pk log2 (1/ pk)

                          = 4 [0.1 log2 (1/0.1)] + 4 [0.05 log2 (1/0.05)]

                             + 4 [0.075 log2 (1/0.075)] + 4 [0.025 log2 (1/0.025)]

               ∴       H = 0.4 log2 (10) + 0.2 log2 (20) + 0.3 log2 (13.33) + 0.1 log2 (40)

                           = 1.3288 + 0.8644 + 1.1211 + 0.5322

                   ∴   H = 3.85 bits/message                                                         ...(2) ...Ans.

(b)    To find the message rate (r) :

       The minimum rate of sampling is Nyquist rate.
                             Therefore       fs = 2 × fm
                                                  = 2 × 3 kHz = 6 kHz                                        ...(3)


      Hence there are 6000 samples/sec. As each sample is converted to one of the 16 levels, there are
6000 messages/sec.
                       ∴       Message rate r = 6000 messages/sec                                                 ...(4)

(c)    To find the information rate (R) :

                                             R = r × H = 6000 × 3.85 = 23100 bits/sec                            ...Ans.
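
       A short Python sketch (a numerical check only) confirms the entropy and information rate
computed above :

import math

# Ex. 3.5.4 : 16-level PCM source sampled at the Nyquist rate
probs = [0.1]*4 + [0.05]*4 + [0.075]*4 + [0.025]*4     # the 16 level probabilities (sum = 1)
H = sum(p * math.log2(1/p) for p in probs)             # entropy, bits/message
r = 2 * 3e3                                            # Nyquist rate = 2 fm = 6000 messages/sec
print(round(H, 2), round(r * H))                       # 3.85 and ~23079 (≈ 23100 with H rounded to 3.85)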

Ex. 3.5.5 :       A message source generates one of four messages randomly every microsecond. The
                  probabilities of these messages are 0.4, 0.3, 0.2 and 0.1. Each emitted message is
                  independent of other messages in the sequence :
                  1.     What is the source entropy ?
                  2.     What is the rate of information generated by this source in bits per second ?
                                                                                                      .Page No. 3-15
Soln. :
      It is given that,
1.     Number of messages, M = 4, let us denote them by m1, m2, m3 and m4.
2.     Their probabilities are p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.
3.     One message is transmitted per microsecond.
       ∴    Message transmission rate r = 1 / (1 × 10⁻⁶ sec) = 1 × 10⁶ messages/sec.

(a)    To obtain the source entropy (H) :

                       H = Σ (k = 1 to M) pk log2 ( 1/pk )
              ∴        H = p1 log2 ( 1/ p1 ) + p2 log2 ( 1/ p2 ) + p3 log2 ( 1/ p3 ) + p4 log2 ( 1/ p4 )
                           = 0.4 log2 ( 1/0.4 ) + 0.3 log2 ( 1/0.3 ) + 0.2 log2 ( 1/0.2 ) + 0.1 log2 ( 1/0.1 )
              ∴        H = 1.846 bits/message                                                                    ...Ans.

(b)    To obtain the information rate (R) :
                                             R = H × r = 1.846 × 1 × 10⁶ = 1.846 Mbits/sec                      ...Ans.

Ex. 3.5.6 :       A source consists of 4 letters A, B, C and D. For transmission each letter is coded into a
                  sequence of two binary pulses. A is represented by 00, B by 01, C by 10 and D by 11.
                  The probability of occurrence of each letter is P (A) = 1/5, P (B) = 1/4, P (C) = 1/4 and
                  P (D) = 3/10. Determine the entropy of the source and average rate of transmission of
                  information.                                                                        .Page No. 3-15

Soln. : The given data can be summarised as shown in the following table :
                                         Message     Probability      Code
                                            A            1/5           00
                                            B            1/4           01
                                            C            1/4           10
                                            D            3/10          11
Assumption : Let us assume that the message transmission rate is r = 4000 messages/sec.
(a)   To determine the source entropy :
                                          H = (1/5) log2 (5) + (1/4) log2 (4) + (1/4) log2 (4) + (3/10) log2 (10/3)
                                     ∴    H = 1.9855 bits/message                                            ...Ans.
(b)   To determine the information rate :
                                          R = r × H = [4000 messages/sec] × [1.9855 bits/message]
                                          R = 7942.3 bits/sec                                                ...Ans.
(c)   Maximum possible information rate :
                                     Number of messages/sec = 4000
              But here the number of binary digits/message         = 2
                     ∴ Number of binary digits (binits)/sec = 4000 × 2 = 8000 binits/sec.
       We know that each binit can convey a maximum average information of 1 bit
                                 ∴        H = 1 bit/binit
        ∴ Maximum rate of information transmission = [8000 binits/sec] × [Hmax bits/binit]
                                              = 8000 × 1 = 8000 bits/sec                                            ...Ans.

Section 3.6 :

Ex. 3.6.3 :     A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities
                p ( x1 ) = 0.4, p ( x2 ) = 0.19, p ( x3 ) = 0.16, p ( x4 ) = 0.14 and p ( x 5 ) = 0.11. Construct the
                Shannon-Fano code for this source. Calculate the average code word length and coding
                efficiency of the source.                                                          .Page No. 3-21
Soln. : Follow the steps given below to obtain the Shannon-Fano code.
Step 1 : List the source symbols in the order of decreasing probability.
Step 2 : Partition the set into two sets that are as close to being equiprobable as possible and assign
         0 to the upper set and 1 to the lower set.
Step 3 : Continue this process, each time partitioning the sets with as nearly equal probabilities as
         possible until further partitioning is not possible.


(a)   The Shannon-Fano codes are as given in Table P. 3.6.3.
                                           Table P. 3.6.3 : Shannon-Fano codes

          Symbols          Probability        Step 1       Step 2       Step 3          Code word
             x1               0.4                0            0           —                 00
             x2               0.19               0            1           —                 01
             x3               0.16               1            0           —                 10
             x4               0.14               1            1           0                 110
             x5               0.11               1            1           1                 111

(b)   Average code word length (L) :

       The average code word length is given by :

                        L = Σ pk × (length of codeword for mk in bits)
                            = ( 0.4 × 2 ) + ( 0.19 × 2 ) + ( 0.16 × 2 ) + ( 0.14 × 3 ) + ( 0.11 × 3 )
                            = 2.25 bits/message
(c)   Entropy of the source (H) :
                        H = Σ pk log2 ( 1 / pk )
                          = 0.4 log2 ( 1 / 0.4 ) + 0.19 log2 ( 1 / 0.19 ) + 0.16 log2 ( 1 / 0.16 )
                            + 0.14 log2 ( 1 / 0.14 ) + 0.11 log2 ( 1 / 0.11 ) = 2.15 bits/message
(d)   Coding efficiency (η) :
                        η = H / L = 2.15 / 2.25 = 0.9555  i.e.  95.55 %                                      ...Ans.
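
       The partitioning procedure of Steps 1 to 3 can also be written as a short recursive routine. The
Python sketch below (an illustrative implementation, not taken from the text) reproduces the code
words and the average length L = 2.25 bits/symbol for this source :

# A minimal Shannon-Fano sketch (assumes symbols are already sorted by decreasing probability).
def shannon_fano(symbols):
    """symbols: list of (name, probability) sorted by decreasing probability."""
    if len(symbols) <= 1:
        return {symbols[0][0]: ""} if symbols else {}
    total = sum(p for _, p in symbols)
    # choose the split point giving the most nearly equiprobable halves
    best_i, best_diff, running = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(2 * running - total)
        if diff < best_diff:
            best_i, best_diff = i, diff
    codes = {}
    for prefix, part in (("0", symbols[:best_i]), ("1", symbols[best_i:])):
        for name, code in shannon_fano(part).items():
            codes[name] = prefix + code
    return codes

src = [("x1", 0.40), ("x2", 0.19), ("x3", 0.16), ("x4", 0.14), ("x5", 0.11)]
codes = shannon_fano(src)
L = sum(p * len(codes[s]) for s, p in src)
print(codes, L)   # codeword lengths 2, 2, 2, 3, 3  ->  L = 2.25 bits/symbol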

Ex. 3.6.6 :          A discrete memoryless source has an alphabet of seven symbols with probabilities for its
                     output as described in Table P. 3.6.6(a).                                            .Page No. 3-25
                                                       Table P. 3.6.6(a)
              Symbol                  S0          S1         S2        S3          S4            S5       S6
              Probability          0.25          0.25      0.125      0.125       0.125     0.0625      0.0625
                     Compute the Huffman code for this source moving the “combined” symbol as high as
                     possible. Explain why the computed source code has an efficiency of 100 percent.


Soln. : The Huffman code for the source alphabets is as shown in Fig. P. 3.6.6.




                                    Fig. P. 3.6.6 : Huffman code
       Follow the path indicated by the dotted line to obtain the codeword for symbol S0 as 10.
Similarly we can obtain the codewords for the remaining symbols. These are as listed in
Table P. 3.6.6(b).
                                            Table P. 3.6.6(b)

                        Symbol    Probability     Codeword        Codeword length
                          S0          0.25            10              2 bits
                          S1          0.25            11              2 bits
                          S2         0.125           001              3 bits
                          S3         0.125           010              3 bits
                          S4         0.125           011              3 bits
                          S5         0.0625          0000             4 bits
                          S6         0.0625          0001             4 bits

To compute the efficiency :
1.    The average code length = L = Σ pk × (codeword length of symbol k in bits)
      From Table P. 3.6.6(b),
                 L = ( 0.25 × 2 ) + ( 0.25 × 2 ) + ( 0.125 × 3 ) × 3 + ( 0.0625 × 4 ) × 2
           ∴     L = 2.625 bits/symbol
2.    The average information per message = H = Σ pk log2 ( 1 / pk )


               ∴      H = [ 0.25 log2 ( 4 ) ] × 2 + [ 0.125 log2 ( 8 ) ] × 3 + [ 0.0625 log2 ( 16 ) ] × 2
                          = [ 0.25 × 2 × 2 ] + [ 0.125 × 3 × 3 ] + [ 0.0625 × 4 × 2 ]
               ∴      H = 2.625 bits/message.
3.     Code efficiency η = (H / L) × 100 = (2.625 / 2.625) × 100
                        ∴       η = 100%
 Note : As the average information per symbol (H) is equal to the average code length (L), the code
         efficiency is 100%.
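
       The Huffman construction of Fig. P. 3.6.6 can be checked with a few lines of Python. The sketch
below (illustrative; tie-breaking may differ from the hand construction, but the codeword lengths are
the same) confirms L = H = 2.625 and hence 100 % efficiency :

import heapq
import math

# Huffman codeword lengths for the source of Ex. 3.6.6
def huffman_lengths(probs):
    heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (prob, tie-break, member symbols)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every merge adds one bit to all merged symbols
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625]
lengths = huffman_lengths(probs)
L = sum(p * l for p, l in zip(probs, lengths))
H = sum(p * math.log2(1 / p) for p in probs)
print(lengths, L, H, H / L)   # [2, 2, 3, 3, 3, 4, 4]  L = H = 2.625  efficiency = 1.0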

Section 3.11 :

Ex. 3.11.5 :       Calculate differential entropy H (X) of the uniformly distributed random variable X with
                   probability density function.
                   fX (x) = 1/a             0≤x≤a
                          = 0               elsewhere
                   for 1. a = 1       2.    a=2           3.    a = 1/2.                        .Page No. 3-49.
Soln. :
      The uniform PDF of the random variable X is as shown in Fig. P. 3.11.5.




                                                        Fig. P. 3.11.5

1.     The average amount of information per sample value of x (t) is measured by,
               H (X) = ∫ (– ∞ to ∞) fX (x) · log2 [1/fX (x)] dx bits/sample                                       …(1)
       The entropy H (X) defined by the expression above is called as the differential entropy of X.
2.     Substituting the value of fX (x) in the expression for H (X) we get,
               H (X) = ∫ (0 to a) (1/a) · log2 (a) dx = log2 (a)                                                ...(2)
(a)      Substitute a = 1 to get, H (X) = ∫ (0 to 1) 1 · log2 (1) dx = 0                                        ...Ans.
(b)      Substitute a = 2 to get, H (X) = ∫ (0 to 2) (1/2) · log2 (2) dx = (1/2) × 1 × 2 = 1 bit                ...Ans.
(c)      Substitute a = 1/2 to get, H (X) = ∫ (0 to 1/2) 2 · log2 (1/2) dx = 2 × (– 1) × (1/2) = – 1 bit        ...Ans.
       These are the values of differential entropy for various values of a.
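
       In general the differential entropy of a uniform PDF over [0, a] is log2 (a), which a one-line
Python check confirms for the three cases above :

import math

# differential entropy of a uniform PDF on [0, a] is log2(a)
for a in (1, 2, 0.5):
    print(a, math.log2(a))    # 0.0, 1.0 and -1.0 bits respectively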

Ex. 3.11.6 :       A discrete source transmits messages x1, x2, x3 with probabilities p ( x1 ) = 0.3,
                   p ( x2 ) = 0.25, p ( x3 ) = 0.45. The source is connected to the channel whose conditional
                   probability matrix is
                                            y1     y2     y3
                                     x1  [  0.9    0.1    0   ]
                 P (Y / X) =         x2  [  0      0.8    0.2 ]
                                     x3  [  0      0.3    0.7 ]
                Calculate all the entropies and mutual information with this channel.                 .Page No. 3-49
Soln. :
Steps to be followed :
Step 1 :   Obtain the joint probability matrix P (X, Y).
Step 2 :   Obtain the probabilities p (y1), p (y2), p (y3).
Step 3 :   Obtain the conditional probability matrix P (X/Y)
Step 4 :   Obtain the marginal entropies H (X) and H (Y).
Step 5 :   Calculate the conditional entropy H (X/Y).
Step 6 :   Calculate the joint entropy H (X , Y).
Step 7 :   Calculate the mutual information I (X , Y).

Step 1 : Obtain the joint probability matrix P (X, Y) :

       The given matrix P (Y/X) is the conditional probability matrix. We can obtain the joint
probability matrix P (X , Y) as :

                                    P (X, Y) = P [ Y/X ] · P (X)
        i.e. each row of P (Y/X) is multiplied by the corresponding input probability p ( xi ).

                                                 [ 0.3 × 0.9      0.3 × 0.1      0.3 × 0    ]
                              ∴ P (X, Y) =       [ 0.25 × 0       0.25 × 0.8     0.25 × 0.2 ]
                                                 [ 0.45 × 0       0.45 × 0.3     0.45 × 0.7 ]

                                                      y1        y2        y3
                                                x1 [ 0.27      0.03      0     ]
                            ∴ P (X, Y) =        x2 [ 0         0.2       0.05  ]                          ...(1)
                                                x3 [ 0         0.135     0.315 ]
Step 2 : Obtain the probabilities p (y1), p (y2) and p (y3) :
        The probabilities p ( y1 ), p ( y2 ) and p ( y3 ) can be obtained by adding the column entries of
 P (X , Y) matrix of Equation (1).
                             ∴      p ( y1 ) = 0.27 + 0 + 0 = 0.27
                                    p ( y2 ) = 0.03 + 0.2 + 0.135 = 0.365
                                    p ( y3 ) = 0 + 0.05 + 0.315 = 0.365
Step 3 : Obtain the conditional probability matrix P (X/Y) :
        The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint
probability matrix P (X , Y) of Equation (1) by p (y1), p (y2) and p (y3) respectively.
                                                 [ 0.27 / 0.27      0.03 / 0.365       0             ]
                          ∴      P (X /Y) =      [ 0                0.2 / 0.365        0.05 / 0.365  ]
                                                 [ 0                0.135 / 0.365      0.315 / 0.365 ]

                                                      y1       y2         y3
                                                x1 [ 1        0.0821     0      ]
                          ∴      P (X /Y) =     x2 [ 0        0.5479     0.1369 ]                         ...(2)
                                                x3 [ 0        0.3698     0.863  ]
Step 4 : To obtain the marginal entropies H (X) and H (Y) :
               H (X) = Σ p ( xi ) log2 [ 1/p ( xi )]
                       = p ( x1 ) log2 [ 1/p ( x1 ) ] + p ( x2 ) log2 [ 1/p ( x2 ) ] + p ( x3 ) log2 [ 1/p ( x3 )]
       Substituting the values of p ( x1 ), p ( x2 ) and p ( x3 ) we get,
                       = 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.45 log2 (1/0.45)


                       = [ (0.3 × 1.7369) + (0.25 × 2) + (0.45 × 1.152) ]
       ∴       H (X) = [ 0.521 + 0.5 + 0.5184 ] = 1.5394 bits/message                                            ...Ans.
    Similarly H (Y) = p ( y1 ) log2 [ 1/p ( y1 ) ] + p ( y2 ) log2 [ 1/p ( y2 ) ] + p ( y3 ) log2 [ 1/p ( y3 ) ]
                       = 0.27 log2 [ 1/0.27 ] + 0.365 × 2 × log2 [ 1/0.365 ]
               H (Y) = 0.51 + 1.0614 = 1.5714 bits/message                                                       ...Ans.
Step 5 : To obtain the conditional entropy H (X / Y) :
     H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi/yj )

∴    H (X/Y) = – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )

                    – p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )

                    – p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )


Refer to the joint and conditional matrices given in Fig. P. 3.11.6.
                              P (X, Y)                                    P (X / Y)
                         y1      y2       y3                         y1      y2         y3
                      [ 0.27    0.03     0     ]                  [  1      0.0821     0      ]
                      [ 0       0.2      0.05  ]                  [  0      0.5479     0.1369 ]
                      [ 0       0.135    0.315 ]                  [  0      0.3698     0.863  ]

                                                      Fig. P. 3.11.6
Substituting various values from these two matrices we get,
            H (X/Y) = – 0.27 log2 1 – 0.03 log2 (0.0821) – 0 – 0 – 0.2 log2 (0.5479)
                         – 0.05 log2 (0.1369) – 0 – 0.135 log2 (0.3698) – 0.315 log2 (0.863)
                      = 0 + 0.108 + 0.1736 + 0.1434 + 0.1937 + 0.0669
            H (X/Y) = 0.6856 bits / message                                                                      ...Ans.
Step 6 : To obtain the joint entropy H (X , Y) :
       The joint entropy H (X , Y) is given by,
             H (X, Y) = – Σi Σj p ( xi , yj ) · log2 p ( xi , yj )
      ∴     H (X, Y) = – [ p ( x1 , y1 ) log2 p ( x1 , y1 ) + p ( x1 , y2 ) log2 p ( x1 , y2 ) + p ( x1 , y3 )
                       log2 p ( x1 , y3) + p ( x2 , y1 ) log2 p ( x2 , y1 ) + p ( x2 , y2 ) log2 p ( x2 , y2 )
                            + p ( x2 , y3 ) log2 p ( x2 , y3 ) + p ( x3 , y1 ) log2 p ( x3 , y1)
                            + p ( x3 , y2 ) log2 p ( x3 , y2 ) + p ( x3 , y3 ) log2 p ( x3 , y3 ) ]
       Referring to the joint matrix we get,
     ∴      H (X, Y) = – [ 0.27 log2 0.27 + 0.03 log2 0.03 + 0 + 0 + 0.2 log2 0.2 + 0.05 log2 0.05 + 0
                       + 0.135 log2 0.135 + 0.315 log2 0.315 ]
                       = [ 0.51 + 0.1517 + 0.4643 + 0.216 + 0.39 + 0.5249]
     ∴      H (X, Y) = 2.2569 bits/message                                                                       ...Ans.
Step 7 : To calculate the mutual information :
       Mutual information, is given by,
           I [ X, Y ] = H (X) – H (X/Y) = 1.5394 – 0.6856 = 0.8538 bits.
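
       Steps 1 to 7 can be verified numerically. The Python sketch below (a cross-check using the
conditional probability matrix P (Y/X) given above; names are illustrative) reproduces all the
entropies and the mutual information :

import numpy as np

# Ex. 3.11.6 : entropies of a discrete channel from P(X) and P(Y/X)
def channel_entropies(px, p_y_given_x):
    pxy = px[:, None] * p_y_given_x        # joint matrix P(X, Y): rows scaled by p(xi)
    py = pxy.sum(axis=0)                   # output probabilities p(yj)
    def H(p):                              # entropy of a probability vector (0 log 0 = 0)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    Hx, Hy, Hxy = H(px), H(py), H(pxy.ravel())
    return Hx, Hy, Hxy, Hxy - Hy, Hxy - Hx, Hx + Hy - Hxy   # last three: H(X/Y), H(Y/X), I(X;Y)

px = np.array([0.3, 0.25, 0.45])
p_y_given_x = np.array([[0.9, 0.1, 0.0],
                        [0.0, 0.8, 0.2],
                        [0.0, 0.3, 0.7]])
Hx, Hy, Hxy, Hx_y, Hy_x, I = channel_entropies(px, p_y_given_x)
print(Hx, Hy, Hxy, Hx_y, I)   # ~1.539, 1.571, 2.257, 0.686, 0.854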

Ex. 3.11.7 :    For the given channel matrix, find out the mutual information. Given that p ( x1 ) = 0.6,
                p ( x2 ) = 0.3 and p ( x3 ) = 0.1.                                                       .Page No. 3-50




                                                 y1     y2     y3
                                          x1  [ 0.5    0.5    0   ]
                       P (Y / X)  =       x2  [ 0.5    0      0.5 ]
                                          x3  [ 0      0.5    0.5 ]

Soln. :
Steps to be followed :
Step 1 :   Obtain the joint probability matrix P (X , Y).
Step 2 :   Calculate the probabilities p ( y1 ), p ( y2 ), p ( y3 ).
Step 3 :   Obtain the conditional probability matrix P (X/Y).
Step 4 :   Calculate the marginal entropies H (X) and H (Y).
Step 5 :   Calculate the conditional entropy H (X/Y).
Step 6 :   Find the mutual information.
Step 1 : Obtain the joint probability matrix P (X , Y) :
       We can obtain the joint probability matrix P (X , Y) as
                               P (X , Y) = P (Y/X) · P (X)
       So multiply rows of the P (Y / X) matrix by p ( x1 ), p ( x2 ) and p ( x3 ) to get,


                                        [ 0.5 × 0.6     0.5 × 0.6     0         ]
                           P (X , Y) =  [ 0.5 × 0.3     0             0.5 × 0.3 ]
                                        [ 0             0.5 × 0.1     0.5 × 0.1 ]


                                               y1       y2       y3
                                        x1  [ 0.3      0.3      0    ]
                       ∴   P (X, Y)  =  x2  [ 0.15     0        0.15 ]                                      … (1)
                                        x3  [ 0        0.05     0.05 ]
Step 2 : Obtain the probabilities p ( y1 ), p ( y2 ), p ( y3 ) :
      These probabilities can be obtained by adding the column entries of P (X , Y) matrix of
Equation (1).
                               ∴     p ( y1 ) = 0.3 + 0.15 + 0 = 0.45
                                     p ( y2 ) = 0.3 + 0 + 0.05 = 0.35
                                     p ( y3 ) = 0 + 0.15 + 0.05 = 0.20
Step 3 : Obtain the conditional probability matrix P (X/Y) :

      The conditional probability matrix P (X/Y) can be obtained by dividing the columns of the joint
probability matrix P (X , Y) of Equation (1) by p ( y1 ), p ( y2 ) and p ( y3 ) respectively.


                                           [ 0.3 / 0.45      0.3 / 0.35      0          ]
                       ∴   P (X / Y)  =    [ 0.15 / 0.45     0               0.15 / 0.2 ]
                                           [ 0               0.05 / 0.35     0.05 / 0.2 ]


                                                  y1        y2        y3
                                           x1  [ 0.667     0.857     0    ]
                       ∴   P (X / Y)  =    x2  [ 0.333     0         0.75 ]                                 … (2)
                                           x3  [ 0         0.143     0.25 ]
Step 4 : Calculate the marginal entropy H (X) :
                                     H (X) = – Σ p ( xi ) log2 p ( xi )
                                               = – p ( x1 ) log2 p ( x1 ) – p ( x2 ) log2 p ( x2 ) – p ( x3 ) log2 p ( x3 )
                                               = – 0.6 log2 (0.6) – 0.3 log2 (0.3) – 0.1 log2 (0.1)
                                               = 0.4421 + 0.5210 + 0.3321
                           ∴         H (X) = 1.2952 bits/message
Step 5 : Obtain the conditional entropy H (X/Y) :
       H (X/Y) = – Σi Σj p ( xi , yj ) log2 p ( xi / yj )
                 = – p ( x1 , y1 ) log2 p ( x1/y1 ) – p ( x1 , y2 ) log2 p ( x1/y2 ) – p ( x1 , y3 ) log2 p ( x1/y3 )
                     – p ( x2 , y1 ) log2 p ( x2/y1 ) – p ( x2 , y2 ) log2 p ( x2/y2 ) – p ( x2 , y3 ) log2 p ( x2/y3 )
                     – p ( x3 , y1 ) log2 p ( x3/y1 ) – p ( x3 , y2 ) log2 p ( x3/y2 ) – p ( x3 , y3 ) log2 p ( x3/y3 )
       Refer to the joint and conditional matrices of Fig. P. 3.11.7.

                                   P (X / Y)                                  P (X, Y)
                               y1        y2        y3                    y1         y2    y3
                            0.667      0.857       0                     0.3      0.3     0
                            0.333        0        0.75                  0.15        0    0.15
                               0       0.143      0.25                   0       0.05    0.05
                                                        Fig. P. 3.11.7

        Substituting various values from these two matrices we get,
              H (X/Y) = – 0.3 log2 0.667 – 0.3 log2 0.857 – 0
                            – 0.15 log2 0.333 – 0 – 0.15 log2 0.75

                             – 0 – 0.05 log2 0.143 – 0.05 log2 0.25
      ∴        H (X/Y) = 0.1752 + 0.06678 + 0.2379 + 0.06225 + 0.1402 + 0.1
      ∴        H (X/Y) = 0.78233 bits/message
Step 6 : Mutual information :
               I (X , Y) = H (X) – H (X/Y)
                           = 1.2952 – 0.78233 = 0.51287 bits                                            ...Ans.

Ex. 3.11.8 :      State the joint and conditional entropy. For a signal which is known to have a uniform
                  density function in the range 0 ≤ x ≤ 5; find entropy H (X). If the same signal is amplified
                  eight times, then determine H (X).                                            .Page No. 3-50
Soln. : For the definitions of joint and conditional entropy
refer to sections 3.10.1 and 3.10.2.
        The uniform PDF of the random variable X is as shown
in Fig. P. 3.11.8.
1.      The differential entropy H (X) of the given R.V. X is
        given by,
                H (X) = ∫ fX (x) log2 [1/fX (x)] dx bits/sample.                   Fig. P. 3.11.8
2.      Let us define the PDF fX (x). It is given that fX (x) is uniform in the range 0 ≤ x ≤ 5.


                       ∴         Let    fX (x) = k                        .... 0 ≤ x ≤ 5
                                                 = 0                      .... elsewhere
       But area under fX (x) is always 1.
       ∴                   ∫ (– ∞ to ∞) fX (x) dx = 1
       ∴                        ∫ (0 to 5) k dx = 1,   i.e.  5k = 1
       ∴                                     k = 1/5
       Hence the PDF of X is given by,
                                        fX (x) = 1/5                        .... 0 ≤ x ≤ 5
                                                 = 0                       .... elsewhere
3.     Substituting the value of fX (x) we get,
                                       H (X) = ∫ (0 to 5) (1/5) log2 (5) dx = log2 (5)
                                ∴      H (X) = 2.322 bits/message                                       ...Ans.
4.     If the same signal is amplified eight times, the range becomes 0 ≤ x ≤ 40 and the PDF becomes
       fX (x) = 1/40 over this range. Hence,
                                       H (X) = ∫ (0 to 40) (1/40) log2 (40) dx = log2 (40)
                                ∴      H (X) = 5.322 bits/message                                       ...Ans.

Ex. 3.11.9 :      Two binary symmetrical channels are connected in cascade as shown in Fig. P. 3.11.9.
                  1.    Find the channel matrix of the resultant channel.
                  2.    Find p ( z1 ) and p ( z2 ) if p ( x1 ) = 0.6 and p ( x2 ) = 0.4.        .Page No. 3-50




                                     Fig. P. 3.11.9 : BSC for Ex. 3.11.9
Soln. :
Steps to be followed :
Step 1 : Write the channel matrix for the individual channels as P [ Y/X ] for the first one and
         P [ Z/Y ] for the second channel.
Step 2 : Obtain the channel matrix for the cascaded channel as,
                           P [ Z/X ] = P [ Y/X ] · P [ Z/Y ]
Step 3 : Calculate the probabilities P ( z1 ) and P ( z2 ).
1.    To obtain the individual channel matrix :
       The channel matrix of a BSC consists of the transition probabilities of the channel. That means
the channel matrix for channel – 1 is given by,
                                                [ P ( y1/x1 )    P ( y2/x1 ) ]
                                  P [ Y/X ] =   [ P ( y1/x2 )    P ( y2/x2 ) ]                             ...(1)
       Substituting the values we get,
                                                [ 0.8    0.2 ]
                                  P [ Y/X ] =   [ 0.2    0.8 ]                                             ...(2)
       Similarly the channel matrix for second BSC is given by,
                                                [ P ( z1/y1 )    P ( z2/y1 ) ]
                                  P [ Z/Y ] =   [ P ( z1/y2 )    P ( z2/y2 ) ]                             ...(3)
       Substituting the values we get,
                                                [ 0.7    0.3 ]
                                  P [ Z/Y ] =   [ 0.3    0.7 ]                                             ...(4)
2.    Channel matrix of the resultant channel :
       The channel matrix of the resultant channel is given by,
                                                [ P ( z1/x1 )    P ( z2/x1 ) ]
                                  P [ Z/X ] =   [ P ( z1/x2 )    P ( z2/x2 ) ]                             ...(5)
       The probability P ( z1/x1 ) can be expressed by referring to Fig. P. 3.11.9 as,
           P ( z1/x1 ) = P ( z1/y1 ) · P ( y1/x1 ) + P ( z1/y2 ) · P ( y2/x1 )                         ...(6)
      Similarly we can obtain the expressions for the remaining terms in the channel matrix of resultant
channel.
                [ P ( z1/y1 ) P ( y1/x1 ) + P ( z1/y2 ) P ( y2/x1 )     P ( z2/y1 ) P ( y1/x1 ) + P ( z2/y2 ) P ( y2/x1 ) ]
 ∴ P [ Z/X ] =  [ P ( z1/y1 ) P ( y1/x2 ) + P ( z1/y2 ) P ( y2/x2 )     P ( z2/y1 ) P ( y1/x2 ) + P ( z2/y2 ) P ( y2/x2 ) ]
                                                                                                      ...(7)
      The elements of the channel matrix of Equation (7) can be obtained by multiplying the individual
channel matrices.
                            ∴       P (Z/X) = P (Y/X) · P (Z/Y)                                       …(8)
                                                  [ 0.8    0.2 ]   [ 0.7    0.3 ]
                            ∴       P (Z/X)  =    [ 0.2    0.8 ] × [ 0.3    0.7 ]

                                                  [ 0.62    0.38 ]
                                             =    [ 0.38    0.62 ]                                            ...Ans.
       This is the required resultant channel matrix.

3.    To calculate P ( z1 ) and P ( z2 ) :
       From Fig. P. 3.11.9 we can write the following expression,
                                  P ( z1 ) = P ( z1/ y1 ) P ( y1 ) + P ( z1/ y2 ) · P ( y2 )                …(9)
                  Substituting P ( y1 ) = P ( x1 ) · P ( y1/ x1 ) + P ( x2 ) · P ( y1/ x2 )
                                           = (0.6 × 0.8) + (0.4 × 0.2) = 0.56
                          and    P ( y2 ) = P ( x1 ) · P ( y2/ x1 ) + P ( x2 ) · P ( y2/ x2 )
                                           = (0.6 × 0.2) + (0.4 × 0.8) = 0.44
                      and     P ( z1/ y1 ) = 0.7 and P ( z1/ y2 ) = 0.3
                      We get,     P ( z1 ) = (0.7 × 0.56) + (0.3 × 0.44)
                           ∴      P ( z1 ) = 0.392 + 0.132 = 0.524                                         ...Ans.
                   Similarly      P ( z2 ) = P ( z2/ y1 ) P ( y1 ) + P ( z2/ y2 ) · P ( y2 )
                                           = (0.3 × 0.56) + (0.7 × 0.44)
                           ∴     P ( z2 ) = 0.476                                                          ...Ans.
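
       The same result follows from a direct matrix computation; the short Python sketch below (a
numerical check) reproduces the resultant channel matrix and the output probabilities :

import numpy as np

# Ex. 3.11.9 : two BSCs in cascade -> overall channel matrix and output probabilities
P_yx = np.array([[0.8, 0.2],
                 [0.2, 0.8]])      # first BSC, P(Y/X)
P_zy = np.array([[0.7, 0.3],
                 [0.3, 0.7]])      # second BSC, P(Z/Y)
P_zx = P_yx @ P_zy                 # resultant channel matrix P(Z/X)
px = np.array([0.6, 0.4])
pz = px @ P_zx                     # [p(z1), p(z2)]
print(P_zx)                        # [[0.62, 0.38], [0.38, 0.62]]
print(pz)                          # [0.524, 0.476]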

Ex. 3.11.10 :   A binary channel matrix is given by :
                                 y1       y2     → outputs
                 inputs →   x1 [ 2/3      1/3  ]
                            x2 [ 1/10     9/10 ]
                Determine H (X), H (X/Y), H (Y/X) and mutual information I (X ; Y)                 .Page No. 3-50
Soln. : The given channel matrix is
                                                        y1        y2
                                 p (x, y) =        x1 [ 2/3      1/3  ]
                                                   x2 [ 1/10     9/10 ]
Step 1 : Obtain the individual probabilities :
       The individual message probabilities are given by -
                                 p ( x1 ) = 2/3 + 1/3 = 1
                                 p ( x2 ) = 1/10 + 9/10 = 1
                                 p ( y1 ) = 2/3 + 1/10 = 23/30
                                 p ( y2 ) = 1/3 + 9/10 = 37/30


Step 2 : Obtain the marginal entropies H (X) and H (Y) :
                                  H (X) = p ( x1 ) log2 [ 1/ p ( x1 ) ] + p ( x2 ) log2 [ 1/ p ( x2 ) ]
                                            = 1 log2 (1) + 1 log2 (1)
                           ∴ H (X) = 0
                                  H (Y) = p ( y1 ) log2 [ 1/ p ( y1 ) ] + p ( y2 ) log2 [ 1/ p ( y2 ) ]
                                            = (23/30) log2 [ 30/23 ] + (37/30) log2 [30/37]
                                  H (Y) = 0.2938 – 0.3731 = – 0.07936 ≈ – 0.08
Step 3 : Obtain the joint entropy H (X, Y) :
          H (X, Y) = p ( x1 , y1 ) log2 [ 1/ p ( x1 , y1 ) ] + p ( x1 , y2 ) log2 [ 1/ p ( x1 , y2 ) ]
                       + p ( x2 , y1 ) log2 [ 1/ p ( x2 , y1 ) ] + p ( x2 , y2 ) log2 [ 1/ p ( x2 , y2 ) ]
        ∴ H (X, Y) = (2/3) log2 (3/2) + (1/3) log2 (3) + (1/10) log2 (10) + (9/10) log2 (10/9)
                 = 0.38 + 0.52 + 0.33 + 0.13 = 1.36 bits
Step 4 : Obtain the conditional entropies H (X/Y) and H (Y/X) :
                               H (X/Y) = H (X , Y) – H (Y)
                                            = 1.36 – (– 0.08) = 1.44 bits.
                               H (Y/X) = H (X , Y) – H (X)
                                            = 1.36 – 0 = 1.36 bits.
Step 5 : Mutual information :
                               I (X, Y) = H (X) – H (X/Y)
                                            = 0 – 1.44
                                            = – 1.44 bits/message.                                         ...Ans.

Ex. 3.11.11 :   A channel has the following channel matrix :

                                            y1          y2        y3
                 [ P (Y/X) ] =       x1 [  1 – P       P         0     ]
                                     x2 [  0           P         1 – P ]
                1.     Draw the channel diagram.
                2.     If the source has equally likely outputs, compute the probabilities associated with
                       the channel outputs for P = 0.2.                                          .Page No. 3-50
Soln. :
Part I :
1.     The given matrix shows that the number of
       inputs is two i.e. x1 and x2 whereas the
       number of outputs is three i.e. y1 , y2 and
       y3.
2.     This channel has two inputs x1 = 0 and x2 =
       1 and three outputs y1 = 0, y2 = e and y3 = 1
       as shown in Fig. P. 3.11.11.
                                                                Fig. P. 3.11.11 : The channel diagram

       The channel diagram is as shown in Fig. P. 3.11.11. This type of channel is called a “binary
erasure channel”. The output y2 = e indicates an erasure, meaning that this output is in doubt and
should be erased.

Part II :    Given that the sources x1 and x2 are equally likely

                              ∴     p ( x1 ) = p ( x2 ) = 0.5
       It is also given that p = 0.2.
                                ∴     p (Y) = p (X) · [ P (Y/X) ]
                                                              [ 0.8    0.2    0   ]
                                             = [ 0.5   0.5 ] ·[ 0      0.2    0.8 ]
                               ∴      p (Y) = [ 0.4    0.2    0.4 ]
                   That means      p (y1) = 0.4, p ( y2 ) = 0.2 and p ( y3 ) = 0.4
       These are the required values of probabilities associated with the channel outputs for p = 0.2.
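
       A one-line matrix product gives the same output probabilities; the sketch below (Python, purely
as a check) uses the erasure-channel matrix written above :

import numpy as np

# Ex. 3.11.11 : output probabilities of the binary erasure channel for P = 0.2
P = 0.2
P_yx = np.array([[1 - P, P, 0.0],
                 [0.0,   P, 1 - P]])    # rows: x1, x2 ; columns: y1, y2 (= e), y3
px = np.array([0.5, 0.5])
print(px @ P_yx)                        # [0.4, 0.2, 0.4]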

Ex. 3.11.13 :   Find the mutual information and channel capacity of the channel as shown in
                Fig. P. 3.11.13(a). Given that P ( x1 ) = 0.6 and P ( x2 ) = 0.4.                .Page No. 3-57.




                                              Fig. P. 3.11.13(a)
Soln. :
Given that : p ( x1 ) = 0.6, p ( x2 ) = 0.4
      The conditional probabilities are,
            p ( y1/x1 ) = 0.8, p ( y2/x1 ) = 0.2
            p ( y1/x2 ) = 0.3 and p ( y2/x2 ) = 0.7
        The mutual information can be obtained by
referring to Fig. P. 3.11.13(b).
                                                                                    Fig. P. 3.11.13(b)
       As already derived, the mutual information is given by,
            I (X ; Y) = Ω [ β + (1 – α – β) p ] – p Ω (α) – (1 – p) Ω (β)                                  ...(1)
       Where Ω is the horseshoe (binary entropy) function, given by,
                Ω (p) = p log2 (1/p) + (1 – p) log2 [1/(1 – p)]                                              ...(2)


       Substituting the values we get,
           I (X ; Y) = Ω [ 0.3 + (1 – 0.2 – 0.3) 0.6 ] – 0.6 Ω (0.2) – 0.4 Ω (0.3)
       ∴ I (X ; Y) = Ω (0.6) – 0.6 Ω (0.2) – 0.4 Ω (0.3)                                                 …(3)
       Using the Equation (2) we get,
           I (X ; Y) = [ 0.6 log2 (1/0.6) + 0.4 log2 (1/0.4) ] – 0.6 [0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ]
                         – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7)]
       ∴   I (X ; Y) = 0.1868 bits.                                                                    ...Ans.
Channel capacity (C) :
       For the asymmetric binary channel,
     C = 1 – p Ω (α) – (1 – p) Ω (β)
       = 1 – 0.6 Ω (0.2) – 0.4 Ω (0.3)
       = 1 – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ] – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]
       = 1 – 0.433 – 0.352
     C = 0.214 bits                                                                                    ...Ans.
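
       The Ω-function calculation can be reproduced with a few lines of Python (a numerical check of
Equations (1) to (3); the small difference from the hand result is only rounding) :

import math

# Ex. 3.11.13 : mutual information of the binary channel via the horseshoe function
def omega(p):
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

p, alpha, beta = 0.6, 0.2, 0.3          # p = P(x1), alpha = P(y2/x1), beta = P(y1/x2)
I = omega(beta + (1 - alpha - beta) * p) - p * omega(alpha) - (1 - p) * omega(beta)
C = 1 - p * omega(alpha) - (1 - p) * omega(beta)
print(round(I, 4), round(C, 4))         # ≈ 0.185 bits and ≈ 0.214 bits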

Section 3.12

Ex. 3.12.3 :    In a facsimile transmission of a picture, there are about 2.25 × 10⁶ picture elements per
                frame. For good reproduction, twelve brightness levels are necessary. Assuming all these
                levels to be equiprobable, calculate the channel bandwidth required to transmit one
                picture in every three minutes for a signal to noise power ratio of 30 dB. If the SNR
                requirement increases to 40 dB, calculate the new bandwidth. Explain the trade-off
                between bandwidth and SNR, by comparing the two results.                       .Page No. 3-67
Soln. :
Given :        Number of picture elements per frame = 2.25 × 106
               Number of brightness levels = 12 = M
               All the twelve brightness levels are equiprobable.
               Number of pictures per minute = 1/3
               SNR1 = 30 dB                         SNR2 = 40 dB

1.     Calculate the information rate :
       The number of picture elements per frame is 2.25 × 10⁶ and these elements can be of any
brightness out of the possible 12 brightness levels.
       The information rate (R) = No. of messages/sec. × Average information per message.
                                        R = r×H                                           ...(1)
                               Where r = (2.25 × 10⁶) / (3 × 60) = 12,500 elements/sec.    ...(2)
                                 and H = log2 M = log2 12 ...as all brightness levels are
                                               equiprobable.                              ...(3)
                                ∴       R = 12,500 × log2 12
                                 ∴      R = 44.812 k bits/sec.                            ...(4)


2.       Calculate the bandwidth B :

         Shannon's capacity theorem states that,
         R ≤ C   where          C = B log2 [ 1 + (S/N) ]                                                       ...(5)
         Substituting S/N = 30 dB = 1000 we get,
     ∴    44.812 × 10³ ≤ B log2 [1 + 1000]
               ∴        B ≥ 44.812 × 10³ / log2 (1001)
               ∴        B ≥ 4.4959 kHz.                                                                      ...Ans.

3.       BW for S/N = 40 dB :

         For signal to noise ratio of 40 dB or 10,000 let us calculate new value of bandwidth.

                            ∴     44.812 × 10³ ≤ B log2 [1 + 10000]

                                      ∴       B ≥ 44.812 × 10³ / log2 (10001)

                                       ∴      B ≥ 3.372 kHz.                                               ...Ans.

       Trade-off between bandwidth and SNR : As the signal to noise ratio is increased from
30 dB to 40 dB, the bandwidth required to transmit the same information rate decreases (from about
4.5 kHz to about 3.37 kHz). Bandwidth can therefore be traded against SNR and vice versa.
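
       The bandwidth figures for both SNR values follow directly from B = R / log2 (1 + S/N); a short
Python check :

import math

# Ex. 3.12.3 : minimum channel bandwidth for 30 dB and 40 dB SNR
R = 2.25e6 * math.log2(12) / (3 * 60)        # information rate, ~44812 bits/sec
for snr_db in (30, 40):
    snr = 10 ** (snr_db / 10)
    B = R / math.log2(1 + snr)
    print(snr_db, round(B))                  # 30 dB -> ~4496 Hz, 40 dB -> ~3372 Hz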

Ex. 3.12.4 :       An analog signal having bandwidth of 4 kHz is sampled at 1.25 times the Nyquist rate,
                   with each sample quantised into one of 256 equally likely levels.
                   1.       What is information rate of this source ?
                   2.       Can the output of this source be transmitted without error over an AWGN channel
                             with bandwidth of 10 kHz and SNR of 20 dB ?
                    3.       Find the SNR required for error-free transmission in part (ii).
                    4.       Find the bandwidth required for an AWGN channel for error-free transmission of this
                             source if the SNR happens to be 20 dB.                             .Page No. 3-68
Soln. :
Given :        fm = 4 kHz., fs = 1.25 × 2 × fm = 1.25 × 2 × 4 kHz = 10 kHz.
                     Quantization levels Q = 256 (equally likely).
1.       Information rate (R) :
                                              R = r×H                                                       ...(1)
                                    Where      r = Number of messages/sec.
                                                 = Number of samples/sec. = 10 kHz.
                                      and     H = log2 256           ...as all the levels are equally likely
                                      ∴       R = 10 × 10³ × log2 256 = 10 × 10³ × 8
                                      ∴       R = 80 k bits/sec.                                         ...Ans.
2.       Channel capacity (C) :
      In order to answer the question asked in (ii) we have to calculate the channel capacity C.
Given :

                                          B = 10 kHz and S/N = 20 dB = 100
                                    ∴     C = B log2 [ 1 + (S/N) ] = 10 × 10³ log2 [101].
                                    ∴     C = 66.582 k bits/sec.

         For error-free transmission, it is necessary that R ≤ C. But here R = 80 kb/s and C = 66.582 kb/s,
i.e. R > C, hence error-free transmission is not possible.

3.    S/N ratio for error-free transmission in part (2) :
     Substituting      C = R = 80 kb/s we get,
               80 × 10³ = B log2 [ 1 + (S/N) ]
     ∴         80 × 10³ = 10 × 10³ log2 [1 + (S/N)]
             ∴        8 = log2 [1+ (S/N)]
           ∴       256 = 1+ (S/N)
            ∴      S/N = 255 or 24.06 dB                                                                ...Ans.
         This is the required value of the signal to noise ratio to ensure the error free transmission.
4.    BW required for the error-free transmission :
Given :
                                          C = 80 kb/s,                  S/N = 20 dB = 100
                                   ∴       C = B log2 [ 1 + (S/N) ]
                                   ∴      80 × 10³ = B log2 [1 + 100]
                                   ∴      B ≥ 12 kHz.                                                    ...Ans.

Ex. 3.12.5 :      A channel has a bandwidth of 5 kHz and a signal to noise power ratio 63. Determine the
                  bandwidth needed if the S/N power ratio is reduced to 31. What will be the signal power
                  required if the channel bandwidth is reduced to 3 kHz ?                       .Page No. 3-68
Soln. :
1.    To determine the channel capacity :
        It is given that B = 5 kHz and S/N = 63. Hence using the Shannon-Hartley theorem the channel
capacity is given by,
                                      C = B log2 [ 1 + (S/N) ] = 5 × 10³ log2 [1 + 63]
                                 ∴    C = 30 × 10³ bits/sec                                   ...(1)
2.    To determine the new bandwidth :
        The new value of S/N is 31. Assuming the channel capacity “C” to be constant we can write,
                                30 × 10³ = B log2 [1 + 31]
                                ∴     B = 30 × 10³ / log2 (32) = 30 × 10³ / 5 = 6 kHz         ...(2)

3.    To determine the new signal power :
       Given that the new bandwidth is 3 kHz. We know that noise power N = N0 B.
       Let the noise power corresponding to a bandwidth of 6 kHz be N1 = 6 N0 and the noise power
corresponding to the new bandwidth of 3 kHz be N2 = 3 N0.
                                    ∴     N1 / N2 = 6 N0 / 3 N0 = 2 ,  i.e.  N2 = N1 / 2          ...(3)
            The old signal to noise ratio S1 / N1 = 31
                                ∴     S1 = 31 N1                                                ...(4)
      The new signal to noise ratio is S2 / N2. We do not know its value, hence let us find it out.
                               30 × 10³ = 3 × 10³ log2 [ 1 + (S2 / N2) ]
                                ∴        S2 / N2 = 1023                                          ...(5)
                             ∴        S2 = 1023 N2
      But from Equation (3), N2 = N1 / 2 ; substituting we get,
                             ∴        S2 = 1023 N1 / 2 = 511.5 N1                                ...(6)
     Dividing Equation (6) by Equation (4) we get,
                              S2 / S1 = 511.5 N1 / (31 N1) = 16.5
                             ∴        S2 = 16.5 S1                                           ...Ans.
     Thus if the bandwidth is reduced by 50% then the signal power must be increased 16.5 times i.e.
1650% to get the same capacity.
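
       The whole bandwidth-versus-power trade-off of this example can be reproduced in a few lines of
Python (a sketch of the arithmetic above) :

import math

# Ex. 3.12.5 : keep C fixed and trade bandwidth against signal power
C = 5e3 * math.log2(1 + 63)            # 30 kb/s with B = 5 kHz, S/N = 63
B2 = C / math.log2(1 + 31)             # bandwidth needed when S/N = 31     -> 6 kHz
snr3 = 2 ** (C / 3e3) - 1              # S/N needed when B = 3 kHz          -> 1023
# With N = N0*B, going from 6 kHz to 3 kHz halves the noise power, so
# S2 / S1 = (1023 / 2) / 31 = 16.5
print(C, B2, snr3, (snr3 / 2) / 31)    # 30000, 6000, 1023, 16.5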

Ex. 3.12.6 :   A 2 kHz channel has signal to noise ratio of 24 dB :
               (a)   Calculate maximum capacity of this channel.
               (b)   Assuming constant transmitting power, calculate maximum capacity when channel
                     bandwidth is : 1. halved 2. reduced to a quarter of its original value.
                                                                                       .Page No. 3-68

Soln. :
Data :      B = 2 kHz and (S/N) = 24 dB.

      The SNR should be converted from dB to power ratio.

                                  ∴ 24 = 10 log10 (S/N)

                                    ∴       S/N = antilog (2.4) ≈ 251                          ...(1)

(a)   To determine the channel capacity :

                   C = B log2 [ 1 + (S/N) ] = 2 × 10³ log2 [1 + 251] = 2 × 10³ × 7.977

               ∴   C = 15.95 × 10³ bits/sec                                                 ...Ans.
(b)   1. Value of C when B is halved :
      The new bandwidth B2 = 1 kHz, let the old bandwidth be denoted by B1 = 2 kHz.
          We know that the noise power N = N0 B
       ∴ Noise power with old bandwidth = N1 = N0 B1                                         ...(2)
      and Noise power with new bandwidth = N2 = N0 B2                                        ...(3)
                                      ∴       N2 / N1 = N0 B2 / N0 B1 = B2 / B1 = 1/2
                                     ∴        N2 = N1 / 2                                      ...(4)
      As the signal power remains constant, the SNR with the new bandwidth is,
                                         S / N2 = S / (N1 / 2) = 2 (S / N1)
      But we know that S / N1 = 251     ...See Equation (1)
                               ∴         S / N2 = 2 × 251 = 502                                ...(5)
      Hence the new channel capacity is given by,
                                     C = B2 log2 [ 1 + (S/N2) ] = 1 × 10³ log2 (503)
                                         = 1 × 10³ × 8.97
                                ∴ C = 8.97 × 10³ bits/sec                                    ...Ans.

2.    Value of C when B is reduced to 1/4 of original value :
      The Equation (4) gets modified to,
                                           N3 = N1 / 4 ,  i.e.  S / N3 = 4 (S / N1)             ...(6)
                                   ∴       S / N3 = 4 × 251 = 1004                              ...(7)
      Hence new channel capacity is given by,
                                       C = B3 log2 [ 1 + (S/N3) ] = 500 log2 (1005)
                              ∴        C = 4.99 × 10³ bits/sec ...Ans.
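
       Since the transmitted power is constant, S/N scales inversely with B; the short Python sketch
below reproduces all three capacities :

import math

# Ex. 3.12.6 : capacity when B shrinks but transmit power stays constant (N = N0*B)
B1, snr1 = 2e3, 251                              # 2 kHz, 24 dB ≈ 251
for B in (B1, B1 / 2, B1 / 4):
    snr = snr1 * (B1 / B)                        # constant S, noise proportional to B
    print(B, round(B * math.log2(1 + snr)))      # ~15955, ~8974, ~4987 bits/sec (≈ 15.95, 8.97, 4.99 kb/s)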


Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfErwinPantujan2
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
Barangay Council for the Protection of Children (BCPC) Orientation.pptx
Barangay Council for the Protection of Children (BCPC) Orientation.pptxBarangay Council for the Protection of Children (BCPC) Orientation.pptx
Barangay Council for the Protection of Children (BCPC) Orientation.pptxCarlos105
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptxmary850239
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 

Recently uploaded (20)

ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...
 
ICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdfICS2208 Lecture6 Notes for SL spaces.pdf
ICS2208 Lecture6 Notes for SL spaces.pdf
 
4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx4.16.24 Poverty and Precarity--Desmond.pptx
4.16.24 Poverty and Precarity--Desmond.pptx
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
AUDIENCE THEORY -CULTIVATION THEORY - GERBNER.pptx
AUDIENCE THEORY -CULTIVATION THEORY -  GERBNER.pptxAUDIENCE THEORY -CULTIVATION THEORY -  GERBNER.pptx
AUDIENCE THEORY -CULTIVATION THEORY - GERBNER.pptx
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdfVirtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
Virtual-Orientation-on-the-Administration-of-NATG12-NATG6-and-ELLNA.pdf
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
Barangay Council for the Protection of Children (BCPC) Orientation.pptx
Barangay Council for the Protection of Children (BCPC) Orientation.pptxBarangay Council for the Protection of Children (BCPC) Orientation.pptx
Barangay Council for the Protection of Children (BCPC) Orientation.pptx
 
4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx4.18.24 Movement Legacies, Reflection, and Review.pptx
4.18.24 Movement Legacies, Reflection, and Review.pptx
 
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptxYOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
 
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptxYOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 

Chap 3

(b)    To find the message rate (r) :
       The minimum sampling rate is the Nyquist rate. Therefore,
              fs = 2 × fm = 2 × 3 kHz = 6 kHz                                                        ...(3)
       Hence there are 6000 samples/sec. As each sample is converted to one of the 16 levels, there are
6000 messages/sec.
       ∴      Message rate       r = 6000 messages/sec                                               ...(4)

(c)    To find the information rate (R) :
              R = r × H = 6000 × 3.85 = 23100 bits/sec                                            ...Ans.

Ex. 3.5.5 :     A message source generates one of four messages randomly every microsecond. The
                probabilities of these messages are 0.4, 0.3, 0.2 and 0.1. Each emitted message is
                independent of the other messages in the sequence.
                1.   What is the source entropy ?
                2.   What is the rate of information generated by this source in bits per second ?
                                                                                           .Page No. 3-15
Soln. :
It is given that,
1.     Number of messages, M = 4 ; let us denote them by m1, m2, m3 and m4.
2.     Their probabilities are p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.
3.     One message is transmitted per microsecond.
       ∴      Message transmission rate     r = 1 / (1 × 10⁻⁶ sec) = 1 × 10⁶ messages/sec.

(a)    To obtain the source entropy (H) :
              H = Σ pk log2 ( 1/pk )
       ∴      H = p1 log2 ( 1/p1 ) + p2 log2 ( 1/p2 ) + p3 log2 ( 1/p3 ) + p4 log2 ( 1/p4 )
                = 0.4 log2 ( 1/0.4 ) + 0.3 log2 ( 1/0.3 ) + 0.2 log2 ( 1/0.2 ) + 0.1 log2 ( 1/0.1 )
       ∴      H = 1.846 bits/message                                                              ...Ans.

(b)    To obtain the information rate (R) :
              R = H × r = 1.846 × 1 × 10⁶ = 1.846 Mbits/sec                                       ...Ans.
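The entropy and information-rate arithmetic of Ex. 3.5.4 and Ex. 3.5.5 can be checked numerically. The short Python sketch below is only a verification aid and is not part of the original solutions; the probability lists and message rates are the ones used in the two examples above, and the helper name entropy() is my own.

    import math

    def entropy(probs):
        # H = sum pk * log2(1/pk), in bits per message; zero-probability terms contribute nothing
        return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

    # Ex. 3.5.4 : 16 quantization levels, sampled at the Nyquist rate 2 * fm
    probs_pcm = [0.1] * 4 + [0.05] * 4 + [0.075] * 4 + [0.025] * 4
    H_pcm = entropy(probs_pcm)            # ~ 3.85 bits/message
    r_pcm = 2 * 3000                      # 6000 messages/sec
    print(H_pcm, r_pcm * H_pcm)           # information rate ~ 23100 bits/sec

    # Ex. 3.5.5 : four messages, one emitted every microsecond
    H_src = entropy([0.4, 0.3, 0.2, 0.1]) # ~ 1.846 bits/message
    print(H_src, H_src * 1e6)             # ~ 1.846 Mbits/sec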
Ex. 3.5.6 :     A source consists of 4 letters A, B, C and D. For transmission each letter is coded into
                a sequence of two binary pulses. A is represented by 00, B by 01, C by 10 and D by 11.
                The probability of occurrence of each letter is P (A) = 1/5, P (B) = 1/4, P (C) = 1/4 and
                P (D) = 3/10. Determine the entropy of the source and the average rate of transmission
                of information.                                                            .Page No. 3-15
Soln. :
       The given data can be summarised as shown in the following table :

              Message        Probability        Code
                 A               1/5             00
                 B               1/4             01
                 C               1/4             10
                 D               3/10            11

Assumption : Let us assume that the message transmission rate is r = 4000 messages/sec.

(a)    To determine the source entropy :
              H = (1/5) log2 (5) + (1/4) log2 (4) + (1/4) log2 (4) + (3/10) log2 (10/3)
       ∴      H = 1.9855 bits/message                                                             ...Ans.

(b)    To determine the information rate :
              R = r × H = [4000 messages/sec] × [1.9855 bits/message]
       ∴      R ≈ 7942 bits/sec                                                                   ...Ans.

(c)    Maximum possible information rate :
              Number of messages/sec = 4000
       But here the number of binary digits/message = 2.
       ∴      Number of binary digits (binits)/sec = 4000 × 2 = 8000 binits/sec.
       We know that each binit can convey a maximum average information of 1 bit.
       ∴      Hmax = 1 bit/binit
       ∴      Maximum rate of information transmission = [8000 binits/sec] × [1 bit/binit]
                                                       = 8000 bits/sec                           ...Ans.

Section 3.6 :

Ex. 3.6.3 :     A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities
                p (x1) = 0.4, p (x2) = 0.19, p (x3) = 0.16, p (x4) = 0.14 and p (x5) = 0.11. Construct
                the Shannon-Fano code for this source. Calculate the average code word length and the
                coding efficiency of the source.                                           .Page No. 3-21
Soln. :
       Follow the steps given below to obtain the Shannon-Fano code.
Step 1 :   List the source symbols in the order of decreasing probability.
Step 2 :   Partition the set into two sets that are as close to being equiprobable as possible, and
           assign 0 to the upper set and 1 to the lower set.
Step 3 :   Continue this process, each time partitioning the sets with as nearly equal probabilities as
           possible, until further partitioning is not possible.
(a)    The Shannon-Fano codes are as given in Table P. 3.6.3.

                           Table P. 3.6.3 : Shannon-Fano codes

              Symbol    Probability    Step 1    Step 2    Step 3    Code word
                x1         0.40          0         0        Stop        00
                x2         0.19          0         1        Stop        01
                x3         0.16          1         0        Stop        10
                x4         0.14          1         1          0         110
                x5         0.11          1         1          1         111

(b)    Average code word length (L) :
       The average code word length is given by :
              L = Σ pk × (length of the k-th code word in bits)
                = ( 0.4 × 2 ) + ( 0.19 × 2 ) + ( 0.16 × 2 ) + ( 0.14 × 3 ) + ( 0.11 × 3 )
                = 2.25 bits/message

(c)    Entropy of the source (H) :
              H = Σ pk log2 ( 1/pk )
                = 0.4 log2 ( 1/0.4 ) + 0.19 log2 ( 1/0.19 ) + 0.16 log2 ( 1/0.16 )
                     + 0.14 log2 ( 1/0.14 ) + 0.11 log2 ( 1/0.11 )
                = 2.15 bits/message

(d)    Coding efficiency (η) :
              η = H / L = 2.15 / 2.25 ≈ 0.956, i.e. about 95.6 %                                  ...Ans.
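The partitioning procedure of Steps 1 to 3 can be expressed as a short recursive routine. The Python sketch below is illustrative only; the function name and the split rule (choose the split point that makes the two probability totals as nearly equal as possible) are my own phrasing of Step 2. For the probabilities of Ex. 3.6.3 it reproduces the code words of Table P. 3.6.3.

    import math

    def shannon_fano(symbols):
        """symbols: list of (name, prob) sorted by decreasing probability.
        Returns a dict name -> binary code string."""
        codes = {name: "" for name, _ in symbols}

        def split(group):
            if len(group) <= 1:
                return
            total = sum(p for _, p in group)
            # choose the split that makes the two sets as nearly equiprobable as possible
            running, best_i, best_diff = 0.0, 1, float("inf")
            for i in range(1, len(group)):
                running += group[i - 1][1]
                diff = abs(2 * running - total)
                if diff < best_diff:
                    best_diff, best_i = diff, i
            upper, lower = group[:best_i], group[best_i:]
            for name, _ in upper:
                codes[name] += "0"     # 0 to the upper set
            for name, _ in lower:
                codes[name] += "1"     # 1 to the lower set
            split(upper)
            split(lower)

        split(symbols)
        return codes

    src = [("x1", 0.4), ("x2", 0.19), ("x3", 0.16), ("x4", 0.14), ("x5", 0.11)]
    codes = shannon_fano(src)                        # {'x1': '00', 'x2': '01', 'x3': '10', ...}
    L = sum(p * len(codes[s]) for s, p in src)       # 2.25 bits/message
    H = sum(p * math.log2(1 / p) for _, p in src)    # ~ 2.15 bits/message
    print(codes, L, H, H / L)                        # efficiency ~ 95.6 %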
Ex. 3.6.6 :     A discrete memoryless source has an alphabet of seven symbols with probabilities for its
                output as described in Table P. 3.6.6(a).                                  .Page No. 3-25

                                   Table P. 3.6.6(a)

              Symbol        S0      S1      S2       S3       S4       S5       S6
              Probability   0.25    0.25    0.125    0.125    0.125    0.0625   0.0625

                Compute the Huffman code for this source, moving the “combined” symbol as high as
                possible. Explain why the computed source code has an efficiency of 100 percent.
Soln. :
       The Huffman code tree for this source alphabet is shown in Fig. P. 3.6.6.

                              Fig. P. 3.6.6 : Huffman code

       Follow the path indicated by the dotted line to obtain the codeword for symbol S0 as 10.
Similarly we can obtain the codewords for the remaining symbols. These are listed in Table P. 3.6.6(b).

                                   Table P. 3.6.6(b)

              Symbol    Probability    Codeword    Codeword length
                S0         0.25           10            2 bits
                S1         0.25           11            2 bits
                S2         0.125          001           3 bits
                S3         0.125          010           3 bits
                S4         0.125          011           3 bits
                S5         0.0625         0000          4 bits
                S6         0.0625         0001          4 bits

To compute the efficiency :
1.     The average code length :
              L = Σ pk × (length of symbol k in bits)
       From Table P. 3.6.6(b),
              L = ( 0.25 × 2 ) + ( 0.25 × 2 ) + ( 0.125 × 3 ) × 3 + ( 0.0625 × 4 ) × 2
       ∴      L = 2.625 bits/symbol
2.     The average information per message :
              H = Σ pk log2 ( 1/pk )
       ∴      H = [ 0.25 log2 ( 4 ) ] × 2 + [ 0.125 log2 ( 8 ) ] × 3 + [ 0.0625 log2 ( 16 ) ] × 2
                = [ 0.25 × 2 × 2 ] + [ 0.125 × 3 × 3 ] + [ 0.0625 × 4 × 2 ]
       ∴      H = 2.625 bits/message
3.     Code efficiency :
              η = ( H / L ) × 100 = ( 2.625 / 2.625 ) × 100
       ∴      η = 100 %
Note : As the average information per symbol (H) is equal to the average code length (L), the code
efficiency is 100 %.
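The same efficiency figure can be verified with a small Huffman routine. The Python sketch below (not from the text) tracks only the code-word lengths, which is all the efficiency calculation needs; the actual code words such as S0 = 10 depend on how high the combined symbol is moved in the tree, and that placement is not reproduced here.

    import heapq
    import math

    def huffman_lengths(probs):
        """Return Huffman code-word lengths for each symbol index, built by
        repeatedly merging the two least-probable entries."""
        heap = [(p, [i]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        lengths = [0] * len(probs)
        while len(heap) > 1:
            p1, s1 = heapq.heappop(heap)
            p2, s2 = heapq.heappop(heap)
            for i in s1 + s2:          # every symbol inside a merged group gains one code bit
                lengths[i] += 1
            heapq.heappush(heap, (p1 + p2, s1 + s2))
        return lengths

    probs = [0.25, 0.25, 0.125, 0.125, 0.125, 0.0625, 0.0625]   # S0 ... S6
    lengths = huffman_lengths(probs)                            # [2, 2, 3, 3, 3, 4, 4]
    L = sum(p * n for p, n in zip(probs, lengths))              # 2.625 bits/symbol
    H = sum(p * math.log2(1 / p) for p in probs)                # 2.625 bits/symbol
    print(lengths, L, H, H / L)                                 # efficiency = 1.0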
Section 3.11 :

Ex. 3.11.5 :    Calculate the differential entropy H (X) of the uniformly distributed random variable X
                with probability density function
                        fX (x) = 1/a      0 ≤ x ≤ a
                               = 0        elsewhere
                for     1. a = 1      2. a = 2      3. a = 1/2.                            .Page No. 3-49
Soln. :
       The uniform PDF of the random variable X is as shown in Fig. P. 3.11.5.

                                   Fig. P. 3.11.5

1.     The average amount of information per sample value of x (t) is measured by,
              H (X) = ∫ fX (x) · log2 [ 1/fX (x) ] dx   bits/sample                                  …(1)
       The entropy H (X) defined by the expression above is called the differential entropy of X.
2.     Substituting the value of fX (x) into the expression for H (X) we get,
              H (X) = ∫₀ᵃ (1/a) · log2 (a) dx = log2 (a)                                             ...(2)
       (a)    Substitute a = 1 :     H (X) = log2 (1) = 0                                         ...Ans.
       (b)    Substitute a = 2 :     H (X) = log2 (2) = 1 bit                                     ...Ans.
       (c)    Substitute a = 1/2 :   H (X) = log2 (1/2) = – 1 bit                                 ...Ans.
       These are the values of the differential entropy for the various values of a.
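Equation (2) reduces the integral to log2 (a). The short Python sketch below (not part of the solution) checks this with a crude Riemann sum; the step count N is an arbitrary choice of mine.

    import math

    def uniform_diff_entropy(a, N=1000):
        """Numerically evaluate H(X) = integral of fX(x) log2(1/fX(x)) dx for fX(x) = 1/a on [0, a]."""
        fx = 1.0 / a
        dx = a / N
        return sum(fx * math.log2(1.0 / fx) * dx for _ in range(N))

    for a in (1, 2, 0.5):
        print(a, uniform_diff_entropy(a), math.log2(a))   # 0, 1 and -1 bits respectively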
Ex. 3.11.6 :    A discrete source transmits messages x1, x2, x3 with probabilities p (x1) = 0.3,
                p (x2) = 0.25 and p (x3) = 0.45. The source is connected to the channel whose
                conditional probability matrix is
                                       y1     y2     y3
                              x1       0.9    0.1    0
                P (Y/X) =     x2       0      0.8    0.2
                              x3       0      0.3    0.7
                Calculate all the entropies and the mutual information for this channel.   .Page No. 3-49
Soln. :
Steps to be followed :
Step 1 :   Obtain the joint probability matrix P (X, Y).
Step 2 :   Obtain the probabilities p (y1), p (y2), p (y3).
Step 3 :   Obtain the conditional probability matrix P (X/Y).
Step 4 :   Obtain the marginal entropies H (X) and H (Y).
Step 5 :   Calculate the conditional entropy H (X/Y).
Step 6 :   Calculate the joint entropy H (X, Y).
Step 7 :   Calculate the mutual information I (X ; Y).

Step 1 : Obtain the joint probability matrix P (X, Y) :
       The given matrix P (Y/X) is the conditional probability matrix. The joint probability matrix is
obtained as P (X, Y) = P (Y/X) · P (X), i.e. each row of P (Y/X) is multiplied by the corresponding
input probability p (xi) :
                                       y1      y2      y3
                              x1       0.27    0.03    0
                P (X, Y) =    x2       0       0.2     0.05                                          ...(1)
                              x3       0       0.135   0.315

Step 2 : Obtain the probabilities p (y1), p (y2) and p (y3) :
       These probabilities are obtained by adding the column entries of the P (X, Y) matrix of
Equation (1).
       ∴      p (y1) = 0.27 + 0 + 0 = 0.27
              p (y2) = 0.03 + 0.2 + 0.135 = 0.365
              p (y3) = 0 + 0.05 + 0.315 = 0.365

Step 3 : Obtain the conditional probability matrix P (X/Y) :
       The conditional probability matrix P (X/Y) is obtained by dividing the columns of the joint
probability matrix P (X, Y) of Equation (1) by p (y1), p (y2) and p (y3) respectively.
                                       y1      y2        y3
                              x1       1       0.0821    0
                P (X/Y) =     x2       0       0.5479    0.1369                                      ...(2)
                              x3       0       0.3698    0.863

Step 4 : Obtain the marginal entropies H (X) and H (Y) :
              H (X) = Σ p (xi) log2 [ 1/p (xi) ]
                    = p (x1) log2 [ 1/p (x1) ] + p (x2) log2 [ 1/p (x2) ] + p (x3) log2 [ 1/p (x3) ]
       Substituting the values of p (x1), p (x2) and p (x3) we get,
                    = 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.45 log2 (1/0.45)
                    = [ (0.3 × 1.7369) + (0.25 × 2) + (0.45 × 1.152) ]
       ∴      H (X) = [ 0.521 + 0.5 + 0.5184 ] = 1.5394 bits/message                              ...Ans.
       Similarly,
              H (Y) = p (y1) log2 [ 1/p (y1) ] + p (y2) log2 [ 1/p (y2) ] + p (y3) log2 [ 1/p (y3) ]
                    = 0.27 log2 [ 1/0.27 ] + 2 × 0.365 × log2 [ 1/0.365 ]
       ∴      H (Y) = 0.51 + 1.0614 = 1.5714 bits/message                                         ...Ans.

Step 5 : Obtain the conditional entropy H (X/Y) :
              H (X/Y) = – Σi Σj p (xi , yj) log2 p (xi/yj)
       i.e. the sum of – p (xi , yj) log2 p (xi/yj) taken over all nine (xi , yj) pairs.
       Refer to the joint and conditional probability matrices of Equations (1) and (2), reproduced in
Fig. P. 3.11.6 :

                        P (X, Y)                              P (X/Y)
                  y1      y2      y3                   y1      y2        y3
             x1   0.27    0.03    0               x1   1       0.0821    0
             x2   0       0.2     0.05            x2   0       0.5479    0.1369
             x3   0       0.135   0.315           x3   0       0.3698    0.863

                  Fig. P. 3.11.6 : Joint and conditional probability matrices

       Substituting the various values from these two matrices we get,
              H (X/Y) = – 0.27 log2 (1) – 0.03 log2 (0.0821) – 0 – 0 – 0.2 log2 (0.5479)
                           – 0.05 log2 (0.1369) – 0 – 0.135 log2 (0.3698) – 0.315 log2 (0.863)
                       = 0 + 0.108 + 0.1736 + 0.1434 + 0.1937 + 0.0669
       ∴      H (X/Y) = 0.6856 bits/message                                                       ...Ans.

Step 6 : Obtain the joint entropy H (X, Y) :
       The joint entropy H (X, Y) is given by,
              H (X, Y) = – Σi Σj p (xi , yj) · log2 p (xi , yj)
       Referring to the joint probability matrix we get,
              H (X, Y) = – [ 0.27 log2 (0.27) + 0.03 log2 (0.03) + 0.2 log2 (0.2) + 0.05 log2 (0.05)
                               + 0.135 log2 (0.135) + 0.315 log2 (0.315) ]
                       = [ 0.51 + 0.1517 + 0.4643 + 0.216 + 0.39 + 0.5249 ]
       ∴      H (X, Y) = 2.2569 bits/message                                                      ...Ans.

Step 7 : Calculate the mutual information :
       The mutual information is given by,
              I (X ; Y) = H (X) – H (X/Y) = 1.5394 – 0.6856 = 0.8538 bits                         ...Ans.
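All of the quantities computed in Steps 1 to 7 follow mechanically from P (Y/X) and the source probabilities, so they are easy to verify with a few matrix operations. Below is a minimal numpy sketch (the function name is mine, and it uses the chain rule H (X/Y) = H (X, Y) – H (Y) instead of the direct double sum); the matrix values are those reconstructed above.

    import numpy as np

    def channel_entropies(px, P_y_given_x):
        """Return H(X), H(Y), H(X,Y), H(X/Y) and I(X;Y) in bits for a discrete memoryless channel."""
        px = np.asarray(px, dtype=float)
        Pyx = np.asarray(P_y_given_x, dtype=float)
        Pxy = px[:, None] * Pyx                # joint matrix, p(x, y) = p(x) p(y/x)
        py = Pxy.sum(axis=0)                   # column sums give p(y)

        def H(p):
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        HX, HY, HXY = H(px), H(py), H(Pxy.ravel())
        HX_given_Y = HXY - HY                  # chain rule: H(X, Y) = H(Y) + H(X/Y)
        return HX, HY, HXY, HX_given_Y, HX - HX_given_Y

    # Ex. 3.11.6
    print(channel_entropies([0.3, 0.25, 0.45],
                            [[0.9, 0.1, 0.0],
                             [0.0, 0.8, 0.2],
                             [0.0, 0.3, 0.7]]))
    # -> roughly (1.54, 1.57, 2.26, 0.69, 0.85)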
Ex. 3.11.7 :    For the given channel matrix, find the mutual information. Given that p (x1) = 0.6,
                p (x2) = 0.3 and p (x3) = 0.1.                                             .Page No. 3-50
                                       y1     y2     y3
                              x1       0.5    0.5    0
                P (Y/X) =     x2       0.5    0      0.5
                              x3       0      0.5    0.5
Soln. :
Steps to be followed :
Step 1 :   Obtain the joint probability matrix P (X, Y).
Step 2 :   Calculate the probabilities p (y1), p (y2), p (y3).
Step 3 :   Obtain the conditional probability matrix P (X/Y).
Step 4 :   Calculate the marginal entropy H (X).
Step 5 :   Calculate the conditional entropy H (X/Y).
Step 6 :   Find the mutual information.

Step 1 : Obtain the joint probability matrix P (X, Y) :
       The joint probability matrix is obtained as P (X, Y) = P (Y/X) · P (X), so multiply the rows of
the P (Y/X) matrix by p (x1), p (x2) and p (x3) respectively to get,
                                       y1          y2          y3
                              x1       0.5 × 0.6   0.5 × 0.6   0
                P (X, Y) =    x2       0.5 × 0.3   0           0.5 × 0.3
                              x3       0           0.5 × 0.1   0.5 × 0.1

                                       y1      y2      y3
                              x1       0.3     0.3     0
       ∴        P (X, Y) =    x2       0.15    0       0.15                                          ...(1)
                              x3       0       0.05    0.05

Step 2 : Obtain the probabilities p (y1), p (y2), p (y3) :
       These probabilities are obtained by adding the column entries of the P (X, Y) matrix of
Equation (1).
       ∴      p (y1) = 0.3 + 0.15 + 0 = 0.45
              p (y2) = 0.3 + 0 + 0.05 = 0.35
              p (y3) = 0 + 0.15 + 0.05 = 0.20

Step 3 : Obtain the conditional probability matrix P (X/Y) :
       The conditional probability matrix P (X/Y) is obtained by dividing the columns of the joint
probability matrix P (X, Y) of Equation (1) by p (y1), p (y2) and p (y3) respectively.
                                       y1            y2            y3
                              x1       0.3 / 0.45    0.3 / 0.35    0
                P (X/Y) =     x2       0.15 / 0.45   0             0.15 / 0.2
                              x3       0             0.05 / 0.35   0.05 / 0.2

                                       y1       y2       y3
                              x1       0.667    0.857    0
       ∴        P (X/Y) =     x2       0.333    0        0.75                                        ...(2)
                              x3       0        0.143    0.25

Step 4 : Calculate the marginal entropy H (X) :
              H (X) = – Σ p (xi) log2 p (xi)
                    = – p (x1) log2 p (x1) – p (x2) log2 p (x2) – p (x3) log2 p (x3)
                    = – 0.6 log2 (0.6) – 0.3 log2 (0.3) – 0.1 log2 (0.1)
                    = 0.4421 + 0.5210 + 0.3321
       ∴      H (X) = 1.2952 bits/message

Step 5 : Obtain the conditional entropy H (X/Y) :
              H (X/Y) = – Σi Σj p (xi , yj) log2 p (xi/yj)
       Refer to the conditional and joint probability matrices of Equations (2) and (1), reproduced in
Fig. P. 3.11.7 :

                        P (X/Y)                               P (X, Y)
                  y1       y2       y3                 y1      y2      y3
             x1   0.667    0.857    0             x1   0.3     0.3     0
             x2   0.333    0        0.75          x2   0.15    0       0.15
             x3   0        0.143    0.25          x3   0       0.05    0.05

                  Fig. P. 3.11.7 : Conditional and joint probability matrices

       Substituting the various values from these two matrices we get,
              H (X/Y) = – 0.3 log2 (0.667) – 0.3 log2 (0.857) – 0 – 0.15 log2 (0.333) – 0
                           – 0.15 log2 (0.75)
                           – 0 – 0.05 log2 (0.143) – 0.05 log2 (0.25)
       ∴      H (X/Y) = 0.1752 + 0.06678 + 0.2379 + 0.06225 + 0.1402 + 0.1
       ∴      H (X/Y) = 0.78233 bits/message

Step 6 : Mutual information :
              I (X ; Y) = H (X) – H (X/Y) = 1.2952 – 0.78233 = 0.51287 bits                       ...Ans.

Ex. 3.11.8 :    State the joint and conditional entropy. For a signal which is known to have a uniform
                density function in the range 0 ≤ x ≤ 5, find the entropy H (X). If the same signal is
                amplified eight times, then determine H (X).                               .Page No. 3-50
Soln. :
       For the definitions of joint and conditional entropy refer to sections 3.10.1 and 3.10.2.
       The uniform PDF of the random variable X is as shown in Fig. P. 3.11.8.

                                   Fig. P. 3.11.8

1.     The differential entropy H (X) of the given random variable X is given by,
              H (X) = ∫ fX (x) log2 [ 1/fX (x) ] dx   bits/sample.
2.     Let us define the PDF fX (x). It is given that fX (x) is uniform in the range 0 ≤ x ≤ 5.
       ∴      Let   fX (x) = k     .... 0 ≤ x ≤ 5
                           = 0     .... elsewhere
       But the area under fX (x) is always 1.
       ∴      ∫ fX (x) dx = 1    ∴   ∫₀⁵ k dx = 1    ∴   k = 1/5
       Hence the PDF of X is given by,
              fX (x) = 1/5   .... 0 ≤ x ≤ 5
                     = 0     .... elsewhere
3.     Substituting the value of fX (x) we get,
              H (X) = ∫₀⁵ (1/5) log2 (5) dx = log2 (5)
       ∴      H (X) = 2.322 bits/message                                                          ...Ans.
4.     If the same signal is amplified eight times, the amplified signal is uniform over 0 ≤ x ≤ 40, so
       that fX (x) = 1/40 and
              H (X) = log2 (40) = log2 (8) + log2 (5) = 3 + 2.322 = 5.322 bits/message            ...Ans.
       Amplification by a factor of 8 therefore increases the differential entropy by log2 (8) = 3 bits.

Ex. 3.11.9 :    Two binary symmetric channels are connected in cascade as shown in Fig. P. 3.11.9.
                1.   Find the channel matrix of the resultant channel.
                2.   Find p (z1) and p (z2) if p (x1) = 0.6 and p (x2) = 0.4.              .Page No. 3-50
                       Fig. P. 3.11.9 : Cascaded BSCs for Ex. 3.11.9

Soln. :
Steps to be followed :
Step 1 :   Write the channel matrices of the individual channels as P [Y/X] for the first channel and
           P [Z/Y] for the second channel.
Step 2 :   Obtain the channel matrix of the cascaded channel as P [Z/X] = P [Y/X] · P [Z/Y].
Step 3 :   Calculate the probabilities P (z1) and P (z2).

1.     To obtain the individual channel matrices :
       The channel matrix of a BSC consists of the transition probabilities of the channel. From
Fig. P. 3.11.9 the channel matrix of the first BSC is,
                                         y1     y2
                P [Y/X] =       x1       0.8    0.2                                                  ...(1)
                                x2       0.2    0.8
       Similarly the channel matrix of the second BSC is,
                                         z1     z2
                P [Z/Y] =       y1       0.7    0.3                                                  ...(2)
                                y2       0.3    0.7

2.     Channel matrix of the resultant channel :
       The channel matrix of the resultant channel is,
                P [Z/X] =       [ P (z1/x1)   P (z2/x1) ]
                                [ P (z1/x2)   P (z2/x2) ]                                            ...(3)
       The probability P (z1/x1) can be expressed by referring to Fig. P. 3.11.9 as,
                P (z1/x1) = P (z1/y1) · P (y1/x1) + P (z1/y2) · P (y2/x1)                            ...(4)
       Similarly we can obtain the expressions for the remaining terms of the resultant channel matrix.
The elements of the resultant channel matrix are therefore obtained by multiplying the individual
channel matrices :
                P (Z/X) = P (Y/X) · P (Z/Y)                                                          ...(5)
       Substituting the matrices of Equations (1) and (2) we get,
                                         z1      z2
                P (Z/X) =       x1       0.62    0.38                                             ...Ans.
                                x2       0.38    0.62
       This is the required resultant channel matrix.

3.     To calculate P (z1) and P (z2) :
       From Fig. P. 3.11.9 we can write,
                P (z1) = P (z1/y1) · P (y1) + P (z1/y2) · P (y2)                                     ...(6)
       Substituting
                P (y1) = P (x1) · P (y1/x1) + P (x2) · P (y1/x2) = (0.6 × 0.8) + (0.4 × 0.2) = 0.56
       and      P (y2) = P (x1) · P (y2/x1) + P (x2) · P (y2/x2) = (0.6 × 0.2) + (0.4 × 0.8) = 0.44
       and      P (z1/y1) = 0.7,  P (z1/y2) = 0.3,
       we get,
                P (z1) = (0.7 × 0.56) + (0.3 × 0.44) = 0.392 + 0.132 = 0.524                      ...Ans.
       Similarly,
                P (z2) = P (z2/y1) · P (y1) + P (z2/y2) · P (y2) = (0.3 × 0.56) + (0.7 × 0.44)
       ∴        P (z2) = 0.476                                                                    ...Ans.
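The cascade result is simply a product of the two transition matrices, followed by one more product with the input probabilities. A small numpy check of Ex. 3.11.9, using the matrix values reconstructed above (rows are indexed by the input symbol):

    import numpy as np

    P_y_given_x = np.array([[0.8, 0.2],      # first BSC
                            [0.2, 0.8]])
    P_z_given_y = np.array([[0.7, 0.3],      # second BSC
                            [0.3, 0.7]])

    P_z_given_x = P_y_given_x @ P_z_given_y  # resultant channel matrix
    px = np.array([0.6, 0.4])
    pz = px @ P_z_given_x                    # output probabilities

    print(P_z_given_x)                       # [[0.62 0.38], [0.38 0.62]]
    print(pz)                                # [0.524 0.476]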
Ex. 3.11.10 :   A binary channel matrix is given by :
                                       y1       y2      → outputs
                inputs →    x1        2/3      1/3
                            x2        1/10     9/10
                Determine H (X), H (X/Y), H (Y/X) and the mutual information I (X ; Y).    .Page No. 3-50
Soln. :
       The given channel matrix is interpreted as the joint probability matrix :
                                       y1       y2
                p (x, y) =  x1        2/3      1/3
                            x2        1/10     9/10

Step 1 : Obtain the individual probabilities :
       The individual message probabilities are obtained by summing the rows and the columns
respectively :
              p (x1) = 2/3 + 1/3 = 1
              p (x2) = 1/10 + 9/10 = 1
              p (y1) = 2/3 + 1/10 = 23/30
              p (y2) = 1/3 + 9/10 = 37/30

Step 2 : Obtain the marginal entropies H (X) and H (Y) :
              H (X) = p (x1) log2 [ 1/p (x1) ] + p (x2) log2 [ 1/p (x2) ] = 1 log2 (1) + 1 log2 (1)
       ∴      H (X) = 0
              H (Y) = p (y1) log2 [ 1/p (y1) ] + p (y2) log2 [ 1/p (y2) ]
                    = (23/30) log2 [ 30/23 ] + (37/30) log2 [ 30/37 ]
       ∴      H (Y) = 0.2938 – 0.3731 = – 0.07936 ≈ – 0.08

Step 3 : Obtain the joint entropy H (X, Y) :
              H (X, Y) = p (x1, y1) log2 [ 1/p (x1, y1) ] + p (x1, y2) log2 [ 1/p (x1, y2) ]
                            + p (x2, y1) log2 [ 1/p (x2, y1) ] + p (x2, y2) log2 [ 1/p (x2, y2) ]
                       = (2/3) log2 (3/2) + (1/3) log2 (3) + (1/10) log2 (10) + (9/10) log2 (10/9)
                       = 0.38 + 0.52 + 0.33 + 0.13
       ∴      H (X, Y) = 1.36 bits

Step 4 : Obtain the conditional entropies H (X/Y) and H (Y/X) :
              H (X/Y) = H (X, Y) – H (Y) = 1.36 – (– 0.08) = 1.44 bits
              H (Y/X) = H (X, Y) – H (X) = 1.36 – 0 = 1.36 bits

Step 5 : Mutual information :
              I (X ; Y) = H (X) – H (X/Y) = 0 – 1.44 = – 1.44 bits/message                        ...Ans.
Ex. 3.11.11 :   A channel has the following channel matrix :
                                          y1        y2     y3
                [ P (Y/X) ] =    x1       1 – p     p      0
                                 x2       0         p      1 – p
                1.   Draw the channel diagram.
                2.   If the source has equally likely outputs, compute the probabilities associated with
                     the channel outputs for p = 0.2.                                      .Page No. 3-50
Soln. :
Part I :
1.     The given matrix shows that the number of inputs is two, i.e. x1 and x2, whereas the number of
       outputs is three, i.e. y1, y2 and y3.
2.     This channel has two inputs x1 = 0 and x2 = 1 and three outputs y1 = 0, y2 = e and y3 = 1, as
       shown in Fig. P. 3.11.11.

                       Fig. P. 3.11.11 : The channel diagram

       This type of channel is called a “binary erasure channel”. The output y2 = e indicates an
erasure; that means this output is in doubt and should be erased.

Part II :
       Given that the sources x1 and x2 are equally likely,
              p (x1) = p (x2) = 0.5
       It is also given that p = 0.2.
       ∴      p (y) = p (x) · [ P (Y/X) ] = [ p (x1)   p (x2) ] · [ P (Y/X) ]
                    = [ 0.5   0.5 ] ·   [ 0.8   0.2   0   ]
                                        [ 0     0.2   0.8 ]
       ∴      p (y) = [ 0.4   0.2   0.4 ]
       That means p (y1) = 0.4, p (y2) = 0.2 and p (y3) = 0.4. These are the required probabilities
associated with the channel outputs for p = 0.2.
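Part II is a single row-vector by matrix product. A quick numpy check, with the erasure probability p = 0.2 and the channel matrix as reconstructed above:

    import numpy as np

    p = 0.2
    P_y_given_x = np.array([[1 - p, p, 0.0],     # x1 -> y1 = 0, y2 = e, y3 = 1
                            [0.0,   p, 1 - p]])
    px = np.array([0.5, 0.5])                    # equally likely inputs
    py = px @ P_y_given_x
    print(py)                                    # [0.4 0.2 0.4]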
Ex. 3.11.13 :   Find the mutual information and channel capacity of the channel shown in
                Fig. P. 3.11.13(a). Given that P (x1) = 0.6 and P (x2) = 0.4.              .Page No. 3-57

                                   Fig. P. 3.11.13(a)

Soln. :
       Given that : p (x1) = 0.6, p (x2) = 0.4.
       The conditional probabilities are,
              p (y1/x1) = 0.8,  p (y2/x1) = 0.2,  p (y1/x2) = 0.3  and  p (y2/x2) = 0.7
       The mutual information can be obtained by referring to Fig. P. 3.11.13(b).

                                   Fig. P. 3.11.13(b)

       As already derived, the mutual information of this binary channel is given by,
              I (X ; Y) = Ω [ β + (1 – α – β) p ] – p Ω (α) – (1 – p) Ω (β)                          ...(1)
       where Ω is called the horseshoe function, given by,
              Ω (p) = p log2 (1/p) + (1 – p) log2 [ 1/(1 – p) ]                                      ...(2)
       Here α = p (y2/x1) = 0.2, β = p (y1/x2) = 0.3 and p = p (x1) = 0.6.
       Substituting the values we get,
              I (X ; Y) = Ω [ 0.3 + (1 – 0.2 – 0.3) × 0.6 ] – 0.6 Ω (0.2) – 0.4 Ω (0.3)
       ∴      I (X ; Y) = Ω (0.6) – 0.6 Ω (0.2) – 0.4 Ω (0.3)                                        ...(3)
       Using Equation (2) we get,
              I (X ; Y) = [ 0.6 log2 (1/0.6) + 0.4 log2 (1/0.4) ]
                             – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ]
                             – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]
       ∴      I (X ; Y) ≈ 0.185 bits                                                              ...Ans.

Channel capacity (C) :
       For this asymmetric binary channel,
              C = 1 – p Ω (α) – (1 – p) Ω (β)
                = 1 – 0.6 Ω (0.2) – 0.4 Ω (0.3)
                = 1 – 0.6 [ 0.2 log2 (1/0.2) + 0.8 log2 (1/0.8) ] – 0.4 [ 0.3 log2 (1/0.3) + 0.7 log2 (1/0.7) ]
                = 1 – 0.433 – 0.352
       ∴      C = 0.214 bits                                                                      ...Ans.
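The horseshoe (binary entropy) function and the mutual-information expression of Equation (1) translate directly into code. A short Python sketch for Ex. 3.11.13 (α = 0.2, β = 0.3, p = P (x1) = 0.6; the function name omega is my own):

    import math

    def omega(p):
        """Binary entropy ('horseshoe') function of Equation (2)."""
        if p in (0.0, 1.0):
            return 0.0
        return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

    alpha, beta, p = 0.2, 0.3, 0.6           # P(y2/x1), P(y1/x2), P(x1)
    I = omega(beta + (1 - alpha - beta) * p) - p * omega(alpha) - (1 - p) * omega(beta)
    C = 1 - p * omega(alpha) - (1 - p) * omega(beta)   # capacity expression used in the text
    print(I, C)                                        # ~ 0.185 bits and ~ 0.214 bits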
Section 3.12 :

Ex. 3.12.3 :    In a facsimile transmission of a picture, there are about 2.25 × 10⁶ picture elements per
                frame. For good reproduction, twelve brightness levels are necessary. Assuming all these
                levels to be equiprobable, calculate the channel bandwidth required to transmit one
                picture every three minutes for a signal to noise power ratio of 30 dB. If the SNR
                requirement increases to 40 dB, calculate the new bandwidth. Explain the trade-off
                between bandwidth and SNR by comparing the two results.                    .Page No. 3-67
Soln. :
Given :         Number of picture elements per frame = 2.25 × 10⁶
                Number of brightness levels M = 12 (all equiprobable)
                Number of pictures transmitted = 1 per 3 minutes
                SNR1 = 30 dB,  SNR2 = 40 dB

1.     Calculate the information rate :
       The number of picture elements per frame is 2.25 × 10⁶ and each element can take any one of the
12 equiprobable brightness levels.
              Information rate R = Number of elements/sec × Average information per element
              R = r × H                                                                              ...(1)
       where  r = 2.25 × 10⁶ / (3 × 60 sec) = 12500 elements/sec                                     ...(2)
       and    H = log2 M = log2 12        ...as all brightness levels are equiprobable               ...(3)
       ∴      R = 12500 × log2 (12)
       ∴      R = 44.812 kbits/sec                                                                   ...(4)

2.     Calculate the bandwidth B for S/N = 30 dB :
       Shannon's capacity theorem states that R ≤ C, where
              C = B log2 [ 1 + (S/N) ]                                                               ...(5)
       Substituting S/N = 30 dB = 1000 we get,
              44.812 × 10³ ≤ B log2 [ 1 + 1000 ]
       ∴      B ≥ 44.812 × 10³ / log2 (1001)
       ∴      B ≥ 4.4959 kHz                                                                      ...Ans.

3.     Bandwidth for S/N = 40 dB :
       For a signal to noise ratio of 40 dB, i.e. 10,000, the new value of the bandwidth is obtained
from,
              44.812 × 10³ ≤ B log2 [ 1 + 10000 ]
       ∴      B ≥ 44.812 × 10³ / log2 (10001)
       ∴      B ≥ 3.372 kHz                                                                       ...Ans.

Trade-off between bandwidth and SNR :
       As the signal to noise ratio is increased from 30 dB to 40 dB, the required bandwidth decreases
from about 4.5 kHz to about 3.37 kHz. A higher SNR therefore allows the same information rate to be
carried in a smaller bandwidth; bandwidth and SNR can be traded against each other.
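The bandwidth figures in parts 2 and 3 come straight from rearranging the Shannon-Hartley bound R ≤ B log2 (1 + S/N). A minimal Python sketch of that rearrangement (the helper name is my own):

    import math

    def min_bandwidth(rate_bps, snr_db):
        """Smallest channel bandwidth (Hz) that can support rate_bps at the given SNR."""
        snr = 10 ** (snr_db / 10)
        return rate_bps / math.log2(1 + snr)

    R = 12500 * math.log2(12)                              # facsimile information rate ~ 44812 bits/sec
    print(R, min_bandwidth(R, 30), min_bandwidth(R, 40))   # ~ 4496 Hz and ~ 3372 Hz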
Ex. 3.12.4 :    An analog signal having a bandwidth of 4 kHz is sampled at 1.25 times the Nyquist rate,
                with each sample quantised into one of 256 equally likely levels.
                1.   What is the information rate of this source ?
                2.   Can the output of this source be transmitted without error over an AWGN channel
                     with a bandwidth of 10 kHz and an SNR of 20 dB ?
                3.   Find the SNR required for error-free transmission in part (2).
                4.   Find the bandwidth required for an AWGN channel for error-free transmission of this
                     source if the SNR happens to be 20 dB.                                .Page No. 3-68
Soln. :
Given :         fm = 4 kHz,  fs = 1.25 × 2 × fm = 1.25 × 2 × 4 kHz = 10 kHz
                Quantization levels Q = 256 (equally likely)

1.     Information rate (R) :
              R = r × H                                                                              ...(1)
       where  r = Number of messages/sec = Number of samples/sec = 10 × 10³
       and    H = log2 (256)        ...as all the levels are equally likely
       ∴      R = 10 × 10³ × log2 (256) = 10 × 10³ × 8
       ∴      R = 80 kbits/sec                                                                    ...Ans.

2.     Channel capacity (C) :
       In order to answer part (2) we have to calculate the channel capacity C. Given B = 10 kHz and
S/N = 20 dB = 100,
              C = B log2 [ 1 + (S/N) ] = 10 × 10³ log2 [ 101 ]
       ∴      C = 66.582 kbits/sec
       For error-free transmission it is necessary that R ≤ C. But here R = 80 kb/s and C = 66.582 kb/s,
i.e. R > C; hence error-free transmission is not possible.

3.     S/N ratio for error-free transmission in part (2) :
       Substitute C = R = 80 kb/s to get,
              80 × 10³ = B log2 [ 1 + (S/N) ] = 10 × 10³ log2 [ 1 + (S/N) ]
       ∴      8 = log2 [ 1 + (S/N) ]
       ∴      256 = 1 + (S/N)
       ∴      S/N = 255 or 24.06 dB                                                               ...Ans.
       This is the value of the signal to noise ratio required to ensure error-free transmission.

4.     Bandwidth required for error-free transmission :
       Given C = 80 kb/s and S/N = 20 dB = 100,
              C = B log2 [ 1 + (S/N) ]
       ∴      80 × 10³ = B log2 [ 1 + 100 ]
       ∴      B ≥ 12 kHz                                                                          ...Ans.
Ex. 3.12.5 :    A channel has a bandwidth of 5 kHz and a signal to noise power ratio of 63. Determine the
                bandwidth needed if the S/N power ratio is reduced to 31. What will be the signal power
                required if the channel bandwidth is reduced to 3 kHz ?                    .Page No. 3-68
Soln. :
1.     To determine the channel capacity :
       It is given that B = 5 kHz and S/N = 63. Hence, using the Shannon-Hartley theorem, the channel
capacity is,
              C = B log2 [ 1 + (S/N) ] = 5 × 10³ log2 [ 1 + 63 ]
       ∴      C = 30 × 10³ bits/sec                                                                  ...(1)

2.     To determine the new bandwidth :
       The new value of S/N is 31. Assuming the channel capacity C to remain constant we can write,
              30 × 10³ = B log2 [ 1 + 31 ]
       ∴      B = 30 × 10³ / 5 = 6 kHz                                                               ...(2)

3.     To determine the new signal power :
       Given that the new bandwidth is 3 kHz. We know that the noise power is N = N0 B. Let the noise
power corresponding to the 6 kHz bandwidth be N1 = 6 N0 and the noise power corresponding to the new
3 kHz bandwidth be N2 = 3 N0.
       ∴      N1 / N2 = 6 N0 / 3 N0 = 2,  i.e.  N2 = N1 / 2                                          ...(3)
       The old signal to noise ratio is S1 / N1 = 31,
       ∴      S1 = 31 N1                                                                             ...(4)
       The new signal to noise ratio S2 / N2 is not yet known, hence let us find it out :
              30 × 10³ = 3 × 10³ log2 [ 1 + (S2/N2) ]
       ∴      S2 / N2 = 2¹⁰ – 1 = 1023                                                               ...(5)
       ∴      S2 = 1023 N2 = 1023 N1 / 2        ...using Equation (3)                                ...(6)
       Dividing Equation (6) by Equation (4) we get,
              S2 / S1 = 1023 / (2 × 31) = 16.5
       ∴      S2 = 16.5 S1                                                                        ...Ans.
       Thus if the bandwidth is reduced by 50 %, the signal power must be increased 16.5 times
(i.e. to 1650 %) to obtain the same channel capacity.

Ex. 3.12.6 :    A 2 kHz channel has a signal to noise ratio of 24 dB.
                (a)  Calculate the maximum capacity of this channel.
                (b)  Assuming constant transmitting power, calculate the maximum capacity when the
                     channel bandwidth is :  1. halved  2. reduced to a quarter of its original value.
                                                                                           .Page No. 3-68
Soln. :
Data : B = 2 kHz and S/N = 24 dB.
       The SNR must first be converted from dB into a power ratio :
              24 = 10 log10 (S/N)
       ∴      S/N = 10^2.4 ≈ 251                                                                     ...(1)

(a)    To determine the channel capacity :
              C = B log2 [ 1 + (S/N) ] = 2 × 10³ log2 [ 1 + 251 ] = 2 × 10³ × 7.977
       ∴      C = 15.95 × 10³ bits/sec                                                            ...Ans.

(b)    1.   Value of C when B is halved :
       The new bandwidth is B2 = 1 kHz; let the old bandwidth be denoted by B1 = 2 kHz. We know that
the noise power is N = N0 B.
       ∴      Noise power with the old bandwidth,  N1 = N0 B1                                        ...(2)
              Noise power with the new bandwidth,  N2 = N0 B2                                        ...(3)
       ∴      N2 / N1 = B2 / B1 = 1/2,  i.e.  N2 = N1 / 2                                            ...(4)
       As the signal power remains constant, the SNR with the new bandwidth is,
              S / N2 = 2 (S / N1) = 2 × 251 = 502                                                    ...(5)
       Hence the new channel capacity is,
              C = B2 log2 [ 1 + (S/N2) ] = 1 × 10³ log2 (503) = 1 × 10³ × 8.97
       ∴      C = 8.97 × 10³ bits/sec                                                             ...Ans.

       2.   Value of C when B is reduced to 1/4 of its original value :
       With B3 = 500 Hz, Equation (4) becomes N3 = N1 / 4.
       ∴      S / N3 = 4 (S / N1) = 4 × 251 = 1004                                                   ...(6)
       Hence the new channel capacity is,
              C = B3 log2 [ 1 + (S/N3) ] = 500 log2 (1005)
       ∴      C = 4.99 × 10³ bits/sec                                                             ...Ans.
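The key point in part (b) is that reducing B also reduces the noise power N = N0 B, so at constant signal power the SNR scales up by the same factor that the bandwidth scales down. A short Python sketch of that bookkeeping for Ex. 3.12.6:

    import math

    B1, snr1 = 2000.0, 10 ** (24 / 10)        # 2 kHz, 24 dB ~ 251

    def capacity(B, snr):
        return B * math.log2(1 + snr)

    for factor in (1, 2, 4):                  # original bandwidth, halved, quartered
        B = B1 / factor
        snr = snr1 * factor                   # noise power drops with B, signal power fixed
        print(B, capacity(B, snr))            # ~ 15.95e3, ~ 8.97e3, ~ 4.99e3 bits/sec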