Point-to-Point Communications (filled notes)

Course: Communication Principles
Institution: Memorial University of Newfoundland


The average error probability can be explicitly computed to be

p_e = \int_0^{\infty} Q\left(\sqrt{2r\,\mathrm{SNR}}\right) f_R(r)\,dr = \left(\frac{1-\mu}{2}\right)^{L} \sum_{l=0}^{L-1} \binom{L-1+l}{l} \left(\frac{1+\mu}{2}\right)^{l},  (24)

where \mu is given by

\mu = \sqrt{\frac{\mathrm{SNR}}{1+\mathrm{SNR}}}.  (25)

In the high-SNR regime, we can approximate \mu by taking the first-order Taylor expansion:

\mu \approx 1 - \frac{1}{2\,\mathrm{SNR}}.  (26)

Then, we have

\frac{1+\mu}{2} \approx 1, \quad \text{and} \quad \frac{1-\mu}{2} \approx \frac{1}{4\,\mathrm{SNR}}.  (27)

Furthermore, we have

\sum_{l=0}^{L-1} \binom{L-1+l}{l} = \binom{2L-1}{L}.  (28)
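Identity (28) is easy to verify numerically; a minimal Python sketch (my own, not from the notes):

```python
import math

# Check sum_{l=0}^{L-1} C(L-1+l, l) = C(2L-1, L) for small L
for L in range(1, 10):
    lhs = sum(math.comb(L - 1 + l, l) for l in range(L))
    assert lhs == math.comb(2 * L - 1, L), L
```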

As a result, we can approximate the error probability as

p_e \approx \frac{1}{(4\,\mathrm{SNR})^{L}} \binom{2L-1}{L},  (29)

at high SNR. The error probability decreases as the L-th power of SNR, corresponding to a slope of −L in the error probability curve, as shown in Fig. 3. The performance is significantly improved, and we can also see this by examining the probability of the deep-fade event, given by

P(\|h\|^2 < 1/\mathrm{SNR}).  (30)
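To see how quickly the approximation (29) becomes accurate, the exact expression (24)-(25) can be compared with it numerically; a sketch in Python (function names are mine):

```python
import math

def pe_exact(snr, L):
    """Exact average error probability from (24), with mu from (25)."""
    mu = math.sqrt(snr / (1 + snr))
    return ((1 - mu) / 2) ** L * sum(
        math.comb(L - 1 + l, l) * ((1 + mu) / 2) ** l for l in range(L)
    )

def pe_approx(snr, L):
    """High-SNR approximation from (29)."""
    return math.comb(2 * L - 1, L) / (4 * snr) ** L

snr = 10 ** (30 / 10)  # 30 dB
for L in (1, 2, 4):
    print(L, pe_exact(snr, L), pe_approx(snr, L))
```

At 30 dB the two expressions agree closely for small L, consistent with the −L slope seen in Fig. 3.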

Fig. 3: Error probability as a function of SNR for different numbers of diversity branches L.

Fig. 4 plots the distribution of \|h\|^2 for different values of L; clearly, the tail of the distribution near zero becomes lighter for larger L. For small x, the PDF of \|h\|^2 is approximately

f(x) \approx \frac{1}{(L-1)!} x^{L-1}.  (31)

Therefore, we can obtain

P(\|h\|^2 < 1/\mathrm{SNR}) \approx \int_0^{1/\mathrm{SNR}} \frac{1}{(L-1)!} x^{L-1}\,dx = \frac{1}{L!} \frac{1}{\mathrm{SNR}^{L}}.  (32)
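A quick Monte Carlo check of (32), assuming i.i.d. CN(0, 1) branch gains (a sketch, with names of my choosing):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def deep_fade_prob(snr, L, trials=1_000_000):
    """Simulate P(||h||^2 < 1/SNR) for L i.i.d. CN(0,1) channel gains."""
    h = (rng.standard_normal((trials, L)) +
         1j * rng.standard_normal((trials, L))) / np.sqrt(2)
    return np.mean(np.sum(np.abs(h) ** 2, axis=1) < 1 / snr)

snr, L = 100.0, 2  # 20 dB, two diversity branches
print(deep_fade_prob(snr, L))              # simulated probability
print(1 / (math.factorial(L) * snr ** L))  # approximation (32): 5e-5
```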

Remarks: A detection error occurs when \|h\|^2 = \sum_{l=1}^{L} |h_l|^2 is of the order of, or smaller than, 1/SNR, and this happens when all the magnitudes of the gains |h_l|^2 are small, on the order of 1/SNR. Since the probability that each |h_l|^2 is less than 1/SNR is approximately 1/SNR and the gains are independent, the probability of the overall gain being small is of the order 1/SNR^L. Typically, L is called the diversity gain of the system.

Remarks: With repetition coding, to obtain a diversity gain of L, we consume significantly more energy and time slots (or bandwidth, equivalently). Can we further improve the efficiency?


Fig. 4: The probability density function of \|h\|^2 for different values of L. The larger the L, the faster the probability density function drops off around 0.

1.5 Beyond Repetition Coding

The repetition code is the simplest possible code. Although it achieves a diversity gain, it does not exploit the degrees of freedom available in the channel effectively, because it simply repeats the same symbol over the L time slots. By using more sophisticated codes, a coding gain can also be obtained beyond the diversity gain. There are many possible codes that one can use. We first focus on the example of a rotation code to explain some of the issues in code design for fading channels.

Consider the case L = 2 over Rayleigh fading channels. The complex channel gains at the two time slots are h_1 and h_2, and they are independent. We want to transmit two BPSK symbols, u_1, u_2 = \pm a. Instead of transmitting the original symbols directly, we transmit the rotated version:

x = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} u_1\cos\theta - u_2\sin\theta \\ u_1\sin\theta + u_2\cos\theta \end{bmatrix}.  (33)
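A minimal sketch of the rotation in (33); the angle θ is a design parameter, and the value used below is only illustrative:

```python
import numpy as np

def rotate(u1, u2, theta):
    """Rotation code of (33): map two BPSK symbols to two transmitted symbols."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.array([u1, u2])

# The four rotated codewords for a = 1 and an illustrative theta = pi/8
for u1 in (+1, -1):
    for u2 in (+1, -1):
        print(u1, u2, rotate(u1, u2, np.pi / 8))
```

Each codeword now spreads both information symbols over both time slots, which is what the rotation provides over direct transmission.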

1.6 Time Diversity in GSM

Global System for Mobile (GSM) is a digital cellular standard developed in Europe in the 1980s. GSM is a frequency division duplex (FDD) system and uses two 25 MHz bands, one for the uplink (mobiles to base station) and one for the downlink (base station to mobiles). The original bands set aside for GSM are the 890-915 MHz band (uplink) and the 935-960 MHz band (downlink). The bands are further divided into 200 kHz sub-channels, and each sub-channel is shared by 8 users in a time division fashion (time-division multiple access, TDMA). The data of each user are sent over time slots of length 577 microseconds, and the time slots of the 8 users together form a frame of length 4.615 ms.
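The quoted numbers fit together; a quick arithmetic check (values taken from the paragraph above):

```python
slot = 577e-6               # one time slot in seconds (rounded value)
frame = 8 * slot            # 8-user TDMA frame: about 4.616 ms (the quoted 4.615 ms, up to rounding)
subchannels = 25e6 / 200e3  # 125 sub-channels in each 25 MHz band
print(frame * 1e3, subchannels)
```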

Fig. 6: The 25 MHz band of a GSM system is divided into 200 kHz sub-channels, which are further divided into time slots for 8 different users.

Voice is the main application for GSM. Voice is coded by a speech encoder into speech frames, each of length 20 ms. The bits in each speech frame are encoded by a convolutional code of rate 1/2, with the two generator polynomials D^4 + D^3 + 1 and D^4 + D^3 + D + 1. The number of coded bits for each speech frame is 456. To achieve time diversity, these coded bits are interleaved across 8 consecutive time slots assigned to that specific user: the 0th, 8th, ..., 448th bits are put into the first time slot, the 1st, 9th, ..., 449th bits are put into the second time slot, etc. Since one time slot occurs every 4.615 ms for each user, this translates into a delay of roughly 40 ms, a delay judged tolerable for voice. The 8 time slots are shared between two 20 ms speech frames.
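The bit-to-slot mapping just described is easy to state in code. A minimal sketch of only this mapping (the full GSM interleaver has more structure than shown here):

```python
def interleave(coded_bits, n_slots=8):
    """Spread the 456 coded bits of one speech frame over n_slots time slots:
    bit i goes to slot i % n_slots, as described above."""
    slots = [[] for _ in range(n_slots)]
    for i, bit in enumerate(coded_bits):
        slots[i % n_slots].append(bit)
    return slots  # each slot carries 456 / 8 = 57 bits

slots = interleave(list(range(456)))
print(slots[0][:3])  # bits 0, 8, 16 land in the first time slot
```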

For the channel to fade more or less independently across the different time slots for a user, the coherence time should be less than 5 ms. For f_c = 900 MHz, this translates into a mobile speed of at least 30 km/h. For a walking speed of, say, 3 km/h, there may be too little time diversity. In this case, GSM can go into a frequency hopping mode, where consecutive frames (each composed of the time slots of the 8 users) can hop from one 200 kHz sub-channel to another. With a typical delay spread of about 1 microsecond, the coherence bandwidth is 500 kHz (cf. Table 2.1). The total bandwidth of 25 MHz is thus much larger than the typical coherence bandwidth of the channel, and consecutive frames can be expected to fade independently. This provides the same effect as having time diversity. Other ways to exploit frequency diversity will be discussed in a later part of this chapter.


As we can see, the maximum likelihood estimator can be used for data detection, and the frequency diversity gain can be obtained. However, the complexity grows exponentially with the sequence length, i.e., O(2^{KN}), because we need to go through all the possible data sequences. To avoid the exhaustive search, we can use the ubiquitous Viterbi algorithm. The key observation is that the memory in the frequency-selective channel can be captured by a finite state machine: the received signal at the current time slot depends only on the current transmitted symbol and the previous L − 1 symbols; it has nothing to do with the symbols before that. Motivated by this observation, we define the state at time m as



s[m] = \begin{bmatrix} x[m-L+1] \\ x[m-L+2] \\ \vdots \\ x[m] \end{bmatrix}.  (60)

Taking BPSK modulation and L = 2 as an example, there are 4 states in total, and the state transition is shown in the diagram below.

\hat{x} = \arg\min_{x_n} \sum_{m=0}^{N+L-2} |y[m] - s[m]^T h|^2,  (61)

where s[m] is the state at the m-th time slot, and it is exclusively determined by the transmitted sequence.

Example: The transmitted sequence is [0, 1, 1, 0, 1], with an SNR of 10 dB. The modulated BPSK symbols are [−1, 1, 1, −1, 1]. Consider the simple channel h = [1, 0.5]^T; without noise, the received signal y_o should be


Suppose the white noise is

w = [−0.2 − 0.3i, 0.2, 0.2 + 0.2i, 0.6i, −0.2i, −0.3]^T.

As a result, the actual received signal is y = y_o + w:
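The numerical values of y_o and y were filled in by hand in the original notes; a sketch that reproduces them, assuming zero padding outside the transmitted block (the boundary convention is not stated in the extracted text):

```python
import numpy as np

x = np.array([-1, 1, 1, -1, 1])  # BPSK symbols for bits [0, 1, 1, 0, 1]
h = np.array([1.0, 0.5])         # two-tap channel
y_o = np.convolve(x, h)          # noiseless output, length N + L - 1 = 6
w = np.array([-0.2 - 0.3j, 0.2, 0.2 + 0.2j, 0.6j, -0.2j, -0.3])
y = y_o + w                      # actual received signal
print(y_o)  # [-1.   0.5  1.5 -0.5  0.5  0.5]
print(y)
```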

At the receiver, we employ the Viterbi algorithm for sequence detection. The initial state is s_0, and the next state can be s_1 or s_2. There are four states in total:

s_0: x[m] = +1, x[m−1] = +1, s[m] = [+1, +1];
s_1: x[m] = +1, x[m−1] = −1, s[m] = [+1, −1];
s_2: x[m] = −1, x[m−1] = −1, s[m] = [−1, −1];
s_3: x[m] = −1, x[m−1] = +1, s[m] = [−1, +1].

State Transition:
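The state-transition diagram was drawn by hand in the notes; its structure can be enumerated mechanically. A small sketch, assuming the state is the pair of the two most recent symbols:

```python
from itertools import product

# For BPSK and L = 2, the state is the last two symbols; each new symbol
# shifts the state, so every state has exactly two successors.
states = list(product((+1, -1), repeat=2))
for s in states:
    for x_next in (+1, -1):
        print(f"{s} --x={x_next:+d}--> {(s[1], x_next)}")
```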


Trellis:

Note: every path in the trellis corresponds to a sequence!

Viterbi Algorithm:
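The Viterbi recursion itself was worked out by hand in the notes. A compact sketch of the same dynamic program for this two-tap BPSK example, again assuming zero padding at the block edges (function and variable names are mine):

```python
import numpy as np

def viterbi_bpsk(y, h):
    """ML sequence detection of (61) for BPSK over a 2-tap channel h = [h0, h1].
    State = previous symbol (0 encodes the zero padding before the block)."""
    N = len(y) - 1  # number of transmitted symbols
    cost, paths = {0: 0.0}, {0: []}
    for m in range(N):
        new_cost, new_paths = {}, {}
        for prev, c in cost.items():
            for x in (+1, -1):
                metric = c + abs(y[m] - (h[0] * x + h[1] * prev)) ** 2
                if x not in new_cost or metric < new_cost[x]:
                    new_cost[x], new_paths[x] = metric, paths[prev] + [x]
        cost, paths = new_cost, new_paths
    for x in cost:  # the last output sample sees only the channel tail
        cost[x] += abs(y[N] - h[1] * x) ** 2
    return paths[min(cost, key=cost.get)]

y = np.convolve([-1, 1, 1, -1, 1], [1.0, 0.5]) + np.array(
    [-0.2 - 0.3j, 0.2, 0.2 + 0.2j, 0.6j, -0.2j, -0.3])
print(viterbi_bpsk(y, [1.0, 0.5]))  # should recover [-1, 1, 1, -1, 1]
```

Each state keeps only the surviving path with the best metric, which is exactly why the complexity is linear in the sequence length rather than exponential.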


Proof of (59) on Page 36: For the transmitted sequence x_A, the received vector is y = X_A^T h + w, where the elements of w are i.i.d. CSCG random variables, each with variance \sigma^2. Considering Rayleigh fading channels, the elements of h are i.i.d. CSCG variables with unit variance. In this case, for a given h, the probability of confusing x_A with x_B is

P\{x_A \to x_B \mid h\} = P\{\|y - X_A^T h\|^2 > \|y - X_B^T h\|^2\} = P\{\|w\|^2 > \|(X_A - X_B)^T h + w\|^2\}.

If we define s = (X_A - X_B)^T h, we have

P\{\|w\|^2 > \|s + w\|^2\} = P\{\|w\|^2 > \|w\|^2 + \|s\|^2 + s^H w + w^H s\} = P\left\{\frac{s^H w}{\|s\|} + \frac{w^H s}{\|s\|} < -\|s\|\right\}.

Based on our discussion in Chapter 2, we know that \bar{w} = \frac{s^H w}{\|s\|} \sim \mathcal{CN}(0, \sigma^2). Therefore, \tilde{w} = \bar{w} + \bar{w}^* \sim \mathcal{N}(0, 2\sigma^2), which means

P\{x_A \to x_B \mid h\} = P\{\tilde{w} < -\|s\|\} = P\left\{\frac{\tilde{w}}{\sqrt{2}\sigma} < -\frac{\|s\|}{\sqrt{2}\sigma}\right\} = Q\left(\frac{\|s\|}{\sqrt{2}\sigma}\right) \le \exp\left(-\frac{\|s\|^2}{4\sigma^2}\right).
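The last step is the Chernoff bound Q(x) \le e^{-x^2/2}; a one-line numerical sanity check:

```python
from math import erfc, exp, sqrt

Q = lambda x: 0.5 * erfc(x / sqrt(2))  # Gaussian tail function
assert all(Q(x) <= exp(-x * x / 2) for x in (0.1, 0.5, 1.0, 2.0, 4.0))
```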

Now we want to average over h to get the overall detection error probability, which is not easy. To simplify this process, notice that

\|s\|^2 = h^H (X_A - X_B)^* (X_A - X_B)^T h.  (62)

We can conduct an EVD on (X_A - X_B)^* (X_A - X_B)^T as

(X_A - X_B)^* (X_A - X_B)^T = U \Sigma U^H,

where U is a unitary matrix and \Sigma = \mathrm{Diag}\{[\lambda_0^2, \lambda_1^2, \cdots, \lambda_{L-1}^2]\}. Also, we can define \tilde{h} = U^H h, and we know that \tilde{h} and h have the same distribution: the elements of \tilde{h} are i.i.d. CSCG with unit variance. Therefore, (62) can be rewritten as

\tilde{h}^H \Sigma \tilde{h} = \sum_{l=0}^{L-1} \lambda_l^2 \beta_l,  (63)

where \beta_l = |\tilde{h}[l]|^2 follows an exponential distribution with unit mean; its PDF is thus e^{-\beta_l}.

We rewrite the conditional error bound as

\exp\left(-\frac{\|s\|^2}{4\sigma^2}\right) = \prod_{l=0}^{L-1} \exp\left(-\frac{\lambda_l^2 \beta_l}{4\sigma^2}\right).

We can then take the average of the conditional error probability over the \beta_l:

P\{x_A \to x_B\} = E\{P\{x_A \to x_B \mid h\}\}
\le \prod_{l=0}^{L-1} \int_0^{\infty} \exp\left(-\frac{\lambda_l^2 \beta_l}{4\sigma^2}\right) e^{-\beta_l}\, d\beta_l
= \prod_{l=0}^{L-1} \int_0^{\infty} \exp\left(-\frac{\lambda_l^2 + 4\sigma^2}{4\sigma^2}\, \beta_l\right) d\beta_l
= \prod_{l=0}^{L-1} \left[-\frac{4\sigma^2}{\lambda_l^2 + 4\sigma^2} \exp\left(-\frac{\lambda_l^2 + 4\sigma^2}{4\sigma^2}\, \beta_l\right)\right]_0^{\infty}
= \prod_{l=0}^{L-1} \frac{4\sigma^2}{\lambda_l^2 + 4\sigma^2}
= \prod_{l=0}^{L-1} \frac{1}{\mathrm{SNR}\,\lambda_l^2/4 + 1},

where \mathrm{SNR} = 1/\sigma^2. This proves (59).
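A sketch that evaluates the final bound for a given pair of codeword matrices (the helper name is mine; any X_A, X_B of matching shape will do):

```python
import numpy as np

def pairwise_error_bound(XA, XB, snr):
    """Evaluate prod_l 1/(SNR * lambda_l^2 / 4 + 1), where the lambda_l^2 are
    the eigenvalues of (XA - XB)^* (XA - XB)^T, as in the proof above."""
    D = XA - XB
    lam2 = np.linalg.eigvalsh(np.conj(D) @ D.T)  # Hermitian, real eigenvalues
    return float(np.prod(1.0 / (snr * lam2 / 4 + 1.0)))
```

At high SNR the bound behaves like the product of 4/(SNR λ_l^2) over the nonzero eigenvalues, so the number of nonzero λ_l^2 sets the slope of the bound in SNR.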

