MATLAB GUI showing the Least-Mean-Squared learning curve.

Adaptive EQ + QPSK Simulation

Intersymbol Interference (ISI)

In digital communications, transmitting data over a multipath channel gives rise to a phenomenon known as “Inter-Symbol Interference” (ISI). What does this mean in layman's terms? Copies of the signal arriving along different paths cause neighboring symbols to smear into one another, distorting the received signal.
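As a quick illustration (a minimal sketch of my own, not part of the assignment), passing a handful of ±1 symbols through a hypothetical two-path channel shows the effect: each received sample is a mixture of the current symbol and a delayed echo of the previous one, which is exactly the smearing the equalizer must undo.

% minimal ISI illustration with an assumed 2-tap multipath channel
s = sign(randn(10,1));   % ten random +/-1 symbols
c = [1 0.6];             % direct path plus a delayed echo (assumed gains)
r = filter(c, 1, s);     % received samples: r(n) = s(n) + 0.6*s(n-1)
disp([s r])              % each received sample mixes in the previous symbol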

Adaptive Equalization (Adaptive EQ)

The objective of this simulation is to investigate the performance of an adaptive equalizer for data transmission over a multipath channel that causes inter-symbol interference (ISI).

The data generator module creates a sequence of complex-valued information symbols s[n]. For this simulation I assume QPSK symbols; in other words, the data are drawn from the set {a+ja, a−ja, −a+ja, −a−ja}, where a is the signal amplitude chosen according to a given signal-to-noise ratio (SNR). Assuming the noise has unit power, SNR = 20log10(√2·a) dB, i.e. a = 10^(SNR/20)/√2.
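For example, at SNR = 20 dB this gives a = 10^(20/20)/√2 ≈ 7.07, so the transmitted symbols sit at roughly ±7.07 ± j7.07 against unit-power noise.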

To verify performance, here is the list of requirements:

  1. A channel filter module will be used as an FIR filter with impulse response c[n] that simulates the channel distortion.

  2. A noise generator module will be used to generate additive noise that is present in any digital communication system. We assume unit-power, complex Gaussian noise.

  3. The adaptive equalizer module is a length M+1 FIR filter h[n] whose coefficients are adjusted using either the LMS or the normalized-LMS algorithm.

  4. A decision device module takes the output of the equalizer and will quantize it to one of the four possible transmitted symbols in QPSK, based on whichever is closest.

  5. A plot displaying the error e[n] as a function of n will be shown, averaged over the P experiments.

The Code

  1. To generate random complex data sequences for a given SNR value, I do the following: generate the symbol table in amplitude_to_qpskSet(), then generate random “complex binary” data by drawing the real and imaginary components separately with sign(randn()) in generate_QPSK_data(), and finally map the random complex data to the generated symbol table in qpsk_mod(). The combination ultimately yields complex symbols on the QPSK constellation.

  2. I send the random complex data through the channel using filter(c, 1, sn); this convolves the input with the channel impulse response.

  3. I add unit-power complex Gaussian noise to each of the N channel-output samples using xn = channel_out + (randn(size(channel_out)) + 1j*randn(size(channel_out)))/sqrt(2).

  4. I update the LMS and normalized-LMS filter coefficients (of size M+1) using h = h + ( mu * conj(e(n))*xn_shifts ); and hn = hn + ( lambda * conj(en(n))*xn_shifts ) ./ ( (xn_shifts)' * (xn_shifts) );. These are the standard LMS and normalized-LMS recursions, sketched just after this list.

  5. The decision logic is commented in the code. In short, the distance between the filter output and each constellation point (QPSK value) is computed, and the nearest constellation point is selected by index. This decision-directed error is used once the training sequence has ended (when n >= T).

  6. The whole experiment is repeated P times; stem() is used to plot the channel and adaptive-filter coefficients, and semilogy() is used to plot the LMS and normalized-LMS learning curves.
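For reference, writing the equalizer output as y(n) = h' * x(n) (as in the code below, where x(n) holds the M+1 most recent channel-output samples), the updates in step 4 are the standard complex LMS and normalized-LMS recursions:

    LMS:            h = h + mu * conj(e(n)) * x(n)
    Normalized LMS: h = h + ( lambda / ( x(n)' * x(n) ) ) * conj(e(n)) * x(n)

where e(n) is the training error s(n) - y(n) for n < T and the decision-directed error s_hat - y(n) afterwards.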

Adaptive Equalization Code

function adapt_equal( c, SNR, mu, lambda, M, N, T, P )

    average_J = zeros(N,1);
    average_Jn = zeros(N,1);
    e = zeros(1,N);
    en = zeros(1,N);
    J = zeros(1,N);
    Jn = zeros(1,N);
    f_out = zeros(N,1);
    f_out_n = zeros(N,1);
 
    %generate qpsk symbols
    sn = generate_QPSK_data(SNR, N);
    finite_sn = amplitude_to_qpskSet(SNR);
 
 
    for p = 1:P
    %go through channel
    % channel_out=conv(c,sn);
    channel_out = filter(c,1,sn);
 
    %add noise
    xn = channel_out + (randn(size(channel_out))+ 1j*randn(size(channel_out)))/sqrt(2);
 
    %initialize equalizer coefficients and delay line for this experiment
    h = zeros(M+1,1);
    hn = zeros(M+1,1);
    xn_shifts = zeros(M+1,1);
 
        %per sample
        for n=1:N
 
            xn_shifts = [xn(n) ; xn_shifts(1:M)];
            
            f_out(n) = h' * xn_shifts;
            f_out_n(n) = hn' * xn_shifts;
 
            %decision block
            
            %LMS
            error_decision = f_out(n) - finite_sn;
            [~,decided_index] = min(abs(error_decision));
            s_hat = finite_sn(decided_index);
            
            %normalized LMS
            error_decision_n = f_out_n(n) - finite_sn;
            [~,decided_index_n] = min(abs(error_decision_n));
            s_hat_n = finite_sn(decided_index_n);
 
            %if training
            if n < T
                e(n) = sn(n) - f_out(n);
                en(n) = sn(n) - f_out_n(n);
            else
                e(n) = s_hat - f_out(n);
                en(n) = s_hat_n - f_out_n(n);
            end
                
            %update coeff
             h = h + ( mu * conj(e(n))*xn_shifts );
             hn = hn + ( lambda * conj(en(n))*xn_shifts) ./ ( (xn_shifts)' * (xn_shifts ) );
 
            J(n)=abs(e(n));
            Jn(n) = abs(en(n));
 
            average_J(n)=average_J(n)+J(n);
            average_Jn(n)=average_Jn(n)+Jn(n);
        end
 
        %running average of the error magnitude over the experiments so far
        avg_J_plot = average_J/p;
        avg_Jn_plot = average_Jn/p;
 
 
        subplot(3,3,1)
        cplot(f_out)
        title('Filter Output')
 
        subplot(3,3,2)
        stem(real(h))
    %     axis([-2 2 -1 1])
        title('Adaptive Filter Impulse Response Coefficients (Real Component)')
        xlabel('n')  
 
        subplot(3,3,3)
        stem(c)
    %     axis([-2 2 -1 1])
        title('Channel Impulse Response Coefficients')    
        xlabel('n')  
 
        subplot(3,3,4)
        cplot(xn)
        axis([-35,35,-35,35])
        title('Channel with Noise')
 
        subplot(3,3,5)
        cplot(e)
        title('Error (complex)')
        subplot(3,3,6)
        cplot(channel_out)
        axis([-35,35,-35,35])
        title('All Data through channel')
 
        subplot(3,3,7)
        cplot(sn)
        title('Data Input (QPSK)')

        subplot(3,3,[8 9])
        semilogy(avg_J_plot)
        hold on
        semilogy(avg_Jn_plot)
        title('Learning curve abs(e(n))')
        xlabel('time step n')
        legend('LMS', 'Normalized LMS')
        hold off
        drawnow
         
 
    end        
 
end
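As a usage sketch (the parameter values below are my own assumptions, not taken from the assignment), the simulation might be launched like this:

% hypothetical example call: 3-tap channel, 10 dB SNR, LMS step 0.001,
% normalized-LMS step 0.1, length-11 equalizer (M = 10), 1000 symbols,
% 200 training symbols, averaged over P = 50 experiments
c = [0.5 1 0.5];
adapt_equal(c, 10, 0.001, 0.1, 10, 1000, 200, 50);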

Data Generation (QPSK) Code

function sn = generate_QPSK_data(sig_noise,numOfData)
 
    %QPSK lookup table using SNR (built once, outside the loop)
    lookupTable = amplitude_to_qpskSet(sig_noise);
 
    sn = zeros(numOfData,1);
    for n = 1:numOfData
        %generate one random "complex binary" sample (+/-1 +/- 1j)
        complex_binary = complex(sign(randn), sign(randn));
 
        %map the sample to the QPSK lookup table
        sn(n,1) = qpsk_mod(complex_binary, lookupTable);
    end
    %end data generator module!!!!
end
 
function signal = amplitude_to_qpskSet(SNR)
%returns a row vector of the four QPSK symbols
%assume noise has unit power
    a = ( 10 ^ (SNR/20) ) / sqrt(2);
    signal = [complex(a,a), complex(a,-a), complex(-a,a), complex(-a,-a)];
end
 
function output = qpsk_mod(data, qpsk)
%map each +/-1 +/- 1j sample to the scaled QPSK symbol in the same quadrant
 
    output = zeros(size(data));
 
    for k = 1:numel(data)
 
        if real(data(k)) > 0 && imag(data(k)) > 0
            output(k) = qpsk(1); %first quadrant
 
        elseif real(data(k)) > 0 && imag(data(k)) < 0
            output(k) = qpsk(2); %fourth quadrant
 
        elseif real(data(k)) < 0 && imag(data(k)) > 0
            output(k) = qpsk(3); %second quadrant
 
        else
            output(k) = qpsk(4); %third quadrant
        end
 
    end
end
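As a quick sanity check (my own example, not from the assignment), qpsk_mod(complex(1,-1), amplitude_to_qpskSet(20)) returns the fourth-quadrant symbol a − ja, with a = 10^(20/20)/√2 ≈ 7.07.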

Complex-Plane (Constellation) Plotting

function cplot(v)
%scatter plot of complex samples in the complex plane
    plot(real(v), imag(v), 'x')
    axis([-18,18,-18,18])
    drawnow
end

RESULTS & CONCLUSION

Test Cases

To simulate adaptive equalization, I ran three test cases: a base case, a different channel (complex impulses), and different step sizes. The figures below show the last snapshot of the simulation in their respective order.

Figure 1: Assignment-Given Parameters

Figure 2: Change Channel

Figure 3: Change Step Sizes

Looking at the three figures, the parameter with the largest effect is the step size: with a suitable choice, the learning curve decreases steadily with each time step. Though not shown here, every experiment snapshot also showed the filter output clustering toward the Data Input constellation points.

To conclude, this experiment simulated an adaptive-equalization environment and showed that, with an appropriate step size, LMS (and normalized LMS) can largely undo the channel's ISI and recover the transmitted data in the presence of noise.
