Wiener filters
1. Wiener Filters
2. Linear Optimum Filtering: Statement
The stochastic processes are complex-valued and stationary (at least wide-sense stationary).
The filter is a linear discrete-time filter with coefficients w0, w1, w2, ... (IIR, or FIR, which is inherently stable).
y(n) is the estimate of the desired response d(n).
e(n) is the estimation error, i.e., the difference between the filter output and the desired response.
3. Linear Optimum Filtering: Statement
Problem statement:
Given
Filter input, u(n),
Desired response, d(n),
Find the optimum filter coefficients w0, w1, w2, ...
To make the estimation error “as small as possible”
How?
An optimization problem.
4. Linear Optimum Filtering: Statement
Possible optimization (minimization) criteria for the estimation error:
1. Expectation of its absolute value,
2. Expectation of its square (mean-square value),
3. Expectation of higher powers of its absolute value.
Minimization of the mean-square value of the error (MSE) is mathematically tractable.
The problem becomes:
Design a linear discrete-time filter whose output y(n) provides an estimate of a desired response d(n), given a set of input samples u(0), u(1), u(2), ..., such that the mean-square value of the estimation error e(n), defined as the difference between the desired response d(n) and the actual filter output y(n), is minimized.
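In symbols, the setup reads as follows (written in the usual complex-valued convention with conjugated weights, which is assumed throughout the reconstructed equations below):
\[ y(n) = \sum_{k=0}^{\infty} w_k^{*}\, u(n-k), \qquad e(n) = d(n) - y(n), \qquad J = E\{ e(n)\, e^{*}(n) \} = E\{ |e(n)|^{2} \}. \]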
5. Principle of Orthogonality
The filter output is the convolution of the filter impulse response with the input: \( y(n) = \sum_{k=0}^{\infty} w_k^{*}\, u(n-k) \).
6. Principle of Orthogonality
Error:
\[ e(n) = d(n) - y(n) = d(n) - \sum_{k=0}^{\infty} w_k^{*}\, u(n-k). \]
MSE (mean-square error) criterion:
\[ J = E\{ e(n)\, e^{*}(n) \} = E\{ |e(n)|^{2} \}. \]
Square → quadratic function → convex function.
The minimum is attained when the gradient of J with respect to the optimization variables (the weights wk) is zero.
7. Derivative in Complex Variables
Let \( w_k = a_k + j b_k \); then the derivative (gradient) with respect to wk is defined through the operator
\[ \nabla_k = \frac{\partial}{\partial a_k} + j\, \frac{\partial}{\partial b_k}. \]
Hence
\[ \nabla_k J = \frac{\partial J}{\partial a_k} + j\, \frac{\partial J}{\partial b_k}. \]
Note: J is real-valued. Why?
8. Principle of Orthogonality
The partial derivative of J is
\[ \nabla_k J = E\left\{ \frac{\partial e(n)}{\partial a_k}\, e^{*}(n) + \frac{\partial e^{*}(n)}{\partial a_k}\, e(n) + j\, \frac{\partial e(n)}{\partial b_k}\, e^{*}(n) + j\, \frac{\partial e^{*}(n)}{\partial b_k}\, e(n) \right\}. \]
Using \( \partial e(n)/\partial a_k = -u(n-k) \), \( \partial e(n)/\partial b_k = j\,u(n-k) \), \( \partial e^{*}(n)/\partial a_k = -u^{*}(n-k) \), and \( \partial e^{*}(n)/\partial b_k = -j\,u^{*}(n-k) \),
hence
\[ \nabla_k J = -2\, E\{ u(n-k)\, e^{*}(n) \}. \]
9. Principle of Orthogonality
Since the minimum requires \( \nabla_k J = 0 \) for all k, we obtain
\[ E\{ u(n-k)\, e_o^{*}(n) \} = 0, \qquad k = 0, 1, 2, \ldots \]
The necessary and sufficient condition for the cost function J to attain its minimum value is that the corresponding estimation error eo(n) be orthogonal to each input sample that enters into the estimation of the desired response at time n.
Error at the minimum is uncorrelated with the filter input!
A good basis for testing whether the linear filter is operating in its
optimum condition.
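As a quick illustration of this test (not part of the slides), here is a small numerical sketch with a made-up input and desired response; it shows that the error of the sample-estimated Wiener filter is orthogonal (in the sample-average sense) to every input tap.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 50_000, 3
u = rng.standard_normal(N)                              # filter input
d = np.convolve(u, [0.8, -0.4, 0.2], "full")[:N] + 0.05 * rng.standard_normal(N)

# Rows of U are the tap-input vectors [u(n), u(n-1), u(n-2)].
U = np.column_stack([np.roll(u, k) for k in range(M)])[M - 1:]
dv = d[M - 1:]

w_o = np.linalg.solve(U.T @ U, U.T @ dv)                # sample Wiener solution
e_o = dv - U @ w_o                                      # optimum estimation error
print(U.T @ e_o / len(e_o))                             # ~0: error orthogonal to each tap input
```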
10. Principle of Orthogonality
Corollary:
If the filter is operating in its optimum (MSE) condition, then
\[ E\{ y_o(n)\, e_o^{*}(n) \} = 0. \]
That is, when the filter operates in its optimum condition, the estimate of the desired response given by the filter output yo(n) and the corresponding estimation error eo(n) are orthogonal to each other.
11. Minimum Mean-Square Error
Let \( \hat{d}(n \mid \mathcal{U}_n) \) denote the estimate of the desired response that is optimized in the MSE sense, given the inputs u(n), u(n-1), ... that span the space \( \mathcal{U}_n \).
Then the error under optimal conditions is
\[ e_o(n) = d(n) - \hat{d}(n \mid \mathcal{U}_n), \qquad \text{or} \qquad d(n) = \hat{d}(n \mid \mathcal{U}_n) + e_o(n). \]
Also let the minimum MSE be \( J_{\min} = E\{ |e_o(n)|^2 \} \) (in general nonzero); then
\[ J_{\min} = \sigma_d^2 - \sigma_{\hat{d}}^2. \]
HW: try to derive this relation from the corollary.
12. Minimum Mean-Square Error
Normalized MSE: let
\[ \varepsilon = \frac{J_{\min}}{\sigma_d^2}, \qquad 0 \le \varepsilon \le 1. \]
Meaning:
If ε is zero, the optimum filter operates perfectly, in the sense that there is complete agreement between d(n) and \( \hat{d}(n \mid \mathcal{U}_n) \) (optimum case).
If ε is unity, there is no agreement whatsoever between d(n) and \( \hat{d}(n \mid \mathcal{U}_n) \) (worst case).
13. Wiener-Hopf Equations
From the principle of orthogonality we have
\[ E\{ u(n-k)\, e_o^{*}(n) \} = E\left\{ u(n-k) \left[ d^{*}(n) - \sum_{i=0}^{\infty} w_{oi}\, u^{*}(n-i) \right] \right\} = 0, \qquad k = 0, 1, 2, \ldots \]
Rearranging,
\[ \sum_{i=0}^{\infty} w_{oi}\, r(i-k) = p(-k), \qquad k = 0, 1, 2, \ldots \]
where
\[ r(i-k) = E\{ u(n-k)\, u^{*}(n-i) \}, \qquad p(-k) = E\{ u(n-k)\, d^{*}(n) \}. \]
These are the Wiener-Hopf equations (a set of infinitely many equations).
14. Wiener-Hopf Equations
Solution for the linear transversal (FIR) filter case: with M taps the equations reduce to
\[ \sum_{i=0}^{M-1} w_{oi}\, r(i-k) = p(-k), \qquad k = 0, 1, \ldots, M-1, \]
i.e., M simultaneous equations in the M unknown optimum coefficients.
15. Wiener-Hopf Equations (Matrix Form)
Let
\[ \mathbf{u}(n) = [\, u(n),\, u(n-1),\, \ldots,\, u(n-M+1) \,]^{T}. \]
Then the M-by-M correlation matrix of the input is
\[ \mathbf{R} = E\{ \mathbf{u}(n)\, \mathbf{u}^{H}(n) \}, \]
and the M-by-1 cross-correlation vector between the input and the desired response is
\[ \mathbf{p} = E\{ \mathbf{u}(n)\, d^{*}(n) \}. \]
16. Wiener-Hopf Equations (Matrix Form)
Then the Wiener-Hopf equations can be written compactly as
\[ \mathbf{R}\, \mathbf{w}_o = \mathbf{p}, \]
where \( \mathbf{w}_o = [\, w_{o0},\, w_{o1},\, \ldots,\, w_{o,M-1} \,]^{T} \) is composed of the optimum (FIR) filter coefficients.
The solution is found to be
\[ \mathbf{w}_o = \mathbf{R}^{-1}\, \mathbf{p}. \]
Note that R is almost always positive definite, and hence invertible; a small numerical sketch follows.
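A minimal numerical sketch of the matrix solution (not from the slides): estimate R and p by sample averages and solve Rw = p. The signal model, filter length, and noise level below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10_000, 4                       # number of samples, filter length

# Placeholder data: u(n) is the filter input, d(n) a desired response
# generated by a hypothetical FIR system plus white noise.
u = rng.standard_normal(N)
a = np.array([1.0, -0.5, 0.25])        # hypothetical "true" system
d = np.convolve(u, a, mode="full")[:N] + 0.1 * rng.standard_normal(N)

# Stack the tap-input vectors u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T.
U = np.column_stack([np.roll(u, k) for k in range(M)])[M - 1:]
dv = d[M - 1:]

# Sample estimates of R = E{u u^H} and p = E{u d^*}.
R = U.T @ U / len(U)
p = U.T @ dv / len(U)

# Wiener-Hopf solution w_o = R^{-1} p (solve the system, don't invert).
w_o = np.linalg.solve(R, p)

J_min = np.var(dv) - p @ w_o           # J_min = sigma_d^2 - p^H w_o
print(w_o, J_min)
```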
17. Error-Performance Surface
Substitute \( e(n) = d(n) - \sum_{k=0}^{M-1} w_k^{*} u(n-k) \) into \( J = E\{ e(n)\, e^{*}(n) \} \).
Rewriting,
\[ J(\mathbf{w}) = \sigma_d^2 - \sum_{k=0}^{M-1} w_k^{*}\, p(-k) - \sum_{k=0}^{M-1} w_k\, p^{*}(-k) + \sum_{k=0}^{M-1} \sum_{i=0}^{M-1} w_k^{*}\, w_i\, r(i-k). \]
18. Error-Performance Surface
J(w) is a quadratic function of the filter coefficients, hence a convex function; setting its gradient to zero,
\[ \nabla_k J = -2\, p(-k) + 2 \sum_{i=0}^{M-1} w_i\, r(i-k) = 0, \]
or
\[ \sum_{i=0}^{M-1} w_{oi}\, r(i-k) = p(-k), \qquad k = 0, 1, \ldots, M-1, \]
which are again the Wiener-Hopf equations.
19. Minimum value of Mean-Square Error
We calculated that \( d(n) = \hat{d}(n \mid \mathcal{U}_n) + e_o(n) \).
The estimate of the desired response is \( \hat{d}(n \mid \mathcal{U}_n) = \mathbf{w}_o^{H}\, \mathbf{u}(n) \).
Hence its variance is
\[ \sigma_{\hat{d}}^2 = \mathbf{w}_o^{H}\, \mathbf{R}\, \mathbf{w}_o = \mathbf{p}^{H}\, \mathbf{R}^{-1}\, \mathbf{p}. \]
Then
\[ J_{\min} = \sigma_d^2 - \sigma_{\hat{d}}^2 = \sigma_d^2 - \mathbf{p}^{H}\, \mathbf{R}^{-1}\, \mathbf{p}, \]
evaluated at wo (Jmin itself does not depend on w).
20. Canonical Form of the Error-Performance Surface
Rewrite the cost function in matrix form:
\[ J(\mathbf{w}) = \sigma_d^2 - \mathbf{w}^{H}\mathbf{p} - \mathbf{p}^{H}\mathbf{w} + \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}. \]
Next, express J(w) as a perfect square in w:
\[ J(\mathbf{w}) = \sigma_d^2 - \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p} + (\mathbf{w} - \mathbf{R}^{-1}\mathbf{p})^{H}\, \mathbf{R}\, (\mathbf{w} - \mathbf{R}^{-1}\mathbf{p}). \]
Then, by substituting \( \mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} \) and \( J_{\min} = \sigma_d^2 - \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p} \), in other words,
\[ J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_o)^{H}\, \mathbf{R}\, (\mathbf{w} - \mathbf{w}_o). \]
21. Canonical Form of the Error-Performance Surface
Observations:
J(w) is quadratic in w,
Minimum is attained at w=wo,
Jmin is the lower bound of J(w) and is always a positive quantity;
Jmin > 0 → \( \sigma_d^2 > \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p} \).
22. Canonical Form of the Error-Performance Surface
A transformation of variables may significantly simplify the analysis.
Use the eigendecomposition of R:
\[ \mathbf{R} = \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^{H}, \qquad \boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \ldots, \lambda_M). \]
Then
\[ J(\mathbf{w}) = J_{\min} + (\mathbf{w} - \mathbf{w}_o)^{H}\, \mathbf{Q}\,\boldsymbol{\Lambda}\,\mathbf{Q}^{H}\, (\mathbf{w} - \mathbf{w}_o). \]
Let the transformed coefficient vector be
\[ \mathbf{v} = \mathbf{Q}^{H} (\mathbf{w} - \mathbf{w}_o). \]
Substituting back into J gives the canonical form
\[ J(\mathbf{v}) = J_{\min} + \mathbf{v}^{H}\boldsymbol{\Lambda}\,\mathbf{v} = J_{\min} + \sum_{k=1}^{M} \lambda_k |v_k|^2. \]
The components of the transformed vector v define the principal axes of the surface.
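A tiny real-valued numerical check of the canonical form (R, p, σd², and the test point are placeholders, not the slides' values): the direct evaluation of J(w) and the canonical expression agree.

```python
import numpy as np

# Placeholder statistics for a 2-tap real-valued example.
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.5, 0.25])
sigma_d2 = 1.0

w_o = np.linalg.solve(R, p)
J_min = sigma_d2 - p @ w_o

def J(w):
    # J(w) = sigma_d^2 - 2 w.p + w^T R w (real-valued case)
    return sigma_d2 - 2 * w @ p + w @ R @ w

lam, Q = np.linalg.eigh(R)                  # R = Q diag(lam) Q^T

w = np.array([0.3, -0.2])                   # an arbitrary weight vector
v = Q.T @ (w - w_o)                         # principal-axis coordinates
print(J(w), J_min + np.sum(lam * v**2))     # the two values agree
```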
23. Canonical Form of the Error-Performance Surface
[Figure: contour plot of the error-performance surface. In the (w1, w2) plane the contours J(w) = c are ellipses centered at wo, where J(wo) = Jmin; after the transformation Q, the contours J(v) = c are aligned with the principal axes v1 and v2, whose curvatures are set by the eigenvalues λ1 and λ2, with the minimum Jmin at v = 0.]
24. Multiple Linear Regressor Model
The Wiener filter tries to match its coefficients to the model of the desired response d(n).
Assume the desired response is generated by
1. a linear model with coefficient vector a,
2. observed through noisy data d(n),
3. where the noise v(n) is additive and white with variance σv².
The model order is m, i.e.
\[ d(n) = \mathbf{a}^{H}\, \mathbf{u}_m(n) + v(n), \qquad \mathbf{a} \in \mathbb{C}^{m}. \]
What should the length of the Wiener filter be to achieve minimum MSE?
25. Multiple Linear Regressor Model
The variance of the desired response is
\[ \sigma_d^2 = \mathbf{a}^{H}\, \mathbf{R}_m\, \mathbf{a} + \sigma_v^2. \]
But we know that
\[ J_{\min}(M) = \sigma_d^2 - \mathbf{p}^{H}\, \mathbf{R}^{-1}\, \mathbf{p}, \]
where wo is the filter optimized w.r.t. MSE (the Wiener filter) of length M.
1. Underfitted model (M < m): performance improves quadratically with increasing M; worst case M = 0, for which \( J_{\min}(0) = \sigma_d^2 \).
2. Critically fitted model (M = m): \( \mathbf{w}_o = \mathbf{a} \), \( \mathbf{R} = \mathbf{R}_m \), and \( J_{\min}(m) = \sigma_v^2 \).
26. Multiple Linear Regressor Model
3. Overfitted model (M > m): \( J_{\min}(M) = \sigma_v^2 \); the optimum weights of the extra taps are zero, so a filter longer than the model does not improve performance.
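Summarizing how the minimum MSE depends on the filter length M for this model (the M = 0 and M ≥ m values follow from the slides above; for 0 < M < m the value lies between σv² and σd², strictly above σv² whenever the omitted model coefficients are nonzero):
\[ J_{\min}(M) = \begin{cases} \sigma_d^2 = \mathbf{a}^{H}\mathbf{R}_m\mathbf{a} + \sigma_v^2, & M = 0, \\ \sigma_d^2 - \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p}, & 0 < M < m, \\ \sigma_v^2, & M \ge m. \end{cases} \]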
27. Example
Let
the model length of the desired response d(n) be 3,
the autocorrelation matrix of the input u(n) be (for 3 consecutive samples)
The cross-correlation of the input and the (observable) desired
response be
The variance of the observable data (desired response) be
The variance of the additive white noise be
We do not know the values
28. Example
Question:
a) Find Jmin for a (Wiener) filter length of M=1,2,3,4
b) Draw the error-performance (cost) surface for M=2
c) Compute the canonical form of the error-performance surface.
Solution:
a) We know that \( \mathbf{w}_o = \mathbf{R}^{-1}\mathbf{p} \) and \( J_{\min} = \sigma_d^2 - \mathbf{p}^{H}\mathbf{R}^{-1}\mathbf{p} \); evaluate this for M = 1, 2, 3, 4, using the leading M-by-M block of R and the first M entries of p in each case.
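A sketch of the computation for part (a). The numerical values of R, p, and σd² from the slide are not reproduced here, so the arrays below are placeholders; only the procedure is the point.

```python
import numpy as np

# Placeholder statistics (NOT the slide's numbers): an autocorrelation
# matrix extended to 4x4 for M = 4, a cross-correlation vector, and sigma_d^2.
R4 = np.array([[1.1, 0.5, 0.1, 0.0],
               [0.5, 1.1, 0.5, 0.1],
               [0.1, 0.5, 1.1, 0.5],
               [0.0, 0.1, 0.5, 1.1]])
p4 = np.array([0.5, -0.4, 0.2, 0.0])
sigma_d2 = 1.0

for M in (1, 2, 3, 4):
    R, p = R4[:M, :M], p4[:M]
    w_o = np.linalg.solve(R, p)
    J_min = sigma_d2 - p @ w_o        # J_min = sigma_d^2 - p^H R^{-1} p
    print(f"M={M}: w_o={np.round(w_o, 3)}, J_min={J_min:.4f}")
```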
29. Example
Solution, b) For M = 2, plot the cost surface \( J(w_1, w_2) = \sigma_d^2 - \mathbf{w}^{H}\mathbf{p} - \mathbf{p}^{H}\mathbf{w} + \mathbf{w}^{H}\mathbf{R}\,\mathbf{w} \) over the (w1, w2) plane; it is an elliptic paraboloid whose minimum Jmin is attained at wo.
30. Example
Solution, c) We know that the canonical form is
\[ J(\mathbf{v}) = J_{\min} + \lambda_1 |v_1|^2 + \lambda_2 |v_2|^2, \]
where, for M = 2, \( \mathbf{R} = \mathbf{Q}\,\mathrm{diag}(\lambda_1, \lambda_2)\,\mathbf{Q}^{H} \) and \( \mathbf{v} = \mathbf{Q}^{H}(\mathbf{w} - \mathbf{w}_o) \).
Then the principal axes v1 and v2 correspond to the eigenvalues λ1 and λ2, and the surface bottoms out at Jmin.
31. Application – Channel Equalization
The transmitted signal x(n) passes through the dispersive channel, and a corrupted version (distorted by the channel and by additive noise) arrives at the receiver.
Problem: design a receiver filter so that we obtain a delayed version of the transmitted signal at its output.
Criterion: 1. Zero Forcing (ZF)
2. Minimum Mean Square Error (MMSE)
[Block diagram: x(n) passes through the channel h and additive noise to give y(n); y(n) passes through the filter w to give z(n); x(n) also passes through a delay δ to give x(n−δ); the error is ε(n) = x(n−δ) − z(n).]
32. Application – Channel Equalization
The MMSE cost function is
\[ J = E\{ |\varepsilon(n)|^2 \} = E\{ | x(n-\delta) - z(n) |^2 \}. \]
Filter output (convolution of the equalizer taps with the filter input):
\[ z(n) = \sum_{k=0}^{M-1} w_k^{*}\, y(n-k). \]
Filter input (convolution of the channel with the transmitted signal, plus noise):
\[ y(n) = \sum_{l} h_l\, x(n-l) + v(n). \]
33. Application – Channel Equalization
Combining the last two equations gives the compact form of the filter output
\[ z(n) = \mathbf{w}^{H}\, \mathbf{y}(n), \qquad \mathbf{y}(n) = \mathbf{H}\,\mathbf{x}(n) + \mathbf{v}(n), \]
where the Toeplitz matrix H, built from the channel taps, performs the convolution, and x(n) and v(n) are the stacked transmitted-signal and noise samples.
The desired signal is x(n−δ), i.e., \( x(n-\delta) = \mathbf{e}_{\delta}^{T}\, \mathbf{x}(n) \), where eδ is the unit vector selecting the entry of x(n) at delay δ.
34. Application – Channel Equalization
Rewriting the MMSE cost function with this notation,
\[ J = E\left\{ \left| \mathbf{e}_{\delta}^{T}\mathbf{x}(n) - \mathbf{w}^{H}\left( \mathbf{H}\mathbf{x}(n) + \mathbf{v}(n) \right) \right|^2 \right\}. \]
Expanding (data and noise are uncorrelated, E{x(n)v(k)} = 0 for all n, k) and re-expressing the expectations,
\[ J(\mathbf{w}) = \sigma_x^2 - \mathbf{w}^{H}\mathbf{p}_{\delta} - \mathbf{p}_{\delta}^{H}\mathbf{w} + \mathbf{w}^{H}\mathbf{R}_y\,\mathbf{w}, \qquad \mathbf{R}_y = E\{\mathbf{y}(n)\mathbf{y}^{H}(n)\}, \quad \mathbf{p}_{\delta} = E\{\mathbf{y}(n)\, x^{*}(n-\delta)\}. \]
35. Application – Channel Equalization
This is a quadratic (convex) function, so the gradient is zero at the minimum.
The solution is found as
\[ \mathbf{w}_o = \mathbf{R}_y^{-1}\, \mathbf{p}_{\delta}, \]
and Jmin is
\[ J_{\min}(\delta) = \sigma_x^2 - \mathbf{p}_{\delta}^{H}\, \mathbf{R}_y^{-1}\, \mathbf{p}_{\delta}. \]
Jmin depends on the design parameter δ (the equalizer delay).
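A small numerical sketch of the MMSE equalizer design, assuming a known channel, white unit-variance symbols, and white noise; the channel taps, lengths, and noise level are illustrative, not from the slides.

```python
import numpy as np
from scipy.linalg import toeplitz

# Illustrative setup: known channel, white symbols (sigma_x^2 = 1),
# white noise with variance sigma_v^2.
h = np.array([0.5, 1.0, -0.3])        # hypothetical channel impulse response
M, sigma_v2 = 8, 0.01                 # equalizer length, noise variance
L = len(h)

# H maps the stacked transmitted symbols onto the M received samples
# seen by the equalizer: y(n) = H x(n) + v(n), with H[i, j] = h[j - i].
col = np.r_[h[0], np.zeros(M - 1)]
row = np.r_[h, np.zeros(M - 1)]
H = toeplitz(col, row)                # M x (M+L-1) convolution matrix

R_y = H @ H.T + sigma_v2 * np.eye(M)  # E{y y^H} for white unit-power symbols
delta = 4                             # equalizer delay (design parameter)
p = H[:, delta]                       # E{y(n) x(n-delta)} = H e_delta

w_o = np.linalg.solve(R_y, p)         # MMSE (Wiener) equalizer
J_min = 1.0 - p @ w_o                 # sigma_x^2 - p^H R_y^{-1} p
print(w_o, J_min)
```

Sweeping delta over its admissible range and keeping the value with the smallest J_min is the usual way to pick the design parameter.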
36. Application – Linearly Constrained Minimum-Variance Filter
Problem:
1. We want to design an FIR filter which suppresses all frequency
components of the filter input except ωo, with a gain of g at ωo.
37. Application – Linearly Constrained Minimum-Variance Filter
Problem:
2. We want to design a beamformer which can resolve an
incident wave coming from angle θo (with a scaling factor g),
while at the same time suppress all other waves coming from
other directions.
38. Application – Linearly Constrained Minimum-Variance Filter
Although these problems are physically different, they are
mathematically equivalent.
They can be expressed as follows:
Suppress all components (freq. ω or dir. θ) of a signal while
setting the gain of a certain component constant (ωo or θo)
They can be formulated as a constrained optimization problem:
Cost function: variance of all components (to be minimized)
Constraint (equality): the gain of a single component has to be g.
Observe that there is no desired response!
40. Application – Linearly Constrained Minimum-Variance Filter
Cost function: the output power, \( J = \mathbf{w}^{H}\mathbf{R}\,\mathbf{w} \), which is quadratic and hence convex.
Constraint: linear, \( \mathbf{w}^{H}\mathbf{s}(\omega_o) = g \), where \( \mathbf{s}(\omega_o) = [\, 1,\, e^{-j\omega_o},\, \ldots,\, e^{-j(M-1)\omega_o} \,]^{T} \) is the steering vector (for the beamformer, \( \mathbf{s}(\theta_o) \) plays the same role).
The method of Lagrange multipliers can be utilized to solve the problem.
Solution: set the gradient of the Lagrangian to zero.
The optimum beamformer weights are then found from a set of equations similar to the Wiener-Hopf equations.
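One common way of writing the Lagrangian and its stationarity condition for this problem (the scaling and sign conventions of the multiplier vary between texts, so this is an assumed form rather than necessarily the slides' own):
\[ J(\mathbf{w}, \lambda) = \mathbf{w}^{H}\mathbf{R}\,\mathbf{w} + \mathrm{Re}\!\left[ \lambda^{*} \left( \mathbf{w}^{H}\mathbf{s}(\omega_o) - g \right) \right], \qquad \nabla_{\mathbf{w}^{*}} J = \mathbf{R}\,\mathbf{w} + \tfrac{1}{2}\lambda^{*}\, \mathbf{s}(\omega_o) = 0, \]
which, together with the constraint, leads to the solution on the next slide.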
41. Application – Linearly Constrained Minimum-Variance Filter
Rewriting the stationarity condition in matrix form shows that Rwo is proportional to the steering vector; hence
\[ \mathbf{w}_o = c\, \mathbf{R}^{-1}\, \mathbf{s}(\omega_o) \]
for some constant c determined by the Lagrange multiplier λ.
How do we find λ (equivalently c)? Use the linear constraint \( \mathbf{w}_o^{H}\mathbf{s}(\omega_o) = g \) to find it.
Therefore the solution becomes
\[ \mathbf{w}_o = \frac{g^{*}\, \mathbf{R}^{-1}\, \mathbf{s}(\omega_o)}{\mathbf{s}^{H}(\omega_o)\, \mathbf{R}^{-1}\, \mathbf{s}(\omega_o)}. \]
For θo, wo is the linearly constrained minimum-variance (LCMV) beamformer.
For ωo, wo is the linearly constrained minimum-variance (LCMV) filter.
42. Minimum-Variance Distortionless Response Beamformer/Filter
Distortionless → set g = 1; then
\[ \mathbf{w}_o = \frac{\mathbf{R}^{-1}\, \mathbf{s}(\theta_o)}{\mathbf{s}^{H}(\theta_o)\, \mathbf{R}^{-1}\, \mathbf{s}(\theta_o)}. \]
We can show that (HW)
\[ J_{\min} = \frac{1}{\mathbf{s}^{H}(\theta_o)\, \mathbf{R}^{-1}\, \mathbf{s}(\theta_o)}. \]
Jmin represents an estimate of the variance of the signal impinging on the antenna array along the direction θo.
Generalizing the result to any direction θ (or angular frequency ω) gives the minimum-variance distortionless response (MVDR) spectrum
\[ S_{\mathrm{MVDR}}(\theta) = \frac{1}{\mathbf{s}^{H}(\theta)\, \mathbf{R}^{-1}\, \mathbf{s}(\theta)}, \]
an estimate of the power of the signal coming from direction θ (or, in the filtering problem, from frequency ω).
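A small simulation sketch of the MVDR spectrum for a uniform linear array. The scenario is assumed (one source plus white noise, half-wavelength spacing); the array size, source angle, and noise level are illustrative, not from the slides.

```python
import numpy as np

M, snapshots = 8, 2000
rng = np.random.default_rng(1)

def steering(theta_deg):
    # Steering vector of a half-wavelength-spaced uniform linear array.
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

# One source at 20 degrees plus spatially white noise.
s_src = steering(20.0)
sig = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
noise = (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))) * 0.1
X = np.outer(s_src, sig) + noise               # array snapshots

R = X @ X.conj().T / snapshots                 # sample correlation matrix
R_inv = np.linalg.inv(R)

thetas = np.linspace(-90, 90, 361)
S = np.array([1.0 / np.real(steering(t).conj() @ R_inv @ steering(t)) for t in thetas])
print(thetas[np.argmax(S)])                    # the spectrum peaks near the source direction
```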