ECE 565
Digital Signal Processing

INSTRUCTOR

Prof. Marco F. Duarte, Marcus 215I, mduarte@ecs.umass.edu
Office Hours: Wednesdays and Thursdays 1:30pm-2:30pm or by appointment.

LECTURES

Tuesday and Thursday, 10:00am-11:15am, 323 Engineering Laboratory.

DESCRIPTION

ECE 565 is a newly designed advanced digital signal processing (DSP) course intended to follow an introductory DSP course. The central theme of the course is the application of tools from linear algebra to problems in signal processing and machine learning. Upon successful completion of this course, you should be able to:

1. describe a range of signal processing problems using the language of bases and vector spaces,

2. use the singular value decomposition to solve and analyze a range of least-squares problems,

3. efficiently compute the solutions to least-squares problems in structured/large-scale systems.

PREREQUISITES

Undergraduate Students: ECE 315, Signal Processing Methods, or both ECE 313, Signals and Systems, and ECE 214/314, Introduction to Probability and Random Processes.

Graduate Students: An introductory course in digital signal processing covering concepts such as Fourier transforms, filtering, and sampling. Students should also be familiar with the fundamentals of linear algebra and should be comfortable using matrices to represent systems of equations; prior familiarity with eigenvalues, eigenvectors, and eigenvalue decompositions will be extremely helpful. While most of the course adopts a deterministic perspective, many of the models and algorithms we will discuss also have alternative probabilistic interpretations, so familiarity with the basics of probability (especially random vectors) will help you gain a deeper appreciation for the material. Finally, students should have basic MATLAB programming skills.

TEXTBOOK

There are no mandatory textbooks. The instructor will make lecture notes available after each lecture. Several resources (some available on reserve at the Science Library) can be helpful in learning the material:

1. “Linear Algebra and its Applications” by Gilbert Strang, Harcourt, 1988.

2. “Computational Science and Engineering” by Gilbert Strang, Wellesley-Cambridge Press, 2007.

3. “Matrix Analysis” by Roger Horn and Charles Johnson, Cambridge U. Press, 1985.

4. “An Introduction to Hilbert Space” by Nicholas Young, Cambridge U. Press, 1988.

5. “Mathematical Methods and Algorithms for Signal Processing” by Todd Moon and Wynn Stirling, Prentice Hall, 2000.

6. “Foundations of Signal Processing” by Martin Vetterli, Jelena Kovacevic, and Vivek Goyal, Cambridge U. Press, 2014.

7. “Statistical Signal Processing: Detection, Estimation, and Time Series Analysis” by Louis Scharf, Pearson, 1991.

8. “Introduction to Applied Linear Algebra - Vectors, Matrices, and Least Squares” by Stephen Boyd and Lieven Vandenberghe, Cambridge U. Press, 2019. (Available online for free)

LECTURE SCHEDULE (TENTATIVE)

1. Signal representations in vector spaces
   (a) Introduction to discretizing signals using a basis: The Shannon-Nyquist sampling theorem
   (b) Linear vector spaces, linear independence, and basis expansions
   (c) Norms and inner products
   (d) Orthobases and the reproducing formula
   (e) Parseval's theorem and the general discretization principle
   (f) Important bases: Fourier, discrete cosine, lapped orthogonal, splines, wavelets
   (g) Signal approximation in an inner product space
   (h) Gram-Schmidt and the QR decomposition
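As a preview of this unit, the ideas of orthobases, the reproducing formula, and Parseval's theorem can be sketched in a few lines. The fragment below is illustrative only (the course itself assumes MATLAB; this is a NumPy translation, and all variable names are hypothetical): it builds an orthonormal basis via the QR decomposition (matrix-form Gram-Schmidt), expands a signal in that basis, and verifies that the expansion preserves both the signal and its energy.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))   # 8 generic (linearly independent) vectors as columns
Q, _ = np.linalg.qr(A)            # Gram-Schmidt in matrix form: columns of Q are an orthobasis

x = rng.standard_normal(8)        # a "signal" in R^8
alpha = Q.T @ x                   # expansion coefficients alpha_k = <x, q_k>
x_rec = Q @ alpha                 # reproducing formula: x = sum_k alpha_k q_k

print("reconstruction error:", np.linalg.norm(x - x_rec))
print("Parseval gap:", abs(np.sum(alpha**2) - np.sum(x**2)))
```

Both printed quantities are zero up to floating-point precision, which is exactly the content of the reproducing formula and Parseval's theorem for an orthobasis.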

2. Linear inverse problems
   (a) Introduction to linear inverse problems, examples
   (b) The singular value decomposition (SVD)
   (c) Least-squares solutions to inverse problems and the pseudo-inverse
   (d) Stable inversion and regularization
   (e) Weighted least-squares and linear estimation
   (f) Least-squares with linear constraints
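The central computation of this unit, solving a least-squares problem through the SVD, can be sketched as follows (an illustrative NumPy fragment, not course code; the course assumes MATLAB): for a tall matrix A, the minimizer of ||Ax - b|| is x* = V diag(1/sigma) U^T b, i.e. the pseudo-inverse applied to b.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))   # overdetermined system: 20 equations, 5 unknowns
b = rng.standard_normal(20)

# Least-squares solution via the SVD: x* = V diag(1/sigma) U^T b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# Same answer as the library's least-squares solver
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print("difference from lstsq:", np.linalg.norm(x_svd - x_ref))
```

Dividing by the singular values also makes the role of regularization visible: tiny sigma amplify noise in b, which is what "stable inversion" addresses.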

3. Matrix approximation using least-squares
   (a) Low-rank approximation of matrices using the SVD
   (b) Total least-squares
   (c) Principal components analysis
   (d) Signal and noise subspaces in array processing
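Low-rank approximation via the SVD, the workhorse of this unit, can be previewed in a few lines (illustrative NumPy, not course material): truncating the SVD to its r largest singular values yields the best rank-r approximation in Frobenius norm, with error equal to the energy in the discarded singular values (the Eckart-Young theorem).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 2                                          # target rank
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]    # best rank-r approximation (Frobenius norm)

# Eckart-Young: the error equals the energy in the discarded singular values
err = np.linalg.norm(A - A_r, "fro")
print("approximation error:", err)
print("sqrt of discarded energy:", np.sqrt(np.sum(s[r:] ** 2)))
```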

4. Computing the solutions to least-squares problems
   (a) Cholesky and LU decompositions
   (b) Structured matrices: Toeplitz, diagonal plus low-rank, banded systems
   (c) Large-scale systems: Steepest descent
   (d) Large-scale systems: The conjugate gradient method
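The large-scale methods in this unit avoid factoring the matrix altogether: they only require matrix-vector products. A minimal sketch of the conjugate gradient method in NumPy (illustrative only; names and defaults are my own, not course code):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A using only products A @ p."""
    x = np.zeros_like(b)
    r = b - A @ x                              # residual
    p = r.copy()                               # search direction
    for _ in range(max_iter or 2 * len(b)):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)             # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)       # keeps successive directions A-conjugate
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = M.T @ M + 6 * np.eye(6)                    # symmetric positive definite test matrix
b = rng.standard_normal(6)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```

Unlike steepest descent, the A-conjugacy of the search directions guarantees convergence in at most n iterations in exact arithmetic.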

5. Low-rank updates for streaming solutions to least-squares problems
   (a) Recursive least-squares
   (b) The Kalman filter
   (c) Adaptive filtering using LMS
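The streaming theme of this unit can be previewed with a bare-bones recursive least-squares update (an illustrative NumPy sketch with hypothetical names, not course code): each new measurement is folded into the running estimate with a rank-one Sherman-Morrison update, so no system is re-solved from scratch.

```python
import numpy as np

def rls_update(x_hat, P, a, y):
    """Fold one measurement y = a^T x + noise into the running least-squares
    estimate via a rank-one (Sherman-Morrison) update -- no re-factorization."""
    k = P @ a / (1.0 + a @ P @ a)            # gain vector
    x_hat = x_hat + k * (y - a @ x_hat)      # correct estimate by the innovation
    P = P - np.outer(k, P @ a)               # rank-one downdate of the inverse Gram matrix
    return x_hat, P

rng = np.random.default_rng(4)
x_true = rng.standard_normal(4)
x_hat, P = np.zeros(4), 1e6 * np.eye(4)      # diffuse prior
for _ in range(200):
    a = rng.standard_normal(4)
    x_hat, P = rls_update(x_hat, P, a, a @ x_true)   # noiseless measurements
print("estimation error:", np.linalg.norm(x_hat - x_true))
```

The Kalman filter generalizes this same update to a state that evolves over time.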

6. Beyond least-squares (topics as time permits)
   (a) Approximation in non-Euclidean norms
   (b) Regularization using non-Euclidean norms
   (c) Recovering vectors from incomplete information (compressed sensing)
   (d) Recovering matrices from incomplete information (matrix completion)
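One way to see why non-Euclidean regularization matters: an l1 penalty can recover a sparse vector from fewer equations than unknowns, where plain least-squares cannot. The sketch below uses iterative soft-thresholding (ISTA), a standard algorithm for l1-regularized least-squares; whether the course covers this particular algorithm is not stated in the syllabus, and the fragment is an illustrative NumPy sketch only.

```python
import numpy as np

def ista(A, b, lam, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = largest squared singular value
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))    # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60))             # underdetermined: 30 equations, 60 unknowns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]        # a 3-sparse signal
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The soft-threshold step is exactly where the non-Euclidean (l1) norm enters; replacing it with the identity recovers plain gradient descent on the least-squares objective.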