Instructor: Yaniv Plan
Office: 1219 Math Annex
Email: yaniv (at) math (dot) ubc (dot) ca
Lectures: TuTh, 3:30 – 5:00pm
Office hours: By appointment.
Prerequisites: The course will assume knowledge of linear algebra (and functional analysis) as well as a strong probabilistic intuition. For example, I will assume you have familiarity with stochastic processes, norms, singular values, and Lipschitz functions.
Overview: We study the tools and concepts in high-dimensional probability that support the theoretical foundations of compressed sensing; these tools also apply to many other problems in machine learning and data science.
Detailed course outline: See here. (We probably won’t cover all of those topics.)
Textbook: There is no required textbook. The following references cover some of the material, and they are available online:
R. Vershynin, High-dimensional probability. This book has the most overlap with our course. (Our course begins by following an earlier course of Vershynin’s on high-dimensional probability.)
Grading: Students will complete a class project (ideally in teams of 3 to 4). Instructions:
Determine a “mini-research problem” related to the class material that you wish to investigate. This should have a theory component, but may also include numerical simulations. It may also consist largely of literature review if your problem is (mostly) solved in prior literature. In the latter case, focus on clarifying what open questions remain. Please run your idea(s) by me by Feb 27.
Make what progress you can towards solving it.
Write up your results in about 3-5 pages, plus references (and pictures). Write-ups are due on April 20. Here is an example.
Give a 20 minute presentation of your results in class (this will happen in late March and early April). All members of the group should contribute to the presentation and discuss the parts of the project that they worked on most.