Graduate students from UIC, NU, TTIC, UC, and IIT may request to take these courses and receive credit at their home institutions (or to simply audit these courses). Note that credits received for taking these courses might not correspond exactly with the listed course and will be determined in consultation with the IDEAL site director of the graduate student’s home institution. Each course will run according to the schedule of the respective university offering the course. You may request to register for courses outside your home institution by filling out this form.
[To facilitate receiving credit at your home institution, please fill out the following form.]
University of Illinois at Chicago (UIC):
MCS 501 Computer Algorithms II (in-person)
Lev Reyzin Addams Hall 307
MWF 9:00–9:50 am
This course will introduce students to the fundamental ideas underlying modern algorithmic techniques. Students will learn to design and analyze approximation algorithms and randomized algorithms, along with other advanced topics.
ECE 594 Coding Theory (in-person)
Natasha Devroye Burnham Hall 304
TR 12:30–1:45 pm
Graduate-level introduction to coding theory, balancing theory and programming practice. Topics include classical algebraic codes (BCH, RS, RM), convolutional codes, and trellis-coded modulation, as well as more modern iterative codes (Turbo, LDPC) and the latest polar codes. Forays into research applications at UIC, including coding for distributed storage and deep-learned error-correcting codes.
ECE 491 Introduction to Neural Networks (hybrid)
Ahmet Enis Cetin Online / Thomas Beckham Hall 180G
TR 12:30–1:45 pm
An introductory course on neural networks.
ECE 491 Information and Learning (in-person)
Mesrob Ohannessian Lecture Center A2
TR 9:30–10:45 am
A first mathematical look at what information is, what it means to learn, and how the two are related. This course covers the basics of statistical inference and learning under the lens of information theory. This means that in addition to specific methods and algorithms that acquire knowledge from observations, this course also highlights the limits of what is possible and explains what it would take to reach them. Concepts are illustrated with applications. Topics covered: Statistical Inference, Entropy and Compression, Concentration Inequalities, Efficiency and Universality, PAC Learning, Model Complexity, Regularization, Mutual Information and Lower Bounds.
Northwestern University (NU):
ELEC_ENG 428 Information Theory and Learning (in-person)
Dongning Guo Swift Hall 107
[Winter] MW 2:00–3:20 pm
COMP_SCI 496 Foundations of Quantum Computing (in-person)
Aravindan Vijayaraghavan Tech LR 5
[Winter] TR 9:30–10:50 am
STAT 430-2 Probability for Statistical Inference 2 (in-person)
Miklos Racz Annenberg Hall G29
[Winter] TR 11:00 am–12:20 pm
(description pending)
(Spring courses TBD)
Toyota Technological Institute at Chicago (TTIC):
TTIC 31020 Intro to Machine Learning (in-person)
Nati Srebro TTIC 530
[Winter] TR 1:30–2:50 pm (Lectures); F 1:30–2:30 pm (Tutorial)
PhD-level conceptual and practical introduction to modern machine learning.
TTIC 31010 Algorithms (in-person)
CMSC 37000-1
Yury Makarychev TTIC 530
[Winter] TR 1:30–2:50 pm (Lectures); F 1:30–2:30 pm (Tutorial)
A PhD-level course on algorithms.
TTIC 31260 Algorithmic Game Theory (in-person)
Avrim Blum TTIC 530
[Spring] MW 1:30–2:50 pm
A PhD-level course on Algorithmic Game Theory. Topics include: solution concepts in game theory, such as Nash equilibrium and correlated equilibrium, and connections to learning theory; the price of anarchy in routing and congestion games; computational social choice: the axiomatic approach to ranking systems and crowdsourcing, manipulation; algorithmic mechanism design, focusing on truthful approximation algorithms; market design, with an emphasis on optimization and incentives; diffusion of technologies and influence maximization in social networks; and procedures for fair division, such as cake cutting algorithms.
TTIC 31180 Probabilistic Graphical Models (in-person)
Matt Walter TTIC 530
[Spring] TR 9:30–10:50 am
Many problems in machine learning, computer vision, natural language processing, robotics, computational biology, and beyond require modeling complex interactions between large, heterogeneous collections of random variables. Graphical models combine probability theory and graph theory to provide a unifying framework for representing these relationships in a compact, structured form. Probabilistic graphical models decompose multivariate joint distributions into a set of local relationships among small subsets of random variables via a graph. These local interactions result in conditional independencies that afford efficient learning and inference algorithms. Moreover, their modular structure provides an intuitive language for expressing domain-specific knowledge, and facilitates the transfer of modeling advances to new applications. This graduate-level course will provide a strong foundation for learning and inference with probabilistic graphical models. The course will first introduce the underlying representational power of graphical models, including Bayesian and Markov networks, and dynamic Bayesian networks. Next, the course will investigate contemporary approaches to statistical inference, both exact and approximate. The course will then survey state-of-the-art methods for learning the structure and parameters of graphical models.
Illinois Institute of Technology (IIT):
CS 595 Trustworthy Machine Learning (hybrid)
Binghui Wang Stuart Building 113
MW 10:00–11:15 am
Machine learning (ML), and artificial intelligence (AI) in general, has achieved many breakthroughs in both academia and industry and has changed our everyday lives. On the other hand, recent studies show that ML/AI techniques can pose serious security/privacy threats in the hands of an attacker, and that ML/AI itself is vulnerable to adversarial security/privacy attacks. Understanding ML/AI in adversarial settings is therefore extremely important. In this course, we will mainly follow two directions: 1) security and privacy for ML/AI; and 2) ML/AI for security and privacy. In 1), we will study the security/privacy vulnerabilities of an ML/AI system itself, as well as how to mitigate these vulnerabilities. In 2), we will study how an attacker can leverage ML/AI to mount security/privacy attacks, as well as how to design methods to alleviate these threats.
MATH 569 Statistical Learning (in-person)
Ming Zhong Perlstein Hall 108
TR 10:00–11:15 am
University of Chicago (UC):
CMSC 35401 The Interplay of Learning and Game Theory (in-person)
Haifeng Xu Ryerson Physical Laboratory 255
[Winter] R 2:00–4:50 pm (with a 15-minute break in the middle)
This is a graduate-level course covering topics at the interface between machine learning and game theory. In many economic or game-theoretic applications, the problem either lacks sufficient data or is too complex; in such cases, machine learning theory helps to design more realistic or practical algorithms. Conversely, in many applications of machine learning or prediction, the algorithms have to obtain data from self-interested agents whose objectives are not aligned with those of the algorithm designer. In those settings, the algorithms have to take into account these agents’ strategic behaviors. These problems form an intriguing interplay between machine learning and game theory and have attracted a lot of recent research attention. This course will discuss several recent research directions in this space. Our goal is to cover (some selected) basic results in these directions. Along the way, we will also cover necessary basics of game theory, learning theory, mechanism design, and prediction and information aggregation.
STAT 37786 Topics in Mathematical Data Science: Spectral Methods and Nonconvex Optimization (in-person)
Cong Ma Jones Laboratory 226
[Winter] TR 9:30–10:50 am
Traditional supervised learning assumes that the training and testing distributions are the same. Such a no-distribution-shift assumption, however, is frequently violated in practice. In this course, we survey topics in machine learning in which distribution shifts naturally arise. Possible topics include supervised learning with covariate shift, off-policy evaluation in reinforcement learning, and offline reinforcement learning.
STAT 28000 Optimization (in-person)
Lek-Heng Lim (location TBD)
[Spring] (time TBD)
Undergraduate course on optimization.