
10601 - Introduction to Machine Learning

The course, taught by Professors Matt Gormley and Roni Rosenfeld, covers the fundamentals required for understanding the advanced topics of machine learning. It covers the theory and practical algorithms for machine learning from a variety of perspectives, including topics such as Bayesian networks, decision tree learning, Support Vector Machines, statistical learning methods, unsupervised learning, and reinforcement learning. The course also covers theoretical concepts such as inductive bias, the PAC learning framework, Bayesian learning methods, margin-based learning, and Occam’s Razor. Programming assignments include hands-on experiments with various learning algorithms. This course is designed to give a graduate-level student a thorough grounding in the methodologies, technologies, mathematics, and algorithms currently needed by people who do research in machine learning.

Learning Outcomes: By the end of the course, students should be able to:

• Implement and analyze existing learning algorithms, including well-studied methods for classification, regression, structured prediction, clustering, and representation learning
• Integrate multiple facets of practical machine learning in a single system: data preprocessing, learning, regularization and model selection
• Describe the formal properties of models and algorithms for learning and explain the practical implications of those results
• Compare and contrast different paradigms for learning (supervised, unsupervised, etc.)
• Design experiments to evaluate and compare different machine learning techniques on real-world problems
• Employ probability, statistics, calculus, linear algebra, and optimization in order to develop new predictive models or learning methods
• Given a description of an ML technique, analyze it to identify (1) the expressive power of the formalism; (2) the inductive bias implicit in the algorithm; (3) the size and complexity of the search space; (4) the computational properties of the algorithm; (5) any guarantees (or lack thereof) regarding termination, convergence, correctness, accuracy, or generalization power.

For more details about topics covered, see the Schedule page.

Major Assignments and Projects (built from scratch):

• Decision tree for binary classification, implemented with the ID3 algorithm
• Binary logistic regression for sentiment analysis on movie reviews
• Neural network with one hidden layer, with the derivatives worked out by hand and computed via backpropagation
• Hidden Markov Model (HMM) that predicts hidden states using the Viterbi algorithm
• Mountain Car (reinforcement learning)
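To give a flavor of the first project: ID3 chooses each split by information gain, the reduction in label entropy obtained by splitting on a feature. A minimal sketch in Python (the function names and toy data are illustrative, not the assignment's actual code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(xs, ys):
    """Reduction in label entropy from splitting on feature values xs."""
    n = len(ys)
    gain = entropy(ys)
    for v in set(xs):
        subset = [y for x, y in zip(xs, ys) if x == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: the feature perfectly separates the labels,
# so the gain equals the full label entropy of 1 bit.
xs = [0, 0, 1, 1]
ys = ["-", "-", "+", "+"]
print(information_gain(xs, ys))  # 1.0
```

ID3 applies this criterion recursively: pick the feature with the highest gain, split, and repeat on each child until the labels are pure or no features remain.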
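The logistic regression project boils down to minimizing the log loss by stochastic gradient descent, where the gradient with respect to the logit is simply prediction minus label. A minimal sketch, using hypothetical sparse feature dicts to stand in for the assignment's bag-of-words features:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_logistic(data, dim, lr=0.1, epochs=100):
    """Binary logistic regression trained by SGD.
    data: list of (sparse feature dict {index: value}, label in {0, 1})."""
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(b + sum(w[i] * v for i, v in x.items()))
            g = p - y  # gradient of the log loss w.r.t. the logit
            b -= lr * g
            for i, v in x.items():
                w[i] -= lr * g * v
    return w, b

# Hypothetical toy data: feature 0 ~ a positive word, feature 1 ~ a negative word.
data = [({0: 1.0}, 1), ({1: 1.0}, 0)]
w, b = sgd_logistic(data, dim=2)
print(sigmoid(b + w[0]) > 0.5)  # a "positive" review scores above 0.5
```

Because the updates only touch indices present in each feature dict, this scales naturally to high-dimensional, sparse text features.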
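For the HMM project, the Viterbi algorithm recovers the most probable hidden-state sequence by dynamic programming: at each time step it keeps, for every state, the probability of the best path ending there plus a backpointer. A minimal sketch with a toy weather model (the states and probabilities are illustrative, not the assignment's data):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))
# ['Sunny', 'Rainy', 'Rainy']
```

In practice the assignment-scale version would work in log space to avoid underflow on long sequences; the multiplicative form above keeps the recurrence easy to read.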