4th Workshop on Geometry and Machine Learning
Thursday, June 20, 2019 | Portland, OR, USA
Organized by Jinhui Xu and Jeff Phillips.
Machine learning (broadly defined) concerns techniques that can learn from and make predictions on data. Such algorithms are built to exploit useful patterns in the input data, which can usually be stated in terms of geometry (e.g., as problems in a high-dimensional feature space). Hence computational geometry plays a crucial and natural role in machine learning. Importantly, geometric algorithms often come with quality-guaranteed solutions when dealing with high-dimensional data. The computational geometry community has many researchers with deep knowledge of high-dimensional geometry, which could have a great impact on machine learning and other data-related fields.

This workshop is intended to provide a forum for those working in computational geometry, machine learning, and the various theoretical and algorithmic challenges at their interface, and to promote interaction between the two communities. To this end, the workshop will consist of two invited talks as well as several contributed talks. The invited talks will mainly serve as tutorials on applications of geometric algorithms in machine learning. Such interaction will stimulate those working in both fields, and the resulting synergy can be expected to produce many interesting new geometric problems, concepts, and ideas, opening up new vistas for the computational geometry community.

This workshop is being held as part of CG Week 2019 (June 18-21, 2019, in Portland, OR, USA), which also includes the International Symposium on Computational Geometry (SoCG).

The workshop will be held in room Maseeh EB 103.
Talks
2:30-3:15 Tutorial: Jeff Phillips (University of Utah) A Primer on the Geometry in Machine Learning
3:20-3:40 Alejandro Flores-Velazco (University of Maryland) Condensation for the Approximate Nearest-Neighbor Rule
3:40-4:00 Wai Ming Tai (University of Utah) Relative Error RKHS Embeddings for Gaussian Kernels
4:30-5:15 Invited: Thomas G. Dietterich (Oregon State University) Approaches to Robust Artificial Intelligence: Can Geometry Help?
5:20-5:40 Marc Khoury (UC Berkeley) On the Geometry of Adversarial Examples
5:40-6:00 Chao Chen (Stony Brook University) A Topological Regularizer for Classifiers via Persistent Homology

Contributed Talks: To submit a contributed talk to be considered for presentation, send an email to WoGeomML@gmail.com with an abstract (e.g., 2 pages) or a link to a permanent, publicly available version (e.g., at arXiv.org). The email should list the authors and name the person who will present.
Contributions were accepted until March 30, 2019. Submissions are now closed, and the accepted contributed talks are listed below.
Tentative Full Schedule
2:30-3:15
Tutorial: A Primer on the Geometry in Machine Learning (SLIDES)
Jeff M. Phillips
University of Utah
Machine Learning is a discipline filled with many geometric algorithms, the central task of which is usually classification. These varied approaches all take as input a set of n points in d dimensions, each with a label. In learning, the goal is to use this input data to build a function which accurately predicts a label on new data drawn from the same unknown distribution as the input data. The main differences among the many algorithms largely result from the chosen class of functions considered. This talk will take a quick tour through many approaches, from simple to complex and modern, and show the geometry inherent at each step. Pit stops will include connections to geometric data structures, duality, random projections, range spaces, and coresets.
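
As a small, self-contained illustration of the geometric setup the abstract describes (not taken from the talk materials), the sketch below uses a 1-nearest-neighbor rule, one of the simplest geometric classifiers: a query point receives the label of the Voronoi cell of the training data it falls in. The synthetic data and all parameters are purely illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# n labeled points in d dimensions: two Gaussian clusters with labels 0 and 1.
n, d = 100, 2
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(3.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def nn_predict(query, X, y):
    """Predict the label of `query` as the label of its nearest training point
    (Euclidean distance), i.e., the Voronoi cell of X that `query` lands in."""
    dists = np.linalg.norm(X - query, axis=1)
    return y[np.argmin(dists)]

# New points drawn from (roughly) the same distribution as the training data.
print(nn_predict(np.array([0.2, -0.1]), X, y))  # likely label 0
print(nn_predict(np.array([2.8, 3.1]), X, y))   # likely label 1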

Bio: Dr. Phillips is an Associate Professor in the School of Computing at the University of Utah. He received a BS in Computer Science and a BA in Math from Rice University in 2003, and a PhD in Computer Science from Duke University in 2009. He has been an NSF GRF, a CI Fellow, and a CAREER Award recipient. He serves as the director of the new Data Science program at the University of Utah. He is writing a new book on the Mathematical Foundations of Data Analysis.

3:20-4:00
Contributed talks
Hu Ding
Greedy Is Good, But Needs Randomization (canceled due to a visa delay)
Alejandro Flores-Velazco
Condensation for the Approximate Nearest-Neighbor Rule
Wai Ming Tai
Relative Error RKHS Embeddings for Gaussian Kernels

4:00-4:30 (Maseeh Atrium) : Coffee Break

4:30-5:20
Approaches to Robust Artificial Intelligence: Can Geometry Help?
Thomas G. Dietterich
Oregon State University
Advances in machine learning are encouraging high-stakes applications of this emerging technology. However, machine learning can be very brittle. How can we convert it into a robust technology? This talk will review some of the approaches being pursued and then focus on methods for anomaly detection. I'll describe some opportunities to apply geometric techniques and make a plea for help.
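
As one small, hedged illustration of how geometry can enter anomaly detection (this is an assumption of the editor, not material from the talk), the sketch below scores a query point by its distance to its k-th nearest neighbor in the nominal training data; far-away points get large scores. The data, dimensions, and choice of k are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(0.0, 1.0, (200, 3))   # nominal data: 200 points in 3 dimensions

def knn_anomaly_score(query, X, k=5):
    """Distance from `query` to its k-th nearest neighbor in X; larger = more anomalous."""
    dists = np.sort(np.linalg.norm(X - query, axis=1))
    return dists[k - 1]

print(knn_anomaly_score(np.zeros(3), X_train))      # small score: looks nominal
print(knn_anomaly_score(np.full(3, 6.0), X_train))  # large score: likely an anomaly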

Bio: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His research is motivated by challenging real world problems with two areas of special focus: robust artificial intelligence and ecological sustainability. He is best known for his work on ensemble methods in machine learning including the development of error-correcting output coding. Dietterich has also invented important reinforcement learning algorithms including the MAXQ method for hierarchical reinforcement learning.
Dietterich has devoted many years of service to the research community. He is a former President of the Association for the Advancement of Artificial Intelligence, and the founding president of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He led the writing of the machine learning component of the NSF's 20-year Roadmap for AI Research. Dietterich is a Fellow of the ACM, AAAI, and AAAS.

5:20-6:00
Contributed Talks
Marc Khoury
On the Geometry of Adversarial Examples
Chao Chen
A Topological Regularizer for Classifiers via Persistent Homology


The previous versions of this workshop were:
  • Workshop on Geometry and Machine Learning
  • 2nd Workshop on Geometry and Machine Learning
  • 3rd Workshop on Geometry and Machine Learning