7th Workshop on Geometry and Machine Learning
Thursday June 15, 2023 | TBA | Dallas, TX, USA (with remote option)

Organized by Hubert Wagner and Jeff M. Phillips.
Machine learning (broadly defined) concerns techniques that can learn from and make predictions on data. Such algorithms are built to exploit useful patterns in the input data, and these patterns can usually be stated in geometric terms (e.g., as problems in a high-dimensional feature space). Hence computational geometry plays a crucial and natural role in machine learning. Importantly, geometric algorithms often come with quality guarantees, even for high-dimensional data. The computational geometry community has many researchers with deep expertise in high-dimensional geometry, and this expertise could have a great impact on machine learning and other data-related fields.

This workshop is intended to provide a forum for those working in computational geometry, machine learning, and the theoretical and algorithmic challenges at their interface, and to promote interaction between these communities. To this end, the workshop will consist of an invited talk and several contributed talks. The invited talk will largely serve as a tutorial on applications of geometric algorithms in machine learning. We expect this interaction to stimulate researchers in both fields, and that the resulting synergy will generate many new and interesting geometric problems, concepts, and ideas, opening up new vistas for the computational geometry community.

This workshop is being held as part of CG Week 2023 (June 15, 2023 in Dallas, TX, USA) which also includes the International Symposium on Computational Geometry (SoCG).




Contributed Talks
14:40-15:00 Bei Wang (University of Utah) Topology of Artificial Neuron Activations
with Archit Rathore, Yichu Zhou, and Vivek Srikumar
15:00-15:20 Arnur Nigmetov (Lawrence Berkeley National Lab) Topological Optimization with Big Steps
with Dmitriy Morozov
15:20-15:40 Bala Krishnamoorthy (Washington State University, Vancouver) A Normalized Bottleneck Distance on Persistence Diagrams and Homology Preservation under Dimension Reduction
with Nathan May
Invited Talk
15:40-16:30 Anshumali Shrivastava (Rice University) Revisiting the Economics of Large Language Models with Neural Scaling Laws and Dynamic Sparsity
Anshumali Shrivastava is an associate professor in the computer science department at Rice University. He is also the Founder and CEO of ThirdAI Corp, a startup focused on democratizing Mega-AI models through "dynamic sparsity". His broad research interests include probabilistic algorithms for resource-frugal deep learning. In 2018, Science News named him one of the Top-10 scientists under 40 to watch. He is a recipient of the National Science Foundation CAREER Award, a Young Investigator Award from the Air Force Office of Scientific Research, a machine learning research award from Amazon, and a Data Science Research Award from Adobe. He has won numerous paper awards, including Best Paper Awards at NIPS 2014 and MLSys 2022, and the Most Reproducible Paper Award at SIGMOD 2019. His work on efficient machine learning technologies on CPUs has been covered by the popular press, including the Wall Street Journal, the New York Times, TechCrunch, NDTV, Engadget, Ars Technica, and others.

Neural scaling laws informally state that increased model size and data automatically improve AI. However, we have reached a tipping point where the cost and energy associated with AI are becoming prohibitive.

This talk will demonstrate algorithmic progress that can exponentially reduce the compute and memory cost of neural network training and inference using "dynamic sparsity". Dynamic sparsity, unlike static sparsity, aligns with neural scaling laws: it does not reduce the power of neural networks while cutting the number of FLOPs required by neural models by 99% or more. We will show how data structures, particularly randomized hash tables, can be used to design an efficient "associative memory" that reduces the number of multiplications needed to train a neural network. Current implementations of this idea challenge the prevailing belief in the community that specialized processors like GPUs are vastly superior to CPUs for training large neural networks. The resulting algorithm is orders of magnitude cheaper and more energy-efficient. Our careful implementations can train billion-parameter recommendation and language models on commodity desktop CPUs significantly faster than top-of-the-line TensorFlow alternatives on the most powerful A100 GPU clusters, with the same or better accuracy.

We will show some demos, including how to train and fine-tune (with RLHF) a billion-parameter language model from scratch on a laptop for search, discovery, and summarization.
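As an aside for readers unfamiliar with the hashing idea in this abstract, the following is a minimal, illustrative Python sketch (not the speaker's actual system): SimHash-style signed random projections serve as a randomized hash table ("associative memory") over a layer's neurons, so a forward pass only computes dot products for the few neurons that collide with the input. Names such as HashedLayer are hypothetical, and production implementations such as SLIDE use several hash tables, take unions of buckets, and periodically rebuild the tables as weights change during training.

    import numpy as np
    from collections import defaultdict

    class HashedLayer:
        """Illustrative sketch of hash-based dynamic sparsity for one dense layer."""

        def __init__(self, weights, num_bits=8, seed=0):
            # weights: (num_neurons, dim) weight matrix of a fully connected layer
            rng = np.random.default_rng(seed)
            self.weights = weights
            # Random hyperplanes defining a SimHash (signed random projection) code
            self.planes = rng.standard_normal((num_bits, weights.shape[1]))
            # Bucket -> list of neuron indices whose weight vectors hash there
            self.table = defaultdict(list)
            for j, w in enumerate(weights):
                self.table[self._hash(w)].append(j)

        def _hash(self, x):
            # Sign pattern of the random projections, used as the bucket key
            return tuple((self.planes @ x > 0).astype(int))

        def forward_sparse(self, x):
            # Only neurons colliding with x are activated; all others output 0,
            # so only a small fraction of the multiplications are performed.
            active = self.table.get(self._hash(x), [])
            out = np.zeros(len(self.weights))
            out[active] = self.weights[active] @ x
            return out, active

    # Tiny usage example (hypothetical sizes)
    dim, n_neurons = 64, 1000
    W = np.random.default_rng(1).standard_normal((n_neurons, dim))
    layer = HashedLayer(W)
    x = np.random.default_rng(2).standard_normal(dim)
    y, active = layer.forward_sparse(x)
    print(f"computed {len(active)} of {n_neurons} neuron outputs")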


Coffee Break
Contributed Talks
17:00-17:20 Chao Chen (Stony Brook University) Topological Representation and Topological Uncertainty for Biomedical Image Analysis
with Xiaoling Hu and Dimitris Samaras
17:20-17:40 Shreyas Samaga (Purdue University) GRIL: A 2-parameter Persistence Based Vectorization for Machine Learning
with Cheng Xin, Soham Mukherjee, and Tamal K. Dey
17:40-18:00 Mathijs Wintraecken (INRIA, Sophia Antipolis) Improved stability results for the medial axis
with Hana Dal Poz Kourimska and Andre Lieutier
18:00-18:20 Reyan Ahmed (Colgate University) Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search
with Mithun Ghosh, Kwang-Sung Jun, and Stephen Kobourov

Contributed Talks: Potential participants could submit a contributed talk to be considered for presentation by emailing WoGeomML@gmail.com with an abstract (e.g., 2 pages) or, preferably, a link to a permanent, publicly available version (e.g., at arXiv.org). The email should contain a title and list of authors, and should identify the person presenting. Please indicate whether you hope to attend in person or would prefer a virtual option (in person is preferred, but we will attempt to accommodate virtual presentations).
Contributions were accepted until May 9, 2023.

A similar workshop on the more focused theme of Geometric Clustering will also be co-located at CG Week.
Generous support provided by NSF CCF-2115677.
The previous versions of this workshop were:
  • Workshop on Geometry and Machine Learning
  • 2nd Workshop on Geometry and Machine Learning
  • 3rd Workshop on Geometry and Machine Learning
  • 4th Workshop on Geometry and Machine Learning
  • 5th Workshop on Geometry and Machine Learning
  • 6th Workshop on Geometry and Machine Learning