From the Design of Clean Fuels to Understanding Climate Change 

Director, Center for Extreme Data Management, Analysis and Visualization
Professor, SCI Institute and School of Computing, University of Utah
Laboratory Fellow, Pacific Northwest National Laboratory 

Advanced techniques for understanding large-scale data models are a crucial ingredient in the success of any supercomputing center and of any data-intensive scientific investigation. Developing such techniques involves a number of major challenges, such as the real-time management of massive data and the quantitative analysis of scientific features of unprecedented complexity. Because of their robustness and scalability, computational geometry algorithms have provided essential new capabilities for the analysis of massive scientific models.
In this talk I will present a number of success stories in the application of computational geometry techniques to the analysis and visualization of large-scale scientific simulations. In particular, I will focus on topological algorithms that have enabled new insights in a number of applications, including the evolution of the mixing layer in hydrodynamic instabilities, the robustness of porous materials under stress and failure, the energy transport of eddies in ocean data used for climate modeling, and the turbulence of lifted flames used to design clean energy production. 
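As a minimal illustration of one such topological technique (a generic textbook sketch, not code from the talk), the following computes the 0-dimensional persistence pairs of a 1-D scalar field with a union-find sweep; the function name and setup are illustrative assumptions.

```python
def persistence_pairs(values):
    """0-dimensional persistence of a 1-D scalar field (sublevel sets).

    Sweep samples in increasing order of value; each local minimum
    births a component, and when two components merge, the younger one
    dies (the "elder rule").  Returns the finite (birth, death) pairs.
    """
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = [None] * n          # union-find forest; None = not yet swept
    birth = {}                   # root -> birth value of its component
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):             # neighbors on the line
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri != rj:
                    old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    pairs.append((birth[young], values[i]))
                    parent[young] = old
    return [(b, d) for b, d in pairs if d > b]   # drop zero-persistence pairs
```

The pairs rank features by how long they persist, which is what makes this kind of analysis robust to noise in simulation data.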
Director, Center for Massive Data Algorithmics
Professor, Department of Computer Science, Aarhus University 

In the last decade, new mapping technologies such as LiDAR have not only resulted in dramatic improvements in terrain data quality, but also in a spectacular increase in the amount of terrain data being acquired. This has revealed scalability problems in existing commercial terrain-processing software. For example, although two detailed LiDAR datasets containing about 26 billion points each were produced for the country of Denmark several years ago, the use of the data in advanced countrywide analysis and modeling applications has been virtually nonexistent.
In this talk we will describe how the development of I/O-efficient computational geometry algorithms has led to efficient big-terrain-data analysis software; I/O-efficient algorithms are algorithms designed to minimize the number of accesses to the slow disks used to store big data. In particular, we will consider flood risk analysis and illustrate, for example, how I/O-efficient computational geometry software has made it possible to analyze flood risk for the entire country of Denmark on a normal desktop computer. The software is being marketed by the startup company SCALGO. 
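To make the flood-risk idea concrete, here is a small in-memory sketch (not SCALGO's software, and deliberately not I/O-efficient): a cell of an elevation grid is flooded at a given sea level if it lies below that level and is connected, through below-level cells, to the grid boundary.

```python
from collections import deque

def flooded_cells(elev, level):
    """Cells flooded at sea level `level`: below `level` and connected
    (4-neighborhood) to the grid boundary.  An in-memory analogue of
    the connectivity step in terrain flood-risk analysis; countrywide
    datasets require external-memory (I/O-efficient) algorithms."""
    rows, cols = len(elev), len(elev[0])
    seen = set()
    q = deque()
    # Seed the flood from every boundary cell that lies below sea level.
    for r in range(rows):
        for c in range(cols):
            on_boundary = r in (0, rows - 1) or c in (0, cols - 1)
            if on_boundary and elev[r][c] < level:
                seen.add((r, c))
                q.append((r, c))
    # Breadth-first search through below-level neighbors.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in seen and elev[nr][nc] < level:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen
```

Note that an interior basin below sea level but walled off from the boundary is correctly left dry, which is why connectivity, not just elevation thresholding, matters.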
Professor, Department of Computer Science and UMIACS, University of Maryland  
Proximity searching is the general term used for various distance-based retrieval problems. It encompasses the well-known problems of nearest neighbor searching and range searching. The abstract problem is to build a data structure storing a finite set of objects so that, given a query object, the elements of the set that are sufficiently close to the query can be found quickly. This covers a wide variety of retrieval problems, depending on the properties of the space and the notion of distance.
Inspired by the many applications of proximity searching in science and engineering, researchers have spent decades in search of the fastest and most space-efficient solutions. While the "holy grail" of a unified solution seems unlikely, this has been one of the success stories in the theory of algorithm design. A number of theoretical innovations have made a significant impact on current practice. In this talk, we will explore a number of recent results in this area and present directions for future research. 
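As one concrete example of a classical proximity-search structure, here is a minimal kd-tree with branch-and-bound nearest-neighbor search (a textbook sketch; names and structure are illustrative, not from the talk).

```python
import math

def build_kdtree(points, depth=0):
    """Recursively split points by alternating coordinate axes,
    storing the median point at each internal node."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor query: descend toward the
    query first, then visit the far subtree only if the splitting
    plane is closer than the best distance found so far."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or math.dist(query, point) < math.dist(query, best):
        best = point
    diff = query[axis] - point[axis]
    near_child, far_child = (
        (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    )
    best = nearest(near_child, query, best)
    if abs(diff) < math.dist(query, best):   # plane may hide a closer point
        best = nearest(far_child, query, best)
    return best
```

The pruning test is what distinguishes this from brute force: whole subtrees are skipped whenever the splitting plane is farther away than the current best candidate.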
Professor, Department of Computer Science, University of California at Davis  
Surface reconstruction and mesh generation are problems that arose in industrial and scientific computation and were addressed seriously by the Computational Geometry community over many years. We produced techniques and ideas that are widely known by experts outside of Computational Geometry, and some software that is widely used, mostly based on robust Delaunay triangulation programs. In this talk we will survey the important ideas that summarize what we know about these problems, and mention some directions for the future. 
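A core ingredient of the robust Delaunay programs mentioned above is an exact incircle predicate. Below is a minimal sketch of the standard determinant test (illustrative, not any particular package's code); with integer coordinates, Python's arbitrary-precision arithmetic evaluates it exactly, echoing the exact predicates behind robust implementations.

```python
def in_circle(a, b, c, d):
    """Sign of the incircle determinant: positive iff d lies strictly
    inside the circle through a, b, c (given in counterclockwise
    order), zero if d is on the circle, negative if outside.  This is
    the predicate that drives Delaunay edge flips."""
    adx, ady = a[0] - d[0], a[1] - d[1]
    bdx, bdy = b[0] - d[0], b[1] - d[1]
    cdx, cdy = c[0] - d[0], c[1] - d[1]
    # Cofactor expansion of the 3x3 incircle determinant.
    return (
        (adx * adx + ady * ady) * (bdx * cdy - cdx * bdy)
        - (bdx * bdx + bdy * bdy) * (adx * cdy - cdx * ady)
        + (cdx * cdx + cdy * cdy) * (adx * bdy - bdx * ady)
    )
```

With floating-point inputs the same formula can suffer roundoff near degeneracy, which is exactly why production Delaunay codes invest in robust evaluation of this predicate.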
Professor, School of Computer Science, Tel Aviv University  
Robotics is a fast-growing and expanding field. New designs of robotic systems raise new computational challenges, many of which are geometric in nature. I will review challenges for computational geometry in optimizing motion plans, in handling systems with many degrees of freedom (many robots in coordination, or highly redundant chains), and in new automated manufacturing processes at the micro scale. I will also revive old and persistent challenges regarding motion and production in very tight quarters, which I believe the time is ripe to (re)address. 
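As a toy example of the sampling-based planning used for systems with many degrees of freedom, here is a bare-bones RRT in an obstacle-free unit square (a generic textbook sketch, not any system from the talk; all names and parameters are illustrative).

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, seed=0):
    """Rapidly-exploring Random Tree: repeatedly extend the tree one
    step toward a random sample (or toward the goal, 10% of the time)
    and stop once a node lands within one step of the goal.  Returns
    the node path from start to that final node, or None."""
    rng = random.Random(seed)
    nodes = [start]
    parents = {start: None}
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(step / d, 1.0)                 # clip extension to `step`
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        parents[new] = near
        nodes.append(new)
        if math.dist(new, goal) <= step:       # close enough: read off path
            path = [new]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
    return None
```

The same skeleton scales to high-dimensional configuration spaces because it only needs random sampling and a distance function, not an explicit map of the free space.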
Associate Professor, Department of Computer Science, State University of New York, Stony Brook  
In this talk I will discuss the opportunities for applying geometry in algorithm design for wireless sensor networks. I will talk about geometric models of wireless transmissions and sensor coverage, as well as the use of geometry in network positioning and routing. The talk will be self-contained and may reflect the speaker's personal, biased opinions. 
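A simple example of node coordinates driving routing decisions is greedy geographic forwarding, sketched below (a standard textbook scheme, not necessarily one from the talk; the data layout is an illustrative assumption).

```python
import math

def greedy_route(positions, links, src, dst):
    """Greedy geographic routing: each node forwards the packet to
    the neighbor closest to the destination, and drops it when no
    neighbor makes progress (a local minimum, or "void")."""
    path = [src]
    cur = src
    while cur != dst:
        neighbors = links.get(cur, [])
        if not neighbors:
            return None
        nxt = min(neighbors,
                  key=lambda n: math.dist(positions[n], positions[dst]))
        if math.dist(positions[nxt], positions[dst]) >= \
                math.dist(positions[cur], positions[dst]):
            return None          # stuck at a void: greedy fails here
        path.append(nxt)
        cur = nxt
    return path
```

The failure case at voids is what motivates the more sophisticated geometric routing schemes (e.g., recovery along planar subgraphs) studied in this area.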
Approximate Computation and Implicit Regularization for Very Large-Scale Data Analysis 

Math Department, Stanford University  
Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this talk, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. Using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties. 
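The phenomenon can be seen in the simplest possible setting: stopping gradient descent on a least-squares problem early returns a shrunken, ridge-like iterate rather than the exact solution. The toy below illustrates only this general idea; it is not one of the talk's case studies.

```python
def gd_least_squares(A, b, steps, lr=0.01):
    """Plain gradient descent on ||Ax - b||^2 starting from zero.
    Running to convergence recovers the least-squares solution;
    stopping early yields an iterate with a smaller norm, i.e., the
    approximate computation itself acts like a regularizer."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = Ax - b, gradient g = 2 A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - lr * g[j] for j in range(n)]
    return x
```

For a well-conditioned system, the fully converged run matches the exact least-squares solution, while the early-stopped run is biased toward zero, mimicking the shrinkage of an explicit ridge penalty.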
Geometry of Diverse, High-Dimensional, and Nonlinear Imaging Data 

Assistant Professor, SCI Institute and School of Computing, University of Utah  
In this talk I will focus on the following challenges that medical imaging data present: 1) the data are high-dimensional, 2) there are large amounts of them, and 3) they very often consist of objects that are most naturally parameterized by nonlinear manifolds. Some examples of this type of data include anatomical shape from structural imaging, white matter connectivity from diffusion imaging, and brain activity/metabolism from functional imaging. Statistical analysis and machine learning in these settings require extensions of the classical Euclidean techniques to high-dimensional, nonlinear geometries. Added to this challenge is the fact that most imaging studies include several imaging modalities at once, repeated measurements over time, and additional data, such as clinical test scores, demographics, and genetic data. The goal of medical image analysis is to discover relationships within such diverse sets of data that help us better understand disease processes and improve treatments. These problems are difficult, and solving them will require theoretical advances in geometric analysis, statistics, and machine learning. 
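As one small example of extending Euclidean statistics to a nonlinear manifold (a textbook construction, not code from the talk), the Fréchet mean of points on the unit sphere can be computed by iterating the sphere's log and exp maps:

```python
import math

def sphere_frechet_mean(points, iters=100, step=0.5):
    """Fréchet (intrinsic) mean of unit vectors on the sphere:
    repeatedly average the log-map images of the data in the tangent
    space at the current estimate, then take an exp-map step.  This
    replaces the arithmetic mean, which would leave the manifold."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def scale(u, s): return [a * s for a in u]
    def add(u, v): return [a + b for a, b in zip(u, v)]
    def normalize(u):
        n = math.sqrt(dot(u, u))
        return scale(u, 1.0 / n)

    def log_map(mu, p):          # tangent vector at mu pointing to p
        c = max(-1.0, min(1.0, dot(mu, p)))
        theta = math.acos(c)
        if theta < 1e-12:
            return [0.0] * len(mu)
        v = add(p, scale(mu, -c))
        return scale(normalize(v), theta)

    def exp_map(mu, v):          # geodesic step from mu along v
        n = math.sqrt(dot(v, v))
        if n < 1e-12:
            return mu
        return add(scale(mu, math.cos(n)), scale(normalize(v), math.sin(n)))

    mu = normalize(list(points[0]))
    for _ in range(iters):
        avg = [0.0] * len(mu)
        for p in points:
            avg = add(avg, log_map(mu, normalize(list(p))))
        mu = exp_map(mu, scale(avg, step / len(points)))
    return mu
```

The same log/exp pattern underlies intrinsic means, principal geodesic analysis, and regression on the shape and connectivity manifolds mentioned above.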