Catmull to Receive Technical Academy Award

University of Utah College of Engineering alum Ed Catmull, the president of Pixar Animation Studios and Walt Disney Animation Studios whose technological advancements in computer animation revolutionized Hollywood movies, will be awarded his sixth scientific and technical Academy Award this February, the Academy of Motion Picture Arts and Sciences announced.

Catmull will receive the Academy’s Scientific and Engineering Award for his original concept behind “subdivision surfaces as a modeling technique in motion picture production. Subdivision surfaces have become a preferred modeling primitive for many types of motion picture computer graphics,” according to a news release. Computer graphics researchers Jos Stam and Tony DeRose will also receive the award for their scientific and practical implementation of the concept.

“Subdivision surfaces” is a computer graphics method for producing a smooth surface of an object (such as Woody’s head from “Toy Story”) from a digital wire mesh. Catmull developed the technique in 1978 along with fellow U alum Jim Clark.
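The refinement rules behind Catmull–Clark subdivision are well documented: each step replaces every quad with four smaller ones whose corners are weighted averages of nearby points. As an illustrative sketch (the mesh representation and function names here are my own, not from any Pixar code), one refinement step computes three kinds of points:

```python
# Illustrative sketch of one Catmull-Clark subdivision step on a closed
# quad mesh. The data layout (vertex tuples, faces as index cycles) is a
# simplification chosen for clarity, not any production representation.
from collections import defaultdict

def catmull_clark_points(vertices, faces):
    """Return (face_points, edge_points, new_vertex_positions) for one step."""
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    # Face point: centroid of the face's vertices.
    face_points = [centroid([vertices[i] for i in f]) for f in faces]

    # Map each edge to the (two) faces that share it.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)

    # Edge point: average of the edge's endpoints and its adjacent face points.
    edge_points = {}
    for edge, fs in edge_faces.items():
        a, b = tuple(edge)
        pts = [vertices[a], vertices[b]] + [face_points[fi] for fi in fs]
        edge_points[edge] = centroid(pts)

    # New vertex position: (Q + 2R + (n-3)S) / n, where Q averages adjacent
    # face points, R averages incident edge midpoints, S is the old position,
    # and n is the vertex valence.
    new_vertices = []
    for vi, v in enumerate(vertices):
        adj_faces = [fi for fi, f in enumerate(faces) if vi in f]
        adj_edges = [tuple(e) for e in edge_faces if vi in e]
        n = len(adj_edges)
        Q = centroid([face_points[fi] for fi in adj_faces])
        R = centroid([centroid([vertices[a], vertices[b]]) for a, b in adj_edges])
        new_vertices.append(
            tuple((Q[i] + 2 * R[i] + (n - 3) * v[i]) / n for i in range(3)))
    return face_points, edge_points, new_vertices
```

Run on a unit cube, this moves the corner at the origin inward to (2/9, 2/9, 2/9); repeating the step rounds a blocky control cage into an ever-smoother surface.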

The method was an early milestone in the development of computer graphics and animation. Catmull also helped create the computer animation software known as RenderMan, which became the core program used in the development of Pixar’s animated movies such as “Toy Story” and “Monsters, Inc.”

Catmull first attended the University of Utah in 1963 as a physics student but later took computer science classes as graphics were emerging as a new technology. In the late 1960s and ‘70s, he was paving new ground in computer technology along with other noted U pioneers that included Nolan Bushnell of Atari, interface designer Alan Kay, Silicon Graphics founder Jim Clark, and Adobe founder John Warnock.

During his time at the U in 1972, Catmull produced a landmark film, a computer-animated version of his left hand that became an early milestone in computer animation. In 1979, movie mogul George Lucas hired Catmull to head the computer animation division of Lucasfilm, which produced special effects for a number of films. In 1986, Apple co-founder Steve Jobs purchased Lucasfilm’s computer animation division and created Pixar with Catmull. In addition to dozens of short films, Pixar so far has produced 20 feature-length computer-animated films that have earned more than $13 billion globally at the box office. In 2006, Pixar merged with Disney, and Catmull remained as president of the company while also becoming president of Disney’s Animation Studios.

As the president of Pixar Animation Studios, based in Emeryville, Calif., Catmull was responsible for the company becoming the most distinguished computer animation studio in the world with other box office hits such as “Finding Nemo,” “Up,” and “Coco.” In all, the film studio has garnered 15 Academy Awards, nine Golden Globes and 11 Grammys. Simultaneously, his tenure as the president of Walt Disney Animation Studios produced such hits as “Frozen,” “Big Hero 6,” and “Moana.”

In addition to the award he will receive in February, Catmull has earned four other scientific and technical Academy Awards for both the concept of subdivision surfaces and for the creation of the RenderMan software. He is also the recipient of the Academy’s Gordon E. Sawyer Award, which is given to an individual in the motion picture industry “whose technological contributions have brought credit to the industry.”

Catmull announced in October that he would retire from both Pixar and the Walt Disney Company at the end of this year. He will remain on as a consultant until July of 2019. He is also a member of the college’s Engineering National Advisory Council.

New Club Formalizes Support Network for Women in CS

Q&A with Alexandra Bertagnolli, the first president of the new Women in Computing (WiC) student organization. Attend the WiC meeting 11 a.m. Nov. 16 in the Union’s Saltair Room to hear from a speaker Bertagnolli expects will be especially awesome: Kiri Wagstaff of NASA’s Jet Propulsion Lab. WiC will also host a second event with Wagstaff: an informal Q&A session (details are TBD but will be posted here when finalized).

When and why did the club start?

In spring 2018, John Melchi—the School of Computing’s (SoC) director of business affairs—wanted to hear about our experiences as women in SoC, so he organized several lunches to talk with us. At these lunches it became apparent there was a need for more social support for women in computing.

What’s the club’s goal?

The goal is to provide better support and a sense of community for women in computing here at the U. In the long term, we hope the club helps with retention of female students and possibly even attracts more women to computing.

What are the prerequisites to join?

There are no prerequisites to join, although the club is geared toward undergraduate women in computer science and computer engineering, whether you’re in the major yet or not.

Is it open only to women?

No, but events and activities will focus on women.

How often do you meet, and what do you do at meetings?

We’ve decided to have one event every month at various times to accommodate as many people as possible. So no regular meetings, just approximately five WiC events a semester. These events will rotate between socials, workshops, guest speakers, and more. Right now, the club’s focus is on the women at the U. In future years we’d like to include outreach activities.

Get on your soapbox: Why is this club important? Why should people join?

Personally, I’ve struggled with feelings of inadequacy and impostor syndrome. As a senior, I still catch myself walking into classrooms and feeling like the men are just so much better or more experienced than me, even though it’s not true. The best thing that has helped me throughout my years in this major was talking to other women and hearing that they have the same doubts—that they struggle in the same way. I’ve also observed the difference in behavior when women are able to talk about these issues in predominantly female spaces. This club is so important because it will provide women-centered spaces and the support that I wish I’d had going through this program. I encourage any women taking a CS class to join so they can not only go to cool events, but also affirm that they belong here and they’re not alone.

If people are interested, how can they join or find out more?

Send an email with your name to subscribe to our mailing list. Or, join our OrgSync page for more information about upcoming events.

What You Can’t See Can Hurt You

You can’t see nasty microscopic air pollutants in your home, but what if you could?

Engineers from the University of Utah’s School of Computing conducted a study to determine whether homeowners would change the way they live if they could visualize the air quality in their house. It turns out their behavior changes a lot.

Their study was published this month in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper is also being presented Oct. 9 in Singapore during the ACM International Joint Conference on Pervasive and Ubiquitous Computing.

“The idea behind this study was to help people understand something about this invisible air quality in their home,” says University of Utah School of Computing assistant professor Jason Wiese, a lead author of the paper along with School of Computing doctoral student Jimmy Moore and associate professor Miriah Meyer.

During the day, the air pollution inside your home can be worse than outside due to activities such as vacuuming, cooking, dusting or running the clothes dryer. The resulting pollution can cause health problems, especially for young children, the elderly and people with asthma.

University of Utah engineers from both the School of Computing and the Department of Electrical and Computer Engineering built a series of portable, Wi-Fi-enabled air quality monitors and connected them to a university server. Three sensors were placed in each of six homes in Salt Lake and Utah counties for four to 11 months in 2017 and 2018: two in different high-traffic areas of the house, such as the kitchen or a bedroom, and one outside on or near the porch. Each minute, each sensor automatically measured the air for PM 2.5 (tiny particles or droplets in the air that are 2.5 microns or less in width) and sent the data to the server.

Homeowners could then view the data on an Amazon tablet that displayed the air pollution measurements in each room as a line graph over a 24-hour period, with up to 30 days of history available. To help identify spikes in the air pollution, homeowners were given a voice-activated Google Home speaker so they could tell the server to label a particular moment in the measurements, such as when a person was cooking or vacuuming. Participants were also sent an SMS text message warning them whenever the indoor air quality changed rapidly.
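The paper is described here only at this level of detail, so as a hedged sketch of how such a rapid-change text alert could be triggered (the five-minute window and 10 µg/m³ threshold are my own placeholder values, not the study’s):

```python
# Illustrative sketch of a rapid-change detector over once-per-minute PM2.5
# readings. The window and threshold are assumed values for demonstration,
# not the rule used by the MAAV system.

def rapid_changes(readings, window=5, threshold=10.0):
    """Return indices where PM2.5 rose or fell by more than `threshold`
    micrograms per cubic meter within the last `window` minutes."""
    alerts = []
    for i in range(window, len(readings)):
        if abs(readings[i] - readings[i - window]) > threshold:
            alerts.append(i)  # a real system would send an SMS here
    return alerts
```

A cooking or vacuuming event shows up as a cluster of flagged minutes while the concentration is climbing, which is exactly when a label from the homeowner is most useful.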

During the study, researchers discovered some interesting trends from their system of sensors, which they call MAAV (Measure Air quality, Annotate data streams, and Visualize real-time PM2.5 levels). One homeowner discovered that the air pollution in her home spiked when she cooked with olive oil, which motivated her to find other oils that produce less smoke at the same cooking temperature.

Another homeowner would vacuum and dust just before a friend with allergies dropped by, hoping to clear the air of dust. The MAAV system showed her that she was actually making the air much worse, because the vacuuming and dusting kicked up more pollutants. Realizing this, she started cleaning the house well before the friend’s visits.

Participants also opened windows more when the air was bad, or compared measurements between rooms and avoided those with more pollution.

“Without this kind of system, you have no idea about how bad the air is in your home,” Wiese says. “There are a whole range of things you can’t see and can’t detect. That means you have to collect the data with the sensor and show it to the individual in an accessible, useful way.”

Researchers also learned that the circumstances that made air pollution worse differed from home to home; vacuuming, for example, affected air quality differently in different houses. They also found that when homeowners could visualize the air quality in their home, they consistently kept up with labeling and reviewing the data.

Wiese says no known manufacturers make air quality systems for the home that allow residents to visualize and label the air quality in this way, but he hopes their research can spur more innovation.

The study involves engineering in collaboration with other University of Utah scientists, including biomedical informatics and clinical asthma researchers. It was funded as part of a larger National Institutes of Health program known as Pediatric Research using Integrated Sensor Monitoring Systems (PRISMS), launched in 2015 to develop sensor-based health monitoring systems for measuring environmental, physiological and behavioral factors in pediatric studies of asthma and other chronic diseases.

Research reported in this publication was funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Number U54EB021973. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Prospective Students can Explore Ranking Data

By John Regehr

Ranking university programs can be useful: it helps students decide which school to attend, it helps prospective professors decide where to apply for jobs, and it lets university administrators determine which of their units are performing exceptionally well.

What does it really mean for one department to be ranked higher than another? Does it mean that they publish more papers? That more of their graduates create successful companies? It isn’t clear that there’s any single right answer to these questions.

It is clear, however, that ranking can be done badly, and unfortunately, according to the Computing Research Association, this is what has happened to the US News and World Report rankings for computer science, which is perhaps the most widely used and influential ranking. The CRA issued a statement describing a number of problems with the methods used by US News and World Report (including the fact that they do a poor job of tracking the venues where computer scientists publish research papers) and concludes: “Anyone with knowledge of CS research will see these rankings for what they are—nonsense—and ignore them. But others may be seriously misled.”


Beyond the problems identified by the CRA, the US News rankings are also hard to interpret because the criteria they are based on are not public. We don’t know the formula they use, nor do we have access to the data fed into that secret formula. This makes it hard for people, such as prospective students, to benefit from the rankings, because it just isn’t clear what it means for one computer science department to be ranked above another.

Computer science professor Emery Berger, at the University of Massachusetts at Amherst, has come up with a better way to do rankings called CSRankings. His method is transparent: anyone can inspect the formula it uses and the data that is fed into it. The entire implementation of his ranking system is available as open source software!

CSRankings is based on the idea that the best computer science departments are the ones that publish the most articles at “top tier” conferences. These conferences might accept only 10-20% of the papers submitted for publication each year and they are where the best researchers tend to submit their best work.

By counting only top-tier publications, instead of total publications, CSRankings avoids the problem of inflating the ranking of researchers who publish a large number of low-quality publications. The CSRankings system is carefully designed to be a zero-sum game: the total credit that it gives to a top-tier paper cannot be inflated by adding authors to a paper.
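The zero-sum idea can be made concrete with a small sketch (my own simplification: the real CSRankings formula also groups papers by research area and aggregates with a geometric mean). Each top-tier paper contributes exactly one unit of credit, split evenly among its authors, so adding authors never increases a paper’s total weight:

```python
# Illustrative sketch of zero-sum "adjusted counting": every paper is worth
# exactly 1 credit, divided evenly among its authors. This simplifies the
# actual CSRankings formula, which also weights per area.

def department_scores(papers, affiliation):
    """papers: list of author-name lists; affiliation: author -> department."""
    scores = {}
    for authors in papers:
        share = 1.0 / len(authors)  # total credit per paper is always 1
        for author in authors:
            dept = affiliation[author]
            scores[dept] = scores.get(dept, 0.0) + share
    return scores
```

A two-author paper gives each author’s department half a credit; listing more authors dilutes each share but leaves the paper’s total at 1, which is what blocks ranking inflation.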

The openness of the CSRankings system and its data set is a huge advantage. The best thing is that the CSRankings web site allows everyone to explore the data.

Let’s say that a prospective student is interested in operating systems and formal verification. That person can select only those two areas of interest and the site will show the departments that publish heavily in top-tier conferences in those specific areas. A prospective student can then drill down at the department level and see who the key players are in those areas and read their code and papers.

This is a fundamentally different use of a ranking system. The ultimate purpose of the rankings is to guide us toward accurate data that can be used to make informed decisions.

SoC Faculty Receive NSF CAREER Awards

Two University of Utah School of Computing faculty members have received the National Science Foundation’s CAREER Award: one for developing faster cloud-based data systems, and one for software that can help researchers and doctors understand why they make the medical decisions they do.

Ryan Stutsman

University of Utah School of Computing assistant professor Ryan Stutsman, whose research focuses on “big data” and creating more efficient databases, is receiving $550,000 over five years for a project that rethinks the common approach to cloud-based databases.

Lots of data is stored in the cloud, and when it’s analyzed, that data moves around among the many servers that store and analyze it, he said. “We’re trying to come up with a way to share these massive cloud systems but also safely push their operations to the data itself,” he added. “By running code, we allow these cloud users to pull the data from the databases without moving it around as much.”

That means users could analyze much bigger data sets and produce results more quickly. This could be valuable for services such as Facebook, which deals with massive amounts of data every day in real time. Future technologies, such as autonomous vehicles, also could benefit from this new method. “What we’re most focused on is future networks to move data at high bandwidth with low latencies,” Stutsman said.
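To illustrate the general idea (this toy contrast is my own, not Stutsman’s actual system): shipping every row to the client before filtering moves far more data over the network than letting the storage server run the operation and return only the matching rows.

```python
# Toy contrast between pulling all data to the client and pushing the
# computation to where the data lives. "Shipped" row counts stand in for
# network traffic; a real system deals with serialization, safety of the
# pushed code, and distribution across many servers.

def query_pull_everything(table, predicate):
    """Naive client: ship every row over the network, then filter locally."""
    shipped = list(table)                 # every row crosses the network
    return [r for r in shipped if predicate(r)], len(shipped)

def query_push_down(table, predicate):
    """Push-down: the server applies the code and ships only the matches."""
    shipped = [r for r in table if predicate(r)]  # filtered at the data
    return shipped, len(shipped)
```

Both queries return the same answer, but the push-down version moves only the matching rows, which is the saving that grows with the size of the data set.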

Stutsman earned his doctorate in computer science from Stanford University and began at the University of Utah as a faculty member in the summer of 2017. He has performed internships at Facebook and the Lawrence Livermore National Laboratory and postdoctoral research for Microsoft.

“It’s a huge honor,” he said about receiving the NSF CAREER Award. “I’m really excited about the work. In the future, people will have huge amounts of data and interact with it really aggressively, and I think with this project we will push the envelope of that.”

Alexander Lex

University of Utah School of Computing assistant professor Alexander Lex has received $512,000 to develop software that will capture the decision-making process of doctors and other researchers using algorithms and human-computer interaction methods. Lex, who is also a member of the U’s Scientific Computing and Imaging Institute, conducts research on interactive data visualization, data analysis methods, visual analytics and data science.

The project is specifically focused on helping doctors with cancer diagnostics and researchers studying the genetic causes of suicide. Typically, doctors make a series of medical decisions, but that process is not well documented, and it is difficult to reconstruct why a doctor made a particular decision.

Lex’s research will develop a software system so experts can reproduce their decision-making in order to better justify those choices. “I hope this will give people the tool to better communicate and reproduce the data analysis of what they do,” he said.

“This is really exciting for me, and it’s a great honor,” he said about receiving the CAREER Award.

Lex received his doctorate in computer science from the Graz University of Technology in Austria and was a postdoctoral fellow at Harvard University. He began at the University of Utah in 2015.

The NSF CAREER Award is given out to faculty “who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.”

Lex and Stutsman are so far the second and third University of Utah College of Engineering faculty members to receive the NSF CAREER Award this year. In January, electrical and computer engineering assistant professor Pierre-Emmanuel Gaillardon received the award for his research on developing transistors that can do more, not just work faster.
