photo

I am currently pursuing my internship with the Yahoo! Ads & Data team.

I have two MS degrees:

I am currently looking for full-time software developer/engineer positions starting October 10, 2015.

Explore the links on the left to learn more about my education, research, projects, and other activities!

Click here to get a copy of my current resume.

The content below is outdated by at least 6 months! I will try to update this page as soon as I get some time to spend on it.
Meanwhile, to get some idea of what I am up to, please look at the following submissions:
Click here to download the MIDGRAPH 2011 paper or the VR 2012 poster, or to look at the presentation I gave during my trip to the University of Utah.

My current research focuses on visual body feedback in virtual environments. The first pass of this work requires user segmentation from the background.

Problem Context: Segmentation of a user given the constraints that the user is in motion and the camera's view is a first person view of the user.

There are several methods available for color-based segmentation, but these are not applicable in the context of the current problem, as it involves segmenting a mix of colors across the user's attire and the objects he/she holds. One alternative is to segment the user/foreground based on motion information in consecutive video frames, but that is applicable only to scenes with relatively static backgrounds. Other methods for solving the problem are being explored at the moment. The following are a few screenshots of the ongoing research:

color based segmentation

Skin-color-based segmentation, based on [5].

color based segmentation

Foreground separation from static backgrounds based on the method used in Chris Harrison's 3D Head Tracking Project.

The second image involves darker lighting conditions than the first. By observation, one can see that simple color-based segmentation cannot be applied as in the first case, because non-uniform illumination causes variations in the observed skin color.
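As a rough illustration of the motion-based approach mentioned above, here is a minimal NumPy sketch of frame differencing against a static background. This is a toy stand-in, not the actual research code; the threshold value is an arbitrary assumption.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Foreground mask from absolute frame differencing.

    Pixels whose intensity changed by more than `threshold` between
    consecutive frames are marked as foreground (True). This only
    works when the background is (mostly) static, which is exactly
    the limitation noted above. Note that the object's OLD location
    is also flagged -- a well-known artifact of plain differencing.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a static background with one "moving" bright square.
bg = np.zeros((8, 8), dtype=np.uint8)
f1 = bg.copy(); f1[1:3, 1:3] = 200   # object at top-left
f2 = bg.copy(); f2[5:7, 5:7] = 200   # object moved to bottom-right
mask = motion_mask(f1, f2)
```

Both the old and new object locations show up in the mask (8 pixels total here), which is why real systems follow differencing with background modeling or morphological cleanup.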

References:

1. Betty J. Mohler, William B. Thompson, Sarah H. Creem-Regehr, Peter Willemsen, Herbert L. Pick, Jr., and John J. Rieser. Calibration of Locomotion Resulting from Visual Motion in a Treadmill-Based Virtual Environment. ACM Transactions on Applied Perception, 4(1):4, 2007.
2. Gerd Bruder, Frank Steinicke, Dimitar Valkov, Klaus Hinrichs. Augmented Virtual Studio for Architectural Exploration. Proceedings of the Virtual Reality International Conference (VRIC 2010), 7-9 April 2010, Laval, France.
3. Frank Steinicke, Gerd Bruder, Klaus Hinrichs, Markus Lappe, Brian Ries, Victoria Interrante. Transitional Environments Enhance Distance Perception in Immersive Virtual Reality Systems. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization, pages 19-26, 2009.
4. Frank Steinicke, Gerd Bruder, Klaus Hinrichs, Markus Lappe, Scott Kuhl, Pete Willemsen. Judgement of Natural Perspective Projections in Head-Mounted Display Environments. IEEE Transactions on Visualization and Computer Graphics, 29 October 2010.
5. Gerd Bruder, Frank Steinicke, Kai Rothaus, Klaus Hinrichs. Enhancing Presence in Head-Mounted Display Environments by Visual Body Feedback Using Head-Mounted Cameras. In Proceedings of the International Conference on CyberWorlds, pages 43-50. IEEE Press, 2009.
6. Michael Geuss, Jeanine Stefanucci, Sarah Creem-Regehr, William B. Thompson. Can I Pass?: Using Affordances to Measure Perceived Size in Virtual Environments. Proceedings of the 7th Symposium on Applied Perception in Graphics and Visualization (APGV 2010), Los Angeles, California, pages 61-64, July 23-24, 2010.
7. Peter Willemsen, Mark B. Colton, Sarah H. Creem-Regehr, and William B. Thompson. The Effects of Head-Mounted Display Mechanical Properties and Field of View on Distance Judgements in Virtual Environments. ACM Transactions on Applied Perception, 2(6).
8. William B. Thompson, Peter Willemsen, Amy A. Gooch, Sarah H. Creem-Regehr, Jack M. Loomis, Andrew C. Beall. Does the Quality of the Computer Graphics Matter when Judging Distances in Visually Immersive Environments? Presence: Teleoperators and Virtual Environments, 13(5):560-571.
Ray Tracer: A Whitted and Cook Style Ray Tracer based on Image-Order Rendering

As part of our advanced computer graphics class, we had to build a ray tracer from scratch. My ray tracer demonstrates a variety of features such as ideal specular reflections, shadows, object instancing, anti-aliasing, motion blur, soft shadows, depth of field, glossy reflections, bounding volume hierarchies, and simple animations. It also supports miscellaneous extensions like soft blobs and texture generation using Perlin noise. The project is written entirely in C++ and is built using CMake. It uses the standard libpng library to write PNG images as output, and it supports RDL scene descriptions.
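For readers unfamiliar with image-order rendering, the heart of such a ray tracer is the ray-object intersection test. Here is a minimal Python sketch of ray-sphere intersection; the actual project is in C++, and the function name and conventions here are purely illustrative.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest hit on a sphere,
    or None if the ray misses. This is the core test an image-order
    renderer performs per pixel. (Sketch only: it ignores the case of
    a ray origin inside the sphere.)"""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4 * c          # a == 1 for a normalized direction
    if disc < 0:
        return None               # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None  # hit must be in front of the origin

# Ray down the +z axis toward a unit sphere centered at (0, 0, 5).
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # → 4.0
```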

The following screen shots demonstrate some of the images generated by my ray tracer:

stratified caffeine molecule

Caffeine molecule rendered using a stratified anti-aliasing technique. The RDL file for this image was originally created by Scot Halverson.

soft shadows

Sphere using soft shadows.

object instancing

Object instancing applied on spheres.

motion blur

Motion blur observed in a scene with one static sphere and one sphere in motion.

soft blob

A simple demonstration of soft blobs technique.

We have also experimented with NVIDIA's OptiX API.

lab

3D model of a lab loaded and rendered as a mesh of triangles by OptiX.

dragon

The "Stanford Dragon" model loaded and rendered as a mesh of triangles by OptiX. This model was originally developed by the Stanford University Computer Graphics Laboratory and is freely available on the web.

Mouse Simulation using Hand Gestures

This was developed as the main project of my final year of engineering. It uses hand gestures to simulate mouse operations, thereby controlling system applications using simple color caps as gesture input devices.

We use hand gestures and optional speech recognition to simulate mouse operations. We then use them to perform dynamic region/contour capture from a live video stream, and we show how the system can also be used to play games by means of a simple game.

All these applications use the computer's integrated webcam (or any webcam supported by OpenCV). OpenCV is used to capture the live video stream, which is then processed to identify hand gestures; we used color marker caps as the indicators for gestures. The recognized gestures are then mapped to their corresponding actions. The gesture control module of this project is written in C++ and the speech recognition module in C#.NET, while the game is written in core Java. The speech recognition module uses Microsoft SAPI.
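As a rough sketch of the color-cap detection step, the following Python/NumPy snippet locates a marker by simple color thresholding. This is a hypothetical illustration: the real pipeline used OpenCV (and would more likely threshold in HSV space for lighting robustness), and the target color and tolerance here are invented.

```python
import numpy as np

def marker_centroid(rgb, target=(255, 0, 0), tol=60):
    """Locate a colored marker cap by RGB thresholding.

    Returns the (row, col) centroid of pixels within `tol` of the
    target color, or None if no marker pixels are found. The centroid
    is what a gesture module would map to a mouse cursor position.
    """
    diff = np.abs(rgb.astype(np.int16) - np.array(target, dtype=np.int16))
    hits = np.all(diff <= tol, axis=-1)
    if not hits.any():
        return None
    rows, cols = np.nonzero(hits)
    return rows.mean(), cols.mean()

# Toy frame: a red 2x2 "cap" on a black background.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:6, 6:8] = (250, 10, 10)
print(marker_centroid(frame))   # centroid at (4.5, 6.5)
```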

The following are some of the screenshots from the project:
mouse select

Simulation of mouse selection using Mouse Control without Speech module

mouse drag

Simulation of mouse drag using Mouse Control with Speech module

region capture

Dynamic Capturing of region using Dynamic Region Capture module

captured region

Captured region



game control

Game control using Mouse Control without Speech module

This project was also extended to use Microsoft's Kinect for video capture.

Natural Language Processing Projects

POS Tagger:

A Perl script that assigns parts of speech to text using Hidden Markov Models.
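As an illustration of the idea (the original is a Perl script), here is a toy Viterbi decoder for a first-order HMM tagger in Python. The tag set and all probability tables below are invented for the example, not taken from the actual project.

```python
import math

def viterbi(words, tags, trans, emit, start):
    """Most likely tag sequence under a first-order HMM.

    trans[t1][t2] = P(t2 | t1), emit[t][w] = P(w | t),
    start[t] = P(t at sentence start). Unseen events get a tiny
    floor probability instead of proper smoothing (sketch only).
    """
    V = [{t: (math.log(start.get(t, 1e-12)) +
              math.log(emit[t].get(words[0], 1e-12)), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best = max(
                (V[-1][p][0] + math.log(trans[p].get(t, 1e-12)) +
                 math.log(emit[t].get(w, 1e-12)), p) for p in tags)
            row[t] = (best[0], V[-1][best[1]][1] + [t])
        V.append(row)
    return max(V[-1].values())[1]

# Invented toy model: determiners precede nouns, nouns precede verbs.
tags = ["DET", "N", "V"]
start = {"DET": 0.8, "N": 0.1, "V": 0.1}
trans = {"DET": {"N": 0.9, "V": 0.05, "DET": 0.05},
         "N":   {"V": 0.8, "N": 0.1, "DET": 0.1},
         "V":   {"DET": 0.5, "N": 0.3, "V": 0.2}}
emit = {"DET": {"the": 0.9},
        "N":   {"dog": 0.5, "runs": 0.1},
        "V":   {"runs": 0.6, "dog": 0.05}}
print(viterbi(["the", "dog", "runs"], tags, trans, emit, start))
```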

Word Sense Disambiguator:

A Bayesian Classifier written in Perl that detects the sense of a word based on its context.
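The underlying idea can be sketched as a toy naive Bayes classifier (shown in Python here rather than Perl; the senses and training contexts are invented examples, not the project's data).

```python
import math
from collections import Counter

def train_nb(examples):
    """Count sense priors and per-sense context-word frequencies.
    examples: list of (sense, context_words) pairs."""
    sense_counts = Counter(s for s, _ in examples)
    word_counts = {s: Counter() for s in sense_counts}
    for s, ctx in examples:
        word_counts[s].update(ctx)
    return sense_counts, word_counts

def classify(context, sense_counts, word_counts, alpha=1.0):
    """Pick the sense maximizing P(sense) * prod P(word | sense),
    with add-alpha smoothing over the shared vocabulary."""
    total = sum(sense_counts.values())
    vocab = {w for c in word_counts.values() for w in c}
    best, best_lp = None, float("-inf")
    for s, n in sense_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[s].values()) + alpha * len(vocab)
        for w in context:
            lp += math.log((word_counts[s][w] + alpha) / denom)
        if lp > best_lp:
            best, best_lp = s, lp
    return best

# Invented sense-tagged contexts for the ambiguous word "bank".
examples = [("finance", ["money", "deposit", "loan"]),
            ("finance", ["interest", "account", "money"]),
            ("river",   ["water", "shore", "fishing"]),
            ("river",   ["sand", "water", "erosion"])]
sc, wc = train_nb(examples)
print(classify(["loan", "account"], sc, wc))   # → finance
```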

Sentence Generator:

A Perl program that learns an N-gram language model from an arbitrary number of input files and generates sentences on the fly from the learnt model. A 4-gram model built from Crime and Punishment by Fyodor Dostoevsky and War and Peace by Leo Tolstoy generated the 10 random sentences below!

natasha got up and went on . the faces all expressed animation and apprehension , but it seemed to himself at least .
spots appeared on his face .
but as barclay did not inspire confidence his power was limited .
he came up with a great deal and is still suffering from the idea that no one knows , but one must sit somewhere ; that poor katia now - - you ' re in force , and he occupied a temporary post in the commissariat department in that town . you will let me win this ten , or beat it .
" the worst of it , i have another object .
for such an act ; in the second place if you want anything come straight to me , an old woman ' s beauty to him and he often thought about her .
he kissed her hand , he pressed it to the committee prince andrew looked kindly at sonia .
a pleasant humming and whistling of bullets were often heard .
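For the curious, the core of such a sentence generator can be sketched in a few lines of Python (the original is in Perl, and the tiny corpus below is an invented stand-in for the novels):

```python
import random
from collections import defaultdict

def train_ngrams(tokens, n=2):
    """Count n-gram continuations: context tuple -> {next_word: count}."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context][tokens[i + n - 1]] += 1
    return model

def generate(model, n=2, max_len=12, seed=0):
    """Sample text by repeatedly drawing the next word in proportion
    to how often it followed the current context in training."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(max_len - len(out)):
        nexts = model.get(tuple(out[-(n - 1):]))
        if not nexts:
            break
        words, counts = zip(*nexts.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the dog runs . the cat sleeps . the dog sleeps .".split()
model = train_ngrams(corpus, n=2)
print(generate(model, n=2))
```

A 4-gram model is the same code with n=4; longer contexts are why the sampled sentences above read like nearly verbatim stretches of the novels.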

Snake Game:

This standard Snake game is probably the first window-based game I ever wrote (some 5 years back). It is written entirely using C graphics. It contains a user-controlled snake and a snake controlled by the computer. The code is so old that I had to use DOSBox to run it and take the following screenshot ;-).

snake

A random screenshot of the Snake game.

Ball&Bat Game:

This standard Ball&Bat game was developed as a sub-part of my other project, "Application Control using Hand Gestures". The entire code is written in Java. It contains a user bat that can be controlled with a physical mouse or by means of a color cap (worn on the finger).

ballbat

A random screenshot of the Ball&Bat game.

Tetris Game:

This game was developed as an attempt to get acquainted with OpenGL. It is a very simplified version of the actual Tetris game.

tetris

A random screenshot of the Tetris Game.

Helicopter Game:

This standard Helicopter game was developed as an attempt to get acquainted with the XNA framework. Thanks to Riemer's nice XNA tutorials, I was able to understand the basics of the framework and apply them in this game. The background texture used in this game is from the same tutorial, and the helicopter image is freely available on the internet.

helicopter

A random screenshot of the Helicopter Game.

M.S. in Computing, Data Management & Analysis, August 2012 - May 2015
Cumulative GPA: 3.98/4.0

Spring 2015:

CS 6140 - Data Mining
CS 6230 - High Performance Computing & Parallelization
CS 6810 - Computer Architecture

Spring 2014:

CS 6460 - Operating Systems
CS 6030 - Technical Communications

Fall 2013:

CS 6150 - Advanced Algorithms
CS 6530 - Database Systems

Spring 2013:

CS 6320 - 3D Computer Vision
CS 6650 - Perception for Computer Graphics

Fall 2012:

CS 6350 - Machine Learning
CS 6630 - Scientific Visualization
CS 6640 - Digital Image Processing

M.S. in Computer Science (with Thesis), September 2010 - June 2012
Cumulative GPA: 4.0/4.0

Spring 2012:

CS 8761 - Advanced Systems Programming

Fall 2011:

CS 8761 - Natural Language Processing

Spring 2011:

CS 8771 - Advanced Computational Logic
CS 4511 - Computability and Complexity (Audit)
MATH 5233 - Mathematical Foundations of Bio-Informatics

Fall 2010:

CS 8721 - Advanced Computer Graphics
MATH 5830 - Numerical Analysis: Approximation and Quadrature

Graduate Teaching Assistant for:

Spring 2012

CS 5631 - Operating Systems under Dr. Christopher G. Prince
CS 5721 - Computer Graphics under Dr. Pete Willemsen

Fall 2011

CS 1581 - Honors Computer Science 1 under Dr. Tim Colburn
CS 5651 - Computer Networks under Dr. Pete Willemsen

Spring 2011

CS 4531 - Software Engineering under Dr. Tim Colburn
CS 5721 - Computer Graphics under Dr. Pete Willemsen

Fall 2010

CS 1511 - Visual C++ under Dr. Jim Allert
CS 4531 - Software Engineering under Dr. Gary Shute

Pic with Marissa Mayer!

My best moment at Yahoo!

My First Hike!

My first ever hike with Erin McManus, Dr. William "Bill" Thompson, Scot Halverson, and Mrs. Barbara

Feel free to drop me a message via email.

Email: kaushik(at)cs.utah.edu
LinkedIn: https://www.linkedin.com/in/srivishnusatyavolu

Intern II, Yahoo!, Sunnyvale, August 2015 - October 2015

Enhancing Utility and Mobility of Third Person Self-Avatars

As part of my work as a Research Assistant at the University of Utah VPSC Lab, I worked on designing and developing virtual reality systems and on ways to enhance both the utility and mobility of motion-captured self-avatars (using MotionBuilder and WorldViz). Some of this work was published at SAP 2014 and can be found in this paper.

Microsoft Kinect based Virtual Reality System (MS. Thesis)

This thesis has two major parts. Together they form a fully Kinect-based virtual reality system that can give users dynamic visual feedback while simultaneously tracking the position of the user in 3D space. The detailed thesis document can be found here, and here is the poster that came out of the work. A draft version of the poster can be found here. The work was also submitted to MidGraph 2011; the paper can be found here. Finally, here is the talk I gave at the University of Utah about my research.

Kinect based 3D self-avatars

Information from multiple Microsoft Kinect IR/RGB/depth cameras was combined to extract a real-time 3D self-representation using a sequence of video/image processing stages such as preprocessing, background subtraction, and meshing. The resultant 3D self-avatar is projected back into the VR world to give instant visual feedback to users. For this purpose, I built my own feature-rich C++ Kinect library that utilized the OpenCV, libKinect, libSivelab, OpenSceneGraph, and KinectViewer APIs for processing and accessing information from multiple Kinects connected over a network.

Kinect based VR Tracking System

Information from multiple Microsoft Kinect IR/depth cameras connected over a network is integrated to robustly track an IR marker attached to the user's head. Apart from a custom networking protocol, a Kalman filter is used to remove jitter from the position tracking. The resultant coverage space scales with the number of Kinects. A detailed analysis of IR interference effects on the position tracking abilities of the system was also conducted (see the VR 2012 paper in the publications section for details). My API also supports both intrinsic and extrinsic calibration of the Kinects, along with UDP communication across networks.
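To illustrate the jitter-removal step, here is a minimal one-dimensional constant-position Kalman filter in Python. The noise parameters and sample track below are invented; the actual system filtered 3D marker positions.

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Smooth a noisy 1-D position track with a Kalman filter.

    q: process noise (how far the true position may drift per step).
    r: measurement noise variance. Both values are illustrative;
    tuning them trades responsiveness against smoothness.
    """
    x, p = measurements[0], 1.0    # initial state estimate and variance
    out = []
    for z in measurements:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return out

# Jittery samples around a true marker position of 10.0.
noisy = [10.3, 9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.25]
smooth = kalman_1d(noisy)
```

After a few samples the filtered track hugs the true position far more tightly than the raw measurements, which is exactly the jitter reduction wanted for head tracking.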

Machine Learning Projects

Hand-written Lower-Case Character Classification using Machine Learning

A set of Matlab-based classifiers written using Thomas Henderson's CS 6530 Fall 2012 Matlab Codebase.

The goal is to automatically classify scanned handwritten lower-case characters into their corresponding character classes using machine learning techniques. Specifically, techniques such as Multi-Layer Perceptrons (MLPs), Radial Basis Function networks (RBFs), Decision Trees (DTs), the AdaBoost algorithm, and K-Means/K-Medians clustering were implemented, and the resultant performances are reported in the following reports: MLP-Report, RBF-Report, DT-Report, Adaboost-Report, K-Means-Report.

A few example graphs from the MLP-Report are illustrated below:
Letter a Letter i Letter l Letter z

Lower case scanned handwritten characters (a, i, l, z).



MLP Learning Curve MLP Performance with Number of Hidden Neurons MLP Performance with Number of Hidden Neurons and Number of iterations MLP Performance with Number of Iterations and Number of Hidden Layers
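Of the techniques listed above, K-Means is the simplest to sketch. The following toy Python version (the project itself was in Matlab) alternates the assignment and centroid-update steps on invented 2-D "feature" data:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and
    centroid update. X: (n_samples, n_features); returns
    (centroids, labels). A sketch, not the project's Matlab code."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every sample to every centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated toy "character feature" clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centroids, labels = kmeans(X, k=2)
```

In the character project the feature vectors would be (for example) flattened pixel intensities rather than these 2-D points, but the alternation is identical.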

Extracting Textual Information from Hand-Scanned Engineering Drawings

As an immediate application of the character classifier project mentioned above, the goal of this project is to automatically extract textual information from engineering drawing documents. The process involves preprocessing to remove noise and sharpen the boxes, box segmentation, box identification, text extraction, and connected-component segmentation, followed by text classification. The project is written in Matlab using Thomas Henderson's CS 6530 Fall 2012 Matlab Machine Learning Codebase. The details about this project can be found in the following report. A teaser image from the report is illustrated below:

Drawing Box Extraction

An example engineering drawing image with detected boxes in red.

Linux Kernel Device Drivers

Implemented and tested a set of char, block, and USB Linux kernel device drivers with corresponding hardware devices as part of the CS 8631 course, Spring 2012.

Reference: Linux Device Drivers, Third Edition

Computer Vision and Image Processing Projects

Tracking Moving objects in a Surveillance Video using Canny Filter and Clustering

The goal of this project is to automatically detect and track moving objects in a surveillance video. A two-layered Canny filter is applied to extract a spatio-temporal edge map, whose pixels are then clustered using a greedy clustering technique (an adaptive variation of K-Means). The details of this project are available in this report. A teaser image showing one successful result of the algorithm is illustrated below:

Tracking Moving Pedestrians

Figure showing two moving pedestrians successfully tracked by the above algorithm.
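The greedy clustering step can be sketched as follows. This is a hypothetical Python illustration: the radius parameter and point set are invented, and the real input would be spatio-temporal edge pixels from the Canny stage.

```python
def greedy_cluster(points, radius=3.0):
    """Greedy clustering: assign each point to the first cluster whose
    running centroid lies within `radius`, else start a new cluster.
    Unlike k-means, the number of clusters (tracked objects) does not
    need to be known in advance."""
    clusters = []   # each entry: [sum_x, sum_y, count]
    labels = []
    for x, y in points:
        for i, (sx, sy, n) in enumerate(clusters):
            cx, cy = sx / n, sy / n
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                clusters[i] = [sx + x, sy + y, n + 1]
                labels.append(i)
                break
        else:
            clusters.append([x, y, 1])
            labels.append(len(clusters) - 1)
    return labels, [(sx / n, sy / n) for sx, sy, n in clusters]

# Edge pixels from two distant "moving objects".
pts = [(0, 0), (1, 1), (0, 2), (20, 20), (21, 21), (20, 22)]
labels, centers = greedy_cluster(pts)
```

Each resulting cluster centroid then serves as one tracked object position per frame.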

Scientific Visualization Projects

As part of the CS 6630 course at the University of Utah, I used a variety of visualization software, including Tableau, Processing, and VTK, to develop interactive visualizations that help in better understanding data.

Flow Visualization and Critical Point Analysis using VTK

VTK was used to visualize vector fields and programmatically identify various types of critical points in the flow. The details are described here.

Visualizing 2D and 3D Spatial Data using VTK

2D and 3D spatial data are visualized using a VTK pipeline. The details can be found here and here.

Parallel Coordinates

An interactive parallel coordinates visualization tool was implemented in Processing for exploring open source data sets. The details are described here.

Time Series Visualization using Processing

An interactive time series visualization tool was designed and implemented in Processing and is described here.

Data Exploration Using Tableau

A systematic and scientific visual analysis of a flight data set was done using Tableau. The data were obtained from www.transtats.bts.gov. The details can be found here.

English Accent Characterization using Unsupervised Learning

The goal of this project was to identify and extract important features from human accented speech and evaluate how useful they are for accent characterization. Several types of features, viz. spectral distribution features (cepstral coefficients) and features based on just the frequency domain, were compared; different types of unsupervised clustering (K-Means and K-Center) and dimensionality reduction (SVD vs. PCA) were also compared and contrasted. In the end, we (me, Xu Wang, and Shaobo Pei) found that the combination of K-Means and PCA with cepstral coefficients yielded the best results, as shown in the poster below. A link to the PDF version of this poster can be found here.

Accent characterization poster

Poster illustrating the various aspects of the project and the corresponding results.
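As an illustration of the dimensionality-reduction step, here is a minimal PCA-via-SVD sketch in Python; the "cepstral" vectors below are invented toy data, not the project's features.

```python
import numpy as np

def pca(X, k):
    """Project feature vectors onto the top-k principal components,
    computed from the SVD of the centered data matrix. The right
    singular vectors (rows of Vt) are the principal directions."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy "cepstral" vectors: 3-D points that really vary along one axis.
X = np.array([[1.0, 1.0, 0.0],
              [2.0, 2.0, 0.1],
              [3.0, 3.0, 0.0],
              [4.0, 4.0, 0.1]])
Z = pca(X, k=1)   # one coordinate per sample captures nearly all variance
```

Clustering (e.g. K-Means) would then run on the reduced coordinates Z instead of the raw high-dimensional features.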

Mini Database System

Implemented a mini database system as part of CS 6530. It includes a HeapFile system for storing records, a love/hate buffer-page replacement policy for buffer management, a B+ tree clustered index for finding records, and an external sorting mechanism for joins and ORDER BY clauses.
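The external-sorting component can be sketched as a two-phase external merge sort. This is a toy Python illustration; the real system sorted on-disk pages of records rather than an in-memory list.

```python
import heapq

def external_sort(records, chunk_size=4):
    """External merge sort sketch: sort fixed-size chunks ("runs")
    independently -- in a real DBMS each run is what fits in the
    buffer pool -- then k-way merge the runs with a heap."""
    runs = [sorted(records[i:i + chunk_size])
            for i in range(0, len(records), chunk_size)]
    return list(heapq.merge(*runs))

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
print(external_sort(data))   # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With page-sized runs, only one page per run plus one output page needs to be in memory during the merge, which is why databases use this for ORDER BY and sort-merge joins.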

Webwise Document System

This is a VB.NET application that serves as a one-stop personalized home page (of Webwise Files) for users. Webwise Files are files whose contents change along with those of the World Wide Web, yet they physically exist on the very terminal the user works on. They are user-defined files comprising data (from the WWW) of the user's choice, in the format specified by the user. The application provides support for local and remote text, web page content, blog posts, SOAP-based web services, Twitter posts, and RSS feeds. A Webwise Document (an XML template) stores a bunch of Webwise Files for a particular user. The application has support for column, row, and grid layouts. The full details of the project can be found in this report.

Example Webwise Document

An example Webwise Document showing local text, news headlines, and an RSS feed.

Example Webwise Document

An example Webwise Document showing an SMS web service, a Twitter post, and a blog.

Conference Papers

Srivishnu Kaushik Satyavolu, Sarah H. Creem-Regehr, Jeanine K. Stefanucci, and William B. Thompson. 2014. Pointing from a third person avatar location: does dynamic feedback help?. In Proceedings of the ACM Symposium on Applied Perception (SAP '14). ACM, New York, NY, USA, 95-98.

Conference Publications

Satyavolu, S.; Bruder, G.; Willemsen, P.; Steinicke, F., "Analysis of IR-based virtual reality tracking using multiple Kinects," in Virtual Reality Short Papers and Posters (VRW), 2012 IEEE , vol., no., pp.149-150, 4-8 March 2012.

Workshop Papers

Satyavolu, S.; Willemsen, P. "Enhancing User Immersion and Natural Interaction in HMD-based Virtual Environments with Real-Time Visual Body Feedback Using Multiple Microsoft Kinects." MidGraph 2011, Iowa City, Iowa, USA.