<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Suresh Venkatasubramanian</title>
	<atom:link href="http://www.cs.utah.edu/~suresh/web/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.cs.utah.edu/~suresh/web</link>
	<description></description>
	<lastBuildDate>Tue, 26 Feb 2013 16:47:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.5</generator>
		<item>
		<title>Multiple Target Tracking with RF Sensor Networks</title>
		<link>http://www.cs.utah.edu/~suresh/web/2013/01/24/multiple-target-tracking-with-rf-sensor-networks/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2013/01/24/multiple-target-tracking-with-rf-sensor-networks/#comments</comments>
		<pubDate>Thu, 24 Jan 2013 07:18:32 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CPS 1035565]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=336</guid>
		<description><![CDATA[[author]Maurizio Bocca, Ossi Kaltiokallio, Neal Patwari and Suresh Venkatasubramanian.[/author] Submitted. http://arxiv.org/abs/1302.4720 Abstract: RF sensor networks are wireless networks that can localize and track people (or targets) without needing them to carry or wear any electronic device. They use the change in the received signal strength (RSS) of the links due to the movements of people [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Maurizio Bocca, Ossi Kaltiokallio, Neal Patwari and Suresh Venkatasubramanian.[/author]<br />
<em>Submitted.</em></p>
<p><a href="http://arxiv.org/abs/1302.4720">http://arxiv.org/abs/1302.4720</a></p>
<p><span id="more-336"></span></p>
<p><strong>Abstract:</strong></p>
<div title="Page 1">
<blockquote><p>RF sensor networks are wireless networks that can localize and track people (or targets) without requiring them to carry or wear any electronic device. They use the changes in the received signal strength (RSS) of the links caused by the movements of people to infer their locations. In this paper, we consider real-time multiple target tracking with RF sensor networks. We perform radio tomographic imaging (RTI), which generates images of the change in the propagation field, as if they were frames of a video. Our RTI method uses RSS measurements on multiple frequency channels on each link, combining them with a fade level-based weighted average. We describe how to adapt machine vision methods to the peculiarities of RTI to enable multiple target tracking in real time. Several tests are performed in an open environment, a one-bedroom apartment, and a cluttered office environment. The results demonstrate that the system is capable of accurately tracking up to four targets in real time in cluttered indoor environments, even when their trajectories intersect multiple times, without mis-estimating the number of targets in the monitored area. The highest average tracking error measured in the tests is 0.45 m with two targets, 0.46 m with three targets, and 0.55 m with four targets.</p></blockquote>
</div>
<p>Links: Coming soon.</p>
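As a rough illustration of the fade level-based channel combination mentioned in the abstract: each link measures its RSS change on several frequency channels, and the per-channel changes are merged with a weighted average that favors channels in a good fade state. The function names, the particular weighting rule, and all numbers below are illustrative inventions, not the paper's method.

```python
# Illustrative sketch: combine per-channel RSS changes with a
# fade-level-based weighted average (weighting rule is made up here).

def fade_level(mean_rss, reference_rss):
    """Fade level of a channel: how far its mean RSS sits above (anti-fade)
    or below (deep fade) a reference level."""
    return mean_rss - reference_rss

def combined_rss_change(channel_changes, channel_fade_levels):
    """Weighted average of per-channel RSS changes; deep-fade channels
    (very negative fade level) receive lower weight."""
    weights = [max(0.1, 1.0 + 0.1 * f) for f in channel_fade_levels]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, channel_changes)) / total

# Example: three channels, one of them in a deep fade with a noisy reading.
changes = [2.0, 1.8, -6.0]   # dB change per channel
fades = [3.0, 2.0, -9.0]     # dB relative to reference
print(round(combined_rss_change(changes, fades), 2))
```

The weighted average suppresses the deep-fade channel's outlier reading instead of letting it dominate a plain mean.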
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2013/01/24/multiple-target-tracking-with-rf-sensor-networks/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Sensor Network Localization for Moving Sensors</title>
		<link>http://www.cs.utah.edu/~suresh/web/2012/10/15/sensor-network-localization-for-moving-sensors/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2012/10/15/sensor-network-localization-for-moving-sensors/#comments</comments>
		<pubDate>Mon, 15 Oct 2012 17:44:24 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>
		<category><![CDATA[CCF 1115677]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=287</guid>
		<description><![CDATA[[author]Arvind Agarwal, Hal Daume III, Jeff M. Phillips, Suresh Venkatasubramanian[/author] The Second IEEE ICDM Workshop on Data Mining in Networks Abstract: Sensor network localization (SNL) is the problem of determining the locations of the sensors given sparse and usually noisy inter-communication distances among them. In this work we propose an iterative algorithm named PLACEMENT to [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Arvind Agarwal, Hal Daume III, Jeff M. Phillips, Suresh Venkatasubramanian[/author]<br />
<em><a href="http://damnet.reading.ac.uk/">The Second IEEE ICDM Workshop on Data Mining in Networks</a></em></p>
<p><span id="more-287"></span></p>
<p>Abstract:</p>
<blockquote><p>Sensor network localization (SNL) is the problem of determining the locations of sensors from sparse and usually noisy inter-communication distances among them. In this work we propose an iterative algorithm named PLACEMENT to solve the SNL problem. This algorithm requires an initial estimate of the locations and is guaranteed to reduce the cost function in each iteration. Because it can exploit a good initial estimate of sensor locations, it is well suited both to localizing moving sensors and to refining the results produced by other algorithms. Our algorithm is also very scalable. We have experimented with a variety of sensor networks and shown that the proposed algorithm outperforms existing algorithms in both speed and accuracy in almost all experiments. Our algorithm can embed 120,000 sensors in less than 20 minutes.</p>
</blockquote>
<p>Links: <a href="http://www.cs.utah.edu/~suresh/papers/damnet/damnet.pdf">PDF</a></p>
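The iterative idea in the abstract, starting from an initial location estimate and repeatedly reducing a stress cost, can be sketched with a toy gradient-style refinement. This is not PLACEMENT's actual update rule (and, unlike PLACEMENT, a fixed-step gradient step carries no per-iteration decrease guarantee); the data and names are illustrative.

```python
# Toy stress-reduction refinement of an initial 2-D sensor layout.
import math

def stress(pos, dists):
    """Sum of squared errors between embedded and measured distances."""
    s = 0.0
    for (i, j), d in dists.items():
        dx = pos[i][0] - pos[j][0]
        dy = pos[i][1] - pos[j][1]
        s += (math.hypot(dx, dy) - d) ** 2
    return s

def refine(pos, dists, step=0.1, iters=50):
    """Plain gradient descent on the stress cost from an initial estimate."""
    for _ in range(iters):
        grad = {i: [0.0, 0.0] for i in pos}
        for (i, j), d in dists.items():
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dist = math.hypot(dx, dy) or 1e-12
            g = 2.0 * (dist - d) / dist
            grad[i][0] += g * dx; grad[i][1] += g * dy
            grad[j][0] -= g * dx; grad[j][1] -= g * dy
        for i in pos:
            pos[i] = (pos[i][0] - step * grad[i][0],
                      pos[i][1] - step * grad[i][1])
    return pos

# Three sensors with perturbed initial locations and exact measured distances.
dists = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): math.sqrt(2)}
init = {0: (0.1, -0.1), 1: (0.9, 0.2), 2: (-0.2, 1.1)}
before = stress(init, dists)
after = stress(refine(dict(init), dists), dists)
print(after < before)  # the refinement should lower the stress cost
```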
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2012/10/15/sensor-network-localization-for-moving-sensors/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Radio Tomographic Imaging and Tracking of Stationary and Moving People via Histogram Difference</title>
		<link>http://www.cs.utah.edu/~suresh/web/2012/07/18/radio-tomographic-imaging-and-tracking-of-stationary-and-moving-people-via-histogram-difference/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2012/07/18/radio-tomographic-imaging-and-tracking-of-stationary-and-moving-people-via-histogram-difference/#comments</comments>
		<pubDate>Wed, 18 Jul 2012 16:05:22 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CPS 1035565]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=283</guid>
		<description><![CDATA[[author]Yang Zhao, Neal Patwari, Jeff Phillips and Suresh Venkatasubramanian[/author] IPSN, 2013 Abstract: Device-free localization systems pinpoint and track people in buildings using changes in the signal strength measurements made on wireless devices in the building&#8217;s wireless network. It has been shown that such systems can locate people who do not participate in the system by [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Yang Zhao, Neal Patwari, Jeff Phillips and Suresh Venkatasubramanian[/author]<br />
<a href="http://ipsn.acm.org/2013/"><em>IPSN, 2013</em></a></p>
<p><span id="more-283"></span><br />
<strong>Abstract</strong>:</p>
<blockquote><p>Device-free localization systems pinpoint and track people in buildings using changes in the signal strength measurements made on wireless devices in the building&#8217;s wireless network. It has been shown that such systems can locate people who are not carrying or wearing any radio device, even through walls, because of the changes that moving people cause to the static wireless network. However, many such systems cannot locate stationary people. We present and evaluate a system which can locate stationary or moving people, with or without calibration, by quantifying the difference between two histograms of signal strength measurements. Across five experiments, we show that our kernel distance-based radio tomographic localization system performs better than state-of-the-art device-free localization systems in different non-line-of-sight environments.</p></blockquote>
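A minimal sketch of the histogram-difference idea above: the (squared) kernel distance between two RSS histograms over shared bins, using a Gaussian kernel on bin centers. The bin values, bandwidth, and function name are illustrative, not the paper's implementation.

```python
# Squared kernel distance between two histograms over the same bins.
import math

def kernel_distance_sq(p, q, centers, sigma=2.0):
    """D(P,Q)^2 = <P-Q, P-Q>_K with a Gaussian kernel on bin centers."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
    diff = [pi - qi for pi, qi in zip(p, q)]
    return sum(diff[i] * diff[j] * k(centers[i], centers[j])
               for i in range(len(diff)) for j in range(len(diff)))

centers = [-90, -85, -80, -75, -70]        # RSS bin centers (dBm)
baseline = [0.1, 0.2, 0.4, 0.2, 0.1]       # empty-room histogram
current = [0.3, 0.4, 0.2, 0.1, 0.0]        # histogram with a person present
same = kernel_distance_sq(baseline, baseline, centers)
moved = kernel_distance_sq(baseline, current, centers)
print(same < 1e-12, moved > 0)
```

Because the Gaussian kernel is positive definite, the distance is zero exactly when the histograms agree and strictly positive otherwise, which is what makes it usable as a change score per link.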
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2012/07/18/radio-tomographic-imaging-and-tracking-of-stationary-and-moving-people-via-histogram-difference/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Efficient Protocols for Distributed Classification and Optimization</title>
		<link>http://www.cs.utah.edu/~suresh/web/2012/04/16/efficient-protocols-for-distributed-classification-and-optimization/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2012/04/16/efficient-protocols-for-distributed-classification-and-optimization/#comments</comments>
		<pubDate>Tue, 17 Apr 2012 02:25:28 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=275</guid>
		<description><![CDATA[[author]Hal Daume III, Jeff M. Phillips, Avishek Saha, Suresh Venkatasubramanian[/author] Proc. 23rd International Conference on Algorithmic Learning Theory (ALT), 2012. arXiv:1204.3523v1 [cs.LG] Abstract: In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daume III et al., 2012) proposes a general model [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Hal Daume III, Jeff M. Phillips, Avishek Saha, Suresh Venkatasubramanian[/author]<br />
<a href="http://www-alg.ist.hokudai.ac.jp/~thomas/ALT12/index.html">Proc. 23rd International Conference on Algorithmic Learning Theory (ALT), 2012.</a><br />
<a href="http://arxiv.org/abs/1204.3523">arXiv:1204.3523v1 [cs.LG]</a></p>
<p><span id="more-275"></span><br />
<strong>Abstract:</strong></p>
<blockquote><p>In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daume III et al., 2012) proposes a general model that bounds the communication required for learning classifiers while allowing for $\eps$ training error on linearly separable data adversarially distributed across nodes.</p>
<p>In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses $O(d^2 \log{1/\eps})$ words of communication to classify distributed data in arbitrary dimension $d$, $\eps$-optimally. This readily extends to classification over $k$ nodes with $O(kd^2 \log{1/\eps})$ words of communication. Our proposed protocol is simple to implement and is considerably more efficient than the baselines we compare against, as our empirical results demonstrate.<br />
In addition, we illustrate general algorithm design paradigms for doing efficient learning over distributed data. We show how to solve fixed-dimensional and high-dimensional linear programming efficiently in a distributed setting where constraints may be distributed across nodes. Since many learning problems can be viewed as convex optimization problems whose constraints are generated by individual points, this models many typical distributed learning scenarios. Our techniques make use of a novel connection to multipass streaming, as well as adapting the multiplicative-weight-update framework more generally to a distributed setting. As a consequence, our methods extend to the wide range of problems solvable using these techniques. </p>
</blockquote>
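A deliberately simplified, single-direction sketch of the protocol flavor described above: node A holds labeled 1-D points with multiplicative weights, node B fits a threshold on the points it has received, and each round A doubles the weight of every point B misclassifies and forwards its heaviest mistake. The learner, the update, and the data are illustrative stand-ins, not the paper's protocol or its communication bound.

```python
# Toy two-party classification with multiplicative weight updates.

def fit_threshold(points):
    """B's learner: midpoint threshold between classes -1 (left) and +1 (right)."""
    left = max((x for x, y in points if y < 0), default=float("-inf"))
    right = min((x for x, y in points if y > 0), default=float("inf"))
    return (left + right) / 2.0

def protocol(a_points):
    weights = [1.0] * len(a_points)
    sent = [a_points[0]]                      # communication so far
    while True:
        thr = fit_threshold(sent)
        mistakes = [i for i, (x, y) in enumerate(a_points)
                    if (1 if x > thr else -1) != y]
        if not mistakes:
            return thr, len(sent)             # classifier + points sent
        for i in mistakes:                    # multiplicative update
            weights[i] *= 2.0
        heaviest = max(mistakes, key=lambda i: weights[i])
        sent.append(a_points[heaviest])       # A sends its heaviest mistake

pts = [(-3, -1), (-2, -1), (-1, -1), (1, 1), (2, 1), (4, 1)]
thr, sent_count = protocol(pts)
print(all((1 if x > thr else -1) == y for x, y in pts), sent_count < len(pts))
```

Every point B has already seen is classified correctly by its own threshold, so each round forwards a genuinely new point and the loop terminates; on separable data it usually stops well before all of A's points have been communicated.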
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2012/04/16/efficient-protocols-for-distributed-classification-and-optimization/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Protocols for Learning Classifiers on Distributed Data</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/12/12/protocols-for-learning-classifiers-on-distributed-data/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/12/12/protocols-for-learning-classifiers-on-distributed-data/#comments</comments>
		<pubDate>Mon, 12 Dec 2011 21:53:20 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=260</guid>
		<description><![CDATA[[author]Hal Daumé, Jeff M. Phillips, Avishek Saha and Suresh Venkatasubramanian[/author] In the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), 2012. Abstract: We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Hal Daumé, Jeff M. Phillips, Avishek Saha and Suresh Venkatasubramanian[/author]<br />
In the <a href="http://www.aistats.org/">15th International Conference on Artificial Intelligence and Statistics</a> (AISTATS), 2012.</p>
<p><span id="more-260"></span><br />
<strong>Abstract:</strong></p>
<p>We consider the problem of learning classifiers for labeled data that has been distributed across several nodes. Our goal is to find a single classifier, with small approximation error, across all datasets while minimizing the communication between nodes. This setting models real-world communication bottlenecks in the processing of massive distributed datasets.  We present several very general sampling-based solutions as well as some two-way protocols which have a provable exponential speed-up over any one-way protocol. We focus on core problems for <em>noiseless</em> data distributed across two or more nodes. The techniques we introduce are reminiscent of active learning, but rather than actively probing labels, nodes actively communicate with each other, each node simultaneously learning the important data from another node. </p>
<p>Links: <a href="http://www.cs.utah.edu/~suresh/papers/active/active.pdf">PDF </a>(this is the submitted version, not the final accepted version)</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/12/12/protocols-for-learning-classifiers-on-distributed-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Adaptive Sampling for Large-Data MDS</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/10/17/adaptive-sampling-for-large-data-mds/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/10/17/adaptive-sampling-for-large-data-mds/#comments</comments>
		<pubDate>Mon, 17 Oct 2011 17:19:56 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=258</guid>
		<description><![CDATA[[author]Arvind Agarwal, Chad Brubaker, Hal Daumé III, Jeff M. Phillips and Suresh Venkatasubramanian [/author] Submitted. Abstract: Multidimensional scaling (MDS) is one of the most popular methods for reducing the dimensionality of data. As data sizes have grown, the space and time limitations of traditional MDS algorithms have become more pronounced, and extensive research has gone [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Arvind Agarwal, Chad Brubaker, Hal Daumé III, Jeff M. Phillips and Suresh Venkatasubramanian[/author]<br />
<em>Submitted</em>.</p>
<p><span id="more-258"></span><br />
<strong>Abstract</strong>:<br />
Multidimensional scaling (MDS) is one of the most popular methods for reducing the dimensionality of data. As data sizes have grown, the space and time limitations of traditional MDS algorithms have become more pronounced, and extensive research has gone into designing methods for performing MDS that scale to larger data sets. However, these approaches generally begin by reducing MDS to a matrix decomposition. This decomposition is expensive in time and space, so these approaches focus on approximating it, using Nystr&ouml;m methods to solve a <em>smaller</em> matrix decomposition problem. </p>
<p>In this paper, we present a new approach to scalable MDS that combines adaptive sampling methods, multi-pass streaming algorithms, and multi-core extensions, and gives a much better error-time tradeoff than prior approaches. Our approach uses a <em>nonlinear</em> projection technique that was recently developed for MDS and avoids expensive matrix decompositions, from which it derives much of its space and time efficiency. </p>
<p>This method allows us to perform MDS feasibly and accurately on data sets of the order of hundreds of thousands of points. While this is still not &#8220;enormous&#8221;, it is orders of magnitude larger (for similar error rates) than previous known methods. In addition, because of the underlying approach we use, this method generalizes to many variants of MDS (using robust error metrics, in <em>non-Euclidean</em> spaces) that have never been studied at scale. </p>
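An illustrative sketch (not the paper's algorithm) of the decomposition-free placement idea: each new point is positioned one at a time by minimizing its squared distance error to a few already-placed anchor points, so no large matrix is ever factored. The initialization, step size, and data below are illustrative choices.

```python
# Place a new point in 2-D against anchors by descending the stress cost.
import math

def place_point(anchors, target_dists, steps=200, step=0.05):
    """Gradient-descend a new point's position toward its target distances."""
    # Crude initialization at the anchors' centroid.
    x = sum(a[0] for a in anchors) / len(anchors)
    y = sum(a[1] for a in anchors) / len(anchors)
    for _ in range(steps):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, target_dists):
            dist = math.hypot(x - ax, y - ay) or 1e-12
            g = 2.0 * (dist - d) / dist
            gx += g * (x - ax); gy += g * (y - ay)
        x -= step * gx; y -= step * gy
    return (x, y)

# Anchors at three corners of a unit square; distances put the new
# point at the fourth corner, near (1, 1).
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dists = [math.sqrt(2.0), 1.0, 1.0]
px, py = place_point(anchors, dists)
print(math.hypot(px - 1.0, py - 1.0) < 0.1)
```

Each placement touches only a constant number of anchors, which is where the streaming- and sampling-friendly behavior in the abstract comes from.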
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/10/17/adaptive-sampling-for-large-data-mds/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Computing Hulls, Centerpoints and VC dimension in Positive Definite Space</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/08/10/computing-hulls-in-positive-definite-space/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/08/10/computing-hulls-in-positive-definite-space/#comments</comments>
		<pubDate>Wed, 10 Aug 2011 07:16:01 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0841185]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=69</guid>
		<description><![CDATA[[author]P. Thomas Fletcher, John Moeller, Jeff Phillips and Suresh Venkatasubramanian[/author] In Algorithms And Data Structures Symposium (formerly WADS), 2011. Abstract: Many data analysis problems in machine learning, shape analysis, information theory and even mechanical engineering involve the study and analysis of collections of positive definite matrices. The space of such matrices P(n) is a Riemannian [...]]]></description>
				<content:encoded><![CDATA[<p>[author]P. Thomas Fletcher, John Moeller, Jeff Phillips and Suresh Venkatasubramanian[/author]<br />
In <a href="http://www.wads.org/">Algorithms And Data Structures Symposium</a> (formerly WADS), 2011.<br />
<span id="more-69"></span><br />
Abstract:</p>
<p>Many data analysis problems in machine learning, shape analysis, information theory and even mechanical engineering involve the study and analysis of collections of positive definite matrices. The space of such matrices P(n) is a Riemannian manifold with variable negative curvature. It includes Euclidean space and hyperbolic space as submanifolds, and poses significant challenges for the design of algorithms for data analysis. </p>
<p>In this paper, we develop foundational geometric structures and algorithms for analyzing collections of such matrices. A key technical contribution of this work is the use of <em>horoballs</em>, a natural generalization of halfspaces for non-positively curved Riemannian manifolds. Horoballs possess some desirable properties of halfspaces (and balls) but are fundamentally more complex to work with because of the inherent curvature of the underlying space. </p>
<p>We propose generalizations of the notion of a convex hull and a centerpoint and develop algorithms for constructing such structures approximately by combining structural properties of horoballs with novel decompositions of P(n). Using these, we also prove that the VC-dimension of range spaces defined by horoballs is bounded in the case of P(2) (2 x 2 symmetric positive definite matrices). </p>
<p><strong>Links</strong>: </p>
<ul>
<li><a href="http://www.cs.utah.edu/~suresh/web/wp-content/uploads/2009/10/paper.pdf">2 page version</a> at the <a href="http://www.cs.tufts.edu/research/geometry/FWCG09/">19th Fall Workshop on Computational Geometry</a></li>
<li>Original version at the arxiv (<a href="http://arxiv.org/abs/0912.1580">arXiv:0912.1580v2 [cs.CG]</a>)</li>
<li><a href="http://www.cs.utah.edu/~suresh/papers/psd/paper.pdf">Latest version</a> (restructured, including new results on VC-dimension):</li>
</ul>
<hr />
<p>This material is based upon work supported by the National Science Foundation under Grant No. 0841185.</p>
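A small computational aside on the geometry above: under the affine-invariant Riemannian metric commonly used on P(n), the distance is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, and for 2x2 SPD matrices the eigenvalues of A^{-1}B give it in closed form. This is a generic sketch of that metric, not code from the paper.

```python
# Affine-invariant Riemannian distance between 2x2 SPD matrices.
import math

def spd_distance_2x2(A, B):
    """d(A,B) = sqrt(sum of log^2 of the eigenvalues of A^{-1} B)."""
    (a, b), (c, d) = A
    det_a = a * d - b * c
    inv_a = [[d / det_a, -b / det_a], [-c / det_a, a / det_a]]
    # M = A^{-1} B has real positive eigenvalues when A, B are SPD.
    m = [[sum(inv_a[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = m[0][0] + m[1][1]
    det_m = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det_m, 0.0))
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(math.log(l1) ** 2 + math.log(l2) ** 2)

I = [[1.0, 0.0], [0.0, 1.0]]
A = [[2.0, 0.0], [0.0, 0.5]]
print(round(spd_distance_2x2(I, A), 4))
```

The distance is invariant under congruence A → G A G^T, which is the property that makes P(n) genuinely curved and halfspace-style arguments (hence horoballs) nontrivial.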
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/08/10/computing-hulls-in-positive-definite-space/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Approximate Bregman near neighbors in sublinear time: Beyond the triangle inequality</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/07/29/approximate-bregman-near-neighbors-in-sublinear-time-beyond-the-triangle-inequality/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/07/29/approximate-bregman-near-neighbors-in-sublinear-time-beyond-the-triangle-inequality/#comments</comments>
		<pubDate>Fri, 29 Jul 2011 22:51:08 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=245</guid>
		<description><![CDATA[[author]Amirali Abdullah, John Moeller and Suresh Venkatasubramanian[/author] Proc. Symposium on Computational Geometry, 2012 http://arxiv.org/abs/1108.0835 Abstract: Bregman divergences are important distance measures that are used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Amirali Abdullah, John Moeller and Suresh Venkatasubramanian[/author]<br />
<em><a href="http://socg2012.web.unc.edu/">Proc. Symposium on Computational Geometry, 2012</a></em><br />
<a href="http://arxiv.org/abs/1108.0835 "><em>http://arxiv.org/abs/1108.0835 </em></a></p>
<p><span id="more-245"></span><br />
<strong>Abstract</strong>:</p>
<blockquote><p>
Bregman divergences are important distance measures that are used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures is very important in these applications and has been the subject of extensive study, but is problematic because these distance measures lack metric properties like symmetry and the triangle inequality.</p>
<p>In this paper, we present the first provably approximate nearest-neighbor (ANN) algorithms for Bregman divergences. These process queries in $O(\log n)$ time in fixed dimensional spaces. We also obtain $\text{poly}\log n$ bounds for a more abstract class of distance measures (containing Bregman divergences) that satisfy certain structural properties. Both of these bounds apply to the regular asymmetric Bregman divergences as well as to their symmetrized versions.</p>
<p>To do so, we develop two geometric properties vital to our analysis: a reverse triangle inequality (RTI) and a relaxed triangle inequality called $\mu$-defectiveness, where $\mu$ is a domain-dependent parameter. Bregman divergences satisfy the RTI but not $\mu$-defectiveness. However, we show that the square root of a Bregman divergence does satisfy $\mu$-defectiveness. This allows us to utilize both properties in an efficient search data structure that follows the general two-stage paradigm of a ring-tree decomposition followed by a quadtree search, as used in previous near-neighbor algorithms for Euclidean space and spaces of bounded doubling dimension. </p>
<p>Our first algorithm resolves a query for a $d$-dimensional $(1+\eps)$-ANN in $O\left((\frac{\log n}{\eps})^{O(d)}\right)$ time and $O\left(n \log^{d-1} n\right)$ space, and holds for generic $\mu$-defective distance measures satisfying an RTI. Our second algorithm is tailored more specifically to Bregman divergences and uses a further structural constant, the maximum ratio of second derivatives over each dimension of our domain ($c_0$). This allows us to locate a $(1+\eps)$-ANN in $O(\log n)$ time and $O(n)$ space, with a further $(c_0)^d$ factor in the big-O for the query time.</p>
</blockquote>
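A small numeric illustration of the structural claims above, using the one-dimensional Bregman divergence of phi(x) = x log x (a KL-style divergence): the divergence itself is asymmetric, while over a bounded domain its square root satisfies a relaxed triangle inequality with a modest factor mu. The domain, grid, and checks are illustrative only.

```python
# Empirical mu-defectiveness check for the sqrt of a 1-D Bregman divergence.
import math

def breg(x, y):
    """D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y) with phi = x log x."""
    return x * math.log(x / y) - x + y

pts = [0.5 + 0.1 * i for i in range(11)]     # bounded domain [0.5, 1.5]

# The divergence is asymmetric: breg(a, b) != breg(b, a) in general.
asym = any(abs(breg(a, b) - breg(b, a)) > 1e-9
           for a in pts for b in pts if a != b)

def mu_for(dist):
    """Smallest mu with |d(a,b) - d(a,c)| <= mu * d(b,c) over the grid."""
    mu = 1.0
    for a in pts:
        for b in pts:
            for c in pts:
                if b != c:
                    mu = max(mu, abs(dist(a, b) - dist(a, c)) / dist(b, c))
    return mu

mu_sqrt = mu_for(lambda x, y: math.sqrt(breg(x, y)))
print(asym, mu_sqrt)
```

On this domain the square-rooted divergence comes out close to a metric (mu stays a small constant), which is the behavior the search structure above exploits.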
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/07/29/approximate-bregman-near-neighbors-in-sublinear-time-beyond-the-triangle-inequality/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Generating a Diverse Set of High-Quality Clusterings</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/07/29/generating-a-diverse-set-of-high-quality-clusterings/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/07/29/generating-a-diverse-set-of-high-quality-clusterings/#comments</comments>
		<pubDate>Fri, 29 Jul 2011 22:19:19 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=243</guid>
		<description><![CDATA[[author]Jeff Phillips, Parasaran Raman and Suresh Venkatasubramanian[/author] arXiv:1108.0017 In the 2nd MultiClust Workshop: Discovering, Summarizing and Using Multiple Clusterings (held in conjunction with ECML/PKDD 2011) Best Paper Award. Abstract: We provide a new framework for generating multiple good quality partitions (clusterings) of a single data set. Our approach decomposes this problem into two components, generating [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Jeff Phillips, Parasaran Raman and Suresh Venkatasubramanian[/author]<br />
<a href="http://arxiv.org/abs/1108.0017">arXiv:1108.0017</a><br />
<em>In the <a href="http://dme.rwth-aachen.de/en/MultiClust2011">2nd MultiClust Workshop: Discovering, Summarizing and Using Multiple Clusterings</a> (held in conjunction with <a href="http://www.ecmlpkdd2011.org/">ECML/PKDD 2011</a>)</em><br />
<strong>Best Paper Award.</strong><br />
<span id="more-243"></span><br />
<strong>Abstract:</strong></p>
<blockquote><p>We provide a new framework for generating multiple good quality partitions (clusterings) of a single data set. Our approach decomposes this problem into two components, generating many high-quality partitions, and then grouping these partitions to obtain k representatives. The decomposition makes the approach extremely modular and allows us to optimize various criteria that control the choice of representative partitions.</p></blockquote>
<p>Links: <a href="http://www.cs.utah.edu/~suresh/papers/multiclust11/alternative.pdf">PDF</a></p>
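An illustrative sketch of the two-stage framework above: given many candidate partitions, measure pairwise distance between partitions (here, Rand distance) and pick k diverse representatives by farthest-first traversal. The hand-made partitions stand in for the "many high-quality partitions" stage, and the distance and selection rule are illustrative choices, not the paper's criteria.

```python
# Pick k diverse representative partitions from a candidate pool.
from itertools import combinations

def rand_distance(p, q):
    """Fraction of point pairs on which partitions p and q disagree."""
    pairs = list(combinations(range(len(p)), 2))
    disagree = sum((p[i] == p[j]) != (q[i] == q[j]) for i, j in pairs)
    return disagree / len(pairs)

def representatives(partitions, k):
    """Farthest-first traversal: greedily pick k mutually distant partitions."""
    chosen = [0]
    while len(chosen) < k:
        best = max((i for i in range(len(partitions)) if i not in chosen),
                   key=lambda i: min(rand_distance(partitions[i], partitions[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen

parts = [
    [0, 0, 0, 1, 1, 1],   # split front/back
    [0, 0, 0, 1, 1, 1],   # duplicate of the first
    [0, 1, 0, 1, 0, 1],   # alternating split
    [0, 0, 1, 1, 2, 2],   # three groups
]
print(representatives(parts, 2))
```

Note that the duplicate partition is never chosen: its distance to the first representative is zero, so farthest-first naturally favors genuinely different clusterings.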
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/07/29/generating-a-diverse-set-of-high-quality-clusterings/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Active Supervised Domain Adaptation</title>
		<link>http://www.cs.utah.edu/~suresh/web/2011/07/29/active-supervised-domain-adaptation/</link>
		<comments>http://www.cs.utah.edu/~suresh/web/2011/07/29/active-supervised-domain-adaptation/#comments</comments>
		<pubDate>Fri, 29 Jul 2011 22:11:09 +0000</pubDate>
		<dc:creator>suresh</dc:creator>
				<category><![CDATA[Papers]]></category>
		<category><![CDATA[CCF 0841185]]></category>
		<category><![CDATA[CCF 0953066]]></category>

		<guid isPermaLink="false">http://www.cs.utah.edu/~suresh/web/?p=239</guid>
		<description><![CDATA[[author]Avishek Saha, Piyush Rai, Hal Daumé III, Suresh Venkatasubramanian, and Scott L. DuVall[/author] In the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2011) Abstract: In this paper, we harness the synergy between two important learning paradigms, namely, active learning and domain adaptation. We show how active learning [...]]]></description>
				<content:encoded><![CDATA[<p>[author]Avishek Saha, Piyush Rai, Hal Daumé III, Suresh Venkatasubramanian, and Scott L. DuVall[/author]<br />
<em>In the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (<a href="http://www.ecmlpkdd2011.org/index.php">ECML-PKDD 2011</a>)</em><br />
<span id="more-239"></span><br />
<strong>Abstract:</strong><br />
In this paper, we harness the synergy between two important learning paradigms, namely, active learning and domain adaptation. We show how active learning in a target domain can leverage information from a different but related source domain. Our proposed framework, Active Learning Domain Adapted (Alda), uses source domain knowledge to transfer information that facilitates active learning in the target domain. We propose two variants of Alda: a batch B-Alda and an online O-Alda. Empirical comparisons with numerous baselines on real-world datasets establish the efficacy of the proposed methods.</p>
<p>Links: <a href="http://www.cs.utah.edu/~suresh/papers/ecml2011/alda.pdf">PDF</a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cs.utah.edu/~suresh/web/2011/07/29/active-supervised-domain-adaptation/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>