Invited Talks

Keynote Speaker, Carlos Guestrin

Carlos Guestrin is the Amazon Professor of Machine Learning in Computer Science & Engineering at the University of Washington. He is also the co-founder of GGideaLab, a startup focused on monetizing social networks. Previously, he was a senior researcher at the Intel Research Lab in Berkeley. Carlos received his MSc and PhD in Computer Science from Stanford University in 2000 and 2003, respectively, and a Mechatronics Engineer degree from the Polytechnic School of the University of São Paulo, Brazil, in 1998. Carlos's work has received awards at a number of conferences and journals: KDD 2007 and 2010, IPSN 2005 and 2006, VLDB 2004, NIPS 2003 and 2007, UAI 2005, ICML 2005, AISTATS 2010, JAIR in 2007, and JWRPM in 2009. He is also a recipient of the ONR Young Investigator Award, the NSF CAREER Award, an Alfred P. Sloan Fellowship, an IBM Faculty Fellowship, the Siebel Scholarship, and the Stanford Centennial Teaching Assistant Award. Carlos was named one of the 2008 'Brilliant 10' by Popular Science Magazine, and received the IJCAI Computers and Thought Award and the Presidential Early Career Award for Scientists and Engineers (PECASE). He is a former member of the Information Sciences and Technology (ISAT) advisory group for DARPA.

Title: Machine Learning at Scale with GraphLab

Abstract: Today, machine learning (ML) methods play a central role in industry and science. The growth of the Web and improvements in sensor data collection technology have been rapidly increasing the magnitude and complexity of the ML tasks we must solve. This growth is driving the need for scalable, parallel ML algorithms that can handle “Big Data.”

In this talk, we will focus on:
1. Examining common algorithmic patterns in distributed ML methods.
2. Characterizing the challenges of implementing these algorithms in real distributed systems.
3. Describing computational frameworks for implementing these algorithms at scale.

In the latter part, we will focus mainly on the GraphLab framework, which naturally expresses asynchronous, dynamic graph computations that are key to state-of-the-art ML algorithms. When these algorithms are expressed in our higher-level abstraction, GraphLab effectively addresses many of the underlying parallelism challenges, including data distribution, optimized communication, and sequential consistency, a property that is surprisingly important for many ML algorithms. On a variety of large-scale tasks, GraphLab provides 20-100x performance improvements over Hadoop. In recent months, GraphLab has received many tens of thousands of downloads and is being actively used by a number of startups, companies, research labs, and universities.
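To make the abstraction concrete, here is a minimal sketch of GraphLab's vertex-program pattern, rendered in Python purely for illustration (the real framework is C++ and its API differs): an update function recomputes a vertex's value from its neighborhood, and a dynamic scheduler reruns only the vertices affected by a change. The PageRank formulation, the FIFO scheduler, and the tolerance are assumptions for this sketch.

```python
# Illustrative sketch of a GraphLab-style vertex program: PageRank as an
# update function with dynamic scheduling. Not the actual GraphLab API.
from collections import deque

DAMPING = 0.85
TOL = 1e-4

def dynamic_pagerank(in_nbrs, out_nbrs):
    """in_nbrs/out_nbrs: dicts mapping each vertex to a list of neighbors."""
    rank = {v: 1.0 for v in in_nbrs}
    queue = deque(rank)          # every vertex is scheduled once initially
    scheduled = set(queue)
    while queue:
        v = queue.popleft()
        scheduled.discard(v)
        # The "update function": read neighbor data, recompute this vertex.
        new_rank = (1.0 - DAMPING) + DAMPING * sum(
            rank[u] / len(out_nbrs[u]) for u in in_nbrs[v]
        )
        changed = abs(new_rank - rank[v]) > TOL
        rank[v] = new_rank
        if changed:
            # Dynamic computation: reschedule only the out-neighbors whose
            # input this update just changed.
            for w in out_nbrs[v]:
                if w not in scheduled:
                    queue.append(w)
                    scheduled.add(w)
    return rank
```

In the actual framework these updates run in parallel across cores and machines while the runtime enforces the chosen consistency model; the sequential loop above only illustrates the programming abstraction.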

Keynote Speaker, Santosh Vempala

Vempala attended Carnegie Mellon University, where he received his Ph.D. in 1997 under Professor Avrim Blum. In 1997, he was awarded a Miller Fellowship at Berkeley. Subsequently, he was a professor in the Mathematics Department at MIT until he moved to Georgia Tech in 2006. His main work has been in the area of theoretical computer science, with particular activity in the fields of algorithms, randomized algorithms, computational geometry, and computational learning theory, including the authorship of books on random projection and spectral methods. Vempala has received numerous awards, including a Guggenheim Fellowship and a Sloan Fellowship, and has been listed in Georgia Trend's 40 Under 40. In 2008, he co-founded the Computing for Good (C4G) program at Georgia Tech.

Title: High-dimensional Sampling Algorithms and their Applications

Abstract: How efficiently can we solve fundamental problems such as Optimization, Integration, Rounding and Sampling in high dimension? Under appropriate convexity assumptions, these general problems can be solved in time polynomial in the dimension, with sampling playing a central role. In this talk, we survey the state-of-the-art and the main ideas that led to it, including geometric random walks, simulated annealing, isoperimetric inequalities and concentration of measure.
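As a concrete instance of the geometric random walks the abstract mentions, here is a minimal sketch of the ball walk, which samples (approximately) uniformly from a convex body given only a membership oracle. The step size, the number of steps, and the example body below are illustrative assumptions; real guarantees depend on careful choices of both.

```python
# A minimal sketch of the ball walk over a convex body K, accessed only
# through a membership oracle. Step size and walk length are assumptions.
import numpy as np

def ball_walk(in_body, x0, delta=0.1, steps=10_000, rng=None):
    """From x, propose a uniform point in the ball of radius delta
    around x; move there only if it lies inside K."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(steps):
        d = rng.standard_normal(n)
        # Uniform point in the delta-ball: random direction, radius ~ U^(1/n).
        d *= delta * rng.random() ** (1.0 / n) / np.linalg.norm(d)
        y = x + d
        if in_body(y):      # membership oracle call
            x = y           # rejected proposals keep the walk at x
    return x

# Example: approximate uniform sample from the cube [-1, 1]^20.
sample = ball_walk(lambda p: bool(np.all(np.abs(p) <= 1.0)), x0=np.zeros(20))
```

The mixing time of such walks, which is controlled by the isoperimetric inequalities mentioned in the abstract, is what makes polynomial-time sampling, and through it volume computation and optimization, possible.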

Keynote Speaker, Vincent Vanhoucke

Vincent Vanhoucke is a Research Scientist at Google. He leads the speech recognition quality effort for Google Search by Voice. He holds a Ph.D. in Electrical Engineering from Stanford University and a Diplôme d'Ingénieur from the École Centrale Paris.

Title: Acoustic Modeling and Deep Learning for Speech Recognition

Abstract: Over the past few years, advances in deep learning have triggered a mini-revolution in the field of acoustic modeling for automatic speech recognition. Acoustic modeling has evolved largely independently from machine learning for many years, developing its own set of unique techniques in an increasingly complex and specialized ecosystem. The success of deep learning has forced the community to rethink many long-held assumptions about what matters to speech recognition accuracy: what are the roles of discriminative learning, speaker adaptation, noise robustness, and feature engineering? Can we perform unsupervised, semi-supervised, and transfer learning effectively? How much and what type of data can we really use? More importantly, this development is providing the machine learning and speech recognition communities with an opportunity to reconnect around a familiar set of basic tools and methods. In this talk, I will provide an overview of these recent developments and attempt to paint a picture of what new opportunities lie ahead.
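To ground the discussion, here is a schematic sketch of the hybrid DNN/HMM approach behind this mini-revolution: a feed-forward network maps a window of stacked acoustic frames to posterior probabilities over HMM states. All layer sizes, the log-mel features, and the untrained random weights below are illustrative assumptions, not any production configuration.

```python
# Schematic sketch of a hybrid DNN/HMM acoustic model. Sizes are
# illustrative; production systems use far wider layers and thousands
# of context-dependent HMM states.
import numpy as np

CONTEXT = 11      # frames of acoustic context stacked per input window
N_MEL = 40        # log-mel filterbank coefficients per frame
HIDDEN = 512      # hidden-layer width (illustrative)
N_STATES = 2000   # number of HMM states (illustrative)

rng = np.random.default_rng(0)
layers = [(0.01 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in [(CONTEXT * N_MEL, HIDDEN),
                       (HIDDEN, HIDDEN),
                       (HIDDEN, N_STATES)]]

def state_posteriors(frames):
    """frames: (CONTEXT, N_MEL) feature window -> posterior over HMM states."""
    h = frames.reshape(-1)                 # stack the context window
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)         # ReLU hidden activations
    e = np.exp(h - h.max())                # numerically stable softmax
    return e / e.sum()

posteriors = state_posteriors(rng.standard_normal((CONTEXT, N_MEL)))
```

In a full recognizer these per-frame posteriors are converted to scaled likelihoods and combined with the HMM and the language model during decoding; much of the rethinking the abstract describes concerns which stages of that classical pipeline the network can absorb.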