
AI For Social Good (AISG)
Margaux Luck · Kris Sankaran · Tristan Sylvain · Sean McGregor · Jonnie Penn · Girmaw Abebe Tadesse · Virgile Sylvain · Myriam Côté · Lester Mackey · Rayid Ghani · Yoshua Bengio

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ 104 B

AI for Social Good

Important information

Contact information: aisg2019.icml.contact@gmail.com

Submission deadline: EXTENDED to April 26th 2019 11:59PM ET

Workshop website

Submission website

Poster Information:

  • Poster Size - 36W x 48H inches or 90 x 122 cm

  • Poster Paper - lightweight paper - not laminated


This workshop builds on our AI for Social Good workshop at NeurIPS 2018 and ICLR 2019.

Introduction: The rapid expansion of AI research presents two clear conundrums:

  • the comparative lack of incentives for researchers to address social impact issues and
  • the dearth of conferences and journals centered around the topic. Researchers motivated to help often find themselves without a clear idea of which fields to delve into.

Goals: Our workshop addresses both these issues by bringing together machine learning researchers, social impact leaders, stakeholders, policy leaders, and philanthropists to discuss their ideas and applications for social good. To broaden the impact beyond the convening of our workshop, we are partnering with AI Commons to expose accepted projects and papers to the broader community of machine learning researchers and engineers. The projects/research may be at varying degrees of development, from formulation as a data problem to detailed requirements for effective deployment. We hope that this gathering of talent and information will inspire the creation of new approaches and tools by the community, help scientists access the data they need, involve social and policy stakeholders in the framing of machine learning applications, and attract interest from philanthropists invited to the event to make a dent in our shared goals.

Topics: The UN Sustainable Development Goals (SDGs) are a set of seventeen objectives whose achievement would lead to a more equitable, prosperous, and sustainable world. In this light, our main areas of focus are the following: health, education, the protection of democracy, urban planning, assistive technology, agriculture, environmental protection and sustainability, social welfare and justice, and the developing world. Each of these themes presents unique opportunities for AI to reduce human suffering and allow citizens and democratic institutions to thrive.

Across these topics, we have dual goals: recognizing high-quality work in machine learning motivated by or applied to social applications, and creating meaningful connections between communities dedicated to solving technical and social problems. To this end, we propose two research tracks:

  • Short Papers Track (Up to four page papers + unlimited pages for citations) for oral and/or poster presentation. The short papers should focus on past and current research work, showcasing actual results and demonstrating beneficial effects on society. We also accept short papers of recently published or submitted journal contributions to give authors the opportunity to present their work and obtain feedback from conference attendees.
  • Problem Introduction Track (Application form, up to five page responses + unlimited pages for citations) which will present a specific solution that will be shared with stakeholders, scientists, and funders. The workshop will provide a suite of questions designed to: (1) estimate the feasibility and impact of the proposed solutions, and (2) estimate the importance of data in their implementation. The application responses should highlight ideas that have not yet been implemented in practice but can lead to real impact. The projects may be at varying degrees of development, from formulation as a data problem to a structure for effective deployment. The workshop provides a supportive platform for developing these early-stage or hobby proposals into real projects. This process is designed to foster the sharing of different points of view, ranging from scientific assessment of feasibility to discussion of practical constraints that may be encountered, and to attract interest from philanthropists invited to the event. Accepted submissions may be promoted to the wider AI solutions community following the workshop via the AI Commons, with whom we are partnering to promote the longer-term development of projects.
Sat 8:45 a.m. - 9:00 a.m.
Welcoming and Poster set-up
Sat 9:00 a.m. - 9:05 a.m.

Speaker bio: Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand principles of learning that yield intelligence. He supervises a large group of graduate students and post-docs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an H-index over 120, and rising fast).

Yoshua Bengio
Sat 9:05 a.m. - 9:45 a.m.

Wadhwani AI was inaugurated a little more than a year ago with the mission of bringing the power of AI to address societal challenges, especially among underserved communities throughout the world. We aim to address problems in all major domains, including health, agriculture, education, infrastructure, and financial inclusion. We are currently working on three solutions (two in health and one in agriculture) and are exploring more areas where we can apply AI for social good. The most important lesson that we have learned during our short stint is the importance of working in close partnership with other stakeholders and players in the social sectors, especially NGOs and government organizations. In this talk, I will use one case, namely that of developing an AI-based approach for Integrated Pest Management (IPM) in cotton farming, to describe how this partnership-based approach has evolved and been critical to our solution development and implementation.

Speaker bio: Dr. P. Anandan is the CEO of the Wadhwani Institute of Artificial Intelligence. His prior experience includes Adobe Research Lab India (2016-2017), as VP for Research, and Microsoft Research (1997-2014), as a Distinguished Scientist and Managing Director. He was also the founding director of Microsoft Research India, which he ran from 2005 to 2014. Earlier, he was a researcher at Sarnoff Corporation (1991-1997) and an Assistant Professor of Computer Science at Yale University (1987-1991). His primary research area is computer vision, where he is well known for his fundamental and lasting contributions to the problem of visual motion analysis. He received his PhD in Computer Science from the University of Massachusetts, Amherst in 1987, a Masters in Computer Science from the University of Nebraska, Lincoln in 1979, and his B.Tech in Electrical Engineering from IIT Madras, India in 1977. He is a distinguished alumnus of IIT Madras and UMass Amherst, and is in the Nebraska Hall of Computing. His hobbies include playing African drums, writing poems (in Tamil), and travel, which makes his work-related travel interesting.

P. Anandan
Sat 9:45 a.m. - 9:50 a.m.

AI Commons is a collective project whose goal is to make the benefits of AI available to all. Since AI research can benefit from the input of a large range of talents across the world, the project seeks to develop ways for developers and organizations to collaborate more easily and effectively. As a community operating in an environment of trust and problem-solving, AI Commons can empower researchers to tackle the world's important problems using all the possibilities of cutting-edge AI.


Yoshua Bengio
Sat 9:50 a.m. - 10:00 a.m.

Marine debris pollution is one of the most ubiquitous and pressing environmental issues affecting our oceans today. Clean up efforts such as the Great Pacific Garbage Patch project have been implemented across the planet to combat this problem. However, resources to accomplish this goal are limited, and the afflicted area is vast. To this end, unmanned vehicles that are capable of automatically detecting and removing small-sized debris would be a great complementary approach to existing large-scale garbage collectors. Due to the complexity of fully functioning unmanned vehicles for both detecting and removing debris, in this project, we focus on the detection task as a first step. From the perspective of machine learning, there is an unfortunate lack of sufficient labeled data for training a specialized detector, e.g., a classifier that can distinguish debris from other objects like wild animals. Moreover, pre-trained detectors on other domains would be ineffective while creating such datasets manually would be very costly. Due to the recent progress of training deep models with synthetic data and domain randomization, we propose to train a debris detector based on a mixture of real and synthetic images.
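The real/synthetic mixture described above can be sketched as a batch sampler that draws a fixed fraction of each training batch from a large domain-randomized synthetic pool and the rest from the scarce labeled real images. This is an illustrative sketch only: the function name, pool sizes, and the 75% synthetic fraction are assumptions, not details from the project.

```python
import random

def mixed_batches(real, synthetic, batch_size, synthetic_frac, seed=0):
    """Yield endless training batches mixing real and synthetic examples.

    synthetic_frac controls how much of each batch comes from the
    synthetic (domain-randomized) pool; the remainder is real data.
    """
    rng = random.Random(seed)
    n_syn = int(round(batch_size * synthetic_frac))
    n_real = batch_size - n_syn
    while True:
        batch = rng.sample(synthetic, n_syn) + rng.sample(real, n_real)
        rng.shuffle(batch)  # avoid a fixed synthetic/real ordering
        yield batch

# Toy (image id, label) pairs standing in for labeled images:
# few real examples, many cheap synthetic ones.
real_pool = [("real_img_%d" % i, "debris") for i in range(10)]
syn_pool = [("syn_img_%d" % i, "debris") for i in range(100)]

gen = mixed_batches(real_pool, syn_pool, batch_size=8, synthetic_frac=0.75)
batch = next(gen)  # 6 synthetic + 2 real examples, shuffled
```

In practice the batches would feed a detector's training loop; the sampler only illustrates how scarce real labels can be stretched with synthetic data.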

Speaker bio: Kris is a postdoc at Mila working with Yoshua Bengio on problems related to humanitarian AI. He is generally interested in ways to broaden the scope of problems studied by the machine learning community and is curious about ways to bridge statistical and computational thinking.

Kris Sankaran
Sat 10:00 a.m. - 10:10 a.m.

Technologies to address cyberbullying are limited to detecting and hiding abusive messages. We propose to investigate the potential of conversational technologies for addressing abusers. We will outline directions for studying the effectiveness of dialog strategies (e.g., to educate or deter abusers, or to keep them busy with chatbots rather than their victims) and for initiating new research on chatbot-mediated mitigation of online abuse.

Speaker bio: Emma Beauxis-Aussalet is a Senior Track Associate at the Digital Society School of Amsterdam University of Applied Science, where she investigates how data-driven technologies can be applied for the best interests of society. She holds a PhD on classification errors and biases from Utrecht University. Her interests include ethical and explainable AI, data literacy in the general public, and the synergy between human & artificial intelligence to tackle job automation.

Emma Beauxis-Aussalet
Sat 10:10 a.m. - 10:20 a.m.

The use of Teff as an exclusive crop for making Injera, the Ethiopian national staple, has changed over time. Driven by the ever-increasing price of Teff, producers have added other ingredients, some of which are acceptable (maize and rice) while others are not. Hence, households opting for industrially produced Injera are disturbed by the fact that they cannot figure out what exactly is contained in their Injera. Thousands of local producers and local shopkeepers work together to make fresh Injera available to millions around the country. However, consumers are finding it more and more difficult to find safe Injera for purchase. This Injera is usually sold unpacked, unlabeled, and in an unsafe way through local shops. Consumers therefore face more and more health risks, all the more so as it is impossible to evaluate the ingredients contained in the Injera they are buying. There are two kinds of risks: (a) the local producers might try to reduce costs by using cheap ingredients, including risky additives, and (b) the shops might sell expired Injera warmed up. We discuss here the growing food safety problem faced by millions of Injera consumers in Ethiopia, and the possibility of using AI to solve this problem.

Speaker bio: Wondimagegnehu is a master's student in Information Science at Addis Ababa University. He is working on a master's thesis on learning an optimal representation of word structure for morphologically complex languages under constrained settings: limited training data and human supervision. He is interested in exploring research challenges in using AI in a social setting.

Sat 10:20 a.m. - 10:30 a.m.

In recent years, floods, landslides and droughts have become an annual occurrence in Sri Lanka. Despite the efforts of the government and other entities, these natural disasters remain a challenge, particularly for people living in high-risk areas. It is also crucial to predict such disasters early enough to facilitate the evacuation of people living in these areas. Furthermore, the Sri Lankan economy largely depends on agriculture, yet this sector remains untouched by recent advances in AI and other predictive analytics techniques. The proposed solution is an AI-based platform that generates insights from emerging data sources. It will be modular, extensible, and open source. Like any other real-world AI system, the end solution will consist of multiple data pipelines that extract data, analyze it, and present results through APIs. The presentation layer will be a public API that can be consumed through a portal such as the Disaster Management Centre of Sri Lanka.

Speaker bio: Narmada is a research engineer at ConscientAI Labs, based in Sri Lanka. She is also a visiting research student at the Memorial University of Newfoundland, Canada. She is interested in research on climate change and its effects on human lifestyles, and in deep learning for computer vision.

Sat 10:30 a.m. - 11:00 a.m.
Break / Poster Session 1
Sat 11:00 a.m. - 11:40 a.m.

AI can help solve big data and decision-making problems to understand and protect the environment. I'll survey several projects in the area and discuss how to approach environmental problems using AI. The Dark Ecology project uses weather radar and machine learning to unravel mysteries of bird migration. A surprising probabilistic inference problem arises when analyzing animal survey data to monitor populations. Novel optimization algorithms can help reason about dams, hydropower, and the ecology of river networks.

Speaker bio: Daniel Sheldon is an Assistant Professor of Computer Science at the University of Massachusetts Amherst and Mount Holyoke College. His research investigates fundamental problems in machine learning and AI motivated by large-scale environmental data, dynamic ecological processes, and real-world network phenomena.

Daniel Sheldon
Sat 11:40 a.m. - 11:50 a.m.

The handicraft industry is a strong pillar of the Indian economy, providing large-scale employment opportunities to artisans in rural and underprivileged communities. However, in this era of globalization, diverse modern designs have rendered traditional designs old and monotonous, causing an alarming decline in handicraft sales. In this talk, we will discuss our approach, leveraging techniques such as GANs, color transfer, and pattern generation, to generate contemporary designs for two popular Indian handicrafts, Ikat and Block Print. The resulting designs are evaluated to be significantly more likeable and marketable than the current designs used by artisans.

Speaker bio: Sonam Damani is an Applied Scientist at Microsoft India, where she has worked on several projects in AI and deep learning, including Microsoft's human-like chatbot Ruuh, Cortana's personality, novel art generation using AI, and Bing search relevance, among others. In the past year, she has co-authored several publications in conversational AI and AI creativity that were presented at NeurIPS, WWW, and CODS-COMAD.

Sonam Damani
Sat 11:50 a.m. - 12:00 p.m.

We present a deep CNN for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting whether there is a cancer in the breast, when tested on the screening population. We attribute the high accuracy of our model to a two-stage training procedure, which allows us to use a very high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and find our model to be as accurate as experienced radiologists when presented with the same data. Finally, we show that a hybrid model, averaging probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately.
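The hybrid model mentioned above simply averages the radiologist's predicted probability of malignancy with the network's prediction for each exam. A minimal sketch of that averaging step, with a textbook rank-sum AUC for scoring; all function names and numbers here are illustrative assumptions, not the paper's data or code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def hybrid(p_radiologist, p_network, weight=0.5):
    """Average the radiologist's and the network's malignancy probabilities."""
    return [weight * r + (1 - weight) * m
            for r, m in zip(p_radiologist, p_network)]

# Toy per-exam probabilities: 1 = cancer present, 0 = absent.
labels = [1, 1, 0, 0, 1, 0]
p_rad = [0.9, 0.4, 0.3, 0.2, 0.6, 0.5]
p_net = [0.7, 0.8, 0.4, 0.1, 0.5, 0.3]
p_hyb = hybrid(p_rad, p_net)
```

On real data the claim is that the averaged scores achieve a higher AUC than either source alone; the toy numbers only demonstrate the mechanics.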

Speaker bios:

  • Krzysztof Geras is an assistant professor at NYU School of Medicine and an affiliated faculty member at the NYU Center for Data Science. His main interests are in unsupervised learning with neural networks, model compression, transfer learning, evaluation of machine learning models, and applications of these techniques to medical imaging. He previously did a postdoc at NYU with Kyunghyun Cho, a PhD at the University of Edinburgh with Charles Sutton, and an MSc as a visiting student at the University of Edinburgh with Amos Storkey. His BSc is from the University of Warsaw.
  • Nan Wu is a PhD student at the NYU Center for Data Science. She is interested in data science with applications to healthcare and is currently working on medical image analysis. Before joining NYU, she graduated from the School for Gifted Young, University of Science and Technology of China, receiving a B.S. in Statistics and a B.A. in Business Administration.
Krzysztof J Geras, Nan Wu
Sat 12:00 p.m. - 2:00 p.m.
Lunch - on your own
Sat 2:00 p.m. - 2:30 p.m.

Luke Stark will discuss two recent papers (Greene, Hoffmann & Stark 2019; Stark & Hoffmann 2019) that use discursive analysis to examine a) recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning and b) professional ethics codes in computer science, statistics, and other fields. Guided by insights from Science and Technology Studies, values in design, and the sociology of business ethics, he will discuss the grounding assumptions and terms of debate that shape current conversations about ethical design in data science and AI. He will also advocate for an expanded view of expertise in understanding what ethical AI/ML/AI for Social Good should mean.

Speaker bio: Luke Stark is a Postdoctoral Researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research Montreal, and an Affiliate of the Berkman Klein Center for Internet & Society at Harvard University. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA in History from the University of Toronto. Trained as a media historian, his scholarship centers on the interconnected histories of artificial intelligence (AI) and behavioral science, and on the ways the social and ethical contexts of AI are changing how we work, communicate, and participate in civic life.

Sat 2:30 p.m. - 3:00 p.m.

Over the past six years, Will High has volunteered his expertise as a data scientist to various nonprofits and civic causes. He has contributed to work on homelessness, improving charter schools, and optimizing water distribution. He will talk about his experience doing pro bono work with DataKind, a global nonprofit based in New York that connects leading social change organizations with data science talent to collaborate on cutting-edge analytics and advanced algorithms developed to maximize social impact. He'll comment on DataKind's mission, how to structure effective pro bono engagements, and broader principles of the pro bono model as applied to machine learning, analytics, and engineering.

Speaker bio: Will is a data science executive at Joymode in Los Angeles and works with DataKind as Data Ambassador, consultant and facilitator. Will was previously a Senior Data Scientist at Netflix. He holds a PhD in physics from Harvard.

William High
Sat 3:00 p.m. - 3:30 p.m.
Break / Poster Session 2
Sat 3:30 p.m. - 3:50 p.m.
Poster Session
Boli Fang, Ananth Balashankar, Sonam Damani, Emma Beauxis-Aussalet, Nan Wu, Elizabeth Bondi, Marc Rußwurm, David Ruhe, Nripsuta Saxena, Katie Spoon
Sat 3:50 p.m. - 4:20 p.m.

This talk will give an overview of some of the known failure modes that are leading to unintended consequences in AI development, as well as research agendas and initiatives to mitigate them, including a number that are underway at the Partnership on AI (PAI). Important case studies include the use of algorithmic risk assessment tools in the US criminal justice system, and the side-effects that are caused by using deliberate or unintended optimization processes to design high-stakes technical and bureaucratic systems. These are important in their own right, but they are also important contributors to conversations about social good applications of AI, which are also subject to significant potential for unintended consequences.

Speaker bio: Peter Eckersley is Director of Research at the Partnership on AI, a collaboration between the major technology companies, civil society and academia to ensure that AI is designed and used to benefit humanity. He leads PAI's research on machine learning policy and ethics, including projects within PAI itself and projects in collaboration with the Partnership's extensive membership. Peter's AI research interests are broad, including measuring progress in the field, figuring out how to translate ethical and safety concerns into mathematical constraints, finding the right metaphors and ways of thinking about AI development, and setting sound policies around high-stakes applications such as self-driving vehicles, recidivism prediction, cybersecurity, and military applications of AI. Prior to joining PAI, Peter was Chief Computer Scientist for the Electronic Frontier Foundation. At EFF he led a team of technologists who launched numerous computer security and privacy projects including Let's Encrypt and Certbot, Panopticlick, HTTPS Everywhere, the SSL Observatory and Privacy Badger; they also worked on diverse Internet policy issues including campaigning to preserve open wireless networks; fighting to keep modern computing platforms open; helping to start the campaign against the SOPA/PIPA Internet blacklist legislation; and running the first controlled tests to confirm that Comcast was using forged reset packets to interfere with P2P protocols. Peter holds a PhD in computer science and law from the University of Melbourne; his research focused on the practicality and desirability of using alternative compensation systems to legalize P2P file sharing and similar distribution tools while still paying authors and artists for their work.
He currently serves on the board of the Internet Security Research Group and the Advisory Council of the Open Technology Fund; he is an Affiliate of the Center for International Security and Cooperation at Stanford University and a Distinguished Technology Fellow at EFF.

Sat 4:20 p.m. - 4:30 p.m.

The World Health Organization identifies outdoor fine particulate air pollution (PM2.5) as a leading risk factor for premature mortality globally. As such, understanding the global distribution of PM2.5 is an essential precursor towards implementing pollution mitigation strategies and modelling global public health. Here, we present a convolutional neural network based approach for estimating annual average outdoor PM2.5 concentrations using only satellite images. The resulting model achieves comparable performance to current state-of-the-art statistical models.

Speaker bio: Kris Y Hong is a research assistant and prospective PhD student in the Weichenthal Lab at McGill University, in Montreal, Canada. His interests lie in applying current statistical and machine learning techniques towards solving humanitarian and environmental challenges. Prior to joining McGill, he was a data analyst at the British Columbia Centre for Disease Control while receiving his B.Sc. in Statistics from the University of British Columbia.

Sat 4:30 p.m. - 4:40 p.m.

As awareness of the potential for learned models to amplify existing societal biases increases, the field of ML fairness has developed mitigation techniques. A prevalent method applies constraints, including equality of performance, with respect to subgroups defined over the intersection of sensitive attributes such as race and gender. Enforcing such constraints when the subgroup populations are considerably skewed with respect to a target can lead to unintentional degradation in performance, without benefiting any individual subgroup, counter to the United Nations Sustainable Development goals of reducing inequalities and promoting growth. In order to avoid such performance degradation while ensuring equitable treatment to all groups, we propose Pareto-Efficient Fairness (PEF), which identifies the operating point on the Pareto curve of subgroup performances closest to the fairness hyperplane. Specifically, PEF finds a Pareto Optimal point which maximizes multiple subgroup accuracy measures. The algorithm scalarizes using the adaptive weighted metric norm by iteratively searching the Pareto region of all models enforcing the fairness constraint. PEF is backed by strong theoretical results on discoverability and provides domain practitioners finer control in navigating both convex and non-convex accuracy-fairness trade-offs. Empirically, we show that PEF increases performance of all subgroups in skewed synthetic data and UCI datasets.
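The selection step PEF describes can be illustrated roughly: among candidate models' subgroup-accuracy vectors, keep the Pareto-optimal ones, then pick the point closest to the "fairness hyperplane" where all subgroups have equal accuracy. This toy sketch uses plain Euclidean distance rather than the adaptive weighted metric norm from the abstract, and the candidate accuracy vectors are invented; it shows the idea, not the paper's algorithm.

```python
def pareto_front(points):
    """Keep accuracy vectors not dominated (<= in every subgroup) by another."""
    front = []
    for p in points:
        dominated = any(q != p and all(q[i] >= p[i] for i in range(len(p)))
                        for q in points)
        if not dominated:
            front.append(p)
    return front

def pef_choice(points):
    """Among Pareto-optimal vectors, pick the one closest to the
    equal-accuracy ('fairness') hyperplane."""
    def dist_to_equal(p):
        mean = sum(p) / len(p)  # nearest point on the hyperplane is (mean, ..., mean)
        return sum((x - mean) ** 2 for x in p) ** 0.5
    return min(pareto_front(points), key=dist_to_equal)

# Hypothetical (subgroup A accuracy, subgroup B accuracy) per candidate model.
models = [(0.90, 0.70), (0.85, 0.84), (0.80, 0.86), (0.60, 0.95)]
chosen = pef_choice(models)  # most accurate point that is also nearly equal
```

The point of the construction is that, unlike a hard equality constraint, no subgroup's accuracy is sacrificed below the Pareto front just to equalize performance.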

Speaker bio: Ananth Balashankar is a second-year Ph.D. student in Computer Science advised by Prof. Lakshminarayanan Subramanian at NYU's Courant Institute of Mathematical Sciences. He is currently interested in interpretable machine learning and the challenges involved in applying machine perception to the domains of policy, privacy, economics, and healthcare.

Ananth Balashankar
Sat 4:40 p.m. - 4:50 p.m.

Social media has been used extensively for crisis management. Recent work examines possible sub-events as a major crisis unfolds. In this project, we first propose a framework to identify sub-events from tweets. Then, leveraging four California wildfires in 2018-2019 as a case study, we investigate how sub-events cascade based on existing hypotheses drawn from the disaster management literature, and find that most hypotheses are supported on social media, e.g., fire induces smoke, which causes air pollution, which later harms health and eventually affects the healthcare system. In addition, we discuss other unexpected sub-events that emerge from social media.

Speaker bio: Alejandro (Alex) Jaimes is Chief Scientist and SVP of AI at Dataminr. Alex has 15+ years of intl. experience in research and product impact at scale. He has published 100+ technical papers in top-tier conferences and journals in diverse topics in AI and has been featured widely in the press (MIT Tech review, CNBC, Vice, TechCrunch, Yahoo! Finance, etc.). He has given 80+ invited talks (AI for Good Global Summit (UN, Geneva), the Future of Technology Summit, O’Reilly (AI, Strata, Velocity), Deep Learning Summit, etc.). Alex is also an Endeavor Network mentor (which leads the high-impact entrepreneurship movement around the world), and was an early voice in Human-Centered AI (Computing). He holds a Ph.D. from Columbia U.

Sat 4:50 p.m. - 5:00 p.m.

Dyslexia is a learning disability that hinders a person's ability to read. Dyslexia needs to be caught early; however, teachers are not trained to detect it and screening tests are used inconsistently. We propose (1) two new data sets of handwriting collected from children with and without dyslexia, amounting to close to 500 handwriting samples, and (2) an automated early screening technique to be used in conjunction with current approaches, to accelerate the detection process. Preliminary results suggest our system outperforms teachers.

Speaker bio: Katie Spoon recently completed her B.S./M.S. in computer science at Indiana University with minors in math and statistics, and with research interests in anomaly detection, computer vision, data visualization, and applications of computer vision to health and education, such as her senior thesis on detecting dyslexia with neural networks. She worked at IBM Research in the summer of 2018 on neuromorphic computing and will be returning there full-time. She hopes to pursue a PhD and become a corporate research scientist.

Sat 5:00 p.m. - 5:30 p.m.


Speaker bio: Phebe Vayanos is Assistant Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California, and Associate Director of the CAIS Center for Artificial Intelligence in Society. Her research aims to address fundamental questions in data-driven optimization (aka prescriptive analytics) with the aim of tackling real-world decision- and policy-making problems in uncertain and adversarial environments.

Sat 5:30 p.m. - 5:40 p.m.
Open announcement and Best Paper Award

Author Information

Margaux Luck (MILA, UdM)
Kris Sankaran (Mila)
Tristan Sylvain (MILA - Universite de Montreal)

I'm a PhD student at MILA, Universite de Montreal.

Sean McGregor (Syntiant)
Jonnie Penn (University of Cambridge)
Girmaw Abebe Tadesse (University of Oxford)
Virgile Sylvain (University of Montreal)
Myriam Côté (Mila)
Lester Mackey (Microsoft Research)
Lester Mackey

Lester Mackey is a machine learning researcher at Microsoft Research, where he develops new tools, models, and theory for large-scale learning tasks driven by applications in healthcare, climate, recommender systems, and the social good. Lester moved to Microsoft from Stanford University, where he was an assistant professor of Statistics and (by courtesy) of Computer Science. He earned his PhD in Computer Science and MA in Statistics from UC Berkeley and his BSE in Computer Science from Princeton University. He co-organized the second-place team in the $1M Netflix Prize competition for collaborative filtering, won the $50K Prize4Life ALS disease progression prediction challenge, won prizes for temperature and precipitation forecasting in the yearlong real-time $800K Subseasonal Climate Forecast Rodeo, and received a best student paper award at the International Conference on Machine Learning.

Rayid Ghani (University of Chicago)
Yoshua Bengio (Montreal Institute for Learning Algorithms)
