Structured Prediction Problems in Natural Language Processing
Monday, July 7, 4:05 pm - 5:05 pm, S1 (1st floor)
Modeling language at the syntactic or semantic level is a key problem in natural language processing, and involves a challenging set of structured prediction problems. In this talk I'll describe work on machine learning approaches for syntax and semantics, with a particular focus on lexicalized grammar formalisms such as dependency grammars, tree adjoining grammars, and categorial grammars. I'll address key issues in the following areas: 1) the design of learning algorithms for structured linguistic data; 2) the design of representations that are used within these learning algorithms; 3) the design of efficient approximate inference algorithms for lexicalized grammars, in cases where exact inference can be very expensive. In addition, I'll describe applications to machine translation and natural language interfaces.
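To make the flavor of these learning algorithms concrete, here is a minimal sketch (not the speaker's system) of a structured perceptron for arc-factored dependency parsing, one of the standard approaches in this area. For simplicity each word picks its highest-scoring head independently; the feature templates and function names are illustrative assumptions, and a real parser would enforce a tree constraint during inference (e.g. Eisner's algorithm or maximum-spanning-tree decoding).

```python
from collections import defaultdict

def features(sent, head, dep):
    # Hypothetical arc-factored feature templates: head word, dependent
    # word, the word pair, and the attachment direction.
    hw = sent[head] if head >= 0 else "<ROOT>"
    return [f"hw={hw}", f"dw={sent[dep]}",
            f"pair={hw}_{sent[dep]}",
            f"dir={'L' if head < dep else 'R'}"]

def predict(w, sent):
    # For each word, choose the candidate head with the highest score
    # under the current weights (-1 denotes the artificial root).
    heads = []
    for dep in range(len(sent)):
        cands = [h for h in range(-1, len(sent)) if h != dep]
        heads.append(max(cands,
                         key=lambda h: sum(w[f] for f in features(sent, h, dep))))
    return heads

def train(data, epochs=5):
    # Standard structured-perceptron updates: reward gold arcs,
    # penalize predicted arcs, whenever the two disagree.
    w = defaultdict(float)
    for _ in range(epochs):
        for sent, gold in data:
            pred = predict(w, sent)
            for dep, (g, p) in enumerate(zip(gold, pred)):
                if g != p:
                    for f in features(sent, g, dep):
                        w[f] += 1.0
                    for f in features(sent, p, dep):
                        w[f] -= 1.0
    return w
```

The independent head-selection step is exactly where approximate inference enters: exact decoding over well-formed trees is more expensive, which is the trade-off the abstract's third theme addresses.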
STAIR: The STanford Artificial Intelligence Robot project
Tuesday, July 8, 4:35 pm - 5:35 pm, S1 (1st floor)
This talk will describe the STAIR home assistant robot project, and several satellite projects that led to key STAIR components such as (i) robotic grasping of previously unknown objects, (ii) depth perception from a single still image, and (iii) apprenticeship learning for control.
Since its birth in 1956, the AI dream has been to build systems that exhibit broad-spectrum competence and intelligence. STAIR revisits this dream, and seeks to integrate onto a single robot platform tools drawn from all areas of AI including learning, vision, navigation, manipulation, planning, and speech/NLP. This is in distinct contrast to, and also represents an attempt to reverse, the 30-year-old trend of working on fragmented AI sub-fields. STAIR's goal is a useful home assistant robot, and over the long term, we envision a single robot that can perform tasks such as tidying up a room, using a dishwasher, fetching and delivering items, and preparing meals.
STAIR is still a young project, and in this talk I'll report on our progress so far on having STAIR fetch items from around the office. Specifically, I'll describe: (i) learning to grasp previously unseen objects (including its application to unloading items from a dishwasher); (ii) probabilistic multi-resolution maps, which enable the robot to open/use doors; (iii) a robotic foveal+peripheral vision system for object recognition and tracking. I'll also outline some of the main technical ideas, such as learning 3-d reconstructions from a single still image and reinforcement learning algorithms for robotic control, that played key roles in enabling these STAIR components.
Katholieke Universiteit Leuven
Logical and Relational Learning Revisited
Monday, July 7, 8:30 am - 9:30 am, S1 (1st floor)
I use the term logical and relational learning (LRL) to refer to the subfield of machine learning and data mining that is concerned with learning in expressive logical or relational representations. It is the union of inductive logic programming, (statistical) relational learning, and multi-relational data mining, and constitutes a general class of techniques and methodology for learning from structured data (such as graphs, networks, relational databases) and background knowledge.
During the course of its existence, logical and relational learning has changed dramatically. Whereas early work was mainly concerned with logical issues (and even program synthesis from examples), in the 90s its focus was on the discovery of new and interpretable knowledge from structured data, often in the form of rules or patterns. Since then the range of tasks to which logical and relational learning has been applied has significantly broadened and now covers almost all machine learning problems and settings. Today, there exist logical and relational learning methods for reinforcement learning, statistical learning, distance- and kernel-based learning in addition to traditional symbolic machine learning approaches.
At the same time, logical and relational learning problems are appearing everywhere. Advances in intelligent systems are enabling the generation of high-level symbolic and structured data in a wide variety of domains, including the semantic web, robotics, vision, social networks, and the life sciences, which in turn raises new challenges and opportunities for logical and relational learning.
These developments have led to a new view on logical and relational learning and its role in machine learning and artificial intelligence. In this talk, I shall reflect on this view by identifying some of the lessons learned in logical and relational learning and formulating some challenges for future developments.
Microsoft Research Cambridge
Probabilistic models for understanding images
Sunday, July 6, 8:45 am - 9:45 am, S1 (1st floor)
Getting a computer to understand an image is challenging due to the numerous sources of variability that influence the imaging process. The pixels of a typical photograph will depend on the scene type and geometry, the number, shape and appearance of objects present in the scene, their 3D positions and orientations, as well as effects such as occlusion, shading and shadows. The good news is that research into physics and computer graphics has given us a detailed understanding of how these variables affect the resulting image. This understanding can help us to build the right prior knowledge into our probabilistic models of images. In theory, building a model containing all of this knowledge would solve the image understanding problem. In practice, such a model would be intractable for current inference methods. The open challenge for machine learning and machine vision researchers is to create a model which captures the imaging process as accurately as possible, whilst remaining tractable for accurate inference. To illustrate this challenge, I will show how different aspects of the imaging process can be incorporated into models for object detection and segmentation, and discuss techniques for making inference tractable in such models.
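The tension the abstract describes can be summarized in one equation. As a minimal sketch (my notation, not the speaker's), collect the latent scene variables into $h$ (scene type and geometry; the number, shapes, appearances, and 3D poses of objects; lighting). Knowledge from physics and graphics supplies the rendering likelihood $p(x \mid h)$ for an image $x$, and prior knowledge about scenes supplies $p(h)$. Image understanding is then posterior inference:

\[
p(h \mid x) \;=\; \frac{p(x \mid h)\, p(h)}{p(x)},
\qquad
p(x) \;=\; \sum_{h} p(x \mid h)\, p(h).
\]

The normalizer $p(x)$ sums (or integrates) over every possible configuration of scene variables, which is why a model that captured the full imaging process would be intractable for current inference methods; the models for detection and segmentation discussed in the talk can be viewed as restrictions of $h$ and $p(x \mid h)$ chosen to keep this computation feasible.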