Wednesday / Thursday / Friday

Invited speakers: Christos Papadimitriou (Wednesday), Saso Dzeroski (Thursday), Sebastian Thrun (Friday)

Morning Tea

ENSEMBLE: Is Combining Classifiers Better than Selecting the Best One? (Saso Dzeroski, Bernard Zenko)
HRL: Discovering Hierarchy in Reinforcement Learning with HEXQ (Bernhard Hengst)
TEXT: Learning Word Normalization Using Word Suffix and Context from Unlabeled Data (Dunja Mladenic)
BC/DISC: Reinforcement Learning and Shaping: Encouraging Intended Behaviors (Adam Laud, Gerald DeJong)
SVM: Anytime Interval-Valued Outputs for Kernel Machines: Fast Support Vector Machine Classification via Distance Geometry (Dennis DeCoste)
COLT: Sufficient Dimensionality Reduction - A Novel Analysis Principle (Amir Globerson, Naftali Tishby)
ENSEMBLE: Incorporating Prior Knowledge into Boosting (Robert Schapire, Marie Rochery, Mazin Rahim, Narendra Gupta)
FEATURE: Refining the Wrapper Approach - Smoothed Error Estimates for Feature Selection (Loo-Nin Teow, Hwee Tou Ng, Haifeng Liu, Eric Yap)
ILP: Feature Subset Selection and Inductive Logic Programming (Erick Alphonse, Stan Matwin)
ENSEMBLE: A Unified Decomposition of Ensemble Loss for Predicting Ensemble Performance (Michael Goebel, Pat Riddle, Mike Barley)
HRL: Automatic Creation of Useful Macro-Actions in Reinforcement Learning (Marc Pickett, Andrew Barto)
TEXT: A New Statistical Approach to Personal Name Extraction (Zheng Chen, Feng Zhang)
BC/DISC: Separating Skills from Preference: Using Learning to Program by Reward (Daniel Shapiro, Pat Langley)
SVM: Multi-Instance Kernels (Thomas Gaertner, Peter Flach, Adam Kowalczyk, Alex Smola, Robert Williamson)
COLT: Combining Training Set and Test Set Bounds (John Langford)
ENSEMBLE: Modeling Auction Price Uncertainty Using Boosting-based Conditional Density Estimation (Robert Schapire, Peter Stone, David McAllester, Michael Littman, Janos Csirik)
FEATURE: Feature Selection with Active Learning (Huan Liu, Hiroshi Motoda, Lei Yu)
ILP: Inductive Logic Programming out of Phase Transition: A Bottom-up Constraint-based Approach (Jacques Ales Bianchetti, Celine Rouveirol, Michele Sebag)

ENSEMBLE: Cranking: An Ensemble Method for Combining Rankers using Conditional Probability Models on Permutations (Guy Lebanon, John Lafferty)
HRL: Using Abstract Models of Behaviours to Automatically Generate Reinforcement Learning Hierarchies (Malcolm Ryan)
TEXT: IEMS - The Intelligent Email Sorter (Elisabeth Crawford, Judy Kay, Eric McCreath)
BC/DISC: Learning to Fly by Controlling Dynamic Instabilities (David Stirling)
SVM: Kernels for Semi-Structured Data (Hisashi Kashima, Teruo Koyanagi)
COLT: Learning k-Reversible Context-Free Grammars from Positive Structural Examples (Tim Oates, Devina Desai, Vinay Bhat)
ENSEMBLE: How to Make Stacking Better and Faster While Also Taking Care of an Unknown Weakness (Alexander K. Seewald)
FEATURE: Randomized Variable Elimination (David Stracuzzi, Paul Utgoff)
ILP: Graph-Based Relational Concept Learning (Jesus Gonzalez, Lawrence Holder, Diane Cook)
ENSEMBLE: Active + Semi-supervised Learning = Robust Multi-View Learning (Ion Muslea, Steven Minton, Craig Knoblock)
HRL: Model-based Hierarchical Average-reward Reinforcement Learning (Sandeep Seri, Prasad Tadepalli)
TEXT: Combining Labeled and Unlabeled Data for Multiclass Text Categorization (Rayid Ghani)
BC/DISC: Qualitative Reverse Engineering (Dorian Suc, Ivan Bratko)
SVM: A Fast Dual Algorithm for Kernel Logistic Regression (Sathiya Keerthi, Kaibo Duan, Shirish Shevade, Aun Poo)
COLT: On Generalization Bounds, Projection Profile, and Margin Distribution (Ashutosh Garg, Sariel Har-Peled, Dan Roth)
ENSEMBLE: Towards "Large Margin" Speech Recognizers by Boosting and Discriminative Training (Carsten Meyer, Peter Beyerlein)
FEATURE: Discriminative Feature Selection via Multiclass Variable Memory Markov Model (Noam Slonim, Gill Bejerano, Shai Fine, Naftali Tishby)
RULE: Descriptive Induction through Subgroup Discovery: A Case Study in a Medical Domain (Dragan Gamberger, Nada Lavrac)

Lunch

TREES: Fast Minimum Training Error Discretization (Tapio Elomaa, Juho Rousu)
HRL: Hierarchically Optimal Average Reward Reinforcement Learning (Mohammad Ghavamzadeh, Sridhar Mahadevan)
TEXT: Partially Supervised Classification of Text Documents (Bing Liu, Wee Sun Lee, Philip S. Yu, Xiaoli Li)
BC/DISC: Inducing Process Models from Continuous Data (Pat Langley, Javier Sanchez, Ljupco Todorovski, Saso Dzeroski)
COST: An Alternate Objective Function for Markovian Fields (Sham Kakade, Yee Whye Teh, Sam Roweis)
BAYES: Non-Disjoint Discretization for Naive-Bayes Classifiers (Ying Yang, Geoffrey I. Webb)
SVM: Statistical Behavior and Consistency of Support Vector Machines, Boosting, and Beyond (Tong Zhang)
BAYES: Sparse Bayesian Learning for Regression and Classification using Markov Chain Monte Carlo (Shien-Shin Tham, Arnaud Doucet, Ramamohanarao Kotagiri)
FEATURE: Linkage and Autocorrelation Cause Feature Selection Bias in Relational Learning (David Jensen, Jennifer Neville)
TREES: Learning Decision Trees Using the Area Under the ROC Curve (Cesar Ferri, Peter Flach, Jose Hernandez-Orallo)
RL: Action Refinement in Reinforcement Learning by Probability Smoothing (Thomas Dietterich, Didac Busquets, Ramon Lopez de Mantaras, Carles Sierra)
TEXT: Syllables and Other String Kernel Extensions (Craig Saunders, Hauke Tschach, John Shawe-Taylor)
RL: Integrating Experimentation and Guidance in Relational Reinforcement Learning (Kurt Driessens, Saso Dzeroski)

COST: Issues in Classifier Evaluation using Optimal Cost Curves (Kai Ming Ting)
BAYES: Numerical Minimum Message Length Inference of Univariate Polynomials (Leigh Fitzgibbon, David Dowe, Lloyd Allison)
SVM: The Perceptron Algorithm with Uneven Margins (Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, Jaz Kandola)
BAYES: Modeling for Optimal Probability Prediction (Yong Wang, Ian H. Witten)
RL: Algorithm-Directed Exploration for Model-Based Reinforcement Learning (Carlos Guestrin, Relu Patrascu, Dale Schuurmans)
TREES: An Analysis of Functional Trees (Joao Gama)
BC/DISC: Learning Spatial and Temporal Correlation for Navigation in a 2-Dimensional Continuous World (Anand Panangadan, Michael Dyer)
TEXT: A Boosted Maximum Entropy Model for Learning Text Chunking (Seong-Bae Park, Byoung-Tak Zhang)
RL: Approximately Optimal Approximate Reinforcement Learning (Sham Kakade, John Langford)
COST: Pruning Improves Heuristic Search for Cost-Sensitive Learning (Valentina Bayer Zubek, Thomas Dietterich)
BAYES: Learning to Share Distributed Probabilistic Beliefs (Christopher Leckie, Ramamohanarao Kotagiri)
SVM: Learning the Kernel Matrix with Semi-Definite Programming (Gert Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, Michael Jordan)
BAYES: Representational Upper Bounds of Bayesian Networks (Huajie Zhang, Charles Ling)
RL: A Necessary Condition of Convergence for Reinforcement Learning with Function Approximation (Artur Merke, Ralf Schoknecht)

Afternoon Tea

TREES: Classification Value Grouping (Colin Ho)
RL: Scalable Internal-State Policy-Gradient Methods for POMDPs (Douglas Aberdeen, Jonathan Baxter)
TEXT: Using Unlabelled Data for Text Classification through Addition of Cluster Parameters (Bhavani Raskutti, Adam Kowalczyk, Herman Ferra)
RL: Competitive Analysis of the Explore/Exploit Tradeoff (John Langford, Martin Zinkevich, Sham Kakade)
UNSUP: Semi-supervised Clustering by Seeding (Sugato Basu, Arindam Banerjee, Raymond Mooney)
BAYES: Markov Chain Monte Carlo Sampling using Direct Search Optimization (Malcolm Strens, Mark Bernhardt, Nicholas Everett)
SVM: Diffusion Kernels on Graphs and Other Discrete Structures (Risi Kondor, John Lafferty)
RULE: Learning Decision Rules by Randomized Iterative Local Search (Michael Chisholm, Prasad Tadepalli)
RL: Stock Trading System Using Reinforcement Learning with Cooperative Agents (Jangmin O, Jae Won Lee, Byoung-Tak Zhang)
TREES: Finding an Optimal Gain-Ratio Subset-Split Test for a Set-Valued Attribute in Decision Tree Induction (Fumio Takechi, Einoshin Suzuki)
RL: An epsilon-Optimal Grid-Based Algorithm for Partially Observable Markov Decision Processes (Blai Bonet)
UNSUP: From Instance-level Constraints to Space-Level Constraints: Making the Most of Prior Knowledge in Data Clustering (Dan Klein, Sepandar Kamvar, Christopher Manning)
RL: Investigating the Maximum Likelihood Alternative to TD(lambda) (Fletcher Lu, Relu Patrascu, Dale Schuurmans)

UNSUP: Exploiting Relations Among Concepts to Acquire Weakly Labeled Training Data (Joseph Bockhorst, Mark Craven)
BAYES: Exact Model Averaging with Naive Bayesian Classifiers (Denver Dash, Gregory Cooper)
RL: Learning from Scarce Experience (Leonid Peshkin, Christian Shelton)
RULE: Transformation-Based Regression (Bjorn Bringmann, Stefan Kramer, Friedrich Neubarth, Hannes Pirker, Gerhard Widmer)
MULT: Content-Based Image Retrieval Using Multiple-Instance Learning (Qi Zhang, Wei Yu, Sally Goldman, Jason Fritts)
TREES: Adaptive View Validation: A First Step Towards Automatic View Detection (Ion Muslea, Steven Minton, Craig Knoblock)
RL: On the Existence of Fixed Points for Q-Learning and Sarsa in Partially Observable Domains (Theodore Perkins, Mark Pendrith)
RULE: Mining Both Positive and Negative Association Rules (Xindong Wu, Shichao Zhang)
RL: Coordinated Reinforcement Learning (Carlos Guestrin, Michail Lagoudakis, Ronald Parr)
UNSUP: Interpreting and Extending Classical Agglomerative Clustering Algorithms using a Model-Based Approach (Sepandar Kamvar, Dan Klein, Christopher Manning)
BAYES: MMIHMM: Maximum Mutual Information Hidden Markov Models (Nuria Oliver, Ashutosh Garg)