We introduce a new budgeted framework for online influence maximization, which considers the total cost of an advertising campaign instead of the common cardinality constraint on the chosen influencer set. Our approach better models the real-world setting where influencer costs vary and advertisers want the best value for their overall social advertising budget. We propose an algorithm assuming an independent cascade diffusion model and edge-level semi-bandit feedback, and provide both theoretical and experimental results. Our analysis also holds in the cardinality-constraint setting, where it improves the state-of-the-art regret bound.
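To make the setting concrete, here is a minimal sketch of the independent cascade diffusion model the abstract refers to, together with a simple budgeted greedy seed selection by marginal spread per unit cost. This is only an illustration of the problem setup, not the paper's algorithm (which is an online bandit method); the function names, the dict-of-dicts graph encoding, and the greedy ratio rule are assumptions made for this example.

```python
import random


def independent_cascade(graph, seeds, rng=None):
    """Simulate one cascade: each newly activated node u gets a single
    chance to activate each out-neighbor v, with probability graph[u][v]."""
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v, p in graph.get(u, {}).items():
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active


def greedy_budgeted_seeds(graph, costs, budget, n_sims=200, rng=None):
    """Offline baseline: repeatedly add the node with the best estimated
    marginal spread per unit cost, while the total cost stays within budget."""
    rng = rng or random.Random(0)

    def est_spread(seeds):
        # Monte Carlo estimate of the expected number of activated nodes.
        return sum(len(independent_cascade(graph, seeds, rng))
                   for _ in range(n_sims)) / n_sims

    seeds = set()
    while True:
        best, best_ratio = None, 0.0
        for v in graph:
            if v in seeds or costs[v] > budget:
                continue
            gain = est_spread(seeds | {v}) - est_spread(seeds)
            if gain / costs[v] > best_ratio:
                best, best_ratio = v, gain / costs[v]
        if best is None:
            break
        seeds.add(best)
        budget -= costs[best]
    return seeds
```

In the budgeted setting above, the constraint is `sum(costs[v] for v in seeds) <= budget` rather than `len(seeds) <= k`; the online problem studied in the paper additionally learns the edge probabilities `graph[u][v]` from edge-level semi-bandit feedback instead of assuming them known.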
Author Information
Pierre Perrault (ENS Paris-Saclay, Inria)
Jennifer Healey (Adobe)
Jennifer Healey has a long history of studying how people interact with sensors and envisioning the new experiences this enables. She holds BS, MS, and PhD degrees from MIT in EECS. During her graduate studies at the Media Lab, she pioneered the field of "Affective Computing" with Rosalind Picard and developed the first wearable computer with physiological sensors and a video camera, which allowed the wearer to track their daily activities and record how they felt while doing them. She worked at both IBM Zurich and IBM TJ Watson on AI for smartphones with a multi-modal user interface that let the user switch seamlessly between voice and visual input and output. She has been an Instructor in Translational Medicine at Harvard Medical School and Beth Israel Deaconess Medical Center, where she worked on new algorithms to predict cardiac health from mobile sensors. She continued working in Digital Health at both HP and Intel, where she helped develop the Shimmer sensing platform and the Intel Health Guide. Her research at Intel extended to sensing people in cars and cooperative autonomous driving (see her TED talk). She has also continued her work in Affective Computing, developing a new software platform for cell phones with onboard machine learning algorithms for recognizing stress from heart rate, activation from features of voice, and privacy-protected sentiment analysis of texts and emails (Best Demo at MobileHCI 2018).
Zheng Wen (DeepMind)
Michal Valko (DeepMind)
More from the Same Authors
- 2021: Marginalized Operators for Off-Policy Reinforcement Learning
  Yunhao Tang · Mark Rowland · Remi Munos · Michal Valko
- 2021: Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret
  Jean Tarbouriech · Simon Du · Matteo Pirotta · Michal Valko · Alessandro Lazaric
- 2022 Oral: From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses
  Daniil Tiapkin · Denis Belomestny · Eric Moulines · Alexey Naumov · Sergey Samsonov · Yunhao Tang · Michal Valko · Pierre Menard
- 2022 Spotlight: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2022 Spotlight: Retrieval-Augmented Reinforcement Learning
  Anirudh Goyal · Abe Friesen · Andrea Banino · Theophane Weber · Nan Rosemary Ke · Adrià Puigdomenech Badia · Arthur Guez · Mehdi Mirza · Peter Humphreys · Ksenia Konyushkova · Michal Valko · Simon Osindero · Timothy Lillicrap · Nicolas Heess · Charles Blundell
- 2022 Poster: From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses
  Daniil Tiapkin · Denis Belomestny · Eric Moulines · Alexey Naumov · Sergey Samsonov · Yunhao Tang · Michal Valko · Pierre Menard
- 2022 Poster: Retrieval-Augmented Reinforcement Learning
  Anirudh Goyal · Abe Friesen · Andrea Banino · Theophane Weber · Nan Rosemary Ke · Adrià Puigdomenech Badia · Arthur Guez · Mehdi Mirza · Peter Humphreys · Ksenia Konyushkova · Michal Valko · Simon Osindero · Timothy Lillicrap · Nicolas Heess · Charles Blundell
- 2022 Poster: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2021 Poster: Fast active learning for pure exploration in reinforcement learning
  Pierre Menard · Omar Darwiche Domingues · Anders Jonsson · Emilie Kaufmann · Edouard Leurent · Michal Valko
- 2021 Poster: UCB Momentum Q-learning: Correcting the bias without forgetting
  Pierre Menard · Omar Darwiche Domingues · Xuedong Shang · Michal Valko
- 2021 Spotlight: Fast active learning for pure exploration in reinforcement learning
  Pierre Menard · Omar Darwiche Domingues · Anders Jonsson · Emilie Kaufmann · Edouard Leurent · Michal Valko
- 2021 Oral: UCB Momentum Q-learning: Correcting the bias without forgetting
  Pierre Menard · Omar Darwiche Domingues · Xuedong Shang · Michal Valko
- 2021 Poster: Kernel-Based Reinforcement Learning: A Finite-Time Analysis
  Omar Darwiche Domingues · Pierre Menard · Matteo Pirotta · Emilie Kaufmann · Michal Valko
- 2021 Poster: Online A-Optimal Design and Active Linear Regression
  Xavier Fontaine · Pierre Perrault · Michal Valko · Vianney Perchet
- 2021 Poster: Joint Online Learning and Decision-making via Dual Mirror Descent
  Alfonso Lobos Ruiz · Paul Grigas · Zheng Wen
- 2021 Spotlight: Kernel-Based Reinforcement Learning: A Finite-Time Analysis
  Omar Darwiche Domingues · Pierre Menard · Matteo Pirotta · Emilie Kaufmann · Michal Valko
- 2021 Spotlight: Online A-Optimal Design and Active Linear Regression
  Xavier Fontaine · Pierre Perrault · Michal Valko · Vianney Perchet
- 2021 Spotlight: Joint Online Learning and Decision-making via Dual Mirror Descent
  Alfonso Lobos Ruiz · Paul Grigas · Zheng Wen
- 2021 Poster: Revisiting Peng's Q($\lambda$) for Modern Reinforcement Learning
  Tadashi Kozuno · Yunhao Tang · Mark Rowland · Remi Munos · Steven Kapturowski · Will Dabney · Michal Valko · David Abel
- 2021 Poster: Taylor Expansion of Discount Factors
  Yunhao Tang · Mark Rowland · Remi Munos · Michal Valko
- 2021 Spotlight: Taylor Expansion of Discount Factors
  Yunhao Tang · Mark Rowland · Remi Munos · Michal Valko
- 2021 Spotlight: Revisiting Peng's Q($\lambda$) for Modern Reinforcement Learning
  Tadashi Kozuno · Yunhao Tang · Mark Rowland · Remi Munos · Steven Kapturowski · Will Dabney · Michal Valko · David Abel
- 2020 Poster: Monte-Carlo Tree Search as Regularized Policy Optimization
  Jean-Bastien Grill · Florent Altché · Yunhao Tang · Thomas Hubert · Michal Valko · Ioannis Antonoglou · Remi Munos
- 2020 Poster: Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards
  Aadirupa Saha · Pierre Gaillard · Michal Valko
- 2020 Poster: Gamification of Pure Exploration for Linear Bandits
  Rémy Degenne · Pierre Menard · Xuedong Shang · Michal Valko
- 2020 Poster: Stochastic bandits with arm-dependent delays
  Anne Gael Manegueu · Claire Vernade · Alexandra Carpentier · Michal Valko
- 2020 Poster: Near-linear time Gaussian process optimization with adaptive batching and resparsification
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2020 Poster: Influence Diagram Bandits: Variational Thompson Sampling for Structured Bandit Problems
  Tong Yu · Branislav Kveton · Zheng Wen · Ruiyi Zhang · Ole J. Mengshoel
- 2020 Poster: Structured Policy Iteration for Linear Quadratic Regulator
  Youngsuk Park · Ryan A. Rossi · Zheng Wen · Gang Wu · Handong Zhao
- 2020 Poster: Taylor Expansion Policy Optimization
  Yunhao Tang · Michal Valko · Remi Munos
- 2019 Poster: Exploiting structure of uncertainty for efficient matroid semi-bandits
  Pierre Perrault · Vianney Perchet · Michal Valko
- 2019 Poster: Scale-free adaptive planning for deterministic dynamics & discounted rewards
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko
- 2019 Oral: Exploiting structure of uncertainty for efficient matroid semi-bandits
  Pierre Perrault · Vianney Perchet · Michal Valko
- 2019 Oral: Scale-free adaptive planning for deterministic dynamics & discounted rewards
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko