We consider influence maximization (IM) in social networks, which is the problem of maximizing the number of users that become aware of a product by selecting a set of "seed" users to expose the product to. While prior work assumes a known model of information diffusion, we propose a novel parametrization that not only makes our framework agnostic to the underlying diffusion model, but also statistically efficient to learn from data. We give a corresponding monotone, submodular surrogate function, and show that it is a good approximation to the original IM objective. We also consider the case of a new marketer looking to exploit an existing social network, while simultaneously learning the factors governing information propagation. For this, we propose a pairwise-influence semi-bandit feedback model and develop a LinUCB-based bandit algorithm. Our model-independent analysis shows that our regret bound has a better (as compared to previous work) dependence on the size of the network. Experimental evaluation suggests that our framework is robust to the underlying diffusion model and can efficiently learn a near-optimal solution.
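The abstract builds on the LinUCB idea: maintain a ridge-regression estimate of an unknown linear parameter and act optimistically via an upper confidence bound. A minimal, heavily simplified sketch of that building block follows; the class and variable names are hypothetical illustrations, not the paper's actual algorithm, which additionally handles pairwise-influence semi-bandit feedback over a network.

```python
import numpy as np

# Minimal LinUCB sketch (hypothetical illustration): assume each arm has a
# feature vector x and its expected reward is linear in an unknown theta*.
class LinUCB:
    def __init__(self, d, alpha=1.0, lam=1.0):
        self.alpha = alpha        # width of the confidence interval
        self.A = lam * np.eye(d)  # regularized Gram matrix of observed features
        self.b = np.zeros(d)      # running sum of reward-weighted features

    def ucb(self, x):
        """Optimistic (upper-confidence) reward estimate for feature vector x."""
        A_inv = np.linalg.inv(self.A)
        theta_hat = A_inv @ self.b                       # ridge estimate of theta*
        return x @ theta_hat + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        """Rank-one update after observing the reward for arm x."""
        self.A += np.outer(x, x)
        self.b += reward * x

# Toy usage: pick the arm with the highest UCB, then feed back its reward.
bandit = LinUCB(d=3)
arms = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
best = max(arms, key=bandit.ucb)
bandit.update(best, reward=1.0)
```

The confidence term `alpha * sqrt(x @ A_inv @ x)` shrinks for directions the learner has explored, which is what drives the exploration/exploitation trade-off the abstract's regret analysis quantifies.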
Author Information
Sharan Vaswani (University of British Columbia)
Branislav Kveton (Adobe Research)
Zheng Wen (Adobe Research)
Mohammad Ghavamzadeh (Adobe Research & INRIA)
Laks V.S. Lakshmanan (University of British Columbia)
Mark Schmidt (University of British Columbia)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Model-Independent Online Learning for Influence Maximization
  Mon. Aug 7th 08:30 AM -- 12:00 PM, Room Gallery #22
More from the Same Authors
- 2023 Poster: Target-based Surrogates for Stochastic Optimization
  Jonathan Lavington · Sharan Vaswani · Reza Babanezhad · Mark Schmidt · Nicolas Le Roux
- 2023 Poster: Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning
  Wu Lin · Valentin Duruisseaux · Melvin Leok · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2023 Poster: Let's Make Block Coordinate Descent Converge Faster: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
  Julie Nutini · Issam Laradji · Mark Schmidt
- 2021 Poster: Tractable structured natural-gradient descent using local parameterizations
  Wu Lin · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2021 Spotlight: Tractable structured natural-gradient descent using local parameterizations
  Wu Lin · Frank Nielsen · Khan Emtiyaz · Mark Schmidt
- 2021 Poster: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2021 Oral: Robust Asymmetric Learning in POMDPs
  Andrew Warrington · Jonathan Lavington · Adam Scibior · Mark Schmidt · Frank Wood
- 2020 Poster: Handling the Positive-Definite Constraint in the Bayesian Learning Rule
  Wu Lin · Mark Schmidt · Mohammad Emtiyaz Khan
- 2019 Poster: Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
  Branislav Kveton · Csaba Szepesvari · Sharan Vaswani · Zheng Wen · Tor Lattimore · Mohammad Ghavamzadeh
- 2019 Oral: Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
  Branislav Kveton · Csaba Szepesvari · Sharan Vaswani · Zheng Wen · Tor Lattimore · Mohammad Ghavamzadeh
- 2019 Poster: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations
  Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt
- 2019 Oral: Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations
  Wu Lin · Mohammad Emtiyaz Khan · Mark Schmidt
- 2017 Poster: Active Learning for Accurate Estimation of Linear Models
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Poster: Online Learning to Rank in Stochastic Click Models
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen
- 2017 Poster: Bottleneck Conditional Density Estimation
  Rui Shu · Hung Bui · Mohammad Ghavamzadeh
- 2017 Talk: Active Learning for Accurate Estimation of Linear Models
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Talk: Bottleneck Conditional Density Estimation
  Rui Shu · Hung Bui · Mohammad Ghavamzadeh
- 2017 Talk: Online Learning to Rank in Stochastic Click Models
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen