The influence maximization (IM) problem aims to find a subset of seed nodes in a social network that maximizes the spread of influence. In this study, we focus on a sub-class of IM problems in which it is uncertain whether an invited node is willing to serve as a seed, called \textit{contingency-aware IM}. Such contingency-aware IM is critical for applications run by non-profit organizations in low-resource communities (e.g., spreading awareness of disease prevention). Despite initial successes, a major practical obstacle to promoting these solutions to more communities is the tremendous runtime of the greedy algorithms and the lack of high-performance computing (HPC) resources for non-profits in the field -- whenever a new social network arrives, the non-profits usually do not have the HPC resources to recompute the solutions. Motivated by this, and inspired by the line of work that uses reinforcement learning (RL) to address combinatorial optimization on graphs, we formalize the problem as a Markov Decision Process (MDP) and use RL to learn an IM policy over historically seen networks that generalizes to unseen networks with negligible runtime at test time. To fully exploit the properties of our targeted problem, we propose two technical innovations over existing methods: state abstraction and theoretically grounded reward shaping. Empirical results show that our method achieves influence as high as the state-of-the-art methods for contingency-aware IM, while having negligible runtime at test time.
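To make the sequential-decision view concrete, the following is a minimal Python sketch (not the authors' implementation) of contingency-aware seed selection: the agent invites one node per step, the invitee accepts only with some probability, and the reward is the estimated marginal influence spread of the accepted seeds under an independent-cascade model. The graph, the acceptance probability p_accept, the edge activation probability p_edge, and the random placeholder policy are all illustrative assumptions; an RL agent trained on historical networks would replace the node-selection step.

import random
import networkx as nx

def independent_cascade(graph, seeds, p_edge=0.1, rng=random):
    """Simulate one independent-cascade run; return the set of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and rng.random() < p_edge:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def rollout_random_policy(graph, budget=5, p_accept=0.5, rng=random):
    """Roll out a placeholder (random) invitation policy for `budget` steps.
    A learned RL policy over graph states would replace the random choice."""
    accepted = set()
    total_reward = 0.0
    for _ in range(budget):
        candidates = [v for v in graph.nodes if v not in accepted]
        if not candidates:
            break
        choice = rng.choice(candidates)   # action: which node to invite
        if rng.random() < p_accept:       # contingency: the invitee may decline
            # Noisy marginal-spread reward, estimated from single cascade runs.
            before = len(independent_cascade(graph, accepted)) if accepted else 0
            accepted.add(choice)
            after = len(independent_cascade(graph, accepted))
            total_reward += after - before
    return accepted, total_reward

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(100, 0.05, seed=0)
    seeds, reward = rollout_random_policy(g)
    print("accepted seeds:", seeds, "estimated reward:", reward)

In this framing, the state is the graph together with the set of seeds accepted so far, the action is the next node to invite, and the stochastic acceptance step is what makes the problem contingency-aware rather than a standard one-shot seed-set selection.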
Author Information
Haipeng Chen (Harvard University)
Wei Qiu (Nanyang Technological University)
Han-Ching Ou (Harvard University)
Bo An (Nanyang Technological University)
Milind Tambe (Harvard University)
More from the Same Authors
- 2023 Poster: Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation
  Aditya Mate · Bryan Wilder · Aparna Taneja · Milind Tambe
- 2022 Poster: Mitigating Neural Network Overconfidence with Logit Normalization
  Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Yixuan Li
- 2022 Poster: Learning Pseudometric-based Action Representations for Offline Reinforcement Learning
  Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An
- 2022 Spotlight: Learning Pseudometric-based Action Representations for Offline Reinforcement Learning
  Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An
- 2022 Spotlight: Mitigating Neural Network Overconfidence with Logit Normalization
  Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Yixuan Li
- 2022 Poster: Open-Sampling: Exploring Out-of-Distribution Data for Re-balancing Long-tailed Datasets
  Hongxin Wei · Lue Tao · Renchunzi Xie · Lei Feng · Bo An
- 2022 Spotlight: Open-Sampling: Exploring Out-of-Distribution Data for Re-balancing Long-tailed Datasets
  Hongxin Wei · Lue Tao · Renchunzi Xie · Lei Feng · Bo An
- 2021 Poster: Pointwise Binary Classification with Pairwise Confidence Comparisons
  Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama
- 2021 Poster: Learning from Similarity-Confidence Data
  Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Learning from Similarity-Confidence Data
  Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama
- 2021 Spotlight: Pointwise Binary Classification with Pairwise Confidence Comparisons
  Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama
- 2020 Poster: Learning Efficient Multi-agent Communication: An Information Bottleneck Approach
  Rundong Wang · Xu He · Runsheng Yu · Wei Qiu · Bo An · Zinovi Rabinovich
- 2020 Poster: Learning with Multiple Complementary Labels
  Lei Feng · Takuo Kaneko · Bo Han · Gang Niu · Bo An · Masashi Sugiyama