We thank all the reviewers for their encouraging remarks and helpful suggestions. Please find below our separate responses to the comments of Reviewers 1 and 2.

Reviewer 1:

- Clarity in related work

Our work is inspired by subsampled MH algorithms, which we discuss in both the introduction and the related work section. We will revise the related work section to make these connections clearer.

- Clarity in experiments

Some experimental details were omitted from the main text due to the space limit. Details about the model, algorithm, settings, and additional experimental results are provided in Section C of the supplementary material. We will state explicitly in the main text which factors are subsampled.

- Another experiment on the Ising model

We considered the Ising model, but it is similar to the binary-variable MRF experiment in Appendix F of Korattikara et al. (2014), which can be viewed as a special case of our algorithm when D = 2 (discussed in our Section 4). That experiment already demonstrated the advantage of the subsampling approach. We also ran extra experiments on RBMs; they are not included in the paper due to the space limit but may be included in a future extended version.

Reviewer 2:

- Stronger assumptions for better theoretical performance in Section 3.3.1

It is possible to derive a proposition analogous to Prop. 7 using the Serfling concentration bound, which would yield a better sample complexity as a function of the mean gap (we sketch the bound we have in mind at the end of this response). The derivation would be more involved but is doable with a suitable relaxation.

- How different assumptions on the f_i yield asymptotically better performance, and asymptotic results for the number of samples used

We agree with the reviewer that it would be valuable to study the asymptotic performance of the proposed algorithms and how it is affected by the reward distribution and the size of the state space. This will be one of our research directions for future work.
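For concreteness, the Serfling bound referenced above (Serfling, 1974) is, stated here as a sketch in our own notation (N, n, a, b, and epsilon are ours, not from the paper): if X_1, ..., X_n are drawn without replacement from a finite population of N values lying in [a, b] with mean mu, then

\[
\Pr\!\left( \frac{1}{n}\sum_{i=1}^{n} X_i - \mu \ge \epsilon \right)
\le \exp\!\left( -\frac{2 n \epsilon^2}{\bigl(1 - \frac{n-1}{N}\bigr)(b-a)^2} \right).
\]

Since the factor 1 - (n-1)/N vanishes as n approaches N, this bound is strictly tighter than the corresponding Hoeffding bound for sampling without replacement, which is what would allow a Prop. 7 analogue to certify a given mean gap with fewer samples.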