Thanks for the insightful reviews.

R1: In our context, the curse of dimensionality implies that for usual distributions that are not totally flat (i.e., not the uniform distribution), the ratio of the volume of the region where the density is larger than a given constant to the volume of the region where it is smaller decreases exponentially with d. For this reason, the higher the dimension, the more crucial it is to locate as accurately as possible the region where most of the mass of the density is concentrated. A smart sampler would then sample almost exclusively in this region, and sample in the rest of the space only with small probability. Therefore, the localization method described in Section 3.6 is crucial. On the other hand, in the optimization process, the dimension naturally enters the computational time. Thus, in high dimension, it is in general not reasonable to expect both fast convergence and small computational time, so the curse of dimensionality does indeed appear. Next, it is clear (and unavoidable) that, unless additional assumptions are made, the efficiency of our procedure converges to that of a plain rejection sampler as the dimension goes to infinity; this is unsurprising and holds for all smart sampling methods (e.g., A* sampling). If additional assumptions hold, e.g., that the underlying density depends only on a small number of variables, our method can be combined with, e.g., high-dimensional PCA or manifold estimation, so that the estimator is built only on the lower-dimensional space or manifold where the true density lives. We discussed this only briefly, as it is in some sense an orthogonal question, but we will add a short paragraph describing the relevant techniques with which our method can be combined in high dimensions.

R2: The setting of H_C is an important question. H_C is a constant that enters the construction of the high-probability proposal (upper bound). It calibrates the amount of correction we add to the density estimator to make it an upper bound, and it is therefore linked to the probability with which the proposal indeed dominates the true density. This parameter is thus in some sense "subjective" and should be calibrated according to the preferences of the user of the method: the larger H_C, the higher the probability that the proposal is indeed above the true density (and so the higher the probability that the method is a correct sampler), but also the larger the gap between the proposal and the true density (and so the more often points will be rejected). A "parameter-free" way to select H_C is via cross-validation/bootstrap-based approaches, which go, e.g., as follows. From the initial sample of size N used in the initial sampling phase, we bootstrap B datasets of size N/2. On these B bootstrap datasets, we compute B estimates of f, written \hat f_1, ..., \hat f_B. Let then L_i = sup_{x \in [0,A]^d} |\hat f(x) - \hat f_i(x)|. If the user wants the method to work with probability 1-\delta, they can set H_C = L_{(i)}, where the L_{(j)} are the L_j reordered in increasing order and i = \lfloor (1-\delta) B \rfloor. If, instead of bootstrap resampling, the sample of size N is divided into B sub-samples (and the same procedure is applied), we obtain the cross-validation analogue that we refer to in the paper. We will make these procedures explicit in the paper (other ways of estimating H_C may be used as well).
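To make the bootstrap calibration of H_C concrete, a minimal Python sketch follows. It is only illustrative: `gaussian_kde` stands in for the density estimator used in the paper, the sup over [0, A]^d is approximated on a finite set of evaluation points, and the function name `calibrate_H_C` and its default arguments are our own choices, not part of the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def calibrate_H_C(sample, delta=0.05, B=50, n_eval=200, A=1.0, seed=None):
    """Bootstrap calibration of H_C (sketch of the procedure in the reply to R2).

    `sample` is the initial (N, d) array of points; gaussian_kde is an
    illustrative stand-in for the paper's density estimator, and the sup over
    [0, A]^d is approximated on n_eval random evaluation points.
    """
    rng = np.random.default_rng(seed)
    N, d = sample.shape

    # Density estimate \hat f built on the full initial sample.
    f_hat = gaussian_kde(sample.T)

    # Evaluation points approximating the supremum over [0, A]^d.
    grid = rng.uniform(0.0, A, size=(n_eval, d))
    f_hat_vals = f_hat(grid.T)

    L = np.empty(B)
    for b in range(B):
        # Bootstrap resample of size N/2 and its density estimate \hat f_b.
        idx = rng.choice(N, size=N // 2, replace=True)
        f_b = gaussian_kde(sample[idx].T)
        # L_b = sup_x |\hat f(x) - \hat f_b(x)|, approximated on the grid.
        L[b] = np.max(np.abs(f_hat_vals - f_b(grid.T)))

    # H_C = L_{(i)} with i = floor((1 - delta) * B); convert the 1-based
    # order statistic to a 0-based index into the sorted array.
    L.sort()
    i = int(np.floor((1.0 - delta) * B))
    return L[min(max(i - 1, 0), B - 1)]
```

For example, `calibrate_H_C(initial_sample, delta=0.05, B=100)` would return the 95%-level correction; larger `delta` gives a smaller H_C and hence a tighter proposal, at the cost of a lower probability that the proposal dominates the true density.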
R3: We will improve and extend our discussion and results regarding high dimensions; please see also the reply to R1 on this point. In the experiments section, we compared our method to A* sampling on, among other problems, the clutter problem, which was also evaluated in the A* sampling paper (Maddison et al., NIPS 2014); to the best of our knowledge, A* sampling is considered the state-of-the-art smart sampling approach. If the reviewer has suggestions of other problems to consider, or methods to compare with, we would be happy to implement them.