We thank the reviewers for their valuable feedback and comments, which will certainly be helpful for polishing the paper.

As a general remark: all of the written reviews come across as highly positive about our submission (thank you!), regarding both the writing and the novelty of the contribution. The scores, on the other hand, give the impression that the paper is borderline. We would kindly suggest that the reviewers consider upgrading their scores to match what seems to be a quite positive assessment.

Reviewer 1:
------------
1. Computing the barrier itself does not give an algorithm. One needs to compute the inverse Hessian and the gradient in order to obtain the Newton direction. This is what we do in this paper, and we show the (surprising) relationship to simulated annealing (a generic sketch of the Newton step appears at the end of this response). We also note that we had previously communicated this observation to Bubeck and Eldan.
2. Karmarkar: absolutely right, we will correct this.
3. "Iterative Newton algorithm": it is indeed a path-following method, but it may differ from the standard Nesterov-Nemirovskii scheme in its step size.
We agree with the rest of the comments and will implement them in the revision.

Reviewer 2:
-----------
3. Indeed, besides the theoretical interest (the membership oracle being the most general), there are many such examples: essentially any zero-order optimization problem can be viewed this way. For example, optimizing user preferences in economic applications, where often we can only observe what the user purchased; optimizing transportation systems, where one can obtain feedback on average delay but cannot compute gradients; hyperparameter optimization in machine learning (where one can measure running time); and so on.
4. Indeed, the SDP section is misleading: it reads as if there were no improvement, whereas the improvement is actually $\sqrt{n}$ (the dimension n here is really n^2, since we are in the matrix world). Other interesting cases are ellipsoids and balls, which admit an $O(1)$ barrier. We will add this in the revision.
6. Indeed, this is a typo: t should be very high initially and slowly decrease. It was shown by Kalai and Vempala that t on the order of the dimension suffices.
13. Lemma 4 gives the formal connection between the divergence of two consecutive distributions, a notion from annealing, and the local-norm distance of two consecutive points, an IPM notion. It is used in the main proof of Lemma 5. It is indeed out of place, which happened as a result of ICML formatting; we will place it in context.
14. A good suggestion; we will fix it.
We agree with the rest of the useful comments and will implement these suggestions.

Reviewer 3:
-----------
Let us address your point: "The relevance of this result to Machine Learning is not immediately clear to me." Keep in mind that both *sampling methods* and *interior-point methods* are frequently used in ML algorithms. In particular, optimization over the PSD cone arises quite frequently in problems such as matrix factorization and ordinal embedding.
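
For concreteness, here is a generic sketch of the path-following Newton step referred to in our response to Reviewer 1, point 1. The notation is the standard one for a linear objective $c^\top x$ over a convex body $K$ with a self-concordant barrier $\phi$; this is an illustration of the general technique, not necessarily the exact update or parameterization analyzed in the paper:

$$f_t(x) = \frac{c^\top x}{t} + \phi(x), \qquad
  n_t(x) = -\left[\nabla^2 \phi(x)\right]^{-1}\!\left(\frac{c}{t} + \nabla\phi(x)\right), \qquad
  x^{+} = x + n_t(x),$$

where the Hessian of $f_t$ is simply $\nabla^2\phi$ because the objective term is linear, and the temperature-like parameter $t$ starts high and is gradually decreased. On the annealing side, the corresponding Boltzmann-type distribution at temperature $t$ is $\mu_t(x) \propto \exp(-c^\top x / t)$ supported on $K$, again with $t$ decreasing over the run.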