Reviewer 1: Table 1 compares countGauss, countSketch, and true Gaussian matrices as projections when running SVM solvers on the datasets. Prior work (Paul et al., 2014) claimed that countSketch, while faster to apply on real-world sparse datasets, leads to slower running times in downstream applications such as support vector machines (SVMs). We show that countGauss does not suffer this drawback and is in fact faster than both countSketch and Gaussian projection. See the SVMf column for projection sizes 128, 256, and 512: in all three cases countGauss is faster than both alternatives. For instance, at projection size 256 the mean running times are 0.54, 0.21, and 0.98 for countSketch, countGauss, and Gaussian projection, respectively.

Reviewer 4: While Clarkson and Woodruff used countSketch, and one can compose it with a Gaussian matrix, it was not known whether the composition has small total variation distance to applying a Gaussian matrix alone. The composition was therefore not known to be useful in contexts that genuinely require the same behavior as multiplying by a Gaussian matrix (the Clarkson-Woodruff paper concerned a specific form of regression and low-rank approximation error, which do not require the properties of a Gaussian matrix).

Reviewer 5:
1. We will make the constants, as well as the dependence on the total variation distance, explicit in Theorems 2 and 3.
2. You are right that for smaller dimensions we do not see a speedup in the experiments. One reason is that those matrices are dense. We will address this by rerunning on sparse matrices (note that the real-world experiments already use sparse matrices) and by increasing the dimensions to demonstrate the speedup on very high-dimensional datasets.
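For concreteness, here is a minimal sketch of the countGauss transform applied to a sparse matrix (Python with NumPy/SciPy; the intermediate sketch size t, the 1/sqrt(k) scaling, and the function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np
import scipy.sparse as sp

def count_gauss(A, k, t=None, rng=None):
    """Project the columns of a sparse n x d matrix A down to k dimensions
    by composing a countSketch S (d -> t) with a dense Gaussian G (t -> k),
    i.e. computing A @ S.T @ G.T.  Illustrative sketch only; the choice of
    t and the scaling are assumptions, not the paper's exact parameters."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    t = t or 4 * k  # intermediate sketch size; a common heuristic
    # countSketch: each of the d input coordinates is hashed to one of t
    # buckets with a random sign, so S has exactly one nonzero per column.
    rows = rng.integers(0, t, size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    S = sp.csr_matrix((signs, (rows, np.arange(d))), shape=(t, d))
    # Dense Gaussian stage, scaled so norms are preserved in expectation.
    G = rng.standard_normal((k, t)) / np.sqrt(k)
    # A @ S.T costs one pass over the nonzeros of A; the small dense
    # Gaussian then mixes only t dimensions instead of all d.
    return (A @ S.T) @ G.T

# Example: project a sparse 10000 x 5000 matrix down to 256 dimensions.
A = sp.random(10000, 5000, density=0.01, format="csr", random_state=0)
X = count_gauss(A, k=256, rng=0)
print(X.shape)  # (10000, 256)
```

The countSketch stage touches each nonzero of A exactly once, which is what makes the composition cheap on sparse inputs; the small dense Gaussian stage then provides the Gaussian-like behavior whose total variation guarantee Theorems 2 and 3 address.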