1. Assigned_Reviewer_1 - We have also conducted simulated experiments on the Yahoo Learning to Rank Challenge dataset (http://research.microsoft.com/en-us/um/beijing/projects/letor/yahoodata.aspx). Their results agree with the intuition behind our framework and with the results of the other experiments. We will do our best to conduct the other suggested experiments and fit them into the camera-ready version of the paper.

2. Assigned_Reviewer_5 - We have also conducted simulated experiments on the Yahoo Learning to Rank Challenge dataset (http://research.microsoft.com/en-us/um/beijing/projects/letor/yahoodata.aspx). Their results agree with the intuition behind our framework and with the results of the other experiments. We will try to fit them into the camera-ready version of the paper.

3. Assigned_Reviewer_6 - As one direction of future work, we plan to fit our framework directly into models that yield a smooth function of their arguments (labels, features, weights), e.g., certain neural networks. However, we believe the feature-expansion trick (see the remark after Def. 1) partially addresses this issue; in fact, even the complex models mentioned by the reviewer ultimately combine weak learners in a linear fashion (e.g., individual decision trees in GBDT, or the outputs of the last-layer neurons of a neural network).
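To illustrate the point in reply 3, here is a minimal toy sketch (the weak learners and weights are hypothetical, not the paper's model) showing that an ensemble's prediction is linear in its weak-learner outputs, so those outputs can be treated as expanded features:

```python
# Toy weak learners (decision stumps); purely illustrative.
def stump_a(x):
    return 1.0 if x[0] > 0.5 else -1.0

def stump_b(x):
    return 1.0 if x[1] > 0.2 else -1.0

weak_learners = [stump_a, stump_b]
weights = [0.7, 0.3]  # illustrative combination weights

def ensemble_predict(x):
    # The final score is a linear combination of weak-learner outputs,
    # as in GBDT (sum of trees) or a network's last linear layer.
    return sum(w * h(x) for w, h in zip(weights, weak_learners))

def expand_features(x):
    # The feature-expansion view: each weak learner's output is a feature,
    # and the ensemble is a linear model over these features.
    return [h(x) for h in weak_learners]

x = [0.9, 0.1]
linear_view = sum(w * f for w, f in zip(weights, expand_features(x)))
assert abs(ensemble_predict(x) - linear_view) < 1e-12
```

The equivalence above is exactly why the feature-expansion trick reduces such ensembles to the linear case the framework already handles.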