We would like to thank all the reviewers for their thoughtful consideration and feedback. We appreciate R1's and R2's support for acceptance, noting that the "topic seems important" and "initiates an interesting direction of research" (R1) and that our "paper contains novel theoretical contributions and experimental results that could impact many real world learning systems" (R2). Below we discuss the main reviewer concerns.

R1 says "I applaud these first steps towards understanding how to effectively employ labelers" but notes that "the limitation of this study is that it is restricted to very simple concept spaces…and I will be eager to see follow-up works that bring this closer to something practically useful". We fully agree that this work is a first step in an important and underexplored area of machine learning research. Indeed, even within our simple labeling tasks, we found that human teaching efficiency and strategies varied widely. We believe the insights gained from this experiment highlight an often overlooked factor in interactive machine learning, namely that human ability greatly affects learning in practice. With regard to R1's interest in follow-up work on NLP or vision-based tasks, we have indeed begun research on those very tasks, and the difficulties R1 envisions are real. For example, the near-optimal teaching set (chosen from a given corpus) for an SVM in a two-class text categorization task can contain as few as two training documents. Mathematically, the two documents are well characterized: the line segment between them in the bag-of-words space is approximately normal to, and bisected by, the target decision boundary. However, it has proven hard to explain such document pairs to non-technical human teachers, who instead tend to teach with prototypical documents from the two classes.
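To make the geometric characterization above concrete, the sketch below (Python with NumPy; the target boundary, labels, and constants are illustrative, not taken from the paper) constructs such a two-document teaching set for a given linear boundary and checks that the hard-margin SVM fit to just those two points recovers the target. For exactly two opposite-label points, the hard-margin SVM has the closed-form solution used here: the perpendicular bisector of the segment between them.

```python
import numpy as np

# Illustrative target boundary w·x + b = 0 in a 2-D "bag-of-words" space.
w = np.array([3.0, 4.0])
b = -2.0
w_unit = w / np.linalg.norm(w)

# Closest boundary point to the origin, then two teaching items placed
# symmetrically along the normal: the segment between them is normal to,
# and bisected by, the target boundary.
x0 = -b * w / np.linalg.norm(w) ** 2
eps = 0.5
x_pos, x_neg = x0 + eps * w_unit, x0 - eps * w_unit

# Closed-form hard-margin SVM for two opposite-label points:
#   w_hat = 2 (x_pos - x_neg) / ||x_pos - x_neg||^2,  b_hat = -w_hat·midpoint
d = x_pos - x_neg
w_hat = 2 * d / np.dot(d, d)
b_hat = -np.dot(w_hat, (x_pos + x_neg) / 2)

w_hat_unit = w_hat / np.linalg.norm(w_hat)
print(np.allclose(w_hat_unit, w_unit))           # learned normal matches target
print(np.isclose(np.dot(w_hat, x0) + b_hat, 0))  # boundary bisects the segment
```

The sketch is a best-case construction; as noted above, describing such document pairs to non-technical teachers is the hard part in practice.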
Fundamentally, this is a mismatch between the actual learning algorithm and what a non-technical human teacher assumes of the learner. While we continue to explore what type of teacher education is most productive on such tasks, we are also investigating the possibility of an automatic "translator" that converts organic human teaching items into a teaching set for the learner.

With regard to concerns about theoretical contributions (R1 and R4), we argue that our contribution lies in mapping HCI to learning theory. Specifically, our main contribution is the discovery that certain aspects of interactive machine learning can be characterized by a combination of existing theory in teaching dimension and active learning (presented in Table 1). Another contribution is the specific mixed-initiative learning procedure (Alg 1), which is both practical and amenable to analysis; note that not every mixed-initiative procedure enjoys label complexity guarantees. We believe these contributions are important starting points for future researchers and practitioners estimating costs to human labelers, as presented in the discussion section of the paper, and they also frame the subsequent empirical analysis.

With regard to R4's concern that this work "contains no theorems, no propositions, no proofs" and that "the only novel contribution is, apparently, the experiment", we point out that ICML explicitly calls for papers presenting "either theoretical or empirical results". We believe this work presents theoretical contributions such as those described above, constituting "the first formal analysis of a topic that is strongly connected to real-world applications of active learning" (R2), and also presents an "empirical study of the effectiveness of human teachers" (R1). To summarize, in this work we took the necessary first steps toward applying theory directly to human teaching in a principled way.
We believe this calls attention to a serious gap between theory and practice that, in turn, suggests new and important research directions for the ICML community.