Paper ID: 23
Title: Revisiting Semi-Supervised Learning with Graph Embeddings

===== Review #1 =====

Summary of the paper (Summarize the main claims/contributions of the paper.):
The paper proposes a semi-supervised learning method that uses a graph-embedding technique within the graph-based semi-supervised learning (GSSL) framework. Since the original framework is transductive, the paper also presents an inductive extension. Experiments are conducted on five text data sets to validate the effectiveness of the proposed approach.

Clarity - Justification:
The presentation of the paper could be improved. It is not easy to follow, especially the technical part.

Significance - Justification:
Although the presented approach is generally reasonable, it is a bit incremental. Specifically, the paper can be viewed as a combination of GSSL and graph embedding, where both components are largely based on existing approaches. In this case, the technical novelty does not seem significant. Although the motivation of the paper is fine, it is also somewhat straightforward.

Detailed comments. (Explain the basis for your ratings while providing constructive feedback.):
The paper proposes a semi-supervised learning method that uses a graph-embedding technique within the graph-based semi-supervised learning (GSSL) framework. Since the original framework is transductive, the paper also presents an inductive extension. Experiments are conducted on five text data sets to validate the effectiveness of the proposed approach. The proposed approach is reasonable and has the potential to improve performance, but the paper has some limitations.

1. Although the presented approach is generally reasonable, it is a bit incremental. Specifically, the paper can be viewed as a combination of GSSL and graph embedding, where both components are largely based on existing approaches. In this case, the technical novelty does not seem significant.

2.
Although the motivation of the paper is fine, it is also somewhat straightforward.

3. It is not clear why the paper places so much emphasis on contributing both inductive and transductive settings. To the best of my knowledge, in many SSL works such an inductive extension of a transductive method is fairly routine.

4. The presentation of the paper could be improved. It is not easy to follow, especially the technical part.

5. I think the paper should be submitted to an NLP conference rather than a learning conference. Judging from the experiments, the insight for the NLP community might be significant, whereas the insight for the learning community seems only marginal.

===== Review #2 =====

Summary of the paper (Summarize the main claims/contributions of the paper.):
The paper presents a method that jointly performs embedding and label prediction in a graph, in a semi-supervised framework called Planetoid. The authors propose both a transductive and an inductive variant of the embedding+classification approach. Planetoid is compared to other embedding and classification models, and improvements are shown across different types of datasets.

Clarity - Justification:
The paper is clearly written -- the problem is stated succinctly, and the approach is clearly described.

Significance - Justification:
The paper is novel -- it proposes a method that jointly embeds points and predicts classification labels, in a semi-supervised framework.

Detailed comments. (Explain the basis for your ratings while providing constructive feedback.):
Strengths:
+ The paper uses the framework of deep neural networks to perform the joint embedding and classification.
+ The SGD-based training algorithm is efficient and scalable.
+ Empirical results on a wide variety of datasets (including text classification) show the benefit of this approach over other related work, e.g., label propagation, semi-supervised embedding, and manifold regularization.
Weaknesses:
- The authors do not analyze how robust their approach is to label noise or to the hyperparameters of the algorithm.
- The authors do not perform a thorough ablation study to show the relative benefit of embedding and classification in the joint optimization setting.

The paper is overall novel and a strong submission.

===== Review #3 =====

Summary of the paper (Summarize the main claims/contributions of the paper.):
The paper proposes an approach that combines graph-based SSL with some of the recent advances in learning embeddings. The authors assume that one is given a graph over labeled and unlabeled instances, and they learn an embedding of each instance to jointly predict its class label (similar to past work) and its context in the graph. In essence, the authors are doing multi-task learning, where one of the tasks is to predict the label of interest while the other task involves predicting the graph context. The paper presents results on a number of different tasks and shows that the proposed approach outperforms state-of-the-art approaches.

Clarity - Justification:
I found certain parts of Section 3 hard to follow. I would recommend presenting the objective upfront and then discussing the sampling strategy. In addition, perhaps focus on the inductive setting first and then present the transductive setting.

Significance - Justification:
I found the paper interesting in that it combines some of the advances in graph-based SSL with recent advances in learning embeddings.

Detailed comments. (Explain the basis for your ratings while providing constructive feedback.):
For text classification, why did you not consider corpora such as Reuters or WebKB, which are fairly standard for this task? These differ from the data sets used in the paper in that they do not come with a link graph. Further, from my understanding of the different tasks in the paper, in each case the graph (i.e., the matrix A) is given.
What about cases where the graph is not given and must be induced from the data? Would it still make sense to learn an embedding to predict the graph context?

=====
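For readers unfamiliar with the reviewed method, the joint objective that Reviews #2 and #3 describe — a supervised label-prediction loss combined with an unsupervised graph-context loss, trained with SGD — can be sketched as below. The function names, tensor shapes, sampling of context pairs, and the full-softmax context term are illustrative assumptions for a toy setting, not the paper's exact formulation (a practical implementation would use negative sampling for the context term).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(E, W_label, W_ctx, labeled, labels, edges, lam=1.0):
    """E: (n, d) instance embeddings; W_label: (d, c) classifier weights;
    W_ctx: (n, d) per-node context vectors; labeled: indices of labeled
    instances; labels: their classes; edges: (i, j) context pairs sampled
    from the graph (e.g. via random walks); lam: trade-off weight.
    All names and shapes are illustrative assumptions."""
    # Supervised term: cross-entropy of predicting labels from embeddings.
    probs = softmax(E[labeled] @ W_label)
    l_sup = -np.mean(np.log(probs[np.arange(len(labeled)), labels]))
    # Unsupervised term: negative log-likelihood of each node's graph
    # context, with a full softmax over all nodes (fine at toy scale;
    # a real implementation would use negative sampling instead).
    ctx_probs = softmax(E @ W_ctx.T)          # (n, n) node-context probs
    rows, cols = zip(*edges)
    l_ctx = -np.mean(np.log(ctx_probs[list(rows), list(cols)]))
    return l_sup + lam * l_ctx

# Toy usage: 6 nodes, 4-dim embeddings, 2 classes, a few context pairs.
n, d, c = 6, 4, 2
E = rng.normal(size=(n, d))
W_label = rng.normal(size=(d, c))
W_ctx = rng.normal(size=(n, d))
loss = joint_loss(E, W_label, W_ctx,
                  labeled=[0, 1, 2], labels=[0, 1, 0],
                  edges=[(0, 1), (2, 3), (4, 5)])
```

In the transductive variant the reviews mention, the embeddings E can be free parameters per node; in the inductive variant, they would instead be produced by a network applied to node features, so that unseen instances can be embedded.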