Structured prediction is the cornerstone of many machine learning applications. Unfortunately, in structured prediction settings with expressive inter-variable interactions, inference, and hence exact learning, is often intractable. We present Decomposed Learning (DecL), a new approach for performing efficient learning over structured output spaces. In DecL, we restrict the inference step to a limited part of the output space. We give characterizations based on the problem structure, the target parameters, and the gold labels that guarantee DecL with limited inference is equivalent to exact learning. We also show that in real-world settings, where our theoretical assumptions may not hold exactly, DecL-based algorithms are significantly more efficient than exact learning while achieving comparable accuracy.
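The core idea of restricting inference to a limited part of the output space can be sketched with a toy structured perceptron, where the inference step searches only labelings within Hamming distance k of the gold labeling rather than the full exponential space. The feature map, label set, and k-flip neighborhood below are illustrative assumptions, not the exact decompositions or learning algorithm of the paper:

```python
import itertools

def phi(x, y):
    # Illustrative feature map for a chain: emission and transition counts.
    feats = {}
    for xi, yi in zip(x, y):
        feats[('emit', xi, yi)] = feats.get(('emit', xi, yi), 0) + 1
    for a, b in zip(y, y[1:]):
        feats[('trans', a, b)] = feats.get(('trans', a, b), 0) + 1
    return feats

def score(w, x, y):
    return sum(w.get(f, 0.0) * v for f, v in phi(x, y).items())

def decomposed_argmax(w, x, y_gold, labels, k=1):
    """DecL-style limited inference: search only labelings that differ
    from the gold labeling in at most k positions, instead of scoring
    all |labels|**n candidate outputs."""
    n = len(y_gold)
    best, best_s = tuple(y_gold), score(w, x, y_gold)
    for m in range(1, k + 1):
        for pos in itertools.combinations(range(n), m):
            for vals in itertools.product(labels, repeat=m):
                y = list(y_gold)
                for p, v in zip(pos, vals):
                    y[p] = v
                y = tuple(y)
                s = score(w, x, y)
                if s > best_s:
                    best, best_s = y, s
    return best

def perceptron_decl(data, labels, k=1, epochs=10):
    # Structured perceptron whose argmax is replaced by limited inference.
    w = {}
    for _ in range(epochs):
        for x, y_gold in data:
            y_hat = decomposed_argmax(w, x, y_gold, labels, k)
            if y_hat != tuple(y_gold):
                for f, v in phi(x, y_gold).items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in phi(x, y_hat).items():
                    w[f] = w.get(f, 0.0) - v
    return w
```

With a length-n output and |labels| labels, exact inference scores |labels|**n candidates, while the k-flip neighborhood scores only O(n**k) of them; when the theoretical conditions in the paper hold, learning against this restricted set still recovers weights under which the gold labeling is optimal.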