Evidential Copula Concept Embedding Models
Abstract
Concept Embedding Models (CEMs) advance interpretable AI by extending Concept Bottleneck Models (CBMs) with semantic concept embeddings, making them a valuable tool in high-stakes domains such as medical diagnosis, where both accuracy and interpretability are critical. However, a fundamental limitation persists: existing CEMs assume concept independence, overlooking the complex dependencies among concepts. To address this, we propose the Evidential Copula Concept Embedding Model (EC-CEM), which captures inter-concept dependencies through a joint distribution over concepts while retaining a flexible structure that decouples the marginal concept distributions from their dependency structure. Specifically, EC-CEM relaxes the concept independence assumption by integrating copula theory with evidential deep learning to define the joint concept distribution. EC-CEM further introduces two training objectives that jointly optimize classification and concept modeling. We provide theoretical justification via variational inference and demonstrate empirical superiority through extensive experiments.