From Knowledge to Inference: Formalizing Specialized Public Health Reasoning on GlobalHealthAtlas
Abstract
Public health reasoning requires population-level inference grounded in scientific evidence, expert consensus, and safety constraints, yet it remains underexplored as a structured machine learning problem, with few supervised signals and benchmarks. We introduce GlobalHealthAtlas, a large-scale multilingual dataset of 280,210 instances spanning 15 public health domains and 17 languages, stratified into three difficulty levels ranging from basic health literacy to epidemiological and policy reasoning. Instances are derived from openly available public health sources and labeled by language, domain, and difficulty to support supervised learning and slice-based evaluation. We further propose a large language model (LLM)-assisted construction and quality-control pipeline with retrieval, deduplication, evidence-grounding checks, and label validation to improve consistency at scale. Finally, we present a domain-aligned evaluator distilled from high-confidence judgments of diverse LLMs that assesses outputs along six dimensions: Accuracy, Reasoning, Completeness, Consensus Alignment, Terminology Norms, and Insightfulness. Together, these contributions enable reproducible training and evaluation of LLMs for public health reasoning beyond conventional QA benchmarks.