Spotlight in Workshop: AI for Science: Scaling in AI for Scientific Discovery

Parameter-Efficient Quantized Mixture-of-Experts Meets Vision-Language Instruction Tuning for Semiconductor Electron Micrograph Analysis

Sagar Srinivas Sakhinana · Sannidhi Geethan · Chidaksh Ravuru · Venkataramana Runkana

Keywords: [ Small-Scale Multimodal Models (SMMs) ] [ Parameter-Efficient Vision-Language Instruction Tuning ] [ Semiconductor Imaging and Analysis ]


Abstract:

Semiconductors, crucial to modern electronics, remain largely under-explored in foundation-model research. This gap highlights the need for work that enhances the semiconductor device technology portfolio and aids high-end device fabrication. In this paper, we introduce sLAVA, a small-scale vision-language assistant tailored for semiconductor manufacturing, with a focus on electron microscopy image analysis. It addresses the challenges of data scarcity and of acquiring high-quality, expert-annotated data. We employ a teacher-student paradigm, using a foundational vision-language model such as GPT-4 as a teacher to generate instruction-following multimodal data for customizing the student model, sLAVA, for electron microscopy image analysis tasks on consumer hardware with limited budgets. Our approach lets enterprises further fine-tune the framework with their proprietary data securely within their own infrastructure, protecting intellectual property. Rigorous experiments validate that our framework surpasses traditional methods, handles data shifts, and enables high-throughput screening.
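As a rough illustration of the teacher-student data-generation step described in the abstract, the sketch below prompts a GPT-4-class teacher with a textual description of an electron micrograph and asks it to emit instruction-response pairs that could later supervise the student model. The prompt wording, the `generate_instruction_pairs` helper, and the choice of the `gpt-4o` model are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of teacher-driven
# instruction-data generation: a GPT-4-class teacher turns a textual
# description of an electron micrograph into instruction-response pairs.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are an expert in semiconductor electron microscopy. Given a "
    "textual description of an electron micrograph, write three "
    "instruction-response pairs that teach a vision-language assistant "
    "to analyze the image. Respond with a JSON object containing a "
    "'pairs' list, where each item has 'instruction' and 'response' keys."
)

def generate_instruction_pairs(image_description: str) -> list[dict]:
    """Ask the teacher model for instruction-following data for one image."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # any GPT-4-class teacher model; an assumption here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": image_description},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)["pairs"]

if __name__ == "__main__":
    # One caption yields several (instruction, response) training examples.
    pairs = generate_instruction_pairs(
        "SEM cross-section of a FinFET showing fins at roughly 30 nm pitch."
    )
    print(json.dumps(pairs, indent=2))
```

In a full pipeline of this kind, the generated pairs would be matched with their source micrographs and used to instruction-tune the quantized, parameter-efficient student model on consumer hardware.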
