Poster in Workshop: Neural Conversational AI Workshop - What’s left to TEACH (Trustworthy, Enhanced, Adaptable, Capable and Human-centric) chatbots?
AutoML-GPT: Large Language Model for AutoML
Yun Da Tsai · Yu-Che Tsai · Bo-Wei Huang · Chun-Pai Yang · Shou-De Lin
With the emerging trend of GPT models, we establish AutoML-GPT, a framework that integrates a comprehensive set of tools and libraries, granting access to a wide range of data preprocessing techniques, feature engineering methods, and model selection algorithms. Users can specify their requirements, constraints, and evaluation metrics through a conversational interface. Throughout the process, AutoML-GPT employs advanced techniques for hyperparameter optimization and model selection, ensuring that the resulting model achieves optimal performance. The system effectively manages the complexity of the machine learning pipeline, guiding users towards the best choices without requiring deep domain knowledge. Through our experimental results on diverse datasets, we demonstrate that AutoML-GPT significantly reduces the time and effort required for machine learning tasks. Its ability to leverage the vast knowledge encoded in large language models enables it to provide valuable insights, identify potential pitfalls, and suggest effective solutions to common challenges faced during model training.
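To make the kind of pipeline described above concrete, the sketch below shows how a user request (in practice, constraints extracted by the LLM from the conversation) could be mapped onto preprocessing, candidate model selection, and hyperparameter search. This is a minimal illustration built on scikit-learn, not the authors' published API; the `request` dictionary, the candidate list, and the dataset are illustrative assumptions.

```python
# Minimal sketch of the style of pipeline AutoML-GPT orchestrates:
# a structured user request is turned into preprocessing, model
# selection, and hyperparameter search. Names here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Constraints as the LLM might extract them from the conversation.
request = {"task": "classification", "metric": "f1", "max_candidates": 2}

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models and search spaces the system could propose.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"model__C": [0.1, 1.0, 10.0]}),
    "rf": (RandomForestClassifier(random_state=0), {"model__n_estimators": [100, 300]}),
}

best_name, best_search = None, None
for name, (model, grid) in list(candidates.items())[: request["max_candidates"]]:
    # Standard preprocessing plus per-model hyperparameter search.
    pipe = Pipeline([("scale", StandardScaler()), ("model", model)])
    search = GridSearchCV(pipe, grid, scoring=request["metric"], cv=5)
    search.fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(best_name, best_search.best_params_, best_search.score(X_test, y_test))
```

In the actual system this loop would be driven conversationally, with the language model choosing candidates, search spaces, and the evaluation metric from the user's stated requirements rather than from a hard-coded dictionary.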