

Poster in Workshop: 1st ICML Workshop on In-Context Learning (ICL @ ICML 2024)

Verbalized Machine Learning: Revisiting Machine Learning with Language Models

Tim Xiao · Robert Bamler · Bernhard Schölkopf · Weiyang Liu


Abstract:

Motivated by the rapid progress of large language models (LLMs), we introduce the framework of verbalized machine learning (VML). In contrast to conventional machine learning models, which are typically optimized over a continuous parameter space, VML constrains the parameter space to be human-interpretable natural language. This constraint leads to a new perspective on function approximation, in which an LLM with a text prompt can be viewed as a function parameterized by that prompt. Guided by this perspective, we revisit classical machine learning problems, such as regression and classification, and find that they can be solved by an LLM-parameterized learner and optimizer. The major advantages of VML include (1) easy encoding of inductive biases, (2) automatic model selection, and (3) interpretable learner updates.
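To make the framing concrete, the sketch below illustrates the core idea, not the authors' implementation: a learner whose "parameters" are a natural-language description, and an optimizer that is itself an LLM call rewriting that description from observed errors. The `llm` helper, the prompt templates, and the toy regression data are all hypothetical stand-ins.

```python
# Minimal sketch of the VML idea: the learner and the optimizer are both LLM
# calls, and the model "parameters" are natural-language text. `llm` is a
# hypothetical stand-in for any text-completion API.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a dummy string so the
    sketch runs end-to-end without a model. Replace with an actual API."""
    return "<llm response>"


def learner(theta: str, x: float) -> str:
    """f(x; theta): an LLM conditioned on a verbal parameter theta."""
    return llm(f"You are a model described by: {theta}\nInput: {x}\nOutput:")


def optimizer(theta: str, batch: list) -> str:
    """Verbalized update step: an LLM rewrites theta given observed errors."""
    report = "\n".join(f"x={x}, target={y}, prediction={p}" for x, y, p in batch)
    return llm(
        f"Current model description:\n{theta}\n\n"
        f"Predictions on a batch:\n{report}\n\n"
        "Rewrite the model description so future predictions are closer "
        "to the targets."
    )


if __name__ == "__main__":
    # One training step on a toy regression problem (true rule: y = 2x).
    theta = "Predict y from x; start by guessing y = x."
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    preds = [(x, y, learner(theta, x)) for x, y in data]
    theta = optimizer(theta, preds)  # the update is new, human-readable text
    print(theta)
```

Because each update produces plain text rather than a weight vector, every "training step" in this loop is directly inspectable, which is what the abstract means by interpretable learner updates.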
