

All 2024 Events

147 Results

Page 1 of 13
Poster
Wed 4:30 Position: Intent-aligned AI Systems Must Optimize for Agency Preservation
Catalin Mitelut · Benjamin Smith · Peter Vamplew
Workshop
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Brian Bartoldson · James Diffenderfer · Konstantinos Parasyris · Bhavya Kailkhura
Poster
Tue 4:30 HumanTOMATO: Text-aligned Whole-body Motion Generation
Shunlin Lu · Ling-Hao Chen · Ailing Zeng · Jing Lin · Ruimao Zhang · Lei Zhang · Heung-Yeung Shum
Poster
Wed 4:30 Learning Reward for Robot Skills Using Large Language Models via Self-Alignment
Yuwei Zeng · Yao Mu · Lin Shao
Workshop
ProtMamba: a homology-aware but alignment-free protein state space model
Damiano Sgarbossa · Cyril Malbranke · Anne-Florence Bitbol
Workshop
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks
Maksym Andriushchenko · Francesco Croce · Nicolas Flammarion
Poster
Tue 2:30 Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Brian Bartoldson · James Diffenderfer · Konstantinos Parasyris · Bhavya Kailkhura
Poster
Thu 2:30 Neural Tangent Kernels for Axis-Aligned Tree Ensembles
Ryuichi Kanoh · Mahito Sugiyama
Poster
Wed 4:30 Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation
Xianghe Pang · Shuo Tang · Rui Ye · Yuxin Xiong · Bolun Zhang · Yanfeng Wang · Siheng Chen
Poster
Wed 4:30 Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels
Haoning Wu · Zicheng Zhang · Weixia Zhang · Chaofeng Chen · Liang Liao · Chunyi Li · Yixuan Gao · Annan Wang · Erli Zhang · Wenxiu Sun · Qiong Yan · Xiongkuo Min · Guangtao Zhai · Weisi Lin
Workshop
Can Language Models Safeguard Themselves, Instantly and For Free?
Dyah Adila · Changho Shin · Yijing Zhang · Frederic Sala