Workshop: Hardware-aware efficient training (HAET)

Not All Lotteries Are Made Equal

Surya Kant Sahu · Sai Mitheran · Somya Suhans Mahapatra


The Lottery Ticket Hypothesis (LTH) states that, within a reasonably sized neural network, there exists a sub-network that, when trained in isolation from the same initialization, performs no worse than its dense counterpart. This work investigates the relationship between model size and the ease of finding these sparse sub-networks. Our experiments show that, surprisingly, under a finite budget, smaller models benefit more from Ticket Search (TS).
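The abstract does not spell out the ticket-search procedure; as background, ticket search is commonly instantiated as iterative magnitude pruning (IMP): train, prune the smallest-magnitude surviving weights, rewind the survivors to their initial values, and repeat. A minimal NumPy sketch of one such round, with hypothetical names (`imp_round`, `prune_frac`) chosen for illustration:

```python
import numpy as np

def imp_round(trained, init, mask, prune_frac=0.2):
    """One round of iterative magnitude pruning (IMP).

    trained: weights after training (flat array)
    init:    the original initialization (same shape)
    mask:    boolean array marking currently surviving weights
    Prunes the smallest-magnitude fraction of surviving weights,
    then rewinds the survivors to their init values (the "ticket").
    """
    # Magnitudes of the weights that are still alive.
    magnitudes = np.abs(trained[mask])
    # Threshold at the prune_frac quantile of surviving magnitudes.
    threshold = np.quantile(magnitudes, prune_frac)
    new_mask = mask & (np.abs(trained) > threshold)
    # Rewind: surviving weights return to their initialization.
    ticket = np.where(new_mask, init, 0.0)
    return ticket, new_mask
```

Repeating this round until the target sparsity is reached, and retraining the ticket each time, is the form of ticket search whose cost the paper compares across model sizes.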
