Although kernel-based online learning has shown promising performance in a number of studies, its main shortcoming is that the number of support vectors is unbounded, which makes it unsuitable for large-scale datasets. In this work, we study the problem of bounded kernel-based online learning, which aims to constrain the number of support vectors by a predefined budget. Although several algorithms have been proposed for this problem, they are mostly based on the Perceptron algorithm and only provide bounds on the number of classification mistakes. To address this limitation, we propose a framework for bounded kernel-based online learning based on the online gradient descent approach, and develop two efficient bounded online gradient descent algorithms: (i) BOGD, which maintains the budget by uniform sampling, and (ii) BOGD++, which uses non-uniform sampling. We present a theoretical analysis of the regret bounds of both algorithms and demonstrate their promising empirical performance by comparing them against state-of-the-art algorithms for bounded kernel-based online learning.
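To make the uniform-sampling idea concrete, the following is a minimal sketch of a BOGD-style learner for binary classification with the hinge loss and an RBF kernel. It is an illustration under our own assumptions, not the paper's exact procedure: the class name, parameters (budget, eta, lam, gamma), and the B/(B-1) rescaling constant (chosen so that the discarded function is preserved in expectation) are ours.

```python
import numpy as np

class BOGDUniform:
    """Sketch of bounded online gradient descent with uniform SV sampling.

    Hypothetical implementation: names and the exact rescaling constant
    are illustrative assumptions, not taken verbatim from the paper.
    """

    def __init__(self, budget=100, eta=0.1, lam=0.01, gamma=1.0, seed=0):
        self.budget = budget      # maximum number of support vectors B
        self.eta = eta            # learning rate
        self.lam = lam            # regularization parameter
        self.gamma = gamma        # RBF kernel width
        self.rng = np.random.default_rng(seed)
        self.sv_x = []            # support vectors
        self.sv_alpha = []        # their coefficients

    def _kernel(self, a, b):
        return np.exp(-self.gamma * np.sum((a - b) ** 2))

    def predict(self, x):
        # f(x) = sum_i alpha_i * k(x_i, x); cost is bounded by the budget
        return sum(a * self._kernel(s, x)
                   for s, a in zip(self.sv_x, self.sv_alpha))

    def update(self, x, y):
        margin = y * self.predict(x)
        shrink = 1.0 - self.eta * self.lam   # gradient step on the regularizer
        self.sv_alpha = [a * shrink for a in self.sv_alpha]
        if margin >= 1.0:                    # no hinge loss: shrink only
            return
        if len(self.sv_x) >= self.budget:
            # Budget full: discard one SV uniformly at random and rescale
            # the survivors by B/(B-1), so each coefficient is preserved
            # in expectation (kept with probability (B-1)/B).
            drop = int(self.rng.integers(len(self.sv_x)))
            del self.sv_x[drop], self.sv_alpha[drop]
            b = self.budget
            self.sv_alpha = [a * b / (b - 1) for a in self.sv_alpha]
        self.sv_x.append(x)
        self.sv_alpha.append(self.eta * y)   # gradient step on the hinge loss
```

A streaming driver would simply call update(x_t, y_t) on each arriving example; since the support set never exceeds the budget, both prediction and update cost a bounded number of kernel evaluations per round. The non-uniform variant (BOGD++) would differ only in how the discard index is sampled and in the corresponding importance-weighted rescaling.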