Abstract
Sparse least mean square (LMS) algorithms approximate a sparseness constraint as a zero-point attraction term that forces small tap weights toward the origin when the unknown system to be identified is sparse. Recently, the online linearized Bregman iteration (OLBI) algorithm adopted a soft thresholding technique based on $L_{1}$-norm regularization to reduce the steady-state error. Although soft thresholding successfully improves the accuracy of the adaptive filter for sparse systems, it is limited to $L_{1}$-norm regularization. In sparse representation, $L_{0}$-norm regularization can theoretically yield the sparsest representation and thus lead to promising performance in adaptive filters. In this regard, we introduce an $L_{0}$-norm-based LMS algorithm that exploits hard thresholding through a variable splitting method. The proposed algorithm preserves the behavior of large tap weights and strongly enforces small tap weights to zero via a relaxation of the $L_{0}$-norm regularization. We also provide mean stability conditions and a theoretical mean-square performance analysis of the proposed algorithm. Experimental results show that the proposed algorithm achieves superior convergence performance compared with conventional sparse algorithms.
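The core idea described in the abstract, a standard LMS gradient step followed by a hard-thresholding step that zeroes small tap weights while leaving large ones untouched, can be sketched as below. This is an illustrative reconstruction under assumed parameters (filter length, step size `mu`, threshold `tau`), not the authors' exact variable-splitting algorithm or its theoretical analysis.

```python
import numpy as np

def hard_threshold(w, tau):
    """Set tap weights with magnitude below tau to exactly zero."""
    out = w.copy()
    out[np.abs(out) < tau] = 0.0
    return out

def sparse_lms_hard(x, d, n_taps, mu=0.05, tau=0.01):
    """LMS with a hard-thresholding zero-attraction step (illustrative).

    Unlike L1-based soft thresholding, which shrinks every tap and
    biases large weights, the hard threshold preserves large taps
    and forces small taps exactly to zero.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # regressor, most recent first
        e = d[n] - w @ u                     # instantaneous error
        w = w + mu * e * u                   # standard LMS gradient step
        w = hard_threshold(w, tau)           # L0-style zero attraction
    return w

# Hypothetical sparse system for demonstration
rng = np.random.default_rng(0)
h = np.array([0.0, 0.9, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0])
x = rng.standard_normal(3000)
d = np.convolve(x, h)[: len(x)]  # noise-free system output
w = sparse_lms_hard(x, d, len(h))
```

After convergence on this noise-free example, the estimate recovers the two active taps while the inactive taps are pinned exactly to zero by the thresholding step, illustrating why hard thresholding avoids the bias that soft thresholding introduces on large weights.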
| Original language | English |
|---|---|
| Article number | 9113696 |
| Pages (from-to) | 3597-3601 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Circuits and Systems II: Express Briefs |
| Volume | 67 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2020 |
Keywords
- Adaptive filter
- hard thresholding
- least mean square algorithm
- sparse system identification
- sparseness constraint