Posted on 2024-02-27, 18:54. Authored by Si-Heng Luo, Si-Qi Pan, Gan-Yu Chen, Yi Xie, Bin Ren, Guo-Kun Liu, Zhong-Qun Tian
Denoising is a necessary step in image analysis for extracting weak signals, especially those that can hardly be identified by the naked eye. Unlike data-driven deep-learning denoising algorithms that rely on a clean image as the reference, Noise2Noise (N2N) can denoise noisy images directly, provided that sufficiently many noisy images of the same subject, differing only in their randomly distributed noise, are available.
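For orientation, the N2N objective itself fits in a few lines: a network is trained to predict one noisy view of the subject from another, and with zero-mean, independent noise the loss is minimized in expectation by the clean signal. The PyTorch sketch below is purely illustrative; the model, optimizer, and paired views are assumed inputs, not part of any published implementation.

```python
# Hedged sketch of one classic N2N training step (PyTorch assumed).
# `model` and `optimizer` are placeholders; any image-to-image network works.
import torch.nn.functional as F

def n2n_step(model, optimizer, noisy1, noisy2):
    """Predict one noisy view of the subject from the other.

    No clean reference is used: because the noise in `noisy2` is zero-mean
    and independent of `noisy1`, minimizing this loss in expectation drives
    the prediction toward the underlying clean signal.
    """
    optimizer.zero_grad()
    pred = model(noisy1)              # denoised estimate from view 1
    loss = F.mse_loss(pred, noisy2)   # target is the *other* noisy view
    loss.backward()
    optimizer.step()
    return loss.item()
```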
Further, by introducing data augmentation to create a large data set and regularization to prevent model overfitting, zero-shot N2N-based denoising was proposed, in which only a single noisy image is needed.
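In the zero-shot setting, the training pairs must come from the single noisy input itself. A minimal sketch of such an augmentation step is given below, assuming the pair-downsampling construction used by zero-shot N2N methods; whether P2P builds its pairs exactly this way is an assumption here, not a statement about its implementation. Averaging the two diagonals of every 2x2 block yields two half-resolution views of the same subject carrying independent noise samples.

```python
# Illustrative data augmentation module: one noisy image -> two noisy views.
import torch
import torch.nn.functional as F

def pair_downsample(img):
    """Split a noisy image (B, C, H, W) into two noisy (B, C, H/2, W/2) views."""
    c = img.shape[1]
    # Fixed kernels averaging the two diagonals of every 2x2 block.
    k1 = torch.tensor([[[[0.5, 0.0], [0.0, 0.5]]]],
                      device=img.device, dtype=img.dtype).repeat(c, 1, 1, 1)
    k2 = torch.tensor([[[[0.0, 0.5], [0.5, 0.0]]]],
                      device=img.device, dtype=img.dtype).repeat(c, 1, 1, 1)
    # groups=c applies the same 2x2 kernel to each channel independently.
    view1 = F.conv2d(img, k1, stride=2, groups=c)
    view2 = F.conv2d(img, k2, stride=2, groups=c)
    return view1, view2
```

The same idea carries over to a 1D spectrum by averaging 1x2 blocks with `conv1d` instead.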
Although various N2N-based denoising algorithms have been developed with high performance, their complicated black-box operation stands in the way of lightweight implementation. Therefore, to reveal the working mechanism of zero-shot N2N-based algorithms, we proposed the lightweight Peak2Peak (P2P) algorithm and qualitatively and quantitatively analyzed its denoising behavior on 1D spectra and 2D images. We found that the high-performance denoising originates from the trade-off between the loss function and the regularization in the denoising module, where the regularization acts as the switch that turns denoising on and off. Meanwhile, the signal extraction stems mainly from the self-supervised characteristic learning in the data augmentation module, as sketched below.
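To make the loss-regularization trade-off concrete, the sketch below combines a data-fidelity (residual) term with a consistency regularizer whose weight acts as the switch: setting it to zero disables the constraint that drives denoising. This decomposition follows common zero-shot N2N formulations and is an assumption, not P2P's exact objective; `pair_downsample()` is the function from the augmentation sketch above, and `model` is again a placeholder network.

```python
# Hedged sketch of the denoising module's objective (PyTorch assumed).
import torch.nn.functional as F

def denoise_loss(model, noisy, reg_weight=1.0):
    """Fidelity + consistency terms; `reg_weight` plays the 'switch' role."""
    view1, view2 = pair_downsample(noisy)   # augmentation sketch above
    # Residual (fidelity) term: each denoised view should predict the other.
    fid = 0.5 * (F.mse_loss(model(view1), view2)
                 + F.mse_loss(model(view2), view1))
    # Consistency (regularization) term: denoise-then-downsample should
    # agree with downsample-then-denoise on the full-resolution input.
    d1, d2 = pair_downsample(model(noisy))
    cons = 0.5 * (F.mse_loss(model(view1), d1)
                  + F.mse_loss(model(view2), d2))
    # The regularization weight acts as the denoising "switch".
    return fid + reg_weight * cons
```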
Further, the lightweight P2P improved the denoising speed by at least ten times with little performance loss compared with current N2N-based algorithms.
In general, the visualization of P2P provides a reference for revealing the working mechanism of zero-shot N2N-based algorithms, which would pave the way for their application to real-time (in situ, in vivo, and operando) research, improving both temporal and spatial resolutions. P2P is open-source at https://github.com/3331822w/Peak2Peak and will be accessible online at https://ramancloud.xmu.edu.cn/tutorial.