1. College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
2. Key Laboratory of Novel Targets and Drug Study for Neural Repair of Zhejiang Province, School of Medicine, Hangzhou City University, Hangzhou, China
3. Key Laboratory of Soybean Molecular Design Breeding, National Key Laboratory of Black Soils Conservation and Utilization, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun, China
4. Institute of Physics, Chinese Academy of Sciences, Beijing, China
Huijun Tan (tanhuijun@nuaa.edu.cn)
Zhijing Zhu (vczzj@zju.edu.cn)
Depeng Wang (depeng.wang@nuaa.edu.cn)
Received: 08 October 2024
Revised: 04 March 2025
Accepted: 21 March 2025
Published Online: 29 April 2025
Published: 31 July 2025
Lin, B. Z. et al. Real-time and universal network for volumetric imaging from microscale to macroscale at high resolution. Light: Science & Applications 14, 1851–1869 (2025). DOI: 10.1038/s41377-025-01842-w.
Light-field imaging has wide applications in various domains, including microscale life science imaging, mesoscale neuroimaging, and macroscale fluid dynamics imaging. The development of deep learning-based reconstruction methods has greatly facilitated high-resolution light-field image processing; however, current deep learning-based light-field reconstruction methods have predominantly concentrated on the microscale. Given the multiscale imaging capacity of the light-field technique, a network that works across different scales of light-field image reconstruction would significantly benefit the development of volumetric imaging. Unfortunately, to our knowledge, no universal high-resolution light-field reconstruction algorithm compatible with the microscale, mesoscale, and macroscale has been reported. To fill this gap, we present a real-time and universal network (RTU-Net) that reconstructs high-resolution light-field images at any scale. RTU-Net, the first network to handle multiscale light-field image reconstruction, employs an adaptive loss function based on generative adversarial theory and consequently exhibits strong generalization capability. We comprehensively assessed the performance of RTU-Net by reconstructing multiscale light-field images, including a microscale tubulin and mitochondrion dataset, a mesoscale synthetic mouse neural dataset, and a macroscale light-field particle imaging velocimetry dataset. The results indicate that RTU-Net achieves real-time, high-resolution light-field reconstruction for volume sizes ranging from 300 μm × 300 μm × 12 μm to 25 mm × 25 mm × 25 mm and offers higher resolution than recently reported light-field reconstruction networks. The high resolution, strong robustness, high efficiency, and especially the general applicability of RTU-Net will significantly deepen our insight into high-resolution volumetric imaging.
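As a rough illustration of the adaptive, GAN-based loss mentioned above, the short PyTorch sketch below shows one plausible way to couple a pixel-wise content term with an adversarial term whose weight adapts to the discriminator's confidence. The class name AdaptiveGANLoss, the base_weight parameter, and the weighting rule are assumptions made for illustration; the authors' actual loss formulation is not given in this section.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGANLoss(nn.Module):
    # Illustrative adaptive generator loss (an assumed scheme, not the paper's
    # actual formulation): a pixel-wise content term plus an adversarial term
    # whose weight depends on how realistic the discriminator already finds
    # the reconstructed volume.
    def __init__(self, base_weight: float = 0.01):
        super().__init__()
        self.base_weight = base_weight

    def forward(self, fake_volume: torch.Tensor,
                real_volume: torch.Tensor,
                disc_logits_fake: torch.Tensor) -> torch.Tensor:
        # Content term: L1 distance between reconstructed and reference volumes.
        content = F.l1_loss(fake_volume, real_volume)
        # Adversarial term: the generator tries to make the discriminator
        # label the reconstruction as real (non-saturating GAN objective).
        adversarial = F.binary_cross_entropy_with_logits(
            disc_logits_fake, torch.ones_like(disc_logits_fake))
        # Adaptive weight: if the discriminator already scores the volume as
        # realistic, shrink the adversarial weight so the content term
        # dominates; otherwise push harder on the adversarial objective.
        with torch.no_grad():
            realism = torch.sigmoid(disc_logits_fake).mean()
        weight = self.base_weight * (1.0 - realism)
        return content + weight * adversarial

In a training step, one would evaluate the discriminator on the generator's reconstruction, pass the resulting logits to this loss, and back-propagate through the generator only; the discriminator keeps its usual real/fake objective.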