On December 23, the team led by Professor Xingjun Wang and Researcher Haowen Shu from the Center, in collaboration with the team of Professor Cheng Wang from City University of Hong Kong and the team of Professor Linjie Zhou from Shanghai Jiao Tong University, published a research article entitled “Integrated bionic LiDAR for adaptive 4D machine vision” online in Nature Communications.
Inspired by the visual mechanism of the human eye, the research team proposed and experimentally demonstrated a novel integrated bionic FMCW LiDAR architecture. For the first time at the chip scale, the system realizes adaptive parallel 4D imaging with a “gaze” function. Furthermore, through synergistic sensing with a camera, the system enables a 4D-plus machine vision representation, offering a new pathway toward next-generation intelligent perception characterized by high resolution, low power consumption, and high flexibility.
Driven by emerging application scenarios such as autonomous driving, embodied intelligence, and low-altitude intelligent systems, machine vision is evolving from simply “being able to see” to “seeing clearly, seeing quickly, and seeing comprehensively.” On one hand, complex and dynamic environments require sensing systems to maintain continuous situational awareness over a wide field of view. On the other hand, decision safety depends on providing higher resolution, lower latency, and richer multidimensional information for key targets and regions. As a result, next-generation machine vision systems must simultaneously deliver high resolution, low power consumption, and scalable parallelism.
Illustration of human-eye-inspired vision and intelligent perception scenarios
Against this backdrop, the tension between performance limits and system cost in LiDAR, as an active optical sensor, has become increasingly prominent. Fundamentally, LiDAR operation can be regarded as digital sampling of the continuous physical world, where the sampling point density and the accessible information dimensions (such as range, velocity, and reflectivity) determine perception accuracy and information richness. However, traditional approaches to improving resolution often rely on brute-force scaling, adding more channels and increasing sampling rates across the entire field of view, which leads to rapid growth in the number of optoelectronic devices as well as in the bandwidth and processing requirements of the back-end high-speed electronics. Consequently, system cost, power consumption, and complexity quickly approach engineering limits. In essence, this scaling path based on spatial channel density yields diminishing returns at ever-increasing cost.
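As a rough, back-of-the-envelope illustration of this scaling behavior, the Python sketch below counts the points a uniform raster scan must deliver per frame; the field of view and angular steps are assumed values chosen for illustration only, not figures from the paper.

    # Hypothetical uniform-scan point budget: halving the angular step over a
    # two-dimensional field of view quadruples the points required per frame.
    # All numbers are illustrative assumptions, not the paper's specifications.

    def points_per_frame(fov_h_deg, fov_v_deg, step_deg):
        """Points a uniform raster scan needs to cover the field of view once."""
        return (fov_h_deg / step_deg) * (fov_v_deg / step_deg)

    for step in (0.2, 0.1, 0.05):
        n = points_per_frame(120.0, 25.0, step)
        print(f"{step:.2f} deg step -> {n:,.0f} points per frame")
    # 0.20 deg ->    75,000 points per frame
    # 0.10 deg ->   300,000 points per frame
    # 0.05 deg -> 1,200,000 points per frame

Every one of those points must be generated, detected, digitized, and processed, which is why uniform densification drives device count and electronic bandwidth upward so quickly.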
This challenge is even more pronounced for FMCW LiDAR. FMCW LiDAR offers high ranging accuracy, strong interference immunity, and the ability to simultaneously retrieve multiple dimensions such as distance and velocity. However, its coherent detection chain imposes much stricter requirements on laser linewidth, frequency-sweep linearity, and phase stability, making single-channel implementations significantly more complex and costly than incoherent schemes. As a result, the commonly used strategy of enhancing angular resolution by densely stacking spatial channels scales far less well in FMCW systems: marginal costs rise sooner, and the engineering burdens in device count, packaging density, and thermal management become increasingly difficult to sustain. In contrast, the human eye achieves high visual acuity with limited energy consumption not by pursuing uniformly maximum resolution across the entire field of view, but by adopting a “peripheral vision plus gaze” mechanism that concentrates high sampling density on regions of interest. Constructing a comparable, human-eye-like mechanism for adaptive resource allocation under a limited channel budget, one that achieves higher effective resolution while improving sampling efficiency, therefore constitutes a key scientific challenge for scalable high-resolution coherent sensing.
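The gaze principle can be made concrete with a minimal Python sketch; the budget split, field of view, and region-of-interest size below are purely illustrative assumptions, not the paper's parameters.

    # Illustrative "peripheral vision plus gaze" budgeting, not code from the
    # paper: a fixed number of angular samples per scan line is split between
    # a small region of interest (ROI) and the remaining field of view.

    def allocate_samples(total_samples, fov_deg, roi_deg, roi_fraction):
        """Return (roi_spacing, background_spacing) in degrees per sample for
        a one-dimensional scan, given the budget share devoted to the ROI."""
        roi_samples = total_samples * roi_fraction
        bg_samples = total_samples - roi_samples
        roi_spacing = roi_deg / roi_samples            # dense sampling in the ROI
        bg_spacing = (fov_deg - roi_deg) / bg_samples  # sparse sampling elsewhere
        return roi_spacing, bg_spacing

    # Assumed example: 2,000 samples over a 60-degree scan line, with half of
    # the budget concentrated on a 1-degree region of interest.
    roi, bg = allocate_samples(2000, fov_deg=60.0, roi_deg=1.0, roi_fraction=0.5)
    print(f"ROI: {roi:.4f} deg/sample, elsewhere: {bg:.4f} deg/sample")
    # The ROI is sampled about 30x more finely than a uniform scan
    # (60/2000 = 0.03 deg per sample) would allow, at the same total budget.

The same budget therefore buys either uniformly coarse coverage or, with gaze-based allocation, sharp vision exactly where it is needed.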
To address this challenge, the research team proposed a novel “micro-parallel” architecture and developed a parallel FMCW LiDAR prototype system with dynamic gaze capability. The experimental results demonstrate that high-resolution imaging no longer needs to rely solely on dense spatial channel stacking; instead, more efficient resolution scaling can be achieved through reconfigurable wavelength/frequency-domain resource scheduling. The system achieves an angular resolution of 0.012° within local regions of interest while simultaneously maintaining wide field-of-view coverage and high-fidelity imaging on the same platform. More importantly, by combining a thin-film lithium niobate electro-optic frequency comb with a widely tunable external-cavity laser, the architecture decouples “field-of-view coverage” from “local resolution requirements”: the external-cavity laser provides wide-range scanning of the viewing direction for global coverage, while the electro-optic frequency comb adaptively increases sampling density in target regions, achieving a synergistic optimization of “seeing wide” and “seeing clearly.”
Architecture of the “micro-parallel” FMCW LiDAR system
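As a rough sketch of how these two optical degrees of freedom could map onto beam angles, one may assume a wavelength-dispersive beam-steering element with a fixed angular dispersion; the dispersion model and every number below are illustrative assumptions rather than the parameters reported in the paper.

    # Hypothetical mapping from the two optical "knobs" to beam angles,
    # assuming a wavelength-dispersive steering element with a fixed angular
    # dispersion D (degrees per nanometer). All values are assumptions, not
    # the design reported in the paper.

    C = 3.0e8             # speed of light, m/s
    WAVELENGTH = 1.55e-6  # assumed operating wavelength, m
    D_DEG_PER_NM = 0.1    # assumed angular dispersion of the steering element

    def freq_spacing_to_angle(delta_f_hz):
        """Angular separation (deg) produced by an optical frequency spacing."""
        delta_lambda_nm = (WAVELENGTH ** 2) * delta_f_hz / C * 1e9
        return D_DEG_PER_NM * delta_lambda_nm

    # Coarse coverage: tuning the external-cavity laser over, say, 60 nm sweeps
    # the viewing direction across roughly D * 60 nm of angle.
    print("coverage span:", D_DEG_PER_NM * 60.0, "deg")
    # Fine gaze: electro-optic comb lines spaced by, say, 25 GHz land only a
    # small fraction of a degree apart, densifying the gazed region; changing
    # the comb repetition rate reconfigures this density on demand.
    print("comb-line spacing:", round(freq_spacing_to_angle(25e9), 4), "deg")

In this picture, the laser's tuning range sets how widely the system can look, while the comb's reconfigurable line spacing sets how finely it looks there, which is the decoupling described above.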
Building on this architecture, the team further demonstrated the first real-time parallel 4D imaging system based on an integrated optical frequency comb. In addition to three-dimensional geometric information, the system simultaneously retrieves target velocity and reflectivity, and performs multimodal fusion with a visible-light camera to colorize the point clouds. This compensates for LiDAR's difficulty in directly acquiring color and appearance information, enhances scene interpretability and semantic expressiveness, and enables a 4D-plus scene representation tailored to the tasks of intelligent agents. These demonstrations confirm that the proposed architecture enhances effective sampling efficiency through gaze-based resource allocation, with chip-scale reconfigurable parallel channels as its core enabling capability. This provides a system-level pathway toward scalable, compact, and low-power integrated coherent LiDAR solutions.
Real-time 4D-plus imaging results
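For context, the textbook triangular-FMCW relations already show how a single coherent channel yields both range and radial velocity from its up-chirp and down-chirp beat frequencies; the Python sketch below uses assumed chirp parameters and example beat tones, not the system's actual specifications.

    # Textbook triangular-FMCW retrieval of range and radial velocity from
    # the up-chirp and down-chirp beat frequencies of one coherent channel.
    # Chirp parameters and example tones are assumptions, not the paper's values.

    C = 3.0e8             # speed of light, m/s
    WAVELENGTH = 1.55e-6  # assumed optical carrier wavelength, m
    BANDWIDTH = 1.0e9     # assumed chirp excursion B, Hz
    T_CHIRP = 10e-6       # assumed single-slope chirp duration T, s

    def range_and_velocity(f_up_hz, f_down_hz):
        """Return (range in m, radial velocity in m/s) from the two beat
        tones; positive velocity denotes an approaching target."""
        f_range = 0.5 * (f_up_hz + f_down_hz)    # distance-induced component
        f_doppler = 0.5 * (f_down_hz - f_up_hz)  # Doppler component
        distance = C * T_CHIRP * f_range / (2.0 * BANDWIDTH)
        velocity = WAVELENGTH * f_doppler / 2.0
        return distance, velocity

    # Assumed example: beat tones at 5.00 MHz (up-chirp) and 7.58 MHz (down-chirp)
    d, v = range_and_velocity(5.00e6, 7.58e6)
    print(f"range ~ {d:.1f} m, radial velocity ~ {v:.2f} m/s")  # ~9.4 m, ~1.0 m/s

Reflectivity is typically estimated from the calibrated amplitude of the same beat tone, and colorization generally follows the standard practice of projecting each 3D point into the calibrated camera frame and sampling the corresponding pixel, although the exact processing chain used in the paper may differ.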
The demonstrated “micro-parallel” coherent LiDAR architecture offers a new paradigm for building integrated, deployable high-resolution sensing modules. By liberating high-resolution capability from rigid dependence on dense spatial channel stacking, the system can achieve on-demand resolution enhancement through reconfigurable resource scheduling under a limited channel budget—better aligning with the stringent constraints on size, power consumption, and scalability required by intelligent agents. Moreover, such modules feature reconfigurability and composability, and future cooperation among multiple modules and multimodal fusion may give rise to new forms of bionic machine vision. The integrated bionic LiDAR architecture presented in this work exhibits strong scalability and chip-level integration potential, providing critical technological support for next-generation autonomous driving, robotics, unmanned systems, and integrated air–space–ground sensing.
The co–first authors of this paper are Dr. Ruixuan Chen (Specially Appointed Associate Researcher, School of Electronics, Peking University), Yichen Wu (PhD candidate, School of Electronics, Peking University), Dr. Ke Zhang (City University of Hong Kong), Dr. Chuxin Liu (Shanghai Jiao Tong University), and Wencan Li (PhD candidate, School of Electronics, Peking University). The corresponding authors are Professor Xingjun Wang (Peking University), Professor Cheng Wang (City University of Hong Kong), Professor Linjie Zhou (Shanghai Jiao Tong University), and Researcher Haowen Shu (Peking University). Additional contributors include Yikun Chen (PhD student, City University of Hong Kong), Bitao Shen (Postdoctoral Fellow, Peking University), Dr. Chaoxi Chen (City University of Hong Kong), Assistant Professor Hank Fong (City University of Hong Kong), Associate Researcher Zhangfeng Ge and Associate Researcher Yan Zhou (Peking University Yangtze Delta Institute of Optoelectronics), Postdoctoral Fellows Zihan Tao and Xuguang Zhang (Peking University), Dr. Weihan Xu (Shanghai Jiao Tong University), PhD student Yimeng Wang (Peking University), as well as Dr. Pengfei Cai and Dr. Dong Pan from SiFotonics Technologies Co., Ltd. Professor Weiji He from Nanjing University of Science and Technology also provided valuable suggestions. This research was supported by the National Key R&D Program of China, the National Natural Science Foundation of China (Young Scientists Fund, Category B and C), the China Postdoctoral Innovative Talent Support Program, projects from the Hong Kong Research Grants Council, and the Croucher Foundation. The work was completed with the National Key Laboratory of Photonic Transmission and Communications, School of Electronics, Peking University, as the first completing institution.
Original article:
https://www.nature.com/articles/s41467-025-66529-7