Simultaneous Localization and Mapping (SLAM) is a prerequisite for autonomous navigation of mobile robots and remains one of the difficult problems in mobile robot navigation research. As an indispensable component of robot intelligence, SLAM has attracted strong research interest from scholars worldwide. At present, different SLAM methods suit different sensors and computational requirements. Laser SLAM has become relatively mature, but it is limited by the detection range of the radar and can therefore lose map data points, while visual SLAM is sensitive to changes in illumination and its algorithms face several challenges, including large map size, perceptual aliasing, and high computational cost. In recent years, researchers have increasingly combined lidar and depth cameras for mapping to improve the accuracy, efficiency, and reliability of the mobile robot's mapping process. In this paper, multi-sensor fusion is performed based on lidar, an RGB-D camera, wheel encoders, an IMU, and other sensors; the corresponding data and algorithms are fused, and the associated theoretical and experimental studies are carried out. The main research contents are as follows: