Chair Professor, Department of Electronic and Electrical Engineering
Dr. ZHANG Hong is currently a Chair Professor in the Department of Electronic and Electrical Engineering at Southern University of Science and Technology (SUSTech), where he directs the Shenzhen Key Laboratory on Robotics and Computer Vision. His research interests include robotics, autonomous vehicles, computer vision, and image processing. Prior to joining SUSTech, he was a Professor in the Department of Computing Science at the University of Alberta, Canada, where he worked for over 30 years. He held an NSERC Industrial Research Chair from 2003 to 2017 and made significant contributions to robotics research. He served as Editor-in-Chief of the Conference Paper Review Board of IROS, a flagship conference of the IEEE Robotics and Automation Society (RAS), from 2020 to 2022, and is currently serving a three-year term (2023-2025) as a member of the Administrative Committee (AdCom) of the IEEE RAS. In recognition of his many contributions, Dr. Zhang was elected a Fellow of the IEEE and a Fellow of the Canadian Academy of Engineering.
Research
- Mobile robot navigation, visual SLAM, semantic mapping
- Embedded artificial intelligence, LLM/VLM-based robot navigation and manipulation
- Computer vision, visual object detection, object tracking, image segmentation
- Polarization imaging for HDR, 3D reconstruction
Teaching
EE 346 – Mobile Robot Navigation
EE 5346 – Autonomous Robot Navigation
Publications
(Since 2022 – see complete list on Google Scholar)
[1] S. Elkerdawy, M. Elhoushi, H. Zhang, and N. Ray, ‘Fire together wire together: A dynamic pruning approach with self-supervised mask prediction’, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12454–12463.
[2] X. Wang, H. Zhang, and G. Peng, ‘Evaluating and Optimizing Feature Combinations for Visual Loop Closure Detection’, Journal of Intelligent & Robotic Systems, vol. 104, no. 2, p. 31, 2022.
[3] I. Ali and H. Zhang, ‘Are we ready for robust and resilient slam? a framework for quantitative characterization of slam datasets’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2810–2816.
[4] M. Shakeri and H. Zhang, ‘Highlight specular reflection separation based on tensor low-rank and sparse decomposition using polarimetric cues’, arXiv preprint arXiv:2207.03543, 2022.
[5] I. Ali and H. Zhang, ‘Optimizing SLAM Evaluation Footprint Through Dynamic Range Coverage Analysis of Datasets’, in 2023 Seventh IEEE International Conference on Robotic Computing (IRC), 2023, pp. 127–134.
[6] S. An et al., ‘Deep tri-training for semi-supervised image segmentation’, IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10097–10104, 2022.
[7] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Robust UWB indoor localization for NLOS scenes via learning spatial-temporal features’, IEEE Sensors Journal, vol. 22, no. 8, pp. 7990–8000, 2022.
[8] G. Chen, L. He, Y. Guan, and H. Zhang, ‘Perspective phase angle model for polarimetric 3d reconstruction’, in European Conference on Computer Vision, 2022, pp. 398–414.
[9] H. Ye, J. Zhao, Y. Pan, W. Chen, and H. Zhang, ‘Following Closely: A Robust Monocular Person Following System for Mobile Robot’, arXiv preprint arXiv:2204.10540, 2022.
[10] R. Zhou, L. He, H. Zhang, X. Lin, and Y. Guan, ‘NDD: A 3D point cloud descriptor based on normal distribution for loop closure detection’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 1328–1335.
[11] H. Ye, J. Zhao, Y. Pan, W. Chen, L. He, and H. Zhang, ‘Robot person following under partial occlusion’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 7591–7597.
[12] Y. Pan, L. He, Y. Guan, and H. Zhang, ‘An Experimental Study of Keypoint Descriptor Fusion’, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022, pp. 699–704.
[13] B. Liu, Y. Fu, F. Lu, J. Cui, Y. Wu, and H. Zhang, ‘NPR: Nocturnal Place Recognition in Streets’, arXiv preprint arXiv:2304.00276, 2023.
[14] H. Ye, W. Chen, J. Yu, L. He, Y. Guan, and H. Zhang, ‘Condition-invariant and compact visual place description by convolutional autoencoder’, Robotica, vol. 41, no. 6, pp. 1718–1732, 2023.
[15] C. Tang, D. Huang, L. Meng, W. Liu, and H. Zhang, ‘Task-oriented grasp prediction with visual-language inputs’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 4881–4888.
[16] C. Tang, D. Huang, W. Ge, W. Liu, and H. Zhang, ‘GraspGPT: Leveraging semantic knowledge from a large language model for task-oriented grasping’, IEEE Robotics and Automation Letters, 2023.
[17] C. Tang, J. Yu, W. Chen, B. Xia, and H. Zhang, ‘Relationship oriented semantic scene understanding for daily manipulation tasks’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 9926–9933.
[18] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Self-supervised deep location and ranging error correction for UWB localization’, IEEE Sensors Journal, vol. 23, no. 9, pp. 9549–9559, 2023.
[19] J. Ruan, L. He, Y. Guan, and H. Zhang, ‘Combining scene coordinate regression and absolute pose regression for visual relocalization’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 11749–11755.
[20] X. Liu, S. Wen, and H. Zhang, ‘A real-time stereo visual-inertial SLAM system based on point-and-line features’, IEEE Transactions on Vehicular Technology, vol. 72, no. 5, pp. 5747–5758, 2023.
[21] K. Cai, W. Chen, C. Wang, H. Zhang, and M. Q.-H. Meng, ‘Curiosity-based robot navigation under uncertainty in crowded environments’, IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 800–807, 2022.
[22] X. Lin, J. Ruan, Y. Yang, L. He, Y. Guan, and H. Zhang, ‘Robust data association against detection deficiency for semantic SLAM’, IEEE Transactions on Automation Science and Engineering, vol. 21, no. 1, pp. 868–880, 2023.
[23] W. Chen et al., ‘Keyframe Selection with Information Occupancy Grid Model for Long-term Data Association’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2786–2793.
[24] W. Yang, Y. Zhuang, D. Luo, W. Wang, and H. Zhang, ‘VI-HSO: Hybrid Sparse Monocular Visual-Inertial Odometry’, IEEE Robotics and Automation Letters, 2023.
[25] L. He and H. Zhang, ‘Large-scale graph sinkhorn distance approximation for resource-constrained devices’, IEEE Transactions on Consumer Electronics, 2023.
[26] W. Chen et al., ‘Cloud Learning-based Meets Edge Model-based: Robots Don’t Need to Build All the Submaps by Itself’, IEEE Transactions on Vehicular Technology, 2023.
[27] W. Chen, C. Fu, and H. Zhang, ‘Rumination meets vslam: You don’t need to build all the submaps in realtime’, Authorea Preprints, 2023.
[28] Z. Tang, H. Ye, and H. Zhang, ‘Multi-Scale Point Octree Encoding Network for Point Cloud Based Place Recognition’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 9191–9197.
[29] L. He, W. Li, Y. Guan, and H. Zhang, ‘IGICP: Intensity and geometry enhanced LiDAR odometry’, IEEE Transactions on Intelligent Vehicles, 2023.
[30] L. He and H. Zhang, ‘Doubly stochastic distance clustering’, IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 11, pp. 6721–6732, 2023.
[31] S. Wen, P. Li, and H. Zhang, ‘Hybrid Cross-Transformer-KPConv for Point Cloud Segmentation’, IEEE Signal Processing Letters, 2023.
[32] J. Li et al., ‘Deep learning based defect detection algorithm for solar panels’, in 2023 WRC Symposium on Advanced Robotics and Automation (WRC SARA), 2023, pp. 438–443.
[33] B. Liu et al., ‘NocPlace: Nocturnal Visual Place Recognition Using Generative and Inherited Knowledge Transfer’, arXiv preprint arXiv:2402.17159, 2024.
[34] S. Wen, X. Liu, Z. Wang, H. Zhang, Z. Zhang, and W. Tian, ‘An improved multi-object classification algorithm for visual SLAM under dynamic environment’, Intelligent Service Robotics, vol. 15, no. 1, pp. 39–55, 2022.
[35] W. Ge, C. Tang, and H. Zhang, ‘Commonsense Scene Graph-based Target Localization for Object Search’, arXiv preprint arXiv:2404.00343, 2024.
[36] J. Zhao, H. Ye, Y. Zhan, and H. Zhang, ‘Human Orientation Estimation under Partial Observation’, arXiv preprint arXiv:2404.14139, 2024.
[37] H. Tao, B. Liu, J. Cui, and H. Zhang, ‘A convolutional-transformer network for crack segmentation with boundary awareness’, in 2023 IEEE International Conference on Image Processing (ICIP), 2023, pp. 86–90.
[38] J. Yin, Y. Zhuang, F. Yan, Y.-J. Liu, and H. Zhang, ‘A Tightly-Coupled and Keyframe-Based Visual-Inertial-Lidar Odometry System for UGVs With Adaptive Sensor Reliability Evaluation’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024.
[39] I. Ali, B. Wan, and H. Zhang, ‘Prediction of SLAM ATE using an ensemble learning regression model and 1-D global pooling of data characterization’, arXiv preprint arXiv:2303.00616, 2023.
[40] S. An et al., ‘An Open-Source Robotic Chinese Chess Player’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 6238–6245.
[41] X. Liu, S. Wen, J. Zhao, T. Z. Qiu, and H. Zhang, ‘Edge-Assisted Multi-Robot Visual-Inertial SLAM With Efficient Communication’, IEEE Transactions on Automation Science and Engineering, 2024.
[42] S. Ji et al., ‘A Point-to-distribution Degeneracy Detection Factor for LiDAR SLAM using Local Geometric Models’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 12283–12289.
[43] D. Huang, C. Tang, and H. Zhang, ‘Efficient Object Rearrangement via Multi-view Fusion’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 18193–18199.
[44] H. Ye, J. Zhao, Y. Zhan, W. Chen, L. He, and H. Zhang, ‘Person re-identification for robot person following with online continual learning’, IEEE Robotics and Automation Letters, 2024.
[45] W. Chen et al., ‘Cloud-edge Collaborative Submap-based VSLAM using Implicit Representation Transmission’, IEEE Transactions on Vehicular Technology, 2024.
[46] G. Zeng, B. Zeng, Q. Wei, H. Hu, and H. Zhang, ‘Visual Object Tracking with Mutual Affinity Aligned to Human Intuition’, IEEE Transactions on Multimedia, 2024.