My work is, in general, fundamental research on computational approaches to Learning and Optimization, two of the most important problems in Artificial Intelligence. I am also frequently drawn to related application domains, such as Smart Logistics, Structural and Multi-Disciplinary Optimization, and Computational Finance, where applied research is required to produce application-oriented learning and optimization techniques.

Most of my research can also be viewed as arising from Evolutionary Computation (EC), which is essentially a distributed heuristic search framework widely applicable to modelling, learning and optimization problems, especially hard problems for which little prior knowledge is available.

My topics for fundamental research, together with selected relevant papers, are listed below.

1. Scalable Evolutionary Search

Research in this direction aims to systematically boost the capacity of Evolutionary Computation on problems with huge search spaces, which are widely regarded as a major challenge for most Evolutionary Algorithms (EAs). Approaches for this purpose include:

  1. Co-evolutionary Search: Introducing the divide-and-conquer idea to guide EAs in adaptively searching different regions of the search space.

    1. W. Hong, K. Tang*, A. Zhou, H. Ishibuchi and X. Yao, “A Scalable Indicator-Based Evolutionary Algorithm for Large-Scale Multi-Objective Optimization,” IEEE Transactions on Evolutionary Computation, 23(3): 525-537, June 2019.
    2. P. Yang, K. Tang* and X. Yao, “Turning High-dimensional Optimization into Computationally Expensive Optimization,” IEEE Transactions on Evolutionary Computation, 22(1): 143-156, February 2018.
    3. K. Tang, J. Wang, X. Li and X. Yao, “A Scalable Approach to Capacitated Arc Routing Problems Based on Hierarchical Decomposition,” IEEE Transactions on Cybernetics, 47(11): 3928-3940, November 2017.
    4. K. Tang, P. Yang and X. Yao, “Negatively Correlated Search,” IEEE Journal on Selected Areas in Communications, 34(3): 1-9, March 2016.
    5. W. Chen, T. Weise, Z. Yang and K. Tang, “Large-Scale Global Optimization using Cooperative Coevolution with Variable Interaction Learning,” in Proceedings of the 11th International Conference on Parallel Problem Solving From Nature (PPSN), Kraków, Poland, September 11-15, 2010, pp. 300-309, Lecture Notes in Computer Science, Volume 6239, Part II, Springer-Verlag, Berlin, Germany.
    6. Z. Yang, K. Tang* and X. Yao, “Large Scale Evolutionary Optimization Using Cooperative Coevolution,” Information Sciences, 178(15): 2985-2999, August 2008.
  2. Parallel Algorithm Portfolios: Leveraging high-performance computing to enhance both the peak performance and the reliability of EAs, at the cost of additional computational resources rather than wall-clock runtime.

    1. K. Tang*, S. Liu, P. Yang and X. Yao, “Few-shots Parallel Algorithm Portfolio Construction via Co-evolution,” IEEE Transactions on Evolutionary Computation, in press (DOI: 10.1109/TEVC.2021.3059661).
    2. S. Liu, K. Tang and X. Yao, “Automatic Construction of Parallel Portfolios via Explicit Instance Grouping,” in Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI-2019), Honolulu, Hawaii, USA, January 27 - February 1, 2019, pp. 1560-1567.
    3. F. Peng, K. Tang*, G. Chen and X. Yao, “Population-based Algorithm Portfolios for Numerical Optimization,” IEEE Transactions on Evolutionary Computation, 14(5): 782-800, October 2010.
  3. Surrogate-assisted Search: Exploiting data generated during the search course to alleviate the cost of evaluating a future solution.

    1. X. Lu, T. Sun, and K. Tang, “Evolutionary Optimization with Hierarchical Surrogates,” Swarm and Evolutionary Computation, vol. 47, pp. 21-32, June 2019.
    2. X. Lu and K. Tang*, “Classification- and Regression-Assisted Differential Evolution for Computationally Expensive Problems,” Journal of Computer Science and Technology, 27(5): 1024-1034, September 2012.
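The divide-and-conquer idea behind co-evolutionary search can be sketched in a few lines. The decomposition below uses fixed variable groups and a crude random search as the subcomponent optimizer; both are simplified placeholders for illustration, not the adaptive schemes developed in the papers above.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def sphere(x):
    """Toy separable objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def cooperative_coevolution(f, dim, group_size, cycles=50, samples=20):
    """Minimal cooperative co-evolution: split the decision variables
    into fixed groups and optimize each group in turn while the rest
    of the context vector is held frozen."""
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range(0, dim, group_size)]
    context = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(cycles):
        for grp in groups:
            best = context[:]
            for _ in range(samples):  # crude random search on one subcomponent
                cand = context[:]
                for j in grp:
                    cand[j] = context[j] + random.gauss(0, 0.5)
                if f(cand) < f(best):
                    best = cand
            context = best  # plug the improved subcomponent back in
    return context, f(context)

solution, value = cooperative_coevolution(sphere, dim=20, group_size=5)
```

On a separable objective such as the sphere function the fixed grouping is exact; the papers above address the much harder cases of learning variable interactions and handling non-separable objectives.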

2. Reinforcement and Evolutionary Learning

Reinforcement Learning (RL) is a learning problem that lies squarely in the "backyard" of EAs, because the objective functions of most RL tasks so far rely on noisy and non-differentiable simulators. It is therefore interesting to investigate whether EC can offer a promising alternative approach to RL.

  1. P. Yang, Q. Yang, K. Tang* and X. Yao, “Parallel Exploration via Negatively Correlated Search,” Frontiers of Computer Science, in press (DOI: 10.1007/s11704-020-0431-0), 2020.
  2. L. Zhang, K. Tang and X. Yao, “Explicit Planning for Efficient Exploration in Reinforcement Learning,” in Advances in Neural Information Processing Systems (NIPS'19), Vancouver, Canada, December 8-14, 2019, pp. 7488-7497.
  3. L. Zhang, K. Tang and X. Yao, “Log-normality and Skewness of Estimated State/Action Values in Reinforcement Learning,” in Advances in Neural Information Processing Systems 30 (NIPS'17), Long Beach, CA, December 4-9, 2017, pp. 1802-1812.
  4. L. Zhang, K. Tang and X. Yao, “Increasingly Cautious Optimism for Practical PAC-MDP Exploration,” in Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15), Buenos Aires, Argentina, July 25-31, 2015, pp. 4033-4040.
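As a toy illustration of this idea (a plain (1+λ) evolution strategy on a hand-written one-dimensional simulator, not the negatively correlated search or planning methods of the papers above), policy parameters can be optimized using nothing but noisy episode returns:

```python
import random

random.seed(1)  # for reproducibility of this sketch

def episode_return(theta, noise=0.1):
    """Noisy, non-differentiable 'simulator': a 1-D agent starts at
    state 0 and acts for 20 steps with the linear policy
    a = theta[0] * s + theta[1], rewarded for staying near state 1."""
    s, total = 0.0, 0.0
    for _ in range(20):
        a = theta[0] * s + theta[1]
        s += max(-0.5, min(0.5, a)) + random.gauss(0, noise)
        total -= (s - 1.0) ** 2
    return total

def one_plus_lambda_es(fitness, dim=2, lam=10, iters=100, sigma=0.2):
    """(1+lambda) evolution strategy: treat the episode return as a
    black box and replace the parent only when an offspring scores
    higher on its (noisy) evaluation."""
    parent = [0.0] * dim
    parent_fit = fitness(parent)
    for _ in range(iters):
        for _ in range(lam):
            child = [p + random.gauss(0, sigma) for p in parent]
            child_fit = fitness(child)
            if child_fit > parent_fit:
                parent, parent_fit = child, child_fit
    return parent, parent_fit

policy, ret = one_plus_lambda_es(episode_return)
```

No gradient of the simulator is ever computed, which is precisely why EC is a natural candidate when the environment is a noisy black box.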

3. Learning and Optimization with Uncertainty

Uncertainty is ubiquitous in real-world learning and optimization tasks. It may stem from the dynamically changing physical world, from noise caused by imprecise measurements, or even from the unpredictable nature of human behavior. We are specifically interested in new learning/optimization methods that can handle various forms of uncertainty.

  1. Y. Sun, K. Tang*, Z. Zhu and X. Yao, “Concept Drift Adaptation by Exploiting Historical Knowledge,” IEEE Transactions on Neural Networks and Learning Systems, 29(10): 4822-4832, October 2018.
  2. J. Zhong, P. Yang, K. Tang*, “A Quality-Sensitive Method for Learning from Crowds,” IEEE Transactions on Knowledge and Data Engineering, 29(12): 2643-2654, December 2017.
  3. Y. Sun, K. Tang*, L. L. Minku, S. Wang and X. Yao, “Online Ensemble Learning of Data Streams with Gradually Evolved Classes,” IEEE Transactions on Knowledge and Data Engineering, 28(6): 1532-1545, June 2016.
  4. H. Fu, B. Sendhoff, K. Tang and X. Yao, “Robust Optimization Over Time: Problem Difficulties and Benchmark Problems,” IEEE Transactions on Evolutionary Computation, 19(5): 731-745, October 2015.
  5. X. Yu, K. Tang* and X. Yao, “Immigrant Schemes for Evolutionary Algorithms in Dynamic Environments: Adapting the Replacement Rate,” Science in China Series F: Information Sciences, 54(7): 1352-1364, July 2011.
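As a minimal, self-contained illustration of one such form of uncertainty (an abruptly drifting data stream), the sketch below pairs a trivial majority-class predictor with a crude drift detector that forgets the past when recent errors spike. This is a didactic stand-in for illustration only, not the ensemble methods developed in the papers above.

```python
from collections import deque

class DriftResetLearner:
    """Toy online learner: predicts the majority label seen so far and
    resets its memory when the recent error rate far exceeds the
    long-run error rate (a crude stand-in for a drift detector)."""

    def __init__(self, window=30, threshold=0.3):
        self.counts = {0: 0, 1: 0}
        self.recent = deque(maxlen=window)  # sliding window of errors
        self.errors = 0
        self.seen = 0
        self.threshold = threshold

    def predict(self):
        return 0 if self.counts[0] >= self.counts[1] else 1

    def update(self, label):
        err = int(self.predict() != label)
        self.recent.append(err)
        self.errors += err
        self.seen += 1
        self.counts[label] += 1
        recent_rate = sum(self.recent) / len(self.recent)
        overall_rate = self.errors / self.seen
        if (len(self.recent) == self.recent.maxlen
                and recent_rate > overall_rate + self.threshold):
            self.counts = {0: 0, 1: 0}  # forget the outdated concept
            self.recent.clear()

# Stream whose majority label flips at step 500 (abrupt concept drift).
stream = [0] * 500 + [1] * 500
learner = DriftResetLearner()
mistakes = 0
for y in stream:
    mistakes += int(learner.predict() != y)
    learner.update(y)
```

On this stream the learner recovers within a handful of steps after the drift, whereas without the reset the outdated counts would dominate its predictions for the remaining 500 steps.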

Copyright © 2018 All Rights Reserved.