PRICAI 2018 Special Track on Reinforcement Learning



CFP: PRICAI’18 Special Track on Reinforcement Learning

August 27-31, 2018, Nanjing, China



Scope and Background:

Reinforcement learning (RL) is an active field of research that deals with the problem of sequential decision-making by single or multiple agents in unknown and possibly partially observable domains, whose dynamics may be deterministic, stochastic, or adversarial. In recent years, interest in RL has grown in both the research community and industry, and recent developments in exploration-exploitation, online learning, planning, and representation learning are making RL increasingly appealing for real-world applications, with promising results in challenging domains such as recommendation systems, computer games, and robotic control.

This special track focuses on both the theoretical models and algorithms of RL and its practical applications in various domains. The ultimate goal is to bring together diverse viewpoints in the RL area in an attempt to consolidate common ground, identify new research directions, and promote the rapid advance of the RL research community.


Topics:

The special track will cover a range of sub-topics in RL, from theoretical aspects to empirical evaluations, including but not limited to:

  • Exploration/exploitation
  • Deep RL, function approximation in RL
  • Policy search methods
  • Batch RL
  • Kernel methods for RL
  • Evolutionary RL
  • Partially observable RL, POMDP, predictive state representations
  • Bayesian RL
  • Multi-agent RL
  • RL in non-stationary domains
  • Life-long RL
  • Non-standard criteria in RL, e.g., risk-sensitive RL, multi-objective RL, preference-based RL
  • Transfer learning in RL
  • Model-based RL, simulation-based RL, planning-based RL
  • Knowledge representation in RL
  • Hierarchical RL
  • Interactive RL
  • Planning under uncertainty
  • RL in psychology and neuroscience
  • Applications of RL, e.g., in recommender systems, robotics, video games, finance, autonomous driving, healthcare.

Submission Guidelines:

All submission and publication guidelines announced for the PRICAI 2018 conference (http://cse.seu.edu.cn/pricai18/) apply to this special track. All papers should be submitted electronically in PDF/DOC format using the conference management tool and formatted with the Springer LNAI template. Submissions must be anonymized for double-blind review, must not exceed 12 pages (excluding references), and must not have been published or be under consideration for publication elsewhere.


Paper Submission:

Papers submitted to the special track and the main conference use the same submission system. Please choose the “Reinforcement Learning” special track in the submission system (https://easychair.org/conferences/?conf=pricai2018); the option is under "Additional submission choices" on the submission page.


Publication:

All submitted papers will be peer-reviewed using the same criteria as PRICAI-18. Accepted papers will be included in the PRICAI-18 conference proceedings, which will be published by Springer as a volume of the LNAI series. Selected papers will be considered for publication in SCI-indexed journals, such as Frontiers of Computer Science.


Important Dates:

* Full Paper Submission: April 14, 2018 (11:59 PM GMT+8)

* Notification of Acceptance: May 31, 2018

* Camera Ready Submission: June 11, 2018

* Main Conference: August 27-31, 2018


Track Chairs:

Chao Yu, Dalian University of Technology, China

Jianye Hao, Tianjin University, China

Yang Yu, Nanjing University, China

Zongzhang Zhang, Soochow University, China


Track PC Members:

Daan Bloembergen, Centrum Wiskunde & Informatica, Netherlands

Siqi Chen, Southwest University, China

Yingke Chen, Sichuan University, China

Jen Jen Chung, ETH Zürich, Switzerland

Qiming Fu, Suzhou University of Science and Technology, China

Yang Gao, Nanjing University, China

Jianye Hao, Tianjin University, China

Jianmin Ji, University of Science and Technology of China, China

Yichuan Jiang, Southeast University, China

Guangliang Li, Ocean University of China, China

Wee Sun Lee, National University of Singapore, Singapore

Qiang Lv, Yangzhou University, China

Feng Wu, University of Science and Technology of China, China

Paul Weng, University of Michigan-Shanghai Jiaotong University, China

Yifeng Zeng, Teesside University, UK

Yingfeng Chen, Netease, China

Quan Liu, Soochow University, China

Xian Guo, Nankai University, China

Chao Yu, Dalian University of Technology, China

Yang Yu, Nanjing University, China

Zongzhang Zhang, Soochow University, China

Li Zhao, Microsoft Research Asia, China



Contact Persons:

Dr. Chao Yu

Dalian University of Technology

Email: cy496@dlut.edu.cn

Dr. Jianye Hao

Tianjin University

Email: jianye.hao@tju.edu.cn



