March 8, 2018

CEC Tutorials

Room (capacity): Europa IV, Ground Floor (300); Europa II, Ground Floor (Poster Session); OCEANIA I, 2nd Floor (100); OCEANIA II, 2nd Floor (160); OCEANIA III, 2nd Floor (100); OCEANIA IV, 2nd Floor (100); OCEANIA V, 2nd Floor (120); OCEANIA VI, 2nd Floor (150); OCEANIA VII, 2nd Floor (110); OCEANIA VIII, 2nd Floor (100); OCEANIA IX, 2nd Floor (180); OCEANIA X, 2nd Floor (190)
08:00AM-10:00AM HYB_01 – part 1 CEC_01 CEC_06 CEC_10 CEC_15 FUZZ_01 Part 1 FUZZ_05 Part 1 IJCNN_01 Part 1 IJCNN_07 Part 1 IJCNN_12 Part 1
10:00AM-10:15AM COFFEE BREAK
10:15AM-12:15PM HYB_01 – part 2 CEC_02 CEC_07 CEC_11 CEC_16 FUZZ_01 Part 2 FUZZ_05 Part 2 IJCNN_01 Part 2 IJCNN_07 Part 2 IJCNN_12 Part 2
12:15PM-1:00PM LUNCH
1:00PM-3:00PM HYB_02 CEC_03 CEC_08 Part 1 CEC_12 CEC_17 Part 1 FUZZ_02 FUZZ_06 Part 1 IJCNN_03 IJCNN_08 IJCNN_13
3:00PM-3:15PM COFFEE BREAK
3:15PM-5:15PM HYB_03 CEC_04 CEC_08 Part 2 CEC_13 CEC_17 Part 2 FUZZ_03 FUZZ_06 Part 2 IJCNN_04 IJCNN_09 IJCNN_14 Part 1
5:15PM-7:15PM IJCNN_15 CEC_05 CEC_09 CEC_14 CEC_18 FUZZ_04 IJCNN_10 IJCNN_05 IJCNN_06 IJCNN_11 IJCNN_14 Part 2
7:30PM-9:30PM Welcome Reception @ Europa Room – Ground Floor

 

CEC_01 Co-evolutionary games
CEC_02 Differential Evolution with Ensembles, Adaptations and Topologies
CEC_03 Multi-concept Optimization
CEC_04 Evolutionary Bilevel Optimization
CEC_05 Evolutionary Computation for Dynamic Optimization Problems
CEC_06 Applying Stopping Criteria in Evolutionary Multi-Objective Optimization
CEC_07 Dynamic Multi-objective Optimization: Challenges, Applications and Future Directions
CEC_08 Evolutionary Many-Objective Optimization
CEC_09 Pareto Optimization for Subset Selection: Theory and Applications in Machine Learning
CEC_10 Evolutionary Large-Scale Global Optimization: An Introduction
CEC_11 Representation in Evolutionary Computation
CEC_12 Parallel and distributed evolutionary algorithms
CEC_13 Evolutionary Algorithms and Hyperheuristics
CEC_14 Parallelization of Evolutionary Algorithms: MapReduce and Spark
CEC_15 Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms
CEC_16 Constraint-Handling in Nature-Inspired Optimization
CEC_17 Machine Learning on Evolutionary Computation
CEC_18 Evolutionary Algorithms, Swarm Dynamics, and Complex Networks: Recent Advances and Progress

 

 

Title: Co-evolutionary games (CEC_01)

Organized by Hendrik Richter

Description: 

Studying evolutionary and co-evolutionary games can be seen as an attempt to address a long-standing and fundamental problem in Darwinian evolution: how can two seemingly contradictory observations be reconciled? Individuals experience selective pressure, which entails competition in order to succeed in survival and reproduction, yet at the same time there is widespread cooperative and even altruistic behavior between individuals (and also between groups of individuals or even species). In other words, how can selection favor fitter individuals over less fit ones while the same individuals regularly cooperate with and support each other, thus ostensibly leveling off differences in fitness?

Co-evolutionary games may offer answers to these questions, as they set up mathematical models for discussing whether, when and under which circumstances cooperation may be more advantageous than competition. Such games model individuals with behavioral choices as players who select and execute strategies (for instance, cooperation or competition). By linking the relative costs and benefits of strategies to payoff (and subsequently to fitness), we obtain a measure of how profitable a given choice is in evolutionary terms. The games become dynamic if they are played iteratively over several rounds and the players may update their strategies and/or their networks of interaction, which describe with whom a given player interacts in the game. In this context, it is common to call games whose players update only their strategies evolutionary games, while games whose players may additionally update their interaction networks are called co-evolutionary games. The tutorial will give an overview of concepts and research questions in co-evolutionary games, as well as address recent developments in this area.
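
To make the model concrete, the sketch below implements a minimal iterated game of the kind described above: players on a fixed ring network repeatedly play a prisoner's-dilemma-style game and update their strategies by imitating their best-scoring neighbor. The payoff values, the ring topology and the imitation rule are illustrative assumptions rather than the tutorial's material; a co-evolutionary variant would additionally rewire the interaction network.

```python
import random

# Illustrative payoff matrix for a prisoner's-dilemma-like game (values are assumptions).
# First entry: focal player's strategy, second: opponent's; 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play_round(strategies, neighbors):
    """Accumulate each player's payoff against all of its neighbors."""
    payoff = [0.0] * len(strategies)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            payoff[i] += PAYOFF[(strategies[i], strategies[j])]
    return payoff

def imitate_best_neighbor(strategies, neighbors, payoff):
    """Strategy update: each player copies the strategy of its best-scoring neighbor
    (including itself). A co-evolutionary game would also update 'neighbors' here."""
    new = []
    for i, nbrs in enumerate(neighbors):
        best = max([i] + list(nbrs), key=lambda k: payoff[k])
        new.append(strategies[best])
    return new

if __name__ == "__main__":
    random.seed(0)
    n = 20
    neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring interaction network
    strategies = [random.choice("CD") for _ in range(n)]
    for _ in range(30):                                          # iterated (dynamic) game
        payoff = play_round(strategies, neighbors)
        strategies = imitate_best_neighbor(strategies, neighbors, payoff)
    print("final fraction of cooperators:", strategies.count("C") / n)
```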

Intended Audience

The tutorial might be interesting for researchers and students looking for an entry point into co-evolutionary games. Scientists already working with co-evolutionary games may find it helpful to obtain an update on recent developments in the field and to get a fresh look at perspectives and potentials. Practitioners wanting to enhance their knowledge of co-evolutionary games and co-evolutionary game theory may also discover valuable material.

Short Biography

Hendrik Richter is a Professor at the Faculty of Electrical Engineering & Information Technology of HTWK Leipzig University of Applied Sciences. He has been involved in research in the field of evolutionary computation for almost 20 years, regularly attending conferences in the field and contributing papers. His main research topics are dynamic optimization, fitness landscapes, co-evolution and evolutionary game theory. A tutorial entitled “Recent advances in fitness landscapes” was presented at WCCI-CEC 2014. At WCCI-CEC 2016, he received the regular best paper award for a paper proposing a landscape approach to co-evolutionary games.

 

 

Title: Differential Evolution with Ensembles, Adaptations and Topologies (CEC_02)

Organized by Ponnuthurai Nagaratnam Suganthan

Description:

Differential evolution (DE) is one of the most successful numerical optimization paradigms; hence, practitioners and junior researchers will be interested in learning this optimization algorithm. Research on DE is also growing rapidly, so a tutorial on DE will be timely and beneficial to many CEC 2018 attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems with DE. DE is one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE operates through similar computational steps as employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in many variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE – a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also discuss neighborhood-topology-based DE and adaptive DE variants that improve the performance of DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose a challenge to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
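
As a concrete illustration of the difference-based offspring generation described above, here is a minimal sketch of the classic DE/rand/1/bin scheme. It is not the presenter's code, and the parameter values (F, CR, population size) are common textbook defaults chosen only for illustration.

```python
import numpy as np

def de_rand_1_bin(fitness, bounds, pop_size=30, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: trial vectors are built from scaled differences of
    distinct population members, so no separate probability distribution is needed to
    generate offspring."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])      # difference-based perturbation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                  # at least one component from the mutant
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = fitness(trial)
            if f_trial <= fit[i]:                            # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

if __name__ == "__main__":
    # Toy usage on a 10-dimensional sphere function.
    x, f = de_rand_1_bin(lambda x: float(np.sum(x ** 2)),
                         (np.full(10, -5.0), np.full(10, 5.0)))
    print(f)
```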

Intended Audience

This presentation will include basics as well as advanced topics of DE. Hence, both researchers commencing their research in DE and experienced researchers can attend. Practitioners will also benefit from the presentation.

Short Biography

Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK, in 1990, 1992 and 1994, respectively. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney, in 1995–96 and as a lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland, in 1996–99. He moved to NTU in 1999. He is an Editorial Board Member of the Evolutionary Computation Journal, MIT Press. He is an associate editor of the IEEE Trans. on Cybernetics (2012 – ), IEEE Trans. on Evolutionary Computation (2005 – ), Information Sciences (Elsevier) (2009 – ), Pattern Recognition (Elsevier) (2001 – ) and Int. J. of Swarm Intelligence Research (2009 – ) journals. He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010 – ), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the IEEE Trans. on Evolutionary Computation Outstanding Paper Award in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD Dissertation Award in 2014. His research interests include swarm and evolutionary algorithms, pattern recognition, big data, deep learning and applications of swarm, evolutionary and machine learning algorithms. His SCI-indexed publications attracted over 1000 SCI citations in each of the calendar years 2013, 2014, 2015, 2016 and 2017. He was selected as one of the highly cited researchers in computer science by Thomson Reuters in 2015, 2016 and 2017. He served as the General Chair of IEEE SSCI 2013. He has been a member of the IEEE (S’90, M’92, SM’00, F’15) since 1990 and was an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014–2016.

 

Title: Multi-concept Optimization (CEC_03)

Organized by Amiram Moshaiov

Description:

The main goal is to introduce Multi-Concept Optimization (MCO) to the community of EC researchers. This goal will be achieved by way of the following seven tasks: providing background on what a conceptual solution is (as evident from the conceptual design stage of the engineering design process); describing academic and real-life examples of conceptual solutions; defining what MCO is and how it differs from a traditional definition of an optimization problem (the definitions will include both single- and multi-objective MCO); explaining the significance of MCO as an optimization methodology that is useful for three main reasons (supporting the selection of conceptual solutions, providing an alternative approach to multi-modal optimization, and offering a unique approach to design-space exploration); describing evolutionary algorithms for MCO and their benchmarking; describing the application of MCO to a real-life problem (joint work with Israel Aerospace Industries); and providing an assessment of research needs concerning evolutionary computation for MCO. The proposed tutorial is timely for at least two main reasons. First, modern hardware makes MCO appealing for real-life applications, as evident from our joint work with the Israel Aerospace Industries. Second, the proposer and his research group have accumulated a large body of knowledge and achievements on MCO, which allows him to prepare an attractive tutorial.

Intended Audience

The potential participants include engineers and computer scientists that commonly attend the IEEE-CEC. The tutorial should be appealing to a wide audience as it deals with a generic approach to optimization. For example, all those who are interested in multi-modal optimization are potential attendees.

Short Biography

Amiram Moshaiov is a faculty member of the School of Mechanical Engineering and a member of the Sagol School of Neuroscience at Tel-Aviv University. During the 1980s he was a faculty member at MIT, USA. He is a member of the Editorial Board of the Journal of Memetic Computing and a reviewer for many other scientific journals. He has been a member of the Management Board of the European Network of Excellence in Robotics and is a member of the IEEE Working Group on Artificial Life and Complex Adaptive Systems. He was the originator and Co-Chair of the IEEE/RSJ IROS Workshop on Multi-Objective Robotics and on Multi-Competence Optimization and Adaptation in Robotics and A-life, and of the GECCO Workshop on the Evolution of Natural and Artificial Systems. He is also the originator and co-organizer of the IEEE-CEC 2017 Special Session on Evolutionary Computations and Games: From Theory to Applications. He is or has been a member of and associate editor for many international program committees of conferences such as: The IEEE Int. Conf. on Systems, Man, and Cybernetics, The IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, The IEEE Congress on Evolutionary Computation, The IEEE World Congress on Computational Intelligence, The IEEE Sym. on Artificial Life, The IEEE Sym. on Comp. Intelligence for Security and Defense Applications, The IEEE Sym. on Comp. Intelligence in Multicriteria Decision Making, The Int. Conference on Parallel Problem Solving from Nature, The International Conf. on Simulated Evolution And Learning, The European Robotic Symposium, The Int. IFAC Symposium on Robot Control, The Int. Symposium on Tools and Methods of Competitive Engineering, The Int. Conf. on Engineering Design, The Int. Conference on Mechatronics, The IEEE Int. Conference on Control Applications, and The IEEE Int. Conference on Computational Cybernetics.

His research interests are in methods such as: Computational Intelligence including Evolutionary Computation, Artificial Neural Networks, Fuzzy Logic and their hybridizations, Interactive Evolutionary Computation, Multi-criteria Decision Making, Multi-Objective Optimization and Adaptation, and Multi-objective Games.

He is interested in application areas such as: Engineering Design, Planning, Operations Research, Behavioral and Cognitive Robotics, Mechatronics, Control, Bio-Mechanics, Complex Adaptive Systems, Cybernetics and Artificial Life (Bio-Plausible Simulations), Computer Vision, Defense (air, land, sea, and cyber), Data Science and Big Data.

 

Title: Evolutionary Bilevel Optimization (CEC_04)

Organized by Ankur Sinha and Kalyanmoy Deb

Description:

Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower-level variable vector is optimal for the lower-level optimization problem. Consider, for example, an inverted pendulum problem, for which the motion of the platform relates to the upper-level optimization task of performing the balancing in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower-level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the “nestedness” of one optimization task within another. Evolutionary algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and their ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activities towards solving bilevel optimization problems. In this tutorial, we will introduce principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable single- and multi-objective evolutionary algorithms for bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
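
The nested structure described above can be made concrete with a small sketch: for every upper-level candidate, an inner search approximates the lower-level optimum, and the upper-level objective is evaluated only at that optimal response. The toy objective functions and the simple random-search solvers below are assumptions for illustration, not the algorithms presented in the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bilevel problem (an assumption for illustration): the lower level tracks the
# upper-level variable, and the upper level is evaluated only at the lower level's
# optimal response.
def f_lower(x_u, y):
    return (y - x_u) ** 2

def f_upper(x_u, y):
    return (x_u - 3.0) ** 2 + y ** 2

def lower_level_response(x_u, inner_iters=200):
    """Inner search: approximate argmin_y f_lower(x_u, y) with a crude (1+1)-style
    random search; a serious bilevel EA would use a much stronger lower-level solver."""
    y = rng.uniform(-5, 5)
    best = f_lower(x_u, y)
    for _ in range(inner_iters):
        cand = y + rng.normal(scale=0.3)
        val = f_lower(x_u, cand)
        if val < best:
            y, best = cand, val
    return y

def nested_bilevel_search(outer_iters=100):
    """Outer search over the upper-level variable; each candidate is evaluated only at
    its (approximately) optimal lower-level response, which is what makes it feasible."""
    x = rng.uniform(-5, 5)
    y = lower_level_response(x)
    best = f_upper(x, y)
    for _ in range(outer_iters):
        cand_x = x + rng.normal(scale=0.5)
        cand_y = lower_level_response(cand_x)
        val = f_upper(cand_x, cand_y)
        if val < best:
            x, y, best = cand_x, cand_y, val
    return x, y, best

print(nested_bilevel_search())  # for this toy problem the optimum is near x = 1.5, y = 1.5
```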

Target Audience

Bilevel optimization belongs to a difficult class of optimization problems. Most classical optimization methods are unable to solve even simple instances of bilevel problems. This offers a niche for researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications, and it certainly requires the attention of researchers working on evolutionary computation. The target audience for this tutorial will be researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts of bilevel optimization and the recent results easily accessible to the audience.

Short Biography

Ankur Sinha is an Assistant Professor at the Indian Institute of Management, Ahmedabad, India. He completed his PhD at the Helsinki School of Economics (now the Aalto University School of Business), where his PhD thesis was adjudged the best dissertation of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology (IIT) Kanpur. After completing his PhD, he has held visiting positions at Michigan State University and Aalto University. His research interests include Bilevel Optimization, Multi-Criteria Decision Making and Evolutionary Algorithms. He has offered tutorials on Evolutionary Bilevel Optimization at GECCO 2013, PPSN 2014, CEC 2015 and CEC 2017. His research has been published in some of the leading Computer Science, Business and Statistics journals. He regularly chairs sessions at evolutionary computation conferences. For detailed information about his research and teaching, please refer to his personal page: http://www.iima.ac.in/~asinha/.

Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 325 international journal and conference research papers. He has pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found at http://www.egr.msu.edu/people/profile/kdeb.

 

Title: Evolutionary Computation for Dynamic Optimization Problems (CEC_05)

Organized by Shengxiang Yang

Description:

Many real-world optimization problems are subject to dynamic environments, where changes may occur over time in the optimization objectives, decision variables, and/or constraint conditions. Such dynamic optimization problems (DOPs) are challenging due to their dynamic nature. Yet, they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools to address DOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DOPs has attracted a lot of research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well understood. This tutorial provides an introduction to the research area of EC for DOPs and carries out an in-depth description and classification of the state-of-the-art research in the field. The purpose is to (i) provide a detailed description and classification of DOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations of how they work for DOPs; (iii) present current applications in the area of EC for DOPs; (iv) analyze current gaps and challenges in EC for DOPs; and (v) point out future research directions in EC for DOPs.

Target Audience

This tutorial is at an intermediate level and the audience is expected to have some knowledge of EC. The tutorial is expected to attract people with different backgrounds, including academic researchers, PhD students, and practitioners, from the EC and operational research (OR) communities, since many real-world optimization problems are DOPs.

Short Biography

Shengxiang Yang got his PhD degree in Systems Engineering in 1999 from Northeastern University, China. He is now a Professor of Computational Intelligence (CI) and Director of the Centre for Computational Intelligence (http://www.cci.dmu.ac.uk/), De Montfort University (DMU), UK. He has worked extensively for 20 years in the areas of CI methods, including EC and artificial neural networks, and their applications for real-world problems. He has over 240 publications in these domains. His work has been supported by UK research councils (e.g., Engineering and Physical Sciences Research Council (EPSRC), Royal Society, and Royal Academy of Engineering), EU FP7 and Horizon 2020, Chinese Ministry of Education, and industry partners.

He serves as an Associate Editor or Editorial Board Member of seven international journals, including IEEE Transactions on Cybernetics, Evolutionary Computation, Information Sciences, and Soft Computing. He is the founding chair of the Task Force on Intelligent Network Systems (TF-INS) and the chair of the Task Force on EC in Dynamic and Uncertain Environments (ECiDUEs) of the IEEE CI Society (CIS). He has organised/chaired over 40 workshops and special sessions relevant to ECiDUEs for several major international conferences. He is the founding co-chair of the IEEE Symposium on CI in Dynamic and Uncertain Environments.

 

Title: Applying Stopping Criteria in Evolutionary Multi-Objective Optimization (CEC_06)

Organized by Luis Marti, and Nayat Sanchez-Pi

Description:

Most soft-computing, heuristic, non-deterministic or numerical methods have in common that they need a stopping criterion. This criterion, which is usually a heuristic itself, is responsible for minimizing the waste of computational resources by detecting scenarios where it makes no sense to continue executing the method. Hence, the success or failure of any practical application relies heavily not only on the techniques applied but also on the supporting methodologies, including the stopping criterion. Paradoxically, the matter of stopping criteria and convergence detection has often been overlooked by most of the evolutionary multi-objective optimization (EMO) community. This is probably because it plays a supporting role and, consequently, the theoretical and practical implications of this topic have not yet been properly studied and disseminated. However, it can be argued that many real-world applications of theoretically outstanding methods may have underperformed due to an incorrect algorithm termination scheme.

In this tutorial, we present an updated summary of the results obtained so far in this area and provide reusable examples of how these methods should be applied in real-life practice. Typically, a stopping criterion is invoked at the end of an iteration of the algorithm. At that point, it is decided whether algorithm execution should continue or can be aborted. We have identified four scenarios in which the execution of an algorithm should be terminated: the current solution is satisfactory; the method is able to output a feasible solution which, although not optimal, is unlikely to be bettered; the method is unable to converge to any solution; or the computation already performed is sufficient to reach a solution and further computation is unjustified.

This tutorial aims to provide the evolutionary computation community with a comprehensive reference that encompasses: a presentation of the theoretical context of the problem, with emphasis on the mathematical foundation and, in particular, the Karush-Kuhn-Tucker conditions; theoretical results regarding convergence in EMO and single-objective evolutionary algorithms; characteristics, challenges and requirements of EMO stopping criteria; an updated survey of the current state of the art in this area; and an interactive activity comparing different stopping criteria in order to show their strengths and weaknesses. By debating and presenting these topics we intend to draw the attention of the EMO community towards this issue, as well as to provide usable tools and methodologies to extend the proper use of these methods. Demonstration and familiarization activities are a fundamental part of the tutorial. In this regard, we will perform a series of “live” comparative experiments relying on the EMO stopping criteria implemented by the authors. For this purpose, various EMO algorithms will be applied to well-known benchmark problems. The populations of each run of the algorithms will be stored and each of the criteria will be presented with them. This will allow pointing out their features and drawbacks. The exercises that will be carried out interactively by the instructors and the interested public will be available on the web during the tutorial and will remain online for later use, reference and study. The necessary software for the tutorial is already available as a MATLAB/Octave EMO stopping criteria taxonomy that contains the “classical” as well as current state-of-the-art methods. This taxonomy is available online as open-source software at https://github.com/lmarti/emo-stoppingcriteria-taxonomy.
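
As a concrete illustration of the "invoke a criterion at the end of each iteration" pattern, here is a minimal stagnation-based stopping criterion: a scalar progress indicator (here a simple two-objective hypervolume) is tracked over a sliding window, and the run is stopped once its improvement falls below a threshold. This is a generic sketch with assumed window and threshold values, not one of the criteria from the authors' taxonomy.

```python
from collections import deque

class StagnationStopping:
    """Generic sliding-window stopping criterion: stop when the tracked indicator has
    improved by less than epsilon over the last `window` generations."""

    def __init__(self, window=20, epsilon=1e-4):
        self.history = deque(maxlen=window)
        self.epsilon = epsilon

    def should_stop(self, indicator_value):
        """Call once per generation with the latest indicator value."""
        self.history.append(indicator_value)
        if len(self.history) < self.history.maxlen:
            return False                          # not enough evidence yet
        improvement = max(self.history) - self.history[0]
        return improvement < self.epsilon

def hypervolume_2d(front, reference):
    """Hypervolume of a two-objective (minimization) non-dominated front with respect
    to a reference point; used here only as an example progress indicator."""
    pts = sorted(front)                           # ascending in f1, hence descending in f2
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        hv += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

In an EMO loop one would call should_stop(hypervolume_2d(current_front, reference_point)) once per generation and terminate the run when it returns True.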

Intended Audience

Practitioners of the multi-criterion decision making and evolutionary multi-objective communities, from both academia and industry. Elementary knowledge of evolutionary computation would be convenient although not required.

Short Biography

Luis Martí is currently an Adjunct Professor at the Institute of Computing of the Universidade Federal Fluminense in Rio de Janeiro, Brazil, and a Research Fellow in the Team on Learning and Optimization of the Institut National de Recherche en Informatique et en Automatique (INRIA) in Saclay, France. Before that, he was a CNPq Young Talent of Science Fellow at the Pontifical Catholic University of Rio de Janeiro. He studied Computer Science at the University of Havana (UH), Cuba, and got a Ph.D. in Computer Science and Technology from the Universidad Carlos III de Madrid (UC3M), Spain. He has served as Junior Professor at UH, as Assistant Professor at UC3M and as Postdoc Research Fellow at T.U. Dortmund (Germany). He has also been a visiting researcher at the University of Udine, Italy; the Center for Research and Advanced Technologies (CINVESTAV), Mexico; and T.U. Dortmund, Germany. His main research focus is on machine learning, evolutionary computation, multi-objective optimization and multi-criteria decision-making, estimation of distribution algorithms, and other related matters. He pioneered the research on EMO convergence detection and stopping criteria. He proposed the first EMO algorithm-independent approach, coauthored the only survey on the matter and is actively researching this topic.

Nayat Sanchez-Pi is an Adjunct Professor at the Institute of Mathematics and Computer Science of the Universidade do Estado do Rio de Janeiro. She earned a Ph.D. in Computer Science from the Universidad Carlos III de Madrid (Spain) in 2011, where she obtained a qualification of cum laude with European mention and the Best Ph.D. Award of University Carlos III of Madrid. She also holds an M.Sc. in Computer Science and Technology from the Universidad Carlos III de Madrid (2007) and a B.Sc. in Computer Science from the Universidad de La Habana, Cuba (2000). She has been a senior researcher at the Institute of Logic, Philosophy and Theory of Science (ILTC), working full time at the Active Documentation and Intelligent Design Laboratory (ADDLabs) of the Institute of Computing of the Fluminense Federal University. Before that, she was a Lecturer and Researcher in the Group of Applied Artificial Intelligence (GIAA) at the Universidad Carlos III de Madrid. She has been a Visiting Researcher at University College Dublin, Ireland, and the University of Lisbon, Portugal. She has also been a Post-doc Researcher at the Universidade Federal Fluminense in Rio de Janeiro, Brazil. Her main interests are artificial intelligence, machine learning, nature-inspired computing, ambient intelligence, ubiquitous computing, sensors and data fusion, and multi-agent systems.

 

Title: Dynamic Multi-objective Optimization: Challenges, Applications and Future Directions (CEC_07)

Organized by Marde Helbig, Kalyanmoy Deb

Description:

Most optimization problems in real-life have more than one objective, with at least two objectives in conflict with one another and at least one objective that changes over time. These kinds of optimization problems are referred to as dynamic multi-objective optimization (DMOO) problems. Instead of re-starting the optimization process after a change in the environment has occurred, previous knowledge is used and if the changes are small enough, this may lead to new solutions being found much quicker. Most research in multi-objective optimization has been conducted on static problems and most research on dynamic problems has been conducted on single-objective optimization. The goal of a DMOO algorithm (DMOA) is to find an optimal set of solutions that is as close as possible to the true set of solutions (similar to static MOO) and that contains a diverse set of solutions. However, in addition to these goals a DMOA also has to track the changing set of optimal solutions over time. Therefore, the DMOA also has to deal with the problems of a lack of diversity and outdated memory (similar to dynamic single-objective optimization).

This tutorial will introduce the participants to the field of DMOO by discussing: benchmark functions and performance measures that have been proposed and the issues related to each of these; algorithms that have been proposed to solve DMOO problems; issues with the comparison of DMOAs’ performance and ensuring a fair comparison; analyzing the performance of DMOAs and why traditional approaches used for static MOO are not necessarily adequate; challenges in the DMOO field that are not yet addressed, such as incorporating a decision maker’s preference in DMOO and visualizing the behavior of DMOAs; real-world applications; and emerging research fields that provide interesting research opportunities. The following will be demonstrated during the tutorial: why certain problems exist with current performance measures; a tool to incorporate a decision maker’s preference into the search process when solving dynamic multi-objective optimization problems; and a tool to visualize the behavior of dynamic multi-objective optimization algorithms during the search process.

Intended Audience 

Researchers, PhD students and practitioners on optimization and Evolutionary Computation.

Short Biography 

Mardé Helbig is a Senior Lecturer at the University of Pretoria, South Africa. She obtained her PhD in 2012 at the University of Pretoria, with a thesis entitled “Solving dynamic optimisation problems using the vector evaluated particle swarm optimisation algorithm”. She has been the main organizer of special sessions on DMOO at CEC 2014, CEC 2015, SSCI 2015, CEC 2016, SSCI 2016, CEC 2017 and SSCI 2017, and of a competition on DMOO at CEC 2015. She also presented a tutorial on DMOO at SSCI 2015 and the International Conference on Swarm Intelligence (ICSI) 2016. She was invited as a keynote speaker on DMOO at the International Conference on Mechanical and Intelligent Manufacturing Technologies (ICMIMT) 2017 and the International Conference on Soft Computing and Machine Learning (ISCMI) 2017. She has also been invited as a keynote speaker at ICMIMT 2018 and to present the first Memorial Lecture of Prof Zadeh at ISCMI 2017. She has numerous publications on DMOO and is a regular reviewer for the top conferences and journals in the field. In addition, she is the vice chair of the IEEE Task Force on Evolutionary Multi-objective Optimization, the chair of the IEEE Computational Intelligence Society (CIS) Chapter in South Africa, a sub-committee member of the IEEE CIS Women in Computational Intelligence, a sub-committee member of the IEEE CIS Young Professionals and a member of the IEEE CIS Emerging Technologies Technical Committee (ETTC). In 2017 she was also selected as a member of the South African Young Academy of Science (SAYAS).

Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in the areas of computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 325 international journal and conference research papers. He has pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals. More information about his research can be found at http://www.egr.msu.edu/people/profile/kdeb.

 

Title: Evolutionary Many-Objective Optimization (CEC_08)

Organized by Hisao Ishibuchi, Hiroyuki Sato

Description:

The goal of the proposed tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there is no significant difference between three-objective and four-objective problems. However, the increase in the number of objectives makes a multi-objective problem significantly more difficult. In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed for approximating the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms. After the clear explanations about the difficulties of many-objective optimization, we explain in the second part (Part II: Approaches and Future Directions) how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from weakening its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We also explain some state-of-the-art many-objective algorithms. The attendees of the tutorial will learn some representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, the attendees will also learn that there still exist a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
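
The weakening of Pareto-dominance-based selection pressure mentioned above can be observed with a tiny experiment: among uniformly random objective vectors, the fraction that is mutually non-dominated grows quickly with the number of objectives, leaving dominance-based ranking with little to discriminate. The sketch below (sample sizes are arbitrary assumptions) illustrates this.

```python
import numpy as np

def nondominated_fraction(n_points=200, n_objectives=5, seed=0):
    """Fraction of mutually non-dominated points among random objective vectors
    (minimization); a toy illustration, not a benchmark from the tutorial."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_points, n_objectives))
    nondominated = 0
    for i in range(n_points):
        # point i is dominated if some other point is <= in all objectives and < in at least one
        others = np.delete(pts, i, axis=0)
        dominated = np.any(np.all(others <= pts[i], axis=1) &
                           np.any(others < pts[i], axis=1))
        nondominated += not dominated
    return nondominated / n_points

for m in (2, 4, 6, 8, 10):
    print(m, round(nondominated_fraction(n_objectives=m), 2))
```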

Intended Audience

Researchers, PhD students and practitioners on optimization and Evolutionary Computation.

Short Biography

Hisao Ishibuchi and Hiroyuki Sato have published 50 papers on evolutionary many-objective optimization in total (34 + 16). They have published not only a large number of many-objective papers but also high-quality and popular papers. For example, they received GECCO Best Paper Awards in the EMO track in 2011 (Sato), 2014 (Sato) and 2017 (Ishibuchi). Dr. Sato’s EMO 2007 conference paper on many-objective optimization has been cited 226 times. Dr. Ishibuchi’s TEVC paper on many-objective optimization in 2015 has been cited 105 times. The main justification for the two tutorial speakers is their excellent publication record on evolutionary many-objective optimization. Another justification is the broad view of Dr. Ishibuchi, based on his many technical activities in the IEEE Computational Intelligence Society, such as CEC 2010 Program Chair, IEEE CIS Vice-President for Technical Activities (2010-2013), Editor-in-Chief of IEEE Computational Intelligence Magazine (2014-2019), IEEE CIS AdCom Member (2014-2019) and IEEE CIS Distinguished Lecturer (2014-2017).

 

Title: Pareto Optimization for Subset Selection: Theory and Applications in Machine Learning (CEC_09)

Organized by Yang Yu, and Chao Qian

Description:

Pareto optimization is a special kind of evolutionary optimization method. It optimizes a given objective function by transforming the problem into a bi-objective optimization problem, and it has been shown to be a promising method for the subset selection problem. The theoretical understanding of Pareto optimization has recently been significantly developed, showing its irreplaceability for this problem. It has also been applied successfully in machine learning tasks. This tutorial will introduce Pareto optimization from scratch. We will cover the history of Pareto optimization and its theoretical results, and we will show that it achieves the best-so-far theoretical and practical performance in some machine learning tasks. We assume that the audience has basic knowledge of probability and linear algebra.
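
A minimal sketch of the bi-objective transformation described above: the original objective (to be maximized) and the subset size (to be minimized) are treated as two objectives, an archive of non-dominated subsets is evolved by bit-flip mutation, and the best subset within the size budget is returned at the end. This is a simplified illustration in the spirit of Pareto optimization for subset selection, not the presenters' implementation; the toy objective and parameter values are assumptions.

```python
import random

def pareto_subset_selection(objective, n, k, iterations=5000, seed=0):
    """Evolve an archive of non-dominated subsets under the two objectives
    (maximize objective value, minimize subset size) and return the best
    subset whose size does not exceed k."""
    rng = random.Random(seed)

    def dominates(a, b):
        """a dominates b if it is no worse in both objectives and strictly better in one."""
        (fa, sa), (fb, sb) = a, b
        return (fa >= fb and sa <= sb) and (fa > fb or sa < sb)

    empty = tuple([0] * n)
    archive = {empty: (objective(empty), 0)}
    for _ in range(iterations):
        parent = rng.choice(list(archive))
        child = tuple(bit ^ (rng.random() < 1.0 / n) for bit in parent)  # flip each bit w.p. 1/n
        child_val = (objective(child), sum(child))
        if any(dominates(v, child_val) for v in archive.values()):
            continue                                  # child is dominated, discard it
        archive = {s: v for s, v in archive.items() if not dominates(child_val, v)}
        archive[child] = child_val
    feasible = [(v[0], s) for s, v in archive.items() if v[1] <= k]
    return max(feasible)[1] if feasible else empty

# Toy usage (assumed objective): pick at most k = 3 of n = 8 items with given values.
values = [4, 7, 1, 9, 3, 5, 2, 8]
best = pareto_subset_selection(lambda s: sum(v for v, b in zip(values, s) if b), 8, 3)
print(best)
```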

Target Audience

The potential audience includes those who are curious about theoretically grounded evolutionary algorithms, and those who are interested in applying evolutionary algorithms to achieve state-of-the-art performance in machine learning.

Short Biography

Yang Yu is an associate professor in Department of Computer Science and Technology, Nanjing University, China. He received the BSc and PhD degrees in computer science from Nanjing University, China, in 2004 and 2011, respectively. He joined the Department of Computer Science & Technology at Nanjing University in 2011. His research interests are mainly in artificial intelligence, machine learning and evolutionary computation, particularly, the foundation of evolutionary algorithms and its application in machine learning. He has published more than 40 research papers in top-tier journals (e.g., Artificial Intelligence, JAIR, IEEE TEC, ECJ) and conferences (e.g., IJCAI, AAAI, KDD, NIPS). He has won several awards/honors including the National Outstanding Doctoral Dissertation Award, China Computer Federation Outstanding Doctoral Dissertation Award, PAKDD’08 Best Paper Award, GECCO’11 Best Paper (Theory Track), IDEAL 2016 Best Paper Award, etc. He is a Youth Associate Editor of the Frontiers of Computer Science, an Area Chair of IJCAI’18, a Senior Program Committee member of IJCAI’15/17, a member of IEEE CIS Technical Committee on Data Mining and Big Data Analytics, etc.

Chao Qian is an associate researcher in the School of Computer Science and Technology, University of Science and Technology of China. He received the BSc and PhD degrees in computer science from Nanjing University, China, in 2009 and 2015, respectively, and then joined the School of Computer Science and Technology at the University of Science and Technology of China as an associate researcher. His research interests are mainly in artificial intelligence, evolutionary computation and machine learning, particularly the foundation of evolutionary algorithms and its application in machine learning. He has published 20 first-authored papers in leading international journals and conference proceedings, including Artificial Intelligence, Evolutionary Computation, IEEE Transactions on Evolutionary Computation, NIPS, IJCAI, AAAI, etc. He has won the ACM GECCO 2011 Best Paper Award (Theory Track), the IDEAL 2016 Best Paper Award and the 2017 Outstanding Doctoral Dissertation Award of the China Association of Artificial Intelligence, and was part of the team that won the PAKDD 2012 Data Mining Competition (Open Category) Grand Prize. He has also been selected for the Young Elite Scientists Sponsorship Program of the China Association for Science and Technology.

 

Title: Evolutionary Large-Scale Global Optimization: An Introduction (CEC_10)

Organized by Mohammad Nabi Omidvar, and Xiaodong Li

Description:

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems. Recent advances in machine learning have likewise brought very large-scale optimization problems, encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped to deal with such problems. It is this research gap, in both theory and practice, that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and meta-heuristic optimization algorithms being developed to overcome this scalability issue. Among these, meta-heuristics have gained popularity due to their ability to deal with black-box optimization problems. In this tutorial, we provide an overview of recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods for dealing with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms, including the differential grouping algorithm and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms, such as the contribution-based cooperative co-evolution family of algorithms. Overall, this tutorial takes the form of a critical survey of the existing methods, with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
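
To make the divide-and-conquer idea concrete, the sketch below shows a bare-bones cooperative co-evolution loop: the decision variables are split into groups, and each group is optimized in round-robin fashion against a shared context vector. The grouping is supplied by hand here; in the algorithms covered by the tutorial (e.g. differential grouping) it would be detected automatically, and the per-group optimizer would be far stronger than this toy random mutation.

```python
import numpy as np

def cooperative_coevolution(fitness, dim, groups, cycles=50, pop_size=20, seed=0):
    """Round-robin cooperative co-evolution sketch: optimize one variable group at a
    time while the remaining variables stay fixed to the best-known context vector."""
    rng = np.random.default_rng(seed)
    context = rng.uniform(-5, 5, dim)               # best-known full solution
    best_f = fitness(context)
    for _ in range(cycles):
        for group in groups:                        # e.g. [[0, 1], [2, 3], ...]
            # A tiny random-mutation search restricted to this group's variables.
            for _ in range(pop_size):
                cand = context.copy()
                cand[group] += rng.normal(scale=0.2, size=len(group))
                f = fitness(cand)
                if f < best_f:                      # a subcomponent improvement updates the context
                    context, best_f = cand, f
    return context, best_f

# Toy usage on a separable sphere function with an assumed grouping of variable pairs.
dim = 10
groups = [[i, i + 1] for i in range(0, dim, 2)]
x, f = cooperative_coevolution(lambda v: float(np.sum(v ** 2)), dim, groups)
print(f)
```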

Target Audience

This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state-of-the-art in large-scale global optimization. The tutorial is specifically targeted for Ph.D. students, and early career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges in the field to bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community.

Short Biography

Mohammad Nabi Omidvar is a research fellow in evolutionary computation and a member of the Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA) at the School of Computer Science, University of Birmingham. Prior to joining the University of Birmingham, Dr. Omidvar completed his Ph.D. in computer science with the Evolutionary Computing and Machine Learning (ECML) group at RMIT University in Melbourne, Australia. Dr. Omidvar holds a bachelor's degree in applied mathematics and a bachelor's degree in computer science with first class honors from RMIT University. Dr. Omidvar won the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work on large-scale global optimization. He received an Australian Postgraduate Award in 2010 and the best Computer Science Honours Thesis award from the School of Computer Science and IT, RMIT University. Dr. Omidvar has been a member of the IEEE Computational Intelligence Society since 2009 and is a member of the IEEE Taskforce on Large-Scale Global Optimization. His current research interests are large-scale global optimization, decomposition methods for optimization, and multi-objective optimization.

Xiaodong Li received his B.Sc. degree from Xidian University, Xi’an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. Currently, he is an Associate Professor at the School of Computer Science and Information Technology, RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, complex systems, multi-objective optimization, and swarm intelligence. He serves as an Associate Editor of IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL’08, a Program Co-Chair of AI’09, and a Program Co-Chair of IEEE CEC 2012. He is the recipient of the 2013 ACM SIGEVO Impact Award.

 

Title: Representation in Evolutionary Computation (CEC_11)

Organized by Daniel Ashlock

Description:

This tutorial has the goal of laying out the principles of representation for evolutionary computation and providing illustrative examples. The tutorial will consist of two parts. The first part will establish the importance of representation in evolutionary computation, something well known to senior researchers but potentially valuable to junior researchers and students. This portion of the tutorial will give concrete examples of how the design of the representation used in evolutionary computation influences the character of the fitness landscape. Impact on time to solution, the character of the solutions located, and design principles for representations will all be covered. This tutorial has been offered several times before, but research in representation is quite active, and so a number of beautiful new examples of representations will be presented in the second half of the tutorial. These will be chosen to emphasize the design principles expounded in the first half of the presentation. They will include applications in games, bioinformatics, optimization, and evolved art. The fitness landscape abstraction will serve as a unifying theme throughout the presentation. New topics will include self-adaptive representations, parameterized manifolds of representations, state-conditioned representations, and representations that arise from abstract algebra. No prior knowledge of abstract algebra will be assumed.
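
A small, classic illustration of how the choice of representation shapes the fitness landscape: under a plain binary encoding of integers, neighboring values such as 7 and 8 differ in four bits, so a bit-flip mutation sees a rugged landscape, whereas under a Gray encoding every pair of adjacent integers differs in exactly one bit. The snippet below is a generic example, not taken from the tutorial.

```python
def gray(i):
    """Standard reflected binary Gray code."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of differing bits between two non-negative integers."""
    return bin(a ^ b).count("1")

for i in (7, 8):
    print(i, "binary:", format(i, "04b"), "gray:", format(gray(i), "04b"))
print("binary distance 7 -> 8:", hamming(7, 8))              # 4 bit flips
print("gray   distance 7 -> 8:", hamming(gray(7), gray(8)))  # 1 bit flip
```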

Target Audience

This tutorial is potentially valuable to anyone who needs evolutionary computation beyond simple off-the-shelf technology, as well as to anyone doing research in evolutionary computation.

Short Biography

Daniel Ashlock has 270 peer-reviewed scientific publications, many of which deal with issues of representation. He has presented tutorials and invited plenary lectures on representation at WCCI, CEC, CIG, and CIBCB as well as at numerous universities. Dr. Ashlock serves as an associate editor for the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Games, the IEEE/ACM Transactions on Computational Biology and Bioinformatics, Game and Puzzle Design, and Biosystems. He is a member of the IEEE CIS Technical Committee on Bioinformatics and Biomedical Engineering and of the Technical Committee on Games. He is currently vice chair of the games technical committee and nominated to be its next president. Dr. Ashlock has written books on evolutionary computation and on the representation of game-playing agents. Dr. Ashlock has served as the general chair of three IEEE conferences and served on the program committees of more than twenty conferences.

 

Title: Parallel and distributed evolutionary algorithms (CEC_12)

Organized by El-Ghazali Talbi

Description:

On one hand, optimization and machine learning problems are becoming more and more complex, and their resource requirements are ever increasing. Real-life optimization problems are often NP-hard and CPU- and/or memory-consuming. Although the use of meta-heuristics and evolutionary algorithms makes it possible to significantly reduce the computational complexity of the search process, the latter remains time-consuming for many problems in diverse domains of application, where the objective function and the constraints associated with the problem are resource-intensive (e.g. CPU, memory) and the size of the search space is huge. Moreover, more and more complex and resource-intensive meta-heuristics are being developed (e.g. hybrid meta-heuristics, multi-objective meta-heuristics, meta-heuristics under uncertainty, meta-heuristics for bi-level optimization, surrogate-assisted evolutionary algorithms).

On the other hand, the rapid development of technology in processors (e.g. multi-core processors, dedicated architectures), networks (e.g. local area networks (LAN) such as Myrinet and InfiniBand, and wide area networks (WAN) such as optical networks), and data storage has made the use of parallel computing more and more popular. Such architectures represent an effective strategy for the design and implementation of parallel meta-heuristics and evolutionary algorithms. Indeed, sequential architectures are reaching physical limitations (speed of light, thermodynamics). Nowadays, even laptops and workstations are equipped with multi-core processors, which represent one class of parallel architecture. Moreover, the ratio of cost to performance is constantly decreasing. The proliferation of powerful workstations and fast communication networks has led to the emergence of GPUs, multi-core architectures, clusters of workstations (COWs), networks of workstations (NOWs), and large-scale networks of machines (grids) as platforms for high-performance computing.

Parallel and distributed computing can be used in the design and implementation of meta-heuristics and evolutionary algorithms for the following reasons:

- Speed up the search: one of the main goals in parallelizing a meta-heuristic is to reduce the search time. This helps in designing real-time and interactive optimization methods. It is a very important aspect for classes of problems with hard requirements on the search time, such as dynamic optimization problems and time-critical control problems such as “real-time” planning.

- Improve the quality of the obtained solutions: some parallel models for meta-heuristics and evolutionary algorithms allow improving the quality of the search. Indeed, exchanging information between cooperative meta-heuristics alters their behavior in terms of searching the landscape associated with the problem. The main goal of a parallel cooperation between evolutionary algorithms is to improve the quality of solutions; both better convergence and reduced search time may result. Note that a parallel model for meta-heuristics may be more effective than a sequential meta-heuristic even on a single processor.

- Improve the robustness: a parallel evolutionary algorithm may be more robust in terms of solving different optimization problems and different instances of a given problem in an effective manner. Robustness may also be measured in terms of the sensitivity of the algorithm to its parameters.

- Solve large-scale problems: parallel meta-heuristics make it possible to solve large-scale instances of complex optimization problems. A challenge here is to solve very large instances that cannot be solved on a sequential machine. Another, similar challenge is to solve more accurate mathematical models of different optimization problems; improving the accuracy of the mathematical models generally increases the size of the associated problems to be solved. Moreover, some optimization problems require the manipulation of huge databases, such as data mining and machine learning problems.

The main goal of this tutorial is to provide a unified view of parallel meta-heuristics and evolutionary algorithms. It presents the main design questions and search components for all families of meta-heuristics. Not only the design aspect of meta-heuristics is presented, but also their implementation using a software framework. This will encourage the reuse of both the design and the code of existing search components, with a high level of transparency regarding the target applications and parallel architectures.
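
As a minimal illustration of one common parallel model touched on above, the sketch below parallelizes only the fitness evaluations of a single generation with a master-worker (pool) scheme; the evolutionary loop itself stays sequential on the master. The objective function and worker count are assumptions for illustration; the software frameworks discussed in the tutorial provide far richer parallel models (island, cellular, and so on).

```python
from multiprocessing import Pool
import random

def expensive_fitness(x):
    """Stand-in for a costly objective function (an assumption for illustration)."""
    return sum(xi * xi for xi in x)

def evaluate_population(population, workers=4):
    """Master-worker parallel model: fitness evaluations, usually the dominant cost,
    are farmed out to a pool of worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(expensive_fitness, population)

if __name__ == "__main__":
    rng = random.Random(0)
    population = [[rng.uniform(-5, 5) for _ in range(30)] for _ in range(64)]
    fitnesses = evaluate_population(population)
    print(min(fitnesses))
```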

Target Audience

Using many case studies and treating design and implementation independently, this tutorial gives attendees the skills necessary to solve large-scale optimization and machine learning problems in parallel, quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.

Short Biography

El-Ghazali Talbi received the Ph.D. degree in Computer Science from the Institut National Polytechnique de Grenoble in France. Since 2001, he has been a full Professor at the University of Lille. He is the founder and head of the INRIA Dolphin project dealing with parallel multi-objective optimization. He has many collaborative national, European and international projects. His current research interests are in the fields of multi-objective optimization, evolutionary algorithms, parallel algorithms, meta-heuristics, combinatorial optimization, cloud computing, hybrid and cooperative optimization, and applications to logistics/transportation, energy and networks. Professor Talbi has to his credit more than 150 publications in journals, books and conferences. He was a guest editor of more than 15 special issues in different journals (Journal of Heuristics, Journal of Parallel and Distributed Computing, European Journal of Operational Research, Theoretical Computer Science, Computers and Operations Research, Journal of Global Optimization). He is the co-founder and coordinator of the research group dedicated to Meta-heuristics: Theory and Applications (META). He has served in different capacities on the program committees of more than 100 national and international conferences. His work on meta-heuristics (e.g. his book entitled “Metaheuristics: From Design to Implementation”) has a large impact and visibility in the field of optimization. His h-index is 47 and his work has received more than 12,004 citations (Google Scholar).

 

Title: Evolutionary Algorithms and Hyperheuristics (CEC_13)

Organized by Nelishia Pillay

Description:

Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flow-shop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area: the use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques. The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
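
As a rough illustration of the selection perturbative category described above, the sketch below (in Python, purely for brevity here; the tutorial itself demonstrates the Java-based EvoHyp library) uses a simple genetic algorithm to evolve chromosomes that are sequences of low-level heuristic indices; a chromosome's fitness is the quality reached by applying its heuristic sequence to a fixed starting solution. The toy objective and the three low-level heuristics are invented for the example.

```python
# Toy selection perturbative hyper-heuristic: a GA searches over sequences of
# low-level perturbative heuristics rather than over solutions directly.
import random

def shift(x):
    return [v + random.gauss(0, 0.5) for v in x]        # small random move in every dimension

def flip_sign(x):
    y = x[:]
    i = random.randrange(len(y))
    y[i] = -y[i]                                        # flip the sign of one coordinate
    return y

def shrink(x):
    return [0.9 * v for v in x]                         # pull the point toward the origin

LOW_LEVEL = [shift, flip_sign, shrink]                  # pool of low-level heuristics
START = [random.uniform(-5, 5) for _ in range(10)]      # fixed starting solution

def objective(x):
    return sum(v * v for v in x)                        # toy objective (sphere), minimized

def evaluate(chromosome):
    """Fitness of a chromosome = quality reached by applying its heuristic sequence."""
    x = START[:]
    for h in chromosome:
        x = LOW_LEVEL[h](x)
    return objective(x)

def ga_hyper_heuristic(pop_size=30, length=15, generations=200):
    pop = [[random.randrange(len(LOW_LEVEL)) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        p1 = min(random.sample(pop, 2), key=evaluate)                        # tournament selection
        p2 = min(random.sample(pop, 2), key=evaluate)
        cut = random.randrange(1, length)
        child = p1[:cut] + p2[cut:]                                          # one-point crossover
        child[random.randrange(length)] = random.randrange(len(LOW_LEVEL))   # point mutation
        worst = max(range(pop_size), key=lambda i: evaluate(pop[i]))
        pop[worst] = child                                                   # steady-state replacement
    return min(pop, key=evaluate)

best = ga_hyper_heuristic()
print("best heuristic sequence:", best, "objective reached:", evaluate(best))
```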

Target Audience

The tutorial is aimed at researchers in computational intelligence who have an interest in hyper-heuristics or have just started working in this area. A background in evolutionary algorithms is assumed.

Short Biography

Nelishia Pillay is a vice chair of the IEEE Task Force on Hyper-Heuristics with the Technical Committee of Intelligent Systems and Applications at the IEEE Computational Intelligence Society. She is an active researcher in the field of evolutionary algorithm hyper-heuristics for combinatorial optimization and automated design. This is one of the focus areas of the NICOG (Nature-Inspired Computing Optimization) research group, which she has established. She is currently working on the project “Automated Intelligent Design Support Using Hyper-Heuristics” in collaboration with the University of Nottingham, which is supported by a Royal Society Newton International exchange grant.

 

Title: Parallelization of Evolutionary Algorithms: MapReduce and Spark (CEC_14)

Organized by Simone Ludwig

Description:

This tutorial will give an introduction to two distributed computing technologies: Hadoop’s MapReduce and Apache’s Spark framework. Examples will show how an evolutionary computing technique can be parallelized in order to speed up the execution time of the algorithm, in particular for difficult, high-dimensional problems. Both frameworks can be used from different programming languages; for this tutorial, Java is used to show how to develop and run code on the different platforms.
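
As a flavor of what such parallelization looks like, the sketch below distributes the (typically expensive) fitness evaluations of a population over a Spark cluster. It is written in PySpark purely for brevity in this program; the tutorial itself uses Java, and the objective function, population size and partition count are illustrative assumptions.

```python
# Distribute fitness evaluation of an EA population with Spark: the population
# is parallelized as an RDD, fitness is computed on the workers, and only the
# (fitness, individual) pairs are collected back on the driver.
import random
from pyspark import SparkContext

def fitness(ind):
    # placeholder objective (sphere); assumed to be expensive in real use
    return sum(v * v for v in ind)

if __name__ == "__main__":
    sc = SparkContext(appName="parallel-fitness-evaluation")
    population = [[random.uniform(-5, 5) for _ in range(50)] for _ in range(10000)]

    scored = (sc.parallelize(population, numSlices=64)
                .map(lambda ind: (fitness(ind), ind))
                .collect())

    best_fitness, best_ind = min(scored, key=lambda pair: pair[0])
    print("best fitness:", best_fitness)
    sc.stop()
```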

Target Audience

Basic tutorial for beginners who would like to make use of MapReduce and/or Spark as a distributed computing technology.

Short Biography

Simone Ludwig is an Associate Professor of Computer Science at North Dakota State University (NDSU), USA. Prior to joining NDSU, she worked at the University of Saskatchewan (Canada), Concordia University (Canada), Cardiff University (UK) and Brunel University (UK). She received her PhD degree and MSc degree with distinction from Brunel University (UK), in 2004 and 2000, respectively. Her research interests include swarm intelligence, evolutionary computation, parallelization of swarm intelligence and evolutionary computation approaches, image processing, and data mining.

 

Title: Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms (CEC_15)

Organized by Pietro S. Oliveto

Description:

Great advances have been made in recent years towards the runtime complexity analysis of evolutionary algorithms for combinatorial optimization problems. Much of this progress is due to the application of techniques from the study of randomized algorithms. The first pieces of work, started in the 1990s, were directed towards analyzing simple toy problems with significant structure. This work had two main goals: to understand on which kinds of landscapes EAs are efficient (and when they are not), and to develop the first general mathematical techniques needed to perform the analysis. Thanks to this preliminary work, it is nowadays possible to analyze the runtime of evolutionary algorithms on different combinatorial optimization problems. In this beginners’ tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge about time complexity analysis.
By the end of the tutorial, participants will be able to understand theoretically the behavior of EAs on different problems, to perform runtime complexity analyses of simple EAs on the most common toy problems, and to follow more complicated work on the analysis of EAs for combinatorial optimization; they will also have the basic skills to start independent research in the area.
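
For readers who want a concrete starting point, the following short Python sketch (not part of the tutorial material) implements the (1+1) EA with standard bit mutation on OneMax, the classic toy problem of the field; counting iterations until the optimum is reached lets one compare empirical runtimes with the well-known O(n log n) expected-runtime bound obtained, for example, with the fitness-level method.

```python
# (1+1) EA on OneMax: maximize the number of ones in a bit string of length n.
import random

def one_max(x):
    return sum(x)                      # number of ones; optimum is the all-ones string

def one_plus_one_ea(n, rng=random):
    x = [rng.randrange(2) for _ in range(n)]
    steps = 0
    while one_max(x) < n:
        # standard bit mutation: flip each bit independently with probability 1/n
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if one_max(y) >= one_max(x):   # elitist acceptance
            x = y
        steps += 1
    return steps

n = 100
runs = [one_plus_one_ea(n) for _ in range(20)]
print("average empirical runtime on OneMax, n =", n, ":", sum(runs) / len(runs))
```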

Target Audience

The tutorial is targeted at scientists and engineers who wish to: theoretically understand the behavior and performance of the search algorithms they design; familiarize themselves with the techniques used in the runtime analysis of EAs; pursue research in the area of time complexity analysis of randomized algorithms in general and EAs in particular.

Short Biography

Pietro Simone Oliveto is a Senior Lecturer and EPSRC-funded Early Career Fellow at the University of Sheffield, UK. He received the Laurea degree in computer science from the University of Catania, Italy, in 2005 and the PhD degree from the University of Birmingham, UK, in 2009. From October 2007 to April 2008 he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener’s research group. From 2009 to 2013 he held the positions of EPSRC PhD+ Fellow for one year and EPSRC Postdoctoral Fellow in Theoretical Computer Science for three years at the University of Birmingham. From 2013 to 2016 he was a Vice-Chancellor’s Fellow at the University of Sheffield.

His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published several runtime analysis papers on Evolutionary Algorithms (EAs), Artificial Immune Systems (AIS) and Ant Colony Optimization (ACO) algorithms for classical NP-hard combinatorial optimization problems such as vertex cover, mincut and spanning trees with a maximal number of leaves, together with a review paper on the time complexity analysis of EAs for combinatorial optimization problems and a book chapter containing a tutorial on the runtime analysis of EAs. He has won best paper awards at GECCO’08, ICARIS’11 and GECCO’14, and has received several further best paper nominations. Dr. Oliveto has given tutorials on the runtime analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014, GECCO 2014, GECCO 2015, SSCI 2015, GECCO 2016, GECCO 2017, CEC 2017, SSCI 2017 and PPSN 2016. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), an IEEE Senior Member, an Associate Editor of IEEE Transactions on Evolutionary Computation, and Chair of the IEEE CIS Task Force on “Theoretical Foundations of Bio-inspired Computation”. Since 2008 he has been invited to the Dagstuhl seminar series on the Theory of Evolutionary Algorithms.

 

Title: Constraint-Handling in Nature-Inspired Optimization (CEC_16)

Organized by Efrén Mezura-Montes

Description:

The goal of the tutorial is three-fold: (1) to highlight the changes constraints introduce in a numerical optimization problem, (2) to show the initial ideas for introducing feasibility information into a nature-inspired algorithm, and (3) to present currently popular constraint-handling techniques, noting the pros and cons of each. The tutorial starts with a set of important concepts to be used during the session and an explanation of why a constrained problem differs from an unconstrained one. After that, the first constraint-handling techniques, mainly dominated by penalty functions, are presented, followed by the most recent efforts on constraint-handlers. Finally, a summary of the material and the current trends in nature-inspired constrained optimization are given.
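
As a small illustration of the kind of constraint-handlers covered (a sketch with assumed toy functions, not material from the tutorial), the Python snippet below contrasts a static penalty function with the widely used feasibility rules for pairwise comparison of solutions in a problem of the form min f(x) subject to g_i(x) <= 0.

```python
# Two simple constraint-handling mechanisms for min f(x) s.t. g_i(x) <= 0.
def constraint_violation(x, constraints):
    # total amount by which the inequality constraints g_i(x) <= 0 are violated
    return sum(max(0.0, g(x)) for g in constraints)

def penalized_fitness(x, f, constraints, penalty=1e3):
    # static penalty: infeasible solutions pay a cost proportional to their violation
    return f(x) + penalty * constraint_violation(x, constraints)

def better(x, y, f, constraints):
    """Feasibility rules: feasible beats infeasible; among feasible, lower f wins;
    among infeasible, lower total constraint violation wins."""
    vx, vy = constraint_violation(x, constraints), constraint_violation(y, constraints)
    if vx == 0 and vy == 0:
        return f(x) <= f(y)
    if vx == 0 or vy == 0:
        return vx == 0
    return vx <= vy

# toy example: minimize x0 + x1 subject to x0*x1 >= 1 (rewritten as 1 - x0*x1 <= 0)
f = lambda x: x[0] + x[1]
constraints = [lambda x: 1.0 - x[0] * x[1]]
print(better([1.0, 1.0], [0.1, 0.1], f, constraints))   # True: the first point is feasible
```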

Target Audience

The tutorial is intended for researchers, practitioners and students working on numerical optimization using evolutionary algorithms, swarm intelligence, and other nature-inspired algorithms, particularly in real-world problems, which are usually constrained.

Short Biography

Efrén Mezura-Montes is a full-time researcher at the Artificial Intelligence Research Center, University of Veracruz, Mexico. His research interests are the design, analysis and application of bio-inspired algorithms to solve complex optimization problems. He has published over 110 papers in peer-reviewed journals and conferences, as well as one edited book and six book chapters published by international publishing companies. From his work, Google Scholar reports more than 4,500 citations. Dr. Mezura-Montes is a member of the IEEE Computational Intelligence Society Evolutionary Computation Technical Committee and of the IEEE Systems, Man and Cybernetics Society Soft Computing Technical Committee. He is also the founder of the IEEE Computational Intelligence Society Task Force on Nature-Inspired Constrained Optimization. Dr. Mezura-Montes is a member of the editorial boards of the journals “Swarm and Evolutionary Computation”, “Computational Optimization and Applications”, “Complex & Intelligent Systems”, and the “Journal of Optimization”. He is also a reviewer for more than 20 international specialized journals, including the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. Dr. Mezura-Montes is a Level 2 member of the Mexican National Researchers System (SNI) and a regular member of the Mexican Computing Academy (AMEXCOMP).

 

Title: Machine Learning on Evolutionary Computation (CEC_17)

Organized by Masaya Nakata, Shinichi Shirakawa, Frederic Chazal, Naoki Hamada

Description:

The fusion of Machine Learning (ML) and Evolutionary Computation (EC) has been recognized as a rapidly growing research area, attracting both the ML and EC communities. From the ML side, many ML techniques internally involve optimization problems over learning models or system configurations, so EC is a natural technique to combine with them; this combination is often referred to as Evolutionary Machine Learning (EML). From the EC side, in contrast, ML techniques have become a necessary tool for analyzing optimized solutions and for designing and evaluating algorithms. Although many branches of this fusion have been introduced, the related works are often discussed separately at different conferences by different research groups. We are therefore motivated to map the fusion of Machine Learning and Evolutionary Computation together with audiences from both communities; WCCI 2018, with its CEC, FUZZ-IEEE and IJCNN attendees, is an excellent opportunity to pursue this goal. This tutorial covers two major streams related to the fusion of ML and EC. First, we present modern, successful evolutionary machine learning approaches: evolutionary rule-based machine learning and evolutionary neural networks. These fit the main streams of ML, since rule-based learning is a classical yet popular approach to symbolic learning models, and neural networks are probably the most popular modern ML technique, including deep learning. Second, we present an advanced technique for analyzing optimized solutions produced by evolutionary computation, so the tutorial also covers how ML techniques can be used for EC techniques.
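
As a toy illustration of one EML flavor mentioned above, evolutionary neural networks, the sketch below (an invented example, not from the tutorial) uses a simple (1+λ) evolution strategy to tune the weights of a tiny fixed-topology network on the XOR task; the network size, mutation strength and budget are illustrative assumptions.

```python
# Neuroevolution sketch: evolve the 9 weights of a 2-2-1 tanh network on XOR.
import math, random

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(weights, x):
    # hidden units use weights[0:6] (two units, each with two inputs and a bias),
    # the output unit uses weights[6:9]
    h = [math.tanh(weights[3 * i] * x[0] + weights[3 * i + 1] * x[1] + weights[3 * i + 2])
         for i in range(2)]
    return math.tanh(weights[6] * h[0] + weights[7] * h[1] + weights[8])

def loss(weights):
    return sum((forward(weights, x) - t) ** 2 for x, t in XOR)

parent = [random.uniform(-1, 1) for _ in range(9)]
for generation in range(2000):
    offspring = [[w + random.gauss(0, 0.2) for w in parent] for _ in range(10)]
    best = min(offspring, key=loss)
    if loss(best) <= loss(parent):          # (1+lambda) elitist selection
        parent = best
print("final loss:", loss(parent))
```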

Target Audience

As the fusion of Machine Learning and Evolutionary Computation covers a wide range of computational intelligence fields, this tutorial should attract a large audience, not only from CEC but also from FUZZ-IEEE and IJCNN.

Short Biography

Masaya Nakata is an assistant professor at the Faculty of Engineering, Yokohama National University, Japan. He received the B.A., M.Sc. and Ph.D. degrees in informatics from the University of Electro-Communications, Japan, in 2011, 2013 and 2016, respectively. He works on evolutionary rule-based machine learning, reinforcement learning and data mining, more specifically Learning Classifier Systems (LCS). He was a visiting researcher at Politecnico di Milano, Italy, the University of Bristol, UK, and Victoria University of Wellington, New Zealand, where he worked with leading researchers on evolutionary rule-based machine learning. His contributions have been published in more than 10 journal papers and more than 20 conference papers, including the leading conferences in EC, e.g., CEC, GECCO and PPSN. He was elected by the international LCS community as an organizing committee member of the International Workshop on Learning Classifier Systems/Evolutionary Rule-based Machine Learning (2014-2015, 2017-) at the GECCO conference. He received the IEEE CIS Japan Chapter Young Research Award.

Shinichi Shirakawa is a lecturer at the Faculty of Environment and Information Sciences, Yokohama National University, Japan. He received his Ph.D. degree in engineering from Yokohama National University in 2009 and worked at Fujitsu Laboratories Ltd. from 2010 to 2012 as a researcher. His research interests include evolutionary computation, machine learning and computer vision, and he is currently working on evolutionary deep neural networks. His contributions have been published as journal and conference papers in EC and AI venues, e.g., CEC, GECCO, PPSN and AAAI. He received the IEEE CIS Japan Chapter Young Research Award in 2009 and won the best paper award in the Evolutionary Machine Learning track of GECCO 2017.

Naoki Hamada is a researcher at Fujitsu Laboratories Ltd., Japan. He received his Ph.D. degree in engineering from the Tokyo Institute of Technology, Japan, in 2013. His research interests lie at the intersection of evolutionary computation, machine learning, and topological data analysis. He is a pioneer in applying TDA to EMO problems, work for which he received an award from the Japan Society for Evolutionary Computation. He also received the IEEE CIS Japan Chapter Young Research Award.

Frederic Chazal is a senior researcher at INRIA Saclay, France. His research interests are topological and geometric data analysis; topological persistence; geometric inference and geometric learning; computational geometry, geometry processing, and solid modeling; and geometry and topology. He has nearly 100 publications on TDA and related topics. He is currently working on the GUDHI project (http://gudhi.gforge.inria.fr/), developing one of the most widely used, highest-performance TDA libraries.

 

Title: Evolutionary Algorithms, Swarm Dynamics, and Complex Networks: Recent Advances and Progress (CEC_18)

Organized by Ivan Zelinka

Description:

Evolutionary algorithms, based on the Darwinian theory of evolution and the Mendelian theory of genetic processes, are, like swarm algorithms (based on the emergent behavior of natural swarms), very popular and widely used in today’s technological problem solving. They usually achieve outstanding performance compared with classical algorithms, and one of the open research topics for swarm and evolutionary algorithms (SEA) is their performance, efficiency, effectiveness and speed in reaching a solution. Much research has been done in this field, resulting in various modifications of these algorithms, including adaptive ones. This tutorial summarizes and introduces our research on SEA, which can be separated into two parts. In the first, we show how the internal dynamics of an SEA can be understood as social interaction amongst individuals and converted into a social-like network. Analysis of such a network is then straightforward and can be fed back into the SEA to improve its performance. In the second part, we discuss an advanced conversion into a so-called CML (coupled map lattices) system, which allows an elegant analysis of SEA dynamics (including chaotic dynamics) as well as its control. The evolutionary algorithms selected for this tutorial include differential evolution, the genetic algorithm, particle swarm optimization, the bee algorithm, SOMA and others. At the end we demonstrate how EA dynamics can be controlled using a feedback loop control scheme.
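
To make the first idea more tangible, the sketch below (an illustration under simplifying assumptions; it does not reproduce the organizer's tooling or network measures) records, during a plain differential evolution run, which individuals contributed to a successful replacement of which, and treats those records as weighted edges of a social-like interaction network.

```python
# Build an interaction network from the internal dynamics of a simplified
# differential evolution (DE/rand/1/bin-like) run on a toy objective.
import random
from collections import defaultdict

def sphere(x):
    return sum(v * v for v in x)

def de_with_network(pop_size=20, dim=10, generations=100, F=0.5, CR=0.9):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    edges = defaultdict(int)                     # (donor, target) -> number of successful influences
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d]) if random.random() < CR
                     else pop[i][d] for d in range(dim)]
            if sphere(trial) <= sphere(pop[i]):  # successful replacement = social interaction
                pop[i] = trial
                for donor in (a, b, c):
                    edges[(donor, i)] += 1
    return edges

edges = de_with_network()
# in-degree of each target: how often an individual was improved thanks to others
in_degree = defaultdict(int)
for (donor, target), count in edges.items():
    in_degree[target] += count
print(sorted(in_degree.items(), key=lambda kv: -kv[1])[:5])
```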

Target Audience

The tutorial is designed as an introduction; no advanced or expert knowledge of complex networks, chaos, and control is expected.

Short Biography

Ivan Zelinka (born in 1965, ivanzelinka.eu) is currently associated with the Technical University of Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated successively from the Technical University in Brno (1995 – MSc.), UTB in Zlin (2001 – Ph.D.), again from the Technical University in Brno (2004 – Assoc. Prof.) and from VSB-TU (2010 – Professor). Prof. Zelinka is the responsible supervisor of several research grants of the Czech grant agency GAČR, such as Unconventional Control of Complex Systems and Security of Mobile Devices and Communication (a bilateral project between the Czech Republic and Vietnam), and co-supervisor of the FRVŠ grant Laboratory of Parallel Computing, amongst others. He has also worked on numerous grants and two EU projects, as a team member (FP5 – RESTORM) and as supervisor of the Czech team (FP7 – PROMOEVO). He is also head of the research team NAVY, http://navy.cs.vsb.cz/. Prof. Zelinka was awarded the Siemens Award for his Ph.D. thesis, as well as an award from the journal Software News for his book on artificial intelligence. He is a member of the British Computer Society, Machine Intelligence Research Labs (MIR Labs – http://www.mirlabs.org/czech.php), IEEE (committee of the Czech section of Computational Intelligence), several international program committees of various conferences, and three international journals. He is also the founder and editor-in-chief of a new book series entitled Emergence, Complexity and Computation (Springer series 10624, see also www.ecc-book.eu).