Date: July 8th, 2018
|8:00 – 10:00||Session 1||CEC1_01||CEC2_01||CEC3_01||CEC4_01||FUZZ5_01 (Part 1)||FUZZ6_01 (Part 1)||HYB7_01 (Part 1)||IJCNN1_01 (Part 1)||IJCNN1_02 (Part 1)||IJCNN3_02 (Part 1)||IJCNN4_01 (Part 1)|
|10:15 – 12:15||Session 2||CEC1_02||CEC2_02||CEC3_02||CEC4_02||FUZZ5_01 (Part 2)||FUZZ6_01 (Part 2)||HYB7_01 (Part 2)||IJCNN1_01 (Part 2)||IJCNN1_02 (Part 2)||IJCNN3_02 (Part 2)||IJCNN4_01 (Part 2)|
|13:00 – 15:00||Session 3||CEC1_03||CEC2_04||CEC3_03||CEC4_03 (Part 1)||FUZZ5_02||FUZZ6_02 (Part 1)||HYB7_02||IJCNN1_03||IJCNN1_02 (Part 3)||IJCNN3_01||IJCNN4_02|
|15:00 – 15:15||Coffee|
|15:15 – 17:15||Session 4||CEC1_04||CEC2_03 (Part 1)||CEC3_04||CEC4_03 (Part 2)||FUZZ5_03||FUZZ6_02 (Part 2)||HYB7_03||IJCNN1_04||IJCNN1_02 (Part 4)||IJCNN3_03||IJCNN4_03 (Part 1)|
|17:15 – 19:15||Session 5||CEC1_05||CEC2_03 (Part 2)||CEC3_05||CEC4_04||FUZZ5_04||IJCNN3_04||HYB7_04||IJCNN1_05||IJCNN2_01||IJCNN3_05||IJCNN4_03 (Part 2)|
CEC1_01 Co-evolutionary games
CEC1_02 Differential Evolution with Ensembles, Adaptations and Topologies
CEC1_03 Multi-concept Optimization
CEC1_04 Evolutionary Bilevel Optimization
CEC1_05 Evolutionary Computation for Dynamic Optimization Problems
CEC2_01 Applying Stopping Criteria in Evolutionary Multi-Objective Optimization
CEC2_02 Dynamic Multi-objective Optimization: Challenges, Applications and Future Directions
CEC2_03 Evolutionary Many-Objective Optimization
CEC2_04 Pareto Optimization for Subset Selection: Theory and Applications in Machine Learning
CEC3_01 Evolutionary Large-Scale Global Optimization: An Introduction
CEC3_02 Representation in Evolutionary Computation
CEC3_03 Parallel and distributed evolutionary algorithms
CEC3_04 Evolutionary Algorithms and Hyperheuristics
CEC3_05 Parallelization of Evolutionary Algorithms: MapReduce and Spark
CEC4_01 Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms
CEC4_02 Constraint-Handling in Nature-Inspired Optimization
CEC4_03 Machine Learning on Evolutionary Computation
CEC4_04 Evolutionary Algorithms, Swarm Dynamics, and Complex Networks: Recent Advances and Progress
Title: Co-evolutionary games (CEC1_01)
Organized by Hendrik Richter
Studying evolutionary and co-evolutionary games can be seen as an attempt to address a long-standing and fundamental problem in Darwinian evolution: how can we reconcile the two seemingly contradictory observations that individuals experience selective pressure, which entails competition in order to succeed in survival and reproduction, while at the same time there is widespread cooperative and even altruistic behavior between individuals (and also between groups of individuals or even species)? In other words, how can we have selection that favors fitter individuals over less fit ones, while the same individuals regularly cooperate with and support each other, thus ostensibly leveling off differences in fitness? Co-evolutionary games may offer answers to these questions, as they set up mathematical models to discuss whether, when and under what circumstances cooperation may be more advantageous than competition. Such games define individuals having behavioral choices as players selecting and executing strategies (for instance cooperation or competition). By linking the relative costs and benefits of strategies to payoff (and subsequently fitness), we obtain a measure of how profitable a given choice is in evolutionary terms. Co-evolutionary games become dynamic if they are played iteratively over several rounds and the players may update their strategies and/or their networks of interaction, which describe with whom a given player interacts in the game. In this context it is common to call games where the players update only their strategies evolutionary games, while games where the players may additionally update their interaction networks are called co-evolutionary games. The tutorial will give an overview of concepts and research questions, as well as address recent developments in the study of co-evolutionary games.
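As a concrete illustration of these ideas (not part of the tutorial material), the sketch below plays an iterated Prisoner's Dilemma on a ring network in which each player imitates its best-scoring neighbour; the payoff values and the imitation update rule are illustrative assumptions. A co-evolutionary variant would additionally let players rewire their links.

```python
import random

# Illustrative payoff matrix: reward, sucker, temptation, punishment
R, S, T, P = 3, 0, 5, 1

def payoff(a, b):
    # 'C' = cooperate, 'D' = defect
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    return table[(a, b)]

def play_round(strategies):
    n = len(strategies)
    scores = [0.0] * n
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):  # interact with ring neighbours
            scores[i] += payoff(strategies[i], strategies[j])
    # imitation update: adopt the strategy of the best-scoring neighbourhood member
    new = list(strategies)
    for i in range(n):
        best = max(((i - 1) % n, i, (i + 1) % n), key=lambda k: scores[k])
        new[i] = strategies[best]
    return new

random.seed(0)
pop = [random.choice('CD') for _ in range(20)]
for _ in range(10):
    pop = play_round(pop)
print(''.join(pop))
```

With the temptation payoff T > R, defection tends to spread under this update rule, which is exactly the kind of dynamics such models are built to study.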
The tutorial might be interesting for researchers and students looking for an entry point into co-evolutionary games. Scientists already working with co-evolutionary games may find it helpful for obtaining an update on recent developments in the field and getting a fresh look at perspectives and potentials. Practitioners wanting to enhance their knowledge of co-evolutionary games and co-evolutionary game theory may also discover valuable material.
Title: Differential Evolution with Ensembles, Adaptations and Topologies (CEC1_02)
Organized by Ponnuthurai Nagaratnam Suganthan
Differential evolution (DE) is one of the most successful numerical optimization paradigms; hence, practitioners and junior researchers will be interested in learning this optimization algorithm. The DE field is also growing rapidly, so a tutorial on DE will be timely and beneficial to many of the CEC 2018 conference attendees. This tutorial will introduce the basics of DE and then point out some advanced methods for solving diverse numerical optimization problems with DE. DE is one of the most powerful stochastic real-parameter optimization algorithms of current interest. It operates through computational steps similar to those employed by a standard evolutionary algorithm (EA). However, unlike traditional EAs, DE variants perturb the current-generation population members with the scaled differences of distinct population members.
Therefore, no separate probability distribution has to be used for generating the offspring. Since its inception in 1995, DE has drawn the attention of many researchers all over the world, resulting in a large number of variants of the basic algorithm with improved performance. This tutorial will begin with a brief overview of the basic concepts related to DE, its algorithmic components and control parameters. It will subsequently discuss some of the significant algorithmic variants of DE for bound-constrained single-objective optimization. Recent modifications of the DE family of algorithms for multi-objective, constrained, large-scale, niching and dynamic optimization problems will also be included. The talk will discuss the effects of incorporating ensemble learning in DE, a relatively recent concept that can be applied to swarm and evolutionary algorithms to solve various kinds of optimization problems. The talk will also cover neighborhood-topology-based DE and adaptive DE variants that improve the performance of DE. Theoretical advances made to understand the search mechanism of DE and the effect of its most important control parameters will be discussed. The talk will finally highlight a few problems that pose challenges to the state-of-the-art DE algorithms and demand strong research effort from the DE community in the future.
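The difference-based perturbation described above can be sketched as the classic DE/rand/1/bin scheme; the sphere test function and the F and CR settings below are illustrative placeholders, not recommendations from the tutorial.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def de(fitness, dim=5, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: perturb with the scaled difference of distinct members
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # binomial crossover with one forced dimension
            jrand = rng.randrange(dim)
            trial = [mutant[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            # greedy one-to-one selection
            ft = fitness(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return min(fit)

best = de(sphere)
print(best)
```

Note that no explicit probability distribution is sampled for the offspring: the perturbation scale adapts implicitly to the spread of the population, which is the self-referential property the tutorial highlights.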
This presentation will include basics as well as advanced topics of DE; hence, researchers commencing their research in DE as well as experienced researchers can attend. Practitioners will also benefit from the presentation.
Title: Multi-concept Optimization (CEC1_03)
Organized by Amiram Moshaiov
The main goal is to introduce Multi-Concept Optimization (MCO) to the community of EC researchers. This goal will be achieved by way of the following seven tasks: providing background on what a conceptual solution is (as evident from the conceptual design stage during the process of engineering design); describing academic and real-life examples of conceptual solutions; defining what MCO is and how it differs from a traditional definition of an optimization problem (the definitions will include both single- and multi-objective MCO); explaining the significance of MCO as an optimization methodology that is useful for three main reasons: supporting the selection of conceptual solutions, providing an alternative approach to multi-modal optimization, and offering a unique approach to design space exploration; describing evolutionary algorithms for MCO and their benchmarking; describing the application of MCO to a real-life problem (joint work with Israel Aerospace Industries); and providing an assessment of research needs concerning evolutionary computation for MCO.
The proposed tutorial is timely for at least two main reasons. First, modern hardware makes MCO appealing for real-life application, as evident from our joint work with Israel Aerospace Industries. Second, the proposer and his research group have accumulated a large body of knowledge and achievements on MCO, which allows him to prepare an attractive tutorial.
The potential participants include engineers and computer scientists who commonly attend IEEE CEC. The tutorial should appeal to a wide audience, as it deals with a generic approach to optimization. For example, all those who are interested in multi-modal optimization are potential attendees.
Title: Evolutionary Bilevel Optimization (CEC1_04)
Organized by Ankur Sinha and Kalyanmoy Deb
Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible if the corresponding lower-level variable vector is optimal for the lower-level optimization problem. Consider, for example, an inverted pendulum problem for which the motion of the platform relates to the upper-level optimization task of performing the balancing in a time-optimal manner. For a given motion of the platform, whether the pendulum can be balanced at all becomes a lower-level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the "nestedness" of one optimization task in another.
Evolutionary Algorithms (EAs) provide some amenable ways to solve such problems due to their flexibility and their ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activity towards solving bilevel optimization problems. In this tutorial, we will introduce principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
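The "nestedness" can be made concrete with a small sketch: every upper-level candidate is evaluated only after a lower-level search has (approximately) optimized the follower's problem. The quadratic test functions and the simple (1+1)-style searches below are illustrative assumptions, not algorithms from the tutorial.

```python
import random

def upper(x, y):   # leader's objective
    return (x - 1.0) ** 2 + y ** 2

def lower(x, y):   # follower's objective; its optimum is y = x
    return (y - x) ** 2

def solve_lower(x, rng, iters=200):
    # (1+1)-style search for the follower's optimal response to x
    best_y, best_f = 0.0, lower(x, 0.0)
    for _ in range(iters):
        y = best_y + rng.gauss(0, 0.3)
        f = lower(x, y)
        if f < best_f:
            best_y, best_f = y, f
    return best_y

def solve_bilevel(iters=300, seed=2):
    rng = random.Random(2)
    best_x = rng.uniform(-2, 2)
    best_F = upper(best_x, solve_lower(best_x, rng))
    for _ in range(iters):
        x = best_x + rng.gauss(0, 0.3)
        y = solve_lower(x, rng)   # upper-level feasibility needs lower-level optimality
        F = upper(x, y)
        if F < best_F:
            best_x, best_F = x, F
    return best_x, best_F

x, F = solve_bilevel()
print(x, F)
```

For this toy instance the follower's response is y = x, so the leader effectively minimizes (x-1)^2 + x^2, whose bilevel optimum lies near x = 0.5 with F = 0.5; the nested search approaches this value. The sketch also shows why naive nesting is expensive: every upper-level evaluation costs an entire lower-level run.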
Bilevel optimization belongs to a difficult class of optimization problems. Most of the classical optimization methods are unable to solve even simpler instances of bilevel problems. This offers a niche to the researchers in the field of evolutionary computation to work on the development of efficient bilevel procedures. However, many researchers working in the area of evolutionary computation are not familiar with this important class of optimization problems. Bilevel optimization has immense practical applications and it certainly requires attention of the researchers working on evolutionary computation. The target audience for this tutorial will be researchers and students looking to work on bilevel optimization. The tutorial will make the basic concepts on bilevel optimization and the recent results easily accessible to the audience.
Title: Evolutionary Computation for Dynamic Optimization Problems (CEC1_05)
Organized by Shengxiang Yang
Many real-world optimization problems are subject to dynamic environments, where changes may occur over time regarding optimization objectives, decision variables, and/or constraint conditions. Such dynamic optimization problems (DOPs) are inherently challenging. Yet, they are important problems that researchers and practitioners in decision-making in many domains need to face and solve. Evolutionary computation (EC) encapsulates a class of stochastic optimization methods that mimic principles from natural evolution to solve optimization and search problems. EC methods are good tools for addressing DOPs due to their inspiration from natural and biological evolution, which has always been subject to changing environments. EC for DOPs has attracted a lot of research effort during the last two decades, with some promising results. However, this research area is still quite young and far from well understood. This tutorial provides an introduction to the research area of EC for DOPs and carries out an in-depth description and classification of the state-of-the-art research in the field. The purpose is to (i) provide a detailed description and classification of DOP benchmark problems and performance measures; (ii) review current EC approaches and provide detailed explanations of how they work for DOPs; (iii) present current applications in the area of EC for DOPs; (iv) analyze current gaps and challenges in EC for DOPs; and (v) point out future research directions in EC for DOPs.
This tutorial is intermediate and the audience is expected to have some knowledge of EC. This tutorial is expected to attract people with different backgrounds, including academic researchers, PhD students, and practitioners, from the EC and Operational research (OR) communities since many real-world optimization problems are DOPs.
Title: Applying Stopping Criteria in Evolutionary Multi-Objective Optimization (CEC2_01)
Organized by Luis Marti, and Nayat Sanchez-Pi
Most soft-computing, heuristic, non-deterministic or numerical methods have in common that they need a stopping criterion. This criterion, which is usually a heuristic itself, is responsible for minimizing the waste of computational resources by detecting scenarios where it makes no sense to continue executing the method. Hence, the success or failure of any practical application relies heavily not only on the techniques applied but also on the supporting methodologies, including the stopping criterion. Paradoxically, the matter of stopping criteria and convergence detection has often been overlooked by most of the evolutionary multi-objective optimization (EMO) community. This is probably because it plays a supporting role and, consequently, the theoretical and practical implications concerning this topic have not yet been properly studied and disseminated. However, it can be argued that many real-world applications of theoretically outstanding methods may have underperformed due to an incorrect algorithm termination scheme.
In this tutorial, we present an updated summary of the results obtained so far in this area and provide reusable examples of how these methods should be applied in real-life practice. Typically, a stopping criterion is invoked at the end of an iteration of the algorithm. At that point, it is decided whether algorithm execution should continue or can be aborted. We have identified four scenarios in which the execution of an algorithm should be terminated: the current solution is satisfactory; the method is able to output a feasible solution which, although not optimal, is unlikely to be bettered; the method is unable to converge to any solution; or the computation already performed is sufficient to reach a solution and further computation is unjustified.
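As a minimal example of the last scenario, a stopping criterion might declare termination when a quality indicator (such as hypervolume) stagnates over a sliding window of generations; the window size and threshold below are illustrative assumptions, not the criteria implemented by the authors.

```python
from collections import deque

class StagnationCriterion:
    """Stop when the indicator has not moved by more than eps within a window."""
    def __init__(self, window=10, eps=1e-4):
        self.history = deque(maxlen=window)
        self.eps = eps

    def should_stop(self, indicator_value):
        self.history.append(indicator_value)
        if len(self.history) < self.history.maxlen:
            return False  # not enough evidence yet
        return max(self.history) - min(self.history) < self.eps

crit = StagnationCriterion(window=5, eps=1e-3)
# a run that improves quickly, then stalls
values = [0.1, 0.4, 0.6, 0.7, 0.75, 0.7501, 0.7502, 0.7502, 0.7502, 0.7502]
stopped_at = next(i for i, v in enumerate(values) if crit.should_stop(v))
print(stopped_at)  # → 8
```

The design question the tutorial addresses is precisely how to choose the indicator, the window and the threshold so that the criterion neither wastes evaluations nor aborts a run that is still converging.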
This tutorial aims to provide the evolutionary computation community with a comprehensive reference that encompasses: a presentation of the theoretical context of the problem, with emphasis on the mathematical foundations and, in particular, the Karush-Kuhn-Tucker conditions; theoretical results regarding convergence in EMO and single-objective evolutionary algorithms; characteristics, challenges and requirements of EMO stopping criteria; an updated survey of the current state of the art in this area; and an interactive activity comparing different stopping criteria in order to show their strengths and weaknesses. By debating and presenting these topics we intend to draw the attention of the EMO community to this issue, as well as to provide usable tools and methodologies to extend the proper use of these methods. Demonstration and familiarization activities are a fundamental part of the tutorial. In this regard, we will perform a series of “live” comparative experiments relying on the EMO stopping criteria implemented by the authors. For this purpose, various EMO algorithms will be applied to well-known benchmark problems. The populations of each run of the algorithms will be stored, and each of the criteria will be presented with them. This will allow pointing out their features and drawbacks. The exercises, which will be carried out interactively by the instructors and the interested public, will be available on the web during the tutorial and will remain online for later use, reference and study. The necessary software for the tutorial is already available as a MATLAB/Octave EMO stopping criteria taxonomy that contains the “classical” as well as current state-of-the-art methods.
This taxonomy is available online as open-source software at link (GitHub).
Practitioners of the multi-criterion decision making and evolutionary multi-objective communities, from both academia and industry. Elementary knowledge of evolutionary computation would be convenient although not required.
Title: Dynamic Multi-objective Optimization: Challenges, Applications and Future Directions (CEC2_02)
Organized by Marde Helbig, Kalyanmoy Deb
Most real-life optimization problems have more than one objective, with at least two objectives in conflict with one another and at least one objective that changes over time. These kinds of optimization problems are referred to as dynamic multi-objective optimization (DMOO) problems. Instead of restarting the optimization process after a change in the environment has occurred, previous knowledge is used; if the changes are small enough, this may lead to new solutions being found much more quickly. Most research in multi-objective optimization has been conducted on static problems, and most research on dynamic problems has been conducted on single-objective optimization. The goal of a DMOO algorithm (DMOA) is to find a set of solutions that is as close as possible to the true set of optimal solutions (similar to static MOO) and that contains a diverse set of solutions. However, in addition to these goals, a DMOA also has to track the changing set of optimal solutions over time. Therefore, the DMOA also has to deal with the problems of a lack of diversity and outdated memory (similar to dynamic single-objective optimization).
This tutorial will introduce the participants to the field of DMOO by discussing: benchmark functions and performance measures that have been proposed, and the issues related to each of these; algorithms that have been proposed to solve DMOO problems; issues with the comparison of DMOAs’ performance and ensuring a fair comparison; analyzing the performance of DMOAs and why traditional approaches used for static MOO are not necessarily adequate; challenges in the DMOO field that are not yet addressed, such as incorporating a decision maker’s preference in DMOO and visualizing the behavior of DMOAs; real-world applications; and emerging research fields that provide interesting research opportunities. The following will be demonstrated during the tutorial: why certain problems exist with current performance measures; a tool to incorporate a decision maker’s preference into the search process when solving dynamic multi-objective optimization problems; and a tool to visualize the behavior of dynamic multi-objective optimization algorithms during the search process.
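One common ingredient of the reuse-rather-than-restart idea can be sketched as follows, under illustrative assumptions: a change in the environment is detected by re-evaluating a few sentinel solutions, and the previous population is then lightly perturbed and reused as the new starting point. The time-dependent toy objective is an assumption for the sketch only.

```python
import random

def detect_change(objective, sentinels, cached_values, eps=1e-12):
    # a change has occurred if any sentinel's objective value has moved
    return any(abs(objective(s) - v) > eps for s, v in zip(sentinels, cached_values))

def reuse_population(pop, rng, sigma=0.1):
    # keep previous knowledge: small perturbation instead of a restart
    return [[g + rng.gauss(0, sigma) for g in ind] for ind in pop]

# toy time-dependent objective whose optimum moves with t
t = 0.0
def f(x):
    return (x[0] - t) ** 2

rng = random.Random(7)
pop = [[rng.uniform(-1, 1)] for _ in range(5)]
sentinels = pop[:2]
cached = [f(s) for s in sentinels]

t = 0.5  # the environment changes
if detect_change(f, sentinels, cached):
    pop = reuse_population(pop, rng)
print(len(pop))
```

If the change is small, the perturbed population already sits near the new optimum, which is exactly why reuse can find new solutions faster than a restart.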
Researchers, PhD students and practitioners on optimization and Evolutionary Computation.
Title: Evolutionary Many-Objective Optimization (CEC2_03)
Organized by Hisao Ishibuchi, Hiroyuki Sato
The goal of the proposed tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: multi-objective problems with four or more objectives are usually referred to as many-objective problems. It may sound as if there were no significant difference between three-objective and four-objective problems. However, the increase in the number of objectives makes multi-objective problems significantly more difficult.
In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms.
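The weakening of Pareto-based selection pressure is easy to demonstrate empirically: among uniformly random points, the fraction that is non-dominated grows rapidly with the number of objectives, so dominance alone can no longer discriminate between solutions. A small illustrative experiment (the point counts are arbitrary):

```python
import random

def dominates(a, b):
    # minimization: a dominates b if it is no worse everywhere and better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_objectives, rng):
    pts = [[rng.random() for _ in range(n_objectives)] for _ in range(n_points)]
    nd = [p for p in pts if not any(dominates(q, p) for q in pts if q is not p)]
    return len(nd) / n_points

rng = random.Random(0)
for m in (2, 5, 10):
    print(m, nondominated_fraction(200, m, rng))
```

With two objectives only a few percent of the points are non-dominated; with ten objectives almost all of them are, leaving dominance-based ranking nothing to work with.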
After these explanations of the difficulties of many-objective optimization, we explain in the second part (Part II: Approaches and Future Directions) how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from losing its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We also explain some state-of-the-art many-objective algorithms. The attendees of the tutorial will learn representative approaches to many-objective optimization and state-of-the-art many-objective algorithms. At the same time, the attendees will also learn that there still exists a large number of promising, interesting and important research directions in evolutionary many-objective optimization. Some promising research directions are explained in detail in the tutorial.
Researchers, PhD students and practitioners on optimization and Evolutionary Computation.
Title: Pareto Optimization for Subset Selection: Theory and Applications in Machine Learning (CEC2_04)
Organized by Yang Yu, and Chao Qian
Pareto optimization is a special kind of evolutionary optimization method. It optimizes an objective function by transforming the problem into a bi-objective optimization problem, and it has been shown to be a promising method for the subset selection problem. The theoretical understanding of Pareto optimization has recently developed significantly, showing its irreplaceability for this problem. It has also been applied successfully in machine learning tasks. This tutorial will introduce Pareto optimization from scratch. We will cover the history of Pareto optimization and its theoretical results, and we will show that it achieves the best-so-far theoretical and practical performance in some machine learning tasks.
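The flavour of the approach can be sketched on a toy maximum-coverage instance: the constrained problem "maximize f(S) subject to |S| ≤ k" is recast as a bi-objective one (maximize f, minimize |S|), and an archive of non-dominated subsets is evolved by bit-flip mutation. This follows the general Pareto-optimization scheme only loosely; the instance, parameters and size bound are illustrative assumptions.

```python
import random

universe_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 7, 8}]
k = 2

def f(mask):
    # coverage of the selected sets
    covered = set()
    for bit, s in zip(mask, universe_sets):
        if bit:
            covered |= s
    return len(covered)

def dominates(a, b):
    # a, b are (f_value, size); maximize f, minimize size
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def poss(iters=2000, seed=3):
    rng = random.Random(seed)
    archive = {(0,) * len(universe_sets): (0, 0)}   # start from the empty subset
    for _ in range(iters):
        parent = rng.choice(list(archive))
        # flip each bit independently with probability 1/n
        child = tuple(b ^ (rng.random() < 1 / len(parent)) for b in parent)
        size = sum(child)
        if size > 2 * k:                 # discard overly large subsets
            continue
        obj = (f(child), size)
        if not any(dominates(v, obj) for v in archive.values()):
            archive = {m: v for m, v in archive.items() if not dominates(obj, v)}
            archive[child] = obj
    # final answer: best f among archived subsets of size <= k
    return max(v[0] for v in archive.values() if v[1] <= k)

print(poss())
```

Keeping the whole trade-off front, rather than a single best subset, is what lets the method escape local optima that trap greedy selection.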
We assume that the audience has basic knowledge of probability and linear algebra.
The potential audience includes those who are curious about theoretically grounded evolutionary algorithms, and those who are interested in applying evolutionary algorithms to achieve state-of-the-art performance in machine learning.
Title: Evolutionary Large-Scale Global Optimization: An Introduction (CEC3_01)
Organized by Mohammad Nabi Omidvar, and Xiaodong Li
Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems. Recent advances in machine learning have likewise involved very large-scale optimization problems, encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this "curse of dimensionality" that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped to deal with such problems.
It is this research gap, in both theory and practice, that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and meta-heuristic optimization algorithms being developed to overcome this scalability issue. Among these, meta-heuristics have gained popularity due to their ability to deal with black-box optimization problems. In this tutorial, we provide an overview of recent advances in the field of evolutionary large-scale global optimization, with an emphasis on divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches, including non-decomposition-based approaches such as memetic algorithms and sampling methods, to deal with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms, including the differential grouping algorithm and its latest improved derivatives, which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks. We also address the issue of resource allocation in cooperative co-evolution and provide a detailed explanation of some recent algorithms, such as the contribution-based cooperative co-evolution family of algorithms.
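The interaction check at the heart of differential grouping can be sketched as follows: variables i and j are declared interacting if the change in f caused by perturbing x_i differs depending on the value of x_j. The test function, perturbation size and threshold below are illustrative assumptions rather than the published algorithm's settings.

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    """Finite-difference interaction test between variables i and j."""
    def df_i(base):
        bumped = list(base)
        bumped[i] += delta
        return f(bumped) - f(base)   # effect of perturbing x_i at this base point
    x2 = list(x)
    x2[j] += delta                   # move x_j and re-measure the same effect
    return abs(df_i(x) - df_i(x2)) > eps

# separable in x0; non-separable in (x1, x2)
def f(x):
    return x[0] ** 2 + (x[1] * x[2]) ** 2

x0 = [1.0, 1.0, 1.0]
print(interacts(f, x0, 0, 1))  # False: x0 does not interact with x1
print(interacts(f, x0, 1, 2))  # True: x1 and x2 interact
```

Running such checks over all variable pairs partitions the decision vector into independent groups, which cooperative co-evolution can then optimize as separate, much smaller subproblems.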
Overall, this tutorial takes the form of a critical survey of the existing methods with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
This tutorial is suitable for anyone with an interest in evolutionary computation who wishes to learn more about the state of the art in large-scale global optimization. The tutorial is specifically targeted at Ph.D. students and early-career researchers who want to gain an overview of the field and wish to identify the most important open questions and challenges that could bootstrap their research in large-scale optimization. The tutorial can also be of interest to more experienced researchers as well as practitioners who wish to get a glimpse of the latest developments in the field. In addition to our prime goal, which is to inform and educate, we also wish to use this tutorial as a forum for exchanging ideas between researchers. Overall, this tutorial provides a unique opportunity to showcase the latest developments on this hot research topic to the EC research community.
Title: Representation in Evolutionary Computation (CEC3_02)
Organized by Daniel Ashlock
This tutorial has the goal of laying out the principles of representation for evolutionary computation and providing illustrative examples. The tutorial will consist of two parts. The first part will establish the importance of representation in evolutionary computation, something well known to senior researchers but potentially valuable to junior researchers and students. This portion of the tutorial will give concrete examples of how the design of evolutionary computation influences the character of the fitness landscape. Impact on time to solution, the character of solutions located, and design principles for representations will all be covered. This tutorial has been offered several times before, but research in representation is quite active, and so a number of beautiful new examples of representations will be presented in the second half of the tutorial. These will be chosen to emphasize the design principles expounded in the first half of the presentation, and will include applications in games, bioinformatics, optimization, and evolved art. The fitness landscape abstraction will serve as a unifying theme throughout the presentation. New topics will include self-adaptive representations, parameterized manifolds of representations, state-conditioned representations, and representations that arise from abstract algebra. No prior knowledge of abstract algebra will be assumed.
This tutorial is potentially valuable to anyone who needs evolutionary computation beyond simple off-the-shelf technology, and also to anyone doing research in evolutionary computation.
Title: Parallel and distributed evolutionary algorithms (CEC3_03)
Organized by El-Ghazali Talbi
On the one hand, optimization and machine learning problems are increasingly complex and their resource requirements are ever increasing. Real-life optimization problems are often NP-hard and CPU- and/or memory-consuming. Although the use of meta-heuristics and evolutionary algorithms significantly reduces the computational complexity of the search process, the latter remains time-consuming for many problems in diverse domains of application, where the objective function and the constraints associated with the problem are resource-intensive (e.g. in CPU or memory) and the size of the search space is huge. Moreover, more and more complex and resource-intensive meta-heuristics are being developed (e.g. hybrid meta-heuristics, multi-objective meta-heuristics, meta-heuristics under uncertainty, meta-heuristics for bi-level optimization, and surrogate-assisted evolutionary algorithms).
On the other hand, the rapid development of technology in processors (e.g. multi-core processors, dedicated architectures), networks (e.g. local area networks such as Myrinet and InfiniBand, and wide area networks such as optical networks), and data storage has made the use of parallel computing more and more popular. Such architectures represent an effective strategy for the design and implementation of parallel meta-heuristics and evolutionary algorithms. Indeed, sequential architectures are reaching physical limitations (speed of light, thermodynamics). Nowadays, even laptops and workstations are equipped with multi-core processors, which represent one class of parallel architecture. Moreover, the cost/performance ratio is constantly decreasing. The proliferation of powerful workstations and fast communication networks has led to the emergence of GPUs, multi-core architectures, clusters of workstations (COWs), networks of workstations (NOWs), and large-scale networks of machines (grids) as platforms for high-performance computing.
Parallel and distributed computing can be used in the design and implementation of meta-heuristics and evolutionary algorithms for the following reasons:
Speed up the search: one of the main goals in parallelizing a meta-heuristic is to reduce the search time. This helps in designing real-time and interactive optimization methods. This is a very important aspect for classes of problems with hard requirements on the search time, such as dynamic optimization problems and time-critical control problems such as "real-time" planning.
Improve the quality of the obtained solutions: some parallel models for meta-heuristics and evolutionary algorithms allow improving the quality of the search. Indeed, exchanging information between cooperative meta-heuristics alters their behavior in terms of searching the landscape associated with the problem. The main goal of parallel cooperation between evolutionary algorithms is to improve the quality of solutions; both better convergence and reduced search time may result. Note that a parallel model for meta-heuristics may be more effective than a sequential meta-heuristic even on a single processor.
Improve robustness: a parallel evolutionary algorithm may be more robust in terms of solving, in an effective manner, different optimization problems and different instances of a given problem. Robustness may also be measured in terms of the sensitivity of the algorithm to its parameters.
Solve large-scale problems: parallel meta-heuristics make it possible to solve large-scale instances of complex optimization problems. A challenge here is to solve very large instances that cannot be solved on a sequential machine. Another, similar challenge is to solve more accurate mathematical models associated with different optimization problems; improving the accuracy of the mathematical models generally increases the size of the associated problems to be solved.
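The cooperative model mentioned above can be sketched as an island model: several populations evolve independently and periodically migrate their best individuals along a ring. The sketch below runs in a single process for clarity, with illustrative parameters; in a real deployment each island would map to its own processor.

```python
import random

def fitness(x):
    return sum(v * v for v in x)

def step(island, rng):
    # (mu + mu) evolution: mutate everyone, keep the better half
    offspring = [[g + rng.gauss(0, 0.3) for g in ind] for ind in island]
    merged = sorted(island + offspring, key=fitness)
    return merged[: len(island)]

def island_model(n_islands=4, pop=8, dim=5, gens=50, migrate_every=10, seed=5):
    rng = random.Random(seed)
    islands = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
               for _ in range(n_islands)]
    for g in range(1, gens + 1):
        islands = [step(isl, rng) for isl in islands]
        if g % migrate_every == 0:
            # ring migration: best of island i-1 replaces worst of island i
            bests = [min(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl.sort(key=fitness)
                isl[-1] = list(bests[(i - 1) % n_islands])
    return min(fitness(ind) for isl in islands for ind in isl)

print(island_model())
```

The migration topology and frequency are exactly the kind of design parameters the tutorial discusses: too much exchange collapses the islands into one population, too little forfeits the benefit of cooperation.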
Moreover, some optimization problems, such as data mining and machine learning problems, require the manipulation of huge databases.
The main goal of this tutorial is to provide a unified view of parallel meta-heuristics and evolutionary algorithms. It presents the main design questions and search components for all families of meta-heuristics. Not only the design of meta-heuristics is presented, but also their implementation using a software framework. This encourages the reuse of both the design and code of existing search components, with a high level of transparency regarding the target applications and parallel architectures.
Using many case studies and treating design and implementation independently, this tutorial gives participants the skills necessary to solve large-scale optimization and machine learning problems in parallel, quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.
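One of the most common parallel models mentioned above, evaluating the population's fitness in parallel while the master process performs selection and variation, can be sketched in a few lines. This is a minimal illustration, not code from the tutorial; the sphere objective, population size and Gaussian mutation are placeholder choices.

```python
# Minimal master-worker sketch: the master keeps the population and
# farms out fitness evaluations to a pool of worker processes.
import random
from multiprocessing import Pool

def fitness(individual):
    # Placeholder objective: sphere function (minimization)
    return sum(x * x for x in individual)

def evolve(pop_size=20, dim=10, generations=5, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)      # parallel evaluation
            ranked = sorted(zip(scores, pop))    # master does selection
            parents = [ind for _, ind in ranked[:pop_size // 2]]
            # Refill the population with Gaussian-mutated copies
            pop = parents + [[x + rng.gauss(0, 0.1) for x in p]
                             for p in parents]
    return min(fitness(ind) for ind in pop)

if __name__ == "__main__":
    print(evolve())
```

Since each fitness evaluation is independent, this pattern scales directly to clusters or grids when the evaluation cost dominates communication cost.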
Title: Evolutionary Algorithms and Hyperheuristics (CEC3_04)
Organized by Nelishia Pillay
Hyper-heuristics is a rapidly developing domain which has proven effective at providing generalized solutions to problems across problem domains. Evolutionary algorithms have played a pivotal role in the advancement of hyper-heuristics, especially generation hyper-heuristics. Evolutionary algorithm hyper-heuristics have been successfully applied to problems in various domains, including packing problems, educational timetabling, vehicle routing, permutation flow-shop and financial forecasting, amongst others. The aim of the tutorial is firstly to provide an introduction to evolutionary algorithm hyper-heuristics for researchers interested in working in this domain. An overview of hyper-heuristics will be provided. The tutorial will examine each of the four categories of hyper-heuristics, namely selection constructive, selection perturbative, generation constructive and generation perturbative, showing how evolutionary algorithms can be used for each type of hyper-heuristic. A case study will be presented for each type of hyper-heuristic to provide researchers with a foundation to start their own research in this area. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. An emerging research direction is the use of hyper-heuristics for the automated design of computational intelligence techniques. The tutorial will look at the synergistic relationship between evolutionary algorithms and hyper-heuristics in this area. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of computational intelligence techniques.
The tutorial will end with a discussion session on future directions in evolutionary algorithms and hyper-heuristics.
The tutorial is aimed at researchers in computational intelligence who have an interest in hyper-heuristics or have just started working in this area. A background in evolutionary algorithms is assumed.
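The selection perturbative category described above can be illustrated with a toy example: low-level perturbation heuristics are chosen according to scores that are reinforced whenever they improve the incumbent solution. This is a minimal sketch, not EvoHyp code; the bit-flip heuristics and the minimize-the-ones objective are placeholder assumptions.

```python
# Toy selection perturbative hyper-heuristic: roulette-wheel choice
# among low-level heuristics, with score reinforcement on improvement.
import random

def flip_one(sol, rng):
    # Low-level heuristic: flip one random bit
    s = sol[:]
    i = rng.randrange(len(s))
    s[i] ^= 1
    return s

def flip_two(sol, rng):
    # Low-level heuristic: flip two random bits
    return flip_one(flip_one(sol, rng), rng)

def hyper_heuristic(n=30, iters=500, seed=1):
    rng = random.Random(seed)
    heuristics = [flip_one, flip_two]
    scores = [1.0] * len(heuristics)
    sol = [rng.randint(0, 1) for _ in range(n)]
    best = sum(sol)                    # objective: minimize number of ones
    for _ in range(iters):
        # Roulette-wheel selection of a low-level heuristic
        idx = rng.choices(range(len(heuristics)), weights=scores)[0]
        cand = heuristics[idx](sol, rng)
        if sum(cand) <= best:
            if sum(cand) < best:
                scores[idx] += 1.0     # reward improving heuristics
            sol, best = cand, sum(cand)
    return best

print(hyper_heuristic())
```

The key property is that the hyper-heuristic searches the space of heuristics rather than the solution space directly, which is what makes the approach reusable across problem domains.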
Title: Parallelization of Evolutionary Algorithms: MapReduce and Spark (CEC3_05)
Organized by Simone Ludwig
This tutorial will give an introduction to two distributed computing technologies: Hadoop's MapReduce and the Apache Spark framework. Examples will show how an evolutionary computing technique can be parallelized in order to speed up the execution time of the algorithm, in particular for difficult, high-dimensional problems. Both frameworks can be programmed in different languages. For this tutorial, Java is used to show how to develop and run code on the different platforms.
This is a basic tutorial for beginners who would like to make use of MapReduce and/or Spark as a distributed computing technology.
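The way one EA generation maps onto the MapReduce pattern can be sketched conceptually: fitness evaluation is the embarrassingly parallel map phase, and aggregation (e.g. finding the best individual) is the reduce phase. The sketch below uses plain Python stand-ins for both phases; in Hadoop or Spark the same structure would be expressed through the framework's API (in Java for this tutorial) rather than these built-ins.

```python
# Conceptual map/reduce decomposition of one EA generation.
from functools import reduce

def fitness(individual):
    # Placeholder objective: sum of squares (minimization)
    return sum(x * x for x in individual)

population = [[3, 1], [0, 2], [1, 1], [4, 0]]

# Map phase: evaluate each individual independently (parallelizable)
evaluated = list(map(lambda ind: (fitness(ind), ind), population))

# Reduce phase: aggregate the partial results, here keeping the best
best = reduce(lambda a, b: a if a[0] <= b[0] else b, evaluated)

print(best)   # → (2, [1, 1])
```

Because the map phase carries no shared state, the speedup for expensive fitness functions is close to linear in the number of workers, which is the motivation for using these frameworks on high-dimensional problems.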
Title: Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms (CEC4_01)
Organized by Pietro S. Oliveto
Great advances have been made in recent years in the runtime complexity analysis of evolutionary algorithms for combinatorial optimization problems. Much of this progress is due to the application of techniques from the study of randomized algorithms. The first pieces of work, started in the 1990s, were directed towards analyzing simple toy problems with significant structure. This work had two main goals: to understand on which kinds of landscapes EAs are efficient, and when they are not; and to develop the first general mathematical techniques needed to perform the analysis. Thanks to this preliminary work, it is nowadays possible to analyze the runtime of evolutionary algorithms on different combinatorial optimization problems. In this beginners' tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge of time complexity analysis. By the end of the tutorial, participants will be able to understand theoretically the behavior of EAs on different problems; to perform runtime complexity analyses of simple EAs on the most common toy problems; to understand more complicated work on the analysis of EAs for combinatorial optimization; and to have the basic skills to start independent research in the area.
The tutorial is targeted at scientists and engineers who wish to: theoretically understand the behavior and performance of the search algorithms they design; become familiar with the techniques used in the runtime analysis of EAs; and pursue research in the area of time complexity analysis of randomized algorithms in general and of EAs in particular.
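The standard object of study in this line of work is the (1+1) EA on toy functions such as OneMax, for which the expected optimization time is known to be Θ(n log n). The following sketch (an illustrative simulation, not material from the tutorial) shows the algorithm whose runtime such analyses bound: standard bit mutation with rate 1/n and elitist acceptance.

```python
# (1+1) EA on OneMax: count mutation steps until the all-ones string
# is reached. Theory gives an expected runtime of Theta(n log n).
import random

def one_plus_one_ea_onemax(n, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        # Standard bit mutation: flip each bit independently with prob 1/n
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        steps += 1
        if sum(y) >= sum(x):   # elitist (1+1) acceptance
            x = y
    return steps

print(one_plus_one_ea_onemax(50))
```

Averaging `steps` over many seeds and plotting against n log n is a simple way to see the theoretical bound reflected empirically.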
Title: Constraint-Handling in Nature-Inspired Optimization (CEC4_02)
Organized by Efrén Mezura-Montes
The goal of the tutorial is three-fold: (1) to highlight the changes that constraints introduce in a numerical optimization problem, (2) to show the initial ideas for introducing feasibility information into a nature-inspired algorithm, and (3) to present currently popular constraint-handling techniques, noting the pros and cons of each of them. The tutorial starts with a set of important concepts to be used during the session and an explanation of why a constrained problem differs from an unconstrained one. After that, the first constraint-handling techniques, dominated mainly by penalty functions, are presented. Then, the most recent efforts on constraint-handlers are detailed. Finally, a summary of the material and the current trends in nature-inspired constrained optimization are presented.
The tutorial is intended for researchers, practitioners and students working on numerical optimization using evolutionary algorithms, swarm intelligence, and other nature-inspired algorithms, particularly in real-world problems, which are usually constrained.
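The penalty-function approach mentioned above, the classical way to fold constraint violations into the objective of a nature-inspired optimizer, can be sketched minimally. The quadratic objective, the single inequality constraint and the penalty factor below are illustrative placeholders, not examples from the tutorial.

```python
# Minimal static penalty function: infeasible points have their
# objective value inflated in proportion to the constraint violation.
def penalized_fitness(x, penalty_factor=1000.0):
    objective = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2   # minimize
    # Inequality constraint g(x) <= 0, here: x0 + x1 - 2 <= 0
    g = x[0] + x[1] - 2.0
    violation = max(0.0, g)
    return objective + penalty_factor * violation ** 2

print(penalized_fitness([2.0, 1.0]))   # infeasible: heavily penalized
print(penalized_fitness([1.0, 1.0]))   # feasible: plain objective value
```

The well-known drawback, and a motivation for the more recent constraint-handlers covered in the tutorial, is that the search behavior is sensitive to the choice of the penalty factor.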
Title: Machine Learning on Evolutionary Computation (CEC4_03)
Organized by Masaya Nakata, Shinichi Shirakawa, Frederic Chazal, Naoki Hamada
The fusion of Machine Learning (ML) and Evolutionary Computation (EC) has been recognized as a rapidly growing research area, attracting both the ML and EC communities. On the ML side, many ML techniques internally involve optimization problems over learning models or system configurations, so EC is a natural way to combine with those techniques; this combination is often referred to as Evolutionary Machine Learning (EML). On the EC side, in contrast, ML techniques have become a necessary approach for analyzing optimized solutions and for designing and evaluating algorithms. Although the fusion of ML and EC has many branches, these potentially related works have often been discussed separately, at different conferences and within different research groups. We are therefore motivated to draw a map of the fusion of Machine Learning and Evolutionary Computation together with audiences from both the ML and EC communities, and WCCI 2018, with its CEC, FUZZ-IEEE and IJCNN attendees, is a very good opportunity to pursue this goal. This tutorial covers two major streams related to the fusion of ML and EC. First, we present modern successful approaches to evolutionary machine learning: evolutionary rule-based machine learning and evolutionary neural networks. This fits the main streams of the ML field, since rule-based learning is a classical but still popular approach to symbolic learning models, and neural networks are probably the most popular approach in modern ML, including deep learning. Second, we present advanced techniques for analyzing the optimized solutions produced by evolutionary computation, so the tutorial also covers how to use ML techniques for EC.
As the fusion of Machine Learning and Evolutionary Computation covers a wide range of computational intelligence fields, this tutorial should attract a large audience, not only from CEC but also from FUZZ-IEEE and IJCNN.
Title: Evolutionary Algorithms, Swarm Dynamics, and Complex Networks: Recent Advances and Progress (CEC4_04)
Organized by Ivan Zelinka
Evolutionary algorithms, based on the Darwinian theory of evolution and the Mendelian theory of genetic processes, as well as swarm algorithms (based on the emergent behavior of natural swarms), are very popular and widely used in today's technological problem solving. They usually achieve outstanding performance compared with classical algorithms, and one of the open research topics for swarm and evolutionary algorithms (SEA) is their performance, efficiency, effectiveness and speed in reaching a solution. Much research has been done in this field, resulting in various modifications of these algorithms, including adaptive ones. This tutorial summarizes and introduces our research on SEA, which can be separated into two parts. In the first, we show how the internal dynamics of a SEA can be understood as social interaction amongst individuals and converted into a social-like network. Analysis of such a network is then straightforward and can be fed back into the SEA to improve its performance. In the second part, we discuss an advanced conversion into a so-called CML (coupled map lattices) system, which allows an elegant analysis of SEA dynamics (including chaotic dynamics) as well as their control. The evolutionary algorithms selected for this tutorial include differential evolution, the genetic algorithm, particle swarm optimization, the Bee algorithm, SOMA, and others. At the end, we will demonstrate how EA dynamics can be controlled using a feedback-loop control scheme.
The tutorial is designed as an introduction; no advanced or expert knowledge of complex networks, chaos, and control is expected.