IEEE WCCI KEYNOTE SPEAKERS
Machine learning and AI for the sciences — towards understanding
Monday 09 Jul, 10:30 AM – 11:30 AM, ASIA rooms – 3rd Floor
In recent years machine learning (ML) and Artificial Intelligence (AI) methods have begun to play a more and more enabling role in the sciences and in industry. In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities.
The talk will touch upon ML applications in the sciences, here in neuroscience, medicine, and physics, and discuss possibilities for extracting information from machine learning models, furthering our understanding by explaining nonlinear ML models.
Klaus-Robert Müller (M'12) has been a professor of computer science at Technische Universität Berlin since 2006. He was director of the Bernstein Focus on Neurotechnology Berlin until 2014 and has been Co-director of the Berlin Big Data Center since 2014.
He studied physics in Karlsruhe from 1984 to 1989 and obtained his Ph.D. degree in computer science at Technische Universität Karlsruhe in 1992. After completing a postdoctoral position at GMD FIRST in Berlin, he was a research fellow at the University of Tokyo from 1994 to 1995. In 1995, he founded the Intelligent Data Analysis group at GMD FIRST (later Fraunhofer FIRST) and directed it until 2008. From 1999 to 2006, he was a professor at the University of Potsdam. He was awarded the 1999 Olympus Prize by the German Pattern Recognition Society (DAGM), the 2006 SEL Alcatel Communication Award, the 2014 Science Prize of Berlin awarded by the Governing Mayor of Berlin, and the 2017 Vodafone Innovation Award. In 2012, he was elected a member of the German National Academy of Sciences Leopoldina, in 2017 of the Berlin-Brandenburg Academy of Sciences, and, also in 2017, an external scientific member of the Max Planck Society. His research interests are intelligent data analysis, machine learning, AI, signal processing, brain-computer interfaces, and electronic structure calculations.
Brain-machine interfaces: from basic science to neurological rehabilitation
Tuesday 10 Jul, 10:30 AM – 11:30 AM, ASIA rooms – 3rd Floor
In this talk, I will describe how state-of-the-art research on brain-machine interfaces makes it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices without any interference of the body muscles or sensory organs.
I will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices. I will also describe evidence indicating that continuous operation of a closed-loop brain machine interface, which utilizes a robotic arm as its main actuator, can induce significant changes in the physiological properties of neural circuits in multiple motor and sensory cortical areas. This research raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject's own body.
Miguel Nicolelis, M.D., Ph.D., is the Duke School of Medicine Distinguished Professor of Neuroscience at Duke University, Professor of Neurobiology, Biomedical Engineering and Psychology and Neuroscience, and founder of Duke's Center for Neuroengineering. He is Founder and Scientific Director of the Edmond and Lily Safra International Institute for Neuroscience of Natal. Dr. Nicolelis is also founder of the Walk Again Project, an international consortium of scientists and engineers, dedicated to the development of an exoskeleton device to assist severely paralyzed patients in regaining full body mobility.
Dr. Nicolelis has dedicated his career to investigating how the brains of freely behaving animals encode sensory and motor information. As a result of his studies, Dr. Nicolelis was first to propose and demonstrate that animals and human subjects can utilize their electrical brain activity to directly control neuroprosthetic devices via brain-machine interfaces (BMI).
Over the past 25 years, Dr. Nicolelis pioneered and perfected the development of a new neurophysiological method, known today as chronic, multi-site, multi-electrode recordings. Using this approach in a variety of animal species, as well as in intra-operative procedures in human patients, Dr. Nicolelis launched a new field of investigation, which aims at measuring the concurrent activity and interactions of large populations of single neurons throughout the brain. Through his work, Dr. Nicolelis has discovered a series of key physiological principles that govern the operation of mammalian brain circuits.
Dr. Nicolelis' pioneering BMI studies have become extremely influential, since they offer potential new therapies for patients suffering from severe levels of paralysis, Parkinson's disease, and epilepsy. Today, numerous neuroscience laboratories in the US, Europe, Asia, and Latin America have incorporated Dr. Nicolelis' experimental paradigm to study a variety of mammalian neuronal systems. His research has influenced basic and applied research in computer science, robotics, and biomedical engineering.
Dr. Nicolelis is a member of the French and Brazilian Academies of Science and has authored over 200 manuscripts, edited numerous books and special journal publications, and holds three US patents. He is the author of Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines and How It Will Change Our Lives; and most recently co-authored The Relativistic Brain: How it Works and Why it Cannot be Simulated by a Turing Machine.
Streaming Consciousness on Streaming Clustering
Wednesday 11 Jul, 10:30 AM – 11:30 AM, ASIA rooms – 3rd Floor
As one who has been involved in research and applications of clustering for many years, I've come to view the clustering enterprise through three basic questions.
- Do you believe there are any clusters in your data?
- If so, can you come up with a technique to find the natural grouping of your data?
- Are the clusters you found good groupings of the data?
These questions have fueled many advances in both feature vector analytics and relational data analytics. Question 1 probably draws the least attention, since we clustering folk want to get about our business. However, some nice visualization techniques have been advanced to assist with this assessment. A side benefit of not skipping this aspect of the problem is that the methods that provide an idea of whether the data has natural clusters also give hints about the big question of how many clusters to search for. There are hundreds, perhaps thousands, of answers to Question 2, and always room for more. Question 3 looks at the issue of cluster validity, usually optimizing the number of clusters to provide compact and well-separated groups of data.
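To make Question 3 concrete, here is a minimal sketch of a validity measure of the kind the paragraph alludes to: the ratio of within-cluster compactness to between-centroid separation. It is a hypothetical toy index for illustration only (smaller is better), not any specific published validity criterion:

```python
import math

def centroid(pts):
    """Mean point of a cluster (pts: list of equal-length tuples)."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def compactness_separation(clusters):
    """Toy validity index: mean distance of points to their own cluster
    centroid, divided by the minimum distance between centroids.
    Smaller values mean more compact, better-separated clusters."""
    cents = [centroid(c) for c in clusters]
    n_pts = sum(len(c) for c in clusters)
    within = sum(math.dist(p, cents[i])
                 for i, c in enumerate(clusters) for p in c) / n_pts
    between = min(math.dist(cents[i], cents[j])
                  for i in range(len(cents))
                  for j in range(i + 1, len(cents)))
    return within / between
```

Sweeping such an index over candidate partitions with different numbers of clusters is one simple way to address the "how many clusters" question.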
With the explosion of ubiquitous continuous sensing (something Lotfi Zadeh predicted as one of the pillars of Recognition Technology in the late 1990s), on-line streaming clustering is attracting more and more attention. I was drawn into this world mainly due to our desire to continuously monitor the activities, and health conditions, of older adults in a large interdisciplinary eldercare research group. Roughly, the requirements are that the streaming clustering algorithm recognize and adapt clusters as the data evolves, that anomalies are detected, and that new clusters are automatically formed as incoming data dictate. Several groups are building algorithms to perform on-line clustering. But, how do those requirements conform to the long-held trust in the three questions of clustering? The purpose of this talk is to examine (my thoughts on) these questions as they relate to streaming clustering. I chose to call it “streaming consciousness” to highlight that this is not a completely defined answer, but more a flow of thoughts about this overall area.
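As a deliberately naive illustration of those requirements, the following leader-style streaming clustering sketch adapts cluster centroids as data arrive and spawns a new cluster whenever a point falls outside every existing cluster's radius. It is illustrative only, not the algorithm used in the eldercare work, and `radius` is an assumed user-chosen threshold:

```python
import math

def stream_cluster(stream, radius):
    """Leader-style streaming clustering sketch: each arriving point
    joins the nearest existing cluster if it lies within `radius`,
    updating that cluster's running centroid; otherwise it founds a
    new cluster. Thus clusters adapt as data evolve, and outliers
    become new-cluster seeds."""
    centroids, counts = [], []
    for x in stream:
        if centroids:
            d, j = min((math.dist(x, c), j) for j, c in enumerate(centroids))
        else:
            d, j = float("inf"), -1
        if d <= radius:
            counts[j] += 1
            # incremental centroid update (running mean)
            centroids[j] = tuple(c + (xi - c) / counts[j]
                                 for c, xi in zip(centroids[j], x))
        else:
            centroids.append(tuple(x))   # anomaly / new cluster seed
            counts.append(1)
    return centroids, counts
```

Note how the three classic questions resurface here: this sketch never asks whether clusters exist, and its "validity" is decided implicitly by the fixed radius.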
James M. Keller holds the University of Missouri Curators Distinguished Professorship in the Electrical Engineering and Computer Science Department on the Columbia campus. He is also the R. L. Tatum Professor in the College of Engineering. His research interests center on computational intelligence with a focus on problems in computer vision, pattern recognition, and information fusion, including bioinformatics, spatial reasoning, geospatial intelligence, landmine detection, and technology for eldercare. Professor Keller has been funded by a variety of government and industry organizations and has coauthored around 500 technical publications. Jim is a Life Fellow of the IEEE, an IFSA Fellow, and a past President of NAFIPS. He received the 2007 Fuzzy Systems Pioneer Award and the 2010 Meritorious Service Award from the IEEE Computational Intelligence Society. He finished a full six-year term as Editor-in-Chief of the IEEE Transactions on Fuzzy Systems, followed by serving as Vice President for Publications of the IEEE CIS from 2005 to 2008, and then as an elected CIS AdCom member. He is VP Publications for CIS again, and has served as the IEEE TAB Transactions Chair and as a member of the IEEE Publication Review and Advisory Committee from 2010 to 2017. Jim has held many conference positions and duties over the years.
The Evolutionary Analysis and Synthesis of Intelligent Living Systems
Thursday 12 Jul, 10:30 AM – 11:30 AM, ASIA rooms – 3rd Floor
Evolutionary computation has been successfully used for understanding the emergence of a wide range of biological intelligent behaviors for which there is no fossil record, such as altruism, division of labor, communication, non-deterministic behavior, and reward-based learning. These methods have also been used for generating efficient robot controllers in communicating rovers, flocking drones, plant robots, and modular robots. Meanwhile, robotics is witnessing a profound transformation with the exploration of novel soft actuators, stretchable sensors, variable-stiffness bodies, and unconventional physical interactions. In these soft robots, just as in biological systems, the traditional distinction between body and intelligence is blurred. Intelligent and adaptive properties of these novel soft robots emerge from the co-design of morphology, materials, and computation. Evolutionary computation is a powerful method for exploring this complex space and generating novel soft robots. The success of this endeavor will depend on at least three factors: the definition of suitable components and models, the design of novel soft-physics simulators, and the availability of sufficient computing power. Recent progress on all these fronts makes this field one of the most promising and exciting research areas in artificial intelligence.
Prof. Dario Floreano is director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL). He is also the founding director of the Swiss National Center of Competence in Robotics, with almost 60 researchers working in 20 labs. Prof. Floreano holds an M.A. in Psychophysics, an M.S. in Neural Computation, and a PhD in Robotics. He has held research positions at Sony Computer Science Laboratory, at Caltech/JPL, and at Harvard University. His research interests focus on robotics and A.I. at the convergence of biology and engineering. Prof. Floreano has made pioneering contributions to the fields of evolutionary robotics, aerial robotics, and soft robotics that have been published in almost 400 peer-reviewed articles and 4 books on Artificial Neural Networks, Evolutionary Robotics, Bio-inspired Artificial Intelligence, and Bio-inspired Flying Robots. He has served on several advisory boards and committees, including the Future and Emerging Technologies division of the European Commission, the World Economic Forum Global Agenda Council, the International Society of Artificial Life, the International Neural Network Society, and the editorial committees of ten scientific journals. In addition, his laboratory has generated two drone companies (senseFly and Flyability) and a foundation dedicated to communication on robotics and A.I. (RoboHub).
Evolutionary fuzzy systems for data science and big data: Why and what for?
Friday 13 Jul, 10:30 AM – 11:30 AM, ASIA rooms – 3rd Floor
Evolutionary Fuzzy Systems are a successful hybridization of Fuzzy Systems and Evolutionary Algorithms. They integrate both the management of imprecision/uncertainty and the inherent interpretability of Fuzzy Rule Based Systems with the learning and adaptation capabilities of evolutionary optimization.
Data science, big data, and smart data applications have emerged in recent years, and researchers from many disciplines are aware of the great advantages of knowledge extraction from this type of problem. This talk will discuss the progression of Evolutionary Fuzzy Systems across different data science areas (complex classification problems, smart data, big data, ...). We will present a discussion of the most recent and difficult data science tasks to be addressed by evolutionary fuzzy systems, their usefulness for knowledge understanding, and the latest trends. Why, and what for, must we apply evolutionary fuzzy systems?
Francisco Herrera is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada, Spain. He is the Director of the Data Science and Computational Intelligence Research Institute.
He has coauthored 6 monographs as well as more than 360 journal papers, receiving more than 57,000 citations (Google Scholar, h-index 118). He has supervised 41 Ph.D. students. He acts as Editor-in-Chief of the journals "Information Fusion" (Elsevier) and "Progress in Artificial Intelligence" (Springer), and is an editorial board member of a dozen journals. He is an IEEE Senior Member (2015), an ECCAI Fellow (2009), and an IFSA Fellow (2013). He received the IEEE Transactions on Fuzzy Systems Outstanding Paper Award for 2008 and 2012 (bestowed in 2011 and 2015, respectively). He has been selected as a Highly Cited Researcher (http://highlycited.com/; in the fields of Computer Science and Engineering, 2014 to present, Clarivate Analytics).
His research interests include, among others, computational intelligence (including fuzzy modeling, evolutionary algorithms, and deep learning), information fusion and decision making, and data science (data preprocessing, classification, big data, ...).
IJCNN 2018 PLENARIES
Cyborg Intelligence: Towards the Convergence of Machine and Biological Intelligence
09-Jul – Monday, 1:00 PM – 2:00 PM, ASIA 1 room – 3rd floor
Recent advances in multidisciplinary fields such as brain-machine interfaces, artificial intelligence, and computational neuroscience signal a growing convergence between machines and living beings. Brain-machine interfaces (BMIs) enable direct communication pathways between the brain and an external device, making it possible to connect organic and computing parts at the signal level. A cyborg is a biological-machine system consisting of both organic and computing components. Cyborg intelligence aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via BMIs. This talk will introduce the concept, architectures, and applications of cyborg intelligence, discuss its issues and challenges, and present our recent progress in this field.
Zhaohui Wu received his BSc and PhD degrees in computer science from Zhejiang University, Hangzhou, China, in 1988 and 1993, respectively. From 1991 to 1993, he was with the German Research Center for Artificial Intelligence (DFKI) as a joint Ph.D. student. He was a visiting professor at the University of Arizona. He is currently a professor with the College of Computer Science and Technology, Zhejiang University, and the president of Zhejiang University.
His research interests include artificial intelligence, service computing, and cyborg intelligence. He received three IEEE/ACM Best Paper Awards, the Distinguished Young Scholars award of the National Science Foundation of China (2005), the Second Prize of the National Science and Technology Progress Award (2010), the Science and Technology Innovation Award of the Ho Leung Ho Lee Foundation (2011), the Second Prize of the National Technology Invention Award (2014), the First Prize of the Innovation Award of the Chinese Association for Artificial Intelligence (2016), and one of the Top-10 Achievements in Science and Technology in Chinese Universities (2016). He is a fellow of the China Computer Federation (CCF). Dr. Wu has authored 5 books and more than 120 refereed journal papers. He is the founding Editor-in-Chief of Elsevier's Journal of Big Data Research.
Information Theory of Deep Learning
10-Jul – Tuesday, 1:00 PM – 2:00 PM, Asia 1 room – 3rd floor
I will present a novel comprehensive theory of large-scale learning with Deep Neural Networks, based on the correspondence between Deep Learning and the Information Bottleneck framework. The new theory has the following components: (1) Rethinking learning theory: I will prove a new generalization bound, the input-compression bound, which shows that compression of the representation of the input variable is far more important for good generalization than the dimension of the network hypothesis class, an ill-defined notion for deep learning. (2) I will prove that for large-scale Deep Neural Networks the mutual information between the last hidden layer and the input and output variables provides a complete characterization of the sample complexity and accuracy of the network. This establishes the Information Bottleneck bound for the problem as the optimal trade-off between sample complexity and accuracy achievable with ANY learning algorithm. (3) I will show how Stochastic Gradient Descent, as used in Deep Learning, achieves this optimal bound. In that sense, Deep Learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory provides a new computational understanding of the benefit of the hidden layers and gives concrete predictions for the structure of the layers of Deep Neural Networks and their design principles. These turn out to depend solely on the joint distribution of the input and output and on the sample size.
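For reference, the Information Bottleneck framework the abstract builds on is the standard formulation of Tishby, Pereira, and Bialek: find a compressed representation T of the input X that preserves information about the output Y by minimizing, over stochastic encoders p(t|x),

```latex
\min_{p(t \mid x)} \; \mathcal{L}\big[p(t \mid x)\big]
  \;=\; I(X;T) \;-\; \beta \, I(T;Y),
```

where the Lagrange multiplier \beta > 0 trades off compression (small I(X;T)) against preserved relevant information (large I(T;Y)).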
Prof. Naftali Tishby is a professor of Computer Science and the director of the Interdisciplinary Center for Neural Computation (ICNC). He holds the Ruth and Stan Flinkman Chair for Brain Research at the Edmond and Lily Safra Center for Brain Science (ELSC) at the Hebrew University of Jerusalem. He is one of the leaders of machine learning research and computational neuroscience in Israel, and his numerous former students serve in key academic and industrial research positions all over the world. He received his PhD in theoretical physics from the Hebrew University in 1985 and was a research staff member at MIT and Bell Labs from 1985 to 1991. Prof. Tishby was also a visiting professor at Princeton NECI, the University of Pennsylvania, UCSB, and IBM Research.
His research is at the interface between computer science, statistical physics, and computational neuroscience. He pioneered various applications of statistical physics and information theory in computational learning theory. More recently, he has been working on the foundations of biological information processing and the connections between dynamics and information. He has introduced with his colleagues new theoretical frameworks for optimal adaptation and efficient information representation in biology, such as the Information Bottleneck method and the Minimum Information principle for neural coding. His Information Bottleneck Theory of Deep Learning is considered a promising breakthrough in this area.
Information Theoretic Machine Learning
11-Jul – Wednesday, 1:00 PM – 2:00 PM, Asia 1 room – 3rd floor
This talk presents an overview of how information theoretic (IT) concepts and algorithms can be applied in machine learning. The first step is to select an approach to estimate entropy and mutual information directly from data, because in machine learning the pdf of the data is normally unknown. Here we will show how Renyi's entropy and mutual information can be estimated from the eigenspectrum of the Gram matrix of kernel learning. Once this is done, IT can provide new cost functions and new frameworks to select hyperparameters for learning machines. We will briefly explain how IT can be used to analyze the dynamics of learning and to set up proper topologies in deep learning. We will also present how the exploration-exploitation dilemma in reinforcement learning can be formulated theoretically with Stratonovich's theory of the value of information.
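The eigenspectrum-based estimator mentioned above can be sketched as follows. This is a simplified illustration of the matrix-based Renyi entropy family associated with Principe's group: a Gaussian Gram matrix is normalized to unit trace so that its eigenvalues behave like a probability mass function; the kernel width `sigma` is a free parameter the user must choose:

```python
import numpy as np

def renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Renyi alpha-entropy estimated directly from data
    (no pdf needed): build a Gaussian Gram matrix over the samples,
    normalise it to unit trace, and apply the Renyi functional to its
    eigenvalues, which are non-negative and sum to one."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))     # Gaussian Gram matrix
    A = K / np.trace(K)                      # unit-trace normalisation
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]                   # drop numerical zeros
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)
```

Intuitively, n identical samples give a rank-one matrix and zero entropy, while n well-separated samples give eigenvalues 1/n each and entropy log2(n), mirroring the discrete Renyi entropy of a uniform distribution.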
Jose C. Principe (M'83-SM'90-F'00) is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning, and artificial neural network (ANN) modeling. He is the Eckis Professor of ECE and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL), www.cnel.ufl.edu. His primary area of interest is the processing of time-varying signals with adaptive neural models. The CNEL Lab has been studying signal and pattern recognition principles based on information theoretic criteria (entropy and mutual information).
Dr. Principe is an IEEE Fellow. He was the past Chair of the Technical Committee on Neural Networks of the IEEE Signal Processing Society, Past-President of the International Neural Network Society, and Past-Editor in Chief of the IEEE Transactions on Biomedical Engineering. He is a member of the Advisory Board of the University of Florida Brain Institute. Dr. Principe has more than 800 publications. He directed 95 Ph.D. dissertations and 65 Master theses. He wrote in 2000 an interactive electronic book entitled “Neural and Adaptive Systems” published by John Wiley and Sons and more recently co-authored several books on “Brain Machine Interface Engineering” Morgan and Claypool, “Information Theoretic Learning”, Springer, and “Kernel Adaptive Filtering”, Wiley.
AutoML: Automating Machine Learning
12-Jul – Thursday, 1:00 PM – 2:00 PM, Asia 1 room – 3rd floor
For many decades, the Computational Intelligence research community has investigated how to automate Machine Learning, covering not only its different stages but also the whole data analysis process. Among the large number of efforts in this direction, one of the most popular is the automatic tuning of the hyper-parameters affecting the performance of Computational Intelligence techniques. With the recent advances in storage, processing, and communication technologies, associated with the expansion of the number, complexity, and size of public datasets, this area has experienced large growth. This talk will present the main approaches and recent advances in AutoML, a research area concerned with the progressive automation of Machine Learning.
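The simplest baseline for the hyper-parameter tuning mentioned above is random search over a configuration space. A minimal sketch (the search space and scoring function in the usage below are hypothetical stand-ins for a real train-and-validate loop):

```python
import random

def random_search(train_eval, space, n_trials=50, seed=0):
    """Baseline AutoML: sample hyper-parameter settings uniformly at
    random from `space` (dict of name -> list of candidate values) and
    keep the best. `train_eval` maps a configuration to a validation
    score (higher is better)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(vals) for name, vals in space.items()}
        score = train_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical usage: a toy objective peaking at lr = 0.1.
space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [2, 4, 8]}
best, _ = random_search(lambda c: -(c["lr"] - 0.1) ** 2, space)
```

More sophisticated AutoML methods (evolutionary search, Bayesian optimization, meta-learning) replace the uniform sampling with an informed search strategy, but the interface is the same.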
André C. P. L. F. de Carvalho is a Full Professor in the Department of Computer Science, University of São Paulo, Brazil. He was an Associate Professor at the University of Guelph, Canada, a visiting researcher at the University of Porto, Portugal, and a visiting professor at the University of Kent, UK. He is an ad hoc assessor for funding agencies in Brazil, Canada, Chile, the Czech Republic, and the UK. His main research interests are data mining, data science, and machine learning. Prof. de Carvalho has more than 300 publications in these areas, including 10 paper awards from conferences organized by ACM, IEEE, and SBC. He is the director of the Center of Machine Learning in Data Analysis, University of São Paulo.
The plastic brain
13-Jul – Friday, 1:00 PM – 2:00 PM, Asia 1 room – 3rd floor
Development of the nervous system depends on adaptive mechanisms that guide and fine-tune neuronal connectivity. Flexibility is essential for establishing topographic mapping between sense organs and the brain. After the formation of connections, many synapses are able to regulate their strength as a result of activity passing through them. Such plasticity helps individual animals to match their perceptual, cognitive and motor skills to the nature of the world around them. The activity-dependent modification of sensory areas of the cerebral cortex during postnatal ‘sensitive periods’ is the best-known example of such adaptive plasticity. There has been progress in defining the molecular mechanisms and functional value of developmental plasticity. The adaptability that underpins normal development might have played an important role in the evolution of the brain, providing a mechanism by which mutational changes in parts of a neural pathway (for instance, an increase in the size of the cerebral cortex or the appearance of additional types of peripheral processing), can be functionally accommodated.
Many parts of the brain retain forms of plasticity throughout life. The ‘mapping’ within sensory and motor areas of the cerebral cortex can change rapidly in response to loss or change of input, local damage and learning. And the cortex can re-organize itself on a massive scale after stroke or after the onset of blindness. Synaptic plasticity, although fundamentally genetically determined, has enabled mammals, especially human beings, to escape from the informational limits in the blueprint of their genes and propelled them into a different mode of evolution, dependent on the cultural transmission of information. A better understanding of the mechanisms and value of adult brain plasticity might reveal features that could be incorporated into the architecture of computational learning.
Sir Colin Blakemore is Professor of Neuroscience & Philosophy, and Director of the Centre for the Study of the Senses, in the School of Advanced Study, University of London. He worked in the medical school at Oxford for 33 years and from 2003-7 was Chief Executive of the UK Medical Research Council. His research has focused on vision, development of the brain, and neurodegenerative disease. He was one of the first to emphasize the importance of plasticity in brain function. Sir Colin has been President of the British Science Association, the British Neuroscience Association, the Physiological Society and the Society of Biology. His many awards include the Ralph Gerard Prize, the highest award of the Society for Neuroscience, the Faraday Prize and the Ferrier Prize from the Royal Society, and, in 2016, the Elise and Walter A Haas International Award from the University of California Berkeley. He was knighted in 2014 for “services to scientific research, policy and outreach”.
IEEE CEC 2018 PLENARIES
Rise of Evolutionary Multi-Criterion Optimization: Destined or Directed?
09-Jul – Monday, 1:00 PM – 2:00 PM, ASIA 2 room – 3rd floor
Any bibliometric analysis of Computational Intelligence (CI) publication streams today will support the fact that evolutionary multi-criterion optimization (EMO) is one of CI's fastest-growing fields. EMO has proliferated into industry through dedicated EMO software products; it has attracted young researchers to build their careers on; it has even taken CI outside its realm and made CI accessible to various non-engineering and non-CS fields. The success of such a field depends on key and sustained research contributions by many researchers. In this invited talk, we shall provide an account of the rise of the EMO field over the past 25 years, highlighting key events and focus areas along the way. We shall also discuss whether EMO and its success were inevitable or orchestrated by pioneering EMO researchers. Lessons learned from such an analysis can give newcomers a better perspective on their field of research, provide clues for future developments, and also provide useful ideas to other emerging fields.
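The operation at the heart of every EMO algorithm is extracting the non-dominated (Pareto) subset of a set of candidate solutions; a minimal sketch for minimization problems:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): a is no worse than b in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.
    Quadratic-time illustrative sketch; EMO algorithms such as NSGA-II
    use faster non-dominated sorting, but the dominance test is the same."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

Everything else in an EMO algorithm (diversity preservation, selection, variation) is built around repeatedly applying this dominance relation to a population.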
Kalyanmoy Deb is the Koenig Endowed Chair Professor in the Department of Electrical and Computer Engineering at Michigan State University, USA. Prof. Deb's research interests are in evolutionary optimization and its application to multi-criterion optimization, modeling, and machine learning. He has been a visiting professor at various universities across the world, including the IITs in India, Aalto University in Finland, the University of Skövde in Sweden, and Nanyang Technological University in Singapore. He was awarded the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, the Edgeworth-Pareto Award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research Award from Germany. He is a fellow of IEEE, ASME, and three Indian science and engineering academies. He has published over 475 research papers, with over 104,000 Google Scholar citations and an h-index of 104. He serves on the editorial boards of 19 major international journals.
Evolutionary Deep Learning for Image Analysis
10-Jul – Tuesday, 1:00 PM – 2:00 PM, Asia 2 room – 3rd floor
Image analysis problems occur in our everyday life. Recognising faces in digital images and diagnosing medical conditions from X-ray images are just two examples of the many important tasks for which we need computer-based image analysis systems. Since the 1980s, many image analysis algorithms have been developed. Among those algorithms, deep learning, particularly deep convolutional neural networks, has achieved great success and attracted the attention of industry and of researchers in computer vision and image processing, neural networks, and machine learning. However, there are at least three major limitations of deep convolutional neural networks: (1) The learning architecture, including the number of layers, the number of feature maps in each layer, and the number of nodes in each feature map, is still very much determined manually via "trial and error", which requires a large amount of hand-crafting/trial time and good domain knowledge; such experts are hard to find in many cases, or using such expertise is too expensive. (2) Almost all current deep learning algorithms need a large number of examples/instances (e.g., AlphaGo used over 30 million instances), which many problems do not have. (3) These algorithms require a huge computational cost, which big companies such as Google, Baidu, and Microsoft can cope with well but most universities and research institutions cannot.
To address these limitations, evolutionary computation techniques have started to play a significant role in automatically determining deep structures, transfer functions, and parameters to tackle image analysis tasks, and they have great potential to advance the development of deep structures and algorithms. This talk will provide an extended view of deep learning, overview the state-of-the-art work in evolutionary deep learning using GAs/PSO/DE, and discuss some recent developments using Genetic Programming (GP) to automatically evolve deep structures and feature construction for image recognition, with a highlight on the interpretation capability and visualisation of constructed features. Finally, recent work and ideas on evolutionary deep transfer learning will be discussed.
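To illustrate the idea of evolving deep structures, here is a toy genetic algorithm over architecture encodings. An individual is simply a list of layer widths, and the fitness function is a hypothetical stand-in; in real evolutionary deep learning (the GA/PSO/DE and GP work the talk surveys), fitness evaluation would train and validate an actual network:

```python
import random

def evolve_architectures(fitness, n_gen=20, pop_size=10, seed=0):
    """Toy GA over CNN-style architecture encodings: an individual is a
    list of layer widths; selection keeps the better half, variation is
    one-point crossover plus random width mutation. Illustrative sketch
    only, not any specific published algorithm."""
    rng = random.Random(seed)
    widths = [8, 16, 32, 64, 128]
    pop = [[rng.choice(widths) for _ in range(rng.randint(2, 5))]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randint(1, min(len(a), len(b)))
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.3:                 # mutate one layer width
                child[rng.randrange(len(child))] = rng.choice(widths)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical stand-in fitness: prefer 3 layers of moderate total width.
def toy_fitness(arch):
    return -abs(len(arch) - 3) - abs(sum(arch) - 112) / 100.0
```

Replacing `toy_fitness` with "train this architecture briefly and return validation accuracy" turns the sketch into the basic loop of evolutionary architecture search, at the computational cost the abstract warns about.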
Mengjie Zhang is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group with over 12 staff members, seven postdocs, and over 25 PhD students. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) for the Faculty of Engineering, and Chair of the Research Committee for the School of Engineering and Computer Science. His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation, and learning classifier systems, with application areas of computer vision and image processing, multi-objective optimisation, feature selection and dimension reduction for classification in high dimensions, transfer learning, classification with missing data, and scheduling and combinatorial optimisation. Prof Zhang has published over 500 research papers in fully refereed international journals and conferences in these areas. He has supervised over 100 research thesis and project students, including over 30 PhD students.
He has served as an associate editor or editorial board member for ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, Evolutionary Computation Journal (MIT Press), IEEE Transactions on Emerging Topics in Computational Intelligence, Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been a major chair for over ten international conferences, including IEEE CEC, GECCO, EvoStar and SEAL.
He has also served as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top five genetic programming researchers worldwide by the GP bibliography. He will chair and host IEEE CEC 2019 in Wellington, the capital city of New Zealand.
Prof Zhang is a Fellow of the Royal Society (Academy of Sciences) of New Zealand. He is currently chairing the IEEE CIS Intelligent Systems and Applications Technical Committee. He is the immediate Past Chair of the Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, and a member of the IEEE CIS Award Committee. He is also a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.
Evolutionary Strategies for Difficult Engineering Design Problems
11-Jul – Wednesday, 1:00 PM – 2:00 PM, ASIA 2 room, 3rd floor
This presentation will put forward several straightforward but successful implementations of an often overlooked evolutionary algorithm – evolutionary strategies (ES) – for the design of complex systems. ES was developed more than 50 years ago for optimizing engineering design problems in continuous space and is characterized by its simplicity and computational efficiency. The basic version has few tunable parameters, and the search relies on evolving a population through mutation only, where mutation is Gaussian with a step size that adapts automatically to the search history. Such simplicity is appealing for both algorithm development and implementation and tends to result in a robust search. The engineering design problems showcased in this talk are diverse, and most involve two objectives optimized simultaneously with ES to identify a Pareto set of non-dominated designs. The applications are (1) the design of an airfoil for a flying drone considering drag and lift, (2) the design of heterogeneous communications networks considering resiliency and traffic efficiency, (3) the location of semi-obnoxious facilities in municipalities considering transport costs and social costs, and (4) the design of large order-picking warehouses considering travel distance.
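To make the mutation-only scheme concrete, the sketch below implements a minimal (1+1)-ES with the classic 1/5th-success-rule step-size adaptation on a toy objective. The objective function, adaptation constants and iteration counts are illustrative assumptions for this sketch, not details of the talk's applications.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=1):
    """Minimal (1+1)-ES: Gaussian mutation only, with 1/5th-success-rule
    step-size adaptation (sigma grows if mutants succeed often, else shrinks)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]   # Gaussian mutation
        fy = f(y)
        if fy < fx:                                    # greedy survivor selection
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                                # adapt step size periodically
            sigma *= 1.22 if successes / 20 > 0.2 else 0.82
            successes = 0
    return x, fx

# toy objective: sphere function, minimum 0 at the origin
sphere = lambda v: sum(vi * vi for vi in v)
best, fbest = one_plus_one_es(sphere, [5.0, -3.0])
```

The automatic adaptation is what the abstract alludes to: the step size is tuned by the search itself, leaving essentially no parameters for the user to hand-tune.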
Alice E. Smith is the Joe W. Forrest Professor of Industrial and Systems Engineering at Auburn University. She holds a U.S. patent and has authored more than 200 refereed publications, which have accumulated over 10,000 citations with an h-index of 44 (Google Scholar). Dr. Smith has been a principal investigator on over $7.5 million of sponsored research and is a three-time Fulbright Scholar. She is a Fellow of IEEE and IISE and a registered Professional Engineer. She serves on the IEEE CIS AdCom, is an Associate Editor of IEEE Transactions on Evolutionary Computation and IEEE Transactions on Automation Science and Engineering, and is an IEEE CIS Distinguished Lecturer. Dr. Smith is also an Area Editor of the INFORMS Journal on Computing and Computers & Operations Research.
Algorithms that play and design games
12-Jul – Thursday, 1:00 PM – 2:00 PM, ASIA 2 room, 3rd floor
The race is on to develop algorithms that can play a wide variety of games as well as humans, or even better. We do this both to understand how well our algorithms can solve tasks that are designed specifically to be hard for humans, and to find software that can help with game development and design through automatic testing and adaptation. After recent successes with Poker and Go, attention is now shifting to video games such as DOOM, DoTA, and StarCraft, which provide a fresh set of challenges. Even more challenging is designing agents that can play not just a single game but any game you give them. A different kind of challenge is designing algorithms that can design games, on their own or together with human designers, rather than play them. I will present several examples of how methods from the computational intelligence toolbox, including evolutionary computation, neural networks, and Monte Carlo Tree Search, can be adapted to address these formidable research challenges.
Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of the IEEE Transactions on Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.
On Parallel Evolutionary Algorithms for Multilevel Optimization
13-Jul – Friday, 1:00 PM – 2:00 PM, ASIA 2 room, 3rd floor
Helio J.C. Barbosa
Research into multilevel programming techniques is strongly motivated by real-world applications found in diverse areas such as economics, operations research, and engineering.
Multilevel optimization problems are characterized by a hierarchical structure in which, at each level, one or more agents (decision makers), each controlling a subset of the variables, seek to optimize their own objective function, not necessarily cooperatively, subject to given constraints and taking into account the decisions of agents at upper levels, and often at the same level, of the hierarchy.
Due to the complexity involved in solving these problems, evolutionary computation is a candidate tool to overcome the many challenges arising, such as non-convexity and non-differentiability, large number of variables and/or constraints, and mixed types of design variables.
In this talk, ways to exploit the parallel nature of evolutionary techniques will be discussed in order to construct distributed computational algorithms for tackling multilevel optimization problems.
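To make the hierarchical setting concrete, the toy sketch below nests one evolutionary search inside another for a two-level (bilevel) problem: the leader mutates its variable, and each candidate is evaluated against the follower's best response. The objective functions, search bounds and budgets are illustrative assumptions, not the methods or problems of the talk.

```python
import random

def follower_best_response(x, rng, trials=200):
    """Lower level: given the leader's x, (approximately) minimize
    g(x, y) = (y - x)^2 by simple random search over y."""
    best_y, best_g = 0.0, float("inf")
    for _ in range(trials):
        y = rng.uniform(-5.0, 5.0)
        g = (y - x) ** 2
        if g < best_g:
            best_y, best_g = y, g
    return best_y

def bilevel_search(iters=200, seed=0):
    """Upper level: a (1+1)-style evolutionary search for the leader, where
    each candidate x is scored using the follower's best response y*(x)."""
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)
    y = follower_best_response(x, rng)
    fx = x ** 2 + (y - 1.0) ** 2          # leader objective f(x, y*)
    for _ in range(iters):
        xc = x + rng.gauss(0.0, 1.0)      # mutate the leader's variable
        yc = follower_best_response(xc, rng)
        fc = xc ** 2 + (yc - 1.0) ** 2
        if fc < fx:
            x, y, fx = xc, yc, fc
    return x, y, fx

x, y, fx = bilevel_search()
```

Since the follower's response is y*(x) ≈ x, the leader effectively minimizes x² + (x − 1)², whose optimum value is about 0.5 at x = 0.5. The nested structure also shows why parallelism matters: every upper-level evaluation triggers a full lower-level search, and these inner searches are independent and can run concurrently.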
Helio J.C. Barbosa is a Senior Technologist at the Laboratório Nacional de Computação Científica, Brazil. He received a Civil Engineering degree (1974) from the Federal University of Juiz de Fora, where he is an Associate Professor in the Computer Science Department, and M.Sc. (1978) and D.Sc. (1986) degrees in Civil Engineering from the Federal University of Rio de Janeiro, Brazil.
During 1988-1990 he was a visiting scholar at the Division of Applied Mechanics, Stanford University, USA, working on numerical analysis of finite element methods. He got involved with the Evolutionary Computation field in the early nineties. He is a regular reviewer for the major conferences and journals in the area, and member of the IEEE Evolutionary Computation Technical Committee. Currently he is mainly interested in the design and application of metaheuristics in engineering, operations research, and biology.
FUZZ-IEEE 2018 PLENARIES
Fuzzy Associative Memories: Theory and Applications
09-Jul – Monday, 1:00 PM – 2:00 PM, Americas room, 2nd lower level
Associative memories are models inspired by the human brain's ability to recall information by association. We speak of a fuzzy associative memory when the associative memory is designed for the storage and recall of fuzzy sets. Apart from the biological motivation, a fuzzy associative memory is a continuous fuzzy system that maps close inputs to close outputs.
We shall begin this talk by reviewing the matrix-based fuzzy associative memories, which are closely related to the compositional rule of inference. Examples of matrix-based fuzzy associative memories include the famous models of Kosko and the implicative fuzzy associative memories. We point out that many matrix-based fuzzy associative memories can be embedded into the general class of fuzzy morphological associative memories. In light of this remark, we review some key concepts of mathematical morphology, a theory widely used for image processing and analysis. Particular attention is given to auto-associative fuzzy morphological memories, for which we provide theoretical results concerning storage capacity, noise tolerance, and fixed points. Afterward, we shall address recent advances in fuzzy associative memories. Furthermore, we shall present some applications of fuzzy associative memories, including time series prediction, pattern recognition, and computer vision.
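As a small illustration of the matrix-based scheme, the sketch below stores a single fuzzy rule with Kosko-style correlation-minimum encoding and recalls it through max-min composition (the compositional rule of inference). The membership values are made-up illustrative data.

```python
def fam_store(a, b):
    """Kosko's correlation-minimum encoding of one fuzzy rule "if A then B":
    M[i][j] = min(a[i], b[j])."""
    return [[min(ai, bj) for bj in b] for ai in a]

def fam_recall(a, M):
    """Max-min composition B' = A o M, i.e. the compositional rule of
    inference: B'[j] = max_i min(a[i], M[i][j])."""
    cols = range(len(M[0]))
    return [max(min(ai, M[i][j]) for i, ai in enumerate(a)) for j in cols]

# made-up membership values; a is "normal" (peak membership 1),
# so the stored consequent is recalled exactly
a = [0.2, 1.0, 0.5]
b = [0.3, 0.8]
M = fam_store(a, b)
recalled = fam_recall(a, M)   # recovers [0.3, 0.8]
```

Both operations use only min and max, which is one reason these memories connect naturally to mathematical morphology, where erosions and dilations are built from the same lattice operations.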
Marcos Eduardo Valle received his master's and Ph.D. degrees in applied mathematics from the University of Campinas in 2005 and 2007, respectively. He previously worked at the University of Londrina, Brazil. Currently, he is an assistant professor in the Department of Applied Mathematics of the University of Campinas, Brazil. His research interests include associative memories, fuzzy set theory, lattice theory, mathematical morphology, hypercomplex-valued neural networks, pattern recognition, and data recovery. He is a member of the Mathematical Imaging and Computational Intelligence Laboratory at the Institute of Mathematics, Statistics, and Scientific Computing. Valle's primary research contributions to fuzzy associative memories include the implicative fuzzy associative memories and the class of fuzzy morphological associative memories. He has published more than 60 articles, including book chapters, journal manuscripts, and conference proceedings.
On decision and optimization models and their applications
10-Jul – Tuesday, 1:00 PM – 2:00 PM, Americas room, 2nd lower level
José L. Verdegay
The importance of decision problems and optimization problems in all areas is nowadays beyond doubt. Notwithstanding this importance, one often tends to think that these two fields travel along different routes, when in fact their relationship is close, even symbiotic. To illustrate this dependence, this talk will present optimization problems, essentially but not exclusively those of Mathematical Programming, as particular cases of decision problems described as a direct function of the type of information available. For this purpose, a General Decision Problem is presented as a sextet (X, E, f, ≤, I, K) comprising the set of actions to be taken, the states of nature, the results, the relation ordering the results, the available information, and the framework in which the decision maker has to carry out their activities, respectively. Then, depending on the characteristics of each of these elements, different optimization problems may arise. We will focus on the case of information of a fuzzy nature, from which different models and problems of fuzzy optimization (Fuzzy Mathematical Programming problems, fuzzy-sets-based metaheuristics, ...) emerge. Recent applications of these models, as well as future research lines associated with them, will be described.
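One classical way in which fuzzy information turns a decision problem into an optimization problem is Bellman and Zadeh's scheme, sketched below: the fuzzy decision is the intersection (pointwise minimum) of a fuzzy goal and a fuzzy constraint, and the best action maximizes its membership in that decision. The actions and membership values are made-up illustrative data, not an example from the talk.

```python
# Bellman-Zadeh fuzzy decision making over a finite set of actions.
actions = [0, 1, 2, 3, 4]
goal       = {0: 0.1, 1: 0.4, 2: 0.8, 3: 1.0, 4: 0.6}  # e.g. "benefit should be high"
constraint = {0: 1.0, 1: 0.9, 2: 0.7, 3: 0.3, 4: 0.1}  # e.g. "cost should be low"

# fuzzy decision = goal AND constraint (pointwise minimum)
decision = {x: min(goal[x], constraint[x]) for x in actions}
best = max(actions, key=decision.get)   # action 2, with membership 0.7
```

Replacing crisp constraints by fuzzy memberships like these is exactly the step that carries a classical mathematical programming problem into fuzzy optimization.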
José Luis Verdegay received the M.S. degree in mathematics and the Ph.D. degree in sciences from the University of Granada, Granada, Spain, in 1975 and 1981, respectively. He is a full Professor at the Department of Computer Science and Artificial Intelligence (DECSAI), University of Granada, Spain. He has published twenty-nine books and more than 350 scientific and technical papers in leading scientific journals, and has been advisor of 20 Ph.D. dissertations. He has served on many international program committees and has attended numerous national and international conferences, congresses, and workshops. He has been Principal Researcher on a variety of national and international research and educational projects; he is currently conducting a research project on "Models of Optimization and Decision: Applications and Solutions at 3 Different Environments" and coordinating the Ibero-American research network in Decision and Optimization models (iMODA). He is also a member of the editorial boards of several leading international journals, such as Fuzzy Sets and Systems, Fuzzy Optimization and Decision Making, IJUFKS and Memetic Computing. Professor Verdegay is an IFSA Fellow, IEEE Senior Member and Honorary Member of the Cuban Academy of Mathematics and Computation. In addition, he holds the featured position of "Invited Professor" at the Technical University of Havana (Cuba), the Central University of Las Villas (Santa Clara, Cuba) and the University of Holguín (Cuba), and is a "Distinguished Guest" of the National University of Trujillo (Perú). His current scientific interests are in Soft Computing, fuzzy sets and systems, decision support systems, metaheuristic algorithms, nature-inspired systems and all their applications to real-world problems.
Fuzzy management of data and information quality
11-Jul – Wednesday, 1:00 PM – 2:00 PM, Americas room, 2nd lower level
The management of big data is certainly one of the most important challenges in the modern digital society. Beyond the problems of Volume, Velocity and Variety (or heterogeneity), classically mentioned as the three V's in analyses of big data, it is important to pay attention to the fourth V, usually called Veracity in a broad sense, which relates to uncertainty in data. In this regard, we differentiate data quality from information quality. The first depends on the completeness, accuracy, errors and validity of the available data. The second is based on the truth attached to pieces of information as a function of the confidence sources have in the information they provide and their reliability, as well as the level of inconsistency in the obtained information and its suitability for the end user's needs. The analysis of data and information quality is complex and depends on intertwined objective and subjective factors, according to the nature of the data: open data or temporal data, collaborative information, news streams or data acquired from connected devices, for instance.
Statistics and statistical machine learning are preeminent in so-called data science. We highlight the importance of non-statistical models for coping with the drawbacks mentioned above, mainly fuzzy set and possibility-based methods, which are particularly useful for dealing with subjective criteria and for providing easily interpretable information. We also mention solutions based on evidence-based methods, interval computation and non-classical logics. We review existing methods and provide examples of non-statistical models, pointing out the value of opening new avenues for solving the difficult problem of quality in big data and related information.
Bernadette Bouchon-Meunier is a director of research emeritus at the National Centre for Scientific Research, the former head of the department of Databases and Machine Learning in the Computer Science Laboratory of the University Pierre et Marie Curie-Paris 6 (LIP6). She is the Editor-in-Chief of the International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, the (co)-editor of 27 books, and the (co)-author of five. She has (co-) authored more than 400 papers on approximate and similarity-based reasoning, as well as the application of fuzzy logic and machine learning techniques to decision-making, data mining, risk forecasting, information retrieval, user modelling, sensorial and emotional information processing.
Co-executive director of the IPMU International Conference held every other year since 1986, she also served as the FUZZ-IEEE 2010 and FUZZ-IEEE 2013 Program Chair, the IEEE Symposium Series on Computational Intelligence (SSCI 2011) General Chair and the FUZZ-IEEE 2012 Conference Chair, as well as the Honorary chair of IEEE SSCI 2013, IEEE CIVEMSA 2013 and IEEE CIVEMSA 2017. She is currently the IEEE Computational Intelligence Society Vice-President for Conferences, the IEEE France Section Vice-President for Chapters and the IEEE France Section Computational Intelligence chapter vice-chair. She is an IEEE fellow and an International Fuzzy Systems Association fellow. She received the IEEE Computational Intelligence Society Meritorious Service Award in 2012 and she has been selected for the 2018 IEEE Computational Intelligence Society Fuzzy Systems Pioneer award.
Decomposable Graphical Models: On Learning, Fusion, and Revision
12-Jul – Thursday, 1:00 PM – 2:00 PM, Americas room, 2nd lower level
Decomposable graphical models are of high relevance for complex industrial applications. The Markov network approach is one of their most prominent representatives and an important tool for structuring uncertain knowledge about high-dimensional domains. Relational and possibilistic decompositions also turn out to be useful for making reasoning in such domains feasible. Compared to conditioning a decomposable model on given evidence, learning the structure of the model from data, as well as fusing several decomposable models, is much more complicated. The important belief change operation of revision has been almost entirely disregarded in the past, although the problem of inconsistency is of utmost relevance for real-world applications. In this talk these problems are addressed by presenting successful complex industrial applications.
Rudolf Kruse is Professor at the Faculty of Computer Science in the Otto-von-Guericke University of Magdeburg in Germany. He obtained his Ph.D. and his Habilitation in Mathematics from the Technical University of Braunschweig in 1980 and 1984 respectively. Following a stay at the Fraunhofer Gesellschaft, he joined the Technical University of Braunschweig as a professor of computer science in 1986. From 1996 to 2017 he created and headed the Computational Intelligence Group of the Faculty of Computer Science in the Otto-Von-Guericke University Magdeburg.
He has coauthored 15 monographs and 25 books as well as more than 350 peer-refereed scientific publications in various areas, with more than 15,000 citations and an h-index of 50 in Google Scholar. He is an associate editor of several scientific journals. Rudolf Kruse is a Fellow of the International Fuzzy Systems Association (IFSA), a Fellow of the European Association for Artificial Intelligence (EURAI/ECCAI), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE).
His group is successful in various industrial applications in cooperation with companies such as Volkswagen, SAP, Daimler, and British Telecom. His current main research interests include data science and intelligent systems.
Fuzzy (F-) transforms – an efficient tool for (even big) data preprocessing
13-Jul – Friday, 1:00 PM – 2:00 PM, Americas room, 2nd lower level
The F-transform provides a (dimensionally) reduced representation of the original data. It is based on a granulation of the domain (a fuzzy partition) and yields a tractable image of the original data.
Its main characteristics with respect to the input data are: size reduction, noise removal, invariance to geometrical transformations, knowledge transfer from conventional mathematics, and fast computation.
The F-transform has been applied to image processing, computer vision, pattern recognition, time series analysis and forecasting, numerical methods for differential equations, and deep learning neural networks.
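For readers unfamiliar with the technique, the sketch below implements the discrete direct F-transform over a uniform triangular fuzzy partition, together with the inverse transform that reconstructs a smoothed image of the data from far fewer components. The partition size and the test signal are illustrative assumptions for this sketch.

```python
def direct_ft(xs, ys, n):
    """Discrete direct F-transform: compress (xs, ys) into n components,
    each a weighted average of ys over one triangular fuzzy set."""
    a, b = min(xs), max(xs)
    h = (b - a) / (n - 1)
    nodes = [a + k * h for k in range(n)]
    A = lambda k, x: max(0.0, 1.0 - abs(x - nodes[k]) / h)  # triangular partition
    F = []
    for k in range(n):
        w = [A(k, x) for x in xs]
        F.append(sum(wi * yi for wi, yi in zip(w, ys)) / sum(w))
    return F, nodes

def inverse_ft(F, nodes, xs):
    """Inverse F-transform: reconstruct a smoothed signal from the components."""
    h = nodes[1] - nodes[0]
    A = lambda k, x: max(0.0, 1.0 - abs(x - nodes[k]) / h)
    return [sum(F[k] * A(k, x) for k in range(len(F))) for x in xs]

# 100 samples of a test signal compressed into just 5 components
xs = [i / 99 for i in range(100)]
ys = [x * x for x in xs]
F, nodes = direct_ft(xs, ys, 5)
smooth = inverse_ft(F, nodes, xs)
```

The size reduction mentioned above is visible directly: 100 data points become 5 components, and the inverse transform interpolates them back into a close, smoothed approximation of the original signal.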
In this talk, I will present the theoretical background and applications of this technique. I will discuss the current research being carried out within the Computer Vision group of the Institute for Research and Applications of Fuzzy Modeling, University of Ostrava.
Professor Irina Perfilieva, Ph.D., received the M.S. (1975) and Ph.D. (1980) degrees in applied mathematics from the Lomonosov State University in Moscow, Russia. At present, she is a full professor of applied mathematics at the University of Ostrava, Czech Republic, where she also heads the Theoretical Research Department of the Institute for Research and Applications of Fuzzy Modeling. She is the author or co-author of six books on the mathematical principles of fuzzy sets and fuzzy logic and their applications, and an editor of many special issues of scientific journals. She has published over 270 papers in the areas of multi-valued logic, fuzzy logic, fuzzy approximation and fuzzy relation equations.
Her scientific research is internationally recognized. She is an area editor of IEEE Transactions on Fuzzy Systems and the International Journal of Computational Intelligence Systems, and an editorial board member of the following journals: Fuzzy Sets and Systems, Iranian Journal of Fuzzy Systems, Journal of Uncertain Systems, Journal of Intelligent Technologies and Applied Statistics, and Fuzzy Information and Engineering. She serves on the program committees of the most prestigious international conferences and congresses in the area of fuzzy and knowledge-based systems. For her long-term scientific achievements she was honored at the International FLINS 2010 Conference on Foundations and Applications of Computational Intelligence. She received the memorial Da Ruan award for the best paper at FLINS 2012. In 2013, she was elected an EUSFLAT Honorary Member. She received a special prize at the Seoul International Inventions Fair 2010. She holds two patents.
Her scientific interests lie in applied mathematics and mathematical modeling, where she successfully uses both modern and classical approaches. For the last five years she has been working in the area of image processing and pattern recognition.
IEEE WCCI PUBLIC LECTURE
Applications of Computational Intelligence in Biomedicine
09-Jul – Monday, 7:00 PM – 8:00 PM, ASIA room, 3rd floor level
Gary B. Fogel
At its core, the field of biomedicine focuses on understanding molecular processes, their possible physiological pathologies, and the resulting medical treatment. This includes diagnostics that can classify individuals and their risk of disease based on molecular information, or, for instance, determining the appropriate treatment for individuals already afflicted with a disease. Given the overwhelming abundance of information at the molecular level, there is a growing opportunity to use computational intelligence to improve our basic understanding of disease processes. In this public lecture I will provide examples of how these approaches can be used to inform and lead to new medical opportunities. I will also review some of the hurdles that remain for the use of these tools in clinical settings.
Dr. Gary Fogel is Chief Executive Officer of Natural Selection, Inc. (NSI) in San Diego, California, an internationally recognized award-winning company with a 25-year history of applied computational intelligence. Dr. Fogel received his Ph.D. in biology from U.C. Los Angeles focusing on the evolution of histone proteins. His more recent efforts include many applications of computational intelligence to biology, chemistry, and medicine from genomics to clinical drug development. Dr. Fogel has over 140 publications in technical journals, conferences, and other venues and holds 3 patents. He also helped establish the IEEE CIS Bioinformatics and Bioengineering Technical Committee, and IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology. He has also served on the editorial boards for 10 journals, including as a founding associate editor for IEEE Transactions on Computational Biology and Bioinformatics and IEEE Transactions on Emerging Topics in Computational Intelligence. He currently serves as Editor-in-Chief for the journal BioSystems. Dr. Fogel is an IEEE Fellow and member of the IEEE CIS Administrative Committee.