June 9, 2016



Brain-machine interfaces: from basic science to neurological rehabilitation

Place and date not defined

Miguel Nicolelis

In this talk, I will describe how state-of-the-art research on brain-machine interfaces makes it possible for the brains of primates to interact directly and in a bi-directional way with mechanical, computational and virtual devices without any interference of the body muscles or sensory organs.

I will review a series of recent experiments using real-time computational models to investigate how ensembles of neurons encode motor information. These experiments have revealed that brain-machine interfaces can be used not only to study fundamental aspects of neural ensemble physiology, but they can also serve as an experimental paradigm aimed at testing the design of novel neuroprosthetic devices. I will also describe evidence indicating that continuous operation of a closed-loop brain-machine interface, which utilizes a robotic arm as its main actuator, can induce significant changes in the physiological properties of neural circuits in multiple motor and sensory cortical areas. This research raises the hypothesis that the properties of a robot arm, or other neurally controlled tools, can be assimilated by brain representations as if they were extensions of the subject's own body.

Miguel Nicolelis, M.D., Ph.D., is the Duke School of Medicine Distinguished Professor of Neuroscience at Duke University, Professor of Neurobiology, Biomedical Engineering and Psychology and Neuroscience, and founder of Duke's Center for Neuroengineering. He is Founder and Scientific Director of the Edmond and Lily Safra International Institute for Neuroscience of Natal. Dr. Nicolelis is also founder of the Walk Again Project, an international consortium of scientists and engineers, dedicated to the development of an exoskeleton device to assist severely paralyzed patients in regaining full body mobility.

Dr. Nicolelis has dedicated his career to investigating how the brains of freely behaving animals encode sensory and motor information. As a result of his studies, Dr. Nicolelis was first to propose and demonstrate that animals and human subjects can utilize their electrical brain activity to directly control neuroprosthetic devices via brain-machine interfaces (BMI).

Over the past 25 years, Dr. Nicolelis pioneered and perfected the development of a new neurophysiological method, known today as chronic, multi-site, multi-electrode recordings. Using this approach in a variety of animal species, as well as in intra-operative procedures in human patients, Dr. Nicolelis launched a new field of investigation, which aims at measuring the concurrent activity and interactions of large populations of single neurons throughout the brain. Through his work, Dr. Nicolelis has discovered a series of key physiological principles that govern the operation of mammalian brain circuits.

Dr. Nicolelis' pioneering BMI studies have become extremely influential, since they offer potential new therapies for patients suffering from severe levels of paralysis, Parkinson’s disease, and epilepsy. Today, numerous neuroscience laboratories in the US, Europe, Asia, and Latin America have incorporated Dr. Nicolelis' experimental paradigm to study a variety of mammalian neuronal systems. His research has influenced basic and applied research in computer science, robotics, and biomedical engineering.

Dr. Nicolelis is a member of the French and Brazilian Academies of Science and has authored over 200 manuscripts, edited numerous books and special journal publications, and holds three US patents. He is the author of Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines and How It Will Change Our Lives; and most recently co-authored The Relativistic Brain: How it Works and Why it Cannot be Simulated by a Turing Machine.

Machine learning and AI for the sciences — towards understanding

Place and date not defined

Klaus-Robert Müller

In recent years, machine learning (ML) and artificial intelligence (AI) methods have begun to play an increasingly enabling role in the sciences and in industry. In particular, the advent of large and/or complex data corpora has given rise to new technological challenges and possibilities.

The talk will touch upon ML applications in the sciences (here, in neuroscience, medicine, and physics) and discuss possibilities for extracting information from machine learning models, furthering our understanding by explaining nonlinear ML models.

Klaus-Robert Müller (M’12) has been a professor of computer science at Technische Universität Berlin since 2006. He was director of the Bernstein Focus on Neurotechnology Berlin until 2014 and has since been co-director of the Berlin Big Data Center.

He studied physics in Karlsruhe from 1984 to 1989 and obtained his Ph.D. degree in computer science at Technische Universität Karlsruhe in 1992. After completing a postdoctoral position at GMD FIRST in Berlin, he was a research fellow at the University of Tokyo from 1994 to 1995. In 1995, he founded the Intelligent Data Analysis group at GMD FIRST (later Fraunhofer FIRST) and directed it until 2008. From 1999 to 2006, he was a professor at the University of Potsdam. He was awarded the 1999 Olympus Prize by the German Pattern Recognition Society (DAGM), received the SEL Alcatel Communication Award in 2006, the Science Prize of Berlin awarded by the Governing Mayor of Berlin in 2014, and the Vodafone Innovation Award in 2017. In 2012, he was elected a member of the German National Academy of Sciences Leopoldina; in 2017, he became a member of the Berlin-Brandenburg Academy of Sciences and an external scientific member of the Max Planck Society. His research interests are intelligent data analysis, machine learning, AI, signal processing, brain-computer interfaces, and electronic structure calculations.

The Evolutionary Analysis and Synthesis of Intelligent Living Systems

Place and date not defined

Dario Floreano

Evolutionary computation has been successfully used for understanding the emergence of a wide range of biological intelligent behaviors for which there is no fossil record, such as altruism, division of labor, communication, non-deterministic behavior, and reward-based learning. These methods have also been used for generating efficient robot controllers in communicating rovers, flocking drones, plant robots, and modular robots. Meanwhile, robotics is witnessing a profound transformation with the exploration of novel soft actuators, stretchable sensors, variable-stiffness bodies, and unconventional physical interactions. In these soft robots, just like in biological systems, the traditional distinction between body and intelligence is blurred. Intelligent and adaptive properties of these novel soft robots emerge from the co-design of morphology, materials, and computation. Evolutionary computation is a powerful method for exploring this complex space and generating novel soft robots. The success of this endeavor will depend on at least three factors: the definition of suitable components and models, the design of novel soft-physics simulators, and the availability of sufficient computing power. Recent progress on all these fronts makes this field one of the most promising and exciting research areas in artificial intelligence.

Prof. Dario Floreano is director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL). He is also the founding director of the Swiss National Center of Competence in Robotics, with almost 60 researchers working in 20 labs. Prof. Floreano holds an M.A. in Psychophysics, an M.S. in Neural Computation, and a PhD in Robotics. He has held research positions at Sony Computer Science Laboratory, at Caltech/JPL, and at Harvard University. His research interests focus on robotics and A.I. at the convergence of biology and engineering. Prof. Floreano has made pioneering contributions to the fields of evolutionary robotics, aerial robotics, and soft robotics that have been published in almost 400 peer-reviewed articles and 4 books on Artificial Neural Networks, Evolutionary Robotics, Bio-inspired Artificial Intelligence, and Bio-inspired Flying Robots. He has served on several advisory boards and committees, including the Future and Emerging Technologies division of the European Commission, the World Economic Forum Global Agenda Council, the International Society of Artificial Life, the International Neural Network Society, and the editorial committees of ten scientific journals. In addition, his laboratory has generated two drone companies (senseFly and Flyability) and a foundation dedicated to communication on robotics and A.I. (RoboHub).

Similarity and Fuzzy Logic in Cluster Analysis

Place and date not defined

Enrique Ruspini

The ability to recognize similarities between objects, situations, and concepts is the basis of an important class of cognitive processes known as analogical reasoning. The notion of similarity and its related notion of graded preference have been shown to provide an interpretation of the basic concepts and methods of possibility theory—notably the key procedure known as generalized modus ponens. The interpretation of possibilistic constructs by Ruspini in 1991 in terms of similarity concepts led to the subsequent development of a number of approximate-reasoning methods further extending their classical logic counterparts.

The concept of similarity also plays a significant role in classification processes that seek to discover structure in data. Following the initial work of Ruspini in 1969, establishing cluster analysis as the identification of fuzzy partitions with optimal properties, a large number of methods have been developed based on the idea of mapping metrics into partitions.

In this presentation we aim to bring together our previous work on similarity-based interpretations of fuzzy logic and on relational clustering in the context of issues and methods for hierarchical clustering. Our point of departure will be the extension of similarity measures between points to similarity measures between crisp sets of the sample space, through a process that is dual to the one leading to the well-known Hausdorff extension of distance measures. We will then present an extension of this measure to a similarity measure between fuzzy sets of the dataset.
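
As a sketch of the construction just described: given a pointwise similarity $s(x,y)$, one plausible form of the dual extension to crisp sets $A$ and $B$ (an illustration of the duality, not necessarily the exact measure used in the talk) is

```latex
S(A,B) \;=\; \min\!\left\{\; \inf_{a \in A}\,\sup_{b \in B}\, s(a,b),\;\; \inf_{b \in B}\,\sup_{a \in A}\, s(a,b) \;\right\},
```

obtained from the Hausdorff distance $H(A,B) = \max\{\sup_{a \in A}\inf_{b \in B} d(a,b),\ \sup_{b \in B}\inf_{a \in A} d(a,b)\}$ by the duality $d \leftrightarrow s$, $\sup \leftrightarrow \inf$, and $\max \leftrightarrow \min$.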

Having established a similarity structure between fuzzy sets, it is possible to define and study metric properties of curves connecting fuzzy subsets of the sample space. We propose a methodology, grounded in the original similarity structure, for the derivation of taxonomical trees on the basis of properties of these curves. In closing, we will provide examples of the application of these ideas and discuss future research perspectives.

In a seminal 1969 paper, Enrique H. Ruspini provided the conceptual bases and tools for fuzzy clustering: the summarization and understanding of large data sets and complex objects as collections of fuzzy sets. In subsequent work, Ruspini defined methods that generalize fuzzy clustering by allowing the discovery of multiple, overlapping clusters of different nature and for recognizing important relations between those clusters. His work has led to numerous approaches for data representation and their application to fields ranging from image understanding to neurophysiology to genomics. His developments in the field of approximate reasoning led to a better understanding of methodologies for the analysis of systems described by uncertain data and to approaches to the intelligent control of autonomous robots and to pattern matching in databases (finding “needles” in data “haystacks”).

An IEEE Life Fellow, Ruspini is currently an independent consultant residing in Palo Alto, CA, USA.

Evolutionary fuzzy systems for data science and big data: Why and what for?

Place and date not defined

Francisco Herrera

Evolutionary Fuzzy Systems are a successful hybridization of Fuzzy Systems and Evolutionary Algorithms. They integrate the management of imprecision/uncertainty and the inherent interpretability of Fuzzy Rule-Based Systems with the learning and adaptation capabilities of evolutionary optimization.

Data science, big data, and smart data applications have emerged in recent years, and researchers from many disciplines are aware of the advantages of extracting knowledge from these kinds of problems. This talk will discuss the progression of Evolutionary Fuzzy Systems across different data science areas (complex classification problems, smart data, big data, ...). We will discuss the most recent and difficult data science tasks addressed by evolutionary fuzzy systems, their usefulness for knowledge understanding, and the latest trends. Why, and what for, must we apply evolutionary fuzzy systems?
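
As a minimal illustration of the hybridization the abstract describes, the sketch below (a hypothetical toy problem, not any specific published method) evolves the peak positions of two triangular membership functions in a two-rule fuzzy classifier, using a simple elitist evolutionary algorithm:

```python
import random

# Toy 1-D dataset: class 0 for small inputs, class 1 for large ones.
DATA = [(x / 10.0, 0) for x in range(0, 5)] + [(x / 10.0, 1) for x in range(5, 10)]

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(x, genome):
    """Two rules: IF x is LOW THEN class 0; IF x is HIGH THEN class 1."""
    low_peak, high_peak = genome
    mu_low = tri(x, -0.5, low_peak, high_peak)
    mu_high = tri(x, low_peak, high_peak, 1.5)
    return 0 if mu_low >= mu_high else 1

def fitness(genome):
    """Classification accuracy of the rule base encoded by the genome."""
    return sum(classify(x, genome) == y for x, y in DATA) / len(DATA)

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [sorted([rng.uniform(0.0, 0.5), rng.uniform(0.5, 1.0)]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            parent = rng.choice(parents)
            # Gaussian mutation, clamped to [0, 1] and kept ordered.
            children.append(sorted(min(1.0, max(0.0, g + rng.gauss(0.0, 0.05)))
                                   for g in parent))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best peaks:", [round(g, 2) for g in best])
```

The evolutionary loop only ever touches the membership-function parameters, so the learned model remains a readable two-rule fuzzy system, which is the interpretability argument made above.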

Francisco Herrera is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada, Spain. He is the Director of the Data Science and Computational Intelligence Research Institute.

He has coauthored 6 monographs as well as more than 360 journal papers, receiving more than 57,000 citations (Google Scholar, h-index 118). He has supervised 41 Ph.D. students. He is Editor-in-Chief of the journals Information Fusion (Elsevier) and Progress in Artificial Intelligence (Springer), and an editorial board member of a dozen journals. He is an IEEE Senior Member (2015), an ECCAI Fellow (2009), and an IFSA Fellow (2013). He received the IEEE Transactions on Fuzzy Systems Outstanding Paper Award for 2008 and 2012 (bestowed in 2011 and 2015, respectively). He has been selected as a Highly Cited Researcher (http://highlycited.com/) in the fields of Computer Science and Engineering (2014 to present, Clarivate Analytics).

His research interests include, among others, computational intelligence (including fuzzy modeling, evolutionary algorithms, and deep learning), information fusion, decision making, and data science (data preprocessing, classification, big data, ...).


AutoML: Automating Machine Learning

Place and date not defined

André Carvalho

For many decades, the Computational Intelligence research community has investigated how to automate Machine Learning, covering not only its different stages, but also the whole data analysis process. Among the many efforts in this direction, one of the most popular is the automatic tuning of the hyper-parameters affecting the performance of Computational Intelligence techniques. With the recent advances in storage, processing, and communication technologies, associated with the expansion in the number, complexity, and size of public datasets, this area has experienced large growth. This talk will present the main approaches and the recent advances in AutoML, a research area concerned with the progressive automation of Machine Learning.
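
As a minimal illustration of the hyper-parameter tuning mentioned above (the simplest AutoML building block), the sketch below runs a random search over a hypothetical validation-score function; in a real system, the score would come from actually training and validating a model:

```python
import math
import random

def validation_score(learning_rate, hidden_units):
    """Hypothetical stand-in for training a model and measuring validation
    accuracy; by construction it peaks near learning_rate=0.1, hidden_units=64."""
    lr_term = math.exp(-(math.log10(learning_rate) + 1.0) ** 2)
    size_term = math.exp(-((hidden_units - 64) / 64.0) ** 2)
    return lr_term * size_term

def random_search(n_trials=100, seed=42):
    """The simplest AutoML loop: sample configurations, keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {
            "learning_rate": 10 ** rng.uniform(-4, 0),  # log-uniform on [1e-4, 1]
            "hidden_units": rng.randrange(8, 257),
        }
        score = validation_score(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = random_search()
print("best configuration:", config)
```

More sophisticated AutoML systems replace the random sampler with evolutionary or Bayesian optimization and extend the search space to whole pipelines, but the evaluate-and-compare loop is the same.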

André C. P. L. F. de Carvalho is Full Professor in the Department of Computer Science, University of São Paulo, Brazil. He was Associate Professor at the University of Guelph, Canada, a visiting researcher at the University of Porto, Portugal, and a visiting professor at the University of Kent, UK. He serves as an ad hoc assessor for funding agencies in Brazil, Canada, Chile, the Czech Republic, and the UK. His main research interests are data mining, data science, and machine learning. Prof. de Carvalho has more than 300 publications in these areas, including 10 paper awards from conferences organized by ACM, IEEE, and SBC. He is the director of the Center of Machine Learning in Data Analysis, University of São Paulo.

Cyborg Intelligence: Towards the Convergence of Machine and Biological Intelligence

Place and date not defined

Zhaohui Wu

Recent advances in multidisciplinary fields such as brain-machine interfaces, artificial intelligence, and computational neuroscience signal a growing convergence between machines and living beings. Brain-machine interfaces (BMIs) enable direct communication pathways between the brain and an external device, making it possible to connect organic and computing parts at the signal level. A cyborg is a biological-machine system consisting of both organic and computing components. Cyborg intelligence aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via BMIs. This talk will introduce the concept, architectures, and applications of cyborg intelligence, and discuss its issues and challenges. Our recent progress in this field will also be presented.

Zhaohui Wu received his BSc and PhD degrees in computer science from Zhejiang University, Hangzhou, China, in 1988 and 1993, respectively. From 1991 to 1993, he was with the German Research Center for Artificial Intelligence (DFKI) as a joint Ph.D. student. He was a visiting professor of the University of Arizona. He is currently a professor with the College of Computer Science and Technology, Zhejiang University, and the president of Zhejiang University.

His research interests include artificial intelligence, service computing, and cyborg intelligence. He has received three IEEE/ACM Best Paper Awards, the Distinguished Young Scholars award of the National Science Foundation of China (2005), the Second Prize of the National Science and Technology Progress Award (2010), the Science and Technology Innovation Award of the Ho Leung Ho Lee Foundation (2011), the Second Prize of the National Technology Invention Award (2014), the First Prize of the Innovation Award of the Chinese Association for Artificial Intelligence (2016), and a Top-10 Achievements in Science and Technology in Chinese Universities award (2016). He is a fellow of the China Computer Federation (CCF). Dr. Wu has authored 5 books and more than 120 refereed journal papers. He is the founding Editor-in-Chief of the Elsevier journal Big Data Research.

The plastic brain

Place and date not defined

Colin Blakemore

Development of the nervous system depends on adaptive mechanisms that guide and fine-tune neuronal connectivity. Flexibility is essential for establishing topographic mapping between sense organs and the brain. After the formation of connections, many synapses are able to regulate their strength as a result of activity passing through them. Such plasticity helps individual animals to match their perceptual, cognitive and motor skills to the nature of the world around them. The activity-dependent modification of sensory areas of the cerebral cortex during postnatal ‘sensitive periods’ is the best-known example of such adaptive plasticity. There has been progress in defining the molecular mechanisms and functional value of developmental plasticity. The adaptability that underpins normal development might have played an important role in the evolution of the brain, providing a mechanism by which mutational changes in parts of a neural pathway (for instance, an increase in the size of the cerebral cortex or the appearance of additional types of peripheral processing), can be functionally accommodated.

Many parts of the brain retain forms of plasticity throughout life. The ‘mapping’ within sensory and motor areas of the cerebral cortex can change rapidly in response to loss or change of input, local damage and learning. And the cortex can re-organize itself on a massive scale after stroke or after the onset of blindness. Synaptic plasticity, although fundamentally genetically determined, has enabled mammals, especially human beings, to escape from the informational limits in the blueprint of their genes and propelled them into a different mode of evolution, dependent on the cultural transmission of information. A better understanding of the mechanisms and value of adult brain plasticity might reveal features that could be incorporated into the architecture of computational learning.

Sir Colin Blakemore is Professor of Neuroscience & Philosophy, and Director of the Centre for the Study of the Senses, in the School of Advanced Study, University of London. He worked in the medical school at Oxford for 33 years and from 2003-7 was Chief Executive of the UK Medical Research Council. His research has focused on vision, development of the brain, and neurodegenerative disease. He was one of the first to emphasize the importance of plasticity in brain function. Sir Colin has been President of the British Science Association, the British Neuroscience Association, the Physiological Society and the Society of Biology. His many awards include the Ralph Gerard Prize, the highest award of the Society for Neuroscience, the Faraday Prize and the Ferrier Prize from the Royal Society, and, in 2016, the Elise and Walter A Haas International Award from the University of California Berkeley. He was knighted in 2014 for “services to scientific research, policy and outreach”.

Memory architectures for recurrent neural networks

Place and date not defined

Lee Giles

Neural networks are often considered to be black-box models. However, discrete-time recurrent neural networks (RNNs), which are the most commonly used, have properties that lend themselves to the extraction and insertion of rules. Assume we have a discrete-time RNN that has been trained on sequential data. For each discrete step in time, or a collection thereof, an input can be associated with the RNN's current and previous activations. We can then cluster these activations into states to obtain previous-state to current-state transitions governed by the inputs. From a formal-grammar perspective, these state-to-state transitions can be considered production rules. Once the rules are extracted, a minimal unique set of states can be obtained in O(n log n) time for n states. It can be shown that, for learning known production rules of regular grammars, the extracted rules are stable and independent of initial conditions and, at times, outperform the trained source neural network in terms of classification accuracy. Theoretical work has also shown that regular-expression production rules can easily be inserted into certain types of RNNs, and has proved that the resulting systems are stable. Since for many problem areas, such as finance, medicine, and security, black-box models are not acceptable, the methods discussed here have the potential to uncover what a trained RNN is doing from a regular-grammar and finite-state-machine perspective. We will discuss the strengths, weaknesses, and issues associated with using these and related methods.
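
The extraction procedure described above can be sketched as follows. Here a hand-crafted one-unit recurrence stands in for a trained RNN (its hypothetical weights are chosen so the hidden unit's sign tracks the parity of the 1s seen so far); it is run on random inputs, its activations are clustered into discrete states, and state-to-state production rules are read off:

```python
import math
import random
from collections import defaultdict

def rnn_step(h, x):
    """Stand-in for a trained RNN cell: hypothetical weights chosen so the
    hidden unit keeps its sign on input 0 and flips it on input 1."""
    return math.tanh(4.0 * h * (1 - 2 * x))

def quantize(h):
    """Cluster activations into discrete states (here trivially by sign;
    a real extraction would use k-means or hierarchical clustering)."""
    return 0 if h > 0 else 1

def extract_transitions(n_strings=50, length=20, seed=1):
    """Run the network on random binary strings and record which quantized
    state each (state, input) pair leads to."""
    rng = random.Random(seed)
    transitions = defaultdict(set)
    for _ in range(n_strings):
        h = 0.8                       # initial hidden state: the "even" cluster
        for _ in range(length):
            x = rng.randrange(2)
            h_next = rnn_step(h, x)
            transitions[(quantize(h), x)].add(quantize(h_next))
            h = h_next
    return transitions

rules = extract_transitions()
for (state, symbol), next_states in sorted(rules.items()):
    print(f"state {state} --{symbol}--> {sorted(next_states)}")
```

If every (state, input) pair maps to a single next state, the extracted transition table is a deterministic finite-state machine; for this toy network it is the two-state parity automaton.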

Dr. C. Lee Giles is the David Reese Professor at the College of Information Sciences and Technology at the Pennsylvania State University, University Park, PA, with appointments in Computer Science and Engineering, and in Supply Chain and Information Systems. He is a Fellow of the ACM, IEEE, and INNS (Gabor Prize). He is probably best known for his work on the search engine and digital library CiteSeer, which he co-created, developed, and maintains. He has published over 500 refereed articles with over 30,000 citations and an h-index of 86, according to Google Scholar.

Information Theory of Deep Learning

Place and date not defined

Naftali Tishby

I will present a novel, comprehensive theory of large-scale learning with Deep Neural Networks, based on the correspondence between Deep Learning and the Information Bottleneck framework. The new theory has the following components: (1) rethinking learning theory; I will prove a new generalization bound, the input-compression bound, which shows that compression of the representation of the input variable is far more important for good generalization than the dimension of the network's hypothesis class, an ill-defined notion for deep learning. (2) I will prove that, for large-scale Deep Neural Networks, the mutual information between the input and output variables and the last hidden layer provides a complete characterization of the sample complexity and accuracy of the network. This establishes the Information Bottleneck bound for the problem as the optimal trade-off between sample complexity and accuracy achievable with ANY learning algorithm. (3) I will show how Stochastic Gradient Descent, as used in Deep Learning, achieves this optimal bound. In that sense, Deep Learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory provides a new computational understanding of the benefit of the hidden layers and gives concrete predictions for the structure of the layers of Deep Neural Networks and their design principles. These turn out to depend solely on the joint distribution of the input and output and on the sample size.
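
The Information Bottleneck objective underlying the talk seeks a representation $T$ of the input $X$ that is maximally compressed while remaining informative about the output $Y$:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y),
```

where the Lagrange multiplier $\beta$ trades compression of $X$ against preserved information about $Y$; in the deep-learning reading sketched above, each hidden layer plays the role of $T$.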

Prof. Naftali Tishby is a professor of Computer Science and the director of the Interdisciplinary Center for Neural Computation (ICNC). He holds the Ruth and Stan Flinkman Chair for Brain Research at the Edmond and Lily Safra Center for Brain Science (ELSC) at the Hebrew University of Jerusalem. He is one of the leaders of machine learning research and computational neuroscience in Israel, and his numerous former students serve in key academic and industrial research positions all over the world. He received his PhD in theoretical physics from the Hebrew University in 1985 and was a research staff member at MIT and Bell Labs from 1985 to 1991. Prof. Tishby has also been a visiting professor at Princeton NECI, the University of Pennsylvania, UCSB, and IBM Research.

His research is at the interface between computer science, statistical physics, and computational neuroscience. He pioneered various applications of statistical physics and information theory in computational learning theory. More recently, he has been working on the foundations of biological information processing and the connections between dynamics and information. He has introduced with his colleagues new theoretical frameworks for optimal adaptation and efficient information representation in biology, such as the Information Bottleneck method and the Minimum Information principle for neural coding. His Information Bottleneck Theory of Deep Learning is considered a promising breakthrough in this area.


Trusted Computational Intelligence

Place and date not defined

Alice Smith

Alice E. Smith is the Joe W. Forehand/Accenture Distinguished Professor of the Industrial and Systems Engineering Department at Auburn University (USA), with a joint appointment in the Department of Computer Science and Software Engineering. Her research focus is the analysis, modeling, and optimization of complex systems, with emphasis on computation inspired by natural systems.

She holds a U.S. patent and has authored more than 200 refereed publications, which have accumulated over 10,000 citations with an h-index of 44 (Google Scholar). Dr. Smith has been a principal investigator on over $7.5 million of sponsored research and is a three-time Fulbright Scholar. She is a Fellow of IEEE and IISE and a registered Professional Engineer. She serves on the IEEE CIS AdCom, is an Associate Editor of IEEE Transactions on Evolutionary Computation and IEEE Transactions on Automation Science and Engineering, and is an IEEE CIS Distinguished Lecturer. Dr. Smith is an Area Editor of the INFORMS Journal on Computing and of Computers & Operations Research.

‘Memes’ as Building Blocks of Intelligence: From Refinement to Higher-Order Representations in Transfer and Multitask Optimization

Place and date not defined

Yew-Soon Ong

We are in an era where a plethora of computational problem-solving methodologies are being invented to tackle the diverse problems that are of interest to researchers. Some of these problems have emerged from real-life scenarios, while others are theoretically motivated with the aim of stretching the boundaries of existing computational algorithms. Nevertheless, it is becoming increasingly clear that the development of an overarching conceptual paradigm that dissolves the barriers among these techniques will facilitate the unified advancement of algorithmic research towards the common goal of solving the big technological challenges of the present day. Against this backdrop, an interesting parallel can be drawn between the notion of memes from a socio-cultural perspective and that from a computational standpoint. The platform for memes in the former resides in the human mind, while in the latter, it manifests in the form of computational models, higher-order representations, and/or algorithmic subroutines that can capture recurring patterns in diverse problem-solving exercises. The phrase Memetic Computing has accordingly surfaced in recent years, emerging as a discipline that focuses on the computational realization of memes as units of information about the underlying problem at hand, analogous to the behaviour of memes in a socio-cultural context. In this talk, we walk through the advancements of memetic computation over the years, beginning with memes as simple refinement procedures in population-based search, to the advent of transfer optimization and multitasking, where memes appear as fundamental building blocks of transferable knowledge. In particular, it is worth noting that traditional optimization techniques seldom learn from experience, and rarely are they capable of automatically harnessing the similarity between problems or tasks. It is only very recently that the notion of Transfer & Multifactorial Optimization has been developed to explore the potential of memes in this regard. The basic idea is to facilitate knowledge transfer across problems in a simple and elegant manner, thereby opening doors to new research opportunities in EC, dealing, in particular, with the exploitation of underlying synergies between seemingly distinct tasks. Last but not least, a variety of real-world applications of Transfer & Multifactorial Optimization will be presented and demonstrated during the talk.

Yew-Soon Ong is Professor and Chair of the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He is Director of the Data Science and Artificial Intelligence Research Center, Director of the A*Star SIMTECH-NTU Joint Lab on Complex Systems and Principal Investigator of the Data Analytics & Complex System Programme in the Rolls-Royce@NTU Corporate Lab. He received his PhD from University of Southampton, UK.

Dr. Ong is the founding Editor-in-Chief of the IEEE Transactions on Emerging Topics in Computational Intelligence, founding Technical Editor-in-Chief of the Memetic Computing Journal (Springer), and an Associate Editor of IEEE Transactions on Evolutionary Computation, IEEE Transactions on Neural Networks & Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Big Data, and others. His research interests in computational intelligence span memetic computation, complex design optimization, intelligent agents, and Big Data analytics. His research grants comprise external funding from both national and international partners exceeding 15 million USD. Dr. Ong's research advancement in computer science has earned him recognition as a Thomson Reuters Highly Cited Researcher for two consecutive years (2015 and 2016) and a place among the World's Most Influential Scientific Minds. He received the 2015 IEEE Computational Intelligence Magazine Outstanding Paper Award and the 2012 IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work pertaining to Memetic Computation.

Algorithms that play and design games

Place and date not defined

Julian Togelius

The race is on to develop algorithms that can play a wide variety of games as well as humans, or even better. We do this both to understand how well our algorithms can solve tasks that are designed specifically to be hard for humans to solve, and to find software that can help with game development and design through automatic testing and adaptation. After recent successes with Poker and Go, the attention is now shifting to video games such as DOOM, DoTA, and StarCraft, which provide a fresh set of challenges. Even more challenging is designing agents that can play not just a single game, but any game you give it. A different kind of challenge is that of designing algorithms that can design games, on their own or together with human designers, rather than play them. I will present several examples of how methods from the computational intelligence toolbox, including evolutionary computation, neural networks, and Monte Carlo Tree Search, can be adapted to address these formidable research challenges.

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of the IEEE Transactions on Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.

On Parallel Evolutionary Algorithms for Multilevel Optimization.

Place and date not defined

Helio J.C. Barbosa

Research on multilevel programming techniques is strongly motivated by real-world applications in diverse areas such as economics, operations research, and engineering.

Multilevel optimization problems have a hierarchical structure in which, at each level, one or more agents (decision makers), each controlling a subset of the variables, seek to optimize their particular objective function, not necessarily cooperatively, subject to given constraints and taking into account the decisions of the agents at upper levels, and often at the same level.

Due to the complexity of these problems, evolutionary computation is a candidate tool for overcoming the many challenges that arise, such as non-convexity and non-differentiability, large numbers of variables and/or constraints, and mixed types of design variables.

In this talk, ways to exploit the parallel nature of evolutionary techniques will be discussed in order to construct distributed computational algorithms for tackling multilevel optimization problems.
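
To make the hierarchical structure concrete, here is a minimal nested (1+1)-style evolutionary sketch for a toy bilevel problem (my own illustrative example, not an algorithm from the talk): the follower drives y toward x, and the leader, anticipating that reaction, minimizes its own objective.

```python
import random

random.seed(1)  # for reproducibility of this sketch

def follower_best(x, inner_iters=200):
    """Lower level: for fixed x, the follower minimizes f(x, y) = (y - x)^2
    with a simple (1+1)-style evolutionary search, so y*(x) is close to x."""
    y = random.uniform(-2.0, 2.0)
    fy = (y - x) ** 2
    for _ in range(inner_iters):
        cand = y + random.gauss(0.0, 0.3)
        fc = (cand - x) ** 2
        if fc < fy:
            y, fy = cand, fc
    return y

def leader_objective(x):
    """Upper level: F(x, y*(x)) = x^2 + (y*(x) - 1)^2, evaluated by running
    the follower's search for every candidate x."""
    return x ** 2 + (follower_best(x) - 1.0) ** 2

def solve_bilevel(outer_iters=300):
    """Leader's (1+1)-style search over x, with a nested follower search."""
    x = random.uniform(-2.0, 2.0)
    Fx = leader_objective(x)
    for _ in range(outer_iters):
        cand = x + random.gauss(0.0, 0.3)
        Fc = leader_objective(cand)
        if Fc < Fx:
            x, Fx = cand, Fc
    return x

# Since y*(x) = x, the leader effectively minimizes x^2 + (x - 1)^2,
# whose optimum is x* = 0.5; the nested search should land nearby.
x_star = solve_bilevel()
```

The two loops are independent searches, which is what makes the approach naturally parallel: candidate x values, and the inner searches they trigger, can be evaluated on separate processors.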

Helio J.C. Barbosa is a Senior Technologist at the Laboratório Nacional de Computação Científica, Brazil. He received a Civil Engineering degree (1974) from the Federal University of Juiz de Fora, where he is an Associate Professor in the Computer Science Department, and M.Sc. (1978) and D.Sc. (1986) degrees in Civil Engineering from the Federal University of Rio de Janeiro, Brazil.

During 1988-1990 he was a visiting scholar at the Division of Applied Mechanics, Stanford University, USA, working on numerical analysis of finite element methods. He got involved with the Evolutionary Computation field in the early nineties. He is a regular reviewer for the major conferences and journals in the area, and member of the IEEE Evolutionary Computation Technical Committee. Currently he is mainly interested in the design and application of metaheuristics in engineering, operations research, and biology.


Rise of Evolutionary Multi-Criterion Optimization: Destined or Directed?

Place and date not defined

Kalyanmoy Deb

Any bibliometric analysis of Computational Intelligence (CI) publication streams today will support the fact that evolutionary multi-criterion optimization (EMO) is one of CI’s fastest-growing fields. EMO has proliferated into industry through dedicated EMO software products; EMO has attracted young researchers to build their careers on; EMO has even taken CI outside its realm and made CI accessible to various non-engineering and non-CS fields. The success of such a field depends on key and sustained research contributions by many researchers. In this invited talk, we shall give an account of the rise of the EMO field over the past 25 years, highlighting key events and focus areas along the way. We shall also discuss whether EMO and its success were inevitable or were orchestrated by pioneering EMO researchers. Lessons learned from such an analysis can give newcomers a better perspective on their field of research, provide clues for future developments, and offer useful ideas to other emerging fields.

Kalyanmoy Deb is the Koenig Endowed Chair Professor in the Department of Electrical and Computer Engineering at Michigan State University, USA. Prof. Deb's research interests are in evolutionary optimization and its application to multi-criterion optimization, modeling, and machine learning. He has been a visiting professor at various universities across the world, including IITs in India, Aalto University in Finland, the University of Skövde in Sweden, and Nanyang Technological University in Singapore. He was awarded the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, the Edgeworth-Pareto Award, the Bhatnagar Prize in Engineering Sciences, and the Bessel Research Award from Germany. He is a fellow of IEEE, ASME, and three Indian science and engineering academies. He has published over 475 research papers with over 104,000 Google Scholar citations and an h-index of 104. He serves on the editorial boards of 19 major international journals.


Fuzzy (F-) transforms — the efficient tool for (even big) data preprocessing.

Place and date not defined


Irina Perfilieva

The F-transform provides a (dimensionally) reduced representation of original data. It is based on a granulation of the domain (a fuzzy partition) and gives a tractable image of the original data.

Main characteristics with respect to input data: size reduction, noise removal, invariance to geometrical transformations, knowledge transfer from conventional mathematics, fast computation.

The F-transform has been applied to: image processing, computer vision, pattern recognition, time series analysis and forecasting, numerical methods for differential equations, deep learning neural networks.
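
As an illustration of the direct and inverse F-transform (a standard textbook construction, not code from the talk), the sketch below builds a uniform triangular fuzzy partition of [0, 1], compresses 201 noisy samples into 11 F-transform components, and reconstructs a smoothed signal.

```python
import numpy as np

def triangular_partition(n_nodes, n_samples):
    """Uniform Ruspini fuzzy partition of [0, 1]: triangular membership
    functions A_k centred at equally spaced nodes and summing to 1."""
    x = np.linspace(0.0, 1.0, n_samples)
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
    return x, A

def f_transform(f_vals, A):
    """Direct F-transform: component F_k is the weighted mean of f over A_k."""
    return (A @ f_vals) / A.sum(axis=1)

def inverse_f_transform(F, A):
    """Inverse F-transform: blend the components back via the memberships."""
    return F @ A  # columns of A sum to 1 (Ruspini condition)

x, A = triangular_partition(n_nodes=11, n_samples=201)
rng = np.random.default_rng(0)
f_noisy = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)
F = f_transform(f_noisy, A)          # 201 samples -> 11 components
f_hat = inverse_f_transform(F, A)    # smooth reconstruction, noise damped
```

The 11 components are the reduced representation; refining the partition trades compression for reconstruction accuracy.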

In this talk, I will present the theoretical background and applications of the proposed technique. I will discuss the current research being carried out within the Computer Vision group of the Institute for Research and Applications of Fuzzy Modeling, University of Ostrava.

Professor Irina Perfilieva, Ph.D., received the M.S. (1975) and Ph.D. (1980) degrees in Applied Mathematics from Lomonosov Moscow State University, Russia. At present, she is a full professor of Applied Mathematics at the University of Ostrava, Czech Republic, where she also heads the Theoretical Research Department of the Institute for Research and Applications of Fuzzy Modeling. She is the author or co-author of six books on mathematical principles of fuzzy sets and fuzzy logic and their applications, and an editor of many special issues of scientific journals. She has published over 270 papers in the areas of multi-valued logic, fuzzy logic, fuzzy approximation, and fuzzy relation equations.

Her scientific research is internationally recognized. She is an area editor of the IEEE Transactions on Fuzzy Systems and the International Journal of Computational Intelligence Systems, and an editorial board member of the following journals: Fuzzy Sets and Systems, Iranian Journal of Fuzzy Systems, Journal of Uncertain Systems, Journal of Intelligent Technologies and Applied Statistics, and Fuzzy Information and Engineering. She serves on the program committees of the most prestigious international conferences and congresses in the area of fuzzy and knowledge-based systems. For her long-term scientific achievements she received an award at the International FLINS 2010 Conference on Foundations and Applications of Computational Intelligence, and the memorial Da Ruan Award for the best paper at FLINS 2012. In 2013, she was elected an EUSFLAT Honorary Member. She received a special prize at the Seoul International Inventions Fair 2010. She holds two patents.

Her scientific interests lie in the area of applied mathematics and mathematical modeling, where she successfully uses modern as well as classical approaches. For the last five years she has been working in the area of image processing and pattern recognition.

Decomposable Graphical Models: On Learning, Fusion, and Revision.

Place and date not defined

Rudolf Kruse

Decomposable graphical models are of high relevance for complex industrial applications. The Markov network approach is one of their most prominent representatives and an important tool for structuring uncertain knowledge about high-dimensional domains, but relational and possibilistic decompositions also turn out to be useful for making reasoning in such domains feasible. Compared to conditioning the decomposable model on given evidence, learning the structure of the model from data, as well as fusing several decomposable models, is much more complicated. The important belief change operation of revision has been almost entirely disregarded in the past, although the problem of inconsistencies is of utmost relevance for real-world applications. In this talk these problems are addressed by presenting successful complex industrial applications.
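
To ground the idea of decomposition (a standard textbook example, not material from the talk): for a chain A - B - C, the joint distribution factorizes over the cliques {A, B} and {B, C} with separator {B}, so reasoning only ever touches small clique tables rather than the full joint.

```python
import numpy as np

# A Markov chain A -> B -> C yields a decomposable model with cliques
# {A, B} and {B, C} and separator {B}:  P(a, b, c) = P(a, b) P(b, c) / P(b)
p_a = np.array([0.6, 0.4])
p_b_given_a = np.array([[0.7, 0.3],
                        [0.2, 0.8]])   # rows: a, columns: b
p_c_given_b = np.array([[0.9, 0.1],
                        [0.4, 0.6]])   # rows: b, columns: c

# Full joint, built once here only to check the factorization.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]

# Clique marginals and the separator marginal are all the model stores.
p_ab = joint.sum(axis=2)
p_bc = joint.sum(axis=0)
p_b = joint.sum(axis=(0, 2))

# The decomposition reproduces the full joint exactly.
reconstructed = p_ab[:, :, None] * p_bc[None, :, :] / p_b[None, :, None]
```

For two binary clique tables this saves little, but for dozens of variables the clique tables stay small while the full joint would be astronomically large.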

Rudolf Kruse is Professor at the Faculty of Computer Science in the Otto-von-Guericke University of Magdeburg in Germany. He obtained his Ph.D. and his Habilitation in Mathematics from the Technical University of Braunschweig in 1980 and 1984 respectively. Following a stay at the Fraunhofer Gesellschaft, he joined the Technical University of Braunschweig as a professor of computer science in 1986. From 1996 to 2017 he created and headed the Computational Intelligence Group of the Faculty of Computer Science in the Otto-Von-Guericke University Magdeburg.

He has coauthored 15 monographs and 25 books as well as more than 350 peer-refereed scientific publications in various areas with more than 15000 citations and h index of 50 in Google Scholar. He is associate editor of several scientific journals. Rudolf Kruse is Fellow of the International Fuzzy Systems Association (IFSA), Fellow of the European Association for Artificial Intelligence (EURAI/ECCAI ), and Fellow of the Institute of Electrical and Electronics Engineers (IEEE).

His group is successful in various industrial applications in cooperation with companies such as Volkswagen, SAP, Daimler, and British Telecom. His current main research interests include data science and intelligent systems.

On decision and optimization models and their applications.

Place and date not defined

José L. Verdegay

The importance of decision problems and optimization problems in all areas is nowadays beyond doubt. Notwithstanding this importance, one often tends to think that these two fields travel along different routes, when in fact the relationship between them is close, even symbiotic. To illustrate this dependence, this talk will present optimization problems, essentially but not exclusively those of Mathematical Programming, as particular cases of decision problems described directly in terms of the type of information available. For this purpose, a General Decision Problem is presented as a sextet (X, E, f, ≤, I, K) comprising the set of actions to be taken, the states of nature, the results, the relation ordering the results, the available information, and the framework in which the decision maker has to carry out their activities, respectively. Then, depending on the characteristics that each of these elements takes, different optimization problems may arise. We will focus on the case of information of a fuzzy nature, from which different models and problems of fuzzy optimization (Fuzzy Mathematical Programming problems, fuzzy-sets-based metaheuristics, …) will appear. Recent applications of these models as well as future research lines associated with them will be described.

José Luis Verdegay received the M.S. degree in mathematics and the Ph.D. degree in sciences from the University of Granada, Granada, Spain, in 1975 and 1981, respectively. He is a full Professor at the Department of Computer Science and Artificial Intelligence (DECSAI), University of Granada, Spain. He has published twenty-nine books and more than 350 scientific and technical papers in leading scientific journals, and has been advisor of 20 Ph.D. dissertations. He has served on many international program committees and has attended numerous national and international conferences, congresses, and workshops. He has been Principal Researcher in a variety of national and international research and educational projects; he is currently conducting a research project on “Models of Optimization and Decision: Applications and Solutions at 3 Different Environments” and coordinating the Ibero-American research network in Decision and Optimization models (iMODA). He is also a member of the editorial boards of several leading international journals, for instance Fuzzy Sets and Systems, Fuzzy Optimization and Decision Making, IJUFKS, and Memetic Computing. Professor Verdegay is an IFSA fellow, an IEEE Senior Member, and an Honorary Member of the Cuban Academy of Mathematics and Computation. Besides, he holds the featured position of Invited Professor at the Technical University of Havana (Cuba), the Central University of Las Villas (Santa Clara, Cuba), and the University of Holguín (Cuba), and is a Distinguished Guest of the National University of Trujillo (Perú). His current scientific interests are in soft computing, fuzzy sets and systems, decision support systems, metaheuristic algorithms, nature-inspired systems, and all their applications to real-world problems.

Fuzzy Associative Memories: Theory and Applications

Place and date not defined

Marcos Valle

Associative memories are models inspired by the human brain's ability to recall information by association. We speak of a fuzzy associative memory when the associative memory is designed for the storage and recall of fuzzy sets. Apart from the biological motivation, a fuzzy associative memory is a continuous fuzzy system that maps similar inputs to similar outputs.

We shall begin this talk by reviewing the matrix-based fuzzy associative memories, which are closely related to the compositional rule of inference. Examples of matrix-based fuzzy associative memories include the famous models of Kosko and the implicative fuzzy associative memories. We point out that many matrix-based fuzzy associative memories can be embedded into the general class of fuzzy morphological associative memories. In the light of this remark, we review some key concepts of mathematical morphology, a theory widely used for image processing and analysis. Particular attention is given to auto-associative fuzzy morphological memories, for which we provide theoretical results concerning the storage capacity, noise tolerance, and fixed points. Afterward, we shall address recent advances in fuzzy associative memories. Furthermore, we shall present some applications of fuzzy associative memories including time series prediction, pattern recognition, and computer vision.
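
As a minimal illustration of the matrix-based models mentioned above (Kosko's correlation-minimum encoding, recalled through the max-min compositional rule of inference; illustrative code, not from the talk):

```python
import numpy as np

def kosko_fam(a, b):
    """Store one fuzzy pair (a, b) with correlation-minimum encoding:
    W_ij = min(a_i, b_j)."""
    return np.minimum(a[:, None], b[None, :])

def maxmin_recall(a, W):
    """Max-min compositional rule of inference: b_j = max_i min(a_i, W_ij)."""
    return np.max(np.minimum(a[:, None], W), axis=0)

a = np.array([0.2, 1.0, 0.5])   # a normal fuzzy set (height 1)
b = np.array([0.6, 0.3, 0.9])
W = kosko_fam(a, b)
recalled = maxmin_recall(a, W)  # recovers b exactly for a normal input
```

Since max_i min(a_i, b_j) = min(b_j, max_i a_i) and a is normal, the stored output is recovered perfectly; when several pairs are superposed by a pointwise maximum of such matrices, crosstalk can appear, which is where the implicative and morphological models improve matters.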

Marcos Eduardo Valle received his master's and Ph.D. degrees in applied mathematics from the University of Campinas in 2005 and 2007, respectively. He previously worked at the University of Londrina, Brazil. Currently, he is an assistant professor at the Department of Applied Mathematics of the University of Campinas, Brazil. His research interests include associative memories, fuzzy set theory, lattice theory, mathematical morphology, hypercomplex-valued neural networks, pattern recognition, and data recovery. He is a member of the Mathematical Imaging and Computational Intelligence Laboratory at the Institute of Mathematics, Statistics, and Scientific Computing. Valle's primary research contributions to fuzzy associative memories include the implicative fuzzy associative memories and the class of fuzzy morphological associative memories. Marcos has published more than 60 articles, including book chapters, journal manuscripts, and conference proceedings.

Fuzzy management of data and information quality.

Place and date not defined

Bernadette Bouchon-Meunier

The management of big data is certainly one of the most important challenges in the modern digital society. Beyond the problems of Volume, Velocity and Variety (or heterogeneity) classically mentioned as the three V’s in all analyses of big data, it is important to pay attention to the fourth V, usually called Veracity in a broad sense, related to uncertainty in data. In this regard, we differentiate data quality from information quality. The first depends on the completeness, accuracy, errors, and validity of the available data. The second is based on the truth attached to pieces of information as a function of the confidence of sources in the information they provide and their reliability, as well as the level of inconsistency in the obtained information and its suitability for the final user's needs. The analysis of data and information quality is complex and depends on intertwined objective and subjective factors, according to the nature of the data: open data or temporal data, collaborative information, news streams, or data acquired from connected devices, for instance.

Statistics and statistical machine learning are preeminent in so-called data science. We highlight the importance of non-statistical models for coping with the drawbacks we mentioned, mainly fuzzy-set and possibility-based methods, which are particularly useful for dealing with subjective criteria and for providing easily interpretable information. We also mention solutions based on evidence-based methods, interval computation, or non-classical logics. We review existing methods and provide examples of non-statistical models, pointing out the interest of opening new possibilities to solve the difficult problem of quality in big data and related information.

Bernadette Bouchon-Meunier is a director of research emeritus at the National Centre for Scientific Research, the former head of the department of Databases and Machine Learning in the Computer Science Laboratory of the University Pierre et Marie Curie-Paris 6 (LIP6). She is the Editor-in-Chief of the International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, the (co)-editor of 27 books, and the (co)-author of five. She has (co-) authored more than 400 papers on approximate and similarity-based reasoning, as well as the application of fuzzy logic and machine learning techniques to decision-making, data mining, risk forecasting, information retrieval, user modelling, sensorial and emotional information processing.

Co-executive director of the IPMU International Conference held every other year since 1986, she also served as the FUZZ-IEEE 2010 and FUZZ-IEEE 2013 Program Chair, the IEEE Symposium Series on Computational Intelligence (SSCI 2011) General Chair and the FUZZ-IEEE 2012 Conference Chair, as well as the Honorary chair of IEEE SSCI 2013, IEEE CIVEMSA 2013 and IEEE CIVEMSA 2017. She is currently the IEEE Computational Intelligence Society Vice-President for Conferences, the IEEE France Section Vice-President for Chapters and the IEEE France Section Computational Intelligence chapter vice-chair. She is an IEEE fellow and an International Fuzzy Systems Association fellow. She received the IEEE Computational Intelligence Society Meritorious Service Award in 2012 and she has been selected for the 2018 IEEE Computational Intelligence Society Fuzzy Systems Pioneer award.


Applications of Computational Intelligence in Biomedicine

Place and date not defined


Gary B. Fogel

At its core, the field of biomedicine focuses on an understanding of molecular processes, their possible physiological pathologies, and the resulting medical treatment. This includes diagnostics that can classify individuals and their risk of disease based on molecular information, or, for instance, determination of the appropriate treatment for individuals who are already afflicted with a disease. Given an overwhelming abundance of information at the molecular level, there exists a growing opportunity to use computational intelligence to improve our basic understanding of disease processes. In this public lecture I will provide examples of how these approaches can be used to help inform and lead to new medical opportunities. I will also review some of the hurdles that remain for the use of these tools in clinical settings.

Dr. Gary Fogel is Chief Executive Officer of Natural Selection, Inc. (NSI) in San Diego, California, an internationally recognized award-winning company with a 25-year history of applied computational intelligence. Dr. Fogel received his Ph.D. in biology from U.C. Los Angeles, focusing on the evolution of histone proteins. His more recent efforts include many applications of computational intelligence to biology, chemistry, and medicine, from genomics to clinical drug development. Dr. Fogel has over 140 publications in technical journals, conferences, and other venues and holds three patents. He also helped establish the IEEE CIS Bioinformatics and Bioengineering Technical Committee and the IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology. He has served on the editorial boards of 10 journals, including as a founding associate editor for the IEEE Transactions on Computational Biology and Bioinformatics and the IEEE Transactions on Emerging Topics in Computational Intelligence. He currently serves as Editor-in-Chief of the journal BioSystems. Dr. Fogel is an IEEE Fellow and member of the IEEE CIS Administrative Committee.