|8:00 – 10:00||Session 1||CEC1_01||CEC2_01||CEC3_01||CEC4_01||FUZZ5_01 (Part 1)||FUZZ6_01 (Part 1)||HYB7_01 (Part 1)||IJCNN1_01 (Part 1)||IJCNN1_02 (Part 1)||IJCNN3_02 (Part 1)||IJCNN4_01 (Part 1)|
|10:15 – 12:15||Session 2||CEC1_02||CEC2_02||CEC3_02||CEC4_02||FUZZ5_01 (Part 2)||FUZZ6_01 (Part 2)||HYB7_01 (Part 2)||IJCNN1_01 (Part 2)||IJCNN1_02 (Part 2)||IJCNN3_02 (Part 2)||IJCNN4_01 (Part 2)|
|13:00 – 15:00||Session 3||CEC1_03||CEC2_03 (Part 1)||CEC3_03||CEC4_03 (Part 1)||FUZZ5_02||FUZZ6_02 (Part 1)||HYB7_02||IJCNN1_03||IJCNN1_02 (Part 3)||IJCNN3_01||IJCNN4_02|
|15:00 – 15:15||Coffee|
|15:15 – 17:15||Session 4||CEC1_04||CEC2_03 (Part 2)||CEC3_04||CEC4_03 (Part 2)||FUZZ5_03||FUZZ6_02 (Part 2)||HYB7_03||IJCNN1_04||IJCNN1_02 (Part 4)||IJCNN3_03||IJCNN4_03 (Part 1)|
|17:15 – 19:15||Session 5||CEC1_05||CEC2_04||CEC3_05||CEC4_05||FUZZ5_04||IJCNN3_04||HYB7_04||IJCNN1_05||IJCNN2_01||IJCNN3_05||IJCNN4_03 (Part 2)|
IJCNN1_01 Deep Learning for Sequences
IJCNN1_02 Deep Recurrent Neural Networks: Training and Applications in the Modeling and Control of Nonlinear Systems, Signal Processing and Robotics.
IJCNN1_03 Prediction, Interaction, and User Behaviour
IJCNN1_04 Entropic Evaluation of Classification. A hands-on, get-dirty introduction
IJCNN1_05 Machine Learning for Spark Streaming with StreamDM
IJCNN2_01 Learning class imbalanced data streams
IJCNN3_01 Adaptive Resonance Theory in Social Media Clustering with Applications
IJCNN3_02 Artificial Intelligence in Business (Changed to: Reinforcement Learning: Principles, Algorithms and Applications)
IJCNN3_03 Non-Iterative Learning Methods for Classification and Forecasting
IJCNN3_04 Graph based and Topological Unsupervised Machine Learning
IJCNN3_05 Methods and Resources for Texture Classification
IJCNN4_01 Tutorial on dynamic classifier selection: recent advances and perspectives
IJCNN4_02 Deep, Transfer and Emergent Reinforcement Learning Techniques for Intelligent Agents
IJCNN4_03 Neurosymbolic Learning and Reasoning with Constraints
Title: Deep Learning for Sequences (IJCNN1_01)
Organized by Alessandro Sperduti
With the diffusion of cheap sensors, sensor-equipped devices (e.g., drones), and sensor networks (such as the Internet of Things), as well as the development of inexpensive human-machine interaction interfaces, the ability to quickly and effectively process sequential data is becoming more and more important. Many tasks may benefit from advancements in this field, ranging from monitoring and classification of human behavior to prediction of future events. Many approaches to learning in sequential domains have been proposed in the past, ranging from linear models to early models of Recurrent Neural Networks, up to more recent Deep Learning solutions. The tutorial will start with the presentation of relevant sequential domains, introducing scenarios involving different types of sequences (e.g., symbolic sequences, time series, multivariate sequences) and tasks (e.g., classification, prediction, transduction). Linear models are introduced first, including linear auto-encoders for sequences.
Subsequently, non-linear models and related training algorithms are recalled, starting from early versions of Recurrent Neural Networks. Computational problems and proposed solutions will be presented, including novel linear-based pre-training approaches. Finally, more recent Deep Learning models will be discussed. The tutorial will close with some theoretical considerations on the relationships between feed-forward and recurrent neural networks, and a discussion about dealing with more complex data (e.g., trees and graphs).
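As a minimal, self-contained illustration of the recurrent models recalled above (a sketch for readers, not material from the tutorial itself), the following runs an Elman-style RNN forward pass over a toy sequence in NumPy; all dimensions, weight scales and names are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): input size, hidden size, output size, length
n_in, n_hidden, n_out, T = 3, 5, 2, 4

# Randomly initialised parameters
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrence)
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden-to-output

def rnn_forward(xs):
    """Forward pass of an Elman RNN over a sequence xs of shape (T, n_in)."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # hidden state carries sequence history
        outputs.append(W_hy @ h)
    return np.array(outputs)

xs = rng.normal(size=(T, n_in))
ys = rnn_forward(xs)
print(ys.shape)  # one output vector per time step
```

Training such a model with Back Propagation Through Time unrolls exactly this loop, which is the source of the computational problems (e.g., vanishing gradients) the tutorial discusses.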
PhD students, post-docs, and researchers in neural networks and related areas. Prerequisites: basic algebra, calculus, and probability at the introductory college level.
Title: Deep Recurrent Neural Networks: Training and Applications in the Modeling and Control of Nonlinear Systems, Signal Processing and Robotics (IJCNN1_02)
Organized by Antonio Moran
The tutorial presents recent developments in the training and application of deep recurrent neural networks for solving problems in signal processing, systems modeling, parameter identification, systems control, and the positioning of articulated and mobile robots, among other applications. The tutorial analyzes algorithms for training recurrent neural networks, focusing on the Dynamic Back Propagation and Back Propagation Through Time algorithms. It also presents the integration of knowledge and training for configuring fuzzy-neural networks. All algorithms and applications are implemented in Matlab to verify their effectiveness and realizability.
The potential audience is people with a background in system dynamics, connectionist systems, and learning algorithms. The tutorial is expected to interest master's students, PhD students, engineers, and scientists working on deep learning and neural networks for solving complex problems.
Title: Prediction, Interaction, and User Behaviour (IJCNN1_03)
Organized by Charles Martin, Enrique García Ceja, Kai Olav Ellefsen and Jim Tørresen
The goal of this tutorial is to apply predictive machine learning models to human behaviour through a human computer interface. We will introduce participants to the key stages for developing predictive interaction in user-facing technologies: collecting and identifying data, applying machine learning models, and developing predictive interactions. Many of us are aware of recent advances in deep neural networks (DNNs) and other machine learning (ML) techniques; however, it is not always clear how we can apply these techniques in interactive and real-time applications. Apart from well-known examples such as image classification and speech recognition, what else can predictive ML models be used for? How can these computational intelligence techniques be deployed to help users? In this tutorial, we will show that ML models can be applied to many interactive applications to enhance users’ experience and engagement. We will demonstrate how sensor and user interaction data can be collected and investigated, modelled using classical ML and DNNs, and where predictions of these models can feed back into an interface. We will walk through these processes using live-coded demonstrations with Python code in Jupyter Notebooks so participants will be able to see our investigations live and take the example code home to apply in their own projects.
Our demonstrations will be motivated from examples from our own research in creativity support tools, robotics, and modelling user behaviour. In creativity, we will show how streams of interaction data from a creative musical interface can be modelled with deep recurrent neural networks (RNNs). From this data, we can predict users' future interactions, or the potential interactions of other users. This enables us to "fill in" parts of a tablet-based musical ensemble when other users are not available, or to continue a user's composition with potential musical parts. In user behaviour, we will show how smartphone sensor data can be used to infer user contextual information such as physical activities. This contextual information can be used to trigger interactions in smart home or internet of things (IoT) environments, to help tune interactive applications to user’s needs, or to help track health data.
The primary audience for this tutorial is researchers, students, and computing practitioners interested in applying ML models but without the background in AI or ML to get started. The demonstrations assume some programming knowledge, but no specialist knowledge of machine learning. Existing practitioners in AI may also be interested to see how newer models, such as DNNs, can be integrated into interactive applications.
Title: Entropic Evaluation of Classification. A hands-on, get-dirty introduction (IJCNN1_04)
Organized by Francisco Valverde and Carmen Peláez
To evaluate supervised classification tasks, we have two different data sources:
- The observation vectors themselves, used to infer the classifier that produces the predicted labels.
- The true labels of the observations, compared to the predicted labels in the form of a confusion matrix.
Two main kinds of measures analyze the confusion matrix: count-based measures (accuracy, TPR, FPR and derived measures such as AUC) and entropy-based measures (variation of information, KL-divergence, mutual information and derived measures). The first kind uses the minimization of error counts as a heuristic to improve the quality of the classifier, while the latter tries to optimize the flow of information between input and output label distributions (e.g., minimize variation of information, maximize mutual information). The purpose of the tutorial is to train attendees in using a visual and numerical standalone tool, the Entropy Triangle, to carry out entropy-based evaluation and to compare it with accuracy-based evaluation.
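The entropy-based side of this comparison can be sketched in a few lines (an illustration only, not the Entropy Triangle tool itself): mutual information and variation of information computed directly from a confusion matrix, here a made-up 2-class one.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def confusion_information(cm):
    """Entropy-based measures from a confusion matrix (rows: true, cols: predicted)."""
    p = cm / cm.sum()          # joint distribution P(true, predicted)
    p_true = p.sum(axis=1)     # marginal of true labels
    p_pred = p.sum(axis=0)     # marginal of predicted labels
    h_true, h_pred, h_joint = entropy(p_true), entropy(p_pred), entropy(p.ravel())
    mi = h_true + h_pred - h_joint  # mutual information between true and predicted
    vi = h_joint - mi               # variation of information
    return mi, vi

# A hypothetical 2-class confusion matrix
cm = np.array([[40, 10],
               [5, 45]])
mi, vi = confusion_information(cm)
accuracy = np.trace(cm) / cm.sum()
```

Two classifiers with the same accuracy can transmit very different amounts of information from true to predicted labels, which is exactly the distinction the tutorial's tool visualizes.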
This tutorial is intended for practitioners in machine learning and data science who have ever been baffled by the evaluation of their favorite pet classifier. Since the technique is technology-agnostic, it can be used with any classifier and any supervised classification data whatsoever (and indeed with many more types of datasets!). The not-so-gory details are easy to understand for researchers and students who have ever seen and understood the concepts of entropy, mutual information, and Venn diagrams.
Title: Machine Learning for Spark Streaming with StreamDM (IJCNN1_05)
Organized by Heitor Gomes and Albert Bifet
The main goal of this tutorial is to introduce attendees to big data stream mining theory and practice. We will use the StreamDM framework to illustrate concepts and also to demonstrate how data stream mining pipelines can be deployed using StreamDM.
The tools and algorithms in StreamDM are specifically designed for the data stream setting. Due to the large amount of data that is created – and must be processed – in real-time streams, such methods need to be extremely time-efficient while using very small amounts of memory. StreamDM is the first library to include advanced stream mining algorithms for Spark Streaming, and is intended to be the open-source gathering point of the research and implementation of data streams, while designed to allow practical deployments on real-world datasets.
This tutorial is of interest to both researchers and industry practitioners. Researchers can take advantage of the StreamDM API to develop experiments with novel big data stream mining algorithms. Practitioners can identify opportunities to deploy data stream mining solutions on top of a Spark cluster.
Heitor Gomes is a researcher at LTCI, Telecom ParisTech (Paris, France) and a visiting researcher at INESC TEC (Porto, Portugal). His main research area is Machine Learning, especially evolving data streams, concept drift, ensemble methods and big data streams. He is an active contributor to the open data stream mining project MOA and a co-leader of the StreamDM project.
Albert Bifet is a Professor at LTCI, Telecom ParisTech and Head of the Data, Intelligence and Graphs (DIG) Group at Telecom ParisTech. His research focuses on Data Stream mining, Big Data Machine Learning and Artificial Intelligence. The problems he investigates are motivated by large scale data, the Internet of Things (IoT), and Big Data Science. He co-leads the open source projects MOA (Massive On-line Analysis), Apache SAMOA (Scalable Advanced Massive Online Analysis) and StreamDM.
Title: Learning class imbalanced data streams (IJCNN2_01)
Organized by Leandro L. Minku, Shuo Wang and Giacomo Boracchi
Many real-world applications are characterized by class imbalanced data streams, i.e., data arrive over time and exhibit classes that are not equally represented in either the training or test sequences. Such applications exist in many domains, such as risk management, anomaly detection, software engineering, social media mining, and recommender systems. Two specific examples of classification problems with imbalanced data streams are fraud detection in credit card transactions (since genuine transactions far outnumber frauds) and fault detection in operating machinery (since faults are rare). Both class imbalance and streaming data need to be properly handled during the training stage. Imbalanced data require either resampling/weighting techniques or algorithm-level adjustments to avoid learning a trivial model that assigns each sample to the majority class. Classification over streaming data requires continuous adaptation strategies to train the predictive model over time in an efficient manner and maintain predictive accuracy. An increasing number of papers addressing the combined issues of class imbalance and streaming data show that learning imbalanced data streams requires special care and ad-hoc solutions to efficiently and effectively address both challenges. The goal of this tutorial is to provide a “good starting kit” to PhD students, researchers and practitioners who have never dealt with learning tasks involving imbalanced data streams before, as well as an up-to-date overview of the state-of-the-art solutions. In particular, the tutorial aims at:
- Presenting a general formulation of the problem of learning class imbalanced data streams.
- Giving examples of real-world class imbalanced data streams and the challenges that they pose.
- Presenting a taxonomy of existing approaches, including a detailed explanation of key existing algorithms.
- Introducing suitable metrics for evaluating approaches for imbalanced data streams.
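On the metrics point, the following sketch (a hypothetical example, not taken from the tutorial) tracks the prequential G-mean, the geometric mean of per-class recalls, which exposes a trivial majority-class model that plain accuracy rewards.

```python
import numpy as np
from collections import defaultdict

class PrequentialGMean:
    """Track per-class recall over a stream and report the G-mean,
    a metric that, unlike accuracy, is not dominated by the majority class."""
    def __init__(self):
        self.seen = defaultdict(int)     # examples observed per true class
        self.correct = defaultdict(int)  # correct predictions per true class

    def update(self, y_true, y_pred):
        self.seen[y_true] += 1
        if y_true == y_pred:
            self.correct[y_true] += 1

    def gmean(self):
        recalls = [self.correct[c] / self.seen[c] for c in self.seen]
        return float(np.prod(recalls) ** (1.0 / len(recalls)))

# A hypothetical 95%/5% imbalanced stream where the model always predicts class 0
metric = PrequentialGMean()
for y in [0] * 95 + [1] * 5:
    metric.update(y_true=y, y_pred=0)

accuracy = sum(metric.correct.values()) / sum(metric.seen.values())
print(accuracy, metric.gmean())  # high accuracy, zero G-mean: the model is trivial
```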
PhD students, researchers and practitioners that face classification problems with imbalanced data streams.
Title: Adaptive Resonance Theory in Social Media Clustering with Applications (IJCNN3_01)
Organized by Lei Meng, Ah-Hwee Tan, Donald C. Wunsch and Leonardo Enzo Brito da Silva
Mining user-centric social media data streams, i.e. social networks and user-generated multimedia content, offers great opportunities for clustering in unstructured data organization and knowledge discovery. However, the large volume, dynamic nature, and heterogeneity of social media data pose challenges in terms of scalability, online learning capability, multimodal information fusion, and robustness to noise and parameter settings. Adaptive resonance theory (ART) has long been an important approach in computational intelligence. In this tutorial, we will systematically describe the frontiers and challenges of modern clustering techniques in social media analytics, and present a class of novel clustering techniques based on ART that tackle different aspects of the social media clustering challenges, illustrated with real-world social media understanding and mining tasks.
Nowadays, social media is a natural testbed for media understanding, social network mining, and user behaviour and emotion understanding, where clustering holds great potential. This tutorial targets an audience interested in ART and clustering, and their applications in social media analytics. More importantly, we would like to deliver our initiatives on the understanding of ART and the methodologies to extend ART towards online and fully automated self-organizing clustering algorithms for the social media clustering challenges.
Title: Artificial Intelligence in Business (Changed to: Reinforcement Learning: Principles, Algorithms and Applications) (IJCNN3_02)
Organized by Meng Joo Er
Reinforcement Learning (RL) is a type of Machine Learning in which an agent learns by interacting with its environment. It is not told by a trainer what to do; it learns by trial and error what actions to take to get the highest reward in a situation, even when the reward is not obvious and immediate. It learns how to solve problems rather than being taught what solutions look like. RL is how DeepMind created the AlphaGo system that beat a high-ranking Go player (and has recently been winning online Go matches anonymously). It is how University of California Berkeley’s BRETT robot learns how to move its hands and arms to perform physical tasks like stacking blocks or screwing the lid onto a bottle in just three hours (or ten minutes if it is told where the objects it is going to work with are, and where they need to end up). Developers at a hackathon built a smart trash can called AutoTrash that used reinforcement learning to sort compostable and recyclable rubbish into the right compartments.
Reinforcement learning is the reason Microsoft just bought Maluuba, which it plans to use to aid in understanding natural language for search and chatbots, as a stepping stone to general intelligence. The objectives of this tutorial are to review RL algorithms developed by various researchers over the last few decades. It will comprehensively cover three aspects of RL, namely principles, algorithms and applications. Specifically, we will cover Deep Reinforcement Learning and practical applications of RL ranging from manufacturing to finance.
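As a minimal illustration of the trial-and-error principle described above (a toy example, not the tutorial's material), the following sketch runs tabular Q-learning on a made-up corridor task; all hyperparameters are arbitrary choices.

```python
import numpy as np

# Tabular Q-learning on a tiny made-up corridor: states 0..3, goal at state 3.
# Actions: 0 = left, 1 = right. Reward 1 only on reaching the goal.
n_states, n_actions = 4, 2
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(42)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1

for _ in range(300):                 # episodes
    s = 0
    for _ in range(500):             # step cap per episode
        # epsilon-greedy: mostly exploit, sometimes try a random action
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # temporal-difference update toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

policy = np.argmax(Q, axis=1)        # greedy policy: moves right toward the goal
```

The agent is never told the solution; the value estimates emerge purely from interaction, which is the core idea scaled up in Deep RL systems like AlphaGo.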
PhD students, researchers and practitioners with interests in Reinforcement Learning, in particular Deep Reinforcement Learning and practical applications of Reinforcement Learning ranging from engineering and medicine to finance and gaming.
Title: Non-Iterative Learning Methods for Classification and Forecasting (IJCNN3_03)
Organized by Ponnuthurai Nagaratnam Suganthan and Filippo Bianchi
This tutorial will first introduce the main non-iterative learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks and kernel ridge regression. The popular instantiation of the feedforward type, the random vector functional link neural network (RVFL), originated in the early 1990s. Other feedforward methods are random weight neural networks (RWNN), extreme learning machines (ELM), etc. Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another non-iterative paradigm is based on the kernel trick, such as kernel ridge regression and kernel extreme learning machines. The tutorial will also consider computational complexity with increasing scale of the classification/forecasting problems. A further non-iterative paradigm is based on random forests; the random forest does not have a closed-form solution, but its learning algorithm is non-iterative because once a tree is grown, it remains fixed. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
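The closed-form flavour of these methods can be sketched as follows. This is an illustrative RVFL-style implementation under simplifying assumptions (random tanh features plus direct links, a single ridge-regression solve), not a reference implementation; all sizes and the toy target are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def rvfl_train(X, y, n_hidden=50, lam=1e-2):
    """Closed-form RVFL-style training: random, fixed input-to-hidden weights;
    only the output layer is solved, in one shot, by ridge regression
    (no iterative gradient descent)."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random hidden weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random nonlinear features
    D = np.hstack([H, X])                        # RVFL's direct input-output links
    # Ridge regression closed form: beta = (D^T D + lam*I)^(-1) D^T y
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta

# Toy regression target
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = rvfl_train(X, y)
mse = np.mean((rvfl_predict(X, W, b, beta) - y) ** 2)
```

The single linear solve is what keeps the training cost low and predictable as problem scale grows, which is the complexity angle the tutorial covers.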
This presentation will include basics as well as recent advances; hence, both researchers commencing their research and experienced researchers can attend. Practitioners will also benefit from this tutorial. This tutorial will be a companion to the IJCNN 2018 special session on the same topic: non-iterative approaches to learning.
Title: Graph based and Topological Unsupervised Machine Learning (IJCNN3_04)
Organized by Nistor Grozavu and Rushed Kanawati
One of the main tasks in exploratory analysis of high-dimensional data is the formation of a simplified, usually visual, overview of the explored data. This can be achieved through simplified descriptions or summaries, which should provide the possibility of discovering or identifying the features or patterns of most relevance. Clustering and projection are two useful methods to achieve this task. On the one hand, classical clustering algorithms produce a grouping of the data according to a given criterion. Projection methods, on the other hand, represent the data points in a lower-dimensional space in such a way that the clusters and the metric relations of the data items are preserved as faithfully as possible. Topological and graph-based methods can do both clustering and projection simultaneously.
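A self-organizing map is one classical topological method that does both at once. The following sketch (illustrative only, with arbitrary grid size, schedules and data) clusters the data into grid prototypes while projecting each sample to a 2-D grid coordinate:

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal self-organizing map (SOM): a 2-D grid of prototype vectors that
# clusters the data (each unit is a prototype) and projects it (each sample
# maps to a grid coordinate), preserving neighbourhood topology.
grid_h, grid_w, dim = 6, 6, 3
weights = rng.uniform(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)  # each unit's grid position

def train(data, epochs=20, lr0=0.5, sigma0=2.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)     # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit = nearest prototype to the sample
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # neighbourhood update: units near the BMU on the grid move too
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights[...] += lr * g[..., None] * (x - weights)

data = rng.uniform(size=(300, 3))
train(data)
# projection: each sample gets a 2-D grid coordinate via its best-matching unit
bmu0 = np.unravel_index(np.argmin(np.linalg.norm(weights - data[0], axis=-1)),
                        (grid_h, grid_w))
```

The neighbourhood term `g` is what makes the method topological: nearby grid units learn similar prototypes, so grid distance reflects data similarity.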
Researchers, PhD students and practitioners with interest in clustering and topological unsupervised machine learning.
Title: Methods and Resources for Texture Classification (IJCNN3_05)
Organized by Paulo Rodrigo Cavalin and Luiz S. Oliveira
We will present a comprehensive review of methods and resources for texture recognition, and provide insights and advice on applying them in practice, in accordance with the experience of the instructors. Regarding the methods, we plan to cover everything from the most traditional approaches, for instance texture descriptors such as gray-level co-occurrence matrices (GLCM), to more recent approaches such as Convolutional Neural Networks (CNN) and multi-scale encodings such as Fisher Vectors. Regarding resources, the goal is not only to present a list of publicly available datasets, from the most well-known sets such as Brodatz to more recent ones such as DTD, but also to discuss the specific characteristics of each and the current state-of-the-art results.
We also plan to describe the methodology employed to create datasets for forest species recognition and breast cancer classification, giving the audience ideas for creating datasets for other problems. Finally, we will present tools that can help the audience speed up the development of texture recognition systems.
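For readers new to the area, the GLCM mentioned above can be sketched in a few lines. This is an illustrative implementation with a fixed pixel offset and none of the symmetry or angle options found in library versions; the tiny test image is invented.

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Gray-level co-occurrence matrix: counts how often a pixel with gray
    level i has a neighbour (at offset dx, dy) with gray level j. Texture
    descriptors (contrast, homogeneity, ...) are derived from it."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()  # normalise to a joint probability

# A tiny 4-level test image made of 2x2 blocks
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
# contrast: expected squared gray-level difference between neighbours
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
```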
This tutorial is mostly targeted at students, researchers and professionals with some knowledge of image recognition but little to no knowledge of texture recognition. They will be provided with a broad but shallow view of the state-of-the-art techniques for texture recognition, and will be able to later go deeper into them if they decide to pursue this area. People with intermediate to advanced experience in texture recognition can also benefit from the tutorial, since we plan to present the most recent advances in the field.
Title: Tutorial on dynamic classifier selection: recent advances and perspectives (IJCNN4_01)
Organized by Rafael M. O. Cruz, Robert Sabourin and George D. C. Cavalcanti
Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. DS has become an active research topic in the multiple classifier systems literature in recent years, because more and more works report the superior performance of such techniques over static combination approaches, especially when dealing with small-sized datasets, imbalanced problems and noisy distributions. DS techniques work by estimating the competence level of each classifier from a pool of classifiers. Only the most competent classifier, or an ensemble containing the most competent classifiers, is selected to predict the label of a specific test sample. The rationale for such techniques is that not every classifier in the pool is an expert in classifying all unknown samples; rather, each base classifier is an expert in a different local region of the feature space. The key aspect of dynamic selection techniques is therefore how to estimate the competence of the base classifiers in local regions and select the most competent ones for the classification of any given query sample.
The goal of this tutorial is to present attendees with a detailed formulation of each step involved in a dynamic selection system, from the initial steps of generating a pool of classifiers to the final steps of how the base classifiers are selected based on each new query sample. Moreover, an experimental evaluation of state-of-the-art dynamic selection techniques, as well as guidelines for future research, is also presented.
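The competence-estimation step can be sketched roughly as follows (a local-accuracy-style illustration with made-up data and deliberately naive "local expert" classifiers, not the reference code of any specific published technique):

```python
import numpy as np

def dcs_local_accuracy(query, X_val, y_val, classifiers, k=7):
    """Dynamic classifier selection sketch: estimate each base classifier's
    competence as its accuracy on the k validation samples nearest to the
    query (its local region), then let the most competent one predict."""
    idx = np.argsort(np.linalg.norm(X_val - query, axis=1))[:k]
    competences = [np.mean(clf(X_val[idx]) == y_val[idx]) for clf in classifiers]
    best = int(np.argmax(competences))
    return classifiers[best](query[None, :])[0], best

# Made-up validation data: label is 1 iff the first feature is non-negative
rng = np.random.default_rng(3)
X_val = rng.uniform(-1, 1, size=(100, 2))
y_val = (X_val[:, 0] >= 0).astype(int)

# Two deliberately naive base classifiers, each an "expert" in one half-space
clf_a = lambda X: np.zeros(len(X), dtype=int)   # always 0: right only when x0 < 0
clf_b = lambda X: np.ones(len(X), dtype=int)    # always 1: right only when x0 >= 0

pred, chosen = dcs_local_accuracy(np.array([0.8, 0.0]), X_val, y_val, [clf_a, clf_b])
# near x0 = 0.8, clf_b is locally perfect, so dynamic selection picks it
```

Neither base classifier is good globally, yet selecting per query region yields correct predictions, which is exactly the rationale stated above.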
The tutorial is intended for researchers interested in multiple classifier systems and ensemble learning, as well as those interested in dealing with complex pattern recognition problems such as small-sample-sized datasets, noisy datasets, imbalanced distributions and concept drift.
Title: Deep, Transfer and Emergent Reinforcement Learning Techniques for Intelligent Agents (IJCNN4_02)
Organized by Abdulrahman Altahhan and Vasile Palade
The goal of this tutorial is to provide an in-depth understanding of recent developments in RL, such as True Online TD and BackTD, and the issues surrounding the combination of Deep Learning and Reinforcement Learning in the games and robotics domains.
The potential audience includes mechanical engineers interested in developing driverless cars, engineers who are interested in applying deep learning and reinforcement learning in their line of work, as well as researchers who want to advance the frontier of the topic.
Title: Neurosymbolic Learning and Reasoning with Constraints (IJCNN4_03)
Organized by Artur S. d’Avila Garcez, Marco Gori, Luis C. Lamb, Luciano Serafini and Michael Spranger
Following the great recent success of deep learning, attention has turned to neural artificial intelligence (AI) systems capable of harnessing knowledge as well as large amounts of data [1,2]. Neural-symbolic integration has sought for many years to benefit from the integration of symbolic AI with neural computation, which can lead to more versatile and explainable learning systems. Recently, constraints have been shown to offer a unifying theoretical framework for learning and reasoning [5,6,9]. Constraint-based neural-symbolic computing (CBNSC) offers a methodology for unifying knowledge representation and machine learning. In this tutorial, we will introduce the theory and practice of CBNSC using a recent computational implementation called Logic Tensor Networks (LTNs), implemented in Python and Google’s TensorFlow. LTNs are a logic-based formalism defined on a first-order language, where formulas have fuzzy-logic semantics and terms are interpreted in feature vectors of real numbers. LTNs allow a well-founded integration of deductive reasoning on a knowledge base and efficient data-driven relational machine learning, e.g. using tensor networks. LTNs enable relational knowledge infusion into, and distilling from, deep networks, thus constraining data-driven learning with background knowledge and allowing deep networks to be interrogated for explainability. LTNs have been successfully applied to image understanding and natural language processing (NLP) tasks. The tutorial will give a general introduction to CBNSC and its practical realization in LTNs, with an ample set of hands-on examples of such applications in Python, and will situate LTNs within the broader landscape of CBNSC.
Specifically, the goals of the tutorial are:
- to introduce constraint-based learning and reasoning, contrasting it with related work;
- to place it in the context of neural-symbolic computing, knowledge infusion and distilling;
- to introduce Logic Tensor Networks as a general framework for harnessing and distilling knowledge from deep networks; and
- to enable participants to run and understand LTNs through the use of representative small examples as a tool for knowledge-based deep networks in the area of semantic image interpretation.
The tutorial will seek to stimulate attendees to appreciate, beyond their specific areas of expertise, the benefits and limitations of both neural and symbolic approaches and their integration.
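To make the fuzzy-logic semantics concrete, the following sketch evaluates logical connectives on real-valued truth degrees. The Lukasiewicz operators shown are one common choice among several that such frameworks support, and the predicate values are invented, so this is an illustration rather than the LTN library's API.

```python
import numpy as np

# Real-valued ("fuzzy") semantics for logical connectives, the device that
# turns symbolic constraints into differentiable quantities. The Lukasiewicz
# t-norm family below is one common, illustrative choice.
def t_and(a, b):      # conjunction
    return np.maximum(0.0, a + b - 1.0)

def t_or(a, b):       # disjunction
    return np.minimum(1.0, a + b)

def t_not(a):         # negation
    return 1.0 - a

def t_implies(a, b):  # a -> b
    return np.minimum(1.0, 1.0 - a + b)

# Truth degrees in [0, 1], as would be produced by neural predicates
cat = 0.9       # degree to which an image region is a cat (made up)
animal = 0.95   # degree to which it is an animal (made up)

# The constraint "cat -> animal" holds to degree 1.0 here; its violation
# (1 - degree) can serve as a training loss on the underlying networks
degree = t_implies(cat, animal)
```

Because every operator is a simple continuous function, constraints built from them are differentiable and can be optimized jointly with the data-driven loss, which is how background knowledge constrains learning.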
The tutorial is of interest to the general WCCI audience, in particular, researchers interested in combining constraints, logic and reasoning with deep neural networks. The tutorial assumes no prior knowledge of logic and will make sure to introduce all necessary basic concepts.