March 8, 2018

IJCNN Tutorials

Room and Capacity
Europa IV (Ground Floor): 300
Europa II (Ground Floor): Poster Session
OCEANIA I (2nd Floor): 100
OCEANIA II (2nd Floor): 160
OCEANIA III (2nd Floor): 100
OCEANIA IV (2nd Floor): 100
OCEANIA V (2nd Floor): 120
OCEANIA VI (2nd Floor): 150
OCEANIA VII (2nd Floor): 110
OCEANIA VIII (2nd Floor): 100
OCEANIA IX (2nd Floor): 180
OCEANIA X (2nd Floor): 190
08:00AM-10:00AM HYB_01 – part 1 CEC_01 CEC_06 CEC_10 CEC_15 FUZZ_01 Part 1 FUZZ_05 Part 1 IJCNN_01 Part 1 IJCNN_07 Part 1 IJCNN_12 Part 1
10:15AM-12:15PM HYB_01 – part 2 CEC_02 CEC_07 CEC_11 CEC_16 FUZZ_01 Part 2 FUZZ_05 Part 2 IJCNN_01 Part 2 IJCNN_07 Part 2 IJCNN_12 Part 2
12:15PM-1:00PM LUNCH
1:00PM-3:00PM HYB_02 CEC_03 CEC_08 Part 1 CEC_12 CEC_17 Part 1 FUZZ_02 FUZZ_06 Part 1 IJCNN_03 IJCNN_08 IJCNN_13
3:15PM-5:15PM HYB_03 CEC_04 CEC_08 Part 2 CEC_13 CEC_17 Part 2 FUZZ_03 FUZZ_06 Part 2 IJCNN_04 IJCNN_09 IJCNN_14 Part 1
5:15PM-7:15PM IJCNN_15 CEC_05 CEC_09 CEC_14 CEC_18 FUZZ_04 IJCNN_10 IJCNN_05 IJCNN_06 IJCNN_11 IJCNN_14 Part 2
7:30PM-9:30PM Welcome Reception @ Europa Room – Ground Floor


IJCNN_01 Deep Learning for Sequences
IJCNN_02 Deep Recurrent Neural Networks: Training and Applications in the Modeling and Control of Nonlinear Systems, Signal Processing and Robotics.  [CANCELLED]
IJCNN_03 Prediction, Interaction, and User Behaviour
IJCNN_04 Entropic Evaluation of Classification. A hands-on, get-dirty introduction
IJCNN_05 Machine Learning for Spark Streaming with StreamDM
IJCNN_06 Learning class imbalanced data streams
IJCNN_07 Adaptive Resonance Theory in Social Media Clustering with Applications
IJCNN_08 Artificial Intelligence in Business (Changed to: Reinforcement Learning: Principles, Algorithms and Applications)
IJCNN_09 Non-Iterative Learning Methods for Classification and Forecasting
IJCNN_10 Graph based and Topological Unsupervised Machine Learning
IJCNN_11 Methods and Resources for Texture Classification
IJCNN_12 Tutorial on Dynamic Classifier Selection: Recent Advances and Perspectives
IJCNN_13 Deep, Transfer and Emergent Reinforcement Learning Techniques for Intelligent Agents
IJCNN_14 Neurosymbolic Learning and Reasoning with Constraints
IJCNN_15 Quest For the Neural Input: Electrophysiology, Source Localization and Causality Analysis

Title: Deep Learning for Sequences (IJCNN_01)

Organized by Alessandro Sperduti


With the spread of cheap sensors, sensor-equipped devices (e.g., drones), and sensor networks (such as the Internet of Things), as well as the development of inexpensive human-machine interaction interfaces, the ability to process sequential data quickly and effectively is becoming increasingly important. Many tasks may benefit from advances in this field, ranging from the monitoring and classification of human behavior to the prediction of future events. Many approaches have been proposed for learning in sequential domains, ranging from linear models and early Recurrent Neural Networks to more recent Deep Learning solutions. The tutorial will start by presenting relevant sequential domains, introducing scenarios involving different types of sequences (e.g., symbolic sequences, time series, multivariate sequences) and tasks (e.g., classification, prediction, transduction). Linear models are introduced first, including linear auto-encoders for sequences. Non-linear models and the related training algorithms are then recalled, starting from early versions of Recurrent Neural Networks. Computational problems and proposed solutions will be presented, including novel linear-based pre-training approaches. Finally, more recent Deep Learning models will be discussed. The tutorial will close with some theoretical considerations on the relationships between feed-forward and recurrent neural networks, and a discussion of how to deal with more complex data (e.g., trees and graphs).
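As a rough illustration of the recurrent models covered here (our own sketch, not the presenter's material), the following runs a minimal tanh RNN cell over a multivariate sequence in NumPy; all dimensions and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 5-dimensional inputs, 8-dimensional hidden state.
n_in, n_hid = 5, 8
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # recurrent (hidden-to-hidden) weights
b = np.zeros(n_hid)

def rnn_forward(xs):
    """Run a plain tanh RNN over a sequence; each state depends on the previous one."""
    h = np.zeros(n_hid)
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b)
        states.append(h)
    return np.stack(states)

sequence = rng.normal(size=(12, n_in))  # a length-12 multivariate sequence
states = rnn_forward(sequence)
print(states.shape)  # one hidden state per time step
```

The recurrence through W_hh is exactly what makes training difficult (vanishing/exploding gradients), which is where the pre-training and deep variants discussed in the tutorial come in.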

Intended Audience

PhD students, post-docs, and researchers in neural networks and related areas. Pre-requisites: Basic algebra, calculus, and probability at the introductory college level.

Short Biography

Alessandro Sperduti has been a full professor of Computer Science at the Department of Mathematics of the University of Padova since March 1, 2002. Previously, he was an associate professor (1998-2002) and an assistant professor (1995-1998) at the Department of Computer Science of the University of Pisa. His research interests are mainly in Neural Networks, Kernel Methods, and Process Mining. Prof. Sperduti has been a PC member of several conferences (such as IJCAI, ECAI, ICML, ECML, SIGIR, ECIR, SDM, IJCNN, ICANN, ESANN, …), and a guest editor of special issues for the journals Neural Networks, IEEE TKDE, and Cognitive Systems Research. He is on the editorial board of the journal Theoretical Computer Science (Section C), the European Journal on Artificial Intelligence, IEEE Intelligent Systems Magazine, and the journal Neural Networks. He was an associate editor (2009-2012) for the IEEE Transactions on Neural Networks and Learning Systems. From 2001 to 2010, he was a member of the European Neural Networks Society (ENNS) Executive Committee; he was chair of the DMTC of IEEE CIS for the years 2009 and 2010, chair of the NNTC for the years 2011 and 2012, and chair of the IEEE CIS Student Games-Based Competition Committee for the year 2013. He is a Senior Member of the IEEE. He has delivered several tutorials at major Artificial Intelligence conferences (WCCI 2012, IJCAI 2001, IJCAI 1999, IJCAI 1997) and summer schools (recently, Deep Learning 2017, Bilbao, and Deep Learning On-Chip 2017, Torino). He was the recipient of the 2000 AI*IA (Italian Association for Artificial Intelligence) ‘MARCO SOMALVICO’ Young Researcher Award. He has been an invited plenary speaker for the conferences ICANN 2001, WSOM 2007, and CIDM 2013. Prof. Sperduti is the author of more than 200 publications in refereed journals, conferences, and book chapters.


Title: Deep Recurrent Neural Networks: Training and Applications in the Modeling and Control of Nonlinear Systems, Signal Processing and Robotics (IJCNN_02) [CANCELLED]

Organized by Antonio Moran


The tutorial presents recent developments in the training and application of deep recurrent neural networks for solving different problems in signal processing, systems modeling, parameter identification, systems control, and articulated and mobile robot positioning, among other applications. The tutorial analyzes the algorithms for training recurrent neural networks, focusing on the Dynamic Back Propagation and Back Propagation Through Time algorithms. The tutorial also presents the integration of knowledge and training for configuring fuzzy-neural networks. All algorithms and applications are implemented in Matlab to verify their effectiveness and realizability.

Intended Audience

The intended audience consists of people with a background in system dynamics, connectionist systems, and learning algorithms. The tutorial is expected to interest master's students, PhD students, engineers, and scientists working on deep learning and neural networks for solving complex problems.

Short Biography

Antonio Moran has a PhD in Mechatronics Systems Engineering from the Tokyo University of Agriculture and Technology, Japan, where he worked as a scientist and professor in the Laboratory of Automation, developing research projects for Japanese car manufacturers. Dr. Moran has been a visiting professor at the Technological University of Ilmenau, Germany; Stockholm University, Sweden; and the Tokyo University of Agriculture and Technology, Japan. He is a Past President of the IEEE Robotics and Automation Society, Peru Section, and winner of the Best Society Award in Hong Kong, China, 2014. Dr. Moran is an international lecturer and invited speaker at national and international congresses, and has published more than 60 papers in journals, conferences and congresses. He is a professor in the Graduate School of the Pontifical Catholic University of Peru (PUCP), and a leader and head of international accreditation processes in Peruvian universities. He is also General Manager at TECHNOVA SAC, a company dedicated to quality management, automation and robotics in the industry and university sectors. Dr. Moran is a member of the IEEE and five of its societies, the JSME (Japan Society of Mechanical Engineers) and the CIP (Peruvian Engineers Association).


Title: Prediction, Interaction, and User Behaviour (IJCNN_03)

Organized by Charles Martin, Enrique García Ceja, Kai Olav Ellefsen and Jim Tørresen


More information about this tutorial, as well as instructions for installing the software we use, is available at:

The goal of this tutorial is to apply predictive machine learning models to human behaviour through a human computer interface. We will introduce participants to the key stages for developing predictive interaction in user-facing technologies: collecting and identifying data, applying machine learning models, and developing predictive interactions. Many of us are aware of recent advances in deep neural networks (DNNs) and other machine learning (ML) techniques; however, it is not always clear how we can apply these techniques in interactive and real-time applications. Apart from well-known examples such as image classification and speech recognition, what else can predictive ML models be used for? How can these computational intelligence techniques be deployed to help users?

In this tutorial, we will show that ML models can be applied to many interactive applications to enhance users’ experience and engagement. We will demonstrate how sensor and user interaction data can be collected and investigated, modelled using classical ML and DNNs, and where predictions of these models can feed back into an interface. We will walk through these processes using live-coded demonstrations with Python code in Jupyter Notebooks so participants will be able to see our investigations live and take the example code home to apply in their own projects.

Our demonstrations will be motivated by examples from our own research in creativity support tools, robotics, and modelling user behaviour. In creativity, we will show how streams of interaction data from a creative musical interface can be modelled with deep recurrent neural networks (RNNs). From this data, we can predict users’ future interactions, or the potential interactions of other users. This enables us to “fill in” parts of a tablet-based musical ensemble when other users are not available, or to continue a user’s composition with potential musical parts. In user behaviour, we will show how smartphone sensor data can be used to infer user contextual information such as physical activities. This contextual information can be used to trigger interactions in smart home or internet of things (IoT) environments, to help tune interactive applications to users’ needs, or to help track health data.
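As a toy illustration of the activity-inference idea described here (our own sketch, not the organizers' code), the following extracts simple per-window statistics from synthetic 3-axis accelerometer data and classifies windows with a nearest-centroid rule; all data, labels, and parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def features(window):
    """Per-window summary features: mean and standard deviation per axis."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic 3-axis accelerometer windows (50 samples each) for two made-up activities.
def make_windows(scale, n=40):
    return [rng.normal(scale=scale, size=(50, 3)) for _ in range(n)]

walking = make_windows(scale=2.0)  # high-variance signal
sitting = make_windows(scale=0.2)  # low-variance signal
X = np.array([features(w) for w in walking + sitting])
y = np.array([0] * len(walking) + [1] * len(sitting))  # 0 = walking, 1 = sitting

# Nearest-centroid classifier: one prototype feature vector per activity.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(window):
    d = np.linalg.norm(centroids - features(window), axis=1)
    return int(np.argmin(d))

print(predict(rng.normal(scale=2.0, size=(50, 3))))  # classify a new high-variance window
```

In a real deployment the prediction for each incoming window would feed back into the interface, e.g. to trigger a smart-home action.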

Intended Audience

The primary audience for this tutorial are researchers, students, and computing practitioners interested in applying ML models, but without the background in AI or ML to get started. The demonstrations would assume some programming knowledge, but not specialist knowledge in machine learning. Existing practitioners in AI may also be interested to see how new models, such as DNNs, can be integrated into interactive applications.

Short Biography

Charles Martin is a specialist in percussion, music technology, and musical AI from Australia. He links percussion with interaction and machine learning through new technologies. He is the author of musical iPad app, PhaseRings, and founded touchscreen ensemble, Ensemble Metatone, percussion group, Ensemble Evolution, and cross-artform group, Last Man to Die. Charles did his doctoral research at the Australian National University developing intelligent agents that mediate ensemble performance. At present, Charles is a postdoctoral fellow at the University of Oslo in the Engineering Prediction and Embodied Cognition (EPEC) project and the RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion where he is developing new ways to predict musical intentions and performances in smartphone apps and embedded devices. Website:

Enrique Garcia Ceja is currently a Postdoc at the Department of Informatics, University of Oslo. He received his M.Sc. and Ph.D. degrees in intelligent systems from Tecnológico de Monterrey, México. Enrique was a visiting researcher at the CREATE-NET Mobile and Ubiquitous Technologies Research group, Trento, Italy (2014). During 2016-2017 he worked as an independent data analysis consultant for a project between industry and academia on the development of machine learning algorithms to analyze sensor data from wearable devices to infer users’ contextual information. Since 2016, Enrique has been a member of IEEE Eta Kappa Nu. His research interests include: ambient intelligence, mental health monitoring, human activity recognition, machine learning, mobile health and wearable sensors. Website:

Kai Olav Ellefsen is a postdoctoral fellow at the University of Oslo, in the project “Engineering Predictability with Embodied Cognition (EPEC)”. His current research focuses on internal models (mental simulations of real-world phenomena), and their ability to improve performance in intelligent agents by letting them predict the future. He holds a Masters degree (2010, winner of the Norwegian Artificial Intelligence Society Best Master Thesis Award) and Ph. D. (2014) from the Norwegian University of Science and Technology. Before joining the University of Oslo in 2016, he was a postdoctoral fellow at the Brazilian Institute of Robotics at SENAI CIMATEC (Salvador, Brazil) where he was part of the FlatFish AUV project. His research interests cover many topics in Artificial Intelligence, including evolutionary algorithms, artificial neural networks, adaptation and learning. Website:

Jim Tørresen received his M.Sc. and Ph.D. degrees in computer architecture and design from the Norwegian University of Science and Technology, University of Trondheim, in 1991 and 1996, respectively. He was employed as a senior hardware designer at NERA Telecommunications (1996-1998) and at Navia Aviation (1998-1999). Since 1999, he has been a professor at the Department of Informatics at the University of Oslo (associate professor 1999-2005). Jim Torresen has been a visiting researcher at Kyoto University, Japan for one year (1993-1994), at the Electrotechnical Laboratory, Tsukuba, Japan for four months (1997 and 2000), and a visiting professor at Cornell University, USA for one year (2010-2011). His current research interests include bio-inspired computing, machine learning, reconfigurable hardware, and robotics, and applying these to complex real-world applications. Website:



Title: Entropic Evaluation of Classification. A hands-on, get-dirty introduction (IJCNN_04)

Organized by Francisco Valverde and Carmen Peláez


To evaluate supervised classification tasks, we have two different data sources:

  • The observation vectors themselves, used to infer the classifier that produces the predicted labels.
  • The true labels of the observations, compared to the predicted labels in the form of a confusion matrix.

Two main kinds of measures are used to analyze the confusion matrix: count-based measures (accuracy, TPR, FPR, and derived measures such as AUC) and entropy-based measures (variation of information, KL divergence, mutual information, and derived measures). The former use the minimization of error counts as a heuristic to improve the quality of the classifier, while the latter try to optimize the flow of information between the input and output label distributions (e.g., minimize the variation of information, maximize the mutual information). The purpose of the tutorial is to train attendees in using a visual and numerical standalone tool, the Entropy Triangle, to carry out entropy-based evaluation and to compare it to accuracy-based evaluation.
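To make the two families of measures concrete, the sketch below (our own, not the Entropy Triangle tool itself) computes accuracy, mutual information, and the variation of information from a hypothetical two-class confusion matrix:

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
C = np.array([[85.0, 5.0],
              [10.0, 50.0]])

N = C.sum()
accuracy = np.trace(C) / N  # count-based: fraction of correctly classified samples

# Entropy-based view: treat the normalized matrix as the joint
# distribution of (true, predicted) labels.
P = C / N
p_true, p_pred = P.sum(axis=1), P.sum(axis=0)

def H(p):
    """Shannon entropy in bits, skipping zero-probability cells."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

mutual_info = H(p_true) + H(p_pred) - H(P.ravel())            # I(true; predicted)
variation_of_info = 2 * H(P.ravel()) - H(p_true) - H(p_pred)  # H(T|P) + H(P|T)

print(round(accuracy, 3), round(mutual_info, 3), round(variation_of_info, 3))
```

Two classifiers with the same accuracy can transmit very different amounts of information about the true labels, which is exactly the gap the entropy-based evaluation exposes.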

Intended Audience

This tutorial is intended for practitioners in machine learning and data science who have ever been baffled by the evaluation of their favorite pet classifier. Since the technique is technology-agnostic, it can be used with any classifier and any supervised classification data whatsoever (and indeed with many more types of datasets!). The not-so-gory details are easy to understand for researchers and students who have ever seen and understood the concepts of entropy, mutual information and Venn diagrams.

Short Biography

Francisco J. Valverde-Albacete has been teaching a number of subjects in Electrical Engineering, Signal Processing, Data Mining and Pattern Recognition for the past 20+ years, including master's and Ph.D. level subjects. His interests now lie in Information Theoretic approaches to Machine Learning and non-standard algebras for signal processing.

Carmen Peláez-Moreno has been teaching a number of subjects in Signal Processing, Speech, Audio and Multimedia Processing and Pattern Recognition for the past 20+ years, including master's and Ph.D. level subjects. She is an Associate Professor in the Multimedia Processing group at Universidad Carlos III de Madrid, Spain, where she applies signal processing to Automatic Speech Recognition.


Title: Machine Learning for Spark Streaming with StreamDM (IJCNN_05)

Organized by Heitor Gomes and Albert Bifet


The main goal of this tutorial is to introduce attendees to big data stream mining theory and practice. We will use the StreamDM framework to illustrate concepts and also to demonstrate how data stream mining pipelines can be deployed using StreamDM.

The tools and algorithms in StreamDM are specifically designed for the data stream setting. Due to the large amount of data that is created – and must be processed – in real-time streams, such methods need to be extremely time-efficient while using very small amounts of memory. StreamDM is the first library to include advanced stream mining algorithms for Spark Streaming. It is intended to be the open-source gathering point for research on and implementation of data stream mining, while being designed to allow practical deployments on real-world datasets.

Intended Audience

This tutorial is of interest to both researchers and industry practitioners. Researchers can take advantage of the StreamDM API to develop experiments with novel big data stream mining algorithms. Practitioners can identify opportunities to deploy data stream mining solutions on top of a Spark cluster.

Short Biography

Heitor Gomes is a researcher at LTCI, Telecom ParisTech (Paris, France) and a visiting researcher at INESC TEC (Porto, Portugal). His main research area is Machine Learning, especially Evolving Data Streams, Concept Drift, Ensemble methods and Big Data Streams. He is an active contributor to the open data stream mining project MOA and a co-leader of the StreamDM project.

Albert Bifet is a Professor at LTCI, Telecom ParisTech and Head of the Data, Intelligence and Graphs (DIG) Group at Telecom ParisTech. His research focuses on Data Stream mining, Big Data Machine Learning and Artificial Intelligence. The problems he investigates are motivated by large scale data, the Internet of Things (IoT), and Big Data Science. He co-leads the open source projects MOA (Massive On-line Analysis), Apache SAMOA (Scalable Advanced Massive Online Analysis) and StreamDM.


Title: Learning class imbalanced data streams (IJCNN_06)

Organized by Leandro L. Minku, Shuo Wang and Giacomo Boracchi


Many real-world applications are characterized by class imbalanced data streams, i.e., data arrive over time and the classes are not equally represented in either the training or test sequences. Such applications exist in many domains, such as risk management, anomaly detection, software engineering, social media mining, and recommender systems. Two specific examples of classification problems with imbalanced data streams are fraud detection in credit card transactions (since genuine transactions far outnumber frauds) and fault detection in operating machinery (since faults are rare).

Both class imbalance and streaming data need to be properly handled during the training stage. Imbalanced data require either resampling/weighting techniques or algorithm-level adjustment to avoid learning a trivial model that assigns each sample to the majority class. Classification over streaming data requires continuous adaptation strategies to train the predictive model over time in an efficient manner and maintain predictive accuracy. An increasing number of papers addressing the combined issues of class imbalance and streaming data show that learning imbalanced data streams requires special care and ad-hoc solutions to efficiently and effectively address both challenges.
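As a minimal sketch of the algorithm-level weighting idea (illustrative only, not one of the surveyed algorithms), the following trains an online logistic regression on a simulated imbalanced stream, scaling each gradient step by an assumed inverse class frequency:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.zeros(3)  # logistic-regression weights: 2 features + bias

# Illustrative class weights, roughly the inverse class frequencies
# (here we assume ~10% of the stream belongs to the positive class).
class_weight = {0: 1.0, 1: 9.0}

def sgd_step(x, y, lr=0.1):
    """One online update; the minority class gets a proportionally larger step."""
    global w
    z = np.append(x, 1.0)                    # append bias input
    p = 1.0 / (1.0 + np.exp(-w @ z))         # predicted P(class = 1)
    w += lr * class_weight[y] * (y - p) * z  # weighted logistic-loss gradient

# Simulate an imbalanced stream: 90% negatives near (-1,-1), 10% positives near (+1,+1).
for _ in range(2000):
    y = int(rng.random() < 0.1)
    x = rng.normal(loc=1.0 if y else -1.0, scale=0.5, size=2)
    sgd_step(x, y)

# A point near the positive cluster should now score above 0.5; without the
# up-weighting, the rare class is easily drowned out by the majority.
p_pos = 1.0 / (1.0 + np.exp(-w @ np.array([1.0, 1.0, 1.0])))
print(p_pos > 0.5)
```

Because the model is updated one example at a time, the same loop also adapts if the class distributions drift over the stream, which is the other challenge this tutorial addresses.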

The goal of this tutorial is to provide a “good starting kit” to PhD students, researchers and practitioners that have never dealt with learning tasks involving imbalanced data streams before, as well as an up-to-date overview of the state-of-the-art solutions.

In particular, the tutorial aims at:

  • Presenting a general formulation of the problem of learning class imbalanced data streams.
  • Giving examples of real world class imbalanced data streams and the challenges that they pose.
  • Presenting a taxonomy of existing approaches, including a detailed explanation of key existing algorithms.
  • Introducing suitable metrics for evaluating approaches for imbalanced data streams.

Intended Audience

PhD students, researchers and practitioners that face classification problems with imbalanced data streams.

Short Biography

Leandro L. Minku is a Lecturer (Assistant Professor) at the Department of Informatics, University of Leicester (UK). Prior to that, he was a research fellow at the University of Birmingham (UK). He received the PhD degree in Computer Science from the University of Birmingham (UK) in 2010. Dr. Minku’s main research interests are online learning / data stream mining, class imbalance learning, ensembles of learning machines and computational intelligence for software engineering. Of special interest to the proposed tutorial, he has published on online learning and class imbalance learning at TNNLS, TKDE and IJCAI, besides having co-chaired an IJCAI workshop on Learning in the Presence of Class Imbalance and Concept Drift and guest edited a Neurocomputing Special Issue on this topic. He also gave an invited tutorial on Online Ensemble Approaches for Data Stream Mining at the Summer Training Workshop “Big Data meets Machine Learning”, Wroclaw, Poland, 2015. Dr. Minku is a steering committee member for the International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE), an Associate Editor for the Journal of Systems and Software, and a conference correspondent for IEEE Software.

Shuo Wang is a Research Fellow at the Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA) in the School of Computer Science, the University of Birmingham (UK). She received the BSc degree in Computer Science from the Beijing University of Technology (BJUT), China, in 2006, and was a member of Embedded Software and System Institute in BJUT in 2007. She received the PhD degree in Computer Science from the University of Birmingham, UK, in 2011, sponsored by the Overseas Research Students Award (ORSAS) from the British Government (2007). Dr. Wang’s research interests include class imbalance learning, ensemble learning, online learning and machine learning in software engineering. Her work has been published in internationally renowned journals and conferences, such as IEEE TKDE and IEEE TNNLS. She was the chair of the IJCAI’17 Workshop on Learning in the Presence of Class Imbalance and Concept Drift and the guest editor of the special issue in Neurocomputing.

Giacomo Boracchi received the MS degree in Mathematics from the Università degli Studi di Milano, Italy, and the PhD degree in Information Technology from the Politecnico di Milano, Italy, in 2004 and 2008, respectively. He was a researcher at the Tampere International Center for Signal Processing, Finland, in 2004-2005. Currently, he is an Assistant Professor at the Dipartimento di Elettronica, Informazione e Bioingegneria of the Politecnico di Milano, Italy. His main research interests include learning methods for nonstationary environments, as well as mathematical and statistical methods for image processing and analysis. In 2015 he received the IBM Faculty Award, in 2016 the IEEE Transactions on Neural Networks and Learning Systems Outstanding Paper Award, and in 2017 the Nokia Visiting Professor grant. He was co-chair of IntECS (2014, 2015, 2016), part of IEEE SSCI, and co-organizer of Special Sessions at IJCNN 2014, 2015 and 2016. He has been guest editor of the Special Issue “Model Complexity, Regularization and Sparsity” in IEEE Computational Intelligence Magazine (November 2016).


Title: Artificial Intelligence in Business (Changed to: Reinforcement Learning: Principles, Algorithms and Applications) (IJCNN_07)

Organized by Meng Joo Er


Reinforcement Learning (RL) is a type of Machine Learning in which an agent learns by interacting with its environment. It is not told by a trainer what to do; instead, it learns by trial and error which actions yield the highest reward in a given situation, even when the reward is not obvious or immediate. It learns how to solve problems rather than being taught what solutions look like.
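The trial-and-error loop described above can be sketched with tabular Q-learning on a toy corridor environment (an illustrative example of ours, not part of the tutorial material):

```python
import random

random.seed(0)

# Toy corridor: states 0..4; the agent gets a reward only on reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)  # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: usually exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update: move Q toward reward + discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)  # the learned greedy action in every non-terminal state
```

Note that no state is ever labeled with the "correct" action; the rightward policy emerges purely from the delayed reward, which is the essence of the RL setting.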

RL is how DeepMind created the AlphaGo system that beat a high-ranking Go player (and has recently been winning online Go matches anonymously). It is how the University of California, Berkeley’s BRETT robot learns to move its hands and arms to perform physical tasks such as stacking blocks or screwing the lid onto a bottle in just three hours (or ten minutes, if it is told where the objects it will work with are, and where they need to end up). Developers at a hackathon built a smart trash can called AutoTrash that used reinforcement learning to sort compostable and recyclable rubbish into the right compartments. Reinforcement learning is the reason Microsoft bought Maluuba, which it plans to use to aid in understanding natural language for search and chatbots, as a stepping stone to general intelligence.

The objectives of this tutorial are to review RL algorithms developed by various researchers over the last few decades. It will have a comprehensive coverage of three aspects of RL, namely Principles, Algorithms and Applications. Specifically, we will cover Deep Reinforcement Learning and practical applications of RL ranging from Manufacturing to Finance.

Intended Audience

PhD students, researchers and practitioners with interests in Reinforcement Learning, in particular Deep Reinforcement Learning and practical applications of Reinforcement Learning ranging from engineering and medicine to finance and gaming.

Short Biography

Er Meng Joo is currently a Full Professor in Electrical and Electronic Engineering at Nanyang Technological University, Singapore. He served as the Founding Director of the Renaissance Engineering Programme and as an elected member of the NTU Advisory Board from 2009 to 2012. Furthermore, he served as a member of the NTU Senate Steering Committee from 2010 to 2012.

He has authored five books: “Dynamic Fuzzy Neural Networks: Architectures, Algorithms and Applications” and “Engineering Mathematics with Real-World Applications”, published by McGraw Hill in 2003 and 2005 respectively; “Theory and Novel Applications of Machine Learning”, published by In-Tech in 2009; and “New Trends in Technology: Control, Management, Computational Intelligence and Network Systems” and “New Trends in Technology: Devices, Computer, Communication and Industrial Systems”, both published by SCIYO. He has also authored 18 book chapters and more than 500 refereed journal and conference papers in his research areas of interest.

Professor Er was bestowed the Web of Science Top 1% Best Cited Paper Award and the Elsevier Top 20 Best Cited Paper Award in 2007 and 2008, respectively. In recognition of the significant and impactful contributions of his research projects to Singapore’s development, Professor Er won the Institution of Engineers, Singapore (IES) Prestigious Engineering Achievement Award twice (2011 and 2015). He is also the only dual winner in Singapore of the IES Prestigious Publication Award in Application (1996) and the IES Prestigious Publication Award in Theory (2001). Under his leadership, the NTU Team emerged first runner-up in the Freescale Technology Forum Design Challenge 2008. He received the Teacher of the Year Award for the School of EEE in 1999, the School of EEE Year 2 Teaching Excellence Award in 2008, the Most Zealous Professor of the Year Award in 2009 and the Outstanding Mentor Award in 2014. He also received the Best Session Presentation Award at the World Congress on Computational Intelligence in 2006 and the Best Presentation Award at the International Symposium on Extreme Learning Machine in 2012. On top of this, he has received more than 60 awards at international and local competitions.

Currently, Professor Er serves as the Editor-in-Chief of three international journals, namely the International Journal of Intelligent Autonomous Systems, Transactions on Machine Learning and Artificial Intelligence, and the International Journal of Electrical and Electronic Engineering and Telecommunications. He also serves as an Area Editor of the International Journal of Intelligent Systems Science and an Associate Editor of 14 refereed international journals, namely IEEE Transactions on Cybernetics, Information Sciences, Neurocomputing, Asian Journal of Control, International Journal of Fuzzy Systems, ETRI Journal, International Journal of Humanoid Robots, International Journal of Modelling, Simulation and Scientific Computing, International Journal of Applied Computational Intelligence and Soft Computing, International Journal of Business Intelligence and Data Mining, International Journal of Fuzzy and Uncertain Systems, International Journal of Automation and Smart Technology, and International Journal of Intelligent Information Processing, and an editorial board member of the Open Automation and Control Systems Journal and the EE Times.

Professor Er has been invited to deliver more than 60 keynote speeches and invited talks overseas. He has also been active in professional bodies. He has served as Chairman of IEEE Computational Intelligence Society (CIS) Singapore Chapter (2009 to 2011) and Chairman of IES Electrical and Electronic Engineering Technical Committee (EEETC) (2004 to 2006 and 2008 to 2012). Under his leadership, the IEEE CIS Singapore Chapter won the CIS Outstanding Chapter Award 2012 (The Singapore Chapter is the first chapter in Asia to win the award). In recognition of his outstanding contributions to professional bodies, he was bestowed the IEEE Outstanding Volunteer Award (Singapore Section) and the IES Silver Medal in 2011. Due to his outstanding contributions in education, research, administration and professional services, he is listed in Who’s Who in Engineering, Singapore, Edition 2013.


Title: Adaptive Resonance Theory in Social Media Clustering with Applications (IJCNN_08)

Organized by Lei Meng, Ah-Hwee Tan, Donald C. Wunsch and Leonardo Enzo Brito da Silva


Mining user-centric social media data streams, i.e., social networks and user-generated multimedia content, offers great opportunities for clustering in unstructured data organization and knowledge discovery. However, the large volume, dynamic nature, and heterogeneity of social media data pose challenges in terms of scalability, online learning capability, multimodal information fusion, and robustness to noise and parameter settings.

Adaptive resonance theory (ART) has long been an important approach in computational intelligence. In this tutorial, we will systematically describe the frontiers and challenges of modern clustering techniques in social media analytics, and present a class of novel ART-based clustering techniques that tackle different aspects of the social media clustering challenges, illustrated with real-world social media understanding and mining tasks.

Intended Audience

Nowadays, social media has become a natural testbed for media understanding, social network mining, and user behaviour and emotion understanding, where clustering holds great potential. This tutorial targets audiences interested in ART and clustering and their applications in social media analytics. More importantly, we would like to share our perspective on the understanding of ART and the methodologies for extending ART towards online and fully automated self-organizing clustering algorithms that address the social media clustering challenges.

Short Biography

Lei Meng is currently a Research Fellow with the Joint NTU-UBC Research Center of Excellence in Active Living for the Elderly (LILY), Nanyang Technological University, Singapore, working with Prof. Chunyan Miao of Nanyang Technological University and Prof. Cyril Leung of the University of British Columbia. His current research focuses on unsupervised learning, deep learning, and big data analytics for AI-powered computational e-commerce and aging healthcare applications. He obtained his Ph.D. degree under the supervision of Prof. Ah-Hwee Tan in the School of Computer Science and Engineering, Nanyang Technological University, Singapore, in February 2015, with the thesis titled “Clustering and Heterogeneous Information Fusion for Social Media Theme Discovery and Associative Mining.” This tutorial is extended from his Ph.D. work. He has published more than ten conference and journal papers, many of them in well-known and top venues such as TKDE, TNNLS, SDM, ICMR, and IJCNN. He is an editorial board member of Applied Soft Computing and has served as a Program/Technical Committee member for a number of international conferences. He also frequently serves as a reviewer for high-quality conferences and journals, such as KDD, ICDM, SDM, TNNLS, Applied Intelligence, and Applied Soft Computing.

Ah-Hwee Tan received the Ph.D. in Cognitive and Neural Systems from Boston University, and the M.Sc. and B.Sc. (1st Class Honors) in Computer and Information Science from the National University of Singapore. He is currently a Full Professor and Associate Chair (Research) at the School of Computer Science and Engineering (SCSE), Nanyang Technological University (NTU), where he held the positions of Head of the Division of Software & Information Systems from 2008 to 2014 and founding Director of the Emerging Research Laboratory from 2004 to 2008. Prior to joining NTU, he was a Research Manager at the A*STAR Institute for Infocomm Research (I2R), spearheading the Text Mining and Intelligent Agents research programmes. His research interests include cognitive and neural systems, brain-inspired intelligent agents, machine learning, knowledge discovery and text mining. Prof. Tan has published over 200 technical papers, including ten edited books and proceedings volumes. He holds two US patents and five Singapore patents, and has commercialized a suite of document analysis and text mining technologies. He is a Senior Member of IEEE, a Member of the Web Intelligence (WI) Technical Committee and Conference Steering Committee, and Vice Chair of the IEEE CIS Task Force on Towards Human-Like Intelligence.

Donald Wunsch is the Mary K. Finley Missouri Distinguished Professor at Missouri S&T. Earlier employers were: Texas Tech University, Boeing, Rockwell International, and International Laser Systems. His education includes: Executive MBA, Washington University in St. Louis; Ph.D., Electrical Engineering, University of Washington (Seattle); M.S., Applied Mathematics (same institution); B.S., Applied Mathematics, University of New Mexico; and the Jesuit Core Honors Program, Seattle University. Key research contributions are: clustering/unsupervised learning; adaptive resonance and reinforcement learning architectures, hardware and applications; neurofuzzy regression; Traveling Salesman Problem heuristics; robotic swarms; and bioinformatics. He is an IEEE Fellow and previous INNS President, INNS Fellow and Senior Fellow 2007-2013, NSF CAREER Award winner, and winner of the 2015 INNS Gabor Award. He served as IJCNN General Chair and on several boards, including the St. Patrick’s School Board, the IEEE Neural Networks Council, the International Neural Networks Society, and the University of Missouri Bioinformatics Consortium, and chaired the Missouri S&T Information Technology and Computing Committee and the Student Design and Experiential Learning Center Board. He has produced 18 Ph.D. recipients in Computer Engineering, Electrical Engineering, and Computer Science; has attracted over $10 million in sponsored research; and has over 400 publications, including nine books. His research has been cited over 12,000 times.

Leonardo Enzo Brito da Silva received the B.S. and the M.S. degrees in electrical engineering from Universidade Federal do Rio Grande do Norte, Natal, Brazil, in 2011 and 2013, respectively. He is currently a PhD candidate in computer engineering at Missouri University of Science and Technology, Rolla, USA. His research interests include clustering/unsupervised learning and data visualization, adaptive resonance theory, self-organizing maps, reinforcement learning, information theory and evolutionary computation.


Title: Non-Iterative Learning Methods for Classification and Forecasting (IJCNN_09)

Organized by Ponnuthurai Nagaratnam Suganthan


This tutorial will first introduce the main non-iterative learning paradigms with closed-form solutions, such as randomization-based feedforward neural networks, randomization-based recurrent neural networks, and kernel ridge regression. The popular feedforward instantiation called the random vector functional link neural network (RVFL) originated in the early 1990s. Other feedforward methods include random weight neural networks (RWNN), extreme learning machines (ELM), etc. Reservoir computing methods such as echo state networks (ESN) and liquid state machines (LSM) are randomized recurrent networks. Another non-iterative paradigm is based on the kernel trick, such as kernel ridge regression and kernel extreme learning machines. The tutorial will also consider computational complexity as the scale of the classification/forecasting problems increases. A further non-iterative paradigm is the random forest (RF). RF does not have a closed-form solution, but its learning algorithm is non-iterative: once a tree is grown, it remains fixed. The tutorial will also present extensive benchmarking studies using classification and forecasting datasets.
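
To illustrate the closed-form flavour of these paradigms, here is a minimal RVFL-style sketch in Python (our own simplified illustration under stated assumptions; `rvfl_train` and its hyperparameters are hypothetical names, not the tutorial's code): the hidden weights are drawn at random and never trained, and the output weights come from a single ridge-regression solve rather than iterative optimisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_train(X, y, n_hidden=50, lam=1e-3):
    """Fit an RVFL-style net: random fixed hidden layer + closed-form ridge solve."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # random features, never trained
    D = np.hstack([X, H])                  # direct input-output links ("functional link")
    # closed form: beta = (D'D + lam*I)^{-1} D'y -- no iterative optimisation
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# toy regression problem: y is a linear function of the inputs
X = rng.standard_normal((200, 2))
y = X[:, 0] - X[:, 1]
model = rvfl_train(X, y)
```

The single linear solve is what keeps the training cost predictable as problem scale grows — the point the tutorial's complexity discussion addresses.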

Intended Audience

This presentation will include basics as well as recent advances, so both researchers commencing their research and experienced researchers can attend. Practitioners will also benefit from this tutorial. It will be a companion to the IJCNN 2018 special session on the same topic: Non-iterative approaches to learning.

Short Biography
Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK, in 1990, 1992 and 1994, respectively. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney, in 1995–96 and as a Lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland, in 1996–99. He moved to NTU in 1999. He is an Editorial Board Member of the Evolutionary Computation journal, MIT Press. He is an associate editor of the IEEE Transactions on Cybernetics (2012 – ), IEEE Transactions on Evolutionary Computation (2005 – ), Information Sciences (Elsevier) (2009 – ), Pattern Recognition (Elsevier) (2001 – ) and Int. J. of Swarm Intelligence Research (2009 – ) journals. He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010 – ), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the IEEE Transactions on Evolutionary Computation outstanding paper award in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD Dissertation Award in 2014. His research interests include swarm and evolutionary algorithms, pattern recognition, big data, deep learning and applications of swarm, evolutionary & machine learning algorithms. His SCI-indexed publications attracted over 1000 SCI citations in each of the calendar years 2013, 2014, 2015, 2016 and 2017. He was selected as a highly cited researcher in computer science by Thomson Reuters in 2015, 2016, and 2017. He served as the General Chair of IEEE SSCI 2013. He has been a member of the IEEE (S’90, M’92, SM’00, F’15) since 1990 and was an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014-2016.


Title: Graph based and Topological Unsupervised Machine Learning  (IJCNN_10)

Organized by Nistor Grozavu and Rushed Kanawati


One of the main tasks in the field of exploratory data analysis of high-dimensional data is the formation of a simplified, usually visual, overview of the explored data. This can be achieved through simplified descriptions or summaries, which should make it possible to discover or identify the features or patterns of most relevance. Clustering and projection are two useful methods for achieving this task. On the one hand, classical clustering algorithms produce a grouping of the data according to a given criterion. Projection methods, on the other hand, represent the data points in a lower-dimensional space in such a way that the clusters and the metric relations of the data items are preserved as faithfully as possible. Topological and graph-based methods can do both clustering and projection simultaneously.
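
As an illustration of how a topological method performs clustering and projection at once, the sketch below implements a minimal self-organizing map (our own simplified example in Python; the function names and hyperparameters are illustrative, not from the tutorial):

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Minimal self-organizing map: groups the data (clustering) while mapping
    it onto a 2-D grid that preserves neighbourhood relations (projection)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        decay = np.exp(-t / epochs)            # shrink learning rate and radius
        for x in rng.permutation(data):
            bmu = np.unravel_index(
                np.linalg.norm(w - x, axis=-1).argmin(), (rows, cols))
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1)
                       / (2 * (sigma * decay) ** 2))
            w += lr * decay * g[..., None] * (x - w)   # pull neighbourhood toward x
    return w

def project(w, x):
    """Grid coordinates of the best-matching unit for sample x."""
    return np.unravel_index(np.linalg.norm(w - x, axis=-1).argmin(), w.shape[:2])

# two well-separated groups end up on different parts of the grid
data = np.vstack([np.full((30, 3), 0.1), np.full((30, 3), 0.9)])
w = train_som(data)
```

The grid coordinates returned by `project` are simultaneously a cluster assignment and a 2-D embedding — the double role the paragraph above describes.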

Intended Audience

Researchers, PhD students and practitioners with an interest in clustering and topological unsupervised machine learning.

Short Biography

Rushed Kanawati received a PhD degree in computer science from the National Polytechnic Institute of Grenoble (INPG), France, in 1998. He then joined INRIA as an expert engineer, where he worked on designing and implementing web-based recommender systems. Since 2000 he has been a member of the LIPN laboratory, University Paris 13, where he conducts research in the areas of case-based reasoning, machine learning and complex network analysis. His recent research covers topics such as link prediction and community detection in complex networks, as well as multiplex and attributed network analysis. He is the author of more than 120 papers in international and national venues. He has also been involved in organizing several conferences, workshops and tutorials, mainly in the area of complex network analysis.

Nistor Grozavu received his Master of Computer Science degree in Fundamental Informatics from Marseille II University in 2006. He completed his Ph.D. in Computer Science (Unsupervised Learning) in 2009 in the Computer Science Laboratory of Paris 13 University. He is currently an Associate Professor in Computer Science at Paris 13 University, working with the Machine Learning and Applications team of the LIPN laboratory. His research interests include unsupervised learning, transfer learning, dimensionality reduction, collaborative learning, machine learning by matrix factorization, and content-based information retrieval. He is a member of IEEE and INNS, and the founder and head of the INNS AML group.


Title: Methods and Resources for Texture Classification (IJCNN_11)

Organized by Paulo Rodrigo Cavalin and Luiz S. Oliveira


We will present a comprehensive review of methods and resources for texture recognition, and provide insights and advice on applying them in practice, drawing on the instructors’ experience. Regarding methods, we plan to cover everything from the most traditional approaches, such as texture descriptors based on gray-level co-occurrence matrices (GLCM), to more recent approaches such as Convolutional Neural Networks (CNNs) and multi-scale encodings such as Fisher Vectors. Regarding resources, the goal is not only to present a list of publicly available datasets, ranging from the most well-known, such as Brodatz, to more recent ones such as DTD, but also to discuss the specific characteristics of each and the current state-of-the-art results. We also plan to describe the methodology employed to create datasets for forest species recognition and breast cancer classification, giving the audience an idea of how to create datasets for other problems. Finally, we will present tools that can help the audience speed up the development of texture recognition systems.
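
As a taste of the most traditional approach mentioned above, the following minimal Python sketch (our own illustration; the function names are hypothetical) computes a gray-level co-occurrence matrix and two classic Haralick-style features from it:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    entry (i, j) counts pixel pairs whose gray levels are i and j."""
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

def glcm_features(M):
    """Two classic texture features from a normalized GLCM."""
    P = M / M.sum()
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()    # high when dissimilar levels co-occur
    energy = (P ** 2).sum()                # high for uniform, repetitive textures
    return contrast, energy

# a tiny 4-level image made of four homogeneous 2x2 blocks
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
M = glcm(img, levels=4)
```

In practice several displacements and orientations are accumulated, and the resulting feature vector is fed to any standard classifier.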

Intended Audience

This tutorial is mostly targeted at students, researchers and professionals with some knowledge of image recognition but little to no knowledge of texture recognition. They will be given a broad but shallow view of the state-of-the-art techniques for texture recognition, and will be able to go deeper into them later if they decide to pursue research in this area. People with intermediate to advanced experience in texture recognition can also benefit from the tutorial, since we plan to present the most recent advances in the field.

Short Biography

Paulo R. Cavalin is currently a research scientist at IBM Research, Rio de Janeiro, RJ, Brazil. He received a Ph.D. degree in automated production engineering from École de Technologie Supérieure (ÉTS) – Université du Québec in 2011, an M.Sc. degree in applied informatics from Pontifícia Universidade Católica do Paraná (PUCPR), in 2005, and a B.Sc. degree in informatics from Universidade Estadual de Ponta Grossa (UEPG), in 2002. Since he joined IBM in 2012, he has been conducting both theoretical and applied research in pattern recognition, machine learning, and computer vision.

Luiz S. Oliveira received his B.S. degree in Computer Science from Unicenp, Curitiba, PR, Brazil, the M.Sc. degree in electrical engineering and industrial informatics from the Centro Federal de Educação Tecnológica do Paraná (CEFETPR), Curitiba, PR, Brazil, and Ph.D. degree in Computer Science from École de Technologie Supérieure, Université du Québec in 1995, 1998 and 2003, respectively. From 2004 to 2009 he was professor of the Computer Science Department at Pontifical Catholic University of Paraná, Curitiba, PR, Brazil. In 2009, he joined the Federal University of Paraná, Curitiba, PR, Brazil, where he is professor of the Department of Informatics and head of the Graduate Program in Computer Science. His current interests include Pattern Recognition, Machine Learning, Image Analysis, and Evolutionary Computation.


Title: Tutorial on Dynamic Classifier Selection: Recent Advances and Perspectives (IJCNN_12)

Organized by Rafael M. O. Cruz, Robert Sabourin and George D. C. Cavalcanti


Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly for each new sample to be classified. DS has become an active research topic in the multiple classifier systems literature in recent years, as more and more works report the superior performance of such techniques over static combination approaches, especially when dealing with small-sized datasets, imbalanced problems and noisy data.

DS techniques work by estimating the competence level of each classifier in a pool of classifiers. Only the most competent classifier, or an ensemble containing the most competent classifiers, is selected to predict the label of a given test sample. The rationale for such techniques is that not every classifier in the pool is an expert in classifying all unknown samples; rather, each base classifier is an expert in a different local region of the feature space. The key aspect of dynamic selection techniques is therefore how to estimate the competence of the base classifiers in the local regions and select the most competent ones for the classification of any given query sample.
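
The competence-estimation idea can be made concrete with a short sketch. The following minimal Python example (our own simplified illustration of one classic strategy, Overall Local Accuracy; it is not the tutorial's code) selects, for each query, the pool member that is most accurate on the query's nearest neighbours in a held-out validation set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
# DSEL: the validation set used to define regions of competence
X_tr, X_dsel, y_tr, y_dsel = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
pool = []                                  # pool of weak base classifiers (bagging)
for _ in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=3, random_state=0)
                .fit(X_tr[idx], y_tr[idx]))

nn = NearestNeighbors(n_neighbors=7).fit(X_dsel)

def ola_predict(x):
    """Overall Local Accuracy: use the classifier that is most accurate in the
    region of competence (the query's k nearest neighbours in the DSEL)."""
    region = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    competence = [(clf.predict(X_dsel[region]) == y_dsel[region]).mean()
                  for clf in pool]
    return pool[int(np.argmax(competence))].predict(x.reshape(1, -1))[0]
```

Other DS techniques differ mainly in how the region of competence is defined and how competence is scored, but they share this overall structure.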

The goal of this tutorial is to present to attendees a detailed formulation of each step involved in a dynamic selection system, from the initial steps of generating a pool of classifiers to the final steps of selecting the base classifiers based on each new query sample. Moreover, an experimental evaluation of the state-of-the-art dynamic selection techniques, as well as guidelines for future research, is also presented.

Intended Audience

The tutorial is intended for researchers interested in multiple classifier systems and ensemble learning, as well as those interested in dealing with complex pattern recognition problems such as small-sample-sized datasets, noisy datasets, imbalanced distributions and concept drift.

Short Biography

Rafael M. O. Cruz obtained a Ph.D. in Engineering from the École de Technologie Supérieure (ÉTS), Université du Québec, in 2016. Currently he is a post-doctoral researcher at LIVIA (Laboratoire d’imagerie, de vision et d’intelligence artificielle). His main research interests are ensembles of classifiers, dynamic ensemble selection, local learning, meta-learning, data pre-processing and computer vision.

R. Sabourin joined the physics department of the University of Montreal in 1977, where he was responsible for the design, experimentation and development of scientific instrumentation for the Mont Mégantic Astronomical Observatory. His main contribution was the design and implementation of a microprocessor-based fine tracking system combined with a low-light-level CCD detector. In 1983, he joined the École de Technologie Supérieure, Université du Québec, in Montréal, where he co-founded the Department of Automated Manufacturing Engineering, in which he is currently Full Professor and teaches Pattern Recognition, Evolutionary Algorithms, Neural Networks and Fuzzy Systems. In 1992, he also joined the Computer Science Department of the Pontifícia Universidade Católica do Paraná (Curitiba, Brazil), where he was co-responsible for the implementation of a master’s program in 1995 and a PhD program in applied computer science in 1998. Since 1996, he has been a senior member of the Centre for Pattern Recognition and Machine Intelligence (CENPARMI, Concordia University). Since 2012, he has held the Research Chair specializing in Adaptive Surveillance Systems in Dynamic Environments. Dr Sabourin is the author or co-author of more than 400 scientific publications, including journals and conference proceedings. His research interests are in the areas of adaptive biometric systems, adaptive surveillance systems in dynamic environments, intelligent watermarking systems, evolutionary computation and bio-cryptography.

George D. C. Cavalcanti received the D.Sc. degree in Computer Science from Center for Informatics, Federal University of Pernambuco, Brazil. He is currently an Associate Professor with the Center for Informatics, Federal University of Pernambuco, Brazil. His research interests include machine learning, pattern recognition, computer vision, and biometrics.


Title: Deep, Transfer and Emergent Reinforcement Learning Techniques for Intelligent Agents (IJCNN_13)

Organized by Abdulrahman Altahhan and Vasile Palade


The goal of this tutorial is to gain an in-depth understanding of recent developments in RL, such as True Online TD and BackTD, and of the issues surrounding the combination of Deep Learning and Reinforcement Learning in the games and robotics domains.

Intended Audience

The potential audience includes mechanical engineers interested in developing driverless cars, engineers interested in applying deep learning and reinforcement learning in their line of work, and researchers who want to advance the frontier of the topic.

Short Biography

Abdulrahman Altahhan is a Senior Lecturer in Computing at Leeds Beckett University. He previously served as Programme Director of the MSc in Data Science at Coventry University, UK, and before that worked in Dubai as an Assistant Professor and Acting Dean. He obtained a PhD in Reinforcement Learning and Robotics in 2008 and an MPhil in Fuzzy Expert Systems. Dr Altahhan is actively researching deep reinforcement learning applied to robotics and autonomous agents, with publications on this front. He has designed and developed a novel family of reinforcement learning methods and studied their underlying mathematical properties. Recently he established a new set of algorithms and findings combining deep learning with reinforcement learning in a unique way that is hoped to contribute to the development of this new research area. He has presented at prestigious conferences and venues in the area of machine learning and neural networks. Dr Altahhan is a reviewer for important neural-network journals and venues from Springer and the IEEE, including the Neural Computing and Applications journal and the International Conference on Robotics and Automation (ICRA), and he serves on the programme committees of related conferences such as INNS Big Data 2016. He has organised several special sessions on Deep Reinforcement Learning at IJCNN 2016 and IJCNN 2017, as well as at the ICONIP 2016 and 2017 conferences. Dr Altahhan is an EPSRC reviewer and has taught Machine Learning, Neural Networks and Big Data Analysis modules in the MSc in Data Science. He is an IEEE Member, a member of the IEEE Computational Intelligence Society and a member of the International Neural Network Society.

Vasile Palade is a Reader in Pervasive Computing in the Faculty of Engineering and Computing and a member of the Cogent Computing Applied Research Centre at Coventry University, UK. He previously had academic and research positions at the University of Oxford – UK (Departmental Lecturer in the Department of Computer Science), University of Hull – UK (Research Fellow in the Department of Engineering) and the University of Galati – Romania (Associate Professor in the Department of Computer Science and Engineering).

His research interests lie in the area of machine learning/computational intelligence, and encompass mainly neuro-fuzzy systems, various nature inspired algorithms such as swarm optimization algorithms, hybrid intelligent systems, ensemble of classifiers, class imbalance learning. Application areas include Bioinformatics problems, fault diagnosis, web usage mining, among others.

Dr Palade is author and co-author of more than 130 papers in journals and conference proceedings as well as books on computational intelligence and applications. He has also co-edited several books including conference proceedings. He is an Associate Editor for several journals, such as Knowledge and Information Systems (Elsevier), International Journal on Artificial Intelligence Tools (World Scientific), International Journal of Hybrid Intelligent Systems (IOS Press), Neurocomputing (Elsevier). He has delivered keynote talks to international conferences on machine learning and applications.

He was the General Chair for KES2003 – The 7th Int. Conf. on Knowledge Based Intelligent Engineering Systems, Oxford, Sept. 2003, and Co-Chair for ICMLA 2010 – The 9th Int. Conf. on Machine Learning and Applications, Washington D.C., Dec. 2010. Dr Vasile Palade is an IEEE Senior Member and a member of the IEEE Computational Intelligence Society.


Title: Neurosymbolic Learning and Reasoning with Constraints (IJCNN_14)

Organized by Artur S. d’Avila Garcez, Marco Gori, Luis C. Lamb, Luciano Serafini and Michael Spranger


Following the great recent success of deep learning, attention has turned to neural artificial intelligence (AI) systems capable of harnessing knowledge as well as large amounts of data [1,2]. Neural-symbolic integration [3] has sought for many years to benefit from the integration of symbolic AI with neural computation, which can lead to more versatile and explainable learning systems [4]. Recently, constraints have been shown to offer a unifying theoretical framework for learning and reasoning [5,6,9]. Constraint-based neural-symbolic computing (CBNSC) offers a methodology for unifying knowledge representation and machine learning. In this tutorial, we will introduce the theory and practice of CBNSC using a recent computational implementation called Logic Tensor Networks (LTNs) [7], implemented in Python using Google’s TensorFlow. LTNs are a logic-based formalism defined on a first-order language, where formulas have fuzzy-logic semantics and terms are interpreted as feature vectors of real numbers. LTNs allow a well-founded integration of deductive reasoning on a knowledge base with efficient data-driven relational machine learning, e.g. using tensor networks. LTNs enable relational knowledge infusion into, and distilling from, deep networks, thus constraining data-driven learning with background knowledge and allowing deep networks to be interrogated for explainability. LTNs have been successfully applied to image understanding and natural language processing (NLP) tasks [8]. The tutorial will give a general introduction to CBNSC and its practical realization in LTNs, with an ample set of hands-on examples of such applications in Python, and will situate LTNs within the broader landscape of CBNSC.

Specifically, the goals of the tutorial are:

  • to introduce constraint-based learning and reasoning, contrasting it with related work;
  • to place it in the context of neural-symbolic computing, knowledge infusion and distilling;
  • to introduce Logic Tensor Networks as a general framework for harnessing and distilling knowledge from deep networks; and
  • to enable participants to run and understand LTNs through the use of representative small examples as a tool for knowledge-based deep networks in the area of semantic image interpretation.

The tutorial will seek to stimulate attendees to appreciate, beyond their specific areas of expertise, the benefits and limitations of both neural and symbolic approaches and of their integration.
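
To give a concrete feel for the kind of fuzzy-logic semantics underlying LTNs, the sketch below (a simplified, assumed illustration in plain NumPy, not the LTN library itself) grounds a predicate as a differentiable function into [0, 1] and evaluates the satisfaction degree of a universally quantified rule using Łukasiewicz connectives:

```python
import numpy as np

# Łukasiewicz fuzzy connectives on truth degrees in [0, 1]
def t_and(a, b):   return np.maximum(0.0, a + b - 1.0)   # conjunction
def t_or(a, b):    return np.minimum(1.0, a + b)          # disjunction
def t_impl(a, b):  return np.minimum(1.0, 1.0 - a + b)    # implication
def forall(v):     return v.mean()                        # soft universal quantifier

def predicate(x, w):
    """A predicate grounded as a differentiable map from feature vectors
    to truth degrees in [0, 1] (here: a sigmoid of a linear form)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# degree to which the rule "forall x: P(x) -> Q(x)" holds on sampled data;
# the weight vectors w_p and w_q are illustrative, not learned here
rng = np.random.default_rng(0)
xs = rng.standard_normal((100, 3))
w_p, w_q = np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
sat = forall(t_impl(predicate(xs, w_p), predicate(xs, w_q)))
```

Because the satisfaction degree is differentiable in the predicate parameters, logical rules can act as soft constraints on gradient-based learning — the core idea the tutorial develops.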

Intended Audience

The tutorial is of interest to the general WCCI audience, in particular, researchers interested in combining constraints, logic and reasoning with deep neural networks. The tutorial assumes no prior knowledge of logic and will make sure to introduce all necessary basic concepts.

Short Biography

Marco Gori (University of Siena) received the Ph.D. degree in 1990 from the Università di Bologna, Italy, while working partly as a visiting student at the School of Computer Science, McGill University, Montréal. In 1992, he became an associate professor of Computer Science at the Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently a full professor of computer science. His main interests are in machine learning, decision support systems, web mining, and game playing. He was the leader of the WebCrow project, supported by Google, for the automatic solving of crosswords, which outperformed human competitors in an official competition held at the ECAI-06 conference. Dr. Gori serves (or has served) as an Associate Editor of a number of technical journals related to his areas of expertise, has received best paper awards, and has been a keynote speaker at a number of international conferences. He was Chairman of the Italian Chapter of the IEEE Computational Intelligence Society and President of the Italian Association for Artificial Intelligence. He is a fellow of ECCAI, IEEE, and IAPR.

Artur d’Avila Garcez (City, University of London) is Professor of Computer Science and Director of the Research Centre for Machine Learning at City, University of London. He holds a Ph.D. in Computer Science (2000) from Imperial College London. He co-authored two books, Neural-Symbolic Cognitive Reasoning (Springer, 2009) and Neural-Symbolic Learning Systems (Springer, 2002), and more than 150 peer-reviewed publications in Artificial Intelligence, Machine Learning, Neural Computing and Neural-Symbolic Computing. Garcez is president of the Neural-Symbolic Learning and Reasoning Association, editor-in-chief of the Artificial Intelligence and Neural Computation book series, associate editor of the Journal of Artificial Intelligence Research (deep learning and symbolic reasoning track), associate editor of the Journal of Logic and Computation, and a member of the editorial boards and program committees of many journals and international conferences. Garcez has organized many successful workshops at NIPS, IJCAI and ECAI, most recently the CoCo workshops at NIPS, has organized two Dagstuhl seminars (2014 and 2017), designed London’s leading Data Science masters, and is a member of the steering committee of the Human-Level Artificial Intelligence multi-conference series. His research has received funding from the Nuffield Foundation, the EU, the Daiwa Foundation, the Royal Society, Innovate UK, ESRC and EPSRC UK.

Luis Lamb (UFRGS) is Professor and Vice Provost for Research (Pro-Rector for Research) at the Federal University of Rio Grande do Sul, Porto Alegre, Brazil. He holds the Ph.D. in Computing Science from Imperial College London (2000) and the Diploma of the Imperial College (D.I.C.) (2000), as well as an MSc by research (1995) and a BSc in Computer Science (1992) from the Federal University of Rio Grande do Sul, Brazil. He has been an Honorary Visiting Fellow at the Department of Computing, City University London. His research interests include logic in computer science and artificial intelligence, machine learning and neural computation, and social computing. Lamb has co-authored two research monographs: Neural-Symbolic Cognitive Reasoning, with d’Avila Garcez and Gabbay (Springer, 2009), and Compiled Labelled Deductive Systems, with Broda, Gabbay and Russo (IoP, 2004). He is an Editorial Board Member of the Cognitive Technologies book series (Springer Nature). Lamb’s research has led to publications in ACM Transactions on Autonomous and Adaptive Systems, Theoretical Computer Science, Neural Computation, Journal of Logic and Computation, IEEE Transactions on Neural Networks, European Journal of Operational Research, Physica A, Philosophical Transactions of the Royal Society A, The Journal of Theoretical Biology, and at the flagship Artificial Intelligence and Neural Computation conferences AAAI, IJCAI, NIPS and HCOMP. He was a co-organizer of Dagstuhl Seminar 14381: Neural-Symbolic Learning and Reasoning in September 2014 and of Dagstuhl Seminar 17192: Human-Like Neural-Symbolic Computing in May 2017. Lamb holds an Advanced Research Fellowship (2010-2017) from the Brazilian National Research Council (CNPq). He is a professional member of the ACM, ACM SIGACT, AAAI, AMS, ASL, IEEE, C&GCA, and the Brazilian Computer Society.

Luciano Serafini (Fondazione Bruno Kessler) is a researcher with 30 years of experience in knowledge representation and reasoning, the semantic web, and ontologies. Since 1989 he has worked as a researcher at the Fondazione Bruno Kessler, where he has led the data and knowledge management research unit since 2007. He is the main inventor of the logic of context and multi-context systems and of logic-based ontology matching. In recent years he has enlarged his scientific interests by developing approaches for integrating logical reasoning and machine learning, and he is one of the main inventors of Logic Tensor Networks. He has taught several courses at the Universities of Trento and Bolzano on databases and on knowledge representation and reasoning; he regularly supervises M.Sc. and Ph.D. students; he has served as a PC member of the main conferences in Artificial Intelligence, Ontologies, and the Semantic Web; he organizes major scientific events, such as the International Semantic Web Conference (2014); and he has published more than 150 papers, with an h-index of 41.

Michael Spranger (Sony Inc.) – Spranger is a Researcher at the Fundamental Research Laboratory of Sony Computer Science Laboratories Inc., located in Tokyo, Japan. He holds a Ph.D. in Artificial Intelligence (2011) from the Vrije Universiteit Brussel (Belgium). He has authored more than 60 peer-reviewed publications in artificial intelligence, machine learning, robotics, and natural language processing, and holds patents on structured personal information processing. Spranger is the current chair of the IEEE task force on language and cognition. He has served on the program committees of various conferences, including IJCAI, CogSci, ALIFE, and IEEE ICDL-EpiRob, and of many workshops on language and robots (most recently GLU 2017 and SemDeep). Spranger has organised successful workshops and tutorials at IJCAI, at IEEE ICDL-EpiRob, and at various summer schools. His tutorial on language and robots recently won the prize for best tutorial at the summer school on Creativity and Evolution (CAES) 2016.


Title:  Quest for the Neural Input: Electrophysiology, Source Localization and Causality Analysis (IJCNN_15)

Organized by  Zoltán Somogyvári and Péter Érdi


A new era of brain research has been opened by the vast amount of neural data that is now available, and new data analysis methods are needed to take full advantage of these resources. Neural data pose specific challenges for mathematical analysis, and this specificity partially originates in the form of the data: in most cases, large data sets appear as long multi-channel time series. This is the typical form of the data not only in electrophysiology, but also in optical recording techniques and in fMRI.

The focus of this tutorial is neurophysiological data analysis: how mathematical analysis can contribute to answering a crucial open question, one that must be resolved before the input-output transformation implemented by neurons can be understood: how can we determine the inputs of a neuron in an animal during task-solving behavior?

Using this open question as a beacon, we will cover (1) the basic questions of multi-electrode recordings in neural tissue, briefly touching on some challenges of spike sorting; (2) source localization techniques applicable to single neurons at the micro-scale; and (3) new methods of causality analysis.

In more detail:

(1) An average neuron in the cortex receives 10-15 thousand synapses from other neurons. While many fine details are known about the properties of individual synapses, the spatio-temporal transmembrane current patterns that result from the summation of a huge number of individual synaptic inputs over a whole neuron are almost entirely unknown. The main reason for this large gap in our knowledge is the lack of a proper technique for measuring spatio-temporal input patterns on single neurons in behaving animals. While the output of a neuron is readily recognizable in extracellular potential measurements in the form of action potentials, the input that evoked the observed spike is unknown. Without knowing the input, deciphering the input-output transformation implemented by an individual neuron is hopeless.

The steadily improving optical imaging techniques provide extremely good spatial resolution, but they have not yet reached the speed, signal-to-noise ratio, sampling frequency, aperture, and miniaturization necessary to record action potentials and synaptic input patterns over whole neurons in behaving animals.

On the other hand, the number of channels and the spatial resolution of the widely used multi-electrode arrays (MEAs) have increased dramatically in recent years, and further improvements are expected. This relatively low-cost technique is applicable to freely behaving animals as well. Traditionally, only the spike timings are extracted from these extracellular (EC) potential recordings, but recent improvements have significantly increased the spatial information content of MEA measurements. Thus, new analysis techniques are needed to exploit this new information.

(2) We will review how source localization inverse methods can be used to reconstruct the current source density (CSD) on single neurons. We will present the first single-cell CSD (sCSD) method, which is able to reconstruct the full spatio-temporal CSD dynamics of single neurons during action potentials. Using this method, the extracellular observability of back-propagating action potentials in the basal dendrites of cortical neurons, the forward propagation preceding the action potential on the dendritic tree, and signatures of the nodes of Ranvier were demonstrated. Later, the method was extended to incorporate the specific morphology and details of the dendritic tree of the actual cell, thus providing a more precise source reconstruction.
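As background for these inverse methods, the traditional laminar CSD estimate treats the current source density as proportional to the second spatial derivative of the extracellular potential along the probe. The following minimal numpy sketch illustrates only this classic estimate, not the sCSD method itself; the electrode spacing, conductivity value, and array layout are illustrative assumptions:

```python
import numpy as np

def csd_second_derivative(phi, h, sigma=0.3):
    """Traditional 1-D CSD estimate from a laminar multi-electrode recording.

    phi   : (channels, samples) extracellular potential
    h     : inter-electrode spacing
    sigma : extracellular conductivity (assumed homogeneous and isotropic)

    Returns the CSD for the interior channels, shape (channels - 2, samples).
    """
    # CSD ~ -sigma * d^2(phi)/dz^2, approximated by a second spatial difference
    d2phi = phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
    return -sigma * d2phi / h**2
```

The sCSD method discussed in the tutorial goes well beyond this: it models the neuron as a spatially extended source and solves an inverse problem, rather than differentiating the potential directly.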

(3) Finding causal relations is one of the most important steps towards understanding the complex interactions in the nervous system. The majority of traditional analysis methods, such as correlation and mutual information, can reveal linear or nonlinear interdependencies between measurements, but they provide no information about the direction of the causal effect. The first analysis method able to determine directed causal effects was introduced by Clive Granger (1969), based on an idea originating with Norbert Wiener: if the inclusion of a new measurement (time series) improves the prediction of a target time series, then the newly included time series is causal to the predicted one in the Wiener-Granger sense. Over the decades many versions and extensions have emerged, and the original idea was honored with the Nobel Prize in 2003. This definition of causality works well for unidirectionally coupled stochastic systems, but it runs into difficulties when the causal effect is circular and the two systems mutually influence each other, as is often the case in the nervous system. Sugihara et al. introduced a new definition and analysis method for causality in 2012, convergent cross mapping, which promises to reveal not only unidirectional but also circular causal effects.
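The Wiener-Granger idea can be sketched in a few lines: fit a least-squares autoregressive prediction of x from its own past, then from its own past plus the past of y, and compare the residual variances. This is a bare-bones illustration (the function names and the fixed model order are our own choices; a real analysis would add an F-test and model-order selection):

```python
import numpy as np

def lag_matrix(v, order):
    """Design matrix for rows t = order..n-1; column k holds v[t - k]."""
    n = len(v)
    return np.column_stack([v[order - k : n - k] for k in range(1, order + 1)])

def granger_variance_ratio(x, y, order=2):
    """Ratio of residual sums of squares: AR(x) alone vs. AR(x) + lags of y.
    Values well above 1 suggest y is Granger-causal to x."""
    target = x[order:]
    X_restricted = lag_matrix(x, order)                        # x's own past
    X_full = np.hstack([X_restricted, lag_matrix(y, order)])   # add y's past
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid
    return rss(X_restricted) / rss(X_full)
```

For a unidirectionally driven pair (y driving x), the ratio is large in the causal direction and close to 1 in the reverse direction, which is exactly the asymmetry the Wiener-Granger definition exploits.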

In this talk, we will show examples of the application of Sugihara’s new method to reveal causality between EEG signals recorded in different neocortical regions and in the two hippocampi during epileptic seizures, as well as in many other in vitro and in vivo measurements.
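For readers unfamiliar with convergent cross mapping, its core mechanics can be sketched as follows: delay-embed the putative effect, and use nearest neighbours on its shadow manifold to estimate the putative cause; if the cross-map estimate correlates well with the true values, the cause has left its signature in the effect. The sketch below (function names and parameter choices are our own) demonstrates this on coupled logistic maps similar to the example system in the 2012 paper:

```python
import numpy as np

def embed(v, E, tau=1):
    """Delay embedding: row t is (v[t], v[t-tau], ..., v[t-(E-1)*tau])."""
    n = len(v)
    return np.column_stack([v[(E - 1) * tau - k * tau : n - k * tau]
                            for k in range(E)])

def ccm_skill(source, target, E=2, tau=1):
    """Cross-map the source from the target's shadow manifold.

    High skill (correlation of estimate with truth) is evidence that the
    source causally drives the target, since the target then encodes it.
    """
    M = embed(target, E, tau)          # shadow manifold of the target
    s = source[(E - 1) * tau:]         # source values aligned with M's rows
    preds = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                  # exclude the point itself
        nb = np.argsort(d)[:E + 1]     # E+1 nearest neighbours
        w = np.exp(-d[nb] / max(d[nb][0], 1e-12))
        preds[i] = (w / w.sum()) @ s[nb]
    return np.corrcoef(preds, s)[0, 1]
```

Note the asymmetry relative to Granger: the driver is recovered from the manifold of the driven system, which is what lets the method handle deterministic, even circularly coupled, dynamics.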

Intended Audience

The tutorial can be equally useful for those who work in a mathematics-related field and are interested in neuroscience, time series analysis, or their combination, and for those whose main interest is neuroscience and who want to learn new, cutting-edge techniques for analyzing and better understanding their data. The application of the covered data analysis methods is not limited to neural time series.

Short Biography

Dr. Zoltán Somogyvári is a Senior Researcher at the Wigner Research Centre for Physics (Budapest, Hungary) and leader of the Theoretical Neuroscience and Complex Systems Research Group. He holds an MSc in physics and a PhD in neuroscience. He has introduced several new data analysis concepts and methods in the fields of neuroscience and complex systems, including the first inverse method for reconstructing the spatio-temporal distribution of current source density on single neurons, which reveals the synaptic inputs of neurons.

In network science, he took part in introducing a framework for predicting new technologies based on patent citation networks, derived analytical results on the dynamics of random Boolean networks as a general model of genetic networks, and introduced a hierarchical extension of a game-theoretical model, the minority game.

Recently, he has been working on a new causality analysis method that can distinguish all possible causal relationships between parallel time series, including unidirectional (driver) connections, bidirectional (circular) coupling, and the existence of a hidden common cause.