Title: Computational Intelligence for Data Science and Big Data (HYB7_01)
Organized by Isaac Triguero, Alberto Fernández, Mikel Galar
In the era of big data, recent advances in distributed technologies enable data mining techniques to discover unknown patterns or hidden relations in voluminous data much faster. Extracting knowledge from big data becomes a very interesting and challenging task in which we must consider new paradigms to develop scalable algorithms. However, computational intelligence models for machine learning and data mining cannot be straightforwardly adapted to the new space and time requirements. Hence, existing algorithms should be redesigned, or new ones developed, to take advantage of their capabilities in the big data context. Moreover, real-world complex big data problems pose several issues besides computational complexity, and big data mining techniques should be able to deal with challenges such as dimensionality, class imbalance, and the lack of annotated samples, among others.
The MapReduce framework, introduced by Google, allows us to process large amounts of information. Its open-source implementation, named Hadoop, has enabled the development of scalable algorithms and has become the de facto standard for addressing Big Data problems. Recently, new alternatives to the standard Hadoop-MapReduce framework have arisen to improve performance in this scenario, the Apache Spark project being the most relevant one. Even when working on Spark, the MapReduce framework implies that existing algorithms need to be redesigned, or new ones developed, to take advantage of their capabilities in the big data context.
In this tutorial, we will first provide a gentle introduction to the problem of Big Data as well as a presentation of recent technologies (the Hadoop ecosystem, Spark, Flink).
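For readers new to the paradigm, the map-shuffle-reduce pipeline that Hadoop and Spark distribute across a cluster can be sketched in a few lines of plain Python. This is a single-machine illustration of the programming model only, not the actual Hadoop or Spark API:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values collected for each key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big models", "data mining"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {"big": 2, "data": 2, "models": 1, "mining": 1}
```

In a real cluster, the map and reduce calls run in parallel on many nodes and the shuffle moves data over the network, which is precisely why algorithms must be redesigned around this two-phase structure.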
Then, we will dive into the field of Big Data analytics, explaining the challenges it poses to Computational Intelligence techniques and introducing Machine Learning libraries such as Mahout, MLlib and FlinkML. Afterwards, we will cover two of the main topics of WCCI 2018, namely fuzzy modeling and evolutionary models in the Big Data context. We start by introducing the features and design of the most recent approaches to fuzzy modeling on Big Data.
Then, we continue with several case studies on evolutionary instance selection/generation, feature selection/weighting and imbalanced data classification. We aim to define the direction for the design of powerful algorithms based on both fuzzy systems and evolutionary algorithms, and to show how the information extracted with these models can be useful for experts. Finally, we will use the software associated with the presented case studies to carry out a live demonstration of some of our most recently developed models for Big Data classification.
This tutorial is aimed at all researchers involved in the development of fuzzy models as well as evolutionary algorithms, providing them with an overview of the existing technologies for dealing with Big Data problems. The audience will also come to understand the impact of such approaches on Data Science, in particular through our most recent research publications on the topic and a demonstration session intended to help participants better understand the underlying process.
Title: Empirical Approach: How to get Fast, Interpretable Deep Learning (HYB7_02)
Organized by Plamen Angelov and Xiaowei Gu
We are witnessing an explosion of data (streams) being generated and growing exponentially. Nowadays we carry in our pockets gigabytes of data in the form of USB flash memory sticks, smartphones, smartwatches, etc. Extracting useful information and knowledge from these big data streams is of immense importance for society, the economy and science. Deep Learning has quickly become synonymous with a powerful method for endowing items and processes with elements of AI, in the sense that it makes human-like performance possible in recognising images and speech. However, the currently used methods for deep learning, which are based on neural networks (recurrent, belief, etc.), are opaque (not transparent), require huge amounts of training data and computing power (hours of training using GPUs), and are offline; their online versions based on reinforcement learning have no proven convergence and do not guarantee the same result for the same input (they lack repeatability).
The presenters recently introduced a new concept of an empirical approach to machine learning and fuzzy sets and systems, proved convergence for a class of such models, and exploited the link between neural networks and fuzzy systems (neuro-fuzzy systems are known to exhibit a duality between radial basis function (RBF) networks and fuzzy rule-based models, with the key property of universal approximation proven for both). In this tutorial we will present in a systematic way the basics of the newly introduced Empirical Approach to Machine Learning, Fuzzy Sets and Systems and its applications to problems such as anomaly detection, clustering, classification, prediction and control.
The major advantages of this new paradigm are the liberation from the restrictive and often unrealistic assumptions and requirements concerning the nature of the data (random, deterministic, fuzzy): the need to formulate and assume a priori the type of distribution model or membership function, the independence of the individual data observations, their large (theoretically infinite) number, etc. From a pragmatic point of view, this direct approach from data (streams) to complex, layered model representation is fully automated and leads to very efficient model structures. For example, we will demonstrate and explain step by step how fast, transparent, non-parametric, re-trainable and dynamically evolving deep learning classifiers can be developed that do not require huge amounts of training data and computational power (for comparison, currently existing deep learning models require hours of training on GPUs and tens of thousands of training samples, and generate complex, cumbersome black-box representations with tens of millions of parameters or more that are not directly interpretable).
Moreover, the proposed new methods can guarantee convergence and stability for first-order models, can be highly parallelised, and are as precise as the traditional ones. Furthermore, they do not require stochastic tricks such as “elastic distortion” or the use of stochastic models, and as a result can guarantee full repeatability (the same result for the same input image no matter how many times we repeat the experiment). In addition, the proposed new concept learns in a way similar to the way people learn – it can start from a single example. Imagine people who can recognise an object they have seen only once and can associate with it any other previously unseen object that is similar to it. This is entirely possible in real life; however, conventional machine learning approaches cannot start without prior training on labelled data. The reason the proposed new approach makes this possible is that it is prototype-based and non-parametric. We will further demonstrate semi-supervised learning where only a handful, e.g. 5%, of the data are labelled and the rest are associated based on their similarity to the prototypes identified from this 5% of the data.
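The prototype-based association just described can be illustrated with a minimal sketch. This is our own simplification for illustration, not the presenters' exact algorithm: the few labelled samples act as prototypes, and each unlabelled point inherits the label of its nearest prototype.

```python
import math

def nearest_prototype(point, prototypes):
    """Return the label of the closest labelled prototype (Euclidean distance)."""
    best_label, best_dist = None, math.inf
    for proto, label in prototypes:
        dist = math.dist(point, proto)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# A handful of labelled samples act as prototypes ...
prototypes = [((0.0, 0.0), "A"), ((5.0, 5.0), "B")]
# ... and the unlabelled rest are associated by similarity to them.
unlabelled = [(0.5, 0.2), (4.8, 5.1), (1.0, 0.0)]
labels = [nearest_prototype(p, prototypes) for p in unlabelled]
# labels == ["A", "B", "A"]
```

Because classification reduces to a comparison against stored prototypes rather than fitting parameters, such a scheme can start from a single labelled example per class.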
We will use a number of experimental results and examples to visualise and demonstrate this new approach and to engage the audience with it. A book is in preparation, and software will be made available for further hands-on experience with this new methodology.
Title: Interactive Adaptive Learning (HYB7_03)
Organized by Adrian Calma, Daniel Kottke, Robi Polikar
Science, technology, and commerce increasingly recognize the importance of machine learning approaches for data-intensive, evidence-based decision making. This is accompanied by increasing numbers of machine learning applications and volumes of data. Nevertheless, the capacities of processing systems, human supervisors, and domain experts remain limited in real-world applications. Furthermore, applications require fast reaction to new situations, which means that predictive models need to be available even while little data is yet available. Therefore, approaches are needed that optimize the whole learning process, including the interaction with human supervisors, processing systems, and data of various kinds and at different times. Such approaches include (1) techniques for estimating the impact of additional resources (e.g. data) on the learning progress; (2) techniques for the active selection of the information processed or queried; (3) techniques for reusing knowledge across time, domains, or tasks, by identifying similarities between them and adapting to changes; (4) techniques for making use of different types of information, such as labeled or unlabeled data, constraints, or domain knowledge. Solutions are provided, for example, in the fields of adaptive, active, semi-supervised, and transfer learning, but mostly in separate lines of research. Combinations thereof in interactive and adaptive machine learning systems that are capable of operating under various constraints, and thereby address the immanent real-world challenges of volume, velocity and variability of data and data mining systems, are rarely reported.
The importance of the interactive machine learning setting has recently been acknowledged [Holzinger, “Interactive machine learning for health informatics: when do we need the human-in-the-loop?” (Brain Informatics 3/2), 2016].
While a widely accepted taxonomy still needs to be developed, we consider the following definition, based on Holzinger’s: “Interactive adaptive learning comprises systems and algorithms that can interact with agents, which could be humans or other smart systems, in a learning loop, observing the result of learning, optimizing their learning behavior through these interactions and providing input that improves the learning outcome”. This tutorial therefore aims to bring together researchers and practitioners from these different areas and to stimulate research into interactive and adaptive machine learning systems as a whole. We discuss several topics in the field of interactive adaptive learning, e.g. stream mining, active learning and semi-supervised learning, and point out the interdependence between these fields.
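As a concrete taste of one building block mentioned above, active learning often selects the next instance to query by uncertainty sampling. A minimal sketch (the class-probability numbers are placeholders standing in for any probabilistic classifier):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def query_most_uncertain(pool_predictions):
    """Index of the unlabelled pool instance the model is least sure about."""
    return max(range(len(pool_predictions)),
               key=lambda i: entropy(pool_predictions[i]))

# Predicted class distributions for three unlabelled pool instances:
pool = [(0.95, 0.05), (0.50, 0.50), (0.70, 0.30)]
query = query_most_uncertain(pool)
# query == 1: the (0.50, 0.50) instance is maximally uncertain,
# so its label is requested from the human supervisor next.
```

In the interactive loop, the queried label is added to the training set, the model is retrained, and the selection repeats, spending the limited labelling capacity where it helps most.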
Title: Ranking: from social psychology to algorithms and back (HYB7_04)
Organized by Peter Erdi
Description: Comparison, ranking and even rating are fundamental features of human nature. The goal of this tutorial is to give an integrative view of the evolutionary, psychological, institutional and algorithmic aspects of ranking. Since we humans (1) love lists, (2) are competitive, and (3) are jealous of other people, we like ranking. The practice of ranking is studied in social psychology and political science, and the algorithms of ranking in computer science. Initial results on the possible neural and cognitive architectures behind rankings are also reviewed. Why do we rank? How do we rank?
Comparison, ranking, rating and lists
The reality, illusion and manipulation of the objectivity
About the social psychology of ranking
Social ranking in animal and human societies
How do people choose? From optimality to bounded rationality
From individual decision making to social choice
Struggle for reputation
Ranking algorithms: from PageRank to patent ranking
Ranking and rating algorithms: friends or foes? The debate is not over
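To make the algorithmic side of the outline concrete, the classic PageRank score mentioned above can be computed by a short power iteration. A minimal sketch on a toy three-node graph, ignoring the sparse-matrix machinery real implementations use:

```python
def pagerank(graph, damping=0.85, iters=100):
    """graph: dict node -> list of outgoing links. Returns node -> score."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        # Every node keeps a baseline share from the random-jump term.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, links in graph.items():
            if links:
                # A node splits its current rank evenly among its out-links.
                share = damping * rank[node] / len(links)
                for target in links:
                    new_rank[target] += share
            else:
                # Dangling node: distribute its rank evenly over all nodes.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
# "c" receives links from both "a" and "b", so it ranks highest.
```

The same fixed-point idea underlies many of the ranking and rating algorithms discussed in the tutorial: an item's standing is defined recursively by the standing of the items that point to it.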