Keynote speakers


Prof. Plamen Angelov

Title: Empirical Data Analytics: Learning Autonomously from Data Streams

The staggering proliferation of heterogeneous, large-scale data sets and streams is recognised as an untapped resource which offers new opportunities for extracting aggregated information to inform decision-making in policy and commerce. However, existing methods and techniques for data mining involve many prior assumptions, handcrafting and a range of other bottleneck issues: i) scalability – vast amounts of data require high-throughput automated methods (e.g. manual labelling of data samples can be prohibitive); ii) complex, heterogeneous data (including signals, images and text that may be uncertain and unstructured); iii) dynamically evolving, non-stationary data patterns, and the shortcomings of the “standard” assumptions about data distributions; iv) the need to handcraft features, set parameters or tune thresholds. As a result, a large proportion of the available data remains untapped. The key challenge now is to manage, process and gain value and understanding from this vast quantity of heterogeneous data without handcrafting and prior assumptions, at an industrial scale.

In this talk, a newly emerging theoretical framework, which we call Empirical Data Analytics (EDA), will be introduced and described, along with its relation to probability, density, centrality and related concepts. Traditional disciplines of Machine Learning, Data Mining, Pattern Recognition, System Modelling and Identification are well developed. However, current tools often require a number of restrictive assumptions, or handcrafting/manual selection of features, distribution types, parameters, thresholds, etc. Existing algorithms are usually iterative, including internal cycles. In traditional statistical approaches, averages play a more important role than the individual specifics. Even rapidly emerging AI and computational intelligence approaches require ad hoc assumptions and a priori decisions (e.g. network depth/architecture, membership function type and parameters). Furthermore, most existing algorithms assume fixed model structures. This hampers their application to dynamically evolving non-stationary data streams and their ability to deal with shifts and drifts. For example, in cybersecurity, adversaries are often adaptive and intelligent; they exploit the vulnerabilities of traditional systems that are based on fixed prior assumptions, designed for stationary data streams and data generated by the same distribution. Attacks on spam filtering may, for example, include spam messages that are obscured by random misspellings of trigger words; similar problems exist for detecting malware and biometric spoofing.

Motivated by the principle of Occam’s Razor [3], we suggest a complete departure from traditional approaches to large-scale data analysis: we advocate recognising the central importance and complexity of real-world data. Our aim is to establish a new paradigm for autonomous data analytics that is based on minimal prior assumptions. The guiding principles of this paradigm are that i) we should avoid assumptions about the statistical properties of the data; ii) the burden of human effort should be shifted away from the large amount of raw data to the top of the knowledge pyramid (see Fig. 2); iii) all new methods for data analytics should be scalable.

Fig. 1. Top: traditional approach; bottom: EDA.

Fig. 2. Autonomous Learning Systems within the EDA hierarchical architecture.

Within EDA we define cumulative proximity, typicality, eccentricity, and local and global, unimodal and multimodal density. Typicality is particularly interesting because it resembles (but differs from) the probability density function (pdf), the information potential and other similar representations related to the description of system state and structure, and has very close links with laws of physics such as gravitation, intensity and the inverse-square dependence on distance.
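These quantities can be illustrated with a short sketch. The definitions below follow one common formulation from the EDA/TEDA literature (cumulative proximity as the sum of squared distances to all other points, eccentricity as its normalisation summing to 2 over the data set, typicality as its complement); the exact definitions used in the talk may differ, so treat this as an illustrative assumption rather than the canonical form:

```python
import numpy as np

def cumulative_proximity(X):
    """Cumulative proximity: for each point, the sum of squared
    Euclidean distances to every other point in the data set."""
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.maximum(d2, 0.0).sum(axis=1)

def eccentricity(X):
    """Eccentricity: cumulative proximity normalised so that the
    values sum to 2 over the data set; high values flag anomalies."""
    pi = cumulative_proximity(X)
    return 2.0 * pi / pi.sum()

def typicality(X):
    """Typicality: the complement of eccentricity (sums to k - 2
    over k points); high values mark representative points."""
    return 1.0 - eccentricity(X)
```

Notice that everything is computed directly from the empirically observed data: no distribution is assumed, no parameters or thresholds are handcrafted, and an isolated outlier simply receives the largest eccentricity.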

In the talk this new concept will be described, along with a number of applications to various problems.


  1. P. Angelov, Autonomous Learning Systems: From Data Streams to Knowledge in Real Time, John Wiley & Sons, Dec. 2012, ISBN: 978-1-1199-5152-0.
  2. P. Angelov et al., Empirical Data Analysis: A New Tool for Data Analytics, IEEE SMC Conf., Budapest, 2016.
  3. H. G. Gauch, Scientific Method in Practice, Cambridge Univ. Press, 2003.

Professor Angelov has more than 25 years of professional experience in high-level research and holds a Personal Chair in Intelligent Systems at Lancaster University, UK. He leads the Data Science group at the School of Computing and Communications, which includes over 20 academics, researchers and PhD students and is one of the eight groups of the School. Prof. Angelov is a Fellow of the IEEE, for contributions to neuro-fuzzy and autonomous learning systems, and a Fellow of the IET. He is also a member of the Board of Governors of the International Neural Networks Society (INNS) and of the IEEE Systems, Man and Cybernetics (SMC) Society, where he chairs the Technical Committee (TC) on Evolving Intelligent Systems and is a member of several other TCs, including the TC on Diagnostics and Prognostics. More generally within the IEEE, Prof. Angelov is a member of the TC on Neural Networks and the TC on Fuzzy Systems of the Computational Intelligence Society. Prof. Angelov has authored or co-authored over 200 peer-reviewed publications in leading journals and peer-reviewed conference proceedings, 5 patents, two research monographs (Wiley, 2012 and Springer, 2002) and over a dozen other books. These publications have been cited over 5000 times (Google Scholar), with an h-index of 34 and an i10-index of 71. He, his co-authors and his students have received a number of IEEE best paper awards (2006, 2009, 2012, 2013), and one of his papers was nominated for an outstanding IEEE Transactions paper award (2010). His most cited paper has over 580 citations. He has an active research portfolio in the area of computational intelligence and machine learning, and internationally recognised results in online and evolving learning and in algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems. Prof. Angelov leads numerous projects (including several multimillion ones) funded by UK research councils, the EU, industry and the UK Ministry of Defence.
His research was recognised by ‘The Engineer Innovation and Technology 2008 Special Award’ and the ‘For Outstanding Services’ award (2013) by the IEEE and the INNS. He is also the founding co-Editor-in-Chief of Springer’s journal Evolving Systems and an Associate Editor of leading international scientific journals in this area, including IEEE Transactions on Cybernetics and IEEE Transactions on Fuzzy Systems, as well as several others, including Applied Soft Computing, Fuzzy Sets and Systems and Soft Computing. He was General Chair of prime conferences (IJCNN-2013, Dallas, Texas, USA, 4-9 August 2013; the INNS inaugural Conference on Big Data, San Francisco, August 2015) and Programme Committee co-Chair of prime conferences (FUZZ-IEEE-2014, July 2014, Beijing, China; IEEE Intelligent Systems ’14, Warsaw, Poland; IJCNN 2016, Vancouver, Canada), and is founding General co-Chair of a series of annual IEEE conferences on Evolving and Adaptive Intelligent Systems. Prof. Angelov often acts as a Visiting Professor (in Brazil, 2007 and 2014; Germany, 2006; Spain, 2010; France, 2014; Bulgaria, 2011-14) and regularly gives invited and plenary talks at leading companies and universities. Prof. Angelov has been a member of the Special Interest Group (SIG) on Autonomous Machine Learning of the International Neural Network Society since 2008; co-ordinator of the Working Group on Data Mining and Learning of EUSFLAT since 2007; a member of ISGEC (International Society of Genetic and Evolutionary Computation) and its Council of Authors since 2002; a member of the Senior Members sub-committee, IEEE, 2008-2010; a member of the North American Fuzzy Information Processing Society (NAFIPS), 2001, 2005; and a member of the Innovation Award Committee for the World Congress on Nature and Biologically Inspired Computing, December 2009. Prof. Angelov has given over a dozen plenary and keynote talks at high-profile conferences. More information can be found on his website.

Prof. Stefanos Kollias

Title: Developing performance-aware trustful neural architectures for complex data analysis

Over the past five years there has been significant growth in applications and technologies related to machine learning and understanding, knowledge modelling and management, and structured and unstructured multimedia data search, retrieval and analytics, in numerous sectors such as biomedical data, social networks, emotion and sentiment analysis, culture and creativity, the IoT, and smart homes and cities. Deep Neural Networks, especially convolutional and recurrent ones (DCNN-RNN), constitute the state of the art in all basic signal analysis fields (Computer Vision, Speech and Audio Processing, Natural Language Processing). The major effort in Deep Learning technologies today is to apply them in every topic of the above fields, greatly improving existing performance measures.

The keynote speech will first focus on a key aspect of such networks’ performance: adaptation and case-specific decision making, especially in applications with great data diversity. Specific research topics related to DNN adaptation include drift (data/context change) detection, DNN transfer learning (e.g., from constrained environments to applications in the wild), the development of DNN ensembles with self-evaluation and retraining abilities, and end-to-end deep neural architectures and systems.

A current constraint of DNNs is the difficulty of understanding how they derive their classification decisions, which is crucial for medical, safety, financial and many other applications. The keynote speech will also focus on open research issues concerning trust and the semantic evaluation of the performance of such architectures. Specific research topics include merging DNNs with ontological knowledge representations and reasoning, and extracting rules from trained networks.
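Rule extraction of the kind mentioned above is often approached "pedagogically": the trained network is treated as a black box, queried on data, and an interpretable surrogate is fitted to reproduce its answers. Below is a minimal NumPy sketch with a toy linear-sigmoid stand-in for the network; the single-threshold rule family and all names are illustrative assumptions, not the specific methods of the talk:

```python
import numpy as np

def network_predict(X):
    """Toy stand-in for a trained network: a fixed linear unit
    passed through a sigmoid and thresholded at 0.5."""
    w, b = np.array([1.5, -0.5]), 0.2
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)

def extract_threshold_rule(X, y):
    """Pedagogical rule extraction: scan single-feature rules of
    the form 'feature j > t' (and the negated form) and keep the
    one that best reproduces the black-box labels y.
    Returns (fidelity, (feature, threshold, polarity))."""
    best = (0.0, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            for pol, p in ((1, pred), (0, 1 - pred)):
                fid = (p == y).mean()  # fraction of agreement
                if fid > best[0]:
                    best = (fid, (j, t, pol))
    return best

# Query the black box on sample data and distil a readable rule.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = network_predict(X)
fidelity, rule = extract_threshold_rule(X, y)
```

The extracted rule is only an approximation of the network, so its fidelity (fraction of decisions it reproduces) must be reported alongside it; richer surrogates such as decision trees trade some readability for higher fidelity.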

Various results and application studies on the above issues will be presented in the keynote speech. In biomedical applications, we will focus on deriving performance-aware trustful (PAT) neural architectures (NAs) which are able to appropriately correlate imaging, epidemiological and clinical data. In Human-Machine Interaction, we will focus on PAT-NAs that analyse behaviours, emotions, sentiments and social interactions, in specific contexts or in the wild. In other real-life problems, we will focus on PAT-NAs for effective detection, classification, prediction, assessment and decision making.

Stefanos Kollias has been a Founding Professor of Machine Learning in the College of Science, University of Lincoln, UK, since September 2016. He has been a Professor in the School of Electrical and Computer Engineering, National Technical University of Athens, since 1997, and Director of the Intelligent Systems, Content & Interaction Laboratory. He is an IEEE Fellow (2015, nominated by the IEEE Computational Intelligence Society). He was a member of the Executive Committee of the European Neural Network Society (2007-2016).

He has produced world-leading research in the fields of machine learning, intelligent systems (with emphasis on artificial neural networks), semantic multimedia analysis, semantic metadata interoperability and affective computing. He has published 108 papers in international journals and 300 papers in proceedings of international conferences. His research has been highly cited (about 8000 citations, with an h-index of 41 in Google Scholar).

He has supervised more than 40 PhD students. He has led his group’s participation in more than 100 European R&D projects, in which his group’s funding has been more than 20 million Euro.


  • Fellow in Intelligent Systems — IEEE, 2015
  • Best Learning Game Award for the SIREN system — Games and Learning Alliance Network of Excellence, 2013
  • Beta Sprint Award for the MINT system — Digital Public Library of America, 2011
  • Best Paper Awards — International Conferences

Prof. Włodzisław Duch

Title: From understanding the brain to neurocognitive technologies

Human brains have astronomical complexity, and the natural development of such complex systems is rarely close to optimal. Techniques for the precise regulation of brain processes are still in their infancy. I will present recent progress in the analysis of neuroimaging signals, opening the way to novel neurofeedback methods that may selectively change activity and internal communication between selected brain structures. Real-time functional magnetic resonance imaging (rt-fMRI) is quite effective for this purpose but too costly to serve as a practical method. Methods of EEG analysis aimed at discovering the activity of specific brain subnetworks or brain regions are under development. These methods include source reconstruction/localisation, analysis of spatio-temporal EEG patterns, event-related synchronisation/desynchronisation, independent-component analysis, graph-based network analysis, connectome-based techniques, and model-based approaches. They are used to generate feature spaces in which deep neural networks are trained to create filters that discover the activity of specific structures. Such techniques open the way for the development of precise neurofeedback methods for the optimisation and correction of specific brain processes. Direct stimulation of the brain may be used to increase neuroplasticity during training. Real non-invasive brain engineering is on the horizon.

Wlodzislaw Duch heads the Neurocognitive Laboratory in the Center of Modern Interdisciplinary Technologies and the Department of Informatics, both at Nicolaus Copernicus University, Torun, Poland. In 2014-15 he served as a deputy minister for science and higher education in Poland, and in 2011-14 as the Vice-President for Research and ICT Infrastructure at his university. Before that he was the Nanyang Visiting Professor (2010-12) in the School of Computer Engineering, Nanyang Technological University, Singapore, where he had also worked as a visiting professor in 2003-07. He received an MSc (1977) in theoretical physics and a PhD in quantum chemistry (1980), was a postdoc at the University of Southern California, Los Angeles (1980-82), and received a DSc in applied mathematics (1987); he has worked at the University of Florida, the Max Planck Institute in Munich, Germany, the Kyushu Institute of Technology, Meiji and Rikkyo Universities in Japan, and several other institutions. He is or was on the editorial boards of IEEE TNN, CPC, NIP-LR, the Journal of Mind and Behavior, and 14 other journals; he was co-founder and scientific editor of the “Polish Cognitive Science” journal; he served two terms as President of the European Neural Networks Society executive committee (2006-2008, 2008-2011); he is an active member of the IEEE CIS Technical Committee; and the International Neural Network Society Board of Governors elected him to their most prestigious College of Fellows. He works as an expert for European Union science programmes; he has published over 300 scientific and over 200 popular articles on diverse subjects, has written or co-authored 4 books and co-edited 21 books; and his DuchSoft company created the GhostMiner software package, marketed by Fujitsu.

Wlodek Duch is well known for the development of computational intelligence (CI) methods that facilitate the understanding of data, a general CI theory based on similarity evaluation and the composition of transformations, and meta-learning schemes that automatically discover the best model for given data. He is working on the development of neurocognitive informatics, focusing on algorithms inspired by cognitive functions; information flow in the brain; learning and neuroplasticity; the understanding of attention; integrating genetic, molecular, neural and behavioural levels to understand attention deficit disorders in autism and other diseases; infant learning and toys that facilitate mental development; creativity, intuition, insight and mental imagery; and geometrical theories that allow for the visualisation of mental events in relation to the underlying neurodynamics. He has also written several papers in the philosophy of mind, and was one of the founders of cognitive science in Poland. Since 2014 he has headed a unique NeuroCognitive Laboratory that brings together experts in hardware and software, signal processing, physics, cognitive science, psychology and philosophy. His lab works with infants, preschool children, students and older people, using neuroimaging techniques, behavioural experiments and computational modelling.
With his wide background in many branches of science and his understanding of different cultures, he bridges many scientific communities. More information about his activities, including his full CV, can be found by typing “W. Duch” into Google.