Bielefeld University, Germany
Barbara Hammer received her Ph.D. in Computer Science in 1995 and her venia legendi in Computer Science in 2003, both from the University of Osnabrueck, Germany. From 2000 to 2004, she chaired the junior research group ‘Learning with Neural Methods on Structured Data’ at the University of Osnabrueck before accepting an offer as professor for Theoretical Computer Science at Clausthal University of Technology, Germany, in 2004. Since 2010, she has held a professorship for Theoretical Computer Science for Cognitive Systems at the CITEC cluster of excellence at Bielefeld University, Germany. Several research stays have taken her to Italy, the U.K., India, France, the Netherlands, and the U.S.A. Her areas of expertise include hybrid systems, self-organizing maps, clustering, and recurrent networks, as well as applications in bioinformatics, industrial process monitoring, and cognitive science. She chaired the IEEE CIS Technical Committee on Data Mining in 2013 and 2014, and she is chair of the Fachgruppe Neural Networks of the GI and vice-chair of the GNNS. She has been elected as an IEEE CIS AdCom member for 2016-2018. She has published more than 200 contributions to international conferences and journals, and she is coauthor/editor of four books.
The amount of electronic data available today is increasing rapidly; hence humans rely on automated tools that allow them to intuitively scan data volumes for valuable information. Dimensionality-reducing data visualization, which displays high-dimensional data in two or three dimensions, constitutes a popular tool for inspecting data sets directly on the computer screen. Dimensionality reduction, however, is an inherently ill-posed problem, and the results vary depending on the chosen technology, the parameters, and even random aspects of the algorithms: there is a high risk of displaying noise instead of valuable information.
In this presentation, we discuss discriminative dimensionality reduction techniques, i.e., methods that enhance a dimensionality reduction method with auxiliary information such as class labels. This allows practitioners to easily focus on the aspects they are interested in rather than on noise. We will discuss two different approaches in this realm, which rely on a parametric and a non-parametric metric-learning scheme, respectively, and demonstrate their effect on several benchmarks. We discuss how these methods can be extended to non-vectorial and big data, and how they open the door to visualizing not only the given data but also any given classifier.
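As a toy illustration of the general idea (not the speaker's specific methods), the following sketch uses Fisher's linear discriminant analysis, a classical parametric technique that exploits class labels to find a 2D projection separating the classes; all data and parameter choices here are illustrative.

```python
import numpy as np

def lda_2d(X, y):
    """Project labeled data to 2D by maximizing between-class scatter
    relative to within-class scatter (Fisher's LDA), a simple parametric
    example of discriminative dimensionality reduction."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # solve the generalized eigenproblem Sb v = lambda Sw v
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    W = evecs[:, order[:2]].real    # top two discriminant directions
    return X @ W

# three synthetic 5D classes, projected down to a 2D view
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(50, 5)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)
Z = lda_2d(X, y)
print(Z.shape)  # (150, 2)
```

The projection can then be fed to any 2D scatter plot; the labels have steered the embedding toward the class structure rather than toward dominant noise directions.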
Department of Computer Science, University of Ioannina, Greece
Diploma in Electrical Engineering, National Technical University of Athens, June 1990. Ph.D. in Electrical and Computer Engineering, National Technical University of Athens, July 1994.
Associate Editor: IEEE Transactions on Neural Networks (2008-2012)
General Co-chair: ECML/PKDD 2011
Aristidis Likas is a Professor in the Department of Computer Science and Engineering of the University of Ioannina, Greece. He received the Diploma in electrical engineering from the National Technical University of Athens, Greece, in 1990 and the Ph.D. degree in electrical and computer engineering from the same university in 1994. Since 1996, he has been with the Department of Computer Science and Engineering, University of Ioannina, Greece.
He is interested in developing methods for machine learning/data mining problems (mainly classification, clustering, statistical and Bayesian learning) and in the application of those methods to video analysis, computer vision, medical diagnosis, bioinformatics and text mining. His recent research focuses on techniques for estimating the number of clusters, kernel-based clustering and multi-view clustering.
He has published more than 80 journal papers and more than 80 conference papers, attracting more than 5000 citations. Recently, he received a Best Paper Award at the ICPR 2014 conference. He has participated in several national and European research and development projects. He is a Senior Member of the IEEE. He served as an Associate Editor of IEEE Transactions on Neural Networks and as General Co-chair of the ECML PKDD 2011 and SETN 2014 conferences.
Clustering constitutes an essential problem in machine learning and data mining with important applications in science, technology and business. The aim is to partition a dataset into groups, called clusters, such that instances in the same cluster are similar to each other and dissimilar to instances in other clusters according to some similarity/dissimilarity measure.
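The partitioning objective above is classically instantiated by k-means; as a minimal, self-contained sketch (Lloyd's algorithm on synthetic data, not code from the talk):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternately assign each point to its
    nearest center and recompute centers as cluster means, reducing the
    within-cluster squared distance at every step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):          # keep old center if cluster empties
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# two well-separated synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(10.0, 0.5, (50, 2))])
labels, centers = kmeans(X, 2)
```

Note that k must be supplied in advance, which is exactly the limitation the dip-dist line of work below addresses.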
A significant issue in clustering research is the estimation of the number of clusters in a dataset. We will present the recently proposed ‘dip-dist’ criterion for estimating the homogeneity of a group of instances based on statistical tests of unimodality. Then we will describe the use of this criterion for developing incremental and agglomerative clustering methods that automatically estimate the number of clusters. We will also briefly discuss analogous methods for sequence segmentation that use the dip-dist criterion for deciding on segment boundaries.
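The dip-dist criterion relies on Hartigan's dip test of unimodality; as a deliberately crude stand-in for that statistical test (mode counting on a smoothed histogram, purely illustrative), the underlying decision can be sketched as:

```python
import numpy as np

def seems_unimodal(x, bins=12):
    """Crude unimodality check: count strict local maxima of a smoothed
    histogram. A toy stand-in for the statistical dip test that the actual
    dip-dist criterion uses; one mode suggests a homogeneous group, more
    than one suggests the group should be split."""
    hist, _ = np.histogram(x, bins=bins)
    s = np.convolve(hist, np.ones(3) / 3, mode="same")   # light smoothing
    peaks = sum(1 for i in range(1, len(s) - 1)
                if s[i] > s[i - 1] and s[i] > s[i + 1])
    return peaks <= 1

rng = np.random.default_rng(0)
uni = rng.normal(0.0, 1.0, 5000)                                   # one mode
bi = np.concatenate([rng.normal(-4, 1, 2500), rng.normal(4, 1, 2500)])  # two modes
print(seems_unimodal(uni), seems_unimodal(bi))  # True False
```

An incremental scheme in this spirit keeps splitting any cluster whose (distance) distribution fails the unimodality check, so the number of clusters emerges from the data rather than being fixed beforehand.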
Another active area of clustering research relates to multi-view clustering. In this case multiple representations (views) are available for each data instance, coming from different sources and/or feature spaces. Typical multi-view approaches treat all available views as being equally important, which may lead to a considerable drop in performance if degenerate views (e.g. noisy views) exist in the dataset. We will present approaches that assign a weight to each view. Such weights are automatically tuned to reflect the quality of the views and determine their contribution to the clustering solution accordingly.
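One simple way to obtain such quality-reflecting weights, sketched here as a toy illustration rather than a specific published algorithm, is to score each view by the fraction of its variance that the current partition explains, so that a degenerate view contributes almost nothing:

```python
import numpy as np

def view_weights(views, labels):
    """Weight each view by the fraction of its total variance explained by
    the clustering (between-cluster over total). A noisy view, whose
    variance the partition barely explains, receives a small weight.
    (Toy illustration of quality-based view weighting.)"""
    w = []
    for V in views:
        total = ((V - V.mean(axis=0)) ** 2).sum()
        within = sum(((V[labels == c] - V[labels == c].mean(axis=0)) ** 2).sum()
                     for c in np.unique(labels))
        w.append(max(total - within, 0.0) / (total + 1e-12))
    w = np.array(w)
    return w / w.sum()    # normalize so the weights sum to one

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)                       # a tentative partition
clean = np.vstack([rng.normal(0, 1, (50, 2)),        # view with real structure
                   rng.normal(8, 1, (50, 2))])
noisy = rng.uniform(0, 10, (100, 2))                 # view with no structure
w = view_weights([clean, noisy], labels)
print(w)  # the clean view dominates
```

In a full multi-view algorithm these weights would be re-estimated after every assignment step, so the contribution of each view tracks its quality as the solution evolves.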
Finally, we will present our experience from the application of the above methods for video summarization, and, more specifically, for video sequence segmentation and extraction of representative key-frames.
Max Planck Institute for Intelligent Systems, Germany
TU Darmstadt, Germany
Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and at the same time a senior research scientist and group leader at the Max-Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group.
Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award. He has been honored for the development of new approaches to robot learning, robot architecture and robotic methods and their applications for humanoid robots. In 2015, he was awarded an ERC Starting Grant. He studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC). He has received four Master's degrees in these disciplines as well as a Computer Science PhD from USC.
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulation of various objects.
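Motor primitives of the kind mentioned above are often formalized as dynamic movement primitives: a stable spring-damper system pulling toward a goal, shaped by a learned forcing term whose basis-function weights are the policy parameters. A minimal one-degree-of-freedom sketch (parameter names and constants are illustrative, not from a specific library) looks like this:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, dt=0.005, steps=400, alpha=25.0):
    """Minimal discrete dynamic movement primitive for one degree of
    freedom: a critically damped spring-damper toward the goal, modulated
    by a forcing term built from Gaussian basis functions over a decaying
    phase variable. The weights are the learnable primitive parameters."""
    beta = alpha / 4.0                    # critical damping
    K = len(weights)
    c = np.linspace(1.0, 0.05, K)         # basis centers along the phase
    h = 0.5 * K ** 2                      # shared basis width
    y, yd, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-h * (x - c) ** 2)
        # learned forcing term, gated by the phase so it vanishes at the end
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -2.0 * x * dt                # canonical system: phase decays
        traj.append(y)
    return np.array(traj)

# with zero weights the primitive is just the stable attractor to the goal
traj = dmp_rollout(0.0, 1.0, np.zeros(10))
```

Learning then amounts to fitting or improving the weight vector, by imitation from a demonstrated trajectory or by reinforcement learning on task reward, while the attractor dynamics guarantee the movement still converges to the goal.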