Quantifying the uncertainty of predictions produced by classification and regression techniques is of paramount importance in real-world applications, especially in risk-sensitive domains. Conformal Prediction (CP) is a recently developed framework, based on ideas originating from the theory of algorithmic randomness (closely connected to Kolmogorov complexity), for complementing the predictions of Machine Learning techniques with reliable measures of confidence. Unlike the probability/confidence values produced by other approaches, the confidence measures produced by CP are provably valid under the sole assumption that the data are generated independently by the same probability distribution (i.i.d.), and they have a clear probabilistic interpretation. Moreover, the flexibility of the framework allows it to be used to extend almost any conventional Machine Learning algorithm into a reliable confidence predictor. Both these properties make it ideal for use in real-world applications.
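To make the validity guarantee concrete, the following is a minimal sketch of the split (inductive) variant of conformal prediction for regression, one common way to instantiate the framework. The synthetic data, the least-squares point predictor, and the absolute-residual nonconformity score are illustrative assumptions, not prescriptions from this tutorial; any point predictor and nonconformity measure could be substituted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy i.i.d. regression data: y = 2x + Gaussian noise (illustrative only).
def make_data(n):
    x = rng.uniform(0.0, 1.0, n)
    y = 2.0 * x + rng.normal(0.0, 0.3, n)
    return x, y

x_tr, y_tr = make_data(200)    # proper training set
x_cal, y_cal = make_data(200)  # calibration set
x_te, y_te = make_data(1000)   # fresh test set

# 1. Fit any underlying point predictor on the proper training set
#    (here: ordinary least squares).
A = np.vstack([x_tr, np.ones_like(x_tr)]).T
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
predict = lambda x: coef[0] * x + coef[1]

# 2. Compute nonconformity scores on the calibration set
#    (here: absolute residuals).
scores = np.abs(y_cal - predict(x_cal))

# 3. For miscoverage level alpha, take the ceil((n+1)(1-alpha))-th
#    smallest calibration score as the interval half-width.
alpha = 0.1
n_cal = len(scores)
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# 4. The interval [yhat - q, yhat + q] contains the true label with
#    probability at least 1 - alpha, assuming only i.i.d. data.
yhat = predict(x_te)
covered = (y_te >= yhat - q) & (y_te <= yhat + q)
print(f"empirical coverage: {covered.mean():.3f} (target >= {1 - alpha})")
```

Note that the guarantee is distribution-free: nothing in the argument depends on the noise being Gaussian or the model being well specified; a poor underlying predictor simply yields wider, but still valid, intervals.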
Since its development, the framework has been used to extend a number of popular Machine Learning techniques, such as Support Vector Machines, Artificial Neural Networks and Random Forests, and has been successfully applied to a variety of challenging real-world problems, ranging from medical decision support to the prediction of space weather. The promising results and the guarantee of well-calibrated confidence measures led to extensions of the framework to additional problem settings, such as semi-supervised learning, anomaly detection, feature selection, outlier detection, change detection in streams and active learning.
This tutorial will present the CP framework from a practical point of view, focusing on examples, with the aim of demonstrating the qualities of the framework and providing attendees with the main know-how needed to start applying it in their own work.