Over the last few years we have seen tremendous progress in deep-learning-based generative models that can now sometimes convincingly model high-dimensional data such as images or sound. Broadly speaking, three distinct approaches have been at the center of attention: autoregressive models, generative adversarial networks (GANs), and latent variable models with amortized variational inference (e.g. VAEs). Here I want to focus on the latter: models where we train not only a neural-network-based generative model that maps latent variables to observed ones, but also, in parallel, an approximate inference network that predicts the latent variables given observations. Despite recent advances, many foundational aspects of these models, especially models with hierarchies of latent variables, remain relatively unexplored. This includes their theoretical properties and effective algorithms for inference and learning.
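As a minimal illustration of the amortized setup described above, the following toy sketch (hypothetical linear "encoder" and "decoder" weights, not a trainable model from the talk) shows the two networks and the ELBO they would jointly optimize: the encoder predicts the parameters of q(z|x), a latent sample is drawn with the reparameterization trick, and the decoder reconstructs x.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy amortized-inference pieces (illustration only): a linear "encoder"
# predicts q(z|x) = N(mu, sigma^2) from data x; a linear "decoder"
# maps a latent sample z back to a reconstruction of x.
D, Z = 4, 2                                  # observed and latent dimensions
W_enc = rng.normal(size=(2 * Z, D)) * 0.1    # hypothetical encoder weights
W_dec = rng.normal(size=(D, Z)) * 0.1        # hypothetical decoder weights

def elbo(x):
    h = W_enc @ x
    mu, log_var = h[:Z], h[Z:]
    eps = rng.normal(size=Z)
    z = mu + np.exp(0.5 * log_var) * eps     # reparameterization trick
    x_hat = W_dec @ z
    recon = -0.5 * np.sum((x - x_hat) ** 2)  # log p(x|z) up to a constant
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians:
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon - kl

x = rng.normal(size=D)
print(elbo(x))
```

Training would ascend this ELBO with respect to both sets of weights at once, which is what "training the inference network in parallel" amounts to.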
George Karypis is a Distinguished McKnight University Professor and the ADC Chair of Digital Technology in the Department of Computer Science & Engineering at the University of Minnesota, Twin Cities. His research interests span data mining, high-performance computing, information retrieval, collaborative filtering, bioinformatics, cheminformatics, and scientific computing. His research has resulted in software libraries for serial and parallel graph partitioning (METIS and ParMETIS), hypergraph partitioning (hMETIS), parallel Cholesky factorization (PSPASES), collaborative filtering-based recommendation algorithms (SUGGEST), clustering of high-dimensional datasets (CLUTO), finding frequent patterns in diverse datasets (PAFI), and protein secondary structure prediction (YASSPP). He has coauthored over 280 papers on these topics and two books: “Introduction to Protein Structure Prediction: Methods and Algorithms” (Wiley, 2010) and “Introduction to Parallel Computing” (Addison-Wesley, 2003, 2nd edition). In addition, he serves on the program committees of many conferences and workshops on these topics, and on the editorial boards of IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Knowledge Discovery from Data, Data Mining and Knowledge Discovery, the Social Network Analysis and Data Mining Journal, the International Journal of Data Mining and Bioinformatics, Current Proteomics, Advances in Bioinformatics, and Biomedicine and Biotechnology.
He is a Fellow of the IEEE.
Recommender systems are designed to identify the items that a user will like or find useful based on the user’s prior preferences and activities. These systems have become ubiquitous and are an essential tool for information filtering and (e-)commerce. Over the years, collaborative filtering, which derives these recommendations by leveraging the past activities of groups of users, has emerged as the most prominent approach to solving this problem. This talk will present some of our recent work towards improving the performance of collaborative filtering-based recommender systems and understanding some of their fundamental limitations and characteristics. It will start by analyzing how the ratings that users provide to a set of items relate to their ratings of the set’s individual items and, using these insights, will present rating prediction approaches that utilize distant supervision. It will then discuss extensions to approaches based on sparse linear and latent factor models that postulate that users’ preferences are a combination of global and local preferences, which are shown to lead to better user modeling and thus improved prediction performance. Finally, the talk will conclude by discussing what can be accurately predicted by latent factor approaches and by analyzing the estimation error of sparse linear and latent factor models and how its characteristics impact the performance of top-N recommendation algorithms.
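To fix ideas about the latent factor models the abstract refers to, here is a generic matrix-factorization sketch (a textbook illustration, not the approaches presented in the talk): observed ratings R are approximated as P @ Q.T, with user and item factors fit by stochastic gradient descent on the observed entries only.

```python
import numpy as np

# Minimal latent-factor sketch: factor a small user-item rating
# matrix R ~ P @ Q.T using plain SGD over the observed entries.
rng = np.random.default_rng(1)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4.]])
mask = R > 0                      # zeros denote unobserved ratings
k = 2                             # number of latent factors
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

lr, reg = 0.01, 0.02              # learning rate, L2 regularization
for _ in range(2000):
    for u, i in zip(*np.nonzero(mask)):
        err = R[u, i] - P[u] @ Q[i]
        p_old = P[u].copy()       # update both factors from old values
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * p_old - reg * Q[i])

pred = P @ Q.T                    # dense predictions, incl. unobserved cells
rmse = np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))
print(round(rmse, 3))
```

The "global and local preferences" extensions mentioned in the abstract would replace the single factor pair per user with a combination of shared and user-subset-specific factors.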
In this talk, we review a novel method for quantifying graph dissimilarity. The pseudo-metric is based on quantifying the differences between the graphs’ distance probability distributions, and it proves to be an efficient and precise tool for network comparison: it can identify and quantify structural topological differences that have a practical impact on the flow of information through the network.
A generalization to multiplex networks is also discussed. In these structures, each layer possesses its own connectivity configuration, and the quantification of their dissimilarities is associated with the concept of “diversity”: layers with unique connectivity patterns create a more diverse multiplex structure.
Several applications to real and artificial data are examined, exemplifying the usefulness of the proposed measures. In particular, we address the analysis of diffusion dynamics on single-layer and multiplex structures.
The problem of understanding synchronization goes back more than 300 years, to Huygens. We focus on the heartbeat, where the heart muscle cells, the myocytes, each beat through the action of the proteins actin and myosin. When physically connected, these beating cells begin to beat in unison. How can mathematics contribute to understanding this phenomenon?
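One classical mathematical entry point (an illustration of the synchronization question, not necessarily the speaker’s model) is the Kuramoto model of coupled phase oscillators: each cell is abstracted as a phase advancing at its own natural frequency and pulled toward its neighbors, and above a critical coupling strength the population locks into a common beat.

```python
import numpy as np

# Kuramoto model sketch: N oscillators, all-to-all mean-field coupling.
rng = np.random.default_rng(2)
N, K, dt = 50, 2.0, 0.01           # oscillators, coupling strength, time step
omega = rng.normal(0.0, 0.5, N)    # heterogeneous natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

def order_parameter(theta):
    """r in [0, 1]: 0 = incoherent phases, 1 = fully synchronized."""
    return np.abs(np.mean(np.exp(1j * theta)))

r0 = order_parameter(theta)
for _ in range(3000):              # Euler integration of the Kuramoto ODEs
    # coupling[i] = mean over j of sin(theta_j - theta_i)
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)
r1 = order_parameter(theta)
print(round(r0, 2), round(r1, 2))
```

Starting from random phases (low r0), the strongly coupled population ends with a high order parameter r1, mirroring how physically connected myocytes fall into a shared rhythm.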
Past Keynote Speakers
The Keynote Speakers of the previous editions:
- Nello Cristianini, University of Bristol, UK
- Yi-Ke Guo, Imperial College London, UK
- Vipin Kumar, University of Minnesota, USA
- George Michailidis, University of Florida, USA
- Stephen Muggleton, Imperial College London, UK
- Panos Pardalos, University of Florida, USA
- Jun Pei, Hefei University of Technology, China
- Tomaso Poggio, MIT, USA
- Ruslan Salakhutdinov, Carnegie Mellon University, USA, and AI Research at Apple
- Vincenzo Sciacca, IBM, Italy
- My Thai, University of Florida, USA