# About the event
The ML in PL Conference is an event focused on the best of Machine Learning both in academia and in business. This year, due to the spread of COVID-19 around the world, we've decided to gather remotely.
Learn from the top world experts
Share your knowledge
Meet the ML community
Feel the unique atmosphere
# Agenda
-
Friday, 5 November
-
10:00 - 15:00 CET - Students' Day (GatherTown)
An event dedicated to students taking first steps as researchers and speakers, consisting of a series of talks and networking sessions.
Attendance at this event is free of charge!
-
15:30 - 16:00 CET - Opening Remarks (Hopin)
-
16:00 - 17:15 CET - Keynote Lecture: Daphne Koller (Hopin)
Transforming Drug Discovery using Digital Biology
Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming consistently more challenging and more expensive to develop new therapeutics. A key factor in this trend is that the drug development process involves multiple steps, each of which involves a complex and protracted experiment that often fails. We believe that, for many of these phases, it is possible to develop machine learning models to help predict the outcome of these experiments, and that those models, while inevitably imperfect, can outperform predictions based on traditional heuristics. To achieve this goal, we are bringing together high-quality data from human cohorts, while also developing cutting edge methods in high throughput biology and chemistry that can produce massive amounts of in vitro data relevant to human disease and therapeutic interventions. Those are then used to train machine learning models that make predictions about novel targets, coherent patient segments, and the clinical effect of molecules. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at a lower cost.
-
17:30 - 18:45 CET - Strategic Sponsor Lecture: BCG Gamma (Hopin)
How AI can be supercharged with a guided human-in-the-loop concept to unlock millions in cost savings.
COVID-19 has challenged all companies to understand and optimize their cost drivers. BCG’s patented Spend AI analyzes hundreds of millions of individual transactions to create an unprecedented view of a company’s costs. It achieves on average 28% cost savings across all indirect costs.
In this presentation, Marcin will present how he co-founded Spend AI as a BCG big bet by developing a project, operational, and technology model that allows constant innovation. He will show how an AI framework can be built to leverage Data Scientists and Subject Matter Experts alike to achieve maximum precision with sparse data while minimizing the total effort.
BCG GAMMA is a unique place where 1200 world-class data scientists, AI software engineers, and business consultants come together using advanced analytics to achieve breakthrough business results. We drive impact end-to-end: from framing new business challenges, to achieving impact at scale through fact-based, innovative algorithms and tools, to helping our clients become self-sufficient through training and transfer.
-
19:00 - 20:15 CET - Keynote Lecture: Yoshua Bengio (Hopin)
Cognitively-inspired inductive biases for higher-level cognition and systematic generalization
How can deep learning be extended to encompass the kind of high-level cognition and reasoning that humans enjoy and that seems to provide us with stronger out-of-distribution generalization than current state-of-the-art AI? Looking into neuroscience and cognitive science and translating these observations and theories into machine learning, we propose an initial set of inductive biases for representations, computations and probabilistic dependency structure. The Global Workspace Theory in particular suggests an important role for a communication bottleneck through a working memory, and this may impose a form of sparsity on the high-level dependencies. These inductive biases also strongly tie the notion of representation with that of actions, interventions and causality, possibly giving a key to stronger identifiability of latent causal structure and ensuing better sample complexity in and out of distribution, as well as meta-cognition abilities facilitating exploration that seeks to reduce epistemic uncertainty of the underlying causal understanding of the environment.
-
Saturday, 6 November
-
10:30 - 11:45 CET - Keynote Lecture: Marta Kwiatkowska (Hopin)
Safety and robustness for deep learning with provable guarantees
Computing systems are becoming ever more complex, with decisions increasingly often based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning components. Using illustrative examples from autonomous driving and NLP, this lecture will describe progress with developing safety verification techniques for deep neural networks to ensure robustness of their decisions with respect to input perturbations and causal interventions, with provable guarantees. The lecture will conclude with an overview of the challenges in this field.
-
12:00 - 13:00 CET - Sponsor Keynote: Allegro (Hopin)
It ain’t much but you can use offline data in RL
In contrast to supervised learning methods or bandits, RL allows optimizing the long-term value of actions. The standard practice, in this case, is to collect new data during each training run, starting from a random agent. That limits applications in domains where interactions are expensive. In contrast, the Offline RL paradigm is based on solving RL problems with historical data only, which allows us to apply RL approaches in cases where we cannot interact with the environment during training.
In this talk, we will present an overview of the current state of Offline RL, starting with a brief introduction to RL and covering the main approaches, challenges, and potential applications.
We will also tell you about the main research projects we are working on at Allegro.
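To make the offline setting concrete, here is a bare-bones sketch (my own illustration, not Allegro's work) of fitted Q-iteration on a fixed batch of logged transitions, with no environment interaction during training; the toy MDP and its reward structure are made up.

```python
# Offline (batch) RL sketch: learn a Q-table from a fixed log of (s, a, r, s')
# transitions collected earlier by some behaviour policy.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Logged dataset -- the only data the learner ever sees.
states = rng.integers(0, n_states, size=10_000)
actions = rng.integers(0, n_actions, size=10_000)
rewards = ((states == 4) & (actions == 1)).astype(float)   # toy reward structure
next_states = (states + actions) % n_states                 # toy deterministic dynamics

Q = np.zeros((n_states, n_actions))
for _ in range(200):                                        # fitted Q-iteration sweeps
    targets = rewards + gamma * Q[next_states].max(axis=1)
    for s in range(n_states):
        for a in range(n_actions):
            mask = (states == s) & (actions == a)
            if mask.any():                                  # only update covered pairs
                Q[s, a] = targets[mask].mean()

print(np.argmax(Q, axis=1))   # greedy policy derived purely from historical data
```

The "only update covered pairs" check hints at the central difficulty of Offline RL discussed in the talk: the learner cannot gather data for state-action pairs the logging policy never visited.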
Mikhail Zanka is a research engineer at Allegro, where he works on Applied and Offline Reinforcement Learning. Before joining Allegro, he worked on ML in networks at Cisco. Afterward, he researched constrained RL methods at the University of Warsaw. His main research interests are Reinforcement Learning and Deep Learning.
Jacek is a research engineer at Allegro, where he works on using historical data for pre-training in reinforcement learning. Before joining Allegro, he worked at the University of Warsaw on applying action constraints in RL. Currently, he is focusing on Reinforcement Learning.
-
13:00 - 14:00 CET - Lunch Break
-
14:00 - 15:15 CET - Contributed Talks Session 1 (Hopin)
Time aspect in making an actionable prediction of a conversation breakdown
Piotr Janiszewski (Poznań University of Technology), Mateusz Lango (Poznań University of Technology), Jerzy Stefanowski (Poznań University of Technology)
Online harassment is an important problem of modern societies, usually mitigated by the manual work of website moderators, often supported by machine learning tools. The vast majority of previously developed methods enable only retrospective detection of online abuse, e.g., by automatic hate speech detection. Such methods fail to fully protect users, as the potential harm related to the abuse has, by then, already been inflicted. The recently proposed proactive approaches that detect derailing online conversations can help moderators prevent conversation breakdown. However, they do not predict the time left to the breakdown, which hinders the practical possibility of prioritizing moderators' work. In this work, we propose a new method based on deep neural networks that predicts both the possibility of conversation breakdown and the time left to conversation derailment. We also introduce three specialized loss functions and propose appropriate metrics. The conducted experiments demonstrate that the method, besides providing additional valuable time information, also improves on the standard breakdown classification task with respect to the current state-of-the-art method.
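To picture the two-headed setup, here is a minimal sketch of a generic joint breakdown-plus-time predictor in PyTorch. The architecture, dimensions, and losses below are my own illustration and not the specialized loss functions proposed in the talk; the pooled conversation embedding is a placeholder input.

```python
# Hypothetical two-head model: one head scores breakdown risk, the other
# regresses a non-negative time-to-breakdown from the same shared features.
import torch
import torch.nn as nn

class BreakdownPredictor(nn.Module):
    def __init__(self, emb_dim=256, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, 1)    # will the conversation derail?
        self.time_head = nn.Linear(hidden, 1)   # how soon (e.g. number of turns)?

    def forward(self, conv_embedding):
        h = self.body(conv_embedding)
        p_breakdown = torch.sigmoid(self.cls_head(h))
        time_left = nn.functional.softplus(self.time_head(h))  # keep the estimate positive
        return p_breakdown, time_left

model = BreakdownPredictor()
p, t = model(torch.randn(8, 256))
# A simple joint objective: BCE on the breakdown label plus L1 on the time target.
loss = nn.functional.binary_cross_entropy(p, torch.ones(8, 1)) + \
       nn.functional.l1_loss(t, torch.full((8, 1), 3.0))
loss.backward()
```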
Crisis Situation Assessment Using Different NLP Methods
Agnieszka Pluwak (SentiOne), Emilia Kacprzak (SentiOne), Michał Lew (SentiOne), Michał Stańczyk (SentiOne), Aleksander Obuchowski (Talkie.ai)
From the perspective of marketing studies, a crisis of reputation is interpreted as such post factum, once measurable financial loss has been induced. However, for risk management purposes it is highly desirable to discover its signs as early as possible. This is where artificial intelligence has found application since the start of the Facebook era. Emotion- or sentiment-classification algorithms, based on BiLSTM neural networks or transformer architectures, achieve very good F1 scores. Nevertheless, the scholarly literature offers very few approaches to the detection of reputational crises ante factum from an NLP point of view, while not every peak of mentions with negative sentiment equals a crisis of reputation by definition. There exist ample general sentiment classification tools dedicated to a specific social medium, e.g. Twitter, while reputational crises often spread across various Internet sources. They also tend to be highly unpredictable in the way they appear and spread online, and very few studies of their development have so far been conducted from the perspective of NLP tool design. Therefore, in this talk we try to answer the question: how can we track reputational crises quickly and precisely across multiple communication channels, and what do current NLP methods offer in this respect?
Law, Graphs and AI
Adam Zadrożny (National Centre for Nuclear Research)
Legal systems are among the most complicated human-made creations. They are hard to control due to their sparse and vast structure. For someone without a completed legal education it is hard to get even a good glimpse of a small part of them. But let’s look at the law from a computer scientist's perspective. It is divided into logical parts: Acts, Chapters, Articles, Points, and so on. What is more, there are references between relevant parts. So we have a hyperlink structure that allows us to present a legal system as a graph. By using this together with NLP methods we can efficiently visualise its structure and analyze interactions between its parts. This leads to the conclusion that we can use data mining and NLP for consistency checks, to detect redundancy, or to detect involuntary changes propagating through the hyperlink structure. In the talk, I will explain how such an analysis could be done for Polish and European law, and how it might help decide what to clean up in the legal system after the many hurried, messy changes introduced during the COVID pandemic. Last but not least, I will discuss how citizens might gain better control over what changes would be introduced by proposed Acts.
-
14:00 - 15:15 CET - Contributed Talks Session 2 (Hopin)
Manipulating explainability and fairness in machine learning
Hubert Baniecki (MI2DataLab, Warsaw University of Technology)
Are explainability methods black boxes themselves? As explanation and fairness methods became widely adopted in various machine learning applications, a crucial discussion was raised on their validity. Precise measures and evaluation approaches are required for a trustworthy adoption of explainable machine learning techniques. This should be as obvious as evaluating machine learning models, especially when working with various stakeholders. A careless adoption of these methods becomes irresponsible. Historically, adversarial attacks exploited machine learning models in diverse ways; hence, defense mechanisms were proposed, e.g. using model explanations. Nowadays, ways of manipulating explainability and fairness in machine learning have become more evident. They might be used for adversarial ends, but more so to highlight the explanations' shortcomings and the need for evaluation. In this talk, I will introduce the fundamental concepts fusing the domains of adversarial and explainable machine learning. I will then summarise the most impactful methods and results that have recently appeared in the literature. This talk is based on a live survey of related work, including over 30 relevant papers, available at https://github.com/hbaniecki/adversarial-explainable-ai.
Let's open the black box! Hessian-based toolbox for interpretable and reliable machines learning physics
Anna Dawid (University of Warsaw & ICFO), Patrick Huembeli (EPFL — École polytechnique fédérale de Lausanne), Michał Tomza (Faculty of Physics, University of Warsaw), Maciej Lewenstein (ICFO - The Institute of Photonic Sciences, Barcelona), Alexandre Dauphin (ICFO - The Institute of Photonic Sciences, Barcelona)
Identifying phase transitions is one of the key problems in quantum many-body physics. The challenge is the exponential growth of the complexity of quantum systems’ description with the number of studied particles, which quickly renders exact numerical analysis impossible. A promising alternative is to harness the power of machine learning (ML) methods designed to deal with large datasets. However, ML models, and especially neural networks (NNs), are known for their black-box construction, i.e., they usually hinder any insight into the reasoning behind their predictions. As a result, if we apply ML to novel problems, we can neither fully trust their predictions (lack of reliability) nor learn what the ML model learned (lack of interpretability). We present a set of Hessian-based methods (including influence functions) that open the black box of ML models, increasing their interpretability and reliability. We demonstrate how these methods can guide physicists in understanding patterns responsible for the phase transition. We also show that influence functions allow checking whether the NN, trained to recognize known quantum phases, can predict new unknown ones. We present this power both for numerically simulated data from the one-dimensional extended spinless Fermi-Hubbard model and for experimental topological data. We also show how we can generate error bars for the NN’s predictions and check whether the NN predicts using extrapolation instead of extracting information from the training data. The presented toolbox is entirely independent of the ML model’s architecture and is thus applicable to various physical problems.
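For reference, the standard influence-function approximation (in the sense of Koh and Liang, 2017, not a formula taken from this talk) shows where the Hessian enters: the estimated effect of up-weighting a training point z on the loss at a test point z_test is

```latex
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_{\theta} L\!\left(z_{\text{test}}, \hat{\theta}\right)^{\!\top}
     H_{\hat{\theta}}^{-1}\,
     \nabla_{\theta} L\!\left(z, \hat{\theta}\right),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2} L\!\left(z_i, \hat{\theta}\right),
```

where θ̂ denotes the trained parameters and L the per-example loss; large-magnitude values flag the training examples a given prediction leans on most.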
Foundations of Interpretable and Reliable Reinforcement Learning
Jacek Cyranka (University of Warsaw, Institute of Informatics)
In this talk I will showcase my recent research interests, centred on the study of interpretable and reliable reinforcement learning algorithms with applications to robotics, space missions, and computer games. First, the so-called state-planning policy method: I will present preliminary results obtained in the MuJoCo and SafetyGym simulation environments and discuss several prospective projects related to a vision-based state-planning policy method and offline reinforcement learning. Second, an idea to develop a set of environments simulating space missions, i.e. deployment of a ship into space under gravitational interactions with planets, with the aim of reaching a fixed target position or entering an orbit under prescribed hard safety constraints (avoiding crashes into other objects with gravitational pull).
-
14:00 - 15:15 CET - Contributed Talks Session 3 (Hopin)
Generative models in continual learning
Kamil Deja (Warsaw University of Technology), Wojciech Masarczyk (Warsaw University of Technology), Paweł Wawrzyński (Warsaw University of Technology), Tomasz Trzciński (Warsaw University of Technology)
Neural networks suffer from catastrophic forgetting, defined as an abrupt performance loss on previously learned tasks when acquiring new knowledge. For instance, if a network previously trained for detecting virus infections is retrained with data describing a recently discovered strain, the diagnostic precision for all previous ones drops significantly. To mitigate that, we can retrain the network on a joint dataset from scratch, yet this is often infeasible due to the size of the data, or impractical when retraining takes more time than it takes to discover a new strain. Catastrophic forgetting severely limits the capabilities of contemporary neural networks, and continual learning aims to address this pitfall. Although current approaches to continual learning emphasize the sequential nature of learning new discriminative tasks, we argue that the main attribute of how humans learn new things is discovering information structure without supervision. In this talk I will present several ideas for how we can use generative models and their latent data representations to address the problem of forgetting in neural networks.
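One concrete way generative models enter this picture is generative replay. The sketch below is my own minimal illustration of that idea, not the speaker's method; `classifier`, `generator` (with an assumed `sample` method), the data loader and the labelling function are all placeholders supplied by the caller.

```python
# Generative replay sketch: while training on the current task, mix real batches
# with samples drawn from a generator trained on earlier tasks, so the classifier
# keeps "rehearsing" old data it would otherwise forget.
import torch
import torch.nn.functional as F

def train_task(classifier, generator, optimizer, task_loader, old_label_fn, replay_ratio=0.5):
    classifier.train()
    for x_new, y_new in task_loader:
        n_replay = int(replay_ratio * x_new.shape[0])
        with torch.no_grad():
            x_old = generator.sample(n_replay)   # assumed generator API (placeholder)
            y_old = old_label_fn(x_old)          # e.g. labels from a frozen old classifier
        x = torch.cat([x_new, x_old])
        y = torch.cat([y_new, y_old])
        loss = F.cross_entropy(classifier(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

After each task, the generator itself is retrained (often also with replay) so that it covers the union of everything seen so far.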
A Bayesian Nonparametrics View into Deep Representations
Michał Jamroż (AGH University of Science and Technology), Marcin Kurdziel (AGH University of Science and Technology)
We investigate neural network representations from a probabilistic perspective. Specifically, we leverage Bayesian nonparametrics to construct models of neural activations in Convolutional Neural Networks (CNNs) and latent representations in Variational Autoencoders (VAEs). This allows us to formulate a tractable complexity measure for distributions of neural activations and to explore the global structure of latent spaces learned by VAEs. We use this machinery to uncover how memorization and two common forms of regularization, i.e. dropout and input augmentation, influence representational complexity in CNNs. We demonstrate that networks that can exploit patterns in data learn vastly less complex representations than networks forced to memorize. We also show marked differences between the effects of input augmentation and dropout, with the latter strongly depending on network width. Next, we investigate latent representations learned by standard β-VAEs and Maximum Mean Discrepancy (MMD) β-VAEs. We show that the aggregated posterior in standard VAEs quickly collapses to the diagonal prior when regularization strength increases. MMD-VAEs, on the other hand, learn more complex posterior distributions, even with strong regularization. While this gives a richer sample space, MMD-VAEs do not exhibit independence of latent dimensions. Finally, we leverage our probabilistic models as an effective sampling strategy for latent codes, improving the quality of samples in VAEs with rich posteriors.
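As a hedged illustration of the "Bayesian nonparametrics over activations" idea (my sketch, not the authors' model), one can fit scikit-learn's Dirichlet-process Gaussian mixture to a matrix of hidden activations and read off how many components receive appreciable weight as a crude proxy for representational complexity.

```python
# Dirichlet-process GMM over (stand-in) hidden activations.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
activations = rng.normal(size=(2000, 16))    # replace with real hidden-layer activations

dpgmm = BayesianGaussianMixture(
    n_components=30,                          # truncation level of the DP
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(activations)

effective = int(np.sum(dpgmm.weights_ > 1e-2))  # components carrying real mass
print(f"effective components: {effective}")
```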
LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood
Piotr Tempczyk (University of Warsaw), Rafał Michaluk (University of Warsaw), Adam Goliński (University of Oxford), Przemysław Spurek (Jagiellonian University), Jacek Tabor (Jagiellonian University)
We investigate the problem of local intrinsic dimension (LID) estimation. The LID of the data is the minimal number of coordinates necessary to describe a data point and its neighborhood without significant information loss. Existing methods for LID estimation do not scale well to high-dimensional data because they estimate the LID from the nearest-neighbor structure, which may cause problems due to the curse of dimensionality. We propose a new method for Local Intrinsic Dimension estimation using Likelihood (LIDL), which yields more accurate LID estimates thanks to recent progress in likelihood estimation in high dimensions, such as normalizing flows (NF). We show that our method yields more accurate estimates than previous state-of-the-art algorithms for LID estimation on standard benchmarks for this problem, and that, unlike other methods, it scales well to problems with thousands of dimensions. We anticipate this new approach to open a way to accurate LID estimation for real-world, high-dimensional datasets and expect it to improve further with advances in the NF literature.
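A rough sketch of the likelihood-based idea as I read the abstract (so treat the details as assumptions): perturb the data with Gaussian noise at several scales δ, estimate each point's log-density under a model fitted to the perturbed data, and regress log-density against log δ; the slope is approximately LID minus the ambient dimension D. Below, a full-covariance Gaussian stands in for the normalizing flow used in the actual method.

```python
# Toy LID estimation via multi-scale log-likelihood regression.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
D = 10
data = np.zeros((5000, D))
data[:, :2] = rng.normal(size=(5000, 2))     # a 2D manifold in 10D -> true LID = 2

deltas = np.array([0.02, 0.05, 0.1, 0.2])
queries = data[:200]
log_p = []
for delta in deltas:
    noisy = data + rng.normal(scale=delta, size=data.shape)
    model = multivariate_normal(mean=noisy.mean(0), cov=np.cov(noisy, rowvar=False))
    log_p.append(model.logpdf(queries))
log_p = np.stack(log_p)                       # shape: (n_scales, n_queries)

slopes = np.polyfit(np.log(deltas), log_p, deg=1)[0]   # one slope per query point
print("estimated LID ~", float(np.mean(D + slopes)))    # typically lands near 2
```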
-
15:15 - 15:30 CET - Lightning Talk: IDEAS NCBR (Hopin)
IDEAS NCBR - about the new research center
Piotr Sankowski (IDEAS NCBR)
How many of you have wanted to conduct research? And how many of you did not choose the scientific career path because of the limited support the academic system provides? I would like to present an option you may follow to pursue your own research interests. I am more than happy to announce that this year a new scientific and research center, IDEAS NCBR, was established.
IDEAS NCBR is a research program in AI and the digital economy. We believe that our center will become a discussion platform between academia and business. We aim to provide a high-quality working environment by creating attractive employment conditions and a leading mentoring program. We support researchers in delivering high-quality R&D projects by providing the necessary tools and networking opportunities, which can become a milestone in building your scientific career or testing your innovative idea. It will give you a chance to carry out an independent project, focus on solving problems, and take on interesting challenges. Our mission is to become the biggest AI research center in Poland.
So far, we have formed an incredible scientific board consisting of top experts from both AI research and the business environment. We have also created two research groups, led by prof. Piotr Sankowski and prof. Stefan Dziembowski, that will focus on blockchain, intelligent algorithms, and learned data structures.
Would you like to learn more and, together with us, create a new lighthouse AI project in Poland?
-
15:30 - 16:00 CET - Networking 1-1 (Hopin)
This networking will take place on the Hopin platform.
-
16:00 - 17:15 CET - Keynote Lecture: Rich Caruana (Hopin)
Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning
In machine learning often tradeoffs must be made between accuracy, privacy and intelligibility: the most accurate models usually are not very intelligible or private, and the most intelligible models usually are less accurate. This can limit the accuracy of models that can safely be deployed in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust models is important. EBMs (Explainable Boosting Machines) are a recent learning method based on generalized additive models (GAMs) that are as accurate as full complexity models, more intelligible than linear models, and which can be made differentially private with little loss in accuracy. EBMs make it easy to understand what a model has learned and to edit the model when it learns inappropriate things. In the talk I’ll present case studies where EBMs discover surprising patterns in data that would have made deploying black-box models risky. I’ll also show how we’re using these models to uncover and mitigate bias in models where fairness and transparency are important.
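As a hedged illustration of the EBM workflow (my sketch, assuming the open-source `interpret` package's `ExplainableBoostingClassifier` API rather than anything shown in the talk):

```python
# Minimal EBM sketch: fit a glass-box GAM-style model and pull out its global
# explanation (one shape function per feature) for inspection or editing.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)
print("held-out accuracy:", ebm.score(X_test, y_test))

# Per-feature shape functions and importances; in a notebook they can be browsed
# interactively, e.g. with `from interpret import show; show(ebm_global)`.
ebm_global = ebm.explain_global()
```

Because the model is a sum of per-feature shape functions, editing what it has learned amounts to editing those curves, which is the "model editing" capability the talk refers to.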
-
17:30 - 18:30 CET - Panel Discussion: Sins & Marvels of AI Research (Hopin)
Do you find yourself questioning the way Machine Learning research is conducted? Or quite the opposite – amazed at the progress we’ve achieved? Maybe both?
Panelists:
- Sara Hooker, Google Brain, Delta Analytics
- Kyle Cranmer, New York University
- Aleksandra Przegalińska, Kozminski University, American Institute for Economic Research
-
18:30 - 19:00 CET - Community Session (Hopin)
Are you interested in joining the ML in PL Association? Find out what it is like to be part of our organization and why it is worth joining us!
Led by Michał Zmysłowski, co-founder.
-
19:00 - 20:30 CET - Poster Session (GatherTown)
Traditionally, the Poster Session is the official last session of the middle day of the ML in PL Conference. During the session, participants will have the opportunity to see and discuss the accepted posters on the Gather Town platform.
List of accepted posters: /accepted-talks-and-posters/
-
19:00 - 20:15 CET - Women in ML in PL: Discussion Panel (Hopin)
The participants will hear a discussion on ML trends and career paths from the panel of experts from both academia and industry.
Panelists:
- Bianca Dumitrascu, University of Cambridge
- Angela Fan, INRIA Nancy, Facebook AI Research Paris
- Marta Kwiatkowska, University of Oxford
- Ewa Szczurek, University of Warsaw
- Maja Trębacz, DeepMind
- Liesbeth Venema, Nature
Moderator: Agnieszka Słowik, University of Cambridge
-
20:30 - 23:45 CET - Networking + AI Art Exhibition (GatherTown)
The networking will take place on the Gather Town platform.
-
Sunday, 7 November
-
10:30 - 11:45 CET - Keynote Lecture: Marta Garnelo (Hopin)
Meta learning: theory and applications
Meta learning, often referred to as “learning to learn”, encompasses research that makes an explicit distinction between what a model (or agent) has to be able to adapt to at inference time and what can be amortised. An example task in this area could be assigning images to classes that have never been observed during training, or rapid adaptation to a new distribution of mazes for robot navigation.
In this talk we will introduce the meta learning naming conventions and assumptions. We will go through the main groups of methods and the benchmarks that have fueled research in this field over the last years. The goal is to provide a unified picture that applies to both standard supervised learning and reinforcement learning. We will conclude with a set of open challenges and state-of-the-art results.
-
12:00 - 13:15 CET - Panel Discussion: Prospects for ML in Poland: education and work (Hopin)
Wondering what working in the Machine Learning sector looks like in Poland? Or maybe you are entering the ML world and want to know where to start? This is a unique opportunity to learn more about the Polish ML scene from our experienced speakers!
Panelists:
- Robert Bogucki, Deepsense.ai
- Marek Cygan, University of Warsaw, Nomagic
- Łukasz Kobyliński, Polish Academy of Sciences, Sages, SigDelta
- Julia Krysztofiak-Szopa, Appsilon
- Paweł Wawrzyński, Warsaw University of Technology
Important info: this is the only event during the entire conference that will be held entirely in Polish!
-
12:00 - 13:15 CET - Women in ML in PL: Crushing the Code Interview Workshop (Hopin)
Join us for this session where we’ll share tips on how to prepare for the coding interview at Facebook. We’ll walk you through the format of the interview and shed some light on what kind of questions you can expect, so that you can crush your next technical interview.
Led by Ola Piktus (Facebook AI)
-
13:15 - 14:15 CET - Lunch Break
-
14:15 - 15:15 CET - Sponsor Keynote: RTB House (Hopin)
ML Challenges in cookieless world
Sophisticated, high-qps, real-time Machine Learning systems are at the heart of advanced advertising solutions. Today these systems are in the midst of a profound overhaul. New, privacy-centric mechanisms are being developed that will soon supersede the traditional tech stack that relied on third-party cookies.
RTB House is at the forefront of these changes. This talk will give you a glimpse of the novel challenges that the industry faces, including federated learning, browser-side model evaluation, differentially-private reporting, and more.
Presented by Jonasz Pamuła, Head of Research at RTB House
-
15:30 - 16:25 CET - Contributed Talks Session 4 (Hopin)
Self-attention based encoder models for strong lens detection
Hareesh Thuruthipilly (National Centre of Nuclear Research, Poland), Adam Zadrożny (National Centre of Nuclear Research, Poland), Agnieszka Pollo (National Centre of Nuclear Research, Poland)
In the talk we would like to present an architecture for image processing based on transformers and inspired by the DETR architecture introduced by Facebook as a better alternative to regular CNNs and RNNs. We applied our architecture to the Bologna Lens Challenge (a mock data challenge for finding gravitational lensing) and were able to surpass the best existing automated methods that participated in the challenge. Our proposed architecture has a very low computational cost and has shown better stability than the CNNs that participated in the challenge. In addition, the proposed architecture seems to be resistant to the problem of overfitting.
Color recognition should be simple - but isn’t - case study of player clustering in football.
Marcin Tuszyński (Respo.Vision), Łukasz Grad (MIMUW & Respo.Vision), Wojtek Rosiński
Color recognition seems like a simple task; in fact, there are over 500 repositories on GitHub and dozens of easy-to-find blog posts on this topic. While fairly common, it is not always easy to use and adapt to complex environments. The problem that we are dealing with is fully automatic, real-time player clustering in football. Our goal is to decide, during a particular game, to which of the two teams a person belongs. Our setting, which is information extraction from any broadcast-like match video, creates an extraordinarily difficult environment for machine learning models. Significant differences in camera angles, variable weather and light conditions, temporary occlusions, and high movement dynamics are only some of the factors that need to be taken into account when creating a highly robust solution. In our talk, we will give a short introduction to Gaussian Mixture Models (GMMs) and how they can be used to solve the problem of player clustering. We will present our results using a GMM applied to a color-based representation of each player, in both online and offline cases. We will show how different color models and player representations impact the clustering performance and the ability to detect non-player outliers. Next, we will focus on the shortcomings of the above-mentioned approach, especially during model fitting at the beginning of the game, which ultimately led us to develop alternative methods that do not rely on online estimation. A wise man once said, „If you don’t know what to do, use neural networks”. To our surprise, there are only a few publications on the topic of color recognition. We are going to walk you through our approaches and their architectures, beginning with why treating our problem as color classification may not be enough in some cases, how colorspace regression can be extended and customized, and why it overfits easily. Next, we describe how one can incorporate color as an auxiliary input in a neural network and why it provides unsatisfactory results. Finally, we will focus on how metric (color) learning can solve the player clustering problem and what to do when it (still) doesn’t want to.
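As a toy illustration of the GMM baseline described above (not Respo.Vision's pipeline; the colour representation and all numbers are made up):

```python
# Cluster players into two teams with a Gaussian mixture over a simple
# per-player colour feature, here the mean RGB of a hypothetical jersey crop.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
team_a = np.clip(rng.normal([0.8, 0.2, 0.2], 0.08, size=(11, 3)), 0, 1)  # reddish kits
team_b = np.clip(rng.normal([0.2, 0.3, 0.8], 0.08, size=(11, 3)), 0, 1)  # bluish kits
colours = np.vstack([team_a, team_b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(colours)
print(labels)                                  # two blocks of identical (arbitrary) labels
print(gmm.score_samples(colours).round(1))     # low log-likelihood can flag non-player outliers
```

In the real setting the fitting happens online over detections streaming in from the broadcast, which is exactly where the early-game instability mentioned in the abstract shows up.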
-
15:30 - 16:25 CET - Contributed Talks Session 5 (Hopin)
Neural network-based left ventricle geometry prediction from CMR images with application in biomechanics
Agnieszka Borowska (University of Glasgow), Lukasz Romaszko (University of Glasgow), Alan Lazarus (University of Glasgow), David Dalton (University of Glasgow), Colin Berry (British Heart Foundation and University of Glasgow), Xiaoyu Luo (University of Glasgow), Dirk Husmeier (University of Glasgow), Hao Gao (University of Glasgow)
Combining biomechanical modelling of left ventricular (LV) function and dysfunction with cardiac magnetic resonance (CMR) imaging has the potential to improve the prognosis of patient-specific cardiovascular disease risks. Biomechanical studies of LV function in three dimensions usually rely on a computerized representation of the LV geometry based on finite element discretization, which is essential for numerically simulating in vivo cardiac dynamics. Detailed knowledge of the LV geometry is also relevant for various other clinical applications, such as assessing the LV cavity volume and wall thickness. Accurately and automatically reconstructing personalized LV geometries from conventional CMR images with minimal manual intervention is still a challenging task, which is a pre-requisite for any subsequent automated biomechanical analysis. In this talk I will present a deep learning-based automatic pipeline for predicting the three-dimensional LV geometry directly from routinely-available CMR cine images, without the need to manually annotate the ventricular wall. The framework takes advantage of a low-dimensional representation of the high-dimensional LV geometry based on principal component analysis. I will discuss how the inference of myocardial passive stiffness is affected by using our automatically generated LV geometries instead of manually generated ones. These insights can be used to inform the development of statistical emulators of LV dynamics to avoid computationally expensive biomechanical simulations. The proposed framework enables accurate LV geometry reconstruction, outperforming previous approaches by delivering a reconstruction error 50% lower than reported in the literature. I will further demonstrate that for a nonlinear cardiac mechanics model, using our reconstructed LV geometries instead of manually extracted ones only moderately affects the inference of passive myocardial stiffness described by an anisotropic hyperelastic constitutive law. The developed methodological framework has the potential to make an important step towards personalized medicine by eliminating the need for time-consuming and costly manual operations. In addition, the proposed method automatically maps the CMR scan into a low-dimensional representation of the LV geometry, which constitutes an important stepping stone towards the development of an LV geometry-heterogeneous emulator.
Cell-counting in human embryo time-lapse monitoring
Piotr Wygocki (MIM Solutions & University of Warsaw), Michał Siennicki (MIM Solutions), Tomasz Gilewicz (MIM Solutions), Paweł Brysch (MIM Solutions)
Embryo visual analysis is a non-invasive method of selecting blastocysts for transfer after in vitro fertilization. Currently, it is mostly performed by embryologists. One of the prevalent biomarkers used for embryo selection is the so-called cell division times. Thanks to the development of automatic embryo monitoring systems we have obtained more than 600 clips of embryos over the first 5 days of incubation - from a single cell to a grown blastocyst ready for transfer. Photos were taken every 7 minutes - about one thousand frames per embryo. The division times were manually tagged by embryologists. We created an ML system which, given an embryo time-lapse, predicts its division times. In 91% of cases, its prediction falls within the 3% error interval. Our model consists of two levels: a 3D ConvNet that counts cells in short video segments, and a second level which, based on the first model's predictions, returns the division times for the whole clip.
-
15:30 - 16:25 CET - Contributed Talks Session 6 (Hopin)
Beautiful Mind and RTB Auctions
Piotr Sankowski (MIM Solutions, IDEAS NCBR & University of Warsaw), Piotr Wygocki (MIM Solutions), Adam Witkowski (MIM Solutions), Michał Brzozowski (MIM Solutions)
RTB (real-time bidding) ad auctions are gaining in importance - in 2021, their global value will reach over $155 billion. Because the advertiser purchases individual views targeted at specific users, they have much more control over the course of advertising campaigns and a chance to match the advertisement to the user. Our company supported the creation of several systems operating in this market, and our clients include RTB House, Spicy Mobile and HitDuck. In my presentation, I would like to tell you about a fundamental change that has taken place in the RTB market in recent years, namely the transition from the second-price auction to the first-price auction. First, I will present the reasons for this change and why some brokerage firms previously found it profitable to deceive clients about their auction mechanisms. Secondly, I will explain what it means for us as strategic players and how to optimize our bidding algorithms in the case of the first-price auction. I will show how the assumption of a symmetric market, combined with the Nash equilibrium, allows advertising costs to be reduced by over 40%.
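For background (a textbook auction-theory result, not a claim from the talk): in a symmetric first-price auction with n risk-neutral bidders whose values are drawn i.i.d. from a distribution F, the Nash-equilibrium strategy shades the bid below the true value:

```latex
b(v) \;=\; v \;-\; \frac{\int_{0}^{v} F(x)^{\,n-1}\,\mathrm{d}x}{F(v)^{\,n-1}},
\qquad\text{e.g. for } F \text{ uniform on } [0,1]:\quad b(v) = \frac{n-1}{n}\,v .
```

In a second-price auction, by contrast, bidding one's true value is a dominant strategy, which is why the market-wide switch changes how bidding algorithms should be designed.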
From Big Data to Semantic Data in Industry 4.0
Szymon Bobek (Jagiellonian University), Grzegorz Nalepa (Jagiellonian University)
Advances in artificial intelligence trigger transformations that lead more and more companies to enter the Industry 4.0 era. In many cases these transformations are gradual and performed in a bottom-up manner. This means that in the first step the industrial hardware is upgraded to collect as much data as possible, without actually planning how the information will be used. Secondly, the infrastructure for data storage and processing is prepared to keep large volumes of historical data accessible for further analysis. Only in the last step are methods for processing the data developed to improve, or get more insight into, the industrial and business process. Such a pipeline leaves many companies facing a huge amount of data and an incomplete understanding of how the data relates to the process that generates it. In this talk we will present our work to improve this situation and bring more understanding to the data, using the example of two industrial use cases: a coal mine and a steel factory.
-
16:30 - 16:45 CET - Lightning Talk: Samsung (Hopin)
Sound AI – the Missing Piece in AI Puzzle!
Sound is important. Hearing is our primary sense and we cannot imagine the world without it. In this short presentation, we will show how at Samsung we combine Machine Learning with the fascinating world of sounds. Based on real examples taken from our portfolio we will show what is possible and what challenges are ahead of us.
Speaker: Jakub Tkaczuk, Head of Audio Intelligence at Samsung R&D Institute Poland
Passionate researcher and project manager with over 10 years of experience in the area of artificial intelligence, audio processing, multimedia systems and vision processing. Experienced in project, technology, and innovation management. Building successful teams. Audio technology enthusiast.
-
16:45 - 17:00 CET - Lightning Talk: Alphamoon (Hopin)
Intelligent Document Processing. The next level of automation in enterprises
In recent years automation technologies have been successfully transforming industries by freeing humans from tedious and repetitive tasks. Despite their initial success, they are still lacking critical components like trustworthy and generic approaches for text and document processing.
During this short talk, we will present the main challenges in Intelligent Document Processing development, including vision-based data capture, content understanding with multimodal deep learning, trustworthiness, and continual improvement of ML models.
Speaker: Adam Gonczarek, Chief Technology Officer, Alphamoon
Co-founder and Chief Technology Officer at Alphamoon – a Polish company developing intelligent automation technologies for enterprises. PhD in machine learning from Wrocław University of Science and Technology. Former assistant professor and academic lecturer. Experienced in leading and applying AI for large commercial projects across various industries. Co-author of several scientific papers in top-tier journals and conferences (including workshops at ICML & NeurIPS).
-
17:00 - 17:55 CET - Contributed Talks Session 7 (Hopin)
LOGAN: Low-latency, On-device, GAN Vocoder
Dariusz Piotrowski (Amazon), Mikołaj Olszewski (Amazon), Arnaud Joly (Amazon)
In recent years neural networks have driven technological advancement across a multitude of domains, including TTS (Text To Speech) technology. Meanwhile, both the economic and environmental costs of training and running huge neural networks have been growing rapidly, creating the need to optimise their efficiency. Making the models smaller and faster not only reduces their economic and environmental impact but also allows them to run offline, directly on end-users' devices. Bringing the inference offline helps protect the user's data, as it does not leave their devices, and significantly reduces the latency by removing the time needed for cloud communication. Furthermore, it solves the problem of missing or intermittent connectivity. A typical TTS system consists of two models - a context generator which creates the speech representation as mel-spectrograms from a phoneme sequence, and a vocoder which reconstructs the waveform. This work focuses on the latter. As a baseline we have chosen a state-of-the-art, GAN-based vocoder - Multi-Band MelGAN. It can synthesise high-quality audio with a complexity of 0.95 GFLOPS, which is fast enough to run in real time on high-powered embedded devices. However, that is not enough for real-world scenarios where multiple systems run on the device at once. What is more, the majority of devices are lower-powered, and the latency would be prohibitively high, especially for longer utterances. Models created with efficiency in mind are often orders of magnitude faster with minimal to no quality degradation. In this work we propose a number of methods, which can also be utilised across other domains, to achieve a model suitable for real-world scenarios. To decrease the latency, the inference is typically run in a streamable fashion, meaning the model is fed the input data in chunks and produces the output in corresponding chunks. Multi-Band MelGAN is a convolution-based model, which in contrast to traditional auto-regressive models is not streamable by default. If the input were split into parts, fed into a convolutional network and the outputs were then concatenated, the result would be different, and possibly worse, than if the whole input were fed into the network at once. Lack of streamable synthesis would cause the latency to grow linearly with the length of the target utterance. To address that issue we propose a method of running the inference in a streamable fashion that is suitable for convolutional models, by carefully overlapping chunks of the model input and extracting parts of the computational graph based on the receptive field of the model. To further improve the performance of the model and reduce the latency we introduce quarter-precision quantisation and overcome the quality degradation it typically causes by using μ-law output together with pre-emphasis filtering and quantisation-aware training. Moreover, we increase the parallelisation of the computations by increasing the number of frequency sub-bands and optimise the architecture by reducing the number of filters used.
As a result the proposed model achieves almost 44x higher RTF (Real Time Factor - audio length divided by inference time), reduces the latency by 78% and is almost 80% smaller, while decreasing the quality by only 1 point in relative MUSHRA score (methodology of evaluating the perceived quality of the output from lossy compression algorithms).
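The overlapping-chunk trick can be illustrated with a tiny PyTorch example (my sketch with made-up sizes, not the LOGAN implementation): for a single non-padded Conv1d layer, overlapping consecutive input chunks by the kernel size minus one makes the concatenated chunked outputs identical to a single full-length pass.

```python
# Streaming a non-causal Conv1d by feeding overlapping input chunks.
import torch
import torch.nn as nn

torch.manual_seed(0)
kernel_size = 7                       # receptive field of this toy layer
conv = nn.Conv1d(1, 1, kernel_size)   # no padding: output length = L - k + 1
x = torch.randn(1, 1, 1000)           # full mel-like input sequence

full = conv(x)                        # reference: one pass over the whole input

chunk_out = 100                       # output samples produced per chunk
outputs = []
for start in range(0, full.shape[-1], chunk_out):
    end = min(start + chunk_out, full.shape[-1])
    # Output sample t depends on inputs [t, t + k - 1], so this slice is exactly
    # what the output chunk [start, end) needs.
    x_chunk = x[..., start : end + kernel_size - 1]
    outputs.append(conv(x_chunk))
streamed = torch.cat(outputs, dim=-1)

print(torch.allclose(full, streamed, atol=1e-6))   # True: chunked == full pass
```

For a stack of layers the required overlap grows to the receptive field of the whole stack, which is exactly the bookkeeping a streamable-inference method has to do.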
Synerise at KDD Cup 2021: Node Classification in massive heterogeneous graphs
Michał Daniluk (Synerise), Jacek Dąbrowski (Synerise), Barbara Rychalska (Synerise), Konrad Gołuchowski (Synerise)
We recently won three highly acclaimed international AI competitions. In this talk, I will present our solution to one of them - KDD Cup Challenge 2021. We achieved 3rd place in this competition - after Baidu and DeepMind teams, beating OPPO Research, Beijing University, Intel & KAUST, among many others. The competition task was to predict the subject of scientific publications on the basis of edges contained in the heterogeneous graph of papers, citations, authors, and scientific institutions. The graph of unprecedented size (~ 250 GB) contained 244,160,499 vertices of 3 types, connected by as many as 1,728,364,232 edges, which made it possible to verify the algorithms in terms of their readiness to operate on very large-scale data. We proposed an efficient model based on our previously introduced algorithms: EMDE and Cleora, on top of a simplistic feed-forward neural network.
-
17:00 - 17:55 CET - Contributed Talks Session 8 (Hopin)
Deep learning for decoding 3D hand translation based on ECoG signal
Maciej Śliwowski (Univ. Grenoble Alpes), Matthieu Martin (Univ. Grenoble Alpes), Antoine Souloumiac (Université Paris-Saclay), Pierre Blanchart (Université Paris-Saclay), Tetiana Aksenova (Univ. Grenoble Alpes)
Brain-computer interfaces (BCIs) may significantly improve tetraplegic patients' quality of life. BCIs create an alternative way of communication between humans and the environment and thus could potentially compensate for motor function loss. However, most current systems suffer from low decoding accuracy and are not easy for patients to use. In the case of invasive BCIs, electrocorticography-based (ECoG) systems can provide better signal characteristics, compared to electroencephalography (EEG), while being less invasive than intracortical recordings. Most studies predicting continuous hand translation trajectories from ECoG use linear models that may be too simple to analyze brain processes. Models based on deep learning (DL) proved efficient in many machine learning problems. Thus they emerge as a solution for creating robust, high-level representations of brain signals. This work evaluated several DL-based models to predict 3D hand translation from ECoG time-frequency features. The data was recorded in a closed-loop experiment ("BCI and Tetraplegia" clinical trial, clinicaltrials.gov NCT02550522) with a tetraplegic subject controlling the hand movements of a virtual avatar. We started the analysis with a multilayer perceptron taking vectorized features as input. Then, we proposed convolutional neural networks (CNN), which take matrix-organized inputs that approximate the spatial relationship between the electrodes. In addition, we investigated the usefulness of long short-term memory (LSTM) to analyze temporal information. Results showed that CNN-based architectures performed better than the current state-of-the-art multilinear model on the analyzed ECoG dataset. The best architecture used a CNN-based model to analyze the spatial representation of time-frequency features, followed by an LSTM exploiting the sequential character of the desired hand trajectory. Compared to the multilinear model, DL-based solutions increased average cosine similarity from 0.189 to 0.302 for the left hand and from 0.157 to 0.249 for the right hand. This study showed that CNN and LSTM could improve ECoG signal decoding and increase the quality of interaction for ECoG-based BCI.
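A hedged sketch of this kind of CNN-plus-LSTM decoder in PyTorch (my illustration; the electrode grid size, number of frequency bins and layer widths are assumptions, not the study's architecture):

```python
# CNN over per-timestep spatial feature maps, LSTM over time, linear head -> (x, y, z).
import torch
import torch.nn as nn

class ConvLSTMDecoder(nn.Module):
    def __init__(self, freq_bins=40, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # spatial mixing of electrodes
            nn.Conv2d(freq_bins, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)                # x, y, z translation

    def forward(self, x):
        # x: (batch, time, freq_bins, grid_h, grid_w) time-frequency features
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)    # (B*T, 32)
        feats = feats.view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                    # translation at the last step

model = ConvLSTMDecoder()
dummy = torch.randn(4, 10, 40, 8, 8)                    # assumed 8x8 electrode grid
print(model(dummy).shape)                               # torch.Size([4, 3])
```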
Multiscale Reweighted Stochastic Embedding (MRSE): Deep Learning of Generalized Variables for Statistical Physics
Jakub Rydzewski (Nicolaus Copernicus University)
We present a new machine learning method called multiscale reweighted stochastic embedding (MRSE) for automatically constructing generalized variables to represent and drive the sampling of statistical physics systems, particularly enhanced sampling simulations. The technique automatically finds generalized variables by learning a low-dimensional embedding of the high-dimensional feature space into the latent space via a deep neural network. Our work builds upon the popular t-distributed stochastic neighbor embedding approach. We introduce several new aspects to stochastic neighbor embedding algorithms that make MRSE especially suitable for enhanced sampling simulations: (1) a well-tempered landmark selection scheme; (2) a multiscale representation of the high-dimensional feature space; and (3) a reweighting procedure to account for biased training data. We show the performance of MRSE by applying it to several model systems.
-
17:00 - 17:55 CET - Contributed Talks Session 9 (Hopin)
Application of Machine Learning for multi-phase flow pattern recognition
Rafał A. Bachorz (PSI Polska Sp. z o.o.), Adam Karaszewski (PSI Polska Sp. z o.o.), Grzegorz Miebs (PSI Polska Sp. z o.o.)
Pipeline transportation safety requires the application of LDS (Leak Detection Systems). Reinforcing entire pipeline lengths with continuous physical leak detection based on hydrocarbon-sensing ropes, fiberoptic temperature, or vibration detection is costly and not implemented for existing pipelines. Running scheduled checks with internal inspection gauges, camera-fitted drones, or dog patrols is likely to miss or delay spill detection. On the other hand, the deployment of mature computational methods utilizing sparsely installed flow, pressure, or acoustic sensors provides a sufficient amount of process data to allow robust, accurate, and precise leak detection and localization. This work focuses on a particularly difficult case: pressure-based LDS under multi-phase flow conditions. To maintain the accuracy of leak detection and localization, and to keep estimating the reliability and robustness of LDS models, it is important to monitor the profile of flow patterns along the pipeline paths. The hydraulic phenomena in multi-phase flow are particularly complex; one of the features determining the fluid properties is the flow pattern. Classically one distinguishes between Annular, Bubble, Dispersed bubble, Intermittent, Stratified smooth, and Stratified wavy patterns. These flow patterns are difficult to determine from first principles. A possible remedy is to utilize empirical data to create predictive models capable of assigning the flow pattern. The predictive strength of the resulting models was carefully checked, and Machine Learning Interpretability techniques were employed to obtain a deeper understanding of the predictions. In particular, prediction breakdown and partial dependence plots were applied. This, in connection with domain knowledge, provided an efficient tool for Root Cause Analysis. All the Machine Learning Interpretability techniques were applied to understand the decisions proposed by the models. This allowed for building confidence and trust that the predictions are fair and based on clear presumptions.
Detection of faulty specimen made of Carbon fiber composites with deep learning and non-destructive testing techniques
Marek Sawicki (Wroclaw University of Science and Technology), Damian Pietrusiak (Wroclaw University of Science and Technology), Mariusz Ptak (Wroclaw University of Science and Technology)
In the presented work the authors show the capabilities of Deep Learning and Convolutional Neural Networks for investigating material defects in Carbon Fiber Reinforced Polymers (CFRP) using Non-Destructive Testing (NDT) techniques. Specimens made of CFRP, with two holes placed according to four random position schemes, were preliminarily loaded on a tensile test machine: 1000 cycles at 50 Hz with a 2 kN amplitude. Shortly after this preliminary phase, a static tensile test was conducted up to rupture, with the material out of thermodynamic balance. Disturbing factors such as the lack of thermodynamic balance and the four random hole-position schemes were used to estimate environmental influence factors for real-world applications. An IR camera and a regular photo camera were used to record the temperature field and strain. The authors tested 30 specimens, of which 1 was excluded for technical reasons, with 51 acquisition points during the static tensile phase. This gives around 1500 data sets obtained from the experiment; through augmentation the number of data sets was multiplied by 4, to around 6000. Thermography (IR imaging) was used to track temperature signatures of regions which suffered irreversible thermodynamic changes; such symptoms are usually related to local damage of the material. The photo camera was used to record the surface of the specimen under the static tensile test, and selected frames from the recorded video were used for strain-field calculation with the Digital Image Correlation (DIC) technique. Due to the different measurement principles, advanced fitting of the results to a common format was applied. The common format was 100 x 50 x n, where 100 and 50 represent height and width, and n represents the number of layers used for training. These values were the input for three CNN/DNN models. Raw values without pre-processing, including feature extraction, had no potential for determining the degree of material wear-out; filtering the signal for feature extraction required an extensive study of composite material mechanics. The first model, based on a CNN, was inspired by common CNN architectures such as VGG16, VGG19, and ResNet. The second model was based on a DNN architecture with a single vector input generated by Principal Component Analysis; this architecture massively reduced computation time with no significant differences in the final model predictions. The third model consisted of two separate input paths to keep the DIC and IR-related data apart. In all cases a classification problem was investigated: the model predicts a class as a percentage range of the averaged tensile curve. The output data was checked against the true classes for the acquisition points. In conclusion, the authors performed 3 blind tests with specimens excluded from the training process to show that the models are able to identify specimens with internal defects within a population of healthy specimens.
-
18:00 - 19:15 CET - Keynote Lecture: Łukasz Kaiser (Hopin)
Transformers - How Far Can They Go?
Transformer models have been used to obtain more and more impressive results in recent years. I will review the Transformer architecture and a few results, and analyze the issues that arise in scaling up the models. I will then present recent work on addressing issues like efficiency, handling long contexts and out-of-distribution generalization. I will then talk about new frontiers for Transformers: can they one day just solve problems for us?
-
19:30 - 20:00 CET - Closing Remarks (Hopin)
-
# Invited Speakers
-
Marta Garnelo
Marta is a senior research scientist at DeepMind where she has primarily worked on deep generative models and meta learning. As part of this research she has been involved in developing Generative Query Networks and led the work on Neural Processes. In addition to generative models her recent interests have expanded to multi-agent systems and game theory. Prior to DeepMind Marta obtained her PhD from Imperial College London, where she also worked on symbolic reinforcement learning.
-
Lukasz Kaiser
Lukasz is a deep learning researcher who works on fundamental aspects of deep learning and natural language processing. He has co-invented Transformers and other neural sequence models and co-authored the TensorFlow system and the Tensor2Tensor and Trax libraries. Before working on machine learning, Lukasz was a tenured researcher at University Paris Diderot and worked on logic and automata theory. He received his PhD from RWTH Aachen University in 2008 and his MSc from the University of Wroclaw, Poland.
-
Marta Kwiatkowska
Marta Kwiatkowska is a Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker, the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in diverse fields, including distributed computing, wireless networks, security, robotics, healthcare, systems biology, DNA computing, and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska is the first female winner of the Royal Society Milner Award, winner of the BCS Lovelace Medal, and was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She won two ERC Advanced Grants, VERIWARE and FUN2MODEL, and is a co-investigator of the EPSRC Programme Grant on Mobile Autonomy. Kwiatkowska is a Fellow of the Royal Society, a Fellow of ACM, EATCS, BCS and PTNO, and a Member of Academia Europaea.
-
Rich Caruana
Rich Caruana is a Senior Principal Researcher at Microsoft. His research focus is on intelligible/transparent modeling and machine learning for medical decision-making. Before joining Microsoft, Rich was on the faculty at Cornell, at UCLA's Medical School, and at CMU's Center for Learning and Discovery. Rich's Ph.D. is from CMU, and his thesis on Multitask Learning helped create interest in the new subfield of Transfer Learning. Rich received an NSF CAREER Award in 2004 for Meta Clustering.
-
Daphne Koller
Daphne Koller is CEO and Founder of insitro, a machine-learning enabled drug discovery company. Daphne is also co-founder of Engageli, was the Rajeev Motwani Professor of Computer Science at Stanford University, where she served on the faculty for 18 years, the co-CEO and President of Coursera, and the Chief Computing Officer of Calico, an Alphabet company in the healthcare space. She is the author of over 200 refereed publications appearing in venues such as Science, Cell, and Nature Genetics. Daphne was recognized as one of TIME Magazine’s 100 most influential people in 2012. She received the MacArthur Foundation Fellowship in 2004 and the ACM Prize in Computing in 2008. She was inducted into the National Academy of Engineering in 2011 and elected a fellow of the American Association for Artificial Intelligence in 2004, the American Academy of Arts and Sciences in 2014, and the International Society of Computational Biology in 2017.
-
Marcin Druzkowski
Marcin Druzkowski is an Associate Director, AI Software Engineering, at BCG GAMMA in Warsaw. He is a software engineer with experience in architecting and building data platforms and machine learning solutions, and he brings proper software engineering practices into the data science space (e.g. deployment pipelines, reproducibility). Before joining BCG GAMMA, Marcin was a lead machine learning engineer at Ocado.com, where he was responsible for building and deploying models for email classification in the contact centre, customer segmentation, and demand forecasting. Marcin holds a Bachelor's degree in Computer Science from Jagiellonian University.
-
Yoshua Bengio
Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research at Université de Montréal, as well as the Founder and Scientific Director of Mila and the Scientific Director of IVADO. Considered one of the world’s leaders in artificial intelligence and deep learning, he is a recipient of the 2018 A.M. Turing Award, often described as the “Nobel Prize of computing”, together with Geoff Hinton and Yann LeCun. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, an Officer of the Order of Canada, and a Canada CIFAR AI Chair.
# Call for Contributors
We are very excited to invite you to submit your proposals for contributed talks and posters for the ML in PL ’21 Conference!
All accepted talks and posters will be presented during the Main Conference and their authors will receive a free ticket.
The list of accepted talks and posters can be found here.
The results of the Best Poster and Contributed Talks Awards can be found here.
A detailed description of the Call for Contributions can be found here.
Instructions for preparing your talk and poster can be found here.
# Panel Discussion
Prospects for ML in Poland: education and work
# Women in ML in PL
At #MLinPL, we understand the importance of supporting historically underrepresented groups. That's why we are glad to present our new initiative: Women in ML in PL.
On Saturday and Sunday, we will hold two events dedicated to empowering women in Machine Learning research:
• Workshop “Cracking the coding interview”
• A panel of female researchers working in natural language processing, bioinformatics and at the intersection of applied physics and machine learning.
More on each soon!
# Students' Day
Registration for Students’ Day at the #MLinPL Conference 2021 is now open! We are very excited to invite you to submit your talk proposals. Talks should be about 20 minutes long.
Participation in Students' Day is free of charge. Moreover, all accepted speakers will receive tickets for the whole ML in PL Conference 2021!
Check out detailed information here.
Don't hesitate: the deadline for submissions is 10 October 2021.
# Scientific Board
Ewa Szczurek is an assistant professor at the Faculty of Mathematics, Informatics and Mechanics at the University of Warsaw. She holds two Master's degrees, one from Uppsala University, Sweden, and one from the University of Warsaw, Poland. She completed her PhD at the Max Planck Institute for Molecular Genetics in Berlin, Germany, and conducted postdoctoral research at ETH Zurich, Switzerland. She now leads a research group focusing on machine learning and molecular biology, with most applications in computational oncology. Her group works mainly with probabilistic graphical models and deep learning, with a recent focus on variational autoencoders.
Henryk Michalewski obtained his Ph.D. in Mathematics and Habilitation in Computer Science from the University of Warsaw. Henryk spent a semester at the Fields Institute, was a postdoc at Ben-Gurion University in Beer-Sheva, and was a visiting professor at the École normale supérieure de Lyon. He worked on topology, determinacy of games, logic, and automata. He then turned his interests to more practical games and wrote two papers on Morpion Solitaire. While presenting these papers at the IJCAI conference in 2015, he met researchers from DeepMind and discovered the budding field of deep reinforcement learning. This resulted in a series of papers including Learning from memory of Atari 2600, Hierarchical Reinforcement Learning with Parameters, Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes, and Reinforcement Learning of Theorem Proving.
In his scientific work, Jacek Tabor deals with broadly understood machine learning, in particular with deep generative models. He is also a member of the GMUM group (gmum.net), which aims at the popularization and development of machine learning methods in Cracow.
Jan Chorowski is an Associate Professor at the Faculty of Mathematics and Computer Science at the University of Wrocław. He received his M.Sc. degree in electrical engineering from the Wrocław University of Technology and his Ph.D. from the University of Louisville. He has visited several research teams, including Google Brain, Microsoft Research, and Yoshua Bengio’s lab. His research interests are applications of neural networks to problems which are intuitive and easy for humans but difficult for machines, such as speech and natural language processing.
Prior to joining Yahoo! Research, Krzysztof Dembczyński was an Assistant Professor at Poznan University of Technology (PUT), Poland. He received his PhD degree in 2009 and his Habilitation degree in 2018, both from PUT. During his PhD studies he worked mainly on preference learning and boosting-based decision rule algorithms. During his postdoc at Marburg University, Germany, he started working on multi-target prediction problems with the main focus on multi-label classification. Currently, his main scientific activity concerns extreme classification, i.e., classification problems with an extremely large number of labels. His articles have been published at premier conferences (ICML, NeurIPS, ECML) and in leading journals (JMLR, MLJ, DAMI) in the field of machine learning. As a co-author he won the best paper award at ECAI 2012 and at ACML 2015. He serves as an Area Chair for ICML, NeurIPS, and ICLR, and as an Action Editor for MLJ.
Krzysztof Geras is an assistant professor at NYU School of Medicine and an affiliated faculty member at NYU Center for Data Science. His main interests are in unsupervised learning with neural networks, model compression, transfer learning, evaluation of machine learning models, and applications of these techniques to medical imaging. He previously completed a postdoc at NYU with Kyunghyun Cho, a PhD at the University of Edinburgh with Charles Sutton, and an MSc as a visiting student at the University of Edinburgh with Amos Storkey. His BSc is from the University of Warsaw. He also completed industrial internships at Microsoft Research (Redmond and Bellevue), Amazon (Berlin), and J.P. Morgan (London).
Marek Cygan is currently an associate professor at the University of Warsaw, leading a newly created Robot Learning group focused on robotic manipulation and computer vision. He is additionally the CTO and co-founder of Nomagic, a startup delivering smart pick-and-place robots for intralogistics applications. Earlier, he did research in various branches of algorithms and held an ERC Starting Grant on the subject.
Piotr Miłoś is an Associate Professor at the Faculty of Mathematics, Mechanics and Computer Science of the University of Warsaw. He received his Ph.D. in probability theory. Since 2016 he has developed an interest in machine learning and has collaborated with deepsense.ai on various research projects. His focus is on problems in reinforcement learning.
Przemysław Biecek obtained his Ph.D. in Mathematical Statistics and MSc in Software Engineering at Wroclaw University of Science and Technology. He is currently working as an Associate Professor at the Faculty of Mathematics and Information Science, Warsaw University of Technology, and an Assistant Professor at the Faculty of Mathematics, Informatics and Mechanics, University of Warsaw.
Razvan Pascanu is a Research Scientist at Google DeepMind, London. He obtained his Ph.D. from the University of Montreal under the supervision of Yoshua Bengio. While in Montreal he was a core developer of Theano. Razvan is also one of the organizers of the Eastern European Machine Learning (EEML) Summer School. He has a wide range of interests around deep learning, including optimization, RNNs, meta-learning, and graph neural networks.
Tomasz Trzciński has been an Associate Professor at Warsaw University of Technology since 2015, where he leads the Computer Vision Lab. He was a Visiting Scholar at Stanford University in 2017 and at Nanyang Technological University in 2019. Previously, he worked at Google in 2013, Qualcomm in 2012, and Telefónica in 2010. He is an Associate Editor of IEEE Access and MDPI Electronics and frequently serves as a reviewer for major computer science conferences (CVPR, ICCV, ECCV, NeurIPS, ICML) and journals (TPAMI, IJCV, CVIU). He is a Senior Member of IEEE and an expert of the National Science Centre and the Foundation for Polish Science. He is Chief Scientist at Tooploox and a co-founder of Comixify, a technology startup focused on using machine learning algorithms for video editing.
Viorica Patraucean is a research scientist at DeepMind. She obtained her PhD from the University of Toulouse on probabilistic models for low-level image processing. She then carried out postdoctoral work at École Polytechnique Paris and the University of Cambridge on the processing of images, videos, and point clouds. Her main research interests revolve around efficient vision systems, with a focus on deep video models. She is one of the main organisers of the EEML summer school and has served as a program committee member for top Computer Vision and Machine Learning conferences.
# ML in PL Association
We are a group of young people who are determined to bring the best of Machine Learning to Central and Eastern Europe by creating a high-quality event for every ML enthusiast. Although we come from many different academic backgrounds, we are united by the common goal of spreading knowledge about the discipline.
Learn more about ML in PL Association