<h2>Learning</h2>
<div class="image">[[Image:GFlowNets and AI.png|150px|link=|GFlowNets and AI for Science]]</div>
<h3 style="text-decoration:none;">[https://www.youtube.com/watch?v=zbRyVLtcCGI GFlowNets and AI for Science presentation - Princeton AI Club]</h3>
<p class="author">Prof. Yoshua Bengio</p>
<p>(In English) Machine learning research is expanding its reach beyond the traditional realm of the tech industry and into the activities of other scientists, opening the door to truly transformative advances in these disciplines. In this talk I will focus on two aspects, modeling and experimental design, that are intertwined in the theory-experiment-analysis active learning loop that constitutes a core element of the scientific methodology. Computers will be necessary to go beyond the current, purely manual research loop and take advantage of high-throughput experimental setups and large-scale experimental datasets. I will introduce a novel machine learning framework called GFlowNets (for “Generative Flow Networks”), related to reinforcement learning, generative modeling and variational methods, and conceived as an ML-driven replacement for MCMC. GFlowNets were first used to propose a highly diverse set of molecular candidates and were then incorporated into an active learning framework for efficiently searching for molecules with desirable properties. More recently, we have been exploring how GFlowNets can generate not just molecular graphs but also causal graphs and Bayesian posterior distributions in function space. I will describe our research program to build on these bases and develop machine learning methodologies for efficiently exploring the space of causal theories as well as the space of experiments, while characterizing the ambiguities left by finite datasets and non-identifiability, as well as our plans to apply these tools in areas of great societal need such as the unmet challenge of antimicrobial resistance.</p>
<br>
<br>
<br>
<div class="image">[[Image:BengioYT.jpg|150px|link=|GFlowNets, Consciousness & Causality]]</div>
<h3 style="text-decoration:none;">[https://www.youtube.com/watch?v=zbRyVLtcCGI GFlowNets, Consciousness & Causality - Machine Learning Street Talk]</h3>
<p class="author">Prof. Yoshua Bengio</p>
<p>(In English) For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of Machine Learning today. He believes they can solve previously intractable problems and hold the key to unlocking machine abstract reasoning itself. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them.</p>
<br>
<br>
<br>
<div class="image">[[Image:Learning Machines Seminar.png|150px|link=|Learning Machines Seminar]]</div>
<h3 style="text-decoration:none;">[https://www.youtube.com/watch?v=K8LNtTUsiMI&t Learning Machines Seminar: Yoshua Bengio (Université de Montreal)]</h3>
<p class="author">Prof. Yoshua Bengio</p>
<p>(In English) How can what has been learned on previous tasks generalize quickly to new tasks or changes in distribution? The study of conscious processing in human brains (and the window into it given by natural language) suggests that we are able to decompose high-level verbalizable knowledge into reusable components (roughly corresponding to words and phrases). This has stimulated research in modular neural networks, where attention mechanisms can be used to dynamically select which modules should be brought to bear in a given new context. Another source of inspiration for tackling this challenge is the body of research into causality, where changes in tasks and distributions are viewed as interventions. The crucial insight is that we need to learn to separate (somewhat as in meta-learning) what is stable across changes in distribution, environments or tasks from what may be specific to each of them or changing in non-stationary ways over time. From a causal perspective, what is stable are the reusable causal mechanisms, along with the inference machinery for making probabilistic guesses about the appropriate combination of mechanisms (possibly represented as a graph) in a particular new context. What may change over time are the interventions and other random variables, which are those tied most directly to the observations. If interventions are not observed (we do not have labels fully explaining the changes in tasks in terms of the underlying modules and causal variables), we would ideally like to estimate the Bayesian posterior over the interventions, given whatever is observed. This research approach raises many interesting questions, ranging from Bayesian inference and identifiability to causal discovery, representation learning, and out-of-distribution generalization and adaptation, which will be discussed in the presentation.</p>
<br>
<br>
<br>
<div class="image">[[Image:Discover-Data-Series-art.png|150px|link=|Discover Data series]]</div>
 
<div class="image">[[Image:Discover-Data-Series-art.png|150px|link=|Discover Data series]]</div>
 
<h3 style="text-decoration:none;">[https://www.csps-efpc.gc.ca/discover-series/data-eng.aspx Digital Academy's Discover Series: Discover Data]</h3>
 
<h3 style="text-decoration:none;">[https://www.csps-efpc.gc.ca/discover-series/data-eng.aspx Digital Academy's Discover Series: Discover Data]</h3>