Inductive biases are assumptions about the world that are encoded into models to help them learn and generalise better. For computational modelling of perception (vision, language, etc.), inductive biases usually come from human perception and cognition, with the aim of driving human-like learning and learning from (relatively) less data than is now typical. This talk covers two recent lines of enquiry into building more powerful inductive biases into computational models. The first explores the use of explicit compositionality to leverage structural biases for representation learning in language, and the second explores the use of discrete variables and autoencoding to help learn more efficient diffusion models. Together, these are intended to facilitate exploration of a class of models that can induce compositional structure to learn better models of data.