BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Centre for AI in Assistive Autonomy - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://assistive-autonomy.ed.ac.uk
X-WR-CALDESC:Events for Centre for AI in Assistive Autonomy
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20240331T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20241027T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260317T110000
DTEND;TZID=Europe/London:20260317T120000
DTSTAMP:20260427T155722Z
CREATED:20260322T081546Z
LAST-MODIFIED:20260322T083814Z
UID:281593-1773745200-1773748800@assistive-autonomy.ed.ac.uk
SUMMARY:Explicit Inductive Biases for Efficient Modelling and Representation Learning | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: Inductive biases are assumptions about the world that are encoded into models to help them learn and generalise better. For computational modelling of perception (vision\, language\, etc.)\, inductive biases usually come from human perception and cognition\, with the aim of driving human-like learning from (relatively) less data than is now typical. This talk covers two recent lines of enquiry into building more powerful inductive biases into computational models. The first explores the use of explicit compositionality to leverage structural biases for representation learning in language\, and the second explores the use of discrete variables and autoencoding to help learn more efficient diffusion models. Together\, these are intended to facilitate exploration of a class of models that can induce compositional structure to learn better models of data.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation.\n\nPresenter: Siddhartha is a Reader at the School of Informatics.
URL:https://assistive-autonomy.ed.ac.uk/event/explicit-inductive-biases-for-efficient-modelling-and-representation-learning-reading-group/
LOCATION:Informatics Forum (G.03)\, 10 Crichton Street\, Edinburgh\, Midlothian\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2026/03/reading-group-banners-sid.png
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260217T110000
DTEND;TZID=Europe/London:20260217T120000
DTSTAMP:20260427T155722Z
CREATED:20260322T005203Z
LAST-MODIFIED:20260322T084306Z
UID:281589-1771326000-1771329600@assistive-autonomy.ed.ac.uk
SUMMARY:Effective design of graphics in (robotics) research | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: In this tutorial\, Matias will present a methodology for a painful but necessary part of writing papers: making figures. He will present a five-step workflow that streamlines the process and reduces the decisions we need to make when facing this challenge\, hopefully saving precious time while also improving the graphical quality of our manuscripts. The tutorial will cover basic concepts for each step and provide concrete examples from robotics papers\, though the ideas should apply equally to other fields such as AI or computer vision. The slides and companion code will be shared after the presentation.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation.\n\nPresenter: Matias Mattamala is a Research Associate.
URL:https://assistive-autonomy.ed.ac.uk/event/matias-mattamala-reading-group/
LOCATION:Informatics Forum (G.03)\, 10 Crichton Street\, Edinburgh\, Midlothian\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2026/03/reading-group-banners-matias.png
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20251125T110000
DTEND;TZID=Europe/London:20251125T120000
DTSTAMP:20260427T155722Z
CREATED:20251031T170021Z
LAST-MODIFIED:20251105T135404Z
UID:281484-1764068400-1764072000@assistive-autonomy.ed.ac.uk
SUMMARY:AI co-pilot bronchoscope robot | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: Sal introduces an AI-assisted bronchoscope robot that enables novice doctors to perform safe\, expert-level lung examinations\, addressing global disparities in access to skilled bronchoscopic care. Please find the relevant paper here.\n\nVideo: To be added after the presentation!\n\nPresentation notes: To be added after the presentation!\n\nPresenter: Salvatore Esposito is a Research Fellow.
URL:https://assistive-autonomy.ed.ac.uk/event/281484/
LOCATION:Informatics Forum (G.03)\, 10 Crichton Street\, Edinburgh\, Midlothian\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/10/reading-group-banners-1.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20251111T110000
DTEND;TZID=Europe/London:20251111T120000
DTSTAMP:20260427T155722Z
CREATED:20251031T162724Z
LAST-MODIFIED:20260322T084940Z
UID:281424-1762858800-1762862400@assistive-autonomy.ed.ac.uk
SUMMARY:Modelling Assistive Interaction | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: What do we mean when we say we need to design assistive agents that help us in our daily activities? In this talk\, Rim will aim to answer this question by surveying several seminal works and recent advances in designing assistive agents\, stemming from decision-making and information theory. Please find the JAIR paper and arXiv paper here.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation.\n\nPresenter: Rimvydas Rubavicius is a Research Associate.
URL:https://assistive-autonomy.ed.ac.uk/event/modelling-assistive-interaction-reading-group/
LOCATION:Informatics Forum (G.03)\, 10 Crichton Street\, Edinburgh\, Midlothian\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/10/reading-group-banners.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20251028T110000
DTEND;TZID=Europe/London:20251028T120000
DTSTAMP:20260427T155722Z
CREATED:20250818T112356Z
LAST-MODIFIED:20251111T152545Z
UID:281218-1761649200-1761652800@assistive-autonomy.ed.ac.uk
SUMMARY:Integrated Task and Motion Planning | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: In this talk\, Emanuele will discuss a class of TAMP problems and survey algorithms for solving them\, characterising the solution methods in terms of their strategies for solving the continuous-space subproblems and their techniques for integrating the discrete and continuous components of the search. Please find the relevant paper here.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation and 📄 Repository with the accompanying code.\n\nPresenter: Emanuele De Pellegrin is a Research Associate.
URL:https://assistive-autonomy.ed.ac.uk/event/integrated-task-and-motion-planning-reading-group/
LOCATION:Informatics Forum (MF2)\, 10 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/08/reading-group-banners.png
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20251014T110000
DTEND;TZID=Europe/London:20251014T120000
DTSTAMP:20260427T155722Z
CREATED:20250806T140720Z
LAST-MODIFIED:20251111T152748Z
UID:280893-1760439600-1760443200@assistive-autonomy.ed.ac.uk
SUMMARY:Bayesian Object Models for Robotic Interaction with Differentiable Probabilistic Programming | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: In this talk\, Matias will explore a paper that introduces Bayesian Object Models (BOMs)\, a framework enabling robots to build rich\, uncertainty-aware models of unseen objects from limited interaction. These models capture both structural and dynamic object properties using a differentiable probabilistic program. By combining a tree-structure sampler with a physics engine\, BOMs enable efficient gradient-based Bayesian inference\, outperforming recent neural and physics-based alternatives. Please find the relevant paper here.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation.\n\nPresenter: Matias Mattamala is a Research Associate in Human-Robot Interaction.
URL:https://assistive-autonomy.ed.ac.uk/event/bayesian-object-models-for-robotic-interaction-with-differentiable-probabilistic-programming/
LOCATION:Informatics Forum (G.03)\, 10 Crichton Street\, Edinburgh\, Midlothian\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/08/reading-group-banners-1_page-0004-scaled.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20250930T110000
DTEND;TZID=Europe/London:20250930T120000
DTSTAMP:20260427T155722Z
CREATED:20250806T135622Z
LAST-MODIFIED:20250811T162652Z
UID:280888-1759230000-1759233600@assistive-autonomy.ed.ac.uk
SUMMARY:Physically Assistive Robots: A Systematic Review of Mobile and Manipulator Robots That Physically Assist People with Disabilities | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: In this talk\, Joshua will discuss a paper that reviews recent progress in physically assistive robots developed to help individuals with disabilities perform everyday tasks such as moving\, eating\, and personal care. As these robots become safer\, more capable\, and more affordable\, real-world deployment is increasing. The paper highlights major research trends in interaction methods\, autonomy\, and adaptability\, and outlines frameworks and future directions for the field. Please find the relevant paper here.\n\nVideo: To be added after the presentation!\n\nPresentation notes: To be added after the presentation!\n\nPresenter: Joshua Giles is a Research Associate in Human-Centred Artificial Intelligence for Assistive Technology.
URL:https://assistive-autonomy.ed.ac.uk/event/physically-assistive-robots-a-systematic-review-of-mobile-and-manipulator-robots-that-physically-assist-people-with-disabilities/
LOCATION:Bayes Centre (Bayes Theorem (G.03))\, 47 Potterrow\, Edinburgh\, EH8 9BT\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/08/reading-group-banners-1_page-0003-scaled.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20250916T110000
DTEND;TZID=Europe/London:20250916T120000
DTSTAMP:20260427T155723Z
CREATED:20250806T133113Z
LAST-MODIFIED:20251111T152721Z
UID:280881-1758020400-1758024000@assistive-autonomy.ed.ac.uk
SUMMARY:Cognitive Science as a Source of Forward and Inverse Models of Human Decisions for Robotics and Control | Reading Group
DESCRIPTION:Welcome to the reading group presentation!\n\nPaper summary: In this presentation\, Manisha will present a paper which explores how computational cognitive science helps us understand human decision-making using tools such as probability theory\, reinforcement learning\, and statistical modeling. It reviews models that explain both how people make decisions (forward models) and how they reason about others’ decisions (inverse models). The authors highlight recent progress in integrating black-box learning with theory-driven approaches and reframe heuristics as rational strategies under cognitive constraints. The work bridges cognitive science with control and optimization perspectives. Please find the relevant paper here.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation material as 📄 Presentation\, 📄 Notebook for the forward model\, and 📄 Notebook for the inverse model.\n\nPresenter: Manisha Dubey is a Research Associate in Generative Modelling of Human Behaviour.
URL:https://assistive-autonomy.ed.ac.uk/event/cognitive-science-as-a-source-of-forward-and-inverse-models-of-human-decisions-for-robotics-and-control/
LOCATION:Informatics Forum (MF2)\, 10 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/08/reading-group-banners-1_page-0002-scaled.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20250902T110000
DTEND;TZID=Europe/London:20250902T120000
DTSTAMP:20260427T155723Z
CREATED:20250806T124320Z
LAST-MODIFIED:20251111T152156Z
UID:280347-1756810800-1756814400@assistive-autonomy.ed.ac.uk
SUMMARY:π0: A Vision-Language-Action Flow Model for General Robot Control | Reading Group
DESCRIPTION:Welcome to the first reading group presentation!\n\nPaper summary: In this presentation\, Yuhui will present a paper on building generalist robot policies using a vision-language foundation model. The authors propose a flow-matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge. Trained on a large\, varied dataset\, the model can follow language instructions\, perform tasks in a zero-shot setting\, and learn new skills via fine-tuning. Please find the relevant paper here.\n\nVideo: Coming soon!\n\nPresentation notes: View the presentation with notes as 📄 Presentation and 📄 Presentation Notes.\n\nPresenter: Yuhui Wan is a Research Associate in Autonomy for Surgical Robots.
URL:https://assistive-autonomy.ed.ac.uk/event/%cf%800-a-vision-language-action-flow-model-for-general-robot-control-reading-group/
LOCATION:Bayes Centre (Bayes Theorem (G.03))\, 47 Potterrow\, Edinburgh\, EH8 9BT\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://assistive-autonomy.ed.ac.uk/wp-content/uploads/2025/08/reading-group-banners-1_page-0001-scaled.jpg
ORGANIZER;CN="Centre for AI in Assistive Autonomy":MAILTO:info@assistive-autonomy.ed.ac.uk
END:VEVENT
END:VCALENDAR