Centre for Equitable Library Access
Public library service for Canadians with print disabilities

Learning-from-Observation 2.0: Automatic Acquisition of Robot Behavior from Human Demonstration (Synthesis Lectures on Computer Vision)

By Katsushi Ikeuchi, Naoki Wake, Jun Takamatsu, Kazuhiro Sasabuchi

Computers and internet, Science and technology

Synthetic audio, Automated braille

Summary

This book presents recent breakthroughs in the field of Learning-from-Observation (LfO) resulting from advances in large language models (LLMs) and reinforcement learning (RL), and positions them in the context of historical developments in the area. LfO involves observing human behaviors and generating robot actions that mimic these behaviors. While LfO may appear similar on the surface to Imitation Learning (IL) in the machine learning community and Programming-by-Demonstration (PbD) in the robotics community, a significant difference is that those methods directly imitate human hand movements, whereas LfO encodes human behaviors into abstract representations and then maps these representations onto the currently available hardware (individual body) of the robot, thus mimicking them indirectly. This indirect imitation absorbs changes in the surrounding environment and differences in robot hardware. In addition, passing through the abstract representation acts as a filter, distinguishing important from less important aspects of human behavior and enabling imitation from fewer and less demanding demonstrations.

The authors have been researching the LfO paradigm for the past decade or so. Previously, the focus was primarily on designing necessary and sufficient task representations to define specific task domains such as assembly of machine parts, knot-tying, and human dance movements. Recent advances in Generative Pre-trained Transformers (GPT) and RL have led to groundbreaking developments in methods to obtain and map these abstract representations: GPT makes it possible to generate abstract representations automatically from videos, and RL-trained agent libraries make implementing the corresponding robot actions more feasible.
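
To make the indirect-imitation idea in the summary concrete, here is a minimal Python sketch of the two stages it describes: encoding a demonstration into hardware-agnostic task primitives, then mapping those primitives onto whatever skills a particular robot provides. All names, the primitive vocabulary, and the numbers are hypothetical illustrations for this catalog entry, not the authors' actual representation or API.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TaskPrimitive:
    """One abstract step recognized from a human demonstration."""
    name: str                 # e.g. "grasp", "move", "release" (hypothetical vocabulary)
    params: Dict[str, float]  # e.g. target position, grasp width


def encode_demonstration(video_frames: List) -> List[TaskPrimitive]:
    """Observation stage (stand-in): a real system would run perception and
    GPT-style models here to turn video into an abstract task sequence."""
    return [
        TaskPrimitive("grasp", {"x": 0.4, "y": 0.1, "width": 0.03}),
        TaskPrimitive("move", {"x": 0.6, "y": 0.2}),
        TaskPrimitive("release", {}),
    ]


def map_to_robot(plan: List[TaskPrimitive],
                 skills: Dict[str, Callable[[Dict[str, float]], None]]) -> None:
    """Mapping stage: the same abstract plan can be replayed on any robot that
    supplies an implementation ("skill") for each primitive, rather than
    copying the demonstrator's joint trajectories directly."""
    for step in plan:
        skills[step.name](step.params)


if __name__ == "__main__":
    # A toy "robot" whose skills just print what they would do.
    demo_skills = {
        "grasp": lambda p: print(f"grasp at {p}"),
        "move": lambda p: print(f"move to {p}"),
        "release": lambda p: print("release"),
    }
    map_to_robot(encode_demonstration(video_frames=[]), demo_skills)

Because the plan is expressed in primitives rather than joint angles, swapping demo_skills for a different robot's skill set leaves the encoding stage untouched, which is the portability the summary attributes to indirect imitation.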

Title Details

ISBN: 9783032034458
Publisher: Springer Nature Switzerland
Copyright date: 2026
Book number: 6899285