
Towards Cognitive Digital Twin System of Human-Robot Collaboration Manipulation
  • Xin Li,
  • Bin He,
  • Zhipeng Wang,
  • Yanmin Zhou,
  • Gang Li
Corresponding Author: Xin Li ([email protected])

Abstract

Multielement decision-making is crucial for the robust deployment of human-robot collaboration (HRC) systems in flexible manufacturing environments with personalized tasks and dynamic scenes. Large Language Models (LLMs) have recently demonstrated remarkable reasoning capabilities in various robotic tasks and could potentially provide this ability. However, applying LLMs to actual HRC systems requires timely and comprehensive capture of real-scene information. In this study, we propose incorporating real scene data into LLMs using digital twin (DT) technology and present a cognitive digital twin prototype system for HRC manipulation, termed HRC-CogiDT. Specifically, we first construct a scene semantic graph that encodes the geometric information of entities, the spatial relationships between entities, the actions of humans and robots, and collaborative activities. We then devise a prompt that merges scene semantics with activity priors, linking the real scene with LLMs. To evaluate performance, we compile an HRC scene-understanding dataset and set up a laboratory-level experimental platform. Empirical results indicate that HRC-CogiDT can swiftly perceive scene changes and make high-level decisions according to varying task requirements, such as task planning, anomaly detection, and schedule reasoning. This study provides promising implications for future applications of LLMs in robotics.
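To make the two steps described above concrete, the sketch below illustrates one possible way to represent a scene semantic graph and assemble it, together with an activity prior, into an LLM prompt. It is a minimal, hypothetical Python example: the entity names, graph fields, and prompt template are illustrative assumptions, not the authors' actual HRC-CogiDT implementation.

```python
# Hypothetical sketch: a toy scene semantic graph and a prompt that merges
# scene semantics with an activity prior. All names and fields are assumed.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    geometry: dict  # e.g. a bounding box or pose describing the entity

@dataclass
class SceneSemanticGraph:
    entities: list = field(default_factory=list)    # Entity instances
    relations: list = field(default_factory=list)   # (subject, relation, object) triples
    actions: list = field(default_factory=list)     # observed human/robot actions
    activity: str = ""                               # current collaborative activity

    def to_text(self) -> str:
        """Serialize the graph into a textual scene description for the prompt."""
        lines = [f"Entities: {', '.join(e.name for e in self.entities)}"]
        lines += [f"Relation: {s} {r} {o}" for s, r, o in self.relations]
        lines += [f"Action: {a}" for a in self.actions]
        lines.append(f"Activity: {self.activity}")
        return "\n".join(lines)

def build_prompt(graph: SceneSemanticGraph, activity_prior: str, query: str) -> str:
    """Merge scene semantics with an activity prior into a single LLM prompt."""
    return (
        "You are the reasoning module of an HRC digital twin.\n"
        f"Activity prior:\n{activity_prior}\n\n"
        f"Current scene:\n{graph.to_text()}\n\n"
        f"Task: {query}"
    )

if __name__ == "__main__":
    graph = SceneSemanticGraph(
        entities=[Entity("gearbox", {"bbox": [0.2, 0.1, 0.3, 0.2]}),
                  Entity("screwdriver", {"bbox": [0.5, 0.1, 0.1, 0.3]})],
        relations=[("screwdriver", "on", "workbench")],
        actions=["human reaches for screwdriver"],
        activity="gearbox assembly",
    )
    prior = "Gearbox assembly: place housing, insert gears, fasten cover screws."
    print(build_prompt(graph, prior, "Plan the robot's next action."))
```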
08 Mar 2024: Submitted to TechRxiv
14 Mar 2024: Published in TechRxiv