RDMM

Fine-Tuned LLM Models for On-Device Robotic Decision Making with Enhanced Contextual Awareness in Specific Domains

Abstract

Large language models (LLMs) represent a significant advancement in integrating physical robots with AI-driven systems. This research introduces a framework that utilizes RDMM (Robotics Decision-Making Models), which possess the capacity for decision-making within domain-specific contexts, as well as an awareness of their own knowledge and capabilities. The framework leverages this information to enhance the autonomous decision-making of the system, and we showcase its capabilities in the context of a real-world household competition. In contrast to other approaches, our focus is on real-time, on-device solutions that operate successfully on hardware with as little as 8GB of memory. Our framework incorporates visual perception models, equipping robots with an understanding of their environment, and integrates real-time speech recognition, enhancing the human-robot interaction experience. Experimental results demonstrate that the RDMM framework can plan with 93% accuracy. Furthermore, we introduce a new dataset consisting of 27k planning instances, as well as 1.3k text-image annotated samples derived from the competition. The framework, benchmarks, datasets, and models developed in this work are publicly available in our GitHub repository at https://github.com/shadynasrat/RDMM.
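Below is a minimal sketch of how a fine-tuned RDMM checkpoint could be loaded for on-device planning with 4-bit quantization, so that it fits within the 8GB memory budget mentioned above. It assumes the released checkpoints load through the Hugging Face transformers API; the model ID and the prompt format are hypothetical placeholders, so consult the GitHub repository for the actual names.

# Minimal sketch: loading an RDMM checkpoint for on-device inference
# with 4-bit quantization to fit in roughly 8GB of memory.
# The model ID below is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "shadynasrat/RDMM"  # placeholder; see the GitHub repository

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",
)

# Ask the model to decompose a household command into a plan
# (illustrative prompt; the real prompt format may differ).
prompt = "Command: bring the red cup from the kitchen to the living room.\nPlan:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))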

Video

Benchmarks

Benchmark conducted using AI models as evaluators

Accuracy Across Tasks

Benchmark conducted using human evaluation via a survey

On-device inference speed comparison
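As a rough illustration of how an on-device speed number like those in this comparison can be measured, the sketch below times generation for the model and tokenizer loaded in the earlier snippet and reports tokens per second; the prompt is an illustrative placeholder.

# Minimal sketch: measuring on-device generation latency (tokens/sec),
# reusing `model` and `tokenizer` from the loading snippet above.
import time

prompt = "Command: put the apple on the table.\nPlan:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=64)
elapsed = time.perf_counter() - start

# Count only the newly generated tokens, not the prompt tokens.
new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/sec")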

BibTeX

@inproceedings{RDMM,
  author       = {Shady Nasrat and Minseong Jo and Myungsu Kim and Seonil Lee and Jiho Lee and Yeoncheol Jang and Seung-joon Yi},
  title        = {RDMM: Fine-Tuned LLM Models for On-Device Robotic Decision Making with Enhanced Contextual Awareness in Specific Domains},
  booktitle    = {----},
  pages        = {----},
  year         = {2024},
  organization = {----}
}