Advancing HRI Research and Benchmarking Through Open-Source Ecosystems

2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI) Workshop

This full-day workshop was held during the Human-Robot Interaction (HRI) 2023 conference on March 13, 2023 in Stockholm, Sweden.

Abstract


Recent rapid progress in HRI research makes it more crucial than ever to have systematic development and benchmarking methodologies to assess and compare different algorithms and strategies. The lack of such methodologies results in inefficiencies and sometimes stagnation, since new methods cannot be effectively compared to prior work and research gaps become challenging to identify. Moreover, the lack of an active and effective mechanism to disseminate and utilize available datasets and benchmarking protocols significantly reduces their impact and utility.


A unified effort in the development, utilization, and dissemination of open-source assets amongst a governed community of users can advance these domains substantially; for HRI, this is particularly needed in the curation and generation of datasets for benchmarking. This workshop will take a step towards removing the roadblocks to the development and assessment of HRI by reviewing, discussing, and laying the groundwork for an open-source ecosystem at the intersection of HRI and robot manipulation. The workshop will play a crucial role in identifying the preconditions and requirements for developing an open-source ecosystem that provides open-source assets for HRI benchmarking and comparison, aiming to determine the needs and wants of HRI researchers. Invited speakers include researchers who have contributed to the development of open-source assets in HRI and robot manipulation, and discussion topics will include issues related to the usage of open-source assets and the benefits of forming an open-source ecosystem.

Key Takeaways

Based on the discussions held at the workshop, a set of key takeaways has been summarized and organized into the topics below:

HRI vs. other domains

Open-source development and maturation

Simulations for benchmarking

Benchmarking and competitions

Sustainability

Ensuring relevance

Standards

Replicability and generalizability

Improvement efforts

Overview

Presentations and guided discussion will take place across two categories:


After the presentations and Q&A in each category session are completed, a guided discussion will be facilitated amongst the workshop participants.

The following topics, among others, will be put forth to motivate these discussions:


The workshop will be hybrid, with a focus on in-person participation, but a virtual option will be available for remote attendees to watch presentations and participate in discussion. A Slack workspace is being established for discussions and coordination before, during, and after the workshop, to serve as an open communication platform for the open-source ecosystem.

Speakers

Danica Kragic

Royal Institute of Technology (KTH)

Andrea Cavallaro

Idiap Research Institute

Harold Soh

National University of Singapore

Shelly Bagchi

National Institute of Standards and Technology (NIST)

Henny Admoni

Carnegie Mellon University

Tapo Bhattacharjee

Cornell University

Sonia Chernova

Georgia Institute of Technology

You!

Consider contributing to this workshop! See below

Schedule

Invited talks: 30 minutes each (20 presentation + 10 questions)

Submitted talks: 15 minutes each (10 presentation + 5 questions)

Discussion: 30 minutes each


All times given are in Central European Time (CET; UTC+01:00)


A YouTube playlist of the workshop presentations can be found here: https://www.youtube.com/playlist?list=PLfUzSIwyYwvWnBbCouga8SoIMsHlALnEj


Introduction


Human Factors in Benchmarking and Dataset Generation


Lunch break


Open-source and Human-Robot Interaction

Participation

Join the COMPARE project Slack workspace, channel #hri-2023-workshop, to participate in discussions before, during, and after the workshop: https://join.slack.com/t/compare-ecosystem/shared_invite/zt-1nfgdwq4z-_8_PsXVhJ6H1FAZuQizjTA

Contributions (CLOSED)


We invite short papers that discuss issues faced, successes achieved, and/or analyses of the current landscape of robotic manipulation and HRI when developing or utilizing open-source assets. Submissions may be position papers, proposals for new efforts, or reports of new results, with the expectation that authors of accepted papers will present at the workshop (in-person or remotely) and participate in topic discussions.


Submissions should use the HRI 2023 format and be 2-4 pages in length (excluding references); anonymization is not required. Contributed papers should fit into one or more of the workshop topics: human factors in benchmarking; open-source benchmarking protocols and datasets; the availability, composition, and applicability (or lack thereof) of open-source assets; the benefits of open-source; and barriers to implementation, among others.


All submissions will be reviewed, and authors of accepted papers will be asked to give a 10-minute talk at the workshop. At least one author of each accepted submission must register for the workshop.

Submissions should be e-mailed to adam_norton@uml.edu with the text “[HRI 2023 Workshop Submission]” in the subject line. 

Organizers

Contact

Please contact Adam Norton with any questions or comments via e-mail: adam_norton@uml.edu 

Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Award TI-2229577