Advancing HRI Research and Benchmarking Through Open-Source Ecosystems
2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI) Workshop
This is a full-day workshop to be held during the Human-Robot Interaction (HRI) 2023 conference on March 13, 2023 in Stockholm, Sweden.
Survey on Open-Source and Benchmarking for Robotics
We are inviting you to take a survey to provide feedback on the current state of open-source assets and benchmarking resources for robotics, and future activities for improvement. Participation in this study is completely voluntary. You will not be compensated for filling out the survey. Completing the survey should take no more than 15 minutes. To be eligible for this study you must be ≥18 years of age and able to read and write in English.
If you are interested in participating, you can access the survey by clicking here!
Recent rapid progress in HRI research makes it more crucial than ever to have systematic development and benchmarking methodologies to assess and compare different algorithms and strategies. Indeed, the lack of such methodologies results in inefficiencies and sometimes stagnation, since new methods cannot be effectively compared to prior work and the research gaps become challenging to identify. Moreover, the lack of an active and effective mechanism to disseminate and utilize available datasets and benchmarking protocols significantly reduces their impact and utility.
A unified effort in the development, utilization, and dissemination of open-source assets amongst a governed community of users can advance these domains substantially; for HRI, this is particularly needed in the curation and generation of datasets for benchmarking. This workshop will take a step towards removing the roadblocks to the development and assessment of HRI by reviewing, discussing, and laying the groundwork for an open-source ecosystem at the intersection of HRI and robot manipulation. The workshop will play a crucial role in identifying the preconditions and requirements for developing an open-source ecosystem that provides open-source assets for HRI benchmarking and comparison, aiming to determine the needs and wants of HRI researchers. Invited speakers include those who have contributed to the development of open-source assets in HRI and robot manipulation, and discussion topics will include issues related to the usage of open-source assets and the benefits of forming an open-source ecosystem.
Presentations and guided discussion will take place across two categories:
Human Factors in Benchmarking and Dataset Generation: best practices to generate datasets and benchmarking protocols that accommodate variations in human inputs and allow for systematic comparison between different algorithms and platforms.
Open-source and Human-Robot Interaction: perspectives on the current state of open-source to support human-robot interaction research, examples of successful implementations, and lessons learned to improve the ecosystem.
After the presentations and Q&A in each category session are completed, a guided discussion will be facilitated amongst the workshop participants.
The following topics, among others, will be put forth to motivate these discussions:
Availability: What open-source assets are available? What types of assets are there too many or too few of? How is the availability of these assets promoted, or how should it be?
Composition: What formats or structures of open-source assets are used? What characteristics are they missing and which are unnecessary?
Applicability: Are the open-source assets and experimentation practices reviewed applicable to your research? What use cases would they be applicable to? Are there particular domains or applications that would benefit greatly from open-source assets?
Benefits: What are the benefits of having this open-source asset available? How do you use it for your own work? Are there missing features that would provide greater benefit to you or others?
Implementation: What are the barriers to using open-source assets for HRI experimentation? Are there existing instructions and documentation that assist in implementation, or are these features lacking? What level of support is desired to ease implementation?
The workshop will be hybrid, with a focus on in-person participation, but a virtual option will be available for remote attendees to watch presentations and participate in discussion. A Slack workspace is being established for discussions and coordination before, during, and after the workshop, serving as an open communication platform for the open-source ecosystem.
Royal Institute of Technology (KTH)
Idiap Research Institute
National University of Singapore
National Institute of Standards and Technology (NIST)
Carnegie Mellon University
Georgia Institute of Technology
Consider contributing to this workshop! See below
Join the COMPARE project Slack workspace, channel #hri-2023-workshop, to participate in discussions before, during, and after the workshop: https://join.slack.com/t/compare-ecosystem/shared_invite/zt-1nfgdwq4z-_8_PsXVhJ6H1FAZuQizjTA
Invited talks: 30 minutes each (20 presentation + 10 questions)
Submitted talks: 15 minutes each (10 presentation + 5 questions)
Discussion: 30 minutes each
All times given are in Central European Time (CET; UTC+01:00)
A YouTube playlist of the workshop presentations can be found here: https://www.youtube.com/playlist?list=PLfUzSIwyYwvWnBbCouga8SoIMsHlALnEj
9:15 Introduction of workshop participants
9:25 Current State of Open-source Robot Manipulation Landscape and User Experiences for HRI, Adam Norton [presentation | youtube]
Human Factors in Benchmarking and Dataset Generation
9:45 Developing Datasets and Benchmarks for Social Navigation, Henny Admoni [presentation | youtube]
10:15 Benchmarking Human-Robot Handovers, Andrea Cavallaro [presentation | youtube]
10:45 Democratizing Robotic Caregiving through Human-Centered Platforms and Datasets, Tapo Bhattacharjee [presentation | youtube]
11:15 Coffee break
11:30 Submitted talks
Preserving HRI Capabilities: Physical, Remote and Simulated Modalities in the SciRoc 2021 Competition, Vincenzo Suriani [paper | presentation | youtube]
Towards an Open Source Library and Taxonomy of Benchmark Usecase Scenarios for Trust-Related HRI Research, Peta Masters and Victoria Young [paper | presentation | youtube]
12:00 Improving Research Transference to the Real World by Developing Standards & Recommended Practices for HRI, Shelly Bagchi [presentation | youtube]
12:30 Discussion on benchmarking and replication [notes]
1:00 Lunch break
Open-source and Human-Robot Interaction
2:30 Our Open-Source Adventures in Human-Robot Interaction, Harold Soh
3:00 Humans for Robots and Robots for Humans, Danica Kragic and Marco Moletta [presentation | youtube]
3:30 Big Data Benchmarking in HRI: Challenges and Opportunities, Sonia Chernova [presentation]
4:00 Coffee break
4:30 Submitted talks
The Need to Simplify Open-Source Real-time Systems for Human-Robot Interaction, Christopher K. Fourie [paper | presentation | youtube]
ROS4HRI: Standardising an Interface for Human-Robot Interaction, Racquel Ros [paper | presentation | youtube]
Applicability of Open-Source Tools in Robot-Assisted Reinforcement Learning-based QWriter system, Zhansaule Telisheva [paper]
5:15 Discussion on next steps for an open-source ecosystem [notes]
6:00 Workshop end
We seek short papers that discuss issues faced, successes achieved, and/or analyses of the current landscape of robotic manipulation and HRI when developing or utilizing open-source assets. Submissions may be in the form of position papers, proposals for new efforts, or reports of new results, with the expectation that authors of accepted papers will present at the workshop (in-person or remotely) and participate in topic discussions.
Submissions should use the HRI 2023 format, 2-4 pages in length (excluding references); anonymization is not required. Contributed papers should fit into one or more of the following workshop topics: human factors in benchmarking, open-source benchmarking protocols and datasets, the availability of open-source assets, their composition, their applicability or lack thereof, the benefits of open-source, and barriers to implementation, among others.
All submissions will be reviewed, and authors of accepted papers will be asked to give a 10-minute talk at the workshop. At least one author of each accepted submission must register for the workshop.
December 5, 2022: Call for submissions open
January 13, 2023, 23:59 Anywhere on Earth (AoE): Early submission deadline for short papers to ensure decision by HRI 2023 early registration deadline (January 20)
January 19, 2023: Notification of acceptance of early workshop submissions
February 1, 2023, 23:59 AoE: Submission deadline for short papers
February 10, 2023: Notification of acceptance for workshop submissions
Submissions should be e-mailed to firstname.lastname@example.org with the text “[HRI 2023 Workshop Submission]” in the subject line. Authors of accepted submissions will be offered the option of having their papers uploaded to a workshop-specific archive on arXiv.org. Inclusion in this archive will not be mandatory since it may create problems for authors who wish to submit follow-on work to venues with strict prior publication rules.
Adam Norton, University of Massachusetts Lowell
Holly Yanco, University of Massachusetts Lowell
Berk Calli, Worcester Polytechnic Institute
Aaron Dollar, Yale University
Funded by the National Science Foundation, Pathways to Enable Open-Source Ecosystems (POSE), Award TI-2229577