Publication:
FROG: a new people detection dataset for knee-high 2D range finders

dc.contributor.author: Amodeo Zurbano, Fernando
dc.contributor.author: Pérez Higueras, Noé
dc.contributor.author: Merino, Luis
dc.contributor.author: Caballero, Fernando
dc.date.accessioned: 2025-10-21T10:47:11Z
dc.date.available: 2025-10-21T10:47:11Z
dc.date.issued: 2025-10-20
dc.description: The author(s) declare that financial support was received for the research and/or publication of this article. FA is supported by the predoctoral grant PRE2022-105119 as part of the INSERTION project (PID2021-127648OB-C31), funded by Ministerio de Ciencia e Innovación. This work is partially supported by the project PICRAH4.0 (PLEC2023-010353), funded by the Transmisiones 2023 programme of the Ministerio de Ciencia e Innovación, and by the project NORDIC (TED2021-132476B-I00), funded by MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/"PRTR".
dc.description: Research projects: TED2021-132476B-I00
dc.description.abstract: Mobile robots require knowledge of their environment, especially of the humans in their vicinity. While the most common approaches to detecting humans involve computer vision, an often overlooked hardware feature for people detection is the robot's 2D range finder. These sensors were originally intended for obstacle avoidance and mapping/SLAM tasks. In most robots they are conveniently mounted at a height roughly between the ankle and the knee, so they can also be used for detecting people, with a larger field of view and better depth resolution than cameras. In this paper, we present FROG, a new dataset for people detection using knee-high 2D range finders. This dataset offers higher laser resolution, a higher scanning frequency, and more complete annotation data than existing datasets such as DROW (Beyer et al., 2018). In particular, the FROG dataset contains annotations for 100% of its laser scans (unlike DROW, which annotates only 5%), 17x more annotated scans, 100x more people annotations, and over twice the distance traveled by the robot. We propose a benchmark based on the FROG dataset and analyze a collection of state-of-the-art people detectors based on 2D range finder data. We also propose and evaluate a new end-to-end deep learning approach for people detection. Our solution works directly on the raw sensor data (with no hand-crafted input features), thus avoiding CPU preprocessing and relieving the developer of having to understand domain-specific heuristics. Experimental results show that the proposed people detector attains results comparable to the state of the art, while an optimized implementation for ROS can operate at more than 500 Hz.
dc.description.sponsorship: Service Robotics Lab
dc.description.sponsorship: Universidad Pablo de Olavide
dc.format.mimetype: application/pdf
dc.identifier.citation: Frontiers in Robotics and AI, Volume 12 - 2025
dc.identifier.doi: 10.3389/frobt.2025.1671673
dc.identifier.uri: https://hdl.handle.net/10433/24913
dc.language.iso: en
dc.publisher: Frontiers
dc.relation.projectID: PID2021-127648OB-C31
dc.relation.projectID: PLEC2023-010353
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Human-aware robotics
dc.subject: 2D LIDAR
dc.subject: People detection
dc.subject: Dataset
dc.subject: ROS
dc.subject: Benchmark
dc.subject: Deep learning
dc.title: FROG: a new people detection dataset for knee-high 2D range finders
dc.type: journal article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
relation.isAuthorOfPublication: 9aeaf435-6c2f-42a2-9764-56d85e185bd7
relation.isAuthorOfPublication: c280da0b-63c4-4627-98bb-8b1e4589ef77
relation.isAuthorOfPublication: 021f43bc-c25f-40dd-9ac1-0fc2933e7071
relation.isAuthorOfPublication: 144853bd-af99-4072-840b-71bdd0b94309
relation.isAuthorOfPublication.latestForDiscovery: 9aeaf435-6c2f-42a2-9764-56d85e185bd7

Files

Original bundle

Name: frobt-12-1671673.pdf
Size: 3.82 MB
Format: Adobe Portable Document Format