%0 Generic
%A Ragel de la Torre, Ricardo
%A Rey Arcenegui, Rafael
%A Páez, Álvaro
%A Ponce Chulani, Javier
%A Nakamura, Keisuke
%A Caballero, Fernando
%A Merino, Luis
%A Gómez, Randy
%T Multi-modal data fusion for people perception in the social robot Haru
%D 2023
%U https://hdl.handle.net/10433/23695
%X This article presents a people perception software architecture and its implementation, focused on the information of interest from the point of view of a social robot. It describes the key modules used to extract people features, such as body-part locations, face and hand information, and speech, from a set of possible devices and configurations. The association and combination of these features through a temporal and geometric fusion system are explained in detail. A high-level interface for human-robot interaction based on the fused people information is proposed. The paper presents experimental results evaluating the relevant aspects of the system.
%K Social Robotics
%K Data Fusion
%K Human-Robot Interaction
%~