Slomma, D., Huang, S., & Zhao, L. (2024). Efficient and Accurate Template-based Reconstruction of Deformable Surfaces. 2024 18th International Conference on Control, Automation, Robotics and Vision (ICARCV), 672–678.
@inproceedings{10821695,
author = {Slomma, Dominik and Huang, Shoudong and Zhao, Liang},
booktitle = {2024 18th International Conference on Control, Automation, Robotics and Vision (ICARCV)},
title = {Efficient and Accurate Template-based Reconstruction of Deformable Surfaces},
year = {2024},
pages = {672-678},
keywords = {Surface reconstruction;Accuracy;Three-dimensional displays;Costs;Deformation;Linear programming;Computational efficiency;Image reconstruction;Optimization;Robots},
doi = {10.1109/ICARCV63323.2024.10821695},
issn = {2474-963X},
month = dec
}
3D surface reconstruction in deformable environments presents significant challenges. Template-based methods have proven robust for achieving accurate reconstructions by utilising images and textured triangulated meshes as reference data. These methods rely on feature detection in both the reference and current images to establish corresponding points, leveraging reprojection and deformation constraints for precise reconstruction. However, challenges arise when features are not uniformly distributed across mesh triangles, potentially resulting in sparse or coarse reconstructions. Moreover, the combined computational cost of reprojection and deformation constraints often leads to prolonged optimisation times. This study aims to enhance efficiency in reconstructing deformations within the field of view. Our approach projects the vertices of a reference mesh onto the reference image plane and subsequently tracks them directly in the subsequent image. This method assumes the resulting observations are sufficiently accurate and that the deformation is encoded in this information. By eliminating the reprojection constraint and relying solely on a deformation constraint based on Euclidean distances between vertices, we significantly reduce computational and memory costs. The results of our proposed algorithm demonstrate a notable reduction in both computational and memory cost, while maintaining reconstruction accuracy comparable to related methods. The code of our algorithm is publicly available at https://github.com/DominikSlomma/Efficient-and-Accurate-Template-based-Reconstruction-of-Deformable-Surfaces.
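The depth-only formulation summarised in the abstract can be sketched as follows. This is an illustrative reconstruction under our own assumptions (function names, solver choice, and the depth parameterisation are not taken from the paper's code): given unit bearing rays to the tracked vertices and the edge lengths of the reference mesh, solve for per-vertex depths so that inter-vertex Euclidean distances are preserved.

```python
import numpy as np
from scipy.optimize import least_squares

def reconstruct_depths(rays, edges, ref_len, d0):
    """Recover per-vertex depths d so that vertices P_i = d_i * ray_i
    preserve the reference mesh's edge lengths (Euclidean distances).

    rays    -- (n, 3) unit bearing vectors to the tracked vertices
    edges   -- (m, 2) vertex index pairs of the mesh
    ref_len -- (m,) edge lengths measured on the reference mesh
    d0      -- (n,) initial depth guess
    """
    def residuals(d):
        P = rays * d[:, None]                       # 3D vertex positions
        diff = P[edges[:, 0]] - P[edges[:, 1]]
        return np.linalg.norm(diff, axis=1) - ref_len
    return least_squares(residuals, d0).x
```

Because the cost involves only inter-vertex distances and no reprojection term, each residual touches just two depth variables, which illustrates why dropping the reprojection constraint keeps the optimisation cheap.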
Surmann, H., Leinweber, A., Senkowski, G., Meine, J., & Slomma, D. (2023). UAVs and Neural Networks for search and rescue missions. ISR Europe 2023; 56th International Symposium on Robotics, 1–8.
@inproceedings{10363046,
author = {Surmann, Hartmut and Leinweber, Artur and Senkowski, Gerhard and Meine, Julien and Slomma, Dominik},
booktitle = {ISR Europe 2023; 56th International Symposium on Robotics},
title = {UAVs and Neural Networks for search and rescue missions},
year = {2023},
pages = {1-8},
month = sep,
eprint = {2310.05512},
archiveprefix = {arXiv},
url = {https://arxiv.org/abs/2310.05512}
}
In this paper, we present a method for detecting objects of interest, including cars, humans, and fire, in aerial images captured by unmanned aerial vehicles (UAVs), typically during vegetation fires. To achieve this, we use artificial neural networks and create a dataset for supervised learning. We accomplish the assisted labeling of the dataset through the implementation of an object detection pipeline that combines classic image processing techniques with pretrained neural networks. In addition, we develop a data augmentation pipeline to augment the dataset with automatically labeled images. Finally, we evaluate the performance of different neural networks.
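One concrete requirement of such an augmentation pipeline is that geometric transforms must update the labels together with the pixels. The helper below is an illustrative sketch (the function name and the [x_min, y_min, x_max, y_max] box convention are our assumptions, not the paper's code): a horizontal flip that remaps bounding boxes accordingly.

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Flip an image left-right and remap its bounding boxes.

    image -- (H, W[, C]) array
    boxes -- (N, 4) array of [x_min, y_min, x_max, y_max] in pixels
    """
    h, w = image.shape[:2]
    flipped = image[:, ::-1].copy()
    out = boxes.astype(float).copy()
    # x coordinates mirror around the image width; min and max swap roles
    out[:, [0, 2]] = w - boxes[:, [2, 0]]
    return flipped, out
```

Applying such label-preserving transforms to already-labeled frames is what lets an assisted-labeling pipeline grow the dataset without additional manual annotation.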
Surmann, H., Thurow, M., & Slomma, D. (2022). PatchMatch-Stereo-Panorama, a fast dense reconstruction from 360° video images. 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 366–372.
@inproceedings{10018698,
author = {Surmann, Hartmut and Thurow, Marc and Slomma, Dominik},
booktitle = {2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)},
title = {PatchMatch-Stereo-Panorama, a fast dense reconstruction from 360° video images},
year = {2022},
pages = {366-372},
keywords = {Location awareness;Visualization;Adaptation models;Simultaneous localization and mapping;Three-dimensional displays;Streaming media;Cameras;PatchMatch-Stereo;360°-Panorama;visual monocular SLAM;UAV;Rescue Robotics},
doi = {10.1109/SSRR56537.2022.10018698},
issn = {2475-8426},
month = nov
}
This work proposes a new method for real-time dense 3D reconstruction for common 360° action cams, which can be mounted on small scouting UAVs during USAR missions. The proposed method extends a feature-based visual monocular SLAM system (OpenVSLAM, based on the popular ORB-SLAM) for robust long-term localization on equirectangular video input by adding an additional densification thread that computes dense correspondences for any given keyframe with respect to a local keyframe neighborhood using a PatchMatch-Stereo approach. While PatchMatch-Stereo-type algorithms are considered state of the art for large-scale Multi-View Stereo, they had not yet been adapted for real-time dense 3D reconstruction tasks. This work describes a new massively parallel variant of the PatchMatch-Stereo algorithm that differs from current approaches in two ways: first, it supports the equirectangular camera model, while other solutions are limited to the pinhole camera model; second, it is optimized for low latency while keeping a high level of completeness and accuracy. To achieve this, it operates only on small sequences of keyframes, but employs techniques to compensate for the potential loss of accuracy due to the limited number of frames. Results demonstrate that dense 3D reconstruction is possible on a consumer-grade laptop with a recent mobile GPU, with improved accuracy and completeness over common offline MVS solutions at comparable quality settings.
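The equirectangular camera model highlighted in the abstract maps pixel coordinates to bearing directions on the unit sphere. Below is a minimal sketch of one common convention for that mapping (the paper's axis conventions and origin may differ; treat this as an assumed illustration rather than the method's implementation).

```python
import numpy as np

def pixel_to_ray(u, v, w, h):
    """Map equirectangular pixel coords (u, v) to a unit bearing vector.

    Longitude spans [-pi, pi) across the width, latitude [pi/2, -pi/2]
    down the height (y up, z forward in this convention)."""
    lon = (u / w) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / h) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def ray_to_pixel(d, w, h):
    """Inverse mapping: unit bearing vector back to pixel coordinates."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(d[1])
    u = (lon + np.pi) / (2.0 * np.pi) * w
    v = (np.pi / 2.0 - lat) / np.pi * h
    return u, v
```

Supporting this mapping directly, instead of rectifying to pinhole views, is what allows a PatchMatch-style matcher to use the full 360° field of view of each keyframe.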
Surmann, H., Slomma, D., Grafe, R., & Grobelny, S. (2022). Deployment of Aerial Robots During the Flood Disaster in Erftstadt / Blessem in July 2021. 2022 8th International Conference on Automation, Robotics and Applications (ICARA), 97–102.
@inproceedings{9738529,
author = {Surmann, Hartmut and Slomma, Dominik and Grafe, Robert and Grobelny, Stefan},
booktitle = {2022 8th International Conference on Automation, Robotics and Applications (ICARA)},
title = {Deployment of Aerial Robots During the Flood Disaster in Erftstadt / Blessem in July 2021},
year = {2022},
pages = {97-102},
keywords = {Solid modeling;Three-dimensional displays;Systematics;Computational modeling;Inspection;Emergency services;Climate change;Rescue Robotic;UAVs},
doi = {10.1109/ICARA55094.2022.9738529},
issn = {2767-7745},
month = feb
}
Climate change is leading to more and more extreme weather events such as heavy rainfall and flooding. This technical report addresses the question of how rescue commanders can be provided with current information better and faster during flood disasters using Unmanned Aerial Vehicles (UAVs), specifically during the July 2021 flood in Central Europe, in Erftstadt/Blessem. The UAVs were used on the one hand for live observation and regular inspections of the flood edge, and on the other hand for systematic data acquisition in order to compute 3D models using Structure from Motion and Multi-View Stereo. The 3D models, embedded in a GIS application, serve as a planning basis for systematic exploration and as decision support for the deployment of additional smaller UAVs as well as rescue forces. The systematic data acquisition by means of autonomous meander flights provides high-resolution images, which are processed into a georeferenced 3D model of the surrounding area within 15 minutes in a specially equipped robotic command vehicle (RobLW). From the comparison of high-resolution elevation profiles extracted from the 3D models on successive days, changes in the water level become visible. This information enables the emergency management to plan further inspections of the buildings and to search for missing persons on site.
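The day-to-day comparison of elevation profiles described above can be sketched as sampling a digital elevation model (DEM) along the same line on successive days and differencing the result. This is an illustrative sketch only (nearest-neighbour sampling, assumed function names); the mission software is not described at this level of detail in the report.

```python
import numpy as np

def sample_profile(dem, p0, p1, n=100):
    """Nearest-neighbour sample of a DEM along the line from p0 to p1
    (both given as (row, col)), returning n elevation values."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return dem[rows, cols]

def profile_change(dem_day1, dem_day2, p0, p1, n=100):
    """Per-point elevation difference along the same line on two days;
    positive values indicate a rise (e.g. of the water surface)."""
    return sample_profile(dem_day2, p0, p1, n) - sample_profile(dem_day1, p0, p1, n)
```

For this comparison to be meaningful the two 3D models must be georeferenced in the same frame, which is exactly what the autonomous meander flights and the on-site processing pipeline provide.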
Surmann, H., Slomma, D., Grobelny, S., & Grafe, R. (2021). Deployment of Aerial Robots after a major fire of an industrial hall with hazardous substances, a report. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 40–47.
@inproceedings{9597677,
author = {Surmann, Hartmut and Slomma, Dominik and Grobelny, Stefan and Grafe, Robert},
booktitle = {2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)},
title = {Deployment of Aerial Robots after a major fire of an industrial hall with hazardous substances, a report},
year = {2021},
pages = {40-47},
keywords = {Training;Solid modeling;Visualization;Three-dimensional displays;Simultaneous localization and mapping;Buildings;Robot vision systems},
doi = {10.1109/SSRR53300.2021.9597677},
issn = {2475-8426},
month = oct
}
This technical report is about the mission and the experience gained during the reconnaissance of an industrial hall with hazardous substances after a major fire in Berlin. During this operation, only UAVs and cameras were used to obtain information about the site and the building. First, a geo-referenced 3D model of the building was created in order to plan the entry into the hall. Subsequently, the UAVs were flown into the heavily damaged interior to take pictures from inside the hall. A 360° camera mounted under the UAV was used to collect images of the surrounding area, especially from sections that were difficult to fly into. Since the collected dataset contained near-duplicate as well as blurred images, it was cleaned of non-optimal images using visual SLAM, bundle adjustment and blur detection so that a 3D model and overviews could be computed. It turned out that the emergency services were not able to extract the necessary information from the 3D model. Therefore, an interactive panorama viewer with links between 360° images was implemented, where the links depend on the semi-dense point cloud and the camera positions localized by the visual SLAM algorithm, so that the emergency forces could view the surroundings.
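A standard blur-detection score consistent with the cleaning step described above is the variance of the image Laplacian: blurred frames have weak high-frequency response and thus a low score. The report does not specify which detector was used, so the sketch below (including the toy blur used to fake a degraded frame) is an assumed illustration.

```python
import numpy as np

def laplacian_variance(img):
    """Blur score: variance of a 3x3 Laplacian response (higher = sharper)."""
    img = img.astype(float)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def box_blur(img, iterations=5):
    """Crude iterative smoothing, used here only to simulate a blurred frame."""
    out = img.astype(float)
    for _ in range(iterations):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out
```

Frames whose score falls below a threshold (chosen per dataset) would be discarded before feeding the remaining images to SLAM and bundle adjustment.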
Kruijff-Korbayová, I., Grafe, R., Heidemann, N., Berrang, A., Hussung, C., Willms, C., Fettke, P., Beul, M., Quenzel, J., Schleich, D., Behnke, S., Tiemann, J., Güldenring, J., Patchou, M., Arendt, C., Wietfeld, C., Daun, K., Schnaubelt, M., von Stryk, O., … Surmann, H. (2021). German Rescue Robotics Center (DRZ): A Holistic Approach for Robotic Systems Assisting in Emergency Response. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 138–145.
@inproceedings{9597869,
author = {Kruijff-Korbayová, Ivana and Grafe, Robert and Heidemann, Nils and Berrang, Alexander and Hussung, Cai and Willms, Christian and Fettke, Peter and Beul, Marius and Quenzel, Jan and Schleich, Daniel and Behnke, Sven and Tiemann, Janis and Güldenring, Johannes and Patchou, Manuel and Arendt, Christian and Wietfeld, Christian and Daun, Kevin and Schnaubelt, Marius and von Stryk, Oskar and Lel, Alexander and Miller, Alexander and Röhrig, Christof and Straßmann, Thomas and Barz, Thomas and Soltau, Stefan and Kremer, Felix and Rilling, Stefan and Haseloff, Rohan and Grobelny, Stefan and Leinweber, Artur and Senkowski, Gerhard and Thurow, Marc and Slomma, Dominik and Surmann, Hartmut},
booktitle = {2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)},
title = {German Rescue Robotics Center (DRZ): A Holistic Approach for Robotic Systems Assisting in Emergency Response},
year = {2021},
pages = {138-145},
keywords = {System testing;Robot control;Companies;Emergency services;Teamwork;Planning;Security},
doi = {10.1109/SSRR53300.2021.9597869},
issn = {2475-8426},
month = oct
}
To meet the challenges involved in providing adequate robotic support to first responders, a holistic approach is needed. This requires close cooperation of first responders, researchers and companies for scenario-based needs analysis, iterative development of the corresponding system functionality and integrated robotic systems as well as human-robot teamwork support, and experimentation, system testing and evaluation in realistic missions carried out with or by first responders. We describe how such a holistic approach is implemented by the partners in the cooperative project A-DRZ for the establishment of the German Rescue Robotics Center (DRZ). The A-DRZ approach addresses important requirements identified by first responders: adaptation of the operational capabilities of robotic platforms; robust network connectivity; autonomous assistance functions facilitating robot control; improved situation awareness for strategic and tactical mission planning; and integration of human-robot teams into the first responders’ mission command structure. Solutions resulting from these efforts are tested and evaluated in exercises utilizing the advanced capabilities of the DRZ Living Lab and in external deployments.
Surmann, H., Kaiser, T., Leinweber, A., Senkowski, G., Slomma, D., & Thurow, M. (2021). Small Commercial UAVs for Indoor Search and Rescue Missions. 2021 7th International Conference on Automation, Robotics and Applications (ICARA), 106–113.
@inproceedings{9376551,
author = {Surmann, Hartmut and Kaiser, Tiffany and Leinweber, Artur and Senkowski, Gerhard and Slomma, Dominik and Thurow, Marc},
booktitle = {2021 7th International Conference on Automation, Robotics and Applications (ICARA)},
title = {Small Commercial UAVs for Indoor Search and Rescue Missions},
year = {2021},
pages = {106-113},
keywords = {Solid modeling;Three-dimensional displays;Two dimensional displays;Streaming media;Inspection;Writing;Load modeling;Search and Rescue Robots;Unmanned Aerial Vehicles;Artificial Intelligence;Deep Learning;Autonomous Robots},
doi = {10.1109/ICARA51699.2021.9376551},
month = feb
}
This technical report is about the architecture and integration of very small commercial UAVs (< 40 cm diagonal) in indoor Search and Rescue missions. One UAV is manually controlled by a single human operator, delivering live video streams and image series for later 3D scene modelling and inspection. In order to assist the operator, who has to simultaneously observe the environment and navigate through it, we use multiple deep neural networks to provide guided autonomy, automatic object detection and classification, and local 3D scene modelling. Our methods help to reduce the cognitive load of the operator. We describe a framework for the quick integration of new methods from the field of Deep Learning, enabling rapid evaluation in real scenarios, including the interaction of methods.