Multimodal Panoptic Segmentation of 3D Point Clouds

The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and the point clouds they record are particularly interesting for this challenge because they provide accurate 3D information about the environment. This work presents a multimodal deep learning approach for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.
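
As a rough illustration of the deep sensor fusion idea mentioned above, the sketch below fuses per-point lidar features with camera features that have already been projected onto the points. This is a minimal, hypothetical PyTorch example; the module name, feature dimensions, and gating scheme are assumptions made for illustration and are not taken from the book.

```python
# Minimal sketch of per-point deep sensor fusion (illustrative only).
# Assumes lidar features and projected camera features are given per point.
import torch
import torch.nn as nn


class PointwiseSensorFusion(nn.Module):
    """Fuses lidar and camera features per point with a learned gate."""

    def __init__(self, lidar_dim: int = 64, camera_dim: int = 64, fused_dim: int = 128):
        super().__init__()
        self.lidar_proj = nn.Linear(lidar_dim, fused_dim)
        self.camera_proj = nn.Linear(camera_dim, fused_dim)
        # Gate decides, per point and channel, how much camera information to mix in.
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.Sigmoid())

    def forward(self, lidar_feats: torch.Tensor, camera_feats: torch.Tensor) -> torch.Tensor:
        # lidar_feats: (N, lidar_dim), camera_feats: (N, camera_dim) for N points.
        l = self.lidar_proj(lidar_feats)
        c = self.camera_proj(camera_feats)
        g = self.gate(torch.cat([l, c], dim=-1))
        return l + g * c  # fused per-point features, shape (N, fused_dim)


if __name__ == "__main__":
    fusion = PointwiseSensorFusion()
    points_lidar = torch.randn(1024, 64)   # dummy lidar features
    points_camera = torch.randn(1024, 64)  # dummy projected camera features
    fused = fusion(points_lidar, points_camera)
    print(fused.shape)  # torch.Size([1024, 128])
```

The learned gate allows the network to down-weight camera features per point, for example where the camera view is occluded or poorly lit.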

This book is included in DOAB.

Downloads

This work has been downloaded 24 times via unglue.it ebook links.
  1. 24 downloads: pdf (CC BY-SA) at Unglue.it.

Keywords

  • Computer science
  • Computing & information technology
  • deep learning
  • panoptic segmentation
  • semantic segmentation
  • sensor fusion
  • Temporal Fusion
