Hi, I have noticed a conceptual problem while navigating with a static, forward-facing Kinect in combination with a 270-degree laser scanner. When I encounter a table with thin legs, the Kinect puts the table top into the costmap as a point cloud, so my robot correctly replans a path around the table instead of through it. HOWEVER, when the robot turns to go around the table, my LASER SCANNER clears the table top again, which allows a replan through the table! As a result my robot seems to end up in a deadlock whenever it encounters a table (or anything else with thin legs).

I do need both sensors to both 'mark' and 'clear' obstacles. Is there any way I can circumvent this problem with the current implementation (i.e. by using just the .yaml configuration files for move_base), or do I really need to write a whole new implementation? My preferred solution would be for each sensor to ONLY clear its own obstacles, and NOT the ones detected by the other sensor. But this doesn't seem possible with the current implementation. Any ideas would be very welcome!!
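For reference, my observation sources are set up roughly like this in the common costmap parameters (topic and frame names here are illustrative, not my exact ones):

```yaml
obstacle_range: 2.5
raytrace_range: 3.0

observation_sources: laser_scan_sensor kinect_cloud_sensor

laser_scan_sensor:
  topic: /scan
  sensor_frame: laser_link
  data_type: LaserScan
  marking: true
  clearing: true    # this raytracing pass is what wipes out the table top

kinect_cloud_sensor:
  topic: /camera/depth/points
  sensor_frame: camera_link
  data_type: PointCloud2
  marking: true
  clearing: true
  min_obstacle_height: 0.0
  max_obstacle_height: 1.5
```

As far as I can tell, `marking` and `clearing` are the only per-sensor knobs here, and clearing raytraces through the whole 2D costmap, so there is no way to say "only clear cells that you yourself marked".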
Thanks!!
Rob