Vision-based Navigation and Manipulation
Concept
- As a map representation, we proposed a hybrid map that combines object, spatial-layout, and route information.
- Global localization is based on object recognition and the pose relationships of the recognized objects, while local localization uses 2D contour matching on 2D laser scan data (a minimal data-structure and localization sketch follows this list).
- Our map representation is illustrated in the figure below (map-repres.jpg):
- Our object-based global localization is illustrated in the figure below:
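The following is a minimal sketch of how such a hybrid map and object-based global localization could be organized, assuming Python with NumPy. The class and function names (HybridMap, ObjectLandmark, localize_from_objects) and the naive pose-fusion step are hypothetical illustrations, not our actual implementation.

    # Sketch of the hybrid object-spatial layout-route map and
    # object-based global localization. All names and the fusion rule
    # below are illustrative assumptions, not the real system.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple
    import numpy as np

    @dataclass
    class ObjectLandmark:
        """A recognized object stored with its pose in the map frame."""
        name: str
        pose: np.ndarray                  # 4x4 homogeneous transform, map frame

    @dataclass
    class HybridMap:
        """Hybrid map: objects + spatial layout (2D contours) + route graph."""
        objects: Dict[str, ObjectLandmark] = field(default_factory=dict)
        layout_contours: List[np.ndarray] = field(default_factory=list)   # Nx2 point sets
        route_graph: Dict[str, List[str]] = field(default_factory=dict)   # waypoint adjacency

    def localize_from_objects(hmap: HybridMap,
                              detections: List[Tuple[str, np.ndarray]]) -> np.ndarray:
        """Object-based global localization (sketch).

        Each detection is (object_name, T_cam_obj): the object's pose relative
        to the robot camera. Combining it with the object's stored map pose
        gives one hypothesis of the camera pose; here we simply average them.
        """
        hypotheses = []
        for name, T_cam_obj in detections:
            if name not in hmap.objects:
                continue
            T_map_obj = hmap.objects[name].pose
            # T_map_cam = T_map_obj * inv(T_cam_obj)
            hypotheses.append(T_map_obj @ np.linalg.inv(T_cam_obj))
        if not hypotheses:
            raise ValueError("no mapped objects recognized")
        # Naive fusion: average the translations, keep the first rotation.
        T = hypotheses[0].copy()
        T[:3, 3] = np.mean([h[:3, 3] for h in hypotheses], axis=0)
        return T

In this sketch the local step (2D contour matching against layout_contours using the laser scan) would refine the coarse pose returned by localize_from_objects; that refinement is omitted here.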
Related papers
Unknown Object Grasping
Concept
- Using stereo vision (a passive 3D sensor) and a jaw-type gripper, we studied a method for grasping arbitrary unknown objects.
- From a single one-shot 3D image, three graspable directions are suggested (lift-up, side, and frontal), and an affordance-based grasp for handle-graspable objects is also proposed (see the sketch after this list).
- Our experimental movie clip: https://www.youtube.com/watch?v=YVfTltLy2w0
- Our grasp directions are illustrated in the figure below:
- The overall schema of our grasping process is shown in the figure below:
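The following is a minimal sketch of how one of the three grasp directions could be selected from a single 3D snapshot of an object point cloud, assuming Python with NumPy. The function name choose_grasp_direction, the jaw_opening and table_height parameters, and the size-based heuristics are illustrative assumptions, not our actual decision rule.

    # Sketch: pick lift-up / side / frontal from one 3D snapshot.
    # Thresholds and names are hypothetical, not the real method.
    import numpy as np

    def choose_grasp_direction(points: np.ndarray,
                               jaw_opening: float = 0.08,
                               table_height: float = 0.0) -> str:
        """Pick a grasp direction for an object point cloud (Nx3, meters).

        - lift-up : top-down approach if the horizontal extent fits the jaws
        - side    : horizontal approach if the object is flat enough to grip
                    across its height
        - frontal : frontal approach otherwise (e.g. objects on a shelf)
        """
        mins, maxs = points.min(axis=0), points.max(axis=0)
        width_x, width_y = maxs[0] - mins[0], maxs[1] - mins[1]
        height = maxs[2] - table_height

        if min(width_x, width_y) <= jaw_opening:
            return "lift-up"      # top-down grasp fits inside the jaw opening
        if height <= jaw_opening:
            return "side"         # flat object: grip its height from the side
        return "frontal"          # fall back to a frontal approach

    # Example: a cloud sampled from a 6 cm x 6 cm x 20 cm box -> "lift-up"
    cloud = np.random.rand(500, 3) * np.array([0.06, 0.06, 0.20])
    print(choose_grasp_direction(cloud))

The affordance-based handle grasp mentioned above would add a fourth case (detecting a handle-like part and grasping it directly); that detection step is not shown in this sketch.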