ISSN 1001-3806  CN 51-1125/TN

Volume 41 Issue 3
Mar.  2017
Research of robot navigation vision sensors based on grating projection stereo vision

  • Received Date: 2016-06-13
    Accepted Date: 2016-09-13
  • In order to detect obstacles and locate mobile robots during navigation in dark environments, a novel visual navigation method based on grating projection stereo vision was proposed. First, by combining grating projection profilometry of plane structured light with stereo vision technology, the geometric and mathematical model of a grating projection stereo vision sensor was founded. Then, the methods of spatial equipment position constraint and projection plane intersection were used, and the 3-D coordinates of objects in the field of view of the robot were calculated. A reliable and realistic method of obstacle detection and analysis was established. After theoretical analysis and experimental verification, a calculated range precision of 0.8 mm was obtained. The results show that the method can achieve sub-pixel accuracy in image computation. The study can be used to overcome the problem that robots cannot navigate autonomously in dark environments and provides a basis for robot navigation without global positioning system support in the dark.
  • [1]

    RAIA H, BOGDAN M, GARBIS S, et al. Complex terrain mapping with multi-camera visual odometry and realtime drift correction[C]// Proceedings of 2nd Joint 3D IM/3D PVT Conference.New York, USA: IEEE, 2012: 493-500.
    [2]

    OLEG N, XUN Zh, STERGIOS R, et al. Two efficient solutions for visual odometry using directional correspondence[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(4):818-824. doi: 10.1109/TPAMI.2011.226
    [3]

    ZHANG H T, LI L Q, WANG L, et al. Indoor three-dimensional environment detection for aerial robots[J]. Recent Advances in Computer Science and information Engineering, 2012, 129:445-450. doi: 10.1007/978-3-642-25778-0
    [4]

    JUNG H G, KIM J H. Model-based light stripe detection for indoor navigation[J]. Optics and Lasers in Engineering, 2009, 47(1):62-74. doi: 10.1016/j.optlaseng.2008.07.018
    [5]

    HUNTSBERGER T, AGHAZARIAN H, HOWARD A, et al. Stereo vision-based navigation for autonomous surface vessels[J]. Journal of Field Robotics, 2011, 28(1):3-18. doi: 10.1002/rob.v28.1
    [6]

    SIZINTSEV M, WILDES R P. Coarse-to-fine stereo vision with accurate 3-D boundaries[J]. Image and Vision Computing, 2010, 28(3): 352-366. doi: 10.1016/j.imavis.2009.06.008
    [7]

    LI F B, HE Q, HUANG Ch W, et al. High frame rate and high line density ultrasound imaging for local pulse wave velocity estimation using motion matching: A feasibility study on vessel phantoms[J]. Ultrasonics, 2016, 67(4):41-54.
    [8]

    ZHANG X L, ZHANG B F, LIN Y Ch. Study on image matching of 3-D measurement system on dynamic object in long distance[J]. Journal of Optoelectronics ·Laser, 2008, 19(3):373-377 (in Chinese).
    [9]

    ZHANG B F, LI J L, ZHANG X L. Application of SIFT algorithm in 3D scene reconstruction[J]. Advanced Materials Research, 2012, 616/618:1956-1960. doi: 10.4028/www.scientific.net/AMR.616-618
    [10]

    ZHANG X L, ZHANG B F, LIN Y Ch, et al. Accurate phase expansion on reference planes in grating projection profilometry[J]. Measurement Science and Technology, 2011, 22(7):075301. doi: 10.1088/0957-0233/22/7/075301
    [11]

    FENG W, ZHANG Q C. Analysis of membrane vibration modes based on structured light projection[J]. Laser Technology, 2015, 39(4):446-449 (in Chinese).
    [12]

    XIAO Ch J, ZHANG J Ch, WEI Y, et al. Measurement of glass bubble size based on laser vision principle[J]. Laser Technology, 2015, 39(3):391-394 (in Chinese).
    [13]

    KAZMI W, SALMERONS F, RIBASG A, et al. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2014, 88(1):128-146.
    [14]

    LIN L H, LAWRENCE P D, HALL R. Robust outdoor stereo vision SLAM for heavy machine rotation sensing[J]. Machine Vision and Applications, 2013, 24(1):205-226. doi: 10.1007/s00138-011-0380-6
    [15]

    ZHANG X L, ZHANG B F, TIAN X Zh. Research on stereo vision odometry[J]. Proceedings of the SPIE, 2010, 7850:785000.


Author affiliations:
  • 1. School of Electrical & Information Engineering, Jiangsu University of Technology, Changzhou 213001, China
  • 2. State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China


Introduction
  • An autonomous mobile robot is an intelligent robot capable of continuous, real-time autonomous operation on various roads and in field environments. It has broad application prospects in industrial production, exploration of unknown environments, medicine, unmanned aerial vehicles and driverless cars, automated operation on hazardous sites, and military and national defense.

    For an autonomous mobile robot, the most critical problem is navigation: in an unknown environment containing obstacles, acquiring effective and reliable environmental information, detecting and recognizing the obstacles, and moving from the start point to the goal without collision according to a given evaluation criterion. Radar navigation requires planar or 3-D scanning; in the field, on complex and uneven terrain, the violent jolting of the vehicle body causes serious missed detections and false alarms of obstacles. Deep-space exploration robots and lunar rovers have no global positioning system (GPS) information at all and cannot navigate by GPS, so visual navigation is widely applied [1-2].

    In dark environments, however, where an autonomous mobile robot carries limited energy and cannot rely on external illumination, autonomous navigation has received little study. References [3] to [5] used single-camera line-structured-light vision for obstacle detection, tracking the movement of targets from the position change of the light stripe in the image to avoid obstacles, but did not study robot self-localization or 3-D mapping. This project proposes a navigation method for autonomous mobile robots in dark environments based on grating projection stereo vision, addressing obstacle avoidance, real-time localization and mapping, and visual odometry in autonomous robot navigation.

    Stereo vision navigation [6-7] lets the sensor perceive the surrounding scene directly under ambient light (mainly sunlight). Because it collects and processes information in a way that imitates human vision and markedly saves the limited energy carried by an exploration rover, it is widely applied to planetary exploration rovers such as Mars and lunar rovers.

    Stereo vision alone cannot solve the navigation of autonomous mobile robots in dark environments, and it requires image matching [8-9], which is difficult and of limited accuracy. The authors therefore adopt a model combining grating projection [10-12] with stereo vision [13] and propose a vision-sensor method for autonomous mobile robot navigation in dark environments based on grating projection stereo vision. Grating fringes only need to be projected at regular time intervals, with the cameras capturing an image at the same instant, to solve obstacle avoidance, simultaneous localization and mapping (SLAM) [14], and visual odometry [15] in autonomous robot navigation in the dark.

1.   Block diagram of the technical route for visual navigation
  • The block diagram of the vision-sensor navigation of the autonomous mobile robot is shown in Fig. 1. First, by fusing grating projection with stereo vision, the geometric and mathematical model of the grating projection stereo vision sensor is established, implementing the vision sensor model. Then, as the robot moves, the 3-D coordinates of each scene are recovered in real time, and a reliable and realistic obstacle detection and analysis method is established. Next, moving targets are tracked and recognized. Finally, from the relation between the motion information of the targets and the attitude, heading, and traveled distance of the vehicle body, motion estimation and accurate localization of the robot are achieved. Here CCD stands for charge-coupled device and LD for laser diode.

    Figure 1.  Block diagram of technical route

2.   Establishment of the mathematical model of the vision sensor
  • The grating projection stereo vision sensor consists of one grating projection device (point $S$ in Fig. 2) and two cameras (points $A$ and $B$ in Fig. 2). It involves the world coordinate system ($O_{\rm w}$-$x_{\rm w}y_{\rm w}z_{\rm w}$), the robot coordinate system ($O_{\rm r}$-$x_{\rm r}y_{\rm r}z_{\rm r}$), the camera coordinate systems ($O_{\rm A}$-$x_{\rm A}y_{\rm A}z_{\rm A}$ and $O_{\rm B}$-$x_{\rm B}y_{\rm B}z_{\rm B}$), the image coordinate systems ($O_{\rm i, A}$-$X_{\rm i, A}Y_{\rm i, A}$ and $O_{\rm i, B}$-$X_{\rm i, B}Y_{\rm i, B}$), the discrete pixel coordinate systems ($O_{\rm e, A}$-$X_{\rm e, A}Y_{\rm e, A}$ and $O_{\rm e, B}$-$X_{\rm e, B}Y_{\rm e, B}$), and the grating projection coordinate system ($O_{\rm S}$-$x_{\rm S}y_{\rm S}z_{\rm S}$). The geometric model of the grating projection stereo vision to be established is shown in Fig. 2.

    Figure 2.  Geometric model of vision sensor based on grating projection stereo vision

  • On the basis of the geometric model, the mathematical model of grating projection stereo vision is established by intersecting three planes: the grating stripe plane $SPQ$ and the camera space planes $O_{\rm A}PQ$ and $O_{\rm B}PQ$. The equation of the stripe plane projected in Fig. 2 is:

    $$a_{\rm S}x_{\rm S} + b_{\rm S}y_{\rm S} + c_{\rm S}z_{\rm S} + d_{\rm S} = 0\tag{1}$$

    Transforming the stripe plane equation into the world coordinate system gives:

    $$[x_{\rm S}, y_{\rm S}, z_{\rm S}]^{\rm T} = \boldsymbol{R}_{\rm S, w}[x_{\rm w}, y_{\rm w}, z_{\rm w}]^{\rm T} + \boldsymbol{T}_{\rm S, w}\tag{2}$$

    where $\boldsymbol{R}_{\rm S, w}$ and $\boldsymbol{T}_{\rm S, w}$ are the rotation matrix and the translation vector from the grating projection coordinate system to the world coordinate system, whose 12 parameters are obtained by calibration.
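    Re-expressing the stripe plane in world coordinates is a one-line change of frame: with $x_{\rm S} = \boldsymbol{R}_{\rm S,w}x_{\rm w} + \boldsymbol{T}_{\rm S,w}$, a plane $n \cdot x_{\rm S} + d = 0$ becomes $(\boldsymbol{R}_{\rm S,w}^{\rm T}n) \cdot x_{\rm w} + (n \cdot \boldsymbol{T}_{\rm S,w} + d) = 0$. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def plane_to_world(n_s, d_s, R_sw, T_sw):
        """Re-express the plane n_s . x_s + d_s = 0 (projector frame) in world
        coordinates, given the calibrated transform x_s = R_sw @ x_w + T_sw."""
        n_w = R_sw.T @ n_s        # world-frame plane normal
        d_w = d_s + n_s @ T_sw    # world-frame plane offset
        return n_w, d_w
    ```

    For instance, with identity rotation and a translation of 1 along $z$, the projector plane $z_{\rm S} = 0$ becomes the world plane $z_{\rm w} + 1 = 0$.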

    Let the equation of the camera space plane $O_{\rm A}PQ$ be:

    $$A_1x_{\rm A} + B_1y_{\rm A} + C_1z_{\rm A} + D_1 = 0\tag{3}$$

    where $A_1$, $B_1$, $C_1$ and $D_1$ are coefficients. According to the perspective transformation of the camera, the relation between the discrete pixel coordinate system $O_{\rm e, A}$-$X_{\rm e, A}Y_{\rm e, A}$ and the camera coordinate system $O_{\rm A}$-$x_{\rm A}y_{\rm A}z_{\rm A}$ is:

    $$\lambda[u_{\rm A}, v_{\rm A}, 1]^{\rm T} = \begin{bmatrix} f_{\rm A} & 0 & u_0 \\ 0 & f_{\rm A} & v_0 \\ 0 & 0 & 1 \end{bmatrix}[x_{\rm A}, y_{\rm A}, z_{\rm A}]^{\rm T}\tag{4}$$

    where $\lambda$ is a scale factor, $f_{\rm A}$ and $f_{\rm B}$ are the focal lengths of the cameras at $A$ and $B$, $(u_0, v_0)$ is the principal point of the image, and $(u_{\rm A}, v_{\rm A})$ are the discrete pixel coordinates of a point on the image plane of camera $A$; these parameters are obtained by calibration. Substituting the coordinates of the three points $O_{\rm A}$, $P_{\rm A}$ and $Q_{\rm A}$, converted from their discrete pixel coordinates through equation (4), into equation (3) yields the values of $A_1$, $B_1$, $C_1$ and $D_1$. Similarly, the equation of the camera space plane $O_{\rm B}PQ$ is:

    $$A_2x_{\rm B} + B_2y_{\rm B} + C_2z_{\rm B} + D_2 = 0\tag{5}$$

    where $A_2$, $B_2$, $C_2$ and $D_2$ are likewise coefficients. The relation between the camera coordinate systems and the world coordinate system is:

    $$[x_{\rm A}, y_{\rm A}, z_{\rm A}]^{\rm T} = \boldsymbol{R}_{\rm A, w}[x_{\rm w}, y_{\rm w}, z_{\rm w}]^{\rm T} + \boldsymbol{T}_{\rm A, w}\tag{6}$$

    and likewise for camera $B$, where $\boldsymbol{R}_{\rm A, w}$, $\boldsymbol{R}_{\rm B, w}$ and $\boldsymbol{T}_{\rm A, w}$, $\boldsymbol{T}_{\rm B, w}$ are the rotation matrices and translation vectors from the camera coordinate systems to the world coordinate system, whose 24 parameters are obtained by calibration.
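    Because the plane $O_{\rm A}PQ$ passes through the camera centre, its coefficients can also be computed directly from the viewing rays of the two imaged stripe points under a pinhole camera model. A sketch of this (illustrative names; the paper itself only states that the coefficients are solved from the three points):

    ```python
    import numpy as np

    def camera_plane(uv_p, uv_q, f, u0, v0):
        """Plane through the camera centre and the viewing rays of two pixels.
        Pixels are back-projected with a pinhole model (focal length f,
        principal point (u0, v0)); returns the normal (A, B, C) and D."""
        r_p = np.array([(uv_p[0] - u0) / f, (uv_p[1] - v0) / f, 1.0])
        r_q = np.array([(uv_q[0] - u0) / f, (uv_q[1] - v0) / f, 1.0])
        n = np.cross(r_p, r_q)    # normal (A, B, C) of the plane
        return n, 0.0             # D = 0: the plane contains the camera centre
    ```

    The offset term is zero in the camera frame because the camera centre itself lies on the plane; the nonzero $D$ appears only after transforming the plane into world coordinates.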

    Combining equations (3), (5) and (6) gives the equations of the two camera space planes in the world coordinate system, and combining equations (1) and (2) gives the equation of the grating stripe plane in the world coordinate system. Since the space points $P$ and $Q$ must lie on these planes, solving the simultaneous equations yields the equation of the space line $PQ$ and the coordinates of the points $P$ and $Q$ in the world coordinate system.
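    Concretely, once the stripe plane has been transformed into world coordinates, the world coordinates of a stripe point such as $P$ follow from intersecting the viewing ray of one camera with that plane. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def ray_plane_point(o, v, n, d):
        """Intersect the viewing ray x = o + t*v (camera centre o, ray
        direction v, both in world coordinates) with the plane n . x + d = 0.
        Assumes the ray is not parallel to the plane (n . v != 0)."""
        t = -(n @ o + d) / (n @ v)
        return o + t * v
    ```

    For example, a ray from the origin along direction $(1, 0, 1)$ meets the plane $z = 1$ at the point $(1, 0, 1)$.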

3.   Experiment and analysis
  • Arrangement of the vision sensor: two CCD cameras are placed horizontally with their optical axes at an arbitrary included angle, and the grating projector is placed between them. Experiments show that placing the two optical axes at an arbitrary angle greatly eases the design and installation of the vision system, enlarges the common field of view of the cameras, and maintains measurement accuracy and robustness in harsh environments where the robot vibrates strongly. The vision system is mounted on the upper part of the robot, with the optical axes of the two cameras and of the grating projector tilted slightly downward at an angle to the ground, as shown in Fig. 3. While the robot moves, the grating projector projects fringes ahead of the robot at regular intervals, and the two cameras simultaneously capture the deformed fringe images modulated by the surrounding scene, as shown in Fig. 4.

    Figure 3.  The placement of robot and vision sensor

    Figure 4.  Deformation image of the surrounding scene

    The structured-light stripes are then extracted from the left and right images, which yields the height map of obstacles in the indoor scene, as shown in Fig. 5. The distance between each obstacle and the robot is calculated from it and used for robot localization and path planning, as shown in Fig. 6.
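    The stripe-extraction step is where the sub-pixel accuracy comes from. A common approach, shown here as a sketch rather than the authors' exact implementation, is a per-column intensity-weighted centroid of the stripe:

    ```python
    import numpy as np

    def stripe_centers(img):
        """Sub-pixel row position of a single bright stripe in each image
        column, via the intensity-weighted centroid of that column.
        Assumes every column contains some stripe intensity."""
        w = img.astype(float)
        rows = np.arange(img.shape[0])[:, None]
        return (w * rows).sum(axis=0) / w.sum(axis=0)
    ```

    When the stripe straddles two rows with equal intensity, the centroid lands halfway between them, which is exactly the sub-pixel behaviour wanted here.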

    Figure 5.  The extracted obstacle information

    Figure 6.  Distance information of obstacles at different positions

    The distance information of the obstacle at different positions relative to the cameras shows that the ranging accuracy of the method reaches 0.8 mm. This helps to overcome the present inability of robots to navigate autonomously in dark environments and lays a foundation for robot navigation without GPS support in the dark.

4.   Conclusion
  • A grating projection stereo vision sensor is proposed to solve the robot navigation problem. The sensor avoids both the difficult stereo matching of binocular stereo vision and the difficult phase unwrapping of grating projection, computing the 3-D coordinates in the scene solely by intersecting three planes. It is therefore well suited to fast, real-time obstacle detection, localization, and visual odometry, and can be applied to robot navigation in dark field environments without GPS signals, providing a new method with high robustness, high accuracy, and good real-time performance. Experimental results show that the image computation reaches sub-pixel accuracy and the ranging accuracy reaches 0.8 mm. This research opens up the study and application of robot visual navigation in dark environments and has broad application prospects in industrial production, exploration of unknown environments, unmanned aerial vehicles and driverless cars, automated operation on hazardous sites, disaster rescue in national defense, patrol and sentry robots, demining robots, and deep-space exploration robots.
