The grating projection stereo vision sensor consists of one grating projection device (point S in Fig. 2) and two cameras (points A and B in Fig. 2). It involves the world coordinate system (Ow-xwywzw), the robot coordinate system (Or-xryrzr), the camera coordinate systems (OA-xAyAzA and OB-xByBzB), the image coordinate systems (Oi,A-Xi,AYi,A and Oi,B-Xi,BYi,B), the discrete pixel coordinate systems (Oe,A-Xe,AYe,A and Oe,B-Xe,BYe,B), and the grating projection coordinate system (OS-xSySzS). The geometric model of the proposed grating projection stereo vision is shown in Fig. 2.
On the basis of the geometric model, the mathematical model of grating projection stereo vision is established by intersecting three planes (the grating stripe plane SPQ and the camera-space planes OAPQ and OBPQ). The equation of the stripe plane projected in Fig. 2 is:
$ \left\{ \begin{array}{l} {Y_S} = {Z_S}\tan \alpha \\ {X_S} = 0 \end{array} \right. $
(1)

Transforming the stripe plane equation into world coordinates gives:
$ \left[ {\begin{array}{*{20}{c}} {{X_S}}\\ {{Y_S}}\\ {{Z_S}} \end{array}} \right] = {\mathit{\boldsymbol{R}}_{S, {\rm{w}}}}\left[ {\begin{array}{*{20}{c}} {{x_{\rm{w}}}}\\ {{y_{\rm{w}}}}\\ {{z_{\rm{w}}}} \end{array}} \right] + {\mathit{\boldsymbol{T}}_{S, {\rm{w}}}} $
(2)

where RS,w and TS,w are the rotation matrix and translation vector relating the world coordinate system and the grating projection coordinate system; their 12 parameters are obtained by calibration.
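As a sketch of how Eq. (2) is used, a plane given in the grating projection frame can be re-expressed in world coordinates by substituting the rigid transform into the plane equation. The helper below is illustrative only; the function name and the example rotation and translation are assumptions, not calibrated sensor values.

```python
import numpy as np

def stripe_plane_to_world(n_s, d_s, R_sw, T_sw):
    """Re-express a plane n_s . x_S + d_s = 0, given in the grating
    projection frame, in world coordinates.  With x_S = R x_w + T as in
    Eq. (2), substitution gives (R^T n_s) . x_w + (n_s . T + d_s) = 0."""
    n_w = R_sw.T @ n_s
    d_w = n_s @ T_sw + d_s
    return n_w, d_w

# Example with an assumed identity rotation and a pure translation:
# the plane y_S = 0 becomes y_w + 2 = 0 when the frame is shifted by T = (0, 2, 0).
n_w, d_w = stripe_plane_to_world(np.array([0.0, 1.0, 0.0]), 0.0,
                                 np.eye(3), np.array([0.0, 2.0, 0.0]))
```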
Let the equation of the camera-space plane OAPQ be:
$ {A_1}{x_A} + {B_1}{y_A} + {C_1}{z_A} + {D_1} = 0 $
(3)

where A1, B1, C1 and D1 are coefficients. According to the perspective transformation principle of the camera, the relation between the discrete pixel coordinate system Oe,A-Xe,AYe,A and the camera coordinate system OA-xAyAzA can be established as:
$ \lambda \left[ {\begin{array}{*{20}{c}} {{u_A}}\\ {{v_A}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 1&0&{{u_0}}\\ 0&1&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{f_A}}&0&0&0\\ 0&{{f_A}}&0&0\\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{x_A}}\\ {{y_A}}\\ {{z_A}}\\ 1 \end{array}} \right] $
(4)

where λ is a scale factor, fA and fB are the focal lengths of cameras A and B respectively, (u0, v0) are the coordinates of the image center, and (uA, vA) are the discrete pixel coordinates on the image plane of camera A. These parameters are obtained by calibration. Substituting the coordinates of the three points OA, PA and QA, given in the discrete pixel coordinate system, into Eq. (3) yields the values of A1, B1, C1 and D1. Similarly, the equation of the camera-space plane OBPQ is:
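One way to obtain the coefficients A1, B1, C1 and D1 is to note that the plane OAPQ passes through the camera origin OA, so in the camera frame D1 = 0 and the plane normal is the cross product of the two viewing rays back-projected from the pixels of P and Q via the pinhole model of Eq. (4). The sketch below uses a single focal length and hypothetical intrinsics and pixel values:

```python
import numpy as np

def camera_plane(u_p, v_p, u_q, v_q, f, u0, v0):
    """Plane through the camera origin and the viewing rays of two pixels.
    A pixel (u, v) back-projects, per Eq. (4), to the ray direction
    ((u - u0)/f, (v - v0)/f, 1) in the camera frame."""
    r_p = np.array([(u_p - u0) / f, (v_p - v0) / f, 1.0])
    r_q = np.array([(u_q - u0) / f, (v_q - v0) / f, 1.0])
    n = np.cross(r_p, r_q)          # normal (A, B, C) of the plane
    return n[0], n[1], n[2], 0.0    # D = 0: the plane contains the origin

# Hypothetical intrinsics and pixel coordinates of P and Q in camera A:
A1, B1, C1, D1 = camera_plane(400.0, 240.0, 320.0, 300.0,
                              f=500.0, u0=320.0, v0=240.0)
```

The returned plane indeed contains both viewing rays, since the cross-product normal is orthogonal to each of them by construction.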
$ {A_2}{x_B} + {B_2}{y_B} + {C_2}{z_B} + {D_2} = 0 $
(5)

where A2, B2, C2 and D2 are likewise coefficients. The relations between the camera coordinate systems and the world coordinate system are:
$ \left\{ \begin{array}{l} \left[ {\begin{array}{*{20}{c}} {{x_A}}\\ {{y_A}}\\ {{z_A}} \end{array}} \right] = {\mathit{\boldsymbol{R}}_{A, {\rm{w}}}}\left[ {\begin{array}{*{20}{c}} {{x_{\rm{w}}}}\\ {{y_{\rm{w}}}}\\ {{z_{\rm{w}}}} \end{array}} \right] + {\mathit{\boldsymbol{T}}_{A, {\rm{w}}}}\\ \left[ {\begin{array}{*{20}{c}} {{x_B}}\\ {{y_B}}\\ {{z_B}} \end{array}} \right] = {\mathit{\boldsymbol{R}}_{B, {\rm{w}}}}\left[ {\begin{array}{*{20}{c}} {{x_{\rm{w}}}}\\ {{y_{\rm{w}}}}\\ {{z_{\rm{w}}}} \end{array}} \right] + {\mathit{\boldsymbol{T}}_{B, {\rm{w}}}} \end{array} \right. $
(6)

where RA,w, RB,w and TA,w, TB,w are the rotation matrices and translation vectors relating the world coordinate system and the two camera coordinate systems; their 24 parameters are obtained by calibration.
Combining Eqs. (3), (5) and (6) gives the equations of the two camera-space planes in the world coordinate system; combining Eqs. (1) and (2) gives the equation of the grating stripe plane in the world coordinate system. Since points P and Q necessarily lie on all of these planes, solving the simultaneous equations yields the equation of the space line PQ, from which the world coordinates of P and Q are obtained.
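The final intersection step can be sketched numerically: once the planes are expressed in world coordinates, any pair of them containing P and Q intersects in the line PQ, whose direction is the cross product of the two plane normals; a point on the line follows from least squares on the rank-2 linear system. The code below is an illustrative sketch with made-up plane coefficients, not the calibrated sensor values.

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line of intersection of two planes n1 . x + d1 = 0 and n2 . x + d2 = 0.
    Returns a point on the line and the unit direction vector."""
    direction = np.cross(n1, n2)
    # Any point satisfying both plane equations; least squares picks the
    # minimum-norm solution of the underdetermined 2x3 system.
    A = np.vstack([n1, n2])
    b = -np.array([d1, d2])
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point, direction / np.linalg.norm(direction)

# Toy example: the plane z = 0 intersected with the plane x = 1
# gives the line x = 1, z = 0 running along the y axis.
p0, d = plane_intersection_line(np.array([0.0, 0.0, 1.0]), 0.0,
                                np.array([1.0, 0.0, 0.0]), -1.0)
```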
Research of robot navigation vision sensors based on grating projection stereo vision
Abstract: In order to detect obstacles and locate mobile robots in mobile robot navigation under dark environment, a novel visual navigation method based on grating projection stereo vision was proposed. At first, by combining grating projection profilometry of plane structured light with stereo vision technology, the geometric and mathematical models of a grating projection stereo vision sensor were founded. Then, the method of space equipment position constraint and projection plane intersection was used and the 3-D coordinates of objects in the field of view of the robot were calculated. A reliable and realistic method of obstacle detection and analysis was established. After theoretical analysis and experimental verification, a calculated range precision of 0.8mm was obtained. The results show that the method can achieve sub-pixel accuracy in image computation. The study can be used to overcome the problem that robots cannot navigate autonomously in dark environments, and provides a basis for robot navigation without global positioning system support in such environments.
Key words:
- image processing /
- robot navigation /
- vision sensor /
- grating projection stereo vision /
- obstacle