-
In the current literature, circle detection based on the Hough transform (HT) [7-9] is one of the classical algorithms for recognizing circles. HT first detects edges with the Canny operator, maps the edge pixels into a 3-D Hough circle space, and then extracts circles whose accumulator cells collect enough edge pixels [10]. Such methods are not only time- and memory-consuming but also inaccurate, and many parameters must be supplied by the user. Many improved algorithms have been proposed to overcome these drawbacks, such as the randomized Hough transform [11], the fuzzy Hough transform [12] and the point Hough transform [13]. Although these variants alleviate the shortcomings of the classical HT to some extent, their speed still falls short of practical requirements and their accuracy remains limited. For the model studied here, this paper therefore extracts the circle contour points and fits a circle to them by the least-squares method [14-15]. This approach is fast and accurate, locates the circle center reliably, and meets the requirements.
-
During image acquisition the image is deliberately slightly over-exposed to suppress noise caused by the surface texture of the metal mesh, as shown in Fig. 3a. Under normal exposure the surface texture remains clearly visible and interferes with binarization, while heavy over-exposure erodes the edges of the blind holes and directly disturbs contour detection.
After acquisition, the image is processed further to remove residual noise, preserve the integrity of the blind-hole contours, and improve the accuracy of contour recognition. The preprocessing steps used in this paper are binarization and dilation/erosion. The gray-level histogram of the captured image (Fig. 3b) shows a strongly bimodal distribution, which is well suited to binarization. The chosen threshold, however, directly affects the result: as shown in Fig. 4, too low a threshold leaves the blind holes fragmented after processing, while too high a threshold impairs noise removal.
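The paper does not name its threshold-selection rule. For the strongly bimodal histogram described above, Otsu's method is a common automatic choice; a minimal NumPy sketch (the function name is illustrative, and this is not necessarily the authors' procedure):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold that maximizes the between-class variance of
    the gray-level histogram (Otsu's method, assumed here as one
    reasonable choice for a bimodal histogram)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a two-peaked image the returned threshold falls between the peaks, separating hole pixels from background.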
When the surface of the metal mould is relatively rough, adding dilation and erosion improves the robustness of the result and makes the circular contours easier to extract. Binarization may also leave small clusters of stray pixels; dilating or eroding the image removes these clusters and smooths the blind-hole edges, yielding an optimal image.
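The preprocessing steps just described can be sketched in plain NumPy as global thresholding plus 3 × 3 dilation and erosion. This is a simplified illustration, not the authors' implementation (a real pipeline would typically use OpenCV's `threshold`, `dilate` and `erode`); the zero-padded border handling is a simplification:

```python
import numpy as np

def binarize(img, thresh):
    # Global thresholding: pixels brighter than thresh become foreground (1).
    return (img > thresh).astype(np.uint8)

def dilate(binary):
    # 3x3 dilation built from shifted copies: a pixel is set if any
    # neighbour in its 3x3 window is set (borders are zero-padded).
    padded = np.pad(binary, 1)
    h, w = binary.shape
    out = np.zeros_like(binary)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out |= padded[di:di + h, dj:dj + w]
    return out

def erode(binary):
    # Erosion via duality: eroding the foreground equals dilating the
    # background (this zero-padded version also erodes the image border).
    return 1 - dilate(1 - binary)
```

Eroding then dilating (an opening) removes the small pixel clusters mentioned above while approximately preserving the blind-hole contours.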
-
Edge tracking [16-18] represents the detected edge points in a structured form that is convenient for subsequent processing. The circular contours are extracted here with the border-following algorithm described in reference [19]. SUZUKI uses a coding scheme that assigns a distinct integer to each border, thereby identifying the border type and the hierarchy among borders. The coding follows two rules: (1) during each row scan, an outer border or a hole border is identified when one of the following two conditions is met (f(i, j) denotes the pixel value at row i, column j): f(i, j-1) = 0, f(i, j) = 1 (outer border); f(i, j) ≥ 1, f(i, j+1) = 0 (hole border); (2) border numbering: each newly found border is assigned a unique identifier, the border sequence number (number of border, NBD), denoted B. Initially B = 1, and B is incremented each time a new border is found. During the following process, whenever f(p, q) = 1 and f(p, q+1) = 0, f(p, q) is set to -B (f(p, q) here denoting the border sequence number at row p, column q).
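The two scan conditions in rule (1) can be sketched as follows. This is only the start-point test, not SUZUKI's full border-following algorithm (the function name is illustrative):

```python
import numpy as np

def find_border_starts(img):
    """Row-scan a binary image and flag border start points per the two
    conditions above: an outer-border start where f(i, j-1) == 0 and
    f(i, j) == 1, and a hole-border start where f(i, j) >= 1 and
    f(i, j+1) == 0. Pixels outside the image are treated as 0."""
    outer, hole = [], []
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if img[i, j] == 1 and (j == 0 or img[i, j - 1] == 0):
                outer.append((i, j))
            if img[i, j] >= 1 and (j == cols - 1 or img[i, j + 1] == 0):
                hole.append((i, j))
    return outer, hole
```

In the full algorithm each start point then seeds a clockwise or counter-clockwise trace that records the contour pixels and marks them with ±B.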
Following these two rules, the borders in the image can be extracted, and each contour is stored separately as a set of pixel coordinates, giving the set of mesh-dot contour points. Each contour can be described by a series of data points {(xi, yi)} that lie approximately on a circle, from which the circle's parameters can be estimated by the least-squares (LS) method. The equation of a circle can be written as:
$ {\left( {x - {x_{\rm{c}}}} \right)^2} + {\left( {y - {y_{\rm{c}}}} \right)^2} = {R^2} $
(1) where (xc, yc) is the center coordinate and R is the radius. Expanding gives:
$ {x^2} - 2{x_{\rm{c}}}x + x_{\rm{c}}^2 + {y^2} - 2{y_{\rm{c}}}y + y_{\rm{c}}^2 = {R^2} $
(2) Letting a = -2xc, b = -2yc and c = xc² + yc² - R², the general equation of the circle is obtained:
$ {x^2} + {y^2} + ax + by + c = 0 $
(3) The least-squares circle fit requires the sum d of the squared deviations of the contour points {(xi, yi)} from the circle to be minimal. The expression for d is:
$ d = \sum {{{\left[ {\sqrt {{{\left( {{x_i} - {x_{\rm{c}}}} \right)}^2} + {{({y_i} - {y_{\rm{c}}})}^2}} - R} \right]}^2}} $
(4) This objective, however, is awkward to minimize and admits no closed-form solution. Equation (4) is therefore simplified to:
$ \begin{array}{l} d = \sum {{{\left[ {{{\left( {{x_i} - {x_{\rm{c}}}} \right)}^2} + {{\left( {{y_i} - {y_{\rm{c}}}} \right)}^2} - {R^2}} \right]}^2}} = \\ \;\;\;\;\;\;\;\;\sum {{{\left( {x_i^2 + y_i^2 + a{x_i} + b{y_i} + c} \right)}^2}} \end{array} $
(5) By the least-squares principle, the parameters a, b and c are obtained by setting the partial derivatives of the above expression to zero, giving the conditions:
$ \frac{{\partial d}}{{\partial a}} = 0, \frac{{\partial d}}{{\partial b}} = 0, \frac{{\partial d}}{{\partial c}} = 0 $
(6) that is:
$ \left\{ {\begin{array}{*{20}{l}} {\frac{{\partial d}}{{\partial a}} = \sum 2 \left( {x_i^2 + y_i^2 + a{x_i} + b{y_i} + c} \right){x_i} = 0}\\ {\frac{{\partial d}}{{\partial b}} = \sum 2 \left( {x_i^2 + y_i^2 + a{x_i} + b{y_i} + c} \right){y_i} = 0}\\ {\frac{{\partial d}}{{\partial c}} = \sum 2 \left( {x_i^2 + y_i^2 + a{x_i} + b{y_i} + c} \right) = 0} \end{array}} \right. $
(7) Solving yields:
$ \left\{ {\begin{array}{*{20}{l}} {a = \frac{{HD - EG}}{{CG - {D^2}}}}\\ {b = \frac{{HC - ED}}{{{D^2} - GC}}}\\ {c = - \frac{{\sum {\left( {x_i^2 + y_i^2} \right)} + a\sum {{x_i}} + b\sum {{y_i}} }}{N}} \end{array}} \right. $
(8) where C = N∑xi² - (∑xi)², D = N∑xiyi - ∑xi∑yi, E = N∑xi³ + N∑xiyi² - ∑(xi² + yi²)∑xi, G = N∑yi² - (∑yi)², H = N∑yi³ + N∑xi²yi - ∑(xi² + yi²)∑yi, and N is the number of fitted contour points.
From these, the fitted estimates of xc, yc and R follow, giving the center coordinate and the radius:
$ \left\{ {\begin{array}{*{20}{l}} {{x_{\rm{c}}} = - \frac{a}{2}}\\ {{y_{\rm{c}}} = - \frac{b}{2}}\\ {R = \frac{1}{2}\sqrt {{a^2} + {b^2} - 4c} } \end{array}} \right. $
(9) With the above method, the extracted edge contour points of each circular hole can thus be fitted to obtain the blind hole's center coordinate and radius accurately.
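The derivation in (5)-(9) can be sketched compactly: solving the overdetermined linear system [x y 1]·[a b c]ᵀ = -(x² + y²) in the least-squares sense is algebraically equivalent to the normal equations (7), after which the center and radius follow from (9). A sketch, not the authors' code:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit: estimate a, b, c in
    x^2 + y^2 + a*x + b*y + c = 0, then recover the center and
    radius via xc = -a/2, yc = -b/2, R = sqrt(a^2 + b^2 - 4c)/2."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Design matrix [x y 1] and right-hand side -(x^2 + y^2);
    # lstsq solves the same normal equations as (7).
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    xc, yc = -a / 2.0, -b / 2.0
    R = 0.5 * np.sqrt(a * a + b * b - 4.0 * c)
    return xc, yc, R
```

For exact points on a circle the fit is exact; for noisy contour points it minimizes the algebraic residual of (5).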
-
To verify the basic performance of the algorithm, it was analyzed on local images with a resolution of 2464 pixel × 2056 pixel.
-
Blind holes in the same image were detected with both the Hough transform and least-squares circle fitting; the results are shown in Fig. 8, with each detected blind hole marked by a circle. The figure shows that the Hough transform locates the centers of edge-damaged blind holes more precisely than the least-squares method, but is less precise for the blind-hole centers overall. In addition, the detection rate of the Hough transform is lower than that of the least-squares method, and some intact blind holes are missed altogether, as shown in Fig. 8c. Finally, Hough-transform detection requires manual parameter re-tuning whenever the test image changes, a problem the least-squares method does not have.
The number of blind holes detected by the two methods and the time consumed are listed in Table 1. On the same image the least-squares method detects more blind holes, because edge-damaged holes are easier to detect with least squares (although their centers are then less precise) and because the Hough transform misses some holes. The least-squares method is also faster and more efficient. Therefore, when detecting blind holes in a local image, the least-squares method gives the best results, provided that holes at the image border are rejected by a suitable condition and that preprocessing keeps hole damage from growing large enough to corrupt the result.
Table 1. Comparison of Hough transform and least squares
method             test 1               test 2
                   amount    time/ms    amount    time/ms
Hough transform    243       424        363       669
least squares      251       115        369       122

Since the theoretical stage coordinates of the blind holes in the mesh cannot be obtained, the system error was measured as follows: a laser marking machine engraved a single circle on a flat surface, the circle was moved on the stage to the camera center, an image was captured, and the circle center was fitted in pixel coordinates by the least-squares method and compared with the center of the camera frame (1232, 1028). The measured coordinates are listed in Table 2. Taking the absolute difference between each measured coordinate and the true center (1232, 1028) as the error, the overall mean error is (0.41, 0.34), i.e. a center-position error of 0.53 pixel. From the actual geometry corresponding to Fig. 7b, the scale between pixel size and actual geometric size is 0.016 mm/pixel, so the overall system error is 0.008 mm.
Table 2. System measurement and error
number    measurement/pixel       error/pixel
1         (1232.52, 1027.08)      (0.52, 0.92)
2         (1232.26, 1027.76)      (0.26, 0.24)
3         (1232.85, 1028.81)      (0.85, 0.81)
4         (1232.32, 1028.15)      (0.32, 0.15)
5         (1231.60, 1027.67)      (0.40, 0.33)
6         (1232.33, 1028.40)      (0.33, 0.40)
7         (1231.49, 1027.99)      (0.51, 0.01)
8         (1231.90, 1027.81)      (0.10, 0.19)
9         (1232.31, 1028.36)      (0.31, 0.36)
10        (1231.55, 1028.03)      (0.45, 0.03)
-
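The error figures quoted above can be reproduced directly from the Table 2 data (a verification sketch of the arithmetic, not code from the paper):

```python
# Mean absolute deviation of the ten fitted centers from the frame
# center (1232, 1028), then conversion to millimetres at 0.016 mm/pixel.
measurements = [
    (1232.52, 1027.08), (1232.26, 1027.76), (1232.85, 1028.81),
    (1232.32, 1028.15), (1231.60, 1027.67), (1232.33, 1028.40),
    (1231.49, 1027.99), (1231.90, 1027.81), (1232.31, 1028.36),
    (1231.55, 1028.03),
]
cx, cy = 1232.0, 1028.0
ex = sum(abs(x - cx) for x, y in measurements) / len(measurements)
ey = sum(abs(y - cy) for x, y in measurements) / len(measurements)
err_px = (ex ** 2 + ey ** 2) ** 0.5
err_mm = err_px * 0.016
# ex ~ 0.41, ey ~ 0.34, err_px ~ 0.53 pixel,
# err_mm ~ 0.0085 mm (reported as 0.008 mm after rounding)
```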
The result of actual matching with the neighborhood-feature method described in the preceding section is shown in Fig. 9, with the captured blind holes marked by circles. Based on the edge-point features of the local image and the overall image, the method matches the two precisely, with a matching time below 0.05 s. By selecting two pairs of corresponding feature points from the local image and the theoretical image, the offset and rotation angle can be computed, from which the positions of the blind holes in the stage coordinate system are obtained from their coordinates in the theoretical image.
When a single local image is used for matching, the positioning error grows linearly with the distance from the reference feature point, so the computed center position at the far end deviates greatly from the true position. This paper therefore captures two local images and selects one feature point in each to compute the offset, rotation angle and scale factor, which greatly improves the positioning accuracy: the maximum error is below 0.02 mm, meeting the experimental requirements. The original manual positioning method, by contrast, achieves an accuracy of only 0.32 mm.
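Recovering the offset, rotation angle and scale factor from two corresponding feature points, as described above, can be sketched with the standard two-point similarity-transform construction (point and function names are illustrative; this is one common construction, not necessarily the authors' implementation):

```python
import math

def similarity_from_two_pairs(p1, p2, q1, q2):
    """Recover scale s, rotation theta and offset t of the similarity
    transform q = s * R(theta) @ p + t from two point correspondences
    p1 -> q1, p2 -> q2 (e.g. theoretical-image points mapped to stage
    coordinates)."""
    # Vector between the two feature points in each coordinate system.
    dpx, dpy = p2[0] - p1[0], p2[1] - p1[1]
    dqx, dqy = q2[0] - q1[0], q2[1] - q1[1]
    # Scale is the ratio of vector lengths; rotation is the angle between them.
    s = math.hypot(dqx, dqy) / math.hypot(dpx, dpy)
    theta = math.atan2(dqy, dqx) - math.atan2(dpy, dpx)
    # Offset follows from transforming p1 and comparing with q1.
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    tx = q1[0] - s * (cos_t * p1[0] - sin_t * p1[1])
    ty = q1[1] - s * (sin_t * p1[0] + cos_t * p1[1])
    return s, theta, (tx, ty)

def apply_transform(s, theta, t, p):
    # Map a point through the recovered similarity transform.
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (s * (cos_t * p[0] - sin_t * p[1]) + t[0],
            s * (sin_t * p[0] + cos_t * p[1]) + t[1])
```

Once s, theta and t are known, `apply_transform` maps every theoretical blind-hole center into the stage coordinate system.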
Research of laser pointing location algorithm based on neighborhood characteristics feature
-
Abstract: In order to solve the difficulty of locating large-size workpieces with many blind holes in a mesh pattern during laser processing, local pictures were taken with an industrial camera. According to the structural and distribution characteristics of the blind holes in the mesh, blind holes in the local images were detected by least-squares circle fitting, feature points in the mesh were found with a neighborhood-feature method, and feature-point matching was used to locate the whole workpiece from the local images, achieving precise and efficient positioning. The results show that the neighborhood-feature method locates the blind holes in the mesh with an accuracy of 0.02 mm, realizing effective positioning of the blind holes and solving the problem of locating mesh workpieces. This study provides a basis for subsequent research such as laser processing inside the blind holes.
-
-
[1] MA X D. Software design of intelligent laser flying processing based on OpenCV visual measurement[D]. Wuhan: Huazhong University of Science and Technology, 2016: 1-5 (in Chinese).
[2] LI Q. Some research on laser micromachining applied in electronic industry[D]. Hangzhou: Zhejiang University, 2010: 1-23 (in Chinese).
[3] SUN Sh F, LIAO H P, WU X H, et al. Experimental study about micro hole processing by picosecond laser[J]. Laser Technology, 2018, 42(2): 234-238 (in Chinese).
[4] WEN H, WANG M J, TANG P Sh. An algorithm on graphic joining[J]. Computer Applications and Software, 2000, 17(2): 26-29 (in Chinese).
[5] GUO J. The research of binarization method based on non-uniform illumination images[D]. Wuhan: Wuhan University of Science and Technology, 2013: 5-43 (in Chinese).
[6] YANG K, ZENG L B, WANG D Ch. A fast arithmetic for the erosion and dilation operations of mathematical morphology[J]. Computer Engineering and Applications, 2005, 41(34): 54-56 (in Chinese).
[7] SHEN X P, PENG G, YUAN Zh Q. Insulator location method based on Hough transformation and RANSAC algorithm[J]. Electronic Measurement Technology, 2017, 40(6): 138-143 (in Chinese).
[8] LIU G Ch, DENG G W. Measurement of geometric characteristics of the electronic component based on OpenCV[J]. Information Technology, 2015(7): 165-169 (in Chinese).
[9] LIU F L, QIAO G F, ZOU B. Precise measurement of circles in industrial computed tomographic images[J]. Optics and Precision Engineering, 2009, 17(11): 2842-2848 (in Chinese).
[10] MAO Q Zh, PAN Zh M, GAO W W. Using iterative Hough round transform and connected area to count steel bars reliably[J]. Geomatics and Information Science of Wuhan University, 2014, 39(3): 373-378 (in Chinese).
[11] XU L. Randomized Hough transform (RHT): basic mechanisms, algorithms, and computational complexities[J]. CVGIP: Image Understanding, 1993, 57(2): 131-154. doi: 10.1006/ciun.1993.1009
[12] PHILIP K P, DOVE E L, McPHERSON D D, et al. The fuzzy Hough transform-feature extraction in medical images[J]. IEEE Transactions on Medical Imaging, 1994, 13(2): 235-240.
[13] LIN J L, SHI Q Y. Circle recognition through a point Hough transformation[J]. Computer Engineering, 2003, 29(11): 17-18 (in Chinese).
[14] AN P Y. Research on detection technology of multi-circle workpiece based on machine vision[D]. Hangzhou: Zhejiang Sci-Tech University, 2018: 35-42 (in Chinese).
[15] YU P, JIANG L X, WANG A Ch, et al. Center of circle detection of the edge missing circle[J]. Geomatics Spatial Information Technology, 2018, 41(7): 207-211 (in Chinese).
[16] ZHANG X Q, WANG J J, JIANG L Y. Circle recognition algorithm based on Freeman chain code[J]. Computer Engineering, 2007, 33(15): 196-198 (in Chinese).
[17] CHU G L. Study on the key technologies of automatic identification for cooperative target on spacecraft[D]. Changchun: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 2015: 13-30 (in Chinese).
[18] TANG L L, ZHANG Q C, HU S. An improved algorithm for Canny edge detection with adaptive threshold[J]. Opto-Electronic Engineering, 2011, 38(5): 127-132 (in Chinese).
[19] SUZUKI S, ABE K. Topological structural analysis of digitized binary images by border following[J]. Computer Vision, Graphics, and Image Processing, 1985, 30(1): 32-46. doi: 10.1016/0734-189X(85)90016-7
[20] ZHAO L L, GENG G H, LI K, et al. Images matching algorithm based on SURF and fast approximate nearest neighbor search[J]. Application Research of Computers, 2013, 30(3): 921-923 (in Chinese).
[21] GAO J, WU Y F, WU K, et al. Image matching method based on corner detection[J]. Chinese Journal of Scientific Instrument, 2013, 34(8): 1717-1725 (in Chinese).