YANG Zhifang,ZHANG Liya,HAO Bonan,et al. Conveyor belt deviation detection method based on dual flow network[J]. Coal Science and Technology,2023,51(S2):1−9. doi: 10.13199/j.cnki.cst.2023-0215

Conveyor belt deviation detection method based on dual flow network

Among traditional belt edge detection methods, contact-based detection is costly, while non-contact detection lacks precision. With the development of artificial intelligence, methods based on convolutional neural networks (CNNs) can effectively improve detection accuracy, but the local nature of the convolution operation limits their perception of long-range and global information, making it difficult to further improve belt edge detection accuracy. To address these problems: ① By combining the CNN's ability to extract local features with the Transformer's ability to perceive global, long-range information, a dual-flow Transformer network (DFTNet) that fuses global and local information is proposed. This edge detection network improves belt edge detection accuracy and suppresses interference from belt image noise and background. ② CNN and Transformer feature fusion modules are designed to form a dual-flow encoder-decoder structure. This design better integrates global context information, avoids pre-training the Transformer on large-scale datasets, and can be flexibly adjusted. ③ From multi-scene conveyor belt images collected in actual industrial settings, a belt conveyor dataset covering five different scenes, various angles, and different positions is constructed. Experiments show that the proposed DFTNet achieves the best overall performance, with an mIoU of 91.08%, ACC of 99.48%, mPrecision of 91.88%, and mRecall of 96.22%, improvements of 25.36%, 0.29%, 17.70%, and 29.46% respectively over the pure convolutional network HRNet, and of 29.5%, 0.32%, 24.77%, and 34.13% respectively over FCN. Moreover, DFTNet processes images at 53.07 fps, meeting real-time industrial requirements and demonstrating strong practical value.
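The metrics reported above (mIoU, ACC, mPrecision, mRecall) are standard semantic-segmentation measures. Assuming the paper uses the conventional definitions, a minimal sketch of how they are computed from a per-class pixel confusion matrix (the numbers below are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute standard segmentation metrics from a KxK confusion matrix.

    conf[i, j] = number of pixels with ground-truth class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                 # correctly classified pixels per class
    fp = conf.sum(axis=0) - tp         # false positives per class (column sums)
    fn = conf.sum(axis=1) - tp         # false negatives per class (row sums)
    iou = tp / (tp + fp + fn)          # per-class intersection over union
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "mIoU": iou.mean(),            # mean IoU over classes
        "ACC": tp.sum() / conf.sum(),  # overall pixel accuracy
        "mPrecision": precision.mean(),
        "mRecall": recall.mean(),
    }

# Illustrative two-class case (background vs. belt edge)
m = segmentation_metrics([[980, 20],
                          [10, 90]])
```

For a two-class belt-edge task, the confusion matrix is 2×2 and the means are taken over both classes, which is why mIoU can be much lower than pixel accuracy when the edge class occupies few pixels.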