
SCIENTIA SINICA Informationis, Volume 49, Issue 2: 188-203 (2019) https://doi.org/10.1360/N112018-00205

Co-optimization of ethnic-pattern segmentation based on hierarchical patch matching

  • Received: Aug 2, 2018
  • Accepted: Oct 29, 2018
  • Published: Feb 20, 2019

Abstract

The segmentation of ethnic patterns is vital for the digital analysis of ethnic cultures. Although existing image segmentation methods can properly segment natural images, they fail to preserve the main structure of ethnic patterns and require extensive user interaction to segment them. This paper presents a co-optimization method that segments ethnic patterns using hierarchical patch matching. Exploiting the repetitive characteristics of ethnic patterns, the method first detects all similar patterns automatically using a global patch match. Second, the relative orientation between similar patterns is estimated by a local patch match, and an accurate dense correspondence is constructed by a constrained patch match. Finally, the pre-segmentations of the ethnic patterns are co-optimized to preserve their main structures. Our method segments all similar ethnic patterns into separate elements with dense correspondence. Besides reducing user interaction and improving segmentation accuracy, the proposed method improves the quality of digital analyses of ethnic patterns, such as vectorization. Experiments demonstrate the validity of our method.
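
To make the detection step concrete, the following minimal Python sketch (not the paper's implementation) locates occurrences of a user-selected exemplar patch by brute-force comparison of grayscale windows over a few candidate rotations; the paper instead relies on the randomized PatchMatch search [20, 21] to make this tractable, and the function name detect_similar, the angle set, and the threshold below are illustrative assumptions.

import numpy as np
from scipy import ndimage

def detect_similar(image, patch, angles=(0, 90, 180, 270), threshold=0.01):
    # image, patch: 2D grayscale arrays with values in [0, 1].
    # Returns (row, col, angle) triples where a rotated copy of `patch`
    # matches the image window whose top-left corner is (row, col).
    h, w = patch.shape
    hits = []
    for angle in angles:
        rotated = ndimage.rotate(patch, angle, reshape=False)
        for r in range(image.shape[0] - h + 1):
            for c in range(image.shape[1] - w + 1):
                window = image[r:r + h, c:c + w]
                mse = np.mean((window - rotated) ** 2)  # mean squared patch difference
                if mse < threshold:
                    hits.append((r, c, angle))
    return hits

The quadratic scan is only for readability; a randomized propagation scheme such as PatchMatch serves the same purpose on full-resolution pattern images.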


Funded by

National Natural Science Foundation of China (61272309)

Natural Science Foundation of Zhejiang Province (LY18F020033)

Open Project of the State Key Laboratory of CAD&CG, Zhejiang University (A1818)

Science Foundation of Zhejiang Sci-Tech University (17032001-Y)


References

[1] Kanungo T, Mount D M, Netanyahu N S. An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans Pattern Anal Machine Intell, 2002, 24: 881-892

[2] Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Machine Intell, 2002, 24: 603-619

[3] Cheng X, Zeng M, Liu X. Feature-preserving filtering with $L_0$ gradient minimization. Comput Graphics, 2014, 38: 150-157

[4] Xu L, Lu C, Xu Y, et al. Image smoothing via $L_0$ gradient minimization. ACM Trans Graph, 2011, 30: 1--12.

[5] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 2014. 580--587.

[6] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, 2015. 3431--3440.

[7] Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Machine Intell, 2004, 26: 1124-1137

[8] Rother C, Kolmogorov V, Blake A. GrabCut: interactive foreground extraction using iterated graph cuts. ACM Trans Graph, 2004, 23: 309-314

[9] Wang X Y, Xiang J, Pan R R, et al. Automatic extraction of embroidered patterns of traditional costumes. J Textile Res, 2017, 38: 120--126.

[10] Wang Y J, Zhou J X, Xu T W. Application of improved fuzzy c-means algorithm for ethnic costume image segmentation. Comput Eng, 2017, 43: 261--267.

[11] Han H M, Yao L, Wan Y. Fiber image segmentation based on K-means and GVF snake model. J Donghua Univ (Nat Sci), 2011, 37: 66--71.

[12] Bay H, Tuytelaars T, Van Gool L. SURF: speeded up robust features. In: Proceedings of the European Conference on Computer Vision, Graz, 2006. 404--417.

[13] Lowe D G. Object recognition from local scale-invariant features. In: Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, 1999. 1150--1157.

[14] Rublee E, Rabaud V, Konolige K, et al. ORB: an efficient alternative to SIFT or SURF. In: Proceedings of the IEEE International Conference on Computer Vision, Barcelona, 2011. 2564--2571.

[15] Igarashi T, Moscovich T, Hughes J F. As-rigid-as-possible shape manipulation. ACM Trans Graph, 2005, 24: 1134-1141

[16] Brown M, Lowe D G. Automatic panoramic image stitching using invariant features. Int J Comput Vision, 2007, 74: 59-73

[17] Belongie S, Malik J, Puzicha J. Shape context: a new descriptor for shape matching and object recognition. In: Advances in Neural Information Processing Systems, 2001. 831--837.

[18] Ling H B, Jacobs D W. Shape classification using the inner-distance. IEEE Trans Pattern Anal Mach Intell, 2007, 29: 286-299

[19] Cheng M M, Zhang F L, Mitra N J, et al. RepFinder: finding approximately repeated scene elements for image editing. ACM Trans Graph, 2010, 29: 83

[20] Barnes C, Shechtman E, Finkelstein A, et al. PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph, 2009, 28: 24

[21] Barnes C, Shechtman E, Goldman D B, et al. The generalized PatchMatch correspondence algorithm. In: Proceedings of the European Conference on Computer Vision, Crete, 2010. 29--43.

[22] Lukáč M, Sýkora D, Sunkavalli K. Nautilus: recovering regional symmetry transformations for image editing. ACM Trans Graph, 2017, 36: 1-11

  • Figure 1

    (Color online) The drawbacks of existing segmentation methods for ethnic cultural patterns and our improvement based on segmentation co-optimization. The objects being optimized are the surrounding petal patterns; different colors indicate different primitive regions in (b), (c), and (d); in (d), the color correspondence between the left and right zoom-in sub-figures indicates the consistency of our segmentation. (a) Input image; (b) mean-shift [1]; (c) $L_0$ segmentation [2]; (d) our result

  • Figure 2

    (Color online) The overall framework of our method. (a) User interaction; (b) object detection; (c) dense matching map of a partial region; (d) pre-segmentation; (e) our result

  • Figure 3

    (Color online) Comparison of single-target selection methods. The top sub-figure in (c) shows the result of the traditional GrabCut method, and the bottom sub-figure in (c) shows our result. The local background area enables our method to avoid interference from similar patterns in the background. (a) Input image with user interaction; (b) region partition; (c) segmentation comparison

  • Figure 4

    (Color online) Iterative automatic detection of multiple objects. (a) User interaction; (b) first global patch matching; (c) second global patch matching; (d) initial segmentation; (e) first segmentation; (f) second segmentation

  • Figure 5

    (Color online) Our dense match results based on rotation angle alignment. (a) Matching ambiguity; (b) dominant direction estimation; (c) our improvement

  • Figure 6

    (Color online) Illustration of our co-optimization effect; arrows represent the correspondence between the primitives before and after co-optimization. (a) Primitive correspondence before co-optimization; (b) primitive correspondence after co-optimization

  • Figure 7

    (Color online) Comparison with state-of-the-art segmentation methods. Our method has far fewer under-segmentation and over-segmentation issues and is thus closest to the ground truth. (a) Input image; (b) $L_0$ segmentation [2]; (c) color threshold merge; (d) mean-shift [1]; (e) our result using mean-shift; (f) our result using $L_0$ segmentation; (g) ground truth

  • Figure 8

    (Color online) Comparison of the segmentation results on the ethnic carpet; arrows point out the advantages of our method. (a) Input image; (b) $L_0$ segmentation [2] ($\lambda=0.06$); (c) $L_0$ segmentation [2] ($\lambda=0.08$); (d) our result

  • Figure 9

    (Color online) Comparison of the segmentation results on national costumes; arrows point out the advantages of our method. (a) Input image; (b) segmentation without co-optimization; (c) segmentation with co-optimization

  • Figure 10

    (Color online) Comparison of segmentation co-optimization results on natural images; arrows point out the advantages of our method. (a) Input image; (b) $L_0$ segmentation; (c) our result; (d) mean-shift [1] ($h_s$, $h_r$) = (31, 18); (e) mean-shift [1] ($h_s$, $h_r$) = (20, 12); (f) failure cases

  • Figure 11

    (Color online) Comparison of segmentation co-optimization accuracy. Arrows point out drawbacks of the segmentation method based on $L_0$ gradient minimization. Error maps emphasize the accuracy of our method. (a) Input image; (b) $L_0$ segmentation; (c) our result; (d) ground truth; (e) $L_0$ segmentation [2] error map; (f) our error map

  • Figure 12

    (Color online) Our vectorization results and comparison with VectorMagic; the blue dots and lines are the corresponding vector graphics. (a) Four similar patterns from the input image; (b) vectorization based on our segmentation results; (c) vectorization by the commercial software VectorMagic

  • Table 1  Experimental data statistics

    Image | Size | No. of patterns | Multi-target selection (s) | Pre-segmentation (s) | Collaborative optimization (s) | Pre-segmentation accuracy (%) | Our result accuracy (%)
    Petal (Figure 1) | 1000$\times$1000 | 16 | 8.629 | 3.436 | 12.465 | 83.2 | 90.2
    Disc (Figure 3) | 1000$\times$936 | 6 | 70.512 | 2.91 | 26.431 | 87.9 | 92.4
    Ethnic clothing (Figure 7) | 1001$\times$945 | 2 | 60.068 | 3.368 | 12.603 | 80.1 | 86.5
    Carpet (Figure 8) | 1025$\times$827 | 20 | 31.307 | 2.807 | 17.564 | 86.6 | 91.8
    Ethnic clothing (Figure 9) | 894$\times$726 | 2 | 46.532 | 2.506 | 11.352 | 82.8 | 90.3
    Butterfly (Figure 10) | 1024$\times$727 | 2 | 127.157 | 2.327 | 15.234 | 82.3 | 93.6

    Algorithm 1 Co-optimization of segmentation primitives

    Require: similar patterns $\{E^k\},~k\in~[1,N]$, primitives $\{E^k_i\},~i\in[1,N_k]$, and dense correspondences $\Psi(E^k)\to~E^s$ between similar patterns;

    Output: all primitives $\{E^k_i\}$ after co-optimization by traversal;

    $k~\Leftarrow~0$;

    while $k~\neq~N$ do

    $i~\Leftarrow~0$;

    while $i~\neq~N_k$ do

    $E^k_j~\Leftarrow~{\rm~FindAdjacent}(E^k_i)$;

    if $E^k_j~\neq~\emptyset$ then

    if $\bigcup_{s=1,s\neq~k}^{s=N}~\Big(\Omega(E^k_i,~\Psi(E^k)\to~E^s)~\bigcap~\Omega(E^k_j,~\Psi(E^k)\to~E^s)\Big)~\neq~\emptyset$ then

    $E^k_j~\Leftarrow~E^k_i~\cup~E^k_j$;

    end if

    end if

    $i~\Leftarrow~i~+~1$;

    end while

    $k~\Leftarrow~k~+~1$;

    Output the new $\{E^k_i\}$

    end while
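
    For readers who prefer executable code, the following Python transcription of Algorithm 1 is a minimal sketch under stated assumptions: each primitive is a set of pixel coordinates, each dense correspondence $\Psi(E^k)\to~E^s$ is a dict mapping pixels of $E^k$ to pixels of $E^s$, and the helpers map_primitive and find_adjacent are simple stand-ins for $\Omega(\cdot)$ and FindAdjacent rather than the paper's actual routines.

    def map_primitive(primitive, mapping):
        # Warp a primitive (set of (x, y) pixels) into another pattern via the dense map.
        return {mapping[p] for p in primitive if p in mapping}

    def find_adjacent(primitives, i):
        # Return the index of a primitive that touches primitives[i], or None.
        neighbours = {(x + dx, y + dy) for (x, y) in primitives[i]
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        for j, prim in enumerate(primitives):
            if j != i and prim & neighbours:
                return j
        return None

    def co_optimize(patterns, correspondences):
        # patterns[k]: list of primitives (pixel sets) of pattern E^k.
        # correspondences[k][s]: dict mapping pixels of E^k to pixels of E^s.
        N = len(patterns)
        for k in range(N):
            primitives = patterns[k]
            for i in range(len(primitives)):
                j = find_adjacent(primitives, i)
                if j is None:
                    continue
                # Merge E^k_i into E^k_j if their images overlap in any other
                # similar pattern, i.e. they map to one primitive there.
                if any(map_primitive(primitives[i], correspondences[k][s])
                       & map_primitive(primitives[j], correspondences[k][s])
                       for s in range(N) if s != k):
                    primitives[j] = primitives[i] | primitives[j]
        return patterns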

Copyright 2019 Science China Press Co., Ltd. All rights reserved.
