SCIENCE CHINA Information Sciences, Volume 60, Issue 9: 092106(2017) https://doi.org/10.1007/s11432-016-0087-0

Out of sight, out of mind: a distance-aware forgetting strategy for adaptive random testing

More info
  • Received: Aug 29, 2016
  • Accepted: Oct 31, 2016
  • Published: Apr 27, 2017


Adaptive random testing (ART) achieves better failure-detection effectiveness than random testing by increasing the diversity of test cases. However, the effort of ensuring an even spread of test cases inevitably causes an overhead problem. Although two basic forgetting strategies (i.e., random forgetting and consecutive retention) were proposed to reduce the computation cost of ART, they only considered the temporal distribution of test cases. In this paper, we present a distance-aware forgetting strategy for the fixed-size-candidate-set version of ART (DF-FSCS), in which the spatial distribution of test cases is taken into consideration. For a given candidate, the test cases out of its “sight” are ignored to reduce the distance computation cost. At the same time, dynamic adjustment of the partitioning and a second round of forgetting are adopted to ensure the linear complexity of the DF-FSCS algorithm. Both simulation analysis and an empirical study are employed to investigate the efficiency and effectiveness of DF-FSCS. The experimental results show that DF-FSCS significantly outperforms the classical ART algorithm FSCS-ART in efficiency, with comparable failure-detection effectiveness. Compared with the two basic forgetting methods, DF-FSCS is better in both efficiency and effectiveness. In contrast with RBCVT-Fast, a typical linear-time ART algorithm, our algorithm requires less computational overhead and exhibits similar failure-detection capability. In addition, DF-FSCS delivers more reliable performance than RBCVT-Fast in detecting failures for programs with high-dimensional input domains.
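For context, the baseline FSCS-ART that DF-FSCS improves on can be sketched as follows: each round draws $s$ random candidates and executes the one farthest (by minimum distance) from every previously executed test case. This is our own minimal illustration, not the authors' code; `run_test`, `domain`, and `max_tests` are illustrative names, and the $O(s\cdot n)$ distance loop it contains is exactly the cost that the distance-aware forgetting strategy targets.

```python
import math
import random

def fscs_art(run_test, domain, s=10, max_tests=100):
    """Minimal FSCS-ART sketch: pick, among s random candidates, the one
    maximizing the minimum Euclidean distance to all executed test cases."""
    ts = [tuple(random.uniform(lo, hi) for lo, hi in domain)]  # first test case
    while len(ts) < max_tests:
        if run_test(ts[-1]):  # stop once a failure is detected
            return ts
        candidates = [tuple(random.uniform(lo, hi) for lo, hi in domain)
                      for _ in range(s)]
        # O(s * n) distance computations per round: the overhead DF-FSCS reduces
        best = max(candidates,
                   key=lambda c: min(math.dist(c, t) for t in ts))
        ts.append(best)
    return ts
```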

  • Figure 1

    The typical failure patterns in a two-dimensional input domain. Here, the shaded areas represent failure-causing inputs. (a) Block pattern; (b) strip pattern; (c) point pattern.

  • Figure 2

    An illustration of distance-aware forgetting strategy for ART. (a) The basic FSCS-ART; (b) distance computation within the sight of candidate point $c_k$.

  • Figure 3

    (Color online) The trend of the average computation overhead $\overline{C}_\text{DF-FSCS}$. Here, the parameters are set as follows: $\varphi_0=9$, $\tau=5$, $d=2$, $s=10$. (a) $n$ from 0 to $10^4$; (b) $n$ from 0 to $10^6$.

  • Figure 4

    (Color online) The overhead comparison on five algorithms ($n$ from 0 to $2\times 10^4$). (a) DF-FSCS vs. FSCS-ART vs. RBCVT-Fast; (b) DF-FSCS vs. RF-FSCS vs. CR-FSCS.

  • Figure 5

    (Color online) The P-measure results of all 12 programs. (a) airy ($d=1$); (b) bessj0 ($d=1$); (c) erfcc ($d=1$); (d) probks ($d=1$); (e) tanh ($d=1$); (f) bessj ($d=2$); (g) gammq ($d=2$); (h) sncndn ($d=2$); (i) golden ($d=3$); (j) plgndr ($d=3$); (k) cel ($d=4$); (l) el2 ($d=4$).

  • Figure 6

    (Color online) The P-measure results for different values of parameter $\alpha$ in the RBCVT-Fast algorithm. (a) cel ($d=4$); (b) el2 ($d=4$).


    Algorithm 1 DF-FSCS

    Require: (1) The size of the candidate set, denoted as $s$ ($s>1$). (2) The dimension number ($d$) and the initial partition number ($p_0$) for each dimension. (3) The pre-set threshold ($\tau$) on the average number of test cases in a cell.

    Output: The set of test cases $\text{TS}$.

    Input parameters $s$, $d$, $p_0$ and $\tau$;

    Set $p=p_0$, $n=0$, $t=0$ and $\text{TS}=\{\}$;

    Partition each dimension into $p_0$ equal parts, $\varphi_0=p_0^d$, and construct a corresponding indexing structure for cells;

    Randomly select a test case $\text{tc}$ from the input domain;

    while (termination condition is not satisfied) do

    Add $\text{tc}$ into $\text{TS}$, and increment $n$ by 1;

    Localize the cell of $\text{tc}$, and add $\text{tc}$ into the indexing structure; //dynamic adjustment for partitioning from line 8 to 12

    if $n == \tau\cdot 2^{td}\cdot\varphi_0$ then

    $p=2p$, $t=t+1$;

    Re-partition each dimension into $p$ equal parts, and construct a new indexing structure for it;

    Re-locate all test cases in $\text{TS}$ into the new indexing structure;

    end if

    Randomly generate $s$ candidates $c_1, c_2, \dots, c_s$ from the input domain;

    for each candidate $c_k$, where $k=1,2,\dots, s$

    Localize the cell of $c_k$, and find its neighbour region $\text{neig}(c_k)$ according to (2);

    Collect all test cases lying in $\text{neig}(c_k)$ to form the set $\text{TS}_{\text{neig}(c_k)}$;

    if $\text{TS}_{\text{neig}(c_k)}=\emptyset$ then


    break; //exit the current loop from line 14 to 26

    end if

    Call procedure SelectNeighbours($c_k$,$\text{TS}_{\text{neig}(c_k)}$) to get the controlled test subset from $\text{TS}_{\text{neig}(c_k)}$, denote it as $\text{TS}^\text{up}_{\text{neig}(c_k)}$;

    for each test case $\text{tc}_j$ in $\text{TS}^\text{up}_{\text{neig}(c_k)}$

    Calculate the distance from $c_k$ to $\text{tc}_j$, and denote it as $d_{kj}$;

    end for

    Find the shortest one from $\{d_{kj}\} (1\leq j\leq |\text{TS}^\text{up}_{\text{neig}(c_k)}|)$, and denote it as $\text{dist}_\text{ap}(c_k, \textrm{TS})$;

    end for

    Find the candidate whose $\text{dist}_\text{ap}(c_k, \textrm{TS}) (1\leq k\leq s)$ is the largest one, and set it as $\text{tc}$;

    end while

    return $\text{TS}$;
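The cell indexing, neighbour-region lookup, and dynamic re-partitioning that Algorithm 1 relies on can be sketched in Python as below. This is our own illustration over the unit $d$-cube, with illustrative class and method names; the second-round forgetting (procedure SelectNeighbours) is omitted, the re-partition trigger follows the condition $n = \tau\cdot 2^{td}\cdot\varphi_0$ from line L58 of the listing, and the "sight" of a candidate is taken to be the $3^d$ block of cells around it.

```python
import itertools
import math
import random
from collections import defaultdict

class CellIndex:
    """Uniform grid over [0,1)^d supporting the neighbour lookup and
    dynamic re-partitioning used in the DF-FSCS sketch (illustrative names)."""
    def __init__(self, d, p0, tau):
        self.d, self.p, self.tau, self.t = d, p0, tau, 0
        self.phi0 = p0 ** d                  # initial number of cells
        self.cells = defaultdict(list)
        self.n = 0

    def _cell(self, point):
        return tuple(min(int(x * self.p), self.p - 1) for x in point)

    def add(self, tc):
        self.cells[self._cell(tc)].append(tc)
        self.n += 1
        # dynamic adjustment: halve cell size once n = tau * 2^(t*d) * phi0
        if self.n == self.tau * (2 ** (self.t * self.d)) * self.phi0:
            self.p *= 2
            self.t += 1
            old = [tc for bucket in self.cells.values() for tc in bucket]
            self.cells = defaultdict(list)
            for q in old:                    # re-locate all executed test cases
                self.cells[self._cell(q)].append(q)

    def neighbours(self, c):
        """Test cases in the 3^d block of cells around c (its 'sight')."""
        cx = self._cell(c)
        out = []
        for offset in itertools.product((-1, 0, 1), repeat=self.d):
            out.extend(self.cells.get(
                tuple(cx[i] + offset[i] for i in range(self.d)), []))
        return out

def df_fscs(d=2, s=10, p0=3, tau=5, max_tests=50):
    """DF-FSCS sketch: like FSCS-ART, but distances are computed only to
    test cases within each candidate's neighbour region."""
    tc = tuple(random.random() for _ in range(d))
    ts, idx = [tc], CellIndex(d, p0, tau)
    idx.add(tc)
    while len(ts) < max_tests:
        best, best_dist = None, -1.0
        for _ in range(s):
            c = tuple(random.random() for _ in range(d))
            neigh = idx.neighbours(c)
            if not neigh:                    # empty sight: accept c at once
                best = c
                break
            dist = min(math.dist(c, t) for t in neigh)
            if dist > best_dist:
                best, best_dist = c, dist
        ts.append(best)
        idx.add(best)
    return ts
```

Because each candidate only inspects the bounded set of test cases in its neighbour region, and re-partitioning keeps the average cell occupancy near $\tau$, the per-test cost stays roughly constant, which is the source of the linear overall complexity claimed for DF-FSCS.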

Copyright 2019 Science China Press Co., Ltd.