# 4.7 Permutation test

The permutation test is a nonparametric two-sample test. Consider a random sample $\{z_1,\dots,z_M\}$ drawn from an unknown distribution $\mathbf{z}\sim F_{\mathbf{z}}(\cdot)$ and a random sample $\{y_1,\dots,y_N\}$ drawn from an unknown distribution $\mathbf{y}\sim F_{\mathbf{y}}(\cdot)$. Let the null hypothesis be that the two distributions are identical, regardless of their analytical forms.

Consider an order-independent test statistic computed on the observed data and call it $t(D_N,D_M)$. The rationale of the permutation test is to locate $t(D_N,D_M)$ with respect to the distribution it would have if the null hypothesis were true. To build this null distribution, all the $R=\binom{M+N}{M}$ possible partitions of the $N+M$ observations into two subsets of sizes $M$ and $N$ are considered. If the null hypothesis were true, all partitions would be equally likely. For each partition $i$ ($i=1,\dots,R$) the permutation test computes the statistic $t^{(i)}$. Eventually, the observed value $t(D_N,D_M)$ is compared with the set of values $t^{(i)}$: if $t(D_N,D_M)$ falls in one of the two $\alpha/2$ tails of the $t^{(i)}$ distribution, the null hypothesis is rejected with type I error $\alpha$.
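The exhaustive procedure above can be sketched in a few lines of base R; the data values below are purely illustrative and `combn()` is used here instead of the `bincombinations()` approach of the worked example later in this section.

```r
# Minimal sketch of an exhaustive permutation test (illustrative data).
z <- c(5.1, 6.3, 4.8, 5.9)          # sample of size M from F_z
y <- c(7.2, 6.8, 7.5)               # sample of size N from F_y
pool <- c(z, y)                     # pooled observations
M <- length(z)
t.obs <- mean(z) - mean(y)          # observed statistic t(D_N, D_M)

# each column of idx selects which M pooled values play the role of z
idx <- combn(length(pool), M)
t.null <- apply(idx, 2, function(i) mean(pool[i]) - mean(pool[-i]))

# two-sided tail probability of t.obs under the null distribution
p.val <- min(1, 2 * min(mean(t.null <= t.obs), mean(t.null >= t.obs)))
```

Here `ncol(idx)` equals $R=\binom{M+N}{M}$, i.e. $\binom{7}{4}=35$ for these toy samples.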

Unless $M$ and $N$ are small, the exhaustive permutation procedure involves a substantial amount of computation. When the number of partitions is too large to enumerate, the null distribution can be approximated by drawing a large random sample of $R$ permutations.

Note that when the observations are drawn from a normal distribution, it can be shown that the permutation test gives results close to those obtained with the $t$ test.
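This closeness can be checked empirically; the sketch below (an illustrative simulation, not from the text) compares the Monte Carlo permutation p-value with the classical two-sample $t$ test p-value on normal data.

```r
# Compare permutation p-value with the t test p-value on normal samples.
set.seed(1)                      # illustrative seed
z <- rnorm(20, mean = 0)
y <- rnorm(20, mean = 0.5)
pool <- c(z, y)
M <- length(z)
t.obs <- mean(z) - mean(y)
t.null <- replicate(5000, {      # Monte Carlo null distribution
  i <- sample(length(pool), M)
  mean(pool[i]) - mean(pool[-i])
})
p.perm <- min(1, 2 * min(mean(t.null <= t.obs), mean(t.null >= t.obs)))
p.t <- t.test(z, y)$p.value      # classical two-sample t test
# for normal data the two p-values are typically close
```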

### Example

Let us consider $D_5=[74,86,98,102,89]$ (five observations) and $D_3=[10,25,80]$ (three observations). We run a permutation test ($R=\binom{8}{5}=56$ partitions) to test the hypothesis that the two sets come from the same distribution.

```r
library(e1071)                   # provides bincombinations()

# dispo(v, d): all subsets of size d of the vector v, one subset per row
dispo <- function(v, d) {
  dis <- NULL
  n <- length(v)
  B <- bincombinations(n)        # all 2^n binary vectors of length n
  IB <- apply(B, 1, sum)
  BB <- B[which(IB == d), ]      # keep the vectors selecting exactly d items
  for (i in 1:NROW(BB)) {
    dis <- rbind(dis, v[which(BB[i, ] > 0)])
  }
  dis
}

D1 <- c(74, 86, 98, 102, 89)
D2 <- c(10, 25, 80)
alpha <- 0.1
M <- length(D1)
N <- length(D2)
D <- c(D1, D2)                   # pooled sample
t <- mean(D[1:M]) - mean(D[(M + 1):(M + N)])   # observed statistic

Dp <- dispo(D, M)                # all choose(M+N, M) subsets of size M
tp <- numeric(nrow(Dp))
for (p in 1:nrow(Dp)) {
  # statistic of the p-th partition (the pooled values are distinct,
  # so setdiff() returns the complementary subset)
  tp[p] <- mean(Dp[p, ]) - mean(setdiff(D, Dp[p, ]))
}
tp <- sort(tp)
q.inf <- sum(t < tp) / length(tp)   # fraction of null values above t
q.sup <- sum(t > tp) / length(tp)   # fraction of null values below t
hist(tp, main = "")
abline(v = t, col = "red")
if ((q.inf < alpha / 2) | (q.sup < alpha / 2)) {
  title(paste("Hypothesis D1=D2 rejected: p-value=",
              round(min(c(q.inf, q.sup)), 2), " alpha=", alpha))
} else {
  title(paste("Hypothesis D1=D2 not rejected: p-value=",
              round(min(c(q.inf, q.sup)), 2), " alpha=", alpha))
}
```

Here $t(D_5,D_3)=\hat{\mu}(D_5)-\hat{\mu}(D_3)=51.46$. Figure 4.2 shows the position of $t(D_5,D_3)$ with respect to the null sampling distribution.

$\bullet $