
We first provide an overview of the package emplik.
Then we give some longer examples.

The naming convention: functions named
el.***.**  are for mean parameters;
emplikH.*** are for hazard parameters;
bj*** or BJ*** are for Buckley-James estimators;
WReg*** are for weighted regression (case-weighted AFT models).
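For instance, here is a minimal sketch of a mean-parameter test with el.cen.EM2.
The data values are purely illustrative (a small made-up right-censored sample);
only the function call pattern matters.

library(emplik)

## Made-up right-censored sample: d = 1 means observed, d = 0 means censored.
x <- c(1.0, 1.5, 2.0, 3.0, 4.2, 5.0, 6.1, 5.3, 4.5, 0.9, 2.1, 4.3)
d <- c(1,   1,   0,   1,   0,   1,   1,   1,   1,   0,   0,   1)

## Empirical likelihood ratio test of H_0: E(X) = 3.5.
## The returned list contains "-2LLR", to be compared with a chi-square(1) quantile.
el.cen.EM2(x=x, d=d, fun=function(t){t}, mu=3.5)$"-2LLR"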


The example below finds a confidence interval for a point on the ROC curve.

## Here is an example of finding the confidence interval for R(t0), with t0=0.5.
## Note: we are finding the confidence interval of R(0.5), so we are testing
## R(0.5) = 0.35, 0.36, 0.37, 0.38, etc.
## We try to find values L and U such that testing R(0.5) = L or R(0.5) = U
## gives a p-value of 0.10;
## then [L, U] is the 90% confidence interval for R(0.5).


> set.seed(123)
> t1 <- rexp(200)
> t2 <- rexp(200)
> ROCnp( t1=t1, d1=rep(1, 200), t2=t2, d2=rep(1, 200), b0=0.5, t0=0.5)$"-2LLR"

###  Since the -2LLR value is less than 2.705543 = qchisq(0.9, df=1), the 90% confidence interval
###  contains b0=0.5.

 
> gridpoints <- 350:650/1000
> ELvalues <- gridpoints
> for( i in 1:301 ) ELvalues[i] <- ROCnp( t1=t1, d1=rep(1, 200), t2=t2, d2=rep(1, 200), b0=gridpoints[i], t0=0.5)$"-2LLR"
> myfun1 <- approxfun(x=gridpoints, y=ELvalues)
> 
> uniroot( f= function(x){myfun1(x)-2.705543}, interval= c(0.35, 0.5) )
$root
[1] 0.4478605

$f.root
[1] -6.778798e-05

$iter
[1] 5

$estim.prec
[1] 9.502673e-05

> uniroot( f= function(x){myfun1(x)-2.705543}, interval= c(0.5, 0.65) )
$root
[1] 0.5883669

$f.root
[1] 0.007179575

$iter
[1] 7

$estim.prec
[1] 6.103516e-05

### So, the 90% confidence interval in this case is [0.4478605,   0.5883669]
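The grid-plus-approxfun step can also be skipped by calling uniroot on the ROCnp
statistic directly. Below is a minimal sketch of such a wrapper; ROCci is a
hypothetical helper name (not part of the package), and it assumes the user
supplies a value bhat where the -2LLR is below the cutoff, plus two bracketing
endpoints where it is above the cutoff.

## Hypothetical helper: invert the ROCnp test to get a level-"level"
## confidence interval for R(t0), finding each endpoint with uniroot.
ROCci <- function(t1, d1, t2, d2, t0, bhat, lower, upper, level=0.90) {
   cutoff <- qchisq(level, df=1)
   g <- function(b) ROCnp(t1=t1, d1=d1, t2=t2, d2=d2, b0=b, t0=t0)$"-2LLR" - cutoff
   L <- uniroot(g, interval=c(lower, bhat))$root
   U <- uniroot(g, interval=c(bhat, upper))$root
   c(L, U)
}

## With the same data as above, this should closely reproduce the interval found above.
ROCci(t1, rep(1,200), t2, rep(1,200), t0=0.5, bhat=0.5, lower=0.35, upper=0.65)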


############################################################################

Next example: finding a confidence interval for the ratio of two medians
(or two residual medians) from two independent samples.

First, a fact: we are going to test the hypothesis H_0: M1/M2 = C for a
given constant C. If we are able to find the P-value of this hypothesis
for any value of C, then a confidence interval for the ratio is the
collection of C values whose corresponding P-value is > 0.10 (for a 90%
confidence interval).

Next we demonstrate how one can find the P-value for such a hypothesis.

Consider another, related hypothesis H_0^*: M1 = C*A, M2 = A,
for a given constant A.
This hypothesis is easy to test, since the two constraints are separate: we use
the empirical likelihood test of (M1 = C*A) based on
sample 1 only, and the empirical likelihood test of (M2 = A)
based on sample 2 only. The two samples are independent.

We compute the sum of the two test statistics,
  M(A) = -2 log ELR1(A) - 2 log ELR2(A),
which is a function of A.

We then compute the infimum of this sum over A,
  inf_{A} M(A).
This is the test statistic for the original hypothesis M1/M2 = C,
and its null distribution is chi-square with 1 degree of freedom.

Therefore the P-value of the hypothesis M1/M2 = C is
  P( chi^2(1) >= inf_{A} M(A) ).

The infimum is actually easy to find and is attained as a minimum.
The range of A we need to search is at most from
  min(m2, C*m2, m1, m1/C)  to  max(m2, C*m2, m1, m1/C),
where m1 and m2 are the sample medians of sample 1 and sample 2.
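A minimal sketch of this procedure for uncensored data is given below. It assumes
that a median hypothesis can be tested with el.cen.EM2 by constraining the mean of
an indicator function to equal 0.5, and it locates the minimum with a simple grid
search; the simulated data, the name Mfun, and the grid size are illustrative
choices, not part of the package.

library(emplik)

## M(A) = -2 log ELR1(A) - 2 log ELR2(A):
## test "median of sample 1 = C*A" on sample 1 and "median of sample 2 = A"
## on sample 2, each as a mean-of-indicator constraint equal to 0.5.
Mfun <- function(A, C, x1, x2) {
   LLR1 <- el.cen.EM2(x=x1, d=rep(1, length(x1)),
                      fun=function(t){as.numeric(t <= C*A)}, mu=0.5)$"-2LLR"
   LLR2 <- el.cen.EM2(x=x2, d=rep(1, length(x2)),
                      fun=function(t){as.numeric(t <= A)}, mu=0.5)$"-2LLR"
   LLR1 + LLR2
}

## Test H_0: M1/M2 = C by minimizing M(A) over the range described above.
set.seed(123)
x1 <- rexp(100, rate=0.5)          ## true median log(2)/0.5
x2 <- rexp(100, rate=1)            ## true median log(2)
C  <- 2
m1 <- median(x1); m2 <- median(x2)
Agrid <- seq(min(m1, m2, m1/C, C*m2), max(m1, m2, m1/C, C*m2), length.out=500)
Mvals <- sapply(Agrid, Mfun, C=C, x1=x1, x2=x2)
teststat <- min(Mvals)
pvalue   <- 1 - pchisq(teststat, df=1)

## For a 90% confidence interval of M1/M2, repeat this over a grid of C values
## and keep those C with pvalue > 0.10.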
