INTRODUCTION
Comparing independent or dependent correlations is often based on standard statistical significance tests.^{1,2,3,4,5,6,7} Independent correlations come from different samples. For example, suppose that a school administrator is interested in determining whether the correlation between mathematics scores on the Iowa Test of Basic Skills (IT) and scores on the Children’s Memory Scale (CM) differs between grades 2 and 6 (H_{o}: ρ_{1}=ρ_{2}). If the correlation for grade 2 was .50 and the correlation for grade 6 was .20, with sample sizes of 100 and 200, respectively, then the z test for independent correlations would equal 2.79, p<.01. The conclusion would be that the correlation between the mathematics scores of the IT and CM scores is significantly higher for grade 2 than for grade 6 children. Although Fisher’s z test for examining the difference between independent correlations appears in many standard statistics textbooks,^{1} it is not usually contained in the standard statistical packages unless a researcher writes a separate program to perform it.
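The independent-correlations test described above can be sketched in a few lines of Python. This is an illustrative implementation of the standard Fisher z'-based test, not COMPCOR's own FORTRAN code; the function name is mine.

```python
import math

def fisher_z_independent(r1, n1, r2, n2):
    """Fisher's z test for the difference between two independent
    correlations (H0: rho1 = rho2)."""
    z1 = math.atanh(r1)  # Fisher's z' transformation of each r
    z2 = math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se

# The grade 2 vs. grade 6 example from the text:
z = fisher_z_independent(0.50, 100, 0.20, 200)
print(round(z, 2))  # 2.79, p < .01
```

The value agrees with the 2.79 reported in the text.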
Dependent correlations, by contrast, are those contained within the same sample. One hypothesis consists of testing the difference between two dependent correlations with one element in common (H_{o}: ρ_{12}=ρ_{13}). For example, suppose that the same administrator is interested in determining whether, for 100 grade 5 children, the correlation of the mathematics scores on the IT is significantly higher with CM scores (r=.60) than with the overall scores of the Montreal Battery of Evaluation of Musical Abilities (MBEMA) (r=.30). Moreover, suppose that the correlation between the CM and MBEMA scores was .20. There are a number of procedures for testing the null hypothesis of ρ_{12}=ρ_{13}, which compare the correlations either using the t distribution^{2} or via Fisher’s z’ transformation, which purportedly distributes as z.^{3} Research indicated that these procedures were deficient under certain conditions with regard to Type I error rate and power.^{4} Consequently, Method D2 was offered^{5} as an alternative to the standard techniques, implemented in R and SPLUS programs. Nevertheless, using the z test,^{3} the value was 2.832, p<.01, indicating that the correlation between mathematics scores on the IT and CM scores was significantly higher than the correlation between mathematics scores on the IT and MBEMA scores for grade 5 children.
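The reported value of 2.832 is consistent with a Dunn and Clark-style z test for two dependent correlations sharing one element, which uses the correlation between the two Fisher-transformed estimates. The sketch below assumes that variant (the function name and layout are mine, not the cited program's):

```python
import math

def dunn_clark_z_overlap(r12, r13, r23, n):
    """z test for two dependent correlations with one element in common
    (H0: rho12 = rho13); a sketch of the Dunn & Clark-style variant."""
    # Correlation between the two Fisher-transformed correlation estimates
    num = (r23 * (1 - r12**2 - r13**2)
           - 0.5 * r12 * r13 * (1 - r12**2 - r13**2 - r23**2))
    c = num / ((1 - r12**2) * (1 - r13**2))
    diff = math.atanh(r12) - math.atanh(r13)
    return diff / math.sqrt(2 * (1 - c) / (n - 3))

# The IT/CM vs. IT/MBEMA example from the text:
z = dunn_clark_z_overlap(0.60, 0.30, 0.20, 100)
print(round(z, 3))  # 2.832, p < .01
```

This reproduces the 2.832 reported in the text for r12=.60, r13=.30, r23=.20, and n=100.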
A second hypothesis consists of testing the difference between two dependent correlations with no elements in common (H_{o}: ρ_{12}=ρ_{34}). Suppose that the administrator is now interested in determining whether, for grade 4 children, the correlation between the mathematics scores on the IT and CM scores would be higher after a brief memory-skills course (e.g., mnemonics) (r=.50) than before it (r=.30). Here is a hypothetical correlation matrix for a sample size of 50:

            IT before   CM before   IT after   CM after
IT before       −          .30         .75        .25
CM before                   −          .15        .65
IT after                                −         .50
CM after                                           −
Using this procedure,^{3} the z-test value was 1.75, p>.05, indicating that there was no statistically significant difference between the correlations of the mathematics scores on the IT and CM scores before and after the mnemonic intervention for grade 4 children. In a simulation of four possible procedures for testing the null hypothesis of ρ_{12}=ρ_{34}, which included the z test,^{3} one procedure was entirely too liberal, whereas the other three were somewhat conservative when the predictor-criterion correlation was low.^{6} Nevertheless, the best significance test procedures for testing dependent correlations with zero and one element in common, based upon those findings,^{6} were programmed for Windows.^{7}
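For the no-elements-in-common case, the reported z of 1.75 is consistent with a variant that combines Fisher's z' transformation with the Pearson-Filon covariance of the two correlation estimates. A sketch under that assumption (names are mine):

```python
import math

def dunn_clark_z_nonoverlap(r12, r34, r13, r14, r23, r24, n):
    """z test for two dependent correlations with no elements in common
    (H0: rho12 = rho34), using the Pearson-Filon covariance with
    Fisher's z' transformation. A sketch of one common variant."""
    # Pearson-Filon expression for the covariance of r12 and r34
    cov = (0.5 * r12 * r34 * (r13**2 + r14**2 + r23**2 + r24**2)
           + r13 * r24 + r14 * r23
           - (r12 * r13 * r14 + r12 * r23 * r24
              + r34 * r13 * r23 + r34 * r14 * r24))
    c = cov / ((1 - r12**2) * (1 - r34**2))
    diff = math.atanh(r34) - math.atanh(r12)
    return diff / math.sqrt(2 * (1 - c) / (n - 3))

# Before/after correlations from the hypothetical matrix in the text:
z = dunn_clark_z_nonoverlap(r12=0.30, r34=0.50, r13=0.75, r14=0.25,
                            r23=0.15, r24=0.65, n=50)
print(round(z, 2))  # 1.75, p > .05
```

Plugging in the six correlations from the matrix above reproduces the reported 1.75.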
Although statistical significance tests are used for testing these hypotheses, more emphasis has recently been placed on confidence intervals for performing the same task. A confidence interval separately provides the magnitude and the precision of the particular effect, whereas these characteristics are confounded in standard hypothesis-testing p values. Confidence interval techniques have been provided^{8} which purportedly have better control of Type I errors and more power than the standard statistical significance tests. Although many of these techniques have been programmed in R,^{9} and recently in SAS and SPSS as separate programs,^{10} many researchers who are basic users of these packages, or who do not use them at all, may have difficulty applying these programs. In some cases, researchers may resort to computing these techniques by hand. Therefore, in order to make these confidence interval approaches more accessible to researchers, the purpose of the user-friendly, stand-alone program was to compute them for testing differences between: a) independent correlations;^{8} and b) two dependent correlations with either zero or one element in common^{8} on a Windows platform.
DESCRIPTION
The user is queried interactively for the particular test, the correlations, the sample size(s), and the confidence interval probability (e.g., 95%). The normal curve value associated with computing the confidence interval for the individual correlations is obtained using a published algorithm.^{11} The program responds with a restatement of the input correlations and sample size(s), the confidence interval for each individual correlation, the confidence interval for the difference between the correlations, and a brief statement noting that confidence intervals containing zero are nonsignificant. The program is written in FORTRAN 77, compiled with the GNU FORTRAN compiler, and runs on a Windows PC or compatible. The output is written to COMPCOR.OUT.
Sample outputs based upon the hypothetical scenarios are given in Tables 1–3. The output indicates that there are no differences in the general conclusions between the confidence interval approach^{8} and the standard statistical significance tests.^{6} Although the general conclusions agree here, given the findings on Type I error rates and power,^{8} it is still important for researchers to have a potentially better option at their disposal.
Table 1: Sample output from COMPCOR for testing the difference between independent correlations.

The difference between independent correlations

r1 = 0.5000 (sample size 100.0000): the 0.9500 confidence interval for 0.5000 has a lower bound of 0.3366 and an upper bound of 0.6341.
r2 = 0.2000 (sample size 200.0000): the 0.9500 confidence interval for 0.2000 has a lower bound of 0.0630 and an upper bound of 0.3296.
The 0.9500 confidence interval for the difference between 0.5000 and 0.2000 has a lower bound of 0.0915 and an upper bound of 0.4917.
If the interval contains 0, then it is nonsignificant.
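The bounds in Table 1 can be reproduced with a modified-asymptotic confidence interval that combines the separate Fisher-z intervals for each correlation. The sketch below assumes that method; the function names are illustrative, not COMPCOR's code.

```python
import math

def zou_ci_independent(r1, n1, r2, n2, z_crit=1.96):
    """Modified-asymptotic confidence interval for rho1 - rho2
    (independent samples). A sketch, assuming the CI method cited
    in the text; z_crit=1.96 gives a 95% interval."""
    def fisher_ci(r, n):
        half = z_crit / math.sqrt(n - 3)
        z = math.atanh(r)
        return math.tanh(z - half), math.tanh(z + half)
    l1, u1 = fisher_ci(r1, n1)  # CI for r1 alone
    l2, u2 = fisher_ci(r2, n2)  # CI for r2 alone
    lower = r1 - r2 - math.sqrt((r1 - l1)**2 + (u2 - r2)**2)
    upper = r1 - r2 + math.sqrt((u1 - r1)**2 + (r2 - l2)**2)
    return lower, upper

lo, hi = zou_ci_independent(0.50, 100, 0.20, 200)
print(round(lo, 4), round(hi, 4))  # 0.0915 0.4917, as in Table 1
```

Under this assumption the individual intervals (0.3366, 0.6341) and (0.0630, 0.3296) and the difference interval (0.0915, 0.4917) all match the table.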
Table 2: Sample output from COMPCOR for testing the difference between dependent correlations with one element in common.

Testing the difference between dependent correlations with one element in common

Sample size = 100.0000
r12 = 0.6000: the 0.9500 confidence interval for 0.6000 has a lower bound of 0.4575 and an upper bound of 0.7125.
r13 = 0.3000: the 0.9500 confidence interval for 0.3000 has a lower bound of 0.1101 and an upper bound of 0.4688.
The 0.9500 confidence interval for the difference between 0.6000 and 0.3000 has a lower bound of 0.0914 and an upper bound of 0.5098.
If the interval contains 0, then it is nonsignificant.
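Table 2's bounds are consistent with the overlapping-correlations version of the same modified-asymptotic approach, which subtracts a covariance term driven by the correlation between the two estimates. A hedged sketch (names mine):

```python
import math

def zou_ci_overlap(r12, r13, r23, n, z_crit=1.96):
    """Modified-asymptotic CI for rho12 - rho13 (dependent correlations,
    one element in common). A sketch assuming the cited CI method."""
    def fisher_ci(r):
        half = z_crit / math.sqrt(n - 3)
        return math.tanh(math.atanh(r) - half), math.tanh(math.atanh(r) + half)
    l1, u1 = fisher_ci(r12)
    l2, u2 = fisher_ci(r13)
    # Correlation between the two correlation estimates
    num = (r23 * (1 - r12**2 - r13**2)
           - 0.5 * r12 * r13 * (1 - r12**2 - r13**2 - r23**2))
    c = num / ((1 - r12**2) * (1 - r13**2))
    d = r12 - r13
    lower = d - math.sqrt((r12 - l1)**2 + (u2 - r13)**2
                          - 2 * c * (r12 - l1) * (u2 - r13))
    upper = d + math.sqrt((u1 - r12)**2 + (r13 - l2)**2
                          - 2 * c * (u1 - r12) * (r13 - l2))
    return lower, upper

lo, hi = zou_ci_overlap(0.60, 0.30, 0.20, 100)
print(round(lo, 4), round(hi, 4))  # 0.0914 0.5098, as in Table 2
```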
Table 3: Sample output from COMPCOR for testing the difference between dependent correlations with no elements in common.

Testing the difference between correlations with no elements in common

Sample size = 50.0000
r12 = 0.3000: the 0.9500 confidence interval for 0.3000 has a lower bound of 0.0236 and an upper bound of 0.5338.
r34 = 0.5000: the 0.9500 confidence interval for 0.5000 has a lower bound of 0.2575 and an upper bound of 0.6833.
The 0.9500 confidence interval for the difference between 0.3000 and 0.5000 has a lower bound of −0.4307 and an upper bound of 0.0235.
If the interval contains 0, then it is nonsignificant.
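Finally, Table 3's bounds are consistent with the nonoverlapping-correlations version of the same approach, with the covariance term taken from the Pearson-Filon expression. A sketch under that assumption:

```python
import math

def zou_ci_nonoverlap(r12, r34, r13, r14, r23, r24, n, z_crit=1.96):
    """Modified-asymptotic CI for rho12 - rho34 (dependent correlations,
    no elements in common). A sketch assuming the cited CI method."""
    def fisher_ci(r):
        half = z_crit / math.sqrt(n - 3)
        return math.tanh(math.atanh(r) - half), math.tanh(math.atanh(r) + half)
    l1, u1 = fisher_ci(r12)
    l2, u2 = fisher_ci(r34)
    # Pearson-Filon expression for the correlation between r12 and r34
    cov = (0.5 * r12 * r34 * (r13**2 + r14**2 + r23**2 + r24**2)
           + r13 * r24 + r14 * r23
           - (r12 * r13 * r14 + r12 * r23 * r24
              + r34 * r13 * r23 + r34 * r14 * r24))
    c = cov / ((1 - r12**2) * (1 - r34**2))
    d = r12 - r34
    lower = d - math.sqrt((r12 - l1)**2 + (u2 - r34)**2
                          - 2 * c * (r12 - l1) * (u2 - r34))
    upper = d + math.sqrt((u1 - r12)**2 + (r34 - l2)**2
                          - 2 * c * (u1 - r12) * (r34 - l2))
    return lower, upper

lo, hi = zou_ci_nonoverlap(0.30, 0.50, 0.75, 0.25, 0.15, 0.65, 50)
print(round(lo, 4), round(hi, 4))  # -0.4307 0.0235, as in Table 3
```

Because the interval (−0.4307, 0.0235) contains zero, the before/after difference is nonsignificant, matching the z-test conclusion in the text.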
AVAILABILITY
COMPCOR.FOR and the executable version (COMPCOR.EXE) may be obtained at no charge by sending an email request to N. Clayton Silver, Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154-5030, at fdnsilvr@unlv.nevada.edu.
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest.