Statistical & Financial Consulting by Stanford PhD


The t-test is a statistical procedure that allows us to draw conclusions about one or two Normal distributions. Three types of t-test are distinguished: the one-sample t-test, the independent samples t-test and the paired samples t-test. In all three cases, the decision making is based on a statistic that follows a t-distribution with the appropriate number of degrees of freedom. Hence the statistic is called the "t-statistic" and the test is called the "t-test".

**1]** In a *one-sample t-test*, we study a Normal distribution with unknown mean and variance. We test whether the mean µ of the distribution equals a particular number C. We do that by drawing a single sample from the distribution and basing the t-statistic on this sample. If µ = C, the t-statistic tends to fluctuate around 0. Therefore, if it gets too extreme on our data set, we conclude that µ ≠ C. Depending on which way the t-statistic sways, we may accept one of the two one-sided alternative hypotheses: µ > C or µ < C. The procedure is valid under the assumption that the observations in the sample are independent.
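As a sketch, the one-sample t-test can be run in Python with SciPy's `ttest_1samp`; the sample below is hypothetical data assumed to be drawn independently from a Normal distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical sample, assumed i.i.d. from a Normal distribution.
sample = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7])

# Test H0: mu = 5.0 against the two-sided alternative mu != 5.0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

# The t-statistic is (sample mean - C) / (s / sqrt(n)),
# where s is the sample standard deviation; it has n - 1
# degrees of freedom under H0.
print(f"t = {t_stat:.4f}, p = {p_value:.4f}")
```

A large p-value (as here, where the sample mean is close to 5.0) means the t-statistic is not extreme, so we do not reject µ = C. For a one-sided alternative, SciPy's `alternative="greater"` or `"less"` argument can be used instead of the default two-sided test.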

**2]** In an *independent samples t-test*, we study two Normal distributions with unknown means and variances. We test whether the means µ_{1} and µ_{2} of the two distributions are the same. We draw a sample from each distribution. The samples may differ in size but must be independent of each other; in addition, the observations within each sample must be independent. We calculate the t-statistic as a certain function of the two samples. If µ_{1} = µ_{2}, the t-statistic tends to fluctuate around 0. Therefore, if it gets too extreme on our data set, we conclude that µ_{1} ≠ µ_{2}. Again, depending on which way the t-statistic sways, we may accept one of the two one-sided alternative hypotheses: µ_{1} > µ_{2} or µ_{1} < µ_{2}.

The calculation of the t-statistic and its null distribution depend critically on whether the variances of the two distributions are assumed equal. If they are, the pooled-variance (Student's) version of the test is used; otherwise Welch's version, with adjusted degrees of freedom, is used. In practice, the equal-variance assumption is often checked first with Levene's test.
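The two-step procedure above can be sketched with SciPy: Levene's test first, then the matching variant of the independent samples t-test. The two groups are hypothetical data:

```python
import numpy as np
from scipy import stats

# Two hypothetical independent samples (sizes may differ).
group1 = np.array([23.1, 25.4, 24.8, 22.9, 26.0, 24.3])
group2 = np.array([27.5, 28.1, 26.9, 29.0, 27.8, 28.4, 27.2])

# Step 1: Levene's test for equality of variances.
lev_stat, lev_p = stats.levene(group1, group2)

# Step 2: if equal variances are not rejected at the 5% level,
# use the pooled (Student's) test; otherwise use Welch's test.
equal_var = lev_p > 0.05
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=equal_var)

print(f"Levene p = {lev_p:.4f}, t = {t_stat:.4f}, p = {p_value:.4f}")
```

Here the group means are clearly separated, so the t-statistic is large in magnitude and µ_{1} = µ_{2} is rejected; the negative sign of the t-statistic points toward the alternative µ_{1} < µ_{2}.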

**3]** A *paired samples t-test* has the same set-up as an independent samples t-test, except that the two samples must be of equal size and each observation in one sample must be paired with an observation in the other sample. The observations must be independent across pairs. An example is two measurements made on each member of a group of people: one before an experiment and one after. Naturally, the measurements may be independent between different people. However, the performance of a particular person before the experiment may be closely linked to his or her strengths and weaknesses, and thus to the performance of the same person after the experiment.

We want to test whether the means µ_{1} and µ_{2} of the two distributions are the same. Since the samples are paired, we can construct a third sample consisting of the pairwise differences. In the new sample all the observations are independent, because different observations correspond to different people. This allows us to run a one-sample t-test on the new sample, with C = 0. The resulting t-statistic tends to fluctuate around 0 under the assumption µ_{1} = µ_{2}. For that reason, if the t-statistic gets too extreme on our data set, we conclude that µ_{1} ≠ µ_{2}. If we are interested in the direction of the difference, we look at the sign of the t-statistic and accept one of the two one-sided alternative hypotheses: µ_{1} > µ_{2} or µ_{1} < µ_{2}.
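The reduction of the paired test to a one-sample test on the differences can be verified directly: SciPy's `ttest_rel` on the paired samples gives exactly the same result as `ttest_1samp` on the differences with C = 0. The before/after scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical before/after scores for the same eight people.
before = np.array([70, 68, 75, 72, 66, 74, 69, 71])
after  = np.array([74, 71, 76, 75, 70, 75, 72, 73])

# The paired samples t-test ...
t_paired, p_paired = stats.ttest_rel(after, before)

# ... is equivalent to a one-sample t-test on the differences
# with C = 0.
diffs = after - before
t_onesample, p_onesample = stats.ttest_1samp(diffs, popmean=0.0)

print(f"paired:     t = {t_paired:.4f}, p = {p_paired:.4f}")
print(f"one-sample: t = {t_onesample:.4f}, p = {p_onesample:.4f}")
```

In this example every person scores higher after the experiment, so the test rejects µ_{1} = µ_{2}, and the positive t-statistic points toward the mean "after" score exceeding the mean "before" score.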

