hcistats:ttest

A t test is a very standard statistical test for comparing the means of two groups. You can run a t test with Microsoft Excel (and probably OpenOffice too) as well as with statistical software. Doing a t test is quite straightforward, and it is a good method to start with if you are not comfortable with statistics.

As other kinds of statistical tests do, a t test makes some assumptions. The first important assumption is that **the distribution of the population of your sample data is normal**. Notice that a t test cares about the distribution of the population, not the distribution of your samples. If you have enough samples, this assumption doesn't cause a lot of problems.

The next step is to decide which t test you want to use: a paired or an unpaired t test?

There is one important distinction you need to understand for a t test: paired vs. unpaired. **Unpaired** means that you simply compare the two groups: you build a model for each group (calculate the mean and variance), and see whether there is a difference. **Paired** means that you look at the differences between the two groups: a paired test first calculates the difference between each pair of data points, and then runs a one-sample t test on those differences.

So, how should you decide which one to use? It depends on your experimental design. If you use a within-subject design, you should use a paired t test. Because each participant contributed a data point to each of the two groups (*e.g.*, two interaction techniques), your data have an implicit relationship based on your participants. For example, some participants may have been slower than others with both techniques, yet every participant may still have been faster with one technique than with the other. A paired t test accounts for such individual differences by taking the differences between the two groups. With a between-subject design, there is no legitimate way to pair the data points to take such differences, so you need to use an unpaired t test.

For t tests, you probably also want to report the effect size. The general explanation of effect sizes is available here, and here I explain how to calculate the effect size for a t test.

There are two kinds of effect size metrics for a t test: **Cohen's d** and **Pearson's r**.

What counts as a small or large effect size depends on the field, but here are some standard thresholds. Remember that you cannot use *r* for a paired t test.

| | small size | medium size | large size |
|---|---|---|---|
| Cohen's *d* | 0.2 | 0.5 | 0.8 |
| Pearson's *r* | 0.1 | 0.3 | 0.5 |

Cohen's *d* for a paired t test can be calculated as follows:

*d* = *M* / *SD*,

where *M* is the mean of the differences, and *SD* is the standard deviation of the differences.

Cohen's *d* for an unpaired t test can be calculated as follows:

*d* = (*M_1* - *M_2*) / *SD_pooled*, where *SD_pooled* = sqrt( ((*n_1* - 1) *SD_1*^2 + (*n_2* - 1) *SD_2*^2) / (*n_1* + *n_2* - 2) ),

where *M_i*, *SD_i*, and *n_i* are the mean, standard deviation, and sample size of group *i*.

In some of the literature I have read, the denominator is *n_1 + n_2*. I think this also makes sense, but I am not 100% sure which one is right. Because both seem to be used, and the MBESS package (a package for calculating effect sizes) uses *n_1 + n_2 - 2*, I chose the formula above. For an unpaired t test, you can also use Pearson's *r*. Pearson's *r* is slightly simpler than *d*:

*r* = sqrt( *t*^2 / (*t*^2 + *df*) ),

where *t* is the t value of the test, and *df* is the degrees of freedom.
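As a quick sketch of this formula in R (the *t* and *df* values here are made up, just for illustration):

```r
# Pearson's r from a t test result: r = sqrt(t^2 / (t^2 + df))
t_value <- 2.0   # hypothetical t value
df      <- 18    # hypothetical degrees of freedom
r <- sqrt(t_value^2 / (t_value^2 + df))
r
```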

You should use a paired t test if you run a within-subject design. What a paired t test does is take the differences between the paired data points in the two groups, and test whether the mean of those differences is too far from zero under the t distribution. Because it uses the differences between the groups, **a paired t test does not assume that the variances of the populations of the two groups are equal**. But it still assumes normality. The null hypothesis is that there is no difference in the means between the two groups. If the p value is less than 0.05, you reject the null hypothesis, and say that you found a significant difference.

First, prepare the data.
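For example (the article's original data are not shown, so these values are hypothetical; only the group means, 2.0 and 3.4, match the numbers discussed below):

```r
# Hypothetical within-subject data: each participant tried both techniques,
# so Tech1[i] and Tech2[i] come from the same participant i.
Tech1 <- c(2, 6, 1, 0, 3, 0, 2, 0, 4, 2)
Tech2 <- c(5, 2, 3, 5, 2, 4, 2, 6, 2, 3)
```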

Then, run a paired t test.
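A paired t test is a single call in R. The data here are hypothetical (the article's original dataset is not shown), so the t and p values will not match the article's output:

```r
# Hypothetical paired data (the original dataset is not shown in the article)
Tech1 <- c(2, 6, 1, 0, 3, 0, 2, 0, 4, 2)
Tech2 <- c(5, 2, 3, 5, 2, 4, 2, 6, 2, 3)

# paired = TRUE makes t.test() take the per-participant differences
res <- t.test(Tech1, Tech2, paired = TRUE)
res
```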

In this example, we do not have a significant effect of Group because p = 0.11. The results also show other information. For instance, the mean of the differences is -1.4 (the means of Group 1 and Group 2 are 2.0 and 3.4, respectively). It also shows the *95 percent confidence interval*: the 95% confidence interval of the mean of the differences. This means that under the criterion of p = 0.05, the mean of the differences can be between -3.19 and 0.39. The null hypothesis is that the difference in the means is equal to 0. Thus, if zero is included in the 95% confidence interval of the mean of the differences, we cannot reject the null hypothesis. This clearly corresponds with the p value we have.

You also need to calculate the effect size. We need to use Cohen's *d* for a paired t test (remember that we cannot use the calculation of Pearson's *r* mentioned above for a paired t test). For this, we need the mean and standard deviation of the differences. As you can see in the results of the paired t test, the mean of the differences is 1.4 in absolute value. So, you need to calculate the standard deviation of the differences by hand.
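A sketch of this calculation in base R (again with hypothetical data, since the article's dataset is not shown; the article's own numbers were *M* = 1.4 and *d* = 0.56, so the result here differs):

```r
# Hypothetical paired data (the original dataset is not shown in the article)
Tech1 <- c(2, 6, 1, 0, 3, 0, 2, 0, 4, 2)
Tech2 <- c(5, 2, 3, 5, 2, 4, 2, 6, 2, 3)

diffs <- Tech1 - Tech2
M  <- mean(diffs)   # mean of the differences
SD <- sd(diffs)     # standard deviation of the differences
d  <- abs(M) / SD   # Cohen's d for a paired t test
d
```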

Thus, Cohen's *d* is |*M*| / *SD* = 1.4 / 2.5 = 0.56.

If you need to report the 95% confidence interval for the effect size, you can use the *ci.sm()* function in the MBESS package.
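For example (this assumes MBESS is installed; `ncp` is the t value of the paired test, which the article does not print — with the article's *d* = 0.56 and N = 10 it works out to about 0.56 * sqrt(10) ≈ 1.77):

```r
# Assumes the MBESS package is installed: install.packages("MBESS")
library(MBESS)

# ncp is the t value of the paired t test; N is the number of pairs.
# 1.77 here is inferred from the article's d = 0.56 with N = 10.
ci.sm(ncp = 1.77, N = 10)
```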

Thus, Cohen's *d* = 0.56 with CI = [-0.12, 1.21].

If you are going to use an unpaired t test, you need to consider another assumption, which is about the homogeneity of variances: **the variances of the populations of the two groups are equal**. This is important for an unpaired t test. However, there is a t test which can accommodate unequal variances, called Welch's t test. Unless you can make sure that the variances of the populations of the two groups are equal, you can simply use a Welch's t test without thinking too much. This is fair because a Welch's t test is generally a stricter test than a standard t test (*i.e.*, your p value with a Welch's t test tends to be higher than the one with a standard t test).

A t test has a hypothesis, called the null hypothesis. The null hypothesis is that **there is no difference in the means between the two groups**. If the p value is less than 0.05, you reject the null hypothesis, and say that you found a significant difference.

As you can see in the following example with R, the beauty of a t test is its simplicity. This is one reason why I like a t test: you are less likely to mess up an analysis compared to other kinds of statistical tests. I recommend thinking about your experimental design so that you can use this nice test.

First, create data with two groups (0 and 1). Let's say Group 0 represents some performance of Technique 1, and Group 1 represents some performance of Technique 2.
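For example, with hypothetical values (the article's original data are not shown, so the outputs below will differ from the article's numbers):

```r
# Hypothetical between-subject data: 10 participants per group.
# Group 0 used Technique 1; Group 1 used Technique 2.
Time  <- c(1, 2, 3, 2, 1, 3, 2, 4, 2, 0,   # Group 0
           4, 3, 5, 2, 3, 4, 5, 3, 2, 3)   # Group 1
Group <- factor(rep(c(0, 1), each = 10))
```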

Then, run a Welch's t test (which does not assume equal variances).
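In R, `t.test()` runs a Welch's t test by default. With hypothetical data (the article's original values are not shown, so these results will not match the article's output):

```r
# Hypothetical data (the article's original values are not shown)
Time  <- c(1, 2, 3, 2, 1, 3, 2, 4, 2, 0,
           4, 3, 5, 2, 3, 4, 5, 3, 2, 3)
Group <- factor(rep(c(0, 1), each = 10))

# t.test() runs a Welch's t test by default (var.equal = FALSE)
res <- t.test(Time ~ Group)
res
```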

You will get the results.

The p value is 0.04 < 0.05, which means that there is a statistically significant difference. For comparison, let's see the results with a standard t test (which does assume equal variances).
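Setting `var.equal = TRUE` gives the standard (Student's) t test. Again with hypothetical data (the article's original values are not shown):

```r
# Hypothetical data (the article's original values are not shown)
Time  <- c(1, 2, 3, 2, 1, 3, 2, 4, 2, 0,
           4, 3, 5, 2, 3, 4, 5, 3, 2, 3)
Group <- factor(rep(c(0, 1), each = 10))

# var.equal = TRUE gives the standard (Student's) t test
res <- t.test(Time ~ Group, var.equal = TRUE)
res
```

Note that with equal group sizes, the t statistic itself coincides with Welch's; only the degrees of freedom (and hence the p value) differ.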

As you can see here, the p value with a standard t test (0.04099) is slightly smaller than the one with a Welch's t test (0.04382). Another point you should look at is the degrees of freedom: 18 in a standard t test and 14.963 in a Welch's t test. The intuition behind a Welch's t test is that it accommodates the unequal variances by adjusting the degrees of freedom.

We also need to calculate the effect size. Here, we can use the *smd()* function in the MBESS package.
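For example (assuming MBESS is installed; the data vectors are the same hypothetical values as before, so the result will differ from the article's *d* = 0.98):

```r
# Assumes the MBESS package is installed: install.packages("MBESS")
library(MBESS)

g0 <- c(1, 2, 3, 2, 1, 3, 2, 4, 2, 0)  # hypothetical Group 0 data
g1 <- c(4, 3, 5, 2, 3, 4, 5, 3, 2, 3)  # hypothetical Group 1 data
smd(Group.1 = g0, Group.2 = g1)
```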

Or you can calculate *d* manually, but you have to calculate the variance of each group first.
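A manual sketch in base R, using the pooled-standard-deviation formula from earlier (hypothetical data again, so the value differs from the article's):

```r
g0 <- c(1, 2, 3, 2, 1, 3, 2, 4, 2, 0)  # hypothetical Group 0 data
g1 <- c(4, 3, 5, 2, 3, 4, 5, 3, 2, 3)  # hypothetical Group 1 data

n0 <- length(g0)
n1 <- length(g1)

# Pooled standard deviation with the n0 + n1 - 2 denominator
sd_pooled <- sqrt(((n0 - 1) * var(g0) + (n1 - 1) * var(g1)) / (n0 + n1 - 2))
d <- abs(mean(g0) - mean(g1)) / sd_pooled
d
```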

If you need the 95% confidence interval for the effect size, you can use the *ci.smd()* function with the t value (2.2014 in this example).
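For example (assuming MBESS is installed; 2.2014 is the t value from the article's own example, and n.1 and n.2 are the group sizes):

```r
# Assumes the MBESS package is installed: install.packages("MBESS")
library(MBESS)

# ncp is the t value of the unpaired test (2.2014 in the article's example)
ci.smd(ncp = 2.2014, n.1 = 10, n.2 = 10)
```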

Thus, Cohen's *d* = 0.98 with CI = [0.04, 1.90].

You can report the results in your paper like:

**With a Welch's t test, we found a significant effect for techniques (t(15) = 2.20, p < 0.05, Cohen's d=0.98) with Technique 2 outperforming Technique 1.**

You can take the absolute value of the t value. In this example, the mean of Group 1 is larger than that of Group 0, which is why the t value is negative.

hcistats/ttest.txt · Last modified: 2014/04/02 05:58 by Koji Yatani

## Comments

Thank you for your very helpful tutorials!

Can you please write a post about partial eta-squared confidence intervals? In R? Thank you!