# Two Way ANOVA Calculator

Factorial ANOVA - Balanced design

Fixed effects, Mixed effects, Random effects and Mixed repeated measures

Unbalanced two way ANOVA

Replications are observations of the same combination of factors A and B.

The tool ignores empty cells or non-numeric cells.

**Balanced two factor ANOVA with replication** - enter **all the replications in one cell**, separated by Enter or , (comma). **ANOVA without replication** - enter **one value per cell**.

## Information

#### Models

There are many possible models; this calculator currently deals only with the following balanced models:

- **Fixed effect model (A-Fixed, B-Fixed), no repeats** - both factors are fixed.
- **Mixed effect model (A-Random, B-Fixed), no repeats** - factor A is random, factor B is fixed, each subject is measured only once.
- **Mixed effect model (A-Fixed, B-Random), no repeats** - factor A is fixed, factor B is random, each subject is measured only once.
- **Random effect model (A-Random, B-Random), no repeats** - both factors are random.
- **Mixed repeated measures (A-Fixed, B-Repeated)** - factor A is fixed, factor B uses the same subject for all its categories.

##### What is a balanced model?

A balanced design has the same number of observations in each cell, i.e., in each combination of factors. **Currently this calculator supports only the balanced design.**
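The balance condition can be sketched in code by counting observations per (A, B) cell; the `is_balanced` helper and the tuple layout below are illustrative assumptions, not part of the calculator:

```python
from collections import Counter

def is_balanced(observations):
    """Check whether every (A, B) cell has the same number of observations.

    observations: list of (a_level, b_level, value) tuples.
    """
    counts = Counter((a, b) for a, b, _ in observations)
    return len(set(counts.values())) == 1

# Balanced: every cell has exactly 2 replications.
balanced = [("a1", "b1", 5), ("a1", "b1", 7),
            ("a1", "b2", 6), ("a1", "b2", 8),
            ("a2", "b1", 4), ("a2", "b1", 9),
            ("a2", "b2", 3), ("a2", "b2", 2)]
print(is_balanced(balanced))       # True

# Unbalanced: cell (a2, b2) now has only one observation.
print(is_balanced(balanced[:-1]))  # False
```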

When the design is **unbalanced**, it causes correlation between the factors and the interaction if the imbalance is proportional, and also correlation between the factors themselves if the imbalance is not proportional. Hence you don't know how to divide the shared sum of squares between the two factors.

There are several methods for dealing with the shared sum of squares:

Type I - sequential: the first sum of squares (SS) you calculate gets the shared sum of squares, so in this case the order matters!

Type II - conservative: it assumes there is no interaction between the factors and ignores the SS shared between the factors.

Type III - it assumes there is an interaction between the factors and ignores all the shared SS, both between the factors and between the factors and the interaction.
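The order dependence of Type I can be illustrated with a small sketch: a sequential SS is the reduction in residual sum of squares between two nested least-squares fits. The data and dummy coding below are hypothetical:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares after a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Hypothetical 2x2 unbalanced data (cell counts 3, 2, 2, 1).
A = np.array([0, 0, 0, 0, 0, 1, 1, 1])   # factor A level per observation
B = np.array([0, 0, 0, 1, 1, 0, 0, 1])   # factor B level per observation
y = np.array([5.0, 7.0, 6.0, 8.0, 9.0, 4.0, 3.0, 2.0])

ones = np.ones_like(y)
Xa = np.column_stack([ones, A])      # intercept + A
Xb = np.column_stack([ones, B])      # intercept + B
Xab = np.column_stack([ones, A, B])  # intercept + A + B

# Type I (sequential) SS for factor A depends on when A enters the model:
ss_a_first = rss(ones[:, None], y) - rss(Xa, y)  # A entered first
ss_a_second = rss(Xb, y) - rss(Xab, y)           # A entered after B
print(ss_a_first, ss_a_second)  # different values: the order matters
```

In a balanced design the two values coincide, which is why the order problem only arises for unbalanced data.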

#### Targets

The two way ANOVA test checks the following targets using sample data:

- Checks if the difference between the **Factor A** averages of two or more categories is significant.
- Checks if the difference between the **Factor B** averages of two or more categories is significant.
- Checks if there is an interaction between **Factor A** and **Factor B**.

The F statistic represents the ratio of the variance between the groups to the variance within the groups. Unlike many other statistical tests, the smaller the F statistic, the more likely it is that the averages are equal.

**Right-tailed** F test: for the ANOVA test you can use only the right tail. Why? Because only a large between-group variance relative to the within-group variance is evidence against the equality of the averages; a small F statistic supports the null hypothesis rather than contradicting it.
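Given an F statistic and its degrees of freedom, the right-tail p-value P(x > F) can be computed with SciPy's F distribution; the numbers below are hypothetical:

```python
from scipy.stats import f

# Hypothetical values: F statistic with 2 and 12 degrees of freedom.
F_stat, df_num, df_den = 4.5, 2, 12

# Survival function = right-tail probability P(x > F).
p_value = f.sf(F_stat, df_num, df_den)
print(round(p_value, 4))
```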

## Two-way ANOVA

#### Hypotheses

Factor A: H_{0}: μ_{1} = .. = μ_{a} (there is no difference in the means of the variable A categories).

Factor B: H_{0}: μ_{1} = .. = μ_{b} (there is no difference in the means of the variable B categories).

Interaction: H_{0}: Interaction(A_{i}B_{j}) = 0, ∀ i = 1 to a, j = 1 to b (there is no interaction between variable A and variable B, i.e., for all the cells, the effect of variable A on the cells' means does not depend on the effect of variable B, and vice versa).

#### Two-way ANOVA test formulas

| | Fixed Model | Mixed Model | Random Model | Mixed Repeated |
|---|---|---|---|---|
| F_{A} | MS_{A} / MS_{E} | MS_{A} / MS_{AB} | MS_{A} / MS_{AB} | MS_{A} / MS_{SWA} |
| F_{B} | MS_{B} / MS_{E} | MS_{B} / MS_{E} | MS_{B} / MS_{AB} | MS_{B} / MS_{BSWA} |
| F_{AB} | MS_{AB} / MS_{E} | MS_{AB} / MS_{E} | MS_{AB} / MS_{E} | MS_{AB} / MS_{BSWA} |

The test statistic follows the F distribution.
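The table can be read as a lookup from model and effect to the denominator mean square of each F ratio. The sketch below shows the idea; the model keys, dictionary layout, and numeric values are my own illustrative naming, not the calculator's:

```python
# Denominator mean square for each F ratio, per model (from the table above).
DENOMINATORS = {
    "fixed":          {"A": "MS_E",   "B": "MS_E",    "AB": "MS_E"},
    "mixed":          {"A": "MS_AB",  "B": "MS_E",    "AB": "MS_E"},
    "random":         {"A": "MS_AB",  "B": "MS_AB",   "AB": "MS_E"},
    "mixed_repeated": {"A": "MS_SWA", "B": "MS_BSWA", "AB": "MS_BSWA"},
}

def f_ratio(model, effect, ms):
    """F statistic for an effect: its mean square over the model's denominator MS.

    ms: dict mapping mean-square names (e.g. "MS_A", "MS_E") to values.
    """
    denom = DENOMINATORS[model][effect]
    return ms[f"MS_{effect}"] / ms[denom]

ms = {"MS_A": 30.0, "MS_B": 12.0, "MS_AB": 6.0, "MS_E": 3.0}  # hypothetical
print(f_ratio("fixed", "A", ms))   # 30 / 3 = 10.0
print(f_ratio("random", "A", ms))  # 30 / 6 = 5.0
```

Note that the same MS_{A} yields a different F statistic under different models, which is why choosing the model correctly matters.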

## Assumptions

- The dependent variable is continuous (ratio or interval)
- Two categorical independent variables
- Independent observations (no repeated measure)
- The residuals are normally distributed
- Homogeneity of variances, a similar variance for each cell
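The last two assumptions can be informally checked on sample data, for example with SciPy's Shapiro-Wilk test (normality of residuals) and Levene's test (homogeneity of variances); the data below is randomly generated purely for illustration:

```python
import numpy as np
from scipy.stats import shapiro, levene

rng = np.random.default_rng(0)
residuals = rng.normal(size=40)                      # hypothetical residuals
cell1 = rng.normal(loc=5.0, scale=1.0, size=10)      # hypothetical cell samples
cell2 = rng.normal(loc=7.0, scale=1.0, size=10)

w, p_norm = shapiro(residuals)      # H0: residuals are normal
stat, p_var = levene(cell1, cell2)  # H0: cells have equal variances
print(round(p_norm, 4), round(p_var, 4))
```

p-values above the chosen significance level mean the assumption is not rejected by the data.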

## Required Sample Data

Sample data from all compared groups.

## Parameters

**a** - the number of categories in variable A (number of rows).

**b** - the number of categories in variable B (number of columns).

**n_{i}** - sample size of category i of variable A (row i).

**n_{j}** - sample size of category j of variable B (column j).

**n_{i,j}** - sample size of cell i,j (row i, column j). In the balanced design n_{i,j} = n/(a*b).

**n** - overall sample size, including all the groups (Σn_{i,j}, i=1 to a, j=1 to b).

**Ȳ_{i}** - average of all the observations of category i of variable A (row i).

**Ȳ_{j}** - average of all the observations of category j of variable B (column j).

**Ȳ** - overall average (ΣY_{i,j,k} / n, i=1 to a, j=1 to b, k=1 to n_{i,j}).

### Repeated measures ANOVA

**s** - the index of the subject within category i (subject 1 in category 1 is different from subject 1 in category 2).

**sub** - the number of subjects per cell, where a cell is one combination of variable A and variable B. For the balanced design: n = a*b*sub.

**Ȳ_{i,s}** - subject's average: ΣY_{i,j,s} / b for subject s in category i, the average of all the observations of subject s across the categories of variable B.

**Ȳ** - overall average (ΣY_{i,j,s} / n).
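The repeated-measures layout can be sketched with a hypothetical array that has one axis per factor and one per subject; the array shape and values below are illustrative assumptions:

```python
import numpy as np

# Hypothetical layout: a=2 A-categories, b=3 B-categories (repeated),
# sub=2 subjects per cell; data[i, j, s] = Y_{i,j,s}.
data = np.array([[[5.0, 6.0], [7.0, 8.0], [6.0, 7.0]],
                 [[3.0, 2.0], [4.0, 5.0], [2.0, 3.0]]])
a, b, sub = data.shape
n = a * b * sub  # balanced design: n = a*b*sub

subject_avg = data.mean(axis=1)  # Ȳ_{i,s}: subject s in category i, over B
grand = data.mean()              # Ȳ: overall average
print(subject_avg)
print(grand)
```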

## Results calculations

### Sum of squares

The sum of squares accumulates the squared differences related to the effect we try to estimate.

**SS_{A}** - the squared differences related to the effect of variable A. You compare the average of every category to the total average. This is the same value as the sum of squares between groups in one way ANOVA.

**SS_{B}** - the same as SS_{A}, for variable B.

**SS_{AB}** - the squared differences related to the effect of the combination of variable A and variable B in each cell. Since we try to understand the influence of the interaction AB (the interaction of a specific value of variable A and a specific value of variable B), we take the average of each cell, remove the influence of variable A and variable B, and compare to the total average.

**A effect** = Ȳ_{i} - Ȳ

**B effect** = Ȳ_{j} - Ȳ

**AB effect** = Cell average - A effect - B effect - Total average
= Ȳ_{i,j} - (Ȳ_{i} - Ȳ) - (Ȳ_{j} - Ȳ) - Ȳ
= Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ

Take the square of each difference:

(Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ)^{2}

Count the squared differences of each value in the cell, hence multiply by the sample size of each cell (n_{i,j}):

SS_{AB} = Σ_{i}^{a}Σ_{j}^{b}n_{i,j}(Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ)^{2}
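These definitions translate directly to code. The following sketch computes every sum of squares for a hypothetical balanced 2x2 dataset with two replications per cell (data and names are my own, not the calculator's):

```python
import numpy as np

# Hypothetical balanced data: a=2 rows, b=2 columns, 2 replications per cell.
# data[i, j] holds the replications of cell (i, j).
data = np.array([[[5.0, 7.0], [6.0, 8.0]],
                 [[4.0, 2.0], [9.0, 7.0]]])
a, b, reps = data.shape
n = a * b * reps

grand = data.mean()           # Ȳ   - overall average
row = data.mean(axis=(1, 2))  # Ȳi  - average of each A category
col = data.mean(axis=(0, 2))  # Ȳj  - average of each B category
cell = data.mean(axis=2)      # Ȳij - average of each cell

ss_a = b * reps * np.sum((row - grand) ** 2)
ss_b = a * reps * np.sum((col - grand) ** 2)
ss_ab = reps * np.sum((cell - row[:, None] - col[None, :] + grand) ** 2)
ss_e = np.sum((data - cell[:, :, None]) ** 2)
ss_t = np.sum((data - grand) ** 2)

print(ss_a, ss_b, ss_ab, ss_e)                       # 2.0 18.0 8.0 8.0
print(np.isclose(ss_t, ss_a + ss_b + ss_ab + ss_e))  # True: SS_T decomposes
```

The last line checks the identity SS_{T} = SS_{A} + SS_{B} + SS_{AB} + SS_{E}, which holds exactly in the balanced design.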

### Fixed and Random Effects

The fixed and random effects are related to the independent variables.

#### Fixed Effect

The effect is constant across individuals.

- The categories of the variable contain the entire list of categories
- The effect of this variable is interesting; the difference between the categories is important
- There is no known pattern in the differences between the categories

#### Random Effect

The effect varies across individuals; the individuals may be people, products, etc.

- The categories' list is only a sample from the entire categories' list
- The effect of this variable is not interesting by itself; the difference between the categories is not important
- There is no known pattern in the differences between the categories

For example, schools as a random factor: the schools are only a sample from the entire population of schools. There is no pattern to the differences between the schools, and if there were a pattern, it would be another factor, like the school's size. Each school is not important by itself.

When you change the **interaction field** or the **model**, the following ANOVA table and diagram will be adjusted!

### ANOVA table - with interaction

| Source | Degrees of Freedom (DF) | Sum of Squares (SS) | Mean Square (MS) | F statistic | p-value |
|---|---|---|---|---|---|
| Factor A (rows): between the categories of factor A | DF_{A} = a - 1 | SS_{A} = Σ_{i}^{a}n_{i}(Ȳ_{i} - Ȳ)^{2} | MS_{A} = SS_{A} / DF_{A} | F_{A} = MS_{A} / MS_{E} | P(x > F_{A}) |
| Factor B (columns): between the categories of factor B | DF_{B} = b - 1 | SS_{B} = Σ_{j}^{b}n_{j}(Ȳ_{j} - Ȳ)^{2} | MS_{B} = SS_{B} / DF_{B} | F_{B} = MS_{B} / MS_{E} | P(x > F_{B}) |
| Interaction AB: between the cells after reducing factor A and factor B effects | DF_{AB} = (a - 1)(b - 1) | SS_{AB} = Σ_{i}^{a}Σ_{j}^{b}n_{i,j}(Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ)^{2} | MS_{AB} = SS_{AB} / DF_{AB} | F_{AB} = MS_{AB} / MS_{E} | P(x > F_{AB}) |
| Error: within the cells | DF_{E} = n - a*b | SS_{E} = Σ_{i}^{a}Σ_{j}^{b}Σ_{k}^{n_{i,j}}(Y_{i,j,k} - Ȳ_{i,j})^{2} | MS_{E} = SS_{E} / DF_{E} | | |
| Total: all the deviations from the average | DF_{T} = n - 1 | SS_{T} = Σ_{i}^{a}Σ_{j}^{b}Σ_{k}^{n_{i,j}}(Y_{i,j,k} - Ȳ)^{2}; SS_{T} = Sample Variance * (n - 1); SS_{T} = SS_{A} + SS_{B} + SS_{AB} + SS_{E} | MS_{T} = S^{2} = SS_{T} / (n - 1) | | |
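Assuming the fixed model (every effect tested against MS_{E}) and a hypothetical balanced 2x2 dataset, the table's quantities can be sketched with NumPy and SciPy's F distribution:

```python
import numpy as np
from scipy.stats import f

# Hypothetical balanced data: a=2, b=2, 2 replications per cell.
data = np.array([[[5.0, 7.0], [6.0, 8.0]],
                 [[4.0, 2.0], [9.0, 7.0]]])
a, b, reps = data.shape
n = a * b * reps

grand = data.mean()
row = data.mean(axis=(1, 2))
col = data.mean(axis=(0, 2))
cell = data.mean(axis=2)

ss_a = b * reps * np.sum((row - grand) ** 2)
ss_b = a * reps * np.sum((col - grand) ** 2)
ss_ab = reps * np.sum((cell - row[:, None] - col[None, :] + grand) ** 2)
ss_e = np.sum((data - cell[:, :, None]) ** 2)

df_a, df_b, df_ab, df_e = a - 1, b - 1, (a - 1) * (b - 1), n - a * b
ms_a, ms_b, ms_ab, ms_e = ss_a / df_a, ss_b / df_b, ss_ab / df_ab, ss_e / df_e

# Fixed model: every effect is tested against MS_E, right tail only.
for name, ms, df in [("A", ms_a, df_a), ("B", ms_b, df_b), ("AB", ms_ab, df_ab)]:
    F = ms / ms_e
    p = f.sf(F, df, df_e)  # P(x > F)
    print(f"F_{name} = {F:.2f}, p = {p:.4f}")
```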

### Sum of squares diagram - with interaction

In the following diagram you may see the differences per observation Y_{i,j,k} that are used to calculate the sums of squares.

A effect: Ȳ_{i} - Ȳ.

B effect: Ȳ_{j} - Ȳ.

Interaction effect (AB): Ȳ_{i,j} - Ȳ_{i} - Ȳ_{j} + Ȳ.

Error: Y_{i,j,k} - Ȳ_{i,j}.

Total effect: Y_{i,j,k} - Ȳ.