A statistical hypothesis is any assumption about a population formulated without full knowledge of that population. The decision to reject or not reject the hypothesis is made after conducting an appropriate statistical test. The test statistic is a random variable whose value is calculated from the sample data. Depending on its value, we either reject the H0 hypothesis in favor of the HA hypothesis or fail to reject it. The choice of statistical test depends on the data and the type of study:

```
from IPython.display import Image
Image(filename="img/param.png")
```

A statistical test is a formal technique that relies on a probability distribution to reach a conclusion about whether a hypothesis is valid. These tests are classified as parametric and nonparametric. A parametric test makes assumptions about a population parameter, while a nonparametric test makes no such assumptions. Parametric tests assume that the data follow particular statistical distributions, so several conditions must be met for a parametric test result to be credible. For example, Student’s t-test for two independent samples is reliable only if each sample is drawn from a normal distribution and the sample variances are homogeneous. Nonparametric tests are not distribution-based, so they can be used even when the parametric conditions are not met.
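The t-test conditions mentioned above can themselves be checked with statistical tests. A minimal sketch (the sample data and seed here are made up for illustration) using the Shapiro-Wilk test for normality and Levene’s test for homogeneity of variances, both from scipy.stats:

```python
# Sketch: checking the two-sample t-test assumptions before using it.
# The sample data below is generated only for illustration.
import numpy as np
from scipy.stats import shapiro, levene

rng = np.random.default_rng(1)
sample1 = rng.normal(10, 2, 50)
sample2 = rng.normal(11, 2, 50)

# Shapiro-Wilk: H0 = the sample comes from a normal distribution
for i, s in enumerate((sample1, sample2), start=1):
    stat, p = shapiro(s)
    print('sample%d normality: p=%.3f' % (i, p))

# Levene: H0 = the samples have equal variances
stat, p = levene(sample1, sample2)
print('equal variances: p=%.3f' % p)
```

If either check yields a small p-value, a nonparametric test is the safer choice.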

Parametric tests often have nonparametric counterparts. The advantage of using a parametric test instead of its nonparametric equivalent is that the former is more statistically powerful: when H0 is actually false, the parametric test is more likely to reject it.
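This power difference can be seen in a rough simulation. The sketch below (sample sizes, effect size, and seed are arbitrary choices, not from the original) counts how often each test rejects H0 when the two population means really do differ:

```python
# Sketch: empirical power of the t-test vs. the Mann-Whitney test
# on small normal samples whose means truly differ (H0 is false).
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
alpha, trials = 0.05, 1000
t_rejections = mw_rejections = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, 20)   # mean 0
    b = rng.normal(1.0, 1.0, 20)   # mean 1: H0 is false
    if ttest_ind(a, b).pvalue <= alpha:
        t_rejections += 1
    if mannwhitneyu(a, b, alternative='two-sided').pvalue <= alpha:
        mw_rejections += 1
print('t-test power:       %.3f' % (t_rejections / trials))
print('Mann-Whitney power: %.3f' % (mw_rejections / trials))
```

On normal data like this, the t-test typically rejects H0 slightly more often than the Mann-Whitney test.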

```
Image(filename="img/test.png")
```

##### Parametric Tests

- T-test – Student’s t-test is a statistical method used to compare two means when we know the sample sizes, the arithmetic means, and the standard deviations or variances.

- Z-test – A statistical test used to determine whether two population means are different when the variances are known and the sample size is large. The test statistic is assumed to be normally distributed, and the standard deviation should be known for the test to be accurate.

- ANOVA (analysis of variance) – a group of analyses used to study the influence of factors (independent variables) on a dependent variable.
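The t-test and one-way ANOVA are available in scipy.stats; a z-test for means can be found in statsmodels (statsmodels.stats.weightstats.ztest). A minimal sketch with made-up sample data:

```python
# Sketch: the parametric tests above with scipy.stats.
# The groups below are generated only for illustration.
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(2)
group1 = rng.normal(12, 5, 100)
group2 = rng.normal(13, 5, 100)
group3 = rng.normal(14, 5, 100)

# Student's t-test: are the means of two independent samples equal?
stat, p = ttest_ind(group1, group2)
print('t-test: stat=%.3f, p=%.3f' % (stat, p))

# One-way ANOVA: are the means of all three groups equal?
stat, p = f_oneway(group1, group2, group3)
print('ANOVA: F=%.3f, p=%.3f' % (stat, p))
```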

##### Nonparametric Tests

Nonparametric tests are statistical methods that do not assume a Gaussian distribution. They were developed for ordinal or interval data, but in practice they can also be applied to the ranks of real-valued observations in a data sample rather than to the observed values themselves. The null hypothesis of these tests is often that both samples were drawn from populations with the same distribution, and therefore with the same population parameters, such as the mean or median. These tests also return a p-value that can be used to interpret the result: the p-value is the probability of observing the two data samples under the null hypothesis that both were drawn from populations with the same distribution.

p <= alpha: reject H0 (different distributions)
p > alpha: fail to reject H0 (same distribution)

- Mann-Whitney Test – a nonparametric test of statistical significance that determines whether two independent samples were drawn from populations with the same distribution. The following example generates two samples and uses the Mann-Whitney test to check whether they come from the same population:

```
from numpy.random import randn
from scipy.stats import mannwhitneyu
data1 = 5 * randn(100) + 12
data2 = 5 * randn(100) + 13
stat, p = mannwhitneyu(data1, data2)
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p > alpha:
    print("The same distribution (don't reject H0)")
else:
    print('Different distribution (reject H0)')
```

Statistics=4929.000, p=0.432
The same distribution (don't reject H0)

- Wilcoxon Test (signed-rank) – Sometimes the data samples are related to each other. The samples are not independent, so the Mann-Whitney test cannot be used. Instead, the Wilcoxon signed-rank test is used, also called the Wilcoxon T test, the equivalent of the paired Student’s t-test, but for ranked data rather than Gaussian-distributed data. Using the same samples as before:

```
from scipy.stats import wilcoxon
stat, p = wilcoxon(data1, data2)
print('Statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p > alpha:
    print("The same distribution (don't reject H0)")
else:
    print('Different distribution (reject H0)')
```

Statistics=2459.000, p=0.820
The same distribution (don't reject H0)

- Kruskal-Wallis Test – used when we have more than two data samples and want to know whether they all have the same distribution. The Kruskal-Wallis test is the nonparametric version of one-way analysis of variance (ANOVA). We will add a third sample:

```
from scipy.stats import kruskal
# redefine the two samples and add a third
# (the third mean, 14, is chosen here only for illustration)
data1 = 5 * randn(100) + 12
data2 = 5 * randn(100) + 13
data3 = 5 * randn(100) + 14
stat, p = kruskal(data1, data2, data3)
print('Statistics=%.3f, p=%.3f' % (stat, p))
```