## About this sin

Stopping a test as soon as statistical significance is reached. This is a grave sin because it invalidates your data. Under no circumstances should you do this.

## How to atone

Learn how to calculate the required sample size before the test starts, and wait until it is reached. Non-believers are encouraged to run an A/A test to witness first-hand how terrible this sin is.
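As a starting point, the required sample size per group can be sketched with the standard normal-approximation formula for a two-proportion test. The function name, the 10% baseline, and the 12% target below are illustrative assumptions, not values from this document:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-proportion z-test
    (normal approximation, two-sided alpha)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    effect = p_variant - p_baseline
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Example (assumed numbers): detect a lift from 10% to 12% conversion.
n = sample_size_per_group(0.10, 0.12)
print(n)  # on the order of a few thousand users per group
```

Note that smaller effects and higher power both push the required sample size up sharply, which is exactly why waiting feels painful and why peeking is tempting.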

## Useful script

Imagine you are trying to decide who the better basketball player is, you or LeBron James, by taking shots from half-court. If you were limited to only one shot, it is easy to see that with enough luck you could tie or even beat him. If you declared yourself the winner on that basis, it would be obvious that one shot isn't a large enough sample to account for luck. Calling a test before you have enough samples is called peeking, and it must be avoided at all costs. To witness this first-hand, run an A/A test (where the two variants are identical): you will often see statistical significance appear before the sample size requirement is met, even though there is no real difference to detect. For more information about peeking, check out these resources:

- https://gopractice.io/blog/peeking-problem/
- https://towardsdatascience.com/how-not-to-run-an-a-b-test-88637a6b921b
- https://www.youtube.com/watch?v=AJX4W3MwKzU
- http://www.einarsen.no/is-your-ab-testing-effort-just-chasing-statistical-ghosts/
- https://blog.analytics-toolkit.com/2017/the-bane-of-ab-testing-reaching-statistical-significance/
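The A/A experiment described above can be simulated directly. The sketch below is a minimal, self-contained version (assumed parameters: 10% conversion rate on both variants, a pooled two-proportion z-test, a peek every 100 users, and 200 simulated tests); it compares how often a "winner" is declared while peeking versus at the planned end of the test:

```python
import random
from statistics import NormalDist

def p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (successes_a / n_a - successes_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def run_aa_test(n_per_group, conversion=0.10, peek_every=100):
    """One A/A test: both 'variants' share the exact same conversion rate.
    Returns (ever significant while peeking, significant at planned end)."""
    a = b = 0
    peeked_significant = False
    for i in range(1, n_per_group + 1):
        a += random.random() < conversion
        b += random.random() < conversion
        # Peek periodically and stop-worthy significance is recorded.
        if i % peek_every == 0 and p_value(a, i, b, i) < 0.05:
            peeked_significant = True
    return peeked_significant, p_value(a, n_per_group, b, n_per_group) < 0.05

random.seed(42)
trials = 200
peek_hits = end_hits = 0
for _ in range(trials):
    peeked, at_end = run_aa_test(n_per_group=2000)
    peek_hits += peeked
    end_hits += at_end

print(f"'Winner' declared at some peek:  {peek_hits / trials:.0%}")
print(f"Significant at the planned end:  {end_hits / trials:.0%}")
```

Since both variants are identical, any significant result is a false positive. Checking only at the planned end keeps the false-positive rate near the nominal 5%, while stopping at the first significant peek inflates it to several times that, which is the whole sin in two numbers.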