A/B testing and the Chi-squared test

Marton Trencseni - Fri 28 February 2020 • Tagged with ab-testing

In an earlier post, I wrote about A/B testing conversion data with the Z-test. The Chi-squared test is a more general test for conversion data, because it can handle multiple conversion events and multiple funnels under test (A/B/C/D/..).

Chi-squared distribution

Continue reading
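As a quick illustration of the multi-funnel case the summary mentions, here is a minimal sketch using SciPy's standard `chi2_contingency` (the counts below are made up for illustration, not taken from the post):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical conversion counts for three funnels (A/B/C),
# each with two outcomes: converted vs not converted.
observed = np.array([
    [450, 9550],   # A: 4.50% conversion
    [480, 9520],   # B: 4.80% conversion
    [465, 9535],   # C: 4.65% conversion
])

# The test compares observed counts to the counts expected
# if conversion rate were identical across all funnels.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

With a 3×2 table the test has (3−1)×(2−1) = 2 degrees of freedom; the same call works unchanged for more variants or more outcome columns.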

A/B testing and the t-test

Marton Trencseni - Sun 23 February 2020 • Tagged with ab-testing

The t-test is better than the z-test for time-spent A/B tests, because it explicitly models the uncertainty of the variance due to sampling. Using Monte Carlo simulations, I show that around N=100 the t-test converges to the z-test.

Normal distribution vs t-distribution

Continue reading
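The convergence claim is easy to check directly by comparing critical values of the t-distribution against the normal as the sample size grows (a quick sketch, not code from the post):

```python
from scipy.stats import norm, t

# Two-sided 5% critical value of the normal, ~1.960.
z_crit = norm.ppf(0.975)

# The t-distribution's critical value approaches it as df grows.
for df in (5, 30, 100, 1000):
    t_crit = t.ppf(0.975, df)
    print(f"df={df:5d}: t={t_crit:.4f}  (z={z_crit:.4f})")
```

By df=100 the two critical values differ by only about 0.02, which is why the two tests give essentially the same answers at that sample size.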

A/B testing and the Z-test

Marton Trencseni - Sat 15 February 2020 • Tagged with ab-testing

I discuss the Z-test for A/B testing and show how to compute parameters such as sample size from first principles. I use Monte Carlo simulations to validate significance level and statistical power, and visualize parameter scaling behaviour.

Conversion difference vs N

Continue reading
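The sample size computation the summary refers to can be sketched with the textbook two-proportion formula (an assumed standard form, using illustrative conversion rates, not necessarily the exact derivation in the post):

```python
from scipy.stats import norm

def sample_size(p1, p2, alpha=0.05, power=0.8):
    """Per-group N for a two-sided two-proportion z-test
    (textbook formula, shown here for illustration)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for significance level
    z_b = norm.ppf(power)           # critical value for statistical power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# Detecting a lift from 10% to 12% conversion:
print(round(sample_size(0.10, 0.12)))
```

The formula makes the scaling behaviour visible: halving the detectable difference p1−p2 quadruples the required N.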

Beyond the Central Limit Theorem

Marton Trencseni - Thu 06 February 2020 • Tagged with data, ab testing, statistics

In the previous post, I talked about the importance of the Central Limit Theorem (CLT) to A/B testing. Here we will explore cases when we cannot rely on the CLT to hold.

Running mean for Cauchy distribution

Continue reading

A/B testing and the Central Limit Theorem

Marton Trencseni - Wed 05 February 2020 • Tagged with data, ab testing, statistics

When working with hypothesis testing, the descriptions of statistical methods often include normality assumptions. For example, the Wikipedia page for the z-test starts like this: "A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution". What does this mean? How do I know it’s a valid assumption for my data?

Normal distribution from uniform

Continue reading
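The figure caption's idea, a normal distribution emerging from uniform draws, can be reproduced in a few lines (an illustrative sketch, not code from the post):

```python
import numpy as np

rng = np.random.default_rng(42)

# CLT in action: means of N uniform draws are approximately normal,
# centered at 0.5 with standard deviation 1/sqrt(12*N).
N, trials = 100, 10_000
means = rng.uniform(0, 1, size=(trials, N)).mean(axis=1)

print(f"mean≈{means.mean():.3f}  std≈{means.std():.4f}  "
      f"(theory: 0.500, {1 / np.sqrt(12 * N):.4f})")
```

Plotting a histogram of `means` gives the familiar bell curve even though the underlying uniform distribution looks nothing like a normal.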

A/B tests: Moving Fast vs Being Sure

Marton Trencseni - Mon 01 July 2019 • Tagged with ab-testing, fetchr

Most A/B testing tools default to α=0.05, meaning the expected false positive rate is 5%. In this post I explore the trade-offs between moving fast, i.e. using a higher α, versus being sure, i.e. using a lower α.

Slide 14

Continue reading

Beautiful A/B testing

Marton Trencseni - Sun 05 June 2016 • Tagged with ab-testing, strata, statistics, data

I gave this talk at the O’Reilly Strata Conference London in June 2016, mostly based on what I learned at Prezi from 2012 to 2016.

Slide 14

Continue reading