A/B testing and network effects

Marton Trencseni - Sat 21 March 2020 • Tagged with ab-testing

I use Monte Carlo simulations to explore how A/B testing on Watts–Strogatz random graphs depends on the degree distribution of the social network.
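Below is a minimal sketch, using networkx and illustrative parameters (my assumptions, not the post's code), of how such a graph and its degree distribution could be generated.

```python
# A minimal sketch: generate a Watts-Strogatz graph and look at its degree distribution.
import networkx as nx
from collections import Counter

N, K, P = 1000, 10, 0.1  # nodes, neighbors per node, rewiring probability (illustrative)
G = nx.watts_strogatz_graph(N, K, P)

degrees = [d for _, d in G.degree()]
for degree, count in sorted(Counter(degrees).items()):
    print(f'degree {degree}: {count} nodes')
```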

Watts-Strogatz degree distribution

Continue reading

A/B testing on social networks

Marton Trencseni - Mon 09 March 2020 • Tagged with ab-testing

I use Monte Carlo simulations to show that experimentation on social networks is a beautiful statistical problem with unexpected nuances due to network effects.

Watts-Strogatz

Continue reading

Early stopping in A/B testing

Marton Trencseni - Thu 05 March 2020 • Tagged with ab-testing

The increased false positive rate due to early stopping is a beautiful nuance of statistical testing. It is equivalent to running the test at an overall higher alpha. Data scientists need to be aware of this phenomenon so they can control it and keep their organizations honest about their experimental results.
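The effect can be reproduced with a small simulation; the sketch below is an assumed setup (an A/A test, a two-proportion z-test, made-up checkpoints and sample sizes), not the post's exact code.

```python
# A minimal sketch: an A/A test peeked at several checkpoints with a two-proportion
# z-test. Declaring significance at any peek inflates the false positive rate above alpha.
import numpy as np
from scipy.stats import norm

def z_test_p(conv_a, n_a, conv_b, n_b):
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * norm.sf(abs(z))  # two-sided p value

alpha, p_conv, checkpoints = 0.05, 0.1, [2500, 5000, 7500, 10000]
hits_end, hits_any = 0, 0
rng = np.random.default_rng(0)
for _ in range(2000):
    a = rng.random(checkpoints[-1]) < p_conv  # both groups have the same conversion rate
    b = rng.random(checkpoints[-1]) < p_conv
    ps = [z_test_p(a[:n].sum(), n, b[:n].sum(), n) for n in checkpoints]
    hits_end += ps[-1] < alpha
    hits_any += min(ps) < alpha
print(f'FPR checking only at the end: {hits_end / 2000:.3f}')
print(f'FPR with early stopping:      {hits_any / 2000:.3f}')
```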

Early stopping

Continue reading

A/B testing and Fisher's exact test

Marton Trencseni - Tue 03 March 2020 • Tagged with ab-testing

Fisher’s exact test computes essentially the same p value as the Chi-squared test, but directly, so it does not rely on the Central Limit Theorem holding.
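The sketch below, with made-up conversion counts rather than the post's data, shows the kind of side-by-side comparison scipy makes easy.

```python
# A minimal sketch: compare the p value from Fisher's exact test with the
# Chi-squared approximation on a 2x2 conversion table.
from scipy.stats import fisher_exact, chi2_contingency

#        converted  not converted
table = [[30,        970],   # variant A
         [50,        950]]   # variant B

_, p_fisher = fisher_exact(table)
_, p_chi2, _, _ = chi2_contingency(table, correction=False)
print(f"Fisher exact p value: {p_fisher:.4f}")
print(f"Chi-squared p value:  {p_chi2:.4f}")
```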

Fisher's test, Fisher Monte Carlo and Chi-squared test p values

Continue reading

A/B testing and the Chi-squared test

Marton Trencseni - Fri 28 February 2020 • Tagged with ab-testing

In an earlier post, I wrote about A/B testing conversion data with the Z-test. The Chi-squared test is a more general test for conversion data, because it can work with multiple conversion events and multiple funnels being tested (A/B/C/D/...).
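As a hedged illustration (made-up counts, not the post's data), the Chi-squared test handles an A/B/C comparison with several outcomes in a single call.

```python
# A minimal sketch: a Chi-squared test on three funnels and three outcomes at once.
from scipy.stats import chi2_contingency

#            no action  clicked  purchased
observed = [[900,        80,      20],   # funnel A
            [880,        90,      30],   # funnel B
            [850,       110,      40]]   # funnel C

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```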

Chi-squared distribution

Continue reading

A/B testing and the t-test

Marton Trencseni - Sun 23 February 2020 • Tagged with ab-testing

The t-test is better than the z-test for timespent A/B tests, because it explicitly models the uncertainty of the variance due to sampling. Using Monte Carlo simulations, I show that around N=100 the t-test effectively becomes the z-test.
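A minimal sketch with synthetic data (the distributions and sample size are my assumptions) shows how close the two tests get at N=100.

```python
# A minimal sketch: at around N=100 per group, the t-test and z-test p values
# on synthetic timespent-like data are nearly identical.
import numpy as np
from scipy.stats import ttest_ind, norm

rng = np.random.default_rng(0)
N = 100
a = rng.exponential(scale=10.0, size=N)  # timespent, group A (synthetic)
b = rng.exponential(scale=11.0, size=N)  # timespent, group B (synthetic)

_, p_t = ttest_ind(a, b, equal_var=False)
z = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / N + b.var(ddof=1) / N)
p_z = 2 * norm.sf(abs(z))
print(f"t-test p value: {p_t:.4f}")
print(f"z-test p value: {p_z:.4f}")
```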

Normal distribution vs t-distribution

Continue reading

A/B testing and the Z-test

Marton Trencseni - Sat 15 February 2020 • Tagged with ab-testing

I discuss the Z-test for A/B testing and show how to compute parameters such as sample size from first principles. I use Monte Carlo simulations to validate significance level and statistical power, and visualize parameter scaling behaviour.
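A minimal sketch of the standard two-proportion sample size formula, with illustrative conversion rates rather than the post's numbers:

```python
# A minimal sketch: per-group N needed to detect a lift from 10% to 11% conversion
# at alpha=0.05 with 80% power, using the standard two-proportion formula.
from scipy.stats import norm

def sample_size(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

print(f"N per group: {sample_size(0.10, 0.11):.0f}")
```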

Conversion difference vs N

Continue reading

Beyond the Central Limit Theorem

Marton Trencseni - Thu 06 February 2020 • Tagged with data, ab testing, statistics

In the previous post, I talked about the importance of the Central Limit Theorem (CLT) to A/B testing. Here we will explore cases when we cannot rely on the CLT to hold.
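A small sketch (not the post's code) of the classic counterexample, the Cauchy distribution:

```python
# A minimal sketch: the running mean of Cauchy samples never settles down,
# because the Cauchy distribution has no finite mean and the CLT does not apply.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_cauchy(100_000)
running_mean = np.cumsum(samples) / np.arange(1, len(samples) + 1)
for n in (100, 1_000, 10_000, 100_000):
    print(f"running mean after {n:>7} samples: {running_mean[n - 1]:+.3f}")
```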

Running mean for Cauchy distribution

Continue reading

A/B testing and the Central Limit Theorem

Marton Trencseni - Wed 05 February 2020 • Tagged with data, ab testing, statistics

When working with hypothesis testing, descriptions of statistical methods often have normality assumptions. For example, the Wikipedia page for the z-test starts like this: "A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution". What does this mean? How do I know it’s a valid assumption for my data?
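A minimal sketch (not the post's code) of the CLT in action, averaging uniform random numbers and checking the 68-95-99.7 rule:

```python
# A minimal sketch: averages of uniform random numbers quickly look normal.
import numpy as np

rng = np.random.default_rng(0)
# 100,000 averages, each of 30 uniform random numbers
means = rng.uniform(size=(100_000, 30)).mean(axis=1)
z = (means - means.mean()) / means.std()
for k in (1, 2, 3):
    print(f"within {k} sigma: {np.mean(np.abs(z) < k):.3f}")  # ~0.683, 0.954, 0.997
```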

Normal distribution from uniform

Continue reading

A/B tests: Moving Fast vs Being Sure

Marton Trencseni - Mon 01 July 2019 • Tagged with ab-testing, fetchr

Most A/B testing tools default to α=0.05, meaning the expected false positive rate is 5%. In this post I explore the trade-offs between moving fast, i.e. using a higher α, versus being sure, i.e. using a lower α.
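As a hedged illustration (reusing the standard two-proportion sample size formula with made-up conversion rates), lowering α quickly raises the required sample size, which is the cost of being sure:

```python
# A minimal sketch: required per-group sample size as a function of alpha,
# for detecting a lift from 10% to 11% conversion at 80% power.
from scipy.stats import norm

def sample_size(p1, p2, alpha, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f}: N per group = {sample_size(0.10, 0.11, alpha):.0f}")
```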

Slide 14

Continue reading

Beautiful A/B testing

Marton Trencseni - Sun 05 June 2016 • Tagged with ab-testing, strata, statistics, data

I gave this talk at the O’Reilly Strata Conference London in June 2016, mostly based on what I learned at Prezi between 2012 and 2016.

Slide 14

Continue reading