As UX continues to mature, it's becoming harder to avoid using statistics to quantify design improvements. Here are five of the more critical but challenging concepts. I didn't pick some arbitrary geeky stuff to stump you (or to get you an interview at Google). These are fundamental concepts that take practice and patience, but they are worth the effort to understand.
- Using statistics on small sample sizes: You do not need a sample size in the hundreds or thousands, or even above 30, to use statistics. I regularly compute statistics on small sample sizes (fewer than 15 users) and find statistically significant differences (a short t-test sketch follows this list).
- Power: Power is sort of like the confidence level for detecting a difference: it's the probability your study will detect a difference of a given size if one actually exists. You don't know ahead of time whether one design has a higher completion rate than another, so you plan for enough power to find out (see the power sketch after the list).
- The p-value: The p-value stands for probability value. It's the probability of observing a difference at least as large as the one you saw in a study if there were really no difference, that is, if chance alone were at work (the simulation sketch after the list illustrates this).
- Sample size: Sample size calculation remains a dark art for many practitioners because it involves several counterintuitive concepts, including power, confidence, and effect sizes. One complication is that there are different ways to compute a sample size. There are basically three ways to find the right sample size for just about any study in user research: problem detection, comparison, and precision (a sample-size sketch follows the list).
- Confidence intervals get wider as you increase your confidence level: The "95%" in the 95% confidence interval you see on my site and in publications is called the confidence level. A confidence interval is the most plausible range for the unknown population mean, but you can't be sure any single interval contains the true average. By increasing the confidence level to 99%, I make my intervals wider: the price of being more confident is having to cast a wider net (the last sketch below shows the widening).
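To make the small-sample point concrete, here is a minimal sketch in Python (assuming scipy is installed; the task-time data and both designs are made up for illustration) showing that a standard two-sample t-test runs fine, and can reach significance, with only eight users per design:

```python
from scipy import stats

# Hypothetical task times (seconds) for two designs, eight users each.
design_a = [34, 41, 38, 45, 36, 40, 39, 43]
design_b = [52, 48, 55, 50, 47, 53, 49, 56]

# Welch's t-test does not assume equal variances and works at small n.
t_stat, p_value = stats.ttest_ind(design_a, design_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```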
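For power, a sketch under assumptions: statsmodels is available, and we ask for the chance of detecting a large standardized difference (Cohen's d = 0.8) with 15 users per design. Neither the effect size nor the sample size comes from the article; both are illustrative.

```python
from statsmodels.stats.power import TTestIndPower

# Probability of detecting a large standardized difference (Cohen's d = 0.8)
# with 15 users per design at the conventional alpha of 0.05.
power = TTestIndPower().power(effect_size=0.8, nobs1=15, alpha=0.05)
print(f"power = {power:.2f}")  # roughly 0.56, i.e. about a 56% chance
```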
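One way to see what the p-value measures is a small permutation simulation (standard-library Python only; the data are hypothetical): if chance alone were at work, the design labels would be arbitrary, so we reshuffle them repeatedly and count how often a difference at least as large as the observed one appears.

```python
import random

# Hypothetical task times (seconds) for two designs, eight users each.
a = [34, 41, 38, 45, 36, 40, 39, 43]
b = [44, 48, 39, 50, 47, 42, 49, 46]
observed = abs(sum(b) / len(b) - sum(a) / len(a))

pooled = a + b
n = len(a)
random.seed(1)
trials = 10_000
# Count shuffles whose group difference is at least as large as observed.
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / n)
    if diff >= observed:
        extreme += 1

print(f"p ≈ {extreme / trials:.3f}")  # small: a gap this big rarely arises by chance
```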
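And for the comparison flavor of the sample-size question, a hedged sketch (statsmodels again; the target effect size, power, and alpha are conventional defaults, not values from the article):

```python
from statsmodels.stats.power import TTestIndPower

# Users per design needed to detect a medium difference (Cohen's d = 0.5)
# with 80% power at alpha = 0.05 in a two-group comparison.
n = TTestIndPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"users per design ≈ {n:.0f}")  # about 64
```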
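Finally, to see the widening directly, a minimal sketch (scipy assumed; the twelve scores are made up) computing a t-based confidence interval around the same mean at the 95% and 99% levels:

```python
import numpy as np
from scipy import stats

# Hypothetical questionnaire scores from 12 users.
scores = np.array([68, 74, 71, 80, 65, 77, 72, 69, 75, 70, 66, 73])
mean, sem, df = scores.mean(), stats.sem(scores), len(scores) - 1

for level in (0.95, 0.99):
    low, high = stats.t.interval(level, df, loc=mean, scale=sem)
    print(f"{level:.0%} CI: {low:.1f} to {high:.1f} (width {high - low:.1f})")
# The 99% interval comes out wider than the 95% one: more confidence, wider net.
```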