Automated Business Decision-Making Using Generative AI in Online A/B Testing: Comparative Analysis With Human Decision-Making
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11079579/
Summary: Online A/B testing is widely used as an experimental methodology for product improvement and business optimization. However, interpreting experimental results often involves subjective judgment and biases from experiment designers, which can undermine the reliability and reproducibility of test outcomes. In particular, experiment designers frequently exhibit inconsistent decision-making when dealing with neutral results (cases where neither statistically significant positive nor negative effects are observed). This study aims to explore the feasibility of automating A/B test decision-making using Generative AI and to empirically analyze how well AI decisions align with those of experiment designers and experts. Utilizing 1,407 experimental cases from 48 companies on the Hackle online experimentation platform, the study compares decision-making outcomes between experiment designers and Generative AI, analyzing agreement rates and identifying patterns across companies. Statistical analyses, including chi-square tests and inter-rater agreement evaluation, were employed to assess differences and reliability. The findings indicate meaningful discrepancies between AI and experiment designers but demonstrate that AI decisions closely align with expert judgments. These results suggest that Generative AI can serve as a complementary tool to enhance the consistency and reliability of A/B test result interpretation.
ISSN: 2169-3536
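The summary describes comparing AI and human decisions through agreement rates and inter-rater agreement statistics. As a minimal sketch of what such an analysis involves, the snippet below computes a raw agreement rate and Cohen's kappa (a common chance-corrected inter-rater agreement measure) on made-up decision labels; the categories and data are hypothetical illustrations, not values from the study:

```python
from collections import Counter

# Hypothetical decisions by experiment designers vs. a Generative AI
# on the same A/B tests (illustrative categories: adopt / hold / rollback).
designer = ["adopt", "adopt", "hold", "rollback", "hold", "adopt", "hold", "rollback"]
ai       = ["adopt", "hold",  "hold", "rollback", "adopt", "adopt", "hold", "rollback"]

def agreement_rate(a, b):
    """Fraction of cases where both raters made the same decision."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected inter-rater agreement (Cohen's kappa)."""
    n = len(a)
    po = agreement_rate(a, b)  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # expected agreement if both raters labeled independently at their marginal rates
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

print(f"agreement rate: {agreement_rate(designer, ai):.3f}")  # 0.750
print(f"Cohen's kappa:  {cohens_kappa(designer, ai):.3f}")    # 0.619
```

A kappa near 0 would mean the observed agreement is what chance alone predicts, while values closer to 1 indicate substantive agreement; the study's chi-square tests would additionally compare the distribution of decisions across raters.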