Article ID: 2025TAP0016
In this study, we investigate soft binary hypothesis testing based on a random sample, in which decisions are made according to a soft test function. To evaluate this test function, we introduce two classes of tunable loss functions and define generalized type I and type II errors, as well as Bayesian errors. We analyze the trade-offs between these errors and establish asymptotic results that extend the Neyman-Pearson lemma, the Chernoff-Stein lemma, and Chernoff information from classical binary hypothesis testing.
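As background, the following is a minimal sketch of the classical quantities that these asymptotic results generalize; the paper's tunable loss functions and generalized errors are not reproduced here, and the notation ($\varphi$, $P_0$, $P_1$) is assumed for illustration. For a soft (randomized) test $\varphi \colon \mathcal{X}^n \to [0,1]$ deciding between $H_0 \colon P_0$ and $H_1 \colon P_1$ from $n$ i.i.d. observations, the classical type I and type II errors are
\[
\alpha_n(\varphi) = \mathbb{E}_{P_0^n}\bigl[\varphi(X^n)\bigr], \qquad
\beta_n(\varphi) = \mathbb{E}_{P_1^n}\bigl[1 - \varphi(X^n)\bigr].
\]
The Chernoff-Stein lemma states that, under the constraint $\alpha_n \le \epsilon$ for fixed $\epsilon \in (0,1)$, the optimal type II error exponent is
\[
\lim_{n\to\infty} -\tfrac{1}{n}\log \beta_n^{*} = D(P_0 \,\|\, P_1),
\]
while the optimal exponent of the Bayesian (average) error probability is the Chernoff information
\[
C(P_0, P_1) = -\min_{0 \le \lambda \le 1} \log \sum_{x \in \mathcal{X}} P_0^{\lambda}(x)\, P_1^{1-\lambda}(x).
\]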