Batch testing methods extend classical multiple testing procedures to the online setting, where hypotheses arrive in batches over time and must be tested sequentially while maintaining overall FDR control across all batches.
The batching framework, developed by Zrnic et al. (2020), addresses scenarios where:

- Tests naturally arrive in groups (batches)
- Each batch can be processed using classical procedures
- Overall FDR control is required across all batches
- Adaptive alpha allocation improves power over time
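A minimal usage sketch of this pattern (illustrative only; the import path follows the example shown later on this page, and the p-values are made up):

```python
# Feed batches that arrive over time to a batch test; FDR is controlled across all of them.
from online_fdr.batching import BatchBH  # import path as used in the examples below

test = BatchBH(alpha=0.05)          # overall FDR target across all batches
incoming = [
    [0.001, 0.02, 0.15, 0.80],      # batch 1 (e.g., this week's experiments)
    [0.030, 0.90, 0.006, 0.40],     # batch 2
]
for i, batch in enumerate(incoming, start=1):
    decisions = test.test_batch(batch)   # classical BH inside, adaptive alpha across batches
    print(f"batch {i}: {sum(decisions)} discoveries")
```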
Benjamini-Hochberg procedure for online batch FDR control.
BatchBH extends the classical Benjamini-Hochberg (BH) procedure to the online batching setting, where hypotheses arrive in batches over time and must be tested sequentially while maintaining overall FDR control across all batches.
This implements Algorithm 1 from "The Power of Batching in Multiple Hypothesis Testing" by Zrnic, Jiang, Ramdas, and Jordan (2020). The key innovation is the calculation of adaptive alpha levels that account for the interdependence between batches while preserving the BH optimality within each batch.
The algorithm maintains FDR control by:

1. Allocating alpha budget using a gamma sequence
2. Adjusting for dependencies between batches via the β_t correction
3. Computing R⁺ (maximum possible rejections) for power optimization
4. Applying the standard BH procedure within each batch
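The allocation in steps 1-3 can be condensed into a few lines. The sketch below mirrors the source shown further down; `gamma` is a stand-in for the gamma-sequence weights and `alpha_for_batch` is a hypothetical helper, not part of the library API:

```python
# Condensed sketch of the BatchBH alpha allocation (0-based batch index t).
def alpha_for_batch(t, n_batch, alpha0, gamma, r_s, r_s_plus, alpha_s):
    """Compute alpha_t given the history (rejections, R+, alphas) of earlier batches."""
    if t == 0:
        return alpha0 * gamma(1)                      # first batch: alpha_1 = gamma_1 * alpha
    total_r = sum(r_s)
    beta_t = sum(                                     # alpha already "spent" on earlier batches
        alpha_s[s] * r_s_plus[s] / (r_s_plus[s] + (total_r - r_s[s]))
        for s in range(t)
        if r_s_plus[s] + (total_r - r_s[s]) > 0
    )
    gamma_sum = sum(gamma(j) for j in range(1, t + 2))
    return max(0, (gamma_sum * alpha0 - beta_t) * (n_batch + total_r) / n_batch)
```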
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| alpha | float | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Description |
|------|-------------|
| alpha0 | Original target FDR level. |
| num_test | Number of batches tested so far. |
| seq | Gamma sequence for alpha allocation across batches. |
| r_s | Number of rejections in each batch. |
| r_s_plus | Maximum possible rejections for each batch (R⁺ values). |
| alpha_s | Alpha level used for each batch. |
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
Benjamini, Y., and Y. Hochberg (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society: Series B, 57(1):289-300.
```python
class BatchBH(AbstractBatchingTest):
    """Benjamini-Hochberg procedure for online batch FDR control.

    Examples:
        >>> # Basic batch testing
        >>> bh = BatchBH(alpha=0.05)
        >>> batch1 = [0.001, 0.02, 0.15, 0.8]
        >>> decisions1 = bh.test_batch(batch1)
        >>> print(f"Batch 1 discoveries: {sum(decisions1)}")

        >>> # Sequential batches with adaptive alpha
        >>> batch2 = [0.03, 0.9, 0.006, 0.4]
        >>> decisions2 = bh.test_batch(batch2)  # Alpha adjusted based on batch1
        >>> print(f"Batch 2 discoveries: {sum(decisions2)}")

        >>> # Multiple batches
        >>> batches = [[0.001, 0.8], [0.02, 0.3], [0.005, 0.9]]
        >>> all_decisions = []
        >>> for i, batch in enumerate(batches):
        ...     decisions = bh.test_batch(batch)
        ...     all_decisions.append(decisions)
        ...     print(f"Batch {i+1}: {sum(decisions)} discoveries")
    """

    def __init__(self, alpha: float):
        """Initialize BatchBH with FDR control level alpha.

        Args:
            alpha: Target FDR control level. Must be in (0, 1).

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0 = alpha
        self.num_test = 0   # Number of batches tested so far
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.r_s_plus = []  # R^+ values for each batch
        self.r_s = []       # R values (number of rejections) for each batch
        self.alpha_s = []   # Alpha values used for each batch

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values using the BatchBH procedure.

        Args:
            p_vals: List of p-values for the current batch.

        Returns:
            List of boolean values indicating which hypotheses are rejected.
        """
        n_batch = len(p_vals)
        t = self.num_test  # Current batch index (0-based)

        if t == 0:
            # First batch: α₁ = γ₁α
            alpha_t = self.alpha0 * self.seq.calc_gamma(j=1)
        else:
            # Calculate β_t
            beta_t = 0
            total_rejections = sum(self.r_s)  # Total rejections so far
            for s in range(t):
                # For each previous batch s, calculate its contribution to β_t.
                # Denominator is R^+_s plus the rejections from all other batches up to t-1.
                rejections_from_other_batches = total_rejections - self.r_s[s]
                denominator = self.r_s_plus[s] + rejections_from_other_batches
                if denominator > 0:
                    beta_t += self.alpha_s[s] * self.r_s_plus[s] / denominator

            # Calculate α_t = (Σ_{s≤t} γ_s α - β_t) × (n_t + Σ_{s<t} R_s) / n_t
            gamma_sum = sum(self.seq.calc_gamma(j=i + 1) for i in range(t + 1))
            numerator = gamma_sum * self.alpha0 - beta_t
            alpha_t = numerator * (n_batch + total_rejections) / n_batch

            # Ensure alpha_t is non-negative
            alpha_t = max(0, alpha_t)

        # Run BH on the current batch
        num_reject, threshold = bh(p_vals, alpha_t)

        # Calculate R^+_t (maximum rejections if one p-value is set to 0)
        r_plus = num_reject  # Start with current rejections
        for i in range(len(p_vals)):
            original_p = p_vals[i]
            p_vals[i] = 0            # Temporarily set p-value to 0
            temp_reject, _ = bh(p_vals, alpha_t)
            r_plus = max(r_plus, temp_reject)
            p_vals[i] = original_p   # Restore the original p-value

        # Store results for future batches
        self.r_s.append(num_reject)
        self.r_s_plus.append(r_plus)
        self.alpha_s.append(alpha_t)
        self.num_test += 1

        # Return rejection decisions
        return [p_val <= threshold for p_val in p_vals]
```
Storey-Benjamini-Hochberg procedure for online batch FDR control.
BatchStoreyBH extends the online batching framework to incorporate Storey's π₀ estimation method, which estimates the proportion of true null hypotheses within each batch. This provides enhanced power when the true null proportion is less than 1, particularly in genomics and other high-dimensional settings.
The algorithm combines the adaptive alpha allocation strategy from BatchBH with Storey's modification of the Benjamini-Hochberg procedure, which adjusts the rejection threshold based on the estimated proportion of true nulls (π₀).
Key innovations:

1. Per-batch π₀ estimation using Storey's method
2. Adaptive alpha allocation across batches
3. Integration with the online batching framework
4. Enhanced power when π₀ < 1
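Step 1 can be illustrated with a small, self-contained sketch of Storey's estimator (a stand-in for the library's internal computation; the p-values are made up):

```python
# Storey's pi0 estimate used per batch: pi0_hat = min(1, (1 + #{p > lambda}) / (n * (1 - lambda)))
def estimate_pi0(p_vals: list[float], lambda_: float = 0.5) -> float:
    n = len(p_vals)
    num_above = sum(1 for p in p_vals if p > lambda_)
    return min(1.0, (1 + num_above) / (n * (1 - lambda_)))

p = [0.001, 0.002, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.7]
print(estimate_pi0(p))  # 0.4 -- mostly small p-values, so pi0_hat is well below 1 and Storey-BH gains power
```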
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| alpha | float | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
| lambda_ | float | Threshold parameter for π₀ estimation. Must be in (0, 1). Typically set to 0.5. Higher values give more conservative estimates of π₀ but may reduce power. | required |
Attributes:

| Name | Type | Description |
|------|------|-------------|
| alpha0 | float | Original target FDR level. |
| num_test | int | Number of batches tested so far. |
| lambda_ | float | Threshold parameter for π₀ estimation. |
| seq | | Gamma sequence for alpha allocation across batches. |
| pi0_estimates | list[float] | π₀ estimates for each batch. |
| r_s_plus | list[float] | Maximum possible rejections for each batch. |
| r_s | list[bool] | Rejection indicators for each batch. |
| r_total | int | Total number of rejections across all batches. |
| alpha_s | list[float] | Alpha levels used for each batch. |
Examples:
```python
>>> # Basic usage with π₀ estimation
>>> storey_bh = BatchStoreyBH(alpha=0.05, lambda_=0.5)
>>> batch1 = [0.001, 0.02, 0.15, 0.8, 0.9]
>>> decisions1 = storey_bh.test_batch(batch1)
>>> print(f"π₀ estimate for batch 1: {storey_bh.pi0_estimates[-1]:.3f}")

>>> # Comparing different lambda values
>>> storey_conservative = BatchStoreyBH(alpha=0.05, lambda_=0.8)
>>> storey_liberal = BatchStoreyBH(alpha=0.05, lambda_=0.3)
>>> # Conservative λ gives higher π₀ estimates, liberal λ gives lower
```
References
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
Storey, J. D. (2002). "A direct approach to false discovery rates." Journal of the Royal Statistical Society: Series B, 64(3):479-498.
Storey, J. D. (2003). "The positive false discovery rate: a Bayesian interpretation and the q-value." Annals of Statistics, 31(6):2013-2035.
```python
class BatchStoreyBH(AbstractBatchingTest):
    """Storey-Benjamini-Hochberg procedure for online batch FDR control.

    Examples:
        >>> # Basic usage with π₀ estimation
        >>> storey_bh = BatchStoreyBH(alpha=0.05, lambda_=0.5)
        >>> batch1 = [0.001, 0.02, 0.15, 0.8, 0.9]
        >>> decisions1 = storey_bh.test_batch(batch1)
        >>> print(f"π₀ estimate for batch 1: {storey_bh.pi0_estimates[-1]:.3f}")

        >>> # Sequential batches with adaptive power
        >>> batch2 = [0.03, 0.7, 0.006, 0.4, 0.85]
        >>> decisions2 = storey_bh.test_batch(batch2)
        >>> print(f"Batch 1 discoveries: {sum(decisions1)}")
        >>> print(f"Batch 2 discoveries: {sum(decisions2)}")

        >>> # Comparing different lambda values
        >>> storey_conservative = BatchStoreyBH(alpha=0.05, lambda_=0.8)
        >>> storey_liberal = BatchStoreyBH(alpha=0.05, lambda_=0.3)
        >>> # Conservative λ gives higher π₀ estimates, liberal λ gives lower
    """

    def __init__(self, alpha: float, lambda_: float):
        """Initialize BatchStoreyBH with FDR control level and π₀ estimation parameter.

        Raises:
            ValueError: If alpha or lambda_ are not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.num_test: int = 1
        self.lambda_: float = lambda_
        if not 0 < lambda_ < 1:
            raise ValueError("lambda_ must be between 0 and 1")
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.pi0_estimates: list[float] = []  # Store π₀ estimates per batch
        self.r_s_plus: list[float] = []
        self.r_s: list[bool] = []
        self.r_total: int = 0
        self.r_sums: list[float] = [0]
        self.alpha_s: list[float] = []

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values using the Storey-BH procedure with adaptive alpha.

        The Storey π₀ estimation uses the formula:
            π̂₀ = min(1, (1 + #{p > λ}) / (n × (1 - λ)))

        Args:
            p_vals: List of p-values for the current batch.

        Returns:
            List of boolean values indicating which hypotheses are rejected.
        """
        n_batch = len(p_vals)
        if n_batch == 0:
            return []

        # Estimate π₀ for this batch using Storey's method
        num_above_lambda = sum(1 for p in p_vals if p > self.lambda_)
        pi0_batch = min(1.0, (1 + num_above_lambda) / (n_batch * (1 - self.lambda_)))
        self.pi0_estimates.append(pi0_batch)

        # Calculate adaptive alpha for this batch
        if self.num_test == 1:  # First batch
            self.alpha = self.alpha0 * self.seq.calc_gamma(j=1)
        else:  # Subsequent batches: follow the BatchBH framework
            gamma_sum = sum(self.seq.calc_gamma(i) for i in range(1, self.num_test + 1))
            self.alpha = gamma_sum * self.alpha0
            # Subtract previously spent alpha (adjusted by π₀ estimates)
            beta_t = sum(
                self.alpha_s[i] * self.pi0_estimates[i] * self.r_s_plus[i]
                / (self.r_s_plus[i] + self.r_sums[i + 1])
                for i in range(0, self.num_test - 1)
                if self.r_s_plus[i] + self.r_sums[i + 1] > 0
            )
            self.alpha = max(0, self.alpha - beta_t)
            # Adjust for batch size
            if n_batch > 0:
                self.alpha *= (n_batch + self.r_total) / n_batch

        # Apply the Storey-BH procedure with the current alpha
        num_reject, threshold = storey_bh(p_vals, self.alpha, self.lambda_)

        # Update running statistics
        self.r_sums.append(self.r_total)
        self.r_sums[1:self.num_test] = [x + num_reject for x in self.r_sums[1:self.num_test]]
        self.r_total += num_reject
        self.alpha_s.append(self.alpha)

        # Calculate R⁺ efficiently
        r_plus = self._calculate_r_plus(p_vals)
        self.r_s_plus.append(r_plus)

        self.num_test += 1
        return [p_val <= threshold for p_val in p_vals]

    def _calculate_r_plus(self, p_vals: list[float]) -> int:
        """Calculate R⁺ (maximum rejections if one p-value is set to 0).

        R⁺ is used in the BatchBH framework to determine how much alpha to
        allocate to future batches. It is computed efficiently by augmenting
        the batch with a p-value of 0 and running the Storey-BH procedure once.

        Returns:
            Maximum number of additional rejections possible with one perfect signal.
        """
        if not p_vals:
            return 0
        # Create a copy with an additional p-value of 0
        augmented_p_vals = p_vals + [0.0]
        r_plus, _ = storey_bh(augmented_p_vals, self.alpha, self.lambda_)
        # Since the added p-value of 0 is always rejected, R⁺ is the total minus 1
        return max(0, r_plus - 1)
```
Test a batch of p-values using the Storey-BH procedure with adaptive alpha.
For each batch, this method:

1. Estimates π₀ (proportion of true nulls) using Storey's method
2. Calculates an adaptive alpha level based on previous batches
3. Applies the Storey-BH procedure with the calculated alpha
4. Updates internal statistics for future batches
Benjamini-Yekutieli procedure for online batch FDR control under dependence.
BatchBY extends the online batching framework to use the Benjamini-Yekutieli (BY) procedure, which provides FDR control even under arbitrary dependence among p-values. This makes it particularly suitable for situations where the independence assumption may be violated, such as in spatial statistics, time series analysis, or genomics with linkage disequilibrium.
The BY procedure is a modification of the Benjamini-Hochberg (BH) procedure that uses harmonic weights to maintain FDR control under arbitrary dependence. While this comes at the cost of reduced power compared to BH, it provides robust FDR control in challenging dependence scenarios.
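To make the harmonic correction concrete, here is a minimal, self-contained sketch of the static BY rule (this is not the library's `by` helper; names and structure are illustrative):

```python
# Static Benjamini-Yekutieli: the BH threshold is deflated by the harmonic sum
# H_m = 1 + 1/2 + ... + 1/m, which buys FDR control under arbitrary dependence.
def by_reject(p_vals: list[float], alpha: float) -> list[bool]:
    m = len(p_vals)
    harmonic = sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: p_vals[i])   # indices sorted by p-value
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_vals[idx] <= rank * alpha / (m * harmonic):
            k_max = rank                                  # largest rank passing the BY condition
    threshold = p_vals[order[k_max - 1]] if k_max else 0.0
    return [p <= threshold for p in p_vals]

print(by_reject([0.001, 0.002, 0.15, 0.8], alpha=0.05))  # [True, True, False, False]
```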
The algorithm follows the online batching framework, allocating alpha budget across batches using a gamma sequence and adjusting for inter-batch dependencies through the β_t correction mechanism.
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| alpha | float | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Type | Description |
|------|------|-------------|
| alpha0 | float | Original target FDR level. |
| num_test | int | Number of batches tested so far. |
| seq | | Gamma sequence for alpha allocation across batches. |
| r_s_plus | list[float] | Maximum possible rejections for each batch. |
| r_s | list[bool] | Rejection indicators for each batch. |
| r_total | int | Total number of rejections across all batches. |
| r_sums | list[float] | Cumulative rejection counts for dependency tracking. |
| alpha_s | list[float] | Alpha levels used for each batch. |
Examples:
```python
>>> # Basic usage for dependent p-values
>>> by_test = BatchBY(alpha=0.05)
>>> # Test correlated p-values (e.g., from spatial data)
>>> batch1 = [0.001, 0.002, 0.15, 0.8]  # May be dependent
>>> decisions1 = by_test.test_batch(batch1)
>>> print(f"BY discoveries in batch 1: {sum(decisions1)}")

>>> # Sequential dependent batches
>>> batch2 = [0.03, 0.04, 0.006, 0.4]  # Also potentially dependent
>>> decisions2 = by_test.test_batch(batch2)
>>> print(f"BY discoveries in batch 2: {sum(decisions2)}")

>>> # Comparing with standard BH under dependence
>>> from online_fdr.batching import BatchBH
>>> bh_test = BatchBH(alpha=0.05)
>>> by_test = BatchBY(alpha=0.05)
>>> # BY provides guaranteed FDR control, BH may not under dependence
```
Notes
The BY procedure is particularly recommended when:

- P-values exhibit positive dependence
- Spatial or temporal correlation is present
- Conservative FDR control is required
- The exact dependence structure is unknown
Trade-off: Enhanced robustness comes at the cost of reduced power compared to the standard Benjamini-Hochberg procedure.
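A rough side-by-side on a single batch illustrates the trade-off (a sketch only: it assumes both classes are importable from the module used in the example above, and exact counts depend on the gamma sequence):

```python
# Same batch, same target FDR level; BY uses a smaller effective threshold than BH.
from online_fdr.batching import BatchBH, BatchBY  # assumed import path, as in the example above

batch = [0.001, 0.002, 0.15, 0.8]
bh_hits = sum(BatchBH(alpha=0.05).test_batch(batch))
by_hits = sum(BatchBY(alpha=0.05).test_batch(batch))
print(bh_hits, by_hits)   # expect by_hits <= bh_hits: BY pays for robustness with power
```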
References
Benjamini, Y., and D. Yekutieli (2001). "The control of the false discovery rate in multiple testing under dependency." Annals of Statistics, 29(4):1165-1188.
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
```python
class BatchBY(AbstractBatchingTest):
    """Benjamini-Yekutieli procedure for online batch FDR control under dependence."""

    def __init__(self, alpha: float):
        """Initialize BatchBY with FDR control level.

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.num_test: int = 1
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.r_s_plus: list[float] = []
        self.r_s: list[bool] = []
        self.r_total: int = 0
        self.r_sums: list[float] = [0]
        self.alpha_s: list[float] = []

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values using the Benjamini-Yekutieli procedure."""
        n_batch = len(p_vals)

        # Adaptive alpha for the current batch
        if self.num_test == 1:
            self.alpha = self.alpha0 * self.seq.calc_gamma(j=1)
        else:
            self.alpha = (
                sum(self.seq.calc_gamma(i) for i in range(1, self.num_test + 1))
                * self.alpha0
            )
            # Subtract alpha already spent on previous batches
            # (guard against an empty denominator)
            self.alpha -= sum(
                self.alpha_s[i] * self.r_s_plus[i] / (self.r_s_plus[i] + self.r_sums[i + 1])
                for i in range(0, self.num_test - 1)
                if self.r_s_plus[i] + self.r_sums[i + 1] > 0
            )
            self.alpha *= (n_batch + self.r_total) / n_batch

        # Apply the BY procedure at the adapted level
        num_reject, threshold = by(p_vals, self.alpha)

        # Update running rejection counts
        self.r_sums.append(self.r_total)
        self.r_sums[1:self.num_test] = [x + num_reject for x in self.r_sums[1:self.num_test]]
        self.r_total += num_reject
        self.alpha_s.append(self.alpha)

        # R⁺: maximum rejections if one p-value were set to 0
        r_plus = 0
        for i, p_val in enumerate(p_vals):
            p_vals[i] = 0
            r_plus = max(r_plus, by(p_vals, self.alpha)[0])
            p_vals[i] = p_val
        self.r_s_plus.append(r_plus)

        self.num_test += 1
        return [p_val <= threshold for p_val in p_vals]
```
Test a batch of p-values using the Benjamini-Yekutieli procedure.
The BY procedure provides FDR control under arbitrary dependence among p-values by using harmonic weights in the rejection threshold calculation. This method adapts the static BY procedure to the online batching framework.
The algorithm:

1. Calculates an adaptive alpha level for the current batch
2. Applies the BY procedure with harmonic correction
3. Updates statistics for future batch calculations
4. Computes R⁺ for inter-batch dependency handling
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| p_vals | list[float] | List of p-values for the current batch. | required |

Returns:

| Type | Description |
|------|-------------|
| list[bool] | List of boolean values indicating which hypotheses are rejected. |
Examples:
```python
>>> by_test = BatchBY(alpha=0.05)
>>> # Test potentially dependent p-values
>>> decisions = by_test.test_batch([0.001, 0.002, 0.15, 0.8])
>>> print(f"Rejections with BY: {sum(decisions)}")
```
Note
The BY procedure is more conservative than BH but maintains FDR control even when p-values are positively dependent.
Batch FDR control under Positive Regression Dependency on a Subset (PRDS).
BatchPRDS provides FDR control when p-values within each batch satisfy the PRDS condition - a form of positive dependence that is less restrictive than independence but more structured than arbitrary dependence. This makes it suitable for applications where there is positive correlation between test statistics, such as in genomics or neuroimaging.
The algorithm extends the classical Benjamini-Hochberg procedure to the online batching setting under PRDS conditions. It allocates alpha budget across batches using a gamma sequence and adjusts the significance level based on the number of previous discoveries and current batch size.
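Concretely, the per-batch level reduces to a single expression. The sketch below condenses the computation from the source shown further down; `prds_alpha` is a hypothetical helper and the γ value in the example is purely illustrative:

```python
# Per-batch level used by BatchPRDS: alpha_t = alpha0 * gamma_t * (n_t + R_total) / n_t
def prds_alpha(alpha0: float, gamma_t: float, batch_size: int, r_total: int) -> float:
    return alpha0 * gamma_t * (batch_size + r_total) / batch_size

# Example: a batch of 100 tests after 12 earlier discoveries, with gamma_t = 0.07 (made-up weight)
print(prds_alpha(0.05, 0.07, 100, 12))   # 0.05 * 0.07 * 112/100 = 0.00392
```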
PRDS (Positive Regression Dependency on a Subset) means that for any subset of true null hypotheses, the joint distribution of corresponding p-values is stochastically smaller when conditioned on smaller values of other p-values. This includes many practically relevant dependence structures.
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| alpha | float | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Type | Description |
|------|------|-------------|
| alpha0 | | Original target FDR level. |
| seq | | Gamma sequence for alpha allocation across batches. |
| num_test | int | Number of batches tested so far. |
| r_total | int | Total number of rejections across all batches. |
| alpha_s | | Alpha levels used for each batch (stored for testing). |
Examples:
```python
>>> # Basic usage under PRDS conditions
>>> prds_test = BatchPRDS(alpha=0.05)
>>> # Test batch with positive correlation (e.g., genetic data)
>>> batch1 = [0.001, 0.005, 0.15, 0.8]  # Positively correlated
>>> decisions1 = prds_test.test_batch(batch1)
>>> print(f"PRDS discoveries: {sum(decisions1)}")

>>> # PRDS vs standard BH under positive dependence
>>> # PRDS maintains FDR control while BH may be conservative
```
Notes
PRDS conditions are satisfied in many practical scenarios:

- Genomic association studies with linkage disequilibrium
- Neuroimaging with spatial smoothing
- Financial time series with positive correlation
- Social network analysis with homophily
The algorithm provides exact FDR control under PRDS while maintaining good power compared to more conservative methods like BY.
References
Zrnic, T., A. Ramdas, and M. I. Jordan (2018). "Asynchronous Online Testing of Multiple Hypotheses." arXiv preprint arXiv:1812.05068.
Benjamini, Y., and D. Yekutieli (2001). "The control of the false discovery rate in multiple testing under dependency." Annals of Statistics, 29(4):1165-1188.
Benjamini, Y., and D. Yekutieli (2001). "On the Adaptive Control of the False Discovery Rate in Multiple Testing With Independent Statistics." Journal of Educational and Behavioral Statistics, 25(1):60-83.
```python
class BatchPRDS(AbstractBatchingTest):
    """Batch FDR control under Positive Regression Dependency on a Subset (PRDS).

    Examples:
        >>> # Basic usage under PRDS conditions
        >>> prds_test = BatchPRDS(alpha=0.05)
        >>> batch1 = [0.001, 0.005, 0.15, 0.8]  # Positively correlated
        >>> decisions1 = prds_test.test_batch(batch1)
        >>> print(f"PRDS discoveries: {sum(decisions1)}")

        >>> # Subsequent batch - alpha adjusted for previous discoveries
        >>> batch2 = [0.02, 0.03, 0.4, 0.9]
        >>> decisions2 = prds_test.test_batch(batch2)
        >>> print(f"Total discoveries: {prds_test.r_total}")
    """

    def __init__(self, alpha: float):
        """Initialize BatchPRDS with FDR control level.

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0 = alpha
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.num_test: int = 1
        self.r_total: int = 0
        self.alpha_s = []  # only for test

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values under PRDS conditions.

        Note:
            This method assumes that p-values within the batch satisfy PRDS
            conditions. If this assumption is violated, FDR control may not
            be maintained.
        """
        batch_size = len(p_vals)

        # Adaptive level: alpha_t = alpha0 * gamma_t * (n_t + R_total) / n_t
        self.alpha = (
            self.alpha0
            * self.seq.calc_gamma(self.num_test)
            / batch_size
            * (batch_size + self.r_total)
        )
        self.alpha_s.append(self.alpha)

        # Standard BH within the batch at the adapted level
        num_reject, threshold = bh(p_vals, self.alpha)

        self.r_total += num_reject
        self.num_test += 1
        return [p_val <= threshold for p_val in p_vals]
```
The algorithm calculates an adaptive significance level based on the gamma sequence, current batch size, and total previous discoveries. It then applies the standard Benjamini-Hochberg procedure with this adapted alpha level.
The alpha calculation incorporates:

- Gamma sequence value for the current batch number
- Batch size normalization
- Adjustment for accumulated discoveries
Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| p_vals | list[float] | List of p-values for the current batch. Must satisfy PRDS conditions within the batch. | required |

Returns:

| Type | Description |
|------|-------------|
| list[bool] | List of boolean values indicating which hypotheses are rejected. |
For each batch, the algorithm computes R⁺ (the maximum possible rejections if one p-value were 0):

- Used to determine optimal alpha allocation for future batches
- Balances current discoveries with future testing power
- Key innovation enabling adaptive power allocation
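The sketch below shows how R⁺ is obtained in the BatchBH source above; `bh` is assumed to be any helper returning `(num_rejections, threshold)` for a batch at a given level, passed in explicitly here to keep the example self-contained:

```python
# R^+: re-run BH with each p-value forced to 0 in turn and keep the largest rejection count.
def r_plus(p_vals: list[float], alpha: float, bh) -> int:
    best, _ = bh(p_vals, alpha)
    for i, original in enumerate(p_vals):
        p_vals[i] = 0.0                      # pretend hypothesis i is a perfect signal
        best = max(best, bh(p_vals, alpha)[0])
        p_vals[i] = original                 # restore the original p-value
    return best
```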
When batches complete out of order:

- Maintain batch ordering for FDR calculations
- Buffer results until dependencies are resolved
- Use timestamps or sequence numbers
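One hedged way to implement such buffering (not part of the library; the helper and its signature are hypothetical, and `test` stands for any batch test exposing `test_batch()`):

```python
# Release buffered batches strictly in sequence-number order.
def drain_in_order(buffer: dict[int, list[float]], next_idx: int, test) -> int:
    while next_idx in buffer:
        test.test_batch(buffer.pop(next_idx))
        next_idx += 1
    return next_idx   # first sequence number still missing
```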
The framework naturally handles:

- Different batch sizes across time
- Empty batches (no effect on FDR control)
- Very large batches (may need memory management)
Converting online methods to the batch setting:

- Group individual tests into batches
- Apply the batch framework with a chosen internal procedure
- May improve power over pure online methods
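A minimal sketch of this conversion, assuming only that the batch procedure exposes `test_batch()` as documented above (the helper name and batch size are illustrative):

```python
# Group a stream of individual p-values into fixed-size batches before testing.
def test_stream_in_batches(p_stream, batch_test, batch_size: int = 50):
    decisions = []
    buffer: list[float] = []
    for p in p_stream:
        buffer.append(p)
        if len(buffer) == batch_size:
            decisions.extend(batch_test.test_batch(buffer))
            buffer = []
    if buffer:                                   # flush the final partial batch
        decisions.extend(batch_test.test_batch(buffer))
    return decisions
```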
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
Benjamini, Y., and Y. Hochberg (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society: Series B, 57(1):289-300.
Storey, J. D. (2002). "A direct approach to false discovery rates." Journal of the Royal Statistical Society: Series B, 64(3):479-498.