Batch testing methods extend classical multiple testing procedures to the online setting, where hypotheses arrive in batches over time and must be tested sequentially while maintaining overall FDR control across all batches.
The batching framework, developed by Zrnic et al. (2020), addresses scenarios where:

- Tests naturally arrive in groups (batches)
- Each batch can be processed using classical procedures
- Overall FDR control is required across all batches
- Adaptive alpha allocation improves power over time
Benjamini-Hochberg procedure for online batch FDR control.
BatchBH extends the classical Benjamini-Hochberg (BH) procedure to the online batching setting, where hypotheses arrive in batches over time and must be tested sequentially while maintaining overall FDR control across all batches.
This implements Algorithm 1 from "The Power of Batching in Multiple Hypothesis Testing" by Zrnic, Jiang, Ramdas, and Jordan (2020). The key innovation is the calculation of adaptive alpha levels that account for the interdependence between batches while preserving the BH optimality within each batch.
The algorithm maintains FDR control by:

1. Allocating alpha budget using a gamma sequence
2. Adjusting for dependencies between batches via β_t correction
3. Computing R^+ (maximum possible rejections) for power optimization
4. Applying the standard BH procedure within each batch
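In the paper's notation (batches indexed from 1, with \(n_t\) the size of batch \(t\), \(R_s\) the rejections in batch \(s\), and \(R_s^{+}\) the maximum attainable rejections), the update implemented in the source below can be written as:

\[
\alpha_t \;=\; \Big(\alpha \sum_{s \le t} \gamma_s \;-\; \beta_t\Big)\,\frac{n_t + \sum_{s < t} R_s}{n_t},
\qquad
\beta_t \;=\; \sum_{s < t} \alpha_s \,\frac{R_s^{+}}{R_s^{+} + \sum_{r < t,\; r \ne s} R_r},
\]

with \(\alpha_t\) clipped at zero when the correction term dominates.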
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `alpha` | `float` | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `alpha0` | `float` | Original target FDR level. |
| `num_test` | `int` | Number of batches tested so far. |
| `seq` | | Gamma sequence for alpha allocation across batches. |
| `r_s` | `list[int]` | Number of rejections in each batch. |
| `r_s_plus` | `list[int]` | Maximum possible rejections for each batch (R^+ values). |
| `alpha_s` | `list[float]` | Alpha levels used for each batch. |
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
Benjamini, Y., and Y. Hochberg (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society: Series B, 57(1):289-300.
```python
class BatchBH(AbstractBatchingTest):
    """Benjamini-Hochberg procedure for online batch FDR control.

    BatchBH extends the classical Benjamini-Hochberg (BH) procedure to the
    online batching setting, where hypotheses arrive in batches over time and
    must be tested sequentially while maintaining overall FDR control across
    all batches.

    This implements Algorithm 1 from "The Power of Batching in Multiple
    Hypothesis Testing" by Zrnic, Jiang, Ramdas, and Jordan (2020). The key
    innovation is the calculation of adaptive alpha levels that account for
    the interdependence between batches while preserving the BH optimality
    within each batch.

    The algorithm maintains FDR control by:
    1. Allocating alpha budget using a gamma sequence
    2. Adjusting for dependencies between batches via β_t correction
    3. Computing R^+ (maximum possible rejections) for power optimization
    4. Applying the standard BH procedure within each batch

    Args:
        alpha: Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1).

    Attributes:
        alpha0: Original target FDR level.
        num_test: Number of batches tested so far.
        seq: Gamma sequence for alpha allocation across batches.
        r_s: Number of rejections in each batch.
        r_s_plus: Maximum possible rejections for each batch (R^+ values).
        alpha_s: Alpha level used for each batch.

    Examples:
        >>> # Basic batch testing
        >>> bh = BatchBH(alpha=0.05)
        >>> batch1 = [0.001, 0.02, 0.15, 0.8]
        >>> decisions1 = bh.test_batch(batch1)
        >>> print(f"Batch 1 discoveries: {sum(decisions1)}")

        >>> # Sequential batches with adaptive alpha
        >>> batch2 = [0.03, 0.9, 0.006, 0.4]
        >>> decisions2 = bh.test_batch(batch2)  # Alpha adjusted based on batch1
        >>> print(f"Batch 2 discoveries: {sum(decisions2)}")

        >>> # Multiple batches
        >>> batches = [[0.001, 0.8], [0.02, 0.3], [0.005, 0.9]]
        >>> all_decisions = []
        >>> for i, batch in enumerate(batches):
        ...     decisions = bh.test_batch(batch)
        ...     all_decisions.append(decisions)
        ...     print(f"Batch {i+1}: {sum(decisions)} discoveries")

    References:
        Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of
        Batching in Multiple Hypothesis Testing." Proceedings of the 37th
        International Conference on Machine Learning (ICML), PMLR,
        119:11504-11515.

        Benjamini, Y., and Y. Hochberg (1995). "Controlling the False
        Discovery Rate: A Practical and Powerful Approach to Multiple
        Testing." Journal of the Royal Statistical Society: Series B,
        57(1):289-300.
    """

    def __init__(self, alpha: float):
        """Initialize BatchBH with FDR control level alpha.

        Args:
            alpha: Target FDR control level. Must be in (0, 1).

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0 = alpha
        self.num_test = 0  # Number of batches tested so far
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.r_s_plus: list[int] = []  # R^+ values for each batch
        self.r_s: list[int] = []  # R values (number of rejections) for each batch
        self.alpha_s: list[float] = []  # Alpha values used for each batch

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values using the BatchBH procedure.

        Args:
            p_vals: List of p-values for the current batch

        Returns:
            List of boolean values indicating which hypotheses are rejected
        """
        p_vals_local = list(p_vals)
        n_batch = len(p_vals_local)
        if n_batch == 0:
            return []
        validity.check_p_vals_batch(p_vals_local)

        t = self.num_test  # Current batch index (0-based)
        if t == 0:
            # First batch: α₁ = γ₁α
            alpha_t = self.alpha0 * self.seq.calc_gamma(j=1)
        else:
            # Calculate β_t
            beta_t = 0.0
            total_rejections_except_s = sum(self.r_s)  # Total rejections so far
            for s in range(t):
                # For each previous batch s, calculate its contribution to β_t
                # Denominator is R^+_s + sum of all other rejections up to t-1
                rejections_from_other_batches = total_rejections_except_s - self.r_s[s]
                denominator = self.r_s_plus[s] + rejections_from_other_batches
                if denominator > 0:
                    beta_t += self.alpha_s[s] * self.r_s_plus[s] / denominator
            # Calculate α_t = (Σ_{s≤t} γ_s α - β_t) × (n_t + Σ_{s<t} R_s) / n_t
            gamma_sum = sum(self.seq.calc_gamma(j=i + 1) for i in range(t + 1))
            numerator = gamma_sum * self.alpha0 - beta_t
            total_prev_rejections = sum(self.r_s)
            alpha_t = numerator * (n_batch + total_prev_rejections) / n_batch
            # Ensure alpha_t is non-negative
            alpha_t = max(0, alpha_t)

        # Run BH on current batch
        num_reject, threshold = bh(p_vals_local, alpha_t)

        # Calculate R^+_t (maximum rejections if one p-value is set to 0)
        r_plus = num_reject  # Start with current rejections
        adjusted = list(p_vals_local)
        for i in range(len(adjusted)):
            # Temporarily set p-value to 0
            original_p = adjusted[i]
            adjusted[i] = 0.0
            temp_reject, _ = bh(adjusted, alpha_t)
            r_plus = max(r_plus, temp_reject)
            # Restore original p-value
            adjusted[i] = original_p

        # Store results
        self.r_s.append(num_reject)
        self.r_s_plus.append(r_plus)
        self.alpha_s.append(alpha_t)
        self.num_test += 1

        # Return rejection decisions
        return [p_val <= threshold for p_val in p_vals_local]
```
```python
class BatchStoreyBH(AbstractBatchingTest):
    """Storey-BH batch procedure for online batch-level FDR control."""

    def __init__(self, alpha: float, lambda_: float):
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.num_test: int = 1
        self.lambda_: float = lambda_
        if not 0 < lambda_ < 1:
            raise ValueError("lambda_ must be between 0 and 1.")
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.k_s: list[int] = []
        self.pi0_estimates: list[float] = []
        self.r_s_plus: list[int] = []
        self.r_sums: list[int] = []
        self.alpha_s: list[float] = []

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        p_vals_local = list(p_vals)
        n_batch = len(p_vals_local)
        if n_batch == 0:
            return []
        validity.check_p_vals_batch(p_vals_local)
        if self.num_test == 1:
            self.alpha = self.alpha0 * self.seq.calc_gamma(j=1)
        else:
            gamma_sum = self.alpha0 * sum(
                self.seq.calc_gamma(i) for i in range(1, self.num_test + 1)
            )
            total_rejections = sum(self.r_sums)
            penalty = 0.0
            for idx in range(self.num_test - 1):
                denom = self.r_s_plus[idx] + (total_rejections - self.r_sums[idx])
                if denom > 0:
                    penalty += (
                        self.k_s[idx]
                        * self.alpha_s[idx]
                        * (self.r_s_plus[idx] / denom)
                    )
            self.alpha = (gamma_sum - penalty) * ((n_batch + total_rejections) / n_batch)
        batch_decisions, pi0_batch = self._storey_batch_decisions(p_vals_local, self.alpha)
        num_reject = sum(batch_decisions)
        self.k_s.append(int(max(p_vals_local) > self.lambda_))
        self.pi0_estimates.append(pi0_batch)
        self.r_sums.append(num_reject)
        self.alpha_s.append(self.alpha)
        r_plus = self._calculate_r_plus(p_vals_local)
        self.r_s_plus.append(r_plus)
        self.num_test += 1
        return batch_decisions

    def _calculate_r_plus(self, p_vals: list[float]) -> int:
        """Compute R+ via one-coordinate replacement p_i <- 0 on same batch size."""
        if not p_vals:
            return 0
        r_plus = 0
        n = len(p_vals)
        for i in range(n):
            pseudo_pvals = p_vals[:i] + p_vals[i + 1:] + [0.0]
            pseudo_rejections, _ = self._storey_batch_decisions(pseudo_pvals, self.alpha)
            r_plus = max(r_plus, sum(pseudo_rejections))
        return r_plus

    def _storey_batch_decisions(
        self, p_vals: list[float], alpha_batch: float
    ) -> tuple[list[bool], float]:
        """Replicate onlineFDR's BatchStBH inner Storey-BH rejection rule."""
        n_batch = len(p_vals)
        if n_batch == 0:
            return [], 0.0
        num_above_lambda = sum(1 for p in p_vals if p > self.lambda_)
        pi0_batch = (num_above_lambda + 1.0) / ((1.0 - self.lambda_) * n_batch)
        order_desc = sorted(range(n_batch), key=p_vals.__getitem__, reverse=True)
        inv_order = [0] * n_batch
        for rank, idx in enumerate(order_desc):
            inv_order[idx] = rank
        adjusted_desc: list[float] = []
        running_min = 1.0
        for rank, idx in enumerate(order_desc):
            j_val = n_batch - rank
            candidate = (n_batch / j_val) * pi0_batch * p_vals[idx]
            running_min = min(running_min, candidate)
            adjusted_desc.append(min(1.0, running_min))
        return (
            [adjusted_desc[inv_order[i]] <= alpha_batch for i in range(n_batch)],
            pi0_batch,
        )
```
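The null-proportion estimate used inside `_storey_batch_decisions` can be illustrated on its own. This is a minimal sketch of the (conservative) Storey estimator, with the `+ 1` correction matching the code above; `storey_pi0` is an illustrative helper, not part of the library:

```python
def storey_pi0(p_vals: list[float], lambda_: float) -> float:
    """Conservative Storey estimate of the null proportion pi0:
    (#{p > lambda} + 1) / ((1 - lambda) * n)."""
    n = len(p_vals)
    num_above = sum(1 for p in p_vals if p > lambda_)
    return (num_above + 1.0) / ((1.0 - lambda_) * n)

# With lambda_ = 0.5, two of the four p-values exceed 0.5:
print(storey_pi0([0.01, 0.2, 0.6, 0.9], 0.5))  # (2 + 1) / (0.5 * 4) = 1.5
```

Multiplying BH's adjusted p-values by this estimate (as in `_storey_batch_decisions`) recovers power when many hypotheses are non-null, since `pi0 < 1` shrinks the adjustment.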
Benjamini-Yekutieli procedure for online batch FDR control under dependence.
BatchBY extends the online batching framework to use the Benjamini-Yekutieli (BY) procedure, which provides FDR control even under arbitrary dependence among p-values. This makes it particularly suitable for situations where the independence assumption may be violated, such as in spatial statistics, time series analysis, or genomics with linkage disequilibrium.
The BY procedure is a modification of the Benjamini-Hochberg (BH) procedure that uses harmonic weights to maintain FDR control under arbitrary dependence. While this comes at the cost of reduced power compared to BH, it provides robust FDR control in challenging dependence scenarios.
The algorithm follows the online batching framework, allocating alpha budget across batches using a gamma sequence and adjusting for inter-batch dependencies through the β_t correction mechanism.
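To make the harmonic correction concrete, here is a minimal single-batch comparison of BH and BY cutoffs. The helpers `bh_threshold` and `by_threshold` are illustrative sketches, not the library's `bh`/`by` functions; BY simply runs BH at the deflated level \(\alpha / c(m)\) with \(c(m)=\sum_{i=1}^{m} 1/i\):

```python
def bh_threshold(p_vals: list[float], alpha: float) -> float:
    """Largest BH cutoff: k*alpha/m for the largest k with p_(k) <= k*alpha/m."""
    m = len(p_vals)
    sorted_p = sorted(p_vals)
    k = max(
        (i + 1 for i, p in enumerate(sorted_p) if p <= (i + 1) * alpha / m),
        default=0,
    )
    return k * alpha / m if k else 0.0

def by_threshold(p_vals: list[float], alpha: float) -> float:
    """BY = BH run at level alpha / c(m), with c(m) the harmonic sum."""
    m = len(p_vals)
    c_m = sum(1.0 / i for i in range(1, m + 1))
    return bh_threshold(p_vals, alpha / c_m)

p = [0.001, 0.002, 0.15, 0.8]
print(bh_threshold(p, 0.05))  # 0.025 — rejects the two smallest p-values
print(by_threshold(p, 0.05))  # BY runs BH at 0.05 / (1 + 1/2 + 1/3 + 1/4) = 0.024
```

The BY cutoff is never larger than the BH cutoff, which is exactly the power-for-robustness trade-off described above.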
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `alpha` | `float` | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `alpha0` | `float` | Original target FDR level. |
| `num_test` | `int` | Number of batches tested so far. |
| `seq` | | Gamma sequence for alpha allocation across batches. |
| `r_s_plus` | `list[int]` | Maximum possible rejections for each batch. |
| `r_s` | `list[int]` | Rejection indicators for each batch. |
| `r_total` | `int` | Total number of rejections across all batches. |
| `r_sums` | `list[int]` | Cumulative rejection counts for dependency tracking. |
| `alpha_s` | `list[float]` | Alpha levels used for each batch. |
Examples:

```python
>>> # Basic usage for dependent p-values
>>> by_test = BatchBY(alpha=0.05)
>>> # Test correlated p-values (e.g., from spatial data)
>>> batch1 = [0.001, 0.002, 0.15, 0.8]  # May be dependent
>>> decisions1 = by_test.test_batch(batch1)
>>> print(f"BY discoveries in batch 1: {sum(decisions1)}")

>>> # Sequential dependent batches
>>> batch2 = [0.03, 0.04, 0.006, 0.4]  # Also potentially dependent
>>> decisions2 = by_test.test_batch(batch2)
>>> print(f"BY discoveries in batch 2: {sum(decisions2)}")

>>> # Comparing with standard BH under dependence
>>> from online_fdr.batching import BatchBH
>>> bh_test = BatchBH(alpha=0.05)
>>> by_test = BatchBY(alpha=0.05)
>>> # BY provides guaranteed FDR control, BH may not under dependence
```
Notes
The BY procedure is particularly recommended when:

- P-values exhibit positive dependence
- Spatial or temporal correlation is present
- Conservative FDR control is required
- The exact dependence structure is unknown
Trade-off: Enhanced robustness comes at the cost of reduced power compared to the standard Benjamini-Hochberg procedure.
References
Benjamini, Y., and D. Yekutieli (2001). "The control of the false discovery rate in multiple testing under dependency." Annals of Statistics, 29(4):1165-1188.
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
```python
class BatchBY(AbstractBatchingTest):
    """Benjamini-Yekutieli procedure for online batch FDR control under dependence.

    BatchBY extends the online batching framework to use the Benjamini-Yekutieli
    (BY) procedure, which provides FDR control even under arbitrary dependence
    among p-values. This makes it particularly suitable for situations where the
    independence assumption may be violated, such as in spatial statistics, time
    series analysis, or genomics with linkage disequilibrium.

    The BY procedure is a modification of the Benjamini-Hochberg (BH) procedure
    that uses harmonic weights to maintain FDR control under arbitrary
    dependence. While this comes at the cost of reduced power compared to BH, it
    provides robust FDR control in challenging dependence scenarios.

    The algorithm follows the online batching framework, allocating alpha budget
    across batches using a gamma sequence and adjusting for inter-batch
    dependencies through the β_t correction mechanism.

    Args:
        alpha: Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1).

    Attributes:
        alpha0: Original target FDR level.
        num_test: Number of batches tested so far.
        seq: Gamma sequence for alpha allocation across batches.
        r_s_plus: Maximum possible rejections for each batch.
        r_s: Rejection indicators for each batch.
        r_total: Total number of rejections across all batches.
        r_sums: Cumulative rejection counts for dependency tracking.
        alpha_s: Alpha levels used for each batch.

    Examples:
        >>> # Basic usage for dependent p-values
        >>> by_test = BatchBY(alpha=0.05)
        >>> # Test correlated p-values (e.g., from spatial data)
        >>> batch1 = [0.001, 0.002, 0.15, 0.8]  # May be dependent
        >>> decisions1 = by_test.test_batch(batch1)
        >>> print(f"BY discoveries in batch 1: {sum(decisions1)}")

        >>> # Sequential dependent batches
        >>> batch2 = [0.03, 0.04, 0.006, 0.4]  # Also potentially dependent
        >>> decisions2 = by_test.test_batch(batch2)
        >>> print(f"BY discoveries in batch 2: {sum(decisions2)}")

        >>> # Comparing with standard BH under dependence
        >>> from online_fdr.batching import BatchBH
        >>> bh_test = BatchBH(alpha=0.05)
        >>> by_test = BatchBY(alpha=0.05)
        >>> # BY provides guaranteed FDR control, BH may not under dependence

    Notes:
        The BY procedure is particularly recommended when:
        - P-values exhibit positive dependence
        - Spatial or temporal correlation is present
        - Conservative FDR control is required
        - The exact dependence structure is unknown

        Trade-off: Enhanced robustness comes at the cost of reduced power
        compared to the standard Benjamini-Hochberg procedure.

    References:
        Benjamini, Y., and D. Yekutieli (2001). "The control of the false
        discovery rate in multiple testing under dependency." Annals of
        Statistics, 29(4):1165-1188.

        Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of
        Batching in Multiple Hypothesis Testing." Proceedings of the 37th
        International Conference on Machine Learning (ICML), PMLR,
        119:11504-11515.
    """

    def __init__(self, alpha: float):
        """Initialize BatchBY with FDR control level.

        Args:
            alpha: Target FDR control level. Must be in (0, 1).

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.num_test: int = 1
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.r_s_plus: list[int] = []
        self.r_s: list[int] = []
        self.r_total: int = 0
        self.r_sums: list[int] = [0]
        self.alpha_s: list[float] = []

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values using the Benjamini-Yekutieli procedure.

        The BY procedure provides FDR control under arbitrary dependence among
        p-values by using harmonic weights in the rejection threshold
        calculation. This method adapts the static BY procedure to the online
        batching framework.

        The algorithm:
        1. Calculates adaptive alpha level for the current batch
        2. Applies the BY procedure with harmonic correction
        3. Updates statistics for future batch calculations
        4. Computes R⁺ for inter-batch dependency handling

        Args:
            p_vals: List of p-values for the current batch.

        Returns:
            List of boolean values indicating which hypotheses are rejected.

        Examples:
            >>> by_test = BatchBY(alpha=0.05)
            >>> # Test potentially dependent p-values
            >>> decisions = by_test.test_batch([0.001, 0.002, 0.15, 0.8])
            >>> print(f"Rejections with BY: {sum(decisions)}")

        Note:
            The BY procedure is more conservative than BH but maintains FDR
            control even when p-values are positively dependent.
        """
        p_vals_local = list(p_vals)
        n_batch = len(p_vals_local)
        if n_batch == 0:
            return []
        validity.check_p_vals_batch(p_vals_local)
        if self.num_test == 1:
            self.alpha = self.alpha0 * self.seq.calc_gamma(j=1)
        else:
            self.alpha = (
                sum(self.seq.calc_gamma(i) for i in range(1, self.num_test + 1))
                * self.alpha0
            )
            self.alpha -= sum(
                self.alpha_s[i]
                * self.r_s_plus[i]
                / (self.r_s_plus[i] + self.r_sums[i + 1])
                for i in range(0, self.num_test - 1)
            )
            self.alpha *= (n_batch + self.r_total) / n_batch
        num_reject, threshold = by(p_vals_local, self.alpha)
        self.r_sums.append(self.r_total)
        self.r_sums[1:self.num_test] = [
            x + num_reject for x in self.r_sums[1:self.num_test]
        ]
        self.r_total += num_reject
        self.alpha_s.append(self.alpha)
        r_plus = 0
        adjusted = list(p_vals_local)
        for i, p_val in enumerate(adjusted):
            adjusted[i] = 0.0
            r_plus = max(r_plus, by(adjusted, self.alpha)[0])
            adjusted[i] = p_val
        self.r_s_plus.append(r_plus)
        self.num_test += 1
        return [p_val <= threshold for p_val in p_vals_local]
```
Test a batch of p-values using the Benjamini-Yekutieli procedure.
The BY procedure provides FDR control under arbitrary dependence among p-values by using harmonic weights in the rejection threshold calculation. This method adapts the static BY procedure to the online batching framework.
The algorithm:

1. Calculates adaptive alpha level for the current batch
2. Applies the BY procedure with harmonic correction
3. Updates statistics for future batch calculations
4. Computes R⁺ for inter-batch dependency handling
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `p_vals` | `list[float]` | List of p-values for the current batch. | required |

Returns:

| Type | Description |
| --- | --- |
| `list[bool]` | List of boolean values indicating which hypotheses are rejected. |
Examples:

```python
>>> by_test = BatchBY(alpha=0.05)
>>> # Test potentially dependent p-values
>>> decisions = by_test.test_batch([0.001, 0.002, 0.15, 0.8])
>>> print(f"Rejections with BY: {sum(decisions)}")
```
Note
The BY procedure is more conservative than BH but maintains FDR control even when p-values are positively dependent.
Batch FDR control under Positive Regression Dependency on a Subset (PRDS).
BatchPRDS provides FDR control when p-values within each batch satisfy the PRDS condition - a form of positive dependence that is less restrictive than independence but more structured than arbitrary dependence. This makes it suitable for applications where there is positive correlation between test statistics, such as in genomics or neuroimaging.
The algorithm extends the classical Benjamini-Hochberg procedure to the online batching setting under PRDS conditions. It allocates alpha budget across batches using a gamma sequence and adjusts the significance level based on the number of previous discoveries and current batch size.
PRDS (Positive Regression Dependency on a Subset) means that for any subset of true null hypotheses, the joint distribution of corresponding p-values is stochastically smaller when conditioned on smaller values of other p-values. This includes many practically relevant dependence structures.
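The level allocation described above reduces to a one-line formula: the gamma share of the alpha budget is scaled up by past discoveries. A hedged sketch with an illustrative gamma value (`prds_alpha` is not part of the library, and the library's default gamma sequence differs):

```python
def prds_alpha(alpha0: float, gamma_t: float, batch_size: int, r_total: int) -> float:
    """Alpha for batch t under PRDS: alpha0 * gamma_t * (n_t + R_total) / n_t."""
    return alpha0 * gamma_t * (batch_size + r_total) / batch_size

# First batch of 4 tests, no prior discoveries, gamma_1 = 0.5 (illustrative):
print(prds_alpha(0.05, 0.5, 4, 0))  # 0.025
# Same-size later batch after 4 prior discoveries: the level doubles
print(prds_alpha(0.05, 0.5, 4, 4))  # 0.05
```

Earning discoveries in earlier batches thus directly raises the level available to later batches, which is the "alpha investing" flavor of the framework.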
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `alpha` | `float` | Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1). | required |
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `alpha0` | `float` | Original target FDR level. |
| `seq` | | Gamma sequence for alpha allocation across batches. |
| `num_test` | `int` | Number of batches tested so far. |
| `r_total` | `int` | Total number of rejections across all batches. |
| `alpha_s` | `list[float]` | Alpha levels used for each batch (stored for testing). |
Examples:

```python
>>> # Basic usage under PRDS conditions
>>> prds_test = BatchPRDS(alpha=0.05)
>>> # Test batch with positive correlation (e.g., genetic data)
>>> batch1 = [0.001, 0.005, 0.15, 0.8]  # Positively correlated
>>> decisions1 = prds_test.test_batch(batch1)
>>> print(f"PRDS discoveries: {sum(decisions1)}")

>>> # PRDS vs standard BH under positive dependence
>>> # PRDS maintains FDR control while BH may be conservative
```
Notes
PRDS conditions are satisfied in many practical scenarios:

- Genomic association studies with linkage disequilibrium
- Neuroimaging with spatial smoothing
- Financial time series with positive correlation
- Social network analysis with homophily
The algorithm provides exact FDR control under PRDS while maintaining good power compared to more conservative methods like BY.
References
Zrnic, T., A. Ramdas, and M. I. Jordan (2018). "Asynchronous Online Testing of Multiple Hypotheses." arXiv preprint arXiv:1812.05068.
Benjamini, Y., and D. Yekutieli (2001). "The control of the false discovery rate in multiple testing under dependency." Annals of Statistics, 29(4):1165-1188.
Benjamini, Y., and Y. Hochberg (2000). "On the Adaptive Control of the False Discovery Rate in Multiple Testing With Independent Statistics." Journal of Educational and Behavioral Statistics, 25(1):60-83.
```python
class BatchPRDS(AbstractBatchingTest):
    """Batch FDR control under Positive Regression Dependency on a Subset (PRDS).

    BatchPRDS provides FDR control when p-values within each batch satisfy the
    PRDS condition - a form of positive dependence that is less restrictive
    than independence but more structured than arbitrary dependence. This makes
    it suitable for applications where there is positive correlation between
    test statistics, such as in genomics or neuroimaging.

    The algorithm extends the classical Benjamini-Hochberg procedure to the
    online batching setting under PRDS conditions. It allocates alpha budget
    across batches using a gamma sequence and adjusts the significance level
    based on the number of previous discoveries and current batch size.

    PRDS (Positive Regression Dependency on a Subset) means that for any subset
    of true null hypotheses, the joint distribution of corresponding p-values
    is stochastically smaller when conditioned on smaller values of other
    p-values. This includes many practically relevant dependence structures.

    Args:
        alpha: Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1).

    Attributes:
        alpha0: Original target FDR level.
        seq: Gamma sequence for alpha allocation across batches.
        num_test: Number of batches tested so far.
        r_total: Total number of rejections across all batches.
        alpha_s: Alpha levels used for each batch (stored for testing).

    Examples:
        >>> # Basic usage under PRDS conditions
        >>> prds_test = BatchPRDS(alpha=0.05)
        >>> # Test batch with positive correlation (e.g., genetic data)
        >>> batch1 = [0.001, 0.005, 0.15, 0.8]  # Positively correlated
        >>> decisions1 = prds_test.test_batch(batch1)
        >>> print(f"PRDS discoveries: {sum(decisions1)}")

        >>> # Subsequent batch - alpha adjusted for previous discoveries
        >>> batch2 = [0.02, 0.03, 0.4, 0.9]
        >>> decisions2 = prds_test.test_batch(batch2)
        >>> print(f"Total discoveries: {prds_test.r_total}")

        >>> # PRDS vs standard BH under positive dependence
        >>> # PRDS maintains FDR control while BH may be conservative

    Notes:
        PRDS conditions are satisfied in many practical scenarios:
        - Genomic association studies with linkage disequilibrium
        - Neuroimaging with spatial smoothing
        - Financial time series with positive correlation
        - Social network analysis with homophily

        The algorithm provides exact FDR control under PRDS while maintaining
        good power compared to more conservative methods like BY.

    References:
        Zrnic, T., A. Ramdas, and M. I. Jordan (2018). "Asynchronous Online
        Testing of Multiple Hypotheses." arXiv preprint arXiv:1812.05068.

        Benjamini, Y., and D. Yekutieli (2001). "The control of the false
        discovery rate in multiple testing under dependency." Annals of
        Statistics, 29(4):1165-1188.

        Benjamini, Y., and Y. Hochberg (2000). "On the Adaptive Control of the
        False Discovery Rate in Multiple Testing With Independent Statistics."
        Journal of Educational and Behavioral Statistics, 25(1):60-83.
    """

    def __init__(self, alpha: float):
        """Initialize BatchPRDS with FDR control level.

        Args:
            alpha: Target FDR control level. Must be in (0, 1).

        Raises:
            ValueError: If alpha is not in (0, 1).
        """
        super().__init__(alpha)
        self.alpha0 = alpha
        self.seq = DefaultSaffronGammaSequence(gamma_exp=1.6, c=0.4374901658)
        self.num_test: int = 1
        self.r_total: int = 0
        self.alpha_s: list[float] = []  # only for test

    def test_batch(self, p_vals: list[float]) -> list[bool]:
        """Test a batch of p-values under PRDS conditions.

        The algorithm calculates an adaptive significance level based on the
        gamma sequence, current batch size, and total previous discoveries. It
        then applies the standard Benjamini-Hochberg procedure with this
        adapted alpha level.

        The alpha calculation incorporates:
        - Gamma sequence value for the current batch number
        - Batch size normalization
        - Adjustment for accumulated discoveries

        Args:
            p_vals: List of p-values for the current batch. Must satisfy PRDS
                conditions within the batch.

        Returns:
            List of boolean values indicating which hypotheses are rejected.

        Examples:
            >>> prds_test = BatchPRDS(alpha=0.05)
            >>> # Test positively dependent p-values
            >>> decisions = prds_test.test_batch([0.001, 0.002, 0.15, 0.8])
            >>> print(f"PRDS rejections: {sum(decisions)}")

        Note:
            This method assumes that p-values within the batch satisfy PRDS
            conditions. If this assumption is violated, FDR control may not be
            maintained.
        """
        p_vals_local = list(p_vals)
        batch_size = len(p_vals_local)
        if batch_size == 0:
            return []
        validity.check_p_vals_batch(p_vals_local)
        self.alpha = (
            self.alpha0
            * self.seq.calc_gamma(self.num_test)
            / batch_size
            * (batch_size + self.r_total)
        )
        self.alpha_s.append(self.alpha)
        num_reject, threshold = bh(p_vals_local, self.alpha)
        self.r_total += num_reject
        self.num_test += 1
        return [p_val <= threshold for p_val in p_vals_local]
```
The algorithm calculates an adaptive significance level based on the gamma sequence, current batch size, and total previous discoveries. It then applies the standard Benjamini-Hochberg procedure with this adapted alpha level.
The alpha calculation incorporates:

- Gamma sequence value for the current batch number
- Batch size normalization
- Adjustment for accumulated discoveries
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `p_vals` | `list[float]` | List of p-values for the current batch. Must satisfy PRDS conditions within the batch. | required |

Returns:

| Type | Description |
| --- | --- |
| `list[bool]` | List of boolean values indicating which hypotheses are rejected. |
For each batch, the algorithm computes \(R^+\) (maximum possible rejections if one p-value were 0):

- Used to determine optimal alpha allocation for future batches
- Balances current discoveries with future testing power
- Key innovation enabling adaptive power allocation
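The leave-one-out computation of \(R^+\) can be sketched independently of the classes above. Both helpers here are illustrative (a plain BH rejection count, not the library's `bh`), mirroring the zero-one-coordinate loop in the source:

```python
def bh_reject_count(p_vals: list[float], alpha: float) -> int:
    """Number of BH rejections at level alpha."""
    m = len(p_vals)
    sorted_p = sorted(p_vals)
    return max(
        (i + 1 for i, p in enumerate(sorted_p) if p <= (i + 1) * alpha / m),
        default=0,
    )

def r_plus(p_vals: list[float], alpha: float) -> int:
    """Max rejections achievable by zeroing out any single p-value."""
    best = bh_reject_count(p_vals, alpha)
    for i in range(len(p_vals)):
        candidate = p_vals[:i] + [0.0] + p_vals[i + 1:]
        best = max(best, bh_reject_count(candidate, alpha))
    return best

p = [0.01, 0.04, 0.2, 0.9]
print(bh_reject_count(p, 0.05), r_plus(p, 0.05))  # → 1 2
```

Zeroing one coordinate can pull a borderline neighbor under the step-up threshold, which is why \(R^+\) can exceed the realized rejection count.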
When batches complete out of order:

- Maintain batch ordering for FDR calculations
- Buffer results until dependencies resolved
- Use timestamps or sequence numbers
The framework naturally handles:

- Different batch sizes across time
- Empty batches (no effect on FDR control)
- Very large batches (may need memory management)
Converting online methods to the batch setting:

- Group individual tests into batches
- Apply the batch framework with a chosen internal procedure
- May improve power over pure online methods
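Grouping an individual-test stream into fixed-size batches, as described above, is straightforward; the `chunked` helper below is an illustrative sketch, not part of the library, and each yielded batch could then be fed to a procedure's `test_batch`:

```python
from itertools import islice
from typing import Iterable, Iterator

def chunked(stream: Iterable[float], size: int) -> Iterator[list[float]]:
    """Yield successive batches of at most `size` p-values from a stream."""
    it = iter(stream)
    while batch := list(islice(it, size)):
        yield batch

p_stream = [0.001, 0.8, 0.02, 0.3, 0.005, 0.9, 0.04]
print(list(chunked(p_stream, 3)))
# [[0.001, 0.8, 0.02], [0.3, 0.005, 0.9], [0.04]]
```

The batch size is a tuning knob: larger batches let the within-batch BH step exploit more p-values at once, at the cost of delaying decisions.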
Zrnic, T., D. Jiang, A. Ramdas, and M. I. Jordan (2020). "The Power of Batching in Multiple Hypothesis Testing." Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, 119:11504-11515.
Benjamini, Y., and Y. Hochberg (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society: Series B, 57(1):289-300.
Storey, J. D. (2002). "A direct approach to false discovery rates." Journal of the Royal Statistical Society: Series B, 64(3):479-498.