Investing Methods

Alpha investing methods form the core of modern online FDR control. These methods maintain a "wealth" that increases with discoveries and is spent on testing, allowing adaptive thresholds that respond to the success of previous tests.

Overview

The key insight of alpha investing is to treat significance testing as an investment game:

  1. Start with initial wealth \(W_0\)
  2. Spend wealth to "buy" significance thresholds \(\alpha_i\)
  3. Earn wealth from successful discoveries
  4. Adapt thresholds based on current wealth

This framework allows methods to be more aggressive when discoveries are being made and more conservative when they are not.
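
To make these dynamics concrete, here is a toy sketch of the investment game (this is an illustration only, not any of the library's actual rules; the spending fraction and reward are arbitrary choices):

```python
def toy_alpha_investing(p_values, w0=0.025, reward=0.025, spend_frac=0.1):
    """Toy wealth dynamics: spend a fraction of current wealth to buy
    each threshold, earn a fixed reward back on every discovery."""
    wealth = w0
    decisions = []
    for p in p_values:
        alpha_i = spend_frac * wealth  # threshold "bought" with current wealth
        reject = p <= alpha_i
        wealth -= alpha_i              # pay for the test
        if reject:
            wealth += reward           # discoveries replenish wealth
        decisions.append(reject)
    return decisions, wealth
```

Note how a discovery raises wealth and therefore raises all subsequent thresholds, while a run of non-discoveries shrinks wealth geometrically toward zero.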

Available Methods

Core Alpha Investing Methods

| Method | Full Name | Key Feature | Best For |
|--------|-----------|-------------|----------|
| GAI | Generalized Alpha Investing | Simple wealth dynamics | Educational/baseline |
| SAFFRON | Serial estimate of the Alpha Fraction that is Futilely Rationed On true Null hypotheses | Candidate selection | High-throughput screening |
| ADDIS | ADaptive algorithm that DIScards conservative nulls | Discarding + candidate selection | General purpose (recommended) |

LORD Family Methods

| Method | Full Name | Key Feature | Best For |
|--------|-----------|-------------|----------|
| LORD3 | Levels based On Recent Discovery (version 3) | Recent discovery weighting | Time series analysis |
| LORD++ | LORD Plus Plus | Enhanced reward structure | Moderate dependence |
| LORD Dependent | Dependent LORD | Handles arbitrary dependence | Strong dependence |
| LORD Discard | LORD with Discarding | Large p-value discarding | Sparse alternatives |
| LORD Memory Decay | Memory Decay LORD | Temporal decay weighting | Non-stationary time series |

LOND Family Methods

| Method | Full Name | Key Feature | Best For |
|--------|-----------|-------------|----------|
| LOND | Levels based On Number of Discoveries | Simple discovery counting | Independent/weak dependence |

Common Interface

All investing methods inherit from AbstractSequentialTest and implement:

from typing import Optional

class InvestingMethod(AbstractSequentialTest):
    def __init__(self, alpha: float, wealth: float, **kwargs):
        """
        Parameters
        ----------
        alpha : float
            Target FDR level (0 < alpha < 1)
        wealth : float  
            Initial wealth (0 < wealth ≤ alpha)
        **kwargs : dict
            Method-specific parameters
        """

    def test_one(self, p_value: float) -> bool:
        """
        Test a single p-value against current threshold.

        Parameters
        ----------
        p_value : float
            P-value to test (0 ≤ p_value ≤ 1)

        Returns
        -------
        bool
            True if null hypothesis is rejected, False otherwise
        """

    @property
    def alpha(self) -> Optional[float]:
        """Current significance threshold (None if no wealth)."""
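
For illustration, here is a minimal self-contained class following this contract (a simplified LOND-style rule; it deliberately does not inherit from AbstractSequentialTest so the snippet runs standalone, and the gamma sequence is just one common summable choice, not the library's):

```python
import math
from typing import Optional

class SimpleLond:
    """Toy LOND-style rule: the threshold for test i scales with a
    summable gamma sequence and the number of discoveries so far."""

    def __init__(self, alpha: float, **kwargs):
        self.alpha0 = alpha
        self.i = 0            # tests performed
        self.discoveries = 0  # rejections so far

    @property
    def alpha(self) -> Optional[float]:
        # gamma_i proportional to 1/i^2 sums to 1 over all i >= 1
        gamma = 6 / (math.pi ** 2 * (self.i + 1) ** 2)
        return self.alpha0 * gamma * (self.discoveries + 1)

    def test_one(self, p_value: float) -> bool:
        threshold = self.alpha
        self.i += 1
        if p_value <= threshold:
            self.discoveries += 1
            return True
        return False
```

Usage mirrors every method in the package: construct once, then feed p-values one at a time through `test_one`.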

Parameter Selection Guide

Universal Parameters

Alpha (α)

Target FDR level.

- Standard values: 0.05, 0.1, 0.2
- Choose based on tolerance for false discoveries
- Higher values → more discoveries but more false positives

Initial Wealth (W₀)

Starting investment budget.

- Conservative: α/4
- Moderate: α/2
- Aggressive: 3α/4
- Constraint: must satisfy W₀ ≤ α
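
The three wealth choices can be written down directly (illustrative arithmetic only; the labels are the rules of thumb above, not API names):

```python
alpha = 0.1  # target FDR level

w0_choices = {
    "conservative": alpha / 4,    # slow start, rarely exhausts wealth early
    "moderate": alpha / 2,        # common default
    "aggressive": 3 * alpha / 4,  # fast start, risky without early discoveries
}

# every choice must respect the constraint W0 <= alpha
assert all(w0 <= alpha for w0 in w0_choices.values())
```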

Method-Specific Parameters

Addis(
    alpha=0.05,        # Target FDR
    wealth=0.025,      # Initial wealth (α/2)
    lambda_=0.25,      # Candidate threshold  
    tau=0.5           # Discarding threshold
)
  • λ (lambda_): candidate threshold; p-values at or below it count as candidates, so lower values admit fewer candidates but concentrate wealth on the most promising tests
  • τ (tau): Higher values → fewer discarded tests
Saffron(
    alpha=0.05,        # Target FDR
    wealth=0.025,      # Initial wealth
    lambda_=0.5        # Candidate threshold
)
  • λ (lambda_): Balance between candidate selection and rejection threshold
LordThree(
    alpha=0.05,        # Target FDR  
    wealth=0.025,      # Initial wealth
    reward=0.05        # Wealth gained per discovery
)
  • reward: Higher values → more aggressive after discoveries
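
The effect of the candidate threshold λ can be seen directly. Assuming (as in SAFFRON-style rules) that a p-value counts as a candidate when it falls at or below λ:

```python
p_values = [0.01, 0.04, 0.20, 0.45, 0.80, 0.95]

# a smaller lambda admits fewer candidates
for lam in (0.25, 0.5):
    candidates = [p for p in p_values if p <= lam]
    print(f"lambda={lam}: {len(candidates)} candidates -> {candidates}")
```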

Performance Comparison

Based on simulation studies across various scenarios:

Power (Higher is Better)

Scenario: π₀ = 0.9, effect size = 2.5

| Method | Independent | Weak Depend. | Strong Depend. |
|--------|-------------|--------------|----------------|
| ADDIS | 0.82 | 0.78 | 0.71 |
| SAFFRON | 0.79 | 0.75 | 0.68 |
| LORD3 | 0.75 | 0.79 | 0.69 |
| GAI | 0.71 | 0.68 | 0.63 |
| LOND | 0.77 | 0.73 | 0.65 |

FDR Control (Should be ≤ α)

Target α = 0.1

| Method | Independent | Weak Depend. | Strong Depend. |
|--------|-------------|--------------|----------------|
| ADDIS | 0.089 | 0.094 | 0.097 |
| SAFFRON | 0.087 | 0.092 | 0.095 |
| LORD3 | 0.091 | 0.096 | 0.098 |
| GAI | 0.085 | 0.089 | 0.091 |
| LOND | 0.088 | 0.093 | 0.099 |

Choosing the Right Method

Decision Tree

graph TD
    A[Start] --> B{Know dependency structure?}
    B -->|Independent/Weak| C{High throughput screening?}
    B -->|Strong Dependence| D[LORD Dependent / Conservative LOND]
    B -->|Time Series| E{Non-stationary?}

    C -->|Yes| F[ADDIS or SAFFRON]
    C -->|No| G[ADDIS recommended]

    E -->|Yes| H[LORD Memory Decay]  
    E -->|No| I[LORD3 or LORD++]

    F --> J[SAFFRON: simpler, fewer parameters<br/>ADDIS: more flexible, discarding]

Practical Recommendations

Start with ADDIS using default parameters:

from online_fdr.investing.addis.addis import Addis
addis = Addis(alpha=0.05, wealth=0.025, lambda_=0.25, tau=0.5)

Use SAFFRON for simplicity:

from online_fdr.investing.saffron.saffron import Saffron
saffron = Saffron(alpha=0.1, wealth=0.05, lambda_=0.5)

Use LORD3 for temporal patterns:

from online_fdr.investing.lord.three import LordThree  
lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.05)

Use dependent methods:

from online_fdr.investing.lond.lond import Lond
lond = Lond(alpha=0.05, dependent=True)

Advanced Usage Patterns

Adaptive Parameter Tuning

from online_fdr.investing.addis.addis import Addis

def adaptive_addis(p_values, target_discoveries=10):
    """Adaptively tune ADDIS parameters based on early performance."""

    # Start conservative
    method = Addis(alpha=0.05, wealth=0.025, lambda_=0.25, tau=0.5)

    discoveries = 0
    results = []

    for i, p_value in enumerate(p_values):
        decision = method.test_one(p_value)
        results.append(decision)

        if decision:
            discoveries += 1

        # Adapt after the first 20 tests. Caution: swapping methods
        # mid-stream restarts wealth and voids the formal FDR guarantee,
        # so treat this pattern as exploratory only.
        if i == 19 and discoveries < 2:
            # Too conservative; switch to a more aggressive configuration
            method = Addis(alpha=0.1, wealth=0.05, lambda_=0.5, tau=0.6)

    return results

Wealth Monitoring

from online_fdr.investing.lord.three import LordThree

def monitor_wealth(method, p_values):
    """Monitor wealth dynamics during testing."""

    wealth_history = [getattr(method, 'wealth', 0)]

    for p_value in p_values:
        decision = method.test_one(p_value)
        wealth_history.append(getattr(method, 'wealth', 0))

        wealth_change = wealth_history[-1] - wealth_history[-2]
        print(f"p={p_value:.3f}, decision={decision}, "
              f"wealth={wealth_history[-1]:.3f} ({wealth_change:+.3f})")

    return wealth_history

Early Stopping

def early_stopping_fdr(method, p_value_generator, max_tests=1000, 
                      fdr_threshold=0.15):
    """Stop testing if empirical FDR exceeds threshold."""

    true_pos = false_pos = 0

    for i in range(max_tests):
        p_value, is_alternative = p_value_generator.sample_one()
        decision = method.test_one(p_value)

        if decision:
            if is_alternative:
                true_pos += 1
            else:
                false_pos += 1

            # Check FDR after sufficient discoveries
            if true_pos + false_pos >= 10:
                empirical_fdr = false_pos / (true_pos + false_pos)
                if empirical_fdr > fdr_threshold:
                    print(f"Early stopping at test {i+1}: FDR = {empirical_fdr:.3f}")
                    break

    return true_pos, false_pos

Troubleshooting

Common Issues

Wealth Becomes Zero

Symptom: Method stops making any rejections
Cause: Initial wealth too low or no early discoveries
Solution: Increase initial wealth or use more aggressive parameters

Too Many Rejections Early

Symptom: Many rejections in first few tests, then very few
Cause: Initial wealth too high
Solution: Decrease initial wealth or increase candidate thresholds

Poor Power

Symptom: Very few discoveries despite true alternatives
Cause: Overly conservative parameters
Solution: Increase wealth, decrease candidate thresholds, or choose different method
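
The three symptoms above can also be checked programmatically. A heuristic sketch (the cutoffs are arbitrary illustrative choices, not library recommendations):

```python
def diagnose_run(decisions, wealth_history):
    """Flag the common failure modes: no power, exhausted wealth,
    and rejections concentrated at the start of the stream."""
    issues = []
    if not any(decisions):
        issues.append("no rejections: raise initial wealth or alpha")
    if wealth_history and wealth_history[-1] <= 1e-9:
        issues.append("wealth exhausted: raise W0 or use a discarding method")
    n_early = max(1, len(decisions) // 5)
    if any(decisions[:n_early]) and not any(decisions[n_early:]):
        issues.append("rejections front-loaded: lower initial wealth")
    return issues
```

Feed it the decision list and the wealth trace from a run (e.g. as collected by `monitor_wealth` above) and act on whichever message fires.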

Parameter Sensitivity

Methods ranked by parameter sensitivity (most to least sensitive):

  1. ADDIS: Sensitive to λ and τ selection
  2. SAFFRON: Moderately sensitive to λ
  3. LORD3: Sensitive to reward parameter
  4. GAI: Least sensitive, fewer parameters
  5. LOND: Robust to parameter choices

References and Further Reading

Each method page contains detailed references to the original papers. Key foundational papers:

  • Alpha Investing: Foster & Stine (2008), "α-investing: a procedure for sequential control of expected false discoveries"
  • SAFFRON: Ramdas et al. (2018), "SAFFRON: an adaptive algorithm for online control of the false discovery rate"
  • ADDIS: Tian & Ramdas (2019), "ADDIS: an adaptive discarding algorithm for online FDR control with conservative nulls"
  • LORD: Javanmard & Montanari (2018), "Online Rules for Control of False Discovery Rate and False Discovery Exceedance"

Next Steps

  • Explore individual method pages for detailed documentation
  • See Examples for real-world applications
  • Read Theory for mathematical foundations
  • Try Quick Start for hands-on introduction