LORD: Levels based On Recent Discovery

LORD (significance Levels based On Recent Discovery) is a family of procedures for online FDR control built on alpha-investing principles, where test levels depend on the timing of previous discoveries and the alpha-wealth available at those times.

Original Paper

Javanmard, A., and A. Montanari. "Online rules for control of false discovery rate and false discovery exceedance." Annals of Statistics, 46(2):526-554, 2018. [Project Euclid]

Overview

The Alpha-Investing Philosophy

LORD procedures have an intuitive interpretation: they start with an error budget (alpha-wealth), pay a price each time a hypothesis is tested, and earn back wealth when discoveries are made. The adjusted significance thresholds depend on:

  1. Alpha-wealth dynamics - Spending and earning back wealth
  2. Discovery timing - When previous discoveries were made
  3. Gamma sequences - Proper spending schedules for FDR control
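
The wealth-spend-earn cycle above can be sketched in a few lines of Python. This is a toy illustration, not part of the package's API: `run_alpha_investing` and the toy `alpha_schedule` are invented for this sketch, and a real procedure such as LORD must derive its schedule from a proper gamma sequence to actually control FDR.

```python
def run_alpha_investing(p_values, wealth, reward, alpha_schedule):
    """Generic alpha-investing loop: spend alpha_t per test and
    earn `reward` back on each discovery (illustrative sketch only)."""
    decisions = []
    for t, p in enumerate(p_values, start=1):
        alpha_t = min(alpha_schedule(t), wealth)  # never overspend the budget
        rejected = p <= alpha_t
        wealth += -alpha_t + (reward if rejected else 0.0)
        decisions.append(rejected)
    return decisions, wealth

decisions, final_wealth = run_alpha_investing(
    [0.001, 0.3, 0.02], wealth=0.025, reward=0.025,
    alpha_schedule=lambda t: 0.01 / t,  # toy schedule, illustration only
)
# decisions == [True, False, False]: only the first p-value clears its threshold
```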

Key Innovation

Unlike LOND, whose levels depend only on the number of discoveries, LORD exploits the timing of discoveries. Allocating more wealth when discoveries are recent yields higher power.

Available LORD Variants

The package implements LORD 3, which depends on the past only through the time of the last discovery and the wealth at that time, alongside LORD++, a dependent-data variant, a discarding variant, and a memory-decay variant (all documented below).

Historical Context

LORD 3 was superseded by LORD++ in later work, but remains implemented for comparison studies and educational purposes. For practical applications, consider more recent methods like SAFFRON or ADDIS.

Class Reference

LORD3

online_fdr.investing.lord.three.LordThree

Bases: AbstractSequentialTest

LORD 3: Online FDR control based on recent discovery with wealth dynamics.

LORD 3 is a variant of the LORD (significance Levels based On Recent Discovery) procedure for online FDR control. The test levels depend on the past only through the time of the last discovery and the wealth accumulated at that time.

LORD procedures have an intuitive interpretation: they start with an error budget (alpha-wealth), pay a price each time a hypothesis is tested, and earn back wealth when discoveries are made. LORD 3 sets thresholds based on the time since the last discovery and the wealth at that time.

Note

This method was superseded by LORD++ and is implemented for demonstrative purposes and comparison studies. For practical applications, consider using LORD++ or more recent methods like ADDIS or SAFFRON.

Parameters:

  • alpha (float, required): Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1).
  • wealth (float, required): Initial alpha-wealth for purchasing rejection thresholds. Must satisfy 0 < wealth < alpha.
  • reward (float, required): Reward earned back for each discovery. Must be positive. Typical choice is reward = alpha - wealth.

Attributes:

  • wealth (float): Current alpha-wealth available for testing.
  • reward (float): Fixed reward earned per discovery.
  • last_reject (int): Index of the most recent rejection (0 if none).
  • wealth_reject (float): Alpha-wealth at the time of the last rejection.

Examples:

>>> # Basic usage with recommended parameters
>>> lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)
>>> decision = lord3.test_one(0.01)  # Test a small p-value
>>> print(f"Rejected: {decision}")
>>> # Sequential testing
>>> p_values = [0.001, 0.3, 0.02, 0.8, 0.005]
>>> decisions = [lord3.test_one(p) for p in p_values]
>>> discoveries = sum(decisions)

References

Javanmard, A., and A. Montanari (2018). "Online rules for control of false discovery rate and false discovery exceedance." Annals of Statistics, 46(2):526-554.

Project Euclid: https://projecteuclid.org/journals/annals-of-statistics/volume-46/issue-2/Online-rules-for-control-of-false-discovery-rate-and-false/10.1214/17-AOS1559.full

Source code in online_fdr/investing/lord/three.py
class LordThree(AbstractSequentialTest):
    """LORD 3: Online FDR control based on recent discovery with wealth dynamics.

    LORD 3 is a variant of the LORD (significance Levels based On Recent Discovery)
    procedure for online FDR control. The test levels depend on the past only through
    the time of the last discovery and the wealth accumulated at that time.

    LORD procedures have an intuitive interpretation: they start with an error budget
    (alpha-wealth), pay a price each time a hypothesis is tested, and earn back wealth
    when discoveries are made. LORD 3 sets thresholds based on the time since the last
    discovery and the wealth at that time.

    Note:
        This method was superseded by LORD++ and is implemented for demonstrative
        purposes and comparison studies. For practical applications, consider using
        LORD++ or more recent methods like ADDIS or SAFFRON.

    Args:
        alpha: Target FDR level (e.g., 0.05 for 5% FDR). Must be in (0, 1).
        wealth: Initial alpha-wealth for purchasing rejection thresholds.
                Must satisfy 0 < wealth < alpha.
        reward: Reward earned back for each discovery. Must be positive.
                Typical choice is reward = alpha - wealth.

    Attributes:
        wealth: Current alpha-wealth available for testing.
        reward: Fixed reward earned per discovery.
        last_reject: Index of the most recent rejection (0 if none).
        wealth_reject: Alpha-wealth at the time of the last rejection.

    Examples:
        >>> # Basic usage with recommended parameters
        >>> lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)
        >>> decision = lord3.test_one(0.01)  # Test a small p-value
        >>> print(f"Rejected: {decision}")

        >>> # Sequential testing
        >>> p_values = [0.001, 0.3, 0.02, 0.8, 0.005]
        >>> decisions = [lord3.test_one(p) for p in p_values]
        >>> discoveries = sum(decisions)

    References:
        Javanmard, A., and A. Montanari (2018). "Online rules for control of false
        discovery rate and false discovery exceedance." Annals of Statistics,
        46(2):526-554.

        Project Euclid: https://projecteuclid.org/journals/annals-of-statistics/volume-46/issue-2/Online-rules-for-control-of-false-discovery-rate-and-false/10.1214/17-AOS1559.full
    """

    def __init__(
        self,
        alpha: float,
        wealth: float,
        reward: float,
    ):  # fmt: skip
        super().__init__(alpha)
        self.wealth: float = wealth
        self.reward: float = reward
        validity.check_initial_wealth(wealth, alpha)
        validity.check_reward_budget(wealth=wealth, reward=reward, alpha=alpha)

        self.seq = DefaultLordGammaSequence(c=0.07720838)

        self.last_reject: int = 0  # reject index
        self.wealth_reject: float = wealth  # reject wealth
        # Matches onlineFDR's LORD(version = "3") state recursion sentinel.
        self._decision_history: list[bool] = [True]

    def test_one(self, p_val: float) -> bool:
        """Test a single p-value using the LORD 3 procedure.

        The LORD 3 algorithm processes p-values sequentially:
        1. Calculate threshold based on time since last discovery and wealth at that time
        2. Spend alpha-wealth equal to the threshold
        3. Earn back reward if discovery is made
        4. Update last rejection time and wealth if discovery is made

        Args:
            p_val: P-value to test. Must be in [0, 1].

        Returns:
            True if the null hypothesis is rejected (discovery), False otherwise.

        Raises:
            ValueError: If p_val is not in [0, 1].

        Examples:
            >>> lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)
            >>> lord3.test_one(0.01)  # Small p-value, likely rejected
            True
            >>> lord3.test_one(0.8)   # Large p-value, not rejected
            False
        """
        validity.check_p_val(p_val)
        self.num_test += 1

        self.alpha = (
            self.seq.calc_gamma(self.num_test - self.last_reject)  # fmt: skip
            * self.wealth_reject
        )

        is_rejected = p_val <= self.alpha

        # onlineFDR applies the spend as min(alpha_t, W_{t-1}).
        spend = min(self.alpha, self.wealth)

        # onlineFDR LORD-3 recursion credits reward with one-step lag and
        # sentinel R_0 = TRUE on t=2.
        reward_credited = (
            is_rejected
            if self.num_test == 1
            else self._decision_history[self.num_test - 2]
        )

        self.wealth -= spend
        self.wealth += self.reward if reward_credited else 0.0
        self._decision_history.append(is_rejected)

        self.last_reject = self.num_test if is_rejected else self.last_reject
        self.wealth_reject = self.wealth if is_rejected else self.wealth_reject

        return is_rejected

Functions

test_one(p_val)

Test a single p-value using the LORD 3 procedure.

The LORD 3 algorithm processes p-values sequentially:

  1. Calculate the threshold based on the time since the last discovery and the wealth at that time
  2. Spend alpha-wealth equal to the threshold
  3. Earn back the reward if a discovery is made
  4. Update the last rejection time and wealth if a discovery is made

Parameters:

  • p_val (float, required): P-value to test. Must be in [0, 1].

Returns:

  • bool: True if the null hypothesis is rejected (discovery), False otherwise.

Raises:

  • ValueError: If p_val is not in [0, 1].

Examples:

>>> lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)
>>> lord3.test_one(0.01)  # Small p-value, likely rejected
True
>>> lord3.test_one(0.8)   # Large p-value, not rejected
False

Source code in online_fdr/investing/lord/three.py
def test_one(self, p_val: float) -> bool:
    """Test a single p-value using the LORD 3 procedure.

    The LORD 3 algorithm processes p-values sequentially:
    1. Calculate threshold based on time since last discovery and wealth at that time
    2. Spend alpha-wealth equal to the threshold
    3. Earn back reward if discovery is made
    4. Update last rejection time and wealth if discovery is made

    Args:
        p_val: P-value to test. Must be in [0, 1].

    Returns:
        True if the null hypothesis is rejected (discovery), False otherwise.

    Raises:
        ValueError: If p_val is not in [0, 1].

    Examples:
        >>> lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)
        >>> lord3.test_one(0.01)  # Small p-value, likely rejected
        True
        >>> lord3.test_one(0.8)   # Large p-value, not rejected
        False
    """
    validity.check_p_val(p_val)
    self.num_test += 1

    self.alpha = (
        self.seq.calc_gamma(self.num_test - self.last_reject)  # fmt: skip
        * self.wealth_reject
    )

    is_rejected = p_val <= self.alpha

    # onlineFDR applies the spend as min(alpha_t, W_{t-1}).
    spend = min(self.alpha, self.wealth)

    # onlineFDR LORD-3 recursion credits reward with one-step lag and
    # sentinel R_0 = TRUE on t=2.
    reward_credited = (
        is_rejected
        if self.num_test == 1
        else self._decision_history[self.num_test - 2]
    )

    self.wealth -= spend
    self.wealth += self.reward if reward_credited else 0.0
    self._decision_history.append(is_rejected)

    self.last_reject = self.num_test if is_rejected else self.last_reject
    self.wealth_reject = self.wealth if is_rejected else self.wealth_reject

    return is_rejected

LORD++

online_fdr.investing.lord.plus_plus.LordPlusPlus

Bases: AbstractSequentialTest

Implements LORD++, an improved variant that superseded the original LORD procedures (LORD 1-3).

LORD++ uses a wealth-based approach where alpha levels are determined by accumulated wealth and the gamma sequence. The method tracks rejections and spends wealth accordingly.

References

[1] Ramdas, A., Zrnic, T., Wainwright, M.J. and Jordan, M.I. (2018). "SAFFRON: an adaptive algorithm for online control of the false discovery rate." arXiv preprint arXiv:1802.09098.

[2] Javanmard, A., and Montanari, A. (2018). "Online Rules for Control of False Discovery Rate and False Discovery Exceedance." Annals of Statistics, 46(2):526-554.

Source code in online_fdr/investing/lord/plus_plus.py
class LordPlusPlus(AbstractSequentialTest):
    """Implements LORD++, an improved variant that superseded LORD1 and LORD2.

    LORD++ uses a wealth-based approach where alpha levels are determined by
    accumulated wealth and the gamma sequence. The method tracks rejections
    and spends wealth accordingly.

    References
    ----------
    [1] Ramdas, A., Zrnic, T., Wainwright, M.J. and Jordan, M.I. (2018).
    "SAFFRON: an adaptive algorithm for online control of the false discovery rate."
    arXiv preprint arXiv:1802.09098.

    [2] Javanmard, A., and Montanari, A. (2018).
    "Online Rules for Control of False Discovery Rate and False Discovery Exceedance."
    Annals of Statistics, 46(2):526-554.
    """

    def __init__(self, alpha: float, wealth: float, reward: float | None = None):
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.wealth0: float = wealth
        self.wealth: float = wealth

        validity.check_initial_wealth(wealth, alpha)
        if reward is not None and not math.isclose(reward, alpha):
            raise ValueError(
                "LordPlusPlus guarantee regime requires reward == alpha."
            )
        self.reward: float = alpha

        self.seq = DefaultLordGammaSequence(c=0.07720838)

        self.first_reject: int | None = None  # first rejection index
        self.last_reject: list = []  # rejection indices without first
        self.wealth_at_first_reject: float | None = None

    def test_one(self, p_val: float) -> bool:
        validity.check_p_val(p_val)
        self.num_test += 1

        # Calculate alpha based on LORD++ formula
        self.alpha = self.wealth0 * self.seq.calc_gamma(self.num_test)

        if self.first_reject is not None:
            # Add contribution from first rejection
            self.alpha += (self.alpha0 - self.wealth0) * self.seq.calc_gamma(
                self.num_test - self.first_reject
            )

            # Add contributions from subsequent rejections
            self.alpha += self.alpha0 * sum(
                self.seq.calc_gamma(self.num_test - reject_idx)
                for reject_idx in self.last_reject
            )

        # Ensure we don't spend more than available wealth
        self.alpha = min(self.alpha, self.wealth)

        is_rejected = p_val <= self.alpha

        # Update wealth: spend alpha, gain reward if rejected
        self.wealth -= self.alpha
        if is_rejected:
            self.wealth += self.reward

            if self.first_reject is None:
                # First rejection
                self.first_reject = self.num_test
                self.wealth_at_first_reject = self.wealth
            else:
                # Subsequent rejection
                self.last_reject.append(self.num_test)

        return is_rejected

LORD Dependent

online_fdr.investing.lord.dependent.LordDependent

Bases: AbstractSequentialTest

Implements a variant of LORD for dependent p-values [1].

References

[1] Javanmard, A., and A. Montanari. Online rules for control of false discovery rate and false discovery exceedance. Annals of Statistics, 46(2):526-554, 2018.

Source code in online_fdr/investing/lord/dependent.py
class LordDependent(AbstractSequentialTest):
    """Implements a variant of LORD for dependent p-values[1]_.

    References
    ----------
    [1] Javanmard, A., and A. Montanari.
    Online rules for control of false discovery rate
    and false discovery exceedance.
    Annals of Statistics, 46(2):526-554, 2018."""

    def __init__(
        self,
        alpha: float,
        wealth: float,
        reward: float,
    ):  # fmt: skip
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.wealth: float = wealth
        self.reward: float = reward
        validity.check_initial_wealth(wealth, alpha)
        validity.check_reward_budget(wealth=wealth, reward=reward, alpha=alpha)

        # onlineFDR LORDdep default xi_i:
        # xi_i = 0.139307 * alpha / (b0 * i * log(max(i,2))^3)
        self.seq = DependentLordGammaSequence(
            c=0.139307 * self.alpha0 / self.reward,
            b0=self.reward,
        )

        self.last_reject: int = 0  # tau
        self.wealth_reject: float = self.wealth  # wealth at tau

    def test_one(self, p_val: float) -> bool:
        validity.check_p_val(p_val)
        self.num_test += 1

        self.alpha = (  # fmt: skip
            self.seq.calc_gamma(self.num_test)  # fmt: skip
            * self.wealth_reject
        )

        is_rejected = p_val <= self.alpha

        self.wealth -= self.alpha
        self.wealth += self.reward if is_rejected else 0
        self.last_reject = self.num_test if is_rejected else self.last_reject
        self.wealth_reject = self.wealth if is_rejected else self.wealth_reject

        return is_rejected

LORD Discard

online_fdr.investing.lord.discard.LordDiscard

Bases: AbstractSequentialTest

Implements LORD++ with discarding as described in [1].

References

[1] Tian, J., and A. Ramdas. ADDIS: an adaptive discarding algorithm for online FDR control with conservative nulls. In Advances in Neural Information Processing Systems (NeurIPS 2019), vol. 32. Curran Associates, Inc., 2019.

Source code in online_fdr/investing/lord/discard.py
class LordDiscard(AbstractSequentialTest):
    """Implements LORD++ with discarding as
    described in [1]_.

    References
    ----------
    [1] Tian, J., and A. Ramdas.
    ADDIS: an adaptive discarding algorithm for
    online FDR control with conservative nulls.
    In Advances in Neural Information Processing Systems
    (NeurIPS 2019), vol. 32. Curran Associates, Inc., 2019."""

    def __init__(self, alpha: float, wealth: float, tau: float):
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.wealth0: float = wealth
        self.tau: float = tau
        validity.check_initial_wealth(wealth, alpha)
        validity.check_tau(tau)

        self.seq = DefaultLordGammaSequence(c=0.07720838)

        self.first_reject: int | None = None  # first rejection index
        self.last_reject: list = []  # without first rejection

    def _compute_alpha(self, tested_index: int) -> float:
        alpha = self.wealth0 * self.seq.calc_gamma(tested_index)

        if self.first_reject is not None:
            alpha += (
                (self.tau * self.alpha0 - self.wealth0)
                * self.seq.calc_gamma(tested_index - self.first_reject)
            )

        if self.last_reject:
            alpha += (
                self.tau
                * self.alpha0
                * sum(
                    self.seq.calc_gamma(tested_index - reject_idx)
                    for reject_idx in self.last_reject
                )
            )

        return float(alpha)

    def test_one(self, p_val: float) -> bool:
        validity.check_p_val(p_val)
        next_tested_index = self.num_test + 1
        # Expose the same per-step threshold semantics as onlineFDR,
        # including discarded p-values.
        self.alpha = self._compute_alpha(next_tested_index)

        if p_val > self.tau:
            return False  # discard

        self.num_test = next_tested_index

        is_rejected = p_val <= min(self.tau, self.alpha)

        if is_rejected:
            if self.first_reject is None:
                self.first_reject = self.num_test
            else:
                self.last_reject.append(self.num_test)

        return is_rejected

LORD Memory Decay

online_fdr.investing.lord.mem_decay.LORDMemoryDecay

Bases: AbstractSequentialTest

LORD variant with memory decay for time series anomaly detection.

This variant is designed for non-stationary time series where recent discoveries are more relevant than older ones. Unlike standard LORD variants, it does NOT track wealth. Instead, it uses a decay factor to down-weight older rejections and a smoothing parameter to control the base detection threshold.

The algorithm spends:

    alpha_t = alpha * eta * max(gamma(t), 1 - delta)
              + alpha * sum_r decay(t, r) * gamma(t - r - l)

where the sum runs over past rejections r, and decay(t, r) = delta^(t - r - l).

Key differences from standard LORD:

  • No wealth tracking or accumulation
  • Uses decay to forget old discoveries
  • Ensures minimum spending via max(gamma(t), 1 - delta)
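
The spending formula above can be sketched standalone. The gamma form below is an assumption of this sketch (it mirrors the `c=0.07720838` constant passed to `DefaultLordGammaSequence` in the source, but the package's exact sequence may differ):

```python
import math

def gamma(j: int, c: float = 0.07720838) -> float:
    # Assumed form of the default LORD gamma sequence (sketch only).
    return c * math.log(max(j, 2)) / (j * math.exp(math.sqrt(math.log(j))))

def mem_decay_alpha(t, rejections, alpha=0.05, delta=0.99, eta=0.5, l=0):
    """alpha_t = alpha*eta*max(gamma(t), 1-delta) plus decayed
    contributions delta^(t-r-l) * gamma(t-r-l) from past rejections r."""
    base = alpha * eta * max(gamma(t), 1 - delta)
    bonus = sum(
        alpha * delta ** (t - r - l) * gamma(t - r - l)
        for r in rejections
        if t - r - l > 0
    )
    return base + bonus
```

Each past rejection's contribution fades geometrically in delta, so the procedure gradually forgets old discoveries instead of banking wealth.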

References

[1] Rebjock, Q., B. Kurt, T. Januschowski, and L. Callot. Online false discovery rate control for anomaly detection in time series. In Advances in Neural Information Processing Systems (NeurIPS 2021), vol. 34, pp. 26487-26498. Curran Associates, Inc., 2021.

Source code in online_fdr/investing/lord/mem_decay.py
class LORDMemoryDecay(AbstractSequentialTest):
    """LORD variant with memory decay for time series anomaly detection.

    This variant is designed for non-stationary time series where recent
    discoveries are more relevant than older ones. Unlike standard LORD
    variants, it does NOT track wealth. Instead, it uses a decay factor
    to down-weight older rejections and a smoothing parameter to control
    the base detection threshold.

    The algorithm spends:
        alpha_t = alpha * eta * max(gamma(t), 1-delta)
                  + alpha * sum_r decay(t,r) * gamma(t-r-l)

    where the sum is over past rejections r, and decay(t,r) = delta^(t-r-l).

    Key differences from standard LORD:
    - No wealth tracking or accumulation
    - Uses decay to forget old discoveries
    - Ensures minimum spending via max(gamma(t), 1-delta)

    References
    ----------
    [1] Rebjock, Q., B. Kurt, T. Januschowski, and L. Callot.
    Online false discovery rate control for anomaly detection in time series.
    In Advances in Neural Information Processing Systems (NeurIPS 2021),
    vol. 34, pp. 26487-26498. Curran Associates, Inc., 2021.
    """

    def __init__(self, alpha: float, delta: float = 0.99, eta: float = 0.5, l: int = 0):
        """
        Parameters
        ----------
        alpha : float
            Overall significance level in (0, 1). This is the target FDR level.
        delta : float, optional
            Decay factor in (0, 1) for down-weighting older rejections.
            Default is 0.99 (1% decay per time step). Lower values mean
            faster forgetting of old discoveries.
        eta : float, optional
            Smoothing/scaling factor that controls the base detection threshold.
            - eta * alpha * max(gamma(t), 1-delta) is spent at each step
            - eta=0.1: Conservative (10% of budget)
            - eta=0.5: Moderate (50% of budget) [default]
            - eta=1.0: Aggressive (full budget)
        l : int, optional
            Dependency lag parameter. Set l>0 if p-values have serial dependence.
            Default is 0 (assumes independence).
        """
        super().__init__(alpha)
        self.alpha0: float = alpha
        self.delta: float = delta
        self.eta: float = eta
        self.l: int = l

        validity.check_decay_factor(delta)
        if not 0 < eta <= 1:
            raise ValueError(f"eta must be in (0, 1], got {eta}")

        self.seq = DefaultLordGammaSequence(c=0.07720838)

        self.rejection_times: list[int] = []  # all rejection times
        self._gamma_cache: dict[int, float] = {}  # Cache for efficiency

    def test_one(self, p_val: float) -> bool:
        validity.check_p_val(p_val)
        self.num_test += 1

        # Base component with smoothing and minimum threshold
        if self.num_test not in self._gamma_cache:
            self._gamma_cache[self.num_test] = self.seq.calc_gamma(self.num_test)

        gamma_t = self._gamma_cache[self.num_test]
        self.alpha = self.alpha0 * self.eta * max(gamma_t, 1 - self.delta)

        # Add decayed contributions from past rejections
        for reject_idx in self.rejection_times:
            time_diff = self.num_test - reject_idx - self.l
            if time_diff > 0:
                # Cache gamma values for efficiency
                if time_diff not in self._gamma_cache:
                    self._gamma_cache[time_diff] = self.seq.calc_gamma(time_diff)

                decay_weight = self.delta**time_diff
                gamma_val = self._gamma_cache[time_diff]
                self.alpha += self.alpha0 * decay_weight * gamma_val

        is_rejected = p_val <= self.alpha

        if is_rejected:
            self.rejection_times.append(self.num_test)

        return is_rejected

Usage Examples

Basic Usage

from online_fdr.investing.lord.three import LordThree

# Create LORD 3 instance with recommended parameters
lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)

# Test individual p-values
p_values = [0.001, 0.15, 0.03, 0.8, 0.02, 0.45, 0.006]

print("LORD 3 Online Testing:")
discoveries = []

for i, p_value in enumerate(p_values):
    decision = lord3.test_one(p_value)

    if decision:
        discoveries.append(i + 1)
        print(f" Test {i+1}: p={p_value:.3f}  discovery (wealth: {lord3.wealth:.4f})")
    else:
        print(f"  Test {i+1}: p={p_value:.3f}  no rejection (wealth: {lord3.wealth:.4f})")

print(f"\nTotal discoveries: {len(discoveries)}")
print(f"Final wealth: {lord3.wealth:.4f}")

Understanding Wealth Dynamics

def demonstrate_wealth_dynamics():
    """Show how LORD 3 manages alpha-wealth over time."""

    lord3 = LordThree(alpha=0.1, wealth=0.05, reward=0.05)

    print("LORD 3 Wealth Dynamics:")
    print("=" * 30)
    print(f"Initial wealth: {lord3.wealth:.4f}")

    test_scenarios = [
        (0.001, "Very small p-value"),
        (0.8, "Large p-value"),  
        (0.02, "Small p-value after discovery"),
        (0.5, "Medium p-value"),
        (0.005, "Another small p-value")
    ]

    for i, (p_value, description) in enumerate(test_scenarios, 1):
        wealth_before = lord3.wealth
        decision = lord3.test_one(p_value)
        wealth_after = lord3.wealth

        status = "REJECT" if decision else "ACCEPT"
        wealth_change = wealth_after - wealth_before

        print(f"Test {i}: p={p_value:.3f} ({description})")
        print(f"  Decision: {status}")
        print(f"  Wealth: {wealth_before:.4f}  {wealth_after:.4f} (change: {wealth_change:+.4f})")
        print()

demonstrate_wealth_dynamics()

Parameter Selection and Tuning

def compare_lord_parameters():
    """Compare LORD 3 performance with different parameters."""

    # Test different parameter combinations
    configs = [
        {"name": "Conservative", "wealth": 0.01, "reward": 0.04},
        {"name": "Moderate", "wealth": 0.025, "reward": 0.025},  
        {"name": "Aggressive", "wealth": 0.04, "reward": 0.01},
    ]

    test_p_values = [0.001, 0.02, 0.15, 0.003, 0.8, 0.01, 0.4, 0.005]

    print("LORD 3 Parameter Comparison:")
    print("=" * 40)

    for config in configs:
        lord3 = LordThree(alpha=0.05, 
                         wealth=config["wealth"], 
                         reward=config["reward"])

        decisions = [lord3.test_one(p) for p in test_p_values]
        discoveries = sum(decisions)

        print(f"{config['name']:>12} (W={config['wealth']:.3f}, R={config['reward']:.3f}): "
              f"{discoveries} discoveries")

compare_lord_parameters()

Comparison with Other Methods

from online_fdr.investing.addis.addis import Addis
from online_fdr.investing.saffron.saffron import Saffron

def compare_online_methods(p_values):
    """Compare LORD 3 with adaptive methods."""

    print("Online FDR Method Comparison:")
    print("=" * 35)

    # Create method instances
    methods = {
        'LORD 3': LordThree(alpha=0.05, wealth=0.025, reward=0.025),
        'SAFFRON': Saffron(alpha=0.05, wealth=0.025, lambda_=0.5),
        'ADDIS': Addis(alpha=0.05, wealth=0.025, lambda_=0.25, tau=0.5)
    }

    results = {}

    for method_name, method in methods.items():
        decisions = [method.test_one(p) for p in p_values]
        discoveries = sum(decisions)
        discovery_indices = [i+1 for i, d in enumerate(decisions) if d]

        results[method_name] = {
            'decisions': decisions,
            'discoveries': discoveries,
            'indices': discovery_indices
        }

        print(f"{method_name:>8}: {discoveries} discoveries at positions {discovery_indices}")

    return results

# Test with a realistic p-value sequence
realistic_p_values = [0.001, 0.25, 0.02, 0.7, 0.005, 0.9, 0.04, 0.3, 0.008, 0.6]
results = compare_online_methods(realistic_p_values)

Working with Dependent Data

import numpy as np

def lord_with_correlation():
    """Test LORD 3 with correlated data."""

    print("LORD 3 with Correlated Test Statistics:")
    print("=" * 42)

    # Generate correlated test statistics (AR(1) process)
    n_tests = 100
    rho = 0.3  # Correlation parameter

    # Independent innovations epsilon_t
    eps = np.random.normal(0, 1, n_tests)

    # Apply AR(1) structure: z_t = rho * z_{t-1} + sqrt(1 - rho^2) * epsilon_t
    z = np.empty(n_tests)
    z[0] = eps[0]
    for t in range(1, n_tests):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * eps[t]

    # Add signal to first 20% of tests
    n_alternatives = int(0.2 * n_tests)
    z[:n_alternatives] += 2.0  # Add signal

    # Convert to p-values (two-sided)
    from scipy.stats import norm
    p_values = 2 * (1 - norm.cdf(np.abs(z)))
    true_alternatives = np.zeros(n_tests, dtype=bool)
    true_alternatives[:n_alternatives] = True

    # Apply LORD 3
    lord3 = LordThree(alpha=0.05, wealth=0.025, reward=0.025)

    decisions = [lord3.test_one(p) for p in p_values]

    # Evaluate performance
    true_positives = sum(d and t for d, t in zip(decisions, true_alternatives))
    false_positives = sum(d and not t for d, t in zip(decisions, true_alternatives))
    total_discoveries = true_positives + false_positives

    empirical_fdr = false_positives / max(total_discoveries, 1)
    power = true_positives / n_alternatives

    print(f"Correlation (rho): {rho}")
    print(f"True alternatives: {n_alternatives}")
    print(f"Total discoveries: {total_discoveries}")
    print(f"True positives: {true_positives}")
    print(f"False positives: {false_positives}")
    print(f"Empirical FDR: {empirical_fdr:.3f}")
    print(f"Power: {power:.3f}")

    return empirical_fdr, power

# Run correlation test
fdr, power = lord_with_correlation()

Mathematical Foundation

Wealth Dynamics

LORD 3 maintains wealth W_t that evolves as:

\[W_{t+1} = W_t - \alpha_t + R \cdot \mathbf{1}_{\text{reject at time } t}\]

where:

  • alpha_t is the rejection threshold at time t
  • R is the fixed reward earned per discovery
  • W_0 is the initial wealth
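
A concrete numeric trace of this recursion (thresholds picked by hand for intuition; they are not values the package would produce):

```python
# Trace W_{t+1} = W_t - alpha_t + R * 1{reject at t} with toy numbers.
W, R = 0.025, 0.025
steps = [             # (alpha_t, rejected at t?)
    (0.0050, True),   # discovery: pay 0.0050, earn R = 0.025 back
    (0.0020, False),  # no discovery: pay 0.0020
    (0.0010, False),  # no discovery: pay 0.0010
]
trajectory = [W]
for alpha_t, rejected in steps:
    W = W - alpha_t + (R if rejected else 0.0)
    trajectory.append(W)
# trajectory: 0.025 -> 0.045 -> 0.043 -> 0.042
```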

Threshold Formula

The rejection threshold at time t is:

\[\alpha_t = \gamma(t - \tau_{\text{last}}) \cdot W_{\tau_{\text{last}}}\]

where:

  • tau_last is the time of the last discovery (0 if there are none)
  • W_{tau_last} is the wealth at the time of the last discovery
  • gamma(.) is a gamma sequence (typically declining)
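
Assuming the gamma form behind `DefaultLordGammaSequence` (the `c = 0.07720838` constant appears in the source above; the exact functional form is an assumption of this sketch), the threshold rule can be written directly:

```python
import math

def calc_gamma(j: int, c: float = 0.07720838) -> float:
    # Assumed form of the default LORD gamma sequence (sketch only).
    return c * math.log(max(j, 2)) / (j * math.exp(math.sqrt(math.log(j))))

def lord3_threshold(t: int, tau_last: int, wealth_at_tau: float) -> float:
    """alpha_t = gamma(t - tau_last) * W_{tau_last}."""
    return calc_gamma(t - tau_last) * wealth_at_tau

# Because the gamma sequence declines, a recent discovery (small t - tau_last)
# yields a larger threshold than a distant one:
recent = lord3_threshold(t=10, tau_last=9, wealth_at_tau=0.03)
distant = lord3_threshold(t=10, tau_last=0, wealth_at_tau=0.03)
# recent > distant
```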

FDR Guarantee

Theorem (LORD FDR Control): For independent p-values, LORD procedures control FDR at level alpha.

Best Practices

Parameter Selection Guidelines

Wealth vs Reward Trade-off

  • High initial wealth, low reward: Strong early power, slower wealth recovery
  • Low initial wealth, high reward: Conservative start, builds momentum with discoveries
  • Balanced: W = R = alpha/2 is often a good starting point
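
These trade-offs live inside the parameter constraints stated in the class reference (0 < wealth < alpha, reward > 0, with the typical choice reward = alpha - wealth). A minimal up-front check, as a sketch; the package's own `validity` checks may be stricter or differ in detail:

```python
def check_lord_budget(alpha: float, wealth: float, reward: float) -> None:
    """Validate a (wealth, reward) split of the alpha budget (assumed rules)."""
    if not 0.0 < wealth < alpha:
        raise ValueError("need 0 < wealth < alpha")
    if reward <= 0.0:
        raise ValueError("reward must be positive")
    if wealth + reward > alpha + 1e-12:
        raise ValueError("wealth + reward should not exceed alpha")

check_lord_budget(alpha=0.05, wealth=0.025, reward=0.025)  # balanced split: OK
```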

Practical Recommendations

  • For alpha = 0.05: Start with W = 0.025, R = 0.025
  • Adjust based on expected discovery pattern
  • Higher W for expected early discoveries
  • Higher R for sparse discovery scenarios

When to Use LORD 3

  • Good for: Educational purposes, method comparisons, historical studies
  • Consider alternatives: SAFFRON (adaptive), ADDIS (conservative nulls), LORD++ (improved version)
  • Strengths: Simple, interpretable, proven FDR control
  • Limitations: Non-adaptive, superseded by newer methods

Common Issues

Potential Problems

  • Wealth depletion: Too aggressive parameters can exhaust wealth quickly
  • Poor parameter choice: Mismatched W and R can hurt performance
  • Non-adaptive: Doesn't adapt to unknown proportion of nulls

References

  1. Javanmard, A., and A. Montanari (2018). "Online rules for control of false discovery rate and false discovery exceedance." Annals of Statistics, 46(2):526-554.

  2. Foster, D. P., and R. A. Stine (2008). "Alpha-investing: a procedure for sequential control of expected false discoveries." Journal of the Royal Statistical Society: Series B, 70(2):429-444.

  3. Ramdas, A., R. F. Barber, M. J. Wainwright, and M. I. Jordan (2019). "A unified treatment of multiple testing with prior knowledge using the p-filter." Annals of Statistics, 47(5):2790-2821.

See Also

  • LOND: Simpler predecessor to LORD
  • SAFFRON: Adaptive online FDR control
  • ADDIS: Handles conservative nulls
  • Theory: Mathematical foundations