TNFR Python Engine · TNFR Testing Guide
Version 0.0.2 · DOI 10.5281/zenodo.17764207 · Updated 2025-11-29 · Source: TESTING.md

TNFR Testing Guide

This document describes the testing strategy, infrastructure, and best practices for the TNFR Python Engine. It consolidates information about test organization, coverage requirements, and structural fidelity validation.

Testing Philosophy

TNFR tests validate structural coherence first, implementation details second. Every test must:

  1. Preserve TNFR Invariants: Verify canonical constraints (see ARCHITECTURE.md §3)
  2. Test Structural Behavior: Focus on coherence, phase, frequency, not implementation
  3. Maintain Reproducibility: Use seeds, validate determinism
  4. Guard Regressions: Performance, accuracy, and API stability

Tests are not responsible for:

  - Fixing unrelated pre-existing failures
  - Optimizing code that already passes
  - Validating framework internals (NetworkX, NumPy)

Test Organization

The test suite is organized by concern and scope:

tests/
├── conftest.py              # Shared fixtures and configuration
├── utils.py                 # Test utilities (module clearing, etc.)
├── unit/                    # Unit tests for individual modules
│   ├── test_cache.py
│   ├── test_dynamics.py
│   ├── test_operators.py
│   ├── test_structural.py
│   └── ...
├── integration/             # Integration tests for subsystems
│   ├── test_glyph_sequences.py
│   ├── test_operator_chains.py
│   └── test_telemetry_pipeline.py
├── property/                # Property-based tests (Hypothesis)
│   └── test_tnfr_invariants.py
├── stress/                  # Stress and scale tests
│   └── test_large_networks.py
├── performance/             # Performance regression tests
│   └── test_dnfr_pipeline.py
├── mathematics/             # Mathematical backend tests
│   └── test_epi.py
└── cli/                     # CLI interface tests
    └── test_cli.py

Key Test Files

| File                    | Purpose                           | Markers            |
|-------------------------|-----------------------------------|--------------------|
| test_extreme_cases.py   | Boundary value testing            | unit               |
| test_glyph_sequences.py | Grammar and sequence validation   | integration        |
| test_tnfr_invariants.py | Property-based invariant checks   | property, slow     |
| test_dnfr_pipeline.py   | Performance regression guards     | performance, slow  |
| test_trace.py           | Telemetry and debugging utilities | unit               |

Running Tests

Basic Test Execution

# Run all tests (excluding slow/benchmarks by default)
pytest

# Run specific test category
pytest tests/unit
pytest tests/integration

# Run with coverage report
pytest --cov=tnfr --cov-report=html

# Run verbose with output
pytest -v -s

Artifact Guards

Telemetry dashboards are part of the reproducibility surface. A lightweight regression test (tests/test_precision_walk_dashboard_artifact.py) ensures benchmarks/results/precision_walk_dashboard.json is valid JSON (no NaN literals) and remains loadable by downstream tooling. It now runs automatically via:

# Curated smoke bundle (includes dashboard JSON guard)
make smoke-tests

# Run the guard directly when iterating on telemetry scripts
pytest tests/test_precision_walk_dashboard_artifact.py
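The core of such a guard can be sketched as a strict JSON load that rejects non-finite literals. This is a hypothetical stand-in for the real test in tests/test_precision_walk_dashboard_artifact.py; the helper names below (`load_strict_json`, `_reject_constant`) are illustrative, not project API:

```python
import json
from pathlib import Path

def _reject_constant(name: str):
    # json.loads calls parse_constant for "NaN", "Infinity", "-Infinity";
    # raising here turns any of them into a hard failure.
    raise ValueError(f"non-finite JSON literal: {name}")

def load_strict_json(path):
    """Load a JSON artifact, failing loudly on NaN/Infinity literals."""
    return json.loads(Path(path).read_text(), parse_constant=_reject_constant)
```

A guard test would then simply call `load_strict_json("benchmarks/results/precision_walk_dashboard.json")` and assert on the payload's shape.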

Test Markers

Tests are marked for selective execution:

# Run only fast tests (default in CI)
pytest -m "not slow"

# Run slow/property-based tests
pytest -m slow

# Run performance regression tests
pytest -m performance tests/performance

# Run backend-specific tests
pytest -m numpy_only
pytest -m requires_jax

Configuration

Default pytest options are configured in pyproject.toml:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
    "-m", "not slow",          # Skip slow tests by default
    "--benchmark-skip",        # Skip benchmarks by default
    "--strict-markers",
    "--tb=short",
]
markers = [
    "slow: marks tests as slow (deselected by default)",
    "performance: marks performance regression tests",
    "numpy_only: requires NumPy backend",
    "requires_jax: requires JAX backend",
    "requires_torch: requires PyTorch backend",
]
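With --strict-markers enabled, every marker used in a test must be declared in the list above. A test opts in to selective runs by stacking registered markers; this is an illustrative example, not a test from the suite:

```python
import pytest

@pytest.mark.slow
@pytest.mark.performance
def test_dense_network_dnfr_throughput():
    """Deselected by default (-m "not slow"); selected by -m performance."""
    ...
```

An undeclared marker (say, a typo like `@pytest.mark.preformance`) then fails collection instead of silently never matching.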

Test Categories

1. Unit Tests (tests/unit/)

Purpose: Test individual modules and functions in isolation.

Coverage Areas:

Example:

def test_coherence_operator_stabilizes_epi(simple_graph):
    """Coherence operator should increase stability."""
    G, node = simple_graph
    initial_coherence = compute_coherence(G)

    apply_operator(G, node, "coherence")

    final_coherence = compute_coherence(G)
    assert final_coherence >= initial_coherence

2. Integration Tests (tests/integration/)

Purpose: Test interactions between subsystems and operator sequences.

Coverage Areas:

Example:

def test_canonical_sequence_execution():
    """Test canonical TNFR sequence executes without errors."""
    G, node = create_nfr("test", epi=1.0, vf=1.0)
    sequence = ["emission", "reception", "coherence", "coupling"]

    result = run_sequence(G, node, sequence)

    assert result["success"]
    assert compute_coherence(G) > 0

3. Property-Based Tests (tests/property/)

Purpose: Use Hypothesis to generate test cases validating TNFR invariants.

Coverage Areas:

Example:

@given(
    epi=st.floats(min_value=0.1, max_value=10.0),
    vf=st.floats(min_value=0.1, max_value=10.0),
)
def test_coherence_operator_preserves_bounds(epi, vf):
    """Coherence operator should never produce NaN or infinite values."""
    G, node = create_nfr("prop_test", epi=epi, vf=vf)

    apply_operator(G, node, "coherence")

    epi_after = G.nodes[node]["epi"]
    assert np.isfinite(epi_after).all()

4. Stress Tests (tests/stress/)

Purpose: Validate behavior under extreme conditions and large scales.

Coverage Areas:

Example:

@pytest.mark.slow
def test_large_network_coherence():
    """Test coherence computation on large networks."""
    rng = np.random.default_rng(42)  # seeded, per the reproducibility rule
    G = nx.DiGraph()
    for i in range(1000):
        G.add_node(i, epi=rng.random(10), vf=1.0, phase=0.0)

    coherence = compute_coherence(G)

    assert 0 <= coherence <= 1.0
    assert np.isfinite(coherence)

5. Performance Tests (tests/performance/)

Purpose: Guard against performance regressions in critical paths.

Coverage Areas:

Execution:

pytest -m performance tests/performance
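A minimal wall-clock regression guard can be sketched with a stand-in workload. The real guards in test_dnfr_pipeline.py exercise the ΔNFR pipeline (and benchmarks may use pytest-benchmark instead); the helper and budget below are hypothetical:

```python
import time

def best_time(fn, repeats=5):
    """Best-of-N wall-clock timing; taking the minimum damps scheduler noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

def test_workload_within_budget():
    # Hypothetical budget; a real guard would pin this to a measured baseline
    elapsed = best_time(lambda: sum(i * i for i in range(100_000)))
    assert elapsed < 1.0
```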

Structural Fidelity Tests

Invariant Tests

These tests validate the TNFR canonical invariants; the examples below exercise a representative subset. For complete invariant definitions and physics, see AGENTS.md § Canonical Invariants.

Invariant 1: Nodal Equation Integrity

def test_nodal_equation_integrity():
    """EPI evolution must follow ∂EPI/∂t = νf · ΔNFR(t) only."""
    G, node = create_nfr("test", epi=1.0, vf=1.0)
    initial_epi = np.copy(G.nodes[node]["epi"])  # works for scalar or array EPI

    # Changes occur only via structural operators
    # Validates nodal equation constraint
    apply_operator(G, node, "coherence")

    assert not np.array_equal(G.nodes[node]["epi"], initial_epi)

Invariant 5: Structural Metrology

def test_structural_metrology_units():
    """Structural frequency must remain in Hz_str units with proper telemetry."""
    G, node = create_nfr("test", epi=1.0, vf=2.5)

    apply_operator(G, node, "mutation")

    vf = G.nodes[node]["vf"]
    assert isinstance(vf, (int, float))
    assert vf > 0  # Positive Hz_str

Invariant 5: Phase Check Before Coupling

def test_coupling_requires_phase_check():
    """Coupling should verify phase synchrony."""
    G = nx.DiGraph()
    # nx.DiGraph.add_node returns None, so keep the node keys explicitly
    G.add_node(1, epi=1.0, vf=1.0, phase=0.0)
    G.add_node(2, epi=1.0, vf=1.0, phase=np.pi)  # anti-phase node

    # Should validate phase compatibility and reject the anti-phase pair
    with pytest.raises(ValidationError):
        apply_operator(G, 1, "coupling", target=2)

Invariant 8: Reproducible Simulations

def test_deterministic_with_seed():
    """Same seed should produce same results."""
    seed = 42

    # Run 1
    G1, node1 = create_nfr("test", epi=1.0, vf=1.0, seed=seed)
    run_sequence(G1, node1, ["emission", "coherence"])
    result1 = compute_coherence(G1)

    # Run 2
    G2, node2 = create_nfr("test", epi=1.0, vf=1.0, seed=seed)
    run_sequence(G2, node2, ["emission", "coherence"])
    result2 = compute_coherence(G2)

    assert result1 == result2

Backend Selection

TNFR supports multiple mathematical backends (NumPy, JAX, PyTorch). Tests can specify backend requirements:

Environment Variable

# Set backend before running tests
export TNFR_MATH_BACKEND=numpy  # or jax, torch
pytest tests/mathematics

Command Line

# Use pytest option
pytest tests/mathematics --math-backend=torch

In Tests

@pytest.mark.requires_jax
def test_jax_specific_feature():
    """This test requires JAX backend."""
    from tnfr.mathematics import get_backend
    backend = get_backend()
    assert backend.name == "jax"

Backend Availability

When a requested backend is unavailable, pytest automatically skips backend-specific tests while continuing with NumPy fallback.
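One way such a skip can be wired up in conftest.py is a pytest_runtest_setup hook that probes importability. This is a sketch of the mechanism, not necessarily how the project implements it; `backend_available` is a hypothetical helper:

```python
import importlib.util
import pytest

def backend_available(name: str) -> bool:
    """True when the backend package can be imported (no import side effects)."""
    return importlib.util.find_spec(name) is not None

def pytest_runtest_setup(item):
    # Map marker names to the packages they require
    for marker, pkg in (("requires_jax", "jax"), ("requires_torch", "torch")):
        if item.get_closest_marker(marker) and not backend_available(pkg):
            pytest.skip(f"{pkg} backend not available")
```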

Coverage Requirements

Minimum Coverage Targets

Critical Paths (Must be 100% covered)

Coverage Report

# Generate HTML coverage report
pytest --cov=tnfr --cov-report=html

# Open report
open htmlcov/index.html

Test Development Guidelines

1. Test Isolation

Use clear_test_module() utility for test independence:

from tests.utils import clear_test_module

def test_fresh_import():
    """Test with fresh module state."""
    clear_test_module('tnfr.utils.io')
    import tnfr.utils.io  # Fresh import

Note: Module path checking ('module' in sys.modules) may trigger CodeQL warnings. This is legitimate test infrastructure, not URL validation. See ARCHITECTURE.md §"Test isolation and module management".
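For reference, a minimal version of such a utility could look like the following; the actual tests/utils.py implementation may differ:

```python
import sys

def clear_test_module(name: str) -> None:
    """Drop a module and its submodules from sys.modules so the next
    import re-executes module-level code from scratch."""
    stale = [m for m in sys.modules if m == name or m.startswith(name + ".")]
    for key in stale:
        del sys.modules[key]
```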

2. Fixture Usage

Prefer pytest fixtures for common setup:

@pytest.fixture
def simple_graph():
    """Create a simple test graph."""
    G = nx.DiGraph()
    node = 1  # add_node returns None, so keep the key explicitly
    G.add_node(node, epi=np.array([1.0, 2.0]), vf=1.0, phase=0.0)
    return G, node

def test_with_fixture(simple_graph):
    G, node = simple_graph
    # Test implementation

3. Parametrize for Variants

Use pytest.mark.parametrize for multiple test cases:

@pytest.mark.parametrize("operator", [
    "emission", "reception", "coherence", "dissonance"
])
def test_operator_preserves_structure(operator):
    """All operators should preserve graph structure."""
    G, node = create_nfr("test", epi=1.0, vf=1.0)
    apply_operator(G, node, operator)

    assert node in G.nodes

4. Document Test Intent

Every test should have a clear docstring explaining:

def test_resonance_propagates_coherence():
    """
    Resonance operator should propagate coherence to coupled nodes
    without altering EPI identity (Invariants 1 and 4).

    This test validates that resonance maintains operational
    fractality while increasing network coupling.
    """
    # Test implementation

5. Avoid Brittleness

6. Performance Awareness

Mark slow tests appropriately:

@pytest.mark.slow
def test_expensive_computation():
    """This test takes >1 second to run."""
    # Long-running test

Test Maintenance

Pre-Existing Failures

The repository tracks known test failures that are not regressions:

These are documented and should not block PR approval unless your changes introduce new failures.

Continuous Integration

All PRs must:

  1. Pass the default test suite (pytest -m "not slow")
  2. Maintain or improve coverage
  3. Not introduce new failures (beyond pre-existing)
  4. Pass all structural fidelity tests

Test Optimization

When optimizing tests:

  1. Consolidate redundant tests across directories
  2. Use shared fixtures to reduce duplication
  3. Parametrize instead of copying test functions
  4. Profile slow tests and optimize data generation

Debugging Tests

Verbose Output

# Show print statements and detailed assertions
pytest -v -s tests/unit/test_operators.py

# Show local variables on failure
pytest --showlocals

Run Single Test

# Run specific test function
pytest tests/unit/test_cache.py::test_shelve_layer_stores_data

# Run test by keyword match
pytest -k "coherence" tests/

Debug with PDB

# Drop into debugger on failure
pytest --pdb

# Drop into debugger at test start
pytest --trace

Capture Warnings

# Show warnings
pytest -W default

# Treat warnings as errors
pytest -W error
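The same policy can be made permanent via pytest's filterwarnings option in pyproject.toml. This is an illustrative fragment, not the repository's actual configuration:

```toml
[tool.pytest.ini_options]
filterwarnings = [
    "error",                                  # promote warnings to errors
    "ignore::DeprecationWarning:networkx.*",  # except third-party deprecations
]
```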

Resources


Last Updated: November 2025
Version: 0.0.2