Tutorial: Understanding and Managing Noise
A comprehensive guide to understanding noise in homomorphic encryption, tracking noise budgets through operations, and managing noise with rescaling and bootstrapping.
Table of Contents
- Overview
- Prerequisites
- Learning Objectives
- Part 1: Understanding Noise in HE
- Part 2: Tracking Noise Through Operations
- Part 3: Noise Simulation with FakeBackend
- Part 4: Managing Noise with Rescaling
- Part 5: Refreshing Noise with Bootstrapping
- Part 6: Predicting Bootstrapping Needs
- Part 7: Custom Noise Models
- Complete Noise Management Script
- Best Practices
- Troubleshooting
- Summary
- Next Steps
- See Also
Overview
Noise is a fundamental concept in homomorphic encryption. Understanding and managing noise is crucial for building reliable HE applications. This tutorial explains:
- What noise is and why it exists
- How noise grows with each operation
- How to track noise budgets in your computations
- How to manage noise with rescaling and bootstrapping
- How to predict when bootstrapping is needed
What we'll build: A noise-aware neural network compiler that tracks and manages noise throughout computation, automatically inserting bootstrapping when necessary.
Time to complete: 40-50 minutes
Prerequisites
Before starting this tutorial, you should:
- Complete the Simple Neural Network Tutorial
- Understand CKKS parameters and modulus levels
- Be familiar with basic pass pipelines
- Have completed or read the Optimization Strategies Tutorial
Concepts to understand:
- Ciphertext: Encrypted data in HE
- Noise: Random errors that grow with computation
- Noise budget: Remaining capacity for computation
- Rescaling: Operation to manage scale in CKKS
- Bootstrapping: Operation to refresh noise budget
Learning Objectives
By the end of this tutorial, you will:
- Understand what noise is and why it grows
- Track noise budgets through HE operations
- Use noise simulation to validate compilations
- Configure rescaling to manage noise growth
- Insert bootstrapping to refresh noise budgets
- Predict when bootstrapping will be needed
- Create custom noise models for different scenarios
Part 1: Understanding Noise in HE
1.1 What is Noise?
In homomorphic encryption, noise is a small random error intentionally introduced during encryption to ensure security. This noise grows with each operation. Writing ε for the noise magnitude (in units where ε > 1), additions grow noise only linearly, while each sequential multiplication roughly squares it, doubling the number of noise bits:
Fresh ciphertext: noise ≈ ε
After 1 addition: noise ≈ ε (linear growth)
After 1 multiplication: noise ≈ ε²
After 2 multiplications: noise ≈ ε⁴
...
After k multiplications: noise ≈ ε^(2^k)
When noise grows too large, decryption fails and returns garbage values.
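To build intuition for that growth rate, here is a back-of-the-envelope sketch in plain Python (not HETorch API) that tracks the bit-length of the noise under this simplified squaring model:
# Simplified model: squaring the noise magnitude doubles its bit-length,
# so noise bits double with each sequential multiplication.
# Illustrative arithmetic only, not a real noise estimator.
noise_bits = 4.0  # bit-length of the fresh noise
for depth in range(1, 5):
    noise_bits *= 2
    print(f"depth {depth}: ~{noise_bits:.0f} noise bits")
# depth 1: ~8 noise bits
# depth 2: ~16 noise bits
# depth 3: ~32 noise bits
# depth 4: ~64 noise bits
# Decryption fails once the noise exceeds what the modulus can absorb.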
1.2 Why Does Noise Exist?
Noise is essential for security:
- Without noise, encrypted data could be analyzed statistically
- Noise obscures patterns in ciphertexts
- Computational security relies on noise being hard to remove
Trade-off: We need noise for security, but too much noise breaks functionality.
1.3 Noise Budget
The noise budget is the remaining capacity for computation before decryption fails:
# Conceptual representation
noise_budget = max_noise - current_noise
# When noise_budget reaches 0:
# - Decryption fails
# - Must bootstrap or stop computation
Typical units: Noise budget is measured in bits:
- Start with 100-120 bits of noise budget
- Each operation consumes some bits
- When budget drops to 0, decryption impossible
1.4 Operation Costs
Different HE operations consume different amounts of noise budget:
| Operation | Noise Growth | Typical Cost (bits) | Notes |
|---|---|---|---|
| Addition | Linear | ~1 bit | Cheap, safe to do many |
| Multiplication | Exponential | ~5-20 bits | Expensive, limited depth |
| Rotation | Linear | ~1 bit | Cheap, can do many |
| Rescaling | Reduces scale | Uses 1 modulus level | Manages scale, not noise |
| Relinearization | Reduces size | ~1-2 bits | After multiplication |
| Bootstrapping | Resets to initial | N/A (expensive op) | Refreshes noise budget |
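As a quick sanity check before compiling anything, you can tally these per-operation costs for a planned operation sequence. A minimal sketch in plain Python; the bit costs are illustrative midpoints taken from the table above, not measured values:
# Illustrative per-operation noise-budget costs in bits (from the table above)
COST_BITS = {"add": 1.0, "mult": 15.0, "rotate": 1.0, "relin": 1.5}

def remaining_budget(ops, initial_budget=100.0):
    """Estimate the remaining noise budget after a sequence of operations."""
    return initial_budget - sum(COST_BITS[op] for op in ops)

# Example: a dot product = multiply + relinearize + log2(n) rotate/add steps
ops = ["mult", "relin"] + ["rotate", "add"] * 3
print(f"Estimated remaining budget: {remaining_budget(ops):.1f} bits")  # 77.5 bits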
1.5 Multiplication Depth
The multiplication depth is the longest chain of sequential multiplications:
# Example 1: Depth = 3
x = input # depth 0
y = x * x # depth 1
z = y * y # depth 2
w = z * z # depth 3
# Example 2: Depth = 2 (parallel multiplications don't add)
x = input # depth 0
y1 = x * a # depth 1
y2 = x * b # depth 1 (parallel with y1)
z = y1 * y2 # depth 2
Key insight: Multiplication depth determines how many modulus levels you need. Each rescaling after multiplication consumes one level.
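Depth can be computed mechanically over an expression graph. A minimal sketch (plain Python with a hypothetical dict-based graph representation, not HETorch API) that reproduces Example 2 above:
def mult_depth(node, graph):
    """Longest chain of sequential multiplications ending at `node`."""
    op, inputs = graph[node]
    if not inputs:  # input leaf
        return 0
    depth = max(mult_depth(i, graph) for i in inputs)
    return depth + 1 if op == "mult" else depth

# Example 2: parallel multiplications don't add depth
graph = {
    "x":  ("input", []),
    "y1": ("mult", ["x", "x"]),   # stands in for x * a
    "y2": ("mult", ["x", "x"]),   # stands in for x * b
    "z":  ("mult", ["y1", "y2"]),
}
print(mult_depth("z", graph))  # 2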
1.6 Noise in CKKS vs BFV/BGV
CKKS (approximate arithmetic):
- Noise grows with multiplications
- Rescaling manages scale, not noise directly
- Modulus switching (via rescaling) reduces noise indirectly
- Bootstrapping resets both noise and level
BFV/BGV (exact arithmetic):
- Similar noise growth pattern
- Modulus switching explicitly reduces noise
- No concept of "scale" like CKKS
This tutorial focuses on CKKS, which is most common for neural networks.
Part 2: Tracking Noise Through Operations
2.1 Basic Noise Tracking Example
import torch
from hetorch import FakeBackend
# Create backend with noise simulation
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
# Encrypt data
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
ct_x = backend.encrypt(x)
print(f"Initial noise budget: {ct_x.info.noise_budget:.2f} bits")
# Output: Initial noise budget: 100.00 bits
2.2 Noise Growth with Addition
# Addition: linear noise growth (cheap)
ct_y = backend.encrypt(torch.tensor([2.0, 3.0, 4.0, 5.0]))
ct_add = backend.cadd(ct_x, ct_y)
print(f"After addition: {ct_add.info.noise_budget:.2f} bits")
# Output: After addition: 99.00 bits (only 1 bit consumed)
# Multiple additions
ct_result = ct_x
for i in range(10):
ct_result = backend.cadd(ct_result, ct_y)
print(f"After 10 additions: {ct_result.info.noise_budget:.2f} bits")
# Output: After 10 additions: 90.00 bits (10 bits consumed, still safe)
Key observation: Additions are cheap, consume ~1 bit each.
2.3 Noise Growth with Multiplication
# Multiplication: exponential noise growth (expensive)
ct_x = backend.encrypt(torch.tensor([1.0, 2.0, 3.0]))
print(f"Initial: {ct_x.info.noise_budget:.2f} bits")
ct_mult = backend.cmult(ct_x, ct_x)
print(f"After 1 mult: {ct_mult.info.noise_budget:.2f} bits")
# Output: After 1 mult: 85.00 bits (15 bits consumed!)
ct_mult2 = backend.cmult(ct_mult, ct_mult)
print(f"After 2 mults: {ct_mult2.info.noise_budget:.2f} bits")
# Output: After 2 mults: 65.00 bits (20 more bits consumed!)
ct_mult3 = backend.cmult(ct_mult2, ct_mult2)
print(f"After 3 mults: {ct_mult3.info.noise_budget:.2f} bits")
# Output: After 3 mults: 40.00 bits (25 more bits consumed!)
Key observation: Multiplications are expensive, consume 15-25 bits each. Noise consumption accelerates with depth.
2.4 Noise Growth with Rotation
# Rotation: linear noise growth (cheap)
ct_x = backend.encrypt(torch.tensor([1.0, 2.0, 3.0, 4.0]))
print(f"Initial: {ct_x.info.noise_budget:.2f} bits")
ct_rot = backend.rotate(ct_x, steps=1)
print(f"After rotation: {ct_rot.info.noise_budget:.2f} bits")
# Output: After rotation: 99.00 bits (only 1 bit consumed)
Key observation: Rotations are cheap like additions.
2.5 Complete Operation Comparison
import torch
from hetorch import FakeBackend
def compare_operations():
"""Compare noise consumption of different HE operations"""
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 3.0, 4.0])
operations = [
("Addition", lambda: backend.cadd(backend.encrypt(x), backend.encrypt(y))),
("Multiplication", lambda: backend.cmult(backend.encrypt(x), backend.encrypt(y))),
("Rotation", lambda: backend.rotate(backend.encrypt(x), 1)),
("Plaintext Mult", lambda: backend.pmult(backend.encrypt(x), torch.tensor(2.0))),
("Relinearization", lambda: backend.relinearize(backend.encrypt(x))),
]
print("Operation Noise Consumption:")
print(f"{'Operation':<20} {'Initial (bits)':>15} {'Final (bits)':>15} {'Consumed (bits)':>15}")
print("-" * 70)
for op_name, op_func in operations:
# Each operation starts fresh
initial_budget = 100.0
result = op_func()
final_budget = result.info.noise_budget
consumed = initial_budget - final_budget
print(f"{op_name:<20} {initial_budget:>15.2f} {final_budget:>15.2f} {consumed:>15.2f}")
compare_operations()
Example output:
Operation Noise Consumption:
Operation Initial (bits) Final (bits) Consumed (bits)
----------------------------------------------------------------------
Addition 100.00 99.00 1.00
Multiplication 100.00 85.00 15.00
Rotation 100.00 99.00 1.00
Plaintext Mult 100.00 98.50 1.50
Relinearization 100.00 98.00 2.00
Part 3: Noise Simulation with FakeBackend
HETorch's FakeBackend provides realistic noise simulation without the overhead of actual encryption.
3.1 Enabling Noise Simulation
from hetorch import FakeBackend
# Create backend with noise simulation
backend = FakeBackend(
simulate_noise=True, # Enable simulation
initial_noise_budget=100.0, # Starting budget (bits)
warn_on_low_noise=True, # Warn when budget low
noise_warning_threshold=20.0, # Warning threshold (bits)
)
print(f"Noise simulation: {backend.simulate_noise}")
print(f"Initial budget: {backend.initial_noise_budget} bits")
3.2 Tracking Noise Through a Neural Network
import torch
import torch.nn as nn
from hetorch import (
CKKSParameters,
CompilationContext,
FakeBackend,
HEScheme,
HETorchCompiler,
)
from hetorch.passes import (
PassPipeline,
InputPackingPass,
NonlinearToPolynomialPass,
RescalingInsertionPass,
DeadCodeEliminationPass,
)
# Simple neural network
class SimpleNN(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(10, 20)
self.fc2 = nn.Linear(20, 10)
def forward(self, x):
x = self.fc1(x)
x = torch.nn.functional.gelu(x)
x = self.fc2(x)
x = torch.sigmoid(x)
return x
# Create backend with noise simulation
backend = FakeBackend(
simulate_noise=True,
initial_noise_budget=100.0,
warn_on_low_noise=True,
noise_warning_threshold=30.0,
)
# Create compilation context
context = CompilationContext(
scheme=HEScheme.CKKS,
params=CKKSParameters(
poly_modulus_degree=8192,
coeff_modulus=[60, 40, 40, 60],
scale=2**40,
noise_budget=100.0, # Match backend
),
backend=backend,
)
# Compile model
model = SimpleNN()
example_input = torch.randn(1, 10)
pipeline = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="eager"),
DeadCodeEliminationPass(),
])
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, example_input)
# Test with noise tracking
encrypted_input = backend.encrypt(example_input)
print(f"\nInput noise budget: {encrypted_input.info.noise_budget:.2f} bits")
encrypted_output = compiled_model(encrypted_input)
print(f"Output noise budget: {encrypted_output.info.noise_budget:.2f} bits")
print(f"Noise consumed: {encrypted_input.info.noise_budget - encrypted_output.info.noise_budget:.2f} bits")
# Decrypt and validate
decrypted_output = backend.decrypt(encrypted_output)
print(f"\nDecrypted output shape: {decrypted_output.shape}")
3.3 Low Noise Warnings
When warn_on_low_noise=True, the backend automatically warns when the noise budget is running low:
backend = FakeBackend(
simulate_noise=True,
initial_noise_budget=100.0,
warn_on_low_noise=True,
noise_warning_threshold=20.0, # Warn at 20 bits
)
# Perform operations that consume noise
ct = backend.encrypt(torch.tensor([1.0, 2.0]))
for i in range(6):
ct = backend.cmult(ct, ct) # Each mult consumes 15-20 bits
print(f"Iteration {i+1}: {ct.info.noise_budget:.2f} bits")
Example output with warnings:
Iteration 1: 85.00 bits
Iteration 2: 65.00 bits
Iteration 3: 40.00 bits
Iteration 4: 18.00 bits
⚠ WARNING: Low noise budget (18.00 bits) below threshold (20.00 bits)
Consider inserting bootstrapping operation
Iteration 5: -5.00 bits
⚠ WARNING: Negative noise budget! Decryption will fail!
Iteration 6: -30.00 bits
3.4 Noise Budget vs Modulus Level
Important distinction:
- Noise budget (bits): Tracks noise growth, determines decryption correctness
- Modulus level: Tracks rescaling operations, determines scale management
import torch
from hetorch import FakeBackend, CKKSParameters
backend = FakeBackend(simulate_noise=True)
params = CKKSParameters(
poly_modulus_degree=8192,
coeff_modulus=[60, 40, 40, 60], # 4 moduli = max level 3
)
ct = backend.encrypt(torch.tensor([1.0]))
print(f"Initial state:")
print(f" Noise budget: {ct.info.noise_budget:.2f} bits")
print(f" Modulus level: {ct.info.level}")
# Multiplication consumes noise, doesn't affect level
ct = backend.cmult(ct, ct)
print(f"\nAfter multiplication:")
print(f" Noise budget: {ct.info.noise_budget:.2f} bits (decreased)")
print(f" Modulus level: {ct.info.level} (unchanged)")
# Rescaling consumes level, may reduce noise slightly
ct = backend.rescale(ct)
print(f"\nAfter rescaling:")
print(f" Noise budget: {ct.info.noise_budget:.2f} bits (slightly improved)")
print(f" Modulus level: {ct.info.level} (decreased by 1)")
Both must be managed: Running out of either noise budget or modulus levels stops computation.
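Since both quantities are exposed on ct.info, a small helper can report which resource is the binding constraint. A minimal sketch; the thresholds are illustrative assumptions, not HETorch defaults:
def resource_status(ct, min_noise_bits=20.0, min_level=1):
    """Report whether the noise budget or the modulus level is running out."""
    problems = []
    if ct.info.noise_budget < min_noise_bits:
        problems.append(f"low noise budget ({ct.info.noise_budget:.1f} bits)")
    if ct.info.level < min_level:
        problems.append(f"low modulus level ({ct.info.level})")
    return "OK" if not problems else "; ".join(problems)

print(resource_status(ct))  # e.g. "OK", or "low modulus level (0)"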
Part 4: Managing Noise with Rescaling
4.1 How Rescaling Helps
Rescaling in CKKS serves two purposes:
- Primary: Manages scale (prevents exponential growth)
- Secondary: Indirectly reduces noise by switching to smaller modulus
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
ct = backend.encrypt(torch.tensor([1.0, 2.0]))
print(f"Initial: noise={ct.info.noise_budget:.2f}, level={ct.info.level}")
# Multiply (increases noise and scale)
ct = backend.cmult(ct, ct)
print(f"After mult: noise={ct.info.noise_budget:.2f}, level={ct.info.level}")
# Rescale (manages scale, slight noise benefit)
ct = backend.rescale(ct)
print(f"After rescale: noise={ct.info.noise_budget:.2f}, level={ct.info.level}")
Expected output:
Initial: noise=100.00, level=3
After mult: noise=85.00, level=3
After rescale: noise=87.00, level=2
Notice: Noise improved slightly (85→87 bits) because we switched to a smaller modulus.
4.2 Eager vs Lazy Rescaling (Noise Perspective)
Eager rescaling:
- Rescales immediately after every multiplication
- More frequent modulus switches
- Slightly better noise management (frequent small improvements)
- Consumes modulus levels faster
Lazy rescaling:
- Delays rescaling until necessary
- Fewer modulus switches
- May accumulate more noise temporarily
- Conserves modulus levels
import torch
import torch.nn as nn
from hetorch import HETorchCompiler, CompilationContext, FakeBackend, HEScheme, CKKSParameters
from hetorch.passes import PassPipeline, InputPackingPass, NonlinearToPolynomialPass, RescalingInsertionPass, DeadCodeEliminationPass
class TestNN(nn.Module):
def __init__(self):
super().__init__()
self.fc = nn.Linear(10, 10)
def forward(self, x):
x = self.fc(x)
x = torch.nn.functional.gelu(x)
return x
model = TestNN()
example_input = torch.randn(1, 10)
# Test both strategies
for strategy in ["eager", "lazy"]:
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
context = CompilationContext(
scheme=HEScheme.CKKS,
params=CKKSParameters(poly_modulus_degree=8192, coeff_modulus=[60, 40, 40, 60]),
backend=backend,
)
pipeline = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy=strategy),
DeadCodeEliminationPass(),
])
compiled = HETorchCompiler(context, pipeline).compile(model, example_input)
# Test
encrypted_input = backend.encrypt(example_input)
encrypted_output = compiled(encrypted_input)
print(f"\n{strategy.capitalize()} Strategy:")
print(f" Initial noise: {encrypted_input.info.noise_budget:.2f} bits")
print(f" Final noise: {encrypted_output.info.noise_budget:.2f} bits")
print(f" Consumed: {encrypted_input.info.noise_budget - encrypted_output.info.noise_budget:.2f} bits")
print(f" Final level: {encrypted_output.info.level}")
4.3 Rescaling Doesn't Replace Bootstrapping
Important: Rescaling helps, but it does not reset the noise budget:
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
ct = backend.encrypt(torch.tensor([1.0]))
print(f"Initial: {ct.info.noise_budget:.2f} bits")
# Consume lots of noise with multiplications
for i in range(4):
ct = backend.cmult(ct, ct)
print(f"After mult {i+1}: {ct.info.noise_budget:.2f} bits")
# Rescale helps slightly but doesn't reset
ct = backend.rescale(ct)
print(f"After rescale: {ct.info.noise_budget:.2f} bits")
print("Note: Noise improved slightly but still low!")
# Only bootstrapping fully resets noise
ct = backend.bootstrap(ct)
print(f"After bootstrap: {ct.info.noise_budget:.2f} bits (reset!)")
Part 5: Refreshing Noise with Bootstrapping
5.1 What is Bootstrapping?
Bootstrapping is a special operation that "refreshes" a ciphertext:
- Resets noise budget to initial value
- Resets modulus level to maximum
- Enables arbitrarily deep computation
Cost: Bootstrapping is very expensive (~100-1000x the cost of a multiplication).
5.2 When to Bootstrap
Bootstrap when:
- Noise budget drops below safe threshold (e.g., 20 bits)
- Modulus level is too low to continue
- You need deeper computation than parameters allow
Don't bootstrap unnecessarily:
- Each bootstrap adds significant latency
- Minimize bootstrap count through optimization
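A small guard makes these rules concrete: refresh only when the budget is actually low. A minimal sketch using the backend calls shown in this tutorial; the 20-bit threshold is an illustrative choice:
def maybe_bootstrap(ct, backend, threshold_bits=20.0):
    """Bootstrap only when the noise budget falls below the threshold."""
    if ct.info.noise_budget < threshold_bits:
        return backend.bootstrap(ct)  # expensive: pay for it only when needed
    return ct

# Wrap noisy operations so refreshes happen exactly when required
ct = maybe_bootstrap(backend.cmult(ct, ct), backend)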
5.3 Manual Bootstrapping
import torch
from hetorch import FakeBackend
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
# Perform deep computation
ct = backend.encrypt(torch.tensor([1.0, 2.0]))
print(f"Initial: {ct.info.noise_budget:.2f} bits")
# Consume noise with multiplications
for i in range(5):
ct = backend.cmult(ct, ct)
print(f"After 5 mults: {ct.info.noise_budget:.2f} bits (very low!)")
# Bootstrap to refresh
ct = backend.bootstrap(ct)
print(f"After bootstrap: {ct.info.noise_budget:.2f} bits (reset!)")
print(f"Level reset to: {ct.info.level}")
# Can continue computation
ct = backend.cmult(ct, ct)
print(f"After another mult: {ct.info.noise_budget:.2f} bits")
5.4 Automatic Bootstrapping with Pass
The BootstrappingInsertionPass automatically inserts bootstrapping operations:
from hetorch import HETorchCompiler, CompilationContext, FakeBackend, HEScheme, CKKSParameters
from hetorch.passes import PassPipeline, InputPackingPass, NonlinearToPolynomialPass, RescalingInsertionPass, BootstrappingInsertionPass, DeadCodeEliminationPass
import torch
import torch.nn as nn
class DeepNN(nn.Module):
"""Very deep network requiring bootstrapping"""
def __init__(self):
super().__init__()
self.layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(8)])
def forward(self, x):
for layer in self.layers:
x = layer(x)
x = torch.nn.functional.gelu(x)
return x
model = DeepNN()
example_input = torch.randn(1, 16)
# Backend with noise simulation
backend = FakeBackend(simulate_noise=True, initial_noise_budget=100.0)
context = CompilationContext(
scheme=HEScheme.CKKS,
params=CKKSParameters(
poly_modulus_degree=32768,
coeff_modulus=[60] * 38, # Many levels, but still needs bootstrapping
scale=2**40,
),
backend=backend,
)
# Pipeline with automatic bootstrapping
pipeline = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="lazy"),
BootstrappingInsertionPass(
level_threshold=30.0, # Bootstrap when < 30 levels remain
strategy="greedy",
),
DeadCodeEliminationPass(),
])
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, example_input)
# Count bootstrap operations
bootstrap_count = sum(
1 for node in compiled_model.graph.nodes
if "bootstrap" in str(node.target).lower()
)
print(f"\nBootstrap operations inserted: {bootstrap_count}")
print(f"This deep network needed {bootstrap_count} bootstrap(s) to complete")
5.5 Bootstrap Cost Analysis
# Approximate relative costs of HE operations in real backends.
# (FakeBackend executes every operation at plaintext speed, so these
# are rules of thumb rather than measured values.)
print("Relative Operation Costs (approximate, for real HE backends):")
print(f"{'Operation':<15} {'Relative Cost':>15}")
print("-" * 35)
costs = {
"Addition": 1,
"Multiplication": 10,
"Rescale": 2,
"Bootstrap": 1000, # 100-1000x more expensive!
}
for op_name, relative_cost in costs.items():
print(f"{op_name:<15} {relative_cost:>15}x")
print("\nKey insight: Minimize bootstrap count for performance!")
Part 6: Predicting Bootstrapping Needs
6.1 Estimating Noise Consumption
To predict if bootstrapping is needed, estimate total noise consumption:
import torch
import torch.nn as nn
from hetorch import CKKSParameters

def estimate_noise_consumption(model, params):
"""
Estimate noise consumption for a neural network
Rules of thumb:
- Linear layer: 15 bits (1 multiplication)
- GELU activation (degree 8): 3 mults = 45 bits
- Sigmoid activation (degree 8): 3 mults = 45 bits
- ReLU approximation (degree 3): 2 mults = 30 bits
"""
total_noise = 0
# Count operations
linear_layers = sum(1 for m in model.modules() if isinstance(m, torch.nn.Linear))
# Assume activations after each layer (simplified)
activations = linear_layers
# Estimate consumption
total_noise += linear_layers * 15 # Linear layers
total_noise += activations * 45 # GELU/Sigmoid activations
print(f"Estimated noise consumption:")
print(f" Linear layers: {linear_layers} × 15 bits = {linear_layers * 15} bits")
print(f" Activations: {activations} × 45 bits = {activations * 45} bits")
print(f" Total: {total_noise} bits")
# Compare to initial budget
initial_budget = params.noise_budget
remaining = initial_budget - total_noise
print(f"\nBudget analysis:")
print(f" Initial budget: {initial_budget} bits")
print(f" Estimated consumption: {total_noise} bits")
print(f" Remaining: {remaining} bits")
if remaining < 20:
print(f" ⚠ WARNING: Low remaining budget! Bootstrap needed")
return True
elif remaining < 50:
print(f" ⚠ CAUTION: Moderate remaining budget")
return False
else:
print(f" ✓ Sufficient budget")
return False
# Example
class MyNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(64, 32)
        self.fc2 = nn.Linear(32, 16)
        self.fc3 = nn.Linear(16, 8)
    def forward(self, x):
        x = torch.nn.functional.gelu(self.fc1(x))
        x = torch.nn.functional.gelu(self.fc2(x))
        return self.fc3(x)
model = MyNN()
params = CKKSParameters(noise_budget=100.0)
needs_bootstrap = estimate_noise_consumption(model, params)
6.2 Level Consumption Prediction
Predict modulus level consumption:
def estimate_level_consumption(model):
"""
Estimate modulus levels needed
Rules (with eager rescaling):
- Linear layer: 1 mult → 1 rescale → 1 level
- Polynomial activation (degree 8): 3 mults → 3 rescales → 3 levels
- Total: (linear_layers + activations * 3) levels
"""
linear_layers = sum(1 for m in model.modules() if isinstance(m, torch.nn.Linear))
activations = linear_layers
levels_needed = linear_layers + activations * 3
print(f"Estimated level consumption:")
print(f" Linear layers: {linear_layers} levels")
print(f" Activations: {activations} × 3 = {activations * 3} levels")
print(f" Total: {levels_needed} levels needed")
return levels_needed
model = MyNN()
levels_needed = estimate_level_consumption(model)
# Compare to available levels
params = CKKSParameters(coeff_modulus=[60, 40, 40, 60])
max_levels = len(params.coeff_modulus) - 1
print(f"\nLevel budget:")
print(f" Available levels: {max_levels}")
print(f" Needed levels: {levels_needed}")
if levels_needed > max_levels:
print(f" ✗ Insufficient levels! Need {levels_needed - max_levels} more")
print(f" Solution: Increase coeff_modulus or use lazy rescaling")
else:
print(f" ✓ Sufficient levels ({max_levels - levels_needed} spare)")
6.3 Bootstrap Threshold Selection
Choose an appropriate bootstrap threshold:
def recommend_bootstrap_threshold(params, model_depth):
"""
Recommend bootstrap threshold based on parameters and model
Args:
params: CKKSParameters
model_depth: Estimated multiplication depth
Returns:
Recommended level_threshold for BootstrappingInsertionPass
"""
max_levels = len(params.coeff_modulus) - 1
# Rule of thumb: bootstrap when 1/4 of levels remain
recommended_threshold = max_levels * 0.25
# But ensure at least 10 levels for safety
recommended_threshold = max(recommended_threshold, 10.0)
# Adjust based on model depth
if model_depth > max_levels:
# Deep model definitely needs bootstrapping
# More aggressive threshold
recommended_threshold = max_levels * 0.35
print(f"Bootstrap Threshold Recommendation:")
print(f" Max levels: {max_levels}")
print(f" Model depth: {model_depth}")
print(f" Recommended threshold: {recommended_threshold:.1f}")
print(f"\n Rationale: Bootstrap when {recommended_threshold:.0f}/{max_levels} levels remain")
print(f" Provides safety margin while minimizing bootstrap count")
return recommended_threshold
params = CKKSParameters(poly_modulus_degree=32768, coeff_modulus=[60] * 38)
model_depth = 15 # Estimated from model
threshold = recommend_bootstrap_threshold(params, model_depth)
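Expected output (following the arithmetic in the function: max_levels = 38 − 1 = 37; 37 × 0.25 = 9.25, raised to the 10-level safety floor; depth 15 does not exceed 37 levels, so no adjustment applies):
Bootstrap Threshold Recommendation:
  Max levels: 37
  Model depth: 15
  Recommended threshold: 10.0

  Rationale: Bootstrap when 10/37 levels remain
  Provides safety margin while minimizing bootstrap count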
Part 7: Custom Noise Models
7.1 Creating Custom Noise Models
HETorch allows custom noise models for different simulation scenarios:
import torch
from hetorch import NoiseModel, FakeBackend
# Create a conservative noise model (aggressive noise growth)
conservative_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=2.0, # More noise from addition (default: 1.0)
mult_noise_factor=3.0, # More noise from multiplication (default: 2.0)
rotate_noise_bits=1.5, # More noise from rotation (default: 1.0)
)
backend_conservative = FakeBackend(
simulate_noise=True,
noise_model=conservative_model,
)
# Create an optimistic noise model (less aggressive)
optimistic_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=0.5,
mult_noise_factor=1.5,
rotate_noise_bits=0.5,
)
backend_optimistic = FakeBackend(
simulate_noise=True,
noise_model=optimistic_model,
)
# Compare
ct_cons = backend_conservative.encrypt(torch.tensor([1.0]))
ct_opt = backend_optimistic.encrypt(torch.tensor([1.0]))
for i in range(3):
ct_cons = backend_conservative.cmult(ct_cons, ct_cons)
ct_opt = backend_optimistic.cmult(ct_opt, ct_opt)
print(f"After 3 multiplications:")
print(f" Conservative model: {ct_cons.info.noise_budget:.2f} bits")
print(f" Optimistic model: {ct_opt.info.noise_budget:.2f} bits")
7.2 Noise Model for Different Schemes
Different HE schemes have different noise characteristics:
from hetorch import NoiseModel

# CKKS: Moderate noise growth, rescaling helps
ckks_noise_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=1.0,
mult_noise_factor=2.0,
rotate_noise_bits=1.0,
)
# BFV: Higher noise growth, no rescaling benefit
bfv_noise_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=1.5,
mult_noise_factor=2.5, # Higher than CKKS
rotate_noise_bits=1.0,
)
# BGV: Similar to BFV but with modulus switching
bgv_noise_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=1.5,
mult_noise_factor=2.5,
rotate_noise_bits=1.0,
)
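To see how the model choice affects a depth estimate, run the same workload under each model, following the pattern from 7.1 (the scheme-specific factors above are illustrative, not measured constants):
import torch
from hetorch import FakeBackend

models = [("CKKS", ckks_noise_model), ("BFV", bfv_noise_model), ("BGV", bgv_noise_model)]
for name, noise_model in models:
    backend = FakeBackend(simulate_noise=True, noise_model=noise_model)
    ct = backend.encrypt(torch.tensor([1.0]))
    for _ in range(3):
        ct = backend.cmult(ct, ct)
    print(f"{name}: {ct.info.noise_budget:.2f} bits left after 3 multiplications")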
7.3 Validating Against Real HE
Use custom noise models to match real HE library behavior:
def calibrate_noise_model(real_he_backend):
"""
Calibrate noise model to match real HE library
Process:
1. Run operations on real HE backend
2. Measure actual noise consumption
3. Adjust noise model parameters
4. Validate match
"""
# Example: calibration data from SEAL
calibration_data = {
"addition": 1.2, # bits consumed per addition
"multiplication": 18.5, # bits consumed per multiplication
"rotation": 1.0, # bits consumed per rotation
}
# Create calibrated model
calibrated_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=calibration_data["addition"],
mult_noise_factor=calibration_data["multiplication"] / 10, # Scaled
rotate_noise_bits=calibration_data["rotation"],
)
return calibrated_model
# Use the calibrated model for accurate simulation
calibrated_model = calibrate_noise_model(None)  # in practice, pass your real HE backend
backend = FakeBackend(simulate_noise=True, noise_model=calibrated_model)
Complete Noise Management Script
Here's a comprehensive script demonstrating all noise management concepts:
"""
Complete Noise Management Tutorial Script
Demonstrates noise tracking, rescaling, and bootstrapping.
"""
import torch
import torch.nn as nn
from hetorch import (
CKKSParameters,
CompilationContext,
FakeBackend,
HEScheme,
HETorchCompiler,
NoiseModel,
)
from hetorch.passes import (
PassPipeline,
InputPackingPass,
NonlinearToPolynomialPass,
RescalingInsertionPass,
RelinearizationInsertionPass,
BootstrappingInsertionPass,
DeadCodeEliminationPass,
CostAnalysisPass,
)
class NoiseDemoNN(nn.Module):
"""Neural network for noise demonstration"""
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(32, 32)
self.fc2 = nn.Linear(32, 32)
self.fc3 = nn.Linear(32, 16)
def forward(self, x):
x = self.fc1(x)
x = torch.nn.functional.gelu(x)
x = self.fc2(x)
x = torch.nn.functional.gelu(x)
x = self.fc3(x)
return x
def main():
print("=" * 80)
print("HETorch Noise Management Tutorial")
print("=" * 80)
# Setup
model = NoiseDemoNN()
example_input = torch.randn(1, 32)
# Create noise model
noise_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=1.0,
mult_noise_factor=2.0,
rotate_noise_bits=1.0,
)
# Create backend with noise simulation
backend = FakeBackend(
simulate_noise=True,
noise_model=noise_model,
warn_on_low_noise=True,
noise_warning_threshold=30.0,
)
# Create parameters
params = CKKSParameters(
poly_modulus_degree=32768,
coeff_modulus=[60] * 38,
scale=2**40,
noise_budget=100.0,
)
context = CompilationContext(
scheme=HEScheme.CKKS,
params=params,
backend=backend,
)
# Create pipeline with noise management
pipeline = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="lazy"),
RelinearizationInsertionPass(strategy="lazy"),
BootstrappingInsertionPass(level_threshold=30.0, strategy="greedy"),
DeadCodeEliminationPass(),
CostAnalysisPass(verbose=False),
])
# Compile
print("\nCompiling model with noise management...")
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, example_input)
# Analyze
cost_analysis = compiled_model.meta["cost_analysis"]
bootstrap_count = cost_analysis.total_operations.get("bootstrap", 0)
print(f"\nCompilation Results:")
print(f" Bootstrap operations: {bootstrap_count}")
print(f" Total operations: {sum(cost_analysis.total_operations.values())}")
print(f" Estimated latency: {cost_analysis.estimated_latency:.2f} ms")
# Test with noise tracking
print(f"\nExecuting with noise tracking...")
encrypted_input = backend.encrypt(example_input)
print(f" Input noise budget: {encrypted_input.info.noise_budget:.2f} bits")
encrypted_output = compiled_model(encrypted_input)
print(f" Output noise budget: {encrypted_output.info.noise_budget:.2f} bits")
consumed = encrypted_input.info.noise_budget - encrypted_output.info.noise_budget
print(f" Noise consumed: {consumed:.2f} bits")
if encrypted_output.info.noise_budget > 20:
print(f" ✓ Sufficient noise budget remaining")
else:
print(f" ⚠ Low noise budget (< 20 bits)")
# Decrypt and validate
decrypted_output = backend.decrypt(encrypted_output)
with torch.no_grad():
original_output = model(example_input)
error = torch.abs(original_output - decrypted_output).max().item()
print(f"\nAccuracy:")
print(f" Max error: {error:.6f}")
print(f" Status: {'✓ Acceptable' if error < 0.5 else '⚠ High error'}")
print("\n" + "=" * 80)
print("Tutorial Complete!")
print("=" * 80)
if __name__ == "__main__":
main()
Best Practices
1. Always Enable Noise Simulation During Development
# ✓ Good: Always simulate noise during development
backend = FakeBackend(
simulate_noise=True,
initial_noise_budget=100.0,
warn_on_low_noise=True,
)
# ✗ Bad: No noise simulation (may miss issues)
backend = FakeBackend(simulate_noise=False)
2. Start Conservative, Then Optimize
# Step 1: Conservative (validate correctness)
pipeline_conservative = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="eager"),
BootstrappingInsertionPass(level_threshold=35.0), # High threshold
])
# Step 2: Optimize after validation
pipeline_optimized = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="lazy"), # Saves levels
BootstrappingInsertionPass(level_threshold=25.0), # Lower threshold
])
3. Monitor Noise Budget Throughout Development
def print_noise_summary(ciphertext, label=""):
"""Helper to print noise status"""
print(f"{label}")
print(f" Noise budget: {ciphertext.info.noise_budget:.2f} bits")
print(f" Level: {ciphertext.info.level}")
status = "✓" if ciphertext.info.noise_budget > 20 else "⚠"
print(f" Status: {status}")
# Use throughout computation
ct_input = backend.encrypt(x)
print_noise_summary(ct_input, "Input")
ct_hidden = model.layer1(ct_input)
print_noise_summary(ct_hidden, "After layer 1")
ct_output = model.layer2(ct_hidden)
print_noise_summary(ct_output, "Final output")
4. Estimate Before Compiling
# Before expensive compilation, estimate if bootstrapping needed
def quick_estimate(model, params):
linear_count = sum(1 for m in model.modules() if isinstance(m, nn.Linear))
estimated_noise = linear_count * 60 # rough: ~15 bits per linear layer + ~45 per activation
if estimated_noise > params.noise_budget:
print(f"⚠ Bootstrapping will be needed")
return True
return False
# Check before compiling
if quick_estimate(model, params):
print("Including bootstrapping in pipeline...")
pipeline.append(BootstrappingInsertionPass(level_threshold=30.0))
5. Use Appropriate Warning Thresholds
# Adjust warning threshold based on use case
# Production: Conservative (early warnings)
backend_prod = FakeBackend(
simulate_noise=True,
warn_on_low_noise=True,
noise_warning_threshold=30.0, # Warn early
)
# Development: Moderate
backend_dev = FakeBackend(
simulate_noise=True,
warn_on_low_noise=True,
noise_warning_threshold=20.0,
)
# Testing: Aggressive (test limits)
backend_test = FakeBackend(
simulate_noise=True,
warn_on_low_noise=True,
noise_warning_threshold=10.0, # Warn late
)
Troubleshooting
Issue 1: Decryption Returns Garbage
Symptoms: Decrypted output has very large errors or random values.
Cause: Noise budget exhausted (< 0 bits).
Solution:
# Check noise budget
if encrypted_output.info.noise_budget < 5:
print("⚠ Noise budget exhausted! Decryption will fail")
print("Solutions:")
print(" 1. Add bootstrapping: BootstrappingInsertionPass()")
print(" 2. Use fewer multiplications (lower poly degree)")
print(" 3. Increase initial parameters")
Issue 2: Bootstrapping Not Inserted
Symptoms: Expected bootstrapping but none inserted.
Cause: Level threshold too low or network not deep enough.
Solution:
# Debug bootstrapping
pipeline_debug = PassPipeline([
InputPackingPass(),
NonlinearToPolynomialPass(degree=8),
RescalingInsertionPass(strategy="lazy"),
BootstrappingInsertionPass(
level_threshold=35.0, # Increase threshold
strategy="greedy",
),
PrintGraphPass(verbose=True), # Inspect graph
])
# Check if bootstraps were inserted
bootstrap_count = sum(
1 for node in compiled.graph.nodes
if "bootstrap" in str(node.target).lower()
)
print(f"Bootstraps inserted: {bootstrap_count}")
Issue 3: Too Many Bootstraps
Symptoms: Many bootstrap operations, very slow execution.
Cause: Threshold too high or eager rescaling consuming levels.
Solution:
# Option 1: Lower threshold
BootstrappingInsertionPass(level_threshold=20.0) # Was 35.0
# Option 2: Use lazy rescaling to save levels
RescalingInsertionPass(strategy="lazy")
# Option 3: Increase available levels
params = CKKSParameters(
coeff_modulus=[60] * 50, # More levels
)
Issue 4: Inconsistent Noise Estimates
Symptoms: Noise simulation doesn't match real HE.
Cause: Default noise model doesn't match your HE library.
Solution:
# Calibrate noise model to your HE library
# Run test operations and measure actual noise
# Then create custom model
calibrated_model = NoiseModel(
initial_noise_budget=100.0,
add_noise_bits=1.2, # Measured from real HE
mult_noise_factor=2.3, # Measured from real HE
rotate_noise_bits=0.9, # Measured from real HE
)
backend = FakeBackend(simulate_noise=True, noise_model=calibrated_model)
Summary
Key Takeaways
- Noise Fundamentals:
  - Noise is essential for security
  - Grows with each operation, especially multiplications
  - Must be managed or computation fails
- Noise Tracking:
  - Use FakeBackend(simulate_noise=True) for realistic simulation
  - Monitor ciphertext.info.noise_budget throughout computation
  - Enable warnings for early detection of problems
- Rescaling:
  - Primarily manages scale, not noise
  - Provides a slight noise benefit via modulus switching
  - Doesn't replace bootstrapping for deep networks
- Bootstrapping:
  - Resets noise budget and modulus level
  - Expensive (~100-1000x a multiplication)
  - Use BootstrappingInsertionPass for automatic insertion
  - Tune threshold based on parameters and depth
- Best Practices:
  - Always simulate noise during development
  - Estimate noise consumption before compiling
  - Use lazy strategies to conserve levels
  - Monitor noise budget throughout computation
  - Test with conservative thresholds, optimize later
Noise Management Checklist
For every HE application:
- Enable noise simulation in FakeBackend
- Set appropriate initial noise budget (typically 100 bits)
- Estimate noise consumption before compiling
- Use lazy rescaling/relinearization for deep networks
- Add bootstrapping if depth exceeds budget
- Monitor final noise budget (should be > 20 bits)
- Validate decryption correctness
- Profile and optimize bootstrap count
Next Steps
Continue learning:
- Custom Pass Tutorial - Build passes that manage noise
Advanced topics:
- Cost Models - Performance modeling with noise
- Custom Backends - Real HE integration
Practical applications:
- Build noise-aware neural network compilers
- Optimize bootstrap placement for performance
- Create custom noise models for different HE libraries
See Also
- Simple Neural Network Tutorial - Prerequisites
- Optimization Strategies Tutorial - Related optimizations
- Backends User Guide - Backend configuration
- Encryption Schemes User Guide - Scheme details