Examples
This guide walks through the example scripts included with HETorch, from basic to advanced usage.
Overview
HETorch includes 7 example scripts demonstrating different features:
| Example | Complexity | Features Demonstrated |
|---|---|---|
| basic_linear.py | Beginner | Basic compilation, fake backend |
| phase2_neural_network.py | Intermediate | Polynomial approximation, rescaling |
| graph_visualization.py | Intermediate | Graph visualization pass |
| graph_export.py | Intermediate | Complete graph export (SVG, code, JSON) |
| phase3_advanced_optimization.py | Advanced | BSGS, cost analysis |
| phase3_bootstrapping_realistic.py | Advanced | Bootstrapping insertion |
| phase4_noise_simulation.py | Advanced | Noise simulation, custom noise models |
Running Examples
# From project root
cd /path/to/hetorch
# Run an example
python examples/basic_linear.py
python examples/phase2_neural_network.py
# ... etc
Example 1: Basic Linear Model
File: examples/basic_linear.py
What it demonstrates:
- Basic compilation workflow
- Simple linear model
- Empty pass pipeline
- Fake backend usage
Code Walkthrough
import torch
import torch.nn as nn
from hetorch import (
    HEScheme,
    CKKSParameters,
    CompilationContext,
    HETorchCompiler,
    FakeBackend,
)
from hetorch.passes import PassPipeline
# 1. Define model
class SimpleLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)
model = SimpleLinear()
# 2. Create context
context = CompilationContext(
    scheme=HEScheme.CKKS,
    params=CKKSParameters(
        poly_modulus_degree=8192,
        coeff_modulus=[60, 40, 40, 60],
        scale=2**40
    ),
    backend=FakeBackend()
)
# 3. Create empty pipeline
pipeline = PassPipeline([])
# 4. Compile
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, torch.randn(1, 4))
# 5. Execute
output = compiled_model(torch.randn(1, 4))
print(f"Output: {output}")
Key Takeaways
- Simplest possible HETorch workflow
- No transformation passes (empty pipeline)
- Fake backend for fast execution
- Model compiles and executes successfully
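To reproduce the "Max difference" line from the expected output below, compare the original and compiled models on the same input (the compiled model accepts plain tensors, as step 5 of the walkthrough shows):

x = torch.randn(1, 4)
print(torch.max(torch.abs(model(x) - compiled_model(x))))  # ~0 on the fake backend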
Expected Output
==============================================================
HETorch Basic Example: Simple Linear Model
==============================================================
1. Created model: SimpleLinear(...)
2. Example input shape: torch.Size([1, 4])
3. Created compilation context:
Scheme: HEScheme.CKKS
Backend: FakeBackend
4. Created pass pipeline: PassPipeline(...)
5. Created compiler: HETorchCompiler
6. Successfully compiled model!
7. Compiled graph: ...
8. Testing execution...
Original output: tensor([...])
Compiled output: tensor([...])
Max difference: 0.0000
Example 2: Neural Network with Activations
File: examples/phase2_neural_network.py
What it demonstrates:
- Multi-layer neural network
- Polynomial approximation of activations
- Rescaling insertion (CKKS)
- Relinearization insertion
- Dead code elimination
Code Walkthrough
from hetorch.passes.builtin import (
    InputPackingPass,
    NonlinearToPolynomialPass,
    RescalingInsertionPass,
    RelinearizationInsertionPass,
    DeadCodeEliminationPass,
)
# Define 3-layer network
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)
        self.fc2 = nn.Linear(20, 10)
        self.fc3 = nn.Linear(10, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # Non-linear activation
        x = torch.relu(self.fc2(x))  # Non-linear activation
        return self.fc3(x)
# Create pipeline with passes
pipeline = PassPipeline([
    InputPackingPass(strategy="row_major"),
    NonlinearToPolynomialPass(degree=8),  # Replace ReLU with polynomial
    RescalingInsertionPass(strategy="lazy"),
    RelinearizationInsertionPass(strategy="lazy"),
    DeadCodeEliminationPass(),
])
# Compile and execute
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, torch.randn(1, 10))
Key Takeaways
- Polynomial approximation replaces ReLU
- Lazy rescaling reduces operations
- Lazy relinearization reduces operations
- Dead code elimination cleans up graph
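To see why the node count jumps after NonlinearToPolynomialPass (6 → 46 nodes in the expected output below), note that a polynomial activation uses only additions and multiplications, which HE schemes support natively. Here is a minimal sketch of the idea; the coefficients are illustrative (using the classic |x| ≈ x² trick on [-1, 1]), not the ones the pass actually computes:

import torch

# relu(x) = (x + |x|) / 2, and |x| ≈ x^2 on [-1, 1], so p(x) = (x + x^2) / 2
coeffs = [0.0, 0.5, 0.5]  # illustrative: p(x) = 0 + 0.5*x + 0.5*x^2

def poly_activation(x, coeffs):
    # Horner's rule: a degree-d polynomial costs d mults and d adds
    result = torch.full_like(x, coeffs[-1])
    for c in reversed(coeffs[:-1]):
        result = result * x + c
    return result

x = torch.linspace(-1, 1, 5)
print(poly_activation(x, coeffs))  # smooth stand-in for ReLU
print(torch.relu(x))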
Expected Output
Original graph: 6 nodes
After InputPackingPass: 6 nodes
After NonlinearToPolynomialPass: 46 nodes (polynomial expansion)
After RescalingInsertionPass: 54 nodes (rescaling added)
After RelinearizationInsertionPass: 58 nodes (relinearization added)
After DeadCodeEliminationPass: 52 nodes (unused nodes removed)
Approximation accuracy:
Max error: 0.02
Mean error: 0.01
Example 3: Graph Visualization
File: examples/graph_visualization.py
What it demonstrates:
- GraphVisualizationPass usage
- Visualizing transformations at each stage
- SVG output generation
Code Walkthrough
from hetorch.passes.builtin import GraphVisualizationPass
# Create pipeline with visualization at each stage
pipeline = PassPipeline([
    GraphVisualizationPass(prefix="01_original"),
    InputPackingPass(),
    GraphVisualizationPass(prefix="02_packed"),
    NonlinearToPolynomialPass(),
    GraphVisualizationPass(prefix="03_polynomial"),
    RescalingInsertionPass(strategy="lazy"),
    GraphVisualizationPass(prefix="04_rescaled"),
    DeadCodeEliminationPass(),
    GraphVisualizationPass(prefix="05_final"),
])
Key Takeaways
- Visualize graph at each transformation stage
- SVG files saved to ./graph_exports/
- Requires graphviz to be installed
- Useful for debugging and understanding transformations
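A small convenience snippet (plain Python, not a HETorch API) to list the generated files in the same format as the expected output below:

from pathlib import Path

for svg in sorted(Path("./graph_exports").glob("*.svg")):
    print(f"{svg.name} ({svg.stat().st_size / 1024:.1f} KB)")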
Expected Output
Graph visualization saved to: ./graph_exports/01_original_1234567890.svg (8.2 KB)
Graph visualization saved to: ./graph_exports/02_packed_1234567891.svg (8.5 KB)
Graph visualization saved to: ./graph_exports/03_polynomial_1234567892.svg (24.1 KB)
Graph visualization saved to: ./graph_exports/04_rescaled_1234567893.svg (28.3 KB)
Graph visualization saved to: ./graph_exports/05_final_1234567894.svg (26.7 KB)
Example 4: Complete Graph Export
File: examples/graph_export.py
What it demonstrates:
- Exporting graphs in multiple formats
- SVG visualization for visual debugging
- Python code export for code review
- Tabular format for node-by-node analysis
- JSON metadata for programmatic analysis
Code Walkthrough
from hetorch.passes.builtin import (
    GraphVisualizationPass,
    InputPackingPass,
    NonlinearToPolynomialPass,
    RescalingInsertionPass,
)
import json
import sys

# Custom export functions
def export_graph_to_code(graph_module, output_path):
    """Export graph as executable Python code."""
    with open(output_path, 'w') as f:
        f.write(graph_module.code)

def export_graph_to_tabular(graph_module, output_path):
    """Export graph in tabular format."""
    with open(output_path, 'w') as f:
        old_stdout = sys.stdout
        sys.stdout = f
        try:
            graph_module.graph.print_tabular()
        finally:
            sys.stdout = old_stdout

def export_graph_metadata(graph_module, output_path):
    """Export graph metadata as JSON."""
    metadata = {
        "num_nodes": len(list(graph_module.graph.nodes)),
        "nodes": [
            {
                "name": node.name,
                "op": node.op,
                "target": str(node.target),
                "users": [u.name for u in node.users],
            }
            for node in graph_module.graph.nodes
        ],
    }
    with open(output_path, 'w') as f:
        json.dump(metadata, f, indent=2)
# Export original graph
export_graph_to_code(traced, "graph.py")
export_graph_to_tabular(traced, "graph.txt")
export_graph_metadata(traced, "graph.json")
# Apply transformations with visualization
pipeline = PassPipeline([
    GraphVisualizationPass(output_dir="./exports", name_prefix="01_original"),
    InputPackingPass(),
    NonlinearToPolynomialPass(degree=8),
    RescalingInsertionPass(strategy="eager"),
    GraphVisualizationPass(output_dir="./exports", name_prefix="04_final"),
])
transformed = pipeline.run(traced, context)
# Export transformed graph
export_graph_to_code(transformed, "final_graph.py")
export_graph_to_tabular(transformed, "final_graph.txt")
export_graph_metadata(transformed, "final_graph.json")
Key Takeaways
- Multiple export formats: SVG, Python code, tabular text, JSON
- SVG exports: Visual representation for debugging
- Code exports: View generated computation code
- Tabular exports: Node-by-node operation details
- JSON exports: Programmatic analysis and tooling
- Export at any stage: Capture graphs before/after transformations
Expected Output
Output directory: ./graph_exports/
✓ Exported Python code to: 01_original_graph.py
✓ Exported text format to: 01_original_graph_tabular.txt
✓ Exported JSON metadata to: 01_original_graph_metadata.json
Pipeline completed...
✓ Exported Python code to: 04_final_graph.py
✓ Exported text format to: 04_final_graph_tabular.txt
✓ Exported JSON metadata to: 04_final_graph_metadata.json
Original graph nodes: 7
Transformed graph nodes: 57
Node difference: +50
Summary of Exported Files:
• 01_original_*.svg - Visual SVG representation
• 01_original_graph.py - Executable Python code
• 01_original_graph_tabular.txt - Node-by-node tabular format
• 01_original_graph_metadata.json - JSON metadata for analysis
• 04_final_*.svg - Final transformed graph
• 04_final_graph.py - Final Python code
• 04_final_graph_tabular.txt - Final tabular format
• 04_final_graph_metadata.json - Final JSON metadata
Use Cases
Code Review & Documentation:
# Review generated code
cat graph_exports/04_final_graph.py
# See actual operations performed
def forward(self, x):
    fc1 = self.fc1(x); x = None
    poly_0 = fc1 * 0.5; fc1 = None
    # ... polynomial approximation of ReLU
    fc2 = self.fc2(poly_result)
    # ... rescaling operations
    return output
Programmatic Analysis:
# Parse JSON for automated analysis
import json
with open('graph_exports/04_final_graph_metadata.json') as f:
    metadata = json.load(f)

# Count operation types
ops = {}
for node in metadata['nodes']:
    op = node['op']
    ops[op] = ops.get(op, 0) + 1

print(f"Total nodes: {metadata['num_nodes']}")
print(f"Operations: {ops}")
# Output: {'placeholder': 1, 'call_module': 3, 'call_function': 48, 'output': 1}
Visual Debugging:
# Open SVG files in browser
firefox graph_exports/01_original_*.svg
firefox graph_exports/04_final_*.svg
# Compare before/after transformations visually
Example 5: Advanced Optimization
File: examples/phase3_advanced_optimization.py
What it demonstrates:
- LinearLayerBSGSPass for matrix optimization
- CostAnalysisPass for performance analysis
- Comparing baseline vs optimized pipelines
Code Walkthrough
from hetorch.passes.builtin import LinearLayerBSGSPass, CostAnalysisPass
# Baseline pipeline
baseline_pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(),
    RescalingInsertionPass(strategy="eager"),
    DeadCodeEliminationPass(),
    CostAnalysisPass(verbose=True),
])
# Optimized pipeline
optimized_pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(),
    LinearLayerBSGSPass(min_size=16),         # BSGS optimization
    RescalingInsertionPass(strategy="lazy"),  # Lazy rescaling
    DeadCodeEliminationPass(),
    CostAnalysisPass(verbose=True),
])
# Compare results
baseline_model = compiler.compile(model, example_input, pipeline=baseline_pipeline)
optimized_model = compiler.compile(model, example_input, pipeline=optimized_pipeline)
baseline_analysis = baseline_model.meta['cost_analysis']
optimized_analysis = optimized_model.meta['cost_analysis']
print(f"Baseline latency: {baseline_analysis.estimated_latency:.2f} ms")
print(f"Optimized latency: {optimized_analysis.estimated_latency:.2f} ms")
print(f"Improvement: {(1 - optimized_analysis.estimated_latency / baseline_analysis.estimated_latency) * 100:.1f}%")
Key Takeaways
- BSGS reduces rotation count for large matrices
- Lazy rescaling reduces unnecessary operations
- Cost analysis quantifies improvements
- Optimizations compound for better performance
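For intuition on the rotation savings, here is a back-of-the-envelope count (plain Python, not a HETorch API) comparing the diagonal method's roughly n−1 rotations with BSGS's roughly 2√n for an n×n matrix-vector product:

import math

def rotation_counts(n):
    naive = n - 1                           # one rotation per nonzero diagonal
    g = int(math.isqrt(n))                  # baby-step group size, g ~ sqrt(n)
    bsgs = (g - 1) + math.ceil(n / g) - 1   # baby steps + giant steps
    return naive, bsgs

for n in (16, 64, 256):
    print(n, rotation_counts(n))  # (15, 6), (63, 14), (255, 30)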
Expected Output
=== Baseline Pipeline ===
Total Operations: 156
rescale: 27 (17.3%)
rotate: 64 (41.0%)
...
Estimated Latency: 56.40 ms
Estimated Memory: 66,048 bytes
=== Optimized Pipeline ===
Total Operations: 142
rescale: 24 (16.9%)
rotate: 48 (33.8%) # Reduced by BSGS
...
Estimated Latency: 55.80 ms
Estimated Memory: 64,512 bytes
Improvement: 1.1% latency, 2.3% memory
Example 6: Bootstrapping
File: examples/phase3_bootstrapping_realistic.py
What it demonstrates:
- BootstrappingInsertionPass usage
- Deep network requiring bootstrapping
- Level tracking and bootstrap placement
Code Walkthrough
from hetorch.passes.builtin import BootstrappingInsertionPass
# Deep network (many layers)
class DeepNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Linear(64, 64) for _ in range(10)  # 10 layers
        ])

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x
# Pipeline with bootstrapping
pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(degree=8),
    RescalingInsertionPass(strategy="lazy"),
    BootstrappingInsertionPass(
        level_threshold=15.0,  # Bootstrap when level < 15
        strategy="greedy"
    ),
    DeadCodeEliminationPass(),
])
Key Takeaways
- Deep networks exhaust multiplication depth
- Bootstrapping refreshes ciphertexts
- Greedy strategy inserts bootstrap when threshold reached
- Enables arbitrarily deep computations
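The greedy placement can be sketched in a few lines of plain Python (illustrative only, not the pass itself); it reproduces the 3 bootstraps in the expected output below, assuming one level consumed per layer and levels counted from the coeff_modulus as in Example 1:

def plan_bootstraps(num_layers, initial_level):
    # Greedy: refresh only when the next layer would exceed remaining depth
    level, bootstraps = initial_level, 0
    for _ in range(num_layers):
        if level < 1:
            level = initial_level  # bootstrap resets the level
            bootstraps += 1
        level -= 1  # each (approximated) layer consumes one level
    return bootstraps

print(plan_bootstraps(10, initial_level=3))  # 3, matching the output below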
Expected Output
Initial level: 3 (from coeff_modulus length)
Layer 1: level 2 (consumed 1)
Layer 2: level 1 (consumed 1)
Layer 3: level 0 (consumed 1)
Bootstrap inserted! level reset to 3
Layer 4: level 2 (consumed 1)
Layer 5: level 1 (consumed 1)
Layer 6: level 0 (consumed 1)
Bootstrap inserted! level reset to 3
...
Total bootstraps inserted: 3
Example 7: Noise Simulation
File: examples/phase4_noise_simulation.py
What it demonstrates:
- Noise budget tracking
- Custom noise models
- Predicting bootstrapping needs
- Comparing conservative vs optimistic models
Code Walkthrough
from hetorch import FakeBackend, NoiseModel
# Example 1: Basic noise tracking
backend = FakeBackend(
    simulate_noise=True,
    initial_noise_budget=100.0,
    warn_on_low_noise=True,
    noise_warning_threshold=20.0
)
x = backend.encrypt(torch.tensor([1.0, 2.0, 3.0]))
print(f"Initial: {x.info.noise_budget:.2f} bits") # 100.00
y = backend.cadd(x, x)
print(f"After cadd: {y.info.noise_budget:.2f} bits") # 99.00
z = backend.cmult(x, x)
print(f"After cmult: {z.info.noise_budget:.2f} bits") # 50.00
# Example 2: Custom noise models
conservative_model = NoiseModel(
    mult_noise_factor=3.0,  # More noise from multiplication
    add_noise_bits=2.0      # More noise from addition
)
optimistic_model = NoiseModel(
    mult_noise_factor=1.5,  # Less noise from multiplication
    add_noise_bits=0.5      # Less noise from addition
)
# Compare models
conservative_backend = FakeBackend(simulate_noise=True, noise_model=conservative_model)
optimistic_backend = FakeBackend(simulate_noise=True, noise_model=optimistic_model)
# After 2 multiplications:
# Conservative: 100 / 3 / 3 = 11.11 bits
# Optimistic: 100 / 1.5 / 1.5 = 44.44 bits
Key Takeaways
- Noise simulation predicts bootstrapping needs
- Custom noise models for different scenarios
- Conservative models: More bootstraps, safer
- Optimistic models: Fewer bootstraps, riskier
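The practical consequence of the model choice follows from the same division arithmetic shown in the walkthrough comments (plain Python, not the FakeBackend API; the 20-bit threshold mirrors noise_warning_threshold):

def mults_before_refresh(budget, mult_factor, threshold=20.0):
    # Count multiplications until the budget would fall below the threshold
    count = 0
    while budget / mult_factor >= threshold:
        budget /= mult_factor
        count += 1
    return count

print(mults_before_refresh(100.0, 3.0))  # conservative: 1 mult fits
print(mults_before_refresh(100.0, 1.5))  # optimistic: 3 mults fit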
Expected Output
=== Example 1: Basic Noise Tracking ===
Initial: 100.00 bits
After cadd: 99.00 bits (-1.00)
After cmult: 50.00 bits (-50.00)
After rotate: 99.50 bits (-0.50)
After pmult: 66.67 bits (-33.33)
=== Example 2: Rescaling and Bootstrapping ===
After 3 cmults: 12.50 bits
After rescale: 22.50 bits (+10.00)
After bootstrap: 100.00 bits (reset)
=== Example 3: Low Noise Warnings ===
Warning: Low noise budget: 18.5 bits remaining
Warning: Low noise budget: 9.2 bits remaining
=== Example 4: Custom Noise Models ===
Conservative model after 2 mults: 11.11 bits
Optimistic model after 2 mults: 44.44 bits
Difference: 4.0x
Common Patterns
Pattern 1: Quick Testing
# Minimal pipeline for quick testing
pipeline = PassPipeline([
    InputPackingPass(),
    DeadCodeEliminationPass(),
])
Pattern 2: Standard Neural Network
# Standard pipeline for neural networks
pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(degree=8),
    RescalingInsertionPass(strategy="lazy"),
    RelinearizationInsertionPass(strategy="lazy"),
    DeadCodeEliminationPass(),
])
Pattern 3: Performance Optimization
# Optimized pipeline with BSGS and cost analysis
pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(degree=8),
    LinearLayerBSGSPass(min_size=16),
    RescalingInsertionPass(strategy="lazy"),
    RelinearizationInsertionPass(strategy="lazy"),
    DeadCodeEliminationPass(),
    CostAnalysisPass(verbose=True),
])
Pattern 4: Deep Networks
# Pipeline with bootstrapping for deep networks
pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(degree=8),
    LinearLayerBSGSPass(min_size=16),
    RescalingInsertionPass(strategy="lazy"),
    RelinearizationInsertionPass(strategy="lazy"),
    BootstrappingInsertionPass(level_threshold=15.0),
    DeadCodeEliminationPass(),
])
Pattern 5: Debugging
# Debug pipeline with visualization and analysis
pipeline = PassPipeline([
    GraphVisualizationPass(prefix="01_original"),
    InputPackingPass(),
    GraphVisualizationPass(prefix="02_packed"),
    NonlinearToPolynomialPass(),
    GraphVisualizationPass(prefix="03_polynomial"),
    PrintGraphPass(verbose=True),
    CostAnalysisPass(verbose=True),
])
Modifying Examples
Change Model Architecture
# Original
class SimpleLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

# Modified: Add more layers
class DeepLinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 4)
        self.fc3 = nn.Linear(4, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)
        x = self.fc2(x)
        x = torch.relu(x)
        return self.fc3(x)
Change Encryption Parameters
# Original
params = CKKSParameters(
    poly_modulus_degree=8192,
    coeff_modulus=[60, 40, 40, 60],
    scale=2**40
)

# Modified: More multiplication depth
params = CKKSParameters(
    poly_modulus_degree=16384,               # Higher degree
    coeff_modulus=[60, 40, 40, 40, 40, 60],  # More levels
    scale=2**40
)
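As the expected output of Example 6 notes, the initial level follows from the coeff_modulus length; a one-line sanity check of the depth gained (assuming the level = len(coeff_modulus) − 1 convention used there):

for chain in ([60, 40, 40, 60], [60, 40, 40, 40, 40, 60]):
    print(len(chain) - 1)  # 3 vs. 5 levels of multiplicative depth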
Change Pass Configuration
# Original
NonlinearToPolynomialPass(degree=8)
# Modified: Higher degree for better approximation
NonlinearToPolynomialPass(degree=10)
# Modified: Only replace specific functions
NonlinearToPolynomialPass(degree=8, functions=["relu", "gelu"])
Next Steps
- Compilation Workflow: Understand the compilation process
- Builtin Passes: Learn about available passes
- Tutorials: Step-by-step tutorials
- Custom Passes: Write your own passes