# HETorch Documentation

Welcome to HETorch, a modular compilation framework that transforms PyTorch models into Homomorphic Encryption (HE) operations.
## What is HETorch?
HETorch bridges the gap between PyTorch's familiar tensor operations and the complex world of Homomorphic Encryption. It provides a flexible, extensible framework for compiling neural networks to run on encrypted data, enabling privacy-preserving machine learning.
## Key Features

- **Tensor-Centric Design**: Built on PyTorch's tensor abstraction using torch.fx for graph capture
- **Backend-Agnostic**: Unified interface supporting multiple HE schemes (CKKS, BFV, BGV) and libraries
- **Modular Pass System**: Composable transformation passes for flexible compilation pipelines
- **Performance-Aware**: Built-in cost models guide optimization decisions
- **Developer-Friendly**: Fake backends enable rapid testing without expensive HE computations
## Who Should Use HETorch?

### Research Users
If you want to:
- Compile PyTorch models to run on encrypted data
- Experiment with privacy-preserving ML
- Understand HE compilation challenges
- Prototype encrypted inference systems
### Developers
If you want to:
- Extend HETorch with custom optimization passes
- Integrate new HE backends
- Contribute to the framework
- Build HE compilation tools
## Quick Links

### Getting Started
- Installation - Set up HETorch
- Quickstart - Your first compiled model in 5 minutes
- Basic Concepts - Core abstractions and workflow
### User Guide
- Compilation Workflow - End-to-end compilation process
- Encryption Schemes - CKKS, BFV, BGV explained
- Builtin Passes - Available transformation passes
- Pass Pipelines - Building compilation pipelines
- Backends - Fake vs real backends
### Developer Guide
- Architecture - System design and components
- Custom Passes - Writing transformation passes
- Custom Backends - Implementing HE backends
- IR Design - Intermediate representation details
### Tutorials
- Simple Neural Network - Compile a 2-layer MLP
- Optimization Strategies - Advanced optimization techniques
- Noise Management - Understanding and managing noise budgets
- Custom Pass Tutorial - Build your own transformation pass
## Architecture Overview
HETorch follows a layered architecture:
```
┌─────────────────────────────────────────────────────────────┐
│                  PyTorch Model (nn.Module)                  │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│              Frontend: torch.fx Graph Capture               │
│    Symbolic tracing converts model to computation graph     │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                HE-Aware IR (fx.GraphModule)                 │
│      Graph nodes represent HE operations with metadata      │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                Transformation Pass Pipeline                 │
│     Composable passes optimize and transform the graph      │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                      Backend Interface                      │
│     Abstract HE operations (cadd, cmult, rotate, etc.)      │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                   Backend Implementations                   │
│    Fake: PyTorch simulation | Real: SEAL, OpenFHE, etc.     │
└─────────────────────────────────────────────────────────────┘
```
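The rotate operation in the backend layer is what makes linear algebra over packed ciphertext slots possible. As a minimal illustration of the idea, independent of HETorch's actual APIs (plain Python lists stand in for ciphertext slots; `rotate` and `dot_product_slots` are hypothetical helper names), a dot product over packed slots can be computed with one slot-wise multiply followed by log2(n) rotate-and-add steps:

```python
def rotate(slots, k):
    """Cyclic left rotation, mimicking an HE slot rotation."""
    k %= len(slots)
    return slots[k:] + slots[:k]

def dot_product_slots(x, w):
    """Dot product over packed slots via rotate-and-sum.

    Assumes len(x) is a power of two; afterwards every slot
    holds the full dot product.
    """
    acc = [a * b for a, b in zip(x, w)]   # slot-wise multiply (cmult)
    k = 1
    while k < len(acc):
        rotated = rotate(acc, k)          # rotate
        acc = [a + b for a, b in zip(acc, rotated)]  # cadd
        k *= 2
    return acc

print(dot_product_slots([1, 2, 3, 4], [1, 1, 1, 1]))  # [10, 10, 10, 10]
```

This rotate-and-sum pattern is the building block behind slot-packed matrix-vector products, and it is where cost models and optimizations such as BSGS come into play.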
## Core Concepts

### Compilation Context

The `CompilationContext` maintains global state throughout compilation:
- **Scheme**: HE scheme (CKKS, BFV, BGV)
- **Parameters**: Encryption parameters (polynomial degree, modulus, scale)
- **Backend**: Backend implementation providing HE operations
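A rough sketch of what such a context might hold, as plain dataclasses. This is illustrative only; the field names mirror the example later on this page, not the actual class definitions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Sequence

class HEScheme(Enum):
    CKKS = "ckks"   # approximate arithmetic over reals/complex
    BFV = "bfv"     # exact modular integer arithmetic
    BGV = "bgv"     # exact modular integer arithmetic

@dataclass
class CKKSParameters:
    poly_modulus_degree: int = 8192                   # ring dimension N
    coeff_modulus: Sequence[int] = (60, 40, 40, 60)   # prime bit sizes
    scale: float = 2.0 ** 40                          # fixed-point scale

@dataclass
class CompilationContext:
    scheme: HEScheme          # which HE scheme to target
    params: CKKSParameters    # encryption parameters
    backend: Any = None       # object providing the HE operations

ctx = CompilationContext(scheme=HEScheme.CKKS, params=CKKSParameters())
```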
### Transformation Passes
Modular transformations that modify the computation graph:
- **Input Packing**: Pack tensors into ciphertext slots
- **Polynomial Approximation**: Replace non-linear activations
- **Rescaling/Relinearization**: Manage ciphertext properties
- **Bootstrapping**: Refresh noise budgets
- **Optimization**: BSGS, dead code elimination, etc.
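In spirit, a pass is just a function from graph to graph. A toy version of the polynomial-approximation rewrite, on a list-of-nodes stand-in for the fx graph (a hypothetical sketch, not the real pass implementation):

```python
def nonlinear_to_polynomial(graph, degree=2):
    """Rewrite 'relu' nodes into polynomial-evaluation nodes.

    HE schemes support only additions and multiplications, so
    non-linear activations must be replaced by polynomial surrogates.
    """
    rewritten = []
    for op, args in graph:
        if op == "relu":
            rewritten.append(("poly_eval", {**args, "degree": degree}))
        else:
            rewritten.append((op, args))
    return rewritten

graph = [
    ("linear", {"input": "x"}),
    ("relu", {"input": "t0"}),
    ("linear", {"input": "t1"}),
]
graph = nonlinear_to_polynomial(graph)
print(graph[1])  # ('poly_eval', {'input': 't0', 'degree': 2})
```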
### Pass Pipeline
Ordered sequence of passes that transforms the model:
```python
pipeline = PassPipeline([
    InputPackingPass(),
    NonlinearToPolynomialPass(),
    RescalingInsertionPass(),
    DeadCodeEliminationPass(),
])
```
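Conceptually, the pipeline folds the graph through each pass in order. A minimal sketch under that assumption (the toy graph representation and the two passes here are made up for illustration; this is not the real `PassPipeline` internals):

```python
class PassPipeline:
    """Applies passes to a graph in the order given."""

    def __init__(self, passes):
        self.passes = list(passes)

    def run(self, graph):
        # Each pass takes a graph and returns a transformed graph.
        for p in self.passes:
            graph = p(graph)
        return graph

# Two toy passes over a list-of-op-names graph:
drop_noops = lambda g: [n for n in g if n != "noop"]
fuse_adds = lambda g: ["fused_add" if n == "add" else n for n in g]

pipeline = PassPipeline([drop_noops, fuse_adds])
print(pipeline.run(["add", "noop", "mult"]))  # ['fused_add', 'mult']
```

Because passes compose as plain functions, reordering or swapping them changes the compilation strategy without touching the rest of the framework.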
### Backend Interface
Abstract interface for HE operations:
- **Fake Backend**: PyTorch simulation for rapid testing
- **Real Backend**: Integration with HE libraries (future)
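The shape of such an interface can be sketched as an abstract base class, with a fake backend computing slot-wise on plain Python lists. This is illustrative: the method names follow the cadd/cmult/rotate vocabulary from the architecture diagram, not necessarily HETorch's exact signatures:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Abstract HE operations a compiled graph is lowered to."""

    @abstractmethod
    def cadd(self, ct, pt): ...    # ciphertext + plaintext

    @abstractmethod
    def cmult(self, ct, pt): ...   # ciphertext * plaintext

    @abstractmethod
    def rotate(self, ct, k): ...   # cyclic slot rotation

class FakeBackend(Backend):
    """Simulates the ops on plain lists for fast, key-free testing."""

    def cadd(self, ct, pt):
        return [a + b for a, b in zip(ct, pt)]

    def cmult(self, ct, pt):
        return [a * b for a, b in zip(ct, pt)]

    def rotate(self, ct, k):
        k %= len(ct)
        return ct[k:] + ct[:k]

be = FakeBackend()
print(be.cadd([1, 2], [3, 4]))   # [4, 6]
print(be.rotate([1, 2, 3], 1))   # [2, 3, 1]
```

Swapping the fake backend for a real one changes only where these calls land, not the compiled graph above it.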
## Example: Compiling a Simple Model
```python
import torch
import torch.nn as nn

from hetorch import (
    HEScheme, CKKSParameters, CompilationContext,
    HETorchCompiler, FakeBackend
)
from hetorch.passes import PassPipeline
from hetorch.passes.builtin import (
    InputPackingPass,
    NonlinearToPolynomialPass,
    RescalingInsertionPass,
    DeadCodeEliminationPass,
)

# Define model
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

# Create compilation context
context = CompilationContext(
    scheme=HEScheme.CKKS,
    params=CKKSParameters(
        poly_modulus_degree=8192,
        coeff_modulus=[60, 40, 40, 60],
        scale=2**40
    ),
    backend=FakeBackend()
)

# Build pass pipeline
pipeline = PassPipeline([
    InputPackingPass(strategy="row_major"),
    NonlinearToPolynomialPass(degree=8),
    RescalingInsertionPass(strategy="lazy"),
    DeadCodeEliminationPass(),
])

# Compile
model = SimpleModel()
compiler = HETorchCompiler(context, pipeline)
compiled_model = compiler.compile(model, torch.randn(1, 10))

# Execute
output = compiled_model(torch.randn(1, 10))
```
## License

HETorch is licensed under the MIT License. See the LICENSE file for details.
## Next Steps

- **New Users**: Start with Installation and Quickstart
- **Research Users**: Read Compilation Workflow and explore Examples
- **Developers**: Study Architecture and try the Custom Pass Tutorial