hetorch.utils.polynomial
Polynomial approximation utilities for non-linear functions.
Provides Chebyshev and least-squares polynomial approximations for common activation functions used in neural networks.
Functions
approximate_activation(activation: str, degree: int = 8, method: str = 'chebyshev', range_override: Optional[Tuple[float, float]] = None) -> numpy.ndarray
Approximate an activation function with a polynomial
Args:
    activation: Activation function name (relu, gelu, sigmoid, etc.)
    degree: Polynomial degree
    method: Approximation method ("chebyshev" or "least_squares")
    range_override: Optional custom range (min, max)
Returns: Polynomial coefficients (from constant to highest degree)
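A minimal usage sketch based on the signatures documented on this page (the input values are only illustrative):

```python
import torch
from hetorch.utils.polynomial import approximate_activation, evaluate_polynomial

# Degree-8 Chebyshev approximation of GELU over its default range
coeffs = approximate_activation("gelu", degree=8, method="chebyshev")

# Evaluate the polynomial surrogate on a tensor; coefficients are constant-first
x = torch.linspace(-3.0, 3.0, steps=7)
y_approx = evaluate_polynomial(x, coeffs)
```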
chebyshev_approximation(func: Callable[[numpy.ndarray], numpy.ndarray], degree: int, a: float, b: float) -> numpy.ndarray
Compute Chebyshev polynomial approximation coefficients
Uses Chebyshev interpolation to approximate a function over [a, b]
Args:
    func: Function to approximate
    degree: Polynomial degree
    a: Left endpoint of approximation range
    b: Right endpoint of approximation range
Returns: Polynomial coefficients (from constant to highest degree)
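A compact sketch of the underlying idea (not necessarily the library's exact implementation): interpolate func at degree + 1 Chebyshev nodes mapped to [a, b], then recover power-basis coefficients ordered constant term first.

```python
import numpy as np

def chebyshev_approximation_sketch(func, degree, a, b):
    k = np.arange(degree + 1)
    # Chebyshev nodes of the first kind on [-1, 1], mapped affinely to [a, b]
    t = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))
    x = 0.5 * (a + b) + 0.5 * (b - a) * t
    # With degree + 1 distinct nodes, the degree-`degree` fit is the interpolant
    return np.polynomial.polynomial.polyfit(x, func(x), degree)
```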
chebyshev_nodes(n: int, a: float, b: float) -> numpy.ndarray
Generate Chebyshev nodes in interval [a, b]
Args:
    n: Number of nodes (degree + 1)
    a: Left endpoint
    b: Right endpoint
Returns: Array of Chebyshev nodes
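The standard first-kind Chebyshev nodes mapped to [a, b] are x_k = (a + b)/2 + (b - a)/2 * cos((2k + 1)π / (2n)) for k = 0, ..., n - 1; a one-line sketch (the library's node ordering may differ):

```python
import numpy as np

def chebyshev_nodes_sketch(n, a, b):
    k = np.arange(n)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * n))
```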
chebyshev_to_polynomial(cheb_coeffs: numpy.ndarray, a: float, b: float) -> numpy.ndarray
Convert Chebyshev coefficients to standard polynomial coefficients
Args:
    cheb_coeffs: Chebyshev coefficients
    a: Left endpoint of interval
    b: Right endpoint of interval
Returns: Standard polynomial coefficients
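A sketch of one way to perform the conversion, assuming the Chebyshev series is expressed in the normalized variable t on [-1, 1]: expand it with NumPy's cheb2poly, then substitute t = (2x - a - b)/(b - a) to obtain power-basis coefficients in x, constant term first.

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial.chebyshev import cheb2poly

def chebyshev_to_polynomial_sketch(cheb_coeffs, a, b):
    poly_in_t = Polynomial(cheb2poly(cheb_coeffs))            # power series in t
    t_of_x = Polynomial([-(a + b) / (b - a), 2.0 / (b - a)])  # t = (2x - a - b)/(b - a)
    return poly_in_t(t_of_x).coef                             # power series in x
```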
evaluate_polynomial(x: torch.Tensor, coeffs: numpy.ndarray) -> torch.Tensor
Evaluate polynomial with given coefficients using Horner's method
Args:
    x: Input tensor
    coeffs: Polynomial coefficients (from constant to highest degree)
Returns: Polynomial evaluation result
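A minimal sketch of Horner's scheme over a tensor, with coefficients ordered constant term first as documented:

```python
import numpy as np
import torch

def evaluate_polynomial_sketch(x: torch.Tensor, coeffs: np.ndarray) -> torch.Tensor:
    # Start from the highest-degree coefficient and fold in the rest
    result = torch.full_like(x, float(coeffs[-1]))
    for c in coeffs[-2::-1]:          # c_{n-1}, ..., c_1, c_0
        result = result * x + float(c)
    return result
```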
get_activation_function(activation: str) -> Callable[[numpy.ndarray], numpy.ndarray]
Get numpy implementation of activation function
Args:
    activation: Activation function name
Returns: Numpy function implementing the activation
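Illustrative NumPy reference implementations for a few of the names mentioned above (the library's exact set of supported activations, and whether it uses exact or tanh-approximate GELU, is defined by the implementation, not here):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

_ACTIVATIONS = {
    "relu": lambda x: np.maximum(x, 0.0),
    "sigmoid": _sigmoid,
    "tanh": np.tanh,
    # tanh-based GELU approximation; an exact form would use the erf function
    "gelu": lambda x: 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                               * (x + 0.044715 * x ** 3))),
}

def get_activation_function_sketch(activation):
    try:
        return _ACTIVATIONS[activation]
    except KeyError:
        raise ValueError(f"Unknown activation: {activation}") from None
```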
get_activation_range(activation: str) -> Tuple[float, float]
Get default approximation range for common activation functions
Args:
    activation: Activation function name
Returns: Tuple of (min, max) for approximation range
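Usage sketch combining the range lookup with the lower-level helpers documented on this page:

```python
from hetorch.utils.polynomial import (
    chebyshev_approximation,
    get_activation_function,
    get_activation_range,
)

a, b = get_activation_range("sigmoid")
func = get_activation_function("sigmoid")
coeffs = chebyshev_approximation(func, degree=6, a=a, b=b)
```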
least_squares_approximation(func: Callable[[numpy.ndarray], numpy.ndarray], degree: int, a: float, b: float, n_points: int = 1000) -> numpy.ndarray
Compute least-squares polynomial approximation
Args:
    func: Function to approximate
    degree: Polynomial degree
    a: Left endpoint
    b: Right endpoint
    n_points: Number of sample points
Returns: Polynomial coefficients (from constant to highest degree)
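A sketch of a dense-grid least-squares fit; np.polynomial.polynomial.polyfit already returns coefficients ordered from the constant term up, matching the convention used here:

```python
import numpy as np

def least_squares_approximation_sketch(func, degree, a, b, n_points=1000):
    x = np.linspace(a, b, n_points)
    return np.polynomial.polynomial.polyfit(x, func(x), degree)
```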
polynomial_to_string(coeffs: numpy.ndarray, var: str = 'x') -> str
Convert polynomial coefficients to human-readable string
Args:
    coeffs: Polynomial coefficients
    var: Variable name
Returns: String representation of polynomial
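Usage sketch (the exact formatting of the returned string is up to the library):

```python
from hetorch.utils.polynomial import approximate_activation, polynomial_to_string

coeffs = approximate_activation("sigmoid", degree=4)
print(polynomial_to_string(coeffs, var="x"))
```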