A Brief History of FHE

FHE Onchain
October 14, 2024

Fully homomorphic encryption, frequently referred to as simply FHE, has been described as the holy grail of encryption. Today, data must be decrypted in order to be processed, presenting a host of security and privacy issues. FHE enables encrypted data to be computed upon without decryption, unlocking data utility without compromising on confidentiality or security.

Once thought impossible, FHE is now a reality, and it’s set to unlock new applications across a plethora of industries.

Let’s dive into the history of FHE, from its inception through partially homomorphic encryption to its current position.

Early Concepts: Privacy Homomorphisms

Introduction to Privacy Homomorphisms by Rivest et al. (1978)

The concept of privacy homomorphisms was first introduced by Ronald Rivest, Leonard Adleman, and Michael Dertouzos in 1978. Their pioneering work laid the groundwork for what would eventually evolve into FHE. Privacy homomorphisms aimed to enable computations on encrypted data without revealing any information about the underlying plaintext. This idea was revolutionary at the time because it suggested that data could be processed securely without needing to decrypt it first, thereby maintaining confidentiality throughout the computation process.

However, the early privacy homomorphisms proposed by Rivest and his colleagues were limited in their functionality. These initial schemes could only support a restricted set of operations on encrypted data, such as addition or multiplication, but not both. This limitation made them impractical for many real-world applications that required more complex computations. Despite these shortcomings, the introduction of privacy homomorphisms represented a significant theoretical advancement and sparked further research into the possibility of developing more powerful encryption schemes.

The Significance of Privacy Homomorphisms in the Evolution of FHE

Privacy homomorphisms played a crucial role in the evolution of cryptographic techniques, serving as the conceptual foundation for FHE. While the early schemes were limited in scope, they demonstrated the feasibility of performing operations on encrypted data, a concept that would later be expanded and refined. The limitations of privacy homomorphisms highlighted the need for more advanced cryptographic tools, leading researchers to explore new mathematical frameworks and techniques that could support arbitrary computations on encrypted data.

The significance of privacy homomorphisms lies in their influence on the development of FHE. They provided the initial insights and challenges that would guide subsequent research efforts, ultimately culminating in the creation of the first practical FHE scheme by Craig Gentry in 2009. Without the early exploration of privacy homomorphisms, the field of homomorphic encryption might not have progressed as rapidly as it did. 

The Learning-With-Errors (LWE) Problem

In cryptography, Learning-With-Errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms.

Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f : \mathbb{Z}_q^n \rightarrow \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(x, y)$, where $x \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, such that $y$ deviates from $f(x)$ only by a small, randomly drawn error. The task is to recover $f$ from these noisy samples.

It is based on the idea of representing secret information as a set of vector equations with errors. In other words, LWE hides the value of a secret by adding noise to it: the resulting system of noisy linear equations is computationally infeasible to solve.
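To make this concrete, here is a tiny sketch of how LWE samples are generated, using toy, insecure parameters chosen for illustration: each pair $(x, y)$ satisfies $y = \langle s, x \rangle + e \bmod q$ for a small error $e$, and the attacker's task is to recover the secret linear map from many such noisy pairs.

```python
# Toy LWE sample generation; parameters are illustrative and far too small to be secure.
import random

q, n = 97, 4
s = [random.randrange(q) for _ in range(n)]        # the unknown f is x -> <s, x> mod q

def sample():
    x = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])                  # small noise term
    y = (sum(si * xi for si, xi in zip(s, x)) + e) % q
    return x, y

x, y = sample()
fx = sum(si * xi for si, xi in zip(s, x)) % q
# y deviates from f(x) by at most 1 (mod q)
assert min((y - fx) % q, (fx - y) % q) <= 1
```

Without the error term, Gaussian elimination on $n$ samples would recover $s$ immediately; the noise is what makes the problem hard.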

The LWE problem plays a pivotal role in the development of FHE schemes. In particular, Craig Gentry's pioneering FHE framework was followed by more efficient constructions that leveraged the hardness of the LWE problem to ensure security. LWE's noise-based structure aligns well with the encryption mechanisms in FHE, where handling and managing noise are critical for performing operations on encrypted data without compromising security. Over time, LWE became a foundational problem for many modern FHE schemes, such as BGV (Brakerski-Gentry-Vaikuntanathan) and TFHE, contributing to more practical implementations of FHE.

LWE Encryption

The encryption scheme is constructed around a public and a private key, where the public key is derived from the private key with the addition of noise. This ensures that, while encryption can be performed by anyone possessing the public key, decryption is feasible only with the knowledge of the private key. The essence of this encryption lies in its ability to mask the message with a layer of computational complexity that is impractical to breach without the private key, thus ensuring confidentiality.

Private Key: $ s \leftarrow \mathbb{Z}_q^n $.

Public Key: $(A, b \in \mathbb{Z}_q^N)$ where

$$ A = (a_1, a_2, \dots, a_N) \leftarrow \mathbb{Z}_q^{n \times N} $$

and

$$ b = s \cdot A + \epsilon $$

where $\epsilon$ is an $N$-vector of small errors sampled from the noise distribution.

Encryption: To encrypt a message $ m \in \{0, 1\} $, choose a random subset $ S \subseteq \{1, \dots, N\} $ and compute

$$ Enc(0) = (c_1, c_2) = \left(\sum_{i \in S} a_i, \sum_{i \in S} b_i\right) $$

$$ Enc(1) = (c_1, c_2) = \left(\sum_{i \in S} a_i, \left\lfloor \frac{q}{2} \right\rfloor + \sum_{i \in S} b_i\right) $$

Decryption: To decrypt a given ciphertext $(c_1, c_2)$,

$$ Dec((c_1, c_2)) = 0 $$

if $ c_2 - s \cdot c_1 $ is close to 0,

$$ Dec((c_1, c_2)) = 1 $$

if $ c_2 - s \cdot c_1 $ is close to $ \left\lfloor \frac{q}{2} \right\rfloor $.

You can easily test this scheme by yourself. As you may have noticed, we have only covered regular LWE encryption. There is also “Ring-LWE encryption,” but it is beyond the scope of this article.
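As a sanity check, the scheme above can be run end to end with toy parameters. This sketch assumes ternary errors and a fresh random subset of the public-key samples per encryption; the parameters are illustrative only and offer no security.

```python
# Toy LWE bit encryption following the scheme above; insecure demo parameters.
import random

q, n, N = 257, 8, 32            # modulus, secret dimension, number of samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]                      # private key s in Z_q^n
    A = [[random.randrange(q) for _ in range(n)] for _ in range(N)]  # samples a_1..a_N
    e = [random.choice([-1, 0, 1]) for _ in range(N)]                # small noise vector
    b = [(sum(si * ai for si, ai in zip(s, a)) + ei) % q
         for a, ei in zip(A, e)]                                     # b = s*A + noise
    return s, (A, b)

def encrypt(pk, m):
    A, b = pk
    S = [i for i in range(N) if random.random() < 0.5]               # random subset of samples
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + (q // 2) * m) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(si * ci for si, ci in zip(s, c1))) % q
    # decide whether the residue is closer to 0 or to q/2
    return 0 if min(d, q - d) < abs(d - q // 2) else 1

s, pk = keygen()
assert decrypt(s, encrypt(pk, 0)) == 0
assert decrypt(s, encrypt(pk, 1)) == 1
```

Because at most $N$ ternary errors are summed, the accumulated noise stays well below $q/4$, so the closer-to-$0$ versus closer-to-$q/2$ decision always recovers the bit.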

Craig Gentry’s Breakthrough

Noise Growth Problem

In LWE-based FHE, noise growth is a critical issue that directly affects the practicality and efficiency of the encryption scheme. The core idea of LWE-based FHE is to perform computations on encrypted data without decrypting it. However, each homomorphic operation (addition or multiplication) on ciphertexts causes the noise—initially added to the plaintext during encryption for security purposes—to increase. As more operations are performed, the noise grows cumulatively and exponentially, potentially making the ciphertexts too noisy to decrypt correctly. If the noise exceeds a certain threshold, the decryption process yields incorrect results.

Gentry’s Idea: Bootstrapping

Cryptographer Craig Gentry proposed a solution to this problem: bootstrapping. Suppose we have an encryption scheme $E$ with plaintext space $P = \{0, 1\}$, so we will be dealing with bits. Suppose also that we have a noisy ciphertext $\psi_1$ that encrypts a bit $\pi_1$ under a public key $pk_1$, and that we want to refresh it. So that we can decrypt our noisy ciphertext homomorphically, assume that we have the secret key $sk_1$ corresponding to $pk_1$.

Then our "reencryption" algorithm for ciphertext $\psi_1$ is as follows:

1. Encrypt bits of the secret key $sk_1$ under the new public key $pk_2$:

$$ Enc_E(pk_2, sk_{1j}) = \langle sk_{1j} \rangle. $$

2. Encrypt bits of the ciphertext $\psi_1$ under the new public key $pk_2$:

$$ Enc_E(pk_2, \psi_{1j}) = \langle \psi_{1j} \rangle. $$   

3. Evaluate the decryption circuit $D_E$ homomorphically on the encrypted bits of the ciphertext $\psi_1$, using the encrypted secret key $sk_1$:

$$ Eval_E(pk_2, D_E, \langle \langle sk_{1j} \rangle, \langle \psi_{1j} \rangle \rangle) = \psi_2. $$

Assuming the scheme can evaluate its own decryption circuit while introducing less noise than decryption removes, we obtain a new ciphertext $\psi_2$ that encrypts the same bit $\pi_1$ with less noise.

Notice that the homomorphic decryption we performed removes the initial encryption layer along with the accumulated noise.

Evolution of FHE Schemes

BGV Scheme

Plaintext & Ciphertext Spaces

The evolution of plaintext and ciphertext spaces in schemes like BGV and BFV reflects the advancements made in optimizing Fully Homomorphic Encryption (FHE) for practical use. Initially, the computational overhead in early FHE schemes, including Gentry's original construction, was immense, with no clear optimization for efficiently handling large data sets.

BGV and BFV improved on these early constructions by introducing more sophisticated algebraic structures, such as the polynomial rings described below, which allow for faster encryption, decryption, and homomorphic operations. The choice of plaintext and ciphertext spaces, with parameters such as the ring dimension $n$ and the moduli $q$ and $t$, became critical for balancing efficiency with security. By leveraging ring structures and power-of-2 dimensions, these schemes significantly reduced computational overhead, which made practical FHE implementations more feasible, especially for applications like encrypted cloud computing and secure data analysis. This optimization marks a key stage in the historical development of FHE technology.

The plaintext polynomial ring is defined as $\mathcal{P} = R_t = \mathbb{Z}_t[x]/(x^n + 1)$, which means it consists of polynomials with degrees less than $n$ and coefficients in $\mathbb{Z}_t$, where both the plaintext modulus $t$ and the ring dimension $n$ are integers. The ciphertext space is given by $\mathcal{C} = R_{q_l} \times R_{q_l}$, where $R_{q_l} = \mathbb{Z}_{q_l}[x]/(x^n + 1)$ and $q_l \in \mathbb{Z}$ represents the ciphertext modulus at level $l$.

Key Generation

The secret key $sk$ is a random ternary polynomial generated from $R_2$. The public key $pk$ consists of a pair of polynomials $(pk_1, pk_2)$, which are calculated as follows:

$$ pk_1 = \left( -(a \cdot sk + t \cdot e) \mod q_l \right) $$

$$ pk_2 = a $$

Here, $a$ is a random polynomial in $R_{q_l}$, and $e$ is a random error polynomial sampled from $\chi$. The notation $\mod q_l$ indicates that the polynomial arithmetic is performed modulo $q_l$. Note that the error is scaled by $t$ here, unlike in the key generation process for BFV.

Encryption and Decryption

The encryption algorithm takes a plaintext message $M$ in $\mathcal{P}$ and the public key $pk$, and outputs the ciphertext $C = (C_1, C_2)$, effectively encrypting the message. The encryption process involves generating three small random polynomials $u$ from $R_2$ and $e_1$ and $e_2$ from $\chi$, and then computing:

$$ C_1 = (pk_1 \cdot u + t \cdot e_1 + M \mod q_l) $$

$$ C_2 = (pk_2 \cdot u + t \cdot e_2 \mod q_l) $$

Notice how the error polynomial in $C_1$ is scaled by the plaintext modulus $t$. This aligns with the earlier demonstration, where the message occupies the lower bits of the ciphertext coefficient while the noise can grow in the upper bits.

The decryption process reverses encryption by taking the ciphertext and the secret key $sk$ as input. It retrieves the plaintext message $M$, assuming the noise remains manageable throughout computation. Decryption proceeds as follows:

$$ M = \left( (C_1 + C_2 \cdot sk \mod q_l) \mod t \right) $$

To understand why decryption works and under what conditions, let's expand the decryption expression $C_1 + C_2 \cdot sk$, assuming for simplicity that decryption happens immediately after encryption:

$$ C_1 + C_2 \cdot sk = pk_1 \cdot u + t \cdot e_1 + M + (pk_2 \cdot u + t \cdot e_2) \cdot sk $$

$$ = \left( - (a \cdot sk + t \cdot e) \cdot u + t \cdot e_1 + M + a \cdot u \cdot sk + t \cdot e_2 \cdot sk \right) \mod q_l $$

$$ = M - t \cdot e \cdot u + t \cdot e_1 + t \cdot e_2 \cdot sk \mod q_l $$

$$ = M + t \cdot v \mod q_l $$

where $v = -e \cdot u + e_1 + e_2 \cdot sk$.

By reducing the result (with coefficients centred in $(-q_l/2, q_l/2]$) modulo $t$, we eliminate the noise term $v$ and retrieve $M$ without the noise. However, this process is contingent on the norm of $v$ being small enough, specifically $||v||_\infty < \frac{q_l}{2t}$, where $||v||_\infty$ is the largest absolute coefficient in $v$. The bound ensures that the noise magnitude does not grow excessively and distort the message.
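The key generation, encryption, and decryption equations above can be exercised end to end with a toy implementation over $\mathbb{Z}_q[x]/(x^n + 1)$. The sketch below uses schoolbook negacyclic multiplication, illustrative (insecure) parameters, and ternary noise standing in for the distribution $\chi$.

```python
# Toy BGV keygen/encrypt/decrypt over R_q = Z_q[x]/(x^n + 1); insecure parameters.
import random

n, q, t = 16, 65537, 17          # ring dimension, ciphertext modulus, plaintext modulus

def polymul(a, b):
    """Multiply in Z[x]/(x^n + 1): x^n wraps around with a sign flip."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] += ai * bj
            else:
                res[k - n] -= ai * bj
    return res

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def mod_q(a):
    return [x % q for x in a]

# Key generation: ternary secret, pk1 = -(a*sk + t*e) mod q, pk2 = a
sk = [random.choice([-1, 0, 1]) for _ in range(n)]
a  = [random.randrange(q) for _ in range(n)]
e  = [random.choice([-1, 0, 1]) for _ in range(n)]
pk1 = mod_q([-x for x in add(polymul(a, sk), [t * x for x in e])])
pk2 = a

def encrypt(M):
    u  = [random.choice([-1, 0, 1]) for _ in range(n)]
    e1 = [random.choice([-1, 0, 1]) for _ in range(n)]
    e2 = [random.choice([-1, 0, 1]) for _ in range(n)]
    C1 = mod_q(add(add(polymul(pk1, u), [t * x for x in e1]), M))
    C2 = mod_q(add(polymul(pk2, u), [t * x for x in e2]))
    return C1, C2

def decrypt(C1, C2):
    d = mod_q(add(C1, polymul(C2, sk)))
    # centre coefficients in (-q/2, q/2] before reducing mod t
    centred = [x - q if x > q // 2 else x for x in d]
    return [x % t for x in centred]

M = [random.randrange(t) for _ in range(n)]
assert decrypt(*encrypt(M)) == M
```

With these parameters, the noise term $t \cdot v$ is bounded well below $q/2$, so the centred reduction recovers $M + t \cdot v$ exactly and the final reduction modulo $t$ strips the noise.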

Relinearization

The primary goal of relinearization is to reduce the size of product ciphertexts, which result from `Mul`, back to two ring elements. To address this, we define $ C^* = \{C^*_1, C^*_2, C^*_3\} $. Our objective is to find $ \hat{C}^* = \{\hat{C}^*_1, \hat{C}^*_2\} $ such that:

$$ (C^*_1 + C^*_2 \cdot SK + C^*_3 \cdot SK^2 \mod q) \approx (\hat{C}^*_1 + \hat{C}^*_2 \cdot SK + r \mod q) $$

The evaluation key $ EK = (-(a \cdot SK + e) + SK^2, a) $ allows access to $ SK^2 $. This key is a masked version of $ SK^2 $ because $ EK_1 + EK_2 \cdot SK = SK^2 - e $. Using this, we can compute $ \hat{C}^* $ as follows:

$$ \hat{C}^*_1 = (C^*_1 + EK_1 \cdot C^*_3 \mod q) $$

$$ \hat{C}^*_2 = (C^*_2 + EK_2 \cdot C^*_3 \mod q) $$

Decrypting with $SK$, we get:

$$ \hat{C}^*_1 + \hat{C}^*_2 \cdot SK = C^*_1 + EK_1 \cdot C^*_3 + SK \cdot (C^*_2 + EK_2 \cdot C^*_3) $$

$$ = C^*_1 + C^*_2 \cdot SK + C^*_3 \cdot (EK_1 + EK_2 \cdot SK) = C^*_1 + C^*_2 \cdot SK + C^*_3 \cdot SK^2 - C^*_3 \cdot e $$

It is important to note that the term $ C^*_3 \cdot e $ can be quite large, since $ C^*_3 $ has coefficients in $ \mathbb{Z}_q $.
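In practice, this blow-up is controlled by writing each coefficient of $C^*_3$ in base $w$ and pairing each small digit with its own evaluation key, so the key error is only ever multiplied by values smaller than $w$ rather than by values up to $q$. A minimal sketch of that decomposition, with assumed toy parameters ($w = 16$, $q = 2^{20}$):

```python
# Base-w digit decomposition as used in relinearization key switching.
q, w = 2**20, 16
L = 5                                    # number of digits: w**L >= q

def decompose(c):
    """Base-w digits of c, least significant first."""
    return [(c // w**i) % w for i in range(L)]

c = 987654
digits = decompose(c)
assert sum(d * w**i for i, d in enumerate(digits)) == c   # digits reconstruct c
assert all(0 <= d < w for d in digits)                    # every digit is small
```

Each digit then multiplies its own key error, so the noise added by relinearization grows like $L \cdot w \cdot ||e||$ instead of $q \cdot ||e||$.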

Modulus Switching

ModSwitch is a technique used to manage the noise generated during multiplication operations. This concept was first introduced in [BGV14]. It utilizes the idea that a ciphertext $C$ defined with respect to modulus $q$ and secret key $sk$ can be transformed into an equivalent ciphertext $C'$ defined with respect to another modulus $q'$ while maintaining the same secret key $sk$ such that:

$$ (C(sk) \mod q) \equiv (C'(sk) \mod q') \pmod{t} $$

The transformation is accomplished by scaling the coefficients of $C$ by the factor $q'/q$ and then applying appropriate rounding. To effectively reduce the noise magnitude, a smaller $q'$ is selected compared to $q$, allowing us to scale down the multiplication noise.

In the context of BGV, $q$ can be at any level $l$, that is, $q_l$, and $q'$ in this context is simply $q_{l-1}$. The ciphertext moduli $q_l$ for $0 \le l \le L$ are chosen to be congruent to each other modulo $t$. This scales down the noise without affecting the encrypted plaintext message; from the plaintext's perspective, it is akin to scaling by 1.

ModSwitch can be computed as shown in the following equation, where $[\cdot]$ denotes a suitable rounding function. The transformed ciphertext $C'$ is defined with respect to the new modulus $q'$:

$$ C' = \left[\frac{q'}{q} \cdot C \right] $$
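A scalar sketch of this rounding step may help; in a real scheme it is applied coefficient-wise to the ciphertext polynomials. The moduli below are assumed toy values, both congruent to 1 modulo $t$, and the switched coefficient is chosen as the integer nearest to $(q'/q) \cdot c$ that stays congruent to $c$ modulo $t$, which is what preserves decryption.

```python
# Scalar sketch of BGV modulus switching with a congruence-preserving rounding.
from fractions import Fraction

t = 17
q = 17 * 4000 + 1          # q  = 1 (mod t)
q_next = 17 * 100 + 1      # q' = 1 (mod t), smaller, so the noise scales down

def mod_switch(c, q, q_next, t):
    target = Fraction(q_next * c, q)     # ideal scaled value (q'/q) * c
    k = round((target - c) / t)          # shift by a multiple of t toward the target
    return c + t * k                     # nearest value to target with c' = c (mod t)

c = 12345
c_new = mod_switch(c, q, q_next, t)
assert c_new % t == c % t                                      # plaintext residue kept
assert abs(c_new - Fraction(q_next * c, q)) <= Fraction(t, 2)  # within t/2 of the scale
```

Because the rounding error is at most $t/2$ per coefficient, the switched ciphertext carries noise of magnitude roughly $(q'/q)$ times the original plus a small additive term, which is exactly the trade ModSwitch is designed to make.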

TFHE Scheme

Among the various FHE schemes, Fast Fully Homomorphic Encryption Over the Torus (TFHE) is distinguished by its efficiency and practicality. TFHE leverages the mathematical properties of the torus to enable fast and secure computations on encrypted data. This section provides a summarized overview of TFHE, highlighting its key components and operations.

Introduction to the Torus

The torus is a mathematical construct, typically denoted as $ \mathbb{T} = \mathbb{R}/\mathbb{Z} $. In the context of TFHE, the torus is used to define a space where encrypted operations can be performed efficiently. Operations on the torus involve modular arithmetic, making it well-suited for cryptographic applications where data is represented in a continuous domain.

Torus Polynomials

Torus polynomials are polynomials with coefficients that lie on the torus, forming the basis for encryption and decryption operations in TFHE. These polynomials are used to construct ciphertexts that can be manipulated homomorphically, allowing for encrypted computations.

LWE over the Torus

TLWE (Torus Learning with Errors) is a variant of the standard Learning with Errors (LWE) problem adapted for the torus. In TLWE, the message to be encrypted is combined with a secret key and a Gaussian error term, resulting in a ciphertext pair that securely hides the original message. The use of the torus allows for more efficient encryption and decryption processes compared to traditional LWE.

RLWE over the Torus

TRLWE (Ring Learning with Errors over the Torus) extends the TLWE scheme to work with ring elements, enabling more complex computations. In TRLWE, messages are represented as polynomials, and the encryption process involves operations on these polynomials within the torus. This extension supports homomorphic multiplication and addition, essential for performing arbitrary computations on encrypted data.

GSW over the Torus

TRGSW (Torus Gentry-Sahai-Waters) encryption is another crucial component of TFHE, allowing for the evaluation of complex circuits on encrypted data. TRGSW encrypts bits of a secret key and supports homomorphic operations, including ciphertext multiplexer (CMux) operations, which are vital for implementing control structures in encrypted circuits.

Bootstrapping

Bootstrapping is a technique used in TFHE to reduce the noise that accumulates in ciphertexts during homomorphic operations. Noise growth is a critical issue in FHE schemes, and without bootstrapping, the noise can render the ciphertexts undecryptable. In TFHE, gate bootstrapping is performed using TRGSW ciphertexts, where the noisy ciphertext is homomorphically decrypted and refreshed, ensuring that the noise remains within manageable levels.

Lookup Tables

Lookup Tables (LUTs) in TFHE are used to homomorphically evaluate functions on encrypted inputs. By precomputing the possible outputs of a function and storing them in encrypted form, LUTs allow for efficient function evaluation during the computation process. This technique is particularly useful for implementing complex operations in an encrypted domain.

CMux Operation

The CMux (ciphertext multiplexer) operation is a fundamental building block in TFHE, used to conditionally select between two encrypted values based on an encrypted control bit. This operation is essential for implementing control flow in encrypted circuits, such as in conditional statements or loops, without revealing any information about the data or the control bit.

Blind Rotation

Blind rotation is a technique used in TFHE to cyclically shift the coefficients of a polynomial encrypted in a TRLWE ciphertext. This operation is performed without decrypting the ciphertext, preserving the confidentiality of the data. Blind rotation is a key component of the bootstrapping process, where it is used to correctly align the encrypted data before noise reduction.

If you are interested in more detail on how these fundamentals are combined to form the entire scheme, see the workshop on TFHE at FHE Summit Brussels.

Conclusion

In conclusion, fully homomorphic encryption (FHE) has evolved from early cryptographic concepts, such as privacy homomorphisms, into a powerful tool for secure, privacy-preserving computations. The initial groundwork laid by pioneers like Rivest and Adleman, while limited in scope, inspired the cryptographic community to pursue more sophisticated methods. This effort resulted in Craig Gentry’s 2009 breakthrough, which introduced bootstrapping as a solution to the noise growth problem, making FHE schemes more practical.

Since Gentry's work, FHE schemes have continued to develop, with various implementations like BGV, BFV, and TFHE addressing efficiency and performance concerns. The combination of fully homomorphic encryption with problems like learning-with-errors (LWE) has further strengthened its security and broadened its applicability. Today, FHE is crucial in areas such as privacy-preserving AI, confidential blockchains, and secure voting systems, representing a significant leap toward achieving secure computation without compromising data confidentiality.

As technology advances, FHE will likely play a central role in enabling secure data operations across various fields, particularly as privacy becomes increasingly important in a data-driven world.
