Secure Messaging Apps and Group Protocols, Part 2

In the first part of the blogpost, we tackled the issue of 1v1 conversations, and it is now time to see how this applies to 1vMANY: group chats! We will give an overview of current solutions, and then have a look at the Messaging Layer Security working group.


Ensuring the security of 1-on-1 conversations is already a challenge, as described in the first part. Ensuring the same security properties in a group conversation pushes the challenge to the next level: the security needs to be guaranteed for all members of a potentially large group. For example, a problem due to a compromised member or a network issue can grow out of proportion. As in the previous blog post, we will focus on the cryptographic protocols needed to ensure a fundamental layer of security for the conversation, and on the solutions that minimize the bandwidth cost.

Group conversation

In 2018, the Messaging Layer Security (MLS) working group was created with the idea to standardize the security layer of a messaging protocol, similarly to what is done with TLS (hence the name), both in terms of security properties and the process to define the specification (iterative, following the IETF procedures, open to suggestions and contributions from anyone). This effort is divided into two documents, one specifying the protocol itself and one describing the architecture; we will come back to both later on.

Most messaging protocols focus on end-to-end encryption for 1-on-1 conversations first, and later extend that protocol to the group setting.

MLS, on the other hand, wishes to immediately propose a solution

for conversations involving two or more members, and aims to scale to groups with tens of thousands of members [MLSA22]

without really making a difference between private messages and group messages: two people are already a group!

Formally, the protocol proposed by MLS is referred to as a Continuous Group Key Agreement, in the sense that several people agree on a shared key continuously, i.e. the key(s) and the group of people can change over time.

As mentioned in the previous part, there are a lot of different so-called secure messaging applications, some of them sharing the same basis (usually the Signal protocol), some with remixes or ideas of their own, but all of them facing the same issue: scaling up from 1-on-1 to 1-on-many is impractical without making compromises.

We first have a look at what messaging applications currently do, then we'll see how binary trees could optimize the protocol(s).

Existing solutions

Several solutions exist to "lift" the 1-on-1 protocol to the group version without having to create another protocol entirely. All of the following solutions achieve a form of Perfect Forward Secrecy (PFS) (see reminder below), so adding a member to an ongoing conversation does not allow the new member to decrypt past messages; they also achieve a form of Post-Compromise Security (PCS) (again, see reminder), so members who leave a conversation should not be able to decrypt subsequent messages.

Reminder
(Perfect) Forward Secrecy (PFS) means an attacker who gets the state of a user A at a time t cannot decrypt the messages that A received or sent before t; and the "opposite" property of Post-Compromise Security (PCS) means that the attacker cannot decrypt messages that A receives or sends after a time t+e where e depends on the protocol.

A group is just many pairs

The most straightforward way to obtain a group is to simply create 1-on-1 channels for each pair of participants. Supposing there are N users in a group, the cost of sending a message is then in O(N).

This method is what Signal currently does: group messages are sent through pairwise channels in a way that actually makes them indistinguishable from private messages (the server, for instance, is unaware of which conversation the messages belong to, which might be considered a nice privacy feature). As described by Moxie Marlinspike in a blog post, the resulting overhead of this choice for group chats is not actually costly when the conversation consists of simple, short texts.

Note
The same principle is used in Signal to handle the multi-device setting: each device is treated as another user (on an administrative level, this is invisible to the users), who cannot see which device was used to send a given message. Moreover, Signal users are not aware of how many devices the other users have, which is not the case in Wire, for example, where you can check your correspondent's devices.

With this method, if the underlying 1-on-1 protocol achieves PFS and PCS, then the group version has the same level of security.

One key to rule them all

Another idea detailed in that same blog post is to use so-called Sender Keys to limit the cost of sending a message: suppose the group consists of A, B, C, and D. When A wants to send a message, she generates a random symmetric key kA and uses it to encrypt her message, then encrypts kA with B's, C's, and D's public keys.

A then sends the encryptions of the key and the encrypted message to the group, which brings the cost down to O(1) for the message, and O(N) for the key. Note that this method is basically PGP for instant messaging!

There are two sub-variants to this method: let eB, eC, eD be encryptions of the key kA for B, C, and D, respectively, and let c be the encrypted message. A can send eB||eC||eD||c to everyone, which supposes that the other users know their respective "positions", or she can send eB||c to B, eC||c to C, and eD||c to D.
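To make the cost structure concrete, here is a toy Python sketch of the Sender Key idea. All names are ours, and the hash-based "cipher" and tiny DH group are deliberately simplified stand-ins, not real cryptography: the point is only that the message body is encrypted once (O(1)) while kA is wrapped once per recipient (O(N)).

```python
import hashlib
import secrets

P = 2**64 - 59   # toy prime; real systems use vetted, much larger groups
G = 2

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def stream(key: bytes, n: int) -> bytes:
    # toy hash-based keystream (NOT a real cipher)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def sender_key_send(message: bytes, recipient_pks):
    kA = secrets.token_bytes(32)                  # fresh random sender key
    c = xor(message, stream(kA, len(message)))    # O(1): one encryption of the body
    wraps = []
    for pk in recipient_pks:                      # O(N): one wrap of kA per member
        eph = secrets.randbelow(P - 2) + 1
        shared = pow(pk, eph, P).to_bytes(8, "big")
        wraps.append((pow(G, eph, P), xor(kA, stream(shared, 32))))
    return wraps, c

def sender_key_recv(sk: int, wrap, c: bytes) -> bytes:
    eph_pub, wrapped = wrap
    shared = pow(eph_pub, sk, P).to_bytes(8, "big")
    kA = xor(wrapped, stream(shared, 32))
    return xor(c, stream(kA, len(c)))
```

This corresponds to the second sub-variant: each recipient receives their own wrap alongside the same ciphertext.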

This Sender Key method is used by WhatsApp for all messages, and by Signal for large attached files such as pictures or videos.

Note
However, WhatsApp does not use this method (or the previous one) for multi-device. Instead, the user's mobile phone is treated as a Master Device and acts as a pass-through for the other devices: a message sent from A's computer is first sent to her mobile phone, then to the recipient's mobile phone, which in turn pushes the message to the recipient's other devices.

As stated in the previous part of this blog post, getting the state of a user allows the attacker to decrypt past, present, and future messages that the user received or will receive, but not messages that the user sent/sends/will send. We have partial PFS and PCS.

One ratcheted key to rule them all

A less costly variant of the above method is the following: instead of using a randomly generated key and having to send it for each message, A generates an initial key kA when she joins the group, sends that key to all members of the group, and then uses a key derivation function (KDF) to obtain the symmetric message encryption keys. B, C, and D will also be able to compute the message keys in order to decrypt A's messages.

To summarize:

  1. A sends encryptions of kA to B, C, and D, who decrypt them using their private keys;
  2. A computes k0 = KDF(kA) and encrypts her first message m0 with it, obtaining ciphertext c0;
  3. A sends c0 to B, C, D;
  4. B, C, and D each compute k0 = KDF(kA) and decrypt c0 with k0 to obtain m0 (hopefully);
  5. to send her next message m1, A computes k1 = KDF(k0) (=KDF(KDF(kA))) and encrypts m1 with it to obtain c1;
  6. A sends c1 to B, C, D;
  7. ...

The cost of this version is thus O(1) to send messages, O(N) when initially sending the key (and when/if that key is updated).
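The steps above can be condensed into a minimal sketch (using SHA-256 with a fixed prefix as a stand-in for the KDF, which is an assumption of ours):

```python
import hashlib

def kdf(key: bytes) -> bytes:
    # one ratchet step: the next message key derived from the current one
    return hashlib.sha256(b"ratchet" + key).digest()

kA = bytes(32)     # A's initial sender key, sent once to B, C, and D
k0 = kdf(kA)       # encrypts m0
k1 = kdf(k0)       # encrypts m1 (= KDF(KDF(kA)))

# B, C, and D run the same chain on their side, so no key material needs
# to be re-sent per message -- hence the O(1) cost for each message.
```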

In this setting, messages should also be signed, with a signing key generated at the same time as the sender key: indeed, all members of the group know the other users' upcoming sender keys, so anyone could impersonate anyone. Adding the signature does not prevent deniability (see below), as the signing key is randomly generated for each conversation.

Reminder
A messaging protocol achieves deniability when users can legitimately deny having participated in a conversation, no matter whether it really happened or not. To achieve deniability, it is sufficient to have a protocol such that anyone can fully compute all of the keys used in the conversation, and in this case since B, C, and D do not add anything but randomness to the keys, A can absolutely compute an entire conversation without even really talking with anyone. Because a "fake" conversation is indistinguishable from a real one, we achieve deniability.

If we suppose that all users delete the past keys after deriving them, we achieve PFS thanks to the ratcheting. However, PCS is not reached, as users do not regularly randomize their sender key, thus anyone compromising a user's state at a time t is capable of decrypting most of the group's subsequent messages. All sender keys should be randomized at least when a user leaves the group, at best as regularly as in Signal, which implies losing some efficiency.

Keys grow on trees

To solve the problem of group key computation, MLS uses binary trees. Let us start with a bit of history to understand the various flavors of "key derivation trees"...

Asynchronous ratcheting trees

<Beware, mathematics ahead.>

First, recall that for two private keys a and b, belonging to A and B, respectively, we classically denote the corresponding public keys with g^a and g^b, and the Diffie-Hellman shared key with g^ab: a and b are integers, and g^a, g^b, g^ab are group elements (i.e. not integers).

We summarize the Diffie-Hellman key exchange below:

A: a, g^a
B: b, g^b
A->B: g^a
B->A: g^b
A: g^ab = (g^b)^a
B: g^ab = (g^a)^b
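The exchange above can be checked in a few lines of Python (with a deliberately tiny prime, purely for illustration):

```python
import secrets

P = 2**64 - 59   # toy prime; real deployments use much larger, vetted groups
G = 2

a = secrets.randbelow(P - 2) + 1            # A's private key
b = secrets.randbelow(P - 2) + 1            # B's private key
A_pub, B_pub = pow(G, a, P), pow(G, b, P)   # g^a and g^b, exchanged in the clear
shared_A = pow(B_pub, a, P)                 # A computes (g^b)^a
shared_B = pow(A_pub, b, P)                 # B computes (g^a)^b = g^ab
```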

Now, g^ab can technically also be called a private key, as it is known only to A and B, and it would make sense to use that key in an asymmetric setting, i.e. compute the corresponding public key so that anyone else can use that public key to encrypt a message that only A and B could decrypt, using g^ab. The problem is that computing g^(g^ab) does not make sense, so we actually need to associate g^ab with an integer i, allowing us to compute the public key g^i.

A, B: i = Integerize(g^ab)
A, B -> World: "contact us using g^i"

Now, back to the trees: in the article [CGCGMM20] introducing Asynchronous Ratcheting Trees (ART), in order to use the above technique, the authors use the i(·) notation for the mapping (i.e. association) from group elements (like g^a) to integers – with a unique association, so, if a and b are different, i(g^a) ≠ i(g^b). We can then see g^ab as a private key shared by A and B with g^(i(g^ab)) being its public counterpart.

</No more mathematics (ok, maybe just a little bit).>

For simplicity, we will use the notation skX (resp. skXY) for a private (or secret) key known to X (resp. X and Y), and use pkX (resp. pkXY) for the public part.

In an ART, the nodes represent keys: at the leaves are the users' keys (each user has their own), and each non-leaf node represents a key that is known (i.e. shared) by all the leaf users below it (children, grand-children, grand-grand-(...)children).

Let us illustrate with an example.

Suppose A is talking to B, C, and D. Her key tree at a given moment in the conversation looks like:

       skABCD
      /      \
  skAB       pkCD
  /  \       /  \
skA  pkB    -    -

Where - denotes a node for which A knows neither the public nor the private key.

  • A knows her secret key skA;
  • A knows B's public key pkB, which she uses to compute their shared DH key skAB;
  • A knows C and D's shared public key pkCD, which she uses (along with skAB) to compute the key shared by all, skABCD.

Using the "integerization" we mentioned above, it is possible to have a tree as big as we want, progressively computing the shared keys all the way to the root.

The depth of the tree is log N where N is the number of participants. When a user updates their private key, only log N nodes must be modified, i.e. those on the path from the user's leaf to the root: A changing her key would update (skA, pkA), (skAB, pkAB), and (skABCD, pkABCD).
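To see the "integerization" at work on the example tree, here is a toy sketch where A, B, and C each recompute the tree key skABCD from their own position. The i(·) mapping is modeled with a hash, and the tiny group parameters are assumptions of ours, for illustration only:

```python
import hashlib
import secrets

P = 2**64 - 59   # toy group, illustration only
G = 2

def integerize(elem: int) -> int:
    # the i(.) mapping from group elements to integers, modeled with a hash
    digest = hashlib.sha256(elem.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (P - 1)

def dh(sk: int, pk: int) -> int:
    return pow(pk, sk, P)

skA = secrets.randbelow(P - 2) + 1; pkA = pow(G, skA, P)
skB = secrets.randbelow(P - 2) + 1; pkB = pow(G, skB, P)
skCD = secrets.randbelow(P - 2) + 1; pkCD = pow(G, skCD, P)  # C and D's shared node

# A climbs her path: leaf -> skAB -> skABCD
skAB_A = integerize(dh(skA, pkB))
root_A = integerize(dh(skAB_A, pkCD))

# B does the same from his own view of the tree
skAB_B = integerize(dh(skB, pkA))
root_B = integerize(dh(skAB_B, pkCD))

# C (and D) only need the PUBLIC key of A and B's parent node
pkAB = pow(G, skAB_A, P)
root_C = integerize(dh(skCD, pkAB))
```

All three computations land on the same root key, each user combining one private key with the public keys on their copath.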

Initialization: to have an asynchronous design, any user who wishes to create a group must be able to start sending encrypted messages to the group even before all users come online. Inspired by the prekeys of Signal, in ART, the creator of the group – say, A – will query the server for prekeys from the other members, performing an initial key exchange such as [X3DH], and using the obtained shared master secrets msAB, msAC, and msAD as the other users' initial secret keys. This means that, at the beginning, A knows every single private key in the tree, but she does not need to store them, and only keeps the keys on the tree as shown above. Users will update their private keys as soon as they come online, and again regularly afterwards. (In other words, keeping the other users' initial private keys would only allow A to decrypt... the messages she sent.)

Message encryption: the shared key skABCD (called the tree key) is used as input to a key derivation function, the first in a chain of keys used to encrypt/decrypt the exchanged messages.

We see that the principle is very close to what is done in Signal: the tree key has the same role as the ratchet key, i.e. it is regularly updated by the users with random (and thus unpredictable) values, starting another chain of encryption keys, which ensures the post-compromise security of the protocol.

TreeKEM

The initial solution used in MLS was a variant of ARTs called TreeKEMs, as described in [BBR18].

To understand how TreeKEMs work, let's define what a Key Encapsulation Mechanism (KEM) is. Suppose A knows B's public key pkB, and B has the corresponding private key skB. A KEM has an encapsulation algorithm Encs and a decapsulation algorithm Decs. We have the following protocol:

  1. A calls Encs(pkB) and obtains a key k and a ciphertext c;
  2. A sends c to B;
  3. B calls Decs(skB,c) and obtains the key k.

Of course, Encs is not deterministic, otherwise every call to Encs with pkB would have the same output.

Here is an example: with (b, g^b) a private-public key pair, we define Encs and Decs as:

  • Encs(g^b): generates a random integer a and returns k = g^ab, c = g^a;
  • Decs(b,c): computes k = c^b, i.e. k = (g^a)^b = g^ab.

We can see that this KEM example is an ephemeral Diffie-Hellman in disguise.
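The DH-based KEM above, sketched in Python with toy parameters (illustration only; function names are ours):

```python
import secrets

P = 2**64 - 59   # toy prime, far too small for real use
G = 2

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)         # (b, g^b)

def encs(pk: int):
    # Encs: fresh randomness for every call, hence non-deterministic
    a = secrets.randbelow(P - 2) + 1
    k = pow(pk, a, P)                # k = g^ab
    c = pow(G, a, P)                 # c = g^a
    return k, c

def decs(sk: int, c: int) -> int:
    return pow(c, sk, P)             # (g^a)^b = g^ab
```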

Let us now have a look at KEMs in trees.

The principle is very similar to the ART presented above, except that the nodes contain hashes instead of DH elements, and only the most recently updated leaf contributes to its ancestors' content (see below).

A global view of the private keys at a given time is as follows, where H is a hash function, and we suppose the most recently updated key was skA, and the one before that was skD.

      H²(skA)
     /       \
 H(skA)    H(skD)
 /    \    /    \
skA  skB  skC   skD

When A updated her key to skA, she sent:

  1. pkA and H(skA) to B, encrypted with pkB;
  2. pk(H(skA)) and H²(skA) = H(H(skA)) to C and D, encrypted with pk(H(skD)).

Here we use the pk(x) notation for the public key corresponding to private key x (to avoid writing g^H(skA), and to follow the notation of [BBR18]).

The encryption key for the group would here be KDF(H²(skA), K), where K is the previous encryption key (or 0 if this is the initialization step).

We see here that the only difference with the ART is that only one "leaf" contributes to the value of the parent (and grand-parents), with a hash (or a hash of a hash, or a hash of a hash of a hash...) and must thus send two values to the other users: their shared parent's value (here, A sends H²(skA)), and the public key of the "highest non-shared node" (here, A sends pk(H(skA))). The resulting encryption key, however, depends on all the previous updates, as it is part of a chain of derivations.
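A's hash chain can be sketched directly (SHA-256 standing in for H, an assumption of ours):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

skA = b"A's fresh leaf secret"
parent = H(skA)     # A and B's shared node
root = H(parent)    # H^2(skA), the whole group's node

# B receives H(skA) (encrypted under pkB) and recomputes the root himself;
# C and D receive H^2(skA) directly (encrypted under pk(H(skD))),
# so they learn the root without ever learning H(skA) or skA.
root_B = H(parent)
```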

Ratchet Tree

The current version of the MLS protocol [MLSP22] uses so-called Ratchet Trees, which are direct "descendants" of both TreeKEMs and ARTs, resulting from the discussions within the working group. In MLS, that specific tree is used to define membership of the group, and produce a value from which other keys (including encryption keys) are derived. See the end of this section, and the next, for a brief overview of these other mechanisms.

Each instance of a ratchet tree is defined, in the same way as an instance of Hybrid Public Key Encryption [HPKE], by a ciphersuite containing:

  • a Key Encapsulation Mechanism;
  • a Key Derivation Function;
  • an Authenticated Encryption with Additional Data (AEAD) scheme.

Somewhat similarly to what is done with ARTs (but in a more complicated manner), for a given user's tree, each node can contain the following values:

  • a private key, if it is on the direct path from the user's leaf to the root;
  • a public key;
  • a list of unmerged leaves, which represents the leaves of users who just joined and are not up to date with the private keys of the nodes above their leaves: this list needs to be kept so that users with unmerged leaves can still receive messages, e.g. (abusively re-using the ART notation) if D does not yet know the shared private key skCD, user B cannot use pkCD to send the message to both C and D and will have to use pkD until D has a merged leaf;
  • a hash with information about the node's parent (changes when the node changes);
  • a credential issued by an authentication system, if the node is a leaf, that authenticates the leaf's owner.

Now, to update the tree with fresh randomness (a principle common to ART and TreeKEM), users proceed as follows. Let sec be that fresh randomness generated by A.

The simplest way to understand how the various mechanisms work is actually to start big and then move on to the details, which is what we will do below. Beware, the logic only comes together towards the very end, so, hang tight.

First, there is a chain of derivation for elements referred to as path secrets. We write this chain as a list PS[0], …, PS[n], computed as such:

sec -KDFp-> PS[0] -KDFp-> PS[1] … -KDFp-> PS[n]

where KDFp represents the KDF with keyword "path" as its label input (i.e. PS[0] = KDF(sec, "path")).

Now, for each node on the path from the leaf of user A to the root, the node's secret sec_i (where i represents the position of the node with regards to the leaf, i.e. the leaf's parent has position 0 and the root has position n) is computed with:

sec_i = KDF(PS[i], "node")

where "node" is the label input of the KDF in the same way that "path" was used above. From the newly computed sec_i, we can derive a key pair with the KEM's specific function:

pki, ski = KEM.KeyPair(sec_i)

which will represent the node's new private and public keys.

For her leaf, A will follow the same idea:

node_sec = KDF(sec, "node")
pkA, skA = KEM.KeyPair(node_sec)

A must delete the secret sec immediately after.

To summarize the relations between the values:

sec   -KDFn->  node_sec -KKP->  skA, pkA
 |
KDFp
 |
 V
PS[0] -KDFn->   sec_0   -KKP->  sk0, pk0
 |
KDFp
 |
 V
PS[1] -KDFn->   sec_1   -KKP->  sk1, pk1
 |
...
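The whole derivation summarized above can be condensed into a short sketch. SHA-256 with a prefixed label stands in for MLS's labeled KDF, and the derive_path helper is our own; the final KEM.KeyPair step is left out:

```python
import hashlib

def kdf(secret: bytes, label: str) -> bytes:
    # stand-in for the labeled KDF (KDFp = kdf(., "path"), KDFn = kdf(., "node"))
    return hashlib.sha256(label.encode() + secret).digest()

def derive_path(sec: bytes, n: int):
    node_sec = kdf(sec, "node")                  # A's own leaf
    ps, s = [], sec
    for _ in range(n + 1):                       # PS[0] .. PS[n]
        s = kdf(s, "path")
        ps.append(s)
    node_secrets = [kdf(p, "node") for p in ps]  # sec_0 .. sec_n
    commit_secret = kdf(ps[-1], "path")          # "PS[n+1]", fed to the Key Schedule
    return node_sec, ps, node_secrets, commit_secret
```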

A will now have an updated ratchet tree looking like:

           sk1, pk1
         /          \
    sk0, pk0        pkCD
   /        \      /    \
skA, pkA    pkB   -      -

Her tree is now one step ahead of the other users', so A must broadcast the information relevant to each user: for a given node, the users whose leaf sits directly below it need to know the corresponding new path secret, while the other users only need the public key. In our example, for A and B's parent node, B needs to know PS[0] to compute all the other keys, but C and D only need pk0.

Encryption keys are then derived via the Key Schedule with the value referred to as a commit_secret, equal to what we might write as PS[n+1], i.e. KDFp(PS[n]), the derivation of the path secret at the root. The derivation process of the encryption keys is not fully detailed here (see below), but it can be found in the RFC of the protocol [MLSP22].

A period of time between two tree updates is called an epoch. In terms of security, this protocol provides forward secrecy between two epochs, as secret/private values are deleted once they are no longer needed, and secrets from two epochs are not linked. Within an epoch, the forward secrecy is brought thanks to the inner ratcheting of encryption keys. Finally, we also reach post-compromise security between two epochs, as new random secrets are used to update the tree.

Other mechanisms

Our aim here is to give an overview of other points of interest without going into details in the same way we did above. Two documents should be considered:

Protocol Document
The official IETF draft for the actual MLS protocol [MLSP22], describing the group key agreement and related cryptographic elements.
Architecture Document
The official IETF draft for the architecture of MLS [MLSA22], which can be seen as an "extension" of the previous document, focusing on the infrastructure and security goals (i.e. this document is the form where the previous one is the substance).

Let us first reuse the ASCII drawing found in the protocol RFC [MLSP22] summarizing the relationships between the "areas of responsibility":

                          .-    ...    -.
                         |               |
                         |       |       |
                         |       |       | Key Schedule
                         |       V       |
                         |  epoch_secret |
.                        |       |       |                             .
|\ Ratchet               |       |       |                     Secret /|
| \ Tree                 |       |       |                      Tree / |
|  \                     |       |       |                          /  |
|   \                    |       V       |                         /   |
|    +--> commit_secret --> epoch_secret --> encryption_secret -->+    |
|   /                    |       |       |                         \   |
|  /                     |       |       |                          \  |
| /                      |       |       |                           \ |
|/                       |       |       |                            \|
'                        |       V       |                             '
                         |  epoch_secret |
                         |       |       |
                         |       |       |
                         |       V       |
                         |               |
                          '-    ...    -'

We have detailed how a ratchet tree is populated, and what happens when a member updates their leaf secret, but we also need to talk about adding and deleting members, as well as the last two "areas" of the drawing, i.e. the Key Schedule and the Secret Tree.

Key Package

Recall that in Signal, users upload a bundle of "pre-keys" to the server, in order to be added to a conversation asynchronously. In MLS, we find a similar principle in the form of Key Packages, each containing:

  • the version of MLS and ciphersuites supported by the user/client;
  • a public key, for other users to encrypt and send a "Welcome" message;
  • the initial content of the leaf node.

Each package is only used once, unless a user has depleted their stock, in which case the last surviving package is reused until a new one is added.

Group creation and member management

When creating a group, user A will first initialize the group with only herself (the tree is then a single leaf), and afterwards will query the "directory" for the Key Packages (see above) of each user she wishes to invite to the group. A will then send an Add proposal and commit (see section 13 of the RFC [MLSP22] for more details on proposals and commits) to the channel for each member, and each addition is broadcast to all current users (so first only to A, then to A and B, then to A, B, and C, etc.).

Commits are meant to apply proposals, they start a new epoch and provide fresh entropy to the epoch secret.

Add

To add a member to the group, at creation or afterwards, the user's key package is used in the Ratchet Tree: the leaf assigned to the new user is either the leftmost empty leaf, if there is one, or a new leaf to the right of the others. The user receives all the necessary information via a "Welcome" message encrypted with the public key in their package, and other users receive the information via the channel, to update their trees. New users have no way of computing or knowing the previous keys.

Note that updating the key material in the leaf of a user works the same way except it changes an already existing leaf instead of a blank one.

Remove

The member to be removed is identified by their leaf's index in the proposal and commit messages, so that users can delete the leaf from their tree and start a new epoch for which the deleted user knows no key.

If the leaf that is removed creates a blank subtree, that subtree is cut from the tree.

Encryption keys

Ratchet Trees are not the only trees in the MLS forest: the protocol also uses a so-called Secret Tree, derived from the so-called encryption_secret, itself a derivation through the Key Schedule (see below) of the commit_secret defined above (KDFp(PS[n])). This tree is defined from the root down to the leaves (as opposed to what is done in the Ratchet Tree), and has as many leaves as there are members in the group (same as the Ratchet Tree).

Each group member is assigned a leaf of the Secret Tree (in the same order as the leaves of the Ratchet Tree), and the secret of each leaf (called ratchet_secret, somewhat confusingly) is the first element of a chain of symmetric ratchets producing AEAD encryption keys and nonces.

Keys and nonces are only used once and must be deleted as soon as they are no longer needed.
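One leaf's symmetric ratchet can be sketched as follows (the labels and output sizes are our own assumptions, not the RFC's exact derivation):

```python
import hashlib

def ratchet_step(secret: bytes):
    # derive one-time AEAD material, then move the ratchet forward
    key   = hashlib.sha256(b"key"   + secret).digest()[:16]   # AEAD key
    nonce = hashlib.sha256(b"nonce" + secret).digest()[:12]   # AEAD nonce
    nxt   = hashlib.sha256(b"next"  + secret).digest()        # next ratchet_secret
    return key, nonce, nxt

secret0 = b"leaf ratchet_secret"
k0, n0, secret1 = ratchet_step(secret0)   # generation 0; secret0 is then deleted
k1, n1, secret2 = ratchet_step(secret1)   # generation 1; secret1 is then deleted
```

Deleting each secret after stepping forward is what gives forward secrecy within an epoch: an attacker learning secret2 cannot walk the hash chain backwards to k0 or k1.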

Key schedule

To link the Ratchet Tree and the Secret Tree of an epoch, MLS uses a Key Schedule, as illustrated in the figure above.

Basically, the Key Schedule mixes various keys to produce other keys (obviously): for epoch n, we start with init_secret_[n-1] and add, derive, expand, extract (...) several values to end up with init_secret_[n], ready to go through the schedule again for the next epoch.

The commit_secret mentioned above is one of those added values, and the encryption_secret (also mentioned above) is one of the outputs.
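Very loosely, one round of the schedule can be pictured as follows (our simplification with SHA-256, not the RFC's actual extract/expand steps):

```python
import hashlib

def kdf(secret: bytes, label: str) -> bytes:
    return hashlib.sha256(label.encode() + secret).digest()

def next_epoch(init_secret: bytes, commit_secret: bytes):
    # mix the previous epoch's init_secret with the commit's fresh entropy
    epoch_secret = hashlib.sha256(init_secret + commit_secret).digest()
    encryption_secret = kdf(epoch_secret, "encryption")  # feeds the Secret Tree
    new_init = kdf(epoch_secret, "init")                 # carried to the next epoch
    return encryption_secret, new_init

enc1, init1 = next_epoch(bytes(32), b"commit from A")
enc2, init2 = next_epoch(init1, b"commit from B")   # next epoch chains on init1
```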

As usual, a complete description of the Key Schedule is given in section 9 of the RFC [MLSP22].

Going further

BlackHat presentation
A presentation done by Raphael Robert (previously head of Wire), available on YouTube (the slides are also available), at BlackHat USA 2019. This presentation is a very high-level overview of MLS (at the time), its objectives, how it works, etc.
MLS mailing list
The mailing list contains both updates and discussions. The latter are especially interesting to understand the design choices made in the protocol.
Podcast
An episode of Cryptography FM featuring Raphael Robert discusses the generic problem of scaling secure messaging protocols to the group setting, as well as the subsequent creation of MLS.
PhD Thesis
The entire manuscript of Katriel Cohn-Gordon is a worthy read on secure messaging in general, containing the first formal proof of Signal (chapter 4) and interesting discussions on ART and group messaging (chapter 5). The manuscript is highly accessible to all and does not require intricate knowledge of cryptography.
HPKE
We briefly mentioned Hybrid Public-Key Encryption above as a building block, but the subject is interesting in and of itself, with its story being told in a very accessible, reasonably detailed, and well-written blog post by Benjamin Lipp, one of the authors of the RFC, along with a TL;DR by Franziskus Riefer.

A note on post-quantum schemes

The MLS drafts do not consider quantum attackers (except in the Pre-Shared Keys section). It is however important to address the problem of a quantum adversary for group conversations, the efficiency of the protocol in limiting bandwidth consumption being one of the most important problems to solve: indeed, [HKPPW21] estimates that a group messaging protocol

instantiating TreeKEM with Classic McEliece in a group of 256 members will deplete a 1 GiB mobile plan once each user has sent two commit messages.

In [HKPPW21], the authors propose (among other things) the use of some appropriate lattice-based (based on FrodoKEM, Kyber and NTRU LPRime) and isogeny-based primitives; they also specifically tweak some of them for the group messaging setting, to decrease data consumption as much as possible. The model they use is however not the MLS one, but the Chained mKEM one [BBN19]. After this work, two new lattice-based post-quantum key exchange mechanisms were proposed, reaching a smaller bandwidth usage than the current lattice-based NIST candidates: BAT [FKPY22] and CTRU [LFZZ]. It is however not obvious that these two primitives can be adapted to fit in the framework of [HKPPW21], even though they are NTRU-based. The main difference between BAT and CTRU regarding the criteria for group messaging is that BAT achieves the smallest bandwidth consumption at the cost of a less tweakable scheme (especially with regards to reaching intermediate security levels, which does not matter in our setting) and a more expensive key generation.

Working on quantum-safe mechanisms not only allows one to defend against quantum computers, but may provide other benefits: in [KKPP20], the authors show that multi-recipient key encapsulation mechanisms combined with MLS may provide bandwidth savings for large groups, where Chained mKEM may be a less efficient protocol. The research area of injecting post-quantum cryptography into group messaging is quite active, and we may expect some improvements in the coming years.

For more information about the two cited articles on quantum-secure group protocols, you may also refer to Thomas Prest's website, especially the Talk section (for example, these slides).

Conclusion

The MLS working group is currently in last call, meaning it is giving the community a final opportunity to review the RFCs before moving from draft to standard.

We can hope that once MLS is a standard, more implementations will follow, not just as standalones but also embedded in other applications: the attention given to being as generic and adaptable as possible makes MLS the best choice for newly created (messaging) applications when deciding on a protocol. For pre-existing apps, the transition might be more difficult, but not entirely impossible.

Acknowledgments

These articles are inspired from a talk we gave at Quarkslab's Quarks in the Shell event in 2021 with Marion and Charlie, and we thank both of them for the many discussions we had for the preparation of this talk. We also thank Adrien, Iván, Mahé and Philippe for the helpful reviews.

Bibliography

[BBN19] "Formal Models and Verified Protocols for Group Messaging: Attacks and Proofs for IETF MLS", K. Bhargavan, B. Beurdouche and P. Naldurg, December 2019. https://hal.inria.fr/hal-02425229
[BBR18] "TreeKEM: Asynchronous Decentralized Key Management for Large Dynamic Groups. A Protocol Proposal for Messaging Layer Security (MLS)", K. Bhargavan, R. Barnes and E. Rescorla. https://hal.inria.fr/hal-02425247
[CGCGMM20] "On Ends-to-Ends Encryption: Asynchronous Group Messaging with Strong Security Guarantees", K. Cohn-Gordon, C. Cremers, L. Garratt, J. Millican and K. Milner. https://eprint.iacr.org/2017/666
[FKPY22] "BAT: Small and Fast KEM over NTRU Lattices", P.-A. Fouque, P. Kirchner, T. Pornin and Y. Yu, TCHES 2022. https://eprint.iacr.org/2022/031
[HKPPW21] "A Concrete Treatment of Efficient Continuous Group Key Agreement via Multi-Recipient PKEs", K. Hashimoto, S. Katsumata, E. Postlethwaite, T. Prest and B. Westerbaan, ACM CCS 2021. https://eprint.iacr.org/2021/1407
[HPKE] "Hybrid Public Key Encryption", R. Barnes, K. Bhargavan, B. Lipp and C. Wood, IRTF, RFC 9180, February 2022. https://www.ietf.org/rfc/rfc9180.html
[KKPP20] "Scalable Ciphertext Compression Techniques for Post-Quantum KEMs and their Applications", S. Katsumata, K. Kwiatkowski, F. Pintore and T. Prest, Asiacrypt 2020. https://eprint.iacr.org/2020/1107
[LFZZ] "Compact and Efficient NTRU-based KEM with Scalable Ciphertext Compression", Z. Liang, B. Fang, J. Zheng and Y. Zhao. https://eprint.iacr.org/2022/579
[MLSA22] "The Messaging Layer Security (MLS) Architecture", B. Beurdouche, E. Rescorla, E. Omara, S. Inguva, A. Kwon and A. Duric, 15 June 2022. https://messaginglayersecurity.rocks/mls-architecture/draft-ietf-mls-architecture.html
[MLSP22] "The Messaging Layer Security (MLS) Protocol", R. Barnes, B. Beurdouche, R. Robert, J. Millican, E. Omara and K. Cohn-Gordon, 15 June 2022. https://messaginglayersecurity.rocks/mls-protocol/draft-ietf-mls-protocol.html
[X3DH] "The X3DH Key Agreement Protocol", M. Marlinspike and T. Perrin. https://signal.org/docs/specifications/x3dh/