This year’s Real World Cryptography Conference recently took place in Toronto, Canada. As usual, this conference, organized by the IACR, showcased recent academic results and industry perspectives on current cryptography topics over three days of presentations. A number of co-located events also took place before and after the conference, including the FHE.org Conference, the Real World Post-Quantum Cryptography (RWPQC) Workshop, and the High Assurance Crypto Software (HACS) Workshop.
A number of NCC Group’s Cryptography Services team members attended the conference and several of the workshops this year. Some of our favorite talks and takeaways are summarized in this post.
Post-Quantum Cryptography
At this year’s Real World Cryptography conference, post-quantum cryptography was strongly represented. With two PQC sessions during the main conference itself, as well as the co-located RWPQC event which took place on the Sunday before the main conference, it was exciting to see so much engagement on the PQC front during our trip to Toronto!
Following the blueprint from last year’s event, the RWPQC workshop opened with an update on the NIST PQC competition, reiterating its current status and NIST’s goal of finalizing the FIPS 203 and 204 standards from their current drafts within the next few months, to be followed by an initial draft of the Falcon specification under the name FN-DSA. This was followed by updates from other standardization bodies, including ETSI, BSI, NCSC, and the IETF, which are all working towards providing PQC guidance in their respective areas of influence ahead of the final FIPS standards. The MITRE and Linux Foundation PQC migration consortia also gave updates during the workshop. As part of these talks, many standards bodies discussed their approach to the migration and whether or not they plan to mandate the use of hybrid algorithms, with approaches ranging from required hybridization to softer recommendations. Additionally, a number of the talks noted that while the use of hybrid algorithms may be helpful in the short term, the community should start considering eventual plans to migrate to a single set of algorithms post-hybridization, citing concerns about increased complexity and the combinatorial expansion of algorithm combinations as new algorithms are introduced in the future.
As a counterpart to the presentations by standardization bodies, the RWPQC program included real-world updates on the progress of the PQC migration at various companies, including Signal, Amazon, Google, Meta, and evolutionQ. All talks provided valuable insights into the challenges of migrating to PQC in their respective settings, both those already overcome and those yet to come. Finally, a few more academic talks on lattice cryptanalysis and implementation footguns rounded off the program. We’ll do a slightly deeper dive into some of our favorite talks!
Lattice Cryptanalysis Talks
Martin Albrecht and John Schanck presented two complementary discussions on topics in lattice cryptanalysis. In the first presentation, Martin Albrecht did a deep dive into the analysis of the current best-known attacks against lattice cryptosystems, known as dual attacks, starting with a brief history of the primal and dual attacks and highlighting some recent works that questioned the accuracy of common heuristics, resulting in improved analyses of these dual attacks. Martin also noted that there doesn’t seem to be a clear reason why the dual attacks appear to perform better than the primal attacks, remarking that “it seems morally wrong that the dual attack would beat the primal attack”, since it introduces additional transformations over the more direct approach. Finally, the presentation concluded with a discussion of recent lattice attacks leveraging machine learning models, with Martin noting that in his opinion there is currently no reason to believe that ML can threaten lattice cryptosystems.
John Schanck’s following talk focused on the “real cost” of the best-known attacks. The NIST security levels I, III and V aim to guide protocol designers to select parameters which offer security guarantees matching the cost of the best-known attacks against AES-128, 192 and 256 respectively. However, unlike attacks on AES, the dual lattice attack has an incredibly expensive and memory-hungry sieving step. To make progress on an attack against Kyber and related schemes, one must perform a huge amount of computation before any progress is made on reducing the key space (compare this to attacking AES, where one can immediately start guessing keys). The talk featured fun comparisons — a Moon’s weight of silicon would be needed to fabricate enough memory for the naive implementation of the dual attack — and really demonstrated how challenging it is to align the real cost of attacking different cryptographic protocols when the attacks themselves are structured so differently at the algorithmic level. The take-home message from Schanck’s talk was that when memory cost is taken into account, Kyber-768 should be enough for everyone.
Implementation Footguns for Post-Quantum Cryptography
Nadia Heninger presented a very detailed discussion of potential pitfalls she foresees for post-quantum implementations, primarily based on her experiences with implementations of classical cryptography. She noted that many common classes of implementation pitfalls in classical cryptography still apply in PQC settings, including RNG issues; issues with sampling or the uniformity of distributions (which may be even trickier in PQC settings, as many lattice schemes require sampling from multiple distributions); API misuse; and missing validation checks, which can be tricky to enforce via tests. This talk resonated with us, as we have already started seeing some of these issues in the post-quantum projects that we have reviewed so far. Finally, she noted that the increased implementation complexity of PQC schemes may be a blessing in disguise: the more complicated an algorithm seems, the less likely people are to implement it themselves rather than rely on existing implementations, which may end up helping avoid many of these potential issues at scale!
Making Signal Messenger Post Quantum / Making Encrypted Messaging Post Quantum
Rolfe Schmidt gave a fantastic talk on the upgrade to Signal messenger that introduces post-quantum cryptography into the key-agreement stage of the protocol, now known as PQXDH. The talk motivated this change as a protection against “harvest-now, decrypt later” attacks, with a design philosophy of changing only what strictly needs to be changed to achieve protection against a quantum adversary. Although the key agreement is now a hybrid protocol using post-quantum algorithms, the ratcheting algorithm is still classical-only, so the protocol’s post-quantum guarantees do not yet match its classical guarantees. Making the ratchet post-quantum secure is a work in progress for the Signal team, who hope to ensure that messaging performance is not affected by the inclusion of Kyber in the ratcheting mechanism. The design documentation is now available: PQXDH Specification.
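To make the hybrid idea concrete, the sketch below shows the general pattern of combining a classical Diffie-Hellman shared secret with a post-quantum KEM shared secret through a single KDF call, so that the derived key remains secure as long as either component is unbroken. This is a minimal illustration, not the actual PQXDH derivation (which mixes several X25519 outputs with a Kyber shared secret as laid out in the specification); the `pq_kem_encapsulate` stand-in and the HKDF labels are assumptions made for the example.

```python
# A minimal sketch of the hybrid idea behind PQXDH: combine a classical
# DH shared secret with a post-quantum KEM shared secret through a KDF,
# so the session key stays secure if *either* component remains unbroken.
# This is NOT the actual PQXDH derivation; pq_kem_encapsulate below is a
# hypothetical stand-in for an ML-KEM/Kyber encapsulation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def pq_kem_encapsulate(peer_pq_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical stand-in: returns (shared secret, KEM ciphertext)."""
    return os.urandom(32), os.urandom(1088)

# Classical component: X25519 key agreement.
alice_dh = X25519PrivateKey.generate()
bob_dh = X25519PrivateKey.generate()
dh_secret = alice_dh.exchange(bob_dh.public_key())

# Post-quantum component: KEM encapsulation against Bob's PQ public key.
kem_secret, kem_ciphertext = pq_kem_encapsulate(b"bob-pq-public-key")

# Hybrid combiner: both secrets are fed into a single KDF invocation.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example-hybrid-kdf",
).derive(dh_secret + kem_secret)
```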
In addition to the design and implementation of PQXDH, Signal collaborated with academia to produce a formal verification of the PQXDH protocol using both ProVerif and CryptoVerif. Signal explained that through the process of formally verifying the protocol, they not only gained confidence in the changes, but the verification also highlighted parts of the specification which had been under-described and could have led to attacks if misinterpreted. The process thus not only supported the validity of the design but also acted as a guide towards a more robust description of PQXDH for developers in the future.
Conclusion
Overall, it’s very exciting to see so much movement in real-world post-quantum applications. We are looking forward to future PQC updates at RWC, RWPQC and elsewhere, and to reviewing the PQC projects that come our way!
– Giacomo Pope and Elena Bakos Lang
Key and Certificate Transparency
Key and certificate transparency was a hot topic at this year’s conference. The Levchin Prize was awarded to the team at Google responsible for “creating and deploying Certificate Transparency at scale”. In addition to the public recognition of the work that team pioneered, three talks covered different aspects of modern transparency solutions.
Invited talk: Key transparency: introduction, recent results, and open problems
The first talk by Melissa Chase from Microsoft Research delved into recent results and open problems in Key Transparency. In modern encrypted messaging deployments, a service provider is generally responsible for distributing users’ public keys. However, what if a man-in-the-middle attacker were to intercept (and meddle with) the public key of the recipient that a sender is trying to establish a secure communication with? Or worse, what if the server were to get compromised? In an end-to-end encrypted messaging setting, key transparency aims to solve this problem of trusted public key distribution which is often glossed over in academic works.
Until recently, the industry solution to the key transparency question was some form of out-of-band verification, in which users can display the fingerprint corresponding to the chat’s encryption key and compare it with one another. Subsequent deployments have made comparing these traditionally long numerical codes easier by displaying a QR code that can be scanned and verified when both users are physically close to each other. These solutions can be somewhat tedious for users, and the industry has started to deploy large-scale, automatic key transparency solutions based on relatively recent academic works such as CONIKS.
In some of these modern key transparency deployments, service providers provide a publicly accessible key directory which keeps track of users’ public keys. Users can then ensure that the key they hold for a given contact is consistent with the key tracked in the latest version of the online key directory. However, granting people access to public key repositories needs to be done while still maintaining user privacy. Indeed, the deployment of such systems should not make it easier for anyone to be able to track individual users’ actions, for example by figuring out when they refresh their keys (if they get a new device for instance) or by allowing attackers to find out which users are participating in the system by identifying personal information (such as phone numbers or email addresses) in the key directory.
In order to realize the goals outlined above, key transparency deployments make use of a few interesting cryptographic primitives. Service providers generally publish the key directory together with a commitment to that directory. In practice, this is usually achieved with a Sparse Merkle Tree, where the commitment is the root of that Merkle Tree. In early academic proposals, the server would post a commitment to the current key directory at regular intervals. Newer designs (such as SEEMless) propose that the server publish commitments to incremental changes to the key directory instead, lowering the computational effort of auditing the key transparency tree (since the entire tree does not have to be recomputed and verified). To safeguard the privacy of users, modern key transparency deployments use Verifiable Random Functions (VRFs), which can be thought of as the public key variant of a hash function: only the private key owner may compute the hash output and its associated proof, but anyone can use the associated public key to verify that the output was calculated correctly. If the leaves of the Merkle tree were computed directly from the identifying information of users, for example by simply hashing some form of identifier, attackers could easily collect information about users. Using a VRF construction conceals that information by essentially randomizing the leaf positions in the Merkle tree. Melissa rounded off the literature review portion of her talk by presenting OPTIKS, a performant new key transparency solution which focuses on scalability, and to which she contributed.
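As a rough illustration of how these pieces fit together, the sketch below derives a pseudo-random leaf position for a user and a hiding commitment to their public key. The HMAC call is only a keyed stand-in for a real VRF (which would also produce a publicly verifiable proof), and all names and parameters are hypothetical.

```python
# Sketch of how a key directory could place entries in a sparse Merkle
# tree. A real deployment would use an actual VRF; HMAC is used here only
# as a keyed stand-in to illustrate that leaf positions look random to
# anyone who does not hold the VRF secret key.
import hashlib
import hmac

TREE_DEPTH = 256  # leaf index space of a sparse Merkle tree

def leaf_index(vrf_secret_key: bytes, user_id: str, key_version: int) -> int:
    """Pseudo-random leaf position derived from the user's identifier."""
    label = f"{user_id}|{key_version}".encode()
    digest = hmac.new(vrf_secret_key, label, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (1 << TREE_DEPTH)

def leaf_value(user_public_key: bytes, commitment_randomness: bytes) -> bytes:
    """Hiding commitment to the user's public key stored at that leaf."""
    return hashlib.sha256(commitment_randomness + user_public_key).digest()

idx = leaf_index(b"server-vrf-secret", "alice@example.com", 3)
val = leaf_value(b"alice-public-key-bytes", b"fresh-randomness")
```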
While many of the technical aspects of key transparency seem to be well ironed out in theory, there are still a number of open questions and practical aspects that require further engineering effort. To start, how should the bulletin board be instantiated in practice, i.e. the publicly accessible key directory that users must be able to access efficiently and consistently? A second crucial and often overlooked point is that of auditors. One common goal of key transparency deployments is to allow auditors to validate the consistency of the key directory. But who are these auditors in practice, and what incentives do they have for performing costly validation work? And if they were to identify any wrongdoing, who would they even report such issues to? A third open question Melissa raised was around the security guarantees of such systems and whether stronger security notions could be obtained. For example, in current schemes, users will detect if a service provider maliciously replaces a user’s key, but they cannot prevent it.
WhatsApp Key Transparency
Later that day, Kevin Lewi and Sean Lawlor presented WhatsApp’s key transparency solution. Recent updates to WhatsApp added a feature to automatically validate users’ public keys based on a key transparency deployment following many of the concepts presented above. Out-of-band verification was already available to chat users, but automatic public key verification is a recent addition. Servers now publish a commitment to the public key database and, supported by UI updates in the app, the validity of a contact’s key is automatically checked when users access the “Encryption” menu for that contact.
The presentation explored the different technical aspects this deployment required, such as the infrastructure challenges of supporting these updates and the frequency at which they need to be published. The speakers then presented some of the underlying cryptographic constructions used by the deployment. The system uses Sparse Merkle trees and VRFs in a fashion similar to SEEMless, and publishes incremental updates to the key transparency tree in the form of append-only proofs, which are roughly 200 MB each and are published at approximately 5-minute intervals.
Kevin and Sean concluded their presentation by advertising the release of their implementation of the auditable key directory (accessible at https://github.com/facebook/akd), which is what WhatsApp uses in production for their key transparency deployment and which can also be used to verify the consistency proofs by external auditors. Members of NCC Group’s Cryptography Services team reviewed the implementation a few months before the conference; the public report can be found on NCC’s research platform: Public Report – WhatsApp Auditable Key Directory (AKD) Implementation Review.
Modern transparency logs
Finally, on the last day of the conference, Filippo Valsorda gave a talk on Modern Transparency Logs. Drawing parallels with key transparency solutions, Filippo kicked off his talk by framing transparency logs as a reusable primitive, a magic global append-only list of entries essentially defined by three fundamental questions: what are the entries, who can add them, and who monitors them? Different transparency solutions (such as the Go checksum database, which Filippo used repeatedly as an example throughout his presentation) are ultimately defined by the answers to these questions.
When building transparency log solutions, a fundamental type of attack that must be prevented is presenting different views of the log to different users, known as a split-view attack. In a key transparency deployment, for example, one could imagine a compromised (or rogue) server advertising a different public key for a target victim. There are a few ways to counter split-view attacks. The first is to ensure local consistency (for example with an append-only log); the second is peer-to-peer gossip, where peers communicate amongst themselves to ensure they are being served the same view of the system; and the third is witness cosigning. Witnesses are lightweight, third-party entities responsible for verifying consistency proofs between consecutive Merkle tree roots and cosigning the new tree head. Given a network of witnesses, more complex policies can be developed, such as requiring a threshold of M-out-of-N witnesses to cosign before a tree head is considered valid.
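The sketch below shows what such an M-out-of-N cosigning check could look like on a client: a tree head is accepted only if enough configured witnesses have produced a valid signature over it. The witness naming, Ed25519 signatures, and message encoding are assumptions made for this illustration rather than any particular deployment’s policy.

```python
# Minimal sketch of an M-out-of-N witness cosigning policy check: accept
# a new tree head only if at least `threshold` of the configured
# witnesses have produced a valid signature over it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def tree_head_accepted(
    tree_head: bytes,                        # serialized root hash + tree size
    cosignatures: dict[str, bytes],          # witness name -> signature
    witness_keys: dict[str, Ed25519PublicKey],
    threshold: int,
) -> bool:
    valid = 0
    for name, public_key in witness_keys.items():
        signature = cosignatures.get(name)
        if signature is None:
            continue  # this witness has not cosigned yet
        try:
            public_key.verify(signature, tree_head)
            valid += 1
        except InvalidSignature:
            pass  # ignore bad or mismatched cosignatures
    return valid >= threshold
```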
Filippo then proceeded to advertise a number of specifications and work-in-progress items to support modern transparency log deployments. The first is the checkpoint format specification, which is used to interoperate with the witness ecosystem. Checkpoints are essentially signed notes precisely formatted for use in transparency log applications, containing the origin of the checkpoint, the tree size and the root hash, plus a number of potential co-signatures on that root hash. Recognizing that a checkpoint coupled with an inclusion proof is everything a client needs to verify log inclusion offline, Filippo then introduced the concept of “spicy signatures” (🌶️), which are offline-verifiable proofs of inclusion in a transparency log. He then concluded his talk by presenting a lightweight CLI tool and showing how spicy signatures can be used efficiently in existing deployments, for example by bringing transparency to the Debian package ecosystem in only a few hours.
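To give a feel for the checkpoint idea, here is a small sketch that formats a checkpoint body as described in the talk: an origin line, the tree size, and the base64-encoded root hash, to which witness cosignatures are then appended using the signed-note format. The field values are made up, and the authoritative definition is the checkpoint specification itself.

```python
# Rough illustration of a checkpoint body: origin, tree size in decimal,
# and the base64-encoded root hash. Cosignatures would be appended per
# the signed-note format; values here are invented for the example.
import base64
import hashlib

def format_checkpoint_body(origin: str, tree_size: int, root_hash: bytes) -> str:
    return "\n".join([
        origin,                                       # log identity
        str(tree_size),                               # number of entries in the log
        base64.b64encode(root_hash).decode("ascii"),  # Merkle tree root hash
    ]) + "\n"

example_root = hashlib.sha256(b"example tree head").digest()
print(format_checkpoint_body("example.org/log", 123456, example_root))
```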
– Paul Bottinelli
Symmetric Encryption
This year’s symmetric encryption session reinforced the motivation for modernizing our security requirements and design philosophy for symmetric primitives and modes of operation, based on lessons learned and changing requirements over the past 20 years.
Building the Next Generation of AEAD
The symmetric cryptography session was opened by Sanketh Menda, who closed out last year’s event with a presentation on “context-committing” AEADs (authenticated encryption with associated data), which acknowledged the need for standardized constructions that commit to the complete “context” of an AEAD (e.g., the key and nonce). His update this year, “Building the Next Generation of AEAD“, presented a broader set of goals:
- We sometimes need a fast approach for lightweight devices;
- We sometimes need a suitable approach for cloud-scale data;
- We sometimes need nonce-misuse resistance;
- We sometimes need a nonce-hiding scheme;
- And as established last time, we sometimes need context commitment.
And is there one ideal scheme to rule them all? Of course not… However, there may be a new approach to designing a family of schemes that facilitates safer use. To this end, a “flexible AEAD” construction is proposed which presents an implementer with a single set of binary choices corresponding to various security properties, thereby allowing a developer to express their intent rather than choose and compose various modes of operation. Sanketh then presented a series of primitives that can be composed in standard ways to achieve these various security goals.
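To illustrate the “express intent via flags” idea, here is a speculative sketch in which a caller states the properties they need and the library picks a composition. The flag names, the AES-GCM/AES-GCM-SIV base modes, and the generic keyed-hash commitment tag are all assumptions for the sake of the example, not the construction proposed in the talk.

```python
# Speculative sketch: the caller expresses intent via boolean properties
# and the library selects a composition. Not the talk's construction.
import hashlib
import hmac
import os
from dataclasses import dataclass
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, AESGCMSIV

@dataclass(frozen=True)
class AEADIntent:
    nonce_misuse_resistant: bool = False
    context_committing: bool = False

def seal(intent: AEADIntent, key: bytes, nonce: bytes, aad: bytes, msg: bytes) -> bytes:
    # Pick a base mode from the caller's intent (AESGCMSIV availability
    # depends on the OpenSSL backend).
    base = AESGCMSIV(key) if intent.nonce_misuse_resistant else AESGCM(key)
    ct = base.encrypt(nonce, msg, aad)
    if intent.context_committing:
        # Generic add-on: bind the (key, nonce) context with a keyed hash
        # prepended to the ciphertext, checked again on decryption.
        tag = hmac.new(key, b"commit" + nonce, hashlib.sha256).digest()
        ct = tag + ct
    return ct

key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
blob = seal(AEADIntent(nonce_misuse_resistant=True, context_committing=True),
            key, nonce, b"header", b"hello")
```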
With two excellent back-to-back presentations on the topic, I’m hoping we’ll get to hear a progress update from Sanketh again next year.
What’s wrong with Poly1305?
Jan Gilcher and Jérôme Govinden followed up with a presentation looking back on the development and deployment of Poly1305, asking a fundamental question: “Given today’s advancements and applications, would we still converge to this same design?”. This is initially motivated by the observation that Poly1305 sacrifices a degree of security in favor of speed on 32-bit platforms using optimizations in the floating-point unit, whereas most modern platforms are 64-bit and leverage the arithmetic logic unit for optimized Poly1305 computations. So how would we build and optimize a Poly1305-like construction on today’s hardware?
Much like the preceding talk, the authors considered a modular construction for a family of polynomial-based hashes, from which Poly1305 and other similar schemes can be instantiated based on a set of input parameters. This allows for the efficient testing and comparison of a broad family of implementations, which can be tuned to trade off security level and speed on a given platform. While such an approach does not outperform a hand-optimized implementation of a specific function, it achieves impressive results given the flexibility it provides.
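For readers unfamiliar with this family, the sketch below shows the basic polynomial-evaluation hash that Poly1305 instantiates: the message is split into blocks, each block becomes a coefficient, and the polynomial is evaluated at a secret point r modulo a prime. The parameters mirror Poly1305’s choices (prime 2^130 - 5, 16-byte blocks, a final pad s), but this is a didactic sketch rather than the authors’ parameterized construction, and details such as key clamping are omitted.

```python
# Didactic sketch of a polynomial-evaluation hash in the Poly1305 family:
# message blocks become coefficients, evaluated at the secret point r
# modulo a prime p. Key clamping and other details are omitted.
def poly_hash(msg: bytes, r: int, p: int = (1 << 130) - 5, block_len: int = 16) -> int:
    acc = 0
    for i in range(0, len(msg), block_len):
        block = msg[i:i + block_len]
        # Append a 0x01 byte so blocks of different lengths are distinguished.
        coeff = int.from_bytes(block + b"\x01", "little")
        acc = (acc + coeff) * r % p
    return acc

def poly1305_like_mac(msg: bytes, r: int, s: int) -> bytes:
    # Poly1305 finishes by adding a secret pad s and truncating mod 2^128.
    tag = (poly_hash(msg, r) + s) % (1 << 128)
    return tag.to_bytes(16, "little")

# Arbitrary example key material (a real scheme derives and clamps these).
example_r = 0x0123456789abcdef0123456789abcdef
example_s = 0xfedcba9876543210fedcba9876543210
tag = poly1305_like_mac(b"message to authenticate", example_r, example_s)
```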
Leveraging their new construction, the authors presented a variant, Poly1163, which is better optimized for current hardware at a similar security level to Poly1305. Impressively, despite not being hand-optimized at all, this variant outperforms OpenSSL’s Poly1305 implementation. On the other end of the design spectrum, the authors also presented Poly1503, which focuses on providing higher bit-security by not clamping inputs in the same manner as Poly1305, without a substantial hit to performance.
I want to encrypt 2^64 bytes with AES-GCM using a single key
Shay Gueron closed out the session with his presentation “I want to encrypt 2^64 bytes with AES-GCM using a single key“, which proposes a new mode of operation for AES called double nonce double key (DNDK), purpose-built to extend AES-GCM to support modern cloud-scale encryption tasks using a single key.
AES-GCM is the most widely used AEAD we encounter and is generally a safe choice for most applications when used correctly. However, GCM has a few well-known limitations: the 12-byte initialization vector (IV) limits the number of invocations that can be made with a single key, and GCM out of the box does not provide key commitment, meaning that an attacker can produce a single authenticated ciphertext that decrypts to two different messages under two different nonce+key combinations. It is precisely these two problems that DNDK addresses, while striving to remain as close as possible to the GCM construction itself.
In practice, the concept is simple: if the short IV (nonce) is holding us back, then simply make it bigger, say, double its size. But a “double nonce” isn’t quite enough with GCM, since GCM’s first internal step would be to hash it back down to its original smaller size. Instead, we can use AES itself to build a key derivation function that takes as input the “double nonce” and the encryption key and derives an invocation-specific encryption key. In short, we use our double-nonce-derived key to encrypt our message, and we have DNDK. As a bonus, DNDK also supports key commitment out of the box, as an optional output parameter. This incurs little practical overhead and does not rely on any additional cryptographic primitives to achieve its security.
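As a heavily simplified sketch of the “derive a fresh key from a larger nonce” idea, the example below derives a per-invocation AES-GCM key from a root key and a 24-byte nonce. The real DNDK-GCM performs this derivation (and the optional key-commitment output) using AES itself; the HKDF call, label, and inner nonce choice here are stand-ins for illustration only, and the draft RFC is the authoritative reference.

```python
# Heavily simplified sketch of the DNDK idea: derive a per-invocation key
# from (root key, double-length nonce), then encrypt with AES-GCM. The
# real scheme uses an AES-based derivation; HKDF is a stand-in here.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def dndk_like_encrypt(root_key: bytes, aad: bytes, msg: bytes) -> tuple[bytes, bytes]:
    double_nonce = os.urandom(24)  # twice the usual 12-byte GCM nonce
    # Derive an invocation-specific GCM key from (root key, double nonce).
    per_msg_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=double_nonce,
        info=b"illustrative-dndk-kdf",
    ).derive(root_key)
    # Encrypt under the derived key; a fixed inner nonce is acceptable in
    # this sketch because the derived key is unique per invocation.
    ct = AESGCM(per_msg_key).encrypt(b"\x00" * 12, msg, aad)
    return double_nonce, ct

nonce, ct = dndk_like_encrypt(os.urandom(32), b"header", b"cloud-scale data")
```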
Shay and friends at Meta have provided an optimized open-source implementation of DNDK-GCM, alongside implementations of AES-GCM and AES-GCM-SIV for comparison. A draft RFC has also been published to guide those wishing to implement DNDK for themselves. The Crypto Services team is proud to have supported the development of the DNDK draft RFC, with team members Gérald Doussot, Thomas Pornin, and Eric Schorn being formally acknowledged in the draft RFC.
– Kevin Henry
We look forward to catching up with everyone next year in Sofia, Bulgaria!