Cryptographic Reverse Firewalls
Noah Stephens-Davidowitz


Recent history has shown us that a user's own hardware and software often cannot be trusted. Apparently inadvertent bugs are regularly found in widespread cryptographic implementations, and Snowden's revelations proved that powerful actors are willing to go to extraordinary lengths to compromise hardware and software. This motivates us to consider the following (seemingly absurd and rather scary) question: How can we guarantee a user's security when she may be using a buggy, malfunctioning, or even arbitrarily compromised machine?

To that end, we introduce the notion of a cryptographic reverse firewall. Such a machine sits between the user's computer and the outside world, potentially modifying the messages that she sends and receives as she engages in a cryptographic protocol. A good firewall guarantees a user's security even when her own computer is arbitrarily corrupted. (Importantly, it does this without sharing any secrets with the user, and in general we trust the firewall no more than the communication channel.) This model generalizes much prior work and provides a general framework for building cryptographic schemes that remain secure when run on a compromised machine. Perhaps surprisingly, we show that we can instantiate very strong primitives (such as private function evaluation) with secure reverse firewalls, and we show that remarkably efficient and practical implementations exist for important primitives such as CCA-secure message transmission.
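To give a flavor of how a firewall can help without holding any secrets, here is a minimal sketch in Python of one natural mechanism: rerandomizing ciphertexts in flight. The idea is that the firewall multiplies each outgoing ElGamal ciphertext by a fresh encryption of 1, so that even if the user's (possibly tampered) encryptor chose its randomness maliciously, the ciphertext that leaves the firewall carries fresh randomness. This is an illustrative toy, not the paper's construction: the parameters below (a small Mersenne prime modulus and the base 3) are insecure placeholders, and the function names are ours.

```python
import secrets

# Toy ElGamal parameters -- FOR ILLUSTRATION ONLY, not secure.
P = 2**127 - 1   # a Mersenne prime, far too structured for real use
G = 3

def keygen():
    """Return (secret key, public key) for toy ElGamal."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def encrypt(pk, m):
    """Encrypt 1 <= m < P. A corrupted implementation might pick r
    maliciously, leaking information through the ciphertext."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def decrypt(sk, ct):
    c1, c2 = ct
    # c2 / c1^sk mod P, inverse via Fermat's little theorem (P is prime)
    return (c2 * pow(pow(c1, sk, P), P - 2, P)) % P

def firewall_rerandomize(pk, ct):
    """The reverse firewall's entire job: multiply in a fresh
    encryption of 1. It holds no secrets, yet its output ciphertext
    has fresh randomness, so a tampered encryptor's choice of coins
    cannot be read off the wire."""
    c1, c2 = ct
    s = secrets.randbelow(P - 2) + 1
    return (c1 * pow(G, s, P)) % P, (c2 * pow(pk, s, P)) % P
```

Correctness is immediate: the rerandomized ciphertext is (g^(r+s), m·pk^(r+s)), so it decrypts to the same message, while its distribution is now fresh regardless of how r was chosen.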

Joint work with Yevgeniy Dodis and Ilya Mironov.