Robust and Efficient Elimination of Cache and Timing Side Channels
Benjamin A. Braun1, Suman Jana1, and Dan Boneh1
1Stanford University
Abstract—Timing and cache side channels provide powerful attacks against many sensitive operations including cryptographic implementations. Existing defenses cannot protect against all classes of such attacks without incurring prohibitive performance overhead. A popular strategy for defending against all classes of these attacks is to modify the implementation so that the timing and cache access patterns of every hardware instruction are independent of the secret inputs. However, this solution is architecture-specific, brittle, and difficult to get right. In this paper, we propose and evaluate a robust low-overhead technique for mitigating timing and cache channels. Our solution requires only minimal source code changes and works across multiple languages/platforms. We report the experimental results of applying our solution to protect several C, C++, and Java programs. Our results demonstrate that our solution successfully eliminates the timing and cache side-channel leaks while incurring significantly lower performance overhead than existing approaches.
I. INTRODUCTION
Defending against cache and timing side channel attacks is known to be a hard and important problem. Timing and cache attacks can be used to extract cryptographic secrets from running systems [14, 15, 23, 29, 35, 36, 40], spy on Web user activity [12], and even undo the privacy of differential privacy systems [5, 24]. Attacks exploiting timing side channels have been demonstrated for both remote and local adversaries. A remote attacker is separated from its target by a network [14, 15, 29, 36] while a local attacker can execute unprivileged spyware on the target machine [7, 9, 11, 36, 45, 47].
Most existing defenses against cache and timing attacks only protect against a subset of attacks and incur significant performance overheads. For example, one way to defend against remote timing attacks is to make sure that the timing of any externally observable event is independent of any data that should be kept secret. Several different strategies have been proposed to achieve this, including application-specific changes [10, 27, 30], static transformation [17, 20], and dynamic padding [6, 18, 24, 31, 47]. However, none of these strategies defends against local timing attacks where the attacker spies on the target application by measuring the target’s impact on the local cache and other resources. Similarly, the strategies for defending against local cache attacks, like static partitioning of resources [28, 37, 43, 44], flushing state [50], obfuscating cache access patterns [9, 10, 13, 35, 40], and moderating access to fine-grained timers [33, 34, 42], also incur significant performance penalties while still leaving the target potentially vulnerable to timing attacks. We survey these methods in related work (Section VIII).
A popular approach for defending against both local and remote timing attacks is to ensure that the low-level instruction sequence does not contain instructions whose performance depends on secret information. This can be enforced by manually re-writing the code, as was done in OpenSSL1, or by changing the compiler to ensure that the generated code has this property [20].
Unfortunately, this popular strategy can fail to ensure security for several reasons. First, the timing properties of instructions may differ in subtle ways from one architecture to another (or even from one processor model to another), resulting in an instruction sequence that is unsafe for some architectures/processor models. Second, this strategy does not work for languages like Java where the Java Virtual Machine (JVM) optimizes the bytecode at runtime and may inadvertently introduce secret-dependent timing variations. Third, manually ensuring that a certain code transformation prevents timing attacks can be extremely difficult and tedious, as was the case when updating OpenSSL to prevent the Lucky Thirteen timing attack [32].
Our contribution. We propose the first low-overhead, application-independent, and cross-language defense that can protect against both local and remote timing attacks with minimal application code changes. We show that our defense is language-independent by applying the strategy to protect applications written in Java and C/C++. Our defense requires relatively simple modifications to the underlying OS and can run on off-the-shelf hardware.
We implement our approach in Linux and show that the execution times of protected functions are independent of secret data. We also demonstrate that the performance overhead of our defense is low. For example, the performance overhead to protect the entire state machine running inside an SSL/TLS server against all known timing- and cache-based side channel attacks is less than 5% in connection latency.
We summarize the key insights behind our solution (described in detail in Section IV) below.
• We leverage programmer code annotations to identify and protect sensitive code that operates on secret data. Our defense mechanism only protects the sensitive functions. This lets us minimize the performance impact of our scheme by leaving the performance of non-sensitive functions unchanged.
1In the case of RSA private key operations, OpenSSL uses an additional defense called blinding.
arXiv:1506.00189v2 [cs.CR] 31 Aug 2015
• We further minimize the performance overhead by separating and accurately accounting for secret-dependent and secret-independent timing variations. Secret-independent timing variations (e.g., the ones caused by interrupts, the OS scheduler, or non-secret execution flow) do not leak any sensitive information to the attacker and thus are treated differently than secret-dependent variations by our scheme.
• We demonstrate that existing OS services like schedulers and hardware features like memory hierarchies can be leveraged to create a lightweight isolation mechanism that can protect a sensitive function’s execution from other local untrusted processes and minimize timing variations during the function’s execution.
• We show that naive implementations of delay loops on most existing hardware leak timing information due to the limited accuracy of the underlying delay primitive (e.g., the NOP instruction). We create and evaluate a new scheme for implementing delay loops that prevents such leakage while still using existing coarse-grained delay primitives.
• We design and evaluate a lazy state cleansing mechanism that clears the sensitive state left in shared hardware resources (e.g., branch predictors, caches, etc.) before handing them over to an untrusted process. We find that lazy state cleansing incurs significantly less overhead than performing state cleansing as soon as a sensitive function finishes execution.
II. KNOWN TIMING ATTACKS
Before describing our proposed defense, we briefly survey different types of timing attackers. In the previous section, we discussed the difference between a local and a remote timing attacker: a local timing attacker, in addition to monitoring the total computation time, can spy on the target application by monitoring the state of shared hardware resources such as the local cache.
Concurrent vs. non-concurrent attacks. In a concurrent attack, the attacker can probe shared resources while the target application is operating. For example, the attacker can measure timing information or inspect the state of the shared resources at intermediate steps of a sensitive operation. The attacker’s process can control the concurrent access by adjusting its scheduling parameters and its core affinity in the case of symmetric multiprocessing (SMP).
A non-concurrent attack is one in which the attacker only gets to observe the timing information or shared hardware state at the beginning and the end of the sensitive computation. For example, a non-concurrent attacker can extract secret information using only the aggregate time it takes the target application to process a request.
Local attacks. Concurrent local attacks are the most prevalent class of timing attacks in the research literature. Such attacks are known to be able to extract the secret/private key against a wide range of ciphers including RSA [4, 36], AES [23, 35, 40, 46], and ElGamal [49]. These attacks exploit information leakage through a wide range of shared hardware resources: L1 or L2 data cache [23, 35, 36, 40], L3 cache [26, 46], instruction cache [1, 49], branch predictor cache [2, 3], and floating-point multiplier [4].
There are several known local non-concurrent attacks as well. Osvik et al. [35], Tromer et al. [40], and Bonneau and Mironov [11] present two types of local, non-concurrent attacks against AES implementations. In the first, prime and probe, the attacker “primes” the cache, triggers an AES encryption, and “probes” the cache to learn information about the AES private key. The spy process primes the cache by loading its own memory content into the cache and probes the cache by measuring the time to reload the memory content after the AES encryption has completed. This attack involves the attacker’s spy process measuring its own timing information to indirectly extract information from the victim application. Alternatively, in the evict and time strategy, the attacker measures the time taken to perform the victim operation, evicts certain chosen cache lines, triggers the victim operation, and measures its execution time again. By comparing these two execution times, the attacker can find out which cache lines were accessed during the victim operation. Osvik et al. were able to extract a 128-bit AES key after only 8,000 encryptions using the prime and probe attack.
Remote attacks. All existing remote attacks [14, 15, 29, 36] are non-concurrent; however, this is not fundamental. A hypothetical remote, yet concurrent, attack would be one in which the remote attacker submits requests to the victim application at the same time that another non-adversarial client sends some requests containing sensitive information to the victim application. The attacker may then be able to measure timing information at intermediate steps of the non-adversarial client’s communication with the victim application and infer the sensitive content.
III. THREAT MODEL
We allow the attacker to be local or remote and to execute concurrently or non-concurrently with the target application. We assume that the attacker can only run spy processes as a different, non-privileged user (i.e., without super-user privileges) than the owner of the target application. We also assume that the spy process cannot bypass the standard user-based isolation provided by the operating system. We believe that these are very realistic assumptions because if either one of these assumptions fails, the spy process can steal the user’s sensitive information without resorting to side channel attacks in most existing operating systems.
In our model, the operating system and the underlying hardware are trusted. Similarly, we expect that the attacker does not have physical access to the hardware and cannot monitor side channels such as electromagnetic radiation, power use, or acoustic emanations. We are only concerned with timing and cache side channels since they are the easiest side channels to exploit without physical access to the victim machine.
IV. OUR SOLUTION
In our solution, developers annotate the functions performing sensitive computation(s) that they would like to protect. For the rest of the paper, we refer to such functions as protected functions. Our solution instruments the protected functions such that our stub code is invoked before and after execution of each protected function. The stub code ensures that the protected functions, all other functions that may be invoked as part of their execution, and all the secrets that they operate on are safe from both local and remote timing attacks. Thus, our solution automatically prevents leakage of sensitive information by all functions (protected or unprotected) invoked during a protected function’s execution.
Our solution ensures the following properties for each protected function:
• We ensure that the execution time of a protected function as observed by either a remote or a local attacker is independent of any secret data the function operates on. This prevents an attacker from learning any sensitive information by observing the execution time of a protected function.
• We clean any state left in the shared hardware resources (e.g., caches) by a protected function before handing the resources over to an untrusted process. As described earlier in our threat model (Section III), we treat any process as untrusted unless it belongs to the same user who is performing the protected computation. We cleanse shared state only when necessary, in a lazy manner, to minimize the performance overhead.
• We prevent other concurrent untrusted processes from accessing any intermediate state left in the shared hardware resources during the protected function’s execution. We achieve this by dynamically partitioning the shared resources while incurring minimal performance overhead.
[Figure: three cores, each with private L1 and L2 caches, sharing an L3 cache; a protected function runs on core 1 while untrusted processes run on cores 2 and 3. Per-user page coloring isolates the protected function’s cache lines; no user process can preempt protected functions; padding makes timing secret-independent; per-core resources are lazily cleansed.]

Fig. 1: Overview of our solution
Figure 1 shows the main components of our solution. We use two high-level mechanisms to provide the properties described above for each protected function: time padding and preventing leakage through shared resources. We first briefly summarize these mechanisms below and then describe them in detail in Sections IV-A and IV-B.
Time padding. We use time padding to make sure that a protected function’s execution time does not depend on the secret data. The basic idea behind time padding is simple: pad the protected function’s execution time to its worst-case runtime over all possible inputs. The idea of padding execution time to an upper limit to prevent timing channels is not new and has been explored in several prior projects [6, 18, 24, 31, 47]. However, all these solutions suffer from two major problems which prevent them from being adopted in real-world settings: i) they incur prohibitive performance overhead (90–400% in macro-benchmarks [47]) because they have to add a large amount of time padding in order to prevent any timing information leakage to a remote attacker, and ii) they do not protect against local adversaries who can infer the actual unpadded execution time through side channels beyond network events (e.g., by monitoring the cache access patterns at periodic intervals).
We solve both of these problems in this paper. One of our main contributions is a new low-overhead time padding scheme that can prevent timing information leakage of a protected function to both local and remote attackers. We minimize the required time padding without compromising security by adapting the worst-case time estimates using the following three principles:
1) We adapt the worst-case execution estimates to the target hardware and the protected function. We do so by providing an offline profiling tool to automatically estimate the worst-case runtime of a particular protected function running on a particular target hardware platform. Prior schemes estimate the worst-case execution times for complete services (e.g., web servers) across all possible hardware configurations. This results in an over-estimate of the time pad that hurts performance.
2) We protect against local (and remote) attackers by ensuring that an untrusted process cannot intervene during a protected function’s execution. We apply time padding at the end of every protected function’s execution. This ensures minimal overhead while preventing a local attacker from learning the running time of protected functions. Prior schemes applied a large time pad before sending a service’s output over the network. Such schemes are not secure against local attackers who can use local resources, such as cache behavior, to infer the execution time of individual protected functions.
3) Timing variations result from many factors. Some are secret-dependent and must be prevented, while others are secret-independent and cause no harm. For example, timing variations due to the OS scheduler and interrupt handlers are generally harmless. We accurately measure and account for secret-dependent variations and ignore the secret-independent variations. This lets us compute an optimal time pad needed to protect secret data. None of the existing time padding schemes distinguishes between secret-dependent and secret-independent variations. This results in unnecessarily large time pads, even when secret-dependent timing variations are small.
Preventing leaks via shared resources. We prevent information leakage through shared resources without adding significant performance overhead to the process executing the protected function or to other (potentially malicious) processes. Our approach is as follows:
• We leverage the multi-core architecture found in most modern processors to minimize the amount of shared resources during a protected function’s execution without hurting performance. We dynamically reserve exclusive access to a physical core (including all per-core caches such as L1 and L2) while it is executing a protected function. This ensures that a local attacker does not have concurrent access to any per-core resources while a protected function is accessing them.
• For L3 caches shared across multiple cores, we use page coloring to ensure that cache accesses during a protected function’s execution are restricted to a reserved portion of the L3 cache. We further ensure that this reserved portion is not shared with other users’ processes. This prevents the attacker from learning any information about protected functions through the L3 cache.
• We lazily cleanse the state left in both per-core resources (e.g., L1/L2 caches, branch predictors) and resources shared across cores (e.g., the L3 cache) only before handing them over to untrusted processes. This minimizes the overhead caused by the state cleansing operation.
A. Time padding
We design a safe time padding scheme that defends against both local and remote attackers inferring sensitive information from the observed timing behavior of a protected function. Our design consists of two main components: estimating the padding threshold and applying the padding safely without leaking any information. We describe these components in detail next.
Determining the padding value. Our time padding only accounts for secret-dependent time variations. We discard variations due to interrupts or OS scheduler preemptions. To do so, we rely on Linux’s ability to keep track of the number of external preemptions. We adapt the total padding time based on the amount of time that a protected function is preempted by the OS.
• Let Tmax be the worst-case execution time of a protected function when no external preemptions occur.
• Let Text preempt be the worst-case time spent during preemptions given the set of n preemptions that occur during the execution of the protected function.
Our padding mechanism pads the execution of each protected function to Tpadded cycles, where
Tpadded = Text preempt + Tmax.
This leaks the amount of preemption time to the attacker, but nothing else. Since this is independent of the secret, the attacker learns nothing useful.
Estimating Tmax. Our time padding scheme requires a tight estimate of the worst-case execution time (WCET) of every protected function. Several prior projects try to estimate WCET through different static analysis techniques [19, 25]. However, these techniques require precise and accurate models of the target hardware (e.g., cache, branch target buffers, etc.), which are often very hard to get in practice. In our implementation we use a simple dynamic profiling method to estimate WCET, described below. Our time padding scheme is not tied to any particular WCET estimation method and can work with other estimation tools.

Fig. 2: Time leakage due to naive padding
We estimate the WCET, Tmax, through dynamic offline profiling of the protected function. Since this value is hardware-specific, we perform the profiling on the actual hardware that will run protected functions. To gather profiling information, we run an application that invokes protected functions with an input-generating script provided by the application developer/system administrator. To reduce the possibility of overtimes occurring due to uncommon inputs, it is important that the script generate both common and uncommon inputs. We instrument the protected functions in the application so that the worst-case performance behavior is stored in a profile file. We compute the padding parameters based on the profiling results.
To be conservative, we obtain all profiling measurements for the protected functions under high load conditions (i.e., in parallel with other applications that produce significant loads on both memory and CPU). We compute Tmax from these measurements such that it is the worst-case timing bound when at most a κ fraction of all profiling readings are excluded. κ is a security parameter which provides a tradeoff between security and performance. Higher values of κ reduce Tmax but increase the chance of overtimes. For our prototype implementation we set κ to 10^-5.
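The κ-based bound above can be computed as a simple order statistic over the profiling samples. The sketch below is illustrative only (the function name and sample values are ours, not the paper's profiling tool):

```python
# Sketch: derive Tmax from offline profiling readings by excluding at
# most a kappa fraction of the slowest readings. kappa = 1e-5 matches
# the paper's prototype; the sample data here is made up.
def estimate_tmax(samples, kappa=1e-5):
    """Return the worst-case timing bound after discarding at most a
    kappa fraction of the highest readings."""
    ordered = sorted(samples)
    excluded = int(kappa * len(ordered))  # readings we may drop from the top
    return ordered[len(ordered) - 1 - excluded]

# With few samples and kappa = 1e-5 nothing is excluded, so Tmax is
# simply the maximum observed cycle count.
profile = [1200, 1250, 1190, 1400, 1310]
tmax = estimate_tmax(profile)
```

Larger κ values discard more of the tail, shrinking the pad but raising the chance of an overtime, exactly the tradeoff described above.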
Safely applying padding. Once the padding amount has been determined using the techniques described earlier, waiting for the target amount might seem easy at first glance. However, two major issues make the application of padding complicated in practice, as described below.
Handling the limited accuracy of padding loops. As our solution depends on fine-grained padding, a naive padding scheme may leak information due to the limited accuracy of any padding loop. Figure 2 shows that a naive padding scheme that repeatedly measures the elapsed time in a tight loop until the target time is reached leaks timing information. This is because the loop can only break when the condition is evaluated, and hence if one iteration of the loop takes u cycles then the padding loop leaks timing information mod u. Note that earlier time padding schemes are not affected by this problem, as their padding amounts are significantly larger than ours.
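The mod-u leak can be seen in a toy model of the naive loop. The sketch below is a simulation under simplified assumptions (every loop iteration costs exactly u cycles), not the paper's implementation:

```python
# Toy model of the naive padding loop from Fig. 2: the protected work
# finishes at time t0, and the loop re-checks the clock every u cycles,
# breaking at the first check at or past the target.  The padded
# finish time is then congruent to t0 mod u, leaking the low-order
# timing of the secret-dependent work.
def naive_padded_finish(t0, target, u):
    t = t0
    while t < target:
        t += u  # one loop iteration costs u cycles
    return t

u, target = 7, 1000
# Two secret-dependent runtimes that differ mod u ...
a = naive_padded_finish(503, target, u)  # finishes at 1000
b = naive_padded_finish(504, target, u)  # finishes at 1001
# ... yield distinguishable padded times: the attacker learns t0 mod u.
assert a % u == 503 % u and b % u == 504 % u
```

This is precisely why the randomization step described next is needed before entering the simple wait loop.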
Our solution guarantees that the distribution of running times of a protected function for some set of private inputs is indistinguishable from the same distribution produced when a different set of private inputs to the function is used. We call this property the safe padding property. We overcome the limitations of the simple wait loop by performing a timing randomization step before entering the simple wait loop. During this step, we perform m rounds of a randomized waiting operation. The goal of this step is to ensure that the amount of time spent in the protected function before the beginning of the simple wait loop, when taken modulo u, the stable period of the simple timing loop (i.e., disregarding the first few iterations), is close to uniform. This technique can be viewed as performing a random walk on the integers modulo u, where the runtime distribution of the waiting operation is the support of the walk and m is the number of steps walked. Prior work by Chung et al. [16] has explored the sufficient conditions for the number of steps in a walk and its support that produce a distribution that is exponentially close to uniform.
For the purposes of this paper, we perform timing randomization using a randomized operation with 256 possible inputs that runs for X + c cycles on input X, where c is a constant. We describe the details of this operation in Section V. We then choose m to defeat our empirical statistical tests under pathological conditions that are very favorable to an attacker, as shown in Section VI.
For our scheme’s guarantees to hold, the randomness used inside the randomized waiting operation must be generated using a cryptographically secure generator. Otherwise, if an attacker can predict the added random noise, she can subtract it from the observed padded time and hence derive the original timing signal, modulo u.
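The randomization step can be modeled abstractly as m rounds of a CSPRNG-driven delay added to the elapsed cycle count; taken modulo u, the sum flattens toward uniform. This sketch only models the arithmetic of the random walk (the constants m, c, and u are illustrative, and a real implementation would burn actual cycles, as Section V describes):

```python
import secrets

# Model of the timing-randomization step: m rounds of a randomized wait
# that consumes (X + c) cycles, where X is drawn from a CSPRNG over 256
# possible values.  In this model each round just adds to the elapsed
# cycle count; the sum taken mod u becomes close to uniform, erasing
# the (elapsed mod u) signal a simple wait loop would otherwise leak.
def randomized_delay(elapsed, m, c=3):
    for _ in range(m):
        elapsed += secrets.randbelow(256) + c  # one randomized waiting op
    return elapsed
```

Using `secrets` (rather than `random`) reflects the requirement above: if the noise were predictable, the attacker could subtract it back out.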
A padding scheme that pads to the target time Tpadded using a simple padding loop and performs the randomization step after the execution of the protected function will not leak any information about the duration of the protected function, as long as the following conditions hold: (i) no preemptions occur; (ii) the randomization step successfully yields a distribution of runtimes that is uniform modulo u; (iii) the simple padding loop executes for enough iterations so that it reaches its stable period. The security of this scheme under these assumptions can be proved as follows.
Let us assume that the last iteration of the simple wait loop takes u cycles. Assuming the simple wait loop has iterated enough times to reach its stable period, we can safely assume that u does not depend on when the simple wait loop started running. Now, due to the randomization step, we assume that the amount of time spent up to the start of the last iteration of the simple wait loop, taken modulo u, is uniformly distributed. Hence, the loop will break at a time that is between the target time and the target time plus u − 1. Because the last iteration began when the elapsed execution time was uniformly distributed modulo u, these u different cases will occur with equal probability. Hence, regardless of what is done within the protected function, the padded duration of the function will follow a uniform distribution over u different values after the target time. Therefore, the attacker will not learn anything from observing the padded time of the function.
To reduce the worst-case performance cost of the randomization step, we generate the required randomness at the start of the protected function, before measuring the start time of the protected function. This means that any variability in the runtime of the randomness generator does not increase Tpadded.
// At the return point of a protected function:
// Tbegin holds the time at function start
// Ibegin holds the preemption count at function start

for j = 1 to m
    Short-Random-Delay()
Ttarget = Tbegin + Tmax
overtime = 0
for i = 1 to ∞
    before = Current-Time()
    while Current-Time() < Ttarget, re-check
    // Measure preemption count and adjust target
    Text preempt = (Preemptions() − Ibegin) · Tpenalty
    Tnext = Tbegin + Tmax + Text preempt + overtime
    // Overtime-detection support
    if before ≥ Tnext and overtime = 0
        overtime = Tovertime
        Tnext = Tnext + overtime
    // If no adjustment was made, break
    if Tnext = Ttarget
        return
    Ttarget = Tnext

Fig. 3: Algorithm for applying time padding to a protected function’s execution.
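A direct, simulation-only transcription of the Fig. 3 algorithm can clarify its control flow. In the sketch below the clock and preemption counter are injected as callables so the logic runs without real hardware; the fake clock, which advances one cycle per read, and all constants are our illustrative assumptions:

```python
# Simulation of the padding algorithm in Fig. 3.  `now` and
# `preemptions` stand in for Current-Time() and Preemptions().
def apply_padding(now, preemptions, t_begin, i_begin,
                  t_max, t_penalty, t_overtime):
    t_target = t_begin + t_max
    overtime = 0
    while True:
        before = now()
        while now() < t_target:       # simple wait loop
            pass
        # Adjust the target for preemptions observed so far.
        t_ext = (preemptions() - i_begin) * t_penalty
        t_next = t_begin + t_max + t_ext + overtime
        # Overtime detection: elapsed time already past the target.
        if before >= t_next and overtime == 0:
            overtime = t_overtime
            t_next += overtime
        if t_next == t_target:        # no adjustment was made: done
            return now()
        t_target = t_next

class FakeClock:
    """Deterministic clock: each read costs one 'cycle'."""
    def __init__(self):
        self.t = 0
    def now(self):
        self.t += 1
        return self.t
```

With no preemptions the loop pads straight to Tbegin + Tmax; with one preemption it pads once more to account for Tpenalty, matching the "repeatedly increase and re-pad" description in the text.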
Handling preemptions occurring inside the padding loop. The scheme presented above assumes that no external preemptions can occur during the execution of the padding loop itself. However, blocking all preemptions during the padding loop would degrade the responsiveness of the system. To avoid such issues, we allow interrupts to be processed during the execution of the padding loop and update the padding time accordingly. We repeatedly update the padding time in response to preemptions until a “safe exit condition” is met where we can stop padding.
Our approach is to initially pad to the target value Tpadded, regardless of how many preemptions occur. We then repeatedly increase Text preempt and pad to the new adjusted padding target until we execute a padding loop during which no preemptions occur. The pseudocode of our approach is shown in Figure 3. Our technique does not leak any information about the actual runtime of the protected function, as the final padding target only depends on the pattern of preemptions but not on the initial elapsed time before entering the padding loops. Note that forward progress in our padding loops is guaranteed as long as preemptions are rate-limited on the cores executing protected functions.
The algorithm computes Text preempt based on observed preemptions simply by multiplying a constant Tpenalty by the number of preemptions. Since Text preempt should match the worst-case execution time of the observed preemptions, Tpenalty is the worst-case execution time of any single preemption. Like Tmax, Tpenalty is machine-specific and can be determined empirically from profiling data.
Handling overtimes. Our WCET estimator may miss a pathological input that causes the protected function to run for significantly more time than on other inputs. While we never observed this in our experiments, if such a pathological input appeared in the wild, the protected function may take longer than the estimated worst-case bound, and this will result in an overtime. This leaks information: the attacker learns that a pathological input was just processed. We therefore augment our technique to detect such overtimes, i.e., when the elapsed time of the protected function, taking interrupts into account, is greater than Tpadded.
One option to limit leakage when such overtimes are detected is to refuse to service such requests. The system administrator can then act by either updating the secrets (e.g., secret keys) or increasing the parameter Tmax of the model.
We also support updating Tmax of a protected function on the fly, without restarting the running application. The padding parameters are stored in a file that has the same access permissions as the application/library containing the protected function. This file is memory-mapped when the corresponding protected function is called for the first time. Any changes to the memory-mapped file will immediately impact the padding parameters of all applications invoking the protected function, unless they are in the middle of applying the estimated padding.
Note that each overtime can leak at most log(N) bits of information, where N is the total number of timing measurements observed by the attacker. To see why, consider a string of N timing observations made by an attacker with at most B overtimes. There can be < N^B such unique strings, and thus the maximum information content of such a string is < B log(N) bits, i.e., < log(N) bits per overtime. However, the actual effect of such leakage depends on how much entropy an application’s timing patterns for different inputs have. For example, if an application’s execution time for a particular secret input is significantly larger than for all other inputs, even leaking 1 bit of information will be enough for the attacker to infer the complete secret input.
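The counting argument above can be checked numerically: with at most B overtimes among N observations, the attacker distinguishes at most sum over i ≤ B of C(N, i) overtime patterns. The concrete values of N and B below are our own illustration:

```python
import math

# Numerical check of the overtime leakage bound: the number of
# observation strings with at most b overtime positions among n
# observations is sum_{i<=b} C(n, i).  For the values below this is
# < n**b, so the information content is < b * log2(n) bits, i.e.
# < log2(n) bits per overtime.
def pattern_count(n, b):
    return sum(math.comb(n, i) for i in range(b + 1))

n, b = 10_000, 3
bits = math.log2(pattern_count(n, b))
assert pattern_count(n, b) < n ** b
assert bits < b * math.log2(n)
```

The bound is loose but illustrates why rare overtimes leak little on their own; as the text notes, the damage depends on the entropy of the application's timing profile.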
Minimizing external preemptions. Note that even though Tpadded does not leak any sensitive information, padding to this value will incur significant performance overhead if Text preempt is high due to frequent or long-running preemptions during the protected function’s execution. Therefore, we minimize the external events that can delay the execution of a protected function. We describe the main external sources of delays and how we deal with them in detail below.
• Preemptions by other user processes. Under regular circumstances, execution of a protected function may be preempted by other user processes. This can delay the execution of the protected function for as long as the process is preempted. Therefore, we need to minimize such preemptions while still keeping the system usable. In our solution, we prevent preemptions by other user processes during the execution of a protected function by using a scheduling policy that prevents migrating the process to a different core and prevents other user processes from being scheduled on the same core for the duration of the protected function’s execution.
• Preemptions by interrupts. Another common source of preemption is the hardware interrupts served by the core executing a protected function. One way to solve this problem is to block or rate-limit the number of interrupts that can be served by a core while executing a protected function. However, such a technique may make the system non-responsive under heavy load. For this reason, in our current prototype solution, we do not apply such techniques.
Note that some of these interrupts (e.g., network interrupts) can be triggered by the attacker and thus can be used by the attacker to slow down the protected function’s execution. However, in our solution, such an attack increases Text preempt, and hence degrades performance, but does not cause information leakage.
• Paging. An attacker can potentially slow down the protected function arbitrarily by causing memory-paging events during its execution. To avoid such cases, our solution forces each process executing a protected function to lock all of its pages in memory and disables page swapping. As a consequence, our solution currently does not allow processes that allocate more memory than is physically available on the target system to use protected functions.
• Hyperthreading. Hyperthreading is a technique supported by modern processor cores in which one physical core supports multiple logical cores. The operating system can independently schedule tasks on these logical cores, and the hardware transparently takes care of sharing the underlying physical core. We observed that protected functions executing on a core with hyperthreading enabled can encounter large slowdowns, because other processes executing concurrently on the same physical core can interfere with access to some of the CPU's resources.
One potential way of avoiding this slowdown is to configure the OS scheduler to prevent any untrusted process from running concurrently on a physical core with a process in the middle of a protected function. However, such a mechanism may result in high overheads due to the cost of actively unscheduling/migrating a process running on a virtual core. For our current prototype implementation, we simply disable hyperthreading as part of system configuration.
• CPU frequency scaling. Modern CPUs include mechanisms to change the operating frequency of each core dynamically at runtime, depending on the current workload, to save power. If a core's frequency decreases in the middle of the execution of a protected function, or the core enters a halt state to save power, the function will take longer in real time, increasing T_max. To reduce such variations, we disable CPU frequency scaling and low-power CPU states while a core executes a protected function.
B. Preventing leakage through shared resources
We prevent information leakage from protected functions through shared resources in two ways: isolating shared resources from other concurrent processes, and lazily cleansing the state left in shared resources before handing them over to other untrusted processes. Isolating the shared resources of protected functions from other concurrent processes helps prevent local timing and cache attacks, and also improves performance by minimizing variations in the runtime of protected functions.
Isolating per-core resources. As described earlier in Section IV-A, we disable hyperthreading on a core during a protected function's execution to improve performance. This also ensures that an attacker cannot run spy code that snoops on per-core state while a protected function is executing. We also prevent preemptions by other user processes during the execution of a protected function, thus ensuring that the core and its L1/L2 caches are dedicated to the protected function.
Preventing leakage through performance counters. Modern hardware often contains performance counters that keep track of performance events such as the number of cache evictions or branch mispredictions occurring on a particular core. A local attacker with access to these performance counters may infer the secrets used during a protected function's execution. Our solution therefore restricts access to performance-monitoring counters so that one user's process cannot see detailed performance metrics of another user's processes. We do not, however, restrict a user from using hardware performance counters to measure the performance of their own processes.
Preventing leakage through the L3 cache. As the L3 cache is a resource shared across multiple cores, we use page coloring to dynamically isolate a protected function's data in the L3 cache. To support page coloring, we modify the OS kernel's physical page allocators so that they do not allocate pages having any of C reserved secure page colors unless the caller specifically requests a secure color. Pages are colored based on which L3 cache sets a page maps to. Therefore, two pages with different colors are guaranteed never to conflict on any of their cache lines in the L3 cache.
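As an illustrative sketch (the page size, color count, and bit layout here are our assumptions, not the paper's exact kernel parameters), the color of a physical page is the part of its page frame number that overlaps the L3 set-index bits:

```cpp
#include <cstdint>

// Hypothetical cache geometry: 4 KiB pages and an L3 whose set-index
// bits extend above the page offset, yielding kNumColors page colors.
constexpr uint64_t kPageShift = 12;   // 4 KiB pages
constexpr uint64_t kNumColors = 32;   // assumed number of usable colors

// Color = low bits of the physical page frame number that overlap the
// L3 set index. Two pages with different colors can never map to the
// same L3 cache set, so they never evict each other's lines.
uint64_t page_color(uint64_t phys_addr) {
    return (phys_addr >> kPageShift) % kNumColors;
}
```

A secure-color-aware allocator would then simply skip page frames whose color falls in the reserved set unless the caller explicitly requested a secure color.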
To support page coloring, we also disable transparent huge pages and set up access control for huge pages. An attacker with access to a huge page can evade the isolation provided by page coloring, since a huge page can span multiple page colors. Hence, we prevent non-privileged users from obtaining huge pages (transparently or by request).
As part of our implementation of page coloring, we also disable memory-deduplication features such as kernel same-page merging. This prevents a secure-colored page mapped into one process from being transparently mapped as shared into another process. Disabling memory deduplication is not unique to our solution; it has been used in the past in hypervisors to prevent information leakage across virtual machines [39].
When a process calls a protected function for the first time, we invoke a kernel-module routine to remap all pages allocated by the process in private mappings (i.e., the heap, stack, text-segment, library-code, and library-data pages) to pages that are not shared with any other user's processes. We also ensure these pages have a page color reserved by the user executing the protected function. The remapping transparently changes the physical pages that a process accesses without modifying the virtual memory addresses, and hence requires no special application support. If the user has not yet reserved any page colors, or there are no remaining pages of any of her reserved page colors, the OS allocates one of the reserved colors for the user. The process is also flagged with a "secure-color" bit. We modify the OS so that it recognizes this flag and ensures that future pages allocated to a private mapping for the process will come from a page color reserved for the user. Note that since we only remap private mappings, we do not protect applications that access a shared mapping from inside a protected function.
This strategy for allocating page colors to users has a minor potential downside: it restricts the number of different users' processes that can concurrently call protected functions. We believe that this is a reasonable trade-off between security and performance.
Lazy state cleansing. To ensure that an attacker does not see the tainted state of a per-core resource after a protected function finishes execution, we lazily cleanse all per-core resources. When a protected function returns, we mark the CPU as "tainted" with the user ID of the caller process. The next time the OS attempts to schedule a process from a different user on that core, it first flushes all per-CPU caches, including the L1 instruction cache, L1 data cache, L2 cache, branch target buffer (BTB), and translation lookaside buffer (TLB). This scheme ensures that the overhead of flushing these caches is amortized over multiple invocations of protected functions by the same user.
V. IMPLEMENTATION
We built a prototype implementation of our protection mechanism for a system running the Linux OS. We describe the different components of our implementation below.
A. Programming API
We implement a new function annotation, FIXED_TIME, for C/C++ that indicates that a function should be protected. The annotation can be specified either in the declaration of the function or at its definition. Adding this annotation is the only change to a C/C++ code base that a programmer has to make in order to use our solution. We wrote a plugin for the Clang C/C++ compiler that handles this annotation. The plugin automatically inserts a call to the function fixed_time_begin at the start of the protected function and a call to fixed_time_end at every return point of the function. These functions protect the annotated function using the mechanisms described in Section IV.
Alternatively, a programmer can call these functions explicitly. This is useful for protecting ranges of code within a function, such as the state transitions of the TLS state machine (see Section VI-B). We provide a Java Native Interface wrapper for both fixed_time_begin and fixed_time_end to support protected functions written in Java.
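A minimal sketch of the intended usage (the stub bodies and the annotated function are our illustrations; the paper's Clang plugin and runtime provide the real begin/end implementations):

```cpp
// Stubs standing in for the runtime of Section IV: the real
// fixed_time_begin/fixed_time_end pin the core, start the padding
// clock, and pad execution to the profiled worst-case time.
static int begin_calls = 0, end_calls = 0;
void fixed_time_begin() { ++begin_calls; }
void fixed_time_end()   { ++end_calls; }

// With the Clang plugin, the programmer would only write the
// FIXED_TIME annotation; the explicit begin/end calls below show what
// the plugin inserts automatically at entry and at each return point.
int check_secret(int guess, int secret) /* FIXED_TIME */ {
    fixed_time_begin();
    int ok = (guess == secret);   // secret-dependent work goes here
    fixed_time_end();             // inserted at every return point
    return ok;
}
```

Explicit calls in this style are also how ranges of code inside a larger function, such as TLS state transitions, can be protected.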
B. Time padding
To implement the time-padding loops, we read the timestamp counter of the x86 processor to collect time measurements. In most modern x86 processors, including the one we tested on, the timestamp counter runs at a constant frequency regardless of the power-saving state of the processor. We generate pseudorandom bytes for the randomized padding step using the ChaCha/8 stream cipher [8]. We use a value of 300 µs for T_penalty, as this bounds the worst-case slowdown due to a single interrupt that we observed in our experiments.
Our implementation of the randomized wait operation takes an input X and simply performs X + c noops in a loop, where c is a value large enough that the loop takes one cycle longer for each additional iteration. We observe that c = 46 is sufficient to achieve this property.
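The wait operation can be sketched as below (the volatile counter stands in for the noop body and keeps the compiler from deleting the loop; the real implementation uses noops and the ChaCha/8-derived X):

```cpp
#include <cstdint>

// Spin for X + c iterations so that the loop's duration grows by one
// cycle per extra iteration; c = 46 is the paper's measured constant.
constexpr uint64_t kC = 46;

uint64_t randomized_wait(uint64_t x) {
    volatile uint64_t spins = 0;
    for (uint64_t i = 0; i < x + kC; ++i)
        ++spins;                  // one "noop" per iteration
    return spins;                 // number of iterations executed
}
```

In the padding loop, the caller re-reads the timestamp counter (e.g., via __rdtsc() on x86) after each wait to decide whether further padding is needed.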
Some of the OS modifications in our solution are implemented as a loadable kernel module. This module supports an IOCTL call to mark a core as tainted at the end of a protected function's execution. The module also supports an IOCTL call that enables fast access to the interrupt and context-switch counts. In the standard Linux kernel, the interrupt count is usually accessed through the proc file system interface. However, such an interface is too slow for our purposes. Instead, our kernel module allocates a page of counters that is mapped into the virtual address space of the calling process. The task_struct of the calling process also contains a pointer to these counters. We modify the kernel to check, on every interrupt and context switch, whether the current task has such a page, and if so, to increment the corresponding counter in that page.
Offline profiling. We provide a profiling wrapper script, fixed_time_record.sh, that computes the worst-case execution-time parameters of each protected function, as well as the worst-case slowdown of that function due to preemptions by different interrupts or kernel tasks.
The profiling script automatically generates profiling information for all protected functions in an executable by running the application on different inputs. During the profiling process, we run a variety of applications in parallel to create a stress-testing environment that triggers the worst-case performance of the protected function. To allow the stress testers to maximally slow down the user application, we reset the scheduling parameters and CPU affinity of a thread at the start and end of every protected function. One stress tester generates interrupts at a high frequency using a simple program that floods the loopback network interface with UDP packets. We also run mprime², systester³, and the LINPACK benchmark⁴ to cause high CPU load and large amounts of memory contention.
C. Preventing leakage through shared resources
Isolating a processor core and core-specific caches. We disable hyperthreading in Linux by selectively disabling virtual cores. This prevents other processes from interfering with the execution of a protected function. As part of our prototype, we also implement a simple version of the page-coloring scheme described in Section IV.
We prevent a user from observing hardware performance counters that reveal the performance behavior of other users' processes. The perf_events framework on Linux mediates access to hardware performance counters. We configure the framework to allow access to per-CPU performance counters only for privileged users. Note that an unprivileged user can
²http://www.mersenne.org/
³http://systester.sourceforge.net
⁴https://software.intel.com/en-us/articles/intel-math-kernel-library-linpack-download/
still access per-process performance counters that measure the performance of their own processes.
To ensure that a processor core executing a protected function is not preempted by other user processes, as specified in Section IV, we rely on a scheduling mode that prevents other userspace processes from preempting a protected function. For this purpose, we use the Linux SCHED_FIFO scheduling mode at maximum priority. To be able to do this, we allow unprivileged users to use SCHED_FIFO at priority 99 by changing the limits in the /etc/security/limits.conf file.
One side effect of this technique is that if a protected function manually yields to the scheduler or performs blocking operations, the process invoking the protected function may be scheduled off the core. Therefore, we do not allow any blocking operations or system calls inside a protected function. As mentioned earlier, we also disable paging for processes executing protected functions by using the mlockall() system call with the MCL_FUTURE flag.
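The per-call setup implied here can be sketched as follows (error handling elided; the function names are ours, and without the limits.conf change described above these calls fail with EPERM for unprivileged users):

```cpp
#include <sched.h>
#include <sys/mman.h>

// Build the SCHED_FIFO parameters used around a protected function.
sched_param make_fifo_param() {
    sched_param p{};
    p.sched_priority = 99;        // maximum SCHED_FIFO priority
    return p;
}

// Pin the calling thread to one core, raise it to SCHED_FIFO:99, and
// lock all pages so no paging events can occur mid-function.
// MCL_CURRENT is added so already-mapped pages are locked too.
int enter_protected_region(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) return -1;
    sched_param p = make_fifo_param();
    if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) return -1;
    return mlockall(MCL_CURRENT | MCL_FUTURE);
}
```

A matching exit routine would restore the original scheduling class and affinity, mirroring the reset performed at the end of every protected function.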
We detect whether a protected function has violated the conditions of isolated execution by determining whether any voluntary context switches occurred during the protected function's execution. Such switches usually indicate that the protected function either yielded the CPU manually or performed a blocking operation.
Flushing shared resources. We modify the Linux scheduler to check the taint of a core before scheduling a user process on it, and to flush per-core resources if needed, as described in Section IV.
To flush the L1 and L2 caches, we iteratively read over a segment of memory that is larger than the corresponding cache sizes. We found this to be significantly more efficient than using the WBINVD instruction, which cost as much as 300 microseconds in our tests. We flush the L1 instruction cache by executing a large number of NOP instructions.
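Flushing by eviction can be sketched as below (buffer size and stride are our assumptions: twice the paper's 256 KB per-core L2, touched one 64-byte line at a time; the returned checksum keeps the reads from being optimized away):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

constexpr size_t kFlushBytes = 2 * 256 * 1024;  // 2x the L2 size
constexpr size_t kLineBytes  = 64;              // one cache line

// Reading a buffer larger than the cache forces every resident line
// out of the L1D and L2 caches by eviction.
uint8_t flush_caches_by_reads(const std::vector<uint8_t>& buf) {
    uint8_t sum = 0;
    for (size_t i = 0; i < buf.size(); i += kLineBytes)
        sum ^= buf[i];            // one read per cache line
    return sum;
}
```

Usage: allocate the buffer once, e.g. `std::vector<uint8_t> buf(kFlushBytes, 0);`, and call `flush_caches_by_reads(buf)` from the scheduler hook before handing the core to another user.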
Current implementations of Linux flush the TLB on each context switch, so we do not need to flush it separately. However, if Linux starts leveraging the PCID feature of x86 processors in the future, the TLB would have to be flushed explicitly. To flush the BTB, we leverage a "branch slide" consisting of alternating conditional branch and NOP instructions.
VI. EVALUATION
To show that our approach can be applied to protect a wide variety of software, we have evaluated our solution in three different settings and found that it successfully prevents local and remote timing attacks in all of them. We describe the settings in detail below.
Encryption algorithms implemented in high-level interpreted languages like Java. Traditionally, cryptographic algorithms implemented in interpreted languages like Java have been harder to protect from timing attacks than those implemented in low-level languages like C. Most interpreted languages are compiled down to machine code on the fly by a VM using just-in-time (JIT) compilation techniques. The JIT compiler often optimizes the code non-deterministically to improve performance. This makes it extremely hard for a programmer to reason about the transformations that are required to make a sensitive function's timing behavior secret-independent. While developers writing low-level code can use features such as inline assembly to carefully control the machine code of their implementation, such low-level control is simply not possible in a higher-level language.
We show that our techniques can address these issues. We demonstrate that our defense can make the computation time of Java implementations of cryptographic algorithms independent of the secret key with minimal performance overhead.
Cryptographic operations and the SSL/TLS state machine. Implementations of cryptographic primitives other than public/private-key encryption or decryption routines may also suffer from side-channel attacks. For example, a cryptographic hash algorithm like SHA-1 takes a different amount of time depending on the length of the input data. In fact, such timing variations have been used as part of several existing attacks against SSL/TLS protocols (e.g., Lucky 13). Also, the time taken to perform the computations implementing different stages of the SSL/TLS state machine may itself depend on the secret key.
We find that our protection mechanism can protect cryptographic primitives like hash functions, as well as individual stages of the SSL/TLS state machine, from timing attacks while incurring minimal overhead.
Sensitive data structures. Besides cryptographic algorithms, timing channels also occur in the context of data-structure operations like hash-table lookups. For example, a hash-table lookup may take a different amount of time depending on how many items are present in the bucket where the desired item is located: it takes longer to find items in buckets with many items than in buckets with few. This signal can be exploited by an attacker to mount denial-of-service attacks [22]. We demonstrate that our technique can prevent timing leaks in the associative arrays of the C++ STL, a popular hash-table implementation.
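The bucket-length signal described above can be observed directly through the STL's bucket interface; a small sketch (map contents are arbitrary):

```cpp
#include <unordered_map>
#include <cstddef>

// A lookup walks the chain of the key's bucket, so its running time
// grows with the bucket's length. bucket_size() exposes exactly the
// quantity an attacker's timing measurement would estimate.
size_t probe_bucket_len(const std::unordered_map<long, long>& m,
                        long key) {
    return m.bucket_size(m.bucket(key));
}
```

An attacker who can time lookups effectively learns this per-bucket length without ever calling the bucket interface, which is the leak our padding closes.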
Experiment setup. We perform all our experiments on a machine with 2.3 GHz Intel Xeon E5-2630 CPUs organized in 2 sockets, each containing 6 physical cores, unless otherwise specified. Each core has a 32 KB L1 instruction cache, a 32 KB L1 data cache, and a 256 KB L2 cache. Each socket has a 15 MB L3 cache. The machine has a total of 64 GB of RAM.
For our experiments, we use OpenSSL version 1.0.1l and BouncyCastle version 1.52 (beta) for Java. The test machine runs Linux kernel version 3.13.11.4 with our modifications, as discussed in Section V.
A. Security evaluation
Preventing a simple timing attack. To determine the effectiveness of our safe padding technique, we first test whether our technique can protect against a large timing channel that can distinguish between two different inputs of a simple function. To make the attacker's job easier, we craft a simple function that has an easily observable timing channel: the function
Fig. 4: Defeated distinguishing attack. (A) Unprotected; (B) with time padding but no randomized noise; (C) full protection (padding + randomized noise). Each panel plots the frequency distribution of observed durations (ns) for inputs 0 and 1. (Plots omitted.)
executes a loop for 1 iteration if the input is 0 and 11 iterations otherwise. We use the x86 loop instruction to implement the loop, with just a single nop instruction as the body. We assume that the attacker calls the protected function directly and measures the value of the timestamp counter immediately before and after the call. The goal of the attacker is to distinguish between two different inputs (0 and 1) by monitoring the execution time of the function. Note that these conditions are extremely favorable for the attacker.
We found that our defense completely defeats such a distinguishing attack despite the highly favorable conditions for the attacker. We also found that the timing randomization step (described in Section IV-A) is critical for this protection: a naive padding loop without any timing randomization step indeed leaks information. Figure 4(A) shows the distributions of observed runtimes of the protected function on inputs 0 and 1 with no defense applied. Figure 4(B) shows the runtime distributions when padding is added to reach T_max = 5000 cycles (≈ 2.17 µs) without the time randomization step. In both cases, the observed timing distributions for the two inputs are clearly distinguishable. Figure 4(C) shows the same distributions when m = 5 rounds of timing randomization are applied along with time padding. In this case, we are no longer able to distinguish the timing distributions.
Fig. 5: The effect of multiple rounds of randomized noise addition on the timing channel (log10 of the empirical statistical distance vs. rounds of noise, for inputs 0 vs. 1 and 0 vs. 0; plot omitted).

We quantify the possibility of success for a distinguishing attack in Figure 5 by plotting how the empirical statistical distance between the observed distributions varies with the amount of padding noise added. The statistical distance is computed using the following formula:
d(X, Y) = (1/2) · Σ_{i∈Ω} |P[X = i] − P[Y = i]|
We measure the statistical distance over the set of observations that are within 50 cycles on either side of the median (this range contains nearly all observations). Each distribution consists of around 600 million observations.
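A direct implementation of the statistical-distance formula over two empirical distributions (the container choice is illustrative, not the paper's):

```cpp
#include <map>
#include <cmath>

// Empirical statistical (total variation) distance between two
// observation histograms: half the sum of absolute probability
// differences over the union of their supports.
double stat_distance(const std::map<long, double>& px,
                     const std::map<long, double>& py) {
    std::map<long, double> all = px;
    for (const auto& kv : py) all.insert(kv);   // union of supports
    double d = 0.0;
    for (const auto& kv : all) {
        auto ix = px.find(kv.first), iy = py.find(kv.first);
        double p = (ix == px.end()) ? 0.0 : ix->second;
        double q = (iy == py.end()) ? 0.0 : iy->second;
        d += std::fabs(p - q);
    }
    return d / 2.0;
}
```

Identical distributions give distance 0 and disjoint distributions give 1, so the log-scale plot of Figure 5 directly shows how quickly the attack's advantage vanishes with added noise rounds.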
The dashed line in Figure 5 shows the statistical distance between two different instances of the test function, both with input 0. The solid line shows the statistical distance when one instance has input 0 and the other has input 1. We observe that the attack is completely prevented if at least 2 rounds of noise are used.
Preventing a timing attack on RSA decryption. We next evaluate the effectiveness of our time-padding approach in defeating the timing attack by Brumley et al. [15] against unblinded RSA implementations. Blinding is an algorithmic modification to RSA that uses randomness to prevent timing attacks. To isolate the impact of our specific defense, we apply it to the RSA implementation in OpenSSL 1.0.1h with such constant-time defenses disabled: we configure OpenSSL to disable blinding, use the non-constant-time exponentiation implementation, and use the non-word-based Montgomery reduction implementation. We measure the time to decrypt 256-byte messages with a random 2048-bit key. We chose messages whose Montgomery representations differ by multiples of 2^1016. Figure 6(A) shows the average observed running time for such a decryption operation, which is around 4.16 ms. The messages are displayed from left to right in sorted order of how many Montgomery reductions occur during decryption. Each message was sampled roughly 8,000 times, and the samples were randomly split into 4 sample sets. As observed by Brumley et al. [15], the number of Montgomery reductions can be roughly determined from the running time of an unprotected RSA decryption. Such information can be used to derive full-length keys.

Fig. 6: Protecting against timing attacks on unblinded RSA. (A) Unprotected: average decryption time per message (ns) for 4 trials. (B) Protected: average decryption times show no message-dependent signal. (Plots omitted.)
We then apply our defense to this decryption with T_max set to 9.68 × 10^6 cycles ≈ 4.21 ms. One timer interrupt is guaranteed to occur during such an operation, as timer interrupts occur at a rate of 250/s on our target machine. We collect 30 million measurements and observe a multi-modal padded distribution with four narrow, disjoint peaks corresponding to the padding algorithm using different T_ext_preempt values for 1, 2, 3, and 4 interrupts, respectively. The four peaks contain, respectively, 94.0%, 5.8%, 0.6%, and 0.4% of the samples. We did not observe these probabilities varying across different messages. Hence, in Figure 6(B), we show the average observed time considering only observations from within the first peak. Again, the samples are split into 4 random sample sets, with each key sampled around 700,000 times. We observe no message-dependent signal.
Preventing cache attacks on AES encryption. We next verify that our system protects against local cache attacks. Specifically, we measured the effectiveness of our defense against the PRIME+PROBE attack of Osvik et al. [35] on the software implementation of AES encryption in OpenSSL. For our tests, we apply the attack to only the first round of AES instead of the full cipher, to make the conditions very favorable to the attacker, as subsequent rounds of AES add more noise to the cache readings. In this attack, the attacker first primes the cache by filling a selection of cache sets with the attacker's memory lines. Next, the attacker coerces the victim process to perform an AES encryption of a chosen plaintext on the same processor core. Finally, the attacker reloads the memory lines it used to fill the cache sets prior to the encryption. This allows the attacker to detect whether the reloaded lines are still cached, by monitoring timing or performance counters, and thus to infer which memory lines were accessed during the AES encryption.
Fig. 7: Protecting against cache attacks on software AES. (A) Unprotected and (B) protected probe measurements per cache set, for all values of the top 4 bits of p_i. (Plots omitted.)

On our test machine, the OpenSSL software AES implementation performs table lookups during the first round of encryption that access one of 16 cache sets in each of 4 lookup tables. The actual cache sets accessed during the operation are determined by XORs of the top 4 bits of certain plaintext bytes p_i and certain key bytes k_i. By repeatedly observing cache accesses on chosen plaintexts where p_i takes all possible values of its top 4 bits, but where the rest of the plaintext is randomized, the attacker observes cache-line access patterns revealing the top 4 bits of p_i ⊕ k_i, and hence the top 4 bits of the key byte k_i. This simple attack can be extended to learn the entire AES key.
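The relation the attacker exploits can be sketched as follows (the helper is illustrative, not OpenSSL code; the geometry is the standard one of 1 KB T-tables with 4-byte entries and 64-byte cache lines, so 16 entries share a line):

```cpp
#include <cstdint>

// First-round AES indexes a T-table with p XOR k. With 16 four-byte
// entries per 64-byte cache line, the line touched (one of 16 per
// table) is selected by the top 4 bits of the index. Since XOR is
// bitwise, (p ^ k) >> 4 == (p >> 4) ^ (k >> 4): the observed line
// directly reveals top4(p) XOR top4(k).
uint8_t accessed_line(uint8_t p, uint8_t k) {
    return static_cast<uint8_t>((p ^ k) >> 4);
}
```

For example, with k_5 = 80 (decimal, i.e., 0x50), a plaintext byte of 0 lights up line 5, which is exactly the pattern visible in the unprotected panel of Figure 7.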
We use a hardware performance-monitoring counter that counts L2 cache misses as the probe measurement, and for each measurement we subtract the average measurement for that cache set over all values of p_i. Figures 7(A) and 7(B) show the probe measurements when performing this attack for all values of the top 4 bits of p_0 (left) and p_5 (right), without and with our protection scheme, respectively. Darker cells indicate elevated measurements, and hence imply cache sets containing a line loaded by the attacker during the "prime" phase that was evicted by the AES encryption. The secret key k is randomly chosen, except that k_0 = 0 and k_5 = 80 (decimal). Without our solution, the cache-set accesses show a pattern revealing p_i ⊕ k_i, which can be used to determine that the top 4 bits of k_0 and k_5 are indeed 0 and 5, respectively. Our solution flushes the L2 cache lazily before handing it over to any untrusted process, and thus ensures that no signal is observed by the attacker, as shown in Figure 7(B).
B. Performance evaluation
Performance costs of individual components. Table I shows the individual cost of the different components of our defense. Our total performance overhead is less than the sum of these components, as we do not perform most of these operations on the critical path. Note that retrieving the number of times a process was interrupted, or determining whether a voluntary context switch occurred during a protected function's execution, is negligible due to our modifications to the Linux kernel described in Section V.

Component | Cost (ns)
m = 5 time randomization step, WCET | 710
Get interrupt counters | 16
Detect context switch | 4
Set and restore SCHED_FIFO | 2,650
Set and restore CPU affinity | 1,235
Flush L1D+L2 cache | 23,000
Flush BTB cache | 7,000

TABLE I: Performance overheads of individual components of our defense. WCET indicates worst-case execution time. Only the costs listed in the upper half of the table are incurred on each call to a protected function.
Microbenchmarks: cryptographic operations in multiple languages. We perform a set of microbenchmarks that test the impact of our solution on individual operations, such as RSA and ECDSA signing, in the OpenSSL C library and in the BouncyCastle Java library. To apply our defense to BouncyCastle, we constructed JNI wrapper functions that call the fixed_time_begin and fixed_time_end functions. Since both libraries implement RSA blinding to defend against timing attacks, we disable RSA blinding when applying our defense.
The results of the microbenchmarks are shown in Table II. Note that the delays experienced in real applications will be significantly smaller than in these microbenchmarks, as real applications also perform I/O operations that amortize the performance overhead.
For OpenSSL, our solution adds between 3% (for RSA) and 71% (for ECDSA) to the average cost of computing a signature. However, we offer significantly reduced tail latency for RSA signatures. This behavior is caused by the fact that stock OpenSSL regenerates the blinding factors every 32 calls to the signing function, to amortize the cost of generating them.
Focusing on the BouncyCastle results, our solution yields a 2% decrease in cost for RSA signing and a 63% increase in cost for ECDSA signing, compared to the stock BouncyCastle implementation. We believe that this increase in cost for ECDSA is justified by the increase in security, as the stock BouncyCastle implementation does not defend against local timing attacks. Furthermore, we believe that some optimizations, such as configuring the Java VM to schedule garbage collection outside of protected-function executions, could reduce this overhead.
Macrobenchmark: protecting the TLS state machine. We applied our solution to protect the server-side implementation of the TLS connection protocol in OpenSSL. The TLS protocol is implemented as a state machine in OpenSSL, which presented a challenge for applying our solution, since our solution is defined in terms of protected functions. Additionally, reading and writing to a socket is interleaved with cryptographic operations in the specification of the TLS protocol, which conflicts with our solution's requirement that no blocking I/O may be performed within a protected function.
RSA 2048-bit sign | Mean (ms) | 99% Tail
OpenSSL w/o blinding | 1.45 | 1.45
Stock OpenSSL | 1.50 | 2.18
OpenSSL + our solution | 1.55 | 1.59
BouncyCastle w/o blinding | 9.02 | 9.41
Stock BouncyCastle | 9.80 | 10.20
BouncyCastle + our solution | 9.63 | 9.82

ECDSA 256-bit sign | Mean (ms) | 99% Tail
Stock OpenSSL | 0.07 | 0.08
OpenSSL + our solution | 0.12 | 0.38
Stock BouncyCastle | 0.22 | 0.25
BouncyCastle + our solution | 0.36 | 0.48

TABLE II: Impact on performance of signing a 100-byte message using SHA-256 with RSA or ECDSA for the OpenSSL and BouncyCastle implementations. Measurements are in milliseconds. We disable blinding when applying our defense to the RSA signature operation. Bold text indicates a measurement where our defense results in better performance than the stock implementation.
We addressed both challenges by generalizing the notion of a protected function to that of a protected interval: an interval of execution starting with a call to fixed_time_begin and ending with fixed_time_end. We then split an execution of the TLS protocol into protected intervals, on boundaries defined by transitions of the TLS state machine and by low-level socket read and write operations. To achieve this, we first inserted calls to fixed_time_begin and fixed_time_end at the start and end of each state within the TLS state-machine implementation. Next, we modified the low-level socket-read and socket-write OpenSSL wrapper functions to end the current interval, communicate with the socket, and then start a new interval. Thus divided, all cryptographic operations performed inside the TLS implementation fall within a protected interval. Each interval is uniquely identifiable by the name of the current TLS state concatenated with an integer incremented every time a new interval is started within the same TLS state (equivalently, the number of socket operations that have occurred so far during the state).
The advantage of this strategy is that, unlike prior defenses, it protects the entire implementation of the TLS state machine from any form of timing attack. However, such a protection scheme may incur additional overhead from protecting parts of the protocol that are not vulnerable to timing attacks because they do not operate on secret data.
We evaluate the performance of the fully protected TLS state machine, as well as an implementation that protects only the public-key signing operation. The results are shown in Table III. We observe an overhead of less than 5% on connection latency even when protecting the full TLS protocol.
Protecting sensitive data structures. We measured the overhead of applying our approach to protect the lookup operation of the C++ STL unordered_map. For this experiment, we populate the hash map with 1 million 64-bit integer keys and values. We assume that the attacker cannot insert elements into the hash map or cause collisions. The average cost of performing a lookup of a key present in the map is 0.173µs without any defense and 2.46µs with our defense applied. Most of this overhead is caused by the fact that the worst-case execution time of the lookup operation is significantly larger
Connection latency (RSA)                    Mean (ms)   99% Tail
Stock OpenSSL                               5.26        6.82
Stock OpenSSL + Our solution (sign only)    5.33        6.53
Stock OpenSSL + Our solution                5.52        6.74

Connection latency (ECDSA)                  Mean (ms)   99% Tail
Stock OpenSSL                               4.53        6.08
Stock OpenSSL + Our solution (sign only)    4.64        6.18
Stock OpenSSL + Our solution                4.75        6.36
TABLE III: The impact on TLS v1.2 connection latency when applying our defense to the OpenSSL server-side TLS implementation. We evaluate the cases where the server uses an RSA 2048-bit or ECDSA 256-bit signing key with SHA-256 as the digest function. Latency is given in milliseconds and measures the end-to-end connection time. The client uses the unmodified OpenSSL library. We evaluate our defense when only protecting the signing operation and when protecting all server-side routines performed as part of the TLS connection protocol that use cryptography. Even when the full TLS protocol is protected, our approach adds an overhead of less than 5% to average connection latency. Bold text indicates a measurement where our defense results in better performance than the stock implementation.
than the average case. The profiled worst-case execution time of the lookup when interrupts do not occur is 1.32µs at κ = 10⁻⁵. Thus, any timing channel defense will cause the lookup to take at least 1.32µs. The worst-case execution estimate of the lookup operation increases to 13.3µs when interrupt cases are not excluded, hence our scheme benefits significantly from adapting to interrupts during padding for this example. Another major part of the overhead of our solution (0.710µs) comes from the randomization step to ensure safe padding. As we described earlier in Section VI-A, the randomization step is crucial to ensure that there is no timing leakage.
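The padded lookup described above can be sketched in a few lines. This is a simplification, not our actual implementation: the worst-case bound is passed in explicitly (our system profiles it offline, e.g., 1.32µs at κ = 10⁻⁵), and the plain busy-wait stands in for the randomized padding release discussed in Section VI-A.

```cpp
#include <chrono>
#include <cstdint>
#include <unordered_map>

// Perform an unordered_map lookup, then pad so that every lookup
// appears to take the (profiled) worst-case time, hiding variation
// between hits, misses, and differently sized hash buckets.
std::uint64_t padded_lookup(
    const std::unordered_map<std::uint64_t, std::uint64_t>& m,
    std::uint64_t key,
    std::chrono::nanoseconds worst_case) {
    auto start = std::chrono::steady_clock::now();
    std::uint64_t value = 0;  // default when the key is absent
    auto it = m.find(key);
    if (it != m.end()) value = it->second;
    // Pad out the remaining time up to the worst-case bound.
    while (std::chrono::steady_clock::now() - start < worst_case) { /* spin */ }
    return value;
}
```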
Hardware portability. Our solution is not specific to any particular hardware. It will work on any hardware that supports a standard cache hierarchy and where page coloring can be implemented. To test the portability of our solution, we executed some of the benchmarks mentioned in Sections VI-A and VI-B on a 2.93 GHz Intel Xeon X5670 CPU. We confirmed that our solution successfully protects against the local and remote timing attacks on that platform too. The relative performance overheads were similar to the ones reported above.
VII. LIMITATIONS
No system calls inside protected functions. Our current prototype does not support protected functions that invoke system calls. A system call can inadvertently leak information to an attacker by leaving state in shared kernel data structures, which an attacker might indirectly observe by invoking the same system call and timing its duration. Alternatively, a system call might access regions of the L3 cache that can be snooped by an attacker process.
The lack of system call support has turned out not to be a significant issue in practice, as our experiments so far indicate that system calls are rarely used in functions dealing with sensitive data (e.g., cryptographic operations). However, if needed in the future, one way of supporting system calls inside protected functions while still avoiding this leakage is to apply our solution to the kernel itself. For example, we can pad any system calls that modify shared kernel data structures to their worst-case execution times.
Indirect timing variations in unprotected code. Our approach does not currently defend against timing variations in the execution of non-sensitive code segments that may be indirectly affected by a protected function's execution. For example, consider the case where a non-sensitive function from a process gets scheduled on a processor core immediately after another process from the same user finishes executing a protected function. In such a case, our solution will not flush the state of per-core resources like the L1 cache, as both processes belong to the same user. However, if such remnant cache state affects the timing of the non-sensitive function, an attacker may be able to observe these variations and infer some information about the protected function.
Note that there are currently no known attacks that exploit this kind of leakage. A conservative approach that prevents such leakage is to flush all per-CPU resources at the end of each protected function. This will, of course, result in higher performance overheads. The costs associated with cleansing different types of per-CPU resources are summarized in Table I.
Leakage due to fault injection. If an attacker can cause a process to crash in the middle of a protected function's execution, the attacker can potentially learn secret information. For example, consider a protected function that first performs a sensitive operation and then parses some input from the user. An attacker can learn the duration of the sensitive operation by providing a bad input that makes the parser crash and measuring how long it takes the victim process to crash.
Our solution, in its current form, does not protect against such attacks. However, this is not a fundamental limitation. One simple way of overcoming these attacks is to modify the OS to apply the time padding for a protected function even after it has crashed, as part of the OS's cleanup handler. This can be implemented by modifying the OS to keep track of all processes that are executing protected functions at any given point in time, along with their respective padding parameters. If any protected function crashes, the OS cleanup handler for the corresponding process can apply the desired amount of padding.
VIII. RELATED WORK
A. Defenses against remote timing attacks
Remote timing attacks exploit the input-dependent execution times of cryptographic operations. There are three main approaches to making cryptographic operations' execution times independent of their inputs: static transformation, application-specific changes, and dynamic padding.
Application-specific changes. One conceptually simple way to defend an application against timing attacks is to modify its sensitive operations such that their timing behavior is not key-dependent. For example, AES implementations [10, 27, 30] can be modified to ensure that their execution times are key-independent. Note that, since cache behavior impacts running time, achieving secret-independent timing usually requires rewriting the operation so that its memory access pattern is also independent of secrets. Such modifications are application-specific, hard to design, and very brittle. By contrast, our solution is completely independent of the application and the programming language.
Static transformation. An alternative approach to preventing remote attacks is to use static transformations on the implementation of the cryptographic operation to make it constant time. One can use a static analyzer to find the longest possible path through the cryptographic operation and insert padding instructions that have no side effects (like NOP) along other paths so that they take the same amount of time as the longest path [17, 20]. While this approach is generic and can be applied to any sensitive operation, it has several drawbacks. On modern architectures like x86, the execution times of several instructions (e.g., the integer divide instruction and multiple floating-point instructions) depend on the values of their inputs. This makes it extremely hard and time-consuming to statically estimate the execution times of these instructions. Moreover, it is very hard to statically predict changes in execution time due to internal cache collisions in the implementation of the cryptographic operation. To avoid such cases, our solution uses dynamic offline profiling to estimate the worst-case runtime of a protected function. However, such dynamic techniques suffer from incompleteness, i.e., they might miss worst-case execution times triggered by pathological inputs.
Dynamic padding. Dynamic padding techniques add a variable amount of padding to a sensitive computation, depending on the computation's observed execution time, in order to mitigate the timing side channel. Several prior works [6, 18, 24, 31, 47] have presented ways to pad the execution of a black-box computation to certain predetermined thresholds and obtain bounded information leakage. Zhang et al. designed a new programming language that, when used to write sensitive operations, can enforce limits on timing information leakage [48]. The major drawback of existing dynamic padding schemes is that they incur large performance overhead. This results from the fact that their estimates of the worst-case execution time tend to be overly pessimistic, as the worst case depends on several external parameters like OS scheduling, the cache behavior of simultaneously running programs, etc. For example, Zhang et al. [47] set the worst-case execution time to be 300 seconds for protecting a Wiki server. Such overly pessimistic estimates increase the amount of required padding and thus result in significant performance overheads (90−400% in macro-benchmarks [47]). Unlike existing dynamic padding schemes, our solution incurs minimal performance overhead and protects against both local and remote timing attacks.
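As a contrast with our fixed worst-case padding, the threshold-based schemes above can be sketched in a few lines: the observed running time is rounded up to the next threshold in a geometric schedule, so the attacker only learns which bucket the computation fell into. The doubling schedule here is an illustrative assumption, not a detail taken from any one of the cited systems.

```cpp
#include <chrono>

// Round an observed running time up to the next threshold in a
// doubling schedule. The caller then spins or sleeps until that much
// time has elapsed, bounding the leakage to the bucket index.
std::chrono::microseconds pad_to_threshold(std::chrono::microseconds observed,
                                           std::chrono::microseconds initial) {
    auto threshold = initial;
    while (threshold < observed) threshold *= 2;  // geometric schedule
    return threshold;
}
```

The pessimism problem discussed above shows up directly here: if the true worst case is far above typical runs, either the initial threshold must be huge or slow runs repeatedly double past it, both of which inflate padding.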
B. Defenses against local attacks
Local attackers can also perform timing attacks, hence some of the defenses described in the previous section may also be used against some local attacks. However, local attackers additionally have access to shared hardware resources that contain information related to the target sensitive operation, as well as to fine-grained timers.
A common local attack vector is to probe a shared hardware resource and then, using a fine-grained timer, measure how long the probe took to run. Most of the proposed defenses against such attacks try to either remove access to fine-grained timers or isolate access to the shared hardware resources. Some of these defenses also try to minimize information leakage by obfuscating the sensitive operation's hardware access patterns. We describe these approaches in detail below.
Removing fine-grained timers. Several prior projects have evaluated removing or modifying time measurements taken on the target machine [33, 34, 42]. Such solutions are often quite effective at preventing a large number of local side-channel attacks, as the underlying state of most shared hardware resources can only be read by accurately measuring the time taken to perform certain operations (e.g., reading a cache line).
However, removing access to wall-clock time is not sufficient to protect against all local attackers. For example, a local attacker executing multiple probe threads can infer time measurements by observing the scheduling behavior of the threads. Custom scheduling schemes (e.g., instruction-based scheduling) can eliminate such attacks [38], but implementing these defenses requires major changes to the OS scheduler. In contrast, our solution only requires minor changes to the OS scheduler and protects against both local and remote attackers.
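The timer-degradation idea can be illustrated with a toy coarsening function applied to timestamps handed to untrusted code; real defenses such as TimeWarp [34] additionally inject randomized offsets rather than relying on simple truncation.

```cpp
#include <cstdint>

// Round a cycle counter value down to a coarse grain before exposing
// it to untrusted code, destroying the resolution needed to time a
// single cache-line probe.
std::uint64_t coarsen(std::uint64_t cycles, std::uint64_t grain) {
    return cycles - (cycles % grain);
}
```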
Preventing sharing of hardware state across processes. Many proposed defenses against local attackers prevent an attacker from observing state changes to shared hardware resources caused by a victim process. We divide the proposed defenses into five categories and describe them next.
Resource partitioning. Partitioning shared hardware resources can defeat local attackers, as they cannot access the same partition of a resource as the victim. Kim et al. [28] present an efficient management scheme for preventing local timing attacks across virtual machines (VMs). Their technique locks memory regions accessed by sensitive functions into reserved portions of the L3 cache. This scheme can be more efficient than page coloring. Such protection schemes are complementary to our technique. For example, our solution could be modified to use such a mechanism instead of page coloring to dynamically partition the L3 cache.
Some other resource-partitioning schemes (e.g., Ristenpart et al. [37]) suggest allocating dedicated hardware to each virtual machine instance to prevent cross-VM attacks. However, such schemes waste hardware resources, as they decrease the amount of resources available to concurrent processes. By contrast, our solution utilizes shared hardware resources efficiently: they are only isolated during the execution of protected functions, and the time a process spends executing protected functions is usually much smaller than the time it spends in non-sensitive computations.
Limiting concurrent access. If gang scheduling [28] is used or hyperthreading is disabled, an attacker can only observe per-CPU resources after it has preempted a victim. Hence, reducing the frequency of preemptions reduces the feasibility of cache attacks on per-CPU caches. Varadarajan et al. [41] propose using minimum runtime guarantees to ensure that a VM is not preempted too frequently. However, as noted in [41], such a scheme is very hard to implement in an OS scheduler as, unlike a hypervisor scheduler, an OS scheduler must deal with an unbounded number of processes.
Custom hardware. Custom hardware can be used to obfuscate and randomize the victim process's usage of the hardware. For example, Wang et al. [43, 44] proposed new ways of designing caches that ensure that no information about cache usage is shared across different processes. However, such schemes have limited practical use as they, by design, cannot be deployed on off-the-shelf commodity hardware.
Flushing state. Another class of defenses ensures that the state of any per-CPU hardware resources is cleared before transferring them from one process to another. Düppel, by Zhang et al. [50], periodically flushes the per-CPU L1 and (optionally) L2 caches in a multi-tenant VM setting. Their solution also requires hyperthreading to be disabled. They report around 7% overhead on regular workloads. In essence, this scheme is similar to our solution's technique of flushing per-CPU resources in the OS scheduler. However, unlike Düppel, we flush the state lazily, only on a context switch to a user process different from the one executing a protected operation. Also, Düppel only protects against local cache attacks. We protect against both local and remote timing and cache attacks while still incurring less overhead than Düppel.
Application transformations. Sensitive operations in different programs can also be modified to exhibit either secret-independent or obfuscated hardware access patterns. If access to the hardware is independent of secrets, then an attacker cannot use any of the state leaked through shared hardware to learn anything meaningful about the sensitive operations. Several prior projects have shown how to modify AES implementations to obfuscate their cache access patterns [9, 10, 13, 35, 40]. Similarly, recent versions of OpenSSL use a specifically modified implementation of RSA that ensures secret-independent cache accesses. Some of these transformations can also be applied dynamically. For example, Crane et al. [21] implement a system that dynamically applies cache-access-obfuscating transformations to an application at runtime.
However, these transformations are specific to particular cryptographic operations and are very hard to implement and maintain correctly. For example, 924 lines of assembly code had to be added to OpenSSL to make the RSA implementation's cache accesses secret-independent.
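The core transformation these projects apply to table-driven ciphers can be illustrated with a generic secret-independent lookup: every table entry is read regardless of the secret index, and the result is selected with branch-free masking, so neither the cache access pattern nor the branch history depends on the secret. This is a textbook sketch, not OpenSSL's actual code.

```cpp
#include <cstddef>
#include <cstdint>

// Read a table entry at a secret index without secret-dependent
// memory accesses or branches: touch every entry and accumulate
// the wanted one under a constant-time mask.
std::uint8_t ct_lookup(const std::uint8_t* table, std::size_t len,
                       std::size_t secret_index) {
    std::uint8_t result = 0;
    for (std::size_t i = 0; i < len; i++) {
        // mask is 0xFF when i == secret_index and 0x00 otherwise,
        // computed arithmetically rather than with a branch.
        std::uint8_t mask = static_cast<std::uint8_t>(
            -static_cast<std::int64_t>(i == secret_index));
        result |= table[i] & mask;
    }
    return result;
}
```

The cost of this style of rewrite is exactly the maintenance burden noted above: every secret-indexed access in the cipher must be rewritten this way, and the compiler must be prevented from reintroducing branches.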
IX. CONCLUSION
We presented a low-overhead, cross-architecture defense that protects applications against both local and remote timing attacks with minimal application code changes. Our experiments and evaluation also show that our defense works across different applications written in different programming languages.
Our solution defends against both local and remote attacks by using a combination of two main techniques: (i) a time padding scheme that only takes secret-dependent time variations into account, and (ii) preventing information leakage via shared resources such as the cache and branch prediction buffers. We demonstrated that applying small time pads accurately is non-trivial because the timing loop itself may leak information. We developed a method by which small time pads can be applied securely. We hope that our work will motivate application developers to leverage some of our techniques to protect their applications from a wide variety of timing attacks. We also expect that the underlying principles of our solution will be useful in future work protecting against other forms of side-channel attacks.
ACKNOWLEDGMENTS
This work was supported by NSF, DARPA, ONR, and a Google PhD Fellowship to Suman Jana. Opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.
REFERENCES
[1] O. Aciiçmez. Yet Another MicroArchitectural Attack: Exploiting I-Cache. In CSAW, 2007.
[2] O. Aciiçmez, Ç. Koç, and J. Seifert. On the power of simple branch prediction analysis. In ASIACCS, 2007.
[3] O. Aciiçmez, Ç. Koç, and J. Seifert. Predicting secret keys via branch prediction. In CT-RSA, 2007.
[4] O. Aciiçmez and J. Seifert. Cheap hardware parallelism implies cheap security. In FDTC, 2007.
[5] M. Andrysco, D. Kohlbrenner, K. Mowery, R. Jhala, S. Lerner, and H. Shacham. On Subnormal Floating Point and Abnormal Timing. In S&P, 2015.
[6] A. Askarov, D. Zhang, and A. Myers. Predictive black-box mitigation of timing channels. In CCS, 2010.
[7] G. Barthe, G. Betarte, J. Campo, C. Luna, and D. Pichardie. System-level non-interference for constant-time cryptography. In CCS, 2014.
[8] D. J. Bernstein. ChaCha, a variant of Salsa20. http://cr.yp.to/chacha.html.
[9] D. J. Bernstein. Cache-timing attacks on AES, 2005.
[10] J. Blömer, J. Guajardo, and V. Krummel. Provably secure masking of AES. In Selected Areas in Cryptography, pages 69–83, 2005.
[11] J. Bonneau and I. Mironov. Cache-collision timing attacks against AES. In CHES, 2006.
[12] A. Bortz and D. Boneh. Exposing private information by timing web applications. In WWW, 2007.
[13] E. Brickell, G. Graunke, M. Neve, and J. Seifert. Software mitigations to hedge AES against cache-based software side channel vulnerabilities. IACR Cryptology ePrint Archive, 2006.
[14] B. Brumley and N. Tuveri. Remote timing attacks are still practical. In ESORICS, 2011.
[15] D. Brumley and D. Boneh. Remote Timing Attacks Are Practical. In USENIX Security, 2003.
[16] F. R. K. Chung, P. Diaconis, and R. L. Graham. Random walks arising in random number generation. The Annals of Probability, pages 1148–1165, 1987.
[17] J. Cleemput, B. Coppens, and B. D. Sutter. Compiler mitigations for time attacks on modern x86 processors. TACO, 8(4):23, 2012.
[18] D. Cock, Q. Ge, T. Murray, and G. Heiser. The Last Mile: An Empirical Study of Some Timing Channels on seL4. In CCS, 2014.
[19] A. Colin and I. Puaut. Worst case execution time analysis for a processor with branch prediction. Real-Time Systems, 18(2-3):249–274, 2000.
[20] B. Coppens, I. Verbauwhede, K. D. Bosschere, and B. D. Sutter. Practical mitigations for timing-based side-channel attacks on modern x86 processors. In S&P, 2009.
[21] S. Crane, A. Homescu, S. Brunthaler, P. Larsen, and M. Franz. Thwarting cache side-channel attacks through dynamic software diversity. In NDSS, 2015.
[22] S. A. Crosby and D. S. Wallach. Denial of service via algorithmic complexity attacks. In USENIX Security, 2003.
[23] D. Gullasch, E. Bangerter, and S. Krenn. Cache Games – bringing access-based cache attacks on AES to practice. In S&P, 2011.
[24] A. Haeberlen, B. C. Pierce, and A. Narayan. Differential privacy under fire. In USENIX Security, 2011.
[25] R. Heckmann and C. Ferdinand. Worst-case execution time prediction by static program analysis. In IPDPS, 2004.
[26] G. Irazoqui, T. Eisenbarth, and B. Sunar. Jackpot: stealing information from large caches via huge pages. Cryptology ePrint Archive, Report 2014/970, 2014. http://eprint.iacr.org/.
[27] E. Käsper and P. Schwabe. Faster and timing-attack resistant AES-GCM. In CHES, 2009.
[28] T. Kim, M. Peinado, and G. Mainar-Ruiz. StealthMem: System-level protection against cache-based side channel attacks in the cloud. In USENIX Security, 2012.
[29] P. Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In CRYPTO, 1996.
[30] R. Könighofer. A fast and cache-timing resistant implementation of the AES. In CT-RSA, 2008.
[31] B. Köpf and M. Dürmuth. A provably secure and efficient countermeasure against timing attacks. In CSF, 2009.
[32] A. Langley. Lucky Thirteen attack on TLS CBC, 2013. www.imperialviolet.org/2013/02/04/luckythirteen.html.
[33] P. Li, D. Gao, and M. Reiter. Mitigating access-driven timing channels in clouds using StopWatch. In DSN, 2013.
[34] R. Martin, J. Demme, and S. Sethumadhavan. TimeWarp: rethinking timekeeping and performance monitoring mechanisms to mitigate side-channel attacks. In ISCA, 2012.
[35] D. Osvik, A. Shamir, and E. Tromer. Cache attacks and countermeasures: the case of AES. In CT-RSA, 2006.
[36] C. Percival. Cache missing for fun and profit, 2005.
[37] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage. Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. In CCS, 2009.
[38] D. Stefan, P. Buiras, E. Yang, A. Levy, D. Terei, A. Russo, and D. Mazières. Eliminating cache-based timing attacks with instruction-based scheduling. In ESORICS, 2013.
[39] K. Suzaki, K. Iijima, T. Yagi, and C. Artho. Memory deduplication as a threat to the guest OS. In Proceedings of the Fourth European Workshop on System Security, page 1. ACM, 2011.
[40] E. Tromer, D. Osvik, and A. Shamir. Efficient cache attacks on AES, and countermeasures. Journal of Cryptology, 23(1):37–71, 2010.
[41] V. Varadarajan, T. Ristenpart, and M. Swift. Scheduler-based defenses against cross-VM side-channels. In USENIX Security, 2014.
[42] B. Vattikonda, S. Das, and H. Shacham. Eliminating fine grained timers in Xen. In CCSW, 2011.
[43] Z. Wang and R. Lee. New cache designs for thwarting software cache-based side channel attacks. In ISCA, 2007.
[44] Z. Wang and R. Lee. A novel cache architecture with enhanced performance and security. In MICRO, 2008.
[45] Y. Yarom and N. Benger. Recovering OpenSSL ECDSA Nonces Using the FLUSH+RELOAD Cache Side-channel Attack. IACR Cryptology ePrint Archive, 2014.
[46] Y. Yarom and K. Falkner. FLUSH+RELOAD: a High Resolution, Low Noise, L3 Cache Side-Channel Attack. In USENIX Security, 2014.
[47] D. Zhang, A. Askarov, and A. Myers. Predictive mitigation of timing channels in interactive systems. In CCS, 2011.
[48] D. Zhang, A. Askarov, and A. Myers. Language-based control and mitigation of timing channels. In PLDI, 2012.
[49] Y. Zhang, A. Juels, M. Reiter, and T. Ristenpart. Cross-VM side channels and their use to extract private keys. In CCS, 2012.
[50] Y. Zhang and M. Reiter. Düppel: Retrofitting commodity operating systems to mitigate cache side channels in the cloud. In CCS, 2013.