Recently, I stumbled upon the paper “Eventual Consistency: How soon is eventual? - An Evaluation of Amazon S3’s Consistency Behavior” by David Bermbach and Stefan Tai (both KIT). Although the paper was not written with security in mind but rather for economic purposes, I consider it to have a high impact on forensic capabilities in cloud environments. This is actually pretty interesting stuff.
Eventual Consistency describes a state in which a data object has not yet been fully replicated throughout the whole cloud storage environment. This means it will be fully replicated at some point in the future; however, this requires a) time and b) no further errors in the replication process.
The authors proposed the following basic steps to measure consistency (a sketch in code follows the list):
- Create a timestamp.
- Write a version number to the storage system.
- Continuously read until the old version number is no longer returned, then create a new timestamp.
- Calculate the difference between the write timestamp and the second timestamp (time of the last read of the previous version).
- Repeat these steps to achieve statistical significance.
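To make the measurement loop a bit more tangible, here is a minimal sketch of how these steps could look in practice. This is not the authors’ actual benchmark code; it assumes boto3, a hypothetical bucket name, and an arbitrary run count:

```python
import time
import boto3

BUCKET = "consistency-probe-bucket"  # hypothetical name
KEY = "probe-object"
RUNS = 100  # repeat to approach statistical significance

s3 = boto3.client("s3")
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"initial")  # seed the key
windows = []

for version in range(RUNS):
    new_value = str(version).encode()

    # Steps 1 and 2: take a timestamp, then write the new version number.
    t_write = time.monotonic()
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=new_value)

    # Step 3: read continuously until the old version is no longer
    # returned, remembering when the previous version was last seen.
    t_last_stale = t_write
    while s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read() != new_value:
        t_last_stale = time.monotonic()  # still seeing the old version

    # Step 4: the inconsistency window is the difference between the
    # write timestamp and the last read of the previous version.
    windows.append(t_last_stale - t_write)

# Step 5: aggregate over many runs.
print(f"max observed inconsistency window: {max(windows):.3f}s")
```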
The results for Amazon S3 are quite interesting, and I highly recommend having a look at the paper. But what’s the impact on forensics?
Well, first of all, the approach outlined above is quite interesting in terms of shedding more light on the cloud black box. In general, I see some basic similarities to the paper published by Ristenpart et al.; the intention is somewhat the same, isn’t it?
Secondly, these consistency issues are interesting in terms of data remnants that could be used in potential forensic investigations.
Let’s assume that a customer uploads a data object into the cloud storage environment and, due to load balancing, the object is first written to only one specific storage server based in Europe. After a given time frame n, the object is replicated to k different storage servers around the globe. However, due to the nature of the storage environment (see S3), only one data object with a given name can be stored at a time; a short sketch of this behavior follows.
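The “one object per name” semantics are easy to picture. This is just an illustration with hypothetical names, assuming boto3 and a bucket without versioning; a second PUT to the same key simply replaces the object:

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "some-customer-bucket"  # hypothetical, versioning disabled
KEY = "report.pdf"

# First upload: the original object lives under this key.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"original content")

# Second upload to the same key: without versioning there is only one
# object per name, so this replaces the previous content (eventually,
# on every replica).
s3.put_object(Bucket=BUCKET, Key=KEY, Body=os.urandom(1024))

# A subsequent read returns the new data once replication has caught up.
latest = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
```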
Furthermore, we assume that this customer comes into the scope of a forensic investigation because the uploaded data objects are suspected to contain information about potential terrorist attacks (bad example, I know ;)). He knows that the law enforcement agencies will come after him and therefore tries to get rid of all data objects in his cloud storage account. He also knows that simple deletion is not enough, and hence decides to upload random data objects under the same names in order to overwrite the existing sensitive objects (versioning is not provided by the CSP). In terms of cloud forensics, the following two cases could be interesting for the forensic examiners:
Huge replication time window:
Given the case that the time n needed to replicate the random data objects to all storage servers is high, the forensic examiners could still be able to extract the sensitive data from other servers on which the old data object is still available.
Error in Replication Process:
The forensic examiners should also consider and investigate the case that the replication to all k storage servers has not been successful. There is still a chance that, e.g., only k-3 storage servers have received the new data object, so the old data object is still available on 3 servers globally.
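To make both cases a bit more concrete: one could, in principle, probe for stale replicas from the outside by repeatedly reading the key and comparing the response against a known hash of the original object. This is a hedged sketch only; bucket, key, hash, and timings are hypothetical placeholders, and it assumes the examiner already knows the original object’s MD5 from another source:

```python
import hashlib
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "some-customer-bucket"  # hypothetical
KEY = "report.pdf"
# Placeholder: MD5 of the original (overwritten) object.
OLD_MD5 = "0123456789abcdef0123456789abcdef"

deadline = time.monotonic() + 300  # probe for five minutes (arbitrary)
stale_hits = 0
while time.monotonic() < deadline:
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    if hashlib.md5(body).hexdigest() == OLD_MD5:
        stale_hits += 1  # a replica is still serving the old object
    time.sleep(1)

print(f"stale reads observed: {stale_hits}")
```

Every stale read during the replication window, or after a failed replication, would be a hint that the old object still exists somewhere in the storage environment.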
Finally, I guess this topic is worth investigating further from a forensics perspective.