Long-time readers will recall that I've been tinkering with shiny toys in the form of SSDs, trying to assess how changes in storage technology should change the way transaction logging is designed. SSDs are here now, getting cheaper all the time and therefore becoming more 'meh' by the minute. So, I need something even newer and shinier to drool over...
Enter memristors, arguably the coolest tech to emerge from HP since the last Total-e-Server release. Initially intended to compete with flash, memristor technology also has the longer-term potential to give us persistent RAM. A server with a power-loss-tolerant storage mechanism that runs at approximately main memory speed will fundamentally change the way we think about storage hierarchies, process state and fault tolerance.
Until now the on-chip cache hierarchy and off-chip RAM have both come under the heading of 'volatile', whilst disk has been considered persistent, give or take a bit of RAID controller caching.
Volatile storage is managed by the memory subsystem, with the cache hierarchy within that tier largely transparent to the O/S, let alone the apps. Yes, performance gurus will take issue with that - understanding cache coherency models is vital to getting the best out of multi-core chips and multi-socket servers. But by and large we don't control it directly - MESI is hardcoded in the CPU and we only influence it with simple primitives - memory fencing, thread-to-core pinning and such.
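For the curious, here's a minimal sketch of those two primitives in C, assuming Linux/glibc and keeping error handling to the bare minimum. Note that the fence constrains ordering as seen by other cores; it doesn't push anything out of the (volatile) cache hierarchy.

#define _GNU_SOURCE
#include <sched.h>        /* CPU_ZERO, CPU_SET, sched_setaffinity */
#include <stdatomic.h>    /* atomic_thread_fence */
#include <stdio.h>

int main(void)
{
    /* Pin the calling thread to core 0, tying it to one physical
       cache hierarchy. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* Full memory fence: loads/stores issued before this point are
       ordered before any issued after it, as observed by other cores.
       It does not flush data to anywhere persistent. */
    atomic_thread_fence(memory_order_seq_cst);

    printf("pinned and fenced\n");
    return 0;
}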
Persistent storage meanwhile is managed by the file system stack - the O/S block cache, disk drivers, RAID controllers, on-device and on-controller caches etc. As more of this tier is implemented in software, we have a little more control over the cache model, via O_DIRECT, fsync, firmware config tweaking and such. Most critically, we can divide the persistent storage into different pools with different properties. The best-known example is the age-old configuration suggestion for high performance transaction processing: put the log storage on a dedicated device.
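That pattern boils down to something like the following sketch: append a record to the log and force it to stable storage before acknowledging the transaction. The /mnt/txlog mount point standing in for a dedicated device is hypothetical, and error handling is minimal.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *record = "txid=42 state=PREPARED\n";

    /* The log lives on its own device, mounted (hypothetically) at /mnt/txlog. */
    int fd = open("/mnt/txlog/journal", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, record, strlen(record)) < 0) { perror("write"); return 1; }

    /* The write above may still be sitting in the O/S block cache;
       fdatasync forces it down to the persistent medium (on a
       well-behaved stack) before we report the transaction as durable. */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    close(fd);
    return 0;
}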
So what will the programming model look like when we have hardware that offers persistent RAM, either for all the main memory or, more likely in the medium term, for some subset of it? Will the entire process state survive a power loss at all cache tiers, from the on-CPU registers to the disk platters, or will we need fine-grained cache control to say 'synchronously flush from volatile RAM to persistent RAM', much as we currently force a sync to disk? How will we resume execution after a power loss? Will we need to explicitly reattach to our persistent RAM and rebuild our transient data structures from its contents, or will the O/S magically handle it all for us? Do we explicitly serialize data moving between volatile and non-volatile RAM, as we currently do with RAM to disk transfers, or is it automatic as with cache to RAM movements? What does this mean for us in terms of new language constructs, libraries and design patterns?
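To make that middle question concrete, here is a purely speculative sketch of what 'synchronously flush from volatile cache to persistent RAM' might look like, assuming a hypothetical persistent memory region already mapped into the process at pmem_base. The x86 CLFLUSH instruction and store fence used here are real, but whether they would be the right (or sufficient) primitives for persistent RAM is exactly the open question.

#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <stdint.h>
#include <string.h>

#define CACHE_LINE 64

/* Flush [addr, addr+len) out of the CPU cache hierarchy one line at a
   time, then fence so the flushes complete before we carry on. */
static void flush_to_pmem(const void *addr, size_t len)
{
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    for (; p < (uintptr_t)addr + len; p += CACHE_LINE)
        _mm_clflush((const void *)p);
    _mm_sfence();
}

/* Hypothetical usage: write a log record straight into persistent RAM
   and 'commit' it with a cache flush instead of an fsync. */
void log_record(char *pmem_base, const char *record, size_t len)
{
    memcpy(pmem_base, record, len);
    flush_to_pmem(pmem_base, len);
}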
Many, many interesting questions, the answers to which will dictate the programming landscape for a new generation of middleware and the applications that use it. The shift from HDD to SSD may seem minor in comparison. Will everything we know about the arcane niche of logging and crash recovery become obsolete, or become even more fundamental and mainstream? Job prospects aside, on this occasion I'm leaning rather in favour of obsolete.
1 comment:
OK, so what about entropy and the probability of catastrophic failures ;-) ?