Larger Space
Today’s 64-bit operating systems and applications can address terabytes of memory (8 TB of RAM in the largest servers, expanding to 16 TB+ with CXL memory), and far more with swap space on SSD. For most applications the question is not size, but cost and performance.
Swap space works very well for large image-processing problems where access is sequential, but performance drops off sharply when access is random. This is a particular problem for managed (garbage-collected) platforms, which frequently scan for references that can be released, causing frequent loads of objects that have not been referenced recently.
The cost of cloud-server memory rises steeply with capacity, thousands of times faster than the cost of SSD, which grows far more slowly.
Hiperspace allows very large datasets to be used as if they were in memory, without the cost of very large servers or the poor performance of swap space.
For analytical use-cases, Hiperspace acts as an intelligent swap space that loads objects as key/value pairs when they are needed. For durable use-cases, historical information remains in inexpensive storage until it is needed.
The RocksDB driver provides a best-in-class key/value store that can scale to trillions of objects while retaining fast access, keeping unreferenced objects on SSD.
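
To make the pattern concrete, the sketch below uses the RocksDbSharp package directly to show how objects can live on SSD as key/value pairs and be loaded only when referenced. This is an illustration of the underlying technique, not Hiperspace's own API; the store path, key scheme, and JSON serialization are assumptions for the example.

```csharp
// Illustrative sketch only (not Hiperspace's API): objects stored on SSD as
// key/value pairs via RocksDbSharp, loaded into memory only when referenced.
using System;
using System.Text;
using System.Text.Json;
using RocksDbSharp;

var options = new DbOptions().SetCreateIfMissing(true);
using var db = RocksDb.Open(options, "./example-space");   // assumed SSD-resident path

// Store: serialize an object and write it under its key.
var order = new { Id = 42, Customer = "Acme", Total = 99.5 };
db.Put(Encoding.UTF8.GetBytes("order/42"),
       JsonSerializer.SerializeToUtf8Bytes(order));

// Load on demand: only the referenced object moves from SSD into memory,
// so the in-memory working set stays small however large the dataset grows.
byte[] raw = db.Get(Encoding.UTF8.GetBytes("order/42"));
if (raw != null)
    Console.WriteLine(Encoding.UTF8.GetString(raw));
```

Because RocksDB keeps recently used blocks in an in-memory cache and leaves cold data on SSD, access to frequently referenced keys stays fast while rarely referenced objects cost nothing in RAM.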