In talking to our users, it has become clear that applications are getting more and more data hungry. According to IDC, data requirements are growing at an annual rate of 60 percent. This trend is driven further by cloud computing platforms, company consolidation, and huge application platforms like Facebook. There is good news, though. Server-class machines purchased this year have a minimum of 8 gigs of RAM and likely have 32. Cisco is now selling mainstream UCS boxes with over 380 gigs of RAM (which I have tried, and it is amazing). On EC2 you can rent 68.4-gig machines for 2 dollars an hour (I have also tried this, and it is also pretty amazing). Memory has gotten big and extremely cheap compared to things like developer time and user satisfaction.
Unfortunately, a problem exists as well. For Java/JVM applications it is an ever-increasing challenge to use all that data and memory. While the data/memory explosion has been occurring, the amount of heap a Java process can effectively use has stayed largely unchanged, due to the ever-longer garbage collection pauses that occur as a Java heap gets large. We see this issue at our customers, but we also see it here at Terracotta, tuning our own products and the products we use: third-party app servers, bug tracking systems, CMSs and the like. How many times have you heard "run lots of JVMs" or "don't grow the heap" from your vendors and/or devs?
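To see the pause problem for yourself, here is a minimal sketch (my illustration, not Terracotta code) of the classic pause-detector trick: a thread sleeps for a fixed interval and logs whenever it wakes up noticeably late, which on a large, busy heap is usually a stop-the-world collection.

```java
// Minimal pause detector: run alongside an allocation-heavy workload
// with a large heap (e.g. -Xmx8g) and watch the stalls appear.
public class PauseWatcher {
    public static void main(String[] args) throws InterruptedException {
        final long intervalMs = 10;   // how often we expect to wake up
        final long toleranceMs = 50;  // anything later than this counts as a stall
        long last = System.nanoTime();
        while (true) {
            Thread.sleep(intervalMs);
            long now = System.nanoTime();
            long elapsedMs = (now - last) / 1000000L;
            if (elapsedMs > intervalMs + toleranceMs) {
                // The JVM held us up far longer than the sleep we asked for.
                System.out.println("Stall of ~" + (elapsedMs - intervalMs) + " ms");
            }
            last = now;
        }
    }
}
```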
So we set out to first identify the problem as it exists today, both in the wild and in-house. We then created a solution, first for us (an internal customer) and then for all of the millions of nodes of Ehcache out there (all of you).
3 Big Problems Seen by Java Applications
My Application is too slow
My application can't keep up with my users. I've got tens of gigs of data in my database, but it's overloaded and/or too slow to service my needs, either due to the complicated nature of my queries or the volume of those queries. I want my data closer to the application, so of course I start caching. Caching helps, but I want to cache more. My machine has 16 gigs of RAM, but if I grow my heap that large, I get too many Java GC pauses.
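In code, the pattern in play here is plain cache-aside. A minimal sketch with the standard Ehcache API follows; the `loadFromDatabase` helper is hypothetical, standing in for the overloaded query.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class CustomerDao {
    // 10,000 in-memory entries; grow this much larger and the GC pain starts.
    private final Cache cache;

    public CustomerDao(CacheManager manager) {
        // name, maxElementsInMemory, overflowToDisk, eternal, TTL secs, TTI secs
        manager.addCache(new Cache("customers", 10000, false, false, 3600, 1800));
        this.cache = manager.getCache("customers");
    }

    public Object getCustomer(String id) {
        Element hit = cache.get(id);
        if (hit != null) {
            return hit.getObjectValue();        // served from memory, no DB trip
        }
        Object customer = loadFromDatabase(id); // hypothetical slow path
        cache.put(new Element(id, customer));
        return customer;
    }

    private Object loadFromDatabase(String id) {
        // stand-in for the complicated, high-volume database query
        return "customer-" + id;
    }
}
```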
My Application's latencies aren't predictable
On average my Java application is plenty fast, but I see pauses that are unacceptable to my users. I can't meet my SLAs due to the size of my heap combined with Java GC pauses.
My software/deployment is too complicated
I've solved the Java GC problem: I run with many JVMs with heap sizes of 1-2 gigs. I partition my data and/or load balance to get the performance and availability I need, but my setup is complicated to manage because I need so many JVMs and I need to make sure the right data is in the right places. I fill up all 64 gigs of RAM on my machine, but it's too hard and fragile.
The other problem
Like many vendors, in the past we told our users to keep their heaps under 6 gigs. This forced our customers either to not fully leverage the memory and/or CPU on the machines they purchased, or to stack JVMs on a machine. The former is expensive and inefficient; the latter fragile and complex.
Here is a quick picture of what people do with their Java applications today:
Base case - A small-heap JVM on a big machine, because GC pauses are a problem.
Big heap - Long GCs that are complicated to manage.
Stacked small JVM heaps - Used in combination with various sharding, load balancing, and clustering techniques. This is complicated to manage, and if all the nodes GC at the same time it can lead to availability problems.
What kind of solution would help?
Here's what we believe are the requirements for a stand-alone caching solution that attacks the above problems.
1. Hold a large dataset in memory without impacting GC (tens to hundreds of gigs) - the more data that is cached, the less often you have to go to your external data source and/or disk, and the faster the app goes
2. Be fast - needs to meet the SLA
3. Stay fast - don't fragment, don't slow down as the data is changed over time
4. Be concurrent - scale with CPUs and threads, no lock contention
5. Be predictable - can't have pauses if I want to make my SLA
6. Be 100 percent Java - work on your JVM, on your OS
7. Be restartable - a big cache like this needs to be restartable because it takes too long to build
8. Just snap in and work - not a lot of complexity
What have we built?
First we built a core piece of technology: BigMemory, an off-heap, direct memory buffer store with a highly optimized memory manager that meets or exceeds requirements 1-6 above. This technology is currently being applied in two ways:
1) Terracotta Server Array - We sold it to our built-in customer, the Terracotta Server team, who can now create individual nodes of our L2 caches that hold a hundred million entries, leverage tens of gigs of memory, run pause-free, and deliver linear TPS. This leverages entire machines (even big ones) with a single JVM, for higher availability, a simpler deployment model, 8x improved density, and rock-steady latencies.
2) Ehcache - We've added BigMemory and a new disk store to Enterprise Ehcache to create a new tiered store, adding in requirements 7-8 from above (restartability and snap-in simplicity). The Ehcache world at large can benefit from this store just as much as the Terracotta products do.
Check out the diagram below.
Typically, using either of the BigMemory-backed products, you shrink your heap and grow your cache. By doing so, SLAs are easier to meet because GC pauses pretty much go away, and you are able to keep a huge chunk of data in memory.
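As a concrete illustration, here is roughly what that shape looks like in code. This is a sketch assuming the off-heap settings named in the BigMemory docs (`overflowToOffHeap`, `maxMemoryOffHeap`) and the standard `-XX:MaxDirectMemorySize` JVM flag; check the docs linked below for the exact attributes your version supports.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;

// "Shrink your heap, grow your cache": a small on-heap hot set, with the
// bulk of the data in the off-heap BigMemory tier.
// Assumed JVM flags for this shape: -Xmx2g -XX:MaxDirectMemorySize=30g
public class BigCacheSetup {
    public static void main(String[] args) {
        CacheConfiguration config = new CacheConfiguration("bigCache", 100000) // small on-heap tier
                .overflowToOffHeap(true)   // spill into the BigMemory store
                .maxMemoryOffHeap("30g");  // most of the data lives outside the GC'd heap
        CacheManager manager = CacheManager.create();
        manager.addCache(new Cache(config));
        Cache bigCache = manager.getCache("bigCache");
        bigCache.getSize(); // use it like any other Ehcache cache
    }
}
```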
Summing up
Memory is cheap and growing. Data is important and growing just as fast. Java's GC pauses are preventing applications from keeping up with your hardware. So do what every other layer of your software and hardware stack does: cache. But in Java, the large heaps needed to hold your cache can hurt performance due to GC pauses. So use a tiered cache with BigMemory that leverages your whole machine and keeps your data as close to where it is needed as possible. That's what Terracotta is doing for its products. Do it simply: snap BigMemory into Ehcache and get large caches without the pauses caused by GC. The result is a simpler architecture with improved performance/density and better SLAs.
Learn more at http://terracotta.org/bigmemory
Check the Ehcache BigMemory docs
How does BigMemory free up objects that are no longer required, especially if hierarchical, transactional data is cached?
Hi, are you asking about BigMemory for the Terracotta Server Array or BigMemory in unclustered Ehcache? In the case of the Server Array, nothing really changes except that it stores parts of objects off-heap.
In unclustered Ehcache it behaves in a similar manner to the disk store, only way faster. The objects are serialized when in the off-heap tier. We manage the memory based on put, remove, and expire, so we don't need GC complexity.
A transactional cache has no effect on this, as transactions sit higher in the stack. It just works the same.
Not sure if I answered your question, but please post again if I didn't.
Steve, thanks for the post; this looks very interesting. What communication mechanism(s) are you using between the local heap space and the off-heap tier?
It's all in-process, so no comms layer is needed.
See NIO's Direct Buffer API?
Yep, BigMemory certainly leverages direct memory buffers.
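For anyone curious, here is a toy illustration (mine, not BigMemory's actual memory manager) of the basic move: allocate a direct buffer that lives outside the GC'd heap, then copy serialized bytes in and out of it.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 64 MB outside the Java heap: the collector never scans or moves it.
        // (Raise -XX:MaxDirectMemorySize to allocate much larger buffers.)
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        byte[] serialized = "a serialized cache entry".getBytes();
        offHeap.put(serialized);   // a real store picks the offset via its memory manager
        offHeap.flip();

        byte[] back = new byte[serialized.length];
        offHeap.get(back);         // read the bytes back out to deserialize
        System.out.println(new String(back));
    }
}
```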
What's the difference between BigMemory and Ehcache + DiskStore + RAMFS?
Good question. A number of differences exist:
- The off-heap store is around 5x faster than the disk store on a RAM disk.
- The OSS disk store keeps keys in memory, so it still uses a lot of heap (the EE disk store that's in beta doesn't have this problem).
- The off-heap store uses some pretty advanced algorithms for managing free space/fragmentation; the disk store fragments (this is solved in the EE disk store, now in beta, as well).
Hope that helps.