Tuesday, September 25, 2012

What Would You Do Differently With Reliable In-Memory Big Data?

In The Beginning

Two years ago, Terracotta introduced a transformative new technology called BigMemory, delivering revolutionary advances in both scale and predictability for in-memory data management. Since then, our customers have leveraged BigMemory to eliminate tuning and achieve 1,000x per-node jumps in scale, from about 2 GB per node to 2 TB per node, all while achieving lower, more predictable latencies.

Moving Forward

Since that time we've layered in additional powerful technologies like bytes-based on-heap tuning (ARC) and Search.

Once you start acting on that much data in memory, it becomes important that you're able to keep it there. After all, a crash that requires you to rebuild gigs and gigs of data from other sources when the node restarts makes the application impractical. So we recently added a fast, fault-tolerant, restartable store (FRS).
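For readers who configure through Ehcache, turning on a restartable cache looks roughly like the sketch below. This is a hedged illustration, not a definitive recipe: the cache name, sizes, and disk path are hypothetical, and the `localRestartable` persistence strategy assumes an Enterprise Ehcache 2.6+ setup.

```xml
<ehcache>
  <!-- Directory where the restartable store keeps its log files
       (path shown here is just an example) -->
  <diskStore path="/var/data/ehcache"/>

  <cache name="quoteCache"
         maxBytesLocalHeap="512m"
         maxBytesLocalOffHeap="8g">
    <!-- Persist cache contents so a restart recovers the in-memory data
         instead of rebuilding it from the system of record -->
    <persistence strategy="localRestartable"/>
  </cache>
</ehcache>
```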

With that much data, you need to see what's going on or you'll feel blind and helpless. So we followed up with a secure monitoring and management console to go along with it: the Terracotta Management Console (TMC).

The Problem

These pieces, when put together, create an extremely powerful in-memory data management solution. There was just one more problem: if people weren't used to storing large amounts of data in-memory/in-process, how would they ever start to explore what's possible?


We decided that the best way to change behavior was to make it free. So you can now download BigMemory Go and use it in production, with up to 32 gigs of in-memory data, on as many machines as you desire. And you won't pay us a dime.
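As a rough sketch of what using that free tier might look like, here is an illustrative Ehcache-style configuration sizing the off-heap store at the CacheManager level. The cache names and sizes are made up for this example, and you would also need to grant the JVM enough direct memory (e.g. via `-XX:MaxDirectMemorySize`) to back the off-heap pool.

```xml
<!-- ehcache.xml: give the whole CacheManager up to 32 GB off-heap -->
<ehcache maxBytesLocalOffHeap="32g">
  <!-- Individual caches draw from the shared off-heap pool;
       a small on-heap tier keeps the hottest entries close -->
  <cache name="customers" maxBytesLocalHeap="256m"/>
  <cache name="orders"    maxBytesLocalHeap="256m"/>
</ehcache>
```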


What Would You Do Differently With Reliable In-Memory Big Data?

I look forward to finding out. Download BigMemory Go free at http://terracotta.org.