There are lots of claims in the blogosphere that direct memory buffers are slow. We've been working with them a lot and hadn't seen that slowness, but I'm more of a try-it-and-see kind of guy, so I did just that.
The nice thing about direct buffers is that they occupy no heap, so the data stored in them is hidden away from the JVM garbage collector. Of course, whether that 2-to-5-percent difference matters depends on how much data your app is trying to crank through them, so as always you need to look at your use case's latency, throughput, and SLA goals and code accordingly.
While more testing can and should (and has) been done, here is a place for people to start.
Here's the code and my results from my 1.6 GHz notebook. I only spent a few minutes on it, so suggestions for improving it are welcome:
Type: ONHEAP Took: 8978 to write and read: 10737418368
Type: DIRECT Took: 9223 to write and read: 10737418368
Type: ONHEAP Took: 8827 to write and read: 10737418368
Type: DIRECT Took: 9283 to write and read: 10737418368
Type: ONHEAP Took: 8813 to write and read: 10737418368
Type: DIRECT Took: 9604 to write and read: 10737418368
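For readers who want a concrete starting point, here is a minimal sketch of a heap-vs-direct benchmark in the same spirit as the numbers above. The class name, buffer size, and pass counts are my own illustrative choices, not the original code: it just writes longs into a buffer and reads them back, timing each variant.

```java
import java.nio.ByteBuffer;

// Sketch of a heap vs. direct ByteBuffer benchmark: fill a buffer with
// longs, read them all back, repeat for a number of passes, and time it.
public class BufferBench {
    enum Type { ONHEAP, DIRECT }

    // Writes and reads `passes` full buffers of longs.
    // Returns the total number of bytes written plus bytes read.
    static long writeAndRead(Type type, int bufferBytes, int passes) {
        ByteBuffer buf = (type == Type.DIRECT)
                ? ByteBuffer.allocateDirect(bufferBytes)  // off-heap storage
                : ByteBuffer.allocate(bufferBytes);       // backed by a heap array
        long checksum = 0;
        for (int p = 0; p < passes; p++) {
            buf.clear();
            while (buf.remaining() >= 8) buf.putLong(p);   // write pass
            buf.flip();
            while (buf.remaining() >= 8) checksum += buf.getLong(); // read pass
        }
        // bytes written + bytes read (only whole longs are counted)
        return 2L * passes * (bufferBytes / 8) * 8;
    }

    public static void main(String[] args) {
        int mb = 1 << 20;
        for (int run = 0; run < 3; run++) {
            for (Type t : Type.values()) {
                long start = System.nanoTime();
                long bytes = writeAndRead(t, 8 * mb, 64);
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Type: " + t + " Took: " + ms
                        + " to write and read: " + bytes);
            }
        }
    }
}
```

Alternating the two buffer types within one run, as above, helps keep JIT warm-up from favoring whichever variant happens to go first.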
Nice little example, Steve!
It's not access to ByteBuffers that is slow, but the way the ByteBuffers are used. Most of the implementations showing slowness use unoptimized serializers to serialize large, complex object graphs into a byte array. As with many other things, it's not the WHAT but the HOW that makes the difference.
Yep, a few gotchas exist out there with direct memory buffers. You just need to understand them so you can work around them.
In some microbenchmarks I did a while ago, direct ByteBuffers were faster than heap buffers when filling them a byte at a time or an int at a time, but were slower for array copies. Sadly, both seem to be significantly slower than byte[] arrays.
Mind you, I did these tests a while ago, so maybe some of the results have changed since then?
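The three access patterns being compared here are easy to reproduce. Below is a rough sketch (sizes, class, and method names are illustrative, not the original harness): byte-at-a-time puts, bulk array copies into the buffer, and a plain byte[] fill.

```java
import java.nio.ByteBuffer;

// Sketch of the three access patterns compared in the comment above:
// per-byte puts, bulk put(byte[]) copies, and a plain byte[] loop.
public class AccessPatterns {

    // Fill the whole buffer one byte at a time; returns bytes written.
    static long fillByteAtATime(ByteBuffer buf) {
        buf.clear();
        while (buf.hasRemaining()) buf.put((byte) 1);
        return buf.position();
    }

    // Fill the buffer by repeated bulk copies of a chunk; returns bytes written.
    static long fillByBulkCopy(ByteBuffer buf, byte[] chunk) {
        buf.clear();
        while (buf.remaining() >= chunk.length) buf.put(chunk);
        return buf.position();
    }

    // Fill a plain byte[] directly; returns bytes written.
    static long fillPlainArray(byte[] arr) {
        for (int i = 0; i < arr.length; i++) arr[i] = 1;
        return arr.length;
    }

    public static void main(String[] args) {
        int size = 16 << 20; // 16 MiB per buffer
        byte[] chunk = new byte[4096];
        java.util.Arrays.fill(chunk, (byte) 1);
        for (ByteBuffer buf : new ByteBuffer[] {
                ByteBuffer.allocate(size), ByteBuffer.allocateDirect(size) }) {
            String kind = buf.isDirect() ? "DIRECT" : "ONHEAP";
            long t0 = System.nanoTime();
            fillByteAtATime(buf);
            long t1 = System.nanoTime();
            fillByBulkCopy(buf, chunk);
            long t2 = System.nanoTime();
            System.out.println(kind + " byte-at-a-time: " + (t1 - t0) / 1_000_000
                    + " ms, bulk copy: " + (t2 - t1) / 1_000_000 + " ms");
        }
        long t0 = System.nanoTime();
        fillPlainArray(new byte[size]);
        System.out.println("byte[] fill: "
                + (System.nanoTime() - t0) / 1_000_000 + " ms");
    }
}
```

One run of a loop like this is not a rigorous benchmark (no warm-up, single iteration), but it is enough to see the relative pattern the comment describes.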
Good stuff. From my perspective the question really comes down to "Is it fast enough?" For our usage it's pretty darn fast. If you consider that, generally speaking, going over a network adds about a millisecond of latency in most normal networked applications, getting down to the microsecond level with direct byte buffers is a pretty compelling thing. With Ehcache BigMemory we've seen over 1 million operations per second to the off-heap cache built on direct memory buffers.
I have never understood how someone could say that direct buffer access is slow. In the Java gaming space (see LWJGL for more information) that is how you actually get performance, and in those areas slow performance would get noticed far faster than in the enterprise space.