15 years of NoSQL learning brings forth DynamoDB

AWS has another shot at NoSQL with cloud-based DynamoDB

Chris Mayer

Amazon’s newest NoSQL offering promises scalability and flexibility no matter how big the database grows

It was the paper that started it all. Amazon’s Dynamo paper gave the
industry the kickstart it needed to address the problem of reliability
at massive scale, and spurred many on to create their own incrementally
scalable and highly available key-value storage systems.

Now, after mulling over the concept and refining it, Amazon has
launched its own effort – Amazon DynamoDB, ‘a fully managed NoSQL
database service that provides fast performance at any scale’,
according to the man behind the project, Amazon CTO Werner Vogels.

Discussing the genesis of DynamoDB in depth in a blog post, Vogels
explained how DynamoDB came into its current form following 15 years
of ‘learning in the areas of large scale non-relational databases and
cloud services’, and how it expands upon the principles set out in the
five-year-old SimpleDB and Amazon S3.

If anyone is well equipped to talk about scalability, it’s Amazon.
After all, it deals expertly with monumental amounts of traffic, with
the lights going out only rarely. That experience, accumulated over
nearly two decades, has helped Amazon understand how to perfect NoSQL
and advise enterprises on dealing with challenges such as growth in
users, traffic, and data when going truly global.

Vogels says that DynamoDB aims to address the limitations of SimpleDB,
such as the 10GB cap on datasets, pricing complexity, and unpredictable
performance when indexing. Following in-depth discussions with users,
Amazon came to the conclusion that ‘while Dynamo gave them a system
that met their reliability, performance, and scalability needs, it did
nothing to reduce the operational complexity of running large database
systems’, and that many were opting for SimpleDB and S3 for their cloud
needs despite Dynamo perhaps being the better fit.

Thus a new Dynamo was born. As Vogels puts it:

DynamoDB is based on
the principles of Dynamo, a progenitor of NoSQL, and brings the
power of the cloud to the NoSQL database world. It offers customers
high-availability, reliability, and incremental scalability, with
no limits on dataset size or request throughput for a given
table. And it is fast – it runs on the latest in
solid-state drive (SSD) technology and incorporates numerous other
optimizations to deliver low latency at any
scale.

The release has huge potential to tackle the pitfalls of latency for
those who deal with large datasets, and the issues of the past for
Amazon Web Services look to have been addressed. There’s still a
complex pricing system though, which may cause a headache for some
(a rough cost estimate follows the list):
  • Data storage is $1.00 per GB-month.
  • Data transfer is free for incoming data, free between AWS
    services, and free for the first 10TB of outgoing data per month.
    Beyond that, outgoing transfers cost $0.12 per GB up to 40TB, with
    the rate continuing to drop through 350TB. If you need to transfer
    more than 524TB, contact Amazon for pricing.
  • Throughput capacity is charged at $0.01 per hour for every 10
    units of write capacity and $0.01 per hour for every 50 units of
    read capacity.
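
To make those throughput charges concrete, here is a rough
back-of-the-envelope monthly estimate in Python, using only the prices
quoted above. The workload figures (25GB stored, 100 write units, 500
read units) are made-up example inputs, and the sketch ignores the free
data-transfer tiers.

# Back-of-the-envelope DynamoDB cost estimate based on the prices
# quoted above; the workload figures are illustrative only.
HOURS_PER_MONTH = 730      # average hours in a month

storage_gb = 25            # example: data stored, in GB
write_capacity_units = 100 # example: provisioned writes per second
read_capacity_units = 500  # example: provisioned reads per second

storage_cost = storage_gb * 1.00                                   # $1.00 per GB-month
write_cost = (write_capacity_units / 10) * 0.01 * HOURS_PER_MONTH  # $0.01/hr per 10 write units
read_cost = (read_capacity_units / 50) * 0.01 * HOURS_PER_MONTH    # $0.01/hr per 50 read units

total = storage_cost + write_cost + read_cost
print(f"storage ${storage_cost:.2f} + writes ${write_cost:.2f} "
      f"+ reads ${read_cost:.2f} = ${total:.2f}/month")

With these inputs the script prints a total of $171.00 a month: $25 for
storage plus $73 each for the provisioned write and read capacity.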

DynamoDB is still in beta and currently only available in the US East
region, with a wider rollout to follow. But you can already get your
hands on several SDKs for it; a short sketch of what using one looks
like appears below. For more information on DynamoDB, the accompanying
video gives you a brief introduction to the changes.
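
As a taste of what working with those SDKs looks like, here is a
minimal sketch that creates a table and writes one item, assuming the
AWS SDK for Python (boto3); the table name, key names, and capacity
figures are illustrative examples, not anything Amazon prescribes.

import boto3

# Connect to DynamoDB in the US East region, the only one available
# during the beta.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

# Create a table with a string hash key and provisioned throughput.
# "GameScores", "UserId", and the capacity units are made-up examples.
table = dynamodb.create_table(
    TableName="GameScores",
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
)
table.wait_until_exists()

# Write and read back a single item.
table.put_item(Item={"UserId": "alice", "Score": 1200})
print(table.get_item(Key={"UserId": "alice"})["Item"])

Note that the read and write capacity units requested at table-creation
time are exactly what the throughput pricing above is charged against.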
