News

Amazon Scales Up and Out with New 'DynamoDB' Service

DynamoDB, based on NoSQL, uses solid-state drives to increase speed.

Amazon says it's "cracked the code" for scalability and non-administration with its latest database offering.

The company on Wednesday launched a new database service that will let customers store and modify huge amounts of unstructured content while providing rapid access to data. The new internally developed distributed database service, called DynamoDB, is based on NoSQL, which is growing in popularity as an alternative to SQL databases, particularly for managing unstructured data.

Amazon said customers that tested the beta of DynamoDB were able to achieve anywhere from hundreds to hundreds of thousands of writes per second without having to rewrite any code. Helping achieve the low latency and predictable response times is the fact that data is stored on solid-state drives (SSDs) and synchronized across Availability Zones (datacenters) within an Amazon Region.

Unlike traditional hard disk drives used in datacenters and storage arrays, SSDs are able to read and write data much faster. SSDs also cost much more than HDDs, though prices are declining with the proliferation of the media on mobile devices.

Amazon's new service looks to extend well beyond the limitations of SimpleDB, the company's existing non-relational data store. With its fast access to data, DynamoDB might be one of the most scalable database services yet offered by a public cloud service provider.

"I haven't seen any other service provider offer at this scale, and provide it as a service," said Forrester Research analyst Vanessa Alvarez in an e-mail. "Most cloud service providers today offer infrastructure as a service (or storage as a service) and haven't moved beyond that.  However, there is interest. I've had many calls with service providers, where they're inquiring what it should look like."

DynamoDB is already powering the company's Amazon Cloud Drive and Kindle platforms, as well as Web scale services run by photo- and video-sharing service SmugMug and health information provider Elsevier. In addition to performance, Amazon touted the fact that DynamoDB is a fully managed service, meaning it doesn't require database administrators or systems management. Customers can configure capacity requirements via the AWS Management Console.

"DynamoDB is a fully managed NoSQL database service that provides extremely fast and predictable performance with seamless scalability," said Amazon CTO Werner Vogels in a webcast announcing the new service. "It enables customers to offload the administrative burdens of operating and scaling distributed databases so they don't have to worry about provisioning, patching, configuration, cluster management, things like that. With DynamoDB we believe we've finally cracked the code in giving developers what they've always wanted -- seamless scalability and zero administration."

Vogels added that DynamoDB will appeal to customers who don't want to run SQL databases via Amazon's EC2 service or on their own premises. Amazon handles the management and administration of the features of DynamoDB. In fact, the only controls offered to customers are the ability to dial the capacity of the service up or down and to add or remove data.

"We handle all of the work that's required behind the scenes to make sure the customers' databases are consistently fast and secure," Vogels said. "With database software, whether it's relational or non-relational, almost all of this administration is manual, regardless of whether the software runs on the server or in a datacenter or in the cloud."

Swami Sivasubramanian, general manager of the DynamoDB business at Amazon, said providing low-latency access to content was a key design goal of the service. Depending upon the requested throughput, DynamoDB determines the number of partitions needed by a given table and provisions the right amount of resources to each partition, Sivasubramanian said on the webcast.

Customers can specify in non-technical terms how they want a database provisioned -- for instance, the number of read and write requests made per second. This is aimed at removing complexity for customers who typically allocate resources and time to benchmarking an application to see how large their database clusters should be. Also with DynamoDB, Sivasubramanian said, customers are no longer locked into the capacity they provision for a peak use-case.

"They can always scale it down once their application's peak decreases," he said. "For instance, let's say you're launching an application tomorrow and you're expected to be all over the Internet. You can dial up your throughput to handle the load to hundreds of thousands of requests per second. Once your traffic subsides, you can dial down to your expected usage and you don't need to keep paying for your peak traffic. They can make the tradeoff between consistency, performance and cost."
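The dial-up/dial-down provisioning model Sivasubramanian describes can be sketched in a few lines. This is an illustrative sketch, not Amazon's own code: the helper name below is hypothetical, though the dictionary shape mirrors the `ProvisionedThroughput` parameter exposed by DynamoDB's CreateTable and UpdateTable APIs.

```python
# Sketch of DynamoDB's provisioned-throughput model: capacity is
# expressed in the non-technical terms the article describes --
# expected read and write requests per second.

def provisioned_throughput(reads_per_sec: int, writes_per_sec: int) -> dict:
    """Build a throughput spec in the shape DynamoDB's table APIs expect.
    (Helper name is hypothetical; the key names match the real API.)"""
    return {
        "ReadCapacityUnits": reads_per_sec,
        "WriteCapacityUnits": writes_per_sec,
    }

# Dial up for a launch-day traffic spike...
launch_day = provisioned_throughput(reads_per_sec=100_000, writes_per_sec=50_000)

# ...then dial back down once traffic subsides, so you stop paying for the peak.
steady_state = provisioned_throughput(reads_per_sec=2_000, writes_per_sec=500)
```

In an AWS SDK, a dict like this would be passed as the `ProvisionedThroughput` argument when creating or updating a table.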

SmugMug CEO Don MacAskill, who was also on the webcast, said DynamoDB was able to achieve millisecond reads and writes. He noted that his company's site manages billions of photos and videos that are constantly uploaded and downloaded, and that traditional databases were proving costly and management-intensive.

Amazon's existing EC2 compute and S3 storage service required too much overhead, MacAskill indicated. With DynamoDB, "we didn't have to worry about provisioning, we didn't have to worry about maintenance and backups and replication and all of those sorts of things," he said.

Initially, DynamoDB will appeal to large Web scale companies such as SmugMug, noted Forrester's Alvarez. "However, I can see this going more mainstream in areas like financial services and retail, where there's a need for something like this, and really don't want to make the capex investment in having to continue doing it themselves," she said.

Vogels said in a blog post that pricing starts at $1 per GB per month, plus $0.01 per hour for every 10 units of write capacity and $0.01 per hour for every 50 units of read capacity.
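As a rough illustration of that launch pricing, a short script can estimate a monthly bill. The per-unit figures come from the article; the 730-hour month is an assumption (a common convention for average hours per month in AWS billing examples).

```python
# Back-of-the-envelope monthly cost at DynamoDB's launch pricing.
STORAGE_PER_GB_MONTH = 1.00   # $1 per GB stored per month
WRITE_PRICE_PER_HOUR = 0.01   # per 10 units of write capacity
READ_PRICE_PER_HOUR = 0.01    # per 50 units of read capacity
HOURS_PER_MONTH = 730         # assumption: average hours in a month

def monthly_cost(gb_stored: float, read_units: int, write_units: int) -> float:
    storage = gb_stored * STORAGE_PER_GB_MONTH
    writes = (write_units / 10) * WRITE_PRICE_PER_HOUR * HOURS_PER_MONTH
    reads = (read_units / 50) * READ_PRICE_PER_HOUR * HOURS_PER_MONTH
    return storage + writes + reads

# e.g. 25 GB stored, 50 read units and 10 write units provisioned:
print(round(monthly_cost(25, 50, 10), 2))  # → 39.6
```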

About the Author

Jeffrey Schwartz is editor of Redmond magazine and also covers cloud computing for Virtualization Review's Cloud Report. In addition, he writes the Channeling the Cloud column for Redmond Channel Partner. Follow him on Twitter @JeffreySchwartz.

