

1. It's a NoSQL database, so it's not suitable for relational data

A common misconception I often hear about DynamoDB is that because it's NoSQL and does not support JOINs like traditional relational databases, it is not suitable for relational data. Well, modeling relationships is perfectly doable in DynamoDB. The two most common approaches are AWS Amplify, which provisions all AWS resources for you, including tables and resolvers, and single-table design, which fits all data entities into one container using smart key composition. While I highly recommend the former to get started and for smaller projects, the latter is more "professional" and officially recommended by AWS. If you want to learn more about single-table design, you can learn about it from Alex DeBrie's book about DynamoDB.

In fact, DynamoDB is suitable for almost all types of data. It makes a perfect key-value store, metadata store, relational database, event store (e.g. in Event Sourcing) and, thanks to transactions support, a transactional data store.

2. DynamoDB is slow

Another argument is about speed. Many times I've heard that developers can fetch data from their relational databases in less than 1 ms! And DynamoDB? The same operation takes 10 ms or even 20 ms, so it's too slow, right? These scenarios are often based on oversimplified setups where speed is measured by fetching one row by an indexed field, on a beefy machine with no traffic, no erratic spikes, and none of a myriad of other real-world factors. Reality is often a lot messier, especially at scale.

As your relational database starts getting more and more traffic, you'll encounter slowdowns related to the load on the machine caused by other operations and processes, connection pool exhaustion, transaction conflicts, and so on. What about DynamoDB in such conditions? The performance is always the same. No matter if you're sending 1 request per second or 1,000,000 requests per second, DynamoDB (if the data model has been architected correctly) behaves great, sometimes even better under heavy load.

Source: Amazon DynamoDB auto scaling: Performance and cost optimization at any scale

3. DynamoDB is expensive

Compared to traditional, non-managed databases, DynamoDB is much cheaper to scale. Its costs are super predictable and directly proportional to usage. At the beginning, on a small scale, DynamoDB costs are close to zero: if there's no traffic, there are no costs. Moreover, AWS offers a quite generous Free Tier, so you might even go to production with zero database costs.

In traditional, non-managed databases, the TCO (Total Cost of Ownership) is much more non-linear and carries a lot of hidden costs that might not be visible at first glance. You need to provision a VM/machine/instance with at least 1 vCPU, and that's a hard cost you cannot skip: you're paying it even if your database is completely unused. Later on, your provisioned database might outperform DynamoDB in terms of costs, but at some point the current machine will no longer be enough.
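The "costs are directly proportional to usage" point above can be put into a quick back-of-the-envelope model. This is a minimal sketch in plain Python; the per-million request rates are illustrative assumptions, not official AWS pricing, so always check the current DynamoDB pricing page:

```python
# Illustrative on-demand cost model: cost scales linearly with traffic.
# The rates below are placeholder assumptions, NOT official AWS pricing.

PRICE_PER_MILLION_WRITES = 1.25  # assumed USD per 1M write request units
PRICE_PER_MILLION_READS = 0.25   # assumed USD per 1M read request units

def monthly_cost(reads: int, writes: int) -> float:
    """Estimated monthly cost for a given number of read/write requests."""
    return (reads / 1_000_000) * PRICE_PER_MILLION_READS \
         + (writes / 1_000_000) * PRICE_PER_MILLION_WRITES

print(monthly_cost(0, 0))                   # 0.0 - no traffic, no cost
print(monthly_cost(10_000_000, 2_000_000))  # 5.0
```

Note the two properties the article leans on: zero traffic costs exactly zero, and doubling the traffic exactly doubles the bill, which is what makes the spend predictable. Compare that with a provisioned VM, whose cost is a step function of machine size regardless of utilization.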

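The single-table key composition mentioned above can be made concrete with a small, dependency-free sketch. This is plain Python with no real DynamoDB calls; the entity names and `PK`/`SK` formats are my own illustration of the pattern, where a parent and its children share a partition key so one query returns the whole relationship without a JOIN:

```python
# Single-table design sketch (no AWS involved): a customer and their
# orders share one partition key (PK), and the sort key (SK) encodes
# the entity type, so one "query by PK" fetches the whole aggregate.

table = [
    {"PK": "CUSTOMER#42", "SK": "PROFILE",          "name": "Alice"},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-01-03", "total": 25},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2024-02-17", "total": 60},
    {"PK": "CUSTOMER#7",  "SK": "PROFILE",          "name": "Bob"},
]

def query(pk: str, sk_prefix: str = "") -> list[dict]:
    """Mimics a DynamoDB Query: exact match on PK, begins_with on SK."""
    return [
        item for item in table
        if item["PK"] == pk and item["SK"].startswith(sk_prefix)
    ]

# One round trip returns the customer together with all their orders.
print(len(query("CUSTOMER#42")))             # 3 (profile + 2 orders)
print(len(query("CUSTOMER#42", "ORDER#")))   # 2 (orders only)
```

In real DynamoDB the same shape is expressed with a `Query` using a key condition on the partition key plus `begins_with` on the sort key; the point is that "relational" access patterns become key design rather than JOINs.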