Data-intensive applications are pushing the boundaries of what is possible by making use of these technological developments. We call an application data-intensive if data is its primary challenge (the quantity of data, the complexity of data, or the speed at which it is changing), as opposed to compute-intensive, where CPU cycles are the bottleneck.
The tools and technologies that help data-intensive applications store and process data have been rapidly adapting to these changes. New types of database systems (“NoSQL”) have been getting lots of attention, but message queues, caches, search indexes, frameworks for batch and stream processing, and related technologies are very important too. Many applications use some combination of these.
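To make "some combination of these" slightly more concrete, here is a minimal sketch (not from the book; the component names and in-memory stand-ins are hypothetical) of a single request path that combines an application-managed cache, a primary database, and a message queue that feeds a search index asynchronously:

```python
# Hypothetical sketch: composing a cache, a database, and a message queue.
# Real systems (e.g. Redis, PostgreSQL, Kafka) would sit behind similar interfaces;
# here they are plain in-memory objects so the example is self-contained and runnable.
import queue

cache = {}                                    # stand-in for an in-memory cache
database = {"user:42": {"name": "Alice"}}     # stand-in for the primary database
index_updates = queue.Queue()                 # stand-in for a queue feeding a search index

def get_user(user_id):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    record = database.get(key)
    if record is not None:
        cache[key] = record                   # populate the cache for later reads
    return record

def update_user(user_id, fields):
    """Write path: update the database, invalidate the cache, enqueue an index update."""
    key = f"user:{user_id}"
    database[key] = {**database.get(key, {}), **fields}
    cache.pop(key, None)                      # drop the stale cached copy
    index_updates.put((key, database[key]))   # picked up later by an indexing worker

if __name__ == "__main__":
    print(get_user("42"))                     # cache miss -> database -> cache
    update_user("42", {"name": "Bob"})        # write, invalidate, enqueue
    print(get_user("42"))                     # cache miss again, reads the new value
```

Even this toy version shows why such applications are data-intensive rather than compute-intensive: the hard questions are about keeping the cache, database, and index consistent with each other, not about CPU cycles.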
Sometimes, when discussing scalable data systems, people make comments along the lines of, “You’re not Google or Amazon. Stop worrying about scale and just use a relational database.” There is truth in that statement: building for scale that you don’t need is wasted effort and may lock you into an inflexible design. In effect, it is a form of premature optimization. However, it’s also important to choose the right tool for the job, and different technologies each have their own strengths and weaknesses. As we shall see, relational databases are important but not the final word on dealing with data.
Ref: https://github.com/ept/ddia-references
This book is arranged into three parts:
In Part I, we discuss the fundamental ideas that underpin the design of data-intensive applications. We start in Chapter 1 by discussing what we're actually trying to achieve: reliability, scalability, and maintainability; how we need to think about them; and how we can achieve them. In Chapter 2 we compare several different data models and query languages, and see how they are appropriate to different situations. In Chapter 3 we talk about storage engines: how databases arrange data on disk so that we can find it again efficiently. Chapter 4 turns to formats for data encoding (serialization) and evolution of schemas over time.
In Part II, we move from data stored on one machine to data that is distributed across multiple machines. This is often necessary for scalability, but brings with it a variety of unique challenges. We first discuss replication (Chapter 5), partitioning/sharding (Chapter 6), and transactions (Chapter 7). We then go into more detail on the problems with distributed systems (Chapter 8) and what it means to achieve consistency and consensus in a distributed system (Chapter 9).
In Part III, we discuss systems that derive some datasets from other datasets. Derived data often occurs in heterogeneous systems: when there is no one database that can do everything well, applications need to integrate several different databases, caches, indexes, and so on. In Chapter 10 we start with a batch processing approach to derived data, and we build upon it with stream processing in Chapter 11. Finally, in Chapter 12 we put everything together and discuss approaches for building reliable, scalable, and maintainable applications in the future.