Cache is king for speedy data retrieval. So if you're interested in how Elasticsearch leverages various caches to ensure you are retrieving data as fast as possible, buckle up for the next 15 minutes and read through this post. This blog will shed some light on the caching features of Elasticsearch that help you retrieve data faster after the initial access. Elasticsearch is a heavy user of caches, but in this post we'll only be focusing on:
Page cache (sometimes called the filesystem cache)
Shard-level request cache
Query cache
You will learn what each of these caches does, how it works, and which cache is best for which use case. We'll also explore when you can control the caching and when you have to trust another component to do a good job of caching, and we'll take a look at how these caches deal with the expiration of data. One thing you never want to encounter is a cache that returns stale data; a cache has to be bound to the lifecycle of your data, and we will take a look at how this works for each of them. And if you're wondering whether this post is applicable to you: it doesn't matter whether you are running Elasticsearch yourself or using Elastic Cloud, you will utilize these caches out of the box. Ok, let's dive in.

Page cache

The first cache exists on the operating system level. While this section is mainly about the Linux implementation, other operating systems have a similar feature. The basic idea of the page cache is to put data into the available memory after reading it from disk, so that the next read is returned from memory and getting the data does not require a disk seek. All of this is completely transparent to the application, which issues the same system calls, but the operating system is able to serve them from the page cache instead of reading from disk. The flow looks like this: the application executes a system call to read data from disk, and the kernel goes to disk for the first read and puts the data into the page cache in memory. A second read can then be redirected by the kernel to the page cache within the operating system memory and thus will be much faster.
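If you are curious how much memory the kernel is currently using for the page cache, a quick look at free on a Linux system is enough; the buff/cache column is the one to watch (shown here purely as an illustration, the numbers are machine-specific):

free -h

Everything listed under buff/cache is memory the kernel can hand back to applications when needed, which is why it does not count as used memory.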
What does this mean for Elasticsearch? Accessing data in the page cache is much faster than accessing it on disk. This is one of the reasons why the recommendation for Elasticsearch memory is generally not more than half of your total available memory, so that the other half can be used for the page cache. This also means that no memory is wasted; rather, it is reused for the page cache.

How does data expire out of the cache? If the data itself is changed, the page cache marks that data as dirty and it will be released from the page cache. As segments in Elasticsearch and Lucene are only written once, this mechanism fits the way data is stored very well. Segments are read-only after the initial write, so a change of data might be a merge or the addition of new data. In that case, a new disk access is needed. The other possibility is the memory filling up. In that case, the cache will behave similarly to an LRU, as stated by the kernel documentation.

Testing the page cache

If you want to check out the functionality of the page cache, we can use hyperfine to do so. hyperfine is a CLI benchmarking tool. Let's create a file with a size of 10MB via dd:

dd if=/dev/urandom of=test1 bs=1M count=10

If you want to run the above using macOS, you may want to use gdd instead and ensure that coreutils is installed via brew.
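On macOS, that would look like this (a sketch assuming coreutils has been installed via Homebrew, which provides the GNU dd as gdd):

brew install coreutils
gdd if=/dev/urandom of=test1 bs=1M count=10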
Now let's run the same cat command against this file, with the page cache being dropped before each run. For Linux:
hyperfine --warmup 5 'cat test1 > /dev/null' \
  --prepare "sudo sync; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'"
And for macOS:
hyperfine --warmup 5 'cat test1 > /dev/null' --prepare 'sudo purge'

Benchmark #1: cat test1 > /dev/null
  Time (mean ± σ):     38.1 ms ±  6.4 ms    [User: 1.4 ms, System: 17.5 ms]
  Range (min … max):   30.4 ms … 50.5 ms    10 runs

hyperfine --warmup 5 'cat test1 > /dev/null'

Benchmark #1: cat test1 > /dev/null
  Time (mean ± σ):      3.8 ms ±  0.6 ms    [User: 0.7 ms, System: 2.8 ms]
  Range (min … max):    2.9 ms …  7.0 ms    418 runs

So, on my local macOS instance, running the same cat command without clearing the page cache is about 10x faster, as the disk access can be skipped. You definitely want this kind of access pattern for your Elasticsearch data!

Diving deeper

The class responsible for reading a Lucene index is the HybridDirectory class. Based on the extension of files within a Lucene index, it decides whether to use memory mapping or regular file access using Java NIO. Also note that some applications are more aware of their own access patterns and come with their own very specific and optimized caches, where the page cache would probably work against them. If needed, any application can bypass the page cache by using O_DIRECT when opening a file. We will get back to this at the very end of this post. If you want to check the cache hit ratio, you can use cachestat, which is part of perf-tools.

One last thing about Elasticsearch here: you can configure Elasticsearch to preload data into the page cache via the index.store.preload index setting. Consider this an expert setting, and be careful with it in order to ensure that the page cache does not get thrashed constantly.

Summary

The page cache helps to execute arbitrary searches faster by loading complete index data structures into the main memory of your operating system. There is no finer granularity, and caching is solely based on the access pattern of your data. The operating system takes care of eviction. Let's go to the next level of caches.

Shard-level request cache

This cache helps a lot in speeding up Kibana by caching search responses that consist only of aggregations. Let's overlay the response of an aggregation with the data fetched from several indices to visualize the problem that is solved by this cache. A Kibana dashboard in your office usually displays data from several indices, and you simply specify a timespan like the last 7 days. You do not care how many indices or shards are queried. So, if you are using data streams for your time-based indices, you may end up with a visualization covering five indices.
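The request behind such a visualization might look roughly like the following sketch (the index pattern, field name, dates, and interval are all illustrative); note the size of 0, as the visualization only needs the aggregation and no hits:

GET logs-*/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2021-02-01",
        "lt": "2021-02-08"
      }
    }
  },
  "aggs": {
    "events_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "3h"
      }
    }
  }
}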
Now, let's jump 3 hours into the future and display the same dashboard.
The second visualization is very similar to the first: some data is no longer shown because it has aged out, and some new data has been added at the end. Can you spot what has not changed? The data returned from the indices logs-000002, logs-000003, and logs-000004. Even if this data had been in the page cache, we would still need to execute the search and the aggregation on top of the results. So, there is no need to do this double work. In order to make this work, one more optimization has been added to Elasticsearch: the ability to rewrite a query. Instead of specifying a timestamp range for the logs indices logs-000002, logs-000003, and logs-000004, we can rewrite this internally to a match_all query, as every document within those indices matches with regard to the timestamp (other filters would still apply, of course). Using this rewrite, both requests end up as the exact same request on these three indices and can thus be cached.

This is the shard-level request cache. The idea is to cache the full response of a request, so that you do not need to execute any search at all and can basically return the response instantly, as long as the data has not changed, to ensure you don't return any stale data!

Diving deeper

The component responsible for caching is the IndicesRequestCache class. It is used within the SearchService when executing the query phase. There is also an additional check whether a query is eligible for caching; for example, queries that are being profiled are never cached to avoid skewing the results. This cache is enabled by default, can take up to one percent of the total heap, and can even be configured on a per-request basis if you need to. By default, this cache is enabled for search requests that do not return any hits, which is exactly what a Kibana visualization request is! However, you can also use this cache when hits are returned by enabling it via the request_cache request parameter. You can retrieve statistics about the usage of this cache via:

GET /_nodes/stats/indices/request_cache?human

Summary

The shard-level request cache remembers the full response to a search request and returns it if the same query comes in again, without hitting the disk or the page cache. As its name implies, this data structure is tied to the shard containing the data, and it will never return stale data.

Query cache

The query cache is the last cache we will take a look at in this post. Again, the way this cache works is rather different from the other caches. The page cache caches data independent of how much of that data is really read by a query. The shard-level request cache caches the response when a similar query is used. The query cache is even more granular and can cache data that is reused between different queries.

Let's take a look at how this works. Imagine we search across logs. Three different users might be browsing this month's data. However, each user uses a different search term:
User1 searched for “failure”
User2 searched for “Exception”
User3 searched for “pcre2_get_error_message”
Every search returns different results, yet they are within the same time frame. This is where the query cache comes in: it is able to cache just that part of a query. The basic idea is to cache the set of documents matching a filter, so that other queries reusing that filter only need to search within those documents. Your query probably looks like this:

GET logs-*/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "message": "pcre2_get_error_message"
          }
        }
      ],
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "2021-02-01",
              "lt": "2021-03-01"
            }
          }
        }
      ]
    }
  }
}

For every query, the filter part stays the same. This is a highly simplified view of what the data looks like in an inverted index: each timestamp is mapped to a list of document ids.
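Just like for the request cache, you can retrieve statistics about the usage of the query cache via the node stats, with a call along the lines of the request cache example above:

GET /_nodes/stats/indices/query_cache?human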