
v1.18: accounts-db: fix 8G+ memory spike during hash calculation (backport of #1308) (#1318)

accounts-db: fix 8G+ memory spike during hash calculation (#1308)

We were accidentally doing several thousand 4MB allocations - even
during incremental hash - which added up to an 8G+ memory spike over ~2s
every ~30s.

Fix by using Vec::new() in the identity function. Empirically, 98%+ of
reduces join arrays with fewer than 128 elements, and only the last few
reduces join large vecs. Because Vec reallocation grows capacity
exponentially we don't see pathological reallocation: each reduce does
at most one realloc (and often zero, thanks to the exponential growth).
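
For context, here is a minimal sketch - not the Agave code - of the rayon
fold/reduce pattern involved; the data, names, and sizes are illustrative
assumptions. Rayon may call the reduce identity closure once per reduction
step, so an identity that allocates eagerly (like the
Vec::with_capacity(max_bin) removed in the diff below) pays that allocation
on every call, while Vec::new() allocates nothing until the first append:

    // Sketch only: illustrates the identity-allocation issue, not Agave's code.
    use rayon::prelude::*;

    fn main() {
        let data: Vec<u64> = (0..1_000_000).collect();

        let merged: Vec<u64> = data
            .par_iter()
            // Each rayon job folds its chunk into a local Vec.
            .fold(Vec::new, |mut acc, &x| {
                acc.push(x);
                acc
            })
            .reduce(
                // Rayon calls this identity closure for every reduction step.
                // An eager `|| Vec::with_capacity(BIG)` here would allocate
                // BIG slots (e.g. ~4MB) thousands of times; `Vec::new()`
                // allocates nothing until something is pushed.
                Vec::new,
                |mut a, mut b| {
                    // append() grows `a` exponentially on demand, so each
                    // reduce touches the allocator at most once, and usually
                    // not at all once `a` is large enough.
                    a.append(&mut b);
                    a
                },
            );

        assert_eq!(merged.len(), 1_000_000);
    }

Since append() relies on Vec's exponential growth, merging into `a` does at
most one realloc per reduce, which is why changing only the identity closure
is enough to remove the spike.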

(cherry picked from commit 2c71685b9492a52bc2dcfd158948115b058d2bbd)

Co-authored-by: Alessandro Decina <alessandro.d@gmail.com>
mergify[bot] committed 1 year ago
Commit: c027cfc3e0
1 file changed, 7 insertions(+), 3 deletions(-)

accounts-db/src/accounts_hash.rs (+7, -3)

@@ -838,9 +838,13 @@ impl<'a> AccountsHasher<'a> {
                 accum
             })
             .reduce(
-                || DedupResult {
-                    hashes_files: Vec::with_capacity(max_bin),
-                    ..Default::default()
+                || {
+                    DedupResult {
+                        // Allocate with Vec::new() so that no allocation actually happens. See
+                        // https://github.com/anza-xyz/agave/pull/1308.
+                        hashes_files: Vec::new(),
+                        ..Default::default()
+                    }
                 },
                 |mut a, mut b| {
                     a.lamports_sum = a