Compactor: Upload Sparse Index Headers to Object Storage #10684

Merged · 49 commits · Mar 7, 2025

Changes from 16 commits

Commits
c814a03
mv indexheader into utils
dmwilson-grafana Feb 14, 2025
df725dc
fmt + benchmarks for BinaryWrite
dmwilson-grafana Feb 18, 2025
b3de32d
update benchmarks
dmwilson-grafana Feb 18, 2025
4c63eb9
update tests on compactor e2e
dmwilson-grafana Feb 18, 2025
7329292
update tests on compactor e2e
dmwilson-grafana Feb 18, 2025
6ce1d79
fix err handling in WriteBinary
dmwilson-grafana Feb 18, 2025
cc8b313
mv indexheader to pkg/storage
dmwilson-grafana Feb 19, 2025
d771282
rm change to BinaryWrite
dmwilson-grafana Feb 19, 2025
753572d
rm unused import related to BinaryWrite
dmwilson-grafana Feb 19, 2025
ccb4912
pass uploadSparseIndexHeaders through Config + update docs
dmwilson-grafana Feb 20, 2025
c432e3a
update docs
dmwilson-grafana Feb 20, 2025
fa47f62
docs
dmwilson-grafana Feb 20, 2025
d8de8fa
docs
dmwilson-grafana Feb 20, 2025
00f9b41
handrail comments; rm TODO
dmwilson-grafana Feb 20, 2025
ef039aa
comments
dmwilson-grafana Feb 21, 2025
24aad5f
Merge branch 'main' into dwilson/upload-sparse-headers-from-compactor
dmwilson-grafana Feb 21, 2025
fe8a12e
add handling for configured sampling rate != sparse-index-header samp…
dmwilson-grafana Feb 22, 2025
fd2d9fd
add comments on DownsamplePostings
dmwilson-grafana Feb 22, 2025
1ea4e57
updates to downsampling
dmwilson-grafana Feb 24, 2025
c362589
golangci-lint
dmwilson-grafana Feb 24, 2025
d766041
add todo comment on test, can pass unexpectedly
dmwilson-grafana Feb 24, 2025
13753ff
update to tests
dmwilson-grafana Feb 25, 2025
41d561b
review comments
dmwilson-grafana Feb 26, 2025
fb28d6e
address review comments
dmwilson-grafana Feb 26, 2025
35a4c1b
golint
dmwilson-grafana Feb 26, 2025
8ad719f
pass config through init functions
dmwilson-grafana Feb 27, 2025
1d7d402
update downsampling in NewPostingOffsetTableFromSparseHeader to alway…
dmwilson-grafana Feb 27, 2025
7905de5
fix postings to pass TestStreamBinaryReader_CheckSparseHeadersCorrect…
dmwilson-grafana Feb 27, 2025
cd77083
fix postings to pass TestStreamBinaryReader_CheckSparseHeadersCorrect…
dmwilson-grafana Feb 27, 2025
62b3c6f
stat sparse index headers before block upload; no warning on failed u…
dmwilson-grafana Feb 27, 2025
75d604b
posting sampling tests
dmwilson-grafana Feb 27, 2025
3663d67
update header sampling tests
dmwilson-grafana Feb 27, 2025
be12809
split runCompactionJob upload into multiple concurrency.ForEachJob
dmwilson-grafana Feb 27, 2025
f4a8034
update changelog.md
dmwilson-grafana Feb 28, 2025
5af108b
Merge branch 'main' into dwilson/upload-sparse-headers-from-compactor
dmwilson-grafana Feb 28, 2025
0977f15
golint
dmwilson-grafana Feb 28, 2025
92b4610
Update CHANGELOG.md
dimitarvdimitrov Feb 28, 2025
d2439ac
Update CHANGELOG.md
dimitarvdimitrov Feb 28, 2025
7949eaf
Revert "Update CHANGELOG.md"
dimitarvdimitrov Feb 28, 2025
b69e920
Update pkg/compactor/bucket_compactor.go
dmwilson-grafana Feb 28, 2025
ce90588
add struct fields on test
dmwilson-grafana Feb 28, 2025
19a1b49
rework downsampling tests; require first and last
dmwilson-grafana Feb 28, 2025
3426a41
add check for first and last table offsets to CheckSparseHeadersCorre…
dmwilson-grafana Mar 3, 2025
c90a8cd
fix conflicts changelog.md
dmwilson-grafana Mar 3, 2025
f2e8cbd
check all ranges in index are in header
dmwilson-grafana Mar 4, 2025
4ee0a03
Merge branch 'main' into dwilson/upload-sparse-headers-from-compactor
dmwilson-grafana Mar 4, 2025
c9ead14
comment on offset adjust
dmwilson-grafana Mar 4, 2025
0f98926
Update docs/sources/mimir/configure/configuration-parameters/index.md
dmwilson-grafana Mar 5, 2025
a604399
update docs
dmwilson-grafana Mar 5, 2025
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -57,6 +57,7 @@
* `go_cpu_classes_gc_total_cpu_seconds_total`
* `go_cpu_classes_total_cpu_seconds_total`
* `go_cpu_classes_idle_cpu_seconds_total`
* [ENHANCEMENT] Compactor: Add experimental `-compactor.upload-sparse-index-headers` option. When enabled, the compactor will attempt to upload sparse index headers to object storage. This prevents latency spikes after adding store-gateway replicas. #10684
* [BUGFIX] Distributor: Use a boolean to track changes while merging the ReplicaDesc components, rather than comparing the objects directly. #10185
* [BUGFIX] Querier: fix timeout responding to query-frontend when response size is very close to `-querier.frontend-client.grpc-max-send-msg-size`. #10154
* [BUGFIX] Query-frontend and querier: show warning/info annotations in some cases where they were missing (if a lazy querier was used). #10277
11 changes: 11 additions & 0 deletions cmd/mimir/config-descriptor.json
@@ -11512,6 +11512,17 @@
"fieldFlag": "compactor.max-lookback",
"fieldType": "duration",
"fieldCategory": "experimental"
},
{
"kind": "field",
"name": "upload_sparse_index_headers",
"required": false,
"desc": "If enabled, the compactor will construct and upload sparse index headers to object storage during each compaction cycle.",
"fieldValue": null,
"fieldDefaultValue": false,
"fieldFlag": "compactor.upload-sparse-index-headers",
"fieldType": "boolean",
"fieldCategory": "experimental"
}
],
"fieldValue": null,
2 changes: 2 additions & 0 deletions cmd/mimir/help-all.txt.tmpl
@@ -1283,6 +1283,8 @@ Usage of ./cmd/mimir/mimir:
Number of symbols flushers used when doing split compaction. (default 1)
-compactor.tenant-cleanup-delay duration
For tenants marked for deletion, this is the time between deletion of the last block, and doing final cleanup (marker files, debug files) of the tenant. (default 6h0m0s)
-compactor.upload-sparse-index-headers
[experimental] If enabled, the compactor will construct and upload sparse index headers to object storage during each compaction cycle.
-config.expand-env
Expands ${var} or $var in config according to the values of the environment variables.
-config.file value
2 changes: 2 additions & 0 deletions docs/sources/mimir/configure/about-versioning.md
@@ -71,6 +71,8 @@ The following features are currently experimental:
- `-compactor.in-memory-tenant-meta-cache-size`
- Limit blocks processed in each compaction cycle. Blocks uploaded prior to the maximum lookback aren't processed.
- `-compactor.max-lookback`
- Enable the compactor to upload sparse index headers to object storage during compaction cycles.
- `-compactor.upload-sparse-index-headers`
- Ruler
- Aligning of evaluation timestamp on interval (`align_evaluation_time_on_interval`)
- Allow defining limits on the maximum number of rules allowed in a rule group by namespace and the maximum number of rule groups by namespace. If set, this supersedes the `-ruler.max-rules-per-rule-group` and `-ruler.max-rule-groups-per-tenant` limits.
@@ -4948,6 +4948,11 @@ sharding_ring:
# blocks are considered regardless of their upload time.
# CLI flag: -compactor.max-lookback
[max_lookback: <duration> | default = 0s]

# (experimental) If enabled, the compactor will construct and upload sparse
# index headers to object storage during each compaction cycle.
# CLI flag: -compactor.upload-sparse-index-headers
[upload_sparse_index_headers: <boolean> | default = false]
```

### store_gateway
104 changes: 75 additions & 29 deletions pkg/compactor/bucket_compactor.go
@@ -29,8 +29,10 @@ import (
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/tsdb"
"github.com/thanos-io/objstore"
"github.com/thanos-io/objstore/providers/filesystem"
"go.uber.org/atomic"

"github.com/grafana/mimir/pkg/storage/indexheader"
"github.com/grafana/mimir/pkg/storage/sharding"
mimir_tsdb "github.com/grafana/mimir/pkg/storage/tsdb"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
@@ -393,13 +395,28 @@ func (c *BucketCompactor) runCompactionJob(ctx context.Context, job *Job) (shoul
c.metrics.compactionBlocksVerificationFailed.Inc()
}

var uploadSparseIndexHeaders = c.uploadSparseIndexHeaders
var fsbkt objstore.Bucket
if uploadSparseIndexHeaders {
// Create a bucket backed by the local compaction directory. This allows calls to NewStreamBinaryReader to
// construct sparse-index-headers without making requests to object storage. Building and uploading
// sparse-index-headers is best effort, and we do not skip uploading a compacted block if there's an
// error affecting the sparse-index-header upload.
fsbkt, err = filesystem.NewBucket(subDir)
if err != nil {
level.Warn(jobLogger).Log("msg", "failed to create filesystem bucket, skipping sparse header upload", "err", err)
uploadSparseIndexHeaders = false
}
}

blocksToUpload := convertCompactionResultToForEachJobs(compIDs, job.UseSplitting(), jobLogger)
err = concurrency.ForEachJob(ctx, len(blocksToUpload), c.blockSyncConcurrency, func(ctx context.Context, idx int) error {
blockToUpload := blocksToUpload[idx]

uploadedBlocks.Inc()

bdir := filepath.Join(subDir, blockToUpload.ulid.String())
blockID := blockToUpload.ulid.String()
bdir := filepath.Join(subDir, blockID)

// When splitting is enabled, we need to inject the shard ID as an external label.
newLabels := job.Labels().Map()
Expand Down Expand Up @@ -431,6 +448,32 @@ func (c *BucketCompactor) runCompactionJob(ctx context.Context, job *Job) (shoul
return errors.Wrapf(err, "upload of %s failed", blockToUpload.ulid)
}

if uploadSparseIndexHeaders {
// Calling NewStreamBinaryReader reads a block's index and writes a sparse-index-header to disk. Because we
// don't use the writer, we pass a default indexheader.Config and don't register metrics.
if _, err := indexheader.NewStreamBinaryReader(
ctx,
jobLogger,
fsbkt,
subDir,
blockToUpload.ulid,
mimir_tsdb.DefaultPostingOffsetInMemorySampling,
indexheader.NewStreamBinaryReaderMetrics(nil),
indexheader.Config{},
); err != nil {
level.Warn(jobLogger).Log("msg", "failed to create sparse index headers", "block", blockID, "err", err)
return nil
}

// upload local sparse-index-header to object storage
src := path.Join(bdir, block.SparseIndexHeaderFilename)
dst := path.Join(blockID, block.SparseIndexHeaderFilename)
if err := objstore.UploadFile(ctx, jobLogger, c.bkt, src, dst); err != nil {
level.Warn(jobLogger).Log("msg", "failed to upload sparse index headers", "block", blockID, "err", err)
return nil
}
}

elapsed := time.Since(begin)
level.Info(jobLogger).Log("msg", "uploaded block", "result_block", blockToUpload.ulid, "duration", elapsed, "duration_ms", elapsed.Milliseconds(), "external_labels", labels.FromMap(newLabels))
return nil
Expand Down Expand Up @@ -747,20 +790,21 @@ var ownAllJobs = func(*Job) (bool, error) {

// BucketCompactor compacts blocks in a bucket.
type BucketCompactor struct {
logger log.Logger
sy *metaSyncer
grouper Grouper
comp Compactor
planner Planner
compactDir string
bkt objstore.Bucket
concurrency int
skipUnhealthyBlocks bool
ownJob ownCompactionJobFunc
sortJobs JobsOrderFunc
waitPeriod time.Duration
blockSyncConcurrency int
metrics *BucketCompactorMetrics
logger log.Logger
sy *metaSyncer
grouper Grouper
comp Compactor
planner Planner
compactDir string
bkt objstore.Bucket
concurrency int
skipUnhealthyBlocks bool
uploadSparseIndexHeaders bool
ownJob ownCompactionJobFunc
sortJobs JobsOrderFunc
waitPeriod time.Duration
blockSyncConcurrency int
metrics *BucketCompactorMetrics
}

// NewBucketCompactor creates a new bucket compactor.
@@ -779,25 +823,27 @@ func NewBucketCompactor(
waitPeriod time.Duration,
blockSyncConcurrency int,
metrics *BucketCompactorMetrics,
uploadSparseIndexHeaders bool,
) (*BucketCompactor, error) {
if concurrency <= 0 {
return nil, errors.Errorf("invalid concurrency level (%d), concurrency level must be > 0", concurrency)
}
return &BucketCompactor{
logger: logger,
sy: sy,
grouper: grouper,
planner: planner,
comp: comp,
compactDir: compactDir,
bkt: bkt,
concurrency: concurrency,
skipUnhealthyBlocks: skipUnhealthyBlocks,
ownJob: ownJob,
sortJobs: sortJobs,
waitPeriod: waitPeriod,
blockSyncConcurrency: blockSyncConcurrency,
metrics: metrics,
logger: logger,
sy: sy,
grouper: grouper,
planner: planner,
comp: comp,
compactDir: compactDir,
bkt: bkt,
concurrency: concurrency,
skipUnhealthyBlocks: skipUnhealthyBlocks,
ownJob: ownJob,
sortJobs: sortJobs,
waitPeriod: waitPeriod,
blockSyncConcurrency: blockSyncConcurrency,
metrics: metrics,
uploadSparseIndexHeaders: uploadSparseIndexHeaders,
}, nil
}

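For orientation, the following is a condensed sketch of the best-effort flow this file now implements for each compacted block: build the sparse-index-header locally from the compaction output directory, then upload it next to the block in object storage. The helper name `uploadSparseHeader` and the import paths are illustrative assumptions; the individual calls mirror the ones in the diff above, and failures here never fail the block upload itself.

```go
package compactorsketch

import (
	"context"
	"path"

	"github.com/go-kit/log"
	"github.com/oklog/ulid"
	"github.com/thanos-io/objstore"
	"github.com/thanos-io/objstore/providers/filesystem"

	"github.com/grafana/mimir/pkg/storage/indexheader"
	mimir_tsdb "github.com/grafana/mimir/pkg/storage/tsdb"
	"github.com/grafana/mimir/pkg/storage/tsdb/block"
)

// uploadSparseHeader (hypothetical helper) builds the sparse-index-header for one
// compacted block located under localDir and uploads it next to the block in object
// storage. Errors are returned so the caller can log them without failing the job.
func uploadSparseHeader(ctx context.Context, logger log.Logger, bkt objstore.Bucket, localDir string, id ulid.ULID) error {
	// A bucket backed by the local compaction directory lets NewStreamBinaryReader
	// read the block's index without making requests to object storage.
	fsbkt, err := filesystem.NewBucket(localDir)
	if err != nil {
		return err
	}

	// NewStreamBinaryReader writes the sparse-index-header file to disk as a side
	// effect; the reader itself isn't used, so metrics are nil and the default
	// indexheader.Config is passed.
	if _, err := indexheader.NewStreamBinaryReader(
		ctx, logger, fsbkt, localDir, id,
		mimir_tsdb.DefaultPostingOffsetInMemorySampling,
		indexheader.NewStreamBinaryReaderMetrics(nil),
		indexheader.Config{},
	); err != nil {
		return err
	}

	// Upload <localDir>/<block>/sparse-index-header to <block>/sparse-index-header.
	src := path.Join(localDir, id.String(), block.SparseIndexHeaderFilename)
	dst := path.Join(id.String(), block.SparseIndexHeaderFilename)
	return objstore.UploadFile(ctx, logger, bkt, src, dst)
}
```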
17 changes: 16 additions & 1 deletion pkg/compactor/bucket_compactor_e2e_test.go
@@ -240,7 +240,7 @@ func TestGroupCompactE2E(t *testing.T) {
planner := NewSplitAndMergePlanner([]int64{1000, 3000})
grouper := NewSplitAndMergeGrouper("user-1", []int64{1000, 3000}, 0, 0, logger)
metrics := NewBucketCompactorMetrics(blocksMarkedForDeletion, prometheus.NewPedanticRegistry())
bComp, err := NewBucketCompactor(logger, sy, grouper, planner, comp, dir, bkt, 2, true, ownAllJobs, sortJobsByNewestBlocksFirst, 0, 4, metrics)
bComp, err := NewBucketCompactor(logger, sy, grouper, planner, comp, dir, bkt, 2, true, ownAllJobs, sortJobsByNewestBlocksFirst, 0, 4, metrics, true)
require.NoError(t, err)

// Compaction on empty should not fail.
@@ -374,6 +374,21 @@
return nil
}))

// expect the blocks that are compacted to have sparse-index-headers in object storage.
require.NoError(t, bkt.Iter(ctx, "", func(n string) error {
id, ok := block.IsBlockDir(n)
if !ok {
return nil
}

if _, ok := others[id.String()]; ok {
p := path.Join(id.String(), block.SparseIndexHeaderFilename)
exists, _ := bkt.Exists(ctx, p)
assert.True(t, exists, "expected sparse index headers not found %s", p)
}
return nil
}))

for id, found := range nonCompactedExpected {
assert.True(t, found, "not found expected block %s", id.String())
}
4 changes: 2 additions & 2 deletions pkg/compactor/bucket_compactor_test.go
@@ -120,7 +120,7 @@ func TestBucketCompactor_FilterOwnJobs(t *testing.T) {
m := NewBucketCompactorMetrics(promauto.With(nil).NewCounter(prometheus.CounterOpts{}), nil)
for testName, testCase := range tests {
t.Run(testName, func(t *testing.T) {
bc, err := NewBucketCompactor(log.NewNopLogger(), nil, nil, nil, nil, "", nil, 2, false, testCase.ownJob, nil, 0, 4, m)
bc, err := NewBucketCompactor(log.NewNopLogger(), nil, nil, nil, nil, "", nil, 2, false, testCase.ownJob, nil, 0, 4, m, false)
require.NoError(t, err)

res, err := bc.filterOwnJobs(jobsFn())
@@ -156,7 +156,7 @@ func TestBlockMaxTimeDeltas(t *testing.T) {

metrics := NewBucketCompactorMetrics(promauto.With(nil).NewCounter(prometheus.CounterOpts{}), nil)
now := time.UnixMilli(1500002900159)
bc, err := NewBucketCompactor(log.NewNopLogger(), nil, nil, nil, nil, "", nil, 2, false, nil, nil, 0, 4, metrics)
bc, err := NewBucketCompactor(log.NewNopLogger(), nil, nil, nil, nil, "", nil, 2, false, nil, nil, 0, 4, metrics, true)
require.NoError(t, err)

deltas := bc.blockMaxTimeDeltas(now, []*Job{j1, j2})
5 changes: 5 additions & 0 deletions pkg/compactor/compactor.go
@@ -130,6 +130,9 @@ type Config struct {
// Allow downstream projects to customise the blocks compactor.
BlocksGrouperFactory BlocksGrouperFactory `yaml:"-"`
BlocksCompactorFactory BlocksCompactorFactory `yaml:"-"`

// Allow compactor to upload sparse-index-header files
UploadSparseIndexHeaders bool `yaml:"upload_sparse_index_headers" category:"experimental"`
}

// RegisterFlags registers the MultitenantCompactor flags.
@@ -158,6 +161,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet, logger log.Logger) {
f.DurationVar(&cfg.TenantCleanupDelay, "compactor.tenant-cleanup-delay", 6*time.Hour, "For tenants marked for deletion, this is the time between deletion of the last block, and doing final cleanup (marker files, debug files) of the tenant.")
f.BoolVar(&cfg.NoBlocksFileCleanupEnabled, "compactor.no-blocks-file-cleanup-enabled", false, "If enabled, will delete the bucket-index, markers and debug files in the tenant bucket when there are no blocks left in the index.")
f.DurationVar(&cfg.MaxLookback, "compactor.max-lookback", 0*time.Second, "Blocks uploaded before the lookback aren't considered in compactor cycles. If set, this value should be larger than all values in `-blocks-storage.tsdb.block-ranges-period`. A value of 0s means that all blocks are considered regardless of their upload time.")
f.BoolVar(&cfg.UploadSparseIndexHeaders, "compactor.upload-sparse-index-headers", false, "If enabled, the compactor will construct and upload sparse index headers to object storage during each compaction cycle.")

// compactor concurrency options
f.IntVar(&cfg.MaxOpeningBlocksConcurrency, "compactor.max-opening-blocks-concurrency", 1, "Number of goroutines opening blocks before compaction.")
Expand Down Expand Up @@ -834,6 +838,7 @@ func (c *MultitenantCompactor) compactUser(ctx context.Context, userID string) e
c.compactorCfg.CompactionWaitPeriod,
c.compactorCfg.BlockSyncConcurrency,
c.bucketCompactorMetrics,
c.compactorCfg.UploadSparseIndexHeaders,
)
if err != nil {
return errors.Wrap(err, "failed to create bucket compactor")
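As a quick usage sketch, assuming only the `Config`, `RegisterFlags`, and flag name shown in this diff: the CLI flag maps onto the YAML key `upload_sparse_index_headers` and the resulting field is what `compactUser` forwards as the new trailing argument of `NewBucketCompactor`. The program below is illustrative, not part of the PR.

```go
package main

import (
	"flag"
	"fmt"

	"github.com/go-kit/log"

	"github.com/grafana/mimir/pkg/compactor"
)

func main() {
	fs := flag.NewFlagSet("mimir", flag.ContinueOnError)

	var cfg compactor.Config
	cfg.RegisterFlags(fs, log.NewNopLogger())

	// Equivalent to setting `upload_sparse_index_headers: true` in the compactor YAML block.
	if err := fs.Parse([]string{"-compactor.upload-sparse-index-headers=true"}); err != nil {
		panic(err)
	}

	// The value is later passed through compactUser as the final argument to
	// NewBucketCompactor, enabling the best-effort sparse-index-header uploads.
	fmt.Println(cfg.UploadSparseIndexHeaders) // true
}
```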
@@ -13,7 +13,7 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/prometheus/tsdb/index"

streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
)

const (
@@ -17,8 +17,8 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/prometheus/tsdb/index"

streamencoding "github.com/grafana/mimir/pkg/storegateway/indexheader/encoding"
"github.com/grafana/mimir/pkg/storegateway/indexheader/indexheaderpb"
streamencoding "github.com/grafana/mimir/pkg/storage/indexheader/encoding"
"github.com/grafana/mimir/pkg/storage/indexheader/indexheaderpb"
)

const (
@@ -16,8 +16,8 @@ import (
"github.com/grafana/dskit/runutil"
"github.com/prometheus/prometheus/tsdb/index"

streamencoding "github.com/grafana/mimir/pkg/storegateway/indexheader/encoding"
"github.com/grafana/mimir/pkg/storegateway/indexheader/indexheaderpb"
streamencoding "github.com/grafana/mimir/pkg/storage/indexheader/encoding"
"github.com/grafana/mimir/pkg/storage/indexheader/indexheaderpb"
)

// The table gets initialized with sync.Once but may still cause a race
@@ -18,7 +18,7 @@ import (
"github.com/prometheus/prometheus/tsdb/index"
"github.com/stretchr/testify/require"

streamencoding "github.com/grafana/mimir/pkg/storegateway/indexheader/encoding"
streamencoding "github.com/grafana/mimir/pkg/storage/indexheader/encoding"
"github.com/grafana/mimir/pkg/util/test"
)

@@ -24,8 +24,8 @@ import (
"github.com/thanos-io/objstore"
"go.uber.org/atomic"

streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
)

var (
@@ -28,8 +28,8 @@ import (
"github.com/thanos-io/objstore/providers/filesystem"
"go.uber.org/atomic"

streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
"github.com/grafana/mimir/pkg/util/test"
)

@@ -24,10 +24,10 @@ import (
"github.com/prometheus/prometheus/tsdb/index"
"github.com/thanos-io/objstore"

streamencoding "github.com/grafana/mimir/pkg/storage/indexheader/encoding"
streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
"github.com/grafana/mimir/pkg/storage/indexheader/indexheaderpb"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
streamencoding "github.com/grafana/mimir/pkg/storegateway/indexheader/encoding"
streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
"github.com/grafana/mimir/pkg/storegateway/indexheader/indexheaderpb"
"github.com/grafana/mimir/pkg/util/atomicfs"
"github.com/grafana/mimir/pkg/util/spanlogger"
)
@@ -16,8 +16,8 @@ import (
"github.com/stretchr/testify/require"
"github.com/thanos-io/objstore/providers/filesystem"

streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
"github.com/grafana/mimir/pkg/util/spanlogger"
)

2 changes: 1 addition & 1 deletion pkg/storage/tsdb/config.go
@@ -20,7 +20,7 @@

"github.com/grafana/mimir/pkg/ingester/activeseries"
"github.com/grafana/mimir/pkg/storage/bucket"
"github.com/grafana/mimir/pkg/storegateway/indexheader"
"github.com/grafana/mimir/pkg/storage/indexheader"
)

const (
4 changes: 2 additions & 2 deletions pkg/storegateway/bucket.go
@@ -45,14 +45,14 @@ import (
"google.golang.org/grpc/status"

"github.com/grafana/mimir/pkg/mimirpb"
"github.com/grafana/mimir/pkg/storage/indexheader"
streamindex "github.com/grafana/mimir/pkg/storage/indexheader/index"
"github.com/grafana/mimir/pkg/storage/sharding"
"github.com/grafana/mimir/pkg/storage/tsdb"
"github.com/grafana/mimir/pkg/storage/tsdb/block"
"github.com/grafana/mimir/pkg/storage/tsdb/bucketcache"
"github.com/grafana/mimir/pkg/storegateway/hintspb"
"github.com/grafana/mimir/pkg/storegateway/indexcache"
"github.com/grafana/mimir/pkg/storegateway/indexheader"
streamindex "github.com/grafana/mimir/pkg/storegateway/indexheader/index"
"github.com/grafana/mimir/pkg/storegateway/storegatewaypb"
"github.com/grafana/mimir/pkg/storegateway/storepb"
"github.com/grafana/mimir/pkg/util"