23 hours ago
rustfs

1.0.0-alpha.84

What's Changed

New Contributors

Full Changelog: https://github.com/rustfs/rustfs/compare/1.0.0-alpha.83...1.0.0-alpha.84

4 days ago
MeiliSearch

v1.37.0

[!IMPORTANT]
This release contains breaking changes for users of the network experimental feature.

Meilisearch v1.37 introduces replicated sharding, removes the vectorStoreSetting experimental feature, stabilizes our new vector store for best performance, and ships a security fix along with miscellaneous improvements.

✨ Improvements

§ Replicated sharding

[!NOTE] Replicated sharding requires Meilisearch Enterprise Edition (EE).

  • Users of Meilisearch Cloud, please contact support if you need replicated sharding.
  • Users of the Community Edition, please contact sales if you want to use replicated sharding in production.

§ Breaking changes

  • When leader is not null, network objects sent to the PATCH /network route must now contain at least one shard object, each containing at least one remote.

When leader is not null, existing databases will be migrated automatically when upgraded with --experimental-dumpless-upgrade, such that for each remote:

  1. A shard with the same name as the remote is created
  2. This shard has exactly one remote in its remotes list: the remote with the same name as the shard.

This change will not cause any document to be resharded.

To allow upgrading without resharding, the migration reuses each remote's name as the name of its shard. In new configurations, however, we recommend giving shards and remotes distinct names.

Example of migration

For instance, the following network object:

{
  "leader": "ms-00",
  "self": "ms-01",
  "remotes": {
    "ms-00": { /* .. */ },
    "ms-01": { /* .. */ }
  }
}

is converted to:

{
  "leader": "ms-00",
  "self": "ms-01",
  "remotes": {
    "ms-00": { /* .. */ },
    "ms-01": { /* .. */ }
  },
  "shards": {  // ✨ NEW
    "ms-00": {  // shard named like the remote
      "remotes": ["ms-00"] // is owned by the remote
    },
    "ms-01": {
      "remotes": ["ms-01"]
    }
  }
}

Addition of network.shards

The network object for routes PATCH /network and GET /network now contains the new field shards: an object whose keys are shard names and whose values are shard objects.

Each shard object contains a single field remotes, which is an array of strings, each string representing the name of an existing remote.
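
For example, a network object declaring two shards contains an excerpt like the following (all names are illustrative):

// GET /network (response excerpt)
{
  "shards": {
    "s-a": {
      "remotes": ["ms-0", "ms-1"]
    },
    "s-b": {
      "remotes": ["ms-2"]
    }
  }
}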

Convenience fields

The shard objects in PATCH /network accept the additional fields addRemotes and removeRemotes, meant for convenience:

  • pass an array of remote names to shard.addRemotes to add these remotes to the list of remotes of a shard.
  • pass an array of remote names to shard.removeRemotes to remove these remotes from the list of remotes of a shard.
  • if present and non-null, shard.remotes will completely override the existing list of remotes for a shard.
  • if several of these options are present and non-null, the order of application is shard.remotes, then shard.addRemotes, then shard.removeRemotes (see the combined sketch below).
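
For illustration, here is a single hypothetical call combining all three fields, applied in the order described above (assuming remotes ms-0, ms-1 and ms-2 already exist):

// PATCH /network
{
  "shards": {
    "s-a": {
      "remotes": ["ms-0"],       // 1. override: owners become ["ms-0"]
      "addRemotes": ["ms-1"],    // 2. add: owners become ["ms-0", "ms-1"]
      "removeRemotes": ["ms-0"]  // 3. remove: owners become ["ms-1"]
    }
  }
}

After this call, ms-1 is the sole owner of s-a.
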
Adding a new shard with some remotes
// PATCH /network
{
  // assuming that remotes `ms-0`, `ms-1`, `ms-2` were sent in a previous call to PATCH /network
  "shards": {
    "s-a": { // new shard
      "remotes": ["ms-0", "ms-1"]
    }
  }
}

Remotes ms-0 and ms-1 own the new shard s-a.

Fully overriding the list of remotes owning a shard
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-0` and `ms-1`
  "shards": {
    "s-a": {
      "remotes": ["ms-2"]
    }
  }
}

ms-2 is now the sole owner of s-a, replacing ms-0 and ms-1.

Adding a remote without overriding the list of remotes owning a shard
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-2`
  "shards": {
    "s-a": {
      "addRemotes": ["ms-0"]
    }
  }
}

ms-0 and ms-2 are now the owners of s-a.

Removing a remote without overriding the list of remotes owning a shard
// PATCH /network
{
  // assuming remotes `ms-0`, `ms-1`, `ms-2`
  // assuming shard `s-a`, owned by `ms-0` and `ms-2`
  "shards": {
    "s-a": {
      "removeRemotes": ["ms-2"]
    }
  }
}

ms-0 is now the sole owner of s-a.

Entirely removing a shard from the list of shards

Set the shard to null:

// PATCH /network
{
  "shards": {
    "s-a": null
  }
}

Or set its remotes list to the empty list:

// PATCH /network
{
  "shards": {
    "s-a": {
      "remotes": []
    }
  }
}

network.shards validity

When network.leader is not null, each shard object in network.shards must:

  1. Only contain remotes that exist in the list of remotes.
  2. Contain at least one remote.

Additionally, network.shards must contain at least one shard.

Failure to meet any of these conditions will cause the PATCH /network route to respond with 400 invalid_network_shards.
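
For example, assuming leader is not null and no remote named no-such-remote was ever declared, the following hypothetical payload violates rule 1 and is rejected:

// PATCH /network
{
  "shards": {
    "s-a": {
      "remotes": ["no-such-remote"]  // not in network.remotes → 400 invalid_network_shards
    }
  }
}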

Change in sharding logic

Documents are now sharded according to the list of shards declared in the network rather than the list of remotes. All remotes owning a shard will process the documents that belong to this shard, allowing for replication.

Example of replication

The following configuration defines 3 remotes (0, 1 and 2) and 3 shards (A, B and C), such that each shard is owned by two remotes, achieving replication: losing one remote does not lose any document.

{
  "leader": "0",
  "self": "0",
  "remotes": {
    "0": { /* .. */ },
    "1": { /* .. */ },
    "2": { /* .. */ }
  },
  "shards": {
    "A": {
      "remotes": ["0", "1"]
    },
    "B": {
      "remotes": ["1", "2"]
    },
    "C": {
      "remotes": ["2", "0"]
    }
  }
}
  • Full replication is supported by having all remotes own all the shards (see the sketch after this list).
  • Unbalanced replication is supported by having some remotes own more shards than other remotes.
  • "Watcher" remotes are supported by having remotes that own no shards. Watcher remotes are of limited use in this release; they might be upgraded in a future release so that they keep all documents without indexing them, making it possible to "respawn" shards for other remotes.

useNetwork takes network.shards into account

When useNetwork: true is passed to a search query, it is expanded to multiple queries such that each shard declared in network.shards appears exactly once, associated with a remote that owns that shard.

This ensures that there are no missing or duplicate documents in the results.
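
As a minimal sketch, assuming a hypothetical index movies and that useNetwork is passed as a top-level search parameter:

// POST /indexes/movies/search
{
  "q": "dune",
  "useNetwork": true
}

The receiving remote expands this into one query per shard declared in network.shards, each routed to a remote that owns that shard.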

_shard filters

When the network experimental feature is enabled, it becomes possible to filter documents by the shard they belong to.

Given two shards s-a and s-b declared in network.shards:

  • _shard = "s-a" in a filter parameter of a search or documents fetch returns the documents that belong to s-a.
  • _shard != "s-a" returns the documents that do not belong to s-a.
  • _shard IN ["s-a", "s-b"] returns the documents that belong to s-a or to s-b.

You can use these new filters in a manual remote federated search to create a partitioning over all shards in the network, as sketched below.
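
A hedged sketch of such a partitioning, assuming shards s-a and s-b owned by remotes ms-0 and ms-1 respectively, a hypothetical index movies, and that each query's remote is selected via the existing federationOptions.remote field of remote federated search:

// POST /multi-search
{
  "federation": {},
  "queries": [
    {
      "indexUid": "movies",
      "q": "dune",
      "filter": "_shard = \"s-a\"",
      "federationOptions": { "remote": "ms-0" }  // ms-0 owns s-a
    },
    {
      "indexUid": "movies",
      "q": "dune",
      "filter": "_shard = \"s-b\"",
      "federationOptions": { "remote": "ms-1" }  // ms-1 owns s-b
    }
  ]
}

Each shard appears in exactly one query, which is the partitioning requirement stated in the note below.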

[!IMPORTANT] To avoid duplicate or missing documents in results, for manually crafted remote federated search requests, all shards should appear in exactly one query.

[!TIP] Search requests built with useNetwork: true already build a correct partitioning over shards. They should be preferred to manually crafted remote federated search requests in replicated sharding scenarios.

Update instructions

When updating your Meilisearch network using dumpless upgrade, please observe the following guidelines:

  1. Do not call the PATCH /network route until all remotes of the network have finished updating.
  2. If using the search routes with useNetwork: true, call them on not-yet-updated remotes. Calling them on already-updated remotes will cause the not-yet-updated remotes to fail the search, as they don't know about the _shard filters.

By @dureuill in https://github.com/meilisearch/meilisearch/pull/6128

§ Remove vectorStoreSetting experimental feature

The new HNSW vector store (hannoy) has been stabilized and is now the only supported vector store in Meilisearch.

As a result, updating to v1.37.0 will migrate all remaining legacy vector store indexes (using arroy) to hannoy, and the vectorStoreSetting experimental feature is no longer available.

By @Kerollmops in https://github.com/meilisearch/meilisearch/pull/6176

§ Improve indexing performance for embeddings

We removed a computationally expensive step from vector indexing.

On a DB with 20M documents, this saves about 300s on each 1,100s indexing batch, roughly a 27% reduction.

By @Kerollmops in https://github.com/meilisearch/meilisearch/pull/6175

§ 🔒 Security

  • Bump mini-dashboard (the local web interface), which
    • now stores the API key in RAM instead of in localStorage
    • bumps dependencies with potential security vulnerabilities

By @Strift and @curquiza in https://github.com/meilisearch/meilisearch/pull/6186 and https://github.com/meilisearch/meilisearch/pull/6172

§ 🔩 Miscellaneous

Full Changelog: https://github.com/meilisearch/meilisearch/compare/v1.36.0...v1.37.0

4 days ago
prometheus

3.10.0 / 2026-02-24

Prometheus now offers a distroless Docker image variant alongside the default busybox image. The distroless variant provides enhanced security with a minimal base image, uses UID/GID 65532 (nonroot) instead of nobody, and removes the VOLUME declaration. Both variants are available with -busybox and -distroless tag suffixes (e.g., prom/prometheus:latest-busybox, prom/prometheus:latest-distroless). The busybox image remains the default with no suffix for backwards compatibility (e.g., prom/prometheus:latest points to the busybox variant).

For users migrating existing named volumes from the busybox image to the distroless variant, the ownership can be adjusted with:

docker run --rm -v prometheus-data:/prometheus alpine chown -R 65532:65532 /prometheus

Then, the container can be started with the old volume with:

docker run -v prometheus-data:/prometheus prom/prometheus:latest-distroless

Users migrating from bind mounts might need to adjust permissions too, depending on their setup.
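
For example, assuming the data directory is bind-mounted from /srv/prometheus on the host (an illustrative path), ownership can be transferred to the nonroot user with:

sudo chown -R 65532:65532 /srv/prometheus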

  • [CHANGE] Alerting: Add alertmanager dimension to the following metrics: prometheus_notifications_dropped_total, prometheus_notifications_queue_capacity, prometheus_notifications_queue_length. #16355
  • [CHANGE] UI: Hide expanded alert annotations by default, enabling more information density on the /alerts page. #17611
  • [FEATURE] AWS SD: Add MSK Role. #17600
  • [FEATURE] PromQL: Add fill() / fill_left() / fill_right() binop modifiers for specifying default values for missing series. #17644
  • [FEATURE] Web: Add OpenAPI 3.2 specification for the HTTP API at /api/v1/openapi.yaml. #17825
  • [FEATURE] Dockerfile: Add distroless image variant using UID/GID 65532 and no VOLUME declaration. Busybox image remains default. #17876
  • [FEATURE] Web: Add on-demand wall time profiling under <URL>/debug/pprof/fgprof. #18027
  • [ENHANCEMENT] PromQL: Add more detail to histogram quantile monotonicity info annotations. #15578
  • [ENHANCEMENT] Alerting: Independent alertmanager sendloops. #16355
  • [ENHANCEMENT] TSDB: Experimental support for early compaction of stale series in memory, with the configurable threshold stale_series_compaction_threshold in the config file. #16929
  • [ENHANCEMENT] Service Discovery: Service discoveries can now be removed from the Prometheus binary with the Go build tag remove_all_sd, and individual service discoveries can be re-added with the build tags enable_<sd name>_sd. Users can build a custom Prometheus with only the necessary SDs for a smaller binary size. #17736
  • [ENHANCEMENT] Promtool: Support promql syntax features promql-duration-expr and promql-extended-range-selectors. #17926
  • [PERF] PromQL: Avoid unnecessary label extraction in PromQL functions. #17676
  • [PERF] PromQL: Improve performance of regex matchers like .*-.*-.*. #17707
  • [PERF] OTLP: Add label caching for OTLP-to-Prometheus conversion to reduce allocations and improve latency. #17860
  • [PERF] API: Compute /api/v1/targets/relabel_steps in a single pass instead of re-running relabeling for each prefix. #17969
  • [PERF] TSDB: Optimize LabelValues intersection performance for matchers. #18069
  • [BUGFIX] PromQL: Prevent query strings containing only UTF-8 continuation bytes from crashing Prometheus. #17735
  • [BUGFIX] Web: Fix missing X-Prometheus-Stopping header for /-/ready endpoint in NotReady state. #17795
  • [BUGFIX] PromQL: Fix PromQL info() function returning empty results when filtering by a label that exists on both the input metric and target_info. #17817
  • [BUGFIX] TSDB: Fix a bug during exemplar buffer grow/shrink that could cause exemplars to be incorrectly discarded. #17863
  • [BUGFIX] UI: Fix broken graph display after page reload, due to broken Y axis min encoding/decoding. #17869
  • [BUGFIX] TSDB: Fix memory leaks in buffer pools by clearing reference fields (Labels, Histogram pointers, metadata strings) before returning buffers to pools. #17879
  • [BUGFIX] PromQL: Fix the info() function not returning series that lack identifying labels. #17898
  • [BUGFIX] OTLP: Filter __name__ from OTLP attributes to prevent duplicate labels. #17917
  • [BUGFIX] TSDB: Fix division by zero when computing stale series ratio with empty head. #17952
  • [BUGFIX] OTLP: Fix potential silent data loss for sum metrics. #17954
  • [BUGFIX] PromQL: Fix smoothed interpolation across counter resets. #17988
  • [BUGFIX] PromQL: Fix panic with @ modifier on empty ranges. #18020
  • [BUGFIX] PromQL: Fix avg_over_time for a single native histogram. #18058