Ever wanted to use data from external systems to influence scoring in Elasticsearch? Now you can. Here is why and how we created a plugin that uses Redis to rescore top-ranking results in Elasticsearch.
We have been consulting on Elasticsearch and the Elastic Stack for many years now. Not a single month goes by without a customer asking us to do something non-standard and challenging with Elasticsearch and the Elastic Stack.
Usually, we try to work with the fundamentals of Elasticsearch and just Keep Things Simple™. We are firm believers in the KISS principle - and as such we always prefer to first try to model data and queries in a way that is native to Elasticsearch. That's how we usually squeeze every bit of performance and stability out of it.
Sometimes, however, the task at hand justifies doing something completely different and out of the ordinary.
The Problem
We were tasked by a customer to build an Elasticsearch-based search engine that reflects a complex and highly dynamic business logic in the search results. There was no doubt about Elasticsearch being the right choice for our customer: they needed full-text search and text-based relevance ranking, geo-search, aggregations, auto-complete and auto-correction for searches, all of which are features that Elasticsearch provides out of the box extremely well.
However, results ranking also had to reflect some non-trivial business logic. For example, the customer wanted to avoid displaying products that were out of stock for the user's location and shipping method, and also to boost certain products higher based on the time of year and even the time of day.
If we were to implement this natively in Elasticsearch, the data model would be far from simple (nested objects or parent-child modeling and many more unpleasant compromises), which in turn would cause queries to take a long time to complete.
The complexity of the customer's business logic was not the only challenge: at the customer's scale, the ranking logic was also changing rapidly. The only way to make Elasticsearch aware of product data updates, such as stock availability, is to push each update into Elasticsearch and overwrite the stored document. However, updating Elasticsearch frequently is not a good thing to do - it will slow down your cluster significantly, and that definitely wasn't an option for us. We needed to find a way to deal with both the rapid stream of incoming updates and the customer's complex business logic and data structures.
Eventually, we realized the only way to make it work was to make those judgments and rankings in Elasticsearch in conjunction with data that lives outside of Elasticsearch - somewhere that can be updated rapidly and can perform all the necessary complex logic. To achieve this, we created a way to leverage Redis as a backend for Elasticsearch scoring.
Leveraging Redis for Elasticsearch scoring
Our idea is simple - if we could query a low-latency service per document during search and get back a numeric value for that specific document, we could use that number as a multiplier to influence the document's score, up or down. To make this a viable solution, low latency for those lookups is key.
Theoretically, that query can even be a complex one - as long as it responds super-fast, and consistently so. For that, Redis was a natural choice, as it provides consistently high query performance. Since Redis is mostly a key/value store, a simple use case would involve querying by a single key.
In our solution, the general flow goes as follows:
- Each document has a single-value field that we are going to use as a score influencer. Let's assume an e-commerce scenario and say we use the Product ID as the reference value for external scoring. In this case, each document represents a product and has a productId field containing its product ID.
- A general search query is executed and results scoring is done by Elasticsearch's usual algorithms (e.g. tf/idf for text relevance). If we wrote the query properly, the top results are indeed going to be the most relevant ones for the user.
- Now comes the external scoring part. For each document in the results, we query Redis by key, the key being the value of the score-influencer field (productId), and use the numeric value under that Redis key for boosting (or demoting) the result. So, for example, a document with productId == A123 will trigger a Redis lookup for the key A123 and, if that key exists and contains a numeric value, we multiply the score produced by Elasticsearch for this document by the value from Redis.
- Since we multiply by that value, we can use 1 (or no value at all) to leave the result score unchanged, or 0 to completely remove the document from the results. We can also use any other value to boost the result up or down in the result set.
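To make the flow concrete, here is a minimal client-side sketch of the same lookup-and-multiply logic in Python. The rescoring plugin performs this step inside Elasticsearch; this sketch only simulates it. The index name, hosts and query below are illustrative assumptions, and it assumes the redis package and a recent elasticsearch-py client.

# Client-side simulation of the flow described above. The plugin does this
# lookup-and-multiply inside Elasticsearch; this only illustrates the logic.
# Index name, hosts and the query are assumptions.
from elasticsearch import Elasticsearch
import redis

es = Elasticsearch("http://localhost:9200")
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

resp = es.search(index="products", query={"match": {"name": "running shoes"}})

rescored = []
for hit in resp["hits"]["hits"]:
    product_id = hit["_source"]["productId"]
    value = r.get(product_id)                      # e.g. the value stored under key "A123"
    multiplier = float(value) if value is not None else 1.0
    rescored.append((hit["_score"] * multiplier, product_id))

# 0 removes a product, 1 (or a missing key) keeps it as-is,
# any other value boosts or demotes it.
rescored.sort(reverse=True)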
The most interesting opportunity here is how those numeric values are put into Redis. You can use whatever business logic and processing methods make sense for your business, and while those processes can be long and heavy to run, with this approach Elasticsearch only uses the final result per document, once it's ready.
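For example, a hypothetical offline job could translate business rules into plain numeric multipliers and write them to Redis. The rules and product data below are placeholders, not the customer's actual logic; only the redis package is assumed.

# Sketch of a hypothetical offline job that turns business logic into plain
# numeric multipliers in Redis. The rules and data below are placeholders.
import redis

r = redis.Redis(host="localhost", port=6379)

def compute_multiplier(product):
    if product["stock"] == 0:
        return 0.0        # hide out-of-stock products
    if product["seasonal"]:
        return 2.0        # boost seasonal items
    return 1.0            # leave everything else unchanged

products = [
    {"id": "A123", "stock": 14, "seasonal": True},
    {"id": "B456", "stock": 0,  "seasonal": False},
]

# Write all multipliers in one round trip; however heavy the job itself is,
# Elasticsearch only ever sees the final per-product number.
pipe = r.pipeline()
for p in products:
    pipe.set(p["id"], compute_multiplier(p))
pipe.execute()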
Rescoring
Even though Redis is a very fast data store, and we can indeed achieve very low-latency responses from it, performing an external request per document is still going to be at least an order of magnitude slower than scoring with data that is local to the node. This is why, in a potentially huge Elasticsearch index with millions (or more) of documents, we'd like to run external scoring only for a select number of candidate documents - for example, the 50 top-ranking documents - instead of doing it for every search result.
Luckily, Elasticsearch has a feature called Rescoring. It allows you to run a secondary scoring and sorting pass on the top results: while the initial query and scoring stay fast, the secondary pass can use slower methods, such as our external Redis-based scoring, which requires network access and is therefore going to be slower.
With Rescoring, the exact same process described above is still executed, except that it is now done only for the top results (say, the top 50 or 100). The window size you select needs to account for the total number of results you want and an estimate of how many results will remain at the top after rescoring (considering that some results can be demoted significantly).
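For comparison, this is roughly what rescoring looks like with Elasticsearch's built-in query rescorer: window_size caps how many of the top hits (per shard) go through the second, slower pass. The index and field names here are made up, and the snippet assumes a recent elasticsearch-py client.

# Elasticsearch's built-in "query" rescorer, shown for comparison.
# Only the top window_size hits per shard go through the second pass.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="products",
    query={"match": {"description": "running shoes"}},
    rescore={
        "window_size": 50,
        "query": {
            "rescore_query": {"match_phrase": {"description": "running shoes"}},
            "query_weight": 0.7,
            "rescore_query_weight": 1.2,
        },
    },
)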
Elasticsearch Redis Rescoring Plugin
We developed a general purpose plugin to allow performing rescoring backed by Redis. It is open-source and ready to use, available on our GitHub account: https://github.com/BigDataBoutique/elasticsearch-rescore-redis
To use it, you’d need to install the plugin and then execute a rescoring query:
{
  "query": { "match_all": {} },
  "rescore": {
    "redis": {
      "key_field": "productId.keyword",
      "key_prefix": "mystore-"
    }
  }
}
In this example, we expect each hit to contain a productId field (of keyword type). The value of that field will be looked up in Redis as a key: for example, the Redis key mystore-abc123 will be looked up for a document whose productId is abc123 (the mystore- key prefix is configurable at query time).
The supported field types to rescore on are keyword and the numeric types. The window size to use really depends on the amount of filtering you expect to happen. The reason we went with the rescoring approach is performance - given a correct and minimal window size, Redis is only queried for the top-ranking results, which translates into very high performance.
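Putting it together, an end-to-end sketch might look like this: seed the prefixed multiplier keys in Redis, then run the rescoring query. window_size is the standard Elasticsearch rescore parameter; the hosts, index name and multiplier values are assumptions for illustration, and the snippet assumes the plugin is installed and a recent elasticsearch-py client.

# Hypothetical end-to-end usage of the example above: write the prefixed
# multiplier keys to Redis, then issue the Redis-backed rescore query.
from elasticsearch import Elasticsearch
import redis

r = redis.Redis(host="localhost", port=6379)
r.set("mystore-abc123", 1.5)   # boost product abc123
r.set("mystore-def456", 0)     # drop product def456 from the results

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="products",
    query={"match_all": {}},
    rescore={
        "window_size": 50,
        "redis": {
            "key_field": "productId.keyword",
            "key_prefix": "mystore-",
        },
    },
)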
External scoring can be applied in many interesting use cases:
- Scoring based on input from an external system (for example: current weather, traffic, and so on)
- Rapid changes to scoring factors or complex business logic for scoring
- Predictive search
- Personalization
- Real-time stock availability of products in an e-commerce system
- Enforcing viewing permissions
Using Redis is just one option - one that is simple, stable and fits many straightforward scenarios. Over the years we have found ourselves writing custom versions of this plugin to support much more advanced use cases, for example when the multiplier value depends on additional query parameters or on data from more than one document field. With this plugin and concept, we are barely scratching the surface of what's possible for large-scale and complex systems with Elasticsearch.
Happy ranking!
Need help with optimizing your Elasticsearch cluster? Reach out to us today