Sitecore Search Highlighting with Solr: the highlights

In this post

Examples of how to get going with search result highlighting, using the Sitecore ContentSearch API and Solr

What does highlighting look like?

Solr’s highlighting system is extremely powerful. A simple use-case is to show the part of the document which matched a user’s search terms. We call this part a snippet. We can even supply some HTML to wrap the matching terms:

Search: healthy
Wrap with: <em> </em>
Snippet: The <em>healthy</em> workplace toolkits support you either as a health care employer..

Code: A Basic Search

Our documents have a field called ‘Summary’. Sitecore and the ContentSearch API don’t know about this field by default, so we create a custom SearchResultItem class to include the field in our search results:

using System;
using System.Runtime.Serialization;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;

public class SearchResultWithSummary : SearchResultItem
{
    [IndexField("summary_t")]
    [DataMember]
    public virtual string Summary { get; set; }
}

Let’s search for any documents with the word healthy in the Summary field. Note that highlighting is currently only supported when we search directly through SolrNet, so we’ll construct the query that way.

const string searchField = "summary_t";
const string searchValue = "healthy";

var index = ContentSearchManager.GetIndex(string.Format("sitecore_{0}_index", Sitecore.Context.Database.Name));
using (var context = index.CreateSearchContext())
{
	var results = context.Query<SearchResultWithSummary>(new SolrQueryByField(searchField, searchValue), new QueryOptions());

	foreach (var result in results)
	{
		@result.Summary
                // Results:
		// - The healthy workplace toolkits support you either as a health care employer, RCN workplace representative, employment agency or host organisation to create healthy working environments.
		// - Engaging families, communities and schools to change the outlook of a generation. The Healthy Weight Commitment Foundation is a broad-based, not-for-profit organization whose mission is to help reduce obesity.
		// - People who are homeless are more likely than the general population to have poor health. Through our Healthy Futures project, we help homeless people when they are admitted to hospital.
	}
}

Code: Let’s add highlighting!

We populate a QueryOptions object with a HighlightingParameters configuration, and pass this in when creating our query. We specify (Fields) the fields to include in the highlight snippet returned by Solr, (BeforeTerm) the token to place before our matched terms, and (AfterTerm) the token to place after the matched terms.

const string searchField = "summary_t";
const string searchValue = "healthy";

var queryOptions = new QueryOptions
{
	Highlight = new HighlightingParameters
	{
		Fields = new[] { searchField },
		BeforeTerm = "<em>",
		AfterTerm = "</em>"
	}
};

Now, let’s execute our query, passing in the queryOptions object. The results object we get back now contains a populated Highlights collection.

var index = ContentSearchManager.GetIndex(string.Format("sitecore_{0}_index", Sitecore.Context.Database.Name));
using (var context = index.CreateSearchContext())
{
	var results = context.Query<SearchResultWithSummary>(new SolrQueryByField(searchField, searchValue), queryOptions);

	foreach (var result in results)
	{
		var highlights = results.Highlights[result.Fields["_uniqueid"].ToString()];

		if (highlights.Any())
		{
			<ul>
				@foreach (var highlight in highlights)
				{
					<li style="color: #696969">@result.Name</li>
					//The Healthy Workplace Toolkits
					<li>@Html.Raw(string.Join(",", highlight.Value))</li>
					// - The <em>healthy</em> workplace toolkits support you either as a health care employer, RCN workplace representative, employment agency or host organisation to create <em>healthy</em> working environments.
				}
			</ul>        
		}
	}
}

Controlling the size of the snippet

Solr allows us to pass in a parameter, Fragsize, to control the length of the snippet returned to us. I recommend playing around with this to suit your needs.

var queryOptions = new QueryOptions
{
	Highlight = new HighlightingParameters
	{
		Fields = new[] { searchField },
		BeforeTerm = "<em>",
		AfterTerm = "</em>",
		Fragsize = 30
	}
};
// - The <em>healthy</em> workplace toolkits support

A choice of highlighters!

Solr supports different highlighters – take a look at the “Choosing a Highlighter” section in the Solr documentation: https://lucene.apache.org/solr/guide/6_6/highlighting.html

The newest, shiniest highlighter (which shipped with Solr 6.4) is the Unified Highlighter (https://lucene.apache.org/solr/guide/6_6/highlighting.html#Highlighting-TheUnifiedHighlighter). By using this highlighter instead, we can remove the Fragsize parameter and instead get back a whole sentence containing our highlighted terms. We have to add another parameter to the QueryOptions object, ExtraParams, to tell Solr which highlighter to use:

var queryOptions = new QueryOptions
{
	Highlight = new HighlightingParameters
	{
		Fields = new[] { searchField },
		BeforeTerm = "<em>",
		AfterTerm = "</em>"
	},
	ExtraParams = new List<KeyValuePair<string, string>>
	{
		new KeyValuePair<string, string>("hl.method", "unified")
	}
};
// - Through our <em>healthy</em> Futures project, we help homeless people when they are admitted to hospital.

Can I use Linq?

To make use of the QueryOptions object, we have to query directly through SolrNet. Losing our fancy ContentSearch Linq capabilities is a big deal! Here’s a not-so-great workaround to get it back. We serialize the Linq query to a string, then use it to create a native SolrNet query, attaching our QueryOptions once again.

var query = context.GetQueryable<SearchResultWithSummary>().Where(x => x.Summary.Contains(searchValue));
var solrQuery = new SolrQuery(((IHasNativeQuery)query).Query.ToString());
var results = context.Query<SearchResultWithSummary>(solrQuery, queryOptions);

Feedback

I’d love to hear nicer ways of working with Linq and Highlighting – please let me know any work you’ve done in this area!

Create a custom Solr index in Sitecore 9

Hello there. 

Hi! So you want to create a new Solr index?

Yes, I think so?

It’s a great idea. You’ll be familiar with the big three, sitecore_core_index, sitecore_master_index and sitecore_web_index, but you don’t have to stop there! You can create individual indexes for certain content types on your site, such as Products. Smaller, more individualised indexes are easier to maintain and troubleshoot, faster to rebuild, and can be faster to query.

Are they hard to set up?

Not as hard as you’d expect! Let’s create one now.

OK. My Solr is set up and I can access the web UI on https://solr:8983/solr/#/ – what now?

Let’s create the physical Solr core.

  1. Find your Solr index folder for the sitecore_master_index. Mine was at C:\solr\solr-6.6.2\server\solr\sitecore_master_index
  2. Copy this whole folder (into the same parent folder) and call it sitecore_master_products_index
  3. Inside the sitecore_master_products_index folder, open up the core.properties file and change the name property to read sitecore_master_products_index
  4. Restart Solr (I use the solr stop and solr start commands – see below)
  5. Now, go to https://solr:8983/solr/#/ and check out your cores – you will have a new one!
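
For reference, the stop / start commands from step 4, run from your Solr bin folder (these are the standard Solr CLI commands – adjust the path and version to match your install):

C:\solr\solr-6.6.2\bin> solr stop -all
C:\solr\solr-6.6.2\bin> solr start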

Awesome, it’s there. So I get that we copied the sitecore_master_index and renamed it to sitecore_master_products_index – and in Solr I can see that it contains thousands of documents already, copied from sitecore_master_index. How do I clean things up?

Well, good question. We want to delete all of the existing items in this index and start afresh. You can do this via a web browser – just call this URL:

https://solr:8983/solr/sitecore_master_products_index/update?commit=true&stream.body=<delete><query>*:*</query></delete>

Radical. Everything is deleted. Soo. I want to use this index to only contain certain types of content from Sitecore. How do I configure it properly?

We just need to add a single configuration file to Sitecore. It’s below. It looks mostly like the configuration file for sitecore_master_index, but we change two important things: (a) which templates we want to include in our index and (b) which fields we want to include in our index. In your real solution, this will take a bit of time to set up, but being selective is the whole point of creating a custom index, and you’ll want to keep it as trim as possible.

Here’s the whole config file, which I’ve called Sitecore.ContentSearch.Solr.Index.Master.Products.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:search="http://www.sitecore.net/xmlconfig/search/">
  <sitecore role:require="Standalone or ContentManagement" search:require="solr">
    <contentSearch>
      <configuration type="Sitecore.ContentSearch.ContentSearchConfiguration, Sitecore.ContentSearch">
        <indexes hint="list:AddIndex">
          <index id="sitecore_master_products_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
            <param desc="name">$(id)</param>
            <param desc="core">$(id)</param>
            <param desc="propertyStore" ref="contentSearch/indexConfigurations/databasePropertyStore" param1="$(id)" />
            <configuration ref="contentSearch/indexConfigurations/defaultSolrIndexConfiguration">
              <documentOptions type="Sitecore.ContentSearch.SolrProvider.SolrDocumentBuilderOptions, Sitecore.ContentSearch.SolrProvider">
                <indexAllFields>false</indexAllFields>

                <!-- Included fields -->
                <include hint="list:AddIncludedField">
                  <ProductName>{E676F36E-B0E0-4BE5-998A-329A8F9055FD}</ProductName>
                  <LongDescription>{8A978A2E-0E7A-4415-9163-2F4ECF85A3AB}</LongDescription>
                </include>

                <!-- Included templates -->
                <include hint="list:AddIncludedTemplate">
                  <Product>{665DC431-673A-4D63-B9A6-00EB148E693C}</Product>
                </include>
              </documentOptions>
            </configuration>
            <strategies hint="list:AddStrategy">
              <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/syncMaster" />
            </strategies>
            <locations hint="list:AddCrawler">
              <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
                <Database>master</Database>
                <Root>/sitecore</Root>
              </crawler>
            </locations>
            <enableItemLanguageFallback>false</enableItemLanguageFallback>
            <enableFieldLanguageFallback>false</enableFieldLanguageFallback>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>

The two bits you’ll need to replace here are the bits commented as Included Fields and Included Templates:

<!-- Included fields -->
<include hint="list:AddIncludedField">
  <ProductName>{E676F36E-B0E0-4BE5-998A-329A8F9055FD}</ProductName>
  <LongDescription>{8A978A2E-0E7A-4415-9163-2F4ECF85A3AB}</LongDescription>
</include>

<!-- Included templates -->
<include hint="list:AddIncludedTemplate">
  <Product>{665DC431-673A-4D63-B9A6-00EB148E693C}</Product>
</include>

OK, done. I’ve added my list of templates, and fields here. So, can I reindex now and see my new content?

Absolutely. Go into Sitecore > Control Panel > Indexing Manager, find your index and rebuild it.

When you’re done, go back to the Solr UI and see your documents! If things didn’t go quite to plan, check your site’s Crawling.log, which will contain any indexing errors.
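
If you prefer code to the UI, here’s a minimal sketch (assuming the index name and the Product template ID from the config above) which kicks off a rebuild programmatically and then queries the new index:

using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;
using Sitecore.Data;

// Rebuild the custom index from code (equivalent to the Indexing Manager rebuild)
var index = ContentSearchManager.GetIndex("sitecore_master_products_index");
index.Rebuild();

// Query the new index for items based on the Product template
using (var context = index.CreateSearchContext())
{
    var products = context.GetQueryable<SearchResultItem>()
        .Where(x => x.TemplateId == new ID("{665DC431-673A-4D63-B9A6-00EB148E693C}"))
        .ToList();
}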

Production ready?

Well, not quite. You might want to create a sitecore_web_products_index and use the Sitecore.ContentSearch.Solr.Index.Web.config configuration file as an example of how to register it in Sitecore. Using Sitecore’s conventions for master and web keeps the surprises to a minimum.

Search on, pals!

 

Sitecore 9: ContentSearch Solr query quirks with spaces and wildcards

Sitecore provides a Linq powered IQueryable mechanism with which you can build powerful search queries. Your query will be translated into a native query for your underlying search engine (eg. Solr). There are some odd quirks (bugs?) with this translation in Sitecore 9.0 and 9.0.1 when your search term includes a space. Let’s take a look.

In the below examples, context is an instance of IProviderSearchContext, which you’d typically wire up with dependency injection. In each case, we’re looking to query something from the index based on the item’s path in the Sitecore tree.

Querying on exact matches:

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "Hello");
 Translates to: {_fullpath:(Hello)}

Ok! This makes sense.

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "Hello World");
 Translates to: {_fullpath:("Hello World")}

Notice that if your query term has a space, we need to wrap the term in quotes.

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "\\Hello");
 Translates to: {_fullpath:(\\Hello)}

Backslash? No problem.

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "/Hello");
 Translates to: {_fullpath:(\/Hello)}

Forward slash? We need to escape that with a ‘\’.

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "\\Hello World");
 Translates to: {_fullpath:("\\Hello World")}

Backslash with space? No problem, just add the quotes.

context.GetQueryable<SearchResultItem>().Where(x => x.Path == "/Hello World");
 Translates to: {_fullpath:("\/Hello World")}

As above, we’re all good; the forward slash is just escaped.

Querying on partial matches – where things get interesting:

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("Hello"));
 Translates to: {_fullpath:(*Hello*)}

All good. Here, we wrap our search term in a wildcard, *

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("Hello World"));
 Translates to: {_fullpath:("\*Hello\\ World\*")}

Uh oh! Something weird has happened. The quotes and wildcard seem to have got mixed up, and we’ve ended up with something which won’t return the results we want. Having read more about wildcard / space combinations, we probably want to end up with something simpler, like {_fullpath:(*Hello\ World*)}

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("\\Hello"));
 Translates to: {_fullpath:(*\\Hello*)}

No problem with this partial match, as we don’t have a space to deal with.

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("/Hello"));
 Translates to: {_fullpath:(*\/Hello*)}

Again, fine.

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("\\Hello World"));
 Translates to: {_fullpath:("\*\\Hello\\ World\*")}

The space completely breaks everything here…

context.GetQueryable<SearchResultItem>().Where(x => x.Path.Contains("/Hello World"));
 Translates to: {_fullpath:("\*\/Hello\\ World\*")}

…and here.

Summary

I raised this with Sitecore and it has been registered as a bug. In the meantime – if you can get away with using StartsWith rather than Contains, you’ll find this works OK:

context.GetQueryable<SearchResultItem>().Where(x => x.Path.StartsWith("Hello World"));
 Translates to: {_fullpath:(Hello\ World*)}

Which is just about perfect.
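
If StartsWith doesn’t fit your use case, you can also sidestep the Linq translation entirely and write the native query yourself, escaping the space by hand – the same SolrNet approach used in the highlighting post above. A sketch, assuming the same context object:

var solrQuery = new SolrQuery("_fullpath:(*Hello\\ World*)");
// Sends the query we actually wanted: {_fullpath:(*Hello\ World*)}
var results = context.Query<SearchResultItem>(solrQuery, new QueryOptions());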

Sitecore Solr setup: Document is missing mandatory uniqueKey field: id

While reconfiguring Sitecore (8.2u5) to use Solr (6.6.1) instead of Lucene, I came across the following error:

Document is missing mandatory uniqueKey field: id

In full:

Job started: Index_Update_IndexName=sitecore_master_index|#Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> SolrNet.Exceptions.SolrConnectionException: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">400</int><int name="QTime">1</int></lst><lst name="error"><lst name="metadata"><str name="error-class">org.apache.solr.common.SolrException</str><str name="root-error-class">org.apache.solr.common.SolrException</str></lst><str name="msg">Document is missing mandatory uniqueKey field: id</str><int name="code">400</int></lst>
</response>
 ---> System.Net.WebException: The remote server returned an error: (400) Bad Request.
 at System.Net.HttpWebRequest.GetResponse()
 at HttpWebAdapters.Adapters.HttpWebRequestAdapter.GetResponse()
 at SolrNet.Impl.SolrConnection.GetResponse(IHttpWebRequest request)
 at SolrNet.Impl.SolrConnection.PostStream(String relativeUrl, String contentType, Stream content, IEnumerable`1 parameters)
 --- End of inner exception stack trace ---
 at SolrNet.Impl.SolrConnection.PostStream(String relativeUrl, String contentType, Stream content, IEnumerable`1 parameters)
 at SolrNet.Impl.SolrConnection.Post(String relativeUrl, String s)

Here’s what to check.

  • Does your Solr core’s conf directory have a file called managed-schema? If so, Solr will be ignoring any changes you’re making to schema.xml and using managed-schema instead. Delete this file and reload the core, and Solr will pick up your latest version of schema.xml.
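
You can reload the core from the Core Admin screen in the Solr UI, or (using the core name from earlier) via the standard CoreAdmin API:

https://solr:8983/solr/admin/cores?action=RELOAD&core=sitecore_master_index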

 


Rebuild the index in Sitecore and the error should be gone. 


Elasticon London 2017 (for a Sitecore developer)

Even though Elasticsearch is built on the same foundations (Apache Lucene) as Solr, we in the Sitecore community don’t see a lot of cases where Solr or Coveo have been replaced by Elasticsearch as the main search component.

Today I’ve been at Elasticon London 2017 and have been soaking up new product releases by Elastic. Here are some notes I scribbled, and points which may be interesting to the Sitecore world.

Elasticsearch 6.0

  • Elasticsearch 6.0 is the next major release – but no firm release date has been set just yet. The official answer right now is “coming soon”.
  • Elastic have recognised how painful the upgrade process is – particularly between 2.x and 5.x. When performing the upgrade, you must re-index all of your data to make it 5.x compatible – a huge job on large clusters!
  • While the 2.x to 5.x upgrade path isn’t going to get much easier, Elastic are making sure that the 5.x to 6.x upgrade path doesn’t require you to re-index all data, or take nodes offline.
  • The .NET client creates REST requests with a JSON payload, proxying requests and responses between your code and the Elasticsearch cluster. This is the client model they’re sticking with, and they’re actually rewriting the native Java client (which currently doesn’t generate REST requests) to be more in line with the .NET client.
  • When it comes to swapping out Solr and Coveo for Elasticsearch, I think this would be a very individual decision based on the needs of your project.


Anomaly Detection

  • You can now create and set off unsupervised machine learning jobs to continually parse any data and highlight anomalies.
  • The engineer I spoke to said usually “around three weeks” of learning will be enough to begin pulling out anomalies.
  • This could have applications ranging from security (detecting unusual IP or DNS activity) to marketing: spotting if a ‘suspicious’ user journey is taking place (whatever this may be!) or perhaps highlighting if a user is stuck or lost.
  • The Elastic stack uses Beats – agents which can monitor a set of files, a database, or network packets, and stream the data into an Elasticsearch instance or cluster.


Opbeat

  • Elastic founder and CEO Shay Banon announced during his keynote that Elastic have acquired Opbeat, a Copenhagen based company whose product adds monitoring and profiling to JavaScript applications (think Node.js, React and Angular).
  • While this might have limited applicability for a lot of Sitecore solutions (where React and Angular might not be the norm), the interesting thing here will be to wait and see how Elastic fit Opbeat into their stack. My guess would be that they’ll extend the product and make it a more general-purpose monitoring and profiling tool.


Machine Learning

  • Elastic’s Steve Dodson (who heads up the Machine Learning product) showed us the current offering, which again is centred around anomaly detection.
  • Most of Elastic’s demo use-cases for anomaly detection are for ops-level indicators like 404s, 500s, response times, DDoS detection, and so on. Machine learning kicks in to ignore ‘regular’ surges, such as an increase in page response time during weekly batch jobs – but still alerts if the surge is stronger than usual.
  • With some fiddling, you could set up an anomaly detection profiler which tolerates a certain amount of server errors after a code release (and assumes you’ll fix them), but alerts you if it looks like you’ve broken something really big.
  • There was a preview of forecasting, a feature of the upcoming Elasticsearch 6.0. Forecasting does exactly what you’d expect – looks at historical data and predicts statistics for a future window.


Elasticsearch SQL

Being an ex-database nerd, I loved this session. I’m not sure how the wider search community are going to feel about writing SQL again, but here’s what Elastic have in development:

  • A SQL-like DSL which is 50% ANSI SQL and 50% Elasticsearch-specific syntax additions.
  • You can run queries like SHOW TABLES to list all indexes, DESCRIBE my_index to show fields (columns) and datatypes. You can run search queries like this:
SELECT * FROM my_index WHERE QUERY('+chris perks')
  • All SQL queries translate into the same old Elasticsearch QueryDSL
  • Other constructs they’re including are GROUP BY, HAVING, even JOINs are in there – all translating to their equivalent Elasticsearch QueryDSL commands.
  • You can even wrap SQL in JSON and use it via a REST call *confused face emoji*
  • This is still heavily in development and won’t be released for a while.

 

Summary

There’s plenty of overlap between what the Elastic stack offers and what you’ll already have set up with Sitecore, Solr, and xDB. There’ll be a fair amount of plumbing work to get Elasticsearch set up properly with Sitecore, whereas you get this out of the box with Solr and/or Coveo.

As Elastic expand, they’re adding many new tools and capabilities to their stack, so it’s definitely not correct to see Elasticsearch as a like-for-like replacement for Solr or Coveo or parts of xDB.

I can see use cases for running Elasticsearch alongside your current setup if you either need a particular Elastic capability which Sitecore / Solr doesn’t give you (such as machine learning or streaming data), or you have more faith in the scaling capability of Elasticsearch than you do in Sitecore and Solr.

 

Visualising Sitecore Analyzers

When Sitecore indexes your content, Lucene analyzers work to break down your text into a series of individual tokens. For instance, a simple analyzer might convert input text to lowercase, split into separate words, and remove punctuation:

  • input: Hi there! My name is Chris.
  • output tokens: “hi”, “there”, “my”, “name”, “is”, “chris”

While this happens behind the scenes, and is usually not of too much interest outside of diagnostics or curiosity, there’s a way we can view the output of the analyzers bundled with Sitecore.

Let’s get some input text to analyze, in both English and French:

var text = "Did the Quick Brown Fox jump over the Lazy Dog?";
var text_fr = "Le Fox brune rapide a-t-il sauté sur le chien paresseux?";

Next, let’s write a generic method which takes some text and a Lucene analyzer, and runs the text through the analyzer:

using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;

private static void displayTokens(Analyzer analyzer, string text)
{
    // Run the text through the analyzer, then step through the
    // resulting token stream, printing each term as we go
    var stream = analyzer.TokenStream("content", new StringReader(text));
    var term = stream.AddAttribute<ITermAttribute>();
    while (stream.IncrementToken())
    {
        Console.Write("'" + term.Term + "', ");
    }
}

Now, let’s try this out on some Sitecore analyzers!

  • CaseSensitiveStandardAnalyzer retains case, but removes punctuation and stop words (common words which offer no real value when searching)
displayTokens(new CaseSensitiveStandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30), text);
> 'Did', 'Quick', 'Brown', 'Fox', 'jump', 'over', 'Lazy', 'Dog'
  • LowerCaseKeywordAnalyzer converts the input to lowercase, but retains the punctuation and doesn’t split the input into separate words.
displayTokens(new LowerCaseKeywordAnalyzer(), text);
> 'did the quick brown fox jump over the lazy dog?'
  • NGramAnalyzer breaks text up into trigrams, which are useful for autocomplete.
displayTokens(new NGramAnalyzer(), text);
> 'did_the_quick', 'the_quick_brown', 'quick_brown_fox', 'brown_fox_jump', 'fox_jump_over', 'jump_over_the', 'over_the_lazy', 'the_lazy_dog'
  • StandardAnalyzerWithStemming introduces stemming, which finds a common root for similar words (lazy, lazily, laze -> lazi)
displayTokens(new StandardAnalyzerWithStemming(Lucene.Net.Util.Version.LUCENE_30), text);
> 'Did', 'the', 'Quick', 'Brown', 'Fox', 'jump', 'over', 'the', 'Lazi', 'Dog'
  • SynonymAnalyzer expands matched terms with synonyms from an XML file – here, ‘quick’ also yields ‘fast’ and ‘rapid’.
displayTokens(new SynonymAnalyzer(new XmlSynonymEngine("synonyms.xml")), text);
> 'did', 'quick', 'fast', 'rapid', 'brown', 'fox', 'jump', 'over', 'lazy', 'dog'
  • Lastly, we try a FrenchAnalyzer. Stop words are language specific, and so the community often contributes analyzers which will remove stop words in languages other than English. In the example below, we remove common French words.
displayTokens(new FrenchAnalyzer(Lucene.Net.Util.Version.LUCENE_30), text_fr);
> 'le', 'fox', 'brun', 'rapid', 't', 'saut', 'chien', 'pares'

The full code is here: (https://gist.github.com/christofur/e2ea406c21bccd3b032c9b861df0749b)

Explaining Lucene explain

Each time you perform a search using Lucene, a score is applied to the results returned by your query.

--------------------------------------
| #   | Score | Tags                 |
--------------------------------------
| 343 | 2.319 | Movies, Action       |
| 201 | 2.011 | Music, Classical     |
| 454 | 1.424 | Movies, Kids         |
| 012 | 0.003 | Music, Kids          |
--------------------------------------

In our index, # is the unique document number, score is the closeness of each hit to our query, and tags is a text field belonging to a document.

There are many methods Lucene can use to calculate scoring. By default, we use the DefaultSimilarity implementation of the Similarity abstract class. This class implements the commonly referenced TfIdf scoring formula:
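
As given in the Similarity documentation linked below, the practical scoring function is:

$$\mathrm{score}(q,d) = \mathrm{coord}(q,d) \cdot \mathrm{queryNorm}(q) \cdot \sum_{t \in q} \Big( \mathrm{tf}(t \in d) \cdot \mathrm{idf}(t)^2 \cdot t.\mathrm{getBoost}() \cdot \mathrm{norm}(t,d) \Big)$$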


(more: https://lucene.apache.org/core/3_0_3/api/core/org/apache/lucene/search/Similarity.html)

If you’re new to Lucene (or even if you’re not!) this formula can be a bit much to get your head around. To get inside the formula for a given search result, Lucene provides an explanation feature, which we can call from code (C# example using Lucene.Net):

public List<Explanation> GetExplainerByRawQuery(string rawQuery)
{
    using (var searcher = new IndexSearcher(_directory, false))
    {
        // Create a parser, and parse a plain-text query which searches for items
        // tagged with 'movies' or 'kids' (or hopefully, both),
        // e.g. rawQuery = "tags:(movies OR kids)"
        var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
        var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_30, "tags", analyzer);
        var query = parser.Parse(rawQuery);

        // Get references to the top 25 results
        var hits = searcher.Search(query, 25).ScoreDocs;

        // For each hit, get the accompanying explanation plan.
        // We now have a List<Explanation>
        var explains = hits.Select(hit => searcher.Explain(query, hit.Doc)).ToList();

        // Clean up and return (the using block disposes the searcher)
        analyzer.Close();
        return explains;
    }
}

Calling searcher.Explain(query, hit.Doc) gives us a text output explanation of how the matched document scores against the query:

query: tags:movies|kids
----------------------------------------------------
| #   | Score  | Tags                              |
----------------------------------------------------
| 127 | 2.4824 | Movies, Kids, Animation, Movies   |
----------------------------------------------------
2.4824  sum of:
  1.4570  weight(tags:movies in 127) [DefaultSimilarity], result of:
    1.4570  score(doc=127,freq=2.0 = termFreq=2.0), product of:
      0.7079  queryWeight, product of:
        2.9105  idf(docFreq=147, maxDocs=1000)
        0.2432  queryNorm
      2.0581  fieldWeight in 127, product of:
        1.4142  tf(freq=2.0), with freq of:
          2.0000  termFreq=2.0
        2.9105  idf(docFreq=147, maxDocs=1000)
        0.5000  fieldNorm(doc=127)
  1.0255  weight(tags:kids in 127) [DefaultSimilarity], result of:
    1.0255  score(doc=127,freq=1.0 = termFreq=1.0), product of:
      0.7063  queryWeight, product of:
        2.9038  idf(docFreq=148, maxDocs=1000)
        0.2432  queryNorm
      1.4519  fieldWeight in 127, product of:
        1.0000  tf(freq=1.0), with freq of:
          1.0000  termFreq=1.0
        2.9038  idf(docFreq=148, maxDocs=1000)
        0.5000  fieldNorm(doc=127)

Ok! But still, there’s a lot going on in there. Let’s try and break it down.

  • 2.4824 is the total score for this single search result. As our query contained two terms, ‘movies’ and ‘kids’, Lucene breaks the overall query down into two subqueries.
  • The scores of the two subqueries (1.4570 for ‘movies’ and 1.0255 for ‘kids’) are added together to arrive at our total score.

For our first subquery, the ‘movies’ part, we arrive at the score of 1.4570 by multiplying queryWeight (0.7079) by fieldWeight (2.0581). Let’s go line by line:

  1.4570  weight(tags:movies in 127) [DefaultSimilarity], result of:
The total score for the ‘movies’ subquery is 1.4570. ‘tags:movies‘ is the raw query, 127 is the individual document number we’re examining, and DefaultSimilarity is the scoring mechanism we’re using.
1.4570 score(doc=127,freq=2.0 = termFreq=2.0), product of:
The term (‘movies‘) appears twice in the ‘tags‘ field for document 127, so we get a term frequency of 2.0
0.7079 queryWeight, product of:
queryWeight (0.7079) is how rare the search term is within the whole index – in our case, ‘movies‘ appears in 147 out of the 1000 documents in our index.
2.9105 idf(docFreq=147, maxDocs=1000)
  This rarity is called inverse document frequency (idf)
0.2432 queryNorm
  .. and is itself multiplied by a normalization factor (0.2432) called queryNorm.

This normalization factor is the same for all results returned by our query and just stops the queryWeight scores from becoming too exaggerated for any single result.

2.0581 fieldWeight in 127, product of:
  fieldWeight (2.0581) is how often the search term (‘movies‘) appears in the field we searched on, ‘tags’.
1.4142 tf(freq=2.0), with freq of:
  2.0000 termFreq=2.0
  We take the square root of the termFreq (2.0) = 1.4142
2.9105 idf(docFreq=147, maxDocs=1000)
  This is multiplied by the idf which we calculated above (2.9105)
0.5000 fieldNorm(doc=127)
   and finally by a field normalization factor (0.5000), which tells us how many overall terms were in the field.

This ‘boost‘ value will be higher for shorter fields – meaning the more prominent your search term was in a field, the more relevant the result.
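
Tying the numbers together for the ‘movies’ subquery, using the values from the explain output above:

queryWeight = idf × queryNorm = 2.9105 × 0.2432 ≈ 0.7079
fieldWeight = tf × idf × fieldNorm = 1.4142 × 2.9105 × 0.5000 ≈ 2.0581
score       = queryWeight × fieldWeight = 0.7079 × 2.0581 ≈ 1.4570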

Further reading:

Happy Lucene hacking!