Guide to Lucene Analyzers
1. Overview
We mentioned analyzers briefly in our introductory tutorial.
In this tutorial, we’ll discuss commonly used Analyzers, how to construct our custom analyzer, and how to assign different analyzers to different document fields.
2. Maven Dependencies
First, let's add these dependencies to our pom.xml:

<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>7.4.0</version>
</dependency>
The latest versions of these Lucene dependencies can be found in Maven Central.
3. Lucene Analyzer
Analyzers mainly consist of tokenizers and filters. Different analyzers consist of different combinations of tokenizers and filters.
To demonstrate the difference between commonly used analyzers, we’ll use the following method:
public List<String> analyze(String text, Analyzer analyzer) throws IOException {
    List<String> result = new ArrayList<>();
    TokenStream tokenStream = analyzer.tokenStream(FIELD_NAME, text);
    CharTermAttribute attr = tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.reset();
    while (tokenStream.incrementToken()) {
        result.add(attr.toString());
    }
    // complete the TokenStream contract so the analyzer can be safely reused
    tokenStream.end();
    tokenStream.close();
    return result;
}
This method converts a given text into a list of tokens using the given analyzer.
4. Common Lucene Analyzers
4.1. StandardAnalyzer
private static final String SAMPLE_TEXT = "This is baeldung.com Lucene Analyzers test";

@Test
public void whenUseStandardAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new StandardAnalyzer());

    assertThat(result,
      contains("baeldung.com", "lucene", "analyzers", "test"));
}
Note that the StandardAnalyzer keeps baeldung.com as a single token instead of splitting it at the dot (for full URL and email recognition, Lucene provides the separate UAX29URLEmailTokenizer).
Also, it removes stop words and lowercases the generated tokens.
4.2. StopAnalyzer
@Test
public void whenUseStopAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new StopAnalyzer());

    assertThat(result,
      contains("baeldung", "com", "lucene", "analyzers", "test"));
}
In this example, the LetterTokenizer splits text by non-letter characters, while the StopFilter removes stop words from the token list.
However, unlike the StandardAnalyzer, the StopAnalyzer splits baeldung.com into separate tokens.
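If we need our own stop word list, we can pass a CharArraySet to the StopAnalyzer constructor. Here's a minimal sketch; the chosen stop words are only illustrative:

@Test
public void whenUseStopAnalyzerWithCustomStopWords_thenAnalyzed() throws IOException {
    // "is" and "test" are dropped; the remaining tokens are split by non-letters and lowercased
    CharArraySet stopWords = new CharArraySet(Arrays.asList("is", "test"), true);
    List<String> result = analyze(SAMPLE_TEXT, new StopAnalyzer(stopWords));

    assertThat(result,
      contains("this", "baeldung", "com", "lucene", "analyzers"));
}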
4.3. SimpleAnalyzer
@Test
public void whenUseSimpleAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new SimpleAnalyzer());

    assertThat(result,
      contains("this", "is", "baeldung", "com", "lucene", "analyzers", "test"));
}
Here, the SimpleAnalyzer doesn't remove stop words, and it also splits baeldung.com into separate tokens.
4.4. WhitespaceAnalyzer
@Test
public void whenUseWhiteSpaceAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new WhitespaceAnalyzer());

    assertThat(result,
      contains("This", "is", "baeldung.com", "Lucene", "Analyzers", "test"));
}

Here, the WhitespaceAnalyzer splits the text only by whitespace, so the tokens keep their original case.
4.5. KeywordAnalyzer
@Test
public void whenUseKeywordAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new KeywordAnalyzer());

    assertThat(result, contains("This is baeldung.com Lucene Analyzers test"));
}
The KeywordAnalyzer keeps the entire input as a single token, which makes it useful for fields like ids and zipcodes.
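To see the difference this makes, we can compare how the KeywordAnalyzer and the StandardAnalyzer treat a zip+4 code; the value and test below are just for illustration:

@Test
public void whenAnalyzeZipCode_thenCompareAnalyzers() throws IOException {
    List<String> keywordResult = analyze("90210-1234", new KeywordAnalyzer());
    List<String> standardResult = analyze("90210-1234", new StandardAnalyzer());

    // the KeywordAnalyzer keeps the value intact, while the StandardAnalyzer splits it at the hyphen
    assertThat(keywordResult, contains("90210-1234"));
    assertThat(standardResult, contains("90210", "1234"));
}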
4.6. Language Analyzers
There are also special analyzers for different languages like EnglishAnalyzer, FrenchAnalyzer, and SpanishAnalyzer:
@Test
public void whenUseEnglishAnalyzer_thenAnalyzed() throws IOException {
    List<String> result = analyze(SAMPLE_TEXT, new EnglishAnalyzer());

    assertThat(result, contains("baeldung.com", "lucen", "analyz", "test"));
}
Here, we’re using the EnglishAnalyzer which consists of StandardTokenizer, StandardFilter, EnglishPossessiveFilter, LowerCaseFilter, StopFilter, and PorterStemFilter.
5. Custom Analyzer
Next, let’s see how to build our custom analyzer. We’ll build the same custom analyzer in two different ways.
In the first example, we’ll use the CustomAnalyzer builder to construct our analyzer from predefined tokenizers and filters:
@Test
public void whenUseCustomAnalyzerBuilder_thenAnalyzed() throws IOException {
    Analyzer analyzer = CustomAnalyzer.builder()
      .withTokenizer("standard")
      .addTokenFilter("lowercase")
      .addTokenFilter("stop")
      .addTokenFilter("porterstem")
      .addTokenFilter("capitalization")
      .build();

    List<String> result = analyze(SAMPLE_TEXT, analyzer);

    assertThat(result, contains("Baeldung.com", "Lucen", "Analyz", "Test"));
}
Our analyzer is very similar to EnglishAnalyzer, but it capitalizes the tokens instead.
In the second example, we’ll build the same analyzer by extending the Analyzer abstract class and overriding the createComponents() method:
public class MyCustomAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        StandardTokenizer src = new StandardTokenizer();
        TokenStream result = new StandardFilter(src);
        result = new LowerCaseFilter(result);
        result = new StopFilter(result, StandardAnalyzer.STOP_WORDS_SET);
        result = new PorterStemFilter(result);
        result = new CapitalizationFilter(result);
        return new TokenStreamComponents(src, result);
    }
}
We can also create our custom tokenizer or filter and add it to our custom analyzer if needed.
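For example, here's a minimal sketch of a custom TokenFilter that drops tokens shorter than a given length; the class name and the length threshold are purely illustrative:

public class MinLengthFilter extends TokenFilter {

    private final CharTermAttribute termAttr = addAttribute(CharTermAttribute.class);
    private final int minLength;

    public MinLengthFilter(TokenStream input, int minLength) {
        super(input);
        this.minLength = minLength;
    }

    @Override
    public boolean incrementToken() throws IOException {
        // skip tokens until we find one that is long enough, or the stream is exhausted
        while (input.incrementToken()) {
            if (termAttr.length() >= minLength) {
                return true;
            }
        }
        return false;
    }
}

We could then add it to the chain in createComponents() like any other filter, for example result = new MinLengthFilter(result, 3);.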
Now, let’s see our custom analyzer in action – we’ll use InMemoryLuceneIndex in this example:
@Test
public void givenTermQuery_whenUseCustomAnalyzer_thenCorrect() {
    InMemoryLuceneIndex luceneIndex = new InMemoryLuceneIndex(
      new RAMDirectory(), new MyCustomAnalyzer());
    luceneIndex.indexDocument("introduction", "introduction to lucene");
    luceneIndex.indexDocument("analyzers", "guide to lucene analyzers");

    Query query = new TermQuery(new Term("body", "Introduct"));
    List<Document> documents = luceneIndex.searchIndex(query);

    assertEquals(1, documents.size());
}
6. PerFieldAnalyzerWrapper
Using the PerFieldAnalyzerWrapper, we can assign a different analyzer to each field. First, we need to define our analyzerMap to map each analyzer to a specific field:
Map<String,Analyzer> analyzerMap = new HashMap<>();
analyzerMap.put("title", new MyCustomAnalyzer());
analyzerMap.put("body", new EnglishAnalyzer());
We mapped the “title” to our custom analyzer and the “body” to the EnglishAnalyzer.
Next, let’s create our PerFieldAnalyzerWrapper by providing the analyzerMap and a default Analyzer:
PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(
  new StandardAnalyzer(), analyzerMap);
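The wrapper behaves like any other Analyzer, so we can pass it wherever an Analyzer is expected, for example when creating an IndexWriter directly. A minimal sketch, using an in-memory directory purely for illustration:

Directory directory = new RAMDirectory();
IndexWriterConfig config = new IndexWriterConfig(wrapper);
IndexWriter writer = new IndexWriter(directory, config);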
Now, let’s test it:
@Test
public void givenTermQuery_whenUsePerFieldAnalyzerWrapper_thenCorrect() {
    InMemoryLuceneIndex luceneIndex = new InMemoryLuceneIndex(new RAMDirectory(), wrapper);
    luceneIndex.indexDocument("introduction", "introduction to lucene");
    luceneIndex.indexDocument("analyzers", "guide to lucene analyzers");

    Query query = new TermQuery(new Term("body", "introduct"));
    List<Document> documents = luceneIndex.searchIndex(query);
    assertEquals(1, documents.size());

    query = new TermQuery(new Term("title", "Introduct"));
    documents = luceneIndex.searchIndex(query);
    assertEquals(1, documents.size());
}
7. Conclusion
In this article, we discussed popular Lucene Analyzers, how to build a custom analyzer, and how to use a different analyzer per field.
The full source code can be found on GitHub.