hbasene (https://github.com/akkumar/hbasene) is an open-source project that layers Lucene on top of HBase storage to build indexes. Its API is very simple, so anyone familiar with Lucene can create an index with little effort.
The test code below reads an HBase table that records URLs and user IDs, builds an index over it, and runs a simple URL-based search; once the search returns, the desired records can be fetched. Because the original content is tokenized into an inverted index, matching turns into a direct lookup, which is very fast.
Here, getDocumentFromHTable reads an existing HBase table, extracts the url field, and indexes it as the content field.
Index creation boils down to HBaseIndexWriter and HBaseIndexReader, two classes derived from Lucene's IndexWriter and IndexReader, which handle writing and reading the index, while HBaseIndexStore handles the storage.
Tokenization and analysis during indexing still go through the standard Lucene API.
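As a refresher, the sketch below shows what the standard analyzer actually produces for a URL-like string (plain Lucene 3.0 API, matching the Version.LUCENE_30 used in the test code; the sample text is made up). Each emitted token becomes a key in the inverted index:

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.Version;

public class AnalyzerDemo {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
        TokenStream ts = analyzer.tokenStream("content",
                new StringReader("http://item.taobao.com/item.htm?id=1"));
        TermAttribute term = ts.addAttribute(TermAttribute.class);
        while (ts.incrementToken()) {
            System.out.println(term.term()); // one token per line, e.g. "item.taobao.com"
        }
        ts.close();
    }
}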
Note that hbasene is built against hbase-0.20.5; a small amount of its source must be modified for it to run on 0.90.x and later.
Below is a brief description of the structure of the index table. It is written from a Lucene-beginner's perspective, so comments and corrections are welcome.
The index table consists of the following column families (CFs); a small scan sketch after the list shows how to inspect them:
- fm.sequence: holds the sequenceId. When createLuceneIndexTable runs, this CF is hard-coded with row sequenceId, qualifier qual.sequence, and value -1; it can safely be ignored.
- fm.doc2int: the documentId mapping; every document gets an id here. If Field.Store is set to YES, the id can be looked up in the index table and the full content retrieved.
- fm.termVector: term-vector offsets, used for fuzzy matching; records offset and related information.
- fm.termFrequency: how often each term appears in each document after tokenization; the qualifier is the documentId and the value is the occurrence count.
- fm.fields: holds the content itself, with the documentId as row and the document's full text as value; it is the inverse of fm.doc2int, the latter being the reverse mapping of the index.
- fm.payloads: an extension CF, currently unused.
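To get a feel for this layout, here is a minimal sketch that scans one of these CFs in the index table (assumptions: the table name myindex matches the test code below, and the HBase 0.90.x client classes are on the classpath; the row-key interpretation in the comments follows the list above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class DumpIndexTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "myindex");
        Scan scan = new Scan();
        // Restrict the scan to fm.termFrequency: rows are indexed terms,
        // qualifiers are documentIds, values are occurrence counts.
        scan.addFamily("fm.termFrequency".getBytes());
        ResultScanner scanner = table.getScanner(scan);
        for (Result row : scanner) {
            System.out.println(row); // raw dump of one term row
        }
        scanner.close();
        table.close();
    }
}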
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.util.Version;
import org.hbasene.index.HBaseIndexReader;
import org.hbasene.index.HBaseIndexStore;
import org.hbasene.index.HBaseIndexWriter;

public class Test {
    static final String indexName = "myindex";
    static final String dataName = "t1";

    public static void main(String[] args) throws IOException {
        try {
            Configuration conf = HBaseConfiguration.create(); // hbase-site.xml in the classpath
            conf.set("hbase.rootdir", "hdfs://192.168.0.1:9000/hbase");
            conf.set("hbase.zookeeper.quorum", "192.168.0.1,192.168.0.2,192.168.0.3");
            HTablePool tablePool = new HTablePool(conf, 10);
            HBaseIndexStore.createLuceneIndexTable(indexName, conf, true);

            // Write
            HBaseIndexStore hbaseIndex = new HBaseIndexStore(tablePool, conf, indexName);
            HBaseIndexWriter writer = new HBaseIndexWriter(hbaseIndex, "content"); // name of the primary key field
            getDocument(writer); // or getDocumentFromHTable(tablePool, writer) to index an existing table
            writer.close();

            // Read / search
            IndexReader reader = new HBaseIndexReader(tablePool, indexName, "f");
            IndexSearcher searcher = new IndexSearcher(reader);
            Term term = new Term("content", "item.taobao.com");
            TermQuery termQuery = new TermQuery(term);
            TopDocs docs = searcher.search(termQuery, 3);
            for (ScoreDoc sd : docs.scoreDocs) {
                // fields stored with Field.Store.YES can be read back from the index table
                System.out.println(searcher.doc(sd.doc).get("content"));
            }
            searcher.close();
        } catch (IOException e) {
            e.printStackTrace();
            throw e;
        }
    }

    private static void getDocument(HBaseIndexWriter writer) throws IOException {
        Document doc = new Document();
        doc.add(new Field("content", "some content some dog", Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc, new StandardAnalyzer(Version.LUCENE_30));

        doc = new Document();
        doc.add(new Field("content", "some id", Field.Store.NO, Field.Index.ANALYZED));
        writer.addDocument(doc, new StandardAnalyzer(Version.LUCENE_30));

        doc = new Document();
        doc.add(new Field("content", "hot dog", Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS));
        writer.addDocument(doc, new StandardAnalyzer(Version.LUCENE_30));
    }

    private static void getDocumentFromHTable(HTablePool tablePool, HBaseIndexWriter writer) throws IOException {
        Scan scan = new Scan();
        HTable htable = (HTable) tablePool.getTable(dataName);
        ResultScanner results = htable.getScanner(scan);
        Result row;
        while ((row = results.next()) != null) {
            Document doc = new Document();
            String value = new String(row.getValue("test".getBytes(), null));
            String url = value.split("\"")[2]; // pull the url portion out of the raw cell value
            doc.add(new Field("content", url, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.WITH_OFFSETS));
            writer.addDocument(doc, new StandardAnalyzer(Version.LUCENE_30));
        }
        results.close();
    }
}
Reference: http://koven2049.iteye.com/blog/1129994