Is Search a Solved Problem?

September 1, 2012

“Is search a solved problem?”  This question came up at the industry panel of SIGIR 2012.  However, most of the panelists evaded the question in its strictest sense and pointed instead to new search problems or new relevance criteria, such as combining relevance with recency.  In its strictest sense, the question is whether full-text ad hoc retrieval is a solved problem.  Other ways of asking it include: “What is the next big thing in retrieval modeling?”, “How can we consistently and substantially improve over the basic tf.idf retrieval models?”, “Why haven’t retrieval models improved over BM25 in almost 20 years?”, and “How can one improve over Google at basic full-text Web search?”

I believe my dissertation on the term mismatch problem provides a satisfactory answer to many of these questions.

The first half of this article comments on these questions and on common misconceptions people have about retrieval modeling, and shows how they can be answered based on the new understanding of the term mismatch problem and its relationship to the tf.idf retrieval models.  The second half directly quotes paragraphs from the introduction chapter of my dissertation, since those paragraphs directly address the question “is search a solved problem”.  Interested readers can explore the full dissertation at http://www.cs.cmu.edu/~lezhao

In IR research, there is a misconception, or misperception, about the limitations of the current retrieval models.  For example, in the Salton lecture of SIGIR 2012, Norbert Fuhr focused on the term independence assumption made by the Binary Independence Model (BIM).  However, what people often overlook is another assumption made when applying the BIM in ad hoc retrieval, e.g. in Okapi BM25: it is commonly assumed that the probability of a term t occurring in the relevant set of documents R, P(t|R), is 0.5.  This assumption is made because in ad hoc retrieval the relevant set of documents for a query is typically unknown beforehand.  It cripples the BIM by removing the relevance variable completely from the full model.  My dissertation shows evidence that in many cases it creates a much more significant problem than the term independence assumption does.  It also shows that this probability is directly related to the term mismatch problem, and that solving the term mismatch problem could improve retrieval by 50-300%.  This means the mismatch problem occurs deep down at the retrieval model level, and that it is a significant problem with huge potential gains.
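To see exactly where this assumption bites, here is the standard Robertson–Spärck Jones term weight from the BIM (a textbook sketch added here for illustration, not a quotation from the dissertation):

```latex
% BIM / Robertson-Sparck Jones weight for a query term t:
%   p_t = P(t occurs | relevant document),  q_t = P(t occurs | non-relevant document)
w_t = \log \frac{p_t \,(1 - q_t)}{(1 - p_t)\, q_t}
% With no relevance judgments, ad hoc ranking (e.g. BM25) assumes p_t = 0.5,
% so the p_t / (1 - p_t) factor becomes 1 and only an idf-like part survives:
w_t \approx \log \frac{1 - q_t}{q_t} \approx \log \frac{N - \mathit{df}_t}{\mathit{df}_t}
```

Here N is the collection size and df_t is the document frequency of t.  The p_t factor, the only place where relevance, and hence mismatch (1 - p_t is the probability that a query term fails to appear in a relevant document), enters the term weight, drops out entirely, which is the crippling effect described above.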

Because the mismatch problem is neglected, when asked where search can be improved, people often point to something else.  For example, in the Salton lecture, Norbert Fuhr mentioned that IR is about vagueness, with which I agree, but then resorted to some form of user interaction to clarify the information need, stepping outside the realm of retrieval modeling.  Yet even within the scope of retrieval modeling, even when the search request is clearly described, as in most TREC queries, search is still not a solved problem.  How can search be solved when we are only getting search accuracy of around 0.2 to 0.5 in standard evaluations?  In the industry panel of SIGIR 2012, Diane Kelly, like Norbert Fuhr, also pointed to user interaction when answering the title question.  Krishna Gade from Twitter changed the relevance criterion to consider recency along with relevance, which is just another way to avoid the question “is search a solved problem” in its original sense.  Jerome Pesenti mentioned some cases of enterprise search, such as helping the user complete a task that uses search as a component, which also steps outside search strictly defined.  I am not 100% sure whether it was Trystan Upstill who mentioned structured retrieval or semantic retrieval, but structured retrieval is more of a tool than a problem.  Furthermore, strict structured matching between queries and answer texts, e.g. phrases or using the parsing structure of the query, worsens the mismatch problem.

Here, I am not trying to argue that retrieval modeling is the only problem in search.  These other directions are certainly valid and valuable research directions, and I am aware that search is only part of a bigger picture of decision making or task completion.  However, the retrieval model is the most fundamental component underlying a search system, and it affects any system or task that uses search as a component.  Fundamental limitations of the retrieval models affect system performance, the behavior of the retrieval techniques that work on top of the retrieval model, and the behavior of search users.  This is true everywhere search is used.  Because of that, understanding of the fundamentals should always be pursued with more enthusiasm than anything peripheral.  Now that we know mismatch is an important problem in retrieval modeling, there is every reason to understand and solve the term mismatch problem.

Of all the industry panelists, Steve Robertson’s point about searching small collections, such as desktop search and some enterprise search, is perhaps the only one about the core retrieval task.  More generally, a small collection typically implies a recall-oriented search, where the emphasis is on finding all relevant documents in the collection.  In desktop search, usually there is only one document the user wants.  In legal discovery, patent search, or bio/medical retrieval, the cost of missing a relevant document can be very high.  Now that we know mismatch is a fundamental limitation of the retrieval models, it is perhaps not so surprising that current retrieval systems are not doing well on these recall-oriented tasks.  Underlying the small-collection search problem, then, is the term mismatch problem.  If Bruce Croft were on the panel, he would probably have mentioned long queries, which are also term mismatch in disguise: the longer the query, the more likely its terms will mismatch relevant documents, and the more likely retrieval accuracy will suffer.

On large collections like the Web, is search solved?  Definitely not.  Searchers doing informational searches are still constantly frustrated by the major Web search engines (see [Feild, Allan and Jones 2010]), and mismatch is perhaps one of the major problems causing the frustration.  Even for precision-oriented search, my dissertation shows that successfully solving the mismatch problem can substantially improve precision as well, so solving mismatch is still valuable.  Are there problems other than mismatch?  Yes, definitely.  Sometimes precision can also be a problem.  However, a mismatch-afflicted query may look like a precision problem to the untrained eye, and a wrong diagnosis may cost the searcher a lot of time trying to fix the query in futile ways.  Are there other problems?  Maybe, but that has to be carefully investigated, clearly defined and connected back to the fundamentals.

—— The following text is quoted from my dissertation: http://www.cs.cmu.edu/~lezhao/thesis/diss-Le.pdf ——

Web search engines like Google have popularized the use of search technology to the extent that search has become part of our daily life. These Web search engines have also made search so easy to use that an ordinary search user would be satisfied with the results from the search engine most of the time. Some would even think that search is a solved problem.

This dissertation argues that search is far from being solved. As long as machines cannot understand human language as perfectly as humans do, search is an unsolved problem. Some may still disagree, thinking that although it is probably true that machines still cannot perfectly understand human language, it is search that demonstrated that a natural language processing task can be tremendously successful even with very simple algorithms and easy-to-compute statistics (Singhal 2001). In fact, attempts to use more complex natural language processing techniques in retrieval have mostly proved futile. Even people working intimately with retrieval technology have the impression that research on basic retrieval algorithms is hitting a plateau. Some may even think that there is probably not much room for improvement. After all, the retrieval models of the 1990s and early 2000s (Okapi BM25 or statistical language models) are still the standard baselines to compare against, and the go-to models of modern-day retrieval systems (Manning et al. 2008, Chapters 6 and 7).

We argue that search ranking, retrieval modeling to be specific, is far from being solved.

Firstly, the success of Web search engines is largely due to the vast and diverse set of high-quality documents to retrieve from. Because of the diverse set of relevant documents out there, even if a search query is not well formulated, and even if the retrieval algorithm cannot return most of the relevant documents in the collection, a few good ones will match the query and be ranked at the top of the result list to satisfy the user. That these search engines can return satisfying results for most queries does not necessarily mean that the retrieval models they use are successful; the impression that the simple retrieval models are successful can be just an illusion created by the successful collection of a huge set of documents for the retrieval algorithm to search against.

Secondly, in cases where the document collection is small, where the set of documents relevant to the query is small, or where a high level of retrieval recall is needed, the current retrieval systems are still far from being satisfactory. For example, in perhaps all forms of desktop search and some forms of enterprise search, the document collection is much less diverse and much smaller than the Web, and the users can still be easily frustrated by the search systems. Even in Web search, for informational searches, users are still frequently frustrated by the current search engines (Feild et al. 2010). In legal discovery, the lawyers from both sides of the litigation care a lot about not missing any potentially relevant document, and usually spend lots of time on carefully creating effective search queries to improve search effectiveness.

Some may still ask: if search is not yet solved, why are the baseline retrieval models so difficult to surpass, and where can we see any large improvements? We show in this dissertation that two central and long-standing problems in retrieval, vocabulary mismatch (Furnas et al. 1987) and relevance-based term weighting (Croft and Harper 1979; Greiff 1998; Metzler 2008), might be the culprits. We show that the two problems are directly related, with vocabulary mismatch being the more general of the two. We show that the current retrieval models do not effectively model the vocabulary mismatch between query terms and relevant results. We show that term mismatch is a very common problem in search, and that a large potential gain is possible. We demonstrate several initial successes in addressing term mismatch in retrieval using novel prediction methods and theoretically motivated retrieval techniques. These techniques can automatically improve the retrieval system by making it mismatch-aware. Ordinary search users can also manually apply the query expansion technique studied in this work to further reduce mismatch and increase the effectiveness of their searches.



It takes a scientist to take good care of a newborn

April 13, 2012

The baby cries, and the parents need to diagnose the problem: form a hypothesis about what might be wrong, do some testing to check whether that is really the case, and solve the problem.  A wrong diagnosis can only make the baby more upset, or disrupt the routine that is being established with the baby.

As the baby goes about its day, the parents might want to log all of its activities, including the surrounding environment.  This data can be used to identify outliers, or changes in the baby, allowing the parents to respond more promptly.  For example, the baby is eating more, having stomachaches more frequently, or sleeping less when it gets warmer; some of these can be perfectly normal, and some can mean a problem.

Luckily, newborns have limited memory and are just a combination of simple reflexes.  I wonder what the scientist can do when the baby grows up, the problems become more complex, and many factors begin to interact with each other.  Maybe the artist side or the engineer side of the scientist will need to take over now and then.

 


A Submission for the 110th Anniversary of My High School, Huzhou No. 2 High School (湖州二中)

December 21, 2011

Teacher Li, how have you been?  It has been four or five years since I last visited my teachers, and I wonder how everyone is doing.

I learned from An Tong that the school is soliciting essays for its 110th anniversary.  It has been twelve years since I graduated from high school, so this is a good occasion to look back on a few things I have come to understand since graduating that would have helped my high-school self.

In one sentence: strive to be a clear-minded person.

High school is a very busy and very important period of life.  Because it is so busy, it is easy to overlook things, and what must not be overlooked is constant self-reflection, being a clear-minded person.  For example: when something happens, think about why; after finding a solution to a problem, think about whether there is a better way and what the best solution would be; talk with people more, and discover your own and others’ personalities and strengths; observe and get to know your surroundings, and make full use of the resources around you.

Read more biographies of clear-minded people.  You can focus on biographies of the kind of person you want to become; there are excellent biographies in every field, including industry, science, technology, education, culture, economics, and politics.  You can also read broadly across all of them, spending more time on the ones you find interesting.  Nowadays there are blogs as well, but blogs are everywhere, and finding well-written ones takes some time and skill.

Besides reading, also think, speak, and write more.  Writing in particular: do not treat it as a chore.  Think about how to describe the whole story of an event clearly and concisely, how to tell a story persuasively, and how to write an argument without holes.  As you write, challenge the gaps in your own logic and the weak points in the piece.  When you finish, show it to others and let them challenge your writing too.  Speaking and writing are important ways to organize your thoughts and help yourself think, and they also release your creativity and imagination.

Do more things.  For example, throw a birthday party for a good friend, organize a book club or a long-distance running club, or organize a volleyball tournament for your class or your grade.  These things usually require collaborating with others and have a broader impact.  There are also somewhat more self-serving things, such as discovering your own strengths and finding a career direction that suits you, or, without hurting your studies, planning how to enrich and fill your time.  Even these self-serving things can be done with the help of a few classmates or teachers, or done together.  Even something as small as washing your own clothes or helping your mother with housework counts as practice for yourself.

All of the above can be summarized as “read ten thousand books, travel ten thousand miles”; it is the path to becoming a clear-minded person.  For the high-school years, though, this asks too much; with limited time, you can only read selectively, some things closely and some roughly, and be selective about what you take on.

High-school studies are very busy, but understand this: being busy does not mean being unhappy, and being busy does not mean being fulfilled, yet a fulfilling life is always a busy one.  What matters is finding your own interests and the direction that suits you, understanding why you study, and being fulfilled and happy at the same time.  At the root of it is being a clear-minded person and an efficient one.


An introduction to Huzhou No. 2 High School (湖州二中): http://www.2ndschool.net/ReadNews.asp?NewsID=141


The age of reviews

November 8, 2011

It is an age of many kinds of reviews.  It is an age of good reviews, an age of bad reviews, an age of long reviews, an age of short reviews, an age of constructive reviews, as well as an age of destructive reviews.

This time, I got a totally different kind of review, one that is not only helpful but also empathetic.  One that not only tells you what is wrong or unclear, but also how to fix it.  One that not only tells you how to fix it, but also tries to understand why the current draft is laid out the way it is, and argues against that reasoning, just to convince you that a better way is needed.

An empathetic review from WSDM: I feel this is worth blogging (and bragging) about, even though the paper itself was rejected.



Putting Conjunctive Normal Form (CNF) Expansion to the Test

June 9, 2011

I recently participated in the Patent Olympics event remotely from my home.  It was quite a learning experience.

About the competition: the goal for each team is to interact with the topic authority (the referee) and to discover and submit as many relevant documents as possible.  Basically it is an interactive search task, where the topic authority provides both the information need and the relevance judgements.  This year there were 3 topics, and thus 3 rounds of interaction.  Each round was limited to 26 minutes, from getting the request to submitting all the results.  A maximum of 200 results per topic was allowed for submission.  The referees could judge results at any time, even before or after the interactions, but typically, we are all busy people, so most of the judging happened during the 26×2 minutes (2 being the number of participating teams).

Our approach: we aimed to formulate the perfect query through interaction with the topic authority and with the retrieval results.  A particular kind of query we are interested in is the Boolean Conjunctive Normal Form (CNF) query.  I’ve mentioned the advantages of such CNF queries in prior posts about term necessity/recall and WikiQuery.
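To make the approach concrete, here is a minimal sketch of how a CNF query can be assembled from synonym groups (my own illustration; the build_cnf_query helper and the placeholder synonym lists are hypothetical, not part of the contest system):

```python
# Minimal sketch: assemble a Boolean CNF query from synonym groups.
# Each inner list is one concept (synonyms joined by OR); concepts are joined by AND.
def build_cnf_query(concepts):
    clauses = []
    for synonyms in concepts:
        quoted = ['"%s"' % s if " " in s else s for s in synonyms]  # quote multi-word phrases
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

# Hypothetical expansion for a topic about the analgesic effect of some chemical.
query = build_cnf_query([
    ["chemical-X", "alternate-name-1", "alternate-name-2"],        # stand-ins for the real names
    ["analgesic", "analgesia", "pain reduction", "pain relief"],
])
print(query)
# -> (chemical-X OR alternate-name-1 OR alternate-name-2)
#    AND (analgesic OR analgesia OR "pain reduction" OR "pain relief")
```

The point of the CNF form is that every concept must match (the ANDs), while each concept is expanded with alternatives (the ORs), which is exactly what reduces term mismatch.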

Results: the 3 topics produced very different results.  In summary: CNF, CNF, and when in doubt, use CNF queries.  If you are interested in more details, read on.

Well, it was the middle of the night for me, and even though I had taken a shower and put on a shirt, my brain had no response to the first topic, which was about the analgesic effect of some chemical.  The good thing about formulating CNF queries is that even if you are brain-dead, as long as your searcher is responding effectively, you can get the right query out.  So I started the CNF routine by asking what other names this chemical has, and expanded the original name with them.  I feared making a bad impression on the referee, so I didn’t ask her for synonyms of “analgesic”; “pain reduction” and “analgesia” were all that I expanded it with.  Now that my brain is working better, I could easily suggest “pain relief”, “relieve pain”, etc., and more if I remembered to use a thesaurus, e.g. thesaurus.com.  Overall, I didn’t do particularly badly, but I surely frustrated the referee at some point, because I didn’t provide any genuine help except throwing a long list of results at her.  So the CNF query saved my ass, so to speak.

After warming up on the first topic, the second one was quite easy.  The referee did a good job of presenting a very detailed query, which made my job of formulating the CNF query very easy.  We were already getting about 90% precision in the top 20 (and likely even further down the list) for the first query.  The referee was happy to see good results, and our score skyrocketed.  We had found a near-perfect CNF query.

The third topic was a lot trickier.  The referee asked for the manufacture of a certain potassium salt, giving only the chemical structure of the salt.  For a chemical-structure search system this might be a very good test topic, but since my system is purely text based (Lemur CGI running on distributed indexes), I naturally asked the referee for the common name of that chemical.  The referee said the other team didn’t get that information until much later, so that wasn’t very helpful.  After some struggling with the structure, at about half the time into the topic, the referee finally found the name in a result patent and gave it to me.  It turned out to be a very popular sweetener.  Naturally there are many patents about using this sweetener to do things; these were false positives, as the referee was only interested in ways of manufacturing the sweetener.  Short of time, I panicked.  Instead of following the CNF routine of looking up synonyms of the search terms in a thesaurus (there are general thesauri and thesauri for chemicals), I did the exact opposite: I used proximity operators to restrict the match, requiring the word “manufacture” and the name of the chemical to appear within a small window in the result documents.  If you have read my thesis proposal, you will know that this is exactly the kind of mismatch case that causes ad hoc retrieval to fail.  As a result, I think I found only 2 relevant documents for this query.  After the event, I consulted a general thesaurus and found that “synthesis” is the right word to use, so a query like (synthesize OR manufacture OR produce) AND (chemical OR other names of the chemical) would have given at least 10 good hits.  I failed by not doing it the CNF way.

Scoring:

Scoring was based on the total number of relevant documents found, and on the users’ happiness with the system.  Even without the topic for which I got a near-perfect CNF query, I was, surprisingly, already leading the competition, scoring a bit more than last year’s champion, who took 2nd place this year.  With the near-perfect query included, I got 20 more discovered relevant documents overall.

In terms of UI, I scored the lowest among the 3 teams.  I didn’t have much time to prepare the UI, only enough to distribute the collection over 3 nodes/6 CPUs.  Since I wasn’t at the event, I didn’t get to see the other participants’ systems.  What a pity.

Scoring board here: http://patolympics.ir-facility.org/PatOlympics/scoreboard.html

Some learnings:

No matter how efficient you think you are, 26 minutes is a short time to get a good set of results.  By consciously formulating CNF queries, one can save some time, but it is still quite stressful when the topic is difficult, like the sweetener manufacturing query.

The decision to compete by formulating the best query turned out to be a good one.  A lot of different things can be done for chemical patent search, e.g. retrieval strategies such as citation analysis, chemical structure search, and chemical name matching, or document processing such as named entity (chemical name, disease name) annotation.  However, within a limited time of interaction, I guess the most effective way to interact with the searcher and the collection is still to vary and improve your query.  I don’t know exactly what the other teams did, but I’m sure CNF querying is my secret weapon.  And I’m glad that text search was enough for this year’s topics.

Because of the short time frame, the UI turned out to be a big factor.  Result titles and snippets speed up the relevance judging process a lot; I would improve the result presentation to include the titles of the patents if I had time.  There are also ways of automatically submitting results from the UI, to save the time spent copy-pasting.  But I was copy-pasting 200 results at once, at the moment we arrived at the final query.  I don’t think clicking a button in the UI to submit a single result would be more efficient than batch submission, except to please the referees with a fancier UI.

I made it sound simple to do CNF querying, but as you have probably noticed, if the synonym suggestion component is not integrated into the system, the user (myself) forgets to expand term by term and thus cannot formulate effective CNF queries, especially when pressed for time.  The cognitive burden of understanding the initial query and analyzing the results, plus the brain power needed to interact with the searcher and keep him/her busy, is already huge.

With CNF queries, it is easy to restrict them further and still get a reasonable number of relevant hits.  For one query I used proximity restrictions, and for another I restricted the search to the claims field of the patents at the searcher’s request.  I am not sure whether these helped overall performance at lower ranks, but they did perceivably improve top precision.

A final word: 3 topics is definitely not enough for any serious evaluation.  The evaluation metric, the total number of relevant documents, can easily be dominated by one topic that has lots of relevant hits.  A per-topic evaluation might be a better thing to do.  Other factors may also affect performance; for example, the other team did not submit the full 200 results per topic, and that may have held them back when compared on the total number of relevant documents returned.  So, as always, be careful when interpreting any result.  If you are interested in testing out CNF queries yourself, try it out here with your own information needs: boston.lti.cs.cmu.edu/wikiquery/.



Some Important but Easily Overlooked Things

January 23, 2011

Some things are very important in undergraduate life, and in work and life afterwards, yet when I was an undergraduate I did not clearly recognize their importance.  I would like to talk about them here.  I am not necessarily the right person to speak about them, and what I say may be neither correct nor useful.  Besides hoping this short essay is of some positive use to the teachers and students of the computer science department, I also have a rather selfish goal: to take this opportunity to summarize these things for myself, in the hope that I will not forget them in the future.

As an undergraduate, what confused me most was probably how the things I was learning would later be used.  So, where possible, I hope the department will encourage students to take part in practical projects such as SRT or summer practice.  The best are projects with a real application or research background, ideally done by a few students in collaboration, under the guidance of a teacher or senior student with experience in management and mentoring.

Also, science and engineering courses are our department’s strength, and I am confident we can keep them at a high level.  But many qualities essential for success cannot be absorbed from these courses alone: the importance of enthusiasm and curiosity (about both people and things), of imagination and self-confidence; the benefits of sincerity and integrity, keen observation, independent judgment, and empathy; the ability to manage time and to solve problems, self-discipline, and awareness of one’s own weaknesses.  All of these qualities matter both in science and technology and in human relationships.  Although the best science teachers may reveal them implicitly, or in an aside during a lecture, such scattered classroom moments do not form a system, and passive exposure in class needs to be combined with active application in practice.  So I think a good liberal-arts education, combined with working on real projects and collaborating with people, may help cultivate these qualities.  Tsinghua’s humanities programs are also gradually being built up, and perhaps some related courses or collaborations could benefit students.  As an example from my own experience, after studying logic (discrete mathematics), I think taking Professor Wang Lu’s seminar on the philosophy of language would help deepen one’s understanding and teach something about how to do research.

One last point: the classmates I am closest to are probably the ones I worked and fought alongside for a long time.  Doing something meaningful together as a team is one of the most precious opportunities in undergraduate study and in life afterwards, so be sure to treasure it.

(Written for the department’s anniversary essay collection)


Superstitious models

January 18, 2011

Superstition is a form of overfitting in human learning.

It is likely to happen when the human has no clue, or no good clue, about what is going on.  Then all kinds of noisy clues can become important to the human for predicting what is going to happen.

Similarly, a machine learning model is likely to get superstitious when no good clues are provided.  This is probably the most severe form of overfitting, and the model can even perform worse than a random baseline or a constant guess.
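As a small illustration (a toy sketch of my own, not from the original experiments): train a flexible model on features that are pure noise, and on held-out data it can score below a constant most-frequent-class guess.

```python
# Toy sketch of a "superstitious" model: a classifier trained on pure noise
# features typically does worse on held-out data than a constant guess.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(500, 20)                    # 20 features of pure noise
y = (rng.rand(500) < 0.3).astype(int)    # labels independent of X, ~30% positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

constant = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # memorizes the noise

print("constant guess accuracy:", constant.score(X_te, y_te))
print("superstitious tree accuracy:", tree.score(X_te, y_te))
```

On most random seeds the unconstrained tree memorizes the noise in the training split and lands below the constant baseline on the test split.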

Unfortunately, this is happening in my research experiments right now.  I know that there are good features out there, but they are much more expensive to compute, and I would rather not have to use them.

I guess the only way out is to see whether it is possible to get some simple but reasonably good features.

Superstitious models..


