21 thoughts on “It's All In The [Search] Packaging”

  1. “Wikipedia’s pages… are incredibly detailed and often include enough information that obviates the need to search any further.” More often, I find that the Wikipedia page provides a better set of links than a Google search.

  2. I agree with you 100%. I think it is one of the more efficient ways of finding information. I am just wondering whether this will actually replace the search-find-click paradigm we are used to now, thanks to Google.

  3. I say it won’t (replace Google). The reason Wikipedia is so useful is that it’s a central place for “everything”, just like Google is, and the only way they are able to handle that is with heavy user input.

    If you try to aggregate content on your own (even with a huge staff), you end up with an About.com clone. And if you have various dedicated portals for niche topics, how is that different from the current Internet?

    We already have quality sites offering niche content, and those who plan to stay around for the long run had better adapt to increased user interaction and input. Isn’t that what we are looking at, and still heading towards, with the next generation of the Web?

  4. Hello,
    I have been a regular reader of this blog. Wikipedia has become a great site for accessing “information” rather than the “search-find-click paradigm” you mentioned. We are a three-person startup based in Pune, India. We have been working on an “information engine” which delivers “information” to end users, rather than links to web pages. Currently, Wikipedia is manually driven; we have created a system which automates and rapidly accelerates the creation of information. To give an analogy, Wikipedia is like the early automotive industry, where the assembly line was manually operated. Our system is the equivalent of a modern automated assembly line: the role of humans changes to setting parameters and inspecting results.

    We have also used interesting tools to present this information to the end users. We just had an internal alpha release and would love to share some search results and further details with you.

    Regards,
    Abhay.

  5. Will there be a digital divide in providing separate servers for Web 2.0 and Web 3.0 technologies?
    The answer is pending, as more navigational (in-air, personal, in-transit traffic) and personalized web pages are going to jam the Net.
    surya

  6. Working for a vertical B2B search/directory site, I know that there are times when our site has information available about a company that doesn’t even have a website. The role of specialized or vertical search sites will continue to grow as 2.0 content and apps take root.

  7. It depends on what stage of Google you’re talking about. To begin with, it was certainly a revolution in terms of search. However, as people have learned to manipulate it, and as it focuses more heavily on the advertising that keeps it in business, the quality of its results has gone down.

  8. Oh boy.
    All these people throwing around terms without a clear definition.
    human brain != boolean computer
    human information = data in context
    human context = related (data, events, knowledge, emotions, …)
    [very simplified, but I don’t want to break it down to consciousness]

    In other words, if you don’t know what I’m working on, you provide data to me, which I put into context, thereby creating information.

    Now let’s plan a business trip to Mountain View.
    I create a meeting in my Calendar with a location in Mountain View. The system creates a Workspace on my computer under which all data for this meeting will be aggregated and preserved from now on. If I get a flight confirmation, the system checks the date and relates it to the meeting, and so on.
    In other words, if I go on my system to a specific Workspace, all the information for this context is there (see the sketch at the end of this comment). Now, if I share this with a back-end server, wouldn’t the search be much more accurate?

    Come on now, Microsoft could build this in two years if they would just use their brains. Google will struggle, since they have no local context. But they are working on it. And the rest…?

    OK, I will get a coffee now; I’m getting far too agitated.
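    To make the workspace idea above concrete, here is a toy sketch in Python. All the names and the matching rule are my own illustration, not any real calendar or mail API:

        # A calendar event spawns a workspace; later items (e.g. a flight
        # confirmation) attach automatically when their date and destination
        # match. Purely illustrative names; no real API is implied.
        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class Workspace:
            title: str
            location: str
            when: date
            items: list = field(default_factory=list)

        class ContextEngine:
            def __init__(self):
                self.workspaces = []

            def create_meeting(self, title, location, when):
                # Creating a meeting also creates its workspace.
                ws = Workspace(title, location, when)
                self.workspaces.append(ws)
                return ws

            def ingest(self, item):
                # Relate an incoming item to a workspace by matching date and
                # destination; unmatched items remain plain, contextless data.
                for ws in self.workspaces:
                    if (item.get("date") == ws.when
                            and item.get("destination") == ws.location):
                        ws.items.append(item)
                        return ws
                return None

        engine = ContextEngine()
        trip = engine.create_meeting("Business trip", "Mountain View", date(2008, 3, 14))
        engine.ingest({"kind": "flight confirmation",
                       "date": date(2008, 3, 14),
                       "destination": "Mountain View"})
        print(trip.items)  # the confirmation now lives in the trip's context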

  9. It’s not the explosion of information which has made searching difficult; it’s the explosion of web bottom feeders, such as link-aggregating sites with automatically generated pages which are nothing more than scraped scraps of text and associated advertising, not to mention so-called “consumer review” sites such as Ciao, Kelkoo, Pocket-lint, etc., which provide poor-quality information in the hope of a sales hit. In general I have no problem finding the information I am searching for, unless my item of interest happens to fall into the hit zone of the aforementioned sites, in which case I have to manually trawl through pages of garbage from companies who provide nothing of value to anyone.

  10. One important consideration which some of the other commenters have hinted at is the exclusion of unwanted information or results. Something as simple as a button to “exclude this site from future results” would go a long way toward eliminating the dross. Google’s Web History feature would seem to position them as a natural leader in this area. I have even suggested to them on several occasions that a simple user ranking of results, and particularly of viewed results, would be very powerful. Apparently that doesn’t fit their business model as an advertising company. Or maybe it’s a dumb idea, but I would like to see someone give it a try.
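    For illustration, here is a minimal sketch of how a per-user exclusion list could be applied when filtering results. This is purely hypothetical, not how Google actually works, and all names are invented:

        # Filter a result list against a per-user set of banned domains.
        from urllib.parse import urlparse

        class PersonalizedSearch:
            def __init__(self):
                self.excluded_domains = set()  # the user's exclusion list

            def exclude(self, url):
                # Handler for an "exclude this site from future results" button.
                self.excluded_domains.add(urlparse(url).netloc)

            def filter_results(self, results):
                # Drop any result whose domain the user has banned.
                return [r for r in results
                        if urlparse(r["url"]).netloc not in self.excluded_domains]

        search = PersonalizedSearch()
        search.exclude("http://scraper-spam.example.com/page")
        results = [
            {"title": "Useful review", "url": "http://good-site.example.org/review"},
            {"title": "Scraped garbage", "url": "http://scraper-spam.example.com/other"},
        ]
        print(search.filter_results(results))  # only the useful result survives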

  11. Goog has simply established itself as the technologically best solution for the largest number of people. That doesn’t preclude vertical sites from specializing to provide even more relevant results, say for venture capital, than Goog’s 63,900,000 results…

  12. Back in the 1980s, searching for information was something you needed a specialist for: someone who was trained and experienced not only in knowing how to interrogate specialist databases, but in how to identify the most reliable and valid results. When Google came along, most people decided it was easy enough to do the interrogation and validation themselves. You want to take a chance on unreliable, biased, inaccurate information? Try Wikipedia. You want aggregation? Try a library. Maybe what is required are the specialist services of a trained human being again. Try a librarian.

  13. The challenge for software tool developers in search is emulating human judgement. In the meantime, it’s interesting that dmoz.org (Open Directory), which is useful but limited, is still ranked 471 by Alexa. Goog could usefully think about encouraging and cooperating with vertical search sites that have a strong human element, and therefore contextualised data.

  14. Mitchell Quinn said: “not to mention so-called ‘consumer review’ sites such as Ciao, Kelkoo, Pocket-lint, etc., which provide poor-quality information in the hope of a sales hit.”

    Absolutely. Why doesn’t Google just ban these useless market comparison sites from its search results? They’re just another layer of crap we have to wade through before getting to what we want.

  15. Om, on your quote from your article in Business 2.0 magazine:

    “At a glance you can see what’s important. Smart new companies are finally figuring out how to do this online, where there’s too much content and not enough packaging.”

    I think one question to ask is whether the one-size-fits-all model, necessary for offline publications like newspapers, makes sense for online news aggregators.

    Online, there is an opportunity to create a different package for each individual, personalizing what is important. In new media, information could be packaged on a one-to-one basis.
