How Google Search organizes information
When you Search, Google looks through hundreds of billions of webpages and other content stored in our Search index to find helpful information — more information than all of the libraries of the world.
Finding information by crawling

Most of our Search index is built through the work of software known as crawlers. Crawlers automatically visit publicly accessible webpages and follow the links on those pages, much as you would when browsing the web. As they go from page to page, they store information about what they find on these pages and other publicly accessible content in Google's Search index.
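The crawling process described above is essentially a breadth-first traversal of the link graph. The sketch below is a simplified illustration, not Google's actual crawler: the `fetch` callback and the in-memory `FAKE_WEB` pages stand in for real HTTP requests so the example runs on its own.

```python
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Breadth-first crawl: visit a page, record its content, follow its links.

    `fetch` maps a URL to its HTML, a stand-in for an HTTP request;
    it returns None for pages that cannot be retrieved.
    """
    seen = {start_url}          # URLs already discovered, to avoid revisiting
    queue = deque([start_url])  # frontier of pages still to visit
    pages = {}                  # what the crawler stores: URL -> page content
    while queue:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        pages[url] = html
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

# A tiny in-memory "web" so the sketch runs without a network.
FAKE_WEB = {
    "a.example": '<a href="b.example">next</a>',
    "b.example": '<a href="a.example">back</a> <a href="c.example">more</a>',
    "c.example": "no links here",
}

pages = crawl("a.example", FAKE_WEB.get)
print(sorted(pages))  # all three pages are reached by following links
```

A real crawler adds much more on top of this skeleton (politeness delays, robots.txt handling, scheduling by page importance), but the core loop of fetch, store, and follow links is the same.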

Organizing information by indexing

When crawlers find a webpage, our systems render the content of the page, just as a browser does. We take note of key signals — from keywords to website freshness — and we keep track of it all in the Search index.


The Google Search index contains hundreds of billions of webpages and is well over 100,000,000 gigabytes in size. It’s like the index in the back of a book — with an entry for every word seen on every webpage we index. When we index a webpage, we add it to the entries for all of the words it contains.
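The back-of-the-book analogy describes what is commonly called an inverted index: a map from each word to the set of pages that contain it. A minimal sketch, with made-up page names purely for illustration:

```python
import re

def index_page(url, text, index):
    """Add a page to the index entry of every word it contains,
    like adding a page number to entries in the back of a book."""
    for word in set(re.findall(r"[a-z]+", text.lower())):
        index.setdefault(word, set()).add(url)

index = {}
index_page("page1", "Bicycles for sale", index)
index_page("page2", "Bicycle repair tips", index)
index_page("page3", "Fresh bread for sale", index)

# Looking up a word returns every indexed page containing it.
print(sorted(index["sale"]))  # page1 and page3 both contain "sale"
```

Answering a query then amounts to looking up each query word and intersecting the resulting sets of pages, which is far faster than scanning every page's text at search time.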

More than webpages

Our Search index contains more than just what's on the web, because helpful information can be located in other sources.

In fact, we have multiple indexes of different types of information, gathered through crawling, through partnerships, through data feeds sent directly to us, and through our own encyclopedia of facts, the Knowledge Graph.

These many indexes mean that you can search within millions of books from major libraries, find travel times from your local public transit agency, or find data from public sources like the World Bank.