Search engines have three primary functions:
Explore the Internet for content, looking over the code and content of each URL they find. Crawling is the process by which bots, also known as web crawlers, read the content and code of a webpage. Web crawlers start by fetching a few web pages and then follow the links on those pages to find new URLs.
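The crawl loop described above can be sketched as a breadth-first traversal: fetch a few seed pages, record them, and follow their links to URLs not yet seen. The sketch below uses a small in-memory dictionary of hypothetical URLs as a stand-in for actually fetching pages over the network, so the logic stays self-contained.

```python
from collections import deque

# Stand-in for the web: each hypothetical URL maps to the links
# that would be found on that page (in place of fetched HTML).
FAKE_WEB = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seed_urls):
    """Breadth-first crawl: fetch seed pages, then follow links to new URLs."""
    seen = set(seed_urls)
    queue = deque(seed_urls)
    crawled = []
    while queue:
        url = queue.popleft()
        crawled.append(url)                  # "fetch" and record this page
        for link in FAKE_WEB.get(url, []):   # links discovered on the page
            if link not in seen:             # only queue URLs not yet seen
                seen.add(link)
                queue.append(link)
    return crawled

print(crawl(["https://example.com/"]))
```

A real crawler would replace the dictionary lookup with an HTTP fetch and link extraction, plus politeness rules such as respecting robots.txt, but the queue-and-visited-set structure is the same.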
Store and organize the content found during the crawling process. Indexing means saving a website's data, organized by the keywords and content it contains; search engines process and store the information they find during crawling in an index. Once a page is in the index, it is eligible to show up in the SERP and is in the running to be displayed as a result for relevant queries.
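One common way to store crawled content by keyword, as described above, is an inverted index: a mapping from each word to the set of pages that contain it. This is a minimal sketch using hypothetical URLs and page text; production search indexes add far more (positions, fields, ranking signals).

```python
import re

def build_index(pages):
    """Build an inverted index: each word maps to the pages containing it."""
    index = {}
    for url, text in pages.items():
        # Extract lowercase word tokens; set() deduplicates within a page.
        for word in set(re.findall(r"[a-z0-9]+", text.lower())):
            index.setdefault(word, set()).add(url)
    return index

pages = {  # hypothetical pages and their extracted text
    "https://example.com/a": "coffee brewing guide",
    "https://example.com/b": "guide to tea",
}
index = build_index(pages)
print(sorted(index["guide"]))  # both pages mention "guide"
```

Looking a word up in the index instantly answers "which pages are eligible to appear for this query?", which is exactly what makes an indexed page "in the running" for relevant searches.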
Provide the pieces of content that will best answer a searcher's query, which means results are ordered from most relevant to least relevant. When someone performs a search, search engines check their index for highly relevant content and then order that content in the hope of solving the searcher's query. This ordering of search results by relevance is known as ranking.
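Ranking can be illustrated with a deliberately simple relevance score: count how often the query's words appear on each page, then sort from most relevant to least relevant. Real search engines combine hundreds of signals, so this is only a toy model with hypothetical URLs and text.

```python
def rank(query, pages):
    """Order pages by a toy relevance score: query-word occurrences per page."""
    words = query.lower().split()

    def score(text):
        terms = text.lower().split()
        return sum(terms.count(w) for w in words)

    # Most relevant first, mirroring how results are ordered in the SERP.
    return sorted(pages, key=lambda url: score(pages[url]), reverse=True)

pages = {  # hypothetical pages and their extracted text
    "https://example.com/a": "tea kettle reviews",
    "https://example.com/b": "best green tea and tea brewing tips",
}
print(rank("green tea", pages))
```

Here the second page mentions "green" once and "tea" twice (score 3) while the first mentions only "tea" (score 1), so it ranks higher.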
Can search engines find your pages?
As you’ve just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don’t.
One way to check your indexed pages is the advanced search operator "site:yourdomain.com". Head to Google and type "site:yourdomain.com" into the search bar. This will return the results Google has in its index for the specified site.