Crawling and indexing are two distinct things, and this is widely misunderstood in the SEO industry. Crawling means that Googlebot looks at all the content/code on the page and analyzes it. Indexing means that the page is eligible to show up in Google's search results. One does not guarantee the other.
Think of it as if Googlebot were a tour guide walking down a hallway lined with closed doors. If Google is allowed to crawl a page (a room), he can open the door and actually look at what's inside (crawling). Once inside the room, there may be a sign saying he's allowed to show people the room (able to index; the page shows up in SERPs), or the sign may say he's not allowed to show people the room (a "noindex" meta robots tag; the page was crawled, since he was able to look inside, but won't show up in SERPs because he's told not to show people the room).

If he's blocked from crawling a page (say there's a sign on the outside of the door that reads "Google, don't come in here"), then he won't go inside and look around, and because of that, he doesn't know whether or not he should show people the room, since those instructions are actually inside the room. So he won't look inside the room, but he'll still point the room out to people (index it) and tell them they can go inside if they want. Even if there's an instruction inside the room telling him not to send people in (a "noindex" meta robots tag), he'll never see it, because he was told not to enter the room in the first place.
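The "sign on the outside of the door" is a robots.txt rule, and a crawler checks it before fetching anything. A minimal sketch of that check, using Python's standard-library robots.txt parser (the rule and URLs here are hypothetical examples):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the crawler is told to stay out of /private/.
robots_txt = """User-agent: Googlebot
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler may open this "door"...
print(parser.can_fetch("Googlebot", "https://example.com/public/page"))   # True
# ...but not this one, so any noindex tag inside it is never seen.
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # False
```

A well-behaved crawler simply never requests the disallowed URL, which is exactly why it can never read directives placed inside that page.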
So blocking a page via robots.txt means it IS eligible to be indexed, regardless of whether you have an "index" or "noindex" meta robots tag inside the page itself (since Google is blocked from crawling, it can't see that tag, so by default it treats the page as indexable). Of course, this means the page's ranking potential is reduced (since Google can't actually analyze the content on the page, the ranking signals are all off-page + domain authority). If you've ever seen a result whose description says something like "A description for this result is not available because of this site's robots.txt", that's why.
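The key point is that the "noindex" directive lives inside the page's HTML, so it can only take effect if the page is actually fetched. A sketch of how a crawler might detect it once the page has been crawled, using Python's standard-library HTML parser (the class name and sample markup are hypothetical):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Looks for a <meta name="robots" content="...noindex..."> tag in fetched HTML."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "noindex" in attrs.get("content", "").lower():
                self.noindex = True

# Hypothetical page body: the directive only exists inside the HTML,
# so a crawler blocked by robots.txt never gets this far.
html = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
detector = NoindexDetector()
detector.feed(html)
print(detector.noindex)  # True
```

If robots.txt disallows the URL, this parsing step never runs, which is why "block via robots.txt" and "keep out of the index via noindex" don't combine the way people expect.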