It would be ideal if, given a domain, the crawler could detect the sitemap automatically, or accept a sitemap as a parameter.
The workflow I have in mind (see the sketch after this paragraph) should cover both main URLs and secondary URLs, since there are sub-sitemaps, especially if you are using routes or are behind a reverse proxy server.
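One possible shape of that workflow, as a minimal stdlib-only sketch (the helper names here are hypothetical, not this project's actual API): read `Sitemap:` entries from robots.txt, fall back to the common `/sitemap.xml` convention, and expand `<sitemapindex>` files recursively so sub-sitemaps are covered too.

```python
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def discover_sitemaps(domain: str) -> list[str]:
    """Collect candidate sitemap URLs: robots.txt entries, else /sitemap.xml."""
    candidates = []
    try:
        with urllib.request.urlopen(f"https://{domain}/robots.txt", timeout=10) as resp:
            for line in resp.read().decode("utf-8", "replace").splitlines():
                if line.lower().startswith("sitemap:"):
                    candidates.append(line.split(":", 1)[1].strip())
    except OSError:
        pass  # no robots.txt (or unreachable); fall back to the convention
    return candidates or [f"https://{domain}/sitemap.xml"]

def extract_urls(sitemap_url: str) -> list[str]:
    """Return page URLs, recursing into sub-sitemaps listed in a <sitemapindex>."""
    with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    locs = [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]
    if root.tag == SITEMAP_NS + "sitemapindex":
        # Each <loc> names a sub-sitemap, not a page: expand recursively.
        return [url for sub in locs for url in extract_urls(sub)]
    return locs  # a plain <urlset>: these are the crawlable page URLs
```

The discovered page URLs could then seed the crawl frontier exactly like user-supplied start URLs.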
Another option would be to allow the initial URL to be the URL of the sitemap itself.
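If the entry point may be either a page or a sitemap, the crawler would need a cheap way to tell them apart. A hedged sketch (the `is_sitemap` helper is hypothetical): peek at the XML root element of the seed URL and treat `<urlset>` or `<sitemapindex>` roots as sitemaps.

```python
import io
import urllib.request
import xml.etree.ElementTree as ET

def is_sitemap(url: str) -> bool:
    """Heuristic: the URL is a sitemap if its XML root is <urlset> or <sitemapindex>."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        head = resp.read(65536)  # the root tag appears early; a prefix is enough
    try:
        for _, elem in ET.iterparse(io.BytesIO(head), events=("start",)):
            # The first "start" event is the document's root element.
            root = elem.tag.split("}")[-1]  # drop the XML namespace, if any
            return root in ("urlset", "sitemapindex")
    except ET.ParseError:
        pass  # HTML pages and other non-XML content land here
    return False
```

At startup the crawler could then branch: if `is_sitemap(seed)` holds, parse the document for URLs instead of fetching the seed as an ordinary page.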
This would be a good feature.