Scrape An Entire Website with wget
This worked very nicely for a single-page site:
```
wget \
--recursive \
--page-requisites \
--convert-links \
[website]
```
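As a concrete sketch, here is the same invocation with a placeholder URL filled in; `https://example.com/` is only an illustration and not part of the original note:

```bash
# Fetch the page plus everything it needs (images, CSS, scripts)
# and rewrite links so the local copy browses cleanly offline.
wget \
  --recursive \
  --page-requisites \
  --convert-links \
  https://example.com/
```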
wget options:
```
wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains website.org \
--no-parent \
www.website.org
```
- `--recursive`: download the entire website.
- `--domains website.org`: don't follow links outside website.org.
- `--no-parent`: don't follow links above the starting directory.
- `--page-requisites`: get all the elements that compose the page (images, CSS, and so on).
- `--html-extension`: save files with the .html extension.
- `--convert-links`: convert links so that they work locally, offline.
- `--restrict-file-names=windows`: modify filenames so that they also work on Windows.
- `--no-clobber`: don't overwrite existing files (useful when an interrupted download is resumed).
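A minimal sketch of the same option set wrapped in a reusable script; the script name `mirror-site.sh` and its two positional arguments are assumptions for illustration, not part of the original gist:

```bash
#!/usr/bin/env bash
# mirror-site.sh -- hypothetical wrapper around the wget invocation above.
# Usage: ./mirror-site.sh website.org www.website.org
set -euo pipefail

domain="$1"     # domain to stay within (passed to --domains)
start_url="$2"  # page to start crawling from

wget \
  --recursive \
  --no-clobber \
  --page-requisites \
  --html-extension \
  --convert-links \
  --restrict-file-names=windows \
  --domains "$domain" \
  --no-parent \
  "$start_url"
```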
There is also [node-wget](https://github.com/wuchengwei/node-wget).