`url` is optional, and helps determine the url of the series page. It should not contain "" as it will be prepended to the url. It won't be needed for series like "Naruto" or "One Piece" (leave it undefined), but might be for some with odd characters or for manhwas (ex: The Breaker -> "T/The_Breaker_(Manhwa)/"). Take a look at `crawler.getPageUrl()` for more details.

Alternatively, you can create fetch jobs using `crawler.createFetchJob()` for more ease.

- `config`: Object containing all the configuration and options needed to run the jobs.
  - `outputDirectory`: Directory in which downloaded items will go. When downloading Naruto's first chapter, it will go to "/Naruto/Naruto 1". The default `outputDirectory` is the current directory.
  - `outputFormat`: `"folder"` (default) or `"zip"`. Determines whether the downloaded resource gets extracted or stays compressed.
- `cb`: Called as `cb(error)` if an error occurs somewhere in the process, or as `cb(null, results)` after all fetch and download jobs have ended, where `results` is an array of result items.

### crawler.createFetchJob(jobRequest)

Create a job that can be run later using `crawler.runJobs()`.

- `series`: The name of the series to download (ex: "One Piece").
- `chapter`: First chapter to download (unless `untilLast` is true, in which case it will be the next one). If neither `untilLast` nor `maxChapter` is defined, this will be the only chapter downloaded.
- `untilLast`: If set to true, the job will cover the download of everything from the first chapter following `chapter` until the last available one.
- `maxChapter`: Last chapter to download. If left undefined, only `minChapter` will be downloaded.
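Putting the pieces together, here is a minimal usage sketch. The module name and the `crawler.createFetchJob()`/`crawler.runJobs()` calls (commented out) are assumptions based on the descriptions above; only the shape of the `jobRequest` and `config` objects comes from this document, and the `chaptersCovered` helper is purely illustrative.

```javascript
// Hypothetical usage sketch -- the require() name and the crawler method
// calls are assumptions; the object fields come from the docs above.

// const crawler = require('manga-crawler'); // hypothetical module name

// A fetch job request, as described under crawler.createFetchJob(jobRequest):
const jobRequest = {
  series: 'One Piece', // name of the series to download
  chapter: 1,          // first chapter to download
  untilLast: false,    // true => download every chapter after `chapter`
  maxChapter: 3,       // last chapter to download (optional)
};

// Configuration for running the jobs:
const config = {
  outputDirectory: '.',   // chapters land in "<series>/<series> <n>" here
  outputFormat: 'folder', // "folder" (extracted, default) or "zip" (compressed)
};

// Illustrative helper: which chapters would this request cover?
// (`untilLast` takes precedence over `maxChapter`.)
function chaptersCovered(req) {
  if (req.untilLast) return `${req.chapter + 1}..last available`;
  if (req.maxChapter !== undefined) return `${req.chapter}..${req.maxChapter}`;
  return `${req.chapter}`; // only this one chapter
}

console.log(chaptersCovered(jobRequest)); // "1..3"

// crawler.createFetchJob(jobRequest);
// crawler.runJobs(config, (err, results) => { /* ... */ });
```

Note the error-first callback: `cb(error)` on failure, `cb(null, results)` on success, following the usual Node.js convention.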