How to download a file in Node.js
You can easily download a file in Node.js using built-in modules like `https` or `http`, or external libraries like `axios` or `node-fetch`. Here are two clear examples using native modules and Axios.

Using the native modules (`https` / `fs`):
```javascript
const https = require('https');
const fs = require('fs');

const url = 'https://example.com/file.zip';
const filePath = 'file.zip';

const file = fs.createWriteStream(filePath);

https.get(url, (response) => {
  if (response.statusCode !== 200) {
    console.error(`Download failed. Status Code: ${response.statusCode}`);
    file.close();
    fs.unlink(filePath, () => {}); // Delete the empty file
    return;
  }
  response.pipe(file);
  file.on('finish', () => {
    file.close();
    console.log('Download completed.');
  });
}).on('error', (err) => {
  fs.unlink(filePath, () => {}); // Delete incomplete file
  console.error(`Error: ${err.message}`);
});
```
Using Axios, first install the package:

```bash
npm install axios
```

Then use the following code:
```javascript
const axios = require('axios');
const fs = require('fs');

const url = 'https://example.com/file.zip';
const filePath = 'file.zip';

async function downloadFile(url, path) {
  const response = await axios({
    method: 'GET',
    url: url,
    responseType: 'stream'
  });

  const writer = fs.createWriteStream(path);
  response.data.pipe(writer);

  return new Promise((resolve, reject) => {
    writer.on('finish', () => {
      console.log('Download completed.');
      resolve();
    });
    writer.on('error', (err) => {
      fs.unlink(path, () => {});
      console.error(`Error: ${err.message}`);
      reject(err);
    });
  });
}

downloadFile(url, filePath);
```
In summary:

- Choose between the built-in modules (`https`) or a library (`axios`).
- Use a stream (`fs.createWriteStream`) to write the file.
- Handle success/error events properly.

This ensures efficient, robust downloading of files in Node.js.
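For what it's worth, on newer Node.js versions (15+) you can also lean on `pipeline` from `stream/promises`, which propagates errors from either side of the pipe and cleans up both streams for you; a minimal sketch (the URL and filename are placeholders):

```javascript
const { pipeline } = require('stream/promises');
const https = require('https');
const fs = require('fs');

// Download with pipeline(): an error on either stream rejects the promise.
function download(url, filePath) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      if (res.statusCode !== 200) {
        res.resume(); // drain the response to free the socket
        return reject(new Error(`Status ${res.statusCode}`));
      }
      pipeline(res, fs.createWriteStream(filePath)).then(resolve, reject);
    }).on('error', reject);
  });
}

download('https://example.com/file.zip', 'file.zip')
  .then(() => console.log('Download completed.'))
  .catch((err) => console.error(`Error: ${err.message}`));
```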
To download a file within a specific byte range in Node.js, you can set the HTTP `Range` header. This is commonly used for downloading large files in segments or resuming interrupted downloads. Here's how you can achieve this easily.

Using the native `https` module:
```javascript
const https = require('https');
const fs = require('fs');

const url = 'https://example.com/file.zip';
const filePath = 'partial-file.zip';

// Define the byte range (for example, the first 1 MB of the file)
const startByte = 0;
const endByte = 1048575; // 1 MB = 1024 * 1024 - 1

const options = {
  headers: {
    'Range': `bytes=${startByte}-${endByte}`
  }
};

https.get(url, options, (res) => {
  if (res.statusCode !== 206) { // 206 Partial Content
    console.error(`Unexpected status code: ${res.statusCode}`);
    res.resume(); // Consume response data to free up memory
    return;
  }

  const fileStream = fs.createWriteStream(filePath);
  res.pipe(fileStream);

  fileStream.on('finish', () => {
    fileStream.close();
    console.log('Partial download completed.');
  });
}).on('error', (err) => {
  fs.unlink(filePath, () => {});
  console.error(`Download failed: ${err.message}`);
});
```
Using Axios:

```javascript
const axios = require('axios');
const fs = require('fs');

const url = 'https://example.com/file.zip';
const filePath = 'partial-file.zip';

// Byte range (e.g., bytes 0 to 1MB-1)
const startByte = 0;
const endByte = 1048575;

async function downloadRange(url, path, start, end) {
  const response = await axios.get(url, {
    headers: { 'Range': `bytes=${start}-${end}` },
    responseType: 'stream'
  });

  if (response.status !== 206) {
    throw new Error(`Unexpected status code: ${response.status}`);
  }

  const writer = fs.createWriteStream(path);
  response.data.pipe(writer);

  return new Promise((resolve, reject) => {
    writer.on('finish', () => {
      console.log('Partial download completed.');
      resolve();
    });
    writer.on('error', (err) => {
      fs.unlink(path, () => {});
      console.error(`Error: ${err.message}`);
      reject(err);
    });
  });
}

downloadRange(url, filePath, startByte, endByte);
```
A few notes:

- When requesting byte ranges, the server must support HTTP range requests (most servers do).
- If the range request succeeds, you'll get an HTTP status code 206 Partial Content.
- Adjust the `startByte` and `endByte` values as per your requirements.
- This is useful for segmented downloads, resuming downloads, or parallel downloading.

That's how you can download a specific range of bytes from a file in Node.js!
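Resuming an interrupted download, for instance, is just the open-ended variant of this: check how many bytes are already on disk and request `bytes=<size>-`. A minimal sketch along those lines (URL and filename are placeholders):

```javascript
const axios = require('axios');
const fs = require('fs');

// Resume a download by appending from the current file size onward.
async function resumeDownload(url, filePath) {
  const start = fs.existsSync(filePath) ? fs.statSync(filePath).size : 0;
  const response = await axios.get(url, {
    headers: { 'Range': `bytes=${start}-` }, // open-ended range: "the rest"
    responseType: 'stream'
  });
  // A server that ignores Range returns 200 with the whole file; bail out.
  if (response.status !== 206) throw new Error(`Unexpected status: ${response.status}`);

  const writer = fs.createWriteStream(filePath, { flags: 'a' }); // append mode
  response.data.pipe(writer);
  return new Promise((resolve, reject) => {
    writer.on('finish', resolve);
    writer.on('error', reject);
  });
}

resumeDownload('https://example.com/file.zip', 'file.zip')
  .then(() => console.log('Resumed download completed.'));
```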
To download a file in 8 parallel threads (segments) using Node.js, you can:

1. Get the file size.
2. Split it into 8 ranges.
3. Download each range concurrently using the HTTP `Range` header.
4. Merge the parts into one file.

Here's a complete working example using `axios`, `fs`, and `Promise.all`.
Install the required package first:

```bash
npm install axios
```

Then use this code:
```javascript
const fs = require('fs');
const axios = require('axios');
const path = require('path');
const os = require('os');

const FILE_URL = 'https://example.com/largefile.zip';
const OUTPUT_FILE = 'final.zip';
const THREADS = 8;

async function getFileSize(url) {
  const res = await axios.head(url);
  const size = parseInt(res.headers['content-length'], 10);
  if (isNaN(size)) throw new Error('Could not get file size');
  return size;
}

async function downloadPart(url, start, end, index) {
  const response = await axios.get(url, {
    headers: { Range: `bytes=${start}-${end}` },
    responseType: 'stream'
  });

  const tempPath = path.join(os.tmpdir(), `part-${index}`);
  const writer = fs.createWriteStream(tempPath);
  response.data.pipe(writer);

  return new Promise((resolve, reject) => {
    writer.on('finish', () => resolve(tempPath));
    writer.on('error', reject);
  });
}

async function mergeParts(tempPaths, outputPath) {
  const writer = fs.createWriteStream(outputPath);
  for (const tempPath of tempPaths) {
    const data = fs.readFileSync(tempPath);
    writer.write(data);
    fs.unlinkSync(tempPath); // delete part after merge
  }
  writer.end();
  console.log('Merge completed:', outputPath);
}

(async () => {
  const size = await getFileSize(FILE_URL);
  const partSize = Math.ceil(size / THREADS);

  const downloadPromises = [];
  for (let i = 0; i < THREADS; i++) {
    const start = i * partSize;
    const end = i === THREADS - 1 ? size - 1 : (start + partSize - 1);
    downloadPromises.push(downloadPart(FILE_URL, start, end, i));
  }

  const tempPaths = await Promise.all(downloadPromises);
  await mergeParts(tempPaths, OUTPUT_FILE);
})();
```
This script:

- Detects the total file size.
- Splits the download into 8 byte ranges.
- Downloads each part concurrently.
- Merges them in order into a final file.
- Deletes the temporary part files.

Notes:

- This works only if the server supports HTTP range requests (status 206).
- Use `os.tmpdir()` or any other temp path to store the parts safely.
- Optional: add retry/failure handling for robustness (see the sketch below).
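For instance, a small generic retry wrapper (a hypothetical helper, not part of the script above) could re-run a failed part a few times before giving up:

```javascript
// Retry an async operation up to `retries` times with a fixed delay.
async function withRetry(fn, retries = 3, delayMs = 1000) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      console.warn(`Attempt ${attempt} failed (${err.message}), retrying...`);
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
}

// Usage: wrap each part download.
// downloadPromises.push(withRetry(() => downloadPart(FILE_URL, start, end, i)));
```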
Do you want a version that shows download progress or retry on failure too?
Adding pause/resume functionality to a multi-threaded downloader in Node.js is possible if the server supports `Range` requests and if you track the downloaded byte ranges. Here's how to do it:

- Each thread writes to a temp file, and we track the bytes downloaded per thread.
- We use a `progress.json` file to save state (start, end, downloaded).
- On resume, we read this file and continue the unfinished ranges.
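For illustration, the saved `progress.json` might look like this mid-download (hypothetical byte values, showing two of the eight parts):

```javascript
// Example contents of progress.json:
// each entry records a part's byte range and how far it has gotten.
[
  { "start": 0,       "end": 1310719, "downloaded": 524288  }, // part 0: ~40% done
  { "start": 1310720, "end": 2621439, "downloaded": 2621440 }  // part 1: complete
]
```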
```javascript
const fs = require('fs');
const axios = require('axios');
const path = require('path');
const os = require('os');

const FILE_URL = 'https://example.com/largefile.zip';
const OUTPUT_FILE = 'final.zip';
const THREADS = 8;
const PROGRESS_FILE = 'progress.json';

async function getFileSize(url) {
  const res = await axios.head(url);
  const size = parseInt(res.headers['content-length'], 10);
  if (isNaN(size)) throw new Error('Could not get file size');
  return size;
}

function saveProgress(progress) {
  fs.writeFileSync(PROGRESS_FILE, JSON.stringify(progress, null, 2));
}

function loadProgress() {
  if (fs.existsSync(PROGRESS_FILE)) {
    return JSON.parse(fs.readFileSync(PROGRESS_FILE));
  }
  return null;
}

async function downloadPart(url, part, index) {
  const tempPath = path.join(os.tmpdir(), `part-${index}`);
  const writer = fs.createWriteStream(tempPath, { flags: 'a' }); // append if resuming
  const headers = { Range: `bytes=${part.downloaded}-${part.end}` };
  const response = await axios.get(url, { headers, responseType: 'stream' });

  return new Promise((resolve, reject) => {
    response.data.on('data', chunk => {
      writer.write(chunk);
      part.downloaded += chunk.length;
      saveProgress(progress); // update progress on each chunk
    });
    response.data.on('end', () => {
      writer.end();
      resolve(tempPath);
    });
    response.data.on('error', reject);
  });
}

async function mergeParts(progress, outputPath) {
  const writer = fs.createWriteStream(outputPath);
  for (let i = 0; i < progress.length; i++) {
    const tempPath = path.join(os.tmpdir(), `part-${i}`);
    const data = fs.readFileSync(tempPath);
    writer.write(data);
    fs.unlinkSync(tempPath);
  }
  fs.unlinkSync(PROGRESS_FILE);
  writer.end();
  console.log('Download complete:', outputPath);
}

let progress;

(async () => {
  const size = await getFileSize(FILE_URL);
  const partSize = Math.ceil(size / THREADS);

  progress = loadProgress();
  if (!progress) {
    // initialize progress if not resuming
    progress = Array.from({ length: THREADS }).map((_, i) => {
      const start = i * partSize;
      const end = i === THREADS - 1 ? size - 1 : start + partSize - 1;
      return { start, end, downloaded: start };
    });
    saveProgress(progress);
  }

  const downloadPromises = progress.map((part, i) => {
    if (part.downloaded > part.end) return Promise.resolve(); // already complete
    return downloadPart(FILE_URL, part, i);
  });

  await Promise.all(downloadPromises);
  await mergeParts(progress, OUTPUT_FILE);
})();
```
How it works:

- Pause: just stop the script (e.g., CTRL+C).
- Resume: run the script again; it continues where it left off.
- Each thread stores partial data plus tracked progress.
- Finished parts are skipped automatically.

Possible extensions:

- Add CLI commands (`pause`, `resume`, `clear`).
- Show a download progress bar per thread.
- Add retry logic if a thread fails.
Would you like me to add a progress bar or retry mechanism to this version?
To check if a server supports streaming and range requests, you can examine its HTTP response headers, especially:

- `Accept-Ranges`: indicates support for byte-range requests.
- `Content-Length`: needed for resuming logic.
- `Transfer-Encoding: chunked`: suggests streaming (chunked transfer).
- `Content-Type`: helps infer if the content is streamable (e.g., video/audio).
```javascript
const axios = require('axios');

const url = 'https://example.com/largefile.zip';

async function checkStreamingSupport(url) {
  try {
    const res = await axios.head(url);

    const acceptRanges = res.headers['accept-ranges'];
    const contentLength = res.headers['content-length'];
    const transferEncoding = res.headers['transfer-encoding'];
    const contentType = res.headers['content-type'];

    console.log('--- Server Headers ---');
    console.log('Accept-Ranges:', acceptRanges);
    console.log('Content-Length:', contentLength);
    console.log('Transfer-Encoding:', transferEncoding);
    console.log('Content-Type:', contentType);

    if (acceptRanges === 'bytes') {
      console.log('✅ Server supports partial content (Range requests).');
    } else {
      console.log('❌ Server does NOT support Range requests.');
    }

    if (transferEncoding === 'chunked') {
      console.log('✅ Server supports streaming via chunked transfer.');
    } else if (contentLength) {
      console.log('✅ Server provides Content-Length (can download in parts).');
    } else {
      console.log('❌ No clear support for streaming or partial download.');
    }
  } catch (err) {
    console.error('Error checking server:', err.message);
  }
}

checkStreamingSupport(url);
```
Example output:

```
--- Server Headers ---
Accept-Ranges: bytes
Content-Length: 10485760
Transfer-Encoding: undefined
Content-Type: application/zip
✅ Server supports partial content (Range requests).
✅ Server provides Content-Length (can download in parts).
```
| Header | Meaning | Use Case |
|---|---|---|
| `Accept-Ranges: bytes` | Server allows ranged (partial) downloads | Needed for resume/pause |
| `Transfer-Encoding: chunked` | Server sends data in chunks (streaming) | Good for stream processing |
| `Content-Length` | Total size in bytes | Required to split ranges |
Would you like this wrapped as a reusable CLI or function module?
Great idea. Here's a version of the Node.js multi-threaded downloader that:

- Detects if the server supports range requests (streaming/partial download).
- If supported, downloads using 8 parallel threads.
- If not supported, falls back to a single-threaded sequential download.
```javascript
const fs = require('fs');
const axios = require('axios');
const path = require('path');
const os = require('os');

const FILE_URL = 'https://example.com/largefile.zip';
const OUTPUT_FILE = 'final.zip';
const THREADS = 8;
const PROGRESS_FILE = 'progress.json';

async function getFileHeaders(url) {
  const res = await axios.head(url);
  return {
    acceptRanges: res.headers['accept-ranges'],
    contentLength: parseInt(res.headers['content-length'], 10),
    contentType: res.headers['content-type'],
    transferEncoding: res.headers['transfer-encoding'],
  };
}

async function downloadPart(url, part, index) {
  const tempPath = path.join(os.tmpdir(), `part-${index}`);
  const writer = fs.createWriteStream(tempPath, { flags: 'a' });
  const headers = { Range: `bytes=${part.downloaded}-${part.end}` };
  const response = await axios.get(url, { headers, responseType: 'stream' });

  return new Promise((resolve, reject) => {
    response.data.on('data', chunk => {
      writer.write(chunk);
      part.downloaded += chunk.length;
      saveProgress(progress);
    });
    response.data.on('end', () => {
      writer.end();
      resolve(tempPath);
    });
    response.data.on('error', reject);
  });
}

function saveProgress(progress) {
  fs.writeFileSync(PROGRESS_FILE, JSON.stringify(progress, null, 2));
}

function loadProgress() {
  if (fs.existsSync(PROGRESS_FILE)) {
    return JSON.parse(fs.readFileSync(PROGRESS_FILE));
  }
  return null;
}

async function mergeParts(progress, outputPath) {
  const writer = fs.createWriteStream(outputPath);
  for (let i = 0; i < progress.length; i++) {
    const tempPath = path.join(os.tmpdir(), `part-${i}`);
    const data = fs.readFileSync(tempPath);
    writer.write(data);
    fs.unlinkSync(tempPath);
  }
  fs.unlinkSync(PROGRESS_FILE);
  writer.end();
  console.log('✅ Download complete:', outputPath);
}

async function downloadSequential(url, outputPath) {
  console.log('⚠️ Server does not support multi-threaded download. Falling back to sequential mode...');
  const response = await axios.get(url, { responseType: 'stream' });
  const writer = fs.createWriteStream(outputPath);

  return new Promise((resolve, reject) => {
    response.data.pipe(writer);
    writer.on('finish', () => {
      console.log('✅ Download complete:', outputPath);
      resolve();
    });
    writer.on('error', reject);
  });
}

let progress;

(async () => {
  const headers = await getFileHeaders(FILE_URL);
  const supportsRange = headers.acceptRanges === 'bytes' && headers.contentLength;
  const fileSize = headers.contentLength;

  if (!supportsRange) {
    await downloadSequential(FILE_URL, OUTPUT_FILE);
    return;
  }

  const partSize = Math.ceil(fileSize / THREADS);
  progress = loadProgress();
  if (!progress) {
    progress = Array.from({ length: THREADS }).map((_, i) => {
      const start = i * partSize;
      const end = i === THREADS - 1 ? fileSize - 1 : start + partSize - 1;
      return { start, end, downloaded: start };
    });
    saveProgress(progress);
  }

  const downloadPromises = progress.map((part, i) => {
    if (part.downloaded > part.end) return Promise.resolve();
    return downloadPart(FILE_URL, part, i);
  });

  await Promise.all(downloadPromises);
  await mergeParts(progress, OUTPUT_FILE);
})();
```
| Feature | Behavior |
|---|---|
| Server supports range | Download in 8 threads concurrently |
| Server doesn't support range | Download whole file in one stream (sequential fallback) |
| Pause | Just stop the script (e.g., CTRL+C) |
| Resume | Run script again; continues where it left off |
Possible next steps:

- Progress bar with `cli-progress`
- Retry failed threads
- CLI options (`pause`, `resume`, `status`)
Would you like me to integrate a progress bar or command-line control next?
Perfect. Here's an enhanced version of your Node.js download manager that includes:

- A progress bar per thread using `cli-progress`
- Retry logic on download failure (up to 3 retries)
- Automatic pause/resume from a `progress.json` file
- Automatic fallback to a sequential download if the server doesn't support ranges
Install the required packages first:

```bash
npm install axios cli-progress
```
```javascript
const fs = require('fs');
const axios = require('axios');
const path = require('path');
const os = require('os');
const cliProgress = require('cli-progress');

const FILE_URL = 'https://example.com/largefile.zip';
const OUTPUT_FILE = 'final.zip';
const THREADS = 8;
const MAX_RETRIES = 3;
const PROGRESS_FILE = 'progress.json';

// Get server headers
async function getFileHeaders(url) {
  const res = await axios.head(url);
  return {
    acceptRanges: res.headers['accept-ranges'],
    contentLength: parseInt(res.headers['content-length'], 10),
    transferEncoding: res.headers['transfer-encoding'],
  };
}

// Save/load progress
function saveProgress(progress) {
  fs.writeFileSync(PROGRESS_FILE, JSON.stringify(progress, null, 2));
}

function loadProgress() {
  return fs.existsSync(PROGRESS_FILE)
    ? JSON.parse(fs.readFileSync(PROGRESS_FILE))
    : null;
}

// Download a single part with retry and progress
async function downloadPart(url, part, index, bar) {
  const tempPath = path.join(os.tmpdir(), `part-${index}`);
  const writer = fs.createWriteStream(tempPath, { flags: 'a' });

  const download = async (attempt = 1) => {
    try {
      const headers = { Range: `bytes=${part.downloaded}-${part.end}` };
      const response = await axios.get(url, {
        headers,
        responseType: 'stream',
      });

      return new Promise((resolve, reject) => {
        response.data.on('data', (chunk) => {
          writer.write(chunk);
          part.downloaded += chunk.length;
          bar.update(part.downloaded - part.start);
          saveProgress(progress);
        });
        response.data.on('end', () => {
          writer.end();
          resolve(tempPath);
        });
        response.data.on('error', reject);
      });
    } catch (err) {
      if (attempt < MAX_RETRIES) {
        console.warn(`Retry [${index}] (attempt ${attempt})`);
        return download(attempt + 1);
      } else {
        throw new Error(`Failed to download part ${index}: ${err.message}`);
      }
    }
  };

  return download();
}

// Merge all parts into a single file
async function mergeParts(progress, outputPath) {
  const writer = fs.createWriteStream(outputPath);
  for (let i = 0; i < progress.length; i++) {
    const tempPath = path.join(os.tmpdir(), `part-${i}`);
    const data = fs.readFileSync(tempPath);
    writer.write(data);
    fs.unlinkSync(tempPath);
  }
  fs.unlinkSync(PROGRESS_FILE);
  writer.end();
  console.log('✅ Download complete:', outputPath);
}

// Fallback: download the full file sequentially
async function downloadSequential(url, outputPath) {
  console.log('⚠️ Server does not support partial downloads. Fallback to sequential...');
  const response = await axios.get(url, { responseType: 'stream' });
  const writer = fs.createWriteStream(outputPath);

  return new Promise((resolve, reject) => {
    response.data.pipe(writer);
    writer.on('finish', () => {
      console.log('✅ Download complete:', outputPath);
      resolve();
    });
    writer.on('error', reject);
  });
}

let progress;

(async () => {
  const headers = await getFileHeaders(FILE_URL);
  const supportsRange = headers.acceptRanges === 'bytes' && headers.contentLength;
  const fileSize = headers.contentLength;

  if (!supportsRange) {
    await downloadSequential(FILE_URL, OUTPUT_FILE);
    return;
  }

  // Initialize or resume progress
  const partSize = Math.ceil(fileSize / THREADS);
  progress = loadProgress();
  if (!progress) {
    progress = Array.from({ length: THREADS }).map((_, i) => {
      const start = i * partSize;
      const end = i === THREADS - 1 ? fileSize - 1 : start + partSize - 1;
      return { start, end, downloaded: start };
    });
    saveProgress(progress);
  }

  // Set up progress bars
  const multibar = new cliProgress.MultiBar({
    clearOnComplete: false,
    hideCursor: true,
    format: '[{bar}] {percentage}% | {value}/{total} bytes | Thread {thread}',
  }, cliProgress.Presets.shades_classic);

  const bars = progress.map((part, i) => {
    return multibar.create(part.end - part.start + 1, part.downloaded - part.start, {
      thread: i + 1,
    });
  });

  const downloadPromises = progress.map((part, i) => {
    if (part.downloaded > part.end) return Promise.resolve();
    return downloadPart(FILE_URL, part, i, bars[i]);
  });

  await Promise.all(downloadPromises);
  multibar.stop();
  await mergeParts(progress, OUTPUT_FILE);
})();
```
| Feature | Implemented? |
|---|---|
| Multi-threaded download | ✅ |
| Resume on rerun | ✅ |
| Retry failed downloads | ✅ (3x) |
| Per-thread progress bars | ✅ |
| Fallback to sequential | ✅ |
Would you like to also:

- Convert this into a CLI tool (`npx download-manager`)?
- Add download speed info?

Let me know and I'll help you modularize or extend it!
Awesome, let's add live download speed per thread to the progress bars. Here's how we'll do it:

- Track download speed per thread in KB/s or MB/s.
- Update the `cli-progress` bar to display:

```
[#####...] 85% | 800KB/s | 1.3MB/5.0MB | Thread 3
```

On every `data` chunk, we:

- Track the number of bytes received.
- Calculate speed over time.
- Use `setInterval` to update the displayed speed every second.

Replace the `downloadPart()` function in the previous code with this version:
```javascript
async function downloadPart(url, part, index, bar) {
  const tempPath = path.join(os.tmpdir(), `part-${index}`);
  const writer = fs.createWriteStream(tempPath, { flags: 'a' });

  const download = async (attempt = 1) => {
    try {
      const headers = { Range: `bytes=${part.downloaded}-${part.end}` };
      const response = await axios.get(url, {
        headers,
        responseType: 'stream',
      });

      let received = 0;
      let lastUpdateTime = Date.now();
      let speed = 0;

      // Speed updater: recompute bytes/sec once per second
      const speedInterval = setInterval(() => {
        const now = Date.now();
        const elapsed = (now - lastUpdateTime) / 1000; // in seconds
        speed = received / elapsed; // bytes/sec
        bar.update(part.downloaded - part.start, {
          speed: formatSpeed(speed),
        });
        received = 0;
        lastUpdateTime = now;
      }, 1000);

      return new Promise((resolve, reject) => {
        response.data.on('data', (chunk) => {
          writer.write(chunk);
          part.downloaded += chunk.length;
          received += chunk.length;
        });
        response.data.on('end', () => {
          clearInterval(speedInterval);
          writer.end();
          bar.update(part.end - part.start, { speed: 'Done' });
          resolve(tempPath);
        });
        response.data.on('error', (err) => {
          clearInterval(speedInterval);
          reject(err);
        });
      });
    } catch (err) {
      if (attempt < MAX_RETRIES) {
        console.warn(`Retry [${index}] (attempt ${attempt})`);
        return download(attempt + 1);
      } else {
        throw new Error(`Failed to download part ${index}: ${err.message}`);
      }
    }
  };

  return download();
}
```
Add the `formatSpeed()` helper above or below your main function:
```javascript
function formatSpeed(bytesPerSec) {
  const kb = bytesPerSec / 1024;
  const mb = kb / 1024;
  if (mb >= 1) return `${mb.toFixed(2)} MB/s`;
  return `${kb.toFixed(1)} KB/s`;
}
```
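For example:

```javascript
formatSpeed(512 * 1024);        // "512.0 KB/s"
formatSpeed(2.5 * 1024 * 1024); // "2.50 MB/s"
```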
Update the `MultiBar` config in your setup:
```js
const multibar = new cliProgress.MultiBar({
  clearOnComplete: false,
  hideCursor: true,
  format: '[{bar}] {percentage}% | {speed} | {value}/{total} bytes | Thread {thread}',
}, cliProgress.Presets.shades_classic);
```
Example output:

```
[#####-----] 52% | 730.5 KB/s | 2.6MB/5.0MB | Thread 2
[##########] 100% | Done | 5.0MB/5.0MB | Thread 1
```
Would you like a summary speed indicator across all threads too?
Great! Let's now add a global speed indicator to show the total download speed across all threads, live and updated in real time, while keeping the per-thread progress bars.
| Feature | Status |
|---|---|
| Per-thread download speed | ✅ |
| Global download speed | ✅ |
| Live progress bars | ✅ |
| Pause/resume | ✅ |
| Retry on failure | ✅ |
| Fallback to sequential | ✅ |
The approach:

1. Track total downloaded bytes across all threads.
2. Set an interval to update the global speed every second.
3. Print the speed to the console (either on the same line or as a separate log above/below the bars).
Declare two shared counters at the top of your file:

```js
let totalDownloaded = 0;
let lastTotalDownloaded = 0;
```
Keep the same `formatSpeed()` helper:

```js
function formatSpeed(bytesPerSec) {
  const kb = bytesPerSec / 1024;
  const mb = kb / 1024;
  if (mb >= 1) return `${mb.toFixed(2)} MB/s`;
  return `${kb.toFixed(1)} KB/s`;
}
```
Start the global speed interval just before `await Promise.all(...)`:

```js
const globalSpeedInterval = setInterval(() => {
  const speed = totalDownloaded - lastTotalDownloaded;
  lastTotalDownloaded = totalDownloaded;
  process.stdout.write(`\rGlobal speed: ${formatSpeed(speed)}`);
}, 1000);
```
Then clear it once all downloads finish:

```js
clearInterval(globalSpeedInterval);
console.log(); // for line break
```
In the `downloadPart()` function, inside the `.on('data', ...)` handler, add:

```js
totalDownloaded += chunk.length;
```

So that section becomes:

```js
response.data.on('data', (chunk) => {
  writer.write(chunk);
  part.downloaded += chunk.length;
  received += chunk.length;
  totalDownloaded += chunk.length;
});
```
Example output:

```
Global speed: 3.25 MB/s
[########--] 86% | 420 KB/s | 4.3MB/5.0MB | Thread 2
[####------] 40% | 1.2 MB/s | 2.0MB/5.0MB | Thread 3
...
```
This gives you a clean, professional-looking CLI download manager experience.
Would you like a feature to limit global speed (e.g., max 5 MB/s) or throttle individual threads?
Perfect. Let's now enhance the downloader with bandwidth throttling, adding two features:
| Feature | Description |
|---|---|
| 1. Global speed limit | Restricts total download speed (e.g., 5 MB/s across all threads) |
| 2. Per-thread speed throttling | Ensures no thread exceeds a specified rate (e.g., 1 MB/s per thread) |
We'll implement this using `setTimeout()` between `data` chunks to delay reads when needed.
First, define the limits:

```js
// Limits (in bytes per second)
const GLOBAL_SPEED_LIMIT = 5 * 1024 * 1024; // 5 MB/s
const THREAD_SPEED_LIMIT = 1 * 1024 * 1024; // 1 MB/s
```
Add a `sleep()` helper anywhere before `downloadPart`:

```js
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
Next, insert the throttling logic into `downloadPart()` by replacing the `.on('data', ...)` block with the following:
```js
response.data.on('data', async (chunk) => {
  writer.write(chunk);
  part.downloaded += chunk.length;
  received += chunk.length;
  totalDownloaded += chunk.length;

  // Per-thread throttling: pause the stream while waiting,
  // otherwise chunks keep arriving during the sleep.
  const delayPerChunk = (chunk.length / THREAD_SPEED_LIMIT) * 1000; // ms
  if (delayPerChunk > 5) { // skip small delays
    response.data.pause();
    await sleep(delayPerChunk);
    response.data.resume();
  }
});
```
We add a download token bucket to enforce the global limit. Add this at the top of your file:

```js
let globalBucket = GLOBAL_SPEED_LIMIT;
```

Then insert this token refill logic before starting the downloads:

```js
setInterval(() => {
  globalBucket = GLOBAL_SPEED_LIMIT; // refill each second
}, 1000);
```
Now modify the `.on('data', ...)` section again to respect the global speed limit:

```js
response.data.on('data', async (chunk) => {
  // Throttle globally: wait until the token bucket has room for this chunk
  if (globalBucket < chunk.length) {
    response.data.pause();
    while (globalBucket < chunk.length) {
      await sleep(10); // wait a bit and retry
    }
    response.data.resume();
  }
  globalBucket -= chunk.length;

  // Proceed with write and tracking
  writer.write(chunk);
  part.downloaded += chunk.length;
  received += chunk.length;
  totalDownloaded += chunk.length;

  // Per-thread throttling (pause so chunks don't pile up during the sleep)
  const delayPerChunk = (chunk.length / THREAD_SPEED_LIMIT) * 1000;
  if (delayPerChunk > 5) {
    response.data.pause();
    await sleep(delayPerChunk);
    response.data.resume();
  }
});
```
You can log when throttling kicks in for debugging:

```js
if (delayPerChunk > 5) console.log(`Throttling Thread ${index + 1}`);
```
| Feature | Status | Method |
|---|---|---|
| Global speed limit | ✅ | Token bucket with refill per second |
| Per-thread speed limit | ✅ | Delay after each chunk per thread |
| Efficiency | ✅ | Non-blocking, respects concurrency |
Would you like to extract all of this into a clean CLI tool (`aia-downloader`) or an npm package?
Absolutely! Let's break down the throttling logic step by step so it's easy to understand and customize.
Throttle = "slow down" the download speed. We're controlling:

- Per-thread speed (each thread shouldn't exceed a certain speed)
- Global speed (all threads combined shouldn't go above a limit)

This helps:

- Prevent network overload
- Stay within API rate limits
- Avoid consuming all available bandwidth
1. Per-thread throttling. Purpose: make each thread obey a max speed (e.g., 1 MB/s).

After writing a chunk, calculate:

delay = chunkSize / threadLimitPerSecond

Example:

- Chunk = 100 KB
- Limit = 1 MB/s = 1024 KB/s
- Delay = 100 / 1024 = 0.0976 seconds (~97 ms)

We wait that long using `await sleep()`:

```js
const delayPerChunk = (chunk.length / THREAD_SPEED_LIMIT) * 1000;
if (delayPerChunk > 5) await sleep(delayPerChunk); // skip tiny delays
```
2. Global throttling. Purpose: all threads combined must not exceed the total bandwidth (e.g., 5 MB/s).

- Use a "token bucket" (the shared value `globalBucket`).
- Each second, refill it to the allowed global bandwidth.
- Before writing a chunk, check if there are enough "tokens" (bytes) left in `globalBucket`. If yes, proceed and subtract the bytes. If no, wait and retry.

```js
// refill every second
setInterval(() => {
  globalBucket = GLOBAL_SPEED_LIMIT;
}, 1000);

// in the download handler:
while (globalBucket < chunk.length) {
  await sleep(10); // wait a bit
}
globalBucket -= chunk.length;
```
| Throttle Type | Controls | Use Case |
|---|---|---|
| Per-thread | Fairness per task | Don't let one thread hog speed |
| Global | Total bandwidth | Don't exceed the total network cap |

This two-layer system gives you fine-grained control over download behavior.
```
+--------------+   +--------------+
|   Thread 1   |   |   Thread 2   |
| Limit: 1MB/s |   | Limit: 1MB/s |
+------+-------+   +------+-------+
       |                  |
       +--------+---------+
                v
  Global Token Bucket (e.g. 5MB/s total)
                v
             Network
```
Would you like to visualize this in a diagram or refactor it into a reusable download module?
Great question, and you're absolutely right to dig deeper. Let's clarify:
Delays don't restrict actual network bandwidth at the TCP level. Instead, they control how fast your app reads and writes data, which indirectly limits how much data flows through.

So: delaying read/write operations ≠ true bandwidth shaping, but it does limit your effective bandwidth usage from the app side.
What actually happens:

- The OS/network stack buffers incoming data.
- Your code decides when to read it.
- If you pause reading (via `await sleep()`), the TCP window shrinks, and the server eventually slows down sending.

Effect: you "throttle" how fast you pull data, the server adapts, and traffic slows down.
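To make that concrete, here is a minimal sketch of backpressure-based throttling that pauses the response stream explicitly between chunks (the URL is a placeholder and the target rate is arbitrary):

```js
const https = require('https');

// Read at roughly `bytesPerSec` by pausing the socket between chunks.
// While paused, the kernel buffer fills and TCP flow control kicks in,
// so the server slows down on its own.
function throttledGet(url, bytesPerSec, onChunk) {
  https.get(url, (res) => {
    res.on('data', (chunk) => {
      onChunk(chunk);
      res.pause(); // stop reading; the TCP window shrinks as buffers fill
      const delayMs = (chunk.length / bytesPerSec) * 1000;
      setTimeout(() => res.resume(), delayMs);
    });
    res.on('end', () => console.log('Done.'));
  });
}

// Pull at ~1 MB/s regardless of how fast the server can send.
throttledGet('https://example.com/largefile.zip', 1024 * 1024, (chunk) => {
  process.stdout.write(`\rreceived ${chunk.length} bytes`);
});
```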
Let's say:

- You delay 100 ms after reading each 100 KB.
- That's a ~1 MB/s effective pull rate.

Even though the server could send at 20 MB/s, you're voluntarily reading at just 1 MB/s. So the total bandwidth used stays low, though the physical NIC is still capable of more.
Imagine a high-speed water tap and a small cup:

- The tap is fast (the server can send a lot).
- You fill the cup, wait a bit, then drink.
- You're not limiting the tap, just how fast you take sips.
Keep in mind:

- It's not real QoS (Quality of Service).
- It doesn't prevent other apps from using the bandwidth.
- It can't cap traffic outside your app.
App-level throttling is enough when:

- You're writing a CLI downloader or API scraper.
- You need basic control to avoid server bans.
- You're working on multiple concurrent requests and want fairness.

If you need a hard cap, consider lower-level tools:
| Tool/Method | Description |
|---|---|
| `tc` (Linux Traffic Control) | Kernel-level bandwidth control |
| `trickle` | User-level bandwidth limiter wrapper |
| Docker network config | Limit containers |
| Proxy with shaping (e.g., Squid) | Centralized control |
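For example, `trickle` can wrap the Node.js process without any code changes; `-d` caps the download rate in KB/s (the script name here is a placeholder):

```bash
# Cap the downloader at ~1 MB/s download without touching the code
trickle -d 1024 node downloader.js
```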
| Method | Controls Network? | App-Level Control | OS-Level Bandwidth Cap |
|---|---|---|---|
| `await sleep()` after chunk | ❌ (not a true throttle) | ✅ Yes | ❌ No |
| `tc`, `trickle` | ✅ Yes | ❌ External | ✅ Yes |
So: you're not forcing the network to slow down, but you're slowing your own usage, which results in the same outcome from the server's perspective.
Do you want a version that uses OS-level throttling or test the download manager's effectiveness with speed charts?