Handling Files
How to work with files in Latenode: no-code and code approaches
In Latenode, there are two fundamentally different ways to interact with files received from other nodes. The approach you use depends on whether you are working with pre-built nodes (No-Code) or writing your own code (Node.js/Code).
1. No-Code Approach (Passing Files in Standard Nodes)
This approach applies when you pass a file from one built-in node to another (e.g., from an HTTP Request node to an AI node, a Google Drive node, etc.).

Principle
Latenode passes a special service reference object rather than the raw file contents. Standard Latenode nodes automatically recognize this object and use it to retrieve and process the full file.
How It Works
To pass a file, you simply use the variable interpolator ({{...}}) to substitute the entire file structure, or specific fields, into the required fields of the receiving node.
| Receiving Node Field | Value to Substitute | Example Interpolation |
| --- | --- | --- |
| File Content | The entire file structure (root object) | {{$json.file}} |
| File Name | The file name (often required for saving) | {{$json.file.filename}} |
| MIME Type | The file's media type | {{$json.file.fileType}} |
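For orientation, the file object in a node's output typically looks something like the sketch below. The field names follow the table above, but the exact shape and the contents of the content field depend on the producing node, so treat this purely as an illustration.

// Illustrative shape only — the actual structure is produced by the upstream node.
{
  "file": {
    "filename": "report.csv",   // → File Name field
    "fileType": "text/csv",     // → MIME Type field
    "content": "<service reference managed by Latenode>"
  }
}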

2. Code Approach (Working in the Node.js/JavaScript Node)
This approach is mandatory if you need to read, modify, or create a file using custom JavaScript code, as direct substitution of the service object will fail.
The Problem
The Node.js node cannot work with the file directly. You must retrieve the internal file path from the node's temporary file system, and then use the standard Node.js fs (File System) library.
⚙️ Step 1: Retrieving the Internal File Path
Your code must make Latenode mount the file and retrieve its temporary path, using the access syntax based on the previous node's output (Node #2 in this example workflow).
| Theoretical Template | Working Example (Node #2 Output) |
| --- | --- |
| const filePath = data.file.content; | const contentFilePath = data["{{2.result.file.content}}"]; |

Explanation: This line retrieves the temporary, physical file path from the output of Node #2 (2.result.file). This is the only reliable way to get the file's location.
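A minimal sketch of Step 1 in isolation, assuming the file comes from Node #2 as above, might look like the following. The fs.existsSync / fs.statSync calls are an optional safeguard added for illustration, not something Latenode requires.

import fs from 'fs';

export default async function run({ execution_id, input, data }) {
    // Resolve the temporary path of the file produced by Node #2.
    const contentFilePath = data["{{2.result.file.content}}"];

    // Optional safeguard: fail fast if the path is missing or does not exist.
    if (!contentFilePath || !fs.existsSync(contentFilePath)) {
        throw new Error('Could not resolve the temporary file path from Node #2.');
    }

    // Return basic diagnostics only; the actual read/modify/write happens in Steps 2-3.
    return { fileFound: true, sizeBytes: fs.statSync(contentFilePath).size };
}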

⚙️ Step 2: Reading the File and Modifying Data
Once the path has been obtained, we read the content and perform the modification (in this example, CSV processing).
| Theoretical Template | Working Example (CSV Processing) |
| --- | --- |
| const buffer = fs.readFileSync(filePath); /* ... modification logic ... */ const processedBuffer = Buffer.from(modifiedData); | const contentFileBuffer = fs.readFileSync(contentFilePath); /* add a "Processed" column (full code below) */ const processedFileBuffer = Buffer.from(processedCsvString, 'utf8'); |
CSV Processing Logic (Full Code for Step 2):
// Import fs (required for file operations)
import fs from 'fs';

// ... inside the run function ...

// Read the file into a buffer.
const contentFileBuffer = fs.readFileSync(contentFilePath);

// --- MODIFICATION LOGIC ---
let csvContent = contentFileBuffer.toString('utf8');
let rows = csvContent.split('\n');
let header = rows[0];
let processedRows = [header]; // Keep the header row

// Add a "Processed" column to all data rows
for (let i = 1; i < rows.length; i++) {
    let row = rows[i];
    if (row.trim() === '') continue; // Skip empty lines
    row = row.trim() + ',"Processed"';
    processedRows.push(row);
}

// Convert the processed string back to a buffer
const processedCsvString = processedRows.join('\n');
const processedFileBuffer = Buffer.from(processedCsvString, 'utf8');
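One optional hardening note: CSV files exported from Windows tools often use \r\n line endings, which would leave a stray carriage return before the appended column. If that may apply to your data, splitting on a regular expression is a safe drop-in replacement for the split('\n') call above:

// Optional: tolerate both Unix (\n) and Windows (\r\n) line endings.
let rows = csvContent.split(/\r?\n/);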
⚙️ Step 3: Writing and Returning the File
The modified data must be written back to the temporary file system and returned via the specialized file() function so that it is made available to the next nodes.
| Theoretical Template | Working Example (CSV Return) |
| --- | --- |
| fs.writeFileSync('new_file.ext', processedBuffer); return { file: file('new_file.ext'), fileType: 'mime/type' }; | fs.writeFileSync(newFileName, processedFileBuffer); return { file: file(newFileName), fileType: newMimeType }; |

Explanation: We write the buffer under a new filename (processed_data.csv) and return the file object structure expected by Latenode.

Crucial Warning: Attempting to return the raw contentFilePath without first reading, modifying, and then rewriting the file will result in an error in the subsequent node. You must always follow the cycle: get path → read → modify → write → return file reference.
Final Code Reference (Complete Module)
This module demonstrates the complete READ → MODIFY → WRITE → RETURN cycle for CSV files, including robust error handling.
import fs from 'fs';

export default async function run({ execution_id, input, data }) {
    // --- STEP 1: RETRIEVING THE INTERNAL PATH ---
    const contentFilePath = data["{{2.result.file.content}}"];

    // Robust check: throw an error if the file path could not be retrieved
    if (!contentFilePath) {
        throw new Error('File path not successfully retrieved from Node #2. Check the file structure or node ID.');
    }

    // --- STEP 2: READING AND MODIFYING DATA (CSV Logic) ---
    // Read the file from temporary storage into a buffer
    const contentFileBuffer = fs.readFileSync(contentFilePath);
    let csvContent = contentFileBuffer.toString('utf8');
    let rows = csvContent.split('\n');
    let header = rows[0];
    let processedRows = [header]; // Keep the header row

    // Add a "Processed" column to all data rows
    for (let i = 1; i < rows.length; i++) {
        let row = rows[i];
        if (row.trim() === '') continue; // Skip empty lines
        row = row.trim() + ',"Processed"';
        processedRows.push(row);
    }

    // Convert the processed string back to a buffer
    const processedCsvString = processedRows.join('\n');
    const processedFileBuffer = Buffer.from(processedCsvString, 'utf8');

    // --- STEP 3: WRITING AND RETURNING THE FILE ---
    const newFileName = 'processed_data.csv';
    const newMimeType = 'text/csv';

    // Write the new file to temporary storage
    fs.writeFileSync(newFileName, processedFileBuffer);

    // Return the file using the special file() function
    return {
        file: file(newFileName),
        fileType: newMimeType
    };
}
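Once this module has run, downstream standard nodes can consume the returned file exactly as described in the No-Code section above: interpolate this code node's output into the receiving node's File Content field (for example, something like {{3.result.file}} if this code node is Node #3 in your workflow). The exact reference depends on your node IDs, so adjust it to your own scenario.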