GridFSFileWriter
A streaming writer for uploading a file to GridFS. This writer is not thread-safe.
final class GridFSFileWriter

GridFSFileWriter provides a streaming interface for uploading large files to GridFS. It handles chunking the file data and managing the upload process, including error handling and cleanup.
Basic Usage
// Create a writer
let writer = try await GridFSFileWriter(
    toBucket: gridFS,
    fileId: ObjectId(), // Optional custom ID
    chunkSize: 261_120 // Optional custom chunk size (default: 255KB)
)

// Write data in chunks
for chunk in dataChunks {
    try await writer.write(data: chunk)
}

// Finalize and create the file
let file = try await writer.finalize(
    filename: "large-file.dat",
    metadata: [
        "contentType": "application/octet-stream",
        "description": "Important data"
    ]
)

Streaming from HTTP
let writer = try await GridFSFileWriter(toBucket: gridFS)

do {
    // Stream file from HTTP request
    for try await chunk in request.body {
        try await writer.write(data: chunk)
    }

    // Complete the upload
    let file = try await writer.finalize(
        filename: "uploaded-file.dat"
    )
} catch {
    // Clean up partial upload
    try await writer.cancel()
    throw error
}

Error Handling
If an error occurs during upload, call cancel() to clean up partial chunks (see the sketch after this list)
The writer becomes invalid after calling finalize() or cancel()
Writing to a finalized writer will trigger an assertion failure
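Building on this pattern, here is a minimal sketch of a reusable upload helper that cancels on failure. It assumes this is MongoKitten's GridFS API, that the bucket type is GridFSBucket, that chunks arrive as ByteBuffer values, and that finalize() returns a GridFSFile; the helper name itself is illustrative, not part of the library.

import MongoKitten
import NIOCore

// Hypothetical helper (not part of the library): streams every chunk into a
// GridFSFileWriter and cancels the partial upload if anything throws, so no
// orphaned chunk documents are left behind.
func uploadCancellingOnFailure<Chunks: AsyncSequence>(
    _ chunks: Chunks,
    named filename: String,
    to bucket: GridFSBucket // assumed bucket type
) async throws -> GridFSFile where Chunks.Element == ByteBuffer {
    let writer = try await GridFSFileWriter(toBucket: bucket)
    do {
        for try await chunk in chunks {
            try await writer.write(data: chunk)
        }
        // After finalize() the writer must not be used again.
        return try await writer.finalize(filename: filename)
    } catch {
        // Remove any chunks that were already written to the database.
        try await writer.cancel()
        throw error
    }
}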
Performance Tips
The default chunk size (255KB) is suitable for most use cases
Larger chunks reduce the number of database operations but use more memory
The writer buffers data until it has a full chunk before writing to GridFS
Call flush() to force writing a partial chunk to the database (see the sketch after this list)
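As a rough illustration of these tips, the sketch below uses a larger chunk size for a known-large upload and calls flush() once the source is exhausted. The gridFS bucket, the videoFrames sequence, and the 1 MB figure are illustrative assumptions, not recommendations.

import MongoKitten
import NIOCore

// Sketch: trade memory for fewer database round-trips on a known-large upload.
// `gridFS` (a GridFSBucket) and `videoFrames` (an AsyncSequence of ByteBuffer)
// are illustrative assumptions.
let writer = try await GridFSFileWriter(
    toBucket: gridFS,
    chunkSize: 1_048_576 // 1 MB chunks: fewer chunk documents, more memory per buffer
)

for try await frame in videoFrames {
    // write(data:) buffers internally and only inserts a chunk document
    // once a full chunk's worth of bytes has accumulated.
    try await writer.write(data: frame)
}

// Optionally push any buffered partial chunk to the database right away;
// finalize() below still records the file metadata.
try await writer.flush()

let file = try await writer.finalize(filename: "recording.mp4")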
Implementation Details
Each chunk is stored as a separate document in the chunks collection
Chunks are numbered sequentially starting from 0
The file metadata is only written when finalize() is called (the sketch below inspects the resulting documents)
Indexes are automatically created on the first write to a bucket
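To make this layout concrete, here is a sketch that inspects the raw documents behind an upload. It assumes the bucket exposes filesCollection and chunksCollection properties and that collection methods such as findOne and count are available in your MongoKitten release; verify these names against your driver version before relying on them.

import MongoKitten

// Sketch: peek at the raw documents GridFS created for an upload.
// `gridFS` is assumed to be a GridFSBucket exposing `filesCollection` and
// `chunksCollection`; `fileId` is the _id used when creating the writer.
func describeStoredFile(_ fileId: ObjectId, in gridFS: GridFSBucket) async throws {
    // The file document only exists once finalize() has been called.
    let fileDocument = try await gridFS.filesCollection.findOne("_id" == fileId)
    print("file document:", fileDocument as Any)

    // Every chunk is its own document, keyed by files_id and a sequential
    // chunk number `n` that starts at 0.
    let chunkCount = try await gridFS.chunksCollection.count("files_id" == fileId)
    print("stored in \(chunkCount) chunk document(s)")
}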