Upload retry improvements (including timeout, 4xx, 5xx)
in progress
Tom Raganowicz
Files > 100 MB are uploaded using multipart upload with 5 MB chunks.
If the upload gets interrupted, the next time the file is uploaded we verify the MD5 of each already-uploaded chunk (only if E2E encryption is disabled) so it doesn't get re-uploaded.
The issue is that MD5 verification is relatively slow. If a 10 GB file was interrupted at 50%, we need to verify 5 GB of data, which isn't instant.
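For context, the current resume check is roughly equivalent to re-hashing every already-uploaded chunk, which is why a half-finished 10 GB upload means reading and hashing 5 GB first. A rough sketch (the metadata shape here is hypothetical, not the actual format):

```python
import hashlib

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB multipart chunks

def first_chunk_to_upload(local_path, remote_chunk_md5s):
    """Return the index of the first chunk that still needs uploading.

    remote_chunk_md5s holds the MD5 hex digests of the chunks already stored
    remotely, in upload order (a hypothetical shape for the resume metadata).
    """
    next_chunk = 0
    with open(local_path, "rb") as f:
        for expected in remote_chunk_md5s:
            data = f.read(CHUNK_SIZE)
            if not data or hashlib.md5(data).hexdigest() != expected:
                break  # local data diverged; re-upload from this chunk onwards
            next_chunk += 1
    return next_chunk
```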
The idea is to store the file's local modification date and trust that the file contents haven't changed if the modification date hasn't changed. Using this approach we could resume the upload immediately. We could also resume uploads of encrypted files, given that we continue uploading chunks with the same nonce prefix (this is safe because we're not reusing a nonce; yet-to-be-uploaded chunks receive their own nonce counter).
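A minimal sketch of that fast path, assuming the upload state persists the file size, local modification time and number of fully uploaded chunks (the saved_state fields below are made up for illustration):

```python
import os

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB multipart chunks

def can_skip_verification(local_path, saved_state):
    """Trust that the file is unchanged when its size and local modification
    time still match what was recorded when the interrupted upload started."""
    stat = os.stat(local_path)
    return (stat.st_size == saved_state["size"]
            and stat.st_mtime == saved_state["mtime"])

def resume_offset(saved_state):
    # Continue right after the last fully uploaded chunk. For encrypted files
    # the same nonce prefix is kept and every remaining chunk still gets its
    # own nonce counter, so no nonce is ever reused.
    return saved_state["uploaded_chunks"] * CHUNK_SIZE
```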
Finally, implement exponential back-off retries for failed transfers (e.g. 400 Bad Request, 5xx errors and timeouts).
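For the retry part, a minimal exponential back-off sketch with jitter; the delay values and which errors count as retryable are assumptions rather than the final policy (per the title, some 4xx responses and timeouts would be covered too):

```python
import random
import time

class HTTPError(Exception):
    """Hypothetical exception carrying the HTTP status of a failed request."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

# Which statuses are worth retrying is an assumption, not the final policy.
RETRYABLE_STATUSES = {408, 429, 500, 502, 503, 504}

def upload_with_retries(do_upload, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Run do_upload(), retrying timeouts and retryable HTTP errors with
    exponential back-off plus jitter."""
    for attempt in range(max_attempts):
        try:
            return do_upload()
        except TimeoutError:
            pass  # timeouts are always retried
        except HTTPError as e:
            if e.status not in RETRYABLE_STATUSES:
                raise  # fail fast on errors that won't resolve themselves
        if attempt == max_attempts - 1:
            raise RuntimeError(f"upload failed after {max_attempts} attempts")
        delay = min(max_delay, base_delay * 2 ** attempt)
        time.sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids retry bursts
```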
Tom Raganowicz
in progress
lei mix
I need this very strongly, because when I upload a lot of files, I always get all kinds of errors. Same error after retry. Don't know what to do.
Tom Raganowicz
planned