Nginx direct file upload without passing it through the backend

It's pretty straightforward to manage file uploads: anyone can do it using the multipart/form-data encoding (RFC 1867). Let's see what happens (a typical request is shown right after the list):

  • client sends a POST request with the file content in the BODY
  • webserver accepts the request and initiates the data transfer (or returns error 413 if the file size exceeds the limit)
  • webserver starts to populate buffers (depending on the file and buffer sizes), stores the body on disk and sends it via socket/network to the back-end
  • back-end verifies the authentication (note: this happens only once the file is already uploaded)
  • back-end reads the file, strips the multipart headers (Content-Disposition, Content-Type) and stores it on disk again
  • back-end performs whatever you actually need to do with the file
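
For reference, a typical multipart upload that triggers the flow above can be reproduced with curl (the URL and file name are illustrative):

curl -F 'file=@big-video.mp4' http://example.com/upload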

Too much overhead? This happens every time you upload something. The problems are obvious:

  • authentication happens on the back-end only after the file has been saved to disk by the webserver
  • the request BODY is saved to disk twice (on both the web-server and back-end sides)
  • the back-end is blocked while it consumes the file
  • the resulting binary data is rarely needed by the back-end itself: images usually go to ImageMagick, documents get uploaded to S3, and so on

To be honest, I see no problem with small file uploads. But what if you handle big file uploads all the time? Assuming you use the Nginx web-server, you have several options:

The best and production-ready solution is the last one, client_body_in_file_only. Due to the lack of documentation hardly anyone uses it, so let me share my experience of setting it up. First of all, you need authentication to happen before the file upload starts: either Basic HTTP Authentication (shared password) or the http_auth_request module (for back-end authentication through headers). Then update the Nginx configuration with the following:

location /upload {
  auth_basic                 "Restricted Upload";
  auth_basic_user_file       basic.htpasswd;
  limit_except POST          { deny all; }

  client_body_temp_path      /tmp/;
  client_body_in_file_only   on;
  client_body_buffer_size    128K;
  client_max_body_size       1000M;

  proxy_pass_request_headers on;
  proxy_set_header           X-FILE $request_body_file;
  proxy_set_body             off;
  proxy_redirect             off;
  proxy_pass                 http://backend/file;
}

Once you reload Nginx, the new URL /upload is ready to accept file uploads without any back-end interaction: everything goes through Nginx, which then sends a callback to http://backend/file with the temporary file name in the X-FILE header. That's all; easy, isn't it?
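
To sanity-check the setup, push an upload with curl; the credentials and file name below are placeholders and must match your basic.htpasswd:

curl -u uploader:secret --data-binary '@big-video.mp4' http://localhost/upload

The back-end should then receive a POST to http://backend/file with an empty body and the temporary path (e.g. /tmp/0000000001) in the X-FILE header.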

You already know the file name before you make the POST request, so you should preserve it until the back-end receives it. We use extra headers with the POST request; they pass through the Nginx proxy and reach the back-end unmodified. For instance, an X-NAME header set on the initial request lets you pick the original name up on the backend.
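
Building on the previous example, the original name can travel in such a header (X-NAME is just a convention; use whatever header your back-end reads):

curl -u uploader:secret --data-binary '@report.pdf' -H 'X-NAME: report.pdf' http://localhost/upload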

If you need back-end authentication, the only way to handle it is auth_request, for instance:

location = /upload {
  auth_request               /upload/authenticate;
  # ...plus the client_body_* and proxy_* directives from the block above
}

location = /upload/authenticate {
  proxy_set_body             off;
  proxy_pass                 http://backend;
}

The upload request should come with headers to be validated, for instance X-API-KEY; once authentication has finished, Nginx starts the file upload and passes the file name to the back-end afterwards. It is an internal cascade of requests, so the client makes only one request carrying the file BODY and the authentication headers. The good news is that the auth_request module will soon be incorporated into the Nginx core, so we will be able to use it without ./configure ... --add-module=/tmp/ngx_http_auth_request
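
Put together, the two location blocks might look roughly like this; it is a sketch only, mirroring the fragments above (the backend URLs are the article's, everything else should be adjusted to your setup):

location = /upload {
  auth_request               /upload/authenticate;

  client_body_temp_path      /tmp/;
  client_body_in_file_only   on;
  client_body_buffer_size    128K;
  client_max_body_size       1000M;

  proxy_pass_request_headers on;
  proxy_set_header           X-FILE $request_body_file;
  proxy_set_body             off;
  proxy_redirect             off;
  proxy_pass                 http://backend/file;
}

location = /upload/authenticate {
  internal;                  # reachable only as an auth_request subrequest
  proxy_set_body             off;
  proxy_pass                 http://backend;
}

Readers report two tweaks in the comments below: adding proxy_set_header Content-Length 0; if the proxied callback hangs, and repeating client_max_body_size inside the authenticate location to avoid a 413 from the auth subrequest.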

P.S. client_body_in_file_only is incompatible with multipart form-data uploads, so you can use it only via XMLHttpRequest2 (without multipart) and raw binary data uploads:

curl --data-binary '@file' http://localhost/upload

This method is best suited to native mobile applications that handle big file uploads all the time.

56 Responses


I found this very useful but hit a couple of problems with it.

First, by default nginx was configured to store the files in /tmp/ as a different user than the one running my proxied processes. Editing the user directive in /etc/nginx/nginx.conf was my solution to this.

Second, without adding "proxy_set_header Content-Length 0;" the response from my proxy just 'hung' open and had to be manually closed. This is a little confusing, but it seems to work for me :)
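
For reference, a rough sketch of where that second line sits (the rest of the location is as in the article):

location /upload {
  # ...client_body_* and proxy_* directives as in the article...
  proxy_set_header X-FILE $request_body_file;
  proxy_set_header Content-Length 0;
}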

over 1 year ago ·

Why does the upload only succeed when using curl --data-binary?

I have tried a normal form upload, but the temp file contains the form field names and values.

over 1 year ago ·

@dawncold because client_body_in_file_only doesn't support RFC 2388

over 1 year ago ·

Nginx will only start passing the file to the backend when the upload is complete, so at least there won't be a backend worker blocked while the client is still sending the data. And your setup will only work with Nginx and the backend having access to the same filesystem. Other than that: nice trick! Thanks!

over 1 year ago ·


1) That's not quite right: Nginx passes only the filename, not the full file body, to the backend. So the backend does not have to parse and strip the Content-Disposition headers.

2) The back-end is not blocked while the file is being uploaded by Nginx, in any case.

3) Nginx and the back-end must share the same filesystem, that's right.

over 1 year ago ·

I tried your solution; the upload works great, but the tmp file gets stored as /tmp/00000x (x is a digit). Since the upload is asynchronous relative to the rest of the form, which has already been saved as a "resource", how do you know which uploaded file belongs to which resource? How can you know in advance where it's going to be stored?

over 1 year ago ·

@eppo the back-end receives a request to the URL http://backend/file with an empty body(!) and the file name in the X-FILE header. The storage location is declared by client_body_temp_path

over 1 year ago ·

OK, I didn't notice that the client's upload request is passed to the backend within the same HTTP request once the upload finishes. So you can be sure you're processing that client's request; that's what I was wondering.
Thanks for your tip, really handy.

over 1 year ago ·

@eppo yes, this is the callback; it fires only if the file has been uploaded and saved to disk successfully.

over 1 year ago ·

How can I pass the file path as a GET or POST variable (instead of the X-File header) in the request to the back-end?

over 1 year ago ·

@dawncold you are able to send the desired file name in an extra header and reuse it on the back-end afterwards

over 1 year ago ·

@mikhailov, I have tried making a URL like http://back-end/file?name=xxx&path=$request_body_file, but I can't get the value of $request_body_file; this variable's value can only be passed in a header. I don't know why.

over 1 year ago ·

@dawncold We use extra headers with the POST request; they pass through the Nginx proxy and reach the back-end unmodified. So try an X-NAME header on the initial request and you will be able to pick it up on the backend.

over 1 year ago ·

Hi mikhailov, here's my problem:
I'm using Nginx as a reverse proxy. When I POST a file to Nginx, it seems that it stores the whole file locally and forwards it to the backend server only after the whole file has been received. I want a solution that makes Nginx receive and forward the data simultaneously.

Can client_body_in_file_only do this?

PS: the proxy and backend are different servers.

over 1 year ago ·

@goace, the file is stored on the local file system, that's right. Once it has been uploaded, the backend gets a synchronous callback (with the X-FILE header and an empty BODY) to whatever URL you specified (http://backend/file in my case).

over 1 year ago ·

Saved files also contain the headers; is this the expected behavior or am I doing something wrong? I.e.:

Content-Disposition: form-data; name="liteUploader_id"

Content-Disposition: form-data; name="custom"

Content-Disposition: form-data; name="fileUpload1[]"; filename="Gibson SG.jpg"

Content-Type: image/jpeg

...and then the binary image data follows.

over 1 year ago ·

@xfrf how do you upload a file?

over 1 year ago ·

@mikhailov our fault; we've already fixed it. Thank you for describing the method!

over 1 year ago ·

@xfrf, what was the problem?

over 1 year ago ·

I have a similar problem with the file output containing

Content-Disposition: form-data; name="fileToUpload"; filename="Screen Shot 2013-09-16 at 8.48.45 AM.png"
Content-Type: image/png

How do I trim this data? More importantly, how do I capture Content-Type as a request header, perhaps as something like X-Content-Type?

over 1 year ago ·

@meson10 change the way you upload the file, get rid of multipart/form-data

over 1 year ago ·

How do I access the request parameters then?

over 1 year ago ·

Through custom headers, which Nginx preserves; they come with the original request.

over 1 year ago ·

My bad. It's not just Content-Type; I am sending a bunch of request parameters too, like:

Content-Disposition: form-data; name="maxlength"


These would obviously not be part of the headers but of the request body, which would require some pruning of the saved file. Correct, or am I missing a trick here?

(I am relatively new to DevOps and advanced Nginx tricks, pardon my naivety with the concepts.)

over 1 year ago ·

Thanks for sharing this approach!
I have one small question: when nginx stores a file in the /tmp/ directory, it sets file access to 'rw' for the file owner only (and the owner is the nginx user, say 'www-data'). Then I need to use this file from a backend process which runs under a separate user (say 'deployer'). I wonder how you deal with this case?

over 1 year ago ·

@kliuchnikau if you have a separate deployment user, the web-server needs access to the app's tmp directory at least. So the approach is to run the webserver under the deployer user; that should be fine if you control the application itself.
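
A minimal sketch, assuming a 'deployer' user (adjust to your setup), in /etc/nginx/nginx.conf:

user deployer;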

over 1 year ago ·

Anatoly, good evening.

Thank you for the detailed how-to.

Could you tell me whether I understand correctly that there is no way to influence the naming of the files that get written on upload, if we set the level 1-3 options aside?

I.e. I cannot name the files, for example, as [\w]{6} instead of [\d]{10}?


over 1 year ago ·

@2naive, there were questions about this on the mailing list, but as far as I'm aware of the changes this module hasn't been touched (only auth_request was added, by request), so for now there is no way to influence the file name.

over 1 year ago ·

Thanks for the reply.

And one more silly question: can I be sure that, when restoring the directory from a backup on a new site with the same settings, the restored files will not be overwritten by nginx (and that new files will not fail to be written because a file with the same identifier already exists)?


over 1 year ago ·

For some reason, on nginx 1.4.4 on Ubuntu, using the default built-in uploader with nothing more than a max client body size set to 1GB, I can get both PHP upload progress working and multipart with the $_FILES array being populated :S.

Not sure how that is, but it is.

over 1 year ago ·

Hello Mikhailov,

is it possible to get the original uploaded file name? The X-File header only has the temp file name "/tmp/uploads/0000000001"

over 1 year ago ·

@pointblank, I don't think it's possible, because Nginx has its own internal naming convention for request body files.

over 1 year ago ·


Nginx upload module is supported in 1.4.4.

over 1 year ago ·

@mandrei99 the plugin author's attitude is pretty clear: https://github.com/vkholodkov/nginx-upload-module/issues/41

It's a bit risky to rely on patches that can break and make the core functionality of Nginx and the upload module unstable.

over 1 year ago ·

@mandrei99 once you start using unsupported plugins, it stops you from running the latest stable Nginx version: a new release comes out and you are waiting for a new patch again.
For example, SPDY/2 support will be discontinued soon, so Nginx 1.5.10 is a must.

We decided not to do that; we just use the built-in functionality and develop a service on top of it. The core functionality is enough for our tasks.

over 1 year ago ·

Files created by nginx:
2014/02/26 21:47:23 [notice] 4533#0: *1 a client request body is buffered to a temporary file /var/www/staging/0000000001, client:, server:.com, request: "POST /upload HTTP/1.1", host: "localhost"

The created file has owner 'nobody' and very restrictive permissions:
-rw------- 1 nobody admin 140257 26 Feb 21:47 0000000001

I'd like to read the file and process its contents in the backend, but I can't seem to figure out how to tell nginx to use a different umask (022) for the files it creates. Can anybody help me please?

The exception I get in the backend:
java.nio.file.AccessDeniedException: /tmp/0055119830
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)

over 1 year ago ·

@ronnyf what user and group do you have in the Nginx configuration file? I've checked, and the files are created as the user specified there. If that does not help, please take a look at http://en.wikipedia.org/wiki/Setuid#setgid_on_directories

over 1 year ago ·

Thanks, I made the nginx user and the backend user the same; it works alright.

over 1 year ago ·

@mikhailov many thanks, but how about this part: "back-end reads the file, strips the multipart headers (Content-Disposition, Content-Type) and stores it on disk again"?

How can you cut those headers? Can you use the uploaded file (e.g. 00000002) as a string?

over 1 year ago ·

@vaggos2002 please look at RFC 1867. You can upload files in one of two ways: as multipart form data or as raw binary.
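
For example (URL and file name are illustrative): with the first request below, the multipart wrapping ends up inside the stored file, while the second stores the bare file contents:

curl -F 'file=@picture.jpg' http://localhost/upload
curl --data-binary '@picture.jpg' http://localhost/upload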

over 1 year ago ·

@mikhailov thanks a lot

over 1 year ago ·

How old is this post and its comments? A year? 6 months? A week? Today?
I must be being really thick, but I don't see the rather essential dates! So when the post says:
"nginx-big-upload too young, nobody uses it in production yet" about something that has been in existence for over a year, it's kinda hard to get an idea of whether I should be reading this or not!

over 1 year ago ·

@mikhailov - Thanks for this writeup. We are running into the same nginx version upgrade problems at the moment.

One question I have: we need to support IE8 (public sector clients), so we can't use XMLHttpRequest2; the only option is to use binary uploads as you mentioned. However, using a normal HTML form, would we still use the multipart/form-data definition at the top? Would this system handle text files, for example (non-binary data)?


over 1 year ago ·

@hamzakc, no, RFC 2388 (multipart/form-data) is not supported by Nginx client_body_in_file_only

over 1 year ago ·

@mikhailov - Thanks for getting back to me. So how did you get round the IE8, 9 problem then? Did your site just not use nginx for uploads when using those browsers by doing some javascript detection?

over 1 year ago ·

nginx -t fails with host not found in upstream "backend" in /etc/nginx/nginx.conf. Am I missing something?

over 1 year ago ·

Hello, Anatoly!
Thank you for this write-up.
Unfortunately I ran into a number of difficulties and, in despair, I'm asking you to help me figure them out. I noticed that your profile mentions Rails, which is exactly what I use: rails 4.2, rack 1.6.0, nginx 1.6.2.

With this configuration WEBrick returns a 500 error, "bad content body".
If I remove the proxy_set_body off; option everything is fine, BUT the request reaches the Rails app and it writes the body into RAM, which is exactly what should not happen.
If I add proxy_set_body off; and proxy_set_header Content-Type "multipart/form-data"; then WEBrick returns ActionController::InvalidAuthenticityToken.

I've been fiddling with the settings and options for three days now and still can't get it all working. All I need is for nginx to save the file, report its path to Rails, and for Rails not to write the file body into RAM after nginx.

View (Haml):
= form_for :file, url: '/uploadfile', html: { multipart: true } do |f|
  = file_field_tag 'uploaded_files[]', required: true
  = f.submit 'Save'

  def uploadfile
    # render :text => request.headers['X-File']
    render :text => env.inspect
  end

Rails.application.routes.draw do
  resources :files
  root 'files#index'
  match '/uploadfile' => 'files#uploadfile', via: [:get, :post]
end

Or here on pastebin:

nginx.conf - http://pastebin.com/YuKjNkee
rails app - http://pastebin.com/G3Bwzggu

over 1 year ago ·

If I add proxy_set_header Content-Type "multipart/form-data"; to the nginx location,
and protect_from_forgery except: :uploadfile to the Rails controller,
then the scheme works. Though I don't know whether doing it this way is correct and safe.

over 1 year ago ·

I only just noticed that all the stripped parameters, tokens, and everything else got written into the file along with it, so only the file itself should be sent to location /upload, without any extra information.

over 1 year ago ·

I had to add "client_max_body_size" to the auth_request location too, otherwise nginx complained "client intended to send too large body" and "auth request unexpected status: 413"

over 1 year ago ·

Gorgeous. I thought there was no alternative to Tengine's unbuffered upload.

over 1 year ago ·

How could I upload a chunked file?

over 1 year ago ·

@craigloftus I'm having a possibly similar hanging problem... the file is sent up to the temp file and I get the nginx message

2015/06/04 09:03:53 [notice] 9239#0: *1 a client request body is buffered to a temporary ....

but then the backend doesn't get the call for another minute or so, I assume due to some timeout. I added
proxy_set_header Content-Length 0;
next to the X-FILE header setting that @mikhailov originally used, but it doesn't seem to help.

It does seem to be related to file size; perhaps it's just that the 'request body is buffered' notice shows up at the beginning of the transfer, and the forward happens after the transfer.

over 1 year ago ·

@mikhailov the 'proxy_set_body off;' directive appears to set the body of the POST to the backend to the literal string 'off'. That's why the Content-Length is set to 3. Perhaps you meant

proxy_pass_request_body off;

Anyway, cool trick.

over 1 year ago ·

This sounds cool. :)

Just wondering if this supports break-n-resume file uploading?

7 months ago ·