2023-07-01  Var: supported HTTP response header variables.  [Zhidao HONG, 3 files, -6/+216]
This commit adds the variable $response_header_NAME.
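As an illustration of where such a variable could be interpolated, here is a minimal config sketch assuming a customized access log format; the log path, the "format" option usage, and the choice of the Content-Type header ($response_header_content_type) are assumptions for the example, not taken from the commit:

    {
        "access_log": {
            "path": "/var/log/unit/access.log",
            "format": "$remote_addr \"$request_line\" $status $response_header_content_type"
        }
    }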
2023-06-19  Variables refactoring.  [Zhidao HONG, 6 files, -180/+204]
This commit reimplements variables with an unknown field, such as $header_{name}, to make the parsing more generic. It is a preparation for supporting response header variables.
2023-07-11  NJS: supported 0.8.0.  [Zhidao HONG, 1 file, -15/+15]
2023-06-30  Fixed indentation.  [Alejandro Colomar, 2 files, -4/+4]
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-05-25  HTTP: fixed variable caching.  [Zhidao HONG, 3 files, -14/+41]
When a variable is accessed in the Unit configuration, the value is cached. This was useful prior to the URI rewrite feature, but now that the URI (more precisely, the request target) can be rewritten, the contents of the variable $uri (which contains the path part of the request target, and is decoded) should not be cached anymore, or at least the cached value should be invalidated after a URI rewrite.

Example:

    {
        "rewrite": "/prefix$uri",
        "share": "$uri"
    }

For a request line like GET /foo?bar=baz HTTP/1.1\r\n, the expected file served in the response would be /prefix/foo, but due to the caching issue, Unit currently serves /foo.
2023-06-01  Python: Fix error checks in nxt_py_asgi_request_handler().  [synodriver, 1 file, -2/+2]
Signed-off-by: synodriver <diguohuangjiajinweijun@gmail.com> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> [ Re-word commit subject - Andrew ] Fixes: c4c2f90c5b53 ("Python: ASGI server introduced.") Closes: <https://github.com/nginx/unit/issues/895> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-06-01  Python: Add ASGI lifespan state support.  [synodriver, 4 files, -3/+84]
Lifespan state is a special dict in the ASGI lifespan scope, which allows applications to persist data from the lifespan cycle to request/response handling. The scope["state"] namespace provides a place to store these sorts of things. The server will ensure that a shallow copy of the namespace is passed into each subsequent request/response call into the application. Some frameworks, for example Starlette, already take advantage of this feature, and without it they don't work properly.

Signed-off-by: synodriver <diguohuangjiajinweijun@gmail.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
[ Minor code tweaks to avoid lines > 80 chars, static a function and re-work the PyMemberDef structure initialisation for Python <3.7 and -Wwrite-strings compatibility - Andrew ]
Tested-by: <https://github.com/synodriver>
Tested-by: <https://github.com/hawiliali>
Closes: <https://github.com/nginx/unit/issues/864>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
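To illustrate what this enables on the application side, here is a minimal generic ASGI app sketch (not Unit code; the stored key and value are made up) that saves data into scope["state"] at lifespan startup and reads the per-request shallow copy later:

    async def app(scope, receive, send):
        if scope["type"] == "lifespan":
            while True:
                message = await receive()

                if message["type"] == "lifespan.startup":
                    # Anything stored here is visible to later requests via a
                    # shallow copy of this dict.
                    scope["state"]["greeting"] = b"hello from lifespan startup"
                    await send({"type": "lifespan.startup.complete"})

                elif message["type"] == "lifespan.shutdown":
                    await send({"type": "lifespan.shutdown.complete"})
                    return

        elif scope["type"] == "http":
            # The server passes a shallow copy of the lifespan state into
            # each request scope.
            body = scope["state"]["greeting"]

            await send({
                "type": "http.response.start",
                "status": 200,
                "headers": [(b"content-type", b"text/plain")],
            })
            await send({"type": "http.response.body", "body": body})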
2023-05-25  Tests: fixed incorrect pointer assignment.  [Alejandro Colomar, 1 file, -2/+2]
If we don't update the pointer before copying the request body, then we get the behavior shown below. After this patch, "foo\n" is rightly appended at the end of the response body. Request: "GET / HTTP/1.1\r\nHost: _\nContent-Length: 4\n\nfoo\n" Response body: """ Hello world! foo est data: Method: GET Protocol: HTTP/1.1 Remote addr: 127.0.0.1 Local addr: 127.0.0.1 Target: / Path: / Fields: Host: _ Content-Length: 4 Body: """ Fixes: 1bb22d1e922c ("Unit application library.") Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-05-21  Added back deprecated options to unitd.  [Alejandro Colomar, 1 file, -0/+31]
We renamed the options recently, with the intention of keeping the old names as supported but deprecated for some time, before removal. This was done with the configure script options, but in the unitd binary, we accidentally removed the old names, causing some unintended breakage. Keep support for the old names, albeit with a deprecation message to stderr, for some time, until we decide to remove them. Fixes: 5a37171f733f ("Added default values for pathnames.") Closes: <https://github.com/nginx/unit/issues/876> Reported-by: El RIDO <elrido@gmx.net> Acked-by: Liam Crilly <liam@nginx.com> Acked-by: Artem Konev <a.konev@f5.com> Acked-by: Timo Stark <t.stark@nginx.com> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Cc: Andrei Zeliankou <zelenkov@nginx.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-05-18  Python: Fix ASGI applications accessed over IPv6.  [Andrew Clayton, 1 file, -11/+3]
There are a couple of reports on GitHub about issues accessing Python ASGI based applications over IPv6. A request over IPv6 would result in an error like 2023/05/13 17:49:12 [alert] 47202#47202 [unit] #10: Python failed to create 'client' pair 2023/05/13 17:49:12 [alert] 47202#47202 [unit] Python failed to call 'loop.call_soon' ValueError: invalid literal for int() with base 10: 'db8:1:1:1ee7:dead:beef:cafe' The above error was the direct cause of the following exception: Traceback (most recent call last): File "/usr/lib64/python3.11/asyncio/base_events.py", line 765, in call_soon handle = self._call_soon(callback, args, context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.11/asyncio/base_events.py", line 781, in _call_soon handle = events.Handle(callback, args, self, context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ SystemError: <class 'asyncio.events.Handle'> returned a result with an exception set This issue occurred in the nxt_py_asgi_create_ip_address() function where it tries to create an IP address / port number pair. It does this by looking for the first ':' in the address and taking everything after it as the port number. Like in the above error message, if we tried to access the server @ 2001:db8:1:1:1ee7:dead:beef:cafe, then we'd end up with the port number as 'db8:1:1:1ee7:dead:beef:cafe'. There are two issues with this 1) The IP address and port number are already flowed through separately. 2) Even if (1) wasn't true, it would still be broken for IPv6 as we'd expect to a get an address literal like [2001:db8:1:1:1ee7:dead:beef:cafe]:8080, however there was no code to handle the []'s. The fix is to simply not try looking for a port number. We pass a port number into this function to use in the case where we don't find a port number, we never will... A further cleanup would be to flow through the server port number when creating the 'server pair' PyTuple, rather than just using the hard coded 80. Closes: <https://github.com/nginx/unit/issues/793> Closes: <https://github.com/nginx/unit/issues/874> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
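As a plain Python illustration of the failure mode described above (not Unit code), splitting an address on the first ':' clearly cannot work for IPv6 literals:

    # Naive "host:port" splitting, as the old code effectively did.
    addr = "2001:db8:1:1:1ee7:dead:beef:cafe"

    host, sep, port = addr.partition(":")
    print(host)   # 2001
    print(port)   # db8:1:1:1ee7:dead:beef:cafe  -> int(port) raises ValueError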
2023-05-08  NJS: supported loadable modules.  [Zhidao HONG, 14 files, -49/+1584]
2023-04-20  HTTP: added basic URI rewrite.  [Zhidao HONG, 9 files, -11/+160]
This commit introduces basic URI rewriting. It allows users to change the request URI. Note that the "rewrite" option ignores any query string contained in the new value; the query string from the original request is preserved.

An example:

    "routes": [
        {
            "match": {
                "uri": "/v1/test"
            },
            "action": {
                "return": 200
            }
        },
        {
            "action": {
                "rewrite": "/v1$uri",
                "pass": "routes"
            }
        }
    ]

Reviewed-by: Alejandro Colomar <alx@nginx.com>
2023-04-25  Allow to remove the version string in HTTP responses.  [Andrew Clayton, 4 files, -3/+22]
Normally Unit responds to HTTP requests by including a header like

    Server: Unit/1.30.0

however it can sometimes be beneficial to withhold the version information and in this case just respond with

    Server: Unit

This patch adds a new "settings.http" boolean option called server_version, which defaults to true, in which case the full version information is sent. However this can be set to false, e.g.

    "settings": {
        "http": {
            "server_version": false
        }
    }

in which case Unit responds without the version information, as the latter example above shows.

Link: <https://www.ietf.org/rfc/rfc9110.html#section-10.2.4>
Closes: <https://github.com/nginx/unit/issues/158>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-25  Decouple "Unit" from NXT_SERVER.  [Andrew Clayton, 1 file, -1/+2]
Split out the "Unit" name from the NXT_SERVER #define into its own NXT_NAME #define, then make NXT_SERVER a combination of that and NXT_VERSION. This is required for a subsequent commit where we may want the server name on its own. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-24  Remove an erroneous semi-colon.  [Andrew Clayton, 1 file, -1/+1]
Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-24  Don't conflate the error variable in nxt_kqueue_poll().  [Andrew Clayton, 1 file, -3/+4]
In nxt_kqueue_poll() error is declared as a nxt_bool_t aka unsigned int (on x86-64 anyway). It is used both as a boolean and as the return storage for a bitwise AND operation. This has potential to go awry. If nxt_bool_t was changed to be a u8 then we would have the following issue gcc12 -c -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g -O2 -I src -I build -I/usr/local/include -o build/src/nxt_kqueue_engine.o -MMD -MF build/src/nxt_kqueue_engine.dep -MT build/src/nxt_kqueue_engine.o src/nxt_kqueue_engine.c src/nxt_kqueue_engine.c: In function 'nxt_kqueue_poll': src/nxt_kqueue_engine.c:728:17: error: overflow in conversion from 'int' to 'nxt_bool_t' {aka 'unsigned char'} changes value from '(int)kev->flags & 16384' to '0' [-Werror=overflow] 728 | error = (kev->flags & EV_ERROR); | ^ cc1: all warnings being treated as errors EV_ERROR has the value 16384, after the AND operation error holds 16384, however this overflows and wraps around (64 times) exactly to 0. With nxt_bool_t defined as a u32, we would have a similar issue if EV_ERROR ever became UINT_MAX + 1 (or a multiple thereof)... Rather than conflating the use of error, keep error as a boolean (it is used further down the function) but do the AND operation inside the if (). Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-24  Remove a bunch of dead code.  [Andrew Clayton, 21 files, -4935/+0]
This removes a bunch of unused files that would have been touched by subsequent commits that switch to using nxt_bool_t (AKA unit6_t) in structures. In auto/sources we have NXT_LIB_SRC0=" \ src/nxt_buf_filter.c \ src/nxt_job_file.c \ src/nxt_stream_module.c \ src/nxt_stream_source.c \ src/nxt_upstream_source.c \ src/nxt_http_source.c \ src/nxt_fastcgi_source.c \ src/nxt_fastcgi_record_parse.c \ \ src/nxt_mem_pool_cleanup.h \ src/nxt_mem_pool_cleanup.c \ " None of these seem to actually be used anywhere (other than within themselves). That variable is _not_ referenced anywhere else. Also remove the unused related header files: src/nxt_buf_filter.h, src/nxt_fastcgi_source.h, src/nxt_http_source.h, src/nxt_job_file.h, src/nxt_stream_source.h and src/nxt_upstream_source.h Also, these files do not seem to be used, no mention under auto/ or build/ src/nxt_file_cache.c src/nxt_cache.c src/nxt_job_file_cache.c src/nxt_cache.h is #included in src/nxt_main.h, but AFAICT is not actually used. With all the above removed $ ./configure --openssl --debug --tests && make -j && make -j tests && make libnxt all builds. Buildbot passes. NOTE: You may need to do a 'make clean' before the next build attempt. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-12  HTTP: optimizing $request_line.  [Alejandro Colomar, 5 files, -41/+20]
Don't reconstruct a new string for the $request_line from the parsed method, target, and HTTP version, but rather keep a pointer to the original memory where the request line was received. This will be necessary for implementing URI rewrites, since we want to log the original request line, and not one constructed from the rewritten target. This implementation changes behavior (only for invalid requests) in the following way: Previous behavior was to log as many tokens from the request line as were parsed validly, thus: Request -> access log ; error log "GET / HTTP/1.1" -> "GET / HTTP/1.1" OK ; = "GET / HTTP/1.1" -> "GET / HTTP/1.1" [1] ; = "GET / HTTP/2.1" -> "GET / HTTP/2.1" OK ; = "GET / HTTP/1." -> "GET / HTTP/1." [2] ; "GET / HTTP/1. [null]" "GET / food" -> "GET / food" [2] ; "GET / food [null]" "GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] ; = "GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] ; = "GET food HTTP/1.1" -> "GET" ; "GET [null] [null]" "OPTIONS * HTTP/1.1" -> "OPTIONS" [3] ; "OPTIONS [null] [null]" "FOOBAR baz HTTP/1.1"-> "FOOBAR" ; "FOOBAR [null] [null]" "FOOBAR / HTTP/1.1" -> "FOOBAR / HTTP/1.1" ; = "get / HTTP/1.1" -> "-" ; " [null] [null]" "" -> "-" ; " [null] [null]" This behavior was rather inconsistent. We have several options to go forward with this patch: - NGINX behavior. Log the entire request line, up to '\r' | '\n', even if it was invalid. This is the most informative alternative. However, RFC-complying requests will probably not send invalid requests. This information would be interesting to users where debugging requests constructed manually via netcat(1) or a similar tool, or maybe for debugging a client, are important. It might be interesting to support this in the future if our users are interested; for now, since this approach requires looping over invalid requests twice, that's an overhead that we better avoid. - Previous Unit behavior This is relatively fast (almost as fast as the next alternative, the one we chose), but the implementation is ugly, in that we need to perform the same operation in many places around the code. If we want performance, probably the next alternative is better; if we want to be informative, then the first one is better (maybe in combination with the third one too). - Chosen behavior Only logging request lines when the request is valid. For any invalid request, or even unsupported ones, the request line will be logged as "-". Thus: Request -> access log [4] "GET / HTTP/1.1" -> "GET / HTTP/1.1" OK "GET / HTTP/1.1" -> "GET / HTTP/1.1" [1] "GET / HTTP/2.1" -> "-" [3] "GET / HTTP/1." -> "-" "GET / food" -> "-" "GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] "GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] "GET food HTTP/1.1" -> "-" "OPTIONS * HTTP/1.1" -> "-" "FOOBAR baz HTTP/1.1"-> "-" "FOOBAR / HTTP/1.1" -> "FOOBAR / HTTP/1.1" "get / HTTP/1.1" -> "-" "" -> "-" This is less informative than previous behavior, but considering how inconsistent it was, and that RFC-complying agents will probably not send us such requests, we're ready to lose that information in the log. This is of course the fastest and simplest implementation we can get. We've chosen to implement this alternative in this patch. Since we modified the behavior, this patch also changes the affected tests. [1]: Multiple successive spaces as a token delimiter is allowed by the RFC, but it is discouraged, and considered a security risk. It is currently supported by Unit, but we will probably drop support for it in the future. [2]: Unit currently supports spaces in the request-target. 
This is a violation of the relevant RFC (linked below), and will be fixed in the future, and consider those targets as invalid, returning a 400 (Bad Request), and thus the log lines with the previous inconsistent behavior would be changed. [3]: Not yet supported. [4]: In the error log, regarding the "log_routes" conditional logging of the request line, we only need to log the request line if it was valid. It doesn't make sense to log "" or "-" in case that the request was invalid, since this is only useful for understanding decisions of the router. In this case, the access log is more appropriate, which shows that the request was invalid, and a 400 was returned. When the request line is valid, it is printed in the error log exactly as in the access log. Link: <https://datatracker.ietf.org/doc/html/rfc9112#section-3> Suggested-by: Liam Crilly <liam@nginx.com> Reviewed-by: Zhidao Hong <z.hong@f5.com> Cc: Timo Stark <t.stark@nginx.com> Cc: Andrei Zeliankou <zelenkov@nginx.com> Cc: Andrew Clayton <a.clayton@nginx.com> Cc: Artem Konev <a.konev@f5.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-04-11  Add per-application logging.  [Andrew Clayton, 5 files, -0/+81]
Currently when running in the foreground, unit application processes will send stdout to the current TTY and stderr to the unit log file. That behaviour won't change. When running as a daemon, unit application processes will send stdout to /dev/null and stderr to the unit log file. This commit allows to alter the latter case of unit running as a daemon, by allowing applications to redirect stdout and/or stderr to specific log files. This is done via two new application options, 'stdout' & 'stderr', e.g "applications": { "myapp": { ... "stdout": "/path/to/log/unit/app/stdout.log", "stderr": "/path/to/log/unit/app/stderr.log" } } These log files are created by the application processes themselves and thus the log directories need to be writable by the user (and or group) of the application processes. E.g $ sudo mkdir -p /path/to/log/unit/app $ sudo chown APP_USER /path/to/log/unit/app These need to be setup before starting unit with the above config. Currently these log files do not participate in log-file rotation (SIGUSR1), that may change in a future commit. In the meantime these logs can be rotated using the traditional copy/truncate method. NOTE: You may or may not see stuff printed to stdout as stdout was traditionally used by CGI applications to communicate with the webserver. Closes: <https://github.com/nginx/unit/issues/197> Closes: <https://github.com/nginx/unit/issues/846> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-11  Add nxt_file_stdout().  [Andrew Clayton, 2 files, -0/+20]
This is analogous to the nxt_file_stderr() function and will be used in a subsequent commit. This function redirects stdout to a given file descriptor. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-11  PHP: Make the filter_input() function work.  [Andrew Clayton, 1 file, -3/+12]
On GitHub, @jamesRUS52 reported that the PHP filter_input()[0] function would just return NULL. To enable this function we need to run the variables through the sapi_module.input_filter() function when we call php_register_variable_safe(). In PHP versions prior to 7.0.0, input_filter() takes 'len' as an unsigned int, while later versions take it as a size_t. Now, with this commit and the following PHP <?php var_dump(filter_input(INPUT_SERVER, 'REMOTE_ADDR')); var_dump(filter_input(INPUT_SERVER, 'REQUEST_URI')); var_dump(filter_input(INPUT_GET, 'get', FILTER_SANITIZE_SPECIAL_CHARS)); ?> you get $ curl 'http://localhost:8080/854.php?get=foo<>' string(3) "::1" string(18) "/854.php?get=foo<>" string(13) "foo&#60;&#62;" [0]: <https://www.php.net/manual/en/function.filter-input.php> Tested-by: <https://github.com/jamesRUS52> Closes: <https://github.com/nginx/unit/issues/854> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-03  Remove a useless assignment in nxt_mem_zone_alloc_pages().  [Andrew Clayton, 1 file, -1/+1]
This was reported by the 'Clang Static Analyzer' as a 'dead nested assignment'. We assign prev_size then check if it's != 0 and if true we then set prev_pages to page_size right shifted by two at the same time setting prev_size to be right shifted by two (>>=), however page_size is never used again so no need to set it here. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-03  Prevent a possible NULL de-reference in nxt_job_create().  [Andrew Clayton, 1 file, -4/+6]
We allocate 'job', then check that it's not NULL and do stuff with it, but we then accessed it outside this check. Simply return if job is NULL.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-04-03  Remove a useless assignment in nxt_fs_mkdir_all().  [Andrew Clayton, 1 file, -1/+1]
This was reported by the 'Clang Static Analyzer' as a 'dead nested assignment'. We set end outside the loop but the first time we use it is to assign it in the loop (not used anywhere else). Further cleanup could be to reduce the scope of end by moving its declaration inside the loop. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-29  Auto: mirroring installation structure in build tree.  [Alejandro Colomar, 1 file, -1/+1]
This makes the build tree more organized, which is good for adding new stuff. Now, it's useful for example for adding manual pages in man3/, but it may be useful in the future for example for extending the build system to run linters (e.g., clang-tidy(1), Clang analyzer, ...) on the C source code. Previously, the build tree was quite flat, and looked like this (after `./configure && make`): $ tree -I src build build ├── Makefile ├── autoconf.data ├── autoconf.err ├── echo ├── libnxt.a ├── nxt_auto_config.h ├── nxt_version.h ├── unitd └── unitd.8 1 directory, 9 files And after this patch, it looks like this: $ tree -I src build build ├── Makefile ├── autoconf.data ├── autoconf.err ├── bin │ └── echo ├── include │ ├── nxt_auto_config.h │ └── nxt_version.h ├── lib │ ├── libnxt.a │ └── unit │ └── modules ├── sbin │ └── unitd ├── share │ └── man │ └── man8 │ └── unitd.8 └── var ├── lib │ └── unit ├── log │ └── unit └── run └── unit 17 directories, 9 files It also solves one issue introduced in 5a37171f733f ("Added default values for pathnames."). Before that commit, it was possible to run unitd from the build system (`./build/unitd`). Now, since it expects files in a very specific location, that has been broken. By having a directory structure that mirrors the installation, it's possible to trick it to believe it's installed, and run it from there: $ ./configure --prefix=./build $ make $ ./build/sbin/unitd Fixes: 5a37171f733f ("Added default values for pathnames.") Reported-by: Liam Crilly <liam@nginx.com> Reviewed-by: Konstantin Pavlov <thresh@nginx.com> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Cc: Andrei Zeliankou <zelenkov@nginx.com> Cc: Zhidao Hong <z.hong@f5.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-03-29  Renamed --libstatedir to --statedir.  [Alejandro Colomar, 1 file, -5/+5]
In BSD systems, it's usually </var/db> or some other dir under </var> that is not </var/lib>, so $statedir is a more generic name. See hier(7). Reported-by: Andrei Zeliankou <zelenkov@nginx.com> Reported-by: Zhidao Hong <z.hong@f5.com> Reviewed-by: Konstantin Pavlov <thresh@nginx.com> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Cc: Liam Crilly <liam@nginx.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-03-21  HTTP: added route logging.  [Alejandro Colomar, 7 files, -0/+37]
- Configuration: added "/config/settings/http/log_route".

  Type: bool
  Default: false

  This adds configurability to the error log. It allows enabling and disabling logs related to how the router performs selection of the routes.

- HTTP: logging request line.

  Log level: [notice]

  The request line is essential to understand which logs correspond to which request when reading the logs.

- HTTP: logging route that's been discarded.

  Log level: [info]

- HTTP: logging route whose action is selected.

  Log level: [notice]

- HTTP: logging when "fallback" action is taken.

  Log level: [notice]

Closes: <https://github.com/nginx/unit/issues/758>
Link: <https://github.com/nginx/unit/pull/824>
Link: <https://github.com/nginx/unit/pull/839>
Suggested-by: Timo Stark <t.stark@nginx.com>
Suggested-by: Mark L Wood-Patrick <mwoodpatrick@gmail.com>
Suggested-by: Liam Crilly <liam@nginx.com>
Tested-by: Liam Crilly <liam@nginx.com>
Acked-by: Artem Konev <a.konev@f5.com>
Cc: Andrew Clayton <a.clayton@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
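In configuration terms, the new setting from the first bullet above would be enabled like this (the rest of the configuration is unchanged; shown here only as a sketch of where the option lives):

    {
        "settings": {
            "http": {
                "log_route": true
            }
        }
    }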
2023-03-21  HTTP: rewrote while loop as for loop.  [Alejandro Colomar, 1 file, -7/+3]
This considerably simplifies the function, and will also help log the iteration in which we are, which corresponds to the route array element. Link: <https://github.com/nginx/unit/pull/824> Link: <https://github.com/nginx/unit/pull/839> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Tested-by: Liam Crilly <liam@nginx.com> Reviewed-by: Zhidao Hong <z.hong@f5.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-03-17  Default PR_SET_NO_NEW_PRIVS to off.  [Andrew Clayton, 1 file, -0/+4]
This prctl(2) option was enabled in commit 0277d8f1 ("Isolation: Fix the enablement of PR_SET_NO_NEW_PRIVS.") and this was being set by default. This prctl(2) when enabled renders (amongst other things) the set-UID and set-GID bits on executables ineffective after an execve(2). This causes an issue for applications that want to execute the sendmail(8) binary, this includes the PHP mail() function, which is usually set-GID. After some internal discussion it was decided to disable this option by default. Closes: <https://github.com/nginx/unit/issues/852> Fixes: 0277d8f1 ("Isolation: Fix the enablement of PR_SET_NO_NEW_PRIVS.") Fixes: e2b53e16 ("Added "rootfs" feature.") Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-17  Improve an error message regarding Unix domain sockets.  [Andrew Clayton, 1 file, -1/+1]
When starting unit, if its Unix domain control socket was already active you would get an error message like 2023/03/15 18:07:55 [alert] 53875#8669650 connect(5, unix:/tmp/control.sock) succeed, address already in use which is confusing in a couple of regards, firstly we have the classic success/failure message and secondly 'address already in use' is an actual errno value, EADDRINUSE and we didn't get an error from this connect(2). Re-word this error message for greater clarity. Reported-by: Liam Crilly <liam.crilly@nginx.com> Cc: Liam Crilly <liam.crilly@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-17  Socket: Remove Unix domain listen sockets upon reconfigure.  [Andrew Clayton, 3 files, -3/+87]
Currently when using Unix domain sockets for requests, if unit is reconfigured then it will fail if it tries to bind(2) again to a Unix domain socket with something like 2023/02/25 19:15:50 [alert] 35274#35274 bind(\"unix:/tmp/unit.sock\") failed (98: Address already in use) When closing such a socket we really need to unlink(2) it. However that presents a problem in that when running as root, while the main process runs as root and creates the socket, it's the router process, that runs as an unprivileged user, e.g nobody, that closes the socket and would thus remove it, but couldn't due to not having permission, even if the socket is mode 0666, you need write permissions on the containing directory to remove a file. There are several options to solve this, all with varying degrees of complexity and utility. 1) Give the user who the router process runs as write permission to the directory containing the listen sockets. These can then be unlink(2)'d from the router process. Simple and would work, but perhaps not the most elegant. 2) Using capabilities(7). The router process could temporarily attain the CAP_DAC_OVERRIDE capability, unlink(7) the socket, then relinquish the capability until required again. These are Linux specific (other systems may have similar mechanisms which would be extra work to support). There is also a, albeit small, window where the router process is running with elevated privileges. 3) Have the main process do the unlink(2), it is after all the process that created the socket. This is what this commit implements. We create a new port IPC message type of NXT_PORT_MSG_SOCKET_UNLINK, that is used by the router process to notify the main process about a Unix domain socket to unlink(2). Upon doing a reconfigure the router process will call nxt_router_listen_socket_release() which will close the socket, we extend this function in the case of non-abstract Unix domain sockets, so that it will send a message to the main process containing a copy of the nxt_sockaddr_t structure that will contain the filename of the socket. In the main process the handler that we have defined, nxt_main_port_socket_unlink_handler(), for this message type will run and allow us to look for the socket in question in the listen_sockets array and remove it and unlink(2) the socket. This then allows the reconfigure to work if it tries to bind(2) again to a socket that previously existed. Link: <https://github.com/nginx/unit/issues/669> Link: <https://github.com/nginx/unit/pull/735> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-17  Remove some dormant code from nxt_process_quit().  [Andrew Clayton, 1 file, -21/+0]
In nxt_process_quit() there is a loop that iterates over the task->thread->runtime->listen_sockets array and closes the connections. This code has been there from the beginning $ git log --pretty=oneline -S'if (rt->listen_sockets != NULL)' e9e5ddd5a5d9ce99768833137eac2551a710becf Refactor of process management. 6f2c9acd1841ca20a1388b34aef64e9f00459090 Processes refactoring. The cycle has been renamed to the runtime. $ git log --pretty=oneline -S'if (cycle->listen_sockets != NULL) {' 6f2c9acd1841ca20a1388b34aef64e9f00459090 Processes refactoring. The cycle has been renamed to the runtime. 16cbf3c076a0aca6d47adaf3f719493674cf2363 Initial version. but never seems to have been used (AFAICT and certainly not recently, confirmed by code inspection and running pytests with a bunch of language modules enabled and the code in question was never executed) as the listen_sockets array has never been populated... until now. The previous commit now adds Unix domain sockets to this array so that they can be unlink(2)'d upon exit and reconfiguration. This has now caused this dormant code to become active as it now tries to close these sockets (from at least the prototype processes), this array is inherited via fork by other processes. The file descriptor for these sockets is set to -1 when they are put into this array. This then results in close(-1) calls which caused multiple failures in the pytests such as > assert not alerts, 'alert(s)' E AssertionError: alert(s) E assert not ['2023/03/09 23:26:14 [alert] 137673#137673 socket close(-1) failed (9: Bad file descriptor)'] I think the simplest thing is to just remove this code. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-17  Socket: Remove Unix domain listen sockets at shutdown.  [Andrew Clayton, 2 files, -2/+23]
If we don't remove the Unix domain listen socket file then when Unit restarts it get an error like 2023/02/25 23:10:11 [alert] 36388#36388 bind(\"unix:/tmp/unit.sock\") failed (98: Address already in use) This patch makes use of the listen_sockets array, that is already allocated in the main process but never populated, to place the Unix domain listen sockets into. At shutdown we can then loop through this array and unlink(2) any Unix domain sockets found therein. Closes: <https://github.com/nginx/unit/issues/792> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-14  Router: More accurately allocate request buffer memory.  [Andrew Clayton, 1 file, -2/+2]
In nxt_router_prepare_msg() we create a buffer (nxt_unit_request_t *req) that gets sent to an application process that contains details about a client request. This buffer was always a little larger than needed due to allocating space for the remote address _and_ port and the local address _and_ port. We also allocate space for the local port separately. ->{local,remote}->length includes the port number and ':' and also the '[]' for IPv6. E.g [2001:db8::1]:8080 ->{local,remote}->address_length represents the length of the unadorned IP address. E.g 2001:db8::1 Update the buffer size so that we only allocate what is actually needed. Suggested-by: Zhidao HONG <z.hong@f5.com> Cc: Zhidao HONG <z.hong@f5.com> Reviewed-by: Zhidao HONG <z.hong@f5.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-10  Perl: Fix a crash in the language module.  [Andrew Clayton, 1 file, -3/+4]
User @bes-internal reported a Perl module crasher on GitHub. This was due to a Perl application sending back two responses. For each response we would call down into XS_NGINX__Unit__Sandbox_cb(); the first time pctx->req would point to a valid nxt_unit_request_info_t, the second time pctx->req would be NULL. Add an invalid-response check which covers this case.

Closes: <https://github.com/nginx/unit/issues/841>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-03-10  Router: Fix allocation of request buffer sent to application.  [Andrew Clayton, 1 file, -0/+1]
This fixes an issue reported by @Peter2121 on GitHub. In nxt_router_prepare_msg() we create a buffer (nxt_unit_request_t *req) that gets sent to an application process that contains details about a client request. The req structure comprises various members with the final member being an array (specified as a flexible array member, with its actual length denoted by the req->fields_count member) of nxt_unit_field_t's. These structures specify the length and offset for the various request headers name/value pairs which are stored after some request metadata that is stored immediately after this array of structs as individual nul terminated strings. After this we have the body content data (if any). So it looks a little like (gdb) x /64bs 0x7f38c976e060 0x7f38c976e060: "\353\346\244\t\006" <-- First nxt_unit_field_t 0x7f38c976e066: "" 0x7f38c976e067: "" 0x7f38c976e068: "T\001" 0x7f38c976e06b: "" 0x7f38c976e06c: "Z\001" 0x7f38c976e06f: "" ... 0x7f38c976e170: "\362#\244\v$" <-- Last nxt_unit_field_t 0x7f38c976e176: "" 0x7f38c976e177: "" 0x7f38c976e178: "\342\002" 0x7f38c976e17b: "" 0x7f38c976e17c: "\352\002" 0x7f38c976e17f: "" 0x7f38c976e180: "POST" <-- Start of request metadata 0x7f38c976e185: "HTTP/1.1" 0x7f38c976e18e: "unix:" 0x7f38c976e194: "unix:/dev/shm/842.sock" 0x7f38c976e1ab: "" 0x7f38c976e1ac: "fedora" 0x7f38c976e1b3: "/842.php" 0x7f38c976e1bc: "HTTP_HOST" <-- Start of header fields 0x7f38c976e1c6: "fedora" 0x7f38c976e1cd: "HTTP_X_FORWARDED_PROTO" 0x7f38c976e1e4: "https" ... 0x7f38c976e45a: "HTTP_COOKIE" 0x7f38c976e466: "PHPSESSID=8apkg25r9s9vju3pi085i21eh4" 0x7f38c976e48b: "public_form=sended" <-- Body content Well that's how things are supposed to look! When using Unix domain sockets what we actually got looked like ... 0x7f6141f3445a: "HTTP_COOKIE" 0x7f6141f34466: "PHPSESSID=uo5b2nu9buijkc89jotbgmd60vpublic_form=sended" Here, the body content (from a POST for example) has been appended straight onto the end of the last header field value. In this case corrupting the PHP session cookie. The body content would still be found by the application as its offset into this buffer is correct. This problem was actually caused by a0327445 ("PHP: allowed to specify URLs without a trailing '/'.") which added an extra item into this request buffer specifying the port number that unit is listening on that handled this request. Unfortunately when I wrote that patch I didn't increase the size of this request buffer to accommodate it. When using normal TCP sockets we actually end up allocating more space than required for this buffer, we track the end of this buffer up to where the body content would go and so we have a few spare bytes between the nul byte of the last field header value and the start of the body content. When using Unix domain sockets, they have no associated port number and thus the port number has a length of 0 bytes, but we still write a '\0' in there using up a byte that we didn't account for, this causes us to loose the nul byte of the last header fields value causing the body data to be appended to the last header field value. The fix is simple, account for the local port length, we also add 1 to it, this covers the nul byte, even if there is no port as with Unix domain sockets. Closes: <https://github.com/nginx/unit/issues/842> Fixes: a0327445 ("PHP: allowed to specify URLs without a trailing '/'.") Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-24  Set a safer umask(2) when running as a daemon.  [Andrew Clayton, 1 file, -3/+3]
When running as a daemon, Unit currently sets umask(0), i.e. no umask. This results in various directories being created with a mode of 0777, e.g. rwxrwxrwx. This currently affects cgroup and rootfs directories, which are being created with a mode of 0777, because when running as a daemon there is no umask to restrict the permissions.

This also affects the language modules (the umask is inherited over fork(2)), whereby unless something explicitly sets a umask, files and directories will be created with full permissions, 0666 (rw-rw-rw-) / 0777 (rwxrwxrwx) respectively. This could be an unwitting security issue.

My original idea was to just remove the umask(0) call and thus inherit the umask from the executing shell/program. However there was some concern about just inheriting whatever umask was in effect. Alex suggested that rather than simply removing the umask(0) call we change it to a value of 022 (which is a common default), which will result in directories and files with permissions of at most 0755 (rwxr-xr-x) and 0644 (rw-r--r--).

If applications need some other umask set, they can (as they always have been able to) set their own umask(2).

Suggested-by: Alejandro Colomar <alx.manpages@gmail.com>
Reviewed-by: Liam Crilly <liam@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-24  Isolation: rootfs: Set the sticky bit on the tmp directory.  [Andrew Clayton, 1 file, -1/+1]
When using the 'rootfs' isolation option, by default a tmpfs filesystem is mounted on tmp/. Currently this is mounted with a mode of 0777, i.e drwxrwxrwx. 3 root root 60 Feb 22 11:56 tmp however this should really have the sticky bit[0] set (as is per-normal for such directories) to prevent users from having free reign on the files contained within. What we really want is it mounted with a mode of 01777, i.e drwxrwxrwt. 3 root root 60 Feb 22 11:57 tmp [0]: To quote inode(7) "The sticky bit (S_ISVTX) on a directory means that a file in that directory can be renamed or deleted only by the owner of the file, by the owner of the directory, and by a privileged process." Reviewed-by: Liam Crilly <liam@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Remove the nxt_getpid() alias.  [Andrew Clayton, 2 files, -4/+1]
Since the previous commit, nxt_getpid() is only ever aliased to getpid(2). nxt_getpid() was only used once in the code, while there are multiple direct uses of getpid(2) $ grep -r "getpid()" src/ src/nxt_unit.c: nxt_unit_pid = getpid(); src/nxt_process.c: nxt_pid = nxt_getpid(); src/nxt_process.c: nxt_pid = getpid(); src/nxt_lib.c: nxt_pid = getpid(); src/nxt_process.h:#define nxt_getpid() \ src/nxt_process.h:#define nxt_getpid() \ src/nxt_process.h: getpid() Just remove it and convert the _single_ instance of nxt_getpid() to getpid(2). Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Remove the syscall(SYS_getpid) wrapper.  [Andrew Clayton, 1 file, -9/+0]
When using SYS_clone we used the getpid kernel system call directly via syscall(SYS_getpid) to avoid issues with cached pids. However since we are now only using fork(2) (+ unshare(2) for namespaces) we no longer need to call the kernel getpid directly as the fork(2) will ensure the cached pid is invalidated. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Remove nxt_clone().  [Andrew Clayton, 2 files, -17/+0]
Since the previous commit, this is no longer used. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Switch to fork(2) & unshare(2) on Linux.  [Andrew Clayton, 1 file, -9/+247]
On GitHub, @razvanphp & @hbernaciak both reported issues running the APCu PHP module under Unit. When using this module they were seeing errors like 'apcu_fetch(): Failed to acquire read lock' However when running APCu under php-fpm, everything was fine. The issue turned out to be due to our use of SYS_clone breaking the pthreads(7) API used by APCu. Even if we had been using glibc's clone(2) wrapper we would still have run into problems due to a known issue there. Essentially the problem is when using clone, glibc doesn't update the TID cache, so the child ends up having the same TID as the parent and that is used in various parts of pthreads(7) such as in the various locking primitives, so when APCu was grabbing a lock it ended up using the TID of the main unit process (rather than that of the php application processes that was grabbing the lock). So due to the above what was happening was when one of the application processes went to grab either a read or write lock, the lock was actually being attributed to the main unit process. If a process had acquired the write lock, then if a process tried to acquire a read or write lock then glibc would return EDEADLK due to detecting a deadlock situation due to thinking the process already held the write lock when in fact it didn't. It seems the right way to do this is via fork(2) and unshare(2). We already use fork(2) on other platforms. This requires a few tricks to keep the essence of the processes the same as before when using clone 1) We use the prctl(2) PR_SET_CHILD_SUBREAPER option (if its available, since Linux 3.4) to make the main unit process inherit prototype processes after a double fork(2), rather than them being reparented to 'init'. This avoids needing to ^C twice to fully exit unit when running in the foreground. It's probably also better if they maintain their parent child relationship where possible. 2) We use a double fork(2) technique on the prototype processes to ensure they themselves end up in a new PID namespace as PID 1 (when CLONE_NEWPID is being used). When using unshare(CLONE_NEWPID), the calling process is _not_ placed in the namespace (as discussed in pid_namespaces(7)). It only sets things up so that subsequent children are placed in a PID namespace. Having the prototype processes as PID 1 in the new PID namespace is probably a good thing and matches the behaviour of clone(2). Also, some isolation tests break if the prototype process is not PID 1. 3) Due to the above double fork(2) the main unit process looses track of the prototype process ID, which it needs to know. To solve this, we employ a simple pipe(2) between the main unit and prototype processes and pass the prototype grandchild PID from the parent of the second fork(2) before exiting. This needs to be done from the parent and not the grandchild, as the grandchild will see itself having a PID of 1 while the main process needs its externally visible PID. Link: <https://www.php.net/manual/en/book.apcu.php> Link: <https://sourceware.org/bugzilla/show_bug.cgi?id=21793> Closes: <https://github.com/nginx/unit/issues/694> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
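A standalone C sketch of the double fork(2) / unshare(2) / pipe(2) scheme described above; this is not Unit's code, just the general shape of the technique (it needs root or CAP_SYS_ADMIN for CLONE_NEWPID, and PR_SET_CHILD_SUBREAPER needs Linux >= 3.4):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/prctl.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        int     pfd[2];
        pid_t   child, proto = -1;

        /* Orphaned grandchildren get re-parented to us rather than to init. */
        if (prctl(PR_SET_CHILD_SUBREAPER, 1) == -1)
            perror("prctl");

        if (pipe(pfd) == -1)
            exit(EXIT_FAILURE);

        child = fork();
        if (child == 0) {
            /* First child: the *next* fork lands in a new PID namespace. */
            if (unshare(CLONE_NEWPID) == -1) {
                perror("unshare");
                _exit(EXIT_FAILURE);
            }

            proto = fork();
            if (proto == 0) {
                /* Grandchild ("prototype"): PID 1 inside the new namespace. */
                printf("prototype: getpid() = %d\n", getpid());
                sleep(1);
                _exit(EXIT_SUCCESS);
            }

            /*
             * Report the grandchild's externally visible PID; the grandchild
             * itself only sees itself as PID 1.
             */
            if (write(pfd[1], &proto, sizeof(proto)) != sizeof(proto))
                _exit(EXIT_FAILURE);

            _exit(EXIT_SUCCESS);
        }

        /* Main process: learn the prototype's PID and reap both children. */
        if (read(pfd[0], &proto, sizeof(proto)) != sizeof(proto))
            exit(EXIT_FAILURE);

        printf("main: prototype PID = %d\n", proto);

        waitpid(child, NULL, 0);    /* intermediate child */
        waitpid(proto, NULL, 0);    /* grandchild, reaped via the subreaper */

        return 0;
    }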
2023-02-17  Isolation: Rename NXT_HAVE_CLONE -> NXT_HAVE_LINUX_NS.  [Andrew Clayton, 6 files, -17/+17]
Due to the need to replace our use of clone/__NR_clone on Linux with fork(2)/unshare(2) for enabling Linux namespaces(7) to keep the pthreads(7) API working. Let's rename NXT_HAVE_CLONE to NXT_HAVE_LINUX_NS, i.e name it after the feature, not how it's implemented, then in future if we change how we do namespaces again we don't have to rename this. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-30  NJS: adding the missing vm destruction.  [Zhidao HONG, 7 files, -4/+63]
This commit fixes an njs memory leak that occurred during config validation, config updates, and HTTP request handling.
2023-02-07  Python: ASGI: Don't log asyncio.get_running_loop() errors.  [Andrew Clayton, 1 file, -2/+5]
This adds a check to nxt_python_asgi_get_event_loop() on the event_loop_func name in the case that running that function fails, and if it's get_running_loop() that failed we skip printing an error message as this is an often expected behaviour since the previous commit and we don't want users reporting erroneous bugs. This check will always happen regardless of Python version while it really only applies to Python >= 3.7, there didn't seem much point adding complexity to the code for this case and in what will be an ever diminishing case of people running older Pythons. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-07  Python: ASGI: Switch away from asyncio.get_event_loop().  [Andrew Clayton, 1 file, -1/+20]
Several users on GitHub reported issues with running Python ASGI apps on Unit with Python 3.11.1 (this would also effect Python 3.10.9) with the following error from Unit 2023/01/15 22:43:22 [alert] 0#77128 [unit] Python failed to call 'asyncio.get_event_loop' TL;DR asyncio.get_event_loop() is currently broken due to the process of deprecating part or all of it. First some history. In Unit we had this commit commit 8dcb0b9987033d0349a6ecf528014a9daa574787 Author: Max Romanov <max.romanov@nginx.com> Date: Thu Nov 5 00:04:59 2020 +0300 Python: request processing in multiple threads. One of things this did was to create a new asyncio event loop in each thread using asyncio.new_event_loop(). It's perhaps worth noting that all these asyncio.* functions are Python functions that we call from the C code in Unit. Then we had this commit commit f27fbd9b4d2bdaddf1e7001d0d0bc5586ba04cd4 Author: Max Romanov <max.romanov@nginx.com> Date: Tue Jul 20 10:37:54 2021 +0300 Python: using default event_loop for main thread for ASGI. This changed things so that Unit calls asyncio.get_event_loop() in the _main_ thread (but still calls asyncio.new_event_loop() in the other threads). asyncio.get_event_loop() up until recently would either return an already running event loop or return a newly created one. This was done for $reasons that the commit message and GitHub issue #560 hint at. But the intimation is that there can already be an event loop running from the application (I assume it's referring to the users application) at this point and if there is we should use it. Now for the Python side of things. On the main branch we had commit 172c0f2752d8708b6dda7b42e6c5a3519420a4e8 Author: Serhiy Storchaka <storchaka@gmail.com> Date: Sun Apr 25 13:40:44 2021 +0300 bpo-39529: Deprecate creating new event loop in asyncio.get_event_loop() (GH-23554) This commit began the deprecating of asyncio.get_event_loop(). commit fd38a2f0ec03b4eec5e3cfd41241d198b1ee555a Author: Serhiy Storchaka <storchaka@gmail.com> Date: Tue Dec 6 19:42:12 2022 +0200 gh-93453: No longer create an event loop in get_event_loop() (#98440) This turned asyncio.get_event_loop() into a RuntimeError _if_ there isn't a current event loop. commit e5bd5ad70d9e549eeb80aadb4f3ccb0f2f23266d Author: Serhiy Storchaka <storchaka@gmail.com> Date: Fri Jan 13 14:40:29 2023 +0200 gh-100160: Restore and deprecate implicit creation of an event loop (GH-100410) This re-creates the event loop if there wasn't one and emits a deprecation warning. After at least the last two commits Unit no longer works with the Python _main_ branch. Meanwhile on the 3.11 branch we had commit 3fae04b10e2655a20a3aadb5e0d63e87206d0c67 Author: Serhiy Storchaka <storchaka@gmail.com> Date: Tue Dec 6 17:15:44 2022 +0200 [3.11] gh-93453: Only emit deprecation warning in asyncio.get_event_loop when a new event loop is created (#99949) which is what caused our breakage, though perhaps unintentionally as we get the following traceback Traceback (most recent call last): File "/usr/lib64/python3.11/asyncio/events.py", line 676, in get_event_loop f = sys._getframe(1) ^^^^^^^^^^^^^^^^ ValueError: call stack is not deep enough 2023/01/18 02:46:10 [alert] 0#180279 [unit] Python failed to call 'asyncio.get_event_loop' However, regardless, it is clear we need to stop using asyncio.get_event_loop(). One option is to switch to the higher level asyncio.run() API, however that is a rather large change. 
This commit takes the simpler approach of using asyncio.get_running_loop() (which it seems get_event_loop() will eventually be an alias of) in the _main_ thread to return the currently running event loop, or if there is no current event loop, it will call asyncio.new_event_loop() to return a newly created event loop. I believe this mimics the current behaviour. In my testing get_event_loop() seemed to always return a newly created loop, as when just calling get_running_loop() it would return NULL and we would fail out. When running two processes each with 2 threads we would get the following loops with Python 3.11.0 and unpatched Unit <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> and with Python 3.11.1 and a patched Unit we would get <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> <_UnixSelectorEventLoop running=False closed=False debug=False> Tested-by: Rafał Safin <rafal.safin12@gmail.com> Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
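In pure Python terms, the fallback logic described here amounts to the following sketch (Unit does this through the Python C API; the helper name is illustrative):

    import asyncio


    def get_or_create_event_loop():
        try:
            # Succeeds only when called from a thread with a running loop.
            return asyncio.get_running_loop()
        except RuntimeError:
            # No running loop in this thread: create one, as the implicit
            # behaviour of get_event_loop() used to.
            return asyncio.new_event_loop()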
2023-02-07  Python: ASGI: Factor out event loop creation to its own function.  [Andrew Clayton, 1 file, -21/+35]
This is a preparatory patch that factors out the asyncio event loop creation code from nxt_python_asgi_ctx_data_alloc() into its own function, to facilitate being called multiple times. This a part of the work to move away from using the asyncio.get_event_loop() function due to it no longer creating event loops if there wasn't one running. See the following commit for the gory details. Reviewed-by: Alejandro Colomar <alx@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-31  Added default values for pathnames.  [Alejandro Colomar, 1 file, -15/+16]
This allows one to simply run `./configure` and expect it to produce sane defaults for an install. Previously, without specifying `--prefix=...`, `make install` would simply fail, recommending to set `--prefix` or `DESTDIR`, but that recommendation was incomplete at best, since it didn't set many of the subdirs needed for a good organization. Setting `DESTDIR` was even worse, since that shouldn't even affect an installation (it is required to be transparent to the installation). /usr/local is the historic Unix standard path to use for installations from source made manually by the admin of the system. Some package managers (Homebrew, I'm looking specifically at you) have abused that path to install their things, but 1) it's not our fault that someone else incorrectly abuses that path (and they seem to be fixing it for newer archs; e.g., they started using /opt/homebrew for Apple Silicon), 2) there's no better path than /usr/local, 3) we still allow changing it for systems where this might not be the desired path (MacOS Intel with hombrew), and 4) it's _the standard_. See a related conversation with Ingo (OpenBSD maintainer): On 7/27/22 16:16, Ingo Schwarze wrote: > Hi Alejandro, [...] > > Alejandro Colomar wrote on Sun, Jul 24, 2022 at 07:07:18PM +0200: >> On 7/24/22 16:57, Ingo Schwarze wrote: >>> Alejandro Colomar wrote on Sun, Jul 24, 2022 at 01:20:46PM +0200: > >>>> /usr/local is for sysadmins to build from source; > >>> Doing that is *very* strongly discouraged on OpenBSD. > >> I guess that's why the directory was reused in the BSDs to install ports >> (probably ports were installed by the sysadmin there, and by extension, >> ports are now always installed there, but that's just a guess). > > Maybe. In any case, the practice of using /usr/local for packages > created from ports is significantly older than the recommendation > to refrain from using upstream "make install" outside the ports > framework. > > * The FreeBSD ports framework was started by Jordan Hubbard in 1993. > * The ports framework was ported from FreeBSD to OpenBSD > by Niklas Hallqvist in 1996. > * NetBSD pkgsrc was forked from FreeBSD ports by Alistair G. Crooks > and Hubert Feyrer in 1997. > > I failed to quickly find Jordan's original version, but rev. 1.1 > of /usr/ports/infrastructure/mk/bsd.port.mk in OpenBSD (dated Jun 3 > 22:47:10 1996 UTC) already said > > LOCALBASE ?= /usr/local > PREFIX ?= ${LOCALBASE} > [...] >> I had a discussion in NGINX Unit about it, and >> the decission for now has been: "support prefix=/usr/local for default >> manual installation through the Makefile, and let BSD users adjust to >> their preferred path". > > That's an *excellent* solution for the task, thanks for doing it > the right way. By setting PREFIX=/usr/local by default in the > upstream Makefile, you are minimizing the work for *BSD porters. > > The BSD ports frameworks will typically run the upstreak "make install" > with the variable DESTDIR set to a custom value, for example > > DESTDIR=/usr/ports/pobj/groff-1.23.0/fake-amd64 > > so if the upstream Makefile sets PREFIX=/usr/local , > that's perfect, everything gets installed to the right place > without an intervention by the person doing the porting. > > Of course, if the upstream Makefile would use some other PREFIX, > that would not be a huge obstacle. All we have to do in that case > is pass the option --prefix=/usr/local to the ./configure script, > or something equivalent if the software isn't using GNU configure. 
> >> We were concerned that we might get collisions >> with the BSD port also installing in /usr/local, but that's the least >> evil (and considering BSD users don't typically run `make install`, it's >> not so bad). > > It's not bad at all. It's perfect. > > Of course, if a user wants to install *without* the ports framework, > they have to provide their own --prefix. But that's not an issue > because it is easy to do, and installing without a port is discouraged > anyway. === Directory variables should never contain a trailing slash (I've learned that the hard way, where some things would break unexpectedly). Especially, make(1) is likely to have problems when things have double slashes or a trailing slash, since it treats filenames as text strings. I've removed the trailing slash from the prefix, and added it to the derivate variables just after the prefix. pkg-config(1) also expects directory variables to have no trailing slash. === I also removed the code that would set variables as depending on the prefix if they didn't start with a slash, because that is a rather non-obvious behavior, and things should not always depend on prefix, but other dirs such as $(runstatedir), so if we keep a similar behavior it would be very unreliable. Better keep variables intact if set, or use the default if unset. === Print the real defaults for ./configure --help, rather than the actual values. === I used a subdirectory under the standard /var/lib for NXT_STATE, instead of a homemade "state" dir that does the same thing. === Modified the Makefile to create some dirs that weren't being created, and also remove those that weren't being removed in uninstall, probably because someone forgot to add them. === Add new options for setting the new variables, and rename some to be consistent with the standard names. Keep the old ones at configuration time for compatibility, but mark them as deprecated. Don't keep the old ones at exec time. === A summary of the default config is: Unit configuration summary: bin directory: ............. "/usr/local/bin" sbin directory: ............ "/usr/local/sbin" lib directory: ............. "/usr/local/lib" include directory: ......... "/usr/local/include" man pages directory: ....... "/usr/local/share/man" modules directory: ......... "/usr/local/lib/unit/modules" state directory: ........... "/usr/local/var/lib/unit" tmp directory: ............. "/tmp" pid file: .................. "/usr/local/var/run/unit/unit.pid" log file: .................. "/usr/local/var/log/unit/unit.log" control API socket: ........ "unix:/usr/local/var/run/unit/control.unit.sock" Link: <https://www.gnu.org/prep/standards/html_node/Directory-Variables.html> Link: <https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html> Reviewed-by: Artem Konev <a.konev@f5.com> Reviewed-by: Andrew Clayton <a.clayton@nginx.com> Tested-by: Andrew Clayton <a.clayton@nginx.com> Reviewed-by: Konstantin Pavlov <thresh@nginx.com> Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-01-27  PHP: Implement better error handling.  [Andrew Clayton, 1 file, -5/+40]
Previously the PHP module would produce one of four status codes 200 OK 301 Moved Permanently 500 Internal Server Error 503 Service Unavailable 200 for successful requests, 301 for cases where the url was a directory without a trailing '/', 500 for bad PHP or non-existing PHP file and 503 for all other errors. With this commit we now handle missing files and directories, returning 404 Not Found and files and directories that don't allow access, returning 403 Forbidden. We do these checks in two places, when we check if we should do a directory redirect (bar -> bar/) and in the nxt_php_execute() function. One snag with the latter is that the php_execute_script() function only returns success/failure (no reason). However while it took a zend_file_handle structure with the filename of the script to run, we can instead pass through an already opened file-pointer (FILE *) via that structure. So we can try opening the script ourselves and do the required checks before calling php_execute_script(). We also make use of the zend_stream_init_fp() function that initialises the zend_file_handle structure if it's available otherwise we use our own version. This is good because the zend_file_handle structure has changed over time and the zend_stream_init_fp() function should change with it. Closes: <https://github.com/nginx/unit/issues/767> Reviewed-by: Alejandro Colomar <alx@nginx.com> Cc: Andrei Zeliankou <zelenkov@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-27  PHP: Simplify ctx->script_filename.start in nxt_php_execute().  [Andrew Clayton, 1 file, -4/+5]
Create a const char *filename variable to hold ctx->script_filename.start, which is a much more manageable name and will negate the need for any more casting in the following commit when we switch to using a FILE * instead of a filename in php_execute_script(). Reviewed-by: Alejandro Colomar <alx@nginx.com> Cc: Andrei Zeliankou <zelenkov@nginx.com> Signed-off-by: Andrew Clayton <a.clayton@nginx.com>