|
This commit fixed the njs memory leak that happened during config validation, config updates, and HTTP requests.
|
|
While at it, fixed changelog generation for Python 3.10 as well.
|
|
|
|
|
|
|
|
This patch gives users the option to set a `"prefix"` attribute
for Python applications, either at the top level or for specific
`"target"`s. If the attribute is present, the value of `"prefix"`
must be a string beginning with `"/"`. If the value of the `"prefix"`
attribute is longer than 1 character and ends in `"/"`, the
trailing `"/"` is stripped.
The purpose of the `"prefix"` attribute is to set the `SCRIPT_NAME`
context value for WSGI applications and the `root_path` context
value for ASGI applications, allowing applications to properly route
requests regardless of the path that the server uses to expose the
application.
The context value is only set if the request's URL path begins with
the value of the `"prefix"` attribute. In all other cases, the
`SCRIPT_NAME` or `root_path` values are not set. In addition, for
WSGI applications, the value of `"prefix"` will be stripped from
the beginning of the request's URL path before it is sent to the
application.
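For illustration, a minimal config sketch (the application name, path
and module here are hypothetical):

    {
        "applications": {
            "blog": {
                "type": "python",
                "path": "/www/blog",
                "module": "wsgi",
                "prefix": "/blog"
            }
        }
    }

With such a config, a request for /blog/posts would reach the WSGI
application with SCRIPT_NAME set to "/blog" and with "/blog" stripped
from the beginning of the request's URL path.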
Reviewed-by: Andrei Zeliankou <zelenkov@nginx.com>
Reviewed-by: Artem Konev <artem.konev@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
This hooks the cgroup support up to the config system so it can actually
be used.
To make use of this in Unit, a new "cgroup" section has been added to the
isolation configuration, e.g.
"applications": {
"python": {
"type": "python",
"processes": 5,
"path": "/opt/unit/unit-cgroup-test/",
"module": "app",
"isolation": {
"cgroup": {
"path": "app/python"
}
}
}
}
There are two ways to specify the path: relative, like the above
(without a leading '/'), and absolute (with a leading '/').
In the above case the "python" application is placed into its own cgroup
under CGROUP_ROOT/<main unit process cgroup>/app/python. Whereas if you
specified, say,
    "path": "/unit/app/python"
then the python application would be placed under
CGROUP_ROOT/unit/app/python
The first option allows you to easily take advantage of any resource
limits that have already been configured for Unit.
With the second method (absolute pathname), if you know of an already
existing cgroup where you'd like to place it, you can, e.g.
    "path": "/system.slice/unit/python"
where system.slice has already been created by systemd and may already
have some overall system limits applied, which would also apply to Unit.
Limits apply down the hierarchy; lower groups can't exceed the limits of
the groups above them.
So what does this actually look like? Let's take the unit-calculator
application[0] and have each of its applications placed into their own
cgroup. If we give each application a new section like
"isolation": {
"cgroup": {
"path": "/unit/unit-calculator/add"
}
}
changing the path for each one, we can visualise the result with the
systemd-cgls command, e.g.
│ └─session-5.scope (#4561)
│ ├─ 6667 sshd: andrew [priv]
│ ├─ 6684 sshd: andrew@pts/0
│ ├─ 6685 -bash
│ ├─ 12632 unit: main v1.28.0 [/opt/unit/sbin/unitd --control 127.0.0.1:808>
│ ├─ 12634 unit: controller
│ ├─ 12635 unit: router
│ ├─ 13550 systemd-cgls
│ └─ 13551 less
├─unit (#4759)
│ └─unit-calculator (#5037)
│ ├─subtract (#5069)
│ │ ├─ 12650 unit: "subtract" prototype
│ │ └─ 12651 unit: "subtract" application
│ ├─multiply (#5085)
│ │ ├─ 12653 unit: "multiply" prototype
│ │ └─ 12654 unit: "multiply" application
│ ├─divide (#5101)
│ │ ├─ 12671 unit: "divide" prototype
│ │ └─ 12672 node divide.js
│ ├─sqroot (#5117)
│ │ ├─ 12679 unit: "sqroot" prototype
│ │ └─ 12680 /home/andrew/src/unit-calculator/sqroot/sqroot
│ └─add (#5053)
│ ├─ 12648 unit: "add" prototype
│ └─ 12649 unit: "add" application
We used an absolute path, so the cgroups will be created relative to the
main cgroupfs mount, e.g. /sys/fs/cgroup.
We can see that the main unit processes are in the same cgroup as the
shell from which they were started; by default child processes are placed
into the same cgroup as the parent.
Then we can see that each application has been placed into its own
cgroup under /sys/fs/cgroup.
Taking another example of a simple 5-process python application, with
"isolation": {
"cgroup": {
"path": "app/python"
}
}
Here we have specified a relative path and thus the python application
will be placed below the existing cgroup that contains the main unit
process, e.g.
│ │ │ ├─app-glib-cinnamon\x2dcustom\x2dlauncher\x2d3-43951.scope (#90951)
│ │ │ │ ├─ 988 unit: main v1.28.0 [/opt/unit/sbin/unitd --no-daemon]
│ │ │ │ ├─ 990 unit: controller
│ │ │ │ ├─ 991 unit: router
│ │ │ │ ├─ 43951 xterm -bg rgb:20/20/20 -fg white -fa DejaVu Sans Mono
│ │ │ │ ├─ 43956 bash
│ │ │ │ ├─ 58828 sudo -i
│ │ │ │ ├─ 58831 -bash
│ │ │ │ └─app (#107351)
│ │ │ │ └─python (#107367)
│ │ │ │ ├─ 992 unit: "python" prototype
│ │ │ │ ├─ 993 unit: "python" application
│ │ │ │ ├─ 994 unit: "python" application
│ │ │ │ ├─ 995 unit: "python" application
│ │ │ │ ├─ 996 unit: "python" application
│ │ │ │ └─ 997 unit: "python" application
[0]: <https://github.com/lcrilly/unit-calculator>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
|
|
|
|
Starting from Node.js v18.6.0, the return value from all hooks must have
the "shortCircuit: true" option specified. For more information see:
https://github.com/nodejs/node/commit/10bcad5c6e
|
|
Python 3.8 added a new Python initialisation configuration API[0].
Python 3.11 marked the old API as deprecated, resulting in the following
compiler warnings, which we treat as errors, failing the build:
src/python/nxt_python.c: In function ‘nxt_python_start’:
src/python/nxt_python.c:130:13: error: ‘Py_SetProgramName’ is deprecated [-Werror=deprecated-declarations]
130 | Py_SetProgramName(nxt_py_home);
| ^~~~~~~~~~~~~~~~~
In file included from /opt/python-3.11/include/python3.11/Python.h:94,
from src/python/nxt_python.c:7:
/opt/python-3.11/include/python3.11/pylifecycle.h:37:38: note: declared here
37 | Py_DEPRECATED(3.11) PyAPI_FUNC(void) Py_SetProgramName(const wchar_t *);
| ^~~~~~~~~~~~~~~~~
src/python/nxt_python.c:134:13: error: ‘Py_SetPythonHome’ is deprecated [-Werror=deprecated-declarations]
134 | Py_SetPythonHome(nxt_py_home);
| ^~~~~~~~~~~~~~~~
/opt/python-3.11/include/python3.11/pylifecycle.h:40:38: note: declared here
40 | Py_DEPRECATED(3.11) PyAPI_FUNC(void) Py_SetPythonHome(const wchar_t *);
| ^~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
We actually have a few config scenarios: Python < 3, Python >= 3.0 but
< 3.8, and Python >= 3.8; for Python 3 we have two configs, where we
select one based on the virtual environment setup.
Factor out the Python 3 config initialisation into its own function. We
actually create two functions, one for Python 3.8+ and one for older
Python 3, and pick the right function to use at build time.
The new API also has error checking (where the old API doesn't) which we
handle.
[0]: https://peps.python.org/pep-0587/
Closes: <https://github.com/nginx/unit/issues/710>
[ Andrew: Expanded upon patch from @sandeep-gh ]
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
|
|
|
|
Both @lucatacconi & @mwoodpatrick reported what appears to be the same
issue on GitHub. Namely that when using the PHP language module and
trying to access a URL that is a directory but without specifying the
trailing '/', they were getting a '503 Service Unavailable' error.
Note: This is when _not_ using the 'script' option.
E.g. with the following config
{
    "listeners": {
        "[::1]:8080": {
            "pass": "applications/php"
        }
    },
    "applications": {
        "php": {
            "type": "php",
            "root": "/var/tmp/unit-php"
        }
    }
}
and with a directory path of /var/tmp/unit-php/foo containing an
index.php, you would see the following
$ curl http://localhost/foo
<title>Error 503</title>
Error 503
However
$ curl http://localhost/foo/
would work and serve up the index.php
This commit fixes the above so you get the desired behaviour without
specifying the trailing '/', by doing the following:
1) If the URL doesn't end in .php and doesn't have a trailing '/',
then check if the requested path is a directory.
2) If it is a directory, then create a 301 redirect pointing to it.
This matches the behaviour of the likes of nginx, Apache and
lighttpd.
This also matches the behaviour of the "share" action in Unit.
This doesn't affect the behaviour of the 'script' option, which bypasses
the nxt_php_dynamic_request() function.
This also adds a couple of tests to test/test_php_application.py to
ensure this continues to work.
Closes: <https://github.com/nginx/unit/issues/717>
Closes: <https://github.com/nginx/unit/issues/753>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
|
|
Link: <https://www.openssl.org/docs/man3.0/man7/migration_guide.html>
Cc: Andy Postnikov <apostnikov@gmail.com>
Cc: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Remi Collet <remi@remirepo.net>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
If we don't call SSL_CTX_set_cipher_list(), OpenSSL uses the system's
default cipher list.
Link: <https://fedoraproject.org/wiki/Changes/CryptoPolicy>
Link: <https://docs.fedoraproject.org/en-US/packaging-guidelines/CryptoPolicies/>
Link: <https://www.redhat.com/en/blog/consistent-security-crypto-policies-red-hat-enterprise-linux-8>
Signed-off-by: Remi Collet <remi@remirepo.net>
Acked-by: Andrei Belov <defan@nginx.com>
[ alx: add changelog and tweak commit message ]
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
|
|
This commit removed the $uri auto-append for the "share" option that was
introduced in rev be6409cdb028.
The main reason is that it causes problems when preparing Unit configurations
to be loaded at startup from the state directory, e.g. with Docker: a valid
conf.json file with $uri references will end up with $uri$uri due to the
auto-append.
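For illustration, a route that previously relied on the auto-append now
needs the variable spelled out explicitly; a minimal sketch (the share
path is hypothetical):

    {
        "routes": [
            {
                "action": {
                    "share": "/www/static$uri"
                }
            }
        ]
    }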
|
|
PHP 8.2 changed the prototype of the function, removing the last
parameter.
Signed-off-by: Remi Collet <remi@remirepo.net>
Cc: Timo Stark <t.stark@nginx.com>
Cc: George Peter Banyard <girgias@php.net>
Tested-by: Andy Postnikov <apostnikov@gmail.com>
Acked-by: Andy Postnikov <apostnikov@gmail.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
pthread_mutex_init(3) may fail for several reasons, and failing to
check for that causes Undefined Behavior when those errors happen. Add
the missing checks, and correctly deinitialize previously created
resources before exiting from the API.
Signed-off-by: Alejandro Colomar <alx@nginx.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Reviewed-by: Zhidao HONG <z.hong@f5.com>
|
|
|
|
Ruby applications would fail to start if they were using rack v3:
2022/09/28 15:48:46 [alert] 0#80912 [unit] Ruby: Failed to parse rack script
2022/09/28 15:48:46 [notice] 80911#80911 app process 80912 exited with code 1
This was due to a change in the rack API.
Rack V2

def self.load_file(path, opts = Server::Options.new)
  ...
  cfgfile.sub!(/^__END__\n.*\Z/m, '')
  app = new_from_string cfgfile, path
  return app, options
end

Rack V3

def self.load_file(path)
  ...
  return new_from_string(config, path)
end
This patch handles _both_ the above APIs by correctly handling the cases
where we do and don't get an array returned from
nxt_ruby_rack_parse_script().
Closes: <https://github.com/nginx/unit/issues/755>
Tested-by: Andrew Clayton <a.clayton@nginx.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
[ Andrew: Patch by Zhidao, commit message by me with input from Zhidao ]
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
|
|
When proxying is used, the number of accepted connections is not counted,
which also results in a wrong number of active connections.
|
|
This fix adds support for cookie values containing the '=' character.
This is related to PR #756 on GitHub.
Thanks to changxiaocui.
|
|
|
|
|
|
|
|
|
|
In nxt_unit_create() we could leak a mutex created in
nxt_unit_ctx_init().
This could happen if nxt_unit_ctx_init() succeeded but we later bailed
out of nxt_unit_create(): we would destroy the mutex created in
nxt_unit_create() but not the one created in nxt_unit_ctx_init().
Reorder things so that the call to nxt_unit_ctx_init() is made after all
the other checks, so that if one of them fails we don't leak the mutex it
created.
Co-developed-by: Andrew Clayton <a.clayton@f5.com>
Signed-off-by: Andrew Clayton <a.clayton@f5.com>
Signed-off-by: Alex Colomar <a.colomar@f5.com>
|
|
|
|
|
|
|
|
As was reported[0] by @travisbell on GitHub, when running unit from the
terminal in the foreground and hitting ^C to exit it, the ruby
application processes would segfault if they were using threads.
It's not 100% clear where the actual problem lies, but it _looks_ like
it may be in ruby.
The simplest way to deal with this for now is to just ignore SIGINT in
the ruby application processes. Unit will still receive and handle it,
cleanly shutting everything down.
For people who want to handle SIGINT in their ruby application running
under unit they can still trap SIGINT and it will override the ignore.
[0]: https://github.com/nginx/unit/issues/562#issuecomment-1223229585
Closes: https://github.com/nginx/unit/issues/562
|
|
The previous commit added/fixed support for abstract Unix domain sockets
on Linux with a leading '@' or '\0'. To be consistent across platforms,
treat those prefixes as markers for abstract sockets on all platforms,
and fail if abstract sockets are not supported by the platform.
That will avoid mistakes when copying a config file from a Linux system
and using it on a non-Linux one, which would otherwise surprisingly
create a normal socket.
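For illustration, a minimal listener sketch using the '@' prefix for an
abstract socket (the socket name is hypothetical, and the "unix:"
listener address form is assumed):

    {
        "listeners": {
            "unix:@example-unit": {
                "pass": "routes"
            }
        },
        "routes": [
            {
                "action": {
                    "return": 200
                }
            }
        ]
    }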
|
|
Unix domain sockets are normally backed by files in the
filesystem. This has historically been problematic when closing
and opening again such sockets, since SO_REUSEADDR is ignored for
Unix sockets (POSIX left the behavior of SO_REUSEADDR as
implementation-defined, and most --if not all-- implementations
decided to just ignore this flag).
Many solutions are available for this problem, but all of them
have important caveats:
- unlink(2) the file when it's not needed anymore.
This is not easy, because the process that controls the fd may
not be the same process that created the file, and may not have
file permissions to remove it.
Further solutions can be applied to that caveat:
- unlink(2) the file right after creation.
This will remove the pathname from the filesystem without
closing the socket (it will continue to live until the last fd
is closed). This is not useful for us, since we need the
pathname of the socket as its interface.
- chown(2) or chmod(2) the directory that contains the socket.
For removing a file from the filesystem, a process needs
write permissions in the containing directory. We could
put sockets in dummy directories that can be chown(2)ed to
nobody. This could be dangerous, though, as we don't control
the socket names. It is our users who configure the socket
name in their configuration, and so it's easy that they don't
understand the many implications of not choosing an appropriate
socket pathname. A user could unknowingly put the socket in a
directory that is not supposed to be owned by user nobody, and
if we blindly chown(2) or chmod(2) the directory, we could be
creating a big security hole.
- Ask the main process to remove the socket.
This would require a very complex communication mechanism with
the main process, which is not impossible, but let's avoid it
if there are simpler solutions.
- Give the child process the CAP_DAC_OVERRIDE capability.
That is one of the most powerful capabilities. A process with
that capability can be considered root for most practical
aspects. Even if the capability is disabled for most of the
lifetime of the process, there's a slight chance that a
malicious actor could activate it and then easily do serious
damage to the system.
- unlink(2) the file right before calling bind(2).
This is dangerous because another process (for example, another
running instance of unitd(8)), could be using the socket, and
removing the pathname from the filesystem would be problematic.
To do this correctly, a lot of checks should be added before the
actual unlink(2), which is error-prone, and difficult to do
correctly, and atomically.
- Use abstract-namespace Unix domain sockets.
This is the simplest solution, as it only requires accepting a
slightly different syntax (basically a @ prefix) for the socket
name, to transform it into a string starting with a null byte
('\0') that the kernel can understand. The patch is minimal.
Since abstract sockets live in an abstract namespace, they don't
create files in the filesystem, so there's no need to remove
them later. The kernel removes the name when the last fd to it
has been closed.
One caveat is that only Linux currently supports this kind of
Unix sockets. Of course, a solution to that could be to ask
other kernels to implement such a feature.
Another caveat is that filesystem permissions can't be used to
control access to the socket file (since, of course, there's no
file). Anyone knowing the socket name can access it. The
only method to control access to it is by using
network_namespaces(7). Since in unitd(8) we're using 0666 file
sockets, abstract sockets should be no more insecure than that
(anyone can already read/write to the listener sockets).
- Ask the kernel to implement a simpler way to unlink(2) socket
files when they are not needed anymore. I've suggested that to
the <linux-fsdevel@vger.kernel.org> mailing list, in:
<lore.kernel.org/linux-fsdevel/0bc5f919-bcfd-8fd0-a16b-9f060088158a@gmail.com/T>
In this commit, I decided to go for the easiest/simplest solution,
which is abstract sockets. In fact, we already had partial
support. This commit only fixes a small bug in the existing
code so that abstract Unix sockets work:
- Don't chmod(2) the socket if it's an abstract one.
This fixes the creation of abstract sockets, but doesn't make them
usable, since we produce them with a trailing '\0' in their name.
That will be fixed in the following commit.
This closes #669 issue on GitHub.
|
|
Registering an isolated PID in the global PID hash is wrong
because it can be duplicated. Isolated processes are stored only
in the children list until the response for the WHOAMI message is
processed and the global PID is discovered.
To remove isolated siblings, a pointer to the children list is
introduced in the nxt_process_init_t struct.
This closes #633 issue on GitHub.
|
|
|
|
|
|
This closes #562 issue on GitHub.
|
|
Having SCRIPT_NAME set to the basename of the script pathname was
incorrect. While we don't have something more accurate, the best thing
to do is to leave it empty (which should be the right thing most of the
time).
This closes #715 issue on GitHub.
The bug was introduced in git commit
0032543fa65f454c471c968998190b027c1ff270
'Ruby: added the Rack environment parameter "SCRIPT_NAME".'.
|
|
When fixing conflicts in the changelog, a line was removed by accident.
Signed-off-by: Alejandro Colomar <alx.manpages@gmail.com>
|
|
This closes #645 issue on GitHub.
(Also moved a changelog line that was misplaced in a previous commit.)
|
|
Allow $dollar (or ${dollar}) to translate to a literal $ to allow
support for sub-delimiters in URIs.
It is possible to have URLs like
https://example.com/path/15$1588/9925$2976.html
and thus it would be useful to be able to specify them in various bits
of the unit config such as the location setting.
However this hadn't been possible due to $ being used to denote
variables for substitution, e.g. $host.
As was noted in the GitHub issue below, @VBart suggested using $sign to
represent a literal $; however, I feel $dollar is more appropriate, so
that we have a variable named after the thing it represents. Also,
@tippexs found[0] that &dollar is used in HTML to represent a $, so
there is some related precedent.
(The other idea, to use $$, was rejected in my original pull-request[1]
for this issue.)
This means the above URL could be specified as
https://example.com/path/15${dollar}1588/9925${dollar}2976.html
in the unit config.
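For illustration, a minimal sketch of such a config (the route match and
redirect are hypothetical):

    {
        "routes": [
            {
                "match": {
                    "uri": "/old/*"
                },
                "action": {
                    "return": 301,
                    "location": "https://example.com/path/15${dollar}1588/9925${dollar}2976.html"
                }
            }
        ]
    }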
This is done by adding a variable called 'dollar', which is loaded into
the variables hash table and translates into a literal $.
This is then handled in nxt_var_next_part() where variables are parsed
for lookup and $dollar is set for substitution by a literal '$'. Actual
variable substitution happens in nxt_var_query_finish().
[0]: https://github.com/nginx/unit/pull/693#issuecomment-1130412323
[1]: https://github.com/nginx/unit/pull/693
Closes: https://github.com/nginx/unit/issues/675
|
|
This commit adds the following variables:
$remote_addr, $time_local, $request_line, $status,
$body_bytes_sent, $header_referer, $header_user_agent.
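For illustration, a sketch of how these could be used, assuming the
configurable access log "format" option (the log path is hypothetical):

    {
        "access_log": {
            "path": "/var/log/unit/access.log",
            "format": "$remote_addr - - [$time_local] \"$request_line\" $status $body_bytes_sent \"$header_referer\" \"$header_user_agent\""
        }
    }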
|
|
This commit adds the variables $arg_NAME, $header_NAME, and $cookie_NAME.
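For illustration, a sketch using all three in a redirect target (the
names and URL are hypothetical):

    {
        "routes": [
            {
                "action": {
                    "return": 302,
                    "location": "https://example.com/?page=$arg_page&ua=$header_user_agent&session=$cookie_session"
                }
            }
        ]
    }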
|
|
Closes: <https://github.com/nginx/unit/issues/676>
|
|
The code for finding the extension made a few assumptions that are
no longer true. It didn't account for pathnames that don't contain
a '/', including the empty string and the NULL string. That
code was used with "share", which always had a '/', but now it's
also used with "index", which should not have a '/' in it.
This fix works by limiting the search to the beginning of the
string, so that if no '/' is found in it, it doesn't continue
searching before the beginning of the string.
This also happens to work for NULL. It is technically Undefined
Behavior, as we rely on `NULL + 0 == NULL` and `NULL - NULL == 0`.
But that is the only sane behavior for an implementation, and all
existing POSIX implementations will Just Work for this code.
Relying on this UB is useful, because we don't need to add an
explicit check for NULL, and therefore we have faster code.
Although the current code can't have a NULL, I expect that when we
add support for variables in the index, it will be NULL in some
cases.
Link: <https://stackoverflow.com/q/67291052/6872717>
The same code seems to be defined behavior in C++, and compilers
normally share the implementation between C and C++ for these cases,
so it is really unlikely to cause trouble.
Link: <https://stackoverflow.com/q/59409034/6872717>
|
|
|
|
|
|
|
|
|
|
Before Node.js v16.14.0 the "format" value in defaultResolve
was ignored, so the error was hidden. For more information see:
https://github.com/nodejs/node/pull/40980
|