|
This closes #363 issue on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
|
|
|
|
This closes #371 issue on GitHub.
|
|
For backward compatibility, the Linux capabilities macros expose v1 semantics
(32-bit) by default. We probe the version at runtime (because of pre-compiled
binaries), but the kernel syscall API is conservative and doesn't return a
64-bit capability version if the input version is v1.
This patch suppresses the kernel > 5.0 dmesg log below:
capability: warning: 'unitd' uses 32-bit capabilities (legacy support in use)
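A minimal sketch of the runtime-probing idea, not the actual nxt_capability
code: passing an unsupported version to capget(2) makes the kernel report its
preferred capability version instead of silently accepting v1.

    /* Sketch only: probe the kernel's preferred capability version by
     * passing an unsupported version (0) to capget(2); the call fails
     * with EINVAL but fills in hdr.version with the preferred value. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/capability.h>

    static unsigned int
    probe_cap_version(void)
    {
        struct __user_cap_header_struct  hdr;

        hdr.version = 0;       /* deliberately unsupported */
        hdr.pid = 0;           /* current process */

        (void) syscall(SYS_capget, &hdr, NULL);

        return hdr.version;    /* e.g. _LINUX_CAPABILITY_VERSION_3 */
    }

    int
    main(void)
    {
        printf("preferred capability version: 0x%x\n", probe_cap_version());
        return 0;
    }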
|
|
|
|
|
|
The ServerResponse.write() method tries to write the data buffer using libunit
and stores buffers to write in a server-wide output queue, which is
processed in response to the SHM_ACK message from the router.
As a side effect, the 'drain' event is implemented and the socket.writable
flag reflects the current state.
|
|
- OOSM (out of shared memory). Sent by an application process to the router
when the application reaches the limit of allocated shared memory and
needs more.
- SHM_ACK. Sent by the router to the application when the application's shared
memory is released and the OOSM flag is enabled for the segment.
This implements blocking mode (the library waits for SHM_ACK in case of an
out-of-shared-memory condition and retries allocating the required amount of
memory) and non-blocking mode (the library notifies the application that
it's out of shared memory and returns control to the application module,
which sets up the output queue and processes SHM_ACK in the main message loop).
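A minimal sketch of the blocking mode described above; the helpers
app_shm_alloc(), send_oosm() and wait_shm_ack() are hypothetical placeholders,
not the actual libunit API.

    #include <stddef.h>

    /* Hypothetical helpers standing in for libunit internals. */
    extern void *app_shm_alloc(size_t size);
    extern void send_oosm(void);
    extern int wait_shm_ack(void);

    /* Blocking mode: if the shared memory segments are exhausted, request
     * more from the router and wait for SHM_ACK before retrying. */
    void *
    alloc_out_buf(size_t size)
    {
        void  *p;

        for ( ;; ) {
            p = app_shm_alloc(size);        /* try the existing segments */

            if (p != NULL) {
                return p;
            }

            send_oosm();                    /* notify the router: out of shm */

            if (wait_shm_ack() != 0) {      /* block until memory is released */
                return NULL;
            }
        }
    }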
|
|
|
|
The function unchains a buffer from the buffer chain (linked list).
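A generic sketch of the operation; the type and field names are illustrative,
not Unit's exact buffer structures.

    #include <stddef.h>

    typedef struct buf_s  buf_t;

    struct buf_s {
        buf_t  *next;
        /* ... payload fields ... */
    };

    /* Walk the chain and unlink the given buffer from it. */
    static void
    buf_unchain(buf_t **head, buf_t *b)
    {
        buf_t  **link;

        for (link = head; *link != NULL; link = &(*link)->next) {
            if (*link == b) {
                *link = b->next;
                b->next = NULL;
                break;
            }
        }
    }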
|
|
|
|
The current shared memory buffer implementation uses fixed-size memory blocks,
allocating at least 16384 bytes. When an application sends data in a large
number of small chunks, it makes sense to buffer them or use plain
memory buffers to improve performance and reduce the memory footprint.
This patch introduces a minimum size limit (1024 bytes) for shared
memory buffers.
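A hedged sketch of the resulting buffer choice; the helper names are
hypothetical, and only the 1024-byte limit comes from the patch description.

    #include <stddef.h>

    #define MIN_SHM_BUF_SIZE  1024            /* limit introduced by the patch */

    /* Hypothetical allocators standing in for the real buffer code. */
    extern void *plain_buf_alloc(size_t size);
    extern void *shm_buf_alloc(size_t size);

    static void *
    get_out_buf(size_t size)
    {
        if (size < MIN_SHM_BUF_SIZE) {
            /* Small chunks go into plain memory buffers. */
            return plain_buf_alloc(size);
        }

        /* Larger chunks keep using fixed-size shared memory blocks. */
        return shm_buf_alloc(size);
    }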
|
|
This patch includes packaging changes related to the file moves.
|
|
|
|
This is an optimization to avoid creating them at runtime on each request.
|
|
The setuid/setgid syscalls require root capabilities, but if the kernel
supports unprivileged user namespaces, the child process has the full
set of capabilities in the new namespace, so we can allow setting "user"
and "group" in such cases (this is a common security use case).
Tests were added to ensure the user gets meaningful error messages for
uid/gid mapping misconfigurations.
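A minimal standalone sketch of the underlying mechanism, not Unit's
implementation (error handling omitted): in an unprivileged user namespace the
process holds a full capability set, so setuid()/setgid() succeed for any id
mapped via uid_map/gid_map.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void
    write_file(const char *path, const char *data)
    {
        int  fd;

        fd = open(path, O_WRONLY);
        (void) write(fd, data, strlen(data));
        close(fd);
    }

    int
    main(void)
    {
        char   map[64];
        uid_t  euid = geteuid();
        gid_t  egid = getegid();

        unshare(CLONE_NEWUSER);                      /* new user namespace */

        write_file("/proc/self/setgroups", "deny");  /* required before gid_map */

        snprintf(map, sizeof(map), "0 %u 1", egid);
        write_file("/proc/self/gid_map", map);       /* map gid 0 -> our egid */

        snprintf(map, sizeof(map), "0 %u 1", euid);
        write_file("/proc/self/uid_map", map);       /* map uid 0 -> our euid */

        setgid(0);                                   /* allowed without root here */
        setuid(0);

        printf("uid=%u gid=%u inside the namespace\n", getuid(), getgid());
        return 0;
    }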
|
|
This is required to avoid include cycles, as some nxt_clone_* functions
depend on the credential structures, while nxt_process depends on the clone
structures.
|
|
Introduces the functions nxt_process_init_create() and
nxt_process_init_creds_set().
|
|
Now the nxt_user_groups_get() function uses getgrouplist(3) when available
(except on macOS, see below). On some platforms, getgrouplist() supports
a method of probing how many groups the user has, but the behavior is not
consistent. The method used here consists of optimistically trying to get up
to min(256, NGROUPS_MAX) groups; only if the returned ngroups exceeds the
original value do we make a second call. This method can block the main
process if LDAP/NIS+ is in use.
macOS has getgrouplist(3), but it's buggy: it doesn't update ngroups if the
value passed is smaller than the number of groups the user has. Some
projects (like the Go stdlib) call getgrouplist() in a loop, increasing ngroups
until it exceeds the number of groups the user belongs to, or fail when a limit
is reached. For performance reasons, this is to be avoided, so macOS is
handled by the fallback implementation.
The fallback implementation is the old Unit approach: it saves the main
process's groups (getgroups(2)), calls initgroups(3) to load the application's
groups in the main process, then does a second getgroups(2) to store the gids,
and restores the main process's groups at the end. Because of initgroups(3)'
call to setgroups(2), this method requires root capabilities. On macOS, which
has a small NGROUPS_MAX by default (16), it's not possible to restore the main
process's groups if the group list is larger; in that case this method falls
back again: the user_cred gids aren't stored, and the worker process calls
initgroups() itself, which may block for some time if LDAP/NIS+ is in use.
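A rough sketch of the probing strategy on platforms where getgrouplist(3)
behaves as described above; this is not the actual nxt_user_groups_get() code.

    #include <grp.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Ask for up to min(256, NGROUPS_MAX) groups first; getgrouplist()
     * returns -1 and updates ngroups only if the user has more than that,
     * in which case a second call with the larger buffer is made. */
    static gid_t *
    get_groups(const char *user, gid_t gid, int *count)
    {
        int    ngroups;
        long   max;
        gid_t  *groups;

        max = sysconf(_SC_NGROUPS_MAX);
        ngroups = (max > 0 && max < 256) ? (int) max : 256;

        groups = malloc(ngroups * sizeof(gid_t));
        if (groups == NULL) {
            return NULL;
        }

        if (getgrouplist(user, gid, groups, &ngroups) == -1) {
            /* The buffer was too small; ngroups now holds the real number. */
            free(groups);

            groups = malloc(ngroups * sizeof(gid_t));
            if (groups == NULL) {
                return NULL;
            }

            (void) getgrouplist(user, gid, groups, &ngroups);
        }

        *count = ngroups;
        return groups;
    }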
|
|
The reason for the change is that the req_app_link reference count
was incorrect if the application crashed at start; in this case,
the nxt_request_app_link_update_peer() function was never called.
This closes #332 issue on GitHub.
|
|
A quote from the Python 3 documentation:
| When interactive, stdout and stderr streams are line-buffered.
| Otherwise, they are block-buffered like regular text files.
As a result, if an exception occurred and PyErr_Print() was called, its output
could be buffered but not printed to the log for a while (ultimately, until
interpreter finalization). If the application process crashed shortly
afterwards, the backtrace was completely lost.
Buffering can be disabled by redefining the sys.stderr stream object.
However, interference with standard environment objects was deemed undesirable.
Instead, sys.stderr.flush() is called every time after printing exceptions.
A potential advantage here is that lines from backtraces won't be mixed
with other lines in the log.
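A hedged C-API sketch of the approach, not the exact Unit code: print the
pending exception, then flush sys.stderr so the traceback reaches the log
immediately.

    #include <Python.h>

    static void
    print_exception_and_flush(void)
    {
        PyObject  *err, *res;

        PyErr_Print();

        err = PySys_GetObject("stderr");         /* borrowed reference */
        if (err == NULL) {
            return;
        }

        res = PyObject_CallMethod(err, "flush", NULL);
        if (res == NULL) {
            PyErr_Clear();
            return;
        }

        Py_DECREF(res);
    }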
|
|
PyCallable_Check() doesn't produce errors.
The needless call was introduced in fdd6ed28e3b9.
|
|
PyObject_HasAttrString() is just a wrapper over PyObject_GetAttrString(),
while PyObject_CallMethod() calls it as the first step. As a result,
PyObject_GetAttrString() was called twice if close() was present.
To get rid of PyObject_HasAttrString() while keeping the same behaviour,
the PyObject_CallMethod() call has been decomposed into separate calls of
PyObject_GetAttrString() and PyObject_CallFunction().
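An illustrative sketch of the decomposed call sequence; the function name and
surrounding logic are assumptions, not the module's exact code.

    #include <Python.h>

    static void
    call_close(PyObject *obj)
    {
        PyObject  *close_fn, *res;

        close_fn = PyObject_GetAttrString(obj, "close");   /* single lookup */
        if (close_fn == NULL) {
            PyErr_Clear();                 /* no close() method: nothing to do */
            return;
        }

        res = PyObject_CallFunction(close_fn, NULL);
        if (res == NULL) {
            PyErr_Print();                 /* close() raised an exception */

        } else {
            Py_DECREF(res);
        }

        Py_DECREF(close_fn);
    }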
|
|
On success, PyObject_CallMethod() returns a new reference to
the result of the call, which previously got lost.
Also, error logging on failure was added.
The issue was introduced by b0148ec28c4d.
|
|
|
|
|
|
According to the documentation, PyObject_GetIter():
| Raises TypeError and returns NULL if the object cannot be iterated.
Previously, this exception wasn't printed or cleared and remained unhandled.
|
|
According to the documentation, PyIter_Next():
| If there are no remaining values, returns NULL with no exception set.
| If an error occurs while retrieving the item, returns NULL and passes
| along the exception.
Previously, this exception wasn't properly handled and the response was
finalized as successful.
This issue was introduced in b0148ec28c4d.
A check for PyErr_Occurred() located in the code below might print this
traceback or occasionally catch an exception from one of the two response
close() calls.
Although exceptions from the close() calls also need to be caught,
it's clear that this particular check wasn't supposed to do so. This is
a separate issue and will be fixed later.
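A sketch combining the PyObject_GetIter() and PyIter_Next() error handling
described above; it is illustrative, not the module's actual
response-iteration code.

    #include <Python.h>

    static int
    send_response_body(PyObject *body)
    {
        PyObject  *iterator, *item;

        iterator = PyObject_GetIter(body);
        if (iterator == NULL) {
            PyErr_Print();              /* TypeError: object is not iterable */
            return -1;
        }

        while ((item = PyIter_Next(iterator)) != NULL) {
            /* ... write the chunk to the client ... */
            Py_DECREF(item);
        }

        Py_DECREF(iterator);

        if (PyErr_Occurred()) {         /* PyIter_Next() failed, not just ended */
            PyErr_Print();
            return -1;
        }

        return 0;
    }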
|
|
A keep-alive connection is disabled if the upstream response length
differs from the value specified in the "Content-Length" field.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
While connect(2) states that a non-blocking connect should use EPOLLOUT:
EINPROGRESS
The socket is non-blocking and the connection cannot be completed
immediately. It is possible to select(2) or poll(2) for completion by
selecting the socket for writing. After select(2) indicates writability,
use getsockopt(2) to read the SO_ERROR option at level SOL_SOCKET to
determine whether connect() completed successfully (SO_ERROR is zero)
or unsuccessfully (SO_ERROR is one of the usual error codes listed here,
explaining the reason for the failure).
On connect error, Linux 2.6.32 (CentOS 6) may return EPOLLRDHUP, EPOLLERR,
EPOLLHUP, EPOLLIN, but not EPOLLOUT.
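A small sketch of the resulting check, not Unit's event engine code: whatever
event set epoll reports, the definitive connect() result comes from SO_ERROR.

    #include <sys/socket.h>

    /* Returns 0 if the non-blocking connect() completed successfully,
     * otherwise the pending socket error (one of the usual connect() errors). */
    static int
    connect_result(int fd)
    {
        int        err;
        socklen_t  len;

        err = 0;
        len = sizeof(err);

        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) == -1) {
            return -1;
        }

        return err;
    }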
|
|
It unblocks other threads that may be created by the application
to work in the background.
This closes #336 issue on GitHub.
|
|
There was a change (ruby/ruby@6c70fed) in Ruby 2.6 that moved the
RUBY_DESCRIPTION global constant definition out of Init_version().
Unit initialized Ruby incorrectly, so the constant was not defined.
This closes #330 issue on GitHub.
|
|
The name and value in each header are 0-terminated, so an additional 2 bytes
should be allocated for them. There were several attempts to add these
2 bytes to headers in language modules, but some modules weren't updated.
Also, adding these 2 bytes is specific to the implementation, which may be
changed later, so extending this mechanism to modules may cause errors.
|
|
- Introduced nxt_runtime_process_port_create().
- Moved nxt_process_use() into nxt_process.c from nxt_runtime.c.
- Renamed nxt_runtime_process_remove_pid() to nxt_runtime_process_remove().
- Some public functions were made static.
This closes #327 issue on GitHub.
|
|
This avoids memory leak reports from the address sanitizer.
|
|
Now it's possible to pass -DNXT_HAVE_CLONE=0 for debugging.
|
|
|
|
Python 3.8 has a 'tp_print' field in the PyTypeObject struct. This field is
marked as deprecated, so clang generates a warning (which is turned into an
error) as a result of initializing this field. On the other hand, it is
impossible to omit this field in positional initialization. The solution
is to use a designated initializer.
Also, the usage message is silenced during 'configure python'.
This is related to #331 issue on GitHub.
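An illustrative fragment of the designated-initializer style; the type name
and slot values are placeholders, not Unit's actual Python objects. Naming the
fields lets the deprecated tp_print slot be omitted entirely.

    #include <Python.h>

    static PyTypeObject  nxt_example_type = {
        PyVarObject_HEAD_INIT(NULL, 0)

        .tp_name      = "unit._example",        /* placeholder type name */
        .tp_basicsize = sizeof(PyObject),
        .tp_flags     = Py_TPFLAGS_DEFAULT,
        .tp_doc       = "example object",
        /* tp_print is simply not mentioned, so no deprecation warning. */
    };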
|
|
When using "credential: true", the new namespace starts with completely
empty uid and gid ranges. Then, any setuid/setgid/setgroups calls using ids
not properly mapped with the uidmap and gidmap fields return EINVAL, meaning
the id is not valid inside the new namespace.
|
|
This is related to #330 issue on GitHub.
|
|
There was a typo: nxt_queue_head() was used instead of nxt_queue_first() in
the connection iteration loop. This prevented idle connections from being
closed on quit.
This closes #334 issue on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
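For illustration, a hedged sketch of the loop in question, assuming the usual
nxt_queue iteration helpers (nxt_queue_tail(), nxt_queue_next() and the
nxt_queue_link_t type are assumptions here): nxt_queue_head() refers to the
queue's sentinel link, while nxt_queue_first() returns the first element, so
the iteration must start from the latter.

    /* Illustrative iteration over idle connections; the queue name and
     * element handling are placeholders. */
    nxt_queue_link_t  *lnk;

    for (lnk = nxt_queue_first(&idle_connections);
         lnk != nxt_queue_tail(&idle_connections);
         lnk = nxt_queue_next(lnk))
    {
        /* ... close the idle connection that owns 'lnk' ... */
    }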
|
|
Thanks to tonyafanasyev.
This is related to #331 issue on GitHub.
|
|
This closes #328 issue on GitHub.
|