Found by Coverity (CID 349456).
Each request references the router process structure that owns all memory
maps. The process structure has a reference counter; each request
increments the counter to pin the structure in memory until request
processing ends. Incoming and outgoing buffers reference memory maps owned
by the process, so the process structure must be released only after all
buffers are released; otherwise the buffers may access freed memory and
crash the router. This is the libunit mechanism used for application
processes.

The background of this issue: it was found on buildbot when the router
crashed during the Java websocket tests. The Java application receives a
notification from the master process; when the notification is processed,
libunit deletes the process structure from its process hash and decrements
the use counter. However, active websocket connections keep their own
references to the process structure. Later, when the master process stops
the application, libunit releases the active websocket connections. At
this point, it is important to release each connection's memory buffers
before the corresponding process structure and all of its shared memory
segments are released.
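
To make the ordering constraint concrete, here is a minimal
reference-counting sketch. It is an illustration only, not the actual
libunit code; the process_t, process_use(), and process_release() names
are hypothetical.

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
        atomic_int   use_count;  /* one reference per holder */
        void        *mmaps;      /* shared memory maps owned by the process */
    } process_t;

    static void
    process_use(process_t *p)
    {
        atomic_fetch_add(&p->use_count, 1);
    }

    static void
    process_release(process_t *p)
    {
        /* The structure (and its memory maps) may be freed only when the
         * last holder -- request, incoming buffer, or outgoing buffer --
         * drops its reference; releasing a buffer after the process would
         * leave it pointing into unmapped shared memory. */
        if (atomic_fetch_sub(&p->use_count, 1) == 1) {
            /* unmap shared memory segments, then free the structure */
            free(p);
        }
    }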
One alert per failed allocation is enough.
By design, a Unit context is created for the thread that reads messages
from the router. However, Go request handlers are called in a separate
goroutine that may run on a different thread. To avoid a race condition,
access to the context's lists of free structures has to be serialized.
This patch should fix random crashes in Go applications under high load.

This is related to GitHub issues #253 and #309.
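
A minimal pthread sketch of the serialization described above; the
context_t and buf_t names are illustrative, not libunit's:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct buf_s  buf_t;

    struct buf_s {
        buf_t  *next;
    };

    typedef struct {
        pthread_mutex_t   mutex;      /* protects the free list below */
        buf_t            *free_bufs;  /* reusable buffer structures */
    } context_t;

    /* Without the mutex, a handler goroutine running on another thread
     * could pop the same structure the reader thread is popping,
     * corrupting the list. */
    static buf_t *
    context_buf_get(context_t *ctx)
    {
        buf_t  *b;

        pthread_mutex_lock(&ctx->mutex);

        b = ctx->free_bufs;
        if (b != NULL) {
            ctx->free_bufs = b->next;
        }

        pthread_mutex_unlock(&ctx->mutex);

        return b;
    }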
This patch contains various logging improvements and bugfixes found
during Java module development.
Previously, the nxt_router_prepare_msg() function expected the server host
to arrive unmodified among the other headers. This is no longer true since
normalization of the Host header was introduced in 77aad2c142a0.

The nxt_unit_split_host() function was removed: it didn't handle IPv6
literals correctly, and after 77aad2c142a0 the port splitting is done in
the router during Host header processing anyway.
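
For illustration, splitting on the first ':' breaks bracketed IPv6
literals such as "[::1]:8080"; a correct split has to account for the
brackets. A hypothetical sketch (not the router's actual code):

    #include <string.h>

    /* Returns the port substring of "host[:port]", or NULL if no port
     * is present; *host_len receives the host part length.  A naive
     * memchr(host, ':', len) would cut an IPv6 literal apart. */
    static const char *
    host_split_port(const char *host, size_t len, size_t *host_len)
    {
        const char  *colon, *end;

        end = host + len;

        if (len > 0 && host[0] == '[') {
            /* IPv6 literal: a port separator can only follow ']'. */
            colon = memchr(host, ']', len);
            if (colon == NULL || colon + 1 == end || colon[1] != ':') {
                *host_len = len;
                return NULL;
            }
            colon++;

        } else {
            colon = memchr(host, ':', len);
            if (colon == NULL) {
                *host_len = len;
                return NULL;
            }
        }

        *host_len = colon - host;
        return colon + 1;
    }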
If nxt_unit_tracking_read() failed, execution jumped to the error path,
which could try to release buffers from the not-yet-initialized
incoming_buf queue.
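
A hypothetical reduction of the bug: the queue must be initialized before
any failure can route execution to the error path, otherwise the cleanup
code walks uninitialized memory. All names here are illustrative:

    #include <stddef.h>

    typedef struct { void  *head; } queue_t;

    static void  queue_init(queue_t *q)         { q->head = NULL; }
    static void  queue_release_bufs(queue_t *q) { (void) q; /* ... */ }
    static int   tracking_read(void)            { return -1; /* may fail */ }

    static int
    process_message(void)
    {
        queue_t  incoming_buf;

        queue_init(&incoming_buf);     /* initialize first ... */

        if (tracking_read() != 0) {
            goto error;                /* ... so this path is safe */
        }

        return 0;

    error:

        queue_release_bufs(&incoming_buf);
        return -1;
    }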
This issue was introduced in libunit commit e0f0cd7d244a. All port sockets
in an application should be in blocking mode, whereas Unit itself operates
on non-blocking sockets. Non-blocking sockets in an application may cause
send errors during intensive generation of response packets.

See https://mailman.nginx.org/pipermail/unit/2018-October/000080.html.
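
For reference, switching a descriptor back to blocking mode is a two-step
fcntl() call; a minimal sketch:

    #include <fcntl.h>

    /* Clear O_NONBLOCK so sends block instead of failing with EAGAIN
     * when responses are generated faster than the peer drains them.
     * Returns 0 on success, -1 on error. */
    static int
    socket_set_blocking(int fd)
    {
        int  flags;

        flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1) {
            return -1;
        }

        return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
    }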
The Unit router process may send an mmap file descriptor to the
application process for further information exchange. Various errors may
occur during this process, and they should be described in the application
error log. If the file descriptor cannot be properly transferred with the
'new mmap' message, the fd variable is assigned -1, and further syscalls
using this descriptor will fail. The fd in the 'new port' message is
checked the same way.

This commit adds an early 'invalid file descriptor' diagnostic and writes
a corresponding message to the error log.

Found by Coverity (CID 308515).
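
A hypothetical reduction of the added check: validate the descriptor as
soon as the message is parsed, instead of letting a later syscall fail
with a less obvious error. The handler name is illustrative:

    #include <stdio.h>

    static int
    new_mmap_handler(int fd)
    {
        if (fd == -1) {
            fprintf(stderr,
                    "invalid file descriptor in 'new mmap' message\n");
            return -1;
        }

        /* ... proceed to map the shared memory segment ... */
        return 0;
    }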
As an optimization, the nxt_unit_remove_process() function expects
lib->mutex to be locked; it then moves the ports queue into a temporary
queue and releases the mutex. nxt_unit_done() contained two errors: the
mutex was not locked before the nxt_unit_remove_process() call, and the
mutex was never destroyed. It is hard to tell what the possible negative
impact of these errors was.

Found by Coverity (CID 308517).
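
The corrected teardown follows the locking contract and then destroys the
mutex; a sketch with illustrative names, assuming the same contract as
nxt_unit_remove_process():

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t  mutex;
        /* ... ports queue, process data ... */
    } lib_t;

    /* Expects lib->mutex to be held; moves the ports queue aside and
     * unlocks the mutex internally, mirroring the contract above. */
    static void
    unit_remove_process(lib_t *lib)
    {
        /* ... move ports queue into a temporary queue ... */
        pthread_mutex_unlock(&lib->mutex);
    }

    static void
    unit_done(lib_t *lib)
    {
        pthread_mutex_lock(&lib->mutex);     /* was missing */

        unit_remove_process(lib);            /* unlocks lib->mutex */

        pthread_mutex_destroy(&lib->mutex);  /* was also missing */
    }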
The library is now used in all language modules. The old 'nxt_app_*' code
has been removed. See src/test/nxt_unit_app_test.c for a usage sample.
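
For orientation, the core of a libunit application looks roughly like the
sketch below. It follows the pattern of the referenced sample; check
src/test/nxt_unit_app_test.c and nxt_unit.h for the authoritative
signatures.

    #include <string.h>
    #include <nxt_unit.h>

    static const char  text[] = "Hello, Unit!\n";

    /* Called by libunit for each incoming request. */
    static void
    request_handler(nxt_unit_request_info_t *req)
    {
        nxt_unit_response_init(req, 200, 0, sizeof(text) - 1);
        nxt_unit_response_add_content(req, text, sizeof(text) - 1);
        nxt_unit_response_send(req);

        nxt_unit_request_done(req, NXT_UNIT_OK);
    }

    int
    main(void)
    {
        nxt_unit_ctx_t   *ctx;
        nxt_unit_init_t  init;

        memset(&init, 0, sizeof(init));
        init.callbacks.request_handler = request_handler;

        ctx = nxt_unit_init(&init);  /* connects to the router */
        if (ctx == NULL) {
            return 1;
        }

        nxt_unit_run(ctx);           /* message/request processing loop */
        nxt_unit_done(ctx);

        return 0;
    }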