|
The nxt_unit_ctx_port_recv() function may return the NXT_UNIT_AGAIN code, in
which case an attempt to reread the message should be made.
The issue was reproduced in load testing with response sizes 16k and up.
In the rare case of a NXT_UNIT_AGAIN result, a buffer of size -1 was processed,
which triggered a 'message too small' alert; after that, the app process was
terminated.
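A minimal sketch of the retry, with a simplified read-buffer struct and a hypothetical port_recv() standing in for nxt_unit_ctx_port_recv(); the real signature and constant values differ:

#include <sys/types.h>

#define NXT_UNIT_OK     0
#define NXT_UNIT_ERROR  1
#define NXT_UNIT_AGAIN  2   /* values are illustrative, not libunit's */

/* Hypothetical read buffer and recv call standing in for the libunit ones. */
typedef struct {
    ssize_t  size;      /* bytes received; may be -1 when nothing was read */
    char     data[16384];
} read_buf_t;

extern int port_recv(void *ctx, read_buf_t *rbuf);

static int
port_recv_retry(void *ctx, read_buf_t *rbuf)
{
    int  rc;

    do {
        rc = port_recv(ctx, rbuf);

        /* Without the retry, an NXT_UNIT_AGAIN result left rbuf->size at -1
         * and the buffer was processed anyway, triggering the
         * 'message too small' alert and terminating the app process. */
    } while (rc == NXT_UNIT_AGAIN);

    return rc;
}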
|
|
Under high load, a queue synchronization issue may occur, starting from the
steady state when an app queue message is dequeued immediately after it has been
enqueued. In this state, the router always puts the first message in the queue
and is forced to notify the app about a new message in an empty queue using a
socket pair. On the other hand, the application dequeues and processes the
message without reading the notification from the socket, so the socket buffer
overflows with notifications.
The issue was reproduced during Unit load tests. After a socket buffer
overflow, the router is unable to notify the app about a new first message.
When another message is enqueued, a notification is not required, so the queue
grows without being read by the app. As a result, request processing stops.
This patch changes the notification algorithm by counting the notifications in
the pipe instead of getting the number of messages in the queue.
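An illustrative sketch of the idea, not Unit's actual code: the sender tracks whether a wakeup byte is already pending in the pipe and writes another one only when there is none, so the socket buffer can no longer overflow. The types and helpers are hypothetical, and the count is simplified to at most one pending notification:

#include <stdatomic.h>
#include <unistd.h>

typedef struct {
    int         notify_fd[2];     /* pipe or socketpair                 */
    atomic_int  notifications;    /* wakeup bytes written but not read  */
} queue_notify_t;

/* Router side: wake the app only if no notification is already pending,
 * instead of deciding based on whether the queue looks empty. */
static void
queue_notify_send(queue_notify_t *qn)
{
    int  expected = 0;

    if (atomic_compare_exchange_strong(&qn->notifications, &expected, 1)) {
        (void) write(qn->notify_fd[1], "", 1);
    }
}

/* App side: drain the wakeup byte and mark the notification as consumed
 * before processing queued messages. */
static void
queue_notify_recv(queue_notify_t *qn)
{
    char  b;

    if (read(qn->notify_fd[0], &b, 1) == 1) {
        atomic_store(&qn->notifications, 0);
    }
}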
|
|
A multithreaded application may create different shared memory segments in
different threads. These segments are then passed to different router threads.
Because of this multithreading, the order in which incoming segments are added
is not deterministic, and some incoming segments may not be initialized yet.
This patch simply adds a NULL check to skip uninitialized segments.
The crash was reproduced during load tests with a high number of simultaneous
connections (1024 and more).
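A minimal sketch of the check, using an illustrative segment array rather than Unit's actual structures:

#include <stddef.h>

typedef struct {
    void    *mem;     /* mapped segment start, NULL if not initialized yet */
    size_t   size;
} shm_segment_t;

/* Skip slots that another thread has not filled in yet instead of
 * dereferencing them. */
static size_t
segments_total_size(shm_segment_t *segments, size_t n)
{
    size_t  i, total;

    total = 0;

    for (i = 0; i < n; i++) {
        if (segments[i].mem == NULL) {
            continue;     /* slot reserved but not initialized yet */
        }

        total += segments[i].size;
    }

    return total;
}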
|
|
The WSGI environment dictionary contains a number of static items that are
pre-initialized at application start. It is then copied for each request and
filled with request-related data.
Now this dictionary copy is performed between requests, which should save some
CPU cycles during request processing and thus reduce response latency during
non-peak load periods.
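A hedged sketch of the approach using the CPython API; the helper names and globals are hypothetical, not Unit's actual code:

#include <Python.h>

static PyObject  *base_environ;   /* static items, built at app start      */
static PyObject  *next_environ;   /* pre-copied dict for the next request  */

/* Hot path: hand out the copy that was prepared earlier; the caller adds
 * the request-specific keys and passes the dict to the WSGI callable. */
static PyObject *
environ_acquire(void)
{
    PyObject  *env = next_environ;

    next_environ = NULL;
    return env;
}

/* Called at application start and again after each response has been sent,
 * i.e. between requests, so PyDict_Copy() is off the request hot path. */
static int
environ_prepare_next(void)
{
    next_environ = PyDict_Copy(base_environ);
    return (next_environ != NULL) ? 0 : -1;
}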
|
|
The code wrongly assumed that a mount namespace automatically unmounts the
process's mounts when the process exits, but this happens only with
unprivileged mounts.
|
|
Ruby threads need to be created with the GVL held; otherwise, they may attempt
to access resources it protects, causing a crash.
The issue was occasionally reproduced on Ubuntu 18.04 with Ruby 2.5.1
while running test_ruby_application_threads.
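A minimal sketch, assuming the standard Ruby C API: the worker thread is created with rb_thread_create() rather than a raw pthread_create(), so it is registered with the VM and synchronized through the GVL.

#include <ruby.h>

/* Worker body; runs holding the GVL (blocking native sections would
 * release it via rb_thread_call_without_gvl() if needed). */
static VALUE
worker_body(void *arg)
{
    /* Safe to touch Ruby objects here: the thread is known to the VM. */
    return Qnil;
}

/* Create the thread through the Ruby API instead of pthread_create(). */
static VALUE
start_worker(void *arg)
{
    return rb_thread_create(worker_body, arg);
}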
|
|
This closes #498 issue on GitHub.
|
|
According to Section 3.1.2 of RFC 7230, the status code must be followed by a
space even if the reason phrase is empty. Also, the status code must consist of
exactly three digits.
This closes #507 issue on GitHub.
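A small illustrative check of that rule, not Unit's actual parser; p points just past the "HTTP-version SP" prefix of the status line:

#include <stddef.h>
#include <ctype.h>

static int
parse_status_code(const unsigned char *p, size_t len, int *status)
{
    if (len < 4) {                       /* "NNN" plus the mandatory SP */
        return -1;
    }

    if (!isdigit(p[0]) || !isdigit(p[1]) || !isdigit(p[2])) {
        return -1;                       /* exactly three digits */
    }

    if (p[3] != ' ') {                   /* space required even when the
                                            reason phrase is empty */
        return -1;
    }

    *status = (p[0] - '0') * 100 + (p[1] - '0') * 10 + (p[2] - '0');
    return 0;
}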
|
|
This fixes a crash on exit of a Node.js application. The crash was reproduced
on Ubuntu 20.10 with Node.js v15.1.0 by the tests 'test_node_websockets_two_clients'
and 'test_node_websockets_7_13_1__7_13_2'.
The cause of the crash was the use of a request structure that had already been
freed.
The issue was introduced in 5be509fda29e.
|
|
Warnings changed to debug messages.
|
|
|
|
|
|
If the shared queue is empty, the allocated read buffer should be explicitly
released.
Found by Coverity (CID 363943).
The issue was introduced in f5ba5973a0a3.
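A minimal sketch of the fix, with hypothetical helper names standing in for the libunit ones:

#include <stddef.h>

typedef struct ctx_s   ctx_t;
typedef struct rbuf_s  rbuf_t;

extern rbuf_t  *read_buf_get(ctx_t *ctx);
extern void     read_buf_release(rbuf_t *rbuf);
extern int      shared_queue_read(ctx_t *ctx, rbuf_t *rbuf);  /* 0 = got msg */

static int
process_shared_queue(ctx_t *ctx)
{
    rbuf_t  *rbuf;

    rbuf = read_buf_get(ctx);
    if (rbuf == NULL) {
        return -1;
    }

    if (shared_queue_read(ctx, rbuf) != 0) {
        read_buf_release(rbuf);       /* queue empty: release explicitly */
        return 0;
    }

    /* ... process the message ... */
    read_buf_release(rbuf);
    return 0;
}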
|
|
|
|
|
|
|
|
Removing unnecessary context operations from the shared queue processing loop.
Initializing temporary queues only when required.
|
|
|
|
The issue occurred under highly concurrent request load in Go applications.
Such applications are multi-threaded but use a single libunit context; any
thread-safe code in the libunit context is only required for Go applications.
As a result of improper request state reset, the recycled request structure was
recovered in the released state, so further operations with this request
resulted in 'response already sent' warnings. However, the actual response was
never delivered to the router and the client.
|
|
The issue only occurred in Go applications because "port_send" is overloaded
only in Go. To reproduce it, send multiple concurrent requests to the
application after it has initialised. The warning message "[unit] [go] port
NNN:dd not found" is the first visible sign of the issue; a second, more
serious one is a closed connection, an error response, or a hanging response to
some requests.
When the application starts, it is unaware of the router's worker thread ports,
so it requests the ports from the router after receiving requests from the
corresponding router worker threads. When multiple requests are processed
simultaneously, the router port may be required by several requests, so request
processing starts only after the application receives the required port
information. The port should be added to the Go port repository after its
'ready' flag is updated. Otherwise, Unit may start processing some requests and
use the port before it is in the repository.
The issue was introduced in changeset 78836321a126.
|
|
Debug logging depends on macros defined in nxt_auto_config.h.
|
|
Previously, all requests whose header field names contained characters other
than alphanumerics, "-", or "_" were rejected with a 400 "Bad Request" error
response.
Now, the parser allows the same set of characters as specified in RFC 7230,
including: "!", "#", "$", "%", "&", "'", "*", "+", ".", "^", "`", "|", and "~".
Header field names that contain only these characters are considered valid.
Also, a new option is introduced: "discard_unsafe_fields". It accepts a boolean
value and is set to "true" by default.
When this option is "true", header field names that contain characters from the
valid range other than alphanumerics and "-" are skipped during parsing.
When the option is "false", these header fields aren't skipped.
Requests whose header field names contain characters invalid according to
RFC 7230 are rejected regardless of the "discard_unsafe_fields" setting.
This closes #422 issue on GitHub.
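A rough sketch of the resulting classification; illustrative only, not Unit's actual parser tables:

#include <ctype.h>

typedef enum {
    FIELD_NAME_INVALID,   /* not an RFC 7230 tchar: reject with 400        */
    FIELD_NAME_SAFE,      /* alphanumeric or '-': always accepted          */
    FIELD_NAME_UNSAFE     /* other tchars, including '_': valid, but the
                             field is skipped when "discard_unsafe_fields"
                             is true                                       */
} field_name_class_t;

static field_name_class_t
classify_field_name_char(unsigned char c)
{
    if (isalnum(c) || c == '-') {
        return FIELD_NAME_SAFE;
    }

    switch (c) {
    case '!': case '#': case '$': case '%': case '&': case '\'':
    case '*': case '+': case '.': case '^': case '_': case '`':
    case '|': case '~':
        return FIELD_NAME_UNSAFE;
    }

    return FIELD_NAME_INVALID;
}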
|
|
Now users can disable the default procfs mount point
in the rootfs.
{
    "isolation": {
        "automount": {
            "procfs": false
        }
    }
}
|
|
Now users can disable the default tmpfs mount point
in the rootfs.
{
    "isolation": {
        "automount": {
            "tmpfs": false
        }
    }
}
|
|
This closes #219 issue on GitHub.
|
|
The php_request_shutdown() function calls sapi_deactivate(), which tries to
read the request body into a dummy buffer. In our case, this is just a waste of
CPU cycles.
This change is also required for the upcoming implementation of the
fastcgi_finish_request() function, where the request context can be
cleared by the time of finalization.
|
|
The application shared queue is only capable of passing one shared memory
buffer. The remaining buffers in the chain need to be sent directly to the
application in response to the REQ_HEADERS_ACK message.
The issue can be reproduced with configurations where 'body_buffer_size' is
greater than the memory segment size (10 MB). Requests with a body larger than
10 MB simply get stuck: they are not passed to the application, which keeps
waiting for more data from the router.
The bug was introduced in 1d84b9e4b459 (v1.19.0).
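A hedged sketch of the intended flow with hypothetical buffer and send helpers; the real router code is more involved:

#include <stddef.h>

typedef struct buf_s  buf_t;

struct buf_s {
    buf_t  *next;
    /* ... reference to a shared memory segment ... */
};

extern int  shared_queue_send(void *app, buf_t *b);  /* fits one buffer only */
extern int  direct_port_send(void *app, buf_t *b);   /* app's reply port     */

/* Router side: only the head of the chain goes through the shared queue;
 * the tail is kept until the application answers with REQ_HEADERS_ACK. */
static buf_t *
send_request_head(void *app, buf_t *chain)
{
    if (shared_queue_send(app, chain) != 0) {
        return NULL;
    }

    return chain->next;     /* saved tail, flushed by the handler below */
}

/* Invoked when REQ_HEADERS_ACK arrives: send the remaining buffers
 * directly to the application instead of leaving the request stuck. */
static int
on_req_headers_ack(void *app, buf_t *saved_tail)
{
    buf_t  *b;

    for (b = saved_tail; b != NULL; b = b->next) {
        if (direct_port_send(app, b) != 0) {
            return -1;
        }
    }

    return 0;
}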
|
|
Introducing manual protocol selection for 'universal' apps and frameworks.
|
|
The issue (a deprecated API warning) was introduced by the ClassGraph upgrade
in commit ccd5c695b739.
|
|
Application terminates in case of thread creation failure.
|
|
This closes #486 issue on GitHub.
|
|
This closes #482 issue on GitHub.
|
|
This shall save a couple of CPU cycles in request processing.
|
|
This closes #458 issue on GitHub.
|
|
|
|
This closes #459 issue on GitHub.
|
|
|
|
Compilers complained about unused variables after 37e2a3ea1bf1.
|
|
This closes #487 issue on GitHub.
|
|
The "iov" array was filled incorrectly when custom mounting options were set.
|
|
|
|
When mount points reside within other mount points, this patch sorts them by
path length and then unmounts them in the reverse order of their mounting, so
that nested paths are unmounted before the paths that contain them.
This fixes an issue in buildbots where dependent paths failed to unmount,
leading to the build script removing system-wide language libraries.
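A minimal sketch of the approach, assuming a plain array of mount paths:

#include <stdlib.h>
#include <string.h>
#include <sys/mount.h>

static int
compare_path_len(const void *a, const void *b)
{
    size_t  la = strlen(*(const char *const *) a);
    size_t  lb = strlen(*(const char *const *) b);

    return (la > lb) - (la < lb);     /* ascending by path length */
}

static void
unmount_all(const char **paths, size_t n)
{
    size_t  i;

    qsort(paths, n, sizeof(char *), compare_path_len);

    /* Walk the sorted array backwards: longer (nested) paths first. */
    for (i = n; i > 0; i--) {
        (void) umount2(paths[i - 1], MNT_DETACH);
    }
}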
|
|
|
|
The socket is required for inter-context communication in multithreaded apps.
|
|
|
|
The PORT_ACK message is the router's response to the application's NEW_PORT
message. After receiving PORT_ACK, the application can safely process requests
using this port.
This message avoids a race condition in which the application starts processing
a request from the shared queue and sends REQ_HEADERS_ACK. The REQ_HEADERS_ACK
message contains the application port ID as reply_port, which the router uses
to send request data. When the application creates a new port, it
immediately sends it to the main router thread. Because the request is
processed outside the main thread, a race can occur between the receipt of the
new port in the main thread and the receipt of REQ_HEADERS_ACK in the worker
router thread where the same port is specified as reply_port.
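An illustrative sketch of the handshake from the application's point of view; the structure and helpers are hypothetical, not the actual libunit code:

#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    int          id;
    atomic_bool  acked;    /* set when PORT_ACK arrives from the router */
} app_port_t;

/* Announce the port, but do not use it as reply_port yet. */
static void
port_announce(app_port_t *port)
{
    atomic_store(&port->acked, false);
    /* send_new_port_message(port->id);   -- hypothetical NEW_PORT send */
}

/* Handler for the router's PORT_ACK message. */
static void
port_ack_handler(app_port_t *port)
{
    atomic_store(&port->acked, true);
}

/* Checked before sending REQ_HEADERS_ACK with this port as reply_port. */
static int
port_is_usable(app_port_t *port)
{
    return atomic_load(&port->acked);
}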
|
|
|
|
|
|
|
|
|