Age | Commit message | Author | Files | Lines |
|
According to libuv documentation, uv_poll_t memory should be released
in a callback function passed to uv_close(). Otherwise, the Node.js application
process may crash at exit.
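For illustration, a minimal sketch of this pattern with plain libuv calls
(the heap allocation here is an assumption for the example, not taken from
the module's actual code):

#include <stdlib.h>
#include <uv.h>

/* libuv keeps using the handle until the close callback fires,
 * so the memory is released only there. */
static void
poll_close_cb(uv_handle_t *handle)
{
    free(handle);
}

static void
stop_polling(uv_poll_t *poll)
{
    uv_poll_stop(poll);
    uv_close((uv_handle_t *) poll, poll_close_cb);

    /* No free(poll) here: the handle is still owned by the event
     * loop until poll_close_cb() runs. */
}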
|
|
A connection object is allocated in advance for each listen event object to be
used for the established connection. This connection needs to be freed when the
listen event is destroyed.
|
|
The invocation parameters should be logged as well, notably the path of the file
that failed to be created.
Also, the log level is changed to ALERT, since this is a critical error.
|
|
Earlier, if nxt_mp_create() failed to allocate memory while accepting a new
connection, the resulting NULL was subsequently passed to nxt_mp_destroy(),
crashing the process.
Moreover, if nxt_mp_create() was successful but nxt_sockaddr_cache_alloc() failed,
the connection object wasn't destroyed properly, leaving the connection counter
in an inconsistent state. Repeated, this condition lowered the connection
capacity of the process and could eventually prevent it from accepting
connections altogether.
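A self-contained illustration of the fixed cleanup order (the names and
allocations below are invented for the example, not Unit's actual API):

#include <stdlib.h>

static int  connection_count;            /* stands in for the real counter */

static int
accept_resources(void **pool, void **sockaddr)
{
    connection_count++;

    *pool = malloc(4096);                /* stands in for nxt_mp_create() */
    if (*pool == NULL) {
        connection_count--;              /* keep the counter consistent */
        return -1;                       /* never pass NULL to the destroy call */
    }

    *sockaddr = malloc(128);             /* stands in for the sockaddr allocation */
    if (*sockaddr == NULL) {
        free(*pool);                     /* release the pool that was created */
        *pool = NULL;
        connection_count--;
        return -1;
    }

    return 0;
}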
|
|
Since the introduction of the rootfs feature, some language modules
can't be configured multiple times.
Now the configure script generates a separate nxt_<module>_mounts.h for
each compiled module.
|
|
There was undefined behavior in the validation function, caused by testing
one character past the end of the string when a wildcard was the last character.
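A hypothetical sketch of a bounds-safe check (the function below is
illustrative only, not the actual validator):

#include <stddef.h>

/* Illustrative only: a '*' in the last position is handled without
 * reading name[i + 1], which would be one character past the string. */
static int
wildcard_pattern_is_valid(const char *name, size_t length)
{
    size_t  i;

    for (i = 0; i < length; i++) {
        if (name[i] != '*') {
            continue;
        }

        if (i + 1 == length) {
            return 1;                    /* trailing wildcard: stop here */
        }

        if (name[i + 1] == '*') {
            return 0;                    /* e.g. reject "**" */
        }
    }

    return 1;
}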
|
|
There are no restrictions on configuration size, and using segmented shared
memory only doubles memory usage, because to parse the configuration on the
router side it needs to be 'plain', i.e. located in a single contiguous memory
buffer.
|
|
|
|
This fixes undefined behaviour due to an array over-read when an unknown
parameter is specified in a uidmap, a gidmap, or a PHP target object.
|
|
Thanks to 洪志道 (Hong Zhi Dao).
|
|
This is partially related to issue #434 on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
|
|
Race conditions reproduced periodically in test_python_process_switch.
|
|
Currently, the router exits without waiting for the worker threads to stop.
There is a short gap between freeing the runtime memory pool and the exit, during
which a worker thread may try to access a runtime structure. In turn, this may
cause a crash. For now, it is better to keep this memory allocated.
|
|
Using this function in all language modules helps to avoid code duplication
and reduce the size of future patches.
|
|
The correct value for a non-initialized file descriptor is -1, because most of
the checks in libunit compare a file descriptor with -1 before performing an
action. Using 0 as the default value may cause file descriptor #0 to be closed,
which may affect application logic.
It is not required to list this patch in the changelog because the impact is not
visible to end users.
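A small sketch of the convention (the structure and function names are
illustrative, not libunit's):

#include <unistd.h>

typedef struct {
    int  fd;
} example_channel_t;                 /* illustrative, not a libunit type */

static void
example_channel_init(example_channel_t *c)
{
    c->fd = -1;                      /* -1 means "not initialized" */
}

static void
example_channel_close(example_channel_t *c)
{
    if (c->fd != -1) {               /* 0 is a valid descriptor (stdin) */
        close(c->fd);
        c->fd = -1;
    }
}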
|
|
The nxt_assert macro uses nxt_thread_context, which caused the following linker
error when using it in the library:
ld: illegal thread local variable reference to regular symbol
_nxt_thread_context for architecture x86_64
|
|
An incorrect check prevented Unit from starting without modules.
This issue was introduced in 4a3ec07f4b19.
|
|
|
|
Previously, the log message callback used a generic log function that relied on
the process time cache. Since there were no time update calls in the application
processes, all log lines were printed with the same time, usually corresponding
to the process start.
Now, a non-cached logging function from libunit is used.
|
|
This makes the log format used in libunit consistent with the daemon's, where
milliseconds are printed only at the debug log level.
Currently, a compile-time switch is used, since there is no support for changing
the log level at runtime yet. In the future, this should become a runtime
condition, similar to nxt_log_time_handler().
|
|
The matching 'start' and 'end' positions are now adjusted to avoid false matches.
This is related to issue #434 on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
|
|
The lifespan of a listening socket is longer than that of both the
router configuration and the temporary router configuration,
so the sockets should be stored in persistent queues. Safety
is ensured by the fact that the router processes only one new
configuration at any time.
|
|
|
|
|
|
|
|
|
|
|
|
The process abstraction has changed to:
setup(task, process)
start(task, process_data)
prefork(task, process, mp)
The prefork() occurs in the main process right before fork.
The file src/nxt_main_process.c is completely free of process-specific
logic.
The creation of a process now supports a PROCESS_CREATED state. The
setup() function of each process can set its state to either
created or ready. If created, a MSG_PROCESS_CREATED is sent to the main
process, where external setup can be done (required for rootfs under a
container).
The core processes (discovery, controller, and router) don't need
external setup, so they all proceed to their start() function
straight away.
In the case of applications, the module is loaded at process
setup() time, and the module's init() function has been changed
to be the start() of the process.
The module API has changed to:
setup(task, process, conf)
start(task, data)
As a direct benefit of the PROCESS_CREATED message, the clone(2) of
processes using PID namespaces no longer needs to create a pipe
to make the child block until the parent sets up the uid/gid mappings,
nor does it need to receive the child pid.
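A rough sketch of the callback layout described above (type and field names
are illustrative, not the exact ones from the source tree):

/* Illustrative only: the per-process interface with setup()/start()/
 * prefork() hooks, where setup() chooses between "created" and "ready". */
typedef enum {
    EXAMPLE_PROCESS_CREATED,   /* main must finish external setup first  */
    EXAMPLE_PROCESS_READY,     /* the child proceeds to start() directly */
} example_process_state_t;

typedef struct {
    const char                *name;

    /* runs in the main process right before fork() */
    int                      (*prefork)(void *task, void *process, void *mp);

    /* runs in the new process and reports CREATED or READY */
    example_process_state_t  (*setup)(void *task, void *process);

    /* runs once the process is ready to do its work */
    void                     (*start)(void *task, void *process_data);
} example_process_init_t;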
|
|
This aims to avoid stream id clashes after router restart.
|
|
This is required to handle REMOVE_PID messages if router engine
initialization is incomplete.
|
|
After a process exits, all ports linked to it from other processes
should be closed. All unsent file descriptors in the port queue that are
marked "close after send" should be closed to avoid resource leaks.
|
|
According to the C standard, pointer arguments passed to memcpy() must still
be valid; NULL is considered invalid.
Found with the GCC static analyzer.
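For illustration, the usual guard looks like this:

#include <string.h>

/* Passing NULL to memcpy() is undefined behavior even when size == 0,
 * so the copy is performed only with a valid source pointer. */
static void
copy_if_present(void *dst, const void *src, size_t size)
{
    if (src != NULL && size != 0) {
        memcpy(dst, src, size);
    }
}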
|
|
|
|
This fixes building with GCC 10, which defaults to -fno-common.
See: https://gcc.gnu.org/gcc-10/porting_to.html
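The typical fix pattern, sketched here with a placeholder variable name:
tentative definitions that the linker used to merge now need a single
definition plus extern declarations elsewhere.

/* example.h -- declaration only; no storage is allocated here */
extern int  example_flag;

/* example.c -- exactly one translation unit defines the variable */
int  example_flag;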
|
|
This should resolve some static analyzer warnings.
|
|
|
|
This allows specifying multiple targets inside a PHP application.
For example:
{
    "listeners": {
        "*:80": {
            "pass": "routes"
        }
    },
    "routes": [
        {
            "match": {
                "uri": "/info"
            },
            "action": {
                "pass": "applications/my_app/phpinfo"
            }
        },
        {
            "match": {
                "uri": "/hello"
            },
            "action": {
                "pass": "applications/my_app/hello"
            }
        },
        {
            "action": {
                "pass": "applications/my_app/rest"
            }
        }
    ],
    "applications": {
        "my_app": {
            "type": "php",
            "targets": {
                "phpinfo": {
                    "script": "phpinfo.php",
                    "root": "/www/data/admin"
                },
                "hello": {
                    "script": "hello.php",
                    "root": "/www/data/test"
                },
                "rest": {
                    "root": "/www/data/example.com",
                    "index": "index.php"
                }
            }
        }
    }
}
|
|
This is useful for escaping "/" in path fragments. For example, to
reference the application named "foo/bar":
{
    "pass": "applications/foo%2Fbar"
}
|
|
|
|
|
|
This is required due to the lack of a graceful shutdown: there is a small gap
between the release of the runtime's memory pool and the router process's exit.
Thus, a worker thread may start processing a request between these two
operations, which may result in an access to the HTTP fields hash and a
subsequent crash.
To simplify reproducing the issue, it makes sense to add a 2-second sleep before
exit() in nxt_runtime_exit().
|
|
|
|
|
|
|
|
After 41331471eee7, completion handlers should complete the next buffer in the
chain. Otherwise, buffer memory may leak.
Thanks to Peter Tkatchenko for reporting the issue and testing the fixes.
|
|
An earlier attempt (ad6265786871) to resolve this condition on the
router's side added a new issue: the app could get a request before
acquiring a port.
|
|
One of the ways to detect Unit's startup and subsequent readiness to accept
commands relies on waiting for the control socket file to be created.
Earlier, it was unreliable due to a race condition between the client's
connect() and the daemon's listen() calls after the socket's bind() call.
Now, unix domain listening sockets are created with a nxt_listen_socket_create()
call as follows:
s = socket();
unlink("path/to/socket.tmp");
bind(s, "path/to/socket.tmp");
listen(s);
rename("path/to/socket.tmp", "path/to/socket");
This eliminates the window when the socket file is already created but nobody
is listening on it yet, thus preventing the condition described above.
Also, it allows reliably detecting whether the socket is being used or simply
wasn't cleaned after the daemon stopped abruptly. A successful connection to
the socket file means the daemon has been started; otherwise, the file can be
overwritten.
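A self-contained sketch of the same sequence with plain POSIX calls (the
paths, backlog, and error handling are simplified for the example):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Bind and listen on a temporary name first, then atomically rename it
 * to the final path, so a client never finds a socket file that nobody
 * is listening on yet. */
static int
create_listen_socket(const char *path, const char *tmp_path)
{
    int                 s;
    struct sockaddr_un  sun;

    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s == -1) {
        return -1;
    }

    memset(&sun, 0, sizeof(struct sockaddr_un));
    sun.sun_family = AF_UNIX;
    strncpy(sun.sun_path, tmp_path, sizeof(sun.sun_path) - 1);

    unlink(tmp_path);

    if (bind(s, (struct sockaddr *) &sun, sizeof(struct sockaddr_un)) == -1
        || listen(s, 511) == -1
        || rename(tmp_path, path) == -1)
    {
        close(s);
        return -1;
    }

    return s;
}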
|
|
Previously, the unix domain control socket file might have been left
in the file system after a failed nxt_listen_socket_create() call.
|
|
|
|
|