|
|
|
|
|
Generic process-to-process shared memory exchange is no longer required; it is
transformed into a router-to-application pattern. The collection of outgoing
shared memory segments is now owned by the application structure. Applications
connect only to the router, and the process only needs to group the ports.
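A minimal sketch of the resulting ownership, with made-up names (these are not
the actual Unit structures): the outgoing segment array becomes a field of the
application object instead of part of a generic process-pair exchange.

    /* Illustrative sketch only; type and field names are assumptions. */
    #include <stddef.h>

    typedef struct {
        int     fd;      /* shared memory segment descriptor */
        void   *addr;    /* mapped address                    */
        size_t  size;    /* segment size                      */
    } out_segment_t;

    typedef struct {
        /* ... other application state ... */
        out_segment_t  *outgoing;     /* owned by the application */
        size_t          n_outgoing;
    } application_t;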
|
|
The application process now has to request the port from the router instead of
the router pushing the port before it sends a request to the application. This
is required to simplify the communication between the router and the
application and to prepare the router to use the application's shared port and
then the queue.
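A rough sketch of the resulting exchange; the message names below are only
illustrative, not the real port message types.

    /* Illustrative only; these identifiers are assumptions.             */
    typedef enum {
        MSG_GET_PORT,   /* app -> router: please send me your port       */
        MSG_NEW_PORT,   /* router -> app: here is the requested port     */
        MSG_REQUEST     /* router -> app: actual request; later meant to
                           travel over the application's shared port and
                           then the queue                                */
    } port_msg_type_t;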
|
|
Currently, the router exits without waiting for the worker threads to stop.
There is a short gap between freeing the runtime memory pool and the exit,
during which a worker thread may try to access a runtime structure; this may
cause a crash. For now, it is better to keep this memory allocated.
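The hazard is a plain use-after-free between the exiting main thread and a
worker thread that has not stopped yet; a self-contained illustration (not
Unit code):

    /* Self-contained illustration of the race; not Unit code.           */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int  *runtime;               /* stands in for the runtime pool */

    static void *worker(void *arg)
    {
        int  v;

        (void) arg;
        usleep(1000);
        v = runtime[0];                 /* may read already-freed memory  */
        return (void *) (intptr_t) v;
    }

    int main(void)
    {
        pthread_t  t;

        runtime = calloc(1, sizeof(int));
        pthread_create(&t, NULL, worker, NULL);

        free(runtime);                  /* freeing before the worker stops
                                           opens the gap; keeping the
                                           memory allocated until exit
                                           avoids the crash               */
        return 0;
    }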
|
|
|
|
The process abstraction has changed to:

    setup(task, process)
    start(task, process_data)
    prefork(task, process, mp)

The prefork() call happens in the main process right before fork. The file
src/nxt_main_process.c is now completely free of process-specific logic.

The creation of a process now supports a PROCESS_CREATED state. The setup()
function of each process can set its state to either created or ready. If
created, a MSG_PROCESS_CREATED message is sent to the main process, where
external setup can be done (required for rootfs under a container).

The core processes (discovery, controller, and router) don't need external
setup, so they all proceed to their start() function straight away.

In the case of applications, the module is loaded at process setup() time, and
the module's init() function has become the start() of the process. The module
API has changed to:

    setup(task, process, conf)
    start(task, data)

As a direct benefit of the PROCESS_CREATED message, the clone(2) of processes
using pid namespaces no longer needs to create a pipe to make the child block
until the parent sets up the uid/gid mappings, nor does it need to receive the
child pid.
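A minimal sketch of the abstraction described above; only the
setup/start/prefork hooks and the created/ready states come from this message,
all other names are stand-ins for the real nxt_* types.

    /* Sketch only; names other than setup/start/prefork are assumptions. */
    typedef struct task_s      task_t;      /* stand-in for nxt_task_t    */
    typedef struct process_s   process_t;   /* stand-in for nxt_process_t */
    typedef struct mem_pool_s  mem_pool_t;  /* stand-in for nxt_mp_t      */

    typedef enum {
        PROCESS_STATE_CREATED,    /* setup() done, external setup pending */
        PROCESS_STATE_READY       /* may proceed straight to start()      */
    } process_state_t;

    typedef struct {
        /* per-process setup; may set the state to created or ready      */
        int  (*setup)(task_t *task, process_t *process);
        /* entry point once setup is complete (the module's former init()
           for applications)                                             */
        int  (*start)(task_t *task, void *process_data);
        /* called in the main process right before fork()                */
        int  (*prefork)(task_t *task, process_t *process, mem_pool_t *mp);
    } process_hooks_t;

With this shape, the main process calls prefork(), forks, and the new process
runs setup(); if setup() reports the created state, the new process sends
MSG_PROCESS_CREATED to the main process, which performs the external setup
(e.g. uid/gid mappings, rootfs) before the process proceeds to start().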
|
|
This aims to avoid stream id clashes after router restart.
|
|
One of the ways to detect Unit's startup and subsequent readiness to accept
commands relies on waiting for the control socket file to be created.
Earlier, it was unreliable due to a race condition between the client's
connect() and the daemon's listen() calls after the socket's bind() call.
Now, Unix domain listening sockets are created by nxt_listen_socket_create()
as follows:

    s = socket();
    unlink("path/to/socket.tmp");
    bind(s, "path/to/socket.tmp");
    listen(s);
    rename("path/to/socket.tmp", "path/to/socket");

This eliminates the window during which the socket file already exists but
nobody is listening on it yet, and therefore prevents the condition described
above.
Also, it allows reliably detecting whether the socket is being used or simply
wasn't cleaned after the daemon stopped abruptly. A successful connection to
the socket file means the daemon has been started; otherwise, the file can be
overwritten.
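For reference, a more complete sketch of the same sequence in C; error
handling is omitted and the helper name is made up, only the
socket/unlink/bind/listen/rename order comes from the description above.

    /* Sketch of the create-then-rename sequence; error checks omitted. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int
    listen_unix(const char *path, const char *tmp_path)
    {
        int                 s;
        struct sockaddr_un  sa;

        s = socket(AF_UNIX, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, tmp_path, sizeof(sa.sun_path) - 1);

        unlink(tmp_path);                /* remove a stale temporary file */
        bind(s, (struct sockaddr *) &sa, sizeof(sa));
        listen(s, 511);

        rename(tmp_path, path);          /* publish an already-listening
                                            socket atomically            */
        return s;
    }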
|
|
|
|
This closes #386 on GitHub.
|
|
This is required to avoid include cycles, as some nxt_clone_* functions
depend on the credential structures, but nxt_process depends on clone
structures.
|
|
Introduces the functions nxt_process_init_create() and
nxt_process_init_creds_set().
|
|
- Introduced nxt_runtime_process_port_create().
- Moved nxt_process_use() from nxt_runtime.c into nxt_process.c.
- Renamed nxt_runtime_process_remove_pid() to nxt_runtime_process_remove().
- Made some public functions static.
This closes #327 issue on GitHub.
|
|
This avoids memory leak reports from the address sanitizer.
|
|
|
|
There was a typo: nxt_queue_head() was used instead of nxt_queue_first() in
the connection iteration loop. This prevented idle connections from being
closed on quit.
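In sketch form (assuming, as the fix implies, that nxt_queue_head() yields the
list sentinel while nxt_queue_first() yields the first element), the broken
loop looked roughly like this:

    /* Rough shape of the bug; not the exact Unit code.                  */
    for (lnk = nxt_queue_head(&idle);       /* sentinel, equal to tail   */
         lnk != nxt_queue_tail(&idle);      /* so the condition fails    */
         lnk = nxt_queue_next(lnk))         /* immediately and no idle   */
    {                                       /* connection is ever closed */
        /* close the idle connection ... */
    }
    /* the fix starts the walk at nxt_queue_first(&idle) instead */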
This closes #334 issue on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
|
|
|
|
This closes #233 issue on GitHub.
Thanks to 洪志道 (Hong Zhi Dao).
|
|
|
|
There is nothing specific to the Go language here. This type of application
object can be used to run any external application that uses the libunit API.
|
|
|
|
The Unit master process restarts the router if the router dies unexpectedly.
The new router process receives the configuration from the controller and
starts the configured applications. Information about running applications
cannot be transferred to the new router because there is currently no
persistent application identifier. To avoid orphaned application processes
started by the dead router, the master process stops all currently running
applications once it receives SIGCHLD for the router process.
|
|
|
|
|
|
This and the previous commit close #131 issue on GitHub.
|
|
The bug appeared in 5cc5002a788e when the process type was converted to a
bitmask. This commit reverts the type back to a number.
This commit is related to #131 issue on GitHub.
|
|
|
|
This also fixes #101 issue on GitHub. The function previously used to parse
the IPv6 address of the control socket was broken. Now a working function is
used instead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
CID 200496
CID 200494
CID 200490
CID 200489
CID 200483
CID 200482
CID 200472
CID 200465
|
|
Two different router threads may send different requests to a single
application worker. In this case, shared memory fds from the worker to the
router will be sent over two different router ports. These fds will be
received and processed by different threads in any order.
This patch makes it possible to add incoming shared memory segments in
arbitrary order. Additionally, an array and memory pool are no longer used to
store segments because of the pool's single-threaded nature. A custom
array-like structure, nxt_port_mmaps_t, is introduced.
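A minimal sketch of such a container: it indexes segments by their id, so they
can be recorded in any order, and it carries its own lock instead of relying
on a single-threaded memory pool. The names and layout are illustrative, not
the real nxt_port_mmaps_t.

    /* Illustrative only; error checks omitted, not the real layout.     */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        void   *addr;
        size_t  size;
    } mmap_slot_t;

    typedef struct {
        pthread_mutex_t  mutex;    /* segments may arrive on any thread   */
        mmap_slot_t     *slots;    /* heap-allocated, indexed by id       */
        uint32_t         nslots;
    } port_mmaps_t;

    static void
    port_mmaps_set(port_mmaps_t *m, uint32_t id, void *addr, size_t size)
    {
        pthread_mutex_lock(&m->mutex);

        if (id >= m->nslots) {             /* grow the array to cover id  */
            m->slots = realloc(m->slots, (id + 1) * sizeof(mmap_slot_t));
            memset(&m->slots[m->nslots], 0,
                   (id + 1 - m->nslots) * sizeof(mmap_slot_t));
            m->nslots = id + 1;
        }

        m->slots[id].addr = addr;          /* the id picks the slot, so   */
        m->slots[id].size = size;          /* arrival order is irrelevant */

        pthread_mutex_unlock(&m->mutex);
    }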
|
|
|
|
|
|
This helps to decouple process removal from port memory pool cleanups.
|
|
|
|
A use counter helps to simplify the logic around freeing ports and
applications.
A port 'post' function is introduced to simplify posting the execution of a
particular function to the port's original engine thread.
The write message queue is protected by a mutex, which makes the port write
operation thread safe.
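A condensed sketch of the three mechanisms; the field names are illustrative,
not the actual nxt_port_t layout.

    /* Illustrative sketch; not the actual nxt_port_t.                   */
    #include <pthread.h>

    typedef struct msg_s  msg_t;
    struct msg_s { msg_t *next; /* ... payload ... */ };

    typedef void (*port_work_t)(void *data);

    typedef struct port_s {
        int              use_count;     /* the port/app is freed only when
                                           this drops to zero            */
        pthread_mutex_t  write_mutex;   /* makes writes thread safe      */
        msg_t           *write_queue;
        /* post(): run fn on the port's original engine thread           */
        void           (*post)(struct port_s *port, port_work_t fn,
                               void *data);
    } port_t;

    static void
    port_write(port_t *port, msg_t *m)
    {
        pthread_mutex_lock(&port->write_mutex);
        m->next = port->write_queue;          /* enqueue under the lock   */
        port->write_queue = m;
        pthread_mutex_unlock(&port->write_mutex);
    }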
|
|
To allow a port to be used from different threads, the first step is to avoid
using the port's memory pool for temporary allocations required to send data
through the port, including but not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port's memory pool for incoming buffer allocations
because the receive operation is bound to a single thread.
|
|
The memory pool is not used by port_hash, and it was a mistake to pass it into
the 'add' and 'remove' functions. port_hash entries are allocated from the
heap.
|
|
It's not used anyway, but breaks building with musl.
This closes issue #5 on GitHub.
|
|
Now configuration survives server reloads.
|
|
|
|
|
|
|
|
|