Age | Commit message | Author | Files | Lines |
|
Splitting the process type connectivity matrix into 'keep ports' and 'send
ports' matrices; the 'keep ports' matrix is used to clean up unnecessary ports
after forking a new process, and the 'send ports' matrix determines which
process types expect to receive the created process's ports.
Unfortunately, the original single connectivity matrix no longer works because
of an application stop delay caused by prototypes. Existing applications
should not get the new router port at the moment.
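A hypothetical sketch of what the split might look like; the type names,
matrix names, and values below are illustrative, not Unit's actual code
(the router row reflects the note above that existing applications don't
receive the new router port):

    typedef enum { T_MAIN, T_ROUTER, T_APP, T_MAX } proc_type_t;

    /* keep_ports[new][owner]: after forking a process of type 'new', does
     * it keep the inherited port of a process of type 'owner'?  Ports
     * marked 0 are closed right after fork. */
    static const unsigned char  keep_ports[T_MAX][T_MAX] = {
        /* new \ owner  main  router  app */
        /* main    */ {   1,     1,    1 },
        /* router  */ {   1,     0,    1 },
        /* app     */ {   1,     1,    0 },
    };

    /* send_ports[created][target]: when a process of type 'created'
     * appears, is its new port sent to processes of type 'target'? */
    static const unsigned char  send_ports[T_MAX][T_MAX] = {
        /* created \ target  main  router  app */
        /* main    */ {   0,     0,    0 },
        /* router  */ {   1,     0,    0 },  /* existing apps don't get
                                                the new router port */
        /* app     */ {   1,     1,    0 },
    };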
|
|
The application process now starts with its shared port (and queue) already
configured, but it still waits for a PORT_ACK message from the router before
it starts processing requests (the so-called "ready state").
Waiting for the router's confirmation is necessary: otherwise, the application
may produce a response and send it to the router before the router has any
information about the application process. This is a subject for further
optimization.
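A minimal sketch of the handshake, assuming a simplified message header;
apart from PORT_ACK (named above), all names and values are hypothetical:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    enum { MSG_PORT_ACK = 1 };      /* hypothetical numeric value */

    typedef struct {
        uint32_t  type;
    } msg_header_t;

    /* Block until the router acknowledges our shared port; only then
     * start reading requests from the queue. */
    static int
    wait_for_port_ack(int port_fd)
    {
        msg_header_t  h;
        ssize_t       n;

        for ( ;; ) {
            n = recv(port_fd, &h, sizeof(h), 0);
            if (n < 0) {
                return -1;
            }
            if (n == (ssize_t) sizeof(h) && h.type == MSG_PORT_ACK) {
                return 0;   /* "ready state": the router knows about us */
            }
            /* ignore anything else until acknowledged */
        }
    }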
|
|
|
|
This enables the reuse of process creation functions.
|
|
The shared app queue takes more memory than the port memory. To unmap all
memory pages, the correct size needs to be specified in the munmap() call;
otherwise, 4 MB of memory leaked on each configured application removal.
The issue was introduced in 1d84b9e4b459.
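A sketch of the fix: munmap() must be given the same length that was mapped,
or the excess pages stay mapped. The 4 MB figure comes from the commit text;
PORT_MEM_SIZE and the names are illustrative.

    #include <sys/mman.h>

    #define PORT_MEM_SIZE   (16 * 1024)        /* illustrative */
    #define APP_QUEUE_SIZE  (4 * 1024 * 1024)  /* shared app queue: 4 MB */

    static void
    app_queue_destroy(void *mem)
    {
        /* Bug: munmap(mem, PORT_MEM_SIZE) unmapped only the first pages,
         * leaking ~4 MB on each configured application removal.
         * Fix: pass the size that was actually mapped. */
        munmap(mem, APP_QUEUE_SIZE);
    }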
|
|
The two consecutive fd and fd2 fields are replaced with an array.
|
|
The goal is to minimize the number of syscalls needed to deliver a message.
|
|
This is the port shared between all application processes, which use it to
pass requests for processing. Using it significantly simplifies the request
processing code in the router. The drawback is two more file descriptors per
configured application and more complex libunit message wait/read code.
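A toy illustration of the idea (not Unit's code): both workers recv() from
the same datagram socket, so the "router" can write a request once without
choosing a worker.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        int  i, sp[2];

        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sp) != 0) {
            return 1;
        }

        for (i = 0; i < 2; i++) {
            if (fork() == 0) {                    /* application process */
                char     buf[64];
                ssize_t  n = recv(sp[0], buf, sizeof(buf), 0);

                if (n > 0) {
                    printf("worker %d got: %.*s\n", i, (int) n, buf);
                }
                _exit(0);
            }
        }

        /* "router": write each request once into the shared port; exactly
         * one of the competing workers picks it up. */
        send(sp[1], "req-1", 5, 0);
        send(sp[1], "req-2", 5, 0);

        wait(NULL);
        wait(NULL);
        return 0;
    }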
|
|
The process abstraction has changed to:
setup(task, process)
start(task, process_data)
prefork(task, process, mp)
The prefork() call occurs in the main process right before fork.
The file src/nxt_main_process.c is completely free of process-specific
logic.
The creation of a process now supports a PROCESS_CREATED state. The
setup() function of each process can set its state to either created
or ready. If created, a MSG_PROCESS_CREATED message is sent to the main
process, where external setup can be done (required for rootfs under
a container).
The core processes (discovery, controller, and router) don't need
external setup, so they all proceed to their start() function
straight away.
In the case of applications, the module is loaded at process setup()
time, and the module's init() function has become the start() of the
process.
The module API has changed to:
setup(task, process, conf)
start(task, data)
As a direct benefit of the PROCESS_CREATED message, the clone(2) of
processes using pid namespaces no longer needs to create a pipe to make
the child block until the parent sets up the uid/gid mappings, nor does
it need to receive the child pid.
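A condensed sketch of the callback set described above; the struct and type
names are illustrative stand-ins, not the exact Unit declarations:

    typedef struct task     task_t;      /* opaque stand-ins */
    typedef struct process  process_t;
    typedef struct mp       mp_t;        /* memory pool */

    typedef enum {
        PROCESS_STATE_CREATED,   /* needs external setup in main process */
        PROCESS_STATE_READY,     /* may proceed straight to start()      */
    } process_state_t;

    typedef struct {
        int  (*setup)(task_t *task, process_t *process);
        int  (*start)(task_t *task, void *process_data);
        int  (*prefork)(task_t *task, process_t *process, mp_t *mp);
                         /* prefork() runs in main right before fork */
    } process_init_t;

    /* setup() chooses the state: CREATED makes the child announce itself
     * with MSG_PROCESS_CREATED so the main process can do external setup
     * (e.g. rootfs preparation, uid/gid mappings) before start() runs;
     * READY skips that round trip, as the core processes do. */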
|
|
Introduces the functions nxt_process_init_create() and
nxt_process_init_creds_set().
|
|
- Introduced nxt_runtime_process_port_create().
- Moved nxt_process_use() into nxt_process.c from nxt_runtime.c.
- Renamed nxt_runtime_process_remove_pid() as nxt_runtime_process_remove().
- Some public functions were made static.
This closes #327 issue on GitHub.
|
|
This closes #312 issue on GitHub.
|
|
|
|
|
|
This issue was introduced in a libunit commit (e0f0cd7d244a). All port
sockets in the application should be in blocking mode, whereas Unit itself
operates on non-blocking sockets.
Having non-blocking sockets in the application may cause send errors during
intensive response packet generation.
See https://mailman.nginx.org/pipermail/unit/2018-October/000080.html.
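A sketch of the kind of fix involved, using the standard fcntl() API to
clear O_NONBLOCK on an application-side socket (the function name is
illustrative):

    #include <fcntl.h>

    /* Put an application-side port socket into blocking mode, so send()
     * waits instead of failing when the socket buffer is full. */
    static int
    socket_set_blocking(int fd)
    {
        int  flags = fcntl(fd, F_GETFL, 0);

        if (flags == -1) {
            return -1;
        }
        return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
    }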
|
|
|
|
For accurate app descriptor release, it is required to track the app's use
count. The use count is increased when:
- the app is linked to the configuration's app queue;
- a socket conf stores a pointer to the app;
- a request to start an app process is posted to the router service thread.
An application port has a pointer to the app, but it does not increase the
use count, to avoid a reference-count loop.
A timer needs a pointer to the nxt_timer_t stored in the engine's timers
tree. nxt_timer_t now resides in nxt_app_joint_t and does not lock the
application. The start-process port RPC handler is also linked to
nxt_app_joint_t.
The app joint (nxt_app_joint_t) is a 'weak pointer':
- single-threaded;
- use-counted;
- stores a pointer to nxt_app_t (which can be NULL).
nxt_app_t has a pointer to nxt_app_joint_t and updates the joint's pointer
to the app.
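An illustrative model of this weak-pointer scheme; the field and function
names are simplified stand-ins for the Unit structures:

    typedef struct app_s  app_t;

    typedef struct {
        unsigned  use_count;   /* the joint itself is use-counted        */
        app_t    *app;         /* may become NULL when the app is freed; */
                               /* single-threaded access, so no locking  */
    } app_joint_t;

    struct app_s {
        unsigned      use_count;  /* configuration link, socket conf,  */
                                  /* posted start requests, ...        */
        app_joint_t  *joint;      /* the app clears joint->app on its  */
                                  /* own release                       */
    };

    /* Timers and start-process RPC handlers hold the joint, not the app,
     * so a late callback can safely detect that the app is gone: */
    static app_t *
    app_joint_get(app_joint_t *j)
    {
        return j->app;   /* NULL if the application was already released */
    }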
|
|
This and previous commit close #131 issue on GitHub.
|
|
The bug appeared in 5cc5002a788e when the process type was converted to a
bitmask. This commit reverts the type back to a number.
This commit is related to #131 issue on GitHub.
|
|
This is required to avoid crashes and memory leaks on Unit exit.
|
|
|
|
|
|
- Pre-fork 'processes.spare' application processes;
- fork more processes to keep 'processes.spare' processes idle;
- fork on demand up to the 'processes.max' count;
- scale down idle application processes above 'processes.spare' after
'processes.idle_timeout';
- the number of concurrently started application processes is also limited
by 'processes.spare' (or 1, if spare is 0); see the sketch below.
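The rules above can be condensed into a small decision function; the
following is an illustrative sketch under hypothetical names, not Unit's
implementation:

    typedef struct {
        unsigned  max;      /* processes.max   */
        unsigned  spare;    /* processes.spare */
    } app_limits_t;

    /* How many new processes to fork right now, given current totals. */
    static unsigned
    processes_to_fork(const app_limits_t *lim, unsigned total,
        unsigned idle, unsigned starting)
    {
        unsigned  want, cap;

        /* keep 'spare' idle processes available */
        want = (idle < lim->spare) ? lim->spare - idle : 0;

        /* never exceed 'max' processes in total */
        if (total + want > lim->max) {
            want = (total < lim->max) ? lim->max - total : 0;
        }

        /* concurrent starts are capped by 'spare' (or 1 when spare is 0) */
        cap = (lim->spare > 0) ? lim->spare : 1;
        if (starting >= cap) {
            return 0;
        }
        if (want > cap - starting) {
            want = cap - starting;
        }

        return want;
    }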
|
|
|
|
The application timeout limits the maximum time a worker may spend
processing a particular request, not including the time required to start
the worker, time spent in the request queue, etc.
|
|
CID 200496
CID 200494
CID 200490
CID 200489
CID 200483
CID 200482
CID 200472
CID 200465
|
|
- The main process should be connected to all other processes.
- The controller should be connected to the router.
- The router should be connected to the controller and all workers.
- Workers should be connected to the router's worker thread ports only.
This filtering helps to avoid unnecessary communication and various errors
during massive application worker stops/restarts (see the sketch below).
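A direct, illustrative encoding of these four rules (not the actual Unit
code):

    typedef enum { P_MAIN, P_CONTROLLER, P_ROUTER, P_WORKER } ptype_t;

    static int
    ports_should_connect(ptype_t a, ptype_t b)
    {
        if (a == P_MAIN || b == P_MAIN) {
            return 1;                       /* main talks to everyone */
        }
        if ((a == P_CONTROLLER && b == P_ROUTER)
            || (a == P_ROUTER && b == P_CONTROLLER))
        {
            return 1;                       /* controller <-> router */
        }
        if ((a == P_ROUTER && b == P_WORKER)
            || (a == P_WORKER && b == P_ROUTER))
        {
            return 1;                       /* router worker-thread */
        }                                   /* ports <-> workers     */
        return 0;                           /* e.g. worker <-> worker */
    }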
|
|
- Each sendmsg() transmits no more than port->max_size bytes of payload data.
- Longer buffers are fragmented and sent using multiple sendmsg() calls (see
the sketch below).
- On the receive side, the buffers are connected in a chain.
- The number of handler calls is the same as the number of
nxt_port_socket_write() calls.
- The nxt_buf_make_plain() function is introduced to make a single plain
buffer from the chain.
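A simplified sketch of the sending side, assuming a plain byte buffer and
no control headers; not the actual Unit code:

    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int
    port_write_fragmented(int fd, const char *data, size_t len,
        size_t max_size)
    {
        while (len > 0) {
            size_t         part = (len < max_size) ? len : max_size;
            struct iovec   iov = { .iov_base = (void *) data,
                                   .iov_len = part };
            struct msghdr  mh;

            memset(&mh, 0, sizeof(mh));
            mh.msg_iov = &iov;
            mh.msg_iovlen = 1;

            if (sendmsg(fd, &mh, 0) < 0) {   /* the receiver chains the */
                return -1;                   /* fragments back together */
            }

            data += part;
            len -= part;
        }

        return 0;
    }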
|
|
This helps to decouple process removal from port memory pool cleanups.
|
|
The original implementation of remove-pid proxying to worker engines was
overcomplicated. The memory pool and the two engine posts (there and back
again) are optimized out and replaced with a brand-new nxt_port_post() call.
|
|
The use counter helps to simplify the logic around port and application
freeing.
A port 'post' function is introduced to simplify executing a particular
function on the original port engine's thread.
The write message queue is protected by a mutex, which makes the port write
operation thread-safe.
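A sketch of such a thread-safe write path, assuming illustrative names; the
real port structure is more involved:

    #include <pthread.h>

    typedef struct msg_s {
        struct msg_s  *next;
    } msg_t;

    typedef struct {
        unsigned         use_count;    /* port and app lifetime tracking */
        pthread_mutex_t  write_mutex;  /* guards the write message queue */
        msg_t           *write_head;
        msg_t          **write_tail;
    } port_t;

    static void
    port_init(port_t *port)
    {
        port->use_count = 1;
        pthread_mutex_init(&port->write_mutex, NULL);
        port->write_head = NULL;
        port->write_tail = &port->write_head;
    }

    /* Safe to call from any thread; the port's own engine thread drains
     * the queue and does the actual socket writes. */
    static void
    port_write_enqueue(port_t *port, msg_t *m)
    {
        pthread_mutex_lock(&port->write_mutex);
        m->next = NULL;
        *port->write_tail = m;
        port->write_tail = &m->next;
        pthread_mutex_unlock(&port->write_mutex);

        /* then 'post' a drain function to the port engine's thread */
    }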
|
|
To allow the port to be used from different threads, the first step is to
avoid using the port's memory pool for the temporary allocations required to
send data through the port, including but not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port memory pool for incoming buffer allocations
because the receive operation is bound to a single thread.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To disable implicit completion, the handler should reset the msg->buf field.
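A sketch of a handler taking ownership of the received buffer chain; the
struct here is a simplified stand-in for the real receive-message type:

    typedef struct buf_s  buf_t;     /* stand-in for the buffer chain */

    typedef struct {
        buf_t  *buf;   /* completed implicitly after the handler
                          returns, unless the handler resets it */
    } recv_msg_t;

    static buf_t  *saved;            /* keep for later, complete manually */

    static void
    my_data_handler(recv_msg_t *msg)
    {
        saved = msg->buf;
        msg->buf = NULL;             /* disable implicit completion */
    }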
|
|
|
|
Usage:
1. Register a handler on the incoming port with
nxt_port_rpc_register_handler().
2. Use the return value as the stream identifier for the next
nxt_port_socket_write().
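A hypothetical sketch of this flow with stubbed, simplified signatures; the
real nxt_port_rpc_register_handler() and nxt_port_socket_write() take more
arguments (task, peer pid, error handler, message type constants, etc.):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct port_s  port_t;
    typedef void (*rpc_handler_t)(void *data);

    static uint32_t  next_stream = 1;

    /* Stub: the real function also stores the handler keyed by the
     * returned stream id. */
    static uint32_t
    port_rpc_register_handler(port_t *port, rpc_handler_t handler,
        void *data)
    {
        (void) port; (void) handler; (void) data;
        return next_stream++;
    }

    static int
    port_socket_write(port_t *port, int type, uint32_t stream,
        const void *payload, size_t size)
    {
        (void) port; (void) type; (void) stream;
        (void) payload; (void) size;
        return 0;    /* stub */
    }

    static void
    my_reply_handler(void *data)
    {
        (void) data;   /* runs when the reply with our stream id arrives */
    }

    static void
    request_something(port_t *port)
    {
        /* 1. register the reply handler on the incoming port ... */
        uint32_t  stream = port_rpc_register_handler(port,
                                                     my_reply_handler, NULL);

        /* 2. ... and tag the outgoing request with that stream id */
        port_socket_write(port, /* type */ 1, stream, "req", 3);
    }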
|
|
nxt_req_conn_link_t is still used to look up a connection by request id.
A new nxt_req_app_link_t (ra) is allocated from conn->mem_pool using
mp_retain().
The ra is stored in app->requests if there is no free worker to process the
request.
|
|
Used for connection memory pool cleanup, since the pool can be used by
buffers.
Used for the port memory pool to safely destroy a linked process.
|
|
|
|
An application process start is requested via a DATA message from the router
to the master. The master notifies the router with a NEW_PORT message after
the worker process becomes ready.
|
|
A new port message type is introduced: NXT_PORT_MSG_REMOVE_PID. The default
handler removes the process description from nxt_runtime_t along with all
its ports, incoming and outgoing mmaps, etc.
|
|
There is a case in the router where we use a port in the router's connection
thread. Buffers are allocated within the connection memory pool, which can
be used only in that router thread. The sendmsg() call can be postponed to
the main router thread, and the completion handler will compare the current
engine and post itself to the correct engine.
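A sketch of that completion-handler pattern with stubbed engine functions;
the names are illustrative:

    typedef struct engine_s  engine_t;

    typedef struct work_s {
        void      (*handler)(struct work_s *w);
        engine_t   *owner;    /* engine whose memory pool owns the bufs */
    } work_t;

    /* Stubs standing in for the engine API. */
    static engine_t  *this_engine;

    static engine_t *
    current_engine(void)
    {
        return this_engine;
    }

    static void
    engine_post(engine_t *e, work_t *w)
    {
        (void) e;
        w->handler(w);   /* stub: really enqueues w on e's thread */
    }

    /* sendmsg() completion: make sure buffer release happens on the
     * engine (thread) that owns the connection memory pool. */
    static void
    send_completion(work_t *w)
    {
        if (current_engine() != w->owner) {
            engine_post(w->owner, w);    /* re-post to the owning engine */
            return;
        }

        /* now on the right thread: safe to free the connection pool bufs */
    }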
|
|
- request-to-connection mapping in the engine;
- request queue in the connection;
- engine port creation;
- connected ports hash for each process;
- engine port data message processing (app responses).
|
|
|
|
|