Age | Commit message | Author | Files | Lines |
|
This issue was introduced in libunit commit e0f0cd7d244a. All port sockets in
an application should be in blocking mode, whereas Unit itself operates on
non-blocking sockets.
Having non-blocking sockets in an application may cause send errors during
intensive response packet generation.
See https://mailman.nginx.org/pipermail/unit/2018-October/000080.html.
|
|
For accurate app descriptor release, it is required to track the app's use
count. The use count is increased when:
- the app is linked to the configuration app queue;
- a socket conf stores a pointer to the app;
- a request to start an app process is posted to the router service thread.
The application port has a pointer to the app, but it does not increase the
use count, to avoid a reference-count loop.
A timer needs a pointer to nxt_timer_t, which is stored in the engine timers
tree. nxt_timer_t now resides in nxt_app_joint_t and does not lock the
application.
The start-process port RPC handler is also linked to nxt_app_joint_t.
The app joint (nxt_app_joint_t) is a 'weak pointer':
- single threaded;
- use countable;
- stores a pointer to nxt_app_t (which can be NULL).
nxt_app_t has a pointer to nxt_app_joint_t and updates the joint's app pointer.
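
The idea can be illustrated with a minimal, self-contained sketch; the
simplified types, field names, and helpers below are hypothetical and do not
reproduce the real nxt_app_joint_t layout:

#include <stddef.h>
#include <stdlib.h>

typedef struct app_s  app_t;

typedef struct {
    size_t  use_count;  /* held by the timer, start-process RPC handler, etc. */
    app_t   *app;       /* weak reference; becomes NULL once the app is freed */
} app_joint_t;

struct app_s {
    app_joint_t  *joint;  /* back pointer; does not bump use_count */
};

static void
app_joint_use(app_joint_t *joint)
{
    joint->use_count++;   /* single-threaded, so no atomics are needed */
}

static void
app_joint_release(app_joint_t *joint)
{
    if (--joint->use_count == 0) {
        free(joint);
    }
}

/* Freeing the app only clears the joint's pointer; holders of the joint
   then observe app == NULL instead of dereferencing freed memory. */
static void
app_free(app_t *app)
{
    app->joint->app = NULL;
    free(app);
}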
|
|
This and the previous commit close issue #131 on GitHub.
|
|
The bug appeared in 5cc5002a788e, when the process type was converted to a
bitmask. This commit reverts the type back to a number.
This commit is related to issue #131 on GitHub.
|
|
This is required to avoid crashes and memory leaks on Unit exit.
|
|
- Pre-fork 'processes.spare' application processes;
- fork more processes to keep 'processes.spare' idle processes;
- fork on demand up to the 'processes.max' count;
- scale down idle application processes above 'processes.spare' after
'processes.idle_timeout';
- the number of concurrently started application processes is also limited
by 'processes.spare' (or 1, if spare is 0).
|
|
The application timeout limits the maximum time a worker may spend processing
a particular request. It does not include the time required to start the
worker, time spent in the request queue, etc.
|
|
CID 200496
CID 200494
CID 200490
CID 200489
CID 200483
CID 200482
CID 200472
CID 200465
|
|
- Main process should be connected to all other processes.
- Controller should be connected to Router.
- Router should be connected to Controller and all Workers.
- Workers should be connected to Router worker thread ports only.
This filtering helps to avoid unnecessary communication and various errors
during massive application worker stops and restarts.
|
|
- Each sendmsg() transmits no more than port->max_size bytes of payload data.
- Longer buffers are fragmented and sent using multiple sendmsg() calls.
- On the receive side, buffers are connected into a chain.
- The number of handler calls is the same as the number of
nxt_port_socket_write() calls.
- The nxt_buf_make_plain() function is introduced to make a single plain
buffer from the chain.
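
As an illustration only, a receive-side handler might flatten a fragmented
chain with the new function. This sketch assumes a signature along the lines
of nxt_buf_make_plain(mp, chain, size) and the msg->buf, msg->size and
port->mem_pool fields; my_data_handler itself is hypothetical:

static void
my_data_handler(nxt_task_t *task, nxt_port_recv_msg_t *msg)
{
    nxt_buf_t  *b;

    /* msg->buf may arrive as a chain of fragments; rebuild one plain
       buffer before parsing. */
    b = nxt_buf_make_plain(msg->port->mem_pool, msg->buf, msg->size);
    if (b == NULL) {
        return;
    }

    /* ... parse the contiguous region b->mem.pos .. b->mem.free ... */
}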
|
|
This helps to decouple process removal from port memory pool cleanups.
|
|
The implementation of remove-pid proxying to worker engines was originally
overcomplicated. The memory pool and two engine posts (there and back again)
are optimized out and replaced with the brand-new nxt_port_post() call.
|
|
A use counter helps to simplify the logic around port and application freeing.
A port 'post' function is introduced to simplify executing a particular
function in the original port engine's thread.
The write message queue is protected by a mutex, which makes the port write
operation thread-safe.
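
For illustration, the intended pattern might look like the sketch below; the
exact signature nxt_port_post(task, port, handler, data) and the handler
prototype are assumed from the description above, and both functions here are
hypothetical:

static void
my_port_cleanup_handler(nxt_task_t *task, nxt_port_t *port, void *data)
{
    /* Runs in the engine thread that owns 'port'; port state touched only
       by that thread can be modified here without locking. */
}

static void
my_request_cleanup(nxt_task_t *task, nxt_port_t *port)
{
    /* May be called from any thread: instead of touching the port
       directly, queue the work to the port's original engine thread. */
    (void) nxt_port_post(task, port, my_port_cleanup_handler, NULL);
}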
|
|
To allow using a port from different threads, the first step is to avoid using
the port's memory pool for the temporary allocations required to send data
through the port. This includes, but is not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port memory pool for incoming buffer allocations
because the receive operation is bound to a single thread.
|
|
To disable implicit completion, the handler should reset the msg->buf field.
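For example (a hedged sketch; the handler name and the place where the buffer
is stashed are illustrative), a handler that wants to keep the incoming
buffers alive could do:

static nxt_buf_t  *my_saved_buf;

static void
my_keep_buf_handler(nxt_task_t *task, nxt_port_recv_msg_t *msg)
{
    /* Take ownership of the buffer chain for later processing ... */
    my_saved_buf = msg->buf;

    /* ... and reset msg->buf so the port layer skips implicit completion. */
    msg->buf = NULL;
}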
|
|
Usage:
1. Register handlers on the incoming port with nxt_port_rpc_register_handler().
2. Use the return value as the stream identifier for the next
nxt_port_socket_write() call.
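
Put together, the two steps might look like the sketch below; the parameter
list of nxt_port_rpc_register_handler() and the handler prototypes are
assumptions, the write call mirrors the six-argument form shown later in this
log, and all my_* names are hypothetical:

static void
my_reply_handler(nxt_task_t *task, nxt_port_recv_msg_t *msg, void *data)
{
    /* The response for this stream arrived. */
}

static void
my_error_handler(nxt_task_t *task, nxt_port_recv_msg_t *msg, void *data)
{
    /* The peer exited or an error occurred for this stream. */
}

static void
my_send_request(nxt_task_t *task, nxt_port_t *my_port, nxt_port_t *peer_port,
    nxt_buf_t *b)
{
    uint32_t  stream;

    /* 1. Register handlers on the port that will receive the answer. */
    stream = nxt_port_rpc_register_handler(task, my_port,
                                           my_reply_handler, my_error_handler,
                                           peer_port->pid, NULL);

    /* 2. Use the returned stream as the identifier for the request. */
    (void) nxt_port_socket_write(task, peer_port, NXT_PORT_MSG_DATA,
                                 -1, stream, b);
}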
|
|
nxt_req_conn_link_t is still used to look up the connection by request id.
The new nxt_req_app_link_t (ra) is allocated from conn->mem_pool using
mp_retain().
ra is stored in app->requests if there is no free worker to process the
request.
|
|
Used for connection memory pool cleanup, since that pool can be used by
buffers.
Used for the port memory pool, to safely destroy the linked process.
|
|
The application process start request is a DATA message from the router to
the master. The master notifies the router via a NEW_PORT message once the
worker process becomes ready.
|
|
A new port message type, NXT_PORT_MSG_REMOVE_PID, is introduced. The default
handler removes the process description from nxt_runtime_t along with all its
ports, incoming and outgoing mmaps, etc.
|
|
There is a case in the router where a port is used in a router connection
thread. Buffers are allocated within the connection memory pool, which can be
used only in that router thread. sendmsg() can be postponed to the main
router thread, and the completion handler will compare the current engine and
post itself to the correct engine.
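
A hedged sketch of that completion pattern, using a hypothetical context
struct; the nxt_work_t field names and the nxt_event_engine_post() call are
assumptions about the surrounding code:

typedef struct {
    nxt_event_engine_t  *engine;  /* engine owning the connection mem pool */
    nxt_work_t          work;     /* preallocated work item for re-posting */
} my_completion_ctx_t;

static void
my_buf_completion(nxt_task_t *task, void *obj, void *data)
{
    my_completion_ctx_t  *ctx;

    ctx = data;

    if (task->thread->engine != ctx->engine) {
        /* Wrong engine: post this very handler to the engine that owns
           the connection memory pool and return. */
        ctx->work.handler = my_buf_completion;
        ctx->work.task = task;
        ctx->work.obj = obj;
        ctx->work.data = data;

        nxt_event_engine_post(ctx->engine, &ctx->work);
        return;
    }

    /* Now running in the owning engine; safe to release the buffers here. */
}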
|
|
- request-to-connection mapping in the engine;
- request queue in the connection;
- engine port creation;
- connected ports hash for each process;
- engine port data message processing (app responses).
|
|
Write socket close() call moved out from nxt_port_create().
|
|
Usage:
b = nxt_port_mmap_get_buf(task, port, size);
b->mem.free = nxt_cpymem(b->mem.free, data, size);
nxt_port_socket_write(task, port, NXT_PORT_MSG_DATA, -1, 0, b);
|
|
The cycle has been renamed to the runtime.
|