|
The Go request registration should be removed before the C request memory
is freed. The C request address is used as a key in the Go map, and freed
memory can instantly be reused for another request, so the older request
registration must be removed by that point to avoid collisions.
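
A minimal, self-contained C sketch of the ordering requirement (illustrative
code, not the actual Unit or Go sources): the registry is keyed by the
request's address, so the entry has to be dropped before free(), because the
allocator may hand out the same address again immediately.

    #include <stdlib.h>

    #define MAX_REQS  16

    typedef struct { int id; } request_t;

    static request_t  *registry[MAX_REQS];   /* toy "map" keyed by pointer */

    static void
    registry_remove(request_t *r)
    {
        for (int i = 0; i < MAX_REQS; i++) {
            if (registry[i] == r) {
                registry[i] = NULL;
                return;
            }
        }
    }

    int
    main(void)
    {
        request_t  *req = malloc(sizeof(request_t));

        registry[0] = req;       /* register the request by its address    */

        registry_remove(req);    /* must happen first ...                  */
        free(req);               /* ... because the next malloc() may return
                                    the very same address, and a stale
                                    registry entry would then collide with
                                    the new request's registration.        */
        return 0;
    }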
|
|
This closes issue #57 on GitHub.
|
|
The Go package build was broken by change 365:28b2a468be43.
|
|
Previously, the stored configuration wasn't reread on controller
process restart, which resulted in a segmentation fault.
|
|
- Main process should be connected to all other processes.
- Controller should be connected to Router.
- Router should be connected to Controller and all Workers.
- Workers should be connected only to the Router's worker thread ports.
This filtering helps to avoid unnecessary communication and various errors
during mass application worker stops and restarts.
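
A rough C sketch of the filtering rule above (the enum and the function are
illustrative, not the actual Unit code):

    typedef enum { NXT_MAIN, NXT_CONTROLLER, NXT_ROUTER, NXT_WORKER } ptype_t;

    /* Returns 1 if processes of types "a" and "b" should exchange ports. */
    static int
    should_connect(ptype_t a, ptype_t b)
    {
        if (a == NXT_MAIN || b == NXT_MAIN) {
            return 1;                          /* main talks to everyone     */
        }

        if ((a == NXT_CONTROLLER && b == NXT_ROUTER)
            || (a == NXT_ROUTER && b == NXT_CONTROLLER))
        {
            return 1;                          /* controller <-> router      */
        }

        if ((a == NXT_ROUTER && b == NXT_WORKER)
            || (a == NXT_WORKER && b == NXT_ROUTER))
        {
            return 1;       /* workers reach only router worker thread ports */
        }

        return 0;           /* e.g. worker <-> worker, controller <-> worker */
    }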
|
|
"All problems in computer science can be
solved by another level of indirection"
Butler Lampson
Completion handlers for application response buffers are executed after
the data is sent to the client. An application worker can be stopped right
after it sends its response buffers to the router, and stopping a worker
removes all of its data structures.
To prevent the shared memory segment from being unmapped prematurely, the
number of buffers that use it has to be counted. So instead of referencing
the shared memory directly, buffers reference an intermediate 'handler'
structure with a use counter and a pointer to the shared memory.
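
A hedged sketch of the indirection described above (assumed field and function
names, not the real Unit structures): buffers keep a pointer to a
reference-counted handler, and the segment is unmapped only when the last
buffer completes.

    #include <stdlib.h>
    #include <sys/mman.h>

    typedef struct {
        void    *mem;        /* mmap()ed shared memory segment             */
        size_t   size;
        int      use_count;  /* buffers still referencing the segment      */
    } shm_handler_t;

    typedef struct {
        shm_handler_t  *hdr; /* indirect reference instead of raw memory   */
        size_t          off;
        size_t          len;
    } shm_buf_t;

    void
    shm_handler_use(shm_handler_t *h)
    {
        h->use_count++;      /* a real implementation would need atomics   */
    }

    void
    shm_handler_release(shm_handler_t *h)
    {
        if (--h->use_count == 0) {
            munmap(h->mem, h->size);   /* safe: no buffer references it    */
            free(h);
        }
    }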
|
|
Two different router threads may send different requests to a single
application worker. In this case, the shared memory fds from the worker
to the router will be sent over two different router ports, and these fds
will be received and processed by different threads in any order.
This patch makes it possible to add incoming shared memory segments in
arbitrary order. Additionally, an array and a memory pool are no longer
used to store segments because of the pool's single-threaded nature.
A custom array-like structure, nxt_port_mmaps_t, is introduced.
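
A minimal sketch, under assumptions, of a container that accepts segments
indexed by id in arbitrary arrival order (illustrative names, not the real
nxt_port_mmaps_t layout); it grows on the heap instead of a memory pool:

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        void      **elts;   /* slot i holds the segment with id i, or NULL */
        unsigned    size;
    } port_mmaps_t;

    /* Store a segment under its id, growing the slot array if the id is
     * ahead of anything seen so far; concurrent callers would still need
     * external synchronization. */
    static int
    port_mmaps_set(port_mmaps_t *m, unsigned id, void *seg)
    {
        if (id >= m->size) {
            unsigned    nsize = id + 1;
            void      **elts = realloc(m->elts, nsize * sizeof(void *));

            if (elts == NULL) {
                return -1;
            }

            memset(elts + m->size, 0, (nsize - m->size) * sizeof(void *));
            m->elts = elts;
            m->size = nsize;
        }

        m->elts[id] = seg;
        return 0;
    }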
|
|
This allows shared memory to be used for communication with the main process.
This patch changes the shared memory segment format and breaks compatibility
with older modules.
|
|
- Each sendmsg() transmits no more than port->max_size bytes of payload data.
- Longer buffers are fragmented and sent using multiple sendmsg() calls.
- On the receiving side, buffers are connected into a chain.
- The number of handler calls is the same as the number of
nxt_port_socket_write() calls.
- The nxt_buf_make_plain() function is introduced to make a single plain
buffer from the chain.
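
A hedged sketch of the fragmentation idea on the sending side (not the actual
nxt_port_socket_write() code): a payload larger than max_size is split across
several sendmsg() calls; the receiving side links the fragments into a chain.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Send "len" bytes over "fd" in fragments of at most "max_size" bytes. */
    ssize_t
    port_write_fragmented(int fd, const char *data, size_t len, size_t max_size)
    {
        size_t  sent = 0;

        while (sent < len) {
            size_t         frag = len - sent;
            struct iovec   iov;
            struct msghdr  msg;
            ssize_t        n;

            if (frag > max_size) {
                frag = max_size;
            }

            iov.iov_base = (void *) (data + sent);
            iov.iov_len = frag;

            memset(&msg, 0, sizeof(msg));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;

            n = sendmsg(fd, &msg, 0);

            if (n == -1) {
                return -1;
            }

            sent += (size_t) n;
        }

        return (ssize_t) sent;
    }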
|
|
The only purpose of a request<->app link instance is to be enqueued in the
application requests queue.
It is possible to avoid allocating the request<->app link from the memory
pool when a spare application port is available. In that case an instance
on the local stack can be used to prepare and send the message to the
application.
|
|
A port message handler may perform fork() and then close the port's read file
descriptor and enable write on the same event fd. A subsequent read attempt
in this case may cause various errors in the log file.
|
|
This helps to decouple process removal from port memory pool cleanups.
|
|
The original implementation of 'remove pid' proxying to worker engines was
overcomplicated. The memory pool and the two engine posts (there and back
again) are optimized out and replaced with the brand-new nxt_port_post() call.
|
|
The request <-> application link structure (nxt_req_app_link_t) is used to
register the request in the application request queue (nxt_app_t.requests)
and to generate the application-specific port message.
Now it is allocated from the request pool. This pool is created for request
parsing and is used to allocate and store information specific to the request.
|
|
A request can be processed in a thread different from the one where the
connection was originally handled.
Because of possible race conditions, using the original connection structures
is unsafe. To solve this, the error condition is registered in 'ra' (the
request <-> application link) and travels back to the original connection
thread, where the error message can be generated and sent back to the client.
|
|
When the write queue is empty, it is possible to avoid allocating and
enqueuing send message structures. The send message is initialized on the
stack and passed to the write handler. If the immediate write fails, the
send message is allocated from the engine pool and enqueued.
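
A hedged sketch of this fast path (assumed names, not the Unit API): an
immediate write is attempted with a stack-allocated message, and only if the
write would block is the message copied to a longer-lived allocation and
queued.

    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct send_msg_s {
        const void         *data;
        size_t              len;
        struct send_msg_s  *next;   /* used only when the message is queued */
    } send_msg_t;

    typedef struct {
        int          fd;
        send_msg_t  *queue;         /* pending messages; NULL in fast path  */
    } port_t;

    int
    port_write(port_t *port, const void *data, size_t len)
    {
        if (port->queue == NULL) {
            send_msg_t  msg = { data, len, NULL };  /* fast path: on stack  */
            ssize_t     n = write(port->fd, msg.data, msg.len);

            if (n == (ssize_t) len) {
                return 0;                      /* sent, nothing allocated   */
            }

            if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK) {
                return -1;
            }
            /* Would block (partial writes are ignored here for brevity). */
        }

        /* Slow path: the message must outlive this call, so allocate it. */
        send_msg_t  *queued = malloc(sizeof(send_msg_t));

        if (queued == NULL) {
            return -1;
        }

        queued->data = data;
        queued->len = len;
        queued->next = port->queue;   /* prepended for brevity; a real queue
                                         preserves message order            */
        port->queue = queued;

        return 0;
    }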
|
|
A use counter helps to simplify the logic around freeing ports and
applications.
A port 'post' function is introduced to simplify posting the execution of a
particular function to the original port engine's thread.
The write message queue is protected by a mutex, which makes the port write
operation thread safe.
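
A minimal sketch, under assumptions, of a mutex-protected 'post' queue
(illustrative names, not the actual nxt_port_post() implementation): any
thread may enqueue a handler, and only the engine thread that owns the port
drains and executes the queue.

    #include <pthread.h>
    #include <stdlib.h>

    typedef void (*post_handler_t)(void *data);

    typedef struct post_item_s {
        post_handler_t       handler;
        void                *data;
        struct post_item_s  *next;
    } post_item_t;

    typedef struct {
        pthread_mutex_t  mutex;
        post_item_t     *head;
    } post_queue_t;

    /* Called from any thread. */
    int
    post_to_engine(post_queue_t *q, post_handler_t handler, void *data)
    {
        post_item_t  *item = malloc(sizeof(post_item_t));

        if (item == NULL) {
            return -1;
        }

        item->handler = handler;
        item->data = data;

        pthread_mutex_lock(&q->mutex);
        item->next = q->head;              /* prepended for brevity         */
        q->head = item;
        pthread_mutex_unlock(&q->mutex);

        /* A real engine would also wake up the owning thread here. */
        return 0;
    }

    /* Called only by the engine thread that owns the port. */
    void
    post_queue_drain(post_queue_t *q)
    {
        post_item_t  *items;

        pthread_mutex_lock(&q->mutex);
        items = q->head;
        q->head = NULL;
        pthread_mutex_unlock(&q->mutex);

        while (items != NULL) {
            post_item_t  *next = items->next;

            items->handler(items->data);
            free(items);
            items = next;
        }
    }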
|
|
To allow using a port from different threads, the first step is to avoid
using the port's memory pool for temporary allocations required to send data
through the port, including but not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port memory pool for incoming buffer allocations
because the receive operation is bound to a single thread.
|
|
The memory pool is not used by port_hash, and it was a mistake to pass it
into the 'add' and 'remove' functions. port_hash entries are allocated from
the heap.
|
|
Worker thread ports need to receive the 'remove pid' message to properly
handle the application process exit case and to finish requests processed by
the particular application worker. The main process sends the 'remove pid'
notification to the service thread port only, so this message must be
'proxied' to the other running engines.
A separate memory pool is created for this message. For each engine, the
structure required to post the message to that engine is allocated from the
pool using the 'retain' allocation method. After a successful post, the
structure is freed using the 'release' method. To completely destroy the
pool, one more 'release' call is needed to drop the initial reference count.
I'm afraid this should be simplified using good old malloc() and free() calls.
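
A hedged sketch of the retain/release pattern described above (illustrative
names, not the Unit memory pool API): the pool starts with one reference, each
per-engine allocation retains it, every successful post releases it, and the
final release destroys the pool. The creator drops its initial reference with
ref_pool_release(pool, NULL) once all posts are done.

    #include <stdlib.h>

    typedef struct {
        int  refcount;
    } ref_pool_t;

    ref_pool_t *
    ref_pool_create(void)
    {
        ref_pool_t  *pool = malloc(sizeof(ref_pool_t));

        if (pool != NULL) {
            pool->refcount = 1;      /* initial reference held by the creator */
        }

        return pool;
    }

    void *
    ref_pool_retain(ref_pool_t *pool, size_t size)
    {
        void  *p = malloc(size);

        if (p != NULL) {
            pool->refcount++;        /* one reference per posted structure    */
        }

        return p;
    }

    void
    ref_pool_release(ref_pool_t *pool, void *p)
    {
        free(p);                     /* free(NULL) is a no-op                 */

        if (--pool->refcount == 0) { /* last reference: destroy the pool too  */
            free(pool);
        }
    }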
|
|
Introduce an event engine memory cache and use the cache for
nxt_sockaddr_t structures.
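
A rough sketch of such a per-engine cache as a simple free list for fixed-size
objects (assumed names, not the actual event engine API); since each cache
belongs to a single engine thread, no locking is needed:

    #include <stdlib.h>

    typedef struct cache_item_s {
        struct cache_item_s  *next;
    } cache_item_t;

    typedef struct {
        cache_item_t  *free;      /* returned objects ready for reuse       */
        size_t         obj_size;  /* must be >= sizeof(cache_item_t)        */
    } mem_cache_t;

    void *
    mem_cache_alloc(mem_cache_t *cache)
    {
        cache_item_t  *item = cache->free;

        if (item != NULL) {               /* fast path: reuse a cached object */
            cache->free = item->next;
            return item;
        }

        return malloc(cache->obj_size);   /* slow path: fall back to malloc() */
    }

    void
    mem_cache_free(mem_cache_t *cache, void *p)
    {
        cache_item_t  *item = p;

        item->next = cache->free;         /* return the object to the cache   */
        cache->free = item;
    }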
|
|
mallopt() is absent on Alpine musl.
|
|
It's not used anyway, but breaks building with musl.
This closes issue #5 on GitHub.
|
|
Do not reuse a shared memory segment with a different port until this segment
has been successfully received and indexed on the other side. However, the
segment can be used at any time to transfer data via the port over which it
was originally sent.
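
A short sketch of the reuse rule (assumed fields, not the actual segment
bookkeeping): a segment may always be reused over the port it was originally
sent through; for any other port it becomes usable only after the receiving
side has confirmed that it indexed the segment.

    typedef struct {
        int  sent_over_port_id;   /* port that carried the original fd      */
        int  indexed_by_peer;     /* set once the other side has indexed it */
    } shm_segment_t;

    static int
    segment_reusable(const shm_segment_t *seg, int port_id)
    {
        return seg->indexed_by_peer || seg->sent_over_port_id == port_id;
    }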
|
|
The previous attempt to fix this in e5a65b58101f wasn't really successful,
because the actual memory leak was caused not by the request parse context
itself, but by its memory pool.
|
|
Updating the router engines list before posting jobs to worker thread
engines is more logical because worker threads may exit after the posting.
However, the previous code was safe because an engine is freed by the
router's main thread after its worker thread has exited.
|
|
The router process exited abnormally on reconfiguration if the number
of worker threads had been decreased by the previous reconfiguration.
Besides, the list of router engines should be updated only after the new
configuration joints have been prepared for all engines.
|