A new optional configuration parameter is introduced: limits.reschedule_timeout,
with a default value of 1 second. A request that has been written to the port
socket 'in advance' is called 'pending'.
On every completed request, the head of the pending request queue is checked
against the reschedule timeout: if that request has been waiting for execution
longer than the timeout, it is cancelled and a new port is selected for it.
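A minimal sketch of that check, with hypothetical names (none of these types,
fields, or helpers come from the actual source):

    #include <stdint.h>
    #include <stddef.h>

    #define RESCHEDULE_TIMEOUT_MS  1000        /* limits.reschedule_timeout, 1 s */

    typedef struct pending_req {
        uint64_t            enqueue_ms;        /* when it was written 'in advance' */
        struct pending_req  *next;
    } pending_req_t;

    typedef struct {
        pending_req_t  *pending_head;          /* oldest pending request */
    } port_queue_t;

    /* Called after every completed request: if the oldest pending request has
     * waited longer than the reschedule timeout, it is cancelled here and
     * returned so that the caller can select a new port for it. */
    static pending_req_t *
    check_reschedule(port_queue_t *q, uint64_t now_ms)
    {
        pending_req_t  *r = q->pending_head;

        if (r != NULL && now_ms - r->enqueue_ms > RESCHEDULE_TIMEOUT_MS) {
            q->pending_head = r->next;         /* cancel on this port */
            return r;                          /* caller picks another port */
        }

        return NULL;                           /* nothing to reschedule */
    }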
|
The application timeout limits the maximum time a worker may spend processing
a particular request. It does not include the time required to start the
worker, the time spent in the request queue, and so on.
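To illustrate the scope of the timeout, a hedged sketch with made-up names
(not the actual implementation): the clock starts only when the request
reaches the worker.

    #include <stdint.h>

    typedef struct {
        uint64_t  sent_to_worker_ms;   /* set when the request is handed to the worker */
        uint64_t  app_timeout_ms;      /* the configured application timeout */
    } request_timing_t;

    /* The application timeout covers only the worker's processing time,
     * counted from the moment the request was sent to the worker; queueing
     * and worker start-up are not included. */
    static int
    app_timeout_expired(const request_timing_t *t, uint64_t now_ms)
    {
        return now_ms - t->sent_to_worker_ms > t->app_timeout_ms;
    }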
|
|
This patch raises the precedence of a non-started worker over a busy worker.
The selection order becomes (see the sketch after this list):
1. an idle worker;
2. starting a new worker;
3. a busy worker that can still accept a request in advance.
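A hedged sketch of that order (all names are hypothetical, not the router's
actual API):

    #include <stddef.h>

    typedef struct app   app_t;
    typedef struct port  port_t;

    /* Hypothetical helpers; the real router uses its own structures. */
    extern port_t  *app_idle_port(app_t *app);            /* idle worker, if any  */
    extern int      app_can_start_worker(app_t *app);     /* worker limit not hit */
    extern void     app_post_start_worker(app_t *app);
    extern port_t  *app_busy_port_in_advance(app_t *app); /* busy, but can queue  */

    /* Selection order after this patch: idle worker first, then starting a
     * new worker, and only then a busy worker that can take the request
     * 'in advance'. */
    static port_t *
    select_port(app_t *app)
    {
        port_t  *port = app_idle_port(app);            /* 1. idle worker */

        if (port != NULL) {
            return port;
        }

        if (app_can_start_worker(app)) {               /* 2. start a new worker */
            app_post_start_worker(app);
            return NULL;                               /* request waits for it */
        }

        return app_busy_port_in_advance(app);          /* 3. busy worker, in advance */
    }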
|
- Each sendmsg() transmits no more than port->max_size bytes of payload data.
- Longer buffers are fragmented and sent using multiple sendmsg() calls
  (see the sketch after this list).
- On the receiving side, the buffers are linked into a chain.
- The number of handler calls is the same as the number of
  nxt_port_socket_write() calls.
- The nxt_buf_make_plain() function is introduced to make a single plain
  buffer from the chain.
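A hedged sketch of the sending side (PORT_MAX_SIZE and port_send_fragment()
are made up for illustration):

    #include <stddef.h>
    #include <sys/types.h>

    #define PORT_MAX_SIZE  8192          /* stands in for port->max_size */

    /* Hypothetical transport call: transmits one fragment of at most
     * PORT_MAX_SIZE bytes, as a single sendmsg() would. */
    extern ssize_t  port_send_fragment(int fd, const void *buf, size_t size,
                                       int last);

    /* Fragment a long buffer into PORT_MAX_SIZE pieces; the receiving side
     * links the fragments into a chain, which can later be flattened into a
     * single plain buffer (what nxt_buf_make_plain() is for). */
    static int
    port_write(int fd, const char *data, size_t size)
    {
        while (size > 0) {
            size_t  frag = size < PORT_MAX_SIZE ? size : PORT_MAX_SIZE;

            if (port_send_fragment(fd, data, frag, frag == size) < 0) {
                return -1;
            }

            data += frag;
            size -= frag;
        }

        return 0;
    }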
|
|
The only purpose of the request<->app link instance is to be enqueued in the
application request queue.
When a spare application port is available, the request<->app link does not
need to be allocated from the memory pool: an instance on the local stack can
be used to prepare and send the message to the application.
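A hedged sketch of the idea (hypothetical names; the real code paths differ):

    #include <stddef.h>

    typedef struct { int unused; }        app_port_t;       /* placeholders */
    typedef struct { app_port_t *port; }  req_app_link_t;

    extern app_port_t      *app_get_spare_port(void *app);
    extern req_app_link_t  *ra_alloc_from_request_pool(void *request);
    extern void             ra_send(req_app_link_t *ra, void *request);
    extern void             app_requests_enqueue(void *app, req_app_link_t *ra);

    /* When a spare port is available the link does not outlive this call, so
     * a stack instance is enough; otherwise it is allocated from the request
     * pool and enqueued until a port becomes free. */
    static void
    dispatch_request(void *app, void *request)
    {
        req_app_link_t  ra_local, *ra;
        app_port_t      *port = app_get_spare_port(app);

        if (port != NULL) {
            ra = &ra_local;                    /* no pool allocation needed */
            ra->port = port;
            ra_send(ra, request);
            return;
        }

        ra = ra_alloc_from_request_pool(request);
        app_requests_enqueue(app, ra);         /* waits in the requests queue */
    }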
|
|
The original implementation of 'remove pid' proxying to worker engines was
overcomplicated. The memory pool and the two engine posts (there and back
again) are optimized out and replaced with the brand-new nxt_port_post() call.
|
|
The request <-> application link structure (nxt_req_app_link_t) is used to
register the request in the application request queue (nxt_app_t.requests)
and to generate the application-specific port message.
It is now allocated from the request pool. This pool is created for request
parsing and used to allocate and store information specific to that request.
|
|
A request can be processed in a thread different from the thread where the
connection was originally handled.
Because of possible race conditions, using the original connection structures
from that thread is unsafe. To solve this, the error condition is registered
in 'ra' (the request <-> application link) and travels back to the original
connection thread, where the error message can be generated and sent back to
the client.
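A hedged sketch of the hand-off (all names hypothetical):

    /* The point is only that the worker-side thread records the error and
     * the connection's own thread turns it into a response. */
    typedef struct engine  engine_t;

    typedef struct {
        engine_t  *origin_engine;     /* engine of the connection's thread */
        void      *connection;
        int        err_code;          /* error registered by the other thread */
    } req_app_link_t;

    extern void  engine_post(engine_t *e, void (*handler)(void *data), void *data);

    static void
    send_error_response(void *data)
    {
        req_app_link_t  *ra = data;

        /* Runs in the original connection thread: it is safe to touch the
         * connection here and generate the error message for the client. */
        (void) ra->connection;
    }

    static void
    register_error(req_app_link_t *ra, int err_code)
    {
        /* Called from the foreign thread: record the error only, then hand
         * the work back to the thread that owns the connection. */
        ra->err_code = err_code;
        engine_post(ra->origin_engine, send_error_response, ra);
    }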
|
|
A use counter helps to simplify the logic around freeing ports and
applications.
A port 'post' function is introduced to simplify executing a particular
function in the original port engine's thread.
The write message queue is protected by a mutex, which makes the port write
operation thread-safe.
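A hedged sketch of the two mechanisms (a use counter plus a mutex-protected
write queue); the names and layout are invented:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct msg  msg_t;
    struct msg { msg_t *next; };

    typedef struct {
        atomic_int       use_count;      /* port is freed when this drops to zero */
        pthread_mutex_t  write_mutex;    /* protects the write message queue */
        msg_t           *write_queue;
    } port_t;

    /* Thread-safe write: any thread may enqueue a message, the queue itself
     * is guarded by the mutex. */
    static void
    port_write(port_t *port, msg_t *msg)
    {
        pthread_mutex_lock(&port->write_mutex);
        msg->next = port->write_queue;
        port->write_queue = msg;
        pthread_mutex_unlock(&port->write_mutex);
    }

    /* Use counter: the last user destroys the port, no matter which thread
     * it runs in. */
    static void
    port_release(port_t *port)
    {
        if (atomic_fetch_sub(&port->use_count, 1) == 1) {
            pthread_mutex_destroy(&port->write_mutex);
            free(port);
        }
    }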
|
|
To allow using a port from different threads, the first step is to avoid
using the port's memory pool for the temporary allocations required to send
data through the port, including but not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port memory pool for incoming buffer allocations,
because the receive operation is bound to a single thread.
|
|
Worker thread ports need to receive the 'remove pid' message to properly
handle the application process exit case and finish the requests being
processed by that application worker. The main process sends the 'remove pid'
notification to the service thread port only, so this message must be
'proxied' to the other running engines.
A separate memory pool is created for this message. For each engine, the
structure required to post the message to that engine is allocated from the
pool using the 'retain' allocation method. After a successful post, the
structure is freed using the 'release' method. To completely destroy the
pool, one more 'release' must be called to drop the initial reference count.
I'm afraid this should be simplified using good old malloc() and free() calls.
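A hedged sketch of that retain/release counting (not the actual pool API):

    #include <stdatomic.h>
    #include <stdlib.h>

    /* The pool starts with one reference, each engine it is posted to takes
     * one more, and the memory goes away only after every post and the
     * initial reference have been released. */
    typedef struct {
        atomic_int  refs;
        /* ... data needed to proxy the 'remove pid' message ... */
    } msg_pool_t;

    static msg_pool_t *
    msg_pool_create(void)
    {
        msg_pool_t  *p = malloc(sizeof(msg_pool_t));

        if (p != NULL) {
            atomic_init(&p->refs, 1);          /* the initial reference */
        }

        return p;
    }

    static msg_pool_t *
    msg_pool_retain(msg_pool_t *p)             /* once per engine posted to */
    {
        atomic_fetch_add(&p->refs, 1);
        return p;
    }

    static void
    msg_pool_release(msg_pool_t *p)            /* after each successful post,
                                                  and once more to drop the
                                                  initial reference */
    {
        if (atomic_fetch_sub(&p->refs, 1) == 1) {
            free(p);
        }
    }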
|
|
Introducing an event engine memory cache and using it for nxt_sockaddr_t
structures.
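A hedged sketch of such a cache (illustrative names; the engine's real cache
interface may differ):

    #include <stdlib.h>

    /* A minimal free-list cache in the spirit of the change: objects of one
     * size go back onto a per-engine list instead of the allocator, so the
     * next allocation of that size is a pointer pop. */
    typedef struct cache_entry {
        struct cache_entry  *next;
    } cache_entry_t;

    typedef struct {
        cache_entry_t  *free;        /* per-engine, so no locking is needed */
        size_t          size;        /* e.g. sizeof(nxt_sockaddr_t) */
    } mem_cache_t;

    static void *
    mem_cache_alloc(mem_cache_t *cache)
    {
        cache_entry_t  *e = cache->free;

        if (e != NULL) {
            cache->free = e->next;
            return e;
        }

        return malloc(cache->size);
    }

    static void
    mem_cache_free(mem_cache_t *cache, void *p)
    {
        cache_entry_t  *e = p;

        e->next = cache->free;
        cache->free = e;
    }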
|
|
It's not used anyway, but breaks building with musl.
This closes issue #5 on GitHub.
|
Updating the router engines list before posting jobs to the worker thread
engines is more logical, because worker threads may exit after the posting.
However, the previous code was safe, because an engine is freed by the
router's main thread only after its worker thread has exited.
|
|
The router process exited abnormally on reconfiguration if the number of
worker threads had been decreased on the previous reconfiguration.
Besides, the list of router engines should be updated only after the new
configuration joints have been prepared for all engines.
|
|
requested this.
|
The PHP SAPI tries to read the body of a POST request before registering
header-specific variables. For other methods, read_post_body() is called by
the SAPI after variable registration.
This closes issue #10 on GitHub.
|
The application's free ports list is a queue (doubly linked list) protected
by a mutex. After successful request parsing, each router thread (1) tries to
get a port from this list. If the list is empty, (2) a 'start worker' request
is posted to the main router thread. Another thread may release a port
between (1) and (2).
This fix adds an attempt to get a port from the free ports list at the
beginning of the 'start worker' action in the main thread.
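A hedged sketch of the fix (hypothetical names):

    #include <stddef.h>

    typedef struct app   app_t;
    typedef struct port  port_t;

    /* Hypothetical helpers around the mutex-protected free-port queue. */
    extern port_t  *app_free_ports_get(app_t *app);    /* locks, pops, unlocks */
    extern void     start_new_worker(app_t *app);
    extern void     assign_port(void *request, port_t *port);

    /* Runs in the main router thread when a 'start worker' request arrives.
     * Between the failed lookup in the router thread and this point another
     * thread may have released a port, so look once more before actually
     * starting a worker. */
    static void
    start_worker_handler(app_t *app, void *request)
    {
        port_t  *port = app_free_ports_get(app);       /* the added re-check */

        if (port != NULL) {
            assign_port(request, port);
            return;
        }

        start_new_worker(app);
    }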
|
Configuration and building example:
./configure
./configure python
./configure php
./configure go
make all
or
./configure
make nginext
./configure python
make python
./configure php
make php
./configure go
make go
Modules configuration options and building examples:
./configure python --module=python2 --config=python2.7-config
make python2
./configure php --module=php7 --config=php7.0-config \
    --lib-path=/usr/local/php7.0
make php7
./configure go --go=go1.6 --go-path=${HOME}/go1.6
make go1.6
|
With specific timeout and buffer size settings.
|