|
This commit adds the variable $response_header_NAME.
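A minimal access-log sketch using the new variable. It assumes the
object form of "access_log" with a custom "format", and that the header
part of the name follows the convention of the existing $header_NAME
variables (lowercase, dashes replaced with underscores); both are
assumptions for illustration, not part of this commit:
    {
        "access_log": {
            "path": "/var/log/unit/access.log",
            "format": "$request_line $status $response_header_content_type"
        }
    }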
|
|
Don't reconstruct a new string for the $request_line from the parsed
method, target, and HTTP version, but rather keep a pointer to the
original memory where the request line was received.
This will be necessary for implementing URI rewrites, since we want to
log the original request line, and not one constructed from the
rewritten target.
This implementation changes behavior (only for invalid requests) in the
following way:
Previous behavior was to log as many tokens from the request line as
were parsed validly, thus:
Request -> access log ; error log
"GET / HTTP/1.1" -> "GET / HTTP/1.1" OK ; =
"GET / HTTP/1.1" -> "GET / HTTP/1.1" [1] ; =
"GET / HTTP/2.1" -> "GET / HTTP/2.1" OK ; =
"GET / HTTP/1." -> "GET / HTTP/1." [2] ; "GET / HTTP/1. [null]"
"GET / food" -> "GET / food" [2] ; "GET / food [null]"
"GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] ; =
"GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2] ; =
"GET food HTTP/1.1" -> "GET" ; "GET [null] [null]"
"OPTIONS * HTTP/1.1" -> "OPTIONS" [3] ; "OPTIONS [null] [null]"
"FOOBAR baz HTTP/1.1"-> "FOOBAR" ; "FOOBAR [null] [null]"
"FOOBAR / HTTP/1.1" -> "FOOBAR / HTTP/1.1" ; =
"get / HTTP/1.1" -> "-" ; " [null] [null]"
"" -> "-" ; " [null] [null]"
This behavior was rather inconsistent. We have several options to go
forward with this patch:
- NGINX behavior.
Log the entire request line, up to '\r' | '\n', even if it was
invalid.
This is the most informative alternative. However, RFC-compliant
agents will probably not send invalid requests.
This information would be interesting to users for whom debugging
requests constructed manually via netcat(1) or a similar tool, or
maybe debugging a client, is important. It might be interesting
to support this in the future if our users are interested; for now,
since this approach requires looping over invalid requests twice,
that's an overhead we'd rather avoid.
- Previous Unit behavior
This is relatively fast (almost as fast as the next alternative, the
one we chose), but the implementation is ugly, in that we need to
perform the same operation in many places around the code.
If we want performance, probably the next alternative is better; if
we want to be informative, then the first one is better (maybe in
combination with the third one too).
- Chosen behavior
Log the request line only when the request is valid. For any
invalid or even unsupported request, the request line will be
logged as "-". Thus:
Request -> access log [4]
"GET / HTTP/1.1" -> "GET / HTTP/1.1" OK
"GET / HTTP/1.1" -> "GET / HTTP/1.1" [1]
"GET / HTTP/2.1" -> "-" [3]
"GET / HTTP/1." -> "-"
"GET / food" -> "-"
"GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2]
"GET / / HTTP/1.1" -> "GET / / HTTP/1.1" [2]
"GET food HTTP/1.1" -> "-"
"OPTIONS * HTTP/1.1" -> "-"
"FOOBAR baz HTTP/1.1"-> "-"
"FOOBAR / HTTP/1.1" -> "FOOBAR / HTTP/1.1"
"get / HTTP/1.1" -> "-"
"" -> "-"
This is less informative than the previous behavior, but considering
how inconsistent it was, and that RFC-compliant agents will probably
not send us such requests, we're ready to lose that information in
the log. This is of course the fastest and simplest implementation
we can get.
We've chosen to implement this alternative in this patch. Since we
modified the behavior, this patch also changes the affected tests.
[1]: Using multiple successive spaces as a token delimiter is allowed
by the RFC, but it is discouraged and considered a security risk. It
is currently supported by Unit, but we will probably drop support for
it in the future.
[2]: Unit currently supports spaces in the request-target. This is
a violation of the relevant RFC (linked below) and will be fixed
in the future: such targets will be considered invalid and will
get a 400 (Bad Request) response, at which point the log lines
shown above for the previous inconsistent behavior would change.
[3]: Not yet supported.
[4]: In the error log, regarding the "log_route" conditional logging
of the request line, we only need to log the request line if it
was valid. It doesn't make sense to log "" or "-" when the request
is invalid, since this logging is only useful for understanding the
router's decisions. In that case, the access log, which shows that
the request was invalid and that a 400 was returned, is more
appropriate. When the request line is valid, it is printed in the
error log exactly as in the access log.
Link: <https://datatracker.ietf.org/doc/html/rfc9112#section-3>
Suggested-by: Liam Crilly <liam@nginx.com>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Cc: Timo Stark <t.stark@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Cc: Andrew Clayton <a.clayton@nginx.com>
Cc: Artem Konev <a.konev@f5.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
- Configuration: added "/config/settings/http/log_route".
Type: bool
Default: false
This adds configurability to the error log. It allows enabling and
disabling log messages related to how the router selects a route
(see the configuration sketch after this list).
- HTTP: logging the request line.
Log level: [notice]
The request line is essential for understanding which log entries
correspond to which request when reading the logs.
- HTTP: logging route that's been discarded.
Log level: [info]
- HTTP: logging route whose action is selected.
Log level: [notice]
- HTTP: logging when "fallback" action is taken.
Log level: [notice]
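A minimal configuration sketch for enabling the route-selection logging
described above; only the setting itself is taken from this commit, the
surrounding object layout being the usual settings object:
    {
        "settings": {
            "http": {
                "log_route": true
            }
        }
    }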
Closes: <https://github.com/nginx/unit/issues/758>
Link: <https://github.com/nginx/unit/pull/824>
Link: <https://github.com/nginx/unit/pull/839>
Suggested-by: Timo Stark <t.stark@nginx.com>
Suggested-by: Mark L Wood-Patrick <mwoodpatrick@gmail.com>
Suggested-by: Liam Crilly <liam@nginx.com>
Tested-by: Liam Crilly <liam@nginx.com>
Acked-by: Artem Konev <a.konev@f5.com>
Cc: Andrew Clayton <a.clayton@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
The casts are unnecessary, since memchr(3)'s first argument is
'const void *'. They might have been necessary in the times of K&R C,
when 'void *' didn't exist. Nowadays, they are unnecessary and _very_
unsafe, since casts can hide all classes of bugs by silencing most
compiler warnings.
The changes from nxt_memchr() to memchr(3) were scripted:
$ find src/ -type f \
| grep '\.[ch]$' \
| xargs sed -i 's/nxt_memchr/memchr/'
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
The casts are unnecessary, since memcmp(3)'s arguments are
'const void *'. They might have been necessary in the times of K&R C,
when 'void *' didn't exist. Nowadays, they are unnecessary and _very_
unsafe, since casts can hide all classes of bugs by silencing most
compiler warnings.
The changes from nxt_memcmp() to memcmp(3) were scripted:
$ find src/ -type f \
| grep '\.[ch]$' \
| xargs sed -i 's/nxt_memcmp/memcmp/'
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
|
|
|
|
|
|
Some lines (incorrectly) had an indentation of 3 or 5, or 7 or 9,
or 11 or 13, or 15 or 17 spaces instead of 4, 8, 12, or 16. Fix them.
Found with:
$ find src -type f | xargs grep -n '^   [^ ]';
$ find src -type f | xargs grep -n '^     [^ *]';
$ find src -type f | xargs grep -n '^       [^ ]';
$ find src -type f | xargs grep -n '^         [^ *]';
$ find src -type f | xargs grep -n '^           [^ +]';
$ find src -type f | xargs grep -n '^             [^ *+]';
$ find src -type f | xargs grep -n '^               [^ +]';
$ find src -type f | xargs grep -n '^                 [^ *+]';
|
|
It was accidentally added in 4645a43bc248.
|
|
Found by Clang Static Analyzer.
|
|
A crash was caused by an incorrect timer handler, nxt_h1p_idle_timeout(),
if SSL_shutdown() returned SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE.
The SSL_RECEIVED_SHUTDOWN flag is used to avoid getting
SSL_ERROR_WANT_READ, so the server won't wait for a close notification
from the client.
For SSL_ERROR_WANT_WRITE, a correct timer handler is set up.
|
|
This closes #498 issue on GitHub.
|
|
According to Section 3.1.2 of RFC 7230, after the status code
there must be a space, even if the reason phrase is empty.
Also, the status code must consist of exactly 3 digits.
This closes #507 issue on GitHub.
|
|
Previously, all requests whose header field names contained characters
other than alphanumerics, "-", or "_" were rejected with a 400
"Bad Request" error response.
Now, the parser allows the same set of characters as specified in RFC 7230,
including: "!", "#", "$", "%", "&", "'", "*", "+", ".", "^", "`", "|", and "~".
Header field names that contain only these characters are considered valid.
Also, a new option is introduced: "discard_unsafe_fields". It accepts
a boolean value and is set to "true" by default.
When this option is "true", all header fields whose names contain
characters that are in the valid range but are neither alphanumeric
nor "-" are skipped during parsing. When the option is "false", these
header fields aren't skipped.
Requests whose header field names contain characters that are invalid
according to RFC 7230 are rejected regardless of the
"discard_unsafe_fields" setting.
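A minimal sketch for relaxing the default; placing the option under
"settings/http" is an assumption based on the other HTTP settings in
this log, not something stated in this commit:
    {
        "settings": {
            "http": {
                "discard_unsafe_fields": false
            }
        }
    }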
This closes #422 issue on GitHub.
|
|
The connection's local socket address is allocated from the connection
pool before the request is passed to the application; however, with keep-alive
connections, this field was unconditionally reset by a socket configuration
value that could be NULL. For the next request, the address was allocated
again from the same connection pool. Nonetheless, all leaked addresses
were released when the connection was closed.
The issue was introduced in changeset 5c7dd85fabd5.
|
|
The Apache HTTP server benchmarking tool, ab, issues HTTP/1.0 requests with
the 'Connection: Keep-Alive' header and expects a 'Connection: Keep-Alive'
header in the response.
|
|
If an idle connection was closed before h1proto had been allocated,
then c->socket.data is NULL. This happens if nxt_h1p_idle_response()
is called by nxt_h1p_idle_close(). However, h1p->conn_write_tail
is used only in nxt_h1p_request_send(), which would not be called
after nxt_h1p_idle_response().
The bug was introduced in f237e8c553fd.
|
|
|
|
An idle connection was not removed from the idle connection list
if the connection detected that the listening socket had been closed.
|
|
The invocation parameters should be logged as well, notably the path of
the file that failed to be created.
Also, the log level is changed to ALERT, as it's quite a critical error.
|
|
|
|
This is required due to the lack of a graceful shutdown: there is a small
gap between the release of the runtime's memory pool and the router
process's exit. Thus, a worker thread may start processing a request
between these two operations, which may result in an HTTP fields hash
access and a subsequent crash.
To simplify reproducing the issue, it makes sense to add a 2-second sleep
before exit() in nxt_runtime_exit().
|
|
|
|
This allows specifying redirects:
    {
        "action": {
            "return": 301,
            "location": "https://www.example.com/"
        }
    }
|
|
This fixes a crash introduced in 039b00e32e3d.
|
|
Before this fix, only persistent connection request buffers were completed.
This issue was introduced in dc403927ab0b.
|
|
This closes #386 on GitHub.
|
|
The check was moved from the request body read stage.
|
|
|
|
|
|
The keep-alive connection is disabled if the upstream response length
differs from the value specified in the "Content-Length" field.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The request state's ready_handler is required for further WebSocket
event processing. It is not required for regular response transfers.
|
|
|
|
|
|
The scheme matches the exact string "http" or "https".
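A minimal routing sketch using this match; the route layout and the
redirect action are illustrative, with the action mirroring the
"return"/"location" example earlier in this log:
    {
        "routes": [
            {
                "match": {
                    "scheme": "http"
                },
                "action": {
                    "return": 301,
                    "location": "https://www.example.com/"
                }
            }
        ]
    }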
|
|
This closes #223 issue on GitHub.
|
|
When an idle timeout occurs at the same time as a request comes in,
the timer handler closes the connection while the read event triggers
request processing, and this eventually leads to a segmentation fault.
|
|
|
|
|
|
According to RFC 7232:
| A 304 response cannot contain a message-body; it is always terminated
| by the first empty line after the header fields.
|