2023-02-21  Tests: added Python tests with encoding.  (Andrei Zeliankou, 3 files, -0/+98)
2023-02-21  Tests: removed list usage as default argument.  (Andrei Zeliankou, 1 file, -1/+3)
Mutable types as default arguments are bad practice, since a default argument is evaluated only once, when the function is defined.
2023-02-20  Tools: using nicer characters for showing a tree.  (Alejandro Colomar, 1 file, -19/+19)
Especially in small trees, ASCII characters are confusing. Use nicer UTF-8 characters, which are more readable to the audience of this script. We don't expect the audience of this script to have limited environments where these characters will not be shown, but if that happens, we could improve the script to select the characters based on the locale.

Suggested-by: Liam Crilly <liam@nginx.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-02-17  Remove the nxt_getpid() alias.  (Andrew Clayton, 2 files, -4/+1)
Since the previous commit, nxt_getpid() is only ever aliased to getpid(2). nxt_getpid() was only used once in the code, while there are multiple direct uses of getpid(2):

    $ grep -r "getpid()" src/
    src/nxt_unit.c:    nxt_unit_pid = getpid();
    src/nxt_process.c:    nxt_pid = nxt_getpid();
    src/nxt_process.c:    nxt_pid = getpid();
    src/nxt_lib.c:    nxt_pid = getpid();
    src/nxt_process.h:#define nxt_getpid()  \
    src/nxt_process.h:#define nxt_getpid()  \
    src/nxt_process.h:    getpid()

Just remove it and convert the _single_ instance of nxt_getpid() to getpid(2).

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Remove the syscall(SYS_getpid) wrapper.  (Andrew Clayton, 1 file, -9/+0)
When using SYS_clone we used the getpid kernel system call directly via syscall(SYS_getpid) to avoid issues with cached pids. However, since we are now only using fork(2) (+ unshare(2) for namespaces), we no longer need to call the kernel getpid directly, as fork(2) ensures the cached pid is invalidated.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Remove nxt_clone().  (Andrew Clayton, 2 files, -17/+0)
Since the previous commit, this is no longer used.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Switch to fork(2) & unshare(2) on Linux.  (Andrew Clayton, 1 file, -9/+247)
On GitHub, @razvanphp & @hbernaciak both reported issues running the APCu PHP module under Unit. When using this module they were seeing errors like

    'apcu_fetch(): Failed to acquire read lock'

However, when running APCu under php-fpm, everything was fine.

The issue turned out to be due to our use of SYS_clone breaking the pthreads(7) API used by APCu. Even if we had been using glibc's clone(2) wrapper, we would still have run into problems due to a known issue there.

Essentially the problem is that, when using clone, glibc doesn't update the TID cache, so the child ends up having the same TID as the parent, and that TID is used in various parts of pthreads(7), such as the locking primitives. So when APCu grabbed a lock, it ended up using the TID of the main unit process rather than that of the php application process that was grabbing the lock.

As a result, when one of the application processes went to grab either a read or write lock, the lock was actually attributed to the main unit process. If a process had acquired the write lock, then any process trying to acquire a read or write lock would get EDEADLK from glibc, which detected a deadlock because it thought the process already held the write lock when in fact it didn't.

It seems the right way to do this is via fork(2) and unshare(2). We already use fork(2) on other platforms. This requires a few tricks to keep the essence of the processes the same as before when using clone (see the sketch below):

1) We use the prctl(2) PR_SET_CHILD_SUBREAPER option (if it's available, since Linux 3.4) to make the main unit process inherit prototype processes after a double fork(2), rather than them being reparented to 'init'. This avoids needing to ^C twice to fully exit unit when running in the foreground. It's probably also better if they maintain their parent/child relationship where possible.

2) We use a double fork(2) technique on the prototype processes to ensure they themselves end up in a new PID namespace as PID 1 (when CLONE_NEWPID is being used). When using unshare(CLONE_NEWPID), the calling process is _not_ placed in the namespace (as discussed in pid_namespaces(7)); it only sets things up so that subsequent children are placed in a PID namespace. Having the prototype processes as PID 1 in the new PID namespace is probably a good thing and matches the behaviour of clone(2). Also, some isolation tests break if the prototype process is not PID 1.

3) Due to the above double fork(2), the main unit process loses track of the prototype process ID, which it needs to know. To solve this, we employ a simple pipe(2) between the main unit and prototype processes and pass the prototype grandchild PID from the parent of the second fork(2) before exiting. This needs to be done from the parent and not the grandchild, as the grandchild will see itself as having a PID of 1 while the main process needs its externally visible PID.

Link: <https://www.php.net/manual/en/book.apcu.php>
Link: <https://sourceware.org/bugzilla/show_bug.cgi?id=21793>
Closes: <https://github.com/nginx/unit/issues/694>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
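A rough, self-contained sketch of the double fork(2) + unshare(2) + pipe(2) scheme described above (illustrative only, not Unit's code: most error handling is omitted, and unshare(CLONE_NEWPID) needs privilege):

    /* Sketch of the scheme: subreaper + double fork + PID namespace + pipe. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/prctl.h>
    #include <unistd.h>
    #include <stdio.h>

    int
    main(void)
    {
        int    fds[2];
        pid_t  pid, child;

        /* Inherit orphaned descendants instead of letting init take them. */
        prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0);

        pipe(fds);                    /* used to report the grandchild PID */

        pid = fork();
        if (pid == 0) {
            /* First child: request a new PID namespace; only *subsequent*
             * children are placed in it, this process is not. */
            unshare(CLONE_NEWPID);

            child = fork();
            if (child == 0) {
                /* Grandchild: PID 1 in the new namespace; getpid() == 1
                 * here, so it cannot report its external PID itself.
                 * A real prototype process would carry on running. */
                _exit(0);
            }

            /* Parent of the second fork: report the externally visible PID
             * of the grandchild to the main process, then exit; the
             * grandchild is then reparented to the subreaper above. */
            write(fds[1], &child, sizeof(child));
            _exit(0);
        }

        /* Main process: read the prototype's real PID from the pipe. */
        read(fds[0], &child, sizeof(child));
        printf("prototype pid: %ld\n", (long) child);

        return 0;
    }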
2023-02-17  Enable the PR_SET_CHILD_SUBREAPER prctl(2) option on Linux.  (Andrew Clayton, 1 file, -0/+13)
This prctl(2) option can be used to set the "child subreaper" attribute of the calling process. This allows a process to take on the role of 'init', which means the process will inherit descendant processes when their immediate parent terminates.

This will be used in an upcoming commit that uses a double fork(2) + unshare(2) to create a new PID namespace. The parent from the second fork will terminate, leaving the child process to be inherited by 'init'.

Aside from it being better to maintain the parent/child relationships between the various unit processes, without setting this you need to ^C twice to fully quit unit when running in the foreground after the double fork.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
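For reference, a minimal standalone sketch of enabling the option (an illustration, not the code added by this commit):

    /* Make the calling process a "child subreaper" so that orphaned
     * descendants are re-parented to it instead of to init.
     * Linux >= 3.4 only. */
    #include <sys/prctl.h>
    #include <stdio.h>

    int
    main(void)
    {
        if (prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) == -1) {
            perror("prctl(PR_SET_CHILD_SUBREAPER)");
            return 1;
        }

        /* From here on, grandchildren orphaned by a double fork(2) become
         * children of this process rather than of init. */
        return 0;
    }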
2023-02-17  Isolation: Rename NXT_HAVE_CLONE -> NXT_HAVE_LINUX_NS.  (Andrew Clayton, 8 files, -25/+25)
We need to replace our use of clone/__NR_clone on Linux with fork(2)/unshare(2) to keep the pthreads(7) API working when Linux namespaces(7) are enabled. Let's rename NXT_HAVE_CLONE to NXT_HAVE_LINUX_NS, i.e. name the macro after the feature, not after how it's implemented; then, if we change how we do namespaces again in the future, we won't have to rename it again.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-17  Isolation: Fix the enablement of PR_SET_NO_NEW_PRIVS.  (Andrew Clayton, 1 file, -1/+1)
This prctl(2) option is checked for in auto/isolation; unfortunately, due to a typo, the feature has never been enabled. In the auto/isolation script the feature name was down as NXT_HAVE_PR_SET_NO_NEW_PRIVS0, which means we end up with the following in build/nxt_auto_config.h

    #ifndef NXT_HAVE_PR_SET_NO_NEW_PRIVS0
    #define NXT_HAVE_PR_SET_NO_NEW_PRIVS0  1
    #endif

whereas everywhere else checks for NXT_HAVE_PR_SET_NO_NEW_PRIVS.

This macro also guards the inclusion of sys/prctl.h in src/nxt_process.c, which is required by a subsequent commit.

Fixes: e2b53e1 ("Added "rootfs" feature.")
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-06  Packages: get rid of deprecated configure options.  (Konstantin Pavlov, 8 files, -20/+20)
2023-01-30  NJS: adding the missing vm destruction.  (Zhidao HONG, 8 files, -4/+69)
This commit fixes an njs memory leak that occurred during config validation, config updates, and HTTP requests.
2023-02-07  Python: ASGI: Don't log asyncio.get_running_loop() errors.  (Andrew Clayton, 1 file, -2/+5)
This adds a check in nxt_python_asgi_get_event_loop() on the event_loop_func name in the case that running that function fails. If it's get_running_loop() that failed, we skip printing an error message, as this is often expected behaviour since the previous commit, and we don't want users reporting erroneous bugs.

This check happens regardless of Python version, while it really only applies to Python >= 3.7; there didn't seem much point in adding complexity to the code for what will be an ever diminishing number of people running older Pythons.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-02-07  Python: ASGI: Switch away from asyncio.get_event_loop().  (Andrew Clayton, 1 file, -1/+20)
Several users on GitHub reported issues with running Python ASGI apps on Unit with Python 3.11.1 (this would also affect Python 3.10.9) with the following error from Unit

    2023/01/15 22:43:22 [alert] 0#77128 [unit] Python failed to call 'asyncio.get_event_loop'

TL;DR: asyncio.get_event_loop() is currently broken due to the process of deprecating part or all of it.

First some history. In Unit we had this commit

    commit 8dcb0b9987033d0349a6ecf528014a9daa574787
    Author: Max Romanov <max.romanov@nginx.com>
    Date:   Thu Nov 5 00:04:59 2020 +0300

        Python: request processing in multiple threads.

One of the things this did was to create a new asyncio event loop in each thread using asyncio.new_event_loop(). It's perhaps worth noting that all these asyncio.* functions are Python functions that we call from the C code in Unit.

Then we had this commit

    commit f27fbd9b4d2bdaddf1e7001d0d0bc5586ba04cd4
    Author: Max Romanov <max.romanov@nginx.com>
    Date:   Tue Jul 20 10:37:54 2021 +0300

        Python: using default event_loop for main thread for ASGI.

This changed things so that Unit calls asyncio.get_event_loop() in the _main_ thread (but still calls asyncio.new_event_loop() in the other threads). asyncio.get_event_loop(), up until recently, would either return an already running event loop or return a newly created one. This was done for $reasons that the commit message and GitHub issue #560 hint at, but the intimation is that there can already be an event loop running from the application (I assume it's referring to the user's application) at this point, and if there is we should use it.

Now for the Python side of things. On the main branch we had

    commit 172c0f2752d8708b6dda7b42e6c5a3519420a4e8
    Author: Serhiy Storchaka <storchaka@gmail.com>
    Date:   Sun Apr 25 13:40:44 2021 +0300

        bpo-39529: Deprecate creating new event loop in asyncio.get_event_loop() (GH-23554)

This commit began the deprecation of asyncio.get_event_loop().

    commit fd38a2f0ec03b4eec5e3cfd41241d198b1ee555a
    Author: Serhiy Storchaka <storchaka@gmail.com>
    Date:   Tue Dec 6 19:42:12 2022 +0200

        gh-93453: No longer create an event loop in get_event_loop() (#98440)

This turned asyncio.get_event_loop() into a RuntimeError _if_ there isn't a current event loop.

    commit e5bd5ad70d9e549eeb80aadb4f3ccb0f2f23266d
    Author: Serhiy Storchaka <storchaka@gmail.com>
    Date:   Fri Jan 13 14:40:29 2023 +0200

        gh-100160: Restore and deprecate implicit creation of an event loop (GH-100410)

This re-creates the event loop if there wasn't one and emits a deprecation warning. After at least the last two commits, Unit no longer works with the Python _main_ branch.

Meanwhile, on the 3.11 branch we had

    commit 3fae04b10e2655a20a3aadb5e0d63e87206d0c67
    Author: Serhiy Storchaka <storchaka@gmail.com>
    Date:   Tue Dec 6 17:15:44 2022 +0200

        [3.11] gh-93453: Only emit deprecation warning in asyncio.get_event_loop when a new event loop is created (#99949)

which is what caused our breakage, though perhaps unintentionally, as we get the following traceback

    Traceback (most recent call last):
      File "/usr/lib64/python3.11/asyncio/events.py", line 676, in get_event_loop
        f = sys._getframe(1)
            ^^^^^^^^^^^^^^^^
    ValueError: call stack is not deep enough
    2023/01/18 02:46:10 [alert] 0#180279 [unit] Python failed to call 'asyncio.get_event_loop'

However, regardless, it is clear we need to stop using asyncio.get_event_loop(). One option is to switch to the higher level asyncio.run() API, however that is a rather large change.
This commit takes the simpler approach of using asyncio.get_running_loop() (which it seems get_event_loop() will eventually be an alias of) in the _main_ thread to return the currently running event loop, or, if there is no current event loop, calling asyncio.new_event_loop() to return a newly created event loop.

I believe this mimics the current behaviour. In my testing get_event_loop() seemed to always return a newly created loop, as when just calling get_running_loop() it would return NULL and we would fail out.

When running two processes, each with two threads, we would get the following loops with Python 3.11.0 and unpatched Unit

    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>

and with Python 3.11.1 and a patched Unit we would get

    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>
    <_UnixSelectorEventLoop running=False closed=False debug=False>

Tested-by: RafaƂ Safin <rafal.safin12@gmail.com>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
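A minimal sketch of the same idea at the C-API level, assuming an embedded interpreter (illustrative only; the function name is made up and this is not Unit's actual code):

    /* Prefer asyncio.get_running_loop(); fall back to new_event_loop()
     * when no loop is currently running. */
    #include <Python.h>
    #include <stdio.h>

    static PyObject *
    get_or_create_event_loop(void)
    {
        PyObject  *asyncio, *loop;

        asyncio = PyImport_ImportModule("asyncio");
        if (asyncio == NULL) {
            return NULL;
        }

        /* get_running_loop() raises RuntimeError when no loop is running. */
        loop = PyObject_CallMethod(asyncio, "get_running_loop", NULL);
        if (loop == NULL) {
            PyErr_Clear();
            loop = PyObject_CallMethod(asyncio, "new_event_loop", NULL);
        }

        Py_DECREF(asyncio);
        return loop;   /* new reference, or NULL with an exception set */
    }

    int
    main(void)
    {
        PyObject  *loop;

        Py_Initialize();

        loop = get_or_create_event_loop();
        if (loop != NULL) {
            printf("got loop: %s\n", Py_TYPE(loop)->tp_name);
            Py_DECREF(loop);
        }

        Py_FinalizeEx();
        return 0;
    }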
2023-02-07  Python: ASGI: Factor out event loop creation to its own function.  (Andrew Clayton, 1 file, -21/+35)
This is a preparatory patch that factors out the asyncio event loop creation code from nxt_python_asgi_ctx_data_alloc() into its own function, to facilitate calling it multiple times. This is part of the work to move away from using the asyncio.get_event_loop() function, due to it no longer creating an event loop if there wasn't one running. See the following commit for the gory details.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-31  Added default values for pathnames.  (Alejandro Colomar, 15 files, -159/+152)
This allows one to simply run `./configure` and expect it to produce sane defaults for an install.

Previously, without specifying `--prefix=...`, `make install` would simply fail, recommending to set `--prefix` or `DESTDIR`, but that recommendation was incomplete at best, since it didn't set many of the subdirs needed for a good organization. Setting `DESTDIR` was even worse, since that shouldn't even affect an installation (it is required to be transparent to the installation).

/usr/local is the historic Unix standard path to use for installations from source made manually by the admin of the system. Some package managers (Homebrew, I'm looking specifically at you) have abused that path to install their things, but 1) it's not our fault that someone else incorrectly abuses that path (and they seem to be fixing it for newer archs; e.g., they started using /opt/homebrew for Apple Silicon), 2) there's no better path than /usr/local, 3) we still allow changing it for systems where this might not be the desired path (macOS Intel with Homebrew), and 4) it's _the standard_.

See a related conversation with Ingo (OpenBSD maintainer):

On 7/27/22 16:16, Ingo Schwarze wrote:
> Hi Alejandro,

[...]

> Alejandro Colomar wrote on Sun, Jul 24, 2022 at 07:07:18PM +0200:
>> On 7/24/22 16:57, Ingo Schwarze wrote:
>>> Alejandro Colomar wrote on Sun, Jul 24, 2022 at 01:20:46PM +0200:
>
>>>> /usr/local is for sysadmins to build from source;
>
>>> Doing that is *very* strongly discouraged on OpenBSD.
>
>> I guess that's why the directory was reused in the BSDs to install ports
>> (probably ports were installed by the sysadmin there, and by extension,
>> ports are now always installed there, but that's just a guess).
>
> Maybe.  In any case, the practice of using /usr/local for packages
> created from ports is significantly older than the recommendation
> to refrain from using upstream "make install" outside the ports
> framework.
>
>  * The FreeBSD ports framework was started by Jordan Hubbard in 1993.
>  * The ports framework was ported from FreeBSD to OpenBSD
>    by Niklas Hallqvist in 1996.
>  * NetBSD pkgsrc was forked from FreeBSD ports by Alistair G. Crooks
>    and Hubert Feyrer in 1997.
>
> I failed to quickly find Jordan's original version, but rev. 1.1
> of /usr/ports/infrastructure/mk/bsd.port.mk in OpenBSD (dated Jun 3
> 22:47:10 1996 UTC) already said
>
>     LOCALBASE ?= /usr/local
>     PREFIX ?= ${LOCALBASE}
> [...]
>
>> I had a discussion in NGINX Unit about it, and
>> the decission for now has been: "support prefix=/usr/local for default
>> manual installation through the Makefile, and let BSD users adjust to
>> their preferred path".
>
> That's an *excellent* solution for the task, thanks for doing it
> the right way.  By setting PREFIX=/usr/local by default in the
> upstream Makefile, you are minimizing the work for *BSD porters.
>
> The BSD ports frameworks will typically run the upstreak "make install"
> with the variable DESTDIR set to a custom value, for example
>
>     DESTDIR=/usr/ports/pobj/groff-1.23.0/fake-amd64
>
> so if the upstream Makefile sets PREFIX=/usr/local ,
> that's perfect, everything gets installed to the right place
> without an intervention by the person doing the porting.
>
> Of course, if the upstream Makefile would use some other PREFIX,
> that would not be a huge obstacle.  All we have to do in that case
> is pass the option --prefix=/usr/local to the ./configure script,
> or something equivalent if the software isn't using GNU configure.
>
>> We were concerned that we might get collisions
>> with the BSD port also installing in /usr/local, but that's the least
>> evil (and considering BSD users don't typically run `make install`, it's
>> not so bad).
>
> It's not bad at all.  It's perfect.
>
> Of course, if a user wants to install *without* the ports framework,
> they have to provide their own --prefix.  But that's not an issue
> because it is easy to do, and installing without a port is discouraged
> anyway.

===

Directory variables should never contain a trailing slash (I've learned that the hard way, where some things would break unexpectedly). Especially, make(1) is likely to have problems when things have double slashes or a trailing slash, since it treats filenames as text strings. I've removed the trailing slash from the prefix, and added it to the derivate variables just after the prefix. pkg-config(1) also expects directory variables to have no trailing slash.

===

I also removed the code that would set variables as depending on the prefix if they didn't start with a slash, because that is a rather non-obvious behavior, and things should not always depend on the prefix, but on other dirs such as $(runstatedir), so if we kept a similar behavior it would be very unreliable. Better to keep variables intact if set, or use the default if unset.

===

Print the real defaults for ./configure --help, rather than the actual values.

===

I used a subdirectory under the standard /var/lib for NXT_STATE, instead of a homemade "state" dir that does the same thing.

===

Modified the Makefile to create some dirs that weren't being created, and also remove those that weren't being removed in uninstall, probably because someone forgot to add them.

===

Add new options for setting the new variables, and rename some to be consistent with the standard names. Keep the old ones at configuration time for compatibility, but mark them as deprecated. Don't keep the old ones at exec time.

===

A summary of the default config is:

    Unit configuration summary:

      bin directory: ............. "/usr/local/bin"
      sbin directory: ............ "/usr/local/sbin"
      lib directory: ............. "/usr/local/lib"
      include directory: ......... "/usr/local/include"
      man pages directory: ....... "/usr/local/share/man"
      modules directory: ......... "/usr/local/lib/unit/modules"
      state directory: ........... "/usr/local/var/lib/unit"
      tmp directory: ............. "/tmp"
      pid file: .................. "/usr/local/var/run/unit/unit.pid"
      log file: .................. "/usr/local/var/log/unit/unit.log"
      control API socket: ........ "unix:/usr/local/var/run/unit/control.unit.sock"

Link: <https://www.gnu.org/prep/standards/html_node/Directory-Variables.html>
Link: <https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html>
Reviewed-by: Artem Konev <a.konev@f5.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Tested-by: Andrew Clayton <a.clayton@nginx.com>
Reviewed-by: Konstantin Pavlov <thresh@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-01-30  Tests: Add some PHP tests for 403 and 404 error handling.  (Andrew Clayton, 2 files, -1/+21)
Since the previous commit, we now properly handle 403 Forbidden & 404 Not Found errors in the PHP language module.

This adds a test for 403 Forbidden to test/test_php_application.py, and also fixes a test in test/test_php_targets.py that was checking for a 503 where a 404 is the correct response, which we now return.

Acked-by: Alejandro Colomar <alx@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
[ Incorporates a couple of small test cleanups from Andrei ]
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-27  PHP: Implement better error handling.  (Andrew Clayton, 1 file, -5/+40)
Previously the PHP module would produce one of four status codes:

    200 OK
    301 Moved Permanently
    500 Internal Server Error
    503 Service Unavailable

200 for successful requests, 301 for cases where the URL was a directory without a trailing '/', 500 for bad PHP or a non-existing PHP file, and 503 for all other errors.

With this commit we now handle missing files and directories, returning 404 Not Found, and files and directories that don't allow access, returning 403 Forbidden (see the sketch below). We do these checks in two places: when we check whether we should do a directory redirect (bar -> bar/) and in the nxt_php_execute() function.

One snag with the latter is that the php_execute_script() function only returns success/failure (no reason). However, while it takes a zend_file_handle structure with the filename of the script to run, we can instead pass through an already opened file pointer (FILE *) via that structure. So we can try opening the script ourselves and do the required checks before calling php_execute_script().

We also make use of the zend_stream_init_fp() function, which initialises the zend_file_handle structure, if it's available; otherwise we use our own version. This is good because the zend_file_handle structure has changed over time and the zend_stream_init_fp() function should change with it.

Closes: <https://github.com/nginx/unit/issues/767>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
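A sketch of the underlying idea, mapping a failed open of the script to an HTTP status before execution (illustrative only; the real module hands the FILE * to php_execute_script() via the zend_file_handle):

    #include <errno.h>
    #include <stdio.h>

    static int
    script_status(const char *path)
    {
        FILE  *fp;

        fp = fopen(path, "r");
        if (fp == NULL) {
            switch (errno) {
            case ENOENT:
            case ENOTDIR:
                return 404;          /* Not Found */
            case EACCES:
                return 403;          /* Forbidden */
            default:
                return 500;          /* Internal Server Error */
            }
        }

        /* In the real module the open FILE * would now be used to run
         * the script instead of being closed. */
        fclose(fp);
        return 200;
    }

    int
    main(int argc, char **argv)
    {
        printf("%d\n", script_status(argc > 1 ? argv[1] : "index.php"));
        return 0;
    }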
2023-01-27  PHP: Simplify ctx->script_filename.start in nxt_php_execute().  (Andrew Clayton, 1 file, -4/+5)
Create a const char *filename variable to hold ctx->script_filename.start, which is a much more manageable name and will negate the need for any more casting in the following commit, when we switch to using a FILE * instead of a filename in php_execute_script().

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-27  PHP: Make use of zend_stream_init_filename().  (Andrew Clayton, 1 file, -6/+6)
Where possible, make use of the zend_stream_init_filename() function introduced in PHP 7.4.

This is essentially a preparatory patch for switching to using an already opened file pointer in nxt_php_execute(). While wrapping this new code in a PHP version check, with a fallback to our own function, is perhaps slightly overkill, it does reduce the diff of the commit that switches to a FILE *.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-27  PHP: Factored out code into a helper function.  (Alejandro Colomar, 1 file, -10/+19)
We're going to use zend_stream_init_filename() in a following commit. To reduce the diff of that change, move the current code that will be replaced to a function that has the same interface.

We use strlen(3) here to be able to use an interface that doesn't require passing the length, but we will remove that call in a following commit, so it has no performance implications.

Co-developed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-24  Tests: added NJS iteration tests.  (Andrei Zeliankou, 1 file, -0/+6)
2023-01-24  Tests: NJS tests reworked.  (Andrei Zeliankou, 1 file, -35/+31)
2023-01-18  Removed repetitive phrasing from README.  (Artem Konev, 1 file, -10/+9)
2023-01-18  Liam.  (Artem Konev, 1 file, -2/+2)
2023-01-17  NJS: added the keys API for the request objects.  (Zhidao HONG, 2 files, -33/+106)
This commit adds support for looping through the request object's headers, arguments, and cookies.
2023-01-12  PHP: Fix a potential problem parsing the path.  (Andrew Clayton, 2 files, -1/+3)
@dward on GitHub reported an issue with a URL like

    http://foo.bar/test.php?blah=test.php/foo

where we would end up trying to run the script

    test.php?blah=test.php

In the PHP module the format 'file.php/' is treated as a special case in nxt_php_dynamic_request(), where we check the _path_ part of the URL for the string '.php/'. The problem is that the path actually also contains the query string; thus we were finding 'test.php/' in the above URL and treating the whole path as the script to run.

The fix is simple: replace the strstr(3) with a memmem(3), where we can limit the amount of the path we use for the check. The trick here, which is not obvious from the code, is that while path.start points to the whole path including the query string, path.length only contains the length of the _path_ part (see the sketch below).

NOTE: memmem(3) is a GNU extension and is specified by neither POSIX nor ISO C; however, it is available on a number of other systems, including FreeBSD, OpenBSD, NetBSD, illumos, and macOS. If it comes to it, we can implement a simple alternative for systems that lack memmem(3).

This also adds a test case (provided by @dward) to cover this.

Closes: <https://github.com/nginx/unit/issues/781>
Cc: Andrei Zeliankou <zelenkov@nginx.com>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Reviewed-by: Andrei Zeliankou <zelenkov@nginx.com> [test]
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
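A small self-contained illustration of the difference (the URL is the one from the report; the bounded length is computed with strcspn(3) purely for the example, not how Unit derives it):

    #define _GNU_SOURCE            /* memmem(3) is a GNU extension */
    #include <string.h>
    #include <stdio.h>

    int
    main(void)
    {
        const char  *url = "/test.php?blah=test.php/foo";
        size_t      path_len = strcspn(url, "?");   /* length up to '?' */

        /* strstr(3) scans the whole string, so it matches inside the
         * query string ... */
        printf("strstr: %s\n", strstr(url, ".php/") ? "match" : "no match");

        /* ... whereas memmem(3) can be limited to the path part only. */
        printf("memmem: %s\n",
               memmem(url, path_len, ".php/", 5) ? "match" : "no match");

        return 0;
    }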
2023-01-12  Fix endianness detection in nxt_websocket_header_t.  (Andrew Clayton, 1 file, -2/+2)
The nxt_websocket_header_t structure defines the layout of a websocket frame header. As the websocket frame is mapped directly onto this structure, its layout needs to match how it comes off the network. The network being big endian means that on big endian systems the structure layout can simply match that of the websocket frame header; on little endian systems we need to reverse the two bytes.

This was done via the BYTE_ORDER, BIG_ENDIAN and LITTLE_ENDIAN macros; however, these are not universal, e.g. they are _not_ defined on illumos (OpenSolaris / OpenIndiana), and so we get the following compiler errors

    In file included from src/nxt_h1proto.c:12:0:
    src/nxt_websocket_header.h:25:13: error: duplicate member 'opcode'
         uint8_t opcode:4;
                 ^~~~~~
    src/nxt_websocket_header.h:26:13: error: duplicate member 'rsv3'
         uint8_t rsv3:1;
                 ^~~~
    src/nxt_websocket_header.h:27:13: error: duplicate member 'rsv2'
         uint8_t rsv2:1;
                 ^~~~
    src/nxt_websocket_header.h:28:13: error: duplicate member 'rsv1'
         uint8_t rsv1:1;
                 ^~~~
    src/nxt_websocket_header.h:29:13: error: duplicate member 'fin'
         uint8_t fin:1;
                 ^~~
    src/nxt_websocket_header.h:31:13: error: duplicate member 'payload_len'
         uint8_t payload_len:7;
                 ^~~~~~~~~~~
    src/nxt_websocket_header.h:32:13: error: duplicate member 'mask'
         uint8_t mask:1;
                 ^~~~

This commit fixes that by using the new NXT_HAVE_{BIG,LITTLE}_ENDIAN macros introduced in the previous commit (see the sketch below).

Closes: <https://github.com/nginx/unit/issues/297>
Fixes: e501c74 ("Introducing websocket support in router and libunit.")
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
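A sketch of the resulting layout selection (simplified; the real nxt_websocket_header_t also carries the extended payload length and masking-key fields):

    /* Compile with -DNXT_HAVE_LITTLE_ENDIAN=1 or -DNXT_HAVE_BIG_ENDIAN=1,
     * as a configure-time endianness probe would define. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
    #if (NXT_HAVE_BIG_ENDIAN)
        /* Big endian: fields in network order within each byte. */
        uint8_t  fin:1;
        uint8_t  rsv1:1;
        uint8_t  rsv2:1;
        uint8_t  rsv3:1;
        uint8_t  opcode:4;

        uint8_t  mask:1;
        uint8_t  payload_len:7;
    #elif (NXT_HAVE_LITTLE_ENDIAN)
        /* Little endian: bit-fields are allocated the other way round,
         * so reverse the declarations within each byte. */
        uint8_t  opcode:4;
        uint8_t  rsv3:1;
        uint8_t  rsv2:1;
        uint8_t  rsv1:1;
        uint8_t  fin:1;

        uint8_t  payload_len:7;
        uint8_t  mask:1;
    #else
    #error "byte order not detected"
    #endif
    } ws_header_sketch_t;

    int
    main(void)
    {
        printf("header size: %zu bytes\n", sizeof(ws_header_sketch_t));
        return 0;
    }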
2023-01-12  Autodetect endianness.  (Andrew Clayton, 2 files, -1/+32)
In configure we set NXT_HAVE_LITTLE_ENDIAN for i386, amd64 and x86_64. However, that misses at least AArch64 (arm64), where it's usually run in little endian mode.

However, none of that really matters, as NXT_HAVE_LITTLE_ENDIAN isn't used anywhere. So why this patch?

The only place we need to explicitly know about endianness is the nxt_websocket_header_t structure, where we lay it out differently depending on endianness. This is currently done using the BYTE_ORDER, LITTLE_ENDIAN and BIG_ENDIAN macros. However, on at least illumos (OpenSolaris / OpenIndiana) those macros are not defined and we get compiler errors due to duplicate structure members.

So let's use our own NXT_HAVE_{BIG,LITTLE}_ENDIAN macros. However, it would be better to detect endianness programmatically, as some architectures can run in either mode; e.g. Linux used to run in big endian mode on PowerPC but has since switched to little endian (to match x86).

This commit adds an auto/endian script (using a slightly modified version of the test program from nginx's auto script) that checks the endianness of the platform being built on, e.g.

    checking for endianness ... little endian

The next commit will switch the nxt_websocket_header_t structure over to these new macros.

Link: <https://github.com/nginx/unit/pull/298>
Link: <https://developer.ibm.com/articles/l-power-little-endian-faq-trs/>
Tested-by: Alejandro Colomar <alx@nginx.com>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
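A sketch of such a configure-time probe, in the spirit of (but not identical to) the test program used by auto/endian:

    /* Inspect the first byte of a known multi-byte value to decide
     * which end is stored first. */
    #include <stdio.h>

    int
    main(void)
    {
        unsigned int   value = 0x01020304;
        unsigned char  *p = (unsigned char *) &value;

        if (p[0] == 0x04) {
            printf("little endian\n");
        } else if (p[0] == 0x01) {
            printf("big endian\n");
        } else {
            printf("unknown byte order\n");
        }

        return 0;
    }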
2023-01-12  Python: Fix enabling of UTF-8 in some situations.  (Andrew Clayton, 1 file, -0/+14)
There were a couple of reports of Python applications failing due to the following type of error

      File "/opt/netbox/netbox/netbox/configuration.py", line 25, in _import
        print(f"\U0001f9ec loaded config '{path}'")
    UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f9ec' in position 0: ordinal not in range(128)

due to the use of Unicode text in the print() statement.

This only happened for Python 3.8+ when using the "home" configuration option, as this meant we were going through the new PyConfig configuration. When using this new configuration method with the 'isolated' specific API (for embedded Python), UTF-8 is disabled by default: PyPreConfig->utf8_mode = 0.

To fix this we need to set up the Python pre config and enable UTF-8 mode. However, rather than enable UTF-8 unconditionally, we can set it to -1 so that it will use the LC_CTYPE environment variable to determine whether to enable UTF-8 mode or not. UTF-8 mode will be enabled if LC_CTYPE is either C, POSIX or some specific UTF-8 locale. This is the default utf8_mode setting when using the non-isolated PyPreConfig API.

Reported-by: Tobias Genannt <tobias.genannt@kappa-velorum.net>
Tested-by: Tobias Genannt <tobias.genannt@kappa-velorum.net>
Link: <https://peps.python.org/pep-0587/>
Link: <https://docs.python.org/3/c-api/init_config.html#c.PyPreConfig.utf8_mode>
Fixes: 491d0f70 ("Python: Added support for Python 3.11.")
Closes: <https://github.com/nginx/unit/issues/817>
Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
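A minimal sketch of this kind of pre-initialization using the Python 3.8+ C API (illustrative only, not Unit's actual initialization code):

    #include <Python.h>

    static int
    preinit_python(void)
    {
        PyStatus     status;
        PyPreConfig  preconfig;

        PyPreConfig_InitIsolatedConfig(&preconfig);

        /* -1: decide from the LC_CTYPE locale (C, POSIX and *.UTF-8
         * locales enable UTF-8 mode), matching the non-isolated default. */
        preconfig.utf8_mode = -1;

        status = Py_PreInitialize(&preconfig);
        if (PyStatus_Exception(status)) {
            return -1;
        }

        return 0;
    }

    int
    main(void)
    {
        if (preinit_python() != 0) {
            return 1;
        }

        Py_Initialize();
        /* Succeeds in UTF-8 mode; this is the failing case from the report. */
        PyRun_SimpleString("print('\\U0001f9ec utf-8 ok')");
        Py_FinalizeEx();
        return 0;
    }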
2023-01-12  Python: Do some cleanup in nxt_python3_init_config().  (Andrew Clayton, 1 file, -10/+12)
This is a preparatory patch for future work and cleans up the code a little in the Python 3.8+ variant of nxt_python3_init_config(). The main advantage is that we no longer have calls to PyConfig_Clear() in two different paths.

The variables have a little extra space in their declarations to allow for the next patch, which introduces a variable with a longer type name; this helps reduce the size of that diff.

Reviewed-by: Alejandro Colomar <alx@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-01-12  Fixed the Slack workspace link.  (Artem Konev, 1 file, -1/+1)
2023-01-02  Tools: setup-unit: disabled buggy behavior of zsh(1).  (Alejandro Colomar, 1 file, -0/+3)
Reported-by: Liam Crilly <liam@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-01-02  Tools: setup-unit: worked around macOS tmp file permissions.  (Alejandro Colomar, 1 file, -1/+4)
mktemp(1) in macOS uses a weird directory where only the running user has permissions. If we use that for the welcome website, unitd(8) won't be able to read the page. Use a directory at $HOME before trying a tmpdir.

Reported-by: Liam Crilly <lcrilly@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2023-01-02  Tools: setup-unit: removed root checks.  (Alejandro Colomar, 1 file, -8/+0)
Reported-by: Liam Crilly <lcrilly@nginx.com>
Signed-off-by: Alejandro Colomar <alx@nginx.com>
2022-12-28  Packages: do not clean up rpm build root.  (Konstantin Pavlov, 1 file, -2/+2)
These directories are used in the Makefile to determine status of a target.
2022-12-15  Docs: added changelog for Python 3.11.  (Konstantin Pavlov, 2 files, -2/+16)
While at it, fixed changelog generation for Python 3.10 as well.
2022-12-19  Tools: unitc avoid interactive rm(1) invocations.  (Liam Crilly, 1 file, -3/+3)
2022-12-16  Tools: Fixed bug in help message.  (Alejandro Colomar, 1 file, -1/+1)
'sudo' was misplaced.

Signed-off-by: Alejandro Colomar <alx@nginx.com>
2022-12-16  Tools: Using HereDoc instead of echo(1).  (Alejandro Colomar, 1 file, -5/+7)
This prevents accidents, which are likely to happen especially with quotes.

Signed-off-by: Alejandro Colomar <alx@nginx.com>
2022-12-16  Version bump.  (Andrei Zeliankou, 2 files, -2/+31)
2022-12-16  Tools: fixed quoting for apostrophe in setup-unit.  (Liam Crilly, 1 file, -1/+1)
2022-12-15  Unit 1.29.0 release.  (Andrei Zeliankou, 1 file, -0/+1)
2022-12-15  Generated Dockerfiles for Unit 1.29.0.  [tag: 1.29.0]  (Andrei Zeliankou, 8 files, -8/+8)
2022-12-15  Added version 1.29.0 CHANGES.  (Andrei Zeliankou, 2 files, -2/+42)
2022-12-15  Reordered changes for 1.29.0 by significance (subjective).  (Andrei Zeliankou, 1 file, -17/+23)
2022-12-14  Tools: Updated built-in 'setup-unit' help, README.md command lines.  (Artem Konev, 2 files, -70/+69)
2022-12-14  Packages: Used a more common name for pkg-config.  (Konstantin Pavlov, 1 file, -1/+1)
The pkg-config package is named differently on supported rpm-based systems:

    - Amazon Linux 2 has pkgconfig
    - Fedora has pkgconf-pkg-config
    - RHEL 7 has pkgconfig
    - RHEL 8 and 9 have pkgconfig-pkg-config

What they have in common is that they all provide 'pkgconfig', which we can use in the spec file so we don't have to specify it per-OS.
2022-12-14  Tests: added tests for "path" option in isolation/cgroup.  (Andrei Zeliankou, 1 file, -0/+123)
2022-12-07  Packages: added njs support.  (Konstantin Pavlov, 5 files, -8/+30)