From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on sa.local.altlinux.org
X-Spam-Level:
X-Spam-Status: No, score=-3.3 required=5.0 tests=BAYES_00,RP_MATCHES_RCVD autolearn=ham autolearn_force=no version=3.4.1
Date: Sat, 7 Dec 2024 10:24:46 +0000
From: ALT beekeeper
To: sisyphus-cybertalk@lists.altlinux.org
Message-ID:
Mail-Followup-To: sisyphus-cybertalk@lists.altlinux.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Subject: [cyber] I: Sisyphus-20241207 i586 beehive_status: +28 -33 (794)
X-BeenThere: sisyphus-cybertalk@lists.altlinux.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: devel@lists.altlinux.org
List-Id: ALT Linux Sisyphus cybertalk
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sat, 07 Dec 2024 10:24:55 -0000
Archived-At:
List-Archive:

28 NEW error logs

altlinux-freedesktop-menu-0.70-alt1
  + j=altlinux-webdevelopment.directory
  + grep altlinux-webdevelopment.directory ignore.list
  + grep 'Name\[ru\]=' desktop-directories/altlinux-webdevelopment.directory
  + for i in desktop-directories/*.directory
  ++ basename desktop-directories/altlinux-xfce.directory
  + j=altlinux-xfce.directory
  + grep altlinux-xfce.directory ignore.list
  + rm ignore.list
  + '[' 0 -gt 0 ']'
  + exit 1

cockpit-machines-315-alt1
  runtime/asm_wasm.s:399 +0x1 fp=0x21edbfe8 sp=0x21edbfe0 pc=0x14070001
  created by github.com/evanw/esbuild/internal/linker.(*linkerContext).scanImportsAndExports
  github.com/evanw/esbuild/internal/linker/linker.go:1572 +0x148
  goroutine 11970 [runnable]:
  github.com/evanw/esbuild/internal/linker.(*linkerContext).scanImportsAndExports.func3()
  github.com/evanw/esbuild/internal/linker/linker.go:1572 fp=0x21edffe0 sp=0x21edffd8 pc=0x1efe0000
  runtime.goexit()
  runtime/asm_wasm.s:399 +0x1 fp=0x21edffe8 sp=0x21edffe0 pc=0x14070001
  created by github.com/evanw/esbuild/internal/linker.(*linkerContext).scanImportsAndExports
  github.com/evanw/esbuild/internal/linker/linker.go:1572 +0x148

firefox-133.0.0-alt1
  The details of the failure are as follows:
  ValueError: '/usr/lib/python3/site-packages' is not in the subpath of '/usr/src/RPM/BUILD/firefox-133.0.0/mozbuild/srcdirs/firefox-133.0.0-b4770f016db0/_virtualenvs/build'
  File "/usr/src/RPM/BUILD/firefox-133.0.0/python/mozbuild/mozbuild/build_commands.py", line 221, in build

firefox-esr-128.5.1-alt1
  The details of the failure are as follows:
  ValueError: '/usr/lib/python3/site-packages' is not in the subpath of '/usr/src/RPM/BUILD/firefox-128.5.1/mozbuild/srcdirs/firefox-128.5.1-ee4ca9ecb43e/_virtualenvs/build'
  File "/usr/src/RPM/BUILD/firefox-128.5.1/python/mozbuild/mozbuild/build_commands.py", line 221, in build

kernel-mainline-1-alt1
  rpm-build-kernel is already the newest version.
  E: Couldn't find package kernel-image-mainline
  hsh-install: Failed to calculate package file list.

mumps-5.3.5-alt3
  gcc -shared graph.o gbipart.o gbisect.o ddcreate.o ddbisect.o nestdiss.o multisector.o gelim.o bucket.o tree.o symbfac.o interface.o sort.o minpriority.o -Wl,-soname,libpord_seq-5.3.so -o libpord_seq-5.3.so -fopenmp -lrt -Wl,-z,defs
  lto1: internal compiler error: resolution sub id 0x4a0790ccbc46ee87 not in object file
  libbacktrace could not find executable to open
  --
  See for instructions.
  lto-wrapper: fatal error: i586-alt-linux-gcc returned 1 exit status
  compilation terminated.
  ld: error: lto-wrapper failed
  collect2: error: ld returned 1 exit status
  make[2]: *** [Makefile:34: libpord_seq.so] Error 1

python3-module-argcomplete-3.5.1-alt1
  - test-module: error: argument arg: invalid choice: 'a' (choose from arg)
  FAIL: test_console_script_package (__main__.TestZshGlobalImplicit.test_console_script_package)
  Test completing a console_script for a package.
  --
  self.assertEqual(self.sh.run_command(command), "arg\r\n")
  AssertionError: "usage: test-package [-h] {arg}\r\ntest-package: error: argument arg: invalid choice: 'a' (choose from arg)\r\n" != 'arg\r\n'
  + arg
  - usage: test-package [-h] {arg}
  - test-package: error: argument arg: invalid choice: 'a' (choose from arg)
  FAIL: test_console_script_package_wheel (__main__.TestZshGlobalImplicit.test_console_script_package_wheel)
  Test completing a console_script for a package from a wheel.
  --
  self.assertEqual(self.sh.run_command(command), "arg\r\n")
  AssertionError: "usage: test-package [-h] {arg}\r\ntest-package: error: argument arg: invalid choice: 'a' (choose from arg)\r\n" != 'arg\r\n'
  + arg
  - usage: test-package [-h] {arg}
  - test-package: error: argument arg: invalid choice: 'a' (choose from arg)
  FAIL: test_continuation (__main__.TestZshGlobalImplicit.test_continuation)
  Traceback (most recent call last):
  --
  self.assertEqual(self.sh.run_command("prog basic f\t--"), "foo\r\n")
  AssertionError: "usage: prog basic [-h] {foo,bar,baz}\r\nprog basic: error: argument arg: invalid choice: 'f--' (choose from foo, bar, baz)\r\n" != 'foo\r\n'
  + foo
  - usage: prog basic [-h] {foo,bar,baz}
  - prog basic: error: argument arg: invalid choice: 'f--' (choose from foo, bar, baz)
  FAIL: test_partial_completion (__main__.TestZshGlobalImplicit.test_partial_completion)

python3-module-beartype-0.18.5-alt2
  SKIPPED [1] beartype_test/a90_func/z90_lib/test_torch.py:21: could not import 'torch': No module named 'torch'
  FAILED beartype_test/a00_unit/a20_util/hint/a00_pep/proposal/test_utilpep593.py::test_is_hint_pep593_beartype - Failed: DID NOT WARN. No warnings of type (,) were emitted. Emitted warnings: [].
  ================== 1 failed, 333 passed, 15 skipped in 3.01s ===================
  ERROR: InvocationError for command /usr/src/RPM/BUILD/python3-module-beartype-0.18.5/.tox/py3/bin/pytest -vra (exited with code 1)
  py3 finish: run-test after 3.24 seconds
  --
  ___________________________________ summary ____________________________________
  ERROR: py3: commands failed
  cleanup /usr/src/RPM/BUILD/python3-module-beartype-0.18.5/.tox/.tmp/package/1/beartype-0.18.5-py3-none-any.whl

python3-module-cheroot-10.0.1-alt2
  ==================================== ERRORS ====================================
  _______________ ERROR at setup of test_ssl_env[0-True-pyopenssl] _______________
  [gw8] linux -- Python 3.12.8 /usr/src/RPM/BUILD/python3-module-cheroot-10.0.1/.run_venv/bin/python3
  --
  > assert col in needed_collectors, "previous item was not torn down properly"
  E AssertionError: previous item was not torn down properly
  col =
  --
  XFAIL cheroot/test/test_core.py::test_large_request - https://github.com/cherrypy/cheroot/issues/106
  ERROR cheroot/test/test_ssl.py::test_ssl_env[0-True-pyopenssl] - AssertionErr...
  =================== 152 passed, 3 xfailed, 1 error in 10.74s ===================
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-k', 'not test_tls_client_auth']' returned non-zero exit status 1.

python3-module-django-5.0.9-alt1
  A floating point mtime does not disturb was_modified_since (#18675). ... ok
  FAIL: test_subparser_invalid_option (user_commands.tests.CommandTests.test_subparser_invalid_option)
  Traceback (most recent call last):
  --
  raise ArgumentError(action, msg % args)
  argparse.ArgumentError: argument {foo}: invalid choice: 'test' (choose from foo)
  During handling of the above exception, another exception occurred:
  django.core.management.base.CommandError: Error: argument {foo}: invalid choice: 'test' (choose from foo)
  During handling of the above exception, another exception occurred:
  --
  self.assertIn(expected_message, str(getattr(cm, cm_attr)))
  AssertionError: "invalid choice: 'test' (choose from 'foo')" not found in "Error: argument {foo}: invalid choice: 'test' (choose from foo)"
  Ran 16830 tests in 561.467s
  FAILED (failures=1, skipped=1438, expected failures=5)
  Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...

python3-module-execnet-2.1.1-alt1
  XFAIL testing/test_multi.py::test_safe_terminate[thread] - active_count() has been broken for some time
  XFAIL testing/test_termination.py::test_terminate_implicit_does_trykill[thread-sys.executable] - reason: since python3.12 this test triggers RuntimeError: can't create new thread at interpreter shutdown
  XFAIL testing/test_threadpool.py::test_limited_size[thread] - WorkerPool does not implement limited size
  --
  XFAIL testing/test_multi.py::test_safe_terminate[main_thread_only] - active_count() has been broken for some time
  XFAIL testing/test_termination.py::test_terminate_implicit_does_trykill[main_thread_only-sys.executable] - reason: since python3.12 this test triggers RuntimeError: can't create new thread at interpreter shutdown
  XFAIL testing/test_threadpool.py::test_limited_size[main_thread_only] - WorkerPool does not implement limited size
  --
  XPASS testing/test_xspec.py::TestMakegateway::test_popen_nice - fails due to timing problems on busy single-core VMs
  FAILED testing/test_gateway.py::TestBasicGateway::test__rinfo[main_thread_only-popen]
  FAILED testing/test_gateway.py::TestPopenGateway::test_waitclose_on_remote_killed
  = 2 failed, 463 passed, 613 skipped, 7 xfailed, 10 xpassed, 78 warnings in 47.62s =
  ERROR: InvocationError for command /usr/src/RPM/BUILD/python3-module-execnet-2.1.1/.tox/py3/bin/python -m pytest testing (exited with code 1)
  py3 finish: run-test after 48.06 seconds
  --
  ___________________________________ summary ____________________________________
  ERROR: py3: commands failed
  cleanup /usr/src/RPM/BUILD/python3-module-execnet-2.1.1/.tox/.tmp/package/1/execnet-2.1.1-py3-none-any.whl

python3-module-executing-2.1.0-alt2
  > assert Source.executing(frame).node is None
  E AssertionError: assert is None
  E + where = .node
  --
  SKIPPED [1] tests/test_main.py:783: These tests are very slow, enable them explicitly
  FAILED tests/test_pytest.py::test_exception_catching - AssertionError: assert...
  ============= 1 failed, 191 passed, 17 skipped, 1 warning in 4.98s =============
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-ra', 'tests']' returned non-zero exit status 1.
python3-module-future-1.0.0-alt1.1
  E AssertionError: '///C:/' != '///C:'
  E - ///C:/
  --
  '/////folder/test/')
  E AssertionError: '///folder/test/' != '/////folder/test/'
  E - ///folder/test/
  --
  > self.assertEqual(pathname2url(url2pathname(path)), path)
  E AssertionError: '//folder/test/' != '/////folder/test/'
  E - //folder/test/
  --
  =========================== short test summary info ============================
  FAILED tests/test_future/test_urllib.py::URL2PathNameTests::test_converting_when_no_drive_letter
  FAILED tests/test_future/test_urllib.py::URL2PathNameTests::test_roundtrip_url2pathname
  FAILED tests/test_future/test_urllib.py::PathName2URLTests::test_converting_drive_letter
  FAILED tests/test_future/test_urllib.py::PathName2URLTests::test_converting_when_no_drive_letter
  FAILED tests/test_future/test_urllib.py::PathName2URLTests::test_roundtrip_pathname2url
  FAILED tests/test_future/test_urllib_toplevel.py::URL2PathNameTests::test_converting_when_no_drive_letter
  FAILED tests/test_future/test_urllib_toplevel.py::URL2PathNameTests::test_roundtrip_url2pathname
  FAILED tests/test_future/test_urllib_toplevel.py::PathName2URLTests::test_converting_drive_letter

python3-module-inline-snapshot-0.11.0-alt1
  > assert result.report == snapshot("")
  E AssertionError: assert '\nInfo: one ... interactiv\n' == ''
  E
  --
  > assert result.report == snapshot(
  Error: one snapshot has incorrect values (--inline-snapshot=fix)
  Info: one snapshot can be trimmed (--inline-snapshot=trim)
  You can also use --inline-snapshot=review to approve the changes interactiv
  E AssertionError: assert '\nError: one... interactiv\n' == '\nError: one... interactiv\n'
  E
  --
  =============================== inline snapshot ================================
  Error: one snapshot has incorrect values (--inline-snapshot=fix)
  Info: one snapshot can be trimmed (--inline-snapshot=trim)
  --
  =========================== short test summary info ============================
  FAILED test_file.py::test_a - assert 5 == 4
  ============================== 1 failed in 0.19s ===============================
  =============================== inline snapshot ================================
  Error: one snapshot has incorrect values (--inline-snapshot=fix)
  Info: one snapshot changed its representation (--inline-snapshot=update)
  --
  =========================== short test summary info ============================
  FAILED tests/test_pytest_plugin.py::test_update - AssertionError: assert '\nI...
  FAILED tests/test_pytest_plugin.py::test_multiple - AssertionError: assert '\...
  = 2 failed, 347 passed, 1 deselected, 955 subtests passed in 122.67s (0:02:02) =
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-k', 'not pyright']' returned non-zero exit status 1.
python3-module-mdp-3.6.0.15.g64f14eee-alt3
  > return func(*args, **kwds)
  E AssertionError:
  E Arrays are not almost equal to 1 decimals
  --
  =========================== short test summary info ============================
  FAILED mdp/test/test_VartimeSFANode.py::test_VartimeSFANode1 - AssertionError:
  ==== 1 failed, 730 passed, 26 skipped, 176076 warnings in 72.82s (0:01:12) =====
  ERROR: InvocationError for command /usr/src/RPM/BUILD/python3-module-mdp-3.6.0.15.g64f14eee/.tox/py3/bin/pytest mdp (exited with code 1)
  py3 finish: run-test after 73.62 seconds
  --
  ___________________________________ summary ____________________________________
  ERROR: py3: commands failed
  cleanup /usr/src/RPM/BUILD/python3-module-mdp-3.6.0.15.g64f14eee/.tox/.tmp/package/1/MDP-3.6-py2.py3-none-any.whl

python3-module-nbmake-1.5.4-alt1
  tests/test_pytest_plugin.py::test_when_in_build_dir_none_collected PASSED [ 63%]
  tests/test_pytest_plugin.py::test_when_parallel_passing_nbs_then_ok FAILED [ 66%]
  tests/test_pytest_plugin.py::test_when_passing_nbs_then_ok PASSED [ 69%]
  --
  > assert hook_recorder.ret == ExitCode.OK
  E assert ==
  E + where = <_pytest.pytester.HookRecorder object at 0xf37bf270>.ret
  E + and = ExitCode.OK
  --
  [gw0] linux -- Python 3.12.8 /usr/src/RPM/BUILD/python3-module-nbmake-1.5.4/.run_venv/bin/python3
  NBMAKE INTERNAL ERROR
  Kernel died before replying to kernel_info
  --
  File "_zmq.py", line 179, in zmq.backend.cython._zmq._check_rc
  zmq.error.ZMQError: Address already in use (addr='tcp://127.0.0.1:55779')
  ------------------------------ Captured log call -------------------------------
  ERROR traitlets:client.py:568 Error occurred while starting new kernel client for kernel a8bef034-be5b-4ee6-a495-01a9f82475bd: Kernel died before replying to kernel_info
  Learn more about nbmake at https://github.com/treebeardtech/nbmake
  =========================== short test summary info ============================
  FAILED 4.ipynb::
  ========================= 1 failed, 19 passed in 9.16s =========================
  --
  assert self.km is not None
  AssertionError:
  =========================== short test summary info ============================
  FAILED tests/test_pytest_plugin.py::test_when_parallel_passing_nbs_then_ok - ...
  ======================== 1 failed, 32 passed in 52.05s =========================
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-vra']' returned non-zero exit status 1.

python3-module-nox-2024.10.9-alt1
  ------------------------------ Captured log call -------------------------------
  ERROR nox:command.py:55 Program /usr/src/RPM/BUILD/python3-module-nox-2024.10.9/.run_venv/bin/python3 not found.
  =================================== XPASSES ====================================
  --
  XPASS tests/test_logger.py::test_no_color_timestamp[color]
  FAILED tests/test_command.py::test_run_env_systemroot - nox.command.CommandFa...
  ===== 1 failed, 547 passed, 30 skipped, 4 deselected, 1 xpassed in 15.16s ======
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-k', 'not test__create_venv_options']' returned non-zero exit status 1.
python3-module-opentelemetry-contrib-0.47b0-alt1
  WARNING opentelemetry.instrumentation.auto_instrumentation._load:_load.py:116 Configuration of custom_configurator1 not loaded because custom_configurator2 is set by OTEL_PYTHON_CONFIGURATOR
  ERROR opentelemetry.instrumentation.auto_instrumentation._load:_load.py:123 Configuration of custom_configurator2 failed
  Traceback (most recent call last):
  --
  -------------------------------- live log call ---------------------------------
  ERROR opentelemetry.instrumentation.auto_instrumentation._load:_load.py:52 Distribution custom_distro2 configuration failed
  Traceback (most recent call last):
  --
  tests/test_bootstrap.py::TestBootstrap::test_run_unknown_cmd PASSED [ 30%]
  tests/test_bootstrap.py::TestBootstrap::test_run_unknown_cmd ERROR [ 30%]
  tests/test_dependencies.py::TestDependencyConflicts::test_get_dependency_conflicts_empty PASSED [ 32%]
  --
  ==================================== ERRORS ====================================
  ___________ ERROR at teardown of TestBootstrap.test_run_unknown_cmd ____________
  cls =
  --
  > raise RuntimeError("Patch is already started")
  E RuntimeError: Patch is already started
  /usr/lib/python3.12/unittest/mock.py:1446: RuntimeError
  --
  usage: bootstrap [-h] [--version] [-a {install,requirements}]
  bootstrap: error: argument -a/--action: invalid choice: 'pipenv' (choose from install, requirements)
  =============================== warnings summary ===============================
  --
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-k', 'not test_run.py']' returned non-zero exit status 1.

python3-module-oslo.service-3.5.0-alt1
  return send_method(data, *args)
  BrokenPipeError: [Errno 32] Broken pipe
  127.0.0.1 - - [07/Dec/2024 08:28:04] "GET / HTTP/1.1" 200 123 5.023087
  --
  raise self.failureException(msg)
  AssertionError: 60 != 59.94789818814024 within 1 places (0.05210181185975671 difference)
  Totals

python3-module-pdm-2.11.2-alt1
  FAILED tests/cli/test_run.py::test_composite_hooks_inherit_env - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_composite_inherit_env_in_cascade - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_composite_inherit_dotfile - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_composite_can_have_commands - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_run_shortcut - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_run_shortcuts_dont_override_commands - TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  FAILED tests/cli/test_run.py::test_run_shortcut_fail_with_usage_if_script_not_found - AssertionError: assert 'Script unknown: whatever' in '' + where '' = RunResult(exit_code=1, stdout='', stderr='', exception=TypeError('ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given')).stderr
  FAILED tests/cli/test_run.py::test_empty_positionnal_args_still_display_usage[unknown param] - AssertionError: assert 'Usage' in '' + where '' = RunResult(exit_code=1, stdout='', stderr='', exception=TypeError('ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given')).stderr
  FAILED tests/cli/test_run.py::test_empty_positionnal_args_still_display_usage[not an user script] - AssertionError: assert 'Usage' in '' + where '' = RunResult(exit_code=1, stdout='', stderr='', exception=TypeError('ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given')).stderr
  FAILED tests/cli/test_run.py::test_empty_positional_args_display_help - AssertionError: assert 1 == 0 + where 1 = RunResult(exit_code=1, stdout='', stderr='', exception=TypeError('ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given')).exit_code

python3-module-pdm-multirun-1.1.0-alt1
  > namespace, args = self._parse_known_args(args, namespace, intermixed)
  E TypeError: ErrorArgumentParser._parse_known_args() takes 3 positional arguments but 4 were given
  /usr/lib/python3.12/argparse.py:1943: TypeError
  =========================== short test summary info ============================
  FAILED tests/test_plugin.py::test_multirun - TypeError: ErrorArgumentParser._...
  ============================== 1 failed in 0.49s ===============================
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest']' returned non-zero exit status 1.

python3-module-pyproject-installer-0.5.5-alt1
  tests/unit/test_main.py::test_deps_cli_add_depsconfig PASSED [ 94%]
  tests/unit/test_main.py::test_deps_cli_add_wrong_srctype FAILED [ 94%]
  tests/unit/test_main.py::test_deps_cli_add_sourceargs[srcargs0] PASSED [ 94%]
  --
  > assert expected_err_msg.encode("utf-8") in result.stderr
  E assert b"argument hook_name: invalid choice: 'invalid_hook_name' (choose from 'build_wheel', 'build_sdist', 'get_requires_for_build_wheel', 'get_requires_for_build_sdist', 'prepare_metadata_for_build_wheel')\n" in b"usage: backend_caller.py [-h] [--backend-path BACKEND_PATH]\n [--result-fd RESULT_FD] [-v] [--hook-args HOOK_ARGS]\n backend\n {build_wheel,build_sdist,get_requires_for_build_wheel,get_requires_for_build_sdist,prepare_metadata_for_build_wheel}\nbackend_caller.py: error: argument hook_name: invalid choice: 'invalid_hook_name' (choose from build_wheel, build_sdist, get_requires_for_build_wheel, get_requires_for_build_sdist, prepare_metadata_for_build_wheel)\n"
  E + where b"argument hook_name: invalid choice: 'invalid_hook_name' (choose from 'build_wheel', 'build_sdist', 'get_requires_for_build_wheel', 'get_requires_for_build_sdist', 'prepare_metadata_for_build_wheel')\n" = ('utf-8')
  E + where = "argument hook_name: invalid choice: 'invalid_hook_name' (choose from 'build_wheel', 'build_sdist', 'get_requires_for_build_wheel', 'get_requires_for_build_sdist', 'prepare_metadata_for_build_wheel')\n".encode
  E + and b"usage: backend_caller.py [-h] [--backend-path BACKEND_PATH]\n [--result-fd RESULT_FD] [-v] [--hook-args HOOK_ARGS]\n backend\n {build_wheel,build_sdist,get_requires_for_build_wheel,get_requires_for_build_sdist,prepare_metadata_for_build_wheel}\nbackend_caller.py: error: argument hook_name: invalid choice: 'invalid_hook_name' (choose from build_wheel, build_sdist, get_requires_for_build_wheel, get_requires_for_build_sdist, prepare_metadata_for_build_wheel)\n" = CompletedProcess(args=['/usr/src/RPM/BUILD/python3-module-pyproject-installer-0.5.5/.run_venv/bin/python3', '-m', 'pyproject_installer.lib.backend_helper.backend_caller', 'be', 'invalid_hook_name'], returncode=2, stdout=b'', stderr=b"usage: backend_caller.py [-h] [--backend-path BACKEND_PATH]\n [--result-fd RESULT_FD] [-v] [--hook-args HOOK_ARGS]\n backend\n {build_wheel,build_sdist,get_requires_for_build_wheel,get_requires_for_build_sdist,prepare_metadata_for_build_wheel}\nbackend_caller.py: error: argument hook_name: invalid choice: 'invalid_hook_name' (choose from build_wheel, build_sdist, get_requires_for_build_wheel, get_requires_for_build_sdist, prepare_metadata_for_build_wheel)\n").stderr
  tests/unit/test_build/test_backend_caller.py:140: AssertionError
  --
  > assert expected_err_msg in captured.err
  E assert "invalid choice: 'bar' (choose from 'pep517', 'pep518', 'metadata', 'pip_reqfile', 'poetry', 'tox', 'hatch', 'pdm', 'pipenv', 'pep735')" in "usage: python -m pyproject_installer deps add [-h]\n srcname\n {pep517,pep518,metadata,pip_reqfile,poetry,tox,hatch,pdm,pipenv,pep735}\n [srcargs ...]\npython -m pyproject_installer deps add: error: argument srctype: invalid choice: 'bar' (choose from pep517, pep518, metadata, pip_reqfile, poetry, tox, hatch, pdm, pipenv, pep735)\n"
  E + where "usage: python -m pyproject_installer deps add [-h]\n srcname\n {pep517,pep518,metadata,pip_reqfile,poetry,tox,hatch,pdm,pipenv,pep735}\n [srcargs ...]\npython -m pyproject_installer deps add: error: argument srctype: invalid choice: 'bar' (choose from pep517, pep518, metadata, pip_reqfile, poetry, tox, hatch, pdm, pipenv, pep735)\n" = CaptureResult(out='', err="usage: python -m pyproject_installer deps add [-h]\n srcname\n {pep517,pep518,metadata,pip_reqfile,poetry,tox,hatch,pdm,pipenv,pep735}\n [srcargs ...]\npython -m pyproject_installer deps add: error: argument srctype: invalid choice: 'bar' (choose from pep517, pep518, metadata, pip_reqfile, poetry, tox, hatch, pdm, pipenv, pep735)\n").err
  tests/unit/test_main.py:1117: AssertionError
  =========================== short test summary info ============================
  FAILED tests/unit/test_build/test_backend_caller.py::test_invalid_hook_choice
  FAILED tests/unit/test_main.py::test_deps_cli_add_wrong_srctype - assert "inv...
  ======================== 2 failed, 729 passed in 12.28s ========================
  INFO : pyproject_installer : Command's result: FAILURE
  INFO : pyproject_installer : Command's error: Command '['pytest', '-vra', 'tests/unit']' returned non-zero exit status 1.
  error: Bad exit status from /usr/src/tmp/rpm-tmp.84715 (%check)

python3-module-pytest-examples-0.0.13-alt1
  '+x = [1, 2, 3]',
  E AssertionError: assert ['black faile..., 2, 3]', ...] == ['black faile... = [1, 2, 3]']
  E
  --
  collecting ... collected 1 item
  test_black_error.py::test_find_run_examples[my_file.md:3-5] FAILED [100%]
  =================================== FAILURES ===================================
  --
  =========================== short test summary info ============================
  FAILED test_black_error.py::test_find_run_examples[my_file.md:3-5] - Failed: ...
  ============================== 1 failed in 0.01s ===============================
  --
  '+x = [1, 2, 3]',
  E AssertionError: assert ['black faile...- 1,', ...] == ['black faile...- 1,', ...]
  E
  --
  collecting ... collected 1 item
  test_black_error_multiline.py::test_find_run_examples[my_file.md:3-9] FAILED [100%]
  =================================== FAILURES ===================================
  --
  =========================== short test summary info ============================
  FAILED test_black_error_multiline.py::test_find_run_examples[my_file.md:3-9]
  ============================== 1 failed in 0.01s ===============================
  --
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-vra']' returned non-zero exit status 1.
  error: Bad exit status from /usr/src/tmp/rpm-tmp.39775 (%check)

python3-module-pytest-xdist-3.6.1-alt1
  ) == {"gw1": 10}
  E AssertionError: assert {'gw0': 10} == {'gw1': 10}
  E
  --
  XFAIL testing/test_workermanage.py::test_unserialize_warning_msg[Nested] - Nested warning classes are not supported.
  FAILED testing/acceptance_test.py::TestLoadScope::test_workqueue_ordered_by_size
  ============ 1 failed, 190 passed, 6 skipped, 10 xfailed in 44.56s =============
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-ra']' returned non-zero exit status 1.
  error: Bad exit status from /usr/src/tmp/rpm-tmp.22112 (%check)

python3-module-sanic-24.6.0-alt2
  tests/test_cli.py .....................................................
  hasher-privd: parent: handle_io: idle time limit (3600 seconds) exceeded

python3-module-traitlets-5.14.3-alt1
  > for conflict_action in parser._action_conflicts.get(action, []):
  E AttributeError: 'MonkeyPatchedIntrospectiveArgumentParser' object has no attribute '_action_conflicts'
  action = IntrospectAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, required=False, help='show this help message and exit', metavar=None)
  --
  > for conflict_action in parser._action_conflicts.get(action, []):
  E AttributeError: 'MonkeyPatchedIntrospectiveArgumentParser' object has no attribute '_action_conflicts'
  action = IntrospectAction(option_strings=['-h', '--help'], dest='help', nargs=0, const=None, default='==SUPPRESS==', type=None, choices=None, required=False, help='show this help message and exit', metavar=None)
  --
  SKIPPED [1] ../../../../lib/python3/site-packages/_pytest/doctest.py:458: all tests skipped by +SKIP option
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_simple_app
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_custom_completers
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_subcommands
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_subcommands_subapp1
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_subcommands_subapp2
  FAILED tests/config/test_argcomplete.py::TestArgcomplete::test_complete_subcommands_main
  ============= 6 failed, 566 passed, 1 skipped, 1 warning in 1.98s ==============
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'pytest', '-v', '--ignore', 'tests/test_typing.py']' returned non-zero exit status 1.

python3-module-xonsh-0.18.4-alt1
  xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
  TypeError: ArgParser._parse_known_args() takes 3 positional arguments but 4 were given
  INFO : Command's result: FAILURE
  INFO : Command's error: Command '['python3', '-m', 'xonsh', 'run-tests.xsh', 'test', '--', '--timeout=240', '-vvvra']' returned non-zero exit status 1.
thunderbird-128.5.0-alt1
  The details of the failure are as follows:
  ValueError: '/usr/lib/python3/site-packages' is not in the subpath of '/usr/src/.mozbuild/srcdirs/thunderbird-128.5.0-9379c83485c1/_virtualenvs/build'
  File "/usr/src/RPM/BUILD/thunderbird-128.5.0/python/mozbuild/mozbuild/build_commands.py", line 255, in configure

33 error logs REMOVED from the list

Singular-4.2.1-alt2
barcode-0.99-alt3
btrfs-compsize-1.5-alt1
caddy-2.8.4-alt2
collectd-5.12.0-alt6
dd_rescue-1.99.15-alt1
eb-4.4.3-alt1
editorconfig-0.12.8-alt1
efitools-1.9.2-alt3
freeipmi-1.6.14-alt1
geogram-1.7.9-alt2
lexmark2070-0.6-alt1.1
lexmark7000linux-990516-alt1
lmbench-3.0a9-alt1
mgba-0.10.3-alt1
ml85p-0.2.0-alt2
paper-clip-5.5.1-alt1
perl-Test-LectroTest-0.5001-alt3
perl-WWW-Curl-4.17-alt8
ppmtocpva-1.0-alt1.1
printer-driver-lbp660-0.3.1-alt1
printer-driver-lxx74-0.8.4.2-alt1
python-2.7.18-alt11
python3-module-pkginfo-1.10.0-alt1
python3-module-pytest-mpi-0.6-alt1
python3-module-pytest-spec-4.0.0-alt1
seahorse-sharing-3.8.0-alt3
sfizz-1.2.3-alt2
tmux-3.5a-alt1
vorbisgain-0.37-alt1.qa1
w3m-0.5.3-alt4.git20200502
wine-1:9.0.10-alt1
x125-0.2.3-alt1

Total 794 error logs.