From: "Girar awaiter (vt)" <girar-builder@altlinux.org> To: Vitaly Chikunov <vt@altlinux.org> Cc: sisyphus-incominger@lists.altlinux.org, girar-builder-p11@altlinux.org, girar-builder-p11@lists.altlinux.org Subject: [#388454] p11 EPERM (try 2) tinyllamas-gguf.git=0-alt1 llama.cpp.git=5753-alt1 Date: Sun, 29 Jun 2025 22:58:48 +0000 Message-ID: <girar.task.388454.2.1@gyle.mskdc.altlinux.org> (raw) In-Reply-To: <girar.task.388454.1.1@gyle.mskdc.altlinux.org> https://git.altlinux.org/tasks/388454/logs/events.2.1.log https://packages.altlinux.org/tasks/388454 subtask name aarch64 i586 x86_64 #40 tinyllamas-gguf 37 21 21 #100 llama.cpp 12:41 - 7:52 2025-Jun-29 22:38:41 :: task #388454 for p11 resumed by vt: 2025-Jun-29 22:38:41 :: message: update #40 build 0-alt1 from /gears/t/tinyllamas-gguf.git fetched at 2025-Jun-29 22:38:31 from sisyphus #100 build 5753-alt1 from /gears/l/llama.cpp.git fetched at 2025-Jun-29 22:36:16 from sisyphus 2025-Jun-29 22:38:41 :: created build repo 2025-Jun-29 22:38:42 :: #100: force rebuild 2025-Jun-29 22:38:43 :: [aarch64] #40 tinyllamas-gguf.git 0-alt1: build start 2025-Jun-29 22:38:43 :: [i586] #40 tinyllamas-gguf.git 0-alt1: build start 2025-Jun-29 22:38:43 :: [x86_64] #40 tinyllamas-gguf.git 0-alt1: build start 2025-Jun-29 22:39:04 :: [i586] #40 tinyllamas-gguf.git 0-alt1: build OK 2025-Jun-29 22:39:04 :: [i586] #100 llama.cpp.git 5753-alt1: build start 2025-Jun-29 22:39:04 :: [x86_64] #40 tinyllamas-gguf.git 0-alt1: build OK 2025-Jun-29 22:39:05 :: [x86_64] #100 llama.cpp.git 5753-alt1: build start 2025-Jun-29 22:39:17 :: [i586] #100 llama.cpp.git 5753-alt1: build SKIPPED 2025-Jun-29 22:39:20 :: [aarch64] #40 tinyllamas-gguf.git 0-alt1: build OK 2025-Jun-29 22:39:21 :: [aarch64] #100 llama.cpp.git 5753-alt1: build start build/100/x86_64/log:[00:04:14] debuginfo.req: WARNING: /usr/lib64/libcublas.so.12 is not yet debuginfo-enabled build/100/x86_64/log:[00:04:14] debuginfo.req: WARNING: /usr/lib64/libcudart.so.12 is not yet debuginfo-enabled 2025-Jun-29 22:46:57 :: [x86_64] #100 llama.cpp.git 5753-alt1: build OK 2025-Jun-29 22:52:02 :: [aarch64] #100 llama.cpp.git 5753-alt1: build OK 2025-Jun-29 22:52:09 :: #40: tinyllamas-gguf.git 0-alt1: build check OK 2025-Jun-29 22:52:29 :: #100: llama.cpp.git 5753-alt1: build check OK 2025-Jun-29 22:52:31 :: build check OK 2025-Jun-29 22:52:58 :: noarch check OK 2025-Jun-29 22:53:00 :: plan: src +2 -1 =19694, aarch64 +8 -2 =34708, noarch +1 -0 =20811, x86_64 +10 -2 =35476 #100 llama.cpp 20240225-alt1 -> 1:5753-alt1 Wed Jun 25 2025 Vitaly Chikunov <vt@altlinux> 1:5753-alt1 - Update to b5753 (2025-06-24). - Install an experimental rpc backend and server. The rpc code is a proof-of-concept, fragile, and insecure. Sat May 10 2025 Vitaly Chikunov <vt@altlinux> 1:5332-alt1 - Update to b5332 (2025-05-09), with vision support in llama-server. - Enable Vulkan backend (for GPU) in llama.cpp-vulkan package. Mon Mar 10 2025 Vitaly Chikunov <vt@altlinux> 1:4855-alt1 - Update to b4855 (2025-03-07). - Enable CUDA backend (for NVIDIA GPU) in llama.cpp-cuda package. [...] 
2025-Jun-29 22:53:01 :: llama.cpp: closes bugs: 50962
2025-Jun-29 22:53:39 :: patched apt indices
2025-Jun-29 22:53:48 :: created next repo
2025-Jun-29 22:53:58 :: duplicate provides check OK
2025-Jun-29 22:54:34 :: dependencies check OK
2025-Jun-29 22:55:03 :: [x86_64 aarch64] ELF symbols check OK
2025-Jun-29 22:55:16 :: [i586] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:55:18 :: [x86_64] #100 libllama: install check OK
2025-Jun-29 22:55:26 :: [x86_64] #100 libllama-debuginfo: install check OK
2025-Jun-29 22:55:27 :: [aarch64] #100 libllama: install check OK
x86_64: libllama-devel=1:5753-alt1 post-install unowned files:
 /usr/lib64/cmake
2025-Jun-29 22:55:32 :: [x86_64] #100 libllama-devel: install check OK
2025-Jun-29 22:55:40 :: [aarch64] #100 libllama-debuginfo: install check OK
aarch64: libllama-devel=1:5753-alt1 post-install unowned files:
 /usr/lib64/cmake
2025-Jun-29 22:55:51 :: [aarch64] #100 libllama-devel: install check OK
2025-Jun-29 22:55:56 :: [x86_64] #100 llama.cpp: install check OK
2025-Jun-29 22:56:05 :: [x86_64] #100 llama.cpp-cpu: install check OK
2025-Jun-29 22:56:05 :: [aarch64] #100 llama.cpp: install check OK
2025-Jun-29 22:56:20 :: [aarch64] #100 llama.cpp-cpu: install check OK
2025-Jun-29 22:56:25 :: [x86_64] #100 llama.cpp-cpu-debuginfo: install check OK
2025-Jun-29 22:56:49 :: [x86_64] #100 llama.cpp-cuda: install check OK
2025-Jun-29 22:56:51 :: [aarch64] #100 llama.cpp-cpu-debuginfo: install check OK
2025-Jun-29 22:57:06 :: [aarch64] #100 llama.cpp-vulkan: install check OK
2025-Jun-29 22:57:15 :: [x86_64] #100 llama.cpp-cuda-debuginfo: install check OK
2025-Jun-29 22:57:24 :: [aarch64] #100 llama.cpp-vulkan-debuginfo: install check OK
2025-Jun-29 22:57:24 :: [x86_64] #100 llama.cpp-vulkan: install check OK
2025-Jun-29 22:57:34 :: [aarch64] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:57:35 :: [x86_64] #100 llama.cpp-vulkan-debuginfo: install check OK
2025-Jun-29 22:57:41 :: [x86_64] #40 tinyllamas-gguf: install check OK
2025-Jun-29 22:58:00 :: [x86_64-i586] generated apt indices
2025-Jun-29 22:58:00 :: [x86_64-i586] created next repo
2025-Jun-29 22:58:10 :: [x86_64-i586] dependencies check OK
2025-Jun-29 22:58:12 :: gears inheritance check OK
2025-Jun-29 22:58:13 :: srpm inheritance check OK
girar-check-perms: access to tinyllamas-gguf DENIED for vt: project `tinyllamas-gguf' is not listed in the acl file for repository `p11', and the policy for such projects in `p11' is to deny
check-subtask-perms: #40: tinyllamas-gguf: needs approvals from members of @maint and @tester groups
girar-check-perms: access to llama.cpp DENIED for vt: project `llama.cpp' is not listed in the acl file for repository `p11', and the policy for such projects in `p11' is to deny
check-subtask-perms: #100: llama.cpp: needs approvals from members of @maint and @tester groups
2025-Jun-29 22:58:15 :: acl check FAILED
2025-Jun-29 22:58:36 :: created contents_index files
2025-Jun-29 22:58:44 :: created hash files: aarch64 noarch src x86_64
2025-Jun-29 22:58:48 :: task #388454 for p11 EPERM
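Note: the EPERM above is the expected outcome of this try rather than a build
failure. Both subtasks passed all build and install checks; the task is only
blocked until members of the @maint and @tester groups sign off on each
subtask. As a sketch of how that sign-off is usually given (the exact
invocation is an assumption about girar's ssh command interface, not something
stated in this message), an approver would run something like:

  # assumed girar ssh interface; approve subtasks #40 and #100 of task 388454
  ssh girar task approve 388454 40
  ssh girar task approve 388454 100

Once a member of each required group has approved, the submitter can resume
the task and the acl check should pass.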
Thread overview: 2+ messages
2025-06-29 22:36 [#388454] [test-only] p11 FAILED llama.cpp.git=5753-alt1  Girar awaiter (vt)
2025-06-29 22:58 ` Girar awaiter (vt) [this message]