From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.1 (2015-04-28) on sa.local.altlinux.org
X-Spam-Level:
X-Spam-Status: No, score=-3.3 required=5.0 tests=BAYES_00,RP_MATCHES_RCVD
	autolearn=unavailable autolearn_force=no version=3.4.1
Date: Thu, 25 Dec 2025 16:35:13 +0000
From: "Girar awaiter (nash)"
To: Nikita Shmatko
Subject: [#398190] [test-only] FAILED (try 16) nvidia-cudnn.git=9.13.1.26-alt1
	nvidia-nccl.git=2.28.3-alt1 ...
Message-ID:
Mail-Followup-To: Girar awaiter robot
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
X-girar-task-id: 398190
X-girar-task-owner: nash
X-girar-task-repo: sisyphus
X-girar-task-try: 16
X-girar-task-iter: 1
X-girar-task-status: FAILED
X-girar-task-URL: https://git.altlinux.org/tasks/398190/
X-girar-task-log: logs/events.16.1.log
X-girar-task-summary: [#398190] [test-only] FAILED (try 16)
	nvidia-cudnn.git=9.13.1.26-alt1 nvidia-nccl.git=2.28.3-alt1
	dlpack.git=1.2-alt1 nvidia-cudnn-frontend.git=1.15.0-alt1
	nvidia-cutlass.git=4.2.1-alt1 moodycamel-concurrentqueue.git=1.0.4-alt1
	del=python3-module-torch srpm=python3-module-torch-cpu-2.9.1-alt1.src.rpm
	srpm=python3-module-torch-cuda-2.9.1-alt1.src.rpm
User-Agent: Mutt/1.10.1 (2018-07-13)
Cc: sisyphus-incominger@lists.altlinux.org, girar-builder-sisyphus@altlinux.org
X-BeenThere: sisyphus-incominger@lists.altlinux.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: ALT Devel discussion list
List-Id: ALT Linux Girar Builder robot reports
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 25 Dec 2025 16:35:16 -0000
Archived-At:
List-Archive:

https://git.altlinux.org/tasks/398190/logs/events.16.1.log
https://packages.altlinux.org/tasks/398190

 subtask  name                        aarch64  i586   x86_64
  #2300   python3-module-torch-cuda    failed     -  1:33:07

2025-Dec-25 08:25:53 :: test-only task #398190 for sisyphus resumed by nash:
2025-Dec-25 08:25:53 :: message: PyTorch with CUDA support
#100 removed
#200 removed
#300 removed
#400 build 9.13.1.26-alt1 from /people/nash/packages/nvidia-cudnn.git fetched at 2025-Nov-26 08:25:27
#500 build 2.28.3-alt1 from /people/nash/packages/nvidia-nccl.git fetched at 2025-Nov-26 08:26:26
#600 removed
#700 removed
#740 build 1.2-alt1 from /people/nash/packages/dlpack.git fetched at 2025-Nov-26 09:59:35
#1000 build 1.15.0-alt1 from /people/nash/packages/nvidia-cudnn-frontend.git fetched at 2025-Nov-26 09:36:42
#1100 build 4.2.1-alt1 from /people/nash/packages/nvidia-cutlass.git fetched at 2025-Nov-26 09:36:54
#1140 build 1.0.4-alt1 from /people/nash/packages/moodycamel-concurrentqueue.git fetched at 2025-Nov-28 08:49:10
#1200 removed
#1240 removed
#1300 delete python3-module-torch
#1400 removed
#1500 removed
#1600 removed
#1640 removed
#1700 removed
#2000 build python3-module-torch-cpu-2.9.1-alt1.src.rpm
#2100 removed
#2200 removed
#2300 build python3-module-torch-cuda-2.9.1-alt1.src.rpm
2025-Dec-25 08:25:54 :: created build repo
2025-Dec-25 08:25:55 :: [aarch64] #400 nvidia-cudnn.git 9.13.1.26-alt1: build start
2025-Dec-25 08:25:55 :: [x86_64] #400 nvidia-cudnn.git 9.13.1.26-alt1: build start
2025-Dec-25 08:25:55 :: [i586] #400 nvidia-cudnn.git 9.13.1.26-alt1: build start
2025-Dec-25 08:26:24 :: [i586] #400 nvidia-cudnn.git 9.13.1.26-alt1: build SKIPPED
2025-Dec-25 08:26:24 :: [i586] #500 nvidia-nccl.git 2.28.3-alt1: build start
2025-Dec-25 08:26:34 :: [x86_64] #400 nvidia-cudnn.git 9.13.1.26-alt1: build OK (cached)
2025-Dec-25 08:26:34 :: [x86_64] #500 nvidia-nccl.git 2.28.3-alt1: build start
2025-Dec-25 08:26:36 :: [i586] #500 nvidia-nccl.git 2.28.3-alt1: build SKIPPED
2025-Dec-25 08:26:37 :: [i586] #740 dlpack.git 1.2-alt1: build start
2025-Dec-25 08:26:50 :: [i586] #740 dlpack.git 1.2-alt1: build OK (cached)
2025-Dec-25 08:26:50 :: [i586] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build start
build/500/x86_64/log:[00:08:51] debuginfo.req: WARNING: /usr/lib64/libcudart.so.12 is not yet debuginfo-enabled
2025-Dec-25 08:26:55 :: [x86_64] #500 nvidia-nccl.git 2.28.3-alt1: build OK (cached)
2025-Dec-25 08:26:55 :: [x86_64] #740 dlpack.git 1.2-alt1: build start
2025-Dec-25 08:26:58 :: [aarch64] #400 nvidia-cudnn.git 9.13.1.26-alt1: build OK (cached)
2025-Dec-25 08:26:58 :: [aarch64] #500 nvidia-nccl.git 2.28.3-alt1: build start
2025-Dec-25 08:27:05 :: [i586] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build SKIPPED
2025-Dec-25 08:27:05 :: [i586] #1100 nvidia-cutlass.git 4.2.1-alt1: build start
2025-Dec-25 08:27:09 :: [x86_64] #740 dlpack.git 1.2-alt1: build OK (cached)
2025-Dec-25 08:27:10 :: [x86_64] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build start
2025-Dec-25 08:27:22 :: [i586] #1100 nvidia-cutlass.git 4.2.1-alt1: build SKIPPED
2025-Dec-25 08:27:22 :: [i586] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build start
2025-Dec-25 08:27:29 :: [x86_64] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build OK (cached)
2025-Dec-25 08:27:30 :: [x86_64] #1100 nvidia-cutlass.git 4.2.1-alt1: build start
build/500/aarch64/log:[00:18:52] debuginfo.req: WARNING: /usr/lib64/libcudart.so.12 is not yet debuginfo-enabled
2025-Dec-25 08:27:33 :: [aarch64] #500 nvidia-nccl.git 2.28.3-alt1: build OK (cached)
2025-Dec-25 08:27:34 :: [aarch64] #740 dlpack.git 1.2-alt1: build start
2025-Dec-25 08:27:36 :: [i586] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build OK (cached)
2025-Dec-25 08:27:36 :: [i586] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build start
2025-Dec-25 08:27:54 :: [x86_64] #1100 nvidia-cutlass.git 4.2.1-alt1: build OK (cached)
2025-Dec-25 08:27:55 :: [x86_64] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build start
2025-Dec-25 08:27:59 :: [aarch64] #740 dlpack.git 1.2-alt1: build OK (cached)
2025-Dec-25 08:28:00 :: [aarch64] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build start
2025-Dec-25 08:28:00 :: [i586] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build SKIPPED
2025-Dec-25 08:28:00 :: [i586] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build start
2025-Dec-25 08:28:09 :: [x86_64] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build OK (cached)
2025-Dec-25 08:28:09 :: [x86_64] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build start
2025-Dec-25 08:28:24 :: [i586] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build SKIPPED
2025-Dec-25 08:28:28 :: [x86_64] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build OK (cached)
2025-Dec-25 08:28:28 :: [x86_64] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build start
2025-Dec-25 08:28:34 :: [aarch64] #1000 nvidia-cudnn-frontend.git 1.15.0-alt1: build OK (cached)
2025-Dec-25 08:28:34 :: [aarch64] #1100 nvidia-cutlass.git 4.2.1-alt1: build start
2025-Dec-25 08:29:15 :: [aarch64] #1100 nvidia-cutlass.git 4.2.1-alt1: build OK (cached)
2025-Dec-25 08:29:15 :: [aarch64] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build start
2025-Dec-25 08:29:40 :: [aarch64] #1140 moodycamel-concurrentqueue.git 1.0.4-alt1: build OK (cached)
2025-Dec-25 08:29:41 :: [aarch64] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build start
2025-Dec-25 08:30:12 :: [aarch64] #2000 python3-module-torch-cpu-2.9.1-alt1.src.rpm: build OK (cached)
2025-Dec-25 08:30:12 :: [aarch64] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build start
2025-Dec-25 10:01:35 :: [x86_64] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build OK
[aarch64] ptxas warning : Value of threads per SM for entry _ZN2at6native52_GLOBAL__N__3e5e678e_19_DilatedMaxPool2d_cu_6258b57422max_pool_backward_nchwIN3c108BFloat16EfEEvPKT_PKlillliiiiiiiiiiPS5_ is out of range. .minnctapersm will be ignored
[aarch64] ptxas warning : Value of threads per SM for entry _ZN2at6native52_GLOBAL__N__3e5e678e_19_DilatedMaxPool2d_cu_6258b57422max_pool_backward_nchwIN3c104HalfEfEEvPKT_PKlillliiiiiiiiiiPS5_ is out of range. .minnctapersm will be ignored
[aarch64] ptxas warning : Value of threads per SM for entry _ZN2at6native52_GLOBAL__N__3e5e678e_19_DilatedMaxPool2d_cu_6258b57422max_pool_backward_nchwIffEEvPKT_PKlillliiiiiiiiiiPS3_ is out of range. .minnctapersm will be ignored
[aarch64] ptxas warning : Value of threads per SM for entry _ZN2at6native52_GLOBAL__N__3e5e678e_19_DilatedMaxPool2d_cu_6258b57422max_pool_backward_nchwIddEEvPKT_PKlillliiiiiiiiiiPS3_ is out of range. .minnctapersm will be ignored
[aarch64] [1574/2022] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DepthwiseConv3d.cu.o
[aarch64] [1575/2022] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DilatedMaxPool3d.cu.o
[aarch64] [1576/2022] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DistanceKernel.cu.o
[aarch64] [1577/2022] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DistributionCauchyKernel.cu.o
[aarch64] [1578/2022] Building CUDA object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/DistributionExponentialKernel.cu.o
[aarch64] hasher-privd: parent: work_limits_ok: time elapsed limit (28800 seconds) exceeded
2025-Dec-25 16:35:12 :: [aarch64] python3-module-torch-cuda-2.9.1-alt1.src.rpm: remote: build failed
2025-Dec-25 16:35:12 :: [aarch64] #2300 python3-module-torch-cuda-2.9.1-alt1.src.rpm: build FAILED
2025-Dec-25 16:35:12 :: [aarch64] requesting cancellation of task processing
2025-Dec-25 16:35:13 :: [aarch64] build FAILED
2025-Dec-25 16:35:13 :: task #398190 for sisyphus FAILED