From: Alexey Morsov <samurai@ricom.ru>
To: ALT Linux Sisyphus discussion list <sisyphus@altlinux.ru>
Subject: [sisyphus] rsync and ALT rsync server
Date: Fri, 08 Jul 2005 09:15:58 +0400
Message-ID: <42CE0C0E.9000502@ricom.ru> (raw)

Hi,

Last night (around 4 a.m. today, to be precise) I set rsync to mirror Sisyphus. When I got up at 9 and had a look, rsync had seen fit to hang (i.e. it could not be interrupted from the console). It hung so hard that a piece of it remained in memory which neither kill nor kill -9 could remove. In the end I rebooted the machine. Unfortunately, I forgot to copy it. In messages I see the following:

Jul 8 06:36:35 home kernel: Unable to handle kernel paging request at virtual address 7580c530
Jul 8 06:36:35 home kernel: printing eip:
Jul 8 06:36:35 home kernel: b0138ba1
Jul 8 06:36:35 home kernel: *pde = 00000000
Jul 8 06:36:35 home kernel: Oops: 0002 [#1]
Jul 8 06:36:35 home kernel: PREEMPT
Jul 8 06:36:35 home kernel: Modules linked in: nvidia binfmt_misc autofs4 rivafb i2c_algo_bit vgastate ohci1394 ieee1394 shpchp pci_hotplug amd64_agp agpgart snd_via82xx snd_ac97_codec gameport snd_mpu401_uart snd_rawmidi 8250_pnp 8250 serial_core pcspkr tsdev evdev psmouse uhci_hcd ehci_hcd w83627hf eeprom i2c_sensor i2c_isa i2c_viapro sk98lin floppy reiserfs nls_koi8_r nls_cp866 vfat fat nls_base dm_mod capability commoncap powernow_k8 freq_table i2c_dev i2c_core ppp_async ppp_generic slhc crc_ccitt snd_pcm_oss snd_seq_oss snd_seq_midi_event snd_seq snd_seq_device snd_pcm snd_timer snd_page_alloc snd_mixer_oss snd soundcore ide_floppy ide_cd cdrom parport_pc lp parport usb_storage usbcore processor button ac battery rtc xfs exportfs sata_via libata sd_mod scsi_mod ide_disk ide_generic via82cxxx ide_core
Jul 8 06:36:35 home kernel: CPU: 0
Jul 8 06:36:35 home kernel: EIP: 0060:[add_to_page_cache+81/176] Tainted: P VLI
Jul 8 06:36:35 home kernel: EIP: 0060:[<b0138ba1>] Tainted: P VLI
Jul 8 06:36:35 home kernel: EFLAGS: 00210046 (2.6.11-wks26-up-alt2)
Jul 8 06:36:35 home kernel: EIP is at add_to_page_cache+0x51/0xb0
Jul 8 06:36:35 home kernel: eax: 20000000 ebx: 00000000 ecx: 00000000 edx: 0000003a
Jul 8 06:36:35 home kernel: esi: b140b2e0 edi: c105237c ebp: 0000047a esp: e41ddc10
Jul 8 06:36:35 home kernel: ds: 007b es: 007b ss: 0068
Jul 8 06:36:35 home kernel: Process rsync (pid: 24765, threadinfo=e41dc000 task=e389a020)
Jul 8 06:36:35 home kernel: Stack: b140b2e0 00000000 00000006 00000000 b017af6b b140b2e0 c105237c 0000047a
Jul 8 06:36:35 home kernel: 000000d0 00000000 00000000 00000000 00000000 effa7d8c b0141159 00000001
Jul 8 06:36:35 home kernel: ffffffff ffffffff eefa5b80 000003ff ffffffff ffffffff b0138e7e effa0844
Jul 8 06:36:35 home kernel: Call Trace:
Jul 8 06:36:35 home kernel: [mpage_readpages+107/272] mpage_readpages+0x6b/0x110
Jul 8 06:36:35 home kernel: [<b017af6b>] mpage_readpages+0x6b/0x110
Jul 8 06:36:35 home kernel: [cache_alloc_refill+297/544] cache_alloc_refill+0x129/0x220
Jul 8 06:36:35 home kernel: [<b0141159>] cache_alloc_refill+0x129/0x220
Jul 8 06:36:35 home kernel: [find_get_page+30/96] find_get_page+0x1e/0x60
Jul 8 06:36:35 home kernel: [<b0138e7e>] find_get_page+0x1e/0x60
Jul 8 06:36:35 home kernel: [pg0+1088059369/1338311680] linvfs_readpages+0x19/0x20 [xfs]
Jul 8 06:36:35 home kernel: [<f11557e9>] linvfs_readpages+0x19/0x20 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1088058960/1338311680] linvfs_get_block+0x0/0x30 [xfs]
Jul 8 06:36:35 home kernel: [<f1155650>] linvfs_get_block+0x0/0x30 [xfs]
Jul 8 06:36:35 home kernel: [read_pages+241/272] read_pages+0xf1/0x110
Jul 8 06:36:35 home kernel: [<b013fa71>] read_pages+0xf1/0x110
Jul 8 06:36:35 home kernel: [buffered_rmqueue+189/560] buffered_rmqueue+0xbd/0x230
Jul 8 06:36:35 home kernel: [<b013d43d>] buffered_rmqueue+0xbd/0x230
Jul 8 06:36:35 home kernel: [__alloc_pages+420/1072] __alloc_pages+0x1a4/0x430
Jul 8 06:36:35 home kernel: [<b013d814>] __alloc_pages+0x1a4/0x430
Jul 8 06:36:35 home kernel: [__do_page_cache_readahead+377/432] __do_page_cache_readahead+0x179/0x1b0
Jul 8 06:36:35 home kernel: [<b013fc09>] __do_page_cache_readahead+0x179/0x1b0
Jul 8 06:36:35 home kernel: [blockable_page_cache_readahead+38/96] blockable_page_cache_readahead+0x26/0x60
Jul 8 06:36:35 home kernel: [<b013fd96>] blockable_page_cache_readahead+0x26/0x60
Jul 8 06:36:35 home kernel: [page_cache_readahead+198/576] page_cache_readahead+0xc6/0x240
Jul 8 06:36:35 home kernel: [<b013fe96>] page_cache_readahead+0xc6/0x240
Jul 8 06:36:35 home kernel: [do_generic_mapping_read+1072/1312] do_generic_mapping_read+0x430/0x520
Jul 8 06:36:35 home kernel: [<b0139630>] do_generic_mapping_read+0x430/0x520
Jul 8 06:36:35 home kernel: [tcp_recvmsg+724/1824] tcp_recvmsg+0x2d4/0x720
Jul 8 06:36:35 home kernel: [<b0265054>] tcp_recvmsg+0x2d4/0x720
Jul 8 06:36:35 home kernel: [__generic_file_aio_read+249/496] __generic_file_aio_read+0xf9/0x1f0
Jul 8 06:36:35 home kernel: [<b01398f9>] __generic_file_aio_read+0xf9/0x1f0
Jul 8 06:36:35 home kernel: [file_read_actor+0/224] file_read_actor+0x0/0xe0
Jul 8 06:36:35 home kernel: [<b0139720>] file_read_actor+0x0/0xe0
Jul 8 06:36:35 home kernel: [sock_common_recvmsg+59/80] sock_common_recvmsg+0x3b/0x50
Jul 8 06:36:35 home kernel: [<b02395db>] sock_common_recvmsg+0x3b/0x50
Jul 8 06:36:35 home kernel: [pg0+1088082439/1338311680] xfs_read+0x127/0x2b0 [xfs]
Jul 8 06:36:35 home kernel: [<f115b207>] xfs_read+0x127/0x2b0 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1088069129/1338311680] linvfs_aio_read+0x69/0x90 [xfs]
Jul 8 06:36:35 home kernel: [<f1157e09>] linvfs_aio_read+0x69/0x90 [xfs]
Jul 8 06:36:35 home kernel: [do_sync_read+171/240] do_sync_read+0xab/0xf0
Jul 8 06:36:35 home kernel: [<b0156f3b>] do_sync_read+0xab/0xf0
Jul 8 06:36:35 home kernel: [__copy_to_user_ll+66/112] __copy_to_user_ll+0x42/0x70
Jul 8 06:36:35 home kernel: [<b01a8ba2>] __copy_to_user_ll+0x42/0x70
Jul 8 06:36:35 home kernel: [autoremove_wake_function+0/64] autoremove_wake_function+0x0/0x40
Jul 8 06:36:35 home kernel: [<b012ddb0>] autoremove_wake_function+0x0/0x40
Jul 8 06:36:35 home kernel: [kfree_skbmem+23/32] kfree_skbmem+0x17/0x20
Jul 8 06:36:35 home kernel: [<b0239a57>] kfree_skbmem+0x17/0x20
Jul 8 06:36:35 home kernel: [vfs_read+162/256] vfs_read+0xa2/0x100
Jul 8 06:36:35 home kernel: [<b0157022>] vfs_read+0xa2/0x100
Jul 8 06:36:35 home kernel: [sys_read+61/112] sys_read+0x3d/0x70
Jul 8 06:36:35 home kernel: [<b01572ad>] sys_read+0x3d/0x70
Jul 8 06:36:35 home kernel: [syscall_call+7/11] syscall_call+0x7/0xb
Jul 8 06:36:35 home kernel: [<b0102f87>] syscall_call+0x7/0xb
Jul 8 06:36:35 home kernel: Code: 5b 5e 5f 5d c3 90 8d 74 26 00 fa b8 01 00 00 00 e8 65 db fd ff 8d 47 04 56 55 50 e8 da d5 06 00 83 c4 0c 89 c3 85 c0 75 24 8b 06 <89> f2 f6 c4 80 75 53 ff 42 04 0f ba 2e 00 89 7e 10 89 6e 14 8b
Jul 8 06:36:35 home kernel: <6>note: rsync[24765] exited with preempt_count 2
Jul 8 06:36:35 home kernel: scheduling while atomic: rsync/0x10000002/24765
Jul 8 06:36:35 home kernel: [schedule+986/1152] schedule+0x3da/0x480
Jul 8 06:36:35 home kernel: [<b029c75a>] schedule+0x3da/0x480
Jul 8 06:36:35 home kernel: [cond_resched+42/80] cond_resched+0x2a/0x50
Jul 8 06:36:35 home kernel: [<b029d04a>] cond_resched+0x2a/0x50
Jul 8 06:36:35 home kernel: [unmap_vmas+617/640] unmap_vmas+0x269/0x280
Jul 8 06:36:35 home kernel: [<b0146a29>] unmap_vmas+0x269/0x280
Jul 8 06:36:35 home kernel: [exit_mmap+100/336] exit_mmap+0x64/0x150
Jul 8 06:36:35 home kernel: [<b014b3d4>] exit_mmap+0x64/0x150
Jul 8 06:36:35 home kernel: [mmput+35/144] mmput+0x23/0x90
Jul 8 06:36:35 home kernel: [<b0117cf3>] mmput+0x23/0x90
Jul 8 06:36:35 home kernel: [do_exit+189/1008] do_exit+0xbd/0x3f0
Jul 8 06:36:35 home kernel: [<b011c34d>] do_exit+0xbd/0x3f0
Jul 8 06:36:35 home kernel: [add_to_page_cache+81/176] add_to_page_cache+0x51/0xb0
Jul 8 06:36:35 home kernel: [<b0138ba1>] add_to_page_cache+0x51/0xb0
Jul 8 06:36:35 home kernel: [die+352/368] die+0x160/0x170
Jul 8 06:36:35 home kernel: [<b01046d0>] die+0x160/0x170
Jul 8 06:36:35 home kernel: [do_page_fault+918/1480] do_page_fault+0x396/0x5c8
Jul 8 06:36:35 home kernel: [<b0114556>] do_page_fault+0x396/0x5c8
Jul 8 06:36:35 home kernel: [do_page_fault+950/1480] do_page_fault+0x3b6/0x5c8
Jul 8 06:36:35 home kernel: [<b0114576>] do_page_fault+0x3b6/0x5c8
Jul 8 06:36:35 home kernel: [pg0+1087931175/1338311680] xfs_imap_to_bmap+0x27/0x2f0 [xfs]
Jul 8 06:36:35 home kernel: [<f1136327>] xfs_imap_to_bmap+0x27/0x2f0 [xfs]
Jul 8 06:36:35 home kernel: [generic_make_request+165/560] generic_make_request+0xa5/0x230
Jul 8 06:36:35 home kernel: [<b021b545>] generic_make_request+0xa5/0x230
Jul 8 06:36:35 home kernel: [pg0+1087932683/1338311680] xfs_iomap+0x31b/0x460 [xfs]
Jul 8 06:36:35 home kernel: [<f113690b>] xfs_iomap+0x31b/0x460 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1088008249/1338311680] xfs_trans_unlocked_item+0x29/0x40 [xfs]
Jul 8 06:36:35 home kernel: [<f1149039>] xfs_trans_unlocked_item+0x29/0x40 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1087932482/1338311680] xfs_iomap+0x252/0x460 [xfs]
Jul 8 06:36:35 home kernel: [<f1136842>] xfs_iomap+0x252/0x460 [xfs]
Jul 8 06:36:35 home kernel: [do_page_fault+0/1480] do_page_fault+0x0/0x5c8
Jul 8 06:36:35 home kernel: [<b01141c0>] do_page_fault+0x0/0x5c8
Jul 8 06:36:35 home kernel: [error_code+43/48] error_code+0x2b/0x30
Jul 8 06:36:35 home kernel: [<b0103fab>] error_code+0x2b/0x30
Jul 8 06:36:35 home kernel: [add_to_page_cache+81/176] add_to_page_cache+0x51/0xb0
Jul 8 06:36:35 home kernel: [<b0138ba1>] add_to_page_cache+0x51/0xb0
Jul 8 06:36:35 home kernel: [mpage_readpages+107/272] mpage_readpages+0x6b/0x110
Jul 8 06:36:35 home kernel: [<b017af6b>] mpage_readpages+0x6b/0x110
Jul 8 06:36:35 home kernel: [cache_alloc_refill+297/544] cache_alloc_refill+0x129/0x220
Jul 8 06:36:35 home kernel: [<b0141159>] cache_alloc_refill+0x129/0x220
Jul 8 06:36:35 home kernel: [find_get_page+30/96] find_get_page+0x1e/0x60
Jul 8 06:36:35 home kernel: [<b0138e7e>] find_get_page+0x1e/0x60
Jul 8 06:36:35 home kernel: [pg0+1088059369/1338311680] linvfs_readpages+0x19/0x20 [xfs]
Jul 8 06:36:35 home kernel: [<f11557e9>] linvfs_readpages+0x19/0x20 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1088058960/1338311680] linvfs_get_block+0x0/0x30 [xfs]
Jul 8 06:36:35 home kernel: [<f1155650>] linvfs_get_block+0x0/0x30 [xfs]
Jul 8 06:36:35 home kernel: [read_pages+241/272] read_pages+0xf1/0x110
Jul 8 06:36:35 home kernel: [<b013fa71>] read_pages+0xf1/0x110
Jul 8 06:36:35 home kernel: [buffered_rmqueue+189/560] buffered_rmqueue+0xbd/0x230
Jul 8 06:36:35 home kernel: [__alloc_pages+420/1072] __alloc_pages+0x1a4/0x430
Jul 8 06:36:35 home kernel: [<b013d814>] __alloc_pages+0x1a4/0x430
Jul 8 06:36:35 home kernel: [__do_page_cache_readahead+377/432] __do_page_cache_readahead+0x179/0x1b0
Jul 8 06:36:35 home kernel: [<b013fc09>] __do_page_cache_readahead+0x179/0x1b0
Jul 8 06:36:35 home kernel: [blockable_page_cache_readahead+38/96] blockable_page_cache_readahead+0x26/0x60
Jul 8 06:36:35 home kernel: [<b013fd96>] blockable_page_cache_readahead+0x26/0x60
Jul 8 06:36:35 home kernel: [page_cache_readahead+198/576] page_cache_readahead+0xc6/0x240
Jul 8 06:36:35 home kernel: [<b013fe96>] page_cache_readahead+0xc6/0x240
Jul 8 06:36:35 home kernel: [do_generic_mapping_read+1072/1312] do_generic_mapping_read+0x430/0x520
Jul 8 06:36:35 home kernel: [<b0139630>] do_generic_mapping_read+0x430/0x520
Jul 8 06:36:35 home kernel: [tcp_recvmsg+724/1824] tcp_recvmsg+0x2d4/0x720
Jul 8 06:36:35 home kernel: [<b0265054>] tcp_recvmsg+0x2d4/0x720
Jul 8 06:36:35 home kernel: [__generic_file_aio_read+249/496] __generic_file_aio_read+0xf9/0x1f0
Jul 8 06:36:35 home kernel: [<b01398f9>] __generic_file_aio_read+0xf9/0x1f0
Jul 8 06:36:35 home kernel: [file_read_actor+0/224] file_read_actor+0x0/0xe0
Jul 8 06:36:35 home kernel: [<b0139720>] file_read_actor+0x0/0xe0
Jul 8 06:36:35 home kernel: [sock_common_recvmsg+59/80] sock_common_recvmsg+0x3b/0x50
Jul 8 06:36:35 home kernel: [<b02395db>] sock_common_recvmsg+0x3b/0x50
Jul 8 06:36:35 home kernel: [pg0+1088082439/1338311680] xfs_read+0x127/0x2b0 [xfs]
Jul 8 06:36:35 home kernel: [<f115b207>] xfs_read+0x127/0x2b0 [xfs]
Jul 8 06:36:35 home kernel: [pg0+1088069129/1338311680] linvfs_aio_read+0x69/0x90 [xfs]
Jul 8 06:36:35 home kernel: [<f1157e09>] linvfs_aio_read+0x69/0x90 [xfs]
Jul 8 06:36:35 home kernel: [do_sync_read+171/240] do_sync_read+0xab/0xf0
Jul 8 06:36:35 home kernel: [<b0156f3b>] do_sync_read+0xab/0xf0
Jul 8 06:36:35 home kernel: [__copy_to_user_ll+66/112] __copy_to_user_ll+0x42/0x70
Jul 8 06:36:35 home kernel: [<b01a8ba2>] __copy_to_user_ll+0x42/0x70
Jul 8 06:36:35 home kernel: [autoremove_wake_function+0/64] autoremove_wake_function+0x0/0x40
Jul 8 06:36:35 home kernel: [<b012ddb0>] autoremove_wake_function+0x0/0x40
Jul 8 06:36:35 home kernel: [kfree_skbmem+23/32] kfree_skbmem+0x17/0x20
Jul 8 06:36:35 home kernel: [<b0239a57>] kfree_skbmem+0x17/0x20
Jul 8 06:36:35 home kernel: [vfs_read+162/256] vfs_read+0xa2/0x100
Jul 8 06:36:35 home kernel: [<b0157022>] vfs_read+0xa2/0x100
Jul 8 06:36:35 home kernel: [sys_read+61/112] sys_read+0x3d/0x70
Jul 8 06:36:35 home kernel: [<b01572ad>] sys_read+0x3d/0x70
Jul 8 06:36:35 home kernel: [syscall_call+7/11] syscall_call+0x7/0xb
Jul 8 06:36:35 home kernel: [<b0102f87>] syscall_call+0x7/0xb
Jul 8 06:51:01 home pam_tcb[25126]: crond: Session closed for root
Jul 8 07:01:01 home pam_tcb[25151]: crond: Session opened for root by (uid=0)
Jul 8 07:01:01 home crond[25153]: (root) CMD (run-parts /etc/cron.hourly)

The question is the usual one: who is to blame, and what is to be done?

-- 
Best regards,
Alexey Morsov
System administrator, ZAO "IK "RIKOM-TRAST"
ICQ#: 196-766-290
JID: Samurai@jabber.pibhe.com

inn is incomparably worse than wu-ftpd.
		-- ldv in devel@
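[Editor's note] The poster's exact mirror command is not in the message; the sketch below is an assumption about what such a job typically looks like, together with a quick check for the "unkillable" symptom he describes. A process that survives kill -9 is usually stuck in uninterruptible sleep (state D) inside the kernel, which is consistent with the oops above. The host, module, and paths are hypothetical.

```shell
# Hypothetical Sisyphus mirror job (server, module, and destination are
# assumptions, not the poster's actual configuration):
RSYNC_CMD="rsync -av --delete rsync://ftp.altlinux.org/ALTLinux/Sisyphus/ /srv/mirror/sisyphus/"
echo "$RSYNC_CMD"

# Before rebooting, one can confirm that the stuck process is in
# uninterruptible sleep (STAT code D): such processes ignore all signals,
# including SIGKILL, until the kernel operation they are blocked in returns.
ps -eo pid,stat,comm | awk '$2 ~ /^D/ {print}'
```

If the second command lists the hung rsync with a D state, no signal will help; only resolving the underlying kernel/filesystem problem (or a reboot, as here) clears it.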
Thread overview: 3+ messages

  2005-07-08  5:15 Alexey Morsov [this message]
  2005-07-08  5:44 ` iLL
  2005-07-08  9:36   ` Alexey Morsov