Solaris 10 disk fail.
(OP)
Hi!
First, I apologize for my bad English...
We have a Solaris 10 machine with one disk for the system and some raid-z zpools across the other 11 disks (12 disks in total). Here are the disks in the system:
0. c0t0d0 <DEFAULT cyl 60763 alt 2 hd 255 sec 126>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@0,0
1. c0t1d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@1,0
2. c0t2d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@2,0
3. c0t3d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@3,0
4. c0t4d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@4,0
5. c0t5d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@5,0
6. c0t6d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@6,0
7. c0t7d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@7,0
8. c0t8d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@8,0
9. c0t9d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@9,0
10. c0t10d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@a,0
11. c0t11d0 <Adaptec-31605-V1.0-930.99GB>
/pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@b,0
After a hard reboot (due to a machine freeze), our machine doesn't start. It seems one of the disks is failing (sd2 or c0t2d0). I think the problem is the same as the one already discussed in this thread: http://www.tek-tips.com/viewthread.cfm?qid=1124447&page=1
The only difference is that our system is not running, and we don't know how to boot it from the system disk so that we can then try to recover the damaged disk.
At the moment, we have this message on screen:
WARNING: /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@0,0 (sd2):
Error for command: read(10) Error Level: Retryable
Requested Block:115024900 Error Block: 0
Vendor: Adaptec Serial Number:
Sense Key: Unit Attention
ASC:0x29 (scsi bus reset occurred), ASCQ: 0x2, FRU: 0x0
WARNING: /pci@0,0/pci8086,25f7@2/pci8086,3500@0/pci8086,3510@0/pci8086,370@0/pci9005,2be@e/sd@0,0 (sd2):
SCSI transport failed: reason 'reset': retrying command.
Could anybody help us, please? Any kind of information (or a link to it) will be greatly appreciated.
Thank you very much.
RE: Solaris 10 disk fail.
You can boot off a Solaris 10 CD or DVD to try and recover some of the data from it, but you'll still need to replace the disk and restore from backup anyway, so there isn't much point.
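For what it's worth, a minimal sketch of that recovery approach, assuming the root filesystem is UFS on slice 0 (c0t0d0s0) and some NFS server is reachable as a copy target; the host and share names below are only placeholders, adjust everything to your own layout:

    # boot from the Solaris 10 install media and choose the single user shell option, then:
    fsck -y /dev/rdsk/c0t0d0s0                 # clean the root slice if possible
    mount -F ufs -o ro /dev/dsk/c0t0d0s0 /a    # mount it read-only on /a

    # copy whatever is still readable somewhere safe, e.g. over NFS (backuphost is hypothetical)
    mkdir /tmp/rescue
    mount -F nfs backuphost:/export/rescue /tmp/rescue
    cd /a && find . -depth -print | cpio -pdm /tmp/rescue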
It may be a good idea to mirror the system disk in future.
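For reference, a rough sketch of one way to mirror the boot disk on Solaris 10 x86, assuming a ZFS root pool named rpool and a hypothetical spare disk c1t0d0 of the same size; with a UFS root you would use Solaris Volume Manager (metadb/metainit/metattach) instead:

    # copy the partition table of the current boot disk to the new disk
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

    # attach the new disk as a mirror of the root pool (pool name rpool is an assumption)
    zpool attach rpool c0t0d0s0 c1t0d0s0

    # make the second disk bootable (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0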
Annihilannic.
RE: Solaris 10 disk fail.
You're right, I know. But I suppose that if I have a ZFS raid-z, my only problem should be to replace the disk and let ZFS resilver it (isn't raid-z meant precisely for situations like this?).
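For reference, a rough sketch of what that replacement might look like once the machine is up again, assuming the pool name is tank (hypothetical) and the new drive goes into the same slot:

    # check which device the pool reports as faulted
    zpool status -x

    # after physically swapping the drive, tell ZFS to rebuild onto it
    zpool replace tank c0t2d0

    # watch the resilver progress
    zpool status tank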
If the system is on disk 0, why can't I boot? Do I perhaps need to start in single-user mode, perform some operation on this disk that is part of the zpool, and then boot normally?
Thank you very much in advance.
RE: Solaris 10 disk fail.
Can you tell whether that disk is spinning up at all?
Annihilannic.
RE: Solaris 10 disk fail.
OK, so is my only option to start the system with an OpenSolaris LiveCD and run an fsck, or could I also do this from a Solaris 10 installation CD/DVD?
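For reference, a minimal sketch of running that check from the Solaris 10 install media instead of a LiveCD, assuming the root filesystem is UFS on c0t0d0s0 (the slice number is an assumption):

    # boot from the Solaris 10 DVD, choose the single user shell option, then:
    fsck -y /dev/rdsk/c0t0d0s0     # repair the UFS root slice
    # note: fsck only applies to UFS; the raid-z disks are ZFS and should not be fsck'ed,
    # ZFS checks and repairs them itself (zpool scrub / resilver)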
Thank you very much for your help.
RE: Solaris 10 disk fail.
Annihilannic.
RE: Solaris 10 disk fail.
Now I just have to wait and hope that everything finishes OK.
Thank you for your great help.
Regards.