

Place the disk containing the root file system (that is, the root or boot disk) under Volume Manager control (through encapsulation). This converts the root and swap devices to volumes (rootvol and swapvol). You should then mirror the root disk so that an alternate root disk exists for booting purposes. By mirroring disks critical to booting, you ensure that no single disk failure will leave your system unbootable and unusable. By mirroring your data, you prevent data loss from a disk failure.
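As a rough sketch of how that encapsulation and mirroring is usually done on Solaris (the mirror disk's media name, rootmirror, is illustrative, and the vxdiskadm menu wording varies by VxVM version):

    # Encapsulate the boot disk: root and swap become rootvol and swapvol.
    # In vxdiskadm, choose "Encapsulate one or more disks"; a reboot follows.
    vxdiskadm
    # Add a second disk of at least the same size to rootdg (e.g. as rootmirror),
    # then mirror the boot volumes onto it with the Solaris helper script:
    /etc/vx/bin/vxrootmir rootmirror

Afterwards, vxprint -g rootdg -ht should show two ACTIVE plexes for rootvol.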
The VERITAS Volume Manager provides the ability to protect systems from disk failures and to recover from disk failures. This appendix describes various recovery procedures and provides information to help you prevent loss of data or system access due to disk failures. It also describes possible plex and volume states. For information specific to volume recovery, refer to Chapter 4. The following topics are covered in this appendix, among them Reinstallation and Reconfiguration Procedures.

Disk failures can cause two types of problems: loss of data on the failed disk, and loss of access to your system due to the failure of a key disk (a disk involved with system operation). The VERITAS Volume Manager provides the ability to protect your system from either type of problem. In order to maintain system availability, the data important to running and booting your system must be mirrored. Furthermore, it must be preserved in such a way that it can be used in case of failure. The steps described above (placing the boot disk under Volume Manager control and mirroring it, and mirroring your data) are some suggestions on how to protect your system and data.

The system was running fine before the c2t5d0 device failed. Here are the emails I got from the server when the drive failed.

1. Failures have been detected by the VERITAS Volume Manager:
The following volumes have storage on c2t5d0:
These volumes are still usable, but the redundancy of those volumes is reduced. Any RAID-5 volumes with storage on the failed disk may become unusable in the face of further failures.

2. VERITAS Volume Manager is preparing to relocate for diskgroup rootdg.
Attempting to relocate subdisk c2t5d0-03 from plex u044-P11.
Dev_offset - length 31457280 dm_name c2t5d0 da_name c2t5d0s2.
The available plex u044-P12 will be used to recover the data.

3. Relocation was not successful for subdisks on disk c2t5d0 in volume u044-L04 in disk group rootdg. No replacement was made and the disk is still unusable.
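When mails like these arrive, the usual first step is to confirm how VxVM currently sees the disk and its volumes. A minimal sketch, using the rootdg disk group named in the mails (exact output formats vary by VxVM version):

    vxdisk -o alldgs list    # the failed drive should show a failed/error status
    vxprint -g rootdg -ht    # plex and subdisk states (NODEVICE, IOFAIL, RECOVER, ...)
    vxinfo -g rootdg         # per-volume usability summary
    vxtask list              # any hot-relocation or recovery jobs still running

If redundancy is reduced but the volumes are still usable, as the first mail says, the data is intact and the job is to replace the disk and resynchronize, not to restore from backup.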

RE: Veritas Volume Manager Problem 100mbs (MIS)
Why don't you give us a little more background on your problem: what type of drive is it (fibre/SCSI), and what type of storage array is it in? Do you use a hot-spare pool for the disk group, etc.?

The drive in question is one of the internal drives.
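On the hot-spare question: VxVM's hot-relocation daemon (vxrelocd) will use any disk in the group flagged as a spare. A quick sketch, with an illustrative disk media name:

    # Flag a rootdg disk as a hot-relocation spare
    vxedit -g rootdg set spare=on disk05
    # Clear the flag to return the disk to general use
    vxedit -g rootdg set spare=off disk05

With no spares flagged, vxrelocd falls back to free space in the disk group, which is what the relocation mails earlier in the thread were attempting to use.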

For some reason, every time I remove a disk for replacement and then put the new disk in and do the following, I get errors. Run format and label the new disk, then exit.

Enter disk device or "all" (default: all)

I have run the above commands a few times, and every time it keeps adding more disk devices. Now, after running the above commands, I get a printout that looks like this below:

I restarted the vxconfigd services and this is what the vxdisk list looked like:

What the heck am I doing wrong? I can't remove and replace a failed disk without this happening.

OK, I had this same problem a few weeks ago, and a reboot of the server fixed the problem.

RE: Veritas Volume Manager Problem Annihilannic (MIS) 4 Apr 07 12:35
I have seen a similar issue when replacing an encapsulated root mirror. The reason for the problem I saw was that a customer had a c0t0d0 drive go bad in a 280R, which has fibre drives. The customer's dump device was pointing to c0t0d0s1, since you can't point it to swapvol, and the customer started the replacement procedure using option 4 without changing the dump device to their c0t1d0s1 drive. The customer then ran the luxadm remove_device command on the drive using the -f flag, then inserted the new drive with the luxadm insert_device command, then ran devfsadm, then vxdctl enable. They then saw a duplicate entry in vxdisk list for the c0t0d0 device and were unable to perform option 5 to replace the bad disk.

What really happens is that luxadm remove_device does not actually remove the /dev/dsk, /dev/rdsk, and /devices entries or the Veritas DMP entries for the bad drive, and when you insert a new device Veritas can't correctly reference the WWN of the new drive. If the customer had moved the dump device to c0t1d0s1 before executing any Veritas options or commands, they would not have had this problem. So you need to make sure there are no mounts to the device and that the device is not being used as the dump device.
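Putting that advice into sequence, here is a hedged sketch of a replacement that avoids both traps (the in-use dump device and the stale device entries). Device names follow the 280R example above; the vxdiskadm option numbers are the ones quoted in the thread, but verify them against your version's menu:

    # 1. Repoint the crash dump device at the surviving mirror first
    dumpadm -d /dev/dsk/c0t1d0s1
    # 2. In vxdiskadm, option 4: "Remove a disk for replacement"
    vxdiskadm
    # 3. Remove the bad FC-AL drive and clean up its stale device links
    luxadm remove_device /dev/rdsk/c0t0d0s2
    devfsadm -C            # -C prunes dangling /dev links left behind
    # 4. Insert the new drive and build its device links
    luxadm insert_device
    devfsadm
    # 5. Have vxconfigd rescan, then vxdiskadm option 5:
    #    "Replace a failed or removed disk"
    vxdctl enable
    vxdiskadm

If vxdisk list still shows a duplicate entry for the old device at that point, removing the stale record with vxdisk rm and running vxdctl enable again is usually enough to clear it.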
