On OpenVMS, this speed differential goes all the way back to the HSC50
controller (if not before), where a controller-level storage-cloning
operation is substantially faster than a host-based clone.
HSC50 is the first OpenVMS-supported storage controller I'm aware of
that had a controller-level cloning feature, but there may have been
earlier examples. And it was wicked fast, at least for disk-to-disk cloning.
Post by Phillip Helbig (undress to reply)
Will forming a shadow set cause random I/O all over the failing disk or
will the disk be read exactly once from start to finish in strict
sequential order without any additional I/Os ?
Yes, host-based volume shadowing (HBVS) will churn I/O. And the source
volume has to be mounted read-write, and I'd rather not write to a
failing hard disk drive.
And HBVS will attempt to re-vector failing blocks after first attempting
to re-read them, and it's somewhere between possible and quite likely
that the re-vectoring attempts will fail due to the depletion of the
spare blocks.
And that all assumes a corrupt source volume doesn't abort the HBVS
virtual unit formation entirely, which is what I'd expect to happen here.
Post by Phillip Helbig (undress to reply)
There are shadowing experts here who might reply.
I would not use shadowing for failing-storage-hardware volume recovery. Nope.
BACKUP /PHYSICAL, dd, COPY, controller-level cloning if that's
available, or other data-recovery-related tools (DISKBLOCK) that are
intended and coded to be somewhat more tolerant of errors in the source
data are the path. This with the source mounted write-locked, and
usually also mounted foreign.
DISKBLOCK063 was here: http://eisner.encompasserve.org/~halle/
Some semi-related reading:
Disk Bad Block Processing?
The Question is:
How does one operate correctly with SCSI disks, or give a disk a second
chance to be useful, while the error count keeps growing, BADLOG.SYS
keeps growing, and particular files cannot be accessed?
Meaning of the question:
1. Is the Bad Block Locator utility (the ANALYZE/MEDIA DCL command) now
obsolete for this type of (SCSI) device, or not? Does this utility
automatically record bad block locations in the file
ddcu:[MFD]BADBLK.SYS on the tested volume, so that untrustworthy
blocks are not reused?
2. Is ANALYZE/DISK now obsolete for this type of (SCSI) device, or not?
Does this utility automatically record bad block locations in
ddcu:[MFD]BADLOG.SYS, so that untrustworthy blocks are not reused?
I want to be sure that I have workable tools, even if they come from a
maintenance toolkit; something like the annoying MS Scandisk. This is a
problem for me, because I am not confident of data integrity with all
of these RAID-nn solutions, when a logical fault from an ordinary file,
which represents the entry level of data input, may destroy the whole
carefully-designed secure system.
It would be good if the storage device repaired this problem itself, as
DSSI devices do, but it is also good to have a transparent ability and
the possibility of assurance.
With best regards and great respect from a long-term VMS worshipper.
Live long and prosper.
The Answer is:
The ANALYZE/MEDIA (BAD) utility, the BAD*.SYS files, and the related
tools are typically not required on modern disks; OpenVMS, the
controllers and/or the disks themselves, and volume shadowing or a RAID
controller all conspire to provide the appearance of a logically perfect
storage medium. That said, there are cases where the BADBLK.SYS file
will be used to store bad blocks.
Failing disk blocks are typically revectored to spare blocks as failing
(or failed) blocks are detected, usually on write verify or on detection
of corruption on read. The implementation details do differ depending on
the particular disk device involved, and can involve the disk device, the
controller, and/or the host operating system software.
With more recent SCSI devices and OpenVMS V6.2 and later, the AWRE
(Automatic Write Reallocation Enabled) and ARRE (Automatic Read
Reallocation Enabled) mechanisms are available -- when these are
disabled, the host handles the revectoring. When AWRE is enabled, the
controller and the disk handle bad blocks and the drive appears perfect
(until there are more bad blocks than spare blocks).
When accessing a directly-connected disk with AWRE and ARRE disabled,
OpenVMS performs the defect management.
If a hard error is encountered on a SCSI disk write (via the XQP or via
host-based shadowing), OpenVMS will issue a REASSIGN command to the disk
and the disk will revector the block, adding the bad block to its
"GLIST" (grown defect list) -- the SCSI equivalent of the DSA structure
known as the Replacement and Caching Table (RCT). If the defect list is
full or if the disk is incapable of REASSIGN, OpenVMS then marks the
block bad in BADBLK.SYS.
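The REASSIGN exchange mentioned here is a standard SCSI command. Its short-format defect list can be sketched in Python; this is a sketch of the SCSI-2 wire format, not of any OpenVMS driver code, and the function names are invented:

```python
import struct

REASSIGN_BLOCKS = 0x07  # SCSI-2 REASSIGN BLOCKS opcode


def reassign_cdb():
    """Build the 6-byte REASSIGN BLOCKS command descriptor block;
    the defect list itself travels in the data-out phase."""
    return bytes([REASSIGN_BLOCKS, 0, 0, 0, 0, 0])


def reassign_parameter_list(lbas):
    """Build the data-out defect list in the short (4-byte LBA)
    format: a 4-byte header whose last two bytes hold the list
    length in bytes, followed by one big-endian 32-bit logical
    block address per failing block to be revectored."""
    body = b"".join(struct.pack(">I", lba) for lba in lbas)
    header = struct.pack(">HH", 0, len(body))  # 2 reserved bytes + length
    return header + body
```

The drive responds by mapping each listed LBA to a spare block and recording the old location in its grown defect list (GLIST).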
If a hard error is encountered on a SCSI disk read and the disk is a
member of a volume shadow set, then the data is read from another
shadow set member and returned to the user without error. A REASSIGN is
issued against the failing block, and the good data is written back to
the disk. This causes the known-good data to be written into a new
block, and the failing block is added to the GLIST.
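That read-repair sequence can be modeled in Python. This is a hypothetical model with invented names (`shadow_read_repair`, the `read`/`reassign`/`write` member methods), not the HBVS implementation:

```python
def shadow_read_repair(members, lbn):
    """Model of the shadow-set read-repair sequence: try each member
    in turn; on the first good read, repair every member that failed
    by reassigning the bad block (so the LBN maps to a spare) and
    writing the known-good data back, then return the data."""
    failed = []
    data = None
    for member in members:
        try:
            data = member.read(lbn)
            break
        except IOError:
            failed.append(member)
    if data is None:
        raise IOError("no shadow set member could supply the block")
    for member in failed:
        member.reassign(lbn)     # revector: LBN now maps to a spare block
        member.write(lbn, data)  # write good data into the replacement
    return data
```

The caller never sees the error: the repair happens as a side effect of the read.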
If a hard error is encountered on a SCSI disk read and the disk is not
a member of a volume shadow set, the data is lost, and a parity error
(SS$_PARITY) is returned. A REASSIGN cannot be issued against the bad
block, because there is no good data to write into the new block.
OpenVMS will set a forced-error flag in the file header, and an
associated error (SS$_FORCEDERROR) will be reported to accessors.
Upon the deletion of a file with a forced error, OpenVMS will start with
the last block of the file and work backwards until it finds the bad block.
VMS will attempt to issue a REASSIGN if the device is capable of it.
If the defect list is full or the device is incapable of REASSIGN
commands, then OpenVMS will add the bad block into BADBLK.SYS.
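The deletion-time handling described above can be sketched in Python. This is a hypothetical model of the logic, not the XQP implementation, and all names (`delete_scan`, the callables, the list standing in for BADBLK.SYS) are invented:

```python
def delete_scan(file_blocks, has_forced_error, try_reassign, badblk_sys):
    """Walk the file's blocks from the last toward the first; for
    each block carrying the forced-error flag, attempt a REASSIGN.
    If the device cannot reassign (or its defect list is full),
    record the block in BADBLK.SYS instead, so the bad block is
    retired and never reallocated to another file."""
    for lbn in reversed(file_blocks):
        if not has_forced_error(lbn):
            continue
        if not try_reassign(lbn):
            badblk_sys.append(lbn)
```

Either way, the flagged block is taken out of circulation before the file's blocks return to the free pool.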
If a file on a SCSI device is not processed using volume shadowing
or the XQP, the bad block cannot be corrected. The system pagefile
-- when operating on a volume that is not shadowed -- is an example
of a file access that does not use the XQP, and that thus cannot be
revectored. The ability of OpenVMS to revector bad blocks under
the pagefile or other non-XQP-accessed file requires the re-creation
of the file, while the ability to tolerate and to recover from such
disk errors without the re-creation of the file requires the use of
host-based volume shadowing or of controller-based RAID.
Related tools include the (Freeware) RZDISK and the ANALYZE/MEDIA tools.
RAID storage controllers, host-based volume shadowing, and similar
constructs are all intended to reduce the effects of hardware failures
on the host operating system, and particularly to return good data to
the host (and to revector) in the event of uncorrectable disk block
errors.
BADLOG.SYS is used as a repository of (potentially) failing disk blocks.
Known-bad blocks are stored in BADBLK.SYS.
You can request a scan of bad blocks (using BADBLOCK_SCAN) during file
deletion, by setting the FH2$V_BADBLOCK bit in the file header.
DSA disks provide a forced-error flag, SCSI devices do not -- this means
that reads can report the forced-error condition only on DSA disks.
The ANALYZE/DISK utility verifies the file structure. The other central
component involved in this area is the BACKUP utility, as this is one of
the core tools to recover from errors.
Topics specific to unintentional initialization or the overwriting of
disk and tape media include (1286) and (6990).
For errors resulting from file structure or directory structure
corruptions, please see topics such as (1213), (4088), (4571), (5071),
(5553), (5719), (6021), (6234).
If you want to overwrite or erase the data on the media for reasons of
security or other related reasons, related topics include (841), (3926),
(4286), (4598), and (7320). (Do note that ANALYZE/MEDIA (BAD) will save
and will write the addresses of bad blocks into a low-level on-disk
structure known as the Detected Bad Block File (DBBF), which will mean
that there will potentially be one or more blocks on a disk that will
not contain the erasure pattern even after a full BAD pass. This is
documented in the ANALYZE/MEDIA (BAD) manual.)
Answer written or last revised on 25-OCT-2004
Pure Personal Opinion | HoffmanLabs LLC