Discussion:
Reclaim disk space for a 3PAR Storage Box
Bruno Seghers
2021-08-27 12:50:14 UTC
Dear all,
I would like to describe an issue to you in the hope that you can help me solve it in the best way.

We have OpenVMS 8.4 clusters connected to a full SSD 3PAR storage box via a FC SAN.

We have thin provisioning activated on the 3PAR, meaning that we "overbook" the disk sizes. We can't have all the presented disks 100% full, because we don't have that much physical disk space available.

The 3PAR treats the free space on each disk as a common pool used to absorb growth on all the disks. (Don't ask me for more detail; I'm the OpenVMS system manager, not the storage manager.)

On OpenVMS, a simple DELETE doesn't tell the 3PAR that the deleted area is available to return to the common pool. The 3PAR still considers that location used.

With DELETE/ERASE, “the storage location is overwritten with a system specified pattern so that the data no longer exists.”

So I had to implement something that performs an erase operation.

I thought about putting SET VOLUME/ERASE_ON_DELETE in place, but I'm afraid that severe performance degradation will occur, especially on heavily used disks (ones creating and deleting a lot of files).

So I chose DELETE/ERASE instead.

Twice a week, I create 10 MB dummy files until the disk is 95% full. Then I initiate a DELETE/ERASE of those files. The deletes are done one by one so I can verify that something else isn't busy filling the disk; if it is, I do a simple DELETE of the rest of the files, which is faster.

I don't create one big file, because the DELETE/ERASE could take a long time and the space isn't freed back until the operation completes. That could block a space increase requested by the application.
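
For reference, stripped to its essentials the job looks roughly like this (a simplified sketch: DKA100:, the [RECLAIM] directory, the FILL_nnnn.TMP names and the 10 MB template file are all placeholders, and the real procedure has more error handling):

$! Twice-weekly reclaim job - simplified sketch, all names are placeholders.
$ DEVICE = "DKA100:"
$ LOW_THRESHOLD = 200000     ! free blocks below which the application needs the space back
$ COUNT = 0
$!
$! Phase 1: create 10 MB dummy files until the volume is 95% full.
$FILL:
$ PCT_FREE = F$GETDVI(DEVICE,"FREEBLOCKS") / (F$GETDVI(DEVICE,"MAXBLOCK") / 100)
$ IF PCT_FREE .LE. 5 THEN GOTO ERASE
$ COUNT = COUNT + 1
$ FNAME = "FILL_" + F$FAO("!4ZL",COUNT) + ".TMP"
$ COPY TENMB_TEMPLATE.TMP 'DEVICE'[RECLAIM]'FNAME'     ! pre-built 10 MB template file
$ GOTO FILL
$!
$! Phase 2: DELETE/ERASE the dummies one by one; if the application starts
$! needing the space again, fall back to a plain DELETE of the rest.
$ERASE:
$ FILE = F$SEARCH("''DEVICE'[RECLAIM]FILL_*.TMP")
$ IF FILE .EQS. "" THEN GOTO DONE
$ IF F$GETDVI(DEVICE,"FREEBLOCKS") .LT. LOW_THRESHOLD THEN GOTO FAST
$ DELETE/ERASE 'FILE'
$ GOTO ERASE
$FAST:
$ DELETE 'DEVICE'[RECLAIM]FILL_*.TMP;*     ! plain DELETE is much faster
$DONE:
$ EXIT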

My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.

Do you see another way of working?

I checked the defragmenter: there is a /CONSOLIDATE_FREESPACE option but no "/ERASE_FREESPACE" option.

I was hoping to find something in OpenVMS that could act in the background, all the time, without impacting performance.

Thanks for your help.

Seghers Bruno
Michael Moroney
2021-08-27 20:24:07 UTC
Post by Bruno Seghers
Dear all,
I would like to describe an issue to you in the hope that you can help me solve it in the best way.
We have OpenVMS 8.4 clusters connected to a full SSD 3PAR storage box via a FC SAN.
HPE 8.4 or a VSI version?
Post by Bruno Seghers
We have thin provisioning activated on the 3PAR, meaning that we "overbook" the disk sizes. We can't have all the presented disks 100% full, because we don't have that much physical disk space available.
VMS does not know about thin provisioning, and while it will happily
grab space from the storage controller, it has no way to tell it to give
it back. VMS doesn't use the "trim" function needed.

This is item #1,724,764,331 of about 5,382,118,992 things that need to
be added to VMS but which VSI simply doesn't have the resources to do.

Expect to get some sort of error from the storage controller when it
tries to tell VMS 100% of the existing space has been used, even though
VMS may be trying to create a file on a disk with "plenty" of free
space. This could get ugly.

Maybe, just maybe, a storage controller may consider a huge chunk of
zeroed disk space as having been released. But I doubt it.
Volker Halle
2021-08-28 11:38:20 UTC
Bruno,

there is a presentation from Bootcamp 2014 about 'Thin Provisioning and OpenVMS' by Keith Parris:

https://www.sciinc.com/remotedba/techinfo/tech_presentations/Boot%20Camp%202014/Bootcamp_2014_Thin%20Provisioning%20and%20OpenVMS.pdf

Volker.
Volker Halle
2021-08-30 11:59:33 UTC
Post by Bruno Seghers
My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.
Bruno,

on those big disks, is DELETE/ERASE the problem, or the fact that you have to delete 'thousands and thousands' of (small) files in ONE directory? How big is the .DIR file after you've created those temporary 10 MB files? Would it help to DELETE/ERASE those files in reverse order, i.e. highest version number first? This would reduce the shuffling of entries in the directory.

If this is the problem, SET FILE/ERASE *.* followed by a DFU DELETE/DIRECTORY may improve the situation.
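
For illustration, the two ideas might look something like this in DCL (device, directory and file names are placeholders; the DFU command syntax here is from memory, so check DFU HELP before relying on it):

$! Reverse-order variant: if the fill files were created as FILL_0001.TMP,
$! FILL_0002.TMP, ..., delete them highest name first, so entries come off
$! the end of the directory file instead of shuffling everything downwards.
$ COUNT = 5000                  ! however many fill files were created
$REVLOOP:
$ IF COUNT .LT. 1 THEN GOTO NEXT
$ FNAME = "DKA100:[RECLAIM]FILL_" + F$FAO("!4ZL",COUNT) + ".TMP;*"
$ IF F$SEARCH(FNAME) .NES. "" THEN DELETE/ERASE 'FNAME'
$ COUNT = COUNT - 1
$ GOTO REVLOOP
$NEXT:
$!
$! SET FILE/ERASE + DFU variant: mark the files so any deletion erases them,
$! then let DFU remove the whole directory in one pass (assumes a DFU
$! foreign-command symbol has been defined).
$ SET FILE/ERASE DKA100:[RECLAIM]*.*;*
$ DFU DELETE/DIRECTORY DKA100:[000000]RECLAIM.DIR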

Volker.
Simon Clubley
2021-08-30 12:19:51 UTC
Post by Volker Halle
Post by Bruno Seghers
My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.
Bruno,
on those big disks, is DELETE/ERASE the problem, or the fact that you have to delete 'thousands and thousands' of (small) files in ONE directory? How big is the .DIR file after you've created those temporary 10 MB files? Would it help to DELETE/ERASE those files in reverse order, i.e. highest version number first? This would reduce the shuffling of entries in the directory.
If he has a range of filenames, version numbers will not be the only issue
here; the ordering of the filenames in the directory file also matters.

He needs to delete files in reverse name order _and_, for a given filename,
delete its versions in reverse version-number order, as you say above.

Hard to believe that this is still a problem on VMS in 2021.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
abrsvc
2021-08-30 12:37:21 UTC
Post by Simon Clubley
Post by Volker Halle
Post by Bruno Seghers
My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.
Bruno,
on those big disks, is DELETE/ERASE the problem, or the fact that you have to delete 'thousands and thousands' of (small) files in ONE directory? How big is the .DIR file after you've created those temporary 10 MB files? Would it help to DELETE/ERASE those files in reverse order, i.e. highest version number first? This would reduce the shuffling of entries in the directory.
If he has a range of filenames, version numbers will not be the only issue
here, but also the ordering of the filenames in the directory file.
He needs to delete files in reverse name order _and_ for a given filename
needs to delete those files in reverse version number order as you say above.
Hard to believe that this is still a problem on VMS in 2021.
Simon.
--
Walking destinations on a map are further away than they appear.
I would not consider it a "problem", just the way that directory files work. If proper directory maintenance is performed on a regular basis, this is not a problem; only mass deletions show the "problem". Knowing how things work often suggests more efficient ways to perform operations. Here, for mass deletions, reverse order is more efficient. Is "normal order" a problem? No, but it will take a while.

Think of this as similar to referencing array elements in column order vs. row order. Depending upon how arrays are stored, you either make efficient use of sequential memory references or you don't. Neither is a "problem"; one way is just more efficient.

Dan
David Jones
2021-08-30 13:58:28 UTC
Post by Simon Clubley
Post by Volker Halle
Post by Bruno Seghers
My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.
Bruno,
on those big disks, is DELETE/ERASE the problem, or the fact that you have to delete 'thousands and thousands' of (small) files in ONE directory? How big is the .DIR file after you've created those temporary 10 MB files? Would it help to DELETE/ERASE those files in reverse order, i.e. highest version number first? This would reduce the shuffling of entries in the directory.
If he has a range of filenames, version numbers will not be the only issue
here, but also the ordering of the filenames in the directory file.
He needs to delete files in reverse name order _and_ for a given filename
needs to delete those files in reverse version number order as you say above.
A directory entry has a filename and a list of version+file-id values with the highest version first (8 bytes per file version).
Unless you have enough versions of a file to force multiple directory records for it (directory records can't span blocks),
the order they are deleted in doesn't make much difference. If you do have multiple records, you want to delete them
in ascending version order (which is the opposite of how they are returned by SYS$SEARCH()).
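
If you want to see that layout for yourself, dumping a block of the directory file shows the records directly (a quick sketch; the directory spec is a placeholder):

$! Each record is a filename followed by (version, file-id) pairs, 8 bytes per
$! version, and records never cross a block boundary.
$ DUMP/BLOCKS=(START:1,COUNT:1) DKA100:[000000]RECLAIM.DIR
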
Post by Simon Clubley
Hard to believe that this is still a problem on VMS in 2021.
It can still be a problem on Unix, depending upon the file system you are using.
Chris Townley
2021-08-30 14:45:18 UTC
Post by David Jones
It can still be a problem on Unix, depending upon the file system you are using.
I would agree - I had a log file directory grow massively on RHEL (ext4,
I think) a few years ago; it took a full day to purge.
--
Chris
Stephen Hoffman
2021-08-30 15:04:33 UTC
Post by Simon Clubley
Hard to believe that this is still a problem on VMS in 2021.
That would be a system that hasn't seen appreciable file system updates
or a file system replacement in a ~quarter-century, and one that
remains limited to 2 TiB volumes?

A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.

Without TRIM / UNMAP command support, highwater marking and volume
erase-on-delete are the usual path for thin-provisioned volumes, and
also for SSD volumes.
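
On an already-mounted volume both attributes can be turned on together; a minimal sketch, with DKA100: as a placeholder device:

$! Highwater marking can also be set at INITIALIZE time with /HIGHWATER,
$! and individual files can be marked with SET FILE/ERASE.
$ SET VOLUME/ERASE_ON_DELETE/HIGHWATER_MARKING DKA100: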

Otherwise, your SSD performance is entirely dependent on how much
over-provisioning your SSDs might have and how fast the SSDs can erase,
when your app gets write-active.

These settings do increase the app and system I/O load, but the
controllers can retire the commands asynchronously, without having to
wait for the storage writes to complete.

VSI might want to post some documentation or guidance here, not just
for thin-provisioned and virtualized storage, but also for anybody
presently using SSD on OpenVMS. Similar issues lurk...
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2021-08-30 17:44:59 UTC
Post by Stephen Hoffman
Post by Simon Clubley
Hard to believe that this is still a problem on VMS in 2021.
That would be a system that hasn't seen appreciable file system updates
or a file system replacement in a ~quarter-century, and one that
remains limited to 2 TiB volumes?
Yes, that's the one.

And nothing looks like it is going to change soon given that the
two candidate replacement projects for it appear to have both been
cancelled.
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.

Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.

I have never lost a HDD due to a firmware bug.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Scott Dorsey
2021-08-30 20:59:12 UTC
Post by Simon Clubley
I have never lost a HDD due to a firmware bug.
I've got one word for you. One word. Are you listening?

Pla... err... I mean... Micropolis.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Dave Froble
2021-08-31 02:03:42 UTC
Post by Simon Clubley
Post by Stephen Hoffman
Post by Simon Clubley
Hard to believe that this is still a problem on VMS in 2021.
That would be a system that hasn't seen appreciable file system updates
or a file system replacement in a ~quarter-century, and one that
remains limited to 2 TiB volumes?
Yes, that's the one.
And nothing looks like it is going to change soon given that the
two candidate replacement projects for it appear to have both been
cancelled.
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD? Ok, neither have I, but I have
seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.

Oh, and welcome to the new world of cheap and semi-reliable PC junk
hardware ....
Post by Simon Clubley
Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.
See above about cheap and semi-reliable PC junk hardware ....
Post by Simon Clubley
I have never lost a HDD due to a firmware bug.
You must be talking about older, not-so-cheap, and usually fully debugged
(well, sorta) disk drives ....

SSDs are all that I'll acquire now, but I still run the disk drives that I
have. Haven't lost an SSD, yet ... Don't jinx me ...
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Volker Halle
2021-08-31 16:13:58 UTC
Bruno,

according to the VSI forum, HPE OpenVMS I64 V8.4 has been enhanced to support 'Reclaim for thin provisioned units in OpenVMS' with the VMS84I_SYS-V0700 patch kit.

https://support.hpe.com/hpesc/public/docDisplay?docId=pdb_na-VMS84I_SYS_V0700

There seems to be no such support in any VSI OpenVMS I64 release.

Volker.
Robert A. Brooks
2021-08-31 16:22:41 UTC
Post by Volker Halle
Bruno,
according to the VSI forum, HPE OpenVMS I64 V8.4 has been enhanced to support 'Reclaim for thin provisioned units in OpenVMS' with the VMS84I_SYS-V0700 patch kit.
https://support.hpe.com/hpesc/public/docDisplay?docId=pdb_na-VMS84I_SYS_V0700
There seems to be no such support in any VSI OpenVMS I64 release.
We looked at the implementation, didn't like it, and did not accept those changes.

HP did the work at the behest of a customer, who was so unimpressed with the performance
that they ended up not using the feature.
--
-- Rob
Arne Vajhøj
2021-08-31 17:42:03 UTC
Post by Simon Clubley
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD?  Ok, neither have I, but I have
seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.
Oh, and welcome to the new world of cheap and semi-reliable PC junk
hardware ....
Post by Simon Clubley
Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.
See above about cheap and semi-reliable PC junk hardware ....
There is usually some correlation between price and quality.

I have no doubt that:

MTBF(enterprise,2021) > MTBF(consumer,2021)
MTBF(enterprise,1986) > MTBF(consumer,1986)

but at least for disk then:

MTBF(consumer,2021) > MTBF(enterprise,1986)

Modern disks are actually pretty reliable.

VAX disks were not always reliable. I remember a 8650
with RA81's and RA82's - DEC field service replaced disks
several times per year (especially the RA81's).

Arne
Bill Gunshannon
2021-08-31 22:24:24 UTC
Post by Arne Vajhøj
Post by Simon Clubley
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD?  Ok, neither have I, but I
have seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.
Oh, and welcome to the new world of cheap and semi-reliable PC junk
hardware ....
Post by Simon Clubley
Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.
See above about cheap and semi-reliable PC junk hardware ....
There is usually some correlation between price and quality.
MTBF(enterprise,2021) > MTBF(consumer,2021)
MTBF(enterprise,1986) > MTBF(consumer,1986)
MTBF(consumer,2021) > MTBF(enterprise,1986)
Modern disks are actually pretty reliable.
VAX disks were not always reliable. I remember a 8650
with RA81's and RA82's - DEC field service replaced disks
several times per year (especially the RA81's).
Interesting. I had about two dozen RA80's and 81's that were
old and used when I got them. Ran them for years on VAX and PDP-11's.
Never had a failure and passed them all on when I had to get rid of
those systems. Would not be surprised to find they are still running.

bill
abrsvc
2021-08-31 22:46:39 UTC
Post by Arne Vajhøj
Post by Simon Clubley
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD? Ok, neither have I, but I
have seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.
Oh, and welcome to the new world of cheap and semi-reliable PC junk
hardware ....
Post by Simon Clubley
Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.
See above about cheap and semi-reliable PC junk hardware ....
There is usually some correlation between price and quality.
MTBF(enterprise,2021) > MTBF(consumer,2021)
MTBF(enterprise,1986) > MTBF(consumer,1986)
MTBF(consumer,2021) > MTBF(enterprise,1986)
Modern disks are actually pretty reliable.
VAX disks were not always reliable. I remember a 8650
with RA81's and RA82's - DEC field service replaced disks
several times per year (especially the RA81's).
Interesting. I had about two dozen RA80's and 81's that were
old and used when I got them. Ran them for years on VAX and PDP-11's.
Never had a failure and passed them all on when I had to get rid of
those systems. Would not be surprised to find they are still running.
bill
IIRC, there was one series that had issues with the seal, letting in contaminants that would trash the drives, and another series that had issues with the heads. Once those issues were cleared up, the drives were solid. I too had many that were heavily utilized on test systems, with no failures.

Dan
Chris Scheers
2021-09-01 18:35:22 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Post by Dave Froble
Post by Simon Clubley
Post by Stephen Hoffman
A whole lot has changed with storage and storage I/O since HDDs reigned
supreme. 12 TB drives, and 4 TB SSDs are now commonplace, among other
differences.
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD? Ok, neither have I, but I
have seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.
Oh, and welcome to the new world of cheap and semi-reliable PC junk
hardware ....
Post by Simon Clubley
Hardware RAID or software volume shadowing would normally take care
of that but there have been enough cases in recent years where a
complete set of SSD devices all fail at the same time due to firmware
bugs to make me nervous about using them.
See above about cheap and semi-reliable PC junk hardware ....
There is usually some correlation between price and quality.
MTBF(enterprise,2021) > MTBF(consumer,2021)
MTBF(enterprise,1986) > MTBF(consumer,1986)
MTBF(consumer,2021) > MTBF(enterprise,1986)
Modern disks are actually pretty reliable.
VAX disks were not always reliable. I remember a 8650
with RA81's and RA82's - DEC field service replaced disks
several times per year (especially the RA81's).
Interesting. I had about two dozen RA80's and 81's that were
old and used when I got them. Ran them for years on VAX and PDP-11's.
Never had a failure and passed them all on when I had to get rid of
those systems. Would not be surprised to find they are still running.
The early RA81s had a problem where the heads would "unglue" from the
actuator and totally trash the disk.

DEC actively hunted out and replaced those original drives. The
replacement RA81s were good and ran forever.

I never really had a problem with RA80s or RA82s.
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
Simon Clubley
2021-08-31 19:19:48 UTC
Post by Dave Froble
Post by Simon Clubley
SSDs make me nervous because when they fail, they tend to fail hard
and sudden without any warning.
Never seen a total head crash on a HDD? Ok, neither have I, but I have
seen HDDs fail, and once they start, you cannot count on getting
accurate data off them.
Yes, I have actually (back as a student). Saw what happens when a
head decides to make contact with a platter on a removable disk pack. :-)

In modern times, yes I have had multiple HDDs fail but the usual levels
of redundancy have saved me so far. You cannot count on that with SSDs
however. Here's a couple of examples from your favourite vendor:

https://www.theregister.com/2019/11/25/hpe_ssd_32768/
https://www.theregister.com/2020/03/25/hpe_ssd_death_fix/

Imagine if your VMS cluster had those (and only those) installed as
part of a HBVS setup and then you walked in the next day and found that
every single drive, including all cluster-wide redundant backups, had all
failed at pretty much the same time and that you couldn't recover the data.

Like I said, SSDs make me nervous.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Hein RMS van den Heuvel
2021-08-30 20:04:39 UTC
Post by Bruno Seghers
Dear all,
We have OpenVMS 8.4 clusters connected to a full SSD 3PAR storage box via a FC SAN.
We have thin provisioning activated on the 3PAR meaning that we make some “overbooking” with the disk size.
FYI - this topic was cross-posted to the VMSSoftware forums:

https://forum.vmssoftware.com/viewtopic.php?f=31&t=214

Hein
Ian Miller
2021-09-06 10:34:09 UTC
Post by Bruno Seghers
Dear all,
I would like to describe an issue to you in the hope that you can help me solve it in the best way.
We have OpenVMS 8.4 clusters connected to a full SSD 3PAR storage box via a FC SAN.
We have thin provisioning activated on the 3PAR, meaning that we "overbook" the disk sizes. We can't have all the presented disks 100% full, because we don't have that much physical disk space available.
The 3PAR treats the free space on each disk as a common pool used to absorb growth on all the disks. (Don't ask me for more detail; I'm the OpenVMS system manager, not the storage manager.)
On OpenVMS, a simple DELETE doesn't tell the 3PAR that the deleted area is available to return to the common pool. The 3PAR still considers that location used.
With DELETE/ERASE, "the storage location is overwritten with a system specified pattern so that the data no longer exists."
So I had to implement something that performs an erase operation.
I thought about putting SET VOLUME/ERASE_ON_DELETE in place, but I'm afraid that severe performance degradation will occur, especially on heavily used disks (ones creating and deleting a lot of files).
So I chose DELETE/ERASE instead.
Twice a week, I create 10 MB dummy files until the disk is 95% full. Then I initiate a DELETE/ERASE of those files. The deletes are done one by one so I can verify that something else isn't busy filling the disk; if it is, I do a simple DELETE of the rest of the files, which is faster.
I don't create one big file, because the DELETE/ERASE could take a long time and the space isn't freed back until the operation completes. That could block a space increase requested by the application.
My problem is that for big disks (800 GB) with little used space, we create thousands and thousands of files, and the DELETE/ERASE takes days.
Do you see another way of working?
I checked the defragmenter: there is a /CONSOLIDATE_FREESPACE option but no "/ERASE_FREESPACE" option.
I was hoping to find something in OpenVMS that could act in the background, all the time, without impacting performance.
Thanks for your help.
Seghers Bruno
In the VMS84I_SYS-V0700 and VMSA_SYS-V0700 ECO kits, new features have been added to allow HPE OpenVMS to tell storage controllers to deallocate space on thin-provisioned volumes; OpenVMS can now use the SCSI UNMAP function for this purpose.
Simon Clubley
2021-09-06 12:08:10 UTC
Post by Ian Miller
In the VMS84I_SYS-V0700 and VMSA_SYS-V0700 ECO kits, new features have been added to allow HPE OpenVMS to tell storage controllers to deallocate space on thin-provisioned volumes; OpenVMS can now use the SCSI UNMAP function for this purpose.
Are those the same features that Rob says were so slow that the original
customer who commissioned them ended up not using them?

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Robert A. Brooks
2021-09-06 14:13:36 UTC
Post by Simon Clubley
Post by Ian Miller
In the VMS84I_SYS-V0700 and VMSA_SYS-V0700 ECO kits, new features have been added to allow HPE OpenVMS to tell storage controllers to deallocate space on thin-provisioned volumes; OpenVMS can now use the SCSI UNMAP function for this purpose.
Are those the same features that Rob says were so slow that the original
customer who commissioned them ended up not using them?
Yes.
--
-- Rob