Discussion:
FTP FYI
Jeffrey H. Coffield
2020-11-21 20:24:00 UTC
Permalink
I was moving savesets from one OpenVMS system to another one during a
data center migration using (not my choice) an Amazon ftp service and
found that on occasion a saveset file would not restore on the new
system. I got this error:

%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate
minimum buffer count

Googling the error provided no help. After digging into the problem I
discovered the save sets in question had the correct file size but
contained nothing but nulls. Re-pulling the files from the ftp server
corrected the problem.

It was not a case of pulling the files too soon, as the restore command
was started only after the entire file had been pushed.
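In case it helps anyone else: a quick way to spot one of these bad
copies (file name illustrative) is to dump the first block. A valid
save set begins with a readable summary record, while the corrupt
copies show nothing but zeros:

$ DUMP/BLOCKS=(START:1,END:1) SAVESET.BCK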
geze...@rlgsc.com
2020-11-22 14:08:29 UTC
Permalink
Post by Jeffrey H. Coffield
I was moving savesets from one OpenVMS system to another one during a
data center migration using (not my choice) an Amazon ftp service and
found that on occasion a saveset file would not restore on the new
%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate
minimum buffer count
Googling the error provided no help. After digging into the problem I
discovered the save sets in question had the correct file size but
contained nothing but nulls. Re-pulling the files from the ftp server
corrected the problem.
It was not a case of pulling the files too soon, as the restore command
was started only after the entire file had been pushed.
Jeffrey,

Nasty.

After some problems with binary/text ftp a long time ago, I got into the habit of ZIP'ing my BACKUP Save Sets before using FTP to transfer them.

Following the FTP transfer, use UNZIP's Test function to confirm the integrity of the ZIP archive. The extra step ensures that no corruption has occurred en route.
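In outline (file names illustrative), assuming the Info-ZIP tools:

$ ZIP "-V" SAVESET.ZIP SAVESET.BCK   ! "-V" preserves the RMS attributes
$ ! ... transfer SAVESET.ZIP in binary mode ...
$ UNZIP -t SAVESET.ZIP               ! test on the receiving system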

- Bob Gezelter, http://www.rlgsc.com
Tom Wade
2020-11-23 16:43:40 UTC
Permalink
Post by Jeffrey H. Coffield
I was moving savesets from one OpenVMS system to another one during a
data center migration using (not my choice) an Amazon ftp service and
found that on occasion a saveset file would not restore on the new
%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate
minimum buffer count
Googling the error provided no help. After digging into the problem I
discovered the save sets in question had the correct file size but
contained nothing but nulls. Re-pulling the files from the ftp server
corrected the problem.
It was not a case of pulling the files too soon, as the restore command
was started only after the entire file had been pushed.
I have seen this before. The cause (in my case) was that the saveset
files were marked SET FILE/NOBACKUP. When the disk they were on was
image copied to another machine, the saveset files within the directory
were all null blocks. The weird error message is what Backup says when
asked to restore from a saveset with the correct file attributes but
containing only nulls.
AMOK-# backup wd$:[twade...]*.*;0 temp.bck/save/nocrc/group=0
AMOK-# set file temp.bck/nobackup
AMOK-# backup temp.bck $13$dke0:[temp]temp.bck
%BACKUP-I-NOBACKUP, US$:[TWADE]TEMP.BCK;1 data not copied, file marked NOBACKUP
AMOK-# backup/list $13$dke0:[temp]temp.bck/save
Listing of save set(s)
%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate minimum buffer
count
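The flag is easy to check for and to clear (file name from the example
above; DIRECTORY/FULL should list the no-backup setting among the file
attributes):

$ DIRECTORY/FULL TEMP.BCK
$ SET FILE/BACKUP TEMP.BCK   ! clear NOBACKUP so the data is copied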
Tom Wade
tom dot wade at tomwade dot eu
Stephen Hoffman
2020-11-23 18:35:38 UTC
Permalink
Post by Jeffrey H. Coffield
I was moving savesets from one OpenVMS system to another one during a
data center migration using (not my choice) an Amazon ftp service and
found that on occasion a saveset file would not restore on the new
%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate
minimum buffer count
Googling the error provided no help. After digging into the problem I
discovered the save sets in question had the correct file size but
contained nothing but nulls. Re-pulling the files from the ftp server
corrected the problem.
It was not a case of pulling the files too soon, as the restore command
was started only after the entire file had been pushed.
Fodder for OpenVMS enhancements from this thread...

BACKUP should restore a /NOBACKUP file with some generic "There's
nothing here because..." text embedded in the middle of the first
sector. Though there's probably some app somewhere that's somehow
dependent on finding a null file, per Hyrum's Law.

BACKUP saveset input handling already has issues including around RMS
metadata, and this buffer-count error case seems a different variation
on that same mess. Squabbling over stuffed-up RMS metadata is
unhelpfully recalcitrant at best. Issue a diagnostic at most, and keep
going. Or here, badly detecting a hosed saveset header.

However... Updates to BACKUP seem nonsensical longer term, as the
current app design is close enough to its performance limits to not
matter. What might replace BACKUP and when, we shall eventually
learn... And whether the hypothetical new BACKUP replacement can read
BACKUP savesets for the purposes of restoration, or whether the
existing BACKUP tools and BACKUP API and the BACKUP-related issues and
limits linger on for the foreseeable future.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2020-11-23 21:00:24 UTC
Permalink
Post by Stephen Hoffman
However... Updates to BACKUP seem nonsensical longer term, as the
current app design is close enough to its performance limits to not
matter.
Is there any reason not to use ZIP? It not only makes an archive: it
is much faster at extracting individual files, it also does
compression, and it is very portable should one need to transfer files
between different operating systems. For a while now it has been able
to handle large files and/or large archives.

For really large savesets, is there something BACKUP can do but ZIP
can't? Is BACKUP faster than ZIP (assuming no compression)?
Steven Schweda
2020-11-23 21:25:41 UTC
Permalink
Post by Phillip Helbig (undress to reply)
For really large savesets, is there something BACKUP can do but ZIP
can't? [...]
Not specific to large files, but Info-ZIP programs generally don't
handle an alias or hard link. On VMS or elsewhere, I believe.
David Jones
2020-11-23 22:57:36 UTC
Permalink
Post by Steven Schweda
Post by Phillip Helbig (undress to reply)
For really large savesets, is there something BACKUP can do but ZIP
can't? [...]
Not specific to large files, but Info-ZIP programs generally don't
handle an alias or hard link. On VMS or elsewhere, I believe.
In the process of figuring out how to pull data out of an Excel .xlsx file, I studied the
zip format a lot (MS Office files are a directory tree of XML modules saved in
zip format to save space). I don't think there is anything stopping multiple
central directory entries from pointing to the same file data. If the file is created
with OS extensions (e.g. -V to save VMS attributes), link information will be saved.
I recently modified my web-based zip file browser to recognize the symbolic
link organization and follow the link when extracting the data.

The format started in the MS-DOS days and is still most comfortable in that
environment. The Unix world still seems to favor a gzipped tar over zip.
One quirk is that the binary timestamp format has only a 2-second resolution
(another case where you can use extension data to get the original timestamp).

Zip format supports a large number of compression methods, but I rarely see
anything but the default 'deflate' compression method.
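For what it's worth, unzip's verbose listing shows the method used for
each entry (archive name illustrative); the Method column typically
reads Defl:N or Stored:

$ UNZIP -v ARCHIVE.ZIP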
Steven Schweda
2020-11-23 23:28:38 UTC
Permalink
Post by David Jones
[...] I don't think there is anything stopping multiple
central directory entries from pointing to the same file data. [...]
There wasn't until fairly recently, but it's a feature with
increasing interest/popularity:

https://en.wikipedia.org/wiki/Zip_bomb

Exactly how it ought to be done is not obvious to me. Certainly,
processing the compressed data more than once is not the way. I haven't
looked lately at the APPNOTE to see if there's already a suitable spec.
Post by David Jones
Zip format supports a large number of compression methods, but I
rarely see anything but the default 'deflate' compression method.
I believe that bzip2 is currently supported as a build-time option.
A couple others have gotten some work, but they've been stuck in the
pipeline for a while. The usual chicken-egg problem prevails.
Stephen Hoffman
2020-11-23 21:44:28 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
However... Updates to BACKUP seem nonsensical longer term, as the
current app design is close enough to its performance limits to not
matter.
Is there any reason not to use ZIP?
Other than that it doesn't address the performance issues, and lacks
support for aliases as are routinely used on OpenVMS system disks, no.

To use zip here, zip would need updates including a call to the
SETBOOTSHR API, the addition of support for aliases, and likely a few
other details.

And when last I checked, there was no way within the zip archive layout
to have multiple file entries pointing to one hunk of file data, which
means zip would capture a lot of extra data on a system disk. BACKUP
has alias support.
Post by Phillip Helbig (undress to reply)
It not only makes an archive, but it is much faster to extract
individual files from it, it also does compression, and is very
portable should one need to transfer files between different operating
systems. For a while now it can handle large files and/or large
archives.
I prefer zip to BACKUP for most uses, too.

I'd tried getting zip and unzip into the distro. Maybe VSI succeeds.

BACKUP offers compression, and also offers robust encryption. zip
provides compression by default, though lacks robust encryption support.

It wouldn't surprise me to learn that both zip and BACKUP use older and
less efficient compression, but I've not verified that.

nb: always remember to compress before you encrypt, as there's no
point in even trying to compress encrypted data.
Post by Phillip Helbig (undress to reply)
For really large savesets, is there something BACKUP can do but ZIP
can't? Is BACKUP faster than ZIP (assuming no compression)?
BACKUP and zip share the underlying I/O issues here. You can't get
faster than the underlying storage speed with the whole-device strategy
used by BACKUP, which here mirrors that used by zip. Incremental
backups, such as BACKUP can do, aren't a particularly good
alternative to file system change notifications, either.
--
Pure Personal Opinion | HoffmanLabs LLC
Jan-Erik Söderholm
2020-11-23 22:06:49 UTC
Permalink
Post by Phillip Helbig (undress to reply)
...is there something BACKUP can do but ZIP can't?
BACKUP /IMAGE?
Arne Vajhøj
2020-11-23 23:59:54 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
However... Updates to BACKUP seem nonsensical longer term, as the
current app design is close enough to its performance limits to not
matter.
Is there any reason not to use ZIP? It not only makes an archive, but
it is much faster to extract individual files from it, it also does
compression, and is very portable should one need to transfer files
between different operating systems. For a while now it can handle
large files and/or large archives.
For really large savesets, is there something BACKUP can do but ZIP
can't? Is BACKUP faster than ZIP (assuming no compression)?
I would use ZIP for transferring some files and BACKUP for
doing backups.

:-)

I expect an image backup to be perfect for backup and restore.

ZIP compresses and can be unpacked on non-VMS systems, which is pretty
convenient. And it works perfectly well for transferring a directory
tree and the like. But I fear that there may be corner cases
where attempting to use ZIP and UNZIP as backup and restore
will not produce the correct result. It was never
intended to be used for that purpose.

Arne
Simon Clubley
2020-11-24 13:27:38 UTC
Permalink
Post by Phillip Helbig (undress to reply)
For really large savesets, is there something BACKUP can do but ZIP
can't?
Redundancy groups.

/VERIFY against the source data. ("unzip -t" only tests the archive itself).
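A sketch (device and directory names illustrative); /VERIFY rereads
the source files after the save set is written and compares them
against it:

$ BACKUP/VERIFY/LOG DKA100:[DATA...]*.*;* SAVESET.BCK/SAVE_SET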

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Chris
2020-11-24 15:07:44 UTC
Permalink
Post by Jeffrey H. Coffield
I was moving savesets from one OpenVMS system to another one during a
data center migration using (not my choice) an Amazon ftp service and
found that on occasion a saveset file would not restore on the new
%BACKUP-F-INSBUFSPACE, insufficient virtual memory to accommodate
minimum buffer count
Googling the error provided no help. After digging into the problem I
discovered the save sets in question had the correct file size but
contained nothing but nulls. Re-pulling the files from the ftp server
corrected the problem.
It was not a case of pulling the files too soon, as the restore command
was started only after the entire file had been pushed.
Don't know all the specifics here, but plain old ftp needs to be in
binary mode for anything other than text data. Also, sftp tends to
be more robust than ftp if you have it on the system...
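For example (host name illustrative; the exact subcommand varies by
client, some spell it SET TYPE IMAGE):

$ FTP remote.example.com
FTP> binary
FTP> put SAVESET.BCK
FTP> quit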

Chris
geze...@rlgsc.com
2020-11-24 15:58:51 UTC
Permalink
All,

I previously commented in this thread that I make it a practice to ZIP BACKUP Save Sets before transferring them over networks using FTP. This does reduce the size of the file and makes for smaller transfers, but the genesis of my habit was error control.

Today, when most connections have extraordinarily low error rates, the concern often seems quaint, until it is not.

I first discovered this danger when working with the VAX-11/780 connected to a pair of PDP-11/34 systems using DMC-11 1 Mbit/s triaxial coax links. With the DMC-11 adapters supporting the DDCMP protocol, including CRC-16 on all messages, one would reasonably have thought that transmission errors were not a problem. That was incorrect. A bug in the DMC-11 would, on occasion, miss bytes when moving data between main memory and the device.

This could show up in many ways, including encountering the "Recovery Groups" message from BACKUP. BACKUP was the least dangerous case; not all files are processed by programs that check for file integrity.

More generally, there are many ways for corruption to occur on a network transfer. The guarantees provided by the TCP/IP stack (specifically TCP and FTP) are far weaker than those provided by DDCMP/Ethernet/DAP. ZIPing a file provides an easy way to verify that a file is intact. As volumes have increased, and multiprocessors have become ubiquitous, I have transferred the drudgery of running ZIP to batch jobs.

For large-scale transfers, where I am concerned about OpenVMS-specific details (e.g. ACLs, protections, etc.), I use BACKUP to create a save set, then ZIP the save set (preserving the RMS attributes of the save set), then transfer the ZIP archive. On receipt, the ZIP archive is tested for integrity, then UNZIPed.
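As a sketch (names and queue illustrative), the sending side fits in a
small command procedure:

$ ! XFER_SEND.COM
$ BACKUP DKA100:[TREE...]*.*;* XFER.BCK/SAVE_SET
$ ZIP "-V" XFER.ZIP XFER.BCK   ! "-V" preserves the save set's RMS attributes
$ EXIT

submitted with, say, SUBMIT/QUEUE=SYS$BATCH XFER_SEND. On receipt,
UNZIP -t XFER.ZIP first, then UNZIP once the test passes.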

This choreography ensures data integrity. Discovering data corruption at a later point is much more difficult to remediate.

- Bob Gezelter, http://www.rlgsc.com
Stephen Hoffman
2020-11-24 23:46:53 UTC
Permalink
Post by ***@rlgsc.com
This choreography ensures data integrity. Discovering data corruption
at a later point is much more difficult to remediate.
I usually use a zip archive and a checksum check for that, but whatever
choreography works for you, have at.

OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of
V8.4.

Or use the checksum support in OpenSSL (openssl dgst -sha256 -binary
{zipfile}, etc).

...this having gotten burned enough times by BACKUP savesets embedded
within zip archives. Which are awkward to read and verify, when off of
OpenVMS...
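Concretely (file name illustrative; assumes an openssl foreign command
is defined from the installed OpenSSL kit):

$ openssl dgst -sha256 SAVESET.ZIP

Run the same command on the far end and compare the two hex digests;
any mismatch means re-transfer.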
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2020-11-25 03:37:52 UTC
Permalink
Post by Stephen Hoffman
Post by ***@rlgsc.com
This choreography ensures data integrity. Discovering data corruption
at a later point is much more difficult to remediate.
I usually use a zip archive and a checksum check for that, but whatever
choreography works for you, have at.
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Or use the checksum support in OpenSSL (openssl dgst -sha256 -binary
{zipfile}, etc).
Or if you have Perl installed:

$ shasum -a 512 -b {zipfile}

where the argument to -a can be 1, 224, 256, 384, 512, 512224, or 512256.
hb
2020-11-25 10:15:00 UTC
Permalink
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
Arne Vajhøj
2020-11-25 13:17:30 UTC
Permalink
Post by hb
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
I would assume he was hinting that SHA-1 is obsolete and
should be replaced by SHA-256 or SHA-512.

Arne
Simon Clubley
2020-11-25 13:21:14 UTC
Permalink
Post by hb
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
Not speaking for Stephen, but SHA-256 would be my absolute minimum and
SHA-512 would be strongly preferred for increased protection against
future attacks.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
John Dallman
1970-01-01 00:00:00 UTC
Permalink
... SHA-256 would be my absolute minimum and SHA-512 would be strongly
preferred for increased protection against future attacks.
Hear, hear. SHA-1 is now worthless against a deliberate attack, although
still fine against accidental corruption.

An OS claiming to be highly secure needs SHA-512; supporting the
relatively new SHA-3 would add credibility.

John
Dave Froble
2020-11-25 14:46:00 UTC
Permalink
Post by John Dallman
... SHA-256 would be my absolute minimum and SHA-512 would be strongly
preferred for increased protection against future attacks.
Hear, hear. SHA-1 is now worthless against a deliberate attack, although
still fine against accidental corruption.
An OS claiming to be highly secure needs SHA-512; supporting the
relatively new SHA-3 would add credibility.
John
Perhaps we should be a bit more focused on the issue?

From what I was reading, the issue was catching data corruptions, not
security. Isn't it sort of silly to introduce security into another
issue? A checksum either works, or it doesn't. If it works, doesn't
that solve the potential issue?

Or maybe I don't understand the issue ...
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2020-11-25 15:18:28 UTC
Permalink
Post by Dave Froble
Post by John Dallman
... SHA-256 would be my absolute minimum and SHA-512 would be strongly
preferred for increased protection against future attacks.
Hear, hear. SHA-1 is now worthless against a deliberate attack, although
still fine against accidental corruption.
An OS claiming to be highly secure needs SHA-512; supporting the
relatively new SHA-3 would add credibility.
Perhaps we should be a bit more focused on the issue?
From what I was reading, the issue was catching data corruptions, not
security.  Isn't it sort of silly to introduce security into another
issue?  A checksum either works, or it doesn't.  If it works, doesn't
that solve the potential issue?
Or maybe I don't understand the issue ...
You do.

For catching accidental data corruption SHA-1 is OK.

It is for security that it is bad.

But from a practical perspective, using the same
algorithm for both purposes makes sense.

Which I suspect is why Hoff used the wording he did.

Arne
Stephen Hoffman
2020-11-25 16:24:47 UTC
Permalink
Post by Dave Froble
Perhaps we should be a bit more focused on the issue?
From what I was reading, the issue was catching data corruptions, not
security. Isn't it sort of silly to introduce security into another
issue? A checksum either works, or it doesn't. If it works, doesn't
that solve the potential issue?
Or maybe I don't understand the issue ...
OpenVMS is "the most secure operating system on the planet" 🤣, which
means that vendor and third-party developers have thought about both
non-malicious corruptions and about actively-malicious corruptions,
right?

Same applies for the default choice for random-number generation: use a
cryptographically secure random number generator, absent very specific
reasons to use a lesser generator. Or a lesser message digest hash.

Or somewhat more succinctly, choose and use and offer and work toward
secure defaults, absent specific reasons not to.

We are all working toward actually living up to that "the most secure
operating system on the planet" claim, right?
--
Pure Personal Opinion | HoffmanLabs LLC
Dave Froble
2020-11-25 19:55:33 UTC
Permalink
Post by Stephen Hoffman
Post by Dave Froble
Perhaps we should be a bit more focused on the issue?
From what I was reading, the issue was catching data corruptions, not
security. Isn't it sort of silly to introduce security into another
issue? A checksum either works, or it doesn't. If it works, doesn't
that solve the potential issue?
Or maybe I don't understand the issue ...
OpenVMS is "the most secure operating system on the planet" 🤣, which
means that vendor and third-party developers have thought about both
non-malicious corruptions and about actively-malicious corruptions, right?
Same applies for the default choice for random-number generation: use a
cryptographically secure random number generator, absent very specific
reasons to use a lesser generator. Or a lesser message digest hash.
Or somewhat more succinctly, choose and use and offer and work toward
secure defaults, absent specific reasons not to.
We are all working toward actually living up to that "the most secure
operating system on the planet" claim, right?
Actually, no.

Why, because security is so much more than an OS, or any other single thing.

I find it irritating when security is not the topic, that some feel that
they have to introduce it into a topic, where it is not an issue.

If it ain't broke, don't fix it ...

YMMV
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Stephen Hoffman
2020-11-25 20:35:07 UTC
Permalink
Post by Dave Froble
Post by Stephen Hoffman
Post by Dave Froble
Perhaps we should be a bit more focused on the issue?
From what I was reading, the issue was catching data corruptions, not
security. Isn't it sort of silly to introduce security into another
issue? A checksum either works, or it doesn't. If it works, doesn't
that solve the potential issue?
Or maybe I don't understand the issue ...
OpenVMS is "the most secure operating system on the planet" 🤣, which
means that vendor and third-party developers have thought about both
non-malicious corruptions and about actively-malicious corruptions, right?
Same applies for the default choice for random-number generation: use a
cryptographically secure random number generator, absent very specific
reasons to use a lesser generator. Or a lesser message digest hash.
Or somewhat more succinctly, choose and use and offer and work toward
secure defaults, absent specific reasons not to.
We are all working toward actually living up to that "the most secure
operating system on the planet" claim, right?
Actually, no.
Why, because security is so much more than an OS, or any other single thing.
I find it irritating when security is not the topic, that some feel
that they have to introduce it into a topic, where it is not an issue.
If it ain't broke, don't fix it ...
YMMV
We should all have bad checksums, bad defaults, bad designs, bad APIs,
and bad documentation, right?

telnet, FTP, DECnet, AUTODIN-2 CRC32, MD5, that was good enough for our
ancestors, so it's good enough for us, right?

Do I like that we're increasingly forced to choose insecurity or to
update our app code? No. I've commented around frameworks to help
incrementally isolate some of that, as have others.

But change is the development world that we're increasingly all
residing within. With the occasional breaking changes and/or source
code changes, yes.

And awful defaults and sentiments including "you should have known
better than to use the defaults" don't help our app development and
maintenance efforts.

Put differently... If the data here is important enough, then MD5 and
AUTODIN-2 CRC32 and ilk are broken.

And if the data is not important, the defaults should still be reliable
and robust.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main (C.O.V.)
2020-11-27 01:39:24 UTC
Permalink
-----Original Message-----
Hoffman via Info-vax
Sent: November-25-20 4:35 PM
Subject: Re: [Info-vax] FTP FYI
Post by Dave Froble
Post by Stephen Hoffman
Post by Dave Froble
Perhaps we should be a bit more focused on the issue?
From what I was reading, the issue was catching data corruptions,
not security. Isn't it sort of silly to introduce security into
another issue? A checksum either works, or it doesn't. If it
works, doesn't that solve the potential issue?
Or maybe I don't understand the issue ...
OpenVMS is "the most secure operating system on the planet" 🤣, which
means that vendor and third-party developers have thought about both
non-malicious corruptions and about actively-malicious corruptions, right?
Same applies for the default choice for random-number generation: use
a cryptographically secure random number generator, absent very
specific reasons to use a lesser generator. Or a lesser message digest
hash.
Post by Dave Froble
Post by Stephen Hoffman
Or somewhat more succinctly, choose and use and offer and work toward
secure defaults, absent specific reasons not to.
We are all working toward actually living up to that "the most secure
operating system on the planet" claim, right?
Actually, no.
Why, because security is so much more than an OS, or any other single
thing.
Post by Dave Froble
I find it irritating when security is not the topic, that some feel
that they have to introduce it into a topic, where it is not an issue.
If it ain't broke, don't fix it ...
YMMV
We should all have bad checksums, bad defaults, bad designs, bad APIs, and
bad documentation, right?
telnet, FTP, DECnet, AUTODIN-2 CRC32, MD5, that was good enough for our
ancestors, so it's good enough for us, right?
Do I like that we're increasingly forced to choose insecurity or to update our
app code? No. I've commented around frameworks to help incrementally
isolate some of that, as have others.
But change is the development world that we're increasingly all residing
within. With the occasional breaking changes and/or source code changes,
yes.
And awful defaults and sentiments including "you should have known better
than to use the defaults" don't help our app development and maintenance
efforts.
Put differently... If the data here is important enough, then MD5 and
AUTODIN-2 CRC32 and ilk are broken.
And if the data is not important, the defaults should still be reliable and
robust.
Multinet V5.6 Release Notes:
<http://www.process.com/docs/multinet5_6/MULTINET056_RELEASE_NOTES.txt>

- TLSv1.2 is now the default for FTPS on Alpha and ia64 systems.
Highlights in this release include:
- PTPv2 support has been added (Precision Time Protocol)
- SSH 'Suite B' support on Alpha and I64, which adds support for more modern key exchanges, certificates, etc.
- Numerous bug fixes for SSH2 and SFTP
- OpenSSL 1.0.2T on Alpha and I64
- Performance enhancements and bug fixes for the MultiNet kernel
- NTP updated to 4.2.8p15 from ntp.org
- NAMED updated to BIND 9.11.21 from isc.org
- FTP support for TLSv1.2 and bug fixes
- NFSv3 improvements and bug fixes
- UCXDRIVER and UCX_LIBRARY_EMULATION bug fixes

Kerry
Dave Froble
2020-11-26 22:47:39 UTC
Permalink
Post by Dave Froble
Why, because security is so much more than an OS, or any other single thing.
I find it irritating when security is not the topic, that some feel that
they have to introduce it into a topic, where it is not an issue.
If it ain't broke, don't fix it ...
Post by Simon Clubley
Security is one area where something can become broken simply by not
doing something to update existing and previously working standards.
Simon.
The discussion was transferring data, and determining if it transferred
correctly. One method is using a checksum, which was part of the
discussion.

Do checksums do the job?

Do checksums break over time?

My question is, does every discussion always have to devolve into a
security issue?

Is it raining today?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Chris Townley
2020-11-26 23:17:38 UTC
Permalink
Post by Dave Froble
Post by Dave Froble
Why, because security is so much more than an OS, or any other single thing.
I find it irritating when security is not the topic, that some feel that
they have to introduce it into a topic, where it is not an issue.
If it ain't broke, don't fix it ...
Post by Simon Clubley
Security is one area where something can become broken simply by not
doing something to update existing and previously working standards.
Simon.
The discussion was transferring data, and determining if it transferred
correctly.  One method is using a checksum, which was part of the
discussion.
Do checksums do the job?
Do checksums break over time?
My question is, does every discussion always have to devolve into a
security issue?
Is it raining today?
I can't tell you - state secret!

Chris

Stephen Hoffman
2020-11-25 16:08:31 UTC
Permalink
Post by hb
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
Use of SHA-2 and SHA-3 hashes would be typical, now.


Background:

If I'm going to be doing a checksum (message digest hash), best to use
a reasonably secure one. SHA-1 is somewhat stale.

SHA-1 collisions are known, and SHA-1 has been deprecated by US NIST.
As are collisions for the yet-older MD5. And we were doing AUTODIN-2
CRC32 collisions on OpenVMS a decade or two ago, as we suspected the
target folks were still using CHECKSUM.

One of the networks I sometimes use in more recent times is known for
dynamically modifying unencrypted (e.g. FTP-transferred) Windows
executables detected in the network file transfer traffic. Copy a file
via that network, and and it's distinctly possible to have the
executable image detected by an intermediate host and malware inserted
for free. Which also ties back to best using sftp or FTP via VPN, and
cryptographically secure hashes, and not insecure hashes and
unencrypted links. And yes, added malware would probably be detected by
AUTODIN-2 CRC32 offered by CHECKSUM, but if I'm adding a checksum
comparison, the overhead of SHA-2 or SHA-3 will be lost in the noise of
the network file transfer. And spoofing the AUTODIN-2 CRC32 is trivial.

And on the subject of use and misuse of cryptographic hashes, here's an
old article on message digest hashes and password hashes:
https://krebsonsecurity.com/2012/06/how-companies-can-beef-up-password-security/


ps: $DEITY remind me to never click on a documentation link at the VSI
website. Who thought pointing to an in-browser Scribd-like
PDF-rendering tool at the HPE website was a good idea? Yuck.
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2020-11-25 16:44:49 UTC
Permalink
Post by Stephen Hoffman
Post by hb
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
Use of SHA-2 and SHA-3 hashes would be typical, now.
OK. But where did you find that stale, non-typical support of SHA-1 in
CHECKSUM on V8.4?
Stephen Hoffman
2020-11-25 16:59:09 UTC
Permalink
Post by hb
Post by Stephen Hoffman
Post by hb
Post by Stephen Hoffman
OpenVMS is fairly late to adopting file checksums, but the CHECKSUM
command does have an only-somewhat-stale SHA-1 checksum support as of V8.4.
Do you mind saying what that should be?
Use of SHA-2 and SHA-3 hashes would be typical, now.
OK. But where did you find that stale, non-typical support of SHA-1 in
CHECKSUM on V8.4?
Ah, you're right; it's worse than I'd recalled.

Use the OpenSSL tool.
--
Pure Personal Opinion | HoffmanLabs LLC