Discussion:
Looking for suggestions for new $GETDVI item codes
Robert A. Brooks
2014-10-18 03:03:56 UTC
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).

It is with more pleasure than you can imagine that I get to make that query
again as a proud member of the VMS Software, Inc engineering staff!

This time, I'm also interested in suggestions for enhancements to various
utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of said
suggestion.

Don't forget that our releases will be for IA64 only (until we release an x86
version). I'll offer any enhancements back to HP for inclusion in the Alpha
source tree.

The caveats:

There is no guarantee that any suggestion will ever be implemented, no matter
how reasonable it is.

It's highly unlikely that any new item codes will appear in our first release
in the Spring of 2015, given that our focus is Poulson-specific for that release.

OK, go ahead -- I'm listening!
--
Robert Brooks VMS Software -- I/O Exec Group ***@vmssoftware.com
Dale Dellutri
2014-10-18 11:56:01 UTC
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that query
again as a proud member of the VMS Software, Inc engineering staff!
This time, I'm also interested in suggestions for enhancements to various
utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of said
suggestion.
Don't forget that our releases will be for IA64 only (until we release an x86
version). I'll offer any enhancements back to HP for inclusion in the Alpha
source tree.
There is no guarantee that any suggestion will ever be implemented, no matter
how reasonable it is.
It's highly unlikely that any new item codes will appear in our first release
in the Spring of 2015, given that our focus is Poulson-specific for that release.
OK, go ahead -- I'm listening!
Two enhancements:

1. BACKUP /STATISTICS, modeled after SORT /STATISTICS. When I do
a backup, I'd like to know how much of the tape I've used to
estimate how much space is left on the tape. Right now I do
it by figuring out how many disk blocks I write:
$ inuse_before = f$getdvi(d,"MAXBLOCK") - f$getdvi(d,"FREEBLOCKS")
where d is the disk device. In some of my backup procedures,
I write multiple savesets to the same tape.
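For what it's worth, the bookkeeping behind that DCL line can be sketched like this (a Python stand-in with made-up figures; MAXBLOCK and FREEBLOCKS mirror the F$GETDVI item codes):

```python
def disk_blocks_in_use(maxblock, freeblocks):
    # Mirrors the DCL: f$getdvi(d, "MAXBLOCK") - f$getdvi(d, "FREEBLOCKS")
    return maxblock - freeblocks

# Made-up figures for a 4 GB disk (8388608 512-byte blocks):
used = disk_blocks_in_use(8388608, 2097152)  # 6291456 blocks, ~3 GB
# Sampling this before and after each BACKUP, and summing the deltas,
# approximates what has gone to tape so far -- exactly the estimate a
# BACKUP /STATISTICS could report directly.
```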

2. Allow F$GETSYI to retrieve any item which would be available
show
auto_action RESTART
...
--
Dale Dellutri <***@panQQQix.com> (lose the Q's)
V***@SendSpamHere.ORG
2014-10-18 12:43:24 UTC
Post by Dale Dellutri
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that query
again as a proud member of the VMS Software, Inc engineering staff!
This time, I'm also interested in suggestions for enhancements to various
utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of said
suggestion.
Don't forget that our releases will be for IA64 only (until we release an x86
version). I'll offer any enhancements back to HP for inclusion in the Alpha
source tree.
There is no guarantee that any suggestion will ever be implemented, no matter
how reasonable it is.
It's highly unlikely that any new item codes will appear in our first release
in the Spring of 2015, given that our focus is Poulson-specific for that release.
OK, go ahead -- I'm listening!
1. BACKUP /STATISTICS, modeled after SORT /STATISTICS. When I do
a backup, I'd like to know how much of the tape I've used to
estimate how much space is left on the tape. Right now I do
$ inuse_before = f$getdvi(d,"MAXBLOCK") - f$getdvi(d,"FREEBLOCKS")
where d is the disk device. In some of my backup procedures,
I write multiple savesets to the same tape.
2. Allow F$GETSYI to retrieve any item which would be available
show
auto_action RESTART
...
Re #2. That is already available, albeit with limitations, using F$GETENV.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Stephen Hoffman
2014-10-18 13:35:21 UTC
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes.
OK... You asked for it...

Not really a new itemcode, but a longstanding
<http://www.openvms.compaq.com/wizard/wiz_8428.html> difference from
what DECnet did: get DVI$_TT_ACCPORNAM to identify the origin of remote
TCP/IP telnet and ssh connections via TCP/IP Services. Also something
like DVI$C_SECONDARY that will allow me to get from the telnet or
ssh terminal device back to the BG socket device, for cases where there
are "stacked" devices. (VAXman has a tool that can help here, but it'd
be more convenient to have the base OS do the proper thing here.)

A disk native sector size itemcode: 512, 512e, 2048 (CD, DVD), 4096
(4Kn advanced 4K native); this from the optical and the IDEMA advanced
format support in various storage devices. VMS won't do much with 4Kn
native (yet?), but support for those disks is already here on other
platforms. DVI$_DEVBUFSIZ gets you what VMS sees, not what the disk
can do.

That four-byte DVI$_MAXBLOCKS field is going to need some work, sooner
or later. Might want to make it happen sooner, even if the underlying
change doesn't arrive until later.

An item code for easier scanning and detecting optical media, and
another for detecting flash media, and SSD. Also having an indicator
for virtual disk and virtual tape would be handy, particularly once
Jur's LD updates are back-integrated into VMS and with the virtual disk
and tape support added/upgraded. Similarly, some indication of whether
the disk uses FC, SCSI, iSCSI or NI hardware, and then if the storage
access protocol is MSCP or NFS or maybe (eventually?) via a CIFS client
or some other remote-access client. (e.g. DVI$_DFS_ACCESS, or maybe
rolling what could be a whole herd of _ACCESS codes into one new
mechanism rather than allowing the itemcodes to breed.) Here's a spot
where DVI$C_SECONDARY might be handy, too.

At some point in the hypothetical future of VMS, some $getdvi
infrastructures around whether the disk is encrypted will be of
interest. Maybe not for immediately reporting that (as there's not yet
anything to report here), but some way architected that MOUNT can then
leave some details around for $getdvi to report on the encryption
status, particularly when the disk is mounted with encryption enabled.
Unmounted disks would be "unknown", mounted disks (currently)
"unencrypted", and (probably eventually) specific disks will be
"encrypted" or (probably better) some indication of the type of
encryption.
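The proposed reporting could be sketched as a small decision table (names invented for illustration; this is not an actual VMS interface, just the states described above):

```python
from enum import Enum

class EncryptionStatus(Enum):
    UNKNOWN = "unknown"          # volume not mounted
    UNENCRYPTED = "unencrypted"  # mounted, no encryption in play
    ENCRYPTED = "encrypted"      # mounted with encryption enabled

def encryption_status(mounted, encryption_type=None):
    """Hypothetical decision table for the proposed itemcode; MOUNT
    would be the one leaving encryption_type around for $getdvi."""
    if not mounted:
        return EncryptionStatus.UNKNOWN
    if encryption_type:
        return EncryptionStatus.ENCRYPTED
    return EncryptionStatus.UNENCRYPTED
```

Reporting the type of encryption rather than a bare "encrypted" flag would just mean returning encryption_type alongside the state.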

Being able to detect a recordable optical device, and acquiring some
details of the particular disk medium loaded in the drive would have been
nice, but then the need for optical is waning.

Deprecate the entirely fictional DVI$_CYLINDERS, DVI$_SECTORS,
DVI$_TRACKS, and have a look at deprecating or undocumenting whatever
other itemcodes don't make sense any more.

An itemcode that lets me know if the associated RAID hardware
underneath a controller-instantiated disk volume is somehow degraded,
and needs attention. (There's no good way to see if you're operating
with a degraded storage configuration short of walking up to the
cabinet and checking for angry fruit salad colors or rummaging the
error logs, and the error logging and reporting is, um, somewhat of a
quagmire.)

Have somebody review the documentation for DVI$_SPECIAL_FILES. That's
the POSIX symbolic links support, right? (Because $getdvi isn't
looking at "special files" in the Unix and C sense, well, not unless
somebody was quietly planning on adding VMS devices associated with
Unix /dev and /proc stuff?) Probably the whole list of $getdvi
itemcodes should be reviewed, as some of the $getdvi documentation can
skew toward cryptic. As part of this, the following
<http://h71000.www7.hp.com/doc/84final/5763/5763profile_021.html>
arcana points over to the basically non-existent $getdvi documentation:
"Also, on ODS-5 volumes, a SPECIAL_FILES flag is used to communicate to
RMS operations whether to follow symbolic links or not. The RMS
operations that follow symbolic links are SYS$OPEN, SYS$CREATE,
SYS$SEARCH and all directory path interpretations.
For more information on how to set the SPECIAL_FILES flag, see HP
OpenVMS System Services Reference Manual and HP OpenVMS DCL Dictionary."

A more general comment as you're getting familiar with second-era VMS
and as you're getting rolling with third-era VMS: remember to have a
look at the UPGRADE release notes for VMS and at the release notes for
various of the layered products such as TCP/IP Services, as there was
some support and documentation added into those release notes that was
apparently never backported into the main VMS documentation.
TCP/IP Services had this in 5.7, for instance, both with the base 5.7
release notes and with the patch kits, which added NFSv3 support. I
don't recall whether there was any $getdvi stuff in this category, but
it would not surprise me.

Of the above, the DVI$_TT_ACCPORNAM and the degraded RAID storage have
been the biggest hassles in recent years. On balance and outside of
these two areas and of the pain involved when scanning for specific
sorts of devices present in a configuration, $getdvi hasn't been a
particular problem area.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2014-10-18 14:41:46 UTC
An itemcode that lets me know if the associated RAID hardware underneath
a controller-instantiated disk volume is somehow degraded, and needs
attention. (There's no good way to see if you're operating with a
degraded storage configuration short of walking up to the cabinet and
checking for angry fruit salad colors or rummaging the error logs, and
the error logging and reporting is, um, somewhat of a quagmire.)
I think you can get the angry fruit salad colors from SMH without
leaving your desk, assuming you can hold your nose and close your eyes
to the various glaring faults of SMH. Don't know how it does it, but
maybe there's something in the SNMP giblets that are part of SMH that
could be cleaned up and made fit for general consumption.
Stephen Hoffman
2014-10-18 15:12:15 UTC
...assuming you can hold your nose and close your eyes to the various
glaring faults of SMH....
"Glaring faults" in SMH meaning "wildly insecure", among its various
other issues. I'd expect latent and known remote code executions,
given the fixes that have gone into SMH on other platforms.

As for these sorts of distributed tasks, HP seems to have gone to
Helion[1][2]. Where might VSI decide to go with their error-logging
and error-reporting strategy?

Better SNMP support would be handy, but the VMS version never got to
SNMPv3 and encryption. One of many security problems latent in VMS.

Mr Brooks had specifically asked for $getdvi suggestions, so I've held
off on listing the many, many other suggestions for improvements
elsewhere in VMS. There are rather more of those other suggestions
than (I have) suggestions for $getdvi, too.



####
[1]
<http://docs.openstack.org/openstack-ops/content/logging_monitoring.html>,
<https://wiki.openstack.org/wiki/LoggingStandards>, etc.
[2] Yes, I've been saving that one.
--
Pure Personal Opinion | HoffmanLabs LLC
V***@SendSpamHere.ORG
2014-10-18 16:30:46 UTC
Post by Stephen Hoffman
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes.
OK... You asked for it...
Not really a new itemcode, but a longstanding
<http://www.openvms.compaq.com/wizard/wiz_8428.html> difference from
what DECnet did: get DVI$_TT_ACCPORNAM to identify the origin of remote
TCP/IP telnet and ssh connections via TCP/IP Services. Also something
like DVI$C_SECONDARY that will allow me to get from the telnet or
ssh terminal device back to the BG socket device, for cases where there
are "stacked" devices. (VAXman has a tool that can help here, but it'd
be more convenient to have the base OS do the proper thing here.)
It's roughly the same code which MultiNet and TCPware have employed in their
ssh implementation to provide the ACCPORNAM. If you look carefully in their
documentation, you'll see my copyright listed therein.

What I'd like to see is an amendment of F$getjpi() to allow the pid argument
to be an IPID. If the desire is to trace back to obtain the ssh session's BG
devices, you can get this from a F$getjpi() of the controller process. The
controller process's PID (well, IPID) can be had from F$getjpi("TT","LOCKID")
because the UCB$L_LOCKID shares its offset with UCB$L_CPID (controller PID).
(Perhaps, make CPID an alias for LOCKID when using F$getjpi() to keep things
more "readable".) However, getting from the IPID of the controller process
to its EPID is a bit more challenging in DCL.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
JF Mezei
2014-10-18 21:30:01 UTC
Post by Stephen Hoffman
what DECnet did: get DVI$_TT_ACCPORNAM to identify the origin of remote
TCP/IP telnet and ssh connections via TCP/IP Services.
YES YES YES !
Post by Stephen Hoffman
Also something
like DVI$C_SECONDARY that will allow me to get from the telnet or
ssh terminal device back to the BG socket device,
And vice versa too.
Post by Stephen Hoffman
An item code for easier scanning and detecting optical media, and
another for detecting flash media, and SSD.
Detection of SSD media type may be required eventually to implement TRIM
commands by the file system. TRIM_CAPABLE might be another item code,
although not sure it is necessary.

For SSDs, the file system might want to know the PAGE size and BLOCK
size. (page = smallest writeable unit, block = smallest erasable unit.)
These may be transparent to the file system though.


For all drives, itemcodes to access the device's SMART data on disk health.

You may also want to think about access to the power supply/supplies as
a device (voltage, temperatures, fan speeds, current amperage, whether
on battery or line power).

Similarly, integration with UPS systems via USB or whatever may present
the UPS as a device that one can interrogate via F$GETDVI.
Dirk Munk
2014-10-18 23:50:51 UTC
Post by JF Mezei
Post by Stephen Hoffman
what DECnet did: get DVI$_TT_ACCPORNAM to identify the origin of remote
TCP/IP telnet and ssh connections via TCP/IP Services.
YES YES YES !
Post by Stephen Hoffman
Also something
like DVI$C_SECONDARY that will allow me to get from the telnet or
ssh terminal device back to the BG socket device,
And vice versa too.
Post by Stephen Hoffman
An item code for easier scanning and detecting optical media, and
another for detecting flash media, and SSD.
Detection of SSD media type may be required eventually to implement TRIM
commands by the file system.
Without TRIM you shouldn't use an SSD; you will run into big performance
problems. But I'm sure you know that.
Post by JF Mezei
TRIM_CAPABLE might be another item code,
although not sure it is necessary.
Every SSD supports TRIM, I assume.
Post by JF Mezei
For SSDs, the file system might want to know the PAGE size and BLOCK
size. (page = smallest writeable unit, block = smallest erasable unit.)
These may be transparent to the file system though.
I don't think a file system erases. It just marks blocks as free, and
the SSD will erase them at a convenient moment.
Post by JF Mezei
For all drives, itemcodes to access the device's SMART data on disk health.
Excellent idea.
Post by JF Mezei
You may also want to think about access to the power supply/supplies as
a device. (voltage, temperatures, fan speeds, current amperage, whether
battery or power).
Normally motherboards gather that information, so you have to get it
from there.
Post by JF Mezei
Similarly, integration with UPS systems via USB or whatever may present
the UPS as a device that one can interrogate via F$GETDVI.
That should work for a small UPS with USB status information.
JF Mezei
2014-10-19 00:08:47 UTC
Post by Dirk Munk
I don't think a file system erases. It just marks blocks as free, and
the SSD will erase them at a convenient moment.
Well, the file system does an "erase on delete" except instead of
writing 0s back over the blocks used by the file, it sends a "TRIM"
command for those blocks. The SSD controller then zaps those blocks so
they can be written to again.
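That "free the blocks and hand the same list to the device" behavior can be sketched in a few lines (a toy model, not any real file system's code; block numbers are made up):

```python
def delete_file(allocated_blocks, file_blocks):
    """Toy model of 'erase on delete' with TRIM: instead of zero-filling
    the file's blocks, the file system marks them free and passes the
    same block list to the device as a TRIM/discard request."""
    remaining = allocated_blocks - set(file_blocks)  # blocks now free
    trim_list = sorted(file_blocks)                  # sent to the SSD
    return remaining, trim_list

# A disk with blocks 1-5 allocated; deleting a file that used 2 and 4:
remaining, trims = delete_file({1, 2, 3, 4, 5}, [2, 4])
```

The SSD is then free to erase the trimmed blocks whenever convenient, which is where the two views in this subthread actually agree.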
Post by Dirk Munk
Normally motherboards gather that information, so you have to get it
from there.
GETDVI should still be able to provide a documented way with item codes
etc to get power supply info (voltage, amps, temperature, fan speed etc)
Dirk Munk
2014-10-18 23:15:52 UTC
Post by Stephen Hoffman
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I
would periodically poll the collective wisdom of comp.os.vms for
suggestions regarding
new $GETDVI item codes.
OK... You asked for it...
Not really a new itemcode, but a longstanding
<http://www.openvms.compaq.com/wizard/wiz_8428.html> difference from
what DECnet did: get DVI$_TT_ACCPORNAM to identify the origin of remote
TCP/IP telnet and ssh connections via TCP/IP Services. Also something
like DVI$C_SECONDARY that will allow me to get from the telnet or ssh
terminal device back to the BG socket device, for cases where there are
"stacked" devices. (VAXman has a tool that can help here, but it'd be
more convenient to have the base OS do the proper thing here.)
A disk native sector size itemcode: 512, 512e, 2048 (CD, DVD), 4096 (4Kn
advanced 4K native); this from the optical and the IDEMA advanced format
support in various storage devices. VMS won't do much with 4Kn native
(yet?), but support for those disks is already here on other
platforms.
I think it is safe to say that all new SATA disks have 4kB sectors,
smaller ones (500GB) included. The cluster size for these disks must be
a multiple of 8 blocks to avoid misalignment. In fact, all IO operations
should be done in multiples of 4kB on aligned file systems; otherwise
these disks will get very busy rectifying misaligned IOs.
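The alignment rule is simple modular arithmetic; a minimal sketch, assuming 512-byte logical blocks presented by a 512e drive with 4 KB physical sectors:

```python
BLOCKS_PER_PHYS_SECTOR = 4096 // 512  # 8 logical blocks per 4 KB physical sector

def io_is_aligned(start_lbn, block_count):
    """True when a transfer touches only whole 4 KB physical sectors,
    so the drive avoids a read-modify-write cycle."""
    return (start_lbn % BLOCKS_PER_PHYS_SECTOR == 0
            and block_count % BLOCKS_PER_PHYS_SECTOR == 0)

io_is_aligned(0, 16)   # True: starts and ends on 4 KB boundaries
io_is_aligned(3, 8)    # False: straddles physical sectors at both ends
```

This is why a cluster size that is a multiple of 8 matters: it keeps file allocations, and hence most transfers, on those boundaries.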
Post by Stephen Hoffman
DVI$_DEVBUFSIZ gets you what VMS sees, not what the disk can
do.
I suppose you mean the cache size of a disk (32MB or 64MB these days).
It would also be nice if we could see whether the write cache of a disk
is enabled; I'm not sure if that is visible now.
Post by Stephen Hoffman
That four-byte DVI$_MAXBLOCKS field is going to need some work, sooner
or later. Might want to make it happen sooner, even if the underlying
change doesn't arrive until later.
Yes, 10TB disks have been announced.
Post by Stephen Hoffman
An item code for easier scanning and detecting optical media, and
another for detecting flash media, and SSD. Also having an indicator
for virtual disk and virtual tape would be handy, particularly once
Jur's LD updates are back-integrated into VMS and with the virtual disk
and tape support added/upgraded. Similarly, some indication of whether
the disk uses FC, SCSI, iSCSI or NI hardware,
That is visible in the name of the device, isn't it? DG = FC, DK =
SCSI, etc.; iSCSI hasn't been implemented yet.

The VMS SCSI-3 driver expects a lun 0 to communicate with a storage
array, in accordance with the SCSI-3 standards. Unfortunately some
arrays (EMC) and iSCSI don't offer a lun 0. That can be a problem. I
don't know what kind of information lun 0 can offer, but perhaps a
$GETDVI for lun 0 could be nice too.
Post by Stephen Hoffman
and then if the storage
access protocol is MSCP or NFS or maybe (eventually?) via a CIFS client
or some other remote-access client. (e.g. DVI$_DFS_ACCESS, or maybe
rolling what could be a whole herd of _ACCESS codes into one new
mechanism rather than allowing the itemcodes to breed.) Here's a spot
where DVI$C_SECONDARY might be handy, too.
At some point in the hypothetical future of VMS, some $getdvi
infrastructures around whether the disk is encrypted will be of
interest. Maybe not for immediately reporting that (as there's not yet
anything to report here), but some way architected that MOUNT can then
leave some details around for $getdvi to report on the encryption
status, particularly when the disk is mounted with encryption enabled.
Unmounted disks would be "unknown", mounted disks (currently)
"unencrypted", and (probably eventually) specific disks will be
"encrypted" or (probably better) some indication of the type of encryption.
Brocade switches will only accept certain types/brands of USB sticks for
firmware updates; I think they use STEC sticks. The reason is that they
only trust these sticks; others are not reliable enough. Quite similar
to the time that VMS would only accept known SCSI drives. Perhaps such a
check could be used for VMS installations as well, assuming that in
future installations and updates can be done from a USB stick.
Post by Stephen Hoffman
Being able to detect a recordable optical device, and acquiring some
details of the particular disk medium loaded in the drive would have been
nice, but then the need for optical is waning.
Deprecate the entirely fictional DVI$_CYLINDERS, DVI$_SECTORS,
DVI$_TRACKS,
I agree with you that the Cylinder, Heads, Sector information is
completely bogus, however these values are reported in the SCSI pages,
and as I noticed with Solaris, sometimes you need them. Silly, but true.
Post by Stephen Hoffman
and have a look at deprecating or undocumenting whatever
other itemcodes don't make sense any more.
An itemcode that lets me know if the associated RAID hardware underneath
a controller-instantiated disk volume is somehow degraded, and needs
attention. (There's no good way to see if you're operating with a
degraded storage configuration short of walking up to the cabinet and
checking for angry fruit salad colors or rummaging the error logs, and
the error logging and reporting is, um, somewhat of a quagmire.)
I suppose that is the kind of information lun 0 could be offering you.
Post by Stephen Hoffman
Have somebody review the documentation for DVI$_SPECIAL_FILES. That's
the POSIX symbolic links support, right? (Because $getdvi isn't
looking at "special files" in the Unix and C sense, well, not unless
somebody was quietly planning on adding VMS devices associated with Unix
/dev and /proc stuff?) Probably the whole list of $getdvi itemcodes
should be reviewed, as some of the $getdvi documentation can skew toward
cryptic. As part of this, the following
<http://h71000.www7.hp.com/doc/84final/5763/5763profile_021.html> arcana
points over to the basically non-existent $getdvi documentation: "Also,
on ODS-5 volumes, a SPECIAL_FILES flag is used to communicate to RMS
operations whether to follow symbolic links or not. The RMS operations
that follow symbolic links are SYS$OPEN, SYS$CREATE, SYS$SEARCH and all
directory path interpretations.
For more information on how to set the SPECIAL_FILES flag, see HP
OpenVMS System Services Reference Manual and HP OpenVMS DCL Dictionary."
A more general comment as you're getting familiar with second-era VMS
and as you're getting rolling with third-era VMS: remember to have a
look at the UPGRADE release notes for VMS and at the release notes for
various of the layered products such as TCP/IP Services, as there was
some support and documentation added into those release notes that was
apparently never backported into the main VMS documentation.
TCP/IP Services had this in 5.7, for instance, both with the base 5.7
release notes and with the patch kits, which added NFSv3 support. I
don't recall whether there was any $getdvi stuff in this category, but
it would not surprise me.
Of the above, the DVI$_TT_ACCPORNAM and the degraded RAID storage have
been the biggest hassles in recent years. On balance and outside of
these two areas and of the pain involved when scanning for specific
sorts of devices present in a configuration, $getdvi hasn't been a
particular problem area.
k***@verizon.net
2015-01-14 19:54:45 UTC
Post by Stephen Hoffman
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes.
OK... You asked for it...
That four-byte DVI$_MAXBLOCKS field is going to need some work, sooner
or later. Might want to make it happen sooner, even if the underlying
change doesn't arrive until later.
So I am late in commenting...

I agree with Hoff here. This needs work; it's already too late.

Try a DVI$_MAXBLOCK on a drive over 1.0 TB capacity and you get a negative number. DVI$_FREEBLOCKS has the same issue.
Stephen Hoffman
2015-01-14 21:43:24 UTC
Post by k***@verizon.net
Try a DVI$_MAXBLOCK on a drive over 1.0 TB capacity and you get a
negative number. DVI$_FREEBLOCKS has the same issue.
FWIW, whatever programming language you're working with here (DCL?) is
apparently defaulting to a 32-bit signed longword integer
representation.

sys$getdvi[w] is returning an unsigned longword value, as that's the
only way to fit a 2 TiB block value via a longword.

The DCL environment just doesn't implement unsigned integer values.
Nor does DCL offer floating point values, quadword signed or unsigned
integers, nor extended-precision math, nor arrays and dictionaries and
data structures, definitely no objects, command line completion,
immutable data, UTF-8 character encoding, regular expressions, nor
user-written lexical functions or any other forms of extensions, or
other increasingly-expected constructs. And yes, there are some
hack-arounds available for some of these. But integers? Those are
signed values.
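The wraparound is easy to demonstrate outside DCL; a sketch of the reinterpretation (Python standing in for DCL, block counts assuming 512-byte blocks):

```python
def as_signed_longword(u32):
    """Reinterpret an unsigned 32-bit value the way DCL does:
    as a signed longword."""
    return u32 - (1 << 32) if u32 >= (1 << 31) else u32

def as_unsigned_longword(i32):
    """Recover the unsigned block count from a 'negative' DCL integer."""
    return i32 & 0xFFFFFFFF

blocks = 3 * (1 << 30)               # 1.5 TiB worth of 512-byte blocks
as_signed_longword(blocks)           # -1073741824: the "negative" disk size
as_unsigned_longword(-1073741824)    # 3221225472: the real block count
```

Any DCL workaround has to juggle the value in pieces, since DCL itself has nowhere to hold the unsigned result, which is the point being made above.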

Part of the delay involved in extending the supported volume sizes to 2
TiB was due to the need to ensure that existing OpenVMS system code
always treated the disk size values as unsigned integers. Prior to
that work, the OpenVMS code had never been supported past 1 TiB, and
was expected to have some "wrinkles".
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2015-01-15 01:56:24 UTC
Post by Stephen Hoffman
Post by k***@verizon.net
Try a DVI$_MAXBLOCK on a drive over 1.0 TB capacity and you get a
negative number. DVI$_FREEBLOCKS has the same issue.
FWIW, whatever programming language you're working with here (DCL?) is
apparently defaulting to a 32-bit, signed longword, integer representation.
sys$getdvi[w] is returning an unsigned longword value, as that's the
only way to fit a 2 TiB block value via a longword.
The DCL environment just doesn't implement unsigned integer values. Nor
does DCL offer floating point values, quadword signed or unsigned
integers, nor extended-precision math, nor arrays and dictionaries and
data structures, definitely no objects, command line completion,
immutable data, UTF-8 character encoding, regular expressions, nor
user-written lexical functions or any other forms of extensions, or
other increasingly-expected constructs. And yes, there are some
hack-arounds available for some of these. But integers? Those are
signed values.
Part of the delay involved in extending the supported volume sizes to 2
TiB was due to the need to ensure that existing OpenVMS system code
always treated the disk size values as unsigned integers. Prior to that
work, the OpenVMS code had never been supported past 1 TiB, and was
expected to have some "wrinkles".
Maybe I'm painting with a "too broad" brush, but today memory is cheap
and plentiful. Why not make anything that might possibly have a large
value a quadword? Yeah, lots of work up front, but perhaps not much
more work to do 50-100 (or whatever) than just a few.
David Froble
2015-01-15 10:13:27 UTC
Post by David Froble
Maybe I'm painting with a "too broad" brush, but, today memory is
cheap and plentiful. Why not make anything that might possibly have a
large value a quadword. Yeah, lots of work up front, but perhaps not
much more work to do 50-100 (or whatever) than just a few.
Sure. Great idea. But it's more than a little work. For everybody.
The client applications will have to be reviewed and modified to use
the new itemcodes, or to expect the newly- and optionally-extended
return fields from the existing itemcodes here. While existing and
unmodified applications might not fail, the applications also won't work
as expected when presented with an 6 TiB disk spindle or a 32 TiB RAID6
volume, or whatever other fields were extended to quadwords. These
existing and unmodified applications will likely either get an error
code they might not expect, or they'll get and process what is bogus
data, depending on what approach VSI might decide to do here, if VSI
decides to promote fields. More than a few existing OpenVMS
applications haven't made it to ODS-5 support after all, so there'll be
more than a few that won't get updated for quadword returns... Probably
fodder for a major release (V9, V10, etc), given the amount of effort
and change and churn that would be involved here, too.
Yes, I know it would be major work. But what is the alternative for
those wanting to use large disks and such?

What about a SYSGEN parameter that all such data would respect, either
quadwords, or longwords, based upon the parameter?

Still don't have an ODS-5 disk ....
Stephen Hoffman
2015-01-15 16:51:06 UTC
Post by David Froble
Yes, I know it would be major work. But what is the alternative for
those wanting to use large disks and such?
Hopefully, the disk-style interfaces are all long dead well before
64-bit quadword values are exceeded. Though that likely means memory
windows, larger physical address spaces, or both will be required.
But I digress.

As for extending the fields to a quadword, there isn't a good
alternative. Larger sector counts and larger sector sizes will be
disruptive, no matter how those changes are rolled out. Modern disk
blocks are now 4096 bytes, and there's more than a little code around
that assumes 512-byte blocks, and there's more than a little code that
assumes that disk sizes will fit in longwords, which in aggregate means
breaking that code, and quite possibly implementing something similar
to the existing IDE/ATAPI optical disk synthetic-sector support for
2048-byte optical-media sectors. Or quite possibly both.
Post by David Froble
What about a SYSGEN parameter that all such data would respect, either
quadwords, or longwords, based upon the parameter?
That's a system-wide setting, which means that toggling the setting and
migrating to larger disks is an all-or-nothing undertaking; more of a
migration. This particular detail is much more likely to be
conditionalized at run-time, either based on a new itemcode that allows
8 byte buffers, or probably more likely based on seeing whether the
existing buffer for the itemcode is 4 or 8 bytes in size. The
system-wide "parameter setting" with this approach being the classic
"don't configure and connect a volume larger than 2 TiB, if your code
really isn't ready to deal with that" mechanism.
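
A portable sketch of that run-time conditionalization, dispatching on whether the caller's buffer is 4 or 8 bytes (the status names here are stand-ins, not the real SS$_ codes):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define SS_NORMAL   1    /* stand-in for the VMS success status */
#define SS_IVBUFLEN 20   /* hypothetical "value does not fit" status */

/* Return a block count into a caller buffer that is either 4 or 8 bytes
 * wide.  A 4-byte caller only succeeds while the value still fits in an
 * unsigned longword; otherwise it gets an error rather than a truncated
 * or saturated value. */
static int return_maxblock(uint64_t maxblock, void *buf, size_t buflen)
{
    if (buflen >= 8) {
        memcpy(buf, &maxblock, 8);
        return SS_NORMAL;
    }
    if (buflen == 4) {
        if (maxblock > 0xFFFFFFFFu)
            return SS_IVBUFLEN;          /* caller not ready for > 2 TiB */
        uint32_t lw = (uint32_t)maxblock;
        memcpy(buf, &lw, 4);
        return SS_NORMAL;
    }
    return SS_IVBUFLEN;
}
```

Old callers passing a longword keep working on volumes that fit; only the combination of a small buffer and a huge volume produces the error.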

The historical fondness for conditionalized settings and adding
parameters and compatibility serves to build up code cruft over time,
as well as documentation cruft, testing cruft, and a whole host of
other issues. Once something gets added to the operating system, the
VMS folks were classically quite reasonably loath to remove it, even
when removing it would have made more sense. The accretion that arises
then creates longer-term and more subtle problems. Sure, it's bad to
break APIs and interfaces unnecessarily, but it's also bad to be
dragging around code that is problematic, deprecated or insecure, and
that's before any discussions of whether the particular features are of
sufficiently broad interest to warrant further investments. The old
parallel-processing library PPLRTL was deprecated a decade or two ago,
and it's still around. For an operating system to be financially
successful, the folks should want or need to be over onto the New
Hotness much sooner than that... It's better to spend more of the
available time and effort on newer features and newer interfaces and
newer hardware and the rest of the stuff that folks want to buy and
upgrade to, and less on old code and old features. To be absolutely
clear, this doesn't mean ripping out old code just because it's old.
It does mean ripping out old code that's been deprecated and/or
superseded.
Post by David Froble
Still don't have an ODS-5 disk ....
Ayup. DEC/Compaq/HP didn't sufficiently enable and didn't encourage
folks to upgrade VMS. Operating system and hardware support contracts
are nice, but encouraging folks to move off the old software and the
old hardware has better trade-offs for the vendor and for the customer.
As one of the many changes in the industry since the era of Alpha and
VAX, HP is recommending a five-year hardware replacement cycle for
servers, for instance. The older boxes just aren't as economical to
run, or to maintain, as the newer ones. Where you can replace a rack
or two of older servers with a Moonshot box, there can be substantial
space savings to be had through consolidation, too.

Now what VSI does here, and what they have for both shorter- and
longer-term plans, we'll find out over time.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2015-01-15 21:31:10 UTC
Permalink
Post by Stephen Hoffman
Post by David Froble
Yes, I know it would be major work. But what is the alternative for
those wanting to use large disks and such?
Hopefully, the disk-style interfaces are all long dead well before
64-bit, quadword values are exceeded. Though that likely means memory
windows and/or larger physical address spaces will be required, or
both. But I digress.
As for extending the fields to a quadword, there isn't a good
alternative. Larger sector counts and larger sector sizes will be
disruptive, no matter how those changes are rolled out. Modern disk
blocks are now 4096 bytes, and there's more than a little code around
that assumes 512-byte blocks, and there's more than a little code that
assumes that disk sizes will fit in longwords, which in aggregate means
breaking that code, and quite possibly implementing something similar to
the existing IDE/ATAPI optical disk synthetic-sector support for
2048-byte optical-media sectors. Or quite possibly both.
Post by David Froble
What about a SYSGEN parameter that all such data would respect, either
quadwords, or longwords, based upon the parameter?
That's a system-wide setting, which means that toggling the setting and
migrating to larger disks is an all-or-nothing undertaking; more of a
migration. This particular detail is much more likely to be
conditionalized at run-time, either based on a new itemcode that allows
8 byte buffers, or probably more likely based on seeing whether the
existing buffer for the itemcode is 4 or 8 bytes in size. The
system-wide "parameter setting" with this approach being the classic
"don't configure and connect a volume larger than 2 TiB, if your code
really isn't ready to deal with that" mechanism.
I'm not saying how it should work.

There will be customers who just don't need the larger disks. This
assumes the "smaller" disks will remain available.
Post by Stephen Hoffman
The historical fondness for conditionalized settings and adding
parameters and compatibility serves to build up code cruft over time, as
well as documentation cruft, testing cruft, and a whole host of other
issues. Once something gets added to the operating system, the VMS
folks were classically quite reasonably loathe to remove it, even when
removing it would have made more sense. The accretion that arises then
creates longer-term and more subtle problems. Sure, it's bad to break
APIs and interfaces unnecessarily, but it's also bad to be dragging
around code that is problematic, deprecated or insecure, and that's
before any discussions of whether the particular features are of
sufficiently broad interest to warrant further investments. The old
parallel-processing library PPLRTL was deprecated a decade or two ago,
and it's still around. For an operating system to be financially
successful, the folks should want or need to be over onto the New
Hotness much sooner than that... It's better to spend more of the
available time and effort on newer features and newer interfaces and
newer hardware and the rest of the stuff that folks want to buy and
upgrade to, and less on old code and old features. To be absolutely
clear, this doesn't mean ripping out old code just because it's old. It
does mean ripping out old code that's been deprecated and/or superseded.
I know you have attitudes about "code cruft".

I'm going to guess that VSI's initial concern will be existing
customers, and their needs. Maybe they can reach beyond that in the future.

Now, being able to use very large disks will most likely not be a
problem, whether they are needed, or not. For some, it will be an
absolute requirement.

What would be possibly a disaster would be breaking the applications of
existing customers.
Post by Stephen Hoffman
Post by David Froble
Still don't have an ODS-5 disk ....
Ayup. DEC/Compaq/HP didn't sufficiently enable and didn't encourage
folks to upgrade VMS. Operating system and hardware support contracts
are nice, but encouraging folks to move off the old software and the old
hardware has better trade-offs for the vendor and for the customer. As
one of the many changes in the industry since the era of Alpha and VAX,
HP is recommending a five-year hardware replacement cycle for servers,
for instance. The older boxes just aren't as economical to run, or to
maintain, as the newer ones. Where you can replace a rack or two of
older servers with a Moonshot box, there can be substantial space
savings to be had through consolidation, too.
Now what VSI does here, and what they have for both shorter- and
longer-term plans, we'll find out over time.
Yep.
JF Mezei
2015-01-15 21:53:26 UTC
Permalink
Post by Stephen Hoffman
disruptive, no matter how those changes are rolled out. Modern disk
blocks are now 4096 bytes, and there's more than a little code around
that assumes 512-byte blocks, and there's more than a little code that
assumes that disk sizes will fit in longwords,
Would sector/block size be a driver level issue that is more or less
irrelevant to applications ?

If I do a "seek" in an application, I am not too concerned with which
sector it is in as I give a relative offset from start of file.

With a block size of 4K, and an application that assumes 512-byte
sectors, doesn't this simply become similar to assuming a file with
512-byte records? Currently, am I not able to rewrite 100 bytes in a
file at the application level and let the "system" figure out how to do
it (i.e., read block, replace 100 bytes, write block)? Does it really
matter to the application what size the block is?

Sure, apps that thought it would be more efficient to access files in
512-byte chunks, thinking they bypassed RMS, would now end up using RMS
to read 512-byte chunks from 4096-byte blocks, but wouldn't they continue
to work?
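
The read-modify-write JF describes can be sketched in portable C against an in-memory "disk" (illustrative only; on VMS this is the job of RMS and the file system, not the application):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Rewrite `len` bytes at byte offset `off`: read each containing block,
 * patch the affected span, and write the block back.  Block size is a
 * parameter, so the same logic serves 512-byte and 4096-byte devices,
 * which is why the application-level view need not change. */
static void rmw_write(uint8_t *disk, size_t blocksize, size_t off,
                      const uint8_t *data, size_t len)
{
    uint8_t blk[4096];                       /* assumes blocksize <= 4096 */
    while (len > 0) {
        size_t lbn  = off / blocksize;       /* containing block */
        size_t boff = off % blocksize;       /* offset within it */
        size_t n    = blocksize - boff;
        if (n > len) n = len;
        memcpy(blk, disk + lbn * blocksize, blocksize);   /* read block   */
        memcpy(blk + boff, data, n);                      /* replace span */
        memcpy(disk + lbn * blocksize, blk, blocksize);   /* write block  */
        off += n; data += n; len -= n;
    }
}
```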
Simon Clubley
2015-01-16 01:29:54 UTC
Permalink
Post by JF Mezei
Post by Stephen Hoffman
disruptive, no matter how those changes are rolled out. Modern disk
blocks are now 4096 bytes, and there's more than a little code around
that assumes 512-byte blocks, and there's more than a little code that
assumes that disk sizes will fit in longwords,
Would sector/block size be a driver level issue that is more or less
irrelevant to applications ?
Not just a driver issue, JF.

For example, the filesystem needs to know the location of files by
LBA. You can work around that by using blocklets, but you can't use
the bigger disks in that case unless you extend the size of the fields
for the LBAs.

Likewise, locking includes a block number component and you would have
to translate that into blocklets unless you supported 4096 byte blocks
above the driver level.
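
The blocklet translation Simon mentions can be sketched like this (a hypothetical helper; the real fields live in the file system and lock value blocks):

```c
#include <stdint.h>

/* A 4096-byte physical block holds eight 512-byte "blocklets".
 * Translating a physical LBA to a blocklet number multiplies by 8, so
 * a 32-bit blocklet field overflows long before the physical LBA does:
 * exactly the "can't use the bigger disks unless you extend the LBA
 * fields" problem. */
static int lba_to_blocklet32(uint64_t phys_lba, uint32_t *blocklet)
{
    uint64_t b = phys_lba * 8u;      /* 4096 / 512 */
    if (b > 0xFFFFFFFFu)
        return 0;                    /* needs a wider field */
    *blocklet = (uint32_t)b;
    return 1;
}
```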
Post by JF Mezei
If I do a "seek" in an application, I am not too concerned with which
sector it is in as I give a relative offset from start of file.
If that's a 32-bit seek, then 4096 byte blocks don't buy you anything.

BTW, I've believed for a good number of years now, that integers should
be unsigned by default and if you want a signed integer, you should have
to ask for it.

Make signed integers the default and people will use them even when it
doesn't accurately model the data which the program is manipulating.
The end results are all the artificial limits and security issues we see.
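
Simon's point shows up directly with the block counts discussed in this thread: a volume past 2^31 blocks (1 TiB at 512 bytes/block) is still a valid unsigned longword, but looks negative to signed code. A minimal sketch:

```c
#include <stdint.h>

/* What signed-by-default code sees when handed a block count that is a
 * perfectly valid unsigned longword.  The cast is how DCL and other
 * signed-only consumers effectively interpret the same 32 bits. */
static int32_t as_signed(uint32_t blocks)
{
    return (int32_t)blocks;   /* two's complement on every VMS platform */
}
```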

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Bob Gezelter
2015-01-15 14:23:54 UTC
Permalink
Post by David Froble
Maybe I'm painting with a "too broad" brush, but, today memory is cheap
and plentiful. Why not make anything that might possibly have a large
value a quadword. Yeah, lots of work up front, but perhaps not much
more work to do 50-100 (or whatever) than just a few.
Sure. Great idea. But it's more than a little work. For everybody.
The client applications will have to be reviewed and modified to use
the new itemcodes, or to expect the newly- and optionally-extended
return fields from the existing itemcodes here. While existing and
unmodified applications might not fail, the applications also won't
work as expected when presented with a 6 TiB disk spindle or a 32 TiB
RAID6 volume, or whatever other fields were extended to quadwords.
These existing and unmodified applications will likely either get an
error code they might not expect, or they'll get and process what is
bogus data, depending on what approach VSI might decide to do here, if
VSI decides to promote fields. More than a few existing OpenVMS
applications haven't made it to ODS-5 support after all, so there'll be
more than a few that won't get updated for quadword returns...
Probably fodder for a major release (V9, V10, etc), given the amount of
effort and change and churn that would be involved here, too.
--
Pure Personal Opinion | HoffmanLabs LLC
David,

I second Hoff's comment, with an add-on.

Extension to quadwords is probably a good long-term goal; however, care is needed. As Hoff noted, it is not just a base software issue. The impact of such a change would ricochet through customers' existing code bases with side effects ranging from annoying (mis-formatted output) to catastrophic (production processes stop running).

I would advocate for an approach which is both deliberate and embracive. First, quadword responses are probably the right long-term destination. However, I would take the following steps to ease the transition and preserve presently operating images (including shared middleware and emulated/translated images).

- define a datatype for "disk block count" in the user system service header files (e.g., DVI$DiskBlockCountReturn). Presently, that would equate to 32 bits, later being defined to 64 bits. This would allow savvy coders to write code that compiles correctly on BOTH old and future platforms.

- adopt (and DOCUMENT) the convention that if a value is too large, the maximum value representable in the field will be returned (e.g., LONG_MAX, ULONG_MAX, ...). Coded and compiled correctly (e.g., result = (value > xxx_MAX) ? xxx_MAX : value), this should require at most a handful of extra instructions in the code path.

- use disjoint itemcodes for the returning of 64-bit returns, preserving the old item codes for back compatibility.

The above is probably incomplete, but it summarizes an approach which will benefit all while minimizing disruption.
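
Bob's first two steps, coded as he suggests (illustrative names; the typedef width would flip to 64 bits in a later release):

```c
#include <stdint.h>

/* Step 1: a named width for block counts, so recompiling against newer
 * headers is all forward-looking code needs.  (Hypothetical name.) */
typedef uint32_t dvi_diskblockcount_t;   /* later: uint64_t */

/* Step 2: saturate rather than truncate when the true count does not
 * fit the caller's field -- the (value > MAX) ? MAX : value idiom. */
static dvi_diskblockcount_t clamp_blockcount(uint64_t value)
{
    const uint64_t max = (dvi_diskblockcount_t)~(dvi_diskblockcount_t)0;
    return (dvi_diskblockcount_t)(value > max ? max : value);
}
```

Note that the truncating alternative would return 0 for a 0x8 0000 0000-block volume, which is the "far more damaging" case discussed below in the thread.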

- Bob Gezelter, http://www.rlgsc.com
c***@gmail.com
2015-01-15 16:32:25 UTC
Permalink
Post by Bob Gezelter
I would advocate for an approach which is both deliberate and embracive. First, qwuadwords responses are probably the right long-term destination. However, I would take the following steps to ease the transition and preserve presently operating images (including shared middleware and emulated/translated images).
- define a datatype for "disk block count" in the user system service header files (e.g., DVI$DiskBlockCountReturn). Presently, that would equate to 32 bits, later being defined to 64 bits. This would allow savy coders to write code that compiled correctly on BOTH old and future platforms.
c***@gmail.com
2015-01-15 16:53:14 UTC
Permalink
Post by Bob Gezelter
I would advocate for an approach which is both deliberate and embracive. First, qwuadwords responses are probably the right long-term destination. However, I would take the following steps to ease the transition and preserve presently operating images (including shared middleware and emulated/translated images).
I would like VMS Engineering to consider honoring the length field of a $GETDVI request. $GETDVI uses an item list 3, which includes a length field for the user's buffer. My old doc set claims that if the length field is large the datum will be truncated. I think that's a typo, and I expect if the length is too short for the datum the datum will be truncated. If $GETDVI could honor lengths larger than 4 then a lot of existing code will just continue to run but not handle large disks, since existing code is probably using 4 and a 4 byte integer, and existing correct code at least gives the correct length for the user's buffer. New code could supply 8, or whatever is appropriate for the user's buffer, and get larger results.

That doesn't solve DCL's problem since it doesn't have larger integers, and it won't help running old code with large disks, but I think it's a better starting place than new item codes.
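
The proposal amounts to the following item-list-3 behavior (struct and field names are illustrative, not the real $GETDVI definitions):

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the proposed semantics: copy at most buflen bytes of the
 * datum and report how many bytes were delivered, so a caller passing
 * a length of 8 gets the quadword while a legacy caller passing 4
 * keeps today's behavior for values that fit. */
struct ile3 {                 /* shape of an item_list_3 entry */
    uint16_t  buflen;
    uint16_t  itemcode;
    void     *bufaddr;
    uint16_t *retlen;
};

static void fill_item(const struct ile3 *it, const void *datum,
                      uint16_t datlen)
{
    uint16_t n = (it->buflen < datlen) ? it->buflen : datlen;
    memcpy(it->bufaddr, datum, n);       /* truncate to the buffer */
    if (it->retlen != 0)
        *it->retlen = n;                 /* bytes actually delivered */
}
```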
Post by Bob Gezelter
- define a datatype for "disk block count" in the user system service header files (e.g., DVI$DiskBlockCountReturn). Presently, that would equate to 32 bits, later being defined to 64 bits. This would allow savy coders to write code that compiled correctly on BOTH old and future platforms.
OK, so in addition to PAGESLETS vs. PAGES we now get BLOCKLETS vs. BLOCKS? Works, and has precedent. But given the workings of Files-11, maybe disk block cluster size is the correct approach, and an item code for the number of clusters should be added (DVI$_CLUSTER already returns the cluster size).
Post by Bob Gezelter
- adopt (and DOCUMENT) the convention that if a value is too large, the maximum value representable in the field will be returned (e.g., LONG_MAX, ULONG_MAX, ..._). Coded and compiled optimized correctly (e.g., result = (value > xxx_MAX) xxx_MAX : value) this should require at most a handful of extra instructions in the code path.
I very much disagree with that approach. If VMS tells me something is FFFFFFFF blocks, I want to be able to believe it.
Post by Bob Gezelter
- use disjoint itemcodes for the returning of 64-bit returns, preserving the old item codes for back compatibility.
IMHO, allowing the user's code to ask for disk size in bytes, KiB, MiB, GiB, TiB, ... may also be a more flexible and workable approach with longer lasting expandability.
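
That unit-scaled idea might look like this (a hypothetical helper, not an existing itemcode; 512-byte blocks assumed for the example):

```c
#include <stdint.h>

/* Report a volume's size in a caller-chosen unit so even very large
 * volumes produce a value that fits a longword. */
static uint32_t volume_size_in(uint64_t blocks, uint64_t unit_bytes)
{
    return (uint32_t)((blocks * 512u) / unit_bytes);
}
```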
Bob Gezelter
2015-01-15 17:18:40 UTC
Permalink
Post by c***@gmail.com
...
I very much disagree with that approach. If VMS tells me something is FFFFFFFF blocks, I want to be able to believe it.
My point is that 0x7FFFFFFF is the largest signed number in the field (0xFFFFFFFF if all code is verified to use the number unsigned). VMS would then be guaranteeing that AT LEAST that amount is available.

You can believe it in the context that it is true, albeit potentially incomplete. For many programs, particularly those that are binary images using the existing API, that is likely enough.

In any event, maxing the field is a safer bet than truncating the result. Maxing returns 0x7FFF FFFF in a longword. Truncating 0x80 0000 0000 yields 0x0, which is far more damaging.

Maxing is not a cure-all, it is merely a more constructive response to back compatibility.

- Bob Gezelter, http://www.rlgsc.com
Stephen Hoffman
2015-01-15 18:14:15 UTC
Permalink
Post by Bob Gezelter
Post by c***@gmail.com
...
I very much disagree with that approach. If VMS tells me something is
FFFFFFFF blocks, I want to be able to believe it.
My point is that 0x7FFFFFFF is the largest unsigned number in the field
(0xFFFFFFFF if all code is verified to use the number unsigned).
2 TiB is the V8.4 limit, and the VMS code was tested for unsigned,
modulo the DCL handling.
Post by Bob Gezelter
Maxing is not a cure-all, it is merely a more constructive response to back compatibility.
Maxing out the value has been done in the lock manager, but it's an
expedient hack, and such hacks almost inevitably cause problems later.
In the case of the volume size, quite possibly because somebody
eventually encounters a disk volume of that size and runs into weird
behavior, or because there's now a disjoint range of values that makes
for ugly and arcane and failure-prone application code (there's
effectively now a third place to look for a return status value from
$getdvi[w]), or because some unwary programmer somewhere messed up the
handling of the magic value when they added support for 64-bit support
(as C signed and unsigned integer promotion rules have messed up more
than a few folks, after all).

If the value doesn't fit, return an error via the IOSB. Let the
application programmer sort out the error, or remove the volume that's
larger than 2 TiB.
--
Pure Personal Opinion | HoffmanLabs LLC
c***@gmail.com
2015-01-15 18:52:48 UTC
Permalink
Post by Bob Gezelter
My point is that 0x7FFFFFFF is the largest unsigned number in the field (0xFFFFFFFF if all code is verified to use the number unsigned). VMS would then be guaranteeing that AT LEAST that amount is available.
Yes, of course. But it is also a valid correct answer, indistinguishable from "and maybe more".
If a volume did have exactly FFFFFFFE + 1 blocks, there would be no way for the software to know.
Post by Bob Gezelter
You can believe it in the context that it is true, albeit potentially incomplete. For many programs, particularly those that are binary images using the existing API, that is likely enough.
I write software, not magic. Either the answer is right or it is wrong.
Post by Bob Gezelter
In any event, maxing the field is a safer bet than truncating the result. Maxing returns 0x7FFF FFFF in a longword. Truncating 0x80 0000 0000 yields 0x0, which is far more damaging.
7FFFFFFF is only the maximum if you are stuck with signed integers. The VMS executive should not
be making up for limitations of a chosen implementation language. Folks working in languages
that don't have unsigned are already used to the work needed to compensate for the language
limitation. The executive should not cater to the needs of DCL, Fortran, ..., when the user could
choose to work in C, Macro-32, ..., or any other language that readily deals with unsigned.

Although I think I found a typo, truncating the result is consistent with the documented behavior.
I feel strongly that correcting the limitations of the existing behavior should not result in "maybe"
answers that imply my code can't be sure.
Bob Gezelter
2015-01-15 19:52:44 UTC
Permalink
Post by c***@gmail.com
Post by Bob Gezelter
My point is that 0x7FFFFFFF is the largest unsigned number in the field (0xFFFFFFFF if all code is verified to use the number unsigned). VMS would then be guaranteeing that AT LEAST that amount is available.
Yes, of course. But it is also a valid correct answer, indistinguishable from "and maybe more".
If a volume did have exactly FFFFFFFE + 1 blocks, there would be no way for the software to know.
Post by Bob Gezelter
You can believe it in the context that it is true, albeit potentially incomplete. For many programs, particularly those that are binary images using the existing API, that is likely enough.
I write software, not magic. Either the answer is right or it is wrong.
Post by Bob Gezelter
In any event, maxing the field is a safer bet than truncating the result. Maxing returns 0x7FFF FFFF in a longword. Truncating 0x80 0000 0000 yields 0x0, which is far more damaging.
7FFFFFFF is only the maximum if you are stuck with signed integers. The VMS executive should not
be making up for limitations of a chosen implementation language. Folks working in languages
that don't have unsigned are already used to the work needed to compensate for the language
limitation. The executive should not cater to the needs of DCL, Fortran, ..., when the user could
choose to work in C, Macro-32, ..., or any other language that readily deals with unsigned.
Although I think I found a typo, truncating the result is consistent with the documented behavior.
I feel strongly that correcting the limitations of the existing behavior should not result in "maybe"
answers that imply my code can't be sure.
My comment about 0x7fff ffff is with regard to Hoff's comment about the work that was done to fix the signed/unsigned presumption. Earlier versions of OpenVMS were less fastidious in this regard, and my concern is breaking presently running code.

- Bob Gezelter, http://www.rlgsc.com
m***@gmail.com
2015-01-15 21:10:56 UTC
Permalink
Post by Bob Gezelter
Post by c***@gmail.com
Post by Bob Gezelter
My point is that 0x7FFFFFFF is the largest unsigned number in the field (0xFFFFFFFF if all code is verified to use the number unsigned). VMS would then be guaranteeing that AT LEAST that amount is available.
Yes, of course. But it is also a valid correct answer, indistinguishable from "and maybe more".
If a volume did have exactly FFFFFFFE + 1 blocks, there would be no way for the software to know.
Post by Bob Gezelter
You can believe it in the context that it is true, albeit potentially incomplete. For many programs, particularly those that are binary images using the existing API, that is likely enough.
I write software, not magic. Either the answer is right or it is wrong.
Post by Bob Gezelter
In any event, maxing the field is a safer bet than truncating the result. Maxing returns 0x7FFF FFFF in a longword. Truncating 0x80 0000 0000 yields 0x0, which is far more damaging.
7FFFFFFF is only the maximum if you are stuck with signed integers. The VMS executive should not
be making up for limitations of a chosen implementation language. Folks working in languages
that don't have unsigned are already used to the work needed to compensate for the language
limitation. The executive should not cater to the needs of DCL, Fortran, ..., when the user could
choose to work in C, Macro-32, ..., or any other language that readily deals with unsigned.
Although I think I found a typo, truncating the result is consistent with the documented behavior.
I feel strongly that correcting the limitations of the existing behavior should not result in "maybe"
answers that imply my code can't be sure.
My comment about 0x7fff ffff is with regards to Hoff's comment about the work that was done to fix the signed/unsigned presumption. Earlier versions of OpenVMS were less fastidious in this regard, and my concern is breaking presently running code.
- Bob Gezelter, http://www.rlgsc.com
One option would be to create a new set of prefix codes and routines so that people could move away from $GETDVI and DVI$_* codes and move to 64-bit alternative codes and routines if they wished to (e.g., new $GETDVI64 and DVI64$_*).

Users should of course be warned of the limitation of staying with the old codes and routines and be encouraged to move to the new.

Tinkering with the existing 32-bit stuff is likely to cause problems, because the calling routines have no provision for these newly introduced "enhancements" (e.g., a new error code for buffer overflow, or a user-defined input flag that says to report sizes in MB).
Michael Moroney
2015-01-16 00:28:51 UTC
Permalink
Post by c***@gmail.com
I would like VMS Engineering to consider honoring the length field of a
$GETDVI request. $GETDVI uses an item list 3, which includes a length
field for the user's buffer. My old doc set claims that if the length
field is large the datum will be truncated. I think that's a typo, and I
expect if the length is too short for the datum the datum will be
truncated. If $GETDVI could honor lengths larger than 4 then a lot of
existing code will just continue to run but not handle large disks, since
existing code is probably using 4 and a 4 byte integer, and existing
correct code at least gives the correct length for the user's buffer.
New code could supply 8, or whatever is appropriate for the user's
buffer, and get larger results.
I did some experiments with V8.4 and I have some facts for this discussion.

First, SDA reveals disk drives have a UCB field UCB$Q_MAXBLOCK_64
(overlain by UCB$L_MAXBLOCK), showing there has been _some_ (possibly
minimal) work towards eventual larger drive support.

Second, $GETDVI will only need minimal work. I ran a program that does a
GETDVI request for DVI$_MAXBLOCK, but with a 20 byte return buffer and
asking for a returned byte count. It works correctly, giving a return
bytecount of 4 and zeroing the other 16 bytes of my buffer. It will need
to be taught to return all 8 bytes of UCB$Q_MAXBLOCK_64 and return a
bytecount of 8 (assuming the user's buffer is large enough). $GETDVI will
truncate fields when the user field isn't large enough. If the user buffer
is only 4 bytes (unsigned int), is this the correct behavior (giving an
incorrect blockcount), or should it return the maximum it can, %xFFFFFFFF?

I also created and accessed drives just under and somewhat over the 2.1TB
limit. VMS returns a correct (unsigned) value, $ SHOW DEVICE Dxxx:
displays the correct value, and DCL's F$GETDVI returns a negative value.
The >2.1TB drive will not mount at all, returning a DEVOFFLINE error.
It shows 0 for a blocksize.
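
The UCB overlay Michael found behaves like this union (an illustrative declaration; the real layout is in the exec's UCB definition):

```c
#include <stdint.h>

/* UCB$L_MAXBLOCK overlays the low half of UCB$Q_MAXBLOCK_64.  On the
 * little-endian VMS platforms a longword reader therefore sees the
 * low-order 32 bits, so legacy code keeps working until the count no
 * longer fits a longword. */
union maxblock {
    uint64_t ucb_q_maxblock_64;
    uint32_t ucb_l_maxblock;     /* low-order longword */
};
```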
Post by c***@gmail.com
Post by Bob Gezelter
- use disjoint itemcodes for the returning of 64-bit returns, preserving
the old item codes for back compatibility.
Not necessary. A properly taught GETDVI will do the right thing.
Higher layers such as DCL will need more work.
Post by c***@gmail.com
IMHO, allowing the user's code to ask for disk size in bytes, KiB, MiB, GiB,
TiB, ... may also be a more flexible and workable approach with longer
lasting expandability.
David Froble
2015-01-15 21:47:37 UTC
Permalink
Post by Rich Jordan
Post by David Froble
Maybe I'm painting with a "too broad" brush, but, today memory is cheap
and plentiful. Why not make anything that might possibly have a large
value a quadword. Yeah, lots of work up front, but perhaps not much
more work to do 50-100 (or whatever) than just a few.
Sure. Great idea. But it's more than a little work. For everybody.
The client applications will have to be reviewed and modified to use
the new itemcodes, or to expect the newly- and optionally-extended
return fields from the existing itemcodes here. While existing and
unmodified applications might not fail, the applications also won't
work as expected when presented with an 6 TiB disk spindle or a 32 TiB
RAID6 volume, or whatever other fields were extended to quadwords.
These existing and unmodified applications will likely either get an
error code they might not expect, or they'll get and process what is
bogus data, depending on what approach VSI might decide to do here, if
VSI decides to promote fields. More than a few existing OpenVMS
applications haven't made it to ODS-5 support after all, so there'll be
more than a few that won't get updated for quadword returns...
Probably fodder for a major release (V9, V10, etc), given the amount of
effort and change and churn that would be involved here, too.
--
Pure Personal Opinion | HoffmanLabs LLC
David,
I second Hoff's comment, with an add-on.
Extension to quadwords is probably a good long-term goal, however
care is needed. As Hoff noted, it is not just a base software issue.
The impact of such a change would ricochet through customer's
existing code base with side effects ranging from annoying
(mis-formatted output) to catastrophic (production processes stop
running).
Indeed! Which is why I raised the "concept" of having larger data, or,
having things work as in the past.
Post by Rich Jordan
I would advocate for an approach which is both deliberate and
embracive. First, qwuadwords responses are probably the right
long-term destination. However, I would take the following steps to
ease the transition and preserve presently operating images
(including shared middleware and emulated/translated images).
- define a datatype for "disk block count" in the user system service
header files (e.g., DVI$DiskBlockCountReturn). Presently, that would
equate to 32 bits, later being defined to 64 bits. This would allow
savy coders to write code that compiled correctly on BOTH old and
future platforms.
- adopt (and DOCUMENT) the convention that if a value is too large,
the maximum value representable in the field will be returned (e.g.,
LONG_MAX, ULONG_MAX, ..._). Coded and compiled optimized correctly
(e.g., result = (value > xxx_MAX) xxx_MAX : value) this should
require at most a handful of extra instructions in the code path.
- use disjoint itemcodes for the returning of 64-bit returns,
preserving the old item codes for back compatibility.
The above is probably incomplete, but it summarizes an approach which
will benefit all minimizing disruption)
- Bob Gezelter, http://www.rlgsc.com
The problem with this entire discussion is the idea that disk block
count is the only issue. I submit that there can be other data that
would also benefit from larger values. I don't try to list any, as that
again would be restrictive.

While I'm not going to attempt to suggest a solution, not my job, I feel
that whoever might take a look at the issues might be able to design a
system where the code used for one piece of data could be used for many
pieces of data, thus getting many benefits for not much more work than
just for disk block count. The code for doing longwords vs quadwords
might also lend itself to re-use.

If you're going to do something, don't do it half assed ....
k***@verizon.net
2015-01-15 12:55:25 UTC
Permalink
Post by Stephen Hoffman
Post by k***@verizon.net
Try a DVI$_MAXBLOCK on a drive over 1.0 TB capacity and you get a
negative number. DVI$_FREEBLOCKS has the same issue.
FWIW, whatever programming language you're working with here (DCL?) is
apparently defaulting to a 32-bit, signed longword, integer
representation.
Yes, in this case I was simply referring to DCL. I know it should be unsigned 32-bit; I was just trying to point out that it would be handy to have areas like this in VMS looked at. Or perhaps a new DVI$_xxx item code that returns these values in, say, GB or some other unit that scales into DCL's signed 32-bit values.
Jilly
2014-10-18 17:28:30 UTC
Permalink
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
PERCENT_USED or PERCENT_FREE for disks so as to avoid DCL math
Dirk Munk
2014-10-18 19:25:09 UTC
Permalink
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that
query again as a proud member of the VMS Software, Inc engineering staff!
I have no suggestions at the moment, but congratulations on your new
job, and the best wishes for the whole VSI team and VMS.
David Froble
2014-10-18 21:12:16 UTC
Permalink
Post by Robert A. Brooks
Don't forget that our releases will be for IA64 only (until we release
an x86 version). I'll offer any enhancements back to HP for inclusion
in the Alpha
source tree.
Ok, since you mentioned it.

Do you know, and if so can you say, what is going to happen to Alpha VMS
and VAX VMS? Will VSI have access to the products? Will they be
allowed to provide new releases of them, should they decide to do so?

I'm pretty sure HP isn't going to do anything with either, I'm just
wondering whether they are killing them off?
Jan-Erik Soderholm
2014-10-18 22:30:44 UTC
Permalink
Post by David Froble
Post by Robert A. Brooks
Don't forget that our releases will be for IA64 only (until we release an
x86 version). I'll offer any enhancements back to HP for inclusion in
the Alpha
source tree.
Ok, since you mentioned it.
Do you know, and if so can you say, what is going to happen to Alpha VMS
and VAX VMS? Will VSI have access to the products? Will they be allowed
to provide new releases of them, should they decide to do so?
I'm pretty sure HP isn't going to do anything with either, I'm just
wondering whether they are killing them off?
I think I read that when VAX and Alpha are de-supported from
HP according to the current *HP* roadmap, VSI will be able to
take up those platforms. Until then, they are HP's babies...

Jan-Erik.
David Froble
2014-10-19 01:05:08 UTC
Permalink
Post by Jan-Erik Soderholm
Post by David Froble
Post by Robert A. Brooks
Don't forget that our releases will be for IA64 only (until we release an
x86 version). I'll offer any enhancements back to HP for inclusion in
the Alpha
source tree.
Ok, since you mentioned it.
Do you know, and if so can you say, what is going to happen to Alpha VMS
and VAX VMS? Will VSI have access to the products? Will they be allowed
to provide new releases of them, should they decide to do so?
I'm pretty sure HP isn't going to do anything with either, I'm just
wondering whether they are killing them off?
I think I read that when VAX and Alpha are de-supported from
HP according to the current *HP* roadmap, VSI will be able to
take up those platforms. Until then, they are HP's babies...
Jan-Erik.
Thanks.

And we can speculate as to why HP has retained them. There will be no
work done, just milking ....

I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.

I'd think the more revenue streams that can be supported would be good
for VSI.
Phillip Helbig---undress to reply
2014-10-19 08:12:39 UTC
Permalink
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.
I'd think the more revenue streams that can be supported would be good
for VSI.
But why is someone running an emulator? If it is because of
hardware-specific stuff, he probably doesn't need any new development.
If it is to save money, he can save even more by running directly on
X86.
Dirk Munk
2014-10-19 09:01:40 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.
I'd think the more revenue streams that can be supported would be good
for VSI.
But why is someone running an emulator?
Many good reasons. Let's assume you have one or several ES40s. The
hardware support costs are enormous compared with a standard x86. In
fact you can emulate several ES40s on one moderate x86 server. You can
have a spare x86 on standby, at almost no cost. Then there are the power
costs: in three or four years' time an x86 server will cost you about as
much in electricity as you paid to buy it; can you imagine the power costs
for an ES40? Faster Ethernet controllers, 1 Gb or 10 Gb, are no problem on
x86; try to get them for an ES40. More memory for your application, easy and
cheap for x86. Foot print in the data centre. I can go on with these
examples.
Post by Phillip Helbig---undress to reply
If it is because of
hardware-specific stuff, he probably doesn't need any new development.
If it is to save money, he can save even more by running directly on
X86.
Maybe, but it will require him to port all his applications. And if he
bought applications, you can just hope his supplier will also port his
software. I agree porting is a far better way than emulating, but it
must be possible.
Phillip Helbig---undress to reply
2014-10-19 11:16:18 UTC
Permalink
Post by Dirk Munk
Post by Phillip Helbig---undress to reply
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.
I'd think the more revenue streams that can be supported would be good
for VSI.
But why is someone running an emulator?
Many good reasons. Let's assume you have one or several ES40s. The
hardware support costs are enormous compared with a standard x86. In
fact you can emulate several ES40s on one moderate x86 server. You can
have a spare x86 on standby, at almost no costs. The power costs, in
three or four years' time an x86 server will cost you about as much on
electricity as you paid to buy it, can you imagine the power costs for
an ES40? Faster Ethernet controllers, 1Gb or 10Gb is no problem on x86,
try to get them for an ES40. More memory for your application, easy and
cheap for x86. Foot print in the data centre. I can go on with these
examples.
Post by Phillip Helbig---undress to reply
If it is because of
hardware-specific stuff, he probably doesn't need any new development.
If it is to save money, he can save even more by running directly on
X86.
Maybe, but it will require him to port all his applications. And if he
bought applications, you can just hope his supplier will also port his
software. I agree porting is a far better way than emulating, but it
must be possible.
OK. Again, if he has ALPHA- or VAX-specific code, he definitely does not
want a new version of VMS etc since he has to run his applications which
have been certified with some old version. If not, he can port to
native X86 and save even more money. So, in neither case will an
emulator customer need ALPHA or VAX development.
Jan-Erik Soderholm
2014-10-19 11:31:25 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Dirk Munk
Post by Phillip Helbig---undress to reply
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.
I'd think the more revenue streams that can be supported would be good
for VSI.
But why is someone running an emulator?
Many good reasons. Let's assume you have one or several ES40s. The
hardware support costs are enormous compared with a standard x86. In
fact you can emulate several ES40s on one moderate x86 server. You can
have a spare x86 on standby, at almost no costs. The power costs, in
three or four years time an x86 server will cost you abou as much on
electricity as you paid to buy it, can you imagine the power costs for
an ES40? Faster Ethernet controllers, 1Gb or 10Gb is no problem on x86,
try to get them for an ES40. More memory for your application, easy and
cheap for x86. Foot print in the data centre. I can go on with these
examples.
Post by Phillip Helbig---undress to reply
If it is because of
hardware-specific stuff, he probably doesn't need any new development.
If it is to save money, he can save even more by running directly on
X86.
Maybe, but it will require him to port all his applications. And if he
bought applications, you can just hope his supplier will also port his
software. I agree porting is a far better way than emulating, but it
must be possible.
OK. Again, if he has ALPHA- or VAX-specific code, he definitely does not
want a new version of VMS etc since he has to run his applications which
have been certified with some old version. If not, he can port to
native X86 and save even more money. So, in neither case will an
emulator customer need ALPHA or VAX development.
It's not a one or the other thing. You can have *some* code
that is stuck on (say) Alpha but still want to run some more
modern/new things also. Such is our case. We have *some*
things not available on IA64 but still use some "modern"
things like webserver and WS services.

I agree that we probably will see no new VAX development.
The main efforts for *new* stuff will be x86.
Maybe some backporting to IA64 and Alpha.

And no, a web-browser is not at the top of my list. :-)
There are way better platforms for surfing today...

Jan-Erik.
Phillip Helbig---undress to reply
2014-10-19 12:54:15 UTC
Permalink
Post by Jan-Erik Soderholm
And no, a web-browser is not at the top of my list. :-)
There are way better platforms for surfing today...
Yes, TODAY. However, VMS now has a FUTURE. Consider how absurd the
situation is: you are running VMS on X86, and have to have ANOTHER
computer running on X86 in order to surf the web.
Jan-Erik Soderholm
2014-10-19 13:13:28 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
And no, a web-browser is not at the top of my list. :-)
There are way better platforms for surfing today...
Yes, TODAY. However, VMS now has a FUTURE. Consider how absurd the
situation is: you are running VMS on X86, and have to have ANOTHER
computer running on X86 in order to surf the web.
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.

The *hardware* they use is completely irrelevant!

And if VMS has a future (let's hope so) it is as a server.
VMS will *NEVER* have "office" features comparable to
today's common laptop/desktop environments!

So no, I see no professional use of a browser on VMS.
V***@SendSpamHere.ORG
2014-10-19 13:35:11 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
And no, a web-browser is not at the top of my list. :-)
There are way better platforms for surfing today...
Yes, TODAY. However, VMS now has a FUTURE. Consider how absurd the
situation is: you are running VMS on X86, and have to have ANOTHER
computer running on X86 in order to surf the web.
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.
Perfect? LOL!
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
David Froble
2014-10-19 17:13:22 UTC
Permalink
Post by Jan-Erik Soderholm
And if VMS has a future (let's hope so) it is as a server.
VMS will *NEVER* have "office" features comparable to
today's common laptop/desktop environments!
And perhaps we can look at why this will be so.

The smart phones and tablets are taking over. The desktops, while
always being in use, will suffer what desktop use of VMS has suffered.

Yes, some dinosaurs such as myself will not embrace the new and smaller
products.  But us old fossils are not where the volume and money is at.

I've always considered VMS as the "workhorse" that does the "real work",
and have with a few exceptions not attempted to use it as a GUI desktop.
Phillip Helbig---undress to reply
2014-10-19 17:53:39 UTC
Permalink
Post by Jan-Erik Soderholm
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.
OK if you have a Windows laptop anyway. But some people don't (VAXman
comes to mind) and for them it would be silly to have an extra computer
just to browse the web.
Post by Jan-Erik Soderholm
And if VMS has a future (let's hope so) it is as a server.
VMS will *NEVER* have "office" features comparable to
today's common laptop/desktop environments!
I think one has to distinguish between "office" applications such as MS
WORD, Excel, Powerpoint (or Apple Pages, Numbers and Keynote) which are
just one way of doing things (personally, I write documents and
presentations with LaTeX), i.e. just one possible tool for the job, on
the one hand, and, on the other hand, a method of accessing stuff from
elsewhere in a more or less standard format. These days, HTML and PDF
are probably the most common such formats. GhostScript already provides
reasonable PDF-viewing capabilities on VMS. Mozilla exists for VMS and,
while it is not up-to-date, it shows that it can be done. It shouldn't
be THAT hard to get VMS into the open-source loop.

Of course, different strokes for different folks. If you already have a
browser somewhere else, and use it essentially for browsing, then maybe
you don't need one on VMS. But if one downloads stuff to and uploads
stuff from VMS, then a web browser on VMS is useful.
Stephen Hoffman
2014-10-19 18:17:46 UTC
Permalink
Post by Phillip Helbig---undress to reply
I think one has to distinguish between "office" applications such as MS
WORD, Excel, Powerpoint (or Apple Pages, Numbers and Keynote) which are
just one way of doing things
In many business environments, most everything uses the Microsoft
Office formats, and uses the Exchange Server and SharePoint servers and
related features.

This then segues into discussions of the effort involved with getting
to competitive features in the desktop market. Which is no small
effort.

There are other folks that aren't using Microsoft products, and that
are using RHEL, Ubuntu LTS or such, or maybe OS X Server for smaller
environments, or some other technologies, or that have moved to hosted
services.
Post by Phillip Helbig---undress to reply
(personally, I write documents and presentations with LaTeX), i.e. just
one possible tool for the job, on the one hand, and, on the other hand,
a method of accessing stuff from elsewhere in a more or less standard
format. These days, HTML and PDF are probably the most common such
formats.
Sure. Most folks also want audio and video support, given the
increasing use of these formats for training, presentations and
documentation.

On VMS and when just hacking around and transferring files around, Lynx
and curl can be sufficient.
Post by Phillip Helbig---undress to reply
GhostScript already provides reasonable PDF-viewing capabilities on
VMS. Mozilla exists for VMS and, while it is not up-to-date, it shows
that it can be done.
It shouldn't be THAT hard to get VMS into the open-source loop.
Other than porting and supporting the code and complying with the GPL
to compete in a market that's already saturated with far superior
products, sure.
Post by Phillip Helbig---undress to reply
Of course, different strokes for different folks.
Most folks would find your preferred approach to be far more effort,
and far less desirable. For TeX and LaTeX on various platforms for
instance, the MacTeX <http://tug.org/mactex/> distribution and
equivalents will usually work nicely, and with little need for the
command line tools or related tasks.
Post by Phillip Helbig---undress to reply
If you already have a browser somewhere else, and use it essentially
for browsing, then maybe you don't need one on VMS. But if one
downloads stuff to and uploads stuff from VMS, then a web browser on
VMS is useful.
It's trivial to copy the files around when that's necessary, and
drag-and-drop copies to and from VMS using CIFS are entirely feasible.
Probably also into WebDAV, but I've not tried that path. Using sftp or
scp with keys, if operating at the command line — the command line
being something that many folks don't do, or prefer to avoid, of course
— also works well.
--
Pure Personal Opinion | HoffmanLabs LLC
Jan-Erik Soderholm
2014-10-19 18:25:01 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.
OK if you have a Windows laptop anyway. But some people don't (VAXman
comes to mind) and for them it would be silly to have an extra computer
just to browse the web.
What difference does one guy make (OK, two then :-) ) if the world
(and the market/businesses) at large goes in another direction?

And in what direction do we want VSI to walk? The one where they
can expect to find real customers or the one where they probably
will walk alone?
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
And if VMS has a future (let's hope so) it is as a server.
VMS will *NEVER* have "office" features comparable to
today's common laptop/desktop environments!
I think one has to distinguish between "office" applications such as MS
WORD, Excel, Powerpoint (or Apple Pages, Numbers and Keynote) which are
just one way of doing things (personally, I write documents and
presentations with LaTeX), i.e. just one possible tool for the job, on
the one hand, and, on the other hand, a method of accessing stuff from
elsewhere in a more or less standard format. These days, HTML and PDF
are probably the most common such formats. GhostScript already provides
reasonable PDF-viewing capabilities on VMS. Mozilla exists for VMS and,
while it is not up-to-date, it shows that it can be done. It shouldn't
be THAT hard to get VMS into the open-source loop.
Of course, different strokes for different folks. If you already have a
browser somewhere else, and use it essentially for browsing, then maybe
you don't need one on VMS. But if one downloads stuff to and uploads
stuff from VMS, then a web browser on VMS is useful.
Phillip Helbig---undress to reply
2014-10-19 18:06:12 UTC
Permalink
I've changed the subject of this thread, as per Bob's suggestion.
Post by David Froble
Post by Jan-Erik Soderholm
And if VMS has a future (let's hope so) it is as a server.
VMS will *NEVER* have "office" features comparable to
today's common laptop/desktop environments!
And perhaps we can look at why this will be so.
The smart phones and tablets are taking over. The desktops, while
always being in use, will suffer what desktop use of VMS has suffered.
I don't know. I recently read that tablet sales were down, or at least
not rising as fast as they once were. It doesn't look like laptops and
even desktop machines are on the way out. Check out the recent Apple
presentation of the new retina-display iMac, for example.
Post by David Froble
Yes, some dinosaurs such as myself will not embrace the new and smaller
products. But us old fossils are not where the volume and money is at.
I think most people writing code don't do so on a tablet. Yes, the
fraction of computer users writing code is negligible today, but in
absolute numbers is probably increasing.
Post by David Froble
I've always considered VMS as the "workhorse" that does the "real work",
and have with a few exceptions not attempted to use it as a GUI desktop.
Again, a false dichotomy. I've never used VMS as a "GUI desktop", not
even old things like DECwindows MAIL or whatever. However, I like a
nice monitor connected to a VMS machine so I can have lots of DECterms
in various CDE workspaces. A web browser is a nice addition to this.
V***@SendSpamHere.ORG
2014-10-19 18:48:21 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.
OK if you have a Windows laptop anyway. But some people don't (VAXman
comes to mind) and for them it would be silly to have an extra computer
just to browse the web.
Linux here and I've been using it for many years.  Firefox works, albeit
it's had memory leak issues forever with no hope of them ever being
fixed.  Nothing a reboot can't correct.

I've recently installed vtAlpha on one of my laptops.  It gives me OpenVMS
and DECwindows.  I'm trying to figure out if it's possible to launch the
Firefox browser in its Linux "chassis" such that it could be displayed in
DECwindows.

I also have a MacBook Pro with which I can use Safari and I have recently
acquired an Android based tablet (Firefox and Chrome on it).

Having a functional browser on OpenVMS would be nice but not really needed
for my purposes since I have other options.
Post by Phillip Helbig---undress to reply
I think one has to distinguish between "office" applications such as MS
WORD, Excel, Powerpoint (or Apple Pages, Numbers and Keynote) which are
just one way of doing things (personally, I write documents and
presentations with LaTeX), i.e. just one possible tool for the job, on
the one hand, and, on the other hand, a method of accessing stuff from
elsewhere in a more or less standard format. These days, HTML and PDF
are probably the most common such formats. GhostScript already provides
reasonable PDF-viewing capabilities on VMS. Mozilla exists for VMS and,
while it is not up-to-date, it shows that it can be done. It shouldn't
be THAT hard to get VMS into the open-source loop.
Of course, different strokes for different folks. If you already have a
browser somewhere else, and use it essentially for browsing, then maybe
you don't need one on VMS. But if one downloads stuff to and uploads
stuff from VMS, then a web browser on VMS is useful.
That could be useful for things like OpenVMS patch kits but only necessary
because HP encumbers access to patches with a web interface.  Some better
mechanism would certainly be welcomed before going whole hog into porting
some modern browser to the OpenVMS desktop.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Jan-Erik Soderholm
2014-10-19 19:39:05 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
I don't need a browser on my VMS server.
I have a perfect browser on my Windows laptop.
OK if you have a Windows laptop anyway. But some people don't (VAXman
comes to mind) and for them it would be silly to have an extra computer
just to browse the web.
Linux here and I've been using it for many years. Firefox works, albeit
it's had memory leak issues forever with no hope of them ever being
fixed. Nothing a reboot can't correct.
I've recently installed vtAlpha on one of my laptops. It gives me OpenVMS
and DECwindows. I'm trying to figure out if it's possible to launch the
Firefox browser in its Linux "chassis" such that it could be displayed in
DECwindows.
I also have a MacBook Pro with which I can use Safari and I have recently
acquired an Android based tablet (Firefox and Chrome on it).
Having a functional browser on OpenVMS would be nice but not really needed
for my purposes since I have other options.
Post by Phillip Helbig---undress to reply
I think one has to distinguish between "office" applications such as MS
WORD, Excel, Powerpoint (or Apple Pages, Numbers and Keynote) which are
just one way of doing things (personally, I write documents and
presentations with LaTeX), i.e. just one possible tool for the job, on
the one hand, and, on the other hand, a method of accessing stuff from
elsewhere in a more or less standard format. These days, HTML and PDF
are probably the most common such formats. GhostScript already provides
reasonable PDF-viewing capabilities on VMS. Mozilla exists for VMS and,
while it is not up-to-date, it shows that it can be done. It shouldn't
be THAT hard to get VMS into the open-source loop.
Of course, different strokes for different folks. If you already have a
browser somewhere else, and use it essentially for browsing, then maybe
you don't need one on VMS. But if one downloads stuff to and uploads
stuff from VMS, then a web browser on VMS is useful.
That could be useful for things like OpenVMS patch kits but only necessary
because HP encumbers access to patches with a web interface.
As long as the "web interface" provides clean links to the
files, a tool such as FETCH_HTTP does the job. And you can run
it in batch if you have long download times. But then, if you
do not get clean links, simply downloading to your lap/desktop
system and FTP'ing works just fine also. You do not have to
"surf" to the files using VMS...

This is hardly something you do every day.

Jan-Erik.
Post by V***@SendSpamHere.ORG
Some better
mechanism would certainly be welcomed before going whole hog into porting
some modern browser to the OpenVMS desktop.
Stephen Hoffman
2014-10-19 19:40:25 UTC
Permalink
Post by V***@SendSpamHere.ORG
But if one downloads stuff to and uploads stuff from VMS, then a web
browser on VMS is useful.
That could be useful for things like OpenVMS patch kits but only
necessary because HP encumbers access to patches with a web interface.
Some better mechanism would certainly be welcomed before going whole
hog into porting some modern browser to the OpenVMS desktop.
Ayup.

The VMS software update implementation and software patch notification
and distribution mechanisms are unnecessarily manual, confusing,
awkward and generally archaic.

Performing manual downloads from a VMS-based web browser created for
and introduced into this patch process is not progress.

It's just making the morass deeper and more intractable.

But then I'm in a charitable mood.
--
Pure Personal Opinion | HoffmanLabs LLC
JF Mezei
2014-10-19 21:57:16 UTC
Permalink
Post by Stephen Hoffman
The VMS software update implementation and software patch notification
and distribution mechanisms are unnecessarily manual, confusing,
awkward and generally archaic.
Not so fast !

Consider a mission-critical site. It certainly doesn't want updates to
be installed automatically and trigger an unplanned reboot or cause
software to fail due to a newly introduced incompatibility.

So very serious sites want to be able to deploy a patch on a test system
first and then carefully plan its deployment in production, node by node
on a cluster.

Having said this...

A DCL tool that scans your system and then reports to you what patches
are available for the installed software and then lets you download kits
within that tool would be a great improvement.

However, reporting what is installed on your system may be something
some customers are wary of (even if it is just the
product-install logs/database).
Stephen Hoffman
2014-10-19 22:28:48 UTC
Permalink
Post by JF Mezei
Having said this...
Thanks for the complete vote of confidence there, and thanks for the
DCL chuckle.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-10-20 18:50:04 UTC
Permalink
Post by JF Mezei
Post by Stephen Hoffman
The VMS software update implementation and software patch notification
and distribution mechanisms are unnecessarily manual, confusing,
awkward and generally archaic.
Not so fast !
Consider a mission-critical site. It certainly doesn't want updates to
be installed automatically and trigger an unplanned reboot or cause
software to fail due to a newly introduced incompatibility.
So very serious sites want to be able to deploy a patch on a test system
first and then carefully plan its deployment in production, node by node
on a cluster.
Having said this...
A DCL tool that scans your system and then reports to you what patches
are available for the installed software and then lets you download kits
within that tool would be a great improvement.
However, reporting what is installed on your system may be something
some customers are wary of (even if it is just the
product-install logs/database).
I'm with JF on this topic.  What's there now is poor, and methods
for a much better picture of the status of your VMS installation would
be a very good thing: what version, what patches are installed and
what they do, what patches are not installed and what they do.

Consider an entity that has spent much time and money validating some
application(s) including the version and patch status of VMS. Now
consider invalidating all that work by applying some patch to their
system. When you see them heading for you with a rope, tar, feathers,
and a rail, you better be able to run very fast.

My Microsoft systems, and browsers, and anything else I can find, with
the exception of the Avast security product, have all automatic updates
turned off. Nor do I install patches, unless there is a problem.
JF Mezei
2014-10-20 22:30:15 UTC
Permalink
Post by David Froble
I'm with JF on this topic.
You should see a doctor. Something obviously wrong with you :-)
Stephen Hoffman
2014-10-21 01:23:30 UTC
Permalink
What's there now is poor, and methods for a much better picture
of the status of your VMS installation would be a very good thing:
what version, what patches are installed and what they do, what
patches are not installed and what they do.
Basic problem is that the patches have had a history of breaking stuff,
and automatic patch installations can break stuff on somebody else's
schedule.

None of which I've suggested, and none of which are (currently)
something most customers would be willing to implement.

Basic issue here is y'all are so used to patches which break stuff, and
patches which are hard to use, and patches which have only gotten
harder to use and more user hostile in recent years. The less reliable
the patches are, the more cautious y'all are about applying them.
Which is an entirely reasonable defensive reaction. If the patches
are better, y'all become more willing to load them. It's a matter of
trust, of testing, and of experience.

But given the current turd of a patch-management and installation
system, improvements here are easy.

Improvements here are necessary, too, because the current process is
somewhere past absurd, and because going forward it's likely that
patches will become more frequent and there'll be a need to load them
more quickly, and because — if VSI succeeds — there'll be more VMS
systems around, and more systems and more patches either means more
work, or that better tools are needed.

Now getting patches to work more reliably, that'll take an investment
and an effort by HP and VSI, and it'll take more time and a longer
track record. HP's been doing quite good with their UPDATE patches in
recent years and the lack of regressions, but it'll still take time for
folks to start to be more willing to load these patches directly, or to
load these with less testing.

Getting these patches staged onto the server, getting notifications,
getting bugs and crashes uploaded (with permission) and preemptive
information on bugs back down to the hosts, getting PCSI to use a
database and to particularly be able to get arbitrary patches installed
onto or rolled back and removed reliably, and the rest — now that is
what I wrote about.

It's certainly rather easier to load a big ol' wad of DCL than it is to
look at the whole problem, the customer requirements, and the customer
expectations (and at why customers are doing certain things, such as
deferring patches), and to start working on incrementally fixing the
various problems. At some point, VSI will be
building a patch infrastructure. Hopefully they get this right, or at
least leave themselves with some room to better automate this whole
patch mess.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2014-10-21 02:39:04 UTC
Permalink
Post by Stephen Hoffman
At some point, VSI will be
building a patch infrastructure. Hopefully they get this right, or at
least leave themselves with some room to better automate this whole
patch mess.
When they do, having a look at Jim Duff's old patch syndication service
might be worth it:

<http://www.eight-cubed.com/patches/patches.shtml>

A decent XML schema defining what's in the patches could form the basis
of a whole range of human- and machine-readable formats.
Stephen Hoffman
2014-10-21 13:20:21 UTC
Permalink
Post by Craig A. Berry
Post by Stephen Hoffman
At some point, VSI will be
building a patch infrastructure. Hopefully they get this right, or at
least leave themselves with some room to better automate this whole
patch mess.
When they do, having a look at Jim Duff's old patch syndication service
<http://www.eight-cubed.com/patches/patches.shtml>
A decent XML schema defining what's in the patches could form the basis
of a whole range of human- and machine-readable formats.
Yes; replicating the current and clumsy web site isn't the best path
forward, even if VSI quite reasonably wants to maintain some control
over and limit access into their patches. An RSS-like XML feed would
seem to be one of the more reasonable approaches for subscribing to this
data, with REST to authenticate and access the patches, but I digress.
This'd also allow the patches to be available more quickly[1] even if
the patches aren't (yet) staged on the target server, or tied into the
subscription and login system that VSI is likely going to be
establishing.
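
A feed like that could be as plain as RSS itself. Here's a sketch of
what a client might consume; the schema, the kit name, and the
<category> field are all hypothetical illustrations, as no such VSI
feed exists:

```python
# Sketch of an RSS-like patch feed and a minimal client for it.  The schema,
# the kit name, and the <category> field are hypothetical; no such VSI feed
# exists.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Hypothetical VMS patch feed</title>
    <item>
      <title>VMS841H1I_UPDATE-V0100</title>
      <category>UPDATE</category>
      <description>Cumulative update kit</description>
    </item>
  </channel>
</rss>"""

def list_patches(feed_xml):
    """Return (kit name, category) pairs for every item in the feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("category"))
            for item in root.iter("item")]

print(list_patches(SAMPLE_FEED))
# A real client would fetch the feed over authenticated HTTPS (the REST
# part) and diff it against the locally installed patch inventory.
```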

There was rather more than just the patches-related download
processing[2] that'd be part of a more modern software-management
system, with the CLUE CRASH data probably being one of the more visible
aspects. I'd prefer to see that data uploaded[3] even without a
support contract, as the diagnostics can provide data around where VMS
is failing, as even hobbyist data can be applicable to commercial
support customers. There's all sorts of useful system performance
data[4] here for a vendor and their support organization[5].

#####
[1]attacks spread far more quickly now, which means that fixes must
also spread more quickly.
[2]patches only installed automatically with explicit opt-in.
[3]crash data uploaded only with permission; by explicit opt-in.
Crash data encrypted with public key of vendor.
[4]diagnostic data uploaded only by permission; by explicit opt-in.
[5]"hey, your server trends are heading toward saturation. Consider a
new and faster server?"
--
Pure Personal Opinion | HoffmanLabs LLC
RobertsonEricW
2014-10-21 13:12:37 UTC
Permalink
Post by Stephen Hoffman
While what's there now is poor, and methods for a much better picture
of the status of your VMS installation would be a very good thing.
What version, what patches are installed, and what they do, what
patches are not installed, and what they do.
Basic problem is that the patches have had a history of breaking stuff,
and automatic patch installations can break stuff on somebody else's
schedule.
Yes. But who says that patch management must be restricted to immediately downloading and installing all patches that become available? Given the current situation, it would be a vast improvement to develop a simple process that executes periodically to query a vendor-hosted Patch Inventory Service, downloads the (as yet undownloaded) patches onto the machine, and notifies the system manager that new patches are available for perusal. The system manager could then run a patch management utility at a time of his/her choosing to decide which patches to install (or simply to defer installation).
Post by Stephen Hoffman
None of which I've suggested, and none of which are (currently)
something most customers would be willing to implement.
Basic issue here is y'all are so used to patches which break stuff, and
patches which are hard to use, and patches which have only gotten
harder to use and more user hostile in recent years. The less reliable
the patches are, the more cautious y'all are about applying them.
Which is an entirely reasonable defensive reaction. If the patches
are better, y'all become more willing to load them. It's a matter of
trust, of testing, and of experience.
But given the current turd of a patch-management and installation
system, improvements here are easy.
I would not describe the current "system" in place as one of "patch management", simply because providing a heap of patch files (which the system manager must manually probe for and scan) does not exactly fit the mental image normally evoked by the word "management".
Post by Stephen Hoffman
Improvements here are necessary, too, because the current process is
somewhere past absurd, and because going forward it's likely that
patches will become more frequent and there'll be a need to load them
more quickly, and because -- if VSI succeeds -- there'll be more VMS
systems around, and more systems and more patches either means more
work, or that better tools are needed.
That is an understatement. It is more like twenty years past absurd!!!
Post by Stephen Hoffman
Now getting patches to work more reliably, that'll take an investment
and an effort by HP and VSI, and it'll take more time and a longer
track record. HP's been doing quite good with their UPDATE patches in
recent years and the lack of regressions, but it'll still take time for
folks to start to be more willing to load these patches directly, or to
load these with less testing.
There still seem to be some "disconnects" that I notice every once in a while with respect to "unavailable" patch dependencies when attempting to download sets of patches. This would indicate some "internal" chaos with the system (or lack thereof?) used to track, generate, and make available the patches.
Post by Stephen Hoffman
Getting these patches staged onto the server, getting notifications,
getting bugs and crashes uploaded (with permission) and preemptive
information on bugs back down to the hosts, getting PCSI to use a
database and to particularly be able to get arbitrary patches installed
onto or rolled back and removed reliably, and the rest -- now that is
what I wrote about.
It's certainly rather easier to load a big'ol wad of DCL, than it is to
look at the whole problem and the customer requirements and at the
customer expectations (and at why the customers are doing certain
things, such as deferring patches), and starting to work on
incrementally fixing the various problems. At some point, VSI will be
building a patch infrastructure. Hopefully they get this right, or at
least leave themselves with some room to better automate this whole
patch mess.
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Hoffman
2014-10-21 13:44:31 UTC
Permalink
Post by RobertsonEricW
Post by Stephen Hoffman
Basic problem is that the patches have had a history of breaking stuff,
and automatic patch installations can break stuff on somebody else's
schedule.
Yes. But who says that patch management must be restricted to
immediately downloading and installing all patches that become
available.
Given the current situation, it would be a vast improvement to develop
a simple process that executes periodically to query a vendor hosted
Patch Inventory Service that permits an OpenVMS system to Download the
(as yet uninstalled/undownloaded) patches onto the machine and notify
the system manager that new patches are available for perusal and
allowing the system manager to execute a patch management utility at a
time of his/her choosing to determine which patches to install (or
simply to defer installation.)
Patch[1] push notifications would be logical for high-priority updates
and eventually also for malware-related updates[2] or for certificate
revocation or the loading of new digital certificates[3], and polling
would work where the end-user didn't want to accept the push
notifications, or wanted to resynchronize or to verify a local cache of
patches, or to review what was current and available.

######
[1]Again: explicit opt-in. Not automatically notified. Not
automatically installed. etc.
[2]Yeah, everybody gets that treatment, if they're successful enough or
if they're a ripe-enough target.
[3]Yes, with security implications galore, and that ignoring the mess
that is CDSA.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2014-10-22 00:45:43 UTC
Permalink
-----Original Message-----
Stephen Hoffman
Sent: 21-Oct-14 9:45 AM
Subject: Re: [New Info-vax] Patch management
On Monday, October 20, 2014 9:23:30 PM UTC-4, Stephen Hoffman
Post by Stephen Hoffman
Basic problem is that the patches have had a history of breaking stuff,
and automatic patch installations can break stuff on somebody else's
schedule.
Yes. But who says that patch management must be restricted to
immediately downloading and installing all patches that become
available.
Given the current situation, it would be a vast improvement to develop
a simple process that executes periodically to query a vendor hosted
Patch Inventory Service that permits an OpenVMS system to Download the
(as yet uninstalled/undownloaded) patches onto the machine and notify
the system manager that new patches are available for perusal and
allowing the system manager to execute a patch management utility at a
time of his/her choosing to determine which patches to install (or
simply to defer installation.)
Patch[1] push notifications would be logical for high-priority updates
and eventually also for malware-related updates[2] or for certificate
revocation or the loading of new digital certificates[3], and polling
would work where the end-user didn't want to accept the push
notifications, or wanted to resynchronize or to verify a local cache of
patches, or to review what was current and available.
######
[1]Again: explicit opt-in. Not automatically notified. Not
automatically installed. etc.
[2]Yeah, everybody gets that treatment, if they're successful enough or
if they're a ripe-enough target.
[3]Yes, with security implications galore, and that ignoring the mess
that is CDSA.
Basic principles I would suggest:
- similar to Windows, end user has the option of selecting auto
updates, download+notify only, or notify only.
- host utility maintains a once-per-day (settable) sync so it knows what
patches are installed and which ones are missing
- security patches are freely available, but a contract is required for
other patches (wait, wait ..)
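
Those three options map onto a tiny policy table. A sketch in Python,
with invented names, of how a host-side agent might act on them:

```python
# Sketch of the three update policies listed above; all names are invented.
from enum import Enum

class UpdatePolicy(Enum):
    AUTO = "auto updates"
    DOWNLOAD_NOTIFY = "download + notify only"
    NOTIFY = "notify only"

def agent_action(policy, new_patches):
    """What the host-side sync (run once per day, say) should do."""
    if not new_patches:
        return "idle"
    return {UpdatePolicy.AUTO: "download and install",
            UpdatePolicy.DOWNLOAD_NOTIFY: "download, then notify",
            UpdatePolicy.NOTIFY: "notify only"}[policy]

print(agent_action(UpdatePolicy.DOWNLOAD_NOTIFY, ["VMS84I_UPDATE-V0900"]))
```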

At the risk of setting alarms off, there is also a benefit to having a
basic support contract in place (albeit not crazy high prices like HP)
which among other basic support features, would include access to
patches. Other potential support ideas could be added as well e.g.
access to restricted notesfile-like utilities with VSI participation

Let's remember that VSI is a software company. A big challenge for
VSI will be to rebuild the OpenVMS Customer DB.

HP lost the capability of keeping accurate Cust information a long
time ago .. started with Digital, so it is not only a reflection on HP.

By providing a low cost support contract option, VSI can rebuild its
Cust DB accuracy and get a bit of additional revenue at the same
time.

Now donning hard hat ..

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
David Froble
2014-10-22 06:08:09 UTC
Permalink
Post by Kerry Main
- similar to Windows, end user has the option of selecting auto
updates, download+notify only or notify only.
- host utility maintains once /per day (settable) sync so it knows what
patches are installed and which ones are missing
- security patches are freely available, but a contract is required for
other patches (wait, wait ..)
At the risk of setting alarms off, there is also a benefit to having a
basic support contract in place (albeit not crazy high prices like HP)
which among other basic support features, would include access to
patches. Other potential support ideas could be added as well e.g.
access to restricted notesfile-like utilities with VSI participation
Let's remember that VSI is a software company. A big challenge for
VSI will be to rebuild the OpenVMS Customer DB.
HP lost the capability of keeping accurate Cust information a long
time ago .. started with Digital, so it is not only a reflection on HP.
By providing a low cost support contract option, VSI can rebuild its
Cust DB accuracy and get a bit of additional revenue at the same
time.
Well, why be stupid about it?

Disk space is cheap, so the database could be quite large.

Limiting the database to those with contracts, or any other limit, would
be counter productive.

Invite anyone with any VMS system to register. Yeah, even the
hobbyists. There would be various registration statuses. VSI could
reserve to themselves the contract status of each registration, and
whatever other data might be appropriate.

One consideration would be some way to mark some registrations as
inactive or whatever, for when that registrant is no longer using VMS.
This actually would be difficult.

With the internet, there is no longer the cost of a stamp. Unless a
registration declines unsolicited information, VSI should be sending out
all kinds of information. Never know what might produce some revenue.

In another post I suggested that the existence of a patch should be made
known to all with a VMS system, or systems. What is the harm? The
upside is that people cannot purchase something if they are not aware it
exists.
Stephen Hoffman
2014-10-22 10:03:18 UTC
Permalink
Post by David Froble
Post by Kerry Main
- similar to Windows, end user has the option of selecting auto
updates, download+notify only or notify only.
- host utility maintains once /per day (settable) sync so it knows what
patches are installed and which ones are missing - security patches are
freely available, but a contract is required for
other patches (wait, wait ..)
At the risk of setting alarms off, there is also a benefit to having a
basic support contract in place (albeit not crazy high prices like HP)
which among other basic support features, would include access to
patches. Other potential support ideas could be added as well e.g.
access to restricted notesfile-like utilities with VSI participation
Let's remember that VSI is a software company. A big challenge for VSI
will be to rebuild the OpenVMS Customer DB.
HP lost the capability of keeping accurate Cust information a long time
ago .. started with Digital, so it is not only a reflection on HP.
By providing a low cost support contract option, VSI can rebuild its
Cust DB accuracy and get a bit of additional revenue at the same time.
Well, why be stupid about it?
Disk space is cheap, so the database could be quite large.
Limiting the database to those with contracts, or any other limit,
would be counter productive.
Welcome to part of what I've been commenting on, around VSI setting up
their infrastructure, and how much work is in front of them here.

In a modern web-facing organization, that customer record would be tied
to single-sign-on for the vendor's discussion forums, entitlements for
kit and patch downloads, entitlements for various sorts of support
access, for mailing list preferences, etc.

Using this login information as part of purchasing, users can then
either associate and use a credit card directly with the record, or can
specify the credit card at the time of purchase, and can then perform
online purchases for licenses and support and renewals, or potentially
à la carte patch purchases, and such.

Depending on how far the VSI folks want to go with this — and selling
software licenses and support contracts online is a whole lot cheaper
than having a dedicated sales force to do that for you — there's much
that can be considered, and much that can be implemented here.

Two relevant examples of these accounts: HP has their Passport login
<https://ovrd.external.hp.com/rd/passport-about?target=> and Apple has
their AppleID <https://appleid.apple.com>. Apple associates nearly
everything with the AppleID: support access, developer access, systems,
patches, purchases, pretty much everything.
Post by David Froble
Invite anyone with any VMS system to register. Yeah, even the
hobbyists. There would be various registration status's. VSI could
reserve to themselves the contract status of each registration, and
whatever other data that might be appropriate.
They'd be dumb not to.
Post by David Froble
One consideration would be some way to mark some registrations as
inactive or whatever, for when that registration is no longer using
VMS. This actually would be difficult.
Automatic checks (I'd go opt-out here, with install-time and documented
notices) and opt-in software updates — which I'd look to get at least
some basic form of into V8.4-1H1 — would provide data on
active hosts.
Post by David Froble
With the internet, there is no longer the cost of a stamp. Unless a
registration declines unsolicited information, VSI should be sending
out all kinds of information. Never know what might produce some
revenue.
Yep, and as good as a mailing list is information that's displayed to
the system manager around updates, either directly at login or by local
email notifications or (maybe for the more critical updates) an OPCOM.
Massively over-designing this $sndopr equivalent to REQUEST /REPLY
/TO=(CENTRAL,SECURITY,UPDATES) (that lattermost being a new class) and
— for those that didn't decide to opt-in for the automatic download
route — maybe REPLY /TO=314 DOWNLOAD for such.
Post by David Froble
In another post I suggested that the existence of a patch should be
made known to all with a VMS system, or systems. What is the harm?
The up side is, people cannot purchase something, if they are not aware
it exists.
I'd suggested that availability to HP, and they were seemingly
surprised. Beyond the use as marketing fodder, the patch release notes
are where the most recent VMS documentation is located. Those
materials are hard for anybody without support access to get at, HPSC
is not particularly easy even for folks with support access, and
there's no way to integrate those reference materials back into the
local VMS documentation and environment, as the following just isn't
very easy to deal with (and if one of the VMS engineering folks
doesn't know which kit contains the LIB$ stuff, what chance do the rest
of us have?):

$ direct sys$help:VMS84A*.RELEASE_NOTES/total/size

Directory SYS$COMMON:[SYSHLP]

Total of 52 files, 2333 blocks.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-10-22 05:56:26 UTC
Permalink
Post by RobertsonEricW
Post by Stephen Hoffman
While what's there now is poor, and methods for a much better picture
of the status of your VMS installation would be a very good thing.
What version, what patches are installed, and what they do, what
patches are not installed, and what they do.
Basic problem is that the patches have had a history of breaking stuff,
and automatic patch installations can break stuff on somebody else's
schedule.
Yes. But who says that patch management must be restricted to
immediately downloading and installing all patches that become
available. Given the current situation, it would be a vast
improvement to develop a simple process that executes periodically to
query a vendor hosted Patch Inventory Service that permits an OpenVMS
system to Download the (as yet uninstalled/undownloaded) patches onto
the machine and notify the system manager that new patches are
available for perusal and allowing the system manager to execute a
patch management utility at a time of his/her choosing to determine
which patches to install (or simply to defer installation.)
Let me try again.

What I envision might start with the capability, for a particular
version of VMS that I'm running, of getting a report of all patches for
that version of VMS, including the purpose of the patch, and a column
that shows for each patch whether I've got that patch installed.
Perhaps also a column showing whether I have the actual patch.

This would give the person responsible for the VMS system a quick and
easy to understand view of the patch status.

I'd expect patches that have not been obtained from VSI to still be
in the report. So, external information would be the patch description,
and the actual patch. Even if you don't have the patch, for whatever
reason, you'd know about its existence.

The next step might be to use this information, in a utility program, to
select patches for installation, or perhaps even removal. At a
user-selected time, the selected activity would be performed.
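
A report like that is little more than a join between the vendor's
inventory and the local installation state. A toy sketch, with
invented patch names; a real tool would merge a vendor inventory with
what PCSI reports as installed:

```python
# Toy sketch of the one-page patch status report described above.  Patch
# names and purposes are invented; a real tool would merge a vendor patch
# inventory with what PCSI reports as installed locally.
def patch_report(available, installed, downloaded):
    """available: {kit: purpose}; installed/downloaded: sets of kit names."""
    rows = []
    for kit in sorted(available):
        rows.append((kit, available[kit],
                     "yes" if kit in installed else "no",
                     "yes" if kit in downloaded else "no"))
    return rows

available = {"VMS84I_SYS-V0200": "SYS fixes",
             "VMS84I_UPDATE-V0900": "cumulative update"}
for kit, purpose, inst, have in patch_report(available,
                                             installed={"VMS84I_SYS-V0200"},
                                             downloaded=set()):
    print("%-20s %-18s installed=%-3s on-disk=%s" % (kit, purpose, inst, have))
```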
Simon Clubley
2014-10-23 00:53:40 UTC
Permalink
Post by David Froble
Let me try again.
What I envision might start with the capability, for a particular
version of VMS that I'm running, of getting a report of all patches for
that version of VMS, including the purpose of the patch, and a column
that shows for each patch whether I've got that patch installed.
Perhaps also a column showing whether I have the actual patch.
This would give the person responsible for the VMS system a quick and
easy to understand view of the patch status.
I'd expect that patches that have not been obtained from VSI to still be
in the report. So, external information would be the patch description,
and the actual patch. Even if you don't have the patch, for whatever
reason, you'd know about its existence.
The next step might be to use this information, in a utility program, to
select patches for installation, or even perhaps removal. At a user
selected time, the selected activity would be performed.
So basically, you want a VMS version of yum.

Simon.

PS: IOW, these capabilities have been standard on Linux for over
a decade.

PPS: If you are motivated to look at the yum documentation, what is
called a patch in VMS land is called an update in Linux land.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
V***@SendSpamHere.ORG
2014-10-23 10:26:21 UTC
Permalink
Post by Simon Clubley
Post by David Froble
Let me try again.
What I envision might start with the capability, for a particular
version of VMS that I'm running, of getting a report of all patches for
that version of VMS, including the purpose of the patch, and a column
that shows for each patch whether I've got that patch installed.
Perhaps also a column showing whether I have the actual patch.
This would give the person responsible for the VMS system a quick and
easy to understand view of the patch status.
I'd expect that patches that have not been obtained from VSI to still be
in the report. So, external information would be the patch description,
and the actual patch. Even if you don't have the patch, for whatever
reason, you'd know about its existence.
The next step might be to use this information, in a utility program, to
select patches for installation, or even perhaps removal. At a user
selected time, the selected activity would be performed.
So basically, you want a VMS version of yum.
Simon.
PS: IOW, these capabilities have been standard on Linux for over
a decade.
Not all that standard...

***@Envy17:~$ yum
The program 'yum' is currently not installed. You can install it by typing:
sudo apt-get install yum
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
David Froble
2014-10-23 13:52:06 UTC
Permalink
Post by Simon Clubley
Post by David Froble
Let me try again.
What I envision might start with the capability, for a particular
version of VMS that I'm running, of getting a report of all patches for
that version of VMS, including the purpose of the patch, and a column
that shows for each patch whether I've got that patch installed.
Perhaps also a column showing whether I have the actual patch.
This would give the person responsible for the VMS system a quick and
easy to understand view of the patch status.
I'd expect that patches that have not been obtained from VSI to still be
in the report. So, external information would be the patch description,
and the actual patch. Even if you don't have the patch, for whatever
reason, you'd know about its existence.
The next step might be to use this information, in a utility program, to
select patches for installation, or even perhaps removal. At a user
selected time, the selected activity would be performed.
So basically, you want a VMS version of yum.
Simon.
PS: IOW, these capabilities have been standard on Linux for over
a decade.
PPS: If you are motivated to look at the yum documentation, what is
called a patch in VMS land is called an update in Linux land.
Well, if you say so ....

I don't use *ix, so I've never heard of yum.

It just seems reasonable to me to make things simple.

I learned a long time ago that when setting up information for high
level pay grades, you don't make such people dig through details to
find what they're looking for. You always limit it to one page, just
what they need to see, and in a format that they will instantly
understand.

I see no reason why the same techniques should not be used for the rest
of us.
Stephen Hoffman
2014-10-25 19:40:50 UTC
Permalink
I see no reason why the same techniques should not be used for the rest of us.
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too. Dealing efficiently and
proactively with multiple systems and crashes is another area that VMS
just isn't very good with. The faster details of the security bugs are
propagated around the 'net, the faster the patches should be
distributed and evaluated and optionally/preferably applied, too. VSI
is also going to have to figure out how they'll deal with and report
system hardware errors — this both for hardware repairs, and also as
part of figuring out whether software error reports are triggered by
software errors, and not by problems secondary to hardware errors.
Then there's the whole discussion of patch entitlements and patch
security — VSI probably won't want to be using the HP signing
certificates, which means VSI will want to embed their own public
certificate into V8.4-1H1 and sign their own kits...
--
Pure Personal Opinion | HoffmanLabs LLC
JF Mezei
2014-10-25 20:11:13 UTC
Permalink
Post by Stephen Hoffman
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too.
How difficult would it be to make almost all patches apply without a
reboot?

Conceptually, is it possible to patch the running system in memory?

Could be an interesting business practice:

for $10 you get patch files, requires reboot.

for $100 you get patch files and "live" patches to patch running system
without reboot.
Stephen Hoffman
2014-10-25 23:12:00 UTC
Permalink
Post by Stephen Hoffman
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too.
How difficult would it be to make almost all patches apply without a reboot?
Difficult, but possible. Not cheap to provide, either. Easier if
this capability is designed into the system from the outset. Most don't
have this capability designed in.

Monolithic / modular kernels make that a fairly difficult proposition,
particularly when the patch or the software upgrade needs to make
changes to the shared data structures. VMS was never all that good at
unloading device drivers and related hunks of code, for instance, and
that's a fairly constrained form of what an arbitrary "hot" patch would
need to contend with.

The VMS approach to this problem is a rolling reboot in a cluster.
Other systems will seek to use (for instance) a gazillion little
servers in a Moonshot box.

Aiming for uptime looks great in theory, but it's rather better to
design your application systems to allow for hunks to come and go, and
for hunks to fail, and for hunks to be upgraded.
Conceptually, is it possible to patch the running system in memory?
Yes. A trivial patch: <http://labs.hoffmanlabs.com/node/815> Also see
Oracle's Ksplice acquisition <http://www.ksplice.com>, which aims to
offer "hot" patches for specific RHEL patches.
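
The idea itself is easy to show in a dynamic language, where a routine
can be swapped while the program runs. This toy is only conceptual;
Ksplice-style kernel hot-patching does the equivalent against compiled
code and live data structures, which is the hard part:

```python
# Conceptual toy: callers reach a routine through one level of indirection,
# so a "patch" can replace the implementation while the program keeps
# running.  Kernel hot-patching must do this at machine-code level, with
# far more care about in-flight threads and shared data structures.
def buggy_checksum(data):
    return sum(data) % 255          # deliberate bug: should be mod 256

_impl = {"checksum": buggy_checksum}

def checksum(data):
    return _impl["checksum"](data)  # the "patchable" entry point

def hot_patch(name, new_fn):
    _impl[name] = new_fn            # takes effect for all subsequent calls

print(checksum([255]))              # prints 0: the bug is visible
hot_patch("checksum", lambda data: sum(data) % 256)
print(checksum([255]))              # prints 255: fixed, with no "reboot"
```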
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2014-10-26 01:49:41 UTC
Permalink
-----Original Message-----
Stephen Hoffman
Sent: 25-Oct-14 7:12 PM
Subject: Re: [New Info-vax] Patch management
Post by JF Mezei
Post by Stephen Hoffman
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too.
How difficult would it be to make almost all patches apply without a
reboot ?
Difficult, but possible. Not cheap to provide, either. Easier if
this capability is designed into the system from the onset. Most don't
have this capability designed in.
Monolithic / modular kernels make that a fairly difficult proposition,
particularly when the patch or the software upgrade needs to make
changes to the shared data structures. VMS was never all that good at
unloading device drivers and related hunks of code, for instance, and
that's a fairly constrained form of what an arbitrary "hot" patch would
need to contend with.
The VMS approach to this problem is a rolling reboot in a cluster.
Other systems will seek to use (for instance) a gazillion little
servers in a moonshot box.
The pain point with kernel patches is that when you have to reboot,
typically it means application services availability is negatively impacted.

If application services availability is maintained, then rebooting an OS
instance (virtual or physical) in a cluster for a patch is not an issue.

This is what VMware does (very well btw) today. This can be done
with OpenVMS with DNS load balancing and proactively migrating
connections away from the server to be upgraded. When the server
has no more connections, that server can be patched and rebooted.

Once rebooted, the DNS server rebalances the connections and App
availability remains at 100%.

Takes a bit longer, but is used in some Cust environments today.
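
The drain-then-patch rotation described above can be modeled in a few
lines; the node names and connection counts here are invented:

```python
# Toy model of the rolling upgrade described above: take one node at a time
# out of the DNS rotation, migrate its connections to the survivors, patch
# and reboot it, then return it to rotation.  Names and counts are invented.
def rolling_patch(nodes, conns):
    """nodes: patch order; conns: {node: active connection count}."""
    log = []
    for node in nodes:
        moved = conns.pop(node)          # drained: DNS stops offering node
        for other in conns:              # survivors absorb the connections
            conns[other] += moved // len(conns)
        log.append(node + " patched and rebooted")
        conns[node] = 0                  # back in rotation; DNS rebalances
    return log

conns = {"VMS1": 40, "VMS2": 35, "VMS3": 25}
for step in rolling_patch(["VMS1", "VMS2", "VMS3"], conns):
    print(step)
# Application availability stays up throughout: at every step the
# remaining nodes in rotation carry the full connection load.
```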

Imho, rather than waste cycles on developing a means to apply kernel
patches without OS reboots, the holy grail target should really be about
dynamically migrating processes from one OS instance to another in a
cluster. Perhaps using Galaxy type functionality?
Aiming for uptime looks great in theory, but it's rather better to
design your application systems to allow for hunks to come and go, and
for hunks to fail, and for hunks to be upgraded.
Agreed. DNS load balancing is another approach, but assumes a
cluster-aware application along with smart use of batch queues.
Post by JF Mezei
Conceptually, it is possible to patch the running system in memory ?
Yes. A trivial patch: <http://labs.hoffmanlabs.com/node/815> Also see
Oracle's Ksplice acquisition <http://www.ksplice.com>, which aims to
offer "hot" patches for specific RHEL patches.
Dynamic process migration in a cluster.. that is really the next generation
goal for leading edge OS design.


Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
j***@yahoo.co.uk
2014-10-26 12:24:47 UTC
Permalink
Post by Kerry Main
-----Original Message-----
Stephen Hoffman
Sent: 25-Oct-14 7:12 PM
Subject: Re: [New Info-vax] Patch management
Post by JF Mezei
Post by Stephen Hoffman
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too.
How difficult would it be to make almost all patches apply without a
reboot ?
Difficult, but possible. Not cheap to provide, either. Easier if
this capability is designed into the system from the onset. Most don't
have this capability designed in.
Monolithic / modular kernels make that a fairly difficult proposition,
particularly when the patch or the software upgrade needs to make
changes to the shared data structures. VMS was never all that good at
unloading device drivers and related hunks of code, for instance, and
that's a fairly constrained form of what an arbitrary "hot" patch would
need to contend with.
The VMS approach to this problem is a rolling reboot in a cluster.
Other systems will seek to use (for instance) a gazillion little
servers in a moonshot box.
The pain point with kernel patches is that when you have to reboot,
typically it means application services availability is negatively impacted.
If application services availability is maintained, then rebooting an OS
instance (virtual or physical) in a cluster for a patch is not an issue.
This is what VMware does (very well btw) today. This can be done
with OpenVMS with DNS load balancing and proactively migrating
connections away from the server to be upgraded. When the server
has no more connections, that server can be patched and rebooted.
Once rebooted, the DNS server rebalances the connections and App
availability remains at 100%.
Takes a bit longer, but is used in some Cust environments today.
Imho, rather than waste cycles on developing a means to apply kernel
patches without OS reboots, the holy grail target should really be about
dynamically migrating processes from one OS instance to another in a
cluster. Perhaps using Galaxy type functionality?
Aiming for uptime looks great in theory, but it's rather better to
design your application systems to allow for hunks to come and go, and
for hunks to fail, and for hunks to be upgraded.
Agreed. DNS load balancing is another approach but assumes a cluster
aware application along with smart use of batch queues.
Post by JF Mezei
Conceptually, it is possible to patch the running system in memory ?
Yes. A trivial patch: <http://labs.hoffmanlabs.com/node/815> Also see
Oracle's Ksplice acquisition <http://www.ksplice.com>, which aims to
offer "hot" patches for specific RHEL patches.
Dynamic process migration in a cluster.. that is really the next generation
goal for leading edge OS design.
Regards,
Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future
Kerry dot main at backtothefutureit dot com
Dynamic process migration in a cluster? Next generation goal for
leading edge OS design?

Might have been a bit leading edge a decade and a half ago, but even
then a real (not slideware) transparent process migration was a core
part of the NonStop Clusters for SCO UnixWare which CPQ bought. And
back then there was Tandem too.

Compaq decided to open source NonStop Clusters and it became OpenSSI
(SSI = Single System Image), which now seems to be gathering dust.

Tandem's still around, although the Bank of England managed to stop
one of their NonStop systems a few days ago - the RTGS outage. Unplanned
office hours downtime on a system which wasn't even a 24x7 system, that
must take some doing.

Odd.
Kerry Main
2014-10-26 19:08:47 UTC
Permalink
-----Original Message-----
Sent: 26-Oct-14 8:25 AM
Subject: [New Info-vax] Transparent process migration (was: Re: Patch
management)
[snip..]
Dynamic process migration in a cluster? Next generation goal for
leading edge OS design?
Might have been a bit leading edge a decade and a half ago, but even
then a real (not slideware) transparent process migration was a core
part of the NonStop Clusters for SCO UnixWare which CPQ bought. And
back then there was Tandem too.
As I recall, but certainly willing to be corrected, before Intel killed the
dual pair CPU capability in the Integrity arch, NonStop had the ability
to use shared pair processes but this was within a single OS instance.
I do not believe (even today with their software clustering replacement
for dual pair cpu arch) NonStop has the ability to do kernel OS patches
without shutting down the app and/or restarting it on another OS
instance.
Compaq decided to open source NonStop Clusters and it became
OpenSSI
(SSI = Single System Image), which now seems to be gathering dust.
Tandem's still around, although the Bank of England managed to stop
one of their NonStop systems a few days ago - the RTGS outage.
Unplanned
office hours downtime on a system which wasn't even a 24x7 system, that
must take some doing.
Odd.
For systems that are not 24x7, they can usually schedule outages which
means they do not need this dynamic process migration functionality.
They simply schedule the outage, then do the patching, test and open
the system for prod again.

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
Stephen Hoffman
2014-10-26 14:56:10 UTC
Permalink
Post by Kerry Main
Agreed. DNS load balancing is another approach but assumes a cluster
aware application along with smart use of batch queues.
DNS with a short TTL is coarse, but does work in various environments.
IP load balancers are also available and are somewhat more responsive,
as are the NAT-ish features HP is offering with some of their servers.
There are features of VMS that might have some limited use here, too;
failsafe IP, or the IP cluster alias.

Batch queues are useful for batch, but they're not very good at a
number of things a robust application would want. Dealing with all of
the various failure modes and (for instance) having just one copy of a
key batch job running, or dealing with a job that keeps crashing takes
a fair amount of add-on code to get right. If the application is
playing in this league, then a job scheduler would be a core OS
feature. I'm really not fond of rolling my own scheduling and
recovery, piecemeal, scattered through all the DCL procedures.
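One of the problems named above, ensuring just one copy of a key batch job runs, is usually solved with an exclusive-create lock. A minimal sketch, assuming a hypothetical lock-file path (a real VMS procedure would more likely use a $ENQ lock or a queue-manager feature):

```python
# Sketch of the "only one copy of this batch job" problem mentioned
# above, using an atomic exclusive-create lock file. The lock path is
# hypothetical; on VMS a $ENQ lock would be the idiomatic equivalent.

import os

def acquire_single_instance(lock_path):
    """Return a file descriptor if we are the only instance, else None."""
    try:
        return os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None  # another copy of the job already holds the lock

def release_single_instance(fd, lock_path):
    """Release the lock so the next run of the job can start."""
    os.close(fd)
    os.unlink(lock_path)
```

Even this sketch shows why a proper scheduler is wanted: it does nothing about stale locks left by a crashed job, which is exactly the kind of failure-mode handling that ends up scattered through DCL procedures.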
Post by Kerry Main
Dynamic process migration in a cluster.. that is really the next
generation goal for leading edge OS design.
This migration problem is related to online backups; where the OS and
the application have to cooperate, or where the application (or
database) deals with its process state. A process migration is very
close to a backup that's been restored and restarted.

These process restarts and these migrations are already feasible, but
it's as much about application design as any hooks provided by the OS
that might enable it.

There are application environments and languages that target this.
Erlang, for instance. There are other "user-space" options
<http://en.wikipedia.org/wiki/Application_checkpointing> that might
work on VMS.
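Application-cooperative checkpointing, as discussed above, can be shown in miniature: the application quiesces, serializes its own state, and a (possibly different) process later restores that state and resumes. The state layout and file path below are purely illustrative; note this recovers data, not open network connections.

```python
# User-space checkpointing in miniature, per the discussion above.
# The application itself decides when its state is consistent, writes
# a snapshot, and a restarted (or migrated) process reloads it.
# State contents and paths are hypothetical.

import pickle

def checkpoint(state, path):
    """Quiesce point: write a consistent snapshot of application state."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path):
    """Restart or migrate: reload the snapshot and carry on from it."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

A process migration is then, as the text says, very close to a backup that has been restored and restarted on another node.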

The VMS I/O model isn't really set up for OS-initiated
checkpoint-restart and similar, nor for OS-initiated online backup.
Though it can flush I/O caches and the like, the OS doesn't have an
indication of when the application and particularly its log and
non-volatile data is in a consistent state. The application would have
to receive and process a backup or a migration notification received
from the OS, and quiesce itself. Or a whole lot of VMS would have to
be reworked to perform these activities entirely on behalf of the
application, and I'm not sure that's even feasible without programmer
assistance except in the most coarse of implementations.

Restarting VMS itself gets interesting. In isolation, something like
the old VAX FastBoot is feasible. Things get far more interesting and
difficult in a network or a cluster, as there's no direct means to
restore the state of the network connections, and then there are the
error paths such as served (SCS, NFS[1], CIFS/SMB[2], WebDAV[2]) disk
storage having changed or being offline.

In years past, you'd see the frameworks and tools to do this
incorporated into VMS. But it'll likely take some years before VSI is
in a position to make such a push with VMS, and only after they figure
out what they want to do with VMS — beyond the obvious and
comparatively mundane hardware support, that is. (VMS on a Moonshot
box will be interesting, but that brings issues such as "how do you
reasonably manage and deploy and license and maintain a gazillion VMS
instances in a rack or in a row of racks", whether you can cluster
more than a couple of Moonshot boxes, and "who really wants to deal
with ~100 separate sets of software licenses?". That's beyond questions
such as "OK, but who would buy it?" and "how many licenses would they
buy?", etc. But I digress.)

Given fat binaries weren't viewed as viable and given how much of a
mess that decision pushed out into the building and kitting and
packaging and distribution and software management, I'd wager that
we'll be using databases and application-integrated transactions and
DNS- and appliance-based load balancing and IP balancers for a while.
Applications either explicitly coded, or with the assistance of a
language such as Erlang.


######
[1] any reports of issues with the current NFSv3 client aside.
[2] if VMS ever gets a client for common these or other network shares.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2014-10-26 18:40:03 UTC
Permalink
-----Original Message-----
Stephen Hoffman
Sent: 26-Oct-14 10:56 AM
Subject: Re: [New Info-vax] Patch management
Post by Kerry Main
Agreed. DNS load balancing is another approach but assumes a cluster
aware application along with smart use of batch queues.
DNS with a short TTL is coarse, but does work in various environments.
IP load balancers are also available and are somewhat more responsive,
as are the NAT-ish features HP is offering with some of their servers.
There are features of VMS that might have some limited use here, too;
failsafe IP, or the IP cluster alias.
[snip..]
Restarting VMS itself gets interesting. In isolation, something like
the old VAX FastBoot is feasible. Things get far more interesting and
difficult in a network or a cluster, as there's no direct means to
restore the state of the network connections, and then there are the
error paths such as served (SCS, NFS[1], CIFS/SMB[2], WebDAV[2]) disk
storage having changed or being offline.
In years past, you'd see the frameworks and tools to do this
incorporated into VMS. But it'll likely take some years before VSI is
in a position to make such a push with VMS, and only after they figure
out what they want to do with VMS — beyond the obvious and
comparatively mundane hardware support, that is. (VMS on a Moonshot
box will be interesting, but that brings issues such as "how do you
reasonably manage and deploy and license and maintain a gazillion VMS
instances in a rack or in a row of racks", whether you can cluster
more than a couple of Moonshot boxes, and "who really wants to deal
with ~100 separate sets of software licenses?". That's beyond questions
such as "OK, but who would buy it?" and "how many licenses would they
buy?", etc. But I digress.)
[snip..]

Agreed. The great thing about VSI now being 100% focussed on
OpenVMS is that once their initial setup issues are addressed, they can
begin to make 5-10 year plans which, over time, can accomplish the
long term goals.

This might include a major simplification of the licensing model,
perhaps combining a three-tier criticality level (User, Dept,
Enterprise?) with a host-instance licensing model. The host-instance
(physical or virtual) component might offer increasing discounts for
larger numbers of host instances.

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerr
Jan-Erik Soderholm
2014-10-26 23:20:40 UTC
Permalink
This might include a huge simplification of the licensing model...
Ha ha!

I have waited several weeks now trying to get pricing on
DECset (and some of the parts separately) for Alpha, and
I'm still waiting. And this is through a contact that I
know has good channels to HP.

So far I have got a confirmation that DECset/Alpha
at least *is* on the price list! I have also got some
per-user "list prices" of CMS and MMS, but nothing
on DECset so far. And, prices for the command-line
tools like CMS and MMS are approx 10 times that of
a full-blown Microsoft Visual Studio seat. A joke...

If they could simplify the licensing model that far that
they at least can *find* the price list, it would be
a huge step forward...

Jan-Erik.
David Froble
2014-10-25 22:29:19 UTC
Permalink
Post by Stephen Hoffman
I see no reason why the same techniques should not be used for the rest of us.
Correct; current patch management on VMS is a mess. But patch
management is just one part of the mess, too. Dealing efficiently and
proactively with multiple systems and crashes is another area that VMS
just isn't very good with. The faster details of the security bugs are
propagated around the 'net, the faster the patches should be distributed
and evaluated and optionally/preferably applied, too. VSI is also going
to have to figure out how they'll deal with and report system hardware
errors — this both for hardware repairs, and also as part of figuring
out whether software error reports are triggered by software errors, and
not by problems secondary to hardware errors. Then there's the whole
discussion of patch entitlements and patch security — VSI probably won't
want to be using the HP signing certificates, which means VSI will want
to embed their own public certificate into V8.4-1H1 and sign their own
kits...
As to the hardware issues ....

VSI is not going to be, as I think I understand things, in the hardware
business. So from some perspectives, "it's not their problem". Yeah,
easy to say, but maybe not so easy to get away with.

Reproducible errors on "supported" hardware are one thing, and probably
software problems. But it's the other problems that are sort of hard. I'd think
the Microsoft solution, re-install the OS, just isn't going to work with
VMS users.

I'm guessing that the users are going to have to be more responsible in
the future, and how to do that could be interesting.

Perhaps have a "test" system, in some ways identical to the live system,
and see if the problem happens on the test system. If so, perhaps it's
software. If not, well, then what does a dumb user do with a hardware
problem?

The old DEC field service that actually tested and fixed things is long
gone. Today's commodity hardware won't support such an organization.

Not saying I got answers, but, I do have that big question ....

I'm not aware of any; does anyone know if there is any decent hardware
diagnostic software for x86 systems available?
Stephen Hoffman
2014-10-25 23:16:45 UTC
Permalink
Post by David Froble
VSI is not going to be, as I think I understand things, in the hardware
business. So from some perspectives, "it's not their problem". Yeah,
easy to say, but maybe not so easy to get away with.
Yeah; VSI might not want to get into hardware here, but I'd suspect
they'll get dragged into it; whether with "qualified", "reference",
"pre-configured" or "pre-packaged" systems.
Post by David Froble
I'm not aware of any, does anyone know if there is any decent hardware
diagnostic software for x86 systems available?
Apple has decent diagnostics available for their gear. HP is looking
to provide interfaces and capabilities for their RAS-enabled servers,
too. As for generic diagnostics for x86 beyond the box's own POST,
donno. Probably only from the vendor, if at all.
--
Pure Personal Opinion | HoffmanLabs LLC
Jan-Erik Soderholm
2014-10-26 10:58:38 UTC
Permalink
Post by David Froble
VSI is not going to be, as I think I understand things, in the hardware
business. So from some perspectives, "it's not their problem". Yeah,
easy to say, but maybe not so easy to get away with.
Yeah; VSI might not want to get into hardware here, but I'd suspect they'll
get dragged into it; whether with "qualified", "reference",
"pre-configured" or "pre-packaged" systems.
Post by David Froble
I'm not aware of any, does anyone know if there is any decent hardware
diagnostic software for x86 systems available?
Apple has decent diagnostics available for their gear. HP is looking to
provide interfaces and capabilities for their RAS-enabled servers, too. As
for generic diagnostics for x86 beyond the box's own POST, donno...
http://www.memtest86.com/
Probably
only from the vendor, if at all.
V***@SendSpamHere.ORG
2014-10-20 20:25:47 UTC
Permalink
Post by David Froble
Post by JF Mezei
Post by Stephen Hoffman
The VMS software update implementation and software patch notification
and distribution mechanisms are unnecessarily manual, confusing,
awkward and generally archaic.
Not so fast !
Consider a mission critical site. It certainly doesn't want updates to
be installed automatically and trigger an unplanned reboot or cause
software to fail due to newly introduced incompatibility.
So very serious sites want to be able to deploy a patch on a test system
first and then carefully plan its deployment in production, node by node
on a cluster.
Having said this...
A DCL tool that scans your system and then reports to you what patches
are available for the installed software and then lets you download kits
within that tool would be a great improvement.
However, reporting what is installed on your system may be something
some customers may be wary about (even if it is just the
product-install logs/database).
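The patch-audit tool described above boils down to a simple comparison: installed product versions against the vendor's advertised patch list. A minimal sketch, with entirely made-up product names and version numbers:

```python
# Sketch of the patch-scan idea described above: compare what is
# installed against what the vendor advertises, and report the gap.
# Product names and version numbers are hypothetical.

def missing_patches(installed, available):
    """Return, per product, the advertised patches newer than installed."""
    gaps = {}
    for product, patches in available.items():
        have = installed.get(product, 0)   # 0 = product patch level unknown
        newer = [p for p in patches if p > have]
        if newer:
            gaps[product] = newer
    return gaps
```

The hard parts are the ones the thread raises: gathering the installed-product inventory reliably, and whether customers are willing to report it upstream at all.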
I'm with JF on this topic. What's there now is poor, and methods
for a much better picture of the status of your VMS installation would
be a very good thing: what version, what patches are installed and
what they do, what patches are not installed and what they do.
Consider an entity that has spent much time and money validating some
application(s) including the version and patch status of VMS. Now
consider invalidating all that work by applying some patch to their
system. When you see them heading for you with a rope, tar, feathers,
and a rail, you better be able to run very fast.
My Microsoft systems, and browsers, and anything else I can find, with
the exception of the Avast security product, have all automatic updates
turned off. Nor do I install patches, unless there is a problem.
You installed WEENDOZE, that's problem enough.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Stephen Hoffman
2014-10-19 14:33:21 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
And no, a web-browser is not at the top of my list. :-)
There area way better plattforms for surfing today...
Yes, TODAY.
For the foreseeable future, too.

Getting to relative parity with the desktop environment and web tools
is no small project — that involves porting over the web browsers and
multimedia support, and adding audio and video playback and
increasingly recording features, PDF viewers, and at least viewing
Microsoft Office documents, among many other details.

Products featuring relative feature parity with a web browser aren't
usually very effective at attracting new customers, either.
<http://labs.hoffmanlabs.com/node/1266>

Unless they've most of a spare billion to burn on a desktop effort, VSI
isn't in a position to start their own WebKit-scale project, nor would
I expect to see anything on VMS approaching how easy it is to integrate
and use WebKit from ObjC or Swift on OS X, for that matter.

Getting to a competitive position in desktops is a huge investment, and
as a desktop operating system you're going up against entrenched
competitors in a high-volume low-margin business, and the x86-64
business has been flat or declining in aggregate.
Post by Phillip Helbig---undress to reply
However, VMS now has a FUTURE.
VSI needs to get and maintain revenue to sustain their work and to
provide income for their investor. That likely means VMS packaged and
configured and sold as a server for the foreseeable future, with some
use as an embedded display desktop for a few applications and for
developers. Though when I can aim X at an x86-64 or ARM box
somewhere, do I really need a VMS graphics controller and an X server?

Sure, a new port of Firefox or Seahorse would be nice for developers,
but don't expect the VMS browser environment to interest most folks.
Worse for this approach, the market share of the Mozilla Firefox
browser is dropping substantially, too:
<http://en.wikipedia.org/wiki/Usage_share_of_web_browsers>
Post by Phillip Helbig---undress to reply
Consider how absurd the situation is: you are running VMS on X86, and
have to have ANOTHER computer running on X86 in order to surf the web.
So? The tools on OS X and Windows desktops are better at that task,
and are cheaper.

The markets have changed. Prices on the x86-64 boxes are now a very
small fraction of what dumb VT terminals used to cost, too — the HP
Stream laptops are starting at US$199, and the HP Stream tablets start
at US$99. These products are running Microsoft Windows on Intel Atom
x86 processors, BTW.
<http://blogs.windows.com/bloggingwindows/2014/09/29/new-hp-stream-thin-and-light-windows-notebooks-and-tablets-including-99-stream-7-tablet-just-unveiled/>


For many applications and environments that need servers, Linux,
Windows Server and sometimes even OS X Server are better choices than
VMS (presently) is, too.

VMS has targeted server installations and environments and operations
for many years, and not desktop users.

VSI has a very long way to go if they want onto the desktop, too. Then
there's the whole discussion of pricing and revenue — if you're after
the desktop and client market, Microsoft licenses are "free" with many
x86-64 hardware purchases, and Apple has been providing OS X
Mavericks and Yosemite at no charge for their customers. For
comparison, operating systems targeting server applications including
RHEL with support and Microsoft Windows Server with support still cost
"real money".
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig---undress to reply
2014-10-19 18:02:13 UTC
Permalink
Post by Stephen Hoffman
Getting to a competitive position in desktops is a huge investment, and
as a desktop operating system you're going up against entrenched
competitors in a high-volume low-margin business, and the x86-64
business has been flat or declining in aggregate.
I think there is some misunderstanding here. It is not "VMS competes
with everyone else for the `desktop'" or "VMS is a server-only OS and
just forget `desktop to datacenter'". There is something in-between.
Speaking for myself, I have never used Word, Powerpoint etc simply
because I never needed to. One can write presentations and documents
with other tools, and in many areas LaTeX is quite common for documents,
and PDF (perhaps generated from LaTeX) is common for presentations.
However, a web browser is necessary for many things these days. LYNX
works fine on VMS, but of course lacks graphics. (Sometimes this is a
good thing.) I'm sure I'm not the only one who would benefit from a
modern web browser on VMS, even if it didn't have audio and video. (My
guess is that supporting them would be much more work, and be much more
dependent on hardware, and use more system resources, than simply
getting a web browser which supports all the features of the latest
HTML standard.)
Stephen Hoffman
2014-10-19 13:43:30 UTC
Permalink
Sure, there are folks keeping PDP-11 and undoubtedly yet older boxes
alive and operating, but is that a viable market for an operating
system vendor?

Is that a market that'll grow and attract newer customers?

VMS growth doesn't happen with VAX or Alpha gear, nor can growth happen
with Itanium gear post-Kittson. Nor does substantial VMS growth seem
particularly likely with current and likely future Itanium servers.

Some growth might start to happen when x86-64 support gets here and
significant new features start rolling out, but there's no schedule for
the x86-64 port and no certainty whether nor when VSI will complete
that porting work — VSI needs to get their V8.4-1H1 release and a
revenue stream going first.
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha
or VAX for a while, but, while some might think they are dead products,
it seems as if there is still some old hardware in use, and, then,
there are those running the emulators.
In the case of VAX, there've been no new versions of OpenVMS VAX in ~13
years
<http://h71000.www7.hp.com/openvms/os/openvms-release-history.html>,
the last of the new VAX servers were sold well over a decade ago, and
all software support for OpenVMS VAX ends in 2015 per the current HP
roadmap <http://hp.com/go/openvms/roadmap>.

The last of the Alpha systems shipped out more than seven years ago,
and the Alpha "fleet average" is undoubtedly rather older than that,
and all OpenVMS Alpha support ends in 2018 per the HP roadmap.

Then there are the business questions: is this old gear even a viable
market? How much new software are these folks buying, how much support
are they buying, and what's keeping them on this old and very
inefficient gear and what might encourage them to upgrade or replace?
Post by David Froble
I'd think the more revenue streams that can be supported would be good for VSI.
For the long-term with VMS, I'd prefer / expect to see VSI support
their long-term support (LTS) releases for ~five years as is now
typical with other LTS environments, and provide a release map to
assist and to encourage folks to upgrade their VMS software or to
migrate their data and to replace their servers.

Whether VSI supports V9.x on Itanium or back-ports their (eventual,
planned) x86-64 work to Itanium, or extends support for the terminal
Itanium release isn't known, and probably hasn't even been particularly
considered yet. VSI needs to get OpenVMS I64 V8.4-1H1 out the door,
and to get their business — their web site, contract administration,
price lists and discounts, license tracking, support forums, internal
support databases, single-sign-on, networking and servers and the
physical plant, installing truck-loads of new Tukwila and Poulson and
other gear[1], customer calls and customer meetings and preparing and
delivering presentations, and all the rest of the baggage that a
software company needs to have — going.

VSI needs to build and test and then get customers over onto V8.4-1H1,
V8.4-1H2 and V9.x, too. Revenue.

As for support, doing what DEC and other older vendors have done in
years past — supporting some releases for ~15 years — strikes me as a
losing proposition these days. Particularly given software and
hardware replacement cycles, and the ever-lower software and support
prices. Beyond the obvious need for the support vendor to keep their
own 13+ year old systems around[2], all vendors involved are also
constrained on what features the support vendor and any of the
third-party software vendors can use on these older releases in the
available products and tools[3], and which makes development either
more difficult, or into a least-common-feature-driven process. And to
do that in an environment where the vendor's server[4] prices and
software prices are dropping, and when newer-generation servers are
still getting more efficient? Something has to give, and for software
vendors — as one of the speakers from HP stated at the boot camp —
that's usually long-long-term support.
But why is someone running an emulator? ...
If it is to save money, he can save even more by running directly on X86.
That still involves a port, and porting code — even a
cross-architecture port of a VMS application — is more expensive than
just leaving the code as it is. Some organizations aren't spending
more than physical repairs, power and the minimal effort involved in
keeping critical applications and backups running within their existing
VMS environments.

Performing application ports from one VMS architecture to another also
assumes that all prerequisite products or equivalent products are
available and affordable. This is not always the case. Binary
translation does not always resolve these cases, whether that's due to
technical limitations of the translation or due to reasons involving
product licensing or product support. Some of these ports can be quite
intractable, short of a very large infusion of cash.


####
[1] VMS is (presently) not very adept at mass deployments, and hardware
mass deployments require the sorts of infrastructure and network and
authentication services that VSI is (also) now undoubtedly building and
bootstrapping for itself.
[2] VAX or Alpha emulators can't be trusted to exactly duplicate the
behavior of real hardware. When you're sorting out some bugs,
sometimes emulation — even if you've extended support to specific
emulators — requires access to the actual configurations with the
errors.
[3] VAX was and is a hassle to support, as it's missing hardware and
software features available on newer OpenVMS releases, meaning your
cross-platform tools and products either can't support OpenVMS VAX, or
can't use the newer features on the newer platforms, or will require
conditionalizing the code and documenting and maintaining different
support and different features. So there's added expense here, and
potentially on a platform where the customers might not be spending as
much money as the folks closer to the front of the product offerings.
[4] VSI doesn't have a bespoke hardware product offering (yet?), which
means VSI is entirely dependent on software and software support
revenues, and on any consulting and custom feature requests they choose
to accept.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-10-19 17:24:40 UTC
Permalink
Post by Stephen Hoffman
Sure, there are folks keeping PDP-11 and undoubtedly yet older boxes
alive and operating, but is that a viable market for an operating system
vendor?
Lot of stuff snipped ...

I'm not saying there is, or is not, an opportunity for revenue with old
products. All I'd say about such is that if an opportunity occurs, and
is worth the effort, then VSI just might consider it, when and if they
manage to get their head above water with the x86 work.

But consider, Numonix is selling new devices for the old hardware.
Perhaps support for the new devices might be lucrative. Perhaps not.
Bill Gunshannon
2014-10-20 13:47:16 UTC
Permalink
Post by Stephen Hoffman
Sure, there are folks keeping PDP-11 and undoubtedly yet older boxes
alive and operating, but is that a viable market for an operating
system vendor?
Is that a market that'll grow and attract newer customers?
Well, if RSTS/E ran on x86-64, who knows what might be possible. (He says
with tongue planted firmly in cheek!!! But then, where have we heard that
before?)

bill
--
Bill Gunshannon | de-moc-ra-cy (di mok' ra see) n. Three wolves
***@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton |
Scranton, Pennsylvania | #include <std.disclaimer.h>
Kerry Main
2014-10-22 01:31:58 UTC
Permalink
-----Original Message-----
Stephen Hoffman
Sent: 19-Oct-14 9:44 AM
Looking for suggestions for new $GETDVI item codes)
Sure, there are folks keeping PDP-11 and undoubtedly yet older boxes
alive and operating, but is that a viable market for an operating
system vendor?
Is that a market that'll grow and attract newer customers?
VMS growth doesn't happen with VAX or Alpha gear, nor can growth happen
with Itanium gear post-Kittson. Nor does substantial VMS growth seem
particularly likely with current and likely future Itanium servers.
Some growth might start to happen when x86-64 support gets here and
significant new features start rolling out, but there's no schedule for
the x86-64 port and no certainty whether nor when VSI will complete
that porting work — VSI needs to get their V8.4-1H1 release and a
revenue stream going first.
[snip...]

Imho, having a VAX, Alpha and Integrity emulator (not translator,
as it's too complicated) available in some OpenVMS V9.next release
would mean:
- larger support revenues, as VAX, Alpha and Integrity Custs can
migrate off their old HW and onto new, supported x86-64 HW
without changing their binaries, and without having to worry about
all the issues associated with commodity OS's (yes, a parallel
focus on bringing in new Custs is still needed as well)
- much better business case for future OpenVMS revenues
- would reduce requirement for porting any OpenVMS V9.next
code to anything but X86-64. As Hoff stated, the days of supporting
old HW forever are fading fast.

Imho, V9.next should be x86-64 only, with 8.4.x releases providing
sunset functionality and patches for VAI (VAX, Alpha, Integrity) HW.

If Custs have an easy way forward with minimal impact on their environment,
then most will likely take it, and they would not mind paying support
revenues, which many likely have not paid in years (decades for VAX).
Think of the Custs who jumped on the Stromasys Alpha-on-x86 emulator.

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
David Froble
2014-10-19 17:07:05 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by David Froble
I doubt VSI is going to have any time to even mention the words Alpha or
VAX for a while, but, while some might think they are dead products, it
seems as if there is still some old hardware in use, and, then, there
are those running the emulators.
I'd think the more revenue streams that can be supported would be good
for VSI.
But why is someone running an emulator? If it is because of
hardware-specific stuff, he probably doesn't need any new development.
If it is to save money, he can save even more by running directly on
X86.
I don't know. Do you know? There could be reasons neither of us can guess.

My point is, if there is any money to be made, and if VSI can see their
way to making that money, that is a good thing for VSI, and "good for
VSI" is quite likely to be "good for VMS".
Phillip Helbig---undress to reply
2014-10-19 08:10:08 UTC
Permalink
Post by Jan-Erik Soderholm
I think I read that when VAX and Alpha are de-supported from
HP according to the current *HP* roadmap, VSI will be able to
take up those platforms. Until then, they are HP's babies...
Right, but the question is whether VSI should take them up. Certainly
further development for VAX and ALPHA wouldn't be a desirable goal:
those running old hardware for a specific reason don't need it; those
running it for lack of money can go to X86.

VSI's resources are not infinite. I would rather see X86 done correctly
with enhancements like a modern web browser rather than developing ALPHA
or VAX.
j***@yahoo.co.uk
2014-10-19 17:26:24 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
I think I read that when VAX and Alpha are de-supported from
HP according to the current *HP* roadmap, VSI will be able to
take up those platforms. Until then, they are HP's babies...
Right, but the question is whether VSI should take them up. Certainly
those running old hardware for a specific reason don't need it; those
running it for lack of money can go to X86.
VSI's resources are not infinite. I would rather see X86 done correctly
with enhancements like a modern web browser rather than developing ALPHA
or VAX.
"further development for VAX and ALPHA wouldn't be a desirable goal:
those running old hardware for a specific reason don't need it; those
running it for lack of money can go to X86."

Given that VSI's backers are Nemonix's backers, and Nemonix makes
money from ongoing support for VAX and Alpha and other "legacy"
systems, I suspect VSI/Nemonix will be able to make better
informed decisions about "desirability", initial priorities, etc,
than most of us here could.

We'll find out in due course.
David Froble
2014-10-19 17:29:11 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by Jan-Erik Soderholm
I think I read that when VAX and Alpha are de-supported from
HP according to the current *HP* roadmap, VSI will be able to
take up those platforms. Until then, they are HP's babies...
Right, but the question is whether VSI should take them up. Certainly
those running old hardware for a specific reason don't need it; those
running it for lack of money can go to X86.
VSI's resources are not infinite. I would rather see X86 done correctly
with enhancements like a modern web browser rather than developing ALPHA
or VAX.
Ah, let me make a prediction for you ....

You ain't ever going to get that web browser!

GIVE IT UP !!!

Any money and effort in that direction would be a very bad idea, and
could threaten the future of VMS.

About the best you might get, is the capability to run non-VMS x86
software in some kind of environment that would support such, but, it
will still be non-VMS stuff. And I doubt VSI will waste any money on
such a capability.
JF Mezei
2014-10-19 18:05:24 UTC
Permalink
Post by David Froble
You ain't ever going to get that web browser!
Different way to look at the issue:

Building Firefox on a platform requires porting a number of
middleware components, such as GTK.

What if customers require that middleware to port their own applications
and get software from the Unix world ?

Building Firefox on VMS may be just the project that brings lots of needed
middleware to VMS, with a piece of software to showcase that it all works.


Say a VMS *server* runs some sort of traffic-control application and
handles X displays on many workstations, and that software requires GTK.
GTK is then needed on VMS, not on the workstations.


Whether the customer base has signaled to VSI that it needs stuff like
GTK and other Unix stuff is a different question. But I would not
dismiss the "browser on VMS" project so easily.
Phillip Helbig---undress to reply
2014-10-19 18:08:54 UTC
Permalink
Post by David Froble
Ah, let me make a prediction for you ....
You ain't ever going to get that web browser!
GIVE IT UP !!!
How long ago was it that I heard "Ah, let me make a prediction for you.
You ain't ever going to get VMS on {Poulson|X86}. GIVE IT UP!" :-)
Post by David Froble
Any money and effort in that direction would be a very bad idea, and
could threaten the future of VMS.
VMS has survived so much already; threat of a web browser won't kill it!
Post by David Froble
About the best you might get, is the capability to run non-VMS x86
software in some kind of environment that would support such, but, it
will still be non-VMS stuff. And I doubt VSI will waste any money on
such a capability.
There is nothing "magic" about a web browser. It's just an application.
Why shouldn't it run on VMS?
Hans Vlems
2014-10-19 18:26:29 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by David Froble
Ah, let me make a prediction for you ....
You ain't ever going to get that web browser!
GIVE IT UP !!!
How long ago was it that I heard "Ah, let me make a prediction for you.
You ain't ever going to get VMS on {Poulson|X86}. GIVE IT UP!" :-)
Post by David Froble
Any money and effort in that direction would be a very bad idea, and
could threaten the future of VMS.
VMS has survived so much already; threat of a web browser won't kill it!
Post by David Froble
About the best you might get, is the capability to run non-VMS x86
software in some kind of environment that would support such, but, it
will still be non-VMS stuff. And I doubt VSI will waste any money on
such a capability.
There is nothing "magic" about a web browser. It's just an application.
Why shouldn't it run on VMS?
I don't quite understand the discussion. 30 years ago DEC provided a great deal
of the software to run on VMS. That situation changed; a lot. So when VMS has
more than just a future, i.e. is on solid ground, third parties will see
opportunities and write tools that future VMS users need.
Possibly even a web browser.
Hans
David Froble
2014-10-19 21:48:45 UTC
Permalink
Post by Phillip Helbig---undress to reply
Post by David Froble
Ah, let me make a prediction for you ....
You ain't ever going to get that web browser!
GIVE IT UP !!!
How long ago was it that I heard "Ah, let me make a prediction for you.
You ain't ever going to get VMS on {Poulson|X86}. GIVE IT UP!" :-)
Not from me you didn't. I kept the faith when many had given it up.
Post by Phillip Helbig---undress to reply
Post by David Froble
Any money and effort in that direction would be a very bad idea, and
could threaten the future of VMS.
VMS has survived so much already; threat of a web browser won't kill it!
First, let's see how profitable VSI can be, before they build you your
personal browser ....
Post by Phillip Helbig---undress to reply
Post by David Froble
About the best you might get, is the capability to run non-VMS x86
software in some kind of environment that would support such, but, it
will still be non-VMS stuff. And I doubt VSI will waste any money on
such a capability.
There is nothing "magic" about a web browser. It's just an application.
Why shouldn't it run on VMS?
1) It implies a GUI, graphics, and such.

2) It's a moving target. Firefox V32, or V33, or v57 ??

3) How many people will use it? You got anyone else who has your dream?

I'll leave room for others to add to the list ..
Bob Gezelter
2014-10-19 14:03:35 UTC
Permalink
To all,

Robert asked a specific question; let's give him information that is on point.

I am emphatically not discouraging other discussions, but let's keep this thread on the subject of $GETDVI and put the other topics in different threads.

- Bob Gezelter, http://www.rlgsc.com
David Froble
2014-10-19 17:40:25 UTC
Permalink
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that
query again as a proud member of the VMS Software, Inc engineering staff!
This time, I'm also interested in suggestions for enhancements to
various utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of said
suggestion.
Ok, here is a problem I'm looking at. All, or most, of our customers
are using P400 RAID controllers with all disks mirrored. With mirrored
disks, you don't really lose the services when one of the disks goes bad.

Scenario, pretty much "lights out" operation. Someone goes into the
computer room for the first time in 6 weeks, comes back out and asks,
how long has the red light been on for Disk21?

There is a utility for checking the disks attached to the P400. It's
interactive only. Such deserves a resounding "WTF?"

I think it's MSA$UTIL, or something like that.

Now, all I'd be looking for is some programming interface where I can
run a job periodically, and if there is a problem with any disk, set off
the fire alarms, spam everyone in the company, and whatever else I can
think of. (Got a file of the Star Trek "Red Alert" sound bit.)

Having an interactive only utility is too much like Microsoft, where you
got to "click" to do anything ....
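
Concretely, the kind of periodic batch check being asked for might look
like the DCL sketch below. The "MIRROR_STATE" item-code name is invented
for illustration only; no such $GETDVI item code exists today, and the
device and procedure names are hypothetical:

```
$! Hypothetical: poll mirror-set health from batch and raise the alarm.
$! "MIRROR_STATE" is an invented item-code name, not a real one.
$ state = F$GETDVI("$1$DGA21:", "MIRROR_STATE")
$ IF state .NES. "NORMAL"
$ THEN
$    WRITE SYS$OUTPUT "*** RED ALERT: $1$DGA21: reports ''state'"
$    MAIL/SUBJECT="RED ALERT: Disk21 degraded" NL: SYSTEM
$ ENDIF
$ SUBMIT/AFTER="+0-01:00" DISK_CHECK.COM   ! resubmit, run again in an hour
```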
Rich Jordan
2014-10-21 15:40:54 UTC
Permalink
Post by David Froble
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions
regarding
new $GETDVI item codes. I received many good ideas over the years,
among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that
query again as a proud member of the VMS Software, Inc engineering staff!
This time, I'm also interested in suggestions for enhancements to
various utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of
said
suggestion.
Ok, here is a problem I'm looking at. All, or most, of our customers
are using P400 RAID controllers with all disks mirrored. With mirrored
disks, you don't really lose the services when one of the disks goes bad.
Scenario, pretty much "lights out" operation. Someone goes into the
computer room for the first time in 6 weeks, comes back out and asks,
how long has the red light been on for Disk21?
There is a utility for checking the disks attached to the P400. It's
interactive only. Such deserves a resounding "WTF?"
I think it's MSA$UTIL, or something like that.
Now, all I'd be looking for is some programming interface where I can
run a job periodically, and if there is a problem with any disk, set off
the fire alarms, spam everyone in the company, and whatever else I can
think of. (Got a file of the Star Trek "Red Alert" sound bit.)
Having an interactive only utility is too much like Microsoft, where you
got to "click" to do anything ....
David,
We work around this by using a Kermit script to run MSA$UTIL, dumping its output to a file, and parsing it. That script runs in batch once an hour, and it has been successful in letting us know when a drive goes offline (or a controller or the environmental temperature gets out of range on an MSA device). Sure, it's a kludge, but it works.

Robert,
I agree with Hoff; it would be nice to have this kind of information from drive controllers available programmatically (and DCL lexicals would be a very useful addition once the $GETDVI capabilities are added).
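
For what it's worth, the outer wrapper for that kind of workaround can be
a short DCL batch procedure. All file and procedure names below are
invented for illustration; the Kermit script itself is site-specific and
is assumed to leave the captured MSA$UTIL output in a log file:

```
$! Sketch of an hourly batch wrapper around an interactive-only utility.
$! KERMIT_MSA.COM (hypothetical) drives MSA$UTIL via Kermit and leaves
$! its captured output in MSA_STATUS.LOG.
$ SET NOON
$ @KERMIT_MSA.COM
$ SEARCH/OUTPUT=MSA_HITS.TMP MSA_STATUS.LOG "FAILED", "DEGRADED", "OFFLINE"
$ IF F$FILE_ATTRIBUTES("MSA_HITS.TMP", "EOF") .GT. 0
$ THEN
$    MAIL/SUBJECT="MSA drive problem" MSA_HITS.TMP SYSTEM
$ ENDIF
$ DELETE MSA_HITS.TMP;*
$ SUBMIT/AFTER="+0-01:00" MSA_CHECK.COM   ! run again in an hour
```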
Robert A. Brooks
2014-10-21 23:27:40 UTC
Permalink
Robert, I agree with Hoff; it would be nice to have this kind of information
from drive controllers available programmatically (and DCL lexicals would be a
very useful addition once the $GETDVI capabilities are added).
Anything done for SYS$GETDVI would also be available for F$GETDVI and LIB$GETDVI.
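
For an existing item code such as FREEBLOCKS, for example, the DCL side is
a one-liner; SYS$GETDVI and LIB$GETDVI callers use the matching
DVI$_FREEBLOCKS symbol from $DVIDEF:

```
$! The same item code, reached from DCL via the lexical:
$ free = F$GETDVI("SYS$SYSDEVICE", "FREEBLOCKS")
$ WRITE SYS$OUTPUT "Free blocks on the system disk: ''free'"
```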

The only exception is when I unofficially backport stuff that gets released
in various maintenance releases, due to the way US-based VMS Engineering
used to issue patch kits. This is because the base work for SYS$GETDVI would
get released in a SYS kit, and DCL lexicals would be updated in a DCL kit, and
we didn't release all the kits at the same time. I'm not even sure what kit
would have had a new LIB$ variant.

I'm not quite sure how VMS Software, Inc. will handle patch kit releases, but
please do remember that the likelihood of anything new for $GETDVI in the
coming V8.4-1H1 release next spring is quite low. Not zero, but low.

Thanks to everyone for all the suggestions; if I have any questions, I'll
contact the submitter directly.
--
-- Rob
Stephen Hoffman
2014-10-21 23:52:33 UTC
Permalink
I'm not even sure what kit would have had a new LIB$ variant.
LIBRTL
--
Pure Personal Opinion | HoffmanLabs LLC
Bob Gezelter
2014-10-21 14:31:25 UTC
Permalink
Post by Robert A. Brooks
Back in the day when I worked in VMS Engineering in Nashua, NH, I would
periodically poll the collective wisdom of comp.os.vms for suggestions regarding
new $GETDVI item codes. I received many good ideas over the years, among them
the LAN_* item codes that first appeared in V8.3 (and quietly backported
to V7.3-2).
It is with more pleasure than you can imagine that I get to make that query
again as a proud member of the VMS Software, Inc engineering staff!
This time, I'm also interested in suggestions for enhancements to various
utilities, such as $ SHOW DEVICE. The likelihood of my ever
implementing any of these ideas is directly proportional to the ease of said
suggestion.
Don't forget that our releases will be for IA64 only (until we release an x86
version). I'll offer any enhancements back to HP for inclusion in the Alpha
source tree.
There is no guarantee that any suggestion will ever be implemented, no matter
how reasonable it is.
It's highly unlikely that any new item codes will appear in our first release
in the Spring of 2015, given that our focus is Poulson-specific for that release.
OK, go ahead -- I'm listening!
--
Most everyone,

Again, I do not wish to be a scold, but Robert asked for $GETDVI-related additions, not a wish list of everything that could be done in the future.

Discussion of patch policy: Open a new thread
etc.: Open a new thread

On the subject of $GETDVI, I would not recommend additions that merely replace simple computations (e.g., free space = total - allocated). $GETDVI item codes should be limited to information that is not easily computed from other queries; they should be reserved for fundamentals.

There is no end to the variations of what can be computed, and we should not be cavalier in expanding the code base, for both maintenance and clarity reasons.

- Bob Gezelter, http://www.rlgsc.com
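
As an illustration of that point, a percentage-free figure is already
derivable in a few lines of DCL from the existing MAXBLOCK and FREEBLOCKS
item codes, with no new item code required:

```
$! Derived value: percent free, computed from existing item codes.
$ total = F$GETDVI("SYS$SYSDEVICE", "MAXBLOCK")
$ free  = F$GETDVI("SYS$SYSDEVICE", "FREEBLOCKS")
$ pct   = (free * 100) / total
$ WRITE SYS$OUTPUT "System disk is ''pct'% free"
```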
Bob Gezelter
2015-01-15 20:24:03 UTC
Permalink
-- Pyffle HQ -=- London, England -=- http://pyffle.com