Discussion:
Why is the VMS codebase apparently so convoluted ?
Simon Clubley
2017-01-12 01:22:35 UTC
[This was prompted by the shadow set driver discussion.]

Why is the VMS codebase apparently so convoluted?

We already know that the terminal driver kernel code is an
unchangeable mass of code, so it's very difficult to add
any new features such as editing across line boundaries.

We now discover that the shadow driver isn't that far behind,
and we have previously discovered (when talking about re-loadable
device drivers) that kernel code tends to jump around uncleanly
between different sections of code.

My question is: why?

Given the critical nature of systems running VMS, one would have
thought that highly modular and simple code (instead of monolithic
code filled with various tricks) would have been a highly desirable
design property.

Was VMS simply a victim of the limited hardware of the day,
needing to be made as small as possible (even at the possible
expense of future maintainability), or was it something else?

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Arne Vajhøj
2017-01-12 01:42:47 UTC
Post by Simon Clubley
[This was prompted by the shadow set driver discussion.]
Why is the VMS codebase apparently so convoluted?
We already know that the terminal driver kernel code is an
unchangeable mass of code, so it's very difficult to add
any new features such as editing across line boundaries.
We now discover that the shadow driver isn't that far behind,
and we have previously discovered (when talking about re-loadable
device drivers) that kernel code tends to jump around uncleanly
between different sections of code.
My question is: why?
Given the critical nature of systems running VMS, one would have
thought that highly modular and simple code (instead of monolithic
code filled with various tricks) would have been a highly desirable
design property.
Was VMS simply a victim of the limited hardware of the day,
needing to be made as small as possible (even at the possible
expense of future maintainability), or was it something else?
There is a reason why assembler code is extremely rare today.
Even well written assembler code is much harder to read
than HLL code.

I was not there, but I can guess:
* assembler was chosen for various reasons:
- skill sets available
- state of compilers when the work was done
- tradition at the time (late 70's)
* it evolved:
- reasonably simple and nice code to start with
- later, lots of enhancements had to be done
with very tight deadlines not permitting
a healthy refactoring of the code
- result: one big mess

It is just a guess, but variations of that have
been seen over and over again.

Arne
c***@gmail.com
2017-01-12 11:42:33 UTC
Post by Arne Vajhøj
Post by Simon Clubley
[This was prompted by the shadow set driver discussion.]
Why is the VMS codebase apparently so convoluted?
We already know that the terminal driver kernel code is an
unchangeable mass of code, so it's very difficult to add
any new features such as editing across line boundaries.
We now discover that the shadow driver isn't that far behind,
and we have previously discovered (when talking about re-loadable
device drivers) that kernel code tends to jump around uncleanly
between different sections of code.
My question is: why?
Given the critical nature of systems running VMS, one would have
thought that highly modular and simple code (instead of monolithic
code filled with various tricks) would have been a highly desirable
design property.
Was VMS simply a victim of the limited hardware of the day,
needing to be made as small as possible (even at the possible
expense of future maintainability), or was it something else?
There is a reason why assembler code is extremely rare today.
Even well written assembler code is much harder to read
than HLL code.
- skill sets available
- state of compilers when the work was done
- tradition at the time (late 70's)
- reasonably simple and nice code to start with
- later, lots of enhancements had to be done
with very tight deadlines not permitting
a healthy refactoring of the code
- result: one big mess
It is just a guess, but variations of that have
been seen over and over again.
Arne
At first I was going to ignore this, but it was thought-provoking enough that I started to formulate a response, which is now growing. I'll get it together within a day or so. It deserves a few insider answers without exposing all the dirty laundry.
Simon Clubley
2017-01-12 13:15:45 UTC
Post by c***@gmail.com
At first I was going to ignore this but it was thought provoking enough that
I started to formulate a response which is now growing. I'll get it together
within a day or so. It deserves a few insider answers without exposing all
the dirty laundry.
Thank you Clair.

I look forward to reading your comments and thank you for taking the
time to write what sounds like a detailed response.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
David Froble
2017-01-12 21:02:08 UTC
Post by c***@gmail.com
At first I was going to ignore this but it was thought provoking enough that
I started to formulate a response which is now growing. I'll get it together
within a day or so. It deserves a few insider answers without exposing all
the dirty laundry.
I'd rather you didn't ignore things; responses can be helpful to someone, or just
entertaining.

Dirty laundry, no problem, we're all part of the VMS family, right?
Robert A. Brooks
2017-01-12 21:50:29 UTC
Post by c***@gmail.com
At first I was going to ignore this but it was thought provoking enough that
I started to formulate a response which is now growing. I'll get it together
within a day or so. It deserves a few insider answers without exposing all
the dirty laundry.
Post by David Froble
I'd rather you didn't ignore things; responses can be helpful to someone, or just entertaining.
Dirty laundry, no problem, we're all part of the VMS family, right?
Perhaps, but you do not want to see what's under a relative's kimono.
--
-- Rob
IanD
2017-01-13 06:34:17 UTC
+1 (to David's comment)...

The problem could be that external folk, some with an axe to grind, may run with the bad publicity of VMS internals just to stick the knife into VMS.

Who knows what situations exist out there where someone could gain financially if VMS were to be totally transparent, without a fair context also being relayed as to its shortcomings.

Having said this, open source hides nothing, and in security, if you don't open your algorithms you don't get a look in, so both of these arenas have advanced to the point where open disclosure is seen as a way to increase overall quality.

To be fair to VSI, they have adopted a beast warts and all. I'm not expecting them to open up on the dirt that was made long ago inside VMS, but I remain hopeful that in time an open model, with a far greater audience participating in the development of VMS, eventuates, whether open source or a derivative thereof.
Arne Vajhøj
2017-01-13 14:04:22 UTC
Post by IanD
The problem could be that external folk, some with an axe to grind
may run with the bad publicity of VMS internals just to stick the
knife in to VMS
Find someone that has a large old code base without any bad
code and I will give you the name of a liar.

:-)

Arne
Stephen Hoffman
2017-01-13 15:47:39 UTC
Post by IanD
The problem could be that external folk, some with an axe to grind may
run with the bad publicity of VMS internals just to stick the knife in
to VMS
Find someone that has a large old code base without any bad code and I
will give you the name of a liar.
:-)
Checked with Margaret Hamilton?
--
Pure Personal Opinion | HoffmanLabs LLC
Bill Gunshannon
2017-01-13 17:59:41 UTC
Post by Stephen Hoffman
Post by IanD
The problem could be that external folk, some with an axe to grind
may run with the bad publicity of VMS internals just to stick the
knife in to VMS
Find someone that has a large old code base without any bad code and I
will give you the name of a liar.
:-)
Checked with Margaret Hamilton?
What does the Wicked Witch of the West have to do with it?

bill
Arne Vajhøj
2017-01-14 03:41:38 UTC
Post by Stephen Hoffman
Post by IanD
The problem could be that external folk, some with an axe to grind
may run with the bad publicity of VMS internals just to stick the
knife in to VMS
Find someone that has a large old code base without any bad code and I
will give you the name of a liar.
:-)
Checked with Margaret Hamilton?
No.

:-)

But I am not sure that she is so relevant to the topic.

Her project developed truly high-quality software at a
productivity rate of approx. 20 LOC per man-month.

Very relevant if discussing how to produce high quality
software.

But not so relevant if discussing long term maintenance
of large code bases.

300 KLOC, a lifetime of 9 years, practically only
one hardware platform, and used in production 9 times
over 4 years.

Arne
Chris
2017-01-14 17:48:49 UTC
Post by Arne Vajhøj
Her project developed true high quality software at a
productivity rate of approx. 20 LOC per man month.
A good production programmer can do much better than that and yes,
I read Brooks years ago :-)...

Chris
Arne Vajhøj
2017-01-15 00:58:36 UTC
Post by Chris
Post by Arne Vajhøj
Her project developed true high quality software at a
productivity rate of approx. 20 LOC per man month.
A good production programmer can do much better than that
In NASA quality? (where a software bug can mean a dead astronaut)

Several years later NASA did the space shuttle, and for that
the rumor says that each LOC cost $1500. Which seems
to be of the same order of magnitude as the Apollo numbers.

Arne
Chris
2017-01-16 23:00:01 UTC
Post by Arne Vajhøj
Post by Chris
Post by Arne Vajhøj
Her project developed true high quality software at a
productivity rate of approx. 20 LOC per man month.
A good production programmer can do much better than that
In NASA quality? (where a software bug can mean a dead astronaut)
Several years later NASA did the space shuttle, and for that
the rumor says that each LOC cost $1500. Which seems
to be of the same order of magnitude as the Apollo numbers.
Arne
Aerospace and other safety critical code is as much about the
process as the code itself. So yes, it would spend far more
time at the design stage and in testing. One project I worked
on years ago, written entirely in 6800 (8 bit) assembler did
involve simulation and test to check every single path through
the code for every module. For such systems, the actual time
spent writing the code is a small proportion of the total project
time.

For industry type embedded work, with a consistent methodology,
even if it's your own, you can get far better rates than that
and have it work first time...

Chris
c***@gmail.com
2017-01-16 23:54:06 UTC
The degree of complexity, ugliness, convolutedness, etc. does not matter at all unless it presents a problem.

In my opinion, the biggest stumbling block we run into on a regular basis is the original 32bitness of VAX/VMS and in most cases the MACRO32 code that surrounds it. There is nothing inherently bad about assembler but it does make it more difficult to make some types of changes.

Here are a few examples that make life difficult.

#1 - 32b LBN (degree of difficulty: BIG)

Promoting the 32b LBN to 64b in order to make the device drivers capable of dealing with volumes larger than 2TB is a big piece of work which is now underway. From design to final qualification it will take multiple people, developers and testers, about a year to complete the affected drivers, plus BACKUP, INIT, etc.

#2 - cluster interoperability (degree of difficulty: HUGE)

There is an issue we need to fix and we have tried multiple approaches within the current 32b framework and do not like any of them. The only concrete solution is promotion from 32b to 64b. Huge challenge making this work with prior versions but that is what is necessary. Taking the entire cluster down at once is not a viable option. This one requires some serious thinking but we will ultimately get there.

#3 - DCL (degree of difficulty: GIGANTIC)

There is just so much code in DCL that nothing is easy. There are no general answers, but something needs to be done and this is a constant topic of discussion. When we ported to Alpha we promoted everything we could from 32b to 64b, but when it comes to user interfaces those darn customers and backward compatibility keep getting in the way! In some areas we even did something special to make preserving the 32bitness easier. Ouch. Rewrite DCL in C? Great idea but highly unlikely. The addition/integration of another scripting language seems more likely.

#4 - increasing the file size beyond 1TB (degree of difficulty: TOO HORRIBLE TO CONTEMPLATE)

I suppose this is in the ‘never say never’ category but I do not see RMS and its close relatives being changed to increase the file size. I’d be happy to be proven wrong.
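For scale, the ceilings behind #1 and #4 fall straight out of block arithmetic. A quick sketch, assuming the usual 512-byte OpenVMS disk block and, for the file-size case, an effectively signed 32-bit block count (that reading of the 1TB figure is my inference, not something stated above):

```python
BLOCK_SIZE = 512  # bytes per OpenVMS disk block

# #1: a 32-bit LBN can address 2**32 blocks, so the largest volume is:
max_volume_bytes = 2**32 * BLOCK_SIZE
print(max_volume_bytes // 2**40, "TiB")  # 2 TiB -- the ~2TB volume ceiling

# #4: a ~1TB file limit is what you get if only 2**31 block numbers are
# usable (e.g. block counts treated as signed 32-bit values):
max_file_bytes = 2**31 * BLOCK_SIZE
print(max_file_bytes // 2**40, "TiB")  # 1 TiB -- the file-size ceiling
```

Promoting either quantity to 64 bits pushes the ceiling far beyond current media, which is part of why the driver work in #1 is worth a year of effort.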
David Froble
2017-01-17 02:07:11 UTC
Post by c***@gmail.com
The degree of complexity, ugliness, convolutedness, etc. does not matter at
all unless it presents a problem.
In my opinion, the biggest stumbling block we run into on a regular basis is
the original 32bitness of VAX/VMS and in most cases the MACRO32 code that
surrounds it. There is nothing inherently bad about assembler but it does
make it more difficult to make some types of changes.
Well, the name says it all ...

Macro 32
^^

It's a 32 bit "language". We're now in a 64 bit (or larger) world.

I won't say it's bad, I will say it's no longer appropriate.
Post by c***@gmail.com
Here are a few examples that make life difficult.
#1 - 32b LBN (degree of difficulty: BIG)
Promoting the 32b LBN to 64b in order to make the device drivers capable of
dealing with volumes larger than 2TB is a big piece of work which is now
underway. From design to final qualification it will take multiple people,
developers and testers, about a year to complete the affected drivers, plus
BACKUP, INIT, etc.
#2 - cluster interoperability (degree of difficulty: HUGE)
There is an issue we need to fix and we have tried multiple approaches within
the current 32b framework and do not like any of them. The only concrete
solution is promotion from 32b to 64b. Huge challenge making this work with
prior versions but that is what is necessary. Taking the entire cluster down
at once is not a viable option. This one requires some serious thinking but
we will ultimately get there.
#3 - DCL (degree of difficulty: GIGANTIC)
There is just so much code in DCL that nothing is easy. There are no general
answers, but something needs to be done and this is a constant topic of
discussion. When we ported to Alpha we promoted everything we could from 32b
to 64b but when it comes to user interfaces those darn customers and
backward compatibility keep getting in the way! In some areas we even did
something special to make preserving the 32bitness easier. Ouch. Rewrite DCL
in C? Great idea but highly unlikely. The addition/integration of another
scripting language seems more likely.
#4 - increasing the file size beyond 1TB (degree of difficulty: TOO HORRIBLE
TO CONTEMPLATE)
I suppose this is in the ‘never say never’ category but I do not see RMS and
its close relatives being changed to increase the file size. I’d be happy to
be proven wrong.
Perhaps one way to look at it is, as with some other things, RMS's time is past.

I've got a database product, implemented in 1984, in Macro-32. It was a rather
good product in 1984. That cannot be said in 2017. I would not consider any
"improvements", modifications, and such today. There are much better products
available.

Maybe just leave RMS alone. Those using it already have what they need.
Introduce better products for new uses.
Arne Vajhøj
2017-01-17 03:34:40 UTC
Post by David Froble
Perhaps one way to look at it is, as with some other things, RMS's time is past.
Maybe just leave RMS alone. Those using it already have what they need.
Introduce better products for new uses.
When you say replace RMS what do you mean?

* Replace file meta data (SEQ/REL/IDX, VAR/VFC/LF/CR/CRLF/FIX/UDF,
CR/FTN etc.)?
* Replace API (SYS$OPEN, ..., SYS$CLOSE)?
* Replace some of the stuff no longer needed?
* Combination?
* Everything?

Arne
David Froble
2017-01-17 06:30:25 UTC
Post by Arne Vajhøj
Post by David Froble
Perhaps one way to look at it is, as with some other things, RMS's time is past.
Maybe just leave RMS alone. Those using it already have what they need.
Introduce better products for new uses.
When you say replace RMS what do you mean?
* Replace file meta data (SEQ/REL/IDX, VAR/VFC/LF/CR/CRLF/FIX/UDF,
CR/FTN etc.)?
* Replace API (SYS$OPEN, ..., SYS$CLOSE)?
* Replace some of the stuff no longer needed?
* Combination?
* Everything?
Arne
Note, I didn't write "replace RMS". I suggested adding new capabilities,
whatever that may be.

As for your question:

To be honest, I really don't know.

Today's disks and file sizes are more than what my customers need. So I'm not
aware of what some people might need.

Steve has been campaigning for an RDBMS (or DBMS) to be part of the VMS
distribution. Perhaps this is the way to go.

If I was working on a new application today, I'd be looking very hard at some
database products. Right now I don't know much about them.

Do we still need some current capabilities on VMS? Yes, I think so. Temporary
and work files. Output text files. I cannot say if what's currently available
is adequate. Maybe yes, maybe no.
c***@gmail.com
2017-01-17 11:14:32 UTC
Post by David Froble
Perhaps one way to look at it is, as with some other things, RMS's time is past.
Maybe just leave RMS alone. Those using it already have what they need.
Introduce better products for new uses.
Precisely. That's the plan. I mentioned RMS because we are constantly being asked about the 1TB file size limit.
Jan-Erik Soderholm
2017-01-17 11:32:37 UTC
Post by c***@gmail.com
Post by David Froble
Maybe just leave RMS alone. Those using it already have what they
need. Introduce better products for new uses.
Precisely. That's the plan. I mentioned RMS because we are constantly
being asked about the 1TB file size limit.
Now, if we take Rdb as an example...

The core database files are not using RMS calls.
But any files used for table load/unload, database
export/import, or the main database backup file
use RMS. And there we will still have the 1TB
file limit.

Jan-Erik.
Bob Gezelter
2017-01-17 13:53:32 UTC
Post by Jan-Erik Soderholm
Post by c***@gmail.com
Post by David Froble
Maybe just leave RMS alone. Those using it already have what they
need. Introduce better products for new uses.
Precisely. That's the plan. I mentioned RMS because we are constantly
being asked about the 1TB file size limit.
Now, if we take Rdb as an example...
The core database files are not using RMS calls.
But any files used for table load/unload, database
export/import or the main database backup file
uses RMS. And there, we will still have the 1TB
file limit then.
Jan-Erik.
Jan-Erik,

I agree.

Large files come in two categories:

- Database container files
- Files related thereto (e.g., change logs, archives, transfer, backup, unload, work)

For those who disagree, try copying a directory containing database files. The 1TB limit is fairly pervasive. The change is not limited to RMS; user-visible data structures would need to be modified. What did you say about old programs without sources?

I would love a full embracive solution which was transparent to extant RMS-using programs. That is likely not happening (e.g. NOTE/POINT would have to change, which would impact non-RMS user data structures).

A set of RMS-64 (or while we were at it, RMS-128) calls. Probably doable with A LOT of effort, but that leaves unresolved the problem of "conversion by the sword" for user and third party code. Not a pretty thing to contemplate. Certainly, not compatible with existing RMS.

- Bob Gezelter, http://www.rlgsc.com
Jan-Erik Soderholm
2017-01-17 13:59:14 UTC
Post by Bob Gezelter
Post by Jan-Erik Soderholm
Post by c***@gmail.com
Post by David Froble
Maybe just leave RMS alone. Those using it already have what
they need. Introduce better products for new uses.
Precisely. That's the plan. I mentioned RMS because we are
constantly being asked about the 1TB file size limit.
Now, if we take Rdb as an example...
The core database files are not using RMS calls. But any files used
for table load/unload, database export/import or the main database
backup file uses RMS. And there, we will still have the 1TB file limit
then.
Jan-Erik.
Jan-Erik,
I agree.
- Database container files...
These are usually no major issue, since a database of that size
is more or less always split into multiple "storage areas", that
is, separate physical files. It is not unusual to have 10s or even
100s of separate "container" files for a single Rdb database.
Post by Bob Gezelter
- Files related thereto (e.g., change logs,
archives, transfer, backup, unload, work)
Yes, here are the issues...
Post by Bob Gezelter
For those who disagree, try copying a directory containing database
files. The 1TB limit is fairly pervasive. The change is not limited to
RMS, user visible data structures would need to be modified. What did
you say about old programs without sources?
I would love a full embracive solution which was transparent to extant
RMS-using programs. That is likely not happening (e.g. NOTE/POINT would
have to change, which would impact non-RMS user data structures).
A set of RMS-64 (or while we were at it, RMS-128) calls. Probably doable
with A LOT of effort, but that leaves unresolved the problem of
"conversion by the sword" for user and third party code. Not a pretty
thing to contemplate. Certainly, not compatible with existing RMS.
- Bob Gezelter, http://www.rlgsc.com
David Froble
2017-01-17 15:23:34 UTC
Post by Jan-Erik Soderholm
Post by Bob Gezelter
I would love a full embracive solution which was transparent to extant
RMS-using programs. That is likely not happening (e.g. NOTE/POINT would
have to change, which would impact non-RMS user data structures).
A set of RMS-64 (or while we were at it, RMS-128) calls. Probably doable
with A LOT of effort, but that leaves unresolved the problem of
"conversion by the sword" for user and third party code. Not a pretty
thing to contemplate. Certainly, not compatible with existing RMS.
I have no idea what will be included with the new file system VSI is working on.
Once some details are available, perhaps we can know what will be available.

However, note that using RMS to access on-disk files is not the only option on
VMS. I'm going to assume (a really bad thing to do) that the ability to handle
larger files will exist in the new file system. Anything else would be sort of lame.

While some products, Rdb for example, may expect to use RMS for some operations,
it should be possible to have Rdb (for example) use an alternative for such
operations. Such an alternative might be whatever capabilities are part of the
new file system. For example, if Oracle ports Rdb to x86, and the new file
system allows, part of the port might be options for other than RMS for
non-database files.
Jan-Erik Soderholm
2017-01-18 00:25:43 UTC
Post by David Froble
I have no idea what will be included with the new file system VSI is
working on. Once some details are available, perhaps we can know what will
be available.
I did post a VSI presentation regarding the new file system a few
weeks ago; maybe you were away on your Christmas holiday? :-)

Do read up on the posted material first so that you do not
have to make uninformed guesses.
Post by David Froble
However, note that using RMS to access on-disk files is not the only option
on VMS. I'm going to assume, (really bad thing to do), that the ability
for larger files will exist in the new file system.
No, not in "What you get", and it says so on the "What you don’t get"
page. It *is* listed on the "Future Opportunities" page. See:

VMS File System Update PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HBUUADUwNAVE


Here is the full post again.


Hi.

A few links to PDFs and MPEGs provided by the
Swedish HP-Connect office. Note that some of the
documents may be a year old or a little more...

Most of the MPEG presentations are in Swedish but
usually using English slides.

There are also some slides from the IKEA presentation.

Have a good new Year!

Jan-Erik.



PDF slides

VSI Multivendor Storage PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HAkUADUwNAVE

VSI OpenVMS Alpha PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HA0UADUwNAVE

Java 1.8 (Java 8) Update PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HBEUADUwNAVE

VMS File System Update PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HBUUADUwNAVE

VSI TCP/IP Stack & VCI 2.0 PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HBkUADUwNAVE

OpenVMS Rolling Roadmap PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HB0UADUwNAVE

IKEAs presentation PDF
http://www.hp-connect.se/lists/lt.php?id=cE4HCEUADUwNAVE


Videos
HPE IL / Hadoop MPEG4
Swedish talk, Englinsh slides.
http://www.hp-connect.se/lists/lt.php?id=cE4HCUUADUwNAVE

The portable C Compiler - PCC MPEG4
Swedish talk, Englinsh slides.
http://www.hp-connect.se/lists/lt.php?id=cE4EAEUADUwNAVE

OpenVMS runs on Integrity I4 MPEG4
Short intro in SWedish, talk and slides in English (Ken Surplice)
http://www.hp-connect.se/lists/lt.php?id=cE4EAUUADUwNAVE

OpenVMS roadmap MPEG4
Swedish talk, Englinsh slides.
http://www.hp-connect.se/lists/lt.php?id=cE4EAkUADUwNAVE

Porting OpenVMS to x86-64 MPEG4
Swedish talk, Englinsh slides.
http://www.hp-connect.se/lists/lt.php?id=cE4EA0UADUwNAVE

Heartbleed bug OpenVMS MPEG4
Swedish talk, English slides.
http://www.hp-connect.se/lists/lt.php?id=cE4FAkUADUwNAVE

HP OneView MPEG4
Swedish talk, English slides.
http://www.hp-connect.se/lists/lt.php?id=cE4ECEUADUwNAVE

Migrering till Linux MPEG4
Swedish talk, English slides.
http://www.hp-connect.se/lists/lt.php?id=cE4EBEUADUwNAVE

VMS Software Incs - VSI MPEG4
Swedish talk, English slides.
By Johan Gedda, Chairman of VSI and main investor.
http://www.hp-connect.se/lists/lt.php?id=cE4EBUUADUwNAVE

VMS Technical update - VSI MPEG4
English talk and slides, Clair Grant.
http://www.hp-connect.se/lists/lt.php?id=cE4EBkUADUwNAVE
Arne Vajhøj
2017-01-17 03:36:50 UTC
Post by c***@gmail.com
#3 - DCL (degree of difficulty: GIGANTIC)
There is just so much code in DCL that nothing is easy. There are no
general answers but something needs to be done and this is a constant
topic of discussion. When we ported to Alpha we promoted everything
we could from 32b to 64b but when it comes to user interfaces those
darn customers and backward compatibility keep getting in the way! In
some areas we even did something special to make preserving the
32bitness easier. Ouch. Rewrite DCL in C? Great idea but highly
unlikely.
How difficult would it be to express DCL as a grammar?
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
There are certainly some good candidates.

Just note that Microsoft's experience replacing CMD with VBS and later PS
has not been a huge success.

Arne
Simon Clubley
2017-01-17 18:55:45 UTC
Post by Arne Vajhøj
Post by c***@gmail.com
#3 - DCL (degree of difficulty: GIGANTIC)
There is just so much code in DCL that nothing is easy. There are no
general answers but something needs to be done and this is a constant
topic of discussion. When we ported to Alpha we promoted everything
we could from 32b to 64b but when it comes to user interfaces those
darn customers and backward compatibility keep getting in the way! In
some areas we even did something special to make preserving the
32bitness easier. Ouch. Rewrite DCL in C? Great idea but highly
unlikely.
How difficult would it be to express DCL as a grammar?
The grammar bit probably isn't the problem; it's likely to be all
the other stuff that DCL does on VMS. DCL is not just a normal Unix
style shell but is way more heavily integrated into VMS.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Arne Vajhøj
2017-01-17 19:08:17 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
#3 - DCL (degree of difficulty: GIGANTIC)
There is just so much code in DCL that nothing is easy. There are no
general answers but something needs to be done and this is a constant
topic of discussion. When we ported to Alpha we promoted everything
we could from 32b to 64b but when it comes to user interfaces those
darn customers and backward compatibility keep getting in the way! In
some areas we even did something special to make preserving the
32bitness easier. Ouch. Rewrite DCL in C? Great idea but highly
unlikely.
How difficult would it be to express DCL as a grammar?
The grammar bit probably isn't the problem; it's likely to be all
the other stuff that DCL does on VMS. DCL is not just a normal Unix
style shell but is way more heavily integrated into VMS.
If you have the grammar and generate the parser, then doing the
interaction with VMS itself should be a lot more manageable.

My guess would be that just having the grammar and a parser
in C, while keeping the interactions in Macro-32, would make maintenance
a lot easier.

Arne
Stephen Hoffman
2017-01-18 00:17:06 UTC
Post by Arne Vajhøj
How difficult would it be to express DCL as a grammar?
DCL permits what amounts to self-modifying code — there are procedures
that use this, too — and efforts toward (non-subset'd) DCL compilation
tend to go downhill from there.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-01-18 00:38:53 UTC
Post by Stephen Hoffman
Post by Arne Vajhøj
How difficult would it be to express DCL as a grammar?
DCL permits what amounts to self-modifying code — there are procedures
that use this, too — and efforts toward (non-subset'd) DCL compilation
tend to go downhill from there.
Yes. But I was not suggesting compilation. I was just suggesting
that the lexer and parser for the interpreter be generated from
grammar.

Arne
Stephen Hoffman
2017-01-18 00:58:28 UTC
Post by Arne Vajhøj
How difficult would it be to express DCL as a grammar?
DCL permits what amounts to self-modifying code — there are
procedures that use this, too — and efforts toward (non-subset'd) DCL
compilation tend to go downhill from there.
Yes. But I was not suggesting compilation. I was just suggesting that
the lexer and parser for the interpreter be generated from grammar.
Trying to build the front end of a compiler is little different than
building the rest of the compiler, in this case. The front end
doesn't have enough context to figure out what the code is really going
to do.

Each DCL statement can be varied at run-time.

It's very common to use ! as a symbolic replacement for some upcoming
DCL command that you don't want to execute, for instance; to stub out
something in a jump table, or some SET command or otherwise.

But that stubbing is just part of what can be done within DCL when the
code starts to modify itself, if the programmer is so inclined.

Most folks don't try and don't do this in DCL — some don't realize it's
even possible, and some that do are understandably horrified and avoid
this approach — but there's more than a little DCL code around that
does.

Static parsing will work for "subset" DCL, but won't work for the
general case; not until after the symbol substitution phases are
completed — at run-time.
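A contrived fragment (not from any real procedure) illustrates the point: the command verb itself is assembled by symbol substitution, so what the line means exists only at run-time.

```dcl
$ ! The verb comes from a symbol, so a static parser cannot know
$ ! what this line will do until substitution happens at run-time.
$ verb = "SHOW"
$ 'verb' TIME
$ !
$ ! Stubbing: set MAYBE to "!" to disable the command, "" to enable it.
$ maybe = "!"
$ 'maybe' SET VERIFY
```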

It'd be great to try to do this with flex and bison or such, but by the
time all the weird corner cases within DCL are dealt with — such as
that certain lexical functions are intentionally executed within valid
DCL comments — you're probably better off replacing DCL with something
else; with CCL (Clair's Command Language) or some PowerShell-like or
other new parser or some BSD-licensed sh-like shell or otherwise.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-01-17 03:39:51 UTC
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.

But I have a wild idea.

:-)

Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?

Arne
Simon Clubley
2017-01-17 18:57:16 UTC
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Arne Vajhøj
2017-01-17 19:09:44 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Interactive terminal limitations?

That is not a DCL problem, is it?

Arne
Simon Clubley
2017-01-17 22:04:14 UTC
Post by Arne Vajhøj
Post by Simon Clubley
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Interactive terminal limitations?
That is not a DCL problem, is it?
Some are and some are not. I've posted a list in response to
Brian's post.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
V***@SendSpamHere.ORG
2017-01-17 19:08:44 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
David Froble
2017-01-17 19:15:46 UTC
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
Now Brian, shirley you're aware that Simon's lifelong goals are to keep the
command line buffer around forever, and to edit lies up to 32767 characters.

:-)

I caught the misspelling "lies", but decided to leave it as is.

:-)
Johnny Billquist
2017-01-17 19:56:07 UTC
Post by David Froble
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
Now Brian, shirley you're aware that Simon's lifelong goals are to keep
the command line buffer around forever, and to edit lies up to 32767
characters.
...which don't have anything to do with DCL...

People need to know and understand what the issues are, and *where* they
are. DCL as a user interface does not really have that many issues, unless
you just dislike the syntax or something like that.

Most people actually have issues with scripting in DCL, which is
probably better solved by a new scripting language, or else have
problems more related to terminal handling, which is in the terminal driver.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Simon Clubley
2017-01-17 22:02:47 UTC
Post by David Froble
Now Brian, shirley you're aware that Simon's lifelong goals are to keep the
command line buffer around forever, and to edit lies up to 32767 characters.
Your problem is that you know me too well... :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Simon Clubley
2017-01-17 22:00:16 UTC
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
1) Editing lines longer than the current terminal width.

This is a terminal driver issue in VMS, but it could be fixed in
DCL if all else fails.

2) Preserving the command history between sessions.

This is a DCL issue.

When you log out, DCL needs to automatically append the commands
actually entered in that session only into a history file and
needs to restore them automatically at login. This needs to work
in the presence of multiple simultaneous DCL sessions, so DCL can't
simply overwrite the history file with the full command history
buffer loaded into one session.

DCL also needs to remove old commands as required from the history
file. A reasonable number of commands to keep is somewhere in the
1000 to 5000 range.
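As a rough sketch of the append-and-trim behaviour described above (Python, purely to illustrate the logic; the file name and limit are made up, and no file locking is shown, so two logouts at the same instant could still interleave):

```python
import os

# Hypothetical location and limit; DCL would pick its own.
HISTORY_FILE = os.path.expanduser("~/.dcl_history")
MAX_COMMANDS = 2000  # within the 1000-5000 range suggested above

def save_session_history(session_commands, path=HISTORY_FILE,
                         max_commands=MAX_COMMANDS):
    """At logout: append only this session's commands, then trim.
    Appending (rather than rewriting the whole in-memory buffer) is
    what keeps simultaneous sessions from clobbering each other."""
    with open(path, "a", encoding="utf-8") as f:
        for cmd in session_commands:
            f.write(cmd + "\n")
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    if len(lines) > max_commands:            # drop the oldest commands
        with open(path, "w", encoding="utf-8") as f:
            f.writelines(lines[-max_commands:])

def load_history(path=HISTORY_FILE):
    """At login: restore whatever history has accumulated."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            return [line.rstrip("\n") for line in f]
    except FileNotFoundError:
        return []
```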

3) A _sane_ way of incrementally searching the command history.

This is a DCL issue.

Compare bash's Ctrl-R incremental search with the required use of
DCL's recall commands.
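Once DCL holds the history as a list, the search itself is trivial; a sketch of what bash's Ctrl-R amounts to (Python, illustrative only):

```python
def incremental_search(history, fragment):
    """Return the most recent command containing `fragment`, narrowing
    as the user extends the fragment -- the essence of Ctrl-R."""
    for cmd in reversed(history):   # newest entries first
        if fragment in cmd:
            return cmd
    return None
```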

4) Bash style tab completion of filenames and directories when
entering commands.

This is a DCL issue.

Once you have used this, you never want to use a CLI which doesn't
have it.

5) Enhanced wildcard pattern matching of filenames when entering
commands.

This can either be a DCL issue or a program issue.

In a Unix shell, the matching filenames are expanded by the shell
so a program doesn't have to worry about the wildcard lookups.

In VMS, every single program which wants to support wildcard
filenames needs to have the code in it to do this.

It would make sense for DCL to optionally expand the list of
filenames found as the result of a wildcard lookup and pass
them on the command line to the program so each program doesn't
have to duplicate code.

For backwards compatibility, this could be a CLD option. This would
not handle the foreign command situation so maybe you could
optionally specify something when defining a foreign command.
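A sketch of that optional expansion step (Python, illustrative only; '%', the VMS single-character wildcard, is mapped onto glob's '?', and a real DCL would of course use RMS lookups and honour the CLD option mentioned above):

```python
import glob

def expand_wildcards(args):
    """Expand wildcard arguments before the program sees them, the
    way a Unix shell does.  A pattern matching nothing is passed
    through unchanged (bash's default behaviour)."""
    expanded = []
    for arg in args:
        if any(ch in arg for ch in "*%?"):
            matches = sorted(glob.glob(arg.replace("%", "?")))
            expanded.extend(matches if matches else [arg])
        else:
            expanded.append(arg)
    return expanded
```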

6) There is no 6) for now. The rest of my issues are scripting
related IIRC.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Johnny Billquist
2017-01-17 23:23:43 UTC
Post by Simon Clubley
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
1) Editing lines longer than the current terminal width.
This is a terminal driver issue in VMS, but it could be fixed in
DCL if all else fails.
I suspect trying to move this into DCL might be an ugly hack...
Post by Simon Clubley
2) Preserving the command history between sessions.
This is a DCL issue.
When you log out, DCL needs to automatically append the commands
actually entered in that session only into a history file and
needs to restore them automatically at login. This needs to work
in the presence of multiple simultaneous DCL sessions, so DCL can't
simply overwrite the history file with the full command history
buffer loaded into one session.
DCL also needs to remove old commands as required from the history
file. A reasonable number of commands to keep is somewhere in the
1000 to 5000 range.
This might be a combo of DCL and the terminal driver.
Post by Simon Clubley
3) A _sane_ way of incrementally searching the command history.
This is a DCL issue.
Compare bash's Ctrl-R incremental search with the required use of
DCL's recall commands.
Might also be a combo of DCL and the terminal driver.
Post by Simon Clubley
4) Bash style tab completion of filenames and directories when
entering commands.
This is a DCL issue.
Once you have used this, you never want to use a CLI which doesn't
have it.
Not sure where I'd place that...
Post by Simon Clubley
5) Enhanced wildcard pattern matching of filenames when entering
commands.
This can either be a DCL issue or a program issue.
In a Unix shell, the matching filenames are expanded by the shell
so a program doesn't have to worry about the wildcard lookups.
In VMS, every single program which wants to support wildcard
filenames needs to have the code in it to do this.
It would make sense for DCL to optionally expand the list of
filenames found as the result of a wildcard lookup and pass
them on the command line to the program so each program doesn't
have to duplicate code.
For backwards compatibility, this could be a CLD option. This would
not handle the foreign command situation so maybe you could
optionally specify something when defining a foreign command.
I think you got that backwards. In Unix, it's that every program that
wants it needs to implement it. However, most programs have no clue, and
the shells are expected to do the expansion before calling the program.

In VMS, this is done in RMS. No program has to implement this. It
already exists in the system and all programs can utilize it.

Which, in my opinion, is the correct way of doing it.

The Unix way is a cheap hack, with some funny properties.

Consider RENAME *.FOO *.BAR for example...
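The reason that command can't survive shell pre-expansion is that the program needs to see both patterns to pair old and new names. A minimal sketch of the pairing (Python, illustrative only; it handles just a '*' stem with differing extensions, nothing like RMS's general wildcard rules):

```python
import glob
import os

def rename_by_pattern(src_pattern, dst_pattern):
    """RENAME *.FOO *.BAR needs both patterns intact: each matched
    name keeps its stem and swaps the extension.  A shell that
    pre-expands *.FOO into a flat list of names has already thrown
    the pairing away."""
    src_ext = src_pattern.rsplit(".", 1)[1]
    dst_ext = dst_pattern.rsplit(".", 1)[1]
    for name in glob.glob(src_pattern):
        stem = name[: -(len(src_ext) + 1)]   # strip ".FOO"
        os.rename(name, stem + "." + dst_ext)
```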

Other annoying and/or funny effects of this is that things like ~ to
refer to your home directory also don't work in most programs. Another
thing the shell expands for you. Oh, and also environment variables.


And this is also why I'm not sure I would like to have filename
expansion placed in DCL. I would like it to work in my program prompting
as well. And then DCL is not even involved. Command line editing and
recall already works in my programs. However, search obviously does not.
So in general, I think these kind of functions should not be tied in
with DCL at all. It's generic services that should always be there.


All that said, I do agree with all your comments about things I would
like to see/have in my terminal interaction. It's a horrible life to
have to live without them.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
hb
2017-01-18 09:19:25 UTC
Post by Johnny Billquist
Post by Simon Clubley
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
That will handle the scripting side of things but it will not
handle all the UI limitations that DCL has.
Which are?
1) Editing lines longer than the current terminal width.
This is a terminal driver issue in VMS, but it could be fixed in
DCL if all else fails.
I suspect trying to move this into DCL might be an ugly hack...
Post by Simon Clubley
2) Preserving the command history between sessions.
This is a DCL issue.
When you log out, DCL needs to automatically append the commands
actually entered in that session only into a history file and
needs to restore them automatically at login. This needs to work
in the presence of multiple simultaneous DCL sessions, so DCL can't
simply overwrite the history file with the full command history
buffer loaded into one session.
DCL also needs to remove old commands as required from the history
file. A reasonable number of commands to keep is somewhere in the
1000 to 5000 range.
This might be a combo of DCL and the terminal driver.
Post by Simon Clubley
3) A _sane_ way of incrementally searching the command history.
This is a DCL issue.
Compare bash's Ctrl-R incremental search with the required use of
DCL's recall commands.
Might also be a combo of DCL and the terminal driver.
Post by Simon Clubley
4) Bash style tab completion of filenames and directories when
entering commands.
This is a DCL issue.
Once you have used this, you never want to use a CLI which doesn't
have it.
Not sure where I'd place that...
Post by Simon Clubley
5) Enhanced wildcard pattern matching of filenames when entering
commands.
This can either be a DCL issue or a program issue.
In a Unix shell, the matching filenames are expanded by the shell
so a program doesn't have to worry about the wildcard lookups.
In VMS, every single program which wants to support wildcard
filenames needs to have the code in it to do this.
It would make sense for DCL to optionally expand the list of
filenames found as the result of a wildcard lookup and pass
them on the command line to the program so each program doesn't
have to duplicate code.
For backwards compatibility, this could be a CLD option. This would
not handle the foreign command situation so maybe you could
optionally specify something when defining a foreign command.
I think you got that backwards. In Unix, it's that every program that
wants it needs to implement it. However, most programs have no clue, and
the shells are expected to do the expansion before calling the program.
In VMS, this is done in RMS. No program has to implement this. It
already exists in the system and all programs can utilize it.
The VMS file systems (ODS2/5) know about wildcards ('*' and '%'/'?'):
they are special and cannot be part of a filename. Unix file systems do
not have wildcards: '*' and '?' are legal characters in a filename.

On VMS the wildcard pattern matching of filenames is done in the file
system.

DCL knows that wildcards can be in a file specification and so accepts
these when an argument or qualifier value (as defined in the CLD) is a
file specification. DCL passes these wildcards to the program (so that
CLI$ routines can pick them up). VMS programs let (directly or via RMS
calls) the file system do the pattern matching.

On Unix/Linux usually the shell does the pattern matching. That's why
some people can use "echo *" instead of "ls" to list files in the
current directory. An example for a Unix utility which wants to do
pattern matching of filenames itself is "find". To pass it a wildcard
you have to quote it - one way or the other: "find . -name \*.txt".
Simon Clubley
2017-01-18 19:04:38 UTC
Post by Johnny Billquist
Post by Simon Clubley
1) Editing lines longer than the current terminal width.
This is a terminal driver issue in VMS, but it could be fixed in
DCL if all else fails.
I suspect trying to move this into DCL might be an ugly hack...
I strongly agree but this really does need fixing somewhere.
Post by Johnny Billquist
Post by Simon Clubley
2) Preserving the command history between sessions.
This is a DCL issue.
When you log out, DCL needs to automatically append the commands
actually entered in that session only into a history file and
needs to restore them automatically at login. This needs to work
in the presence of multiple simultaneous DCL sessions, so DCL can't
simply overwrite the history file with the full command history
buffer loaded into one session.
DCL also needs to remove old commands as required from the history
file. A reasonable number of commands to keep is somewhere in the
1000 to 5000 range.
This might be a combo of DCL and the terminal driver.
The terminal driver only keeps the current line; it does not keep
a record of multiple input lines. You need to keep the history in
your application/DCL itself if you want to maintain a history of
entered commands.
Post by Johnny Billquist
Post by Simon Clubley
3) A _sane_ way of incrementally searching the command history.
This is a DCL issue.
Compare bash's Ctrl-R incremental search with the required use of
DCL's recall commands.
Might also be a combo of DCL and the terminal driver.
It's only a terminal driver issue to the extent that the read needs
to complete immediately when the search character is entered.
Everything after that would be controlled by DCL.
Post by Johnny Billquist
Post by Simon Clubley
4) Bash style tab completion of filenames and directories when
entering commands.
This is a DCL issue.
Once you have used this, you never want to use a CLI which doesn't
have it.
Not sure where I'd place that...
Tab would cause a return to DCL immediately when pressed. Everything
else would be a DCL issue although you need to have a way to position
the cursor at a specific point in the line when the modified command
line is passed back to the terminal driver for further input.
Post by Johnny Billquist
Post by Simon Clubley
5) Enhanced wildcard pattern matching of filenames when entering
commands.
This can either be a DCL issue or a program issue.
In a Unix shell, the matching filenames are expanded by the shell
so a program doesn't have to worry about the wildcard lookups.
In VMS, every single program which wants to support wildcard
filenames needs to have the code in it to do this.
It would make sense for DCL to optionally expand the list of
filenames found as the result of a wildcard lookup and pass
them on the command line to the program so each program doesn't
have to duplicate code.
For backwards compatibility, this could be a CLD option. This would
not handle the foreign command situation so maybe you could
optionally specify something when defining a foreign command.
I think you got that backwards. In Unix, it's that every program that
wants it needs to implement it. However, most programs have no clue, and
the shells are expected to do the expansion before calling the program.
In VMS, this is done in RMS. No program has to implement this. It
already exists in the system and all programs can utilize it.
I think it's a matter of perspective although I do see why you
think this. With Unix you simply get a list of filenames which
you can process immediately in your code whereas with DCL you
have to add a little wildcard loop in every VMS program that
wants to use this functionality.
Post by Johnny Billquist
Which, in my opinion, is the correct way of doing it.
The Unix way is a cheap hack, with some funny properties.
Consider RENAME *.FOO *.BAR for example...
Other annoying and/or funny effects of this is that things like ~ to
refer to your home directory also don't work in most programs. Another
thing the shell expands for you. Oh, and also environment variables.
And this is also why I'm not sure I would like to have filename
expansion placed in DCL. I would like it to work in my program prompting
as well. And then DCL is not even involved. Command line editing and
recall already works in my programs. However, search obviously does not.
So in general, I think these kind of functions should not be tied in
with DCL at all. It's generic services that should always be there.
I wouldn't really have much of an issue with this approach provided
the available filename wildcarding was enhanced somewhere.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Craig A. Berry
2017-01-18 03:21:16 UTC
Post by Simon Clubley
all the UI limitations that DCL has.
Post by V***@SendSpamHere.ORG
Which are?
1) Editing lines longer than the current terminal width.
This is a terminal driver issue in VMS, but it could be fixed in
DCL if all else fails.
2) Preserving the command history between sessions.
This is a DCL issue.
I recently stumbled on the description of the Almquist shell ash whose
creator Kenneth Almquist purportedly thought that line editing and
command history both belonged in the terminal driver rather than the
shell. At least according to Wikipedia.[1] Apparently the dash
derivative of ash that is now the default /bin/sh on Ubuntu and
elsewhere didn't follow this precedent and does include those features
in the shell just like bash, etc. It doesn't really mean anything one
way or the other about how it should be done on VMS, but I was surprised
there's been that much variation of where it's done on Unix.

[1] https://en.wikipedia.org/wiki/Almquist_shell
Bob Koehler
2017-01-18 14:44:34 UTC
Post by Simon Clubley
2) Preserving the command history between sessions.
This is a DCL issue.
This can be a security issue, since DCL allows remote node usernames
and passwords to be entered in plain text for both DECnet and IP.

As in, for DECnet:

$ copy node"username password"::filespec localfile

for IP:

$ copy/ftp node"username password"::filespec localfile
Simon Clubley
2017-01-18 18:49:33 UTC
Post by Bob Koehler
Post by Simon Clubley
2) Preserving the command history between sessions.
This is a DCL issue.
This can be a security issue, since DCL allows remote node usernames
and passwords to be entered in plain text for both DECnet and IP.
$ copy node"username password"::filespec localfile
$ copy/ftp node"username password"::filespec localfile
In that case, all you would need is a DCL version of "unset HISTFILE".
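Another option is to filter such commands out before they are ever written; a sketch (Python, illustrative only; the pattern simply looks for a quoted string containing whitespace followed by '::', which is the shape a DCL access-control string takes):

```python
import re

# A quoted string containing whitespace, followed by '::' -- the shape
# of a DCL access-control string such as node"username password"::file.
ACCESS_CONTROL = re.compile(r'"[^"]*\s[^"]*"::')

def history_safe(command):
    """Return the command unchanged, or None if it embeds credentials
    and should be kept out of the history file."""
    return None if ACCESS_CONTROL.search(command) else command
```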

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Craig A. Berry
2017-01-18 04:01:35 UTC
Post by Arne Vajhøj
Post by c***@gmail.com
The addition/integration of another scripting language
seems more likely.
Obviously that will raise heated discussion about which language.
But I have a wild idea.
:-)
Have you considered approaching it slightly differently and not
providing a scripting language, but instead providing an extensive
library of VMS functions for the JVM platform, and then letting people
use Python, Ruby, Groovy, JavaScript, BeanShell or whatever they
prefer?
While there may be variants of some of those languages that run on the
JVM, it isn't the preferred implementation for most of them. But I agree
they shouldn't choose *a* language; macOS comes with Perl, Python, Ruby,
AppleScript, JavaScript, and various Unix shells pre-installed (there
may be others I'm unaware of). Why not include a few languages and maybe
pick one or two for support and internal use?
c***@gmail.com
2017-01-18 11:34:08 UTC
Why not include a few languages and maybe pick one or two for support and internal use?
That is exactly what we are planning and will likely settle on the details within the next few months.
Chris
2017-01-14 17:44:45 UTC
Post by Arne Vajhøj
- skill sets available
- state of compilers when work was done
- tradition at the time (late 70's)
- reasonable simple and nice code to start with
- later lots of enhancements had to be done
with very tight deadlines not permitting
a healthy refactoring of the code
- result one big mess
It is just a guess, but variation of that has
been seen over and over again.
Arne
Early Vax systems never had that much memory, nor was the performance
that good. In some cases, every machine cycle was important to get the
performance, and quality systems-programming compilers were non-existent.
It was not unusual to hand optimise the code to minimise execution
time, especially for interrupt handlers and network code. Assembler was
the only choice and would typically result in convoluted code and few
docs that only the original programmer could easily understand. Later
added to by people who only partly understood the code, then later
rewritten in C, whatever, also added to and you end up with code rot
that only a complete redesign can address. But companies are often in
a hurry, IME, and won't budget the time to do it right. Typical attitude
being: "See how much of the code from that project from a few years ago
you can cut and paste and use". Aaaarghhhh...


Chris
Stephen Hoffman
2017-01-12 16:28:23 UTC
Permalink
Raw Message
Post by Simon Clubley
Why is the VMS codebase apparently so convoluted ?
Operating systems are very complex constructs, and filled with
trade-offs. There are always trade-offs. There are decisions that
are the least-wrong among the bad-choices. There are the inevitable
compromises around available developers and scheduling; around what you
can get done, with the budget and time and staff you have.

In various cases not the least of which is shadowing, that code deals
with a simple-looking problem yet has to deal with errors arising from
networking and local hardware devices, NIC, simple controllers, RAID
controllers, disk devices, and has to do the appropriate thing in cases
such as whether the disk supports re-vectoring or not. Shadowing also
has to deal with various different hardware and firmware, some of which
is sometimes... in the most charitable of phrasing... somewhat odd.
Some of that hardware should be long gone, but — because there's little
precedent for deprecation — somebody's probably still using it. Some
devices and some configurations should have been yanked long ago.
I've worked with more than a few widgets that simply lock up — this
from user-mode code! — and you have to power-cycle the whole box to get
them back. There's more than a little of this hard-earned knowledge
baked into the code of shadowing and of OpenVMS in general, too. That
knowledge is really hard to replicate.

There's also that rewriting existing source code isn't often the best
use of anybody's time. The old code works. Better to spend the time
designing and working on a "strangler" than on a direct rewrite, if
you're going to undertake the effort. Don't just rewrite, make the
replacement substantially better. Then — and there's little precedent
for deprecation, though it has happened with the first-generation of
shadowing — deprecate and remove the old code. Don't patch problems in
customer- or business- or future-critical areas, design or redesign the
solution, provide fundamental and potentially marketable enhancements,
and then schedule and deprecate the problems.

As differentiated from rewriting, there's refactoring the source code.
The OpenVMS source code refactoring tools are entirely non-existent,
and the formatting tools are all add-ons. On other platforms, using
something as limited as EDT for source code development is akin to
using Notepad on Windows or even punched cards; absurd. But there
aren't particularly good development tools available. I've worked
with tools that are quite good at cleaning up code, too.

Then there's having the necessary schedule time available to refactor
the code. Once there's working and neatly formatted code and test
cases (and code reviews, where those are done), there's tremendous
pressure to move on to the next project. Not on re-solving the
problem, based on what the developer(s) have learned from the first
solution. But doing that refactoring later means you have to reset
your context and relearn the old code. Best to do that refactoring
immediately. I and most of you have had test and prototype code put
into production, too. Un-refactored, un-rewritten, "ship it" code, and
that sketchy code becomes permanent, and can and variously does come
back and bite... somebody; the developer, the end-user, and sometimes
the board when Brian Krebs calls up the PR folks. Technical debt
comes due. Always. The question then becomes whether or not the
original developer(s) or designer(s) — or the product or the whole
organization — is gone before the debt comes due.

One of the more pernicious problems here is hardware and software
compatibility. Compatibility with old hardware. That hits storage
more than you can imagine, as well as the terminal driver. There's
hardware-specific code in OpenVMS that goes back decades, and for
hardware that nobody maintaining and updating existing code or doing
new work even remotely cares about. Some old hardware has gotten
deprecated, such as the DEQNA. But that VT52 will probably still
work. Even DCL procedures are more difficult to tweak, because simple
changes can throw off tools that parse output. The MAIL rewrite ran
into these slight differences for instance, and more than a little work
went into the rewrite to avoid breaking existing tools. Effort that
didn't go to tasks such as integrating MIME into MAIL, which you'd
certainly want to do if you were setting out to strangle the old MAIL
application. Compatibility with old software, too.

There's no right decision around these compatibility trade-offs,
though. Only that not deprecating and not occasionally and
selectively breaking compatibility and deprecating older and
problematic hardware will eventually and inevitably occlude all
substantive future work. And that breaking too much, too fast and/or
with no easy migration will cause the customers to port to elsewhere.

If you can't deprecate and replace problematic areas of any operating
system or application — such as the known-to-be-insecure password hash,
for instance — you can only accrue complexity and technical debt, and
changes get more and more difficult and expensive and hazardous to
compatibility, and sooner or later you get into a situation where
developers can choose to make isolated changes such as adding metadata
storage into the LINKER, where designing and making more systemic
changes — such as enhancing the file system to provide a more generic
solution for metadata — means far more work and far more risk. As
another example, there's the spectacular 64-bit memory addressing
design in OpenVMS. One of the more brilliant efforts, and one that
allowed existing applications to be incrementally upgraded to 64-bit
addressing. Which also — remember, there are always trade-offs — left
OpenVMS with a completely hideous 64-bit native addressing scheme.

There are also issues around developer experience, too. Not to impugn
the VSI staff, but the experience of the VSI team is very much focused
in OpenVMS itself. Customers and users, too, become accustomed to how
OpenVMS works now. ACLs, for instance, ceased to be a competitive
differentiation well over a decade ago. Deep, but limited experience
is neither good nor bad, but it does tend to reduce the numbers of
different and new and variously better approaches that might be
incorporated. Microsoft is not the behemoth it once was, and — while
there are good ideas from Windows and Windows Server — there are more
than a few good ideas (and bad ideas to avoid) from packages and
products and tools elsewhere. More subtly, existing customers are
seldom a good source for suggestions around wholly new enhancements.
Incremental changes, removing pain points, sure. Wholly new features
or substantial updates? Not so much. And every single customer will
prefer to avoid changes. As happened at the boot camp, before the
folks even knew what the benefits of the suggested changes were, too.
Each vendor needs to look at their own products with some brutal
self-honesty, and also look around and learn from and incorporate the
best of the (relevant) advantages and disadvantages of other platforms
and packages.

Then there's having an idea about what the product can and cannot do,
and the ability to say "no". In a small company, that's exceedingly
difficult. Both for reasons of funding — VSI will be loath to turn
down changes associated with any substantive prospective sale, if they
can at all afford to make a profit from it. You can bet larger
organizations know this, too. This also includes any hardware or
software products that any substantial number of VSI customers need,
but that VSI does not themselves control, too. That's either in VSI's
own supply chain, or in the supply chains of VSI customers. You can
bet those vendors know their positional advantages here, too.

That's a short answer.

I have more than a little reading in this area, but here are a few
relevant to what's written above....

https://martinfowler.com/bliki/StranglerApplication.html
http://astyle.sourceforge.net
https://m.signalvnoise.com/position-position-position-34b510a28ddc
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2017-01-12 20:59:57 UTC
Permalink
Raw Message
Stephen Hoffman wrote:

TL; but I did read it ....

It occurs to me that with x86, some (much?) of the old HW just will not be able
to be connected to an x86 system. Also, VSI has indicated, with some
exceptions, they are looking at x86, not VAX, Alpha, or even itanic.

Now, to attempt to just jerk out anything which supported old HW can be
dangerous, just might break something you still need, and perhaps the effort is
better spent elsewhere.

Still, I see merit in Steve's suggestions that carrying forward all the
accumulated cruft and such may not be a good thing to do.

Brings up the tough questions, what do you attempt, and what do you just leave
alone?

As for convoluted, sometimes it must be.

As for readability of Macro-32, if well written, I don't find it hard to read,
or use, at all.
Simon Clubley
2017-01-13 14:10:45 UTC
Permalink
Raw Message
Post by David Froble
TL; but I did read it ....
It occurs to me that with x86, some (much?) of the old HW just will not be able
to be connected to an x86 system. Also, VSI has indicated, with some
exceptions, they are looking at x86, not VAX, Alpha, or even itanic.
Now, to attempt to just jerk out anything which supported old HW can be
dangerous, just might break something you still need, and perhaps the effort is
better spent elsewhere.
Still, I see merit in Steve's suggestions that carrying forward all the
accumulated cruft and such may not be a good thing to do.
The questions I have are in the areas of VMS which don't seem to have
any hardware compatibility issues. For example, the part of the terminal
driver which would have to be altered to do editing of long lines would
seem to be highly unconnected to specific terminal hardware.

For example, when the original terminal driver line editing code was
written, did it turn out to be something which was simply pushing the
bounds of what could be cleanly achieved in kernel space (the issues
with the code would be understandable to some extent in that case),
or was it the result of someone trying to be unjustifiably "clever"
and hence causing an unjustified major maintenance headache for
future maintainers ?

I won't comment further until after Clair's posted his response however.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2017-01-13 15:24:19 UTC
Permalink
Raw Message
Post by David Froble
TL; but I did read it ....
It occurs to me that with x86, some (much?) of the old HW just will not
be able to be connected to an x86 system. Also, VSI has indicated,
with some exceptions, they are looking at x86, not VAX, Alpha, or even
itanic.
Now, to attempt to just jerk out anything which supported old HW can be
dangerous, just might break something you still need, and perhaps the
effort is better spent elsewhere.
The Alpha and Itanium ports yanked a bunch of the old hardware, and
re-issued the prefix codes for a number of devices. There's no reason
to assume that x86-64 won't do the same. The driver prefixes and
facility prefixes and such never particularly got published, but that's
fodder for another discussion. That whole area would be better off
headed toward UUIDs for device and facility prefixes and to replace
UICs and such, but OpenVMS is stuck with the old ways for the
foreseeable future.
Post by David Froble
Still, I see merit in Steve's suggestions that carrying forward all the
accumulated cruft and such may not be a good thing to do.
Brings up the tough questions, what do you attempt, and what do you
just leave alone?
And what do you replace and deprecate, or just deprecate, and
eventually remove? These tasks and these efforts are among the most
ignored parts of any large and on-going software development project.
Code inevitably accretes, and sometimes not in a good way. You can
either accept the problems — an insecure password hash — or you can
work to remove the problems and/or provide far better approaches for
new work. This all trades off among the existing users and their
preferences, for easier future work among those same existing folks and
also among potential newer developers and projects. Backwards-looking
or forward-looking. Not an easy call, either way. Sometimes you have
to accept the problems. Sometimes you have to make incompatible
changes, or you undercut your own customers, and/or your own
development efforts, and/or your product positioning and related
marketing messages.
Post by David Froble
As for convoluted, sometimes it must be.
Ayup. For a number of situations — both in computing and in various
other realms and endeavors — the easy answer is wrong.
Post by David Froble
As for readability of Macro-32, if well written, I don't find it hard
to read, or use, at all.
The difference is that it takes a lot of code to do something simple,
which both means a whole lot of development work and more than a little
exposure to subtle bugs. The less glue code I have — to write, to
read, to debug, to test — the happier I am. This is really something
that is completely invisible to most any developer, until they get into
an environment that requires far less glue code and that provides far
more capable frameworks, too. I know I was blind to this stuff, until
I got slammed by some tool chain updates which eliminated a whole
shedload of this code. The source code results were... far nicer to
write, read and support.
--
Pure Personal Opinion | HoffmanLabs LLC
t***@glaver.org
2017-01-13 00:15:59 UTC
Permalink
Raw Message
Post by Stephen Hoffman
There's also that rewriting existing source code isn't often the best
use of anybody's time. The old code works. Better to spend the time
designing and working on a "strangler" than on a direct rewrite, if
you're going to undertake the effort. Don't just rewrite, make the
replacement substantially better. Then — and there's little precedent
for deprecation, though it has happened with the first-generation of
shadowing — deprecate and remove the old code. Don't patch problems in
customer- or business- or future-critical areas, design or redesign the
solution, provide fundamental and potentially marketable enhancements,
and then schedule and deprecate the problems.
As an example of what can go wrong, even back in the "glory days" of DEC and VMS - "new" BACKUP. Various unexpected conditions were discovered in the field, despite extensive internal / external testing, and it took a couple of VMS versions until it all got settled.
Stephen Hoffman
2017-01-13 15:43:02 UTC
Permalink
Raw Message
Post by t***@glaver.org
Post by Stephen Hoffman
There's also that rewriting existing source code isn't often the best
use of anybody's time. The old code works. Better to spend the time
designing and working on a "strangler" than on a direct rewrite, if
you're going to undertake the effort. Don't just rewrite, make the
replacement substantially better. Then — and there's little precedent
for deprecation, though it has happened with the first-generation of
shadowing — deprecate and remove the old code. Don't patch problems in
customer- or business- or future-critical areas, design or redesign the
solution, provide fundamental and potentially marketable enhancements,
and then schedule and deprecate the problems.
As an example of what can go wrong, even back in the "glory days" of
DEC and VMS - "new" BACKUP. Various unexpected conditions were
discovered in the field, despite extensive internal / external testing,
and it took a couple of VMS versions until it all got settled.
And looking at the results from out here, we didn't end up with
something that was substantially better than the earlier versions of
BACKUP. We did end up with one of the most intractable and complex
and inscrutable tools around, and — despite the substantial effort in
the help text and examples, and despite the BACKUP Manager — it's still
one of the more difficult tools for system managers to use correctly,
and one that requires more than a little effort to script, and — when
you look at dealing with MOUNT and DISMOUNT and archives and the utter
lack of RMS and application and third-party database integration — an
ongoing problem area for OpenVMS system managers. BACKUP solves what
it does quite well. But creating the accoutrements and related tasks
around anyone actually using that tool are very complex, as is the tool
itself. Customization is great. Right up until you're in a thicket,
with no clear paths forward, no defaults and no templates, no
integration, and whatnot.

Without going into details, a couple of those, um, ripples point
directly back to the complexity and inconsistencies of some of the APIs
and implementations within OpenVMS, too. Beyond BACKUP itself. To
the sorts of mistakes that even very experienced developers can make.
--
Pure Personal Opinion | HoffmanLabs LLC
d***@gmail.com
2017-01-13 15:51:31 UTC
Permalink
Raw Message
Post by Stephen Hoffman
And looking at the results from out here, we didn't end up with
something that was substantially better than the earlier versions of
BACKUP. We did end up with one of the most intractable and complex
and inscrutable tools around, and — despite the substantial effort in
the help text and examples, and despite the BACKUP Manager — it's still
one of the more difficult tools for system managers to use correctly,
and one that requires more than a little effort to script, and — when
you look at dealing with MOUNT and DISMOUNT and archives and the utter
lack of RMS and application and third-party database integration — an
ongoing problem area for OpenVMS system managers. BACKUP solves what
it does quite well. But creating the accoutrements and related tasks
around anyone actually using that tool are very complex, as is the tool
itself. Customization is great. Right up until you're in a thicket,
with no clear paths forward, no defaults and no templates, no
integration, and whatnot.
Without going into details, a couple of those, um, ripples point
directly back to the complexity and inconsistencies of some of the APIs
and implementations within OpenVMS, too. Beyond BACKUP itself. To
the sorts of mistakes that even very experienced developers can make.
--
Pure Personal Opinion | HoffmanLabs LLC
http://xkcd.com/1168/
Simon Clubley
2017-01-13 21:03:28 UTC
Permalink
Raw Message
Post by d***@gmail.com
http://xkcd.com/1168/
tar -jvcf dg.tar.bz2 dg/

Typed in less than 10 seconds and using what I know of tar without
having to look it up. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
c***@gmail.com
2017-01-13 22:29:39 UTC
Permalink
Raw Message
RE: "Why is the VMS codebase apparently so convoluted ?"

As a general statement about VMS I think that is way too harsh. We all have our pieces of the code we love to hate and think are deserving of an immediate dagger in the heart. There are certainly modules where some seriously unnatural acts are performed. But before we start thinking VMS is a pile of crap I’ll contend that those pockets of true ugliness are a minuscule portion of the overall system.

I was tempted to reply to Simon’s post with one word - MACRO32 - but that would be unfair in a number of ways. But, if I were to list 10 source modules in VMS that I would like to see magically re-written tomorrow I can guarantee you it would be a list of MACRO32 modules. Assembler, by its very nature, is just harder to create, understand, modify, add to, etc. than higher-level languages. One of our developers is fond of the phrase ‘waxy buildup’. Well, my observation is that waxy buildup is far worse in assembler over four decades than in BLISS or C.

In one of the books written about the creation of Windows NT there are a few lines in the beginning from Dave Cutler about VMS. He laments creating so much of VMS in assembler but adds that it was the only practical choice at the time. Completely understandable even to this day; as I have said many times, we don’t do computer science, we have to ship a product. Sometimes you just have to play with the cards you are dealt.

There are things in the original MACRO32 code that we changed (made better) in order to make it compilable for Alpha. We banned new code in MACRO32 except where it makes more sense to modify what exists. Even today we have work underway re-writing a major driver from MACRO32 to C. It is not uncommon to see a .MAP file containing a bunch of MACRO32 modules and one or more C modules (that’s where all the new stuff is).

The overall system is fairly well structured. It is a ton of code but the big picture is understandable and has not changed to any great degree since the exec re-org project in the early 80s which broke up the monolithic SYS.EXE into a few dozen execlets. For example, if you have a memory management project, you are working in the modules that make up SYS$VM.EXE, many of which are in C at this point. Is VMS modular? That might be a stretch. But it is organized well enough to have survived forty years of hundreds of developers working on it and still generally looks and feels like it always has.
Alan Greig
2017-01-13 23:02:16 UTC
Permalink
Raw Message
Post by c***@gmail.com
In one of the books written about the creation of Windows NT there are a few lines in the beginning from Dave Cutler about VMS. He laments creating so much of VMS in assembler but adds that it was the only practical choice at the time. Completely understandable even to this day; as I have said many times, we don’t do computer science, we have to ship a product. Sometimes you just have to play with the cards you are dealt.
Given that Dave Cutler seemingly shows no sign of stopping and has most recently been involved with designing the Microsoft Cloud Azure operating system and Playstation Hypervisor, co-designed the AMD X64 extension and is still apparently one of their top coders maybe you should send him some routines and say "Now's your chance" :-) ;-)

Wishing you the best of luck with the rest of the port.
Alan Greig
2017-01-13 23:07:11 UTC
Permalink
Raw Message
Post by Alan Greig
Given that Dave Cutler seemingly shows no sign of stopping and has most recently been involved with designing the Microsoft Cloud Azure operating system and Playstation Hypervisor,
Xbox Hypervisor of course unless he's already doing some serious moonlighting!
John Reagan
2017-01-13 23:40:34 UTC
Permalink
Raw Message
And you only see me pointing fingers at the ugly code. There are lots of well-written Macro-32 modules scattered around as well.

The "problem" with Macro-32 code is that assembly programmers are taught to be highly efficient. They'll jump from one routine to another just to avoid having to duplicate code. They'll tell themselves that they are using "modular" programming. Of course, all those cross-jumps might have been clever when first written but the overall complexity is horrible. That turns into the long term maintenance nightmare that Clair mentions (and makes the Macro compiler do back flips to keep things working - you'd be impressed by the compiler's flow analysis code [written in C]).

The addition of 64-bit addresses and operations on Alpha created all those Macro-32 EVAX_ builtins, which meant people went in and touched many modules to make them 64-bit aware. That extra clutter makes the code even harder to read and understand, since EVAX_ builtins really don't behave like a VAX (they set no condition codes, for example).
Craig A. Berry
2017-01-14 00:18:52 UTC
Permalink
Raw Message
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient.
Does the manual selection of registers in MACRO-32 (or BLISS, for that
matter) actually hurt performance compared to what a true HLL does on a
modern architecture, or are the registers in those languages so
virtualized that it amounts to the same thing as compiler-optimized
register usage?
abrsvc
2017-01-14 01:40:40 UTC
Permalink
Raw Message
With all this discussion, we need to keep in mind that compiler optimization was not at the same level it is now. Macro routines were necessary to get the performance needed for many routines. Even with the Alpha, there were some modules coded in M64 to get the best performance.

With more modern hardware and with the code now generated with modern compilers, the performance difference is much less. HLL coding is preferred as easier to maintain.

Dan
Bob Koehler
2017-01-17 14:45:55 UTC
Permalink
Raw Message
With all this discussion, we need to keep in mind that compiler optimization was not at the same level it is now. Macro routines were necessary to get the performance needed for many routines. Even with the Alpha, there were some modules coded in M64 to get the best performance.
With more modern hardware and with the code now generated with modern compilers, the performance difference is much less. HLL coding is preferred as easier to maintain.
Dan
I look at code generated by a lot of compilers. It's amazing how
many times the compilers won't even do the most obvious re-use of
temporary results.

Good compilers will generate code that's amazing, instead.
John Reagan
2017-01-14 14:28:16 UTC
Permalink
Raw Message
Post by Craig A. Berry
Does the manual selection of registers in MACRO-32 (or BLISS, for that
matter) actually hurt performance compared to what a true HLL does on a
modern architecture, or are the registers in those languages so
virtualized that it amounts to the same thing as compiler-optimized
register usage?

There is some truth to that. We keep the registers pretty much intact since they carry over when you call other Macro routines. BLISS global register usage tends to be limited to custom calling sequences. Macro doesn't do things like loop hoisting, common subexpressions, etc.
Bob Koehler
2017-01-17 14:44:01 UTC
Permalink
Raw Message
Post by Craig A. Berry
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient.
Does the manual selection of registers in MACRO-32 (or BLISS, for that
matter) actually hurt performance compared to what a true HLL does on a
modern architecture, or are the registers in those languages so
virtualized that it amounts to the same thing as compiler-optimized
register usage?
I'm not sure I follow your question, but
1) good macro programmers can be very good at allocating registers,
but they are human and can occasionally miss something
2) the compilers I've looked at know all the tricks the good macro
programmers rely on, and never tire of chasing every possibility
to its conclusion
Johnny Billquist
2017-01-17 19:58:16 UTC
Permalink
Raw Message
Post by Bob Koehler
Post by Craig A. Berry
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient.
Does the manual selection of registers in MACRO-32 (or BLISS, for that
matter) actually hurt performance compared to what a true HLL does on a
modern architecture, or are the registers in those languages so
virtualized that it amounts to the same thing as compiler-optimized
register usage?
I'm not sure I follow your question, but
1) good macro programmers can be very good at allocating registers,
but they are human and can occasionally miss something
2) the compilers I've looked at know all the tricks the good macro
programmers rely on, and never tire of chasing every possibility
to it's conclusion
While compilers certainly can be very clever, there are some tricks
assembler programmers sometimes use which no compiler can manage.
The most obvious one being when you dedicate a register to some specific
value/use, even across several modules.
A compiler doesn't have enough scope to pull that one off.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
John Reagan
2017-01-17 20:22:51 UTC
Permalink
Raw Message
Post by Johnny Billquist
While compilers certainly can be very clever, there are some tricks
assembler programmers sometimes use which no compiler can manage.
The most obvious one being when you dedicate a register to some specific
value/use, even across several modules.
A compiler don't have enough scope to pull that one off.

Sure they do. Lots of compilers do interprocedural analysis and customize calling sequences and register allocation. Go to the LLVM website, look at the videos from the Fall conference and watch the video on 'LiteLTO'


Johnny Billquist
2017-01-17 20:55:36 UTC
Permalink
Raw Message
Post by John Reagan
Post by Johnny Billquist
While compilers certainly can be very clever, there are some tricks
assembler programmers sometimes use which no compiler can manage.
The most obvious one being when you dedicate a register to some specific
value/use, even across several modules.
A compiler don't have enough scope to pull that one off.
Sure they do. Lots of compilers do interprocedural analysis and customize calling sequences and register allocation. Go to the LLVM website, look at the videos from the Fall conference and watch the video on 'LiteLTO'
http://youtu.be/9OIEZAj243g
Well, I was meaning across several modules that are in separate files.
Unless you are saying that the compiler can somehow know of other
modules that were compiled in a separate compilation, which would still
require static analysis in each compilation to try and figure out if
something actually could make sense to allocate constantly to some value
without knowing all the code. And then preserve this information for
other compilations, so that they know about this. And this in turn would
also cause some very strange and interesting effects for routines in
libraries...

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Craig A. Berry
2017-01-17 21:59:49 UTC
Permalink
Raw Message
Post by Johnny Billquist
Post by John Reagan
Post by Johnny Billquist
While compilers certainly can be very clever, there are some tricks
assembler programmers sometimes use which no compiler can manage.
The most obvious one being when you dedicate a register to some specific
value/use, even across several modules.
A compiler don't have enough scope to pull that one off.
Sure they do. Lots of compilers do interprocedural analysis and customize
calling sequences and register allocation.
Well, I was meaning across several modules that are in separate files.
Unless you are saying that the compiler can somehow know of other
modules that were compiled in a separate compilation
It might be more of a linker thing than a compiler thing per se, but it's now pretty common:

https://en.wikipedia.org/wiki/Interprocedural_optimization

http://clang.llvm.org/docs/ThinLTO.html
Johnny Billquist
2017-01-17 23:31:38 UTC
Post by Johnny Billquist
Post by Johnny Billquist
Post by Johnny Billquist
While compilers certainly can be very clever, there are some tricks
assembler programmers sometimes use which no compiler can manage.
The most obvious one being when you dedicate a register to some specific
value/use, even across several modules.
A compiler doesn't have enough scope to pull that one off.
Sure they do. Lots of compilers do interprocedural analysis and customize
calling sequences and register allocation.
Well, I was meaning across several modules that are in separate files.
Unless you are saying that the compiler can somehow know of other
modules that were compiled in a separate compilation
Yes, the linker stage is potentially a more viable place to work at
this, but maybe I'm behind the curve; my understanding is that IPO
can deal with dead code elimination, constant reuse, code duplication
and function inlining, which, while some work, are still pretty
straightforward to actually do.
More deep optimization, such as changing register allocation, is not so
trivial, and I do not think IPO does that. It could potentially require
whole modules to be recompiled if it did.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Simon Clubley
2017-01-17 21:23:42 UTC
Post by John Reagan
Sure they do. Lots of compilers do interprocedural analysis and
customize calling sequences and register allocation. Go to the LLVM
website, look at the videos from the Fall conference and watch the
video on 'ThinLTO'
http://youtu.be/9OIEZAj243g
The optimisation stuff in LLVM is very, very good.

A few months ago, I wrote a toy compiler using LLVM as the backend
(just to learn this aspect of using LLVM) and when I started ramping
up the set of optimisation passes I was calling, LLVM started doing
various things which I would be unlikely to do manually in assembly
language. (And this was just with the simple passes; I never even
bothered looking at the full range of optimisations available.)

For example, this was one of my test programs:

=========================================================================
Procedure llvm_test is

i, x, y : Integer; -- Comment at end of line
mval : Integer;

begin

x := 42;
y := x * 5;
println(y);

mval := 3;
mval := mval - 7;
println(mval);

i := 0;
loop
exit when i = 10;

y := i * 5;
println(y);
i := i + 1;
end loop;

End llvm_test;
=========================================================================

As well as the obvious compile time optimisations available (and LLVM
discovered them), LLVM also removed the multiply and turned the loop
into one where the single counter in the generated code was advanced
by 5 on each pass and exited the loop once 50 had been reached.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Craig A. Berry
2017-01-17 22:07:26 UTC
Post by Bob Koehler
Post by Craig A. Berry
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient.
Does the manual selection of registers in MACRO-32 (or BLISS, for that
matter) actually hurt performance compared to what a true HLL does on a
modern architecture, or are the registers in those languages so
virtualized that it amounts to the same thing as compiler-optimized
register usage?
I'm not sure I follow your question, but
1) good macro programmers can be very good at allocating registers,
but they are human and can occasionally miss something
2) the compilers I've looked at know all the tricks the good macro
programmers rely on, and never tire of chasing every possibility
to its conclusion
Good MACRO-32 programmers can never do more than pick the best VAX registers, which are probably mapped to a subset of the native registers on non-VAX architectures, and are unlikely to be selected as optimally as what a native optimizing compiler with knowledge of the target architecture can manage.
David Froble
2017-01-14 05:50:57 UTC
Post by John Reagan
And you only see me pointing fingers at the ugly code. There are lots of
well-written Macro-32 modules scattered around as well.
I think so.
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are taught to
be highly efficient. They'll jump from one routine to another just to avoid
having to duplicate code.
Not everyone.

As has been stated many times, one can write good, or bad, code in just about
any language.

If a coder wants to write maintainable code in Macro-32, it can be done.
Post by John Reagan
They'll tell themselves that they are using
"modular" programming. Of course, all those cross-jumps might have been
clever when first written but the overall complexity is horrible. That turns
into the long term maintenance nightmare that Clair mentions (and makes the
Macro compiler do back flips to keep things working - you'd be impressed by
the compiler's flow analysis code [written in C]).
Is that part of the compiler? Or something you have for analysis? I'd like to
check out some of my stuff, if I knew how, and had the product.
Post by John Reagan
The addition of 64-bit addresses and operations in Alpha created all those
Macro-32 EVAX_ builtins which meant people went in and touched many modules
to make it 64-bit aware. That extra clutter makes it even harder to read and
understand since EVAX_ builtins really don't behave like a VAX (none of them
condition codes for example).
Well, Macro-32 was for VAX, as you can guess from its name. Sure, you did
something rather brilliant with the compiler for Alpha, itanic, and x86.
Prudent, but, really, not good practice. I'm rather glad you did it, since
otherwise I'd have a bunch of stuff to re-write, and it might not have happened.
John Reagan
2017-01-14 14:51:27 UTC
Post by David Froble
Is that part of the compiler? Or something you have for analysis? I'd like to
check out some of my stuff, if I knew how, and had the product

The compiler always builds a flow graph and identifies basic blocks. We track registers coming into the BB, registers written, condition codes used, etc. We need that to aid in determining which registers can be used as compiler temporaries for computing complex addressing modes. We also need to figure out which instructions need to create condition codes and whether we have to put those into temporaries in the event you do a CMPL, jump to a label, and then do something based on those CCs.

And on Itanium, there is more analysis work for the extra handling of NaTs and to ensure that cross-jumping routines all agree on the output register numbering.

The internal compiler prints out statistics in the listing file about the 'complexity', which is where I learned of the shadow driver.
Stephen Hoffman
2017-01-17 22:58:19 UTC
Post by David Froble
Post by John Reagan
And you only see me pointing fingers at the ugly code. There are lots
of well-written Macro-32 modules scattered around as well.
I think so.
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient. They'll jump from one routine to
another just to avoid having to duplicate code.
Not everyone.
As has been stated many times, one can write good, or bad, code in just
about any language.
If a coder wants to write maintainable code in Macro-32, it can be done.
Ayup. But what John is referring to goes beyond good and bad
code; it's also that Macro32 has far more glue code than BASIC. It
takes more lines of code to do less. Or similarly with C code,
as compared with Objective C code, for that matter. Though Macro32
and — to a lesser degree, C — also expressly allow the programmer to do
things that BASIC just doesn't allow, or has to work (more) at. This
ability is a central part of being more useful for system-level
programming. That there are different targets for different languages
is obvious to us all, of course. The same code abstractions hold for
BASIC programming as compared with newer languages and frameworks, too.
More BASIC code for the developer to write and maintain, as compared
with newer and increasingly higher-level abstractions available in
other environments.
Post by David Froble
Post by John Reagan
They'll tell themselves that they are using "modular" programming. Of
course, all those cross-jumps might have been clever when first written
but the overall complexity is horrible. That turns into the long term
maintenance nightmare that Clair mentions (and makes the Macro compiler
do back flips to keep things working - you'd be impressed by the
compiler's flow analysis code [written in C]).
Is that part of the compiler? Or something you have for analysis? I'd
like to check out some of my stuff, if I knew how, and had the product.
It's what are called coroutines and various other stack or low-level
coding techniques. Jumping between subroutines, for instance,
entering one and using the return from another subroutine. JSB
subroutine linkages. Pushing or popping call frames onto the stack
and transferring control from... somewhere... to somewhere else...
without the compiler having a good idea of what's going on. (Remember
those null pointers I was grumbling about? Similar sorts of low-level
ugly messes, largely where the compiler can't help the developer, or is
configured or designed to not help.) Among others. BASIC will toss
a snit if you try most of that, and you have to use some of the
language extensions to get to some of that in C.
Post by David Froble
Post by John Reagan
The addition of 64-bit addresses and operations in Alpha created all
those Macro-32 EVAX_ builtins which meant people went in and touched
many modules to make it 64-bit aware. That extra clutter makes it even
harder to read and understand since EVAX_ builtins really don't behave
like a VAX (none of them condition codes for example).
Well, Macro-32 was for VAX, as you can guess from its name. Sure, you
did something rather brilliant with the compiler for Alpha, itanic, and
x86. Prudent, but, really, not good practice. I'm rather glad you did
it, since otherwise I'd have a bunch of stuff to re-write, and it might
not have happened.
More than a little of the VAX Macro32 I've looked at can be replaced by
system services and newer RTL calls. The Macro32 was written for and
in an earlier era, and those other calls often just didn't exist yet.
But it's also a case of how the whole 32-bit to 64-bit migration was
architected in OpenVMS, too. Which was great for compatibility with
existing code. But that design is also going to have fallout for
updating existing code and writing new code for the foreseeable future,
too; much more gnarly code, going forward. Not to imply that Macro32
— 32-bit VAX assembler — was ever a particularly good candidate for
code that was going to be handling 64-bit addressing, so that was
always going to be somewhat of a train-wreck.
--
Pure Personal Opinion | HoffmanLabs LLC
Chris
2017-01-14 18:07:40 UTC
Post by John Reagan
And you only see me pointing fingers at the ugly code. There are lots of well-written Macro-32 modules scattered around as well.
The "problem" with Macro-32 code is that assembly programmers are taught to be
highly efficient.

They'll jump from one routine to another just to avoid having to
duplicate code.

That's correct, but there's no reason why you can't write assembler
modules with a single entry and exit point, and the overhead on
any cpu these days is negligible, and was even in the days when I
programmed macro for RT11 and RSX. Only start being clever if there
is no other way. Many programmers fought against the discipline of
structured programming, but once into the flow, it becomes second
nature and is much easier to maintain...

Chris
m***@gmail.com
2017-01-15 06:24:30 UTC
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are taught to be highly efficient. They'll jump from one routine to another just to avoid having to duplicate code. They'll tell themselves that they are using "modular" programming. Of course, all those cross-jumps might have been clever when first written but the overall complexity is horrible. That turns into the long term maintenance nightmare that Clair mentions (and makes the Macro compiler do back flips to keep things working - you'd be impressed by the compiler's flow analysis code [written in C]).
It looks like VMS developers would do themselves a favor by straightening a lot of this out. Memory is far less limited than it used to be. Sure, the work means a little extra time now, but if it avoids a greater amount of time down the track ...?

Also, regarding some of these ancient device drivers, I'm wondering whether not only the drivers should be conditionally activated, but perhaps also some of the decision-making software further back up the code path. What's the point of code, frequently executed long before you ever reach the device driver, that takes certain actions in case you are running an RA81 disk drive, if you don't have any RA81s?
Paul Sture
2017-01-15 06:55:32 UTC
Post by m***@gmail.com
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient. They'll jump from one routine to
another just to avoid having to duplicate code. They'll tell
themselves that they are using "modular" programming. Of course, all
those cross-jumps might have been clever when first written but the
overall complexity is horrible. That turns into the long term
maintenance nightmare that Clair mentions (and makes the Macro
compiler do back flips to keep things working - you'd be impressed by
the compiler's flow analysis code [written in C]).
It looks like VMS developers would do themselves a favor by straightening a
lot of this out. Memory is far less limited than it used to be. Sure
the work means a little extra time now but if it avoids a greater
amount of time down the track ...?
Also, on the provision of some of these ancient device drivers, I'm
wondering if not only drivers should be conditionally activated but
perhaps also some decision-making software back up the code path.
What's the point of code, frequently executed long before you get to
the device driver, that takes certain action in case you are running
an RA81 disk drive, if you don't have any RA81s?
s/decision-making software/AI/ and you have some marketing to ride on
the wave of the media's current fascination with the concept of AI.

I wish I were only joking here.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
John Reagan
2017-01-15 15:01:17 UTC
Post by m***@gmail.com
It looks like VMS developers would do themselves a favor by straightening a lot of this out. Memory is far less limited than it used to be. Sure, the work means a little extra time now, but if it avoids a greater amount of time down the track ...?
That's why I have a bounty on Macro-32 code being rewritten into anything else. :-)

Trimming out dead code isn't much of a payback since, as you say, memory isn't an issue. Cleaning RD53 support out of some larger driver might help, but most of that is just table driven. It is the complex control flow that often requires a change in algorithm when rewriting into C.
j***@yahoo.co.uk
2017-01-15 19:38:01 UTC
Post by John Reagan
Post by m***@gmail.com
It looks like VMS developers would do themselves a favor by straightening a lot of this out. Memory is far less limited than it used to be. Sure, the work means a little extra time now, but if it avoids a greater amount of time down the track ...?
That's why I have a bounty on Macro-32 code being rewritten into anything else. :-)
Trimming out dead code isn't much of a payback since, as you say, memory isn't an issue. Cleaning RD53 support out of some larger driver might help, but most of that is just table driven. It is the complex control flow that often requires a change in algorithm when rewriting into C.
At last! Correct me if I'm wrong, but this discussion so
far has focused largely on *code*, perhaps because of the
way the question was originally framed. Code isn't the
same as algorithms, but algorithms and the associated data
do make programs (to misquote a book title from years ago).

So, here's the obvious variation, made explicit:

Are many of the *algorithms* (and data structures) in VMS
convoluted, or is any convolutedness mostly just in the
legacy code?

And does the answer matter much either way, in 2017 after
two and a half (soon three?) technically-successful ports
to new architectures (or even in 2022, whatever IT may
bring)?

Have a lot of fun.
Simon Clubley
2017-01-16 01:34:47 UTC
Post by j***@yahoo.co.uk
Are many of the *algorithms* (and data structures) in VMS
convoluted, or is any convolutedness mostly just in the
legacy code?
And does the answer matter much either way, in 2017 after
two and a half (soon three?) technically-successful ports
to new architectures (or even in 2022, whatever IT may
bring)?
It matters if the code stops you from easily adding in new
functionality to match other operating systems because it's
too tricky to alter the existing code.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2017-01-17 23:49:21 UTC
Post by m***@gmail.com
Post by John Reagan
The "problem" with Macro-32 code is that assembly programmers are
taught to be highly efficient. They'll jump from one routine to
another just to avoid having to duplicate code. They'll tell
themselves that they are using "modular" programming. Of course, all
those cross-jumps might have been clever when first written but the
overall complexity is horrible. That turns into the long term
maintenance nightmare that Clair mentions (and makes the Macro compiler
do back flips to keep things working - you'd be impressed by the
compiler's flow analysis code [written in C]).
It looks like VMS developers would do themselves a favor by straightening a
lot of this out. Memory is far less limited than it used to be. Sure
the work means a little extra time now but if it avoids a greater
amount of time down the track ...?
If you start down that path, you're probably going to be writing and
using Macro32 code-refactoring tools, and the effort involved in more
than a little of the Macro32 refactoring I've worked on approaches or
exceeds rewriting the code. Once that level of effort is in
consideration, you might as well target moving to C11, C++14, Rust or
such — languages and tools that newer developers are interested in
and familiar with and that have decent abstraction and existing tools —
and work to drag the whole design of the particular component or
subsystem forward. Minimally reworking otherwise working application
source code — outside of code that's known as a bug farm, code
that's difficult or limiting or tedious for necessary changes or
customer-desired features, or code in which automated scanners have
identified vulnerabilities — usually isn't worth it. Nor is investing
more than absolutely necessary in a language that's entirely
platform-specific — Bliss or Macro32, for instance — or migrating to a
language that's in the wrong part of the usual adoption lifecycle, for
that matter.
Post by m***@gmail.com
Also, on the provision of some of these ancient device drivers, I'm
wondering if not only drivers should be conditionally activated but
perhaps also some decision-making software back up the code path.
What's the point of code that is frequently executed and takes certain
action if you are running an RA81 disk drive long before you get to the
device driver, if you don't have any RA81's?
Ayup. Deprecating and removing the old. Too little deprecation and
you end up dragging along and spending maintenance and build and
bug-fixing time for comparatively few users still using the old gear,
while changes that are too much or too fast cause churn in your
own and your customers' code. Best to have a better replacement
available ahead of the deprecations, too. Not that there aren't many
better replacements for the old DSA/RA/MSCP/TA/TMSCP storage hardware.
But then MSCP and TMSCP clients and servers are used to serve storage
within clusters too, and that's not going to be easy to remove.
--
Pure Personal Opinion | HoffmanLabs LLC
j***@yahoo.co.uk
2017-01-14 10:23:07 UTC
Post by c***@gmail.com
RE: "Why is the VMS codebase apparently so convoluted ? “
As a general statement about VMS I think that is way too harsh. We all have our pieces of the code we love to hate and think are deserving of an immediate dagger in the heart. There are certainly modules where some seriously unnatural acts are performed. But before we start thinking VMS is a pile of crap I’ll contend that those pockets of true ugliness are a minuscule portion of the overall system.
I was tempted to reply to Simon’s post with one word - MACRO32 - but that would be unfair in a number of ways. But, if I were to list 10 source modules in VMS that I would like to see magically re-written tomorrow I can guarantee you it would be a list of MACRO32 modules. Assembler, by its very nature, is just harder to create, understand, modify, add to, etc. than higher-level languages. One of our developers is fond of the phrase ‘waxy buildup’. Well, my observation is that waxy buildup is far worse in assembler over four decades than in BLISS or C.
In one of the books written about the creation of Windows NT there are a few lines in the beginning from Dave Cutler about VMS. He laments creating so much of VMS in assembler but adds that it was the only practical choice at the time. Completely understandable even to this day; as I have said many times, we don’t do computer science, we have to ship a product. Sometimes you just have to play with the cards you are dealt.
There are things in the original MACRO32 code that we changed (made better) in order to make it compilable for Alpha. We banned new code in MACRO32 except where it makes more sense to modify what exists. Even today we have work underway re-writing a major driver from MACRO32 to C. It is not uncommon to see a .MAP file containing a bunch of MACRO32 modules and one or more C modules (that’s where all the new stuff is).
The overall system is fairly well structured. It is a ton of code but the big picture is understandable and has not changed to any great degree since the exec re-org project in the early 80s which broke up the monolithic SYS.EXE into a few dozen execlets. For example, if you have a memory management project, you are working in the modules that make up SYS$VM.EXE, many of which are in C at this point. Is VMS modular? That might be a stretch. But it is organized well enough to have survived forty years of hundreds of developers working on it and still generally looks and feels like it always has.
Survived forty years of developers working on it and customers
working with it, and a couple of decades of its owners working
against it (until recently).
Chris
2017-01-14 18:11:19 UTC
Post by j***@yahoo.co.uk
its owners working
against it (until recently)
Hardly a good example to the staff if the owners
obviously don't care about it...
Kerry Main
2017-01-14 18:56:37 UTC
-----Original Message-----
Chris via Info-vax
Sent: January 14, 2017 1:11 PM
Subject: Re: [Info-vax] Why is the VMS codebase apparently so
convoluted ?
Post by j***@yahoo.co.uk
its owners working
against it (until recently
Hardly a good example to the staff if the owners obviously don't care
about it...
Agree in principle, but when looking at OpenVMS's pre-VSI owners
(HP/Compaq/DEC), HP had literally hundreds of products with each group
clamouring for more funding for their particular product suite.

To make things worse, before I left HP in 2012, the number of
companies that HP had bought was up to something like 42. With each new
company brought into the fold, the product pool got all that much
larger and each existing product group all that much smaller. Smaller
fish in a much bigger ocean. To survive, each product had to literally
fight for relevancy, funding and recognition in each new branding
scheme that the latest marketing suits would parade out.

Personally, I always used to think of OpenVMS as the "Cinderella"
product in BCS (HP's systems group where OpenVMS lived). I will leave
it to others to think of who the bad step sisters and evil mother were.

:-)

This challenge is not unique to HP - same goes for product groups in
companies like IBM, Dell, Oracle, Microsoft, CA etc.

Today, there is only one owner of OpenVMS and it is 100% focussed on
creating a better product.

Hindsight is 20-20, but while it is great that it finally happened,
it's unfortunate that the transfer of OpenVMS to VSI did not happen 5+
years earlier.

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Simon Clubley
2017-01-14 17:57:01 UTC
Post by c***@gmail.com
RE: "Why is the VMS codebase apparently so convoluted ? “
As a general statement about VMS I think that is way too harsh. We
all have our pieces of the code we love to hate and think are
deserving of an immediate dagger in the heart. There are certainly
modules where some seriously unnatural acts are performed. But before
we start thinking VMS is a pile of crap I’ll contend that those
pockets of true ugliness are a minuscule portion of the overall
system.
I freely admit that it's possible I have indeed been too harsh.

The problem I have is that I don't have access to the VMS source
code, so I have to judge its quality and composition by indirect means.

This means, for example, that when I am given reasons why we
cannot easily have in VMS some of the things we see in other
operating systems, it's all too easy to judge the whole of
VMS based on the apparent problems in the specific parts of VMS
which I have asked about.
Post by c***@gmail.com
I was tempted to reply to Simon’s post with one word - MACRO32 - but
that would be unfair in a number of ways. But, if I were to list 10
source modules in VMS that I would like to see magically re-written
tomorrow I can guarantee you it would be a list of MACRO32
modules. Assembler, by its very nature, is just harder to create,
understand, modify, add to, etc. than higher-level languages. One of
our developers is fond of the phrase ‘waxy buildup’. Well, my
observation is that waxy buildup is far worse in assembler over four
decades than in BLISS or C.
I agree with this. In the distant past I have done a good number of
system level modules in various assembly languages, and their higher
level language equivalents have always been easier to write,
understand months/years later and then to alter for new requirements.

These days, I only use bits of assembly language in specific places
(the first part of initialisation code after power on, interrupt
dispatch wrappers, inline accessing of CPU registers, etc).
Everything else is C at a minimum.
Post by c***@gmail.com
In one of the books written about the creation of Windows NT there
are a few lines in the beginning from Dave Cutler about VMS. He
laments creating so much of VMS in assembler but adds that it was the
only practical choice at the time. Completely understandable even to
this day; as I have said many times, we don’t do computer science, we
have to ship a product. Sometimes you just have to play with the cards
you are dealt.
Sadly, I can also agree with this. In some ways it's a pity that VMS
wasn't created 5-10 years later than it was. I strongly suspect that
some things would have been very different.
Post by c***@gmail.com
There are things in the original MACRO32 code that we changed (made
better) in order to make it compilable for Alpha. We banned new code
in MACRO32 except where it makes more sense to modify what
exists. Even today we have work underway re-writing a major driver
from MACRO32 to C. It is not uncommon to see a .MAP file containing a
bunch of MACRO32 modules and one or more C modules (that’s where all
the new stuff is).
The overall system is fairly well structured. It is a ton of code
but the big picture is understandable and has not changed to any great
degree since the exec re-org project in the early 80s which broke up
the monolithic SYS.EXE into a few dozen execlets. For example, if you
have a memory management project, you are working in the modules that
make up SYS$VM.EXE, many of which are in C at this point. Is VMS
modular? That might be a stretch. But it is organized well enough to
have survived forty years of hundreds of developers working on it and
still generally looks and feels like it always has.
Thank you for having taken the time to give me your insights.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2017-01-14 11:34:01 UTC
Post by Simon Clubley
Post by d***@gmail.com
http://xkcd.com/1168/
tar -jvcf dg.tar.bz2 dg/
Typed in less than 10 seconds and using what I know of tar without
having to look it up. :-)
Aye, but restoring the contents without the dg/ prefix, and to a
different directory in less than 10 seconds might be tricky.

tar --strip-components 1 -xjf dg.tar.bz2 -C /etc/config/wires/green

Nope, timed out, and I didn't need to look that one up.

This flavour has the advantage for the Hollywood version that the last
element, green, is easy to change to blue or red at the last moment.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
Paul Sture
2017-01-14 11:52:17 UTC
Post by Simon Clubley
Post by d***@gmail.com
http://xkcd.com/1168/
tar -jvcf dg.tar.bz2 dg/
Typed in less than 10 seconds and using what I know of tar without
having to look it up. :-)
A tip I forgot to mention in my other reply:

Once you are confident with aiming tar at the correct source/target,
you really want to get into the habit of dropping the 'v'.

Yes, this goes against all the examples you will see out there, but
for anything involving more than say a score of files:

* any errors will get lost in the noise
* with large numbers of files, the display of each file will
  slow down execution, especially when executing on a nice
  fast server via a relatively slow network connection
* if you redirect the output to a file, you can consume huge
  amounts of disk space, occasionally causing jobs to
  fail with insufficient free space for the log file.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
Simon Clubley
2017-01-14 20:30:08 UTC
Post by Paul Sture
Post by Simon Clubley
Post by d***@gmail.com
http://xkcd.com/1168/
tar -jvcf dg.tar.bz2 dg/
Typed in less than 10 seconds and using what I know of tar without
having to look it up. :-)
Once you are confident with aiming tar at the correct source/target,
you really want to get into the habit of dropping the 'v'.
Hey, I've got 10 seconds to be a successful geek before the nuclear
bomb goes off. Cut me some slack!!! :-) :-)

Seriously however, you do have a very good point. However, for the
same reasons as others, I don't fully trust tar so I tend to only
use it to archive things which I can recreate if needed (such as
builds of projects).

tar is also supposed to return a non-zero exit status if something
goes wrong, and on any archives I create I always do a compare pass.
That isn't perfect either: if a file is on the source filesystem but
not in the tar archive, it's apparently not reported as an error.
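The compare pass and its blind spot can be demonstrated with GNU tar's
-d/--compare option (bsdtar may lack it); the file names here are
invented for the example:

```shell
#!/bin/sh
# Create some placeholder content and archive one file.
mkdir -p proj
echo "data" > proj/a.txt
tar -cf proj.tar proj/a.txt

# Compare pass: checks every member of the archive against the
# filesystem and exits non-zero on any mismatch.
tar -df proj.tar && echo "compare pass clean"

# The blind spot described above: -d only walks the archive's members,
# so a file present on disk but absent from the archive goes unnoticed.
echo "new" > proj/b.txt
tar -df proj.tar && echo "b.txt is missing from the archive, yet no error"
```

Catching the missing file requires a separate pass in the other
direction, e.g. diffing the archive listing against a find of the
source tree.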

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2017-01-18 00:07:36 UTC
Post by Simon Clubley
Post by Paul Sture
Post by Simon Clubley
Post by d***@gmail.com
http://xkcd.com/1168/
tar -jvcf dg.tar.bz2 dg/
Typed in less than 10 seconds and using what I know of tar without
having to look it up. :-)
Once you are confident with aiming tar at the correct source/target,
you really want to get into the habit of dropping the 'v'.
Hey, I've got 10 seconds to be a successful geek before the nuclear
bomb goes off. Cut me some slack!!! :-) :-)
Seriously however, you do have a very good point. However, for the
same reasons as others, I don't fully trust tar so I tend to only
use it to archive things which I can recreate if needed (such as
builds of projects).
There are also cross platform incompatibilities in tarfiles - one
that bit me was unpacking a tarfile produced by FreeBSD on an OS X
system (solution in that case: grab a copy of bsdtar for OS X).
Post by Simon Clubley
tar is also supposed to return an exit code if something goes wrong
and on any archives I create I always do a compare pass although that
is not perfect as if the file is on the source filesystem but not in
the tar archive, it's apparently not reported as an error.
You might find Fossil's integrity self-checks of interest:

<http://fossil-scm.org/index.html/doc/trunk/www/selfcheck.wiki>

:-)
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
Paul Sture
2017-01-14 11:37:01 UTC
Post by d***@gmail.com
Post by Stephen Hoffman
And looking at the results from out here, we didn't end up with
something that was substantially better than the earlier versions of
BACKUP. We did end up with one of the most intractable and complex
and inscrutable tools around, and — despite the substantial effort in
the help text and examples, and despite the BACKUP Manager — it's still
one of the more difficult tools for system managers to use correctly,
and one that requires more than a little effort to script, and — when
you look at dealing with MOUNT and DISMOUNT and archives and the utter
lack of RMS and application and third-party database integration — an
ongoing problem area for OpenVMS system managers. BACKUP solves what
it does quite well. But creating the accoutrements and related tasks
around anyone actually using that tool are very complex, as is the tool
itself. Customization is great. Right up until you're in a thicket,
with no clear paths forward, no defaults and no templates, no
integration, and whatnot.
Without going into details, a couple of those, um, ripples point
directly back to the complexity and inconsistencies of some of the APIs
and implementations within OpenVMS, too. Beyond BACKUP itself. To
the sorts of mistakes that even very experienced developers can make.
http://xkcd.com/1168/
Very appropriate. One of the best moves I made with tar was to make
my own crib sheet.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
j***@yahoo.co.uk
2017-01-14 14:01:03 UTC
Post by Paul Sture
Post by d***@gmail.com
Post by Stephen Hoffman
And looking at the results from out here, we didn't end up with
something that was substantially better than the earlier versions of
BACKUP. We did end up with one of the most intractable and complex
and inscrutable tools around, and — despite the substantial effort in
the help text and examples, and despite the BACKUP Manager — it's still
one of the more difficult tools for system managers to use correctly,
and one that requires more than a little effort to script, and — when
you look at dealing with MOUNT and DISMOUNT and archives and the utter
lack of RMS and application and third-party database integration — an
ongoing problem area for OpenVMS system managers. BACKUP solves what
it does quite well. But creating the accoutrements and related tasks
around anyone actually using that tool are very complex, as is the tool
itself. Customization is great. Right up until you're in a thicket,
with no clear paths forward, no defaults and no templates, no
integration, and whatnot.
Without going into details, a couple of those, um, ripples point
directly back to the complexity and inconsistencies of some of the APIs
and implementations within OpenVMS, too. Beyond BACKUP itself. To
the sorts of mistakes that even very experienced developers can make.
http://xkcd.com/1168/
Very appropriate. One of the best moves I made with tar was to make
my own crib sheet.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
I'm mildly puzzled as to how tar fits in a discussion re
backups? OK it very definitely fits in a discussion about
complexity and inconsistency, but tar is a backup tool on
*x in the same way as FLX is a backup tool on RSX and VMS.
IE if you're using either tar or FLX for something other
than file transfer, are you sure you're really using the
right tool?

BACKUP does what backup does, and for backups and restores -
and even the associated administrivia: verifying that what's
on the media matches what's on the filesystem, recovering
from media errors, keeping track of what files were backed
up when, and to where, and so on - it is light years ahead
of anything I've used elsewhere (admittedly that's primarily
*x and Windows built-in stuff).

tar does what tar does. Arguably it's the direct opposite
of the "do one thing and do it well" approach. Maybe it
doesn't matter. Maybe it's where the concept for RedHat's
systemd came from.
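The administrivia credited to BACKUP above - what was saved, when,
and whether the media still matches the source - is exactly the kind
of thing tar users end up scripting by hand. A minimal sketch, with
all file and directory names invented for the example:

```shell
#!/bin/sh
# Placeholder data to back up.
mkdir -p data
echo "payload" > data/file.txt

# Timestamped archive name, a manifest of what went in, and a checksum
# for later media-vs-source verification.
stamp=$(date -u +%Y%m%dT%H%M%SZ)
archive="data-$stamp.tar.gz"

tar -czf "$archive" data/
tar -tzf "$archive" > "$archive.manifest"
sha256sum "$archive" > "$archive.sha256"

# Later (or on another machine): did the media survive intact?
sha256sum -c "$archive.sha256" && echo "archive checksum verified"
```

None of this recovers damaged data the way BACKUP's redundancy
groups can; it only tells you that something is wrong.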

Windows' own built-in backup (which back in the NT 3/4 era was
Windows' own *bought-in* backup, i.e. a cut-down version of a
commercial backup product) seems to gain incompatibilities
every time it's significantly updated. So retrieving old
stuff may be a challenge, even if the media is readable.
Maybe that doesn't matter if the relevant application also
no longer works. Windows 10 has doubtless improved this
kind of thing. Or not.

Have a lot of fun.
Craig A. Berry
2017-01-14 15:17:58 UTC
Post by j***@yahoo.co.uk
Windows own built in backup (which back in NT 3/4 era was
Windows own *bought in* backup, ie a cut down version of a
commercial backup product) seems to gain incompatibilities
every time it's significantly updated. So retrieving old
stuff may be a challenge, even if the media is readable.
Maybe that doesn't matter if the relevant application also
no longer works. Windows 10 has doubtless improved this
kind of thing. Or not.
At least Windows has its Volume Snapshot Service (often called shadow
copy) that works even when the volume is in use.[1]


[1] https://en.wikipedia.org/wiki/Shadow_Copy
j***@yahoo.co.uk
2017-01-14 16:12:48 UTC
Post by Craig A. Berry
Post by j***@yahoo.co.uk
Windows own built in backup (which back in NT 3/4 era was
Windows own *bought in* backup, ie a cut down version of a
commercial backup product) seems to gain incompatibilities
every time it's significantly updated. So retrieving old
stuff may be a challenge, even if the media is readable.
Maybe that doesn't matter if the relevant application also
no longer works. Windows 10 has doubtless improved this
kind of thing. Or not.
At least Windows has its Volume Snapshot Service (often called shadow
copy) that works even when the volume is in use.[1]
[1] https://en.wikipedia.org/wiki/Shadow_Copy
Windows has indeed had Snapshot Services for a while, and
before that came something called StorageWorks (or later,
SANworks) Virtual Replicator for Windows NT and Windows
2000. The earlier product did various other things as well
as offering much the same kind of thing as Snapshot
Services later did, but the vendor lacked the clout to
(a) get it properly integrated into the OS and
(b) get applications to be "snapshot aware", so that
changing data could be made consistent (from a given
application's point of view) at the point in time when the
snapshot was taken (this for the same reason that VMS BACKUPs
taken from a dismounted shadow set member were not always a
bright idea, at least from a data consistency point of view).

Windows Snapshot Services (or maybe the associated backup
tool(s)?) also seems quite capable of failing in strange and
mysterious ways which the MS community are usually unable to
help with. No "System Messages and Recovery Procedures" manual
here, frequently not even a proper equivalent of errno.h and
perror().
Paul Sture
2017-01-16 20:57:21 UTC
Post by j***@yahoo.co.uk
Post by Paul Sture
Post by d***@gmail.com
Post by Stephen Hoffman
Without going into details, a couple of those, um, ripples point
directly back to the complexity and inconsistencies of some of the APIs
and implementations within OpenVMS, too. Beyond BACKUP itself. To
the sorts of mistakes that even very experienced developers can make.
http://xkcd.com/1168/
Very appropriate. One of the best moves I made with tar was to make
my own crib sheet.
I'm mildly puzzled as to how tar fits in a discussion re
backups? OK it very definitely fits in a discussion about
complexity and inconsistency,
I read it as a comment on the user-friendliness of the tar command line.
Post by j***@yahoo.co.uk
but tar is a backup tool on *x in the same way as FLX is a backup tool
on RSX and VMS. IE if you're using either tar or FLX for something
other than file transfer, are you sure you're really using the right
tool?
True, but what alternatives are there in the *x world? It's pretty
ubiquitous as a software distribution medium.
Post by j***@yahoo.co.uk
BACKUP does what backup does, and for backups and restores
and even the associated administrivia - verifying that what's
on the media matches what's on the filesystem, recovery
from media errors, keeping track of what files were backed
up when, and to where, and so on - is light years ahead
of anything I've used elsewhere (admittedly that's primarily
*x and Windows built in stuff).
Most folks I have come across who say "I don't trust tapes" base that
statement on tar and tar-like utilities; they haven't had the benefit of
using something with the error recovery features of BACKUP.

I have come across one utility, BRU, which dates back to the mid-80s
and looks as if it was inspired by BACKUP features (or maybe even
the RSX BRU utility?).

The website has the aura of a largely abandoned product.

<http://www.tolisgroup.com/bru-archiving-and-backup-for-business.html>

A couple of PDFs:

<http://www.tolisgroup.com/assets/tarbrucontrast.pdf>
<http://www.tolisgroup.com/assets/provingthebruadvantage.pdf>

though from the example in the latter, I can't work out whether BRU
did anything more than flag an error and exit - not the same as
recovering the correct contents as BACKUP's CRC processing can do.
Post by j***@yahoo.co.uk
tar does what tar does. Arguably it's the direct opposite
of the "do one thing and do it well" approach. Maybe it
doesn't matter. Maybe it's where the concept for RedHat's
systemd came from.
:-)
Post by j***@yahoo.co.uk
Windows own built in backup (which back in NT 3/4 era was
Windows own *bought in* backup, ie a cut down version of a
commercial backup product) seems to gain incompatibilities
every time it's significantly updated. So retrieving old
stuff may be a challenge, even if the media is readable.
In certain circles, senior management probably sees that as
an advantage. Fine for emails they would prefer to forget,
not so good where there are legal obligations to keep records
for x years (vehicle airbag component data used to have a
mandated data retention period of 14 years IIRC).
Post by j***@yahoo.co.uk
Maybe that doesn't matter if the relevant application also
no longer works. Windows 10 has doubtless improved this
kind of thing. Or not.
The only decent way I found of backing up Windows 7 so that a true bare
metal reinstallation worked was to set it as a client of Windows Home
Server 2011 and backup with that. It did work reliably, but of course
Windows 8 was not supported...
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
Chris Scheers
2017-01-16 21:23:14 UTC
Post by Paul Sture
Post by j***@yahoo.co.uk
Maybe that doesn't matter if the relevant application also
no longer works. Windows 10 has doubtless improved this
kind of thing. Or not.
The only decent way I found of backing up Windows 7 so that a true bare
metal reinstallation worked was to set it as a client of Windows Home
Server 2011 and backup with that. It did work reliably, but of course
Windows 8 was not supported...
This comment is wandering off a bit, but, FWIW, I use WHS2011 to backup
Windows 8, Windows 10, and Windows Server 2008 R2 as well as Windows 7,
Windows Vista, and Windows XP.

It is not officially supported, but it works. And I have done bare
metal restores of Windows 8.

I would have to check if any of these are GPT disks. I think so, but
could be mistaken.
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
Bob Koehler
2017-01-17 14:49:35 UTC
Post by Paul Sture
I have come across one utility, BRU, which dates back to the mid-80s
and looks as if it was inspired by BACKUP features (or maybe even
the RSX BRU utility?).
I saw folks using BRU on VMS 1.x. I think it predates BACKUP, but
I'm not sure because the very earliest VMS 1.x (1.6 I think?) had
BACKUP.

The BRU on VMS was the RSX BRU.
Paul Sture
2017-01-17 16:02:16 UTC
Post by Bob Koehler
Post by Paul Sture
I have come across one utility, BRU, which dates back to the mid-80s
and looks as if it was inspired by BACKUP features (or maybe even
the RSX BRU utility?).
I saw folks using BRU on VMS 1.x. I think it predates BACKUP, but
I'm not sure because the very earliest VMS 1.x (1.6 I think?) had
BACKUP.
I don't recall BACKUP proper surfacing before 3.0. Before that our
system manager used BCK and RST, which like BRU were compatibility mode
utilities. I've just checked and I don't see BACKUP or BRU on my copy
of 1.5, but I do see DSC1, DSC2, BCK and RST.

I note that DIRECTORY wasn't there in 1.5 either. DIR works but
looks like the output from PIP /LI.
Post by Bob Koehler
The BRU on VMS was the RSX BRU.
The RSX system manager at a customer site showed me how much faster
BRU was to and from tape than the other utilities I had come across
and I was sold.
--
A supercomputer is a device for turning compute-bound problems into
I/O-bound problems. ---Ken Batcher
u***@gmail.com
2017-01-14 19:18:58 UTC
Post by Simon Clubley
[This was prompted by the shadow set driver discussion.]
Why is the VMS codebase apparently so convoluted ?
We already know that the terminal driver kernel code is an
unchangeable mass of code so it's very difficult to add in
any new features such as editing across line boundaries.
We now discover that the shadow driver isn't that far behind
and we have previously discovered (when talking about re-loadable
device drivers) that kernel code tends to jump around uncleanly
between different sections of code.
My question is why ?
Given the critical nature of systems running VMS, one would have
thought that highly modular and simple code (instead of monolithic
code filled with various tricks) would have been a highly desirable
design property.
Was VMS simply a victim of the limited hardware of the day and
needed to be made as small as possible (even at the possible
expense of future maintainability) or was it something else ?
Simon.
--
Microsoft: Bringing you 1980s technology to a 21st century world
I can answer that in one letter, C
Simon Clubley
2017-01-14 20:11:07 UTC
Post by u***@gmail.com
I can answer that in one letter, C
Bob, the original implementation languages for the VMS kernel
were Macro-32 and BLISS. It's generally agreed that implementing
new VMS functionality in C within the kernel is far better than
the use of either Macro-32 or BLISS.

IOW, C is not exactly the best language out there but it's better
than the languages which it is replacing. (My opinion is based on
my direct knowledge of Macro-32 and the opinions of others who know
BLISS in the case of that language.)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world