Discussion:
Access to _all_ VMS system services and library functions from DCL ?
Simon Clubley
2017-04-03 00:22:07 UTC
Is there any interest in getting access to _all_ the VMS system
services and library functions directly from DCL ?

The kind of thing I am thinking of is some kind of automatic
interface generator which is used as part of the development
process for a new VMS version. It would be used to automatically
generate the DCL interface code and thereby make any new
system services or library functions automatically available
to DCL command procedures.

It would have a lower level "feel" to it than the existing DCL
lexical interfaces (for example, you may need to write DCL code
to manually create itemlists by calling newly built-in DCL
functions) but you would get full access to all the system
services and libraries from DCL.

If there's no appetite for adding this to DCL (and I can see
how some DCL limits might be an issue) then I do think this
automatic interface generator should be a part of whatever
VSI is planning to replace DCL with for scripting.

In the latter case, it would be nice if VSI's interface generator
for whatever language they choose could implement some kind of
a thin object wrapper (so that for example, you could add entries
to an itemlist by pushing them onto an itemlist object).

Comments ?

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
t***@glaver.org
2017-04-03 01:02:49 UTC
Post by Simon Clubley
Is there any interest in getting access to _all_ the VMS system
services and library functions directly from DCL ?
I seem to recall this being a wishlist / SIR from a very long time ago. Does anybody remember what DEC's response to it was? [Impossible for some reason(s) vs. too much work, for example.]
Henry Crun
2017-04-03 02:24:52 UTC
Post by t***@glaver.org
Post by Simon Clubley
Is there any interest in getting access to _all_ the VMS system
services and library functions directly from DCL ?
I seem to recall this being a wishlist / SIR from a very long time ago. Does anybody remember what DEC's response to it was? [Impossible for some reason(s) vs. too much work, for example.]
IIRC, somewhere in DEC's reply was that "DCL is not meant to be a programming
language".
--
Mike R.
Home: http://alpha.mike-r.com/
QOTD: http://alpha.mike-r.com/qotd.php
No Micro$oft products were used in the URLs above, or in preparing this message.
Recommended reading: http://www.catb.org/~esr/faqs/smart-questions.html#before
and: http://alpha.mike-r.com/jargon/T/top-post.html
Missile address: N31.7624/E34.9691
Richard Levitte
2017-04-03 06:18:10 UTC
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a programming language perspective, DCL is kinda lacking...

Cheers,
Richard

(*) yeah, I know, there's GNV and bash, but you'll have to forgive me, I have a very hard time seeing that as "native"... However much I like it in any Unix environment, it's more of an "oh, look at what the cat dragged in" on VMS...
t***@glaver.org
2017-04-03 08:33:42 UTC
Post by Richard Levitte
Maybe it's time for a new native CLI (*)... 'cause yeah, from a programming language perspective, DCL is kinda lacking...
When did AUTHORIZE get /CLI=foo? It was pretty early on, right? I think I remember DEC saying "we never wrote an alternate CLI, and neither should you", or was it just that the interface was completely undocumented? VMS POSIX ran under DCL, rather than as an alternate CLI, correct?
hb
2017-04-03 12:02:50 UTC
Post by t***@glaver.org
VMS POSIX ran under DCL, rather than as an alternate CLI, correct?
No. From the docs:

When you log in to a VMS system by using POSIX$CLI
as your command language interpreter (CLI). This
is done either by having your system manager define
POSIX$CLI as your default CLI in the user authorization
file (UAF), or by using the qualifier /CLI=POSIX$CLI
after your user name when you log in. In this case, when
your VMS POSIX session terminates, you also log out of
the VMS system.
Bill Gunshannon
2017-04-03 15:13:40 UTC
Post by t***@glaver.org
Post by Richard Levitte
Maybe it's time for a new native CLI (*)... 'cause yeah, from a programming language perspective, DCL is kinda lacking...
When did AUTHORIZE get /CLI=foo? It was pretty early on, right? I think I remember DEC saying "we never wrote an alternate CLI, and neither should you", or was it just that the interface was completely undocumented? VMS POSIX ran under DCL, rather than as an alternate CLI, correct?
It's been a really long time, but I seem to remember being able to make
the POSIX shell your CLI using that option. Some Unix users did that,
but I was always against it (although I did test it so I could help
users who wanted to do it), for the same reason I never agreed with
making command aliases with the Unix names (like "ls" for "DIR" or "cd"
for "SET DEFAULT").

bill
Stephen Hoffman
2017-04-03 16:42:16 UTC
Post by t***@glaver.org
Post by Richard Levitte
Maybe it's time for a new native CLI (*)... 'cause yeah, from a
programming language perspective, DCL is kinda lacking...
When did AUTHORIZE get /CLI=foo?
I don't know.
Post by t***@glaver.org
It was pretty early on, right?
Ayup. That option was needed to select between the PDP-11/RSX-11
MCR environment and the native DCL CLI.

Both of those CLIs were in common use during the early versions of OpenVMS.

MCR was migrated to a layered product with the release of VAX/VMS V4.0,
and was then retired.
Post by t***@glaver.org
I think I remember DEC saying "we never wrote an alternate CLI, and
neither should you",
DCL, MCR and DEC/Shell were three OpenVMS CLIs that shipped to customers.
Post by t***@glaver.org
or was it just that the interface was completely undocumented?
Writing and debugging a CLI isn't documented.

Being in supervisor mode, CLIs are also effectively fully privileged.

IIRC, there was a user-written open-source CLI around for OpenVMS VAX.
Post by t***@glaver.org
VMS POSIX ran under DCL, rather than as an alternate CLI, correct?
DEC/Shell was a CLI.

POSIX support was a separate package that followed the retirement of
DEC/Shell, and both are now long-retired.

VIP (VMS Integrated POSIX) was a separately packaged, free,
open-source software package.

AFAIK, DEC/Shell was never open-sourced.

AFAIK, the POSIX package did not contain a CLI.

GNV does not contain a CLI.

See page 2-12:
http://bitsavers.trailing-edge.com/pdf/dec/vax/handbook/VMS_Language_and_Tools_Handbook_1985.pdf
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2017-04-03 19:18:38 UTC
Post by Stephen Hoffman
AFAIK, the POSIX package did not contain a CLI.
The package contained POSIX$CLI.EXE and POSIX$CLITABLES.EXE. It smells
like a CLI. As mentioned before, it is documented.
Stephen Hoffman
2017-04-03 20:08:57 UTC
Post by hb
Post by Stephen Hoffman
AFAIK, the POSIX package did not contain a CLI.
The package contained POSIX$CLI.EXE and POSIX$CLITABLES.EXE. It smells
like a CLI. As mentioned before, it is documented.
Ayup. So four CLIs.

Being that DCL is the canonical CLI — and none of the other three CLIs
are likely to re-appear — implementing a bridge into system services
remains a large effort, both for the bridge as well as for moving DCL
forward to better contend with the calls. As for OpenVMS system
services, those are increasingly unrelated to what I'm implementing in
DCL, too. I'd once posted a list of suggested lexical functions.
f$ldap, f$getuai, f$setuai, f$dns, f$urldecode, f$urlencode, f$utf8,
f$locale, f$regexp, f$xml, f$json, f$https, f$tls, f$ssh and suchlike.
That is, if there isn't a better implementation language choice available
than DCL. Not that I'd mind having system services for these and other
tasks. But when I need these sorts of capabilities, I'm usually headed
for an executable, or for Python, Perl or the like. Not DCL. Nor
for REXX or PowerShell, either.
--
Pure Personal Opinion | HoffmanLabs LLC
Dirk Munk
2017-04-03 10:25:15 UTC
Post by Richard Levitte
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a programming language perspective, DCL is kinda lacking...
Cheers,
Richard
(*) yeah, I know, there gnv and bash, but you'll have to forgive me, I have a very hard time seeing that as "native"... However much I like it in any Unix environment, it's more of a "oh looked at what the cat dragged in" on VMS...
I like your humour :-)
Jan-Erik Soderholm
2017-04-03 11:02:30 UTC
Post by Dirk Munk
Post by Richard Levitte
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a
programming language perspective, DCL is kinda lacking...
Cheers,
Richard
(*) yeah, I know, there gnv and bash, but you'll have to forgive me, I
have a very hard time seeing that as "native"... However much I like it
in any Unix environment, it's more of a "oh looked at what the cat
dragged in" on VMS...
I like your humour :-)
I think it is better to add new features to some tool that complements
DCL, not to replace DCL or make major changes to DCL itself. Does
anyone discuss a replacement of JCL for MVS?

Just as an example, it is easier to add features to a tool like Python
and then use DCL as the environment where you run your Python scripts.

The current Python kit for VMS already has more VMS features built in
than DCL has, including some common OSS tools of various sorts.
Dirk Munk
2017-04-03 11:39:44 UTC
Post by Jan-Erik Soderholm
Post by Dirk Munk
Post by Richard Levitte
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a
programming language perspective, DCL is kinda lacking...
Cheers,
Richard
(*) yeah, I know, there gnv and bash, but you'll have to forgive me, I
have a very hard time seeing that as "native"... However much I like it
in any Unix environment, it's more of a "oh looked at what the cat
dragged in" on VMS...
I like your humour :-)
I think it is better to add new features to some tool that compliments
DCL, not to replace DCL or do major changes to DCL in it self. Does
anyone discuss a replacement of JCL for MVS?
Just as an example, it is easier to add features to a tool like Python
and then use DCL as the environment where you run your Python scripts.
The current Python kit for VMS already have more VMS features built-in
than DCL has. Including a some common OSS tools of different sorts..
Python has one important aspect that I absolutely hate. On my PC I have
several applications that use Python, and each one of these applications
brings along its own version of Python in its own directory structure.

That is stupid; there should be one (the most recent) version of Python
on my PC (perhaps a 32-bit and a 64-bit version, I don't know), and every
application that relies on Python should be able to use it.

If new versions of Python are not downwards compatible, then that is a
serious problem.
John E. Malmberg
2017-04-03 12:49:22 UTC
Post by Dirk Munk
Python has one important aspect that I absolutely hate. On my PC I have
several applications that use Python, and each one of these applications
brings along its own version of Python in its own directory structure.
That is stupid, there should be one (the most recent) version of Python
on my PC (perhaps a 32 bit and a 64 bit version, don't know), and every
application that relies on Python should be able to use it.
If new versions of Python are not downwards compatible, than that is a
serious problem.
There are multiple Python kits available for Windows, and a standalone
application cannot depend on any of them being installed with the
needed components.

So the Windows Python application packager has a choice:

1. Test with a number of the Python packages for Windows and provide
instructions for each of them.

2. Test with only one of the above and restrict the person to that.

3. Bundle a Python interpreter with the package so that the installation
instructions are simple.

On Linux, a python based package can specify the dependencies it needs
so that you only have to request the package.

Or with pip, you can install Python modules, some requiring a C compiler
and/or development headers. I am not sure that pip works on VMS at this
time.

And some python libraries do not play well with others. To solve this
on Linux, there is a tool called virtualenv. With virtualenv,
non-privileged users can set up a tailored environment for a specific
python application. I do not think virtualenv has been ported to VMS yet.

Linux distributions have the concept of vendor and third-party package
repositories that can be signed for verification, which allows this.
VMS does not have that feature.

Regards,
-John
***@qsl.net_work
Jan-Erik Soderholm
2017-04-03 13:30:07 UTC
Post by Dirk Munk
Post by Jan-Erik Soderholm
Post by Dirk Munk
Post by Richard Levitte
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a
programming language perspective, DCL is kinda lacking...
Cheers,
Richard
(*) yeah, I know, there gnv and bash, but you'll have to forgive me, I
have a very hard time seeing that as "native"... However much I like it
in any Unix environment, it's more of a "oh looked at what the cat
dragged in" on VMS...
I like your humour :-)
I think it is better to add new features to some tool that compliments
DCL, not to replace DCL or do major changes to DCL in it self. Does
anyone discuss a replacement of JCL for MVS?
Just as an example, it is easier to add features to a tool like Python
and then use DCL as the environment where you run your Python scripts.
The current Python kit for VMS already have more VMS features built-in
than DCL has. Including a some common OSS tools of different sorts..
Python has one important aspect that I absolutely hate. On my PC I have
several applications that use Python, and each one of these applications
brings along its own version of Python in its own directory structure.
That is stupid, there should be one (the most recent) version of Python on
my PC (perhaps a 32 bit and a 64 bit version, don't know), and every
application that relies on Python should be able to use it.
If new versions of Python are not downwards compatible, than that is a
serious problem.
If Python were as widespread as, say, Java, I'd guess that would
have been less of an issue. On VMS, with its heritage, I guess that one
would rather have some more or less VSI-supported Python install, so that
anyone supplying Python tools/applications would know what to expect.

And yes, then there is the Python2/Python3 issues, of course...
Bill Gunshannon
2017-04-03 15:16:57 UTC
Post by Dirk Munk
Post by Jan-Erik Soderholm
Post by Dirk Munk
Post by Richard Levitte
Post by Henry Crun
IIRC somwhere in Dec's reply was that "DCL is not meant to be a programming
language"
Maybe it's time for a new native CLI (*)... 'cause yeah, from a
programming language perspective, DCL is kinda lacking...
Cheers,
Richard
(*) yeah, I know, there gnv and bash, but you'll have to forgive me, I
have a very hard time seeing that as "native"... However much I like it
in any Unix environment, it's more of a "oh looked at what the cat
dragged in" on VMS...
I like your humour :-)
I think it is better to add new features to some tool that compliments
DCL, not to replace DCL or do major changes to DCL in it self. Does
anyone discuss a replacement of JCL for MVS?
Just as an example, it is easier to add features to a tool like Python
and then use DCL as the environment where you run your Python scripts.
The current Python kit for VMS already have more VMS features built-in
than DCL has. Including a some common OSS tools of different sorts..
Python has one important aspect that I absolutely hate. On my PC I have
several applications that use Python, and each one of these applications
brings along its own version of Python in its own directory structure.
That is stupid, there should be one (the most recent) version of Python
on my PC (perhaps a 32 bit and a 64 bit version, don't know), and every
application that relies on Python should be able to use it.
If new versions of Python are not downwards compatible, than that is a
serious problem.
People happily accepted such behavior for PHP; why would you expect
anything better from Python? Scripting languages are the antithesis
of software engineering.

bill
Stephen Hoffman
2017-04-03 17:04:37 UTC
Post by Dirk Munk
Python has one important aspect that I absolutely hate. On my PC I have
several applications that use Python, and each one of these
applications brings along its own version of Python in its own
directory structure.
Application bundles do waste disk space when code is duplicated, but —
absent far better dependency management than many platforms can
reasonably implement and manage — that is still a better alternative
than the other available options.
Post by Dirk Munk
That is stupid, there should be one (the most recent) version of Python
on my PC (perhaps a 32 bit and a 64 bit version, don't know), and every
application that relies on Python should be able to use it.
Which unfortunately doesn't work in various cases. Including Python.

Even if older or newer versions do happen to work, adding arbitrary
dependencies makes for some rather fun testing matrices, too.

And arbitrarily old dependencies often don't work with newer apps.
Post by Dirk Munk
If new versions of Python are not downwards compatible, than that is a
serious problem.
Welcome to what commonly and increasingly happens in IT. Welcome to
what containers are used for, too. Welcome to the unfortunate and
increasing futility of upward compatibility, too.

Even OpenVMS has had some similar fun, too. The C RTL was one case
that ended up embedded in app installations, for some C and app
versions (q.v. AACRT060). Some of the other RTLs have occasionally
had similar embedding requirements. GSMATCH has been ignored on
various RTLs, to avoid encountering the version checks. Which means
that there's a chance that newer apps will hit bugs in older RTLs and
fail, and there'll be no image activator version checks flagged for
those cases. The developers and the end-user are left to sort this
out. OpenSSL has been a moving target here, too. And OpenVMS lacks
any sort of bundling, which has led some apps to include massive
dependencies directly in the kit — one or two tools I've seen were
shipping entire versions of a language. Then there's keeping the
dependencies updated. OpenVMS never supported back-linking, either.
Accordingly, various OpenVMS apps shipped objects and linked on the
target system, too. That all can get very messy too, as you have to
get the linker maps from the target system when something crashes.

In aggregate, which is why stuffing the dependencies directly into the
app bundle often ends up chosen, as ugly as it is.

TL;DR: Welcome to why app bundles, containers and VM guests are popular.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2017-04-03 17:49:17 UTC
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: April 3, 2017 1:05 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and library
functions from DCL ?
Post by Dirk Munk
Python has one important aspect that I absolutely hate. On my PC I
have several applications that use Python, and each one of these
applications brings along its own version of Python in its own
directory structure.
Application bundles do waste disk space when code is duplicated, but
— absent far better dependency management than many platforms
can reasonably implement and manage — that is still a better
alternative than the other available options.
Post by Dirk Munk
That is stupid, there should be one (the most recent) version of
Python on my PC (perhaps a 32 bit and a 64 bit version, don't know),
and every application that relies on Python should be able to use it.
Which unfortunately doesn't work in various cases. Including Python.
Even if older or newer versions do happen to work, adding arbitrary
dependencies makes for some rather fun testing matrices, too.
And arbitrarily old dependencies often don't work with newer apps.
Post by Dirk Munk
If new versions of Python are not downwards compatible, than that
is a
Post by Dirk Munk
serious problem.
Welcome to what commonly and increasingly happens in IT. Welcome to
what containers are used for, too. Welcome to the unfortunate and
increasing futility of upward compatibility, too.
Even OpenVMS has had some similar fun, too. The C RTL was one case
that ended up embedded in app installations, for some C and app
versions (q.v. AACRT060). Some of the other RTLs have occasionally
had similar embedding requirements. GSMATCH has been ignored on
various RTLs, to avoid encountering the version checks. Which means
that there's a chance that newer apps will hit bugs in older RTLs and
fail, and there'll be no image activator version checks flagged for
those cases. The developers and the end-user are left to sort this
out. OpenSSL has been a moving target here, too. And OpenVMS lacks
any sort of bundling, which has led some apps to include massive
dependencies directly in the kit — one or two tools I've seen were
shipping entire versions of a language. Then there's keeping the
dependencies updated. OpenVMS never supported back-linking,
either.
Accordingly, various OpenVMS apps shipped objects and linked on the
target system, too. That all can get very messy too, as you have to
get the linker maps from the target system when something crashes.
In aggregate, which is why stuffing the dependencies directly into the
app bundle often ends up chosen, as ugly as it is.
TL;DR: Welcome to why app bundles, containers and VM guests are popular.
Re: App backwards compatibility .. interesting development (not sure if confirmed yet) in the Apple space:

<https://www.theinquirer.net/inquirer/news/3007694/ios-11-could-render-almost-200-000-apps-obsolete>
"With iOS 11, set to be unveiled in June at WWDC before it's rolled out to in around six month's time, Apple looks set to remove support for 32-bit apps and will stop supporting those that don't run natively in 64-bit mode."

"SensorTower, an app research outfit, is claiming that Apple's move to stop supporting 32-bit apps will see at least 187,000 apps, or eight per cent of the App Store, rendered obsolete by iOS 11."

Perhaps something for VSI to consider for OpenVMS V10.. ok, maybe V11?

To their credit, it appears Apple has been requiring developers for a while (since 2015) to develop 64-bit apps.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2017-04-03 19:54:45 UTC
Post by Kerry Main
Re: App backwards compatibility .. interesting development (not sure if
<https://www.theinquirer.net/inquirer/news/3007694/ios-11-could-render-almost-200-000-apps-obsolete>
"With iOS 11, set to be unveiled in June at WWDC before it's rolled out
to in around six month's time, Apple looks set to remove support for
32-bit apps and will stop supporting those that don't run natively in
64-bit mode."
"SensorTower, an app research outfit, is claiming that Apple's move to
stop supporting 32-bit apps will see at least 187,000 apps, or eight
per cent of the App Store, rendered obsolete by iOS 11."
Perhaps something for VSI to consider for OpenVMS V10.. ok, maybe V11?
Short answer:

Nope.


Long answer...

I'd laugh, but then I'd cry.

The vast majority of apps for OpenVMS are 32-bit, that's not changing
anytime soon, and we will be dealing with the hybrid 32-bit/64-bit
implementation for the foreseeable future.

I really like the OpenVMS 32-/64-bit design, though it was intentionally
and centrally optimized for upward compatibility, and for mixing 32-
and 64-bit into one executable. New app development was not a central
target, which means we have a mix of descriptors and entry points and
pointers and preprocessor macros. Complexity. Working with the
64-bit parts of the DEC 32-/64-bit environment sometimes reminds me of
the fun of dealing with the address space on PDP-11/RSX-11.
Thankfully no TKB, but there are parallels to the effort involved with
that, too.

Unlike DEC, Apple created a new flat 64-bit address space, and
related tools that build 32- and 64-bit code, and fixed a whole pile of
other latent problems and limits in the process of that work. Unlike
DEC with OpenVMS, 32-bit and 64-bit code is not mixed in the same build
of the code, but can be present in the same binary. This is a little
confusing, in that a single executable file can contain separate
binaries, and macOS and iOS pick the appropriate binary for the
current system environment. Commonly one binary for 32-bit
environments, and one for 64-bit environments, though there are more
permutations for iOS devices. This is a so-called fat binary; more
formally a multi-architecture binary, or a universal binary.
https://developer.apple.com/library/content/qa/qa1765/_index.html

With macOS and iOS, existing 32-bit code is unchanged. But for code
moving to 64-bit, Apple broke compatibility. Developers had to change
their code to migrate to 64-bit. Apple fixed many of the bugs and
limits in the old API when they did that, too. And the 32-bit and
64-bit apps can be combined into the same application package, and into
the same binary.

As for moving to 64-bit, the Apple migration to 64-bit started well
over a decade ago.

https://developer.apple.com/library/content/documentation/Darwin/Conceptual/64bitPorting/intro/intro.html


With iOS — which was based on macOS — Apple also implemented bitcode,
which allows Apple to generate the necessary executable binaries
directly for current and (likely) new devices, too. The closest
analog to this on OpenVMS is linking on site, and that's not at all
close to what bitcode provides.

https://developer.apple.com/library/content/documentation/IDEs/Conceptual/AppDistributionGuide/AppThinning/AppThinning.html


Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in the
future for developers, and VSI — given our recent CPRNG discussions
among other threads — isn't really there yet.

Then — in the unlikely event this work gets scheduled — there's the
discussion of whether the existing 32-/64-bit environment should be
preserved — for reasons of upward-compatibility, after all — which just
heaps on more work for everybody involved. For VSI, and for end-user
developers that are looking to use the new features. Again,
upward-compatibility is not free. Absent infinite resources, catering
to apps that are not being actively maintained and are not being
updated inherently comes at the cost of new development, both for VSI
and for end-user developers. Upward-compatibility also completely
blocks various fixes and updates, too. Either because the changes
would be prohibitively expensive, or are simply not possible to
perform. But break too much compatibility too quickly or too
haphazardly, and don't give folks valuable reasons to upgrade to the
newer OpenVMS versions, and things (also) don't end well for VSI and
OpenVMS. Not a fun balance.

None of this is to state that Apple does everything correctly, nor that
VSI should follow all of what Apple does. Apple have had their share
of mistakes, too. VSI — and each of us doing development or support
or operations — should be aware of what else is going on from other
vendors, including Apple, Microsoft and Google, and in the open-source
community, and in areas such as regulatory environments, in security
designs and breaches and mechanisms, and suchlike. An operating system
is a huge project.
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2017-04-03 21:08:56 UTC
Post by Stephen Hoffman
Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in the
future for developers, and VSI — given our recent CPRNG discussions
among other threads — isn't really there yet.
Then — in the unlikely event this work gets scheduled — there's the
discussion of whether the existing 32-/64-bit environment should be
preserved — for reasons of upward-compatibility, after all — which just
heaps on more work for everybody involved. For VSI, and for end-user
developers that are looking to use the new features. Again,
upward-compatibility is not free. Absent infinite resources, catering
to apps that are not being actively maintained and are not being
updated inherently comes at the cost of new development, both for VSI
and for end-user developers. Upward-compatibility also completely
blocks various fixes and updates, too. Either because the changes
would be prohibitively expensive, or are simply not possible to
perform. But break too much compatibility too quickly or too
haphazardly, and don't give folks valuable reasons to upgrade to the
newer OpenVMS versions, and things (also) don't end well for VSI and
OpenVMS. Not a fun balance.
Well, from my view, OpenVMS already has a flat 64-bit address space. We are just careful about allocating certain things in the bottom 32-bits (stack, code, heap). You have to explicitly ask for things to go into 64-bit space (heap and code on Itanium), but a 64-bit pointer can access any memory you have.

There is nothing stopping VSI from doing something like Tru64 and not mapping the bottom 32 bits of address space and putting everything in a 64-bit environment. It just breaks all 32-bit descriptors (how many of those are built in open code, much less in all the compilers, RTLs, etc.?). It breaks anybody from VAX who thinks they can use %LOC in Fortran and put the value into an INTEGER*4, or an 'int' in C, or a 32-bit item list. You get the idea. Of course, Macro-32 lets you have that kind of fun and more.

[Yes, we are rewriting more things out of Macro-32 - I'm still paying out my dollar-per-module bounty to other developers who convert from Macro-32 to anything else. I think I owe a few folks $10s of dollars. Money well spent.]

C's /POINTER_SIZE=64 (and Pascal's quadword pointers and Fortran's POINTER64) automatically turn your memory allocation to a 64-bit memory allocation. [Which in turn screwed over people then using that address in a 32-bit descriptor by the way.] Of course, C keeping "long" to be 32-bit even with /POINTER_SIZE=64 did expand the vocabulary of many people who wanted to learn new swear words.

And then there is RMS. Even with RAB64, there is still a 32-bit pointer back to the parent FAB so FABs better be in 32-bit space (stack or heap). The pointer size ripples to all corners.

For x86, where we don't have to worry about mixing in objects compiled 20 years ago, I (and others) have pushed for more 64-bit pointers. Again, it just ripples into lots of code. For instance, it would mean that BASIC would want to use 64-bit class D descriptors which in turn puts pressure on code that BASIC might call passing such string descriptors.

Not to ask for more abuse, but if we only had pointers to nul-terminated strings instead of more flavors of descriptors than a Baskin-Robbins, it would be an easier transition to make.
Stephen Hoffman
2017-04-03 22:52:55 UTC
Reply
Permalink
Raw Message
Post by John Reagan
Post by Stephen Hoffman
Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in the
future for developers, and VSI — given our recent CPRNG discussions
among other threads — isn't really there yet.
Well, from my view, OpenVMS already has a flat 64-bit address space.
If we're discounting what the developer has to do around managing
64-bit pointers through variously-divergent APIs, different headers
including RAB64 as you've mentioned, different compiler and linker
switches and suchlike, coding or recoding exposed app calls to work
either in parallel with existing 32-bit calls or to correctly manage
both types of descriptors and pointers, and other details, sure.

Have a look around on the OpenVMS master pack and in customer
environments and bug reports and elsewhere, and find out how many
64-bit apps are around. I'll wager it's a pretty small percentage.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-04-04 00:48:22 UTC
Reply
Permalink
Raw Message
Post by John Reagan
Post by Stephen Hoffman
Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in
the future for developers, and VSI — given our recent CPRNG
discussions among other threads — isn't really there yet.
Then — in the unlikely event this work gets scheduled — there's the
discussion of whether the existing 32-/64-bit environment should
be preserved — for reasons of upward-compatibility, after all —
which just heaps on more work for everybody involved. For VSI,
and for end-user developers that are looking to use the new
features. Again, upward-compatibility is not free. Absent
infinite resources, catering to apps that are not being actively
maintained and are not being updated inherently comes at the cost
of new development, both for VSI and for end-user developers.
Upward-compatibility also completely blocks various fixes and
updates, too. Either because the changes would be prohibitively
expensive, or are simply not possible to perform. But break too
much compatibility too quickly or too haphazardly, and don't give
folks valuable reasons to upgrade to the newer OpenVMS versions,
and things (also) don't end well for VSI and OpenVMS. Not a fun
balance.
Well, from my view, OpenVMS already has a flat 64-bit address space.
We are just careful about allocating certain things in the bottom
32-bits (stack, code, heap). You have to explicitly ask for things
to go into 64-bit space (heap and code on Itanium), but a 64-bit
pointer can access any memory you have.
There is nothing stopping VSI from doing something like Tru64 and not
mapping the bottom 32-bits of address space and putting everything in
a 64-bit environment. Just breaks all 32-bit descriptors (how many
of those are built in open code much less all the compilers, RTLs,
etc.?). Breaks anybody from VAX who thinks they can use %LOC in
Fortran and put the value into an INTEGER*4 or an 'int' in C or a
32-bit item list? You get the idea. Of course, Macro-32 lets you
have that kind of fun and more.
In another thread I asked:

#But how difficult would it be to:
#- have totally new API's that was only 64 bit
#- have new images mapped into bottom of P2 and use top of P2 for stack
#- have new compilers generate 64 bit code for that (unless /32bit used)
#- have legacy applications use P0+P1 and current API's
#?

Arne
John Reagan
2017-04-04 13:03:35 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Post by John Reagan
Post by Stephen Hoffman
Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in
the future for developers, and VSI — given our recent CPRNG
discussions among other threads — isn't really there yet.
Then — in the unlikely event this work gets scheduled — there's the
discussion of whether the existing 32-/64-bit environment should
be preserved — for reasons of upward-compatibility, after all —
which just heaps on more work for everybody involved. For VSI,
and for end-user developers that are looking to use the new
features. Again, upward-compatibility is not free. Absent
infinite resources, catering to apps that are not being actively
maintained and are not being updated inherently comes at the cost
of new development, both for VSI and for end-user developers.
Upward-compatibility also completely blocks various fixes and
updates, too. Either because the changes would be prohibitively
expensive, or are simply not possible to perform. But break too
much compatibility too quickly or too haphazardly, and don't give
folks valuable reasons to upgrade to the newer OpenVMS versions,
and things (also) don't end well for VSI and OpenVMS. Not a fun
balance.
Well, from my view, OpenVMS already has a flat 64-bit address space.
We are just careful about allocating certain things in the bottom
32-bits (stack, code, heap). You have to explicitly ask for things
to go into 64-bit space (heap and code on Itanium), but a 64-bit
pointer can access any memory you have.
There is nothing stopping VSI from doing something like Tru64 and not
mapping the bottom 32-bits of address space and putting everything in
a 64-bit environment. Just breaks all 32-bit descriptors (how many
of those are built in open code much less all the compilers, RTLs,
etc.?). Breaks anybody from VAX who thinks they can use %LOC in
Fortran and put the value into an INTEGER*4 or an 'int' in C or a
32-bit item list? You get the idea. Of course, Macro-32 lets you
have that kind of fun and more.
#- have totally new API's that was only 64 bit
#- have new images mapped into bottom of P2 and use top of P2 for stack
#- have new compilers generate 64 bit code for that (unless /32bit used)
#- have legacy applications use P0+P1 and current API's
#?
Arne
Could we? Sure. Difficult? Well, it does turn into a lengthy list of changes across lots of code, with perhaps dual versions of libraries.

We have some of that today (i.e., LINK/SEG=CODE=P2, 64-bit flavors of the APIs that DO care about the sizes, etc.).

How do you detect "legacy" from "new"?

Back to the discussion on services from DCL....
Arne Vajhøj
2017-04-04 14:55:31 UTC
Reply
Permalink
Raw Message
Post by John Reagan
Post by Arne Vajhøj
Post by John Reagan
Well, from my view, OpenVMS already has a flat 64-bit address
space. We are just careful about allocating certain things in the
bottom 32-bits (stack, code, heap). You have to explicitly ask
for things to go into 64-bit space (heap and code on Itanium),
but a 64-bit pointer can access any memory you have.
There is nothing stopping VSI from doing something like Tru64 and
not mapping the bottom 32-bits of address space and putting
everything in a 64-bit environment. Just breaks all 32-bit
descriptors (how many of those are built in open code much less
all the compilers, RTLs, etc.?). Breaks anybody from VAX who
thinks they can use %LOC in Fortran and put the value into an
INTEGER*4 or an 'int' in C or a 32-bit item list? You get the
idea. Of course, Macro-32 lets you have that kind of fun and
more.
#- have totally new API's that was only 64 bit
#- have new images mapped into bottom of P2 and use top of P2 for stack
#- have new compilers generate 64 bit code for that (unless /32bit used)
#- have legacy applications use P0+P1 and current API's
#?
Could we? Sure. Difficult? Well, it does turn into a lengthy list
of changes across lots of code with perhaps dual-versions of
libraries.
Sure. But it would provide a clean go-forward path while still
allowing the old stuff to work.
Post by John Reagan
We have some of that today (ie, LINK/SEG=CODE=P2, 64-bit flavors of
the APIs that DO care about the sizes, etc.).
I am sure that it is a very complex change.

But the question is: do we still want those 32-bit addresses in
VMS code in 30 years?
Post by John Reagan
How do you detect "legacy" from "new"?
Flag in image header and compiler and linker options.

Arne
David Froble
2017-04-04 02:10:57 UTC
Reply
Permalink
Raw Message
Post by John Reagan
Post by Stephen Hoffman
Then there's the whole discussion of what going to a flat 64-bit
address space buys for OpenVMS. Which isn't much for the folks at
VSI. Going to a flat 64-bit address space is an investment in the
future for developers, and VSI — given our recent CPRNG discussions
among other threads — isn't really there yet.
Then — in the unlikely event this work gets scheduled — there's the
discussion of whether the existing 32-/64-bit environment should be
preserved — for reasons of upward-compatibility, after all — which just
heaps on more work for everybody involved. For VSI, and for end-user
developers that are looking to use the new features. Again,
upward-compatibility is not free. Absent infinite resources, catering
to apps that are not being actively maintained and are not being
updated inherently comes at the cost of new development, both for VSI
and for end-user developers. Upward-compatibility also completely
blocks various fixes and updates, too. Either because the changes
would be prohibitively expensive, or are simply not possible to
perform. But break too much compatibility too quickly or too
haphazardly, and don't give folks valuable reasons to upgrade to the
newer OpenVMS versions, and things (also) don't end well for VSI and
OpenVMS. Not a fun balance.
Well, from my view, OpenVMS already has a flat 64-bit address space. We are
just careful about allocating certain things in the bottom 32-bits (stack,
code, heap). You have to explicitly ask for things to go into 64-bit space
(heap and code on Itanium), but a 64-bit pointer can access any memory you
have.
There is nothing stopping VSI from doing something like Tru64 and not mapping
the bottom 32-bits of address space and putting everything in a 64-bit
environment. Just breaks all 32-bit descriptors (how many of those are built
in open code much less all the compilers, RTLs, etc.?). Breaks anybody from
VAX who thinks they can use %LOC in Fortran and put the value into an
INTEGER*4 or an 'int' in C or a 32-bit item list? You get the idea. Of
course, Macro-32 lets you have that kind of fun and more.
[Yes, we are rewriting more things out of Macro-32 - I'm still paying out my
dollar-per-module bounty to other developers who convert from Macro-32 to
anything else. I think I owe a few folks $10s of dollars. Money well spent.]
C's /POINTER_SIZE=64 (and Pascal's quadword pointers and Fortran's POINTER64)
automatically turn your memory allocation to a 64-bit memory allocation.
[Which in turn screwed over people then using that address in a 32-bit
descriptor by the way.] Of course, C keeping "long" to be 32-bit even with
/POINTER_SIZE=64 did expand the vocabulary of many people who wanted to learn
new swear words.
And then there is RMS. Even with RAB64, there is still a 32-bit pointer back
to the parent FAB so FABs better be in 32-bit space (stack or heap). The
pointer size ripples to all corners.
For x86, where we don't have to worry about mixing in objects compiled 20
years ago, I (and others) have pushed for more 64-bit pointers. Again, it
just ripples into lots of code. For instance, it would mean that BASIC would
want to use 64-bit class D descriptors which in turn puts pressure on code
that BASIC might call passing such string descriptors.
Well, that's a rather unsavory thought. Now, where did I leave my rope? Tar,
feathers, and a rail also might be required ....

You may say we should not have done so, but every program in our apps plays
around inside descriptors.
o***@gmail.com
2017-04-18 14:32:45 UTC
Reply
Permalink
Raw Message
What I'd like to see is DCL add a "SET SCRIPT_LANGUAGE xxx" command that would switch the interpreter to xxx. The difference would be that it would spawn the xxx engine with a special interface giving the interpreter callbacks to the parent process. The most important would be getting and setting DCL symbols (i.e. 'export') and a 'system' function for having the parent execute a command, with I/O redirection options rolled in.
Stephen Hoffman
2017-04-18 23:44:36 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
What I like to see is DCL add a "SET SCRIPT_LANGUAGE xxx" that would
switch the interpreter to xxx. The difference would be it would spawn
the xxx engine with a special interface giving the interpreter special
callbacks to the parent process. The most important would be getting
and setting DCL symbols (i.e. 'export') and the 'system' function for
having the parent execute a command with I/O redirection options rolled
in.
Passing data back via logical name is typical for these cases, and the
hack that folks have been using for many years. Exporting back via a
DCL symbol would be an interesting extension to what's available now
within DCL, but doesn't seem to warrant any specific new DCL commands
beyond some variant of the existing equate operator, or (as suggested)
some sort of export-like command or (ugh) LOGOUT /SYMBOL.
LOGINOUT/SYMBOL or some new sort of symbol scoping wouldn't be my
choice, though. This whole area (how the mailboxes work between the
parent and the subprocess) isn't particularly well documented.

Depending on where you're headed with this if not exporting a symbol
back; if not a way to return data into DCL via symbol... For command
line interpreters... HELP SPAWN /CLI has been around for most of the
history of OpenVMS.

For scripting languages such as Python or Perl and as ugly as both of
these two approaches might be, the so-called shebang and file extension
processing are typical approaches for invoking scripts in various
languages. Having to tell DCL explicitly what to do with a particular
scripting language seems... clunky.

Implementing scripting languages as interpreters would be an
interesting idea, but whether any of that would play nicely with what
sys$cli implements for interprocess communications between the parent
and spawned subprocess. How easy porting the interpreter and running
an arbitrary language in supervisor mode — CLIs use supervisor mode in
OpenVMS — might be? Whether a CLI implementation fits with how most
scripting languages work?

I/O redirection and piping as available in OpenVMS — DCL and otherwise,
SPAWN or PIPE or otherwise — is somewhat lacking in comparison with
other platforms, and the whole idea of piping around ASCII or MCS text
would at best be catching up with what's been around for ten or twenty
years or longer in other platforms and implementations. Nice in
comparison to OpenVMS as currently implemented, but badly lacking
otherwise. Passing around objects rather than streams of text would
be a rather more modern approach for a wholly new implementation, too.
PowerShell implements this approach.
https://msdn.microsoft.com/en-us/powershell/scripting/getting-started/fundamental/about-windows-powershell
--
Pure Personal Opinion | HoffmanLabs LLC
Bob Koehler
2017-04-19 15:17:55 UTC
Reply
Permalink
Raw Message
What I like to see is DCL add a "SET SCRIPT_LANGUAGE xxx" that would
switch the interpreter to xxx. The difference would be it would spawn
the xxx engine with a special interface giving the interpreter special
callbacks to the parent process. The most important would be getting
and setting DCL symbols (i.e. 'export') and the 'system' function for
having the parent execute a command with I/O redirection options
rolled in.
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
o***@gmail.com
2017-04-19 19:10:27 UTC
Reply
Permalink
Raw Message
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
Craig A. Berry
2017-04-20 02:42:16 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
You mean like running an ordinary user-mode program? Then why not just
run an ordinary user-mode program? If you want to affect supervisor-mode
things that will persist after image exit like the DCL symbols you
mentioned, then you just have to call the right services. Perl does that
with the VMS::DCLsym. Python may also have a way to do this but I don't
know for sure. bash not so much, but it would just be a matter of making
a "super_export" built-in to create symbols (or logicals) that persist
after bash exits.
Arne Vajhøj
2017-04-28 17:20:53 UTC
Reply
Permalink
Raw Message
Post by Craig A. Berry
Post by o***@gmail.com
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
You mean like running an ordinary user-mode program? Then why not just
run an ordinary user-mode program? If you want to affect supervisor-mode
things that will persist after image exit like the DCL symbols you
mentioned, then you just have to call the right services. Perl does that
with the VMS::DCLsym. Python may also have a way to do this but I don't
know for sure. bash not so much, but it would just be a matter of making
a "super_export" built-in to create symbols (or logicals) that persist
after bash exits.
It is certainly one way.

But it would mean moving to the traditional *nix paradigm of
starting a new process for each image activation.

Probably OK. The overhead of process creation on today's hardware
cannot be that big.

Arne
Bob Koehler
2017-04-20 13:19:01 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
Starting a user mode thread from supervisor mode isn't exactly rocket
science.

Of course, UNIX/C programmers may be lost just trying to think about
it.
John Reagan
2017-04-20 13:56:10 UTC
Reply
Permalink
Raw Message
Post by Bob Koehler
Post by o***@gmail.com
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
Starting a user mode thread from supervisor mode isn't exactly rocket
science.
Of course, UNIX/C programmers may be lost just trying to think about
it.
I don't know what UNIX/C has to do with it. I've been doing VMS since 1981 and consider myself fluent in VAX Macro-32, BLISS, as well as UNIX/C, etc. and I can't do it. My first guess would be to fake up some return address information and do an REI to lower the mode. The REI should know about switching stacks, etc. Am I close?
Arne Vajhøj
2017-04-28 17:31:48 UTC
Reply
Permalink
Raw Message
Post by Bob Koehler
Post by o***@gmail.com
Post by Bob Koehler
Why spawn a separate process? Why not just map a different CLI into
the current process? A data structure could probably be worked out
to hold symbolic data and pass them between CLIs.
So it can run in user mode and pull in any libraries it sees fit.
Starting a user mode thread from supervisor mode isn't exactly rocket
science.
Of course, UNIX/C programmers may be lost just trying to think about
it.
Can threads have different mode??

What is stored for PS in KS??

Arne
Bob Koehler
2017-05-01 13:41:11 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Can threads have different mode??
I am using "thread" in a very general sense.
Post by Arne Vajhøj
What is stored for PS in KS??
Depends on how you set up the thread. I would expect it to be the
same as, or less privileged than, the mode of the thread.
Arne Vajhøj
2017-05-01 19:19:26 UTC
Reply
Permalink
Raw Message
Post by Bob Koehler
Post by Arne Vajhøj
Can threads have different mode??
I am using "thread" in a very general sense.
Threads have a rather specific meaning.
Post by Bob Koehler
Post by Arne Vajhøj
What is stored for PS in KS??
Depends on how you set up the thread. I would expect it to be the
same as, or less privildged than, the mode of the thread.
In the setup you described you have two threads:
* one executing in supervisor mode running the CLI
* one executing in user mode started by the first

Those two threads share KS. That KS has one slot for PS.

What is in it supervisor mode or user mode?

Arne
Bob Koehler
2017-05-02 17:30:55 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Post by Bob Koehler
I am using "thread" in a very general sense.
Threads have a rather specific meaning.
Maybe in your head. So if you want to play the
rose-by-any-other-name game, pick a word that in your head is generic
for meaning some manner of two logically separate contexts of
execution.
Arne Vajhøj
2017-05-02 18:50:06 UTC
Reply
Permalink
Raw Message
Post by Bob Koehler
Post by Arne Vajhøj
Post by Bob Koehler
I am using "thread" in a very general sense.
Threads have a rather specific meaning.
Maybe in your head.
In the IT industry.

Arne
David Froble
2017-05-02 19:50:58 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Post by Bob Koehler
Post by Arne Vajhøj
Post by Bob Koehler
I am using "thread" in a very general sense.
Threads have a rather specific meaning.
Maybe in your head.
In the IT industry.
Arne
Ah, as in this particular topic?

"Mark thread as read"

"Sort by thread"

Perhaps some may consider the noun one thing, but it's too general for everyone
to do so.
Arne Vajhøj
2017-05-02 20:40:34 UTC
Reply
Permalink
Raw Message
Post by David Froble
Post by Arne Vajhøj
Post by Bob Koehler
Post by Arne Vajhøj
Post by Bob Koehler
I am using "thread" in a very general sense.
Threads have a rather specific meaning.
Maybe in your head.
In the IT industry.
Ah, as in this particular topic?
"Mark thread as read"
"Sort by thread"
Perhaps some may consider the noun one thing, but it's too general for
everyone to do so.
The context was execution within an OS not reading usenet.

https://en.wikipedia.org/wiki/Thread_(computing)

vs

https://en.wikipedia.org/wiki/Conversation_threading

Arne
Richard Levitte
2017-04-03 06:20:48 UTC
Reply
Permalink
Raw Message
Post by Henry Crun
IIRC somewhere in DEC's reply was that "DCL is not meant to be a programming
language"
Also, I can only assume that the irony isn't lost on anyone who's read... oh I dunno, vmsinstal.com? ;-)

Cheers,
Richard
Robert A. Brooks
2017-04-03 16:20:32 UTC
Reply
Permalink
Raw Message
Post by Richard Levitte
Post by Henry Crun
IIRC somewhere in DEC's reply was that "DCL is not meant to be a
programming language"
Also, I can only assume that the irony isn't lost on anyone who's
read... oh I dunno, vmsinstal.com? ;-)
It's not that bad. Many (most?) of the command procedures that ship on VMS
have been run through the DCLDIET.COM procedure that strips out comments and
reduces white space to a bare minimum.

The actual source that we maintain is relatively well-written and easy to read.
--
-- Rob
Richard Levitte
2017-04-03 20:17:35 UTC
Reply
Permalink
Raw Message
Post by Robert A. Brooks
Post by Richard Levitte
Post by Henry Crun
IIRC somewhere in DEC's reply was that "DCL is not meant to be a
programming language"
Also, I can only assume that the irony isn't lost on anyone who's
read... oh I dunno, vmsinstal.com? ;-)
It's not that bad. Many (most?) of the command procedures that ship on VMS
have been run through the DCLDIET.COM procedure that strips out comments and
reduces white space to a bare minimum.
The actual source that we maintain is relatively well-written and easy to read.
That isn't the irony I'm seeing. The irony I saw is claiming that DCL isn't meant to be a programming language, and then use it as a programming language...

Cheers,
Richard
Kerry Main
2017-04-03 20:58:43 UTC
Reply
Permalink
Raw Message
-----Original Message-----
Richard Levitte via Info-vax
Sent: April 3, 2017 4:18 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and library
functions from DCL ?
Post by Robert A. Brooks
Post by Richard Levitte
Post by Henry Crun
IIRC somewhere in DEC's reply was that "DCL is not meant to be a
programming language"
Also, I can only assume that the irony isn't lost on anyone who's
read... oh I dunno, vmsinstal.com? ;-)
It's not that bad. Many (most?) of the command procedures that ship on
VMS have been run through the DCLDIET.COM procedure that strips out
comments and reduces white space to a bare minimum.
The actual source that we maintain is relatively well-written and easy
to read.
That isn't the irony I'm seeing. The irony I saw is claiming that DCL isn't
meant to be a programming language, and then use it as a
programming language...
Now if only Microsoft had adopted more from OpenVMS and integrated this 3rd party addon as well - XLNT.

Perhaps the need for PowerShell would have been less?

<https://www.advsyscon.com/en-us/products/xlnt-scripting/xlnt-description>
"XLNT®, the Enterprise Command and Scripting Language is a powerful, and yet easy to use approach to improve Windows System Administration. XLNT's full featured commands, powerful built-in functions (i.e. Lexicals) and easy to use language improve System Administrators and Application Developers productivity by simplifying access, through a Command Line Interface, to all Windows securable objects."

And btw, from what I recall, the functionality XLNT is very impressive. Check out the sample screen shots. There is an eval kit as well.

😊


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2017-04-03 22:44:49 UTC
Reply
Permalink
Raw Message
Post by Kerry Main
Now if only Microsoft had adopted more from OpenVMS and integrated this
3rd party addon as well - XLNT.
Yeah; the XLNT package and products from the folks at Sector7 have been
useful for various folks porting from OpenVMS to Windows.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2017-04-04 01:22:03 UTC
Reply
Permalink
Raw Message
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: April 3, 2017 6:45 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and library
functions from DCL ?
Post by Kerry Main
Now if only Microsoft had adopted more from OpenVMS and integrated
this 3rd party addon as well - XLNT.
Yeah; the XLNT package and products from the folks at Sector7 have
been useful for various folks porting from OpenVMS to Windows.
The way I remember it was Sector 7 appeared to be DEC/Compaq friendly,
even attended DEC/Compaq events, but then ended up talking Cust's
offline into moving to AIX.

Hence, the following IBM announcement from 2003:
<https://www-03.ibm.com/press/us/en/pressrelease/6037.wss>
"IBM Acquires Application Porting Services Business From Privately
Held Sector7"

Nice partner.

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
David Froble
2017-04-04 02:15:24 UTC
Reply
Permalink
Raw Message
Post by Simon Clubley
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: April 3, 2017 6:45 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and
library
functions from DCL ?
Post by Kerry Main
Now if only Microsoft had adopted more from OpenVMS and integrated
this 3rd party addon as well - XLNT.
Yeah; the XLNT package and products from the folks at Sector7 have
been useful for various folks porting from OpenVMS to Windows.
The way I remember it was Sector 7 appeared to be DEC/Compaq friendly,
even attended DEC/Compaq events, but then ended up talking Cust's
offline into moving to AIX.
<https://www-03.ibm.com/press/us/en/pressrelease/6037.wss>
"IBM Acquires Application Porting Services Business From Privately
Held Sector7"
Nice partner.
Well, to be a bit fair, some of this was happening when DEC itself was
saying "get off VMS". Well, OK, some people at DEC, perhaps not all of them.
Kerry Main
2017-04-04 13:33:08 UTC
Reply
Permalink
Raw Message
-----Original Message-----
David Froble via Info-vax
Sent: April 3, 2017 10:15 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and library
functions from DCL ?
Post by Simon Clubley
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: April 3, 2017 6:45 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and
library
functions from DCL ?
Post by Kerry Main
Now if only Microsoft had adopted more from OpenVMS and
integrated
Post by Kerry Main
this 3rd party addon as well - XLNT.
Yeah; the XLNT package and products from the folks at Sector7 have
been useful for various folks porting from OpenVMS to Windows.
The way I remember it was Sector 7 appeared to be DEC/Compaq friendly,
even attended DEC/Compaq events, but then ended up talking Cust's
offline into moving to AIX.
<https://www-03.ibm.com/press/us/en/pressrelease/6037.wss>
"IBM Acquires Application Porting Services Business From Privately
Held Sector7"
Nice partner.
Well, to be a bit fair, some of this was happening when DEC
themselves was saying "get off VMS". Well, Ok, some people at DEC,
perhaps not all of them.
Well, let's not forget that other large companies had the same issue,
e.g. at IBM, where the Linux sales advocates were busy promoting
migrating from AIX to Linux.

Having stated this, the practice of telling a partner you will work
with them and help move their customers to Linux/Windows with your
porting tools, and then in the background selling those same customers
on "high end UNIX" i.e. AIX, is not what most would call ethical
practice.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
David Froble
2017-04-04 14:43:37 UTC
Reply
Permalink
Raw Message
Post by Kerry Main
Having stated this, the practice of telling a partner you will work
with them and help their move Cust's to Linux/Windows with your
porting tools and then in the background sell those same Cust's on
"high end UNIX" i.e. AIX is not what most would call ethical
practices.
So, you're suggesting that these people learned their business practices from
Microsoft?

I remember the story, can't say it actually happened, that some company working
with Microsoft had things go bad, and when they asked a Microsoft person "what
went wrong", the reply was something like "your problem is you trusted us".

I'm too lazy to research that story ....
Stephen Hoffman
2017-04-04 15:25:56 UTC
Reply
Permalink
Raw Message
Post by Simon Clubley
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: April 3, 2017 6:45 PM
Subject: Re: [Info-vax] Access to _all_ VMS system services and
library
functions from DCL ?
Post by Kerry Main
Now if only Microsoft had adopted more from OpenVMS and integrated this
3rd party addon as well - XLNT.
Yeah; the XLNT package and products from the folks at Sector7 have been
useful for various folks porting from OpenVMS to Windows.
The way I remember it was Sector 7 appeared to be DEC/Compaq friendly,
even attended DEC/Compaq events, but then ended up talking Cust's
offline into moving to AIX.
Didn't senior DEC and Compaq management folks openly tell existing DEC
and Compaq customer folks — including folks using OpenVMS — that
Windows was the future, too? Affinity, et al. In retrospect and
looking at the installed base sizes and trends, those management folks
were clearly mostly right, too. Windows on the desktop. Windows
Server and Linux both ate most of the classic server market too, and
greatly expanded the whole server market. But I digress.

As for your recollection... Ponder the same situation from the
perspective of the customers involved. Not from the perspective of
the vendor. The customers that are or were buying OpenVMS. Look at
why customers have decided to port off of OpenVMS. If they were
satisfied, then it's far more difficult for other vendors to acquire
those customers. Customers don't want to port. It's expensive,
wasteful and tedious. (q.v. the VSI OpenVMS x86-64 port.) Helping
those same existing customer folks become more effective and efficient,
while also acquiring enough and over time more new folks as customers,
is the only way there will be any future for OpenVMS. Ensuring those
customers have few or no reasons to port. The port is one part of
this, as are the other projects under way and on the whiteboards in
Bolton and other VSI offices.

Making OpenVMS more efficient, easier to use, easier to understand, and
more cost-effective.... Seems the best way to reduce the numbers of
folks porting away from OpenVMS, and to pick up new customers and
wholly new applications. Looking at 2022 and 2027 here, not at
OpenVMS in the last millennium. Forward. Not at what didn't trend
all that well last time.
--
Pure Personal Opinion | HoffmanLabs LLC
Clark G
2017-04-11 18:11:04 UTC
Post by Robert A. Brooks
It's not that bad. Many (most?) of the command procedures that ship on
VMS have been run through the DCLDIET.COM procedure that strips out
comments and reduces white space to a bare minimum.
The actual source that we maintain is relatively well-written and easy to read.
Is that done to hide information the customer should not see, or for
performance reasons, or to save disk space or all three?

Did the VMS source fiche that used to be provided have the pre-DCLDIET.COM
version?
--
Clark G
* take away the em's to reply
d***@gmail.com
2017-04-11 18:56:15 UTC
Post by Clark G
Post by Robert A. Brooks
It's not that bad. Many (most?) of the command procedures that ship on
VMS have been run through the DCLDIET.COM procedure that strips out
comments and reduces white space to a bare minimum.
Is that done to hide information the customer should not see, or for
performance reasons, or to save disk space or all three?
Performance and disk space in the VAX days.
Post by Clark G
--
Clark G
* take away the em's to reply
Paul Sture
2017-04-11 20:29:03 UTC
Post by Clark G
Post by Robert A. Brooks
It's not that bad. Many (most?) of the command procedures that ship on
VMS have been run through the DCLDIET.COM procedure that strips out
comments and reduces white space to a bare minimum.
The actual source that we maintain is relatively well-written and easy to read.
Is that done to hide information the customer should not see, or for
performance reasons, or to save disk space or all three?
In VAX days there was a pretty overwhelming case for doing that for
performance reasons alone. The overhead of the kind of DCL seen in
VMSINSTAL.COM was nowhere near as bad on Alpha.

For reference, in 1985 I wrote a quite complex VMSINSTAL procedure
involving several hundred files over 8 savesets. On the 11/750 I used
for that job, I got the installation time down to ~25 minutes by
restoring the contents of some of those savesets directly, using BACKUP
commands instead of the VMSINSTAL routines PROVIDE_FILE et al.

Someone later put those VMSINSTAL routines back (dunno why), increasing
the installation time to something like an hour and a half. Quite
frustrating when I knew the whole process didn't need to take so long.
--
The First of April: The only day of the year that people critically
evaluate news stories before believing them.
Stephen Hoffman
2017-04-12 16:29:21 UTC
Post by Clark G
Post by Robert A. Brooks
It's not that bad. Many (most?) of the command procedures that ship on
VMS have been run through the DCLDIET.COM procedure that strips out
comments and reduces white space to a bare minimum.
The actual source that we maintain is relatively well-written and easy to read.
Is that done to hide information the customer should not see, or for
performance reasons, or to save disk space or all three?
With the exception of the few files among the expurgated listings
files, the source files and the DCL command procedure un-dieted files
are all on the OpenVMS source listings kits. The original reasons for
dieting and other related shenanigans were to save storage space on the
disks and on the old patch tapes, and because DCL is slow. The need
for dieting has become largely irrelevant with faster and more
capacious hardware, and with the advent of PCSI compression needed to
stuff OpenVMS onto DVD media.

The more recent reason is of priorities and inertia; removing the
processing would involve added work and incur some risk of problems,
and there's other work deemed higher priority.

The expurgated files are those considered LMF-related, third-party
proprietary, or similarly constrained, and very few files and
facilities are included among those. The un-dieted versions of
VMSINSTAL, AUTOGEN and the rest are included on the OpenVMS source
listings kits.

Even well-structured, DCL procedures such as VMSINSTAL and AUTOGEN are
big and complex and clunky, and — having just seen a barrage of DCL
error messages from underneath TCPIP$CONFIG — not always easy to write
or maintain or extend, and a whole lot of glue code is always involved.
Post by Clark G
Did the VMS source fiche that used to be provided have the
pre-DCLDIET.COM version?
Yes. It's also been on the source listings optical media, which
replaced the fiche decades ago. Listings kits were available for
around $2K.

DCLDIET is best ignored and left with VMSINSTAL VUPs and the rest of
the old baggage, but reasons. Lacking any integrated lint-like tool,
DCLCHECK is still quite useful, though.
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2017-04-12 17:58:56 UTC
Post by Stephen Hoffman
Even well-structured, DCL procedures such as VMSINSTAL and AUTOGEN are
big and complex and clunky, and — having just seen a barrage of DCL
error messages from underneath TCPIP$CONFIG — not always easy to write
or maintain or extend, and a whole lot of glue code is always involved.
[snip]
Post by Stephen Hoffman
DCLDIET is best ignored and left with VMSINSTAL VUPs and the rest of
the old baggage, but reasons. Lacking any integrated lint-like tool,
DCLCHECK is still quite useful, though.
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?

And yes, a DCL version of something like jshint would be a very useful
tool to be supplied as part of VMS for when you are writing something
in DCL.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Craig A. Berry
2017-04-12 23:43:47 UTC
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
DCL, obviously. It doesn't have "use strict" and hasn't had a
multi-million-dollar avalanche of resources spent on making it faster,
safer, and more feature-rich, plus massive tooling and stricter language
variations such as TypeScript.
Post by Simon Clubley
And yes, a DCL version of something like jshint would be a very useful
tool to be supplied as part of VMS for when you are writing something
in DCL.
Did you mean jslint? Yes, that would be a start.
David Froble
2017-04-13 01:30:07 UTC
Post by Craig A. Berry
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
DCL, obviously. It doesn't have "use strict" and hasn't had a
multi-million-dollar avalanche of resources spent on making it faster,
safer, and more feature-rich, plus massive tooling and stricter language
variations such as TypeScript.
Post by Simon Clubley
And yes, a DCL version of something like jshint would be a very useful
tool to be supplied as part of VMS for when you are writing something
in DCL.
Did you mean jslint? Yes, that would be a start.
There is Brian's DCL debugger, which I've not had the opportunity to try out.

And I still don't consider DCL a programming language ....

Scripting, yes. Now, someone will ask me what is the difference ....
Simon Clubley
2017-04-13 17:54:34 UTC
Post by David Froble
Post by Craig A. Berry
Did you mean jslint? Yes, that would be a start.
There is Brian's DCL debugger, which I've not had the opportunity to try out.
Instead of a debugger, which only allows you to look at the code after
it's gone wrong, think edit time checker which allows a static check
of the code as soon as you leave the editor and before running the
code in question.

It picks up some of the things a compiler or lint checker would do in
a traditional compiled language.
Post by David Froble
And I still don't consider DCL a programming language ....
Scripting, yes. Now, someone will ask me what is the difference ....
Some scripting languages _are_ programming languages these days
(Python comes to mind here).

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Simon Clubley
2017-04-13 17:48:09 UTC
Post by Craig A. Berry
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
DCL, obviously. It doesn't have "use strict" and hasn't had a
multi-million-dollar avalanche of resources spent on making it faster,
safer, and more feature-rich, plus massive tooling and stricter language
variations such as TypeScript.
That's my opinion as well.
Post by Craig A. Berry
Post by Simon Clubley
And yes, a DCL version of something like jshint would be a very useful
tool to be supplied as part of VMS for when you are writing something
in DCL.
Did you mean jslint? Yes, that would be a start.
No, I meant jshint. It's a fork of jslint but without all the annoying
style nonsense that jslint throws at you. For example, I for one use
tabs and not multiple spaces and am very happy to do so.

Look at http://jshint.com/ for further information about JSHint.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Arne Vajhøj
2017-04-12 23:51:22 UTC
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
JavaScript is a much more powerful language than DCL and the
available tools are also much better.

That does not imply that DCL is bad. It just implies that DCL
was not designed to write large programs in.

Arne
Bob Koehler
2017-04-13 14:11:29 UTC
Post by Arne Vajhøj
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
JavaScript is a much more powerful language than DCL and the
available tools are also much better.
JavaScript has no file copy command. Or backup command. Or directory
listing command ...
Arne Vajhøj
2017-04-28 17:37:03 UTC
Post by Bob Koehler
Post by Arne Vajhøj
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
JavaScript is a much more powerful language than DCL and the
available tools are also much better.
JavaScript has no file copy command. Or backup command. Or directory
listing command ...
Neither does DCL.

It uses external executables for those.

Based on CLITABLES.

But true - JavaScript would need to be made to support that as well.

Arne
Bob Koehler
2017-05-01 13:44:08 UTC
Post by Arne Vajhøj
Post by Bob Koehler
Post by Arne Vajhøj
Post by Simon Clubley
I wonder what people would consider more tricky to write reliably;
a large DCL program or a large Javascript program ?
JavaScript is a much more powerful language than DCL and the
available tools are also much better.
JavaScript has no file copy command. Or backup command. Or directory
listing command ...
Neither does DCL.
It uses external executables for those.
Based on CLITABLES.
The DCL CLI table is very much a part of DCL. It has a COPY command,
so DCL has a COPY command. Doesn't have to. COPY.EXE could be
triggered by the FEED command, if that's what you want in your table.

I never said a command had to be internally implemented, which seems
to be the source of your claim.

But what "command" is built into JavaScript that does a file copy?
Stephen Hoffman
2017-04-15 21:53:11 UTC
I wonder what people would consider more tricky to write reliably; a
large DCL program or a large Javascript program ?
Of those two?

I'd prefer ECMAScript over DCL, though there's no support for command
operations on OpenVMS and nothing akin to the automation tooling on
macOS.

Meaning that DCL will be used.

Discussions of various problems with the language aside, ECMAScript is
making language and capability improvements, too. DCL, not so much.

Linux is a large and complex application, so here's an example of what
can be done with ECMAScript...
http://bellard.org/jslinux/

Here's how:
https://github.com/kripken/emscripten

Automation tools using JavaScript / ECMAScript:
https://developer.apple.com/library/content/documentation/LanguagesUtilities/Conceptual/MacAutomationScriptingGuide/index.html

https://developer.apple.com/videos/play/wwdc2014/306/

And should it both become more prevalent and should it also work better
than ECMAScript in one or more useful ways...
http://webassembly.org
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-04-04 00:43:01 UTC
Post by t***@glaver.org
Post by Simon Clubley
Is there any interest in getting access to _all_ the VMS system
services and library functions directly from DCL ?
I seem to recall this being a wishlist / SIR from a very long time
ago. Does anybody remember what DEC's response to it was? [Impossible
for some reason(s) vs. too much work, for example.]
I think that the functions could be divided into 3 categories:
* those that are perfect fit for DCL
* those that could be implemented but the API really should be changed
to make sense in DCL
* those that would be impossible or at least very cumbersome to
implement in DCL due to DCL being too high level

Arne
Simon Clubley
2017-04-04 18:21:59 UTC
Post by Arne Vajhøj
Post by t***@glaver.org
Post by Simon Clubley
Is there any interest in getting access to _all_ the VMS system
services and library functions directly from DCL ?
I seem to recall this being a wishlist / SIR from a very long time
ago. Does anybody remember what DEC's response to it was? [Impossible
for some reason(s) vs. too much work, for example.]
* those that are perfect fit for DCL
* those that could be implemented but the API really should be changed
to make sense in DCL
* those that would be impossible or at least very cumbersome to
implement in DCL due to DCL being too high level
This is why there really should be a marshalling system for the types
to convert between the DCL types and the VMS native types so you can
mostly eliminate this problem. This is also part of why the VMS
development process needs an automatic interface generator if you
do this.

As mentioned in my original post, some aspects might be challenging
in DCL as it stands today. For example, how do you marshal an
itemlist as seen by DCL into an itemlist directly usable by VMS ?

It would be a lot easier if DCL had a lists within lists data
structure or a lists within an array data structure. As it
stands, you would probably need a new DCL intrinsic to allow
you to manually build a usable itemlist from within DCL.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
o***@gmail.com
2017-04-21 02:37:20 UTC
Post by Simon Clubley
This is why there really should be a marshalling system for the types
to convert between the DCL types and the VMS native types so you can
mostly eliminate this problem. This is also part of why the VMS
development process needs an automatic interface generator if you
do this.
Alternatively, and perhaps the more prudent approach, would be to solve the problem completely by incorporating a proper marshaling system into OpenVMS as a whole module/library.

On another system this would be more difficult, but the Common Language Environment is already a big step towards this, as it provides a language-independent interface system to all the OpenVMS languages. -- So we could extend it by incorporating IBM's System Object Model [or something very similar].

Adding marshaling to the complete system then becomes the addition of a set of methods to the base SOM_Object metaclass: Input and Output. -- Both of these would take a Stream as an input (the stream itself might be read-only, write-only, or read/write WRT data-flow) and either the actual SOM_Object-type as a result (for Input) or an additional parameter of [the value of] the actual SOM_Object type in the case of Output.

[IIUC] The only modification to the SOM-model I'd recommend is that a clear distinction be made WRT parameters and return-values between "this type" and "this type, or any of its descendants".

If you have a CLE-compliant Ada-95+ compiler, then you already have an implementation of Input and Output, or at least a *very* good start thereon.
Post by Simon Clubley
As mentioned in my original post, some aspects might be challenging
in DCL as it stands today. For example, how do you marshal an
itemlist as seen by DCL into an itemlist directly usable by VMS ?
See above: you define a common method/library for everything (the system, application-programmers, and user-programs, all) and use that -- after all, the DCL program is a program and, as such, should have access to the CLE and system-libraries.
Post by Simon Clubley
It would be a lot easier if DCL had a lists within lists data
structure or a lists within an array data structure. As it
stands, you would probably need a new DCL intrinsic to allow
you to manually build a usable itemlist from within DCL.
That sounds like an extension to the DCL-language, rather than the DCL-program.
Stephen Hoffman
2017-04-21 13:38:57 UTC
Post by o***@gmail.com
Post by Simon Clubley
This is why there really should be a marshalling system for the types
to convert between the DCL types and the VMS native types so you can
mostly eliminate this problem. This is also part of why the VMS
development process needs an automatic interface generator if you
do this.
Alternatively, and perhaps the more prudent approach, would be to solve
the problem completely by incorporating a proper marshaling system
into OpenVMS as a whole module/library.
On another system this would be more difficult, but the Common Language
Environment is already a big step towards this, as it provides a
language-independent interface system to all the OpenVMS languages. --
So we could extend it by incorporating IBM's System Object Model [or
something very similar].
ASN.1 is one of the approaches used, and is sort-of available on OpenVMS
via OpenSSL. XML and JSON are others, though those would have to be added;
or something akin to the archiving and unarchiving support — marshalling
and unmarshalling — in macOS, for apps that don't care to interchange
the data.

https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Archiving/Articles/codingobjects.html#//apple_ref/doc/uid/20000948-BCIHBJDE


There'll still need to be database access for apps that need that,
either RMS for the stuff that needs key-value or other NoSQL storage,
relational into SQLite and maybe PostgreSQL, and some mechanisms for
storing preferences and settings — all the dreck that's now using DEC C
logical names or random turd files — into an application-specific
bundle; some sort of app-specific preferences file, using JSON or maybe
SQLite or whatever. That app-specific user or system configuration
data which can't and shouldn't be moved out into LDAP, that is.
Post by o***@gmail.com
Post by Simon Clubley
It would be a lot easier if DCL had a lists within lists data structure
or a lists within an array data structure. As it stands, you would
probably need a new DCL intrinsic to allow you to manually build a
usable itemlist from within DCL.
That sounds like an extension to the DCL-language, rather than the DCL-program.
Sometimes wholesale replacement and migration is better than
extensions. Because itemlists are an utter and complete disaster.
But then I'm feeling polite today. Itemlists push the hassles of
dealing with API software changes up into the application code, and
require acres of glue code for not-very-much benefit. There are now
far better ways — simpler abstractions — to allow the same sorts of
changes and extensions, and with vastly simpler application code.
Competing with grafted-on incremental changes and more DCL syntax
clutter atop a solution from the 1970s isn't going to sway a lot of
folks to either move forward — existing users — or to adopt OpenVMS —
new users and new deployments.
--
Pure Personal Opinion | HoffmanLabs LLC
o***@gmail.com
2017-04-21 17:57:26 UTC
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Simon Clubley
This is why there really should be a marshalling system for the types
to convert between the DCL types and the VMS native types so you can
mostly eliminate this problem. This is also part of why the VMS
development process needs an automatic interface generator if you
do this.
Alternatively, and perhaps the more prudent approach, would be to solve
the problem completely by incorporating a proper marshaling system
into OpenVMS as a whole module/library.
On another system this would be more difficult, but the Common Language
Environment is already a big step towards this, as it provides a
language-independent interface system to all the OpenVMS languages. --
So we could extend it by incorporating IBM's System Object Model [or
something very similar].
ASN.1 is one of the approaches used and is sort-of available on OpenVMS
via OpenSSL, XML and JSON are others though that'd have to be added, or
something akin to the archiving and unarchiving support — marshalling
and unmarshalling
ASN.1 is sadly overlooked a lot in our industry.

If you were to upgrade that "sort of" support to full support, then ASN.1 could serialize to JSON or XML too:
* https://www.obj-sys.com/docs/JSONEncodingRules.pdf
* https://en.wikipedia.org/wiki/XML_Encoding_Rules#Example_encoded_in_XER
Post by Stephen Hoffman
There'll still need to be database access for apps that need that,
either RMS for the stuff that needs key-value or other NoSQL storage,
relational into SQLite and maybe PostgreSQL, and some mechanisms for
storing preferences and settings — all the dreck that's now using DEC C
logical names or random turd files — into an application-specific
bundle; some sort of app-specific preferences file, using JSON or maybe
SQLite or whatever. That app-specific user or system configuration
data which can't and shouldn't be moved out into LDAP, that is.
??
I'm not sure why DB-apps come up -- it is rather orthogonal to serialization/deserialization functions.

Insofar as a stream-based system (as laid out) goes, there's no difference in reading whether the actual storage is a DB, RAM, ROM, etc., just as for writing it wouldn't matter if it was RAM, disk, DB, radio-antenna, etc. (At that level of abstraction we're just dealing with the properties of readability/writability.)

(ASN.1 would be a good method to enact/realize the serialization/deserialization methods; integrating SOM [or similar] into the CLE, such that all types have serialize/deserialize methods associated with the type, would ensure that all types/languages could access those methods.)
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Simon Clubley
It would be a lot easier if DCL had a lists within lists data structure
or a lists within an array data structure. As it stands, you would
probably need a new DCL intrinsic to allow you to manually build a
usable itemlist from within DCL.
That sounds like an extension to the DCL-language, rather than the DCL-program.
Sometimes wholesale replacement and migration is better than
extensions. Because itemlists are an utter and complete disaster.
But then I'm feeling polite today. Itemlists push the hassles of
dealing with API software changes up into the application code, and
require acres of glue code for not-very-much benefit. There are now
far better ways — simpler abstractions — to allow the same sorts of
changes and extensions, and with vastly simpler application code.
Competing with grafted-on incremental changes and more DCL syntax
clutter atop a solution from the 1970s isn't going to sway a lot of
folks to either move forward — existing users — or to adopt OpenVMS —
new users and new deployments.
Well, I am interested in OSes and design -- and certainly have my own opinions on how CLIs should be -- so, how would you design a replacement?
Stephen Hoffman
2017-04-21 19:21:18 UTC
Post by o***@gmail.com
I'm not sure why DB-apps come up -- it is rather orthogonal to
serialization/deserialization functions.
Because you have to marshal and unmarshal the data into and out of some
sort of storage, you want that to be reasonably extensible and
upgradeable over time, and you have to store the metadata definitions,
and you have to set up some structures to allow apps to cooperate at
some level. Which means you're either soon using an existing and
probably relational database, or you're spending substantial effort
dragging RMS forward to deal better with upgrades and record-format
changes, cleanups and online backups. Among other details.

OpenVMS provided these sorts of abstractions years ago using RMS, for
instance. Decades ago, having an integrated and common and useful
file system was really handy, as various then-contemporary app
developers had their own app-specific formats. There've been few
efforts to provide OpenVMS with better and higher-level abstractions
and more powerful APIs in more recent years, though.

...
Post by o***@gmail.com
Well, I am interested in OSes and design -- and certainly have my own
opinions on how CLIs should be -- so, how would you design a
replacement?
Depends on the target and the budget. If I'm aiming for less porting
effort and a mostly-like-current-DCL environment, there are constraints
in what can be changed. If I'm moving further forward, then there's
more room for change. I'd look to OO, to start with. I'd be
borrowing good ideas from newer tools, eschewing the worst, and taking
a long and careful look at where folks are spending their time with the
existing alternatives. Borrowing the best of the ideas and approaches
from DCL, Unix CLIs, scripting languages such as Python, and
PowerShell, for instance. Avoiding the worst of those same and other
languages. Probably with a JIT and with better debugging support, as
some folks are going to write larger programs in the language, whether
it makes sense to do that or not. To accelerate adoption, some sort of
translation or migration or conversion tool for existing DCL procedures
is almost certainly a necessary feature, too.
--
Pure Personal Opinion | HoffmanLabs LLC
o***@gmail.com
2017-04-21 21:47:46 UTC
Post by Stephen Hoffman
Post by o***@gmail.com
I'm not sure why DB-apps come up -- it is rather orthogonal to
serialization/deserialization functions.
Because you have to marshal and unmarshal the data into and out of some
sort of storage, you want that to be reasonably extensible and
upgradeable over time, and you have to store the metadata definitions,
and you have to set up some structures to allow apps to cooperate at
some level.
And that's all irrelevant to the given abstraction -- the marshaling doesn't need to be impacted by any property of the storage-system or data-handling other than the properties of the availability of the serialize/deserialize method for the given type, and the stream's attributes for read and write.

As an example of doing this somewhat manually:

----------------------
-- Stream Interface --
----------------------
Type Serialized_Data; -- Stub-type for illustration.

Type Stream_Type( Readable, Writable : Boolean ) is interface;

Function Read(Stream : Stream_Type) return Serialized_Data is abstract
with Pre => Stream.Readable or else raise DATAFLOW_ERROR;

Procedure Write(Item : Serialized_Data; Stream : in out Stream_Type) is abstract
with Pre => Stream.Writable or else raise DATAFLOW_ERROR;

----------------
-- Interfaces --
----------------

Generic
Type T(<>) is limited private;
with Function Serialize (Input : T) return Serialized_Data;
with Procedure Write(Item : Serialized_Data; Stream : in out Stream_Type);
Procedure Write( Item : T; Stream : in out Stream_Type );

Generic
Type T(<>) is limited private;
with Function Deserialize(Input : Serialized_Data) return T;
with Function Read(Stream : Stream_Type) return Serialized_Data;
Function Read( Stream : in out Stream_Type ) return T;

---------------------
-- Implementations --
---------------------

Procedure Write( Item : T; Stream : in out Stream_Type ) is
Begin
Write( Serialize(Item), Stream );
End Write;

Function Read( Stream : in out Stream_Type ) return T is
begin
Return Deserialize( Read(Stream) );
End Read;

Thus, for any type that has a serialization function, an instantiation of the generic read/write can properly read/write that data -- fortunately this can all be automated (even the serialize/deserialize functions themselves), and has been in Ada 95+.

And, if all your types have this sort of function, then that allows you to be concerned with only the data-flow -- programming-wise you no longer care if the input is coming from a file, or RAM, or the keyboard; you no longer care whether the output is a printer, or file, or the screen, or a radio transmitter.

The Ada-95 Rationale probably explains the idea better than I did -- http://www.adaic.org/resources/add_content/standards/95rat/rat95html/rat95-p3-a.html#4-1 -- or perhaps the wikibook: https://en.wikibooks.org/wiki/Ada_Programming/Libraries/Ada.Streams.Stream_IO
Post by Stephen Hoffman
Which means you're either soon using an existing and
probably relational database, or you're spending substantial effort
dragging RMS forward to deal better with upgrades and record-format
changes, cleanups and online backups. Among other details.
These details are irrelevant for the serialization and deserialization processes -- as wikipedia says: "In computer science, in the context of data storage, serialization is the process of translating data structures or object state into a format that can be stored and reconstructed later in the same or another computer environment."

Translating the object/state is simply a different operation than actually storing it.
Post by Stephen Hoffman
Post by o***@gmail.com
Well, I am interested in OSes and design -- and certainly have my own
opinions on how CLIs should be -- so, how would you design a
replacement?
Depends on the target and the budget. If I'm aiming for less porting
effort and a mostly-like-current-DCL environment, there are constraints
in what can be changed. If I'm moving further forward, then there's
more room for change. I'd look to OO, to start with. I'd be
borrowing good ideas from newer tools, eschewing the worst, and taking
a long and careful look at where folks are spending their time with the
existing alternatives. Borrowing the best of the ideas and approaches
from DCL, Unix CLIs, scripting languages such as Python, and
PowerShell, for instance. Avoiding the worst of those same and other
languages. Probably with a JIT and with better debugging support, as
some folks are going to write larger programs in the language, whether
it makes sense to do that or not. To accelerate adoption, some sort of
translation or migration or conversion tool for existing DCL procedures
is almost certainly a necessary feature, too.
Hm, those are all things I'd pretty readily agree to -- except considering Unix CLIs (tab-completion aside, they're terrible) -- but all of that is very, very abstract, with very little that could be turned into an actual feature-set right now. (A good way to get at what would become a feature-set, though.)
Stephen Hoffman
2017-04-24 14:47:08 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Stephen Hoffman
I'm not sure why DB-apps come up -- it is rather orthogonal to
serialization/deserialization functions.
Because you have to marshal and unmarshal the data into and out of
some sort of storage, you want that to be reasonably extensible and
upgradeable over time, and you have to store the metadata definitions,
and you have to set up some structures to allow apps to cooperate at
some level.
And that's all irrelevant to the given abstraction -- the marshaling
doesn't need to be impacted by any property of the storage-system or
data-handling other than the properties of the availability of the
serialize/deserialize method for the given type, and the stream's
attributes for read and write.
Have you tried doing this sort of thing with RMS? It's possible, but
pretty soon you're making the usual OpenVMS screw-up and writing your
own database atop RMS because OpenVMS! or some such.
--
Pure Personal Opinion | HoffmanLabs LLC
o***@gmail.com
2017-04-24 17:08:27 UTC
Reply
Permalink
Raw Message
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Stephen Hoffman
I'm not sure why DB-apps come up -- it is rather orthogonal to
serialization/deserialization functions.
Because you have to marshal and unmarshal the data into and out of
some sort of storage, you want that to be reasonably extensible and
upgradeable over time, and you have to store the metadata definitions,
and you have to set up some structures to allow apps to cooperate at
some level.
And that's all irrelevant to the given abstraction -- the marshaling
doesn't need to be impacted by any property of the storage-system or
data-handling other than the properties of the availability of the
serialize/deserialize method for the given type, and the stream's
attributes for read and write.
Have you tried doing this sort of thing with RMS? It's possible, but
pretty soon you're making the usual OpenVMS screw-up and writing your
own database atop RMS because OpenVMS! or some such.
The idea has nothing to do with where/how the data is actually & ultimately stored, it is an abstraction off of dataflow -- there is no "writing your own database atop RMS".
j***@yahoo.co.uk
2017-04-21 20:19:55 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Simon Clubley
This is why there really should be a marshalling system for the types
to convert between the DCL types and the VMS native types so you can
mostly eliminate this problem. This is also part of why the VMS
development process needs an automatic interface generator if you
do this.
Alternatively, and perhaps the more prudent approach would be to solve
the problem completely and incorporating a proper marshaling system
into OpenVMS as a whole module/library.
On another system this would be more difficult, but the Common Language
Environment is already a big step towards this, as it provides a
language-independent interface system to all the OpenVMS languages. --
So we could extend it by incorporating IBM's System Object Model [or
something very similar].
ASN.1 is one of the approaches used and is sort-of available on OpenVMS
via OpenSSL; XML and JSON are others, though those would have to be added, or
something akin to the archiving and unarchiving support — marshalling
and unmarshalling.
ASN.1 is sadly overlooked a lot in our industry.
* https://www.obj-sys.com/docs/JSONEncodingRules.pdf
* https://en.wikipedia.org/wiki/XML_Encoding_Rules#Example_encoded_in_XER
Post by Stephen Hoffman
There'll still need to be database access for apps that need that,
either RMS for the stuff that needs key-value or other NoSQL storage,
relational into SQLite and maybe PostgreSQL, and some mechanisms for
storing preferences and settings — all the dreck that's now using DEC C
logical names or random turd files — into an application-specific
bundle; some sort of app-specific preferences file, using JSON or maybe
SQLite or whatever. That app-specific user or system configuration
data which can't and shouldn't be moved out into LDAP, that is.
??
I'm not sure why DB-apps come up -- it is rather orthogonal to serialization/deserialization functions.
Insofar as a stream-based system (as laid out) goes, there's no difference in reading whether the actual storage is a DB, RAM, ROM, etc... just as for writing it wouldn't matter if it were RAM, disk, DB, radio-antenna, etc. (At that level of abstraction we're just dealing with the properties of readability/writability.)
(ASN.1 would be a good method to enact/realize the serialization/deserialization methods; integrating SOM [or similar] into the CLE, such that all types have serialize/deserialize methods associated with the type, would ensure that all types/languages could access those methods.)
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Simon Clubley
It would be a lot easier if DCL had a lists within lists data structure
or a lists within an array data structure. As it stands, you would
probably need a new DCL intrinsic to allow you to manually build a
usable itemlist from within DCL.
That sounds like an extension to the DCL-language, rather than the DCL-program.
Sometimes wholesale replacement and migration is better than
extensions. Because itemlists are an utter and complete disaster.
But then I'm feeling polite today. Itemlists push the hassles of
dealing with API software changes up into the application code, and
require acres of glue code for not-very-much benefit. There are now
far better ways — simpler abstractions — to allow the same sorts of
changes and extensions, and with vastly simpler application code.
Competing with grafted-on incremental changes and more DCL syntax
clutter atop a solution from the 1970s isn't going to sway a lot of
folks to either move forward — existing users — or to adopt OpenVMS —
new users and new deployments.
Well, I am interested in OSes and design -- and certainly have my own opinions on how CLIs should be -- so, how would you design a replacement?
Afaik ASN.1 has been around (and supported) on VMS for
decades, although obviously not well known. It's part of
VMS SNMP support, it's documented in e.g. the OpenVMS
Utility Routines manual (LDAP section), and it was/is
also (whisper it) part of DECnet/OSI and its predecessors
(OSAK).

In the IP world in general, lots of people seemed to
like reinventing their own implementations of encoding and
decoding routines (to name just one example) for every
different class of application.

Maybe that's inevitable when the network stack stops at
the sockets layer and doesn't have a proper presentation
layer. Who wants to reuse someone else's tried and
trusted ideas and software anyway so they can focus on
what's specific to their particular requirements ? Well
I do but obviously I'm out of step with much of today's
developer world.

What I can't comment on is whether the VMS implementation
of ASN.1 fits nicely with the way you envision your
application.

Best of luck anyway.

New to ASN.1? Here's a (not ideal) place to start:
https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One
o***@gmail.com
2017-04-21 21:56:18 UTC
Reply
Permalink
Raw Message
Post by j***@yahoo.co.uk
Afaik ASN.1 has been around (and supported) on VMS for
decades, although obviously not well known. It's part of
VMS SNMP support, it's documented in e.g. the OpenVMS
Utility Routines manual (LDAP section), and it was/is
also (whisper it) part of DECnet/OSI and its predecessors
(OSAK).
It's a shame that OSI 'lost'.
Post by j***@yahoo.co.uk
In the IP world in general, lots of people seemed to
like reinventing their own implementations of encoding and
decoding routines (to name just one example) for every
different class of application.
This is true -- and what's frustrating about all the ad-hoc systems is that often they're erroneous or incomplete.
Post by j***@yahoo.co.uk
Maybe that's inevitable when the network stack stops at
the sockets layer and doesn't have a proper presentation
layer. Who wants to reuse someone else's tried and
trusted ideas and software anyway so they can focus on
what's specific to their particular requirements ? Well
I do but obviously I'm out of step with much of today's
developer world.
Tell me about it.
Post by j***@yahoo.co.uk
What I can't comment on is whether the VMS implementation
of ASN.1 fits nicely with the way you envision your
application.
Actually it probably does pretty well.
Bill Gunshannon
2017-04-22 00:13:04 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by j***@yahoo.co.uk
Afaik ASN.1 has been around (and supported) on VMS for
decades, although obviously not well known. It's part of
VMS SNMP support, it's documented in e.g. the OpenVMS
Utility Routines manual (LDAP section), and it was/is
also (whisper it) part of DECnet/OSI and its predecessors
(OSAK).
It's a shame that OSI 'lost'.
Some people don't agree. :-)

bill
Arne Vajhøj
2017-04-22 00:07:40 UTC
Reply
Permalink
Raw Message
Post by j***@yahoo.co.uk
In the IP world in general, lots of people seemed to
like reinventing their own implementations of encoding and
decoding routines (to name just one example) for every
different class of application.
Really?

The 3 big technologies Java, .NET and PHP all come
with various standard serialization in their libraries.

I would say that custom serialization is rather rare.

Arne
Arne Vajhøj
2017-04-22 00:05:52 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Alternatively, and perhaps the more prudent approach would be to
solve the problem completely and incorporating a proper marshaling
system into OpenVMS as a whole module/library.
On another system this would be more difficult, but the Common
Language Environment is already a big step towards this, as it
provides a language-independent interface system to all the OpenVMS
languages. -- So we could extend it by incorporating IBM's System
Object Model [or something very similar].
Adding marshaling to the complete system then becomes the addition of
a set of methods to the base SOM_Object metaclass: Input and Output.
-- Both of these would take a Stream as an input (the stream itself
might be read-only, write-only, or read/write WRT data-flow) and
either the actual SOM_Object-type as a result (for Output) or an
additional parameter of [the value of] the actual SOM_Object type in
the case of Input.
[IIUC] The only modification to the SOM-model I'd recommend is that a
clear distinction be made WRT parameters and return-values between
"this type" and "this type, or any of its descendants".
If you have a CLE-compliant Ada-95+ compiler, then you already have
an implementation of Input and Output, or at least a *very* good
start thereon.
Binary serialization is sort of last century's concept.

Today the industry is willing to pay the performance overhead
of text serialization.

Lots of XML and JSON serialization stuff available.

Arne
o***@gmail.com
2017-04-22 17:32:08 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you wouldn't recommend XML and JSON at the OS-level, would you?

Besides, if you do as I've laid out you could use the ASN.1 machinery to handle serialization/deserialization, using the particular encoder as a parameter, and bang you've got XML (via XER encoding) or JSON (via that JSON ASN.1 encoder referenced).

Seriously, they already *have* most of the components that would be needed for it. The one thing they don't have is a SOM-analog, but to tie all these components together you'd end up with something comparable. (Actually, probably less full-featured, as SOM has clear design goals targeting usage in libraries... which would be synergetic with the CLE.)
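The "particular encoder as a parameter" idea can be sketched as follows (Python as illustration; the `json` codec and a toy key=value codec stand in for ASN.1 encoding rules such as XER or the JSON encoding rules, and all function names are hypothetical):

```python
import json

# One serialize/deserialize entry point; the wire format is chosen by
# the encoder/decoder passed in, exactly as with ASN.1 encoding rules.

def json_encoder(obj: dict) -> bytes:
    return json.dumps(obj, sort_keys=True).encode()

def json_decoder(data: bytes) -> dict:
    return json.loads(data.decode())

def kv_encoder(obj: dict) -> bytes:
    # Toy stand-in for an alternate encoding rule set.
    return "\n".join(f"{k}={v}" for k, v in sorted(obj.items())).encode()

def kv_decoder(data: bytes) -> dict:
    return dict(line.split("=", 1) for line in data.decode().splitlines())

def serialize(obj: dict, encoder) -> bytes:
    return encoder(obj)

def deserialize(data: bytes, decoder) -> dict:
    return decoder(data)
```

The calling code never changes when the encoding does; swapping `json_encoder` for `kv_encoder` is the whole "bang, you've got JSON/XML" step.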
Stephen Hoffman
2017-04-24 15:03:07 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead of
text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
Why not? It works. It's portable. It's familiar. It's common.
It's comparatively simple, too. We're already using JSON and XML for
data import and export, and for data storage and retrieval in apps that
aren't performance-sensitive, even if some database-based binary scheme
is (also) implemented for storing object graph storage or otherwise,
whether for higher performance or otherwise.

Not so keen on implementations that are putting ASN.1 into the kernel
or into security-sensitive contexts, as that's not ended well for some
environments. Particularly when the data isn't trusted or trustworthy.

But then we're either going to be implementing our own security bugs
and vulnerabilities with these formats, or reusing open source for
parsing and marshaling, or looking for system routines and frameworks
that abstract this effort. It's not unheard of to export, transfer
and import any of these text or binary formats, for instance. (SQLite
goes past this, by allowing the database to be transportable. But I
digress.) JSON, XML and the rest are normal parts of data transfers.
ASN.1 for certificates and some other areas, too.
--
Pure Personal Opinion | HoffmanLabs LLC
o***@gmail.com
2017-04-24 17:22:09 UTC
Reply
Permalink
Raw Message
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead of
text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
Why not? It works. It's portable. It's familiar. It's common.
It's comparatively simple, too. We're already using JSON and XML for
data import and export, and for data storage and retrieval in apps that
aren't performance-sensitive,
If we're talking about kernel/OS level, it probably is somewhat performance-sensitive.
Post by Stephen Hoffman
Not so keen on implementations that are putting ASN.1 into the kernel
or into security-sensitive contexts, as that's not ended well for some
environments. Particularly when the data isn't trusted or trustworthy.
The issue of data being trusted/trustworthy is exactly the same with JSON and XML as ASN.1 -- so that's not a valid objection in favor of those.

One of the big problems with JSON and [sadly] most common/modern XML is the lack of a DTD. This certainly *IS* a security issue, and one that favors ASN.1 as its type definition fills the exact same role as a DTD.
Post by Stephen Hoffman
But then we're either going to be implementing our own security bugs
and vulnerabilities with these formats, or reusing open source for
parsing and marshaling, or looking for system routines and frameworks
that abstract this effort.
This is true -- and we want to do these both correctly and make them available system-wide so that we (a) eliminate the need for ad-hoc serialize/deserialize functions, (b) unify the serialization/deserialization across programming languages, and (c) have a single library/module that needs to be verified/proven correct.
Stephen Hoffman
2017-04-24 18:32:33 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Stephen Hoffman
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead of
text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
Why not? It works. It's portable. It's familiar. It's common.
It's comparatively simple, too. We're already using JSON and XML for
data import and export, and for data storage and retrieval in apps that
aren't performance-sensitive,
If we're talking about kernel/OS level, it probably is somewhat performance-sensitive.
Various OpenVMS designs often head that way. Which is too bad though,
as that approach often optimizes for the rare cases and older servers
and older hardware, at the expense of common cases and newer cases;
complex source code, glue code, newer hardware, etc. It's
unfortunately less often that there's evidence of a look at a whole
hunk of OpenVMS — the new file system work is certainly one of the few
large-scale looks at a whole hunk of the platform — and at how to drag
those areas forward. Without a wider look at how these and other
pieces fit together for new applications, various APIs added around the
edges and grafting some new features into DCL and adding marshaling and
unmarshaling support are each incrementally adding to the complexity,
unfortunately. We're all increasingly working with SSD storage —
even on OpenVMS — and that really changes the performance calculations
for many applications, too. Which in aggregate means centrally
optimizing for performance — possibly prematurely, too — might not be
the best design approach, rather than optimizing for lower development
costs and easier support costs. Get better designs and simpler calls
available, integrated into the platform, and deployed, and then find
out how large a case exists for embedding ASN.1 underneath, or for
optimizing performance. Initially, a JSON hack atop RMS will work,
too.
Post by o***@gmail.com
Post by Stephen Hoffman
Not so keen on implementations that are putting ASN.1 into the kernel
or into security-sensitive contexts, as that's not ended well for some
environments. Particularly when the data isn't trusted or trustworthy.
The issue of data being trusted/trustworthy is exactly the same with
JSON and XML as ASN.1 -- so that's not a valid objection in favor of
those.
Parsers for JSON are simpler than those for ASN.1. XML too, though
that's more complex than JSON, and there are various security-relevant
updates for XML. ASN.1
Post by o***@gmail.com
One of the big problems with JSON and [sadly] most common/modern XML is
the lack of a DTD. This certainly *IS* a security issue, and one that
favors ASN.1 as its type definition fills the exact same role as a DTD.
ASN.1 is a bit of a dog's breakfast there, but sure.
Post by o***@gmail.com
Post by Stephen Hoffman
But then we're either going to be implementing our own security bugs
and vulnerabilities with these formats, or reusing open source for
parsing and marshaling, or looking for system routines and frameworks
that abstract this effort.
This is true -- and we want to do these both correctly and make them
available system-wide so that we (a) eliminate the need for ad-hoc
serialize/deserialize functions, (b) unify the
serialization/deserialization across programming languages, and (c)
have a single library/module that needs to be verified/proven correct.
Which all sits atop a database, and — in finest OpenVMS style — that
database underneath is far too often RMS. RMS has burned me more
times than I can count, usually around clusters and upgrades, and
around online backups and maintenance. But then I'm looking at how
to implement and use these marshaling and unmarshaling calls and
closely-related export and import calls — what I'm using these calls
for on OpenVMS and on other platforms — and at how much code I have to
write and slog through and maintain and patch to get the code to work,
too.
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Hoffman
2017-05-22 15:33:09 UTC
Reply
Permalink
Raw Message
Post by Stephen Hoffman
Parsers for JSON are simpler than those for ASN.1. XML too, though
that's more complex than JSON, and there are various security-relevant
updates for XML.
Why I don't trust parsers used in marshaling and unmarshaling data, and
why you shouldn't either:

https://github.com/mbechler/marshalsec/

Best to isolate parsers and any untrusted data away from access,
privileges, inner-mode, etc. Consider using a sandbox and pledge too,
if the platform supports sandboxing and pledge.

These sorts of flaws also include deliberately-broken removable-storage
volume structures, maliciously-crafted zip files (zip bombs still
occasionally work!), and any files or objects or executables that are
likely destined for parsing. That's most anti-malware tools, too.
In some ways, these sorts of parser vulnerabilities are similar risks
to SQL injection and other related web and network-related shenanigans;
deliberately-bad input data.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-04-28 16:37:37 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
One of the big problems with JSON and [sadly] most common/modern XML
is the lack of a DTD. This certainly *IS* a security issue, and one
that favors ASN.1 as its type definition fills the exact same role as
a DTD.
XML DTDs have been obsolete for about a decade.
 
XML schemas are widely used today and can provide very strong type
safety (more Pascal/Modula-2/Ada style than C/C++/Java/C# style).
Post by o***@gmail.com
Post by Stephen Hoffman
But then we're either going to be implementing our own security
bugs and vulnerabilities with these formats, or reusing open source
for parsing and marshaling, or looking for system routines and
frameworks that abstract this effort.
This is true -- and we want to do these both correctly and make them
available system-wide so that we (a) eliminate the need for ad-hoc
serialize/deserialize functions, (b) unify the
serialization/deserialization across programming languages, and (c)
have a single library/module that needs to be verified/proven
correct.
It is certainly a noble goal.

It was possible in 1978.

Today it is going to be hard.

Developers want all the usual XML parsers
(DOM, event pull, event push etc.).

But they need them in multiple runtime environments:
* C++
* Ada [if still relevant]
* Java
* PHP
* .NET Core [if ever ported to VMS]

How does an XML parser return something useful for all these
runtime environments?

It is not going to work.

Arne
o***@gmail.com
2017-04-29 17:41:49 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Post by o***@gmail.com
One of the big problems with JSON and [sadly] most common/modern XML
is the lack of a DTD. This certainly *IS* a security issue, and one
that favors ASN.1 as its type definition fills the exact same roll as
a DTD.
XML DTDs have been obsolete for about a decade.
XML schemas are widely used today and can provide very strong type
safety (more Pascal/Modula-2/Ada style than C/C++/Java/C# style).
Post by o***@gmail.com
Post by Stephen Hoffman
But then we're either going to be implementing our own security
bugs and vulnerabilities with these formats, or reusing open source
for parsing and marshaling, or looking for system routines and
frameworks that abstract this effort.
This is true -- and we want to do these both correctly and make them
available system-wide so that we (a) eliminate the need for ad-hoc
serialize/deserialize functions, (b) unify the
serialization/deserialization across programming languages, and (c)
have a single library/module that needs to be verified/proven
correct.
It is certainly a noble goal.
It was possible in 1978.
Today it is going to be hard.
Developers want all the usual XML parsers
(DOM, event pull, event push etc.).
* C++
* Ada [if still relevant]
* Java
* PHP
* .NET Core [if ever ported to VMS]
How does an XML parser return something useful for all these
runtime environments?
* OpenVMS has the Common Language Environment (CLE), which is all about allowing subprograms written in different languages to interact.
* SOM (or similar), is all about a language independent way to do libraries, eliminating "DLL Hell" altogether.
* ASN.1 is a combination of two things:
— Describing/defining a type independently of the language, and
— Applying serialization/deserialization (ie "encoding").

The solution is fairly simple:
(1) Extend the CLE with a sort of meta-type which:
a) Has a serialize & deserialize method for the type,
b) which takes, as a parameter, the serializing/deserializing method, and
c) returns (or reads) a value of that type (as appropriate).
(2) #1 can be achieved by extending the CLE with SOM (or similar):
a) Tying the serialization into the base SOM-Object meta-type.
(3) Hook this into the ASN.1 functionality by:
a) defining the ASN.1 encoder & decoder in terms of SOM, and
b) defining the ASN.1 type-description parser into the CLE (ie define the base-types in terms of the ASN.1 type-description).

You now have a platform that has native types, all of which are serializable/deserializable via an automatically available subprogram, and which is suitable for writing language-independent (and binary-compatible) libraries.
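The "every descendant automatically gets serialize/deserialize" property of steps (1)-(2) can be approximated with a base meta-type that subclasses inherit (Python as illustration; `SOMObject` is a hypothetical stand-in, not IBM's SOM, and the default JSON codec stands in for a pluggable ASN.1 encoder):

```python
import json

# Sketch of step (1): a base "meta-type" whose descendants automatically
# gain serialize/deserialize, parameterized by the encoding method.

class SOMObject:
    def serialize(self, encode=json.dumps) -> bytes:
        # Default encoding is JSON; an ASN.1 encoder could be passed instead.
        return encode(self.__dict__).encode()

    @classmethod
    def deserialize(cls, data: bytes, decode=json.loads):
        obj = cls.__new__(cls)          # bypass __init__; fields come from data
        obj.__dict__.update(decode(data.decode()))
        return obj

class Itemlist(SOMObject):              # any descendant gets both methods free
    def __init__(self, items):
        self.items = items
```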
Post by Arne Vajhøj
It is not going to work.
See above.
Paul Sture
2017-04-25 14:44:20 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
I gather that PowerShell uses XML to pipe objects. (I tried searching
for confirmation of this but didn't find anything useful)

At a lower level, Linux stores the process information in /proc in human
readable format.
--
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Jan-Erik Soderholm
2017-04-25 15:11:21 UTC
Reply
Permalink
Raw Message
Post by Paul Sture
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
I gather that PowerShell uses XML to pipe objects. (I tried searching
for confirmation of this but didn't find anything useful)
At a lower level, Linux stores the process information in /proc...
Nothing at all is stored in /proc. The system creates the information
displayed on the fly when any of the /proc subdirectories are accessed.
 
Just like SHOW SYS does...
Post by Paul Sture
in human
readable format.
Paul Sture
2017-04-25 16:33:28 UTC
Reply
Permalink
Raw Message
Post by Jan-Erik Soderholm
Post by Paul Sture
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
I gather that PowerShell uses XML to pipe objects. (I tried searching
for confirmation of this but didn't find anything useful)
At a lower level, Linux stores the process information in /proc...
Nothing at all is stored in /proc. The system creates the information
displayed on the fly when any of the /proc subdirectories are accessed.
Just like SHOW SYS does...
Thanks for the correction.

It's supplied "on demand". Here's an article explaining it:

<http://www.slashroot.in/proc-file-system-linux-explained>

A simple 'ls -l' will show everything in /proc as zero bytes, and the
information is generated when you actually look at the files there.
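That behaviour is easy to verify programmatically (Python sketch; guarded so it degrades gracefully on systems without procfs):

```python
import os

# /proc entries stat() as zero bytes, yet reading them yields
# kernel-generated content -- nothing is actually stored there.

def proc_status():
    path = "/proc/self/status"
    if not os.path.exists(path):       # non-Linux: no procfs
        return None
    size = os.stat(path).st_size       # reported as 0 by stat()
    with open(path) as f:
        text = f.read()                # ...but content appears on read
    return {"reported_size": size, "content_length": len(text)}
```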
--
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Jan-Erik Soderholm
2017-04-25 17:15:52 UTC
Reply
Permalink
Raw Message
Post by Paul Sture
Post by Jan-Erik Soderholm
Post by Paul Sture
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
I gather that PowerShell uses XML to pipe objects. (I tried searching
for confirmation of this but didn't find anything useful)
At a lower level, Linux stores the process information in /proc...
Nothing at all is stored in /proc. The system creates the information
displayed on the fly when any of the /proc subdirectories are accessed.
Just like SHOW SYS does...
Thanks for the correction.
<http://www.slashroot.in/proc-file-system-linux-explained>
A simple 'ls -l' will show everything in /proc as zero bytes, and the
information is generated when you actually look at the files there.
OK... :-)

Then there is the question if that is a "smart" solution... :-)
Phillip Helbig (undress to reply)
2017-04-25 19:35:49 UTC
Reply
Permalink
Raw Message
Post by Paul Sture
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Is this aphorism your own? If not, what is the source?
Paul Sture
2017-04-25 21:15:34 UTC
Reply
Permalink
Raw Message
Post by Phillip Helbig (undress to reply)
Post by Paul Sture
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Is this aphorism your own? If not, what is the source?
No it isn't my own. The repetition of "enough" leads me to this tweet:

<https://twitter.com/stahnma/status/634849376343429120>

I think I picked it up elsewhere, without the attribution.
--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.
j***@yahoo.co.uk
2017-04-26 07:05:56 UTC
Reply
Permalink
Raw Message
Post by Paul Sture
Post by Phillip Helbig (undress to reply)
Post by Paul Sture
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Is this aphorism your own? If not, what is the source?
<https://twitter.com/stahnma/status/634849376343429120>
I think I picked it up elsewhere, without the attribution.
--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.
Am I right in thinking that Twit is dated 21 Aug 2015?

In which case, this blogpost apparently dated 21 May 2015
contains the same quote (without the unwanted duplication)
and attributed to "unknown"
http://anthonyramella.com/blog/collection-of-quotes/

Same blogpost also contains a variety of other well known
and less well known gems. E.g. this one should be widely
known by now:
“In theory, there is no difference between theory and
practice. But, in practice, there is.” — Jan L. A. van de
Snepscheut

I don't recall seeing these particular words before, but
the principle needs to be widely understood:
“Innovation is like climbing a mountain. Most teams
fail because they pick the wrong mountain to climb”
-Sebastian Thrun


And finally:
“Simplicity is prerequisite for reliability. Simplicity
is prerequisite for reliability.” -Edsger W. Dijkstra
So important, it was posted twice? Or my browser is
misinterpreting it? Maybe it was cut/pasted from
RUNOFF-type output?

Enjoy.
Paul Sture
2017-04-26 20:35:36 UTC
Reply
Permalink
Raw Message
Post by j***@yahoo.co.uk
Post by Paul Sture
Post by Phillip Helbig (undress to reply)
Post by Paul Sture
Everybody has a testing environment. Some people are lucky enough enough
to have a totally separate environment to run production in.
Is this aphorism your own? If not, what is the source?
<https://twitter.com/stahnma/status/634849376343429120>
I think I picked it up elsewhere, without the attribution.
--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.
Am I right in thinking that Twit is dated 21 Aug 2015?
22 Aug 2015, to be picky :-)
Post by j***@yahoo.co.uk
In which case, this blogpost apparently dated 21 May 2015
contains the same quote (without the unwanted duplication)
and attributed to "unknown"
http://anthonyramella.com/blog/collection-of-quotes/
Aha.
Post by j***@yahoo.co.uk
Same blogpost also contains a variety of other well known
and less well known gems. E.g. this one should be widely
“In theory, there is no difference between theory and
practice. But, in practice, there is.” — Jan L. A. van de
Snepscheut
I don't recall seeing these particular words before, but
“Innovation is like climbing a mountain. Most teams
fail because they pick the wrong mountain to climb”
-Sebastian Thrun
“Simplicity is prerequisite for reliability. Simplicity
is prerequisite for reliability.” -Edsger W. Dijkstra
So important, it was posted twice? Or my browser is
misinterpreting it? Maybe it was cut/pasted from
RUNOFF-type output?
Those two sentences have no intervening space, so I'd guess at
a copy & double paste.

cf the following, which has it once:

<https://www.brainyquote.com/quotes/quotes/e/edsgerdijk204332.html>
--
Everybody has a testing environment. Some people are lucky enough to
have a totally separate environment to run production in.
Arne Vajhøj
2017-04-28 16:19:28 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
Not sure what you really mean by OS level. Very little need for
serialization inside the OS.

If you mean native libraries to support native applications
shipping with OS, then absolutely.
Post by o***@gmail.com
Besides, if you do as I've laid out you could use the ASN.1 machinery
to handle serialization/deserialization, using the particular encoder
as a parameter, and bang you've got XML (via XER encoding) or JSON
(via that JSON ASN.1 encoder referenced).
ASN.1 seems not to be what the market wants. That does not preclude
it from being a nice technology. But just not so interesting
from a practical perspective.

Arne
o***@gmail.com
2017-05-01 18:56:17 UTC
Reply
Permalink
Raw Message
Post by Arne Vajhøj
Post by o***@gmail.com
Post by Arne Vajhøj
Binary serialization is sort of last century's concept.
Today the industry is willing to pay the performance overhead
of text serialization.
Lots of XML and JSON serialization stuff available.
But we're talking something that's at the OS level -- surely you
wouldn't recommend XML and JSON at the OS-level, would you?
Not sure what you really mean by OS level. Very little need for
serialization inside the OS.
This is generally true for most OSes... OpenVMS is a bit different though, as it has the ability to be distributed across multiple physical machines:

"OpenVMS commercialized many features that are now considered standard requirements for any high-end server operating system. These include:
* Symmetrical, asymmetrical, and NUMA multiprocessing, including clustering" - https://infogalactic.com/info/OpenVMS

"The system offers high availability through clustering and the ability to distribute the system over multiple physical machines." - https://en.wikipedia.org/wiki/OpenVMS

This means that the OS absolutely needs a serialization/deserialization methodology. This ability also needs to be both standardized and internally accessible by the OS, which implies being available at the OS level.
Post by Arne Vajhøj
If you mean native libraries to support native applications
shipping with OS, then absolutely.
That's half the purpose of integrating SOM; as per Infogalactic:
"SOM defines an interface between programs, or between libraries and programs, so that an object's interface is separated from its implementation. SOM allows classes of objects to be defined in one programming language and used in another, and it allows libraries of such classes to be updated without requiring client code to be recompiled.

A SOM library consists of a set of classes, methods, static functions, and data members. Programs that use a SOM library can create objects of the types defined in the library, use the methods defined for an object type, and derive subclasses from SOM classes, even if the language of the program accessing the SOM library does not support class typing. A SOM library and the programs that use objects and methods of that library need not be written in the same programming language. SOM also minimizes the impact of revisions to libraries. If a SOM library is changed to add new classes or methods, or to change the internal implementation of classes or methods, one can still run a program that uses that library without recompiling. This is not the case for all other C++ libraries, which in some cases require recompiling all programs that use them whenever the libraries are changed." - https://infogalactic.com/info/IBM_System_Object_Model
Post by Arne Vajhøj
Post by o***@gmail.com
Besides, if you do as I've laid out you could use the ASN.1 machinery
to handle serialization/deserialization, using the particular encoder
as a parameter, and bang you've got XML (via XER encoding) or JSON
(via that JSON ASN.1 encoder referenced).
ASN.1 seems not to be what the market wants. That does not preclude
it from being a nice technology. But just not so interesting
from a practical perspective.
If done as presented, then ASN.1 is essentially invisible to the programmers using the system, as it is interfaced/integrated on the underlying SOM meta-types. (i.e. all they 'see' is that every type has both a serialize and a deserialize function.)
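The point in that last parenthesis can be sketched in miniature. The following is not SOM and not a real ASN.1 encoder — `Serializable`, its method names, and the JSON wire format are all hypothetical stand-ins — but it shows what a system looks like to the programmer when every type inherits serialize/deserialize from an underlying meta-type:

```python
import json

class Serializable:
    """Hypothetical stand-in for the 'underlying meta-type': every
    subclass gains serialize()/deserialize() without writing any
    codec code of its own."""

    def serialize(self) -> bytes:
        # The wire format (JSON here) is an internal detail the
        # programmer never sees; it could as easily be BER or XER.
        payload = {"type": type(self).__name__, "fields": vars(self)}
        return json.dumps(payload).encode()

    @classmethod
    def deserialize(cls, data: bytes) -> "Serializable":
        payload = json.loads(data)
        subclass = {c.__name__: c for c in cls.__subclasses__()}[payload["type"]]
        obj = subclass.__new__(subclass)   # rebuild without calling __init__
        obj.__dict__.update(payload["fields"])
        return obj

class Animal(Serializable):
    def __init__(self, name, noise):
        self.name, self.noise = name, noise

wire = Animal("Dog", "Bark").serialize()
copy = Serializable.deserialize(wire)
print(copy.name, copy.noise)   # -> Dog Bark
```

The application code above never touches the encoding; swapping JSON for a binary encoder would change nothing outside the base class.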
Stephen Hoffman
2017-05-02 14:37:21 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
This is generally true for most OSes... OpenVMS is a bit different
though, as it has the ability to be distributed across multiple
"OpenVMS commercialized many features that are now considered standard
* Symmetrical, asymmetrical, and NUMA multiprocessing, including
clustering" - https://infogalactic.com/info/OpenVMS
"The system offers high availability through clustering and the ability
to distribute the system over multiple physical machines." -
https://en.wikipedia.org/wiki/OpenVMS
Nice history. Those capabilities are now available elsewhere, and can
variously be done differently, better, easier and/or cheaper.
Post by o***@gmail.com
This means that the OS absolutely needs a serialization/deserialization
methodology. This ability also needs to be both standardized and
internally accessible by the OS, which implies being available at the
OS level.
I have no idea why that list of OS features and capabilities and OS
history leads to this particular requirement; seems a non-sequitur?

There is certainly usefulness in the basic ability to store and
retrieve data and particularly object graphs, and preferably without
having to slog through RMS and records and the hassles involved when
upgrading applications or otherwise changing application data
structures. (Same hassles have been hitting OpenVMS itself for
decades, too.)

These capabilities can certainly be useful to some apps and
particularly in some newer apps and designs.

But none of this is tied to SMP or clustering or the rest of the
history of application development on OpenVMS.

Application development which increasingly involves dragging some of
the old code base forward in specific areas, and also periodically
reviewing the application code base for latent bugs, and rewriting
parts and extending parts. This is what a ~forty year old operating
system means, where some few of the apps go back even further than
forty years; back into the PDP era.

There's seemingly little reason to create a design for marshaling and
unmarshaling that can't also store OO data structures and object
graphs, in addition to the sorts of traditional data structures typical
of C or BASIC or otherwise, as well as the whole and unfortunately
increasingly limited zoo of available OpenVMS descriptors, either.
Not in this era. Or preferences, as marshaling and unmarshaling data
is most of an application preferences mechanism, too. But I digress.
Post by o***@gmail.com
If you mean native libraries to support native applications shipping
with OS, then absolutely.
"SOM defines an interface between programs, or between libraries and
programs, so that an object's interface is separated from its
implementation. SOM allows classes of objects to be defined in one
programming language and used in another, and it allows libraries of
such classes to be updated without requiring client code to be
recompiled.
A SOM library consists of a set of classes, methods, static functions,
and data members. Programs that use a SOM library can create objects of
the types defined in the library, use the methods defined for an object
type, and derive subclasses from SOM classes, even if the language of
the program accessing the SOM library does not support class typing. A
SOM library and the programs that use objects and methods of that
library need not be written in the same programming language. SOM also
minimizes the impact of revisions to libraries. If a SOM library is
changed to add new classes or methods, or to change the internal
implementation of classes or methods, one can still run a program that
uses that library without recompiling. This is not the case for all
other C++ libraries, which in some cases require recompiling all
programs that use them whenever the libraries are changed." -
https://infogalactic.com/info/IBM_System_Object_Model
That could be worded better, but then that's a common local perception
whenever reading most IBM-related documentation. Adopting something
akin to SOM is a substantial overhaul of how applications interoperate
with OpenVMS. It's very much akin to the OO that I've occasionally
mentioned and discounted, and it's how both macOS and Windows with .NET
are programmed.

In addition to that implementation effort and upkeep of the new OO
designs — SOM or otherwise — the current imperative programming
languages and tools and API designs of current and past OpenVMS would
very likely continue to be updated in parallel. New OO-focused
development work would be quite different than current work.

Not only does any OO implementation work involve having databases
backing the archiving and restoration of objects and data structures —
relational databases aren't OS add-ons anymore — the rest of the
implementation of an OO "replacement" model for system services and
libraries would be a very large development effort for both VSI, with
slow adoption for most current end-user developers and partners from
there.

As much as I'd like to see an OO interface for OpenVMS and as it'd
resolve some of the messes with itemlists and the existing APIs, it's
not something VSI seems likely to develop in the next five or ten years.
Post by o***@gmail.com
Post by o***@gmail.com
Besides, if you do as I've laid out you could use the ASN.1 machinery
to handle serialization/deserialization, using the particular encoder
as a parameter, and bang you've got XML (via XER encoding) or JSON (via
that JSON ASN.1 encoder referenced).
ASN.1 seems not to be what the market wants. That does not preclude it
from being a nice technology. But just not so interesting from a
practical perspective.
If done as presented then ASN.1 is essentially invisible to the
programmers using the system, as it is interfaced/integrated on the
underlying SOM meta-types. (ie All they 'see' is that every type has
both a serialize and deserialize function.)
If the marshaling and unmarshaling is opaque, then ASN.1 or JSON or XML
or some local or binary encoding all works fine, and the appropriate
choice can be made as determined by the scale of the data, whether the
data will be exported, etc. But as soon as the adoption of OO
programming is in play — and that's what SOM would be, or
ObjC/Swift/Cocoa, or .NET et al — the marshaling and unmarshaling code
is a rounding error in the development effort involved. Or to
functional programming, for that matter.
--
Pure Personal Opinion | HoffmanLabs LLC
o***@gmail.com
2017-05-02 21:04:56 UTC
Reply
Permalink
Raw Message
Post by Stephen Hoffman
Post by o***@gmail.com
This is generally true for most OSes... OpenVMS is a bit different
though, as it has the ability to be distributed across multiple
"OpenVMS commercialized many features that are now considered standard
* Symmetrical, asymmetrical, and NUMA multiprocessing, including
clustering" - https://infogalactic.com/info/OpenVMS
"The system offers high availability through clustering and the ability
to distribute the system over multiple physical machines." -
https://en.wikipedia.org/wiki/OpenVMS
Nice history. Those capabilities are now available elsewhere, and can
variously be done differently, better, easier and/or cheaper.
Post by o***@gmail.com
This means that the OS absolutely needs a serialization/deserialization
methodology. This ability also needs to be both standardized and
internally accessible by the OS, which implies being available at the
OS level.
I have no idea why that list of OS features and capabilities and OS
history leads to this particular requirement; seems a non-sequitur?
You're obviously an intelligent man, Mr. Hoffman, which is why your failure to see how it's needful (and on-topic) is both surprising and a bit dismaying.

In any case, I'll attempt to enlighten you: if we have a distributed program running on separate machines (say #1, #2, and #3) then there must exist a way to communicate data between them. That *is* a marshaling system combined with the transmission/reception method; such a system is needed to handle more complex types than say INTEGER, things like STRING or a record.

For a string, you need a method for specifying the length, serializing that, and *then* serializing the string-contents; this allows the receiving end to read the string-length, reserve the required space in memory, and deserialize the string-contents into that memory. -- Something similar happens for a record, say one for a doubly linked list (of positive numbers):

Type Node is record
   Data     : Positive;
   Previous,
   Next     : access Node;
end record;

In this record we have two elements which are [essentially] pointers; we therefore cannot serialize and deserialize their values, as the machines have their own address spaces and such a deserialization would essentially point to a random location in memory on the remote machine. (Actually, that's not *entirely* true: some distributed networks have a shared/common address space; IEEE 1394 is an example.) -- The correct way to handle this is to serialize the contents of the dereferenced fields, where the deserialization method would take care of generating pointers to the reconstituted records, linking them into the structure appropriately.
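Both rules — length-prefix the string, and serialize the dereferenced contents rather than the pointer values — can be sketched as follows. This is a hypothetical Python illustration, not the Ada above: a singly linked `Node` stands in for the doubly linked record, and big-endian unsigned 32-bit integers are an assumed wire format:

```python
import struct

def put_string(buf, s):
    # Length first, then contents: the receiver reads the length,
    # reserves space, then reads exactly that many bytes.
    data = s.encode()
    buf += struct.pack(">I", len(data)) + data

def get_string(buf, pos):
    (n,) = struct.unpack_from(">I", buf, pos)
    return buf[pos + 4:pos + 4 + n].decode(), pos + 4 + n

class Node:
    def __init__(self, data, nxt=None):
        self.data, self.next = data, nxt

def put_list(buf, head):
    # Pointer values are meaningless on the remote machine, so walk the
    # list and serialize the *dereferenced* contents, in order.
    items = []
    while head:
        items.append(head.data)
        head = head.next
    buf += struct.pack(">I", len(items))
    buf += struct.pack(">%dI" % len(items), *items)

def get_list(buf, pos):
    (n,) = struct.unpack_from(">I", buf, pos)
    values = struct.unpack_from(">%dI" % n, buf, pos + 4)
    head = None
    for v in reversed(values):   # regenerate the links locally
        head = Node(v, head)
    return head

buf = bytearray()
put_string(buf, "New Jersey")
put_list(buf, Node(1, Node(2, Node(3))))
s, pos = get_string(bytes(buf), 0)
head = get_list(bytes(buf), pos)
print(s, head.data, head.next.data, head.next.next.data)  # -> New Jersey 1 2 3
```

The receiving side never sees an address from the sender; it rebuilds the pointers itself from the reconstituted contents.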
Post by Stephen Hoffman
There is certainly usefulness in the basic ability to store and
retrieve data and particularly object graphs, and preferably without
having to slog through RMS and records and the hassles involved when
upgrading applications or otherwise changing application data
structures. (Same hassles have been hitting OpenVMS itself for
decades, too.)
A lot of that "slogging through RMS and records" can be automated/made transparent, if I understand correctly. (IIRC, Ada's Direct_IO package was directly influenced/inspired by VMS's RMS/records.) -- That's where the Stream methods I mentioned awhile back come into play: they offer a standard interface/abstraction on data-flow.
Post by Stephen Hoffman
Application development which increasingly involves dragging some of
the old code base forward in specific areas, and also periodically
reviewing the application code base for latent bugs, and rewriting
parts and extending parts. This is what a ~forty year old operating
system means, where some few of the apps go back even further than
forty years; back into the PDP era.
True, that's where the SOM idea shines: it extends the CLE from procedural/imperative to OOP, even offering its objects to non-OOP languages:
* SOM works with procedural programming languages.
* SOM provides an object model for non-object-oriented languages.
(See: https://www.techopedia.com/definition/1315/system-object-model-som-ibm )
Post by Stephen Hoffman
There's seemingly little reason to create a design for marshaling and
unmarshaling that can't also store OO data structures and object
graphs, in addition to the sorts of traditional data structures typical
of C or BASIC or otherwise, as well as the whole and unfortunately
increasingly limited zoo of available OpenVMS descriptors, either.
Not in this era. Or preferences, as marshaling and unmarshaling data
is most of an application preferences mechanism, too. But I digress.
Except that the marshaling/unmarshaling *CAN* operate on OO data-structures and graphs. Here's an example using Ada's Stream attributes:

Pragma Ada_2012;

With
Ada.Text_IO.Text_Streams,
Ada.Integer_Text_IO;

Procedure IO_Example_3 is

-------------------------------
-- Object & Supporting Types --
-------------------------------
Package Objects is
Type Abstract_Object(Name_Length : Natural) is abstract tagged record
Name : String(1..Name_Length);
end record;

Function "+"(Left : Abstract_Object) return String is abstract;
Function "-"(Left : Abstract_Object'Class) return String is
(Left."+");

Type Fruit is new Abstract_Object with null record;

Type Animal_Noise is (Bark, Meow, Low, Quack);
Type Animal is new Abstract_Object with record
Noise : Animal_Noise;
end record;

Type Ship_Class is (Cutter, Destroyer, Battleship);
Type Ship_Tonnage is delta 10.0 range 4_600.0..45_000.0;
Type Ship( Class : not null access Ship_Class;
Name_Length : Natural ) is
new Abstract_Object(Name_Length) with record
Tonnage : Ship_Tonnage;
end record;
Private
Overriding Function "+"(Left : Fruit) Return String is
( '[' & Left.Name & ']' );
Overriding Function "+"(Left : Animal) Return String is
( '{' & Left.Name &
" / Says: " & Animal_Noise'Image(Left.Noise) & '}' );
Overriding Function "+"(Left : Ship) return String is
( '<' & Left.Name & " is a" & Ship_Tonnage'Image(Left.Tonnage) &
" ton " & Ship_Class'Image(Left.Class.All) & '>' );

End Objects;
Use Objects;
------------------------------------
-- File/Stream Types & Operations --
------------------------------------

Subtype File_Mode is Ada.Text_IO.File_Mode;
Subtype Text_File is Ada.Text_IO.File_Type;
Subtype Text_Stream is Ada.Text_IO.Text_Streams.Stream_Access;

Function File(Mode : File_Mode:= Ada.Text_IO.In_File) return Text_File is
Begin
Return Result : Text_File do
Ada.Text_IO.Open(
File => Result,
Mode => Mode,
Name => "Input.txt"
);
End return;
End File;

Begin

WRITE_OBJECTS:
Declare
Pear : Fruit := (Name_Length => 4, Name => "Pear");
New_Jersey : Ship := (Name_Length => 10, Name => "New Jersey",
Class => new Ship_Class'(Battleship),
Tonnage => 45_000.0);
Dog : Animal := (Name_Length => 3, Name => "Dog", Noise => Bark);
Cow : Animal := (Name_Length => 4, Name => "Bess", Noise => Low);
Strawberry : Fruit := (Name_Length => 4, Name => "Dave");

use Ada.Text_IO;
Output_File: Text_File := File(Out_File);
Output : Text_Stream := Text_Streams.Stream(Output_File);
Begin
Fruit'Output ( Output, Pear );
Ship'Output ( Output, New_Jersey );
Animal'Output( Output, Dog );
Animal'Output( Output, Cow );
Abstract_Object'Class'Output(Output, Abstract_Object(Strawberry));
Close(Output_File); -- Close the file.
End WRITE_OBJECTS;


READ_OBJECTS:
Declare
Use Ada.Text_IO;
Input_File : Text_File := File( In_File );
Input : Text_Stream := Text_Streams.Stream(Input_File);

Pear : Fruit := Fruit'Input( Input );
New_Jersey : Abstract_Object'Class := Ship'Input( Input );
Dog : Animal := Animal'Input(Input);
Cow : Animal := Animal'Input(Input);
Strawberry : Abstract_Object'Class := Abstract_Object'Class'Input(Input);
Begin
Put_Line( -Pear );
Put_Line( -New_Jersey );
Put_Line( -Dog );
Put_Line( -Cow );
Put_Line( +Strawberry );
Close(Input_File); -- Close file.
End READ_OBJECTS;

End IO_Example_3;

Program Output:
[Pear]
<New Jersey is a 45000.0 ton BATTLESHIP>
{Dog / Says: BARK}
{Bess / Says: LOW}
[Dave]
Post by Stephen Hoffman
Post by o***@gmail.com
If you mean native libraries to support native applications shipping
with OS, then absolutely.
"SOM defines an interface between programs, or between libraries and
programs, so that an object's interface is separated from its
implementation. SOM allows classes of objects to be defined in one
programming language and used in another, and it allows libraries of
such classes to be updated without requiring client code to be
recompiled.
A SOM library consists of a set of classes, methods, static functions,
and data members. Programs that use a SOM library can create objects of
the types defined in the library, use the methods defined for an object
type, and derive subclasses from SOM classes, even if the language of
the program accessing the SOM library does not support class typing. A
SOM library and the programs that use objects and methods of that
library need not be written in the same programming language. SOM also
minimizes the impact of revisions to libraries. If a SOM library is
changed to add new classes or methods, or to change the internal
implementation of classes or methods, one can still run a program that
uses that library without recompiling. This is not the case for all
other C++ libraries, which in some cases require recompiling all
programs that use them whenever the libraries are changed." -
https://infogalactic.com/info/IBM_System_Object_Model
That could be worded better, but then that's a common local perception
whenever reading most IBM-related documentation. Adopting something
akin to SOM is a substantial overhaul of how applications interoperate
with OpenVMS. It's very much akin to the OO that I've occasionally
mentioned and discounted, and it's how both macOS and Windows with .NET
are programmed.
True, but it was special-purpose made for libraries and *CAN* be used by non-OOP languages, and that's why extending the CLE with it could be so valuable.
Post by Stephen Hoffman
Not only does any OO implementation work involve having databases
backing the archiving and restoration of objects and data structures —
relational databases aren't OS add-ons anymore — the rest of the
implementation of an OO "replacement" model for system services and
libraries would be a very large development effort for both VSI, with
slow adoption for most current end-user developers and partners from
there.
The previous example used streams on a text-file; there's no reason, though, that the stream can't be a database or an instantiation of direct-IO or a straight-up memory-dump: **THAT** is the whole purpose of their data-flow abstraction.
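That abstraction is easy to demonstrate concretely. Below is a hypothetical Python sketch (not the Ada Stream machinery itself) in which one writer/reader pair runs unchanged against an in-memory stream and a disk file; JSON-lines is merely an assumed encoding:

```python
import io
import json
import os
import tempfile

def write_objects(stream, objects):
    # The writer sees only a file-like object; whether the bytes land
    # in a disk file, a socket, or memory is the stream's concern.
    for obj in objects:
        stream.write((json.dumps(obj) + "\n").encode())

def read_objects(stream):
    return [json.loads(line) for line in stream]

animals = [{"name": "Dog", "noise": "Bark"}, {"name": "Bess", "noise": "Low"}]

# Target 1: pure memory.
mem = io.BytesIO()
write_objects(mem, animals)
mem.seek(0)
assert read_objects(mem) == animals

# Target 2: a real file, with the identical writer/reader code.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "Input.txt")
    with open(path, "wb") as f:
        write_objects(f, animals)
    with open(path, "rb") as f:
        assert read_objects(f) == animals
```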
Post by Stephen Hoffman
Post by o***@gmail.com
Post by o***@gmail.com
Besides, if you do as I've laid out you could use the ASN.1 machinery
to handle serialization/deserialization, using the particular encoder
as a parameter, and bang you've got XML (via XER encoding) or JSON (via
that JSON ASN.1 encoder referenced).
ASN.1 seems not to be what the market wants. That does not preclude it
from being a nice technology. But just not so interesting from a
practical perspective.
If done as presented then ASN.1 is essentially invisible to the
programmers using the system, as it is interfaced/integrated on the
underlying SOM meta-types. (ie All they 'see' is that every type has
both a serialize and deserialize function.)
If the marshaling and unmarshaling is opaque, then ASN.1 or JSON or XML
or some local or binary encoding all works fine, and the appropriate
choice can be made as determined by the scale of the data, whether the
data will be exported, etc. But as soon as the adoption of OO
programming is in play — and that's what SOM would be, or
ObjC/Swift/Cocoa, or .NET et al — the marshaling and unmarshaling code
is a rounding error in the development effort involved. Or to
functional programming, for that matter.
A rounding error?
I don't understand what you're trying to say there.
Stephen Hoffman
2017-05-03 16:13:14 UTC
Reply
Permalink
Raw Message
Post by o***@gmail.com
Post by Stephen Hoffman
Post by o***@gmail.com
This is generally true for most OSes... OpenVMS is a bit different
though, as it has the ability to be distributed across multiple
"OpenVMS commercialized many features that are now considered standard
* Symmetrical, asymmetrical, and NUMA multiprocessing, including
clustering" - https://infogalactic.com/info/OpenVMS
"The system offers high availability through clustering and the ability
to distribute the system over multiple physical machines."
https://en.wikipedia.org/wiki/OpenVMS
Nice history. Those capabilities are now available elsewhere, and can
variously be done differently, better, easier and/or cheaper.
Post by o***@gmail.com
This means that the OS absolutely needs a serialization/deserialization
methodology. This ability also needs to be both standardized and
internally accessible by the OS, which implies being available at the
OS level.
I have no idea why that list of OS features and capabilities and OS
history leads to this particular requirement; seems a non-sequitur?
You're obviously an intelligent man, Mr. Hoffman, which is why your
failure to see how it's needful (and on-topic) is both surprising and a
bit dismaying.
In any case, I'll attempt to enlighten you: if we have a distributed
program running on separate machines (say #1, #2, and #3) then there
must exist a way to communicate data between them. That *is* a
marshaling system combined with the transmission/reception method; such
a system is needed to handle more complex types than say INTEGER,
things like STRING or a record.
So you have a network. That's kind of a standard thing these days,
even with OpenVMS. AFAICT, it doesn't and shouldn't matter if it's a
connection with the industry-something decades-old cluster design from
the first-in-the-industry something-something provider, or it's an
authenticated and encrypted IPv6 connection arriving from a lightbulb.
It's still a network. We still have to deal with the
potentially-hostile REST data arriving via HTTPS, or binary blobs of
potentially-hostile JSONB data arriving from the Saint-Denis office in
Réunion, or whatever. Etc.
Post by o***@gmail.com
Post by Stephen Hoffman
If the marshaling and unmarshaling is opaque, then ASN.1 or JSON or XML
or some local or binary encoding all works fine, and the appropriate
choice can be made as determined by the scale of the data, whether the
data will be exported, etc. But as soon as the adoption of OO
programming is in play — and that's what SOM would be, or
ObjC/Swift/Cocoa, or .NET et al — the marshaling and unmarshaling code
is a rounding error in the development effort involved. Or to
functional programming, for that matter.
A rounding error?
I don't understand what you're trying to say there.
I'm saying that what you're describing is a very small part of what's
going to be involved to get to a competitive configuration and tools
implemented and working and (slowly) adopted. I do understand
marshaling and unmarshaling data, as I deal with that routinely on
other platforms. Not that I'm all that great with Core Data quite
yet, but that's another discussion. Whether it's stored into ASN.1
or JSON or JSONB or XML or whatever matters rather less than either
successful export or import — which is what your network history was
apparently referencing — or to getting the data or a list of
preferences in or out of memory effectively. OpenVMS hasn't been
good at this and has little generic support. I'd like to see it,
certainly. Preferably that's with parsers that are robust against
maliciously-crafted data, too. But getting native format and objects
in and out is a fair development investment, and involves dragging any
of the involved compilers forward, too. The export-import is but a
very small part of moving to an OO environment, too, if that were to
happen in some distant future VSI OpenVMS release. It's not often I
bump into a reply in a discussion that's even wordier than one of mine,
as well. 😃
--
Pure Personal Opinion | HoffmanLabs LLC
IanD
2017-05-07 14:46:04 UTC
Reply
Permalink
Raw Message
On Thursday, May 4, 2017 at 2:13:17 AM UTC+10, Stephen Hoffman wrote:

<snip>
It's not often I bump into a reply in a discussion that's even wordier than one of mine,
as well. 😃
--
Pure Personal Opinion | HoffmanLabs LLC
Perhaps it's because you tend to give context as well, so that the pointy end of what you are saying finds the correct intended target?

For what it's worth, I don't find your posts lengthy; I find them informative, with the appropriate background and context to match.