Discussion:
DCL's flaws (both scripting and UI)
hb
2015-02-04 15:15:29 UTC
Permalink
I've tried to figure out how to eliminate pulling that image in, but the
compiler generates calls to things like CXXL$LRTS___COMPAREQ even for
plain C code, so I don't see a sure-fire way to eliminate it, and of
course anything genuinely C++ would need the C++ RTL.
Are you here compiling C code with a C++ compiler as well? Otherwise I
don't see why this should generate references to CXXL$LRTS.
Can you write it in a different language (MACRO32 should be available on
all systems, for free) and/or always compile it with the C compiler and
provide the object(s)? You definitely have references to the CRTL, but
there must not be any reference to the C++ RTLs.
If there is some supported (or at least reliable) way to control the
order of image activation, I don't know what it is.
Your link command! Other than that, the image activator has to honor
image dependencies. You can't change/influence that - other than
avoiding the dependencies.
Jan-Erik Soderholm
2015-02-04 17:07:03 UTC
Permalink
If you want a true Cobol precompiler I can just think of "Oracle Oracle"
or "Oracle Rdb". If I remember correctly, you could download and run
Rdb for free once, at least for "hobbyist" use.
Well, I knew about Oracle. Other databases have to have them,
too, otherwise the database turns out to be pretty useless.
Well, not many databases have precompilers on *VMS*.
Not everyone wants to use PL/SQL. :-)
Maybe not, but that has nothing to do with precompilers.
I do not see the connection. You can call your PL/SQL
stored procedures by SQL calls using embedded SQL and
your precompiler.
Does anyone know if Rdb is still available for free?
I know Oracle is (not the VMS version
however) but running Oracle is a lot of overhead. Pretty much
takes a fulltime DBA to keep it running properly.
Probably depends on the size of the database operation as
such. But yes, Rdb is known for being able to run more or
less "lights-out" with no DBA intervention for years.
If you skip the precompiler and can use some C middleware maybe
MySQL might work.
And that would be an immediate showstopper. No text is going to
cover doing that and no real business is doing that. The idea of
getting COBOL and VMS back into the edu world is to show that it
is the way business is being done.
MySQL (just to mention one) lacks any SQL pre-compiler at all.
Seems to be pretty popular anyway.

Embedded SQL for C has been deprecated as of Microsoft SQL
Server 2008.

So embedded SQL through SQL precompilers is not generally
"the way business is being done" today.
Then there is what is called "SQL module language"; I don't know if
anything but Rdb supports that, but it is a really nice way to separate
the application logic in the Cobol code from the SQL code running
the database operations.
The whole idea is to show COBOL doing with databases what used to
be done with files. If you are going to separate the database
from the COBOL, people will ask, "Why are we learning COBOL if you
can do all this without it?"...
Without it? You need to have all your business (non-SQL)
application logic somewhere, no? That is what Cobol is for.


There are Rdb pre-compilers for Ada, C, Fortran, Pascal, PL/I and Cobol.
There is nothing special around Cobol here.

You use Cobol because it solves your business logic as
you like, not specifically because it has a SQL precompiler.
And we are back to PL/SQL. And we
are also back to the way the anti-COBOLists want it to be even
though we have those millions of lines of COBOL still running
out there.
I do not follow your logic. You do not seem to understand
the concept of pre-compiled (also called embedded) SQL and
definitely not "SQL module language".
The SQLMOD compiler compiles SQL files into standard object files
that are simply CALL'ed from and linked to the main Cobol code just
as if it had been written in any language.
And that is yet another niche language...
Well, it is "Feature ID: E182 Module language" in all SQL standards
since at least SQL:1999.

Fully supported by Rdb. On Oracle you are forced to the less flexible
embedded SQL through pre-compilers. And of course also limited to
those languages that *have* a pre-compiler at all.

But yes, many use embedded SQL anyway, in particular for
smaller projects where it might be that the programmer writing
the Cobol code also writes the SQL code...
that time has to be wasted
teaching when the intent is to teach something the students can
actually apply when they leave school. My intent is to re-introduce
COBOL, not show that there are alternatives. They already know that,
it's called Java.
Right. You obviously don't get it. Yes, you can teach basic
Cobol to your students. But do not involve databases then.
If you are going to involve database work, you have to
choose your path. Maybe VMS/Cobol/Rdb. Maybe MVS/Cobol/DB2.
Maybe something else. You learn a specific environment incl.
the database that is popular in that specific environment.

But fine, learning some basic Cobol might be a good thing,
you can later take a Cobol job and learn the specifics there.


Regards,
Jan-Erik.
Would be real nice to do this all on VMS machines but first and foremost
I have to have COBOL, and a database and a database pre-compiler because
teaching COBOL with nothing but file access is a guaranteed non-starter.
Comments?
bill
JF Mezei
2015-02-04 17:27:19 UTC
Permalink
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),

It would be platform agnostic since it doesn't deal with machine code,
right ?

Or am I totally off track and mistaken here ?
Jan-Erik Soderholm
2015-02-04 17:52:30 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),
It would be platform agnostic since it doesn't deal with machine code,
right ?
Or am I totally off track and mistaken here ?
The Rdb SQL precompiler replaces the

EXEC SQL
...
...
END EXEC

with a CALL(...) to an object module that is a compiled
version of the "embedded" SQL. Rdb creates machine code
for some tasks and that in turn calls routines in the
Rdb shareable library.

Rdb then also creates some machine code on-the-fly
at runtime, but that is another issue...

The whole package (precompiler and shareable library)
has to be architecture specific, of course.

For Alpha there were until recently two Rdb installation
kits, one pre-EV56 and one post-EV56 (faster).

If you do not have a precompiler, you can use "dynamic
SQL" where the full SQL statement is sent to the
database for processing each time. Much less efficient
than having pre-compiled SQL, of course...

Jan-Erik.
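[The substitution described in the post above can be sketched as a plain text transform. Below is a hypothetical, much-simplified illustration in Python: the routine-name scheme and the SQLCA argument are modelled on the Rdb listing shown later in this thread, and the real SQL$PRE also emits a compiled object module for each routine, which this toy version does not.]

```python
def precompile(source: str) -> str:
    """Toy sketch of an embedded-SQL precompiler pass: comment out each
    EXEC SQL ... END-EXEC block and append a CALL to a generated routine
    name (hypothetical scheme; the real precompiler also produces the
    machine code behind that routine)."""
    out, in_sql, n = [], False, 0
    for line in source.splitlines():
        stripped = line.strip().lower()
        if stripped.startswith("exec sql"):
            in_sql = True
        if in_sql:
            out.append("      *" + line)  # original SQL, commented out
            if stripped.startswith("end-exec"):
                n += 1
                out.append(f'           CALL "SQL$PRC{n}_GENERATED" USING SQLCA')
                in_sql = False
        else:
            out.append(line)
    return "\n".join(out)
```

[The point of the sketch is only that the source-to-source part is simple text rewriting; the architecture-specific part is the object code the real tool generates alongside it.]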
Jan-Erik Soderholm
2015-02-04 18:43:13 UTC
Permalink
Post by Jan-Erik Soderholm
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),
It would be platform agnostic since it doesn't deal with machine code,
right ?
Or am I totally off track and mistaken here ?
The Rdb SQL precompiler replaces the
EXEC SQL
...
...
END EXEC
with a CALL(...) to an object module that is a compiled
version of the "embedded" SQL. Rdb creates machine code
for some tasks and that in turn calls routines in the
Rdb shareable library.
Rdb then also creates some machine code on-the-fly
at runtime, but that is another issue...
The whole package (precompiler and shareable library)
has to be architecture specific, of course.
For Alpha there were until recently two Rdb installation
kits, one pre-EV56 and one post-EV56 (faster).
If you do not have a precompiler, you can use "dynamic
SQL" where the full SQL statement is sent to the
database for processing each time. Much less efficient
than having pre-compiled SQL, of course...
Jan-Erik.
I have a short SCO file (Cobol with embedded SQL) and the
/LIST/MACHINE_CODE output from the precompile on an Alpha.

This (getting the next number from a SEQUENCE):

---------------------------------------------
exec sql
select seq1.nextval into ws-seq1
from rdb$database
end-exec
---------------------------------------------

gets pre-compiled into:

---------------------------------------------
* exec sql
* select seq1.nextval into ws-seq1
* from rdb$database
* end-exec

CALL "SQL$PRC1_C6DLP4474PB10E2R2000" USING SQLCA ,WS_SEQ1
---------------------------------------------

The PRC1 above is approx. 1K bytes of machine code.
I can post that to my web-server if there is any
interest to see the Alpha machine code generated.

Jan-Erik.
JF Mezei
2015-02-04 19:31:49 UTC
Permalink
Post by Jan-Erik Soderholm
---------------------------------------------
* exec sql
* select seq1.nextval into ws-seq1
* from rdb$database
* end-exec
CALL "SQL$PRC1_C6DLP4474PB10E2R2000" USING SQLCA ,WS_SEQ1
So "SQL$PRC1_C6DLP4474PB10E2R2000" is a subroutine created by the
pre-compiler and specific to this invocation? I was under the
impression that the precompiler called static/generic SQL routines and
fed them the arguments.

Obviously, if subroutines are created for each EXEC, then yeah, they
become intimately tied to the architecture, especially if the
pre-compiler creates object code.

How does this work for linking ? Does it generate some linker file that
contains the reference to all the object file(s) generated by the
pre-compiler ?
Jan-Erik Soderholm
2015-02-04 23:44:28 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
---------------------------------------------
* exec sql
* select seq1.nextval into ws-seq1
* from rdb$database
* end-exec
CALL "SQL$PRC1_C6DLP4474PB10E2R2000" USING SQLCA ,WS_SEQ1
So "SQL$PRC1_C6DLP4474PB10E2R2000" is a subroutine created by the
pre-compiler and specific to this invocation?
Yes, it is dynamically created Alpha assembler/machine code.
Each exec sql/end-exec combo creates a separate routine in the
OBJ file with its own entry point and so on.

The OBJ file also contains the main code from the Cobol
code itself, of course. The listing file has two separate
machine listings, one from "Oracle Rdb SQL V7.3-121" and
the rest from "HP COBOL V2.9-1453", the "traditional" part.

The standard Cobol compiler never sees the SQL parts, of
course, since they are commented out by the precompiler.

The linker in turn just sees one OBJ file with embedded
subroutines, just "as usual" and links it all together.
The linking is also against the Rdb shareable image
so that all lower-level functionality is available.
Remember that Rdb doesn't have a separate "database engine"
like most other databases have; everything is done within
the user process.
Post by JF Mezei
I was under the
impression that the precompiler called static/generic SQL routines and
fed them the arguments.
Yes, there are lower-level routines that are in turn called
from that "SQL$PRC1_C6DLP4474PB10E2R2000" thing. Like:
SQL$DECLARE_HANDLES, SQL$START_TRANSACTION and so on.
These are in the Rdb shareable module.

Many other databases use a model where the SQL is sent
more or less as a string to the database engine to be
interpreted and compiled at runtime. That is called
"Dynamic SQL" and can be used with Rdb, if one wants.

The nice thing with Dynamic SQL is that it also supports
DDL statements (like CREATE DATABASE and so on). Way back
in time, when there were separate "development" and "runtime"
Rdb licenses, we wrote an interactive CLI that took "CREATE"
commands and ran them using Dynamic SQL. So we could create
and manage databases in our runtime-only environment. We also
built a version using an early VisualBasic version so that
we got a Win3.1 GUI interface. This was early 90s using
Rdb 3.x or maybe an early V4 of Rdb.
Post by JF Mezei
Obviously, if subroutines are created for each EXEC, then yeah, they
become intimately tied to the architecture, especially if the
pre-compiler creates object code.
How does this work for linking ? Does it generate some linker file that
contains the reference to all the object file(s) generated by the
pre-compiler ?
Well, the references are in the symbol tables within the OBJ
file itself. So the linker will have no problem linking the
CALL to SQL$PRC1_C6DLP4474PB10E2R2000 to the routine itself
since they are together in the same OBJ anyway.

You only get one OBJ file.

Then you have to point the linker to the shareable parts by
including "SQL$USER/LIB" in the LINK command (or OPT file).

Jan-Erik.
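[The "Dynamic SQL" model mentioned in the post above, where the statement travels to the engine as an ordinary string and is parsed and compiled at runtime, is how most other databases work by default. A small illustration using Python's built-in sqlite3 module (not Rdb; purely to show the contrast with precompiled, linked-in SQL routines):]

```python
import sqlite3

# Dynamic SQL: each statement is plain text, interpreted by the
# database engine at runtime rather than compiled ahead of time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")  # DDL works too
conn.execute("INSERT INTO t VALUES (?, ?)", (1, "abc"))
row = conn.execute("SELECT name FROM t WHERE id = ?", (1,)).fetchone()
print(row[0])  # abc
```

[Note how CREATE TABLE goes through the same path as the queries; that runtime DDL capability is exactly what the "CREATE"-command CLI described above exploited.]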
Jan-Erik Soderholm
2015-02-05 00:16:40 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
---------------------------------------------
* exec sql
* select seq1.nextval into ws-seq1
* from rdb$database
* end-exec
CALL "SQL$PRC1_C6DLP4474PB10E2R2000" USING SQLCA ,WS_SEQ1
So "SQL$PRC1_C6DLP4474PB10E2R2000" is a subroutine created by the
pre-compiler and specific to this invocation?
OK, probably the last post in this subject... :-)

For those still interested, here is a LIS file:
http://jescab2.dyndns.org/pub_docs/get_seq_lis.txt

Look for SQL$PRC in the list file.

Note that the output from both the precompiler
and the standard Cobol compiler is included.

This was compiled without using the /ARCH switch
which gives code compatible with EV4 and up.

If I compile with /ARCH=EV67 I get slightly smaller
(and probably faster) code from the precompiler using
some EV6 (and up) new instructions (byte/word memory I/O).


Jan-Erik.
John Reagan
2015-02-05 19:01:22 UTC
Permalink
Post by Jan-Erik Soderholm
This was compiled without using the /ARCH switch
which gives code compatible with EV4 and up.
If I compile with /ARCH=EV67 I get slightly smaller
(and probably faster) code from the precompiler using
some EV6 (and up) new instructions (byte/word memory I/O).
Shift your fingers over one key. It was EV56 that introduced the byte/word instructions.
Jan-Erik Soderholm
2015-02-05 22:11:51 UTC
Permalink
Post by John Reagan
This was compiled without using the /ARCH switch which gives code
compatible with EV4 and up.
If I compile with /ARCH=EV67 I get slightly smaller (and probably
faster) code from the precompiler using some EV6 (and up) new
instructions (byte/word memory I/O).
Shift your fingers over one key. It was EV56 that introduced the byte/word instructions.
Correct. I used /ARCH=EV67 since my office/lab box is
an EV68 DS25. 67 and 68 are equivalent in the /ARCH switch.

But the byte/word support is from EV56, not EV6 as I wrote, right.


Jan-Erik.
JF Mezei
2015-02-05 04:49:00 UTC
Permalink
Post by Jan-Erik Soderholm
Yes, it is dynamically created Alpha assembler/machine code.
Each exec sql/end-exec combo creates a separate routine in the
OBJ file with its own entry point and so on.
The OBJ file also contains the main code from the Cobol
code itself, of course.
How is that done ?

I can see the precompiler generating its own .OBJ and then creating a
new .cob text file with CALL statements instead of the EXEC code. But
that new .COB would be compiled by the normal COBOL compiler, right ?
How can this output from the COBOL compiler go into the same .OBJ as
was generated by the pre-compiler ?
Jan-Erik Soderholm
2015-02-05 08:24:18 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
Yes, it is dynamically created Alpha assembler/machine code.
Each exec sql/end-exec combo creates a separate routine in the
OBJ file with its own entry point and so on.
The OBJ file also contains the main code from the Cobol
code itself, of course.
How is that done ?
I can see the precompiler generating its own .OBJ and then creating a
new .cob text file with CALL statements instead of the EXEC code. But
that new .COB would be compiled by the normal COBOL compiler, right ?
It is. The precompiler calls the specific language compiler.
The precompiler is run by:

$ SQL$PRE /cobol /list/mach file.sco
Post by JF Mezei
How can this output from the COBOL compiler go into the same .OBJ as
was generated by the pre-compiler ?
Don't know. Does it matter how? It does.

As you can see from the list file I provided, output
from both the precompiler and the standard compiler are
in the same list file. The file actually starts over
at "Page 1" when the standard compiler output starts.


Jan-Erik
hb
2015-02-05 17:13:24 UTC
Permalink
Post by Jan-Erik Soderholm
Post by JF Mezei
How can this output from the COBOL compiler go into the same .OBJ as
was generated by the pre-compiler ?
Don't know. Does it matter how? It does.
Object files can contain more than one object module. Analyze/object
should show if that is used. Concatenation of object files with
something like APPEND should do. Dunno what they (Rdb) use.
Bill Gunshannon
2015-02-05 17:37:47 UTC
Permalink
Post by Jan-Erik Soderholm
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),
It would be platform agnostic since it doesn't deal with machine code,
right ?
Or am I totally off track and mistaken here ?
The Rdb SQL precompiler replaces the
EXEC SQL
...
...
END EXEC
with a CALL(...) to an object module that is a compiled
version of the "embedded" SQL. Rdb creates machine code
for some tasks and that in turn calls routines in the
Rdb shareable library.
And, as stated previously, these EXEC-SQL directives seem to be
similar enough that working with one would make it easy to work with
another. Thus the reason I would like to see it included in the
course. That is the way all the COBOL/SQL I have seen or worked
with lately is done.
Post by Jan-Erik Soderholm
Rdb then also creates some machine code on-the-fly
at runtime, but that is another issue...
The whole package (precompiler and shareable library)
has to be architecture specific, of course.
Naturally, thus the reason I asked if there was anything available
on VMS. My last experience was Micro Focus/AIX/Oracle.
Post by Jan-Erik Soderholm
For Alpha there were until recently two Rdb installation
kits, one pre-EV56 and one post-EV56 (faster).
For our use, speed differences would have to be extreme for it to
matter. But then, I still don't know if there is an emulator
available for academic use.
Post by Jan-Erik Soderholm
If you do not have a precompiler, you can use "dynamic
SQL" where the full SQL statement is sent to the
database for processing each time. Much less efficient
than having pre-compiled SQL, of course...
Also not really what Dynamic SQL is intended for but then, when all
you have in your toolbox is a hammer......

bill
--
Bill Gunshannon | de-moc-ra-cy (di mok' ra see) n. Three wolves
***@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton |
Scranton, Pennsylvania | #include <std.disclaimer.h>
Jan-Erik Soderholm
2015-02-05 22:08:15 UTC
Permalink
Post by Bill Gunshannon
Post by Jan-Erik Soderholm
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),
It would be platform agnostic since it doesn't deal with machine code,
right ?
Or am I totally off track and mistaken here ?
The Rdb SQL precompiler replaces the
EXEC SQL
...
...
END EXEC
with a CALL(...) to an object module that is a compiled
version of the "embedded" SQL. Rdb creates machine code
for some tasks and that in turn calls routines in the
Rdb shareable library.
And, as stated previously, these EXEC-SQL directives seem to be
similar enough that working with one would make it easy to work with
another.
Yes, the basic structure is the same. SQL dialects can
of course vary between database platforms.
Post by Bill Gunshannon
Thus the reason I would like to see it included in the
course. That is the way all the COBOL/SQL I have seen or worked
with lately is done.
I worked once as one part of a larger mainframe project.
We had a DB2 guy who helped with writing optimized DB2 SQL
code that we then CALL'ed from our applications. That way,
the "database programmer" also can have full control over
query design and performance vs. table/index design.

It is not always that a good Cobol programmer also
is a good SQL programmer.

But yes, the modules that the DB2 guy wrote for us to CALL
were written using embedded SQL in Cobol. :-) But my point
was that separating business logic from database access
can be a good thing.

Jan-Erik.
Craig A. Berry
2015-02-05 00:51:05 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ?
Not necessarily.
Post by JF Mezei
Or am I totally off track and mistaken here ?
Yes, but it can be done. I'm pretty sure I heard Brett Cameron mention
in a webinar in the last year or so that he had some open source SQL
precompiler running on VMS. I don't remember details and I don't recall
finding anything publicly available at the time.
Bill Gunshannon
2015-02-05 16:51:52 UTC
Permalink
Post by JF Mezei
Post by Jan-Erik Soderholm
Well, not many databases have precompilers on *VMS*.
Wouldn't a precompiler be rather simple to port ? After all, it is
nothing but a text parser that replaces certain constructs with a CALL
statement for a named subroutine (which I assume is the same name across
different platforms),
It would be platform agnostic since it doesn't deal with machine code,
right ?
Or am I totally off track and mistaken here ?
Well, you're close. :-)

bill
--
Bill Gunshannon | de-moc-ra-cy (di mok' ra see) n. Three wolves
***@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton |
Scranton, Pennsylvania | #include <std.disclaimer.h>
Bill Gunshannon
2015-02-05 16:50:41 UTC
Permalink
Post by Jan-Erik Soderholm
If you want a true Cobol precompiler I can just think of "Oracle Oracle"
or "Oracle Rdb". If I remember correctly, you could download and run
Rdb for free once, at least for "hobbyist" use.
Well, I knew about Oracle. Other databases have to have them,
too, otherwise the database turns out to be pretty useless.
Well, not many databases have precompilers on *VMS*.
Well, I was hoping....
Post by Jan-Erik Soderholm
Not everyone wants to use PL/SQL. :-)
Maybe not, but that has nothing to do with precompilers.
I do not see the connection. You can call your PL/SQL
stored procedures by SQL calls using embedded SQL and
your precompiler.
Sorry, too subtle for anyone not currently in the COBOL world. There
are a lot of people (think PHB) who see PL/SQL and Crystal Reports as a
replacement for not only COBOL, but any real programming language doing
database access.
Post by Jan-Erik Soderholm
Does anyone know if Rdb is still available for free?
I know Oracle is (not the VMS version
however) but running Oracle is a lot of overhead. Pretty much
takes a fulltime DBA to keep it running properly.
Probably depends on the size of the database operation as
such. But yes, Rdb is known for being able to run more or
less "lights-out" with no DBA intervention for years.
I'll look into whether or not Rdb can be used for something like this.
But, I have my doubts.
Post by Jan-Erik Soderholm
If you skip the precompiler and can use some C middleware maybe
MySQL might work.
And that would be an immediate showstopper. No text is going to
cover doing that and no real business is doing that. The idea of
getting COBOL and VMS back into the edu world is to show that it
is the way business is being done.
MySQL (just to mention one) lacks any SQL pre-compiler at all.
Seems to be pretty popular anyway.
Exactly. Writing C libraries to access it from COBOL is not something
that any serious COBOL operation is going to be interested in and thus
not something acceptable for the purpose of this course. I am trying
to convince people here that COBOL is still needed for serious business
and, trust me, it's a real hard sell in academia. (Kinda like selling
VMS to the same group!!)

Just as an aside, I wrote COBOL wrappers for the Postgres C libraries
that worked quite well and eliminated the need for the COBOL developer
to know anything but COBOL. And, I am trying to convince a grad student
here to make his thesis project an SQL Pre-compiler for COBOL which
could then move into the Open Source world making GNUCOBOL (formerly
Open COBOL) a more viable option for serious COBOL programming.
Post by Jan-Erik Soderholm
Embedded SQL for C has been deprecated as of Microsoft SQL
Server 2008.
That's because they are trying to push their own alternative to SQL.
Post by Jan-Erik Soderholm
So embedded SQL through SQL precompilers is not generally
"the way business is being done" today.
Well, having done it for a living within the last couple years and
knowing a lot of places that are still doing it, I disagree. :-)
Never confuse the actions of MicroSoft with what the business world
is or wants to do. Like academia, they are more interested in steering
the bus than getting their riders to the destinations they want.
Post by Jan-Erik Soderholm
Then there is what is called "SQL module language"; I don't know if
anything but Rdb supports that, but it is a really nice way to separate
the application logic in the Cobol code from the SQL code running
the database operations.
The whole idea is to show COBOL doing with databases what used to
be done with files. If you are going to separate the database
from the COBOL people will ask, "Why are we learning COBOL if you
can do all this without it?"...
Without it? You need to have all your business (non-SQL)
application logic somewhere, no? That is what Cobol is for.
Many people today think that PL/SQL is the answer to that. This attitude
was one of the primary reasons I left the last COBOL gig I was doing.
Post by Jan-Erik Soderholm
There are Rdb pre-compilers for Ada, C, Fortran, Pascal, PL/I and Cobol.
There is nothing special around Cobol here.
I never said there was, other than the very strong need for COBOL programmers
in the business world. In the IBM world PL/I is still going strong. And
Fortran is out there, but I doubt there are many people doing database
business applications in it as that wasn't Fortran's forte. We did COBOL,
Fortran and PL/I on Univac mainframes using DMS-11 a long time ago. A lot
of those applications are actually still out there. :-) I really can't
imagine there is much call for Ada programmers who can do the same. Come
to think of it, I really can't imagine there is much call for Ada programmers
period. :-)
Post by Jan-Erik Soderholm
You use Cobol because it solves your business logic as
you like, not specifically because it has a SQL precompiler.
Where did you get this logic from what I said? COBOL with database access
is how most of it has been done for at least 20 years. Maybe the fact that
most COBOL courses were still teaching sequential, direct access and ISAM
was one of the reasons for its fall from grace, who knows. But my intent
in getting a COBOL course going again is to help design a course based on
what businesses need in a COBOL programmer today. If what they learn isn't
going to help them get a job, no student is going to take the course.
Post by Jan-Erik Soderholm
And we are back to PL/SQL. And we
are also back to the way the anti-COBOLists want it to be even
though we have those millions of lines of COBOL still running
out there.
I do not follow your logic. You do not seem to understand
the concept of pre-compiled (also called embedded) SQL and
definitely not "SQL module language".
I can only talk about what I saw (recently) in use at COBOL shops.
It wasn't a lot of "pre-compiled SQL"; it was a lot of:
CONNECT, DECLARE CURSOR, FETCH, INSERT, SELECT, UPDATE, etc. All
set off by EXEC-SQL/END-EXEC pairs which get expanded into system
calls by the pre-compiler.
Post by Jan-Erik Soderholm
The SQLMOD compiler compiles SQL files into standard object files
that are simply CALL'ed from and linked to the main Cobol code just
as if it had been written in any language.
And that is yet another niche language...
Well, it is "Feature ID: E182 Module language" in all SQL standards
since at least SQL:1999.
Yeah, and COBOL has had OO since 2002. And most of the millions of lines
of existing code (and most of the serious new development as well) don't
use it.
Post by Jan-Erik Soderholm
Fully supported by Rdb. On Oracle you are forced to the less flexible
embedded SQL through pre-compilers.
And yet, Oracle is the number one database in the world, even though
others, including free ones, have more capabilities and features.
What you see as "less flexible" businesses see as the way they have
been doing it for decades. They don't need flexibility, they need to
get the job done.
Post by Jan-Erik Soderholm
And of course also limited to
those languages that *have* a pre-compiler at all.
Like COBOL. Which is the only one I am interested in at this point
in time.
Post by Jan-Erik Soderholm
But yes, many use embedded SQL anyway, in particular for
smaller projects
Define "smaller". The IRS? The Navy? All your credit card transactions?
Post by Jan-Erik Soderholm
where it might be that the programmer writing
the Cobol code also writes the SQL code...
If he can write it in a form that matches the COBOL language, why not?
How is that any different from picking out which records or fields in
a file he works with? I've done it. On some really big systems.
Post by Jan-Erik Soderholm
that time has to be wasted
teaching when the intent is to teach something the students can
actually apply when they leave school. My intent is to re-introduce
COBOL, not show that there are alternatives. They already know that,
it's called Java.
Right. You obviously don't get it. Yes, you can teach basic
Cobol to your students. But do not involve databases then.
If you are going to involve database work, you have to
choose your path. Maybe VMS/Cobol/Rdb. Maybe MVS/Cobol/DB2.
Maybe something else. You learn a specific environment incl.
the database that is popular in that specific environment.
All of the systems I have seen use pretty much the same EXEC-SQL
directives. If they learn any they will find it easy to adapt.
They aren't going to be "involved in database work" any more than
they would have to know how the filesystem works if they were doing
sequential files. The database is merely a datasource and from COBOL
it is accessed only slightly different than if it were a file. Heck,
I have had to fix programs where the conversion (2 decades ago!) from
flat files to databases involved writing a module in COBOL to read
the database, write the data to a flat file and then process the flat
file. Obviously, because they were government contractors specifically
hired for the conversion they were not the brightest bulbs in the
lamp. But it showed just how similar the methodologies were. My
goal is to have the students taught what the business world wants them
to learn. Like it or not, their primary reason for being here is to
prepare for a job and anything that detracts from that is a waste of
their time and their parents' money.
Post by Jan-Erik Soderholm
But fine, learning some basic Cobol might be a good thing,
you can later take a Cobol job and learn the specifics there.
You're gonna have to do that no matter what you study. No student
leaves school as an expert. I just want our students to have a better
set of tools than the people they will be competing with in the job
market. And I (and others at other schools lately) think that COBOL
is a good tool to add to the toolbox.

bill
--
Bill Gunshannon | de-moc-ra-cy (di mok' ra see) n. Three wolves
***@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton |
Scranton, Pennsylvania | #include <std.disclaimer.h>
Craig A. Berry
2015-02-05 01:18:36 UTC
Permalink
I've tried to figure out how to eliminate pulling that image in, but the
compiler generates calls to things like CXXL$LRTS___COMPAREQ even for
plain C code, so I don't see a sure-fire way to eliminate it, and of
course anything genuinely C++ would need the C++ RTL.
Are you here compiling C code with a C++ compiler as well? Otherwise I
don't see why this should generate references to CXXL$LRTS.
Yes, I'm compiling C code with C++ in order to make sure the headers and
the external symbols are ship-shape for building add-ons in C++. Among
other reasons stated elsewhere in this thread.
Can you write it in a different language (MACRO32 should be available on
all systems, for free)
Um, it's the Perl sources. 250,000 lines of C that's been evolving
rapidly under the hands of hundreds of contributors for 25 years. Let me
know when you're done rewriting it in MACRO-32, but only if you're
willing to maintain it.
and/or always compile it with the C compiler and
provide the object(s)?
Distributing platform-specific object code with an open source project
is a non-starter. For one thing it would become obsolete several times a
day as new changes are pushed.
You definitely have references to the CRTL, but
there must not be any reference to the C++ RTLs.
OK, thanks. I was able to eliminate some of the dependencies on the C++
RTL by building with /DEFINE=__NONAMESPACE_STD=1, but that didn't get
rid of all of them.
hb
2015-02-05 17:17:26 UTC
Permalink
Post by Craig A. Berry
Um, it's the Perl sources. 250,000 lines of C that's been evolving
rapidly under the hands of hundreds of contributors for 25 years. Let me
know when you're done rewriting it in MACRO-32, but only if you're
willing to maintain it.
Maybe I didn't make it clear: just the piece of code which sets the
DECC$ARGV_PARSE_STYLE, which is VMS specific, anyway.
Craig A. Berry
2015-02-05 19:18:59 UTC
Permalink
Post by hb
Post by Craig A. Berry
Um, it's the Perl sources. 250,000 lines of C that's been evolving
rapidly under the hands of hundreds of contributors for 25 years. Let me
know when you're done rewriting it in MACRO-32, but only if you're
willing to maintain it.
Maybe I didn't make it clear: just the piece of code which sets the
DECC$ARGV_PARSE_STYLE, which is VMS specific, anyway.
Ah, so you're saying put the init code in a separate shareable image? In
that case it's no problem to get LIB$INITIALIZE into an image that has
no references to the C++ RTL, even when compiling with the C++ compiler.
But how do I guarantee that the init-only shareable image gets activated
before the primary shareable image that does have references in it to
the C++ RTL?

I should probably explain that most of the Perl interpreter lives in a
shareable image (PERLSHR.EXE located via the PERLSHR logical name) and
very little in the main PERL.EXE image. Adding another shareable image
containing only LIB$INITIALIZE is of course possible, but adds more
complexity to work around what's really a bug in C++. Ironically, the
C++ release notes are the only place that the use of LIB$INITIALIZE is
documented with (formerly) working example code.
hb
2015-02-05 20:01:21 UTC
Permalink
Post by Craig A. Berry
Ah, so you're saying put the init code in a separate shareable image? In
that case it's no problem to get LIB$INITIALIZE into an image that has
no references to the C++ RTL, even when compiling with the C++ compiler.
But how do I guarantee that the init-only shareable image gets activated
before the primary shareable image that does have references in it to
the C++ RTL?
Yes, I mean a separate shareable image, which only references the C RTL.
And a link command and/or linker options to ensure that it is activated
before the C++ RTL. The image activator processes the shareable images
in the order they are listed in your image, and the linker writes that
list based on your link command/options. As said before, the image
activator recognizes dependencies and may need to re-order what was
found in the image.
Post by Craig A. Berry
I should probably explain that most of the Perl interpreter lives in a
shareable image (PERLSHR.EXE located via the PERLSHR logical name) and
very little in the main PERL.EXE image. Adding another shareable image
containing only LIB$INITIALIZE is of course possible, but adds more
complexity to work around what's really a bug in C++. Ironically, the
C++ release notes are the only place that the use of LIB$INITIALIZE is
documented with (formerly) working example code.
I don't know whether this is a "bug" in C++, that is, in the C++ RTLs, or
that the C++ RTL developers had some good reasons for designing it this
way. Did you report this as a bug and did you get a response? Obviously
it prohibits setting C-RTL features from init code in C++ images (main
or shareable).

I don't have access to an I64 system with a C++ compiler. I thought of
compiling the vms.c sources I found in the perl repository and analyzing
the created object. But if that is part of the big PERLSHR.EXE, I
suspect it is not worth the effort: there are probably many other object
modules which have references to the C++ RTLs.

Oh, there was a bootcamp presentation on image initialization, because
the presenter thought that there is not much documentation on this stuff
in the manuals.
Craig A. Berry
2015-02-05 20:52:12 UTC
Permalink
Post by hb
I don't know whether this is a "bug" in C++, that is, in the C++ RTLs, or
that the C++ RTL developers had some good reasons for designing it this
way. Did you report this as a bug and did you get a response? Obviously
it prohibits setting C-RTL features from init code in C++ images (main
or shareable).
As far as C++ is concerned I'm just a hobbyist with no support contract,
so no, there has been no report through official channels. As far as I
know the same is true for Steven Schweda, who was the first to raise the
issue in a public forum.
Post by hb
I don't have access to an I64 system with a C++ compiler. I thought of
compiling the vms.c sources I found in the perl repository and analyzing
the created object. But if that is part of the big PERLSHR.EXE, I
suspect it is not worth the effort: there are probably many other object
modules which have references to the C++ RTLs.
I managed to eliminate all references to the C++ run-time in PERLSHR.EXE
except for these:

$ search dbgperlshr.map cxxl$langrtl
CXXL$LANGRTL CXXL V1.0-0 Lkg
0 30-MAR-2010 19:45 Linker I02-37
SYS$COMMON:[SYSLIB]CXXL$LANGRTL.EXE;1
CXXL$LANGRTL LESS/EQUAL 1 1
CXXL$LRTS___COMPAREQ 00000007-X CXXL$LANGRTL
SV
CXXL$LRTS___DTOQ 00000009-X CXXL$LANGRTL
SV
CXXL$LRTS___QTOD 0000000F-X CXXL$LANGRTL
SV

These calls must be generated from ordinary C-language constructs in
sv.c or are perhaps used to implement C library calls like memcmp,
though I've been unable to figure out from the listings where in the
source the generated calls came from. Even if I figured out how to get
rid of them, it wouldn't really be a stable solution as anyone could
reintroduce similar code at any time.

I may try putting the init code into a separate image sometime but for
now will probably just document that DECC$ARGV_PARSE_STYLE doesn't work
when building with C++ on Itanium.
Chris Scheers
2015-02-05 22:53:48 UTC
Permalink
Post by Craig A. Berry
Post by hb
Post by Craig A. Berry
Um, it's the Perl sources. 250,000 lines of C that's been evolving
rapidly under the hands of hundreds of contributors for 25 years. Let me
know when you're done rewriting it in MACRO-32, but only if you're
willing to maintain it.
Maybe I didn't make it clear: just the piece of code which sets the
DECC$ARGV_PARSE_STYLE, which is VMS specific, anyway.
Ah, so you're saying put the init code in a separate shareable image? In
that case it's no problem to get LIB$INITIALIZE into an image that has
no references to the C++ RTL, even when compiling with the C++ compiler.
But how do I guarantee that the init-only shareable image gets activated
before the primary shareable image that does have references in it to
the C++ RTL?
I should probably explain that most of the Perl interpreter lives in a
shareable image (PERLSHR.EXE located via the PERLSHR logical name) and
very little in the main PERL.EXE image. Adding another shareable image
containing only LIB$INITIALIZE is of course possible, but adds more
complexity to work around what's really a bug in C++. Ironically, the
C++ release notes are the only place that the use of LIB$INITIALIZE is
documented with (formerly) working example code.
Do you actually need to explicitly use LIB$INITIALIZE?

IIRC, in C++, if a static variable is of a class that has an
initializer, the initializer runs before execution of main(). I would
assume that in VMS-land, this is implemented by LIB$INITIALIZE and is
probably the code that is running before your LIB$INITIALIZE code.

Could you achieve your objective by declaring a static variable with
your own class with an initializer and then doing what you need to do
from the initialization routine?
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
Craig A. Berry
2015-02-06 03:57:25 UTC
Permalink
Post by Chris Scheers
Do you actually need to explicitly use LIB$INITIALIZE?
IIRC, in C++, if a static variable is of a class that has an
initializer, the initializer runs before execution of main(). I would
assume that in VMS-land, this is implemented by LIB$INITIALIZE and is
probably the code that is running before your LIB$INITIALIZE code.
Could you achieve your objective by declaring a static variable with
your own class with an initializer and then doing what you need to do
from the initialization routine?
Thanks for the suggestion. Given that the C++ release notes document the
use of LIB$INITIALIZE exactly as I'm already doing, I suspect not, but
anything's possible. Note that setting DECC$ARGV_PARSE_STYLE has to
happen before the command line is parsed into the argv array, not just
before main() is called.

It is also of course necessary to set the parse style to extended in the
process before invoking your program, or DECC$ARGV_PARSE_STYLE will not
work. There is an API to set the parse style:


        status = sys$set_process_propertiesw(0,
                                             0,
                                             0,
                                             PPROP$C_PARSE_STYLE_TEMP,
                                             PARSE_STYLE$C_EXTENDED,
                                             0);

but that API is for all practical purposes non-functional since you
can't call it early enough to do any good. Whatever the effect of parse
style is, it apparently happens before any LIB$INITIALIZE calls are
made, so you can't actually change the parse style at run time in a way
that makes any difference to the command that invoked the program. I
have ranted about this in more detail at:

<http://sourceforge.net/p/vms-ports/tickets/34/>

Hopefully extended parse will be always on in VMS 9.0 and we can finally
stop having to explain that sometimes you have to quote things to
preserve case and sometimes you don't.
John E. Malmberg
2015-02-06 13:05:47 UTC
Permalink
Post by Craig A. Berry
Thanks for the suggestion. Given that the C++ release notes document the
use of LIB$INITIALIZE exactly as I'm already doing, I suspect not, but
anything's possible. Note that setting DECC$ARGV_PARSE_STYLE has to
happen before the command line is parsed into the argv array, not just
before main() is called.
It also of course necessary to set parse style to extended in the
process before invoking your program or DECC$ARGV_PARSE_STYLE will not
status = sys$set_process_propertiesw( 0,
0,
0,
PPROP$C_PARSE_STYLE_TEMP,
PARSE_STYLE$C_EXTENDED,
0 );
but that API is for all practical purposes non-functional since you
can't call it early enough to do any good. Whatever the effect of parse
style is, it apparently happens before any LIB$INITIALIZE calls are
made, so you can't actually change the parse style at run time in a way
that makes any difference to the command that invoked the program. I
<http://sourceforge.net/p/vms-ports/tickets/34/>
Hopefully extended parse will be always on in VMS 9.0 and we can finally
stop having to explain that sometimes you have to quote things to
preserve case and sometimes you don't.
What is needed is for both foreign commands and DCL verbs to have a way
to indicate to DCL that they want to force parse_style=extended for just
that command.

For foreign commands it could be something like using a double dollar
sign instead of a single one:

foo :== $$dev:[dir]foo.exe

There should be some way to add an equivalent flag to the CLD.

I do not know how hard it would be to implement such a feature into DCL.

I suspect that DCL does its case conversion before it looks up the
command in the tables or as a foreign command, so even adding that
attribute to a verb or a foreign command is too late in the command
processing sequence.

Regards,
-John
***@qsl.network
Personal Opinion Only


Stephen Hoffman
2015-02-06 14:54:03 UTC
Permalink
Post by John E. Malmberg
What is needed is for both foreign commands and DCL verbs to have a way
to indicate to DCL that they want to force parse_style=extended for
just that command.
For foreign commands it could be something like using a double dollar
foo :== $$dev:[dir]foo.exe
There should be some way to add an equivalent flag to the CLD.
I do not know how hard it would be to implement such a feature into DCL.
I suspect that DCL does its case conversion before it looks up the
command in the tables or as a foreign command, so even adding that
attribute to a verb or a foreign command is too late in the command
processing sequence.
Related fixes for several areas: probably embedded image-specific
environmental configuration settings, whether for CRTL, or for command
parsing, or for the debug bootstrap. DCL'd have to keep the same-case
and the upper-case strings around, and choosing one means either DCL'd
have to sniff the image settings and pass the right string into the
image activation (ugly), or the image activator'd have to sniff and
select the command string on activation (ugly), or use a sys$cli-like
callback to provide details to and then fetch the command string from
the interpreter (ugly, more complex). Adding the parsing settings
into the CLD syntax gets coverage only for CLD-based commands, and
adding the double-dollar only gets the older and non-automatic foreign
commands — the not-DCL$PATH foreign commands — working. This all gets
back to the implementation of and the interface between the CLIs and
the image activator, unfortunately.
--
Pure Personal Opinion | HoffmanLabs LLC
V***@SendSpamHere.ORG
2015-02-06 18:51:28 UTC
Permalink
Post by Stephen Hoffman
Post by John E. Malmberg
What is needed is for both foreign commands and DCL verbs to have a way
to indicate to DCL that they want to force parse_style=extended for
just that command.
For foreign commands it could be something like using a double dollar
foo :== $$dev:[dir]foo.exe
There should be some way to add an equivalent flag to the CLD.
I do not know how hard it would be to implement such a feature into DCL.
I suspect that DCL does its case conversion before it looks up the
command in the tables or as a foreign command, so even adding that
attribute to a verb or a foreign command is too late in the command
processing sequence.
Related fixes for several areas: probably embedded image-specific
environmental configuration settings, whether for CRTL, or for command
parsing, or for the debug bootstrap. DCL'd have to keep the same-case
and the upper-case strings around, and choosing one means either DCL'd
have to sniff the image settings and pass the right string into the
image activation (ugly), or the image activator'd have to sniff and
select the command string on activation (ugly), or use a sys$cli-like
callback to provide details to and then fetch the command string from
the interpreter (ugly, more complex). Adding the parsing settings
into the CLD syntax gets coverage only for CLD-based commands, and
adding the double-dollar only gets the older and non-automatic foreign
commands — the not-DCL$PATH foreign commands — working. This all gets
back to the implementation of and the interface between the CLIs and
the image activator, unfortunately.
$REST_OF_LINE_NOUPCASE
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Stephen Hoffman
2015-02-06 20:30:01 UTC
Permalink
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2015-02-06 20:41:41 UTC
Permalink
Post by Stephen Hoffman
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
If this is a service that is documented I've sure never heard of it, or
maybe it's just a global symbol in the CLI?

$ search/format=nonulls sys$system:*.exe REST_OF_LINE_NOUPCASE

******************************
SYS$COMMON:[SYSEXE]CDU.EXE;1

LE<SYN>$REST_OF_LINE_NOUPCASEABBREVIATEFOREIGNIMMEDIATEMCRIGNOREMCROPTDELIMMCRPARSENOSTATUS.LISLISTING<BEL><SO><SOH>t<LF><ACK>LISTING<BEL><SO><SOH><
IND><LF><ACK><FF><SOH><SO><SOH><CCH><LF><ACK><SO><SOH><xA0><LF><ACK><SO><SOH>¨<LF><ACK><SO><SOH>°<LF><ACK><SO><SOH>¸<LF><ACK><SO><SOH>À<LF><ACK>$LIN
E<ENQ><SO><SOH>È<LF><ACK>ORORANDANDANY2NOTNEGCLI_MCR<BEL><SO><SOH><VT><ACK>OBJECT<ACK><SO><SOH><DLE><VT><ACK>OBJECT<ACK><SO><SOH>
<VT><ACK>
hb
2015-02-06 22:32:56 UTC
Permalink
Post by Craig A. Berry
Post by Stephen Hoffman
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
If this is a service that is documented I've sure never heard of it, or
maybe it's just a global symbol in the CLI?
$ search/format=nonulls sys$system:*.exe REST_OF_LINE_NOUPCASE
$ search sys$update:*.cld $rest_of_line_noupcase
Craig A. Berry
2015-02-06 23:08:47 UTC
Permalink
Post by hb
Post by Craig A. Berry
Post by Stephen Hoffman
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
If this is a service that is documented I've sure never heard of it, or
maybe it's just a global symbol in the CLI?
$ search/format=nonulls sys$system:*.exe REST_OF_LINE_NOUPCASE
$ search sys$update:*.cld $rest_of_line_noupcase
Ah, thanks. Looks like it's an undocumented value type in the command
definition utility, currently only used by CREATE/TERM, SPAWN, and PIPE.
Having nothing to go on about what it does except its name, I'm still in
the dark about how preventing DCL from upcasing something has anything
to do with preventing the CRTL or C++ RTL from downcasing it.
Stephen Hoffman
2015-02-06 23:32:21 UTC
Permalink
Post by Craig A. Berry
Post by Stephen Hoffman
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
If this is a service that is documented I've sure never heard of it, or
maybe it's just a global symbol in the CLI?
CLD items will not solve this for the common cases — foreign commands —
and for the reasons I stated in earlier reply. Or yeah, just
dig-dig-dig that hole deeper.
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2015-02-07 12:24:38 UTC
Permalink
Post by Stephen Hoffman
Post by Craig A. Berry
Post by Stephen Hoffman
Post by V***@SendSpamHere.ORG
$REST_OF_LINE_NOUPCASE
I'm clearly missing out on some just-dig-the-hole-deeper meme, here.
If this is a service that is documented I've sure never heard of it,
or maybe it's just a global symbol in the CLI?
CLD items will not solve this for the common cases — foreign commands —
and for the reasons I stated in earlier reply. Or yeah, just
dig-dig-dig that hole deeper.
You can't get that hole deeper when digging in the swamps :-)

Back to the original C++ problem. At the moment it looks like you have
two options: the earlier mentioned separate shareable image setting the
decc$argv_parse_style feature, or a DCL command with
$REST_OF_LINE_NOUPCASE and doing your own argv parsing.

As usual there are pros and cons. Setting decc$argv_parse_style is not
enough; your process has to be in ODS-5 parse style mode, aka
/parse=extended. On the other hand you need a CLD file and a set
command, but then lib$get_foreign will give you the command line (after
symbol substitution) with cases preserved, no matter what the process
parse mode or decc$argv_parse_style is.

No question, this area is longing for some improvements. The main
players are the C RTL and DCL. The image activator was and is not part
of the problem. Whether it can help is a different question. To me it
seems it is time to create a new, additional C RTL, one which is as
Posix/Unix/... compliant as possible: in other words, no more decc$
features (and no new vsic$ features :-). And it seems that DCL should be
enhanced to provide the command line as is, after symbol substitution,
or even raw - on request.

That reminds me of another DCL feature I would like to see, a P0 symbol,
showing the file of the command procedure as entered (with or without
the @-sign).
Stephen Hoffman
2015-02-06 14:23:51 UTC
Permalink
Post by Craig A. Berry
Hopefully extended parse will be always on in VMS 9.0 and we can finally
stop having to explain that sometimes you have to quote things to
preserve case and sometimes you don't.
Yes, it'd be nice — financial and revenue concerns aside — if the CRTL
setting swamp, the command-parsing swamp and the image activator
settings swamp were at least partially drained.
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2015-02-06 23:01:44 UTC
Permalink
I keep finding things that break with DECC$ARGV_PARSE_STYLE and keep having to turn it off (but still keeping parse mode extended)
Paul Sture
2015-02-05 19:19:41 UTC
Permalink
<snip>
"Just a long sequence of table lookups, list manipulation, tree
manipulation, etc. all strung together in just the right way. "
And not even a little to do with a simple understanding of language
concepts like grammar, syntax/semantics, and other stuff generally
no longer taught in school, not even in foreign language classes????
That'll do me for now anyway. Back to DCL, and its logical successor,
Python ;)
Now you're talk'n lol :-)
As much as I like Python as a way forward from DCL, I'm wondering
whether even Python is going to eventually go the way of the dodo?
Python in certain circles is starting to come under pressure from the
likes of Julia, as scripting languages used to glue all sorts of
things together are pushed to run faster and faster.
Cost drives everything in the long run (at least in my simple world)
and the likes of Julia is gaining momentum because it's being pitched
as a write-once language: it fulfils the role of a quick scripting
language but one that can scale up to be blindingly fast (beaten
consistently only really by Fortran, which is still king across the
board in terms of raw speed).
In the HPC world, the cost of scripting something in a purely
interpreted language and then having to code bits of it to run faster
is causing pain - hence how Julia has become the new kid on the block
and is being touted as a language to watch. This paragraph wasn't to
push Julia but to ask the larger question of where we should be
pitching any replacement language. What is the hammer we are looking
for and the nail we wish to hit?
I've had a look at Julia and can see it as targeting R (and MATLAB,
though I haven't used that). It certainly has some nice features for
HPC applications but I wonder if it would be overkill as a DCL
replacement.
"Write it once only" is gaining momentum which is how Julia is rising
in certain fields such as HPC (a field I believe VMS should migrate to
as it's a field that is emerging and one where VMS could compete in
because other OS's are only just moving towards Exa-scale and beyond
(especially with a revamped file system) - better than trying to
compete in already saturated markets IMO).
How far ahead of the curve do we want to push VMS in terms of a
scripting / cli language?
Since I'm of the belief VMS is in a do or die spiral then I cast my
vote on DCL's replacement as being 'out-there' as much as possible
while still being usable and functional and known by an existing base
of new future VMS people out there. Obviously the likes of Fortress
could have been good but it was abandoned and open sourced - it was
simply too far out there and ambitious and doesn't lend itself to a
CLI, I don't think. Anyhow I don't advocate investing in anything that
experimental, that just takes too long to bring to fruition and is a
money pit, something VMS/VSI cannot afford. Time to market is against
us
A more pragmatic approach might be to port Julia (or other future
contender) to VMS as an alternative to Python. I would see that
happening more in conjunction with the X86 port than for Itanium (the
nitty gritty of Julia/other product for that architecture already being
done, and the chance of volume sales that X86 hopefully brings).
I've been doing some reading on how MS coded MSIL (CIL in its more
formal name). Very smart. They basically created a core language that
ended up being human readable byte like code where any (most) of their
languages could compile down to in the .net framework. Is this a path
VMS could take I wonder? A bit like a glorified OO expansion of the
VMS calling standard, for want of extreme over over over simplification
I'm still liking functionally based languages myself (including
Python) however as a replacement to DCL...
On a slightly skewed topic, could someone alleviate my ignorance here,
Why is it exactly that Windows can have so much stuff ported to it so
quickly whilst VMS languishes away? i.e. What is it that MS did right
with Windows that allows code to be ported to it much easier than VMS?
Rapid development tools make things easier I know but what is it at
the Os base level that doesn't get in people's way?
Steve Ballmer's cry of "Developers, Developers, Developers" played a
large part. Cheap MSDN subscriptions, etc etc.
I know there are a lot of coders out there in the MS world but a lot
of open source projects have been ported to windows even when the user
base for that ported software isn't going to be huge and yet people do
port stuff and fairly quickly too - surely they are being aided by
what exactly in the windows world that VMS doesn't have?
What is it we need to fix with VMS to get people to code on it, in
particular, porting stuff to it? and by implication, what scripting
language should we be looking at for VMS that will aid this?
Good question.
--
An invention needs to make sense in the world in which it's finished,
not the world in which it's started. -- Ray Kurzweil
David Froble
2015-02-05 20:09:01 UTC
Permalink
Post by Paul Sture
On a slightly skewed topic, could someone alleviate my ignorance here,
Why is it exactly that Windows can have so much stuff ported to it so
quickly whilst VMS languishes away? i.e. What is it that MS did right
with Windows that allows code to be ported to it much easier than VMS?
Rapid development tools make things easier I know but what is it at
the Os base level that doesn't get in people's way?
I'll take a shot at this question.

Actually, what was the lure of weendoze, and MS-DOS before weendoze came
out? It was cheap / affordable hardware. DEC considered commercial and
such customers as its target. Back in 1990 or so the DEC systems were
not cheap / affordable.

Possibly some would get a DEC system. But many more would get a PC.
Post by Paul Sture
Steve Ballmer's cry of "Developers, Developers, Developers" played a
large part. Cheap MSDN subscriptions, etc etc.
So yeah, with this, the numbers of potential developers in each camp
were on the side of cheap / affordable.

Sometimes quantity is a quality all on its own ....
Post by Paul Sture
I know there are a lot of coders out there in the MS world but a lot
of open source projects have been ported to windows even when the user
base for that ported software isn't going to be huge and yet people do
port stuff and fairly quickly too - surely they are being aided by
what exactly in the windows world that VMS doesn't have?
Don't ask me. Ask Bill and John. They are deep into porting stuff to
VMS. However, some things depend on advances that exist on other
platforms, and with the unwanted stepchild method HP has treated VMS,
VMS doesn't have some of these advances.

Now, VMS has always had a great development environment. Lately it's a
bit tarnished by the "unwanted stepchild" handling by HP. But there is
plenty of good development environments on VMS. There is also the
common calling standard (well except for C) which allows pieces to be
written in whatever language suits each particular piece.
Post by Paul Sture
What is it we need to fix with VMS to get people to code on it, in
particular, porting stuff to it? and by implication, what scripting
language should we be looking at for VMS that will aid this?
For me, I'd rather see stuff coded on VMS instead of ported. But, hey,
if it works, then just work it, and if it ain't broke, don't fix it.

Also for me, I'm not so much into scripting stuff. To each his own ..
Stephen Hoffman
2015-02-05 21:35:15 UTC
Permalink
Post by Paul Sture
Why is it exactly that Windows can have so much stuff ported to it so
quickly whilst VMS languishes away? i.e. What is it that MS did right
with Windows that allows code to be ported to it much easier than VMS?
Rapid development tools make things easier I know but what is it at
the Os base level that doesn't get in people's way?
Steve Ballmer's cry of "Developers, Developers, Developers" played a
large part. Cheap MSDN subscriptions, etc etc.
DEC encouraged folks on OpenVMS to move to Windows NT back in the
1990s, with their Windows NT Affinity efforts. Seems like that
Windows migration was probably the right move for many folks, too.
Well, unless you were DEC, but that's most of twenty years ago; that's
ancient history.

More recently, having more than a billion Windows systems makes for an
expansive and skilled and very well-established user base, and
Microsoft and partners are diligently moving that installed base
forward with new features and tools.

But if you're even wondering about this stuff and why development is
easier and ports are easier, then please go try Windows, OS X or
Linux...
Post by Paul Sture
I know there are a lot of coders out there in the MS world but a lot
of open source projects have been ported to windows even when the user
base for that ported software isn't going to be huge and yet people do
port stuff and fairly quickly too - surely they are being aided by
what exactly in the windows world that VMS doesn't have?
Other than Redmond having an installed base of over a billion users,
and all of the advantages that such scale entails?
Post by Paul Sture
What is it we need to fix with VMS to get people to code on it, in
particular, porting stuff to it? and by implication, what scripting
language should we be looking at for VMS that will aid this?
The technical details and suggestions have been discussed here, at
least once or twice.

The finances are somewhat simpler: until there are financial advantages
to customers and software developers around deploying applications on
OpenVMS, few folks outside the existing installed base will
particularly care about OpenVMS.

VSI has to get to stable and preferably to increasing revenues with
"Bolton", too. That's for their own finances. Everything else is
secondary.

VSI then gets to figure out potential answers to your question to go
try. How to get more applications and more deployments. Assuming
VSI do see sufficient revenue, VSI likely isn't going to be able to do
a whole lot of new work within OpenVMS over the next five years or so,
either. Some new work will occur, certainly. But they're going to be
pretty busy with compilers and patches and hardware support and
customer support and getting the business going, and then there's that
little side-project of the x86-64 port. VSI also needs to figure out
how to get more folks over to OpenVMS, whether that's adding specific
features or dropping prices, or partnerships, or something else. If
VSI was aggressive and could make the revenues work, then maybe free
OpenVMS, free volume shadowing, free clustering, free
development-related products, and sustaining the VSI business on
commercial-support and customizations and related revenues. Now if
VSI ends up swimming in cash after the "Bolton" release and/or x86-64
port arrives much more quickly than I'd expect, that's another
discussion.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2015-02-05 23:14:09 UTC
Permalink
Post by Paul Sture
Why is it exactly that Windows can have so much stuff ported to it so
quickly whilst VMS languishes away? i.e. What is it that MS did right
with Windows that allows code to be ported to it much easier than VMS?
Rapid development tools make things easier I know but what is it at
the Os base level that doesn't get in people's way?
Steve Ballmer's cry of "Developers, Developers, Developers" played a
large part. Cheap MSDN subscriptions, etc etc.
DEC encouraged folks on OpenVMS to move to Windows NT back in the 1990s,
with their Windows NT Affinity efforts. Seems like that Windows
migration was probably the right move for many folks, too. Well,
unless you were DEC, but that's most of twenty years ago; that's ancient
history.
More recently, having more than a billion Windows systems makes for an
expansive and skilled and very well-established user base, and Microsoft
and partners are diligently moving that installed base forward with new
features and tools.
But if you're even wondering about this stuff and why development is
easier and ports are easier, then please go try Windows, OS X or Linux...
Post by Paul Sture
I know there are a lot of coders out there in the MS world but a lot
of open source projects have been ported to windows even when the user
base for that ported software isn't going to be huge and yet people do
port stuff and fairly quickly too - surely they are being aided by
what exactly in the windows world that VMS doesn't have?
Other than Redmond having an installed base of over a billion users, and
all of the advantages that such scale entails?
Post by Paul Sture
What is it we need to fix with VMS to get people to code on it, in
particular, porting stuff to it? and by implication, what scripting
language should we be looking at for VMS that will aid this?
The technical details and suggestions have been discussed here, at least
once or twice.
The finances are somewhat simpler: until there are financial advantages
to customers and software developers around deploying applications on
OpenVMS, few folks outside the existing installed base will particularly
care about OpenVMS.
I'll talk briefly about a current event.

We developed a service for a specific customer. It was always in the
plans, but this customer moved it forward a bit.

Basically, it is a service, using socket communications, to accept
customer orders. The application package is targeted at wholesale
distribution.

Now, the communications have not been optimized. I got a few things I
want to do about that. However, an average stock inquiry is taking .1
second, and an average order is taking .8 second. After the service
went live, a bunch of other customers learned about it, and now we got
many trading partners using the service.

What we hear more than anything else is "how do you handle so many
transactions, and so fast?"

:-)

I attribute it to not using a bunch of bloated middleware, which has way
too much overhead. Actually, I don't know if we're all that much faster
for such operations, but the trading partners are all impressed. I can
also wonder how much running on VMS has to do with it. At this time I
cannot know if VMS has anything to do with things.

Regardless, here is a highly successful operation running on VMS. It
could be pitched at potential customers as "use VMS if you want good
real world performance".
j***@gmail.com
2015-02-06 00:00:21 UTC
Permalink
Post by David Froble
I attribute it to not using a bunch of bloated middleware, which has way
too much overhead.
Yes very likely this. The less bloatware, the better.
Post by David Froble
[...] I can also wonder how much running on VMS has to do with it.
At this time I cannot know if VMS has anything to do with things.
This part probably very little. It depends on where the bottlenecks are in
the 100/800 ms it's taking you to handle inquiries/transactions. It's likely
some of this time is lost in the TCP/IP stacks on VMS, which are consistently
worse than their modern counterparts on Linux or elsewhere. And of course,
a modern x86 is significantly faster than Tukwila. But perhaps you are just
waiting for I/O, in which case it's neither an OS nor a CPU thing.

EJ
David Froble
2015-02-06 03:03:20 UTC
Permalink
Post by j***@gmail.com
Post by David Froble
I attribute it to not using a bunch of bloated middleware, which has way
too much overhead.
Yes very likely this. The less bloatware, the better.
Post by David Froble
[...] I can also wonder how much running on VMS has to do with it.
At this time I cannot know if VMS has anything to do with things.
This part probably very little. It depends on where the bottlenecks are in
the 100/800 ms it's taking you to handle inquiries/transactions. It's likely
some of this time is lost in the TCP/IP stacks on VMS, which are consistently
worse than their modern counterparts on Linux or elsewhere. And of course,
a modern x86 is significantly faster than Tukwila. But perhaps you are just
waiting for I/O, in which case it's neither an OS nor a CPU thing.
EJ
There is I/O involved, but with much of the data files cached, it's
pretty quick. Got to love the memory available on today's systems.

I don't think it is a network issue; the actual messages aren't very
verbose. However, there is some processing involved. Significant
application work. It's not just taking the messages; it is actually
posting the orders into the data files.

This is VMS. It's doing more than one thing. It's running all company
operations. Not office stuff. It's doing it well, with no problems
anywhere.
Phillip Helbig (undress to reply)
2015-02-05 21:04:28 UTC
Permalink
If you want a true Cobol precompiler I can just think of "Oracle Oracle"
or "Oracle Rdb".
The latter is what I thought of.
If I remember correctly, you could download and run
Rdb for free once, at least for "hobbyist" use.
I don't think that was ever possible. One can download and run it for
free as a DEVELOPER, i.e. for developing a (presumably commercial)
application. I don't think there was ever a hobbyist option. I don't
know about educational options. Another thing I know nothing about is
license transfer.
Craig A. Berry
2015-02-07 15:19:01 UTC
Permalink
Perhaps somebody will go look at the C#/VB compilers in Roslyn?
https://github.com/dotnet/roslyn
It appears that those compilers run within Visual Studio and target the
CLR. They are also mostly implemented in C# so there is a bit of a
bootstrapping problem. I think I've seen reports that the core CLR, JIT
and so forth will eventually appear as part of the corefx project
(<https://github.com/dotnet/corefx>) but aren't there yet.
They released the core CLR this week:

<https://github.com/dotnet/coreclr>

Ports to Linux and Mac OS X are just beginning. There is a dependency on
CMake (<http://www.cmake.org>). I see no IA64 targets, so getting this to
run on VMS near-term would involve major code-generation work. Mono
appears to still have its IA64 targets, so it might be a better bet.
Paul Sture
2015-02-08 09:44:42 UTC
Permalink
On 2015-01-31, Simon Clubley
I didn't know any of those, but with my new more detailed Javascript
knowledge I am now not surprised one little bit.
My quick dabble in Ruby indicated that he only scratched the surface.
There's a common theme here though, which is that Javascript, Ruby and
PHP provide a platform where beginners can see results quickly and then
build on that, and this is why they have become so popular.
This is a conclusion I have already come to and software reliability
suffers as a result.
Agreed.
In fairness to Javascript, while it's a problem within Javascript, it's
not a Javascript specific problem.
But the lack of other numeric types is a Javascript problem.
Oops, you are correct. :-)
I was thinking in terms of the floating point datatype itself and wasn't
thinking about Javascript's use of only this datatype.
I believe you do need a higher-level language for the types of applications
Javascript is used for, but the more I learn about Javascript, the more
I understand its problems. Unfortunately, in a number of cases (such
as writing Firefox addons) there isn't really any other choice.
Javascript unexpectedly cropped up last week as an alternative to
AppleScript for OS X Workflow Automation.

This flavour does appear to support integers and there's an
Objective-C bridge which looks interesting.
--
An invention needs to make sense in the world in which it's finished,
not the world in which it's started. -- Ray Kurzweil
Paul Sture
2015-02-08 09:48:26 UTC
Permalink
Post by Paul Sture
Javascript unexpectedly cropped up last week as an alternative to
AppleScript for OS X Workflow Automation.
This flavour does appear to support integers and there's an
Objective-C bridge which looks interesting.
Rats, forgot the URL:

<https://developer.apple.com/library/mac/releasenotes/InterapplicationCommunication/RN-JavaScriptForAutomation/index.html>
--
An invention needs to make sense in the world in which it's finished,
not the world in which it's started. -- Ray Kurzweil
Doug Gordon
2015-02-09 17:44:28 UTC
Permalink
ICC from Python. There's a thought that will keep me up at night.

--Doug
(One of the original ICC implementers. Note: *not* one of the ICC designers.)
I see Jan-Erik has also posted a queue example in Python which does
pretty much the same thing. If I read that code correctly, it appears
to populate an object in a similar way to the one I am suggesting.
Simon.
I was just thinking of commenting your OO-DCL example code. :-)
Yes, the Python implementation uses the OO model that Python
is built upon. I have moved the Python file that contains the
queue definitions and functions to:
http://jescab2.dyndns.org/pub_docs/queues.py
so you can study how it is implemented. That file is what is
read by the first command in my example, "from vms import queues".
That "loads" the queue definitions and functions into Python.
As you can see, the higher-level functions are built on the
interface to getquiw() that is included in the "starlet"
part of the VMS module for Python.
The "starlet" module has quite a few of the common system
services implented and directly usable from Python. Doing
import vms.starlet
help (vms-starlet) gives a list of functions directly
icc_accept(...)
icc_close_assoc(...)
icc_connectw(...)
icc_disconnectw(...)
icc_open_assoc(...)
icc_receivew(...)
icc_reject(...)
icc_replyw(...)
icc_transceivew(...)
icc_transmitw(...)
In many cases, higher-level functions have been created, like
the ones I used in my example to read queue and job info.
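To give a feel for that layering, here is a rough sketch in plain Python of a higher-level helper built on a getquiw()-style call. The (status, value) return shape, the item names, and the stub data are assumptions for illustration only; the real vms.starlet.getquiw() interface on VMS differs in its details.

```python
# Sketch of layering a Pythonic helper over a $GETQUI-style service call.
# "getquiw_stub" stands in for vms.starlet.getquiw so the wrapper's shape
# can be shown off-platform; its (status, value) return is an assumption.

SS_NORMAL = 1  # VMS success status value

def getquiw_stub(function, item):
    """Stand-in for the low-level queue-info call: returns (status, value)."""
    fake_queue = {"QUEUE_NAME": "SYS$BATCH", "JOB_LIMIT": 4}
    if item in fake_queue:
        return SS_NORMAL, fake_queue[item]
    return 0x38090, None  # a non-success status, as from an unknown item code

class QueueError(Exception):
    """Raised when the underlying service call does not return success."""

def get_queue_item(item, getquiw=getquiw_stub):
    """Higher-level helper: return the value, or raise on a bad status."""
    status, value = getquiw("DISPLAY_QUEUE", item)
    if status != SS_NORMAL:
        raise QueueError(f"queue-info call failed, status {status:#x}")
    return value

print(get_queue_item("QUEUE_NAME"))   # → SYS$BATCH
```

The point is the split: the low-level call keeps the VMS convention of returning a status, and the higher-level function turns a bad status into a Python exception so scripts read naturally.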
Jan-Erik.
Stephen Hoffman
2015-02-09 18:22:42 UTC
Permalink
Post by Doug Gordon
ICC from Python. There's a thought that will keep me up at night.
So no plans to support PyInfoServer? {ducks}
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2015-02-10 00:35:01 UTC
Permalink
Post by Doug Gordon
ICC from Python. There's a thought that will keep me up at night.
--Doug
(One of the original ICC implementers. Note: *not* one of the ICC designers.)
Relax, you can do it from Perl as well:

<http://search.cpan.org/~dsugal/vms-icc-0_02/icc.pm>
g***@gmail.com
2015-02-16 14:51:28 UTC
Permalink
This is a long thread so apologies if this has been mentioned but why does DCL treat a completely unknown command as a warning?

For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
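That $STATUS value itself shows the severity: VMS condition codes carry the severity in their low three bits (0 = warning, 1 = success, 2 = error, 3 = informational, 4 = severe). A quick check, sketched in Python:

```python
# VMS condition values keep the severity in bits <2:0>:
# 0 = warning, 1 = success, 2 = error, 3 = informational, 4 = severe.
def severity(status):
    names = {0: "warning", 1: "success", 2: "error",
             3: "informational", 4: "severe"}
    return names.get(status & 0b111, "reserved")

print(severity(0x00038090))   # the IVVERB status above → warning
```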

Also, could someone please implement an integer overflow on DCL's integers?

I really hope that VSI can turn the VMS situation around.

Bye for now,

Gerald.
I think it's time once again to build a list of DCL's flaws now that
VSI are around. Don't forget however that DCL is both a scripting
language and a UI. My initial list is below.
1) You can't edit commands across line boundaries. Even Windows CLIs
can do this.
2) You can't save your command history automatically, bash style, and
have it restored automatically at the next session.
With bash, you can have multiple shells active at the same time and
only the commands entered during a specific session will be added to
the history file when that session exits even though the shell has the
full command history from previous shells available (up to a user
defined limit).
Any implementation needs to think about the multiple shells active at
the same time issue before proposing a solution to this.
3) No filename completion. This is _really_ annoying especially since
it even exists (after a fashion) in the command prompt on current
versions of Windows.
4) No elegant incremental search through the command history (bash
Ctrl-R style).
1) No structured programming constructs such as (for example) while
loops.
2) No ability to iterate cleanly over a list of items (bash "for i in"
style)
3) No ability to add site specific lexicals.
4) The output from the lexicals should be an immutable collection of
objects which you can then iterate over without having to worry about
the state changing during iteration.
5) No regex support (this is also a UI issue).
6) Pathetic limits on the maximum size of symbol contents.
7) No array or collection of objects support. (In addition to normal
arrays, DCL should also support associative arrays.)
DCL has absolutely no way to group related variables together in the
way you can with structs in C or objects in other languages.
8) You cannot delete a directory tree in one go.
9) differences is very limited by today's standards. The functionality
   in GNU diff, with (for example) its ability to find differences in
   whole directory trees and produce patch files for the differences
   in an entire tree, should be the minimum baseline for functionality
   these days.
I also find the unified diff output to be a _lot_ more readable than
the output from the DCL differences command.
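For comparison, items 2) and 7) in the scripting list above are one-liners in the languages being discussed here; a Python sketch (queue names are just illustrative):

```python
# Item 2: clean iteration over a list of items (bash "for i in" style).
queues = ["SYS$BATCH", "SYS$PRINT", "FAST$BATCH"]
for q in queues:
    print(q)

# Item 7: an associative array, grouping related values by name.
job_limits = {"SYS$BATCH": 4, "SYS$PRINT": 1}
for name, limit in sorted(job_limits.items()):
    print(f"{name}: job limit {limit}")
```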
Simon.
--
Microsoft: Bringing you 1980s technology to a 21st century world
Simon Clubley
2015-02-16 17:52:23 UTC
Permalink
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does
DCL treat a completely unknown command as a warning?
No, this has not been mentioned and it's a _really_ good point.

This should be an error, not a warning.
Post by g***@gmail.com
For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
Also, could someone please implement an integer overflow on DCL's integers?
I'll admit I've never tried overflowing DCL integer variables (I've
always designed DCL command procedures which operate within the documented
limits) so this never occurred to me. I wouldn't mind seeing that as an
error as well.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2015-02-16 18:21:48 UTC
Permalink
Post by Simon Clubley
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does
DCL treat a completely unknown command as a warning?
No, this has not been mentioned and it's a _really_ good point.
This should be an error, not a warning.
Post by g***@gmail.com
For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
Also, could someone please implement an integer overflow on DCL's integers?
I'll admit I've never tried overflowing DCL integer variables (I've
always designed DCL command procedures which operate within the documented
limits) so this never occurred to me. I wouldn't mind seeing that as an
error as well.
That one is quite easy to achieve if you have a bit of DCL monitoring free
disk space and another department suddenly expands an NFS-mounted share
to a size far exceeding anything you have on your VMS systems.

But I think we covered disk free space as a percentage in an earlier wish
list.
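The free-space case is easy to reproduce on paper: DCL integers are signed 32-bit, so a block count past 2**31 - 1 wraps negative. A sketch of that wraparound in Python, assuming plain two's-complement truncation:

```python
# Model a signed 32-bit integer the way DCL stores its symbol values:
# keep the low 32 bits, then reinterpret them as two's complement.
def to_int32(n):
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

free_blocks = 3_000_000_000          # ~1.4 TB worth of 512-byte blocks
print(to_int32(free_blocks))         # → -1294967296, "negative" free space
```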
--
1972 - IBM begins development on its last tape drive (3480) ever because
of the declining cost of disk drives.
V***@SendSpamHere.ORG
2015-02-16 20:08:16 UTC
Permalink
Post by Simon Clubley
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does
DCL treat a completely unknown command as a warning?
No, this has not been mentioned and it's a _really_ good point.
This should be an error, not a warning.
Why?
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
David Froble
2015-02-16 20:29:52 UTC
Permalink
Post by Simon Clubley
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does
DCL treat a completely unknown command as a warning?
No, this has not been mentioned and it's a _really_ good point.
This should be an error, not a warning.
Why?
Yeah, that was sort of my feeling. You're told you have a problem,
isn't that enough?
Simon Clubley
2015-02-17 08:25:36 UTC
Permalink
Post by David Froble
Post by Simon Clubley
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does
DCL treat a completely unknown command as a warning?
No, this has not been mentioned and it's a _really_ good point.
This should be an error, not a warning.
Why?
Yeah, that was sort of my feeling. You're told you have a problem,
isn't that enough?
Not when that problem is buried within a command procedure.

It's a syntax error and it should bring the command procedure to a halt.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2015-02-17 16:18:23 UTC
Permalink
Post by Simon Clubley
Not when that problem is buried within a command procedure.
It's a syntax error and it should bring the command procedure to a halt.
The IVVERB severity can't be changed without yet another compatibility
mode setting somewhere, if the longstanding goal of installed-based
compatibility is to be maintained.

This is why I keep pointing to making fundamental shift and fundamental
breaks with the past as the most likely approach for moving parts of
VMS forward; toward targeted overhauls, and toward migrations to new
environments and new tools, and toward the deprecation of and the
removal of the old.

This is why I point to breaking specific parts of VMS, and deprecating
the older parts, and then removing the older parts. Deprecation and
removal won't be popular, of course. Which means that if VSI is going
to change things here (incompatibly), then they're going to have to
look at breaking compatibility in the areas where that makes the most
sense, and they'll want to look at a larger set of changes or a larger
overhaul, and to provide updates and enhancements — maybe numerous
fixes, wholesale UTF-8 support, and no need to special-case extended
filename parsing, for instance — that'll cause existing DCL users to
want to move forward, and to provide customers with a schedule and a
migration that will reduce the number of existing VMS users that will
object to the deprecation of the older syntax. Note: reduce.
Definitely not eliminate.

If you're going to decide to break application upward-compatibility, do
so with reason and thought, and break everything related all at once
and provide a got-to-have-that replacement, and — to avoid having to
support the old code forever — a migration path and a deprecation
schedule. Dribbling out a series of isolated and smaller corrections
and incompatible fixes such as this one is typically very unpopular, or
it's slow and complex and creates a more complex mess over time.
Fixing or replacing DCL is going to be a particularly difficult goal to
achieve, given how endemic and how embedded DCL is.

VMS users have been trained to expect upward compatibility, which is
both wonderful and also quite unfortunate. Compatibility is
increasingly expensive to provide, whether in terms of the engineering
effort, the increasing interface complexity — all those C RTL mode
logical names should give you the flavor of the mess and the complexity
that this compatibility inevitably creates — and it also means that the
older and potentially insecure code is still in use around the
applications and in the VMS user base, and still triggering problem
reports and support requests and (potentially, depending on the
particular code involved) crashes or security breaches.

VMS used to be simple. One look at all the different parsing and
formatting and compatibility modes and run-time settings and commands
and logical names — run-time settings stored in logical names
themselves being a pretty hideous hack — should tell you that VMS isn't
simple anymore. This complexity inevitably arises from incremental
fixes for cases such as what this IVVERB case would lead to if fixed in
isolation; from the choice and the goal to maintain long-term
application compatibility over simplicity and supportability and
quicker forward progress.

Breaking compatibility will perturb existing VMS users, and will be
unpopular. Particularly in areas such as overhauling or replacing DCL,
if VSI should undertake that effort. But it's the only way to fix
VMS, to simplify VMS, and to haul it forward — without drowning in the
swamp that is compatibility. (But then I don't see VSI doing any of
this — not wholesale replacements, nor overhauls, nor altering IVVERB —
at least until after they can get a revenue stream and a release
cadence and marketing and a support presence established. They're
going to continue with the old and outdated strategies here, at least
initially. Only then might VSI decide to start to break stuff, and to
provide long-term support releases — places folks can stay on, and a
schedule for upgrades — for those that can't stay on the bleeding edge.)

Make no mistake, the goal is to get folks to buy and stay on support
and to upgrade to more current versions. I seriously doubt that the
revenues from the folks that want or need to stay on ancient releases
or ancient APIs or tools would be particularly worth what they cost to
support — not any more. Move them forward, sell them updates and
upgrades and support, and have at least a little of the New Shiny
available here. Same as what happened back when VAX/VMS V4.0 was the
New Shiny, and caused folks to really want to upgrade off V3.0 to get —
for instance — command-line editing. Fixes for IVVERB? Not so much.
--
Pure Personal Opinion | HoffmanLabs LLC
abrsvc
2015-02-17 17:00:34 UTC
Permalink
I would agree here. While Digital and VMS have been known for compatibility, there comes a time where the move to more modern systems must take place. I recall the transition from V3 to V4. It was not the most pleasant time for those of us in the field. Once the transition was complete however, people couldn't understand how they survived at the older version.

I would expect a certain amount of grief at the transition, but it is a relatively short period of time. Perhaps this is a revenue opportunity for VSI: Update DCL to something more modern and provide transitional support services to assist those sites without programming support personnel.

Dan
m***@gmail.com
2015-02-17 21:43:52 UTC
Permalink
Post by abrsvc
I would agree here. While Digital and VMS have been known for compatibility, there comes a time where the move to more modern systems must take place. I recall the transition from V3 to V4. It was not the most pleasant time for those of us in the field. Once the transition was complete however, people couldn't understand how they survived at the older version.
I would expect a certain amount of grief at the transition, but it is a relatively short period of time. Perhaps this is a revenue opportunity for VSI: Update DCL to something more modern and provide transitional support services to assist those sites without programming support personnel.
Dan
How many operating systems and JCL's has IBM had in the last 38 years? How many iterations of Windows, of Unix ...? Users coped, but don't take that to be endorsement of the "big bang" approach. I'd like to think VMS engineers were a little more professional and more aware of the consequences, especially when VMS systems can run for years without a reboot.

I'd like two options.
(a) DCL as it is and with limited enhancements in future.
(b) a new alternative (VCL?) that is less worried about being compatible with DCL as it was about 35 years ago (or more)

I guess the simple decision-maker is "Will this change break some old DCL?" and if the answer is Yes, DCL isn't changed but VCL gets the new functionality.

How long the two remain available in parallel is a business decision. Maybe it's just five years while sites transition, but maybe it's longer.

Maybe they can be switched in and out, e.g. $ SET DCL_PARSING OLD (or NEW), but that's up to others to decide.
Stephen Hoffman
2015-02-17 22:25:30 UTC
Permalink
Post by m***@gmail.com
Post by abrsvc
I would agree here. While Digital and VMS have been known for
compatibility, there comes a time where the move to more modern systems
must take place. I recall the transition from V3 to V4. It was not
the most pleasant time for those of us in the field. Once the
transition was complete however, people couldn't understand how they
survived at the older version.
I would expect a certain amount of grief at the transition, but it is a
relatively short period of time. Perhaps this is a revenue opportunity
for VSI: Update DCL to something more modern and provide transitional
support services to assist those sites without programming support
personnel.
Dan
How many operating systems and JCL's has IBM had in the last 38 years?
How many iterations of Windows, of Unix ...? Users coped, but don't
take that to be endorsement of the "big bang" approach. I'd like to
think VMS engineers were a little more professional and more aware of
the consequences, especially when VMS systems can run for years without
a reboot.
Running years without a reboot is not a feature. It's a basic,
fundamental, utter and complete failure of local management to maintain
systems. This is not something to brag about. It's something that
should be strongly discouraged, prior to the as-yet-unavailable
KSplice-like capabilities <https://www.ksplice.com>; the ability to
hot-patch updates. The cluster rolling upgrade is the closest analog
to KSplice within VMS, and that's very much dependent on the price of
clustering, the familiarity programmers have with using clustering, and
— preferably — vastly better frameworks and tools for making use of
clustering within applications. This includes APIs for online backups,
online file conversions, and particularly for getting security and
stability patches at least notified and staged (obligatory comment:
default to staged and notified, and optionally installed.)
Post by m***@gmail.com
I'd like two options.
(a) DCL as it is and with limited enhancements in future.
And which gets gone on some schedule, preferably on some schedule and
with a migration. Otherwise there's more than a little pressure to
maintain and update the old cruft.
Post by m***@gmail.com
(b) a new alternative (VCL?) that is less worried about being
compatible with DCL as it was about 35 years ago (or more)
I guess the simple decision-maker is "Will this change break some old
DCL?" and if the answer is Yes, DCL isn't changed but VCL gets the new
functionality.
How long the two remain available in parallel is a business decision.
Maybe it's just five years while sites transition, but maybe it's
longer.
Maybe they can be switched in and out, e.g. $ SET DCL_PARSING OLD (or
NEW), but that's up to others to decide.
Please, no. Get rid of the damned complexity. Adding more modes and
more settings and more options and more knobs and a few dozen system
parameters and a passel of logical names and j-random configuration
file and a conversion tool and of course an export and upgrade utility
that runs once and switches over from the old format to some new
application-specific configuration file format which then cannot be
parsed for validity and correctness and which tends to silently fail
is.... Wonderful. Not. Sure, it looks simple and probably shaves a
few cycles off somebody's development schedule, and it sells the user
and the developers and the system managers down the road, and leaves
folks with a pile of old code that must also be tested, and all with no
way to get rid of it.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2015-02-18 02:15:55 UTC
Permalink
-----Original Message-----
Stephen Hoffman
Sent: 17-Feb-15 5:26 PM
Subject: Re: [New Info-vax] DCL's flaws (both scripting and UI)
Post by m***@gmail.com
Post by abrsvc
I would agree here. While Digital and VMS have been known for
compatibility, there comes a time where the move to more modern systems
must take place. I recall the transition from V3 to V4. It was not
the most pleasant time for those of us in the field. Once the
transition was complete however, people couldn't understand how they
survived at the older version.
I would expect a certain amount of grief at the transition, but it is a
relatively short period of time. Perhaps this is a revenue opportunity
for VSI: Update DCL to something more modern and provide transitional
support services to assist those sites without programming support
personnel.
Dan
How many operating systems and JCL's has IBM had in the last 38 years?
How many iterations of Windows, of Unix ...? Users coped, but don't
take that to be endorsement of the "big bang" approach. I'd like to
think VMS engineers were a little more professional and more aware of
the consequences, especially when VMS systems can run for years without
a reboot.
Running years without a reboot is not a feature. It's a basic,
fundamental, utter and complete failure of local management to maintain
systems. This is not something to brag about. It's something that
should be strongly discouraged, prior to the as-yet-unavailable
KSplice-like capabilities <https://www.ksplice.com>; the ability to
hot-patch updates. The cluster rolling upgrade is the closest analog
to KSplice within VMS, and that's very much dependent on the price of
clustering, the familiarity programmers have with using clustering, and
— preferably — vastly better frameworks and tools for making use of
clustering within applications. This includes APIs for online backups,
online file conversions, and particularly for getting security and
stability patches at least notified and staged (obligatory comment:
default to staged and notified, and optionally installed.)
Yes and no.

Yes, it's good to reboot systems occasionally (once a quarter is all one
should really need) to ensure all works as expected. A system crash
in prime time is not the time that one discovers an error in a start-up
procedure.

No, it's not good to reboot systems if it impacts application availability
in any way that matters to the business. If a reboot can be scheduled
without impacting the business, then fine.

With the right planning, OpenVMS clusters allow one to proactively
reboot systems with zero application availability impact
(albeit with a few seconds of cluster transition).

The real strength of OpenVMS's stability is that there are a lot fewer
reasons to have to reboot than on commodity systems. Issues can be
resolved online (errorlog/dump analysis as contrasted with ctrl-alt-reboot),
process quotas can be adjusted dynamically (avail mgr), and there's no
need to worry about the 10-40+ security patches released each and every
month for commodity OSes, etc.

[snip..]

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit do
j***@yahoo.co.uk
2015-02-16 22:18:54 UTC
Permalink
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does DCL treat a completely unknown command as a warning?
For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
Also, could someone please implement an integer overflow on DCL's integers?
I really hope that VSI can turn the VMS situation around.
Bye for now,
Gerald.
I think it's time once again to build a list of DCL's flaws now that
VSI are around. Don't forget however that DCL is both a scripting
language and a UI. My initial list is below.
1) You can't edit commands across line boundaries. Even Windows CLIs
can do this.
2) You can't save your command history automatically, bash style, and
have it restored automatically at the next session.
With bash, you can have multiple shells active at the same time and
only the commands entered during a specific session will be added to
the history file when that session exits even though the shell has the
full command history from previous shells available (up to a user
defined limit).
Any implementation needs to think about the multiple shells active at
the same time issue before proposing a solution to this.
3) No filename completion. This is _really_ annoying especially since
it even exists (after a fashion) in the command prompt on current
versions of Windows.
4) No elegant incremental search through the command history (bash
Ctrl-R style).
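For reference, the bash session-history behaviour described in point 2 comes down to a handful of shell settings; a minimal sketch (the limits shown are illustrative, not anything DCL-specific):

```shell
# The bash per-session history behaviour described in point 2,
# reduced to its settings. Each interactive shell keeps its own
# in-memory history; with 'histappend' set, it appends only its
# own commands to the shared file on exit instead of overwriting.
HISTFILE=~/.bash_history    # shared on-disk history file
HISTSIZE=1000               # in-memory limit per session
HISTFILESIZE=2000           # on-disk limit (the user-defined cap)
shopt -s histappend         # append on exit; don't clobber the file
```

Any DCL implementation of the same feature would need equivalents of all four knobs, which is why the multiple-sessions point matters.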
1) No structured programming constructs such as (for example) while
loops.
2) No ability to iterate cleanly over a list of items (bash "for i in"
style)
3) No ability to add site specific lexicals.
4) The output from the lexicals should be an immutable collection of
objects which you can then iterate over without having to worry about
the state changing during iteration.
5) No regex support (this is also a UI issue).
6) Pathetic limits on the maximum size of symbol contents.
7) No array or collection of objects support. (In addition to normal
arrays, DCL should also support associative arrays.)
DCL has absolutely no way to group related variables together in the
way you can with structs in C or objects in other languages.
8) You cannot delete a directory tree in one go.
9) differences is very limited by today's standards. The functionality
in GNU diff, with (for example) its ability to find differences across
whole directory trees and produce patch files for an entire tree,
should be the minimum baseline for functionality these days.
I also find the unified diff output to be a _lot_ more readable than
the output from the DCL differences command.
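For comparison, the bash constructs that scripting points 1, 2, and 7 above hold up as the baseline look like this (the item names are made up):

```shell
#!/bin/bash
# The bash constructs the list above treats as table stakes:
# structured loops, clean list iteration, and associative arrays.

# 2) iterate cleanly over a list of items ("for i in" style)
for f in alpha.txt beta.txt gamma.txt; do
    echo "item: $f"
done

# 1) a structured while loop
n=0
while [ "$n" -lt 3 ]; do
    n=$((n + 1))
done
echo "count: $n"

# 7) an associative array (bash 4+)
declare -A owner
owner[alpha]="simon"
echo "owner of alpha: ${owner[alpha]}"
```

None of these have direct DCL equivalents; the closest DCL gets is GOTO-based loops and symbol-name arithmetic tricks.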
Simon.
--
Microsoft: Bringing you 1980s technology to a 21st century world
For the avoidance of doubt: are you saying you want DCL to do
something different than the behaviour documented in the quote below
(behaviour which has been around since - err, I forget, but the extract
is from VMS 7.3-1 docs)

If you do want something different, what might it be?

From the OpenVMS User's Manual
http://h71000.www7.hp.com/doc/731final/6489/6489pro_033.html#bottom_033

[quote]
When you enter a command verb that is not a DCL symbol and that is not
in the DCL command tables, the system usually displays the following
message:


DCL-W-IVVERB, unrecognized command verb - check validity and spelling

However, if the logical name DCL$PATH is defined (and is not blank),
DCL instead performs an RMS $SEARCH for any file that contains the
invalid verb in its file name and DCL$PATH:.* as the default file
specification.

If DCL finds a .COM or .EXE file, DCL will automatically execute
that file with the rest of the command line as its parameters. (This
behavior is similar to the PATH options found in DOS, UNIX, and other
operating systems.)

[endquote]

As you'd expect, there's lots more info in the docs; this is officially
referred to as "automatic foreign commands".
Simon Clubley
2015-02-17 08:28:42 UTC
Permalink
Post by j***@yahoo.co.uk
For the avoidance of doubt: are you saying you want DCL to do
something different than the behaviour documented in the quote below
(behaviour which has been around since - err, I forget, but the extract
is from VMS 7.3-1 docs)
Not speaking for Gerald, but I would expect the DCL$PATH lookup to
continue as normal and _then_ an IVVERB status to be emitted if
unsuccessful. Making that an error instead of a warning would
not affect current DCL$PATH behaviour and would make command
procedures more robust.
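The Unix-shell comparison point: an unrecognized command there is a hard failure (exit status 127), and `set -e` turns it into an immediate abort, which is the kind of robustness being asked of IVVERB. A small sketch (the verb name is deliberately nonsense):

```shell
#!/bin/bash
# In a Unix shell an unrecognized command is a hard failure:
# exit status 127, and under 'set -e' the script stops dead.
no_such_verb_xyzzy 2>/dev/null    # nonsense verb, like $ FRED
status=$?
echo "exit status: $status"       # 127: command not found

set -e                            # from here on, errors are fatal
# no_such_verb_xyzzy              # uncommented, the script would
                                  # abort here instead of limping on
echo "still running"
```

A DCL command procedure, by contrast, sails past an IVVERB warning unless it runs with $ SET ON and the severity is error or worse.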

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Phillip Helbig (undress to reply)
2015-02-17 21:21:29 UTC
Permalink
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does DCL treat a completely unknown command as a warning?
For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
A similar gripe: if a label, or subroutine name, is not found, it's just
a warning.
Norm Raphael
2015-02-17 23:45:51 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by g***@gmail.com
This is a long thread so apologies if this has been mentioned but why does DCL treat a completely unknown command as a warning?
For instance,
$ fred
%DCL-W-IVVERB, unrecognized command verb - check validity and spelling
\FRED\
$ sho sym $status
$STATUS == "%X00038090"
A similar gripe: if a label, or subroutine name, is not found, it's just
a warning.
I don't have a dog in this hunt, but ISTM these must needs be caught during
test and debug and never make it into a released-to-production status if any
sane checking is in place, so it really does not matter if W or E and so can
be left alone without any harm. My gripe has always been with getting that
warning during, say, an execution of VMSINSTAL.COM or AUTOGEN.COM
with no idea what was messed up. (It usually turned out to be an optional "/LOG"
appended by the option switch to a DCL verb before a qualifier required by
extension. The testing there was obviously incomplete.)

Norman F. Raphael
"Everything worthwhile eventually
degenerates into real work." -Murphy
Stephen Davies
2020-09-10 13:58:55 UTC
Permalink
1) You can't edit commands across line boundaries. Even Windows CLIs
can do this.
It has been explained in the past that the problem was this stuff was
implemented in the terminal driver.
Sorry for replying to such an old post but this limitation is *so* annoying
that I don't think we should just forget about it. So even though this has
probably already been considered, I wondered if, assuming that the terminal
driver cannot handle long lines split across DECterm rows, perhaps it
could simply scroll the bottom row of the DECterm?
Jan-Erik Söderholm
2020-09-10 14:04:34 UTC
Permalink
Post by Stephen Davies
1) You can't edit commands across line boundaries. Even Windows CLIs
can do this.
It has been explained in the past that the problem was this stuff was
implemented in the terminal driver.
Sorry for replying to such an old post but this limitation is *so* annoying
that I don't think we should just forget about it. So even though this has
probably already been considered, I wondered if, assuming that the terminal
driver cannot handle long lines split across DECterm rows, perhaps it
could simply scroll the bottom row of the DECterm?
Use a wide terminal window and put commands longer than that in a
small COM file. It is just us few admins that are affected anyway,
so I can't see this as a major issue for OpenVMS as a whole...
David Jones
2020-09-10 15:11:13 UTC
Permalink
It has been explained in the past that the problem was this stuff was
implemented in the terminal driver.
I heard once that Xyplex was started by some ex-DEC engineers who had a
philosophical disagreement over the LAT approach to terminal servers. Having
the driver turn the QIO into an RPC to the Xyplex server was great for offloading
the 11/780 from character interrupts, but tracking the functionality of the terminal
class driver in the Xyplex firmware eventually became untenable.

Having a library that takes over command line editing and allows editing long
lines is certainly doable (almost certainly requiring a discrete $QIO for every
keypress), but no one is going to invest the required effort.

I certainly wish I had long line editing for my ad hoc SQLite queries.
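On the Unix side, this is exactly the gap GNU readline fills at the application level; bash exposes it through `read -e`. A sketch (the query and database name are made up, and the here-string only keeps the sketch runnable non-interactively; drop it for real use):

```shell
#!/bin/bash
# Application-level line editing, Unix style: GNU readline does
# the work instead of the terminal driver. 'read -e' gives full
# editing (arrows, Ctrl-R history search, long lines wrapping
# across rows) at a single prompt.
read -e -r -p "sql> " query <<< "select count(*) from albums;"
echo "would run: $query"
# The zero-code alternative: wrap any line-oriented tool in
# rlwrap, e.g.   rlwrap sqlite3 mydb.db
```

The discrete-read-per-keypress cost the post mentions is hidden inside readline here; the application still pays it, just behind a library boundary.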
David Jones
2020-10-15 03:57:29 UTC
Permalink
Post by David Jones
Having a library that takes over command line editing and allows editing long
lines is certainly doable (almost certainly requiring a discrete $QIO for every
keypress), but no one is going to invest the required effort.
I've been investigating what it takes to do multi-row editing at the application
level and it's pretty fiddly. Just to make it easier to keep track of where the
cursor is, you temporarily set the terminal /nowrap and do all your reads with
noecho. You also intercept broadcasts via a mailbox so you can display them and
cleanly refresh the input line (with prompt). If you're using SMG for device
independence, you encounter inexplicable things like Ctrl-X getting mapped to
Ctrl-U, so you can't have one mean 'clear input buffer' and the other mean
'erase back to start of current line'.
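For what it's worth, the Unix version of the same per-keypress loop is just as fiddly in miniature; a rough sketch (broadcast trapping via a mailbox has no direct Unix equivalent):

```shell
#!/bin/bash
# Rough Unix analogue of the per-keypress loop described above:
# echo off, byte-at-a-time reads, and the application redraws the
# input line itself.
if [ -t 0 ]; then                     # only touch a real terminal
    saved=$(stty -g)                  # save settings
    trap 'stty "$saved"' EXIT         # always restore them
    stty -echo -icanon min 1 time 0   # no echo, one byte per read
fi

line=""
while IFS= read -r -n1 ch; do
    if [ -z "$ch" ]; then break; fi   # Enter shows up as empty
    line="$line$ch"
    printf '\r%s' "$line"             # redraw the whole line
done
printf '\nyou typed: %s\n' "$line"
```

The redraw-the-whole-line step is the part that gets genuinely hard once the input spans multiple rows, which is the original complaint.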

Stephen Hoffman
2020-09-10 15:54:31 UTC
Permalink
So even though this has probably already been considered, I wondered
if, assuming that the terminal driver cannot handle long lines split
across DECterm rows, perhaps it could simply scroll the bottom row of
the DECterm?
The current serial terminal connection paths into "recent" OpenVMS are
(in no particular order) DECnet CTERM, OSI VT, ssh, telnet, iLO/DRAC,
LAT, legacy I/O console DB9 serial ports, USB, the VT52 "Glass
TTY", pseudo-terminals, and, yes, DECterm.

Prolly a few others I've missed.

PCI, PCI-X, and PCIe serial controllers do exist and might be used in a
few spots, though I've seen approximately none of those installed in
~20 years.

Lacking telemetry, the number of these serial ports in use, and of
whatever other potential ports might exist, is guesswork.

Suggestions around overhauling the terminal driver have been...
technically unpopular, as the class-and-port design has ~40 years of
accumulated knowledge of and work-arounds for existing and
formerly-existing hardware.

There also isn't yet a supported-hardware list for VSI OpenVMS x86-64,
and some changes in supported hardware are to be expected.

Replacing the terminal driver would be a fair-sized project, and the
work will probably break some existing apps. Compatibility and all.

A new terminal driver would hopefully fix a few existing messes and
existing gaps too, such as the BG/TN mess, and the ssh mess, the lack
of UTF-8, and piping and flow-control "fun" in general.

And there are potential wrinkles awaiting, including the various sorts
of pseudo terminals, and printing and symbionts, and the SMG and DCL
interfaces into the terminal driver, that'd all have to be looked at
and tested.

For most of us, setting the session terminal windows wide is the
workaround. I routinely run with ~200-character terminal widths. And
yes, the results are poor as compared with what editing is possible on
most other platforms.

Log some formal feedback with the folks at VSI, etc.
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Davies
2020-09-10 16:14:05 UTC
Permalink
Post by Stephen Hoffman
So even though this has probably already been considered, I wondered
if, assuming that the terminal driver cannot handles long lines split
across DECterm rows, perhaps it could simply scroll the bottom row of
the DECterm?
Replacing the terminal driver would be a fair-sized project, and the
work will probably break some existing apps. Compatibility and all.
I had hoped that compatibility problems could be avoided by adding
a new terminal characteristic, e.g. "set /terminal /long_edit".
Dave Froble
2020-09-10 16:38:51 UTC
Permalink
Post by Stephen Davies
Post by Stephen Hoffman
So even though this has probably already been considered, I wondered
if, assuming that the terminal driver cannot handle long lines split
across DECterm rows, perhaps it could simply scroll the bottom row of
the DECterm?
Replacing the terminal driver would be a fair-sized project, and the
work will probably break some existing apps. Compatibility and all.
I had hoped that compatibility problems could be avoided by adding
a new terminal characteristic, e.g. "set /terminal /long_edit".
From some perspectives, terminals are so last century. Regardless, I
do use them.

However, one of the more popular user interfaces today is a browser. If
VSI is going to devote any efforts into a user interface, one would
think there would be more user benefit in a GUI or browser interface.

I've read too many times some of the reasons nobody wants to get into
the terminal driver. That makes me think that it's just not going to
happen.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Phillip Helbig (undress to reply)
2020-09-10 17:40:49 UTC
Permalink
Post by Dave Froble
However, one of the more popular user interfaces today is a browser. If
VSI is going to devote any efforts into a user interface, one would
think there would be more user benefit in a GUI or browser interface.
Of course, it would be nice to have a reasonably modern graphical
browser which runs on VMS. :-D
Stephane Tougard
2020-09-14 08:55:01 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Of course, it would be nice to have a reasonably modern graphical
browser which runs on VMS. :-D
links ?

The version 2 show pictures ...
Craig A. Berry
2020-09-10 19:20:43 UTC
Permalink
However, one of the more popular user interfaces today is a browser.  If
VSI is going to devote any efforts into a user interface, one would
think there would be more user benefit in a GUI or browser interface.
Like maybe the new WebUI management tool they released last week?
Craig A. Berry
2020-09-10 19:22:30 UTC
Permalink
Post by Craig A. Berry
Post by Dave Froble
However, one of the more popular user interfaces today is a browser.
If VSI is going to devote any efforts into a user interface, one would
think there would be more user benefit in a GUI or browser interface.
Like maybe the new WebUI management tool they released last week?
And it is on their web site (couldn't find it a minute ago):

<https://vmssoftware.com/products/webui/>
David Hittner
2020-09-10 20:31:50 UTC
Permalink
Actually, a "Web-based Terminal" could be a desirable feature, as it could allow direct access to VMS DCL without an SSH client.
Craig A. Berry
2020-09-10 20:37:43 UTC
Permalink
Post by David Hittner
Actually, a "Web-based Terminal" could be a desirable feature, as it could allow direct access to VMS DCL without an SSH client.
Put "DCLInABox" into your favorite search engine.
Jan-Erik Söderholm
2020-09-10 21:49:56 UTC
Permalink
Post by Craig A. Berry
Post by David Hittner
Actually, a "Web-based Terminal" could be a desirable feature, as it
could allow direct access to VMS DCL without an SSH client.
Put "DCLInABox" into your favorite search engine.
I have looked at that to get rid of the VT emulators, but it is
just VT100 and our apps need more than F1-F4. I haven't looked
at the code to see if it could be extended to add the F5-F12 key codes...
Mark Daniel
2020-09-10 23:10:21 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Craig A. Berry
Post by David Hittner
Actually, a "Web-based Terminal" could be a desirable feature, as it
could allow direct access to VMS DCL without an SSH client.
Put "DCLInABox" into your favorite search engine.
I have looked at that to get rid of the VT emulators, but it is
just VT100 and our apps need more than F1-F4. I haven't looked
at the code to see if it could be extended to add the F5-F12 key codes...
Don't look if you are given to migraines :-|

And it's a /bit/ more than VT100, passing a fair proportion of the
relevant Dickey/Lindberg vttest suite.

https://invisible-island.net/vttest/vttest.html

Mapping of some keyboards to the extended keypad was worked on (quite
the rat's nest IIRC).

https://wasd.vsm.com.au/wasd_root/src/dclinabox/keypad.html

I might have explored F5-F12 too but don't have a "real" PC with "real"
10n keyboard to feel confident it might translate to the "real" world.
Phillip Helbig (undress to reply)
2020-09-11 08:03:02 UTC
Permalink
Post by David Hittner
Actually, a "Web-based Terminal" could be a desirable feature, as it
could allow direct access to VMS DCL without an SSH client.
It would probably be pretty easy to write a script for a webserver which
takes a command line as input and returns the output.
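A minimal CGI sketch of that idea (the parameter name is hypothetical, and running it as-is would be wildly unsafe: a real version needs authentication and a strict command whitelist):

```shell
#!/bin/bash
# Hypothetical CGI script: take a command from the query string,
# run it, and return its output as plain text. Unsafe as written;
# shown only to illustrate how little plumbing the idea needs.
echo "Content-Type: text/plain"
echo ""
cmd=${QUERY_STRING#cmd=}     # e.g. QUERY_STRING="cmd=show+system"
cmd=${cmd//+/ }              # crude '+' -> space decoding
# A real version would validate against a whitelist here.
eval "$cmd" 2>&1
```

The hard part is not the plumbing but the security model, which is presumably why DCLinABox and the WebUI went the full-terminal route instead.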
Jan-Erik Söderholm
2020-09-11 08:50:33 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by David Hittner
Actually, a "Web-based Terminal" could be a desirable feature, as it
could allow direct access to VMS DCL without an SSH client.
It would probably be pretty easy to write a script for a webserver which
takes a command line as input and returns the output.
Would have issues with MONITOR...

And there is at least one VT emulator written in JavaScript.

But yes, in the 80's I made a tool where I used the messaging
system of the time (MEMO, IBM MVS-based, with a gateway to SMTP)
to send "one-liners" to our VMS systems and return the result
as a "mail". So I could pick any 3270 terminal at any site/factory
and remote-control our local VMS boxes at home.
Phillip Helbig (undress to reply)
2020-09-11 09:34:34 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Phillip Helbig (undress to reply)
It would probably be pretty easy to write a script for a webserver which
takes a command line as input and returns the output.
Would have issues with MONITOR...
I'm sure that someone has a VAX online with MONITOR output visible over
http. :-)
Stephen Hoffman
2020-09-10 17:34:16 UTC
Permalink
Post by Stephen Hoffman
So even though this has probably already been considered, I wondered
if, assuming that the terminal driver cannot handle long lines split
across DECterm rows, perhaps it could simply scroll the bottom row of
the DECterm?
Replacing the terminal driver would be a fair-sized project, and the
work will probably break some existing apps. Compatibility and all.
I had hoped that compatibility problems could be avoided by adding a
new terminal characteristic, e.g. "set /terminal /long_edit".
TL;DR: https://groups.google.com/d/msg/comp.os.vms/2URw3twynok/8l8HqsaAIBAJ


Yet Another Qualifier with Yet Another Bad Initial Default is certainly
a common approach to upward-compatibility, yes.

That approach is a wonderful example of how an environment increases in
complexity and confusion and accrues poor defaults, but I'm in a polite
mood.

But I digress.

There's rather more to maintaining and editing several lines of data in
the terminal driver while running at IPL, and fitting the update or the
replacement into the existing run-time context.

If retrofitting line editing were easy and if development staff were
available for reworking and testing the replacement line-editing
project, it'd have been done.

Some reading, from a previous thread on this topic from the
then-maintainer of the terminal driver:
https://groups.google.com/d/msg/comp.os.vms/2URw3twynok/8l8HqsaAIBAJ

And IIRC, both SMG and DCL include stuff-it-back-into-the-driver "fun"
for multi-line editing with history, so that code would have to be
looked at and potentially remediated, too.

Tossing the 1980s-era terminal driver code would be the better
long-term solution, and would allow fixes for the various other issues
we've come to accept. But that wholesale replacement of the terminal
driver is a yet larger project, with yet larger risks.

But again, renovating and refactoring or wholesale replacing the
terminal driver is a whole lot of work, it'll invariably break some
existing apps, and there have always been other priorities.

Breaking apps? Yes, some user apps. Also some VSI apps and VSI OpenVMS
components, too.

Replacing the existing multi-byte processing with UTF-8 support likely
adds another pile of work elsewhere in OpenVMS and in the OpenVMS
language variants that use the existing multi-byte support, for
instance.

Would I like this terminal driver mess drained? Absolutely. But I don't
expect to see this happen, short of massive increases to the available
funding for and the available technical staff at VSI. And I'd expect to
break some apps.
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Davies
2020-09-11 09:09:31 UTC
Permalink
Post by Stephen Hoffman
I had hoped that compatibility problems could be avoided by adding a
new terminal characteristic, e.g. "set /terminal /long_edit".
Yet Another Qualifier with Yet Another Bad Initial Default is certainly
a common approach to upward-compatibility, yes.
Well it seems like a better option than only allowing the 'bad' behaviour.
Post by Stephen Hoffman
There's rather more to maintaining and editing several lines of data in
the terminal driver while running at IPL, and fitting the update or the
replacement into the existing run-time context.
Hence my suggestion to instead allow editing of long lines within a
single row of the DECterm.

Alas, from some of the responses it seems that the problem is actually
that the OS sources are so impenetrable that *no* changes can be made,
which is, to say the least, disappointing.
Dave Froble
2020-09-11 12:31:50 UTC
Permalink
Post by Stephen Davies
Post by Stephen Hoffman
I had hoped that compatibility problems could be avoided by adding a
new terminal characteristic, e.g. "set /terminal /long_edit".
Yet Another Qualifier with Yet Another Bad Initial Default is certainly
a common approach to upward-compatibility, yes.
Well it seems like a better option than only allowing the 'bad' behaviour.
Post by Stephen Hoffman
There's rather more to maintaining and editing several lines of data in
the terminal driver while running at IPL, and fitting the update or the
replacement into the existing run-time context.
Hence my suggestion to instead allow editing of long lines within a
single row of the DECterm.
Alas, from some of the responses it seems that the problem is actually
that the OS sources are so impenetrable that *no* changes can be made,
which is, to say the least, disappointing.
One of the critical issues is testing. For VSI to make modifications,
and then provide support, they must be able to perform testing.

Consider all the terminals used since 1978. I seriously doubt that VSI
has one of each for testing. I also believe that somewhere out there
some are still in use. Should some problem be introduced, VSI has
no way to test on the specific equipment. Much safer to "don't fix it".

And what would be the benefits? A few individuals who still see DCL?
Our users never see DCL.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Stephen Hoffman
2020-09-11 15:51:24 UTC
Permalink
Post by Stephen Davies
Post by Stephen Hoffman
I had hoped that compatibility problems could be avoided by adding a
new terminal characteristic, e.g. "set /terminal /long_edit".
Yet Another Qualifier with Yet Another Bad Initial Default is certainly
a common approach to upward-compatibility, yes.
Well it seems like a better option than only allowing the 'bad' behaviour.
Ever tried to write a cross-platform "portable" login script using the
morass that is the SET TERMINAL command? I have. Fun times. I tried to
fix a few terminal settings things in SYLOGIN.TEMPLATE years ago. To
solve bug reports. Poking at all the different terminal types, so that
some command didn't get directed at some controller or some terminal
device that wouldn't handle the command appropriately. The test suite
promptly (and rightfully) tossed errors, due to the changes in the
handling of console serial lines in the case that were caught out after
my changes. SET TERMINAL alone is a wonderful morass of corner cases,
across all the different sorts of terminals and terminal types and
terminal controllers. Pretty much all of the terminal handling is
gnarly, across the different hardware and different hardware settings
and emulator settings and emulator bugs. (If there's a bug in an
emulator that some editing-related implementation change then exposes,
you can bet that VSI will receive a bug report too, requesting whatever
change exposing the bug be reverted.)

This is also ignoring adding yet another qualifier that somebody needs to
summon, and which merely adds to the chaos and hostility of the whole
user interface. Ever-worse and ever-more-hostile user and app defaults
are a longstanding tradition, unfortunately.
Post by Stephen Davies
Post by Stephen Hoffman
There's rather more to maintaining and editing several lines of data in
the terminal driver while running at IPL, and fitting the update or the
replacement into the existing run-time context.
Hence my suggestion to instead allow editing of long lines within a
single row of the DECterm.
And that's how OpenVMS then ended up with line-editing support embedded
in the ssh, telnet, VT, and CTERM servers, among other places. And with
the requisite testing for each, both for function and for divergences.
Because ssh sessions are far more common with OpenVMS access than are
DECterm sessions.
Post by Stephen Davies
Alas, from some of the responses it seems that the problem is actually
that the OS sources are so impenetrable that *no* changes can be made,
which is, to say the least, disappointing.
An issue with fixing the terminal driver is decades of accreted app
dependencies, on both documented and undocumented features, and in this
case dependencies across a massive number of hardware controllers
various of which are no longer available, across various terminals and
terminal emulators and the individual settings and quirks of each,
mixed together with my favorite requirement for preventing positive
changes and better defaults, compatibility. Unless the old support and
the old features and the old testing can be deprecated and removed and
some code selectively—very selectively—refactored or removed or
replaced, there can be little progress on a number of issues.

As for your request... Fixing line editing is absolutely possible.
Replacing a pile of twisty Macro32 code with Rust or C or new source
code in some other language is absolutely possible. What's more
difficult is making those changes in a fashion that doesn't also blow
up some customer-critical app or customer-critical process somewhere.
The path probably involves refactoring or replacing the terminal driver
with code that deals with modern interfaces, and that drops support for
most of the existing hardware. Of replacing hunks of the test suite,
and adding new hunks, too. The alternative being what you propose, with
modular handling embedded into different app servers and/or into
different emulators. Which'll work, but now OpenVMS either has some
sort of common line editor made callable, or has multiple different
places that implement line editing, and potentially those areas then
diverging. Or sure, retrofitting this into DECterm, setting up the
controls and the checks that allow toggling the necessary
synchronization with the terminal driver settings, and adding this into
the other places that'll then be in demand.


With DECterm, using a wide session and enabling horizontal scrolling
within DECterm goes a fair distance toward establishing what you want.
(Discussed by Fred K. in an earlier thread, and linked up-thread
previously here.) Or rework one of the Xterm ports, for that matter.
And involves a whole lot less development effort, churn, and risk.

Would I like this terminal driver line-editing limit fixed? Absolutely.
I'd like to see the terminal driver dragged into this millennium, too.


TL;DR: NGDGU. "With a sufficient number of users of an API, it does not
matter what you promise in the contract: all observable behaviors of
your system will be depended on by somebody." —Hyrum's law. And for VSI
and other vendors of large and complex software apps, some number of
those folks with now-broken dependencies will log support calls.
Inevitably. Even if the dependency broken is considered undocumented or
unsupported, or (gasp) the dependency is fodder for deprecation and
replacement with a better or more secure or more maintainable approach.
--
Pure Personal Opinion | HoffmanLabs LLC
Jan-Erik Söderholm
2020-09-10 17:37:37 UTC
Permalink
Post by Stephen Davies
Post by Stephen Hoffman
So even though this has probably already been considered, I wondered
if, assuming that the terminal driver cannot handle long lines split
across DECterm rows, perhaps it could simply scroll the bottom row of
the DECterm?
Replacing the terminal driver would be a fair-sized project, and the
work will probably break some existing apps. Compatibility and all.
I had hoped that compatibility problems could be avoided by adding
a new terminal characteristic, e.g. "set /terminal /long_edit".
My view is that, if one routinely uses 132+ character line lengths in DCL,
one should reconsider how one works.

SQL was mentioned earlier. I use SQL a lot from DCL, of course. But
as soon as it becomes more complex than around 132 chars, it goes into
a small COM file I keep for just that use. Also much easier when the same
user comes back a month later and asks for that data extract again.
Simon Clubley
2020-09-11 12:28:05 UTC
Permalink
Post by Stephen Davies
Post by Stephen Hoffman
I had hoped that compatibility problems could be avoided by adding a
new terminal characteristic, e.g. "set /terminal /long_edit".
Yet Another Qualifier with Yet Another Bad Initial Default is certainly
a common approach to upward-compatibility, yes.
Well it seems like a better option than only allowing the 'bad' behaviour.
That requires both the old code and the new code to be present in the
kernel in order to maintain that backwards compatibility.
Post by Stephen Davies
Post by Stephen Hoffman
There's rather more to maintaining and editing several lines of data in
the terminal driver while running at IPL, and fitting the update or the
replacement into the existing run-time context.
Hence my suggestion to instead allowing editing of long lines within a
single row of the DECterm.
DECterm is only one (relatively minor) way of accessing DCL.

The majority of people are unlikely to be running DECterm.
Post by Stephen Davies
Alas, from some of the responses it seems that the problem is actually
that the OS sources are so impenetrable that *no* changes can be made,
which is, to say the least, disappointing.
That's the _real_ problem, unfortunately.

Someone thought they were being clever at the time but only ended up
creating a massive headache for future generations of VMS users.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Stephen Hoffman
2020-09-11 16:03:12 UTC
Permalink
Post by Simon Clubley
Someone thought they were being clever at the time but only ended up
creating a massive headache for future generations of VMS users.
The terminal driver line-editing design was a good choice and a good
trade-off for its time.

That change alone led to many folks really wanting to install the
VAX/VMS V4.0 upgrade.

That line editing worked when SET HOST to VAX/VMS V3.x systems was
useful to many.

Huge reason to upgrade to VAX/VMS V4.0.

Decades on, the expectations and assumptions and limits all tend to
change, while the dependencies on documented and undocumented behavior
accrue, and while the corpus of fixes and workarounds in the existing
code increases.

Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
--
Pure Personal Opinion | HoffmanLabs LLC
Scott Dorsey
2020-09-11 17:14:14 UTC
Permalink
Post by Stephen Hoffman
Post by Simon Clubley
Someone thought they were being clever at the time but only ended up
creating a massive headache for future generations of VMS users.
The terminal driver line-editing design was a good choice and a good
trade-off for its time.
When compared with MPE and Pr1mos it was incredibly elegant and advanced.
And they didn't even have typeahead!
Post by Stephen Hoffman
Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
The solution is to retain the old one for a while but also provide a new
terminal driver available as an option. Give it a few versions for users
to make sure their applications can be made to work with the new driver
before discarding the old one.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Craig A. Berry
2020-09-11 19:28:18 UTC
Permalink
Post by Scott Dorsey
Post by Stephen Hoffman
Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
The solution is to retain the old one for a while but also provide a new
terminal driver available as an option. Give it a few versions for users
to make sure their applications can be made to work with the new driver
before discarding the old one.
<https://xkcd.com/1172/>
Stephen Hoffman
2020-09-11 20:03:17 UTC
Permalink
Post by Craig A. Berry
Post by Scott Dorsey
Post by Stephen Hoffman
Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
The solution is to retain the old one for a while but also provide a
new terminal driver available as an option. Give it a few versions for
users to make sure their applications can be made to work with the new
driver before discarding the old one.
I'd like to see folks becoming accustomed to where that migration,
deprecation, and removal can and does happen, yes.

Within the terminal driver, and within a number of other areas that are
long overdue for overhaul, refactoring, retirement, and/or replacement.

This does require the replacement API provide a substantial
improvement, decent migration path, and preferably enough design
headroom to last a decade or more.

This so that we're not all revisiting the same source code again when
some yet-newer limit is reached or when some new mess is identified.

Selectively breaking the problematic among the existing APIs, and only
when and where that breakage is necessary and appropriate.

Albeit, periodic code re-reviews and code retirements and code removal
are not tasks that many developers have ever gotten particularly good
at.

If the developers even have the time and administrative support to
perform that work.
Post by Craig A. Berry
<https://xkcd.com/1172/>
<https://xkcd.com/2224/>
--
Pure Personal Opinion | HoffmanLabs LLC
Dave Froble
2020-09-11 22:36:35 UTC
Permalink
Post by Stephen Hoffman
Post by Craig A. Berry
Post by Scott Dorsey
Post by Stephen Hoffman
Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
The solution is to retain the old one for a while but also provide a
new terminal driver available as an option. Give it a few versions
for users to make sure their applications can be made to work with
the new driver before discarding the old one.
I'd like to see folks becoming accustomed to where that migration,
deprecation, and removal can and does happen, yes.
Within the terminal driver, and within a number of other areas that are
long overdue for overhaul, refactoring, retirement, and/or replacement.
This does require the replacement API provide a substantial improvement,
decent migration path, and preferably enough design headroom to last a
decade or more.
This so that we're not all revisiting the same source code again when
some yet-newer limit is reached or when some new mess is identified.
Selectively breaking the problematic among the existing APIs, and only
when and where that breakage is necessary and appropriate.
Albeit, periodic code re-reviews and code retirements and code removal
are not tasks that many developers have ever gotten particularly good at.
If the developers even have the time and administrative support to
perform that work.
Post by Craig A. Berry
<https://xkcd.com/1172/>
<https://xkcd.com/2224/>
I suggest a poll or survey ....

1) Who thinks this modification should be made?

2) Of those, who thinks it's an essential mod?

3) Of those, who is a paying customer?

:-)
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Jan-Erik Söderholm
2020-09-12 10:15:37 UTC
Permalink
Post by Dave Froble
Post by Stephen Hoffman
Post by Craig A. Berry
Post by Scott Dorsey
Post by Stephen Hoffman
Compatibility then constrains the permissible changes, revisions,
remediations, and refactoring. Which increases the
complexity and the cost of the changes made. And that compatibility
makes developers risk-averse. All understandably. All expected.
The solution is to retain the old one for a while but also provide a
new terminal driver available as an option.  Give it a few versions
for users to make sure their applications can be made to work with
the new driver before discarding the old one.
I'd like to see folks becoming accustomed to where that migration,
deprecation, and removal can and does happen, yes.
Within the terminal driver, and within a number of other areas that are
long overdue for overhaul, refactoring, retirement, and/or replacement.
This does require the replacement API provide a substantial improvement,
decent migration path, and preferably enough design headroom to last a
decade or more.
This so that we're not all revisiting the same source code again when
some yet-newer limit is reached or when some new mess is identified.
Selectively breaking the problematic among the existing APIs, and only
when and where that breakage is necessary and appropriate.
Albeit, periodic code re-reviews and code retirements and code removal
are not tasks that many developers have ever gotten particularly good at.
If the developers even have the time and administrative support to
perform that work.
Post by Craig A. Berry
<https://xkcd.com/1172/>
<https://xkcd.com/2224/>
I suggest a poll or survey ....
1) Who thinks this modification should be made?
2) Of those, who thinks it's an essential mod?
3) Of those, who is a paying customer?
:-)
No, no and yes.

But I also expect a few "yes, yes and no".