Discussion:
How much of VMS is still in MACRO-32?
John Dallman
2021-05-29 17:58:00 UTC
Out of curiosity, I took a look at the documentation for the MACRO-32
compilers for Alpha and Itanium, and started reading the VAX MACRO
instruction set manual.

I now understand the 32/64-bit issues with VMS a bit better, but I'm
curious as to how much of the OS is still written in MACRO-32.

Obviously, this raises the problem of defining the OS, as separate from its
utility programs. Let's say "The stuff that's running once the OS has
booted and is ready to run applications, plus the programs that needed to
run to get it there."

John
Arne Vajhøj
2021-05-29 21:27:20 UTC
Post by John Dallman
Out of curiosity, I took a look at the documentation for the MACRO-32
compilers for Alpha and Itanium, and started reading the VAX MACRO
instruction set manual.
Very nice CISC ISA.
Post by John Dallman
I now understand the 32/64-bit issues with VMS a bit better, but I'm
curious as to how much of the OS is still written in MACRO-32.
30 years ago the word was that it was 1/3 Macro-32, 1/3 Bliss,
and 1/3 everything else.

Since then I suspect very little Macro-32 and Bliss has been
added but that some C has been added.
Post by John Dallman
Obviously, this raises the problem of defining the OS, as separate from its
utility programs. Let's say "The stuff that's running once the OS has
booted and is ready to run applications, plus the programs that needed to
run to get it there."
Why separate?

It is common in *nix world to distinguish between kernel and
userland.

But not sure that it makes much sense on VMS:
* both were developed by DEC
* both were developed in the same languages
* they have always been considered a single entity

The only difference I can see is that at least some of
the kernel code may be non-trivial to write in HLL,
while all userland code could be rewritten in C++ or Rust
(assuming Rust becomes supported on VMS).

And VSI would probably even like to do it if a few million
dollars dropped down from the sky.
With a limited budget (and limited developer resources)
they need to prioritize, and rewriting DCL and DCL
commands in C++/Rust just doesn't provide the short- and
mid-term benefits that would justify spending the money.

Arne
John Dallman
2021-05-30 09:00:00 UTC
Post by Arne Vajhøj
Why separate?
It is common in *nix world to distinguish between kernel and
userland.
* both were developed by DEC
* both were developed in the same languages
* they have always been considered a single entity
The reason for distinguishing, now that I've thought about it a bit more,
is that the kernel, some device drivers, the loader and so on need to be
able to deal with 64-bit addresses, memory above the 4GB line, and so on.
That isn't something that MACRO-32 does natively. In contrast, some of
the utility programs can probably remain 32-bit forever, so there's less
need to revise them.

John
Stephen Hoffman
2021-05-30 16:54:23 UTC
Post by John Dallman
The reason for distinguishing, now that I've thought about it a bit
more, is that the kernel, some device drivers, the loader and so on
need to be able to deal with 64-bit addresses, memory above the 4GB
line, and so on. That isn't something that MACRO-32 does natively. In
contrast, some of the utility programs can probably remain 32-bit
forever, so there's less need to revise them.
There's all sorts of design and particularly ugly API shenanigans to
allow 32-bit apps to work within 64-bit space, and the OpenVMS V7.0
64-bit design is great at that.

The existing design is great for existing apps and for incremental
changes to existing apps, but not so great for new work, nor for
substantial refactoring, nor for incrementally fixing busted APIs.

The Macro32 compilers all use 64-bit addresses internally, with sign
extension. As do all other apps on the 64-bit platforms.

With the Macro32 compilers, BASIC, C without the 64-bit knobs
twiddled in the compiler and in the linker, and other such, the code is
restricted to accessing S0, S1, P0, and P1 addresses absent "creative"
coding; the lowest 31 bits and highest 31 bits of 64-bit address space.

(qv: my previous rants on this topic, having experience using flat
64-bit apps and by-default 64-bit APIs and ABIs else-platform.)

But the kernel is also not source code that would be (or is) written in
assembler.

With very few exceptions, all new device driver work and kernel work
has been in C since before Y2K.

See the Step 2 driver manual doc in the 'net archives, from OpenVMS
Alpha V6.1, etc.

The uproar around migrations from 2GL to 3GL that was raging through
the 1980s and 1990s died out a while back.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2021-05-30 17:08:36 UTC
Post by John Dallman
Post by Arne Vajhøj
Why separate?
It is common in *nix world to distinguish between kernel and
userland.
* both were developed by DEC
* both were developed in the same languages
* they have always been considered a single entity
The reason for distinguishing, now that I've thought about it a bit more,
is that the kernel, some device drivers, the loader and so on need to be
able to deal with 64-bit addresses, memory above the 4GB line, and so on.
That isn't something that MACRO-32 does natively. In contrast, some of
the utility programs can probably remain 32-bit forever, so there's less
need to revise them.
Whatever changes were necessary to support 64-bit were done 30 years
ago for Alpha.

Arne
John Dallman
2021-05-30 20:03:00 UTC
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.

John
Arne Vajhøj
2021-05-30 23:32:21 UTC
Post by John Dallman
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.
SYS$FOOBAR may be written in Macro-32 and use 32-bit pointers.

But that does not prevent VSI from writing a SYS$FOOBAR64 in C that
uses 64-bit pointers.

That is the model used by VMS - not to change the existing system
service but to add a new one.

Arne
John Reagan
2021-05-31 02:40:33 UTC
Post by John Dallman
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.
John
All the Macro compilers (Alpha, Itanium, and x86) have 64-bit builtins beyond the
VAX instruction set. However, changing a field or argument involves touching every
single instruction that references it, as the operand size is part of the instruction.

As an example, PTEs on Alpha and Itanium have their interesting fields in the first
32 bits, so they've been easily managed with Macro-32 code. On x86, there are flags
and fields in the upper 32 bits of the quadword. A MOVL #<***@53>,(R0) doesn't do
what you'd hoped it would.

Widening arguments can be a little tricky, and all those string descriptors, itemlists, and
RMS data structures have 32-bit pointers in them. Yes, there are some 64-bit flavors
of itemlists and descriptors and even some RMS data structures (but even RAB64 didn't
widen ALL of the pointers), but switching involves touching lots of code again.
Dave Froble
2021-05-31 03:19:39 UTC
Post by John Reagan
Post by John Dallman
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.
John
All the Macro compilers (Alpha, Itanium, and x86) have 64-bit builtins beyond the
VAX instruction set. However, changing a field or argument involves touching every
single instruction that involves it as the size is part of the instruction.
As an example, PTEs on Alpha and Itanium have their interesting fields in the first
32-bits so they've been easily managed with Macro-32 code. On x86, there are flags
and fields in the upper 32-bits of the quadword. A MOVL #<***@53>,(R0) doesn't do
what you'd hoped it would
Widening arguments can be a little tricky, but all those string descriptors, itemlists, and
RMS data structures have 32-bit pointers in them. Yes, there are some 64-bit flavors
of itemlists and descriptors and even some RMS data structures (but even RAB64 didn't
widen ALL of the pointers) but switching involves touching lots of code again.
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Camiel Vanderhoeven
2021-05-31 08:07:01 UTC
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)

Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".

Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.

Camiel
Simon Clubley
2021-05-31 13:07:28 UTC
Post by Camiel Vanderhoeven
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)
Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".
Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
John Reagan
2021-05-31 15:52:22 UTC
Post by Simon Clubley
Post by Camiel Vanderhoeven
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)
Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".
Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Simon.
--
Walking destinations on a map are further away than they appear.
Not that I know of. EXE PSECTs are usually marked NOWRT. The linker now emits a message if it sees EXE,WRT in OBJ files, although such malformed PSECTs can be corrected in the linker options file.
David Jones
2021-05-31 18:06:45 UTC
Reply
Permalink
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Back in the VAX days, I had an image processing program that would take
a convolution matrix and image array size and return the address of a
dynamically created procedure to filter an input image. The matrix was
analyzed to eliminate the zero multiplies in the resulting instructions (I
can't remember if it optimized the 1.0 multiplies as well).

It wasn't exactly self-modifying, but the procedure was in read/write
memory.
abrsvc
2021-05-31 19:22:20 UTC
Post by David Jones
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Back in the VAX days, I had an image processing program that would take
a convolution matrix and image array size and return the address of a
dynamically created procedure to filter an input image. The matrix was
analyzed to eliminate the zero multiplies in the resulting instructions (I
can't remember if it optimized the 1.0 multiplies as well).
It wasn't exactly self-modifying, but the procedure was in read/write
memory.
There was a product that would generate VAX executable code on the fly in memory and execute it as well. IIRC it was from a company called Corvision.
I was involved with them during the port to Alpha as they were trying to determine whether or not to attempt the same methods on Alpha. I don't recall the outcome.
John Dallman
2021-05-31 20:47:00 UTC
Post by abrsvc
There was a product that would generate VAX executable code on the
fly in memory and execute it as well. IIRC it was from a company
called Corvision.
I was involved with them during the port to Alpha as they were
trying to determine whether or not to attempt the same methods on
Alpha. I don't recall the outcome.
If it was these guys <https://en.wikipedia.org/wiki/CorVision> it did
make it onto Alpha, but development ended with a Y2K fix pack.

John
Arne Vajhøj
2021-05-31 23:32:42 UTC
Post by abrsvc
There was a product that would generate VAX executable code on the
fly in memory and execute it as well. IRRC it was from a company
called Corvision. I was involved with them during the port to Alpha
as they were trying to determine whether or not to attempt the same
methods on Alpha. I don't recall the outcome.
I had (well - still have the code) some Macro-32 where one could
build a calculating formula as VAX instructions and execute it.
It worked fine. VAX was easy!!

Arne
Stephen Hoffman
2021-05-31 20:26:42 UTC
Is there any self-modifying code in VMS ? (I hope the answer to that is
no, BTW. :-))
There's DCL around that is self-modifying. Which is part of why
compiling DCL can be such "fun".

I'm not aware of self-modifying code or a JIT within OpenVMS itself,
though I'm a little murky on the full "creativity" of the debugger in
this context.

There is some related support present (INSTRUCTION_MB, EVAX_IMB, etc)
which certainly implies self-modifying code does exist.

I've written and have met self-modifying app code for OpenVMS. As have
others. Including Oracle Rdb, IIRC.

Met some app code that invoked a compiler and linker and then FIS'd
that code into the app, too. That was clunky and very limited, but
workable for that app.

There's Java and its JIT, of course.

There are various parts of OpenVMS where applying a JIT could or would
be useful.

Adding a JIT into DCL or into a replacement for DCL—any "fun" with
self-modifying DCL aside—would be an obvious investigation.

Should it ever appear within OpenVMS, BPF uses a JIT, too.
https://lwn.net/Articles/437981/
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2021-05-31 21:30:44 UTC
Post by Stephen Hoffman
There is some related support present (INSTRUCTION_MB, EVAX_IMB, etc)
which certainly implies self-modifying code does exist.
It implies that self-modifying code can be written. The main purpose of
these instructions is to ensure that the I-cache is updated after code
from the image was read/paged into memory and is about to be executed.
Stephen Hoffman
2021-05-31 21:44:47 UTC
Post by hb
Post by Stephen Hoffman
There is some related support present (INSTRUCTION_MB, EVAX_IMB, etc)
which certainly implies self-modifying code does exist.
It implies that self-modifying code can be written. The main purpose of
these instructions is to ensure that the I-cache is updated after code
from the image was read/paged into memory and is about to be executed.
And DEC et al would have gotten off its collective arse to create that
API if there wasn't a direct need?
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2021-05-31 23:23:39 UTC
Post by Stephen Hoffman
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
There's DCL around that is self-modifying. Which is part of why
compiling DCL can be such "fun".
I'm not aware of self-modifying code or a JIT within OpenVMS itself,
though I'm a little murky on the full "creativity" of the debugger in
this context.
There is some related support present (INSTRUCTION_MB, EVAX_IMB, etc)
which certainly implies self-modifying code does exist.
I've written and have met self-modifying app code for OpenVMS.  As have
others. Including Oracle Rdb, IIRC.
Met some app code that invoked a compiler and linker and then FIS'd that
code into the app, too. That was clunky and very limited, but workable
for that app.
There's Java and its JIT, of course.
It may be relevant to distinguish between:
A) dynamic code generation, where an application generates
new code and executes it
B) code that modifies itself, i.e. replaces some of its own code
with new code

#A probably has more legitimate uses than #B.

Arne
Arne Vajhøj
2021-05-31 23:25:44 UTC
Post by Arne Vajhøj
Post by Stephen Hoffman
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
There's DCL around that is self-modifying. Which is part of why
compiling DCL can be such "fun".
I'm not aware of self-modifying code or a JIT within OpenVMS itself,
though I'm a little murky on the full "creativity" of the debugger
in this context.
There is some related support present (INSTRUCTION_MB, EVAX_IMB, etc)
which certainly implies self-modifying code does exist.
I've written and have met self-modifying app code for OpenVMS.  As
have others. Including Oracle Rdb, IIRC.
Met some app code that invoked a compiler and linker and then FIS'd
that code into the app, too. That was clunky and very limited, but
workable for that app.
There's Java and its JIT, of course.
A) dynamic code generation, where an application generates
   new code and executes it
B) code that modifies itself, i.e. replaces some of its own code
   with new code
#A probably has more legitimate uses than #B.
Note that Java can actually generate code at two levels:
* it can generate Java byte code dynamically and execute it
* Java byte code gets JIT compiled to native code when the
JVM decides it is time

Arne
Jan-Erik Söderholm
2021-05-31 22:12:24 UTC
Post by Simon Clubley
Post by Camiel Vanderhoeven
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)
Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".
Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Simon.
In the OS? Don't know. In anything else running on VMS?
Yes, Rdb creates some machine code on-the-fly and runs that.
Lee Gleason
2021-06-01 00:43:44 UTC
Post by Simon Clubley
Post by Camiel Vanderhoeven
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)
Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".
Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Simon.
I haven't looked lately, but circa 4.7, lib$tparse contained
self-modifying code.

--
Lee K. Gleason N5ZMR
Control-G Consultants
***@comcast.net
John Reagan
2021-06-01 12:11:14 UTC
Post by Lee Gleason
Post by Simon Clubley
Post by Camiel Vanderhoeven
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce such
a tool?
SEARCH? :-)
Seriously, having the structure name repeated in each field name helps a lot with this. Consider a field simply called "id" in a structure "employee". "SEARCH *.* id" would likely turn up fields called "id" in various different structures, loose variables called "id", etc. Having that field called emp$l_id makes for a much more meaningful result from "SEARCH *.* emp$l_id".
Pinpointing references to an argument to a procedure written in Macro requires following the logic, as the argument may be stored on the stack, moved to a different register, etc.
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
Simon.
I haven't looked lately, but circa 4.7, lib$tparse contained self
modifying code.
--
Lee K. Gleason N5ZMR
Control-G Consultants
I'm going to disagree with you. I've spent lots of time recently in LIB$TPARSE/LIB$TABLE_PARSE.
It is boring BLISS code with ugly data structures. However, it is just another routine in LIBRTL.EXE.

When I read Simon's question about "self-modifying code", I took that as code that modifies itself
during program execution. It turned into "generating code on the fly". That's a different question.
I know of a few places that generate code on the fly but nothing in OS itself.

And the comments about the various EVAX_ builtins to update the i-cache for generating code on
the fly (and would-be self-modifying code) vary from target to target. Alpha has one set of instructions,
Itanium another, and x86 a different model still.
Simon Clubley
2021-06-01 12:15:22 UTC
Post by John Reagan
When I read Simon's question about "self-modifying code", I took that as code that modifies itself
during program execution. It turned into "generating code on the fly". That's a different question.
I know of a few places that generate code on the fly but nothing in OS itself.
John is correct. I was talking about executable code that modifies itself
during execution.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Camiel Vanderhoeven
2021-06-01 14:39:51 UTC
Post by John Reagan
When I read Simon's question about "self-modifying code", I took that as code that modifies itself
during program execution. It turned into "generating code on the fly". That's a different question.
I know of a few places that generate code on the fly but nothing in OS itself.
I do :-)

When execlets are loaded, the transfer vectors to call system services are generated on the fly.
John Reagan
2021-06-01 15:17:34 UTC
Post by Camiel Vanderhoeven
Post by John Reagan
When I read Simon's question about "self-modifying code", I took that as code that modifies itself
during program execution. It turned into "generating code on the fly". That's a different question.
I know of a few places that generate code on the fly but nothing in OS itself.
I do :-)
When execlets are loaded, the transfer vectors to call system services are generated on the fly.
I wasn't going to mention that "symbol vectors" on x86 are trampolines much like the VAX-era
transfer vectors. I consider this just a relocation that the image activator/exec-loader performs.

On Linux systems, you might even see PLT routines self-modify by doing a dlopen/dlsym on
their first execution, then modifying themselves to just jump to the target on subsequent executions.
We don't do that.
hb
2021-06-01 15:43:19 UTC
Post by John Reagan
Post by Camiel Vanderhoeven
Post by John Reagan
When I read Simon's question about "self-modifying code", I took that as code that modifies itself
during program execution. It turned into "generating code on the fly". That's a different question.
I know of a few places that generate code on the fly but nothing in OS itself.
I do :-)
When execlets are loaded, the transfer vectors to call system services are generated on the fly.
I wasn't going to mention that "symbol vectors" on x86 are trampolines much like the VAX-era
transfer vectors. I consider this just a relocation that the image activator/exec-loader performs.
On Linux systems, you might even see PLT routines also self-modify by doing a dlopen/dlsym in
their first execution, modify themselves to just jump to the target on subsequenct executions.
We don't do that.
FWIW, the loaded code, generated by the linker, is different from what
the loader writes: it is more than changing a (target) address.
Andrew Commons
2021-06-02 07:18:49 UTC
Post by John Reagan
I'm going to disagree with you. I've spent lots of time recently in LIB$TPARSE/LIB$TABLE_PARSE.
It is boring BLISS code with ugly data structures. However, it is just another routine in LIBRTL.EXE.
LIB$TPARSE also had a very small limit on the number of keywords (I think). I remember
hitting it and scouring the fiche for other code that used it to find out how to get
around it. I think NCP eventually provided a way of chaining things to get arbitrarily long
lists of keywords.
Michael Moroney
2021-06-02 00:33:01 UTC
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
There's self-modifying code on the CDC 6x00 systems -- IN THE HARDWARE.
You see, the procedure call instruction (RJ nnn) jumps to address nnn+1
(60 bits) and writes a branch to the instruction after the RJ
instruction at address nnn. To return from the procedure, branch to the
entry point nnn and execute the (branch) instruction written by the RJ
instruction.

Obviously this prevents recursive code or anything like that.

If I recall correctly, a register save/restore procedure (ab)used the RJ
instruction to write a bunch of self-modifying code.
Arne Vajhøj
2021-06-02 00:37:26 UTC
Post by Michael Moroney
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
There's self-modifying code on the CDC 6x00 systems -- IN THE HARDWARE.
You see, the procedure call instruction (RJ nnn) jumps to address nnn+1
(60 bits) and writes a branch to the instruction after the RJ
instruction at address nnn.  To return from the procedure, branch to the
entry point nnn and execute the (branch) instruction written by the RJ
instruction.
Obviously this prevents recursive code or anything like that.
If I recall correctly, a register save/restore procedure (ab)used the RJ
instruction to write a bunch of self-modifying code.
I remember that.

Pascal could do recursion.

Fortran could not.

Unless one wrote two small Compass routines to get and set the
saved address and managed that in a small array.

Arne
Andrew Commons
2021-06-02 07:13:27 UTC
Post by Arne Vajhøj
Post by Michael Moroney
Post by Simon Clubley
Is there any self-modifying code in VMS ? (I hope the answer to that
is no, BTW. :-))
There's self-modifying code on the CDC 6x00 systems -- IN THE HARDWARE.
You see, the procedure call instruction (RJ nnn) jumps to address nnn+1
(60 bits) and writes a branch to the instruction after the RJ
instruction at address nnn. To return from the procedure, branch to the
entry point nnn and execute the (branch) instruction written by the RJ
instruction.
Obviously this prevents recursive code or anything like that.
If I recall correctly, a register save/restore procedure (ab)used the RJ
instruction to write a bunch of self-modifying code.
I remember that.
Pascal could do recursion.
Fortran could not.
Unless one wrote two small Compass routines to get and set the
saved address and managed that in a small array.
Arne
The CDC 6X00 systems did not have a hardware stack pointer which left you with the RJ kludge.

I recall writing one small Compass routine that implemented and managed a return stack that
allowed me to have quasi-recursive Fortran routines.
Stephen Hoffman
2021-05-31 20:06:12 UTC
Post by Dave Froble
I'm curious. Do you have any tools that can pinpoint all references to
a particular field or argument? If not, did you attempt to produce
such a tool?
A combination of compiler symbol tables and linker maps can usually
spot symbol references, though pointer use can obviously bypass that.

At run-time, debugger watchpoints can be useful, though that can be
less than inclusive, assuming incomplete code coverage for testing.

When handed similar problems for longer-term work, maybe start by
creating APIs and then encapsulating the data and data access.

Which is a common path toward component replacements, and can be used
for platform migrations and app refactoring.

This retrofitting and refactoring work is an investment that can
require months or years to pay off, within any complex app.

Writing a refactoring-focused presentation for OpenVMS apps would be an
interesting project in itself.
--
Pure Personal Opinion | HoffmanLabs LLC
Marc Van Dyck
2021-06-01 15:27:34 UTC
I'm curious. Do you have any tools that can pinpoint all references to a
particular field or argument? If not, did you attempt to produce such a
tool?
A combination of compiler symbol tables and linker maps can usually spot
symbol references, though pointer use can obviously bypass that.
At run-time, debugger watchpoints can be useful, though that can be less than
inclusive, assuming incomplete code coverage for testing.
When handed similar problems for longer-term work, maybe start by creating
APIs and then encapsulating the data and data access.
Which is a common path toward component replacements, and can be used for
platform migrations; toward app refactoring.
This retrofitting and refactoring work is an investment that can require
months or years to pay off, within any complex app.
Writing a refactoring-focused presentation for OpenVMS apps would be an
interesting project in itself.
Don't you do that with Source Code Analyzer, for languages that support
it ?
--
Marc Van Dyck
Stephen Hoffman
2021-06-01 17:33:32 UTC
Don't you do that with Source Code Analyzer, for languages that support it ?
I usually use Xcode and Instruments and related tooling for that, oh,
wait, you meant OpenVMS. Never mind. I use DECset SCA and PCA only
rarely, as few sites have licenses for that. Which means using symbol
tables and maps, and the debugger, and preferably refactoring where
permitted.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2021-06-02 01:34:52 UTC
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without or you mangle your build procedures to make a static
version, which in turn makes the results of any performance analysis
somewhat suspect for drawing conclusions about the real-world application.
Arne Vajhøj
2021-06-02 13:08:06 UTC
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?

The latter may be good enough for many purposes.
Post by Craig A. Berry
or you mangle your build procedures to make a static
version, which in turn makes the results of any performance analysis
somewhat suspect for drawing conclusions about the real-world application.
It would be a hassle. And if it is external code then one may not even
be able to do it.

But why do you expect a big difference in result due to static
linking?

Arne
Jan-Erik Söderholm
2021-06-02 14:03:42 UTC
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
Hm... Isn't the code in shareable images mapped into local process space
and run just as if it had been in a locally loaded image? Does it make
any difference from the outside? I mean, if you don't start looking at
actual physical addresses where the code is loaded into memory.
Post by Arne Vajhøj
The latter may be good enough for many purposes.
Post by Craig A. Berry
                  or you mangle your build procedures to make a static
version, which in turn makes the results of any performance analysis
somewhat suspect for drawing conclusions about the real-world application.
It would be a hassle. And if it is external code then one may not even
be able to do it.
But why do you expect a big difference in result due to static
linking?
Arne
Arne Vajhøj
2021-06-02 14:31:28 UTC
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
Hm... Isn't the code in shareable images mapped into local process space
and run just as if it had been in a locally loaded image? Does it make
any difference from the outside? I mean, if you don't start looking at
actual physical addresses where the code is loaded into memory.
It was Craig that stated that it somehow acted differently.

I don't know why it would.

Maybe PCA does not distinguish between application shareable images
and VMS shareable images and it doesn't want to measure VMS shareable
images.

Maybe there is a different reason.

I have not used PCA since VAX.

Arne
hb
2021-06-02 15:58:08 UTC
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
Hm... Isn't the code in shareable images mapped into local process space
and run just as if it had been in a locally loaded image? Does it make
any difference from the outside? I mean, if you don't start looking at
actual physical addresses where the code is loaded into memory.
It was Craig that stated that it somehow acted differently.
I don't know why it would.
Maybe PCA does not distinguish between application shareable images
and VMS shareable images and it doesn't want to measure VMS shareable
images.
Maybe there is a different reason.
I have not used PCA since VAX.
Arne
Where the shareable image is mapped usually depends on if and how you
want to share code. Code of shareable images can be mapped into P0 but
also in S0/S1 or S2.

If a shareable image is installed with shared, resident code, it usually
was in S0/S1 (on x86 it usually is in S2). DECC$SHR is an example. I
don't know how PCA works, it may not work on shareable images where code
is shared.

Usually sharing of a shareable image can be avoided by telling the image
activator not to activate the installed image. This is done with a
logical name pointing to the image file, full file specification. A
DECC$SHR logical would point to SYS$SHARE:DECC$SHR.EXE;0. Then the CRTL
will be mapped to P0. This may be enough for PCA, but, as mentioned
above, I don't know.
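A minimal sketch of that logical name trick (MYAPP.EXE is a hypothetical
image linked against the CRTL):

```
$ DEFINE DECC$SHR SYS$SHARE:DECC$SHR.EXE;0
$ RUN MYAPP.EXE      ! DECC$SHR now activates as a private copy in P0
$ DEASSIGN DECC$SHR
```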
Craig A. Berry
2021-06-02 21:02:21 UTC
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
IIRC, it counts it all as being in the calling code.
Post by Arne Vajhøj
The latter may be good enough for many purposes.
Not when you have a tiny bootstrap program that loads a library to do
all the heavy lifting -- then it tells you that 99.9999% of the time was
taken by the one routine that loads the library. I believe this is a
fairly common architecture; it's certainly one way to make a package
that can be run standalone but also be embedded in other programs.
Post by Arne Vajhøj
Post by Craig A. Berry
                  or you mangle your build procedures to make a static
version, which in turn makes the results of any performance analysis
somewhat suspect for drawing conclusions about the real-world
application.
It would be a hassle. And if it is external code then one may not even
be able to do it.
And if things get loaded via LIB$FIS then you're going to have to change
program logic as well as build procedures to try to make a static image
that has everything that normally resides in potentially dozens or
hundreds of shareable images.
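(LIB$FIS here being LIB$FIND_IMAGE_SYMBOL. A rough, unverified C sketch
of the pattern -- the image and symbol names are hypothetical, and error
handling is elided:)

```
#include <descrip.h>
#include <lib$routines.h>

static $DESCRIPTOR(image_d, "MYSHR");        /* hypothetical shareable image */
static $DESCRIPTOR(symbol_d, "MYSHR_ENTRY"); /* hypothetical universal symbol */

int call_plugin(void)
{
    int (*entry)(void);
    int status = lib$find_image_symbol(&image_d, &symbol_d,
                                       (int *)&entry, 0, 0);
    if (!(status & 1))      /* low bit clear = activation/lookup failed */
        return status;
    return (*entry)();      /* call into the freshly activated image */
}
```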
Post by Arne Vajhøj
But why do you expect a big difference in result due to static
linking?
Possibly it wouldn't be today. I guess I had in mind that old post
about "A Day in the Life of the Image Activator." There is also the
chance that you'll run out of memory or pagefile, again less likely
today than yesteryear, but still possible.
Arne Vajhøj
2021-06-03 00:48:33 UTC
Post by Craig A. Berry
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger,
and  preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
IIRC, it counts it all as being in the calling code.
That makes sense.
Post by Craig A. Berry
Post by Arne Vajhøj
The latter may be good enough for many purposes.
Not when you have a tiny bootstrap program that loads a library to do
all the heavy lifting -- then it tells you that 99.9999% of the time was
taken by the one routine that loads the library. I believe this is a
fairly common architecture; it's certainly one way to make a package
that can be run standalone but also be embedded in other programs.
If you need your application to be both standalone and embeddable
then that is what you have to do.

I don't know how common it is.
Post by Craig A. Berry
Post by Arne Vajhøj
Post by Craig A. Berry
                  or you mangle your build procedures to make a static
version, which in turn makes the results of any performance analysis
somewhat suspect for drawing conclusions about the real-world application.
It would be a hassle. And if it is external code then one may not even
be able to do it.
And if things get loaded via LIB$FIS then you're going to have to change
program logic as well as build procedures to try to make a static image
that has everything that normally resides in potentially dozens or
hundreds of shareable images.
Yep.
Post by Craig A. Berry
Post by Arne Vajhøj
But why do you expect a big difference in result due to static
linking?
Possibly it wouldn't be today.  I guess I had in mind that old post
about "A Day in the Life of the Image Activator."  There is also the
chance that you'll run out of memory or pagefile, again less likely
today than yesteryear, but still possible.
Considering all the stuff that happens today, dynamic image
loading is probably not so bad. It must be way, way less
overhead than a JIT compiler.

Arne
Stephen Hoffman
2021-06-03 15:50:13 UTC
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
Post by Craig A. Berry
Don't you do that with Source Code Analyzer, for languages that support it ?
I use DECset SCA and PCA only  rarely, as few sites have licenses for
that. Which means using symbol tables and maps, and the debugger, and
preferably refactoring where permitted.
And as far as I remember PCA doesn't work on shareable images, which
means on any kind of application with a semi-modern architecture, you
either do without
Last I looked, DECset PCA does have support for shareable images, but
it has long reminded me of the debugger in that regard; of having to
treat the image pieces ~separately, and not as parts of the same app.

The debugger was long one of the still-remarkable pieces of the OpenVMS
development platform, but other platforms have reached parity with or
have surpassed the debugger in recent years.

macOS with Xcode and Instruments utterly blows away the DECset
performance-related and memory-related tooling, for instance.
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
Would it ignore the time spent in the shareable image or would it just
count it as being spent in the calling code?
IIRC, it counts it all as being in the calling code.
That makes sense.
It made sense in the last millennium on a VAX and with ~30 bits of
address space for apps and tooling. Now? Not so much.

Now? Treating shareable images separately is just dumb. Can you
symbolicate each piece? Show it. No symbols? Display what you can. And
get better at reversing executable images when symbols are unavailable.
q.v. Ghidra, for reversing.
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
The latter may be good enough for many purposes.
Not when you have a tiny bootstrap program that loads a library to do
all the heavy lifting -- then it tells you that 99.9999% of the time
was taken by the one routine that loads the library. I believe this is
a fairly common architecture; it's certainly one way to make a package
that can be run standalone but also be embedded in other programs.
If you need your application to be both standalone and embeddable then
that is what you have to do.
I don't know how common it is.
Having an app that can build both as standalone monolithic and as
shareable images is vanishingly rare. One of the very few apps around
that (sort of) does do this is SQLite.

Breaking up apps into callable hunks and into UI or networking or web
other app-specific pieces is ubiquitous, of course. Those hunks get one
or more object libraries, and the shareable images for deployment. Get
the shareable image hunks working separately, build test harnesses for
each hunk, and re-use the hunks across various executables, and update
the hunks as needed. These hunks of code can be created for various
purposes; to ease and isolate and modularize app development, for
porting to some new UI, or for porting code to another platform
entirely.

OpenVMS developer tooling needs help. What the Visual Studio Code IDE
provides is a start. More work is needed for the tooling, and for VSC.
It'd be interesting to see the changes and the rate of change, were VSI
to integrate with and encourage the use of VSC internally, too.
Downside of development tooling, of course: developers can have decades
of investments in a specific set of tooling, and changing tooling (from
edit-compile-link-debug-command-line, or from some other IDE) is no
small investment of time and focus and effort. Ah well. But I digress.
"Time to re-launch Xcode for this coding project", Hoff said swiftly.
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2021-06-01 18:06:04 UTC
Post by Marc Van Dyck
I'm curious. Do you have any tools that can pinpoint all references to a
particular field or argument? If not, did you attempt to produce such a
tool?
A combination of compiler symbol tables and linker maps can usually spot
symbol references, though pointer use can obviously bypass that.
At run-time, debugger watchpoints can be useful, though that can be less than
inclusive, assuming incomplete code coverage for testing.
When handed similar problems for longer-term work, maybe start by creating
APIs and then encapsulating the data and data access.
Which is a common path toward component replacements, and can be used for
platform migrations; toward app refactoring.
This retrofitting and refactoring work is an investment that can require
months or years to pay off, within any complex app.
Writing a refactoring-focused presentation for OpenVMS apps would be an
interesting project in itself.
Don't you do that with Source Code Analyzer, for languages that support
it ?
--
Marc Van Dyck
SCA can help but the Macro compiler doesn't support SCA (well, VAX Macro-32 has some support)
which is where Dave asked about searching Macro code for FOO$L_FIELD and changing the name
and opcode to some EVAX_ and FOO$Q_BIGGERFIELD combination.
John Dallman
2021-05-31 10:54:00 UTC
Post by John Reagan
All the Macro compilers (Alpha, Itanium, and x86) have 64-bit
builtins beyond the VAX instruction set. However, changing a
field or argument involves touching every single instruction
that involves it as the size is part of the instruction.
Makes sense.
Post by John Reagan
Widening arguments can be a little tricky, but all those string
descriptors, itemlists, and RMS data structures have 32-bit
pointers in them. Yes, there are some 64-bit flavors
of itemlists and descriptors and even some RMS data structures (but
even RAB64 didn't widen ALL of the pointers) but switching involves
touching lots of code again.
Got it, thanks.

John
Michael Moroney
2021-05-31 06:15:47 UTC
Post by John Dallman
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.
"Macro-32" is a bit misleading. The Macro compiler accepts a weird
mishmash of (32 bit) VAX instructions as well as (64 bit) pseudo-Alpha
instructions, the EVAX_xxx ones. The registers are mostly the 64 bit
Alpha registers. VAX instructions all follow rules for setting the upper
32 register bits, but are not restricted to VAX registers (ADDL3
R22,R19,R28 is perfectly acceptable). Funky things happen when
referring to AP, FP etc.

In many cases changes from 32 bit to 64 bit are necessary. The PTEs John
mentioned are a fine example. In that case the 32 bit VAX MOVL
instructions need to be changed to 64 bit EVAX_xxx.
"It would be nice" to convert the MACRO-32 modules to something else as
they need modification, but things like deadlines mean that the smaller
changes (like MOVL-->EVAX_xxx) are usually easier and faster.
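Roughly, with made-up operands (the real edits depend on the surrounding
code; EVAX_LDQ is the compiler built-in for the Alpha-style 64-bit load):

```
        MOVL     (R1),R0    ; before: 32-bit longword fetch of the PTE
        EVAX_LDQ R0,(R1)    ; after: full 64-bit quadword fetch via built-in
```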
Simon Clubley
2021-05-31 13:05:58 UTC
Post by John Dallman
Post by Arne Vajhøj
Whatever changes necessary to support 64 bit was done 30 years ago
for Alpha.
For the current 64-bit APIs, sure, but there are APIs that only take
32-bit addresses. Once I began to get to grips with the MACRO-32
compilers, it became plausible that one reason why there aren't 64-bit
versions of all APIs is that the interfaces are implemented in MACRO-32.
An additional problem is that Macro-32 is a supported application
programming language on VMS and the APIs have to reflect that.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Stephen Hoffman
2021-05-29 22:15:10 UTC
Post by John Dallman
Out of curiosity, I took a look at the documentation for the MACRO-32
compilers for Alpha and Itanium, and started reading the VAX MACRO
instruction set manual.
I now understand the 32/64-bit issues with VMS a bit better, but I'm
curious as to how much of the OS is still written in MACRO-32.
From the master pack line count I ran ~25 years ago, it was roughly
thirds between C, Bliss, and Macro32, with then-far-smaller chunks of
"other" code around.

This is OpenVMS itself. Not the OpenVMS builds, not the OpenVMS tests,
and definitely not the layered products.

Somewhat more than 64,000 modules in the 64-bit source pool back then,
and ~330 facilities (groups of modules), IIRC.

New work then was to be written in C per development policies, absent
other specific requirements, with updates and enhancements made to
existing Bliss and Macro32 code, and with few new modules.

That was not all that long after C acquired "system programming
language" status on OpenVMS, too.
Post by John Dallman
Obviously, this begs the problem of defining the OS, as separate from
its utility programs. Let's say "The stuff that's running once the OS
has booted and is ready to run applications, plus the programs that
needed to run to get it there."
Chunks of OpenVMS user-land code are written in Macro32, and a whole
lot of kernel code is written in C, so I'm not sure where you're headed
with your comments about Macro32.

Linux splits up userland and kernel. OpenVMS doesn't.

There are discussions of products produced paralleling their producing
organizations to be had here too, of course. But I digress.

Some few parts of the kernel do tend to be written in the
platform-native assembler; in Macro64 on Alpha, and IAS on Itanium, and
as-yet-unspecified AT&T and/or Intel syntax assembler on x86-64.

The mixture of the languages involved in the rest of the kernel and in
userland matter rather less, once the compilers are available.

Yes, C, Bliss, and Macro32 each have their issues, and there are
alternatives and debates to be had there.

Would VSI like to have everything all written in one language? Sure.
Fewer compilers to drag around. Fewer languages to learn. Easier
tooling.

But that existing source code is written and ~working, which counts for
a whole lot in these discussions.

As for the 32- and 64-bit virtual memory design, the 64-bit porting
manual—which was later rolled into the Programming Concepts manual—was
a starting point. But that documentation has its gaps.
--
Pure Personal Opinion | HoffmanLabs LLC
Camiel Vanderhoeven
2021-05-30 08:34:19 UTC
Post by Stephen Hoffman
Some few parts of the kernel do tend to be written in the
platform-native assembler; in Macro64 on Alpha, and IAS on Itanium, and
as-yet-unspecified AT&T and/or Intel syntax assembler on x86-64.
AT&T syntax. Some key kernel modules (most of which I wrote), and some helper routines in various places, but a lot less assembler code than we have on Itanium. On Itanium, we have ~260 IAS modules (~85 if you exclude the MATH library) in the base OS. On x86-64, we have 32 assembler modules; the majority of these contain just a few very small helper routines (like shuffling arguments into the correct registers to call a UEFI firmware procedure). About 10 of these are substantial modules.

The most substantial assembler modules are those that deal with initial interrupt/exception handling, system service calling, and AST delivery; those are the modules where we make the switch between Kernel, Executive, Supervisor, and User mode. They have to be written in assembly language, because we need to (a) exercise complete control over what goes into (and comes out of) which of the x86 registers, and (b) do things like change the stack pointer, page tables, etc. that would be impossible to do in C and survive. Even so, we're limiting these in scope, switching to C as soon as possible.

A good example is the main scheduler loop. It was written in MACRO-32 on the VAX, MACRO-64 on Alpha, and in IAS on Itanium. On x86-64, it's written in C, calling on an assembly routine only to do the minimum bit of context changing that couldn't be done in C. We've gone from 1200 lines of Itanium assembly to 200 lines of C code + 150 lines of x86-64 assembler code.

Camiel
Camiel Vanderhoeven
2021-05-30 08:48:53 UTC
Oh, as to the original question, looking at recent x86-64 builds, the linker object files built by the different compilers are ~55% C, ~30% BLISS, ~15% MACRO-32, <1% Assembler. That more or less reflects the number of modules in these languages (multiple linker object files are sometimes produced from a single module).

Camiel
John Dallman
2021-05-30 09:04:00 UTC
Post by Camiel Vanderhoeven
Oh, as to the original question, looking at recent x86-64 builds,
the linker object files built by the different compilers are ~55%
C, ~30% BLISS, ~15% MACRO-32, <1% Assembler.
Thanks!

John