Discussion:
HPE Integrity emulator
David Turner
2022-08-11 22:48:16 UTC
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)

I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?

Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available


Comments please.


David Turner
abrsvc
2022-08-11 22:55:08 UTC
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
David Turner
Since there is always a performance penalty to pay when using an emulator, no emulator is likely to approach Integrity performance levels on currently available hardware. I know of no emulator for Integrity systems at this time.

Dan
gah4
2022-08-11 23:02:05 UTC
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
Without looking at it in much detail, it would seem to me not so good an idea.

IA-64 is specifically designed such that the instruction set optimizes the
ability of the hardware to execute instructions. All the out-of-order
hazards are solved at compile time, such that everything happens in
the right order. (Part of the reason for the complication of the design,
and especially of writing compilers for it.)

One problem with any RISC design, and especially with IA-64, is
how it scales over time. Things that made sense with the technology
one year, might be completely wrong not so many years later. (*)

Now, the thing that has made emulation work well over the years,
is that newer, faster, processors are enough faster, and also more
energy efficient, to overcome the cost of emulation. It might be
that is now true for IA-64. It does seem likely, though, that instructions
optimized for hardware are less optimized for emulation.

(*) One interesting idea from early RISC is the branch delay slot,
where one instruction is executed after the branch, while the
hardware figures out how to do the branch, and keep the pipeline
full. But as technology changed, that would have required more
and more instructions in the delay slot, inconvenient for existing
hardware, and also for compiler writers if it was done in new
hardware.
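
To make the "optimized for hardware, not for emulation" point concrete: IA-64 code comes in 128-bit bundles holding three 41-bit instruction slots plus a 5-bit template field that tells the hardware which execution units the slots go to and where the stops fall. An emulator has to pull all of that apart in software before it can even look at an opcode. A minimal sketch of just the bundle-splitting step (plain C, field layout per the published IA-64 architecture manuals; not taken from any real emulator):

    #include <stdint.h>

    struct ia64_bundle {
        uint8_t  template_;   /* bits 0..4: unit assignment and stop positions */
        uint64_t slot[3];     /* three 41-bit instruction slots */
    };

    /* Split one 128-bit bundle (passed as its low and high 64-bit halves).
       Every guest instruction costs several host shifts and masks before
       real decoding even begins. */
    static void split_bundle(uint64_t lo, uint64_t hi, struct ia64_bundle *b)
    {
        const uint64_t slot_mask = (1ULL << 41) - 1;

        b->template_ = (uint8_t)(lo & 0x1f);                  /* bits  0..4   */
        b->slot[0]   = (lo >> 5) & slot_mask;                 /* bits  5..45  */
        b->slot[1]   = ((lo >> 46) | (hi << 18)) & slot_mask; /* bits 46..86  */
        b->slot[2]   = (hi >> 23) & slot_mask;                /* bits 87..127 */
    }

And that is before predication, the register stack engine, and NaT bits come into it -- all things the real hardware handles for free but an emulator has to model explicitly.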
jimc...@gmail.com
2022-08-12 07:44:03 UTC
Post by gah4
IA-64 is specifically designed such that the instruction set optimizes the
ability of the hardware to execute instructions. All the out-of-order
hazards are solved at compile time, such that everything happens in
the right order. (Part of the reason for the complication of the design,
and especially of writing compilers for it.)
One problem with any RISC design, and especially with IA-64, is
how it scales over time. Things that made sense with the technology
one year, might be completely wrong not so many years later. (*)
IA-64 isn't a RISC design, and the problem wasn't that it "didn't scale over time"; EPIC was a flawed premise for general-purpose computing. Turns out that it is impossible to solve out-of-order hazards at compile time for most workloads involving random memory accesses -- which makes it impossible to extract significant performance benefits from VLIW architectures for the vast majority of software.

VLIW architectures are very useful for streaming workloads with no dynamic latency, and strictly ordered execution -- they're very successful in DSP and GPU applications to this day.
gah4
2022-08-12 10:48:46 UTC
On Friday, August 12, 2022 at 12:44:05 AM UTC-7, ***@gmail.com wrote:

(snip)
Post by ***@gmail.com
IA-64 isn't a RISC design, and the problem wasn't that it "didn't scale over time";
EPIC was a flawed premise for general-purpose computing. Turns out that it is
impossible to solve out-of-order hazards at compile time for most workloads
involving random memory accesses -- which makes it impossible to extract
significant performance benefits from VLIW architectures for the vast majority of software.
Well it isn't so easy at run-time, either. Much of my early programming was on an
IBM 360/91, which was a favorite machine for books on pipelined processors.
(And one of the few that did out-of-order retirement.)

The goal of the 360/91 was one instruction per clock cycle on normal programs,
not specifically written for it. (That is, generated by usual compilers.)
Among others, the 360/91 can prefetch on two branch paths, in addition to the
non-branch path. Keeping the pipelines full isn't so easy, and the machine
likely often didn't run as fast as one might have hoped.

As far as I know, no parallel or pipelined processor ever runs
as fast as its (over-optimistic) designers hoped.

But okay, memory access is always a problem. The 360/91 uses 16-way
interleaved memory, as memory access time is about 13 clock cycles.
But since you can't predict the access patterns, you don't know
how well interleaved memory works.

With cache, one hopes to have more uniform memory access times,
but yes it is not easy to predict. Yes it is not possible to solve hazards
at compile time, but it is also not possible at run time. One just does
as well as it can be done, and hopes it is good enough.

(One of the fun things about the 360/91 is imprecise interrupts.
When an interrupt occurs, the pipeline is flushed, and the address is
(usually) not the address of the source of the interrupt.)
jimc...@gmail.com
2022-08-12 15:26:31 UTC
Well it isn't so easy at run-time, either. Much of my early programming was on an
IBM 360/91, which was a favorite machine for books on pipelined processors.
It's not easy at run-time, but the 50+ years since the 360/91 was designed have shown that run-time techniques are more effective for most workloads.
Yes it is not possible to solve hazards at compile time, but it is also not possible at run time. One just does
as well as it can be done, and hopes it is good enough.
Successful hardware engineering usually doesn't come from "do the best you can with a technique and hope it's enough".

Hardware techniques to address execution hazards have always delivered more usable performance in general-purpose computing than EPIC offered -- and everything genuinely useful that came from EPIC designs (compiler innovations, large on-die caches, memory controllers, process shrinks) provided even more performance when applied to other instruction architectures.

Itanium only became usably performant by adding SMT, out-of-order execution, and speculative execution -- all of which had already pulled AMD64/x64 and other architectures ahead in pure performance, in speed-per-gate-count, as well as in thermal efficiency and power consumption.

For general-purpose computing, nearly everything of value that came from the billions of dollars poured into EPIC provided more benefit for other technologies.
Arne Vajhøj
2022-08-12 12:42:07 UTC
Post by ***@gmail.com
Post by gah4
IA-64 is specifically designed such that the instruction set optimizes the
ability of the hardware to execute instructions. All the out-of-order
hazards are solved at compile time, such that everything happens in
the right order. (Part of the reason for the complication of the design,
and especially of writing compilers for it.)
One problem with any RISC design, and especially with IA-64, is
how it scales over time. Things that made sense with the technology
one year, might be completely wrong not so many years later. (*)
IA-64 isn't a RISC design, and the problem wasn't that it "didn't
scale over time"; EPIC was a flawed premise for general-purpose
computing. Turns out that it is impossible to solve out-of-order
hazards at compile time for most workloads involving random memory
accesses -- which makes it impossible to extract significant
performance benefits from VLIW architectures for the vast majority of
software. >
VLIW architectures are very useful for streaming workloads with no
dynamic latency, and strictly ordered execution -- they're very
successful in DSP and GPU applications to this day.
I am not fully convinced that VLIW was a bad idea.

Yes - it turned out to be extremely difficult to
get N VLIW execution units to be N times as fast
as a traditional single execution unit.

But I think that is the wrong comparison.

The correct comparison is whether N VLIW
execution units are faster than N multi-core
execution units requiring multiple threads.

I suspect that may frequently be the case.

Arne
jimc...@gmail.com
2022-08-12 15:03:55 UTC
Post by Arne Vajhøj
The correct comparison is whether N VLIW
execution units are faster than N multi-core
execution units requiring multiple threads.
For certain workloads VLIW excels -- execution patterns that don't require non-deterministic memory access, don't benefit from out-of-order execution, and require massive vectorized instructions. It's why VLIW continues to receive investment and innovation in applications like digital signal processing and graphics acceleration.

For general-purpose workloads, it does not. Itanium eventually needed multiple cores, SMT, out-of-order execution, and speculative processing in order to achieve reasonable performance -- all techniques that VLIW was intended to make unnecessary.
Arne Vajhøj
2022-08-12 00:04:00 UTC
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
Based on previous discussions here, no Itanium emulator
currently exists.

In theory one could be made. It should be possible to emulate any
CPU where detailed enough documentation is available.

Several posters have raised the performance issue. And even though
it is obviously easier to match the performance of a 1-core
@ 400-600 MHz Alpha than that of a 4/8-core @ 1.5-2.0 GHz Itanium on
a 16/24/32-core @ 3 GHz x86-64 host, I think it could be
done. I don't expect a non-JIT emulator to be fast enough,
but I believe a JIT emulator could be just fast enough to
be usable.
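
To illustrate what that JIT distinction means in practice, here is a toy
sketch (an invented two-instruction "ISA", plain C, nothing Alpha- or
Itanium-specific): a plain interpreter re-decodes every guest instruction
each time it executes it, while even the simplest JIT-style scheme decodes
each instruction once into a cached, directly dispatchable form. Real JIT
emulators go further and emit native x86-64 code, which this sketch does
not attempt.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy guest "ISA": 0 = r0 += r1, 1 = r0 -= r1, 2 = halt. */
    enum { OP_ADD, OP_SUB, OP_HALT };

    /* Pure interpretation: fetch, decode and dispatch on every executed
       instruction -- this is where the per-instruction overhead lives. */
    static int64_t interpret(const uint8_t *code, int64_t r0, int64_t r1)
    {
        for (size_t pc = 0; ; pc++) {
            switch (code[pc]) {
            case OP_ADD: r0 += r1; break;
            case OP_SUB: r0 -= r1; break;
            default:     return r0;              /* OP_HALT or anything else */
            }
        }
    }

    /* JIT-ish pre-decoding: translate each instruction once into a record
       that can be dispatched without re-decoding on later executions. */
    typedef void (*op_fn)(int64_t *r0, int64_t r1);
    static void do_add(int64_t *r0, int64_t r1) { *r0 += r1; }
    static void do_sub(int64_t *r0, int64_t r1) { *r0 -= r1; }

    static int64_t run_predecoded(const uint8_t *code, size_t n, int64_t r0, int64_t r1)
    {
        op_fn cache[64];
        for (size_t i = 0; i < n; i++)           /* decode once */
            cache[i] = (code[i] == OP_ADD) ? do_add : do_sub;
        for (size_t i = 0; i < n; i++)           /* dispatch the cached form */
            cache[i](&r0, r1);
        return r0;
    }

    int main(void)
    {
        const uint8_t prog[] = { OP_ADD, OP_ADD, OP_SUB, OP_HALT };
        printf("%lld %lld\n",
               (long long)interpret(prog, 10, 3),
               (long long)run_predecoded(prog, 3, 10, 3));  /* 3 ops before HALT */
        return 0;
    }

The payoff comes when the same guest code is executed many times, which is
the normal case; the decode cost is then paid once instead of on every pass.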

But I also suspect that developing such an emulator would be
a lot of work (read: bloody expensive). Itanium is a complex
CPU - I suspect a lot more complex than Alpha, and that means
more expensive to develop.

So the feasibility will depend on how many licenses could
be sold.

If you are really interested then you could reach out to Stromasys
and EmuVM and ask how many licenses they would need to sell
for them to be willing to do an Itanium emulator.

Honestly I doubt the numbers will work out. I expect the
vast majority of VMS I64 users to have migrated to VMS x86-64 within
5-10 years. 5-10 years may sound like a long time, but it is not
long if that is the entire window in which an expensive software
product has to sell.

Anyway, it will not cost you much to make a few phone calls
and ask people who really know, instead of listening to someone
like me who is just thinking out loud.

Arne
gah4
2022-08-12 01:15:43 UTC
On Thursday, August 11, 2022 at 5:04:08 PM UTC-7, Arne Vajhøj wrote:

(snip)
Post by Arne Vajhøj
But I also suspect that developing such an emulator would be
a lot of work (read: bloody expensive). Itanium is a complex
CPU - I suspect a lot more complex than Alpha, and that means
more expensive to develop.
The idea was that it would be simpler than a processor figuring out
on its own how to overlap and reorder instructions. The compiler
is supposed to do that (once) instead of the processor (every time
instructions are executed).

But yes, it is a very complicated processor.

Now, it is possible that there are people who don't need such a fast
processor, but instead need a large memory. (I just noticed that
the DS10 goes up to only 2GB.)

In the Cray-1 days, I wondered why there was no machine to compile
Cray programs on, without using expensive actual Cray-1 time.

A slow IA-64 emulator might not be so hard to write, but getting
reasonable speed should be a real challenge. Especially doing anything
in parallel.
abrsvc
2022-08-12 02:01:12 UTC
Post by gah4
(snip)
Post by Arne Vajhøj
But I also suspect that developing such an emulator would be
a lot of work (read: bloody expensive). Itanium is a complex
CPU - I suspect a lot more complex than Alpha, and that means
more expensive to develop.
The idea was that it would be simpler than a processor figuring out
on its own how to overlap and reorder instructions. The compiler
is supposed to do that (once) instead of the processor (every time
instructions are executed.
But yes, it is a very complicated processor.
Now, it is possible that there are people who don't need such a fast
processor, but instead need a large memory. (I just noticed that
the DS10 goes up to only 2GB.)
In the Cray-1 days, I wondered why there was no machine to compile
Cray programs on, without using expensive actual Cray-1 time.
A slow IA-64 emulator might not be so hard to write, but getting
reasonable speed should be a real challenge. Especially doing anything
in parallel.
Realize that a system emulator is more involved than just emulating the instruction stream. The underlying hardware must be emulated as well. This may be as simple as translating an I/O stream into something that the host system can understand, or as complex as emulating the functions of a file system within a "data file". There is much involved here.
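
To give a feel for that non-CPU half, here is a toy sketch in C (an invented console device with an invented register layout -- not modeled on any real Integrity hardware): guest loads and stores that fall in a device's address range get routed to a device model rather than to emulated RAM, and every single emulated memory access has to make that routing decision.

    #include <stdint.h>
    #include <stdio.h>

    #define CONSOLE_BASE  0x10000000u
    #define CONSOLE_SIZE  0x10u
    #define REG_TX        0x00u    /* write: send one character to the host terminal */
    #define REG_STATUS    0x08u    /* read: bit 0 = transmitter ready */

    static uint8_t guest_ram[1 << 16];   /* toy guest memory */

    static uint64_t io_read(uint32_t offset)
    {
        return (offset == REG_STATUS) ? 1 : 0;   /* always "ready" in this toy */
    }

    static void io_write(uint32_t offset, uint64_t value)
    {
        if (offset == REG_TX)
            putchar((int)(value & 0xff));        /* forward guest output to the host */
    }

    /* Every emulated load/store passes through a check like this. */
    static uint64_t guest_load(uint32_t addr)
    {
        if (addr - CONSOLE_BASE < CONSOLE_SIZE)
            return io_read(addr - CONSOLE_BASE);
        return guest_ram[addr % sizeof guest_ram];
    }

    static void guest_store(uint32_t addr, uint64_t value)
    {
        if (addr - CONSOLE_BASE < CONSOLE_SIZE)
            io_write(addr - CONSOLE_BASE, value);
        else
            guest_ram[addr % sizeof guest_ram] = (uint8_t)value;
    }

    int main(void)
    {
        if (guest_load(CONSOLE_BASE + REG_STATUS))        /* poll the "device" */
            guest_store(CONSOLE_BASE + REG_TX, 'A');      /* "print" through it */
        guest_store(0x1234, 42);                          /* ordinary RAM access */
        printf("\nram[0x1234] = %llu\n", (unsigned long long)guest_load(0x1234));
        return 0;
    }

A real emulator has dozens of such device models (disk, network, console, timers, firmware interfaces), and each one has to behave closely enough to the original that the guest OS cannot tell the difference.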

Dan
gah4
2022-08-12 10:26:26 UTC
On Thursday, August 11, 2022 at 7:01:14 PM UTC-7, abrsvc wrote:

(snip)
Post by abrsvc
Realize that a system emulator is more involved that just emulating the
instruction stream. The underlying hardware must be emulated as well.
This may be as simple as translating an I/O stream into something that
the host system can understand or as complex as emulating the functions
of a file system within a "data file". There is much involved here.
It is.

As a rough approximation -- one that mostly goes back to microprogrammed
machines of the 1960s and 1970s, but which I believe also applies to
software-emulated CISC processors -- emulation runs at about 1/10 the
speed. That is, about 10 instructions to emulate one, on average.

The idea behind RISC is simpler instructions, and the possibility that
more can be executed in the same time. One might hope that RISC
instructions are easier to emulate, but it isn't so obvious that the
RISC advantage still applies with emulation.

IA-64 is supposed to be able to execute 6 instructions per clock cycle.
My guess is that, at least for the easier sort of emulation, it might still be
10 real instructions per emulated instruction, so maybe 60 times slower.

And yes things like I/O all need to be emulated, but usually aren't
a big limit on execution speed. They might still take time to get
right, though.
Arne Vajhøj
2022-08-12 12:35:13 UTC
Post by gah4
As a rough approximation, which mostly goes back to microprogrammed
machines from the 1960's and 1970's, but I believe also to software
emulated CISC processors is about 1/10 the speed. That is, about 10
instructions to emulate one, on average.
The idea behind RISC is simpler instructions, and the possibility that
more can be executed in the same time. One might hope that RISC
instructions are easier to emulate, but it isn't so obvious that the
RISC advantage still applies with emulation.
IA-64 is supposed to be able to execute 6 instructions per clock cycle.
My guess is that, at least the easier emulation, might still be 10 real
instructions per emulated instruction, so maybe 60 times slower.
1/10th seems slightly optimistic for non-JIT emulation.

But the fastest Alpha emulators use JIT today and an IA-64
emulator would need to as well if it is to perform well.

And then we are talking closer to 1:1, instruction-wise.

https://emuvm.com/support/faq/

<quote>
What is CPU server: basic, JIT1, JIT2, JIT3?

AlphaVM supports several CPU implementation back-ends. They all
implement the same Alpha CPU functionality, but in various ways.

- Basic CPU is the simplest CPU implementation, based on the
  interpretation of Alpha instructions fetched from memory. This CPU
  server is the only CPU server available in AlphaVM-Basic.
- JITx CPUs are based on the Just-In-Time compilation of Alpha code
  to increase the performance.
- JIT1 server compiles to byte code. Its performance is almost
  double that of the basic CPU.
- JIT2 server compiles Alpha code to naive x86-64 code. Its
  performance on most workloads is about a factor of 5 faster than the
  basic CPU.
- JIT3 server compiles Alpha code to naive x86-64 code. This CPU
  server applies sophisticated optimization. Its performance is a factor
  of 10 faster than the basic CPU.

AlphaVM-Pro is offered with the JIT3 CPU. AlphaVM-Basic only supports the
basic CPU. Other CPU servers are used merely for debugging.
</quote>

Note that this is the vendors own description - not an
independent benchmark.

Arne
Simon Clubley
2022-08-12 13:10:37 UTC
Post by Arne Vajhøj
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
Based on previous discussions here then no Itanium emulator
currently exist.
In theory one could be made. It should be possible to emulate any
CPU where detailed enough documentation is available.
If you think this problem is about emulating the CPU, then you don't
understand the problem.

A good chunk of the CPU emulation work has already been done in Ski,
but that's only a userland binaries emulator for Linux and would be
useless as-is for running even userland VMS binaries.

In a full system emulator, the CPU is only one small part of the
emulation. You also have to emulate all the rest of the hardware to
a good enough accuracy that VMS can't tell the difference.

_That_ is where the majority of the work lies.

A full system emulator would also need access to the firmware loaded
onto the real hardware and that is now only available under a support
contract.

A userland binaries emulator OTOH would need to be run on top of
another VMS system on a different architecture as it works by calling
the system services in the underlying VMS system when a call to a VMS
system service is made in the Itanium binary.

If you run it on Alpha, you need to emulate any system services added
to Itanium that don't exist on Alpha VMS. If you run it on x86-64 VMS,
you need to hope that all the system services available on Itanium exist
on x86-64 VMS, or you have the same problem.

In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.

Such data cell access would have to be recognised and emulated in such
a userland level emulator.
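
To make the distinction concrete, a toy sketch (plain C, invented addresses and names): a system-service call lands on a known entry point the emulator owns and can simply forward, but a data-cell read is just an ordinary load instruction, so every emulated load has to be checked against whatever address ranges the emulator needs to shadow or synthesize.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Invented range standing in for "OS data cells the guest reads directly".
       Real cell names, addresses and layouts would come from the guest system. */
    #define CELL_BASE     0x7ff00000u
    #define CELL_SIZE     0x100u
    #define CELL_SYSTIME  (CELL_BASE + 0x20)   /* pretend "system time" quadword */

    /* A service call is the easy case: the emulator owns the entry point. */
    static void emulate_service_call(unsigned service, void *arg)
    {
        if (service == 1) {                    /* invented "get time" service */
            int64_t now = (int64_t)time(NULL);
            memcpy(arg, &now, sizeof now);
        }
    }

    /* A data-cell read is the hard case: it is an ordinary load, so every
       guest load must be checked against the shadowed ranges. */
    static int64_t emulate_load64(uint32_t addr)
    {
        if (addr - CELL_BASE < CELL_SIZE) {
            if (addr == CELL_SYSTIME)
                return (int64_t)time(NULL);    /* synthesize the cell's contents */
            return 0;                          /* unrecognized cell */
        }
        return 0;   /* in a real emulator: read the guest's own memory image */
    }

    int main(void)
    {
        int64_t t1 = 0;
        emulate_service_call(1, &t1);                 /* interceptable path */
        int64_t t2 = emulate_load64(CELL_SYSTIME);    /* per-load-checked path */
        printf("%lld %lld\n", (long long)t1, (long long)t2);
        return 0;
    }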

In addition to this, you also have the problem of sharable images mapped
into user space during image activation. Such images would have to be
brought along from the Itanium system and run through the emulator
as well. I don't know what the licence implications of doing that would be.

In short, a userland binaries emulator would very likely be unsuitable
for anything other than simple VMS Itanium userland binaries so you are
looking at a full system emulator for running a real Itanium application
on another architecture.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2022-08-12 13:21:15 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
Based on previous discussions here then no Itanium emulator
currently exist.
In theory one could be made. It should be possible to emulate any
CPU where detailed enough documentation is available.
If you think this problem is about emulating the CPU, then you don't
understand the problem.
A good chunk of the CPU emulation work has already been done in Ski,
but that's only a userland binaries emulator for Linux and would be
useless as-is for running even userland VMS binaries.
In a full system emulator, the CPU is only one small part of the
emulation. You also have to emulate all the rest of the hardware to
a good enough accuracy that VMS can't tell the difference.
_That_ is where the majority of the work lies.
Possible.

But it is still a matter of documentation.

And unlike the IA-64 instruction set, which is pretty unique, I would
assume the surrounding hardware is different in detail but the same in
style as what other emulators already handle.
Post by Simon Clubley
A full system emulator would also need access to the firmware loaded
onto the real hardware and that is now only available under a support
contract.
The Alpha emulators get it from somewhere. HP(E) I presume. Anyone
doing an IA-64 emulator would need the same.

This is not a hobbyist weekend project. This would be a commercial
company deciding to invest millions of dollars.

Arne
Simon Clubley
2022-08-12 17:54:27 UTC
Post by Arne Vajhøj
And unlike the IA-64 instruction set that is pretty unique, then
I would assume the hardware support is different but same style as
other emulators.
Yes and no.

Emulating various standard disk drive interfaces (for example) is one
thing, but the Itanium architecture itself has its own unique hardware
infrastructure of which the CPU instruction set is just one part.

Once again, emulating the instruction set is only one task that needs
to be done in a long list of tasks before you have a viable full system
emulator.

This hardware also needs to be emulated to a level of accuracy that means
VMS can't tell the difference. That's a _lot_ of work. Just look at the
bug reports that show up here every so often for Alpha that turn out to
be an emulation problem in the Alpha emulator in use.

That's for an architecture which is very well-known and _far_ less complex
than Itanium is. It may also interest you to know that nobody has put an
Itanium emulator in QEMU even though it supports this list of architectures:

https://www.qemu.org/docs/master/system/index.html

Writing an Itanium emulator is probably not viable these days, either as
a commercial project or a hobbyist project, given the amount of effort
required to create one and the need to access restricted firmware (for
hobbyists) or the limited user base (for commercial projects).

The fact Itanium is also both complex and dead counts against it when
trying to get people interested in it for a hobbyist project.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
jimc...@gmail.com
2022-08-12 21:42:30 UTC
Post by Simon Clubley
The fact Itanium is also both complex and dead counts against it when
trying to get people interested in it for a hobbyist project.
At some point, I predict, being complex and an infamous business failure will ensure that hobbyists build a platform emulator for Itanium :) It will be too late for the scenario David's customers need, however.
Johnny Billquist
2022-08-12 22:38:28 UTC
Post by Simon Clubley
In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.
Such data cell access would have to be recognised and emulated in such
a userland level emulator.
I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS,
does not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory-protected operating system.

There is absolutely nothing Unix/Linux specific about this.

Heck - how would such programs even survive upgrading to a new version
of the OS, when things might move around and change internally???

Johnny
Stephen Hoffman
2022-08-13 00:21:10 UTC
Post by Johnny Billquist
Post by Simon Clubley
In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.
Such data cell access would have to be recognised and emulated in such
a userland level emulator.
I find that claim incredibly hard to believe. Can you give some
examples of this?
VAX stuff that does this will reference SYS$BASE_IMAGE during the link,
and Alpha and Integrity apps will use LINK /SYSEXE to resolve these
symbols.

As one of various examples of symbols that some few apps will poke at:
CTL$A_COMMON — and there are others.

We met a few back in the era of Y2K too, where some apps were reading
directly from the kernel clock storage quadword.
Post by Johnny Billquist
Because even RSX, which is just a primitive predecessor of VMS do not
have such behavior.
RSX and OpenVMS are different. (I'd have thought you'd already been
singed enough by this erroneous assumption, but here we are again.)

The four-rings UREW/URKW/etc design specifically permits developers to
allow these cross-mode access shenanigans, too. BTW: UREW wasn't
feasible on Itanium.

To make some of these cross-mode shenanigans somewhat more supportable,
OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
allowing supervisor code to poke at kernel data. But I digress.
Post by Johnny Billquist
Everything in the kernel is completely hidden and out of scope for a
process, and the only way to do or get to anything is through system
calls.
Nope.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2022-08-13 10:19:14 UTC
Post by Stephen Hoffman
Post by Johnny Billquist
Post by Simon Clubley
In addition, VMS has a major problem that simply doesn't exist in
Linux and that is whereas the vast majority of interaction between a
Linux userland binary and Linux itself is via a nice well-defined
syscall interface, VMS binaries have a nasty habit of looking at data
cells which exist directly in the VMS process's address space.
Such data cell access would have to be recognised and emulated in
such a userland level emulator.
I find that claim incredibly hard to believe. Can you give some
examples of this?
VAX stuff that does this will reference SYS$BASE_IMAGE during the link,
and Alpha and Integrity apps will use LINK /SYSEXE to resolve these
symbols.
CTL$A_COMMON — and there are others.
We met a few back in the era of Y2K too, where some apps were reading
directly from the kernel clock storage quadword.
Are such symbols then guaranteed to never move between different
versions of the OS, or how is this managed?
Post by Stephen Hoffman
Post by Johnny Billquist
Because even RSX, which is just a primitive predecessor of VMS do not
have such behavior.
RSX and OpenVMS are different.  (I'd have thought you'd already been
singed enough by this erroneous assumption, but here we are again.)
I know. :-)
Post by Stephen Hoffman
The four-rings UREW/URKW/etc design specifically permits developers to
allow these cross-mode access shenanigans, too. BTW: UREW wasn't
feasible on Itanium.
I know that the VAX hardware has these. I just find it weird that you
would have a design where you directly reach into the innards of the OS
without going through any system call layer.
In general it has been understood for quite some time that this is a
bad idea. Abstraction and isolation are more or less core design
principles for making things more robust and possible to change without
breaking things.
Post by Stephen Hoffman
To make some of these cross-mode shenanigans somewhat more supportable,
OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
allowing supervisor code to poke at kernel data. But I digress.
That is digressing. Supervisor code is not normal user processes.

Well. I'm tempted to paraphrase the late Mark Crispin. RSX - a great
improvement on its successors.
(He used that about TOPS-20 and any Unix system.)

Johnny
Stephen Hoffman
2022-08-13 19:46:55 UTC
Post by Johnny Billquist
Are such symbols then guaranteed to never move between different
versions of the OS, or how is this managed?
Linking against the kernel can vary, whether from boot to boot, or from
patch to patch. There are some apps which resolve these references at
app startup, and others that require relinking after updates or
upgrades.

Whether anybody wanted users accessing data directly is one discussion.
That some of the kernel data was accessible from an outer mode (user,
super, etc.), which meant some developers would access it directly, is
another discussion.
Post by Johnny Billquist
I know that the VAX hardware have these. I just find it weird that you
would have a design where you directly reach into the innards of the OS
without going through any system call layer.
VAX/VMS programmers could and did make substantial efforts to optimize
some VAX code.

Work to reduce or eliminate CALLS/CALLG calls, change-mode
operations, and longword offsets was popular, along with work on some
other VAX operations.

That code tuning is related to why some of us have been cleaning up
co-routine code in recent decades, why the OpenVMS Alpha C system
programming work that occurred leading up to OpenVMS Alpha V6.1 was
gnarly, and why compiler code generation can be such a joy.

There's sketchy Y2K-era timekeeping and time-drifting code around and
still in use, too. Apps that haven't been remediated to deal correctly
with daylight saving time changes, mostly.

That VAX code-optimization work has become needed less often in recent
times particularly as the compilers address much of that, though there
are still performance-sensitive code paths in some apps. Just not as
widespread as on VAX.
Post by Johnny Billquist
In general it have been understood for quite some time that this is a
bad idea. Abstraction and isolation is more or less some core designs
for making things more robust and possible to change without breaking
things.
Which is why I've been known to grumble about itemlists and descriptors
and related abstractions, too. Itemlists and descriptors were great for
the 1980s and 1990s, but are increasingly limiting what changes can be
made to OpenVMS APIs.
Post by Johnny Billquist
Hoff: To make some of these cross-mode shenanigans somewhat more
supportable, OpenVMS also implements a P1 window into system space at
CTL$GL_PHD, allowing supervisor code to poke at kernel data. But I
digress.
That is digressing. Supervisor code is not normal user processes.
It's another of the design compromises intended to reduce or avoid
overhead. VAX/VMS had those. All operating systems have those.

TL;DR: Yes, there are outer-mode apps that read directly from
inner-mode memory.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2022-08-13 21:22:54 UTC
Post by Stephen Hoffman
Post by Johnny Billquist
Are such symbols then guaranteed to never move between different
versions of the OS, or how is this managed?
Linking against the kernel can vary, whether from boot to boot, or from
patch to patch. There are some apps which resolve these references at
app startup, and others that require relinking after updates or upgrades.
I see. Potential nastiness ahead there then.
Post by Stephen Hoffman
Whether anybody wanted users accessing data directly is one discussion.
That some of the kernel data was accessible from an outer mode (user,
super, etc) and which meant some developers would access it directly is
another discussion.
Understood. But I guess the fact that they made it possible means
obviously some will do it.
Post by Stephen Hoffman
Post by Johnny Billquist
I know that the VAX hardware have these. I just find it weird that you
would have a design where you directly reach into the innards of the
OS without going through any system call layer.
VAX/VMS programmers can and did make substantial efforts to optimize
some VAX code.
Worked to reduce or eliminate CALLS/CALLG calls and change-mode
operations and longword offsets was popular, along with some other VAX
operations.
Understood. And I do remember a lot of this stuff from way back when.
It does seem that, in the quest to make things a bit more efficient, they
were willing to bend things just a bit more than I had expected.
Post by Stephen Hoffman
That code tuning is related to why some of us have been cleaning up
co-routine code in recent decades, why the OpenVMS Alpha C system
programming work that occurred leading up to OpenVMS Alpha V6.1 was
gnarly, and why compiler code generation can be such a joy.
I know that there was quite some effort before VAX and Alpha were
somewhat unified. I never knew much of the details, but I see that I'm
getting some of that now.
Post by Stephen Hoffman
There's sketchy Y2K-era timekeeping and time-drifting code around and
still in use, too. Apps that haven't been remediated to deal correctly
with daylight saving time changes, mostly.
Meh. Tell me about it. Same mess in RSX.
Post by Stephen Hoffman
That VAX code-optimization work has become needed less often in recent
times particularly as the compilers address much of that, though there
are still performance-sensitive code paths in some apps. Just not as
widespread as on VAX.
I would hope that they are working on getting rid of this stuff as they
port things.
Post by Stephen Hoffman
Post by Johnny Billquist
In general it have been understood for quite some time that this is a
bad idea. Abstraction and isolation is more or less some core designs
for making things more robust and possible to change without breaking
things.
Which is why I've been known to grumble about itemlists and descriptors
and related abstractions, too. Itemlists and descriptors were great for
the 1980s and 1990s, but are increasingly limiting what changes can be
made to OpenVMS APIs.
Descriptors, if we talk about the kind used for strings, are not
unreasonable. But it seems a lot of the extensions to VMS over the years
have made things more complicated.
Post by Stephen Hoffman
Post by Johnny Billquist
Hoff: To make some of these cross-mode shenanigans somewhat more
supportable, OpenVMS also implements a P1 window into system space at
CTL$GL_PHD, allowing supervisor code to poke at kernel data. But I
digress.
That is digressing. Supervisor code is not normal user processes.
It's another of design compromises intended to reduce or avoid overhead.
VAX/VMS had those. All operating systems have those.
Different-mode code providing services and/or libraries not exactly in
user space is definitely common. And I do give such code more leeway,
since it is commonly shipped with the OS itself and, as such, is in sync
with other internal bits, or else uses other internal APIs for which
other rules apply anyway.
Post by Stephen Hoffman
TL;DR: Yes, there are outer-mode apps that read directly from inner-mode
memory.
Check. That's the thing that surprised me. Especially since that's not
happening in RSX, unless you have a privileged program which is mapped
to the kernel. But such a program is already not very normal anyway, and
not something any normal user can write or run (well, of course they can
write it, but they can't actually run it.)

Johnny
Simon Clubley
2022-08-15 18:06:41 UTC
Post by Johnny Billquist
Post by Stephen Hoffman
To make some of these cross-mode shenanigans somewhat more supportable,
OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
allowing supervisor code to poke at kernel data. But I digress.
That is digressing. Supervisor code is not normal user processes.
On VMS, there is no such thing as a normal user process.

There is one process that at various times during its lifecycle
executes a mixture of code running in all four modes (KESU).

As such, the supervisor mode code and data structures are part of
the same address space as the user-mode programs. It's just that
most of it is not directly accessible to user-mode programs due
to page protections.

My opinions about whether I think this is a good idea these days
have already been discussed at length. :-)

BTW, are you aware that on VMS, a normal user program can execute
a function within that same program in kernel mode provided it has
sufficient privileges?

I don't mean jump into the kernel address space, but to actually
execute a function within the program with kernel-mode access.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-15 23:34:22 UTC
Post by Simon Clubley
Post by Johnny Billquist
Post by Stephen Hoffman
To make some of these cross-mode shenanigans somewhat more supportable,
OpenVMS also implements a P1 window into system space at CTL$GL_PHD,
allowing supervisor code to poke at kernel data. But I digress.
That is digressing. Supervisor code is not normal user processes.
On VMS, there is no such thing as a normal user process.
Sorry. But here is where you go into nonsense. I probably should have
avoided the word "process", since the point was normal programs. Any and
every program will at least make calls into kernel mode, if nothing else,
at one point or another in its execution. But those are calls to code
that is not a part of the program. And they are done within the context
of a process. This is normal.
Post by Simon Clubley
There is one process that at various times during its lifecycle
executes a mixture of code running in all four modes (KESU).
The fact that code in those different modes is invoked is irrelevant.
It's not code I wrote or compiled. It makes no difference whether it's
all in kernel mode, or a mix of different modes.
It could all just as well be compressed into one mode. Makes no
difference. I know that you constantly miss that point, and think
you've found security holes where there actually aren't any. Why don't
you accept this, and go hunt for other, actual bugs and issues?

My program was not written to run in any other mode than user mode, and
that's what a normal program does. Sorry if I used the word "process" in
a way that confused you.

Johnny
gah4
2022-08-13 01:03:09 UTC
On Friday, August 12, 2022 at 3:38:31 PM UTC-7, Johnny Billquist wrote:

(snip)
Post by Johnny Billquist
I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS
do not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory protected operating system.
Does timesharing mean interactive?

It might not be true for OS/360, though that is batch and was designed
before some things were known, and especially when main memory
was expensive ($1/byte, maybe more).

It mostly works at user level, as CMS does it. (That is, IBM's own
emulation of OS/360 system calls.)

One of the complications of OS/360 is that the most important
control block, the DCB, is in user space. Even more, it has some 24
bit addresses, even with 31 and 64 bit OS versions. Much fun.
Johnny Billquist
2022-08-13 10:28:01 UTC
Permalink
Post by gah4
(snip)
Post by Johnny Billquist
I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS
do not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory protected operating system.
Does timesharing mean interactive?
No. I just tried to limit myself to systems that fulfilled all those
attributes as systems where this isolation would be obvious. It was not
meant to be read that all timesharing systems are interactive, or that
all multiuser systems have memory protection, or any combination of
attributes means that all of those attributes apply or are necessary.

Unix systems, of which Linux is one, also used to not have that
isolation. In the old days, a lot of things were done by opening
/dev/kmem and reading through kernel memory, which had to be done in
combination with reading the kernel symbol table in order to find out
where in kernel memory to read. This was always ugly, risky and tricky.
They obviously learned that this is no good, and got away from it. The
fact that VMS still has this is very surprising to me. I would have
thought it never had it to start with. Like I said, RSX does not. But in
a way that was easier/more obvious on a PDP-11, since it's not as flat an
address space as on the VAX. Kernel space on a PDP-11 is generally not
even possible to see from user space, and you'd have to mess things up
and use extra resources to get there.

On the VAX, the kernel space is always a part of your address space, but
I would have expected it normally all to be fully protected from
user-space access. But now I'm being told it actually isn't with VMS. I
guess they might have been concerned about performance, but that is a sad
state and a poor excuse.
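
For younger readers, the pattern being described looked roughly like this on old Unix systems (a sketch only; the kernel image path, the symbol spelling, and the data layout all varied between systems, which is precisely why it was so fragile):

    #include <fcntl.h>
    #include <nlist.h>      /* nlist(): look up symbols in the kernel image */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Old-style "peek at the kernel": find where a kernel variable lives by
       reading the kernel's symbol table, then read that address out of
       /dev/kmem. "avenrun" (the load-average array) was a traditional target. */
    int main(void)
    {
        struct nlist nl[2];
        long avenrun[3];
        int fd;

        memset(nl, 0, sizeof nl);
        nl[0].n_name = "_avenrun";                      /* spelling varied by system */
        if (nlist("/vmunix", nl) != 0 || nl[0].n_value == 0)
            return 1;                                   /* symbol lookup failed */

        if ((fd = open("/dev/kmem", O_RDONLY)) < 0)
            return 1;
        lseek(fd, (off_t)nl[0].n_value, SEEK_SET);      /* seek to the kernel data cell */
        read(fd, avenrun, sizeof avenrun);              /* read kernel memory directly */
        close(fd);

        printf("load averages (raw): %ld %ld %ld\n",
               avenrun[0], avenrun[1], avenrun[2]);
        return 0;
    }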

Johnny
Rich Alderson
2022-08-14 21:14:29 UTC
Post by gah4
It might not be true for OS/360, though that is batch and was designed
before some things were known, and especially when main memory
was expensive ($1/byte, maybe more).
It mostly works at user level, as CMS does it. (That is, IBM's own
emulation of OS/360 system calls.)
ITYM, actually IKYM DOS/360 here.
Post by gah4
One of the complications of OS/360 is that the most important
control block, the DCB, is in user space. Even more, it has some 24
bit addresses, even with 31 and 64 bit OS versions. Much fun.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Simon Clubley
2022-08-15 17:28:17 UTC
Post by Johnny Billquist
Post by Simon Clubley
In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.
Such data cell access would have to be recognised and emulated in such
a userland level emulator.
I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS
do not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory protected operating system.
As you now know Johnny, you were (once again) very very wrong to try
and compare the two. :-)

BTW, the fact you immediately switched to talking about kernel mode,
makes me wonder if you are even aware of P1 space in a VMS process.
Post by Johnny Billquist
There is absolutely nothing Unix/Linux specific about this.
Oh yes there is.

Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.

A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-15 23:40:29 UTC
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
In addition, VMS has a major problem that simply doesn't exist in Linux
and that is whereas the vast majority of interaction between a Linux
userland binary and Linux itself is via a nice well-defined syscall
interface, VMS binaries have a nasty habit of looking at data cells
which exist directly in the VMS process's address space.
Such data cell access would have to be recognised and emulated in such
a userland level emulator.
I find that claim incredibly hard to believe. Can you give some examples
of this? Because even RSX, which is just a primitive predecessor of VMS
do not have such behavior. Everything in the kernel is completely hidden
and out of scope for a process, and the only way to do or get to
anything is through system calls. And that is generally true of almost
any reasonable multiuser, timesharing, memory protected operating system.
As you now know Johnny, you were (once again) very very wrong to try
and compare the two. :-)
There is much more in common between RSX and VMS than there are
things different between them. Not sure if you know that, but anyway.
Way more than between VMS and any Unix, for instance.
Post by Simon Clubley
BTW, the fact you immediately switched to talking about kernel mode,
makes me wonder if you are even aware of P1 space in a VMS process.
Yes, I'm very aware of P1 space.
Post by Simon Clubley
Post by Johnny Billquist
There is absolutely nothing Unix/Linux specific about this.
Oh yes there is.
No there isn't. Most operating systems have a clean separation between
user code and the kernel. Including all PDP-11 OSes. It turned out that
VMS does not, which rather makes VMS the exception here, not Unix.
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.

VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
Post by Simon Clubley
A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)
Well, not really true. Every time you start a program, it gets a new
process ID, with new resources allocated in the kernel for it. Just that
P1 space is retained between them, unless I remember wrong.

Johnny
Arne Vajhøj
2022-08-15 23:51:11 UTC
Post by Johnny Billquist
Post by Simon Clubley
A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)
Well, not really true. Every time you start a program, it gets a new
process ID, with new resources allocated in the kernel for it. Just that
P1 space is retained between them, unless I remember wrong.
Same process with same process id.

I would say that P0 space is not retained. But there is no
difference in substance between P0 not retained (implicit
P1 retained) and P1 retained (implicit P0 not retained).

Arne
Arne Vajhøj
2022-08-15 23:53:36 UTC
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.
VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
The big difference is that DCL is living in P1 space (stack space)
while a Unix shell is living in heap space (P0 space on a VAX).

Arne
Johnny Billquist
2022-08-16 09:24:57 UTC
Post by Arne Vajhøj
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as
such, P1 space exists also for Unix systems running on VAX, and
possibly also Alpha.
VMS just keeps P1 space around a bit more disconnected from the
program you might be executing.
The big difference is that DCL is living in P1 space (stack space)
while a Unix shell is living in heap space (P0 space on a VAX).
Well. P0 isn't just heap. P0 is basically all memory that you want to
look at as either static or growing upward. So heap is one part, but
plain executable code is also in P0. P1 is static stuff as well, and
data growing downward, like a stack for example.

So yes, DCL sits in P1, while a Unix shell sits in P0 *and* P1, just as
any other binary. The Unix shell hangs around because you normally fork
and then execute something else in its place, while DCL hangs around by
sitting in P1, which is not as process-local in VMS as it is in Unix.

Johnny
Simon Clubley
2022-08-16 18:01:21 UTC
Post by Arne Vajhøj
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.
VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
The big difference is that DCL is living in P1 space (stack space)
while a Unix shell is living in heap space (P0 space on a VAX).
Actually, the _major_ difference is that on VMS, they are in the same
process. In Unix land, they are in different processes.

Also, the other major difference is that parts of P1 space are
directly accessible by a user-mode VMS program, so to get back to the
topic, such access would have to be detected and emulated in any
user-mode binaries level emulator (as opposed to a full-system emulator).

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
John Dallman
2022-08-16 19:26:00 UTC
Post by Simon Clubley
Actually, the _major_ difference is that on VMS, they are in the
same process. In Unix land, they are in different processes.
It is a quirk of UNIX-style OSes that process creation is extremely cheap,
and is thus used for all kinds of things. Most other OSes, including VMS
and its mutant child Windows NT, take rather longer to create processes.

John
Bill Gunshannon
2022-08-16 19:33:12 UTC
Post by John Dallman
Post by Simon Clubley
Actually, the _major_ difference is that on VMS, they are in the
same process. In Unix land, they are in different processes.
It is a quirk of UNIX-style OSes that process creation is extremely cheap,
and is thus used for all kinds of things. Most other OSes, including VMS
and its mutant child Windows NT, take rather longer to create processes.
Quirk? :-)

bill
Arne Vajhøj
2022-08-16 23:15:52 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.
VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
The big difference is that DCL is living in P1 space (stack space)
while a Unix shell is living in heap space (P0 space on a VAX).
Actually, the _major_ difference is that on VMS, they are in the same
process. In Unix land, they are in different processes.
(they being shell and programs)

That is the same thing. It is possible because DCL is in P1.

Arne
Simon Clubley
2022-08-16 17:54:08 UTC
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.
VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
It's what VMS does with that address space that makes it so different
from other operating systems.
Post by Johnny Billquist
Post by Simon Clubley
A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)
Well, not really true. Every time you start a program, it gets a new
process ID, with new resources allocated in the kernel for it. Just that
P1 space is retained between them, unless I remember wrong.
That is completely and totally utterly wrong. However, if you really
believe that (instead of you just doing a David by trolling by making
false statements :-)) it also explains your confusion because VMS works
so differently to what you are clearly used to.

The PID does _not_ belong to the program. It belongs to the process itself.
At many times during the lifecycle of a typical VMS process, there will not
even _be_ a user-mode program loaded into the process P0 address space.

In Linux, there is no such thing as an executing process without a
user-mode program, regardless of whether that user-mode program is
a shell, a user's application program, or something else. Also, every
time the shell runs a new program, the program is run in a new and
different process.

OTOH, in VMS, having a process you can interact with, but without
having any user-mode P0 program loaded, is a perfectly normal thing.

When you ask DCL to run a program, _it_ maps the requested program
into the P0 address space, sets it up, and then calls it to start
execution of the user program.

When the user program exits, the user-mode pages used by that program,
but _only_ those user-mode pages, are removed from the process address
space, and control returns to DCL to await your next command.

There is no "new process ID, with new resources allocated in the kernel
for it". It's the same physical process that gets used over and over
again during the user's session to run different user-mode programs.

Running a user program on VMS from DCL is much more like DCL doing
a dlopen() on the user program into P0 space and then doing a call
to it, instead of the Linux/Unix approach of creating a whole new
fresh process for each program the shell wants to run.
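
Spelling that analogy out for Unix readers (it is only an analogy, not how DCL is implemented; "./userprog.so" and "program_main" are made-up names):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Unix analogy for DCL running a program inside the same process:
       map an image into the existing process, call its entry point,
       unmap it, and the process (the "session") carries on. */
    int main(void)
    {
        void *handle = dlopen("./userprog.so", RTLD_NOW);   /* "image activation" */
        if (handle == NULL) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        int (*entry)(void) = (int (*)(void))dlsym(handle, "program_main");
        if (entry != NULL)
            entry();                                        /* "run" the program */

        dlclose(handle);                                    /* "image rundown" */
        return 0;                                           /* same process lives on */
    }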

_Now_ do you understand why I am describing the VMS approach in the
way I am ?

For the record, I prefer the Unix approach, but I am trying to make
you understand how the VMS approach actually works, not how you think
it works.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-17 11:49:15 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
Unix/Linux sets up a new process to run an image and then deletes
it immediately afterwards and has no such thing as P1 space.
Yes.
Well, technically, P1 space is an artifact of the hardware, and as such,
P1 space exists also for Unix systems running on VAX, and possibly also
Alpha.
VMS just keeps P1 space around a bit more disconnected from the program
you might be executing.
It's what VMS does with that address space that makes it so different
from other operating systems.
It's certainly been a long time since I looked inside VMS. Which I get
called out on every time I make some mistake/assumption/remember things
wrong. Embarrassing each time...
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
A normal VMS session only ever has one process that is reused over
and over again to run programs (unless you choose to start a subprocess
for some reason.)
Well, not really true. Every time you start a program, it gets a new
process ID, with new resources allocated in the kernel for it. Just that
P1 space is retained between them, unless I remember wrong.
That is completely and totally utterly wrong. However, if you really
believe that (instead of you just doing a David by trolling by making
false statements :-)) it also explains your confusion because VMS works
so differently to what you are clearly used to.
No. I did believe that. I had some recollection that the PIDs were
allocated each time a program was started. Partly (again) coming from
RSX. Structures like PCB, TCB, task headers and so on are set up when a
program is started, and thus every time a program starts, you have a new
context in this sense.
But this is also a place where RSX and VMS differs the most, since in
RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
know of.

But the end result is that every time a program is started, it has its
own process id. That DCL under VMS actually starts everything as part
of its own process is really weird, and it also makes me wonder how
things like spawning another program from a program under VMS work,
since that would need to create a new DCL instance. On the other hand,
I now recollect that VMS doesn't have spawn as a system call like RSX
does.

But that certainly explains why creating a new process under VMS is even
heavier.

So yeah, I certainly seem to have been totally lost on this detail.
Post by Simon Clubley
The PID does _not_ belong to the program. It belongs to the process itself.
That was something I thought I remembered being different.
Post by Simon Clubley
At many times during the lifecycle of a typical VMS process, there will not
even _be_ a user-mode program loaded into the process P0 address space.
That, on the other hand, isn't strange to me, and does not necessarily
follow from, or lead to, the topic of the PID itself.
Post by Simon Clubley
In Linux, there is no such thing as an executing process without a
user-mode program, regardless of whether that user-mode program is
a shell, a user's application program, or something else. Also, every
time the shell runs a new program, the program is run in a new and
different process.
Yes.
Post by Simon Clubley
OTOH, in VMS, having a process you can interact with, but without
having any user-mode P0 program loaded, is a perfectly normal thing.
Yes.
Post by Simon Clubley
When you ask DCL to run a program, _it_ maps the requested program
into the P0 address space, sets it up, and then calls it to start
execution of the user program.
But you say not only that - it also uses the context of DCL. So
that, from an accounting point of view, it's still the same process. What
about process quotas like runtime limits? Does DCL reset these, and is DCL
itself excluded from them? And accounting. When a program runs and is
finished, you get accounting information on how much CPU time was used,
memory, and all kinds of stuff. Is DCL then doing that accounting
processing, and not the kernel? Does a process calling something like exit
not terminate the process, but just jump back to DCL?
Post by Simon Clubley
When the user program exits, the user-mode pages used by that program,
but _only_ those user-mode pages, are removed from the process address
space, and control returns to DCL to await your next command.
Does DCL do that, or the kernel?
Post by Simon Clubley
There is no "new process ID, with new resources allocated in the kernel
for it". It's the same physical process that gets used over and over
again during the user's session to run different user-mode programs.
That was something I had forgotten/misunderstood/never realized.
Post by Simon Clubley
Running a user program on VMS from DCL is much more like DCL doing
a dlopen() on the user program into P0 space and then doing a call
to it, instead of the Linux/Unix approach of creating a whole new
fresh process for each program the shell wants to run.
I can understand that bit. But I then wonder about the whole winding
down of the running of the program, as commented above.
Post by Simon Clubley
_Now_ do you understand why I am describing the VMS approach in the
way I am ?
In part, yes. I still do not consider DCL to be part of userspace, user
programs or anything like that. It's an OS component, and has rights
and privileges which mean it can really do anything. Your ranting about
security issues around that topic is still nonsense to me. But VMS is
certainly doing things a bit oddly in some ways that I think are unwise here.
Post by Simon Clubley
For the record, I prefer the Unix approach, but I am trying to make
you understand how the VMS approach actually works, not how you think
it works.
And as I observed, this is hardly Unix specific. The fact that VMS does
things oddly is just a bit more surprising to me, since I know how RSX
works, upon which so much of VMS is based, but this is one place where
RSX works more like Unix. So how VMS diverged there is an interesting
topic in my head.
(Not that RSX actually is like Unix; RSX is actually sort of different in
another way, but from the perspective of how VMS works, RSX isn't close here.)

Johnny
Rich Alderson
2022-08-17 18:10:46 UTC
Permalink
Post by Johnny Billquist
No. I did believe that. I had some recollection that the PIDs were
allocated each time a program was started. Partly (again) coming from
RSX. Structures like PCB, TCB, task headers and so on are setup when a
program is started, and thus every time a program starts, you have a new
context in this sense.
But this is also a place where RSX and VMS differs the most, since in
RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
know of.
Interestingly, RSX does things the way TOPS-20 (< TENEX) does them, while VMS
does them very much like the way Tops-10 does them! I would never have guessed
that.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Simon Clubley
2022-08-17 19:02:07 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
That is completely and totally utterly wrong. However, if you really
believe that (instead of you just doing a David by trolling by making
false statements :-)) it also explains your confusion because VMS works
so differently to what you are clearly used to.
No. I did believe that. I had some recollection that the PIDs were
allocated each time a program was started. Partly (again) coming from
RSX. Structures like PCB, TCB, task headers and so on are setup when a
program is started, and thus every time a program starts, you have a new
context in this sense.
But this is also a place where RSX and VMS differs the most, since in
RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
know of.
But the end result is that every time a program is started, it has it's
own process id. That DCL under VMS actually will be starting everything
as a part of its own process is really weird, and it makes me also
wonder how things like spawning another program from a program under VMS
works, since it would need to create a new DCL instance then. On the
other hand, I now recollect that VMS don't have spawn as a system call
like RSX do.
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.

This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
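For illustration, something along these lines (the file names are made up):

$ spawn/nowait/output=work.log @work.com    ! 1) carry on while it runs
$ spawn @work.com                           ! 2) wait for the subprocess to finish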

This most certainly is _NOT_ the way you normally run a program on VMS
however. :-)

For example, all user programs listed in the DCL command table run in
the same process as the DCL instance that loads and executes them.
Post by Johnny Billquist
Post by Simon Clubley
When you ask DCL to run a program, _it_ maps the requested program
into the P0 address space, sets it up, and then calls it to start
execution of the user program.
But you say that not only that - it also uses the context of DCL. So
that from an accounting point of view, it's still the same process. What
about process quotas like runtime limits? Do DCL reset these, and DCL
itself is excluded from such? And accounting. When a program runs and is
finished, you get accounting information on how much cpu time was used,
memory, and all kind of stuff. Is DCL then doing that accounting
processing, and not the kernel? A process calling something like exit
will not terminate the process, but just jump back to DCL?
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.

Accounting is the same, unless there are some exceptions I don't know about.
Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
count increase.

A user-mode exit() in a program run from DCL never terminates the process.
The user-mode program exits and control is returned to DCL.
Post by Johnny Billquist
Post by Simon Clubley
When the user program exits, the user-mode pages used by that program,
but _only_ those user-mode pages, are removed from the process address
space, and control returns to DCL to await your next command.
Does DCL do that, or the kernel?
Both. There are system services, but they are called under the control
of DCL. What I can't remember is if they need to be called manually
from DCL code as part of the cleanup or if they are run automatically
as part of some exit handler previously established by DCL. (It's been
a while since I've been in that part of the I&DS manual :-)).

(IIRC, sys$rundwn() is called with a user-mode flag to cause the user-mode
part of the process to be run down. Everyone feel free to correct me if
I am wrong about that. :-))
Post by Johnny Billquist
Post by Simon Clubley
_Now_ do you understand why I am describing the VMS approach in the
way I am ?
In part, yes. I still do not consider DCL to be part of userspace, user
programs or anything like that. It's an OS component, and have rights
and privileges which means it can do anything really. Your ranting about
security issues around that topic is still nonsense to me. But VMS is
certainly doing things a bit odd in some ways that I think are unwise here.
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-18 19:43:00 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
That is completely and totally utterly wrong. However, if you really
believe that (instead of you just doing a David by trolling by making
false statements :-)) it also explains your confusion because VMS works
so differently to what you are clearly used to.
No. I did believe that. I had some recollection that the PIDs were
allocated each time a program was started. Partly (again) coming from
RSX. Structures like PCB, TCB, task headers and so on are setup when a
program is started, and thus every time a program starts, you have a new
context in this sense.
But this is also a place where RSX and VMS differs the most, since in
RSX, the "shell" is in a sense even weirder than VMS, or any other OS I
know of.
But the end result is that every time a program is started, it has it's
own process id. That DCL under VMS actually will be starting everything
as a part of its own process is really weird, and it makes me also
wonder how things like spawning another program from a program under VMS
works, since it would need to create a new DCL instance then. On the
other hand, I now recollect that VMS don't have spawn as a system call
like RSX do.
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal and a UIC, which are given as
arguments to SPWN$. The new process has its own virtual memory, in
which the task image is loaded, all shared libraries are set up with
regard to memory mapping, and all that kind of stuff. SPWN$ is sort of
like a combination of fork() and exec() under Unix.

Which obviously is rather different than what VMS does then.
Post by Simon Clubley
This most certainly is _NOT_ the way you normally run a program on VMS
however. :-)
For example, all user programs listed in the DCL command table run in
the same process as the DCL instance that loads and executes them.
That is no surprise and not so different from lots of systems. Heck,
even in Unix shells, a bunch of stuff is actually built into the
shell itself, and when you give the command, it's all done within the
shell process itself. Some commands are even *required* to be run within
the shell itself, and it would not work to run them as separate programs.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
When you ask DCL to run a program, _it_ maps the requested program
into the P0 address space, sets it up, and then calls it to start
execution of the user program.
But you say that not only that - it also uses the context of DCL. So
that from an accounting point of view, it's still the same process. What
about process quotas like runtime limits? Do DCL reset these, and DCL
itself is excluded from such? And accounting. When a program runs and is
finished, you get accounting information on how much cpu time was used,
memory, and all kind of stuff. Is DCL then doing that accounting
processing, and not the kernel? A process calling something like exit
will not terminate the process, but just jump back to DCL?
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
Post by Simon Clubley
Accounting is the same, unless there are some exceptions I don't know about.
Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
count increase.
Well. No surprise about that. The whole login session does have such
counting, since that's what accounting wants to have, in order to
(potentially) charge users with used resources.
But accounting usually can also report how much CPU time, I/O, memory
and so on individual programs used. I was pretty sure VMS could report
that as well, which would be something logged as soon as a program
finishes. But since this is all done within the DCL context, it means
the process is not finished. So how does this happen, or can VMS not
have accounting that gives this kind of information?
(Yes, it's been a bloody long time since I was admining VMS systems...)
Post by Simon Clubley
A user-mode exit() in a program run from DCL never terminates the process.
The user-mode program exits and control is returned to DCL.
So things jump back to DCL at that point. So exit() would not terminate
the process at all.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
_Now_ do you understand why I am describing the VMS approach in the
way I am ?
In part, yes. I still do not consider DCL to be part of userspace, user
programs or anything like that. It's an OS component, and have rights
and privileges which means it can do anything really. Your ranting about
security issues around that topic is still nonsense to me. But VMS is
certainly doing things a bit odd in some ways that I think are unwise here.
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.
DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
And partly also why there are almost no alternatives to DCL. It's a bit
of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever saw
anything else.

Johnny
Bill Gunshannon
2022-08-18 20:48:01 UTC
Permalink
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a bit
of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever saw
anything else.
Actually, there was. When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL. I know I did it but only for testing. I don't
remember how it was done. Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.

bill
abrsvc
2022-08-18 21:00:23 UTC
Permalink
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a bit
of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever saw
anything else.
Actually, there was. When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL. I know I did it but only for testing. I don't
remember how it was done. Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.
bill
Also, Cerner had their clinical application that was a replacement for DCL. At the time, it was the only CLI replacement application known.

Dan
Arne Vajhøj
2022-08-18 23:21:28 UTC
Permalink
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a
bit of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever
saw anything else.
Actually, there was.  When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL.  I know I did it but only for testing.  I don't
remember how it was done.  Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.
SYSUAF> MOD username /CLI=xxxxxx

It can also be done per session, by the user, at login:

Login: username/CLI=xxxxxx
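Spelled out a little more, the permanent change goes through AUTHORIZE
(username and the CLI name are placeholders here):

$ run sys$system:authorize
UAF> modify username /cli=xxxxxx
UAF> exit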

Arne
Bill Gunshannon
2022-08-19 12:09:48 UTC
Permalink
Post by Arne Vajhøj
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a
bit of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever
saw anything else.
Actually, there was.  When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL.  I know I did it but only for testing.  I don't
remember how it was done.  Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.
SYSUAF> MOD username /CLI=xxxxxx
Login: username/CLI=xxxxxx
Thank you. That jogged my memory.

bill
Scott Dorsey
2022-08-19 13:02:34 UTC
Permalink
Post by Arne Vajhøj
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a
bit of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever
saw anything else.
Actually, there was.  When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL.  I know I did it but only for testing.  I don't
remember how it was done.  Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.
SYSUAF> MOD username /CLI=xxxxxx
Login: username/CLI=xxxxxx
This was kind of like Software Tools for Pr1mos or Cygwin for Windows. It
was just enough like Unix to seem familiar, but not enough like Unix to
actually be familiar. It was just enough different to be frustrating...
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Bill Gunshannon
2022-08-19 14:37:30 UTC
Permalink
Post by Scott Dorsey
Post by Arne Vajhøj
Post by Johnny Billquist
And partly also why there are almost no alternatives to DCL. It's a
bit of a mess, and pretty tricky to write another CLI for VMS.
MCR did exist at one point, and might still, but I'm not sure I ever
saw anything else.
Actually, there was.  When they came out with the first POSIX subsystem
(I really don't know what else to call it) it came with a version of the
Bourne Shell that could be installed on a per user basis as the login
CLI instead of DCL.  I know I did it but only for testing.  I don't
remember how it was done.  Something set up with SYSUAF, I think.
None of my users ever asked for it and even being primarily a Unix user
I preferred DCL on VMS.
SYSUAF> MOD username /CLI=xxxxxx
Login: username/CLI=xxxxxx
This was kind of like Software Tools for Pr1mos or Cygwin for Windows. It
was just enough like Unix to seem familiar, but not enough like Unix to
actually be familiar. It was just enough different to be frustrating...
Are you talking about the CLI or the POSIX Subsystem? The POSIX
Subsystem was very much like the Software Tools Virtual Operating
System (not to be confused with the Kernighan & Plauger Software
Tools which was a handful of utilities but no API). But the ability
to run the Bourne Shell (or any other alternate CLI) is something
much different. It could not be done on Pr1mos and I don't believe
it can be done on Windows. An alternate shell can only be run as
a sub-process to the normal OS CLI. And that can be done on most
any OS, really. STVOS ran on a lot of different systems (including
all the DEC OSes) but I was never aware of a way to make the shell
an alternate CLI like you could do with VMS and the POSIX Subsystem.

And I have long said that the whole POSIX concept was nothing more
than STVOS revived and warmed over. Imagine what POSIX could have
been if the development of the STVOS had continued from its origin
until the present instead of lying fallow for decades only to be
tried again starting from scratch.

On another side note, I wonder if being able to run a Unix-like
shell as a CLI would help with using the install scripts under
GNV?

bill
Scott Dorsey
2022-08-19 22:36:39 UTC
Permalink
Post by Bill Gunshannon
Post by Scott Dorsey
This was kind of like Software Tools for Pr1mos or Cygwin for Windows. It
was just enough like Unix to seem familiar, but not enough like Unix to
actually be familiar. It was just enough different to be frustrating...
Are you talking about the CLI or the POSIX Subsystem? The POSIX
Subsystem was very much like the Softwware Tools Virtual Operating
System (not to be confused with the Kernighan & Plauger Software
Tools which was a handful of utilities but no API). But the ability
to run the Bourne Shell (or any other alternate CLI) is something
much different. It could not be done on Pr1mos and I don't believe
it can be done on Windows. An alternate shell can only be run as
a sub-process to the normal OS CLI. And that can be done on most
any OS, really. STVOS ran on a lot of different systems (including
all the DEC OSes) but I was never aware of a way to make the shell
an alternate CLI like you could do with VMS and the POSIX Subsystem.
SWT on Primos gave you a shell that was kind of like the Bourne Shell
until you tried to do something useful with it and then it turned out
it wasn't exactly like it. It had pipes and redirection but they didn't
quite work the way they did under Unix with easy forks.
Post by Bill Gunshannon
And I have long said that the whole POSIX concept was nothing more
than STVOS revived and warmed over. Imagine what POSIX could have
been if the development of the STVOS had continued from its origin
until the present instead of lying fallow for decades only to be
tried again starting from scratch.
Posix shells and compatibility libraries exist on various operating systems
and exist only to allow them to bid for specific government contracts. In
many cases they pass the compatibility test suites without actually working
in any useful way.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Jan-Erik Söderholm
2022-08-18 22:10:05 UTC
Permalink
Post by Johnny Billquist
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program you
run. Or are you saying that VMS can't have a runtime limit? (runtime, like
in, you're not allowed to use more than 2 CPU seconds, and when you hit
that, you'll be killed.)
Process quotas are *process* quotas. Doesn't matter if you run 1 or 10 EXEs
in that process.

Don't mix up process quotas with the accounting features.
Post by Johnny Billquist
I was pretty sure VMS could report that as
well, which would be something logged as soon as a program finishes.
Yes, you can enable that. But that is an *accounting* feature,
not some quota for the process. The resources used by the EXE
are still accumulated against the *process* quotas.
Post by Johnny Billquist
So things jumps back to DCL at that point. So exit() would not terminate
the process at all.
It depends.
If the EXE runs in a DCL context, the process will return to DCL.
If the EXE runs without a DCL context, exit from the EXE terminates the
process.

It depends on how the process was created.

If you just do a RUN /DETACH on the target EXE itself, there is no DCL
environment. Exit of the EXE terminates the process.

If you RUN /DETACH the image named LOGINOUT.EXE and give it a COM
file as the /input parameter, you will have a DCL environment and
you can do whatever you like in the COM file. Exit from the/an EXE
just returns to DCL and the COM file.
Johnny Billquist
2022-08-21 15:08:00 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Johnny Billquist
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
Process quotas are *process* quotas. Doesn't matter if you run 1 or 10
EXEs in that process.
Don't mixup process quotas with the accounting features.
It's more me being lazy. I was hoping people would understand the concepts
here without having to write every detail in some very specific form.
Post by Jan-Erik Söderholm
Post by Johnny Billquist
I was pretty sure VMS could report that as well, which would be
something logged as soon as a program finishes.
Yes, you can enable that. But that is an *accounting* feature,
not some quota for the process. The resources used by the EXE
are still accumulated against the *process* quotas.
Well. A CPU usage limit would be something you would expect to be applied
to the program you run, and not to your session as a whole. But I'm
starting to get the feeling that VMS can't do this then.

And if a program finishes, but it just means you get back to DCL, then
I'm still wondering how the accounting is done, since the process is
still there, and the kernel doesn't have as much of a clue about what happened.
Post by Jan-Erik Söderholm
Post by Johnny Billquist
So things jumps back to DCL at that point. So exit() would not
terminate the process at all.
It depends.
If the EXE runs in an DCL context, the process will return to DCL.
If the EXE runs without an DCL context, exit from the EXE terminates the
process.
It depends on how the process was created.
If you just do a RUN /DETACH on the target EXE itself, there is no DCL
environment. Exit of the EXE terminates the process.
If you RUN /DETACH the image named LOGINOUT.EXE and give it a COM
file as the /input parameter, you will have an DCL environment and
you can do whatever you like in the COM file. Exit from the/an EXE
just return to DCL and the COM file.
But how is this done from a technical point of view? There is a huge
difference between the kernel getting a call/signal/whatever that the
process should die and then removing all associated resources, and a
return being made to DCL, from where the program was called.

Or does a terminating program always go into the kernel, which then
notices that there is a CLI associated with the process and moves
execution back to the CLI with some additional information that the
program terminated?

Johnny
Simon Clubley
2022-08-22 17:53:43 UTC
Permalink
Post by Johnny Billquist
And if a program finishes, but it just means you get back to DCL, then
I'm still wondering how the accounting is done, since the process is
still there, the kernel don't have as much clue about what happened.
The image-level accounting records are probably written during the
user-mode rundown system service call, but that's just a guess as this
is a part of VMS I have not really looked at.
Post by Johnny Billquist
But how is this done from a technical point of view? There is a huge
difference between the kernel getting a call/signal/whatever that the
process should die, and the kernel removes all associated resources, and
a return being done to DCL, from where the program was called.
As already mentioned, DCL is responsible for kicking off the cleanup of
the resources allocated to the user-mode program when that _program_ exits.
The kernel does the normal process-level cleanup when the _process_ exits.
Post by Johnny Billquist
Or is a program terminating always going into the kernel, and the kernel
then notices that there is a CLI associated here, and it then moves the
execution back to the CLI with some additional information that the
program terminated?
The CLI sits between the user-mode program exiting and the process exiting.

If you manage to crash DCL itself so DCL exits, the process itself exits
as a result.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Simon Clubley
2022-08-19 12:24:31 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which is given as
arguments to SPWN$. The new process have it's own virtual memory, in
which the task image is loaded, all shared libraries are setup with
regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
like a combo of fork() and exec() under Unix.
Which obviously is rather different than what VMS does then.
No. To this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above with RSX in that you do end up with another
process with its own PID.

It's just that after this, a subprocess behaves in the same way as in the
main process, in that the DCL instance running in the subprocess starts
any user programs in the same subprocess just as DCL running in the main
process starts any user programs in the same main process.
Post by Johnny Billquist
Post by Simon Clubley
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
In VMS, CPU runtime limits are documented as being against the process,
although I've never used them. For example:

SUBMIT

/CPUTIME

/CPUTIME=time

Defines a CPU time limit for the batch job. You can specify time
as delta time, 0, INFINITE, or NONE. If the queue on which the
job executes has a defined CPUMAXIMUM value, the smaller of
the SUBMIT command and queue values is used. If the queue on
which the job executes does not have a specified maximum CPU time
limit, the smaller of the SUBMIT command and user authorization
file (UAF) values is used. If the queue on which the job executes
does not have a specified maximum CPU time limit and the UAF has
a specified CPU time limit of NONE, either the value 0 or the
keyword INFINITE allows unlimited CPU time. If you specify the
keyword NONE, the specified queue or UAF value is used. CPU time
values must be greater than or equal to the number specified by
the system parameter PQL_MCPULM.
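So in practice something along these lines (an untested sketch; the command
file name is made up):

$ submit /cputime=0-00:02:00 crunch.com    ! batch job limited to 2 minutes of CPU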
Post by Johnny Billquist
Post by Simon Clubley
Accounting is the same, unless there are some exceptions I don't know about.
Try hitting Ctrl-T repeatedly while at the DCL prompt and watch the I/O
count increase.
Well. No surprise about that. The whole login session does have such
counting, since that's what accounting wants to have, in order to
(potentially) charge users with used resources.
But accounting usually can also report how much CPU time, I/O, memory
and so on individual programs used. I was pretty sure VMS could report
that as well, which would be something logged as soon as a program
finishes. But since this is all done within the DCL context, it means
the process is not finished. So how does this happen, or can VMS not
have accounting that gives this kind of information?
(Yes, it's been a bloody long time since I was admining VMS systems...)
Jan-Erik pointed out one thing I had forgotten about and that was the
optional image-level accounting in addition to the overall process-level
accounting. You still get the normal process-level accounting on top of
the image-level accounting if you use that option however.
Post by Johnny Billquist
Post by Simon Clubley
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.
DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
Actually, no I am not. The point I am making is that a DCL which behaves
in this way increases the available attack surface, compared to more
secure options such as how Unix shells work.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Jan-Erik Söderholm
2022-08-19 13:31:48 UTC
Permalink
Post by Simon Clubley
Jan-Erik pointed out one thing I had forgotten about and that was the
optional image-level accounting in addition to the overall process-level
accounting.
Well, both PROCESS and IMAGE are possible to enable or disable.

So you *can* have image accounting *without* process accounting... :-)
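Roughly like this, if I remember the qualifiers right:

$ set accounting /enable=image /disable=process
$ accounting /type=image /since=today /full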
Johnny Billquist
2022-08-21 15:18:26 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which is given as
arguments to SPWN$. The new process have it's own virtual memory, in
which the task image is loaded, all shared libraries are setup with
regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
like a combo of fork() and exec() under Unix.
Which obviously is rather different than what VMS does then.
No. To this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above with RSX in that you do end up with another
process with its own PID.
It's just that after this, a subprocess behaves in the same way as in the
main process, in that the DCL instance running in the subprocess starts
any user programs in the same subprocess just as DCL running in the main
process starts any user programs in the same main process.
Meaning there is always DCL? That seems to contradict what Jan-Erik said.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
In VMS, CPU runtime limits are documented as being against the process,
CPU limits for a batch process are actually for the whole thing, and not
for individual programs.

Not sure if VMS has CPU limits for individual programs. After all these
messages, it almost sounds like it doesn't.

In RSX, it's a switch to RUN. Like this:

.help run ins tim

 RUN [ddnn:][$]filename /TIME=nM
                        /TIME=nS

 Sets the time limit for a task that uses the CPU. When the time limit
 expires, the task is aborted and a message is displayed. If the task
 being run is privileged, this keyword is ignored.

 Specify the time limit in minutes (M) or in seconds (S); M is the default.
 (Valid only on systems with Resource Accounting.)


Obviously, if RSX had worked like VMS here, you would have a serious
headache if DCL was running in the same process context, as that process
context is killed at that point.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.
DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
Actually, no I am not. The point I am making is that a DCL which behaves
in this way increases the available attack surface, compared to more
secure options such as how Unix shells work.
That there are more risks with code that has such rights is hardly new,
is it?
You could argue that this design makes it more sensitive to bugs causing
security problems, and I'm sure everyone would agree.

No different than any other part of the kernel. A bug anywhere in the
kernel has the same potential problem.

From a security point of view, then, minimizing the size of the kernel
and other subsystems that run with such elevated rights makes the risk
easier to assess, analyze and fix. Nothing new in that either.

So there isn't really anything new under the sun here. If you find a bug
in DCL, good. Report it, and let's hope it gets fixed. Is there a
security issue in DCL getting the rights of the executing program? Nope.

Johnny
Jan-Erik Söderholm
2022-08-21 15:27:23 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which is given as
arguments to SPWN$. The new process have it's own virtual memory, in
which the task image is loaded, all shared libraries are setup with
regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
like a combo of fork() and exec() under Unix.
Which obviously is rather different than what VMS does then.
No. To this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above with RSX in that you do end up with another
process with its own PID.
It's just that after this, a subprocess behaves in the same way as in the
main process, in that the DCL instance running in the subprocess starts
any user programs in the same subprocess just as DCL running in the main
process starts any user programs in the same main process.
Meaning there is always DCL? That seems to contradict what Jan-Erik said.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
In VMS, CPU runtime limits are documented as being against the process,
CPU limits for a batch process is actually for the whole thing, and not for
individual programs.
Not sure if VMS have CPU limits for individual programs. After all these
messages, it almost sounds like it don't.
.help run ins tim
 RUN [ddnn:][$]filename /TIME=nM
                        /TIME=nS
 Sets the time limit for a task that uses the CPU. When the time limit
expires,
 the task is aborted and a message is displayed. If the task being run is
 privileged, this keyword is ignored.
 Specify the time limit in minutes (M) or in seconds (S); M is the default.
 (Valid only on systems with Resource Accounting.)
Sure, an RSX "task" is like a VMS "process".

You can of course start a VMS EXE in a new "detached process" and
run it without a DCL environment. Then there is nothing but that
EXE running in that process. And when the EXE exits, the process
is deleted.

But you can also, if you want or need to, start the same EXE in a DCL
environment by calling LOGINOUT.EXE and having a COM file as the
sys$input to that EXE where you run your main EXE. You might need
to have a "script" environment in your detached process where you
run different EXEs.
Post by Johnny Billquist
Obviously, if RSX had worked like VMS here, you would have a serious
headache if DCL was running in the same process context, as that process
context is killed at that point.
I'd say that in most cases you just run the EXE in the detached
process without a DCL environment. So it is a bit like running
an RSX EXE in a new "task".
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.
DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
Actually, no I am not. The point I am making is that a DCL which behaves
in this way increases the available attack surface, compared to more
secure options such as how Unix shells work.
That there are more risks with code that have such rights is hardly new, is
it?
You could argue that this design makes it more sensitive to bugs causing
security problems, and I'm sure everyone would agree.
No different than any other part of the kernel. A bug anywhere in the
kernel have the same potential problem.
From a security point of view then, minimizing the size of the kernel and
other subsystems that runs with such elevated rights makes the risk easier
to assess, analyze and fix. Nothing new in that either.
So there isn't really anything new under the sun here. If you find a bug in
DCL, good. Report it, and let's hope it gets fixed. Is there a security
issue that DCL gets the rights of the executing program? Nope.
  Johnny
Dave Froble
2022-08-21 18:52:06 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which is given as
arguments to SPWN$. The new process have it's own virtual memory, in
which the task image is loaded, all shared libraries are setup with
regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
like a combo of fork() and exec() under Unix.
Which obviously is rather different than what VMS does then.
No. To this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above with RSX in that you do end up with another
process with its own PID.
It's just that after this, a subprocess behaves in the same way as in the
main process, in that the DCL instance running in the subprocess starts
any user programs in the same subprocess just as DCL running in the main
process starts any user programs in the same main process.
Meaning there is always DCL? That seems to contradict what Jan-Erik said.
Actually, I'm not sure of that.

An interactive process has a CLI, whatever is specified in the SYSUAF record
for that user account. Usually DCL, but it does not have to be DCL.

A batch job has a batch command file that specifies activity.

A detached process can read from a command file, however, I do not think it has
to have such. While I've used detached processes, I usually have a command file
for activity. Not sure it is required.

Now, normally on VMS, there is some kind of SYS$COMMAND, SYS$INPUT, SYS$OUTPUT,
and SYS$ERROR. Or some other method of seeing completion, whether successful or
not.

The I&DS book(s) would be helpful ...
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
In VMS, CPU runtime limits are documented as being against the process,
CPU limits for a batch process is actually for the whole thing, and not for
individual programs.
Not sure if VMS have CPU limits for individual programs. After all these
messages, it almost sounds like it don't.
It's been a while, but I'm pretty sure that CPU and time limits are on a
process. I've never used them.
Post by Johnny Billquist
.help run ins tim
RUN [ddnn:][$]filename /TIME=nM
/TIME=nS
Sets the time limit for a task that uses the CPU. When the time limit expires,
the task is aborted and a message is displayed. If the task being run is
privileged, this keyword is ignored.
Specify the time limit in minutes (M) or in seconds (S); M is the default.
(Valid only on systems with Resource Accounting.)
If I wished such, I'd most likely use a timer AST.
Post by Johnny Billquist
Obviously, if RSX had worked like VMS here, you would have a serious headache if
DCL was running in the same process context, as that process context is killed
at that point.
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
It's only nonsense until you realise that, unlike on Linux, DCL has access
to the privileges of the programs it runs.
DCL runs as a part of the kernel. It has the potential to have any
privilege it wants, if it was malicious. User privileges are pretty
irrelevant and uninteresting. And yes, bugs in DCL can be rather serious
because of the rights and abilities it has.
This is where you seem to miss the point. DCL is already at a point
where, if it wanted, it could do anything. Which is why users cannot
write their own replacements for DCL and run them, without having
serious privileges.
Actually, no I am not. The point I am making is that a DCL which behaves
in this way increases the available attack surface, compared to more
secure options such as how Unix shells work.
That there are more risks with code that have such rights is hardly new, is it?
You could argue that this design makes it more sensitive to bugs causing
security problems, and I'm sure everyone would agree.
A friend got tired of hearing about bugs, so he implemented a "bug" in the
terminal I/O routines. If active, the "bug" would come out and crawl around the
screen. Some people are easily bored.
Post by Johnny Billquist
No different than any other part of the kernel. A bug anywhere in the kernel
have the same potential problem.
From a security point of view then, minimizing the size of the kernel and other
subsystems that runs with such elevated rights makes the risk easier to assess,
analyze and fix. Nothing new in that either.
Not having bugs is an even better idea ...
Post by Johnny Billquist
So there isn't really anything new under the sun here. If you find a bug in DCL,
good. Report it, and let's hope it gets fixed. Is there a security issue that
DCL gets the rights of the executing program? Nope.
I don't have a problem with that.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Jan-Erik Söderholm
2022-08-21 20:29:48 UTC
Permalink
Post by Dave Froble
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
VMS has LIB$SPAWN(), which is a library wrapper around the lower-level
system services. It also has a "$ spawn" DCL command.
This allows you to either 1) run something in a subprocess while you
carry on in the main process or 2) wait for the subprocess to complete
(depending on the options you use).
Under RSX, SPWN$ is the system call. And it creates a new process, which
is also associated with a terminal, and a UIC, which is given as
arguments to SPWN$. The new process have it's own virtual memory, in
which the task image is loaded, all shared libraries are setup with
regards to memory mapping, and all that kind of stuff. SPWN$ is sortof
like a combo of fork() and exec() under Unix.
Which obviously is rather different than what VMS does then.
No. To this point in the process lifecycle, a spawn on VMS ends up doing
the same as you describe above with RSX in that you do end up with another
process with its own PID.
It's just that after this, a subprocess behaves in the same way as in the
main process, in that the DCL instance running in the subprocess starts
any user programs in the same subprocess just as DCL running in the main
process starts any user programs in the same main process.
Meaning there is always DCL? That seems to contradict what Jan-Erik said.
Actually, I'm not sure of that.
There is a DCL environment if you start your detached process
by using the LOGINOUT.EXE system image to start/create it.

But you do not have to; if you do not need a DCL environment,
you just let your detached process run your own EXE directly.

Without a DCL environment:

$ run /detached [other switches as needed] MYEXE.EXE

With a DCL environment:

$ run /detached /input=myexe.com sys$system:loginout.exe

LOGINOUT does a full "login" of the detached process, creates
a DCL environment in it, and starts reading the /input file just
as when DCL processes any COM file.

The MYEXE.COM file can have any setup needed for the main EXE
and then do a normal RUN of it. Such as process-unique logical
names or whatever.
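A minimal MYEXE.COM might look something like this (all the names are
made up):

$ ! MYEXE.COM - wrapper run by LOGINOUT in the detached process
$ define my_config dsa0:[myapp]config.dat
$ run dsa0:[myapp]myexe.exe
$ logout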

We have background (detached) processes started in both ways
depending on the requirements of the process.
Post by Dave Froble
An interactive process haws a CLI, whatever is specified in the SYSUAF
record for that user account.  Usually DCL, but it does not have to be DCL.
A batch job has a batch command file that specifies activity.
I expect any batch job to have a DCL environment.
Post by Dave Froble
A detached process can read from a command file, however, I do not think it
has to have such.  While I've used detached processes, I usually have a
command file for activity.  Not sure it is required.
No, you can start an EXE directly, if that is fine.
Post by Dave Froble
Now, normally on VMS, there is some kind of SYS$COMMAND, SYS$INPUT,
SYS$OUTPUT, and SYS$ERROR.  Or some other method of seeing completion,
whether successful or not.
But those are the process "permanent" logical names. As far as I know,
any process has these defined by the system at process creation.
Post by Dave Froble
The I&DS book(s) would be helpful ...
Post by Johnny Billquist
Post by Simon Clubley
Post by Johnny Billquist
Post by Simon Clubley
The quotas are against the process, not the program. When you try to
run a program that doesn't fit into those quotas, the account or system
quotas need adjusting to give the _process_ (not the program) more quota.
Um. Sure, I can see that for things like memory limits. But if we talk
about CPU runtime limits, it's usually meant for that specific program
you run. Or are you saying that VMS can't have a runtime limit?
(runtime, like in, you're not allowed to use more than 2 CPU seconds,
and when you hit that, you'll be killed.)
In VMS, CPU runtime limits are documented as being against the process;
CPU limits for a batch process are actually for the whole thing, and not for
individual programs.
Not sure if VMS has CPU limits for individual programs. After all these
messages, it almost sounds like it doesn't.
It's been a while, but I'm pretty sure that CPU and time limits are on a
process. I've never used them.
Sometimes you wish you had, when you get that runaway process... :-)
Post by Dave Froble
Post by Johnny Billquist
.help run ins tim
 RUN [ddnn:][$]filename /TIME=nM
                        /TIME=nS
 Sets the time limit for a task that uses the CPU. When the time limit expires,
 the task is aborted and a message is displayed. If the task being run is
 privileged, this keyword is ignored.
But that creates a new RSX process (called "task" in RSX).

It is the same as doing this on VMS:

$ run /detached /time_limit=00:10:00 [other switches as needed] MYEXE.EXE

A 10 min CPU limit in that case. Can also be used for the other
case with a DCL environment, of course. It is still valid for the
whole process, no matter if it is a single EXE or a DCL environment.

$ help run process /time

RUN

Process

/TIME_LIMIT

/TIME_LIMIT=limit

Specifies the maximum amount of CPU time (in delta time) a
created process can use. CPU time is allocated to the created
process in units of 10 milliseconds. When it has exhausted its
CPU time limit quota, the created process is deleted.
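The same limit can also be set when creating the process from a program:
SYS$CREPRC takes a quota list, and PQL$_CPULM is the CPU time limit in
10-millisecond units. A rough sketch (untested; assumes the usual packed
quota-list layout of a one-byte code followed by a longword value, ended
with PQL$_LISTEND, and a made-up image name):

    #include <starlet.h>
    #include <descrip.h>
    #include <pqldef.h>

    #pragma member_alignment save
    #pragma nomember_alignment
    struct pql_item { unsigned char code; unsigned int value; };
    #pragma member_alignment restore

    int create_with_cpu_limit(void)
    {
        $DESCRIPTOR(image, "DKA0:[MYDIR]MYEXE.EXE");    /* placeholder */
        struct pql_item quota[] = {
            { PQL$_CPULM,   10 * 60 * 100 },   /* 10 minutes, in 10-ms units */
            { PQL$_LISTEND, 0 }
        };
        unsigned int pid = 0;
        /* Same call as in the earlier sketch, with the quota list passed
           as the seventh argument. */
        return sys$creprc(&pid, &image, 0, 0, 0, 0,
                          (unsigned int *)quota, 0, 4, 0x00010004, 0, 0);
    }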
Bob Gezelter
2022-08-12 09:52:06 UTC
Permalink
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
David Turner
David,

I remember asking a similar question a ways back, with respect to the x86-64 port. The comment I received concerning a binary emulator on OVMS x86-64 was that there were features of the instruction set covered by Intel patents. Before doing a project like this, one would need to determine whether that is actually the case.

Ignoring the patent issue, the instruction set is fully documented, albeit significant in size. Technically, it could be done, particularly if the scope were limited to the non-privileged instruction set. Unlike the VAX case, the market is probably smaller, as recompiling the source code is a far better option.

There are those who are bound to other issues, e.g., regulated configurations, but that requires full system emulation, which has correctly been identified as a far wider scope than just user-mode execution.

- Bob Gezelter, http://www.rlgsc.com
Stephen Hoffman
2022-08-12 22:15:49 UTC
Permalink
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or
i6 (16 cores max)
Nope. Not now, not particularly effectively, and not anytime soon.

Used Itanium server prices and availability will be a bellwether for
the success of VSI OpenVMS x86-64.

Though if somebody wants to try this:
http://iccd.et.tudelft.nl/Proceedings/2004/22310288.pdf
--
Pure Personal Opinion | HoffmanLabs LLC
John Dallman
2022-08-13 08:49:00 UTC
Permalink
Post by David Turner
Does anyone here think that this is an option for people not
willing or able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4
or i6 (16 cores max)
It would be useful, but it does not exist. Stromasys seem to be the
leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
PA-RISC - but they show no sign of launching an Itanium emulator. You
could always ask them about it? https://www.stromasys.com/

John
Scott Dorsey
2022-08-13 20:54:13 UTC
Permalink
Post by John Dallman
It would be useful, but it does not exist. Stromasys seem to be the
leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
PA-RISC - but they show no sign of launching an Itanium emulator. You
could always ask them about it? https://www.stromasys.com/
It is very, very hard to build an efficient emulator for the itanium, which
is part of why HP didn't actually realize how bad the architecture was until
they were close to having silicon on the die.

Although people in this newsgroup keep referring to itanium as a risc machine,
it's not at all a risc machine. It's a VLIW architecture where the instruction
actually sets the bits to route the data within the processor rather than just
saying what operations to perform. That is, it's basically microcode instead
of a normal operating instruction code.

This means that the actual number of possible operations that you can perform
is enormous, and a lot of the instructions themselves aren't completely
documented. You can do weird combinations of operations in one instruction,
routing an accumulator into several different parts of the alu and then picking
pieces of each of the alu outputs and putting them into another register.

Getting the compiler to efficiently take advantage of the VLIW architecture
is really, really hard, and not enough actual work got put into it to make
the Intel compiler good enough. It might have taken decades to make it good.

Anyway, because of this, either you look at the instructions that the compiler
generates and you emulate those and hope nobody runs any code that didn't
come from that compiler, or you simulate at gate level and get an emulator
that is accurate and reliable and slow as molasses.

It's a really interesting approach to building a computer, going in a very
different direction than either CISC or RISC architectures, but it relies
entirely on either very sophisticated compilers or very sophisticated assembler
programmers, and there remains a shortage of both.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Robert A. Brooks
2022-08-13 21:11:05 UTC
Permalink
Post by Scott Dorsey
Post by John Dallman
It would be useful, but it does not exist. Stromasys seem to be the
leading vendor of emulators - they support VAX, Alpha, PDP-11, SPARC and
PA-RISC - but they show no sign of launching an Itanium emulator. You
could always ask them about it? https://www.stromasys.com/
It is very, very hard to build an efficient emulator for the itanium, which
is part of why HP didn't actually realize how bad the architecture was until
they were close to having silicon on the die.
Although people in this newsgroup keep referring to itanium as a risc machine,
it's not at all a risc machine. It's a VLIW architecture where the instruction
actually sets the bits to route the data within the processor rather than just
saying what operations to perform. That is, it's basically microcode instead
of a normal operating instruction code.
https://en.wikipedia.org/wiki/Multiflow
--
--- Rob
Johnny Billquist
2022-08-13 21:35:07 UTC
Permalink
Post by Scott Dorsey
Although people in this newsgroup keep referring to itanium as a risc machine,
it's not at all a risc machine. It's a VLIW architecture where the instruction
actually sets the bits to route the data within the processor rather than just
saying what operations to perform. That is, it's basically microcode instead
of a normal operating instruction code.
I wouldn't agree with that. Yes, it's not really RISC, and yes, it's
most definitely VLIW.
However, you have a clear set of defined opcodes, with arguments, and
all that stuff. No different than any other processor. It's just that
because of the long word, you stuff multiple instructions into one word,
and then you get to the point of all the rules of which instructions can
actually be combined in one word, since you do not have enough execution
units to perform all the operations for all the instructions in one word
in parallel. This is where scheduling comes in, and with VLIW, it was
thought that the compiler can work this out, reorder code, and come up
with the optimal ordering and combination of things to do to maximize
the utilization of the execution units.
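To make the "multiple instructions into one word" part concrete: each
IA-64 bundle is 128 bits, a 5-bit template that says which execution
units the three 41-bit instruction slots need, plus the three slots
themselves. A toy C sketch of just the bit slicing (illustrative only,
made-up test data, not taken from any real emulator):

    #include <stdint.h>
    #include <stdio.h>

    /* One 128-bit bundle as two little-endian 64-bit halves. */
    struct ia64_bundle { uint64_t lo, hi; };

    static void decode_bundle(struct ia64_bundle b)
    {
        unsigned tmpl  = (unsigned)(b.lo & 0x1f);                 /* bits 0-4    */
        uint64_t slot0 = (b.lo >> 5) & 0x1ffffffffffULL;          /* bits 5-45   */
        uint64_t slot1 = ((b.lo >> 46) | (b.hi << 18))
                         & 0x1ffffffffffULL;                      /* bits 46-86  */
        uint64_t slot2 = (b.hi >> 23) & 0x1ffffffffffULL;         /* bits 87-127 */
        printf("template %02x  slots %011llx %011llx %011llx\n",
               tmpl, (unsigned long long)slot0,
               (unsigned long long)slot1, (unsigned long long)slot2);
    }

    int main(void)
    {
        /* Arbitrary bit pattern, not a real bundle. */
        struct ia64_bundle b = { 0x0123456789abcdefULL, 0xfedcba9876543210ULL };
        decode_bundle(b);
        return 0;
    }

The template is what carries the combination rules: only the mixes of
unit types that have a template encoding can be bundled together at all.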

As opposed to the Alpha, for example, which instead dynamically can
reorder instructions to keep all execution units busy.

The Alpha thus is more complex in the silicon, since the rescheduling
and resource allocation, along with making it all behave correctly,
is pretty complex. On the other hand, the compiler doesn't really have
to be so clever.

And it turned out that statically working this out isn't just too
complex in the generic case. It's not even really possible when you have
work that is unknown at compile time.

The dynamic rescheduling deals with this much better. In addition, with
VLIW, you run into the same problem some other RISC CPUs exposed,
where things like the delayed branch slot, while considered a great idea
at one point, became one of the worst Achilles heels of the SPARC later
on, since every implementation had to implement that same behavior, even
when it was no longer needed.

VLIW is bad in the sense that if you later want to add more execution
units, and more instructions into the word, you just can't. You are
locking yourself into the current design limits, based on current
technology, making future development very hard.

It's just a dead end, except for more specialized problems, where it
works well. What Alpha did was actually the right thing. But that whole
thing is moot now. We have x86, which has had so many resources poured
into it that it's hard to displace. ARM seems to be the only realistic
alternative still around. ARM, on the other hand, can potentially benefit
from at least some of the same solutions that Alpha had.

Johnny
gah4
2022-08-13 22:09:44 UTC
Permalink
On Saturday, August 13, 2022 at 2:35:11 PM UTC-7, Johnny Billquist wrote:

(snip)
Post by Johnny Billquist
It's just a dead end, except for more specialized problems, where it
works well. What Alpha did was actually the right thing. But that whole
thing is moot now. We have x86, which has had so many resources poured
into it that it's hard to displace. ARM seems to be the only realistic
alternative still around. ARM, on the other hand, can potentially benefit
from at least some of the same solutions that Alpha had.
I believe RISC-V is on its way to a realistic alternative, though
maybe not there yet.
Simon Clubley
2022-08-15 17:37:25 UTC
Permalink
Post by gah4
(snip)
Post by Johnny Billquist
It's just a dead end, except for more specialized problems, where it
works well. What Alpha did was actually the right thing. But that whole
thing is moot now. We have x86, which has had so many resources poured
into it that it's hard to displace. ARM seems to be the only realistic
alternative still around. ARM, on the other hand, can potentially benefit
from at least some of the same solutions that Alpha had.
I believe RISC-V is on its way to a realistic alternative, though
maybe not there yet.
I keep looking at RISC-V. I will become _much_ more interested when
you can get a RISC-V board at Raspberry Pi or BeagleBone Black prices
and with the capabilities of those boards.

Once it reaches that level, that's when it is _really_ going to take
off (IMHO), but it's not there yet.

As with the ARM stuff, you need that price/functionality point to get
enough people to start playing with them to build a critical mass of
interested people.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-13 22:33:53 UTC
Permalink
By the way, since people asked about IA64 emulators, and the general
belief that they don't exist and are too difficult to do.

They do exist, and have for a long time. It's not that complex from this
point of view, but of course, performance is probably nowhere near where
anyone would actually want to use it for production.

See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/

Last updated in 2004. But that is how they developed all the tooling and
so on before they had actual hardware.

And to correct myself and others a little: IA64 isn't really just a VLIW
machine. It also incorporated EPIC, which is sort of an attempt to figure
out partly at run time which bundles of instructions could be
parallelized.
See: https://en.wikipedia.org/wiki/Explicitly_parallel_instruction_computing

It was still crap though.

Johnny
Dave Froble
2022-08-14 03:27:55 UTC
Permalink
By the way, since people asked about IA64 emulators, and the general belief that
they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from this point
of view, but of course, performance is probably nowhere near where anyone would
actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the tooling and so on
before they had actual hardware.
And to correct myself and others a little: IA64 isn't really just a VLIW
machine. It also incorporated EPIC, which is sort of an attempt to figure
out partly at run time which bundles of instructions could be parallelized.
See: https://en.wikipedia.org/wiki/Explicitly_parallel_instruction_computing
It was still crap though.
Johnny
I seem to recall that at some point HP engineers tried to tell management that
VLIW was a bad idea, and another path (perhaps Alpha which they then had) should
be taken. HP management would not hear of it. Don't remember when this was.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
John Dallman
2022-08-14 14:26:00 UTC
Permalink
Post by Dave Froble
I seem to recall that at some point HP engineers tried to tell
management that VLIW was a bad idea, and another path (perhaps
Alpha which they then had) should be taken. HP management would
not hear of it. Don't remember when this was.
That's consistent with HP management's behaviour in 2002-04, when it was
becoming clear that (a) Intel's plan to replace x86 with Itanium had been
wrecked by AMD's x86-64 and (b) making Windows and HP-UX software run
fast on Itanium was quite hard. At this point, HP made a lot of noise
about how they were "Betting the company on Itanium" and quite a few
companies felt they needed to become less reliant on HP.

Later on, an HP person said "You're biased against Itanium!" and our
chief of operations responded "We think of ourselves as well-informed."

John
John Dallman
2022-08-14 09:02:00 UTC
Permalink
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the
general belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from
this point of view, but of course, performance is probably nowhere
near where anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the
tooling and so on before they had actual hardware.
Is that page still up? I can't access it.

In 1999, when trying to port software to Windows Itanium, I had a copy of
Intel's emulator for Windows. It was ... slow. Too slow to actually be
useful for software development, never mind production. Part of this was
because it ran on 32-bit x86. It could have run faster on Alpha, but
Intel said "they couldn't do that, could they?"

Intel thought of it as the fast simulator, because it didn't do
gate-level emulation. Heaven knows how slow that was. One of the early
indicators of problems with the project was their answer when I asked if
the emulator was generated from the formal model of the processor. They
didn't understand the question.

John
Johnny Billquist
2022-08-14 10:10:02 UTC
Permalink
Post by John Dallman
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the
general belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from
this point of view, but of course, performance is probably nowhere
near where anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the
tooling and so on before they had actual hardware.
Is that page still up? I can't access it.
It works for me. No idea what the problem might be for you.
Post by John Dallman
In 1999, when trying to port software to Windows Itanium, I had a copy of
Intel's emulator for Windows. It was ... slow. Too slow to actually be
useful for software development, never mind production. Part of this was
because it ran on 32-bit x86. It could have run faster on Alpha, but
Intel said "they couldn't do that, could they?"
:-)
But I think performance wouldn't exactly have been great on an Alpha
either. Better, but not useful.
Post by John Dallman
Intel thought of it as the fast simulator, because it didn't do
gate-level emulation. Heaven knows how slow that was. One of the early
indicators of problems with the project was their answer when I asked if
the emulator was generated from the formal model of the processor. They
didn't understand the question.
I would sort of have expected that they'd know and would have answered
"no". But not even understanding the question would be a bad sign
indeed. I wonder if they even had a formal model.

Johnny
Jan-Erik Söderholm
2022-08-14 12:28:30 UTC
Permalink
Post by Johnny Billquist
Post by John Dallman
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the
general belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from
this point of view, but of course, performance is probably nowhere
near where anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the
tooling and so on before they had actual hardware.
Is that page still up? I can't access it.
It works for me. No idea what the problem might be for you.
Doesn't work for me. Gives "www.irisa.fr doesn't respond".

First hit when googling "irisa" is www.irisa.fr, but doesn't work either.
Johnny Billquist
2022-08-14 13:02:18 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Johnny Billquist
Post by John Dallman
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the
general belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from
this point of view, but of course, performance is probably nowhere
near where anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the
tooling and so on before they had actual hardware.
Is that page still up? I can't access it.
It works for me. No idea what the problem might be for you.
Doesn't work for me. Gives "www.irisa.fr doesn't respond".
First hit when googling "irisa" is www.irisa.fr, but doesn't work either.
Seems to have stopped working for me as well now.
I got the link from the Itanium wikipedia page.

Well, there is always the wayback machine (those people should really
get some kudos...)

https://web.archive.org/web/20220410003719/http://www.irisa.fr/caps/projects/ArchiCompil/iato/

Johnny
John Dallman
2024-02-27 18:02:00 UTC
Permalink
Post by Johnny Billquist
Post by Jan-Erik Söderholm
Doesn't work for me. Gives "www.irisa.fr doesn't respond".
First hit when googling "irisa" is www.irisa.fr, but doesn't work either.
Seems to have stopped working for me as well now.
I got the link from the Itanium wikipedia page.
Working now, and I took the chance to grab all the files.

John
Simon Clubley
2024-02-27 18:23:06 UTC
Permalink
Post by John Dallman
Post by Johnny Billquist
Post by Jan-Erik Söderholm
Doesn't work for me. Gives "www.irisa.fr doesn't respond".
First hit when googling "irisa" is www.irisa.fr, but doesn't work either.
Seems to have stopped working for me as well now.
I got the link from the Itanium wikipedia page.
Working now, and I took the chance to grab all the files.
Is this a full-system emulator or just a CPU emulator ?

[From what I can tell from the webpage, it appears to be another CPU
emulator only, just like Ski.]

Thanks,

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
John Dallman
2024-02-27 22:29:00 UTC
Permalink
Post by Simon Clubley
Is this a full-system emulator or just a CPU emulator ?
[From what I can tell from the webpage, it appears to be another CPU
emulator only, just like Ski.]
A bit more than just a CPU emulator, but not a full-system emulator. From
the documentation PDF:

* ISA library
A library that implements the IA64 instruction set.
* ELF library
A library that implements the support for IA64 binary executables
(this does not handle dynamic linking at present).
* KRN library
A library that implements the support for Linux compatible IA64
system calls (as far as Kernel 2.4).
* MAC library
A library that implements the support for detailed architectural
simulation.
* ECU library
A library that implements the support for special architectures.

If the documentation is correct, then making an IA64 VMS emulator for
x86-64 VMS would require, at least:

* Extending the ELF library to cope with dynamically linked executables
and libraries.
* Creating a system call translation layer for VMS. This would be a lot
easier with the VMS source available.
* Fixing bugs that doubtless exist in the libraries.
* Getting the instruction set library to run at a reasonable speed.

John
Simon Clubley
2024-02-28 18:03:13 UTC
Permalink
Post by John Dallman
If the documentation is correct, then making an IA64 VMS emulator for
* Extending the ELF library to cope with dynamically linked executables
and libraries.
* Creating a system call translation layer for VMS. This would be a lot
easier with the VMS source available.
* Fixing bugs that doubtless exist in the libraries.
* Getting the instruction set library to run at a reasonable speed.
Thanks John.

So direct execution of some standalone Itanium VMS user-mode executables
might be possible with enough effort, but no running Itanium VMS as an
entity in its own right.

It really does speak to how complex the Itanium architecture is that
nobody has ever done an Itanium full-system emulator. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2024-02-28 19:20:36 UTC
Permalink
Post by Simon Clubley
Post by John Dallman
If the documentation is correct, then making an IA64 VMS emulator for
* Extending the ELF library to cope with dynamically linked executables
and libraries.
* Creating a system call translation layer for VMS. This would be a lot
easier with the VMS source available.
* Fixing bugs that doubtless exist in the libraries.
* Getting the instruction set library to run at a reasonable speed.
So direct execution of some standalone Itanium VMS user-mode executables
might be possible with enough effort, but no running Itanium VMS as an
entity in its own right.
It really does speak to how complex the Itanium architecture is that
nobody has ever done an Itanium full-system emulator. :-)
That and lack of demand (demand = businesses willing to
pay for such an emulator not hobbyists that think it could
be fun with such an emulator).

Maybe it will change. HP-UX is not being ported to x86-64
as far as I know, so *if* some businesses do not want to
migrate from HP-UX/Itanium to Linux/x86-64, then demand
for an Itanium emulator may rise.

(note the *if* - I don't know any HP-UX people)

Arne
Hans Bachner
2024-02-28 23:57:33 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
Post by John Dallman
If the documentation is correct, then making an IA64 VMS emulator for
* Extending the ELF library to cope with dynamically linked executables
   and libraries.
* Creating a system call translation layer for VMS. This would be a lot
   easier with the VMS source available.
* Fixing bugs that doubtless exist in the libraries.
* Getting the instruction set library to run at a reasonable speed.
So direct execution of some standalone Itanium VMS user-mode executables
might be possible with enough effort, but no running Itanium VMS as an
entity in its own right.
It really does speak to how complex the Itanium architecture is that
nobody has ever done an Itanium full-system emulator. :-)
That and lack of demand (demand = businesses willing to
pay for such an emulator not hobbyists that think it could
be fun with such an emulator).
Maybe it will change. HP-UX is not being ported to x86-64
as far as I know, so *if* some businesses do not want to
migrate from HP-UX/Itanium to Linux/x86-64, then demand
for an Itanium emulator may rise.
(note the *if* - I don't know any HP-UX people)
Well... I know VMS customers who stepped back from Itanium to Alpha
because an Alpha emulator was available (they used a specific PCI(e)
card for their application).

HP-UX customers could step back to PA-RISC instead of porting to Linux.
Stromasys offers a PA-RISC emulator.

Hans.
Arne Vajhøj
2024-02-29 00:23:23 UTC
Permalink
Post by Hans Bachner
Post by Arne Vajhøj
Maybe it will change. HP-UX is not being ported to x86-64
as far as I know, so *if* some businesses do not want to
migrate from HP-UX/Itanium to Linux/x86-64, then demand
for an Itanium emulator may rise.
(note the *if* - I don't know any HP-UX people)
Well... I know VMS customers who stepped back from Itanium to Alpha
because an Alpha emulator was available (they used a specific PCI(e)
card for their application).
HP-UX customers could step back to PA-RISC instead of porting to Linux.
Stromasys offers a PA-RISC emulator.
HP-UX/PA - I was not even aware that recent HP-UX still runs on PA.

That could be an option for the HP-UX people.

Do the relevant ISVs like Oracle still support HP-UX/PA?

Arne
John Dallman
2024-02-29 08:06:00 UTC
Permalink
Post by Arne Vajhøj
Maybe it will change. HP-UX is not being ported to x86-64
as far as I know, so *if* some businesses do not want to
migrate from HP-UX/Itanium to Linux/x86-64, then demand
for an Itanium emulator may rise.
HP-UX isn't all that different from Linux, and I seriously doubt there
would be enough businesses that want to stay with HP-UX badly enough.

John
Dave Froble
2024-02-28 21:39:56 UTC
Permalink
Post by Simon Clubley
Post by John Dallman
If the documentation is correct, then making an IA64 VMS emulator for
* Extending the ELF library to cope with dynamically linked executables
and libraries.
* Creating a system call translation layer for VMS. This would be a lot
easier with the VMS source available.
* Fixing bugs that doubtless exist in the libraries.
* Getting the instruction set library to run at a reasonable speed.
Thanks John.
So direct execution of some standalone Itanium VMS user-mode executables
might be possible with enough effort, but no running Itanium VMS as an
entity in its own right.
It really does speak to how complex the Itanium architecture is that
nobody has ever done an Itanium full-system emulator. :-)
Simon.
What would be the point?

Before it was considered a bad idea, it was still available. They are still
available used. I got one which hasn't been powered on in months. Want it?

Emulators allowed use of discontinued architectures that people actually wanted
to run. I don't know anyone who really wants to run an itanic. Do you?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Scott Dorsey
2024-02-28 23:10:00 UTC
Permalink
Post by Dave Froble
Before it was considered a bad idea, it was still available. They are still
available used. I got one which hasn't been powered on in months. Want it?
Emulators allowed use of discontinued architectures that people actually wanted
to run. I don't know anyone who really wants to run an itanic. Do you?
Well, that's sort of the thing. MAYBE the Itanium might actually have been
a viable architecture if the compilers could have been made smart enough.
But this turned out to be a whole lot harder than the Intel crew expected.

The idea was that with the long instruction word, compilers could have
multiple operations taking place across the chip in ways that pipelining
microcoded machines could not do. But in fact, the actual utilization of
processor elements was much worse when it actually came down to the wire.
Could this have been corrected with smarter compilers? That's the question
nobody can really answer. And now there is no interest in answering it.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
John Dallman
2024-02-29 08:39:00 UTC
Permalink
Post by Scott Dorsey
Well, that's sort of the thing. MAYBE the Itanium might actually
have been a viable architecture if the compilers could have been
made smart enough. But this turned out to be a whole lot harder
than the Intel crew expected.
No, it couldn't. The problem is the delays in accessing memory.

EPIC requires the compilers to issue speculative loads far enough in
advance to keep the processor from stalling waiting for memory for most
of the time. However, that doesn't work: the information isn't available
enough of the time. The compiler also doesn't know what's in what level
of cache, because it's /impossible/ to know that when code is running on
a multi-tasking OS that is taking interrupts.

Out-of-order execution, as used on modern x86 processors (and ARM, POWER,
IBM Z, and anything else that's still competitive) deals with the memory
and cache problems by letting the data dependencies for instructions be
resolved dynamically as data arrives. This works much better.
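A trivial C example of the kind of code that defeats the static
approach - every load depends on the previous one, and nothing at
compile time can know which level of the cache (if any) each one will
hit (types and names invented for illustration):

    #include <stddef.h>

    struct node { struct node *next; long value; };

    long sum_list(const struct node *n)
    {
        long total = 0;
        while (n != NULL) {         /* each iteration waits on the previous load */
            total += n->value;      /* latency unknowable at compile time */
            n = n->next;
        }
        return total;
    }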

EPIC only made sense in a system that was running a single process and
taking few, if any, interrupts. That was how early embedded systems,
which were Intel's original market, worked in the 1970s and early 1980s.
Trying to apply that to a processor that appeared in 2001 was a massive
failure of concept and project management. Itanium was obsolete when it
shipped.

John

Simon Clubley
2022-08-15 17:47:21 UTC
Permalink
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the general
belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from this
point of view, but of course, performance is probably nowhere near where
anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the tooling and
so on before they had actual hardware.
I've just had a quick look at this. This emulator is no good for VMS.

From the documentation:

|The IATO environment operates directly with ELF binary executables. As of
|release 1.0, fully static binary executables are only supported. In the
|presence of dynamically linked executables, the IATO clients reports an
|error and terminates. The best method to check for a file type is to use
|the file command

It's a user-level binary emulator only and it doesn't even support dynamic
binaries.

Also:

|3.4 Kernel emulation library
|The kernel (KRN) library is a set of classes that handles Linux system
|calls. Systems calls are vectored traps sent by the program. They are
|caught by the emulator or the simulator and routed to the system call
|handler. The Syscall class encapsulates all Linux system calls. Note that a
|system call argument mapping procedure is also included into this library.

And it only supports Linux syscalls.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Dave Froble
2022-08-15 18:06:11 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the general
belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from this
point of view, but of course, performance is probably nowhere near where
anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the tooling and
so on before they had actual hardware.
I've just had a quick look at this. This emulator is no good for VMS.
|The IATO environment operates directly with ELF binary executables. As of
|release 1.0, fully static binary executables are only supported. In the
|presence of dynamically linked executables, the IATO clients reports an
|error and terminates. The best method to check for a file type is to use
|the file command
It's an user-level binary emulator only and it doesn't even support dynamic
binaries.
|3.4 Kernel emulation library
|The kernel (KRN) library is a set of classes that handles Linux system
|calls. Systems calls are vectored traps sent by the program. They are
|caught by the emulator or the simulator and routed to the system call
|handler. The Syscall class encapsulates all Linux system calls. Note that a
|system call argument mapping procedure is also included into this library.
And it only supports Linux syscalls.
Simon.
Can't everybody just let the itanic boat anchor sink quietly into the mud, never
to be seen again?

I'm also a bit surprised by David's question. I was under the impression that
there were many discarded itanics, available rather cheap. What has changed?

The one I have cost exactly $0, and I rarely run it.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2022-08-15 18:16:12 UTC
Permalink
Post by Dave Froble
Can't everybody just let the itanic boat anchor sink quietly into the mud, never
to be seen again?
It's getting there, but there's still the legacy installed base.
A legacy installed base which has permanent licences BTW, so things
like that are going to factor into various decisions.

BTW, over a couple of days (I think it was a weekend :-)) I had a look
at what would be involved in writing a full-system Itanium emulator.

At the end of those couple of days, I had come to the conclusion that
I would be more likely to succeed with doing something less insane
such as writing a modern web browser by myself. :-)

IOW, writing an Itanium full-system emulator would be a major undertaking.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2022-08-15 23:43:01 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
By the way, since people asked about IA64 emulators, and the general
belief that they don't exist and are too difficult to do.
They do exist, and have for a long time. It's not that complex from this
point of view, but of course, performance is probably nowhere near where
anyone would actually want to use it for production.
See: http://www.irisa.fr/caps/projects/ArchiCompil/iato/
Last updated in 2004. But that is how they developed all the tooling and
so on before they had actual hardware.
I've just had a quick look at this. This emulator is no good for VMS.
Never claimed it was. My point was that emulators for IA64 do exist, and
are not impossible or unobtanium, as some people suggested.

Obviously that project was not interested in VMS. Does not mean it
couldn't be done. IA64 isn't that hard to emulate, as such. But again -
performance is another question.

Johnny
Simon Clubley
2022-08-16 18:05:22 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
I've just had a quick look at this. This emulator is no good for VMS.
Never claimed it was. My point was that emulators for IA64 do exist, and
are not impossible or unobtanium, as some people suggested.
I know you didn't, but it was still worth me looking at it, to see if
it could be something useful. Unfortunately, that does not appear to
be the case, as it doesn't offer anything over what Ski already does,
and Ski would be only a small part of any required full-system emulator.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
plugh
2022-08-14 13:35:00 UTC
Permalink
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
David Turner
Based on a review of object code generated for this machine by a certain C compiler, I'd say you need only one instruction: NOP
David Turner
2022-08-14 18:19:58 UTC
Permalink
I am still convinced that running HP-UX on an Itanium emulator, not
messing with code, applications etc, would be a better option than
trying to port to another Unix-like OS.
Perhaps not so for OpenVMS. But on the other hand, there are many
companies out there just using OpenVMS; their app vendors have either
gone out of business or stopped supporting OpenVMS altogether on ANY
platform. An emulator with decent performance would be better than the
many 100,000s of dollars to port to a new OS. And yes, from the people I
have talked to, there is nothing cheap about any work done in the
OpenVMS market.
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.

DT
Post by David Turner
Does anyone here think that this is an option for people not willing
or able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or
i6 (16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I
have probably purchased 80% of the ones on the market and some
companies are buying up whatever is available
Comments please.
David Turner
Arne Vajhøj
2022-08-14 20:49:01 UTC
Permalink
Post by David Turner
I am still convinced that running HP-UX on an Itanium emulator, not
messing with code, applications etc, would be a better option than
trying to port to another Unix-like OS.
Perhaps not so for OpenVMS. But on the other hand, there are many
companies out there just using OpenVMS; their app vendors have either
gone out of business or stopped supporting OpenVMS altogether on ANY
platform. An emulator with decent performance would be better than the
many 100,000s of dollars to port to a new OS. And yes, from the people I
have talked to, there is nothing cheap about any work done in the
OpenVMS market.
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
Obviously the situation for HP-UX is a lot different than
for VMS.

VMS has a company dedicated to it. VMS has been ported to x86-64.

HP-UX has neither of those. Unless HPE does something then HP-UX is
stuck on Itanium and current functionality.

But I also suspect that the typical HP-UX site is a lot easier
to migrate than the typical VMS site.

Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
on Linux. All LIB$ and SYS$ calls would need to be changed
on Linux no matter the language. Lots of VMS concepts are not 1:1
portable to Linux including logical names and queue system.
Rdb is not available on Linux. RMS index-sequential files
would (except for Cobol) require a third party software solution
and change of calls to use API of that. No DCL on Linux so all
script would be rewrite from scratch. VMS to Linux is not easy - not
impossible either but expensive and risky.
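To give one concrete instance of the "not 1:1" point: something as basic
as a logical name lookup is a SYS$TRNLNM item-list call on VMS, where
Linux code would normally just call getenv(). A rough sketch (untested;
logical name picked arbitrarily, and assuming the traditional 32-bit
item-list layout):

    #include <starlet.h>
    #include <descrip.h>
    #include <lnmdef.h>
    #include <stdio.h>

    int main(void)
    {
        $DESCRIPTOR(table,   "LNM$FILE_DEV");
        $DESCRIPTOR(logname, "SYS$SCRATCH");
        char buf[256];
        unsigned short len = 0;
        struct {
            unsigned short buflen, code;    /* one item: return the string */
            void *bufaddr;
            unsigned short *retlen;
            unsigned int terminator;        /* longword 0 ends the list */
        } itmlst = { sizeof buf, LNM$_STRING, buf, &len, 0 };

        unsigned int status = sys$trnlnm(0, &table, &logname, 0, &itmlst);
        if (status & 1)
            printf("%.*s\n", len, buf);
        return status;
    }

On Linux the lookup itself is a one-liner, but every place that relies
on logical name tables, access modes or search lists has to be rethought,
not just retyped.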

I believe a lot of HP-UX systems are database servers running
Oracle DB, Sybase ASE etc. - and those are available on
Linux (in fact the vendors would like to see customers migrate
to Linux). Most application code would be C/C++ and Cobol
which are respectively available by default and available for
a price on Linux. Most programming concepts and system
calls would work on Linux. The shells used would be available
on Linux. HP-UX to Linux would not be trivial - definitely a
huge project, but both risk and cost seem significantly lower
than VMS to Linux.

Arne
John Dallman
2022-08-14 23:14:00 UTC
Permalink
Unless HPE does something then HP-UX is stuck on Itanium and
current functionality.
I've been watching for that for years. There were rumours during the
HP-Oracle lawsuit that HP had investigated porting HP-UX to x86-64, but
nothing came of them. HP has been running Linux for years on its high-end
"Superdome" x86-64 systems. They haven't said anything to indicate that
HP-UX will have a life after the end of Itanium support in 2025 AFAIK.
But I also suspect that the typical HP-UX site is a lot easier
to migrate than the typical VMS site.
You're right. HP-UX has a few quirks of its own, but it isn't
fundamentally hard to port from it to Linux.

John
Simon Clubley
2022-08-15 18:19:28 UTC
Permalink
Post by Arne Vajhøj
Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
on Linux. All LIB$ and SYS$ calls would need to be changed
on Linux no matter the language. Lots of VMS concepts are not 1:1
portable to Linux including logical names and queue system.
Rdb is not available on Linux. RMS index-sequential files
would (except for Cobol) require a third party software solution
and change of calls to use API of that. No DCL on Linux so all
script would be rewrite from scratch. VMS to Linux is not easy - not
impossible either but expensive and risky.
There's the third-party porting toolkits that can help with this.

Simon.

PS: BTW Arne, Macro-11 ??? :-)
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2022-08-15 20:44:14 UTC
Permalink
Post by Simon Clubley
Post by Arne Vajhøj
Macro-11, VMS Pascal and VMS Basic are a rewrite from scratch
on Linux. All LIB$ and SYS$ calls would need to be changed
on Linux no matter the language. Lots of VMS concepts are not 1:1
portable to Linux including logical names and queue system.
Rdb is not available on Linux. RMS index-sequential files
would (except for Cobol) require a third party software solution
and change of calls to use API of that. No DCL on Linux so all
script would be rewrite from scratch. VMS to Linux is not easy - not
impossible either but expensive and risky.
There's the third-party porting toolkits that can help with this.
Yes, Sector7 etc. But without diminishing their products, I would
not expect a silver bullet.
Post by Simon Clubley
PS: BTW Arne, Macro-11 ??? :-)
Ooops.

Macro-32

Arne
Scott Dorsey
2022-08-14 22:51:32 UTC
Permalink
Post by David Turner
I am still convinced that running HP_UX on an Itanium emulator, not
messing with code, applications etc, would be a better option than
trying to port to another Unix-like OS.
HP-UX really is Unix. If the code is well-written, it should not be
difficult to port to any other SysV-like Unix. Realtime code excepted
perhaps.
Post by David Turner
Perhaps not so for OpenVMS. But on the other hand, there are many
companies out there just using OpenVMS; their app vendors have either
gone out of business or stopped supporting OpenVMS all together on ANY
platform. An emulator with decent performance would be better than the
many 100,000s of dollars to port to a new OS. And yes, from the people I
have talked to, there is nothing cheap about any work done in the
OpenVMS market.
OpenVMS is not Unix-like and porting OpenVMS code to Unix-like systems is
frequently problematic. Which is why x86 VMS is such a great idea. In
most cases this involves a complete rewrite rather than a port.
Post by David Turner
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
I don't think an IA64 emulator that performs efficiently and fast is even
feasible. Making it reliable is still more difficult. It's not like
emulating a normal architecture like Alpha.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
abrsvc
2022-08-14 23:01:10 UTC
Permalink
Post by David Turner
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
DT
Not to be picky, but the Stromasys product is called Charon/AXP.

Dan

(Currently working for Stromasys)
Hans Bachner
2022-08-15 19:31:21 UTC
Permalink
Post by abrsvc
Post by David Turner
A $10K emulator that performs efficiently and fast, would still be
cheaper than going with any unnecessary hardware or OS upgrades. I think
AlphaVM-Pro VTALpha and Cahron-Alpha have all proven that fact.
DT
Not to be picky, but the Stromasys product is called Charon/AXP.
in fact, it is called CHARON-AXP :-)
Post by abrsvc
Dan
(Currently working for Stromasys)
Hans.

(Stromasys partner)
Sunset Ash
2022-08-16 01:33:11 UTC
Permalink
Post by David Turner
Does anyone here think that this is an option for people not willing or
able to move over to x86-64 yet?
An HP Integrity emulator, emulating something like an rx2800 i2 i4 or i6
(16 cores max)
I could imagine it would be useful if stuck with HP-UX or OpenVMS for
Integrity for some reason?!?
Why am I asking? Well, HPE Integrity servers are getting scarce. I have
probably purchased 80% of the ones on the market and some companies are
buying up whatever is available
Comments please.
David Turner
HPE has an Integrity emulator for running HP-UX - it's called Portable HP-UX and runs on Linux; you can request access if you have an active contract. I suspect running VMS was not of particular interest to them during development, though.
Simon Clubley
2022-08-16 18:08:22 UTC
Permalink
Post by Sunset Ash
HPE has an Integrity emulator for running HP-UX - it's called Portable HP-UX and runs on Linux; you can request access if you have an active contract. I suspect running VMS was not of particular interest to them during development, though.
This:

https://downloads.linux.hpe.com/SDR/project/c-ux-beta/

appears to be the download page for anyone interested in it.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Craig A. Berry
2022-08-16 21:17:23 UTC
Permalink
Post by Simon Clubley
Post by Sunset Ash
HPE has an Integrity emulator for running HP-UX - it's called
Portable HP-UX and runs on Linux; you can request access if you have an
active contract. I suspect running VMS was not of particular interest to
them during development, though.
https://downloads.linux.hpe.com/SDR/project/c-ux-beta/
appears to be the download page for anyone interested in it.
It claims to be a full-system emulator that uses JIT for some
instructions. The release posted there is a beta release from May 2019
that only supports HP-UX. Whether it actually works or is still under
active development? Your guess is as good as mine.