Discussion:
New CEO of VMS Software
Slo
2024-01-03 20:16:52 UTC
Darya Zelenina speaks 9 languages and looks like she is about 35.
Practically all of the OpenVMS users seem to be 65+ years old!
She is soon to be the CEO!

https://www.linkedin.com/in/darya-zelenina-8a3b3272/

Darya will assume the role of CEO in June 2024. She joined VMS Software as a technical writer and OpenVMS instructor in 2017 and has since held key leadership positions in software and web development, documentation, the Community Program and Marketing. Darya brings extensive expertise in OpenVMS and the OpenVMS ecosystem, coupled with deep commitment to shaping the platform's long-term trajectory.
Arne Vajhøj
2024-01-03 20:52:16 UTC
Post by Slo
Darya Zelenina speaks 9 languages,
Per her LinkedIn profile:

Russian
English
Esperanto

French
German

Dutch
Hebrew
Swedish
Turkish
Post by Slo
looks like she is about 35.
She is younger than most other VSI managers.

And she does not have a past in DEC/CPQ/HP.

New times.

Arne
Lawrence D'Oliveiro
2024-01-03 22:49:24 UTC
Post by Arne Vajhøj
And she does not have a past in DEC/CPQ/HP.
Does she have a background in finance? If yes, then ...

... is the company being prepared for a selloff?
Arne Vajhøj
2024-01-04 00:10:10 UTC
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
And she does not have a past in DEC/CPQ/HP.
Does she have a background in finance?
Per her LinkedIn profile, her bachelor's degree is in linguistics.
Post by Lawrence D'Oliveiro
If yes, then ...
... is the company being prepared for a selloff?
I don't think there would be much point in that.

VSI seems to be in good shape financially, but it is not
a "hot" company that can be sold for X B$ due to buzz
in the press.

And it also seems that they are trying to benefit from some
synergies between the multiple companies in the
Teracloud group.

Arne
Simon Clubley
2024-01-04 14:00:10 UTC
Post by Slo
Darya Zelenina speaks 9 languages and looks like she is about 35.
Practically all of the OpenVMS users seem to be 65+ years old!
She is soon to be the CEO!
https://www.linkedin.com/in/darya-zelenina-8a3b3272/
Darya will assume the role of CEO in June 2024. She joined VMS Software as a technical writer and OpenVMS instructor in 2017 and has since held key leadership positions in software and web development, documentation, the Community Program and Marketing. Darya brings extensive expertise in OpenVMS and the OpenVMS ecosystem, coupled with deep commitment to shaping the platform's long-term trajectory.
This move does not give me a good feeling.

She does not seem like a good fit for a CEO of a company providing
the types of mission-critical services that companies running VMS
rely on.

Even ignoring all the touchy-feely stuff in her bio, someone who
has "successfully managed teams in documentation, marketing, web
development, and DevOps" as her main achievement does not seem to
be a good match for the needs of VMS users.

Where were all the other candidates for the job, and why was she
considered to be the best one for the job? Would no-one else
look at taking the job for some reason?

BTW, what the hell is "Intercultural Communication"?

Also, is her Russian background going to be a problem for the
US government? I'm not saying it is an issue in real life, I am just
asking how some people might react. For example, look at all the crap
the Sailfish OS people have to deal with in this area...

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2024-01-04 14:56:31 UTC
Post by Simon Clubley
Post by Slo
Darya will assume the role of CEO in June 2024. She joined VMS
Software as a technical writer and OpenVMS instructor in 2017 and
has since held key leadership positions in software and web
development, documentation, the Community Program and Marketing.
Darya brings extensive expertise in OpenVMS and the OpenVMS
ecosystem, coupled with deep commitment to shaping the platform's
long-term trajectory.
This move does not give me a good feeling.
She does not seem like a good fit for a CEO of a company providing
the types of mission-critical services that companies running VMS
rely on.
Even ignoring all the touchy-feely stuff in her bio, someone who
has "successfully managed teams in documentation, marketing, web
development, and DevOps" as her main achievement does not seem to
be a good match for the needs of VMS users.
A CEO has to have managerial experience for obvious reasons. People
do not move directly from individual contributor to CEO.

She does not have an engineering background. But CEOs of tech
companies without an engineering background are not unusual.

She has experience with the development process and the engineering
teams from her devops work.

She has experience with customers from marketing and sales work.

She seems more focused on new ways (CI/CD, web etc.) than
how DEC did things 40 years ago.

She was working on the CL program, which I think turned out
very well for VSI - I suspect a lot of the bug reports come
from CL users.

Based on the VSI web page and her LinkedIn profile, I think it
looks like a good choice.
Post by Simon Clubley
Where were all the other candidates for the job, and why was she
considered to be the best one for the job? Would no-one else
look at taking the job for some reason?
They have had plenty of time to look for and evaluate candidates.

We will probably never know who was interested and exactly what
made Johan Gedda pick her.

But I suspect that some of the managers within engineering were
not interested because they prefer EVE/LSE/VSCode over Excel.
That is quite common, so probably also the case at VSI.
Post by Simon Clubley
Also, is her Russian background going to be a problem for the
US government? I'm not saying it is an issue in real life, I am just
asking how some people might react. For example, look at all the crap
the Sailfish OS people have to deal with in this area...
I suspect that she will be presented as "born in Russia" and
"living and working in Copenhagen".

Denmark is a member of NATO and the EU, a close ally of the US, etc.

Arne
Lawrence D'Oliveiro
2024-01-04 19:25:48 UTC
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the one
to put in place the program I suggested sometime back: get rid of most of
VMS itself, leaving only the parts that users care about--namely their
userland programs and DCL command procedures. All that could run on an
emulation layer on Linux.
Arne Vajhøj
2024-01-04 20:42:57 UTC
Post by Lawrence D'Oliveiro
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the one
to put in place the program I suggested sometime back: get rid of most of
VMS itself, leaving only the parts that users care about--namely their
userland programs and DCL command procedures. All that could run on an
emulation layer on Linux.
Not likely.

Lots of work to implement.

Not much interest from customers.

Sector 7 has offered such products for decades. Without taking away
the VMS customer base. Apparently VMS customers prefer to either stay
on VMS or port to Windows or Linux instead of running VMS emulation
on top of Windows or Linux.

Arne
Lawrence D'Oliveiro
2024-01-04 22:20:07 UTC
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the
one to put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
Remember, it took less time (and resources) than that to move Linux from
32-bit x86 to 64-bit Alpha.
Post by Arne Vajhøj
Not much interest from customers.
Just think: there would have been more customers left if they’d got it
working sooner.
Post by Arne Vajhøj
Sector 7 has offered such products for decades. Without taking away the
VMS customer base.
Maybe they have.
Arne Vajhøj
2024-01-05 01:26:33 UTC
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
She seems more focused on new ways (CI/CD, web etc.) than how DEC did
things 40 years ago.
If she is less invested in how DEC used to do things, maybe she’s the
one to put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
I doubt that.

Mapping from one OS to another OS is not easy.
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux from
32-bit x86 to 64-bit Alpha.
Very different task.

Adding support for a new CPU to an OS mostly written in C and
making the APIs and utilities of one OS run on top of another
OS kernel are not the same.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Not much interest from customers.
Just think: there would have been more customers left if they’d got it
working sooner.
Sector 7 has been around for many years. So the lack of interest in
their product is not likely to be due to timing.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Sector 7 has offered such products for decades. Without taking away the
VMS customer base.
Maybe they have.
That is something we would know about.

They have customers, but not nearly as many as those migrating
natively to other platforms.

Arne
Lawrence D'Oliveiro
2024-01-05 01:48:29 UTC
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
... put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
I doubt that.
Mapping from one OS to another OS is not easy.
Linux is a more versatile kernel than VMS. For example, the WINE project
has been able to substantially implement the Windows APIs on top of Linux,
while Microsoft’s attempt to do the reverse, implement the Linux APIs on
top of the Windows kernel with WSL1, has been abandoned as a failure.
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Just think: there would have been more customers left if they’d got it
working sooner.
Sector 7 has been around for many years. So the lack of interest in
their product is not likely to be due to timing.
I mean, customers left who are still interested in original VMS.
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Sector 7 has offered such products for decades. Without taking away
the VMS customer base.
Maybe they have.
That is something we would know about.
You mean “would not know about”?
Post by Arne Vajhøj
They have customers, but not nearly as many as those migrating
natively to other platforms.
I think we’ve discussed their product before. Reading between the lines of
their case studies, seems their product lacks some of the niceties that it
should be possible to implement on top of the Linux kernel. DECnet, I
think, was one thing they seemed to be missing.
Arne Vajhøj
2024-01-05 02:11:49 UTC
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
... put in place the program I suggested sometime back: get rid of
most of VMS itself, leaving only the parts that users care
about--namely their userland programs and DCL command procedures. All
that could run on an emulation layer on Linux.
Lots of work to implement.
Much less than the 7 years it took to reimplement VMS on top of AMD64.
I doubt that.
Mapping from one OS to another OS is not easy.
Linux is a more versatile kernel than VMS. For example, the WINE project
has been able to substantially implement the Windows APIs on top of Linux,
while Microsoft’s attempt to do the reverse, implement the Linux APIs on
top of the Windows kernel with WSL1, has been abandoned as a failure.
Excellent examples.

Have you noticed how the world has moved from Windows to Linux
with Wine? No. Because it did not happen. Wine is a niche
thing.

MS tried WSL1 and changed to a VM model with WSL2.

2 x commercial failure.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a new CPU and an underlying foreign OS kernel
"a new architecture", then yes.

But the reality is that it is very different.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
the VMS customer base.
Maybe they have.
That is something we would know about.
You mean “would not know about”?
No. We would know.

A company could not pick up a large number of DEC/CPQ/HP/HPE/VSI
customers without the VMS community knowing.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
They have customers, but not nearly as many as those migrating
natively to other platforms.
I think we’ve discussed their product before. Reading between the lines of
their case studies, seems their product lacks some of the niceties that it
should be possible to implement on top of the Linux kernel. DECnet, I
think, was one thing they seemed to be missing.
They do run on Linux (and Windows).

It is possible that someone could do better than them.

But they did not.

And there were a couple of other companies offering
similar (or somewhat similar) services: Accel8 and BosBC. They
are no longer in business.

That makes it a 0 out of 3 success rate.

Arne
Lawrence D'Oliveiro
2024-01-05 03:01:43 UTC
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a new CPU and an underlying foreign OS kernel
"a new architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel
onto a different CPU. In both cases.

So tell me again: “very different” how?
Arne Vajhøj
2024-01-05 03:09:12 UTC
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a new CPU and an underlying foreign OS kernel
"a new architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel
onto a different CPU. In both cases.
So tell me again: “very different” how?
Sorry. I messed up this one.

I was not comparing "Linux port to Alpha" with "VSI's actual port
of VMS to x86-64" but with "a hypothetical port of VMS to run on
top of the Linux kernel".

Arne
Lawrence D'Oliveiro
2024-01-05 04:40:52 UTC
I was not comparing "Linux port to Alpha" with "VSI's actual port of VMS
to x86-64" but with "a hypothetical port of VMS to run on top of the
Linux kernel".
Remember, I’m not talking about porting the whole of VMS, just the part
that users care about: userland executables and DCL command procedures.
That’s it.
Stephen Hoffman
2024-01-06 00:59:12 UTC
Post by Lawrence D'Oliveiro
I was not comparing "Linux port to Alpha" with "VSI's actual port of VMS
to x86-64" but with "a hypothetical port of VMS to run on top of the
Linux kernel".
Remember, I’m not talking about porting the whole of VMS, just the part
that users care about: userland executables and DCL command procedures.
That’s it.
For some idea of relative scale for that "that's it", that's "merely" a
sizable chunk of what is roughly 35 million lines of code written in a
mix of assembler, BLISS, and C with proprietary extensions and API
calls, and which is entirely dependent on the rest of the 35 million
lines of code and some unique compilers. Not a small project.



There have been various discussions about this kernel port in a time
and place that no longer exists too, but that's all fodder for another
time and another place, and maybe with a little more included below.



Ponder how much of the "upper level" APIs and tools here tie into the
XQP and ACPs and device drivers, including the terminal drivers,
network drivers, and storage drivers. There have been ongoing I/O
issues (e.g. SSIO, quorum I/O) underneath clustering and assumptions of
storage writes too, and I'd expect those issues to appear elsewhere
when porting to a different kernel.

It's an immense project. FreeVMS made an effort in this direction some
years ago, but effectively ran out of staff (volunteers) and funding.
Lost their domain, too. DEC did a partial port to Mach years ago as an
advanced development project, but that work was far from complete.

Sector 7 has been porting APIs incrementally for decades now, and
Sector 7 has the option of reworking the app source code as and
where needed.

Valve has a tougher problem, as they can't rework the app source code
to run on Steam Deck. Valve and the open source projects involved have
in aggregate done an immense pile of work to get a number of games
working, though there are many that don't work, and pretty much any
games with anti-cheat won't. https://www.steamdeck.com/en/verified
Apple has expended some development effort in this area too, with their
game-porting tools: https://developer.apple.com/games/

Could VSI do something akin to FreeVMS, Sector 7, Steam Deck, or Apple,
and their respective tooling, but starting with the original OpenVMS
source code? Sure. But existing OpenVMS customers would then have
another five or ten years of waiting to enjoy, with no particular
enhancements.
And then quite probably a whole pile of app-level workarounds for
whatever didn't get ported or implemented, or that had to diverge for
reasons. Or waiting for a yet larger and longer and more complex
project to allow binaries to run directly, work which would necessarily
lag behind the rest of the porting work.





What does this kernel swap mean? VSI spends years creating an
inevitably-somewhat-incomplete third-party Linux porting kit for
customer OpenVMS apps, and the end goal of the intended customers then
inexorably shifts toward the removal of that porting kit, and probably
in the best case the whole effort inevitably degrades into apps ported
to and running on VSI Linux. Or to porting to and running on some
other not-VSI Linux. That's certainly a service business opportunity,
providing customers assistance porting their OpenVMS apps to VSI Linux.
It does get VSI out of maintaining a kernel, but does not reduce much
else. And that at no small cost and no small investment, and at a cost
of a number of other opportunities.




TL;DR: The kernel isn't a big hunk of the ongoing development effort,
once the port is complete. Yeah, it takes a while to get to a working
bootstrap and working kernel during a port, though operating as a guest
reduces that somewhat. Porting to a different platform supported by the
shared kernel would get easier, though VSI still has to drag along
compilers and other tooling, or work to expunge that. De-kerneling the
userland would be a larger effort. And re-kerneling means VSI must now
track changes to their chosen replacement kernel, because y'all just
know some kernel changes will almost certainly be required here. In
the best case with re-kerneling, the customers then get a decade of
delays, with few enhancements, and all for an OS and APIs that arguably
haven't seen appreciable enhancements and new features since before
Y2K. Or customers can choose the existing OpenVMS x86-64 guests, and
can get back to whatever they were doing, and VSI can get back to
working on enhancements and updates and performance.





From another time and place, a DEC Usenix paper from way back in 1992
discussing a kernel-swap project:
http://fossies.org/linux/freevms/doc/Usenix_VMS-on-Mach.PS

From a reference to that work: "In 1992, a development team from
Digital Equipment described a proof-of-concept implementation of VMS on
Mach 3.0. ... Their work provided independent confirmation that
multiple OS personalities could be supported on the Mach microkernel.
At the same time, it exposed certain limitations. For example, it
proved impossible to accurately emulate VMS scheduling policies using
Mach. As another example, it was not possible to emulate VMS’ strong
isolation of kernel resource usage by different users. Preliminary
measurements also suggested that layering VMS on Mach resulted in
unacceptable performance overhead. Due to these technical concerns and
other nontechnical considerations, Digital Equipment did not follow
through with a production quality implementation of VMS on Mach."

That prototype work predated L4Ka and such, which reduced the overhead
involved with Mach.





Would I like to see an OpenVMS port to Linux, L4, or otherwise? Sure.
Fun project. But who wants to buy that? And for how much?
--
Pure Personal Opinion | HoffmanLabs LLC
Lawrence D'Oliveiro
2024-01-06 02:48:42 UTC
[lots of interesting stuff omitted]
From another time and place, a DEC Usenix paper from way back in 1992
http://fossies.org/linux/freevms/doc/Usenix_VMS-on-Mach.PS
From a reference to that work: "In 1992, a development team from Digital
Equipment described a proof-of-concept implementation of VMS on Mach
3.0. ... Their work provided independent confirmation that multiple OS
personalities could be supported on the Mach microkernel. At the same
time, it exposed certain limitations. For example, it proved impossible
to accurately emulate VMS scheduling policies using Mach.
That can be blamed on the limitations of Mach. People still seem to think
microkernels are somehow a good idea, but they really don’t help much, do
they?
As another example, it was not possible to emulate VMS’ strong isolation
of kernel resource usage by different users.
Would the Linux cgroups functionality (as commonly used in the various
container schemes) help with this?
Preliminary measurements also
suggested that layering VMS on Mach resulted in unacceptable performance
overhead.
No big surprise -- microkernel trouble yet again.
Stephen Hoffman
2024-01-06 18:36:59 UTC
Post by Lawrence D'Oliveiro
[lots of interesting stuff omitted]
From another time and place, a DEC Usenix paper from way back in 1992
http://fossies.org/linux/freevms/doc/Usenix_VMS-on-Mach.PS
From a reference to that work: "In 1992, a development team from
Digital Equipment described a proof-of-concept implementation of VMS on
Mach 3.0. ... Their work provided independent confirmation that
multiple OS personalities could be supported on the Mach microkernel.
At the same time, it exposed certain limitations. For example, it
proved impossible to accurately emulate VMS scheduling policies using
Mach.
That can be blamed on the limitations of Mach. People still seem to
think microkernels are somehow a good idea, but they really don’t help
much, do they?
Every choice made in an OS is a trade-off. Every one.

There are OS and hardware trade-offs in every design, every era, every
generation.

And the trade-offs vary over time and place. 1992 was VAX.

With current hardware including cores and performance and with newer
message-passing designs such as OKL4 and ilk, some things are looking
rather better.
Post by Lawrence D'Oliveiro
As another example, it was not possible to emulate VMS’ strong
isolation of kernel resource usage by different users.
Would the Linux cgroups functionality (as commonly used in the various
container schemes) help with this?
No.

Designers of VAX/VMS chose a memory management model closer to that of
Multics, where much of the rest of hardware and software in the
industry diverged from that lotsa-rings memory management design.
Memory management designs with more than two rings have largely disappeared
in the ensuing decades, with Itanium being one of the most recent
examples. x86-64 sorta-kinda has four rings, but the page table and
ring design was too limited for OpenVMS expectations, and VSI is
accordingly (and creatively) using two page tables to provide the
necessary modes.

Containers are arguably fundamentally about product-licensing
arbitrage, too. But it's not a foundation for a kernel transplantation.
Post by Lawrence D'Oliveiro
Preliminary measurements also suggested that layering VMS on Mach
resulted in unacceptable performance overhead.
No big surprise -- microkernel trouble yet again.
Mach on VAX in 1992, yeah. Microkernels are in use all over the place
nowadays, seL4-, L4-, and OKL4-derived. And hardware performance has
improved over the decades, and core counts are far higher than VAX era.

https://docs.sel4.systems/projects/sel4/frequently-asked-questions.html

Other deployments? Apple is using their own L4 derivative throughout.

Compare a 1992-era VAX to a 2023-era smartphone with 6 cores, 8 GB
memory and a terabyte of persistent storage, and with vastly better
graphics and networking. In a server design with similarly-modern
heterogeneous processor hardware, the efficiency cores would likely be
running batch and baggage, too. VAX SIMH on a smartwatch probably runs
decently well too, save for the woefully inadequate UI.

For a small development team—and VSI is tiny—kernel transplantation
doesn't gain much from a technical basis, once the platform port is
completed. It might help with future ports, sure. Ports (including
transplantations) more generally are entirely disruptive, and delay
other userland work and userland enhancements. And the kernel
transplantation still requires ongoing maintenance and support and
updates and releases as the new "host" kernel is modified by the
outside vendors. From another vendor doing this:

https://www.chromium.org/chromium-os/chromiumos-design-docs/chromium-os-kernel/

From a business perspective, what Sector 7 offers is an easier and
incremental off-ramp from OpenVMS to else-platform. Which is not going
to be a popular roadmap choice for the folks at VSI. And I'm somewhat at
a loss for what the transplantation offers users.

Would I like to see a more modern kernel design underneath OpenVMS?
Sure. But pragmatically, that's all way, way, way, way, way down the
priority list for an ISV or third-party developer or customer. Or at
VSI, by all appearances. And that new design will be
userland-disruptive. Just as userland-disruptive as a port, if not
larger. Booting OpenVMS as a guest on x86-64 gets rid of most of the
longstanding customer hardware issues. And VSI isn't nearly well-funded
enough to create a new OS both with easier portability for existing
OpenVMS apps, and with enough new work and new features to draw in new
customers—that's a decade of work and a chunk of a billion dollars just
to get going. Got a bored billionaire handy that wants to take on
~everybody with a new RISC V supermicrominikernel megaOS product with
extra added OpenVMS flavor? Have at. Lemme know too, as that sounds
like a fun project.
--
Pure Personal Opinion | HoffmanLabs LLC
Lawrence D'Oliveiro
2024-01-06 20:08:02 UTC
Post by Stephen Hoffman
Post by Lawrence D'Oliveiro
That can be blamed on the limitations of Mach. People still seem to
think microkernels are somehow a good idea, but they really don’t help
much, do they?
With current hardware including cores and performance and with newer
message-passing designs such as OKL4 and ilk, some things are looking
rather better.
Hope springs eternal in the microkernel aficionado’s breast. ;)
Post by Stephen Hoffman
Post by Lawrence D'Oliveiro
Post by Stephen Hoffman
As another example, it was not possible to emulate VMS’ strong
isolation of kernel resource usage by different users.
Would the Linux cgroups functionality (as commonly used in the various
container schemes) help with this?
No.
Designers of VAX/VMS chose a memory management model closer to that of
Multics, where much of the rest of hardware and software in the industry
diverged from that lotsa-rings memory management design.
Seems you are confusing two different things here. I am aware of the user/
supervisor/exec/kernel privilege-level business, but you did say “resource
usage by different *users*”. cgroups are indeed designed to manage that.

Remember that my proposal for adopting the Linux kernel would get rid of
every part of VMS that currently runs at higher than user mode. It’s only
their own user-mode code that customers would care about.
Post by Stephen Hoffman
Containers are arguably fundamentally about product-licensing arbitrage,
too.
I don’t use them that way. I use them as a cheap way to run up multiple
test installations of things I am working on, instead of resorting to full
VMs. Typically it only takes a few gigabytes to create a new userland for
a container. E.g. on this machine I am using now:

***@theon:~ # du -ks /var/lib/lxc/*/rootfs/
1700060 /var/lib/lxc/debian10/rootfs/
7654028 /var/lib/lxc/debian11/rootfs/
876568 /var/lib/lxc/debian12/rootfs/
Post by Stephen Hoffman
Microkernels are in use all over the place nowadays, seL4-, L4-, and
OKL4-derived.
Really?? Can you name some deployments? How would performance compare with
Linux? Because, let’s face it, Linux is the standard for high-performance
computing.
Post by Stephen Hoffman
For a small development team—and VSI is tiny—kernel transplantation
doesn't gain much from a technical basis, once the platform port is
completed. It might help with future ports, sure.
Which was my point all along: if they’d done this for the AMD64 port from
the beginning, they would have shaved *years* off the development time.
And likely ended up with a somewhat larger (remaining) customer base than
they have now.
Dan Cross
2024-01-06 20:31:22 UTC
Post by Stephen Hoffman
Post by Lawrence D'Oliveiro
That can be blamed on the limitations of Mach. People still seem to
think microkernels are somehow a good idea, but they really don’t help
much, do they?
With current hardware including cores and performance and with newer
message-passing designs such as OKL4 and ilk, some things are looking
rather better.
Hope springs eternal in the microkernel aficionado’s breast. ;)
Post by Stephen Hoffman
Post by Lawrence D'Oliveiro
As another example, it was not possible to emulate VMS’ strong
isolation of kernel resource usage by different users.
Would the Linux cgroups functionality (as commonly used in the various
container schemes) help with this?
No.
Designers of VAX/VMS chose a memory management model closer to that of
Multics, where much of the rest of hardware and software in the industry
diverged from that lotsa-rings memory management design.
Seems you are confusing two different things here. I am aware of the user/
supervisor/exec/kernel privilege-level business, but you did say "resource
usage by different *users*". cgroups are indeed designed to manage that.
But that's not what he actually said: you omitted the critical
word, "kernel", as in _kernel resources_ used by different
users. cgroups are designed to manage _userspace_ resources;
they still exist in fundamentally the same kernel space.
Remember that my proposal for adopting the Linux kernel would get rid of
every part of VMS that currently runs at higher than user mode. It's only
their own user-mode code that customers would care about.
You think that's easy, but it is clear that you really don't
understand the issues involved.
Post by Stephen Hoffman
Containers are arguably fundamentally about product-licensing arbitrage,
too.
I don't use them that way. I use them as a cheap way to run up multiple
test installations of things I am working on, instead of resorting to full
VMs. Typically it only takes a few gigabytes to create a new userland for
Containers started as a way to run multiple versions of some
very large programs with disparate library and other
dependencies on a single system, and grew into a mechanism for
managing resources generally.
1700060 /var/lib/lxc/debian10/rootfs/
7654028 /var/lib/lxc/debian11/rootfs/
876568 /var/lib/lxc/debian12/rootfs/
Post by Stephen Hoffman
Microkernels are in use all over the place nowadays, seL4-, L4-, and
OKL4-derived.
Really?? Can you name some deployments? How would performance compare with
Linux? Because, let's face it, Linux is the standard for high-performance
computing.
He gave you examples.

He pointed you to seL4, which is used in plenty of safety
critical systems.

Additionally, QNX runs nuclear power plants. Every Mac on the
planet runs more or less a version of Mach+BSD. The Intel ME
embedded in most Intel CPUs runs Minix3.

Such a shame that the Linux team, despite their vast resources,
are incapable of delivering an effective microkernel. I guess
they just can't pull it off. (/s)
Post by Stephen Hoffman
For a small development team—and VSI is tiny—kernel transplantation
doesn't gain much from a technical basis, once the platform port is
completed. It might help with future ports, sure.
Which was my point all along: if they'd done this for the AMD64 port from
the beginning, they would have shaved *years* off the development time.
So you say. People who know better disagree.
And likely ended up with a somewhat larger (remaining) customer base than
they have now.
You have yet to articulate any measurable way in which this
would have made a difference to customers. It seems clear that
this is because you yourself have no idea other than, "lol Linux
is better."

- Dan C.
Lawrence D'Oliveiro
2024-01-06 22:46:13 UTC
But that's not what he actually said: you omitted the critical word,
"kernel", as in _kernel resources_ used by different users.
Since *all* resources are defined (and managed) as such by the kernel, I
fail to see what the distinction is.

cgroups let you manage CPU time usage and CPU affinity (are CPUs a
“kernel” resource?), memory usage (is that a “kernel” resource?), I/O
usage (is that a “kernel” resource?), RDMA usage (is that a “kernel”
resource?), numbers of processes created (is that a “kernel” resource?)
etc etc.
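
To make that concrete, a minimal sketch in C (assuming a cgroup v2
hierarchy mounted at /sys/fs/cgroup and sufficient privileges; the
"demo" group name and the limits are made up for illustration):

/* Create a cgroup, cap its CPU and memory, and move the current
   process into it, purely via filesystem writes. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (f == NULL) { perror(path); return; }
    fputs(val, f);
    fclose(f);
}

int main(void)
{
    char buf[32];
    mkdir("/sys/fs/cgroup/demo", 0755);                        /* new group */
    write_file("/sys/fs/cgroup/demo/cpu.max", "50000 100000"); /* 50% CPU   */
    write_file("/sys/fs/cgroup/demo/memory.max", "268435456"); /* 256 MB    */
    snprintf(buf, sizeof buf, "%d", (int)getpid());
    write_file("/sys/fs/cgroup/demo/cgroup.procs", buf);       /* join it   */
    return 0;
}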

Otherwise, feel free to explain what the distinction is between a “user”
resource and a “kernel” resource.
Post by Lawrence D'Oliveiro
Remember that my proposal for adopting the Linux kernel would get rid of
every part of VMS that currently runs at higher than user mode. It's
only their own user-mode code that customers would care about.
You think that's easy, but it is clear that you really don't understand
the issues involved.
The poster I was replying to already conceded this point.
Containers started as a way to run multiple versions of some very large
programs with disparate library and other dependencies on a single
system, and grew into a mechanism for managing resources generally.
You are thinking of Docker. Which is just one kind of “container”
technology. Remember that “containers” as such do not exist as a built-in
primitive in the Linux kernel: they are constructed out of a bunch of
lower-level primitives, including cgroups and the various kinds of
namespaces. This allows for very different kinds of technologies to be
built that call themselves “containers”. And for them to coexist.
Every Mac on the planet runs more or less a version of Mach+BSD.
And doesn’t exactly do so well. Back when Apple sold servers, I remember a
review of MySQL running on OS X Server versus Linux, on the same hardware.
Linux ran circles around Apple’s microkernel-based OS. On the company’s
own hardware.
The Intel ME embedded in most Intel CPUs runs Minix3.
Bad, bad example.
<https://arstechnica.com/information-technology/2017/05/the-hijacking-flaw-that-lurked-in-intel-chips-is-worse-than-anyone-thought/>
Dan Cross
2024-01-07 00:52:22 UTC
Post by Lawrence D'Oliveiro
But that's not what he actually said: you omitted the critical word,
"kernel", as in _kernel resources_ used by different users.
Since *all* resources are defined (and managed) as such by the kernel, I
fail to see what the distinction is.
That is evident, but this is not the flex you think that it is.
Post by Lawrence D'Oliveiro
cgroups let you manage CPU time usage and CPU affinity (are CPUs a
"kernel" resource?), memory usage (is that a "kernel" resource?), I/O
usage (is that a "kernel" resource?), RDMA usage (is that a "kernel"
resource?), numbers of processes created (is that a "kernel" resource?)
etc etc.
Otherwise, feel free to explain what the distinction is between a "user"
resource and a "kernel" resource.
Since you asked....

User resources: resources allocated to a user process for the
purpose of executing user code. Examples may include mapped
segments containing executable text, read-only or read/write
data, identifiers handed to userspace to identify resources
held by the kernel on a process's behalf (file and socket
descriptors, for example).

Kernel resources: Those allocated by the kernel for its internal
use. Examples may include page tables describing an address
space, the mapping of user-visible tokens to IO resources (e.g.,
the file array that maps file descriptors to the kernel's
representation of an open file or socket or pipe or whatever),
on Unix-y systems the set of signals pending delivery in the
process, etc. Other examples may include things like a
description of the buses and peripheral devices (e.g., the
complete enumeration of the PCIe topology), buffers associated
with IO devices and the filesystem, a page cache, etc.

Some things blur the line: is the `proc` structure describing a
process a kernel resource, even though its sole purpose is to
describe a user process? Most kernel people would probably say
yes, particularly as (on Unix-y style systems) the proc
structure allocated to a process will outlive the process itself
(e.g., so that the parent can `wait` on it to collect its exit
status).

These are just a few examples.
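
In rough C terms, a simplified sketch (not any real kernel's actual
declarations; the names here are made up):

/* The int fd a user process holds is the user-side resource; the
   structures behind it are kernel-side. */
struct file;                       /* kernel's open-file state         */

struct proc {
    struct file **fd_table;        /* kernel resource: fd -> file map  */
    unsigned long *page_table;     /* kernel resource: address space   */
    int exit_status;               /* outlives the process, for wait() */
};

/* Userland only ever sees the small integer:
   int fd = open("/etc/passwd", O_RDONLY); */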
Post by Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Remember that my proposal for adopting the Linux kernel would get rid of
every part of VMS that currently runs at higher than user mode. It's
only their own user-mode code that customers would care about.
You think that's easy, but it is clear that you really don't understand
the issues involved.
The poster I was replying to already conceded this point.
Yes, everyone has acknowledged that you don't understand the
issues involved.
Post by Lawrence D'Oliveiro
Containers started as a way to run multiple versions of some very large
programs with disparate library and other dependencies on a single
system, and grew into a mechanism for managing resources generally.
You are thinking of Docker.
No, I am not. I was there when containers were invented, and I
know very much what the original use case was.
Post by Lawrence D'Oliveiro
Which is just one kind of "container"
technology. Remember that "containers" as such do not exist as a built-in
primitive in the Linux kernel: they are constructed out of a bunch of
lower-level primitives, including cgroups and the various kinds of
namespaces. This allows for very different kinds of technologies to be
built that call themselves "containers". And for them to coexist.
Yawn. Cool story, bro.
Post by Lawrence D'Oliveiro
Every Mac on the planet runs more or less a version of Mach+BSD.
And doesn't exactly do so well.
"For some specific use cases." FTFY.
Post by Lawrence D'Oliveiro
Back when Apple sold servers, I remember a
review of MySQL running on OS X Server versus Linux, on the same hardware.
Linux ran circles around Apple's microkernel-based OS. On the company's
own hardware.
I remember when Linux acquired TCP/IP support. It was
consistently about 10% slower than BSD at the time. What's your
point? But if we're going to go there....

Why, after all these years, does USB audio on Linux fail so
miserably so often? Why do they still have problems with
suspend and resume on laptops? Why does Linux panic when you
turn on built-in features, like AX.25? Why is it such a pain to
partition disks? Why does the `ss` command still not show
routing tables for lesser-used protocols? Why are the debugging
tools so primitive compared to what Sun was doing 20 years ago
with dtrace? Why don't they have something as robust and
functional as ZFS? Why can't the amazingly resourced Linux project
repeat what a group of like 5 engineers at Sun did in 18 months
20 years ago?

You seem to think that Linux is so great, and the irony is that
you're actually _right_. But it's obvious that you don't have
the first clue about _why_ you think that, or what makes Linux
great.
Post by Lawrence D'Oliveiro
The Intel ME embedded in most Intel CPUs runs Minix3.
Bad, bad example.
<https://arstechnica.com/information-technology/2017/05/the-hijacking-
flaw-that-lurked-in-intel-chips-is-worse-than-anyone-thought/>
Yup. They had a pretty bad flaw. Do you think it hasn't been
fixed? And do you think that's the fault of the kernel that
runs on the ME? That was, after all, in an application program
that ran on that OS: should we start comparing to CVEs in
programs that run under Linux? For that matter, should we start
comparing CVEs between Minix and Linux?

I notice you elided my other examples, as they did not support
your point.

- Dan C.
Dan Cross
2024-01-05 03:09:37 UTC
Post by Lawrence D'Oliveiro
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with Wine. You can install Wine on the Steam
Deck, but their success has much more to do with their native
architecture.
Post by Lawrence D'Oliveiro
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a new CPU and an underlying foreign OS kernel
"a new architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel
onto a different CPU. In both cases.
So tell me again: “very different” how?
I think, again, you are talking at cross-purposes: my suspicion
is that Arne is referring to a VMS compatibility layer built on
top of Linux, not the effort of porting VMS to x86_64.

That said, VMS was not originally written for portability and
wasn't ported to anything other than successive versions of the
VAX for the first 10 or so years it existed; Linux was ported
to the Alpha pretty early on (sponsored by DEC; thanks Mad Dog).
So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty
small; VMS not so much.

- Dan C.
Arne Vajhøj
2024-01-05 03:22:19 UTC
Post by Dan Cross
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
:-)
Post by Dan Cross
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Remember, it took less time (and resources) than that to move Linux
from 32-bit x86 to 64-bit Alpha.
Very different task.
How different? It’s exactly the same sort of thing: port an OS to a new
architecture.
If you call both a new CPU and an underlying foreign OS kernel
"a new architecture", then yes.
But the reality is that it is very different.
New CPU -- check
“underlying foreign OS kernel” -- this was about porting the same kernel
onto a different CPU. In both cases.
So tell me again: “very different” how?
I think, again, you are talking at cross-purposes: my suspicion
is that Arne is referring to a VMS compatibility layer built on
top of Linux, not the effort of porting VMS to x86_64.
Yes.

I was being unclear in my response, so I think that one is on me.
Post by Dan Cross
That said, VMS was not originally written for portability and
wasn't ported to anything other than successive versions of the
VAX for the first 10 or so years it existed; Linux was ported
to the Alpha pretty early on (sponsored by DEC; thanks Mad Dog).
So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty
small; VMS not so much.
Yes.

But there is also the difference that Linux was implemented
(first and later) for existing architectures. They had to live
with what they got. When VMS was first created, the VMS software
people could walk over to the VAX HW people and say "we want
this nifty instruction to make our work easier". And Alpha got
the PAL code mechanism.

I believe one of the VSI people has said that one of the issues
in the x86-64 port is probing memory. VAX got PROBEx instructions.
Alpha got CALL_PAL PROBER and PROBEW.

Arne
Lawrence D'Oliveiro
2024-01-05 04:46:57 UTC
I believe one of the VSI people has said that one of the issues in the
x86-64 port is probing memory. VAX got PROBEx instructions.
Alpha got CALL_PAL PROBER and PROBEW.
Linux, too, has the issue of having to check that the addresses a caller
passes are actually accessible to that caller, in relevant system calls. And
unlike VMS, it has to deal with that issue across something like 2 dozen
different processor architectures, at current count.

Maybe look at how Linux deals with it?
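
The usual idiom there, as a kernel-side sketch (the "example" syscall is
hypothetical; copy_from_user() is the real interface):

#include <linux/syscalls.h>        /* SYSCALL_DEFINE2 */
#include <linux/uaccess.h>         /* copy_from_user  */

/* Never dereference a user pointer directly: copy_from_user() checks
   that the range is user-accessible and handles the fault if a page
   is unmapped. It returns the number of bytes it could NOT copy. */
SYSCALL_DEFINE2(example, const char __user *, ubuf, size_t, len)
{
    char kbuf[64];

    if (len > sizeof(kbuf))
        return -EINVAL;
    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;
    /* ... operate on kbuf ... */
    return 0;
}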

Or, like I said, avoid the issue by letting Linux deal with it.
Lawrence D'Oliveiro
2024-01-05 04:44:07 UTC
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive versions of the VAX for the
first 10 or so years it existed ...
And being typical of proprietary software, think of the layers of cruft
the code will have accumulated, first in the move to Alpha, then Itanium,
and now AMD64. All without ever really becoming a fully 64-bit OS.
Linux was ported to the Alpha pretty early on (sponsored by DEC; thanks
Mad Dog). So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty small;
VMS not so much.
Which is reinforcing my point, is it not? That Linux stands a good chance
of being able to take on enough of a VMS layer to make VMS itself
unnecessary.
Dan Cross
2024-01-05 13:27:14 UTC
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That would necessarily entail dragging in much of the rest of
the operating system. Which isn't to say that it couldn't be
done, but your "I'm only..." pseudo-subset appears to be a
suggestion borne of ignorance of what's actually involved.
Post by Lawrence D'Oliveiro
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive versions of the VAX for the
first 10 or so years it existed ...
And being typical of proprietary software, think of the layers of cruft
the code will have accumulated, first in the move to Alpha, then Itanium,
and now AMD64. All without ever really becoming a fully 64-bit OS.
You are, once again, speculating from a position of ignorance.

Consider that for both the VAX _and_ Alpha, DEC was able to
shape the design of the hardware _and_ of VMS simultaneously to
match one another. There is a big difference between "cruft"
and deep design decisions that impact portability to different
architectures that were not nearly so tightly coupled with the
software being ported.
Post by Lawrence D'Oliveiro
Linux was ported to the Alpha pretty early on (sponsored by DEC; thanks
Mad Dog). So Linux filed off a lot of portability sharp edges for the
machines at the time pretty early on, when it was still pretty small;
VMS not so much.
Which is reinforcing my point, is it not? That Linux stands a good chance
of being able to take on enough of a VMS layer to make VMS itself
unnecessary.
No, it isn't. At least not for those who aren't confused. It
is a comparison of very different things. Your point is simply
unfounded speculation based on fan-boyism and lack of technical
depth.

- Dan C.
Lawrence D'Oliveiro
2024-01-05 22:10:53 UTC
Post by Lawrence D'Oliveiro
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That would necessarily entail dragging in much of the rest of the
operating system.
No it wouldn’t. Any more than WINE entails implementing the whole of
Windows on top of Linux. We don’t need any actual supervisor-mode DCL, or
kernel-mode drivers, or any actual ACPs/XQPs, only a layer that emulates
their behaviour, for example. No need for EVL or MPW or the whole queue
system, because Linux already provides plenty of existing facilities for
that kind of thing. No VMScluster rigmarole.
Consider that for both the VAX _and_ Alpha, DEC was able to shape the
design of the hardware _and_ of VMS simultaneously to match one another.
And yet they were never able to make VMS a fully 64-bit OS, even on their
own fully 64-bit hardware.
Robert A. Brooks
2024-01-05 22:22:34 UTC
Post by Lawrence D'Oliveiro
And yet they were never able to make VMS a fully 64-bit OS, even on their
own fully 64-bit hardware.
That statement is literally not true.

The issue isn't that we are not capable of doing that; we don't want to break
decades of compatibility in order to do that.

The project of getting the native X86_64 C++ compiler to straddle the 32- and 64-bit world
of VMS and play nice with open source that expects full 64-bitness everywhere would be much
easier if we could abandon the 32-bit aspects of VMS, but we cannot, if we expect the vast majority
of our customers to remain on VMS.
--
-- Rob
Lawrence D'Oliveiro
2024-01-06 02:35:25 UTC
Post by Robert A. Brooks
Post by Lawrence D'Oliveiro
And yet they were never able to make VMS a fully 64-bit OS, even on
their own fully 64-bit hardware.
That statement is literally not true.
The issue isn't that we are not capable of doing that; we don't want to
break decades of compatibility in order to do that.
That’s just trying to rephrase it in a more PR-friendly way.
Post by Robert A. Brooks
The project of getting the native X86_64 C++ compiler to straddle the
32- and 64-bit world of VMS and play nice with open source that expects
full 64-bitness everywhere would be much easier if we could abandon the
32-bit aspects of VMS, but we cannot, if we expect the vast majority of
our customers to remain on VMS.
Such a long-winded way of saying “we could not make VMS fully 64-bit, even
on our own fully 64-bit hardware”.
John Dallman
2024-01-06 15:30:00 UTC
Post by Robert A. Brooks
The project of getting the native X86_64 C++ compiler to straddle
the 32- and 64-bit world of VMS and play nice with open source that
expects full 64-bitness everywhere would be much easier if we could
abandon the 32-bit aspects of VMS, but we cannot, if we expect the
vast majority of our customers to remain on VMS.
Such a long-winded way of saying _we could not make VMS fully
64-bit, even on our own fully 64-bit hardware_.
It can't be made fully 64-bit without breaking source-level compatibility
with customer code. This is due to a series of past decisions that all
seemed reasonable at the time, but have combined in an unfortunate way in
today's situation.

In about 1975, the VMS API was defined. Unlike [UL]inux with C, VMS is
not based on a particular programming language. It was considered vital
at the time to allow programs to be put together from a mix of
programming languages, including MACRO-32 assembler, BLISS, Pascal, Basic,
Cobol and Fortran. This meant that the calling convention and APIs were
defined in terms of bytes and words, using absolute sizes rather than
types that could change size with the memory model. 64-bit addressing was
not considered: a memory size of 1MB was considered large at the time.
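
The classic string descriptor shows the consequence. Paraphrased in C
(the real descrip.h spells the members dsc$w_length and so on; treat
this layout as a sketch from memory):

/* Every field has an absolute size, and the address is a 32-bit
   longword by definition, in every language's binding. */
struct dsc_descriptor_s {
    unsigned short dsc_w_length;   /* length in bytes (16-bit word) */
    unsigned char  dsc_b_dtype;    /* data type code (8-bit byte)   */
    unsigned char  dsc_b_class;    /* descriptor class (8-bit byte) */
    unsigned int   dsc_a_pointer;  /* address: a 32-bit longword    */
};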

When Alpha came along, the API immediately became a problem. The first
versions of VMS on Alpha were 32-bit, in that they stored addresses in
memory as 32 bits and sign-extended them when they were loaded into
registers. VMS processes only used the bottom 2GB and the top 2GB of
their 64-bit address spaces. This was not a problem with OSF/1 Unix,
which was always fully 64-bit, because addresses were C pointer types and
changed with the memory model.

Obviously, DEC needed a 64-bit VMS. They also needed it /soon/, so they
added 64-bit versions of the APIs that most needed to deal with lots of
memory. Quite a lot of APIs that took pointers to user memory carried on
taking 32-bit pointers, and thus could only deal with data in the bottom
2GB of a process address space.

They probably intended to add 64-bit versions of all the other APIs, but
this never happened, for reasons that probably included some of:

* Lack of budget: DEC was never as successful in the 1990s as it
had been in the 1980s.
* Lack of concern about source portability: management still thought
like a dominant company, although this was increasingly misleading.
* Diffusion of effort over many projects and products.

In C and C++, it's pretty easy to call alternate APIs according to the
memory model you're compiling for. You can do it with simple preprocessor
macros. Doing that for all the languages VMS supported, with supported
interoperability in the same process, is rather harder, and doesn't seem
to have been achieved. Customer source would have needed subtle, but
precise changes to call 64-bit APIs, and creating those APIs would have
been expensive. There was a VAX to Alpha binary translator too, and
32-bit processes had to be retained for that.
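
In C the preprocessor trick mentioned above looks something like this
hypothetical fragment (the sys$expreg/sys$expreg_64 pairing follows the
VMS naming convention; treat the details as illustrative):

#ifdef USE_64BIT_ADDRESSING
#  define EXPAND_REGION sys$expreg_64       /* 64-bit API variant */
   typedef unsigned long long vaddr_t;      /* 64-bit address     */
#else
#  define EXPAND_REGION sys$expreg          /* 32-bit API variant */
   typedef unsigned int vaddr_t;            /* 32-bit address     */
#endif

Nothing comparably cheap exists when the same choice has to be made
consistently across Pascal, COBOL, BASIC and MACRO-32 in one process.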

Then came Itanium. HP had to produce an Itanium version of VMS. Giving it
a complete 64-bit API at this point would have been the right thing to do,
but it would cost more than the basic job, and HP weren't that interested
in VMS. The 32-bit APIs would have had to be retained anyway for the
Alpha to Itanium translator, and to allow customer source compatibility.

Now we get to x86-64. We don't have translators from any of the previous
architectures, because doing that for Itanium is Very Hard, but we still
have the customer source compatibility problem. VSI is a much smaller
company than DEC or HP, and getting VMS capable of compiling and running
customers' source is clearly the first priority, since there's no more
Itanium hardware. Once that is completely achieved, they could start to
develop towards a fully 64-bit system, with steps like:

First, complete the 64-bit API, after all these years. It's a fairly
well-defined task, but I don't know how big it is.

Then, support the 64-bit APIs in the Clang headers, to allow building of
open-source programs that expect straight I32LP64. This should be fairly
straightforward.

Finally, provide options, errors and warnings in the DEC-heritage
compilers to make it reasonably easy to adapt customer source to using
the 64-bit APIs. This is a large and unpleasant project that requires
delving into many codebases. It doesn't start to produce useful results
until most of the languages have been done, and the work is all debugged.
You still have to support the 32-bit APIs, for customers who are staying
on 32-bit for reasons varying from "impossible to change" to "we don't
feel like it."

John
Lawrence D'Oliveiro
2024-01-06 20:27:42 UTC
Permalink
Post by John Dallman
It can't be made fully 64-bit without breaking source-level
compatibility with customer code.
...
Obviously, DEC needed a 64-bit VMS. They also needed it /soon/, so they
added 64-bit versions of the APIs that most needed to deal with lots of
memory. Quite a lot of APIs that took pointers to user memory carried on
taking 32-bit pointers, and thus could only deal with data in the bottom
2GB of a process address space.
They probably intended to add 64-bit versions of all the other APIs, but
* Lack of budget: DEC was never as successful in the 1990s as it
had been in the 1980s.
Yes, but remember, at the same time, they were able to bring out their own
Unix OS for the same hardware, and make it fully 64-bit from the get-go.

Look at how the Linux kernel does it, on platforms (e.g. x86) where 32-bit
code still matters: it is able to be fully 64-bit internally, yet offer
both 32-bit and 64-bit APIs to userland.
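As an illustration of how clean that is from userland (a sketch;
nothing platform-specific in the source), the same C file builds
either way and one 64-bit kernel runs both binaries:

    /* pointers.c -- build with "gcc -m32 pointers.c" or
       "gcc -m64 pointers.c"; a 64-bit kernel runs either binary,
       serving the 32-bit one through its compat syscall layer. */
    #include <stdio.h>

    int main(void)
    {
        printf("pointer size: %zu bits\n", 8 * sizeof(void *));
        return 0;
    }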

By about 1996, there were 4 OSes that you might say were in common use on
Alpha: DEC Unix, OpenVMS, Windows NT, and Linux. Two of them (Unix and
Linux) were fully 64-bit; one (OpenVMS) was a hybrid of 32- and 64-bit
code; and Windows NT was 32-bit only.
Arne Vajhøj
2024-01-06 20:59:58 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by John Dallman
It can't be made fully 64-bit without breaking source-level
compatibility with customer code.
Yes, but remember, at the same time, they were able to bring out their own
Unix OS for the same hardware, and make it fully 64-bit from the get-go.
It was the same technical problem, but a different business context.

Changing all pointers from 32 to 64 bit would break a lot of legacy
code.

But DEC was making a lot of money from VMS VAX customers and wanted
to keep those customers, so VMS VAX code had to build on VMS Alpha
(and it was actually possible to convert VMS VAX executables to
VMS Alpha executables without source code).

No such concern was made for the Ultrix customers going to DEC OSF/1
aka DUNIX aka Tru64.

DEC made less money from Ultrix. Ultrix and OSF/1 were two different
Unixes, so compatibility would have been difficult anyway. And porting
C code using a C API was easier.
Post by Lawrence D'Oliveiro
Look at how the Linux kernel does it, on platforms (e.g. x86) where 32-bit
code still matters: it is able to be fully 64-bit internally, yet offer
both 32-bit and 64-bit APIs to userland.
By about 1996, there were 4 OSes that you might say were in common use on
Alpha: DEC Unix, OpenVMS, Windows NT, and Linux. Two of them (Unix and
Linux) were fully 64-bit; one (OpenVMS) was a hybrid of 32- and 64-bit
code; and Windows NT was 32-bit only.
32 and 64 bit on Linux (or Windows) is totally different from
32 and 64 bit on VMS Alpha/Itanium/x86-64.

They have 32 bit code with 32 bit pointers and 64 bit code with
64 bit pointers.

VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).

Arne
Lawrence D'Oliveiro
2024-01-06 22:28:48 UTC
Permalink
No such concern was made for the Ultrix customers going to DEC OSF/1 aka
DUNIX aka Tru64.
DEC made less money from Ultrix. Ultrix and OSF/1 was two different
Unixes so compatibility would have been difficult anyway. And porting C
code using a C API was easier anyway.
You almost got the point, didn’t you? That POSIX had defined standard
types like “time_t” and “size_t”, and code that was written to adhere to
those types as appropriate was much easier to port between different
architectures. This applied to customer code, to third-party code ... to
all code.

And POSIX already existed when Dave Cutler commenced development on
Windows NT. Back when he was starting VMS, he could claim ignorance of
such techniques for avoiding obsolescence; what was his excuse this time?
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ... that
is practically the essence of 64-bit code.

Does that “64-bit” code on VMS still call LIB$EMUL?
Dan Cross
2024-01-07 00:27:15 UTC
Permalink
No such concern was made for the Ultrix customers going to DEC OSF/1 aka
DUNIX aka Tru64.
DEC made less money from Ultrix. Ultrix and OSF/1 was two different
Unixes so compatibility would have been difficult anyway. And porting C
code using a C API was easier anyway.
You almost got the point, didn't you? That POSIX had defined standard
types like "time_t" and "size_t", and code that was written to adhere to
those types as appropriate was much easier to port between different
architectures. This applied to customer code, to third-party code ... to
all code.
It took literally decades from the introduction of 64-bit Unix
machines until most software was 64-bit clean. I was there; it
was a painful time, and Linux was actually behind the curve
here compared to many of the commercial vendors.

The mere existence of those types a) didn't help the piles of
code that was sloppy and made assumptions about primitive types
and b) didn't help with binary compatibility during the
(lengthy) transition period. And yes, binary compatibility
mattered to a lot of people.
And POSIX already existed when Dave Cutler commenced development on
Windows NT. Back when he was starting VMS, he could claim ignorance of
such techniques for avoiding obsolescence; what was his excuse this time?
What was Linux's?
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ...
Of course you're not. "64-bit code" for something like x86
refers to details of the processor mode and e.g. the handling
of the REX prefix. On Alpha or Itanium, presumably that means
using the 64-bit ISA that uses e.g. 64-bit registers and so on.

But in either case, that's distinct from data pointers in
userspace being truncated and represented as 32-bit values, as only
the low 2GiB of the address space is used by VMS applications.
that is practically the essence of 64-bit code.
Nope.

- Dan C.
Arne Vajhøj
2024-01-07 00:37:30 UTC
Permalink
Post by Dan Cross
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ...
Of course you're not. "64-bit code" for something like x86
refers to details of the processor mode and e.g. the handling
of the REX prefix. On Alpha or Itanium, presumably that means
using the 64-bit ISA that uses e.g. 64-bit registers and so on.
But in either case, that's distinct from data pointers in
userspace are truncated represented as 32-bit values, as only
the low 2GiB of the address space is used by VMS applications.
A VMS application with all pointers being 32 bit only
uses the low 2 GB.

A VMS application with all 64 bit pointers or a mix of
32 bit and 64 bit pointers can use more (in theory 4 EB,
but I believe both HW and VMS have limits lower than that).

Arne
Lawrence D'Oliveiro
2024-01-07 00:49:18 UTC
Permalink
It took literally decades from the introduction of 64-bit Unix machines
until most software was 64-bit clean.
I was doing Unix sysadmin work on DEC Alphas in the late 1990s until the
early 2000s, when the client saw the writing on the wall and moved to
Linux (and so did I).

They frequently asked me to download, build and install various items of
open-source software. I don’t recall ever having a problem with 64-bitness
per se.
I was there; it was a painful
time, and Linux was actually behind the curve here compared to many of
the commercial vendors.
Jon “maddog” Hall shipped an Alpha to Linus Torvalds somewhere around
1995, and Linux was running native 64-bit on DEC Alpha in releasable form
by about 1996. That was only the second hardware platform that Linux
had been implemented on, at that stage. So it went portable at the same
time it went 64-bit.
The mere existence of those types a) didn't help the piles of code that
was sloppy and made assumptions about primitive types ...
Piles of proprietary code, certainly.
Arne Vajhøj
2024-01-07 00:28:41 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
VMS has only 64 bit code but both 32 bit pointers and 64 bit pointers
(32 bit pointers getting extended to 64 bit addresses).
Not sure how you can have 64-bit code without 64-bit addressing ... that
is practically the essence of 64-bit code.
It has 64 bit addressing. It only has 64 bit addressing.

But pointers can be stored in RAM as both 32 bit pointers
and 64 bit pointers.

A 32 bit pointer with value 0x12345678 does not mean a
32 bit address of 0x12345678 but a 64 bit address of
0x0000000012345678.

And a 32 bit pointer with value 0x87654321 does not mean a
32 bit address of 0x87654321 but a 64 bit address of
0xFFFFFFFF87654321.
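In C terms the rule is plain sign extension (a sketch with standard
types, not VMS-specific ones):

    /* How a 32 bit pointer becomes a 64 bit address: sign extension. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t low  = (int32_t)0x12345678;  /* positive: zero-filled */
        int32_t high = (int32_t)0x87654321;  /* negative: one-filled  */
        printf("%016llx\n", (unsigned long long)(int64_t)low);
        printf("%016llx\n", (unsigned long long)(int64_t)high);
        return 0;   /* prints 0000000012345678 and ffffffff87654321 */
    }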
Post by Lawrence D'Oliveiro
Does that “64-bit” code on VMS still call LIB$EMUL?
LIB$EMUL multiplies a 32 bit integer with a 32 bit integer
to give a 64 bit integer.

It is sort of obsolete because at the VAX to Alpha
move the languages got 64 bit integers.

The function still exists for compatibility reasons.

But it has nothing to do with addressing.
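In post-Alpha terms that is just native arithmetic (a sketch of the
32x32 -> 64 multiply, not the RTL's exact argument conventions):

    #include <stdint.h>

    /* What a LIB$EMUL-style multiply reduces to once the language
       has 64 bit integers: the product always fits. */
    static int64_t emul32(int32_t a, int32_t b)
    {
        return (int64_t)a * (int64_t)b;
    }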

Arne
John Dallman
2024-01-07 01:01:00 UTC
Permalink
Post by Lawrence D'Oliveiro
Yes, but remember, at the same time, they were able to bring out
their own Unix OS for the same hardware, and make it fully 64-bit
from the get-go.
That was a much more straightforward problem, with known solutions.
Post by Lawrence D'Oliveiro
Look at how the Linux kernel does it, on platforms (e.g. x86) where
32-bit code still matters: it is able to be fully 64-bit
internally, yet offer both 32-bit and 64-bit APIs to userland.
That isn't the situation with VMS. An individual 64-bit process will
contain calls to APIs that take 64-bit addresses, as you would expect.
But it can also contain calls to APIs that take 32-bit addresses, and
their data or buffers must be in the bottom 2GB of the process address
space. This is done with pointer qualifiers in C, like the "near" and
"far" pointers of 16-bit MS-DOS C compilers.

The 32-bit and 64-bit APIs have different names, because the multiplicity
of supported languages makes renaming according to the memory model
impractical.
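A sketch of how that looks in VMS C - the #pragma pointer_size
controls are the real mechanism, while the two service names here are
made up:

    #pragma pointer_size save
    #pragma pointer_size 32
    typedef char *char_ptr32;    /* stored as 32 bits; low 2GB only */
    #pragma pointer_size 64
    typedef char *char_ptr64;    /* a full 64-bit pointer           */
    #pragma pointer_size restore

    /* Hypothetical 32-bit and 64-bit variants of one service,
       coexisting in a single 64-bit process. */
    int old$service(char_ptr32 buf);     /* buffer must sit below 2GB */
    int new$service_64(char_ptr64 buf);  /* buffer can live anywhere  */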

On Linux, a given process is entirely 32-bit, or entirely 64-bit.

The mixture within a process that exists on VMS is very unusual: there
may be other OSes that do it, although I don't know of any. The current
situation is a good example of why nobody should do this in any other OS.


John

Dan Cross
2024-01-06 00:10:40 UTC
Permalink
Post by Lawrence D'Oliveiro
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
That would necessarily entail dragging in much of the rest of the
operating system.
No it wouldn't.
Bluntly, you haven't impressed me as having the technical
knowledge to be capable of offering an intelligent opinion on
the matter, so your statement is worthless.
Any more than WINE entails implementing the whole of
Windows on top of Linux. We don't need any actual supervisor-mode DCL, or
kernel-mode drivers, or any actual ACPs/XQPs, only a layer that emulates
their behaviour, for example. No need for EVL or MPW or the whole queue
system, because Linux already provides plenty of existing facilities for
that kind of thing. No VMScluster rigmarole.
On this I actually agree with Arne. He's right; you are wrong.
Consider that for both the VAX _and_ Alpha, DEC was able to shape the
design of the hardware _and_ of VMS simultaneously to match one another.
And yet they were never able to make VMS a fully 64-bit OS, even on their
own fully 64-bit hardware.
Cool story, bro.

- Dan C.
Single Stage to Orbit
2024-01-06 21:35:46 UTC
Permalink
Post by Lawrence D'Oliveiro
And yet they were never able to make VMS a fully 64-bit OS, even on
their own fully 64-bit hardware.
Eh?! It's 100% 64 bits.
--
Tactical Nuclear Kittens
Arne Vajhøj
2024-01-05 14:08:42 UTC
Permalink
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.

If the goal is 100% compatibility, then it becomes tricky and expensive.

There will be both some hard problems and a gazillion trivial problems
to deal with.

Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
Post by Lawrence D'Oliveiro
That said, VMS was not originally written for portability and wasn't
ported to anything other than successive version of the VAX for the
first 10 or so years it existed ...
And being typical of proprietary software, think of the layers of cruft
the code will have accumulated, first in the move to Alpha, then Itanium,
and now AMD64. All without ever really becoming a fully 64-bit OS.
I would expect practically zero cruft.

In general OS cruft does not come from adding CPU architecture support.

And both the Itanium and x86-64 ports have had as stated goals to
port as-is.

The Alpha port added a whole bunch of 64 bit stuff. But that is useful
stuff, not cruft.

Not everybody likes the implementation decisions made over 30 years
ago about that Alpha port, but that was prioritization back then,
still not cruft.

Arne
Arne Vajhøj
2024-01-05 14:23:36 UTC
Permalink
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
Another example:

SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE
as "LNM$SYSTEM_TABLE".

The API is not that complex. The semantics on VMS is well
documented.

But the code does not really make any sense on Linux. So
what to do?
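For reference, the VMS side is only a handful of lines (a sketch from
memory - check the System Services reference for the fine print; the
logical name and its value are made up):

    #include <descrip.h>
    #include <lnmdef.h>
    #include <prvdef.h>
    #include <starlet.h>
    #include <string.h>

    int create_system_logical(void)
    {
        unsigned long long prvmask = PRV$M_SYSNAM;  /* SYSNAM bit */
        $DESCRIPTOR(table, "LNM$SYSTEM_TABLE");
        $DESCRIPTOR(logname, "MY_APP_ROOT");        /* hypothetical */
        char value[] = "DKA0:[MYAPP]";              /* hypothetical */
        struct { unsigned short len, code; void *buf, *retlen; } items[] = {
            { (unsigned short)strlen(value), LNM$_STRING, value, 0 },
            { 0, 0, 0, 0 }                          /* list terminator */
        };
        unsigned int status = sys$setprv(1, (void *)&prvmask, 0, 0);
        if (!(status & 1))                          /* low bit clear: failed */
            return status;
        return sys$crelnm(0, &table, &logname, 0, items);
    }

The Linux question is not expressing this - it is deciding what a
system-wide logical name table and a SYSNAM privilege even mean there.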

Arne
Lawrence D'Oliveiro
2024-01-06 02:33:52 UTC
Permalink
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with a
server process and communication via some IPC mechanism. D-Bus or Varlink
might be good enough for this.
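The shape of it, with a plain Unix-domain socket standing in for
D-Bus/Varlink (socket path and one-line wire protocol are made up;
just a sketch of the client side the emulation layer would call):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Ask the logical-name server to translate "name" in "table";
       the reply is the value as plain text. */
    int lnm_lookup(const char *table, const char *name,
                   char *value, size_t valuelen)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char request[512];
        ssize_t n;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        strncpy(addr.sun_path, "/run/vmsemu/lnm.sock",
                sizeof addr.sun_path - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;
        }
        snprintf(request, sizeof request, "TRNLNM %s %s\n", table, name);
        write(fd, request, strlen(request));
        n = read(fd, value, valuelen - 1);
        close(fd);
        if (n <= 0)
            return -1;
        value[n] = '\0';
        return 0;
    }

The SYSNAM-style privilege check would live in the server, which can
identify the calling client via SO_PEERCRED.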
Arne Vajhøj
2024-01-06 04:40:33 UTC
Permalink
Post by Lawrence D'Oliveiro
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with a
server process and communication via some IPC mechanism. D-Bus or Varlink
might be good enough for this.
And another service for the privileges.

A lot is possible if one is willing to put enough effort into it.

But I don't see any point.

For this model you will end up with 10x the code doing emulation compared
to the original VMS code.

And performance will likely suck. You are really proposing a
microkernel design - a full size monolithic Linux kernel and
VMS services implemented as user mode services.

Arne
Lawrence D'Oliveiro
2024-01-06 05:23:22 UTC
Permalink
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE
as "LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with
a server process and communication via some IPC mechanism. D-Bus or
Varlink might be good enough for this.
And another service for the privileges.
A lot is possible if one is willing to put enough effort into it.
Remember that we have lots of off-the-shelf toolkits to call on. For
example, not having to invent our own hash-table format because we have
open-source libraries for that now.
Neil Rieck
2024-01-06 14:32:01 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
SYS$SETPRV with PRV$M_SYSNAM followed by SYS$CRELNM with LNM$_TABLE
as "LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones with
a server process and communication via some IPC mechanism. D-Bus or
Varlink might be good enough for this.
And another service for the privileges.
A lot is possible if one is willing to put enough effort into it.
Remember that we have lots of off-the-shelf toolkits to call on. For
example, not having to invent our own hash-table format because we have
open-source libraries for that now.
The official CEO blurb from VSI is on their LinkedIn page:

https://www.linkedin.com/company/vms-software-inc-/

Neil Rieck
Waterloo, Ontario, Canada.
https://neilrieck.net
Lawrence D'Oliveiro
2024-01-06 20:29:54 UTC
Permalink
Post by Neil Rieck
https://www.linkedin.com/company/vms-software-inc-/
“Specialties ... VMS Development and Disaster Recovery”

The two have nothing to do with each other, of course. ;)
Single Stage to Orbit
2024-01-06 21:22:20 UTC
Permalink
Post by Lawrence D'Oliveiro
SYS$SETPRV with PRV$M_SYSNAM  followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
--
Tactical Nuclear Kittens
Lawrence D'Oliveiro
2024-01-06 22:22:17 UTC
Permalink
Post by Single Stage to Orbit
Post by Lawrence D'Oliveiro
We can emulate logical names on Linux beyond the per-process ones with
a server process and communication via some IPC mechanism. D-Bus or
Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
No, environment variables are no good (other than for process-specific logical
names) because they do not affect processes other than the current one.
Arne Vajhøj
2024-01-07 00:31:23 UTC
Permalink
Post by Single Stage to Orbit
Post by Lawrence D'Oliveiro
SYS$SETPRV with PRV$M_SYSNAM  followed by SYS$CRELNM with
LNM$_TABLE as
"LNM$SYSTEM_TABLE".
The API is not that complex. The semantics on VMS is well
documented.
But the code does not really make any sense on Linux. So what to do?
We can emulate logical names on Linux beyond the per-process ones
with a server process and communication via some IPC mechanism. D-Bus
or Varlink might be good enough for this.
if setenv() and getenv() were thread-safe it'd be easier to use these.
I see environment variables being closer to VMS symbols than to
VMS logicals.

But just closer, not identical.

Arne
bill
2024-01-05 18:01:14 UTC
Permalink
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?

bill
Arne Vajhøj
2024-01-05 18:17:28 UTC
Permalink
Post by bill
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?
SS$_BADPARAM I believe.

But returning that for item codes that work on VMS could
easily break code.

Arne
bill
2024-01-05 18:20:46 UTC
Permalink
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?
SS$_BADPARAM I believe.
But returning that for item codes that work on VMS could
easily break code.
Times change. Sometimes code needs to as well. :-)

bill
Arne Vajhøj
2024-01-05 18:33:12 UTC
Permalink
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
I think, again, you are talking at cross-purposes: my suspicion is that
Arne is referring to a VMS compatibility layer built on top of Linux,
not the effort of porting VMS to x86_64.
I thought I made it pretty clear early on that I was only talking about
porting across userland executables and DCL command
procedures--just the
parts of VMS that users care about, nothing more.
If the goal is 90% compatibility, then it is reasonably easy and
low cost. But no customer demand.
If the goal is 100% compatibility, then it becomes tricky and expensive.
There will be both some hard problems and a gazillion trivial problems
to deal with.
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are
they going to return when asked for an item that does not exist
on Linux?
What would they return if asked for an item that does not exist on VMS?
SS$_BADPARAM I believe.
But returning that for item codes that work on VMS could
easily break code.
Times change.  Sometimes code needs to as well.  :-)
Yes.

But the point is that most companies don't want to
migrate from VMS to a VMS emulation environment and
still have to modify the code because the emulation
is only 90% - they stay on VMS or modify the code
to run natively on another platform.

Arne
Lawrence D'Oliveiro
2024-01-06 02:32:27 UTC
Permalink
Post by Arne Vajhøj
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are they
going to return when asked for an item that does not exist on Linux?
Maybe, be more specific. Give some examples of info you think would not
make sense to return (or emulate) under Linux, and we can discuss them.
Arne Vajhøj
2024-01-06 04:25:38 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Let me be specific. It is not difficult creating functions named
LIB$GETJPI and SYS$GETJPIW accepting certain arguments. But what are they
going to return when asked for an item that does not exist on Linux?
Maybe, be more specific. Give some examples of info you think would not
make sense to return (or emulate) under Linux, and we can discuss them.
Just take a list of all the JPI$_* codes.

Arne
Lawrence D'Oliveiro
2024-01-06 04:52:06 UTC
Permalink
Post by Arne Vajhøj
Just take a list of all the JPI$_* codes.
OK, looking at the VMS 5.5 System Services manual from Bitsavers (they
don’t seem to have anything more recent):

JPI$_ACCOUNT -- we can maintain that per-process
JPI$_APTCNT -- same as the resident working set
JPI$_ASTACT -- ASTs would have to be maintained as part of the emulation
layer, this count would come from there
JPI$_ASTCNT -- ditto
JPI$_ASTEN -- ditto ditto
JPI$_ASTLM -- ditto ditto ditto
JPI$_AUTHPRI -- equivalent to the “nice” value
JPI$_AUTHPRIV -- either emulation layer, or just some dummy value
JPI$_BIOCNT -- just a count of I/O operations to block devices in progress
JPI$_BIOLM -- a limit that could be imposed by the emulation layer
JPI$_BUFIO -- same thing, but for buffered I/O this time
JPI$_BYTCNT -- ditto
JPI$_BYTLM -- ditto
JPI$_CHAIN -- hmm, new to me, but no problem
JPI$_CLINAME -- part of the emulation layer (CLI would run in a separate
Linux process, of course, but there’s no reason the VMS code needs to be
aware of that)
JPI$_CPU_ID -- straightforward extraction from /sys/devices/system/cpu
JPI$_CPULIM -- can be obtained from prlimit(2)/getrlimit(2)
JPI$_CPUTIM -- can be obtained from getrusage(2)
JPI$_CREPRC_FLAGS -- maintained by emulation layer
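Each of those is a small shim. For instance JPI$_CPULIM (a sketch; the
exact VMS units would need checking against the manual):

    #include <sys/resource.h>

    /* JPI$_CPULIM: the process CPU-time limit, taking 0 as "none",
       assuming VMS's 10-millisecond ticks. */
    static int jpi_cpulim(unsigned int *out)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_CPU, &rl) != 0)
            return -1;
        if (rl.rlim_cur == RLIM_INFINITY)
            *out = 0;                                 /* no limit */
        else
            *out = (unsigned int)(rl.rlim_cur * 100); /* s -> 10ms ticks */
        return 0;
    }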

So that’s the second page done. I could keep going on, but do you want to
shortcut the process by pointing out where you think the traps lie?
Arne Vajhøj
2024-01-06 20:09:25 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Just take a list of all the JPI$_* codes.
OK, looking at the VMS 5.5 System Services manual from Bitsavers (they
JPI$_ACCOUNT -- we can maintain that per-process
JPI$_APTCNT -- same as the resident working set
JPI$_ASTACT -- ASTs would have to be maintained as part of the emulation
layer, this count would come from there
JPI$_ASTCNT -- ditto
JPI$_ASTEN -- ditto ditto
JPI$_ASTLM -- ditto ditto ditto
JPI$_AUTHPRI -- equivalent to the “nice” value
JPI$_AUTHPRIV -- either emulation layer, or just some dummy value
JPI$_BIOCNT -- just a count of I/O operations to block devices in progress
JPI$_BIOLM -- a limit that could be imposed by the emulation layer
JPI$_BUFIO -- same thing, but for buffered I/O this time
JPI$_BYTCNT -- ditto
JPI$_BYTLM -- ditto
JPI$_CHAIN -- hmm, new to me, but no problem
JPI$_CLINAME -- part of the emulation layer (CLI would run in a separate
Linux process, of course, but there’s no reason the VMS code needs to be
aware of that)
JPI$_CPU_ID -- straightforward extraction from /sys/devices/system/cpu
JPI$_CPULIM -- can be obtained from prlimit(2)/getrlimit(2)
JPI$_CPUTIM -- can be obtained from getrusage(2)
JPI$_CREPRC_FLAGS -- maintained by emulation layer
So that’s the second page done. I could keep going on, but do you want to
shortcut the process by pointing out where you think the traps lie?
It becomes complex to maintain that process state in a VMS process
style aka across image activations.

Let us look at the scenario where an EVE use press spawn
key and do a DIR to check some filenames.

On VMS it is:

process 1 with DCL in P1, EVE in P0 and process info in S0
subprocess 2 with DCL in P1, DIRECTORY in P0 and process info in S0

First attempt:

process A with DCL and "process 1" info
process B with EVE
process C with DCL and "subprocess 2" info
process D with DIRECTORY
IPC B->A
IPC D->C
IPC C->A

Making shells a server process is not good design so:

process A with "process 1" info
process B with DCL
process C with EVE
process D with "subprocess 2" info
process F with DIRECTORY
process E with DCL
IPC B->A
IPC C->A
IPC F->D
IPC E->D
IPC D->A

Messy.

Arne
Lawrence D'Oliveiro
2024-01-06 22:23:50 UTC
Permalink
Post by Lawrence D'Oliveiro
So that’s the second page done. I could keep going on, but do you want
to shortcut the process by pointing out where you think the traps lie?
It becomes complex to maintain that process state in a VMS process style
aka across image activations.
Not sure how that’s relevant to the question about $GETJPI.
Arne Vajhøj
2024-01-07 01:00:50 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
So that’s the second page done. I could keep going on, but do you want
to shortcut the process by pointing out where you think the traps lie?
It becomes complex to maintain that process state in a VMS process style
aka across image activations.
Not sure how that’s relevant to the question about $GETJPI.
GETJPI retrieves that info, so having the info correct per VMS
semantics is important for GETJPI, and VMS semantics are a bit
tricky because of the differences between VMS and *nix.

Arne
bill
2024-01-05 07:53:43 UTC
Permalink
Post by Dan Cross
Post by Lawrence D'Oliveiro
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with wine. You can install Wine on the steam
deck, but their success has much more to do with their native
architecture.
Post by Lawrence D'Oliveiro
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
That's a weird thing to say. I have been running Linux Desktops for
over 20 years.

bill
Dan Cross
2024-01-05 13:28:29 UTC
Permalink
Post by bill
Post by Dan Cross
Post by Lawrence D'Oliveiro
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with wine. You can install Wine on the steam
deck, but their success has much more to do with their native
architecture.
Post by Lawrence D'Oliveiro
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop. I can feel it!
That's a weird thing to say. I have been running Linux Desktops for
over 20 years.
It's the perennial joke about Linux replacing Windows as the
industry desktop of choice for end users. It seems like every
year is the "Year of the Linux Desktop."

- Dan C.
Lawrence D'Oliveiro
2024-01-05 22:11:56 UTC
Permalink
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a
“workstation” OS.
Arne Vajhøj
2024-01-05 22:31:16 UTC
Permalink
Post by Lawrence D'Oliveiro
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a
“workstation” OS.
Not everybody agrees on that.

https://ubuntu.com/download/desktop

"Download Ubuntu Desktop"

https://fedoraproject.org/

"The leading Linux desktop"

https://www.suse.com/products/desktop/

"SUSE Linux Enterprise Desktop"

Arne
Lawrence D'Oliveiro
2024-01-06 02:38:26 UTC
Permalink
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a
“workstation” OS.
Not everybody agrees on that.
Sure, most people call it “desktop”, because they have been conditioned to
think only in terms of a “desktop” situation.

For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as their
respective companies’ server offerings, with all the same capabilities. It
was Microsoft that came along and offered a “Workstation” OS that had cut-
down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
bill
2024-01-06 15:47:11 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a “desktop” OS, it is (and always has been) a
“workstation” OS.
Not everybody agrees on that.
Sure, most people call it “desktop”, because they have been conditioned to
think only in terms of a “desktop” situation.
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as their
respective companies’ server offerings, with all the same capabilities. It
was Microsoft that came along and offered a “Workstation” OS that had cut-
down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
Not sure I agree with this at all. It's been a long time and my
memory may not be what it once was but I distinctly remember the
only difference between NT Server and NT Workstation was Registry
Settings.

bill
Arne Vajhøj
2024-01-06 16:30:07 UTC
Permalink
Post by Lawrence D'Oliveiro
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as their
respective companies’ server offerings, with all the same
capabilities. It
was Microsoft that came along and offered a “Workstation” OS that had cut-
down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
Not sure I agree with this at all.  It's been a long time and my
memory may not be what it once was but I distinctly remember the
only difference between NT Server and NT Workstation was Registry
Settings.
Not much difference in those days.

But the difference has become bigger over time.

Arne
Lawrence D'Oliveiro
2024-01-06 20:11:43 UTC
Permalink
Post by Lawrence D'Oliveiro
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as
their respective companies’ server offerings, with all the same
capabilities. It was Microsoft that came along and offered a
“Workstation” OS that had cut-
down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
Not sure I agree with this at all. It's been a long time and my memory
may not be what it once was but I distinctly remember the only
difference between NT Server and NT Workstation was Registry Settings.
You are remembering NT 3.51, I think it was, when somebody discovered
that, indeed, all it took was a single Registry setting change to enable
“Server” functionality on an NT “Workstation” installation.

Microsoft fixed that in the next version. Remember, it was not in their
interests to allow this sort of thing to continue, given the significant
difference in price between the two products.

So you see, on the Unix side, the vendors never thought to charge any
different for the “workstation” versus “server” software, because it was
the exact same software, with the exact same capabilities.

Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Arne Vajhøj
2024-01-06 20:25:59 UTC
Permalink
Post by Lawrence D'Oliveiro
So you see, on the Unix side, the vendors never thought to charge any
different for the “workstation” versus “server” software, because it was
the exact same software, with the exact same capabilities.
Commercial Unix was usually sold as systems - a bundle of HW and OS.

Making the distinction irrelevant.
Post by Lawrence D'Oliveiro
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
I don't see the big difference between Linux and Windows in that regard.

MS has an NT kernel NN.N that ends up in Windows MM and Windows Server YYYY.

Desktop and server Windows do share kernel. It is all the services
and tools on top that are different.

Linux kernel V.V ends up in distro Zzzzz Server P.P and distro Zzzzz desktop.

Most commercial Linux distros have both a server version and a
desktop version. Including Redhat, SUSE and Ubuntu.

Arne
Lawrence D'Oliveiro
2024-01-06 20:40:24 UTC
Permalink
Post by Arne Vajhøj
Post by Lawrence D'Oliveiro
So you see, on the Unix side, the vendors never thought to charge any
different for the “workstation” versus “server” software, because it
was the exact same software, with the exact same capabilities.
Commercial Unix was usually sold as systems - a bundle of HW and OS.
Making the distinction irrelevant.
Let me repeat the point: if you wanted server-type functionality, you
didn’t need to specifically buy a server box. You could do it with one of
their workstations.

This was true of all the Unix vendors, it was not true of Windows NT.
Post by Arne Vajhøj
Most commercial Linux distros have both a server version and a
desktop version. Including Redhat, SUSE and Ubuntu.
That’s purely a difference of packaging, not functionality. For example,
if I run “desktop” Fedora, SUSE or Ubuntu, I am not limited in the number
of network shares, or the number of concurrent users, or what server
packages I can install (such as Samba, Kerberos, BIND, Apache, Nginx,
LDAP, MariaDB, PostgreSQL, mail MTAs, NNTP servers, virtualization/
containerization, whatever). It’s all the same.
Dan Cross
2024-01-06 23:42:26 UTC
Permalink
Post by Lawrence D'Oliveiro
For what I mean by “workstation”, look at the capabilities of the Unix
workstations in the 1980s/1990s: remember, they ran the same OS as
their respective companies’ server offerings, with all the same
capabilities. It was Microsoft that came along and offered a
“Workstation” OS that had cut-
down capabilities compared to their “Server” offering, so they could
charge less for the former ... and more for the latter.
Not sure I agree with this at all. It's been a long time and my memory
may not be what it once was but I distinctly remember the only
difference between NT Server and NT Workstation was Registry Settings.
You are remembering NT 3.51, I think it was, when somebody discovered
that, indeed, all it took was a single Registry setting change to enable
“Server” functionality on an NT “Workstation” installation.
Microsoft fixed that in the next version. Remember, it was not in their
interests to allow this sort of thing to continue, given the significant
difference in price between the two products.
So you see, on the Unix side, the vendors never thought to charge any
different for the "workstation" versus "server" software, because it was
the exact same software, with the exact same capabilities.
I remember pretty specifically maximum user limits on versions
of commercial Unix. Most of the time it didn't matter for a
workstation, where only one user at a time (generally) was
logged into the machine. For servers and timesharing hosts?
It was a big deal.
Post by Lawrence D'Oliveiro
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.

- Dan C.
Lawrence D'Oliveiro
2024-01-07 00:13:17 UTC
Permalink
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost “layered product”, not to the core OS.

Because consider that users are defined in /etc/passwd, which is just a
text file. How would you limit the number of lines in that? And the kernel
itself knows nothing of which user/group IDs are “valid” or “invalid”, it
will happily accept any numbers within the permissible ranges, regardless
of whether they appear in /etc/passwd or not. A network service (like
Telnet or SSH or file service) could limit the number of concurrent
connections, I suppose. But given there was open-source code available for
all of that anyway, it would be easy enough to bypass the limits by
replacing the vendor-provided code.

(Unless maybe you’re talking about IBM’s AIX. I am dimly aware that that
had its own proprietary ways of configuring things, that the traditional
*nix text-based configuration files were only a partial reflection of
that.)
Post by Lawrence D'Oliveiro
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.
I did say “widespread”. ;)
Dan Cross
2024-01-07 00:19:43 UTC
Permalink
Post by Lawrence D'Oliveiro
I remember pretty specifically maximum user limits on versions of
commercial Unix.
How would such limits be enforced? Presumably they only applied to some
extra-cost "layered product", not to the core OS.
No, they applied to the OS as a whole.
Post by Lawrence D'Oliveiro
Because consider that users are defined in /etc/passwd, which is just a
text file. How would you limit the number of lines in that?
Ignore lines after some predetermined maximum? Just not let
more than $n$ users at a time login? Just because you find it
difficult to conceive of how it was done does not mean that it
was not done.
Post by Lawrence D'Oliveiro
And the kernel
itself knows nothing of which user/group IDs are "valid" or "invalid", it
will happily accept any numbers within the permissible ranges,
Assuming it hasn't been modified. Remember, commercial Unix
vendors had the source code and modified it.
Post by Lawrence D'Oliveiro
regardless
of whether they appear in /etc/passwd or not. A network service (like
Telnet or SSH or file service) could limit the number of concurrent
connections, I suppose. But given there was open-source code available for
all of that anyway, it would be easy enough to bypass the limits by
replacing the vendor-provided code.
Much of this was before SSH was invented, and way before "open
source" was the force it is today.
Post by Lawrence D'Oliveiro
(Unless maybe you're talking about IBM's AIX. I am dimly aware that that
had its own proprietary ways of configuring things, that the traditional
*nix text-based configuration files were only a partial reflection of
that.)
AIX was a pretty standard port of System V at the kernel level,
but they made a lot of changes in userspace. But then, so did
most of the Unix vendors.

Regardless, even if you could get *around* it, you'd be
violating your license agreement, which isn't a great idea.
Post by Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
Today, the only OS in widespread use with this commonality of function
across disparate hardware configurations is Linux.
Or FreeBSD. Or OpenBSD.
I did say "widespread". ;)
Yes. So I mentioned FreeBSD and OpenBSD. You, undoubtedly,
simply aren't aware of just how widespread they are.

- Dan C.
Lawrence D'Oliveiro
2024-01-07 00:53:22 UTC
Permalink
Much of this was before SSH was invented, and way before "open source"
was the force it is today.
Open source was there practically from the beginning of the Unix
workstation era (say, mid-1980s onwards). The first thing any seasoned
Unix sysadmin did on a new machine was download, build and install the GNU
tools. In the pre-RISC era, GCC was known for generating better code than
the vendors’ own compilers. And in any case, Bash was almost always a
better shell than whatever buggy version of sh or csh the vendor provided.
GNU tools tended to have more functionality than vendor-specific ones. And
so on.
Dan Cross
2024-01-06 00:12:08 UTC
Permalink
It's the perennial joke about Linux replacing Windows as the industry
desktop of choice for end users.
Except Linux was never a "desktop" OS, it is (and always has been) a
"workstation" OS.
That's hilarious.

- Dan C.
Arne Vajhøj
2024-01-05 13:45:16 UTC
Permalink
Post by Lawrence D'Oliveiro
Have you noticed how the world has moved from Windows to Linux with
Wine?
Yes. Look at the (Linux-based) Steam Deck, which has been making some
inroads into the very core of Windows dominance, namely the PC gaming
market. Enough to get Microsoft to take notice.
That's not Linux with wine.  You can install Wine on the steam
deck, but their success has much more to do with their native
architecture.
Post by Lawrence D'Oliveiro
MS tried WSL1 and changed to a VM model with WSL2.
2 x commercial failure.
On the part of Windows, not on the part of Linux.
2024 will be the year of the Linux desktop.  I can feel it!
That's a weird thing to say.  I have been running Linux Desktops for
over 20 years.
"the year of the Linux desktop" is a known phrase (meme)
alluding to the permanent expectation from some Linux users
that Linux will take over the desktop market.

Arne
bill
2024-01-05 18:08:40 UTC
Permalink
Post by Arne Vajhøj
2024 will be the year of the Linux desktop.  I can feel it!
That's a weird thing to say.  I have been running Linux Desktops for
over 20 years.
"the year of the Linux desktop" is a known phrase (meme)
alluding to the permanent expectation from some Linux users
that Linux will take over the desktop market.
OK. Guess I never followed all that silliness enough
to even notice.

But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products? Why do you think they continue to
use MS instead of Linux?

bill
Arne Vajhøj
2024-01-05 18:27:11 UTC
Permalink
Post by bill
But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products?  Why do you think they continue to
use MS instead of Linux?
There are two big groups of Windows usage:
* business - in the office
* consumer - at home

On the business side the driver are probably mostly about
integration.

Windows PC's with Edge, Outlook, Office and Teams works
with Active Directory, SharePoint, phone system, mobile
phones etc..

Too expensive and too risky to try and migrate that
to Linux based solution.

On the consumer side I expect drivers like:
- they know Windows
- they use Windows at work
- they have some old Windows programs that they like
- their PC came with Windows

The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.

Arne
bill
2024-01-05 18:43:42 UTC
Permalink
Post by Arne Vajhøj
Post by bill
But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products?  Why do you think they continue to
use MS instead of Linux?
* business - in the office
* consumer - at home
I didn't say classes. I said largest single user. How about
the US Government? They also happen to be the largest business
(if you really want to call them that) in the US. Definitely
the current largest employer, which gives them a lot of users.
Post by Arne Vajhøj
On the business side the driver are probably mostly about
integration.
Windows PC's with Edge, Outlook, Office and Teams works
with Active Directory, SharePoint, phone system, mobile
phones etc..
Too expensive and too risky to try and migrate that
to Linux based solution.
Actually, the biggest reason is more likely to be political.
Or government finances (another system that would bankrupt
any real business!!)
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
Post by Arne Vajhøj
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work. One does not preclude
the other.
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Post by Arne Vajhøj
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
Post by Arne Vajhøj
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue. Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.

Only time will tell but I really think like so many other IT
Giants MS's time is running out. I only wish I was likely to
still be around to see it. :-)

bill
Arne Vajhøj
2024-01-05 19:05:55 UTC
Permalink
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe want to put in.
Post by bill
Post by Arne Vajhøj
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work.  One does not preclude
the other.
"people who develop Linux OS and apps" are very different from
average Joe.
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not much sale.
Post by bill
Post by Arne Vajhøj
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue.
No guarantees for anything.

Well the old joke say that it is guaranteed that we will all
die and taxes will go up.

:-)
Post by bill
  Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.

But there is also a large number of home PC's with Windows but
without MS Office.

Arne
bill
2024-01-05 19:26:07 UTC
Permalink
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe want to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I can not figure out on Windows 10.
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work.  One does not preclude
the other.
"people who develop Linux OS and apps" are very different from
average Joe.
Of course, but the way you said it one might imply that use of
Windows "at work" made use of Linux unlikely or impossible. I
used Windows at work for decades. Even admined a lot of Windows
systems. And yet, I also use VMS, Unix, Linux, Os-9, Plan9, etc.
And do development work on all of them.
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly. I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (Illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not much sale.
Because the seller was still required to pay the (illegal?) MS tax.
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue.
No guarantees for anything.
Well the old joke say that it is guaranteed that we will all
die and taxes will go up.
:-)
Post by bill
                                    Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
That's true. I have a laptop running Windows 10 that only performs the
task people have accused the PC of all along. It launches Minecraft and
then runs as a game console. :-)

bill
Arne Vajhøj
2024-01-05 20:32:48 UTC
Permalink
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe want to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different.  There are still things I used
to do that I can not figure out on Windows 10.
Apparently the average Joes of the world think differently.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly.  I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.

APIs are maintained.
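
A minimal sketch of what that stability means in practice (my own
illustration, not anything from MS documentation; Windows only): the
call below has kept the same four-argument signature since the first
Win32 releases, and Python's ctypes can still reach it directly.

    import ctypes  # standard library; needs Windows

    # MessageBoxW has exported the same signature from user32.dll for
    # roughly three decades; a binary built against it in the NT/2000
    # era resolves the same entry point today.
    ctypes.windll.user32.MessageBoxW(
        None, "Built for NT, still running", "Win32 stability", 0)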
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not many sales.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.

30 years ago.
Post by bill
Post by Arne Vajhøj
Post by bill
                                    Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot of people use
it for various services on their smartphones.
But there is also a large number of home PCs with Windows and
no MS Office.
That's true.  I have a laptop running Windows 10 that only performs the
task people have accused the PC of all along.  It launches Minecraft and
then runs as a game console.  :-)
You should be able to run Minecraft on Linux.

Arne
Dave Froble
2024-01-05 20:52:38 UTC
Permalink
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I cannot figure out on Windows 10.
Apparently the average Joes of the world think differently.
Not claiming to be an "average Joe", but this is being posted from an XP system.
I'm less than happy with the WEENDOZE 7 and later user interface. Of course,
the latest SSL/TLS versions don't work here, and I'm limited on browser versions.
Nor does my version of SmarTerm work on WEENDOZE 7 and later.
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work. I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly. I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
APIs are maintained.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux. And it could make them cheaper.
It has been tried. Not many sales.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Post by bill
Post by Arne Vajhøj
Post by bill
Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
Office 2000 works fine for me. Until someone sends me a docx file and such.
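
That said, a .docx is just a ZIP archive of XML, so even without a
modern Office the text can be pulled out of one. A minimal sketch (my
own illustration, standard library only; the in-archive path
word/document.xml is where the body lives):

    import re
    import sys
    import zipfile

    # open the .docx named on the command line and read its main part
    with zipfile.ZipFile(sys.argv[1]) as z:
        xml = z.read("word/document.xml").decode("utf-8")

    # crude tag stripping - good enough to read the words
    print(re.sub(r"<[^>]+>", " ", xml))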
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
bill
2024-01-06 15:59:27 UTC
Permalink
Post by Dave Froble
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I cannot figure out on Windows 10.
Apparently the average Joes of the world think differently.
Not claiming to be an "average Joe", but this is being posted from an XP
system.  I'm less than happy with the WEENDOZE 7 and later user
interface.  Of course, the latest SSL/TLS versions don't work here, and I'm
limited on browser versions. Nor does my version of SmarTerm work on
WEENDOZE 7 and later.
Do you have Updates turned off? I remember that at some point an update
caused the system to start constantly reminding me that XP was dead;
that was really annoying, but the bad part was when it just stopped
starting up.
Post by Dave Froble
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly.  I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
APIs are maintained.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not many sales.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Post by bill
Post by Arne Vajhøj
Post by bill
                                    Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
Office 2000 works fine for me.  Until someone sends me a docx file and
such.
I have old versions of Office (still have a bunch of OEM ones in the
shrinkwrap) that work just fine. I don't need them, as I moved to
open-source Office a long time ago, even when I could still get MS Office
for free through the university I worked at and the military. But that
doesn't help much with using my EPROM programmer or my telescope camera or
any of the other devices whose supporting software no longer runs on Windows.

bill
Lawrence D'Oliveiro
2024-01-06 20:21:15 UTC
Permalink
Post by bill
I have old versions of Office (still have a bunch of OEM ones in the
shrinkwrap) that work just fine. I don't need them, as I moved to
open-source Office a long time ago.
Related to this, it is dismaying to discover how many people, even
supposed professionals, continue to use Microsoft Excel to do
statistical analysis. And suffer the bugs therefrom, often without
even noticing. This came to a head when the decision was made some
years ago, in the genetics community, to change the official names of
some genes, just to avoid Excel continually trying to interpret them
as dates.

This article
<https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008984>
looks at the situation since then, to see if researchers are behaving
any more intelligently nowadays. The answer seems to be no.
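
The failure mode is easy to reproduce outside Excel. A minimal sketch
(my illustration; dateutil's date guessing stands in for Excel's
auto-conversion - the heuristics differ, but old gene symbols trip
both the same way):

    from dateutil import parser  # pip install python-dateutil

    # SEPT2, MARCH1 and DEC1 are pre-2020 gene symbols; TP53 and
    # BRCA1 are controls that do not look like dates
    for symbol in ["SEPT2", "MARCH1", "DEC1", "TP53", "BRCA1"]:
        try:
            mangled = parser.parse(symbol)
            print(f"{symbol:>6} -> silently becomes {mangled.date()}")
        except (ValueError, OverflowError):
            print(f"{symbol:>6} -> survives as text")

It was symbols exactly like these that got renamed (SEPT2 to SEPTIN2,
MARCH1 to MARCHF1); the usual defence is to import such columns
explicitly as text instead of letting the loader guess.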
chrisq
2024-01-06 18:14:17 UTC
Permalink
Post by Dave Froble
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I cannot figure out on Windows 10.
Apparently the average Joes of the world think differently.
Not claiming to be an "average Joe", but this is being posted from an XP
system.  I'm less than happy with the WEENDOZE 7 and later user
interface.  Of course, the latest SSL/TLS versions don't work here, and I'm
limited on browser versions. Nor does my version of SmarTerm work on
WEENDOZE 7 and later.
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly.  I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
APIs are maintained.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not many sales.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Post by bill
Post by Arne Vajhøj
Post by bill
                                    Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
Office 2000 works fine for me.  Until someone sends me a docx file and
such.
No one I know uses MS Office anymore. Have a look at LibreOffice
for a better experience. Free, and works on all the usual OSes as well...
bill
2024-01-06 15:51:04 UTC
Permalink
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
It may still be more effort than average Joe wants to put in.
They all accepted it (like I had to) when MS killed XP and Vista.
The replacement was very different. There are still things I used
to do that I cannot figure out on Windows 10.
Apparently the average Joes of the world think differently.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Any ordinary application built for NT or 2000 should still work.
Not hardly.  I have boxes of programs from versions of Windows much
newer than NT and 2000 that will not run on my Windows 10 system.
And many more that required that I get a newer version (sometimes
not a free upgrade).
There is no reason why it should not work.
There may be no reason, but I still have a lot of software that
ran fine on Vista and XP that does not run on Win 10.
Post by Arne Vajhøj
APIs are maintained.
Post by bill
Post by Arne Vajhøj
Post by bill
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
It has been tried. Not many sales.
Because the seller was still required to pay the (illegal?) MS tax.
MS dropped that practice in 1994.
30 years ago.
Post by bill
Post by Arne Vajhøj
Post by bill
                                    Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Some are comfortable with the subscription model. A lot use
it for various services used by their smartphone.
But there is also a large number of home PC's with Windows but
without MS Office.
That's true.  I have a laptop running Windows 10 that only performs the
task people have accused the PC of all along.  It launches Minecraft and
then runs as a game console.  :-)
You should be able to run Minecraft on Linux.
Why would I want to waste a good Linux machine when I have this Windows
box that's not really good for much else ?

bill
Lawrence D'Oliveiro
2024-01-06 20:15:55 UTC
Permalink
Post by bill
There may be no reason, but I still have a lot of software that
ran fine on Vista and XP that does not run on Win 10.
There seems to be this persistent myth that Microsoft never breaks old
software in new Windows versions. In fact, Windows has become so complex
that it is impossible for them to avoid such breakage.
Lawrence D'Oliveiro
2024-01-06 02:41:27 UTC
Permalink
How about the US Government.
In Europe, governmental organizations seem a bit more open to adopting
non-proprietary systems.

The most notorious case has to be the Munich city council, which moved to
Linux years ago, then faced a massive pressure campaign from Microsoft
(aided and abetted by HP, I think it was, at one stage) to try to make it
appear that they were worse off as a result. Which they were not.
Arne Vajhøj
2024-01-06 05:10:26 UTC
Permalink
Post by Lawrence D'Oliveiro
The most notorious case has to be the Munich city council, which moved to
Linux years ago, then faced a massive pressure campaign from Microsoft
(aided and abetted by HP, I think it was, at one stage) to try to make it
appear that they were worse off as a result. Which they were not.
Some like to blame MS for what happened. But the project
execution does not look like something to emulate.

* significant software development to develop Munich-specific
solutions:
- a special Linux "distro", LiMux
- a large extension to OpenOffice/LibreOffice called WollMux,
needed because the base software did not do what they wanted
* constant changes in base software:
- Debian -> Ubuntu 10.04 -> Ubuntu 12.04 -> Ubuntu 14.04 -> Kubuntu 18
- KDE 3.5 -> KDE 4.12 -> KDE 4.14 -> KDE 5.44
- OpenOffice 3.1 -> LibreOffice 4.1 -> LibreOffice 4.15 ->
LibreOffice 5.2
- Firefox -> Chrome
* never got Linux adoption over 5/6 of PCs, meaning that
they still had to support Windows in parallel

That sounds like a total IT fuckup to me.

I don't think it is Linux's fault. Similar disasters have
happened with closed-source software, like various ERP projects.

But it is not an example that any smart CIO wants to follow.

Arne
Lawrence D'Oliveiro
2024-01-06 05:22:21 UTC
Permalink
Some like to blame MS for what happened. But the project execution does
not look like something to emulate.
It saved money overall. That was one of the main points of the exercise.
bill
2024-01-06 16:01:34 UTC
Permalink
Post by Lawrence D'Oliveiro
How about the US Government.
In Europe, governmental organizations seem a bit more open to adopting
non-proprietary systems.
The most notorious case has to be the Munich city council, which moved to
Linux years ago, then faced a massive pressure campaign from Microsoft
(aided and abetted by HP, I think it was, at one stage) to try to make it
appear that they were worse off as a result. Which they were not.
My point exactly. If the US Government were somehow convinced to drop
MS, how long do you think MS would continue? But the decision to use
MS products is not in any way technology-based.

bill
Chris Townley
2024-01-06 17:16:25 UTC
Permalink
Post by Arne Vajhøj
Post by bill
But on the subject of Linux vs. MS on the desktop.
Who would you say was the largest single user of MS
Windows products?  Why do you think they continue to
use MS instead of Linux?
* business - in the office
* consumer - at home
I didn't say classes.  I said largest single user.  How about
the US Government, which also happens to be the largest business
(if you really want to call it that) in the US.  Definitely
the current largest employer, which gives them a lot of users.
Post by Arne Vajhøj
On the business side the drivers are probably mostly about
integration.
Windows PCs with Edge, Outlook, Office and Teams work
with Active Directory, SharePoint, the phone system, mobile
phones, etc.
Too expensive and too risky to try and migrate that
to a Linux-based solution.
Actually, the biggest reason is more likely to be political.
Or government finances (another system that would bankrupt
any real business!!)
Post by Arne Vajhøj
- they know Windows
At the user level very low learning curve to change.
Post by Arne Vajhøj
- they use Windows at work
I would expect that most of the people who develop Linux OS and
apps in their spare time use Windows at work.  One does not preclude
the other.
Post by Arne Vajhøj
- they have some old Windows programs that they like
Unless they are running an old version of Windows their
programs probably don't work.  I have had to replace external
hardware devices I use not because the device stopped working
but because the software did.
Post by Arne Vajhøj
- their PC came with Windows
And, if the (illegal?) pressure from MS was removed they could
just as easily come with Linux.  And it could make them cheaper.
Post by Arne Vajhøj
The hassle of changing to Linux is not worth it given
how cheap Windows is for consumers.
Something not guaranteed to continue.  Note how Office is no longer
sold but uses a subscription service so they can continue to collect
revenue while forcing users to constantly change to newer versions
even if the newer version offers the user nothing.
Only time will tell, but I really think that, like so many other IT
giants, MS's time is running out.  I only wish I were likely to
still be around to see it. :-)
bill
You can still buy MS Office, quite apart from the one-time keys, which
apparently are legal but sell at silly low prices.

I also believe there are more Linux desktops out there than people give
credit for. A few local councils in the UK went that way a few years back.
Also look at the Raspberry Pi - over 55 million units sold, which by
default will run a slightly tailored version of Debian, with a perfectly
usable desktop and LibreOffice.
--
Chris
Lawrence D'Oliveiro
2024-01-06 20:14:26 UTC
Permalink
Post by Chris Townley
I also believe there are more linux desktops out there than people give
credit for.
lxer.com maintains an ongoing list of deployments:
<http://lxer.com/module/db/viewby.php?uid=108&option=&value=&sort=108&offset=0&dbn=12>.
Arne Vajhøj
2024-01-04 15:02:45 UTC
Permalink
Post by Simon Clubley
BTW, what the hell is "Intercultural Communication" ?
Probably something about the need to communicate differently
with people from different cultural backgrounds. Do you start directly
with the point, or do you open with some polite chit-chat? Does the
boss give orders or make suggestions to the team? Etc., etc.

Useful skill.

Arne
gcalliet
2024-01-04 18:45:31 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
BTW, what the hell is "Intercultural Communication" ?
Probably something about the need to communicate differently
with people from different cultural backgrounds. Do you start directly
with the point, or do you open with some polite chit-chat? Does the
boss give orders or make suggestions to the team? Etc., etc.
Useful skill.
Arne
Perhaps intercultural skills are necessary to talk with computers,
computer scientists, and businessmen all at once, from Europe to the US
and the US to Europe... :)

I remember meeting her during the first bootcamps of the new era.
Impressive in her cleverness, and really curious about VMS culture.

I am hoping she will be the one who gets a "vision"... which is the
first function of a good CEO.

Yes, being born Russian can generate problems. I think it underlines
the determination behind this choice. Johan Geda takes a risk here.

Gérard Calliet
--
This e-mail has been checked for viruses by Avast antivirus software.
www.avast.com
Arne Vajhøj
2024-01-04 19:10:30 UTC
Permalink
Post by gcalliet
Post by Arne Vajhøj
Post by Simon Clubley
BTW, what the hell is "Intercultural Communication" ?
Probably something about the need to communicate differently
with people from different cultural backgrounds. Do you start directly
with the point, or do you open with some polite chit-chat? Does the
boss give orders or make suggestions to the team? Etc., etc.
Useful skill.
Perhaps intercultural skills are necessary to talk with computers,
computer scientists, and businessmen all at once, from Europe to the US
and the US to Europe... :)
And Asia.

Many westerners have messed up big time in Asia because they did not
understand the culture.
Post by gcalliet
I remember meeting her during the first bootcamps of the new era.
Impressive in her cleverness, and really curious about VMS culture.
:-)
Post by gcalliet
I am hoping she will be the one who gets a "vision"... which is the
first function of a good CEO.
To me the most significant thing is that she is not ex-DEC (or ex-IBM
or ex-bigcorpwhatever). That must give a different perspective on VMS.

She joined a small company with a niche OS that she needs to get to
prosper and grow.

She did not join the second-largest IT company in the world (DEC in the
80's) with one of the world's major OSes (VMS in the 80's), watch
it decline over several decades, and want to "resurrect" it.

Nothing wrong with being old (I am old!). But experience leaves an
impact on one's thinking.

She could be the right person to move VSI and VMS into the 2030's.

Arne
Craig A. Berry
2024-01-05 13:24:06 UTC
Permalink
Post by Arne Vajhøj
She did not join the second-largest IT company in the world (DEC in the
80's) with one of the world's major OSes (VMS in the 80's), watch
it decline over several decades, and want to "resurrect" it.
Nothing wrong with being old (I am old!). But experience leaves an
impact on one's thinking.
The previous CEO (Kevin Shaw) was 44 when he was killed by a car while
crossing the street, so it's not news that the next generation of
leadership will be too young to have been ex-DECCies.
Post by Arne Vajhøj
She could be the right person to move VSI and VMS into the 2030's.
Let's hope. Some of the engineering seems to be getting done. It
remains to be seen whether they will get back to any of the bigger
engineering projects after the port is done or develop any more of a
clue about the customers and the community than previous purveyors of
VMS have had.
Arne Vajhøj
2024-01-05 13:37:09 UTC
Permalink
Post by Craig A. Berry
Post by Arne Vajhøj
She did not join the second-largest IT company in the world (DEC in the
80's) with one of the world's major OSes (VMS in the 80's), watch
it decline over several decades, and want to "resurrect" it.
Nothing wrong with being old (I am old!). But experience leaves an
impact on one's thinking.
The previous CEO (Kevin Shaw) was 44 when he was killed by a car while
crossing the street, so it's not news that the next generation of
leadership will be too young to have been ex-DECCies.
He was also relatively young.

But I believe he came over from the mainframe world.

Arne