Discussion:
SAMBA and Ransomware
Neil Rieck
2017-07-12 14:48:40 UTC
Permalink
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.

https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/

Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
Arne Vajhøj
2017-07-13 00:09:55 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
There probably also are some vulnerabilities in Samba for VMS.

Arne
Ian Miller
2017-07-14 09:46:50 UTC
Permalink
Post by Arne Vajhøj
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
There probably also are some vulnerabilities in Samba for VMS.
Arne
CVE-2017-7494 applies to Samba 3.5.0 onwards. OpenVMS CIFS is based on Samba 3.0.28a (released in 2008).
Arne Vajhøj
2017-07-14 23:25:39 UTC
Permalink
Post by Ian Miller
Post by Arne Vajhøj
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
There probably also are some vulnerabilities in Samba for VMS.
CVE-2017-7494 applies to Samba 3.5.0 onwards. OpenVMS CIFS is based on
Samba 3.0.28a (released in 2008).
So that one does not apply.

But there can easily be older known vulnerabilities in the VMS version.

Or unknown vulnerabilities.

Arne
Norbert Schönartz
2017-07-16 16:17:26 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
Triggered by the recently detected vulnerabilities in SAMBA, our IT
department now plans to enforce SMB V2 as the minimum version for all
PCs. SAMBA on OpenVMS only supports SMB V1. Thus we will not be able to
transparently connect our PCs to our OpenVMS server any longer. Without
this easy access from the PCs to data on the OpenVMS system it will lose
acceptance and eventually be replaced with something else. We need a
modern SAMBA version, and I do not think we are the only ones.
--
Norbert Schönartz
Stephen Hoffman
2017-07-16 17:52:20 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
I'd question running SMB 1 anywhere. It's insecure.

Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
topic:

https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/

Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
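
(As a sketch only, since the OpenVMS port predates these options and
the exact parameter name varies by Samba version: a current Samba
refuses SMB 1 outright with one smb.conf line,

   [global]
       server min protocol = SMB2

and that single knob is what the down-revision port can't offer.)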

As for folks that need file shares? The alternatives include WebDAV
and NFS for those folks that need file sharing hosted on OpenVMS.

Here's another fun little discussion arising from stale open-source
ports... This with Apache.

https://httpd.apache.org/security/vulnerabilities_24.html
https://httpd.apache.org/security/vulnerabilities_22.html

Current Apache is 2.4.27. The newest Apache port for OpenVMS is
2.4.12; from VSI. The most current HPE SWS/CSWS web server V2.2-1
port is based on Apache 2.0.65.

Why do I mention Apache in the context of a Samba discussion, beyond
the obvious parallels with issues and vulnerabilities with other
down-revision software? Apache is the usual implementation of the
WebDAV service on OpenVMS.

For those interested in attacking OpenVMS servers, there's more than a
few (other) areas to explore, too. VSI is addressing various of these
issues, but this current treadmill is not going to slow down. If
anything, it's going to get faster. Which also all ties back to
comments I've made else-thread about faster and easier and more
automated patching, better telemetry, and other implementation details
that are increasingly expected on any platform with "legendary
security."

For now? Until newer ports are available and are maintained closer to
current releases? I'd question sites exposing OpenVMS to the
internet. Block everything not absolutely required of the server at
an external firewall, access the server via VPN or console or bastion
host, and network-partition the OpenVMS servers from other
network-connected hardware including printers and client computing
systems. Use encrypted and secured and non-locally-accessible backups
(write-only remote archiving or otherwise), etc.
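
One hedged illustration, in Linux iptables syntax for an external
packet filter; 137-139 and 445 are the well-known NetBIOS/SMB ports,
and the rest is site-specific:

   # drop inbound SMB and NetBIOS from untrusted interfaces
   iptables -A INPUT -p tcp -m multiport --dports 139,445 -j DROP
   iptables -A INPUT -p udp -m multiport --dports 137,138 -j DROP

Adjust for whatever packet filter is actually in front of the system.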

Even with actively maintained and current ports, I'd still question
openly exposing OpenVMS servers. This because knowledge of
vulnerabilities increasingly spreads faster than many OpenVMS sites can
receive notifications and schedule patch installations. (n.b.
attackers are perfectly willing to use a couple of steps to get to the
server, they'll use whatever chain of local or other-hosts or printer
exploits or whatnot to get where they want. The network is the system,
and the vulnerability.)
--
Pure Personal Opinion | HoffmanLabs LLC
Scott Dorsey
2017-07-16 19:26:13 UTC
Permalink
Post by Stephen Hoffman
Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
This is true and unfortunate.

Some of the issue here is that the SMB protocol really wasn't designed for
security, and Microsoft over the years has tacked more and more stuff on it
to improve security and availability. We can expect that they will continue
to do this in the future.

This means that SMB is a moving target, and any attempt at supporting SMB
is going to require constant attention and a lot of updating. There is no
way around that I fear.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Arne Vajhøj
2017-07-16 20:30:36 UTC
Permalink
Post by Scott Dorsey
Post by Stephen Hoffman
Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
This is true and unfortunate.
Some of the issue here is that the SMB protocol really wasn't designed for
security, and Microsoft over the years has tacked more and more stuff on it
to improve security and availability. We can expect that they will continue
to do this in the future.
This means that SMB is a moving target, and any attempt at supporting SMB
is going to require constant attention and a lot of updating. There is no
way around that I fear.
Like web browsers, web servers, browser plugins, JavaScript engines,
SSL libraries, application servers, CMS'es, mail servers and
a ton of other stuff.

Arne
Scott Dorsey
2017-07-16 22:15:35 UTC
Permalink
Post by Arne Vajhøj
Post by Scott Dorsey
Post by Stephen Hoffman
Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
This is true and unfortunate.
Some of the issue here is that the SMB protocol really wasn't designed for
security, and Microsoft over the years has tacked more and more stuff on it
to improve security and availability. We can expect that they will continue
to do this in the future.
This means that SMB is a moving target, and any attempt at supporting SMB
is going to require constant attention and a lot of updating. There is no
way around that I fear.
Like web browsers, web servers, browser plugins, JavaScript engines,
SSL libraries, application servers, CMS'es, mail servers and
a ton of other stuff.
Like some of that stuff, yeah. But most of that stuff I don't want on a
server in the first place. And the stuff I do want on a server, I want to
fight to make as stable as possible. But an SMB server inherently cannot
be.

The web browser is an excellent analogy, though... and it's a reason why
I don't want to see a serious web browser on VMS. It takes up far too much
support effort for the gain it provides.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Simon Clubley
2017-07-16 23:07:35 UTC
Permalink
Post by Scott Dorsey
Post by Arne Vajhøj
Like web browsers, web servers, browser plugins, JavaScript engines,
SSL libraries, application servers, CMS'es, mail servers and
a ton of other stuff.
Like some of that stuff, yeah. But most of that stuff I don't want on a
server in the first place. And the stuff I do want on a server, I want to
fight to make as stable as possible. But an SMB server inherently cannot
be.
There's also the possibility that code written today may be written
to various standards because of now-known security issues from the
past, but an older codebase may not have been modified to conform
to those standards.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2017-07-17 14:48:57 UTC
Permalink
Post by Scott Dorsey
Post by Arne Vajhøj
Post by Scott Dorsey
Post by Stephen Hoffman
Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
This is true and unfortunate.
Some of the issue here is that the SMB protocol really wasn't designed
for security, and Microsoft over the years has tacked more and more
stuff on it to improve security and availability. We can expect that
they will continue to do this in the future.
SMB 1 wasn't secure. It's also ancient. DECnet and SCS are also
old. And insecure. Current SMB does rather better here. DECnet
really needs to be overhauled or deprecated and removed, and SCS needs
work, and we increasingly can't depend on internal networks to be
trusted and secure.
Post by Scott Dorsey
Post by Arne Vajhøj
Post by Scott Dorsey
This means that SMB is a moving target, and any attempt at supporting
SMB is going to require constant attention and a lot of updating.
There is no way around that I fear.
We're not going back to the 1990s or early 2000s era. We have to work
within the constraints and the environment we have. That means no SMB
1, no DECnet, no telnet, no ftp, and hoping we end-users can keep SCS
private until and unless we get a secure and authenticated replacement.
Post by Scott Dorsey
Post by Arne Vajhøj
Like web browsers, web servers, browser plugins, JavaScript engines,
SSL libraries, application servers, CMS'es, mail servers and a ton of
other stuff.
Like some of that stuff, yeah. But most of that stuff I don't want on
a server in the first place. And the stuff I do want on a server, I
want to fight to make as stable as possible. But an SMB server
inherently cannot be.
I'd like stable servers, too. But I know we won't ever have that.
OpenVMS servers are routinely badly down-revision with Apache, no TLS
for Mail, and have issues with other services including the ISC BIND
port and such. VSI is addressing some of these, but it's inherently a
moving target. Given we won't ever have stable servers — not what we
had five or ten or more years ago — what does that mean for our OpenVMS
servers? It means... We'll have to patch our servers. Faster. We'll
have to be able to roll our upgrades for uptime. It means we'll have
to isolate those services into sandboxes to contain damage. Better
backups. Firewalls. VPN servers. The ability to apply kernel patches
without reboot akin to Linux; with fewer reboots, and with easier
application and server designs allowing rolling upgrades or whatever.
That might be rolling upgrades in-box using VMs or across multiple
boxes using SCSv2 and newer and easier rolling-upgrade APIs, or
whatever. Our app designs stink, and the OpenVMS APIs are really
primitive, but I digress.
Post by Scott Dorsey
The web browser is an excellent analogy, though... and it's a reason
why I don't want to see a serious web browser on VMS. It takes up far
too much support effort for the gain it provides.
I don't expect we'll see one, but we will see REST and HTTP clients and
HTTP and HTTPS servers on OpenVMS, and those are vulnerable, too.
Again, we can live in the past, with the pleasantly dangerous delusion
that "OpenVMS is secure", or we can migrate to more secure versions of
OpenVMS — VSI is working on that — and to security features and
processes and upgrades which better contend with our current
environment. Or we can migrate to other platforms which do provide
support for the environment we are in now. We're not in the Y2K era
any more. Security and authentication and increasingly automated
tools are what we're all working with and contending with, attackers
and defenders both.

Again: we can live in and can desire and seek Y2K-era security and
long-term server stability and the rest of the uptime era, or we can
deal with the environment we have now, with the need to deploy patches
more quickly, and prepare for the environment we're clearly headed
toward. Wrist watches commonly have more capacity and more
performance than most of the VAX servers. For those folks here that
are fond of disparaging or ranting about Microsoft or other vendors,
please do look at what they're doing, what they have available now, and
what they're working on. Microsoft and Linux and other choices are far
more competitive than they once were, far more secure, and are far
ahead of OpenVMS in various areas. The sorts of areas that show up on
RFPs and bid requests, too. Times and requirements and environments
all change. We either change, or we and our apps and our servers
retire in place.

In this case, that means newer Samba or a replacement SMB server
package, or use of WebDAV or other alternative protocols, or shuttering
the OpenVMS servers at the sites that require services OpenVMS can no
longer provide or can no longer provide securely. Going toward
embedded and emulated and static and eventual replacement, and not
coming back, and not getting new deployments and upgrades.

We're headed for 2022 and 2027, and not back to Y2K.
--
Pure Personal Opinion | HoffmanLabs LLC
Scott Dorsey
2017-07-17 17:16:51 UTC
Permalink
Post by Stephen Hoffman
Post by Scott Dorsey
This means that SMB is a moving target, and any attempt at supporting
SMB is going to require constant attention and a lot of updating.
There is no way around that I fear.
We're not going back to the 1990s or early 2000s era. We have to work
within the constraints and the environment we have. That means no SMB
1, no DECnet, no telnet, no ftp, and hoping we end-users can keep SCS
private until and unless we get a secure and authenticated replacement.
Absolutely, and that's reasonable because SMB version 1 is a horror, and
it was a horror when it came out.

I am just pointing out that SMB version 2 is only going to be around for
a limited time as it is. SMB 3.0 is already out, and the next version is
going to be on the way soon. It's a moving target.
Post by Stephen Hoffman
I'd like stable servers, too. But I know we won't ever have that.
There's some stuff that we can have stable. There is other stuff that we
cannot have stable.

My job as an admin is to keep that other stuff up to date, and to keep as
much line of demarcation between that other stuff and the stable stuff as
possible.
Post by Stephen Hoffman
OpenVMS servers are routinely badly down-revision with Apache, no TLS
for Mail, and have issues with other services including the ISC BIND
port and such. VSI is addressing some of these, but it's inherently a
moving target.
Apache is a moving target. Email isn't so much of an issue, but TLS
extensions to SMTP would be nice to have. BIND is going to be more of
an issue in the future than it was in the past.


Post by Stephen Hoffman
Given we won't ever have stable servers — not what we
had five or ten or more years ago — what does that mean for our OpenVMS
servers? It means... We'll have to patch our servers. Faster. We'll
have to be able to roll our upgrades for uptime. It means we'll have
to isolate those services into sandboxes to contain damage. Better
backups. Firewalls. VPN servers.
All of these are true, but I'm going to say that isolation with sandboxes
is the absolute key to keeping these things secure, and everything else
you mention is secondary to that.

Isolation also means that we'll be patching just the server application, not
the kernel most of the time, and that's important.
Post by Stephen Hoffman
The ability to apply kernel patches
without reboot akin to Linux; with fewer reboots, and with easier
application and server designs allowing rolling upgrades or whatever.
That might be rolling upgrades in-box using VMs or across multiple
boxes using SCSv2 and newer and easier rolling-upgrade APIs, or
whatever.
This is absolutely true.
Post by Stephen Hoffman
Our app designs stink, and the OpenVMS APIs are really
primitive, but I digress.
I claim that the primitive APIs are a feature: when you limit
control flow to a small number of simple primitive calls, it becomes
much easier to tell what is going on and to keep things secure and debugged.

As far as the app designs stinking, well, that's true but the standards of
the industry in that regard are fearfully low.
Post by Stephen Hoffman
Again: we can live in and can desire and seek Y2K-era security and
long-term server stability and the rest of the uptime era, or we can
deal with the environment we have now, with the need to deploy patches
more quickly, and prepare for the environment we're clearly headed
toward.
I think we can have both, by forcing modularity, so that the parts that need
constant patching can be constantly patched _without_ affecting the parts that
do not. Modularity and diminished interconnection between modules is where
control comes from.
Post by Stephen Hoffman
In this case, that means newer Samba or a replacement SMB server
package, or use of WebDAV or other alternative protocols, or shuttering
the OpenVMS servers at the sites that require services OpenVMS can no
longer provide or can no longer provide securely. Going toward
embedded and emulated and static and eventual replacement, and not
coming back, and not getting new deployments and upgrades.
Yes, this is clear, and it's clear that the replacement SMB server package
is going to need to be replaced with something else in a couple years so
it's not a single change. But it's also clear that if we can build systems
so that SMB is segregated from the kernel then we can have security, the
ability to patch without rebooting, and all of the benefits of the future
while retaining the benefits of Y2K.
Post by Stephen Hoffman
We're headed for 2022 and 2027, and not back to Y2K.
Yes, but that doesn't mean we should throw out the good part.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Stephen Hoffman
2017-07-17 18:06:18 UTC
Permalink
Post by Scott Dorsey
Our app designs stink, and the OpenVMS APIs are really primitive, but
I digress.
I claim that the primitive APIs are a feature: when you limit
control flow to a small number of simple primitive calls, it becomes
much easier to tell what is going on and to keep things secure and
debugged.
Primitive APIs mean everybody rolls their own stacks atop those
building blocks, and folks then make different mistakes. Or sometimes
make the same mistakes in multiple apps.

I'm currently working with some of the OpenVMS security APIs here, and
these APIs are particularly bad. They're gnarly, difficult-to-use,
easy-to-get-wrong, and can require ongoing maintenance for (for
instance) the root certificates. VSI has done some good work moving
parts of these forward, but there's a whole lot that still needs to be
implemented in each app, and a whole lot of work with the APIs.
Because we're not tossing around UDP packets quite as often, and we now
need to integrate with IPv6, DTLS, DNS and a host of other details.
Post by Scott Dorsey
As far as the app designs stinking, well, that's true but the standards
of the industry in that regard are fearfully low.
Unfortunately that applies to the standards of more than a few OpenVMS
apps, too. The OpenVMS guide to system security manual is woefully
outdated as well, but I digress. Many of the apps I've written
in past years assumed the local network was secure, rather than
implementing security. Times change. Expectations and attacks change.
Which leads to apps that work well enough for continued use, but —
if you were to review them or fuzz them or attack them — those same
apps might not be considered to be quite as robust. Apps might
need larger investments — and security doesn't often get the
investments warranted — or app problems might well lead to breaches, or
to wholesale app, server and/or OS replacements.
Post by Scott Dorsey
Again: we can live in and can desire and seek Y2K-era security and
long-term server stability and the rest of the uptime era, or we can
deal with the environment we have now, with the need to deploy patches
more quickly, and prepare for the environment we're clearly headed
toward.
I think we can have both, by forcing modularity, so that the parts that
need constant patching can be constantly patched _without_ affecting
the parts that do not. Modularity and diminished interconnection
between modules is where control comes from.
That's containers and sandboxes for now, and a whole lot of work around
dependency management and app packaging and tools, and migrations for
the actively-maintained apps.
--
Pure Personal Opinion | HoffmanLabs LLC
a***@yahoo.com
2017-07-17 16:26:50 UTC
Permalink
Post by Stephen Hoffman
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
I'd question running SMB 1 anywhere. It's insecure.
Ned Pyle of the Microsoft SMB team has repeatedly stated that running
SMB 1 is very bad, and needs to stop. Here's a longer write-up on that
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they provide a better variant of SMB with the original WinXP? Or with WS2003? Or with one of the WinXP service packs or with one of several service packs and releases of WS2003?

Telling people to stop using WinXP is *not* a solution. Telling people to stop using WS2003 is somewhat more bearable, but also problematic.

For reference, WinXP SP3 is at least two years newer than the first implementations of SMB2, so my suggestions are not anachronistic.
Post by Stephen Hoffman
Samba 3.6 and later support SMB 2 (from 2011) and Samba 4.3 added SMB
3.1.1 (2015). The OpenVMS CIFS port is based on 3.0.28a. So...
there's no way around using SMB 1 with the current Samba port.
As for folks that need file shares? The alternatives include WebDAV
and NFS for those folks that need file sharing hosted on OpenVMS.
Here's another fun little discussion arising from stale open-source
ports... This with Apache.
https://httpd.apache.org/security/vulnerabilities_24.html
https://httpd.apache.org/security/vulnerabilities_22.html
Current Apache is 2.4.27. The newest Apache port for OpenVMS is
2.4.12; from VSI. The most current HPE SWS/CSWS web server V2.2-1
port is based on Apache 2.0.65.
Why do I mention Apache in the context of a Samba discussion, beyond
the obvious parallels with issues and vulnerabilities with other
down-revision software? Apache is the usual implementation of the
WebDAV service on OpenVMS.
For those interested in attacking OpenVMS servers, there's more than a
few (other) areas to explore, too. VSI is addressing various of these
issues, but this current treadmill is not going to slow down. If
anything, it's going to get faster. Which also all ties back to
comments I've made else-thread about faster and easier and more
automated patching, better telemetry, and other implementation details
that are increasingly expected on any platform with "legendary
security."
For now? Until newer ports are available and are maintained closer to
current releases? I'd question sites exposing OpenVMS to the
internet. Block everything not absolutely required of the server at
an external firewall, access the server via VPN or console or bastion
host, and network-partition the OpenVMS servers from other
network-connected hardware including printers and client computing
systems. Use encrypted and secured and non-locally-accessible backups
(write-only remote archiving or otherwise), etc.
Even with actively maintained and current ports, I'd still question
openly exposing OpenVMS servers. This because knowledge of
vulnerabilities increasingly spreads faster than many OpenVMS sites can
receive notifications and schedule patch installations. (n.b.
attackers are perfectly willing to use a couple of steps to get to the
server, they'll use whatever chain of local or other-hosts or printer
exploits or whatnot to get where they want. The network the system,
and the vulnerability.)
--
Pure Personal Opinion | HoffmanLabs LLC
Scott Dorsey
2017-07-17 17:22:55 UTC
Permalink
Post by a***@yahoo.com
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they provide a better variant of SMB with the original WinXP? Or with WS2003? Or with one of the WinXP service packs or with one of several service packs and releases of WS2003?
Because Microsoft has traditionally not thought about security in any way,
until they have been forced to think about security.

And, because the security profile has changed... systems that were designed
for use on a small local network somehow got connected to the public internet
and all of a sudden design decisions that seemed reasonable turned out to be
incredibly stupid.
Post by a***@yahoo.com
Telling people to stop using WinXP is *not* a solution. Telling people to stop using WS2003 is somewhat more bearable, but also problematic.
That's what Microsoft has done, yes. You can take that up with them.
Post by a***@yahoo.com
For reference, WinXP SP3 is at least two years newer than the first implementations of SMB2, so my suggestions are not anachronistic.
SMB1 was a terribly designed protocol. SMB2 is a terribly designed protocol
but one with security features that SMB1 did not have. I have not looked
under the covers of SMB3 but I suspect it's also terribly designed but with
additional security bags on the side. I predict soon we will have SMB4 to
deal with whatever has gone wrong in SMB3.

If I had a choice, I wouldn't deal with SMB at all because it is just so
horrible. It's like hanging a KICK ME sign on your computer. But we live
in the world where Microsoft compatibility is critical, so we have to talk
SMB.

Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Stephen Hoffman
2017-07-17 18:23:33 UTC
Permalink
Post by Scott Dorsey
Post by a***@yahoo.com
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they
provide a better variant of SMB with the original WinXP? Or with WS2003?
Or with one of the WinXP service packs or with one of several service
packs and releases of WS2003?
Because Microsoft has traditionally not thought about security in any
way, until they have been forced to think about security.
Nobody does. Not vendors, not end-users, nobody. Security is an
add-on cost.

For Microsoft, their approach toward security changed massively
around the era of Windows Vista.

https://blogs.microsoft.com/microsoftsecure/2012/01/12/what-a-journey-it-has-been/

https://www.microsoft.com/security/sdl/story/
https://www.microsoft.com/mscorp/execmail/2002/07-18twc.mspx

I'm collecting information and links for an OpenVMS Boot camp
presentation on security. Microsoft has a lot of good information
available, and some clever approaches toward making successful
exploitation harder.
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an
inherently untrustworthy protocol, manage to implement it in the safest
possible way? Because we have to implement it. And we have to do it
as safely as we can.
Microsoft has some guidelines here, and has some helpful tools and
APIs, and the next part of that same discussion is how to upgrade the
implementation with as few perturbations to applications as is
feasible, and how to make exploitation more difficult. Because
we're going to make mistakes. Because there will be vulnerabilities.
And because we're going to be presented with new attacks and new
approaches, and just with changes in computing resources that can make
(for instance) brute-forcing a whole lot more affordable to attackers.
Then there's the discussion around how to upgrade legacy apps for
better robustness, around comparative approaches toward security and
related trade-offs, and those and other details are particularly
lacking in the OpenVMS security documentation.

n.b. I'm not a Microsoft proponent, don't use the Windows platform at
all regularly, and find that Windows definitely still has some issues.
But they've learned a lot over the years, have made massive
improvements in their platform and tools and security, and have some
approaches and some suggestions that I routinely use when developing
apps on OpenVMS, and also for Unix, macOS and iOS systems.
--
Pure Personal Opinion | HoffmanLabs LLC
Michael Moroney
2017-07-17 19:13:19 UTC
Permalink
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
I suppose the VMS server process has as few privileges as absolutely possible,
ideally TMPMBX+NETMBX only, if at all possible.
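
Something like this minimal AUTHORIZE sketch, I'd guess; the account
name below is hypothetical, substitute whatever the server runs under:

$ ! SAMBA$SMBD is a placeholder account name
$ MCR AUTHORIZE
UAF> MODIFY SAMBA$SMBD /PRIV=(NOALL,TMPMBX,NETMBX) /DEFPRIV=(NOALL,TMPMBX,NETMBX)
UAF> EXIT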

Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Stephen Hoffman
2017-07-17 20:27:39 UTC
Permalink
Post by Michael Moroney
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
I suppose the VMS server process has as few privileges as absolutely
possible, ideally TMPMBX+NETMBX only, if at all possible.
It's quite possible to cause issues with just minimal privileges, if an
exploit allowing code execution can be located, or if sensitive data
can be directly or indirectly bled back out of the server context.
TMPMBX and NETMBX are also likely not enough for an app that's going to
be a proxy into OpenVMS authentication and possibly also into whatever
locking is necessary. There'll be additional access into OpenVMS
granted via privilege or identifier or installation or UWSS, and that
access potentially exposed, or there'll be a connection into a separate
and additional authentication server component to mediate that access.
Potentially for mounting the target device, and for accessing
configuration information that remote users should not have direct or
modify access into.
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise,
or, in theory, could a good VMS programmer given the SMBx spec and no
existing code as a bad example, write a secure SAMBA implementation
from scratch?
SMB 1 is known to be problematic, and the SMB 2 replacement version
became available over a decade ago. The current SMB 3.1.1 is not known
to be problematic. Could somebody choose to code up their own SMB
server? Sure. Apple decided to write their own service, known as
SMBX. As for replicating the capabilities of Samba itself, that is a
rather larger project. This given the ability of Samba to provide an
Active Directory server compatible with what Microsoft offers with
Windows Server, for instance.

This discussion also approaches adding FUSE support into OpenVMS, and
that's not something OpenVMS particularly has available. This
particularly if there's to be an SMB client for OpenVMS.

FUSE:

https://en.wikipedia.org/wiki/Filesystem_in_Userspace

SMBX:

https://www.murage.ca/os-x-yosemite-server-4-03-smb3/

SMB history, info:

https://en.wikipedia.org/wiki/Server_Message_Block
--
Pure Personal Opinion | HoffmanLabs LLC
Scott Dorsey
2017-07-17 20:51:59 UTC
Permalink
Post by Michael Moroney
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
I suppose the VMS server process has as few privileges as absolutely possible,
ideally TMPMBX+NETMBX only, if at all possible.
That's key number one.
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Unknown, since nobody has actually seen the SMB spec outside of Microsoft,
and SAMBA exists entirely due to reverse-engineering of the protocol.

SMB1 is fundamentally broken in every possible way.

SMB2 has some things which are alarming but I suspect it's not fundamentally
broken. But, given the history, I am sure there are some problems in there
that we don't know about yet. It has been reverse-engineered well enough to
talk to and from, but that doesn't mean there aren't some gotchas somewhere.

SMB3 I have no idea about since I have never seen it, but again knowing the
source I am suspicious.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Craig A. Berry
2017-07-17 21:19:43 UTC
Permalink
Post by Scott Dorsey
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Unknown, since nobody has actually seen the SMB spec outside of Microsoft,
and SAMBA exists entirely due to reverse-engineering of the protocol.
So you're quite sure no one outside of Microsoft has read any of the following documents?

<http://www.snia.org/sites/default/education/tutorials/2012/fall/file/JoseBarreto_SMB3_Remote_File_Protocol_revision.pdf>

<https://msdn.microsoft.com/en-us/library/cc246232.aspx>

<https://msdn.microsoft.com/en-us/library/cc246231.aspx>

<https://msdn.microsoft.com/en-us/library/cc246482.aspx>
Scott Dorsey
2017-07-17 22:52:13 UTC
Permalink
Post by Craig A. Berry
Post by Scott Dorsey
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Unknown, since nobody has actually seen the SMB spec outside of Microsoft,
and SAMBA exists entirely due to reverse-engineering of the protocol.
So you're quite sure no one outside of Microsoft has read any of the following documents?
<http://www.snia.org/sites/default/education/tutorials/2012/fall/file/JoseBarreto_SMB3_Remote_File_Protocol_revision.pdf>
Oh, I have read that. It's a _lot_ more detailed than the previous specs,
but I wouldn't call it anywhere NEAR a complete protocol spec.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Craig A. Berry
2017-07-18 01:35:43 UTC
Permalink
Post by Scott Dorsey
Post by Craig A. Berry
Post by Scott Dorsey
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Unknown, since nobody has actually seen the SMB spec outside of Microsoft,
and SAMBA exists entirely due to reverse-engineering of the protocol.
So you're quite sure no one outside of Microsoft has read any of the following documents?
<http://www.snia.org/sites/default/education/tutorials/2012/fall/file/JoseBarreto_SMB3_Remote_File_Protocol_revision.pdf>
Oh, I have read that. It's a _lot_ more detailed than the previous specs,
but I wouldn't call it anywhere NEAR a complete protocol spec.
Those are just slides from a presentation. The links that you snipped,
such as this one:

<https://msdn.microsoft.com/en-us/library/cc246482.aspx>

comprise hundreds of pages detailing data structure layouts, event and
response sequences, and so on. What would a complete protocol spec have
that isn't there?
Arne Vajhøj
2017-07-18 01:43:37 UTC
Permalink
Post by Craig A. Berry
Post by Scott Dorsey
Post by Craig A. Berry
Post by Scott Dorsey
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
Unknown, since nobody has actually seen the SMB spec outside of Microsoft,
and SAMBA exists entirely due to reverse-engineering of the protocol.
So you're quite sure no one outside of Microsoft has read any of the
following documents?
<http://www.snia.org/sites/default/education/tutorials/2012/fall/file/JoseBarreto_SMB3_Remote_File_Protocol_revision.pdf>
Oh, I have read that. It's a _lot_ more detailed than the previous specs,
but I wouldn't call it anywhere NEAR a complete protocol spec.
Those are just slides from a presentation. The links that you snipped,
<https://msdn.microsoft.com/en-us/library/cc246482.aspx>
comprise hundreds of pages detailing data structure layouts, event and
response sequences, and so on. What would a complete protocol spec have
that isn't there?
Some people don't want to let boring things like facts get in the way
of some MS bashing.

:-)

Arne
Stephen Hoffman
2017-07-17 21:52:39 UTC
Permalink
Post by Scott Dorsey
Post by Michael Moroney
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an
inherently untrustworthy protocol, manage to implement it in the safest
possible way? Because we have to implement it. And we have to do it as
safely as we can.
I suppose the VMS server process has as few privileges as absolutely
possible, ideally TMPMBX+NETMBX only, if at all possible.
That's key number one.
If we're working within the constraints of the rather limited OpenVMS
security implementation, then most definitely. But that's likely not
enough privileges for a file server, either. For one example, NFS
requires cmkrnl, netmbx, oper, sysnam, sysprv in one context, and
cmkrnl, oper, sysnam, sysprv, and world in another; depending on what's
going on. The desire and increasingly the need to isolate what
OpenVMS has traditionally used privileges for — such as isolating the
system calls and operations that are permitted to a particular
application, and quite possibly the use of classic OpenVMS privileges —
are part of why sandboxes have become interesting to folks. Having to
break the particular server application, and then further escape the
sandbox is (hopefully) more difficult for an attacker.
Post by Scott Dorsey
Post by Michael Moroney
Naive question: Are the protocols fundamentally broken, security wise,
or, in theory, could a good VMS programmer given the SMBx spec and no
existing code as a bad example, write a secure SAMBA implementation
from scratch?
Unknown, since nobody has actually seen the SMB spec outside of
Microsoft, and SAMBA exists entirely due to reverse-engineering of the
protocol.
Microsoft has published various specifications for Windows-related
protocols, including SMB 2 and SMB 3.

https://msdn.microsoft.com/en-us/library/cc246482.aspx
--
Pure Personal Opinion | HoffmanLabs LLC
John E. Malmberg
2017-07-18 03:55:56 UTC
Permalink
Post by Michael Moroney
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
I suppose the VMS server process has as few privileges as absolutely possible,
ideally TMPMBX+NETMBX only, if at all possible.
Naive question: Are the protocols fundamentally broken, security wise, or,
in theory, could a good VMS programmer given the SMBx spec and no existing
code as a bad example, write a secure SAMBA implementation from scratch?
While a lot of the SMB protocols are documented, there are likely things
that are ambiguous or not specified.

And one of the things to remember is the speed of the CPUs back when the
earlier protocols were developed.

Turning on signing to prevent spoofing pretty much took all the CPU
available once upon a time.
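
These days signing is a one-line toggle in smb.conf; a sketch, since
the option spelling varies across Samba versions:

   [global]
       server signing = mandatory

The CPU-cost argument has mostly evaporated on modern hardware.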

NTLM, up to about Windows 7, is vulnerable to replay attacks by design.
Windows 7 added a place to register how trusted a host is for some NTLM
traffic.

In the past, a LAN protocol did not need to be real secure, just good
enough to prevent accidents.

There are other issues besides the protocol that need to be considered.
Back in the Samba V4 development days there was quite a bit of
discussion as to whether it should be forked() like Samba V3, or
threaded to be more like the Microsoft implementation.

The forked() model offered the advantages of the daemon running as the
target user, and a program bug normally only caused a crash and, to the
user, a silent restart of that daemon.

The threaded model seemed to match how the messages in the protocol were
actually sent and delivered to a server from a multi-user client.
And the proponents thought it might scale better.

It seemed to me at the time that the threaded implementation would fit
VMS better.
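
For illustration, a minimal C sketch (not Samba source; the names are
made up) of the forked model described above:

#include <pwd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void serve(int connfd)      /* runs in the forked child */
{
    /* ... protocol negotiation and user authentication go here ... */
    struct passwd *pw = getpwnam("someuser");  /* hypothetical user */
    if (pw == NULL)
        _exit(1);
    /* drop group then user identity; the order matters */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
        _exit(1);
    /* from here the daemon acts as the target user, and a bug
       crashes only this child while the listener keeps running */
}

int main(void)
{
    int listenfd = -1;  /* socket(), bind(), listen() omitted */
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;
        if (fork() == 0) {         /* child: becomes the user */
            serve(connfd);
            _exit(0);
        }
        close(connfd);             /* parent: back to listening */
    }
}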

Regards,
-John
***@qsl.net_work
Arne Vajhøj
2017-07-28 20:08:16 UTC
Permalink
Post by Michael Moroney
Post by Scott Dorsey
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
I suppose the VMS server process has as few privileges as absolutely possible,
ideally TMPMBX+NETMBX only, if at all possible.
What does VMS Samba actually require?

Arne
Stephen Hoffman
2017-07-28 22:17:21 UTC
Permalink
Post by Arne Vajhøj
What does VMS Samba actually require?
More than TMPMBX and NETMBX, depending on the component. The Samba
testparm tool tips over with a stackdump if you don't have SYSLCK
enabled for instance, and that requirement was undocumented on the
versions I was working with.
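
A quick DCL sketch of the workaround, assuming the privilege is
authorized for your account:

$ SET PROCESS/PRIVILEGES=SYSLCK   ! undocumented testparm prerequisite
$ ! ... now run testparm ...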
--
Pure Personal Opinion | HoffmanLabs LLC
John E. Malmberg
2017-07-28 23:24:18 UTC
Permalink
Post by Stephen Hoffman
Post by Arne Vajhøj
What does VMS Samba actually require?
More than TMPMBX and NETMBX, depending on the component. The Samba
testparm tool tips over with a stackdump if you don't have SYSLCK
enabled for instance, and that requirement was undocumented on the
versions I was working with.
The SYSLCK is probably needed for simulating byte-range locking and
implementing op-locks.

It also needs a privilege for listening on a privileged port.

Samba needs "CMKRNL,SYSPRV" privileges because it needs to accept a
connection and, after authenticating the user, has to access files as
that user.

Samba V3 does this by starting as a privileged user and then, when the
authentication process is done, changing to that user and dropping any
privileges that user does not have. I am not sure if it retains
any privileges the user has that are not needed for Samba, as I have not
looked at the HP VMS port source.

The Samba V3 model is based on the premise that once the connection is
authenticated, all transfers on that connection will be for the same user.

That turned out to be only mostly true, and Samba put in a hack to deal
with it. On the ports previous to the HP VMS port, that hack did not
get implemented, but no-one ever reported a problem to the samba-vms
mailing list about it.

I do not know if the HP VMS port implemented the hack or not.

The Samba V4 threaded model eliminates the hack, but then requires Samba
to run at elevated privileges since all threads must be the same user.
There were alleged to be performance and scaling improvements for using
the threaded model. Unfortunately, while I got most of Samba V4
building on VMS, I never got enough built to properly test it.

Full building of Samba V4 required a newer Kerberos and OpenLDAP client
than VMS had at the time. And while the then current OpenLDAP client
mostly built on VMS, it needed access to a non-public OpenSSL routine,
so it could not be built on VMS with the then existing OpenSSL packages.

A fully functional Samba V3 port also needs those things.

One of the more interesting things I had in the Samba V4 port was mapping
named pipes to SYS$ICC services; had I completed the port, it might have
been possible to have only one winbindd process needed for a cluster.

Regards,
-John
***@qsl.net_work
a***@yahoo.com
2017-07-17 22:02:46 UTC
Permalink
Post by Scott Dorsey
Post by a***@yahoo.com
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they provide a better variant of SMB with the original WinXP? Or with WS2003? Or with one of the WinXP service packs or with one of several service packs and releases of WS2003?
Because Microsoft has traditionally not thought about security in any way,
until they have been forced to think about security.
XP was released in 2001. They were well aware of security problems by then. At least the "system" side of the company should have been aware.
And XP SP3 is from 2008; by then even the tools and Office sides of Microsoft knew that security couldn't be ignored.
Post by Scott Dorsey
And, because the security profile has changed... systems that were designed
for use on a small local network somehow got connected to the public internet
and all of a sudden design decisions that seemed reasonable turned out to be
incredibly stupid.
Post by a***@yahoo.com
Telling people to stop using WinXP is *not* a solution. Telling people to stop using WS2003 is somewhat more bearable, but also problematic.
That's what Microsoft has done, yes. You can take that up with them.
SMB2 has been ported to a dozen or so OSes. I have a hard time understanding what exactly prevents porting it to WinXP, especially if the port doesn't aim for performance parity with newer OSes.
Post by Scott Dorsey
Post by a***@yahoo.com
For reference, WinXP SP3 is at least two years newer than the first implementations of SMB2, so my suggestions are not anachronistic.
SMB1 was a terribly designed protocol. SMB2 is a terribly designed protocol
but one with security features that SMB1 did not have. I have not looked
under the covers of SMB3 but I suspect it's also terribly designed but with
additional security bags on the side. I predict soon we will have SMB4 to
deal with whatever is gone wrong in SMB3.
If I had a choice, I wouldn't deal with SMB at all because it is just so
horrible.
I have never even looked at the SMB protocols.
From what I read today, it sounds as if, in the presence of a sophisticated man-in-the-middle adversary, SMB1 is as insecure as classic DECnet. Does it, at least, require a higher level of sophistication from the attacker?

Is it designed more or less terribly than NFS?
Somehow I have heard many more horror stories about NFS than about SMB, but maybe that's unrelated to the protocol.
Post by Scott Dorsey
It's like hanging a KICK ME sign on your computer. But we live
in the world where Microsoft compatibility is critical, so we have to talk
SMB.
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
David Wade
2017-07-18 08:13:18 UTC
Permalink
Post by a***@yahoo.com
Post by Scott Dorsey
Post by a***@yahoo.com
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they provide a better variant of SMB with the original WinXP? Or with WS2003? Or with one of the WinXP service packs or with one of several service packs and releases of WS2003?
Because Microsoft has traditionally not thought about security in any way,
until they have been forced to think about security.
XP was released in 2001. They were well aware of security problems by then. At least the "system" side of the company should have been aware.
And XP SP3 is from 2008; by then even the tools and Office sides of Microsoft knew that security couldn't be ignored.
Post by Scott Dorsey
And, because the security profile has changed... systems that were designed
for use on a small local network somehow got connected to the public internet
and all of a sudden design decisions that seemed reasonable turned out to be
incredibly stupid.
Post by a***@yahoo.com
Telling people to stop using WinXP is *not* a solution. Telling people to stop using WS2003 is somewhat more bearable, but also problematic.
That's what Microsoft has done, yes. You can take that up with them.
SMB2 has been ported to a dozen or so OSes. I have a hard time understanding what exactly prevents porting it to WinXP, especially if the port doesn't aim for performance parity with newer OSes.
Post by Scott Dorsey
Post by a***@yahoo.com
For reference, WinXP SP3 is at least two years newer than the first implementations of SMB2, so my suggestions are not anachronistic.
SMB1 was a terribly designed protocol. SMB2 is a terribly designed protocol
but one with security features that SMB1 did not have. I have not looked
under the covers of SMB3 but I suspect it's also terribly designed but with
additional security bags on the side. I predict soon we will have SMB4 to
deal with whatever is gone wrong in SMB3.
If I had a choice, I wouldn't deal with SMB at all because it is just so
horrible.
I have never even looked at the SMB protocols.
From what I read today, it sounds as if, in the presence of a sophisticated man-in-the-middle adversary, SMB1 is as insecure as classic DECnet. Does it, at least, require a higher level of sophistication from the attacker?
Is it designed more or less terribly than NFS?
Somehow I have heard many more horror stories about NFS than about SMB, but maybe that's unrelated to the protocol.
Whilst I can't find any links, Digital did produce a version of NT (so
before Windows 2000 and XP) with the encryption used in SMB1 replaced by
more secure algorithms. They also produced a whole-disk encryption
system (Kilgetty):

https://www.ia.nato.int/niapc/Product/KILGETTY-2K_47
Post by a***@yahoo.com
Post by Scott Dorsey
It's like hanging a KICK ME sign on your computer. But we live
in the world where Microsoft compatibility is critical, so we have to talk
SMB.
Our question, then, becomes this: How do we, knowing we have an inherently
untrustworthy protocol, manage to implement it in the safest possible way?
Because we have to implement it. And we have to do it as safely as we can.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Stephen Hoffman
2017-07-17 17:51:10 UTC
Permalink
Post by a***@yahoo.com
Post by Stephen Hoffman
https://blogs.technet.microsoft.com/filecab/2016/09/16/stop-using-smb1/
I really don't like this blog post.
If Microsoft knew long ago that SMB1 is bad then why didn't they
provide a better variant of SMB with the original WinXP? Or with WS2003?
Or with one of the WinXP service packs or with one of several service
packs and releases of WS2003?
Telling people to stop using WinXP is *not* a solution. Telling people
to stop using WS2003 is somewhat more bearable, but also problematic.
For reference, WinXP SP3 is at least two years newer than the first
implementations of SMB2, so my suggestions are not anachronistic.
Remaining on ancient software and having it all magically work like
more current bits, and without the vendor and the end-user spending
appreciable money or effort to stay current?

Getting to where that's even possible is not a trivial investment in
development, and also in customer expectations and acceptance.
Dependency hell is just part of this.

Beyond SMB 1, there was a whole lot wrong with Windows XP. The
much-maligned Windows Vista broke a number of apps, and fixed a number
of issues with earlier releases.

The same sorts of old-software issues arise with OpenVMS, too. How
many OpenVMS folks are still using DECnet, or insecure SCS, or telnet
or ftp or the rest, or the folks that have yet to apply OpenVMS patches?

VSI is clearly commencing the efforts necessary to bring OpenVMS forward
to more current services, and is also starting the API work to make
this easier for themselves and for developers. There are already
examples of these API dependencies on OpenVMS (e.g. OpenSSL APIs), and
the frequency of occurrences of API dependencies arising is only going
to increase as VSI continues and increases efforts to fix parts of
OpenVMS itself, and to upgrade OpenVMS capabilities, and work toward
more current software versions.

Some of which will break existing apps or require increasingly
expensive and hairy technical debt ("compatibility"), a situation
which will inevitably keep some folks on older OpenVMS releases.

From a product producer's perspective, back-porting features and upgrades
is more cost and more effort, and detracts from the efforts of getting
folks to move forward to better and newer and more current platforms.

That written, Microsoft has decided to follow your suggestions here,
but is doing so with Windows 10. Not with Windows XP or older
releases. We'll eventually learn how well the Windows 10 approach
works for Microsoft, for Microsoft partners and ISVs, and for end-users
of Windows, too. How well this works as a technical approach, and
around corporate finances, marketing and partnerships? Even if
software compatibility continues, some new features are inevitably
going to exclude older hardware, and which will force hardware
upgrades; older hardware is inevitably going to age out.

Maybe VSI eventually follows this continuous-upgrades same-version
approach with the OpenVMS 10 release? This is very close to what's
called SaaS, too; software as a service. So long as you're covered by
support, you get patches. Even if VSI decides to adopt SaaS or
similar, there's still more than a little technical work involved in a
workable implementation for OpenVMS, too.

Related: PCSI lacks capabilities around maintaining and managing and
upgrading dependencies, requiring end-users and developers to hand-roll
their own unique solutions to API dependencies. Of these, I happen to
like the approach Oracle Rdb uses, but it's one of many. For an
approach around dependency management used elsewhere, see the nix
package manager and NixOS:
https://nixos.org/nix/
https://nixos.org/
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Hoffman
2017-07-18 15:41:43 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Linux, too:
http://blog.trendmicro.com/trendlabs-security-intelligence/linux-users-urged-update-new-threat-exploits-sambacry/
--
Pure Personal Opinion | HoffmanLabs LLC
j***@yahoo.co.uk
2017-07-18 18:18:52 UTC
Permalink
Post by Stephen Hoffman
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
http://blog.trendmicro.com/trendlabs-security-intelligence/linux-users-urged-update-new-threat-exploits-sambacry/
--
Pure Personal Opinion | HoffmanLabs LLC
Further Linux-centric reading following on from trendmicro's
article can be found at e.g.
https://access.redhat.com/security/cve/cve-2017-7494 (Red Hat extract below)
and in a StackExchange discussion at
https://unix.stackexchange.com/questions/367138/wannacry-on-linux-systems-how-do-you-protect-yourself


This vulnerability appears to derive from (oversimplified)
allowing remote Samba users to write shared libraries to
writable Samba shares, and then allowing those shared
libraries to be invoked by remote users, perhaps executing
them with elevated privileges.

Sensible DECnet/FAL users on VMS were encouraged to stop
permitting that kind of thing a couple of decades ago
(actually, probably more). Obviously it was in the context
of FAL not Samba, but the remote-access/untrusted-code
parallels are clear, and the principles perhaps need to be
re-learned.

Don't execute untrusted code, especially with privileges,
even more so when the code is in some remotely writable
location and is invoked via some remote access mechanism
with limited authentication.

A shared library in a semi-public upload directory might
contain untrusted code; might be safest to assume it does.
Tread carefully.

It's not rocket science, but it appears to be new to
some people who perhaps should know better.

RedHat's default SELinux policy seems to do the right
thing.

Corrections and clarifications welcome.

..............

Highlights from the RedHat article at
https://access.redhat.com/security/cve/cve-2017-7494
"A malicious authenticated samba client, having write
access to the samba share, could use this flaw to execute
arbitrary code as root.
[...]
Mitigation

Any of the following:

1. SELinux is enabled by default and our default policy prevents loading of modules from outside of samba's module directories and therefore blocks the exploit

2. Mount the filesystem which is used by samba for its writable share using "noexec" option.

3. Add the parameter:

nt pipe support = no

to the [global] section of your smb.conf and restart smbd. This prevents clients from accessing any named pipe endpoints. Note this can disable some expected functionality for Windows clients.
"
u***@gmail.com
2017-07-29 14:56:00 UTC
Permalink
Post by j***@yahoo.co.uk
Highlights from the RedHat article at
https://access.redhat.com/security/cve/cve-2017-7494
"A malicious authenticated samba client, having write
access to the samba share, could use this flaw to execute
arbitrary code as root.
[...]
Mitigation
1. SELinux is enabled by default and our default policy prevents loading of modules from outside of samba's module directories and therefore blocks the exploit
"
SECURE LINUX? Funniest thing I ever heard :)
Stephen Hoffman
2017-07-24 19:51:02 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
Those of y'all using gSOAP (and IIRC, OP was...) might want to have
a look at whether CVE-2017-9765 might cause your operations any
issues... Whether for the cited cameras or other local usage, or
because of some other local use of gSOAP. Whether it affects gSOAP on
OpenVMS? No idea. Based on the NVD, probably not? But then I'd
still want to confirm that, as an (actual) RCE in network-facing code
would not be a good day.

https://nvd.nist.gov/vuln/detail/CVE-2017-9765
https://krebsonsecurity.com/2017/07/experts-in-lather-over-gsoap-security-flaw/
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Hoffman
2017-07-25 23:41:24 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
CVE-2017-5664 "Apache Tomcat Security Constraint Bypass"

https://nvd.nist.gov/vuln/detail/CVE-2017-5664
https://lists.apache.org/thread.html/a42c48e37398d76334e17089e43ccab945238b8b7896538478d76066@%3Cannounce.tomcat.apache.org%3E


The Tomcat version available for OpenVMS is affected by this. Sites
running Tomcat will want to have a look at the details within their
specific environments.

Haven't looked around to see what else changed between the HPE 7.0-29
version, the VSI 7.0-29B version, and the Apache Tomcat 7.0.78
version, or some other more recent Tomcat version.
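
For anyone stuck on the vulnerable version for a while: my reading of
the advisory is that the exposure gets much worse if the DefaultServlet
has been switched out of its default read-only mode, which can let
crafted requests rewrite static content such as custom error pages.
A quick check in conf/web.xml -- treat this as a sketch from my
reading, not an official mitigation; upgrading is the real fix:

    <servlet>
        <servlet-name>default</servlet-name>
        <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
        <!-- readonly defaults to true; setting it to "false" is the
             risky configuration -->
        <init-param>
            <param-name>readonly</param-name>
            <param-value>true</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>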
--
Pure Personal Opinion | HoffmanLabs LLC
seasoned_geek
2017-07-28 17:16:35 UTC
Permalink
On Monday, July 17, 2017 at 9:49:03 AM UTC-5, Stephen Hoffman wrote:

Hoff,

Sorry for taking snippets from multiple messages here, but was editing off-line due to flaky Internet. Please forgive if I pasted this in the wrong message thread as well. I had to be off-line for a while.
Post by Stephen Hoffman
That written, Microsoft has decided to follow your suggestions here,
but is doing so with Windows 10. Not with Windows XP or older
releases. We'll eventually learn how well the Windows 10 approach
works for Microsoft, for Microsoft partners and ISVs, and for end-users
of Windows, too. How well this works technically, and around corporate
finances, marketing, and partnerships? Even if software compatibility
continues, some new features are inevitably going to exclude older
hardware and force hardware upgrades; older hardware is inevitably
going to age out.
Odd, this, since Microsoft is in the process of abandoning Windows in reality, if not in name, and Google is in the process of abandoning Android. Microsoft has already issued EOL for Windows Mobile without announcing any replacement. Both Google and Canonical are hard at work on their own forks of Fuchsia. Both companies have chosen that platform as "the one OS to rule them all." Pieces of Ubuntu have already moved in under Windows 10. It will soon be a few Windows APIs on top of Ubuntu with a Microsoft-looking desktop. This was one of the big pushes behind ".Net Anywhere" or ".Net Lite" or whatever it was called. Given the rousing success of Mono, I don't hold much hope for Microsoft getting it to work on a non-Microsoft platform, hence the multi-year transition to Linux under the hood.

I haven't looked at the Fuchsia code base, but it is an off-shoot of another project, and I don't know if that project fully jettisoned the Linux kernel or not. The Linux kernel has some serious legacy design flaws which are getting worse now that they are trying to utilize CUDA cores. I understand that, back in the day, compiling the video driver into the kernel made some sense. It no longer does. We can no longer count on some 90% of hardware providing a VGA address space at a specific address range. Automatic upgrades of the kernel for Nvidia users are currently problematic, at least for the YABU distros. Hopefully Neon will be full Debian soon and a large part of the problem "might" go away. At least the Ubuntu don't-test-sh*t part will go away.

A full redesign of the Linux kernel, making it small and API-driven rather than pointer-driven, with shelled APIs for future external specialized processors, was/is long overdue. CUDA is not going to be the last. There already are a few low-market-share CUDA competitors, but when you can get a 2Gig video card having 384 CUDA cores for under $50, that's a lot of market inertia for competitors to overcome. Yes, those cores are specialized and can be morphed to do many things, but the reality is this quad-core now has 388 cores of varying capabilities. From at least one perspective it is a desktop-sized Connection Machine, much like the ones people at Fermi and a few other places were creating with cast-off MicroVAXes back in the day. The next logical step is for cards to come with 4Gig of RAM and close to 1024 cores of something more general than CUDA, on a low-power card people drop into their desktops for massive crunching/big-data capabilities. The small form factor desktop or even fuller-sized ATX mobo now becomes a backplane that other computing capabilities get stuck into.

In short, what's old is new again and the Linux kernel was in no shape to handle it. The current ham-fisted CUDA stuff is proof of that. Even my friend who from time to time works with Linus himself readily admits that. He's just not quite ready to throw out the baby with the bath water.
Post by Stephen Hoffman
Again: we can live in and can desire and seek Y2K-era security and
long-term server stability and the rest of the uptime era, or we can
deal with the environment we have now, with the need to deploy patches
more quickly, and prepare for the environment we're clearly headed
toward. Wrist watches commonly have more capacity and more
performance than most of the VAX servers. For those folks here that
are fond of disparaging or ranting about Microsoft or other vendors,
please do look at what they're doing, what they have available now, and
what they're working on. Microsoft and Linux and other choices are far
more competitive than they once were, far more secure, and are far
ahead of OpenVMS in various areas. The sorts of areas that show up on
RFPs and bid requests, too. Times and requirements and environments
all change. We either change, or we and our apps and our servers
retire in place.
I hear what you are saying, but firmly believe it is based on a false premise. Long ago, before VMS got an "Open" pre-pended by the sales resistance force, disgusting low-life cretins paid lots of money to even lower forms of biological life, namely The Gartner Group and what became Accenture, to market a false statement:

<b>Proprietary bad, OpenSource good.</b>

This was a completely false statement. It was massive spin on the reality "Proprietary expensive, OpenSource cheap" and it completely overlooked the real definition of "cheap" there. North Korean knock-off sold at Walmart cheap, not high quality at low cost.

This "Proprietary bad, OpenSource good" mantra got beat into people's brains so much they believe it is true today. It's not.

Where is it written that every business system must connect directly to the Internet?

Where is it written that your core critical cluster must use TCP/IP?

Where is it written that external XML messages must feed directly from the Internet into a server which is directly connected to a critical database?

Where is it written the exact same CPU with the exact same BIOS/UEFI with the exact same cheap hard drive containing programmable firmware as the nearly worthless desktop must run your critical systems?

These are __all__ dramatic system architecture failures of biblical proportions. By moving to an x86 platform, OpenVMS is now placing itself in the same position other worthless platforms are now in. Processor-level binary code which can execute independently of the OS can now penetrate OpenVMS, infecting the BIOS/UEFI and commodity drive firmware. The OpenSource code gives hackers who've never seen anything other than a PC the way to penetrate and trigger its execution. Firmware viruses are the new frontier for both mafia and clandestine types.

https://www.kaspersky.com/blog/equation-hdd-malware/7623/

While we know about the hard drive firmware virus, people have been rather quiet about the BIOS/UEFI (whichever term you prefer) viruses. VMS never had this exposure and there is no method of defense when connected to the Internet or allowing TCP/IP on the box.

Even casually secure businesses never use XML or any other free-form messaging format internally. Externally, yes. They place some sacrificial WebSphere or other message manipulator outside to be abused, but it can only pass back in fixed-width, proprietary-formatted messages, and it __never__ has access to a database it doesn't own. Doesn't matter if someone managed to force 2 billion characters into that first-name tag trying to send in an SQL injection attack or some other nasty via an overflow crash. The first N characters are all that get dropped into the internal message and the rest are thrown away. Yes, you can get some garbage data, but you cannot be penetrated. Yes, I see many systems set up by complete and total idiots who grew up in the Microsoft sh*t design world, but that doesn't mean real systems on real computers have to do stupid things.
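
As a concrete illustration of that truncate-into-fixed-width idea,
here's a minimal C sketch; the field name, the 30-character width, and
the hostile input are all made up:

    #include <stdio.h>

    #define FIRST_NAME_WIDTH 30  /* hypothetical fixed-width field size */

    /* Copy at most 'width' characters of untrusted input into a
       space-padded, fixed-width field; everything past 'width' is
       simply discarded, never parsed. */
    static void to_fixed_width(char *dest, size_t width, const char *src)
    {
        size_t i;
        for (i = 0; i < width && src[i] != '\0'; i++)
            dest[i] = src[i];
        for (; i < width; i++)
            dest[i] = ' ';  /* pad; no terminator inside the record */
    }

    int main(void)
    {
        char field[FIRST_NAME_WIDTH + 1];
        const char *hostile =
            "Robert'); DROP TABLE students;-- ...and a few billion more";

        to_fixed_width(field, FIRST_NAME_WIDTH, hostile);
        field[FIRST_NAME_WIDTH] = '\0';  /* terminate for printing only */
        printf("[%s]\n", field);
        return 0;
    }

Garbage in the field, maybe; hostile content reaching the inner
database parser, no -- which is the whole point of the sacrificial
front end.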

Laugh all you want, I've been told of production systems which have some sacrificial Internet connected device which gets all kinds of messages from the outside world in Unicode, chunks out what it wants and morphs the data into EBCDIC and sends the data to big blue boxes via old proprietary IBM network protocols.

Of course, the most secure designs are fully air gapped. On a periodic basis they sneakernet removable media containing only data files over to the real machines.

Odd that. In today's mindlessly connected world, sneakernet is still the ultimate in system security.
j***@yahoo.co.uk
2017-07-28 19:33:38 UTC
Permalink
Post by seasoned_geek
[...]
Where is it written that every business system must connect directly to the Internet?
Where is it written that your core critical cluster must use TCP/IP?
Where is it written that external XML messages must feed directly from the Internet into a server which is directly connected to a critical database?
Where is it written the exact same CPU with the exact same BIOS/UEFI with the exact same cheap hard drive containing programmable firmware as the nearly worthless desktop must run your critical systems?
[...]
Where is it written that there is no difference between
Open Source (e.g. Linux) and Open Standards (e.g. POSIX)?
seasoned_geek
2017-07-28 21:34:36 UTC
Permalink
Post by j***@yahoo.co.uk
Where is it written that there is no difference between
Open Source (e.g. Linux) and Open Standards (e.g. POSIX)?
Quite honestly there isn't.

Don't get me wrong, this very HP laptop had the virus known as Windows wiped from it and various Linux distros installed. Unless you count my blogging, book writing, and the occasional Qt sample compile as "production" work, none of them are running "production." I consider "production" work MRP, ERP, WMS, payroll and the like. In short, regularly scheduled jobs with regularly scheduled uptime requirements.

OpenSource: Mostly untested code without even the tiniest thought given to security, much of it written by 12-year-old boys just trying to learn, hurled into a repository where bugs will be logged and where the vast majority of bugs will be closed "because the version is no longer supported."

Open Standards: People who consider themselves very learned gather together, sometimes virtually, and politically decide by committee what something as-yet non-existent should do. There will already be competing implementations which do some/much of what ends up becoming part of the standard, but do it differently. During the political hashing out of what should and should not be part of "the standard," those who consider themselves very learned behave like 12-year-old boys.

The grand example of the Open Standard process is the Open Standard for XBASE files. Go ahead, look it up. We all use that term and we all think we know what it means, but there is no actual standard. For 5 years, those who considered themselves learned gathered together with reps from all of the major XBASE products (DBASE, Lotus, Borland, Clipper, etc.) and no standard could be reached, because each vendor in the room wanted their product to be the standard and all others to license the product from them.

COBOL has a standard, but try taking a COBOL application for an IBM and compiling it for a VAX. Even if they both claim to support the same COBOL year standard, in the real world it doesn't work. Much the same could/can be said about POSIX. Try compiling a POSIX application of significant size written for a commercial Unix implementation on VMS. Many of us have had to port stuff like this over the years. Hell, I had to deal with COGNOS PowerHouse when it moved to a "Common Codebase" which basically was their HP-UX version; if you happened to be on HP-UX it worked for you. On OpenVMS the bulk of the tools disappeared, because HP-UX didn't provide what was needed for them, though both were reportedly POSIX compliant.

The final sad reality in all of this is that OpenSource code for something like TCP/IP ends up having snippets of itself (or the entire thing) become the base of commercial versions, so the OpenSource security holes ripple out untraceably into the real world.
Arne Vajhøj
2017-07-30 23:54:54 UTC
Permalink
Post by seasoned_geek
OpenSource: Mostly untested code without even the tiniest thought
given to security, much of it written by 12 year old boys just trying
to learn, hurled into a repository where bugs will be logged where
the vast majority of bugs will be closed "because the version is no
longer supported."
Your view of open source is very far from reality.

Today open source is to a very large extent a corporate thing.

The Linux kernel project actually publishes statistics on contributions.
In the last report, less than 10% was from individuals and more than 90%
was from companies. And the list of companies is almost all the big
companies in IT: Intel, AMD, ARM, TI, NVidia, Qualcomm, Cisco, Huawei,
Samsung, Redhat, IBM, Oracle, Google, Facebook, etc.

And the most known open source guys are very far from kids. Among
the famous/infamous: Richard Stallman is 64, Linus Torvalds is 47,
Greg Kroah-Hartman must be around 50, Ulrich Drepper is 49,
Michael Widenius is 55, Rasmus Lerdorf is 48, Guido van Rossum
is 61, Miguel de Icaza is mid 40's, Geir Magnusson must be mid 40's
and so on.

Arne
David Froble
2017-07-28 19:52:44 UTC
Permalink
Post by seasoned_geek
I hear what you are saying, but firmly believe it is based on a false
premise. Long ago, before VMS got an "Open" pre-pended by the sales
resistance force, disgusting low-life cretins paid lots of money to even
lower forms of biological life, namely The Gartner Group and what became
Accenture, to market a false statement:
<b>Proprietary bad, OpenSource good.</b>
This was a completely false statement. It was massive spin on the reality
"Proprietary expensive, OpenSource cheap" and it completely overlooked the
real definition of "cheap" there. North Korean knock-off sold at Walmart
cheap, not high quality at low cost.
This "Proprietary bad, OpenSource good" mantra got beat into people's brains
so much they believe it is true today. It's not.
Where is it written that every business system must connect directly to the Internet?
Where is it written that your core critical cluster must use TCP/IP?
Where is it written that external XML messages must feed directly from the
Internet into a server which is directly connected to a critical database?
Where is it written the exact same CPU with the exact same BIOS/UEFI with the
exact same cheap hard drive containing programmable firmware as the nearly
worthless desktop must run your critical systems?
I'm not arguing with you; I agree with much of what you write. But you've
got to keep one thing in mind: people's perceptions ARE their reality, and
they act accordingly.
seasoned_geek
2017-07-28 21:48:59 UTC
Permalink
Post by David Froble
I'm not arguing with you; I agree with much of what you write. But you've got
to keep one thing in mind: people's perceptions ARE their reality, and they
act accordingly.
Those perceptions exist because of the pathetic excuses we now have for higher education. The few IT programs which still exist teach a fraudulent AGILE methodology which legitimizes hacking on the fly without any real plan, just hurling sh*t against the wall hoping something sticks. You can't even find one actually teaching programming logic anymore. If they have a class called that, when you look at it you find they are teaching PASCAL in there instead of logic. The kids are useless when they try to learn the next language if it isn't exactly like PASCAL.

Adding insult to injury, we now have MBA toilet-paper mills cranking out worthless suits with haircuts, teaching them less than zero about software development or technology. Worse, they tend to also be taught how to use Microsoft Windows based project management and email, so they graduate, receiving their fresh roll of Charmin, thinking "It must be good enough to run a company if that is what we had in school."

Buy a canned package which provides a "dashboard", "turn a couple of knobs" to "customize" the package for your current company and collect your bonus check.

Institutions of higher education have really failed the youth of today. At client site after client site I run into fresh graduates who know nothing. Even the token few who bother to learn C++ have zero problem-solving capability. Doesn't matter what school they are from. When I ask them "Did you have a programming logic course" they say yes, and when I ask "Did they teach PASCAL in it" they also say yes.

No wonder those kids quickly move off into scripted Web language of the week. Their parents and my tax dollars paid for an education they never got.

In order to fix perceptions you have to fix a completely failed higher education system. Both the IT tracks and the MBA tracks need to be teaching what is and is not proper systems development, instead of frauds Gartner was paid to market, like AGILE. Even the ACM is finally starting to empty its collective colon on AGILE. Way past time!
Arne Vajhøj
2017-07-30 00:35:34 UTC
Permalink
Post by seasoned_geek
You can't even find one actually teaching
programming logic anymore. If they have a class called that when you
look at it you find they are teaching PASCAL in there instead of
logic. The kids are useless when they try to learn the next language
if it isn't exactly like PASCAL.
Institutions of higher education have really failed the youth of
today. At client site after client site I run into fresh graduates
who know nothing. Even the token few who bother to learn C++ have
zero problem solving capability. Doesn't matter what school they are
from. When I ask them "Did you have a programming logic course" they
say yes and when I ask "Did they teach PASCAL in it" they also say
yes.
That may have been a credible story about 20 years ago.

In the 80's and 90's it was common to teach Pascal.

The world has changed since then.

Today it is languages like Java and Python that are being taught.

Arne
Arne Vajhøj
2017-07-30 00:38:31 UTC
Permalink
Post by seasoned_geek
Those perceptions exist because of the pathetic excuses we now have for higher education.
Institutions of higher education have really failed the youth of today. At client site after client site I run into fresh graduates who know nothing.
No wonder those kids quickly move off into scripted Web language of the week. Their parents and my tax dollars paid for an education they never got.
In order to fix perceptions you have to fix a completely failed higher education system.
Says the guy who has a web page stating:

<quote>
Date handling in Java is somewhat littered with land mines. Java
provides you with a Date class, then tells you not to use it. When you
use the classes that are supposed to replace the now depreciated Date
class, you are forced to use Date objects. Use a Date object in your
code where it will actually get a name and the compiler will flag a warning.
</quote>

Arne
Stephen Hoffman
2017-07-28 22:12:45 UTC
Permalink
Post by seasoned_geek
Hoff,
Sorry for taking snippets from multiple messages here, but was editing
off-line due to flaky Internet. Please forgive if I pasted this in the
wrong message thread as well. I had to be off-line for a while.
Post by Stephen Hoffman
That written, Microsoft has decided to follow your suggestions here,
but is doing so with Windows 10. Not with Windows XP or older
releases. We'll eventually learn how well the Windows 10 approach
works for Microsoft, for Microsoft partners and ISVs, and for end-users
of Windows, too. How well this works technically, and around corporate
finances, marketing, and partnerships? Even if software compatibility
continues, some new features are inevitably going to exclude older
hardware and force hardware upgrades; older hardware is inevitably
going to age out.
Odd this since Microsoft is in the process of abandoning Windows in reality,
Microsoft is not abandoning Windows. No vendor willingly abandons a
profitable installed base in the billions. Though the desktop market
has been drifting downward in size, as the mobile market massively
increases in size (which also reduces the influence of Microsoft in
the client market). Mr. Nadella is looking toward the future of
Microsoft with Azure and hosted services, however. With the
commoditization of and the competition among the operating systems,
they have to look at the next five and ten years. Whether the bet on
Azure, and on associated hosted services such as hosted Active
Directory and Exchange Server, and on apps such as Office365, pays off?
Post by seasoned_geek
not name, and Google is in the process of abandoning Android. Microsoft
has already issued EOL for Windows Mobile without announcing any
replacement.
Microsoft was not successful in mobile, and — much like HPE and Itanium
— has decided to exit the market. They got stuck between Android
and iOS.
Post by seasoned_geek
Both Google and Canonical are hard at work on their own forks of
Fuchsia. Both companies have chosen that platform as "the one OS to
rule them all." Pieces of Ubuntu have already moved in under Windows
10. It will soon be a few Windows APIs on top of Ubuntu with a
Microsoft looking desktop. This was one of the big pushes behind ".Net
Anywhere" or ".Net Lite" or whatever it was called. Given the rousing
success of Mono I don't hold much hope for Microsoft getting it to work
on a non-Microsoft platform, hence the multi-year transition to Linux
under the hood.
I'm well aware of the Unix subsystem available in the most recent
Windows 10 installations. It's rather like GNV and OpenVMS, but far
more extensive and better integrated with the operating system.

Whether Linux is also going to be the new kernel? Dunno. But I
doubt it. If the Microsoft folks were even going to try re-hosting
Windows onto a new kernel, they'd almost certainly be aiming well past
the existing kernels.

The advertising giant needs a way to advertise, and Android was how
they avoided getting strangled by Apple and Microsoft and others as
mobile really got rolling. Google then got themselves into some
trouble with Android and support for earlier versions, particularly due
to how they positioned and licensed Android to the various handset
vendors. Which is why I expect they're headed toward Fuchsia, if
they're going to replace Android with something sufficiently better.
That's if they're not simply looking to use Fuchsia for their own
internal use. Nobody outside of Google really knows what they're up
to here. They certainly seem to be approaching it as a way to get
apps from both iOS and Android, though.

Microsoft got themselves into some trouble with mobile because their
approach was at odds with that of their competitors; the Microsoft
folks couldn't price Windows Mobile underneath Android, and iOS was
vacuuming most of the profits. Among other details.

Here's some fodder for thought...
https://qz.com/1037753/the-windows-phone-failure-was-easily-preventable-but-microsofts-culture-made-it-unavoidable/
Post by seasoned_geek
I haven't looked at the Fuchsia code base, but it is an off-shoot of
another project, and I don't know if that project fully jettisoned the
Linux kernel or not. The Linux kernel has some serious legacy design
flaws which are getting worse now that they are trying to utilize CUDA
cores.
Legacy software is any complex software package where various
subsystems are no longer optimally designed for current requirements
and environments.

As for CUDA or Metal or OpenCL or Vulkan or DirectX, or of GPU or GPGPU
support, I've only been particularly following those topics on macOS
and iOS platforms and not particularly over on Linux or Windows.
Post by seasoned_geek
I understand, back in the day compiling the video driver into the
kernel made some sense. It no longer does. We can no longer count on
some 90% of hardware providing a VGA address space at a specific
address range. Automatic upgrades of the kernel for Nvidia users are
currently problematic at least for the YABU distros. Hopefully Neon
will be full Debian soon and a large part of the problem "might" go
away. At least the Ubuntu don't test sh*t part will go away.
There are some rather long and interesting discussions of the
trade-offs involved with having the drivers in the kernel, as compared
with the safety of having more of the code outside of the kernel. For
details on that, rummage around for the Windows Driver Model and the
Windows Driver Framework discussions, as compared with the Graphics
Device Interface (GDI). Copying blocks of memory around gets...
expensive.

Here's a decent starting point for that particular Windows NT GDI
design discussion:
https://technet.microsoft.com/en-us/library/cc750820.aspx#XSLTsection124121120120


Also see:
https://docs.microsoft.com/en-us/windows-hardware/drivers/display/submitting-a-command-buffer


Also have a look at the wrestling that Qubes OS has been having with
isolating the potential for device shenanigans.
https://blog.invisiblethings.org/index.html

Different operating systems make different trade-offs, too.
Post by seasoned_geek
A full redesign of the Linux kernel, making it small and API-driven
rather than pointer-driven, with shelled APIs for future external
specialized processors, was/is long overdue.
I don't expect to see the Linux kernel redesigned that way, though
stranger things have happened. I would expect to see further interest
in L4 and maybe DragonFly BSD. There's been more than a little
research and testing around lowering the overhead of message-passing
with L4 kernels, and faster hardware certainly helps.
Post by seasoned_geek
CUDA is not going to be the last. There already are a few
low-market-share CUDA competitors, but when you can get a 2Gig video
card having 384 CUDA cores for under $50, that's a lot of market
inertia for competitors to overcome. Yes, those cores are specialized
and can be morphed to do many things, but the reality is this
quad-core now has 388 cores of varying capabilities. From at least one
perspective it is a desktop-sized Connection Machine, much like the
ones people at Fermi and a few other places were creating with
cast-off MicroVAXes back in the day. The next logical step is for
cards to come with 4Gig of RAM and close to 1024 cores of something
more general than CUDA, on a low-power card people drop into their
desktops for massive crunching/big-data capabilities. The small form
factor desktop or even fuller-sized ATX mobo now becomes a backplane
that other computing capabilities get stuck into.
There'll continue to be better integration between scalar cores and
GPUs, for those folks that need that. CUDA is how NVIDIA allows folks
to access GPUs. Metal is what Apple has been using for that in recent
times.

There are substantial differences in how scalar cores and GPUs work,
and it's been interesting working with them; GPUs are screaming fast at
various tasks, and utterly glacial at others. There's been
substantial work toward support of machine learning on macOS with Core
ML, for instance, and other tasks that are well suited to GPU
computing. Getting data into and out of the GPUs has been problematic
in recent years, though that access is improving with each generation.
With what I've been working with, there's also the overhead of
compiling the code for the GPU, whether that's compiled ahead or
happens while the application is running.

And for now, folks get to choose Metal, Vulkan or NVIDIA's CUDA as the
interface, or some higher-level framework that abstracts that, or they
can use Unity or Unreal and let those tools deal with Metal or CUDA or
whatever.
Post by seasoned_geek
In short, what's old is new again and the Linux kernel was in no shape
to handle it. The current ham-fisted CUDA stuff is proof of that. Even
my friend who from time to time works with Linus himself readily admits
that. He's just not quite ready to throw out the baby with the bath water.
Linus is most definitely not a fool. As for what's been happening
with NVIDIA CUDA support over on Linux, I haven't been following that.
But it wouldn't surprise me that there's some skepticism around
supporting a vendor-specific framework such as CUDA in Linux — NVIDIA
is not the only graphics vendor around — and graphics hardware support
in general has been a long-running thorn for various open-source
operating systems. Yes, there are fully-documented graphics
controllers, and that's been a very nice change from earlier years.
The performance of various recent commodity integrated graphics such as
Intel HD and Iris graphics is actually quite decent, too. And various
vendors are interested in Vulkan in addition to or in place of CUDA,
too.

https://en.wikipedia.org/wiki/Vulkan_(API)
Post by seasoned_geek
Post by Stephen Hoffman
Again: we can live in and can desire and seek Y2K-era security and
long-term server stability and the rest of the uptime era, or we can
deal with the environment we have now, with the need to deploy patches
more quickly, and prepare for the environment we're clearly headed
toward. Wrist watches commonly have more capacity and more
performance than most of the VAX servers. For those folks here that
are fond of disparaging or ranting about Microsoft or other vendors,
please do look at what they're doing, what they have available now, and
what they're working on. Microsoft and Linux and other choices are far
more competitive than they once were, far more secure, and are far
ahead of OpenVMS in various areas. The sorts of areas that show up on
RFPs and bid requests, too. Times and requirements and environments
all change. We either change, or we and our apps and our servers
retire in place.
I hear what you are saying, but firmly believe it is based on a false
premise. Long ago, before VMS got an "Open" pre-pended by the sales
resistance force, disgusting low-life cretins paid lots of money to
even lower forms of biological life, namely The Gartner Group and what
became Accenture, to market a false statement:
<b>Proprietary bad, OpenSource good.</b>
This was a completely false statement. It was massive spin on the
reality "Proprietary expensive, OpenSource cheap" and it completely
overlooked the real definition of "cheap" there. North Korean knock-off
sold at Walmart cheap, not high quality at low cost.
This "Proprietary bad, OpenSource good" mantra got beat into people's
brains so much they believe it is true today. It's not.
You seem to be misinterpreting my comments. I'm specifically
referring to the current and future environments, and not to what
analysts in the 1980s and 1990s stated, nor about what investments and
what guesses made back then that worked or not, nor am I even remotely
interested in rehashing the product management decision to rename the
VMS product to OpenVMS. Nope. Wrong direction. Forward. History
and how we got here is fun and interesting and a good foundation for
learning from successes and not repeating mistakes, but we're just not
going back that way again.
Post by seasoned_geek
Where is it written that every business system must connect directly to the Internet?
Outside of the military and intelligence communities, there are few
air-gapped systems around, and IPv6 means most every other server is
connected. Whether those servers communicate outside of the local
network is dependent on local requirements.
Post by seasoned_geek
Where is it written that your core critical cluster must use TCP/IP?
I wouldn't expect TCP, though I do expect to see DTLS and UDP.
Because — as many OpenVMS sites learned — local IT requires IP, or the
servers cannot be networked.
Post by seasoned_geek
Where is it written that external XML messages must feed directly from
the Internet into a server which is directly connected to a critical
database?
XML and JSON are how data can be packaged, and frameworks and tools are
available for those. Folks are free to use other approaches, though a
bespoke format or network protocol or database or other such is code
that's not particularly differentiated, and that must be written and
maintained and updated. Trade-offs and reasons for bespoke code
certainly do exist, but such decisions are best approached skeptically.
Post by seasoned_geek
Where is it written the exact same CPU with the exact same BIOS/UEFI
with the exact same cheap hard drive containing programmable firmware
as the nearly worthless desktop must run your critical systems?
Ayup. Alpha was certainly fun and a very nice design, as was DEC's
DLT and RA and RF storage and the rest of that era. Or IBM and their
DASD storage. Much of the traditional middle market from the 1980s and
1990s lost out to commodity components and lower prices and higher
volumes, and the high-end got higher and more expensive. OpenVMS
isn't anywhere near the high end. And for the foreseeable future,
OpenVMS doesn't have the volume to have dedicated and custom hardware,
beyond commodity-based servers that've been tested for OpenVMS
compatibility. Apple, Microsoft and other vendors are of the scale
where custom hardware can be feasible, and Apple has the volume where
A10 and A10X and such are not just possible but advantageous, but the
folks at VSI have at least a year or two or ten before they're building
and supporting bespoke microprocessors, custom memory and extreme
storage devices. Or requiring such. Until then, mid- and upper-end
commodity hardware from existing providers will have to suffice for
OpenVMS. But again, looking forward and not backwards. VSI is in
2017. With limited staff and funding. And with a port to x86-64
well underway.
Post by seasoned_geek
These are __all__ dramatic system architecture failures of biblical
proportions. By moving to an x86 platform OpenVMS is now placing itself
in the same position other worthless platforms are now in. Processor-level
binary code which can execute independently of the OS can now penetrate
OpenVMS, infecting the BIOS/UEFI and commodity drive firmware. The
OpenSource code gives hackers who've never seen anything other than a
PC the way to penetrate and trigger its execution. Firmware viruses are
the new frontier for both mafia and clandestine types.
I've yet to encounter binary code that's transportable to OpenVMS in
the fashion described, nor malware executables that are portable across
a mix of operating systems that includes OpenVMS. The executable code
may or may not run, but — absent some sort of compatibility framework —
the I/O and system calls will fail. Malware binary executables — any
meaningful binary executables, for that matter — are simply not
magically portable across disparate operating systems. Sure, maybe
you somehow get an RCE and manage to get a loop running. Beyond that?
The code is specific to the operating system context. I've stated
all this before, of course.

As for malware that targets outboard or low-level components of the
platform such as the Intel management engine or HPE iLO or other
similar components, or that's written to target (for instance) SMH or
Java or such execution environments, that's all certainly in play
irrespective of whatever operating system might be in use on the server.
Or reflection attacks or denial-of-service attacks against some network
servers or related. That's irrespective of whether x86-64 is in use.

That written, security has gotten much more complex, certainly.

I do not wish to again debate whether or not anybody thinks that x86-64
is elegant or wonderful or even particularly sane — I certainly don't
believe it to be all that and a bag of chips — but x86-64 is also the only
volume server platform processor this side of some future ARM and
AArch64 / ARMv8.x and SBSA server market. If an operating system
doesn't support x86-64 servers, then purchases and installations of
that operating system are going to be at a competitive disadvantage
because they can't run on commodity hardware and interoperate with
standard tools such as virtual machines. Performance-competitive
microprocessor designs are expensive, and extremely expensive when the
producer lacks the production volume of Intel or AMD, or of the ARM
producers, and when there's the need to design and support low-volume
custom servers. Then there's also ending up beholden to the producer
of the custom processor or server design you're based on if not x86-64
(or maybe eventually on some commodity ARM AArch64 server designs or in
some potential and currently-distant future RISC V servers), but that's
a rather more advanced business-related topic.

At the very base of the whole discussion of a commercial operating
system such as OpenVMS is making a profit. The related details such
as business economics, production and sales costs, and product
forecasting are all such fun discussions, of course. VSI has to sell
enough OpenVMS licenses and support to cover their costs and recoup
sufficient profits for their investor, and purchasing and running and
supporting OpenVMS has to make financial sense to enough third-party
providers and customers to matter.

If y'all can show a path to better profits than those likely arising
from commodity hardware and x86-64 processors and the current port, the
folks at VSI will probably be interested. Line up enough paying
customers and you'll have their full attention. But that new port and
that lower-volume or bespoke hardware will also lock the VSI team for
another three or five years, and now's not an auspicious time for that,
and the VSI folks have to be able to sell that hardware and software to
other folks — to many of us — in sufficient volume to matter. Or if
you really think there's a market here and have a decent chunk of a
billion dollars tucked into the couch cushions, start designing and
building your own operating system and high-end hardware and your
preferred or your own microprocessor, and clobber everybody else in the
computing business in ten or twenty years...
--
Pure Personal Opinion | HoffmanLabs LLC
u***@gmail.com
2017-07-29 14:58:51 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
THIS IS THE TIME FOR PROCESS SOFTWARE TO STEP UP TO THE PLATE.
IT WOULD NOT BE VERY HARD TO DUST OFF PURVEYOR AND UPDATE THE SSL
AND BRING IT BACK. IT WOULD BE THE TOP OPENVMS WEBSERVER ON THE MARKET.
Stephen Hoffman
2017-07-29 20:00:48 UTC
Permalink
THIS IS THE TIME FOR PROCESS SOFTWARE TO STEP UP TO THE PLATE. IT WOULD
NOT BE VERY HARD TO DUST OFF PURVEYOR AND UPDATE THE SSL AND BRING IT
BACK. IT WOULD BE THE TOP OPENVMS WEBSERVER ON THE MARKET.
Maybe it's time to replace that ASR33 you're apparently using with a
refurb ASR38 or maybe a VT100?

Basically, you're asking for VSI to dust off code that was retired most
of two decades ago (and undoubtedly for good and valid reasons), and
bringing it forward. It'd be worthwhile to ponder why Process got out
of the Purveyor business back then, and what has happened with web
servers and expectations since then? Maybe also ponder what current
folks are using Apache for, and which features they're using within the
Apache port?

So... Easy, huh? Okay. So... Let us play this one out. VSI
decides to license the old code and dust it off and update it.
Because "IT WOULD NOT BE VERY HARD TO DUST OFF PURVEYOR AND UPDATE THE
SSL AND BRING IT BACK", of course. To bring it back to what wasn't
particularly competitive most of twenty years ago, when the product was
retired. Sure, if we're looking at TLS, that's (probably) a fairly
isolated hunk, though adding stapling and other TLS-related work will
add to that.

But if your suggestion undergoes seemingly-inevitable "project creep"
and for this to be the primary VSI web server, then folks are going to
want and need various features present in Apache that aren't in
something as dated as Purveyor, and there'll be ongoing support and
upgrade efforts as the standards evolve. Not just porting the
existing code over to OpenVMS, as is the current case with Apache, or to
"DUST OFF PURVEYOR", but designing and implementing and testing
HTTP/2 support, IPv6 support, Unicode, LDAP, WebDAV, of connecting it
all with VSI IP, and whatever other newer web services features and
integration and features that might be required by the customers; of
creating a more competitive web server.

The customers are not going to be fast to migrate to a web server
lacking features they're using, after all. And then there's the
discussion of continuing support for Apache or migration of folks from
Apache over to this new VSI web server, and duplication of development
adds incremental costs. And duplication will detract from the
migration to and adoption of this hypothetical new VSI ULTRAPURVEYOR
web server.

Creating and updating and testing documentation and product support for
all that, and ensuring that content management systems and the rest of
what now connects into web servers are tested with and documentation
written for use with this new web server, too.

There'll also be customer questions about importing or exporting the
configuration data from the existing Apache environments, and getting
content management systems or business systems connected to the web
server, and eventually working on whatever open-source or VSI-specific
tools might access and potentially even eventually manage the contents
of the web server configuration files, or the web server database.
(I'm routinely working with servers that manage and modify the
underlying Apache files, as part of the platform management interface,
too. Any of that will have to be modified to track the new platform,
or the new platform will have to use the Apache files and formats.
The use of the Apache configuration files would ease the migration into
the new platform, of course. But now you're dealing with all of the
Apache modules, and a whole lot more support. Or a subset of the
supported Apache modules being supported in VSI ULTRAPURVEYOR, and all
the trade-offs involved there.)

Then there's the question of the licensing and pricing for this
hypothetical new web server; whether to license the code with the
platform, or to require a separate license for folks that want that.
Web servers are now integrated parts of pretty much any other server
available on the market, but now VSI is accepting a rather larger
effort than porting across Apache. Customers too are now learning
about and managing a new web server and one that's wholly different
from Apache or nginx or another server, which adds to their own costs.

While certainly not all of this will be needed by all customers and
there'll certainly be other work outside this list, here's a starting
point for what else can be involved with updating Purveyor from most of
twenty years ago, and replacing Apache:

https://httpd.apache.org/docs/trunk/new_features_2_4.html
https://httpd.apache.org/docs/2.2/new_features_2_2.html
https://httpd.apache.org/docs/2.2/new_features_2_0.html
https://en.wikipedia.org/wiki/Apache_HTTP_Server
Probably some other web-services historical lists of added features and
web server changes have been posted... somewhere...

Purveyor is far enough back that it may well lack server-side includes
and a whole lot of other support that's now considered common and
expected.

Then there'll be requests for adding support for Tomcat and other such,
and whatever else Purveyor lacks.

There'll be updating the code for modern compilers and maybe for modern
coding practices, if not starting to rewrite the code in a more modern
language, depending on what Purveyor was originally written in.

So.... sure. "IT WOULD NOT BE VERY HARD TO DUST OFF PURVEYOR". If
it's just what Purveyor had, selling that to folks will be fun. If
it's hauling Purveyor forward to competitive features, it wouldn't
surprise me to require a team of a dozen OpenVMS- and web-familiar
developers and probably a year or two of effort and possibly longer,
and an ongoing investment to track and upgrade to newer web standards
and requirements, and VSI will have some combination of a large hole in
their development budget to fund that work, or a unique and
platform-specific web server with rather less than what
comparable-vintage Apache or other major web servers provide. I'm
sure that'll really sell well with any folks wholly new to OpenVMS, and
also with the OpenVMS folks using Apache. And in parallel to this web
server, VSI will continue to need developers for work on OpenVMS
itself; on other features and tools, too. Development budgets and
schedules are seldom unlimited, and there are always these sorts of
trade-offs.

I'd prefer to see Apache integrated directly into the base distro, and
to see it and its TLS support kept current. At least until there's a
replacement that has marketable advantages over Apache or nginx or the
other and more established web servers. Because I can use Apache for
whatever Purveyor did back then. And for a whole lot more.

p.s. I've just noticed that the entire Apache 2.2 series has reached
end of support. Which means the VSI Apache port is the path forward
for folks using the HPE port.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2017-07-29 23:31:19 UTC
Permalink
Post by u***@gmail.com
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
THIS IS THE TIME FOR PROCESS SOFTWARE TO STEP UP TO THE PLATE.
IT WOULD NOT BE VERY HARD TO DUST OFF PURVEYOR AND UPDATE THE SSL
AND BRING IT BACK. IT WOULD BE THE TOP OPENVMS WEBSERVER ON THE MARKET.
I've got to wonder where that came from ?????

Oh, wait, it's Bob ....

WASD is already dusted off, and has updated SSL. What can Purveyor
provide to beat that? A better price? Don't see Process paying you to
use it ....
Arne Vajhøj
2017-07-30 00:24:09 UTC
Permalink
Post by u***@gmail.com
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
THIS IS THE TIME FOR PROCESS SOFTWARE TO STEP UP TO THE PLATE.
IT WOULD NOT BE VERY HARD TO DUST OFF PURVEYOR AND UPDATE THE SSL
AND BRING IT BACK. IT WOULD BE THE TOP OPENVMS WEBSERVER ON THE MARKET.
They could.

But it did not succeed in the VMS market 25 years ago.

Why should it succeed now, when it would start out having to catch up
technically and with zero market share?

There are several web servers available for VMS. No missing
products gap to fill.

Arne
seasoned_geek
2017-07-29 19:21:38 UTC
Permalink
Post by Stephen Hoffman
Microsoft is not abandoning Windows. No vendor willingly abandons a
profitable installed base in the billions. Though the desktop market
has been drifting downward in size, as the mobile market massively
increases in size (and which also reduces the influence of Microsoft in
the client market). Mr. Nadella is looking toward the future of
Microsoft with Azure and hosted services, however. With the
commoditization of and the competition among the operating systems,
they have to look at the next five and ten years. Whether the bet on
Azure, and on associated hosted services such as hosted Active
Directory and Exchange Server, and on apps such as Office365, pays off?
Quite honestly, Microsoft is abandoning Windows and will be putting a Windows desktop on top of Ubuntu in the near future. They are doing it partly for the reasons you've listed _and_ for the following additional reasons:

1) They cannot force users of Windows 7, XP and older versions to upgrade when much of the newer software can and does still run on those older versions. New software written for Ubuntu/Linux won't run on XP, so at best those users will have to find a VM and install some supported Linux version.

2) Linux flavors have become far too commonplace for Microsoft _not_ to push out Linux versions of some of its cash cows, and maintaining the same products for 3 platforms is too big a strain. Apple uses BSD, which is reasonably close to Linux, so if Microsoft moves to Linux under the hood they can get very close to a single code base for Office and the other cash-cow products while reducing their overall development effort.

3) A growing number of hardware vendors are pre-installing Linux. There are now even on-line databases to find them.
http://lxer.com/module/db/index.php?dbn=14

4) When I did a project for a marketing company for their client, INTEL, all of the European versions of the product had to be Linux-based, because in much of Europe it is illegal to bundle the OS with the hardware, so almost all of those machines leave the retailer with a Linux disk.

5) Most Android 2-in-1s out there can be flashed for Linux. This is not true for many of the low-end Windows 2-in-1s, due to Microsoft being the only source of some drivers, but the Linux-based Android devices already have Linux drivers. You may have to look for them, but they were written and released.
Post by Stephen Hoffman
Microsoft was not successful in mobile, and — much like HPE and Itanium
— have decided to exit the market. They got stuck between Android
and iOS.
6) The bulk of Microsoft's cloud services are already running on Linux because Microsoft couldn't get its own OS to work; that I got straight from people working on those services. Windows simply can't play in the market Microsoft is trying to get to. They have decided to exit the Windows desktop/tablet/2-in-1 market in favor of a Windows-like desktop on top of Linux. They are basically following a market which has already been created.

Today's ranking of 9 on DistroWatch (Zorin OS):
http://distrowatch.com/table.php?distribution=zorin

Today's ranking of 16 on DistroWatch (ReactOS, clean-room Windows binary compatibility):
http://distrowatch.com/table.php?distribution=reactos
It is from Russia though so you might want to keep that in mind before trying it. I haven't looked at it in quite a while.

Let us also not forget the fact WINE has continued to improve.
https://www.winehq.org/
Many gamers now use it for XP- and older-era games. There was something broken with it a year or two ago when Lotus SmartSuite 98 would no longer install, but I believe that was fixed. I stumbled into a huge rash of messages from people with massive document archives and the-whole-business-is-in-it Lotus Approach systems when that happened. I know it had to have been fixed; right after that I finally bit the bullet and exported some Lotus WordPro documents to file formats LibreOffice could actually consume. Stuff I wrote a long time ago and always _meant_ to convert but secretly hoped SmartSuite would rise again on Linux somehow. For writing a book, WordPro was the best.
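
(For anyone trying that sort of thing: old installers generally want a
32-bit WINE prefix. Something like the following, with the prefix path
and installer name made up for illustration:

  WINEARCH=win32 WINEPREFIX=$HOME/.wine-smartsuite winecfg
  WINEPREFIX=$HOME/.wine-smartsuite wine SETUP.EXE

Keeping a separate prefix per legacy application avoids one install
breaking another.)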

So, Microsoft was faced with an ever-escalating maintenance cost, along with continually diminishing income to support an OS they were no longer charging for when OEMs sold units below a certain retail price, _or_ they could cut a deal with Canonical for Ubuntu, roll in an enhanced version of WINE and put their own Windows-looking desktop on it.

From a cash-flow perspective, silent abandonment made the most sense. Canonical is used to cutting deals for custom versions; China paid them to develop its "national operating system" quite some time back.

As to abandoning a profitable installed base, your favorite company, Apple, has done it numerous times. They have abandoned processor architectures and entire operating systems over and over again. The current strategy Microsoft is employing is more from the view of "forcing them to pay us again." They want all of those older Windows users to be forced into finally upgrading instead of buying 3rd-party hacks that kinda-sorta make newer software work.

This also explains why WINE has started having much more frequent releases of things lately. Someone gave them some cash.
Post by Stephen Hoffman
Whether Linux is also going to be the new kernel? Donno. But I
doubt it. If the Microsoft folks were even going to try re-hosting
Windows onto a new kernel, they'd almost certainly be aiming well past
the existing kernels.
Aiming for a more far-reaching kernel would assume they see more than 10 years of life in the desktop OS market, given the growth of Linux, Apple and Android in the desktop/laptop/notebook/2-in-1 world. While all of these devices will continue to be around and used for that length of time and longer, Microsoft will hold an ever-decreasing share of them. Managing and paying for all of the licenses has been yet another burden for corporations. After the Windows Vista and Windows 8 debacles, many corporations switched away from Microsoft products, including the cash cows like Office and Exchange. More and more companies are using Google Docs and masked Gmail accounts for employees, making it all the easier to hop desktop platforms.

Let us also not forget Office Online was designed to work natively with Linux.
https://community.linuxmint.com/tutorial/view/1584
Post by Stephen Hoffman
The advertising giant needs a way to advertise, and Android was how
they avoided getting strangled by Apple and Microsoft and others as
mobile really got rolling. Google then got themselves into some
trouble with Android and support for earlier versions, particularly due
to how they positioned and licensed Android to the various handset
vendors. Which is why I expect they're headed toward Fuchsia, if
they're going to replace it with something sufficiently better.
That's if they're not simply looking to use Fuchsia for their own
internal use. Nobody outside of Google really knows what they're up
to here. They're certainly seemingly approaching it as a way to get
apps from both iOS and Android, though.
The developers working on Fuchsia have been posting in various places that this is the replacement OS. They have been told this and believe it. True, Google could pull the plug at any moment, but Android outright sucks, and that has become an industry-wide opinion.
Post by Stephen Hoffman
Microsoft got themselves into some trouble with mobile because their
approach was at odds with that of their competitors; the Microsoft
folks couldn't price Windows Mobile underneath Android, and iOS was
vacuuming most of the profits. Among other details.
Btw, I've been getting calls for Qt-on-Linux programming projects requiring experience in porting from Windows Embedded. (Not mobile, but the embedded version used in some medical and other devices, the one which replaced WinCE.) It appears they didn't just pull the plug on Mobile, but on all the embedded and semi-embedded versions.
Post by Stephen Hoffman
Here's some fodder for thought...
https://qz.com/1037753/the-windows-phone-failure-was-easily-preventable-but-microsofts-culture-made-it-unavoidable/
At some point I will read the link. Having been on the Qt side of that battle, I can say they failed when they bought Nokia and forced it to use only Microsoft tools, jettisoning Qt (they owned the product at the time). This created something of a backlash, and Microsoft seemed to go out of its way to make it incredibly difficult to cross-compile Qt-based phone apps for Windows Mobile. The same apps built and worked fine for Android and Apple, so many apps simply didn't get ported because you just couldn't get a single code base to work.
Post by Stephen Hoffman
Legacy software is any complex software package where various
subsystems are no longer optimally designed for current requirements
and environments.
Which also describes the current version of Windows <Grin>
Post by Stephen Hoffman
The performance of various recent commodity integrated graphics such as
Intel HD and Iris graphics is actually quite decent, too. And various
vendors are interested in Vulkan in addition to or in place of CUDA,
too.
NVIDIA managed to get the BOINC project involved. True, they also support OpenCL, but there are way more NVIDIA logos on the project list than Radeon or others.

http://boinc.berkeley.edu/projects.php
Post by Stephen Hoffman
You seem to be misinterpreting my comments. I'm specifically
referring to the current and future environments, and not to what
analysts in the 1980s and 1990s stated, nor about what investments and
what guesses made back then that worked or not, nor am I even remotely
interested in rehashing the product management decision to rename the
VMS product to OpenVMS. Nope. Wrong direction. Forward. History
and how we got here is fun and interesting and a good foundation for
learning from successes and not repeating mistakes, but we're just not
going back that way again.
I was actually pointing out that _Forward_ is with a proprietary networking protocol, CPU and other hardware. Moving to a cheap and unstable platform while becoming more Linux/Windows-like is not forward, it's the grave. The powers that be sat with a forearm up their ass for so long that we now have open-source databases which manage to half-bake a cluster. Without "uptime measured in decades" hardware, VMS has little to offer new customers. Maybe, just maybe, someone _might_ convince them that what Windows and Linux call clustering is simply wire fraud, but that's about it.

Yes, 10 years ago I would still have railed against an x86 port, but 10 years ago there was a chance the VARs could have made sales. I'm talking about all of those small ERP, WMS, etc. packages written mostly in BASIC and riding on top of RMS indexed files. I used to get many calls from people who had been hacking things out using PC stuff, but their business had grown to the point they _needed_ applications like that. There was a time I even considered trying to buy the rights to one I used to work on, but not any more. SAP and a host of others have started offering bite-sized pieces of their own modules, on-line or installed. Quickbooks keeps getting bigger, as do other packages. I don't know how well they work trying to handle 60+ simultaneous users for the same company, but the market is gone. Yes, 10 or so years ago the owners who called me understood they would be spending $20K or more on a customized package, but they absolutely refused to spend another $20K or more on a computer. They had yet to be educated about the seriousness of their needs.
Post by Stephen Hoffman
Post by seasoned_geek
Where is it written that every business system must connect directly to
the Internet?
Outside of the military and intelligence communities, there are few
air-gapped systems around, and IPv6 means most every other server is
connected. Whether those servers communicate outside of the local
network is dependent on local requirements.
I have encountered them, but be that as it may, the fact that they aren't common doesn't mean this isn't a complete and total failure of the systems architect. Pretty soon CEOs and other executives are going to start getting sent to prison for data breaches resulting in large-scale identity theft. That will be the tipping point.
Post by Stephen Hoffman
Post by seasoned_geek
Where is it written that your core critical cluster must use TCP/IP?
I wouldn't expect TCP, though I do expect to see DTLS and UDP.
Because — as many OpenVMS sites learned — local IT requires IP, or the
servers cannot be networked.
They can; you simply need a bridge appliance. We used to use lots of them back in the day, especially when we had Token Ring, SNA, DECnet and NetWare all on the same logical network. The bridge appliance (or access nanny, if you prefer) was the only point of contact between the disparate networks. It understood two different network protocols, providing both conversion and filtering so only certain types of traffic got through.

We will be seeing these things again in the very near future. The days of "connect to everything" are coming to a close. Recent IoT attacks, combined with the large-scale breaches, are finally bringing some intelligence into upper management by way of their insurance companies. Policies are coming with more strings and inspections now that governments have started to get serious about punishment and fines.
Post by Stephen Hoffman
Post by seasoned_geek
Where is it written that external XML messages must feed directly from
the Internet into a server which is directly connected to a critical
database?
XML and JSON are how data can be packaged, and frameworks and tools are
available for those. Folks are free to use other approaches, though a
bespoke format or network protocol or database or other such is code
that's not particularly differentiated, and that must be written and
maintained and updated. Trade-offs and reasons for bespoke code
certainly do exist, but such decisions are best approached skeptically.
Or they are forced on a company via its insurance carrier and government regulations. Direct-in always equals penetration.
Post by Stephen Hoffman
But again, looking forward and not backwards. VSI is in
2017. With limited staff and funding. And with a port to x86-64
well underway.
I've yet to encounter binary code that's transportable to OpenVMS in
the fashion described, nor malware executables that are portable across
a mix of operating systems that includes OpenVMS. The executable code
may or may not run, but — absent some sort of compatibility framework —
the I/O and system calls will fail. Malware binary executables — any
meaningful binary executables, for that matter — are simply not
magically portable across disparate operating systems. Sure, maybe
you somehow get an RCE and manage to get a loop running. Beyond that?
The code is specific to the operating system context. I've stated
all this before, of course.
While I am _not_ working on this, there are tons of bare-metal tools out there. Most of them have comments in the code noting which interrupts and other processor-specific commands expose/grant access to BIOS/UEFI and other hardware components. You do not need OS-level support for the first stage of an N-level attack. I seriously doubt that the alphabet-soup-created disk drive firmware virus got loaded via OS-level system service calls. I could be wrong, but I seriously doubt it. If I remember that report correctly, many of those disks were in disk-farm-type enclosures (SAN, NAS, whatever they are called today).
u***@gmail.com
2017-08-02 12:54:20 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
You people are amazing. Purveyor did not succeed 15 years ago because
no one besides me was using openvms as a webserver.

Now to the add-ons. You don't need a bunch of add-ons when they
can be written with DCL and DIBOL code. That's what I did. Something
was needed, I wrote it. I wrote our whole on-line partners program
using DCL and DIBOL. I passed on-line orders on to the field offices
using GoldFax.

In other words, I did what I was paid to do, DEVELOP.

And I like the sound of ULTRA PURVEYOR. Very catchy hoff :)
Jan-Erik Soderholm
2017-08-02 13:16:15 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Post by Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
You people are amazing. Purveyor did not succeed 15 years...
...because there were free alternatives that worked just fine.
...ago because no one besides me was using openvms as a webserver.
That is just silly of course. No one besides you? Get real...
Stephen Hoffman
2017-08-02 17:13:58 UTC
Permalink
Post by u***@gmail.com
You people are amazing.
Glad you think so! Some of us are beyond amazing, too!
Post by u***@gmail.com
Purveyor did not succeed 15 years ago because no one besides me was
using openvms as a webserver.
The end of support for Purveyor was announced more than fifteen years ago.

Product retirement usually happens because maintaining the product
wasn't sufficiently profitable, or because it wasn't a focus for the
business.

Most folks have moved to Apache on OpenVMS, or to the use of other
operating systems as web servers.
Post by u***@gmail.com
Now to the add-ons. You don't need a bunch of add-ons when they can be
written with DCL and DIBOL code. That's what I did. Something was
needed, I wrote it. I wrote our whole on-line partners program using
DCL and DIBOL. I passed on-line orders on to the field offices using
GoldFax.
Alas, we're not in the 1990s, and rather more is expected by younger
developers and by the developers that have experience with other
platforms with different tooling and feature support. Expectations
change. The amount of source code required of developers changes, too.
Both as platform features change, as the scale of most software
projects increases, and, more subtly, as the tooling and the
abstraction reduce the amount of code needed for particular tasks.
There's
rather less time to dedicate to replicating what's already available,
given the choice of working on code more directly needed by the
organization. Spending time implementing or reimplementing TLS or
authentication in an application can be necessary on some platforms,
but it's not something most folks want to experience, and it's not easy
doing that securely.
Post by u***@gmail.com
In other words, I did what I was paid to do, DEVELOP.
Most developers would prefer to be working on application code and not
on replicating frameworks and services that are already available, and
certainly not spending time on badly reimplementing what's already
available, of course. Most managers of developers would certainly
prefer that, too.
Post by u***@gmail.com
And I like the sound of ULTRA PURVEYOR. Very catchy hoff :)
Purveyor isn't the starting point. The starting point is integrating
and more rapidly porting over updates for Apache, or porting and
integrating nginx, or WASD. In the case of WASD, maintaining a unique
web server isn't a
competitive advantage these days. It means folks have to learn
multiple servers to work with the platform, if the platform isn't using
one of the common web servers. But hauling forward a web server from
twenty years ago, as a starting point? Particularly for a server that
needs to implement a number of newer standards, and that must be secure
against modern attacks? Not a good place.

There's old code around, and much of it can still work. But against
newer environments and newer requirements — and against newer attacks
and newer security requirements, as is the case with a web server — old
code and old designs often don't fare as well.

As often happens in computing, norms and tools and platforms and
expectations change. It takes time and effort to remain aware of what
other tools and platforms are available, and what sorts of changes to
the application environment — changes in user expectations, changes in
vulnerabilities and attacks, changes in business needs — have arisen.
The same goes for managers of developers, who can allow their
developers to drift out of competitiveness and then decide to replace
them, or can expend the time and effort and budget to train and to
maintain the competitiveness of those developers. Developers are
ultimately responsible for their own career choices, of course.
There's just not a big market in DCL and DIBOL and Purveyor right now,
and VSI is not going to rekindle that. DCL and DIBOL are comparatively
expensive languages to develop in, and all but a handful of folks will
have to learn to install, manage and troubleshoot Purveyor, too.
--
Pure Personal Opinion | HoffmanLabs LLC
Norman F Raphael
2017-08-03 19:24:45 UTC
Permalink
Sent: Wed, Aug 2, 2017 9:01 am
Subject: Re: [New Info-vax] SAMBA and Ransomeware
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
You people are amazing. Purveyor did not succeed 15 years ago because
no one besides me was using openvms as a webserver.
Now to the add-ons. You don't need a bunch of add-ons when they
can be written with DCL and DIBOL code. That's what I did. Something
was needed, I wrote it. I wrote our whole on-line partners program
using DCL and DIBOL. I passed on-line orders on to the field offices
using GoldFax.
Wow! A reference to GoldFax, a remarkably stable, useful product I used for
years, until some offsite manager thought he had a better solution (which he
did not, of course). Sic transit gloria mundi.
In other words, I did what I was paid to do, DEVELOP.
And I like the sound of ULTRA PURVEYOR. Very catchy hoff :)
Norman F. Raphael
Please reply to: ***@ieee.org
"Everything worthwhile eventually
degenerates into real work." -Murphy
Arne Vajhøj
2017-08-08 02:10:59 UTC
Permalink
Post by u***@gmail.com
Now to the add-ons. You don't need a bunch of add-ons when they
can be written with DCL and DIBOL code. That's what I did. Something
was needed, I wrote it. I wrote our whole on-line partners program
using DCL and DIBOL. I passed on-line orders on to the field offices
using GoldFax.
In other words, I did what I was paid to do, DEVELOP.
Writing CGI scripts in DCL or DIBOL is neither as productive
nor as performant as what is expected today.
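
For the record, a minimal CGI response in DCL looks something like
this (a sketch; procedure name invented, and the surrounding setup
depends on which VMS web server runs it):

  $! HELLO.COM - minimal CGI response in DCL
  $ write sys$output "Content-Type: text/html"
  $ write sys$output ""
  $ write sys$output "<html><body>Hello at ''f$time()'</body></html>"

Workable, but sessions, TLS, templating and database access are all
left to you.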

Arne
Stephen Hoffman
2017-08-02 17:45:10 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
Current nmap can be used to detect SMB servers, their configurations,
and related security settings:

https://nmap.org/download.html

I'll expect to find lots of these around, as various printers support SMB.
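
For instance (script names as shipped with current nmap; the target is
a placeholder):

  nmap -p445 --script smb-protocols target.example.com
  nmap -p445 --script smb-security-mode,smb-os-discovery target.example.com

The first lists the SMB dialects a server will negotiate, which makes
SMB 1-only boxes, printers included, easy to spot.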

...and here's an SMB DoS, and the older web server DoS it's named after:

https://gist.github.com/marcan/6a2d14b0e3eaa5de1795a763fb58641e

https://en.wikipedia.org/wiki/Slowloris_(computer_security)
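
Worth noting: current Apache counters Slowloris-style slow-header
attacks with mod_reqtimeout; the stock example from the Apache docs
looks like this, with the timeouts and minimum rates being tunable:

  RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

That's exactly the sort of accumulated hardening a dusted-off
twenty-year-old server wouldn't have.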

Random thought: If somebody here is somehow granted one of their
wishes, hopefully somebody else remembers to test Slowloris and all the
other ancient web server attacks against Purveyor.
--
Pure Personal Opinion | HoffmanLabs LLC
u***@gmail.com
2017-08-04 07:45:14 UTC
Permalink
Post by Neil Rieck
I posted my worry about SAMBA a few weeks back but just noticed this blurb today.
https://www.theregister.co.uk/2017/07/11/hpe_stops_nonstop_server_samba_bugs/
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/demo_vms_html/openvms_demo_index.html
Don't bother talking to anyone here about stable products. GoldFax was
not free. They don't believe in "you get what you pay for."

They want their code free from some MIT dropout like Bill Gates or some
self-proclaimed programmer who couldn't get a job and writes junk out
of their basement.

They have bought into the socialist mentality: we need free. Give us
terrific free code for the good of the community.

Well, I don't work for free. Maybe for charity, but not for some
penguin riding down a slippery slope leading to disaster.

I get paid for my talents. Much of my code is still in use because
it is good and it works.

Programming takes talent. It's an art. There are good programmers
and products and then there are free ones.

Many fools have chosen to ride on the back of the penguin on a slippery
slope.

Hey, at least Bill Gates charges for his junk. :)
Arne Vajhøj
2017-08-08 02:16:39 UTC
Permalink
Post by u***@gmail.com
Don't bother talking to anyone here about stable products. GoldFax was
not free. They don't believe in "you get what you pay for."
They want their code free from some MIT dropout like Bill Gates or some
self-proclaimed programmer who couldn't get a job and writes junk out
of their basement.
Maybe they want their software from somebody that knows that
MIT and Harvard are two different universities?

:-)
Post by u***@gmail.com
They have bought into the socialist mentality: we need free. Give us
terrific free code for the good of the community.
Well, I don't work for free. Maybe for charity, but not for some
penguin riding down a slippery slope leading to disaster.
I get paid for my talents. Much of my code is still in use because
it is good and it works.
Programming takes talent. It's an art. There are good programmers
and products and then there are free ones.
Many fools have chosen to ride on the back of the penguin on a slippery
slope.
Hey, at least Bill Gates charges for his junk. :)
Per your logic, Windows should be of much higher quality than
Linux.

I think a few people will disagree.

But fundamentally your impression of open source is wrong.

Large portions of open source are being done by big
IT companies using paid developers.

For Linux it is now >90% corporate and <10% volunteers.

Arne
Neil Rieck
2017-09-05 15:26:23 UTC
Permalink
I am posting this Linux blurb here only because it mentions CIFS (Samba) and SMB1

Neil Rieck
Waterloo, Ontario, Canada.
http://neilrieck.net
