Discussion:
How do I assign a disk to a CPU?
Edgar Ulloa
2020-10-16 02:38:08 UTC
Hello friends

When I run $ SHOW DEVICE $1$DGA100: /FULL

I can see that the current preferred CPU id is 0.

If I want to dedicate it to CPU id 3, do you know of any command or procedure?

Cheers
Steven Schweda
2020-10-16 03:01:23 UTC
If SHOW DEVICE shows it, then my first guess would be
SET DEVICE.

HELP SET DEVICE /PREFERRED_CPUS

"dedicate"? "dedicated" and "preferred" are spelled
differently for a reason.
Edgar Ulloa
2020-10-16 03:21:56 UTC
Post by Steven Schweda
If SHOW DEVICE shows it, then my first guess would be
SET DEVICE.
HELP SET DEVICE /PREFERRED_CPUS
"dedicate"? "dedicated" and "preferred" are spelled
differently for a reason.
The command
$ SET DEVICE/PREFERRED_CPUS=(0,1,2) device
does not work for disks; it is only accepted for some devices.

:(
Volker Halle
2020-10-16 04:52:05 UTC
Edgar,

you need to set the preferred_cpu on the fast path port, not the disk itself. In your case:

$ SET DEVICE/PREFERRED_CPUS=(0,1,2) PGA0:

Check with $ SHOW FASTPATH
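
For example, a minimal sketch (assumes PGA0: is the Fibre Channel port serving $1$DGA100:; substitute the port name shown by SHOW FASTPATH):

$ SHOW FASTPATH                        ! show Fast Path-capable ports and their CPU assignments
$ SET DEVICE/PREFERRED_CPUS=3 PGA0:    ! move the port's Fast Path handling to CPU id 3
$ SHOW DEVICE/FULL PGA0:               ! confirm the new current preferred CPU id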

Volker.
Stephen Hoffman
2020-10-16 16:03:55 UTC
...
Some more reading on the topic: see the horribly-named Fast Path
discussion—and confusingly not the horribly-named Fast I/O
discussion—available in Chapter 10 here:
http://h30266.www3.hpe.com/odl/axpos/opsys/vmsos84/BA554_90018/BA554_90018.pdf#page317


Unless the primary processor is approaching saturation, switching the
path is usually of negligible benefit.

And if still on HDDs while pondering permuting preferred path, promote
pondering SSDs too. FC SAN Storage Controllers are fast, but HDDs are
HDDs and slow and there's only so much cache. Though SSDs can or will
push the primary processor closer to saturation, given the performance
boost.
--
Pure Personal Opinion | HoffmanLabs LLC
Edgar Ulloa
2020-10-17 03:47:41 UTC
Post by Stephen Hoffman
Post by Volker Halle
you need to set the preferred_cpu on the fast path port, not the disk
...
Some more reading on the topic: see the horribly-named Fast Path
discussion—and confusingly not the horribly-named Fast I/O
http://h30266.www3.hpe.com/odl/axpos/opsys/vmsos84/BA554_90018/BA554_90018.pdf#page317
Unless the primary processor is approaching saturation, switching the
path is usually of negligible benefit.
And if still on HDDs while pondering permuting preferred path, promote
pondering SSDs too. FC SAN Storage Controllers are fast, but HDDs are
HDDs and slow and there's only so much cache. Though SSDs can or will
push the primary processor closer to saturation, given the performance
boost.
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen

Your participation is very timely; thank you very much for your attention.

I really thought that using another CPU would give better performance for the disk that has the most I/O.

Thanks again
Richard Brodie
2020-10-17 09:26:33 UTC
Post by Edgar Ulloa
I really thought that using another CPU would give better performance for the disk that has the most I/O.
It might. It's just that levelling out the interrupt load between CPUs doesn't really help until the primary is maxed out servicing I/O.
Stephen Hoffman
2020-10-17 15:42:09 UTC
Post by Richard Brodie
Post by Edgar Ulloa
I really thought that using another CPU would give better performance
for the disk that has the most I/O.
It might. It's just that levelling out the interrupt load between CPUs
doesn't really help until the primary is maxed out servicing I/O.
A fast HDD does ~150 to ~200 IOPS.

A recent SSD does ~100,000, and some models variously more.

Sometimes much more.

There can be activity clogging the primary.

But that's probably not (strictly) HDD interrupt activity.
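
As rough, illustrative arithmetic: ten HDDs at ~200 IOPS apiece complete about 2,000 I/Os per second in total, while a single SSD at ~100,000 IOPS can generate roughly fifty times that completion (and interrupt) rate on whichever CPU services the port.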
--
Pure Personal Opinion | HoffmanLabs LLC
Volker Halle
2020-10-18 14:54:57 UTC
Edgar,

you could use MONITOR MODE/CPU=0 to check the CPU utilization of your primary CPU. If you see very high interrupt and kernel mode state during heavy disk-IO, it might help to change the preferred CPU for the fibre channel adapter/port.
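
A minimal sketch (the 5-second interval is illustrative):

$ MONITOR MODES/CPU=0/INTERVAL=5    ! time spent in interrupt, kernel, and other modes on CPU 0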

Volker.
Stephen Hoffman
2020-10-18 16:37:32 UTC
Post by Volker Halle
you could use MONITOR MODE/CPU=0 to check the CPU utilization of your
primary CPU. If you see very high interrupt and kernel mode state
during heavy disk-IO, it might help to change the preferred CPU for the
fibre channel adapter/port.
And in many of these sorts of cases, MONITOR
DISK/ITEM=QUEUE tends to show queue-length activity of 0.5 or higher.
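
A minimal sketch (the interval is illustrative; match it to the app's I/O bursts, as noted below):

$ MONITOR DISK/ITEM=QUEUE/INTERVAL=5    ! average I/O request queue length per disk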

Over the measured interval, a queue length of 0.5 means half of all I/O
requests are waiting for a previous I/O to complete.

Queue-length values of 0.5 and larger usually mean storage is saturated
and apps are stalling, or, less commonly, that the storage is failing.

Some OpenVMS app I/O can be continuous, but other app activity can be
quite bursty.

Best to keep the queue length measurement intervals closer to the app
run-time duration and/or to the times of heaviest I/O activity.

Amdahl's Law applies here.
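
(As a reminder of the arithmetic: overall speedup = 1 / ((1 - p) + p/s), where p is the fraction of elapsed time spent in the part being sped up and s is the speedup of that part; if storage waits are only a small fraction of the run time, speeding up storage can only help that fraction.)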
--
Pure Personal Opinion | HoffmanLabs LLC
seasoned_geek
2020-10-18 10:06:57 UTC
Post by Stephen Hoffman
And if still on HDDs while pondering permuting preferred path, promote
pondering SSDs too. FC SAN Storage Controllers are fast, but HDDs are
HDDs and slow and there's only so much cache.
In the PC world that hasn't exactly been my experience. SSDs are very fast with READ operations but duth sucketh with WRITE operations. They mask this with cache. When you are doing massive write operations, you need a spinning disk.

Actually got forced into conducting that "experiment" on two different projects for the same client. Needed to build Qt from source looking for a magic build combination that would allow a single Debian to install and run on every YABU version they wanted to support. (Not as simple as that sounds.)

https://wiki.qt.io/Building_Qt_5_from_Git#Getting_the_source_code

In the same computer, using a spinning drive no better than a 1TB Western Digital Blue and several different Samsung 840 series SSDs, the build-time difference was about an hour. Performing all of the builds on a spinning disk shaved about a week off the project.

When faced with having to write thousands of tiny obj files, the SSD just tossed up its hands. The on-disk cache backed up. The Linux OS disk cache backed up. Compilation halted. Basically it kept going out for cigarettes while it waited on the SSD.

Part of me wonders what the time difference would have been had I used the 4TB WD Black drive I now have.

I'm not dissing the 840 line. I like them and have many of them. They are a good durable general purpose SSD. Pretty much every SSD I've ever encountered duth sucketh at massive builds. SSDs are fantastic with the I part of I/O. They are just near worthless when it comes to the O part.

Maybe newer designs have "fixed" this problem, but I doubt it.

Before there is any confusion, I started with one SSD. Lost patience with it. Used Terabyte's Image for Linux to make a bare metal backup. Swapped in an 840, laid down the image and went back to work. A few days later this duth sucketh too. Made another image. Swapped in a Blue drive. Laid image back down. Noticed a build time improvement of about an hour.

No. I didn't conduct this experiment in any scientific manner. I was just trying to make it to the end of a tunnel before the train came in.

As a result of my experience, though, I never recommend an SSD for any high-write system. For database tables (or RMS indexed files) that will see massive write operations, I always recommend placement on a good spinning disk because of the faster writes. (For high write, think of an order-intake system during Black Friday sales or the H&R Block e-file central collection system during tax season.)

Some day I will experiment with a hybrid drive.
Stephen Hoffman
2020-10-18 15:22:45 UTC
Post by seasoned_geek
Post by Stephen Hoffman
And if still on HDDs while pondering permuting preferred path, promote
pondering SSDs too. FC SAN Storage Controllers are fast, but HDDs are
HDDs and slow and there's only so much cache.
In the PC world that hasn't exactly been my experience. SSDs are very
fast with READ operations but duth sucketh with WRITE operations. They
mask this with cache. When you are doing massive write operations, you
need a spinning disk.
...and several different Samsung 840 series SSDs the build time
difference was about an hour...
If this was ~five years ago or so, and/or with under-revision storage...

Samsung 840 Pro purportedly had a firmware TRIM bug in then-common
firmware versions. Asynchronous TRIM would delete random data. Linux
worked around that corruption by disabling asynchronous TRIM. Which'd
throttle write performance, causing the observed write behavior.

Samsung released firmware fixes for the 840 Pro, 850 Pro, and other
affected devices some years ago.

Among various discussions of this firmware bug from back then:
http://forum.notebookreview.com/threads/major-trim-bug-found-in-samsung-ssds-limited-to-linux.777427/
Post by seasoned_geek
Some day I will experiment with a hybrid drive.
Unimpressed with the Apple Fusion hybrid drives; HDDs with a
variable-size flash cache. The performance is better than HDD on
average, though apps can end up running at HDD speeds if (or when?) the
cache mis-predicts.

Errata...

Smaller caches work best when they are cognizant of system and app
activity (XFC, etc.), where simpler caching further down the I/O
hardware stack can't be and instead trades off cache sizing (hopefully)
for performance increases. All designs have trade-offs and compromises.

With contemporaneous server, app, and storage designs, and with
contemporaneous pricing, HDDs tend to be best used for archival
storage: for when you need big pools of storage and aren't in a hurry
to access it; fast nearline, or backup, or archival storage.

DVD, HDD, and other storage firmware has also had issues. There were
well-known-vendor-branded optical drives that sometimes mis-recorded
data, as verified with OpenVMS and a patched DQDRIVER. And OpenVMS
itself has shipped with firmware maintenance tools for updating storage
firmware.

There's the whole discussion of how VSI OpenVMS will be auditing
firmware and how customers will be loading new vendor firmware with the
OpenVMS x86-64 port, too. Firmware is ubiquitous, residing within
processors, management processors, storage, NICs, and ~everything else.
Server maintenance starts out problematic, and the mess increases as
the number of servers in use increases.

The OpenVMS servers configured with SSD storage are stonking fast, and
the configurations have been at least as stable and reliable as HDDs.
And did I mention stonking fast I/O?
--
Pure Personal Opinion | HoffmanLabs LLC