Discussion:
Lock rates
Rod Regier
2020-11-09 19:17:23 UTC
Just confirming that on a non-cluster node w/proper AUTOGEN tuning that
I don't need to worry about large lock rates that appear in concert with large direct I/O rates.
abrsvc
2020-11-09 19:28:27 UTC
Post by Rod Regier
Just confirming that on a non-cluster node w/proper AUTOGEN tuning that
I don't need to worry about large lock rates that appear in concert with large direct I/O rates.
Lock rates can be high if the application does not use locks efficiently, or if many sources are updating the same records. Try to understand where the locking is coming from and why, rather than looking just at the rates.
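To make that concrete: every lock an application takes is a round trip through the lock manager, and those round trips are what the rate counters add up. A rough C sketch of a single one (untested; DEMO_LOCK is a made-up resource name):

  #include <descrip.h>
  #include <lckdef.h>
  #include <ssdef.h>
  #include <starlet.h>
  #include <stdio.h>

  /* Lock status block: condition value, reserved, lock ID, value block */
  struct lksb { unsigned short cond; unsigned short reserved;
                unsigned int lkid; char valblk[16]; };

  int main(void)
  {
      struct lksb lksb;
      $DESCRIPTOR(resnam, "DEMO_LOCK");   /* made-up resource name */
      int status;

      /* One $ENQW/$DEQ pair is one trip through the lock manager;
         each such pair shows up in the lock rates. */
      status = sys$enqw(0, LCK$K_EXMODE, &lksb, 0, &resnam,
                        0, 0, 0, 0, 0, 0);
      if (!(status & 1) || !(lksb.cond & 1)) return status;

      printf("got EX lock, id %08x\n", lksb.lkid);

      status = sys$deq(lksb.lkid, 0, 0, 0);
      return status;
  }

Multiply a pattern like that across many processes hammering the same records and the rates climb fast.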
Stephen Hoffman
2020-11-09 19:47:08 UTC
Post by Rod Regier
Just confirming that on a non-cluster node w/proper AUTOGEN tuning that
I don't need to worry about large lock rates that appear in concert
with large direct I/O rates.
Does your performance data show that your app is meeting your current
performance requirements, and are the longer-term app performance
trends within the capacity of your current hardware and any existing
hardware upgrade plans?

Yes? No issue.

No? Figure out what the performance bottleneck might be, whether
locking or caching or I/O performance or app logic or otherwise. T4 can
be helpful there, as is app profiling.
--
Pure Personal Opinion | HoffmanLabs LLC
Hein RMS van den Heuvel
2020-11-09 22:46:04 UTC
Post by Stephen Hoffman
Post by Rod Regier
Just confirming that on a non-cluster node w/proper AUTOGEN tuning that
I don't need to worry about large lock rates that appear in concert
with large direct I/O rates.
Does your performance data show that your app is meeting your current
performance requirements
Yup, as Hoff asks and answers.

Now if there is a suggestion of a problem then we'll need much more info.
Alpha or Itanium? 1 CPU? 2? 16?
If there are more than 4 CPUs and there are lock management suspicions, did you try the dedicated lock manager?

What is the order of magnitude of what you perceive as large?
10,000/sec? Walk in the park!
50,000/sec? It's getting busy.
100,000/sec? Starting to be worrisome.
150,000/sec? Boy, you do have a nice fast CPU going, but it is maxing out on the lock management.
Lock management overload would likely show as high MPsync time, in which case a dedicated lock manager CPU may help to use that wasted (mpsync) time more effectively.

fwiw - RMS will typically take at least one lock, often more, per direct I/O on a shared file.
The core one is the (VBN) bucket lock, there to protect others from reading updated data while it is in transit.
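Roughly what that shared access looks like from C (untested sketch; LOGFILE.IDX is a placeholder name) - every $GET against a file opened like this costs at least one lock operation under the covers:

  #include <rms.h>
  #include <stdio.h>

  int main(void)
  {
      struct FAB fab = cc$rms_fab;
      struct RAB rab = cc$rms_rab;
      char buf[512];

      fab.fab$l_fna = "LOGFILE.IDX";      /* placeholder file name */
      fab.fab$b_fns = 11;
      fab.fab$b_fac = FAB$M_GET;
      /* Shared access is what triggers the VBN/bucket locking. */
      fab.fab$b_shr = FAB$M_SHRGET | FAB$M_SHRPUT;

      if (!(sys$open(&fab) & 1)) return 1;
      rab.rab$l_fab = &fab;
      rab.rab$l_ubf = buf;
      rab.rab$w_usz = sizeof buf;
      if (!(sys$connect(&rab) & 1)) return 1;

      /* Each $GET here implies lock traffic to keep readers coherent. */
      while (sys$get(&rab) & 1)
          printf("%.*s\n", rab.rab$w_rsz, buf);

      sys$close(&fab);
      return 0;
  }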

Cheers,
Hein
Stephen Hoffman
2020-11-10 17:17:17 UTC
Post by Hein RMS van den Heuvel
fwiw - RMS will typically take at least one lock, often more, per direct I/O on a shared file.
The core one is the (VBN) bucket lock, there to protect others from reading updated data while it is in transit.
I've become quite fond of mapping the whole file section into memory,
if speed is a requirement.

Depending on local requirements, flush the section or flush the changes
to backing store, or implement journaling.
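A minimal sketch of that pattern, using the CRTL's mmap/msync here for brevity (the native services are $CRMPSC and $UPDSEC; DATA.DAT is a placeholder name):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("DATA.DAT", O_RDWR);   /* placeholder file name */
      if (fd < 0) return 1;

      struct stat st;
      if (fstat(fd, &st) < 0) return 1;

      /* Map the whole file; reads and writes become memory references,
         with no per-record lock or I/O round trips. */
      char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) return 1;

      p[0] ^= 0;   /* touch the data in place */

      /* Flush the changes to backing store when it matters. */
      msync(p, st.st_size, MS_SYNC);
      munmap(p, st.st_size);
      close(fd);
      return 0;
  }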

Using a database and potentially an in-memory database is the usual
approach for those that don't want to write this code.

SQLite is one that's available on OpenVMS—that'll still use locks for
coordination, so prototyping the performance would be wise.
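The prototype can be as small as this untested sketch (prototype.db and the schema are placeholders); time a representative batch of your real operations inside it:

  #include <sqlite3.h>
  #include <stdio.h>

  int main(void)
  {
      sqlite3 *db;
      char *errmsg = NULL;

      if (sqlite3_open("prototype.db", &db) != SQLITE_OK) return 1;

      /* Substitute the app's actual workload here and measure it. */
      sqlite3_exec(db,
          "CREATE TABLE IF NOT EXISTS t(k INTEGER PRIMARY KEY, v TEXT);",
          NULL, NULL, &errmsg);
      sqlite3_exec(db,
          "BEGIN; INSERT INTO t(v) VALUES('x'); COMMIT;",
          NULL, NULL, &errmsg);
      if (errmsg) { fprintf(stderr, "%s\n", errmsg); sqlite3_free(errmsg); }

      sqlite3_close(db);
      return 0;
  }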

The other approach is sharding: splitting up the I/O activity, and usually sending the user to the data rather than the data to the user.

Sharding the data gets yet more interesting with clustering, as the
coordination overhead is inherently larger in clustered configurations.
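The usual key-to-shard mapping is a stable hash, something like this sketch (untested; NSHARDS and the keys are made up):

  #include <stdint.h>
  #include <stdio.h>

  #define NSHARDS 8   /* made-up shard count */

  /* FNV-1a hash: a stable mapping from a key to a shard. */
  static unsigned shard_for(const char *key)
  {
      uint64_t h = 14695981039346656037ULL;
      while (*key) { h ^= (unsigned char)*key++; h *= 1099511628211ULL; }
      return (unsigned)(h % NSHARDS);
  }

  int main(void)
  {
      const char *users[] = { "alice", "bob", "carol" };
      for (int i = 0; i < 3; i++)
          printf("%s -> shard %u\n", users[i], shard_for(users[i]));
      return 0;
  }

The same key always lands on the same shard, so each shard's locking stays local to it.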

But as mentioned earlier, collect and track the performance, as that'll
usually reduce the numbers of production-relevant performance surprises.
--
Pure Personal Opinion | HoffmanLabs LLC