Discussion:
How can I read a locked VMS file?
S***@cfl.rr.com
2006-06-15 15:43:14 UTC
I have a log file that is locked. The process that has the file locked
is production critical; I cannot terminate it. However, I really need
to view the contents of this log file. Is there a way to override the
VMS file lock in order to view the contents of this file?

Thanks!
S***@cfl.rr.com
2006-06-15 15:47:46 UTC
Bah. Never mind. I am not getting a file protection violation. The
problem is that there is no committed data in the file (I misunderstood
the initial problem). The file size is 0/432 blocks. When I type the
file, it shows nothing. I guess there is a concept of uncommitted data
here. Any ideas on how I may look at this data?

Thanks!!
Colin Butcher
2006-06-15 16:28:13 UTC
BACKUP <locked_file>/IGNORE=INTERLOCK <temporary_file>
Then look at the temporary file and with a bit of luck you might see
something of use.

If you really mean a LOG file (eg: batch output from SYS$OUTPUT:) then you
might find making carefully considered changes to the output rate with SET
OUTPUT_RATE=<interval> useful.
--
Hope this helps, Colin.
colin DOT butcher AT xdelta DOT co DOT uk
It's not mine, but I like this definition: Legacy = stuff that works.
John Santos
2006-06-16 02:44:13 UTC
Post by Colin Butcher
BACKUP <locked_file>/IGNORE=INTERLOCK <temporary_file>
Then look at the temporary file and with a bit of luck you might see
something of use.
If you really mean a LOG file (eg: batch output from SYS$OUTPUT:) then you
might find making carefully considered changes to the output rate with SET
OUTPUT_RATE=<interval> useful.
You will probably still end up with the EOF pointing to the beginning of
the file (directory shows 0/xxx for the size), but BACKUP should copy
whatever has actually been written to the disk and you will be able to
see it either by changing the EOD pointer ($ set file/end or $ set
file/attributes=... on the temporary file), or with $ dump/allocated.

You could also examine the "allocated and written" blocks of the
original log file with $ dump/allocated, but you would probably
get a "file locked" error if you try this while the application is
still running.
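
Putting those suggestions together, a sketch of the whole sequence (file
names are illustrative; SET FILE/END_OF_FILE moves the EOF mark out to the
end of the allocation, so TYPE and DUMP can then see every allocated block):

$ BACKUP/IGNORE=INTERLOCK LOCKED.LOG TEMP.LOG
$ SET FILE/END_OF_FILE TEMP.LOG
$ TYPE TEMP.LOG
$ DUMP/ALLOCATED TEMP.LOG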
--
John Santos
Evans Griffiths & Hart, Inc.
781-861-0670 ext 539
JF Mezei
2006-06-16 03:05:20 UTC
Post by John Santos
You will probably still end up with the EOF pointing to the beginning of
the file (directory shows 0/xxx for the size), but BACKUP should copy
whatever has actually been written to the disk
Can anyone confirm this ?

If I have a file that is 0/432 blocks, and I use
backup/ignore=interlock, how does BACKUP know how many blocks have been
written ? If it forces a flush to disk, wouldn't that cause the
blocks/used count to increase after the backup completes ?

If i start a new file and start writing data with the C runtime with no
special options, (eg: no ctx=rec ) and no flushes/fsync, at what point
does the blocks/used displayed by dir get updated ?

While the program is running, I assume that the file system has a local
copy in memory of the blocks used that increases as I do my fprintfs,
but is that value accessible by other processes ? Does this get written
to disk from time to time, or only once the file is closed ?

And more importantly, if there is a power failure after I have written
100 blocks. When the system reboots, how does it know that 100 blocks
have been written if, at the time of the failure, dir/size=all still
showed 0 blocks used ?
(does the set volume/rebuild go through the file to find an end of file
marker and then update the blocks used ?)
John Santos
2006-06-16 04:12:01 UTC
Post by JF Mezei
Post by John Santos
You will probably still end up with the EOF pointing to the beginning of
the file (directory shows 0/xxx for the size), but BACKUP should copy
whatever has actually been written to the disk
Can anyone confirm this ?
I can't confirm this; this is based on behavior of backup with files
where the EOF pointer can't be trusted, i.e. indexed files.
Post by JF Mezei
If I have a file that is 0/432 blocks, and I use
backup/ignore=interlock, how does BACKUP know how many blocks have been
written ?
It doesn't know or care. It can copy the entire allocated space, which
is what it does with indexed sequential files. (Unless you say
/truncate, so don't do that!)

Post by JF Mezei
If it forces a flush to disk, wouldn't that cause the
blocks/used count to increase after the backup completes ?
A flush of what to what disk? Of the original data file to the input
disk? No. If the program hasn't written the data to the disk, it isn't
there for Backup to see.

Of course Backup flushes all its output to the output disk.
Post by JF Mezei
If i start a new file and start writing data with the C runtime with no
special options, (eg: no ctx=rec ) and no flushes/fsync, at what point
does the blocks/used displayed by dir get updated ?
This is almost certainly irrelevant to the original poster. He had
a log file (possibly a process created with run/detach/output=...)
that wasn't opened with "allow read" or "SHR=GET" or whatever the
language equivalent is. We don't know what language was used to write
the program; no one has said "C".

But writing to the disk and updating the file attributes are
entirely separate operations. There is no guarantee that anything
useful has been written to the log yet, but it is certainly possible.
The log, IIRC, was several hundred blocks long, so quite likely the
program has been writing it to disk (and extending it as it goes),
but hasn't updated the attributes and won't until it closes the
file. The default behavior of RMS with regard to flushing data
and writing attributes depends on the sharing options used. Since
apparently no sharing was defined for the file, it saves the overhead
of re-writing the attributes until it has to, i.e. when the file is
closed. However, it probably writes data to the disk whenever its
internal buffer gets filled, probably determined by the RMS multi-block
count, unless it has been overridden by the program.
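
For what it's worth, the process-default multi-block count can be inspected
and raised at DCL level before starting the program (the value 32 here is
just an example):

$ SHOW RMS_DEFAULT
$ SET RMS_DEFAULT/BLOCK_COUNT=32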
Post by JF Mezei
While the program is running, I assume that the file system has a local
copy in memory of the blocks used that increases as I do my fprintfs,
but is that value accessible by other processes ? Does this get written
to dick from time to time, or only opnce the file is closed ?
And more importantly, if there is a power failure after I have written
100 blocks. When the system reboots, how does it know that 100 blocks
have been written if, at the time of the failure, dir/zize=all still
showed 0 blocks used ?
Well, there is also highwater marking (if enabled), but it doesn't know
that valid data has been written to the blocks between the EOF and the
highwater mark, just that all the blocks have either been written by
the program or by the highwater-marking zeroing routine, so they are
safe from scavenging.
Post by JF Mezei
(does the set volume/rebuild go through the file to find an end of file
marker and then update the blocks used ?)
No. SET VOLUME/REBUILD has nothing to do with this; it fixes extent
and file header allocation caches. (These are free extents and headers
pre-allocated to a system so it doesn't have to go through all the
hand-shaking every time a file is created or extended. That can be
a lot of overhead, especially on a cluster, so instead of allocating
them one-at-a-time, each system grabs a bunch and uses them until it
runs out, then grabs a bunch more. I think it uses a door-bell lock
to get other nodes to release some of their caches if an allocation
fails, so this is generally invisible to the cluster but a lot more
efficient. Locks are really cool, but work best when you don't use
them :-) :-) )

The logical place for something like that would be analyze/disk/repair,
but there are too many cases where this would be the disastrously
wrong thing to do. Anyway, there are no embedded EOFs in most data
files; EOF is determined by the attributes (or for ISAM files, by
following the bucket links until you get to the end.) I suppose
there might be a ctrl/Z or a <nul> at the end of a STREAM_* file,
and there are cases of a -1 length byte in SEQUENTIAL VARIABLE
files, but these aren't necessarily present, and if you find one,
it might be noise.
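
The EOF attributes described above (EBK/FFB, kept in the file header) can be
inspected directly; a sketch, with an illustrative file name, that dumps the
header without any data blocks:

$ DUMP/HEADER/BLOCK=COUNT=0 TEST.DAT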
--
John Santos
Evans Griffiths & Hart, Inc.
781-861-0670 ext 539
Rob Brown
2006-06-16 15:40:06 UTC
Post by JF Mezei
Post by John Santos
You will probably still end up with the EOF pointing to the
beginning of the file (directory shows 0/xxx for the size), but
BACKUP should copy whatever has actually been written to the disk
Can anyone confirm this ?
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.

This is what I did:

$ open/write/share f openlog.txt
$ write f "the quick brown fox jumps over the lazy dog"

I repeated the write statement until a $DIRECTORY command on another
terminal showed that an extent had been added to the file. Then I did
the backup command, with the following results:

$ backup/ignore=interlock openlog.txt;0 copy.txt/log/new
%BACKUP-W-ACCONFLICT, DISK$8:[BROWN.TEMP]OPENLOG.TXT;2 is open for
write by another user
%BACKUP-S-CREATED, created DISK$8:[BROWN.TEMP]COPY.TXT;9
$ dir/size=all/date openlog.txt,copy

Directory DISK$8:[BROWN.TEMP]

OPENLOG.TXT;2 0/12 16-JUN-2006 09:30:50.51
COPY.TXT;9 0/12 16-JUN-2006 09:30:50.51
...
$ dump/allocated/block=count=1 copy.txt

Dump of file DISK$8:[BROWN.TEMP]COPY.TXT;9 on 16-JUN-2006 09:32:35.25
File ID (28737,18,0) End of file block 0 / Allocated 12

Virtual block number 1 (00000001), 512 (0200) bytes

0000A400 03000100 0000A500 0100054F O....¥......... 000000
...

As you can see, it does not say "the quick brown fox" anywhere here.
Doing a SET FILE/END COPY.TXT does not change anything. The data is
not there.

:-(
--
Rob Brown b r o w n a t g m c l d o t c o m
G. Michaels Consulting Ltd. (780)438-9343 (voice)
Edmonton (780)437-3367 (FAX)
http://gmcl.com/
Dave Froble
2006-06-16 16:51:35 UTC
Post by Rob Brown
Post by JF Mezei
Post by John Santos
You will probably still end up with the EOF pointing to the beginning
of the file (directory shows 0/xxx for the size), but BACKUP should
copy whatever has actually been written to the disk
Can anyone confirm this ?
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.
$ open/write/share f openlog.txt
$ write f "the quick brown fox jumps over the lazy dog"
I repeated the write statement until a $DIRECTORY command on another
terminal showed that an extent had been added to the file. Then I did
$ backup/ignore=interlock openlog.txt;0 copy.txt/log/new
%BACKUP-W-ACCONFLICT, DISK$8:[BROWN.TEMP]OPENLOG.TXT;2 is open for write
by another user
%BACKUP-S-CREATED, created DISK$8:[BROWN.TEMP]COPY.TXT;9
$ dir/size=all/date openlog.txt,copy
Directory DISK$8:[BROWN.TEMP]
OPENLOG.TXT;2 0/12 16-JUN-2006 09:30:50.51
COPY.TXT;9 0/12 16-JUN-2006 09:30:50.51
...
$ dump/allocated/block=count=1 copy.txt
Dump of file DISK$8:[BROWN.TEMP]COPY.TXT;9 on 16-JUN-2006 09:32:35.25
File ID (28737,18,0) End of file block 0 / Allocated 12
Virtual block number 1 (00000001), 512 (0200) bytes
0000A400 03000100 0000A500 0100054F O....¥......... 000000
...
As you can see, it does not say "the quick brown fox" anywhere here.
Doing a SET FILE/END COPY.TXT does not change anything. The data is not
there.
:-(
An interesting experiment, and not the results I would have expected.

Are you running some type of caching product that does not do a write
through to disk?

When a file extent is added, I would expect the cause would be a need to
write (more) data to disk than there is currently allocated filespace.

The BACKUP >>should<< pick up all allocated disk storage.

Regardless, your result overrides expectations.

The 'proper' place to handle such is in the application writing the log
file.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Rob Brown
2006-06-16 17:54:23 UTC
Post by Dave Froble
Post by Rob Brown
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.
...
:-(
An interesting experiment, and not the results I would have
expected.
Are you running some type of caching product that does not do a
write through to disk?
I am not.
Post by Dave Froble
When a file extent is added, I would expect the cause would be a
need to write (more) data to disk than there is currently allocated
filespace.
After my last post, I did a DUMP/ALLOCATED of the open log file. The
data was definitely in the logfile, but it does *not* get copied by
backup.

:-(
Post by Dave Froble
The BACKUP >>should<< pick up all allocated disk storage.
If only that were true.
Post by Dave Froble
The 'proper' place to handle such is in the application writing the
log file.
Evidently.
--
Rob Brown b r o w n a t g m c l d o t c o m
G. Michaels Consulting Ltd. (780)438-9343 (voice)
Edmonton (780)437-3367 (FAX)
http://gmcl.com/
Hein RMS van den Heuvel
2006-06-19 21:22:43 UTC
Post by Rob Brown
Post by Dave Froble
Post by Rob Brown
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.
An interesting experiment, and not the results I would have
expected.
The BACKUP >>should<< pick up all allocated disk storage.
If only that were true.
It should. Somehow. If need be with an extra switch.
Post by Rob Brown
Post by Dave Froble
The 'proper' place to handle such is in the application writing the
log file.
Evidently.
Beg to differ. The application, in Rob's example does everything write
(:-)
The readers VMS provides could be improved on (are broken?)

I replied to it with a MACRO program already to show the writer is
correct.

I forgot to put the really simple solution in that reply.
Just open the file with DCL for READ, allowing WRITE
Then use the logical name. Using Rob's example:

$ open/read/share=write x openlog.txt
$ type x

Enjoy!
Hein.
John Santos
2006-06-21 07:16:17 UTC
Post by Hein RMS van den Heuvel
Post by Rob Brown
Post by Dave Froble
Post by Rob Brown
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.
An interesting experiment, and not the results I would have
expected.
The BACKUP >>should<< pick up all allocated disk storage.
If only that were true.
It should. Somehow. If need be with an extra switch.
Post by Rob Brown
Post by Dave Froble
The 'proper' place to handle such is in the application writing the
log file.
Evidently.
Beg to differ. The application, in Rob's example does everything write
(:-)
The readers VMS provides could be improved on (are broken?)
I replied to it with a MACRO program already to show the writer is
correct.
I forgot to put the really simple solution in that reply.
Just open the file with DCL for READ, allowing WRITE
$ open/read/share=write x openlog.txt
$ type x
I don't know if this matches the original poster's problem (and the one
you are solving if it's actually a side-track from the OP's), but I
just tried this and it didn't work. I still get:

$ open/read/share=write x tmp:ABC_LINK.LOG
%DCL-E-OPENIN, error opening DSA3:[XYZZY.TMP]ABC_LINK.LOG; as
input
-RMS-E-FLK, file currently locked by another user

(Giving myself all privileges didn't help.)



(Here and in the examples below, TMP: is a logical name pointing
to a work directory, DSA3:[XYZZY.TMP], and RUN: is a logical name
pointing to the executables directory, DSA3:[XYZZY.RUN]. The
logical names actually get expanded because they are run through
f$parse to apply defaults, i.e. tmp:.log and run:.exe, to the
symbols eventually passed to the run/detached command. So please
ignore the noise :-)

This is Alpha VMS V7.3-2:

$ run DSA3:[XYZZY.RUN]ABC_LINK.EXE; -
/process_name=ABC_LINK -
/detached -
/input=NL: -
/output=DSA3:[XYZZY.TMP]ABC_LINK.LOG -
/error=DSA3:[XYZZY.TMP]ABC_LINK.ERR -
/...

$ dir/full tmp:ABC_LINK.LOG;

Directory DSA3:[XYZZY.TMP]

ABC_LINK.LOG;156 File ID: (53291,121,0)
Size: 0/252 Owner: [XYZZY]
Created: 20-MAY-2006 15:01:03.42
Revised: 20-MAY-2006 15:01:03.42 (0)
Expires: <None specified>
Backup: 22-MAY-2006 03:14:40.52
Effective: <None specified>
Recording: <None specified>
Accessed: <None specified>
Attributes: <None specified>
Modified: <None specified>
Linkcount: 1
File organization: Sequential
Shelved state: Online
Caching attribute: Writethrough
File attributes: Allocation: 252, Extend: 0, Global buffer count: 0
No version limit
Record format: Variable length, maximum 0 bytes, longest 0 bytes
Record attributes: Carriage return carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List: None
Client attributes: None

Total of 1 file, 0/252 blocks.
$



The program is written in BASIC V1.5, and the log file is the result of
normal BASIC print statements.

We have lots of examples of this (not, I think, all in BASIC.) If it
gets annoying enough, we fix it by creating a stub command file that
just has a "$ run run:abc_link" in it, and then change the startup
command file to specify that file for /input, and loginout.exe as the
image to run. This creates a TMP:ABC_LINK.LOG that we can read without
stopping the program.

$ run sys$system:loginout.exe -
/process_name=ABC_LINK -
/detached -
/input=DSA3:[XYZZY.RUN]ABC_LINK_INPUT.COM -
/output=DSA3:[XYZZY.TMP]ABC_LINK.LOG -
/error=DSA3:[XYZZY.TMP]ABC_LINK.ERR -
/...

The down-side of this method is that it seems to require (since VMS
V7.2-1) CMKRNL, or the process just vanishes after creation.
Post by Hein RMS van den Heuvel
Enjoy!
Hein.
--
John Santos
Evans Griffiths & Hart, Inc.
781-861-0670 ext 539
JF Mezei
2006-06-21 07:30:58 UTC
Post by John Santos
$ open/read/share=write x tmp:ABC_LINK.LOG
%DCL-E-OPENIN, error opening DSA3:[XYZZY.TMP]ABC_LINK.LOG; as
input
-RMS-E-FLK, file currently locked by another user
If the other guy didn't open the file with /share=write, then it is
normal behaviour that you are refused access.

What the /share=write does is tell VMS if you are willing to let others
write to the file at same time as you. For this to work, those who have
already opened the file must have also told VMS they are willing to see
others share writing to the file.
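
A sketch of both sides cooperating, with illustrative names (the writer
declares /SHARE=WRITE when it creates the file, so a later reader asking
for the same sharing is compatible):

From the writing process:
$ OPEN/WRITE/SHARE=WRITE W APP.LOG
$ WRITE W "some log text"

From any other process:
$ OPEN/READ/SHARE=WRITE R APP.LOG
$ READ R LINE
$ SHOW SYMBOL LINE
$ CLOSE R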
John Santos
2006-06-21 08:24:48 UTC
Post by JF Mezei
Post by John Santos
$ open/read/share=write x tmp:ABC_LINK.LOG
%DCL-E-OPENIN, error opening DSA3:[XYZZY.TMP]ABC_LINK.LOG; as
input
-RMS-E-FLK, file currently locked by another user
If the other guy didn't open the file with /share=write, then it is
normal behaviour that you are refused access.
The "other guy" in this case being DCL, when it creates sys$output
for a process created with run/detached...
Post by JF Mezei
What the /share=write does is tell VMS if you are willing to let others
write to the file at same time as you. For this to work, those who have
already opened the file must have also told VMS they are willing to see
others share writing to the file.
Do you have a patch for DCL.EXE to do this? :-) :-) :-)
--
John Santos
Evans Griffiths & Hart, Inc.
781-861-0670 ext 539
Hein RMS van den Heuvel
2006-06-21 12:17:54 UTC
John Santos wrote:

Hi John, I failed to recognize your name on the earlier replies.
Good stuff, like how the EOF on an indexed file cannot be trusted.
Good to see your company name out in the open in VMS context.
Post by John Santos
I don't know if this matches the original poster's problem (and the one
you are solving if it's actually a side-track from the OP's),
Right, I was not dealing with the OP, just continuing on Rob's example.
Post by John Santos
$ open/read/share=write x tmp:ABC_LINK.LOG
-RMS-E-FLK, file currently locked by another user
$ run DSA3:[XYZZY.RUN]ABC_LINK.EXE; -
/detached -
/output=DSA3:[XYZZY.TMP]ABC_LINK.LOG -
If it gets annoying enough, we fix it by creating a stub command file that
just has a "$ run run:abc_link" in it, and then change the startup
command file to specify that file for /input, and loginout.exe as the
image to run. This creates a TMP:ABC_LINK.LOG that we can read without
stopping the program.
In the run/detached, DCL does NOT get involved.
It is just the job controller and the image playing.

DCL log files allow readers, as you show.
Furthermore they can FLUSH every 'timeout' seconds (30).
Post by John Santos
The down-side of this method is that it seems to require (since VMS
V7.2-1) CMKRNL, or the process just vanishes after creation.
I would have to spend more time than I have now to check that,
but why not just submit a similar helper as a batch job?
No special privs required, and it will allow the read share, no?

Hein.
John Santos
2006-06-22 03:00:26 UTC
Post by Hein RMS van den Heuvel
Hi John, I failed to recognize your name on the earlier replies.
Good stuff, like how the EOF on an indexed file cannot be trusted.
Good to see your company name out in the open in VMS context.
Post by John Santos
I don't know if this matches the original poster's problem (and the one
you are solving if it's actually a side-track from the OP's),
Right, I was not dealing with the OP, just continuing on Rob's example.
Post by John Santos
$ open/read/share=write x tmp:ABC_LINK.LOG
-RMS-E-FLK, file currently locked by another user
$ run DSA3:[XYZZY.RUN]ABC_LINK.EXE; -
/detached -
/output=DSA3:[XYZZY.TMP]ABC_LINK.LOG -
If it gets annoying enough, we fix it by creating a stub command file that
just has a "$ run run:abc_link" in it, and then change the startup
command file to specify that file for /input, and loginout.exe as the
image to run. This creates a TMP:ABC_LINK.LOG that we can read without
stopping the program.
In the run/detached, DCL does NOT get involved.
It is just the job controller and the image playing.
I think I said it did in another followup, but without
the loginout trick, there is no DCL (that's why we did
it in the first place), so it's unfair to blame DCL :-)

HELP SYS_FILES says it is sys$system:rundet.exe that
actually starts detached images. I take it that it
signals the job controller to do the work of setting
up the new process, assigning sys$input, sys$output,
etc?

So does the job controller create and open the sys$output
file or does that happen the first time something (BASIC
RTL when you do a print to channel 0, for example?) calls
LIB$PUT_OUTPUT? Or does BASIC create it when it notices
it needs it, and all the job controller does is define
the logical name in the new process's table? I.E. what
is actually doing the $open without the "SHR=GET"?
Post by Hein RMS van den Heuvel
DCL log files allow readers, as you show.
Furthermore they can FLUSH every 'timeout' seconds (30).
Yes, which is why using loginout, which causes the new
process to run under DCL, results in a log file that
can be read by others.
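
In a batch-style DCL context the flush interval is also adjustable from
inside the procedure; a sketch, assuming the default 30-second interval is
too slow for monitoring (the delta-time here is illustrative):

$ SET OUTPUT_RATE=0:0:5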
Post by Hein RMS van den Heuvel
Post by John Santos
The down-side of this method is that it seems to require (since VMS
V7.2-1) CMKRNL, or the process just vanishes after creation.
I would have to spend more time than I have now to check that,
but why not just submit a similar helper as a batch job?
No special privs required, and it will allow the read share, no?
Well, there are dozens of such processes, so the batch job limit
for that queue would have to be huge, and everything would fall
over dead if someone stopped the queue manager or the queue. But
it might be simpler to manage, and fewer privileges required by
the process (another batch job) that starts up everything.
Post by Hein RMS van den Heuvel
Hein.
--
John Santos
Evans Griffiths & Hart, Inc.
781-861-0670 ext 539
Hoff Hoffman
2006-06-22 14:15:32 UTC
John Santos wrote:

Specific details on the process creation paths are in the IDSM (the
Internals and Data Structures Manual).
Post by John Santos
HELP SYS_FILES says it is sys$system:rundet.exe that
actually starts detached images. I take it that it
signals the job controller to do the work of setting
up the new process, assigning sys$input, sys$output,
etc?
When a detached process is created, there are a few attributes passed
into the process via $creprc. The rest of the initialization arises
within the context of the target process. If the first image is
LOGINOUT (which is also why the first round of trusted image stuff
didn't work for DCL-based applications, as the identifiers were cleaned
off by an image rundown of LOGINOUT, but I digress), then it maps DCL
and sets up various (additional) process context.
Post by John Santos
So does the job controller create and open the sys$output
file or does that happen the first time something (BASIC
RTL when you do a print to channel 0, for example?) calls
LIB$PUT_OUTPUT? Or does BASIC create it when it notices
it needs it, and all the job controller does is define
the logical name in the new process's table? I.E. what
is actually doing the $open without the "SHR=GET"?
The system services and the run-time open the basic I/O channels as
they first need them, while LOGINOUT sets up the environment.

Within an open from within the DCL-based environment, you can see the
channel number lurking at the front of the logical name translation for
the active process logicals for the process permanent files; there is an
escape and a null and a couple of bytes of channel-related data. (The
manuals do reference this stuff, obscurely, but it's generally not
noticed nor particularly notable.)
Post by John Santos
Yes, which is why using loginout, which causes the new
process to run under DCL, results in a log file that
can be read by others.
I typically have the process explicitly open its own log file -- that
the process opens up either the specified output or a default output
path. (I typically code the logfile stuff directly, and don't generally
send all that much -- if anything -- to SYS$OUTPUT.) I route the output
through a set of central routines for logging, and these routines can
send the data along to a log server or can more directly output the
data. Basically, I have my own print routines. (An analog would be to
substitute your own localized LIB$PUT_OUTPUT routine into various of the
services that accept this localized output routine in the argument list.
This also works nicely, obviously.)

That written, there may well be something slightly odd (and more
specifically, inconsistent) in how the output log is accessed here at
the root of all this; that the default path for output doesn't open the
log for shared access and the DCL path to same does. Again, I don't
usually write log-related information directly to the SYS$OUTPUT path
for what will be a detached process. But I can see why folks might do
that. (I'm guessing here as to what's really happening within this
particular application environment. Obviously.)
Post by John Santos
Well, there are dozens of such processes, so the batch job limit
for that queue would have to be huge, and everything would fall
over dead if someone stopped the queue manager or the queue. But
it might be simpler to manage, and fewer privileges required by
the process (another batch job) that starts up everything.
Process control is always entertaining, and there are open source and
commercial packages available -- OpenVMS lacks an integrated in-built
process control subsystem, however. So many folks use batch queues.
There are several good scheduling packages available on the market.
Dave Froble
2006-06-22 23:42:47 UTC
Post by Hoff Hoffman
Specific details on the process creation paths are in the IDSM (the
Internals and Data Structures Manual).
Post by John Santos
HELP SYS_FILES says it is sys$system:rundet.exe that
actually starts detached images. I take it that it
signals the job controller to do the work of setting
up the new process, assigning sys$input, sys$output,
etc?
When a detached process is created, there are a few attributes passed
into the process via $creprc. The rest of the initialization arises
within the context of the target process. If the first image is
LOGINOUT (which is also why the first round of trusted image stuff
didn't work for DCL-based applications, as the identifiers were cleaned
off by an image rundown of LOGINOUT, but I digress), then it maps DCL
and sets up various (additional) process context.
Post by John Santos
So does the job controller create and open the sys$output
file or does that happen the first time something (BASIC
RTL when you do a print to channel 0, for example?) calls
LIB$PUT_OUTPUT? Or does BASIC create it when it notices
it needs it, and all the job controller does is define
the logical name in the new process's table? I.E. what
is actually doing the $open without the "SHR=GET"?
The system services and the run-time open the basic I/O channels as
they first need them, while LOGINOUT sets up the environment.
Within an open from within the DCL-based environment, you can see the
channel number lurking at the front of the logical name translation for
the active process logicals for the process permanent files; there is an
escape and a null and a couple of bytes of channel-related data. (The
manuals do reference this stuff, obscurely, but it's generally not
noticed nor particularly notable.)
Post by John Santos
Yes, which is why using loginout, which causes the new
process to run under DCL, results in a log file that
can be read by others.
I typically have the process explicitly open its own log file -- that
the process opens up either the specified output or a default output
path. (I typically code the logfile stuff directly, and don't generally
send all that much -- if anything -- to SYS$OUTPUT.) I route the output
through a set of central routines for logging, and these routines can
send the data along to a log server or can more directly output the
data. Basically, I have my own print routines. (An analog would be to
substitute your own localized LIB$PUT_OUTPUT routine into various of the
services that accept this localized output routine in the argument list.
This also works nicely, obviously.)
That written, there may well be something slightly odd (and more
specifically, inconsistent) in how the output log is accessed here at
the root of all this; that the default path for output doesn't open the
log for shared access and the DCL path to same does. Again, I don't
usually write log-related information directly to the SYS$OUTPUT path
for what will be a detached process. But I can see why folks might do
that. (I'm guessing here as to what's really happening within this
particular application environment. Obviously.)
Post by John Santos
Well, there are dozens of such processes, so the batch job limit
for that queue would have to be huge, and everything would fall
over dead if someone stopped the queue manager or the queue. But
it might be simpler to manage, and fewer privileges required by
the process (another batch job) that starts up everything.
Process control is always entertaining, and there are open source and
commercial packages available -- OpenVMS lacks an integrated in-built
process control subsystem, however. So many folks use batch queues.
There are several good scheduling packages available on the market.
Before there was a VMS, mid to late 1970s, several others and I
worked out how to implement what are basically 'service' processes,
including communications, process start-up and run-down, and such. The
procedures moved to VMS quite easily and worked very well. Almost 30
years later I still don't see any reason to fix what ain't broke.

Application specific log files are part of the design. It might be
easier to dump stuff to SYS$OUTPUT, but it sure is less flexible.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
JF Mezei
2006-06-16 18:59:38 UTC
Post by Dave Froble
The BACKUP >>should<< pick up all allocated disk storage.
Why should it? In a sequential file, there is no point in going past
the end of file marker.

In a relative file, as I recall from a discussion of highwater marking,
the "end of file" would go to the end of the highest block used in the
file. (So if you have a 100 block file and have written only one
record at block 50, then backup would only pick up the first 50 blocks.)
(And if the first write is at block 50, all previous blocks are
automatically zeroed.)

Similarly, in an indexed file, there is also the concept of an "end of
file" because you can have fewer blocks used than allocated.
Hein RMS van den Heuvel
2006-06-21 11:59:25 UTC
Permalink
Post by JF Mezei
Post by Dave Froble
The BACKUP >>should<< pick up all allocated disk storage.
Why should it ?
Because that's what its only purpose in life is?

Check out "HELP BACKUP"..

"BACKUP
Invokes the Backup utility (BACKUP) to perform the following
BACKUP operations:
o Make copies of disk files.
:
"

Well, in the presented case it does NOT make a true copy of the disk
file, does it?
Post by JF Mezei
Post by Dave Froble
In a sequential file, there is no point in going past the enf of file marker.
I believe the example shows there is a point. Kinda an all or nothing
point.
The EOF markers (EBK, FFB) are just hints.
Very handy hints as maintained by RMS, but they are just hints.
Applications may or may not choose to maintain them, or listen to them.
Applications may or may not write sequential files using RMS.
IMHO BACKUP has no business second-guessing the usefulness of blocks in a
file based on the EOF information.
Post by JF Mezei
Post by Dave Froble
Similarly, in an indexed file, there is also the concept of an "end of
file" because you can have fewer blocks used than allocated.

Is there? Not in the VMS I know. The EOF has no meaning for indexed
files.
I just tried to double-check:

$ dir/full test.idx... [Edited to reduce output]
TEST.IDX;2 File ID: (98701,12,0)
Size: 50/51 Owner: [HEIN]
File organization: Indexed, Prolog: 3, Using 1 key
File attributes: Allocation: 51, Extend: 0
$ type test.idx
aap er was eens een aap
bbbbbbbbbbbbbbbb
dddddddddddddddddd
eeeeeeeeeeeeeeeeee
noot die luste een noot
$ set file/att=ebk=1 test.idx
$ type test.idx
aap er was eens een aap
bbbbbbbbbbbbbbbb
dddddddddddddddddd
eeeeeeeeeeeeeeeeee
noot die luste een noot
$ dir/full test.idx.... [Edited to reduce output]
TEST.IDX;2 File ID: (98701,12,0)
Size: 0/51 Owner: [HEIN]
File organization: Indexed, Prolog: 3, Using 1 key
File attributes: Allocation: 51, Extend: 0,
Bob Koehler
2006-06-19 12:57:42 UTC
Permalink
Post by Dave Froble
The BACKUP >>should<< pick up all allocated disk storage.
BACKUP honors the last used block field. Always has. Someone
with a competing product pointed to a Files-11 document that
described this as a user-defined field. RMS always sets it.
The disk XQP/ACP documentation in the I/O User's Guide describes
it accurately.

It is possible to use the $QIO interface to put data in a file and
not set the last used block field, but I've only known one
application that did this, and the effect was the same as marking
the file /nobackup, which was the correct way to handle that file
anyhow.
Richard B. Gilbert
2006-06-16 20:44:25 UTC
Permalink
Post by Rob Brown
Post by JF Mezei
Post by John Santos
You will probably still end up with the EOF pointing to the beginning
of the file (directory shows 0/xxx for the size), but BACKUP should
copy whatever has actually been written to the disk
Can anyone confirm this ?
I did some experiments with VMS/Alpha V7.1. It appears that this
approach does not give the desired result.
$ open/write/share f openlog.txt
$ write f "the quick brown fox jumps over the lazy dog"
I repeated the write statement until a $DIRECTORY command on another
terminal showed that an extent had been added to the file. Then I did
$ backup/ignore=interlock openlog.txt;0 copy.txt/log/new
%BACKUP-W-ACCONFLICT, DISK$8:[BROWN.TEMP]OPENLOG.TXT;2 is open for write
by another user
%BACKUP-S-CREATED, created DISK$8:[BROWN.TEMP]COPY.TXT;9
$ dir/size=all/date openlog.txt,copy
Directory DISK$8:[BROWN.TEMP]
OPENLOG.TXT;2 0/12 16-JUN-2006 09:30:50.51
COPY.TXT;9 0/12 16-JUN-2006 09:30:50.51
...
$ dump/allocated/block=count=1 copy.txt
Dump of file DISK$8:[BROWN.TEMP]COPY.TXT;9 on 16-JUN-2006 09:32:35.25
File ID (28737,18,0) End of file block 0 / Allocated 12
Virtual block number 1 (00000001), 512 (0200) bytes
0000A400 03000100 0000A500 0100054F O....¥......... 000000
...
As you can see, it does not say "the quick brown fox" anywhere here.
Doing a SET FILE/END COPY.TXT does not change anything. The data is not
there.
:-(
I just tried the experiment on Alpha VMS V7.2-1. You're right, it
doesn't work. Even after writing 11 blocks worth of "The quick
brown..." nothing was committed to disk until the file was closed.

I hereby retract foolish remarks I've made in the past about VMS/RMS
writing things to disk. It appears that VMS, at least V7.2-1, is no
better than Unix about actually committing things to disk, and that if
I'd lost power before closing the file the contents would have been lost.
JF Mezei
2006-06-16 23:51:36 UTC
Permalink
Post by Richard B. Gilbert
I just tried the experiment on Alpha VMS V7.2-1. You're right, it
doesn't work. Even after writing 11 blocks worth of "The quick
brown..." nothing was committed to disk until the file was closed.
Actually, this may not be the case. The data may have been written to
disk, but since the EOF pointer was not updated, normal utilities
wouldn't see the data even though it is on disk.
Bob Koehler
2006-06-19 12:58:47 UTC
Permalink
Post by Richard B. Gilbert
I hearby retract foolish remarks I've made in the past about VMS/RMS
writing things to disk. It appears that VMS, at least, V7.2-1 is no
better than Unix about actually committing things to disk and that if
I'd lost power before closing the file the contents would have been lost.
VMS gives the user control over the buffering of the file. It never
has made it impossible to buffer a file.
JF Mezei
2006-06-19 17:57:07 UTC
Permalink
Question:

When in C, one uses the fsync routine or fflush (allowing another user
to type the file and see the contents recently written to it), I take it
that this actually updates the EOF pointer?


If you write a single line of text (and thus a smaller amount of data
than the RMS buffers), does this mean that it stays in memory, and not
on disk, eternally until you close the file (or do an fflush/fsync)?
h***@hp.nospam
2006-06-19 18:17:34 UTC
Permalink
In article <***@teksavvy.com>, JF Mezei <***@vaxination.ca> writes:

|> When in C, one uses the fsync routine or fflush (allowing another user
|> to type the file and see the contents recently written to it), I take it
|> that this actually updates the EOF pointer?

Yes.

|> If you write a single line of text (and thus a smaller amount of data
|> than the RMS buffers), does this mean that it stays in memory, and not
|> on disk, eternally until you close the file (or do an fflush/fsync)?

No.

It's easier to follow the discussion, the description, and the C code
shown in the OpenVMS FAQ; that approach requests this file-level shared
access entirely automatically and transparently. The whole discussion --
the fflush and fsync calls, and the access overrides -- then becomes moot.
Hein RMS van den Heuvel
2006-06-18 20:51:18 UTC
Permalink
Post by Rob Brown
OPENLOG.TXT;2 0/12 16-JUN-2006 09:30:50.51
COPY.TXT;9 0/12 16-JUN-2006 09:30:50.51
Well, 12 blocks is simply not enough.

The default RMS buffer size was 16 pagelettes/blocks = 8KB, and was
bumped to 23 pagelettes. I would expect the 0/400+ file originally
reported to have plenty of useful data already committed to the disk.

fwiw,
Hein.
Rob Brown
2006-06-19 19:07:37 UTC
Permalink
Post by Hein RMS van den Heuvel
Post by Rob Brown
OPENLOG.TXT;2 0/12 16-JUN-2006 09:30:50.51
COPY.TXT;9 0/12 16-JUN-2006 09:30:50.51
Well, 12 blocks is simply not enough.
My apologies. I did not make myself clear in that post or in
subsequent posts.

In the situation shown above, I did DUMP/ALLOCATED of both files.
This verified that the data really *was* in OPENLOG.TXT, but was not
copied to COPY.TXT.
Post by Hein RMS van den Heuvel
The default RMS buffer size was 16 pagelettes/blocks = 8KB, and was
bumped to 23 pagelettes. I would expect the 0/400+ file originally
reported to have plenty of useful data already committed to the disk.
Without a doubt. The point I was trying to make was that BACKUP will
not copy the data to another file.

hth

- Rob
--
Rob Brown b r o w n a t g m c l d o t c o m
G. Michaels Consulting Ltd. (780)438-9343 (voice)
Edmonton (780)437-3367 (FAX)
http://gmcl.com/
Main, Kerry
2006-06-15 16:26:59 UTC
Permalink
-----Original Message-----
Sent: June 15, 2006 11:43 AM
Subject: How can I read a locked VMS file?
I have a log file that is locked. The process that has the file locked
is production critical; I cannot terminate it. However, I really need
to view the contents of this log file. Is there a way to override the
VMS file lock in order to view the contents of this file?
Thanks!
Steve,

If the log file has no committed data written to it, this might not work,
but try -

$ Edit/TPU/read filename.log

Regards

Kerry Main
Senior Consultant
HP Services Canada
Voice: 613-592-4660
Fax: 613-591-4477
kerryDOTmainAThpDOTcom
(remove the DOT's and AT)

OpenVMS - the secure, multi-site OS that just works.
JF Mezei
2006-06-15 21:50:58 UTC
Permalink
Post by Main, Kerry
If the log file has no committed data written to it, this might not work,
but try -
$ Edit/TPU/read filename.log
TPU is no different from TYPE in the type of data it can access in such
circumstances.

If it has 0/432 in the "/size=all" display, it means that the end of
file is still at block 0, so you need a utility that copies all
allocated blocks, not just the data to the end of file. And I am not
even sure if BACKUP will do it. I know that if a file had 200/432,
backup is able to get the first 200 blocks for sure. But will it get the
rest of the file ?


DUMP/ALLOCATED is one utility that will read past the end of file to the
end of the allocated blocks. But this only works if the file is
accessible (e.g. DUMP has no /IGNORE=INTERLOCK qualifier).
Syltrem
2006-06-16 01:00:37 UTC
Permalink
For some files, you can force a flush to disk by opening the file yourself
in update mode, that is:

$ open/read/write/share xxx yourfile.xxx
$ close xxx

By "some files" I mean those that have been opened in share mode and not
yet flushed to disk.
A log file is not opened in share mode afaik, so that may not work.

Syltrem
Post by JF Mezei
Post by Main, Kerry
If the log file has no comitted data written to it, this might not work,
but try -
$ Edit/TPU/read filename.log
TPU is no different from TYPE in the type of data it can access in such
circumstances.
If it has 0/432 in the "/size=all" display, it means that the end of
file is still at block 0, so you need a utility that copies all
allocated blocks, not just the data to the end of file. And I am not
even sure if BACKUP will do it. I know that if a file had 200/432,
backup is able to get the first 200 blocks for sure. But will it get the
rest of the file ?
DUMP/ALLOCATED is one utility that will read past the end of file to the
end of the allocated blocks. But this only works if the file is
accessible (e.g. DUMP has no /IGNORE=INTERLOCK qualifier).
h***@hp.nospam
2006-06-15 17:21:59 UTC
Permalink
In article <***@c74g2000cwc.googlegroups.com>, ***@cfl.rr.com writes:
|> I have a log file that is locked. The process that has the file locked
|> is production critical; I cannot terminate it. However, I really need
|> to view the contents of this log file. Is there a way to override the
|> VMS file lock in order to view the contents of this file?

In addition to BACKUP and such, another approach is the DCL command:

CONVERT/SHARE from-spec to-spec

Should you have access to the source code for the application (or
if you are willing to patch the image directly to resolve this), the
commands necessary to open the file for shared access are readily
available. (The FAQ has a discussion of this issue specific to C
programmers, but the solution and the flags can be generalized.)
Tweaking the file-sharing flags is trivial to code in most any of
the available languages, and equally easy within a direct RMS file
open.

If you're not sure how to do this, post the section of code that
opens the log file here, and we can take a look at it.

Adjusting the file options is the best and most appropriate approach,
well, barring some comparatively draconian site-specific application
(re)qualification requirements.
S***@cfl.rr.com
2006-06-16 13:28:50 UTC
Permalink
Wow, DUMP/ALLOCATED worked! I really doubted there would be a solution.
Thanks! You are amazing.


Thanks!!
Hein RMS van den Heuvel
2006-06-19 20:59:05 UTC
Permalink
Post by S***@cfl.rr.com
Wow, DUMP/ALLOCATED worked! I really doubted there would be a solution.
Thanks! You are amazing.
Thanks!!
You can probably do a whole lot better.

Check the MACRO code I wrote just now below for a potential solution.
I wrote it in MACRO to accommodate all systems.

It simply opens the file with RMS in shared mode and copies records.

The reason CONVERT (and TYPE and SEARCH) do not work on this file is
that they use the RMS option SHR=UPI to minimize locking overhead.
However, this prevents RMS from reading the actual EOF in the lock
associated with the file.

I have discussed this problem with Guy in the past and will
attempt to pick up that conversation.

Enjoy,
Hein.

.psect data, wrt, noexe
buf: .blkb 32*1024
header: .blkb 64
infab: $FAB fnm=sys$input,fac=get,shr=put
inrab: $RAB fab=infab,usz=32000,ubf=buf,rop=rah,rhb=header

outfab: $FAB fnm=sys$output,fac=put
outrab: $RAB fab=outfab,mbc=50,rbf=buf,rsz=400,rop=wbh,rhb=header

.psect code, nowrt, exe
.entry start,0
$OPEN fab=infab
blbc r0,end
movb infab+fab$b_rat, outfab+fab$b_rat
movb infab+fab$b_rfm, outfab+fab$b_rfm
movb infab+fab$b_fsz, outfab+fab$b_fsz
$open fab=outfab
blbc r0,end
$connect rab=inrab
blbc r0,end
$connect rab=outrab
blbc r0,end
loop:
$get rab=inrab
blbc r0,end
movw inrab+rab$w_rsz, outrab+rab$w_rsz
$put rab=outrab
blbc r0,end
brw loop

end:
cmpl r0,#RMS$_EOF
beql success
ret
success:
movl #1, r0
ret
.end start
Chris Sharman
2006-06-23 08:16:28 UTC
Permalink
Post by S***@cfl.rr.com
I have a log file that is locked. The process that has the file locked
is production critical; I cannot terminate it. However, I really need
to view the contents of this log file. Is there a way to override the
VMS file lock in order to view the contents of this file?
I wrote a little utility called peek, way back in '89.

It uses $qio to access the file, and does the record handling itself
(incomplete, but handles all the log files I've ever wanted). It has to
do that because rms won't disregard its own locks, afaik.

Obviously, deciding whether something's been written or is simply old
rubbish on the disk is largely guesswork (especially without highwater
marking). It expects records to be 255 bytes or less (or more if the
file header says they can be bigger).

It was submitted to decus, all those years ago, and has probably made it
onto one of the freeware disks.

Failing that, anyone that wants a copy of peek.pas/.obj/.exe can pick it
up from http://services.ccagroup.co.uk/proofs/peek.zip (a vms zip file,
so unzip it on vms & restore the attributes).

Chris
