Post by Luigi Thirty
What I would like to do, though, is make the PCI memory buffer directly
accessible from a user process that’s opened a channel to the device
and made a request for it. In Linux this would be mmap() with a shared
buffer, in NT this would be IOCTL_MAPMEM_USER_PHYSICAL_MEMORY
but I can’t find the equivalent in the VMS documentation.
the term you're looking for is pfn-mapping. in this situation, i've arranged an io$_sensemode function that returns the information the user process needs to create a pfn mapping that covers the section. there are, however, many landmines in this area.
we have several custom devices in our labs and i am responsible for the device drivers for those devices. many years ago, i got tired of maintaining several device drivers that were very similar, so i created a generic pci device driver that can handle pretty much any device. it provides the information a user process needs to create a pfn-mapped section that covers the registers, collects interrupts for delivery to the user program, and can pin buffers down and describe their mapping so that a user program can configure the device to access those buffers.
it's been a while since i've had to go into the driver, and i don't have access to the documentation (and no idea where hp has moved it this week) or the driver sources, so my memory is a bit fuzzy on the details.
back in the day, global sections and pfn-mapped sections were accessed using the same system services, $crmpsc and $mgblsc. there are sections that will be either pfn-mapped or mapped as a global section, depending on the details of the lab configuration. my strategy to deal with this was to try one type of mapping and, if that failed, retry it as the other type. eventually, the global section and pfn-mapped section services were split into independent system services. at that point, there was one combination (i.e., either trying to map a pfn-mapped section as a global section or trying to map a global section as a pfn-mapped section) that would cause the system to crash. naturally, my mapping routine was making the attempts in the order that caused the system to crash.
interrupts are handled by placing the irp on a queue, then enabling interrupts from the device. when an interrupt happens, the driver disables interrupts from the device, then completes all of the irps sitting on the queue. some versions of alpha/vms do not properly handle disabling interrupts from a device; it's as if the bus support author never entertained the possibility that someone might want to disable interrupts. in addition to doing the actual work involved in disabling interrupts, the bus support code maintains a bitmap tracking which interrupts are disabled. when an interrupt is enabled, the bus support code does the work and sets a bit in the map to note that it has enabled the interrupt. however, when an interrupt is disabled, the bus support code does the work but *does not* clear the bit in the map. consequently, the next time you enable an interrupt, the bus support code does nothing because it thinks the interrupt is already enabled. the device driver works around this in those situations by clearing the bit in the bus support code's bitmap when it disables an interrupt.
the most recent issue i had with the driver involved shared interrupts. if you attach a driver that does not handle shared interrupts to a device that is connected to a shared interrupt line, the operating system refuses to initialize the driver; it does not call *any* of the driver's initialization functions. however, there are still situations in which the operating system calls the driver's $cancel entry point, even though the driver has not been initialized. i didn't expect this, and my driver would crash when the $cancel entry was called without any of the driver's data structures having been initialized.