Hmmm... I'm not sure what to make of this, but here are a few thoughts...
1. I've never heard of a false positive SIGHUP coming from some OS-level glitch. But it is certainly possible that an A-Shell or OS-level process is explicitly sending SIGHUP signals via the UNIX kill command, so I guess a "rogue agent" is impossible to rule out. (Note that at one point in the distant past, KILL.LIT was even guilty of sending SIGHUP to terminate jobs, though that was eventually changed to sending SIGKILL instead. Also note that when A-Shell sends a signal to another job via MX_KILL, a message "MX_KILL signal ## sent to pid #####" is written to the ashlog file.)
2. I have seen cases of cron scripts that crank up every so many minutes, ostensibly looking for CPU hogs or zombies, and end up sending out a barrage of SIGHUP or SIGKILL signals. The telltale sign of that is usually a bunch of SIGHUP messages in the log at the same time, repeating at a fixed interval.
3. With that in mind, it might be useful to see a somewhat larger excerpt of the log, both to check for such clusters and to get a sense of how well the jobs receiving the SIGHUPs are shutting themselves down. (I know that's not the issue here, but it often is, so it's a good opportunity to review.) Typically what I'll do is locate the SIGHUP message, then search from there for the pid (in the bracketed part of each trace prefix) to see if the job goes through the entire shutdown sequence, ending with "After qpurge & qclose", and how long that takes.
4. It's hard to tell from the excerpt whether the SIGHUP occurred after the job had been running for a while, or during startup. (The "Was: 20P/21L, Is: 20P/21L" in the final trace suggests that there was no change in the physical/logical job count as a result of the SIGHUP, which might mean that the job never got into the job table to start with, or that the "jcb rebuild" (more like a "rescan") occurred before the job had exited.)
5. What version/platform is this?
6. I once wrote a utility to convert an ashlog.log to a kind of spreadsheet of sessions to allow for an overview, including information on how many sessions terminate with errors or signals. But it's still a work in progress due to the constantly evolving information in the ashlog. (This is one of the ideas under the category of "system health reports" that we touched on at the Conference but didn't really resolve.) Usually the issue comes up in a case like this where you are interested in a specific statistic - the number or frequency of SIGHUPs in this case. And that should be fairly easy to get by scanning your ashlog for indicators of session start, finish, and termination (see the rough scanning sketch at the end of this message), such as:
Normal (but rather short) session...
27-Nov-18 05:20:23 [p21317-23]<:(nil)> In: Nodes=11/31/55 [P], ip=192.168.20.205 d8:9e:f3:6:9a:43, (dave)
...
27-Nov-18 05:20:36 [p21317-23]<HOST:0x3da> Out: Nodes Remaining = 10P/30L, 15 reads, 1 writes, 140 kbd byte
SIGHUP with normal recovery/termination...
27-Nov-18 04:55:38 [tsk:20349-18]<BXINA2:0x436a> SIGHUP trapped on: TSKAAR (steak)
...
27-Nov-18 04:55:43 [tsk:20349-18]<MASTMU:0x47c6> After qpurge & qclose
But, if you are sufficiently motivated to want to tinker with a spreadsheet treatment of the ashlog in order to gather statistics, look for anomalies, etc., that might inspire me to dig out the routine to let you play with it. (Full disclosure/warning: it's the kind of thing that can easily suck up many hours of gathering, analyzing, refining, etc., which might be interesting but aren't necessarily that productive.)
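In the meantime, here's a minimal sketch of that kind of scan in Python. To be clear, this isn't the utility I mentioned - just a quick illustration. The timestamp format, trace-prefix layout, and message texts ("SIGHUP trapped", "After qpurge & qclose") are inferred from the excerpts above, so they may need adjusting to match your actual ashlog.log.

#!/usr/bin/env python3
# Quick-and-dirty ashlog.log scanner: counts SIGHUPs, flags same-minute
# clusters (a hint of a cron script firing at a fixed interval), and checks
# whether each affected pid reaches "After qpurge & qclose".
# Log format assumptions are based only on the excerpts quoted above.

import re
import sys
from collections import defaultdict
from datetime import datetime

TS_FMT = "%d-%b-%y %H:%M:%S"          # e.g. 27-Nov-18 04:55:38
# Trace prefix, e.g. "27-Nov-18 04:55:38 [tsk:20349-18]<BXINA2:0x436a> msg..."
LINE_RE = re.compile(
    r"^(\d{2}-\w{3}-\d{2} \d{2}:\d{2}:\d{2}) "    # timestamp
    r"\[(?:tsk:|p)?(\d+)[^\]]*\]"                 # bracketed prefix; capture pid
    r"\S*\s+(.*)$")                               # <module:addr> and message text

def scan(path):
    sighups = []      # list of (timestamp, pid)
    recovered = {}    # pid -> timestamp of most recent "After qpurge & qclose"
    with open(path, errors="replace") as f:
        for line in f:
            m = LINE_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), TS_FMT)
            pid, msg = m.group(2), m.group(3)
            if "SIGHUP trapped" in msg:
                sighups.append((ts, pid))
            elif "After qpurge & qclose" in msg:
                recovered[pid] = ts
    return sighups, recovered

def report(sighups, recovered):
    print(f"{len(sighups)} SIGHUP(s) found")
    for ts, pid in sighups:
        done = recovered.get(pid)
        if done and done >= ts:
            print(f"  {ts}  pid {pid}: shutdown completed in {(done - ts).seconds}s")
        else:
            print(f"  {ts}  pid {pid}: no 'After qpurge & qclose' seen")
    # Crude cluster check: more than one SIGHUP within the same minute
    by_minute = defaultdict(int)
    for ts, _ in sighups:
        by_minute[ts.replace(second=0)] += 1
    for minute, count in sorted(by_minute.items()):
        if count > 1:
            print(f"  cluster: {count} SIGHUPs at {minute} (cron suspect?)")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "ashlog.log"
    sighups, recovered = scan(path)
    report(sighups, recovered)

Saved as, say, sighup_scan.py and run as "python3 sighup_scan.py ashlog.log", it prints each SIGHUP found, whether (and how quickly) that pid reached "After qpurge & qclose", and any same-minute clusters of the kind described in point 2. Note that it keys recovery messages by pid only, so if pids get recycled over the span of the log it could misattribute a recovery - good enough for a first look, but not for real statistics.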