I used RUNPROF from a dot prompt. The program took 53 seconds to run and had a chain back to the menu program, and when I exit the menu program I see:
Code
RUNPROF Version 1.0(113)
Creating report from 9582.rpl for OE08
Loading 9582.rpl into gridmap for analysis...
Killed
Don't know of any traces that would be useful. The RPL file is 256 MB. Zipped it's 31 MB, still too big to upload as an attachment since there's a 5 MB per-attachment limit.
For what it's worth, I get the same thing here. I'm afraid that this is one weakness, possibly a fatal one, with the idea behind RUNPROF -- the amount of data it generates can be so large that it completely overwhelms the tools needed to analyze it.
That said, let me play with it a bit to at least see if I can identify the cause of the crash and try to make it a bit more elegant. Stay tuned...
Sure enough, it dies deep within a Standard Template Library C++ routine which tries to allocate memory for another gridmap element. Somewhat surprising, perhaps, that it doesn't handle the exception and just return an error code, but the prospects for me fixing that are slim. And it does occur right around the 2GB mark, suggesting that it may be more of an architectural limit than a physical memory limit.
So probably RUNPROF needs a better strategy for dealing with all this data than to start by loading it into a gridmap!
If you were desperate to analyze some part of it, such as how many times it calls a particular routine, or how much time is spent there, you could use some other stream-based tools on the RPL file to filter it down to a more manageable amount. (You can even view it with EZTYP to get an idea of what you're dealing with.) The first step would be to match up the &hxxxx location counters (the last one on each line is the 'to' address of a call) to your LSX file, and then perhaps just grep those lines out of it. Or conversely, you might be able to remove a large subset of the calls (if there were some particularly repetitive low-level function calls that you weren't that interested in).
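For instance, here's a rough Python sketch of that kind of streaming filter (not a supported tool; the only format assumption is the one above, that the lines are comma-separated and the last field is the &hxxxx 'to' address, and the KEEP / DROP address sets are hypothetical values you'd pull from your LSX):
Code
import sys

# Hypothetical address sets -- fill in from your LSX file.
KEEP = {"&h01A2", "&h03F4"}   # only keep calls into these locations
DROP = set()                  # or: locations of noisy low-level calls to discard

def wanted(line):
    # Assumption: the last comma-separated field is the 'to' address of the call.
    to_addr = line.rstrip("\n").rsplit(",", 1)[-1].strip()
    if to_addr in DROP:
        return False
    return (not KEEP) or (to_addr in KEEP)

with open(sys.argv[1]) as src, open(sys.argv[2], "w") as dst:
    for line in src:
        if wanted(line):
            dst.write(line)
Run it as, say, python filter_rpl.py 9582.rpl 9582-small.rpl; since it never holds more than one line in memory, the 256 MB file isn't a problem.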
I'll hold on to it and see if I can come up with some better ideas. If you could send me the OE08.LSX file, that would be helpful. But since this isn't a critical part of the 7.0 release, I may set it aside for a few days while I try to finalize the pivot out of beta.
The two most likely motivations would be to 1) help optimize some code by analyzing where it was spending a lot of time, and 2) track the sequence of calls in a complex program, particularly one with many conditionals, levels, maybe recursion, etc., making it otherwise hard to trace. The mechanism taps into the same internal hook used for the call stack tracing (that shows up in the ashlog for Basic errors), but adds additional detail (such as CPU and clock time), and instead of discarding the information as calls are popped off the stack, it logs the entire sequence to a file for subsequent analysis or easier viewing. The most convenient viewing options are XTREE and Excel. Here are a couple of example screenshots, one showing the summary view of the calls sorted by # of calls, and one showing the detailed chronological sequence view with nesting levels highlighted...
But it's at best a work in progress, and maybe just an interesting idea that doesn't pan out that well. The problem, as Stephen has encountered, is that the amount of data it can create is immense, and this is one case where the modern approach of just loading all the data into a memory structure (such as a gridmap) for easier manipulation falls over. I need to redesign it to be more disk based. But even then, other than the summary view, it's not clear how to present the sequential view, since even Excel starts to become unwieldy when dealing with hundreds of thousands of rows.
Note that if you can confine your inquiry to a smaller section of code, you can turn the profiling on and off within the app itself (using MX_RUNPROF) and thus cut the data down to a manageable amount. The raw output is also just a CSV file, so you can view and/or search it if you know what you're looking for. The raw log, though, only has location counters, so one of the things that RUNPROF.LIT is trying to do to make it more comprehensible is to match them up with your LSX to convert them to routine names.
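To illustrate the general idea (strictly a sketch under assumptions -- the column layout and the location-to-name map below are made up, standing in for what RUNPROF.LIT actually builds by parsing your LSX), a single streaming pass can roll the raw CSV log up into a per-routine summary without ever loading the whole file into memory, which is roughly the "more disk based" direction mentioned above:
Code
import csv, sys
from collections import defaultdict

# Hypothetical map of &hxxxx location counters to routine names,
# standing in for whatever gets extracted from the LSX file.
LOC_MAP = {"&h01A2": "CALC_TOTALS", "&h03F4": "FMT_LINE"}
CPU_COL = 2   # hypothetical column index holding the CPU time value

calls = defaultdict(int)
cpu = defaultdict(float)

with open(sys.argv[1], newline="") as f:
    for row in csv.reader(f):
        if not row:
            continue
        # Assumption: the last field is the 'to' address of the call.
        name = LOC_MAP.get(row[-1].strip(), row[-1].strip())
        calls[name] += 1
        try:
            cpu[name] += float(row[CPU_COL])
        except (IndexError, ValueError):
            pass   # blank or missing timing values (see the resolution note below)

# Summary sorted by number of calls, roughly like the summary report view.
for name in sorted(calls, key=calls.get, reverse=True):
    print("%10d  %10.3f  %s" % (calls[name], cpu[name], name))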
Another problem is that neither the CPU counter nor the system clock has sufficient resolution to measure very small function calls (which is why some calls in the examples above show up with 0 or blank values). On the other hand, it probably isn't worth trying to further optimize such small routines anyway.
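As a quick illustration of the resolution issue (using Python's clocks rather than the profiler's own timers, so purely for intuition): a call that finishes within one tick of the process-time clock measures as zero.
Code
import time

# Report the granularity of the process-time clock on this system.
print(time.get_clock_info("process_time"))

def tiny():
    return 1 + 1

t0 = time.process_time()
tiny()
t1 = time.process_time()
# On systems where this clock ticks coarsely, the difference prints as 0.0
# even though the call obviously took *some* time.
print("measured CPU time for one tiny call:", t1 - t0)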
Another shortcoming is that, at least in the current implementation, it doesn't track internal XCALLs. It could easily do that, but I'm not sure there is much practical value there, since you're able to review the C code to optimize it, and most of the routines that are called frequently are so fast and short that the overhead to track them would probably be disproportionate to the actual overhead of the routines themselves.
Let's say you have a 12345.RPL file that references a program MYPROG for which there is an LSX file in the current directory. Then you should be able to get a report using:
Code
.runprof/rpl:12345/view/prog:myprog
RUNPROF Version 1.1(114)
Creating report from 12345.rpl for MYPROG
Scanning 12345.rpl for program
RUNPROF, RUNPROF, RUN, RUN, RUNPROF
MYPROG found: lines from 10 to 20
Using MYPROG.lsx from current directory...
Building location map from MYPROG.lsx ...
7 label locations mapped
Reformatting profile data using LSX map...
100 % (0.6 secs)
34 profile data rows available for viewing / analysis
1) raw view, 2) CSV, 3) XLSX (detail), 4) XLSX (sum), 5) XTREE: -1) end:
I would try the XTREE variation first. And yes, it should match up the locations with the LSX file so that you can see routine names in the various report views.