The best way to optimize files that are currently accessed via SERCH is to replace them with ISAM-A files. (Even in the best case, looking up a record in a 32K-record file takes on average about 15 disk operations with SERCH, versus about 3 with ISAM; a rough back-of-envelope version of this comparison appears at the end of this section.) If that is not practical, then at least consider one or both of the following suggestions:
• Change your file reorganization threshold so that the index is kept sorted (i.e. so that the unsorted "overflow" area stays small). On a machine where sorting took a relatively long time, you might have been tempted to delay sorting until the overflow area became quite large. In the typical A-Shell case, however, the accumulated cost of continually searching the overflow area is far more likely to exceed the one-time overhead of sorting the file.
• Under Unix, memory-map the index file whenever possible. Under Windows, use one of the other techniques (read-only, MEM:, local copy) in any program that does not need to update the file. (A generic illustration of why memory mapping helps appears in the sketch below.)
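The following sketch is not A-Shell code: A-Shell performs any index-file mapping itself, and the file name, record size, and use of Python's mmap module here are purely illustrative assumptions. It only shows, in generic terms, why a memory-mapped, read-only index is cheap to probe repeatedly: after the first touch, lookups are served from memory rather than by separate disk read calls.

    # Generic illustration only; the file name and record size are assumptions.
    import mmap

    with open("customer.idx", "rb") as f:                       # hypothetical index file
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as idx:
            record_size = 64                                     # assumed fixed-length index entries
            first_key = idx[0:record_size]                       # a plain memory read, no explicit I/O call
            print(first_key)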
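As for the 15-versus-3 figure quoted at the start of this section, a rough back-of-envelope calculation is enough to see where it comes from. The sketch below assumes a plain binary search over a sorted 32K-record file for the SERCH case, and a multi-level index with roughly 256 keys per index block for the ISAM case; the branching factor is an illustrative assumption, not a documented ISAM-A parameter.

    # Back-of-envelope comparison of disk reads per lookup (assumptions noted above).
    import math

    records = 32 * 1024                                          # 32K records, as in the example above

    serch_probes = math.ceil(math.log2(records))                 # binary search over a sorted file
    isam_reads   = math.ceil(math.log(records, 256)) + 1         # index levels plus the data block itself

    print(serch_probes, isam_reads)                              # prints: 15 3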