I think the same problem occurs, to varying degrees, in all filesystems, although it's not very easy to identify the magic limit. The basic issue is that directories are not indexed like ISAM files. In AMOS, I think there was just one link to the start of the directory, and each directory block was linked to the next. So not only is the search linear, but you've also got a disk seek operation for each block (of N files). Newer filesystems have added various refinements, but I don't think any of them have resorted to full indexing. And while memory mapping and disk caching have eliminated a lot of the physical disk seeks, aside from directly accessing a single file (e.g. an OPEN statement, or command execution), most directory operations still require a linear scan of the directory. So it's more or less unavoidable that the larger the directory gets, the more overhead there will be.
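Just to put some rough numbers on that, here's a toy Python sketch of the linked-block layout described above. Everything in it is made up for illustration (the 16-entries-per-block figure, the file names); the point is just that lookup cost grows with the number of blocks, i.e. linearly with directory size.

```python
# Toy model of an AMOS-style directory: a chain of fixed-size blocks,
# each holding a handful of entries plus a link to the next block.
# Finding one file means walking the chain block by block, and in the
# worst case each block visit is a physical disk seek.

BLOCK_ENTRIES = 16  # assumed entries per block, purely illustrative

def lookup(blocks, name):
    """Linear scan: count how many blocks we touch before finding name."""
    reads = 0
    for block in blocks:      # following the next-block link
        reads += 1
        if name in block:
            break
    return reads

# 50,000 files at 16 entries/block is ~3,125 blocks; an average lookup
# touches ~1,560 of them, versus a handful for a fully indexed directory.
files = [f"FILE{i:05d}.RUN" for i in range(50_000)]
blocks = [set(files[i:i + BLOCK_ENTRIES])
          for i in range(0, len(files), BLOCK_ENTRIES)]
print(lookup(blocks, "FILE49999.RUN"), "block reads")
```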
Another consideration is how often the directory is accessed. Directories containing your main data files and programs (e.g. SYS:, CMD:, BAS:, [p,0], etc.) are going to get a lot of accesses, which on the upside means they'll tend to remain cached, but on the downside means a lot of linear searching. Directories used for archival, or perhaps for reports, might get much less access, so maybe you don't need to worry about them as much.
Personally, I would try to keep the active directories below, say, 2000 files. But I often run into sites that have ten thousand or even a hundred thousand files in a directory. Doing a wildcard search like DIR ABCD*.RUN on a directory with fifty thousand files can take many seconds on a busy system, especially if the directory hasn't been fully cached.
Typically, the main cause of these giant directories is the creation of report or other output files with unique names (timestamped, sequentially numbered, user-associated, etc.). Depending on the situation, a few suggestions for managing this would be:
- Create one or more special ersatz directories, e.g. REPORTS: or LOGS:, and direct your output files there. You can then set up a scheduled task to erase them after, say, a week, which lets you review/reprint the files for a reasonable period without letting them accumulate indefinitely (see the cleanup sketch after this list).
- Use native hierarchical directories, rather than PPNs, for such files. You can create a standard function to generate an appropriate directory tree (e.g. /REPORTS/CCYY/MM/) and just incorporate that into your standard open-file-for-output routine (also sketched below).
- If you're in the hardware business, embrace the problem and use it to convince your customers to keep buying faster and bigger servers!
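For the first suggestion, the scheduled cleanup task can be almost trivial. Here's a minimal Python sketch, assuming the ersatz directory maps to an ordinary host path (the /reports path and one-week window are just placeholders; substitute whatever your REPORTS: ersatz actually points at, and run it daily from cron or your scheduler of choice):

```python
import os, time

REPORT_DIR = "/reports"        # assumed host path behind REPORTS:
MAX_AGE = 7 * 24 * 3600        # keep files for one week (in seconds)

now = time.time()
for name in os.listdir(REPORT_DIR):
    path = os.path.join(REPORT_DIR, name)
    # Only plain files older than the review window get erased.
    if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
        os.remove(path)
```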
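And for the second, the helper really only needs to build the CCYY/MM path and create it on first use. A sketch of the idea (the report_path function name and /REPORTS root are hypothetical, and the real version would presumably live in your standard open-file-for-output routine rather than in Python):

```python
import os
from datetime import date

def report_path(filename, root="/REPORTS"):
    """Return root/CCYY/MM/filename, creating the directories as needed."""
    today = date.today()
    subdir = os.path.join(root, f"{today:%Y}", f"{today:%m}")
    os.makedirs(subdir, exist_ok=True)   # harmless if it already exists
    return os.path.join(subdir, filename)

# e.g. report_path("DAILY.LST") -> /REPORTS/<CCYY>/<MM>/DAILY.LST
```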