
A-Shell Reference

In older versions of A-Shell, random files were limited to 2 GB. That is no longer the case, and the new limit is too large to worry about. Note, however, that sequential files, as well as old ISAM files, remain limited to 2 GB.

BASORT also remains limited to 2 GB. You can, however, use the third-party Optech sort routine, which is much faster anyway; contact us for details if you have large files to sort or complex sorting requirements not handled by BASORT.

Also implemented is a runtime option that allows random files to grow record by record, much like sequential files or ISAM-A files. The initial motivation was simply to get around the 2 GB limit on sequential files for ISMUTL dump/load operations, but it may be useful in a variety of situations. Notes:

Enable the mode by setting bit GOP_EXTFIO (&h02) in the first bank of options and bit GOP2_AUTOX_RAN (&h01000000) in the second bank, using MX_GETOPTIONS and MX_SETOPTIONS:

xcall MIAMEX, MX_GETOPTIONS, options1, options2     ! retrieve the current option banks
options1 = options1 or GOP_EXTFIO                   ! enable extended file I/O
options2 = options2 or GOP2_AUTOX_RAN               ! enable auto-extending random files
xcall MIAMEX, MX_SETOPTIONS, options1, options2     ! store the updated options

Once set, the mode is available to all random files. (You can also set EXTFIO using SET EXTFIO or OPTIONS=EXTFIO.)
Since files grow record by record, you must use span'blocks mode to open the file, i.e.:

OPEN #1, FNAME$, RANDOM, RECSIZE, RECVAR, span'blocks

To create the initial file, you can use CREATE.LIT or ALLOCATE (to create a 1-block file), or MAKE.LIT to create a 0-block file. Or you can open the file for sequential output, then close it (also creating a 0-block file).
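
For instance, a minimal sketch of the sequential-output approach (the file name and channel number are just examples):

open #2, "BIGDATA.DAT", output      ! sequential output open creates the file with 0 blocks
close #2                            ! after closing, it can be opened as an auto-extending random file
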
To write to the file, use the normal random file WRITE statements. If you attempt to write past the current end of the file, the file will be extended as needed (on a record-by-record basis); see the sketch after the next note.
To read from the file, use the normal random file READ statements. If you attempt to read past the end of the file, you will get the normal illegal record error (#31).
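
As a rough sketch of the two preceding notes (assuming the EXTFIO/AUTOX_RAN options have already been set as shown above and the file was created empty as described; the file name, channel, and record layout are purely hypothetical):

map1 REC, S, 80                              ! one 80-byte record (example layout)

open #1, "BIGDATA.DAT", random, 80, RECNO, span'blocks
RECNO = 1000                                 ! well past the current end of the file
REC = "example data"
write #1, REC                                ! file is extended as needed, record by record
read #1, REC                                 ! reading record 1000 back is fine
RECNO = 999999
! read #1, REC                               ! a read past the end of the file would raise error #31
close #1
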
To determine exactly how many records are in the file, use XCALL SIZE, file, bytes and divide the size in bytes by the record size. (Using LOOKUP may be misleading since it returns the size as an integer number of blocks, although the actual size may not be an even multiple of 512 bytes.)
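
For example, continuing the hypothetical 80-byte-record file from the sketch above:

xcall SIZE, "BIGDATA.DAT", BYTES             ! exact file size in bytes
RECORDS = BYTES / 80                         ! e.g. 80,000 bytes / 80-byte records = 1000 records
print "Records in file: "; RECORDS
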
Since fundamentally (from the OS point of view) there is no difference between our "random" and "sequential" files, there is nothing preventing you from later accessing the file as a sequential file. However, you cannot change from random to sequential access without closing and reopening the file.
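
A minimal sketch of switching to sequential access on the same hypothetical file:

close #1                                     ! the random open must be closed first
open #1, "BIGDATA.DAT", input                ! then reopen the same file for sequential input
! ... process it with the normal sequential statements ...
close #1
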
See the test program autox.bp in SOSLIB:[908,39] for an example.

Old File Size Limits

Prior to the removal of the 2 GB limit, the documentation said the following about file sizes:

Most versions of A-Shell, being 32-bit applications, have a file size limit of 2 GB. But we do offer a special version of A-Shell/Linux which eliminates that limitation, provided the kernel also supports the LFS (Large File Support) option. Contact MicroSabio for details.

To determine whether your version of A-Shell/Linux supports large files, launch it with the -d switch. If Large File Support is available, a message to that effect will be displayed.