Rewritten July 2020
Peer-to-Peer (P2P) installations, where A-Shell clients share files with a "server" over a P2P network, are easy to set up for small networks, but are prone to performance issues due to the inherent weaknesses and overhead of the P2P architecture. With all the possible versions and sub-versions (Home, Professional, etc.) of Windows, hardware, anti-virus software, etc., it's impossible to say anything definitive that will address all performance problems. But here are some tips that might help:
• Windows Professional vs Home versions: in a business environment, it's best to use the "Professional" version on all the clients.
• Try to get all the workstation clients on the same version of Windows.
• Anti-virus: necessary perhaps, but also the number one cause of performance problems. It's generally easy to rule out (by temporarily disabling it on all the machines) if there's any reason to suspect it; if it turns out to be the culprit, you may need to configure it to be more selective about what it targets. It may be comforting to have the anti-virus scanning your data files constantly, but it's not likely to find anything there, and it can easily cut performance substantially.
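As one way to make the anti-virus more selective, if the machines are using the built-in Windows Defender, the shared data directory can be excluded from real-time scanning with the Add-MpPreference cmdlet from an elevated PowerShell prompt. (The C:\VM path and the ashw32.exe process name below are assumptions; substitute your actual shared directory and A-Shell executable.)

```shell
# Run from an elevated PowerShell prompt on each machine.

# Exclude the shared A-Shell directory tree from real-time scanning
# (path assumed; adjust to your setup).
Add-MpPreference -ExclusionPath "C:\VM"

# Optionally, also exclude file activity by the A-Shell process itself
# (executable name assumed).
Add-MpPreference -ExclusionProcess "ashw32.exe"
```

Third-party anti-virus packages generally offer an equivalent exclusion list in their configuration UI.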
• Privileges: obviously, each client will need full privileges on the shared files and directories. Don't try to rely on "Home groups" for this. Instead, add all of the individual workstation users to the "server" as users, with their passwords. That will go a long way towards avoiding problems with Windows inexplicably denying access to one of the clients. On the "server", make sure to list all the users as having full privileges on the shared directories.
• Limit the shared directory structure to just the \VM directory—i.e. not the entire C: drive.
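The two points above might be sketched with the built-in Windows commands, run on the "server" from an elevated command prompt. (The user name, password, share name, and C:\VM path are placeholders; repeat the user/grant steps for each workstation user.)

```shell
:: On the "server", from an elevated command prompt.

:: 1. Create a local account matching each workstation user,
::    with the same password they use on their own machine
::    (name and password here are placeholders).
net user alice SamePasswordAsOnHerPC /add

:: 2. Share only the \VM directory (not the whole C: drive),
::    granting that user full control on the share...
net share VM=C:\VM /GRANT:alice,FULL

:: 3. ...and full NTFS permissions on the directory tree
::    ((OI)(CI)F = inherited full control; /T applies it to
::    existing files and subdirectories).
icacls C:\VM /grant alice:(OI)(CI)F /T
```

The same settings can of course be made through the Sharing and Security tabs in Explorer; the commands just make the intent explicit.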
• Try the "C" version of A-Shell to see if it runs faster when there are multiple users with the same file open. Some sites report that this has a huge effect; others see little or none.
• Don't try to extend your P2P network over a WAN. That might seem reasonable (using a VPN) for lightweight browsing and file sharing, but it is guaranteed to be terrible for typical applications performing multi-user record I/O. Instead, have the remote users connect via ATE, RDP, or some similar technology that effectively allows the remote user to share the memory and CPU of the "server", with only the screen updates traveling across the WAN. Note that if all the users connect with such a tool, you can turn off file sharing entirely, which will give you another performance boost.