SQL/A-Shell data files Synchronization Schemes #31076 16 Feb 10 08:52 AM
Stephen Funkhouser (OP)
I'm working on finalizing a synchronization scheme between A-Shell data files and SQL, and I'm curious what thoughts anyone else might have.

Currently, my scheme consists of setting MX_FILEHOOKS on the A-Shell files to be sync'd and having it execute XCALL SQLSYN, which handles the sync. This is pretty straightforward, but there are several issues still to work out (a rough sketch of the SQL side follows the list):

1. How to handle errors in sync'ing?

2. Since the filehook SBX:SQLSYN executes as a subroutine of the file event, it holds the user's process until it finishes. So, how should table locking and waiting to acquire a lock be handled?

3. Should I use a separate background A-Shell process to handle sync'ing?
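
(For context, here's a minimal, purely hypothetical sketch of the SQL a direct in-hook sync might issue; the table, column, and parameter names are invented. Since SBX:SQLSYN runs inside the user's file event, any lock this statement has to wait on is time the user sits blocked, which is what question 2 is about.)

-- Hedged illustration only: a direct per-record sync issued from the hook.
-- If Customer is locked by another writer, this statement blocks, and so
-- does the user's A-Shell process that fired the file event.
UPDATE Customer
SET    CustName = @CustName,
       Balance  = @Balance
WHERE  CustNo   = @CustNo;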

Thanks in advance for anyone's thoughts or opinions.


Stephen Funkhouser
Diversified Data Solutions
Re: SQL/A-Shell data files Synchronization Schemes #31077 16 Feb 10 09:37 AM
Steve - Caliq
Hi Stephen,

I came across the same questions/issues when I was updating my external SQL tables with the FileHooks and our DVerto, and I really did not want to hold up or slow down the user when the update/filehook occurred. So the filehook SBX:SQLSYN executes, just writes a log file entry with the data file and record number information etc., and then moves on. Then you can have a separate job or background process, as you say, to take the log file and process the relevant changes against the SQL Server tables.
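
Just to illustrate the second half of that (this is not how DVerto actually does it; table, column, and parameter names here are hypothetical), once the background job re-reads the logged record from the A-Shell file, the statement it runs against SQL Server could be a simple upsert along these lines:

-- Hypothetical per-change upsert run by the background job after it
-- re-reads the A-Shell record named in the log entry.
MERGE Customer AS tgt
USING (SELECT @CustNo AS CustNo, @CustName AS CustName, @Balance AS Balance) AS src
    ON tgt.CustNo = src.CustNo
WHEN MATCHED THEN
    UPDATE SET CustName = src.CustName, Balance = src.Balance
WHEN NOT MATCHED THEN
    INSERT (CustNo, CustName, Balance)
    VALUES (src.CustNo, src.CustName, src.Balance);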

That's roughly how we have it running so far.

By the way, my filehook errors are also output to a different log file - not that I've had/seen any yet!

Re: SQL/A-Shell data files Synchronization Schemes #31078 16 Feb 10 10:10 AM
Stephen Funkhouser (OP)
Thanks for your input, Steve.

I think I'm going to do the following:

For instance, sync'ing a customer:
1. Create a Customer table and a SyncCustomer table in the database. SyncCustomer will have all the same columns as the Customer table, plus some event-related logging columns, plus a trigger to actually update the Customer table.

2. MX_FILEHOOKS calls SBX:SQLSYN, which inserts a new tuple into the SyncCustomer table. If an error occurs in SQLSYN I'll log that. On a successful SyncCustomer insertion, a trigger executes and updates the Customer table.

So, at this point there's no direct updating of the Customer table from A-Shell. I think I like this best because the SyncCustomer table will be much faster than a log file. Plus, it'll later allow the reverse to occur: when I'm updating the SQL side I'll update the sync table (a trigger will update the data table), and then an A-Shell process (either already running, or perhaps launched by a db trigger) will update the A-Shell data files from the sync table.
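
To make that concrete, here's a rough sketch of what the SyncCustomer staging table and its trigger might look like (SQL Server syntax assumed; apart from the Customer/SyncCustomer names, everything here is hypothetical):

-- Staging table: mirrors the Customer columns plus event logging.
CREATE TABLE SyncCustomer (
    SyncId    INT IDENTITY(1,1) PRIMARY KEY,
    SyncedAt  DATETIME NOT NULL DEFAULT GETDATE(),  -- event-related logging
    CustNo    INT NOT NULL,                         -- columns mirrored from Customer
    CustName  VARCHAR(60),
    Balance   DECIMAL(12,2)
);
GO

-- Trigger: pushes each staged row into the real Customer table.
CREATE TRIGGER trg_SyncCustomer_Ins ON SyncCustomer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Update Customer rows that already exist...
    UPDATE c
    SET    c.CustName = i.CustName,
           c.Balance  = i.Balance
    FROM   Customer AS c
    JOIN   inserted AS i ON i.CustNo = c.CustNo;

    -- ...and insert the ones that don't.
    INSERT INTO Customer (CustNo, CustName, Balance)
    SELECT i.CustNo, i.CustName, i.Balance
    FROM   inserted AS i
    WHERE  NOT EXISTS (SELECT 1 FROM Customer AS c WHERE c.CustNo = i.CustNo);
END;

With something like that in place, SBX:SQLSYN only has to insert the record image into SyncCustomer and log any error; the trigger takes care of the Customer table.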


Stephen Funkhouser
Diversified Data Solutions
Re: SQL/A-Shell data files Synchronization Schemes #31079 16 Feb 10 10:20 AM
Steve - Caliq
Sounds like a good plan to me :) Keep us updated on how it goes.

Re: SQL/A-Shell data files Synchronization Schemes #31080 19 Feb 10 09:44 AM
Anonymous (Unregistered)
I just want to add another thought.

We export to an SQL subsystem using the same idea of an import table.

During a bi-monthly (billing) process I write a very long fixed-field character record that includes sale, revenue, salesman, commission, and client data. There is intentionally a lot of data duplication within the multiple records of an invoice, but it avoids multiple files.

MS SQL Server has a process to import this character file into the IMPORT'TBL, which has all of the matching fields in one table. There is some integrity checking during that process. There is also a log file of records that fail to import.

However, the major checking is done on this import table before ANY data is moved to the multiple tables that make up the database. Immediately before this file is posted, there is a backup of the whole database. So basically the pre-posting error checking and DB backup make this our COMMIT mechanism. Also, the records in the import table are marked as they are successfully processed, so that if there is a problem we can fix and reprocess as needed.
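
A rough sketch of that posting/marking pass, purely for illustration (the real import table is the IMPORT'TBL mentioned above; the table, column, and flag names here are hypothetical):

BEGIN TRANSACTION;

-- Move not-yet-posted rows from the import table into one of the
-- destination tables that make up the database.
INSERT INTO InvoiceLines (InvoiceNo, LineNo, Amount, Salesman)
SELECT InvoiceNo, LineNo, Amount, Salesman
FROM   ImportTbl
WHERE  Posted = 0;

-- Mark those rows as successfully processed so a failed run can be
-- fixed and re-posted without duplicating earlier work.
UPDATE ImportTbl
SET    Posted = 1, PostedAt = GETDATE()
WHERE  Posted = 0;

COMMIT TRANSACTION;

As described above, the full database backup taken immediately before this runs is what really serves as the COMMIT/rollback point.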

I realize this is not a process for constant syncing, but it serves our accounting and management needs for SQL access to our very large A-Shell in-house system that was started 25 years ago on an AM1000. We grew to 5 AM6000s, with the associated maintenance costs and issues, so we then shifted to five A-Shell instances with five separate sets of data on one IBM AIX box.

We need SQL, Crystal, and MS Access to all be able to create the ever-changing department-dependent solutions. So I wrote the export and the SQL guys did the rest. This is not meant to be a warehouse, because it is not ALL of the data in the parent system.

Our special report requests have gone down considerably.

