Moving Linc away from MCP to Windows

(OP)
In the next 18 months I have to move our Linc environment from MCP to Windows.
We know that the programmers will have to change some of the database calls.

Has anyone tried this?

We have to move both the development and production environments to Windows.

I would like to use an MS SQL Server 2000 database, since we already have a lot of them running.

/johnny

RE: Moving Linc away from MCP to Windows

You might search your code for any DT; BACKs.

RE: Moving Linc away from MCP to Windows

I have been told that, due to the way relational databases collect records, the DT; GROUP command should be used instead of DT; FROM to get good performance.

A common practice is to have a DW; <condition> BK; END; inside the DT; loop to stop reading records. This works fine for DMSII, but my understanding is that coding this way causes the creation of large cursors.

RE: Moving Linc away from MCP to Windows

I don't know of any problems with DW; ... BK; END; inside the DT; loop, and I have continued the common practice of using it while working in the LINC NT environment for two years. But it is certainly platform-specific.

Using DT; FROM with relational databases may cause performance problems because of the "recordset" style of record access, compared to the "pointer" style of DMSII. DMSII returns each record to your program as the database pointer moves from record to record within the DT; loop. Oracle and MS SQL, however, first make an ordinate-based evaluation of the scope of records your DT; may access, and then return the whole bunch of records, the "recordset", to your logic. This includes locking the whole recordset when the logic updates any record within it.

And more on locking: a DT; FROM ... (GLB.SPACES) SECURE may lock the whole ISPEC or EVENT(!) before any record is updated and hold the lock until the end of your transaction. That is why replacing DT; FROM with DT; GROUP ... (const_keys) FROM (var_keys) is highly recommended during migration.

There is no SQL command providing access in backward key order, so the Oracle implementation of DT; BACK is pretty awkward, even unnatural, with very poor performance. And keep in mind the "recordset" issue above. The best approach is to avoid DT; BACK entirely, or at least to consider replacing it with DT; GROUP ... BACK.

You will definitely face many other issues as well when migrating to the NT environment. Go to the www.btg.org.lv site, register in the LINC USER ON-LINE section, and download the "NT migration study" for more information. Or ask your questions in the LINC Q&A section; they have lots of experience working in the NT environment, which could be very helpful for you.

RE: Moving Linc away from MCP to Windows

Moving from an MCP system involves a number of things to be aware of, and it is not a small topic.

NB: my knowledge is based on the Unix/Oracle platform, but it should be similar to NT/SQL in many respects.

1. EBCDIC vs ASCII
a) These have different collation sequences, so ordering is affected.
b) Data migration is handled for you, except if you store numeric data within alpha columns (e.g. notepad-type structures mapped via GROUPs or ARRAYs).
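To make point 1a concrete, here is a small Python sketch (using the cp500 EBCDIC codec as a stand-in for the MCP character set, an assumption on my part) showing that the same keys sort differently under the two encodings:

```python
# In ASCII, digits (0x30-0x39) sort before letters ('A' = 0x41);
# in EBCDIC (cp500), letters ('A' = 0xC1) sort before digits ('1' = 0xF1).
keys = ["A1", "1A", "AA", "11"]

ascii_order = sorted(keys, key=lambda s: s.encode("ascii"))
ebcdic_order = sorted(keys, key=lambda s: s.encode("cp500"))

print(ascii_order)   # ['11', '1A', 'A1', 'AA']
print(ebcdic_order)  # ['AA', 'A1', '1A', '11']
```

Any index or profile whose ordering matters across the migration needs to be checked against this difference.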

2. GROUP variable initialisation
a) MV; GLB.SPACES <group variable>
will leave spaces in the numeric fields on non-MCP-based Linc systems, so you should use the INIT; command to initialise groups.
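A rough Python sketch of why point 2 bites (the 12-byte layout with a numeric sub-field at offset 8 is purely hypothetical):

```python
# A GROUP mapped as a raw character buffer: an 8-char alpha field
# followed by a 4-digit numeric field (hypothetical layout).
group = bytearray(12)

# The equivalent of MV; GLB.SPACES <group>: the WHOLE buffer is
# space-filled, including the bytes underlying the numeric sub-field.
group[:] = b" " * 12

try:
    value = int(bytes(group[8:12]))  # b'    ' is not a valid number
except ValueError:
    value = None
print(value)  # None: spaces are not valid numeric content

# An INIT;-style zero fill of the numeric part parses cleanly.
group[8:12] = b"0000"
print(int(bytes(group[8:12])))  # 0
```

On MCP the runtime tolerates the space-filled numerics; on the other platforms it does not, hence the INIT; advice.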

3. Files
a) MCP uses the DEPENDENTSPECS attribute, so reading a file whose record length differs from the one defined is handled, whereas non-MCP host systems use the record definition to read a byte stream. This means that if your definition of the file differs from the actual file, you will get either wrong results or a runtime error on the last record.

4. I-O
a) Unique profiles (the NO DUPLICATES ALLOWED option) are not implemented on non-MCP platforms (except for Standard Components / Automaint). This means it is possible to get duplicate records in a MEMO I-O or OUTPUT dataset even when NO DUPLICATES ALLOWED has been set.
b) Descending keys on a profile cause another column (prefixed with an X) to be created holding the two's complement of the ASCENDING column, and this is used to read forwards through the "backward" (X) column.
c) Conditional profiles cause secondary tables to be created. This is normally always true for the EVENT table, but can be controlled for fully spanning profiles via a setting.
d) Reading the tables is the most complex area and requires more expert knowledge of SQL to understand properly. EAE 4.x, which is scheduled for 2005 for the NT environment, will be a huge leap forward in this area and will allow WHERE conditions on the DB query. This should aid performance immensely.
e) There are a lot of misunderstood preconceptions about the changes required to Linc code.

and so the list goes on ...
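Point 4b above, the "X column" trick for descending keys, can be sketched in a few lines of Python (the key domain bound is an assumption for illustration):

```python
# Store a complement of the ascending key value, so that reading an
# index FORWARD on the complement (X) column visits the rows in
# DESCENDING original-key order.
MAX_KEY = 10**6  # assumed upper bound for the key domain

rows = [42, 7, 999, 123]
x_rows = [(MAX_KEY - k, k) for k in rows]  # (complement, original key)

forward_on_x = [orig for _, orig in sorted(x_rows)]
print(forward_on_x)  # [999, 123, 42, 7] - descending original order
```

This is why a migrated table can grow an extra X-prefixed column per descending key: the database only ever has to scan one direction.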

If you require professional help with the conversion, then please e-mail me so that we can discuss this further.

RE: Moving Linc away from MCP to Windows

(OP)
Is anyone running LINC applications on Windows?

RE: Moving Linc away from MCP to Windows

Yes - I am working at a site that has migrated from MCP to Windows - but this was done before my time and therefore I had no direct involvement.  The site is very happy with the migration.

Regards,

MCPLincster.
Don't recognise acronyms (TIA, IIRC, etc.) in my post? Try www.acronymfinder.com

RE: Moving Linc away from MCP to Windows

I'm a Chilean consultant. I've developed some tools for migrating from Linc to Micro Focus COBOL for a local bank.

These tools help migrate the data model, the database, LINC and LIRC programs, and batch COBOL programs.

We migrated one application with 360 LINC transactions and 1,000 COBOL programs in six months with a team of three programmers.

RE: Moving Linc away from MCP to Windows

You might want to review the possibilities that the LION tools bring to the table if this is still an issue for you. I'll be happy to get you more info if you'd like.

RE: Moving Linc away from MCP to Windows

I just recently signed up for this forum, so perhaps this reply is too late...

Our site recently completed an EAE/LINC migration from MCP to Windows.

We process about 250,000 students in our school apps, as well as our HR and Finance apps.

We are running three ES7000s.

For the most part, things run much faster on Windows than they did on our old MCP.

All of our apps are still EAE/LINC based, and we use Component Enabler quite a bit to deploy custom ASP pages over the web.

I would be happy to discuss our experiences with you.

Longtime LINCster...
EAE/LINC UNITE chair
David at TIES
St. Paul, MN

RE: Moving Linc away from MCP to Windows

There seems to be a lot of rumour in the Linc world over using relational databases with Linc.

I have used MS-SQL 2000 but not with a Linc system so I will speak from a Linc / Oracle background.

For a read (DT; LU; etc.), Linc builds a SELECT * FROM <table> WHERE <condition list>.

If you use DT; GROUP (key1, key2) FROM (key3, key4) UNTIL (key5, key6), then key1 and key2 become equality conditions within the SELECT, while the other keys generate nested >= and <= conditions (it gets more complex as you have more and more keys in the FROM/UNTIL part, but I will come back to this later).

The conditions determine the maximum possible size of the CURSOR (known as the HIT SET). The database can be optimised to fetch the entire HIT SET before returning any rows through the CURSOR, or, as is the normal case, it will begin to return rows as soon as it can (and because the indexes created by Linc match the profile, this should be a direct hit on the first row, hence a single IO). So in practice you will normally find that a DT; BK; END; to get a single row is still very quick, even if the hit set is millions of rows.

Oracle has both forward and backward links (like DMSII) in its indexes, so reading backwards should not be unnecessarily slow.

Here comes the complex bit
==========================
If we concentrate on the FROM clause, we can see how the number of ordinates within it affects the SQL generated.

Here is an example with a single ordinate, key1 (:1 is the bound variable):

WHERE :1 >= key1

Now suppose we have two keys (key1, key2) (:1 and :2 are the bound variables).

If you have phasedDB = A+ (I think), you will get two SELECT statements:

1. WHERE :1 = key1 AND :2 >= key2
2. WHERE :1 > key1

If you don't have phasedDB set, it tries to build a single complex SQL statement that, after about two ordinates, gets too complicated for the Oracle parser to optimise, so it starts to do table scans.

Remember, it is NOT
WHERE :1 >= key1 AND :2 >= key2 AND :3 >= key3 etc.
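The distinction can be sketched in Python: resuming a read from a composite key (k1, k2) is a tuple comparison, which the two phased SELECTs together implement, while a naive per-column >= gives the wrong hit set (the keys and rows here are invented for illustration):

```python
# Rows sorted by composite key (key1, key2), resuming from (2, 5).
rows = sorted([(1, 5), (1, 9), (2, 1), (2, 7), (3, 3)])
k1, k2 = 2, 5

# Correct hit set: composite-key (tuple) comparison.
tuple_cmp = [r for r in rows if r >= (k1, k2)]

# What the two phased SELECTs produce, combined:
phase1 = [r for r in rows if r[0] == k1 and r[1] >= k2]  # key1 = :1 AND key2 >= :2
phase2 = [r for r in rows if r[0] > k1]                  # key1 > :1
phased = sorted(phase1 + phase2)

# The naive per-column version over-restricts: it wrongly drops (3, 3).
naive = [r for r in rows if r[0] >= k1 and r[1] >= k2]

print(tuple_cmp)  # [(2, 7), (3, 3)]
print(phased)     # [(2, 7), (3, 3)]
print(naive)      # [(2, 7)]
```

This is why the generator either phases the read into several simple statements or builds one statement whose complexity grows quickly with the number of ordinates.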

So what, you say?
===============
This can cause more SQL statements to be executed (phasedDB set), or complex statements to be generated that don't do what you expect (phasedDB not set).

A SQL SELECT goes through the following steps for execution:
1. Parse statement - CPU
2. Look in SQL buffer cache for previous use - IO
3. If not there,
      - create execution plan - CPU
      - store in SQL buffer cache - IO
4. Execute SQL - expected IO
5. Read from cursor - CPU
6. Close cursor - CPU

So lots of small SQL statements are VERY expensive, but a few large SQL statements are VERY efficient.

This is why SQL uses JOINs to return all the data in one SELECT.

example

DT; GROUP CUST-001 FROM ... UNTIL ...
    DT; EVERY ADDR-001 (CUST.CUST-NO)
        BK;
    END;
END;

For 1,000,000 rows, this will do 1,000,001 SQL SELECTs,
while in native SQL you would do

SELECT *
FROM CUST
    ,ADDR
WHERE ADDR.CUST-NO = CUST.CUST-NO
 ...

which is just 1 SQL select.  This makes a HUGE difference.
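The N+1 pattern versus the single JOIN can be demonstrated with SQLite from Python (the CUST/ADDR schema and data here are invented, loosely following the example above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE cust (cust_no INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addr (cust_no INTEGER, city TEXT);
""")
db.executemany("INSERT INTO cust VALUES (?, ?)",
               [(1, "Acme"), (2, "Bravo")])
db.executemany("INSERT INTO addr VALUES (?, ?)",
               [(1, "Paris"), (1, "Lyon"), (2, "Oslo")])

# N+1 pattern: one SELECT per customer inside the loop,
# like the DT; EVERY inside the DT; GROUP above.
n_plus_1 = []
for cust_no, name in db.execute("SELECT cust_no, name FROM cust"):
    for (city,) in db.execute(
            "SELECT city FROM addr WHERE cust_no = ?", (cust_no,)):
        n_plus_1.append((name, city))

# Single JOIN: the whole result in one statement.
joined = db.execute("""
    SELECT c.name, a.city
    FROM cust c JOIN addr a ON a.cust_no = c.cust_no
""").fetchall()

print(sorted(n_plus_1) == sorted(joined))  # True: same rows, 1 statement vs N+1
```

The result sets are identical; the difference is purely in how many parse/execute round trips the database has to perform.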

Also, Linc does a SELECT *, which returns every column, even though you may only be interested in one or two. This drastically slows down the CURSOR.

There are many more things but this will give you a taste.

GSDs
======
You may think I have lost the plot, as we are talking about performance here. Well, tests performed many years back showed that GSD initialisation was one of the main performance killers for online screens.

I cannot tell you any details about the upcoming EAE4.x.NET (due to signed disclosure agreements), but the boys in Australia are doing their best to tackle many of these issues.

You can get around most of the performance issues by using native SQL statements (EAE3.1 and higher) where required.

Also consider using Oracle sequences instead of the old method of reading a table with SECURE, adding one, and flagging it back, as that method will cause a locking problem if you ever want to run multiple copies of the same report.
Unisys wrote a library that can be called to use Oracle sequences. I did it by creating an Oracle package that returned the sequence, then calling that within a view that replaced the old Linc table (NOT for beginners).


Have fun.
