23A RPGTipsAndTechniques - Key
Jon Paris
Jon.Paris @ Partner400.com
www.Partner400.com
www.SystemiDeveloper.com
Notes
About Me:
I am the co-founder of Partner400, a firm specializing in customized education and mentoring services for IBM i (AS/400, System i, iSeries, etc.) developers. My career in IT spans 45+ years, including a 12-year period with IBM's Toronto Laboratory.
Together with my partner Susan Gantner, I devote my time to educating developers on techniques and technologies
to extend and modernize their applications and development environments. Together Susan and I author regular
technical articles for the IBM publication, IBM Systems Magazine, IBM i edition, and the companion electronic
newsletter, IBM i EXTRA. You may view articles in current and past issues and/or subscribe to the free newsletter at:
www.IBMSystemsMag.com. We also write frequently for IT Jungle's RPG Guru column (www.itjungle.com).
We also write a (mostly) monthly blog on Things "i" - and indeed anything else that takes our fancy. You can find the
blog here: ibmsystemsmag.blogs.com/idevelop/
Feel free to contact me any time: Jon.Paris @ partner400.com
Ctl-Opt (H Spec)
Compiler Directives
Integers
Basic Thoughts
Be Free
• Code EVERYTHING in free form
• Consider changing old programs over to free when making
significant modifications
✦ Particularly if it is already in RPG IV
• There are excellent (and cheap) conversion tools out there
✦ And very, very few compatibility issues
Use proper names
• If you use a good editor like RDi there's no increase in typing
• Use customerName not custNam not cusNo not wkCsNo
✦ messageType not msgTyp
Speak the same language as the rest of the world
• Table not Physical File
• Index or View not Logical File
H-spec / Ctl-Opt
These have a new lease on life in RPG IV
• They can supply defaults for dates/times in the program
• Control debugging options
• Compiler options - to ensure the same ones are always used
The compiler stops looking once the first of these is found
• A Ctl-Opt (or H-spec) included in your source
• A data area named RPGLEHSPEC in *LIBL
• A data area named DFTLEHSPEC in QRPGLE
Tip: DO NOT use the data areas!
• If you want standard set of options then use a /Copy or /Include
Notes
Multiple options are separated with a colon (:) – e.g. OPTION(*SRCSTMT : *NODEBUGIO)
With *SRCSTMT specified, the statement number reported when an error occurs during run time will correspond
directly to the SEU sequence number. Without this support, the statement number reported did not correlate directly
to the source statement numbers. Therefore, support of end user problems was much more difficult. Many support
desks kept compiler listings of all programs just to be able to match the program statement numbers to SEU
statement numbers.
*NOSRCSTMT (default behaviour) indicates that the compiler just "makes them up" and assigns line numbers
sequentially.
If *SRCSTMT is specified, statement numbers for the listing are generated from the source ID and SEU sequence
numbers as follows: stmt_num = source_ID * 1000000 + source_SEU_sequence_number
If *DEBUGIO is specified (the default), breakpoints are generated for all input and output specifications. This means that during debug sessions, doing a Step function on an I/O operation requires many steps (one for each field in the format). *NODEBUGIO indicates that no breakpoints are to be generated for these specifications, so a Step moves past the whole I/O operation in one go.
/if defined(*CRTBNDRPG)
ctl-Opt dftActGrp(*NO) actGrp('MYACTGRP');
/endIf
ctl-Opt bndDir('MYBNDDIR');
ctl-Opt option(*srcStmt : *noDebugIO);
ctl-Opt datFmt(*ISO) timFmt(*ISO) datEdit(*YMD-);
ctl-Opt decPrec(63);
Integer Details
Integers are identified in different ways in different places
• It depends on the intended audience for the documentation
• RPG, C and Java use different notations to those for APIs
Notes
[Memory layout of a varying-length field: bytes 1-2 hold the current length in binary (0016 here), and the bytes that follow hold the data "HERE IS THE DATA"]

%Size would return 2,050:

dcl-ds *n;
  webOut     varChar(2048);
  webDataLen int(5)     overlay(webOut);
  webData    char(2048) overlay(webOut : *next);
end-ds;
Notes
Any program that builds a string (for example, for a web page or a CSV file) can see huge performance improvements when varying-length fields are used instead of fixed-length fields. The reason is that, as long as you trim a string when it is loaded into a variable-length field, you never need to trim it again.
Compare this with fixed length fields where to add to an existing field you have to basically say:
string = %TrimR( string ) + newStuff;
And do this for each new piece of data to be added to the string. The longer the string, the more inefficient this
process is.
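As a sketch (field names are hypothetical), the varying-length version of that process never needs to trim the target field:

```rpgle
// Hypothetical sketch: building a CSV line with a varying-length field.
// Because csvLine always knows its own current length, no %TrimR of the
// target is ever needed - only the fixed-length source fields are trimmed.
dcl-s csvLine  varchar(32000) inz('');
dcl-s custName char(30);
dcl-s custCity char(24);

csvLine += %trimR(custName) + ',';
csvLine += %trimR(custCity);
```

The longer csvLine grows, the bigger the saving compared with re-trimming a fixed-length target on every concatenation.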
dcl-f MthSales;

dcl-ds SalesData;
  Q1;
  Q2;
  Q3;
  Q4;
end-ds;

DDS for the MthSales physical file:

CUSTOMER    4
STREET     32
CITY       24
STATE       2
DIVISION    2
Q1          7 2
Q2          7 2
Q3          7 2
Q4          7 2
K CUSTNO
Notes
The inspiration for this example comes from a commonly asked question on RPG programming lists: “How do I
directly load fields from a database record into an array?” The question normally arises when handling old databases
that are not normalized. Typically these are from old S/36 or S/38 applications. The type of record I mean contains a
series of related values - for example, sales figures for January, February, ..., December.
To make our example fit on the page we're not going to show 12 months, because it wouldn't fit on the chart!
Hopefully the example of sales figures for four quarters will give you the idea of how it all works. The DDS for the
physical file is shown. One solution is depicted here. We'll look at a slightly different solution on the next chart. Our
objective is to access the individual fields Q1-Q4 as an array of four elements after reading a record from the file.
Notice that we’ve incorporated the Quarterly sales fields into the DS by specifying their names. No length or type
definition is required.
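One way the array access could be completed - a hypothetical sketch, assuming the DDS quarterly fields default to packed(7:2) - is to lay a four-element array over the same storage:

```rpgle
// Hypothetical sketch: Q1-Q4 viewed as a 4-element array.
// Assumes the quarterly fields are packed(7:2) and that Q1 starts at
// position 1 of the DS (only Q1-Q4 were brought into the DS).
dcl-f MthSales;

dcl-ds SalesData;
  Q1;
  Q2;
  Q3;
  Q4;
  quarter packed(7: 2) dim(4) pos(1);   // overlays Q1-Q4
end-ds;

dcl-s yearTotal packed(9: 2);

read MthSales;                // Q1-Q4 are loaded as normal input fields
yearTotal = %xfoot(quarter);  // ... but can now be processed as an array
```

Because the DS is not qualified, the file's Q1-Q4 fields and the DS subfields are the same storage, so a plain READ populates the array.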
dcl-ds displayControls;
  F3_Exit         ind     pos(3);
  F12_Cancel      ind     pos(12);
  errorIndicators char(3) pos(31);
  error           ind     pos(31);
  startDateError  ind     pos(32);
  endDateError    ind     pos(33);
end-Ds;

if (F3_Exit or F12_Cancel);

Remember ... *IN32 means NOTHING!
Notes
The ONLY place where there is an "excuse" for using numbered indicators is in O-specs. And even that is a poor
excuse for new code as you should be using externally described printer files!
The "IBM approved" method of naming indicators is the Indicator Data Structure (INDDS) option for files. But that
only works with externally described files - so program described printer files cannot take advantage of the capability.
Also INDDS creates a separate set of 99 indicators for each structure used. That can be useful but it also means that
INDDS can be hard to use in existing code where *INnn indicators are already in use since indicator 99 in a display
file that uses INDDS is NOT the same thing and *IN99. I'll look at how to address that problem on the next chart.
Notes
Note that I used a group field to define the set of indicators 31 - 33 as the single field errorIndicators, defined as Char(3) Pos(31). This allows me to simply code things like Clear errorIndicators;. I use this technique for subfile control indicators, together with constants named DISPLAYSUBFILE and CLEARSUBFILE that are defined as the appropriate patterns of character 1s and 0s. It helps to make the code far more readable.
Unlike the INDDS approach, these named indicators DO directly affect the content of their corresponding *IN indicator. So, using the above example, if we code error = *On then indicator *IN31 is turned on. This often makes this a better approach for those who use program described (i.e. O-spec) based files rather than externally described printer files.
Those of you who use the *INKx series of indicators to identify function key usage need not feel left out. A similar
technique can be used. In this case the pointer is set to the address of *INKA. The other 23 function key indicators
are in 23 bytes that follow. IBM have confirmed many times that for RPG IV this will always be the case.
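A hypothetical sketch of that *INKA-based technique (the indicator names are my own invention):

```rpgle
// Hypothetical sketch: naming the *INKx function-key indicators.
// *INKA and the 23 indicators that follow it occupy contiguous bytes,
// so a based DS mapped onto the address of *INKA can name them.
dcl-ds funcKey based(pFuncKey) qualified;
  F3_Exit    ind pos(3);    // *INKC
  F12_Cancel ind pos(12);   // *INKL
end-ds;
dcl-s pFuncKey pointer;

pFuncKey = %addr(*inka);

if funcKey.F3_Exit or funcKey.F12_Cancel;
  return;
endif;
```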
Notes
Templates are an incredibly useful way of ensuring consistency in definitions between programs.
They can also have standard initialization values, which will be inherited by any DS that "clones" them if the Inz(*LikeDS) option is used.
I use these all the time for defining DS that are used as parameters. Being able to simply code LikeDS(xxxxx) on a parameter definition is such an easy way of ensuring consistency, and in the called routine it avoids the need to define the DS, as the individual fields can be referenced directly.
The use of _T at the end of the name makes it obvious that my LikeDS reference is to a template. It was a
convention I first encountered in IBM documentation and have used it ever since.
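A minimal sketch of the convention (names hypothetical):

```rpgle
// Hypothetical sketch: a template DS, following the _T naming convention.
dcl-ds address_T qualified template inz;
  street char(32);
  city   char(24);
  state  char(2) inz('TX');   // standard initial value, inherited below
end-ds;

// Clone the template for a local variable or a parameter definition.
// Inz(*LikeDS) picks up the template's initialization values.
dcl-ds custAddress likeDs(address_T) inz(*likeDs);
```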
clearMessages();
addMessage('ERR0001': employeeID: %char(salary));
addMessage('ERR9001': *omit: employeeID);
addMessageText('Bad things happened!!!');
if (messageCount() > 0);
for i = 1 to messageCount();
getMessage( i: message);
// Do what you will with the message
endFor;
endIf;
Notes
If you use a standard method for reporting errors, such as this, you can more easily change the interface.
Use a green screen and the errors can be retrieved and placed on the message queue as before. Use a browser
interface and the messages can be retrieved via Javascript/JSON calls. Use the code in a web service and the error
messages can be wrapped in XML/JSON/whatever.
Notes
DS I/O offers many advantages, not the least of which (as with all qualified references) is that there is never any doubt as to the source of the data.
Another benefit is that you can keep multiple copies of a record in the program and easily compare the same field in
multiple records.
In the example that follows we are only going to be comparing the record at the global level - but we could of course compare on a field-by-field basis. For example, you could compare the record images and, if there was a mismatch, go ahead and compare individual fields to see whether the changed fields impact the planned update. If the field that changed between the initial read and the subsequent access is not one that was changed by the user, then perhaps it is fine to go ahead and perform the update.
Dcl-Ds displayInds;
exit_03 Ind Pos(3);
update_06 Ind Pos(6); // Request Update of Record
canUpdate_96 Ind Pos(96); // Update option available
End-Ds;
Notes
Too often master file update programs are written that pay no attention to the record lock created. Locking the record
while the user makes changes is not a problem - as long as the user doesn't stop to take a phone call in the middle
of the update resulting in a 5 minute lock. Even worse is the case where they wander off to grab a coffee, meet a
colleague and chat for 10 minutes. That lock is just a little landmine waiting for some other application to trip over it.
The process used here involves simply reading the record without a lock initially and storing an image of the record.
Once the user has made the changes and submits the update the record is read again (this time with a lock) and the
new record image compared with the one stored. If the two match the update can proceed - if not the user must be
notified, shown the current data, and invited to try again.
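The process just described can be sketched like this (the file, format, and field names are hypothetical):

```rpgle
// Hypothetical sketch of the read-compare-update pattern described above.
dcl-f Customer usage(*update) keyed;
dcl-ds custRec likeRec(CUSTFMT: *all) dim(2);

// 1. Initial read WITHOUT a lock - keep an image of the record
chain(n) (custNo) CUSTFMT custRec(1);

// ... user makes changes on the display; no lock is held meanwhile ...

// 2. Re-read WITH a lock and compare against the saved image
chain (custNo) CUSTFMT custRec(2);
if custRec(2) = custRec(1);
  // Nobody changed the record in the meantime - safe to update
  update CUSTFMT custRec(2);
else;
  unlock Customer;
  // Notify the user, redisplay the current data, invite a retry
endif;
```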
Return;
Dcl-Proc LoadRecord;
Dcl-Pi *N End-Pi;
If %Found(TestFile);
Eval-corr displayRec = custRec(1);
displayRec.message = 'Press F6 to update customer +
or <Enter> to view new customer';
canUpdate_96 = *On;
Else;
displayRec.message = ('** Error: Customer ' + displayRec.ARCode +
' not found **');
EndIf;
End-Proc LoadRecord;
Notes
After the article I wrote on this topic appeared in IT Jungle a reader wrote in and queried my use of a DS array to
hold the record images. He preferred to use two different DS, each based on the same record layout. But unlike my
example that uses *All, in his case he defines one using the input layout and the other the output - which of course
are normally the same thing for a database but RPG can still be fussy about using the right one for the right task.
For him this was less confusing. Perhaps you feel the same but for me I prefer to use the array approach for two
reasons.
First, using an array makes it very obvious that the two structures being compared are identical. With the two-DS approach you have no assurance that both DS used the same definition.
Secondly, the approach I use is an evolution of an earlier approach that used a Multiple Occurrence DS (MODS) to
achieve the same thing. I felt that anyone familiar with that approach would find this one easier to follow.
Monitor;
Read Customer;
If Not %EOF(Customer);
line1 = %SUBST(inputData : %SCAN('***': inputData) + 1);
EndIf;
On-Error 1211; // << Use named constants - but you can hard code
// ... handle file-not-open
On-Error 00100;
// ... handle string error and array-index error
On-Error;
// ... handle all other errors
EndMon;
Notes
A monitor group monitors all of the code between the Monitor operation and the first On-Error operation.
If an error is detected, then control passes to the first On-Error operation etc. If it matches the condition specified
there the specified action is taken and the exception considered to have been handled. If the condition is not
matched then the next On-Error is checked and so on until either the exception has been handled or it is determined
that there is no action specified. At that time the normal RPG exception handling will kick in.
In addition to specific error code values, the special values of *PROGRAM, *FILE and *ALL can be specified.
Monitor;
  Read Customer;
  If Not %EOF(Customer);
    line1 = %Subst(inputData : %Scan('***': inputData) + 1);
  EndIf;
On-Error FILEISCLOSED;
// ... handle file-not-open
On-Error STRINGRANGEERR;
// ... handle string error e.g. line1 = *Blanks
On-Error *All;
// ... handle all other errors
EndMon;
Notes
One of the problems with this type of code is that we often leave future maintenance programmers wondering just
why we were testing for zero. They may not really find out for a page or two - particularly if viewing the program
using the limited visibility of SEU - i.e. a mere 18 lines at a time.
Notes
The nice thing about this approach is that our main line logic doesn't become cluttered by "we hope it never happens
but ..." type code.
If we wished to trap separately for other possible types of error, we could simply add more ON-ERROR operations
together with their associated code. Don't forget that by coding ON-ERROR *ALL (or simply leaving the extended
factor2 field blank) we can supply catch-all coding for any other error that may occur. As the example is written, any
other error will simply blow up with the normal two-line "screen of death" or trigger the PSSR if one is present.
Dcl-F MyFile;
Dcl-DS inputData LikeRec(Record1);

Read Record1 inputData;   // DS I/O - the record arrives as one character move

Monitor;
  Num1 = InputData.InpNum1;
  Num2 = InputData.InpNum2;
  Num3 = InputData.InpNum3;
On-Error DEC_DATA_ERR;
  // Place code here to react to Dec Data Errors
EndMon;
Notes
A few things to note about this example: First, it wouldn't be sufficient to simply put in the MONITOR code without
also adding the DS name to the READ operation. We need to avoid the possibility that a Decimal Data Error could
be triggered on the read itself as the data is moved to its internal storage area. Using DS I/O prevents the error from
being signalled on the READ because the database record is moved as a single large character field to the data
structure named InputData.
In order to ensure that the DS exactly matches the layout of the record buffer, we are taking advantage of the
LIKEREC keyword. When we use LIKEREC the compiler guarantees that the layout of the DS exactly matches the
layout of the record buffer.
Because of the use of the LIKEREC keyword, the DS implicitly becomes a qualified DS. Therefore we use the DS name qualification syntax to specify that we want the data from the DS fields. Note that because of the DS I/O, the "normal" fields (e.g. InpNum1, InpNum2, etc.) will not contain the data from the record. We bypassed those fields by reading the data directly into the DS.
Of course, the code to report and fix the errors can be challenging in an example like this, since there are 3 possible
fields in error. If you only plan to report that the record as a whole is in error, this code will do the job. But if you
want to attempt to identify the specific field in error and take corrective action you will need to do a little more work.
Notes
Until this feature was introduced, there were two ways to specify the key for a keyed operation. Specify the name of
a single field or the name of a KLIST.
KLISTs always annoyed me because you had to wander off elsewhere in the program to actually find the list. Only
then did you know what keys were being used. Not only that, but if you wanted to be able to use all keys, or two
keys, or ... then you needed a separate KLIST for each set of keys.
This new support offers both an improved alternative to the KLIST approach and a new method of directly specifying
the keys on the operation itself.
The new "KLIST" (actually a BIF called %KDS - Key Data Structure) references key definitions in the D specs where
they belong. It uses the *KEY option of LIKEREC. You can use this to automatically generate a DS containing the
file's key fields. This structure can then be referenced in the I/O operation by specifying the DS name to the new
%KDS function.
So how do you specify that a partial key is to be used? Just use the second parameter of %KDS to tell the compiler
how many of the key fields are to be used.
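As a hypothetical sketch (file, format, and field names invented):

```rpgle
// Hypothetical sketch: a key data structure built with LIKEREC(...: *key)
dcl-f OrderHdr keyed;                      // keys: company, orderNo
dcl-ds orderKeys likeRec(ORDHDRF: *key);   // DS contains just the key fields

orderKeys.company = 'A01';
orderKeys.orderNo = 12345;

chain %kds(orderKeys) ORDHDRF;             // use the full key
setll %kds(orderKeys: 1) ORDHDRF;          // partial key: first field only
```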
Notes
The second method is an extension of the current ability to specify a single field as the key (the old Factor 1).
Instead of a single field, you can now supply a list of fields. The list should be specified within parentheses with
colons (:) used to separate the individual key elements.
Note that the key elements do not have to be fields, they can be any character expression. The compiler will
perform conversion if required.
Note that even if you have a single field key, it is often better to enclose it in parentheses because with the
parentheses the field specified is not required to match exactly in type and size, but without parentheses they must
match. This is much like the size & type accommodation made by the compiler with the use of the CONST keyword
in prototypes for parameters.
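Sketched with the same hypothetical file as above:

```rpgle
// Hypothetical sketch: keys listed directly on the operation.
chain ('A01': 12345) ORDHDRF;      // both key fields, as a list

// Even a single key benefits from the parentheses: with them, the
// compiler converts type and size as needed, much like CONST parameters.
setll ('A01') ORDHDRF;             // partial key, first field only
```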
Notes
In many ways this is both one of the best of the new I/O features and one of the most underutilized. You should be using this in EVERY program that is supposed to perform updates on specific fields only.
It provides a great way to protect your code from the worst efforts of (shall we say) "less-gifted" programmers. The
list of fields is specified using the new BIF %Fields. Only those fields specified will be updated.
Why is this so useful? Suppose that, during the operation of the program, only certain fields in the file should be
subject to change. By specifying those fields to the UPDATE op code, you are assured that only those fields will be
changed. If during subsequent maintenance tasks a mistake is made and the value of a field that should not be
modified is accidentally changed in the code, it will have no effect on the database. Only if the %Fields list is also
modified can this error result in database corruption.
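A minimal hypothetical sketch (names invented):

```rpgle
// Hypothetical sketch: only the fields named in %Fields are written back.
chain (custNo) CUSTFMT;
if %found(Customer);
  creditLimit = newLimit;
  update CUSTFMT %fields(creditLimit: lastReview);
  // Any accidental change to other record fields elsewhere in the
  // program has no effect on the database row.
endif;
```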
// This calculation
MonthTotal = MonthTotal + TotalSale;
// can be shortened to this
MonthTotal += TotalSale;
Notes
Numeric operations now support short-form notation for certain functions. Prior to this release, an addition of the
type X = X + 1 required that you repeat the name of the target field. Some people considered this a step backwards
since the old ADD op-code offered a short-form notation that only required the target field to be specified once, in the
result field. e.g. ADD 1 X.
With this new feature, the expression can be written as X += 1. Similar shorthand can be used for subtraction,
multiplication, division, and exponentiation.
Do not make the mistake of coding it like this: X =+ 1. This simply puts a positive value of 1 into X.
Why not ?
MonthTotal += TotalSale;
Notes
This is a basic template for the Trigger Buffer parameter.
The *N fields are reserved filler areas and are not currently used.
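The template itself did not survive in these notes. As a partial, hypothetical reconstruction (consult the IBM i database trigger documentation for the authoritative layout), it begins along these lines:

```rpgle
// Partial, hypothetical sketch of a trigger buffer template - check the
// IBM i trigger buffer documentation for the full, exact field layout.
dcl-ds trgBuffer_T qualified template;
  fileName   char(10);
  fileLib    char(10);
  fileMbr    char(10);
  trgEvent   char(1);    // '1'=Insert '2'=Delete '3'=Update
  trgTime    char(1);    // '1'=After  '2'=Before
  cmtLockLvl char(1);
  *n         char(3);    // reserved filler
  ccsid      int(10);
  // ... offsets and lengths of the before and after record images,
  // and of their null byte maps, follow
end-ds;
```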
Any Questions ?