
Scientific Libraries Reference Manual, Volume 2

004–2081–002
Copyright © 1999 Silicon Graphics, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Silicon Graphics, Inc.

LIMITED AND RESTRICTED RIGHTS LEGEND


Use, duplication, or disclosure by the Government is subject to restrictions as set forth in the Rights in Data clause at FAR 52.227-14 and/or in similar or successor clauses in the FAR, or in the DOD, DOE or NASA FAR Supplements. Unpublished rights reserved under the Copyright Laws of the United States. Contractor/manufacturer is Silicon Graphics, Inc., 1600 Amphitheatre Pkwy., Mountain View, CA 94043-1351.

Autotasking, CF77, CRAY, Cray Ada, CraySoft, CRAY Y-MP, CRAY-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SUPERCLUSTER, UNICOS, and X-MP EA are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, CRAY APP, CRAY C90, CRAY C90D, Cray C++ Compiling System, CrayDoc, CRAY EL, CRAY J90, CRAY J90se, CrayLink, Cray NQS, Cray/REELlibrarian, CRAY S-MP, CRAY SSD-T90, CRAY SV1, CRAY T90, CRAY T3D, CRAY T3E, CrayTutor, CRAY X-MP, CRAY XMS, CRAY-2, CSIM, CVT, Delivering the power . . ., DGauss, Docview, EMDS, GigaRing, HEXAR, IOS, ND Series Network Disk Array, Network Queuing Environment, Network Queuing Tools, OLNET, RQS, SEGLDR, SMARTE, SUPERLINK, System Maintenance and Remote Testing Environment, Trusted UNICOS, and UNICOS MAX are trademarks of Cray Research, L.L.C., a wholly owned subsidiary of Silicon Graphics, Inc.

SGI is a trademark of Silicon Graphics, Inc. IRIX and Silicon Graphics are registered trademarks and the Silicon Graphics logo is a trademark of Silicon Graphics, Inc.

CDC is a trademark of Control Data Systems, Inc. DEC, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. ER90 is a trademark of EMASS, Inc. ETA is a trademark of ETA Systems, Inc. IBM is a trademark of International Business Machines Corporation. MIPS is a trademark of MIPS Computer Systems. UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Limited. X/Open is a registered trademark of X/Open Company Ltd. X Window System and the X device are trademarks of The Open Group.

The UNICOS operating system is derived from UNIX System V. The UNICOS operating system is also based in part on the Fourth Berkeley Software Distribution (4BSD) under license from The Regents of the University of California.
New Features

Scientific Libraries Reference Manual, Volume 2 004–2081–002

No user interface changes were made for this release.
Record of Revision

Version   Description

5.0       March 1989
          Documentation supporting the UNICOS 5.0 release running on Cray Research computer systems.

6.0       January 1991
          Reprint with revision supporting the UNICOS 6.0 release running on Cray Research computer systems.

7.0       August 1992
          Reprint with revision supporting the UNICOS 7.0 release running on Cray Research computer systems.

8.0       August 1993
          Reprint with revision supporting the CrayLibs 1.0 release (asynchronous) that runs on Cray Research systems. In this revision of the documentation, the math library is no longer documented in the same manual as the scientific library. Instead, it is documented in the Math Library Reference Manual, publication SR-2138. The sort and search routines, which were in the UNICOS 7.0 version of the scientific library, were moved to the UNICOS Fortran Library.

8.1       June 1994
          Rewrite to support the CrayLibs 1.1 release (asynchronous) that runs on Cray Research systems. This revision incorporates support for the Cray MPP hardware platform.

8.2       October 1994
          Rewrite to support the CrayLibs 1.2 release (asynchronous) that runs on Cray Research systems. This revision incorporates support for the Basic Linear Algebra Subprograms for shared arrays (BLAS_S).

2.0       December 1995
          Rewrite to support the CrayLibs 2.0 release that runs on Cray Research systems. This revision incorporates support for Scalable LAPACK (ScaLAPACK) and documented support for 32-bit FFT routines. Additional routines were added for FFT and BLAS.

3.0       June 1997
          Rewrite to support the CrayLibs 3.0 release that runs on Cray Research systems. This revision removes support for the Basic Linear Algebra Subprograms for shared arrays (BLAS_S). See the New Features page for more details about additional functionality added at this release.

3.1       August 1998
          Updated to reflect changes in the Programming Environment 3.1 release. The printed text of this manual was made available in PostScript (.ps) format only for this release.

3.3       July 1999
          Updated to reflect changes in the Programming Environment 3.3 release. The printed text of this manual was made available in PostScript (.ps) format only for this release.

About This Guide

This publication documents subprograms and routines available to users of the CrayLibs product, which is included in the Programming Environment 3.3 release. The CrayLibs product contains several libraries; the library routines can be called from source code written in a number of programming languages, including Fortran, C, Pascal, and assembly language. The information in this document supplements information contained in other manuals of the Programming Environment documentation set.

This is a reference manual for application and system programmers. Readers should also have a working knowledge of either the UNICOS, UNICOS/mk, or UNIX operating system and a working knowledge of the Fortran or C programming language.

Documentation Organization
ç5¢`š&¼'›œB`ž š¦ËÆš›™mœB–)`™H–)Ÿrž ¢`šSÁ¡jœBš`žKœBÓ'¡MÍ5œBΛA€›A„¥\€^¼'€£š™Q€¼'¼'š1›[œBÇÊ&Æ?–)§B—5¥\š™
€`¦Š€›3šI£›3–—'¼@š¦€¡D¡D–)›3¦5œB`£Çž –\ž –¼'œB¡D™€¨5ÁššSž ¢`š ÏK̀ÁÐd¥\1Ë¼'€£šSŸK–)›
¦5šžK€œB§B™H€Î–—'žlžK¢`š&¡D–`ž š`ž ™ –Ÿdš€¡D¢ÇÆ?–§B—'¥\š?¨ INTRO_LIBSCI
ð51¡j¢¸ž –¼'œB¡=™š¡jž œB–)Ç€§B™m–\¢`€™[€¸œB`ž ›3–¦5—'¡Dž –›AÂ\¥\€Ë¼'1£?šS¤V¢`œB¡D¢^šñ¼@§B€œB`™ žK¢`š
¡D–)`žKš`ž ™ –)ŸžK¢`šS™mš¡Dž œB–Ç1`¦¼'›3–)ÆœB¦5š™=–)ž ¢`š›RœB`ŸK–)›A¥\€žKœB–)¸€Î–)—'žržK¢`šS—'™m€£šI–)ŸžK¢`–)™mš
›3–)—'žKœB`š™N¨lç5¢`šSŸK–§B§B–)¤VœB`£\œB`žK›3–)¦5—'¡DžK–)›Â^¥\€^¼'€£š™Q€›3šI€Æ?€œB§B€Î§Bš?ø
Ï8Ì1Á2Ð
INTRO_BLACS
Ï8Ì1Á2Ð
INTRO_BLAS1
Ï8Ì1Á2Ð
INTRO_BLAS2
Ï8Ì1Á2Ð
INTRO_BLAS3
ÏK̀ÁÐ
INTRO_CORE
ÏK̀ÁÐ
INTRO_FFT
ÏK̀ÁÐ
INTRO_LAPACK
ÏK̀ÁÐ
INTRO_MACH
Ï8Ì1Á2Ð
INTRO_SCALAPACK
ÏK̀ÁÐ
INTRO_SPARSE

004–2081–002 iii
Scientific Libraries Reference Manual, Volume 2

ÏK̀ÁÐ
INTRO_SPEC
ÏK̀Á2Ð
INTRO_SUPERSEDED

Related Publications
The following manuals document the CrayLibs products. All man pages in these manuals can also be viewed online by using the man command.

• Intrinsic Procedures Reference Manual

• Application Programmer's Library Reference Manual

• Scientific Libraries Ready Reference

• Application Programmer's Library Ready Reference

The following manuals describe the products in the Programming Environment. These publications describe the operating system, input/output (I/O), and other related topics.

• Segment Loader (SEGLDR) and ld Reference Manual

• UNICOS User Commands Reference Manual

• UNICOS User Commands Ready Reference

• Guide to Parallel Vector Applications

• Application Programmer's I/O Guide

In addition to these documents, several documents are available that describe the compiler systems available on UNICOS and UNICOS/mk. Some of these manuals are:

• CF90 Ready Reference

• CF90 Commands and Directives Reference Manual

• Fortran Language Reference Manual, Volume 1

• Fortran Language Reference Manual, Volume 2

• Fortran Language Reference Manual, Volume 3

• Cray C/C++ Reference Manual


The following manuals document the compilers that are available on IRIX systems:

• MIPSpro Fortran 90 Commands and Directives Reference Manual

• MIPSpro Assembly Language Programmer's Guide

• MIPSpro Fortran 77 Language Reference Manual

• MIPSpro Fortran 77 Programmer's Guide

• MIPSpro 64-Bit Porting and Transition Guide
Obtaining Publications
SGI maintains information about available publications at the following URL:

http://techpubs.sgi.com/library

This Web site contains information that allows you to browse documents online, order documents, and send feedback to SGI. You can also order a printed SGI document by calling +1 800 627 9307.

The User Publications Catalog describes the availability and content of all Cray hardware and software documents that are available to customers. Customers who subscribe to the Cray Inform (CRInform) program can access this information on the CRInform system.

SGI maintains information on publicly available Cray documents at the following URL:

http://www.cray.com/swpubs/

This Web site contains information that allows you to browse documents online and send feedback to SGI. To order a printed Cray document, either call +1 651 683 5907 or send a facsimile of your request to fax number +1 651 683 3840. SGI employees may also order printed Cray documents by sending their orders via electronic mail to orderdsk.

Customers outside of the United States and Canada should contact their local service organization for ordering and documentation information.

Conventions
The following conventions are used throughout this document:

Convention     Meaning

command        This fixed-space font denotes literal items such as commands, files, routines, path names, signals, messages, and programming language structures.

variable       Italic typeface denotes variable entries and words or concepts being defined.

user input     This bold, fixed-space font denotes literal items that the user enters in interactive sessions. Output is shown in nonbold, fixed-space font.

In addition to these formatting conventions, several naming conventions are used throughout the documentation. "Cray PVP systems" denotes all configurations of Cray parallel vector processing (PVP) systems that run the UNICOS operating system. "Cray MPP systems" denotes all configurations of the Cray T3E series that run the UNICOS/mk operating system. "IRIX systems" denotes SGI platforms which run the IRIX operating system.

The default shell in the UNICOS and UNICOS/mk operating systems, referred to as the standard shell, is a version of the Korn shell that conforms to the following standards:

• Institute of Electrical and Electronics Engineers (IEEE) Portable Operating System Interface (POSIX) Standard 1003.2-1992

• X/Open Portability Guide, Issue 4 (XPG4)

The UNICOS and UNICOS/mk operating systems also support the optional use of the C shell.

Man page sections


ç5¢`š&š`žK›AœBš™[œBÇžK¢`œB™Q¦'–¡D—'¥\š`žL1›3š&΀™mš¦Š–)^s¡D–¥\¥\–)ÇŸK–›A¥\€ž¨Zç5¢`šSŸ8–)§B§B–)¤VœB`£„§BœB™mž
™m¢`–)¤s™ žK¢`šS–)›3¦5š›h–Ÿ™mš¡Dž œB–`™hœBÇ€¸š`ž ›A„€`¦¦5š™¡D›AœBΚ™Qš€¡D¢Ç™mš¡DžKœB–)r¨5¶–™mžrš`ž ›AœBš™
¡D–)`žK€œB¸–)`§B„s™m—5Ιmšžb–ŸdžK¢`š™šI™mš¡Dž œB–`™€¨

Á š ¡ žœ–  ¢ š ¦ œ £ » š ™ ¡ › œ¼ žœ– 
•VÉH¶ð Á¼'š¡DœBÓ'š™Hž ¢`šS`€¥\šS–)ŸžK¢`šSš`ž ›A„€`¦Î›AœBšï5‡™mžK€ž š™
œBž ™ ŸK—'`¡DžKœB–)r¨
BE
Á L•&ÀMê'Á¾8Á ê'›3š™mš`ž ™ ž ¢`šV™m`žK€ñV–ŸdžK¢`š&š`ž ›ÂN¨
¾K¶‰ê'Í5ð5¶ð5•&ç)ÉPç5¾KÀM• ¾8¦'š`žKœBÓ'š™[ž ¢`šV™m™ž š¥\™ ž –\¤V¢`œB¡D¢Çž ¢`šSš`ž ›Â\€¼'¼'§BœBš™€¨

vi 004–2081–002
About This Guide

Áç)ÉH•V»HÉ=Ãr»=Á ê'›3–Æ?œB¦5š™HœB`Ÿ8–)›A¥\€žKœB–)„€Î–—'žrž ¢`šV¼@–›AžK€ÎœB§BœBž „–)Ÿr


—'žKœB§BœBž Â\–)›h›3–—'ž œB`š?¨
»Hð5Á¿PÃr¾8ê'ç5¾8À=• »HœB™m¡j—'™m™mš™ ž ¢`šSš`ž ›„œBÇ¦'šžK1œB§3¨
•VÀ=ç5ð5Á ê'›3š™mš`ž ™ œBž š¥\™ –Ÿd¼'€›AžKœB¡D—'§B€›RœB¥\¼'–)›ž €`¡Dš?¨
¿/É=½HçZ¾KÀ=•VÁ »Hš™m¡D›œBΚ™ €¡DžKœB–)`™ ž ¢`€žr¡D€Ç¦5š™ž ›3–)Â\¦5€ž ˜–›
¼'›3–)¦5—'¡Dš&—'`¦5š™mœB›3š¦‰›3š™m—'§Bž ™N¨
CÇÉ=Ãr•V¾K•/A Á FI
»Hš™m¡D›œBΚ™ €¡DžKœB–)`™ ž ¢`€žr¡D€Ç¢`€›A¥ ¼@š–¼@§BšÑ
š ?—'œB¼@¥\š`žKÑ@–)›h™m™ž š¥ó™m–)ŸKžK¤V€›3š¨
ð5•.R
N'ÉHÃLN ¾K¾8É=ÃLë5À=Í5•Vð5¶Á ð5•Vç »Hš™m¡D›œBΚ™ ¼'›3š¦5šÓ'`š¦™m¢`š§B§lÆ?€›AœB€Î§Bš™[ž ¢`€ž
¦5šž š›A¥\œB`š&™m–)¥\šS¡D¢`€›A€¡DžKš›œB™mž œB¡j™Q–Ÿž ¢`šS™m¢`š§B§l–›/ž ¢`€ž
€ŸŸKš¡jždžK¢`š&Κ¢`€Æ?œB–)›R–Ÿd™–)¥\š&¼'›3–)£›€¥\™mÑ@¡D–)¥\¥\€`¦5™mÑ
–)›h—'žKœB§BœBž œBš™€¨
Ãrð5ç5½=Ãr• SN'ÉHÍ5½Hð5Á »Hš™m¡D›œBΚ™ ¼'–™m™mœBΧBšS›3šžK—'›AËÆ?€§B—'š™ žK¢`€žlœB`¦5œB¡D€žKšV
§BœBΛA€›„–›/™mÂ?™mž š¥ ¡D€§B§ZšñNš¡D—'žKš¦™m—'¡D¡jš™™mŸK—'§B§BÂDÑ`–)›
œB¦5š`ž œBÓ'š™Hž ¢`šSš›A›3–)›[¡D–`¦'œBžKœB–)¸—'`¦5š›P¤V¢`œB¡D¢ËœBž
ŸK€œB§Bš¦w¨
ð5÷r¾8çŠÁç)É/ç5½=Á »Hš™m¡D›œBΚ™ ¼'–™m™mœBΧBšSšñœBžr™mžK€ž —'™ Æ?€§B—'š™ ž ¢`€žrœB`¦5œB¡D€žKš
¤V¢`šž ¢`š›PžK¢`š&¡D–¥\¥\€`¦¸–›h—'ž œB§BœBžKÂ^šñNš¡D—'žKš¦
™m—'¡j¡Dš™m™mŸK—'§B§B¨
¶ð5ÁÁ2É !A ð5Á »Hš™m¡D›œBΚ™ Bœ `Ÿ8–)›A¥\€žKœB–)`€§BÑ`¦5œB1£?`–)™mžKœB¡DÑ`€`¦š›A›3–)›
¥\š™m™m€£š™ ž ` ¢ €žr¥\€Â\€¼@¼'š€›¨'Áš§BŸKæKšñN¼'§B€`€ž –›AÂ
¥\š™m™m€£š™ €›3šI`–)žd§BœB™ž š¦w¨
ð5ÃrÃLÀ=ÃLÁ »H–¡D—'¥\šžK™ š››3–)›[¡D–)¦5š™€¨ZÉH¼'¼'§BœBš™[–)`§B„ž –\™mÂ?™mž š¥
¡D€§B§B™€¨
èÀ=ÃZç5ÃLÉH• »Hš™m¡D›œBΚ™ ¢`–¤ žK–‡¡j1§B§Zs™mÂ?™mžKš¥ó¡D€§B§ZŸK›3–)¥ è–›AžK›A€r¨
ð5÷rç5ð5•VÁ¾8À=•VÁ É=¼@¼'§BœBš™ –`§BÂҞK–^™m™mžKš¥ô¡j1§B§B™N¨
!A
ë5½ Á ¾8`¦'œB¡j1žKš™ õ`–¤V^Η'£™H€`¦¦'šÓ'¡jœBš`¡jœBš™N¨
ð5÷rÉ=¶ê'Í5ð5Á Á¢`–¤V™ šñN€¥\¼'§Bš™ –)Ÿr—'™m€£š?¨
è¾8Í5ð5Á Í5œB™mžK™QÓ'§Bš™Hž ¢`€žb1›3š&šœBžK¢`š›[¼@€›žL–)Ÿrž ¢`šSš`ž ›Â\–›h1›3š
›3š§B1žKš¦žK–‡œBž ¨

004–2081–002 vii
Scientific Libraries Reference Manual, Volume 2

Áð5ðŠÉHÍ5ÁÀ Í5œB™mžK™Qš`ž ›œBš™Q€`¦Ë¼'—'ΧBœB¡j1žKœB–)`™=ž ¢`€žb¡D–)`žK€œB¸›3š§B€žKš¦


œB`ŸK–›A¥\€ž œB–r¨

Reader Comments
If you have comments about the technical accuracy, content, or organization of this document, please tell us. Be sure to include the title and part number of the document with your comments.

You can contact us in any of the following ways:

• Send e-mail to the following address:

  techpubs@sgi.com

• Send a fax to the attention of "Technical Publications" at: +1 650 932 0801.

• Use the Feedback option on the Technical Publications Library World Wide Web page:

  http://techpubs.sgi.com

• Call the Technical Publications Group, through the Technical Assistance Center, using one of the following numbers:

  For SGI IRIX based operating systems: +1 800 800 4SGI
  For UNICOS or UNICOS/mk based operating systems or Cray Origin 2000 systems: +1 800 950 2729 (toll free from the United States and Canada) or +1 651 683 5600

• Send mail to the following address:

  Technical Publications
  SGI
  1600 Amphitheatre Pkwy.
  Mountain View, California 94043–1351

We value your comments and will respond to them promptly.

CONTENTS

Solvers for dense linear systems and eigensystems

intro_lapack, INTRO_LAPACK .......................... Introduction to LAPACK solvers for dense linear systems ...................... 333
eispack, EISPACK .................................................. Introduction to Eigensystem computation for dense linear systems ......... 349
linpack, LINPACK .................................................. Single-precision real and complex LINPACK routines ............................ 355

Scalable LAPACK
intro_scalapack, INTRO_SCALAPACK ............ Introduction to the ScaLAPACK routines for distributed matrix
computations .............................................................................................. 359
descinit, DESCINIT ............................................. Initializes a descriptor vector of a distributed two-dimensional array ...... 362
indxg2p, INDXG2P .................................................. Computes the coordinate of the processing element (PE) that
possesses the entry of a distributed matrix ............................................... 364
numroc, NUMROC ...................................................... Computes the number of rows or columns of a distributed matrix
owned locally ............................................................................................ 365
pcheevx, PCHEEVX .................................................. Computes selected eigenvalues and eigenvectors of a Hermitian-
definite eigenproblem ................................................................................ 366
pchegvx, PCHEGVX .................................................. Computes selected eigenvalues and eigenvectors of a Hermitian-
definite generalized eigenproblem ............................................................. 374
psgebrd, PSGEBRD, PCGEBRD ............................... Reduces a real or complex distributed matrix to bidiagonal form ........... 382
psgelqf, PSGELQF, PCGELQF ............................... Computes an LQ factorization of a real or complex distributed matrix ... 387
psgeqlf, PSGEQLF, PCGEQLF ............................... Computes a QL factorization of a real or complex distributed matrix ..... 390
psgeqpf, PSGEQPF, PCGEQPF ............................... Computes a QR factorization with column pivoting of a real or
complex distributed matrix ........................................................................ 393
psgeqrf, PSGEQRF, PCGEQRF ............................... Computes a QR factorization of a real or complex distributed matrix ..... 396
psgerqf, PSGERQF, PCGERQF ............................... Computes an RQ factorization of a real or complex distributed matrix ..... 399
psgesv, PSGESV, PCGESV ...................................... Computes the solution to a real or complex system of linear
equations .................................................................................................... 402
psgetrf, PSGETRF, PCGETRF ............................... Computes an LU factorization of a real or complex distributed matrix ... 405
psgetri, PSGETRI, PCGETRI ............................... Computes the inverse of a real or complex distributed matrix ................. 408
psgetrs, PSGETRS, PCGETRS ............................... Solves a real or complex distributed system of linear equations .............. 411
psposv, PSPOSV, PCPOSV ...................................... Solves a real symmetric or complex Hermitian system of linear
equations .................................................................................................... 414
pspotrf, PSPOTRF, PCPOTRF ............................... Computes the Cholesky factorization of a real symmetric or complex
Hermitian positive definite distributed matrix ........................................... 418
pspotri, PSPOTRI, PCPOTRI ............................... Computes the inverse of a real symmetric or complex Hermitian
positive definite distributed matrix ............................................................ 421
pspotrs, PSPOTRS, PCPOTRS ............................... Solves a real symmetric positive definite or complex Hermitian
positive definite system of linear equations .............................................. 424
pssyevx, PSSYEVX .................................................. Computes selected eigenvalues and eigenvectors of a real symmetric
matrix ......................................................................................................... 427
pssygvx, PSSYGVX .................................................. Computes selected eigenvalues and eigenvectors of a real
symmetric-definite generalized eigenproblem ........................................... 434
pssytrd, PSSYTRD, PCHETRD ............................... Reduces a real symmetric or complex Hermitian distributed matrix to
tridiagonal form ......................................................................................... 442
pstrtri, PSTRTRI, PCTRTRI ............................... Computes the inverse of a real or complex upper or lower triangular
distributed matrix ....................................................................................... 446
pstrtrs, PSTRTRS, PCTRTRS ............................... Solves a real or complex distributed triangular system ............................ 449
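The routines in this section operate on distributed matrices laid out in the two-dimensional block-cyclic distribution that ScaLAPACK assumes; NUMROC, for example, reports how many rows or columns of such a matrix a given processing element owns locally. As an illustration of that counting rule only — `numroc_sketch` is a hypothetical name, and the actual calling sequence of NUMROC is the one given on its man page — the bookkeeping can be sketched in C:

```c
#include <assert.h>

/* Number of entries of an n-element dimension, distributed block-cyclically
   in blocks of size nb over nprocs processes, that land on process iproc
   when the distribution starts on process isrcproc. */
static int numroc_sketch(int n, int nb, int iproc, int isrcproc, int nprocs)
{
    int dist = (iproc - isrcproc + nprocs) % nprocs; /* distance from source */
    int nblocks = n / nb;                 /* full blocks in the dimension */
    int count = (nblocks / nprocs) * nb;  /* blocks every process owns */
    int extra = nblocks % nprocs;         /* leftover full blocks */
    if (dist < extra)
        count += nb;                      /* this process gets one more block */
    else if (dist == extra)
        count += n % nb;                  /* this process gets the partial block */
    return count;
}
```

Distributing 10 rows in blocks of 2 over 2 processes, for instance, gives 6 rows to the first process and 4 to the second; summing the sketch over all processes always recovers the global extent, which is one way to sanity-check a chosen block size.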



Solvers for sparse linear systems
intro_sparse, INTRO_SPARSE .......................... Introduction to solvers for sparse linear systems ...................................... 453
dfaults, DFAULTS .................................................. Assigns default values to the parameter arguments for SITRSOL(3S) .... 461
sitrsol, SITRSOL .................................................. Solves a real general sparse system, using a preconditioned conjugate
gradient-like method .................................................................................. 466
ssgetrf, SSGETRF .................................................. Factors a real sparse general matrix with threshold pivoting
implemented .............................................................................................. 482
ssgetrs, SSGETRS .................................................. Solves a real sparse general system, using the factorization computed
in SSGETRF(3S) ....................................................................................... 487
sspotrf, SSPOTRF .................................................. Factors a real sparse symmetric definite matrix ........................................ 489
sspotrs, SSPOTRS .................................................. Solves a real sparse symmetric definite system, using the factorization
computed in SSPOTRF(3S) ....................................................................... 494
sststrf, SSTSTRF .................................................. Factors a real sparse general matrix with a symmetric nonzero pattern
(no form of pivoting is implemented) ....................................................... 496
sststrs, SSTSTRS .................................................. Solves a real sparse general system with a symmetric nonzero
pattern, using the factorization computed in SSTSTRF(3S) .................... 501

Solvers for special linear systems


intro_spec, INTRO_SPEC ................................... Introduction to solvers for special linear systems ..................................... 503
folr, FOLR, FOLRP .................................................. Solves first-order linear recurrences .......................................................... 504
folr2, FOLR2, FOLR2P ........................................... Solves first-order linear recurrences without overwriting the operand
vector ......................................................................................................... 509
folrc, FOLRC ........................................................... Solves a first-order linear recurrence with a scalar multiplier .................. 511
folrn, FOLRN, FOLRNP ........................................... Solves for the last term of a first-order linear recurrence ............................ 513
recpp, RECPP, RECPS ............................................. Solves a partial product or partial summation problem ............................ 516
sdtsol, SDTSOL, CDTSOL ...................................... Solves a real-valued or complex-valued tridiagonal system with one
right-hand side ........................................................................................... 518
sdttrf, SDTTRF, CDTTRF ...................................... Factors a real-valued or complex-valued tridiagonal system .................... 520
sdttrs, SDTTRS, CDTTRS ...................................... Solves a real-valued or complex-valued tridiagonal system with one
right-hand side, using its factorization as computed by SDTTRF(3S)
or CDTTRF(3S) ............................................................................................ 523
solr, SOLR ................................................................ Solves a second-order linear recurrence .................................................... 526
solr3, SOLR3 ........................................................... Solves a second-order linear recurrence for three terms ........................... 528
solrn, SOLRN ........................................................... Solves a second-order linear recurrence for only the last term ................ 531
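The FOLR family of entries above solves first-order linear recurrences, in which each term is a linear function of the term before it. The scalar loop below is an illustrative sketch of the quantity being computed, not of any library calling sequence — the routines' actual argument and sign conventions appear on the individual man pages, and the library versions are restructured for vector hardware rather than written as this serial loop (`folr_sketch` is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>

/* Scalar evaluation of the first-order linear recurrence
       x[0] = b[0],  x[i] = a[i] * x[i-1] + b[i]   (i = 1 .. n-1).
   Each term depends on the previous one, which is the loop-carried
   dependence the library routines are designed to work around. */
static void folr_sketch(size_t n, const double *a, const double *b, double *x)
{
    if (n == 0)
        return;
    x[0] = b[0];
    for (size_t i = 1; i < n; i++)
        x[i] = a[i] * x[i - 1] + b[i];
}
```

With all multipliers equal to 1 the recurrence reduces to a running sum, and with a zero additive term to a running product, which is the connection to the partial summation and partial product problems handled by RECPP and RECPS above.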

BLACS routines
intro_blacs, INTRO_BLACS ............................... Introduction to Basic Linear Algebra Communication Subprograms ...... 535
blacs_barrier, BLACS_BARRIER ..................... Stops execution until all specified processes have called a routine ........... 539
blacs_exit, BLACS_EXIT ................................... Frees all existing grids .............................................................................. 540
blacs_gridexit, BLACS_GRIDEXIT ................. Frees a grid ................................................................................................ 541
blacs_gridinfo, BLACS_GRIDINFO ................. Returns information about the two-dimensional processor grid ............... 542
blacs_gridinit, BLACS_GRIDINIT ................. Initializes counters, variables, and so on, for the BLACS routines .......... 543
blacs_gridmap, BLACS_GRIDMAP ..................... Maps a grid of processors ................................................................... 544
blacs_pcoord, BLACS_PCOORD .......................... Computes coordinates in two-dimensional grids ....................................... 545
blacs_pnum, BLACS_PNUM ................................... Returns the processor element number for specified coordinates in
two-dimensional grids ............................................................................... 546
gridinfo3d, GRIDINFO3D ................................... Returns information about the three-dimensional processor grid ............. 547
gridinit3d, GRIDINIT3D ................................... Initializes variables for a three-dimensional (3D) grid partition of
processor set .............................................................................................. 548
igamn2d, IGAMN2D, SGAMN2D, CGAMN2D ............ Determines minimum absolute values of rectangular matrices ................. 550



igamx2d, IGAMX2D, SGAMX2D, CGAMX2D ............ Determines maximum absolute values of rectangular matrices ................ 552
igebr2d, IGEBR2D, SGEBR2D, CGEBR2D ............ Receives a broadcast general rectangular matrix from all or a subset
of processors .............................................................................................. 554
igebs2d, IGEBS2D, SGEBS2D, CGEBS2D ............ Broadcasts a general rectangular matrix to all or a subset of
processors .................................................................................................. 556
igerv2d, IGERV2D, SGERV2D, CGERV2D ............ Receives a general rectangular matrix from another processor ................ 558
igesd2d, IGESD2D, SGESD2D, CGESD2D ............ Sends a general rectangular matrix to another processor .......................... 560
igsum2d, IGSUM2D, SGSUM2D, CGSUM2D ............ Performs element summation operations on rectangular matrices ............ 562
itrbr2d, ITRBR2D, STRBR2D, CTRBR2D ............ Receives a broadcast trapezoidal rectangular matrix from all or a
subset of processors ................................................................................... 564
itrbs2d, ITRBS2D, STRBS2D, CTRBS2D ............ Broadcasts a trapezoidal rectangular matrix to all or a subset of
processors .................................................................................................. 566
itrrv2d, ITRRV2D, STRRV2D, CTRRV2D ............ Receives a trapezoidal rectangular matrix from another processor .......... 568
itrsd2d, ITRSD2D, STRSD2D, CTRSD2D ............ Sends a trapezoidal rectangular matrix to another processor .................... 570
mynode, MYNODE ...................................................... Returns the calling processor’s assigned number ..................................... 572
pcoord3d, PCOORD3D ............................................. Computes three-dimensional (3D) processor grid coordinates ................ 573
pnum3d, PNUM3D ...................................................... Returns the processor element number for specified three-dimensional
(3D) coordinates ........................................................................................ 574

Out-of-core routines
intro_core, INTRO_CORE ................................... Introduction to the Cray Research Scientific Library out-of-core
routines for linear algebra ......................................................................... 575
scopy2rv, SCOPY2RV, CCOPY2RV ........................ Copies a submatrix of a real or complex matrix in memory into a
virtual matrix ............................................................................................. 590
scopy2vr, SCOPY2VR, CCOPY2VR ........................ Copies a submatrix of a virtual matrix to a real or complex (in
memory) matrix ......................................................................................... 593
vbegin, VBEGIN ...................................................... Initializes the out-of-core routine data structures ...................................... 595
vend, VEND ................................................................ Handles terminal processing for the out-of-core routines ......................... 598
vsgemm, VSGEMM, VCGEMM ...................................... Multiplies a virtual real or complex general matrix by a virtual real
or complex general matrix ........................................................................ 600
vsgetrf, VSGETRF, VCGETRF ............................... Computes an LU factorization of a virtual general matrix with real or
complex elements, using partial pivoting with row interchanges ............. 604
vsgetrs, VSGETRS, VCGETRS ............................... Solves a virtual system of linear equations, using the LU factorization
computed by VSGETRF(3S) or VCGETRF(3S) ......................................... 608
vspotrf, VSPOTRF .................................................. Computes the Cholesky factorization of a real symmetric positive
definite virtual matrix ................................................................................ 610
vspotrs, VSPOTRS .................................................. Solves a virtual system of linear equations with a symmetric positive
definite matrix whose Cholesky factorization has been computed by
VSPOTRF(3S) ............................................................................................ 612
vssyrk, VSSYRK ...................................................... Performs symmetric rank k update of a real or complex symmetric
virtual matrix ............................................................................................. 614
vstorage, VSTORAGE ............................................. Declares packed storage mode for a triangular, symmetric, or
Hermitian (complex only) virtual matrix .................................................. 616
vstrsm, VSTRSM, VCTRSM ...................................... Solves a virtual real or virtual complex triangular system of equations
with multiple right-hand sides ................................................................... 619

Machine constant functions


intro_mach, INTRO_MACH ................................... Introduction to machine constant functions .............................................. 623
r1mach, R1MACH ...................................................... Returns Cray PVP machine constants ....................................................... 624
slamch, SLAMCH ...................................................... Determines single-precision machine parameters ..................................... 626



smach, SMACH, CMACH ............................................. Returns machine epsilon, small or large normalized numbers ................. 628

Superseded routines
intro_superseded, INTRO_SUPERSEDED ....... Introduction to superseded Scientific Library routines ............................. 631
gather, GATHER ...................................................... Gathers a vector from a source vector ...................................................... 633
minv, MINV ................................................................ Solves systems of linear equations by inverting a square matrix ............. 634
mxm, MXM ................................................................... Computes matrix-times-matrix product (unit increments) ........................ 637
mxma, MXMA ................................................................ Computes matrix-times-matrix product (arbitrary increments) ................. 639
mxv, MXV ................................................................... Computes matrix-times-vector product (unit increments) ......................... 642
mxva, MXVA ................................................................ Computes matrix-times-vector product (arbitrary increments) ................. 644
scatter, SCATTER .................................................. Scatters a vector into another vector ......................................................... 646
smxpy, SMXPY ........................................................... Multiplies a column vector by a matrix and adds the result to another
column vector ............................................................................................ 647
sxmpy, SXMPY ........................................................... Multiplies a row vector by a matrix and adds the result to another
row vector .................................................................................................. 649
trid, TRID ................................................................ Solves a tridiagonal system ....................................................................... 651



INTRO_LAPACK(3S)                                                                 INTRO_LAPACK(3S)

NAME
INTRO_LAPACK – Introduction to LAPACK solvers for dense linear systems

IMPLEMENTATION
See the individual man pages for implementation details.

DESCRIPTION
The preferred solvers for dense linear systems are those parts of the LAPACK package included in the
current version of the Scientific Library. The LAPACK routines in the Scientific Library supersede the older
LINPACK routines (see LINPACK(3S) for more information).
LAPACK Routines
LAPACK is a public domain library of subroutines for solving dense linear algebra problems, including the
following:
• Systems of linear equations
• Linear least squares problems
• Eigenvalue problems
• Singular value decomposition (SVD) problems
For details about which routines are supported, see LAPACK Routines Contained in the Scientific Library,
which follows.
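As an illustration of the first problem class, the following pure-Python sketch (not the library code, which operates on Fortran arrays and handles pivot information, scaling, and workspace far more carefully) shows the computation a simple linear-system driver such as SGESV performs: Gaussian elimination with partial pivoting, with the right-hand side updated during elimination, followed by back substitution.

```python
def gesv(a, b):
    """Solve a x = b; a is an n x n list of lists, b a list of length n.
    Both are modified in place during elimination."""
    n = len(a)
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry in column k
        # to the diagonal by swapping rows (as SGETRF-style factorizations do).
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the diagonal, applying the same row
        # operations to the right-hand side.
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangle.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    return x
```

For example, `gesv([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])` returns the solution of 4x + y = 1, x + 3y = 2.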
The LAPACK package is designed to be the successor to the older LINPACK and EISPACK packages. It
uses today’s high-performance computers more efficiently than the older packages. It also extends the
functionality of these packages by including equilibration, iterative refinement, error bounds, and driver
routines for linear systems, routines for computing and reordering the Schur factorization, and condition
estimation routines for eigenvalue problems.
Performance issues are addressed by implementing the most computationally intensive algorithms by using
the Level 2 and Level 3 Basic Linear Algebra Subprograms (BLAS). Because most of the BLAS were optimized
in single- and multiple-processor environments for UNICOS and UNICOS/mk systems, these algorithms give
near-optimal performance.
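The reason Level 2 and Level 3 BLAS matter is data reuse: a Level 3 operation performs on the order of n**3 floating-point operations on only n**2 data, so blocked algorithms can keep operands in the fastest memory and reuse them many times. The figures below are illustrative operation counts for representative operations, not Scientific Library measurements:

```python
def flops_per_element(level, n):
    """Rough flops-per-element-touched ratio for n x n (or length-n) operands."""
    if level == 1:   # e.g. SAXPY: 2n flops on about 2n elements
        return (2 * n) / (2 * n)
    if level == 2:   # e.g. SGEMV: 2n^2 flops on about n^2 + 2n elements
        return (2 * n * n) / (n * n + 2 * n)
    if level == 3:   # e.g. SGEMM: 2n^3 flops on about 3n^2 elements
        return (2 * n ** 3) / (3 * n * n)
    raise ValueError("level must be 1, 2, or 3")
```

At n = 1000 the Level 1 and Level 2 ratios stay near 1 and 2 flops per element, while the Level 3 ratio exceeds 600, which is why the blocked algorithms spend most of their time in the optimized Level 3 BLAS.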
The original Fortran programs are described in the LAPACK User’s Guide by E. Anderson, Z. Bai,
C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney,
S. Ostrouchov, and D. Sorensen, published by the Society for Industrial and Applied Mathematics (SIAM),
Philadelphia, 1992. You can order the LAPACK User’s Guide, publication TPD–0003.
LAPACK Routines Contained in the Scientific Library
Most of the single-precision (64-bit) real and complex routines from LAPACK 2.0 are supported in the
Scientific Library. This includes driver routines and computational routines for solving linear systems, least
squares problems, and eigenvalue and singular value problems. Selected auxiliary routines for generating
and manipulating elementary orthogonal transformations are also supported.


The Scientific Library does not include the LAPACK driver routines for certain generalized eigenvalue and
singular value computations, or the divide-and-conquer routines for computing eigenvalues, which were new
in LAPACK 2.0. These routines may be added in a future release. Also, most of the auxiliary routines used
only internally by LAPACK have been renamed to avoid conflicts with user-defined subroutine names.
The LAPACK routines in the Scientific Library are described online in man pages. For example, to see a
description of the arguments to the expert driver routine for solving a general system of equations, enter the
following command:
% man sgesvx

The user interface to all LAPACK routines is exactly the same as the standard LAPACK interface, except
for the CPTSV(3L) and CPTSVX(3L) driver routines. An optional character argument was added to CPTSV
and CPTSVX to afford upward compatibility with the storage format in LINPACK’s CPTSL. Because the
argument is optional, however, the standard LAPACK calling sequence is also accepted.
Several enhancements were made to the public-domain LAPACK software to improve performance for
UNICOS and UNICOS/mk systems. In particular, the solve routines were redesigned to give better
performance for one or a small number of right-hand sides, and to make better use of parallelism when the
number of right-hand sides is large.
Tuning parameters for the block algorithms provided in the Scientific Library are set within the LAPACK
routine ILAENV(3L). ILAENV(3L) is an integer function subprogram that accepts information about the
problem type and dimensions, and it returns one integer parameter, such as the optimal block size, the
minimum block size for which a block algorithm should be used, or the crossover point (the problem size at
which it becomes more efficient to switch to an unblocked algorithm). The setting of tuning parameters
occurs without user intervention, but users may call ILAENV(3L) directly to discover the values that will be
used (for example, to determine how much workspace to provide).
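As a hypothetical sketch (the ISPEC codes and return values below are invented for illustration and are not the Scientific Library’s actual tuning settings), the blocked/unblocked decision driven by an ILAENV-style tuning function looks like this:

```python
def ilaenv_like(ispec, n):
    """Toy stand-in for an ILAENV-style tuning query; values are made up."""
    if ispec == 1:   # optimal block size
        return 64
    if ispec == 2:   # minimum block size for which blocking is worthwhile
        return 2
    if ispec == 3:   # crossover point: at or below this order, stay unblocked
        return 128
    raise ValueError("unknown ispec")

def choose_algorithm(n):
    """Return 'unblocked' or a ('blocked', nb) plan for a problem of order n."""
    nb = ilaenv_like(1, n)
    if nb <= ilaenv_like(2, n) or n <= ilaenv_like(3, n):
        return "unblocked"
    return ("blocked", nb)
```

A caller can query the tuning function the same way to size workspace before invoking the blocked routine.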
Naming Scheme
The name of each LAPACK routine is a coded specification of its function (within the limits of standard
FORTRAN 77 six-character names).
All driver and computational routines have five- or six-character names of the form XYYZZ or XYYZZZ.
The first letter in each name, X, indicates the data type, as follows:
S REAL (single precision)
C COMPLEX
The next two letters, YY, indicate the type of matrix (or the most significant matrix). Most of these
two-letter codes apply to both real and complex matrices, but a few apply to only one or the
other. The matrix types are as follows:
BD BiDiagonal
GB General Band
GE GEneral (nonsymmetric)
GG General matrices, Generalized problem


GT General Tridiagonal
HB Hermitian Band (complex only)
HE HErmitian (possibly indefinite) (complex only)
HG Hessenberg matrix, Generalized problem
HP Hermitian Packed (possibly indefinite) (complex only)
HS upper HeSsenberg
OP Orthogonal Packed (real only)
OR ORthogonal (real only)
PB Positive definite Band (symmetric or Hermitian)
PO POsitive definite (symmetric or Hermitian)
PP Positive definite Packed (symmetric or Hermitian)
PT Positive definite Tridiagonal (symmetric or Hermitian)
SB Symmetric Band (real only)
SP Symmetric Packed (possibly indefinite)
ST Symmetric Tridiagonal
SY SYmmetric (possibly indefinite)
TB Triangular Band
TG Triangular matrices, Generalized problem
TP Triangular Packed
TR TRiangular
TZ TrapeZoidal
UN UNitary (complex only)
UP Unitary Packed (complex only)
Some LAPACK auxiliary routines also have man pages on UNICOS and UNICOS/mk systems. These
routines use the special YY designation:
LA LAPACK Auxiliary routine
For example, ILAENV(3L) is the auxiliary routine that determines the block size for a particular algorithm and
problem size.
The last two or three letters, ZZ or ZZZ, indicate the computation performed. For example, SGETRF
performs a TRiangular Factorization of a Single-precision (real) GEneral matrix; CGETRF performs the
factorization of a Complex GEneral matrix.
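The scheme above can be sketched mechanically; the code-to-description tables in this sketch are abbreviated for illustration and cover only a few of the YY codes listed earlier:

```python
TYPES = {"S": "REAL (single precision)", "C": "COMPLEX"}
MATRICES = {"GE": "general", "GB": "general band", "PO": "positive definite",
            "SY": "symmetric", "HE": "Hermitian", "TR": "triangular"}

def decode(name):
    """Split an XYYZZ or XYYZZZ routine name into (data type, matrix type,
    computation code) per the naming scheme described above."""
    x, yy, zz = name[0], name[1:3], name[3:]
    return TYPES[x], MATRICES[yy], zz
```

For example, `decode("SGETRF")` yields the single-precision real type, the general matrix class, and the `TRF` (triangular factorization) computation code.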


Lists of Available LAPACK Routines


The following pages contain tables of driver and computational routines from LAPACK available in the
Scientific Library. For details about the argument lists and usage of these routines, see the individual online
man pages or the LAPACK User’s Guide, publication TPD–0003.
Driver Routines
These routines are listed in alphabetical order.

Name Purpose

CHESV Solves a complex Hermitian indefinite system of linear equations AX = B.
CHESVX Solves a complex Hermitian indefinite system of linear equations AX = B and provides an
estimate of the condition number and error bounds on the solution.
CHPSV Solves a complex Hermitian indefinite system of linear equations AX = B; A is held in
packed storage.
CHPSVX Solves a complex Hermitian indefinite system of linear equations AX = B (A is held in
packed storage) and provides an estimate of the condition number and error bounds on the
solution.
SGBSV Solves a general banded system of linear equations AX = B.
CGBSV
SGBSVX Solves any of the following general banded systems of linear equations and provides an
CGBSVX estimate of the condition number and error bounds on the solution.
AX = B
A^T X = B
A^H X = B
SGEES Compute eigenvalues, Schur form, and Schur vectors of a general matrix
CGEES
SGEESX Compute eigenvalues, Schur form, Schur vectors, and condition numbers of a general matrix
CGEESX
SGEEV Compute eigenvalues and eigenvectors of a general matrix
CGEEV
SGEEVX Compute eigenvalues, eigenvectors, and condition numbers of a general matrix
CGEEVX
SGEGS Compute the generalized Schur factorization of a matrix pair (A,B)
CGEGS
SGEGV Compute the eigenvalues and eigenvectors of a matrix pair (A,B)
CGEGV

SGELS Finds a least squares or minimum norm solution of an overdetermined or underdetermined
CGELS linear system.
SGELSS Solve linear least squares problem using SVD
CGELSS
SGELSX Computes a minimum norm solution of a linear least squares problem using a complete
CGELSX orthogonal factorization.
SGESV Solves a general system of linear equations AX = B.
CGESV
SGESVD Compute the singular value decomposition (SVD) of a general matrix
CGESVD
SGESVX Solves any of the following general systems of linear equations and provides an estimate of
CGESVX the condition number and error bounds on the solution.
AX = B
A^T X = B
A^H X = B
SGTSV Solves a general tridiagonal system of linear equations AX = B.
CGTSV
SGTSVX Solves any of the following general tridiagonal systems of linear equations and provides an
CGTSVX estimate of the condition number and error bounds on the solution.
AX = B
A^T X = B
A^H X = B
SPBSV Solves a symmetric or Hermitian positive definite banded system of linear equations
CPBSV AX = B.
SPBSVX Solves a symmetric or Hermitian positive definite banded system of linear equations AX = B
CPBSVX and provides an estimate of the condition number and error bounds on the solution.
SPOSV Solves a symmetric or Hermitian positive definite system of linear equations AX = B.
CPOSV
SPOSVX Solves a symmetric or Hermitian positive definite system of linear equations AX = B and
CPOSVX provides an estimate of the condition number and error bounds on the solution.
SPPSV Solves a symmetric or Hermitian positive definite system of linear equations AX = B; A is
CPPSV held in packed storage.

SPPSVX Solves a symmetric or Hermitian positive definite system of linear equations AX = B (A is
CPPSVX held in packed storage) and provides an estimate of the condition number and error bounds
on the solution.
SPTSV Solves a symmetric or Hermitian positive definite tridiagonal system of linear equations
CPTSV AX = B.
SPTSVX Solves a symmetric or Hermitian positive definite tridiagonal system of linear equations
CPTSVX AX = B and provides an estimate of the condition number and error bounds on the solution.
SSBEV Compute all eigenvalues and eigenvectors of a symmetric or Hermitian band matrix
CHBEV
SSBEVX Compute selected eigenvalues and eigenvectors of a symmetric or Hermitian band matrix
CHBEVX
SSBGV Compute all eigenvalues and eigenvectors of a generalized symmetric-definite or Hermitian-
CHBGV definite banded eigenproblem
SSPEV Compute all eigenvalues and eigenvectors of a symmetric or Hermitian packed matrix
CHPEV
SSPEVX Compute selected eigenvalues and eigenvectors of a symmetric or Hermitian packed matrix
CHPEVX
SSPGV Compute all eigenvalues and eigenvectors of a generalized symmetric-definite or
CHPGV Hermitian-definite packed eigenproblem
SSPSV Solves a real or complex symmetric indefinite system of linear equations AX = B; A is held
CSPSV in packed storage.
SSPSVX Solves a real or complex symmetric indefinite system of linear equations AX = B (A is held
CSPSVX in packed storage) and provides an estimate of the condition number and error bounds on
the solution.
SSTEV Compute all eigenvalues and eigenvectors of a real symmetric tridiagonal matrix
SSTEVX Compute selected eigenvalues and eigenvectors of a real symmetric tridiagonal matrix
SSYEV Compute all eigenvalues and eigenvectors of a symmetric or Hermitian matrix
CHEEV
SSYEVX Compute selected eigenvalues and eigenvectors of a symmetric or Hermitian matrix
CHEEVX
SSYGV Compute all eigenvalues and eigenvectors of a generalized symmetric-definite or
CHEGV Hermitian-definite eigenproblem
SSYSV Solves a real or complex symmetric indefinite system of linear equations AX = B.
CSYSV

SSYSVX Solves a real or complex symmetric indefinite system of linear equations AX = B and
CSYSVX provides an estimate of the condition number and error bounds on the solution.

Computational Routines
These computational routines are listed in alphabetical order, with real matrix routines and complex matrix
routines grouped together as appropriate.

Name Purpose

CHECON Estimates the reciprocal of the condition number of a complex Hermitian indefinite matrix,
using the factorization computed by CHETRF.
CHERFS Improves the computed solution to a complex Hermitian indefinite system of linear
equations AX = B and provides error bounds for the solution.
CHETRF Computes the factorization of a complex Hermitian indefinite matrix, using the diagonal
pivoting method.
CHETRI Computes the inverse of a complex Hermitian indefinite matrix, using the factorization
computed by CHETRF.
CHETRS Solves a complex Hermitian indefinite system of linear equations AX = B, using the
factorization computed by CHETRF.
CHPCON Estimates the reciprocal of the condition number of a complex Hermitian indefinite matrix
in packed storage, using the factorization computed by CHPTRF.
CHPRFS Improves the computed solution to a complex Hermitian indefinite system of linear
equations AX = B (A is held in packed storage) and provides error bounds for the solution.
CHPTRF Computes the factorization of a complex Hermitian indefinite matrix in packed storage,
using the diagonal pivoting method.
CHPTRI Computes the inverse of a complex Hermitian indefinite matrix in packed storage, using the
factorization computed by CHPTRF.
CHPTRS Solves a complex Hermitian indefinite system of linear equations AX = B (A is held in
packed storage) using the factorization computed by CHPTRF.
ILAENV Determines tuning parameters (such as the block size).
SBDSQR Compute the singular value decomposition of a general matrix reduced to bidiagonal form
CBDSQR
SGBCON Estimates the reciprocal of the condition number of a general band matrix, in either the 1-
CGBCON norm or the infinity-norm, using the LU factorization computed by SGBTRF or CGBTRF.

SGBEQU Computes row and column scalings to equilibrate a general band matrix and reduce its
CGBEQU condition number. Does not multiprocess or call any multiprocessing routines.
SGBRFS Improves the computed solution to any of the following general banded systems of linear
CGBRFS equations and provides error bounds for the solution.
AX = B
A^T X = B
A^H X = B
SGBTRF Computes an LU factorization of a general band matrix, using partial pivoting with row
CGBTRF interchanges.
SGBTRS Solves any of the following general banded systems of linear equations using the LU
CGBTRS factorization computed by SGBTRF or CGBTRF.
AX = B
A^T X = B
A^H X = B
SGEBAK Back transform the eigenvectors of a matrix transformed by SGEBAL/CGEBAL.
CGEBAK
SGEBAL Balances a general matrix A.
CGEBAL
SGEBRD Reduces a general matrix to upper or lower bidiagonal form by an orthogonal/unitary
CGEBRD transformation.
SGECON Estimates the reciprocal of the condition number of a general matrix, in either the 1-norm or
CGECON the infinity-norm, using the LU factorization computed by SGETRF or CGETRF.
SGEEQU Computes row and column scalings to equilibrate a general rectangular matrix and to reduce
CGEEQU its condition number.
SGEHRD Reduces a general matrix to upper Hessenberg form by an orthogonal/unitary transformation.
CGEHRD
SGELQF Computes an LQ factorization of a general rectangular matrix.
CGELQF
SGEQLF Computes a QL factorization of a general rectangular matrix.
CGEQLF
SGEQPF Computes a QR factorization with column pivoting of a general rectangular matrix.
CGEQPF

SGEQRF Computes a QR factorization of a general rectangular matrix.
CGEQRF
SGERFS Improves the computed solution to any of the following general systems of linear equations
CGERFS and provides error bounds for the solution.
AX = B
A^T X = B
A^H X = B
SGERQF Computes an RQ factorization of a general rectangular matrix.
CGERQF
SGETRF Computes an LU factorization of a general matrix, using partial pivoting with row
CGETRF interchanges.
SGETRI Computes the inverse of a general matrix, using the LU factorization computed by SGETRF
CGETRI or CGETRF.
SGETRS Solves any of the following general systems of linear equations using the LU factorization
CGETRS computed by SGETRF or CGETRF.
AX = B
A^T X = B
A^H X = B
SGGBAK Back transform the eigenvectors of a generalized eigenvalue problem transformed by
CGGBAK SGGBAL
SGGBAL Balance a pair of general matrices (A,B)
CGGBAL
SGGHRD Reduce a pair of matrices (A,B) to generalized upper Hessenberg form
CGGHRD
SGTCON Estimates the reciprocal of the condition number of a general tridiagonal matrix, in either
CGTCON the 1-norm or the infinity-norm, using the LU factorization computed by SGTTRF or
CGTTRF.
SGTRFS Improves the computed solution to any of the following general tridiagonal systems of linear
CGTRFS equations and provides error bounds for the solution.
AX = B
A^T X = B
A^H X = B

SGTTRF Computes an LU factorization of a general tridiagonal matrix, using partial pivoting with
CGTTRF row interchanges.
SGTTRS Solves any of the following general tridiagonal systems of linear equations using the LU
CGTTRS factorization computed by SGTTRF or CGTTRF.
AX = B
A^T X = B
A^H X = B
SHGEQZ Compute the eigenvalues of a matrix pair (A,B) in generalized upper Hessenberg form using
CHGEQZ the QZ method
SHSEIN Compute eigenvectors of an upper Hessenberg matrix by inverse iteration
CHSEIN
SHSEQR Compute eigenvalues, Schur form, and Schur vectors of an upper Hessenberg matrix
CHSEQR
SLAMCH Computes machine-specific constants.
SLARF Applies an elementary reflector.
CLARF
SLARFB Applies a block reflector.
CLARFB
SLARFG Generates an elementary reflector.
CLARFG
SLARFT Forms the triangular factor of a block reflector.
CLARFT
SLARGV Generate a vector of real or complex plane rotations
CLARGV
SLARNV Generates a vector of random numbers.
CLARNV
SLARTG Generates a plane rotation.
CLARTG
SLARTV Apply a vector of real or complex plane rotations to two vectors
CLARTV
SLASR Apply a sequence of real plane rotations to a matrix
CLASR
SOPGTR Generates the orthogonal/unitary matrix Q from SSPTRD/CHPTRD.
CUPGTR

SOPMTR Multiplies by the orthogonal/unitary matrix Q from SSPTRD/CHPTRD.
CUPMTR
SORGBR Generates one of the orthogonal/unitary matrices Q or P^H from SGEBRD/CGEBRD.
CUNGBR
SORGHR Generates the orthogonal/unitary matrix Q from SGEHRD/CGEHRD.
CUNGHR
SORGLQ Generates all or part of the orthogonal or unitary matrix Q from an LQ factorization
CUNGLQ determined by SGELQF or CGELQF.
SORGQL Generates all or part of the orthogonal or unitary matrix Q from a QL factorization
CUNGQL determined by SGEQLF or CGEQLF.
SORGQR Generates all or part of the orthogonal or unitary matrix Q from a QR factorization
CUNGQR determined by SGEQRF or CGEQRF.
SORGRQ Generates all or part of the orthogonal or unitary matrix Q from an RQ factorization
CUNGRQ determined by SGERQF or CGERQF.
SORGTR Generates the orthogonal/unitary matrix Q from SSYTRD/CHETRD.
CUNGTR
SORMBR Multiplies by one of the orthogonal/unitary matrices Q or P from SGEBRD/CGEBRD.
CUNMBR
SORMHR Multiplies by the orthogonal/unitary matrix Q from SGEHRD/CGEHRD.
CUNMHR
SORMLQ Multiplies a general matrix by the orthogonal or unitary matrix from an LQ factorization
CUNMLQ determined by SGELQF or CGELQF.
SORMQL Multiplies a general matrix by the orthogonal or unitary matrix from a QL factorization
CUNMQL determined by SGEQLF or CGEQLF.
SORMQR Multiplies a general matrix by the orthogonal or unitary matrix from a QR factorization
CUNMQR determined by SGEQRF or CGEQRF.
SORMRQ Multiplies a general matrix by the orthogonal or unitary matrix from an RQ factorization
CUNMRQ determined by SGERQF or CGERQF.
SORMTR Multiplies by the orthogonal/unitary matrix Q from SSYTRD/CHETRD.
CUNMTR
SPBCON Estimates the reciprocal of the condition number of a symmetric or Hermitian positive
CPBCON definite band matrix, using the Cholesky factorization computed by SPBTRF or CPBTRF.
SPBEQU Computes row and column scalings to equilibrate a symmetric or Hermitian positive definite
CPBEQU band matrix and to reduce its condition number.

SPBRFS Improves the computed solution to a symmetric or Hermitian positive definite banded
CPBRFS system of linear equations AX = B and provides error bounds for the solution.
SPBSTF Compute a split Cholesky factorization of a symmetric or Hermitian positive definite band
CPBSTF matrix.
SPBTRF Computes the Cholesky factorization of a symmetric or Hermitian positive definite band
CPBTRF matrix.
SPBTRS Solves a symmetric or Hermitian positive definite banded system of linear equations AX =
CPBTRS B, using the Cholesky factorization computed by SPBTRF or CPBTRF.
SPOCON Estimates the reciprocal of the condition number of a symmetric or Hermitian positive
CPOCON definite matrix, using the Cholesky factorization computed by SPOTRF or CPOTRF.
SPOEQU Computes row and column scalings to equilibrate a symmetric or Hermitian positive definite
CPOEQU matrix and reduces its condition number.
SPORFS Improves the computed solution to a symmetric or Hermitian positive definite system of
CPORFS linear equations AX = B and provides error bounds for the solution.
SPOTRF Computes the Cholesky factorization of a symmetric or Hermitian positive definite matrix.
CPOTRF
SPOTRI Computes the inverse of a symmetric or Hermitian positive definite matrix, using the
CPOTRI Cholesky factorization computed by SPOTRF or CPOTRF.
SPOTRS Solves a symmetric or Hermitian positive definite system of linear equations AX = B, using
CPOTRS the Cholesky factorization computed by SPOTRF or CPOTRF.
SPPCON Estimates the reciprocal of the condition number of a symmetric or Hermitian positive
CPPCON definite matrix in packed storage, using the Cholesky factorization computed by SPPTRF or
CPPTRF.
SPPEQU Computes row and column scalings to equilibrate a symmetric or Hermitian positive definite
CPPEQU matrix in packed storage and reduces its condition number.
SPPRFS Improves the computed solution to a symmetric or Hermitian positive definite system of
CPPRFS linear equations AX = B (A is held in packed storage) and provides error bounds for the
solution.
SPPTRF Computes the Cholesky factorization of a symmetric or Hermitian positive definite matrix in
CPPTRF packed storage.
SPPTRI Computes the inverse of a symmetric or Hermitian positive definite matrix in packed
CPPTRI storage, using the Cholesky factorization computed by SPPTRF or CPPTRF.
SPPTRS Solves a symmetric or Hermitian positive definite system of linear equations AX = B (A is
CPPTRS held in packed storage) using the Cholesky factorization computed by SPPTRF or CPPTRF.

SPTCON Uses the LDL^H factorization computed by SPTTRF or CPTTRF to compute the reciprocal
CPTCON of the condition number of a symmetric or Hermitian positive definite tridiagonal matrix.
SPTEQR Compute eigenvalues and eigenvectors of a symmetric or Hermitian positive definite
CPTEQR tridiagonal matrix.
SPTRFS Improves the computed solution to a symmetric or Hermitian positive definite tridiagonal
CPTRFS system of linear equations AX = B and provides error bounds for the solution.
SPTTRF Computes the LDL^H factorization of a symmetric or Hermitian positive definite tridiagonal
CPTTRF matrix.
SPTTRS Uses the LDL^H factorization computed by SPTTRF or CPTTRF to solve a symmetric or
CPTTRS Hermitian positive definite tridiagonal system of linear equations.
SSBGST Reduce a symmetric or Hermitian definite banded generalized eigenproblem to standard
CHBGST form.
SSBTRD Reduce a symmetric or Hermitian band matrix to real symmetric tridiagonal form by an
CHBTRD orthogonal/unitary transformation.
SSPCON Estimates the reciprocal of the condition number of a real or complex symmetric indefinite
CSPCON matrix in packed storage, using the factorization computed by SSPTRF or CSPTRF.
SSPGST Reduce a symmetric or Hermitian definite generalized eigenproblem to standard form, using
CHPGST packed storage.
SSPRFS Improves the computed solution to a real or complex symmetric indefinite system of linear
CSPRFS equations AX = B (A is held in packed storage) and provides error bounds for the solution.
SSPTRD Reduces a symmetric/Hermitian packed matrix A to real symmetric tridiagonal form by an
CHPTRD orthogonal/unitary transformation.
SSPTRF Computes the factorization of a real or complex symmetric indefinite matrix in packed
CSPTRF storage, using the diagonal pivoting method.
SSPTRI Computes the inverse of a real or complex symmetric indefinite matrix in packed storage,
CSPTRI using the factorization computed by SSPTRF or CSPTRF.
SSPTRS Solves a real or complex symmetric indefinite system of linear equations AX = B (A is held
CSPTRS in packed storage) using the factorization computed by SSPTRF or CSPTRF.
SSTEBZ Compute eigenvalues of a symmetric tridiagonal matrix by bisection.
SSTEIN Compute eigenvectors of a real symmetric tridiagonal matrix by inverse iteration.
CSTEIN
SSTEQR Compute eigenvalues and eigenvectors of a real symmetric tridiagonal matrix using the
CSTEQR implicit QL or QR method.

SSTERF Compute all eigenvalues of a symmetric tridiagonal matrix using the root-free variant of the
QL or QR algorithm.
SSYCON Estimates the reciprocal of the condition number of a real or complex symmetric indefinite
CSYCON matrix, using the factorization computed by SSYTRF or CSYTRF.
SSYGST Reduces a symmetric or Hermitian definite generalized eigenproblem to standard form.
CHEGST
SSYRFS Improves the computed solution to a real or complex symmetric indefinite system of linear
CSYRFS equations AX = B and provides error bounds for the solution.
SSYTRD Reduces a symmetric/Hermitian matrix A to real symmetric tridiagonal form by an
CHETRD orthogonal/unitary transformation.
SSYTRF Computes the factorization of a real or complex symmetric indefinite matrix, using the
CSYTRF diagonal pivoting method.
SSYTRI Computes the inverse of a real or complex symmetric indefinite matrix, using the
CSYTRI factorization computed by SSYTRF or CSYTRF.
SSYTRS Solves a real or complex symmetric indefinite system of linear equations AX = B, using the
CSYTRS factorization computed by SSYTRF or CSYTRF.
STBCON Estimates the reciprocal of the condition number of a triangular band matrix, in either the
CTBCON 1-norm or the infinity-norm.
STBRFS Provides error bounds for the solution of any of the following triangular banded systems of
CTBRFS linear equations:
AX = B
A**T X = B
A**H X = B
STBTRS Solves any of the following triangular banded systems of linear equations:
CTBTRS AX = B
A**T X = B
A**H X = B
STGEVC Computes eigenvectors of a pair of matrices (A,B) in generalized Schur form.
CTGEVC
STPCON Estimates the reciprocal of the condition number of a triangular matrix in packed storage, in
CTPCON either the 1-norm or the infinity-norm.


STPRFS Provides error bounds for the solution of any of the following triangular systems of linear
CTPRFS equations, where A is held in packed storage:
AX = B
A**T X = B
A**H X = B
STPTRI Computes the inverse of a triangular matrix in packed storage.
CTPTRI
STPTRS Solves any of the following triangular systems of linear equations where A is held in packed
CTPTRS storage:
AX = B
A**T X = B
A**H X = B
STRCON Estimates the reciprocal of the condition number of a triangular matrix, in either the 1-norm
CTRCON or the infinity-norm.
STREVC Computes eigenvectors of a real upper quasi-triangular matrix.
CTREVC Computes eigenvectors of a complex triangular matrix.
STREXC Exchanges diagonal blocks in the real Schur factorization of a real matrix.
CTREXC Exchanges diagonal elements in the Schur factorization of a complex matrix.
STRRFS Provides error bounds for the solution of any of the following triangular systems of linear
CTRRFS equations:
AX = B
A**T X = B
A**H X = B
STRSEN Computes condition numbers to measure the sensitivity of a cluster of eigenvalues and its
CTRSEN corresponding invariant subspace.
STRSNA Computes condition numbers for specified eigenvalues and eigenvectors of a real upper
quasi-triangular matrix.
CTRSNA Computes condition numbers for specified eigenvalues and eigenvectors of a complex upper
triangular matrix.
STRSYL Solves the Sylvester matrix equation.
CTRSYL


STRTRI Computes the inverse of a triangular matrix.


CTRTRI
STRTRS Solves any of the following triangular systems of linear equations:
CTRTRS AX = B
A**T X = B
A**H X = B
STZRQF Reduces an upper trapezoidal matrix to upper triangular form by an orthogonal/unitary
CTZRQF transformation.

SEE ALSO
LINPACK(3S), which lists the names of the LINPACK routines that are superseded by the linear system
solvers in LAPACK
LAPACK User’s Guide, CRI publication TPD– 0003



EISPACK ( 3S ) EISPACK ( 3S )

NAME
EISPACK – Introduction to Eigensystem computation for dense linear systems

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
EISPACK is a package of Fortran routines for solving the eigenvalue problem and for computing and using
the singular-value decomposition.
The original Fortran versions are described in the Matrix Eigensystem Routines – EISPACK Guide, second
edition, by B. T. Smith, J. M. Boyle, J. J. Dongarra, B. S. Garbow, Y. Ikebe, V. C. Klema, and C. B. Moler,
published by Springer-Verlag, New York, 1976, Library of Congress catalog card number 76– 2662. The
original Fortran versions also are documented in the Matrix Eigensystem Routines - EISPACK Guide
Extensions (Lecture Notes in Computer Science, Vol. 51) by B. S. Garbow, J. M. Boyle, J. J. Dongarra, and
C. B. Moler, published by Springer-Verlag, New York, 1977, Library of Congress catalog card number
77– 2802.
Most EISPACK routines are superseded by routines from the more recent public domain package, LAPACK,
described in the LAPACK User’s Guide (see INTRO_LAPACK(3S) for a complete reference). Of particular
interest to EISPACK users who want to switch to LAPACK is Appendix D, "Converting from LINPACK
and EISPACK," of the LAPACK User’s Guide. This appendix contains a table that shows the name of the
LAPACK routines that are functionally equivalent to each EISPACK routine.
Each Scientific Library version of the EISPACK routines has the same name, algorithm, and calling
sequence as the original version. Optimization of each routine includes the following:
• Use of the Level 1 BLAS routines when applicable, and use of the Level 2 and 3 BLAS in TRED1,
TRED2, TRBAK, and REDUC.
• Removal of Fortran IF statements when the result of either branch is the same.
• Unrolling complicated Fortran DO loops to improve vectorization.
• Use of Fortran compiler directives to aid vector optimization.
These modifications increase vectorization and use optimized library routines; therefore, they reduce
execution time. Only the order of computations within a loop is changed; the modified versions produce the
same answers as the original versions, unless the problem is sensitive to small changes in the data.


The following table lists the name, matrix or decomposition, and purpose of each routine.

Purpose Matrix or Decomposition Name

Forms eigenvectors by back transforming corresponding Real nonsymmetric tridiagonal BAKVEC


matrix determined by FIGI
Balances matrix and isolates eigenvalues when possible Real general BALANC
Forms eigenvectors by back transforming those of the Real general BALBAK
corresponding matrices determined by BALANC
Finds the eigenvalues that lie in a specified interval by using Real symmetric tridiagonal BISECT
bisection
Forms eigenvectors by back transforming those of the Complex general CBABK2
corresponding matrices determined by CBAL
Balances matrix and isolates eigenvalues when possible Complex general CBAL
Reduces to a symmetric tridiagonal matrix Real symmetric banded BANDR
Finds those eigenvectors that correspond to ordered list of Real symmetric banded BANDV
eigenvalues by using inverse iteration
Finds some eigenvalues by using QR algorithm with shifts Real symmetric banded BQR
of origin
Finds eigenvalues and eigenvectors Complex general CG
Finds eigenvalues and eigenvectors Complex Hermitian CH
Finds eigenvectors that correspond to specified eigenvalues Complex upper Hessenberg CINVIT
by using inverse iteration
Forms eigenvectors by back transforming those of the Complex general COMBAK
corresponding matrices determined by COMHES
Reduces matrix to upper Hessenberg form by using Complex general COMHES
elementary similarity transformations
Finds eigenvalues by using modified LR method Complex upper Hessenberg COMLR
Finds eigenvalues and eigenvectors, by using modified LR Complex upper Hessenberg COMLR2
method
Finds eigenvalues by QR method Complex upper Hessenberg COMQR
Finds eigenvalues and eigenvectors by QR method Complex upper Hessenberg COMQR2
Forms eigenvectors by back transforming those of the Complex general CORTB
corresponding matrices determined by CORTH


Reduces matrix to upper Hessenberg form by using unitary Complex general CORTH
similarity transformations
Forms eigenvectors by back transforming those of the Real general ELMBAK
corresponding matrices determined by ELMHES
Reduces matrix to upper Hessenberg form by using Real general ELMHES
elementary similarity transformations
Accumulates transformations used in the reduction to upper Real general ELTRAN
Hessenberg form done by ELMHES
Reduces to symmetric tridiagonal matrix that has the same Real nonsymmetric tridiagonal FIGI
eigenvalues
Reduces to symmetric tridiagonal matrix that has the same Real nonsymmetric tridiagonal FIGI2
eigenvalues, retaining the diagonal similarity transformations
Finds eigenvalues by QR method Real upper Hessenberg HQR
Finds eigenvalues and eigenvectors by QR method Real upper Hessenberg HQR2
Finds eigenvectors given the eigenvectors of the real Complex Hermitian HTRIBK
symmetric tridiagonal matrix calculated by HTRIDI
(including eigenvectors calculated by TQL2 or IMTQL2)
Finds eigenvectors given the eigenvectors of the real Complex Hermitian (packed) HTRIB3
symmetric tridiagonal matrix calculated by HTRID3
(eigenvectors calculated by TQL2 or IMTQL2, among
others)
Reduces to real symmetric tridiagonal form by using unitary Complex Hermitian HTRIDI
similarity transformations
Reduces to real symmetric tridiagonal form by using unitary Complex Hermitian (packed) HTRID3
similarity transformations
Finds eigenvalues by using implicit QL method, and Real symmetric tridiagonal IMTQLV
associates them with their corresponding submatrix indices
Finds eigenvalues by implicit QL method Real symmetric tridiagonal IMTQL1
Finds eigenvalues and eigenvectors by implicit QL method Real symmetric tridiagonal IMTQL2
Finds eigenvectors that correspond to specified eigenvalues Real upper Hessenberg INVIT
by using inverse iteration
Determines the singular-value decomposition A = USV**T, Real rectangular MINFIT
forming U**T B rather than U by using Householder
bidiagonalization and a variant of the QR algorithm


Forms eigenvectors by back transforming those of the Real general ORTBAK


corresponding matrices determined by ORTHES
Reduces matrix to upper Hessenberg form by using Real general ORTHES
orthogonal similarity transformations
Accumulates transformations used in the reduction to upper Real general ORTRAN
Hessenberg form done by ORTHES
Reduces matrices A and B in the generalized eigenproblem Real general QZHES
(Ax = λBx ) so that A is in upper Hessenberg form and B is
in upper triangular form by using orthogonal transformations
Further reduces matrices A and B as calculated by QZHES Real general QZIT
for the generalized eigenproblem (Ax = λBx ), so that A is in
quasi-upper triangular form and B is still upper triangular
Produces three arrays that can be used to calculate the Real general QZVAL
eigenvalues for the generalized eigenproblem (Ax = λBx ),
with A and B as calculated by QZIT
Finds the eigenvectors that correspond to a list of Real general QZVEC
eigenvalues for the generalized eigenproblem (Ax = λBx ),
with A and B as calculated by QZIT
Finds the smallest or largest eigenvalues by rational QR Real symmetric tridiagonal RATQR
method with Newton corrections
Forms generalized eigenvectors by back transforming those Real general REBAK
of the corresponding matrices determined by REDUC or
REDUC2
Forms eigenvectors by back transforming those of the Real general REBAKB
corresponding matrices determined by REDUC2
Reduces the generalized eigenproblem (Ax = λBx ) to a Real symmetric REDUC
standard symmetric eigenproblem by using the Cholesky
factorization of B
Reduces either of the generalized eigenproblems Real symmetric REDUC2
(ABx = λBx or BAx = λBx ) to a standard symmetric
eigenproblem by using the Cholesky factorization of B
Finds eigenvalues and eigenvectors Real general RG
Finds generalized eigenvalues and eigenvectors Real general RGG
(Ax = λBx )


Finds eigenvalues and eigenvectors Real symmetric RS


Finds eigenvalues and eigenvectors Real symmetric banded RSB
Finds generalized eigenvalues and eigenvectors Real symmetric RSG
(Ax = λBx )
Finds generalized eigenvalues and eigenvectors Real symmetric RSGAB
(ABx = λx )
Finds generalized eigenvalues and eigenvectors Real symmetric RSGBA
(BAx = λx )
Finds eigenvalues and eigenvectors Real symmetric RSM
Finds eigenvalues and eigenvectors Real symmetric packed RSP
Finds eigenvalues and eigenvectors Real symmetric tridiagonal RST
Finds eigenvalues and eigenvectors Special real tridiagonal RT
Determines the singular-value decomposition A = USV**T by Real rectangular SVD
using Householder bidiagonalization and a variant of the QR
algorithm
Finds the eigenvectors from a set of ordered eigenvalues by Real symmetric tridiagonal TINVIT
using inverse iteration
Finds the eigenvalues by rational QL method Real symmetric tridiagonal TQLRAT
Finds the eigenvalues by the QL method Real symmetric tridiagonal TQL1
Finds the eigenvalues and eigenvectors by the QL method Real symmetric tridiagonal TQL2
Forms eigenvectors by back transforming those of the Real symmetric TRBAK
corresponding matrices determined by TRED1
Forms eigenvectors by back transforming those of the Real symmetric (packed) TRBAK3
corresponding matrices determined by TRED3
Reduces to symmetric tridiagonal matrix by using orthogonal Real symmetric TRED1
similarity transformations
Reduces to symmetric tridiagonal matrix by using and Real symmetric TRED2
accumulating orthogonal similarity transformations
Reduces to symmetric tridiagonal matrix by using orthogonal Real symmetric (packed) TRED3
similarity transformations


Finds the eigenvalues that lie between specified indices by Real symmetric tridiagonal TRIDIB
using bisection
Finds the eigenvalues that lie in a specified interval and each Real symmetric tridiagonal TSTURM
corresponding eigenvector by using bisection and inverse
iteration

SEE ALSO
LAPACK User’s Guide, CRI publication TPD– 0003



LINPACK ( 3S ) LINPACK ( 3S )

NAME
LINPACK – Single-precision real and complex LINPACK routines

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
LINPACK is a public domain package of Fortran routines that solves systems of linear equations and
computes the QR, Cholesky, and singular value decompositions. The original Fortran programs are
described in the LINPACK User’s Guide by J. J. Dongarra, C. B. Moler, J. R. Bunch, and G. W. Stewart,
published by the Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1979, Library of
Congress catalog card number 78– 78206.
Most LINPACK routines are superseded by routines from the more recent public domain package, LAPACK,
described in the LAPACK User’s Guide (see INTRO_LAPACK(3S) for a complete reference). Of particular
interest to LINPACK users who want to switch to LAPACK is Appendix D, "Converting from LINPACK
and EISPACK," of the LAPACK User’s Guide. This appendix contains a table that shows the name of the
LAPACK routines that are functionally equivalent to each LINPACK routine.
Each single-precision Scientific Library version of the LINPACK routines has the same name, algorithm, and
calling sequence as the original version. Optimization of each routine includes the following:
• Replacement of calls to the BLAS routines SSCAL, SCOPY, SSWAP, SAXPY, and SROT with inline
Fortran code vectorized by the Cray Research Fortran compilers. (SROTG is still called by LINPACK.)
• Removal of Fortran IF statements in which the result of either branch is the same.
• Replacement of SDOT to solve triangular systems of linear equations in SPOSL, STRSL, and SCHDD
with more vectorizable code.
These optimizations affect only the execution order of floating-point operations in DO loops. See the
LINPACK User’s Guide for further descriptions. The complex routines have been added without much
optimization.
As mentioned previously, LAPACK does not completely supersede LINPACK. In the following table, an
asterisk (*) marks LINPACK routines that are not superseded in public domain LAPACK. This table lists
the name, matrix or decomposition, and purpose for each routine.

Name Matrix or Decomposition Purpose

SGECO Real general Factors and estimates condition


SGEFA Factors
SGESL Solves
SGEDI Computes determinant and inverse


CGECO Complex general Factors and estimates condition


CGEFA Factors
CGESL Solves
CGEDI Computes determinant and inverse
SGBCO Real general banded Factors and estimates condition
SGBFA Factors
SGBSL Solves
SGBDI Computes determinant
CGBCO Complex general banded Factors and estimates condition
CGBFA Factors
CGBSL Solves
CGBDI Computes determinant
SPOCO Real positive definite Factors and estimates condition
SPOFA Factors
SPOSL Solves
SPODI Computes determinant and inverse
CPOCO Complex positive definite Factors and estimates condition
CPOFA Factors
CPOSL Solves
CPODI Computes determinant and inverse
SPPCO Real positive definite packed Factors and estimates condition
SPPFA Factors
SPPSL Solves
SPPDI Computes determinant and inverse
CPPCO Complex positive definite packed Factors and estimates condition
CPPFA Factors
CPPSL Solves
CPPDI Computes determinant and inverse
SPBCO Real positive definite banded Factors and estimates condition
SPBFA Factors
SPBSL Solves
SPBDI Computes determinant
CPBCO Complex positive definite banded Factors and estimates condition
CPBFA Factors
CPBSL Solves
CPBDI Computes determinant


SSICO Real symmetric indefinite Factors and estimates condition


SSIFA Factors
SSISL Solves
SSIDI Computes inertia, determinant, and inverse
CSICO Complex symmetric Factors and estimates condition
CSIFA Factors
CSISL Solves
CSIDI Computes determinant and inverse
CHICO Complex Hermitian indefinite Factors and estimates condition
CHIFA Factors
CHISL Solves
CHIDI Computes inertia, determinant, and inverse
SSPCO Real symmetric indefinite packed Factors and estimates condition
SSPFA Factors
SSPSL Solves
SSPDI Computes inertia, determinant, and inverse
CSPCO Complex symmetric indefinite packed Factors and estimates condition
CSPFA Factors
CSPSL Solves
CSPDI Computes inertia, determinant, and inverse
CHPCO Complex Hermitian indefinite packed Factors and estimates condition
CHPFA Factors
CHPSL Solves
CHPDI Computes inertia, determinant, and inverse
STRCO Real triangular Factors and estimates condition
STRSL Solves
STRDI Computes determinant and inverse
CTRCO Complex triangular Factors and estimates condition
CTRSL Solves
CTRDI Computes determinant and inverse
SGTSL Real tridiagonal Solves
CGTSL Complex tridiagonal Solves
SPTSL Real positive definite tridiagonal Solves
CPTSL Complex positive definite tridiagonal Solves


SCHDC * Real Cholesky decomposition Decomposes


SCHDD * Downdates
SCHUD * Updates
SCHEX * Exchanges
CCHDC * Complex Cholesky decomposition Decomposes
CCHDD * Downdates
CCHUD * Updates
CCHEX * Exchanges
SQRDC Real Performs orthogonal factorization
SQRSL Solves
CQRDC Complex Performs orthogonal factorization
CQRSL Solves
SSVDC Real Performs singular value decomposition
CSVDC Complex Performs singular value decomposition

SEE ALSO
INTRO_LAPACK(3S) for information and references about the LAPACK routines that supersede LINPACK
LAPACK User’s Guide, CRI publication TPD– 0003
Dongarra, J. J., C. B. Moler, J. R. Bunch, and G. W. Stewart, LINPACK User’s Guide. Society for
Industrial and Applied Mathematics (SIAM), Philadelphia, 1979.



INTRO_SCALAPACK ( 3S ) INTRO_SCALAPACK ( 3S )

NAME
INTRO_SCALAPACK – Introduction to the ScaLAPACK routines for distributed matrix computations

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
The ScaLAPACK library contains routines for solving real or complex general, triangular, or positive definite
distributed systems. It also contains routines for reducing distributed matrices to condensed form and an
eigenvalue problem solver for real symmetric distributed matrices. Finally, it includes the PBLAS, a set of
routines that perform basic operations involving distributed matrices and vectors.
Individual man pages exist for all routines except the PBLAS. You can find more information on the
PBLAS on the World Wide Web at the following URL: http://www.netlib.org/.
Changes from Public Domain Version
The ScaLAPACK development team is directed by Jack Dongarra and consists of groups at UT Knoxville
and UC Berkeley. A version of the package is available in the public domain on the World Wide Web at
the following URL: http://www.netlib.org/.
In the UNICOS/mk version, the calling sequences to all ScaLAPACK routines remain unchanged.
Initialization
Some of the ScaLAPACK routines require the Basic Linear Algebra Communication Subprograms (BLACS)
to be initialized. This can be done through a call to BLACS_GRIDINIT(3S). In addition, each distributed array
that is passed as an argument to a ScaLAPACK routine requires a descriptor, which is set through a call to
DESCINIT(3S). If a call is required, it is documented on the man page for the routine.
Available Routines
The following routines are available:
Linear Solvers
PSGETRF, PCGETRF LU factorization and solution of general
PSGETRS, PCGETRS distributed systems of linear equations.
PSTRTRS, PCTRTRS
PSGESV, PCGESV
PSPOTRF, PCPOTRF Cholesky factorization and solution of real symmetric
PSPOTRS, PCPOTRS or complex Hermitian distributed systems of linear
PSPOSV, PCPOSV equations.
PSGEQRF, PCGEQRF QR, RQ, QL, LQ, and QR with column pivoting for general
PSGERQF, PCGERQF distributed matrices.
PSGEQLF, PCGEQLF
PSGELQF, PCGELQF
PSGEQPF, PCGEQPF


PSGETRI, PCGETRI Inversion of general, triangular, real symmetric


PSTRTRI, PCTRTRI positive definite or complex Hermitian positive
PSPOTRI, PCPOTRI definite distributed matrices.

Similarity/Equivalence Reduction to Condensed Form


PSSYTRD, PCHETRD Reduction of real symmetric or complex Hermitian matrices to tridiagonal
form.
PSGEBRD, PCGEBRD Reduction of general matrices to bidiagonal form.
Eigenvalue Routines
PCHEEVX Eigenvalue solver for complex Hermitian matrices.
PCHEGVX Eigenvalue solver for Hermitian-definite generalized eigenproblem.
PSSYEVX Eigenvalue solver for real symmetric distributed matrices.
PSSYGVX Eigenvalue solver for real symmetric definite generalized eigenproblems.
Support Routines
INDXG2P Computes the coordinate of the processor in the two-dimensional (2D)
processor grid that owns an entry of the distributed array.
NUMROC Computes the number of local rows or columns of the distributed array owned
by a processor.
PBLAS
The following PBLAS routines are supported, but are not documented:
Level 1

PSAMAX PCAMAX
PSASUM PSCASUM
PSAXPY PCAXPY
PSNRM2 PSCNRM2
PSCOPY PCCOPY
PSDOT PCDOTC PCDOTU
PSSCAL PCSCAL PCSSCAL
PSSWAP PCSWAP


Level 2

PSGEMV PCGEMV
PSGER PCGERC PCGERU
PSSYMV PCHEMV
PSSYR PCHER
PSSYR2 PCHER2
PSTRMV PCTRMV
PSTRSV PCTRSV

Level 3

PSGEMM PCGEMM
PSSYMM PCSYMM PCHEMM
PSSYRK PCSYRK PCHERK
PSSYR2K PCSYR2K PCHER2K
PSTRMM PCTRMM
PSTRSM PCTRSM
PSTRAN PCTRANC PCTRANU

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S)
Choi, J., J. Dongarra, R. Pozo, and D. Walker, ‘‘ScaLAPACK: A scalable linear algebra library for distributed
memory concurrent computers,’’ in Proceedings of the Fourth Symposium on the Frontiers of Massively
Parallel Computation, IEEE Comput. Soc. Press, 1992.



DESCINIT ( 3S ) DESCINIT ( 3S )

NAME
DESCINIT – Initializes a descriptor vector of a distributed two-dimensional array

SYNOPSIS
CALL DESCINIT (desc, m, n, mb, nb, irsrc, icsrc, icntxt, lld, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
DESCINIT associates a descriptor vector with a two-dimensional (2D) block or block-cyclically distributed
array. The vector stores information required by the parallel 2D FFT and ScaLAPACK routines to establish
the mapping between an entry in the distributed 2D array and the processor that owns it.
The DESCINIT routine accepts the following arguments.
desc Integer array of dimension 9. (output)
Array descriptor.
m Integer. (input)
Number of global rows in the distributed matrix whose descriptor is being created.
n Integer. (input)
Number of global columns in the distributed matrix whose descriptor is being created.
mb Integer. (input)
Blocking size used to distribute the rows of the distributed matrix.
nb Integer. (input)
Blocking size used to distribute the columns of the distributed matrix.
irsrc Integer. (input)
Processor row that owns the first row of the distributed matrix.
icsrc Integer. (input)
Processor column that owns the first column of the distributed matrix.
icntxt Integer. (input)
Context handle that identifies the grid of processors over which the distributed matrix is
distributed as returned by a call to BLACS_GRIDINIT(3S).
lld Integer. (input)
The leading dimension of the local array that stores the local blocks of the distributed matrix.
info Integer. (output)
info = 0: Successful exit.
info < 0: If info = – i, the ith argument had an illegal value.
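As an illustration of the mapping such a descriptor encodes, the following sketch (written in Python rather than Fortran, with a hypothetical helper name) computes which process row and column own a given global entry, using the mb, nb, irsrc, and icsrc values described above together with an assumed nprow-by-npcol process grid:

```python
# Sketch (not a library routine): given the descriptor fields described above,
# find which process row/column owns global entry (i, j) of a block-cyclically
# distributed matrix. Indices i and j are 1-based, as in Fortran.
def owner(i, j, mb, nb, irsrc, icsrc, nprow, npcol):
    prow = (irsrc + (i - 1) // mb) % nprow   # block row dealt out cyclically
    pcol = (icsrc + (j - 1) // nb) % npcol   # block column dealt out cyclically
    return prow, pcol

# Example: 8x8 blocks, first block on process (0, 0), a 2x3 process grid.
print(owner(1, 1, 8, 8, 0, 0, 2, 3))    # entry (1,1) is in the first block
print(owner(9, 17, 8, 8, 0, 0, 2, 3))   # second block row, third block column
```

The integer-division and modulus arithmetic here is the standard block-cyclic rule; consult the ScaLAPACK documentation for the authoritative definition.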


SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), BLACS_PCOORD(3S), INTRO_BLACS(3S)



INDXG2P ( 3S ) INDXG2P ( 3S )

NAME
INDXG2P – Computes the coordinate of the processing element (PE) that possesses the entry of a
distributed matrix

SYNOPSIS
my_home=INDXG2P(indxglob, nb, iproc, isrcproc, nproc)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
INDXG2P computes the coordinate of the processing element (PE) that possesses the entry of a distributed
matrix specified by a global index indxglob. The formula for my_home is the following:
my_home = MOD(isrcproc + (indxglob-1)/nb, nproc)
This routine accepts the following arguments:
indxglob Integer. (global input)
The global index of the element.
nb Integer. (global input)
Block size, size of the blocks the distributed matrix is split into.
iproc Integer. (local dummy)
Dummy argument; used to unify the calling sequence of the tool routines.
isrcproc Integer. (global input)
The coordinate of the process that possesses the first row/column of the distributed matrix.
nproc Integer. (global input)
Total number of processes over which the matrix is distributed.
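For illustration, the formula above translates directly into the following sketch (Python for convenience; the function name mirrors the Fortran routine but this is not the library source):

```python
# Sketch of the INDXG2P formula: which process (0-based coordinate) owns
# global index indxglob (1-based) when rows/columns are dealt out in blocks
# of size nb, starting at process isrcproc, over nproc processes.
def indxg2p(indxglob, nb, isrcproc, nproc):
    return (isrcproc + (indxglob - 1) // nb) % nproc

# With nb = 4 and 3 processes starting at process 0, global indices 1-4 land
# on process 0, 5-8 on process 1, 9-12 on process 2, and 13-16 wrap to 0.
```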



NUMROC ( 3S ) NUMROC ( 3S )

NAME
NUMROC – Computes the number of rows or columns of a distributed matrix owned locally

SYNOPSIS
nrows_or_cols=NUMROC(n, nb, iproc, isrcproc, nprocs)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
NUMROC computes the number of rows or columns of a distributed matrix owned locally by the processor
indicated by iproc. If only a close upper bound on the value is needed (for example, to determine how
much to allocate for a workspace), you can use the following formula to approximate the value returned by
this function:
nrows_or_cols ~= ((n/nb)/nprocs)*nb + nb
This routine accepts the following arguments:
n Integer. (global input)
The number of rows/columns in distributed matrix.
nb Integer. (global input)
Block size; the size of the blocks into which the distributed matrix is split.
iproc Integer. (local input)
The coordinate of the processor with the local array row or column to be determined.
isrcproc Integer. (global input)
The coordinate of the processor that possesses the first row or column of the distributed matrix.
nprocs Integer. (global input)
The total number of processors over which the matrix is distributed.
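As an illustration, the following Python sketch mirrors the description above: it counts how many of n rows or columns, dealt out block-cyclically, land on process iproc, and also evaluates the approximate upper-bound formula quoted earlier. It is a sketch consistent with this man page, not the library source:

```python
# Sketch of the count NUMROC returns: how many of n rows/columns, dealt out
# cyclically in blocks of size nb starting at process isrcproc, end up on
# process iproc out of nprocs.
def numroc(n, nb, iproc, isrcproc, nprocs):
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source process
    nblocks = n // nb                              # number of whole blocks
    count = (nblocks // nprocs) * nb               # whole rounds every process gets
    extra = nblocks % nprocs                       # leftover whole blocks
    if mydist < extra:
        count += nb                                # one more whole block
    elif mydist == extra:
        count += n % nb                            # the trailing partial block
    return count

# The approximation quoted above, ((n/nb)/nprocs)*nb + nb, with integer division:
def upper_bound(n, nb, nprocs):
    return ((n // nb) // nprocs) * nb + nb

# Example: n = 10 in blocks of 3 over 2 processes -> 6 and 4 items.
```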



PCHEEVX ( 3S ) PCHEEVX ( 3S )

NAME
PCHEEVX – Computes selected eigenvalues and eigenvectors of a Hermitian-definite eigenproblem

SYNOPSIS
CALL PCHEEVX (jobZ, range, uplo, n, A, iA, jA, descA, vl, vu, il, iu, abstol, m, nZ, w,
orfac, Z, iZ, jZ, descZ, work, lwork, rwork, lrwork, iwork, ifail, iclustr, gap, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PCHEEVX computes all the eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix A by
calling the recommended sequence of ScaLAPACK routines. Eigenvalues and eigenvectors can be selected by
specifying a range of values or a range of indices for the desired eigenvalues.
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

An upper bound for these quantities may be computed by:


LOCp(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A
LOCq(N) <= ceil(ceil(N/NB_A)/NPCOL)*NB_A
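These bounds can be checked numerically. The following Python sketch (an illustration, not part of the library) distributes M = 1000 row indices block-cyclically with MB_A = 32 over NPROW = 4 process rows, assuming RSRC_A = 0, and verifies that every per-process count stays within the ceiling bound:

```python
import math

# Deal the 1-based row indices out block-cyclically and count what lands on
# each process row, then compare against ceil(ceil(M/MB_A)/NPROW)*MB_A.
M, MB_A, NPROW = 1000, 32, 4
locp = [0] * NPROW
for i in range(1, M + 1):
    locp[((i - 1) // MB_A) % NPROW] += 1   # RSRC_A = 0 assumed

bound = math.ceil(math.ceil(M / MB_A) / NPROW) * MB_A
assert all(c <= bound for c in locp)       # the quoted upper bound holds
```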

In the argument descriptions that follow:
NP = number of rows local to a given process.
NQ = number of columns local to a given process.
These routines accept the following arguments:
jobZ Character*1. (global input)
Specifies whether to compute the eigenvectors:
jobZ =’N’: Compute only eigenvalues.
jobZ =’V’: Compute eigenvalues and eigenvectors.
range Character*1. (global input)
range =’A’: All eigenvalues will be found.
range =’V’: All eigenvalues in the half-open interval (vl,vu] will be found.
range =’I’: The ilth through iuth eigenvalues will be found.
uplo Character. (global input)
Specifies whether the upper or lower triangular part of the Hermitian matrix A is stored:
uplo =’U’: Upper triangular
uplo =’L’: Lower triangular
n Integer. (global input)
The number of rows and columns of the matrix A. n ≥ 0.
A Block cyclic complex array. (local input/workspace)
Global dimension (N,N), local dimension (DESCA(DLEN_),NQ).
On entry, this array contains the Hermitian matrix A.
If uplo = ’U’, only the upper triangular part of A is used to define the elements of the Hermitian
matrix. If uplo = ’L’, only the lower triangular part of A is used to define the elements of the
Hermitian matrix.


On exit, the lower triangle (if uplo = ’L’) or the upper triangle (if uplo = ’U’) of A, including the
diagonal, is destroyed.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated
on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension dlen_. (global input)
The array descriptor for the distributed matrix A. If descA(CTXT_ ) is incorrect, this routine
cannot guarantee correct error reporting.
vl Real. (global input)
If range=’V’, the lower bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
vu Real. (global input)
If range =’V’, the upper bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
il Integer. (global input)
If range =’I’, the index (from smallest to largest) of the smallest eigenvalue to be returned. il ≥ 1.
If range=’A’ or ’V’, it is not referenced.
iu Integer. (global input)
If range =’I’, the index (from smallest to largest) of the largest eigenvalue to be returned.
min(il,n) ≤ iu ≤ n. If range =’A’ or ’V’, it is not referenced.
abstol Real. (global input)
If jobZ=’V’, setting abstol to PSLAMCH(CONTEXT,’U’) yields the most orthogonal
eigenvectors.
This is the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as
converged when it is determined to lie in an interval [a,b] of width less than or equal to the
following:
abstol + eps * MAX(|a|,|b|)
eps is the machine precision. If abstol is ≤ 0, eps * norm(T) will be used in its place, where
norm(T) is the 1-norm of the tridiagonal matrix obtained by reducing A to tridiagonal form.
Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold
2*PSLAMCH(’S’) not zero. If this routine returns with ((MOD(INFO,2).NE.0).OR.
(MOD(INFO/8,2).NE.0)), indicating that some eigenvalues or eigenvectors did not converge,
try setting abstol to 2*PSLAMCH(’S’).


m Integer. (global output)


Total number of eigenvalues found. 0 ≤ m ≤ n.
nZ Integer. (global output)
Total number of eigenvectors computed. 0 ≤ nZ ≤ m. The number of columns of Z that are filled.
If jobZ is not equal to ’V’, nz is not referenced. If jobZ is equal to ’V’, nz = m unless the user
supplies insufficient space and PCHEEVX is not able to detect this before beginning computation.
To get all of the eigenvectors requested, the user must supply both sufficient space to hold the
eigenvectors in Z (m ≤ descZ(N_)) and sufficient workspace to compute them. (See lwork below.)
PCHEEVX can always detect insufficient space without computation, unless range=’V’.
w Real array, dimension (n). (global output)
On normal exit, the first m entries contain the selected eigenvalues in ascending order.
orfac Real. (global input)
Specifies which eigenvectors should be reorthogonalized. Eigenvectors that correspond to
eigenvalues that are within tol = orfac*norm(A) of each other are reorthogonalized. However,
if the workspace is insufficient (see lwork), tol may be decreased until all eigenvectors to be
reorthogonalized can be stored in one process. No reorthogonalization will be done if orfac equals
zero. A default value of 10**(-3) is used if orfac is negative. orfac should be identical on all
processes.
Z Complex array. (local output)
Global dimension (n, n), local dimension (descZ(LLD_), NQ). If jobZ = ’V’, on normal exit
the first m columns of Z contain the orthonormal eigenvectors of the matrix that corresponds to the
selected eigenvalues. If an eigenvector fails to converge, then that column of Z contains the latest
approximation to the eigenvector, and the index of the eigenvector is returned in ifail. If jobZ =
’N’, Z is not referenced.
iZ Integer. (global input)
The global row index of the submatrix of the distributed matrix Z to operate on.
jZ Integer. (global input)
The global column index of the submatrix of the distributed matrix Z to operate on.
descZ Integer array of dimension 9. (input)
The array descriptor for the distributed matrix Z. descZ(CTXT_) must equal descA(CTXT_).
work Complex array, dimension (lwork). (local workspace/output)
On output, work(1) returns the workspace needed to guarantee completion, but not orthogonality of
the eigenvectors. If the input parameters are incorrect, work(1) may also be incorrect.
This behavior will be modified in the future so that, if enough workspace is given to complete the
request, work(1) will return the amount of workspace needed to guarantee orthogonality. This is
described as follows:


If info ≥ 0
if jobZ = ’N’, work(1) equals the minimal and optimal amount of workspace;
if jobZ = ’V’, work(1) equals the minimal amount of workspace required to guarantee
orthogonal eigenvectors on the given input matrix with the given ortol. In version 1.0,
work(1) equals the minimal workspace required to compute eigenvalues.
If info<0, then
if jobZ=’N’, work(1) equals the minimal and optimal amount of workspace
if jobZ=’V’
if range=’A’ or range=’I’, then work(1) equals the minimal workspace required
to compute all eigenvectors (no guarantee on orthogonality).
if range=’V’, then work(1) equals the minimal workspace required to compute
N_Z=DESCZ(N_) eigenvectors (no guarantee on orthogonality). In version 1.0,
work(1) equals the minimal workspace required to compute eigenvalues.
lwork Integer. (local input)
Size of work array. If only eigenvalues are requested, lwork ≥ N + (NP0 + MQ0 + NB) *
NB. If eigenvectors are requested, lwork ≥ N + MAX(NB*(NP0+1),3).
rwork Real array, dimension (lrwork). (local workspace/output)
lrwork Integer. (local input) The following variable definitions are used to define lrwork:
NN = MAX( N, NB, 2 )
NEIG = number of eigenvectors requested
NB = descA( MB_ ) = descA( NB_ ) = descZ( MB_ ) = descZ( NB_ )
descA( RSRC_ ) = descA( CSRC_ ) = descZ( RSRC_ ) = descZ( CSRC_ ) = 0
NP0 = NUMROC( NN, NB, 0, 0, NPROW )
MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL )
ICEIL( X, Y ) is a ScaLAPACK function returning ceiling(X/Y)

If no eigenvectors are requested (jobZ = ’N’), lrwork ≥ 5*NN + 4 * N


If eigenvectors are requested (jobZ = ’V’), the amount of workspace required to guarantee that all
eigenvectors are computed is the following:
lrwork ≥ 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN
The computed eigenvectors may not be orthogonal if the minimal workspace is supplied and ortol
is too small. If you want to guarantee orthogonality (at the cost of potentially poor performance)
you should add the following to lwork:
(CLUSTERSIZE-1)*N
CLUSTERSIZE is the number of eigenvalues in the largest cluster, where a cluster is defined as a
set of close eigenvalues:


{ W(K),...,W(K+CLUSTERSIZE-1) | W(J+1) ≤ W(J) + orfac*norm(A) }
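This clustering rule can be sketched in a few lines of serial Python; the function and variable names here are our own, not ScaLAPACK's.

```python
def clusters(w, orfac, norm_a):
    """Split ascending eigenvalues w into clusters: a new cluster starts
    whenever W(J+1) > W(J) + orfac*norm(A)."""
    tol = orfac * norm_a
    out = [[w[0]]]
    for prev, cur in zip(w, w[1:]):
        if cur <= prev + tol:
            out[-1].append(cur)   # still within the current cluster
        else:
            out.append([cur])     # gap exceeds tol: start a new cluster
    return out

w = [1.0, 1.001, 1.002, 5.0]
groups = clusters(w, orfac=1.0e-3, norm_a=5.0)   # tol = 0.005
clustersize = max(len(g) for g in groups)        # size of the largest cluster
```

Here the three nearly equal eigenvalues form one cluster of size 3, so the extra workspace term above would be (3-1)*N.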
If lrwork is too small to guarantee orthogonality, PCHEEVX attempts to maintain orthogonality in
the clusters with the smallest spacing between the eigenvalues. If lrwork is too small to compute
all of the eigenvectors requested, no computation is performed and info = -25 is returned. Note
that when range = ’V’, PCHEEVX does not know how many eigenvectors are requested until the
eigenvalues are computed. Therefore, when range = ’V’ and as long as lwork is large enough to
allow PCHEEVX to compute the eigenvalues, PCHEEVX will compute the eigenvalues and as many
eigenvectors as it can.
Relationship between workspace, orthogonality, and performance:
If CLUSTERSIZE ≥ N/SQRT(NPROW*NPCOL), providing enough space to compute all the
eigenvectors orthogonally will cause serious degradation in performance. In the limit (i.e.
CLUSTERSIZE = N-1), PCSTEIN will perform no better than CSTEIN on one processor. For
CLUSTERSIZE = N/SQRT(NPROW*NPCOL) reorthogonalizing all eigenvectors will increase the
total execution time by a factor of 2 or more.
For CLUSTERSIZE > N/SQRT(NPROW*NPCOL), execution time will grow as the square of the
cluster size, all other factors remaining equal and assuming enough workspace. Less workspace
means less reorthogonalization but faster execution.
iwork Integer array. (local workspace)
On return, iwork(1) contains the amount of integer workspace required. If the input parameters are
incorrect, iwork(1) may also be incorrect.
liwork Integer. (local input)
Size of iwork. liwork ≥ 6*NNP
where:

NNP = MAX( N, NPROW*NPCOL + 1, 4 )
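The lwork, lrwork, and liwork lower bounds above can be combined in one serial sketch. The numroc and iceil helpers are plain-Python stand-ins for the NUMROC(3S) and ICEIL tool routines (an assumption for illustration; consult the library routines themselves).

```python
def numroc(n, nb, iproc, isrc, nprocs):
    # Plain-Python restatement of NUMROC(3S), for illustration only.
    mydist = (nprocs + iproc - isrc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    extrablocks = nblocks % nprocs
    if mydist < extrablocks:
        num += nb
    elif mydist == extrablocks:
        num += n % nb
    return num

def iceil(x, y):
    # ICEIL(X, Y): ceiling of X/Y for positive integers.
    return -(-x // y)

def workspace_bounds(n, nb, nprow, npcol, neig, want_vectors):
    """Lower bounds on lwork, lrwork, and liwork per the formulas above."""
    nn = max(n, nb, 2)
    np0 = numroc(nn, nb, 0, 0, nprow)
    mq0 = numroc(max(neig, nb, 2), nb, 0, 0, npcol)
    if want_vectors:                              # jobZ = 'V'
        lwork = n + max(nb * (np0 + 1), 3)
        lrwork = 4 * n + max(5 * nn, np0 * mq0) \
                 + iceil(neig, nprow * npcol) * nn
    else:                                         # jobZ = 'N'
        lwork = n + (np0 + mq0 + nb) * nb
        lrwork = 5 * nn + 4 * n
    liwork = 6 * max(n, nprow * npcol + 1, 4)
    return lwork, lrwork, liwork
```

For example, workspace_bounds(100, 8, 2, 2, 100, True) evaluates the bounds for a 100-by-100 matrix with block size 8 on a 2-by-2 process grid.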

ifail Integer array, dimension (N). (global output)


If jobZ=’V’, then on normal exit, the first m elements of ifail are set to 0. If
(MOD(INFO,2).NE.0) on exit, ifail contains the indices of the eigenvectors that failed to
converge. If jobZ=’N’, ifail is not referenced.
iclustr Integer array, dimension (2*NPROW*NPCOL). (global output)
This array contains indices of eigenvectors that correspond to a cluster of eigenvalues that could
not be reorthogonalized due to insufficient workspace (see lwork, orfac, and info). Eigenvectors
that correspond to clusters of eigenvalues indexed iclustr(2*I-1) to iclustr(2*I) could not be
reorthogonalized due to lack of workspace. Hence, the eigenvectors that correspond to these
clusters may not be orthogonal. iclustr() is a 0-terminated array. (iclustr(2*K).NE.0 .AND.
iclustr(2*K+1).EQ.0) if and only if K is the number of clusters. iclustr is not referenced if
jobZ =’N’.


gap Real array, dimension (NPROW*NPCOL). (global output)


This array contains the gap between eigenvalues whose eigenvectors could not be reorthogonalized.
The output values in this array correspond to the clusters indicated by the iclustr array. Therefore,
the dot product between eigenvectors that correspond to the Ith cluster may be as high as
(C*n)/GAP(I) where C is a small constant.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value,
info = -(i*100+j); if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If (MOD(info,2).NE.0), one or more eigenvectors failed to converge. Their indices
are stored in ifail. Send email to scalapack@cs.utk.edu.
If (MOD(info/2,2).NE.0), eigenvectors corresponding to one or more clusters of
eigenvalues could not be reorthogonalized because of insufficient workspace. The
indices of the clusters are stored in the ICLUSTR array.
If (MOD(info/4,2).NE.0), space limitations prevented PCHEEVX from computing
all of the eigenvectors between vl and vu. The number of eigenvectors computed is
returned in nZ.
If (MOD(info/8,2).NE.0), PSSTEBZ failed to compute eigenvalues. Send email
to scalapack@cs.utk.edu.
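The positive info value is bit-coded, so several of the conditions above can be reported at once. A hedged Python sketch of unpacking it (the message strings are our own wording, not library output):

```python
def decode_info(info):
    """Interpret PCHEEVX's info value as described above."""
    if info == 0:
        return ["successful exit"]
    if info < 0:
        i, j = divmod(-info, 100)
        if i > 0:
            return ["argument %d (array), entry %d had an illegal value" % (i, j)]
        return ["argument %d (scalar) had an illegal value" % j]
    msgs = []
    if info % 2 != 0:                   # MOD(info,2).NE.0
        msgs.append("eigenvectors failed to converge; indices in ifail")
    if (info // 2) % 2 != 0:            # MOD(info/2,2).NE.0
        msgs.append("clusters could not be reorthogonalized; see iclustr")
    if (info // 4) % 2 != 0:            # MOD(info/4,2).NE.0
        msgs.append("not all eigenvectors between vl and vu computed; see nZ")
    if (info // 8) % 2 != 0:            # MOD(info/8,2).NE.0
        msgs.append("PSSTEBZ failed to compute eigenvalues")
    return msgs
```

For instance, info = 5 sets the first and third conditions simultaneously.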
Differences between PCHEEVX and CHEEVX
A, LDA -> A, iA, jA, descA
Z, LDZ -> Z, iZ, jZ, descZ

WORKSPACE needs are larger for PCHEEVX.


lwork, orfac, icluster, and gap parameters added.
The meaning of info is changed.
Functional differences: PCHEEVX does not promise orthogonality for eigenvectors associated with tightly
clustered eigenvalues. PCHEEVX does not reorthogonalize eigenvectors that are on different processes. The
extent of reorthogonalization is controlled by the input parameter lwork.
Current limitations:


DESCA(MB_) = DESCA(NB_)


IA = JA = 1
IZ = JZ = 1
DESCA(RSRC_) = DESCA(CSRC_) = 0
DESCZ(RSRC_) = DESCZ(CSRC_) = 0
DESCA(M_) = DESCZ(M_)
DESCA(N_) = DESCZ(N_)
DESCA(MB_) = DESCZ(MB_)
DESCA(NB_) = DESCZ(NB_)

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)



PCHEGVX ( 3S ) PCHEGVX ( 3S )

NAME
PCHEGVX – Computes selected eigenvalues and eigenvectors of a Hermitian-definite generalized
eigenproblem

SYNOPSIS
CALL PCHEGVX (ibtype, jobZ, range, uplo, n, A, iA, jA, descA, B, iB, jB, descB, vl, vu,
il, iu, abstol, m, nZ, w, orfac, Z, iZ, jZ, descZ, work, lwork, rwork, lrwork, iwork, ifail,
iclustr, gap, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PCHEGVX computes all the eigenvalues and, optionally, eigenvectors of a complex generalized Hermitian-
definite eigenproblem, of the form:
sub(A)*x = (lambda)*sub(B)*x,  sub(A)*sub(B)*x = (lambda)*x

or
sub(B)*sub(A)*x = (lambda)*x

Here sub(A) denoting A(IA:IA+N-1, JA:JA+N-1) is assumed to be Hermitian, and sub(B)


denoting B(IB:IB+N-1, JB:JB+N-1) is assumed to be Hermitian positive definite.
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.


CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

An upper bound for these quantities may be computed by:


LOCp(M) <= ceil(ceil(M/MB_A)/NPROW)*MB_A
LOCq(N) <= ceil(ceil(N/NB_A)/NPCOL)*NB_A

These routines accept the following arguments:


ibtype Integer. (global input)
Specifies the problem type to be solved:
= 1: sub(A)*x = (lambda)*sub(B)*x
= 2: sub(A)*sub(B)*x = (lambda)*x
= 3: sub(B)*sub(A)*x = (lambda)*x
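For diagonal A and B the three problem types can be checked by hand. The toy sketch below (plain Python; the names are ours, and diagonal matrices are used only to keep the arithmetic transparent) shows how ibtype selects the form:

```python
A = [2.0, 3.0]          # diag(A); Hermitian
B = [1.0, 2.0]          # diag(B); Hermitian positive definite

def eigenvalues(ibtype, a, b):
    """Eigenvalues of each generalized problem type for diagonal a, b."""
    if ibtype == 1:                     # sub(A)*x = (lambda)*sub(B)*x
        return [ai / bi for ai, bi in zip(a, b)]
    if ibtype == 2:                     # sub(A)*sub(B)*x = (lambda)*x
        return [ai * bi for ai, bi in zip(a, b)]
    if ibtype == 3:                     # sub(B)*sub(A)*x = (lambda)*x
        return [ai * bi for ai, bi in zip(a, b)]
    raise ValueError("ibtype must be 1, 2, or 3")
```

With diagonal matrices, types 2 and 3 coincide because the products commute; for general Hermitian A and positive-definite B they are distinct problems.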
jobZ Character*1. (global input)
Specifies whether to compute the eigenvectors:
jobZ =’N’: Compute only eigenvalues.
jobZ =’V’: Compute eigenvalues and eigenvectors.
range Character*1. (global input)
range =’A’: All eigenvalues will be found.
range =’V’: All eigenvalues in the half-open interval (vl,vu] will be found.
range =’I’: The ilth through iuth eigenvalues will be found.
uplo Character. (global input)
Specifies whether the upper or lower triangular part of the Hermitian matrix sub(A) is stored:
uplo =’U’: Upper triangle of sub(A) is stored.
uplo =’L’: Lower triangle of sub(A) is stored.


n Integer. (global input)


The order of the matrices sub(A) and sub(B). n must be ≥ 0.
A Complex pointer into local memory. (local input/output)
Pointer into the local memory to an array of dimension (LLD_A, LOCq(JA+N-1)).
On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub(A).
If uplo = ’U’, the leading N-by-N upper triangular part of sub(A) contains the upper triangular
part of the matrix. If uplo = ’L’, the leading N-by-N lower triangular part of sub(A) contains the
lower triangular part of the matrix.
On exit, if jobz = ’V’, then if info = 0, sub(A) contains the distributed matrix Z of eigenvectors.
The eigenvectors are normalized as follows:
if ibtype = 1 or 2, Z**H*sub( B )*Z = I
if ibtype = 3, Z**H*inv( sub( B ) )*Z = I.
If jobz = ’N’, then on exit the upper triangle (if uplo= ’U’) or the lower triangle (if uplo= ’L’) of
sub(A), including the diagonal, is destroyed.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated
on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension dlen_. (global input)
The array descriptor for the distributed matrix A. If descA(CTXT_ ) is incorrect, this routine
cannot guarantee correct error reporting.
B Complex pointer into local memory. (local input/output)
Pointer into the local memory to an array of dimension (LLD_B, LOCq(JB+N-1)).
On entry, this array contains the local pieces of the N-by-N Hermitian distributed matrix sub(B).
If uplo = ’U’, the leading N-by-N upper triangular part of sub(B) contains the upper triangular
part of the matrix. If uplo = ’L’, the leading N-by-N lower triangular part of sub(B) contains the
lower triangular part of the matrix.
On exit, if info ≤ n, the part of sub(B) containing the matrix is overwritten by the triangular
factor U or L from the Cholesky factorization sub(B) = U**H*U or sub(B) = L*L**H.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated
on.
jB Integer. (global input)
The global column index of B which points to the beginning of the submatrix that will be operated
on.


descB Integer array of dimension dlen_. (global input)


The array descriptor for the distributed matrix B. descB(CTXT_) must equal descA(CTXT_).
vl Real. (global input)
If range=’V’, the lower bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
vu Real. (global input)
If range =’V’, the upper bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
il Integer. (global input)
If range =’I’, the index (from smallest to largest) of the smallest eigenvalue to be returned. il ≥ 1.
If range=’A’ or ’V’, it is not referenced.
iu Integer. (global input)
If range =’I’, the index (from smallest to largest) of the largest eigenvalue to be returned.
min(il,n) ≤ iu ≤ n. If range =’A’ or ’V’, it is not referenced.
abstol Real. (global input)
If jobZ=’V’, setting abstol to PSLAMCH(CONTEXT,’U’) yields the most orthogonal
eigenvectors.
This is the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as
converged when it is determined to lie in an interval [a,b] of width less than or equal to the
following:
abstol + eps * MAX(|a|,|b|)
eps is the machine precision. If abstol is ≤ 0, eps * norm(T) will be used in its place, where
norm(T) is the 1-norm of the tridiagonal matrix obtained by reducing A to tridiagonal form.
Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold
2*PSLAMCH(’S’) not zero. If this routine returns with ((MOD(INFO,2).NE.0).OR.
(MOD(INFO/8,2).NE.0)), indicating that some eigenvalues or eigenvectors did not converge,
try setting abstol to 2*PSLAMCH(’S’).
m Integer. (global output)
Total number of eigenvalues found. 0 ≤ m ≤ n.
nZ Integer. (global output)
Total number of eigenvectors computed. 0 ≤ nZ ≤ m. The number of columns of Z that are filled.
If jobZ is not equal to ’V’, nz is not referenced. If jobZ is equal to ’V’, nz = m unless the user
supplies insufficient space and PCHEGVX is not able to detect this before beginning computation.
To get all of the eigenvectors requested, the user must supply both sufficient space to hold the
eigenvectors in Z (m ≤ descZ(N_)) and sufficient workspace to compute them. (See lwork below.)
PCHEGVX can always detect insufficient space without computation, unless range=’V’.


w Real array, dimension (n). (global output)


On normal exit, the first m entries contain the selected eigenvalues in ascending order.
orfac Real. (global input)
Specifies which eigenvectors should be reorthogonalized. Eigenvectors that correspond to
eigenvalues that are within tol = orfac*norm(A) of each other are reorthogonalized. However,
if the workspace is insufficient (see lwork), tol may be decreased until all eigenvectors to be
reorthogonalized can be stored in one process. No reorthogonalization will be done if orfac equals
-3
zero. A default value of 10 is used if orfac is negative. orfac should be identical on all
processes.
Z Complex array. (local output)
Global dimension (n, n), local dimension (descZ(LLD_), NQ). If jobZ = ’V’, on normal exit
the first m columns of Z contain the orthonormal eigenvectors of the matrix that corresponds to the
selected eigenvalues. If an eigenvector fails to converge, then that column of Z contains the latest
approximation to the eigenvector, and the index of the eigenvector is returned in ifail. If jobZ =
’N’, Z is not referenced.
iZ Integer. (global input)
The global row index of the submatrix of the distributed matrix Z to operate on.
jZ Integer. (global input)
The global column index of the submatrix of the distributed matrix Z to operate on.
descZ Integer array of dimension 9. (input)
The array descriptor for the distributed matrix Z. descZ(CTXT_) must equal descA(CTXT_).
work Complex array, dimension (lwork). (local workspace/output)
On output, work(1) returns the workspace needed to guarantee completion, but not orthogonality of
the eigenvectors. If the input parameters are incorrect, work(1) may also be incorrect.
If info ≥ 0
if jobZ = ’N’, work(1) equals the minimal and optimal amount of workspace;
if jobZ = ’V’, work(1) equals the minimal amount of workspace required to guarantee
orthogonal eigenvectors on the given input matrix with the given ortol. In version 1.0,
work(1) equals the minimal workspace required to compute eigenvalues.
If info<0, then
if jobZ=’N’, work(1) equals the minimal and optimal amount of workspace
if jobZ=’V’
if range=’A’ or range=’I’, then work(1) equals the minimal workspace required
to compute all eigenvectors (no guarantee on orthogonality).


if range=’V’, then work(1) equals the minimal workspace required to compute


N_Z=DESCZ(N_) eigenvectors (no guarantee on orthogonality). In version 1.0,
work(1) equals the minimal workspace required to compute eigenvalues.
lwork Integer. (local input)
Size of work array. If only eigenvalues are requested, lwork ≥ N + (NP0 + MQ0 + NB) *
NB. If eigenvectors are requested, lwork ≥ N + MAX(NB*(NP0+1),3).
rwork Real array, dimension (lrwork). (local workspace/output)
lrwork Integer. (local input) The following variable definitions are used to define lrwork:
NN = MAX( N, NB, 2 )
NEIG = number of eigenvectors requested
NB = descA( MB_ ) = descA( NB_ ) = descZ( MB_ ) = descZ( NB_ )
descA( RSRC_ ) = descA( CSRC_ ) = descZ( RSRC_ ) = descZ( CSRC_ ) = 0
NP0 = NUMROC( NN, NB, 0, 0, NPROW )
MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL )
ICEIL( X, Y ) is a ScaLAPACK function returning ceiling(X/Y)

If no eigenvectors are requested (jobZ = ’N’), lrwork ≥ 5*NN + 4 * N


If eigenvectors are requested (jobZ = ’V’), the amount of workspace required to guarantee that all
eigenvectors are computed is the following:
lrwork ≥ 4*N + MAX( 5*NN, NP0*MQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN
The computed eigenvectors may not be orthogonal if the minimal workspace is supplied and ortol
is too small. If you want to guarantee orthogonality (at the cost of potentially poor performance)
you should add the following to lwork:
(CLUSTERSIZE-1)*N
CLUSTERSIZE is the number of eigenvalues in the largest cluster, where a cluster is defined as a
set of close eigenvalues:
{ W(K),...,W(K+CLUSTERSIZE-1) | W(J+1) ≤ W(J) + orfac*norm(A) }
If lrwork is too small to guarantee orthogonality, PCHEGVX attempts to maintain orthogonality in
the clusters with the smallest spacing between the eigenvalues. If lrwork is too small to compute
all of the eigenvectors requested, no computation is performed and info = -25 is returned. Note
that when range = ’V’, PCHEGVX does not know how many eigenvectors are requested until the
eigenvalues are computed. Therefore, when range = ’V’ and as long as lwork is large enough to
allow PCHEGVX to compute the eigenvalues, PCHEGVX will compute the eigenvalues and as many
eigenvectors as it can.
Relationship between workspace, orthogonality, and performance:


If CLUSTERSIZE ≥ N/SQRT(NPROW*NPCOL), providing enough space to compute all the


eigenvectors orthogonally will cause serious degradation in performance. In the limit (i.e.
CLUSTERSIZE = N-1), PCSTEIN will perform no better than CSTEIN on one processor. For
CLUSTERSIZE = N/SQRT(NPROW*NPCOL) reorthogonalizing all eigenvectors will increase the
total execution time by a factor of 2 or more.
For CLUSTERSIZE > N/SQRT(NPROW*NPCOL), execution time will grow as the square of the
cluster size, all other factors remaining equal and assuming enough workspace. Less workspace
means less reorthogonalization but faster execution.
iwork Integer array. (local workspace)
On return, iwork(1) contains the amount of integer workspace required. If the input parameters are
incorrect, iwork(1) may also be incorrect.
liwork Integer. (local input)
Size of iwork. liwork ≥ 6*NNP
where:

NNP = MAX( N, NPROW*NPCOL + 1, 4 )

ifail Integer array, dimension (N). (global output)


ifail provides additional information when INFO.NE.0. If (MOD(INFO/16,2).NE.0) then
ifail(1) indicates the order of the smallest minor which is not positive definite. If
(MOD(INFO,2).NE.0) on exit, then ifail contains the indices of the eigenvectors that failed to
converge.
If neither of the above error conditions hold and jobZ=’V’, then the first m elements of ifail are set
to 0.
iclustr Integer array, dimension (2*NPROW*NPCOL). (global output)
This array contains indices of eigenvectors that correspond to a cluster of eigenvalues that could
not be reorthogonalized due to insufficient workspace (see lwork, orfac, and info). Eigenvectors
that correspond to clusters of eigenvalues indexed iclustr(2*I-1) to iclustr(2*I) could not be
reorthogonalized due to lack of workspace. Hence, the eigenvectors that correspond to these
clusters may not be orthogonal. iclustr() is a 0-terminated array. (iclustr(2*K).NE.0 .AND.
iclustr(2*K+1).EQ.0) if and only if K is the number of clusters. iclustr is not referenced if
jobZ =’N’.
gap Real array, dimension (NPROW*NPCOL). (global output)
This array contains the gap between eigenvalues whose eigenvectors could not be reorthogonalized.
The output values in this array correspond to the clusters indicated by the iclustr array. Therefore,
the dot product between eigenvectors that correspond to the Ith cluster may be as high as
(C*n)/GAP(I) where C is a small constant.
Current limitations:


DESCA(MB_) = DESCA(NB_)


IA = JA = 1
IZ = JZ = 1
DESCA(RSRC_) = DESCA(CSRC_) = 0
DESCA(M_) = DESCB(M_) = DESCZ(M_)
DESCA(N_) = DESCB(N_) = DESCZ(N_)
DESCA(MB_) = DESCB(MB_) = DESCZ(MB_)
DESCA(NB_) = DESCB(NB_) = DESCZ(NB_)
DESCA(RSRC_) = DESCB(RSRC_) = DESCZ(RSRC_)
DESCA(CSRC_) = DESCB(CSRC_) = DESCZ(CSRC_)
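These limitations amount to descriptor-consistency checks that can be sketched in Python. The 0-based index constants below are an assumption modeled on the standard 9-element descriptor layout (DESC(3)..DESC(8) in Fortran); verify them against your installation's DESCINIT(3S) documentation.

```python
# 0-based positions of M_, N_, MB_, NB_, RSRC_, CSRC_ in a 9-element
# descriptor -- an assumption; check DESCINIT(3S) for your installation.
M_, N_, MB_, NB_, RSRC_, CSRC_ = 2, 3, 4, 5, 6, 7

def meets_limitations(descA, descB, descZ, iA, jA, iZ, jZ):
    """True when the Current-limitations constraints above all hold."""
    return (descA[MB_] == descA[NB_]                 # square blocks
            and iA == jA == 1
            and iZ == jZ == 1
            and descA[RSRC_] == descA[CSRC_] == 0    # start at process (0,0)
            and all(descA[k] == descB[k] == descZ[k] # A, B, Z agree
                    for k in (M_, N_, MB_, NB_, RSRC_, CSRC_)))

desc = [1, 0, 100, 100, 8, 8, 0, 0, 52]   # a square-blocked descriptor
ok = meets_limitations(desc, list(desc), list(desc), 1, 1, 1, 1)
```

Changing any one of the mirrored descriptor entries (for example descB's blocking factor) makes the check fail.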

info Integer. (global output)


info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value,
info = -(i*100+j); if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If (MOD(info,2).NE.0), one or more eigenvectors failed to converge. Their indices
are stored in ifail. Send email to scalapack@cs.utk.edu.
If (MOD(info/2,2).NE.0), eigenvectors corresponding to one or more clusters of
eigenvalues could not be reorthogonalized because of insufficient workspace. The
indices of the clusters are stored in the ICLUSTR array.
If (MOD(info/4,2).NE.0), space limitations prevented PCHEGVX from computing
all of the eigenvectors between vl and vu. The number of eigenvectors computed is
returned in nZ.
If (MOD(info/8,2).NE.0), PSSTEBZ failed to compute eigenvalues. Send email
to scalapack@cs.utk.edu.
If (MOD(info/16,2).NE.0), B was not positive definite. ifail(1) indicates the order
of the smallest minor which is not positive definite.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)



PSGEBRD ( 3S ) PSGEBRD ( 3S )

NAME
PSGEBRD, PCGEBRD – Reduces a real or complex distributed matrix to bidiagonal form

SYNOPSIS
CALL PSGEBRD (m, n, A, iA, jA, descA, D, E, tauQ, tauP, work, lwork, info)
CALL PCGEBRD (m, n, A, iA, jA, descA, D, E, tauQ, tauP, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGEBRD and PCGEBRD reduce a real or complex general m-by-n distributed submatrix:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)
to upper or lower bidiagonal form B by the following orthogonal transformation:
Q’ * sub(A) * P = B
If m ≥ n, B is upper bidiagonal; if m < n, B is lower bidiagonal.
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).


Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGEBRD, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the number of rows of the distributed submatrix sub(A)).
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, this array contains the local pieces of the general distributed matrix sub(A).
On exit, if m ≥ n, the diagonal and the first superdiagonal of sub(A) are overwritten with the upper
bidiagonal matrix B; the elements below the diagonal, with the array tauQ, represent the orthogonal
matrix Q as a product of elementary reflectors, and the elements above the first superdiagonal, with
the array tauP, represent the orthogonal matrix P as a product of elementary reflectors.
If m < n, the diagonal and the first subdiagonal are overwritten with the lower bidiagonal matrix B;
the elements below the first subdiagonal, with the array tauQ, represent the orthogonal matrix Q as
a product of elementary reflectors, and the elements above the diagonal, with the array tauP,
represent the orthogonal matrix P as a product of elementary reflectors. See the Further Details
subsection for more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.


D Real array. (local output)


If m ≥ n, the array dimension is LOCq(jA+MIN(m,n)-1). Otherwise, the dimension is
LOCp(iA+MIN(m,n)-1).
The distributed diagonal elements of the bidiagonal matrix B: D(i) = A(i,i). D is tied to the
distributed matrix A.
E Real array. (local output)
If m ≥ n, the array dimension is LOCp(iA+MIN(m,n)-1). Otherwise, the dimension is
LOCq(jA+MIN(m,n)-2).
The distributed off-diagonal elements of the bidiagonal distributed matrix B:
if m ≥ n, E(i) = A(i,i+1) for i = 1,2,...,n-1
if m < n, E(i) = A(i+1,i) for i = 1,2,...,m-1
E is tied to the distributed matrix A.
tauQ Real array, dimension LOCq(jA+MIN(m,n)-1). (local output)
This array contains the scalar factors of the elementary reflectors which represent the orthogonal
matrix Q. tauQ is tied to the distributed matrix A.
tauP Real array, dimension LOCp(iA+MIN(m,n)-1). (local output)
This array contains the scalar factors of the elementary reflectors which represent the orthogonal
matrix P. tauP is tied to the distributed matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.
lwork Integer. (local input)
lwork ≥ NB*(MpA0+NqA0+1)+NqA0
where

IROFF = MOD( IA-1, NB ), ICOFF = MOD( JA-1, NB )

IAROW = INDXG2P( IA, NB, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB, MYCOL, CSRC_A, NPCOL )
MpA0 = NUMROC( M+IROFF, NB, MYROW, IAROW, NPROW )
NqA0 = NUMROC( N+ICOFF, NB, MYCOL, IACOL, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.


Alignment Requirements
The distributed submatrix sub(A) must satisfy the following alignment property; that is, this expression
must be true:
(MB_A .EQ. NB_A .AND. IROFFA .EQ. ICOFFA)

Further Details
The matrices Q and P are represented as products of elementary reflectors. If m ≥ n:
Q = H(1) H(2) ... H(n) and P = G(1) G(2) ... G(n-1)

Each H(i) and G(i) has the form:


H(i) = I - tauQ * v * v’ and G(i) = I - tauP * u * u’

where tauQ and tauP are real scalars, and v and u are real vectors; v(1:i-1)=0, v(i)=1, and
v(i+1:m) is stored on exit in A(iA+i:iA+m-1,jA+i-1); u(1:i)=0, u(i+1)=1, and u(i+2:n) is
stored on exit in A(iA+i-1,jA+i+1:jA+n-1); tauQ is stored in tauQ(jA+i-1), and tauP is stored
in tauP(iA+i-1).
If m < n,
Q = H(1) H(2) ... H(m-1) and P = G(1) G(2) ... G(m)

Each H(i) and G(i) has the following form:


H(i) = I - tauQ * v * v’ and G(i) = I - tauP * u * u’

where tauQ and tauP are real scalars, and v and u are real vectors; v(1:i)=0, v(i+1)=1, and
v(i+2:m) is stored on exit in A(iA+i+1:iA+m-1,jA+i-1); u(1:i-1)=0, u(i)=1, and u(i+1:n)
is stored on exit in A(iA+i-1,jA+i:jA+n-1); tauQ is stored in tauQ(jA+i-1) and tauP is stored
in tauP(iA+i-1).
The following examples illustrate the contents of sub(A) on exit:
(m > n) (m < n)
m = 6 and n = 5 m = 5 and n = 6
( d e u1 u1 u1 ) ( d u1 u1 u1 u1 u1 )
( v1 d e u2 u2 ) ( e d u2 u2 u2 u2 )
( v1 v2 d e u3 ) ( v1 e d u3 u3 u3 )
( v1 v2 v3 d e ) ( v1 v2 e d u4 u4 )
( v1 v2 v3 v4 d ) ( v1 v2 v3 e d u5 )
( v1 v2 v3 v4 v5 )

where d and e denote diagonal and off-diagonal elements of B, v1 denotes an element of the vector defining
H(i), and u1 an element of the vector defining G(i).


NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGELQF ( 3S ) PSGELQF ( 3S )

NAME
PSGELQF, PCGELQF – Computes an LQ factorization of a real or complex distributed matrix

SYNOPSIS
CALL PSGELQF (m, n, A, iA, jA, descA, tau, work, lwork, info)
CALL PCGELQF (m, n, A, iA, jA, descA, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGELQF and PCGELQF compute an LQ factorization of a real or complex distributed m-by-n matrix:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)=L*Q
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGELQF, the following real arguments must be complex:
m Integer. (global input)
The number of rows to be operated on; that is, the order of the distributed submatrix sub(A). m
must be ≥ 0.
n Integer. (global input)
The number of columns to be operated on; that is, the number of columns of the distributed
submatrix sub(A). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, the elements on and below the diagonal of sub(A) contain the m-by-MIN(m,n) lower
trapezoidal matrix L (L is lower triangular if m ≤ n); the elements above the diagonal, with the array
tau, represent the orthogonal matrix Q as a product of elementary reflectors. See the Further Details
subsection for more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
tau Real array, dimension LOCp(iA+MIN(m,n)-1). (local output)
This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.
lwork Integer. (local input)
The dimension of the array work.


lwork ≥ MB_A * (Mp0 + Nq0 + MB_A)


where

IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A )

IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL )
Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW )
Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
Further Details
The matrix Q is represented as a product of elementary reflectors:
Q = H(iA+k-1) H(iA+k-2) ... H(iA)

where k=MIN(m,n).
Each H(i) has the following form:
H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i)=1; v(i+1:n) is stored on
exit in A(iA+i-1,jA+i:jA+n-1), and tau is stored in tau(iA+i-1).

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGEQLF ( 3S ) PSGEQLF ( 3S )

NAME
PSGEQLF, PCGEQLF – Computes a QL factorization of a real or complex distributed matrix

SYNOPSIS
CALL PSGEQLF (m, n, A, iA, jA, descA, tau, work, lwork, info)
CALL PCGEQLF (m, n, A, iA, jA, descA, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGEQLF and PCGEQLF compute a QL factorization of a real or complex distributed m-by-n matrix:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)=Q*L
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGEQLF, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the order of the distributed submatrix sub(A)). m must be ≥
0.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, if m ≥ n, the lower triangle of the distributed submatrix sub(A) contains the n-by-n lower
triangular matrix L. If m ≤ n, the elements on and below the (n-m)-th superdiagonal contain the
m-by-n lower trapezoidal matrix L. The remaining elements, with the array tau, represent the
orthogonal matrix Q as a product of elementary reflectors. See the Further Details subsection for
more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
tau Real array, dimension LOCq(N_A). (local output)
This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.


lwork Integer. (local input)


The dimension of the array work.
lwork ≥ NB_A*(Mp0 + Nq0 + NB_A)
where

IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A )

IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL )
Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW )
Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
Further Details
The matrix Q is represented as a product of elementary reflectors:
Q = H(jA+k-1) ... H(jA+1) H(jA)

where k=MIN(m,n).
Each H(i) has the following form:
H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(m-k+i+1:m)=0 and v(m-k+i)=1;
v(1:m-k+i-1) is stored on exit in A(iA:iA+m-k+i-2,jA+n-k+i-1), and tau is stored in
tau(jA+n-k+i-1).

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGEQPF ( 3S ) PSGEQPF ( 3S )

NAME
PSGEQPF, PCGEQPF – Computes a QR factorization with column pivoting of a real or complex distributed
matrix

SYNOPSIS
CALL PSGEQPF (m, n, A, iA, jA, descA, ipiv, tau, work, lwork, info)
CALL PCGEQPF (m, n, A, iA, jA, descA, ipiv, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGEQPF and PCGEQPF compute a QR factorization with column pivoting of a real or complex m-by-n
distributed matrix:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)
sub(A)*P = Q*R
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).


Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGEQPF, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the order of the distributed submatrix sub(A)). m must be ≥
0.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, the elements on and above the diagonal of sub(A) contain the (MIN(m,n)-by-n) upper
trapezoidal matrix R (R is upper triangular if m ≥ n); the elements below the diagonal, with the
array tau, represent the orthogonal matrix Q as a product of elementary reflectors. See the Further
Details subsection for more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
ipiv Integer array, dimension LOCq(jA+n-1). (local output)
On exit, if ipiv(i) = k, the local ith column of A(iA:iA+m-1, jA:jA+n-1)*P was the global kth
column of A(iA:iA+m-1, jA:jA+n-1). ipiv is tied to the distributed matrix A.


tau Real array, dimension LOCq(jA+MIN(m,n)-1). (local output)


This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.
lwork Integer. (local input)
lwork ≥ MAX(3,Mp0 + Nq0) + LOCq(JA+N-1)+Nq0
where

IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A )

IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL )
Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW )
Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL )
LOCq(JA+N-1) = NUMROC( JA+N-1, NB_A, MYCOL, CSRC_A, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
Further Details
The matrix Q is represented as a product of elementary reflectors:
Q = H(1) H(2) ... H(n)

Each H(i) has the following form:


H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(1:i-1) = 0 and v(i) = 1; v(i+1:m) is
stored on exit in A(iA+i-1:iA+m-1,jA+i-1).
The matrix P is represented in ipiv as follows: if ipiv(j) = i, the jth column of P is the ith canonical
unit vector.

SEE ALSO
BLACS_GRIDINFO(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGEQRF ( 3S ) PSGEQRF ( 3S )

NAME
PSGEQRF, PCGEQRF – Computes a QR factorization of a real or complex distributed matrix

SYNOPSIS
CALL PSGEQRF (m, n, A, iA, jA, descA, tau, work, lwork, info)
CALL PCGEQRF (m, n, A, iA, jA, descA, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGEQRF and PCGEQRF compute a QR factorization of a real or complex distributed m-by-n matrix of the
form:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)=Q*R
These routines require square block decomposition (MB_A = NB_A, as defined in the comments which
follow).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGEQRF, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the order of the distributed submatrix sub(A)). m must be ≥
0.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, the elements on and above the diagonal of sub(A) contain the (MIN(m,n)-by-n) upper
trapezoidal matrix R (R is upper triangular if m ≥ n); the elements below the diagonal, with the
array tau, represent the orthogonal matrix Q as a product of elementary reflectors. See the Further
Details subsection for more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
tau Real array, dimension LOCq(jA+MIN(m,n)-1). (local output)
This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.


lwork Integer. (local input)


lwork ≥ NB_A*(Mp0 + Nq0 + NB_A)
where

IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A )

IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL )
Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW )
Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
Further Details
The matrix Q is represented as a product of elementary reflectors:
Q = H(jA) H(jA+1) ... H(jA+k-1)

where k = MIN(m,n).
Each H(i) has the following form:
H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(1:i-1)=0 and v(i)=1; v(i+1:m) is stored on
exit in A(iA+i-1:iA+m-1,jA+i-1), and tau is stored in tau(jA+i-1).

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGERQF ( 3S ) PSGERQF ( 3S )

NAME
PSGERQF, PCGERQF – Computes an RQ factorization of a real or complex distributed matrix

SYNOPSIS
CALL PSGERQF (m, n, A, iA, jA, descA, tau, work, lwork, info)
CALL PCGERQF (m, n, A, iA, jA, descA, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGERQF and PCGERQF compute an RQ factorization of a real or complex distributed m-by-n matrix:
sub(A) = A(iA:iA+m-1,jA:jA+n-1) = R * Q
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGERQF, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the order of the distributed submatrix sub(A)).
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, if m ≤ n, the upper triangle of sub(A) contains the m-by-m upper triangular matrix R. If m
≥ n, the elements on and above the (m-n)-th subdiagonal contain the m-by-n upper trapezoidal
matrix R; the remaining elements, with the array tau, represent the orthogonal matrix Q as a product
of elementary reflectors (see the Further Details subsection).
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (global and local input)
The array descriptor for the distributed matrix A.
tau Real array, dimension LOCp(M_A). (local output)
This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal and optimal lwork.
lwork Integer. (local input)
lwork ≥ MB_A * (Mp0 + Nq0 + MB_A)
where


IROFF = MOD( IA-1, MB_A ), ICOFF = MOD( JA-1, NB_A )

IAROW = INDXG2P( IA, MB_A, MYROW, RSRC_A, NPROW )
IACOL = INDXG2P( JA, NB_A, MYCOL, CSRC_A, NPCOL )
Mp0 = NUMROC( M+IROFF, MB_A, MYROW, IAROW, NPROW )
Nq0 = NUMROC( N+ICOFF, NB_A, MYCOL, IACOL, NPCOL )

and NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
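This error encoding is the same for all routines in this section; decoding it can be sketched as follows (illustrative plain Python, not part of the library):

```python
def decode_info(info):
    """Decode a negative info value into (argument index, entry index).

    info = -(i*100+j): the ith argument is an array whose jth entry was illegal.
    info = -i (i < 100): the ith scalar argument was illegal.
    """
    if info >= 0:
        raise ValueError("only negative info values encode argument errors")
    code = -info
    if code >= 100:
        return code // 100, code % 100  # (array argument, offending entry)
    return code, None                   # (scalar argument, no entry)
```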
Further Details
The matrix Q is represented as a product of elementary reflectors:
Q = H(iA) H(iA+1) ... H(iA+k-1)

where k = MIN(m,n).
Each H(i) has the following form:
H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(n-k+i+1:n) = 0 and v(n-k+i) = 1;
v(1:n-k+i-1) is stored on exit in A(iA+m-k+i-1, jA:jA+n-k+i-2), and tau is stored in
TAU(iA+m-k+i-1).
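A reflector in this form never needs to be built explicitly: H * x can be computed from tau and v alone. An illustrative plain-Python sketch (with tau = 2/(v'v), applying H twice returns the original vector):

```python
def apply_reflector(tau, v, x):
    # Compute H*x = x - tau * v * (v'x) without forming H = I - tau*v*v'.
    dot = sum(vi * xi for vi, xi in zip(v, x))
    return [xi - tau * vi * dot for vi, xi in zip(v, x)]
```

With tau = 2/(v'v), H is an orthogonal reflection, so H*v = -v and H*(H*x) = x.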
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGESV ( 3S ) PSGESV ( 3S )

NAME
PSGESV, PCGESV – Computes the solution to a real or complex system of linear equations

SYNOPSIS
CALL PSGESV (n, nrhs, A, iA, jA, descA, ipiv, B, iB, jB, descB, info)
CALL PCGESV (n, nrhs, A, iA, jA, descA, ipiv, B, iB, jB, descB, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGESV and PCGESV compute the solution to a real or complex system of linear equations:
sub(A) X = sub(B)
where sub(A)=A(iA:iA+n-1,jA:jA+n-1) is an n-by-n distributed matrix, and X and
sub(B)=B(iB:iB+n-1,jB:jB+nrhs-1) are n-by-nrhs distributed matrices.
The LU decomposition with partial pivoting and row interchanges is used to factor sub(A) as sub(A) = P *
L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. L and U are
stored in sub(A). The factored form of sub(A) is then used to solve the system of equations sub(A)X=sub(B).
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments, the
underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
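This page names eight descriptor elements but a descriptor has nine entries; in the standard ScaLAPACK layout the first entry is a descriptor type DTYPE_A (1 for dense matrices), followed by the elements listed below. As an orientation aid only, a sketch (the exact ordering is an assumption here, taken from the ScaLAPACK convention rather than from this page):

```python
# Illustrative sketch of a 9-element array descriptor (assumed ScaLAPACK order):
#   1 DTYPE_A  descriptor type (1 for dense matrices)
#   2 CTXT_A   BLACS context handle
#   3 M_A      rows,               4 N_A     columns
#   5 MB_A     row blocking factor, 6 NB_A   column blocking factor
#   7 RSRC_A   source process row,  8 CSRC_A source process column
#   9 LLD_A    local leading dimension
def make_desc(ctxt, m, n, mb, nb, rsrc, csrc, lld):
    return [1, ctxt, m, n, mb, nb, rsrc, csrc, lld]
```

In real code the descriptor is filled by DESCINIT(3S), which also validates the entries.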
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.




LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGESV, the following real arguments must be
complex:
n Integer. (global input)
The number of rows and columns to be operated on (the order of the distributed submatrix sub(A)).
n must be ≥ 0.
nrhs Integer. (global input)
The number of right-hand sides (the number of columns of the distributed submatrix sub(B)). nrhs
must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the n-by-n distributed matrix sub(A) to be factored.
On exit, this array contains the local pieces of the factors L and U from the factorization sub(A) =
P*L*U; the unit diagonal elements of L are not stored.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
ipiv Integer array, dimension ( LOCp(M_A)+MB_A ). (local output)
This array contains the pivoting information ipiv(i), which is the global row that local row i was
swapped with. This array is tied to the distributed matrix A.




B Real pointer into the local memory to an array of dimension (LLD_B, LOCq(jB+nrhs-1)). (local
input/local output)
On entry, the right hand side distributed matrix sub(B).
On exit, if info=0, sub(B) is overwritten by the solution distributed matrix X.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be operated
on.
descB Integer array of dimension 9. (input)
The array descriptor for the distributed matrix B.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If info = K, U(iA+K-1,jA+K-1) is exactly 0. The factorization has been completed,
but the factor U is exactly singular, so the solution could not be computed.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), DESCINIT(3S), INDXG2P(3S), NUMROC(3S)



PSGETRF ( 3S ) PSGETRF ( 3S )

NAME
PSGETRF, PCGETRF – Computes an LU factorization of a real or complex distributed matrix

SYNOPSIS
CALL PSGETRF (m, n, A, iA, jA, descA, ipiv, info)
CALL PCGETRF (m, n, A, iA, jA, descA, ipiv, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGETRF and PCGETRF compute an LU factorization of a real or complex general m-by-n distributed
matrix of the form:
sub(A)=A(iA:iA+m-1,jA:jA+n-1)
by using partial pivoting with row interchanges.
The factorization has the following form:
sub(A) = P * L * U
P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and
U is upper triangular (upper trapezoidal if m < n). L and U are stored in sub(A).
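The factorization step itself can be illustrated serially on a small dense matrix. The sketch below is plain Python with 0-based indexing, not the distributed routine; it shows right-looking elimination with partial pivoting, with L and U packed into one array and the unit diagonal of L implicit:

```python
def lu_partial_pivoting(A):
    """Factor A in place into P*L*U; return the 0-based row-swap list."""
    n = len(A)
    piv = list(range(n))
    for k in range(n):
        # Choose the pivot row: largest |A[i][k]| for i >= k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        piv[k] = p
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                  # multiplier, stored in the L part
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]    # update the trailing submatrix
    return piv
```

Undoing the recorded swaps on L*U reproduces the original matrix.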
This is the right-looking Parallel Level 3 BLAS version of the algorithm.
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.




CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGETRF, the following real arguments must be
complex:
m Integer. (global input)
The number of rows to be operated on (the number of rows of the distributed submatrix sub(A)). m
must be ≥ 0.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the m-by-n distributed matrix sub(A) to be factored.
On exit, this array contains the local pieces of the factors L and U from the factorization sub(A) =
P*L*U; the unit diagonal elements of L are not stored.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
ipiv Integer array, dimension ( LOCp(M_A)+MB_A). (local output)
This array contains the pivoting information. ipiv(i) is the global row that local row i was swapped
with. This array is tied to the distributed matrix A.




info Integer. (global output)


info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If info = K, U(iA+K-1,jA+K-1) is exactly 0. The factorization has been completed,
but the factor U is exactly singular, and division by 0 will occur if it is used to solve a
system of equations.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S)



PSGETRI ( 3S ) PSGETRI ( 3S )

NAME
PSGETRI, PCGETRI – Computes the inverse of a real or complex distributed matrix

SYNOPSIS
CALL PSGETRI (n, A, iA, jA, descA, ipiv, work, lwork, iwork, liwork, info)
CALL PCGETRI (n, A, iA, jA, descA, ipiv, work, lwork, iwork, liwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGETRI and PCGETRI compute the inverse of a real or complex distributed matrix by using the LU
factorization computed by PSGETRF(3S) or PCGETRF(3S). The method inverts U and then computes the
inverse of
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
denoted InvA, by solving the system InvA*L = inv(U) for InvA.
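For the no-pivoting case (P = I), the inversion method just described can be sketched serially: invert U by back substitution, then solve InvA*L = inv(U) for InvA. Plain Python with 0-based indexing, illustrative only:

```python
def inverse_via_lu(L, U):
    """Compute inv(A) from A = L*U (unit lower triangular L, upper triangular U,
    no pivoting): invert U, then solve InvA * L = inv(U) for InvA."""
    n = len(U)
    # Invert U by back substitution, one column at a time.
    invU = [[0.0] * n for _ in range(n)]
    for j in range(n):
        invU[j][j] = 1.0 / U[j][j]
        for i in range(j - 1, -1, -1):
            s = sum(U[i][k] * invU[k][j] for k in range(i + 1, j + 1))
            invU[i][j] = -s / U[i][i]
    # Solve X * L = invU column by column, right to left; L has a unit diagonal,
    # so X[i][j] = invU[i][j] - sum over k > j of X[i][k] * L[k][j].
    X = [row[:] for row in invU]
    for j in range(n - 2, -1, -1):
        for i in range(n):
            X[i][j] -= sum(X[i][k] * L[k][j] for k in range(j + 1, n))
    return X
```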
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).




Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGETRI, the following real arguments must be
complex:
n Integer. (global input)
The number of rows and columns to be operated on (the order of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the factors L and U obtained from the factorization sub(A)=P*L*U
computed by PSGETRF(3S).
On exit, if info = 0, sub(A) contains the inverse of the original distributed matrix sub(A).
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
ipiv Integer array, dimension (LOCp(M_A)+MB_A). (local output)
This array keeps track of the pivoting information. ipiv(i) is the global row index that the local
row i was swapped with. This array is tied to the distributed matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, if info = 0, work(1) returns the minimal and optimal lwork.
lwork Integer. (local input)
lwork=LOCp(n+MOD(iA-1,MB_A))*NB_A. lwork is used to keep a copy of (at maximum) an
entire column block of sub(A).
iwork Integer array, dimension (liwork). (local workspace)
On exit, if info = 0, iwork(1) returns the minimal and optimal liwork.




liwork Integer. (local input)
The dimension of the array iwork, which is used as workspace for physically transposing the pivots.
In the following, LCM is the least common multiple of the process rows and columns (NPROW and NPCOL):

if NPROW == NPCOL then
   liwork = LOCq( M_A + MOD(iA-1, MB_A) ) + MB_A
else if PIVROC == ’C’ then
   liwork = LOCq( M_A + MOD(iA-1, MB_A) ) +
            MB_A*CEIL(CEIL(LOCp(M_A)/MB_A)/(LCM/NPROW))
end if
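The formula above can be evaluated directly; the following plain-Python sketch is illustrative only. The locq and locp arguments stand in for NUMROC-style counts for the calling process, and PIVROC is taken as ’C’ by default (both are assumptions of this sketch, not part of the calling interface):

```python
from math import ceil, gcd

def getri_liwork(M_A, MB_A, iA, locq, locp, NPROW, NPCOL, pivroc="C"):
    # LCM of the process grid dimensions, as used in the liwork formula.
    lcm = NPROW * NPCOL // gcd(NPROW, NPCOL)
    if NPROW == NPCOL:
        return locq(M_A + (iA - 1) % MB_A) + MB_A
    elif pivroc == "C":
        return (locq(M_A + (iA - 1) % MB_A)
                + MB_A * ceil(ceil(locp(M_A) / MB_A) / (lcm // NPROW)))
```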

info Integer. (global output)


info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If info = K, U(iA+K-1,jA+K-1) is exactly 0. Because the matrix is exactly singular,
the solution could not be computed.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), PCGETRF(3S), PSGETRF(3S)



PSGETRS ( 3S ) PSGETRS ( 3S )

NAME
PSGETRS, PCGETRS – Solves a real or complex distributed system of linear equations

SYNOPSIS
CALL PSGETRS (trans, n, nrhs, A, iA, jA, descA, ipiv, B, iB, jB, descB, info)
CALL PCGETRS (trans, n, nrhs, A, iA, jA, descA, ipiv, B, iB, jB, descB, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSGETRS and PCGETRS solve a system of real or complex distributed linear equations
op (sub(A)) * X = sub(B)
with a general n-by-n distributed matrix sub(A) by using the LU factorization computed by PSGETRF(3S).
sub(A) denotes
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
and op(A) = A or A’ (the transpose of A), and sub(B) denotes B(iB:iB+n-1,jB:jB+nrhs-1).
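The solve phase with a factored matrix reduces to a row permutation followed by two triangular solves. An illustrative serial sketch (plain Python, 0-based indexing; it assumes L and U packed in one array with the unit diagonal of L implicit, and piv holding 0-based row swaps, which is a convention of this sketch rather than of the library):

```python
def lu_solve(LU, piv, b):
    """Solve A*x = b given packed L\\U factors and 0-based pivot swaps."""
    n = len(LU)
    x = b[:]
    for k, p in enumerate(piv):          # apply the row interchanges to b
        x[k], x[p] = x[p], x[k]
    for i in range(n):                   # forward substitution with unit L
        x[i] -= sum(LU[i][k] * x[k] for k in range(i))
    for i in range(n - 1, -1, -1):       # back substitution with U
        x[i] = (x[i] - sum(LU[i][k] * x[k] for k in range(i + 1, n))) / LU[i][i]
    return x
```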
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).




Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCGETRS, the following real arguments must be
complex:
trans Character. (global input)
Specifies the form of the system of equations:
trans = ’N’: sub(A) * X = sub(B) (no transpose)
trans = ’T’: sub(A)’ * X = sub(B) (transpose)
trans = ’C’: sub(A)’ * X = sub(B) (conjugate transpose)
n Integer. (global input)
The number of rows and columns to be operated on (the order of the distributed submatrix
sub(A)). n must be ≥ 0.
nrhs Integer. (global input)
The number of right-hand sides (the number of columns of the distributed submatrix sub(B)). nrhs
must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the factors L and U from the factorization sub(A)=P*L*U; the
unit diagonal elements of L are not stored.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
ipiv Integer array, dimension (LOCp(M_A)+MB_A). (local input)
This array contains the pivoting information, ipiv(i), which is the global row that local row i was
swapped with. This array is tied to the distributed matrix A.




B Real pointer into the local memory to an array of dimension (LLD_B, LOCq(jB +nrhs– 1)). (local
input/local output)
On entry, the right-hand side of distributed matrix sub(B).
On exit, sub(B) is overwritten by the solution distributed matrix X.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be operated
on.
descB Integer array of dimension 9. (input)
The array descriptor for the distributed matrix B.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S), PSGETRF(3S)



PSPOSV ( 3S ) PSPOSV ( 3S )

NAME
PSPOSV, PCPOSV – Solves a real symmetric or complex Hermitian system of linear equations

SYNOPSIS
CALL PSPOSV (uplo, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)
CALL PCPOSV (uplo, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSPOSV computes the solution to a real symmetric positive definite system of linear equations, as in the
following:
sub(A) X = sub(B)
where sub(A) denotes the following:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
sub(A) is an n-by-n symmetric positive definite distributed matrix, and X and
sub(B)=B(iB:iB+n-1,jB:jB+nrhs-1)
are n-by-nrhs distributed matrices.
In the case of PCPOSV, the matrix must be Hermitian positive definite.
The Cholesky decomposition is used to factor sub(A) in the following way:
sub(A) = U’ * U if uplo = ’U’
sub(A) = L * L’ if uplo = ’L’
U is an upper triangular matrix, and L is a lower triangular matrix. The factored form of sub(A) is then used
to solve the system of equations.
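The Cholesky step can be illustrated serially for the uplo = ’L’ case. The sketch below is plain Python with 0-based indexing, not the distributed routine; it also shows the "leading minor not positive definite" failure mode that these routines report through info > 0:

```python
def cholesky_lower(A):
    """Return lower triangular L with A = L * L' for symmetric positive
    definite A (serial sketch of the uplo = 'L' case)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            # Mirrors the info > 0 condition: leading minor of order j+1 fails.
            raise ValueError("leading minor of order %d is not positive definite" % (j + 1))
        L[j][j] = s ** 0.5
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```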
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).




M_A The number of rows in the distributed matrix.


N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCPOSV, the following real arguments must be
complex:
uplo Character. (global input)
uplo = ’U’: Upper triangle of sub(A) is stored.
uplo = ’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of rows and columns to be operated on (the order of the distributed submatrix
sub(A)). n must be ≥ 0.
nrhs Integer. (global input)
The number of right-hand sides (the number of columns of the distributed submatrix sub(B)). nrhs
must be ≥ 0.




A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the n-by-n symmetric distributed matrix sub(A) to be factored.
If uplo = ’U’, the leading n-by-n upper triangular part of sub(A) contains the upper triangular part of
the matrix, and its strictly lower triangular part is not referenced.
If uplo = ’L’, the leading n-by-n lower triangular part of sub(A) contains the lower triangular part of
the distributed matrix, and its strictly upper triangular part is not referenced.
On exit, if info = 0, this array contains the local pieces of the factor U or L from the Cholesky
factorization sub(A) = U’ * U or sub(A) = L * L’.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
B Real pointer into the local memory to an array of dimension (LLD_B, LOCq(jB+nrhs-1)). (local
input/local output)
On entry, the right-hand side distributed matrix sub(B).
On exit, if info = 0, sub(B) is overwritten by the solution distributed matrix X.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be operated
on.
descB Integer array of dimension 9. (input)
The array descriptor for the distributed matrix B.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and its jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If info = K, the leading minor of order K, A(iA:iA+K-1,jA:jA+K-1), is not positive
definite. The factorization could not be completed, and the solution could not be
computed.




NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)



PSPOTRF ( 3S ) PSPOTRF ( 3S )

NAME
PSPOTRF, PCPOTRF – Computes the Cholesky factorization of a real symmetric or complex Hermitian
positive definite distributed matrix

SYNOPSIS
CALL PSPOTRF (uplo, n, A, iA, jA, descA, info)
CALL PCPOTRF (uplo, n, A, iA, jA, descA, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSPOTRF computes the Cholesky factorization of an n-by-n real symmetric positive definite distributed
matrix of the form:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
PCPOTRF computes the Cholesky factorization of a Hermitian positive definite distributed matrix.
The factorization has the following form, where U is an upper triangular matrix and L is a lower triangular matrix:
sub(A)=U’ * U if uplo=’U’
sub(A)=L * L’ if uplo=’L’
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.




LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)
LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCPOTRF, the following real arguments must be
complex:
uplo Character. (global input)
uplo= ’U’: Upper triangle of sub(A) is stored;
uplo = ’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of rows and columns to be operated on (the order of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the n-by-n symmetric distributed matrix sub(A) to be factored.
If uplo = ’U’, the leading n-by-n upper triangular part of the matrix sub(A) contains the upper
triangular matrix, and its strictly lower triangular part of sub(A) is not referenced.
If uplo = ’L’, the leading n-by-n lower triangular part of the matrix sub(A) contains the lower
triangular matrix, and the strictly upper triangular part of sub(A) is not referenced.
On exit, if uplo = ’U’, the upper triangular part of the distributed matrix contains the Cholesky
factor U; if uplo = ’L’, the lower triangular part of the distributed matrix contains the Cholesky
factor L.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.




info Integer. (global output)


info = 0 Successful exit.
info < 0 If the ith argument is an array and the j-entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = – i.
info > 0 If info = K, the leading minor of order K, A(iA:iA+K-1,jA:jA+K-1), is not positive
definite, and the factorization could not be completed.
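The info encoding above can be decoded mechanically. The following Python sketch (the helper name is invented for illustration) applies the man page's rules:

```python
def decode_info(info):
    """Interpret PSPOTRF's info code per the man page conventions."""
    if info == 0:
        return "successful exit"
    if info < 0:
        i, j = divmod(-info, 100)      # info = -(i*100 + j) for array arguments
        if i > 0 and j > 0:
            return "argument %d: entry %d had an illegal value" % (i, j)
        return "argument %d had an illegal value" % -info   # scalar argument
    return "leading minor of order %d is not positive definite" % info

print(decode_info(-302))   # -> argument 3: entry 2 had an illegal value
print(decode_info(4))      # -> leading minor of order 4 is not positive definite
```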

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)



PSPOTRI ( 3S ) PSPOTRI ( 3S )

NAME
PSPOTRI, PCPOTRI – Computes the inverse of a real symmetric or complex Hermitian positive definite
distributed matrix

SYNOPSIS
CALL PSPOTRI (uplo, n, A, iA, jA, descA, info)
CALL PCPOTRI (uplo, n, A, iA, jA, descA, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSPOTRI computes the inverse of a real symmetric positive definite distributed matrix of the form:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
by using the Cholesky factorization sub(A) = U**T * U or L * L**T computed by PSPOTRF(3S).
PCPOTRI computes the inverse of a complex Hermitian positive definite matrix using the output from
PCPOTRF(3S).
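On a single process the same computation can be sketched with NumPy (a serial analogue for exposition; the distributed routines perform this block-cyclically across the process grid):

```python
import numpy as np

# Serial analogue of PSPOTRF followed by PSPOTRI (a sketch, not the library code).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                # symmetric positive definite
L = np.linalg.cholesky(A)                 # PSPOTRF step: sub(A) = L * L**T
Linv = np.linalg.inv(L)
Ainv = Linv.T @ Linv                      # PSPOTRI step: inv(A) = inv(L)**T * inv(L)
print(np.allclose(Ainv @ A, np.eye(2)))  # -> True
```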
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
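For one dimension, the entry-to-process mapping that the description vector encodes can be sketched as follows (0-based indices; an expository Python sketch, not the library's internal layout code):

```python
def g2l(g, nb, nprocs, src=0):
    """Map 0-based global index g to (owning process, 0-based local index)
    for block size nb cyclically distributed over nprocs processes."""
    blk = g // nb                          # block containing the entry
    proc = (src + blk) % nprocs            # block-cyclic owner of that block
    local = (blk // nprocs) * nb + g % nb  # offset in the owner's local array
    return proc, local

# 10 rows, MB = 2, 2 process rows: rows 0,1,4,5,8,9 land on process 0
print([g2l(g, 2, 2) for g in range(10)])
```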
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).


Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCPOTRI, the following real arguments must be
complex:
uplo Character. (global input)
uplo = ’U’: Upper triangle of sub(A) is stored.
uplo = ’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the triangular factor U or L from the Cholesky factorization of the
distributed matrix sub(A) = U**T * U or L * L**T, as computed by PSPOTRF(3S).
On exit, the local pieces of the upper or lower triangle of the (symmetric) inverse of sub(A),
overwriting the input factor U or L.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.


info > 0 If info = i, the (i,i) element of the factor U or L is 0, and the inverse could not be
computed.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S), PSPOTRF(3S)



PSPOTRS ( 3S ) PSPOTRS ( 3S )

NAME
PSPOTRS, PCPOTRS – Solves a real symmetric positive definite or complex Hermitian positive definite
system of linear equations

SYNOPSIS
CALL PSPOTRS (uplo, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)
CALL PCPOTRS (uplo, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSPOTRS solves a real symmetric positive definite system of linear equations of the form:
sub(A) * X = sub(B)
where sub(A) denotes the following:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
sub(A) is an n-by-n symmetric positive definite distributed matrix whose Cholesky factorization,
computed by PSPOTRF(3S), has one of the following forms:
sub(A) = U**T * U
or
sub(A) = L * L**T
sub(B) denotes the following distributed matrix B:
sub(B)=B(iB:iB+n-1,jB:jB+nrhs-1)
PCPOTRS requires a Hermitian positive definite matrix.
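A serial analogue of the solve step can be sketched with NumPy (for exposition only; the hypothetical `forward`/`backward` helpers spell out the two triangular substitutions that follow the factorization):

```python
import numpy as np

# Serial analogue of PSPOTRS (sketch): with A = L * L**T from the
# factorization step, solve A * x = b by two triangular substitutions.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([6.0, 5.0])
L = np.linalg.cholesky(A)                     # from the PSPOTRF step

def forward(L, b):                            # solve L*y = b, L lower triangular
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward(U, b):                           # solve U*x = b, U upper triangular
    x = np.zeros_like(b)
    for i in reversed(range(len(b))):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

x = backward(L.T, forward(L, b))              # L * (L**T * x) = b
print(x)                                      # -> [1. 1.]
```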
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.


NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCPOTRS, the following real arguments must be
complex:
uplo Character. (global input)
uplo = ’U’: Upper triangle of sub(A) is stored.
uplo = ’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
nrhs Integer. (global input)
The number of right-hand sides (the number of columns of the distributed submatrix sub(B)). nrhs
must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, this array contains the factors L or U from the Cholesky factorization
sub(A) = L * L**T or sub(A) = U**T * U, as computed by PSPOTRF(3S).
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.


jA Integer. (global input)


The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
B Real pointer into the local memory to an array of dimension (LLD_B, LOCq(jB+nrhs-1)). (local
input/local output)
On entry, the right-hand side distributed matrix sub(B).
On exit, this array contains the local pieces of the solution distributed matrix X.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be operated
on.
descB Integer array of dimension 9. (input)
The array descriptor for the distributed matrix B.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the jth entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = -i.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S), PSPOTRF(3S)



PSSYEVX ( 3S ) PSSYEVX ( 3S )

NAME
PSSYEVX – Computes selected eigenvalues and eigenvectors of a real symmetric matrix

SYNOPSIS
CALL PSSYEVX (jobZ, range, uplo, n, A, iA, jA, descA, vl, vu, il, iu, abstol, m, nZ, w,
orfac, Z, iZ, jZ, descZ, work, lwork, iwork, liwork, ifail, iclustr, gap, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSSYEVX computes selected eigenvalues and, optionally, eigenvectors of a real symmetric matrix A by
calling the recommended sequence of ScaLAPACK routines. Eigenvalues/vectors can be selected by
specifying a range of values or a range of indices for the desired eigenvalues.
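The three selection modes can be sketched serially with numpy.linalg.eigh (an expository analogue, not ScaLAPACK; the interval-end convention assumed here is LAPACK's half-open (vl, vu]):

```python
import numpy as np

# Serial sketch of PSSYEVX's selection modes using numpy.linalg.eigh.
A = np.diag([1.0, 2.0, 3.0, 4.0])
w, Z = np.linalg.eigh(A)              # range = 'A': all eigenvalues, ascending

vl, vu = 1.5, 3.5
sel_v = w[(w > vl) & (w <= vu)]       # range = 'V': eigenvalues in (vl, vu]
il, iu = 2, 3
sel_i = w[il - 1:iu]                  # range = 'I': il-th through iu-th (1-based)
print(sel_v, sel_i)                   # both select the eigenvalues 2.0 and 3.0
```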
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.


Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

In describing the following arguments, NP, the number of rows local to a given processor, and NQ, the
number of columns local to a given processor, are used.
These routines accept the following arguments:
jobZ Character*1. (global input)
Specifies whether to compute the eigenvectors:
jobZ =’N’: Compute only eigenvalues.
jobZ =’V’: Compute eigenvalues and eigenvectors.
range Character*1. (global input)
range =’A’: All eigenvalues will be found.
range =’V’: All eigenvalues in the half-open interval (vl,vu) will be found.
range =’I’: The ilth through iuth eigenvalues will be found.
uplo Character. (global input)
Specifies whether the upper or lower triangular part of the symmetric matrix A is stored:
uplo =’U’: Upper triangle of sub(A) is stored.
uplo =’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Block cyclic real array. (local input/workspace)
Global dimension (n,n), local dimension (descA(9), NQ)
On entry, the symmetric matrix A.
If uplo=’U’, only the upper triangular part of A is used to define the elements of the symmetric
matrix.
If uplo=’L’, only the lower triangular part of A is used to define the elements of the symmetric
matrix.
On exit, the lower triangle (if uplo=’L’) or the upper triangle (if uplo=’U’) of A, including the
diagonal, is destroyed.


iA Integer. (global input)


The global row index of A, which points to the beginning of the submatrix that will be operated
on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
vl Real. (global input)
If range=’V’, the lower bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
vu Real. (global input)
If range =’V’, the upper bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
il Integer. (global input)
If range =’I’, the index (from smallest to largest) of the smallest eigenvalue to be returned. il ≥ 1.
If range=’A’ or ’V’, it is not referenced.
iu Integer. (global input)
If range =’I’, the index (from smallest to largest) of the largest eigenvalue to be returned.
min(il,n) ≤ iu ≤ n. If range =’A’ or ’V’, it is not referenced.
abstol Real. (global input)
If jobZ=’V’, setting abstol to PSLAMCH(CONTEXT,’U’) yields the most orthogonal
eigenvectors.
This is the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as
converged when it is determined to lie in an interval [a,b] of width less than or equal to the
following:
abstol + eps * MAX(|a|,|b|)
eps is the machine precision. If abstol is ≤ 0, eps * norm(A) will be used in its place, where
norm(A) is the 1-norm of A. For most problems this is the appropriate level of accuracy to
request. For certain strongly graded matrices, greater accuracy can be obtained in very small
eigenvalues by setting abstol to some very small positive number. However, if abstol is less than
SQRT(unfl), where unfl is the underflow threshold, SQRT(unfl) will be used in its place.
See "Computing Small Singular Values of Bidiagonal Matrices with Guaranteed High Relative
Accuracy," by Demmel and Kahan, LAPACK Working Note #3, and "On the correctness of Parallel
Bisection in Floating Point" by Demmel, Dhillon, and Ren, LAPACK Working Note #70.
m Integer. (global output)
Total number of eigenvalues found. 0 ≤ m ≤ n.


nZ Integer. (global output)


Total number of eigenvectors computed. 0 ≤ nZ ≤ m. The number of columns of Z that are filled.
If jobZ is not equal to ’V’, nZ is not referenced. If jobZ is equal to ’V’, nZ = m unless the user
supplies insufficient space and PSSYEVX is not able to detect this before beginning computation.
To get all of the eigenvectors requested, the user must supply both sufficient space to hold the
eigenvectors in Z (m ≤ descZ(2)) and sufficient workspace to compute them. (See lwork below.)
PSSYEVX can always detect insufficient space without computation, unless range=’V’.
w Real array, dimension (n). (global output)
On normal exit, the first m entries contain the selected eigenvalues in ascending order.
orfac Real. (global input)
Specifies which eigenvectors should be reorthogonalized. Eigenvectors that correspond to
eigenvalues that are within tol = orfac*norm(A) of each other are reorthogonalized. However,
if the workspace is insufficient (see lwork), tol may be decreased until all eigenvectors to be
reorthogonalized can be stored in one process. No reorthogonalization will be done if orfac equals
zero. A default value of 10**(-3) is used if orfac is negative. orfac should be identical on all
processes.
Z Real array. (local output)
Global dimension (n, n), local dimension (descZ(9), NQ). If jobZ = ’V’, on normal exit the first m
columns of Z contain the orthonormal eigenvectors of the matrix that corresponds to the selected
eigenvalues. If an eigenvector fails to converge, then that column of Z contains the latest
approximation to the eigenvector, and the index of the eigenvector is returned in ifail. If jobZ =
’N’, Z is not referenced.
iZ Integer. (global input)
The global row index of the submatrix of the distributed matrix Z to operate on.
jZ Integer. (global input)
The global column index of the submatrix of the distributed matrix Z to operate on.
descZ Integer array of dimension 9. (input)
The array descriptor for the distributed matrix Z.
work Real array, dimension (work). (local workspace/output)
On output, work(1) returns the workspace needed to guarantee completion, but not orthogonality of
the eigenvectors. If the input parameters are incorrect, work(1) may also be incorrect.
This will be modified in future releases so if enough workspace is given to complete the request,
work(1) will return the amount of workspace needed to guarantee orthogonality.
lwork Integer. (local input) The following variable definitions are used to define work:


NN   = MAX( N, NB, 2 )
NEIG = number of eigenvectors requested
NB   = descA( 3 ) = descA( 4 ) = descZ( 3 ) = descZ( 4 )
descA( 5 ) = descA( 6 ) = descZ( 5 ) = descZ( 6 ) = 0
IA = JA = IZ = JZ = 1
NP  = NUMROC( N, NB, MYROW, 0, NPROW )
NP0 = NUMROC( NN, NB, 0, 0, NPROW )
NQ0 = MAX( NUMROC( NEIG, NB, 0, 0, NPCOL ), NB )
ICEIL( X, Y ) is a ScaLAPACK function returning ceiling(X/Y)

If no eigenvectors are requested (jobZ = ’N’), lwork ≥ 5*N + MAX( 5*NN, NB*(NP+1) ).
If eigenvectors are requested (jobZ = ’V’), the amount of workspace required to guarantee that all
eigenvectors are computed is the following:
lwork ≥ 5*N + MAX( 5*NN, NP0*NQ0 ) + ICEIL( NEIG, NPROW*NPCOL )*NN + 2*NB*NB
The computed eigenvectors may not be orthogonal if the minimal workspace is supplied and ortol
is too small. If you want to guarantee orthogonality (at the cost of potentially poor performance)
you should add the following to lwork:
(CLUSTERSIZE-1)*N
CLUSTERSIZE is the number of eigenvalues in the largest cluster, where a cluster is defined as a
set of close eigenvalues:
{ W(K),...,W(K+CLUSTERSIZE-1) | W(J+1) ≤ W(J) + orfac*norm(A) }
If lwork is too small to guarantee orthogonality, PSSYEVX attempts to maintain orthogonality in
the clusters with the smallest spacing between the eigenvalues. If lwork is too small to compute all
of the eigenvectors requested, no computation is performed and info = -23 is returned. Note that
when range = ’V’, PSSYEVX does not know how many eigenvectors are requested until the
eigenvalues are computed. Therefore, when range = ’V’ and as long as lwork is large enough to
allow PSSYEVX to compute the eigenvalues, PSSYEVX will compute the eigenvalues and as many
eigenvectors as it can.
Relationship between workspace, orthogonality, and performance:
If CLUSTERSIZE ≥ N/SQRT(NPROW*NPCOL), providing enough space to compute all the
eigenvectors orthogonally will cause serious degradation in performance. In the limit (i.e.
CLUSTERSIZE = N-1), PSSTEIN will perform no better than DSTEIN on one processor. For
CLUSTERSIZE = N/SQRT(NPROW*NPCOL) reorthogonalizing all eigenvectors will increase the
total execution time by a factor of 2 or more.
For CLUSTERSIZE > N/SQRT(NPROW*NPCOL), execution time will grow as the square of the
cluster size, all other factors remaining equal and assuming enough workspace. Less workspace
means less reorthogonalization but faster execution.
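The workspace bounds above can be evaluated directly. The Python sketch below reimplements the NUMROC counting rule for illustration and evaluates both formulas (the function names are invented; ICEIL is modeled with math.ceil, and MYROW = 0 is assumed):

```python
from math import ceil

def numroc(n, nb, iproc, isrcproc, nprocs):   # expository reimplementation
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    if mydist < nblocks % nprocs:
        num += nb
    elif mydist == nblocks % nprocs:
        num += n % nb
    return num

def pssyevx_lwork(n, nb, nprow, npcol, neig, jobz, myrow=0):
    """Lower bound on lwork per the man page formulas (iA = jA = 1 assumed)."""
    nn = max(n, nb, 2)
    if jobz == 'N':
        return 5*n + max(5*nn, nb*(numroc(n, nb, myrow, 0, nprow) + 1))
    np0 = numroc(nn, nb, 0, 0, nprow)
    nq0 = max(numroc(neig, nb, 0, 0, npcol), nb)
    return 5*n + max(5*nn, np0*nq0) + ceil(neig/(nprow*npcol))*nn + 2*nb*nb

# N = 100, NB = 8 on a 2x2 grid, all eigenvectors requested:
print(pssyevx_lwork(100, 8, 2, 2, 100, 'N'),
      pssyevx_lwork(100, 8, 2, 2, 100, 'V'))
```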


iwork Integer array. (local workspace)


On return, iwork(1) contains the amount of integer workspace required. If the input parameters are
incorrect, iwork(1) may also be incorrect.
liwork Integer. (local input)
Size of iwork. liwork ≥ MAX( ISIZESTEIN, ISIZESTEBZ ) + 2*N
where:

ISIZESTEIN = 3*N + NPROCS + 1
ISIZESTEBZ = MAX( 4*N, 14 )

ifail Integer array, dimension (N). (global output)


If jobZ = ’V’, then on normal exit, the first M elements of ifail are zero. If info > 0 on exit, ifail
contains the indices of the eigenvectors that failed to converge. If jobZ = ’N’, ifail is not
referenced.
iclustr Integer array, dimension (2*NPROW*NPCOL). (global output)
This array contains indices of eigenvectors that corresponds to a cluster of eigenvalues that could
not be reorthogonalized due to insufficient workspace (see lwork, orfac, and info). Eigenvectors
that correspond to clusters of eigenvalues indexed iclustr(2*I-1) to iclustr(2*I) could not be
reorthogonalized due to lack of workspace. Hence, the eigenvectors that correspond to these
clusters may not be orthogonal. iclustr() is a 0-terminated array. (iclustr(2*K).NE.0 .AND.
iclustr(2*K+1).EQ.0) if and only if K is the number of clusters. iclustr is not referenced if
jobZ =’N’.
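Reading the 0-terminated iclustr pairs back into cluster ranges can be sketched as follows (the helper name is invented for illustration):

```python
def clusters(iclustr):
    """Decode iclustr into (first, last) 1-based eigenvector index pairs;
    the array is 0-terminated, per the man page."""
    out = []
    for k in range(0, len(iclustr), 2):
        if iclustr[k] == 0:
            break
        out.append((iclustr[k], iclustr[k + 1]))
    return out

# Two unreorthogonalized clusters on a 2x2 grid (length 2*NPROW*NPCOL):
print(clusters([3, 5, 9, 10, 0, 0, 0, 0]))   # -> [(3, 5), (9, 10)]
```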
gap Real array, dimension (NPROW*NPCOL). (global output)
This array contains the gap between eigenvalues whose eigenvectors could not be reorthogonalized.
The output values in this array correspond to the clusters indicated by the iclustr array. Therefore,
the dot product between eigenvectors that corresponds to the Ith cluster may be as high as
(C*n)/GAP(I) where C is a small constant.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the jth entry had an illegal value,
info = -(i*100+j); if the ith argument is a scalar and had an illegal value, info = -i.
info > 0 If (MOD(info,2).NE.0), one or more eigenvectors failed to converge. Their indices
are stored in ifail.
If (MOD(info/2,2).NE.0), eigenvectors corresponding to one or more clusters of
eigenvalues could not be reorthogonalized because of insufficient workspace. The
indices of the clusters are stored in the iclustr array.


If (MOD(info/4,2).NE.0), space limitations prevented PSSYEVX from computing
all of the eigenvectors between vl and vu. The number of eigenvectors computed is
returned in nZ.
If (MOD(info/8,2).NE.0), PSSTEBZ failed to compute eigenvalues.
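Since a positive info packs independent bit flags, it can be decoded as below (a sketch; the message strings paraphrase the man page):

```python
def decode_pssyevx_info(info):
    """Decode PSSYEVX's positive info bit flags into messages."""
    if info <= 0:
        return []
    flags = [
        (1, "eigenvectors failed to converge (indices in ifail)"),
        (2, "clusters could not be reorthogonalized (indices in iclustr)"),
        (4, "not all eigenvectors between vl and vu computed (count in nZ)"),
        (8, "PSSTEBZ failed to compute eigenvalues"),
    ]
    return [msg for bit, msg in flags if info & bit]

print(decode_pssyevx_info(5))   # bits 1 and 4 set -> two messages
```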
Differences between PSSYEVX and SSYEVX
• A, LDA -> A, IA, JA, DESCA
• Z, LDZ -> Z, IZ, JZ, DESCZ
• Workspace needs are larger for PSSYEVX
• liwork argument added
• orfac, iclustr, and gap arguments added
• Meaning of info is changed.
PSSYEVX does not promise orthogonality for eigenvectors that are associated with tightly clustered
eigenvalues.
PSSYEVX does not reorthogonalize eigenvectors that are on different processors. The extent of
reorthogonalization is controlled by the input argument lwork.
PE 1.2.2 limitations:
IA = JA = 1
IZ = JZ = 1

The following restrictions apply on the parameters passed to DESCINIT:


• RSRC_A = CSRC_A = 0: PE 0 should own the first entry of global A.
• RSRC_Z = CSRC_Z = 0: PE 0 should own the first entry of global Z.
• M_A = M_Z: The global number of rows in A and Z must be the same.
• MB_A = MB_Z.
• NB_A = NB_Z: This and the previous restriction mean that the block-cyclic distributions of A and Z
should be based on the same block size.
• CTXT_A = CTXT_Z: A and Z must be distributed on the same context (that is, the same process grid).
SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)



PSSYGVX ( 3S ) PSSYGVX ( 3S )

NAME
PSSYGVX – Computes selected eigenvalues and eigenvectors of a real symmetric-definite generalized
eigenproblem

SYNOPSIS
CALL PSSYGVX (ibtype, jobZ, range, uplo, n, A, iA, jA, descA, B, iB, jB, descB, vl, vu,
il, iu, abstol, m, nZ, w, orfac, Z, iZ, jZ, descZ, work, lwork, iwork, liwork, ifail, iclustr,
gap, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSSYGVX computes all the eigenvalues and, optionally, eigenvectors of a real symmetric-definite
generalized eigenproblem of the form:

sub(A)*x = (lambda)*sub(B)*x,  sub(A)*sub(B)*x = (lambda)*x

or

sub(B)*sub(A)*x = (lambda)*x

Here sub(A), denoting A(iA:iA+n-1, jA:jA+n-1), is assumed to be symmetric, and sub(B), denoting
B(iB:iB+n-1, jB:jB+n-1), is assumed to be symmetric positive definite.
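For the first problem type, the standard reduction via the Cholesky factor of sub(B) can be sketched serially with NumPy (an expository analogue of the reduction PSSYGVX performs, not the distributed algorithm):

```python
import numpy as np

# Serial sketch: with B = L * L**T, the problem A*x = lambda*B*x reduces to
# the standard symmetric problem C*y = lambda*y, where C = inv(L)*A*inv(L)**T
# and x = inv(L)**T * y.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])        # symmetric positive definite
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
lam, Y = np.linalg.eigh(C)
X = Linv.T @ Y                                # eigenvectors of the original problem
print(np.allclose(A @ X, (B @ X) * lam))      # A*x = lambda*B*x holds -> True
print(np.allclose(X.T @ B @ X, np.eye(2)))    # normalization Z**T * B * Z = I -> True
```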
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclically distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclically distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block-cyclically distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.


CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

An upper bound for these quantities may be computed by:


LOCp( M ) <= ceil( ceil( M/MB_A )/NPROW )*MB_A
LOCq( N ) <= ceil( ceil( N/NB_A )/NPCOL )*NB_A
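The bound can be checked numerically against the NUMROC counting rule (re-sketched here in Python for illustration; source process 0 is assumed):

```python
from math import ceil

def numroc(n, nb, iproc, isrcproc, nprocs):   # expository reimplementation
    mydist = (nprocs + iproc - isrcproc) % nprocs
    nblocks = n // nb
    num = (nblocks // nprocs) * nb
    if mydist < nblocks % nprocs:
        num += nb
    elif mydist == nblocks % nprocs:
        num += n % nb
    return num

# Every process's share stays within ceil(ceil(M/MB_A)/NPROW)*MB_A:
for m, mb, nprow in [(100, 8, 4), (37, 5, 3), (64, 64, 2)]:
    bound = ceil(ceil(m / mb) / nprow) * mb
    assert all(numroc(m, mb, p, 0, nprow) <= bound for p in range(nprow))
print("bound holds")
```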

These routines accept the following arguments:


ibtype Integer. (global input)
Specifies the problem type to be solved:
= 1: sub(A)*x = (lambda)*sub(B)*x
= 2: sub(A)*sub(B)*x = (lambda)*x
= 3: sub(B)*sub(A)*x = (lambda)*x
jobZ Character*1. (global input)
Specifies whether to compute the eigenvectors:
jobZ =’N’: Compute only eigenvalues.
jobZ =’V’: Compute eigenvalues and eigenvectors.
range Character*1. (global input)
range =’A’: All eigenvalues will be found.
range =’V’: All eigenvalues in the half-open interval (vl,vu) will be found.
range =’I’: The ilth through iuth eigenvalues will be found.
uplo Character. (global input)
Specifies whether the upper or lower triangular part of the symmetric matrix A is stored:
uplo =’U’: Upper triangle of sub(A) is stored.
uplo =’L’: Lower triangle of sub(A) is stored.


n Integer. (global input)


The order of the matrices sub(A) and sub(B). n must be ≥ 0.
A Real pointer into local memory. (local input/output)
Real pointer into the local memory to an array of dimension (LLD_A, LOCq(JA+N-1)).
On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub(A).
If uplo = ’U’, the leading N-by-N upper triangular part of sub(A) contains the upper triangular
part of the matrix. If uplo = ’L’, the leading N-by-N lower triangular part of sub(A) contains the
lower triangular part of the matrix.
On exit, if jobz = ’V’, then if info = 0, sub(A) contains the distributed matrix Z of eigenvectors.
The eigenvectors are normalized as follows:
if ibtype = 1 or 2, Z**T*sub( B )*Z = I
if ibtype = 3, Z**T*inv( sub( B ) )*Z = I.
If jobz = ’N’, then on exit the upper triangle (if uplo=’U’) or the lower triangle (if uplo=’L’) of
sub(A), including the diagonal, is destroyed.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated
on.
jA Integer. (global input)
The global column index of A, which points to the beginning of the submatrix that will be operated
on.
descA Integer array of dimension dlen_. (global input)
The array descriptor for the distributed matrix A. If descA(CTXT_ ) is incorrect, this routine
cannot guarantee correct error reporting.
B Real pointer into local memory. (local input/output)
REAL pointer into the local memory to an array of dimension (LLD_B, LOCq(JB+N-1)).
On entry, this array contains the local pieces of the N-by-N symmetric distributed matrix sub(B).
If uplo = ’U’, the leading N-by-N upper triangular part of sub(B) contains the upper triangular
part of the matrix. If uplo = ’L’, the leading N-by-N lower triangular part of sub(B) contains the
lower triangular part of the matrix.
On exit, if info ≤ n, the part of sub(B) containing the matrix is overwritten by the triangular
factor U or L from the Cholesky factorization sub(B) = U**T*U or sub(B) = L*L**T.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated
on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be operated
on.


descB Integer array of dimension dlen_. (global input)


The array descriptor for the distributed matrix B. descB(CTXT_) must equal descA(CTXT_).
vl Real. (global input)
If range=’V’, the lower bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
vu Real. (global input)
If range =’V’, the upper bound of the interval to be searched for eigenvalues. If range =’A’ or ’I’,
it is not referenced.
il Integer. (global input)
If range =’I’, the index (from smallest to largest) of the smallest eigenvalue to be returned. il ≥ 1.
If range=’A’ or ’V’, it is not referenced.
iu Integer. (global input)
If range =’I’, the index (from smallest to largest) of the largest eigenvalue to be returned.
min(il,n) ≤ iu ≤ n. If range =’A’ or ’V’, it is not referenced.
abstol Real. (global input)
If jobZ=’V’, setting abstol to PSLAMCH(CONTEXT,’U’) yields the most orthogonal
eigenvectors.
This is the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as
converged when it is determined to lie in an interval [a,b] of width less than or equal to the
following:
abstol + eps * MAX(|a|,|b|)
eps is the machine precision. If abstol is ≤ 0, eps * norm(T) will be used in its place, where
norm(T) is the 1-norm of the tridiagonal matrix obtained by reducing A to tridiagonal form.
Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold
2*PSLAMCH(’S’), not zero. If this routine returns with ((MOD(INFO,2).NE.0).OR.
(MOD(INFO/8,2).NE.0)), indicating that some eigenvalues or eigenvectors did not converge,
try setting abstol to 2*PSLAMCH(’S’).
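The acceptance test described above can be illustrated directly. The following is a pure-Python sketch, not part of the library; the function name is hypothetical, and Python floats are double precision, whereas this routine works in single precision:

```python
import sys

def interval_converged(a, b, abstol, eps=sys.float_info.epsilon):
    """An approximate eigenvalue bracketed in [a, b] is accepted as
    converged when the interval width is at most abstol + eps*MAX(|a|,|b|)."""
    return (b - a) <= abstol + eps * max(abs(a), abs(b))

print(interval_converged(1.0, 1.0 + 1e-12, 1e-10))  # tight bracket: True
print(interval_converged(0.0, 1.0, 1e-10))          # wide bracket: False
```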
m Integer. (global output)
Total number of eigenvalues found. 0 ≤ m ≤ n.
nZ Integer. (global output)
Total number of eigenvectors computed. 0 ≤ nZ ≤ m. The number of columns of Z that are filled.
If jobZ is not equal to ’V’, nZ is not referenced. If jobZ is equal to ’V’, nZ = m unless the user
supplies insufficient space and PSSYGVX is not able to detect this before beginning computation.
To get all of the eigenvectors requested, the user must supply both sufficient space to hold the
eigenvectors in Z (m ≤ descZ(N_)) and sufficient workspace to compute them. (See lwork below.)
PSSYGVX can always detect insufficient space without computation, unless range=’V’.

w Real array, dimension (n). (global output)
On normal exit, the first m entries contain the selected eigenvalues in ascending order.
orfac Real. (global input)
Specifies which eigenvectors should be reorthogonalized. Eigenvectors that correspond to
eigenvalues that are within tol = orfac*norm(A) of each other are reorthogonalized. However,
if the workspace is insufficient (see lwork), tol may be decreased until all eigenvectors to be
reorthogonalized can be stored in one process. No reorthogonalization will be done if orfac equals
zero. A default value of 10**(-3) is used if orfac is negative. orfac should be identical on all
processes.
Z Real array. (local output)
Global dimension (n, n), local dimension (descZ(LLD_), NQ). If jobZ = ’V’, on normal exit
the first m columns of Z contain the orthonormal eigenvectors of the matrix that corresponds to the
selected eigenvalues. If an eigenvector fails to converge, then that column of Z contains the latest
approximation to the eigenvector, and the index of the eigenvector is returned in ifail. If jobZ =
’N’, Z is not referenced.
iZ Integer. (global input)
The global row index of the submatrix of the distributed matrix Z to operate on.
jZ Integer. (global input)
The global column index of the submatrix of the distributed matrix Z to operate on.
descZ Integer array of dimension 9. (input)
The array descriptor for the distributed matrix Z. descZ(CTXT_) must equal descA(CTXT_).
work Real array, dimension (lwork). (local workspace/output)
On output, work(1) returns the workspace needed to guarantee completion, but not orthogonality of
the eigenvectors. If the input parameters are incorrect, work(1) may also be incorrect.
If info ≥ 0
if jobZ = ’N’, work(1) = minimal (optimal) amount of workspace;
if jobZ = ’V’, work(1) = minimal amount of workspace required to guarantee orthogonal
eigenvectors on the given input matrix with the given ortol. In version 1.0, work(1) = the
minimal workspace required to compute eigenvalues.
If info<0, then
if jobZ=’N’, work(1) equals the minimal (optimal) amount of workspace
if jobZ=’V’
if range=’A’ or range=’I’, then work(1) equals the minimal workspace required
to compute all eigenvectors (no guarantee on orthogonality).

if range=’V’, then work(1) equals the minimal workspace required to compute
N_Z=DESCZ(N_) eigenvectors (no guarantee on orthogonality). In version 1.0,
work(1) equals the minimal workspace required to compute eigenvalues.
lwork Integer. (local input) The following variable definitions are used to define work:
NN = MAX( N, NB, 2 )
NEIG = number of eigenvectors requested
NB = descA( MB_ ) = descA( NB_ ) = descZ( MB_ ) = descZ( NB_ )
descA( RSRC_ ) = descA( CSRC_ ) = descZ( RSRC_ ) = descZ( CSRC_ ) = 0
NP0 = NUMROC( NN, NB, 0, 0, NPROW )
MQ0 = NUMROC( MAX( NEIG, NB, 2 ), NB, 0, 0, NPCOL )
ICEIL( X, Y ) is a ScaLAPACK function returning ceiling(X/Y)

If no eigenvectors are requested (jobZ = ’N’), lwork ≥ 5*N + MAX( 5*NN, NB*(NP0+1) ).
If eigenvectors are requested (jobZ = ’V’), the amount of workspace required to guarantee that all
eigenvectors are computed is the following:
lwork ≥ 5*N + MAX( 5*NN, NP0*MQ0 + 2*NB*NB ) + ICEIL( NEIG, NPROW*NPCOL )*NN
The computed eigenvectors may not be orthogonal if the minimal workspace is supplied and ortol
is too small. If you want to guarantee orthogonality (at the cost of potentially poor performance)
you should add the following to lwork:
(CLUSTERSIZE-1)*N
CLUSTERSIZE is the number of eigenvalues in the largest cluster, where a cluster is defined as a
set of close eigenvalues:
{W(K),...,W(K+CLUSTERSIZE-1)|W(J+1)≤ W(J)+orfac*norm(A)}
If lwork is too small to guarantee orthogonality, PSSYGVX attempts to maintain orthogonality in
the clusters with the smallest spacing between the eigenvalues. If lwork is too small to compute all
of the eigenvectors requested, no computation is performed and info = – 23 is returned. Note that
when range = ’V’, PSSYGVX does not know how many eigenvectors are requested until the
eigenvalues are computed. Therefore, when range = ’V’ and as long as lwork is large enough to
allow PSSYGVX to compute the eigenvalues, PSSYGVX will compute the eigenvalues and as many
eigenvectors as it can.
Relationship between workspace, orthogonality, and performance:
If CLUSTERSIZE ≥ N/SQRT(NPROW*NPCOL), providing enough space to compute all the
eigenvectors orthogonally will cause serious degradation in performance. In the limit (i.e.
CLUSTERSIZE = N-1), PSSTEIN will perform no better than SSTEIN on one processor. For
CLUSTERSIZE = N/SQRT(NPROW*NPCOL) reorthogonalizing all eigenvectors will increase the
total execution time by a factor of 2 or more.

For CLUSTERSIZE > N/SQRT(NPROW*NPCOL), execution time will grow as the square of the
cluster size, all other factors remaining equal and assuming enough workspace. Less workspace
means less reorthogonalization but faster execution.
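The lwork lower bounds above can be sketched in a few lines of pure Python. This is illustrative only; the function names are hypothetical, and NP0 and MQ0 (which the text defines through NUMROC) are taken here as inputs:

```python
def iceil(x, y):
    """ICEIL(X, Y): ceiling of X/Y for positive integers."""
    return -(-x // y)

def pssygvx_lwork_bound(n, nb, neig, np0, mq0, nprow, npcol, jobz):
    """Lower bound on lwork per the formulas above."""
    nn = max(n, nb, 2)
    if jobz == 'N':                      # eigenvalues only
        return 5 * n + max(5 * nn, nb * (np0 + 1))
    # eigenvectors requested; add (CLUSTERSIZE-1)*N to guarantee orthogonality
    return (5 * n + max(5 * nn, np0 * mq0 + 2 * nb * nb)
            + iceil(neig, nprow * npcol) * nn)

# Example: n = 100, nb = 16 on a 2x2 grid, where NUMROC gives NP0 = MQ0 = 52
print(pssygvx_lwork_bound(100, 16, 100, 52, 52, 2, 2, 'V'))  # 6216
```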
iwork Integer array. (local workspace)
On return, iwork(1) contains the amount of integer workspace required. If the input parameters are
incorrect, iwork(1) may also be incorrect.
liwork Integer. (local input)
Size of iwork. liwork ≥ 6*NNP
where:
NNP = MAX( N, NPROW*NPCOL+1, 4 )

ifail Integer array, dimension (N). (global output)
ifail provides additional information when info.NE.0. If (MOD(INFO/16,2).NE.0), then
ifail(1) indicates the order of the smallest minor which is not positive definite. If
(MOD(INFO,2).NE.0) on exit, then ifail contains the indices of the eigenvectors that failed to
converge.
If neither of the above error conditions hold and jobZ=’V’, then the first m elements of ifail are set
to 0.
iclustr Integer array, dimension (2*NPROW*NPCOL). (global output)
This array contains indices of eigenvectors that correspond to a cluster of eigenvalues that could
not be reorthogonalized due to insufficient workspace (see lwork, orfac, and info). Eigenvectors
that correspond to clusters of eigenvalues indexed iclustr(2*I-1) to iclustr(2*I) could not be
reorthogonalized due to lack of workspace. Hence, the eigenvectors that correspond to these
clusters may not be orthogonal. iclustr() is a 0-terminated array. (iclustr(2*K).NE.0 .AND.
iclustr(2*K+1).EQ.0) if and only if K is the number of clusters. iclustr is not referenced if
jobZ =’N’.
gap Real array, dimension (NPROW*NPCOL). (global output)
This array contains the gap between eigenvalues whose eigenvectors could not be reorthogonalized.
The output values in this array correspond to the clusters indicated by the iclustr array. Therefore,
the dot product between eigenvectors that correspond to the Ith cluster may be as high as
(C*n)/GAP(I) where C is a small constant.
Current limitations:

DESCA( MB_ ) = DESCA( NB_ )
IA = JA = 1
IZ = JZ = 1
DESCA( RSRC_ ) = DESCA( CSRC_ ) = 0
DESCA( M_ ) = DESCB( M_ ) = DESCZ( M_ )
DESCA( N_ ) = DESCB( N_ ) = DESCZ( N_ )
DESCA( MB_ ) = DESCB( MB_ ) = DESCZ( MB_ )
DESCA( NB_ ) = DESCB( NB_ ) = DESCZ( NB_ )
DESCA( RSRC_ ) = DESCB( RSRC_ ) = DESCZ( RSRC_ )
DESCA( CSRC_ ) = DESCB( CSRC_ ) = DESCZ( CSRC_ )

info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the j-entry had an illegal value,
info = -(i*100+j); if the ith argument is a scalar and had an illegal value, info = – i.
info > 0 If (MOD(info,2).NE.0), one or more eigenvectors failed to converge. Their indices
are stored in ifail. Send email to scalapack@cs.utk.edu.
If (MOD(info/2,2).NE.0), eigenvectors corresponding to one or more clusters of
eigenvalues could not be reorthogonalized because of insufficient workspace. The
indices of the clusters are stored in the ICLUSTR array.
If (MOD(info/4,2).NE.0), space limitations prevented PSSYGVX from computing
all of the eigenvectors between vl and vu. The number of eigenvectors computed is
returned in nZ.
If (MOD(info/8,2).NE.0), PSSTEBZ failed to compute eigenvalues. Send email
to scalapack@cs.utk.edu.
If (MOD(info/16,2).NE.0), B was not positive definite. ifail(1) indicates the order
of the smallest minor which is not positive definite.
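The info conventions above can be decoded mechanically. The following pure-Python sketch (the function name is hypothetical) mirrors the sign convention and the MOD bit tests described:

```python
def decode_pssygvx_info(info):
    """Interpret PSSYGVX's info value per the conventions above."""
    if info == 0:
        return ["successful exit"]
    if info < 0:
        code = -info
        if code >= 100:  # info = -(i*100+j): array argument i, entry j
            return ["argument %d, entry %d illegal" % (code // 100, code % 100)]
        return ["argument %d illegal" % code]  # scalar argument i
    flags = []
    if info % 2:                 # MOD(info,2).NE.0
        flags.append("eigenvectors failed to converge (see ifail)")
    if (info // 2) % 2:          # MOD(info/2,2).NE.0
        flags.append("clusters not reorthogonalized (see iclustr)")
    if (info // 4) % 2:          # MOD(info/4,2).NE.0
        flags.append("not all eigenvectors between vl and vu computed (see nZ)")
    if (info // 8) % 2:          # MOD(info/8,2).NE.0
        flags.append("PSSTEBZ failed to compute eigenvalues")
    if (info // 16) % 2:         # MOD(info/16,2).NE.0
        flags.append("B not positive definite (see ifail(1))")
    return flags
```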

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)

PSSYTRD ( 3S ) PSSYTRD ( 3S )

NAME
PSSYTRD, PCHETRD – Reduces a real symmetric or complex Hermitian distributed matrix to tridiagonal
form

SYNOPSIS
CALL PSSYTRD (uplo, n, A, iA, jA, descA, D, E, tau, work, lwork, info)
CALL PCHETRD (uplo, n, A, iA, jA, descA, D, E, tau, work, lwork, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSSYTRD reduces a real symmetric matrix sub(A) to symmetric tridiagonal form T by an orthogonal
similarity transformation:
Q’ * sub(A) * Q = T
where sub(A)= A(iA:iA+n– 1, jA:jA+n– 1).
PCHETRD requires a complex Hermitian matrix.
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).

Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)
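A pure-Python stand-in for NUMROC (a sketch of the standard block-cyclic counting rule, not the library routine itself) makes the LOCp and LOCq computations concrete:

```python
def numroc(n, nb, iproc, isrcproc, nprocs):
    """Number of elements of an n-element dimension, blocked by nb,
    owned by process iproc when the first block lives on isrcproc."""
    mydist = (nprocs + iproc - isrcproc) % nprocs  # distance from source proc
    nblocks = n // nb                              # number of full blocks
    num = (nblocks // nprocs) * nb                 # whole wraps of blocks
    extrablks = nblocks % nprocs                   # leftover full blocks
    if mydist < extrablks:
        num += nb                                  # one extra full block
    elif mydist == extrablks:
        num += n % nb                              # the trailing partial block
    return num

# 10 rows in blocks of 3 over NPROW = 2 process rows (source row 0):
# LOCp(10) is 6 on process row 0 and 4 on process row 1.
print([numroc(10, 3, p, 0, 2) for p in range(2)])  # [6, 4]
```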

These routines accept the following arguments. For PCHETRD, the following real arguments must be
complex:
uplo Character. (global input)
uplo = ’U’: Upper triangle of sub(A) is stored.
uplo = ’L’: Lower triangle of sub(A) is stored.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n–1)). (local
input/local output)
On entry, the local pieces of the symmetric distributed matrix sub(A) to be factored.
If uplo = ’U’, the leading n-by-n upper triangular part of sub(A) contains the upper triangular part of
the matrix, and its strictly lower triangular part is not referenced.
If uplo = ’L’, the leading n-by-n lower triangular part of sub(A) contains the lower triangular part of
the distributed matrix, and its strictly upper triangular part is not referenced.
On exit, if uplo = ’U’, the diagonal and first superdiagonal of sub(A) are overwritten by the
corresponding elements of the tridiagonal matrix T, and the elements above the first superdiagonal,
with the array tau, represent the orthogonal matrix Q as a product of elementary reflectors; if uplo =
’L’, the diagonal and first subdiagonal of sub(A) are overwritten by the corresponding elements of
the tridiagonal matrix T, and the elements below the first subdiagonal, with the array tau, represent
the orthogonal matrix Q as a product of elementary reflectors. See the Further Details subsection
for more information.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.

descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
D Real array of dimension LOCq(jA+n– 1). (local input/local output)
The diagonal elements of the tridiagonal matrix T: D(i) = A(i,i). D is tied to the distributed
matrix A.
E Real array of dimension LOCq(jA+n– 1). (local input/local output)
The off-diagonal elements of the tridiagonal matrix T: E(i) = A(i,i+1) if uplo = ’U’; E(i) = A(i+1,i) if
uplo = ’L’. E is tied to the distributed matrix A.
tau Real array, dimension LOCq(jA+MIN(m,n)-1). (local output)
This array contains the scalar factors tau of the elementary reflectors. tau is tied to the distributed
matrix A.
work Real array, dimension (lwork). (local workspace)
On exit, work(1) returns the minimal lwork.
lwork Integer. (local input)
The dimension of the array work.
lwork ≥ MAX(NB * (NP+1), 3*NB)
where
NB = MB_A = NB_A
NP = NUMROC( N, NB, MYROW, IAROW, NPROW )
IAROW = INDXG2P( iA, NB, MYROW, RSRC_A, NPROW )

NUMROC(3S) and INDXG2P(3S) are ScaLAPACK tool functions; MYROW, MYCOL, NPROW, and
NPCOL can be determined by calling the BLACS_GRIDINFO(3S) subroutine.
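INDXG2P maps a global index to the owning process coordinate. A pure-Python stand-in (illustrative only; iproc is accepted but unused, matching the tool function's argument list):

```python
def indxg2p(indxglob, nb, iproc, isrcproc, nprocs):
    """Process row/column owning 1-based global index indxglob in a
    block-cyclic distribution with block size nb starting on isrcproc."""
    return (isrcproc + (indxglob - 1) // nb) % nprocs

# With NB = 4, RSRC_A = 0, NPROW = 2: rows 1-4 live on process row 0,
# rows 5-8 on process row 1, rows 9-12 on row 0 again, and so on.
print([indxg2p(i, 4, 0, 0, 2) for i in (1, 4, 5, 9)])  # [0, 0, 1, 0]
```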
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the j-entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, info = – i.
Alignment Requirements
The distributed submatrix sub(A) must verify some alignment properties, namely the following expression
should be true:
( MB_A.EQ.NB_A .AND. IROFFA.EQ.ICOFFA .AND. IROFFA.EQ.0 )

with the following:


IROFFA = MOD( iA-1, MB_A ) and ICOFFA = MOD( jA-1, NB_A )
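The alignment test can be expressed directly as a pure-Python sketch (the function name is hypothetical):

```python
def sub_a_aligned(ia, ja, mb_a, nb_a):
    """True when sub(A) satisfies the alignment expression above:
    square blocks and a submatrix origin on a block boundary."""
    iroffa = (ia - 1) % mb_a   # row offset within the first block
    icoffa = (ja - 1) % nb_a   # column offset within the first block
    return mb_a == nb_a and iroffa == icoffa and iroffa == 0

print(sub_a_aligned(1, 1, 16, 16))   # True: origin on a block boundary
print(sub_a_aligned(2, 1, 16, 16))   # False: IROFFA = 1
```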

Further Details
If uplo = ’U’, the matrix Q is represented as a product of elementary reflectors
Q = H(n-1) ... H(2) H(1)

Each H(i) has the following form:


H = I - tau * v * v’

tau is a real scalar, and v is a real vector with v(i+1:n) = 0 and v(i) = 1; v(1:i-1) is stored on
exit in A(iA:iA+i-2, jA+i), and tau is stored in tau(jA+i-1).
If uplo = ’L’, the matrix Q is represented as a product of elementary reflectors
Q = H(1) H(2) ... H(n-1)

Each H(i) has the following form:


H = I - tau * v * v’

where tau is a real scalar, and v is a real vector with v(1:i) = 0 and v(i+1) = 1; v(i+2:n) is
stored on exit in A(iA+i+1:iA+n-1, jA+i-1), and tau is stored in tau(jA+i-1).
The contents of sub(A) on exit are illustrated by the following examples with n = 5:
if uplo = ’U’: if uplo = ’L’:
( d e v2 v3 v4 ) ( d )
( d e v3 v4 ) ( e d )
( d e v4 ) ( v1 e d )
( d e ) ( v1 v2 e d )
( d ) ( v1 v2 v3 e d )

In this example, d and e denote diagonal and off-diagonal elements of T, and v1 denotes an element of the
vector defining H(i).
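The reflector form H = I - tau * v * v’ can be checked numerically. The following pure-Python sketch (not the library's representation, which stores v compactly inside sub(A)) builds one dense reflector with tau = 2/(v’v) and verifies that H*H is the identity:

```python
def reflector(v):
    """Dense H = I - tau*v*v' with tau = 2/(v'v), as a list of rows."""
    n = len(v)
    tau = 2.0 / sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - tau * v[i] * v[j]
             for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

h = reflector([1.0, 2.0, 2.0])
hh = matmul(h, h)  # a reflector is involutory: H*H = I
print(all(abs(hh[i][j] - (i == j)) < 1e-12 for i in range(3) for j in range(3)))
```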

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_GRIDINIT(3S), INDXG2P(3S), NUMROC(3S)

PSTRTRI ( 3S ) PSTRTRI ( 3S )

NAME
PSTRTRI, PCTRTRI – Computes the inverse of a real or complex upper or lower triangular distributed
matrix

SYNOPSIS
CALL PSTRTRI (uplo, diag, n, A, iA, jA, descA, info)
CALL PCTRTRI (uplo, diag, n, A, iA, jA, descA, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSTRTRI and PCTRTRI compute the inverse of a real or complex upper or lower triangular distributed
matrix of the form:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.
CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).

Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCTRTRI, the following real arguments must be
complex:
uplo Character. (global input)
Specifies whether the distributed matrix sub(A) is upper or lower triangular:
uplo = ’U’: Upper triangle of sub(A) is stored.
uplo = ’L’: Lower triangle of sub(A) is stored.
diag Character. (global input)
Specifies whether the distributed matrix sub(A) is unit triangular:
diag = ’N’: Non-unit triangular.
diag = ’U’: Unit triangular.
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)
On entry, the local pieces of the triangular matrix sub(A).
If uplo = ’U’, the leading n-by-n upper triangular part of the matrix sub(A) contains the upper
triangular matrix to be inverted, and the strictly lower triangular part of sub(A) is not referenced.
If uplo = ’L’, the leading n-by-n lower triangular part of the matrix sub(A) contains the lower
triangular matrix, and the strictly upper triangular part of sub(A) is not referenced.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be operated
on.

descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the j-entry had an illegal value, info = -(i*100+j);
if the ith argument is a scalar and had an illegal value, then info = – i.
info > 0 If info = K, A(iA+K-1,jA+K-1) is exactly 0. The triangular matrix sub(A) is
singular, and its inverse cannot be computed.

NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)

PSTRTRS ( 3S ) PSTRTRS ( 3S )

NAME
PSTRTRS, PCTRTRS – Solves a real or complex distributed triangular system

SYNOPSIS
CALL PSTRTRS (uplo, trans, diag, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)
CALL PCTRTRS (uplo, trans, diag, n, nrhs, A, iA, jA, descA, B, iB, jB, descB, info)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PSTRTRS and PCTRTRS solve a real or complex triangular system of the form
sub(A) * X = sub(B)
or
sub(A)**T * X = sub(B)
where sub(A) denotes the following:
sub(A)=A(iA:iA+n-1,jA:jA+n-1)
and sub(A) is a triangular distributed matrix of order n, and the following is an n-by-nrhs distributed matrix
denoted by sub(B):
sub(B)=B(iB:iB+n-1,jB:jB+nrhs-1)
A check is made to verify that sub(A) is nonsingular.
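Serially, the uplo = ’U’, trans = ’N’ case reduces to ordinary back-substitution. The following pure-Python sketch (dense and single-process, unlike the distributed routine; the function name is hypothetical) shows the computation and the singularity check:

```python
def solve_upper(u, b, unit_diag=False):
    """Solve U*x = b for dense upper triangular U (list of rows)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))
        if unit_diag:                      # diag = 'U': diagonal assumed to be 1
            x[i] = s
        elif u[i][i] == 0.0:               # singular: PSTRTRS reports info > 0
            raise ZeroDivisionError("zero diagonal element %d" % (i + 1))
        else:
            x[i] = s / u[i][i]
    return x

print(solve_upper([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0]))  # [1.5, 2.0]
```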
These routines require square block decomposition (MB_A = NB_A, as defined in the following comments).
A description vector is associated with each two-dimensional (2D) block-cyclicly distributed matrix. This
vector stores the information required to establish the mapping between a matrix entry and its corresponding
process and memory location.
The following comments describe the elements of a block-cyclicly distributed matrix. In these comments,
the underline character (_) should be read as "of the distributed matrix". Let A be a generic term for any 2D
block cyclicly distributed matrix. Its description vector is descA and must be initialized through a call to
DESCINIT(3S).
M_A The number of rows in the distributed matrix.
N_A The number of columns in the distributed matrix.
MB_A The blocking factor used to distribute the rows of the matrix.
NB_A The blocking factor used to distribute the columns of the matrix.
RSRC_A The process row over which the first row of the matrix is distributed.
CSRC_A The process column over which the first column of the matrix is distributed.

CTXT_A The BLACS context handle, indicating the BLACS process grid A is distributed over. The
context itself is global, but the handle (the integer value) may vary.
LLD_A The leading dimension of the local array storing the local blocks of the distributed matrix A.
LLD_A ≥ MAX(1,LOCp(M_A)).
Let K be the number of rows or columns of a distributed matrix, and assume that its process grid has
dimension p-by-q. LOCp( K ) denotes the number of elements of K that a process would receive if K were
distributed over the p processes of its process column.
Similarly, LOCq( K ) denotes the number of elements of K that a process would receive if K were distributed
over the q processes of its process row.
The values of LOCp() and LOCq() may be determined via a call to the NUMROC(3S) ScaLAPACK tool
function, as in the following:
LOCp(M) = NUMROC(M, MB_A, MYROW, RSRC_A, NPROW)

LOCq(N) = NUMROC(N, NB_A, MYCOL, CSRC_A, NPCOL)

These routines accept the following arguments. For PCTRTRS, the following real arguments must be
complex:
uplo Character. (input)
uplo = ’U’: sub(A)=A(iA:iA+n-1,jA:jA+n-1) is upper triangular.
uplo = ’L’: sub(A)=A(iA:iA+n-1,jA:jA+n-1) is lower triangular.
trans Character. (global input)
Specifies the form of the system of equations:
trans = ’N’: sub(A) * X = sub(B) (No transpose).
trans = ’T’: sub(A)**T * X = sub(B) (Transpose).
trans = ’C’: sub(A)**H * X = sub(B) (Conjugate transpose).
diag Character. (global input)
diag = ’N’: sub(A) is non-unit triangular
diag = ’U’: sub(A) is unit triangular
n Integer. (global input)
The number of columns to be operated on (the number of columns of the distributed submatrix
sub(A)). n must be ≥ 0.
nrhs Integer. (global input)
The number of right-hand sides (the number of columns of the distributed matrix sub(B)). nrhs
must be ≥ 0.
A Real pointer into the local memory to an array of dimension (LLD_A, LOCq(jA+n-1)). (local
input/local output)

If uplo = ’U’, the leading n-by-n upper triangular part of the matrix sub(A) contains the upper
triangular matrix and its strictly lower triangular part of sub(A) is not referenced.
If uplo = ’L’, the leading n-by-n lower triangular part of the matrix sub(A) contains the lower
triangular matrix, and the strictly upper triangular part of sub(A) is not referenced.
If diag = ’U’, the diagonal elements of sub(A) are also not referenced and are assumed to be 1.
iA Integer. (global input)
The global row index of A, which points to the beginning of the submatrix that will be operated
on.
jA Integer. (global input)
The global column index of A which points to the beginning of the submatrix that will be
operated on.
descA Integer array of dimension 9. (input)
The array descriptor for the distributed matrix A.
B Real pointer into the local memory to an array of dimension (LLD_B, LOCq(jB+nrhs–1)). (local
input/local output)
On entry, the right-hand side distributed matrix sub(B).
On exit, if info = 0, sub(B) is overwritten by the solution distributed matrix X.
iB Integer. (global input)
The global row index of B, which points to the beginning of the submatrix that will be operated
on.
jB Integer. (global input)
The global column index of B, which points to the beginning of the submatrix that will be
operated on.
descB Integer array of dimension 9. (input)
The array descriptor for the distributed matrix B.
info Integer. (global output)
info = 0 Successful exit.
info < 0 If the ith argument is an array and the j-entry had an illegal value,
info = -(i*100+j); if the ith argument is a scalar and had an illegal value, info = – i.
info > 0 If info = i, the ith diagonal element of sub(A) is 0, which indicates that the
submatrix is singular and the solutions have not been computed.


NOTES
BLACS_GRIDINIT(3S) must be called to initialize the virtual BLACS grid.

SEE ALSO
BLACS_GRIDINIT(3S), DESCINIT(3S), NUMROC(3S)

INTRO_SPARSE ( 3S ) INTRO_SPARSE ( 3S )

NAME
INTRO_SPARSE – Introduction to solvers for sparse linear systems

IMPLEMENTATION
UNICOS systems

DESCRIPTION

The following table lists the purpose and name of the sparse linear system routines.

Purpose Name

Assigns values to parameters in arguments iparam and rparam for DFAULTS
SITRSOL (initializes iterative solver)
Solves a real general sparse system, using a preconditioned conjugate SITRSOL
gradient-like method (iterative solver)
Factors a real sparse general matrix (direct solver, threshold pivoting) SSGETRF
Solves a real sparse general system, using the factorization computed SSGETRS
in SSGETRF (direct solver, threshold pivoting)
Factors a real sparse symmetric definite matrix (direct solver, no SSPOTRF
pivoting)
Solves a real sparse symmetric definite system, using the factorization SSPOTRS
computed in SSPOTRF (direct solver, no pivoting)
Factors a real sparse structurally symmetric matrix (direct solver, no SSTSTRF
pivoting)
Solves a real sparse structurally symmetric system, using the SSTSTRS
factorization computed in SSTSTRF (direct solver, no pivoting)

A sparse matrix is a matrix that has relatively few nonzero values. This type of matrix occurs frequently in
key computational steps of a variety of engineering and scientific applications. Most sparse matrix software
takes advantage of this "sparseness" to reduce the amount of storage and arithmetic required by keeping track
of only the nonzero entries in the matrix.

Storage Formats
Suppose that the n-by-n input matrix A has nza nonzero entries. The data structure used to represent A is a
column-oriented format, which is referred to as the sparse column format, in which the entries are grouped
by columns. In this format, the row indices of the nonzero elements in the first column are stored
contiguously in ascending order in an array irowind; then the row indices are stored for the second column,
and so on. The corresponding values are stored in an array values. A pointer array, icolptr, points to the
first entry in each column of A in irowind and values. icolptr(n+1) is set to nza+1. irowind and values are
arrays of length nza, and icolptr is of length n+1. Hence, 2nza+n+1 words of storage are required to
represent A, rather than the usual n**2 words in the corresponding dense matrix format. Moreover, in the case
when A is symmetric, there is an even more compact symmetric column pointer format, in which only the
lower triangular part of A is stored.
Suppose A is a 5-by-5 matrix with 13 nonzero elements defined as follows:

 11 0 0 41 0 
 
 0 22 32 0 52 
A =  0 32 33 43 0 
 41 0 43 44 0 
 
 0 52 0 0 55 
The full sparse column format representation of A is as follows:
values = (11 41 22 32 52 32 33 43 41 43 44 52 55 )
irowind = ( 1 4 2 3 5 2 3 4 1 3 4 2 5 )
icolptr = ( 1 3 6 9 12 14 )
Because A is symmetric, the following symmetric sparse column format representation of A also is valid:
values = (11 41 22 32 52 33 43 44 55 )
irowind = ( 1 4 2 3 5 3 4 4 5 )
icolptr = ( 1 3 6 8 9 10 )
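The sparse column format above is straightforward to construct. A pure-Python sketch (the helper function is hypothetical; it uses 1-based indices, as in the text) reproduces the full representation of the example matrix:

```python
def to_sparse_column(a):
    """Build (values, irowind, icolptr) in the 1-based sparse column
    format described above from a dense square matrix (list of rows)."""
    n = len(a)
    values, irowind, icolptr = [], [], [1]
    for j in range(n):                   # group the entries by column
        for i in range(n):               # row indices in ascending order
            if a[i][j] != 0:
                values.append(a[i][j])
                irowind.append(i + 1)    # 1-based row index
        icolptr.append(len(values) + 1)  # first entry of the next column
    return values, irowind, icolptr

a = [[11,  0,  0, 41,  0],
     [ 0, 22, 32,  0, 52],
     [ 0, 32, 33, 43,  0],
     [41,  0, 43, 44,  0],
     [ 0, 52,  0,  0, 55]]
values, irowind, icolptr = to_sparse_column(a)
print(icolptr)  # [1, 3, 6, 9, 12, 14], matching the example above
```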
Direct Versus Iterative Solution
Techniques for the solution of sparse linear systems can be divided into two broad classes: direct and
iterative.
Direct solution
An explicit factorization of the matrix is computed, and it is used to solve for a solution of the linear system
given a right-hand side. The solution obtained by direct methods is certain to be as accurate as the problem
definition.

Iterative solution
A sequence of approximations is generated iteratively, which should converge to a solution of the linear
system. Unlike direct methods, iterative methods tend to be more special-purpose, and it is well known that
no general, effective iterative algorithms exist for an arbitrary sparse linear system. However, for certain
classes of problems, the use of an appropriate iterative method can yield an approximate solution
significantly faster than direct methods. Also, iterative methods typically require less memory than direct
methods, making iterative methods the only feasible approach for some large problems. In an attempt to
compensate for the lack of robustness of any single iterative method and preconditioner, this package
provides a variety of methods and preconditioners. All are preconditioned conjugate gradient-type methods.
You can find a reference to a good introduction to these methods in the SEE ALSO section.
Analyze Phase for the Direct Sparse Solvers
In the direct solution of sparse linear systems, the structure of the input matrix usually is preprocessed prior
to the numerical factorization and the numerical solution phase. This is often referred to as the Analyze
phase. Only the structure of the matrix (that is, icolptr and irowind) is required at this stage. As
implemented in the package, the Analyze phase is further divided into the following:
• Fill-reduction reordering phase
• Symbolic factorization phase
• Execution sequence and memory management phase
Fill-reduction reordering phase
For a given sparse symmetric matrix A, the lower triangular matrix L from the LDL^T factorization of A is
generally much more dense than A because of the fill-in generated at locations in which Aij = 0. To reduce
this amount of fill-in, the routine applies an appropriate symmetric row and column permutation P to A
before carrying out the numerical factorization on PAP^T. The system to be solved is then
PAP^T y = Pb,  x = P^T y.
The reordering heuristic used in the package is based on the multiple minimum degree algorithm (see the
SEE ALSO section for a reference), which has proven to be a very effective practical method for reducing
the amount of fill-in created during the factorization. Moreover, in most problems, some of the columns of
the resultant factor L naturally have identical sparsity structure. These columns are grouped into what is
commonly referred to as a supernode, and they are processed together in subsequent stages. This results in
significant performance improvement over previous sparse matrix solvers. The supernode concept can be
relaxed further by allowing additional fill-ins in L, so that more columns can be grouped together, resulting
in fewer and larger supernodes.
Experience shows that more often than not this trade-off of additional fill-ins (and therefore, more
operations) for fewer but larger supernodes reduces the execution time overall.
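The effect of the ordering on fill-in can be seen on a small "arrow" pattern, in which one node is coupled to all the others. The following Python sketch performs symbolic symmetric elimination (the graph model of factorization, in which eliminating a node makes its remaining neighbors a clique) and counts the fill edges created. It is an illustrative sketch, not the library's multiple minimum degree implementation:

```python
def fill_in(adj, order):
    """Count fill edges created by symmetric elimination in the given
    order.  adj maps each node to the set of its neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # private copy
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in list(adj):
            adj[u].discard(v)
        nbrs = [u for u in nbrs if u in adj]
        for i, u in enumerate(nbrs):       # make the remaining neighbors
            for w in nbrs[i + 1:]:         # of v a clique
                if w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1
    return fill

# 5-node "arrow" pattern: node 1 is coupled to every other node.
arrow = {1: {2, 3, 4, 5}, 2: {1}, 3: {1}, 4: {1}, 5: {1}}
bad = fill_in(arrow, [1, 2, 3, 4, 5])    # eliminate the hub first
good = fill_in(arrow, [2, 3, 4, 5, 1])   # eliminate the hub last
```

Eliminating the hub first connects all remaining pairs (6 fill edges, a completely dense factor); eliminating it last creates no fill at all. Fill-reducing orderings such as minimum degree automatically prefer the second ordering.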
Symbolic factorization phase
Given the structure of the input matrix and a permutation matrix P as determined from the fill-reduction
reordering phase, the symbolic factorization phase builds the data structure for the nonzero entries of L.
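As an illustration, the structure of L for the 5-by-5 example matrix at the start of this introduction can be computed symbolically with the standard column-merge rule: each column's subdiagonal structure, minus its first entry, is merged into the column named by that first entry (its parent in the elimination tree). The sketch below is illustrative only, not the library's symbolic factorization:

```python
def symbolic_factor(n, lower):
    """Given the strict lower triangular structure of each column of A
    (lower[j] = row indices below the diagonal, 1-based), return the
    structure of L from symbolic elimination in natural order."""
    struct = {j: set(lower[j]) for j in range(1, n + 1)}
    for j in range(1, n + 1):
        rows = sorted(struct[j])
        if rows:
            k = rows[0]                 # parent column in elimination tree
            struct[k].update(r for r in rows if r != k)
    return {j: sorted(struct[j]) for j in struct}

# Strict lower triangle of the 5-by-5 example matrix A above:
lowerA = {1: [4], 2: [3, 5], 3: [4], 4: [], 5: []}
L = symbolic_factor(5, lowerA)
```

Relative to A, the factor gains fill entries L(5,3) and L(5,4); the symbolic factorization phase records exactly this kind of structure so that storage can be allocated before any numerical work is done.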
Execution sequence and memory management phase


The multifrontal method uses update matrices to carry the intermediate results from the variables (nodes)
being eliminated to the variables (nodes) that are not yet processed.
Before the elimination of a variable, update matrices that correspond to previously eliminated variables may
have to be "assembled" to form the current frontal matrix. The partial factorization of the current frontal
matrix is then carried out, and its update matrix is generated. This phase finds the processing sequence and
the amount of storage needed to store the temporary update matrices.

EXAMPLES
The following examples show the use of the iterative and direct sparse solver routines.
      PROGRAM EX1
      PARAMETER (NMAX = 5, NZAS = 9, NZAU = 13)
      PARAMETER (LIWORK = 350, LWORK = LIWORK)
      INTEGER NEQNS, NZA, IPATH, IERR, ROWU(NZAU), COLU(NZAU),
     &        ROWS(NZAS), COLS(NMAX+1), IWORK(LIWORK), IPARAM(40)
      REAL AMATU(NZAU), AMATS(NZAS), RPARAM(30), X(NMAX), B(NMAX),
     &     BGE(NMAX), BPO(NMAX), BTS(NMAX), SOLN(NMAX), WORK(LWORK)
      CHARACTER*3 METHOD
c
c     --------------------------------
c     Define matrix, solution and RHS
c     --------------------------------
c
c.....Full column pointer format
      DATA COLU / 1, 4, 7, 9, 12, 14 /
      DATA ROWU / 1, 2, 4, 1, 2, 3, 2, 3, 1, 4, 5, 4, 5 /
      DATA AMATU / 4.,-1.,-1.,-1., 4.,-1.,-1., 4.,-1., 4.,-1.,-1., 4. /
c
c.....Symmetric column pointer format
      DATA COLS / 1, 4, 6, 7, 9, 10 /
      DATA ROWS / 1, 2, 4, 2, 3, 3, 4, 5, 5 /
      DATA AMATS / 4.,-1.,-1., 4.,-1., 4., 4.,-1., 4. /
c
      DATA SOLN / 1., 1., 1., 1., 1. /
      DATA B    / 2., 2., 3., 2., 3. /
      DATA BGE  / 2., 2., 3., 2., 3. /
      DATA BPO  / 2., 2., 3., 2., 3. /
      DATA BTS  / 2., 2., 3., 2., 3. /
c
      NEQNS = 5
c
c     ----------------------------
c     Solve problem using SITRSOL
c     ----------------------------
c
c.....Let the initial guess for x be random numbers between 0 and 1
      DO 20 I = 1, NEQNS
         X(I) = RANF()
   20 CONTINUE
c
c.....Set default parameter values
      CALL DFAULTS ( IPARAM, RPARAM )
c
c.....Select left least-squares preconditioning
      IPARAM(9)  = 1
      IPARAM(10) = 5
c
c.....Call SITRSOL to solve the problem using PCG
      IPATH = 2
      METHOD = ’PCG’
      CALL SITRSOL ( METHOD, IPATH, NEQNS, NEQNS, X, B, COLS, ROWS,
     &     AMATS, LIWORK, IWORK, LWORK, WORK, IPARAM, RPARAM, IERR )
c
c     -----------------------------------------
c     Solve same problem using SSGETRF/SSGETRS
c     -----------------------------------------
c
c.....use all default values
      IPARAM(1) = 0
c.....do all 4 phases of factorization
      IDO = 14
c.....threshold for pivoting
      THRESH = 0.1
c
c.....compute factorization using SSGETRF
      CALL SSGETRF ( IDO, NEQNS, COLU, ROWU, AMATU, LWORK,
     &     WORK, IPARAM, THRESH, IERR )
c
c.....compute solution using SSGETRS
c
c.....solve standard way
      IDO = 1
c.....solve for 1 RHS with leading dim = neqns
      NRHS = 1
      LDB = NEQNS
c
      CALL SSGETRS ( IDO, LWORK, WORK, NRHS, BGE, LDB,
     &     IPARAM, IERR )
c
c     -----------------------------------------
c     Solve same problem using SSPOTRF/SSPOTRS
c     -----------------------------------------
c
c.....use all default values
      IPARAM(1) = 0
c.....do all 4 phases of factorization
      IDO = 14
c
c.....compute factorization using SSPOTRF
      CALL SSPOTRF ( IDO, NEQNS, COLS, ROWS, AMATS, LWORK,
     &     WORK, IPARAM, IERR )
c
c.....compute solution using SSPOTRS
c
c.....solve standard way
      IDO = 1
c.....solve for 1 RHS with leading dim = neqns
      NRHS = 1
      LDB = NEQNS
c
      CALL SSPOTRS ( IDO, LWORK, WORK, NRHS, BPO, LDB,
     &     IPARAM, IERR )
c
c     -----------------------------------------
c     Solve same problem using SSTSTRF/SSTSTRS
c     -----------------------------------------
c
c.....use all default values
      IPARAM(1) = 0
c.....do all 4 phases of factorization
      IDO = 14
c
c.....compute factorization using SSTSTRF
      CALL SSTSTRF ( IDO, NEQNS, COLU, ROWU, AMATU, LWORK,
     &     WORK, IPARAM, IERR )
c
c.....compute solution using SSTSTRS
c
c.....solve standard way
      IDO = 1
c.....solve for 1 RHS with leading dim = neqns
      NRHS = 1
      LDB = NEQNS
c
      CALL SSTSTRS ( IDO, LWORK, WORK, NRHS, BTS, LDB,
     &     IPARAM, IERR )
c
c     ------------------------------------
c     Compare solutions to exact solution
c     ------------------------------------
c
c.....Compute two-norm of the difference between exact and computed
c     for all solution techniques (SSxxTRS solution is in Bxx)
c
c.....compute differences
      CALL SAXPY ( NEQNS, -1., SOLN, 1, X, 1 )
      CALL SAXPY ( NEQNS, -1., SOLN, 1, BGE, 1 )
      CALL SAXPY ( NEQNS, -1., SOLN, 1, BPO, 1 )
      CALL SAXPY ( NEQNS, -1., SOLN, 1, BTS, 1 )
c
c.....compute norms
      ERRI  = SNRM2( NEQNS, X, 1 )
      ERRGE = SNRM2( NEQNS, BGE, 1 )
      ERRPO = SNRM2( NEQNS, BPO, 1 )
      ERRTS = SNRM2( NEQNS, BTS, 1 )
c
c.....print results
      WRITE(6,11) ERRI, ERRGE, ERRPO, ERRTS
   11 FORMAT (’***** Output from program: EX1 *****’, /
     &        ’ Iterative solution error ’,E15.8, /
     &        ’ "GE" solution error      ’,E15.8, /
     &        ’ "PO" solution error      ’,E15.8, /
     &        ’ "TS" solution error      ’,E15.8 )
c.....all done
      END
Program EX1 yields the following output on CRAY Y-MP systems:
***** Output from program: EX1 *****
 Iterative solution error   0.12306961E-13
 "GE" solution error        0.20097183E-13
 "PO" solution error        0.20097183E-13
 "TS" solution error        0.20097183E-13

SEE ALSO
Golub, G. H. and C. F. Van Loan, Matrix Computations, second edition. Baltimore, MD: Johns Hopkins
University Press, 1989.
Liu, J. W., "Modification of the Minimum Degree Algorithm by Multiple Elimination," ACM Transactions
on Mathematical Software, 11 (1985): pp. 141–153.
DFAULTS(3S)

NAME
DFAULTS – Assigns default values to the parameter arguments for SITRSOL(3S)

SYNOPSIS
CALL DFAULTS (iparam, rparam)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Users of SITRSOL usually would have to explicitly define each required parameter in the iparam and
rparam array arguments. DFAULTS lets you easily assign default values to the parameters in iparam and
rparam. After you set the default values by using DFAULTS, you can then change any of the parameter
values explicitly, as needed.
This routine has the following arguments:
iparam Integer array of dimension 40. (output)
Array of integer parameters required by SITRSOL.
rparam Real array of dimension 30. (output)
Array of real parameters required by SITRSOL.
To see the complete range of valid values for these arguments, see the SITRSOL(3S) man page.
Many of these parameters are set on exit from SITRSOL. Calling DFAULTS after a call to SITRSOL
overwrites (destroys) the values that SITRSOL reported in these parameters.
The default values for iparam and rparam (output of DFAULTS) are as follows:
iparam
iparam(1): isym Full or symmetric format flag.
=1 Matrix is in symmetric column pointer format.
iparam(2): itest Stopping criterion.
=0 Use ’natural’ (cheapest) stopping criterion for the chosen iterative
method.
iparam(3): maxiter Maximum number of iterations allowed.
= 500
iparam(4): niter On exit, SITRSOL sets this to the number of iterations actually performed.
=0
iparam(5): msglvl Flag to control the level of messages output.
=2 Warning and fatal messages only.
iparam(6): iunit Unit number for output information.
=6
iparam(7): iscale Diagonal scaling option.
=1 Apply symmetric diagonal scaling to make the diagonal of the scaled
matrix all 1’s.
iparam(8): isympap Full or symmetric jagged diagonal format. (input)
Matrix vector product uses jagged diagonal format to achieve faster performance. If
mvformat = 0, this parameter is ignored.
=0 Convert A to full jagged diagonal format when A is in symmetric sparse
column format. If isym = 1, this requires more storage, but it provides
much faster performance than the symmetric jagged diagonal format.
iparam(9): iprelrb Applying preconditioning.
=0 Do not apply preconditioning.
iparam(10): ipretyp Type of preconditioning to use.
=0 No preconditioning.
iparam(11): lvlfill Level of fill-in allowed if incomplete factorization is used.
=0
iparam(12): maxlfil Maximum amount of fill allowed in the lower triangular factor of the incomplete
factorization with lvlfill > 0.
On exit, SITRSOL sets maxlfil to the amount of fill created.
=0 Use a crude estimate determined by using maxlfil = (1 + lvlfill)nza; nza
is the number of nonzero elements of A.
iparam(13): maxufil Maximum amount of fill allowed in the upper triangular factor of the incomplete LU
factorization with lvlfill > 0.
On exit, SITRSOL sets maxufil to the amount of fill created.
=0 Use a crude estimate determined by using maxufil = (1 + lvlfill)nza; nza
is the number of nonzero elements of A.
iparam(14): ifcomp Incomplete factor compute flag.
=1 Compute the incomplete factorization.
iparam(15): kdegree Degree of the polynomial preconditioner.
=2
iparam(16): ntrunc Number of vectors to be saved in GMRES[k] and OMN[k].
= 10
iparam(17): nvorth Number of previous Krylov basis vectors to which each new basis vector is made
orthogonal. (GMRES method only.)
= 10
iparam(18): nrstrt Number of iterations between restart in OMN[k].
= 20
iparam(19): irestrt Save-and-restart control flag.
=0 No save-and-restart.
iparam(20): iosave Save-and-restart unit number of the unformatted file, which is assumed to have
been opened by the user.
=0 This is not a valid unit number for the save-and-restart operation. If
you change the value of irestrt to enable save-and-restart, you also must
change the value of iosave.
iparam(21): mvformat Desired format for computation of matrix-vector products.
=1 Use jagged diagonal form. This requires more storage, but it offers
faster performance.
iparam(22): nicfmax Maximum number of times to try IC[k] factorization by using shifted IC
factorization. (See rparam(15) and rparam(16).)
= 11
iparam(23): nicfacs On exit, SITRSOL sets this to the actual number of shifted IC[k] factorizations
tried. (See rparam(15) and rparam(16).)
=0
iparam(24) — iparam(40)
Presently unused. These parameters are reserved for future use.
rparam
rparam(1): tol Stopping criterion tolerance.
= 1.0E-6
rparam(2): err On exit, SITRSOL sets this to the computed error estimate at each iteration.
= 0.0
rparam(3): alpha Absolute value of the estimate of the smallest eigenvalue of A. Currently, this
parameter is unused and is assumed to be 0.
= 0.0
rparam(4): beta Absolute value of the estimate of the largest eigenvalue of A. This is needed only
by the least-squares polynomial preconditioner (ipretyp=5).
= 0.0 SITRSOL computes an estimate of the spectral radius.
rparam(5): timscal On exit, SITRSOL sets this to the accumulated time (in seconds) to scale and
unscale the user matrix. (See the NOTES section.)
= 0.0
rparam(6): timsets On exit, SITRSOL sets this to the accumulated time (in seconds) to compute the
symbolic incomplete factorization. (See the NOTES section.)
= 0.0
rparam(7): timsetn On exit, SITRSOL sets this to the accumulated time (in seconds) to compute the
numerical incomplete factorization. (See the NOTES section.)
= 0.0
rparam(8): timset On exit, SITRSOL sets this to the accumulated total time (in seconds) to perform
the preconditioner setup. If incomplete factorization is used, this includes both
timsets and timsetn. (See the NOTES section.)
= 0.0
rparam(9): timpre On exit, SITRSOL sets this to the accumulated total time (in seconds) to apply the
preconditioner in the iteration phase of the solution process. (See the NOTES
section.)
= 0.0
rparam(10): timmvs On exit, SITRSOL sets this to the accumulated time (in seconds) to convert from
column pointer to jagged diagonal format. If parallel processing is used, this also
includes the setup time to perform the parallel matrix vector operations. (See the
NOTES section.)
= 0.0
rparam(11): timmv On exit, SITRSOL sets this to the accumulated time (in seconds) to perform the
matrix vector product (not including those performed in applying the polynomial
preconditioners). (See the NOTES section.)
= 0.0
rparam(12): timmtv On exit, SITRSOL sets this to the accumulated time (in seconds) to perform the
transpose matrix vector product (not including those in applying the polynomial
preconditioners). (See the NOTES section.)
= 0.0
rparam(13): timit On exit, SITRSOL sets this to the accumulated time (in seconds) spent in the
iterative routine (not including the time spent computing matrix vector products or
applying the preconditioners). (See the NOTES section.)
= 0.0
rparam(14): timtot On exit, SITRSOL sets this to the accumulated total time (in seconds) for this call
to SITRSOL, plus that of previous calls if not reset. (See the NOTES section.)
= 0.0
rparam(15): gammin Minimum value for shift factor γ. For some problems, IC[k] preconditioning fails in
the factorization. In many cases, "shifting" the diagonal elements allows the
factorization to be computed for this modified matrix.
= 0.0
rparam(16): gammax Maximum value for shift factor γ. If the IC[k] factorization fails, SITRSOL
increments γ and tries again. γ may take on nicfmax different values between
gammin and gammax.
On exit from SITRSOL (if nicfacs > 1), gammax contains the actual value of
gamma used to compute the factorization.
= 0.3
rparam(17) – rparam(30)
Presently unused. These parameters are reserved for future use.

NOTES
If the timing parameters, rparam(5) through rparam(14), are not reset to 0.0 (for example, by DFAULTS),
timing information for subsequent calls to SITRSOL will be added to existing timing information.
If multiple CPUs are used, rparam(5) through rparam(14) report the cumulative time for all CPUs.

SEE ALSO
INTRO_SPARSE(3S) for an example of a Fortran program that uses DFAULTS and SITRSOL
SITRSOL(3S) for a more complete description of iparam and rparam
Scientific Libraries User’s Guide
SITRSOL(3S)

NAME
SITRSOL – Solves a real general sparse system, using a preconditioned conjugate gradient-like method

SYNOPSIS
CALL SITRSOL (method, ipath, neqns, nvars, x, b, icolptr, irowind, value, liwork, iwork,
lrwork, rwork, iparam, rparam, ierr)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SITRSOL uses any of several iterative techniques to solve a real general sparse system of equations.
Because no single robust iterative algorithm for solving sparse linear systems exists, SITRSOL lets users
select from a wide variety of iterative techniques, preconditioning schemes, and many tuning parameters.
You can initialize the iparam and rparam tuning parameter arguments by using a call to DFAULTS. Then
you must change only selected parameter values, rather than setting up the entire arrays of parameters
manually.
This routine has the following arguments:
method Character*3. (input)
Name used to select the iterative method.
= ’BCG’ Biconjugate gradient method
= ’CGN’ Conjugate gradient method applied to the equations:
A A^T y = b,  x = A^T y (Craig’s method)
= ’CGS’ Conjugate gradient squared method
= ’GMR’ Generalized minimum residual (GMRES) method
= ’OMN’ Orthomin or generalized conjugate residual (GCR) method
= ’PCG’ Preconditioned conjugate gradient method
ipath Integer. (input)
Value used to control the execution path in the solver. This argument is useful when the driver is
used to solve similar problems or a large problem in pieces.
= 1 Processes only the structure of the matrix. No solution is computed.
= 2 Processes both the structure and values of the matrix. The solution is computed.
= 3 Processes only the values of the matrix. The solution is computed. It is assumed that
SITRSOL has been called with ipath equal to 1 or 2 and that the structure previously set up
is used.
= 4 Solves the same linear system with different right-hand side.
= 5 Restarts from a previously saved run.
neqns Integer. (input)
Number of equations (rows) in the system.
nvars Integer. (input)
Number of variables (columns) in the system. Currently, only square matrices are allowed;
therefore, nvars must equal neqns.
x Real array of dimension nvars. (input and output)
On input, x must contain an initial approximation to the solution vector of the system. On output,
x contains an approximation to the solution vector as computed by the chosen iterative scheme.
b Real array of dimension neqns. (input and output)
On input, b contains the right-hand side vector of the linear system. On output, if scaling was
used, b can be changed slightly, because b is scaled and then unscaled.
icolptr Integer array of dimension neqns + 1. (input)
Column pointer array for the sparse matrix A. The first and last elements of the array must be set
as follows:
icolptr(1) = 1 icolptr(neqns+1) = nza + 1
where nza is the number of nonzero elements in the sparse matrix A.
irowind Integer array of dimension nza (see icolptr). (input)
Row indices array for the sparse matrix A.
value Real array of dimension nza (see icolptr). (input)
Array of nonzero values for the sparse matrix A. Taken together, icolptr, irowind, and value
arguments contain the input matrix in sparse column format. See the introduction to the sparse
solvers (INTRO_SPARSE(3S)) for a full description of the sparse column format.
liwork Integer. (input)
Length of the work array iwork. Workspace requirements vary from phase to phase. See the
Workspace subsection.
iwork Integer array of dimension liwork. (output)
Work array. On output, the first four elements of iwork have the following special meanings:
iwork(1) Amount of real workspace needed at the time of exit
iwork(2) Amount of integer workspace needed at the time of exit
iwork(3) Amount of real workspace that must be left untouched if subsequent calls to SITRSOL
will be made with ipath set to 3 or 4
iwork(4) Amount of integer workspace that must be left untouched if subsequent calls to
SITRSOL will be made with ipath set to 3 or 4
If SITRSOL aborts because not enough workspace is available, you can use iwork(1) and iwork(2)
to determine how much workspace is needed.
lrwork Integer. (input)
Length of the work array rwork. Workspace requirements vary from phase to phase. See the
Workspace subsection.
rwork Real array of dimension lrwork. (output)
Work array.
iparam Integer array of dimension 40. (input and output)
Parameter array.
rparam Real array of dimension 30. (input and output)
Parameter array. On input, most of the elements of iparam and rparam contain user control
parameters. To assign default values for these parameters, call the DFAULTS(3S) routine, as
follows:
CALL DFAULTS (iparam, rparam)
On output, some values of iparam and rparam are changed to report what occurred in the solution
process. See the Parameters subsection.
ierr Integer. (output)
Error code. A six-digit code to report any error condition detected by the iterative solver.
= 0 Normal completion
> 0 Warning error
< 0 Fatal error
See the Error Code subsection.
Parameters
You can assign default values to the required parameters in iparam and rparam by calling the DFAULTS
routine, as noted previously. After calling DFAULTS, you can then reset any parameter to a more
appropriate value. See the example in the introduction to the sparse solvers, INTRO_SPARSE(3S).
The parameters and their default values (as assigned by DFAULTS) are as follows:
iparam(1): isym Full or symmetric format flag. (input)
= 0 Matrix is in full column pointer format.
= 1 Matrix is in symmetric column pointer format.
DFAULTS returns isym=1.
iparam(2): itest Stopping criterion. (input)
= 0 Use "natural" (cheapest) stopping criterion for the chosen iterative method.
= 1 Use the 2-norm of the residual divided by the 2-norm of the right-hand side.
= 2 Use the 2-norm of the preconditioned residual divided by the 2-norm of the
preconditioned right-hand side.
If method = ’BCG’, ’CGN’, or ’PCG’, itest = 0 has the same effect as itest = 1.
If method = ’CGS’, ’GMR’, or ’OMN’, itest = 0 has the same effect as itest = 2.
DFAULTS returns itest=0.
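For reference, the itest = 1 quantity is the 2-norm of the residual divided by the 2-norm of the right-hand side. A minimal Python sketch of that quantity (illustrative only; the library computes it internally):

```python
def rel_residual(A, x, b):
    """2-norm of the residual b - Ax divided by the 2-norm of b
    (the itest = 1 stopping quantity)."""
    r = [bi - sum(aij * xj for aij, xj in zip(row, x))
         for row, bi in zip(A, b)]
    norm = lambda v: sum(vi * vi for vi in v) ** 0.5
    return norm(r) / norm(b)

# Tiny illustrative system; the exact solution is x = (1, 1).
A = [[4., -1.], [-1., 4.]]
b = [3., 3.]
```

At the exact solution the quantity is 0; at the zero initial guess it is exactly 1, which is why a relative tolerance such as tol = 1.0E-6 is a natural stopping test.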
iparam(3): maxiter Maximum number of iterations allowed. (input)
DFAULTS returns maxiter=500.
iparam(4): niter Number of iterations actually performed. (output)
DFAULTS returns niter=0.
iparam(5): msglvl Flag to control the level of messages output. (input)
= 0 No output.
= 1 Fatal messages only.
= 2 Warning and fatal messages only.
= 3 Warning and fatal messages and a brief summary.
= 4 Warning and fatal messages, information about each iteration, and a brief
summary.
DFAULTS returns msglvl=2.
iparam(6): iunit Unit number for output information. (input)
DFAULTS returns iunit=6.
iparam(7): iscale Diagonal scaling option. (input)
= 0 No scaling is applied to the system
= 1 Apply symmetric diagonal scaling to make the diagonal of the scaled matrix all
1’s
= 2 Apply symmetric row-sum scaling
= 3 Apply symmetric column-sum scaling
= 4 Apply left-diagonal scaling to make the diagonal of the scaled matrix all 1’s
= 5 Apply left-row scaling to make the infinity-norm of the scaled matrix = 1
= 6 Apply right-column scaling to make the 1-norm of the scaled matrix = 1
DFAULTS returns iscale=1.
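The iscale = 1 option corresponds to forming D^(-1/2) A D^(-1/2) with D = diag(A), which makes every diagonal entry of the scaled matrix exactly 1. A minimal Python sketch (illustrative only, not the library's implementation):

```python
def sym_diag_scale(A):
    """Symmetric diagonal scaling: S = D^(-1/2) A D^(-1/2) with
    D = diag(A); every diagonal entry of S becomes 1."""
    d = [row[i] ** 0.5 for i, row in enumerate(A)]
    return [[aij / (d[i] * d[j]) for j, aij in enumerate(row)]
            for i, row in enumerate(A)]

S = sym_diag_scale([[4., -1.], [-1., 9.]])
```

Note that this scaling preserves symmetry, which is why it is safe to combine with the symmetric storage format.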
iparam(8): isympap Full or symmetric jagged diagonal format. (input)
Matrix vector product uses jagged diagonal format to improve performance. If
mvformat = 0, this parameter is ignored.
= 0 Convert A to full jagged diagonal format when A is in symmetric sparse column
format. If isym = 1, this requires more storage, but it provides much better
performance than the symmetric jagged diagonal format.
= 1 Convert A to symmetric jagged diagonal format if A is in symmetric sparse
column format.
DFAULTS returns isympap = 0.
iparam(9): iprelrb Applying preconditioning. (input)
= 0 Do not apply preconditioning.
= 1 Apply left preconditioning.
= 2 Apply right preconditioning (not available for the PCG and CGN methods).
= 3 Apply two-sided preconditioning (not available for the PCG and CGN methods.
Only for use with IC[k] and ILU[k] preconditioners).
DFAULTS returns iprelrb = 0.
iparam(10): ipretyp Type of preconditioning to use. (input)
= 0 No preconditioning.
= 1 Diagonal (Jacobi) preconditioning.
= 2 Incomplete Cholesky factorization (IC[k]).
= 3 Incomplete LU factorization (ILU[k]).
= 4 Truncated Neumann polynomial expansion.
= 5 Truncated least-squares polynomial expansion.
DFAULTS returns ipretyp = 0.
iparam(11): lvlfill Level of fill-in allowed if incomplete factorization is used. (input)
DFAULTS returns lvlfill = 0.
iparam(12): maxlfil Amount of fill in lower triangular factor. (input and output)
On input, maxlfil controls the amount of fill allowed in the lower triangular factor of
the incomplete factorization with lvlfill > 0.
> 0 maxlfil is the maximum fill allowed.
= 0 Uses the following crude estimate:
maxlfil = (1+lvlfill)nza
where nza is the number of nonzero elements in the sparse matrix A.
On output, maxlfil contains the amount of fill created.
DFAULTS returns maxlfil = 0.
iparam(13): maxufil Amount of fill in upper triangular factor. (input and output)
On input, maxufil controls the amount of fill allowed in the upper triangular factor of
the incomplete factorization with lvlfill > 0.
> 0 maxufil is the maximum fill allowed
= 0 Uses the following crude estimate:
maxufil = (1+lvlfill)nza
On output, maxufil contains the amount of fill created.
DFAULTS returns maxufil = 0.
iparam(14): ifcomp Incomplete factor compute flag. (input)
= 0 Do not compute the incomplete factorization. This is useful when a previous
run has already computed the incomplete factors for a slightly perturbed system.
= 1 Compute the incomplete factorization.
DFAULTS returns ifcomp = 1.
iparam(15): kdegree Degree of the polynomial preconditioner. (input)
DFAULTS returns kdegree = 2.
iparam(16): ntrunc Number of vectors to be saved in GMRES[k] and OMN[k]. (input)
DFAULTS returns ntrunc = 10.
iparam(17): nvorth Number of previous Krylov basis vectors to which each new basis vector is made
orthogonal (GMRES method only). (input)
DFAULTS returns nvorth = 10.
iparam(18): nrstrt Number of iterations between restart in OMN[k]. (input)
DFAULTS returns nrstrt=20.
iparam(19): irestrt Save-and-restart control flag. (input)
= 0 No save-and-restart.
= 1 Save-and-restart after preconditioner setup. The active part of the work arrays
is saved to or restored from the unit specified by iosave (iparam(20)).
= 2 Save-and-restart after completion of iterative phase. The x and b arrays and the
active part of the work arrays is saved to or restored from the unit specified by
iosave (iparam(20)).
DFAULTS returns irestrt = 0.
iparam(20): iosave Save-and-restart unit number. (input)
Unit number of the unformatted save-and-restart file. SITRSOL assumes that the
user has already opened this file.
DFAULTS returns iosave = 0. This default value is not a valid unit number for the
save-and-restart operation. If you change the value of irestrt to enable
save-and-restart, you also must change the value of iosave.
iparam(21): mvformat Desired format for computation of matrix-vector products. (input)
= 0 Use matrix supplied by user. This can save a substantial amount of storage, but
performance is usually poor.
= 1 Use jagged diagonal form. This requires more storage, but it offers faster
performance.
DFAULTS returns mvformat = 1.
iparam(22): nicfmax Maximum number of times to try IC[k] factorization by using shifted IC factorization
(see rparam(15) and rparam(16)). (input)
DFAULTS returns nicfmax = 11.
iparam(23): nicfacs Actual number of shifted IC[k] factorizations tried (see rparam(15) and rparam(16)).
(output)
DFAULTS returns nicfacs = 0.
iparam(24) through iparam(40)
Currently unused. These parameters are reserved for future use.
rparam(1): tol Stopping criterion tolerance. (input)
DFAULTS returns tol = 1.0E-6.
rparam(2): err Computed error estimate at each iteration. (output)
DFAULTS returns err = 0.0.
rparam(3): alpha Absolute value of the estimate of the smallest eigenvalue of A. (input)
Currently, this parameter is unused and is assumed to be 0.
DFAULTS returns alpha = 0.0.
rparam(4): beta Absolute value of the estimate of the largest eigenvalue of A. (input)
This is needed only by the least-squares polynomial preconditioner (ipretyp = 5). If
beta = 0, an estimate of the spectral radius will be computed.
DFAULTS returns beta = 0.0.
rparam(5): timscal Accumulated time (in seconds) to scale and unscale the user matrix. (input and
output)
DFAULTS returns timscal = 0.0.
rparam(6): timsets Accumulated time (in seconds) to compute the symbolic incomplete factorization.
(input and output)
DFAULTS returns timsets = 0.0.
rparam(7): timsetn Accumulated time (in seconds) to compute the numerical incomplete factorization.
(input and output)
DFAULTS returns timsetn = 0.0.
rparam(8): timset Accumulated total time (in seconds) to perform the preconditioner setup. (input and
output)
If incomplete factorization is used, this includes both timsets and timsetn.
DFAULTS returns timset = 0.0.
rparam(9): timpre Accumulated total time (in seconds) to apply the preconditioner in the iteration phase
of the solution process. (input and output)
DFAULTS returns timpre = 0.0.
rparam(10): timmvs Accumulated time (in seconds) to convert from column pointer to jagged diagonal
format. (input and output)
If you use parallel processing, this also includes the setup time to perform the parallel
matrix vector operations.
DFAULTS returns timmvs = 0.0.
rparam(11): timmv Accumulated time (in seconds) to perform the matrix vector product. (input and
output)
This does not include the products that apply the polynomial preconditioners.
DFAULTS returns timmv = 0.0.
rparam(12): timmtv Accumulated time (in seconds) to perform the transpose matrix vector product. (input
and output)
This does not include the products that apply the polynomial preconditioners.
DFAULTS returns timmtv = 0.0.
rparam(13): timit Accumulated time (in seconds) spent in the iterative routine. (input and output)
This does not include the time spent computing matrix vector products or applying
the preconditioners.
DFAULTS returns timit = 0.0.
rparam(14): timtot Accumulated total time (in seconds) for this call to SITRSOL. (input and output)
DFAULTS returns timtot = 0.0.
rparam(15): gammin Minimum value for shift factor γ. (input)
For some problems, IC[k] preconditioning fails in the factorization. In many cases,
"shifting" the diagonal elements (that is, letting a(i,i) = (1 + γ) . a(i,i)) allows the
factorization to be computed for this modified matrix.
DFAULTS returns gammin=0.0.
rparam(16): gammax Value for shift factor γ. (input and output)
On input, gammax is the maximum value that the shift factor γ may have. If the
IC[k] factorization fails (that is, a negative diagonal element is computed for the
factor L), SITRSOL increases γ by the following amount:
∆γ = (gammax − gammin) / (nicfmax − 1)
and it tries to recompute the factorization. SITRSOL tries a maximum of nicfmax
factorizations with the following values of γ:
γi = gammin + i ∆γ, for i = 0, 1, . . ., nicfmax − 1
On output, the number of factorizations tried is returned in nicfacs, and, if nicfacs >
1, gammax contains the actual value of γ used to compute the factorization.
DFAULTS returns gammax = 0.3.
rparam(17) through rparam(30)
Currently unused. They are reserved for future use.
If rparam(5) through rparam(14) are not reset to 0, timing information for subsequent calls to SITRSOL
will be added to existing timing information.
If multiple CPUs are used, rparam(5) through rparam(14) report the cumulative time for all CPUs.
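The shift strategy governed by gammin, gammax, and nicfmax can be sketched in a few lines. This is an illustrative Python transcription of the formulas above, not part of SITRSOL itself.

```python
# Illustrative sketch (not library code): when the IC[k] factorization
# fails, SITRSOL steps the shift factor from gammin toward gammax in
# nicfmax - 1 equal increments and retries the factorization.

def shift_factors(gammin, gammax, nicfmax):
    """Candidate values gamma_i = gammin + i * dgamma, i = 0..nicfmax-1."""
    dgamma = (gammax - gammin) / (nicfmax - 1)
    return [gammin + i * dgamma for i in range(nicfmax)]
```

With the DFAULTS values gammin = 0.0 and gammax = 0.3 and, say, nicfmax = 4, the candidates step through 0.0, 0.1, 0.2, 0.3 (up to rounding).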
Error Code
The error flag ierr in routine SITRSOL is a six-decimal-digit signed integer. Immediately after an error
condition is detected, the error flag is set, the error-handling routine is called, and a message is printed if the
user requests it. If a warning error occurs (ierr > 0), execution continues on returning from the
error-handling routine. If the error is fatal (ierr < 0), control is returned to the user.
Unless otherwise specified, an error code with xx denoting the third and fourth digits (for example, 03xx01)
represents a common error condition from various routines. Any error code with ii denoting the fifth and
sixth digits represents a range of two-digit numbers (or one-digit numbers with a leading 0).
Phase 1 error: parameter checking
ierr =
– 010001 Illegal value for method.
– 010002 Illegal value for ipath.
– 010003 Illegal value for neqns or nvars.
– 010004 Illegal value for liwork.
– 010005 Illegal value for lrwork.
– 010006 Illegal value for ncpus (see INTRO_LIBSCI(3S)).
– 0101ii Illegal value for iparam(ii), for ii = 01, 02, . . ., 22.
– 010201 Full-storage column pointer matrix cannot be converted to half-storage jagged-diagonal matrix.
– 010202 Left and right nonsymmetric scaling cannot be applied to a symmetric matrix in half-storage
mode.
– 010203 PCG and CGN allow only left preconditioning.
– 010204 ILU preconditioning cannot be applied to a symmetric matrix in half-storage mode.

– 010205 iprelrb is 3 and ipretyp is neither 2 nor 3.


– 010301 Illegal value for tol, tol < 0.
– 010315 Illegal value for gammin.
– 010316 Illegal value for gammax.
Phase 2 error: user matrix preprocessing
ierr =
– 02xx01 Not enough real workspace allocated.
– 02xx02 Not enough integer workspace allocated.
– 020203 Zero diagonal element found, cannot scale matrix.
– 020204 Zero row found, cannot scale matrix.
– 020205 Zero column found, cannot scale matrix.
– 020303 Input matrix structure is not valid.
Phase 3 error: preconditioner setup
ierr =
+030101 During the incomplete Cholesky factorization, a diagonal element was found to be nonpositive.
The shift parameter was increased and the factorization was recomputed.
+030201 During the incomplete LU factorization, a diagonal element was found to be 0. The element
was modified to allow the factorization to continue.
+03xx02 Tried to use the incomplete factorization from a previous run, but it has been overwritten.
Must recompute factorization.
– 03xx01 Not enough real workspace allocated.
– 03xx02 Not enough integer workspace allocated.
– 03xx03 Ran out of integer workspace while performing the symbolic factorization. The estimated
amount in maxlfil was insufficient.
– 03xx04 Ran out of integer workspace while performing the symbolic factorization. The estimated
amount in maxufil was insufficient.
– 030104 L is structurally singular. A diagonal element is missing from the structure of L.
– 030105 Cannot compute IC factorization in nicfmax tries. Set gammin and gammax to larger values.
– 030205 U is structurally singular. A diagonal element is missing from the structure of U.
– 030603 Zero diagonal element found, cannot scale matrix.
– 030703 Zero diagonal element found, cannot scale matrix.
Phase 4 error: iterative process
ierr =
+04xx01 Method did not converge in maxiter steps.
+04xx03 Preconditioning matrix is not positive definite.
– 04xx01 Not enough real workspace allocated.
– 040203 Matrix AA T is not positive definite in CGN method.
– 04xx03 Breakdown occurred when computing direction vector magnitude, the divisor is near 0.
– 040403 GMRES iteration stalled. The norm of the residual was not reduced on the most recent
restart iteration.
– 040603 Matrix A is not positive definite in PCG method.

Workspace
The following are three methods for estimating your workspace needs:
Rough estimate Fastest (only one multiply), but least accurate.
SITRSOL estimate Much slower, but also a lot more accurate.
Hand-coded estimate If you already know certain information about the size of the final factorization, a
hand-coded SITRSOL estimation algorithm with your information will be more
accurate than SITRSOL’s calculations.
Rough estimate
You can make a very rough estimate of your workspace needs by setting
liwork = lrwork = 6 . nza
where nza is the number of nonzero elements in matrix A (= icolptr(neqns+1)– 1). This estimate is usually
sufficient.
If you are not using certain memory-intensive preconditioning or matrix formatting options, you can refine
this estimate further:
A. If you are not using IC[k] or ILU[k] preconditioning, subtract 2 . nza from your previous estimate for
liwork and lrwork.
B. If you are not using jagged diagonal format, subtract 2 . nza from your previous estimate for liwork and
lrwork.
One, both, or neither of the preceding conditions might be true; therefore, the estimate could end up being
any of the following:
4 . nza If only one or the other of A and B were true
2 . nza If both A and B were true
6 . nza If neither A nor B were true
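The rough estimate and its two refinements can be expressed as one short function. This is an illustrative sketch, not library code; the two boolean arguments correspond to refinements A and B above.

```python
# Illustrative sketch of the rough workspace estimate described above.
# nza is the nonzero count icolptr(neqns+1) - 1.

def rough_workspace(nza, uses_ic_or_ilu, uses_jagged_diagonal):
    estimate = 6 * nza
    if not uses_ic_or_ilu:        # refinement A applies
        estimate -= 2 * nza
    if not uses_jagged_diagonal:  # refinement B applies
        estimate -= 2 * nza
    return estimate               # use for both liwork and lrwork
```

Both refinements applying gives 2 . nza, exactly one gives 4 . nza, and neither gives the full 6 . nza, matching the three cases listed above.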
SITRSOL estimate
You can get a more accurate estimate by calling SITRSOL with liwork or lrwork set to 0. This causes
SITRSOL to generate an error flag (– 010004 or – 010005) and to return an estimate of workspace requirements
in iwork(1) for lrwork and iwork(2) for liwork. You can then use these estimates in another call to
SITRSOL. In computing this estimate, SITRSOL uses the precise formulas that follow in the Algorithm for
Accurate Workspace Estimate subsection.
Hand-coded estimate
If you are using IC[k] or ILU[k] preconditioning and you already know the number of nonzero elements in
the IC[k] factor matrix L or in the ILU[k] factor matrices L and U, you can get the most accurate estimate
by hand-coding the high-precision algorithm used by SITRSOL.

Then you can use your exact numbers in the algorithm in formulas for which SITRSOL has only estimates.
This means your final result will be more accurate than SITRSOL’s, even though the algorithm is the same.

NOTES
This section discusses parallel processing and workspace considerations.
Using Parallel Processing in SITRSOL
SITRSOL is designed to exploit the parallel processing capabilities of Cray Y-MP systems. In particular,
the preconditioners and matrix-vector operations are designed to achieve significant speedup on multiple
CPUs; however, the parallelism is designed to be effective only for large problems. Small problems will not
benefit, and performance probably will be degraded. What constitutes a "small" problem or a "large"
problem is difficult to define. Also, a large gray area exists in which using fewer than all CPUs gives better
performance than using all CPUs. Experimentation is the best way to decide on the optimal number of
CPUs.
To select the number of CPUs, define the NCPUS environment variable to the desired value. SITRSOL will
then obtain this value and try to use that number of CPUs. In a batch environment, it is unusual to get all of
the physical CPUs on the machine. In this case, it is better to request a smaller number of CPUs than is
physically available. If you do not define the NCPUS variable, the default value for NCPUS will be the total
number of physical processors on the machine.
Timing and parallel processing
SITRSOL uses the system timing function SECOND(3F), which is a real-valued function that returns the
accumulated CPU time for all processors. If you use parallel processing and you want wall-clock timing
information, replace the SECOND function with the following function, which uses IRTC(3I) to do the
timing:
      REAL FUNCTION SECOND()
C.....CRAY Y-MP C-90 clock period
      PARAMETER ( CP=4.2E-9 )
C.....CRAY Y-MP clock period
C     PARAMETER ( CP=6.0E-9 )
C
      SECOND = FLOAT(IRTC())*CP
      RETURN
      END

Based on your system, use the appropriate parameter CP. You should replace the SECOND system function
because it returns the accumulated CPU time for all CPUs; if you do not replace SECOND, the multiple-CPU
timings will always be worse than the single-CPU timings.
A drawback exists to using the IRTC function. It returns the real system time and does not subtract time
spent being swapped out. Thus, in a batch environment, timing information typically will not be consistent
between two identical runs.

Algorithm for Accurate Workspace Estimate


You can break down the solution process into three steps:
Step 1: User matrix preprocessing
Step 2: Preconditioner setup
Step 3: Compute solution by using the selected iterative method
The following notation is used in this discussion:
ncpus= Number of requested CPUs for parallel processing.
neqns= Dimension of the linear system.
nza= Number of nonzero elements in A (= icolptr(neqns+1)– 1).
maxnz= Maximum number of nonzero elements in a row of A.
nzpap= Number of elements in the jagged diagonal matrix that is used in the
matrix-vector product.
If ( mvformat = 0 ) then
nzpap = 0
Else if ( isympap = isym ) then
nzpap = nza
Else If ( isympap = 0 & isym = 1 ) then
nzpap = 2*nza - neqns
End If
nsegs= 0 when ncpus = 1.
MAX(ncpus, neqns / 1024) when ncpus does not equal 1.
Iuse(k)= Amount of integer workspace used in step k.
Iret(k)= Amount of integer workspace retained by step k.
Ruse(k)= Amount of real workspace used in step k.
Rret(k)= Amount of real workspace retained by step k.
liwork= Total amount of integer workspace needed.
lrwork= Total amount of real workspace needed.
liwork1= Integer workspace needed in step 1 = Iuse(1).
liwork2= Integer workspace needed in step 2 = Iret(1)+Iuse(2).
liwork3= Integer workspace needed in step 3 = Iret(1)+Iret(2)+Iuse(3).
lrwork2= Real workspace needed in step 2 = Rret(1)+Ruse(2).
lrwork3= Real workspace needed in step 3 = Rret(1)+Rret(2)+Ruse(3).
Total workspace needed is defined as follows:
liwork = MAX( liwork1, liwork2, liwork3 ) + 100
lrwork = MAX( lrwork2, lrwork3 )
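As a sketch of how the per-step amounts combine, assume iuse and iret hold Iuse(k) and Iret(k) for steps 1 through 3. This is illustrative Python transcribing the formulas above, not library code.

```python
# Illustrative: combine per-step integer workspace amounts into liwork,
# following liwork1..liwork3 and liwork = MAX(...) + 100 above.

def total_integer_workspace(iuse, iret):
    liwork1 = iuse[0]
    liwork2 = iret[0] + iuse[1]
    liwork3 = iret[0] + iret[1] + iuse[2]
    return max(liwork1, liwork2, liwork3) + 100
```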

Step 1: User matrix preprocessing


In this step, locate all diagonal elements, apply scaling (if selected), and convert the user matrix to the
jagged diagonal format for use in the matrix-vector operations. The workspace usage is as follows:

If ( mvformat = 0 ) then
Iuse(1)= Iret(1) = neqns
Ruse(1)= Rret(1) = nscale

Else
Iuse(1)= nzpap + 4*neqns + maxnz + nsegs + 2
Iret(1)= nzpap + 3*neqns + maxnz + nsegs + 2
Ruse(1)= nzpap + neqns + nscale
Rret(1)= nzpap + nscale
End If

where
If ( iscale > 0 ) then
nscale = neqns
Else If ( iscale = 0 ) then
nscale = 0
End If
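The Step 1 formulas above transcribe directly; the following sketch (illustrative Python, not library code) returns Iuse(1), Iret(1), Ruse(1), and Rret(1).

```python
# Illustrative transcription of the Step 1 workspace formulas above.

def step1_workspace(mvformat, neqns, nzpap, maxnz, nsegs, iscale):
    nscale = neqns if iscale > 0 else 0
    if mvformat == 0:
        iuse = iret = neqns
        ruse = rret = nscale
    else:
        iuse = nzpap + 4 * neqns + maxnz + nsegs + 2
        iret = nzpap + 3 * neqns + maxnz + nsegs + 2
        ruse = nzpap + neqns + nscale
        rret = nzpap + nscale
    return iuse, iret, ruse, rret
```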

Step 2: Preconditioning setup


In this step, do the necessary setup work for efficient application of the preconditioner. For diagonal scaling,
this means computing the inverse of the diagonal and storing it. For incomplete Cholesky and LU
factorization, this means computing the factorization. For Jacobi least-squares polynomial preconditioning,
this means computing the polynomial coefficients. For Neumann polynomial preconditioning, there is no
setup. The amount of workspace used and retained varies greatly, depending on the type of preconditioning
selected.
For no preconditioning:
Iuse(2) = Iret(2) = Ruse(2) = Rret(2) = 0
For diagonal preconditioning:
Iuse(2)= Iret(2)= 0
Ruse(2)= 2*neqns
Rret(2)= neqns
For incomplete Cholesky preconditioning:
Iuse(2)= max( (4*neqns + 2*nzlE + 3), (2*neqns + 2*nzlA + 2 + nparU) )
Iret(2) = 2*nzlA + 2*neqns + 2 + nparR
Ruse(2) = 2*nzlA + ncpus*neqns
Rret(2) = 2*nzlA

where
• nzlE = Estimated number of nonzero elements in L as defined by maxlfil on input
• nzlA = Actual number of nonzero elements in L as defined by maxlfil on exit from the preconditioner
setup phase
If ( ncpus = 1 ) Then
nparU = nparR = 0
Else If ( ncpus > 1 ) Then
nparU = 6*neqns + 4
nparR = 4*neqns + 4
End If
In the hand-coded version, if you already know the value of nzlA, you can improve your workspace estimate
by setting nzlE = nzlA.
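A sketch of the incomplete Cholesky formulas above (illustrative Python, not library code); passing nzlE = nzlA reproduces the hand-coded case.

```python
# Illustrative transcription of the incomplete Cholesky (Step 2)
# workspace formulas above. nzlE/nzlA are the estimated/actual nonzero
# counts of the factor L.

def ic_workspace(neqns, nzlE, nzlA, ncpus):
    if ncpus == 1:
        nparU = nparR = 0
    else:
        nparU = 6 * neqns + 4
        nparR = 4 * neqns + 4
    iuse = max(4 * neqns + 2 * nzlE + 3, 2 * neqns + 2 * nzlA + 2 + nparU)
    iret = 2 * nzlA + 2 * neqns + 2 + nparR
    ruse = 2 * nzlA + ncpus * neqns
    rret = 2 * nzlA
    return iuse, iret, ruse, rret
```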
For incomplete LU preconditioning:
Iuse(2) = max( (4*neqns + 2*nzlE + nzuE + 3), (2*neqns + nzlA + nzuA + 2 + nparU) )
Iret(2) = nzlA + nzuA + 2*neqns + 2 + nparR
If ( ncpus = 1 ) then
Ruse(2) = nzlA + nzuA + neqns
Else If ( ncpus > 1 ) then
Ruse(2) = nzlA + nzuA + max(ncpus*neqns, nzlA, nzuA )
Rret(2) = nzlA + nzuA
where
• nzlE = Estimated number of nonzero elements in L as defined by maxlfil on input
• nzuE = Estimated number of nonzero elements in U as defined by maxufil on input
• nzlA = Actual number of nonzero elements in L as defined by maxlfil on exit from the preconditioner
setup phase
• nzuA = Actual number of nonzero elements in U as defined by maxufil on exit from the preconditioner
setup phase
If ( ncpus = 1 ) Then
nparU = nparR = 0
Else If ( ncpus > 1 ) Then
nparU = max(nzlA,nzuA) + 5*neqns + 4
nparR = 4*neqns + 4
End If
In the hand-coded version, if you already know the values of nzlA and nzuA, you can improve your
workspace estimate by setting nzlE = nzlA and nzuE = nzuA.
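The incomplete LU case follows the same pattern, with the same E (estimated) and A (actual) convention for the fill-in of both L and U. Again an illustrative sketch, not library code.

```python
# Illustrative transcription of the incomplete LU (Step 2) workspace
# formulas above.

def ilu_workspace(neqns, nzlE, nzuE, nzlA, nzuA, ncpus):
    if ncpus == 1:
        nparU = nparR = 0
        ruse = nzlA + nzuA + neqns
    else:
        nparU = max(nzlA, nzuA) + 5 * neqns + 4
        nparR = 4 * neqns + 4
        ruse = nzlA + nzuA + max(ncpus * neqns, nzlA, nzuA)
    iuse = max(4 * neqns + 2 * nzlE + nzuE + 3,
               2 * neqns + nzlA + nzuA + 2 + nparU)
    iret = nzlA + nzuA + 2 * neqns + 2 + nparR
    rret = nzlA + nzuA
    return iuse, iret, ruse, rret
```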

For Neumann polynomial preconditioning:


Iuse(2) = Iret(2) = 0
If ( method = ‘CGN’ ) Then
Ruse(2) = neqns
Rret(2) = neqns
Else
Ruse(2) = Rret(2) = 0
End If
For Jacobi least-squares polynomial preconditioning:
Iuse(2) = Iret(2) = 0
If ( method = ‘BCG’ ) Then
Ruse(2) = neqns + 2*(kdegree + 1)
Rret(2) = 2*(kdegree + 1)
Else If ( method = ‘CGN’ ) Then
Ruse(2) = Rret(2) = neqns + kdegree + 1
Else
Ruse(2) = neqns + kdegree + 1
Rret(2) = kdegree + 1
End If
Step 3: Compute solution by using the selected iterative method
In this step, compute the solution by using the selected preconditioner and iterative method. The amount of
workspace used depends on the method, preconditioner, stopping test, and number of CPUs used.
Iuse(3)= Iret(3) = Rret(3) = 0
Ruse(3)= nmethod + max( npresol, ntest ) + nmatvec
where

nmethod = 6*neqns if method = ‘BCG’
          5*neqns if method = ‘CGN’
          6*neqns if method = ‘CGS’
          ntrunc*neqns + 4*neqns + ntrunc*ntrunc + 3*ntrunc + 1 if method = ‘GMR’
          2*ntrunc*neqns + 4*neqns + ntrunc if method = ‘OMN’
          3*neqns if method = ‘PCG’
npresol = neqns if ipretyp = 2 and method = ‘CGN’
          neqns if ipretyp = 3 and method = ‘CGN’
          neqns if ipretyp = 4
          2*neqns if ipretyp = 5
          0 otherwise
ntest = 0 if itest = 0
nmatvec = ncpus*neqns if ncpus > 1 and (mvformat = 0 or isympap = 1 or
          method = ‘BCG’ or ‘CGN’)
          0 otherwise
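The Step 3 workspace can be sketched by tabulating nmethod per method and applying Ruse(3) = nmethod + max(npresol, ntest) + nmatvec. This is an illustrative Python transcription of the formulas above, not library code.

```python
# Illustrative transcription of the Step 3 real-workspace formulas above.

def nmethod_size(method, neqns, ntrunc=0):
    sizes = {
        'BCG': 6 * neqns,
        'CGN': 5 * neqns,
        'CGS': 6 * neqns,
        'GMR': ntrunc * neqns + 4 * neqns + ntrunc * ntrunc + 3 * ntrunc + 1,
        'OMN': 2 * ntrunc * neqns + 4 * neqns + ntrunc,
        'PCG': 3 * neqns,
    }
    return sizes[method]

def ruse3(nmethod, npresol, ntest, nmatvec):
    return nmethod + max(npresol, ntest) + nmatvec
```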

SEE ALSO
DFAULTS(3S)
INTRO_SPARSE(3S) for an example of using this routine and the other sparse matrix routines

SSGETRF ( 3S ) SSGETRF ( 3S )

NAME
SSGETRF – Factors a real sparse general matrix with threshold pivoting implemented

SYNOPSIS
CALL SSGETRF (ido, neqns, icolptr, irowind, value, lwork, work, iparam, thresh, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given a real sparse general matrix A, SSGETRF computes the LU factorization of PA(transpose of P), in
which P is an internally computed permutation matrix. Threshold pivoting is implemented for stability.
This routine has the following arguments:
ido Integer. (input)
Controls the execution path through the routine. ido is a two-digit integer whose digits are
represented on this man page as i and j. i indicates the starting phase of execution, and j
indicates the ending phase. For SSGETRF, there are four phases of execution, as follows:
Phase 1: Fill reduction reordering
Phase 2: Symbolic factorization
Phase 3: Determination of the node execution sequence and the storage requirement for the
frontal matrices
Phase 4: Numerical factorization
If a previous call to the routine has computed information from previous phases, execution can
start at any phase.
ido = 10i + j 1≤i≤j≤4
neqns Integer. (input)
Number of equations (or unknowns, rows, or columns).
icolptr Integer array of dimension neqns + 1. (input)
Column pointer array for the sparse matrix A. The first and last elements of the array must be
set as follows:
icolptr(1) = 1 icolptr(neqns+1) = nza + 1
where nza is the number of nonzero elements in the sparse matrix A.
irowind Integer array of dimension nza (see icolptr). (input)
Row indices array for the sparse matrix A.
value Real array of dimension nza (see icolptr). (input)

Array of nonzero values for the sparse matrix A. The icolptr, irowind, and value arguments
taken together contain the input matrix in sparse column format. See the introduction to the
sparse solvers (INTRO_SPARSE(3S)) for a full description of the sparse column format.
lwork Integer. (input)
Length of the work array work. Workspace requirements vary from phase to phase. If lwork is
not sufficient to execute a particular phase successfully, the routine will return with an indication
of how much workspace is required to continue. See the Workspace subsection.
work Real array of dimension lwork. (input and output)
Work array used to hold the results of each phase that are needed to process the next phase.
Between calls to SSGETRF to compute subsequent phases, the user must not modify this array.
iparam Integer array of dimension 13. (input)
List of user control parameters. The value of iparam(1) controls the use of the parameter array:
0 Uses default values for all parameters.
1 Overrides default values by using iparam.
For a full description, see the Parameters subsection.
thresh Real. (input)
The thresh variable determines whether pivoting occurs. 0 ≤ thresh ≤ 1.
ierror Integer. (output)
Error code to report any error condition detected.
0 Normal completion.
–1 ido is not a valid path for a fresh start.
–2 ido is not a valid path for a restart run.
– 10000 Input matrix structure is incorrect.
– k0001 Insufficient storage allocated for phase k. (1 ≤ k ≤ 4)
– 20002 Fatal error from the symbolic factorization. Either the input structure is incorrect or the
active part of array work was changed between successive calls to SSGETRF.
– 40002 Input matrix structure is not consistent with the structure of the lower triangular factor.
The active part of array work may have been changed between successive calls to
SSGETRF.
– 40301 Fatal error from the numerical factorization. The input matrix is numerically singular.
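The ido = 10i + j phase encoding described above can be sketched as follows. This is illustrative Python for the encoding only, not library code.

```python
# Illustrative: decode ido = 10i + j into (starting phase i, ending
# phase j), enforcing 1 <= i <= j <= 4 as required by SSGETRF.

def decode_ido(ido):
    i, j = divmod(ido, 10)
    if not (1 <= i <= j <= 4):
        raise ValueError("ido is not a valid phase range")
    return i, j
```

For example, ido = 14 runs all four phases, while ido = 44 reruns only the numerical factorization after the earlier phases have been saved.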
Parameters
The following is a list of user control parameters and their default values to be used by SSGETRF and
SSGETRS routines. To use the default values, pass a constant 0 as the iparam argument, as follows:
CALL SSGETRF(IDO,NEQ,ICOL,IROW,VAL,LWK,WORK,0,THR,IER)

iparam(1) 0 Use default values for all options.


1 Override default values.
iparam(2) Unit number for warning and error messages.
Default is 6.

iparam(3) Flag to control level of messages output.


≤ 0 Report only fatal errors.
= 1 Report timing and workspace for each phase.
≥ 2 Report detailed information for each phase.
Default is 0.
iparam(4) 0 Do not save the adjacency structure.
1 Save the adjacency structure.
Default is 0.
iparam(5) 0 A fresh start.
1 A restart from previously saved data.
Default is 0.
iparam(6) 0 Output will not be saved for subsequent restart.
k Active part of the work array through phase k will be saved for subsequent restart.
Default is 0.
iparam(7) Save-and-restart unit number of the unformatted file, which the user is assumed to have
opened. No meaningful default exists: if you use the default for iparam(6), no unit number is
needed; therefore, no default unit number is needed.
iparam(8) Relaxation factor that specifies the maximum additional fill-ins allowed per supernode.
Default is 0.
iparam(9) Relaxation factor that specifies the maximum additional fill-ins allowed as a percentage of the
size of L. iparam(9) is essentially used as a constraint in allowing additional fill-ins for each
supernode that uses iparam(8). See INTRO_SPARSE(3S).
Default is 0.
iparam(10) Reserved for future usage.
iparam(11) Reserved for future usage.
iparam(12) 0 Check for valid input structure.
1 Do not check input structure.
Default is 0.
iparam(13) Flag to control saving the sorted order of the input matrix. This is recommended if the same
sparsity pattern is being used repeatedly.
0 Do not save the sorted order of the input matrix.
1 Save the sorted order.
Default is 0.
Workspace
You can determine the amount of workspace needed to execute phase k (denoted Use(k)) and the amount of
workspace retained after the execution of phase k (denoted Ret(k)) by using the following algorithm:
neqns = Number of unknowns or equations.

nsup = Number of supernodes.


This can be obtained from work(32) after phase 1.
nza = Number of nonzero elements in A (=icolptr(neqns+1)– 1).
nadj = 2*(nza – neqns), size of the adjacency structure of A.
nfctnzs = Number of nonzero elements in L.
This can be obtained from work(11) after phase 1.
ngssubs = Number of row subscripts required to represent L.
This can be obtained from work(14) after phase 1.
nnzsym = Number of nonzero elements of A + (transpose of A).
This can be obtained from work(66) after phase 1.
lusize = Size of final LU decomposition.
This can be obtained from work(68) after phase 3.

Phase 1:
Use(1) = 150 + 12*neqns + 4*nza + 4
Ret(1) = 150 + 5*neqns + 3*nsup + nnzsym + 3

Phase 2:
I1 = Ret(1) + ngssubs + 4*neqns + nsup + 2
I2 = Ret(1) + 2*ngssubs + 10*nsup + 2*neqns + 4
If adjacency structure is saved
Use(2) = max ( I1, I2 )
Ret(2) = I2
Otherwise
Use(2) = max ( I1, I2 - nnzsym - neqns - 1 )
Ret(2) = I2

Phase 3:
Use(3) = Ret(2) + 3*nsup
Ret(3) = Ret(2) + nsup

Phase 4:
If the sort information is saved
Use(4) ≥ Ret(3) + 5*neqns + nfctnzs + 3*nza + 4
Ret(4) = Ret(3) + nza + lusize
Otherwise
Use(4) ≥ Ret(3) + 6*neqns + nfctnzs + 2*nza + 4
Ret(4) = Ret(3) + lusize
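The phase formulas transcribe directly; for example, Phase 1 (illustrative Python, not library code). The remaining phases chain Ret(k) forward in the same way.

```python
# Illustrative transcription of the SSGETRF Phase 1 workspace formulas
# above: Use(1) and Ret(1) in terms of neqns, nza, nsup, and nnzsym.

def ssgetrf_phase1(neqns, nza, nsup, nnzsym):
    use1 = 150 + 12 * neqns + 4 * nza + 4
    ret1 = 150 + 5 * neqns + 3 * nsup + nnzsym + 3
    return use1, ret1
```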

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSGETRS(3S) to solve one or more right-hand sides by using the factorization computed by SSGETRF

SSGETRS ( 3S ) SSGETRS ( 3S )

NAME
SSGETRS – Solves a real sparse general system, using the factorization computed in SSGETRF(3S)

SYNOPSIS
CALL SSGETRS (ido, lwork, work, nrhs, rhs, ldrhs, iparam, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given the LU factorization computed from SSGETRF and a (set of) right-hand side(s), SSGETRS solves the
linear systems.
This routine has the following arguments:
ido Integer. (input)
Variable used to control the execution path in SSGETRS.
= 1 Solve AX = B
= 2 Forward solve
= 3 Backward solve
Calling SSGETRS with ido = 2 and again with ido = 3 gives the same result as calling SSGETRS
once with ido = 1.
lwork Integer. (input)
Length of the work array work as in SSGETRF.
work Real array of dimension lwork. (input and output)
Work array exactly as output from SSGETRF. The user must not have modified this array because
it contains information about the LU factorization.
nrhs Integer. (input)
Number of right-hand sides.
rhs Real array of dimension (ldrhs,nrhs). (input and output)
On entry, rhs contains the nrhs vectors. If ido = 1 or 2, the vectors are the right-hand side vectors
b from the system of equations Ax = b. If ido = 3, the right-hand sides should be the intermediate
result z obtained by calling SSGETRS with ido = 2.
On exit, rhs contains the nrhs corresponding solution vectors.
ldrhs Integer. (input)
Leading dimension of array rhs exactly as specified in the calling program.
iparam Integer array of dimension 13. (input)
List of user control options as in SSGETRF. Only four elements, iparam(1), iparam(2), iparam(3),
and iparam(5), are required for the solution phase.

iparam(1) 0 Use default values for all options.


1 Override default values.
iparam(2) Unit number for warning and error messages.
Default is 6.
iparam(3) Flag to control level of messages output.
≤ 0 Report only fatal errors.
= 1 Report timing and workspace for each phase.
≥ 2 Report detailed information for each phase.
Default is 0.
iparam(5) 0 A fresh start.
1 A restart from previously saved data.
Default is 0.
ierror Integer. (output)
Error code, as follows:
0 Normal completion.
–3 ido is not a valid input.
– 50001 Insufficient workspace for ido = 1.
– 60001 Insufficient workspace for ido = 2.
– 70001 Insufficient workspace for ido = 3.
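Why calling SSGETRS with ido = 2 and then ido = 3 matches a single call with ido = 1 can be seen on a tiny dense example: the forward solve computes z from Lz = b, and the backward solve computes x from Ux = z. This is illustrative Python on a hand-built 2x2 factorization, not SSGETRS itself.

```python
# Illustrative: composing a forward solve (ido = 2) with a backward
# solve (ido = 3) yields the full solve (ido = 1), shown on a dense
# 2x2 LU factorization.

def forward_solve(L, b):          # solve L z = b, L lower triangular
    z0 = b[0] / L[0][0]
    z1 = (b[1] - L[1][0] * z0) / L[1][1]
    return [z0, z1]

def backward_solve(U, z):         # solve U x = z, U upper triangular
    x1 = z[1] / U[1][1]
    x0 = (z[0] - U[0][1] * x1) / U[0][0]
    return [x0, x1]

# A = L*U with L = [[1,0],[2,1]] and U = [[2,1],[0,3]], so A = [[2,1],[4,5]].
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[2.0, 1.0], [0.0, 3.0]]
b = [3.0, 9.0]
x = backward_solve(U, forward_solve(L, b))   # two-step solve of A x = b
```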

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSGETRF(3S) to compute the factorization used by SSGETRS

SSPOTRF ( 3S ) SSPOTRF ( 3S )

NAME
SSPOTRF – Factors a real sparse symmetric definite matrix

SYNOPSIS
CALL SSPOTRF (ido, neqns, icolptr, irowind, value, lwork, work, iparam, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given a real sparse symmetric definite matrix A, SSPOTRF computes the LD(transpose of L) factorization of
PA(transpose of P); P is an internally computed permutation matrix.
This routine has the following arguments:
ido Integer. (input)
Controls the execution path through the routine. ido is a two-digit integer whose digits are
represented on this man page as i and j. i indicates the starting phase of execution, and j indicates
the ending phase. For SSPOTRF, there are four phases of execution, as follows:
Phase 1: Fill reduction reordering
Phase 2: Symbolic factorization
Phase 3: Determination of the node execution sequence and the storage requirement for the
frontal matrices
Phase 4: Numerical factorization
If a previous call to the routine has computed information from previous phases, execution can
start at any phase.
ido = 10i + j 1≤i≤j≤4
neqns Integer. (input)
Number of equations (or unknowns, rows, or columns).
icolptr Integer array of dimension neqns + 1. (input)
Column pointer array for the sparse matrix A. The first and last elements of the array must be set
as follows:
icolptr(1) = 1 icolptr(neqns+1) = nza + 1
where nza is the number of nonzero elements in the sparse matrix A.
irowind Integer array of dimension nza (see icolptr). (input)
Row indices array for the sparse matrix A.

value Real array of dimension nza (see icolptr). (input)


Array of nonzero values for the sparse matrix A. The icolptr, irowind, and value arguments taken
together contain the input matrix in sparse column format. Because A is symmetric, only the lower
triangle is specified by these arguments. See the introduction to the sparse solvers
(INTRO_SPARSE(3S)) for a full description of the sparse column format.
lwork Integer. (input)
Length of the work array work. Workspace requirements vary from phase to phase. If lwork is
not sufficient to execute a particular phase successfully, the routine will return with an indication of
how much workspace is required to continue. See the Workspace subsection.
work Real array of dimension lwork. (input and output)
Work array used to hold the results of each phase that are needed to process the next phase. The
user must not modify this array between calls to SSPOTRF to compute subsequent phases.
iparam Integer array of dimension 12. (input)
List of user control parameters. The value of iparam(1) controls the use of the parameter array:
0 Uses default values for all parameters.
1 Overrides default values by using iparam.
For a full description, see the Parameters subsection.
ierror Integer. (output) Error code to report any error condition detected.
0 Normal completion.
–1 ido is not a valid path for a fresh start.
–2 ido is not a valid path for a restart run.
– 10000 Input matrix structure is incorrect.
– k0001 Insufficient storage allocated for phase k (1 ≤ k ≤ 4).
– 20002 Fatal error from the symbolic factorization. Either the input structure is incorrect or
the active part of array work has been changed between successive calls to SSPOTRF.
– 40002 Input matrix structure is not consistent with the structure of the lower triangular
factor. The active part of array work may have been changed between successive
calls to SSPOTRF.
– 40301 Fatal error from the numerical factorization. The input matrix is numerically singular.
Parameters
The following is a list of user control parameters and their default values to be used by the SSPOTRF
and SSPOTRS routines. To use the default values, pass a constant 0 as the iparam argument, as follows:
CALL SSPOTRF(IDO,NEQ,ICOL,IROW,VAL,LWK,WORK,0,IER)

iparam(1) 0 Use default values for all parameters.


1 Override default values.
iparam(2) Unit number for warning and error messages.
Default is 6.

iparam(3) Flag to control level of messages output.


≤ 0 Report only fatal errors.
= 1 Report timing and workspace for each phase.
≥ 2 Report detailed information for each phase.
Default is 0.
iparam(4) 0 Do not save the adjacency structure.
1 Save the adjacency structure.
Default is 0.
iparam(5) 0 A fresh start.
1 A restart from previously saved data.
Default is 0.
iparam(6) 0 No output will be saved for subsequent restart.
k Active portion of the work array through phase k will be saved for subsequent restart.
Default is 0.
iparam(7) Save-and-restart unit number of the unformatted file, which the user is assumed to have
opened. No meaningful default exists: if the default for iparam(6) is used, no unit number is
needed; therefore, no default unit number is needed.
iparam(8) Relaxation factor that specifies the maximum additional fill-ins allowed per supernode.
Default is 0.
iparam(9) Relaxation factor that specifies the maximum additional fill-ins allowed as a percentage of the
size of L. iparam(9) is essentially used as a constraint in allowing additional fill-ins for each
supernode that uses iparam(8). See the introduction to the sparse solvers,
INTRO_SPARSE(3S).
Default is 0.
iparam(10) Size of the frontal matrix above which parallelism is exploited only in the factorization and
partial updating of the dense frontal matrix.
Usually, this type of parallelism is more effective toward the end of the factorization, when
there tend to be fewer independent supernodes and the frontal matrices tend to be larger.
Default is 0.
iparam(11) Size of the fixed block to accommodate the grouping of temporary frontal matrices. This is
needed only when you want to exploit the parallelism in the elimination of independent
supernodes; in this case, workspace for temporary frontal and update matrices of the
independent supernodes are allocated using a fixed-block scheme. When in use, iparam(11)
must be greater than or equal to iparam(10).
Default is 0.
iparam(12) 0 Check for valid input structure.
1 Do not check input structure.
Default is 0.

Workspace
You can determine the amount of workspace needed to execute phase k (denoted Use(k)) and the amount of
workspace retained after the execution of phase k (denoted Ret(k)) by using the following notation:
ncpus = Number of CPUs.
neqns = Number of unknowns or equations.
nsup = Number of supernodes.
This can be obtained from work(32) after phase 1.
nza = Number of nonzero elements in A (=icolptr(neqns+1)– 1).
nadj = 2*(nza – neqns), size of the adjacency structure of A.
nfctnzs = Number of nonzero elements in L.
This can be obtained from work(11) after phase 1.
ngssubs = Number of row subscripts required to represent L.
This can be obtained from work(14) after phase 1.
maxrow = Maximum number of nonzero elements in a row of L.
This can be obtained from work(20) after phase 1.
maxsup = Maximum size of a supernode.
This can be obtained from work(21) after phase 1.
minstk = Minimum amount of workspace required for the temporary frontal matrices.
This can be obtained from work(22) after phase 3.

Phase 1:
Use(1) = 150 + 2*nadj + 11*neqns + 4
Ret(1) = 150 + 4*neqns + 3*nsup + nadj + 3

Phase 2:
I1 = Ret(1) + ngssubs + 3*neqns + nsup + 1
I2 = Ret(1) + 2*ngssubs + 10*nsup + 3
If saving the adjacency structure
Use(2) = max ( I1, I2-(2*nza+1) )
Ret(2) = 150 + 3*neqns + 2*ngssubs + 13*nsup + 5
Otherwise
Use(2) = max ( I1, I2 )
Ret(2) = Ret(1) + 2*ngssubs + 10*nsup + 3

Phase 3:
For single processing (ncpus = 1)
Use(3) = Ret(2) + 2*nsup
Ret(3) = Ret(2)
For multiple processing (ncpus > 1)
Use(3) = Ret(2) + 12*nsup + 2
Ret(3) = Ret(2) + 8*nsup + 1

Phase 4:
For single processing (ncpus = 1)
Use(4) = Ret(3) + neqns + nfctnzs + 2*(maxsup+maxrow) +
nsup + minstk
For multiple processing (ncpus > 1)
Use(4) = Ret(3) + neqns + nfctnzs + ncpus + maxsup + 3*(nsup) +
(ncpus+1)*(maxsup+2*maxrow) + minstk
Ret(4) = Ret(3) + neqns + nfctnzs
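The workspace formulas above can be evaluated mechanically. The following is a minimal sketch (a hypothetical helper, not part of the library) that computes Use(k) and Ret(k) for all four phases from the quantities defined in the notation list:

```python
def sspotrf_workspace(ncpus, neqns, nsup, nza, nfctnzs,
                      ngssubs, maxrow, maxsup, minstk, save_adj=False):
    # Evaluate the SSPOTRF Use(k)/Ret(k) formulas (hypothetical helper).
    nadj = 2 * (nza - neqns)                 # size of the adjacency structure of A
    use1 = 150 + 2*nadj + 11*neqns + 4
    ret1 = 150 + 4*neqns + 3*nsup + nadj + 3
    i1 = ret1 + ngssubs + 3*neqns + nsup + 1
    i2 = ret1 + 2*ngssubs + 10*nsup + 3
    if save_adj:
        use2 = max(i1, i2 - (2*nza + 1))
        ret2 = 150 + 3*neqns + 2*ngssubs + 13*nsup + 5
    else:
        use2 = max(i1, i2)
        ret2 = ret1 + 2*ngssubs + 10*nsup + 3
    if ncpus == 1:
        use3, ret3 = ret2 + 2*nsup, ret2
    else:
        use3, ret3 = ret2 + 12*nsup + 2, ret2 + 8*nsup + 1
    if ncpus == 1:
        use4 = ret3 + neqns + nfctnzs + 2*(maxsup + maxrow) + nsup + minstk
    else:
        use4 = (ret3 + neqns + nfctnzs + ncpus + maxsup + 3*nsup
                + (ncpus + 1)*(maxsup + 2*maxrow) + minstk)
    ret4 = ret3 + neqns + nfctnzs
    return [use1, use2, use3, use4], [ret1, ret2, ret3, ret4]
```

For a full ido = 14 run, the largest Use(k) is a reasonable lower bound on lwork, since each phase must fit in the work array as it executes.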

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSPOTRS(3S) to solve one or more right-hand sides, using the factorization computed by SSPOTRF



SSPOTRS ( 3S ) SSPOTRS ( 3S )

NAME
SSPOTRS – Solves a real sparse symmetric definite system, using the factorization computed in
SSPOTRF(3S)

SYNOPSIS
CALL SSPOTRS (ido, lwork, work, nrhs, rhs, ldrhs, iparam, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given the LDL^T factorization of PAP^T computed from SSPOTRF and a (set of) right-hand side(s),
SSPOTRS solves the following linear system for the solution of the system Ax = b. P is an internally
computed permutation matrix.
PAP^T y = Pb, x = P^T y
This routine has the following arguments:
ido Integer. (input)
Variable used to control the execution path in SSPOTRS.
ido = 1 Solves P^T(LDL^T)(Px) = P^T(P(b))
ido = 2 Solves L(Px) = P(rhs)
ido = 3 Solves Dx = rhs
ido = 4 Solves P^T L^T x = P^T(rhs)
ido = 5 Solves (LD^(1/2))(Px) = P(rhs)
ido = 6 Solves P^T(LD^(1/2))^T x = P^T(rhs)
lwork Integer. (input)
Length of the work array work as in SSPOTRF.
work Real array of dimension lwork. (input and output)
Work array exactly as output from SSPOTRF. The user must not have modified this array because
it contains information about the LDL^T factorization.
nrhs Integer. (input)
Number of right-hand sides.
rhs Real array of dimension (ldrhs,nrhs). (input and output)
On entry, rhs contains the nrhs right-hand side b for which to solve. On exit, rhs contains the nrhs
corresponding solutions.
ldrhs Integer. (input)
Leading dimension of array rhs exactly as specified in the calling program.


iparam Integer array of dimension 12. (input)


List of user control options as in SSPOTRF. Only four elements, iparam(1), iparam(2), iparam(3),
and iparam(5), are required for the solution phase.
iparam(1) 0 Use default values for all options.
1 Override default values.
iparam(2) Unit number for warning and error messages.
Default is 6.
iparam(3) Flag to control level of messages output.
≤ 0 Report only fatal errors.
= 1 Report timing and workspace for each phase.
≥ 2 Report detailed information for each phase.
Default is 0.
iparam(5) 0 A fresh start.
1 A restart from previously saved data.
Default is 0.
ierror Integer. (output)
Error code, as follows:
0 Normal completion.
–3 ido is not a valid input.
– 50001 Insufficient workspace for ido = 1.
– 60001 Insufficient workspace for ido = 2.
– 70001 Insufficient workspace for ido = 3.
– 80001 Insufficient workspace for ido = 4.
– 90001 Insufficient workspace for ido = 5.
– 100001 Insufficient workspace for ido = 6.
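The staged solves selected by ido can be illustrated on a small dense system. The following sketch is hypothetical (dense storage, no permutation, i.e., P = I assumed) and shows the ido = 2, 3, 4 sequence that together is equivalent to ido = 1:

```python
def ldlt_factor(A):
    # Dense LDL^T factorization sketch (no pivoting; P = I assumed).
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k]**2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k]*L[j][k]*D[k] for k in range(j))) / D[j]
    return L, D

def ldlt_solve(L, D, b):
    n = len(b)
    w = b[:]
    for i in range(n):                        # stage "ido = 2": L w = b
        w[i] -= sum(L[i][k] * w[k] for k in range(i))
    v = [w[i] / D[i] for i in range(n)]       # stage "ido = 3": D v = w
    x = v[:]
    for i in range(n - 1, -1, -1):            # stage "ido = 4": L^T x = v
        x[i] -= sum(L[k][i] * x[k] for k in range(i + 1, n))
    return x
```

The three stages applied in order reproduce the one-shot solve, which is why the partial-solve ido values are useful for reusing intermediate results.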

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSPOTRF(3S) to compute the factorization used by SSPOTRS



SSTSTRF ( 3S ) SSTSTRF ( 3S )

NAME
SSTSTRF – Factors a real sparse general matrix with a symmetric nonzero pattern (no form of pivoting is
implemented)

SYNOPSIS
CALL SSTSTRF (ido, neqns, icolptr, irowind, value, lwork, work, iparam, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given a real sparse general matrix A with a symmetric nonzero pattern, SSTSTRF computes the LU
factorization of PA(transpose of P). P is an internally computed permutation matrix. No form of pivoting is
implemented.
This routine has the following arguments:
ido Integer. (input)
Controls the execution path through the routine. ido is a two-digit integer whose digits are
represented on this man page as i and j. i indicates the starting phase of execution, and j
indicates the ending phase. For SSTSTRF, there are four phases of execution, as follows:
Phase 1: Fill reduction reordering
Phase 2: Symbolic factorization
Phase 3: Determination of the node execution sequence and the storage requirement for the
frontal matrices
Phase 4: Numerical factorization
If a previous call to the routine has computed information from previous phases, execution may
start at any phase.
ido = 10i + j, where 1 ≤ i ≤ j ≤ 4
neqns Integer. (input)
Number of equations (or unknowns, rows, or columns).
icolptr Integer array of dimension neqns + 1 . (input)
Column pointer array for the sparse matrix A. The first and last elements of the array must be
set as follows:
icolptr(1) = 1 icolptr(neqns+1) = nza + 1
where nza is the number of nonzero elements in the sparse matrix A.
irowind Integer array of dimension nza (see icolptr). (input)
Row indices array for the sparse matrix A.


value Real array of dimension nza (see icolptr). (input)


Array of nonzero values for the sparse matrix A.
The icolptr, irowind, and value arguments taken together contain the input matrix in sparse
column format. See the introduction to the sparse solvers (INTRO_SPARSE(3S)) for a full
description of the sparse column format.
lwork Integer. (input)
Length of the work array work. Workspace requirements vary from phase to phase. If lwork is
not sufficient to execute a particular phase successfully, the routine will return with an
indication of how much workspace is required to continue. See the Workspace subsection.
work Real array of dimension lwork. (input and output)
Work array used to hold the results of each phase that are needed to process the next phase.
On input, it contains any retained results from previously executed phases; on output, it
contains the intermediate results of the phases just executed.
The user must not modify this array between calls to SSTSTRF to compute subsequent phases.
iparam Integer array of dimension 12. (input)
List of user control parameters. The value of iparam(1) controls the use of the parameter array:
0 Uses default values for all parameters.
1 Overrides default values by using iparam.
For a full description, see the Parameters subsection.
ierror Integer. (output)
Error code to report any error condition detected.
0 Normal completion.
–1 ido is not a valid path for a fresh start.
–2 ido is not a valid path for a restart run.
– 10000 Input matrix structure is incorrect.
– k0001 Insufficient storage allocated for phase
k (1 ≤ k ≤ 4).
– 20002 Fatal error from the symbolic factorization. Either
the input structure is incorrect or the active part of
array work has been changed between successive
calls to SSTSTRF.
– 40002 Input matrix structure is not consistent with the
structure of the lower triangular factor. The active
part of array work may have been changed between
successive calls to SSTSTRF.
– 40301 Fatal error from the numerical factorization. The
input matrix is numerically singular.
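The two-digit ido path code described above can be decoded into its starting and ending phases. The following is a small hypothetical helper (not part of the library) that mirrors the ido = 10i + j convention:

```python
def decode_ido(ido):
    # Split the two-digit path code into starting phase i and ending phase j,
    # per the ido = 10i + j convention (1 <= i <= j <= 4).
    i, j = divmod(ido, 10)
    if not (1 <= i <= j <= 4):
        raise ValueError("ido is not a valid phase range")
    return i, j
```

For example, ido = 14 runs all four phases, while ido = 44 reruns only the numerical factorization after the structure of A has been analyzed once.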


Parameters
The following is a list of user control parameters and their default values used by the SSTSTRF and
SSTSTRS routines. To accept all defaults, pass 0 as the iparam argument (equivalent to setting
iparam(1) = 0), as in the following call:
CALL SSTSTRF(IDO,NEQ,ICOL,IROW,VAL,LWK,WORK,0,IER)

iparam(1) 0 Use default values for all parameters.


1 Override default values.
iparam(2) Unit number for warning and error messages.
Default is 6.
iparam(3) Flag to control level of messages output.
≤ 0 Report only fatal errors.
= 1 Report timing and workspace for each phase.
≥ 2 Report detailed information for each phase.
Default is 0.
iparam(4) 0 Do not save the adjacency structure.
1 Save the adjacency structure.
Default is 0.
iparam(5) 0 A fresh start.
1 A restart from previously saved data.
Default is 0.
iparam(6) 0 No output will be saved for subsequent restart.
k Active part of the work array through phase k will be saved for subsequent restart.
Default is 0.
iparam(7) Save-and-restart unit number of the unformatted file, which it is assumed that the user has
opened.
No meaningful default exists. That is, if the default for iparam(6) is used, no unit number is
needed; therefore, no unit number default value is needed.
iparam(8) Relaxation factor that specifies the maximum additional fill-ins allowed per supernode.
Default is 0.
iparam(9) Relaxation factor that specifies the maximum additional fill-ins allowed as a percentage of the
size of L. iparam(9) is essentially used as a constraint in allowing additional fill-ins for each
supernode that uses iparam(8). See the introduction to the sparse solvers,
INTRO_SPARSE(3S).
Default is 0.
iparam(10) Size of the frontal matrix above which parallelism is exploited only in the factorization and
partial updating of the dense frontal matrix. Usually, this type of parallelism is more effective
toward the end of the factorization, when there are usually fewer independent supernodes and
the frontal matrices are usually larger.
Default is 0.


iparam(11) Size of the fixed block to accommodate the grouping of temporary frontal matrices. This is
needed only when you want to exploit the parallelism in the elimination of independent
supernodes; in this case, workspace for temporary frontal and update matrices of the
independent supernodes are allocated using a fixed-block scheme. When in use, iparam(11)
must be greater than or equal to iparam(10).
Default is 0.
iparam(12) 0 Check for valid input structure.
1 Do not check input structure.
Default is 0.
Workspace
You can determine the amount of workspace needed to execute phase k (denoted Use(k)) and the amount of
workspace retained after the execution of phase k (denoted Ret(k)) by using the following notation:
ncpus = Number of CPUs.
neqns = Number of unknowns or equations.
nsup = Number of supernodes. This can be obtained from work(32) after phase 1.
nza = Number of nonzero elements in A (=icolptr(neqns+1)– 1).
nadj = (nza – neqns), size of the adjacency structure of A.
nfctnzs = Number of nonzero elements in L. This can be obtained from work(11) after phase 1.
ngssubs = Number of row subscripts required to represent L. This can be obtained from work(14) after
phase 1.
minstk = Minimum amount of workspace required for the temporary frontal matrices. This can be
obtained from work(22) after phase 3.
Phase 1:
Use(1) = 150 + 2*nadj + 11*neqns + 4
Ret(1) = 150 + 4*neqns + 3*nsup + nadj + 3

Phase 2:
Use(2) = Ret(1) + 2*ngssubs + 10*nsup + neqns + 5

If adjacency structure is saved then


Ret(2) = Use(2)
else
Ret(2) = Use(2) - neqns - nadj
end


Phase 3:
For single processing (ncpus = 1)
Use(3) = Ret(2) + 2*nsup
Ret(3) = Ret(2)

For multiple processing (ncpus > 1)


Use(3) = Ret(2) + 12*nsup + 2
Ret(3) = Ret(2) + 8*nsup + 1

Phase 4:
For single processing (ncpus = 1)
Use(4) = Ret(3) + neqns + nfctnzs + nsup + minstk

For multiple processing (ncpus > 1)


Use(4) = Ret(3) + neqns + nfctnzs + ncpus + 3*nsup + minstk
Ret(4) = Ret(3) + neqns + nfctnzs

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSTSTRS(3S) to solve one or more right-hand sides, using the factorization computed by SSTSTRF



SSTSTRS ( 3S ) SSTSTRS ( 3S )

NAME
SSTSTRS – Solves a real sparse general system with a symmetric nonzero pattern, using the factorization
computed in SSTSTRF(3S)

SYNOPSIS
CALL SSTSTRS (ido, lwork, work, nrhs, rhs, ldrhs, iparam, ierror)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
Given the LU factorization of PAP^T computed from SSTSTRF(3S) and a (set of) right-hand side(s),
SSTSTRS solves the following linear system for the solution of the system Ax = b.
P is an internally computed permutation matrix. P^T is the transpose of P.
PAP^T y = Pb, x = P^T y
This routine has the following arguments:
ido Integer. (input)
Variable used to control the execution path in SSTSTRS.
ido = 1 Solves P^T(LU)Px = b (that is, Ax = b) for x
ido = 2 Solves P^T Lz = b for z
ido = 3 Solves UPx = z for x
Calling SSTSTRS with ido = 2 and again with ido = 3 has the same result as calling SSTSTRS
once with ido = 1.
lwork Integer. (input)
Length of the work array work as in SSTSTRF.
work Real array of dimension lwork. (input and output)
Work array exactly as output from SSTSTRF. The user must not have modified this array
because it contains information about the LU factorization.
nrhs Integer. (input)
Number of right-hand sides.
rhs Real array of dimension (ldrhs, nrhs). (input and output)
On entry, rhs contains the nrhs right-hand side vectors. If ido = 1 or 2, the right-hand side
vectors should be b from the system of equations Ax = b. If ido = 3, the right-hand sides
should be the intermediate result z obtained by calling SSTSTRS with ido = 2.
On exit, rhs contains the nrhs solution vectors.
ldrhs Integer. (input)
Leading dimension of array rhs exactly as specified in the calling program.


iparam Integer array of dimension 12. (input)


List of user control options as in SSTSTRF. Only four elements, iparam(1), iparam(2),
iparam(3), and iparam(5), are required for the solution phase.
iparam(1) 0 Use default values for all options
1 Override default values
iparam(2) Unit number for warning and error messages
Default is 6.
iparam(3) Flag to control level of messages output.
≤ 0 Report only fatal errors
= 1 Report timing and workspace for each phase
≥ 2 Report detailed information for each phase
Default is 0.
iparam(5) 0 A fresh start
1 A restart from previously saved data
Default is 0.
ierror Integer. (output)
0 Normal completion.
–3 ido is not a valid input.
– 50001 Insufficient workspace for ido = 1.
– 60001 Insufficient workspace for ido = 2.
– 70001 Insufficient workspace for ido = 3.
– 80001 Insufficient workspace for ido = 4.
– 90001 Insufficient workspace for ido = 5.
– 100001 Insufficient workspace for ido = 6.
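The equivalence between the two-stage solve (ido = 2 then ido = 3) and the one-shot solve (ido = 1) can be sketched on a dense system. The code below is hypothetical (dense storage, no pivoting as in SSTSTRF, and P = I assumed):

```python
def lu_factor(A):
    # Dense Doolittle LU sketch: A = L*U, unit lower triangular L,
    # no pivoting (the same restriction SSTSTRF imposes); P = I assumed.
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]
            L[i][j] = m
            for k in range(j, n):
                U[i][k] -= m * U[j][k]
    return L, U

def forward(L, b):
    # Stage "ido = 2": solve L z = b by forward substitution.
    z = b[:]
    for i in range(len(b)):
        z[i] -= sum(L[i][k] * z[k] for k in range(i))
    return z

def backward(U, z):
    # Stage "ido = 3": solve U x = z by back substitution.
    n = len(z)
    x = z[:]
    for i in range(n - 1, -1, -1):
        x[i] -= sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] /= U[i][i]
    return x
```

Chaining forward then backward yields the same x as solving Ax = b directly, which is what the ido = 1 path does in one call.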

SEE ALSO
INTRO_SPARSE(3S) for general information on sparse solvers and a usage example
SSTSTRF(3S) to compute the factorization used by SSTSTRS



INTRO_SPEC ( 3S ) INTRO_SPEC ( 3S )

NAME
INTRO_SPEC – Introduction to solvers for special linear systems

IMPLEMENTATION
UNICOS systems

DESCRIPTION
All solvers for special linear systems run only on Cray PVP systems.
The following table lists the solvers for special linear systems. The first name in each block of the table is
the name of the man page that documents all of the routines listed in that block.

Purpose Name

Solves first-order linear recurrences, overwriting input vector FOLR
FOLRP
Solves first-order linear recurrences and writes the solutions to a new vector FOLR2
FOLR2P
Solves special first-order linear recurrences FOLRC
Solves for the last term of a first-order linear recurrence FOLRN
FOLRNP
Solves a partial product or a partial summation problem RECPP
RECPS
Solves a real- or complex-valued tridiagonal system with one right-hand side SDTSOL
CDTSOL
Factors a real- or complex-valued tridiagonal system SDTTRF
CDTTRF
Solves a real- or complex-valued tridiagonal system with one right-hand side, using its SDTTRS
factorization as computed by SDTTRF(3S) or CDTTRF(3S) CDTTRS
Solves a second-order linear recurrence SOLR
Solves a second-order linear recurrence for only the last term SOLRN
Solves a second-order linear recurrence for three terms SOLR3



FOLR ( 3S ) FOLR ( 3S )

NAME
FOLR, FOLRP – Solves first-order linear recurrences

SYNOPSIS
CALL FOLR (n, x, incx, a, inca)
CALL FOLRP (n, x, incx, a, inca)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
FOLR solves first-order linear recurrences, as follows:
a(1) = a(1)
a(i) = a(i) - x(i)*a(i-1)   for i = 2, 3, ..., n
FOLRP solves first-order linear recurrences, as follows:
a(1) = a(1)
a(i) = a(i) + x(i)*a(i-1)   for i = 2, 3, ..., n
These routines have the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 1, neither routine performs any computation.
x Real array of dimension 1+(n-1)*|incx|. (input)
Contains multiplier vector. The first element of x in the recurrence is arbitrary.
incx Integer. (input)
Increment between elements of x.
a Real array of dimension 1+(n-1)*|inca|. (input and output)
Contains operand vector. On input, a contains the initial values for the recurrence relation. On
output, a receives the result of the linear recurrence.
inca Integer. (input)
Increment between recurrence elements of a.

NOTES
When working backward (incx < 0 or inca < 0), each routine starts at the end of the vector and moves
backward, as follows:


x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)
If incx = 0, x is a scalar multiplier.

CAUTIONS
Do not specify inca as 0, because unpredictable results may occur.

EXAMPLES
The following examples illustrate the use of these routines with positive and negative increments. (The first
three executable statements of each example are Fortran 90 array syntax.)
Example 1: FOLR with positive increments
PROGRAM EX1
PARAMETER (NMAX = 100)
REAL X(NMAX), A(NMAX), A1(NMAX)
C
C.....Load vectors with random numbers, initialize N.
X = RANF()
A = RANF()
A1 = A
N = NMAX
C
C.....Call to FOLR
CALL FOLR(N,X,1,A1,1)
C
C.....Equivalent FORTRAN code
A(1)=A(1)
DO 10 I = 2, N
A(I)=A(I)-X(I)*A(I-1)
10 CONTINUE
C
C.....Verify results
A = A - A1
PRINT*,'Difference = ',SNRM2(N,A,1)
END


Example 2: FOLR with negative increments


PROGRAM EX2
PARAMETER (NMAX = 100)
REAL X(NMAX), A(NMAX), A1(NMAX)
C
C.....Load vectors with random numbers, initialize N.
X = RANF()
A = RANF()
A1 = A
N = NMAX
C
C.....Call to FOLR
CALL FOLR(N,X,-1,A1,-1)
C
C.....Equivalent FORTRAN code
A(N)=A(N)
DO 10 I = N-1, 1, -1
A(I)=A(I)-X(I)*A(I+1)
10 CONTINUE
C
C.....Verify results
A = A - A1
PRINT*,'Difference = ',SNRM2(N,A,1)
END

Example 3: FOLRP with positive increments


PROGRAM EX3
PARAMETER (NMAX = 100)
REAL X(NMAX), A(NMAX), A1(NMAX)
C
C.....Load vectors with random numbers, initialize N.
X = RANF()
A = RANF()
A1 = A
N = NMAX
C
C.....Call to FOLRP
CALL FOLRP(N,X,1,A1,1)
C
C.....Equivalent FORTRAN code
A(1)=A(1)
DO 10 I = 2, N
A(I)=A(I)+X(I)*A(I-1)
10 CONTINUE
C
C.....Verify results
A = A - A1
PRINT*,'Difference = ',SNRM2(N,A,1)
END

Example 4: FOLRP with negative increments


PROGRAM EX4
PARAMETER (NMAX = 100)
REAL X(NMAX), A(NMAX), A1(NMAX)
C
C.....Load vectors with random numbers, initialize N.
X = RANF()
A = RANF()
A1 = A
N = NMAX
C
C.....Call to FOLRP
CALL FOLRP(N,X,-1,A1,-1)
C
C.....Equivalent FORTRAN code
A(N)=A(N)
DO 10 I = N-1, 1, -1
A(I)=A(I)+X(I)*A(I+1)
10 CONTINUE
C
C.....Verify results
A = A - A1
PRINT*,'Difference = ',SNRM2(N,A,1)
END

SEE ALSO
FOLR2(3S) and FOLR2P(3S) to solve the same recurrences as solved by FOLR and FOLRP, without
overwriting the a operand
FOLRC(3S) to solve a first-order linear recurrence by using scalar multiplier
FOLRN(3S) and FOLRNP(3S) to solve for only the last term of the same recurrences as solved by FOLR and
FOLRP
SOLR(3S), SOLR3(3S), SOLRN(3S) to solve various forms of second-order linear recurrence



FOLR2 ( 3S ) FOLR2 ( 3S )

NAME
FOLR2, FOLR2P – Solves first-order linear recurrences without overwriting the operand vector

SYNOPSIS
CALL FOLR2 (n, x, incx, a, inca, b, incb)
CALL FOLR2P (n, x, incx, a, inca, b, incb)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION

FOLR2 solves first-order linear recurrences, as follows:
b(1) = a(1)
b(i) = a(i) - x(i)*b(i-1)   for i = 2, 3, ..., n
FOLR2P solves first-order linear recurrences, as follows:
b(1) = a(1)
b(i) = a(i) + x(i)*b(i-1)   for i = 2, 3, ..., n
These routines have the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 0, neither routine performs any computation.
x Real array of dimension 1+(n-1)*|incx|. (input)
Contains multiplier vector. The first element of x in the recurrence is arbitrary.
incx Integer. (input)
Increment between elements of x.
a Real array of dimension 1+(n-1)*|inca|. (input)
Contains operand vector.
inca Integer. (input)
Increment between recurrence elements of a.
b Real array of dimension 1+(n-1)*|incb|. (output)
Contains result vector.
incb Integer. (input)
Increment between recurrence elements of b.
The following is the Fortran equivalent of FOLR2 (given for case incx = inca = incb = 1):


B(1)=A(1)
DO 10 I=2,N
B(I)=A(I)-X(I)*B(I-1)
10 CONTINUE

The following is the Fortran equivalent of FOLR2P (given for case incx = inca = incb = 1):
B(1)=A(1)
DO 10 I=2,N
B(I)=A(I)+X(I)*B(I-1)
10 CONTINUE

NOTES
When working backward (incx < 0, inca < 0 or incb < 0), each routine starts at the end of the vector and
moves backward, as follows:
x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)
b(1-incb*(n-1)), b(1-incb*(n-2)), ..., b(1)

If incx = 0, x is a scalar multiplier.

CAUTIONS
Do not specify inca or incb as 0, because unpredictable results may occur.

SEE ALSO
FOLR(3S), FOLRP(3S) to solve the same recurrences as solved by FOLR2 and FOLR2P, but they overwrite
the a operand rather than producing a separate result vector
FOLRC(3S) to solve a first-order linear recurrence by using scalar multiplier
FOLRN(3S), FOLRNP(3S) to solve for only the last term of the same recurrences as solved by FOLR and
FOLRP
SOLR(3S), SOLR3(3S), SOLRN(3S) to solve various forms of second-order linear recurrence



FOLRC ( 3S ) FOLRC ( 3S )

NAME
FOLRC – Solves a first-order linear recurrence with a scalar multiplier

SYNOPSIS
CALL FOLRC (n, b, incb, a, inca, alpha)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
FOLRC solves first-order linear recurrences, as follows:
b(1) = a(1)
b(i) = a(i) + α*b(i-1)   for i = 2, 3, ..., n
This routine has the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 0, FOLRC returns without any computation.
b Real array of dimension 1+(n-1)*|incb|. (output)
Contains result vector.
incb Integer. (input)
Increment between recurrence elements of b.
a Real array of dimension 1+(n-1)*|inca|. (input)
Contains operand vector.
inca Integer. (input)
Increment between recurrence elements of a.
alpha Real. (input)
Scalar multiplier α.
The following is the Fortran equivalent of FOLRC (given for case inca = incb = 1):
B(1)=A(1)
DO 10 I=2,N
B(I)=A(I)+ALPHA*B(I-1)
10 CONTINUE
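In a higher-level language the same recurrence can be sketched as follows (a hypothetical illustration with unit increments, not a library routine):

```python
def folrc(a, alpha):
    # b(1) = a(1); b(i) = a(i) + alpha*b(i-1)   (unit increments assumed)
    b = []
    for i, ai in enumerate(a):
        b.append(ai if i == 0 else ai + alpha * b[-1])
    return b
```

With α = 1.0 this degenerates to a running sum, which is why RECPS(3S) is listed below as the partial-summation special case.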


NOTES
When working backward (inca < 0 or incb < 0), this routine starts at the end of the vector and moves
backward, as follows:
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)
b(1-incb*(n-1)), b(1-incb*(n-2)), ..., b(1)

CAUTIONS
Do not specify incb as 0, because unpredictable results may occur.

SEE ALSO
FOLR(3S), FOLRP(3S) to solve recurrences similar to that solved by FOLRC, but they require a vector of
multipliers rather than one scalar multiplier
FOLR2(3S), FOLR2P(3S) to solve the same recurrences as solved by FOLR and FOLRP, without overwriting
the a operand
FOLRN(3S), FOLRNP(3S) to solve for only the last term of the same recurrences as solved by FOLR and
FOLRP
RECPS(3S) to perform a partial summation operation (same as FOLRC with α = 1.0)
SOLR(3S), SOLR3(3S), SOLRN(3S) to solve various forms of second-order linear recurrence



FOLRN ( 3S ) FOLRN ( 3S )

NAME
FOLRN, FOLRNP – Solves for the last term of a first-order linear recurrence

SYNOPSIS
r = FOLRN (n, x, incx, a, inca)
r = FOLRNP (n, x, incx, a, inca)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
FOLRN solves for r, the last term of a first-order linear recurrence, as follows:
r ← a(1)
r ← a(i) - x(i)*r   for i = 2, 3, ..., n
FOLRNP solves for r, the last term of a first-order linear recurrence, as follows:
r ← a(1)
r ← a(i) + x(i)*r   for i = 2, 3, ..., n
These functions have the following arguments:
r Real. (output)
Value of the last term of the linear recurrence.
n Integer. (input)
Length of linear recurrence. If n ≤ 0, neither routine performs any computation.
x Real array of dimension 1+(n-1)*|incx|. (input)
Contains multiplier vector. The first element of x in the recurrence is arbitrary.
incx Integer. (input)
Increment between recurrence elements of x.
a Real array of dimension 1+(n-1)*|inca|. (input)
Contains operand vector.
inca Integer. (input)
Increment between recurrence elements of a.
The following is the Fortran equivalent of FOLRN (given for case incx = inca = 1):
R=A (1)
DO 10 I=2 ,N
R=A(I) -X( I)* R
10 CON TINUE


The following is the Fortran equivalent of FOLRNP (given for case incx = inca = 1):
R=A (1)
DO 10 I=2,N
R=A (I) +X(I)*R
10 CONTIN UE

NOTES
When working backward (incx < 0 or inca < 0), each routine starts at the end of the vector and moves
backward, as follows:
x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)

If incx = 0, x is a scalar multiplier.

CAUTIONS
Do not specify inca as 0, because unpredictable results may occur.

EXAMPLES
You can use FOLRNP to perform Horner’s rule, an efficient method for evaluation of polynomials.
Let p(x) = Σ (i = 0 to m) a(i)*x**(m-i), a polynomial of degree m.
Then, Horner's rule states that:
p(x) = (...((a(0)*x + a(1))*x + a(2))*x + ... + a(m))
Thus, the following is the Fortran equivalent to Horner's rule for evaluating p(x):
REAL A(0:M), PX, X
. . .
PX = A(0)
DO 10 I = 1, M
PX = PX * X + A(I)
10 CONTINUE

This is the same as the Fortran equivalent to FOLRNP, when x is a scalar (incx = 0); that is, the following is
also an equivalent to Horner’s rule for evaluating p(x):


REAL A(0:M), PX, X
. . .
PX = FOLRNP(M+1,X,0,A(0),1)
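The same Horner evaluation can be sketched outside Fortran. The following hypothetical helper mirrors the scalar-multiplier (incx = 0) use of FOLRNP, with coefficients ordered a(0), a(1), ..., a(m):

```python
def horner(coeffs, x):
    # Evaluate p(x) = a0*x**m + a1*x**(m-1) + ... + am by Horner's rule,
    # i.e., the FOLRNP recurrence r <- r*x + a(i) with a scalar multiplier.
    px = coeffs[0]
    for a in coeffs[1:]:
        px = px * x + a
    return px
```

Horner's rule needs only m multiplications and m additions, versus the naive evaluation's repeated exponentiation.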

SEE ALSO
FOLR(3S), FOLRP(3S) to solve for all terms (not just the last term) in the same recurrences as solved by
FOLRN and FOLRNP, overwriting the a operand with the results
FOLR2(3S), FOLR2P(3S) to solve for all terms in the same recurrences as solved by FOLRN and FOLRNP,
without overwriting the a operand
FOLRC(3S) to solve for all terms in a first-order linear recurrence by using scalar multiplier
SOLR(3S), SOLR3(3S), SOLRN(3S) to solve various forms of second-order linear recurrence



RECPP ( 3S ) RECPP ( 3S )

NAME
RECPP, RECPS – Solves a partial product or partial summation problem

SYNOPSIS
CALL RECPP (n, y, incy, x, incx)
CALL RECPS (n, y, incy, x, incx)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
RECPP solves a partial product problem, as follows:
y(1) ← x(1)
y(i) ← x(i)*y(i-1)   for i = 2, 3, ..., n
RECPS solves a partial summation problem, as follows:
y(1) ← x(1)
y(i) ← x(i) + y(i-1)   for i = 2, 3, ..., n
These routines have the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 0, neither routine performs any computation.
y Real array of dimension 1+(n-1)*|incy|. (output)
Contains recurrent operand vector. Array y receives the result.
incy Integer. (input)
Increment between recurrence elements of y.
x Real array of dimension 1+(n-1)*|incx|. (input)
Contains nonrecurrent operand vector.
incx Integer. (input)
Increment between recurrence elements of x.
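The partial product and partial summation recurrences above are running reductions, and can be sketched with a standard-library scan (a hypothetical illustration with unit increments, not a library routine):

```python
from itertools import accumulate
import operator

def recpp(x):
    # Partial products: y(1) = x(1); y(i) = x(i)*y(i-1)
    return list(accumulate(x, operator.mul))

def recps(x):
    # Partial sums: y(1) = x(1); y(i) = x(i) + y(i-1)
    return list(accumulate(x))
```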

NOTES
When working backward (incx < 0 or incy < 0), each routine starts at the end of the vector and moves
backward, as follows:


x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1)
y(1-incy*(n-1)), y(1-incy*(n-2)), ..., y(1)

CAUTIONS
Do not specify incy as 0, because unpredictable results may occur.



SDTSOL ( 3S ) SDTSOL ( 3S )

NAME
SDTSOL, CDTSOL – Solves a real-valued or complex-valued tridiagonal system with one right-hand side

SYNOPSIS
CALL SDTSOL (n, c, d, e, inct, b, incb)
CALL CDTSOL (n, c, d, e, inct, b, incb)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SDTSOL solves a real-valued tridiagonal system with one right-hand side by combination of
burn-at-both-ends and 3:1 cyclic reduction.
CDTSOL solves a complex-valued tridiagonal system with one right-hand side by combination of
burn-at-both-ends and 3:1 cyclic reduction.
These routines have the following arguments:
n Integer. (input)
Dimension of the tridiagonal matrix. If n < 1, these routines return without any computation.
c SDTSOL: Real array of dimension (1+(n-1)*inct). (input and output)
Lower off-diagonal of the real-valued tridiagonal matrix with c(1) = 0.0.
CDTSOL: Complex array of dimension (1+(n-1)*inct). (input and output)
Lower off-diagonal of the complex-valued tridiagonal matrix with c(1) = (0.0,0.0).
d SDTSOL: Real array of dimension (1+(n-1)*inct). (input and output)
Main diagonal of the real-valued tridiagonal matrix.
CDTSOL: Complex array of dimension (1+(n-1)*inct). (input and output)
Main diagonal of the complex-valued tridiagonal matrix.
e SDTSOL: Real array of dimension (1+(n-1)*inct). (input and output)
Upper off-diagonal of the real-valued tridiagonal matrix with e(1+(n-1)*inct) = 0.0.
CDTSOL: Complex array of dimension (1+(n-1)*inct). (input and output)
Upper off-diagonal of the complex-valued tridiagonal matrix with e(1+(n-1)*inct) = (0.0,0.0).
inct Integer. (input)
Increment between elements in each of the input vectors c, d, and e. inct must be positive.
Typically inct = 1, in which case, the elements of c are contiguous in memory, as are the elements
of d and e.
b SDTSOL: Real array of dimension (1+(n-1)*incb). (input and output)
CDTSOL: Complex array of dimension (1+(n-1)*incb). (input and output)


On entry, b contains the right-hand-side values. On exit, it contains the solution.


incb Integer. (input)
Increment between elements in each column of b. incb must be positive. Typically, incb = 1, in
which case, the elements in each row of b are contiguous in memory.

NOTES
A 3:1 cyclic reduction is used until the size of the system is reduced to 40. Then the reduced system is
solved directly using a burn-at-both-ends algorithm. The remaining values are obtained by backfilling.
When calling these routines, the elements c(1) and e(1+(n-1)*inct) must be allocated and set equal to
0.0. See the EXAMPLES section.
These routines are appropriate only for tridiagonal matrices that require no pivoting.

EXAMPLES
The following example shows how to set up the arguments c, d, and e, given the tridiagonal matrix T.
Let T be the tridiagonal matrix:
11 12 0 0 0 
 
21 22 23 0 0 
T =  0 32 33 34 0 
 0 0 43 44 45 
 
î 0 0 0 54 55 
Then to pass T to SDTSOL (with inct = 1), set the following:

        |  0 |        | 11 |        | 12 |
        | 21 |        | 22 |        | 23 |
    c = | 32 |    d = | 33 |    e = | 34 |
        | 43 |        | 44 |        | 45 |
        | 54 |        | 55 |        |  0 |
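The diagonal layout above can be checked with a short sketch. The following Python is illustrative only; `pack_tridiagonal` and `solve_tridiagonal` are invented names, the solver is a plain sequential elimination without pivoting rather than the cyclic-reduction algorithm the library uses, and contiguous storage (inct = 1) is assumed.

```python
# Illustrative sketch only -- not the Cray routine. Packs a tridiagonal
# matrix T into the c/d/e layout shown above (c(1) = 0.0, e(n) = 0.0,
# assuming inct = 1) and solves T*x = b by plain sequential elimination
# without pivoting, whereas the library uses 3:1 cyclic reduction.

def pack_tridiagonal(T):
    """Return (c, d, e): lower, main, and upper diagonals of T."""
    n = len(T)
    c = [0.0] + [T[i][i - 1] for i in range(1, n)]    # c(1) must be 0.0
    d = [T[i][i] for i in range(n)]
    e = [T[i][i + 1] for i in range(n - 1)] + [0.0]   # e(n) must be 0.0
    return c, d, e

def solve_tridiagonal(c, d, e, b):
    """Forward elimination and back substitution on the three diagonals."""
    n = len(d)
    d, b = d[:], b[:]                 # keep the caller's arrays intact
    for i in range(1, n):
        m = c[i] / d[i - 1]
        d[i] -= m * e[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[n - 1] = b[n - 1] / d[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - e[i] * x[i + 1]) / d[i]
    return x

T = [[11, 12,  0,  0,  0],
     [21, 22, 23,  0,  0],
     [ 0, 32, 33, 34,  0],
     [ 0,  0, 43, 44, 45],
     [ 0,  0,  0, 54, 55]]
c, d, e = pack_tridiagonal(T)
# c = [0.0, 21, 32, 43, 54], d = [11, 22, 33, 44, 55], e = [12, 23, 34, 45, 0.0]
```

The same layout, shifted to Fortran 1-based indexing, is what the c, d, e, and b arguments expect.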



SDTTRF ( 3S ) SDTTRF ( 3S )

NAME
SDTTRF, CDTTRF – Factors a real-valued or complex-valued tridiagonal system

SYNOPSIS
CALL SDTTRF (n, c, d, e, inct, work, lwork, info)
CALL CDTTRF (n, c, d, e, inct, work, lwork, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SDTTRF factors a real-valued tridiagonal system by a combination of burn-at-both-ends and 3:1 cyclic
reduction.
CDTTRF factors a complex-valued tridiagonal system by a combination of burn-at-both-ends and 3:1
cyclic reduction.
These routines have the following arguments:
n Integer. (input)
Dimension of the tridiagonal matrix. If n < 1, these routines return without any computation.
c SDTTRF: Real array of dimension (1+(n-1)*inct). (input and output)
Lower off-diagonal of the real-valued tridiagonal matrix with c(1) = 0.0.
CDTTRF: Complex array of dimension (1+(n-1)*inct). (input and output)
Lower off-diagonal of the complex-valued tridiagonal matrix with c(1) = (0.0,0.0).
d SDTTRF: Real array of dimension (1+(n-1)*inct). (input and output)
Main diagonal of the real-valued tridiagonal matrix.
CDTTRF: Complex array of dimension (1+(n-1)*inct). (input and output)
Main diagonal of the complex-valued tridiagonal matrix.
e SDTTRF: Real array of dimension (1+(n-1)*inct). (input and output)
Upper off-diagonal of the real-valued tridiagonal matrix with e(1+(n-1)*inct) = 0.0.
CDTTRF: Complex array of dimension (1+(n-1)*inct). (input and output)
Upper off-diagonal of the complex-valued tridiagonal matrix with e(1+(n-1)*inct) = (0.0,0.0).
inct Integer. (input)
Increment between elements in each of the input vectors c, d, and e. inct must be positive.
Typically, inct = 1, in which case the elements of c are contiguous in memory, as are the
elements of d and e.
work SDTTRF: Real array of dimension (lwork). (output)
Storage for intermediate results needed for subsequent calls to SDTTRS. This space must not be
modified between calls to this routine and SDTTRS.


CDTTRF: Complex array of dimension (lwork). (output)


Storage for intermediate results needed for subsequent calls to CDTTRS. This space must not be
modified between calls to this routine and CDTTRS.
lwork Integer. (input)
Length of work. lwork must be greater than or equal to 2n. The value of lwork must not
change between calls to this routine and SDTTRS or CDTTRS.
info Integer. (output)
On exit, info has one of the following values:
= 0 No error detected.
= – 1 lwork is too small ( < 2n ).

NOTES
A 3:1 cyclic reduction is used until the size of the system is reduced to 40. Then the reduced system is
factored directly using a burn-at-both-ends algorithm. You should use these routines with SDTTRS or
CDTTRS, either of which solves for one right-hand side given the factorization computed in SDTTRF or
CDTTRF, respectively.
When calling these routines, the elements c(1) and e(1+(n-1)*inct) must be allocated and set equal to 0.
See the EXAMPLES section.
These routines are appropriate only for tridiagonal matrices that require no pivoting.
CDTTRF only: Because this routine is for complex data, the amount of memory needed is 4n words, which
is 2n complex elements.

EXAMPLES
The following example shows how to set up the arguments c, d, and e, given the tridiagonal matrix T.
Let T be the tridiagonal matrix:
11 12 0 0 0 
 
21 22 23 0 0 
T= 0 32 33 34 0 
0 0 43 44 45 
 
î 0 0 0 54 55 
Then to pass T to SDTTRF (with inct = 1), set the following:

        |  0 |        | 11 |        | 12 |
        | 21 |        | 22 |        | 23 |
    c = | 32 |    d = | 33 |    e = | 34 |
        | 43 |        | 44 |        | 45 |
        | 54 |        | 55 |        |  0 |

004– 2081– 002 521


SDTTRF ( 3S ) SDTTRF ( 3S )

SEE ALSO
SDTSOL(3S) for a description of SDTSOL and CDTSOL, which factor and solve tridiagonal systems
SDTTRS(3S) for a description of SDTTRS(3S) and CDTTRS(3S), which solve tridiagonal systems based on
the factorization computed by SDTTRF or CDTTRF, respectively

522 004– 2081– 002


SDTTRS ( 3S ) SDTTRS ( 3S )

NAME
SDTTRS, CDTTRS – Solves a real-valued or complex-valued tridiagonal system with one right-hand side,
using its factorization as computed by SDTTRF(3S) or CDTTRF(3S)

SYNOPSIS
CALL SDTTRS (n, c, d, e, inct, b, incb, work, lwork, info)
CALL CDTTRS (n, c, d, e, inct, b, incb, work, lwork, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SDTTRS solves a real-valued tridiagonal system with one right-hand side by a combination of
burn-at-both-ends and 3:1 cyclic reduction. SDTTRF(3S) must be called first to factor the matrix.
CDTTRS solves a complex-valued tridiagonal system with one right-hand side by a combination of
burn-at-both-ends and 3:1 cyclic reduction. CDTTRF(3S) must be called first to factor the matrix.
These routines have the following arguments:
n Integer. (input)
Dimension of the tridiagonal matrix. If n < 1, these routines return without any computation.
c SDTTRS: Real array of dimension (1+(n-1)*inct). (input)
Factored lower off-diagonal of the real-valued tridiagonal matrix as computed by SDTTRF.
CDTTRS: Complex array of dimension (1+(n-1)*inct). (input)
Factored lower off-diagonal of the complex-valued tridiagonal matrix as computed by CDTTRF.
d SDTTRS: Real array of dimension (1+(n-1)*inct). (input)
Factored main diagonal of the real-valued tridiagonal matrix as computed by SDTTRF.
CDTTRS: Complex array of dimension (1+(n-1)*inct). (input)
Factored main diagonal of the complex-valued tridiagonal matrix as computed by CDTTRF.
e SDTTRS: Real array of dimension (1+(n-1)*inct). (input)
Factored upper off-diagonal of the real-valued tridiagonal matrix as computed by SDTTRF.
CDTTRS: Complex array of dimension (1+(n-1)*inct). (input)
Factored upper off-diagonal of the complex-valued tridiagonal matrix as computed by CDTTRF.
inct Integer. (input)
Increment between elements in each of the input vectors c, d, and e. inct must be positive.
Typically, inct = 1, in which case the elements of c are contiguous in memory, as are the
elements of d and e.


b SDTTRS: Real array of dimension (1+(n-1)*incb). (input and output)
CDTTRS: Complex array of dimension (1+(n-1)*incb). (input and output)
On entry, b contains the right-hand-side values. On exit, it contains the solution.
incb Integer. (input)
Increment between elements in each column of b. incb must be positive. Typically, incb = 1, in
which case, the elements in each row of b are contiguous in memory.
work SDTTRS: Real array of dimension (lwork). (input)
Storage for intermediate results computed by SDTTRF for subsequent calls to SDTTRS. This
space must not be modified between calls to SDTTRF and this routine.
CDTTRS: Complex array of dimension (lwork). (input)
Storage for intermediate results computed by CDTTRF for subsequent calls to CDTTRS. This
space must not be modified between calls to CDTTRF and this routine.
lwork Integer. (input)
Length of work. lwork must be greater than or equal to 2n. The value of lwork must not
change between calls to SDTTRF or CDTTRF and this routine.
info Integer. (output)
On exit, info has one of the following values:
= 0 No error detected.
= – 1 lwork is too small ( < 2n ).

NOTES
A 3:1 cyclic reduction is used until the size of the system is reduced to 40. Then the reduced system is
solved directly using a burn-at-both-ends algorithm. You should use these routines after factoring the
tridiagonal matrix with SDTTRF or CDTTRF.
CDTTRS only: Because this routine is for complex data, the amount of memory needed is 4n words, which
is 2n complex elements.

EXAMPLES
The following example shows how to set up the arguments c, d, and e, given the tridiagonal matrix T.
Let T be the tridiagonal matrix:
11 12 0 0 0 
 
21 22 23 0 0 
T= 0 32 33 34 0 
0 0 43 44 45 
 
î 0 0 0 54 55 


Then to pass T to SDTTRS (with inct = 1), set the following:

        |  0 |        | 11 |        | 12 |
        | 21 |        | 22 |        | 23 |
    c = | 32 |    d = | 33 |    e = | 34 |
        | 43 |        | 44 |        | 45 |
        | 54 |        | 55 |        |  0 |
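The factor/solve split that SDTTRF and SDTTRS implement can be modeled with a sequential sketch. The helper names `dttrf` and `dttrs` below are hypothetical stand-ins, and the code is plain tridiagonal LU without pivoting, not the library's burn-at-both-ends/cyclic-reduction scheme; it only illustrates why the factored c, d, and e (and work, in the real routines) must not be modified between the factor call and the solve call.

```python
# Hypothetical sequential model of a factor/solve split -- dttrf and dttrs
# are invented names. "Factor" overwrites c with multipliers and d with
# pivots; a later "solve" reuses them for any number of right-hand sides,
# one at a time.

def dttrf(c, d, e):
    """Factor in place: c[i] becomes the multiplier, d[i] the pivot."""
    for i in range(1, len(d)):
        c[i] = c[i] / d[i - 1]
        d[i] = d[i] - c[i] * e[i - 1]

def dttrs(c, d, e, b):
    """Solve in place using the factorization left in c, d, e by dttrf."""
    n = len(d)
    for i in range(1, n):               # forward substitution
        b[i] -= c[i] * b[i - 1]
    b[n - 1] /= d[n - 1]
    for i in range(n - 2, -1, -1):      # back substitution
        b[i] = (b[i] - e[i] * b[i + 1]) / d[i]
```

A single `dttrf` call followed by repeated `dttrs` calls mirrors the intended SDTTRF-then-SDTTRS usage pattern.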

SEE ALSO
SDTSOL(3S) for a description of SDTSOL and CDTSOL, which factor and solve tridiagonal systems
SDTTRF(3S) for a description of SDTTRF(3S) and CDTTRF(3S), which compute the factorization used by
SDTTRS or CDTTRS, respectively



SOLR ( 3S ) SOLR ( 3S )

NAME
SOLR – Solves a second-order linear recurrence

SYNOPSIS
CALL SOLR (n, x, incx, y, incy, a, inca)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
SOLR solves second-order linear recurrences, as in the following equation:
a(i) ← x(i-2) a(i-1) + y(i-2) a(i-2)   for i = 3, ..., n
a(1) and a(2) are input to this routine, and a(3), a(4), ..., a(n) are output.
This routine has the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 2, SOLR returns without any computation.
x Real array of dimension (1+(n-1)*|incx|). (input)
Contains vector of multipliers for the first-order term of the recurrence.
If incx > 0, x(incx*(n-2)+1) and x(incx*(n-1)+1) are arbitrary.
If incx < 0, x(1) and x(1-incx) are arbitrary.
If incx = 0, x is a scalar multiplier.
incx Integer. (input)
Increment between elements of x.
y Real array of dimension (1+(n-1)*|incy|). (input)
Contains vector of multipliers for the second-order term of the recurrence.
If incy > 0, y(incy*(n-2)+1) and y(incy*(n-1)+1) are arbitrary.
If incy < 0, y(1) and y(1-incy) are arbitrary.
If incy = 0, y is a scalar multiplier.
incy Integer. (input)
Increment between elements of y.
a Real array of dimension (1+(n-1)*|inca|). (input and output)
Contains result vector.
inca Integer. (input)
Increment between elements of a.


The following is the Fortran equivalent of SOLR (given for the case incx = incy = inca = 1):
      DO 10 I=3,N
         A(I)=X(I-2)*A(I-1)+Y(I-2)*A(I-2)
   10 CONTINUE
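As a concrete illustration of the recurrence (not the vectorized library code), the loop above translates directly to Python for the contiguous case; with all multipliers equal to 1 and a(1) = a(2) = 1, the recurrence generates the Fibonacci numbers, which makes a convenient check.

```python
# Illustrative 0-based translation of the Fortran loop above, for the
# contiguous case incx = incy = inca = 1 (solr is an invented name, not
# the library routine).

def solr(n, x, y, a):
    """Equivalent of DO 10 I=3,N: a[i] = x[i-2]*a[i-1] + y[i-2]*a[i-2]."""
    for i in range(2, n):
        a[i] = x[i - 2] * a[i - 1] + y[i - 2] * a[i - 2]
    return a

fib = solr(8, [1] * 8, [1] * 8, [1, 1, 0, 0, 0, 0, 0, 0])
# fib is now [1, 1, 2, 3, 5, 8, 13, 21]
```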

NOTES
When working backward (incx < 0, incy < 0, or inca < 0), each routine starts at the end of the vector and
moves backward, as follows:
x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1-2*incx)
y(1-incy*(n-1)), y(1-incy*(n-2)), ..., y(1-2*incy)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)

If incx = 0 or incy = 0, x or y (respectively) is a scalar multiplier.

CAUTIONS
Do not specify inca as 0, because unpredictable results may occur.

SEE ALSO
FOLR(3S), FOLR2(3S), FOLR2P(3S), FOLRC(3S), FOLRN(3S), FOLRNP(3S), FOLRP(3S) to solve various
forms of first-order linear recurrence
SOLR3(3S) to solve a three-term, second-order linear recurrence
SOLRN(3S) to solve the same recurrence as SOLR, but SOLRN calculates only the last term



SOLR3 ( 3S ) SOLR3 ( 3S )

NAME
SOLR3 – Solves a second-order linear recurrence for three terms

SYNOPSIS
CALL SOLR3 (n, x, incx, y, incy, a, inca)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
SOLR3 solves second-order linear recurrences of three terms, as in the following equation:
a(i) ← a(i) + x(i-2) a(i-1) + y(i-2) a(i-2)   for i = 3, ..., n
All values of a are input to this routine, and a(3), a(4), ..., a(n) are output.
This routine has the following arguments:
n Integer. (input)
Length of linear recurrence. If n ≤ 2, SOLR3 returns without any computation.
x Real array of dimension (1+(n-1)*|incx|). (input)
Contains vector of multipliers for the first-order term of the recurrence.
If incx > 0, x(incx*(n-2)+1) and x(incx*(n-1)+1) are arbitrary.
If incx < 0, x(1) and x(1-incx) are arbitrary.
If incx = 0, x is a scalar multiplier.
incx Integer. (input)
Increment between elements of x.
y Real array of dimension (1+(n-1)*|incy|). (input)
Contains vector of multipliers for the second-order term of the recurrence.
If incy > 0, y(incy*(n-2)+1) and y(incy*(n-1)+1) are arbitrary.
If incy < 0, y(1) and y(1-incy) are arbitrary.
If incy = 0, y is a scalar multiplier.
incy Integer. (input)
Increment between elements of y.
a Real array of dimension (1+(n-1)*|inca|). (input and output)
Contains result vector.
inca Integer. (input)
Increment between elements of a.


The following is the Fortran equivalent of SOLR3 (given for the case incx = incy = inca = 1):
      DO 10 I=3,N
         A(I)=A(I)+X(I-2)*A(I-1)+Y(I-2)*A(I-2)
   10 CONTINUE

NOTES
When working backward (incx < 0, incy < 0, or inca < 0), each routine starts at the end of the vector and
moves backward, as follows:
x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1-incx*2)
y(1-incy*(n-1)), y(1-incy*(n-2)), ..., y(1-incy*2)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)
If incx = 0 or incy = 0, x or y (respectively) is a scalar multiplier.

CAUTIONS
Do not specify inca as 0, because unpredictable results may occur.

EXAMPLES
You can use SOLR3 to solve a lower triangular two-subdiagonal system of linear equations La = b. That is,
because
         |  1                                        |  |a(1)|   |b(1)|
         | e(1)   1                                  |  |a(2)|   |b(2)|
         | f(1)  e(2)   1                            |  |a(3)|   |b(3)|
    La = |  0    f(2)  e(3)   1                      |  |a(4)| = |b(4)| = b
         |   .     .     .     .    .                |  | .  |   | .  |
         |  0     0     0   ...  f(n-2)  e(n-1)  1   |  |a(n)|   |b(n)|

can be written as:

a(1) = b(1)
a(2) = b(2) - e(1) a(1)
a(i) = b(i) - e(i-1) a(i-1) - f(i-2) a(i-2)   for i = 3, ..., n
To solve this problem, use the following Fortran code:


      DO 10 I=1,N-1
   10 E(I)=-E(I)
      DO 20 I=1,N-2
   20 F(I)=-F(I)
      B(2)=B(2)+E(1)*B(1)
      CALL SOLR3(N,E(2),1,F(1),1,B(1),1)

where the solution vector a is returned in array B.
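The steps above can be sketched in Python (illustrative only; `solve_lower_two_subdiag` and `solr3_style` are invented names, not library calls): negating e and f, folding the first equation into b(2), and running the SOLR3 recurrence gives the same result as a direct forward substitution for the unit lower triangular system.

```python
# Illustrative sketch of the Fortran steps above: negate E and F, fold the
# first equation into B(2), then run the SOLR3 recurrence. The result
# matches a direct forward substitution for L*a = b.

def solve_lower_two_subdiag(e, f, b):
    """Direct solve: a(i) = b(i) - e(i-1)a(i-1) - f(i-2)a(i-2)."""
    a = b[:]
    for i in range(1, len(b)):
        a[i] -= e[i - 1] * a[i - 1]
        if i >= 2:
            a[i] -= f[i - 2] * a[i - 2]
    return a

def solr3_style(e, f, b):
    """The same solve phrased as the negate-and-recur steps of the example."""
    x = [-v for v in e]                  # DO 10: E(I) = -E(I)
    y = [-v for v in f]                  # DO 20: F(I) = -F(I)
    a = b[:]
    a[1] += x[0] * a[0]                  # B(2) = B(2) + E(1)*B(1)
    for i in range(2, len(b)):           # SOLR3 body with X = E(2:), Y = F(1:)
        a[i] += x[i - 1] * a[i - 1] + y[i - 2] * a[i - 2]
    return a
```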

SEE ALSO
FOLR(3S), FOLR2(3S), FOLR2P(3S), FOLRC(3S), FOLRN(3S), FOLRNP(3S), FOLRP(3S) to solve various
forms of first-order linear recurrence
SOLR(3S) to solve a two-term second-order linear recurrence
SOLRN(3S) to solve the same recurrence as SOLR, but SOLRN calculates only the last term



SOLRN ( 3S ) SOLRN ( 3S )

NAME
SOLRN – Solves a second-order linear recurrence for only the last term

SYNOPSIS
r = SOLRN (n, x, incx, y, incy, a, inca)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
SOLRN solves for r, the last term in the following second-order linear recurrence:
a(i) ← x(i-2) a(i-1) + y(i-2) a(i-2)   for i = 3, 4, ..., n

r ← a(n)
Only a(1) and a(2) are used as input. The remaining elements of a are workspace that is overwritten on output.
This function has the following arguments:
r Real. (output)
Value of the last term of the linear recurrence.
If n ≤ 0, r is set to 0.
If n = 1, r is set to the first element of a.
If n = 2, r is set to the second element of a.
n Integer. (input)
Length of linear recurrence.
x Real array of dimension (1+(n-1)*|incx|). (input)
Contains vector of multipliers for the first-order term of the recurrence.
If incx > 0, x(incx*(n-2)+1) and x(incx*(n-1)+1) are arbitrary.
If incx < 0, x(1) and x(1-incx) are arbitrary.
If incx = 0, x is a scalar multiplier.
incx Integer. (input)
Increment between elements of x.
y Real array of dimension (1+(n-1)*|incy|). (input)
Contains vector of multipliers for the second-order term of the recurrence.
If incy > 0, y(incy*(n-2)+1) and y(incy*(n-1)+1) are arbitrary.
If incy < 0, y(1) and y(1-incy) are arbitrary.
If incy = 0, y is a scalar multiplier.
incy Integer. (input)
Increment between elements of y.


a Real array of dimension (1+(n-1)*|inca|). (input and output)


Contains vector of starting terms. In the course of calculating the result r, a is overwritten with
scratch work.
inca Integer. (input)
Increment between elements of a.
The following is the Fortran equivalent of SOLRN (given for the case incx = incy = inca = 1):
      DO 10 I=3,N
         A(I)=X(I-2)*A(I-1)+Y(I-2)*A(I-2)
   10 CONTINUE
      RESULT=A(N)

For SOLRN, even though only the last term is computed, array a (A in this Fortran code) is used to hold
intermediate results and, therefore, it is overwritten.

NOTES
When working backward (incx < 0, incy < 0, or inca < 0), each routine starts at the end of the vector and
moves backward, as follows:
x(1-incx*(n-1)), x(1-incx*(n-2)), ..., x(1-2*incx)
y(1-incy*(n-1)), y(1-incy*(n-2)), ..., y(1-2*incy)
a(1-inca*(n-1)), a(1-inca*(n-2)), ..., a(1)
If incx = 0 or incy = 0, x or y (respectively) is a scalar multiplier.

CAUTIONS
Do not specify inca as 0, because unpredictable results may occur.

EXAMPLES
SOLRN might be used to find r(2) of the calculation

    | x(1)  y(1) |  | x(2)  y(2) |        | x(n-2)  y(n-2) |  | a(2) |   | r(2) |
    |  1     0   |  |  1     0   |  . . . |   1       0    |  | a(1) | = | r(1) |
with the following call:
      R2 = SOLRN(N,X,1,Y,1,A,1)

The Fortran equivalent for the example follows:


      R1=A(1)
      R2=A(2)
      DO 10 I=1,N-2
         TEMP=R2
         R2=X(I)*R2+Y(I)*R1
         R1=TEMP
   10 CONTINUE
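The equivalence between the recurrence form and the matrix-product form can be checked with a small sketch (invented names, not library code). Each recurrence step applies one companion matrix [[x(i), y(i)], [1, 0]] to the pair of the two most recent terms, so both formulations produce the same last term.

```python
# Illustrative check that the recurrence and the companion-matrix products
# compute the same last term (solrn and solrn_by_matrices are invented
# names, not the library routine).

def solrn(n, x, y, a):
    """0-based equivalent of the Fortran loop shown above."""
    r1, r2 = a[0], a[1]                       # R1=A(1), R2=A(2)
    for i in range(n - 2):
        r1, r2 = r2, x[i] * r2 + y[i] * r1
    return r2

def solrn_by_matrices(n, x, y, a):
    """Same result by applying [[x(i), y(i)], [1, 0]] to (a(2), a(1))."""
    v = (a[1], a[0])
    for i in range(n - 2):
        v = (x[i] * v[0] + y[i] * v[1], v[0])
    return v[0]
```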

SEE ALSO
FOLR(3S), FOLR2(3S), FOLR2P(3S), FOLRC(3S), FOLRN(3S), FOLRNP(3S), FOLRP(3S) to solve various
forms of first-order linear recurrence
SOLR(3S) to solve the same recurrence as SOLRN, but it calculates all terms, not just the last term
SOLR3(3S) to solve a three-term second-order linear recurrence

INTRO_BLACS ( 3S ) INTRO_BLACS ( 3S )

NAME
INTRO_BLACS – Introduction to Basic Linear Algebra Communication Subprograms

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
The Basic Linear Algebra Communication Subprograms (BLACS) is a package of routines for UNICOS/mk
systems that provides the same functionality for message-passing linear algebra communication as the Basic
Linear Algebra Subprograms (BLAS) provide for linear algebra computation. With these two packages,
software for dense linear algebra on UNICOS/mk systems can use calls to the BLAS for computation and
calls to the BLACS for communication.
The BLACS consist of communication primitive routines, global reduction routines, and several support
routines.
The current version of the BLACS is compatible with the version last released by the ScaLAPACK group at
the University of Tennessee. Arrays passed to the BLACS routines must not be dynamically allocated from
the heap.
Communication Primitives
The communication primitives send a matrix to another processor or receive a matrix from another
processor; if a processor has data to be broadcast to all or a subset of processors, a broadcast communication
primitive must be used to send or receive the data. Any processor involved in a send or receive operation
must have the same amount of available matrix space.
The communication primitives can work on matrices (as indicated by the m, n, and lda arguments to the
routines) of data types of integer, real, or complex. The user can specify that only a portion of the matrix (a
trapezoidal matrix) be referenced in the operation. The uplo argument specifies whether the upper or lower
trapezoid should be used; the diag argument specifies whether the matrix is a unit trapezoidal matrix or a non-unit
trapezoidal matrix.
When using the scope argument for the BLACS routines, operations can be expressed in terms of all
processors, a row of processors, or a column of processors. All processors indicated by the scope argument
will be involved in the operation being performed, even if the processor does not have data to contribute or
does not need the data being communicated.
When broadcast operations are involved, a communication pattern must be selected. The top argument
denotes the communication topology for a communication primitive or global operation.
The following table describes the available communication primitives, the routine names, and the man page
where the primitive is described:


Description                                                                          Routine name   Man page

Sends an integer rectangular matrix to another processor                             IGESD2D        IGESD2D
Sends a real rectangular matrix to another processor                                 SGESD2D        IGESD2D
Sends a complex rectangular matrix to another processor                              CGESD2D        IGESD2D
Receives an integer rectangular matrix from another processor                        IGERV2D        IGERV2D
Receives a real rectangular matrix from another processor                            SGERV2D        IGERV2D
Receives a complex rectangular matrix from another processor                         CGERV2D        IGERV2D
Broadcasts an integer rectangular matrix to all or a subset of processors            IGEBS2D        IGEBS2D
Broadcasts a real rectangular matrix to all or a subset of processors                SGEBS2D        IGEBS2D
Broadcasts a complex rectangular matrix to all or a subset of processors             CGEBS2D        IGEBS2D
Receives a broadcast integer rectangular matrix from all or a subset of processors   IGEBR2D        IGEBR2D
Receives a real rectangular matrix from all or a subset of processors                SGEBR2D        IGEBR2D
Receives a complex rectangular matrix from all or a subset of processors             CGEBR2D        IGEBR2D
Sends an integer trapezoidal matrix to another processor                             ITRSD2D        ITRSD2D
Sends a real trapezoidal matrix to another processor                                 STRSD2D        ITRSD2D
Sends a complex trapezoidal matrix to another processor                              CTRSD2D        ITRSD2D
Receives an integer trapezoidal matrix from another processor                        ITRRV2D        ITRRV2D
Receives a real trapezoidal matrix from another processor                            STRRV2D        ITRRV2D
Receives a complex trapezoidal matrix from another processor                         CTRRV2D        ITRRV2D
Broadcasts an integer trapezoidal matrix to all or a subset of processors            ITRBS2D        ITRBS2D
Broadcasts a real trapezoidal matrix to all or a subset of processors                STRBS2D        ITRBS2D
Broadcasts a complex trapezoidal matrix to all or a subset of processors             CTRBS2D        ITRBS2D
Receives an integer trapezoidal matrix from all or a subset of processors            ITRBR2D        ITRBR2D
Receives a real trapezoidal matrix from all or a subset of processors                STRBR2D        ITRBR2D
Receives a complex trapezoidal matrix from all or a subset of processors             CTRBR2D        ITRBR2D

Global Reduction Routines


The global reduction routines perform element-wise operations on rectangular matrices. These operations
include summations, maximum absolute values, and minimum absolute values. For an operation to work
properly, all processors indicated by the scope argument must call the given routine.
The hypercube topology is the only supported topology for global primitives. Using this topology, all
processors in the scope of the operation get the information.


The following table describes the available global reduction routines, the routine names, and the man page
name where the primitive is described:

Description                                                            Routine name   Man page

Performs summations on specified parts of an integer matrix            IGSUM2D        IGSUM2D
Performs summations on specified parts of a real matrix                SGSUM2D        IGSUM2D
Performs summations on specified parts of a complex matrix             CGSUM2D        IGSUM2D
Finds maximum absolute value of specified parts of an integer matrix   IGAMX2D        IGAMX2D
Finds maximum absolute value of specified parts of a real matrix       SGAMX2D        IGAMX2D
Finds maximum absolute value of specified parts of a complex matrix    CGAMX2D        IGAMX2D
Finds minimum absolute value of specified parts of an integer matrix   IGAMN2D        IGAMN2D
Finds minimum absolute value of specified parts of a real matrix       SGAMN2D        IGAMN2D
Finds minimum absolute value of specified parts of a complex matrix    CGAMN2D        IGAMN2D

Topologies
Different communication topologies can be used to optimize performance, and several factors determine
which topology is best. For example, a ring topology is often preferred when one processor's time is
more valuable than the others'; a minimum spanning tree can be used if all processors need the
information as quickly as possible. The following topologies are supported on UNICOS/mk systems:
• Unidirectional ring. Using the unidirectional ring topology, the source processor issues one broadcast,
and each processor then receives and forwards the message. There are two types of unidirectional rings:
the increasing ring topology and the decreasing ring topology. These are "quiet" topologies (only one
processor is communicating at a time).
• Hypercube or minimum spanning tree. Hypercube broadcasts follow the physical connection of the
system; these are most useful when distributing information to all processors is more important than
saving processor time. In addition, hypercube broadcasts are noisier, because several processors are
sending data simultaneously.
Support Routines
The BLACS package contains several routines that are not directly related to linear algebra processing. These
routines are used to compute grid coordinates, to initialize routines, and to return information about
processors.
The following table describes the available support routines, the routine names, and the man page name
where the routine is described:


Description                                                            Routine name     Man page

Initializes counters, variables, and so on for BLACS routines          BLACS_GRIDINIT   BLACS_GRIDINIT
Initializes processors                                                 BLACS_GRIDMAP    BLACS_GRIDMAP
Returns information about processor grid                               BLACS_GRIDINFO   BLACS_GRIDINFO
Returns the processor element number for specified coordinates         BLACS_PNUM       BLACS_PNUM
Returns the processor's number                                         MYNODE           MYNODE
Computes processor grid coordinates                                    BLACS_PCOORD     BLACS_PCOORD
Stops execution until all specified processors have called a routine   BLACS_BARRIER    BLACS_BARRIER

Context Argument
This release of the BLACS adds the capability for the BLACS routines to communicate over any of
several coexisting grids, or contexts. Each grid (context) is identified by an integer called a context
handle. The context handle is output by BLACS_GRIDINIT upon the creation of the grid.

SEE ALSO
Dongarra, Jack J. and Robert A. van de Geijn, "Two Dimensional Basic Linear Algebra Communication
Subprograms," Technical Report CS-91-138, University of Tennessee, October 1991.



BLACS_BARRIER ( 3S ) BLACS_BARRIER ( 3S )

NAME
BLACS_BARRIER – Stops execution until all specified processes have called a routine

SYNOPSIS
CALL BLACS_BARRIER (icntxt, scope)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_BARRIER stops execution until all specified processes have called a routine.
This routine has the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT.
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_EXIT ( 3S ) BLACS_EXIT ( 3S )

NAME
BLACS_EXIT – Frees all existing grids

SYNOPSIS
CALL BLACS_EXIT()

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_EXIT frees all the grids that have been created in the course of a user’s program. The call frees
internal buffer space that was allocated when the different grids were created.

SEE ALSO
BLACS_GRIDEXIT(3S), BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_GRIDEXIT ( 3S ) BLACS_GRIDEXIT ( 3S )

NAME
BLACS_GRIDEXIT – Frees a grid

SYNOPSIS
CALL BLACS_GRIDEXIT(icntxt)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_GRIDEXIT frees a grid that has been created by a call to BLACS_GRIDINIT(3S). The call frees
internal buffer space that has been allocated upon the creation of the grid.
This routine has the following argument:
icntxt Integer. (input)
The context handle identifying the grid returned by BLACS_GRIDINIT(3S) upon the creation
of the grid.

NOTES
If a call to a BLACS routine is made after a call to BLACS_GRIDEXIT with the same context handle, the
program will abort.

SEE ALSO
BLACS_EXIT(3S), BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_GRIDINFO ( 3S ) BLACS_GRIDINFO ( 3S )

NAME
BLACS_GRIDINFO – Returns information about the two-dimensional processor grid

SYNOPSIS
CALL BLACS_GRIDINFO (icntxt, nprow, npcol, myrow, mycol)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_GRIDINFO returns information about the processor grid, such as: the number of processor rows,
the number of processor columns, and the grid coordinates of the calling processor.
This routine has the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S). This argument must be passed
but is currently ignored internally.
nprow Integer. (output)
The number of processor rows.
npcol Integer. (output)
The number of processor columns.
myrow Integer. (output)
Row coordinate of processor.
mycol Integer. (output)
Column coordinate of processor.

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_GRIDINIT ( 3S ) BLACS_GRIDINIT ( 3S )

NAME
BLACS_GRIDINIT – Initializes counters, variables, and so on, for the BLACS routines

SYNOPSIS
CALL BLACS_GRIDINIT (icntxt, order, nprow, npcol)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_GRIDINIT initializes an nprow-by-npcol grid of processors in a row-major or column-major fashion
and assigns grid coordinates to each processor. This routine must be called before any other BLACS
routine, ScaLAPACK routine, BLAS_S routine, or parallel two-dimensional FFT routine. The arguments
should be the same on all nodes.
This routine has the following arguments:
icntxt Integer. (output)
Context handle identifying the grid being initialized.
order Character*1. (input)
Specifies whether the grid of processors will be initialized in row-major or col-major order. If
the grid is to match the distribution of a SHARED array, the order should be c.
order = R or r: row-major order
order = C or c: col-major order
nprow Integer. (input)
Indicates the number of processor rows for the processor grid.
npcol Integer. (input)
Indicates the number of processor columns for the processor grid.

SEE ALSO
INTRO_BLACS(3S)



BLACS_GRIDMAP ( 3S ) BLACS_GRIDMAP ( 3S )

NAME
BLACS_GRIDMAP – Initializes a grid of processors from a user-supplied map

SYNOPSIS
CALL BLACS_GRIDMAP (icntxt, gridmap, ld, nprow, npcol)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_GRIDMAP initializes an nprow-by-npcol grid of processors in the image of the (input) array gridmap.
This routine can be used as an alternative to BLACS_GRIDINIT in cases where the user’s application
requires a mapping of the processors to the grid that is different from those implemented in
BLACS_GRIDINIT.
This routine has the following arguments:
icntxt Integer. (output)
The context handle identifying the grid being initialized.
gridmap Integer array of dimension (ld, npcol). (input)
Array specifying the map of the processors to the grid. gridmap(i, j) will fill the (i-1)-th
row and (j-1)-th column of the grid (assuming indexing starts from 1).
ld Integer. (input)
Specifies the first dimension of array gridmap as declared in the calling program.
nprow Integer. (input)
Indicates the number of processor rows for the processor grid.
npcol Integer. (input)
Indicates the number of processor columns for the processor grid.

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_PCOORD ( 3S ) BLACS_PCOORD ( 3S )

NAME
BLACS_PCOORD – Computes coordinates in two-dimensional grids

SYNOPSIS
CALL BLACS_PCOORD (icntxt, pe_num, prow, pcol)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_PCOORD computes processor grid coordinates prow and pcol by using pe_num.
This routine has the following arguments:
icntxt Integer. (input)
The context handle returned by a call to BLACS_GRIDINIT(3S). This argument must be
passed but is currently ignored internally.
pe_num Integer. (input)
Processing element.
prow Integer. (output)
Row coordinate for processor.
pcol Integer. (output)
Column coordinate for processor.

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



BLACS_PNUM ( 3S ) BLACS_PNUM ( 3S )

NAME
BLACS_PNUM – Returns the processor element number for specified coordinates in two-dimensional grids

SYNOPSIS
PE_number = BLACS_PNUM (icntxt, prow, pcol)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
BLACS_PNUM returns the processor element number at grid coordinate prow, pcol.
This routine has the following arguments:
icntxt Integer. (input)
The context handle returned by a call to BLACS_GRIDINIT(3S). This argument must be
passed but is currently ignored internally.
prow Integer. (input)
Row coordinate of processor.
pcol Integer. (input)
Column coordinate of processor.
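
EXAMPLES
A minimal sketch (not part of the original manual) showing that BLACS_PNUM inverts
BLACS_PCOORD(3S); ICNTXT is assumed to come from an earlier BLACS_GRIDINIT call (not shown).

```fortran
      INTEGER ICNTXT, PE, PROW, PCOL, PE2
      INTEGER MYNODE, BLACS_PNUM
      PE = MYNODE()
      CALL BLACS_PCOORD (ICNTXT, PE, PROW, PCOL)
      PE2 = BLACS_PNUM (ICNTXT, PROW, PCOL)
*     PE2 equals PE: mapping a processor number to grid
*     coordinates and back recovers the original number.
```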

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S)



GRIDINFO3D ( 3S ) GRIDINFO3D ( 3S )

NAME
GRIDINFO3D – Returns information about the three-dimensional processor grid

SYNOPSIS
CALL GRIDINFO3D (ictxt, npx, npy, npz, mypex, mypey, mypez)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
GRIDINFO3D returns information about the processor grid, such as the number of processors assigned to the
X, Y, and Z dimensions and the grid coordinates of the calling processor.
The following arguments are available with this routine:
ictxt Integer. (input)
Handle that describes the grid initialized by GRIDINIT3D(3S).
npx Integer. (output)
Number of processors assigned to the X dimension.
npy Integer. (output)
Number of processors assigned to the Y dimension.
npz Integer. (output)
Number of processors assigned to the Z dimension.
mypex Integer. (output)
X coordinate of processor.
mypey Integer. (output)
Y coordinate of processor.
mypez Integer. (output)
Z coordinate of processor.

NOTES
The GRIDINIT3D(3S) routine must be called somewhere in the program before the first call to
GRIDINFO3D.
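
EXAMPLES
A short sketch (not part of the original manual) of querying a 3D grid; it assumes the program runs
on a 16-processor partition so that a 2-by-4-by-2 grid is valid.

```fortran
      INTEGER ICTXT, NPX, NPY, NPZ, MYPEX, MYPEY, MYPEZ
      CALL GRIDINIT3D (ICTXT, 2, 4, 2)
      CALL GRIDINFO3D (ICTXT, NPX, NPY, NPZ, MYPEX, MYPEY, MYPEZ)
*     NPX, NPY, and NPZ receive 2, 4, and 2; (MYPEX, MYPEY,
*     MYPEZ) are the calling processor's grid coordinates.
```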

SEE ALSO
DESCINIT3D(3S), GRIDINIT3D(3S), PCOORD3D(3S), PNUM3D(3S)



GRIDINIT3D ( 3S ) GRIDINIT3D ( 3S )

NAME
GRIDINIT3D – Initializes variables for a three-dimensional (3D) grid partition of processor set

SYNOPSIS
CALL GRIDINIT3D (ictxt, npx, npy, npz)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
GRIDINIT3D initializes an npx-by-npy-by-npz grid of processors in a column-major fashion. The
GRIDINIT3D routine assigns grid coordinates to each processor. Users must call this routine before calling
any routine that uses information about the 3D grid of processors. The arguments should be the same on all
nodes.
The GRIDINIT3D routine accepts the following arguments:
ictxt Integer. (output)
Handle that describes the 3D grid.
npx Integer. (input)
Number of processors assigned to the X dimension of the processor grid. This argument must
be a power of 2.
npy Integer. (input)
Number of processors assigned to the Y dimension of the processor grid. This argument must
be a power of 2.
npz Integer. (input)
Number of processors assigned to the Z dimension of the processor grid. This argument must be
a power of 2.
As an example, consider a partition of 16 processors (N$PES = 16) that will be initialized as a 3D grid of
size 2-by-4-by-2 (that is, 2 processors assigned to the X dimension, 4 to the Y dimension and 2 to the Z
dimension). GRIDINIT3D assigns the following coordinates to the processors:

Z = 0

  Y      0    1    2    3
  X   |----|----|----|----|
    0 |  0 |  2 |  4 |  6 |
      |----|----|----|----|
    1 |  1 |  3 |  5 |  7 |
      |----|----|----|----|

Z = 1

  Y      0    1    2    3
  X   |----|----|----|----|
    0 |  8 | 10 | 12 | 14 |
      |----|----|----|----|
    1 |  9 | 11 | 13 | 15 |
      |----|----|----|----|

In this case processor 2 would have coordinates (0,1,0) and processor 13 would have coordinates (1,2,1).
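
EXAMPLES
The worked example above could be produced by a fragment like the following sketch (not part of the
original manual); it requires a 16-processor partition (N$PES = 16).

```fortran
      INTEGER ICTXT, PEX, PEY, PEZ, MYNODE
*     Initialize the 2-by-4-by-2 grid from the example above.
      CALL GRIDINIT3D (ICTXT, 2, 4, 2)
*     Recover this processor's coordinates; for instance,
*     processor 13 obtains (PEX, PEY, PEZ) = (1, 2, 1).
      CALL PCOORD3D (ICTXT, MYNODE(), PEX, PEY, PEZ)
```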

SEE ALSO
DESCINIT3D(3S), GRIDINFO3D(3S), PCOORD3D(3S), PNUM3D(3S)



IGAMN2D ( 3S ) IGAMN2D ( 3S )

NAME
IGAMN2D, SGAMN2D, CGAMN2D – Determines minimum absolute values of rectangular matrices

SYNOPSIS
CALL IGAMN2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)
CALL SGAMN2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)
CALL CGAMN2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGAMN2D determines minimum absolute values of rectangular matrices.
IGAMN2D communicates integer data. SGAMN2D communicates real data. CGAMN2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Network topology. Only the h topology (minimum spanning tree) is currently supported.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a IGAMN2D: Integer array, dimension (lda,n). (input/output)
SGAMN2D: Real array, dimension (lda,n). (input/output)
CGAMN2D: Complex array, dimension (lda,n). (input/output)
On entry, a is an m-by-n matrix of values. On exit, a is such that a(i, j) is the element of minimum
absolute value from the (i, j) entries of all the input arrays.

lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
ra Integer array of dimension (ldia,n). (output)
On exit, ra(i, j) is the row index of the processor that provided a(i, j) in the output array.
ca Integer array of dimension (ldia,n). (output)
On exit, ca(i, j) is the column index of the processor that provided a(i, j) in the output array.
ldia Integer. (input)
Leading dimension of integer arrays ra and ca. ldia ≥ MAX(m,1).
rdest Ignored.
cdest Ignored.

NOTES
The m, n, and lda arguments determine the matrix shape. For an operation to proceed, all processors
indicated by the scope argument must call the given routine. The result is left on all processors indicated by
the scope argument.
These routines were named IGMIN2D, SGMIN2D, and CGMIN2D in a previous release.
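
EXAMPLES
A sketch (not part of the original manual) of a global minimum-absolute-value reduction of a single
value; ICNTXT is assumed to come from an earlier BLACS_GRIDINIT call (not shown), and the input
value chosen here is illustrative.

```fortran
      INTEGER ICNTXT, A(1), RA(1), CA(1), MYNODE
      A(1) = MYNODE() - 3
*     All processors in the grid participate (scope 'A').
*     rdest and cdest are ignored; -1 is passed here.
      CALL IGAMN2D (ICNTXT, 'A', 'h', 1, 1, A, 1, RA, CA, 1,
     &              -1, -1)
*     On return, every processor holds in A(1) the input value
*     of smallest absolute value; (RA(1), CA(1)) are the grid
*     coordinates of the processor that supplied it.
```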

SEE ALSO
BLACS_GRIDINIT(3S), IGAMX2D(3S), IGSUM2D(3S), INTRO_BLACS(3S)



IGAMX2D ( 3S ) IGAMX2D ( 3S )

NAME
IGAMX2D, SGAMX2D, CGAMX2D – Determines maximum absolute values of rectangular matrices

SYNOPSIS
CALL IGAMX2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)
CALL SGAMX2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)
CALL CGAMX2D (icntxt, scope, top, m, n, a, lda, ra, ca, ldia, rdest, cdest)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGAMX2D determines maximum absolute values of rectangular matrices.
IGAMX2D communicates integer data. SGAMX2D communicates real data. CGAMX2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Network topology. Only the h topology (minimum spanning tree) is currently supported.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a IGAMX2D: Integer array, dimension (lda,n). (input/output)
SGAMX2D: Real array, dimension (lda,n). (input/output)
CGAMX2D: Complex array, dimension (lda,n). (input/output)
On entry, a is an m-by-n matrix of values. On exit, a is such that a(i, j) is the element of
maximum absolute value from the (i, j) entry of all the input arrays.

lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
ra Integer array of dimension (ldia,n). (output)
On exit, ra(i, j) is the row index of the processor that provided a(i, j) in the output array.
ca Integer array of dimension (ldia,n). (output)
On exit, ca(i, j) is the column index of the processor that provided a(i, j) in the output array.
ldia Integer. (input)
Leading dimension of integer arrays ra and ca. ldia ≥ MAX(m,1).
rdest Ignored.
cdest Ignored.

NOTES
The m, n, and lda arguments determine the matrix shape. For an operation to proceed, all processors
indicated by the scope argument must call the given routine. The result is left on all processors indicated by
the scope argument.
These routines were named IGMAX2D, SGMAX2D, and CGMAX2D in a previous release.

SEE ALSO
BLACS_GRIDINIT(3S), IGAMN2D(3S), IGSUM2D(3S), INTRO_BLACS(3S)



IGEBR2D ( 3S ) IGEBR2D ( 3S )

NAME
IGEBR2D, SGEBR2D, CGEBR2D – Receives a broadcast general rectangular matrix from all or a subset of
processors

SYNOPSIS
CALL IGEBR2D (icntxt, scope, top, m, n, a, lda, rsrc, csrc)
CALL SGEBR2D (icntxt, scope, top, m, n, a, lda, rsrc, csrc)
CALL CGEBR2D (icntxt, scope, top, m, n, a, lda, rsrc, csrc)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGEBR2D receives a broadcast general rectangular matrix from all or a subset of processors. The source of
the broadcast uses the IGEBS2D(3S) routine to send the matrix. Execution does not resume until the data
arrives.
IGEBR2D communicates integer data. SGEBR2D communicates real data. CGEBR2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Specifies the network topology used by the broadcast.
top = I or i: increasing ring
top = D or d: decreasing ring
top = H or h: hypercube
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.

a IGEBR2D: Integer array, dimension (lda,n). (output)
SGEBR2D: Real array, dimension (lda,n). (output)
CGEBR2D: Complex array, dimension (lda,n). (output)
The m-by-n array at which the message is to be received.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX (m,1).
rsrc Integer. (input)
The row index of the source processor in the processor grid.
csrc Integer. (input)
Column index of the source processor in the processor grid.

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.
For an operation to proceed, all processors indicated by the scope argument must call the routine.
These routines will default to the h topology if called with any of the other values of top that are supported
by the standard version of the BLACS from the University of Tennessee (except on UNICOS/mk systems).

SEE ALSO
BLACS_GRIDINIT(3S), IGEBS2D(3S), INTRO_BLACS(3S)



IGEBS2D ( 3S ) IGEBS2D ( 3S )

NAME
IGEBS2D, SGEBS2D, CGEBS2D – Broadcasts a general rectangular matrix to all or a subset of processors

SYNOPSIS
CALL IGEBS2D (icntxt, scope, top, m, n, a, lda)
CALL SGEBS2D (icntxt, scope, top, m, n, a, lda)
CALL CGEBS2D (icntxt, scope, top, m, n, a, lda)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGEBS2D broadcasts a general rectangular matrix to all or a subset of processors. The other processors
use the IGEBR2D(3S) routine to receive the broadcast matrix. Execution does not resume until the data
arrives.
IGEBS2D communicates integer data. SGEBS2D communicates real data. CGEBS2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Specifies the network topology used by the broadcast.
top = I or i: increasing ring
top = D or d: decreasing ring
top = H or h: hypercube
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.

a IGEBS2D: Integer array, dimension (lda,n). (input)
SGEBS2D: Real array, dimension (lda,n). (input)
CGEBS2D: Complex array, dimension (lda,n). (input)
The m-by-n array to be sent.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.
For an operation to proceed, all processors indicated by the scope argument must call the routine.
These routines will default to the h topology if called with any of the other values of top that are supported
by the standard version of the BLACS from the University of Tennessee (except on Cray T3D systems).
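
EXAMPLES
A sketch (not part of the original manual) of a matched broadcast: processor (0,0) sends a 2-by-2
integer matrix to the whole grid, and every other processor posts the matching receive. ICNTXT,
MYROW, and MYCOL are assumed to come from earlier grid-setup calls such as BLACS_GRIDINIT(3S) and
BLACS_GRIDINFO(3S) (not shown).

```fortran
      INTEGER ICNTXT, MYROW, MYCOL, A(2,2)
      IF (MYROW .EQ. 0 .AND. MYCOL .EQ. 0) THEN
*        Source of the broadcast: send to all processors.
         CALL IGEBS2D (ICNTXT, 'A', 'h', 2, 2, A, 2)
      ELSE
*        Everyone else receives from processor (0,0).
         CALL IGEBR2D (ICNTXT, 'A', 'h', 2, 2, A, 2, 0, 0)
      END IF
```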

SEE ALSO
BLACS_GRIDINIT(3S), IGEBR2D(3S), IGERV2D(3S), INTRO_BLACS(3S)



IGERV2D ( 3S ) IGERV2D ( 3S )

NAME
IGERV2D, SGERV2D, CGERV2D – Receives a general rectangular matrix from another processor

SYNOPSIS
CALL IGERV2D (icntxt, m, n, a, lda, rsrc, csrc)
CALL SGERV2D (icntxt, m, n, a, lda, rsrc, csrc)
CALL CGERV2D (icntxt, m, n, a, lda, rsrc, csrc)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGERV2D receives a general rectangular matrix from another processor. The other processor uses the
IGESD2D(3S) routine to send the matrix. Execution does not resume until the data arrives.
IGERV2D communicates integer data. SGERV2D communicates real data. CGERV2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a IGERV2D: Integer array, dimension (lda,n). (output)
SGERV2D: Real array, dimension (lda,n). (output)
CGERV2D: Complex array, dimension (lda,n). (output)
The m-by-n array at which the message is to be received.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
rsrc Integer. (input)
Row index of source processor.
csrc Integer. (input)
Column index of source processor.

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.

SEE ALSO
BLACS_GRIDINIT(3S), IGESD2D(3S), INTRO_BLACS(3S)



IGESD2D ( 3S ) IGESD2D ( 3S )

NAME
IGESD2D, SGESD2D, CGESD2D – Sends a general rectangular matrix to another processor

SYNOPSIS
CALL IGESD2D (icntxt, m, n, a, lda, rdest, cdest)
CALL SGESD2D (icntxt, m, n, a, lda, rdest, cdest)
CALL CGESD2D (icntxt, m, n, a, lda, rdest, cdest)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGESD2D sends a general rectangular matrix to another processor. The other processor uses the
IGERV2D(3S) routine to receive the matrix. Execution does not resume until the data arrives.
IGESD2D communicates integer data. SGESD2D communicates real data. CGESD2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a IGESD2D: Integer array, dimension (lda,n). (input)
SGESD2D: Real array, dimension (lda,n). (input)
CGESD2D: Complex array, dimension (lda,n). (input)
The m-by-n array to be sent.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
rdest Integer. (input)
Row index of destination processor.
cdest Integer. (input)
Column index of destination processor.

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.
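
EXAMPLES
A sketch (not part of the original manual) of a point-to-point transfer of a 4-element integer
vector from grid processor (0,0) to processor (1,0). ICNTXT, MYROW, and MYCOL are assumed to come
from earlier grid-setup calls (not shown).

```fortran
      INTEGER ICNTXT, MYROW, MYCOL, V(4)
      IF (MYROW .EQ. 0 .AND. MYCOL .EQ. 0) THEN
*        Send the vector (m = 4, n = 1) to processor (1,0).
         CALL IGESD2D (ICNTXT, 4, 1, V, 4, 1, 0)
      ELSE IF (MYROW .EQ. 1 .AND. MYCOL .EQ. 0) THEN
*        Post the matching receive from processor (0,0).
         CALL IGERV2D (ICNTXT, 4, 1, V, 4, 0, 0)
      END IF
```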

SEE ALSO
BLACS_GRIDINIT(3S), IGERV2D(3S), INTRO_BLACS(3S)



IGSUM2D ( 3S ) IGSUM2D ( 3S )

NAME
IGSUM2D, SGSUM2D, CGSUM2D – Performs element summation operations on rectangular matrices

SYNOPSIS
CALL IGSUM2D (icntxt, scope, top, m, n, a, lda, rdest, cdest)
CALL SGSUM2D (icntxt, scope, top, m, n, a, lda, rdest, cdest)
CALL CGSUM2D (icntxt, scope, top, m, n, a, lda, rdest, cdest)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
IGSUM2D performs element summation operations on rectangular matrices.
IGSUM2D communicates integer data. SGSUM2D communicates real data. CGSUM2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Network topology. Only the h topology (minimum spanning tree) is currently supported.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a IGSUM2D: Integer array, dimension (lda,n). (input/output)
SGSUM2D: Real array, dimension (lda,n). (input/output)
CGSUM2D: Complex array, dimension (lda,n). (input/output)
On exit, a is such that a(i, j) is the sum of all (i, j) entries in the input arrays.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).

rdest Ignored.
cdest Ignored.

NOTES
The m, n, and lda arguments determine the matrix shape. For an operation to proceed, all processors
indicated by the scope argument must call the given routine. The result is left on all processors indicated by
the scope argument.

SEE ALSO
BLACS_GRIDINIT(3S), IGAMX2D(3S), IGAMN2D(3S), INTRO_BLACS(3S)



ITRBR2D ( 3S ) ITRBR2D ( 3S )

NAME
ITRBR2D, STRBR2D, CTRBR2D – Receives a broadcast trapezoidal rectangular matrix from all or a subset
of processors

SYNOPSIS
CALL ITRBR2D (icntxt, scope, top, uplo, diag, m, n, a, lda, rsrc, csrc)
CALL STRBR2D (icntxt, scope, top, uplo, diag, m, n, a, lda, rsrc, csrc)
CALL CTRBR2D (icntxt, scope, top, uplo, diag, m, n, a, lda, rsrc, csrc)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
ITRBR2D receives a broadcast trapezoidal matrix from all or a subset of processors. The source of the
broadcast uses the ITRBS2D(3S) routine to send the matrix. Execution does not resume until the data
arrives.
ITRBR2D communicates integer data. STRBR2D communicates real data. CTRBR2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Specifies the network topology used by the broadcast.
top = I or i: increasing ring
top = D or d: decreasing ring
top = H or h: hypercube
uplo Character*1. (input)
Specifies whether the trapezoid is in the upper or lower triangular part of the matrix a, as
follows:
If uplo = ’U’ or ’u’, the trapezoid is in the upper triangular part of the matrix.
If uplo = ’L’ or ’l’, the trapezoid is in the lower triangular part of the matrix.

diag Character*1. (input)
Specifies whether the matrix a has ones on the diagonal, as follows:
If diag = ’U’ or ’u’, specifies a unit trapezoidal matrix.
If diag = ’N’ or ’n’, specifies a non-unit trapezoidal matrix.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a ITRBR2D: Integer array, dimension (lda,n). (output)
STRBR2D: Real array, dimension (lda,n). (output)
CTRBR2D: Complex array, dimension (lda,n). (output)
The m-by-n array in which the trapezoidal matrix is to be received.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX (m,1).
rsrc Row index of source processor. (input)
csrc Column index of source processor. (input)

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.
For an operation to proceed, all processors indicated by the scope argument must call the routine.
These routines will default to the h topology if called with any of the other values of top that are supported
by the standard version of the BLACS from the University of Tennessee (except on Cray T3D systems).

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S), ITRBS2D(3S)



ITRBS2D ( 3S ) ITRBS2D ( 3S )

NAME
ITRBS2D, STRBS2D, CTRBS2D – Broadcasts a trapezoidal rectangular matrix to all or a subset of
processors

SYNOPSIS
CALL ITRBS2D (icntxt, scope, top, uplo, diag, m, n, a, lda)
CALL STRBS2D (icntxt, scope, top, uplo, diag, m, n, a, lda)
CALL CTRBS2D (icntxt, scope, top, uplo, diag, m, n, a, lda)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
ITRBS2D broadcasts a trapezoidal rectangular matrix to all or a subset of processors. The other processors
use the ITRBR2D(3S) routine to receive the broadcast matrix. Execution does not resume until the data
arrives.
ITRBS2D communicates integer data. STRBS2D communicates real data. CTRBS2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
scope Character*1. (input)
Specifies the processors that participate in the operation, using the grid specified by a previous
call to BLACS_GRIDINIT(3S).
scope = R or r: row of processors
scope = C or c: column of processors
scope = A or a: all processors
top Character*1. (input)
Specifies the network topology used by the broadcast.
top = I or i: increasing ring
top = D or d: decreasing ring
top = H or h: hypercube
uplo Character*1. (input)
Specifies whether the trapezoid is in the upper or lower triangular part of the matrix a, as
follows:
If uplo = ’U’ or ’u’, the trapezoid is in the upper triangular part of the matrix.
If uplo = ’L’ or ’l’, the trapezoid is in the lower triangular part of the matrix.

diag Character*1. (input)
Specifies whether the matrix a has ones on the diagonal, as follows:
If diag = ’U’ or ’u’, specifies a unit trapezoidal matrix.
If diag = ’N’ or ’n’, specifies a non-unit trapezoidal matrix.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a ITRBS2D: Integer array, dimension (lda,n). (input)
STRBS2D: Real array, dimension (lda,n). (input)
CTRBS2D: Complex array, dimension (lda,n). (input)
The m-by-n matrix containing the trapezoidal matrix to be sent.
lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX (m,1).

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.
For an operation to proceed, all processors indicated by the scope argument must call the routine.
These routines will default to the h topology if called with any of the other values of top that are supported
by the standard version of the BLACS from the University of Tennessee (except on Cray T3D systems).
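
EXAMPLES
A sketch (not part of the original manual) of broadcasting the upper triangle of a 3-by-3 real
matrix along one row of the grid. ICNTXT, MYROW, and MYCOL are assumed to come from earlier
grid-setup calls (not shown).

```fortran
      INTEGER ICNTXT, MYROW, MYCOL
      REAL A(3,3)
      IF (MYCOL .EQ. 0) THEN
*        Processor in column 0 broadcasts the non-unit upper
*        triangle along its row of processors.
         CALL STRBS2D (ICNTXT, 'R', 'h', 'U', 'N', 3, 3, A, 3)
      ELSE
*        The rest of the row receives from column 0.
         CALL STRBR2D (ICNTXT, 'R', 'h', 'U', 'N', 3, 3, A, 3,
     &                 MYROW, 0)
      END IF
```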

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S), ITRBR2D(3S)



ITRRV2D ( 3S ) ITRRV2D ( 3S )

NAME
ITRRV2D, STRRV2D, CTRRV2D – Receives a trapezoidal rectangular matrix from another processor

SYNOPSIS
CALL ITRRV2D (icntxt, uplo, diag, m, n, a, lda, rsrc, csrc)
CALL STRRV2D (icntxt, uplo, diag, m, n, a, lda, rsrc, csrc)
CALL CTRRV2D (icntxt, uplo, diag, m, n, a, lda, rsrc, csrc)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
ITRRV2D receives a trapezoidal matrix from another processor. The other processor uses the ITRSD2D(3S)
routine to send the matrix. Execution does not resume until the data arrives.
ITRRV2D communicates integer data. STRRV2D communicates real data. CTRRV2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
uplo Character*1. (input)
Specifies whether the trapezoid is in the upper or lower triangular part of the matrix a, as
follows:
If uplo = ’U’ or ’u’, the trapezoid is in the upper triangular part of the matrix.
If uplo = ’L’ or ’l’, the trapezoid is in the lower triangular part of the matrix.
diag Character*1. (input)
Specifies whether the matrix a has ones on the diagonal, as follows:
If diag = ’U’ or ’u’, specifies a unit trapezoidal matrix.
If diag = ’N’ or ’n’, specifies a non-unit trapezoidal matrix.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a ITRRV2D: Integer array, dimension (lda,n). (output)
STRRV2D: Real array, dimension (lda,n). (output)
CTRRV2D: Complex array, dimension (lda,n). (output)
The m-by-n array in which the trapezoidal matrix is to be received.

lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
rsrc Integer. (input)
Row index of source processor.
csrc Integer. (input)
Column index of source processor.

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.

SEE ALSO
INTRO_BLACS(3S), ITRSD2D(3S)



ITRSD2D ( 3S ) ITRSD2D ( 3S )

NAME
ITRSD2D, STRSD2D, CTRSD2D – Sends a trapezoidal rectangular matrix to another processor

SYNOPSIS
CALL ITRSD2D (icntxt, uplo, diag, m, n, a, lda, rdest, cdest)
CALL STRSD2D (icntxt, uplo, diag, m, n, a, lda, rdest, cdest)
CALL CTRSD2D (icntxt, uplo, diag, m, n, a, lda, rdest, cdest)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
ITRSD2D sends a trapezoidal matrix to another processor. The other processor uses the ITRRV2D(3S)
routine to receive the matrix. Execution does not resume until the data arrives.
ITRSD2D communicates integer data. STRSD2D communicates real data. CTRSD2D communicates
complex data.
These routines have the following arguments:
icntxt Integer. (input)
Context handle returned by a call to BLACS_GRIDINIT(3S).
uplo Character*1. (input)
Specifies whether the trapezoid is in the upper or lower triangular part of the matrix a, as
follows:
If uplo = ’U’ or ’u’, the trapezoid is in the upper triangular part of the matrix.
If uplo = ’L’ or ’l’, the trapezoid is in the lower triangular part of the matrix.
diag Character*1. (input)
Specifies whether the matrix a has ones on the diagonal, as follows:
If diag = ’U’ or ’u’, specifies a unit trapezoidal matrix.
If diag = ’N’ or ’n’, specifies a non-unit trapezoidal matrix.
m Integer. (input)
Specifies the number of rows in matrix a. m must be ≥ 0.
n Integer. (input)
Specifies the number of columns in matrix a. n must be ≥ 0.
a ITRSD2D: Integer array, dimension (lda,n). (input)
STRSD2D: Real array, dimension (lda,n). (input)
CTRSD2D: Complex array, dimension (lda,n). (input)
The m-by-n matrix containing the trapezoidal matrix to be sent.

lda Integer. (input)
The leading dimension of the array a. lda ≥ MAX(m,1).
rdest Integer. (input)
Row index of destination processor.
cdest Integer. (input)
Column index of destination processor.

NOTES
The m, n, and lda arguments determine the matrix shape. Any processor using a send operation and the
matching receive operation must have the same m and n.

SEE ALSO
BLACS_GRIDINIT(3S), INTRO_BLACS(3S), ITRRV2D(3S)



MYNODE ( 3S ) MYNODE ( 3S )

NAME
MYNODE – Returns the calling processor’s assigned number

SYNOPSIS
MY_NUMBER = MYNODE()

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
MYNODE returns a number between 0 and NPES−1, where NPES is the number of processors in the
mainframe partition on which the program is executing.
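
EXAMPLES
A minimal sketch (not part of the original manual):

```fortran
      INTEGER MYNODE, ME
      ME = MYNODE()
*     On a 16-processor partition, ME lies between 0 and 15,
*     and each processor receives a distinct value.
```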

SEE ALSO
BLACS_GRIDINFO(3S), BLACS_PCOORD(3S), BLACS_PNUM(3S), INTRO_BLACS(3S)



PCOORD3D ( 3S ) PCOORD3D ( 3S )

NAME
PCOORD3D – Computes three-dimensional (3D) processor grid coordinates

SYNOPSIS
CALL PCOORD3D (ictxt, pe_num, pex, pey, pez)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PCOORD3D computes processor grid coordinates pex, pey, and pez by using pe_num.
This routine accepts the following arguments:
ictxt Integer. (input)
Handle that describes the grid initialized by GRIDINIT3D(3S).
pe_num Integer. (input)
Processing element.
pex Integer. (output)
X coordinate for processor.
pey Integer. (output)
Y coordinate for processor.
pez Integer. (output)
Z coordinate for processor.

NOTES
The GRIDINIT3D(3S) routine must be called somewhere in the program before the first call to PCOORD3D.

SEE ALSO
DESCINIT3D(3S), GRIDINFO3D(3S), GRIDINIT3D(3S), PNUM3D(3S)



PNUM3D ( 3S ) PNUM3D ( 3S )

NAME
PNUM3D – Returns the processor element number for specified three-dimensional (3D) coordinates

SYNOPSIS
PE_number = PNUM3D (ictxt, pex, pey, pez)

IMPLEMENTATION
UNICOS/mk systems

DESCRIPTION
PNUM3D returns the processor element number at grid coordinate pex, pey, and pez.
This routine accepts the following arguments:
ictxt Integer. (input)
Handle that describes the grid initialized by GRIDINIT3D(3S).
pex Integer. (input)
X coordinate of processor.
pey Integer. (input)
Y coordinate of processor.
pez Integer. (input)
Z coordinate of processor.

NOTES
The routine GRIDINIT3D(3S) must be called somewhere in the program before the first call to PNUM3D.

SEE ALSO
DESCINIT3D(3S), GRIDINFO3D(3S), GRIDINIT3D(3S), PCOORD3D(3S)



INTRO_CORE ( 3S ) INTRO_CORE ( 3S )

NAME
INTRO_CORE – Introduction to the Scientific Library out-of-core routines for linear algebra

IMPLEMENTATION
UNICOS systems

DESCRIPTION
The Scientific Library out-of-core routines for linear algebra let you solve problems in which it is not
possible, or not convenient, to store all of the data in main memory during program execution. The central
concept on which these routines are based is the idea of the virtual matrix, which is stored outside main
memory (perhaps on disk or on SSD), and referenced through a Fortran I/O unit number.
The following list describes the purpose and name of each out-of-core routine. The first name listed is the
name of the man page that documents the routines.
Virtual Matrix Initialization and Termination Routines
• VBEGIN: Initializes out-of-core routine data structures.
• VEND: Handles terminal processing for the out-of-core routines.
• VSTORAGE: Declares packed storage mode for a triangular, symmetric, or Hermitian virtual matrix.
Virtual Matrix Copy Routines
• SCOPY2RV, CCOPY2RV: Copies a submatrix of a real (in memory) matrix to a virtual matrix.
• SCOPY2VR, CCOPY2VR: Copies a submatrix of a virtual matrix to a real (in memory) matrix.
Virtual Linear Algebra Package Routines
• VSGETRF, VCGETRF: Computes an LU factorization of a virtual general matrix, using partial pivoting
with row interchanges.
• VSGETRS, VCGETRS: Solves a system of linear equations AX = B; A is a virtual general matrix whose
LU factorization has been computed by VSGETRF(3S).
• VSPOTRF: Computes the Cholesky factorization of a virtual real symmetric positive definite matrix.
• VSPOTRS: Solves a system of linear equations AX = B; A is a virtual real symmetric positive definite
matrix whose Cholesky factorization has been computed by VSPOTRF(3S).
Virtual Level 3 Basic Linear Algebra
• VSGEMM, VCGEMM: Multiplies a virtual general matrix by a virtual general matrix.
• VSTRSM, VCTRSM: Solves a virtual triangular system of equations with multiple right-hand sides.
• VSSYRK: Performs symmetric rank k update of virtual symmetric matrix.

General Introduction
Some problems are so large that it is not possible, or at least not convenient, to store all of the data in main
memory during program execution. For such problems, you can use an out-of-core technique. This term is
an anachronism, referring as it does to magnetic core memory, but the name is still used to refer to
algorithms that combine input and output with computation to solve problems in which the data resides on
disk or some other secondary random-access storage device.
Consider the problem of solving a system of simultaneous linear equations. If the system contains n
equations with n unknowns, the amount of data required to represent the problem is n^2 floating-point
numbers. The amount of computation required to compute a solution is approximately 2n^3/3 floating-point
operations. For example, if n = 30,000, the amount of memory required to store the matrix is 900 Mwords.
If the effective computational rate were 2.0 GFLOPS, the amount of time required to solve the problem
would be 9,000 seconds (2.5 hours).
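The arithmetic in this example can be reproduced with a short sketch (Python is used here purely for illustration; the one-word-per-element count and the 2.0 GFLOPS rate are the figures assumed in the text):

```python
# Back-of-the-envelope cost model for solving a dense n-by-n system,
# matching the figures in the paragraph above.

def dense_solve_cost(n, gflops=2.0):
    """Return (memory in Mwords, time in seconds) for an n-by-n system."""
    words = n * n                 # n^2 matrix elements, one word each
    flops = 2.0 * n**3 / 3.0      # ~2n^3/3 floating-point operations
    mwords = words / 1_000_000
    seconds = flops / (gflops * 1e9)
    return mwords, seconds

mem, sec = dense_solve_cost(30_000)
print(mem)   # 900.0 (Mwords)
print(sec)   # 9000.0 (seconds, i.e. 2.5 hours)
```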
This amount of computation is large compared to the amount of input and output required, so this problem is
computationally intensive. Therefore, it is an excellent candidate for solution by an out-of-core technique
(especially because solving large problems of this type is of great practical importance in many areas of
application).
Out-of-core Linear Algebra Software
The Scientific Library contains a unified set of routines for out-of-core solution of problems in dense linear
algebra. These routines are designed to be easy to use and highly efficient. The design of the out-of-core
routines is parallel to the design of library software that solves similar problems "in-core" (in memory),
namely LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms).
The LAPACK library is a state-of-the-art package for solving problems in dense linear algebra (see the
INTRO_LAPACK(3S) man page for more information on the Cray implementation of LAPACK). For out-
of-core problems in dense linear algebra, the Scientific Library has followed a software design which, from a
user perspective, is very similar to the LAPACK routines, and which uses similar or identical algorithms.
These routines are called the Virtual LAPACK, or VLAPACK, routines.
The LAPACK routines perform much of their computational work through calls to the Level 3 Basic Linear
Algebra Subprograms (Level 3 BLAS), which are designed to perform very efficiently on parallel vector
computers (see the INTRO_BLAS3(3S) man page for more information on the Level 3 BLAS routines).
Likewise, the Virtual LAPACK routines are based on a set of Virtual BLAS, called VBLAS.
Some features of these out-of-core library routines include the following:
• The routines are based on state-of-the-art algorithms for numerical linear algebra.
• Highly-efficient computational kernels perform at peak attainable speed on the hardware.
• Highly-efficient input and output is done automatically by the software; therefore, users do not have to be
involved in the details of the I/O routines.
• Virtual matrices are easy to create and use.
• Detailed performance measurement capabilities are built in. Performance statistics can be printed
automatically to give users complete information on software and hardware performance.

• Users can easily change certain tuning parameters to optimize the software for each specific problem and
computing environment.
Virtual Matrices
An important concept in the out-of-core routines is that of a virtual matrix. You can think of a virtual
matrix as a mathematical matrix, the elements of which are accessed in a certain way, using subroutine calls.
In some ways, a virtual matrix is like a two-dimensional Fortran array. Like a Fortran array, a virtual matrix
has elements that are real numbers. Like a Fortran array, a virtual matrix has subscripts that are integers
between 1 and some positive number n, and has a certain "leading dimension," which the user defines when
creating the virtual matrix.
Unlike a Fortran array, a virtual matrix is not accessed directly from a Fortran (or C) program. Instead, you
access a virtual matrix by using calls to the out-of-core routines.
These subroutines provide the only mechanism for manipulating a virtual matrix. In particular, a user never
has to do any explicit input or output to read or write a virtual matrix. Even though a virtual matrix is
actually stored as a file, users do not have to be concerned with the actual I/O. The library software handles
the I/O details automatically and efficiently, leaving users free to concentrate on the mathematical solution to
the problem at hand, and for the most part, to ignore the fact that out-of-core techniques are in use.
The next subsection, "Subroutine Types," briefly describes these routines. After that, the NOTES section
provides more specific information about virtual matrices.
Subroutine Types
The Scientific Library out-of-core software user interface comprises four types of subroutines:
• Initialization and termination routines
• Virtual matrix copy (VCOPY) routines
• Virtual LAPACK (VLAPACK) routines
• Virtual Level 3 BLAS (VBLAS) routines
The subsections that follow describe each subroutine.
Initialization and termination routines
You must initialize the underlying library routines by a call to the VBEGIN(3S) routine. This routine has
several optional arguments, all of which relate to tuning performance of the package. The most important
argument is an integer that specifies how many words to use for buffer space. VBEGIN(3S) automatically
allocates the requested amount of memory, using a call to the operating system. Likewise, you must call the
VEND(3S) routine when you are done with virtual linear algebra. VEND(3S) closes any open files that are
being used for virtual matrices and deallocates the memory that was allocated by VBEGIN(3S).
VSTORAGE(3S) declares that an existing virtual matrix (initialized with VBEGIN(3S)) is stored and
referenced in packed form. See the NOTES section for more on packed storage.

Virtual matrix copy routines


As previously mentioned, an important feature of this software is that users never have to do any explicit
input or output to a virtual matrix. To initially create a virtual matrix, or to find out what is in a virtual
matrix after it is created, users should call the virtual copy routines.
To create a virtual matrix, users copy sections of an in-memory matrix to a section of a virtual matrix, or
vice versa, using virtual copy routines. For example, if you want to work with one column of the matrix at
a time, you can call a virtual copy routine to "Copy this vector, x, to column j of the virtual matrix A." Or,
conversely, you could call a virtual copy routine to "Get row i of the virtual matrix A and store it in vector
y."
The virtual copy routines are named SCOPY2RV(3S), SCOPY2VR(3S), CCOPY2RV(3S), and
CCOPY2VR(3S). The initial letter in each routine name, "S" or "C," stands for "Single-precision real" or
"Complex," respectively. The numeral "2" stands for "two-dimensional," because the routines copy parts of
matrices, as opposed to vectors. The last two letters, "RV" or "VR," stand for "real to virtual" or "virtual to
real," respectively.
You can use these routines to copy one row at a time, or one column at a time, or any rectangular submatrix
of the virtual matrix. You give the copy routine the subscripts of the upper-left corner of the submatrix and
the dimensions of the submatrix. In fact, you can even use the routines to fetch or store one element at a
time, although that would be less efficient. Users never have to do any explicit input or output to access
their virtual matrices. The library routines do all of the necessary I/O automatically. From the user
perspective, it is just a matrix copy operation.
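The copy-in/copy-out access model can be sketched in a few lines. This is an illustration only, not the library's interface: the class and method names below are invented, and the real routines work through Fortran unit numbers with AQIO underneath.

```python
# Toy model of a file-backed "virtual matrix" that is reached only
# through submatrix copies, never addressed directly.
import struct
import tempfile

class VirtualMatrix:
    """Matrix stored in a binary file; accessed only via copy routines
    (loosely analogous to SCOPY2RV / SCOPY2VR)."""

    WORD = 8  # bytes per element (one 64-bit word)

    def __init__(self, lvd):
        self.lvd = lvd                        # leading virtual dimension
        self.file = tempfile.TemporaryFile()  # stands in for fort.N

    def _offset(self, i, j):
        # Column-major layout, 1-based subscripts, as in Fortran.
        return ((j - 1) * self.lvd + (i - 1)) * self.WORD

    def copy_rv(self, block, i, j):
        """Copy an in-memory block (a list of columns) to position (i, j)."""
        for jj, col in enumerate(block):
            self.file.seek(self._offset(i, j + jj))
            self.file.write(struct.pack(f"<{len(col)}d", *col))

    def copy_vr(self, m, n, i, j):
        """Fetch an m-by-n submatrix starting at (i, j) into memory."""
        out = []
        for jj in range(n):
            self.file.seek(self._offset(i, j + jj))
            out.append(list(struct.unpack(f"<{m}d",
                                          self.file.read(m * self.WORD))))
        return out

v = VirtualMatrix(lvd=1000)
v.copy_rv([[1.0, 2.0, 3.0]], i=1, j=5)   # write rows 1..3 of column 5
print(v.copy_vr(3, 1, i=1, j=5))         # [[1.0, 2.0, 3.0]]
```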
Virtual LAPACK routines
The design of the out-of-core routines is parallel to the design of library software that solves similar
problems in memory, namely LAPACK and BLAS.
The LAPACK library is a state-of-the-art package for solving problems in dense linear algebra (see the
INTRO_LAPACK(3S) man page). For out-of-core problems in dense linear algebra, the Scientific Library
has followed a software design which, from a user perspective, is very similar to the LAPACK routines and
uses similar or identical algorithms. These routines are called the Virtual LAPACK (or VLAPACK)
routines.
The LAPACK routine for LU matrix factorization (with partial pivoting) is called SGETRF(3L). The virtual
matrix counterpart is called VSGETRF(3S). This virtual routine is very similar in its use, and its design, to
the original LAPACK routine. The main difference is that in place of the argument for the matrix name, the
virtual routine requires the name of a virtual matrix. The name of a virtual matrix is just an integer constant
or variable that gives the unit number of the file in which the matrix resides.
The virtual LAPACK routine VSGETRS(3S) performs back-substitution for solving systems of equations,
based on the matrix factorization produced by VSGETRF(3S). VSGETRS(3S) is the counterpart of the
LAPACK routine SGETRS(3L). (Man pages for LAPACK routines, such as SGETRS, are available only
online, using the man(1) command.)

For applications in which the matrices contain complex numbers, rather than real numbers, you can use the
Virtual LAPACK routines VCGETRF(3S) and VCGETRS(3S).
Virtual Level 3 BLAS routines
The LAPACK routines perform much of their actual computation by calling Level 3 BLAS routines, which
are designed for speed and efficiency on parallel-vector computers (see the INTRO_BLAS3(3S) man page).
Likewise, the Virtual LAPACK routines are based on a set of Virtual Level 3 BLAS (VBLAS).
For example, the BLAS routine to perform a matrix multiply is called SGEMM(3S) (single-precision real) or
CGEMM(3S) (complex). The corresponding out-of-core routine for virtual matrices is VSGEMM(3S) or
VCGEMM(3S), respectively.
The calling sequences of the Virtual LAPACK and Virtual BLAS routines are similar to those of the
corresponding LAPACK and BLAS routines, but when an in-memory routine requires a matrix argument, the
corresponding virtual routine requires one or more arguments that specify a virtual matrix.

NOTES
This section describes further aspects of virtual matrices and the out-of-core routines that operate on them.
Unit Numbers
The name of a virtual matrix is an integer number between 1 and 99, inclusive. The name identifies the
Fortran unit number of the file in which the virtual matrix file is stored. By default, unit number 1 is
associated with file fort.1, unit 2 with file fort.2, and so on.
Do not use any unit number that your program is using for another purpose.
Also, do not use any of the following units: 0, 5, 6, 100, 101, or 102, because these unit numbers are, by
default, associated with the following special files:
stdin 5 and 100
stdout 6 and 101
stderr 0 and 102
You may close and reopen units 0, 5, and 6 as virtual matrices, but not units 100, 101, and 102.
You can associate a particular file with a particular unit number by using the "assign by unit" option of the
assign(1) command (see the assign(1) man page for more information about the assign command).
As an example, suppose you want to store your virtual matrix on a file in directory /tmp/xxx and call the
file mydata. If you choose to use Fortran unit number 3 for the file, prior to executing the program that
calls the out-of-core software, you could issue the following command line:
assign -a /tmp/xxx/mydata u:3

Within the out-of-core subroutines, you would use the number 3 as the value of the argument for the virtual
matrix name.

File Format
A virtual matrix is actually stored as a file, in a special format that is useful only for the virtual linear
algebra routines. But outside of the program, at the operating system level, such a file can be copied,
moved, archived, compressed, and so on, just like any other binary file. The assign(1) command
determines the actual characteristics of the file, including the device to which it is assigned (that is, disk or
SSD).
Technically, a virtual matrix is a binary unblocked file. You do not have to specify the -s u option on the
assign command; you cannot use other formats or conversions in conjunction with the out-of-core
routines. If you try to use other formats or conversions, your program will abort with an Asynchronous
Queued I/O (AQIO) error message. Actually, the virtual matrix file is blocked into "pages," but this
blocking is done by the Scientific Library out-of-core routines, not by the system I/O routines; therefore, for
the assign command, the virtual matrix file is considered to be an unblocked file.
The actual input and output is done internally using a feature called Asynchronous Queued I/O (AQIO).
This feature allows highly-efficient, random-access I/O without using any unnecessary intermediate buffering
of data.
If you want to use a file of data that was created by some means other than using Virtual LAPACK or
Virtual BLAS routines, you should write a program that reads the file, using the usual Fortran I/O facilities,
and copies the file, one section at a time, to a virtual matrix, by using the virtual copy routines. Likewise, if
you want to use a virtual matrix as input to some other program, you should write a program that uses the
virtual copy routines to get data from the virtual matrix, and then write it out using the usual Fortran I/O
facilities. If only the virtual linear algebra routines use the data, it is most convenient to just work with the
virtual matrix files themselves, using the subroutines provided.
Leading Virtual Dimension
A virtual matrix has a certain "leading dimension" (that is, the first dimension) just like a Fortran
two-dimensional array. For instance, if the virtual matrix is 1000 by 2000 elements, the first (leading)
dimension is 1000. You should supply the value 1000 for the leading dimension argument in the
subroutines.
You can use any value for the leading dimension, but after it is defined, you cannot change it. If you
originally created the virtual matrix with 1000 for the leading dimension, you must always use the same
value in subsequent subroutine calls.
Definition and Redefinition of Elements
When accessing elements of a virtual matrix, the value of the first subscript must be in the range
1 ≤ i ≤ lvd

where i is the subscript, and lvd is the leading virtual dimension, as defined in the subroutine call. The
second subscript must be a positive integer. No set upper limit to the value of the second subscript exists.

When you first create a virtual matrix, you must explicitly define every element of the matrix before you use
it in a computation. You can consider that any element you have not explicitly defined is undefined, and it
should not be referenced. For example, if you want to create an identity matrix of size 2000 by 2000, you
could zero out all 4,000,000 elements, then set the 2000 diagonal elements to 1, using the virtual copy
routines. You should not just set the diagonal elements to 1 and assume that the off-diagonal elements are 0.
After the elements of a virtual matrix are defined, their values remain defined unless you explicitly change
them or remove the file.
File Size
The size (in words) of a virtual matrix file is slightly larger than the total number of elements it contains.
Thus, a virtual matrix of size 5000 by 5000 would contain slightly more than 25 million words, or 200
Mbytes of data. The reason that it is not exactly 25 million words has to do with the way that the software
organizes data internally into pages.
When you define the value of a virtual matrix element, you are implicitly creating file space for all elements
up to the one you define. For example, if you declare that a virtual matrix has a leading dimension of 5000,
and you define a value for element (1, 1000), the software will create a virtual matrix file large enough to
contain elements (i, j) for 1 ≤ i ≤ 5000, 1 ≤ j ≤ 1000, which is 5 Mwords, or 40 Mbytes of file space.
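The rounding to whole pages can be made concrete with a sketch (assuming the default 256-by-256 page described under "Page Size" below; the library's exact internal layout may differ slightly):

```python
# File space for a virtual matrix, rounded up to whole 256-by-256 pages.
import math

NP = 256  # default page size: each page is an np-by-np block of elements

def file_words(rows, cols, np=NP):
    """Words of file space implied by defining elements up to (rows, cols)."""
    page_rows = math.ceil(rows / np)
    page_cols = math.ceil(cols / np)
    return page_rows * page_cols * np * np

print(file_words(5000, 5000))   # 26214400: "slightly more than 25 million"
print(file_words(5000, 1000))   # 5242880: about the 5 Mwords cited above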
Page Size
At the internal level, the software organizes virtual matrices into "pages." The size of a page is, by default,
256 by 256 words, or 65,536 words. I/O transfers, internally, are done in minimum units of one page. For
both disk and SSD, this size gives excellent performance.
You may redefine the page size that the out-of-core routines use, although it is not recommended unless
special performance tuning considerations are involved. The internal file structure of a virtual matrix
depends on the page size. Thus, a virtual matrix created with a certain page size could not later be read or
written using a different page size; but instead, it would have to be re-created.
Lower-level Routines
The user-level out-of-core routines are built on lower-level routines that manage work request queues, active
page queues, and other tasks. These routines in turn, depend on the AQIO routines and the operating system
routines.

This layered software design can be illustrated as follows:


 _________ ____________ __________________ ___________________
|         |            | Virtual LAPACK   |                   |
|         |            | routines         | Initialization    |
| USER    | Virtual    |__________________| and               |
| LEVEL   | copy       |                  | termination       |
|         | routines   | Virtual BLAS     | routines          |
|         |            | routines         |                   |
|_________|____________|__________________|___________________|
|         |                                                   |
|         | Queuing                                           |
|         | routines                                          |
|         |                                                   |
|         |___________________________________________________|
| LIBRARY |                         |                         |
| LEVEL   | Page management routines| Work management routines|
|         |_________________________|_________________________|
|         |                         |                         |
|         | AQIO                    | BLAS                    |
|         |                         |                         |
|_________|_________________________|_________________________|
|         |                                                   |
| OS      | UNICOS operating system                           |
| LEVEL   |                                                   |
|_________|___________________________________________________|

Strassen’s Algorithm
Strassen’s algorithm for matrix multiplication is a recursive algorithm that is slightly faster than the ordinary
(inner product) algorithm. This additional speed is purchased at the expense of requiring some additional
memory for intermediate workspace. Because the Virtual LAPACK and Virtual BLAS routines are
managing their own memory anyway, and performing their work on individual page size blocks, it is an easy
matter to use Strassen’s algorithm everywhere that a matrix multiplication is required.
Strassen’s algorithm performs the floating-point operations for matrix multiplication in an order that is very
different than the usual vector method. In some cases, this could cause differences in round-off, possibly
leading to numerical differences in the result.
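The recursion itself is standard. The following sketch (for square matrices whose order is a power of two, with none of the library's blocking or workspace management) shows the seven sub-multiplications that replace the usual eight, and where the extra temporaries come from:

```python
# Strassen's algorithm: 7 recursive multiplies instead of 8, at the cost
# of temporaries (the "additional workspace" mentioned above).

def mat_add(A, B, sign=1):
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):  # extract the (r, c) quadrant, each h-by-h
        return [row[c*h:(c+1)*h] for row in M[r*h:(r+1)*h]]
    A11, A12, A21, A22 = quad(A,0,0), quad(A,0,1), quad(A,1,0), quad(A,1,1)
    B11, B12, B21, B22 = quad(B,0,0), quad(B,0,1), quad(B,1,0), quad(B,1,1)
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_add(B12, B22, -1))
    M4 = strassen(A22, mat_add(B21, B11, -1))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_add(A21, A11, -1), mat_add(B11, B12))
    M7 = strassen(mat_add(A12, A22, -1), mat_add(B21, B22))
    C11 = mat_add(mat_add(M1, M4), mat_add(M7, M5, -1))  # M1+M4-M5+M7
    C12 = mat_add(M3, M5)                                # M3+M5
    C21 = mat_add(M2, M4)                                # M2+M4
    C22 = mat_add(mat_add(M1, M3), mat_add(M6, M2, -1))  # M1-M2+M3+M6
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

A = [[1.0, 2.0], [3.0, 4.0]]
print(strassen(A, A))   # [[7.0, 10.0], [15.0, 22.0]]
```

Because the additions and multiplications happen in this different order, the rounding behavior differs from the ordinary inner-product method, as noted above.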
You may choose whether to use Strassen’s algorithm when calling VBEGIN(3S), either by passing an
argument to VBEGIN(3S), or by setting the VBLAS_STRASSEN environment variable before run time. For
C shell, use the following command:
setenv VBLAS_STRASSEN

For POSIX or Korn shell, use the following command:


export VBLAS_STRASSEN

If the user selects Strassen’s algorithm, VBEGIN(3S) automatically allocates the necessary workspace. In
subsequent virtual matrix computations, Strassen’s algorithm is then automatically used for all matrix
multiplications, including matrix multiplications done as part of the VSGEMM(3S) and VSTRSM(3S) routines.
Multitasking
Like most of the Scientific Library routines, the Virtual LAPACK and Virtual BLAS routines perform
multitasking automatically. To control the use of multitasking, set the value of the NCPUS environment
variable before run time to an integer number that indicates the number of processors you want to use. For
example, to use only one CPU (which effectively turns off multitasking), use one of the following
commands. For the C shell, enter the following command:
setenv NCPUS 1

For the POSIX or Korn shell, enter the following command:


NCPUS=1 ; export NCPUS

If you enter the following command for the C shell:


setenv NCPUS 4

or the following command for the POSIX or Korn shell:


NCPUS=4 ; export NCPUS

the software will try to use four CPUs. The actual number of CPUs used depends on the availability of
resources (see the INTRO_LIBSCI(3S) man page for more information on multitasking in the Scientific
Library).
Complex Routines
Most of the out-of-core software described previously deals with matrices of real numbers. There are also
counterparts to these routines that work with matrices of complex numbers (numbers that have a real and
imaginary part). For example, the complex two-dimensional counterpart of the virtual copy routine
SCOPY2RV(3S) is routine CCOPY2RV(3S). Likewise, routine VCGETRF(3S) factors a general complex
virtual matrix. In the naming conventions for all routines, the letter "S" denotes real (that is, "single-
precision") data; the letter "C" denotes complex data.
Packed Storage
Packed storage of a triangular or symmetric matrix means that only half of the matrix is actually stored on
disk or SSD. If a real matrix is declared to be lower triangular, only the lower triangle is stored; if upper
triangular, only the upper triangle is stored. If the matrix is symmetric, either the lower or upper triangular
part may be stored.

Likewise, a complex matrix may be lower or upper triangular, or may be symmetric, with only the lower or
upper triangle being stored. Additionally, a complex matrix may be Hermitian (equal to the conjugate of its
transpose), with either the lower or upper triangle being stored.
For the purpose of storing a matrix, the out-of-core routines do not have to distinguish between a triangular,
symmetric, or Hermitian matrix; they must know only which part of the matrix is being stored (that is, the
full matrix, the lower triangle, or the upper triangle).
In the Level 2 BLAS routines, packed storage implies a linearized storage scheme. For the out-of-core
routines, packed storage is similar, but more complicated. Because it is the page structure of the virtual
matrix binary file that is linearized, pages that correspond to the upper (or lower) part of a triangular matrix
are omitted.
Three storage modes are possible:
• FULL — The full matrix is stored
• LOWER — Only the lower triangle is stored
• UPPER — Only the upper triangle is stored
To define this storage mode, call the VSTORAGE(3S) routine, which has the calling sequence:
CALL VSTORAGE(nunit, mode)

The nunit argument is an integer that gives the unit number of the virtual matrix, and mode is a character
string giving the storage mode. See VSTORAGE(3S) for further information.
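The space saved by packed storage can be counted in pages. The precise set of pages the library keeps is internal; this sketch assumes that one triangle of the page grid, including the diagonal pages, is retained:

```python
# Pages kept for FULL versus LOWER (packed) storage of an n-by-n matrix,
# assuming the default 256-by-256 page size.
import math

def pages_stored(n, mode, np=256):
    k = math.ceil(n / np)      # pages along each dimension of the page grid
    if mode == "FULL":
        return k * k           # every page of the k-by-k grid
    # LOWER (or UPPER): only pages touching the stored triangle,
    # i.e. the triangular part of the page grid including the diagonal.
    return k * (k + 1) // 2

print(pages_stored(5000, "FULL"))    # 400 pages
print(pages_stored(5000, "LOWER"))   # 210 pages -- nearly half
```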
Performance Measurement
The out-of-core software has a built-in feature for performance measurement, and it will collect various
performance statistics automatically. The user can print these statistics when calling the VEND(3S) routine,
either by providing a nonzero argument to the VEND(3S) routine (within the program) or by setting the
VBLAS_STATISTICS environment variable (before run time). To set this environment variable in the C
shell, enter the following:
setenv VBLAS_STATISTICS

In the POSIX or Korn shell, enter the following:


export VBLAS_STATISTICS

Statistics reported include the following:


• Total elapsed time
• Total CPU time
• Total I/O wait time
• Total workspace used
• Number of words read and written
• A distribution of wait times

You may use this feature in addition to the usual performance tools (for example, see the procstat(1)
man page).
Error Reporting
When the out-of-core software diagnoses an error, it writes an error diagnostic to stderr and aborts. If the
error is diagnosed by the out-of-core routines themselves, the error message should be complete and self-
explanatory. For instance, a common error is to provide an insufficient amount of memory for workspace.
In this case, the error diagnostic will indicate how much memory was needed.
Example:
*** Error in routine: VBEGIN
*** Insufficient memory was given;
    minimum required (decimal words) = 198144

If the error was diagnosed by a lower-level system or library routine, the diagnostic will include the error
code. Usually, you can use the explain(1) command to get more information about the error by entering
one of the following commands:
explain sys -xxx

explain lib -xxx

The character string xxx represents the error code listed in the diagnostic. Use explain sys for error
status codes less than 100, and explain lib for higher-numbered codes (see the explain(1) man page
in the UNICOS User Commands Reference Manual, for more information).
For example, suppose that unit 1 was assigned to file /tmp/xxx/yyy/zzz, using the command:
assign -a /tmp/xxx/yyy/zzz u:1

But suppose that the /tmp/xxx/yyy directory has not been created. When the out-of-core routine tries to
create the file, it cannot, and aborts after printing the following message:
*** Error in routine: page_request
*** Error status on AQOPEN for unit number: 1
*** Error status on AQOPEN = -2

Because AQIO routines are used internally for input and output, the error is usually detected by
AQOPEN(3F), AQREAD(3F), or AQWRITE(3F). In this case, it was AQOPEN. Of more concern to users,
however, is the specific error status. The diagnostic denotes that the error occurred on unit number 1, and
that the error status code was -2. You can enter the following command, which prints a further description,
that explains that one of the directories in a path name does not exist:
explain sys -2

Performance Tuning
The most important tuning parameter for the out-of-core routines is the value of nwork, the amount of buffer
space. This value is set either as an explicit argument to VBEGIN(3S) or by setting the
VBLAS_WORKSPACE environment variable before run time. If the virtual matrix is disk resident, larger
buffer space means faster I/O performance, within certain limits.
CPU time is essentially unaffected by this parameter; only I/O wait time, and hence, total wall-clock time,
are affected.
As always with out-of-core techniques, a trade-off exists between performance and size. If you use more
memory, performance will be better, but the program size increases. It is difficult to give firm rules for how
much memory you should use, but the following are some guidelines:
• The absolute minimum amount of out-of-core routine page-buffer space must be enough to hold three
pages.
• If the virtual matrix is disk resident, larger buffer space means better I/O performance, within certain
limits.
• If the virtual matrix is SSD resident, much less buffer space is needed to obtain good performance.
• If running in a dedicated environment, you should use as much memory as is available.
• If running in a batch environment, it may be desirable to use less memory, so that the job can be
scheduled and run at the same time that other user jobs are running; that is, the turnaround time of a
smaller job might be much less than for a large job, even if the I/O wait time for the smaller job is larger.
• Use enough buffer space for one "column" of pages; that is, n * np words, where np is the number of
columns per page, and n is the leading dimension of the matrix (rounded up to a multiple of np). If you
use twice this much memory, performance will improve.
• The use of Strassen’s algorithm almost always speeds the computation for a small increase in memory.
The VBLAS statistics report the amount of memory used by Strassen’s algorithm.
• Packed storage mode should be used when appropriate, because it will save disk space with no penalty in
CPU time.
For solution of a general matrix (with VSGETRF and VSGETRS, or with VCGETRF and VCGETRS), a
special memory requirement exists. These routines need enough buffer space to contain one "column" of
pages; that is, if the matrix is 5000 by 5000, the buffer space must be 5000 by 256 (assuming 256 is the
page size). This requirement arises from the nature of Gaussian elimination with partial pivoting: with
less memory, the row interchanges needed for pivoting could make performance extremely poor.
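That column-of-pages requirement can be computed directly (a sketch assuming the default page size np = 256):

```python
# Minimum buffer, in words, for one "column" of pages -- the requirement
# stated above for the virtual LU routines (VSGETRF/VSGETRS).
import math

def column_buffer_words(n, np=256):
    n_rounded = math.ceil(n / np) * np   # leading dimension rounded up to np
    return n_rounded * np

print(column_buffer_words(5000))   # 1310720 words (a 5120-by-256 strip)
```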

ENVIRONMENT VARIABLES
These environment variables change the default behavior of either the VBEGIN or the VEND routine. You
can override the effect of any of these settings by the corresponding argument of the affected routine.

VBLAS_PAGESIZE
Numeric value of the default page size, np. VBEGIN uses this variable to set up in-memory pages
for virtual matrices. Each page acts as an np-by-np submatrix of a virtual matrix. If unspecified,
VBEGIN defaults to np = 256.
VBLAS_STATISTICS
Flag to determine whether to print performance statistics after using the out-of-core routines.
VEND uses this variable to determine whether it should print statistics to stdout after terminating
out-of-core processing. If this variable is set (even if it has no value), the default behavior of
VEND is to print the statistics. If unspecified, VEND prints no statistics by default.
VBLAS_STRASSEN
Flag to determine whether to use Strassen’s algorithm for matrix multiplication. VBEGIN uses this
variable to determine whether it should set up data structures for Strassen’s algorithm. If the data
structures are set up, all virtual matrix multiplies use Strassen’s algorithm. If this variable is set
(even if it has no value), the default behavior of VBEGIN is to set up for Strassen’s algorithm. If
unspecified, VBEGIN defaults to the regular (inner product) matrix multiply algorithm.
VBLAS_WORKSPACE
Numeric value of nwork, the number of words of memory to set aside for I/O buffering (pages). If
unspecified, VBEGIN defaults to nwork = 6 * np^2 (the number of words of memory required for six
pages).

EXAMPLES
Some short examples illustrate how you can use these subroutines to manipulate virtual matrices. For an
explanation of the specific arguments to the subroutines, see the man pages for the individual routines.
Example 1: This example shows how you can use routine SCOPY2RV(3S) to create a virtual matrix. This
program creates a virtual matrix on unit number 1 (which, by default, is on file fort.1). Within the
program, this matrix is referred to as V, which corresponds to IV, an integer parameter set equal to 1.
The first step is to call routine VBEGIN(3S) for initialization. Next, create a vector, X, of random numbers,
and copy it to one column of the virtual matrix, using routine SCOPY2RV(3S). This procedure is repeated
for each column J of the virtual matrix. The program is as follows:

* Create a virtual matrix of random numbers of size n by n.

      INTEGER N
      PARAMETER (N = 2000)
      INTEGER IV        ! unit number of the virtual matrix V
      PARAMETER (IV = 1)
      REAL X(N)         ! vector for storing a column of V

      CALL VBEGIN

      DO, J = 1, N      ! for each column of V

* Create a vector X of random numbers

         DO, I = 1, N
            X(I) = RANF()
         END DO

* Copy the vector X to column J of virtual matrix V

         CALL SCOPY2RV(N, 1, X, N, IV, J, 1, N)
      END DO

      CALL VEND
      END

Example 2: This example illustrates the Virtual BLAS routine VSGEMM(3S) by multiplying a virtual matrix
by itself. The example assumes that the virtual matrix on unit 1 was already created, possibly by the
program in example 1. The example multiplies this virtual matrix by itself, resulting in a new virtual matrix,
called W, that corresponds to unit number 2 (integer IW). The following program copies the first column of
the result matrix W into array X, and then it prints out the first element of X, which is the value of virtual
matrix element W(1,1):


      INTEGER N
      PARAMETER (N = 2000)
      INTEGER IV, IW    ! unit numbers of the virtual matrices
      PARAMETER (IV = 1, IW = 2)
      REAL X(N)         ! vector for storing a column of W

      CALL VBEGIN

* Multiply virtual matrix V by itself, creating virtual
* matrix W = V*V

      CALL VSGEMM(’NOTRANSPOSE’, ’NOTRANSPOSE’, N, N, N, 1.0,
     &            IV, 1, 1, N, IV, 1, 1, N, 0.0, IW, 1, 1, N)

* Store the first column of virtual W in vector X.

      CALL SCOPY2VR(N, 1, IW, 1, 1, N, X, N)

* Print the value of the first element

      WRITE(*,*) ’Value of W(1,1) = ’, X(1)

      CALL VEND
      END

Example 3: This example shows a sample usage protocol for solving systems of equations. A program to
solve a large system of equations by using the Virtual LAPACK routines might be organized according to
the following general outline. This sample outline assumes that you can generate the original matrix one
row at a time (by computing it, reading it from a file, and so on).
1. Call VBEGIN(3S) to initialize the virtual matrix routines.
2. For each row of the matrix, call a virtual copy routine to store the row in a virtual matrix. Likewise,
create a virtual matrix of right-hand sides.
3. Call routine VSGETRF(3S) to factor the general matrix.
4. Call routine VSGETRS(3S) to solve the right-hand sides.
5. For each column of the solution matrix, call a virtual copy routine to fetch the solution vector and
process it.
6. Call the VEND(3S) routine to terminate the virtual matrix routines and to close the files.



SCOPY2RV ( 3S ) SCOPY2RV ( 3S )

NAME
SCOPY2RV, CCOPY2RV – Copies a submatrix of a real or complex matrix in memory into a virtual matrix

SYNOPSIS
CALL SCOPY2RV (m, n, a, lda, nunit, iv, jv, ldv)
CALL CCOPY2RV (m, n, a, lda, nunit, iv, jv, ldv)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SCOPY2RV copies a matrix contained in a two-dimensional array of type REAL located in central memory
into a submatrix of a virtual matrix.
CCOPY2RV copies a matrix contained in a two-dimensional array of type COMPLEX located in central
memory into a submatrix of a virtual matrix.
These routines have the following arguments:
m Integer. (input)
Number of rows.
n Integer. (input)
Number of columns.
a SCOPY2RV: Real array of dimension (lda, *). (input)
CCOPY2RV: Complex array of dimension (lda, *). (input)
Contains the input matrix in memory (real for SCOPY2RV, complex for CCOPY2RV).
lda Integer. (input)
Leading (first) dimension of a.
nunit Integer. (input)
Unit number of the virtual matrix.
This routine changes the contents of the virtual matrix. The virtual matrix itself contains either
real (SCOPY2RV) or complex (CCOPY2RV) elements.
iv Integer. (input)
Starting row index of the virtual matrix (1 to ldv).
jv Integer. (input)
Starting column of the virtual matrix (1 to n).
ldv Integer. (input)
Leading (first) dimension of the virtual matrix.


NOTES
These routines are two-dimensional analogues of the Level 1 BLAS routines SCOPY and CCOPY (see
SCOPY(3S)). The initial S in SCOPY2RV means "single-precision real," the initial C in CCOPY2RV means
"complex," the 2 means "two-dimensional," and the RV means "real (in memory) to virtual (on disk or
SSD)."
These routines provide the only available method for transferring data directly from memory into a virtual
matrix. Companion routines SCOPY2VR and CCOPY2VR go in the opposite direction: virtual to real.

EXAMPLES
The following examples show how to copy various types of matrices from central memory into a virtual
matrix.
Example 1: Copy vector X, of N real elements, to row I of the virtual matrix on unit number 3. Suppose
that the virtual matrix is of size N by N, so that the leading dimension is N. Because X is a vector, the
leading dimension of X is irrelevant, and you can use the constant 1 for the lda argument.
CALL SCOPY2RV(1, N, X, 1, 3, I, 1, N)

Example 2: Copy vector X, of N complex elements, to column J of the virtual matrix on unit number 3,
with the same assumptions as in example 1.
CALL CCOPY2RV(N, 1, X, 1, 3, 1, J, N)

Example 3: Copy the 100-by-100 matrix A to the 100-by-100 submatrix of the virtual matrix on unit
NUNIT, beginning at virtual matrix subscript location (I, J). Assume that the virtual matrix has leading
dimension 3000.
CALL SCOPY2RV(100, 100, A, 100, NUNIT, I, J, 3000)

Example 4: Copy the single element X(I, J) of a complex matrix X to element (IV, JV) of the virtual
matrix on unit NUNITX. Assume that the leading dimension of the virtual matrix is LDXV. Because this
subroutine call copies one element, the leading dimension of X is irrelevant, and you can use the constant 1
for the lda argument.
CALL CCOPY2RV(1, 1, X(I, J), 1, NUNITX, IV, JV, LDXV)

Example 5: Copy the lower triangular part (the part below the main diagonal) of the first 100 rows and
columns of matrix A to the lower triangular part of the virtual matrix NVA, of leading dimension 1000,
starting at virtual array element NVA(101, 101).
CALL VSTORAGE(NVA, ’LOWER’)
DO, I = 1, 100
   CALL SCOPY2RV(1, I, A(I, 1), 100, NVA, 100+I, 101, 1000)
END DO


SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
SCOPY2VR(3S) for a description of SCOPY2VR and CCOPY2VR, each of which copies a submatrix of a real
or complex virtual matrix into a real or complex matrix in central memory (the copy routines for the
opposite direction are SCOPY2RV and CCOPY2RV)



SCOPY2VR ( 3S ) SCOPY2VR ( 3S )

NAME
SCOPY2VR, CCOPY2VR – Copies a submatrix of a virtual matrix to a real or complex (in memory) matrix

SYNOPSIS
CALL SCOPY2VR (m, n, nunit, iv, jv, ldv, a, lda)
CALL CCOPY2VR (m, n, nunit, iv, jv, ldv, a, lda)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
SCOPY2VR copies a submatrix of a virtual matrix into a two-dimensional array of type REAL located in
central memory.
CCOPY2VR copies a submatrix of a virtual matrix into a two-dimensional array of type COMPLEX located in
central memory.
These routines have the following arguments:
m Integer. (input)
Number of rows.
n Integer. (input)
Number of columns.
nunit Integer. (input)
Unit number of the virtual matrix. The virtual matrix itself contains either real (SCOPY2VR) or
complex (CCOPY2VR) elements.
iv Integer. (input)
Starting row index of the virtual matrix (1 to m).
jv Integer. (input)
Starting column of the virtual matrix (1 to n).
ldv Integer. (input)
Leading (first) dimension of the virtual matrix.
a SCOPY2VR: Real array of dimension (lda, *). (output)
CCOPY2VR: Complex array of dimension (lda, *). (output)
Contains the output matrix in memory (real for SCOPY2VR, complex for CCOPY2VR).
lda Integer. (input)
Leading (first) dimension of a.


NOTES
These routines are two-dimensional analogues of the Level 1 BLAS routines SCOPY and CCOPY (see
SCOPY(3S)). The initial S in SCOPY2VR means "single-precision real," the initial C in CCOPY2VR means
"complex," the 2 means "two-dimensional," and the VR means "virtual (on disk or SSD) to real (in
memory)."
These routines provide the only available method for reading data from a virtual matrix into memory.
Companion routines SCOPY2RV and CCOPY2RV go in the opposite direction: real to virtual.

EXAMPLES
The following examples show how to copy various types of matrices from a virtual matrix into central
memory.
Example 1: Copy row I of the virtual matrix on unit number 3 to the real vector X. Suppose that the
virtual matrix is of size N by N, so that the leading dimension is N. Because X is a vector, the leading
dimension of X is irrelevant, and you can use the constant 1 for the lda argument.

CALL SCOPY2VR(1, N, 3, I, 1, N, X, 1)

Example 2: Copy column J of the complex virtual matrix on unit number 3 to the complex vector X, with
the same assumptions as in example 1.

CALL CCOPY2VR(N, 1, 3, 1, J, N, X, 1)

Example 3: Copy the 100-by-100 submatrix of the virtual matrix on unit NUNIT, beginning at virtual
matrix subscript location (I, J), to the 100-by-100 matrix A. Assume that the virtual matrix has leading
dimension 3000.

CALL SCOPY2VR(100, 100, NUNIT, I, J, 3000, A, 100)

Example 4: Copy the single element of the complex virtual matrix on unit NUNITX, at subscript position
(IV, JV), to the single element X(I, J) of the complex matrix X. Assume that the leading dimension
of the virtual matrix is LDXV. Because this subroutine call copies one element, the leading dimension of X
is irrelevant, and you can use the constant 1 for the lda argument.

CALL CCOPY2VR(1, 1, NUNITX, IV, JV, LDXV, X(I, J), 1)

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
SCOPY2RV(3S), which documents SCOPY2RV and CCOPY2RV, each of which copies a submatrix of a real
or complex matrix in central memory into a virtual matrix (the copy routines for the opposite direction are
SCOPY2VR and CCOPY2VR)



VBEGIN ( 3S ) VBEGIN ( 3S )

NAME
VBEGIN – Initializes the out-of-core routine data structures

SYNOPSIS
CALL VBEGIN
CALL VBEGIN [(nwork)]
CALL VBEGIN [(nwork, nstrassen)]
CALL VBEGIN [(nwork, nstrassen, np)]

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VBEGIN initializes the data structures in memory that are used to handle virtual matrices. A program must
call VBEGIN once before beginning virtual matrix work and must call the companion routine, VEND(3S),
after virtual matrix work is complete. A program can have more than one "code block" of virtual matrix
work, but each block must begin with a call to VBEGIN and end with a call to VEND.
This routine takes as its first argument the minimum size of the I/O buffer space the user wants to allocate.
The routine allocates this much memory for buffers (using a malloc(3C) library call). The routine also
allocates a small additional amount of memory for its own data structures. When the VEND(3S) routine is
called to handle terminal processing, all allocated memory is freed. Other arguments determine some of the
inner workings of the out-of-core routines.
All of these arguments are optional; you can also specify them by using environment variables. Settings are
resolved with the following order of precedence:
1. Explicit argument; if the argument is given explicitly in the VBEGIN call, any conflicting settings are
ignored.
2. Environment variable; if no explicit argument is given, but there is an environment variable setting, that
setting is used.
3. If neither the argument nor the environment variable is set, there is an internal default (shown in the
argument list that follows as "DEFAULT: . . .").
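The precedence rules above can be sketched in shell terms for the nwork setting; resolve_nwork is a hypothetical illustration of the documented lookup order, not a library routine:

```shell
# Hypothetical sketch of the documented precedence for nwork:
# 1. explicit argument, 2. VBLAS_WORKSPACE, 3. internal default 6 x np^2.
resolve_nwork() {
    arg="$1"; np="${2:-256}"
    if [ -n "$arg" ]; then
        echo "$arg"                      # 1. explicit argument wins
    elif [ -n "$VBLAS_WORKSPACE" ]; then
        echo "$VBLAS_WORKSPACE"          # 2. environment variable
    else
        echo $((6 * np * np))            # 3. internal default
    fi
}
VBLAS_WORKSPACE=1280000
resolve_nwork ""          # prints 1280000 (environment variable)
resolve_nwork 2000000     # prints 2000000 (explicit argument wins)
unset VBLAS_WORKSPACE
resolve_nwork ""          # prints 393216 (default: 6 x 256 x 256)
```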
This routine has the following optional arguments:
nwork Integer. (input)
Minimum number of words to use for buffer space for I/O. The minimum number of words is
3 × np², enough space for three "pages" (see the np argument description).
The corresponding environment variable is VBLAS_WORKSPACE, which you should set to a
numeric value that gives the number of words to use.
DEFAULT: nwork = 6 × np², enough space for six "pages."


nstrassen Integer. (input)


This argument determines whether to use Strassen’s algorithm for matrix multiplication. If you
use Strassen’s algorithm, VBEGIN allocates 2.4 × np² words of storage for workspace for
Strassen’s algorithm.
nstrassen = 0 Do not use Strassen’s algorithm.
nstrassen = 1 Use Strassen’s algorithm.
nstrassen = –1 Use Strassen’s algorithm if and only if the VBLAS_STRASSEN environment
variable is set.
DEFAULT: Do not use Strassen’s algorithm.
np Integer. (input)
Size (matrix order) of one page. Each page is a square submatrix of the entire virtual matrix.
np is the order (number of rows or columns) of a page submatrix (for example, set np = 512 to
use pages of 512 by 512 words each). The value of np should be a multiple of 32.
The corresponding environment variable is VBLAS_PAGESIZE, which you should set to a
numeric value that gives the order of a page.
DEFAULT: np = 256 (You should use this default size, unless you have some special reason to
change it.)
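As a quick sketch of the sizes these defaults imply:

```shell
# Words per page for the default page order, and the default buffer
# allocation of six pages (see the nwork default above).
np=256
page=$((np * np))
echo "$page"          # prints 65536 (words in one np-by-np page)
echo $((6 * page))    # prints 393216 (default nwork, six pages)
```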

NOTES
The most important tuning parameter for the out-of-core routines is the value of nwork, the minimum
amount of buffer space.
If the virtual matrix is disk resident, larger buffer space means faster I/O performance, within certain limits.
CPU time is essentially unaffected by the amount of buffer space; only I/O wait time, and hence, total wall-
clock time, are affected.
As always with out-of-core techniques, a trade-off exists between performance and size. If you use more
memory, performance will be better, but the program size increases. It is difficult to give firm guidelines as
to how much memory you should use. If running in a multiuser environment, it may be desirable to use less
memory, so that the job can be scheduled and run at the same time that other user jobs are running. This
means that the turnaround time of a smaller job might be much less than for a large job, even if the I/O wait
time for the smaller job is larger. If running in a dedicated environment, it would make sense to use as
much available memory as possible.
One rule of thumb for good performance is to use enough buffer space for one column of pages. For
example, if the page size is np and the largest of the leading dimensions of your virtual matrices (rounded up
to the next multiple of np) is n, set the minimum buffer size to be nwork = n × np. If you use twice this
much memory (nwork = 2 × n × np), performance will improve.


If the virtual matrix resides in SSD, much less buffer space is needed to obtain good performance.
For solving a general virtual matrix (with VSGETRF(3S) and VSGETRS(3S)), the preceding rule of thumb
becomes a minimum memory requirement. These routines need enough buffer space to contain one column
of pages; that is, if you are factoring a virtual matrix of size 5000 by 4000 and the page size is np = 256, the
minimum buffer space setting must be nwork = 5000 × 256 = 1,280,000 words. This size restriction is needed
because of the nature of Gaussian elimination with partial pivoting. If less memory is used, the performance
when doing pivots might be extremely poor.
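The worked figure above reduces to a one-line computation; as a sketch:

```shell
# One column of pages for a 5000-row factorization with np = 256,
# matching the minimum buffer space figure quoted above.
m=5000
np=256
nwork=$((m * np))
echo "$nwork"    # prints 1280000
```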

ENVIRONMENT VARIABLES
These environment variables change the default behavior of the VBEGIN routine. To override the effect of
any of these settings, use the corresponding argument of VBEGIN.
VBLAS_PAGESIZE
Numeric value of the default page size, np. VBEGIN uses this variable to set up in-memory pages
for virtual matrices. Each page acts as an np-by-np submatrix of a virtual matrix. If unspecified,
VBEGIN defaults to np = 256.
VBLAS_STRASSEN
Flag to determine whether to use Strassen’s algorithm for matrix multiplication. VBEGIN uses this
variable to determine whether it should set up data structures for Strassen’s algorithm. If the data
structures are set up, all virtual matrix multiplies use Strassen’s algorithm. If this variable is set
(even if it has no value), the default behavior of VBEGIN is to set up for Strassen’s algorithm. If
unspecified, VBEGIN defaults to the regular (inner product) matrix multiply algorithm.
VBLAS_WORKSPACE
Numeric value of nwork, the number of words of memory to set aside for I/O buffering (pages). If
unspecified, VBEGIN defaults to nwork = 6 × np² (the number of words of memory required for
six pages).

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VEND(3S), VSGETRF(3S), VSGETRS(3S)
malloc(3C) in the UNICOS System Libraries Reference Manual



VEND ( 3S ) VEND ( 3S )

NAME
VEND – Handles terminal processing for the out-of-core routines

SYNOPSIS
CALL VEND [(info)]

IMPLEMENTATION
UNICOS systems

DESCRIPTION
The VEND routine does termination processing for the out-of-core routines. You must call VEND as the last
step in out-of-core processing. This routine ensures that any output in progress from the out-of-core routines
is completed, and then it deallocates all of the storage space that VBEGIN(3S) allocated.
After calling VEND, you can call the VBEGIN(3S) routine again, if you desire, to reinitialize the out-of-core
routines (including their performance statistics).
This routine has the following optional argument:
info Integer. (input)
Flag to request out-of-core routine performance statistics output.
If you supply this argument with a nonzero value, a set of performance statistics about the out-of-
core routines is printed on stdout. If you omit the argument, the statistics are printed if and only
if the VBLAS_STATISTICS environment variable is set. Statistics reported include the following:
• Total elapsed time
• Total CPU time
• Total I/O wait time
• Total workspace used
• Number of words read and written
• A distribution of wait times
You can use this performance statistics feature in addition to the usual performance tools.

NOTES
If a program terminates abnormally before VEND is called, you should assume that any virtual matrices
created or changed by the program were destroyed, because the integrity of their data cannot be guaranteed.
Virtual matrices used only for input will remain valid.


You can use the optional statistics report that VEND prints to judge whether using more or less memory for
buffer space would significantly affect performance.
Generally, if the total I/O wait time is a small percentage of wall-clock time, the program is compute-bound
and no more memory is needed.
Other useful statistics are the virtual read and write rates. These statistics measure the amount of data
transferred divided by the time when the out-of-core routines were idle because they were waiting for I/O. If
the virtual read and write rates are much faster than the physical speed of the device being used, ample
memory was used for buffer space.
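As a sketch, a virtual transfer rate can be estimated from the reported figures; the numbers here are illustrative, and a Cray word is 8 bytes:

```shell
# Hypothetical figures from a statistics report: words read, and the
# I/O wait time over which they were transferred.
words_read=1280000
io_wait_seconds=2
echo $((words_read * 8 / io_wait_seconds))   # prints 5120000 (bytes/s)
```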

ENVIRONMENT VARIABLES
The following environment variable changes the default behavior of the VEND routine. To override the
effect of this setting use the info argument.
VBLAS_STATISTICS
Flag to determine whether to print performance statistics after using the out-of-core routines.
VEND uses this variable to determine whether it should print statistics to stdout after terminating
out-of-core processing. If this variable is set (even if it has no value), the default behavior of
VEND is to print the statistics. If unspecified, VEND prints no statistics by default.

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VBEGIN(3S)



VSGEMM ( 3S ) VSGEMM ( 3S )

NAME
VSGEMM, VCGEMM – Multiplies a virtual real or complex general matrix by a virtual real or complex general
matrix

SYNOPSIS
CALL VSGEMM (transa, transb, m, n, l, alpha, nunita, ia1, ja1, lda, nunitb, ib1, jb1, ldb,
beta, nunitc, ic1, jc1, ldc)
CALL VCGEMM (transa, transb, m, n, l, alpha, nunita, ia1, ja1, lda, nunitb, ib1, jb1, ldb,
beta, nunitc, ic1, jc1, ldc)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSGEMM and VCGEMM each perform one of the following matrix-matrix operations:
C ← α op(A)op(B) + β C
where α and β are scalars; A, B, and C are virtual matrices or submatrices; op(A) is an m-by-l matrix; op(B)
is an l-by-n matrix; C is an m-by-n matrix; and op(X) is one of the following:
op(X) = X
op(X) = X**T (transpose of X)
op(X) = X**H (conjugate transpose of X; VCGEMM only)
These routines have the following arguments:
transa Character*1. (input)
Specifies the form of op(A) to be used in the matrix multiplication, as follows:
transa = ’N’ or ’n’: op(A) = A
transa = ’T’ or ’t’: op(A) = A**T
transa = ’C’ or ’c’: op(A) = A**T (VSGEMM), or op(A) = A**H (VCGEMM)
This argument can be any length. Only the first character is significant (for example, ’t’ and
’transpose’ have the same effect).
transb Character*1. (input)
Specifies the form of op(B) to be used in the matrix multiplication, as follows:
transb = ’N’ or ’n’: op(B) = B
transb = ’T’ or ’t’: op(B) = B**T
transb = ’C’ or ’c’: op(B) = B**T (VSGEMM), or op(B) = B**H (VCGEMM)
This argument can be any length. Only the first character is significant (for example, ’t’ and
’transpose’ have the same effect).


m Integer. (input)
Number of rows of output matrix.
n Integer. (input)
Number of columns of output matrix.
l Integer. (input)
Number of columns of A = number of rows of B.
alpha VSGEMM: Real. (input)
VCGEMM: Complex. (input)
Scalar factor α.
nunita Integer. (input)
Fortran unit number of virtual matrix A. The virtual matrix itself is composed of real numbers
(VSGEMM) or complex numbers (VCGEMM).
ia1 Integer. (input)
Row subscript of first element of virtual matrix A.
ja1 Integer. (input)
Column subscript of first element of A.
lda Integer. (input)
Leading virtual dimension of virtual matrix A.
nunitb Integer. (input)
Fortran unit number of virtual matrix B. The virtual matrix itself is composed of real numbers
(VSGEMM) or complex numbers (VCGEMM).
ib1 Integer. (input)
Row subscript of first element of virtual matrix B.
jb1 Integer. (input)
Column subscript of first element of B.
ldb Integer. (input)
Leading virtual dimension of virtual matrix B.
beta VSGEMM: Real. (input)
VCGEMM: Complex. (input)
Scalar factor β.
nunitc Integer. (input)
Fortran unit number of virtual matrix C. The virtual matrix itself is composed of real numbers
(VSGEMM) or complex numbers (VCGEMM), and it is changed by this routine.
ic1 Integer. (input)
Row subscript of first element of virtual matrix C.
jc1 Integer. (input)
Column subscript of first element of C.


ldc Integer. (input)


Leading virtual dimension of virtual matrix C.

NOTES
This routine is the virtual counterpart of the Level 3 BLAS routine SGEMM. The calling sequence is similar
to SGEMM. The difference is that in SGEMM a matrix operand (A, for example) would be defined by the
following two arguments:
a(i,j) The starting position of the matrix A within array a
lda The leading dimension of the array a (distance between adjacent elements of a row of A)
In VSGEMM, however, a matrix operand (A) would be defined by the following four arguments:
nunita The Fortran unit number of the file that contains the virtual matrix
ia1 Row subscript within the virtual matrix file at which the submatrix A begins
ja1 Column subscript within the virtual matrix file at which the submatrix A begins
lda Leading virtual dimension of the virtual matrix file (virtual subscript distance between adjacent
elements of a row of the submatrix A)

EXAMPLES
Two examples of virtual matrix multiplication follow.
Example 1: Multiply the complex virtual matrix V, of dimension N-by-N, by itself, creating complex virtual
matrix W = V * V. Assume the virtual matrix V was already created, on unit 1, and that this operation will
create the virtual matrix W on unit 2.
      INTEGER V, W
      PARAMETER (V = 1, W = 2)
      CALL VBEGIN

C Multiply complex virtual matrix V by itself,
C creating complex virtual matrix W = V*V

      CALL VCGEMM(’NOTRANSPOSE’, ’NOTRANSPOSE’, N, N, N,
     &      (1.0,0.0), V, 1, 1, N, V, 1, 1, N, (0.0,0.0), W, 1, 1, N)

      CALL VEND
      END

Example 2: Let X be an in-memory vector of length N, consisting of random numbers. Copy X to the
virtual matrix on unit 1 (defined as the parameter VX), which has dimension N-by-1. Multiply this
virtual matrix by its transpose, giving an N-by-N virtual matrix on unit 2 (defined as the parameter
VY).


      INTEGER N
      PARAMETER (N = 1000)
      REAL X(N)
      INTEGER VX, VY
      PARAMETER (VX = 1, VY = 2)

C create random vector x

      DO, I = 1, N
         X(I) = RANF()
      END DO

C initialize
      CALL VBEGIN

C create virtual matrix VX of dimension n by 1

      CALL SCOPY2RV(N, 1, X, N, VX, 1, 1, N)

C create n by n virtual matrix vy = vx * transpose(vx)

      CALL VSGEMM(’NOTRANSPOSE’, ’TRANSPOSE’, N, N, 1, 1.0,
     &            VX, 1, 1, N, VX, 1, 1, N, 0.0, VY, 1, 1, N)

      CALL VEND
      END

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
SGEMM(3S) for a description of SGEMM(3S) and CGEMM(3S), the in-memory equivalents of the out-of-core
routines VSGEMM and VCGEMM



VSGETRF ( 3S ) VSGETRF ( 3S )

NAME
VSGETRF, VCGETRF – Computes an LU factorization of a virtual general matrix with real or complex
elements, using partial pivoting with row interchanges

SYNOPSIS
CALL VSGETRF (m, n, nunita, lda, ipiv, info)
CALL VCGETRF (m, n, nunita, lda, ipiv, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSGETRF and VCGETRF are the out-of-core versions of SGETRF and CGETRF (see SGETRF(3L)). Each
computes an LU factorization of a real (VSGETRF) or complex (VCGETRF) general m-by-n matrix A, using
partial pivoting with row interchanges.
The factorization has the form:
A = P * L * U
where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if
m > n), and U is upper triangular (upper trapezoidal if m < n).
These routines have the following arguments.
m Integer. (input)
The number of rows of the virtual matrix A. m ≥ 0.
n Integer. (input)
The number of columns of the virtual matrix A. n ≥ 0.
nunita Integer. (input)
VSGETRF: Unit number of the virtual real matrix of dimension (lda, n).
VCGETRF: Unit number of the virtual complex matrix of dimension (lda, n).
The virtual matrix itself is used for both input and output.
On entry, A, the m-by-n matrix to be factored, is stored in the virtual matrix file, starting at
subscript (1,1). On exit, the virtual matrix A is replaced by the triangular matrix factors L and
U; the unit diagonal elements of L are not stored.
lda Integer. (input)
The leading (first) dimension of the virtual matrix A. lda ≥ MAX(1,m).
ipiv Integer array of dimension (MIN(m,n)). (output)
The pivot indices. Row i of the matrix was interchanged with row ipiv(i).
info Integer. (output)


=0 Successful exit.
<0 If info = – k, the kth argument had an illegal value.
>0 If info = k, U(k,k) is exactly 0. The factorization has been completed, but the factor U is
exactly singular, and division by 0 will occur if it is used to solve a system of equations or
to compute the inverse of A.

NOTES
This routine requires workspace of two types:
• If m is the first argument (number of rows) and np is the page size established by VBEGIN(3S), the
amount of buffer space that was allocated in the VBEGIN(3S) routine must be at least m × np words.
This minimum size of workspace is necessary to prevent a huge amount of page thrashing (excessive I/O
to reread data) when doing the partial pivoting operations.
• In addition to the buffer space allocated by VBEGIN(3S), this routine also allocates m × np words of
workspace for its own use. Thus, the total memory requirement is a minimum of 2 × np × m (plus the
much smaller workspace for data structures that is also allocated by VBEGIN(3S)).
If insufficient memory is available, the routine exits with an error message, which indicates how much
memory was needed.
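The two workspace requirements add up as follows; this sketch reuses the m = 5000, np = 256 figures from the VBEGIN(3S) man page:

```shell
# Minimum total memory for VSGETRF: the VBEGIN buffer space plus the
# routine's own workspace, each m x np words (m = 5000, np = 256 here).
m=5000
np=256
buffer=$((m * np))            # minimum VBEGIN buffer space
private=$((m * np))           # workspace VSGETRF allocates itself
echo $((buffer + private))    # prints 2560000 (= 2 x np x m words)
```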

EXAMPLES
This example illustrates using both VSGETRF and VSGETRS to solve a set of 1000 linear equations in 1000
unknowns. It is assumed that a virtual matrix of size 1000 by 1000 was created on unit 1 (defined as
parameter VA), representing the equations, and that a virtual matrix of size 1000 by 1 was created on unit 2
(defined as parameter VB), representing the right-hand side. The square matrix is factored and that
factorization is used, along with the right-hand side matrix, to compute a solution matrix.

C Compute the LU factorization of the virtual matrix A on unit 1,
C which is assumed to have been created previously and have
C dimension 1000 by 1000.
C Solve the equation A*X = B
C where B is a virtual matrix on unit 2, assumed to have dimension
C 1000 by 1.
C
      INTEGER M, LD, VA, VB
      PARAMETER (M = 1000, LD = M, VA = 1, VB = 2)
      INTEGER IPIV(LD)
      REAL X(LD)

      CALL VBEGIN

C
C LU factorization
C
      PRINT *, ’beginning vsgetrf’

      CALL VSGETRF(M, M, VA, LD, IPIV, INFO)

      IF (INFO .NE. 0) THEN
         PRINT *, ’matrix is singular’
         PRINT *, ’info = ’, INFO
         CALL VEND
         STOP
      END IF
C
C compute solution vector of A*X = B (replace B)
C
      PRINT *, ’beginning vsgetrs’

      CALL VSGETRS(’NOTRANSPOSE’, M, 1, VA,
     &             LD, IPIV, VB, LD, INFO)

      IF (INFO .NE. 0) THEN
         PRINT *, ’non-zero info from sgetrs’
         PRINT *, ’info = ’, INFO
         CALL VEND
         STOP
      END IF
C
C Copy solution vector b to vector x,
C and print the first 5 elements of x
C
      PRINT *, ’beginning scopy2vr’

      CALL SCOPY2VR(M, 1, VB, 1, 1, LD, X, LD)

      DO, I = 1, 5
         PRINT *, ’X(’, I, ’) = ’, X(I)
      END DO

      CALL VEND
      PRINT *, ’done’
      END


SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VBEGIN(3S)
VSGETRS(3S) (which documents VSGETRS(3S) and VCGETRS(3S)) to solve linear systems based on the
factorization computed by VSGETRF or VCGETRF, respectively
SGETRF(3L) (available only online) for a description of SGETRF(3L) and CGETRF(3L), which are the in-
memory equivalents of the out-of-core routines VSGETRF and VCGETRF



VSGETRS ( 3S ) VSGETRS ( 3S )

NAME
VSGETRS, VCGETRS – Solves a virtual system of linear equations, using the LU factorization computed by
VSGETRF(3S) or VCGETRF(3S)

SYNOPSIS
CALL VSGETRS (trans, n, nrhs, nunita, lda, ipiv, nunitb, ldb, info)
CALL VCGETRS (trans, n, nrhs, nunita, lda, ipiv, nunitb, ldb, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSGETRS and VCGETRS are virtual matrix versions of the LAPACK routines SGETRS and CGETRS (see
SGETRS(3L)). VSGETRS or VCGETRS uses the LU factorization of matrix A as computed by
VSGETRF(3S) or VCGETRF(3S), respectively.
VSGETRS and VCGETRS solve one of the following linear systems:
AX = B
A**T X = B
A**H X = B (VCGETRS only)
A**T is the transpose of A, A**H is the conjugate transpose of A, A is an n-by-n matrix, and X and B
are n-by-nrhs matrices.
These routines have the following arguments:
trans Character*1. (input)
Specifies the solution, X, to be computed as follows:
trans = ’N’ or ’n’ (no transpose): AX = B
trans = ’T’ or ’t’ (transpose): A**T X = B
trans = ’C’ or ’c’ (conjugate transpose): A**H X = B (VCGETRS), or A**T X = B (VSGETRS)
This argument can be any length. Only the first character is significant (for example, ’t’ and
’transpose’ have the same effect).
n Integer. (input)
Number of rows and columns of the matrix A. n ≥ 0.
nrhs Integer. (input)
Number of right-hand sides. The number of columns of the matrix B. nrhs ≥ 0.
nunita Integer. (input)
VSGETRS: Unit number of the real virtual matrix of dimension (lda, n).
VCGETRS: Unit number of the complex virtual matrix of dimension (lda, n).


nunita is itself a virtual matrix file used only for input. The matrix contains the LU factorization
of the matrix A, which must be computed by VSGETRF or VCGETRF before calling VSGETRS
or VCGETRS, respectively.
lda Integer. (input)
Leading (first) dimension of the virtual matrix A. lda ≥ MAX(1, n).
ipiv Integer array of dimension n. (input)
Array of pivot indices as determined by VSGETRF or VCGETRF.
nunitb Integer. (input)
VSGETRS: Unit number of the real virtual matrix B of dimension (ldb, n).
VCGETRS: Unit number of the complex virtual matrix B of dimension (ldb, n).
nunitb is itself a virtual matrix file used for input and output. On entry, B contains the right-
hand side vectors for the systems of linear equations. On exit, the solution vectors, columns of
the matrix X, replace the right-hand side vectors.
ldb Integer. (input)
Leading (first) dimension of the virtual matrix B. ldb ≥ MAX(1, n).
info Integer. (output)
= 0 Normal return (successful exit).
< 0 If info = – k, the kth argument has an illegal value.

NOTES
This routine requires workspace of two types:
• The amount of buffer space allocated in the VBEGIN routine must be at least n . np words; n is the
order of the matrix A, and np is the page size established by VBEGIN(3S). This minimum workspace
is necessary to prevent severe page thrashing (excessive I/O to reread data) during the partial
pivoting operations.
• In addition to the buffer space allocated by VBEGIN, this routine allocates n . np words of workspace
for its own use. Thus, the total memory requirement is a minimum of 2 . np . n words (plus the much
smaller workspace for data structures that is also allocated by VBEGIN).
If insufficient memory is available, the routine exits with an error message that indicates how much
memory is needed.

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VSGETRF(3S) for an example of using VSGETRS in conjunction with VSGETRF
VBEGIN(3S), VCGETRF(3S)
SGETRS(3L) (available only online) for a description of SGETRS and CGETRS, which are the LAPACK
routines on which VSGETRS and VCGETRS are based



VSPOTRF ( 3S ) VSPOTRF ( 3S )

NAME
VSPOTRF – Computes the Cholesky factorization of a real symmetric positive definite virtual matrix

SYNOPSIS
CALL VSPOTRF (uplo, n, nunita, lda, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSPOTRF is the virtual (out-of-core) implementation of the LAPACK routine SPOTRF(3L) (documented
online). It computes the Cholesky factorization of the real symmetric positive definite virtual matrix A,
which is accessed through the I/O unit number nunita. It uses an out-of-core technique based on the Virtual
Level 3 Basic Linear Algebra Subroutines (Virtual Level 3 BLAS or VBLAS).
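The factorization itself is ordinary Cholesky; only the data access is out of core. As a hedged in-memory sketch of the uplo = ’U’ case (hypothetical helper name, not the out-of-core algorithm):

```python
import math

# In-memory sketch of the uplo = 'U' factorization A = U^T . U for a
# symmetric positive definite matrix.
def cholesky_upper(a):
    n = len(a)
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(u[k][j] ** 2 for k in range(j))
        if s <= 0.0:
            # mirrors info > 0: leading minor not positive definite
            raise ValueError("leading minor of order %d is not positive definite" % (j + 1))
        u[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            u[j][i] = (a[j][i] - sum(u[k][j] * u[k][i] for k in range(j))) / u[j][j]
    return u

u = cholesky_upper([[4.0, 2.0], [2.0, 5.0]])   # u = [[2.0, 1.0], [0.0, 2.0]]
```

The error path corresponds to the info > 0 return of VSPOTRF: the partial sum under the square root going nonpositive is exactly the "leading minor of order k is not positive definite" condition.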
This routine has the following arguments:
uplo Character*1. (input)
Specifies whether the upper or lower triangular part of the symmetric matrix A is stored.
uplo = ’U’ or ’u’: the upper triangle of matrix A is stored.
uplo = ’L’ or ’l’: the lower triangle of matrix A is stored.
n Integer. (input)
Number of columns in virtual matrix A. n ≥ 0.
nunita Integer. (input)
Unit number of the file that contains the virtual matrix A.
The virtual matrix A is a real array of dimension (lda, n). The virtual matrix file nunita is used
for both input and output. On entry, the virtual matrix contains the symmetric positive definite
matrix to be factored.
If uplo = ’U’ or ’u’, the leading n-by-n upper triangular part of array a contains the upper
triangular part of matrix A, and the strictly lower triangular part of a is not referenced.
If uplo = ’L’ or ’l’, the leading n-by-n lower triangular part of array a contains the lower
triangular part of matrix A, and the strictly upper triangular part of a is not referenced.
On exit, the triangular factor L or U from the Cholesky factorization of matrix A overwrites
virtual matrix A. Given L or U, you can write the factorization as one of the following:
A = U^T . U, where U is an upper triangular matrix, and U^T is the transpose of U.
A = L . L^T, where L is a lower triangular matrix, and L^T is the transpose of L.
1 ≤ nunita ≤ 99
You may use packed storage mode for the virtual matrix. See the NOTES section.

lda Integer. (input)
Specifies the leading dimension of virtual matrix A. lda ≥ MAX(1, n).
info Integer. (output)
=0 Normal return.
>0 If info = k, the leading minor of order k is not positive definite, and the factorization could
not be completed.
<0 If – info = k, the kth argument has an illegal value.

NOTES
This routine allocates workspace of size np . np words; np is the page size used by the virtual matrix
routines.
See INTRO_CORE(3S) for general information about Virtual BLAS and Virtual LAPACK out-of-core
software.
You can use packed storage mode to store virtual matrix A, reducing the required amount of disk space by
about 50%. The uplo argument itself does not necessarily imply that packed storage mode will be used;
however, if packed storage is used, the storage mode must agree with the value of uplo (that is, both ’U’ or
both ’L’).
To specify packed storage mode, you must use a prior call to the VSTORAGE(3S) routine.

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VSPOTRS(3S) to solve linear systems by using the factorization computed by VSPOTRF
VSTORAGE(3S) for a definition of the packed storage mode for the virtual matrix used by VSPOTRF
SPOTRF(3L) (available only online), which is the in-memory equivalent of the out-of-core routine VSPOTRF



VSPOTRS ( 3S ) VSPOTRS ( 3S )

NAME
VSPOTRS – Solves a virtual system of linear equations with a symmetric positive definite matrix whose
Cholesky factorization has been computed by VSPOTRF(3S)

SYNOPSIS
CALL VSPOTRS (uplo, n, nrhs, nunita, lda, nunitb, ldb, info)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSPOTRS is the virtual (out-of-core) implementation of the LAPACK routine SPOTRS(3L). It solves a
system of linear equations AX = B; A is a symmetric positive definite virtual matrix (stored on I/O unit
nunita), whose Cholesky factorization has been computed by VSPOTRF(3S).
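The solve step is two triangular substitutions against the Cholesky factor. A hedged in-memory sketch for uplo = ’U’ (so A = U^T . U; hypothetical helper name, illustrating the mathematics only, not the virtual matrix I/O):

```python
# Solve A x = b given the Cholesky factor U of A = U^T . U:
# forward substitution with U^T, then back substitution with U.
def cholesky_solve_upper(u, b):
    n = len(u)
    y = b[:]
    for i in range(n):                   # U^T y = b (forward substitution)
        for k in range(i):
            y[i] -= u[k][i] * y[k]
        y[i] /= u[i][i]
    for i in reversed(range(n)):         # U x = y (back substitution)
        for k in range(i + 1, n):
            y[i] -= u[i][k] * y[k]
        y[i] /= u[i][i]
    return y

u = [[2.0, 1.0], [0.0, 2.0]]             # Cholesky factor of [[4,2],[2,5]]
x = cholesky_solve_upper(u, [8.0, 9.0])  # x = [1.375, 1.25]
```

VSPOTRS applies the same pair of substitutions to each of the nrhs columns of B, overwriting them with the solution vectors.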
This routine has the following arguments:
uplo Character*1. (input)
Specifies whether the Cholesky factor stored in virtual matrix A is upper triangular or lower
triangular.
uplo = ’U’ or ’u’: the factor is upper triangular.
uplo = ’L’ or ’l’: the factor is lower triangular.
n Integer. (input)
Number of rows (or columns) of virtual matrix A. n ≥ 0.
nrhs Integer. (input)
Number of right-hand sides. (The number of columns of matrix B.) nrhs ≥ 0.
nunita Integer. (input)
Unit number of the virtual matrix of dimension (lda, n).
The virtual matrix contains a Cholesky factor of A. The virtual matrix file nunita is used only
for input. On entry, the virtual matrix must contain the upper or lower triangular Cholesky
factor of the original virtual matrix A, as computed by VSPOTRF(3S). 1 ≤ nunita ≤ 99.
lda Integer. (input)
Leading dimension of the virtual matrix A. lda ≥ MAX(1, n).
nunitb Integer. (input)
Unit number of the virtual matrix B of dimension (ldb, nrhs). The virtual matrix file nunitb is
used for both input and output. On entry, the virtual matrix stores the right-hand-side vectors
(columns of B) for the system of linear equations. On exit, the virtual matrix contains the
solution vectors (columns of X). 1 ≤ nunitb ≤ 99.

ldb Integer. (input)
Leading dimension of virtual matrix B. ldb ≥ MAX(1, n).
info Integer. (output)
= 0 Normal return.
< 0 If – info = k, the kth argument has an illegal value.

NOTES
See INTRO_CORE(3S) for general information about the Virtual BLAS and Virtual LAPACK out-of-core
software.
You can use packed storage mode to store virtual matrix A, reducing the required amount of disk space by
about 50%. The uplo argument itself does not necessarily imply that packed storage mode will be used;
however, if packed storage is used, the storage mode must agree with the value of uplo (that is, both ’U’ or
both ’L’).
To specify packed storage mode, you must use a prior call to the VSTORAGE(3S) routine.

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VSPOTRF(3S) to compute the factorization used by VSPOTRS
VSTORAGE(3S) for a definition of the packed storage mode for the virtual matrix used by VSPOTRS
SPOTRS(3L) (available only online), which is the in-memory equivalent of the out-of-core routine VSPOTRS



VSSYRK ( 3S ) VSSYRK ( 3S )

NAME
VSSYRK – Performs a symmetric rank k update of a real symmetric virtual matrix

SYNOPSIS
CALL VSSYRK (uplo, trans, n, k, alpha, nunita, ia1, ja1, lda, beta, nunitc, ic1, jc1, ldc)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSSYRK is the virtual matrix equivalent of the Level 3 Basic Linear Algebra Subprogram (Level 3 BLAS)
routine SSYRK(3S). VSSYRK performs a symmetric rank k update of a real symmetric virtual matrix.
VSSYRK performs one of the following symmetric rank k operations:
C ← α AA^T + β C
C ← α A^T A + β C
where A^T is the transpose of A, α and β are scalars, C is an n-by-n symmetric virtual matrix, and A is an
n-by-k virtual matrix in the first operation listed previously, or a k-by-n virtual matrix in the second.
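A hedged in-memory sketch of the trans = ’N’ operation, referencing only the uplo = ’U’ (upper) triangle of C; the helper name is hypothetical and this is not the out-of-core algorithm:

```python
# C <- alpha * A * A^T + beta * C, touching only the upper triangle of C,
# as VSSYRK does for trans = 'N', uplo = 'U'.
def syrk_upper(alpha, a, beta, c):
    n, k = len(a), len(a[0])
    for i in range(n):
        for j in range(i, n):            # j >= i: upper triangle only
            s = sum(a[i][p] * a[j][p] for p in range(k))
            c[i][j] = alpha * s + beta * c[i][j]
    return c

a = [[1.0, 2.0], [3.0, 4.0]]             # n = 2, k = 2
c = [[0.0, 0.0], [0.0, 0.0]]
syrk_upper(1.0, a, 0.0, c)               # c = [[5.0, 11.0], [0.0, 25.0]]
```

Because the result is symmetric, only the triangle selected by uplo is ever written; the strictly lower part of c is left untouched here.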
This routine has the following arguments:
uplo Character*1. (input)
Specifies whether the upper or lower triangular part of virtual matrix C is referenced, as follows:
uplo = ’U’ or ’u’: only the upper triangular part of c is referenced.
uplo = ’L’ or ’l’: only the lower triangular part of c is referenced.
trans Character*1. (input)
Specifies the operation to be performed, as follows:
trans = ’N’ or ’n’: C ← α AA^T + β C
trans = ’T’ or ’t’: C ← α A^T A + β C
n Integer. (input)
Specifies the order of virtual matrix C (the number of rows or columns in C). n ≥ 0.
k Integer. (input)
On entry with trans = ’N’ or ’n’: k specifies the number of columns of matrix A.
On entry with trans = ’T’ or ’t’: k specifies the number of rows of matrix A.
k ≥ 0.
alpha VSSYRK: Real. (input)
Scalar factor α.

nunita Integer. (input)
Unit number of the file that contains virtual matrix A of dimension (lda,m), in which m = k if
trans = ’N’ or ’n’, or m = n if trans = ’T’ or ’t’. The virtual matrix itself is a real (VSSYRK)
matrix. The virtual matrix file nunita is used only for input.
ia1 Integer. (input)
Row subscript of the first element of virtual matrix A.
ja1 Integer. (input)
Column subscript of the first element of virtual matrix A.
lda Integer. (input)
Leading dimension of virtual matrix A.
trans = ’N’ or ’n’: lda ≥ MAX(1,n).
trans = ’T’ or ’t’: lda ≥ MAX(1,k).
beta VSSYRK: Real. (input)
Scalar factor β.
nunitc Integer. (input)
Unit number of the file that contains virtual matrix C of dimension (ldc,n). The virtual matrix is
a real (VSSYRK) matrix. The virtual matrix file nunitc is used for both input and output.
ic1 Integer. (input)
Row subscript of the first element of virtual matrix C.
jc1 Integer. (input)
Column subscript of the first element of virtual matrix C.
ldc Integer. (input)
Leading dimension of virtual matrix C. ldc ≥ MAX(1,n).

NOTES
Each calling sequence is similar to that of the equivalent Level 3 BLAS routine, except that a real (in-
memory) matrix is specified by the following:
• Location (for example, A(I,J))
• Leading dimension (LDA)
A virtual (out-of-core) matrix is specified by the following:
• Unit number (NUNITA)
• Location within the file (IA1, JA1)
• Leading dimension (LDA)

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
SSYRK(3S) for the in-memory equivalent of the out-of-core routine VSSYRK



VSTORAGE ( 3S ) VSTORAGE ( 3S )

NAME
VSTORAGE – Declares packed storage mode for a triangular, symmetric, or Hermitian (complex only)
virtual matrix

SYNOPSIS
CALL VSTORAGE (nunit, mode)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSTORAGE declares the mode of packed storage, mode, on the I/O unit nunit that contains a triangular,
symmetric, or Hermitian virtual matrix. These packed storage modes are for use with out-of-core routines in
the Level 3 Virtual Basic Linear Algebra Subprograms (Virtual BLAS or VBLAS) and the Virtual LAPACK
routines.
Packed storage of a triangular or symmetric matrix means that only half of the matrix is actually stored. If a
real virtual matrix is declared to be lower triangular, only the lower triangle is stored; if upper triangular,
only the upper triangle is stored. If the matrix is symmetric, either the lower or upper triangular part is
stored.
Likewise, a complex matrix may be lower or upper triangular, or it may be symmetric, with only the lower
or upper triangle being stored. In addition, a complex matrix may be Hermitian (equal to the conjugate of
its transpose), with only the lower or upper triangle being stored.
When reading from or writing to a virtual matrix, the out-of-core routines do not have to distinguish between
a triangular, symmetric, or Hermitian matrix. They must know only which part of the matrix is being stored:
the full matrix, the lower triangle, or the upper triangle. This defines the three modes of storage for a virtual
matrix:
• Full. The full matrix is stored.
• Lower. Only the lower triangle is stored.
• Upper. Only the upper triangle is stored.
VSTORAGE associates one of these modes of storage with the unit number of a virtual matrix. Then,
whenever an out-of-core routine has that unit number as a virtual matrix argument, it handles the virtual
matrix according to the mode of storage declared in the VSTORAGE call (see the NOTES section).
This routine has the following arguments:
nunit Integer. (input)
Fortran unit number of a virtual matrix.
mode Character*1. (input)
The storage mode of the virtual matrix stored in nunit.

mode = ’F’ or ’f’: full storage mode (the default) is used.
mode = ’L’ or ’l’: lower triangular mode is used.
mode = ’U’ or ’u’: upper triangular mode is used.
Only the first character of the argument is significant; any other characters are ignored. Uppercase
and lowercase mean the same thing.

NOTES
For a given virtual matrix, if VSTORAGE is used to set mode = ’L’ or ’l’, any out-of-core routine that
accesses that virtual matrix must not refer to any elements above the main diagonal. Similarly, if
VSTORAGE is used to set mode = ’U’ or ’u’, any out-of-core routine that accesses the virtual matrix must
not refer to any elements below the main diagonal. If the program tries to access any such elements, it will
terminate with the following error message:
Tried to access the upper part of a lower triangular
matrix (or vice versa), unit number nunit.

Use one call to VSTORAGE for each virtual matrix, unless the virtual matrix uses full storage mode. In this
case, calling VSTORAGE is not necessary, because full matrix storage is the default.
The call (or calls) to VSTORAGE should occur right after the call to the VBEGIN(3S) routine. For a given
virtual matrix, any call to VSTORAGE must occur before the first reference to the virtual matrix. After the
mode is defined for a given virtual matrix, that matrix cannot change modes; the same mode applies to all
subsequent references to the matrix, up until the call to VEND(3S).
In the LAPACK and Level 3 BLAS routines, "packed storage" implies a linearized storage scheme. For the
Virtual LAPACK and VBLAS routines, "packed storage" is similar, but more complicated, because it is the
page structure of the virtual matrix binary file that is linearized; therefore, pages that correspond to the upper
(or lower) part of a triangular matrix are omitted.
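As a hedged illustration of this page-linearization idea (a hypothetical sketch, not Cray's actual on-disk layout), an upper-triangular virtual matrix partitioned into npages-by-npages pages could number its stored pages like this:

```python
# Hypothetical sketch: keep only pages (i, j) with j >= i, numbered
# consecutively row by row, and compare against full storage.
def packed_page_index_upper(i, j, npages):
    """0-based linear index of stored page (i, j) in the packed layout."""
    assert 0 <= i <= j < npages, "page below the diagonal is not stored"
    # rows 0 .. i-1 each contribute (npages - r) stored pages
    return sum(npages - r for r in range(i)) + (j - i)

def packed_page_count(npages):
    """Pages kept by packed storage, vs. npages**2 for full storage."""
    return npages * (npages + 1) // 2

idx = packed_page_index_upper(1, 2, 3)   # idx = 4
```

For npages = 3, packed storage keeps 6 pages instead of 9; as npages grows, the saving approaches the "about 50%" of disk space noted in the VSPOTRF and VSPOTRS NOTES sections.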

EXAMPLES
The following program defines a virtual matrix on unit 1 to be stored in upper triangular packed mode, and
it sets all elements on or above the main diagonal to 1.

      PROGRAM EXMPL1

      INTEGER N, VA
      PARAMETER(VA = 1)      ! UNIT NUMBER OF VIRTUAL MATRIX
      PARAMETER(N = 1000)    ! SIZE OF VIRTUAL MATRIX

      INTEGER I
      REAL X(N)

      X = 1.0                ! SET VECTOR X TO ALL 1’S.

      CALL VBEGIN            ! INITIALIZE OUT-OF-CORE ROUTINES

      CALL VSTORAGE(VA, ’UPPER’)   ! UPPER TRIANGULAR STORAGE MODE

C     FOR EACH ROW OF THE VIRTUAL MATRIX:

      DO I = 1, N
C        SET VA(I, I:N) = X(I:N)
         CALL SCOPY2RV(1, N+1-I, X(I), 1, VA, I, I, N)
      END DO

      CALL VEND

      END

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines, including usage examples
VBEGIN(3S), VEND(3S)



VSTRSM ( 3S ) VSTRSM ( 3S )

NAME
VSTRSM, VCTRSM – Solves a virtual real or virtual complex triangular system of equations with multiple
right-hand sides

SYNOPSIS
CALL VSTRSM (side, uplo, transa, diag, m, n, alpha, nunita, ia1, ja1, lda, nunitb, ib1,
jb1, ldb)
CALL VCTRSM (side, uplo, transa, diag, m, n, alpha, nunita, ia1, ja1, lda, nunitb, ib1,
jb1, ldb)

IMPLEMENTATION
UNICOS systems

DESCRIPTION
VSTRSM solves a virtual real triangular system of equations with multiple right-hand sides. VCTRSM solves
a virtual complex triangular system of equations with multiple right-hand sides. VSTRSM and VCTRSM are
out-of-core versions of STRSM(3S) and CTRSM(3S), which are Level 3 Basic Linear Algebra Subprograms
(Level 3 BLAS).
VSTRSM and VCTRSM each solve one of the following matrix equations, using the operation associated with
each:

Equation             Operation
op(A) X = α B        B ← α op(A^-1) B
X op(A) = α B        B ← α B op(A^-1)
where A^-1 is the inverse of A, α is a scalar, X and B are m-by-n matrices, A is either a unit or nonunit upper
or lower triangular matrix, and op(A) is one of the following:
op(A) = A
op(A) = A^T
op(A) = A^H (VCTRSM only)
where A^T is the transpose of A, and A^H is the conjugate transpose of A.
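A hedged in-memory sketch of one case (side = ’L’, uplo = ’U’, transa = ’N’, diag = ’N’), with the solution X overwriting B as in the routine; the helper name is hypothetical:

```python
# Solve op(A) X = alpha * B with A upper triangular and op(A) = A;
# one back substitution per right-hand-side column, X overwriting B.
def trsm_left_upper(alpha, a, b):
    m, n = len(a), len(b[0])
    for j in range(n):
        for i in reversed(range(m)):
            s = alpha * b[i][j]
            for k in range(i + 1, m):
                s -= a[i][k] * b[k][j]   # b[k][j] already holds x[k][j]
            b[i][j] = s / a[i][i]
    return b

a = [[2.0, 1.0], [0.0, 4.0]]
b = [[5.0], [8.0]]
trsm_left_upper(1.0, a, b)               # b = [[1.5], [2.0]]
```

With diag = ’U’ the division by a[i][i] would be skipped (the diagonal is assumed to be 1 and is not referenced).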

These routines have the following arguments:
side Character*1. (input)
Specifies whether op(A) appears on the left or right of X, as follows:
side = ’L’ or ’l’: op(A)X = αB.
side = ’R’ or ’r’: X*op(A) = αB.
uplo Character*1. (input)
Specifies whether matrix A is an upper or lower triangular matrix, as follows:
uplo = ’U’ or ’u’: A is an upper triangular matrix.
uplo = ’L’ or ’l’: A is a lower triangular matrix.
transa Character*1. (input)
Specifies the form of op(A) to be used in the matrix multiplication, as follows:
transa = ’N’ or ’n’: op(A) = A.
transa = ’T’ or ’t’: op(A) = A^T.
transa = ’C’ or ’c’: op(A) = A^T (VSTRSM), or op(A) = A^H (VCTRSM).
This argument can be of any length. Only the first character is significant (for example, ’n’ and
’notranspose’ have the same effect).
diag Character*1. (input)
Specifies whether A is unit triangular, as follows:
diag = ’U’ or ’u’: A is assumed to be unit triangular.
diag = ’N’ or ’n’: A is not assumed to be unit triangular.
m Integer. (input)
Specifies the number of rows in op(A) and B. m ≥ 0.
n Integer. (input)
Specifies the number of columns in B. n ≥ 0.
alpha VSTRSM: Real. (input)
VCTRSM: Complex. (input)
Scalar factor α.
nunita Integer. (input)
Fortran unit number of the file that contains the triangular virtual matrix A of dimension (lda,k),
in which k = m if trans = ’N’ or ’n’, or k = n if trans = ’T’ or ’t’. The virtual matrix itself is a
real (VSTRSM) or complex (VCTRSM) matrix. The virtual matrix file nunita is used only for
input.
ia1 Integer. (input)
Row subscript of the first element of A.
ja1 Integer. (input)
Column subscript of the first element of A.

lda Integer. (input)
Specifies the first virtual dimension of virtual matrix A.
nunitb Integer. (input)
Fortran unit number of the file that contains the virtual matrix B of dimension (ldb, n). The
virtual matrix B itself is used for both input and output. On entry, the leading m-by-n part of
the virtual matrix B is the right-hand side matrix. On exit, the solution matrix X overwrites B.
ib1 Integer. (input)
Row subscript of the first element of B.
jb1 Integer. (input)
Column subscript of the first element of B.
ldb Integer. (input)
Specifies the first virtual dimension of virtual matrix B.

SEE ALSO
INTRO_CORE(3S) for an introduction to the out-of-core routines
STRSM(3S) for a description of STRSM(3S) and CTRSM(3S), which are the in-memory equivalents of the
out-of-core routines VSTRSM and VCTRSM



INTRO_MACH ( 3S ) INTRO_MACH ( 3S )

NAME
INTRO_MACH – Introduction to machine constant functions

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
These functions return machine constants for UNICOS systems.
The SLAMCH routine runs on UNICOS and UNICOS/mk systems. The R1MACH(3S) and SMACH(3S)
routines run only on UNICOS systems.
The following list contains the purpose and name of each machine constant function. When an entry lists
more than one routine, the first routine named is the man page that documents all of the listed routines.
• R1MACH: Returns machine constants
• SLAMCH: LAPACK routine (see INTRO_LAPACK(3S)) that returns a wide variety of machine
constants
• SMACH, CMACH: Return machine epsilon, and numerically safe small and large normalized numbers



R1MACH ( 3S ) R1MACH ( 3S )

NAME
R1MACH – Returns UNICOS machine constants

SYNOPSIS
r = R1MACH (i)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
The R1MACH function returns UNICOS machine constants.
This function has the following arguments:
r Real. (output)
Machine constant returned.
i Integer. (input)
Indicates the machine constant to be returned.
Must be an integer from 1 to 5; any other value prints an error message on standard output and
executes a Fortran STOP (thus aborting the program). The following lists the machine constant
returned for each valid value of i:
Value Machine Constant Returned
1 B**(EMIN– 1), the smallest positive magnitude
2 B**EMAX*(1 – B**(– T)), the largest magnitude
3 B**(– T), the smallest relative spacing
4 B**(1– T), the largest relative spacing
5 LOG10(B); B is the base or radix of the machine
where
B = Base of the machine
T = Number of base-B digits in the mantissa
EMIN = Minimum exponent before underflow
EMAX = Largest exponent before overflow
The constants that define the model of rounded floating-point arithmetic on Cray Research systems are as
follows:
B = 2      EMIN = -8189
T = 47     EMAX = 8190
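The spacing constants can also be probed at run time on any system. The following Python fragment computes the host's own largest relative spacing, the quantity R1MACH(4) describes (B**(1-T)); on IEEE double hardware this is 2**-52, not the Cray value:

```python
# Halve a candidate spacing until adding half of it to 1.0 no longer
# changes 1.0; what remains is the largest relative spacing, B**(1-T).
def largest_relative_spacing():
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

eps = largest_relative_spacing()   # 2**-52 on IEEE double hardware
```

This is an illustrative probe only; R1MACH itself returns fixed, precomputed constants rather than measuring them.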

Because of the characteristics of Cray Research floating-point hardware, the constant used for R1MACH(1)
is one bit larger than the smallest magnitude defined by the model. The constants that R1MACH returns, in
both decimal and the internal representation, are as follows:
R1MACH(1) = 0.3667207735109720E-2465    0200034000000000000001
R1MACH(2) = 0.2726870339048520E+2466    0577767777777777777776
R1MACH(3) = 0.7105427357601002E-14      0377224000000000000000
R1MACH(4) = 0.1421085471520200E-13      0377234000000000000000
R1MACH(5) = 0.3010299956639813E+00      0377774642023241175720

SEE ALSO
SMACH(3S)



SLAMCH ( 3S ) SLAMCH ( 3S )

NAME
SLAMCH – Determines single-precision machine parameters

SYNOPSIS
s = SLAMCH(cmach)

IMPLEMENTATION
UNICOS and UNICOS/mk systems

DESCRIPTION
SLAMCH determines single-precision machine parameters.
This routine accepts the following arguments:
cmach Character*1. (input)
Specifies the value to be returned by SLAMCH.
’B’ or ’b’ = base (base of the machine)
’E’ or ’e’ = eps (epsilon, relative machine precision, base**(1-t))
’L’ or ’l’ = emax (largest exponent before overflow)
’M’ or ’m’ = emin (minimum exponent before (gradual) underflow)
’N’ or ’n’ = t (number of (base) digits in the mantissa)
’O’ or ’o’ = rmax (overflow threshold, (base**emax)*(1-eps))
’P’ or ’p’ = prec (precision)
’R’ or ’r’ = rnd (1.0 if rounding occurs in addition; otherwise, 0.0)
’S’ or ’s’ = sfmin (safe minimum such that 1/sfmin does not overflow)
’U’ or ’u’ = rmin (underflow threshold, base**(emin-1))

NOTES
The constants used to define the model of rounded floating-point arithmetic on UNICOS systems are as
follows:
base = 2
t = 47
emin = – 8189
emax = 8190
Two exceptions are the values used for sfmin and rmin. They are taken to be 1 bit larger than the smallest
magnitude defined by the model to ensure that reciprocal operations on these values do not become smaller
than sfmin or larger than rmax. The values returned by SLAMCH on UNICOS systems are as follows:

SLAMCH(’B’) = 2
SLAMCH(’E’) = 0.1421085471520200E-13
SLAMCH(’L’) = 8190
SLAMCH(’M’) = -8189
SLAMCH(’N’) = 47
SLAMCH(’O’) = 0.2726870339048520E+2466
SLAMCH(’P’) = 0.2842170943040401E-13
SLAMCH(’R’) = 0
SLAMCH(’S’) = 0.3667207735109720E-2465
SLAMCH(’U’) = 0.3667207735109720E-2465

The constants used to define the model of rounded floating-point arithmetic on UNICOS/mk systems are as
follows:
base = 2
t = 53
emin = – 1021
emax = 1023
The values returned by SLAMCH on UNICOS/mk systems are as follows:
SLAMCH(’B’) = 2
SLAMCH(’E’) = 0.222044604925031308E-15
SLAMCH(’L’) = 1023
SLAMCH(’M’) = -1021
SLAMCH(’N’) = 53
SLAMCH(’O’) = 0.898846567431157754E+308
SLAMCH(’P’) = 0.444089209850062616E-15
SLAMCH(’R’) = 1
SLAMCH(’S’) = 0.222507385850720138E-307
SLAMCH(’U’) = 0.222507385850720138E-307



SMACH ( 3S ) SMACH ( 3S )

NAME
SMACH, CMACH – Returns machine epsilon, small or large normalized numbers

SYNOPSIS
result = SMACH (int)
result = CMACH (int)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
The SMACH and CMACH routines return machine epsilon, small or large normalized numbers.
These routines have the following arguments:
result Real. (output)
Machine constant returned.
int Integer. (input)
Selects machine constant to be returned.
1 ≤ int ≤ 3. Any other value returns an error message.
For SMACH, int indicates that one of the following machine constants is returned as result:
int result
1 0.7105427357601002E-14
The machine epsilon (the smallest positive machine number ε for which 1.0 ± ε ≠ 1.0).
2 0.1290284014791423E-2449
A "numerically safe" number close to the smallest normalized, representable number.
3 0.7750231643082450E+2450
A "numerically safe" number close to the largest normalized, representable number.
For CMACH, int indicates that one of the following machine constants is returned as result:
int result
1 0.7105427357601002E-14
The machine epsilon (the smallest positive machine number ε for which 1.0 ± ε ≠ 1.0).
2 0.1347558278913286E-1216
A "numerically safe" number close to the square root of the smallest normalized, representable
number.
3 0.7420829329967288E+1217
A "numerically safe" number close to the square root of the largest normalized, representable number.

You can use CMACH(2) and CMACH(3) to prevent overflow during complex arithmetic.
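The same guarding idea can be illustrated without the Cray constants: scale before squaring so that intermediate values stay in range. A hedged Python sketch (hypothetical helper, not the library's method):

```python
import math

# Scale by the larger component before squaring, so |z| = sqrt(x^2 + y^2)
# cannot overflow even when the naive x*x + y*y would; this is the kind
# of guard the "numerically safe" constants support.
def safe_cabs(x, y):
    m = max(abs(x), abs(y))
    if m == 0.0:
        return 0.0
    return m * math.sqrt((x / m) ** 2 + (y / m) ** 2)

z = safe_cabs(3.0e200, 4.0e200)   # about 5.0e200; naive x*x overflows to inf
```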

SEE ALSO
Lawson, C. L., Hanson, R. J., Kincaid, D. R., and Krogh, F. T., "Basic Linear Algebra Subprograms for
Fortran Usage – An Extended Report," Sandia Technical Report SAND 77-0898, Sandia Laboratories,
Albuquerque, NM, 1977.

INTRO_SUPERSEDED ( 3S ) INTRO_SUPERSEDED ( 3S )

NAME
INTRO_SUPERSEDED – Introduction to superseded Scientific Library routines

IMPLEMENTATION
UNICOS systems

DESCRIPTION
The routines in this section are superseded by newer routines. Many routines and one software package
(LINPACK) are almost, but not totally, superseded. Each of these superseded routines or packages is
documented in another section, according to its purpose.
Each of these routines, whether fully, mostly, or partially superseded, is minimally supported to maintain
continuity.
These routines are not available on Cray T90 systems that support IEEE arithmetic.
Fully Superseded Routines
The following table contains the purpose and name of each superseded Scientific Library routine. Column 3
contains a reference to the preferred replacement for each superseded routine. Each superseded routine has
its own man page.

Purpose                                                    Superseded routine   Preferred routine

Gathers a vector from a source vector                      GATHER               None needed (see GATHER(3S))
Solves a system of linear equations by inverting           MINV                 SGESV (see INTRO_LAPACK(3S))
  a square matrix
Multiplies a matrix by a vector (unit increments)          MXV                  SGEMV(3S)
Multiplies a matrix by a vector (arbitrary increments)     MXVA                 SGEMV(3S)
Multiplies a matrix by a matrix (unit increments)          MXM                  SGEMM(3S)
Multiplies a matrix by a matrix (arbitrary increments)     MXMA                 SGEMM(3S)
Multiplies a matrix by a column vector and adds the        SMXPY                SGEMV(3S)
  result to another column vector
Multiplies a matrix by a row vector and adds the result    SXMPY                SGEMV(3S)
  to another row vector
Scatters a vector into another vector                      SCATTER              None needed (see SCATTER(3S))
Solves a tridiagonal system                                TRID                 SGTSV (see INTRO_LAPACK(3S))

Mostly Superseded and Partially Superseded Routines
The following table contains the purpose and name of each mostly superseded or partially superseded
Scientific Library routine. Column 3 contains a reference to the preferred replacement for each routine.
Each of the named routines has its own man page.
All individual routines listed here are Fast Fourier Transform (FFT) routines, which are partially or mostly
superseded by routines from the UNICOS Standard FFT package currently available only on UNICOS. The
routine package LINPACK(3S) is also listed. Most routines in this package are completely superseded by
routines from the more recent LAPACK package. A few LINPACK routines are not superseded at all.

Purpose                                                  Routine    Preferred routine

Applies a complex-to-complex FFT                         CFFT       CCFFT(3S)
Applies a complex-to-complex FFT                         CFFT2      CCFFT(3S)
Applies a complex-to-complex two-dimensional FFT         CFFT2D     CCFFT2D(3S)
Applies a complex-to-complex three-dimensional FFT       CFFT3D     CCFFT3D(3S)
Applies multiple complex-to-complex FFTs                 CFFTMLT    CCFFTM(3S)
Applies a complex-to-real FFT                            CSFFT2     CSFFT (see SCFFT(3S))
Contains routines that solve real or complex dense       LINPACK    INTRO_LAPACK(3S)
  linear systems
Applies multiple complex-to-complex FFTs                 MCFFT      CCFFTM(3S)
Applies multiple real-to-complex or complex-to-real      RFFTMLT    SCFFTM or CSFFTM (see SCFFTM(3S))
  FFTs
Applies a real-to-complex FFT                            SCFFT2     SCFFT(3S)



GATHER ( 3S ) GATHER ( 3S )

NAME
GATHER – Gathers a vector from a source vector

SYNOPSIS
CALL GATHER (n, a, b, index)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
GATHER is defined as follows:

a(i) = b(index(i)), where i = 1, ..., n

This routine has the following arguments:


n Integer. (input)
Number of elements in arrays a and index (not in b).
a Real or integer array of dimension n. (output)
Contains the result vector.
b Real or integer array of dimension max(index(i): i=1,. . .,n). (input)
Contains the source vector.
index Integer array of dimension n. (input)
Contains the vector of indices.
The Fortran equivalent of this routine is as follows:
      DO 100 I=1,N
         A(I)=B(INDEX(I))
  100 CONTINUE
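An equivalent gather in Python (0-based lists, whereas the Fortran routine uses 1-based index values):

```python
# a(i) = b(index(i)) with Fortran-style 1-based index values.
def gather(n, b, index):
    return [b[index[i] - 1] for i in range(n)]

a = gather(3, [10.0, 20.0, 30.0, 40.0], [4, 1, 3])   # a = [40.0, 10.0, 30.0]
```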

CAUTIONS
You should not use this routine on systems that have Compress-Index Gather-Scatter (CIGS) hardware,
because it will degrade performance.

SEE ALSO
SCATTER(3S)



MINV ( 3S ) MINV ( 3S )

NAME
MINV – Solves systems of linear equations by inverting a square matrix

SYNOPSIS
CALL MINV (ab, n, ldab, scratch, det, tol, m, mode)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
MINV computes the determinant of a matrix A, subject to the restriction imposed by tol (see the CAUTIONS
section). You may also use it to solve systems of linear equations (if m > 0) or to compute the inverse of a
square matrix (if mode ≠ 0).
If m>0, MINV solves the following matrix equation:
AX = B

where B represents an n-by-m matrix of known values, and X represents an n-by-m matrix of unknowns for
which to solve.
You may consider each column of B to be the right-hand side values of a system of linear equations, and
each corresponding column of X to be the unknowns for the system of linear equations defined by A and the
corresponding column of B. On output, the solution matrix X overwrites the right-hand side matrix B.
If mode ≠ 0, MINV calculates A −1, which overwrites A. If mode = 0, A is still overwritten, but not by A −1.
This routine has the following arguments:
ab Real array of dimension (ldab,n+m). (input and output)
On input, ab contains the augmented matrix A:B. A is the square matrix to be inverted (if
mode ≠ 0), and B is the matrix whose columns are the right-hand sides for the systems of linear
equations to be solved.
On output, ab contains the augmented matrix Z:X. Z is either A −1, the inverse of A (if mode is
nonzero), or some other n-by-n matrix replacing A (if mode = 0). X is the matrix, each column of
which is the solution vector for the system of linear equations defined by the corresponding
column of B.
n Integer. (input)
Order of matrix A; that is, the number of rows in A (same as number of columns).
ldab Integer. (input)
Leading dimension of array ab.
ldab ≥ n .


scratch Real array of dimension 2*n. (output)
Workspace for MINV.
det Real. (output)
Determinant of A, computed as the product of pivot elements.
tol Real. (input)
Lower limit for the determinant’s partial products.
A is declared singular when the partial product of pivot elements is less than or equal in
magnitude to this parameter, which should be positive.
m Integer. (input)
Number of columns in B.
m = 0 implies that no right-hand sides exist, hence no linear systems to solve.
mode Integer. (input)
Specifies whether the inverse of A is required.
In ab, the inverse of A overwrites A only if mode ≠0.
The following summarizes the effect of different combinations of parameter values:

Parameter values Results returned by MINV

m=0, mode=0 det(A)


m=0, mode≠0 det(A), A −1
m>0, mode=0 det(A), X = A −1B
m>0, mode≠0 det(A), A −1, X = A −1B
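For example, a call that returns det(A), A −1, and the solution X for a single right-hand side might
look like the following sketch (the array names and the tolerance value shown are illustrative only):

      REAL AB(10,11), SCRATCH(20), DET
*     AB columns 1 through 10 hold A; column 11 holds the
*     right-hand side B.
      CALL MINV (AB, 10, 10, SCRATCH, DET, 1.0E-30, 1, 1)
*     If MINV did not abort (see CAUTIONS), AB now holds the
*     inverse of A in columns 1 through 10 and the solution X
*     in column 11.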

NOTES
MINV solves linear equations by using a partial pivot search (one unused row) and Gauss-Jordan reduction.
MINV is superseded by the LAPACK routines SGETRF(3L) and SGETRI(3L) (which together can calculate
the determinant and inverse of a general square matrix), or by the LAPACK routine SGESV(3L) (which
solves the matrix equation AX = B). LAPACK routines are preferred because they are the emerging de facto
standard linear systems interface. Using LAPACK routines will enhance your program’s portability, and also
should enhance its performance portability.
Man pages for the LAPACK routines SGETRF(3L), SGETRI(3L), and SGESV(3L) are available only online,
using the man(1) command.

CAUTIONS
At each reduction step, MINV computes the partial product of pivot elements. If this product’s magnitude is
less than or equal to tol, MINV aborts computation. Therefore, if the value returned in det is less than or equal in
magnitude to the value input as tol, MINV did not invert A or solve for X (although A:B may have been
overwritten); in this case, the value returned in det may not be the determinant of A.


SEE ALSO
INTRO_LAPACK(3S) for more information and further references regarding the preferred routines
SGETRF(3L), SGETRI(3L), SGESV(3L) (available only online)
man(1) in the UNICOS User Commands Reference Manual
Partial Pivoting Linear Equation Solver (MINV), publication SN– 0215 (1980), which contains more
information on the algorithm used by MINV
Knuth, D.E., The Art of Computer Programming, Volume 1 (Fundamental Algorithms), Reading, MA:
Addison-Wesley, 1973; pp. 301– 302



MXM ( 3S ) MXM ( 3S )

NAME
MXM – Computes matrix-times-matrix product (unit increments)

SYNOPSIS
CALL MXM (a, nra, b, nca, c, ncb)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
MXM computes the nra-by-ncb matrix product C = AB of the nra-by-nca matrix A and the nca-by-ncb matrix
B.
This routine has the following arguments:
a Real array of dimension (nra,nca). (input)
Matrix A, the first factor.
nra Integer. (input)
Number of rows in A (same as number of rows in C).
b Real array of dimension (nca,ncb). (input)
Matrix B, the second factor.
nca Integer. (input)
Number of columns in A (same as number of rows in B).
c Real array of dimension (nra,ncb). (output)
Matrix C, the product AB.
ncb Integer. (input)
Number of columns in B (same as number of columns in C).

NOTES
You should use the Level 3 Basic Linear Algebra Subprogram (Level 3 BLAS) SGEMM(3S) rather than MXM.
BLAS routines are preferred because they are the de facto standard linear algebra interface. Using Level 3
BLAS routines will enhance your program’s portability, and also should enhance its performance portability.
For example,
      CALL MXM (A, NRA, B, NCA, C, NCB)

is equivalent to,
      CALL SGEMM ('N', 'N', NRA, NCB, NCA, 1.0,
     $            A, NRA, B, NCA, 0.0, C, NRA)


MXM is restricted to multiplying matrices that have elements stored by columns in successive memory
locations. MXMA(3S) is a general subroutine for multiplying matrices that can be used to multiply matrices
that do not satisfy the requirements of MXM (although SGEMM also supersedes MXMA). If B and C have only
one column, MXV(3S) or MXVA(3S) (both superseded by Level 2 BLAS routine SGEMV, see SGEMV(3S)) are
similar subroutines, each of which computes the product of a matrix and a vector.

CAUTIONS
The product must not overwrite either factor. For example, the following call will not (in general) assign the
product AB to A:
      CALL MXM (A, NRA, B, NCA, A, NCA)

SEE ALSO
MXMA(3S) to multiply less strictly declared matrices
MXV(3S), MXVA(3S) to perform a matrix-vector multiply
SGEMM(3S), which supersedes MXM and MXMA
SGEMV(3S), which supersedes MXV and MXVA



MXMA ( 3S ) MXMA ( 3S )

NAME
MXMA – Computes matrix-times-matrix product (arbitrary increments)

SYNOPSIS
CALL MXMA (sa, iac, iar, sb, ibc, ibr, sc, icc, icr, nrp, m, ncp)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
MXMA calculates the following nrp-by-ncp matrix product, where A is an nrp-by-m matrix and B is an m-by-ncp
matrix:
C = AB

This routine has the following arguments:


sa Real array of dimension ( max ( iac , iar ), m ). (input)
Contains matrix A, the first operand.
iac Integer. (input)
Memory increment in sa between adjacent column elements of A.
iar Integer. (input)
Memory increment in sa between adjacent row elements of A.
min ( iac , iar ) * nrp ≤ max ( iac , iar ).
sb Real array of dimension ( max ( ibc , ibr ), ncp ). (input)
Contains matrix B, the second operand.
ibc Integer. (input)
Memory increment in sb between adjacent column elements of B.
ibr Integer. (input)
Memory increment in sb between adjacent row elements of B.
min ( ibc , ibr ) * m ≤ max ( ibc , ibr ).
sc Real array of dimension ( max ( icc , icr ), ncp ). (output)
Array that receives C, the product AB.
icc Integer. (input)
Memory increment in sc between adjacent column elements of C.
icr Integer. (input)
Memory increment in sc between adjacent row elements of C.
min ( icc , icr ) * nrp ≤ max ( icc , icr ).


nrp Integer. (input)
Number of rows in C (same as number of rows in A).
m Integer. (input)
Middle dimension: number of columns in A (same as number of rows in B).
ncp Integer. (input)
Number of columns in C (same as number of columns in B).

NOTES
You should use the Level 3 Basic Linear Algebra Subprogram (Level 3 BLAS) SGEMM (see SGEMM(3S))
rather than MXMA, because BLAS routines are the de facto standard linear algebra interface. Using Level 3
BLAS routines will enhance your program’s portability, and also should enhance its performance portability.
MXMA is a general subroutine for multiplying matrices. It can be used to compute a product of matrices in
which one or more of the operands or the product must be transposed. You can use MXMA to multiply any
matrices whose elements are not stored by columns in successive memory locations, provided only that the
elements of rows and columns are spaced by increments constant for each matrix. (The preferred routine,
SGEMM, also can do these operations.)
If B and C have only one column, MXVA(3S) (superseded by Level 2 BLAS routine SGEMV, see SGEMV(3S))
is a similarly general subroutine that computes the product of a matrix and a vector.
The product of matrices whose elements are stored by columns in successive memory locations can be
computed slightly faster using MXM(3S) (superseded by SGEMM) for matrices of more than one column or
MXV(3S) (superseded by SGEMV) for matrices B and C which have only one column.
The following subroutine calls are equivalent:
      CALL MXMA (SA,1,NRP, SB,1,M, SC,1,NCP, NRP,M,NCP)

      CALL MXM (SA,NRP, SB,M, SC,NCP)

(The product elements computed by MXM are also stored by columns in successive memory locations.)
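For example, when all three matrices are stored normally in arrays with leading dimensions LDSA, LDSB,
and LDSC (names assumed here for illustration), the call

      CALL MXMA (SA,1,LDSA, SB,1,LDSB, SC,1,LDSC, NRP,M,NCP)

should be equivalent to the Level 3 BLAS call (assuming SGEMM's standard argument order)

      CALL SGEMM ('N', 'N', NRP, NCP, M, 1.0,
     $            SA, LDSA, SB, LDSB, 0.0, SC, LDSC)

SGEMM's transpose arguments ('T') cover the transposed products that MXMA expresses by exchanging
its column and row increments.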

CAUTIONS
To be computed correctly, the product must not overwrite either operand. Thus, if ALPHA is a
one-dimensional array,
CALL MXMA(A LPH A,3,9, BET A,1,2, ALPHA( 2),1,3 , 3,2 ,2)

correctly computes the product of the matrices defined in ALPHA and BETA, whereas the following does not
(in general):
CALL MXMA(A LPH A,3,9, BET A,1,2, ALPHA, 1,3, 3,2,2)


SEE ALSO
MXM(3S) to multiply more strictly declared matrices
MXV(3S), MXVA(3S) to perform a matrix-vector multiply
SGEMM(3S), which supersedes MXM and MXMA
SGEMV(3S), which supersedes MXV and MXVA



MXV ( 3S ) MXV ( 3S )

NAME
MXV – Computes matrix-times-vector product (unit increments)

SYNOPSIS
CALL MXV (a, nra, b, nca, c)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
MXV computes the nra vector product c = Ab of the nra-by-nca matrix A and the nca vector b.
This routine has the following arguments:
a Real array of dimension (nra,nca). (input)
Matrix factor.
nra Integer. (input)
Number of rows in the matrix.
b Real array of dimension nca. (input)
Vector factor.
nca Integer. (input)
Number of columns in the matrix.
c Real array of dimension nra. (output)
Vector product.

NOTES
You should use the Level 2 Basic Linear Algebra Subprogram (Level 2 BLAS) SGEMV (see SGEMV(3S))
rather than MXV, because BLAS routines are the de facto standard linear algebra interface. Using Level 2
BLAS routines will enhance your program’s portability, and also should enhance its performance portability. For
example,
      CALL MXV (A, NRA, B, NCA, C)

is equivalent to,
      CALL SGEMV ('N', NRA, NCA, 1.0, A, NRA, B, 1, 0.0, C, 1)

MXV is restricted to using matrix and vector arguments that have elements stored by columns in successive
memory locations. MXVA(3S) is a general matrix-vector multiply subroutine that can use matrix and vector
arguments that do not satisfy the requirements of MXV (although SGEMV also supersedes MXVA).


CAUTIONS

MXV is restricted to multiplying a vector that occupies successive memory locations (in order) by a matrix
whose elements are stored by columns in successive memory locations. MXVA is a general subroutine for
multiplying a matrix and a vector, which can be used to multiply a vector by a matrix stored with arbitrary
column and row increments.

SEE ALSO
MXM(3S), MXMA(3S) to perform a matrix-matrix multiply
MXVA(3S) to multiply with less strictly declared matrix and vector arguments
SGEMM(3S), which supersedes MXM and MXMA
SGEMV(3S), which supersedes MXV and MXVA



MXVA ( 3S ) MXVA ( 3S )

NAME
MXVA – Computes matrix-times-vector product (arbitrary increments)

SYNOPSIS
CALL MXVA (sa, iac, iar, sb, ib, sc, ic, nra, nca)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
MXVA calculates the following nra matrix-vector product:
c = Ab

where A is an nra-by-nca matrix, and b is an nca vector.


This routine has the following arguments:
sa Real array of dimension ( max ( iac , iar ), nca ). (input)
Contains matrix A, the first operand.
iac Integer. (input)
Memory increment in sa between adjacent column elements of A.
iar Integer. (input)
Memory increment in sa between adjacent row elements of A.
min ( iac , iar ) * nra ≤ max ( iac , iar ).
sb Real array of dimension (nca − 1) * ib + 1. (input)
Contains vector b, the second operand.
ib Integer. (input)
Memory increment in sb between adjacent elements of b.
sc Real array of dimension (nra − 1) * ic + 1. (output)
Array that receives c, the product Ab.
ic Integer. (input)
Memory increment in sc between adjacent elements of c.
nra Integer. (input)
Number of rows in A (same as number of elements in c).
nca Integer. (input)
Number of columns in A (same as number of elements in b).
Suppose sa is a two-dimensional array defined to have leading dimension ldsa, as follows:
      DIMENSION SA(LDSA,NCA)


Then
      CALL MXVA (SA,IAC,LDSA, SB,IB, SC,IC, NCA,NCA)

multiplies a square submatrix A of sa times a vector b from sb, storing the product c in sc, while
      CALL MXVA (SA,LDSA,IAC, SB,IB, SC,IC, NCA,NCA)

computes the product c as A T times the same vector b.

NOTES
You should use the Level 2 Basic Linear Algebra Subprogram (Level 2 BLAS) SGEMV(3S) rather than
MXVA, because BLAS routines are the de facto standard linear algebra interface. Using Level 2 BLAS routines
will enhance your program’s portability, and also should enhance its performance portability.
MXVA is a general matrix-vector multiply subroutine. As demonstrated earlier, you can use MXVA with a
matrix or its transpose. You can use MXVA to multiply any vector or matrix arguments whose elements are
not stored by columns in successive memory locations, provided only that the elements of rows and columns
are spaced by increments constant for each matrix. (The preferred routine, SGEMV, also can do these
operations.)
The product of a matrix and a vector whose elements are stored by columns in successive memory locations
can be computed slightly faster by using MXV(3S) (superseded by SGEMV).
The following two subroutine calls have the same result:
      CALL MXVA (SA,1,NRA, SB,1, SC,1, NRA,NCA)

      CALL MXV (SA,NRA, SB,NCA, SC)

(The product elements computed by MXV are also stored in successive memory locations.)

CAUTIONS
To be computed correctly, the product must not overwrite either operand. Thus, for example, the following
call will not (in general) compute correctly the product of the matrix in sa and the vector in sb:
      CALL MXVA (SA,IAC,IAR, SB,IB, SB,IB, NRA,NCA)

SEE ALSO
MXM(3S), MXMA(3S) to perform a matrix-matrix multiply
MXV(3S) to multiply with more strictly declared matrix and vector arguments
SGEMM(3S), which supersedes MXM and MXMA
SGEMV(3S), which supersedes MXV and MXVA



SCATTER ( 3S ) SCATTER ( 3S )

NAME
SCATTER – Scatters a vector into another vector

SYNOPSIS
CALL SCATTER (n, a, index, b)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
SCATTER is defined as follows:
a(index(i)) = b(i)
where i = 1, . . ., n
This routine has the following arguments:
n Integer. (input)
Number of elements in arrays index and b (not in a).
a Real or integer array of dimension max(index(i): i=1,. . .,n). (output)
Contains the result vector.
b Real or integer array of dimension n. (input)
Contains the source vector.
index Integer array of dimension n. (input)
Contains the vector of indices.
The Fortran equivalent of this routine is as follows:
      DO 100 I=1,N
         A(INDEX(I))=B(I)
  100 CONTINUE

CAUTIONS
You should not use this routine on systems that have Compress-Index Gather-Scatter (CIGS) hardware,
because it will degrade performance.

SEE ALSO
GATHER(3S)



SMXPY ( 3S ) SMXPY ( 3S )

NAME
SMXPY – Multiplies a column vector by a matrix and adds the result to another column vector

SYNOPSIS
CALL SMXPY (n1, y, n2, ldam, x, am)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
SMXPY performs the matrix-vector operation:
y ← y + Mx
where y is a vector of length n1, M is an n1-by-n2 matrix, and x is a vector of length n2.
This routine has the following arguments:
n1 Integer. (input)
Number of elements in y (same as number of rows in M).
y Real array of dimension n1. (input and output)
On input, y is the vector to be added to the product of M and x. On output, the result vector
overwrites y.
n2 Integer. (input)
Number of elements in x (same as number of columns in M).
ldam Integer. (input)
Leading dimension of array am, which contains the matrix M.
n1 ≤ ldam.
x Real array of dimension n2. (input)
Vector used in the matrix-vector product.
am Real array of dimension (ldam, n2). (input)
Contains the n1-by-n2 matrix M used in the matrix-vector product.

NOTES
You should use the Level 2 Basic Linear Algebra Subprogram (Level 2 BLAS) SGEMV (see SGEMV(3S))
rather than SMXPY, because BLAS routines are the de facto standard linear algebra interface. Using Level 2
BLAS routines will enhance your program’s portability, and also should enhance its performance portability.
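For example (a sketch that assumes SGEMV's standard Level 2 BLAS argument order),

      CALL SMXPY (N1, Y, N2, LDAM, X, AM)

is equivalent to,

      CALL SGEMV ('N', N1, N2, 1.0, AM, LDAM, X, 1, 1.0, Y, 1)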


SEE ALSO
SGEMV(3S), which supersedes SMXPY
SXMPY(3S) (also superseded by SGEMV) to multiply a row vector by a matrix and add the result to another
row vector



SXMPY ( 3S ) SXMPY ( 3S )

NAME
SXMPY – Multiplies a row vector by a matrix and adds the result to another row vector

SYNOPSIS
CALL SXMPY (n1, ldy, sy, n2, ldx, sx, ldam, am)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
SXMPY performs the matrix-vector operation:
y ← y + xM
where y is a row vector of length n1, x is a vector of length n2, and M is an n2-by-n1 matrix.
These "row vectors" would normally be written as transposes in the more conventional "column vector"
notation; however, SXMPY assumes that these vectors are actual rows from matrices Y and X, not merely lists
of elements considered to be a row for algebraic purposes. For some numbers l and m, the elements of y
and x are as follows:
y(i) = Y(l,i), for i = 1, . . ., n1, and x(j) = X(m,j), for j = 1, . . ., n2
This routine has the following arguments:
n1 Integer. (input)
Number of columns in Y (same as number of elements in y, same as number of columns in M).
ldy Integer. (input)
Leading dimension of Y (same as increment between elements of y).
sy Real element from array of dimension (ldy, n1). (input and output)
sy locates the first element of the vector y; that is, Yl 1, or Y(l,1).
On input, y is the vector to be added to the product of x and M. On output, the result vector
overwrites y.
n2 Integer. (input)
Number of columns in X (same as number of elements in x, same as number of rows in M).
ldx Integer. (input)
Leading dimension of X (same as increment between elements of x).
sx Real element from array of dimension (ldx, n2). (input)
sx locates the first element of the vector x; that is, Xm 1, or X(m,1).
x is the row vector used in the vector-matrix product.
ldam Integer. (input)
Leading dimension of array am.
n2 ≤ ldam.


am Real array of dimension (ldam, n1). (input)
Contains the n2-by-n1 matrix M used in the vector-matrix product.

NOTES
Cray Research recommends that you use the Level 2 Basic Linear Algebra Subprogram (Level 2 BLAS)
SGEMV (see SGEMV(3S)) rather than SXMPY, because BLAS routines are the de facto standard linear algebra
interface. Using Level 2 BLAS routines will enhance your program’s portability, and also should enhance its
performance portability.
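For example, because the row-vector update y ← y + xM is the transpose of the column-vector update
y^T ← y^T + M^T x^T, a call such as

      CALL SXMPY (N1, LDY, SY, N2, LDX, SX, LDAM, AM)

should be equivalent to the following sketch, which assumes SGEMV's standard Level 2 BLAS argument
order and addresses the row vectors through their strides:

      CALL SGEMV ('T', N2, N1, 1.0, AM, LDAM, SX, LDX, 1.0, SY, LDY)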

SEE ALSO
SGEMV(3S), which supersedes SXMPY
SMXPY(3S) (also superseded by SGEMV), to multiply a matrix by a column vector and add the result to
another column vector



TRID ( 3S ) TRID ( 3S )

NAME
TRID – Solves a tridiagonal system

SYNOPSIS
CALL TRID (tl, tc, tr, inct, n, s, incs)

IMPLEMENTATION
UNICOS systems (except Cray T90 systems that support IEEE arithmetic)

DESCRIPTION
TRID solves a tridiagonal system for a single right-hand side by a combination of burn-at-both-ends and 3:1
cyclic reduction. 3:1 cyclic reduction is used until the size of the system is reduced to 40. Then the reduced
system is solved directly using a burn-at-both-ends algorithm. The remaining values are obtained by
backfilling. No type of pivoting is done.
This routine has the following arguments:
tl Real array of dimension (n– 1)*inct+1. (input)
Contains the lower off-diagonal of the tridiagonal matrix with tl(1) = 0.0.
tc Real array of dimension (n– 1)*inct+1. (input)
Contains the main diagonal of the tridiagonal matrix.
tr Real array of dimension (n– 1)*inct+1. (input)
Contains the upper off-diagonal of the tridiagonal matrix with tr(1+(n– 1)*inct) = 0.0.
inct Integer. (input)
Increment between elements of tl, tc, and tr.
Typically, inct = 1.
n Integer. (input)
Contains the dimension of the matrix system being solved.
s Real array of dimension (n– 1)*incs+1. (input and output)
On input, s contains the right-hand side values of the matrix system. On output, s contains the
solution of the matrix system.
incs Integer. (input)
Increment between elements of s.
Typically, incs = 1.


NOTES
To perform this operation using the same algorithm, CRI recommends that you use the newer routine
SDTSOL(3S) rather than TRID. SDTSOL uses the same algorithm as TRID, but it is part of
a larger package of tridiagonal system routines, including SDTTRF(3S) to factor the tridiagonal matrix, and
SDTTRS(3S) to solve systems based on that factorization. There are also complex versions of these
routines: CDTSOL(3S), CDTTRF(3S), and CDTTRS(3S).
To perform this operation for ill-conditioned systems, CRI recommends the LAPACK routine SGTSV(3L),
which uses partial pivoting for better numerical stability.
When calling TRID, the elements tl(1) and tr(1+(n– 1)*inct) must be allocated and set equal to 0.

EXAMPLES
The following examples show how to set up arguments tl, tc, and tr, given the tridiagonal matrix T. Let T
be the tridiagonal matrix:

 11 12 0 0 0 
 
 21 22 23 0 0 
T= 0 32 33 34 0 
 0 0 43 44 45 
 
î 0 0 0 54 55 
Then to pass T to TRID (with inct = 1), set

0  11  12 
     
21  22  23 
tl = 32  tc = 33  tr = 34 
43  44  45 
     
î 54  î 55  î 0 
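With the arrays set up this way, a call that solves this 5-by-5 system in place for a right-hand side
stored in S might look like the following sketch (array names are illustrative; note that the first
element of tl and the last element of tr must be allocated and set to 0.0):

      REAL TL(5), TC(5), TR(5), S(5)
*     Fill TL, TC, and TR with the diagonals of T, and S with
*     the right-hand side; TL(1) and TR(5) must be 0.0.
      CALL TRID (TL, TC, TR, 1, 5, S, 1)
*     On return, S contains the solution of the tridiagonal system.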
SEE ALSO
SDTSOL(3S), SDTTRF(3S), SDTTRS(3S) to factor and solve tridiagonal systems by using the same
algorithm as TRID
SGTSV(3L) (available only online) to solve a tridiagonal system by using Gaussian elimination with partial
pivoting



INDEX

2D array (ScaLAPACK) ................................................................................. descinit(3S) ..................................................... 362


2D grid computation (BLACS) ....................................................................... blacs_pcoord(3S) ............................................ 545
3D grid partition initialization (BLACS) ........................................................ gridinit3d(3S) ................................................ 548
3D processor grids (BLACS) .......................................................................... gridinfo3d(3S) ................................................ 547
a grid of processors ......................................................................................... blacs_gridmap(3S) ......................................... 544
Assigns default values to the parameter arguments for SITRSOL(3S) ......... dfaults(3S)........................................................ 461
BAKVEC(3S) .................................................................................................... eispack(3S)........................................................ 349
bakvec(3S) .................................................................................................... eispack(3S)........................................................ 349
BALANC(3S) .................................................................................................... eispack(3S)........................................................ 349
balanc(3S) .................................................................................................... eispack(3S)........................................................ 349
BALBAK(3S) .................................................................................................... eispack(3S)........................................................ 349
balbak(3S) .................................................................................................... eispack(3S)........................................................ 349
Banded symmetric systems of linear equations .............................................. eispack(3S)........................................................ 349
BANDR(3S) ...................................................................................................... eispack(3S)........................................................ 349
bandr(3S) ...................................................................................................... eispack(3S)........................................................ 349
BANDV(3S) ...................................................................................................... eispack(3S)........................................................ 349
bandv(3S) ...................................................................................................... eispack(3S)........................................................ 349
Basic Linear Algebra Communication Subprograms ...................................... intro_blacs(3S) .............................................. 535
Basic Linear Algebra Subprogram .................................................................. vssyrk(3S) .......................................................... 614
BISECT(3S) .................................................................................................... eispack(3S)........................................................ 349
bisect(3S) .................................................................................................... eispack(3S)........................................................ 349
BLACS ............................................................................................................ intro_blacs(3S) .............................................. 535
BLACS introduction ....................................................................................... intro_blacs(3S) .............................................. 535
BLACS(3S) ...................................................................................................... intro_blacs(3S) .............................................. 535
blacs(3S) ...................................................................................................... intro_blacs(3S) .............................................. 535
BLACS_BARRIER(3S) ................................................................................... blacs_barrier(3S) ......................................... 539
blacs_barrier(3S) ................................................................................... blacs_barrier(3S) ......................................... 539
BLACS_EXIT(3S) .......................................................................................... blacs_exit(3S) ................................................ 540
blacs_exit(3S) .......................................................................................... blacs_exit(3S) ................................................ 540
BLACS_GRIDEXIT(3S) ................................................................................. blacs_gridexit(3S) ....................................... 541
blacs_gridexit(3S) ................................................................................. blacs_gridexit(3S) ....................................... 541
BLACS_GRIDINFO(3S) ................................................................................. blacs_gridinfo(3S) ....................................... 542
blacs_gridinfo(3S) ................................................................................. blacs_gridinfo(3S) ....................................... 542
BLACS_GRIDINIT(3S) ................................................................................. blacs_gridinit(3S) ....................................... 543
blacs_gridinit(3S) ................................................................................. blacs_gridinit(3S) ....................................... 543
BLACS_GRIDMAP(3S) ................................................................................... blacs_gridmap(3S) ......................................... 544
blacs_gridmap(3S) ................................................................................... blacs_gridmap(3S) ......................................... 544
BLACS_PCOORD(3S) ...................................................................................... blacs_pcoord(3S) ............................................ 545
blacs_pcoord(3S) ...................................................................................... blacs_pcoord(3S) ............................................ 545
BLACS_PNUM(3S) .......................................................................................... blacs_pnum(3S) ................................................ 546
blacs_pnum(3S) .......................................................................................... blacs_pnum(3S) ................................................ 546
BLAS 3 ........................................................................................................... vssyrk(3S) .......................................................... 614
BQR(3S) ........................................................................................................... eispack(3S)........................................................ 349
bqr(3S) ........................................................................................................... eispack(3S)........................................................ 349
broadcast a trapezoidal rectangular matrix (BLACS) ..................................... itrbs2d(3S)........................................................ 566
broadcast general rectangular matrix (BLACS) .............................................. igebs2d(3S)........................................................ 556
Broadcasts a general rectangular matrix to all or a subset of processors ...... igebs2d(3S)........................................................ 556



Broadcasts a trapezoidal rectangular matrix to all or a subset of
processors ........................................................................................................ itrbs2d(3S)........................................................ 566
CBABK2(3S) .................................................................................................... eispack(3S)........................................................ 349
cbabk2(3S) .................................................................................................... eispack(3S)........................................................ 349
CBAL(3S) ........................................................................................................ eispack(3S)........................................................ 349
cbal(3S) ........................................................................................................ eispack(3S)........................................................ 349
cchdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
cchdd(3S) ...................................................................................................... linpack(3S)........................................................ 355
cchex(3S) ...................................................................................................... linpack(3S)........................................................ 355
cchud(3S) ...................................................................................................... linpack(3S)........................................................ 355
CCOPY2RV(3S) ............................................................................................... scopy2rv(3S) ..................................................... 590
ccopy2rv(3S) ............................................................................................... scopy2rv(3S) ..................................................... 590
CCOPY2VR(3S) ............................................................................................... scopy2vr(3S) ..................................................... 593
ccopy2vr(3S) ............................................................................................... scopy2vr(3S) ..................................................... 593
CDTSOL(3S) .................................................................................................... sdtsol(3S) .......................................................... 518
cdtsol(3S) .................................................................................................... sdtsol(3S) .......................................................... 518
CDTTRF(3S) .................................................................................................... sdttrf(3S) .......................................................... 520
cdttrf(3S) .................................................................................................... sdttrf(3S) .......................................................... 520
CDTTRS(3S) .................................................................................................... sdttrs(3S) .......................................................... 523
cdttrs(3S) .................................................................................................... sdttrs(3S) .......................................................... 523
CG(3S) ............................................................................................................. eispack(3S)........................................................ 349
cg(3S) ............................................................................................................. eispack(3S)........................................................ 349
CGAMN2D(3S) ................................................................................................. igamn2d(3S)........................................................ 550
cgamn2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
CGAMX2D(3S) ................................................................................................. igamx2d(3S)........................................................ 552
cgamx2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
cgbco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cgbdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
cgbfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
cgbsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
CGEBR2D(3S) ................................................................................................. igebr2d(3S)........................................................ 554
cgebr2d(3S) ................................................................................................. igebr2d(3S)........................................................ 554
CGEBS2D(3S) ................................................................................................. igebs2d(3S)........................................................ 556
cgebs2d(3S) ................................................................................................. igebs2d(3S)........................................................ 556
cgeco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cgedi(3S) ...................................................................................................... linpack(3S)........................................................ 355
cgefa(3S) ...................................................................................................... linpack(3S)........................................................ 355
CGERV2D(3S) ................................................................................................. igerv2d(3S)........................................................ 558
cgerv2d(3S) ................................................................................................. igerv2d(3S)........................................................ 558
CGESD2D(3S) ................................................................................................. igesd2d(3S)........................................................ 560
cgesd2d(3S) ................................................................................................. igesd2d(3S)........................................................ 560
cgesl(3S) ...................................................................................................... linpack(3S)........................................................ 355
CGMAX2D(3S) ................................................................................................. igamx2d(3S)........................................................ 552
cgmax2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
CGMIN2D(3S) ................................................................................................. igamn2d(3S)........................................................ 550
cgmin2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
CGSUM2D(3S) ................................................................................................. igsum2d(3S)........................................................ 562
cgsum2d(3S) ................................................................................................. igsum2d(3S)........................................................ 562
cgtsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
CH(3S) ............................................................................................................. eispack(3S)........................................................ 349

ch(3S) ............................................................................................................. eispack(3S)........................................................ 349
chico(3S) ...................................................................................................... linpack(3S)........................................................ 355
chidi(3S) ...................................................................................................... linpack(3S)........................................................ 355
chifa(3S) ...................................................................................................... linpack(3S)........................................................ 355
chisl(3S) ...................................................................................................... linpack(3S)........................................................ 355
Cholesky factorization ..................................................................................... intro_lapack(3S) ............................................ 333
Cholesky factorization ..................................................................................... linpack(3S)........................................................ 355
Cholesky factorization (CORE) ...................................................................... vspotrf(3S)........................................................ 610
Cholesky factorization (CORE) ...................................................................... vspotrs(3S)........................................................ 612
Cholesky factorization computation (ScaLAPACK) ..................................... pspotrf(3S)........................................................ 418
chpco(3S) ...................................................................................................... linpack(3S)........................................................ 355
chpdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
chpfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
chpsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
CINVIT(3S) .................................................................................................... eispack(3S)........................................................ 349
cinvit(3S) .................................................................................................... eispack(3S)........................................................ 349
CMACH(3S) ...................................................................................................... smach(3S) ............................................................ 628
cmach(3S) ...................................................................................................... smach(3S) ............................................................ 628
Column vector ................................................................................................. smxpy(3S) ............................................................ 647
COMBAK(3S) .................................................................................................... eispack(3S)........................................................ 349
combak(3S) .................................................................................................... eispack(3S)........................................................ 349
COMHES(3S) .................................................................................................... eispack(3S)........................................................ 349
comhes(3S) .................................................................................................... eispack(3S)........................................................ 349
COMLR2(3S) .................................................................................................... eispack(3S)........................................................ 349
comlr2(3S) .................................................................................................... eispack(3S)........................................................ 349
COMLR(3S) ...................................................................................................... eispack(3S)........................................................ 349
comlr(3S) ...................................................................................................... eispack(3S)........................................................ 349
complex distributed matrix inverse computation (ScaLAPACK) ................... psgetri(3S)........................................................ 408
complex distributed matrix LQ factorization (ScaLAPACK) ......................... psgelqf(3S)........................................................ 387
complex distributed matrix LU factorization (ScaLAPACK) ......................... psgetrf(3S)........................................................ 405
complex distributed matrix QL factorization (ScaLAPACK) ......................... psgeqlf(3S)........................................................ 390
complex distributed matrix QR factorization (ScaLAPACK) ........................ psgeqpf(3S)........................................................ 393
complex distributed matrix QR factorization (ScaLAPACK) ........................ psgeqrf(3S)........................................................ 396
complex distributed matrix QR factorization (ScaLAPACK) ........................ psgerqf(3S)........................................................ 399
complex distributed matrix reduction (ScaLAPACK) .................................... psgebrd(3S)........................................................ 382
complex distributed system of linear equations solution (ScaLAPACK) ....... psgetrs(3S)........................................................ 411
complex distributed triangular system computation (ScaLAPACK) .............. pstrtrs(3S)........................................................ 449
complex Hermitian distributed matrix reduction (ScaLAPACK) ................... pssytrd(3S)........................................................ 442
complex Hermitian matrix inverse (ScaLAPACK) ........................................ pspotri(3S)........................................................ 421
complex Hermitian positive definite distributed matrix (ScaLAPACK) ........ pspotrf(3S)........................................................ 418
complex Hermitian positive definite system solution (ScaLAPACK) ............ pspotrs(3S)........................................................ 424
complex system computation (ScaLAPACK) ................................................. psgesv(3S) .......................................................... 402
complex triangular distributed matrix inverse computation (ScaLAPACK) .. pstrtri(3S)........................................................ 446
compute grid coordinates (BLACS) ............................................................... pcoord3d(3S) ..................................................... 573
compute local matrix (ScaLAPACK) ............................................................. numroc(3S) .......................................................... 365
compute processing element coordinate (ScaLAPACK) ................................ indxg2p(3S)........................................................ 364
Computes a QL factorization of a real or complex distributed matrix ........... psgeqlf(3S)........................................................ 390
Computes a QR factorization of a real or complex distributed matrix ........... psgeqrf(3S)........................................................ 396
Computes a QR factorization with column pivoting of a real or complex
distributed matrix ............................................................................................ psgeqpf(3S)........................................................ 393

Computes an RQ factorization of a real or complex distributed matrix .......... psgerqf(3S)........................................................ 399
Computes an LQ factorization of a real or complex distributed matrix ......... psgelqf(3S)........................................................ 387
Computes an LU factorization of a real or complex distributed matrix ......... psgetrf(3S)........................................................ 405
Computes an LU factorization of a virtual general matrix with real or
complex elements, using partial pivoting with row interchanges ................... vsgetrf(3S)........................................................ 604
Computes coordinates in two-dimensional grids ............................................ blacs_pcoord(3S) ............................................ 545
Computes matrix-times-matrix product (arbitrary increments) ....................... mxma(3S)............................................................... 639
Computes matrix-times-matrix product (unit increments) .............................. mxm(3S) ................................................................. 637
Computes matrix-times-vector product (arbitrary increments) ....................... mxva(3S)............................................................... 644
Computes matrix-times-vector product (unit increments) .............................. mxv(3S) ................................................................. 642
Computes selected eigenvalues and eigenvectors of a Hermitian-definite
eigenproblem ................................................................................................... pcheevx(3S)........................................................ 366
Computes selected eigenvalues and eigenvectors of a Hermitian-definite
generalized eigenproblem ................................................................................ pchegvx(3S)........................................................ 374
Computes selected eigenvalues and eigenvectors of a real symmetric
matrix .............................................................................................................. pssyevx(3S)........................................................ 427
Computes selected eigenvalues and eigenvectors of a real symmetric-
definite generalized eigenproblem ................................................................... pssygvx(3S)........................................................ 434
Computes the Cholesky factorization of a real symmetric or complex
Hermitian positive definite distributed matrix ................................................ pspotrf(3S)........................................................ 418
Computes the Cholesky factorization of a real symmetric positive definite
virtual matrix ................................................................................................... vspotrf(3S)........................................................ 610
Computes the coordinate of the processing element (PE) that possesses
the entry of a distributed matrix ..................................................................... indxg2p(3S)........................................................ 364
Computes the inverse of a real or complex distributed matrix ...................... psgetri(3S)........................................................ 408
Computes the inverse of a real or complex upper or lower triangular
distributed matrix ............................................................................................ pstrtri(3S)........................................................ 446
Computes the inverse of a real symmetric or complex Hermitian positive
definite distributed matrix ............................................................................... pspotri(3S)........................................................ 421
Computes the number of rows or columns of a distributed matrix owned
locally .............................................................................................................. numroc(3S) .......................................................... 365
Computes the solution to a real or complex system of linear equations ....... psgesv(3S) .......................................................... 402
Computes three-dimensional (3D) processor grid coordinates ...................... pcoord3d(3S) ..................................................... 573
Condition number ............................................................................................ intro_lapack(3S) ............................................ 333
Conjugate gradient .......................................................................................... dfaults(3S)........................................................ 461
Conjugate gradient .......................................................................................... sitrsol(3S)........................................................ 466
Constant ........................................................................................................... r1mach(3S) .......................................................... 624
Constants ......................................................................................................... intro_mach(3S) ................................................ 623
Copies a submatrix of a real or complex matrix in memory into a virtual
matrix .............................................................................................................. scopy2rv(3S) ..................................................... 590
Copies a submatrix of a virtual matrix to a real or complex (in memory)
matrix .............................................................................................................. scopy2vr(3S) ..................................................... 593
Copying matrices ............................................................................................ scopy2rv(3S) ..................................................... 590
Copying matrices ............................................................................................ scopy2vr(3S) ..................................................... 593
CORTB(3S) ...................................................................................................... eispack(3S)........................................................ 349
cortb(3S) ...................................................................................................... eispack(3S)........................................................ 349
CORTH(3S) ...................................................................................................... eispack(3S)........................................................ 349
corth(3S) ...................................................................................................... eispack(3S)........................................................ 349
cpbco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cpbdi(3S) ...................................................................................................... linpack(3S)........................................................ 355

cpbfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
cpbsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
cpoco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cpodi(3S) ...................................................................................................... linpack(3S)........................................................ 355
cpofa(3S) ...................................................................................................... linpack(3S)........................................................ 355
cposl(3S) ...................................................................................................... linpack(3S)........................................................ 355
cppco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cppdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
cppfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
cppsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
cptsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
cqrdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
cqrsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
cspco(3S) ...................................................................................................... linpack(3S)........................................................ 355
cspdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
cspfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
cspsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
csvdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
CTRBR2D(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
ctrbr2d(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
CTRBS2D(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
ctrbs2d(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
CTRRV2D(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
ctrrv2d(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
CTRSD2D(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
ctrsd2d(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
Declares packed storage mode for a triangular, symmetric, or Hermitian
(complex only) virtual matrix ......................................................................... vstorage(3S) ..................................................... 616
Dense ............................................................................................................... linpack(3S)........................................................ 355
Dense linear algebra ........................................................................................ intro_lapack(3S) ............................................ 333
Dense linear system ........................................................................................ linpack(3S)........................................................ 355
Dense linear system ........................................................................................ vsgetrs(3S)........................................................ 608
Dense linear system solvers ............................................................................ intro_lapack(3S) ............................................ 333
Dense linear systems ....................................................................................... intro_lapack(3S) ............................................ 333
Dense solver .................................................................................................... linpack(3S)........................................................ 355
Dense solver .................................................................................................... vsgetrs(3S)........................................................ 608
DESCINIT(3S) ............................................................................................... descinit(3S) ..................................................... 362
descinit(3S) ............................................................................................... descinit(3S) ..................................................... 362
determine maximum value (BLACS) ............................................................. igamx2d(3S)........................................................ 552
determine minimum absolute values (BLACS) .............................................. igamn2d(3S)........................................................ 550
Determines maximum absolute values of rectangular matrices ...................... igamx2d(3S)........................................................ 552
Determines minimum absolute values of rectangular matrices ...................... igamn2d(3S)........................................................ 550
Determines single-precision machine parameters ........................................... slamch(3S) .......................................................... 626
DFAULTS(3S) ................................................................................................. dfaults(3S)........................................................ 461
dfaults(3S) ................................................................................................. dfaults(3S)........................................................ 461
Direct ............................................................................................................... ssgetrf(3S)........................................................ 482
Direct ............................................................................................................... ssgetrs(3S)........................................................ 487
Direct ............................................................................................................... sspotrf(3S)........................................................ 489
Direct ............................................................................................................... sspotrs(3S)........................................................ 494
Direct ............................................................................................................... sststrf(3S)........................................................ 496

004–2081–002 Index-5
Direct ............................................................................................................... sststrs(3S)........................................................ 501
Direct sparse solver ......................................................................................... intro_sparse(3S) ............................................ 453
Direct sparse solver ......................................................................................... ssgetrf(3S)........................................................ 482
Direct sparse solver ......................................................................................... ssgetrs(3S)........................................................ 487
Direct sparse solver ......................................................................................... sspotrf(3S)........................................................ 489
Direct sparse solver ......................................................................................... sspotrs(3S)........................................................ 494
Direct sparse solver ......................................................................................... sststrf(3S)........................................................ 496
Direct sparse solver ......................................................................................... sststrs(3S)........................................................ 501
distributed matrix (ScaLAPACK) ................................................................... numroc(3S) .......................................................... 365
Eigenvalue problem ........................................................................................ eispack(3S)........................................................ 349
Eigenvalues ..................................................................................................... eispack(3S)........................................................ 349
eigenvalues and eigenvectors computation (ScaLAPACK) ............................ pcheevx(3S)........................................................ 366
eigenvalues and eigenvectors computation (ScaLAPACK) ............................ pchegvx(3S)........................................................ 374
eigenvalues and eigenvectors computation (ScaLAPACK) ............................ pssyevx(3S)........................................................ 427
eigenvalues and eigenvectors computation (ScaLAPACK) ............................ pssygvx(3S)........................................................ 434
Eigenvectors .................................................................................................... eispack(3S)........................................................ 349
EISPACK(3S) ................................................................................................. eispack(3S)........................................................ 349
eispack(3S) ................................................................................................. eispack(3S)........................................................ 349
element summation operations (BLACS) ....................................................... igsum2d(3S)........................................................ 562
ELMBAK(3S) .................................................................................................... eispack(3S)........................................................ 349
elmbak(3S) .................................................................................................... eispack(3S)........................................................ 349
ELMHES(3S) .................................................................................................... eispack(3S)........................................................ 349
elmhes(3S) .................................................................................................... eispack(3S)........................................................ 349
ELTRAN(3S) .................................................................................................... eispack(3S)........................................................ 349
eltran(3S) .................................................................................................... eispack(3S)........................................................ 349
Factorization .................................................................................................... vsgetrf(3S)........................................................ 604
Factorization .................................................................................................... vspotrf(3S)........................................................ 610
Factors a real sparse general matrix with a symmetric nonzero pattern (no
form of pivoting is implemented) ................................................................... sststrf(3S)........................................................ 496
Factors a real sparse general matrix with threshold pivoting implemented ... ssgetrf(3S)........................................................ 482
Factors a real sparse symmetric definite matrix ............................................. sspotrf(3S)........................................................ 489
Factors a real-valued or complex-valued tridiagonal system ......................... sdttrf(3S) .......................................................... 520
FIGI2(3S) ...................................................................................................... eispack(3S)........................................................ 349
figi2(3S) ...................................................................................................... eispack(3S)........................................................ 349
FIGI(3S) ........................................................................................................ eispack(3S)........................................................ 349
figi(3S) ........................................................................................................ eispack(3S)........................................................ 349
First-order linear recurrence ............................................................................ folrc(3S) ............................................................ 511
First-order linear recurrence ............................................................................ folrn(3S) ............................................................ 513
First-order linear recurrences .......................................................................... folr(3S)............................................................... 504
First-order linear recurrences .......................................................................... folr2(3S) ............................................................ 509
FOLR2(3S) ...................................................................................................... folr2(3S) ............................................................ 509
folr2(3S) ...................................................................................................... folr2(3S) ............................................................ 509
FOLR2P(3S) .................................................................................................... folr2(3S) ............................................................ 509
folr2p(3S) .................................................................................................... folr2(3S) ............................................................ 509
FOLR(3S) ........................................................................................................ folr(3S)............................................................... 504
folr(3S) ........................................................................................................ folr(3S)............................................................... 504
FOLRC(3S) ...................................................................................................... folrc(3S) ............................................................ 511
folrc(3S) ...................................................................................................... folrc(3S) ............................................................ 511
FOLRN(3S) ...................................................................................................... folrn(3S) ............................................................ 513
folrn(3S) ...................................................................................................... folrn(3S) ............................................................ 513

FOLRNP(3S) .................................................................................................... folrn(3S) ............................................................ 513
folrnp(3S) .................................................................................................... folrn(3S) ............................................................ 513
FOLRP(3S) ...................................................................................................... folr(3S)............................................................... 504
folrp(3S) ...................................................................................................... folr(3S)............................................................... 504
free grid (BLACS) .......................................................................................... blacs_gridexit(3S) ....................................... 541
Frees a grid ..................................................................................................... blacs_gridexit(3S) ....................................... 541
Frees all existing grids .................................................................................... blacs_exit(3S) ................................................ 540
Gather a vector ................................................................................................ gather(3S) .......................................................... 633
GATHER(3S) .................................................................................................... gather(3S) .......................................................... 633
gather(3S) .................................................................................................... gather(3S) .......................................................... 633
Gathers a vector from a source vector ............................................................ gather(3S) .......................................................... 633
Global reduction routines ................................................................................ intro_blacs(3S) .............................................. 535
GRIDINFO3D(3S) .......................................................................................... gridinfo3d(3S) ................................................ 547
gridinfo3d(3S) .......................................................................................... gridinfo3d(3S) ................................................ 547
GRIDINIT3D(3S) .......................................................................................... gridinit3d(3S) ................................................ 548
gridinit3d(3S) .......................................................................................... gridinit3d(3S) ................................................ 548
halt execution (BLACS) .................................................................................. blacs_barrier(3S) ......................................... 539
Handles termination processing for the out-of-core routines ........................... vend(3S)............................................................... 598
Hermitian matrix ............................................................................................. vstorage(3S) ..................................................... 616
Horner’s method ............................................................................................. folrn(3S) ............................................................ 513
Horner’s rule ................................................................................................... folrn(3S) ............................................................ 513
HTRIB3(3S) .................................................................................................... eispack(3S)........................................................ 349
htrib3(3S) .................................................................................................... eispack(3S)........................................................ 349
HTRIBK(3S) .................................................................................................... eispack(3S)........................................................ 349
htribk(3S) .................................................................................................... eispack(3S)........................................................ 349
HTRID3(3S) .................................................................................................... eispack(3S)........................................................ 349
htrid3(3S) .................................................................................................... eispack(3S)........................................................ 349
HTRIDI(3S) .................................................................................................... eispack(3S)........................................................ 349
htridi(3S) .................................................................................................... eispack(3S)........................................................ 349
IGAMN2D(3S) ................................................................................................. igamn2d(3S)........................................................ 550
igamn2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
IGAMX2D(3S) ................................................................................................. igamx2d(3S)........................................................ 552
igamx2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
IGEBR2D(3S) ................................................................................................. igebr2d(3S)........................................................ 554
igebr2d(3S) ................................................................................................. igebr2d(3S)........................................................ 554
IGEBS2D(3S) ................................................................................................. igebs2d(3S)........................................................ 556
igebs2d(3S) ................................................................................................. igebs2d(3S)........................................................ 556
IGERV2D(3S) ................................................................................................. igerv2d(3S)........................................................ 558
igerv2d(3S) ................................................................................................. igerv2d(3S)........................................................ 558
IGESD2D(3S) ................................................................................................. igesd2d(3S)........................................................ 560
igesd2d(3S) ................................................................................................. igesd2d(3S)........................................................ 560
igmax2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
igmin2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
IGSUM2D(3S) ................................................................................................. igsum2d(3S)........................................................ 562
igsum2d(3S) ................................................................................................. igsum2d(3S)........................................................ 562
IMTQL1(3S) .................................................................................................... eispack(3S)........................................................ 349
imtql1(3S) .................................................................................................... eispack(3S)........................................................ 349
IMTQL2(3S) .................................................................................................... eispack(3S)........................................................ 349
imtql2(3S) .................................................................................................... eispack(3S)........................................................ 349
IMTQLV(3S) .................................................................................................... eispack(3S)........................................................ 349

imtqlv(3S) .................................................................................................... eispack(3S)........................................................ 349
INDXG2P(3S) ................................................................................................. indxg2p(3S)........................................................ 364
indxg2p(3S) ................................................................................................. indxg2p(3S)........................................................ 364
initialization routine (BLACS) ........................................................................ blacs_gridinit(3S) ....................................... 543
Initialization routines (CORE) ........................................................................ vbegin(3S) .......................................................... 595
initialize descriptor vector ............................................................................... descinit(3S) ..................................................... 362
Initializes a descriptor vector of a distributed two-dimensional array ........... descinit(3S) ..................................................... 362
Initializes counters, variables, and so on, for the BLACS routines ............... blacs_gridinit(3S) ....................................... 543
Initializes the out-of-core routine data structures ........................................... vbegin(3S) .......................................................... 595
Initializes variables for a three-dimensional (3D) grid partition of
processor set .................................................................................................... gridinit3d(3S) ................................................ 548
INTRO_BLACS(3S) ........................................................................................ intro_blacs(3S) .............................................. 535
intro_blacs(3S) ........................................................................................ intro_blacs(3S) .............................................. 535
INTRO_CORE(3S) .......................................................................................... intro_core(3S) ................................................ 575
intro_core(3S) .......................................................................................... intro_core(3S) ................................................ 575
Introduction to Basic Linear Algebra Communication Subprograms ........... intro_blacs(3S) .............................................. 535
introduction to BLACS routines ..................................................................... intro_blacs(3S) .............................................. 535
Introduction to Eigensystem computation for dense linear systems ............... eispack(3S)........................................................ 349
Introduction to LAPACK solvers for dense linear systems ........................... intro_lapack(3S) ............................................ 333
Introduction to machine constant functions .................................................... intro_mach(3S) ................................................ 623
Introduction to ScaLAPACK ............................................................... intro_scalapack(3S) ..................................... 359
Introduction to solvers for sparse linear systems ........................................... intro_sparse(3S) ............................................ 453
Introduction to solvers for special linear systems .......................................... intro_spec(3S) ................................................ 503
Introduction to superseded Scientific Library routines ................................... intro_superseded(3S) .................................. 631
Introduction to the Cray Research Scientific Library out-of-core routines
for linear algebra ............................................................................................. intro_core(3S) ................................................ 575
Introduction to the ScaLAPACK routines for distributed matrix
computations ................................................................................................... intro_scalapack(3S) ..................................... 359
INTRO_LAPACK(3S) ...................................................................................... intro_lapack(3S) ............................................ 333
intro_lapack(3S) ...................................................................................... intro_lapack(3S) ............................................ 333
INTRO_MACH(3S) .......................................................................................... intro_mach(3S) ................................................ 623
intro_mach(3S) .......................................................................................... intro_mach(3S) ................................................ 623
INTRO_SCALAPACK(3S) .............................................................................. intro_scalapack(3S) ..................................... 359
intro_scalapack(3S) .............................................................................. intro_scalapack(3S) ..................................... 359
INTRO_SPARSE(3S) ...................................................................................... intro_sparse(3S) ............................................ 453
intro_sparse(3S) ...................................................................................... intro_sparse(3S) ............................................ 453
INTRO_SPEC(3S) .......................................................................................... intro_spec(3S) ................................................ 503
intro_spec(3S) .......................................................................................... intro_spec(3S) ................................................ 503
INTRO_SUPERSEDED(3S) ............................................................................ intro_superseded(3S) .................................. 631
intro_superseded(3S) ............................................................................ intro_superseded(3S) .................................. 631
Inverse ............................................................................................................. intro_lapack(3S) ............................................ 333
inverse computation (ScaLAPACK) ............................................................... pspotri(3S)........................................................ 421
inverse computation (ScaLAPACK) ............................................................... pstrtri(3S)........................................................ 446
Inverse of square matrix ................................................................................. minv(3S)............................................................... 634
INVIT(3S) ...................................................................................................... eispack(3S)........................................................ 349
invit(3S) ...................................................................................................... eispack(3S)........................................................ 349
Iterative ........................................................................................................... dfaults(3S)........................................................ 461
Iterative ........................................................................................................... sitrsol(3S)........................................................ 466
Iterative sparse solver ..................................................................................... intro_sparse(3S) ............................................ 453
Iterative sparse solver ..................................................................................... dfaults(3S)........................................................ 461

Iterative sparse solver ..................................................................................... sitrsol(3S)........................................................ 466
ITRBR2D(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
itrbr2d(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
ITRBS2D(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
itrbs2d(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
ITRRV2D(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
itrrv2d(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
ITRSD2D(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
itrsd2d(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
LAPACK ......................................................................................................... intro_lapack(3S) ............................................ 333
LAPACK ......................................................................................................... slamch(3S) .......................................................... 626
Lapack ............................................................................................................. intro_lapack(3S) ............................................ 333
Lapack ............................................................................................................. slamch(3S) .......................................................... 626
LAPACK(3S) .................................................................................................... intro_lapack(3S) ............................................ 333
lapack(3S) .................................................................................................... intro_lapack(3S) ............................................ 333
LDU factorization ........................................................................................... vsgetrf(3S)........................................................ 604
Level 3 Basic Linear Algebra Subprogram .................................................... vssyrk(3S) .......................................................... 614
Level 3 BLAS ................................................................................................. vssyrk(3S) .......................................................... 614
Linear .............................................................................................................. slamch(3S) .......................................................... 626
Linear algebra ................................................................................................. intro_lapack(3S) ............................................ 333
Linear algebra ................................................................................................. slamch(3S) .......................................................... 626
Linear equations .............................................................................................. linpack(3S)........................................................ 355
Linear equations .............................................................................................. minv(3S)............................................................... 634
Linear recurrence ............................................................................................ folr(3S)............................................................... 504
Linear recurrence ............................................................................................ folr2(3S) ............................................................ 509
Linear recurrence ............................................................................................ folrc(3S) ............................................................ 511
Linear recurrence ............................................................................................ folrn(3S) ............................................................ 513
Linear recurrence ............................................................................................ recpp(3S) ............................................................ 516
Linear recurrence ............................................................................................ solr(3S)............................................................... 526
Linear recurrence ............................................................................................ solr3(3S) ............................................................ 528
Linear recurrence ............................................................................................ solrn(3S) ............................................................ 531
Linear system solvers ...................................................................................... intro_lapack(3S) ............................................ 333
Linear systems ................................................................................................. intro_lapack(3S) ............................................ 333
Linear systems ................................................................................................. slamch(3S) .......................................................... 626
LINPACK(3S) ................................................................................................. linpack(3S)........................................................ 355
linpack(3S) ................................................................................................. linpack(3S)........................................................ 355
LU factorization .............................................................................................. intro_lapack(3S) ............................................ 333
LU factorization .............................................................................................. vsgetrf(3S)........................................................ 604
LU factorization .............................................................................................. vsgetrs(3S)........................................................ 608
LU solver ........................................................................................................ vsgetrs(3S)........................................................ 608
MACH_CON(3S) ............................................................................................... intro_mach(3S) ................................................ 623
mach_con(3S) ............................................................................................... intro_mach(3S) ................................................ 623
Machine constant ............................................................................................ r1mach(3S) .......................................................... 624
Machine constants ........................................................................................... intro_mach(3S) ................................................ 623
Machine epsilon .............................................................................................. smach(3S) ............................................................ 628
Matrix copy ..................................................................................................... scopy2rv(3S) ..................................................... 590
Matrix copy ..................................................................................................... scopy2vr(3S) ..................................................... 593
Matrix multiplication (VBLAS) ...................................................................... vsgemm(3S) .......................................................... 600
Matrix-matrix multiplication (arbitrary increments) ....................................... mxma(3S)............................................................... 639
Matrix-matrix multiplication (unit increments) ............................................... mxm(3S) ................................................................. 637

004–2081–002 Index-9


Matrix-matrix multiplication (VBLAS) .......................................................... vsgemm(3S) .......................................................... 600
Matrix-vector multiplication ........................................................................... smxpy(3S) ............................................................ 647
Matrix-vector multiplication ........................................................................... sxmpy(3S) ............................................................ 649
Matrix-vector multiplication (arbitrary increments) ........................................ mxva(3S)............................................................... 644
Matrix-vector multiplication (unit increments) ............................................... mxv(3S) ................................................................. 642
MINFIT(3S) .................................................................................................... eispack(3S)........................................................ 349
minfit(3S) .................................................................................................... eispack(3S)........................................................ 349
MINV(3S) ........................................................................................................ minv(3S)............................................................... 634
minv(3S) ........................................................................................................ minv(3S)............................................................... 634
Multiplies a column vector by a matrix and adds the result to another
column vector .................................................................................................. smxpy(3S) ............................................................ 647
Multiplies a row vector by a matrix and adds the result to another row
vector ............................................................................................................... sxmpy(3S) ............................................................ 649
Multiplies a virtual real or complex general matrix by a virtual real or
complex general matrix ................................................................................... vsgemm(3S) .......................................................... 600
Multiplying matrices ....................................................................................... mxm(3S) ................................................................. 637
Multiplying matrices ....................................................................................... mxma(3S)............................................................... 639
MXM(3S) ........................................................................................................... mxm(3S) ................................................................. 637
mxm(3S) ........................................................................................................... mxm(3S) ................................................................. 637
MXMA(3S) ........................................................................................................ mxma(3S)............................................................... 639
mxma(3S) ........................................................................................................ mxma(3S)............................................................... 639
MXV(3S) ........................................................................................................... mxv(3S) ................................................................. 642
mxv(3S) ........................................................................................................... mxv(3S) ................................................................. 642
MXVA(3S) ........................................................................................................ mxva(3S)............................................................... 644
mxva(3S) ........................................................................................................ mxva(3S)............................................................... 644
MYNODE(3S) .................................................................................................... mynode(3S) .......................................................... 572
mynode(3S) .................................................................................................... mynode(3S) .......................................................... 572
Non-symmetric ................................................................................................ sststrf(3S)........................................................ 496
Non-symmetric ................................................................................................ sststrs(3S)........................................................ 501
Non-symmetric matrix .................................................................................... sststrf(3S)........................................................ 496
Non-symmetric matrix .................................................................................... sststrs(3S)........................................................ 501
Normalized number ......................................................................................... smach(3S) ............................................................ 628
NUMROC(3S) .................................................................................................... numroc(3S) .......................................................... 365
numroc(3S) .................................................................................................... numroc(3S) .......................................................... 365
ORTBAK(3S) .................................................................................................... eispack(3S)........................................................ 349
ortbak(3S) .................................................................................................... eispack(3S)........................................................ 349
ORTHES(3S) .................................................................................................... eispack(3S)........................................................ 349
orthes(3S) .................................................................................................... eispack(3S)........................................................ 349
ORTRAN(3S) .................................................................................................... eispack(3S)........................................................ 349
ortran(3S) .................................................................................................... eispack(3S)........................................................ 349
Out of core ...................................................................................................... intro_core(3S) ................................................ 575
Out of core ...................................................................................................... vend(3S)............................................................... 598
Out of core ...................................................................................................... vstorage(3S) ..................................................... 616
Outdated .......................................................................................................... intro_superseded(3S) .................................. 631
OUT_OF_CORE(3S) ........................................................................................ intro_core(3S) ................................................ 575
out_of_core(3S) ........................................................................................ intro_core(3S) ................................................ 575
Packed storage ................................................................................................. vstorage(3S) ..................................................... 616
Partial products problem ................................................................................. recpp(3S) ............................................................ 516
Partial summation problem ............................................................................. recpp(3S) ............................................................ 516
PCGEBRD(3S) ................................................................................................. psgebrd(3S)........................................................ 382
PCGELQF(3S) ................................................................................................. psgelqf(3S)........................................................ 387
PCGEQLF(3S) ................................................................................................. psgeqlf(3S)........................................................ 390
PCGEQPF(3S) ................................................................................................. psgeqpf(3S)........................................................ 393
PCGEQRF(3S) ................................................................................................. psgeqrf(3S)........................................................ 396
PCGERQF(3S) ................................................................................................. psgerqf(3S)........................................................ 399
PCGESV(3S) .................................................................................................... psgesv(3S) .......................................................... 402
PCGETRF(3S) ................................................................................................. psgetrf(3S)........................................................ 405
PCGETRI(3S) ................................................................................................. psgetri(3S)........................................................ 408
PCGETRS(3S) ................................................................................................. psgetrs(3S)........................................................ 411
PCHEEVX(3S) ................................................................................................. pcheevx(3S)........................................................ 366
pcheevx(3S) ................................................................................................. pcheevx(3S)........................................................ 366
PCHEGVX(3S) ................................................................................................. pchegvx(3S)........................................................ 374
pchegvx(3S) ................................................................................................. pchegvx(3S)........................................................ 374
PCHETRD(3S) ................................................................................................. pssytrd(3S)........................................................ 442
PCOORD3D(3S) ............................................................................................... pcoord3d(3S) ..................................................... 573
pcoord3d(3S) ............................................................................................... pcoord3d(3S) ..................................................... 573
PCPOSV(3S) .................................................................................................... psposv(3S) .......................................................... 414
PCPOTRF(3S) ................................................................................................. pspotrf(3S)........................................................ 418
PCPOTRI(3S) ................................................................................................. pspotri(3S)........................................................ 421
PCPOTRS(3S) ................................................................................................. pspotrs(3S)........................................................ 424
PCTRTRI(3S) ................................................................................................. pstrtri(3S)........................................................ 446
PCTRTRS(3S) ................................................................................................. pstrtrs(3S)........................................................ 449
Performs element summation operations on rectangular matrices ................. igsum2d(3S)........................................................ 562
Performs symmetric rank k update of a real or complex symmetric virtual
matrix .............................................................................................................. vssyrk(3S) .......................................................... 614
PNUM3D(3S) .................................................................................................... pnum3d(3S) .......................................................... 574
pnum3d(3S) .................................................................................................... pnum3d(3S) .......................................................... 574
Positive definite matrix (CORE) ..................................................................... vspotrf(3S)........................................................ 610
process element number (BLACS) ................................................................. blacs_pnum(3S) ................................................ 546
processing element coordinates (ScaLAPACK) ............................................. indxg2p(3S)........................................................ 364
processor grid information (BLACS) .............................................................. blacs_gridinfo(3S) ....................................... 542
PSGEBRD(3S) ................................................................................................. psgebrd(3S)........................................................ 382
psgebrd(3S) ................................................................................................. psgebrd(3S)........................................................ 382
PSGELQF(3S) ................................................................................................. psgelqf(3S)........................................................ 387
psgelqf(3S) ................................................................................................. psgelqf(3S)........................................................ 387
PSGEQLF(3S) ................................................................................................. psgeqlf(3S)........................................................ 390
psgeqlf(3S) ................................................................................................. psgeqlf(3S)........................................................ 390
PSGEQPF(3S) ................................................................................................. psgeqpf(3S)........................................................ 393
psgeqpf(3S) ................................................................................................. psgeqpf(3S)........................................................ 393
PSGEQRF(3S) ................................................................................................. psgeqrf(3S)........................................................ 396
psgeqrf(3S) ................................................................................................. psgeqrf(3S)........................................................ 396
PSGERQF(3S) ................................................................................................. psgerqf(3S)........................................................ 399
psgerqf(3S) ................................................................................................. psgerqf(3S)........................................................ 399
PSGESV(3S) .................................................................................................... psgesv(3S) .......................................................... 402
psgesv(3S) .................................................................................................... psgesv(3S) .......................................................... 402
PSGETRF(3S) ................................................................................................. psgetrf(3S)........................................................ 405
psgetrf(3S) ................................................................................................. psgetrf(3S)........................................................ 405
PSGETRI(3S) ................................................................................................. psgetri(3S)........................................................ 408
psgetri(3S) ................................................................................................. psgetri(3S)........................................................ 408
PSGETRS(3S) ................................................................................................. psgetrs(3S)........................................................ 411
psgetrs(3S) ................................................................................................. psgetrs(3S)........................................................ 411
PSPOSV(3S) .................................................................................................... psposv(3S) .......................................................... 414
psposv(3S) .................................................................................................... psposv(3S) .......................................................... 414
PSPOTRF(3S) ................................................................................................. pspotrf(3S)........................................................ 418
pspotrf(3S) ................................................................................................. pspotrf(3S)........................................................ 418
PSPOTRI(3S) ................................................................................................. pspotri(3S)........................................................ 421
pspotri(3S) ................................................................................................. pspotri(3S)........................................................ 421
PSPOTRS(3S) ................................................................................................. pspotrs(3S)........................................................ 424
pspotrs(3S) ................................................................................................. pspotrs(3S)........................................................ 424
PSSYEVX(3S) ................................................................................................. pssyevx(3S)........................................................ 427
pssyevx(3S) ................................................................................................. pssyevx(3S)........................................................ 427
PSSYGVX(3S) ................................................................................................. pssygvx(3S)........................................................ 434
pssygvx(3S) ................................................................................................. pssygvx(3S)........................................................ 434
PSSYTRD(3S) ................................................................................................. pssytrd(3S)........................................................ 442
pssytrd(3S) ................................................................................................. pssytrd(3S)........................................................ 442
PSTRTRI(3S) ................................................................................................. pstrtri(3S)........................................................ 446
pstrtri(3S) ................................................................................................. pstrtri(3S)........................................................ 446
PSTRTRS(3S) ................................................................................................. pstrtrs(3S)........................................................ 449
pstrtrs(3S) ................................................................................................. pstrtrs(3S)........................................................ 449
QR factorization .............................................................................................. linpack(3S)........................................................ 355
QZHES(3S) ...................................................................................................... eispack(3S)........................................................ 349
qzhes(3S) ...................................................................................................... eispack(3S)........................................................ 349
QZIT(3S) ........................................................................................................ eispack(3S)........................................................ 349
qzit(3S) ........................................................................................................ eispack(3S)........................................................ 349
QZVAL(3S) ...................................................................................................... eispack(3S)........................................................ 349
qzval(3S) ...................................................................................................... eispack(3S)........................................................ 349
QZVEC(3S) ...................................................................................................... eispack(3S)........................................................ 349
qzvec(3S) ...................................................................................................... eispack(3S)........................................................ 349
R1MACH(3S) .................................................................................................... r1mach(3S) .......................................................... 624
r1mach(3S) .................................................................................................... r1mach(3S) .......................................................... 624
Rank k update ................................................................................................. vssyrk(3S) .......................................................... 614
RATQR(3S) ...................................................................................................... eispack(3S)........................................................ 349
ratqr(3S) ...................................................................................................... eispack(3S)........................................................ 349
real distributed matrix inverse computation (ScaLAPACK) .......................... psgetri(3S)........................................................ 408
real distributed matrix LQ factorization (ScaLAPACK) ................................ psgelqf(3S)........................................................ 387
real distributed matrix LU factorization (ScaLAPACK) ................................ psgetrf(3S)........................................................ 405
real distributed matrix QL factorization (ScaLAPACK) ................................ psgeqlf(3S)........................................................ 390
real distributed matrix QR factorization (ScaLAPACK) ................................ psgeqpf(3S)........................................................ 393
real distributed matrix QR factorization (ScaLAPACK) ................................ psgeqrf(3S)........................................................ 396
real distributed matrix QR factorization (ScaLAPACK) ................................ psgerqf(3S)........................................................ 399
real distributed matrix reduction (ScaLAPACK) ............................................ psgebrd(3S)........................................................ 382
real distributed system of linear equations solution (ScaLAPACK) .............. psgetrs(3S)........................................................ 411
real distributed triangular system computation (ScaLAPACK) ...................... pstrtrs(3S)........................................................ 449
real symmetric distributed matrix reduction (ScaLAPACK) .......................... pssytrd(3S)........................................................ 442
real symmetric matrix inverse (ScaLAPACK) ............................................... pspotri(3S)........................................................ 421
real symmetric positive definite matrix computation (ScaLAPACK) ............ pspotrf(3S)........................................................ 418
real symmetric positive definite system solution (ScaLAPACK) ................... pspotrs(3S)........................................................ 424
real system computation (ScaLAPACK) ........................................................ psgesv(3S) .......................................................... 402
real triangular distributed matrix inverse computation (ScaLAPACK) .......... pstrtri(3S)........................................................ 446
REBAK(3S) ...................................................................................................... eispack(3S)........................................................ 349
rebak(3S) ...................................................................................................... eispack(3S)........................................................ 349
REBAKB(3S) .................................................................................................... eispack(3S)........................................................ 349
rebakb(3S) .................................................................................................... eispack(3S)........................................................ 349
receive a trapezoidal rectangular matrix (BLACS) ........................................ itrrv2d(3S)........................................................ 568
receive broadcast trapezoidal rectangular matrix (BLACS) ........................... itrbr2d(3S)........................................................ 564
receive general rectangular matrix (BLACS) ................................................. igerv2d(3S)........................................................ 558
Receives a broadcast general rectangular matrix from all or a subset of
processors ........................................................................................................ igebr2d(3S)........................................................ 554
Receives a broadcast trapezoidal rectangular matrix from all or a subset
of processors ................................................................................................... itrbr2d(3S)........................................................ 564
Receives a general rectangular matrix from another processor ...................... igerv2d(3S)........................................................ 558
Receives a trapezoidal rectangular matrix from another processor ................ itrrv2d(3S)........................................................ 568
receives general rectangular matrix (BLACS) ................................................ igebr2d(3S)........................................................ 554
RECPP(3S) ...................................................................................................... recpp(3S) ............................................................ 516
recpp(3S) ...................................................................................................... recpp(3S) ............................................................ 516
RECPS(3S) ...................................................................................................... recpp(3S) ............................................................ 516
recps(3S) ...................................................................................................... recpp(3S) ............................................................ 516
REDUC2(3S) .................................................................................................... eispack(3S)........................................................ 349
reduc2(3S) .................................................................................................... eispack(3S)........................................................ 349
REDUC(3S) ...................................................................................................... eispack(3S)........................................................ 349
reduc(3S) ...................................................................................................... eispack(3S)........................................................ 349
Reduces a real or complex distributed matrix to bidiagonal form ................. psgebrd(3S)........................................................ 382
Reduces a real symmetric or complex Hermitian distributed matrix to
tridiagonal form ............................................................................................... pssytrd(3S)........................................................ 442
return calling processor number (BLACS) ..................................................... mynode(3S) .......................................................... 572
return processor element number (BLACS) ................................................... pnum3d(3S) .......................................................... 574
Returns Cray PVP machine constants ............................................................ r1mach(3S) .......................................................... 624
Returns information about the three-dimensional processor grid ................... gridinfo3d(3S) ................................................ 547
Returns information about the two-dimensional processor grid ..................... blacs_gridinfo(3S) ....................................... 542
Returns machine epsilon, small or large normalized numbers ....................... smach(3S) ............................................................ 628
Returns the calling processor’s assigned number ........................................... mynode(3S) .......................................................... 572
Returns the processor element number for specified coordinates in
two-dimensional grids ..................................................................................... blacs_pnum(3S) ................................................ 546
Returns the processor element number for specified three-dimensional
(3D) coordinates .............................................................................................. pnum3d(3S) .......................................................... 574
RG(3S) ............................................................................................................. eispack(3S)........................................................ 349
rg(3S) ............................................................................................................. eispack(3S)........................................................ 349
RGG(3S) ........................................................................................................... eispack(3S)........................................................ 349
rgg(3S) ........................................................................................................... eispack(3S)........................................................ 349
Row vector ...................................................................................................... sxmpy(3S) ............................................................ 649
RS(3S) ............................................................................................................. eispack(3S)........................................................ 349
rs(3S) ............................................................................................................. eispack(3S)........................................................ 349
RSB(3S) ........................................................................................................... eispack(3S)........................................................ 349
rsb(3S) ........................................................................................................... eispack(3S)........................................................ 349
RSG(3S) ........................................................................................................... eispack(3S)........................................................ 349
rsg(3S) ........................................................................................................... eispack(3S)........................................................ 349
RSGAB(3S) ...................................................................................................... eispack(3S)........................................................ 349
rsgab(3S) ...................................................................................................... eispack(3S)........................................................ 349
RSGBA(3S) ...................................................................................................... eispack(3S)........................................................ 349
rsgba(3S) ...................................................................................................... eispack(3S)........................................................ 349



RSM(3S) ........................................................................................................... eispack(3S)........................................................ 349
rsm(3S) ........................................................................................................... eispack(3S)........................................................ 349
RSP(3S) ........................................................................................................... eispack(3S)........................................................ 349
rsp(3S) ........................................................................................................... eispack(3S)........................................................ 349
RST(3S) ........................................................................................................... eispack(3S)........................................................ 349
rst(3S) ........................................................................................................... eispack(3S)........................................................ 349
RT(3S) ............................................................................................................. eispack(3S)........................................................ 349
rt(3S) ............................................................................................................. eispack(3S)........................................................ 349
scalapack ......................................................................................................... intro_scalapack(3S) ..................................... 359
scalapack(3S) ............................................................................................. intro_scalapack(3S) ..................................... 359
Scatter a vector ............................................................................................... scatter(3S)........................................................ 646
SCATTER(3S) ................................................................................................. scatter(3S)........................................................ 646
scatter(3S) ................................................................................................. scatter(3S)........................................................ 646
Scatters a vector into another vector .............................................................. scatter(3S)........................................................ 646
schdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
schdd(3S) ...................................................................................................... linpack(3S)........................................................ 355
schex(3S) ...................................................................................................... linpack(3S)........................................................ 355
schud(3S) ...................................................................................................... linpack(3S)........................................................ 355
SCOPY2RV(3S) ............................................................................................... scopy2rv(3S) ..................................................... 590
scopy2rv(3S) ............................................................................................... scopy2rv(3S) ..................................................... 590
SCOPY2VR(3S) ............................................................................................... scopy2vr(3S) ..................................................... 593
scopy2vr(3S) ............................................................................................... scopy2vr(3S) ..................................................... 593
SDTSOL(3S) .................................................................................................... sdtsol(3S) .......................................................... 518
sdtsol(3S) .................................................................................................... sdtsol(3S) .......................................................... 518
SDTTRF(3S) .................................................................................................... sdttrf(3S) .......................................................... 520
sdttrf(3S) .................................................................................................... sdttrf(3S) .......................................................... 520
SDTTRS(3S) .................................................................................................... sdttrs(3S) .......................................................... 523
sdttrs(3S) .................................................................................................... sdttrs(3S) .......................................................... 523
Second-order linear recurrences ...................................................................... solr(3S)............................................................... 526
Second-order linear recurrences ...................................................................... solr3(3S) ............................................................ 528
Second-order linear recurrences ...................................................................... solrn(3S) ............................................................ 531
send general rectangular matrix (BLACS) ..................................................... igesd2d(3S)........................................................ 560
send trapezoidal rectangular matrix (BLACS) ................................................ itrsd2d(3S)........................................................ 570
Sends a general rectangular matrix to another processor ............................... igesd2d(3S)........................................................ 560
Sends a trapezoidal rectangular matrix to another processor ......................... itrsd2d(3S)........................................................ 570
SGAMN2D(3S) ................................................................................................. igamn2d(3S)........................................................ 550
sgamn2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
SGAMX2D(3S) ................................................................................................. igamx2d(3S)........................................................ 552
sgamx2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
sgbco(3S) ...................................................................................................... linpack(3S)........................................................ 355
sgbdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
sgbfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
sgbsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
SGEBR2D(3S) ................................................................................................. igebr2d(3S)........................................................ 554
sgebr2d(3S) ................................................................................................. igebr2d(3S)........................................................ 554
SGEBS2D(3S) ................................................................................................. igebs2d(3S)........................................................ 556
sgebs2d(3S) ................................................................................................. igebs2d(3S)........................................................ 556
sgeco(3S) ...................................................................................................... linpack(3S)........................................................ 355
sgedi(3S) ...................................................................................................... linpack(3S)........................................................ 355
sgefa(3S) ...................................................................................................... linpack(3S)........................................................ 355



SGERV2D(3S) ................................................................................................. igerv2d(3S)........................................................ 558
sgerv2d(3S) ................................................................................................. igerv2d(3S)........................................................ 558
SGESD2D(3S) ................................................................................................. igesd2d(3S)........................................................ 560
sgesd2d(3S) ................................................................................................. igesd2d(3S)........................................................ 560
sgesl(3S) ...................................................................................................... linpack(3S)........................................................ 355
SGMAX2D(3S) ................................................................................................. igamx2d(3S)........................................................ 552
sgmax2d(3S) ................................................................................................. igamx2d(3S)........................................................ 552
SGMIN2D(3S) ................................................................................................. igamn2d(3S)........................................................ 550
sgmin2d(3S) ................................................................................................. igamn2d(3S)........................................................ 550
SGSUM2D(3S) ................................................................................................. igsum2d(3S)........................................................ 562
sgsum2d(3S) ................................................................................................. igsum2d(3S)........................................................ 562
sgtsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
Single-precision real and complex LINPACK routines .................................. linpack(3S)........................................................ 355
Singular value decomposition ......................................................................... eispack(3S)........................................................ 349
Singular value decomposition ......................................................................... linpack(3S)........................................................ 355
SITRSOL(3S) ................................................................................................. sitrsol(3S)........................................................ 466
sitrsol(3S) ................................................................................................. sitrsol(3S)........................................................ 466
SLAMCH(3S) .................................................................................................... slamch(3S) .......................................................... 626
slamch(3S) .................................................................................................... slamch(3S) .......................................................... 626
SMACH(3S) ...................................................................................................... smach(3S) ............................................................ 628
smach(3S) ...................................................................................................... smach(3S) ............................................................ 628
SMXPY(3S) ...................................................................................................... smxpy(3S) ............................................................ 647
smxpy(3S) ...................................................................................................... smxpy(3S) ............................................................ 647
SOLR3(3S) ...................................................................................................... solr3(3S) ............................................................ 528
solr3(3S) ...................................................................................................... solr3(3S) ............................................................ 528
SOLR(3S) ........................................................................................................ solr(3S)............................................................... 526
solr(3S) ........................................................................................................ solr(3S)............................................................... 526
SOLRN(3S) ...................................................................................................... solrn(3S) ............................................................ 531
solrn(3S) ...................................................................................................... solrn(3S) ............................................................ 531
Solver .............................................................................................................. minv(3S)............................................................... 634
Solves a first-order linear recurrence with a scalar multiplier ....................... folrc(3S) ............................................................ 511
Solves a partial product or partial summation problem ................................. recpp(3S) ............................................................ 516
Solves a real general sparse system, using a preconditioned conjugate
gradient-like method ....................................................................................... sitrsol(3S)........................................................ 466
Solves a real or complex distributed system of linear equations ................... psgetrs(3S)........................................................ 411
Solves a real or complex distributed triangular system .................................. pstrtrs(3S)........................................................ 449
Solves a real sparse general system, using the factorization computed in
SSGETRF(3S) ................................................................................................. ssgetrs(3S)........................................................ 487
Solves a real sparse general system with a symmetric nonzero pattern,
using the factorization computed in SSTSTRF(3S) ....................................... sststrs(3S)........................................................ 501
Solves a real sparse symmetric definite system, using the factorization
computed in SSPOTRF(3S) ............................................................................ sspotrs(3S)........................................................ 494
Solves a real symmetric or complex Hermitian system of linear equations .. psposv(3S) .......................................................... 414
Solves a real symmetric positive definite or complex Hermitian positive
definite system of linear equations ................................................................. pspotrs(3S)........................................................ 424
Solves a real-valued or complex-valued tridiagonal system with one
right-hand side ................................................................................................. sdtsol(3S) .......................................................... 518
Solves a real-valued or complex-valued tridiagonal system with one
right-hand side, using its factorization as computed by SDTTRF(3S) or
CDTTRF(3S) .................................................................................................... sdttrs(3S) .......................................................... 523



Solves a second-order linear recurrence ......................................................... solr(3S)............................................................... 526
Solves a second-order linear recurrence for only the last term ...................... solrn(3S) ............................................................ 531
Solves a second-order linear recurrence for three terms ................................ solr3(3S) ............................................................ 528
Solves a tridiagonal system ............................................................................. trid(3S)............................................................... 651
Solves a virtual real or virtual complex triangular system of equations
with multiple right-hand sides ........................................................................ vstrsm(3S) .......................................................... 619
Solves a virtual system of linear equations, using the LU factorization
computed by VSGETRF(3S) or VCGETRF(3S) .............................................. vsgetrs(3S)........................................................ 608
Solves a virtual system of linear equations with a symmetric positive
definite matrix whose Cholesky factorization has been computed by
VSPOTRF(3S) ................................................................................................. vspotrs(3S)........................................................ 612
Solves first-order linear recurrences ............................................................... folr(3S)............................................................... 504
Solves first-order linear recurrences without overwriting the operand
vector ............................................................................................................... folr2(3S) ............................................................ 509
Solves for the last term of first-order linear recurrence ................................. folrn(3S) ............................................................ 513
Solves systems of linear equations by inverting a square matrix .................. minv(3S)............................................................... 634
Sparse .............................................................................................................. dfaults(3S)........................................................ 461
Sparse .............................................................................................................. sitrsol(3S)........................................................ 466
Sparse .............................................................................................................. ssgetrf(3S)........................................................ 482
Sparse .............................................................................................................. ssgetrs(3S)........................................................ 487
Sparse .............................................................................................................. sspotrf(3S)........................................................ 489
Sparse .............................................................................................................. sspotrs(3S)........................................................ 494
Sparse .............................................................................................................. sststrf(3S)........................................................ 496
Sparse .............................................................................................................. sststrs(3S)........................................................ 501
Sparse factor .................................................................................................... ssgetrf(3S)........................................................ 482
Sparse factor .................................................................................................... sspotrf(3S)........................................................ 489
Sparse factor .................................................................................................... sststrf(3S)........................................................ 496
Sparse linear system ........................................................................................ intro_sparse(3S) ............................................ 453
Sparse linear system ........................................................................................ dfaults(3S)........................................................ 461
Sparse linear system ........................................................................................ sitrsol(3S)........................................................ 466
Sparse linear system ........................................................................................ ssgetrf(3S)........................................................ 482
Sparse linear system ........................................................................................ ssgetrs(3S)........................................................ 487
Sparse linear system ........................................................................................ sspotrf(3S)........................................................ 489
Sparse linear system ........................................................................................ sspotrs(3S)........................................................ 494
Sparse linear system ........................................................................................ sststrf(3S)........................................................ 496
Sparse linear system ........................................................................................ sststrs(3S)........................................................ 501
Sparse matrix .................................................................................................. intro_sparse(3S) ............................................ 453
Sparse matrix .................................................................................................. dfaults(3S)........................................................ 461
Sparse matrix .................................................................................................. sitrsol(3S)........................................................ 466
Sparse matrix .................................................................................................. ssgetrf(3S)........................................................ 482
Sparse matrix .................................................................................................. ssgetrs(3S)........................................................ 487
Sparse matrix .................................................................................................. sspotrf(3S)........................................................ 489
Sparse matrix .................................................................................................. sspotrs(3S)........................................................ 494
Sparse matrix .................................................................................................. sststrf(3S)........................................................ 496
Sparse matrix .................................................................................................. sststrs(3S)........................................................ 501
Sparse matrix factoring ................................................................................... ssgetrf(3S)........................................................ 482
Sparse matrix factoring ................................................................................... sspotrf(3S)........................................................ 489
Sparse matrix factoring ................................................................................... sststrf(3S)........................................................ 496
Sparse solver ................................................................................................... intro_sparse(3S) ............................................ 453
Sparse solver ................................................................................................... dfaults(3S)........................................................ 461

Index-16 004– 2081– 002


Sparse solver ................................................................................................... sitrsol(3S)........................................................ 466
Sparse solver ................................................................................................... ssgetrf(3S)........................................................ 482
Sparse solver ................................................................................................... ssgetrs(3S)........................................................ 487
Sparse solver ................................................................................................... sspotrf(3S)........................................................ 489
Sparse solver ................................................................................................... sspotrs(3S)........................................................ 494
Sparse solver ................................................................................................... sststrf(3S)........................................................ 496
Sparse solver ................................................................................................... sststrs(3S)........................................................ 501
SPARSE(3S) .................................................................................................... intro_sparse(3S) ............................................ 453
spbco(3S) ...................................................................................................... linpack(3S)........................................................ 355
spbdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
spbfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
spbsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
Special linear systems ..................................................................................... intro_spec(3S) ................................................ 503
SPEC_SYS(3S) ............................................................................................... intro_spec(3S) ................................................ 503
spoco(3S) ...................................................................................................... linpack(3S)........................................................ 355
spodi(3S) ...................................................................................................... linpack(3S)........................................................ 355
spofa(3S) ...................................................................................................... linpack(3S)........................................................ 355
sposl(3S) ...................................................................................................... linpack(3S)........................................................ 355
sppco(3S) ...................................................................................................... linpack(3S)........................................................ 355
sppdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
sppfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
sppsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
sptsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
sqrdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
sqrsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
Square matrix .................................................................................................. minv(3S)............................................................... 634
SSGETRF(3S) ................................................................................................. ssgetrf(3S)........................................................ 482
ssgetrf(3S) ................................................................................................. ssgetrf(3S)........................................................ 482
SSGETRS(3S) ................................................................................................. ssgetrs(3S)........................................................ 487
ssgetrs(3S) ................................................................................................. ssgetrs(3S)........................................................ 487
ssico(3S) ...................................................................................................... linpack(3S)........................................................ 355
ssidi(3S) ...................................................................................................... linpack(3S)........................................................ 355
ssifa(3S) ...................................................................................................... linpack(3S)........................................................ 355
ssisl(3S) ...................................................................................................... linpack(3S)........................................................ 355
sspco(3S) ...................................................................................................... linpack(3S)........................................................ 355
sspdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
sspfa(3S) ...................................................................................................... linpack(3S)........................................................ 355
SSPOTRF(3S) ................................................................................................. sspotrf(3S)........................................................ 489
sspotrf(3S) ................................................................................................. sspotrf(3S)........................................................ 489
SSPOTRS(3S) ................................................................................................. sspotrs(3S)........................................................ 494
sspotrs(3S) ................................................................................................. sspotrs(3S)........................................................ 494
sspsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
SSTSTRF(3S) ................................................................................................. sststrf(3S)........................................................ 496
sststrf(3S) ................................................................................................. sststrf(3S)........................................................ 496
SSTSTRS(3S) ................................................................................................. sststrs(3S)........................................................ 501
sststrs(3S) ................................................................................................. sststrs(3S)........................................................ 501
ssvdc(3S) ...................................................................................................... linpack(3S)........................................................ 355
Stops execution until all specified processes have called a routine ................ blacs_barrier(3S) ......................................... 539
STRBR2D(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
strbr2d(3S) ................................................................................................. itrbr2d(3S)........................................................ 564
STRBS2D(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
strbs2d(3S) ................................................................................................. itrbs2d(3S)........................................................ 566
strco(3S) ...................................................................................................... linpack(3S)........................................................ 355
strdi(3S) ...................................................................................................... linpack(3S)........................................................ 355
STRRV2D(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
strrv2d(3S) ................................................................................................. itrrv2d(3S)........................................................ 568
STRSD2D(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
strsd2d(3S) ................................................................................................. itrsd2d(3S)........................................................ 570
strsl(3S) ...................................................................................................... linpack(3S)........................................................ 355
SUPERSEDED(3S) .......................................................................................... intro_superseded(3S) .................................. 631
superseded(3S) .......................................................................................... intro_superseded(3S) .................................. 631
SVD(3S) ........................................................................................................... eispack(3S)........................................................ 349
svd(3S) ........................................................................................................... eispack(3S)........................................................ 349
SXMPY(3S) ...................................................................................................... sxmpy(3S) ............................................................ 649
sxmpy(3S) ...................................................................................................... sxmpy(3S) ............................................................ 649
Symmetric matrix ............................................................................................ vssyrk(3S) .......................................................... 614
Symmetric matrix ............................................................................................ vstorage(3S) ..................................................... 616
Symmetric matrix (CORE) ............................................................................. vspotrs(3S)........................................................ 612
Symmetric rank k update ................................................................................ vssyrk(3S) .......................................................... 614
System of linear equations .............................................................................. minv(3S)............................................................... 634
Termination ..................................................................................................... vend(3S)............................................................... 598
TINVIT(3S) .................................................................................................... eispack(3S)........................................................ 349
tinvit(3S) .................................................................................................... eispack(3S)........................................................ 349
TQL1(3S) ........................................................................................................ eispack(3S)........................................................ 349
tql1(3S) ........................................................................................................ eispack(3S)........................................................ 349
TQL2(3S) ........................................................................................................ eispack(3S)........................................................ 349
tql2(3S) ........................................................................................................ eispack(3S)........................................................ 349
TQLRAT(3S) .................................................................................................... eispack(3S)........................................................ 349
tqlrat(3S) .................................................................................................... eispack(3S)........................................................ 349
TRBAK3(3S) .................................................................................................... eispack(3S)........................................................ 349
trbak3(3S) .................................................................................................... eispack(3S)........................................................ 349
TRBAK(3S) ...................................................................................................... eispack(3S)........................................................ 349
trbak(3S) ...................................................................................................... eispack(3S)........................................................ 349
TRED1(3S) ...................................................................................................... eispack(3S)........................................................ 349
tred1(3S) ...................................................................................................... eispack(3S)........................................................ 349
TRED2(3S) ...................................................................................................... eispack(3S)........................................................ 349
tred2(3S) ...................................................................................................... eispack(3S)........................................................ 349
TRED3(3S) ...................................................................................................... eispack(3S)........................................................ 349
tred3(3S) ...................................................................................................... eispack(3S)........................................................ 349
Triangular matrix ............................................................................................ vstorage(3S) ..................................................... 616
Triangular system of equations ....................................................................... vstrsm(3S) .......................................................... 619
TRID(3S) ........................................................................................................ trid(3S)............................................................... 651
trid(3S) ........................................................................................................ trid(3S)............................................................... 651
Tridiagonal ...................................................................................................... sdtsol(3S) .......................................................... 518
Tridiagonal ...................................................................................................... sdttrf(3S) .......................................................... 520
Tridiagonal ...................................................................................................... sdttrs(3S) .......................................................... 523
Tridiagonal ...................................................................................................... trid(3S)............................................................... 651
Tridiagonal system .......................................................................................... sdtsol(3S) .......................................................... 518
Tridiagonal system .......................................................................................... sdttrf(3S) .......................................................... 520
Tridiagonal system .......................................................................................... sdttrs(3S) .......................................................... 523
Tridiagonal system .......................................................................................... trid(3S)............................................................... 651
TRIDIB(3S) .................................................................................................... eispack(3S)........................................................ 349
tridib(3S) .................................................................................................... eispack(3S)........................................................ 349
TSTURM(3S) .................................................................................................... eispack(3S)........................................................ 349
tsturm(3S) .................................................................................................... eispack(3S)........................................................ 349
user-created grids (BLACS) ............................................................................ blacs_exit(3S) ................................................ 540
variable initialization (BLACS) ...................................................................... blacs_gridmap(3S) ......................................... 544
VBEGIN(3S) .................................................................................................... vbegin(3S) .......................................................... 595
vbegin(3S) .................................................................................................... vbegin(3S) .......................................................... 595
VBLAS ............................................................................................................ intro_core(3S) ................................................ 575
VBLAS(3S) ...................................................................................................... intro_core(3S) ................................................ 575
vblas(3S) ...................................................................................................... intro_core(3S) ................................................ 575
VCGEMM(3S) .................................................................................................... vsgemm(3S) .......................................................... 600
vcgemm(3S) .................................................................................................... vsgemm(3S) .......................................................... 600
VCGETRF(3S) ................................................................................................. vsgetrf(3S)........................................................ 604
vcgetrf(3S) ................................................................................................. vsgetrf(3S)........................................................ 604
VCGETRS(3S) ................................................................................................. vsgetrs(3S)........................................................ 608
vcgetrs(3S) ................................................................................................. vsgetrs(3S)........................................................ 608
VCOPY(3S) ...................................................................................................... intro_core(3S) ................................................ 575
vcopy(3S) ...................................................................................................... intro_core(3S) ................................................ 575
VCTRSM(3S) .................................................................................................... vstrsm(3S) .......................................................... 619
vctrsm(3S) .................................................................................................... vstrsm(3S) .......................................................... 619
VEND(3S) ........................................................................................................ vend(3S)............................................................... 598
vend(3S) ........................................................................................................ vend(3S)............................................................... 598
Virtual ............................................................................................................. intro_core(3S) ................................................ 575
Virtual ............................................................................................................. scopy2rv(3S) ..................................................... 590
Virtual ............................................................................................................. scopy2vr(3S) ..................................................... 593
Virtual ............................................................................................................. vend(3S)............................................................... 598
Virtual ............................................................................................................. vsgemm(3S) .......................................................... 600
Virtual ............................................................................................................. vsgetrf(3S)........................................................ 604
Virtual ............................................................................................................. vsgetrs(3S)........................................................ 608
Virtual ............................................................................................................. vstorage(3S) ..................................................... 616
Virtual ............................................................................................................. vstrsm(3S) .......................................................... 619
Virtual BLAS .................................................................................................. intro_core(3S) ................................................ 575
Virtual BLAS .................................................................................................. vbegin(3S) .......................................................... 595
Virtual copy .................................................................................................... intro_core(3S) ................................................ 575
Virtual LAPACK ............................................................................................ intro_core(3S) ................................................ 575
Virtual routines ............................................................................................... vspotrf(3S)........................................................ 610
VLAPACK(3S) ................................................................................................. intro_core(3S) ................................................ 575
vlapack(3S) ................................................................................................. intro_core(3S) ................................................ 575
VSGEMM(3S) .................................................................................................... vsgemm(3S) .......................................................... 600
vsgemm(3S) .................................................................................................... vsgemm(3S) .......................................................... 600
VSGETRF(3S) ................................................................................................. vsgetrf(3S)........................................................ 604
vsgetrf(3S) ................................................................................................. vsgetrf(3S)........................................................ 604
VSGETRS(3S) ................................................................................................. vsgetrs(3S)........................................................ 608
vsgetrs(3S) ................................................................................................. vsgetrs(3S)........................................................ 608
VSPOTRF(3S) ................................................................................................. vspotrf(3S)........................................................ 610
vspotrf(3S) ................................................................................................. vspotrf(3S)........................................................ 610
VSPOTRS(3S) ................................................................................................. vspotrs(3S)........................................................ 612
vspotrs(3S) ................................................................................................. vspotrs(3S)........................................................ 612
VSSYRK(3S) .................................................................................................... vssyrk(3S) .......................................................... 614
vssyrk(3S) .................................................................................................... vssyrk(3S) .......................................................... 614
VSTORAGE(3S) ............................................................................................... vstorage(3S) ..................................................... 616
vstorage(3S) ............................................................................................... vstorage(3S) ..................................................... 616
VSTRSM(3S) .................................................................................................... vstrsm(3S) .......................................................... 619
vstrsm(3S) .................................................................................................... vstrsm(3S) .......................................................... 619