OpenRadioss 2025.1.11
cgejsv.f
1*> \brief \b CGEJSV
2*
3* =========== DOCUMENTATION ===========
4*
5* Online html documentation available at
6* http://www.netlib.org/lapack/explore-html/
7*
8*> \htmlonly
9*> Download CGEJSV + dependencies
10*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/cgejsv.f">
11*> [TGZ]</a>
12*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/cgejsv.f">
13*> [ZIP]</a>
14*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/cgejsv.f">
15*> [TXT]</a>
16*> \endhtmlonly
17*
18* Definition:
19* ===========
20*
21* SUBROUTINE CGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
22* M, N, A, LDA, SVA, U, LDU, V, LDV,
23* CWORK, LWORK, RWORK, LRWORK, IWORK, INFO )
24*
25* .. Scalar Arguments ..
26* IMPLICIT NONE
27* INTEGER INFO, LDA, LDU, LDV, LWORK, M, N
28* ..
29* .. Array Arguments ..
30* COMPLEX A( LDA, * ), U( LDU, * ), V( LDV, * ), CWORK( LWORK )
31* REAL SVA( N ), RWORK( LRWORK )
32* INTEGER IWORK( * )
33* CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
34* ..
35*
36*
37*> \par Purpose:
38* =============
39*>
40*> \verbatim
41*>
42*> CGEJSV computes the singular value decomposition (SVD) of a complex M-by-N
43*> matrix [A], where M >= N. The SVD of [A] is written as
44*>
45*> [A] = [U] * [SIGMA] * [V]^*,
46*>
47*> where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N
48*> diagonal elements, [U] is an M-by-N (or M-by-M) unitary matrix, and
49*> [V] is an N-by-N unitary matrix. The diagonal elements of [SIGMA] are
50*> the singular values of [A]. The columns of [U] and [V] are the left and
51*> the right singular vectors of [A], respectively. The matrices [U] and [V]
52*> are computed and stored in the arrays U and V, respectively. The diagonal
53*> of [SIGMA] is computed and stored in the array SVA.
54*> \endverbatim
55*>
56*> Arguments:
57*> ==========
58*>
59*> \param[in] JOBA
60*> \verbatim
61*> JOBA is CHARACTER*1
62*> Specifies the level of accuracy:
63*> = 'C': This option works well (high relative accuracy) if A = B * D,
64*> with well-conditioned B and arbitrary diagonal matrix D.
65*> The accuracy cannot be spoiled by COLUMN scaling. The
66*> accuracy of the computed output depends on the condition of
67*> B, and the procedure aims at the best theoretical accuracy.
68*> The relative error max_{i=1:N}|d sigma_i| / sigma_i is
69*> bounded by f(M,N)*epsilon* cond(B), independent of D.
70*> The input matrix is preprocessed with the QRF with column
71*> pivoting. This initial preprocessing and preconditioning by
72*> a rank revealing QR factorization is common for all values of
73*> JOBA. Additional actions are specified as follows:
74*> = 'E': Computation as with 'C' with an additional estimate of the
75*> condition number of B. It provides a realistic error bound.
76*> = 'F': If A = D1 * C * D2 with ill-conditioned diagonal scalings
77*> D1, D2, and well-conditioned matrix C, this option gives
78*> higher accuracy than the 'C' option. If the structure of the
79*> input matrix is not known, and relative accuracy is
80*> desirable, then this option is advisable. The input matrix A
81*> is preprocessed with QR factorization with FULL (row and
82*> column) pivoting.
83*> = 'G': Computation as with 'F' with an additional estimate of the
84*> condition number of B, where A=B*D. If A has heavily weighted
85*> rows, then using this condition number gives an overly pessimistic
86*> error bound.
87*> = 'A': Small singular values are not well determined by the data
88*> and are considered as noisy; the matrix is treated as
89*> numerically rank deficient. The error in the computed
90*> singular values is bounded by f(m,n)*epsilon*||A||.
91*> The computed SVD A = U * S * V^* restores A up to
92*> f(m,n)*epsilon*||A||.
93*> This gives the procedure the licence to discard (set to zero)
94*> all singular values below N*epsilon*||A||.
95*> = 'R': Similar as in 'A'. Rank revealing property of the initial
96*> QR factorization is used to reveal (using the triangular factor)
97*> a gap sigma_{r+1} < epsilon * sigma_r in which case the
98*> numerical RANK is declared to be r. The SVD is computed with
99*> absolute error bounds, but more accurately than with 'A'.
100*> \endverbatim
101*>
102*> \param[in] JOBU
103*> \verbatim
104*> JOBU is CHARACTER*1
105*> Specifies whether to compute the columns of U:
106*> = 'U': N columns of U are returned in the array U.
107*> = 'F': full set of M left sing. vectors is returned in the array U.
108*> = 'W': U may be used as workspace of length M*N. See the description
109*> of U.
110*> = 'N': U is not computed.
111*> \endverbatim
112*>
113*> \param[in] JOBV
114*> \verbatim
115*> JOBV is CHARACTER*1
116*> Specifies whether to compute the matrix V:
117*> = 'V': N columns of V are returned in the array V; Jacobi rotations
118*> are not explicitly accumulated.
119*> = 'J': N columns of V are returned in the array V, but they are
120*> computed as the product of Jacobi rotations, if JOBT = 'N'.
121*> = 'W': V may be used as workspace of length N*N. See the description
122*> of V.
123*> = 'N': V is not computed.
124*> \endverbatim
125*>
126*> \param[in] JOBR
127*> \verbatim
128*> JOBR is CHARACTER*1
129*> Specifies the RANGE for the singular values. Issues the licence to
130*> set to zero small positive singular values if they are outside the
131*> specified range. If A .NE. 0 is scaled so that the largest singular
132*> value of c*A is around SQRT(BIG), BIG=SLAMCH('O'), then JOBR issues
133*> the licence to kill columns of A whose norm in c*A is less than
134*> SQRT(SFMIN) (for JOBR = 'R'), or less than SMALL=SFMIN/EPSLN,
135*> where SFMIN=SLAMCH('S'), EPSLN=SLAMCH('E').
136*> = 'N': Do not kill small columns of c*A. This option assumes that
137*> BLAS and QR factorizations and triangular solvers are
138*> implemented to work in that range. If the condition of A
139*> is greater than BIG, use CGESVJ.
140*> = 'R': RESTRICTED range for sigma(c*A) is [SQRT(SFMIN), SQRT(BIG)]
141*> (roughly, as described above). This option is recommended.
142*> ===========================
143*> For computing the singular values in the FULL range [SFMIN,BIG]
144*> use CGESVJ.
145*> \endverbatim
146*>
147*> \param[in] JOBT
148*> \verbatim
149*> JOBT is CHARACTER*1
150*> If the matrix is square then the procedure may determine to use
151*> transposed A if A^* seems to be better with respect to convergence.
152*> If the matrix is not square, JOBT is ignored.
153*> The decision is based on two values of entropy over the adjoint
154*> orbit of A^* * A. See the descriptions of RWORK(6) and RWORK(7).
155*> = 'T': transpose if entropy test indicates possibly faster
156*> convergence of Jacobi process if A^* is taken as input. If A is
157*> replaced with A^*, then the row pivoting is included automatically.
158*> = 'N': do not speculate.
159*> The option 'T' can be used to compute only the singular values, or
160*> the full SVD (U, SIGMA and V). For only one set of singular vectors
161*> (U or V), the caller should provide both U and V, as one of the
162*> matrices is used as workspace if the matrix A is transposed.
163*> The implementer can easily remove this constraint and make the
164*> code more complicated. See the descriptions of U and V.
165*> In general, this option is considered experimental, and 'N' should
166*> be preferred. This is subject to changes in the future.
167*> \endverbatim
168*>
169*> \param[in] JOBP
170*> \verbatim
171*> JOBP is CHARACTER*1
172*> Issues the licence to introduce structured perturbations to drown
173*> denormalized numbers. This licence should be active if the
174*> denormals are poorly implemented, causing slow computation,
175*> especially in cases of fast convergence (!). For details see [1,2].
176*> For the sake of simplicity, these perturbations are included only
177*> when the full SVD or only the singular values are requested. The
178*> implementer/user can easily add the perturbation for the cases of
179*> computing one set of singular vectors.
180*> = 'P': introduce perturbation
181*> = 'N': do not perturb
182*> \endverbatim
183*>
184*> \param[in] M
185*> \verbatim
186*> M is INTEGER
187*> The number of rows of the input matrix A. M >= 0.
188*> \endverbatim
189*>
190*> \param[in] N
191*> \verbatim
192*> N is INTEGER
193*> The number of columns of the input matrix A. M >= N >= 0.
194*> \endverbatim
195*>
196*> \param[in,out] A
197*> \verbatim
198*> A is COMPLEX array, dimension (LDA,N)
199*> On entry, the M-by-N matrix A.
200*> \endverbatim
201*>
202*> \param[in] LDA
203*> \verbatim
204*> LDA is INTEGER
205*> The leading dimension of the array A. LDA >= max(1,M).
206*> \endverbatim
207*>
208*> \param[out] SVA
209*> \verbatim
210*> SVA is REAL array, dimension (N)
211*> On exit,
212*> - For RWORK(1)/RWORK(2) = ONE: The singular values of A. During the
213*> computation SVA contains Euclidean column norms of the
214*> iterated matrices in the array A.
215*> - For RWORK(1) .NE. RWORK(2): The singular values of A are
216*> (RWORK(1)/RWORK(2)) * SVA(1:N). This factored form is used if
217*> sigma_max(A) overflows or if small singular values have been
218*> saved from underflow by scaling the input matrix A.
219*> - If JOBR='R' then some of the singular values may be returned
220*> as exact zeros obtained by "set to zero" because they are
221*> below the numerical rank threshold or are denormalized numbers.
222*> \endverbatim
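*>
*> The fragment below is an editorial sketch, not part of the original
*> LAPACK documentation: it shows how a caller could undo the factored
*> scaling described above after a successful call (when RWORK(1) and
*> RWORK(2) are equal the ratio is one, so the same formula covers both
*> cases). SIGMA is a caller-side array introduced here for illustration.
*> \verbatim
*>            REAL SIGMA( N )
*>            ...
*>            SIGMA( 1:N ) = ( RWORK( 1 ) / RWORK( 2 ) ) * SVA( 1:N )
*> \endverbatim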
223*>
224*> \param[out] U
225*> \verbatim
226*> U is COMPLEX array, dimension ( LDU, N ) or ( LDU, M )
227*> If JOBU = 'U', then U contains on exit the M-by-N matrix of
228*> the left singular vectors.
229*> If JOBU = 'F', then U contains on exit the M-by-M matrix of
230*> the left singular vectors, including an ONB
231*> of the orthogonal complement of the Range(A).
232*> If JOBU = 'W' .AND. (JOBV = 'V' .AND. JOBT = 'T' .AND. M = N),
233*> then U is used as workspace if the procedure
234*> replaces A with A^*. In that case, [V] is computed
235*> in U as left singular vectors of A^* and then
236*> copied back to the V array. This 'W' option is just
237*> a reminder to the caller that in this case U is
238*> reserved as workspace of length N*N.
239*> If JOBU = 'N' U is not referenced, unless JOBT='T'.
240*> \endverbatim
241*>
242*> \param[in] LDU
243*> \verbatim
244*> LDU is INTEGER
245*> The leading dimension of the array U, LDU >= 1.
246*> IF JOBU = 'U' or 'F' or 'W', then LDU >= M.
247*> \endverbatim
248*>
249*> \param[out] V
250*> \verbatim
251*> V is COMPLEX array, dimension ( LDV, N )
252*> If JOBV = 'V', 'J' then V contains on exit the N-by-N matrix of
253*> the right singular vectors;
254*> If JOBV = 'W', AND (JOBU = 'U' AND JOBT = 'T' AND M = N),
255*> then V is used as workspace if the procedure
256*> replaces A with A^*. In that case, [U] is computed
257*> in V as right singular vectors of A^* and then
258*> copied back to the U array. This 'W' option is just
259*> a reminder to the caller that in this case V is
260*> reserved as workspace of length N*N.
261*> If JOBV = 'N' V is not referenced, unless JOBT='T'.
262*> \endverbatim
263*>
264*> \param[in] LDV
265*> \verbatim
266*> LDV is INTEGER
267*> The leading dimension of the array V, LDV >= 1.
268*> If JOBV = 'V' or 'J' or 'W', then LDV >= N.
269*> \endverbatim
270*>
271*> \param[out] CWORK
272*> \verbatim
273*> CWORK is COMPLEX array, dimension (MAX(2,LWORK))
274*> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
275*> LRWORK=-1), then on exit CWORK(1) contains the required length of
276*> CWORK for the job parameters used in the call.
277*> \endverbatim
278*>
279*> \param[in] LWORK
280*> \verbatim
281*> LWORK is INTEGER
282*> Length of CWORK to confirm proper allocation of workspace.
283*> LWORK depends on the job:
284*>
285*> 1. If only SIGMA is needed ( JOBU = 'N', JOBV = 'N' ) and
286*> 1.1 .. no scaled condition estimate required (JOBA.NE.'E'.AND.JOBA.NE.'G'):
287*> LWORK >= 2*N+1. This is the minimal requirement.
288*> ->> For optimal performance (blocked code) the optimal value
289*> is LWORK >= N + (N+1)*NB. Here NB is the optimal
290*> block size for CGEQP3 and CGEQRF.
291*> In general, optimal LWORK is computed as
292*> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CGEQRF), LWORK(CGESVJ)).
293*> 1.2. .. an estimate of the scaled condition number of A is
294*> required (JOBA = 'E' or 'G'). In this case, the minimal
295*> requirement is LWORK >= N*N + 2*N.
296*> ->> For optimal performance (blocked code) the optimal value
297*> is LWORK >= max(N+(N+1)*NB, N*N+2*N)=N**2+2*N.
298*> In general, the optimal length LWORK is computed as
299*> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CGEQRF), LWORK(CGESVJ),
300*> N*N+LWORK(CPOCON)).
301*> 2. If SIGMA and the right singular vectors are needed (JOBV = 'V'),
302*> (JOBU = 'N')
303*> 2.1 .. no scaled condition estimate requested (JOBA.NE.'E'.AND.JOBA.NE.'G'):
304*> -> the minimal requirement is LWORK >= 3*N.
305*> -> For optimal performance,
306*> LWORK >= max(N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
307*> where NB is the optimal block size for CGEQP3, CGEQRF, CGELQ,
308*> CUNMLQ. In general, the optimal length LWORK is computed as
309*> LWORK >= max(N+LWORK(CGEQP3), N+LWORK(CGESVJ),
310*> N+LWORK(CGELQF), 2*N+LWORK(CGEQRF), N+LWORK(CUNMLQ)).
311*> 2.2 .. an estimate of the scaled condition number of A is
312*> required (JOBA='E', or 'G').
313*> -> the minimal requirement is LWORK >= 3*N.
314*> -> For optimal performance,
315*> LWORK >= max(N+(N+1)*NB, 2*N,2*N+N*NB)=2*N+N*NB,
316*> where NB is the optimal block size for CGEQP3, CGEQRF, CGELQ,
317*> CUNMLQ. In general, the optimal length LWORK is computed as
318*> LWORK >= max(N+LWORK(CGEQP3), LWORK(CPOCON), N+LWORK(CGESVJ),
319*> N+LWORK(CGELQF), 2*N+LWORK(CGEQRF), N+LWORK(CUNMLQ)).
320*> 3. If SIGMA and the left singular vectors are needed
321*> 3.1 .. no scaled condition estimate requested (JOBA.NE.'E'.AND.JOBA.NE.'G'):
322*> -> the minimal requirement is LWORK >= 3*N.
323*> -> For optimal performance:
324*> if JOBU = 'U' :: LWORK >= max(3*N, N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
325*> where NB is the optimal block size for CGEQP3, CGEQRF, CUNMQR.
326*> In general, the optimal length LWORK is computed as
327*> LWORK >= max(N+LWORK(CGEQP3), 2*N+LWORK(CGEQRF), N+LWORK(CUNMQR)).
328*> 3.2 .. an estimate of the scaled condition number of A is
329*> required (JOBA='E', or 'G').
330*> -> the minimal requirement is LWORK >= 3*N.
331*> -> For optimal performance:
332*> if JOBU = 'U' :: LWORK >= max(3*N, N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
333*> where NB is the optimal block size for CGEQP3, CGEQRF, CUNMQR.
334*> In general, the optimal length LWORK is computed as
335*> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CPOCON),
336*> 2*N+LWORK(CGEQRF), N+LWORK(CUNMQR)).
337*>
338*> 4. If the full SVD is needed: (JOBU = 'U' or JOBU = 'F') and
339*> 4.1. if JOBV = 'V'
340*> the minimal requirement is LWORK >= 5*N+2*N*N.
341*> 4.2. if JOBV = 'J' the minimal requirement is
342*> LWORK >= 4*N+N*N.
343*> In both cases, the allocated CWORK can accommodate blocked runs
344*> of CGEQP3, CGEQRF, CGELQF, CUNMQR, CUNMLQ.
345*>
346*> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
347*> LRWORK=-1), then on exit CWORK(1) contains the optimal and CWORK(2) contains the
348*> minimal length of CWORK for the job parameters used in the call.
349*> \endverbatim
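*>
*> A minimal sketch of the workspace query described above (editorial
*> illustration only; CWQ and the names LWORK_OPT, LWORK_MIN are
*> assumptions made for this example, not part of the interface):
*> \verbatim
*>            COMPLEX CWQ( 2 )
*>            ...
*>            CALL CGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP, M, N, A,
*>           $             LDA, SVA, U, LDU, V, LDV, CWQ, -1, RWORK,
*>           $             LRWORK, IWORK, INFO )
*>            LWORK_OPT = INT( REAL( CWQ( 1 ) ) )
*>            LWORK_MIN = INT( REAL( CWQ( 2 ) ) )
*> \endverbatim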
350*>
351*> \param[out] RWORK
352*> \verbatim
353*> RWORK is REAL array, dimension (MAX(7,LRWORK))
354*> On exit,
355*> RWORK(1) = Determines the scaling factor SCALE = RWORK(2) / RWORK(1)
356*> such that SCALE*SVA(1:N) are the computed singular values
357*> of A. (See the description of SVA().)
358*> RWORK(2) = See the description of RWORK(1).
359*> RWORK(3) = SCONDA is an estimate for the condition number of
360*> column equilibrated A. (If JOBA = 'E' or 'G')
361*> SCONDA is an estimate of SQRT(||(R^* * R)^(-1)||_1).
362*> It is computed using CPOCON. It holds
363*> N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
364*> where R is the triangular factor from the QRF of A.
365*> However, if R is truncated and the numerical rank is
366*> determined to be strictly smaller than N, SCONDA is
367*> returned as -1, thus indicating that the smallest
368*> singular values might be lost.
369*>
370*> If full SVD is needed, the following two condition numbers are
371*> useful for the analysis of the algorithm. They are provided for
372*> a developer/implementer who is familiar with the details of
373*> the method.
374*>
375*> RWORK(4) = an estimate of the scaled condition number of the
376*> triangular factor in the first QR factorization.
377*> RWORK(5) = an estimate of the scaled condition number of the
378*> triangular factor in the second QR factorization.
379*> The following two parameters are computed if JOBT = 'T'.
380*> They are provided for a developer/implementer who is familiar
381*> with the details of the method.
382*> RWORK(6) = the entropy of A^* * A :: this is the Shannon entropy
383*> of diag(A^* * A) / Trace(A^* * A) taken as point in the
384*> probability simplex.
385*> RWORK(7) = the entropy of A * A^*. (See the description of RWORK(6).)
386*> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
387*> LRWORK=-1), then on exit RWORK(1) contains the required length of
388*> RWORK for the job parameters used in the call.
389*> \endverbatim
390*>
391*> \param[in] LRWORK
392*> \verbatim
393*> LRWORK is INTEGER
394*> Length of RWORK to confirm proper allocation of workspace.
395*> LRWORK depends on the job:
396*>
397*> 1. If only the singular values are requested i.e. if
398*> LSAME(JOBU,'N') .AND. LSAME(JOBV,'N')
399*> then:
400*> 1.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
401*> then: LRWORK = max( 7, 2 * M ).
402*> 1.2. Otherwise, LRWORK = max( 7, N ).
403*> 2. If singular values with the right singular vectors are requested
404*> i.e. if
405*> (LSAME(JOBV,'V').OR.LSAME(JOBV,'J')) .AND.
406*> .NOT.(LSAME(JOBU,'U').OR.LSAME(JOBU,'F'))
407*> then:
408*> 2.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
409*> then LRWORK = max( 7, 2 * M ).
410*> 2.2. Otherwise, LRWORK = max( 7, N ).
411*> 3. If singular values with the left singular vectors are requested, i.e. if
412*> (LSAME(JOBU,'U').OR.LSAME(JOBU,'F')) .AND.
413*> .NOT.(LSAME(JOBV,'V').OR.LSAME(JOBV,'J'))
414*> then:
415*> 3.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
416*> then LRWORK = max( 7, 2 * M ).
417*> 3.2. Otherwise, LRWORK = max( 7, N ).
418*> 4. If singular values with both the left and the right singular vectors
419*> are requested, i.e. if
420*> (LSAME(JOBU,'U').OR.LSAME(JOBU,'F')) .AND.
421*> (LSAME(JOBV,'V').OR.LSAME(JOBV,'J'))
422*> then:
423*> 4.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
424*> then LRWORK = max( 7, 2 * M ).
425*> 4.2. Otherwise, LRWORK = max( 7, N ).
426*>
427*> If, on entry, LRWORK = -1 or LWORK=-1, a workspace query is assumed and
428*> the length of RWORK is returned in RWORK(1).
429*> \endverbatim
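*>
*> All four cases above reduce to the same rule. A caller-side sketch
*> (editorial illustration, assuming the LAPACK utility function LSAME):
*> \verbatim
*>            IF ( LSAME( JOBT, 'T' ) .OR. LSAME( JOBA, 'F' )
*>           $     .OR. LSAME( JOBA, 'G' ) ) THEN
*>               LRWORK = MAX( 7, 2 * M )
*>            ELSE
*>               LRWORK = MAX( 7, N )
*>            END IF
*> \endverbatim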
430*>
431*> \param[out] IWORK
432*> \verbatim
433*> IWORK is INTEGER array, of dimension at least 4, that further depends
434*> on the job:
435*>
436*> 1. If only the singular values are requested then:
437*> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
438*> then the length of IWORK is N+M; otherwise the length of IWORK is N.
439*> 2. If the singular values and the right singular vectors are requested then:
440*> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
441*> then the length of IWORK is N+M; otherwise the length of IWORK is N.
442*> 3. If the singular values and the left singular vectors are requested then:
443*> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
444*> then the length of IWORK is N+M; otherwise the length of IWORK is N.
445*> 4. If the singular values with both the left and the right singular vectors
446*> are requested, then:
447*> 4.1. If LSAME(JOBV,'J') the length of IWORK is determined as follows:
448*> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
449*> then the length of IWORK is N+M; otherwise the length of IWORK is N.
450*> 4.2. If LSAME(JOBV,'V') the length of IWORK is determined as follows:
451*> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
452*> then the length of IWORK is 2*N+M; otherwise the length of IWORK is 2*N.
453*>
454*> On exit,
455*> IWORK(1) = the numerical rank determined after the initial
456*> QR factorization with pivoting. See the descriptions
457*> of JOBA and JOBR.
458*> IWORK(2) = the number of the computed nonzero singular values
459*> IWORK(3) = if nonzero, a warning message:
460*> If IWORK(3) = 1 then some of the column norms of A
461*> were denormalized floats. The requested high accuracy
462*> is not warranted by the data.
463*> IWORK(4) = 1 or -1. If IWORK(4) = 1, then the procedure used A^* to
464*> do the job as specified by the JOB parameters.
465*> If the call to CGEJSV is a workspace query (indicated by LWORK = -1 and
466*> LRWORK = -1), then on exit IWORK(1) contains the required length of
467*> IWORK for the job parameters used in the call.
468*> \endverbatim
469*>
470*> \param[out] INFO
471*> \verbatim
472*> INFO is INTEGER
473*> < 0: if INFO = -i, then the i-th argument had an illegal value.
474*> = 0: successful exit;
475*> > 0: CGEJSV did not converge in the maximal allowed number
476*> of sweeps. The computed values may be inaccurate.
477*> \endverbatim
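*
*> \par Example (editorial sketch):
*  ================================
*>
*> The complete driver below is an illustrative sketch added to this
*> listing; it is not part of the original LAPACK documentation. It
*> assumes linking against a LAPACK build that provides CGEJSV; the
*> test matrix, the job options and the helper names (CWQ, RWQ, IWQ)
*> are arbitrary choices made for the example.
*>
*> \verbatim
*>        PROGRAM CGEJSV_DEMO
*>        IMPLICIT NONE
*>        INTEGER, PARAMETER :: M = 4, N = 3, LDA = M, LDU = M, LDV = N
*>        COMPLEX A( LDA, N ), U( LDU, N ), V( LDV, N ), CWQ( 2 )
*>        REAL    SVA( N ), RWQ( 7 )
*>        INTEGER IWQ( 4 ), LWORK, LRWORK, INFO, J
*>        COMPLEX, ALLOCATABLE :: CWORK( : )
*>        REAL,    ALLOCATABLE :: RWORK( : )
*>        INTEGER, ALLOCATABLE :: IWORK( : )
*>        EXTERNAL CGEJSV
*>  *     .. a small test matrix (diagonal, trivially well scaled) ..
*>        A = CMPLX( 0.0E0, 0.0E0 )
*>        DO J = 1, N
*>           A( J, J ) = CMPLX( REAL( J ), 1.0E0 )
*>        END DO
*>  *     .. workspace query: LWORK = -1 and LRWORK = -1 ..
*>        CALL CGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, LDA, SVA,
*>       $             U, LDU, V, LDV, CWQ, -1, RWQ, -1, IWQ, INFO )
*>        LWORK  = INT( REAL( CWQ( 1 ) ) )
*>        LRWORK = INT( RWQ( 1 ) )
*>        ALLOCATE( CWORK( LWORK ), RWORK( LRWORK ), IWORK( IWQ( 1 ) ) )
*>  *     .. compute the SVD  A = U * SIGMA * V^* ..
*>        CALL CGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, LDA, SVA,
*>       $             U, LDU, V, LDV, CWORK, LWORK, RWORK, LRWORK,
*>       $             IWORK, INFO )
*>        IF ( INFO .EQ. 0 ) THEN
*>  *        .. undo the factored scaling (see SVA and RWORK above) ..
*>           WRITE( *, * ) 'singular values:',
*>       $                 ( RWORK( 1 ) / RWORK( 2 ) ) * SVA( 1:N )
*>           WRITE( *, * ) 'numerical rank:', IWORK( 1 )
*>        END IF
*>        END PROGRAM CGEJSV_DEMO
*> \endverbatim
*>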
478*
479* Authors:
480* ========
481*
482*> \author Univ. of Tennessee
483*> \author Univ. of California Berkeley
484*> \author Univ. of Colorado Denver
485*> \author NAG Ltd.
486*
487*> \ingroup complexGEsing
488*
489*> \par Further Details:
490* =====================
491*>
492*> \verbatim
493*> CGEJSV implements a preconditioned Jacobi SVD algorithm. It uses CGEQP3,
494*> CGEQRF, and CGELQF as preprocessors and preconditioners. Optionally, an
495*> additional row pivoting can be used as a preprocessor, which in some
496*> cases results in much higher accuracy. An example is matrix A with the
497*> structure A = D1 * C * D2, where D1, D2 are arbitrarily ill-conditioned
498*> diagonal matrices and C is a well-conditioned matrix. In that case, complete
499*> pivoting in the first QR factorization provides accuracy dependent on the
500*> condition number of C, and independent of D1, D2. Such higher accuracy is
501*> not completely understood theoretically, but it works well in practice.
502*> Further, if A can be written as A = B*D, with well-conditioned B and some
503*> diagonal D, then the high accuracy is guaranteed, both theoretically and
504*> in software, independent of D. For more details see [1], [2].
505*> The computational range for the singular values can be the full range
506*> ( UNDERFLOW,OVERFLOW ), provided that the machine arithmetic and the BLAS
507*> & LAPACK routines called by CGEJSV are implemented to work in that range.
508*> If that is not the case, then the restriction for safe computation with
509*> the singular values in the range of normalized IEEE numbers is that the
510*> spectral condition number kappa(A)=sigma_max(A)/sigma_min(A) does not
511*> overflow. This code (CGEJSV) is best used in this restricted range,
512*> meaning that singular values of magnitude below ||A||_2 / SLAMCH('O') are
513*> returned as zeros. See JOBR for details on this.
514*> Further, this implementation is somewhat slower than the one described
515*> in [1,2] due to replacement of some non-LAPACK components, and because
516*> the choice of some tuning parameters in the iterative part (CGESVJ) is
517*> left to the implementer on a particular machine.
518*> The rank revealing QR factorization (in this code: CGEQP3) should be
519*> implemented as in [3]. We have a new version of CGEQP3 under development
520*> that is more robust than the current one in LAPACK, with a cleaner cut in
521*> rank deficient cases. It will be available in the SIGMA library [4].
522*> If M is much larger than N, it is obvious that the initial QRF with
523*> column pivoting can be preprocessed by the QRF without pivoting. That
524*> well known trick is not used in CGEJSV because in some cases heavy row
525*> weighting can be treated with complete pivoting. The overhead in cases
526*> M much larger than N is then only due to pivoting, but the benefits in
527*> terms of accuracy have prevailed. The implementer/user can incorporate
528*> this extra QRF step easily. The implementer can also improve data movement
529*> (matrix transpose, matrix copy, matrix transposed copy) - this
530*> implementation of CGEJSV uses only the simplest, naive data movement.
531*> \endverbatim
532*
533*> \par Contributor:
534* ==================
535*>
536*> Zlatko Drmac (Zagreb, Croatia)
537*
538*> \par References:
539* ================
540*>
541*> \verbatim
542*>
543*> [1] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
544*> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1322-1342.
545*> LAPACK Working note 169.
546*> [2] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
547*> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1343-1362.
548*> LAPACK Working note 170.
549*> [3] Z. Drmac and Z. Bujanovic: On the failure of rank-revealing QR
550*> factorization software - a case study.
551*> ACM Trans. Math. Softw. Vol. 35, No 2 (2008), pp. 1-28.
552*> LAPACK Working note 176.
553*> [4] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
554*> QSVD, (H,K)-SVD computations.
555*> Department of Mathematics, University of Zagreb, 2008, 2016.
556*> \endverbatim
557*
558*> \par Bugs, examples and comments:
559* =================================
560*>
561*> Please report all bugs and send interesting examples and/or comments to
562*> drmac@math.hr. Thank you.
563*>
564* =====================================================================
565 SUBROUTINE cgejsv( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
566 $ M, N, A, LDA, SVA, U, LDU, V, LDV,
567 $ CWORK, LWORK, RWORK, LRWORK, IWORK, INFO )
568*
569* -- LAPACK computational routine --
570* -- LAPACK is a software package provided by Univ. of Tennessee, --
571* -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
572*
573* .. Scalar Arguments ..
574 IMPLICIT NONE
575 INTEGER INFO, LDA, LDU, LDV, LWORK, LRWORK, M, N
576* ..
577* .. Array Arguments ..
578 COMPLEX A( LDA, * ), U( LDU, * ), V( LDV, * ), CWORK( LWORK )
579 REAL SVA( N ), RWORK( LRWORK )
580 INTEGER IWORK( * )
581 CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
582* ..
583*
584* ===========================================================================
585*
586* .. Local Parameters ..
587 REAL ZERO, ONE
588 PARAMETER ( ZERO = 0.0e0, one = 1.0e0 )
589 COMPLEX CZERO, CONE
590 parameter( czero = ( 0.0e0, 0.0e0 ), cone = ( 1.0e0, 0.0e0 ) )
591* ..
592* .. Local Scalars ..
593 COMPLEX CTEMP
594 REAL AAPP, AAQQ, AATMAX, AATMIN, BIG, BIG1, COND_OK,
595 $ condr1, condr2, entra, entrat, epsln, maxprj, scalem,
596 $ sconda, sfmin, small, temp1, uscal1, uscal2, xsc
597 INTEGER IERR, N1, NR, NUMRANK, p, q, WARNING
598 LOGICAL ALMORT, DEFR, ERREST, GOSCAL, JRACC, KILL, LQUERY,
599 $ lsvec, l2aber, l2kill, l2pert, l2rank, l2tran, noscal,
600 $ rowpiv, rsvec, transp
601*
602 INTEGER OPTWRK, MINWRK, MINRWRK, MINIWRK
603 INTEGER LWCON, LWLQF, LWQP3, LWQRF, LWUNMLQ, LWUNMQR, LWUNMQRM,
604 $ lwsvdj, lwsvdjv, lrwqp3, lrwcon, lrwsvdj, iwoff
605 INTEGER LWRK_CGELQF, LWRK_CGEQP3, LWRK_CGEQP3N, LWRK_CGEQRF,
606 $ LWRK_CGESVJ, LWRK_CGESVJV, LWRK_CGESVJU, LWRK_CUNMLQ,
607 $ lwrk_cunmqr, lwrk_cunmqrm
608* ..
609* .. Local Arrays
610 COMPLEX CDUMMY(1)
611 REAL RDUMMY(1)
612*
613* .. Intrinsic Functions ..
614 INTRINSIC abs, cmplx, conjg, alog, max, min, real, nint, sqrt
615* ..
616* .. External Functions ..
617 REAL SLAMCH, SCNRM2
618 INTEGER ISAMAX, ICAMAX
619 LOGICAL LSAME
620 EXTERNAL isamax, icamax, lsame, slamch, scnrm2
621* ..
622* .. External Subroutines ..
623 EXTERNAL ccopy, cgelqf, cgeqp3, cgeqrf, clacgv, clacpy, clapmr,
624 $ clascl, claset, classq, claswp, cpocon, csscal, cungqr,
625 $ cunmlq, cunmqr, slascl, slassq, sscal,
626 $ xerbla
627*
628 EXTERNAL cgesvj
629* ..
630*
631* Test the input arguments
632*
633 lsvec = lsame( jobu, 'U' ) .OR. lsame( jobu, 'F' )
634 jracc = lsame( jobv, 'J' )
635 rsvec = lsame( jobv, 'V' ) .OR. jracc
636 rowpiv = lsame( joba, 'F' ) .OR. lsame( joba, 'G' )
637 l2rank = lsame( joba, 'R' )
638 l2aber = lsame( joba, 'A' )
639 errest = lsame( joba, 'E' ) .OR. lsame( joba, 'G' )
640 l2tran = lsame( jobt, 'T' ) .AND. ( m .EQ. n )
641 l2kill = lsame( jobr, 'R' )
642 defr = lsame( jobr, 'N' )
643 l2pert = lsame( jobp, 'P' )
644*
645 lquery = ( lwork .EQ. -1 ) .OR. ( lrwork .EQ. -1 )
646*
647 IF ( .NOT.(rowpiv .OR. l2rank .OR. l2aber .OR.
648 $ errest .OR. lsame( joba, 'C' ) )) THEN
649 info = - 1
650 ELSE IF ( .NOT.( lsvec .OR. lsame( jobu, 'N' ) .OR.
651 $ ( lsame( jobu, 'W' ) .AND. rsvec .AND. l2tran ) ) ) THEN
652 info = - 2
653 ELSE IF ( .NOT.( rsvec .OR. lsame( jobv, 'N' ) .OR.
654 $ ( lsame( jobv, 'W' ) .AND. lsvec .AND. l2tran ) ) ) THEN
655 info = - 3
656 ELSE IF ( .NOT. ( l2kill .OR. defr ) ) THEN
657 info = - 4
658 ELSE IF ( .NOT. ( lsame(jobt,'T') .OR. lsame(jobt,'N') ) ) THEN
659 info = - 5
660 ELSE IF ( .NOT. ( l2pert .OR. lsame( jobp, 'N' ) ) ) THEN
661 info = - 6
662 ELSE IF ( m .LT. 0 ) THEN
663 info = - 7
664 ELSE IF ( ( n .LT. 0 ) .OR. ( n .GT. m ) ) THEN
665 info = - 8
666 ELSE IF ( lda .LT. m ) THEN
667 info = - 10
668 ELSE IF ( lsvec .AND. ( ldu .LT. m ) ) THEN
669 info = - 13
670 ELSE IF ( rsvec .AND. ( ldv .LT. n ) ) THEN
671 info = - 15
672 ELSE
673* #:)
674 info = 0
675 END IF
676*
677 IF ( info .EQ. 0 ) THEN
678* .. compute the minimal and the optimal workspace lengths
679* [[The expressions for computing the minimal and the optimal
680* values of LCWORK, LRWORK are written with a lot of redundancy and
681* can be simplified. However, this verbose form is useful for
682* maintenance and modifications of the code.]]
683*
684* .. minimal workspace length for CGEQP3 of an M x N matrix,
685* CGEQRF of an N x N matrix, CGELQF of an N x N matrix,
686* CUNMLQ for computing N x N matrix, CUNMQR for computing N x N
687* matrix, CUNMQR for computing M x N matrix, respectively.
688 lwqp3 = n+1
689 lwqrf = max( 1, n )
690 lwlqf = max( 1, n )
691 lwunmlq = max( 1, n )
692 lwunmqr = max( 1, n )
693 lwunmqrm = max( 1, m )
694* .. minimal workspace length for CPOCON of an N x N matrix
695 lwcon = 2 * n
696* .. minimal workspace length for CGESVJ of an N x N matrix,
697* without and with explicit accumulation of Jacobi rotations
698 lwsvdj = max( 2 * n, 1 )
699 lwsvdjv = max( 2 * n, 1 )
700* .. minimal REAL workspace length for CGEQP3, CPOCON, CGESVJ
701 lrwqp3 = 2 * n
702 lrwcon = n
703 lrwsvdj = n
704 IF ( lquery ) THEN
705 CALL cgeqp3( m, n, a, lda, iwork, cdummy, cdummy, -1,
706 $ rdummy, ierr )
707 lwrk_cgeqp3 = real( cdummy(1) )
708 CALL cgeqrf( n, n, a, lda, cdummy, cdummy,-1, ierr )
709 lwrk_cgeqrf = real( cdummy(1) )
710 CALL cgelqf( n, n, a, lda, cdummy, cdummy,-1, ierr )
711 lwrk_cgelqf = real( cdummy(1) )
712 END IF
713 minwrk = 2
714 optwrk = 2
715 miniwrk = n
716 IF ( .NOT. (lsvec .OR. rsvec ) ) THEN
717* .. minimal and optimal sizes of the complex workspace if
718* only the singular values are requested
719 IF ( errest ) THEN
720 minwrk = max( n+lwqp3, n**2+lwcon, n+lwqrf, lwsvdj )
721 ELSE
722 minwrk = max( n+lwqp3, n+lwqrf, lwsvdj )
723 END IF
724 IF ( lquery ) THEN
725 CALL cgesvj( 'L', 'N', 'N', n, n, a, lda, sva, n, v,
726 $ ldv, cdummy, -1, rdummy, -1, ierr )
727 lwrk_cgesvj = real( cdummy(1) )
728 IF ( errest ) THEN
729 optwrk = max( n+lwrk_cgeqp3, n**2+lwcon,
730 $ n+lwrk_cgeqrf, lwrk_cgesvj )
731 ELSE
732 optwrk = max( n+lwrk_cgeqp3, n+lwrk_cgeqrf,
733 $ lwrk_cgesvj )
734 END IF
735 END IF
736 IF ( l2tran .OR. rowpiv ) THEN
737 IF ( errest ) THEN
738 minrwrk = max( 7, 2*m, lrwqp3, lrwcon, lrwsvdj )
739 ELSE
740 minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj )
741 END IF
742 ELSE
743 IF ( errest ) THEN
744 minrwrk = max( 7, lrwqp3, lrwcon, lrwsvdj )
745 ELSE
746 minrwrk = max( 7, lrwqp3, lrwsvdj )
747 END IF
748 END IF
749 IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
750 ELSE IF ( rsvec .AND. (.NOT.lsvec) ) THEN
751* .. minimal and optimal sizes of the complex workspace if the
752* singular values and the right singular vectors are requested
753 IF ( errest ) THEN
754 minwrk = max( n+lwqp3, lwcon, lwsvdj, n+lwlqf,
755 $ 2*n+lwqrf, n+lwsvdj, n+lwunmlq )
756 ELSE
757 minwrk = max( n+lwqp3, lwsvdj, n+lwlqf, 2*n+lwqrf,
758 $ n+lwsvdj, n+lwunmlq )
759 END IF
760 IF ( lquery ) THEN
761 CALL cgesvj( 'l', 'u', 'n', N,N, U, LDU, SVA, N, A,
762 $ LDA, CDUMMY, -1, RDUMMY, -1, IERR )
763 LWRK_CGESVJ = REAL( CDUMMY(1) )
764 CALL CUNMLQ( 'l', 'c', N, N, N, A, LDA, CDUMMY,
765 $ V, LDV, CDUMMY, -1, IERR )
766 LWRK_CUNMLQ = REAL( CDUMMY(1) )
767 IF ( ERREST ) THEN
768 OPTWRK = MAX( N+LWRK_CGEQP3, LWCON, LWRK_CGESVJ,
769 $ N+LWRK_CGELQF, 2*N+LWRK_CGEQRF,
770 $ N+LWRK_CGESVJ, N+LWRK_CUNMLQ )
771 ELSE
772 OPTWRK = MAX( N+LWRK_CGEQP3, LWRK_CGESVJ,N+LWRK_CGELQF,
773 $ 2*N+LWRK_CGEQRF, N+LWRK_CGESVJ,
774 $ N+LWRK_CUNMLQ )
775 END IF
776 END IF
777 IF ( L2TRAN .OR. ROWPIV ) THEN
778 IF ( ERREST ) THEN
779 MINRWRK = MAX( 7, 2*M, LRWQP3, LRWSVDJ, LRWCON )
780 ELSE
781 MINRWRK = MAX( 7, 2*M, LRWQP3, LRWSVDJ )
782 END IF
783 ELSE
784 IF ( ERREST ) THEN
785 MINRWRK = MAX( 7, LRWQP3, LRWSVDJ, LRWCON )
786 ELSE
787 MINRWRK = MAX( 7, LRWQP3, LRWSVDJ )
788 END IF
789 END IF
790 IF ( ROWPIV .OR. L2TRAN ) MINIWRK = MINIWRK + M
791 ELSE IF ( LSVEC .AND. (.NOT.RSVEC) ) THEN
792* .. minimal and optimal sizes of the complex workspace if the
793* singular values and the left singular vectors are requested
794 IF ( ERREST ) THEN
795 MINWRK = N + MAX( LWQP3,LWCON,N+LWQRF,LWSVDJ,LWUNMQRM )
796 ELSE
797 MINWRK = N + MAX( LWQP3, N+LWQRF, LWSVDJ, LWUNMQRM )
798 END IF
799 IF ( LQUERY ) THEN
800 CALL CGESVJ( 'l', 'u', 'n', N,N, U, LDU, SVA, N, A,
801 $ LDA, CDUMMY, -1, RDUMMY, -1, IERR )
802 LWRK_CGESVJ = REAL( CDUMMY(1) )
803 CALL CUNMQR( 'l', 'n', M, N, N, A, LDA, CDUMMY, U,
804 $ LDU, CDUMMY, -1, IERR )
805 LWRK_CUNMQRM = REAL( CDUMMY(1) )
806 IF ( ERREST ) THEN
807 OPTWRK = N + MAX( LWRK_CGEQP3, LWCON, N+LWRK_CGEQRF,
808 $ LWRK_CGESVJ, LWRK_CUNMQRM )
809 ELSE
810 OPTWRK = N + MAX( LWRK_CGEQP3, N+LWRK_CGEQRF,
811 $ LWRK_CGESVJ, LWRK_CUNMQRM )
812 END IF
813 END IF
814 IF ( L2TRAN .OR. ROWPIV ) THEN
815 IF ( ERREST ) THEN
816 MINRWRK = MAX( 7, 2*M, LRWQP3, LRWSVDJ, LRWCON )
817 ELSE
818 MINRWRK = MAX( 7, 2*M, LRWQP3, LRWSVDJ )
819 END IF
820 ELSE
821 IF ( ERREST ) THEN
822 MINRWRK = MAX( 7, LRWQP3, LRWSVDJ, LRWCON )
823 ELSE
824 MINRWRK = MAX( 7, LRWQP3, LRWSVDJ )
825 END IF
826 END IF
827 IF ( ROWPIV .OR. L2TRAN ) MINIWRK = MINIWRK + M
828 ELSE
829* .. minimal and optimal sizes of the complex workspace if the
830* full SVD is requested
831 IF ( .NOT. JRACC ) THEN
832 IF ( ERREST ) THEN
833 MINWRK = MAX( N+LWQP3, N+LWCON, 2*N+N**2+LWCON,
834 $ 2*N+LWQRF, 2*N+LWQP3,
835 $ 2*N+N**2+N+LWLQF, 2*N+N**2+N+N**2+LWCON,
836 $ 2*N+N**2+N+LWSVDJ, 2*N+N**2+N+LWSVDJV,
837 $ 2*N+N**2+N+LWUNMQR,2*N+N**2+N+LWUNMLQ,
838 $ N+N**2+LWSVDJ, N+LWUNMQRM )
839 ELSE
840 MINWRK = MAX( N+LWQP3, 2*N+N**2+LWCON,
841 $ 2*N+LWQRF, 2*N+LWQP3,
842 $ 2*N+N**2+N+LWLQF, 2*N+N**2+N+N**2+LWCON,
843 $ 2*N+N**2+N+LWSVDJ, 2*N+N**2+N+LWSVDJV,
844 $ 2*N+N**2+N+LWUNMQR,2*N+N**2+N+LWUNMLQ,
845 $ N+N**2+LWSVDJ, N+LWUNMQRM )
846 END IF
847 MINIWRK = MINIWRK + N
848 IF ( ROWPIV .OR. L2TRAN ) MINIWRK = MINIWRK + M
849 ELSE
850 IF ( ERREST ) THEN
851 MINWRK = MAX( N+LWQP3, N+LWCON, 2*N+LWQRF,
852 $ 2*N+N**2+LWSVDJV, 2*N+N**2+N+LWUNMQR,
853 $ N+LWUNMQRM )
854 ELSE
855 MINWRK = MAX( N+LWQP3, 2*N+LWQRF,
856 $ 2*N+N**2+LWSVDJV, 2*N+N**2+N+LWUNMQR,
857 $ N+LWUNMQRM )
858 END IF
859 IF ( ROWPIV .OR. L2TRAN ) MINIWRK = MINIWRK + M
860 END IF
861 IF ( LQUERY ) THEN
862 CALL CUNMQR( 'l', 'n', M, N, N, A, LDA, CDUMMY, U,
863 $ LDU, CDUMMY, -1, IERR )
864 LWRK_CUNMQRM = REAL( CDUMMY(1) )
865 CALL CUNMQR( 'l', 'n', N, N, N, A, LDA, CDUMMY, U,
866 $ LDU, CDUMMY, -1, IERR )
867 LWRK_CUNMQR = REAL( CDUMMY(1) )
868 IF ( .NOT. JRACC ) THEN
869 CALL CGEQP3( N,N, A, LDA, IWORK, CDUMMY,CDUMMY, -1,
870 $ RDUMMY, IERR )
871 LWRK_CGEQP3N = REAL( CDUMMY(1) )
872 CALL CGESVJ( 'l', 'u', 'n', N, N, U, LDU, SVA,
873 $ N, V, LDV, CDUMMY, -1, RDUMMY, -1, IERR )
874 LWRK_CGESVJ = REAL( CDUMMY(1) )
875 CALL CGESVJ( 'u', 'u', 'n', N, N, U, LDU, SVA,
876 $ N, V, LDV, CDUMMY, -1, RDUMMY, -1, IERR )
877 LWRK_CGESVJU = REAL( CDUMMY(1) )
878 CALL CGESVJ( 'l', 'u', 'v', N, N, U, LDU, SVA,
879 $ N, V, LDV, CDUMMY, -1, RDUMMY, -1, IERR )
880 LWRK_CGESVJV = REAL( CDUMMY(1) )
881 CALL CUNMLQ( 'l', 'c', N, N, N, A, LDA, CDUMMY,
882 $ V, LDV, CDUMMY, -1, IERR )
883 LWRK_CUNMLQ = REAL( CDUMMY(1) )
884 IF ( ERREST ) THEN
885 OPTWRK = MAX( N+LWRK_CGEQP3, N+LWCON,
886 $ 2*N+N**2+LWCON, 2*N+LWRK_CGEQRF,
887 $ 2*N+LWRK_CGEQP3N,
888 $ 2*N+N**2+N+LWRK_CGELQF,
889 $ 2*N+N**2+N+N**2+LWCON,
890 $ 2*N+N**2+N+LWRK_CGESVJ,
891 $ 2*N+N**2+N+LWRK_CGESVJV,
892 $ 2*N+N**2+N+LWRK_CUNMQR,
893 $ 2*N+N**2+N+LWRK_CUNMLQ,
894 $ N+N**2+LWRK_CGESVJU,
895 $ N+LWRK_CUNMQRM )
896 ELSE
897 OPTWRK = MAX( N+LWRK_CGEQP3,
898 $ 2*N+N**2+LWCON, 2*N+LWRK_CGEQRF,
899 $ 2*N+LWRK_CGEQP3N,
900 $ 2*N+N**2+N+LWRK_CGELQF,
901 $ 2*N+N**2+N+N**2+LWCON,
902 $ 2*N+N**2+N+LWRK_CGESVJ,
903 $ 2*N+N**2+N+LWRK_CGESVJV,
904 $ 2*N+N**2+N+LWRK_CUNMQR,
905 $ 2*N+N**2+N+LWRK_CUNMLQ,
906 $ N+N**2+LWRK_CGESVJU,
907 $ N+LWRK_CUNMQRM )
908 END IF
909 ELSE
910 CALL CGESVJ( 'l', 'u', 'v', N, N, U, LDU, SVA,
911 $ N, V, LDV, CDUMMY, -1, RDUMMY, -1, IERR )
912 LWRK_CGESVJV = REAL( CDUMMY(1) )
913 CALL CUNMQR( 'l', 'n', N, N, N, CDUMMY, N, CDUMMY,
914 $ V, LDV, CDUMMY, -1, IERR )
915 LWRK_CUNMQR = REAL( CDUMMY(1) )
916 CALL CUNMQR( 'l', 'n', M, N, N, A, LDA, CDUMMY, U,
917 $ LDU, CDUMMY, -1, IERR )
918 LWRK_CUNMQRM = REAL( CDUMMY(1) )
919 IF ( ERREST ) THEN
920 OPTWRK = MAX( N+LWRK_CGEQP3, N+LWCON,
921 $ 2*N+LWRK_CGEQRF, 2*N+N**2,
922 $ 2*N+N**2+LWRK_CGESVJV,
923 $ 2*N+N**2+N+LWRK_CUNMQR,N+LWRK_CUNMQRM )
924 ELSE
925 OPTWRK = MAX( N+LWRK_CGEQP3, 2*N+LWRK_CGEQRF,
926 $ 2*N+N**2, 2*N+N**2+LWRK_CGESVJV,
927 $ 2*N+N**2+N+LWRK_CUNMQR,
928 $ N+LWRK_CUNMQRM )
929 END IF
930 END IF
931 END IF
932 IF ( L2TRAN .OR. ROWPIV ) THEN
933 MINRWRK = MAX( 7, 2*M, LRWQP3, LRWSVDJ, LRWCON )
934 ELSE
935 MINRWRK = MAX( 7, LRWQP3, LRWSVDJ, LRWCON )
936 END IF
937 END IF
938 MINWRK = MAX( 2, MINWRK )
939 OPTWRK = MAX( OPTWRK, MINWRK )
940 IF ( LWORK .LT. MINWRK .AND. (.NOT.LQUERY) ) INFO = - 17
941 IF ( LRWORK .LT. MINRWRK .AND. (.NOT.LQUERY) ) INFO = - 19
942 END IF
943*
944 IF ( INFO .NE. 0 ) THEN
945* #:(
946 CALL XERBLA( 'cgejsv', - INFO )
947 RETURN
948 ELSE IF ( LQUERY ) THEN
949 CWORK(1) = OPTWRK
950 CWORK(2) = MINWRK
951 RWORK(1) = MINRWRK
952 IWORK(1) = MAX( 4, MINIWRK )
953 RETURN
954 END IF
955*
956* Quick return for void matrix (Y3K safe)
957* #:)
958 IF ( ( M .EQ. 0 ) .OR. ( N .EQ. 0 ) ) THEN
959 IWORK(1:4) = 0
960 RWORK(1:7) = 0
961 RETURN
962 ENDIF
963*
964* Determine whether the matrix U should be M x N or M x M
965*
966 IF ( LSVEC ) THEN
967 N1 = N
968 IF ( LSAME( JOBU, 'f' ) ) N1 = M
969 END IF
970*
971* Set numerical parameters
972*
973*! NOTE: Make sure SLAMCH() does not fail on the target architecture.
974*
975 EPSLN = SLAMCH('epsilon')
976 SFMIN = SLAMCH('safeminimum')
977 SMALL = SFMIN / EPSLN
978 BIG = SLAMCH('o')
979* BIG = ONE / SFMIN
980*
981* Initialize SVA(1:N) = diag( ||A e_i||_2 )_1^N
982*
983*(!) If necessary, scale SVA() to protect the largest norm from
984* overflow. It is possible that this scaling pushes the smallest
985* column norm left from the underflow threshold (extreme case).
986*
987 SCALEM = ONE / SQRT(REAL(M)*REAL(N))
988 NOSCAL = .TRUE.
989 GOSCAL = .TRUE.
990 DO 1874 p = 1, N
991 AAPP = ZERO
992 AAQQ = ONE
993 CALL CLASSQ( M, A(1,p), 1, AAPP, AAQQ )
994 IF ( AAPP .GT. BIG ) THEN
995 INFO = - 9
996 CALL XERBLA( 'cgejsv', -INFO )
997 RETURN
998 END IF
999 AAQQ = SQRT(AAQQ)
1000 IF ( ( AAPP .LT. (BIG / AAQQ) ) .AND. NOSCAL ) THEN
1001 SVA(p) = AAPP * AAQQ
1002 ELSE
1003 NOSCAL = .FALSE.
1004 SVA(p) = AAPP * ( AAQQ * SCALEM )
1005 IF ( GOSCAL ) THEN
1006 GOSCAL = .FALSE.
1007 CALL SSCAL( p-1, SCALEM, SVA, 1 )
1008 END IF
1009 END IF
1010 1874 CONTINUE
1011*
1012 IF ( NOSCAL ) SCALEM = ONE
1013*
1014 AAPP = ZERO
1015 AAQQ = BIG
1016 DO 4781 p = 1, N
1017 AAPP = MAX( AAPP, SVA(p) )
1018 IF ( SVA(p) .NE. ZERO ) AAQQ = MIN( AAQQ, SVA(p) )
1019 4781 CONTINUE
1020*
1021* Quick return for zero M x N matrix
1022* #:)
1023 IF ( AAPP .EQ. ZERO ) THEN
1024 IF ( LSVEC ) CALL CLASET( 'g', M, N1, CZERO, CONE, U, LDU )
1025 IF ( RSVEC ) CALL CLASET( 'g', N, N, CZERO, CONE, V, LDV )
1026 RWORK(1) = ONE
1027 RWORK(2) = ONE
1028 IF ( ERREST ) RWORK(3) = ONE
1029 IF ( LSVEC .AND. RSVEC ) THEN
1030 RWORK(4) = ONE
1031 RWORK(5) = ONE
1032 END IF
1033 IF ( L2TRAN ) THEN
1034 RWORK(6) = ZERO
1035 RWORK(7) = ZERO
1036 END IF
1037 IWORK(1) = 0
1038 IWORK(2) = 0
1039 IWORK(3) = 0
1040 IWORK(4) = -1
1041 RETURN
1042 END IF
1043*
1044* Issue warning if denormalized column norms detected. Override the
1045* high relative accuracy request. Issue licence to kill nonzero columns
1046* (set them to zero) whose norm is less than sigma_max / BIG (roughly).
1047* #:(
1048 WARNING = 0
1049 IF ( AAQQ .LE. SFMIN ) THEN
1050 L2RANK = .TRUE.
1051 L2KILL = .TRUE.
1052 WARNING = 1
1053 END IF
1054*
1055* Quick return for one-column matrix
1056* #:)
1057 IF ( N .EQ. 1 ) THEN
1058*
1059 IF ( LSVEC ) THEN
1060 CALL CLASCL( 'g',0,0,SVA(1),SCALEM, M,1,A(1,1),LDA,IERR )
1061 CALL CLACPY( 'a', M, 1, A, LDA, U, LDU )
1062* computing all M left singular vectors of the M x 1 matrix
1063 IF ( N1 .NE. N ) THEN
1064 CALL CGEQRF( M, N, U,LDU, CWORK, CWORK(N+1),LWORK-N,IERR )
1065 CALL CUNGQR( M,N1,1, U,LDU,CWORK,CWORK(N+1),LWORK-N,IERR )
1066 CALL CCOPY( M, A(1,1), 1, U(1,1), 1 )
1067 END IF
1068 END IF
1069 IF ( RSVEC ) THEN
1070 V(1,1) = CONE
1071 END IF
1072 IF ( SVA(1) .LT. (BIG*SCALEM) ) THEN
1073 SVA(1) = SVA(1) / SCALEM
1074 SCALEM = ONE
1075 END IF
1076 RWORK(1) = ONE / SCALEM
1077 RWORK(2) = ONE
1078 IF ( SVA(1) .NE. ZERO ) THEN
1079 IWORK(1) = 1
1080 IF ( ( SVA(1) / SCALEM) .GE. SFMIN ) THEN
1081 IWORK(2) = 1
1082 ELSE
1083 IWORK(2) = 0
1084 END IF
1085 ELSE
1086 IWORK(1) = 0
1087 IWORK(2) = 0
1088 END IF
1089 IWORK(3) = 0
1090 IWORK(4) = -1
1091 IF ( ERREST ) RWORK(3) = ONE
1092 IF ( LSVEC .AND. RSVEC ) THEN
1093 RWORK(4) = ONE
1094 RWORK(5) = ONE
1095 END IF
1096 IF ( L2TRAN ) THEN
1097 RWORK(6) = ZERO
1098 RWORK(7) = ZERO
1099 END IF
1100 RETURN
1101*
1102 END IF
1103*
1104 TRANSP = .FALSE.
1105*
1106 AATMAX = -ONE
1107 AATMIN = BIG
1108 IF ( ROWPIV .OR. L2TRAN ) THEN
1109*
1110* Compute the row norms, needed to determine row pivoting sequence
1111* (in the case of heavily row weighted A, row pivoting is strongly
1112* advised) and to collect information needed to compare the
1113* structures of A * A^* and A^* * A (in the case L2TRAN.EQ..TRUE.).
1114*
1115 IF ( L2TRAN ) THEN
1116 DO 1950 p = 1, M
1117 XSC = ZERO
1118 TEMP1 = ONE
1119 CALL CLASSQ( N, A(p,1), LDA, XSC, TEMP1 )
1120* CLASSQ gets both the ell_2 and the ell_infinity norm
1121* in one pass through the vector
1122 RWORK(M+p) = XSC * SCALEM
1123 RWORK(p) = XSC * (SCALEM*SQRT(TEMP1))
1124 AATMAX = MAX( AATMAX, RWORK(p) )
1125 IF (RWORK(p) .NE. ZERO)
1126 $ AATMIN = MIN(AATMIN,RWORK(p))
1127 1950 CONTINUE
1128 ELSE
1129 DO 1904 p = 1, M
1130 RWORK(M+p) = SCALEM*ABS( A(p,ICAMAX(N,A(p,1),LDA)) )
1131 AATMAX = MAX( AATMAX, RWORK(M+p) )
1132 AATMIN = MIN( AATMIN, RWORK(M+p) )
1133 1904 CONTINUE
1134 END IF
1135*
1136 END IF
1137*
1138* For square matrix A try to determine whether A^* would be better
1139* input for the preconditioned Jacobi SVD, with faster convergence.
1140* The decision is based on an O(N) function of the vector of column
1141* and row norms of A, based on the Shannon entropy. This should give
1142* the right choice in most cases when the difference actually matters.
1143* It may fail and pick the slower converging side.
1144*
1145 ENTRA = ZERO
1146 ENTRAT = ZERO
1147 IF ( L2TRAN ) THEN
1148*
1149 XSC = ZERO
1150 TEMP1 = ONE
1151 CALL SLASSQ( N, SVA, 1, XSC, TEMP1 )
1152 TEMP1 = ONE / TEMP1
1153*
1154 ENTRA = ZERO
1155 DO 1113 p = 1, N
1156 BIG1 = ( ( SVA(p) / XSC )**2 ) * TEMP1
1157 IF ( BIG1 .NE. ZERO ) ENTRA = ENTRA + BIG1 * ALOG(BIG1)
1158 1113 CONTINUE
1159 ENTRA = - ENTRA / ALOG(REAL(N))
1160*
1161* Now, SVA().^2/Trace(A^* * A) is a point in the probability simplex.
1162* It is derived from the diagonal of A^* * A. Do the same with the
1163* diagonal of A * A^*, compute the entropy of the corresponding
1164* probability distribution. Note that A * A^* and A^* * A have the
1165* same trace.
1166*
1167 ENTRAT = ZERO
1168 DO 1114 p = 1, M
1169 BIG1 = ( ( RWORK(p) / XSC )**2 ) * TEMP1
1170 IF ( BIG1 .NE. ZERO ) ENTRAT = ENTRAT + BIG1 * ALOG(BIG1)
1171 1114 CONTINUE
1172 ENTRAT = - ENTRAT / ALOG(REAL(M))
1173*
1174* Analyze the entropies and decide A or A^*. Smaller entropy
1175* usually means better input for the algorithm.
1176*
1177 TRANSP = ( ENTRAT .LT. ENTRA )
1178*
1179* If A^* is better than A, take the adjoint of A. This is allowed
1180* only for square matrices, M=N.
1181 IF ( TRANSP ) THEN
1182* In an optimal implementation, this trivial transpose
1183* should be replaced with faster transpose.
1184 DO 1115 p = 1, N - 1
1185 A(p,p) = CONJG(A(p,p))
1186 DO 1116 q = p + 1, N
1187 CTEMP = CONJG(A(q,p))
1188 A(q,p) = CONJG(A(p,q))
1189 A(p,q) = CTEMP
1190 1116 CONTINUE
1191 1115 CONTINUE
1192 A(N,N) = CONJG(A(N,N))
1193 DO 1117 p = 1, N
1194 RWORK(M+p) = SVA(p)
1195 SVA(p) = RWORK(p)
1196* previously computed row 2-norms are now column 2-norms
1197* of the transposed matrix
1198 1117 CONTINUE
1199 TEMP1 = AAPP
1200 AAPP = AATMAX
1201 AATMAX = TEMP1
1202 TEMP1 = AAQQ
1203 AAQQ = AATMIN
1204 AATMIN = TEMP1
1205 KILL = LSVEC
1206 LSVEC = RSVEC
1207 RSVEC = KILL
1208 IF ( LSVEC ) N1 = N
1209*
1210 ROWPIV = .TRUE.
1211 END IF
1212*
1213 END IF
1214* END IF L2TRAN
1215*
1216* Scale the matrix so that its maximal singular value remains less
1217* than SQRT(BIG) -- the matrix is scaled so that its maximal column
1218* has Euclidean norm equal to SQRT(BIG/N). The only reason to keep
1219* SQRT(BIG) instead of BIG is the fact that CGEJSV uses LAPACK and
1220* BLAS routines that, in some implementations, are not capable of
1221* working in the full interval [SFMIN,BIG] and that they may provoke
1222* overflows in the intermediate results. If the singular values spread
1223* from SFMIN to BIG, then CGESVJ will compute them. So, in that case,
1224* one should use CGESVJ instead of CGEJSV.
1225 BIG1 = SQRT( BIG )
1226 TEMP1 = SQRT( BIG / REAL(N) )
1227* >> for future updates: allow bigger range, i.e. the largest column
1228* will be allowed up to BIG/N and CGESVJ will do the rest. However, for
1229* this all other (LAPACK) components must allow such a range.
1230* TEMP1 = BIG/REAL(N)
1231* TEMP1 = BIG * EPSLN this should 'almost' work with current LAPACK components
1232 CALL SLASCL( 'g', 0, 0, AAPP, TEMP1, N, 1, SVA, N, IERR )
1233 IF ( AAQQ .GT. (AAPP * SFMIN) ) THEN
1234 AAQQ = ( AAQQ / AAPP ) * TEMP1
1235 ELSE
1236 AAQQ = ( AAQQ * TEMP1 ) / AAPP
1237 END IF
1238 TEMP1 = TEMP1 * SCALEM
1239 CALL CLASCL( 'g', 0, 0, AAPP, TEMP1, M, N, A, LDA, IERR )
1240*
1241* To undo scaling at the end of this procedure, multiply the
1242* computed singular values with USCAL2 / USCAL1.
1243*
1244 USCAL1 = TEMP1
1245 USCAL2 = AAPP
1246*
1247 IF ( L2KILL ) THEN
1248* L2KILL enforces computation of nonzero singular values in
1249* the restricted range of condition number of the initial A,
1250* sigma_max(A) / sigma_min(A) approx. SQRT(BIG)/SQRT(SFMIN).
1251 XSC = SQRT( SFMIN )
1252 ELSE
1253 XSC = SMALL
1254*
1255* Now, if the condition number of A is too big,
1256* sigma_max(A) / sigma_min(A) .GT. SQRT(BIG/N) * EPSLN / SFMIN,
1257* as a precautionary measure, the full SVD is computed using CGESVJ
1258* with accumulated Jacobi rotations. This provides numerically
1259* more robust computation, at the cost of slightly increased run
1260* time. Depending on the concrete implementation of BLAS and LAPACK
1261* (i.e. how they behave in presence of extreme ill-conditioning) the
1262* implementor may decide to remove this switch.
1263 IF ( ( AAQQ.LT.SQRT(SFMIN) ) .AND. LSVEC .AND. RSVEC ) THEN
1264 JRACC = .TRUE.
1265 END IF
1266*
1267 END IF
1268 IF ( AAQQ .LT. XSC ) THEN
1269 DO 700 p = 1, N
1270 IF ( SVA(p) .LT. XSC ) THEN
1271 CALL CLASET( 'a', M, 1, CZERO, CZERO, A(1,p), LDA )
1272 SVA(p) = ZERO
1273 END IF
1274 700 CONTINUE
1275 END IF
1276*
1277* Preconditioning using QR factorization with pivoting
1278*
1279 IF ( ROWPIV ) THEN
1280* Optional row permutation (Bjoerck row pivoting):
1281* A result by Cox and Higham shows that Bjoerck's
1282* row pivoting combined with standard column pivoting
1283* has similar effect as Powell-Reid complete pivoting.
1284* The ell-infinity norms of A are made nonincreasing.
1285 IF ( ( LSVEC .AND. RSVEC ) .AND. ( .NOT. JRACC ) ) THEN
1286 IWOFF = 2*N
1287 ELSE
1288 IWOFF = N
1289 END IF
1290 DO 1952 p = 1, M - 1
1291 q = ISAMAX( M-p+1, RWORK(M+p), 1 ) + p - 1
1292 IWORK(IWOFF+p) = q
1293 IF ( p .NE. q ) THEN
1294 TEMP1 = RWORK(M+p)
1295 RWORK(M+p) = RWORK(M+q)
1296 RWORK(M+q) = TEMP1
1297 END IF
1298 1952 CONTINUE
1299 CALL CLASWP( N, A, LDA, 1, M-1, IWORK(IWOFF+1), 1 )
1300 END IF
1301*
1302* End of the preparation phase (scaling, optional sorting and
1303* transposing, optional flushing of small columns).
1304*
1305* Preconditioning
1306*
1307* If the full SVD is needed, the right singular vectors are computed
1308* from a matrix equation, and for that we need theoretical analysis
1309* of the Businger-Golub pivoting. So we use CGEQP3 as the first RR QRF.
1310* In all other cases the first RR QRF can be chosen by other criteria
1311* (eg speed by replacing global with restricted window pivoting, such
1312* as in xGEQPX from TOMS # 782). Good results will be obtained using
1313* xGEQPX with properly (!) chosen numerical parameters.
1314* Any improvement of CGEQP3 improves overall performance of CGEJSV.
1315*
1316* A * P1 = Q1 * [ R1^* 0]^*:
1317 DO 1963 p = 1, N
1318* .. all columns are free columns
1319 IWORK(p) = 0
1320 1963 CONTINUE
1321 CALL CGEQP3( M, N, A, LDA, IWORK, CWORK, CWORK(N+1), LWORK-N,
1322 $ RWORK, IERR )
1323*
1324* The upper triangular matrix R1 from the first QRF is inspected for
1325* rank deficiency and possibilities for deflation, or possible
1326* ill-conditioning. Depending on the user specified flag L2RANK,
1327* the procedure explores possibilities to reduce the numerical
1328* rank by inspecting the computed upper triangular factor. If
1329* L2RANK or L2ABER are up, then CGEJSV will compute the SVD of
1330* A + dA, where ||dA|| <= f(M,N)*EPSLN.
1331*
1332 NR = 1
1333 IF ( L2ABER ) THEN
1334* Standard absolute error bound suffices. All sigma_i with
1335* sigma_i < N*EPSLN*||A|| are flushed to zero. This is an
1336* aggressive enforcement of lower numerical rank by introducing a
1337* backward error of the order of N*EPSLN*||A||.
1338 TEMP1 = SQRT(REAL(N))*EPSLN
1339 DO 3001 p = 2, N
1340 IF ( ABS(A(p,p)) .GE. (TEMP1*ABS(A(1,1))) ) THEN
1341 NR = NR + 1
1342 ELSE
1343 GO TO 3002
1344 END IF
1345 3001 CONTINUE
1346 3002 CONTINUE
1347 ELSE IF ( L2RANK ) THEN
1348* .. similarly as above, only slightly more gentle (less aggressive).
1349* Sudden drop on the diagonal of R1 is used as the criterion for
1350* close-to-rank-deficient.
1351 TEMP1 = SQRT(SFMIN)
1352 DO 3401 p = 2, N
1353 IF ( ( ABS(A(p,p)) .LT. (EPSLN*ABS(A(p-1,p-1))) ) .OR.
1354 $ ( ABS(A(p,p)) .LT. SMALL ) .OR.
1355 $ ( L2KILL .AND. (ABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3402
1356 NR = NR + 1
1357 3401 CONTINUE
1358 3402 CONTINUE
1359*
1360 ELSE
1361* The goal is high relative accuracy. However, if the matrix
1362* has high scaled condition number the relative accuracy is in
1363* general not feasible. Later on, a condition number estimator
1364* will be deployed to estimate the scaled condition number.
1365* Here we just remove the underflowed part of the triangular
1366* factor. This prevents the situation in which the code is
1367* working hard to get the accuracy not warranted by the data.
1368 TEMP1 = SQRT(SFMIN)
1369 DO 3301 p = 2, N
1370 IF ( ( ABS(A(p,p)) .LT. SMALL ) .OR.
1371 $ ( L2KILL .AND. (ABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3302
1372 NR = NR + 1
1373 3301 CONTINUE
1374 3302 CONTINUE
1375*
1376 END IF
1377*
1378 ALMORT = .FALSE.
1379 IF ( NR .EQ. N ) THEN
1380 MAXPRJ = ONE
1381 DO 3051 p = 2, N
1382 TEMP1 = ABS(A(p,p)) / SVA(IWORK(p))
1383 MAXPRJ = MIN( MAXPRJ, TEMP1 )
1384 3051 CONTINUE
1385 IF ( MAXPRJ**2 .GE. ONE - REAL(N)*EPSLN ) ALMORT = .TRUE.
1386 END IF
1387*
1388*
1389 SCONDA = - ONE
1390 CONDR1 = - ONE
1391 CONDR2 = - ONE
1392*
1393 IF ( ERREST ) THEN
1394 IF ( N .EQ. NR ) THEN
1395 IF ( RSVEC ) THEN
1396* .. V is available as workspace
1397 CALL CLACPY( 'u', N, N, A, LDA, V, LDV )
1398 DO 3053 p = 1, N
1399 TEMP1 = SVA(IWORK(p))
1400 CALL CSSCAL( p, ONE/TEMP1, V(1,p), 1 )
1401 3053 CONTINUE
1402 IF ( LSVEC )THEN
1403 CALL CPOCON( 'u', N, V, LDV, ONE, TEMP1,
1404 $ CWORK(N+1), RWORK, IERR )
1405 ELSE
1406 CALL CPOCON( 'u', N, V, LDV, ONE, TEMP1,
1407 $ CWORK, RWORK, IERR )
1408 END IF
1409*
1410 ELSE IF ( LSVEC ) THEN
1411* .. U is available as workspace
1412 CALL CLACPY( 'u', N, N, A, LDA, U, LDU )
1413 DO 3054 p = 1, N
1414 TEMP1 = SVA(IWORK(p))
1415 CALL CSSCAL( p, ONE/TEMP1, U(1,p), 1 )
1416 3054 CONTINUE
1417 CALL CPOCON( 'u', N, U, LDU, ONE, TEMP1,
1418 $ CWORK(N+1), RWORK, IERR )
1419 ELSE
1420 CALL CLACPY( 'u', N, N, A, LDA, CWORK, N )
1421*[] CALL CLACPY( 'U', N, N, A, LDA, CWORK(N+1), N )
1422* Change: here index shifted by N to the left, CWORK(1:N)
1423* not needed for SIGMA only computation
1424 DO 3052 p = 1, N
1425 TEMP1 = SVA(IWORK(p))
1426*[] CALL CSSCAL( p, ONE/TEMP1, CWORK(N+(p-1)*N+1), 1 )
1427 CALL CSSCAL( p, ONE/TEMP1, CWORK((p-1)*N+1), 1 )
1428 3052 CONTINUE
1429* .. the columns of R are scaled to have unit Euclidean lengths.
1430*[] CALL CPOCON( 'U', N, CWORK(N+1), N, ONE, TEMP1,
1431*[] $ CWORK(N+N*N+1), RWORK, IERR )
1432 CALL CPOCON( 'u', N, CWORK, N, ONE, TEMP1,
1433 $ CWORK(N*N+1), RWORK, IERR )
1434*
1435 END IF
1436 IF ( TEMP1 .NE. ZERO ) THEN
1437 SCONDA = ONE / SQRT(TEMP1)
1438 ELSE
1439 SCONDA = - ONE
1440 END IF
1441* SCONDA is an estimate of SQRT(||(R^* * R)^(-1)||_1).
1442* N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
1443 ELSE
1444 SCONDA = - ONE
1445 END IF
1446 END IF
1447*
1448 L2PERT = L2PERT .AND. ( ABS( A(1,1)/A(NR,NR) ) .GT. SQRT(BIG1) )
1449* If there is no violent scaling, artificial perturbation is not needed.
1450*
1451* Phase 3:
1452*
1453 IF ( .NOT.( RSVEC .OR. LSVEC ) ) THEN
1454*
1455* Singular Values only
1456*
1457* .. transpose A(1:NR,1:N)
1458 DO 1946 p = 1, MIN( N-1, NR )
1459 CALL CCOPY( N-p, A(p,p+1), LDA, A(p+1,p), 1 )
1460 CALL CLACGV( N-p+1, A(p,p), 1 )
1461 1946 CONTINUE
1462 IF ( NR .EQ. N ) A(N,N) = CONJG(A(N,N))
1463*
1464* The following two DO-loops introduce small relative perturbation
1465* into the strict upper triangle of the lower triangular matrix.
1466* Small entries below the main diagonal are also changed.
1467* This modification is useful if the computing environment does not
1468* provide/allow FLUSH TO ZERO underflow, for it prevents many
1469* annoying denormalized numbers in case of strongly scaled matrices.
1470* The perturbation is structured so that it does not introduce any
1471* new perturbation of the singular values, and it does not destroy
1472* the job done by the preconditioner.
1473* The licence for this perturbation is in the variable L2PERT, which
1474* should be .FALSE. if FLUSH TO ZERO underflow is active.
1475*
1476 IF ( .NOT. ALMORT ) THEN
1477*
1478 IF ( L2PERT ) THEN
1479* XSC = SQRT(SMALL)
1480 XSC = EPSLN / REAL(N)
1481 DO 4947 q = 1, NR
1482 CTEMP = CMPLX(XSC*ABS(A(q,q)),ZERO)
1483 DO 4949 p = 1, N
1484 IF ( ( (p.GT.q) .AND. (ABS(A(p,q)).LE.TEMP1) )
1485 $ .OR. ( p .LT. q ) )
1486* $ A(p,q) = TEMP1 * ( A(p,q) / ABS(A(p,q)) )
1487 $ A(p,q) = CTEMP
1488 4949 CONTINUE
1489 4947 CONTINUE
1490 ELSE
1491 CALL CLASET( 'u', NR-1,NR-1, CZERO,CZERO, A(1,2),LDA )
1492 END IF
1493*
1494* .. second preconditioning using the QR factorization
1495*
1496 CALL CGEQRF( N,NR, A,LDA, CWORK, CWORK(N+1),LWORK-N, IERR )
1497*
1498* .. and transpose upper to lower triangular
1499 DO 1948 p = 1, NR - 1
1500 CALL CCOPY( NR-p, A(p,p+1), LDA, A(p+1,p), 1 )
1501 CALL CLACGV( NR-p+1, A(p,p), 1 )
1502 1948 CONTINUE
1503*
1504 END IF
1505*
1506* Row-cyclic Jacobi SVD algorithm with column pivoting
1507*
1508* .. again some perturbation (a "background noise") is added
1509* to drown denormals
1510 IF ( L2PERT ) THEN
1511* XSC = SQRT(SMALL)
1512 XSC = EPSLN / REAL(N)
1513 DO 1947 q = 1, NR
1514 CTEMP = CMPLX(XSC*ABS(A(q,q)),ZERO)
1515 DO 1949 p = 1, NR
1516 IF ( ( (p.GT.q) .AND. (ABS(A(p,q)).LE.TEMP1) )
1517 $ .OR. ( p .LT. q ) )
1518* $ A(p,q) = TEMP1 * ( A(p,q) / ABS(A(p,q)) )
1519 $ A(p,q) = CTEMP
1520 1949 CONTINUE
1521 1947 CONTINUE
1522 ELSE
1523 CALL CLASET( 'u', NR-1, NR-1, CZERO, CZERO, A(1,2), LDA )
1524 END IF
1525*
1526* .. and one-sided Jacobi rotations are started on a lower
1527* triangular matrix (plus perturbation which is ignored in
1528* the part which destroys triangular form (confusing?!))
1529*
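* .. (CGESVJ is called here with JOBU = 'N' and JOBV = 'N', so only
* the singular values are computed; on exit RWORK(1) carries the
* scaling of the computed values and RWORK(2) the numerical rank
* detected by the Jacobi process, both read immediately below.)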
1530 CALL CGESVJ( 'l', 'n', 'n', NR, NR, A, LDA, SVA,
1531 $ N, V, LDV, CWORK, LWORK, RWORK, LRWORK, INFO )
1532*
1533 SCALEM = RWORK(1)
1534 NUMRANK = NINT(RWORK(2))
1535*
1536*
1537 ELSE IF ( ( RSVEC .AND. ( .NOT. LSVEC ) .AND. ( .NOT. JRACC ) )
1538 $ .OR.
1539 $ ( JRACC .AND. ( .NOT. LSVEC ) .AND. ( NR .NE. N ) ) ) THEN
1540*
1541* -> Singular Values and Right Singular Vectors <-
1542*
1543 IF ( ALMORT ) THEN
1544*
1545* .. in this case NR equals N
1546 DO 1998 p = 1, NR
1547 CALL CCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1548 CALL CLACGV( N-p+1, V(p,p), 1 )
1549 1998 CONTINUE
1550 CALL CLASET( 'u', NR-1,NR-1, CZERO, CZERO, V(1,2), LDV )
1551*
1552 CALL CGESVJ( 'l','u','n', N, NR, V, LDV, SVA, NR, A, LDA,
1553 $ CWORK, LWORK, RWORK, LRWORK, INFO )
1554 SCALEM = RWORK(1)
1555 NUMRANK = NINT(RWORK(2))
1556
1557 ELSE
1558*
1559* .. two more QR factorizations ( one QRF is not enough, two require
1560* accumulated product of Jacobi rotations, three are perfect )
1561*
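* .. (below: an LQ factorization of the NR x N upper-trapezoidal
* factor, a QR factorization of its NR x NR L-factor copied into V,
* and a conjugate transposition, so that CGESVJ works on a square
* NR x NR triangular matrix; the trailing CUNMLQ maps the computed
* right singular vectors back to the full N-dimensional space.)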
1562 CALL CLASET( 'l', NR-1,NR-1, CZERO, CZERO, A(2,1), LDA )
1563 CALL CGELQF( NR,N, A, LDA, CWORK, CWORK(N+1), LWORK-N, IERR)
1564 CALL CLACPY( 'l', NR, NR, A, LDA, V, LDV )
1565 CALL CLASET( 'u', NR-1,NR-1, CZERO, CZERO, V(1,2), LDV )
1566 CALL CGEQRF( NR, NR, V, LDV, CWORK(N+1), CWORK(2*N+1),
1567 $ LWORK-2*N, IERR )
1568 DO 8998 p = 1, NR
1569 CALL CCOPY( NR-p+1, V(p,p), LDV, V(p,p), 1 )
1570 CALL CLACGV( NR-p+1, V(p,p), 1 )
1571 8998 CONTINUE
1572 CALL CLASET('u', NR-1, NR-1, CZERO, CZERO, V(1,2), LDV)
1573*
1574 CALL CGESVJ( 'l', 'u','n', NR, NR, V,LDV, SVA, NR, U,
1575 $ LDU, CWORK(N+1), LWORK-N, RWORK, LRWORK, INFO )
1576 SCALEM = RWORK(1)
1577 NUMRANK = NINT(RWORK(2))
1578 IF ( NR .LT. N ) THEN
1579 CALL CLASET( 'a',N-NR, NR, CZERO,CZERO, V(NR+1,1), LDV )
1580 CALL CLASET( 'a',NR, N-NR, CZERO,CZERO, V(1,NR+1), LDV )
1581 CALL CLASET( 'a',N-NR,N-NR,CZERO,CONE, V(NR+1,NR+1),LDV )
1582 END IF
1583*
1584 CALL CUNMLQ( 'l', 'c', N, N, NR, A, LDA, CWORK,
1585 $ V, LDV, CWORK(N+1), LWORK-N, IERR )
1586*
1587 END IF
1588* .. permute the rows of V
1589* DO 8991 p = 1, N
1590* CALL CCOPY( N, V(p,1), LDV, A(IWORK(p),1), LDA )
1591* 8991 CONTINUE
1592* CALL CLACPY( 'All', N, N, A, LDA, V, LDV )
1593 CALL CLAPMR( .FALSE., N, N, V, LDV, IWORK )
1594*
1595 IF ( TRANSP ) THEN
1596 CALL CLACPY( 'a', N, N, V, LDV, U, LDU )
1597 END IF
1598*
1599 ELSE IF ( JRACC .AND. ( .NOT. LSVEC ) .AND. ( NR .EQ. N ) ) THEN
1600*
1601 CALL CLASET( 'l', N-1,N-1, CZERO, CZERO, A(2,1), LDA )
1602*
1603 CALL CGESVJ( 'u','n','v', N, N, A, LDA, SVA, N, V, LDV,
1604 $ CWORK, LWORK, RWORK, LRWORK, INFO )
1605 SCALEM = RWORK(1)
1606 NUMRANK = NINT(RWORK(2))
1607 CALL CLAPMR( .FALSE., N, N, V, LDV, IWORK )
1608*
1609 ELSE IF ( LSVEC .AND. ( .NOT. RSVEC ) ) THEN
1610*
1611* .. Singular Values and Left Singular Vectors ..
1612*
1613* .. second preconditioning step to avoid need to accumulate
1614* Jacobi rotations in the Jacobi iterations.
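* .. (the adjoint of the upper-trapezoidal factor is copied into U
* and re-factored by CGEQRF, so that CGESVJ below operates on a
* square NR x NR triangular matrix and returns its left singular
* vectors in place in U; CUNMQR then applies the Q factor of the
* initial QRF to assemble the M x N1 left singular vector matrix.)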
1615 DO 1965 p = 1, NR
1616 CALL CCOPY( N-p+1, A(p,p), LDA, U(p,p), 1 )
1617 CALL CLACGV( N-p+1, U(p,p), 1 )
1618 1965 CONTINUE
1619 CALL CLASET( 'u', NR-1, NR-1, CZERO, CZERO, U(1,2), LDU )
1620*
1621 CALL CGEQRF( N, NR, U, LDU, CWORK(N+1), CWORK(2*N+1),
1622 $ LWORK-2*N, IERR )
1623*
1624 DO 1967 p = 1, NR - 1
1625 CALL CCOPY( NR-p, U(p,p+1), LDU, U(p+1,p), 1 )
1626 CALL CLACGV( N-p+1, U(p,p), 1 )
1627 1967 CONTINUE
1628 CALL CLASET( 'u', NR-1, NR-1, CZERO, CZERO, U(1,2), LDU )
1629*
1630 CALL CGESVJ( 'l', 'u', 'n', NR,NR, U, LDU, SVA, NR, A,
1631 $ LDA, CWORK(N+1), LWORK-N, RWORK, LRWORK, INFO )
1632 SCALEM = RWORK(1)
1633 NUMRANK = NINT(RWORK(2))
1634*
1635 IF ( NR .LT. M ) THEN
1636 CALL CLASET( 'a', M-NR, NR,CZERO, CZERO, U(NR+1,1), LDU )
1637 IF ( NR .LT. N1 ) THEN
1638 CALL CLASET( 'a',NR, N1-NR, CZERO, CZERO, U(1,NR+1),LDU )
1639 CALL CLASET( 'a',M-NR,N1-NR,CZERO,CONE,U(NR+1,NR+1),LDU )
1640 END IF
1641 END IF
1642*
1643 CALL CUNMQR( 'l', 'n', M, N1, N, A, LDA, CWORK, U,
1644 $ LDU, CWORK(N+1), LWORK-N, IERR )
1645*
1646 IF ( ROWPIV )
1647 $ CALL CLASWP( N1, U, LDU, 1, M-1, IWORK(IWOFF+1), -1 )
1648*
1649 DO 1974 p = 1, N1
1650 XSC = ONE / SCNRM2( M, U(1,p), 1 )
1651 CALL CSSCAL( M, XSC, U(1,p), 1 )
1652 1974 CONTINUE
1653*
1654 IF ( TRANSP ) THEN
1655 CALL CLACPY( 'a', N, N, U, LDU, V, LDV )
1656 END IF
1657*
1658 ELSE
1659*
1660* .. Full SVD ..
1661*
1662 IF ( .NOT. JRACC ) THEN
1663*
1664 IF ( .NOT. ALMORT ) THEN
1665*
1666* Second Preconditioning Step (QRF [with pivoting])
1667* Note that the composition of TRANSPOSE, QRF and TRANSPOSE is
1668* equivalent to an LQF CALL. Since in many libraries the QRF
1669* seems to be better optimized than the LQF, we do explicit
1670* transpose and use the QRF. This is subject to changes in an
1671* optimized implementation of CGEJSV.
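* .. (a minimal sketch of the equivalence: if the conjugate transpose
* of R1 is stored in V and V = Q2 * R2 is its QR factorization, then
* R1 = R2^* * Q2^*, which is precisely an LQ factorization of R1
* with lower triangular factor L = R2^*.)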
1672*
1673 DO 1968 p = 1, NR
1674 CALL CCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1675 CALL CLACGV( N-p+1, V(p,p), 1 )
1676 1968 CONTINUE
1677*
1678* .. the following two loops perturb small entries to avoid
1679* denormals in the second QR factorization, where they are
1680* as good as zeros. This is done to avoid painfully slow
1681* computation with denormals. The relative size of the perturbation
1682* is a parameter that can be changed by the implementer.
1683* This perturbation device will be obsolete on machines with
1684* properly implemented arithmetic.
1685* To switch it off, set L2PERT=.FALSE. To remove it from the
1686* code, remove the action under L2PERT=.TRUE., leave the ELSE part.
1687* The following two loops should be blocked and fused with the
1688* transposed copy above.
1689*
1690 IF ( L2PERT ) THEN
1691 XSC = SQRT(SMALL)
1692 DO 2969 q = 1, NR
1693 CTEMP = CMPLX(XSC*ABS( V(q,q) ),ZERO)
1694 DO 2968 p = 1, N
1695 IF ( ( p .GT. q ) .AND. ( ABS(V(p,q)) .LE. TEMP1 )
1696 $ .OR. ( p .LT. q ) )
1697* $ V(p,q) = TEMP1 * ( V(p,q) / ABS(V(p,q)) )
1698 $ V(p,q) = CTEMP
1699 IF ( p .LT. q ) V(p,q) = - V(p,q)
1700 2968 CONTINUE
1701 2969 CONTINUE
1702 ELSE
1703 CALL CLASET( 'u', NR-1, NR-1, CZERO, CZERO, V(1,2), LDV )
1704 END IF
1705*
1706* Estimate the row scaled condition number of R1
1707* (If R1 is rectangular, N > NR, then the condition number
1708* of the leading NR x NR submatrix is estimated.)
1709*
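* .. (each column of the lower triangular copy, i.e. each row of R1,
* is scaled to unit Euclidean norm before CPOCON is called with
* ANORM = ONE; CONDR1 = 1/SQRT(RCOND) is therefore a row-scaled
* condition number estimate, insensitive to bad row scaling of R1.)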
1710 CALL CLACPY( 'l', NR, NR, V, LDV, CWORK(2*N+1), NR )
1711 DO 3950 p = 1, NR
1712 TEMP1 = SCNRM2(NR-p+1,CWORK(2*N+(p-1)*NR+p),1)
1713 CALL CSSCAL(NR-p+1,ONE/TEMP1,CWORK(2*N+(p-1)*NR+p),1)
1714 3950 CONTINUE
1715 CALL CPOCON('l',NR,CWORK(2*N+1),NR,ONE,TEMP1,
1716 $ CWORK(2*N+NR*NR+1),RWORK,IERR)
1717 CONDR1 = ONE / SQRT(TEMP1)
1718* .. here we need a second opinion on the condition number
1719* .. then assume worst case scenario
1720* R1 is OK for inverse <=> CONDR1 .LT. REAL(N)
1721* more conservative <=> CONDR1 .LT. SQRT(REAL(N))
1722*
1723 COND_OK = SQRT(SQRT(REAL(NR)))
1724*[TP] COND_OK is a tuning parameter.
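* .. (with the setting above, COND_OK = NR**0.25; e.g. for NR = 256
* the unpivoted CGEQRF branch below is taken only if CONDR1 < 4.
* Increasing COND_OK sends more matrices down the cheaper unpivoted
* path at the price of skipping the pivoted CGEQP3 safeguard.)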
1725*
1726 IF ( CONDR1 .LT. COND_OK ) THEN
1727* .. the second QRF without pivoting. Note: in an optimized
1728* implementation, this QRF should be implemented as the QRF
1729* of a lower triangular matrix.
1730* R1^* = Q2 * R2
1731 CALL CGEQRF( N, NR, V, LDV, CWORK(N+1), CWORK(2*N+1),
1732 $ LWORK-2*N, IERR )
1733*
1734 IF ( L2PERT ) THEN
1735 XSC = SQRT(SMALL)/EPSLN
1736 DO 3959 p = 2, NR
1737 DO 3958 q = 1, p - 1
1738 CTEMP=CMPLX(XSC*MIN(ABS(V(p,p)),ABS(V(q,q))),
1739 $ ZERO)
1740 IF ( ABS(V(q,p)) .LE. TEMP1 )
1741* $ V(q,p) = TEMP1 * ( V(q,p) / ABS(V(q,p)) )
1742 $ V(q,p) = CTEMP
1743 3958 CONTINUE
1744 3959 CONTINUE
1745 END IF
1746*
1747 IF ( NR .NE. N )
1748 $ CALL CLACPY( 'a', N, NR, V, LDV, CWORK(2*N+1), N )
1749* .. save ...
1750*
1751* .. this transposed copy should be better than naive
1752 DO 1969 p = 1, NR - 1
1753 CALL CCOPY( NR-p, V(p,p+1), LDV, V(p+1,p), 1 )
1754 CALL CLACGV(NR-p+1, V(p,p), 1 )
1755 1969 CONTINUE
1756 V(NR,NR)=CONJG(V(NR,NR))
1757*
1758 CONDR2 = CONDR1
1759*
1760 ELSE
1761*
1762* .. ill-conditioned case: second QRF with pivoting
1763* Note that windowed pivoting would be equally good
1764* numerically, and more run-time efficient. So, in
1765* an optimal implementation, the next call to CGEQP3
1766* should be replaced with, e.g., CALL CGEQPX (ACM TOMS #782)
1767* with properly (carefully) chosen parameters.
1768*
1769* R1^* * P2 = Q2 * R2
1770 DO 3003 p = 1, NR
1771 IWORK(N+p) = 0
1772 3003 CONTINUE
1773 CALL CGEQP3( N, NR, V, LDV, IWORK(N+1), CWORK(N+1),
1774 $ CWORK(2*N+1), LWORK-2*N, RWORK, IERR )
1775** CALL CGEQRF( N, NR, V, LDV, CWORK(N+1), CWORK(2*N+1),
1776** $ LWORK-2*N, IERR )
1777 IF ( L2PERT ) THEN
1778 XSC = SQRT(SMALL)
1779 DO 3969 p = 2, NR
1780 DO 3968 q = 1, p - 1
1781 CTEMP=CMPLX(XSC*MIN(ABS(V(p,p)),ABS(V(q,q))),
1782 $ ZERO)
1783 IF ( ABS(V(q,p)) .LE. TEMP1 )
1784* $ V(q,p) = TEMP1 * ( V(q,p) / ABS(V(q,p)) )
1785 $ V(q,p) = CTEMP
1786 3968 CONTINUE
1787 3969 CONTINUE
1788 END IF
1789*
1790 CALL CLACPY( 'a', N, NR, V, LDV, CWORK(2*N+1), N )
1791*
1792 IF ( L2PERT ) THEN
1793 XSC = SQRT(SMALL)
1794 DO 8970 p = 2, NR
1795 DO 8971 q = 1, p - 1
1796 CTEMP=CMPLX(XSC*MIN(ABS(V(p,p)),ABS(V(q,q))),
1797 $ ZERO)
1798* V(p,q) = - TEMP1*( V(q,p) / ABS(V(q,p)) )
1799 V(p,q) = - CTEMP
1800 8971 CONTINUE
1801 8970 CONTINUE
1802 ELSE
1803 CALL CLASET( 'l',NR-1,NR-1,CZERO,CZERO,V(2,1),LDV )
1804 END IF
1805* Now, compute R2 = L3 * Q3, the LQ factorization.
1806 CALL CGELQF( NR, NR, V, LDV, CWORK(2*N+N*NR+1),
1807 $ CWORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, IERR )
1808* .. and estimate the condition number
1809 CALL CLACPY( 'l',NR,NR,V,LDV,CWORK(2*N+N*NR+NR+1),NR )
1810 DO 4950 p = 1, NR
1811 TEMP1 = SCNRM2( p, CWORK(2*N+N*NR+NR+p), NR )
1812 CALL CSSCAL( p, ONE/TEMP1, CWORK(2*N+N*NR+NR+p), NR )
1813 4950 CONTINUE
1814 CALL CPOCON( 'l',NR,CWORK(2*N+N*NR+NR+1),NR,ONE,TEMP1,
1815 $ CWORK(2*N+N*NR+NR+NR*NR+1),RWORK,IERR )
1816 CONDR2 = ONE / SQRT(TEMP1)
1817*
1818*
1819 IF ( CONDR2 .GE. COND_OK ) THEN
1820* .. save the Householder vectors used for Q3
1821* (this overwrites the copy of R2, as it will not be
1822* needed in this branch, but it does not overwrite the
1823* Householder vectors of Q2.)
1824 CALL CLACPY( 'u', NR, NR, V, LDV, CWORK(2*N+1), N )
1825* .. and the rest of the information on Q3 is in
1826* WORK(2*N+N*NR+1:2*N+N*NR+N)
1827 END IF
1828*
1829 END IF
1830*
1831 IF ( L2PERT ) THEN
1832 XSC = SQRT(SMALL)
1833 DO 4968 q = 2, NR
1834 CTEMP = XSC * V(q,q)
1835 DO 4969 p = 1, q - 1
1836* V(p,q) = - TEMP1*( V(p,q) / ABS(V(p,q)) )
1837 V(p,q) = - CTEMP
1838 4969 CONTINUE
1839 4968 CONTINUE
1840 ELSE
1841 CALL CLASET( 'u', NR-1,NR-1, CZERO,CZERO, V(1,2), LDV )
1842 END IF
1843*
1844* Second preconditioning finished; continue with Jacobi SVD
1845* The input matrix is lower triangular.
1846*
1847* Recover the right singular vectors as solution of a well
1848* conditioned triangular matrix equation.
1849*
1850 IF ( CONDR1 .LT. COND_OK ) THEN
1851*
1852 CALL CGESVJ( 'l','u','n',NR,NR,V,LDV,SVA,NR,U, LDU,
1853 $ CWORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,RWORK,
1854 $ LRWORK, INFO )
1855 SCALEM = RWORK(1)
1856 NUMRANK = NINT(RWORK(2))
1857 DO 3970 p = 1, NR
1858 CALL CCOPY( NR, V(1,p), 1, U(1,p), 1 )
1859 CALL CSSCAL( NR, SVA(p), V(1,p), 1 )
1860 3970 CONTINUE
1861
1862* .. pick the right matrix equation and solve it
1863*
1864 IF ( NR .EQ. N ) THEN
1865* :)) .. best case, R1 is inverted. The solution of this matrix
1866* equation is Q2*V2 = the product of the Jacobi rotations
1867* used in CGESVJ, premultiplied with the orthogonal matrix
1868* from the second QR factorization.
1869 CALL CTRSM('l','u','n','n', NR,NR,CONE, A,LDA, V,LDV)
1870 ELSE
1871* .. R1 is well conditioned, but non-square. Adjoint of R2
1872* is inverted to get the product of the Jacobi rotations
1873* used in CGESVJ. The Q-factor from the second QR
1874* factorization is then built in explicitly.
1875 CALL CTRSM('l','u','c','n',NR,NR,CONE,CWORK(2*N+1),
1876 $ N,V,LDV)
1877 IF ( NR .LT. N ) THEN
1878 CALL CLASET('a',N-NR,NR,CZERO,CZERO,V(NR+1,1),LDV)
1879 CALL CLASET('a',NR,N-NR,CZERO,CZERO,V(1,NR+1),LDV)
1880 CALL CLASET('a',N-NR,N-NR,CZERO,CONE,V(NR+1,NR+1),LDV)
1881 END IF
1882 CALL CUNMQR('l','n',N,N,NR,CWORK(2*N+1),N,CWORK(N+1),
1883 $ V,LDV,CWORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR)
1884 END IF
1885*
1886 ELSE IF ( CONDR2 .LT. COND_OK ) THEN
1887*
1888* The matrix R2 is inverted. The solution of the matrix equation
1889* is Q3^* * V3 = the product of the Jacobi rotations (applied to
1890* the lower triangular L3 from the LQ factorization of
1891* R2=L3*Q3), pre-multiplied with the transposed Q3.
1892 CALL CGESVJ( 'l', 'u', 'n', NR, NR, V, LDV, SVA, NR, U,
1893 $ LDU, CWORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR,
1894 $ RWORK, LRWORK, INFO )
1895 SCALEM = RWORK(1)
1896 NUMRANK = NINT(RWORK(2))
1897 DO 3870 p = 1, NR
1898 CALL CCOPY( NR, V(1,p), 1, U(1,p), 1 )
1899 CALL CSSCAL( NR, SVA(p), U(1,p), 1 )
1900 3870 CONTINUE
1901 CALL CTRSM('l','u','n','n',NR,NR,CONE,CWORK(2*N+1),N,
1902 $ U,LDU)
1903* .. apply the permutation from the second QR factorization
1904 DO 873 q = 1, NR
1905 DO 872 p = 1, NR
1906 CWORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1907 872 CONTINUE
1908 DO 874 p = 1, NR
1909 U(p,q) = CWORK(2*N+N*NR+NR+p)
1910 874 CONTINUE
1911 873 CONTINUE
1912 IF ( NR .LT. N ) THEN
1913 CALL CLASET( 'a',N-NR,NR,CZERO,CZERO,V(NR+1,1),LDV )
1914 CALL CLASET( 'a',NR,N-NR,CZERO,CZERO,V(1,NR+1),LDV )
1915 CALL CLASET('a',N-NR,N-NR,CZERO,CONE,V(NR+1,NR+1),LDV)
1916 END IF
1917 CALL CUNMQR( 'l','n',N,N,NR,CWORK(2*N+1),N,CWORK(N+1),
1918 $ V,LDV,CWORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1919 ELSE
1920* Last line of defense.
1921* #:( This is a rather pathological case: no scaled condition
1922* improvement after two pivoted QR factorizations. Another
1923* possibility is that the rank revealing QR factorization
1924* or the condition estimator has failed, or the COND_OK
1925* is set very close to ONE (which is unnecessary). Normally,
1926* this branch should never be executed, but in rare cases of
1927* failure of the RRQR or condition estimator, the last line of
1928* defense ensures that CGEJSV completes the task.
1929* Compute the full SVD of L3 using CGESVJ with explicit
1930* accumulation of Jacobi rotations.
1931 CALL CGESVJ( 'l', 'u', 'v', NR, NR, V, LDV, SVA, NR, U,
1932 $ LDU, CWORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR,
1933 $ RWORK, LRWORK, INFO )
1934 SCALEM = RWORK(1)
1935 NUMRANK = NINT(RWORK(2))
1936 IF ( NR .LT. N ) THEN
1937 CALL CLASET( 'a',N-NR,NR,CZERO,CZERO,V(NR+1,1),LDV )
1938 CALL CLASET( 'a',NR,N-NR,CZERO,CZERO,V(1,NR+1),LDV )
1939 CALL CLASET('a',N-NR,N-NR,CZERO,CONE,V(NR+1,NR+1),LDV)
1940 END IF
1941 CALL CUNMQR( 'l','n',N,N,NR,CWORK(2*N+1),N,CWORK(N+1),
1942 $ V,LDV,CWORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1943*
1944 CALL CUNMLQ( 'l', 'c', NR, NR, NR, CWORK(2*N+1), N,
1945 $ CWORK(2*N+N*NR+1), U, LDU, CWORK(2*N+N*NR+NR+1),
1946 $ LWORK-2*N-N*NR-NR, IERR )
1947 DO 773 q = 1, NR
1948 DO 772 p = 1, NR
1949 CWORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1950 772 CONTINUE
1951 DO 774 p = 1, NR
1952 U(p,q) = CWORK(2*N+N*NR+NR+p)
1953 774 CONTINUE
1954 773 CONTINUE
1955*
1956 END IF
1957*
1958* Permute the rows of V using the (column) permutation from the
1959* first QRF. Also, scale the columns to make them unit in
1960* Euclidean norm. This applies to all cases.
1961*
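* .. (IWORK(1:N) holds the column permutation produced by the initial
* rank revealing QRF of A earlier in the routine; scattering through
* IWORK and gathering back applies that permutation to the rows of V,
* and a column is rescaled only when 1/||V(1:N,q)|| deviates from ONE
* by more than SQRT(N)*EPSLN.)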
1962 TEMP1 = SQRT(REAL(N)) * EPSLN
1963 DO 1972 q = 1, N
1964 DO 972 p = 1, N
1965 CWORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
1966 972 CONTINUE
1967 DO 973 p = 1, N
1968 V(p,q) = CWORK(2*N+N*NR+NR+p)
1969 973 CONTINUE
1970 XSC = ONE / SCNRM2( N, V(1,q), 1 )
1971 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1972 $ CALL CSSCAL( N, XSC, V(1,q), 1 )
1973 1972 CONTINUE
1974* At this moment, V contains the right singular vectors of A.
1975* Next, assemble the left singular vector matrix U (M x N).
1976 IF ( NR .LT. M ) THEN
1977 CALL CLASET('a', M-NR, NR, CZERO, CZERO, U(NR+1,1), LDU)
1978 IF ( NR .LT. N1 ) THEN
1979 CALL CLASET('a',NR,N1-NR,CZERO,CZERO,U(1,NR+1),LDU)
1980 CALL CLASET('a',M-NR,N1-NR,CZERO,CONE,
1981 $ U(NR+1,NR+1),LDU)
1982 END IF
1983 END IF
1984*
1985* The Q matrix from the first QRF is built into the left singular
1986* matrix U. This applies to all cases.
1987*
1988 CALL CUNMQR( 'l', 'n', M, N1, N, A, LDA, CWORK, U,
1989 $ LDU, CWORK(N+1), LWORK-N, IERR )
1990
1991* The columns of U are normalized. The cost is O(M*N) flops.
1992 TEMP1 = SQRT(REAL(M)) * EPSLN
1993 DO 1973 p = 1, NR
1994 XSC = ONE / SCNRM2( M, U(1,p), 1 )
1995 IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1996 $ CALL CSSCAL( M, XSC, U(1,p), 1 )
1997 1973 CONTINUE
1998*
1999* If the initial QRF is computed with row pivoting, the left
2000* singular vectors must be adjusted.
2001*
2002 IF ( ROWPIV )
2003 $ CALL CLASWP( N1, U, LDU, 1, M-1, IWORK(IWOFF+1), -1 )
2004*
2005 ELSE
2006*
2007* .. the initial matrix A has almost orthogonal columns and
2008* the second QRF is not needed
2009*
2010 CALL CLACPY( 'u', N, N, A, LDA, CWORK(N+1), N )
2011 IF ( L2PERT ) THEN
2012 XSC = SQRT(SMALL)
2013 DO 5970 p = 2, N
2014 CTEMP = XSC * CWORK( N + (p-1)*N + p )
2015 DO 5971 q = 1, p - 1
2016* CWORK(N+(q-1)*N+p)=-TEMP1 * ( CWORK(N+(p-1)*N+q) /
2017* $ ABS(CWORK(N+(p-1)*N+q)) )
2018 CWORK(N+(q-1)*N+p)=-CTEMP
2019 5971 CONTINUE
2020 5970 CONTINUE
2021 ELSE
2022 CALL CLASET( 'l',N-1,N-1,CZERO,CZERO,CWORK(N+2),N )
2023 END IF
2024*
2025 CALL CGESVJ( 'u', 'u', 'n', N, N, CWORK(N+1), N, SVA,
2026 $ N, U, LDU, CWORK(N+N*N+1), LWORK-N-N*N, RWORK, LRWORK,
2027 $ INFO )
2028*
2029 SCALEM = RWORK(1)
2030 NUMRANK = NINT(RWORK(2))
2031 DO 6970 p = 1, N
2032 CALL CCOPY( N, CWORK(N+(p-1)*N+1), 1, U(1,p), 1 )
2033 CALL CSSCAL( N, SVA(p), CWORK(N+(p-1)*N+1), 1 )
2034 6970 CONTINUE
2035*
2036 CALL CTRSM( 'l', 'U', 'N', 'N', n, n,
2037 $ cone, a, lda, cwork(n+1), n )
2038 DO 6972 p = 1, n
2039 CALL ccopy( n, cwork(n+p), n, v(iwork(p),1), ldv )
2040 6972 CONTINUE
2041 temp1 = sqrt(real(n))*epsln
2042 DO 6971 p = 1, n
2043 xsc = one / scnrm2( n, v(1,p), 1 )
2044 IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2045 $ CALL csscal( n, xsc, v(1,p), 1 )
2046 6971 CONTINUE
2047*
2048* Assemble the left singular vector matrix U (M x N).
2049*
2050 IF ( n .LT. m ) THEN
2051 CALL claset( 'A', m-n, n, czero, czero, u(n+1,1), ldu )
2052 IF ( n .LT. n1 ) THEN
2053 CALL claset('A',n, n1-n, czero, czero, u(1,n+1),ldu)
2054 CALL claset( 'A',m-n,n1-n, czero, cone,u(n+1,n+1),ldu)
2055 END IF
2056 END IF
2057 CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
2058 $ ldu, cwork(n+1), lwork-n, ierr )
2059 temp1 = sqrt(real(m))*epsln
2060 DO 6973 p = 1, n1
2061 xsc = one / scnrm2( m, u(1,p), 1 )
2062 IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2063 $ CALL csscal( m, xsc, u(1,p), 1 )
2064 6973 CONTINUE
2065*
2066 IF ( rowpiv )
2067 $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
2068*
2069 END IF
2070*
2071* end of the >> almost orthogonal case << in the full SVD
2072*
2073 ELSE
2074*
2075* This branch deploys a preconditioned Jacobi SVD with explicitly
2076* accumulated rotations. It is included as optional, mainly for
2077* experimental purposes. It does perform well, and can also be used.
2078* In this implementation, this branch will be automatically activated
2079* if the condition number sigma_max(A) / sigma_min(A) is predicted
2080* to be greater than the overflow threshold. This is because the
2081* a posteriori computation of the singular vectors assumes robust
2082* implementation of BLAS and some LAPACK procedures, capable of working
2083* in presence of extreme values, e.g. when the singular values spread from
2084* the underflow to the overflow threshold.
2085*
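* .. (in this branch CGESVJ is eventually called with JOBV = 'V', so
* the product of the Jacobi rotations is accumulated explicitly into
* V rather than being recovered afterwards from a triangular solve
* as in the branches above.)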
2086 DO 7968 p = 1, nr
2087 CALL ccopy( n-p+1, a(p,p), lda, v(p,p), 1 )
2088 CALL clacgv( n-p+1, v(p,p), 1 )
2089 7968 CONTINUE
2090*
2091 IF ( l2pert ) THEN
2092 xsc = sqrt(small/epsln)
2093 DO 5969 q = 1, nr
2094 ctemp = cmplx(xsc*abs( v(q,q) ),zero)
2095 DO 5968 p = 1, n
2096 IF ( ( p .GT. q ) .AND. ( abs(v(p,q)) .LE. temp1 )
2097 $ .OR. ( p .LT. q ) )
2098* $ V(p,q) = TEMP1 * ( V(p,q) / ABS(V(p,q)) )
2099 $ v(p,q) = ctemp
2100 IF ( p .LT. q ) v(p,q) = - v(p,q)
2101 5968 CONTINUE
2102 5969 CONTINUE
2103 ELSE
2104 CALL claset( 'U', nr-1, nr-1, czero, czero, v(1,2), ldv )
2105 END IF
2106
2107 CALL cgeqrf( n, nr, v, ldv, cwork(n+1), cwork(2*n+1),
2108 $ lwork-2*n, ierr )
2109 CALL clacpy( 'L', n, nr, v, ldv, cwork(2*n+1), n )
2110*
2111 DO 7969 p = 1, nr
2112 CALL ccopy( nr-p+1, v(p,p), ldv, u(p,p), 1 )
2113 CALL clacgv( nr-p+1, u(p,p), 1 )
2114 7969 CONTINUE
2115
2116 IF ( l2pert ) THEN
2117 xsc = sqrt(small/epsln)
2118 DO 9970 q = 2, nr
2119 DO 9971 p = 1, q - 1
2120 ctemp = cmplx(xsc * min(abs(u(p,p)),abs(u(q,q))),
2121 $ zero)
2122* U(p,q) = - TEMP1 * ( U(q,p) / ABS(U(q,p)) )
2123 u(p,q) = - ctemp
2124 9971 CONTINUE
2125 9970 CONTINUE
2126 ELSE
2127 CALL claset('U', nr-1, nr-1, czero, czero, u(1,2), ldu )
2128 END IF
2129
2130 CALL cgesvj( 'L', 'U', 'V', nr, nr, u, ldu, sva,
2131 $ n, v, ldv, cwork(2*n+n*nr+1), lwork-2*n-n*nr,
2132 $ rwork, lrwork, info )
2133 scalem = rwork(1)
2134 numrank = nint(rwork(2))
2135
2136 IF ( nr .LT. n ) THEN
2137 CALL claset( 'A',n-nr,nr,czero,czero,v(nr+1,1),ldv )
2138 CALL claset( 'A',nr,n-nr,czero,czero,v(1,nr+1),ldv )
2139 CALL claset( 'A',n-nr,n-nr,czero,cone,v(nr+1,nr+1),ldv )
2140 END IF
2141
2142 CALL cunmqr( 'L','N',n,n,nr,cwork(2*n+1),n,cwork(n+1),
2143 $ v,ldv,cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,ierr )
2144*
2145* Permute the rows of V using the (column) permutation from the
2146* first QRF. Also, scale the columns to make them unit in
2147* Euclidean norm. This applies to all cases.
2148*
2149 temp1 = sqrt(real(n)) * epsln
2150 DO 7972 q = 1, n
2151 DO 8972 p = 1, n
2152 cwork(2*n+n*nr+nr+iwork(p)) = v(p,q)
2153 8972 CONTINUE
2154 DO 8973 p = 1, n
2155 v(p,q) = cwork(2*n+n*nr+nr+p)
2156 8973 CONTINUE
2157 xsc = one / scnrm2( n, v(1,q), 1 )
2158 IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2159 $ CALL csscal( n, xsc, v(1,q), 1 )
2160 7972 CONTINUE
2161*
2162* At this moment, V contains the right singular vectors of A.
2163* Next, assemble the left singular vector matrix U (M x N).
2164*
2165 IF ( nr .LT. m ) THEN
2166 CALL claset( 'A', m-nr, nr, czero, czero, u(nr+1,1), ldu )
2167 IF ( nr .LT. n1 ) THEN
2168 CALL claset('A',nr, n1-nr, czero, czero, u(1,nr+1),ldu)
2169 CALL claset('A',m-nr,n1-nr, czero, cone,u(nr+1,nr+1),ldu)
2170 END IF
2171 END IF
2172*
2173 CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
2174 $ ldu, cwork(n+1), lwork-n, ierr )
2175*
2176 IF ( rowpiv )
2177 $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
2178*
2179*
2180 END IF
2181 IF ( transp ) THEN
2182* .. swap U and V because the procedure worked on A^*
2183 DO 6974 p = 1, n
2184 CALL cswap( n, u(1,p), 1, v(1,p), 1 )
2185 6974 CONTINUE
2186 END IF
2187*
2188 END IF
2189* end of the full SVD
2190*
2191* Undo scaling, if necessary (and possible)
2192*
2193 IF ( uscal2 .LE. (big/sva(1))*uscal1 ) THEN
2194 CALL slascl( 'G', 0, 0, uscal1, uscal2, nr, 1, sva, n, ierr )
2195 uscal1 = one
2196 uscal2 = one
2197 END IF
2198*
2199 IF ( nr .LT. n ) THEN
2200 DO 3004 p = nr+1, n
2201 sva(p) = zero
2202 3004 CONTINUE
2203 END IF
2204*
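* .. (per the RWORK argument description above, the computed singular
* values of A are obtained from SVA(1:N) via the scaling factor
* RWORK(2)/RWORK(1); RWORK(3:7) return the optional condition and
* entropy estimates when they were requested.)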
2205 rwork(1) = uscal2 * scalem
2206 rwork(2) = uscal1
2207 IF ( errest ) rwork(3) = sconda
2208 IF ( lsvec .AND. rsvec ) THEN
2209 rwork(4) = condr1
2210 rwork(5) = condr2
2211 END IF
2212 IF ( l2tran ) THEN
2213 rwork(6) = entra
2214 rwork(7) = entrat
2215 END IF
2216*
2217 iwork(1) = nr
2218 iwork(2) = numrank
2219 iwork(3) = warning
2220 IF ( transp ) THEN
2221 iwork(4) = 1
2222 ELSE
2223 iwork(4) = -1
2224 END IF
2225
2226*
2227 RETURN
2228* ..
2229* .. END OF CGEJSV
2230* ..
2231 END
2232*