US5490278A - Data processing method and apparatus employing parallel processing for solving systems of linear equations - Google Patents


Publication number
US5490278A
Authority
US
United States
Legal status
Expired - Fee Related
Application number
US07/912,180
Inventor
Yoshiyuki Mochizuki
Current Assignee
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date
Filing date
Publication date
Priority claimed from JP17216891A external-priority patent/JPH0520348A/en
Priority claimed from JP17361691A external-priority patent/JPH0520349A/en
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Assigned to MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD. Assignment of assignors interest; assignor: MOCHIZUKI, YOSHIYUKI
Application granted granted Critical
Publication of US5490278A publication Critical patent/US5490278A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/12Simultaneous equations, e.g. systems of linear equations

Definitions

  • the present invention relates to calculating equipment for solving systems of linear equations, parallel calculating equipment for solving systems of linear equations, and methods of parallel computation for solving systems of linear equations.
  • one known technique is the Gauss elimination method based on bi-pivot simultaneous elimination, which is described in Takeo Murata, Chikara Okuni and Yukihiko Karaki, "Super Computer: Application to Science and Technology," Maruzen, 1985, pp. 95-96.
  • the bi-pivot simultaneous elimination algorithm eliminates two columns at the same time by choosing two pivots at one step. It limits simultaneous elimination to two columns and the choice of pivots to partial pivoting by row interchanges. Furthermore, it considers speeding up the process only in terms of the number of repetitions of do-loops.
  • when simultaneous elimination is not limited to two columns but is extended to more than two columns, the corresponding algorithms will hereafter be called multi-pivot simultaneous elimination algorithms.
  • however, there has been no Gauss elimination method or Gauss-Jordan elimination method which is based on multi-pivot simultaneous elimination and can be efficiently implemented in scalar computers and parallel computers.
  • the object of the present invention is therefore to provide high-speed parallel calculating equipment and methods of parallel computation for solving systems of linear equations by means of Gauss elimination method and Gauss-Jordan's method based on multi-pivot simultaneous elimination.
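As background for the description below, the core idea of multi-pivot simultaneous elimination can be sketched in a few lines of Python. This is an illustrative toy version only: it eliminates k columns per outer step by first triangularizing the k pivot rows among themselves, then sweeping each remaining row against all k pivots in a single pass. It omits the pivoting and register allocation of the claimed apparatus and does not reproduce the numbered formulas (4)-(18); the function name is an assumption.

```python
def multi_pivot_eliminate(A, b, k):
    """Reduce [A|b] to upper-triangular form, eliminating k columns per outer step.

    Assumes nonzero pivots (no pivoting is performed in this sketch).
    """
    n = len(A)
    for p in range(0, n, k):
        kk = min(k, n - p)          # the last block may hold fewer than k rows
        # preprocessing: triangularize the k x k pivot block among its own rows
        for t in range(p, p + kk):
            for r in range(t + 1, p + kk):
                m = A[r][t] / A[t][t]
                for j in range(t, n):
                    A[r][j] -= m * A[t][j]
                b[r] -= m * b[t]
        # updating: one pass over each remaining row for all k pivots at once
        for r in range(p + kk, n):
            for t in range(p, p + kk):
                m = A[r][t] / A[t][t]
                for j in range(t, n):
                    A[r][j] -= m * A[t][j]
                b[r] -= m * b[t]
    return A, b
```

The point of the multi-pivot arrangement is the second loop: each remaining row is read and written once per block of k pivots rather than once per pivot.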
  • a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A^(i-1), and interchanges the i-th column with the chosen pivotal column,
  • an updating section B that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates ##EQU2## for (p+1)k+1 ≦ i, j ≦ n, retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a back-substitution section that is connected to the memory and obtains the value of the unknown vector x by calculating
  • [n/k]-1, where [x] denotes the greatest integer equal to or less than x, and instructs the pivot choosing section and the preprocessing sections A 1 , . . . , A n-[n/k]k to execute their above operations, and in both cases, instructs the back-substitution section to obtain the unknown vector x.
  • a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A^(i-1), and interchanges the i-th column with the chosen pivotal column,
  • a preprocessing section A 1 that, immediately after the pivot choosing section's above operation determines the transposed pivot (3), calculates (4) for pk+2 ≦ j ≦ n and (5),
  • an updating section B' which is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for 1 ≦ i ≦ pk, (p+1)k+1 ≦ i ≦ n, (p+1)k+1 ≦ j ≦ n if n is a multiple of k or p < [n/k] and for 1 ≦ i ≦ [n/k]k, [n/k]k+1 ≦ j ≦ n otherwise, retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a system of nodes π 0 , . . . , π P-1 each of which is connected to each other by a network and comprises:
  • a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A^(i-1), and interchanges the i-th column with the chosen pivotal column,
  • a preprocessing section A 1 that is connected to the memory and calculates (4) for pk+2 ≦ j ≦ n and (5),
  • an updating section B that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1 ≦ j ≦ n retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a gateway that is connected to the memory and is a junction with the outside, and
  • a transmitter that is connected to the memory and transmits data between the memory and the outside through the gateway.
  • the preprocessing section A t of the above node π u calculates (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n, and, immediately after the pivot choosing section of π u determines the pivot (11), calculates (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B of the node in charge of the i-th row calculates ##EQU3## for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing A t , where 2 ≦ t ≦ k.
  • the updating section B of each node in charge of the i-th row such that (p+1)k+1 ≦ i ≦ n also calculates (14) through (18) retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set.
  • a system of nodes π 0 , . . . , π P-1 each of which is connected to each other by a network and comprises:
  • a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A^(i-1), and interchanges the i-th column with the chosen pivotal column,
  • a preprocessing section A 1 that is connected to the memory and calculates (4) for pk+2 ≦ j ≦ n and (5),
  • an updating section B' that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1 ≦ j ≦ n retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a gateway that is connected to the memory and is a junction with the outside, and
  • a transmitter that is connected to the memory and transmits data between the memory and the outside through the gateway.
  • the preprocessing section A t of the node π u calculates (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n, and, immediately after the pivot choosing section 2 of π u determines the pivot (11), calculates (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B' of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing A t , where 2 ≦ t ≦ k.
  • the updating section B' of each node in charge of the i-th row such that 1 ≦ i ≦ pk or (p+1)k+1 ≦ i ≦ n if n is a multiple of k or p < [n/k] and 1 ≦ i ≦ [n/k]k otherwise also calculates (14) through (18) for (p+1)k+1 ≦ j ≦ n if n is a multiple of k or p < [n/k] and for [n/k]k+1 ≦ j ≦ n otherwise, retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set.
  • This series of operations is below called post-elimination C.
  • a main controller J p that is connected to the system of nodes by the network and distributes the rows of the coefficient matrix A^(0) and the components of b^(0) and x to the nodes in such a manner that each block of consecutive k rows and the corresponding 2k components is transmitted to the memory of one node in the cyclic order of π 0 , . . . , π P-1 , π 0 , π 1 , . . .
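The cyclic block distribution performed by such a main controller can be illustrated with a small hypothetical helper; the name and signature are assumptions, and the sketch maps only row indices to nodes (the controller in the text also ships the corresponding 2k vector components with each block).

```python
def distribute_rows(n, k, P):
    """Map each row index 0..n-1 to the node storing its k-row block."""
    owner = {}
    for row in range(n):
        block = row // k        # which consecutive k-row block the row is in
        owner[row] = block % P  # blocks are dealt to nodes 0..P-1 cyclically
    return owner
```

For example, with n = 10, k = 2 and P = 3 nodes, rows 0-1 go to node 0, rows 2-3 to node 1, rows 4-5 to node 2, and rows 6-7 wrap around to node 0 again.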
  • an element processor comprising:
  • a pivot choosing section that, for coefficient matrices A^(r), known vectors b^(r) and an unknown vector x expressed by (1) for a given system of linear equations (2), chooses a pivot in the i-th row of A^(i-1) and interchanges the i-th column with the chosen pivotal column,
  • a preprocessing section A 1 that is connected to the pivot choosing section and calculates (4) for pk+2 ≦ j ≦ n and (5),
  • an updating section B which is connected to the pivot choosing section, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1 ≦ j ≦ n retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a back-substitution section that is connected to the pivot choosing section and obtains the unknown x by back-substitution, that is, by calculating (19) and (20), and
  • a gateway that is connected to the pivot choosing section and is a junction with the outside.
  • a system of clusters CL 0 , . . . , CL P-1 , each of which is connected to each other by a network and comprises:
  • a transmitter that transmits data between the memory and the outside through the C gateway.
  • each element processor of CL u takes charge of part of the k rows and 2k components row by row, while the preprocessing section A t of each element processor of CL u takes charge of elements of the (pk+t)th row of A^(r) and the (pk+t)th component of b^(r) one by one.
  • the pivot choosing section of the element processor PE 1 of CL u determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A 1 of element processors of CL u simultaneously calculate (4) for pk+2 ≦ j ≦ n and (5) with each A 1 calculating for elements and components in its charge, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing CLA 1 .
  • the preprocessing sections A t of the above cluster CL u simultaneously calculate (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n with each A t calculating for elements and components in its charge, immediately after the pivot choosing section of PE t of CL u determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing CLA t , where 2 ≦ t ≦ k.
  • an element processor comprising:
  • a pivot choosing section that, for coefficient matrices A^(r), known vectors b^(r) and an unknown vector x expressed by (1) for a given system of linear equations (2), chooses a pivot in the i-th row of A^(i-1) and interchanges the i-th column with the chosen pivotal column,
  • a preprocessing section A 1 that is connected to the pivot choosing section and calculates (4) for pk+2 ≦ j ≦ n and (5),
  • an updating section B' which is connected to the pivot choosing section, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1 ≦ j ≦ n retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set,
  • a gateway that is connected to the pivot choosing section and is a junction with the outside.
  • a system of clusters CL 0 , . . . , CL P-1 , each of which is connected to each other by a network and comprises:
  • a transmitter that transmits data between the memory and the outside through the C gateway.
  • the pivot choosing section of the element processor PE 1 of CL u determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A 1 of element processors of CL u simultaneously calculate (4) for pk+2 ≦ j ≦ n and (5) with each A 1 calculating for elements and components in its charge, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B' of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing CLA 1 .
  • the preprocessing sections A t of element processors of the above cluster CL u simultaneously calculate (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n with each A t calculating for elements and components in its charge and, immediately after the pivot choosing section of PE t of CL u determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B' of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing CLA t , where 2 ≦ t ≦ k.
  • the updating section B' of each element processor in charge of the i-th row such that 1 ≦ i ≦ pk or (p+1)k+1 ≦ i ≦ n if n is a multiple of k or p < [n/k] and 1 ≦ i ≦ [n/k]k otherwise also calculates (14) through (18) for (p+1)k+1 ≦ j ≦ n if n is a multiple of k or p < [n/k] and for [n/k]k+1 ≦ j ≦ n otherwise, retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set.
  • These operations are below called parallel updating B' c .
  • This series of operations is below called post-elimination C c .
  • a parallel elimination method for solving the system of linear equations (2) in a parallel computer comprising C clusters CL 1 , . . . , CL C connected by a network.
  • Each of the clusters comprises P c element processors and a shared memory that stores part of the reduced matrices A^(r) and the known vectors b^(r) and the unknown vector x.
  • the method comprises:
  • a data distribution means that distributes the rows of the coefficient matrix A^(0) and the components of b^(0) and x to the shared memory of the clusters in such a manner that each block of consecutive k rows and the corresponding 2k components is transmitted to the shared memory in the cyclic order of CL 1 , . . . , CL C , CL 1 , CL 2 , . . . , and assigns those distributed to the cluster's shared memory to its element processors row by row,
  • a pivot choosing means that chooses a pivot in a row assigned to each element processor
  • a multi-pivot elimination means that calculates ##EQU6## in each element processor in charge of the i-th row such that (k+1)P c +1 ≦ i ≦ n,
  • a remainder elimination means that executes the above elementary pre-elimination means for the ([n/P c ]P c +1)th row through the n-th row, if the above testing means judges that the operation of the multi-pivot elimination means was executed [n/P c ] times, and n is not a multiple of P c .
  • an elementary back-transmission means that transmits x i to the shared memory of every cluster to which the element processor in charge of an h-th row such that 1 ≦ h ≦ i-1 belongs,
  • a parallel elimination method for solving the system of linear equations (2) in a parallel computer comprising C clusters CL 1 , . . . , CL C connected by a network.
  • Each of the clusters comprises P c element processors and a shared memory that stores part of the reduced matrices A^(r) and the known vectors b^(r) and the unknown vector x.
  • the method comprises:
  • a data distribution means that distributes the rows of the coefficient matrix A^(0) and the components of b^(0) and x to the clusters in such a manner that each block of consecutive k rows and the corresponding 2k components is transmitted to the shared memory in the cyclic order of CL 1 , . . . , CL C , CL 1 , CL 2 , . . . , and assigns those distributed to the cluster's shared memory to its element processors row by row,
  • a pivot choosing means that chooses a pivot in a row assigned to each element processor
  • an elementary pre-elimination means that, for l = 1, . . . , P c , calculates (34) for kP c +l ≦ i ≦ n in the element processor in charge of the i-th row, calculates (35) and (36) in the element processor in charge of the (kP c +l)th row, and, after the pivot choosing means chooses the pivot (37), calculates (38) and (39) in the element processor in charge of the (kP c +l)th row, and transmits the results (38) and (39) to the shared memory of every other cluster to which an element processor in charge of the i-th row such that kP c +l+1 ≦ i ≦ n belongs,
  • a multi-pivot elimination means that calculates (43) and (44) in each element processor in charge of the i-th row such that (k+1)P c +1 ≦ i ≦ n,
  • a remainder elimination means that executes the above elementary pre-elimination means for the ([n/P c ]P c +1)th through the n-th rows and executes the above multi-pivot elimination means and the post-elimination means, if the above testing means judges that the operation of the post-elimination means was executed [n/P c ] times.
  • a search means whereby an above element processor searches for a nonzero element in the order of increasing column numbers from that diagonal element in the same row, if a diagonal element of a coefficient matrix is 0,
  • a component interchange means whereby two element processors interchange the two components of the unknown vector which are in their charge and have the same component indices as the column numbers of the above diagonal zero element and the found nonzero element.
  • a search means whereby an above element processor searches for an element with the greatest absolute value in the order of increasing column numbers from a diagonal element in the same row,
  • an element interchange means whereby each element processor interchanges the two elements which are in its charge and have the same column number as the above diagonal element and the found element,
  • a component interchange means whereby two element processors interchange the two components of the unknown vector which are in their charge and have the same component indices as the column numbers of the above diagonal element and the found component.
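A toy illustration of the zero-diagonal search-and-interchange means described above, with a permutation list standing in for the component interchange of the unknown vector; the function name and the dense list-of-rows storage are assumptions.

```python
def fix_zero_diagonal(A, perm, i):
    """If A[i][i] == 0, scan rightward in row i for a nonzero element and
    swap its column (and the matching unknown index in perm) with column i."""
    n = len(A)
    if A[i][i] != 0.0:
        return i
    for j in range(i + 1, n):
        if A[i][j] != 0.0:
            for r in range(n):                      # swap columns i and j
                A[r][i], A[r][j] = A[r][j], A[r][i]
            perm[i], perm[j] = perm[j], perm[i]     # track unknown interchange
            return j
    raise ValueError("row %d has no nonzero pivot candidate" % i)
```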
  • FIG. 1 is a block diagram of a linear calculating equipment according to the first embodiment of the present invention.
  • FIG. 2 is a flow chart of a control algorithm to be performed in the first embodiment.
  • FIG. 3 is a block diagram of a linear calculating equipment according to the second embodiment of the present invention.
  • FIG. 4 is a flow chart of the control algorithm to be performed in the second embodiment.
  • FIG. 5 is a block diagram of a parallel linear calculating equipment according to the third embodiment of the present invention.
  • FIG. 6 is a block diagram of a node shown in FIG. 5.
  • FIG. 7 is a flow chart of the control algorithm to be performed in the third embodiment.
  • FIG. 8 is a block diagram of a parallel linear calculating equipment according to the fourth embodiment of the present invention.
  • FIG. 9 is a block diagram of a node shown in FIG. 8.
  • FIG. 10 is a flow chart of the control algorithm to be performed in the fourth embodiment.
  • FIG. 11 is a block diagram of a parallel linear calculating equipment according to the fifth embodiment of the present invention.
  • FIG. 12 is a block diagram of a cluster shown in FIG. 11.
  • FIG. 13 is a block diagram of an element processor shown in FIG. 12.
  • FIG. 14 is a flow chart of the control algorithm to be performed in the fifth embodiment.
  • FIG. 15 is a block diagram of a parallel linear calculating equipment according to the sixth embodiment of the present invention.
  • FIG. 16 is a block diagram of a cluster shown in FIG. 15.
  • FIG. 17 is a block diagram of an element processor shown in FIG. 16.
  • FIG. 18 is a flow chart of the control algorithm to be performed in the sixth embodiment.
  • FIG. 19 is a block diagram of an element processor or processor module in a parallel computer which implements the 7th and 8th embodiments.
  • FIG. 20 is a block diagram of a cluster used in the 7th and 8th embodiments.
  • FIG. 21 is a block diagram of the parallel computation method according to the 7th embodiment.
  • FIG. 22 is a block diagram of the parallel computation method according to the 8th embodiment.
  • FIG. 23 is a diagram for showing the pivoting method according to the 7th and 8th embodiments.
  • FIG. 1 is a block diagram of linear calculating equipment in the first embodiment of the present invention.
  • 1 is a memory
  • 2 is a pivoting section connected to the memory 1
  • 3, 4, 5 are preprocessing sections A 1 , A t , A k respectively, each connected to the memory 1
  • 6 is an updating section B connected to the memory 1
  • 7 is a back-substitution section connected to the memory 1
  • 8 is a main controller G
  • 101 is a register set composed of k registers
  • 102 is an arithmetic unit.
  • the memory 1 is ordinary semiconductor memory and stores reduced coefficient matrices A^(r) with zeroes generated from the first to the r-th column and corresponding known vectors b^(r) and an unknown vector x expressed by (1) for a given system of linear equations (2).
  • the pivoting section is connected to the memory 1, chooses a pivot in the i-th row following the instruction of the main controller G 8 when the first (i-1) columns are already reduced, and interchanges the i-th column with the chosen pivotal column and the i-th component with the corresponding component of x.
  • the choice of the pivot is based on a method called partial pivoting whereby an element with the largest absolute value in the i-th row is chosen as the pivot.
  • the interchange can be direct data transfer or transposition of column numbers and component indices.
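A minimal sketch of this row-wise partial pivoting with column interchange, assuming a dense matrix stored as a list of rows and a permutation list standing in for the transposition of column numbers and component indices mentioned above; both names are illustrative.

```python
def choose_pivot_in_row(A, perm, i):
    """Pick the largest-magnitude element in row i at or right of the
    diagonal, swap its column with column i, and record the interchange
    of the corresponding unknowns in perm."""
    n = len(A)
    piv = max(range(i, n), key=lambda j: abs(A[i][j]))
    if piv != i:
        for r in range(n):                          # swap columns i and piv
            A[r][i], A[r][piv] = A[r][piv], A[r][i]
        perm[i], perm[piv] = perm[piv], perm[i]     # track unknown interchange
    return piv
```

After the elimination finishes, perm tells which component of the permuted solution corresponds to each original unknown, so the answer can be un-permuted.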
  • the preprocessing section A 1 3 calculates (4) for pk+2 ≦ j ≦ n and (5) following the instruction of the main controller G 8.
  • Each preprocessing section A t 4, where t = 2, 3, . . . , k, is connected to the memory 1, calculates (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n, and, immediately after the pivoting section determines the transposed pivot (11), calculates (12) and (13) for pk+t+1 ≦ j ≦ n following the instruction of the main controller G 8.
  • the updating section B 6 is connected to the memory 1, comprises a register set 101 of k registers and an arithmetic unit 102, and calculates (14), (15), (16), (17), (18) for (p+1)k+1 ≦ i, j ≦ n in the arithmetic unit 102, retaining each value of Reg i ^(0), . . . , Reg i ^(k) in the corresponding register of the register set 101 following the instruction of the main controller G 8.
  • (14), (15), (16) are preliminary formulas, and (17) and (18) are formulas that determine updated components.
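The role of the register set can be shown with a hedged sketch: for a target row i, the k multipliers against the (already preprocessed, mutually reduced) pivot rows pk+1, . . . , (p+1)k are computed once, held in place of Reg i ^(1), . . . , Reg i ^(k), and only then applied to every element of the row in one sweep. This conveys the general shape of a k-pivot update, not the literal formulas (14)-(18); all names are assumptions, and the pivot block is assumed upper triangular with nonzero diagonal.

```python
def update_row(A, b, i, p, k):
    """Apply all k pivots of block p to row i in a single pass over the row."""
    n = len(A)
    base = p * k
    regs = []                            # plays the role of Reg i ^(1..k)
    for t in range(k):                   # multipliers, computed once per row
        m = A[i][base + t]
        for s in range(t):               # correct for earlier pivots' effect
            m -= regs[s] * A[base + s][base + t]
        regs.append(m / A[base + t][base + t])
    for j in range(base + k, n):         # one sweep over the row's elements
        for t in range(k):
            A[i][j] -= regs[t] * A[base + t][j]
    for t in range(k):                   # eliminated columns and known vector
        b[i] -= regs[t] * b[base + t]
        A[i][base + t] = 0.0
    return regs
```

Holding the k multipliers in registers is what lets each matrix element be loaded and stored once per block instead of once per pivot.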
  • FIG. 2 shows a flow chart of its control algorithm.
  • the next step increments p by 1 and returns to the operations of the pivoting section 2 and the preprocessing section A 1 3.
  • n is not a multiple of k
  • the final step instructs the back-substitution section 7 to execute its operation and terminates the whole operation to obtain the unknown vector x.
  • FIG. 3 is a block diagram of linear calculating equipment in the second embodiment of the present invention.
  • 1 is a memory
  • 2 is a pivoting section connected to the memory 1
  • 3, 4, 5 are preprocessing sections A 1 , A t , A k respectively, each connected to the memory 1
  • 9 is an updating section B' connected to the memory 1
  • 10, 11, 12 are postprocessing sections C 1 , C t , C k-1 respectively, each connected to the memory 1
  • 13 is a main controller J
  • 103 is a register set composed of k registers
  • 104 is an arithmetic unit.
  • the updating section B' 9 is connected to the memory 1 and calculates (14), (15), . . . , (18) for 1 ≦ i ≦ pk, (p+1)k+1 ≦ i ≦ n, (p+1)k+1 ≦ j ≦ n if n is a multiple of k or p < [n/k] and for 1 ≦ i ≦ [n/k]k, [n/k]k+1 ≦ j ≦ n otherwise in the arithmetic unit 104, retaining each value of Reg i ^(0), . . . , Reg i ^(k) in the corresponding register of the register set 103.
  • FIG. 4 shows a flow chart of its control algorithm.
  • the t-th step within this loop, where t = 1, . . . , k, instructs the pivoting section 2 and the preprocessing section A t 4 to execute their operations for the (pk+t)th row of the current reduced matrix A^(pk+t-1).
  • the next step instructs the updating section B' 9 to execute its operation.
  • the following k-1 steps instruct the postprocessing sections C 1 10 through C k-1 12 to execute their operations in this order.
  • FIG. 5 is a block diagram of parallel linear calculating equipment in the third embodiment of the present invention.
  • 21 is a network
  • 22, 23, 24 are nodes π 0 , π u , π P-1 mutually connected by the network 21
  • 25 is a main controller G p connected to each node.
  • FIG. 6 is a block diagram of a node in FIG. 5.
  • the preprocessing section A t 4 of the node π u 23 calculates (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n, and, immediately after the pivoting section 2 of π u 23 determines the pivot (11), calculates (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B 6 of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of parallel operations is below called parallel preprocessing A t , where 2 ≦ t ≦ k.
  • the back-substitution sections 7 of the nodes π u 23 calculate (19) and (20) using necessary data transmitted by the transmitters 27 of other nodes. These operations are called back-substitution.
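In a single-process sketch, the work that back-substitution performs once the system is upper triangular looks as follows; this illustrates the role of formulas (19) and (20) without reproducing them, and it ignores the distribution of rows across nodes and the transmission of computed components.

```python
def back_substitute(A, b):
    """Solve Ax = b for upper-triangular A, from the last row upward."""
    n = len(A)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-solved unknowns
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x
```

In the parallel equipment, each node computes only the components it owns and transmits them to the nodes holding earlier rows, which is the role of the transmitters above.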
  • FIG. 7 shows a flow chart of its control algorithm at the level of above definition.
  • the first step distributes and assigns the rows of the coefficient matrix A^(0) and the components of b^(0) and x to the nodes π 0 22, . . . , π u 23, . . . , π P-1 24 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one node at a time in the cyclic order of π 0 , . . . , π P-1 , π 0 , π 1 , . . .
  • n is not a multiple of k
  • the final step orders the execution of back-substitution and terminates the whole operation to obtain the unknown vector x.
  • FIG. 8 is a block diagram of parallel linear calculating equipment in the fourth embodiment of the present invention.
  • 31 is a network
  • 32, 33, 34 are nodes π 0 , π u , π P-1 mutually connected by the network 31
  • 35 is a main controller J p connected to each node.
  • FIG. 9 is a block diagram of a node in FIG. 8. In FIG. 9: 1 is a memory; 2 is a pivoting section connected to the memory 1; 3, 4, 5 are preprocessing sections A 1 , A t , A k respectively, each connected to the memory 1; 9 is an updating section B' connected to the memory 1; 10, 11, 12 are postprocessing sections C 1 , C t , C k-1 respectively, each connected to the memory 1; 26 is a gateway that is a junction with the outside; 27 is a transmitter that transmits data between the memory 1 and the outside through the gateway 26; 103 is a register set composed of k registers; 104 is an arithmetic unit.
  • the preprocessing section A t 4 of the node π u 33 calculates (6), (7), (8), (9), (10) for pk+t ≦ j ≦ n, and, immediately after the pivoting section 2 of π u 33 determines the pivot (11), calculates (12) and (13) for pk+t+1 ≦ j ≦ n, and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B' 9 of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ≦ i ≦ n.
  • This series of operations is below called parallel preprocessing A t , where 2 ≦ t ≦ k.
  • the updating section B' 9 of each node in charge of the i-th row such that 1 ≦ i ≦ pk or (p+1)k+1 ≦ i ≦ n if n is a multiple of k or p < [n/k] and 1 ≦ i ≦ [n/k]k otherwise also calculates (14) through (18) for (p+1)k+1 ≦ j ≦ n if n is a multiple of k or p < [n/k] and for [n/k]k+1 ≦ j ≦ n otherwise, retaining the values of Reg i ^(0), . . . , Reg i ^(k) in the register set.
  • This series of operations is below called post-elimination C.
  • FIG. 10 shows a flow chart of its control algorithm at the level of above definition.
  • the first step distributes and assigns the rows of the coefficient matrix A^(0) and the components of b^(0) and x to the nodes π 0 32, . . . , π u 33, . . . , π P-1 34 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one node at a time in the cyclic order of π 0 , . . . , π P-1 , π 0 , π 1 , . . .
  • the t-th step within this loop orders the execution of the parallel preprocessing A t for the (pk+t)th row of the current reduced matrix A^(pk+t-1).
  • the next step orders the execution of the parallel updating B'.
  • the next step orders the execution of the post-elimination C.
  • n is not a multiple of k
  • FIG. 11 is a block diagram of a parallel linear calculating equipment according to the fifth embodiment of the present invention.
  • 41 is a network
  • 42, 43, 44 are clusters CL 0 , CL u , CL P-1 mutually connected by the network 41
  • 45 is a main controller G pc connected to each cluster.
  • FIG. 12 is a block diagram of a cluster in FIG. 11.
  • 1 is a memory
  • 46 is a C gateway that is a junction with the outside
  • 47, 48, 49 are element processors PE 1 , PE 2 , PE P c , each connected to the memory 1
  • 50 is a transmitter that transmits data between the memory 1 and the outside through the C gateway 46.
  • FIG. 13 is a block diagram of an element processor in FIG. 12.
  • 2 is a pivoting section
  • 3, 4, 5 are preprocessing sections A 1 , A t , A k respectively, each connected to the pivoting section 2
  • 6 is an updating section B connected to the pivoting section 2
  • 7 is a back-substitution section connected to the pivoting section 2
  • 51 is a gateway that is a junction with the outside
  • 101 is a register set composed of k registers
  • 102 is an arithmetic unit.
  • the pivoting section 2 of the element processor PE 1 of CL u 43 determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A 1 3 of element processors of CL u simultaneously calculate (4) and (5) for pk+2 ⁇ j ⁇ n with each A 1 3 calculating for elements and components in its charge, and the transmitter 50 transmits the results to the memory of every other cluster through the C gateway 46, while the updating section B 6 of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k +1 ⁇ i ⁇ n.
  • This series of operations is below called parallel preprocessing CLA 1 .
  • the preprocessing sections A t 4 of the above cluster CL u 43 simultaneously calculate (6), (7), (8), (9), (10) for pk+t ⁇ j ⁇ n with each A t 4 calculating for elements and components in its charge and, immediately after the pivoting section of PE t of CL u 43 determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1 ⁇ j ⁇ n, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B 6 of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1 ⁇ i ⁇ n.
  • This series of operations is below called parallel preprocessing CLA t , where 2 ⁇ t ⁇ k.
  • the back-substitution sections 7 of element processors calculate (19) and (20) using necessary data transmitted by the transmitters 50 of other clusters. These operations are called back-substitution.
  • FIG. 14 shows a flow chart of its control algorithm at the level of above definition.
  • the first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters CL 0 42, . . . , CL u 43, . . . , CL P-1 44 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one cluster at a time in the cyclic order of CL 0 , . . . , CL P-1 , CL 0 , CL 1 , . . .
  • n is not a multiple of k
  • the final step orders the execution of back-substitution and terminates the whole operation to obtain the unknown vector x.
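The back-substitution defined by calculations (19) and (20) can be sketched serially as follows. This is an illustrative interpretation in Python (not part of the patent text), assuming 0-based indexing and a coefficient matrix already reduced to unit upper triangular form by the preceding elimination steps; the function name is hypothetical.

```python
def back_substitute(a, b):
    # Obtain the unknown vector x by back-substitution per (19) and (20):
    # x_i = b_i, then b_h <- b_h - a[h][i] * x_i for every h < i,
    # for i = n, n-1, ..., 1 (indices are 0-based here).
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = b[i]                      # equation (19)
        for h in range(i):               # equation (20)
            b[h] -= a[h][i] * x[i]
    return x
```

For example, for the reduced system with rows (1, 2) and (0, 1) and right-hand side (5, 3), this yields x = (-1, 3).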
  • FIG. 15 is a block diagram of a parallel linear calculating equipment according to the sixth embodiment of the present invention.
  • 61 is a network;
  • 62, 63, 64 are clusters CL 0 , CL u , CL P-1 mutually connected by the network 61;
  • 65 is a main controller J pc connected to each cluster.
  • FIG. 16 is a block diagram of a cluster in FIG. 15.
  • 1 is a memory;
  • 46 is a C gateway that is a junction with the outside;
  • 66, 67, 68 are element processors PE 1 , PE 2 , PE P .sbsb.c, each connected to the memory 1;
  • 50 is a transmitter that transmits data between the memory 1 and the outside through the C gateway 46.
  • FIG. 17 is a block diagram of an element processor shown in FIG. 16.
  • 2 is a pivoting section
  • 3, 4, 5 are preprocessing sections A 1 , A t , A k respectively, each connected to the pivoting section 2
  • 9 is an updating section B' connected to the pivoting section 2
  • 10, 11, 12 are postprocessing sections C 1 , C t , C k-1 respectively, each connected to the pivoting section 2
  • 51 is a gateway that is a junction with the outside
  • 103 is a register set composed of k registers
  • 104 is an arithmetic unit.
  • the pivoting section 2 of the element processor PE 1 of CL u 63 determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A 1 3 of element processors of CL u 63 simultaneously calculate (4) and (5) for pk+2 ⁇ j ⁇ n with each A 1 3 calculating for elements and components in its charge, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B' 9 of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k +1 ⁇ i ⁇ n.
  • This series of operations is below called parallel preprocessing CLA 1 .
  • the preprocessing sections A t 4 of the above cluster CL u 63 simultaneously calculate (6), (7), (8), (9), (10) for pk+t ⁇ j ⁇ n with each A t 4 calculating for elements and components in its charge and, immediately after the pivoting section 2 of the element processor PE t of CL u 63 determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1 ⁇ j ⁇ n, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B' 9 of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k +1 ⁇ i ⁇ n.
  • This series of operations is below called parallel preprocessing CLA t , where 2 ⁇ t ⁇ k.
  • the updating section B' 9 of each element processor in charge of the i-th row such that 1 ⁇ i ⁇ pk or (p+1)k+1 ⁇ i ⁇ n if n is a multiple of k or p ⁇ [n/k] and 1 ⁇ i ⁇ [n/k]k otherwise also calculates (14) through (18) for (p+1)k+1 ⁇ j ⁇ n if n is a multiple of k or p ⁇ [n/k] and for [n/k]k+1 ⁇ j ⁇ n otherwise, retaining the values of Reg i .sup.(0), . . . , Reg i .sup.(k) in the register set.
  • These operations are below called parallel updating B' c .
  • This series of operations is below called post-elimination C c .
  • FIG. 18 shows a flow chart of its control algorithm at the level of above definition.
  • the first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters CL 0 62, . . . , CL u 63, . . . , CL P-1 64 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one cluster at a time in the cyclic order of CL 0 , . . . , CL P-1 , CL 0 , CL 1 , . . .
  • the t-th step within this loop orders the execution of the parallel preprocessing CLA t for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1).
  • the next step orders the execution of the parallel updating B' c .
  • the next step orders the execution of the post-elimination C c .
  • n is not a multiple of k
  • FIG. 19 shows a block diagram of an element processor or processor module of a parallel computer that implements the seventh embodiment of the present invention.
  • 201 is a gateway;
  • 202 is a cache memory;
  • 203 is a central processing unit;
  • 204 is a local memory;
  • 205 is a shared bus.
  • FIG. 20 shows a block diagram of a cluster composed of element processors 212, 213, . . . , 214, a C gateway 210, and a shared memory 211.
  • a network of the parallel computer connects each of the clusters to each other, so that data can be transmitted between any two clusters. Let the number of element processors in each cluster be P c and the total number of clusters be C.
  • the total number P of element processors in the parallel computer is C ⁇ P c .
  • the clusters be denoted by CL 1 , CL 2 , . . . , CL C
  • the element processors of CL u be denoted by PR u 1, . . . , PR u P.sbsb.c.
  • FIG. 21 is a block diagram of the parallel linear computation method according to the seventh embodiment of the present invention, implemented by a parallel computer structured as above.
  • 220 is a data distribution means;
  • 221 is a pivoting means;
  • 222 is an elementary pre-elimination means;
  • 223 is a multi-pivot elimination means;
  • 224 is an elimination testing means;
  • 225 is a remainder elimination means;
  • 226 is an elementary back-substitution means;
  • 227 is an elementary back-transmission means;
  • 228 is an elementary back-calculation means;
  • 229 is a back-processing testing means.
  • the processing involves a pivot choosing process for each l.
  • Partial pivoting chooses as a pivot in each reduced matrix A.sup.(r) an element with the largest absolute value in the relevant column or row.
  • Full pivoting chooses as a pivot in each reduced matrix A.sup.(r) an element with the largest absolute value in the submatrix of the columns or rows which have not hitherto been pivotal.
  • choosing of a pivot is necessary only when the relevant diagonal element is 0, and in that case any nonzero element can be chosen as a pivot in partial pivoting.
  • Pivoting methods in the present invention employ partial pivoting, and the present first method chooses the first nonzero element in the relevant row, and the present second method chooses an element with the greatest absolute value in the relevant row.
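The two partial-pivoting rules described above can be sketched as follows; a minimal illustration in Python (the function names are assumptions, not taken from the patent), where a pivot is sought within the relevant row and the chosen column is then interchanged with the current column, together with the matching entries of the unknown-vector ordering.

```python
def choose_pivot_first_nonzero(row, start):
    # First method: the first nonzero element in the relevant row,
    # searching from column `start` onward; None if none exists.
    for j in range(start, len(row)):
        if row[j] != 0.0:
            return j
    return None

def choose_pivot_max_abs(row, start):
    # Second method: the element with the greatest absolute value
    # in the relevant row, from column `start` onward.
    return max(range(start, len(row)), key=lambda j: abs(row[j]))

def interchange_columns(a, x_order, i, h):
    # After choosing pivotal column h for step i, interchange columns
    # i and h of the coefficient matrix and the matching entries of
    # the unknown-vector ordering, as the pivot choosing section does.
    if i != h:
        for row in a:
            row[i], row[h] = row[h], row[i]
        x_order[i], x_order[h] = x_order[h], x_order[i]
```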
  • FIG. 23 shows the process of the pivot choosing means 221.
  • the element processor either transmits h to a specified word of the shared memory 211 of each cluster, and each element processor refers to the word, or the element processor transmits h to a dedicated bus line, and each element processor fetches h into its local memory 204. Then each element processor, by the element interchange means 242, simultaneously interchanges the element in column i with the element in column h in the row in its charge. Then the two element processors in charge of the i-th component and the h-th component of the unknown vector x respectively interchange these components by the component interchange means 243. The pivot choosing process terminates hereby.
  • in the second method, the element processor in charge of the relevant row initializes max with the absolute value of the diagonal element and Col with i.
  • for each remaining column number j, the element processor compares max with the absolute value of the element in column j, and replaces max and Col with that absolute value and j, only if that absolute value is greater than max.
  • the element processor notifies each element processor of Col by a broadcast. The remaining steps are the same as above.
  • the element processor PR u 1 in charge of the (kP c +1)th row in the cluster CL u , where u=k-[k/C]C+1, calculates (32) and (33), and transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kP c +2≦i≦n belongs. If 2≦l≦P c , then each element processor in charge of the i-th row such that kP c +l≦i≦n calculates (34), and the element processor PR u l calculates (35) and (36).
  • the element processor PR u l calculates (38) and (39) and transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kP c +l+1 ⁇ i ⁇ n belongs.
  • each element processor in charge of the i-th row such that (k+1)P c +1≦i≦n calculates (40) and (41) for i.
  • the element processor in charge of the i-th row calculates (42).
  • an elementary back-transmission means that transmits x i to the shared memory of every cluster to which the element processor in charge of an h-th row such that 1≦h≦i-1 belongs.
  • each element processor in charge of the h-th row such that 1≦h≦i-1 calculates (43). Then this step decrements i by 1 and goes to the sixth step.
  • FIG. 22 is a block diagram of the parallel linear calculating method in the eighth embodiment of the present invention, implemented by a parallel computer structured as in the seventh embodiment.
  • 220 is a data distribution means;
  • 221 is a pivot choosing means;
  • 231 is an elimination testing means;
  • 232 is an elementary pre-elimination means;
  • 233 is a multi-pivot elimination means;
  • 234 is an elementary post-elimination means;
  • 235 is a post-elimination processing means;
  • 236 is a remainder elimination means.
  • the processing involves a pivot choosing process for each l, which is the same as in the seventh embodiment.
  • the element processor PR u 1 in charge of the (kP c +1)th row in the cluster CL u , where u=k-[k/C]C+1, calculates (32) and (33), and transmits the results to the shared memory of every other cluster. If 2≦l≦P c , then each element processor in charge of the i-th row such that kP c +l≦i≦n calculates (34), and the element processor PR u l calculates (35) and (36). Then after the pivot choosing means determines the pivot (37), the element processor PR u l calculates (38) and (39) and transmits the results to the shared memory of every other cluster.
  • each element processor in charge of the i-th row such that 1 ⁇ i ⁇ kP c or (k+1)P c +1 ⁇ i ⁇ n calculates (40) and (41).
  • the post-elimination processing means 235 eliminates unnecessary elements generated by the multi-pivot elimination means 233.
  • the core of the post-elimination processing means 235 is the elementary post-elimination means 234, which calculates (45) and (46) in the element processor in charge of the i-th row.
  • each element processor in charge of the i-th row such that [n/P c ]P c +1 ⁇ i ⁇ n executes the operation of the elementary pre-elimination means 232.
  • the remainder elimination means executes operation of the multi-pivot elimination means 233 followed by the post-elimination processing means 235.
  • the unknown vector x is obtained as the vector b.sup.(r) after the above operation.
  • if the preprocessing sections A t and the postprocessing sections C t have their own register sets, as the updating sections B and B' do in the first through sixth embodiments, and their operations are executed while retaining the values of variables and divisors, then the number of load-and-store operations for the memory is reduced, and a further improvement in computation speed can be achieved.
  • the present invention provides high-speed linear calculating equipment and parallel linear calculating equipment for solving systems of linear equations by means of Gauss's elimination method and Gauss-Jordan's method based on multi-pivot simultaneous elimination and scalar operations.
  • the speed-up is achieved by reducing the number of load-and-store operations for the memory by retaining values of variables in register sets in updating processing, and reducing the number of iteration by multi-pivot simultaneous elimination.
  • the present invention is easily implemented in scalar computers.
  • each memory is assigned blocks of k rows of the coefficient matrix A.sup.(0) for the k-pivot simultaneous elimination method, so that effects of parallel computation are enhanced.
  • the preprocessing or both the preprocessing and the postprocessing are also made parallel, and the computation is more effective.
  • a theoretical estimation has shown that if the number of components of the unknown vector x is sufficiently large for a definite number of processors, then the effects of parallel computation are sufficiently powerful. Therefore, parallel linear calculating equipment effectively employing the Gauss method and the Gauss-Jordan method based on multi-pivot simultaneous elimination has been obtained.
  • the present invention effectively makes possible high-speed parallel computation for solving systems of linear equations using a parallel computer with a number of element processors by means of the methods of the seventh and eighth embodiments.

Abstract

A linear calculating equipment comprises a memory for storing a coefficient matrix, a known vector and an unknown vector of a given system of linear equations, a pivoting device for choosing pivots of the matrix, a plurality of preprocessors for executing K steps of preprocessing for multi-pivot simultaneous elimination, an updating device for updating the elements of the matrix and the components of the vectors, a register set for storing values of the variables, a back-substitution device for obtaining a solution and a main controller for controlling the linear calculating equipment as a whole.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to calculating equipment for solving systems of linear equations, parallel calculating equipment for solving systems of linear equations, and methods of parallel computation for solving systems of linear equations.
2. Description of the Related Art
The need for solving systems of linear equations at high speed frequently arises in numerical analysis of the finite element method and the boundary element method and other processes of technical calculation.
Among algorithms based on direct methods of solving systems of linear equations is Gauss elimination method based on bi-pivot simultaneous elimination, which is described in Takeo Murata, Chikara Okuni and Yukihiko Karaki, "Super Computer-Application to Science and Technology," Maruzen 1985 pp 95-96. The bi-pivot simultaneous elimination algorithm eliminates two columns at the same time by choosing two pivots at one step. It limits simultaneous elimination to two columns and the choice of pivots to partial pivoting by row interchanges. Furthermore it considers the speeding up of its process in terms of numbers of repetition of do-loops only.
If simultaneous elimination is not limited to two columns and extended to more than two columns, the corresponding algorithms will be hereafter called multi-pivot simultaneous elimination algorithms.
A similar algorithm to multi-pivot simultaneous elimination algorithms is described in Jim Armstrong, "Algorithm and Performance Notes for Block LU Factorization," International Conference on Parallel Processing, 1988, Vol. 3, pp 161-164. It is a block LU factorization algorithm intended to speed up matrix operations and should be implemented in vector computers or computers with a few multiplexed processors.
Therefore, according to the prior art, there has not yet been developed a Gauss elimination method or a Gauss-Jordan elimination method which is based on multi-pivot simultaneous elimination and can be efficiently implemented in scalar computers and parallel computers.
SUMMARY OF THE INVENTION
The object of the present invention is therefore to provide high-speed parallel calculating equipment and methods of parallel computation for solving systems of linear equations by means of Gauss elimination method and Gauss-Jordan's method based on multi-pivot simultaneous elimination.
In order to achieve the aforementioned objective, according to one aspect of the present invention, there are provided
a memory that stores reduced coefficient matrices A.sup.(r) with zeroes generated from the first to the r-th column and corresponding known vectors b.sup.(r) and an unknown vector x expressed by
A.sup.(r) =(a.sub.ij.sup.(r)), 1≦i, j≦n,
b.sup.(r) =(b.sub.1.sup.(r), b.sub.2.sup.(r), . . . , b.sub.n.sup.(r)).sup.t,                                   (1)
x=(x.sub.1, x.sub.2, . . . , x.sub.n).sup.t
for a given system of linear equations
A.sup.(0) x=b.sup.(0).                                     (2)
a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A.sup.(i-1), and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that, immediately after the pivot choosing section's above operation determines the transposed pivot
a.sub.pk+1pk+1,.sup.(pk)                                   ( 3)
calculates
a.sub.pk+1j.sup.(pk+1) =a.sub.pk+1j.sup.(pk) /a.sub.pk+1pk+1.sup.(pk) ( 4)
for pk+2≦j≦n and
b.sub.pk+1.sup.(pk+1) =b.sub.pk+1.sup.(pk) /a.sub.pk+1pk+1,.sup.(pk) ( 5)
k-1 preprocessing sections At, where t=2, 3, . . . , k, each of which is connected to the memory and calculates ##EQU1## for pk+t≦j≦n, and, immediately after the pivot choosing section determines the transposed pivot
a.sub.pk+tpk+t,.sup.(pk+t-1)                               ( 11)
calculates
a.sub.pk+tj.sup.(pk+t) =a.sub.pk+tj.sup.(pk+t-1) /a.sub.pk+tpk+t,.sup.(pk+t-1)                             ( 12)
b.sub.pk+t.sup.(pk+t) =b.sub.pk+t.sup.(pk+t-1) /a.sub.pk+tpk+t.sup.(pk+t-1) ( 13)
for pk+t+1≦j≦n,
an updating section B that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates ##EQU2## for (p+1)k+1≦i, j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
a back-substitution section that is connected to the memory and obtains the value of the unknown vector x by calculating
x.sub.i =b.sub.i.sup.(n)                                   ( 19)
and
b.sub.h.sup.(n+h-i+1) =b.sub.h.sup.(n+h-i) -a.sub.hi.sup.(h) X.sub.i ( 20)
for 1≦h≦i-1 for i=n, n-1, . . . , 1 in this order of i, and
a main controller G that, if n is a multiple of k, instructs the pivot choosing section, the preprocessing sections A1, . . . , Ak, and the updating section B to repeat their above operations for p=0, 1, . . . , n/k-2, and instructs the pivot choosing section and the preprocessing sections A1, . . . , Ak to execute their above operations for p=n/k -1, and, if n is not a multiple of k, instructs the pivot choosing section, the preprocessing sections A1, . . . , Ak, and the updating section B to repeat their above operations for p=0, 1, . . . [n/k]-1, where [x] denotes the greatest integer equal to or less than x, and instructs the pivot choosing section and the preprocessing sections A1, . . . , An-[n/k]k to execute their above operations for p=[n/k], and in both cases, instructs the back-substitution section to obtain the unknown vector x.
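The cooperation of the preprocessing sections A1, . . . , Ak and the updating section B above can be illustrated by a compact serial sketch. This is an interpretation in Python under simplifying assumptions (0-based indexing, nonzero pivots so that no pivot choosing is needed, and local variables standing in for the register set Regi.sup.(0), . . . , Regi.sup.(k)); the function name is hypothetical.

```python
def k_pivot_eliminate(a, b, k):
    # Gauss elimination with k-pivot simultaneous elimination (serial
    # sketch; n need not be a multiple of k).  Preprocessing A_t reduces
    # pivotal row pk+t against the earlier rows of its block and divides
    # it by its pivot (cf. (4)-(13)); updating B then eliminates all k
    # pivotal columns from each remaining row in one sweep, holding the
    # k multipliers in local variables as the register set does
    # (cf. (14)-(18)).
    n = len(b)
    for p0 in range(0, n, k):
        kk = min(k, n - p0)                  # last block may be short
        for t in range(kk):                  # preprocessing A_1 .. A_k
            r = p0 + t
            for s in range(p0, r):           # in-block pre-elimination
                m = a[r][s]
                for j in range(s, n):
                    a[r][j] -= m * a[s][j]
                b[r] -= m * b[s]
            piv = a[r][r]                    # divide by the pivot
            for j in range(r + 1, n):
                a[r][j] /= piv
            b[r] /= piv
            a[r][r] = 1.0
        for i in range(p0 + kk, n):          # updating B
            regs = []
            for t in range(kk):              # forward-substituted multipliers
                c = p0 + t
                regs.append(a[i][c] - sum(regs[s] * a[p0 + s][c] for s in range(t)))
                a[i][c] = 0.0
            for j in range(p0 + kk, n):      # one combined update per element
                a[i][j] -= sum(regs[t] * a[p0 + t][j] for t in range(kk))
            b[i] -= sum(regs[t] * b[p0 + t] for t in range(kk))
    return a, b
```

Combined with back-substitution on the resulting unit upper triangular system, this yields the unknown vector x; each remaining element is loaded and stored once per block of k pivots rather than once per pivot.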
According to another aspect of the present invention there are provided
a memory that stores coefficient matrices A.sup.(r), known vectors b.sup.(r) and the unknown vector x expressed by (1) for a given system of linear equations (2),
a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A.sup.(i-1), and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that, immediately after the pivot choosing section's above operation determines the transposed pivot (3), calculates (4) for pk +2≦j≦n and (5),
k-1 preprocessing sections At, where t=2, 3, . . , k, each of which is connected to the memory, calculates (6), (7), . . . , (10) for pk+t≦j≦n, and, immediately after the pivot choosing section determines the transposed pivot (11), calculates (12) and (13) for pk+t +1≦j≦n,
an updating section B' which is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for 1≦i≦pk, (p+1)k+1≦i≦n, (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k] and for 1≦i≦[n/k]k, [n/k]k+1≦j≦n otherwise, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
k-1 postprocessing sections Ct, where t=1, 2, . . . , k-1, each of which is connected to the memory and calculates
Reg.sup.(0) =a.sub.pk+1pk+t+1,.sup.(pk+t)                  ( 21)
Reg.sup.(1) =a.sub.pk+2pk+t+1,.sup.(pk+t)                  ( 22)
Reg.sup.(t-1) =a.sub.pk+tpk+t+1,.sup.(pk+t)                ( 23)
a.sub.pk+1j.sup.(pk+t+1) =a.sub.pk+1j.sup.(pk+t) -Reg.sup.(0) a.sub.pk+t+1j,.sup.(pk+t+1)                               ( 24)
a.sub.pk+2j.sup.(pk+t+1) =a.sub.pk+2j.sup.(pk+t) -Reg.sup.(1) a.sub.pk+t+1j,.sup.(pk+t+1)                               ( 25)
a.sub.pk+tj.sup.(pk+t+1) =a.sub.pk+tj.sup.(pk+t) -Reg.sup.(t-1) a.sub.pk+t+1j,.sup.(pk+t+1)                               ( 26)
b.sub.pk+1.sup.(pk+t+1) =b.sub.pk+1.sup.(pk+t) -Reg.sup.(0) b.sub.pk+t+1,.sup.(pk+t+1)                                ( 27)
b.sub.pk+2.sup.(pk+t+1) =b.sub.pk+2.sup.(pk+t) -Reg.sup.(1) b.sub.pk+t+1,.sup.(pk+t+1)                                ( 28)
b.sub.pk+t.sup.(pk+t+1) =b.sub.pk+t.sup.(pk+t) -Reg.sup.(t-1) b.sub.pk+t+1.sup.(pk+t+1)                                 ( 29)
for pk+t+2≦j≦n,
a main controller J that, if n is a multiple of k, instructs the pivot choosing section, the preprocessing sections A1, . . . , Ak, the updating section B', and the postprocessing sections C1, . . . , Ck-1 to repeat their above operations for p=0, 1, . . . , n/k-1, and, if n is not a multiple of k, instructs the pivot choosing section, the preprocessing sections A1, . . . , Ak, the updating section B', and the postprocessing sections C1, . . . , Ck-1 to repeat their above operations for p=0, 1, . . . [n/k]-1, and instructs the pivot choosing section, the preprocessing sections A1, . . . , An-[n/k]k, the updating section B', and the postprocessing sections C1, . . . , Cn-[n/k]k to execute their above operations for p=[n/k].
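The postprocessing sections Ct defined by (21) through (29) eliminate, within the current block, the elements above the just-finished pivotal row. A minimal in-place sketch in Python, assuming 0-based indexing and an illustrative function name:

```python
def post_eliminate_block(a, b, p0, kk, n):
    # Post-elimination C_t (cf. (21)-(29)): once pivotal row p0+t of the
    # block is finished, subtract its multiples from the earlier rows of
    # the same block, so the block ends up fully diagonalized
    # (Gauss-Jordan within the block).
    for t in range(1, kk):
        r = p0 + t                       # just-finished pivotal row
        for h in range(p0, r):
            reg = a[h][r]                # (21)-(23): saved multiplier
            for j in range(r + 1, n):    # (24)-(26)
                a[h][j] -= reg * a[r][j]
            b[h] -= reg * b[r]           # (27)-(29)
            a[h][r] = 0.0
```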
According to another aspect of the present invention there is provided a system of nodes α0, . . . , αP-1, each of which is connected to each other by a network and comprises:
a memory that stores blocks of k rows of each coefficient matrix A.sup.(r) and corresponding k components of each known vector b.sup.(r) and an unknown vector x expressed by (1) for a given system of linear equations (2),
a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A.sup.(i-1), and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that is connected to the memory and calculates (4) for pk+2≦j≦n and (5),
k-1 preprocessing sections At, where t=2, 3, . . . , k, each of which is connected to the memory, calculates (6), (7), . . . , (10) for pk+t≦j≦n, and calculates (12) and (13) for pk+t+1≦j≦n,
an updating section B that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k +1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
a back-substitution section that is connected to the memory and obtains the unknown x by back-substitution, that is, by calculating (19) and (20),
a gateway that is connected to the memory and is a junction with the outside, and
a transmitter that is connected to the memory and transmits data between the memory and the outside through the gateway.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the node αu, then the pivot choosing section of the node αu determines the pivot (3), and the preprocessing section of the node αu calculates (4) and (5) for pk+2≦j≦n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B of the node in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing A1.
The preprocessing section At of the above node αu calculates (6), (7), (8), (9), (10) for pk+t≦j≦n, and, immediately after the pivot choosing section of αu determines the pivot (11), calculates (12) and (13) for pk +t+1≦j≦n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B of the node in charge of the i-th row calculates ##EQU3## for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing At, where 2≦t≦k.
The updating section B of each node in charge of the i-th row such that (p+1)k+1≦i≦n also calculates (14) through (18) retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B.
According to a further aspect of the present invention there is provided a main controller Gp that is connected to the system of nodes by the network, distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the nodes in such a manner that each block of consecutive k rows and corresponding 2k components is transmitted to the memory of one node in the cyclic order of α0, . . . , αP-1, α0, α1, . . . , and, if n is a multiple of k, instructs each node to execute parallel preprocessing A1 through Ak and parallel updating B for p=0, 1, . . . , n/k-1, and, if n is not a multiple of k, instructs each node to execute parallel preprocessing A1 through Ak and parallel updating B for p=0, 1, . . . , [n/k]-1 and to execute parallel preprocessing A1 through An-[n/k]k for p=[n/k], and instructs the nodes to obtain the unknown vector by means of back-substitution.
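The cyclic block distribution performed by the main controller Gp can be sketched as follows; an illustrative Python fragment (names are assumptions) that returns, for each node, the 1-based inclusive row ranges assigned to its memory.

```python
def distribute_rows_cyclically(n, k, P):
    # Deal out blocks of k consecutive rows of A^(0) (and the matching
    # components of b^(0) and x) to nodes alpha_0, ..., alpha_{P-1} in
    # cyclic order; the final block is short when n is not a multiple
    # of k.  Returns a list of (first_row, last_row) blocks per node.
    assignment = [[] for _ in range(P)]
    node = 0
    for first in range(1, n + 1, k):
        last = min(first + k - 1, n)
        assignment[node].append((first, last))
        node = (node + 1) % P            # cyclic order alpha_0 .. alpha_{P-1}
    return assignment
```

For n=10, k=3 and P=2 nodes, node 0 holds rows 1-3 and 7-9, and node 1 holds rows 4-6 and the short final block, row 10.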
According to another aspect of the present invention there is provided a system of nodes α0, . . . , αP-1, each of which is connected to each other by a network and comprises:
a memory that stores blocks of k rows of each coefficient matrix A.sup.(r) and corresponding k components of each known vector b.sup.(r) and an unknown vector x expressed by (1) for a given system of linear equations (2),
a pivot choosing section that is connected to the memory, chooses a pivot in the i-th row of A.sup.(i-1), and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that is connected to the memory and calculates (4) for pk+2≦j≦n and (5),
k-1 preprocessing sections At, where t=2, 3, . . . , k, each of which is connected to the memory, calculates (6), (7), . . . , (10) for pk+t≦j≦n, and calculates (12) and (13) for pk+t+1≦j≦n,
an updating section B' that is connected to the memory, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k +1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
k-1 postprocessing sections Ct, where t=1, 2, . . . , k-1, each of which is connected to the memory and calculates (21), (22), . . . , (29) for pk+t+2≦j≦n,
a gateway that is connected to the memory and is a junction with the outside, and
a transmitter that is connected to the memory and transmits data between the memory and the outside through the gateway.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the node αu, then the pivot choosing section of αu determines the pivot (3), and the preprocessing section of αu calculates (4) and (5) for pk+2≦j≦n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B' of the node in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing A1.
The preprocessing section At of the node αu calculates (6), (7), (8), (9), (10) for pk+t≦j≦n, and, immediately after the pivot choosing section 2 of αu determines the pivot (11), calculates (12) and (13) for pk +t+1≦j≦n, and the transmitter transmits the results to the memory of every other node through the gateway, while the updating section B' of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing At, where 2≦t≦k.
The updating section B' of each node in charge of the i-th row such that 1≦i≦pk or (p+1)k+1≦i≦n if n is a multiple of k or p<[n/k] and 1≦i≦[n/k]k otherwise also calculates (14) through (18) for (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k] and for [n/k]k+1≦j≦n otherwise, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B'.
The postprocessing section Ct of the above node αu calculate (21), (22), . . . , (29) for pk+t+2≦j≦n for t=1, 2, . . . , k-1 if n is a multiple of k or p<[n/k] and for t=1, 2, . . . , n-[n/k]k otherwise. This series of operations is below called post-elimination C.
According to a further aspect of the present invention there is provided a main controller Jp that is connected to the system of nodes by the network, distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the nodes in such a manner that each block of consecutive k rows and corresponding 2k components is transmitted to the memory of one node in the cyclic order of α0, . . . , αP-1, α0, α1, . . . , and, if n is a multiple of k, instructs each node to execute parallel preprocessing A1 through Ak, parallel updating B' and post-elimination C for p=0, . . . , n/k-1, and, if n is not a multiple of k, instructs each node to execute parallel preprocessing A1 through Ak, parallel updating B' and post-elimination C for p=0, 1, . . . , [n/k]-1 and to execute parallel preprocessing A1 through An-[n/k]k, parallel updating B', and post-elimination C for p=[n/k].
According to another aspect of the present invention there is provided an element processor comprising:
a pivot choosing section that, for coefficient matrices A.sup.(r), known vectors b.sup.(r) and an unknown vector x expressed by (1) for a given system of linear equations (2), chooses a pivot in the i-th row of A.sup.(i-1) and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that is connected to the pivot choosing section and calculates (4) for pk+2≦j≦n and (5),
k-1 preprocessing sections At, where t=2, 3, . . . , k, each of which is connected to the pivot choosing section, calculates (6), (7), . . . , (10) for pk+t≦j≦n, and calculates (12) and (13) for pk+t+1≦j≦n,
an updating section B which is connected to the pivot choosing section, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
a back-substitution section that is connected to the pivot choosing section and obtains the unknown x by back-substitution, that is, by calculating (19) and (20), and
a gateway that is connected to the pivot choosing section and is a junction with the outside.
According to a further aspect of the present invention there is provided a system of clusters, CL0, . . . , CLP-1, which are connected to each other by a network and each of which comprises:
the above element processors PE1, . . . , PEP.sbsb.c,
a memory that stores blocks of k rows of each coefficient matrix A.sup.(r) and corresponding k components of each known vector b.sup.(r) and the unknown vector x,
a C gateway that is a junction with the outside, and
a transmitter that transmits data between the memory and the outside through the C gateway.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the cluster CLu, then the pivot choosing section, the updating section and the back-substitution section of each element processor of CLu take charge of part of the k rows and 2k components row by row, while the preprocessing section At of each element processor of CLu takes charge of elements of the (pk+t)th row of A.sup.(r) and the (pk+t)th component of b.sup.(r) one by one.
Specifically, the pivot choosing section of the element processor PE1 of CLu determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A1 of element processors of CLu simultaneously calculate (4) for pk+2≦j≦n and (5) with each A1 calculating for elements and components in its charge, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLA1.
The preprocessing sections At of element processors of the above cluster CLu simultaneously calculate (6), (7), (8), (9), (10) for pk+t≦j≦n with each At calculating for elements and components in its charge and, immediately after the pivot choosing section of PEt of CLu determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1≦j≦n, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLAt, where 2≦t≦k.
The updating section B of each element processor in charge of the i-th row such that (p+1)k+1≦i≦n calculates (14) through (18) for (p+1)k+1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating Bc.
According to a further aspect of the present invention there is provided a main controller Gpc that is connected to the above system, distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters in such a manner that each block of consecutive k rows and corresponding 2k components is transmitted to the memory of one cluster in the cyclic order of CL0, . . . , CLP-1, CL0, CL1, . . . , and, if n is a multiple of k, instructs each cluster to execute parallel preprocessing CLA1 through CLAk and parallel updating Bc for p=0, 1, . . . , n/k-2 and to execute CLA1 through CLAk for p=n/k-1, and, if n is not a multiple of k, instructs each cluster to execute CLA1 through CLAk and Bc for p=0, 1, . . . , [n/k]-1 and to execute CLA1 through CLAn-[n/k]k for p=[n/k], and instructs each cluster to obtain the unknown vector x by means of the back-substitution sections of its element processors and its transmitter.
According to another aspect of the present invention there is provided an element processor comprising:
a pivot choosing section that, for coefficient matrices A.sup.(r), known vectors b.sup.(r) and an unknown vector x expressed by (1) for a given system of linear equations (2), chooses a pivot in the i-th row of A.sup.(i-1) and interchanges the i-th column with the chosen pivotal column,
a preprocessing section A1 that is connected to the pivot choosing section and calculates (4) for pk+2≦j≦n and (5),
k-1 preprocessing sections At, where t=2, 3, . . . , k, each of which is connected to the pivot choosing section, calculates (6), (7), . . . , (10) for pk+t≦j≦n, and calculates (12) and (13) for pk+t+1≦j≦n,
an updating section B' which is connected to the pivot choosing section, comprises a set of k registers and an arithmetic unit, and calculates (14), (15), . . . , (18) for (p+1)k+1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set,
k-1 postprocessing sections Ct, where t=1, 2, . . . , k-1, each of which is connected to the pivot choosing section and calculates (21), (22), . . . , (29) for pk+t+2≦j≦n, and
a gateway that is connected to the pivot choosing section and is a junction with the outside.
According to a further aspect of the present invention there is provided a system of clusters, CL0, . . . , CLP-1, which are connected to each other by a network and each of which comprises:
the above element processors PE1, . . . , PEP.sbsb.c,
a memory that stores the coefficient matrices A.sup.(r), the known vectors b.sup.(r) and the unknown vector x,
a C gateway that is a junction with the outside, and
a transmitter that transmits data between the memory and the outside through the C gateway.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the cluster CLu, then the pivot choosing section and the updating section B' of each element processor of CLu take charge of part of the k rows and 2k components row by row, while the preprocessing section At and postprocessing section Ct of each element processor of CLu take charge of elements of the (pk+t)th row of A.sup.(r) and the (pk+t)th component of b.sup.(r) one by one.
Specifically, the pivot choosing section of the element processor PE1 of CLu determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A1 of element processors of CLu simultaneously calculate (4) and (5) for pk+2≦j≦n with each A1 calculating for elements and components in its charge, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B' of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k +1≦i≦n. This series of operations is below called parallel preprocessing CLA1.
The preprocessing sections At of element processors of the above cluster CLu simultaneously calculate (6), (7), (8), (9), (10) for pk+t≦j≦n with each At calculating for elements and components in its charge and, immediately after the pivot choosing section of PEt of CLu determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1≦j≦n, and the transmitter transmits the results to the memory of every other cluster through the C gateway, while the updating section B' of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i ≦n. This series of operations is below called parallel preprocessing CLAt, where 2≦t≦k.
The updating section B' of each element processor in charge of the i-th row such that 1≦i≦pk or (p+1)k+1≦i≦n if n is a multiple of k or p<[n/k] and 1≦i≦[n/k]k otherwise also calculates (14) through (18) for (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k] and for [n/k]k+1≦j≦n otherwise, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B'c.
The postprocessing sections Ct of element processors of the above CLu simultaneously calculate (21), (22), . . . , (29) for j such that pk+t+2≦j≦n for t=1, 2, . . . , k-1 if n is a multiple of k or p<[n/k] and for t=1, 2, . . . , n-[n/k]k otherwise. This series of operations is below called post-elimination Cc.
According to a further aspect of the present invention there is provided a main controller Jpc that is connected to the above system, distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters in such a manner that each block of consecutive k rows and corresponding 2k components is transmitted to the memory of one cluster in the cyclic order of CL0, . . . , CLP-1, CL0, CL1, . . . , and, if n is a multiple of k, instructs each cluster to execute parallel preprocessing CLA1 through CLAk, parallel updating B'c and post-elimination Cc for p=0, 1, . . . , n/k-1, and, if n is not a multiple of k, instructs each cluster to execute parallel preprocessing CLA1 through CLAk, parallel updating B'c, and post-elimination Cc for p=0, 1, . . . , [n/k]-1 and to execute parallel preprocessing CLA1 through CLAn-[n/k]k, parallel updating B'c, and post-elimination Cc for p=[n/k].
According to another aspect of the present invention, there is provided a parallel elimination method for solving the system of linear equations (2) in a parallel computer comprising C clusters CL1, . . . , CLC connected by a network. Each of the clusters comprises Pc element processors and a shared memory that stores part of the reduced matrices A.sup.(r) and the known vectors b.sup.(r) and the unknown vector x. The method comprises:
a data distribution means that distributes the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the shared memory of the clusters in such a manner that each block of consecutive k rows and corresponding 2k components is transmitted to the shared memory in the cyclic order of CL1, . . . , CLC, CL1, CL2, . . . , and assigns those distributed to the cluster's shared memory to its element processors row by row,
a pivot choosing means that chooses a pivot in a row assigned to each element processor,
an elementary pre-elimination means that, after the pivot choosing means chooses the pivot
a.sub.kPc+1,kPc+1.sup.(kPc)                                (31)
calculates
a.sub.kPc+1,j.sup.(kPc+1) =a.sub.kPc+1,j.sup.(kPc) /a.sub.kPc+1,kPc+1.sup.(kPc)                              (32)
b.sub.kPc+1.sup.(kPc+1) =b.sub.kPc+1.sup.(kPc) /a.sub.kPc+1,kPc+1.sup.(kPc)                              (33)
in the element processor in charge of the (kPc +1)th row, transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kPc +2≦i≦n belongs, and, for l=2, . . . , Pc, calculates ##EQU4## for kPc +l≦i≦n in the element processor in charge of the i-th row, calculates ##EQU5## in the element processor in charge of the (kPc +l)th row, and, after the pivot choosing means determines the pivot
a.sub.kPc+l,kPc+l.sup.(kPc+l-1)                            (37)
calculates
a.sub.kPc+l,j.sup.(kPc+l) =a.sub.kPc+l,j.sup.(kPc+l-1) /a.sub.kPc+l,kPc+l.sup.(kPc+l-1)                          (38)
b.sub.kPc+l.sup.(kPc+l) =b.sub.kPc+l.sup.(kPc+l-1) /a.sub.kPc+l,kPc+l.sup.(kPc+l-1)                           (39)
in the element processor in charge of the (kPc +l)th row, transmits the results (38) and (39) to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kPc +l+1≦i≦n belongs,
a multi-pivot elimination means that calculates ##EQU6## in each element processor in charge of the i-th row such that (k+1)Pc +1≦i≦n,
a means for testing if the operation of the multi-pivot elimination means was repeated [n/Pc ] times, and
a remainder elimination means that executes the above elementary pre-elimination means for the ([n/Pc ]Pc +1)th row through the n-th row, if the above testing means judges that the operation of the multi-pivot elimination means was executed [n/Pc ] times, and n is not a multiple of Pc.
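The elementary pre-elimination above can be modeled serially. Since ##EQU4## and ##EQU5## are image equations not reproduced in this text, the sketch below assumes they are the usual Gaussian update a[i][j] ← a[i][j] − a[i][m]·a[m][j]; it normalizes each pivot row per (32)-(33)/(38)-(39) and eliminates the rows below, with no pivoting and no parallel distribution:

```python
# Serial, single-process sketch of the elementary pre-elimination on one
# band of Pc rows.  The EQU4/EQU5-style updates are assumed to be the
# standard Gaussian row update; this is an illustration, not the
# parallel per-processor procedure of the patent.

def pre_eliminate_band(a, b, start, Pc):
    """Normalize rows start..start+Pc-1 and clear the columns below them."""
    n = len(a)
    for m in range(start, min(start + Pc, n)):
        piv = a[m][m]                      # chosen pivot, cf. (31)/(37)
        a[m] = [v / piv for v in a[m]]     # (32)/(38): divide the pivot row
        b[m] = b[m] / piv                  # (33)/(39)
        for i in range(m + 1, n):          # assumed EQU4-style elimination
            f = a[i][m]
            a[i] = [x - f * y for x, y in zip(a[i], a[m])]
            b[i] = b[i] - f * b[m]
    return a, b
```

On a 2x2 example, one band of Pc=2 rows reduces the matrix to unit upper triangular form, ready for back-substitution.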
According to a further aspect of the present invention, there is provided a parallel computation method
comprising:
an elementary back-substitution means that calculates
x.sub.i =b.sub.i.sup.(n)                                   (42)
in the element processor in charge of the i-th row after the elimination process of the above parallel elimination method,
an elementary back-transmission means that transmits xi to the shared memory of every cluster to which the element processor in charge of an h-th row such that 1≦h≦i-1 belongs,
an elementary back-calculation means that calculates
b.sub.h.sup.(n+h-i+1) =b.sub.h.sup.(n+h-i) -a.sub.hi.sup.(h) x.sub.i                                   (43)
for 1≦h≦i-1 in the element processor in charge of the h-th row, and
a means for testing if the operation of the elementary back-substitution means was repeated from i=n to i=1.
The solution of the system of linear equations (2) is thus obtained by the elementary back-substitution as
x.sub.n =b.sub.n.sup.(n), . . . , x.sub.1 =b.sub.1.sup.(n)                                   (44)
in this order.
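A serial model of the elementary back-substitution (42)-(44): after elimination the coefficient matrix is unit upper triangular, so each x_i is read off as the current b_i and then folded back into the rows above it per (43). This sketches the data flow only, not the per-processor transmission steps:

```python
# Serial sketch of the elementary back-substitution means (42)-(44).
# Assumes the elimination phase already reduced `a` to unit upper
# triangular form, so the diagonal entries are 1.

def back_substitute(a, b):
    n = len(a)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # i = n, ..., 1 in the patent's order
        x[i] = b[i]                  # (42): x_i = b_i^(n)
        for h in range(i):           # (43): fold x_i into the rows above
            b[h] -= a[h][i] * x[i]
    return x
```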
According to another aspect of the present invention, there is provided a parallel elimination method for solving the system of linear equations (2) in a parallel computer comprising C clusters CL1, . . . , CLC connected by a network. Each of the clusters comprises Pc element processors and a shared memory that stores part of the reduced matrices A.sup.(r) and the known vectors b.sup.(r) and the unknown vector x. The method comprises:
a data distribution means that distributes the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters in such a manner as each block of consecutive k rows and corresponding 2k components is transmitted to the shared memory in the cyclic order of CL1, . . . , CLC, CL1, CL2, . . . , and assigns those distributed to the cluster's shared memory to its element processors row by row,
a pivot choosing means that chooses a pivot in a row assigned to each element processor,
an elementary pre-elimination means that, after the pivot choosing means chooses the pivot (31), calculates (32) and (33) in the element processor in charge of the (kPc +1)th row, transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kPc +2≦i≦n belongs, and, for l=2, . . . , Pc, calculates (34) for kPc +l≦i≦n in the element processor in charge of the i-th row, calculates (35) and (36) in the element processor in charge of the (kPc +l)th row, and, after the pivot choosing means chooses the pivot (37), calculates (38) and (39) in the element processor in charge of the (kPc +l)th row, and transmits the results (38) and (39) to the shared memory of every other cluster to which an element processor in charge of the i-th row such that kPc +l+1≦i≦n belongs,
a multi-pivot elimination means that calculates (40) and (41) in each element processor in charge of the i-th row such that (k+1)Pc +1≦i≦n,
an elementary post-elimination means that calculates
a.sub.ij.sup.(r+1) =a.sub.ij.sup.(r) -a.sub.ii+l.sup.(r) a.sub.i+l,j.sup.(r+1)                                     (45)
b.sub.i.sup.(r+1) =b.sub.i.sup.(r) -a.sub.ii+l.sup.(r) b.sub.i+l.sup.(r+1)                                   (46)
in the element processor in charge of the i-th row,
a post-elimination processing means that calculates (45) and (46) for l=-w+q+1 for w=1, . . . , q and q=1, . . . , Pc -1 for kPc +1≦i≦kPc +q in the element processor in charge of the i-th row,
a means for testing if the operation of the post-elimination means was executed [n/Pc ] times, and
a remainder elimination means that executes the above elementary pre-elimination means for the ([n/Pc ]Pc +1)th through the n-th rows and executes the above multi-pivot elimination means and the post-elimination means, if the above testing means judges that the operation of the post-elimination means was executed [n/Pc ] times.
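The elementary post-elimination (45)-(46) clears the above-diagonal entries inside a band of Pc rows using rows that are already reduced. The patent's loop order over w, q, and l=q-w+1 is collapsed below into a bottom-up sweep over the band, which is assumed here to produce the same fill pattern; this is a serial illustration only:

```python
# Serial sketch of the elementary post-elimination means (45)-(46):
# within one band of Pc rows, the entry a[i][m] above the diagonal is
# cleared by subtracting a multiple of the already reduced row m.

def post_eliminate_band(a, b, start, Pc):
    """Clear above-diagonal entries within rows start..start+Pc-1."""
    n = len(a)
    end = min(start + Pc, n)
    for m in range(end - 1, start, -1):   # reduced row used for elimination
        for i in range(start, m):         # rows above it inside the band
            f = a[i][m]                   # the a.sub.ii+l factor of (45)-(46)
            a[i] = [x - f * y for x, y in zip(a[i], a[m])]   # (45)
            b[i] -= f * b[m]                                  # (46)
    return a, b
```

Applied to a unit upper triangular band, this reduces the band to the identity so that b directly holds the corresponding solution components.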
According to a further aspect of the present invention, there is provided
a search means whereby, if a diagonal element of a coefficient matrix is 0, an above element processor searches the same row for a nonzero element in the order of increasing column numbers starting from that diagonal element,
a column number broadcasting means that notifies other element processors of the column number of a nonzero element found by the above search means,
an element interchange means whereby each element processor interchanges the two elements which are in its charge and have the same column numbers as the above diagonal zero element and the found nonzero element, and
a component interchange means whereby two element processors interchange the two components of the unknown vector which are in their charge and have the same component indices as the column numbers of the above diagonal zero element and the found nonzero element.
According to a further aspect of the present invention, there is provided
a search means whereby an above element processor searches for an element with the greatest absolute value in the order of increasing column numbers from a diagonal element in the same row,
a column number broadcasting means that notifies other element processors of the column number of an element found by the above search means,
an element interchange means whereby each element processor interchanges the two elements which are in its charge and have the same column number as the above diagonal element and the found element, and
a component interchange means whereby two element processors interchange the two components of the unknown vector which are in their charge and have the same component indices as the column numbers of the above diagonal element and the found element.
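The last two aspects can be sketched in one routine. The partial-pivoting variant (greatest absolute value) is shown; the search means picks a column at or to the right of the diagonal in row i, and the interchange means swap the two columns and the matching index bookkeeping for x. In a real cluster each processor swaps only the two elements in its charge; this serial sketch swaps whole columns:

```python
# Serial sketch of the column-pivoting aspects: search, broadcast of the
# column number (here just the return value), element interchange, and
# component interchange.  `order` is an illustrative permutation that
# tracks which unknown each column currently corresponds to.

def choose_and_swap_pivot(a, order, i):
    """Pick the largest |entry| at or right of the diagonal in row i."""
    n = len(a)
    c = max(range(i, n), key=lambda j: abs(a[i][j]))  # search means
    if c != i:
        for row in a:                                 # element interchange
            row[i], row[c] = row[c], row[i]
        order[i], order[c] = order[c], order[i]       # component interchange
    return c                                          # "broadcast" column number
```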
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects and features of the present invention will become clear from the following description taken in conjunction with the preferred embodiments thereof with reference to the accompanying drawings throughout which like parts are designated by like reference numerals, and in which:
FIG. 1 is a block diagram of a linear calculating equipment according to the first embodiment of the present invention.
FIG. 2 is a flow chart of a control algorithm to be performed in the first embodiment.
FIG. 3 is a block diagram of a linear calculating equipment according to the second embodiment of the present invention.
FIG. 4 is a flow chart of the control algorithm to be performed in the second embodiment.
FIG. 5 is a block diagram of a parallel linear calculating equipment according to the third embodiment of the present invention.
FIG. 6 is a block diagram of a node shown in FIG. 5.
FIG. 7 is a flow chart of the control algorithm to be performed in the third embodiment.
FIG. 8 is a block diagram of a parallel linear calculating equipment according to the fourth embodiment of the present invention.
FIG. 9 is a block diagram of a node shown in FIG. 8.
FIG. 10 is a flow chart of the control algorithm to be performed in the fourth embodiment.
FIG. 11 is a block diagram of a parallel linear calculating equipment according to the fifth embodiment of the present invention.
FIG. 12 is a block diagram of a cluster shown in FIG. 11.
FIG. 13 is a block diagram of an element processor shown in FIG. 12.
FIG. 14 is a flow chart of the control algorithm to be performed in the fifth embodiment.
FIG. 15 is a block diagram of a parallel linear calculating equipment according to the sixth embodiment of the present invention.
FIG. 16 is a block diagram of a cluster shown in FIG. 15.
FIG. 17 is a block diagram of an element processor shown in FIG. 16.
FIG. 18 is a flow chart of the control algorithm to be performed in the sixth embodiment.
FIG. 19 is a block diagram of an element processor or processor module in a parallel computer which implements the 7th and 8th embodiments.
FIG. 20 is a block diagram of a cluster used in the 7th and 8th embodiments.
FIG. 21 is a block diagram of the parallel computation method according to the 7th embodiment.
FIG. 22 is a block diagram of the parallel computation method according to the 8th embodiment.
FIG. 23 is a diagram for showing the pivoting method according to the 7th and 8th embodiments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The preferred embodiments according to the present invention will be described below with reference to the attached drawings.
FIG. 1 is a block diagram of linear calculating equipment in the first embodiment of the present invention. In FIG. 1, 1 is a memory; 2 is a pivoting section connected to the memory 1; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the memory 1; 6 is an updating section B connected to the memory 1; 7 is a back-substitution section connected to the memory 1; 8 is a main controller G; 101 is a register set composed of k registers; 102 is an arithmetic unit.
Following is a description of the operation of each component of the first embodiment.
The memory 1 is ordinary semiconductor memory and stores reduced coefficient matrices A.sup.(r) with zeroes generated from the first to the r-th column and corresponding known vectors b.sup.(r) and an unknown vector x expressed by (1) for a given system of linear equations (2).
The pivoting section 2 is connected to the memory 1, chooses a pivot in the i-th row following the instruction of the main controller G 8 when the first (i-1) columns are already reduced, and interchanges the i-th column with the chosen pivotal column and the i-th component with the corresponding component of x. The choice of the pivot is based on a method called partial pivoting whereby an element with the largest absolute value in the i-th row is chosen as the pivot. The interchange can be direct data transfer or transposition of column numbers and component indices.
Immediately after the pivoting section 2 determines the transposed pivot (3), the preprocessing section A1 3 calculates (4) for pk+2≦j≦n and (5) following the instruction of the main controller G 8. Each preprocessing section At 4, where t=2, 3, . . . , k, is connected to the memory 1, calculates (6), (7), (8), (9), (10) for pk+t≦j≦n, and, immediately after the pivoting section determines the transposed pivot (11), calculates (12) and (13) for pk+t+1≦j≦n following the instruction of the main controller G 8.
The updating section B 6 is connected to the memory 1, comprises a register set 101 of k registers and an arithmetic unit 102, and calculates (14), (15), (16), (17), (18) for (p+1)k+1≦i, j≦n in the arithmetic unit 102, retaining each value of Regi.sup.(0), . . . , Regi.sup.(k) in the corresponding register of the register set 101 following the instruction of the main controller G 8. (14), (15), (16) are preliminary formulas, and (17) and (18) are formulas that determine updated components.
The back-substitution section 7 is connected to the memory 1 and obtains the value of the unknown vector x by calculating (19) and (20) for 1≦h≦i-1 for i=n, n-1, . . . , 1 in this order of i.
The operation of the main controller G 8 is described below with reference to FIG. 2, which shows a flow chart of its control algorithm.
The first step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the loop of the left side. The t-th step within this loop, where t=1, . . . , k, instructs the pivoting section 2 and the preprocessing section At 4 to execute their operations for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step tests if p=n/k-1. If it is, then the next step escapes the loop. If p<n/k-1, then the next step instructs the updating section B 6 to execute its operation. The next step increments p by 1 and returns to the operations of the pivoting section 2 and the preprocessing section A1 3.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the loop of the right side. Within this loop, the operations are the same except the fact that the condition for escaping the loop is p=[n/k], and the position of the testing for escape is immediately after the operation of An-[n/k]k.
After escaping one of the loops, the final step instructs the back-substitution section 7 to execute its operation and terminates the whole operation to obtain the unknown vector x.
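The flow driven by main controller G (preprocessing A1 through Ak on a block of k pivot rows, the blocked updating B with register-held multipliers, then back-substitution) can be modeled serially as below. The numbered formulas (4)-(20) are image equations not reproduced in this text, so the body assumes the standard blocked Gaussian updates; `regs` plays the role of the register set 101 holding the k effective multipliers of (14)-(16). This is an assumption-level sketch, not the parallel implementation:

```python
# Serial model of main controller G's flow: blocked (k-pivot) Gaussian
# elimination followed by back-substitution.  No pivoting is shown.

def solve_multipivot(a, b, k):
    """Solve A x = b (a and b are modified in place); sketch only."""
    n = len(a)
    for p0 in range(0, n, k):              # one block of up to k pivot rows
        hi = min(p0 + k, n)                # short final block if k does not divide n
        for m in range(p0, hi):            # preprocessing A1..Ak
            piv = a[m][m]
            a[m] = [v / piv for v in a[m]]  # normalize the pivot row
            b[m] /= piv
            for i in range(m + 1, hi):     # eliminate inside the block
                f = a[i][m]
                a[i] = [x - f * y for x, y in zip(a[i], a[m])]
                b[i] -= f * b[m]
        for i in range(hi, n):             # updating section B: one pass per row
            regs = []
            for m in range(p0, hi):        # effective multipliers, cf. (14)-(16)
                regs.append(a[i][m] - sum(r * a[mm][m]
                                          for mm, r in zip(range(p0, m), regs)))
            for j in range(hi, n):         # rank-k update, cf. (17)
                a[i][j] -= sum(r * a[m][j] for m, r in zip(range(p0, hi), regs))
            b[i] -= sum(r * b[m] for m, r in zip(range(p0, hi), regs))  # cf. (18)
            for m in range(p0, hi):
                a[i][m] = 0.0              # these columns are now eliminated
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back-substitution, cf. (19)-(20)
        x[i] = b[i]
        for h in range(i):
            b[h] -= a[h][i] * x[i]
    return x
```

The point of the register set is visible here: the k multipliers for a trailing row are computed once and then applied to every remaining column in a single pass, instead of sweeping the row k separate times.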
FIG. 3 is a block diagram of linear calculating equipment in the second embodiment of the present invention. In FIG. 3, 1 is a memory; 2 is a pivoting section connected to the memory 1; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the memory 1; 9 is an updating section B' connected to the memory 1; 10, 11, 12 are postprocessing sections C1, Ct, Ck-1 respectively, each connected to the memory 1; 13 is a main controller J; 103 is a register set composed of k registers; 104 is an arithmetic unit.
Following is a description of the operation of each component insofar as it differs from the first embodiment.
The updating section B' 9 is connected to the memory 1 and calculates (14), (15), . . . , (18) for 1≦i ≦pk, (p+1)k+1≦i≦n, (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k] and for 1≦i≦[n/k]k, [n/k]k+1≦j≦n otherwise in the arithmetic unit 104, retaining each value of Regi.sup.(0), . . . , Regi.sup.(k) in the corresponding register of the register set 103.
The k-1 postprocessing sections Ct 11, where t=1, 2, . . . , k-1, are connected to the memory 1 and calculate (21), (22), . . . , (29) for pk+t+2≦j≦n.
The operation of the main controller J 13 is described below with reference to FIG. 4, which shows a flow chart of its control algorithm.
The first step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the left side loop. The t-th step within this loop, where t=1, . . . , k, instructs the pivoting section 2 and the preprocessing section At 4 to execute their operations for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step instructs the updating section B' 9 to execute its operation. The following k-1 steps instruct the postprocessing sections C1 10 through Ck-1 12 to execute their operations in this order. The next step tests if p=n/k-1. If it is, then the next step escapes the loop and terminates operation. If p<n/k-1, then the next step increments p by 1 and returns to the operation of the pivoting section 2.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the right side loop. Within this loop, the first n-[n/k]k+1 steps are the same as those in the loop of the left side. After instructing the preprocessing section An-[n/k]k 4 to execute its operation, the step tests if p=[n/k]. If it is not, then the following steps order the operations of the pivoting section 2 and the preprocessing section An-[n/k]k+1 4 through the operations of the pivoting section 2 and the preprocessing section Ak 5 followed by the operation of the updating section B' 9 and then the operations of the postprocessing sections C1 10 through Ck-1 12. Then the step increments p by 1 and returns to the operation of the pivoting section 2. If p=[n/k], then the following steps instruct the updating section B' 9 to execute its operation, instruct the postprocessing sections C1 10 through Cn-[n/k]k 11 to execute their operations, and terminate the whole process to obtain the unknown vector.
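Main controller J replaces back-substitution by the updating section B', which also updates the rows above the current block, and the post-elimination sections C1 through Ck-1, i.e. a blocked Gauss-Jordan reduction in which b holds the solution when the loop ends. A serial, assumption-level model (the image formulas are again replaced by the standard row updates they are taken to represent):

```python
# Serial model of main controller J's flow: preprocess each block of k
# pivot rows, update all other rows (above and below) with B', then
# clear the remaining in-block entries with the post-elimination steps.

def solve_jordan_blocked(a, b, k):
    """Reduce A to the identity; on return b holds the solution x."""
    n = len(a)
    for p0 in range(0, n, k):
        hi = min(p0 + k, n)
        for m in range(p0, hi):              # preprocessing A1..Ak
            piv = a[m][m]
            a[m] = [v / piv for v in a[m]]
            b[m] /= piv
            for i in range(m + 1, hi):
                f = a[i][m]
                a[i] = [x - f * y for x, y in zip(a[i], a[m])]
                b[i] -= f * b[m]
        rows_outside = list(range(0, p0)) + list(range(hi, n))
        for i in rows_outside:               # updating B': above and below
            for m in range(p0, hi):
                f = a[i][m]
                a[i] = [x - f * y for x, y in zip(a[i], a[m])]
                b[i] -= f * b[m]
        for m in range(hi - 1, p0, -1):      # post-elimination C1..C(k-1)
            for i in range(p0, m):
                f = a[i][m]
                a[i] = [x - f * y for x, y in zip(a[i], a[m])]
                b[i] -= f * b[m]
    return b                                 # A is now the identity
```

Compared with the first embodiment's controller G, the extra work of B' and the C sections buys the absence of a separate back-substitution phase.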
FIG. 5 is a block diagram of parallel linear calculating equipment in the third embodiment of the present invention. In FIG. 5, 21 is a network; 22, 23, 24 are nodes α0, αu, αP-1 mutually connected by the network 21; 25 is a main controller Gp connected to each node. FIG. 6 is a block diagram of a node in FIG. 5. In FIG. 6, 1 is a memory; 2 is a pivoting section connected to the memory 1; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the memory 1; 6 is an updating section B connected to the memory 1; 7 is a back-substitution section connected to the memory 1; 26 is a gateway that is a junction with the outside; 27 is a transmitter that transmits data between the memory 1 and the outside through the gateway 26; 101 is a register set composed of k registers; 102 is an arithmetic unit.
Following is a description of the operation of each component of the third embodiment.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the node αu 23, then the pivoting section 2 of the node αu 23 determines the pivot (3), and the preprocessing section A1 3 of the node αu 23 calculates (4) for pk+2≦j≦n and (5), and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B 6 of the node in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing A1.
The preprocessing section At 4 of the node αu 23 calculates (6), (7), (8), (9), (10) for pk+t≦j≦n, and, immediately after the pivoting section 2 of αu 23 determines the pivot (11), calculates (12) and (13) for pk+t+1≦j≦n, and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B 6 of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of parallel operations is below called parallel preprocessing At, where 2≦t≦k.
The updating section B 6 of each node in charge of the i-th row such that (p+1)k+1≦i≦n also calculates (14) through (18) for (p+1)k+1≦j≦n retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B.
The back-substitution sections 7 of the nodes calculate (19) and (20) using necessary data transmitted by the transmitters 27 of other nodes. These operations are called back-substitution.
The operation of the main controller Gp 25 is described below with reference to FIG. 7, which shows a flow chart of its control algorithm at the level of the above definitions.
The first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the nodes α0 22, . . . , αu 23, . . . , αP-1 24 in such a manner that each block of k rows and corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one node at a time in the cyclic order of α0, . . . , αP-1, α0, α1, . . .
The next step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the loop of the left side. The t-th step within this loop orders the execution of the parallel preprocessing At for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step tests if p=n/k-1. If it is, then the next step escapes the loop. If p<n/k-1, then the next step orders the execution of the parallel updating B. The next step increments p by 1 and returns to the execution of the parallel preprocessing A1.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the loop of the right side. Within this loop, the operations are the same except the fact that the condition for escaping the loop is p=[n/k], and the position of the testing for escape is between the parallel preprocessing An-[n/k]k and An-[n/k]k+1.
After escaping one of the loops, the final step orders the execution of back-substitution and terminates the whole operation, whereby the unknown vector x is obtained.
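For reference, the forward-elimination loop that the main controller Gp orders — parallel preprocessing A1 through Ak, then parallel updating B, then back-substitution — can be sketched as a sequential, single-memory emulation. This is a minimal sketch, not the parallel equipment itself: row distribution, transmission, and pivoting are omitted, equations (4) through (20) are paraphrased as ordinary row operations, and the Reg registers appear only implicitly as the per-row multipliers.

```python
def multipivot_gauss_solve(A, b, k):
    """Sequential sketch of k-pivot simultaneous elimination (Gauss).
    Each pass reduces k pivot rows (preprocessing A1..Ak), then
    eliminates all k pivot columns from every remaining row in one
    sweep (the updating B step); back-substitution then yields x.
    No pivoting: leading principal minors must be nonzero."""
    n = len(b)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for start in range(0, n, k):
        end = min(start + k, n)        # last block may be smaller
        for t in range(start, end):    # preprocessing At for pivot row t
            for s in range(start, t):  # fold in earlier pivots of the block
                f = A[t][s]
                for j in range(s, n):
                    A[t][j] -= f * A[s][j]
                b[t] -= f * b[s]
            piv = A[t][t]              # normalize the pivot row
            for j in range(t, n):
                A[t][j] /= piv
            b[t] /= piv
        for i in range(end, n):        # updating B: rows below the block
            for s in range(start, end):
                f = A[i][s]            # multiplier (the Reg value)
                for j in range(s, n):
                    A[i][j] -= f * A[s][j]
                b[i] -= f * b[s]
    x = [0.0] * n                      # back-substitution
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    return x
```

With k=1 the sketch degenerates to ordinary Gaussian elimination; a larger k groups the pivots as in the k-pivot simultaneous elimination described above.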
FIG. 8 is a block diagram of parallel linear calculating equipment in the fourth embodiment of the present invention. In FIG. 8, 31 is a network; 32, 33, 34 are nodes α0, αu, αP-1 mutually connected by the network 31; 35 is a main controller Jp connected to each node. FIG. 9 is a block diagram of a node in FIG. 8. In FIG. 9, 1 is a memory; 2 is a pivoting section connected to the memory 1; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the memory 1; 9 is an updating section B' connected to the memory 1; 10, 11, 12 are postprocessing sections C1, Ct, Ck-1 respectively, each connected to the memory 1; 26 is a gateway that is a junction with the outside; 27 is a transmitter that transmits data between the memory 1 and the outside through the gateway 26; 103 is a register set composed of k registers; 104 is an arithmetic unit.
Following is a description of the operation of each component of the fourth embodiment.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the node αu 33, then the pivoting section 2 of the node αu 33 determines the pivot (3), and the preprocessing section A1 3 of the node αu 33 calculates (4) and (5) for pk+2≦j≦n, and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B' 9 of the node in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing A1.
The preprocessing section At 4 of the node αu 33 calculates (6), (7), (8), (9), (10) for pk+t≦j≦n and, immediately after the pivoting section 2 of αu 33 determines the pivot (11), calculates (12) and (13) for pk+t+1≦j≦n, and the transmitter 27 transmits the results to the memory 1 of every other node through the gateway 26, while the updating section B' 9 of the node in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing At, where 2≦t≦k.
The updating section B' 9 of each node in charge of the i-th row, where 1≦i≦pk or (p+1)k+1≦i≦n if n is a multiple of k or p<[n/k], and 1≦i≦[n/k]k otherwise, also calculates (14) through (18) for (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k], and for [n/k]k+1≦j≦n otherwise, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B'.
The postprocessing section Ct 11 of the above node αu 33 calculates (21), (22), . . . , (29) for pk+t+2≦j≦n for t=1, 2, . . . , k-1 if n is a multiple of k or p<[n/k], and for t=1, 2, . . . , n-[n/k]k otherwise. This series of operations is below called post-elimination C.
The operation of the main controller Jp 35 is described below with reference to FIG. 10, which shows a flow chart of its control algorithm at the level of the above definitions.
The first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the nodes α0 32, . . . , αu 33, . . . , αP-1 34 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one node at a time in the cyclic order of α0, . . . , αP-1, α0, α1, . . .
The next step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the loop of the left side. The t-th step within this loop orders the execution of the parallel preprocessing At for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step orders the execution of the parallel updating B'. The next step orders the execution of the post-elimination C. The next step tests if p=n/k-1. If it is, then the next step escapes the loop. If p<n/k-1, then the next step increments p by 1 and returns to the execution of the parallel preprocessing A1.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the loop of the right side. Within this loop, the operations are the same except that the condition for escaping the loop is p=[n/k], and, if p=[n/k], the steps skip ordering the execution of the parallel preprocessing An-[n/k]k+1 through Ak.
By the above processing, the unknown vector is obtained.
FIG. 11 is a block diagram of parallel linear calculating equipment according to the fifth embodiment of the present invention. In FIG. 11, 41 is a network; 42, 43, 44 are clusters CL0, CLu, CLP-1 mutually connected by the network 41; 45 is a main controller Gpc connected to each cluster. FIG. 12 is a block diagram of a cluster in FIG. 11. In FIG. 12, 1 is a memory; 46 is a C gateway that is a junction with the outside; 47, 48, 49 are element processors PE1, PE2, PEP.sbsb.c, each connected to the memory 1; 50 is a transmitter that transmits data between the memory 1 and the outside through the C gateway 46. FIG. 13 is a block diagram of an element processor in FIG. 12. In FIG. 13, 2 is a pivoting section; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the pivoting section 2; 6 is an updating section B connected to the pivoting section 2; 7 is a back-substitution section connected to the pivoting section 2; 51 is a gateway that is a junction with the outside; 101 is a register set composed of k registers; 102 is an arithmetic unit.
Following is a description of the operation of each component of the fifth embodiment.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the cluster CLu 43, then the pivoting section 2, the updating section 6 and the back-substitution section 7 of each element processor of CLu 43 take charge of part of the k rows and 2k components row by row, while the preprocessing section At 4 of each element processor of CLu 43 takes charge of elements of the (pk+t)th row of A.sup.(r) and the (pk+t)th component of b.sup.(r) one by one.
Specifically, the pivoting section 2 of the element processor PE1 of CLu 43 determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A1 3 of the element processors of CLu 43 simultaneously calculate (4) and (5) for pk+2≦j≦n, with each A1 3 calculating for the elements and components in its charge, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B 6 of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLA1.
The preprocessing sections At 4 of the above cluster CLu 43 simultaneously calculate (6), (7), (8), (9), (10) for pk+t≦j≦n, with each At 4 calculating for the elements and components in its charge, and, immediately after the pivoting section 2 of the element processor PEt of CLu 43 determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1≦j≦n, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B 6 of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLAt, where 2≦t≦k.
The updating section B 6 of each element processor in charge of the i-th row such that (p+1)k+1≦i≦n calculates (14) through (18) for (p+1)k+1≦j≦n, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set 101. These operations are below called parallel updating Bc.
The back-substitution sections 7 of element processors calculate (19) and (20) using necessary data transmitted by the transmitters 50 of other clusters. These operations are called back-substitution.
The operation of the main controller Gpc 45 is described below with reference to FIG. 14, which shows a flow chart of its control algorithm at the level of the above definitions.
The first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters CL0 42, . . . , CLu 43, . . . , CLP-1 44 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one cluster at a time in the cyclic order of CL0, . . . , CLP-1, CL0, CL1, . . .
The next step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the loop of the left side. The t-th step within this loop orders the execution of the parallel preprocessing CLAt for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step tests if p=n/k-1. If it is, then the next step escapes the loop. If p<n/k-1, then the next step orders the execution of the parallel updating Bc. The next step increments p by 1 and returns to the execution of the parallel preprocessing CLA1.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the loop of the right side. Within this loop, the operations are the same except that the condition for escaping the loop is p=[n/k], and the position of the test for escape is between the parallel preprocessing CLAn-[n/k]k and CLAn-[n/k]k+1.
After escaping one of the loops, the final step orders the execution of back-substitution and terminates the whole operation, whereby the unknown vector x is obtained.
FIG. 15 is a block diagram of parallel linear calculating equipment according to the sixth embodiment of the present invention. In FIG. 15, 61 is a network; 62, 63, 64 are clusters CL0, CLu, CLP-1 mutually connected by the network 61; 65 is a main controller Jpc connected to each cluster. FIG. 16 is a block diagram of a cluster in FIG. 15. In FIG. 16, 1 is a memory; 46 is a C gateway that is a junction with the outside; 66, 67, 68 are element processors PE1, PE2, PEP.sbsb.c, each connected to the memory 1; 50 is a transmitter that transmits data between the memory 1 and the outside through the C gateway 46. FIG. 17 is a block diagram of an element processor shown in FIG. 16. In FIG. 17, 2 is a pivoting section; 3, 4, 5 are preprocessing sections A1, At, Ak respectively, each connected to the pivoting section 2; 9 is an updating section B' connected to the pivoting section 2; 10, 11, 12 are postprocessing sections C1, Ct, Ck-1 respectively, each connected to the pivoting section 2; 51 is a gateway that is a junction with the outside; 103 is a register set composed of k registers; 104 is an arithmetic unit.
Following is a description of the operation of each component of the sixth embodiment.
If the (pk+1)th through (p+1)k-th rows of A.sup.(0) and corresponding components of b.sup.(0) and x are assigned to the cluster CLu 63, then the pivoting section 2 and the updating section B' 9 of each element processor of CLu 63 take charge of part of the k rows and 2k components row by row, while the preprocessing section At 4 and postprocessing section Ct 11 of each element processor of CLu 63 take charge of elements of the (pk+t)th row of A.sup.(r) and the (pk+t)th component of b.sup.(r) one by one.
Specifically, the pivoting section 2 of the element processor PE1 of CLu 63 determines the transposed pivot (3) of the (pk+1)th row, and the preprocessing sections A1 3 of the element processors of CLu 63 simultaneously calculate (4) and (5) for pk+2≦j≦n, with each A1 3 calculating for the elements and components in its charge, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B' 9 of the element processor in charge of the i-th row calculates (14) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLA1.
The preprocessing sections At 4 of the above cluster CLu 63 simultaneously calculate (6), (7), (8), (9), (10) for pk+t≦j≦n, with each At 4 calculating for the elements and components in its charge, and, immediately after the pivoting section 2 of the element processor PEt of CLu 63 determines the pivot (11), simultaneously calculate (12) and (13) for pk+t+1≦j≦n, and the transmitter 50 transmits the results to the memory 1 of every other cluster through the C gateway 46, while the updating section B' 9 of the element processor in charge of the i-th row calculates (30) for every i such that (p+1)k+1≦i≦n. This series of operations is below called parallel preprocessing CLAt, where 2≦t≦k.
The updating section B' 9 of each element processor in charge of the i-th row, where 1≦i≦pk or (p+1)k+1≦i≦n if n is a multiple of k or p<[n/k], and 1≦i≦[n/k]k otherwise, also calculates (14) through (18) for (p+1)k+1≦j≦n if n is a multiple of k or p<[n/k], and for [n/k]k+1≦j≦n otherwise, retaining the values of Regi.sup.(0), . . . , Regi.sup.(k) in the register set. These operations are below called parallel updating B'c.
The postprocessing sections Ct 11 of the element processors of the above CLu 63 simultaneously calculate (21), (22), . . . , (29) for j such that pk+t+2≦j≦n for t=1, 2, . . . , k-1 if n is a multiple of k or p<[n/k], and for t=1, 2, . . . , n-[n/k]k otherwise, with each Ct 11 calculating for the elements and components in its charge. This series of operations is below called post-elimination Cc.
The operation of the main controller Jpc 65 is described below with reference to FIG. 18, which shows a flow chart of its control algorithm at the level of the above definitions.
The first step distributes and assigns the rows of the coefficient matrix A.sup.(0) and the components of b.sup.(0) and x to the clusters CL0 62, . . . , CLu 63, . . . , CLP-1 64 in such a manner that each block of k rows and the corresponding 2k components (n-[n/k]k rows and 2(n-[n/k]k) components in the final distribution) is transmitted to the memory 1 of one cluster at a time in the cyclic order of CL0, . . . , CLP-1, CL0, CL1, . . .
The next step tests if n is a multiple of k. If it is, then the next step initializes p as p=0 and enters the loop of the left side. The t-th step within this loop orders the execution of the parallel preprocessing CLAt for the (pk+t)th row of the current reduced matrix A.sup.(pk+t-1). The next step orders the execution of the parallel updating B'c. The next step orders the execution of the post-elimination Cc. The next step tests if p=n/k-1. If it is, then the next step escapes the loop. If p<n/k-1, then the next step increments p by 1 and returns to the execution of the parallel preprocessing CLA1.
If n is not a multiple of k, then the next step initializes p as p=0 and enters the loop of the right side. Within this loop, the operations are the same except that the condition for escaping the loop is p=[n/k], and, if p=[n/k], the steps skip ordering the execution of the parallel preprocessing CLAn-[n/k]k+1 through CLAk.
By the above processing, the unknown vector is obtained.
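For reference, the loop ordered by the main controller Jpc — parallel preprocessing CLA1 through CLAk, parallel updating B'c, and post-elimination Cc — can likewise be emulated sequentially. A minimal sketch under the same caveats as before (no pivoting or data distribution; the equation numbers are paraphrased as ordinary row operations):

```python
def multipivot_gauss_jordan_solve(A, b, k):
    """Sequential sketch of k-pivot Gauss-Jordan elimination.
    Each pass pre-eliminates within the pivot block, then (updating B')
    clears the block's columns from every other row, above and below,
    and (post-elimination C) clears the remaining above-diagonal
    entries inside the block. A is driven to the identity and b
    becomes x, so no back-substitution is needed. No pivoting."""
    n = len(b)
    A = [row[:] for row in A]                  # work on copies
    b = b[:]
    for start in range(0, n, k):
        end = min(start + k, n)                # last block may be smaller
        for t in range(start, end):            # pre-elimination (A1..Ak)
            for s in range(start, t):
                f = A[t][s]
                for j in range(s, n):
                    A[t][j] -= f * A[s][j]
                b[t] -= f * b[s]
            piv = A[t][t]                      # normalize the pivot row
            for j in range(t, n):
                A[t][j] /= piv
            b[t] /= piv
        others = list(range(0, start)) + list(range(end, n))
        for i in others:                       # updating B': all other rows
            for s in range(start, end):
                f = A[i][s]
                for j in range(s, n):
                    A[i][j] -= f * A[s][j]
                b[i] -= f * b[s]
        for t in range(start, end - 1):        # post-elimination C
            for s in range(t + 1, end):
                f = A[t][s]
                for j in range(s, n):
                    A[t][j] -= f * A[s][j]
                b[t] -= f * b[s]
    return b                                   # holds the unknown vector x
```

Because the updating step eliminates the block columns from the rows above as well as below, and the post-elimination clears the block's interior, the reduced matrix converges to the identity and the right-hand side directly becomes the solution.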
FIG. 19 shows a block diagram of an element processor or processor module of a parallel computer that implements the seventh embodiment of the present invention. In FIG. 19, 201 is a gateway; 202 is a cache memory; 203 is a central processing unit; 204 is a local memory; 205 is a shared bus. FIG. 20 shows a block diagram of a cluster composed of element processors 212, 213, . . . , 214, a C gateway 210, and a shared memory 211. A network of the parallel computer connects the clusters to each other, so that data can be transmitted between any two clusters. Let the number of element processors in each cluster be Pc and the total number of clusters be C. Then the total number P of element processors in the parallel computer is C·Pc. Furthermore, let the clusters be denoted by CL1, CL2, . . . , CLC, and let the element processors of CLu be denoted by PRu 1, . . . , PRu P.sbsb.c.
FIG. 21 is a block diagram of the parallel linear computation method according to the seventh embodiment of the present invention, implemented by a parallel computer structured as above. In FIG. 21, 220 is a data distribution means; 221 is a pivot choosing means; 222 is an elementary pre-elimination means; 223 is a multi-pivot elimination means; 224 is an elimination testing means; 225 is a remainder elimination means; 226 is an elementary back-substitution means; 227 is an elementary back-transmission means; 228 is an elementary back-calculation means; 229 is a back-processing testing means.
The operation of the parallel linear computation method of the seventh embodiment is described below with reference to FIG. 21.
In the first step, the data distribution means 220 distributes each i-th row of A.sup.(0) and i-th component of b.sup.(0) and x to the cluster CLu such that u=[i/Pc ]-[[i/Pc ]/C]C+1. Then the data distribution means 220 assigns each i-th row of A.sup.(0) and i-th component of b.sup.(0) distributed to the cluster CLu to the element processor PRu v such that v=i-[i/Pc ]Pc +1. Then the data distribution means initializes k as k=0.
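Taking [·] as the floor function, the distribution formulas for u and v can be read off directly. A literal sketch (the function name is illustrative; note that with 1-based rows the formula as written hands row i=Pc to the second cluster, so an implementation indexing rows from 0 would apply the brackets to i-1 instead):

```python
def owner(i, Pc, C):
    """Cluster u and element processor v that receive the i-th row,
    following u = [i/Pc] - [[i/Pc]/C]C + 1 and v = i - [i/Pc]Pc + 1
    literally, with [.] read as the floor function."""
    fi = i // Pc                     # [i/Pc]
    u = fi - (fi // C) * C + 1       # equivalently (fi mod C) + 1
    v = i - fi * Pc + 1              # equivalently (i mod Pc) + 1
    return u, v
```

The subtraction of [[i/Pc]/C]C is simply a modulo reduction, so the rows are dealt out to the C clusters cyclically in groups of Pc, matching the cyclic block distribution of the earlier embodiments.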
In the second step, the elimination testing means 224 tests if the multi-pivot elimination means repeated its operation [n/Pc ] times, that is, whether k=[n/Pc]. If it did, then the process jumps to the fifth step. If it did not, the process goes to the third step.
In the third step, the elementary pre-elimination means 222 executes preliminary processing for the i-th rows of reduced matrices and the corresponding known vectors such that i=kPc +l and l=1, . . . , Pc in this order. The processing involves a pivot choosing process for each l.
Methods of choosing a pivot are in general classified into partial pivoting and full pivoting. Partial pivoting chooses as a pivot in each reduced matrix A.sup.(r) an element with the largest absolute value in the relevant column or row. Full pivoting chooses as a pivot in each reduced matrix A.sup.(r) an element with the largest absolute value in the submatrix of the columns or rows which have not hitherto been pivotal. Moreover, if precision is not so important, then pivot choosing is necessary only when the relevant diagonal element is 0, and in that case any nonzero element can be chosen as a pivot in partial pivoting. The pivoting methods in the present invention employ partial pivoting: the present first method chooses the first nonzero element in the relevant row, and the present second method chooses an element with the greatest absolute value in the relevant row.
FIG. 23 shows the process of the pivot choosing means 221. In FIG. 23, 240 is a search means; 241 is a column number broadcasting means; 242 is an element interchange means; 243 is a component interchange means.
In the present first method of pivot choosing, the element processor in charge of each i-th row, by the search means 240, tests if a.sub.i,i.sup.(i-1) =0. If it is not, then the process terminates. If it is, then the element processor, by the search means 240, searches for a nonzero element in the i-th row of A.sup.(i-1) from a.sub.i,i+1.sup.(i-1) to a.sub.i,n.sup.(i-1) in this order. If a.sub.i,h.sup.(i-1) is the first such element, then the element processor, by the column number broadcasting means 241, notifies each element processor of the column number h by a broadcast. Specifically, either the element processor transmits h to a specified word of the shared memory 211 of each cluster and each element processor refers to the word, or the element processor transmits h to a dedicated bus line and each element processor fetches h into its local memory 204. Then each element processor, by the element interchange means 242, simultaneously interchanges the element with the column number i and the element with the column number h in the row in its charge. Then the two element processors in charge of the i-th component and the h-th component of the unknown vector x respectively interchange these components by the component interchange means 243. This completes the pivot choosing process.
In the present second method of pivot choosing, the element processor in charge of each i-th row, by the search means 240, sets Max=|a.sub.i,i.sup.(i-1) | and Col=i. The element processor then compares Max with |a.sub.i,j.sup.(i-1) | for j=i+1, . . . , n in this order and updates Max and Col as Max=|a.sub.i,j.sup.(i-1) | and Col=j only if |a.sub.i,j.sup.(i-1) | is greater than Max. Then the element processor notifies each element processor of Col by a broadcast. The remaining steps are the same as above.
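Both pivot choosing methods search the relevant row and then interchange two columns, together with the corresponding components of x. A minimal shared-memory sketch with 0-based indices; the broadcast of the column number is implicit, and the function name and the x_order bookkeeping list are illustrative, not from the patent:

```python
def choose_pivot(A, x_order, i, method="first_nonzero"):
    """Partial pivoting by row search with column interchange.
    'first_nonzero' mirrors the present first method (take the first
    nonzero element at or right of the diagonal of row i); 'max_abs'
    mirrors the present second method (take the element of largest
    absolute value). x_order tracks the component positions of x."""
    n = len(A)
    if method == "first_nonzero":
        if A[i][i] != 0:
            return i                           # diagonal already usable
        h = next((j for j in range(i + 1, n) if A[i][j] != 0), None)
        if h is None:
            raise ValueError("row is entirely zero: matrix is singular")
    else:                                      # 'max_abs'
        h = max(range(i, n), key=lambda j: abs(A[i][j]))
    if h != i:
        for row in A:                          # element interchange
            row[i], row[h] = row[h], row[i]
        x_order[i], x_order[h] = x_order[h], x_order[i]  # component interchange
    return h
```

Since Python's max keeps the first of equal candidates, ties resolve to the earliest column, matching the strictly-greater-than-Max update rule described above.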
In the process of the elementary pre-elimination means 222, if l=1, then the element processor PRu 1 in charge of the (kPc +1)th row in the cluster CLu, where u=k-[k/C]+1, calculates (32) and (33), and transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kPc +2≦i≦n belongs. If 2≦l≦Pc, then each element processor in charge of the i-th row such that kPc +l≦i≦n calculates (34), and the element processor PRu l calculates (35) and (36). Then after the pivot choosing means determines the pivot (37), the element processor PRu l calculates (38) and (39) and transmits the results to the shared memory of every other cluster to which the element processor in charge of an i-th row such that kPc +l+1≦i≦n belongs.
In the fourth step, by the multi-pivot elimination means 223, each element processor in charge of the i-th row such that (k+1)Pc +1≦i≦n calculates (40) and (41).
In the fifth step, by the remainder elimination means 225, each element processor in charge of the i-th row such that [n/Pc ]Pc +1≦i≦n executes the same operation as in the elementary pre-elimination means 222 for l=1, . . . , n-[n/Pc ]Pc. Then this step initializes i as i=n, and goes to the sixth step.
In the sixth step, by the elementary back-substitution means 226, the element processor in charge of the i-th row calculates (42).
In the seventh step, the back-processing testing means 229 tests if i=1. If it is, then the solution of the system of linear equations (2) has been obtained by the above elementary back-substitution as (44), and the process terminates. If it is not, then the process proceeds to the eighth step.
In the eighth step, the elementary back-transmission means 227 transmits xi to the shared memory of every cluster to which an element processor in charge of an h-th row such that 1≦h≦i-1 belongs.
In the ninth step, by the elementary back-calculation means 228, each element processor in charge of the h-th row such that 1≦h≦i-1 calculates (43). Then this step decrements i by 1 and goes to the sixth step.
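The sixth through ninth steps can be sketched sequentially as follows, assuming forward elimination has already left the coefficient matrix upper triangular with unit diagonal; the transmission of xi is implicit in a single-memory emulation, and the function name is illustrative:

```python
def elementary_back_substitution(A, b):
    """Sixth through ninth steps, sequentially: for i = n..1 the
    processor in charge of row i reads off x_i = b_i (elementary
    back-substitution), the value is 'transmitted', and every row
    h < i subtracts a_{h,i} * x_i from its right-hand side
    (elementary back-calculation). A must be upper triangular with
    unit diagonal; 0-based indices."""
    n = len(b)
    b = b[:]                         # work on a copy
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = b[i]                  # sixth step
        for h in range(i):           # ninth step: update earlier rows
            b[h] -= A[h][i] * x[i]
    return x
```

Unlike the textbook inner-product form of back-substitution, this column-oriented form matches the description above: each x_i, once known, is pushed immediately into every earlier right-hand side, so the work for different rows h can proceed in parallel.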
FIG. 22 is a block diagram of the parallel linear calculating method in the eighth embodiment of the present invention, implemented by a parallel computer structured as in the seventh embodiment. In FIG. 22, 220 is a data distribution means; 221 is a pivot choosing means; 231 is an elimination testing means; 232 is an elementary pre-elimination means; 233 is a multi-pivot elimination means; 234 is an elementary post-elimination means; 235 is a post-elimination processing means; 236 is a remainder elimination means.
The operation of the parallel linear computation method of the eighth embodiment is described below with reference to FIG. 22.
In the first step, the data distribution means 220 distributes each i-th row of A.sup.(0) and i-th component of b.sup.(0) and x to the cluster CLu such that u=[i/Pc ]-[[i/Pc ]/C]C+1. Then the data distribution means 220 assigns each i-th row of A.sup.(0) and i-th component of b.sup.(0) distributed to the cluster CLu to the element processor PRu v such that v=i-[i/Pc ]Pc +1. Then the data distribution means initializes k as k=0.
In the second step, the elimination testing means 231 tests if the multi-pivot elimination means repeated its operation [n/Pc ] times, that is, whether k=[n/Pc]. If it did, then the process jumps to the sixth step. If it did not, the process goes to the third step.
In the third step, the elementary pre-elimination means 232 executes preliminary processing for the i-th rows of the reduced matrices and the corresponding known vectors such that i=kPc +l and l=1, . . . , Pc in this order. The processing involves a pivot choosing process for each l, which is the same as in the seventh embodiment.
In the pre-elimination means 232, if l=1, then after the pivot choosing means 221 determines the pivot (31), the element processor PRu 1 in charge of the (kPc +1)th row in the cluster CLu, where u=k-[k/C]+1, calculates (32) and (33), and transmits the results to the shared memory of every other cluster. If 2≦l≦Pc, then each element processor in charge of the i-th row such that kPc +l≦i≦n calculates (34), and the element processor PRu l calculates (35) and (36). Then after the pivot choosing means determines the pivot (37), the element processor PRu l calculates (38) and (39) and transmits the results to the shared memory of every other cluster.
In the fourth step, by the multi-pivot elimination means 233, each element processor in charge of the i-th row such that 1≦i≦kPc or (k+1)Pc +1≦i≦n calculates (40) and (41).
In the fifth step, the post-elimination processing means 235 eliminates unnecessary elements generated by the multi-pivot elimination means 233. The core of the post-elimination processing means 235 is the elementary post-elimination means 234, which calculates (45) and (46) in the element processor in charge of the i-th row.
By the post-elimination processing means 235, the element processor in charge of the (kPc +w)th row calculates (45) and (46), where i=kPc +w and l=q-w+1, from w=1 to w=q for q=1, 2, . . . , Pc -1.
In the sixth step, by the remainder elimination means 236, each element processor in charge of the i-th row such that [n/Pc ]Pc +1≦i≦n executes the operation of the elementary pre-elimination means 232. Then the remainder elimination means executes the operation of the multi-pivot elimination means 233 followed by that of the post-elimination processing means 235. The operation of the elementary pre-elimination means 232 should be executed for l=1, . . . , n-[n/Pc ]Pc. The operation of the multi-pivot elimination means 233 should be executed by calculating (40) and (41) for 1≦i≦[n/Pc ]Pc and k=[n/Pc ]. The operation of the post-elimination processing means 235 should be executed from q=1 to q=n-[n/Pc ]Pc for k=[n/Pc ].
The unknown vector x is obtained as the vector b.sup.(r) after the above operation.
If the preprocessing sections At and the postprocessing sections Ct have their own register sets, as the updating sections B and B' do in the first through sixth embodiments, and their operations are executed while retaining the values of variables and divisors, then the number of load-and-store operations for the memory is reduced, and a further improvement in computation speed can be achieved.
In the seventh and eighth embodiments, two components of the unknown vector should be interchanged whenever the corresponding columns are interchanged by the pivoting means. However, it is not necessary to actually interchange the components: by simply recording the correct position of each component after every column interchange, the correct solution is obtained by taking these positions into account in the final substitution into the components of the unknown vector.
Thus the present invention provides high-speed linear calculating equipment and parallel linear calculating equipment for solving systems of linear equations by means of Gauss's elimination method and the Gauss-Jordan method based on multi-pivot simultaneous elimination and scalar operations. The speed-up is achieved by reducing the number of load-and-store operations for the memory, through retaining values of variables in register sets during updating processing, and by reducing the number of iterations through multi-pivot simultaneous elimination. Moreover, the present invention is easily implemented on scalar computers. In fact, an experiment performed on a scalar computer by means of software showed that Gauss's method and the Gauss-Jordan method based on 8-pivot simultaneous elimination were 2.5 times faster than the original Gauss and Gauss-Jordan elimination methods.
As for the parallel calculating equipment of the third through sixth embodiments and the methods of the seventh and eighth embodiments, each memory is assigned blocks of k rows of the coefficient matrix A.sup.(0) for the k-pivot simultaneous elimination method, so that the effects of parallel computation are enhanced. In the fifth and sixth embodiments, where element processors are clustered, the preprocessing, or both the preprocessing and the postprocessing, is also made parallel, and the computation is more effective. In these embodiments, a theoretical estimation has shown that if the number of components of the unknown vector x is sufficiently large for a given number of processors, then the effects of parallel computation are sufficiently powerful. Therefore, parallel linear calculating equipment effectively employing the Gauss method and the Gauss-Jordan method based on multi-pivot simultaneous elimination has been obtained.
Furthermore, the present invention makes possible effective high-speed parallel computation for solving systems of linear equations on a parallel computer with a large number of element processors by means of the methods of the seventh and eighth embodiments.
Although the present invention has been fully described in connection with the preferred embodiments thereof with reference to the accompanying drawings, it is to be noted that various changes and modifications are apparent to those skilled in the art. Such changes and modifications are to be understood as included within the scope of the present invention as defined by the appended claims unless they depart therefrom.

Claims (13)

What is claimed is:
1. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(ai,j) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x1, x2, . . . , xn)Trans is an unknown vector and b=(b1, b2, . . . , bn)Trans is a known vector, comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a.sub.pk+1,j.sup.(pk+1) =a.sub.pk+1,j.sup.(pk) /a.sub.pk+1,pk+1.sup.(pk) Eq. 1
b.sub.pk+1.sup.(pk+1) =b.sub.pk+1.sup.(pk) /a.sub.pk+1,pk+1.sup.(pk) Eq. 2
after said pivot choosing section chooses
a.sub.pk+1,pk+1.sup.(pk)                                   Eq. 3
wherein ai,j.sup.(r) denotes (i,j) element of a coefficient matrix obtained when first to r-th columns are eliminated from A=(ai,j),
bi.sup.(r) denotes i-th component of a known vector obtained when first to r-th columns are eliminated from A=(ai,j),
k is an integer satisfying 1≦k≦n-1,
wherein if n-{n/k}k=0, {n/k} denotes a maximum integer not exceeding n/k, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU7## wherein j is an integer satisfying pk+t≦j≦n and, after said ##EQU8## pivot choosing section chooses
a.sub.pk+t,pk+t.sup.(pk+t-1)                               Eq. 9
calculating equations
a.sub.pk+t,j.sup.(pk+t) =a.sub.pk+t,j.sup.(pk+t-1) /a.sub.pk+t,pk+t.sup.(pk+t-1)                             Eq. 10
b.sub.pk+t.sup.(pk+t) =b.sub.pk+t.sup.(pk+t-1) /a.sub.pk+t,pk+t.sup.(pk+t-1)                             Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU9## for i and j satisfying (p+1)k+1≦i, j≦n while holding variables Reg in said register set;
a main controller G which,
if n-{n/k}k=0,
instructs said pivot choosing section, said preprocessing sections A1 to Ak and said updating section B to repeat their operations for every p from zero to {n/k}-2 while incrementing p by one and, further, to execute their operations after incrementing p from p={n/k}-2 to p={n/k}-1, and
if n-{n/k}k>0,
instructs said pivot choosing section, said preprocessing sections A1 to Ak and said updating section B to repeat their operations for every p from zero to {n/k}-1 while incrementing p by one, and instructs said pivot choosing section and said preprocessing sections A1 to An-{n/k}k to execute their operations after incrementing p by one from p={n/k}-1;
a backward substitution section connected to said memory for calculating the following equations, repeatedly
x_i = b_i^{(n)}    Eq. 17
b_i^{(r+1)} = b_i^{(r)} - a_{i,j}^{(i)} x_j    Eq. 18
while decrementing i from i=n to i=1, thereby obtaining said unknown vector.
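The flow recited in claim 1 can be condensed into a short sequential sketch. This is an illustrative reconstruction only: the function name `solve_blocked`, the list-of-lists matrix, and the fusion of the pivot choosing section, the preprocessing sections A1 to Ak and the updating section B into nested loops are choices of this sketch, not the claimed hardware; Eq. 4 to Eq. 8 and Eq. 12 to Eq. 16 appear in the claim only as images (##EQU7##, ##EQU9##), so their bodies here are inferred from the surrounding claim language.

```python
def solve_blocked(A, b, k):
    """Sequential sketch of the claim-1 machine: k pivot rows are prepared
    per block (preprocessing sections A1..Ak), the trailing rows are then
    updated against all k pivots (updating section B), and the unknowns
    are recovered by backward substitution (Eqs. 17-18)."""
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    n = len(b)
    perm = list(range(n))              # perm[c] = original index of column c
    for p0 in range(0, n, k):          # p0 = pk, start of the p-th block
        pe = min(p0 + k, n)
        for r in range(p0, pe):
            # preprocessing section A_t: fold the earlier pivots of this
            # block into row r (the role of Eqs. 4-8)
            for s in range(p0, r):
                m = A[r][s]
                for j in range(s, n):
                    A[r][j] -= m * A[s][j]
                b[r] -= m * b[s]
            # pivot choice: search row r and interchange columns (Eq. 3/9)
            q = max(range(r, n), key=lambda c: abs(A[r][c]))
            for row in A:
                row[r], row[q] = row[q], row[r]
            perm[r], perm[q] = perm[q], perm[r]
            # normalize the pivot row (Eqs. 1-2 and 10-11)
            piv = A[r][r]
            for j in range(r, n):
                A[r][j] /= piv
            b[r] /= piv
        # updating section B: eliminate the block's k pivots from every
        # trailing row, the multiplier m playing the register variable Reg
        for i in range(pe, n):
            for s in range(p0, pe):
                m = A[i][s]
                for j in range(s + 1, n):
                    A[i][j] -= m * A[s][j]
                b[i] -= m * b[s]
    # backward substitution (Eqs. 17-18), then undo the column interchanges
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    out = [0.0] * n
    for c in range(n):
        out[perm[c]] = x[c]
    return out
```

Because pivots are chosen by searching a row and interchanging columns, the computed components come back in permuted variable order; `perm` records the interchanges so the unknown vector can be restored at the end.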
2. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(a_{i,j}) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x_1, x_2, . . . , x_n)^Trans is an unknown vector and b=(b_1, b_2, . . . , b_n)^Trans is a known vector, comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a_{pk+1,j}^{(pk+1)} = a_{pk+1,j}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 1
b_{pk+1}^{(pk+1)} = b_{pk+1}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 2
after said pivot choosing section chooses
a_{pk+1,pk+1}^{(pk)}    Eq. 3
wherein a_{i,j}^{(r)} denotes the (i,j) element of the coefficient matrix obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
b_i^{(r)} denotes the i-th component of the known vector obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
k is an integer satisfying 1≦k≦n-1,
wherein {n/k} denotes the maximum integer not exceeding n/k; if n-{n/k}k=0, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU10## ##EQU11## wherein j is an integer satisfying pk+t≦j≦n and, after said pivot choosing section chooses
a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 9
calculating the equations
a_{pk+t,j}^{(pk+t)} = a_{pk+t,j}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 10
b_{pk+t}^{(pk+t)} = b_{pk+t}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU12## using (i, j) elements of an i-th row of said coefficient matrix for i satisfying 1≦i≦pk or (p+1)k+1≦i≦n and j satisfying (p+1)k+1≦j≦n while holding variables Reg in said register set;
(k-1) postprocessors C1 to Ck-1 each connected to said memory for calculating
Reg^{(0)} = a_{pk+1,pk+t+1}^{(pk+t)}    Eq. 17
Reg^{(1)} = a_{pk+2,pk+t+1}^{(pk+t)}    Eq. 18
. . .
Reg^{(t-1)} = a_{pk+t,pk+t+1}^{(pk+t)}    Eq. 19
a_{pk+1,j}^{(pk+t+1)} = a_{pk+1,j}^{(pk+t)} - Reg^{(0)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 20
a_{pk+2,j}^{(pk+t+1)} = a_{pk+2,j}^{(pk+t)} - Reg^{(1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 21
. . .
a_{pk+t,j}^{(pk+t+1)} = a_{pk+t,j}^{(pk+t)} - Reg^{(t-1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 22
b_{pk+1}^{(pk+t+1)} = b_{pk+1}^{(pk+t)} - Reg^{(0)} b_{pk+t+1}^{(pk+t+1)}    Eq. 23
b_{pk+2}^{(pk+t+1)} = b_{pk+2}^{(pk+t)} - Reg^{(1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 24
. . .
b_{pk+t}^{(pk+t+1)} = b_{pk+t}^{(pk+t)} - Reg^{(t-1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 25
using elements of a (pk+1)-th row to a (pk+t)-th row and (pk+1)-th to (pk+t)-th components of said known vector for j satisfying pk+t+2≦j≦n;
a main controller J which obtains said unknown vector by,
if n-{n/k}k=0, instructing said pivot choosing section, said preprocessing sections A1 to Ak, said updating section B and said postprocessors C1 to Ck-1 to repeat their linking operations from p=0 to p={n/k}-1, and,
if n-{n/k}k>0, instructing said pivot choosing section, said preprocessing sections A1 to Ak, said updating section B and said postprocessors C1 to Ck-1 to repeat their linking operations from p=0 to p={n/k}-1 and, thereafter, after setting p={n/k}, instructing said pivot choosing section, said preprocessing sections A1 to An-{n/k}k, said updating section B and said postprocessors C1 to Cn-{n/k}k to execute the linking operations of said pivot choosing section and said preprocessing sections A1 to An-{n/k}k, a processing wherein the number of pivots in said updating section B is set to n-{n/k}k, and the linking operations of said postprocessors C1 to Cn-{n/k}k.
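Claim 2 recites the same block elimination, but the updating section also sweeps the rows above the current block and the postprocessors C1 to Ck-1 clear the block's own upper triangle (Eqs. 17-25), so the matrix is driven to the identity and no backward substitution remains. A sequential sketch under the same assumptions as before (the function and variable names are the sketch's own, and the image-only equations are inferred from the claim language):

```python
def solve_post_eliminating(A, b, k):
    """Claim-2 style elimination: after the k pivot rows of a block are
    prepared, ALL other rows (above and below the block) are updated, and
    the postprocessors clear the block's upper triangle, so b ends up
    holding the solution directly."""
    A = [row[:] for row in A]
    b = b[:]
    n = len(b)
    perm = list(range(n))              # records column interchanges
    for p0 in range(0, n, k):
        pe = min(p0 + k, n)
        for r in range(p0, pe):
            for s in range(p0, r):     # preprocessing sections A_t
                m = A[r][s]
                for j in range(s, n):
                    A[r][j] -= m * A[s][j]
                b[r] -= m * b[s]
            q = max(range(r, n), key=lambda c: abs(A[r][c]))
            for row in A:              # column-interchange pivoting
                row[r], row[q] = row[q], row[r]
            perm[r], perm[q] = perm[q], perm[r]
            piv = A[r][r]              # normalization (Eqs. 1-2, 10-11)
            for j in range(r, n):
                A[r][j] /= piv
            b[r] /= piv
        # updating section B: rows 1..pk and (p+1)k+1..n in claim terms
        for i in [*range(0, p0), *range(pe, n)]:
            for s in range(p0, pe):
                m = A[i][s]            # register variable Reg
                for j in range(s + 1, n):
                    A[i][j] -= m * A[s][j]
                b[i] -= m * b[s]
        # postprocessors C1..Ck-1: clear the block's upper triangle
        # (Eqs. 17-25, Reg^(0)..Reg^(t-1) held as m)
        for s in range(p0 + 1, pe):
            for i in range(p0, s):
                m = A[i][s]
                A[i][s] = 0.0
                for j in range(s + 1, n):
                    A[i][j] -= m * A[s][j]
                b[i] -= m * b[s]
    out = [0.0] * n                    # undo the column interchanges
    for c in range(n):
        out[perm[c]] = b[c]
    return out
```

Skipping backward substitution trades the extra updating work per block for a result that can be read off b as soon as the last block is processed, which is the point of the postprocessors in this claim.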
3. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(a_{i,j}) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x_1, x_2, . . . , x_n)^Trans is an unknown vector and b=(b_1, b_2, . . . , b_n)^Trans is a known vector, comprising:
a network comprising P nodes α0 to αP-1 connected with each other, each node comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a_{pk+1,j}^{(pk+1)} = a_{pk+1,j}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 1
b_{pk+1}^{(pk+1)} = b_{pk+1}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 2
after said pivot choosing section chooses
a_{pk+1,pk+1}^{(pk)}    Eq. 3
wherein a_{i,j}^{(r)} denotes the (i,j) element of the coefficient matrix obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
b_i^{(r)} denotes the i-th component of the known vector obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
k is an integer satisfying 1≦k≦n-1,
wherein {n/k} denotes the maximum integer not exceeding n/k; if n-{n/k}k=0, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU13## wherein j is an integer satisfying pk+t≦j≦n and, after said pivot choosing section chooses
a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 9
calculating the equations
a_{pk+t,j}^{(pk+t)} = a_{pk+t,j}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 10
b_{pk+t}^{(pk+t)} = b_{pk+t}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU14##
a gateway connected to said memory and provided as a junction for an external apparatus; and
a transmitter connected to said memory for transmitting data between said memory and said external apparatus through said gateway; and
a main controller GP for obtaining said unknown vector by executing control of (a) allocating every k rows of said coefficient matrix and every k components of each of said unknown vector and said known vector each of which has a component number equal to a row number of each of every k rows allocated to said memories of said P nodes α0 to αP-1 in an order of α0 to αP-1 cyclically until all elements of said coefficient matrix and all components of each of said unknown vector and said known vector are completely allocated to said memories of said P nodes α0 to αP-1, (b) if n-{n/k}k=0, instructing said P nodes α0 to αP-1 to repeat parallel preprocess PA1 to parallel preprocess PAk and parallel updating process PB from p=0 to p={n/k}-2 and, further, to execute parallel preprocess PA1 to parallel preprocess PAk for p={n/k}-1, and if n-{n/k}k>0, instructing said P nodes α0 to αP-1 to repeat parallel preprocess PA1 to parallel preprocess PAk and parallel updating process PB from p=0 to p={n/k}-1 and, further, to execute parallel preprocess PA1 to PAn-{n/k}k for p={n/k}, and (c) instructing each node to obtain values of said unknown vector using backward substitution and transmitter of each node after completion of steps of (a) and (b);
said parallel preprocess PA1 including calculating Eq. 1 and Eq. 2 (pk+2≦j≦n) for elements of a (pk+1)-th row of said coefficient matrix and a (pk+1)-th component of said known vector at node αu (0≦u≦P-1) to which (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated, after said pivot choosing section of said node chooses a pivot represented by Eq. 3, and transmitting results of calculation to respective memories of said nodes other than αu by said transmitter of αu,
calculating Eq. 15 at each updating section B of said nodes other than αu for respective elements of allocated rows of said coefficient matrix in parallel to calculation of Eq. 1 and Eq. 2, and
calculating Eq. 15 at said updating section B of said node αu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix are allocated to said node αu ;
said parallel preprocess PAt (2≦t≦k) including
calculating Eq. 4, Eq. 5, Eq. 6, Eq. 7 and Eq. 8 for each element of (pk+t)-th row of said coefficient matrix and (pk+t)-th component of said known vector (pk+t≦j≦n) at said preprocessing section At (2≦t≦k) of said node αu,
calculating Eq. 10 and Eq. 11, after choice of a pivot represented by Eq. 9, at said pivot choosing section for each element of (pk+t)-th row of said coefficient matrix and (pk+t)-th component of said known vector, transmitting results of calculation to respective memories of nodes other than αu, and
calculating ##EQU15## for allocated rows of said coefficient matrix at respective updating sections B of nodes other than αu and calculating Eq. 17 at said updating section B of said node αu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated to said node αu ; and
said parallel updating process PB including calculating Eq. 15 and Eq. 16 for ((p+1)k+1)-th row to n-th row at respective updating sections of all nodes to which ((p+1)k+1)-th row to n-th row have been allocated, respectively, while holding variables Reg in said register set.
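Step (a) of the main controller GP distributes the matrix cyclically: the p-th block of k rows goes to node α(p mod P), so consecutive blocks land on different nodes and every node keeps rows to update at every elimination step. A small sketch of that allocation (the function name and 0-based row indexing are assumptions of this sketch, not the patent's):

```python
def allocate_block_rows(n, k, P):
    """Cyclic k-row block allocation of claim 3, step (a): rows pk+1 to
    (p+1)k (1-based) go to node alpha_{p mod P}.  Returns owner[i] = index
    of the node holding 0-based row i."""
    owner = [0] * n
    for p, start in enumerate(range(0, n, k)):
        for i in range(start, min(start + k, n)):
            owner[i] = p % P
    return owner
```

At elimination step p, the node owning rows pk+1 to (p+1)k runs the pivot choice and parallel preprocesses PA1 to PAk, transmits the normalized pivot rows to the other nodes through its gateway, and all nodes then apply the updating process PB to their own rows in parallel.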
4. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(a_{i,j}) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x_1, x_2, . . . , x_n)^Trans is an unknown vector and b=(b_1, b_2, . . . , b_n)^Trans is a known vector, comprising:
a network comprising P nodes α0 to αP-1 connected with each other, each node comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a_{pk+1,j}^{(pk+1)} = a_{pk+1,j}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 1
b_{pk+1}^{(pk+1)} = b_{pk+1}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 2
after said pivot choosing section chooses
a_{pk+1,pk+1}^{(pk)}    Eq. 3
wherein a_{i,j}^{(r)} denotes the (i,j) element of the coefficient matrix obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
b_i^{(r)} denotes the i-th component of the known vector obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
k is an integer satisfying 1≦k≦n-1,
wherein {n/k} denotes the maximum integer not exceeding n/k; if n-{n/k}k=0, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU16## wherein j is an integer satisfying pk+t≦j≦n and, after said pivot choosing section chooses
a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 9
calculating the equations
a_{pk+t,j}^{(pk+t)} = a_{pk+t,j}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 10
b_{pk+t}^{(pk+t)} = b_{pk+t}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU17## using (i, j) elements of i-th row of said coefficient matrix for i satisfying 1≦i≦pk or (p+1)k+1≦i≦n and j satisfying (p+1)k+1≦j≦n while holding variables Reg in said register set;
(k-1) postprocessors C1 to Ck-1 each connected to said memory for calculating
Reg^{(0)} = a_{pk+1,pk+t+1}^{(pk+t)}    Eq. 17
Reg^{(1)} = a_{pk+2,pk+t+1}^{(pk+t)}    Eq. 18
. . .
Reg^{(t-1)} = a_{pk+t,pk+t+1}^{(pk+t)}    Eq. 19
a_{pk+1,j}^{(pk+t+1)} = a_{pk+1,j}^{(pk+t)} - Reg^{(0)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 20
a_{pk+2,j}^{(pk+t+1)} = a_{pk+2,j}^{(pk+t)} - Reg^{(1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 21
. . .
a_{pk+t,j}^{(pk+t+1)} = a_{pk+t,j}^{(pk+t)} - Reg^{(t-1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 22
b_{pk+1}^{(pk+t+1)} = b_{pk+1}^{(pk+t)} - Reg^{(0)} b_{pk+t+1}^{(pk+t+1)}    Eq. 23
b_{pk+2}^{(pk+t+1)} = b_{pk+2}^{(pk+t)} - Reg^{(1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 24
. . .
b_{pk+t}^{(pk+t+1)} = b_{pk+t}^{(pk+t)} - Reg^{(t-1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 25
using elements of (pk+1)-th row to (pk+t)-th row and (pk+1)-th to (pk+t)-th components of said known vector for j satisfying pk+t+2≦j≦n;
a gateway connected to said memory and provided as a junction for an external apparatus; and
a transmitter connected to said memory for transmitting data between said memory and said external apparatus through said gateway; and
a main controller for obtaining said unknown vector by executing control of (a) allocating every k rows of said coefficient matrix and every k components of each of said unknown vector and said known vector each of which has a component number equal to a row number of each of every k rows allocated to said memories of said P nodes α0 to αP-1 in an order of α0 to αP-1 cyclically until all elements of said coefficient matrix and all components of each of said unknown vector and said known vector are completely allocated to said memories of said P nodes α0 to αP-1, (b) if n-{n/k}k=0, instructing said P nodes α0 to αP-1 to repeat parallel preprocessing PA1, parallel preprocessings PA2 to PAk, parallel updating processing PB and post-eliminating processing PC for every p from p=0 to p={n/k}-1 and, if n-{n/k}k>0, instructing said P nodes α0 to αP-1 to repeat parallel preprocessing PA1, parallel preprocessings PA2 to PAk, parallel updating processing PB and post-eliminating processing PC for every p from p=0 to p={n/k}-1 and, further, to execute parallel preprocessings PA1 to PAn-{n/k}k, after setting p={n/k}, parallel updating processing PB, after setting a number of pivots equal to n-{n/k}k, and post-eliminating processing PC;
said parallel preprocess PA1 including calculating Eq. 1 and Eq. 2 (pk+2≦j≦n) for elements of a (pk+1)-th row of said coefficient matrix and a (pk+1)-th component of said known vector at node αu (0≦u≦P-1), after said pivot choosing section of said node chooses a pivot represented by Eq. 3, to which (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated, transmitting results of calculation to respective memories of said nodes other than αu by said transmitter of αu,
calculating Eq. 15 at each updating section of said nodes other than αu for respective elements of allocated rows of said coefficient matrix in parallel to calculation of Eq. 1 and Eq. 2, and
calculating Eq. 15 at said updating section of said node αu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix are allocated to said node αu ;
said parallel preprocess PAt (2≦t≦k) including
calculating Eq. 4, Eq. 5, Eq. 6, Eq. 7 and Eq. 8 for each element of (pk+t)-th row of said coefficient matrix and (pk+t)-th component of said known vector (pk+t≦j≦n) at said preprocessing section At (2≦t≦k) of said node αu,
calculating Eq. 10 and Eq. 11, after choice of a pivot represented by Eq. 9, at said pivot choosing section for each element of (pk+t)-th row of said coefficient matrix and (pk+t)-th component of said known vector, transmitting results of calculation to respective memories of nodes other than αu, and
calculating ##EQU18## for allocated rows of said coefficient matrix at respective updating sections B of nodes other than αu and calculating Eq. 17 at said updating section B of said node αu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated to said node αu, and
said parallel updating process PB including calculating Eq. 15 and Eq. 16 for 1≦i≦pk, (p+1)k+1≦i≦n, (p+1)k+1≦j≦n at respective updating sections B of all nodes to which ((p+1)k+1)-th row to n-th row have been allocated, respectively, while holding variables Reg in said register set; and
said post-eliminating processing PC including calculating equations from Eq. 17 to Eq. 25 for each element of (pk+1)-th row to (pk+t)-th row of said coefficient matrix and (pk+1)-th to (pk+t)-th components of said known vector (pk+t+2≦j≦n, t=1, 2, . . . , k-1).
5. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(a_{i,j}) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x_1, x_2, . . . , x_n)^Trans is an unknown vector and b=(b_1, b_2, . . . , b_n)^Trans is a known vector, comprising:
P clusters CL0 to CLP-1, connected with each other through a network, each comprising Pc element processors PE1 to PEPc connected with each other, a memory, a C gateway for connecting each cluster with an external apparatus, and a transmitter connected to said memory for transmitting data between each cluster and said external apparatus,
each element processor comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a_{pk+1,j}^{(pk+1)} = a_{pk+1,j}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 1
b_{pk+1}^{(pk+1)} = b_{pk+1}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 2
after said pivot choosing section chooses
a_{pk+1,pk+1}^{(pk)}    Eq. 3
wherein a_{i,j}^{(r)} denotes the (i,j) element of the coefficient matrix obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
b_i^{(r)} denotes the i-th component of the known vector obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
k is an integer satisfying 1≦k≦n-1,
wherein {n/k} denotes the maximum integer not exceeding n/k; if n-{n/k}k=0, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU19## ##EQU20## wherein j is an integer satisfying pk+t≦j≦n and, after said pivot choosing section chooses
a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 9
calculating the equations
a_{pk+t,j}^{(pk+t)} = a_{pk+t,j}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 10
b_{pk+t}^{(pk+t)} = b_{pk+t}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU21## for i and j satisfying (p+1)k+1≦i, j≦n while holding variables Reg in said register set; and
a main controller GP for obtaining said unknown vector by executing control of (a) allocating every k rows of said coefficient matrix and every k components of each of said unknown vector and said known vector each of which has a component number equal to a row number of each of every k rows allocated to said memories of said P clusters CL0 to CLP-1 in an order of CL0 to CLP-1 cyclically until all elements of said coefficient matrix and all components of each of said unknown vector and said known vector are completely allocated to said memories of said P clusters CL0 to CLP-1, assuming that each element processor of each cluster is in charge of processing each one of allocated rows of said coefficient matrix and each one of allocated components of said known vector and unknown vector, (b) if n-{n/k}k=0, instructing said P clusters CL0 to CLP-1 to repeat parallel preprocessing CLA1, parallel preprocessings CLA2 to CLAk and parallel updating process PBc from p=0 to p={n/k}-2 and, further, to execute parallel preprocessing CLA1 to parallel preprocessing CLAk for p={n/k}-1, and if n-{n/k}k>0, instructing said P clusters CL0 to CLP-1 to repeat parallel preprocessing CLA1 to parallel preprocessing CLAk and parallel updating process PBc from p=0 to p={n/k}-1 and, further, to execute parallel preprocessings CLA1 to CLAn-{n/k}k for p={n/k}, and (c) instructing each cluster to obtain values of said unknown vector using backward substitution and said transmitter of each element processor of each cluster after completion of steps of (a) and (b);
said parallel preprocessing CLA1 including, assuming a cluster CLu (0≦u≦P-1) to which (pk+1)-th to (p+1)k-th rows have been allocated,
allocating each element of (pk+1)-th row of said coefficient matrix and each element of (pk+1)-th component of said known vector to each of said element processors of said cluster CLu in turn;
calculating Eq. 1 and Eq. 2 (pk+2≦j≦n) at respective preprocessing section A1 of said element processors of said cluster CLu simultaneously after said pivot choosing section of each element processor chooses a pivot represented by Eq. 3;
transmitting results of calculation to said memories of clusters other than CLu by said transmitter of said cluster CLu ;
in parallel to the above calculation, calculating Eq. 12 at each updating section B of each element processor of said clusters other than CLu for each of allocated rows of said coefficient matrix; and
calculating Eq. 12 at each updating section B of said element processors of said cluster CLu if rows other than (pk+1)-th to (p+1)k-th rows have been allocated to said cluster CLu ;
said parallel preprocessings CLA2 to CLAk including
allocating each element of (pk+t)-th row (2≦t≦k) of said coefficient matrix and each element of (pk+t)-th component of said known vector to each of said element processors of said cluster CLu in turn;
calculating Eq. 4 to Eq. 8 (pk+t≦j≦n) at each of said preprocessing sections A2 to Ak of said element processor of said cluster CLu simultaneously;
calculating Eq. 10 and Eq. 11, after choice of a pivot represented by Eq. 9 at said pivot choosing section of each element processor, at each of said preprocessing sections A2 to Ak (for pk+t+1≦j≦n) of each element processor simultaneously;
transmitting results of calculation to each of said memories of clusters other than CLu by said transmitter of CLu ;
in parallel to the above calculation, calculating Eq. 17 for each row of said coefficient matrix stored in each of said memories of clusters other than CLu at each updating section B of said element processors of clusters other than CLu ; and if rows other than (pk+1)-th to (p+1)k-th rows have been allocated to said cluster CLu, ##EQU22## calculating Eq. 17 at each updating section B of said element processors in said cluster CLu ;
said parallel updating process PBc including calculating Eq. 15 and Eq. 16 for {(p+1)k+1}-th row to n-th row at respective updating sections of all clusters to which {(p+1)k+1}-th row to n-th row have been allocated, respectively, while holding variables Reg in said register set.
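Claim 5 layers a second level of parallelism under the claim-3 distribution: blocks of k rows still rotate cyclically over the clusters, but within the owning cluster the elements of a pivot row are dealt to the element processors PE1 to PEPc in turn, so the normalization of Eqs. 1-2 runs elementwise in parallel (parallel preprocessing CLA1). A sketch of the two-level placement (the function names and 0-based indexing are assumptions of this sketch):

```python
def cluster_placement(n, k, P, Pc):
    """Two-level distribution sketched from claim 5: k-row blocks go
    cyclically to clusters CL0..CL(P-1), and inside the owning cluster
    column j of a pivot row is handled by element processor j mod Pc."""
    n_blocks = -(-n // k)                      # ceil(n / k)
    block_owner = [p % P for p in range(n_blocks)]

    def element_pe(j):
        # 0-based column index of a pivot row -> element processor index
        return j % Pc

    return block_owner, element_pe
```

With this split, the pivot search and the divisions of Eqs. 1-2 over a row of length n proceed on Pc element processors at once, while the blockwise update PBc still overlaps across clusters as in claim 3.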
6. A data processing machine for the numerical solution of linear equations represented by Ax=b, where A=(a_{i,j}) (1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n rows and n columns, x=(x_1, x_2, . . . , x_n)^Trans is an unknown vector and b=(b_1, b_2, . . . , b_n)^Trans is a known vector, comprising:
(A) P clusters CL0 to CLP-1, connected with each other through a network, each comprising Pc element processors PE1 to PEPc connected with each other, a memory, a C gateway for connecting each cluster with an external apparatus, and a transmitter connected to said memory for transmitting data between each cluster and said external apparatus,
each element processor comprising:
a memory;
a pivot choosing section connected to said memory for choosing pivots by searching said coefficient matrix in a row direction and interchanging elements of said coefficient matrix according to a column-interchange method;
a preprocessing section A1 connected to said memory for calculating
a_{pk+1,j}^{(pk+1)} = a_{pk+1,j}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 1
b_{pk+1}^{(pk+1)} = b_{pk+1}^{(pk)} / a_{pk+1,pk+1}^{(pk)}    Eq. 2
after said pivot choosing section chooses
a_{pk+1,pk+1}^{(pk)}    Eq. 3
wherein a_{i,j}^{(r)} denotes the (i,j) element of the coefficient matrix obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
b_i^{(r)} denotes the i-th component of the known vector obtained when the first to r-th columns have been eliminated from A=(a_{i,j}),
k is an integer satisfying 1≦k≦n-1,
wherein {n/k} denotes the maximum integer not exceeding n/k; if n-{n/k}k=0, p is an integer satisfying 0≦p≦{n/k}-1,
and, if n-{n/k}k>0, p is an integer satisfying 0≦p≦{n/k}, and
j is an integer satisfying pk+2≦j≦n;
2nd to k-th preprocessing sections At (t is an integer satisfying 2≦t≦k) connected to said memory, respectively, each for calculating the following equations: ##EQU23## wherein j is an integer satisfying pk+t≦j≦n and, after said pivot choosing section chooses
a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 9
calculating the equations
a_{pk+t,j}^{(pk+t)} = a_{pk+t,j}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 10
b_{pk+t}^{(pk+t)} = b_{pk+t}^{(pk+t-1)} / a_{pk+t,pk+t}^{(pk+t-1)}    Eq. 11
for each element of a (pk+t)-th row of said coefficient matrix and for a (pk+t)-th component of said known vector, wherein j is an integer satisfying pk+t+1≦j≦n;
an updating section B connected to said memory and comprised of a register set consisting of k registers for registering variables Reg and an arithmetic unit;
said arithmetic unit for calculating the following equations: ##EQU24## using (i, j) elements of i-th row of said coefficient matrix for i satisfying 1≦i≦pk or (p+1)k+1≦i≦n and j satisfying (p+1)k+1≦j≦n while holding variables Reg in said register set;
(k-1) postprocessors C1 to Ck-1 each connected to said pivot choosing section for calculating
Reg^{(0)} = a_{pk+1,pk+t+1}^{(pk+t)}    Eq. 17
Reg^{(1)} = a_{pk+2,pk+t+1}^{(pk+t)}    Eq. 18
. . .
Reg^{(t-1)} = a_{pk+t,pk+t+1}^{(pk+t)}    Eq. 19
a_{pk+1,j}^{(pk+t+1)} = a_{pk+1,j}^{(pk+t)} - Reg^{(0)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 20
a_{pk+2,j}^{(pk+t+1)} = a_{pk+2,j}^{(pk+t)} - Reg^{(1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 21
. . .
a_{pk+t,j}^{(pk+t+1)} = a_{pk+t,j}^{(pk+t)} - Reg^{(t-1)} a_{pk+t+1,j}^{(pk+t+1)}    Eq. 22
b_{pk+1}^{(pk+t+1)} = b_{pk+1}^{(pk+t)} - Reg^{(0)} b_{pk+t+1}^{(pk+t+1)}    Eq. 23
b_{pk+2}^{(pk+t+1)} = b_{pk+2}^{(pk+t)} - Reg^{(1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 24
. . .
b_{pk+t}^{(pk+t+1)} = b_{pk+t}^{(pk+t)} - Reg^{(t-1)} b_{pk+t+1}^{(pk+t+1)}    Eq. 25
using elements of (pk+1)-th row to (pk+t)-th row and (pk+1)-th to (pk+t)-th components of said known vector for j satisfying pk+t+2≦j≦n; and
(B) a main controller for obtaining said unknown vector by executing control of:
(a) allocating every k rows of said coefficient matrix and every k components of each of said unknown vector and said known vector each of which has a component number equal to a row number of each of every k rows allocated to said P clusters CL0 to CLP-1, in an order of CL0 to CLP-1 cyclically until all elements of said coefficient matrix and all components of each of said unknown vector and said known vector are completely allocated to said memories of said P clusters CL0 to CLP-1, assuming that each element processor of each cluster is in charge of processing each one of allocated rows of said coefficient matrix and each one of allocated components of said known vector and unknown vector; and
(b) if n-{n/k}k=0, instructing said P clusters CL0 to CLP-1 to repeat parallel preprocessing PA1, parallel preprocessings PA2 to PAk, parallel updating processing PBc ' and post-eliminating processing PCc for every p from p=0 to p={n/k}-1 and,
if n-{n/k}k>0, instructing said P clusters CL0 to CLP-1 to repeat parallel preprocessing PA1, parallel preprocessings PA2 to PAk, parallel updating processing PBc ' and post-eliminating processing PCc for every p from p=0 to p={n/k}-1 and, further, to execute parallel preprocessings PA1 to PAn-{n/k}k, after setting p={n/k}, parallel updating processing PBc ', after setting a number of pivots equal to n-{n/k}k, and post-eliminating processings PC1 to PCn-{n/k}k ;
said parallel preprocessing PA1 including calculating Eq. 1 and Eq. 2 (pk+2≦j≦n) for elements of (pk+1)-th row of said coefficient matrix and (pk+1)-th component of said known vector at cluster CLu (0≦u≦P-1) to which (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated, after said pivot choosing section of said cluster CLu chooses a pivot represented by Eq. 3, and transmitting results of calculation to respective memories of clusters other than CLu by said transmitter of CLu,
calculating Eq. 15 at each updating section B of said clusters other than CLu for respective elements of allocated rows of said coefficient matrix in parallel to calculation of Eq. 1 and Eq. 2, and
calculating Eq. 15 at said updating section B of said cluster CLu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix are allocated to said cluster CLu ;
said parallel preprocessings PAt (2≦t≦k) including
calculating Eq. 4, Eq. 5, . . . , Eq. 6, Eq. 7 and Eq. 8 for each element of the (pk+t)-th row of said coefficient matrix and the (pk+t)-th component of said known vector (pk+t≦j≦n) at said preprocessing section At (2≦t≦k) of said cluster CLu,
calculating Eq. 10 and Eq. 11, after choice of a pivot represented by Eq. 9, at said pivot choosing section for each element of the (pk+t)-th row of said coefficient matrix and the (pk+t)-th component of said known vector, transmitting results of the calculation to respective memories of clusters other than CLu, and
calculating ##EQU25## for allocated rows of said coefficient matrix at respective updating sections B of clusters other than CLu and calculating Eq. 17 at said updating section B of said cluster CLu if rows other than (pk+1)-th to (p+1)k-th rows of said coefficient matrix have been allocated to said cluster CLu, and
said parallel updating processing PBc' including calculating Eq. 15 and Eq. 16 for 1≦i≦pk, (p+1)k+1≦i≦n, (p+1)k+1≦j≦n at respective updating sections B of all nodes to which the ((p+1)k+1)-th to n-th rows have been allocated, respectively, while holding variables Reg in said register set; and
said post-eliminating processing PCc including calculating equations from Eq. 17 to Eq. 25 for each element of (pk+1)-th to (pk+t)-th row of said coefficient matrix and (pk+1)-th to (pk+t)-th components of said known vector (pk+t+2≦j≦n, t=1, 2, . . . , k-1).
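The cyclic allocation rule of step (a), in which blocks of k consecutive rows (together with the matching components of the known and unknown vectors) are dealt to clusters CL0 to CLP-1 in cyclic order, can be sketched as follows. This is a minimal Python illustration; the function name and the list-based stand-in for cluster memories are ours, not the patent's.

```python
def allocate(n, k, P):
    """Deal blocks of k consecutive row indices (0-based) to P cluster
    memories in the cyclic order CL0, CL1, ..., CL(P-1), CL0, ...
    A short final block is produced when k does not divide n."""
    memories = [[] for _ in range(P)]
    for block, start in enumerate(range(0, n, k)):
        rows = list(range(start, min(start + k, n)))
        memories[block % P].append(rows)
    return memories

# n = 10 rows, blocks of k = 2, P = 3 clusters:
# CL0 holds rows {0,1} and {6,7}; CL1 holds {2,3} and {8,9}; CL2 holds {4,5}.
```

Within each cluster, each of the k element processors would then take charge of one row of each allocated block, as the claim assumes.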
7. Parallel elimination method for numerical solution of linear equations represented by Ax=b wherein A=(ai,j)(1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n columns and n rows, x=(x1, x2, . . . , xn)Trans is an unknown vector and b=(b1, b2, . . . , bn)Trans is a known vector with use of a parallel computer consisting of first to C-th clusters (C is an integer larger than 1) connected by a network, each cluster consisting of first to Pc -th element processors (Pc is an integer larger than 1) and a memory common to said first to Pc -th element processors, comprising
(A) data allocation step for allocating every Pc rows of a coefficient matrix A.sup.(r) =(ai,j.sup.(r)) and every Pc components of each of known vector b.sup.(r) and unknown vector x.sup.(r), component numbers of said Pc components corresponding to row numbers of said Pc rows one to one, to respective memories of said clusters in turn wherein said coefficient matrix A.sup.(r), known vector b.sup.(r) and unknown vector x.sup.(r) denote coefficient matrix, known vector and unknown vector obtained by eliminating first to r-th columns of the coefficient matrix A=(ai,j), respectively;
repeating said data allocation step until all rows of the coefficient matrix A.sup.(r) and all components of each of the known vector b.sup.(r) and unknown vector x.sup.(r) have been allocated, and, further, allocating said Pc rows of the coefficient matrix A.sup.(r) and Pc components of each of the known and unknown vectors b.sup.(r) and x.sup.(r) to Pc element processors in each cluster;
(B) fundamental pre-elimination step for repeating a series of following operations from l=3 to l=Pc ;
choosing a pivot represented by Eq. 1 at the first element processor of the corresponding cluster
a_{P_c k+1, P_c k+1}^{(P_c k)}                                Eq. 1
wherein {n/Pc} denotes a maximum integer not exceeding n/Pc; if n-{n/Pc}Pc>0, k is an integer satisfying 0≦k≦{n/Pc}, and, if n-{n/Pc}Pc=0, k is an integer satisfying 0≦k≦{n/Pc}-1; calculating Eq. 2 and Eq. 3
a_{P_c k+1, j}^{(P_c k+1)} = a_{P_c k+1, j}^{(P_c k)} / a_{P_c k+1, P_c k+1}^{(P_c k)}          Eq. 2
b_{P_c k+1}^{(P_c k+1)} = b_{P_c k+1}^{(P_c k)} / a_{P_c k+1, P_c k+1}^{(P_c k)}                Eq. 3
and transmitting the calculation results to the respective memories of the clusters to which element processors in charge of the (Pc k+2)-th to n-th rows of the coefficient matrix belong, other than the cluster to which the element processor in charge of the (Pc k+1)-th row belongs,
calculating Eq. 4 for the i-th row at the element processor in charge of the i-th row, wherein Pc k+2≦i≦n;
t_i^{(1)} = a_{i, P_c k+2}^{(P_c k)} - a_{i, P_c k+1}^{(P_c k)} a_{P_c k+1, P_c k+2}^{(P_c k+1)}          Eq. 4
calculating Eq. 5 and Eq. 6 at the second element processor of the cluster
a_{P_c k+2, j}^{(P_c k+1)} = a_{P_c k+2, j}^{(P_c k)} - a_{P_c k+2, P_c k+1}^{(P_c k)} a_{P_c k+1, j}^{(P_c k+1)}          Eq. 5
b_{P_c k+2}^{(P_c k+1)} = b_{P_c k+2}^{(P_c k)} - a_{P_c k+2, P_c k+1}^{(P_c k)} b_{P_c k+1}^{(P_c k+1)}          Eq. 6
choosing a pivot represented by Eq. 7;
a_{P_c k+2, P_c k+2}^{(P_c k+1)}                              Eq. 7
calculating Eq. 8 and Eq. 9;
a_{P_c k+2, j}^{(P_c k+2)} = a_{P_c k+2, j}^{(P_c k+1)} / a_{P_c k+2, P_c k+2}^{(P_c k+1)}          Eq. 8
b_{P_c k+2}^{(P_c k+2)} = b_{P_c k+2}^{(P_c k+1)} / a_{P_c k+2, P_c k+2}^{(P_c k+1)}              Eq. 9
transmitting the calculation results of Eq. 8 and Eq. 9 to the memories of the clusters to which element processors in charge of the (Pc k+3)-th to n-th rows of the coefficient matrix belong, other than the cluster to which the element processor in charge of the (Pc k+2)-th row belongs,
calculating Eq. 10 for each of the (Pc k+1)-th to n-th rows at each of element processors in charge of (Pc k+1)-th to n-th rows, respectively; ##EQU26## calculating Eq. 11 and Eq. 12 at the l-th element processor of the cluster; ##EQU27## choosing a pivot represented by Eq. 13 and calculating Eq. 14 and Eq. 15;
a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}                            Eq. 13
a_{P_c k+l, j}^{(P_c k+l)} = a_{P_c k+l, j}^{(P_c k+l-1)} / a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}          Eq. 14
b_{P_c k+l}^{(P_c k+l)} = b_{P_c k+l}^{(P_c k+l-1)} / a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}                Eq. 15
transmitting the results of calculation of Eq. 14 and Eq. 15 to the memories of the clusters to which element processors in charge of the (Pc k+l+1)-th to n-th rows belong, other than the cluster to which the element processor in charge of the (Pc k+l)-th row belongs;
(C) multi-pivot elimination step of calculating Eq. 16 and Eq. 17 for each of the ((k+1)Pc +1)-th to n-th rows at each of the element processors in charge of the ((k+1)Pc +1)-th to n-th rows; ##EQU28## (D) repetition elimination judgment step of judging whether or not a series of operations, each executing said fundamental pre-elimination step cluster by cluster in turn and, thereafter, executing said multi-pivot elimination step, has been repeated {n/Pc } times;
(E) remainder elimination step of executing said fundamental pre-elimination step for the ({n/Pc }Pc +1)-th to n-th rows of the coefficient matrix at the element processors in charge of the ({n/Pc }Pc +1)-th to n-th rows, respectively, if n-{n/Pc }Pc >0 when it is judged in said repetition elimination judgment step that said series of operations has been completed; and unknown vector generation step for obtaining said unknown vector using results of steps (A) through (E).
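Numerically, the pre-elimination and multi-pivot steps above reduce the system to a unit upper-triangular one; the blocking into groups of Pc pivot rows changes how the work is scheduled across clusters and element processors, not the arithmetic result. The following sequential Python sketch therefore reproduces only the outcome of steps (B) through (E) (function name is ours; no pivot interchange is shown, so a nonsingular matrix with non-zero pivots is assumed):

```python
import numpy as np

def forward_eliminate(A, b):
    """Sequential sketch of the arithmetic of steps (B)-(E): each pivot row
    is divided by its pivot (cf. Eq. 2/3, Eq. 8/9, Eq. 14/15) and then
    subtracted, scaled, from every lower row (cf. Eq. 16/17), leaving a
    unit upper-triangular system."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for r in range(n):
        piv = A[r, r]               # pivot of Eq. 1 / Eq. 7 / Eq. 13
        A[r, r:] /= piv             # normalize the pivot row
        b[r] /= piv
        for i in range(r + 1, n):   # eliminate the pivot column below
            m = A[i, r]
            A[i, r:] -= m * A[r, r:]
            b[i] -= m * b[r]
    return A, b
```

For A = [[2, 1], [4, 3]] and b = [3, 7], this leaves the unit upper-triangular rows [[1, 0.5], [0, 1]] with right-hand side [1.5, 1].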
8. The parallel elimination method as claimed in claim 7, wherein said unknown vector generation step comprises
(F) fundamental back-substitution step of setting
x_i = b_i^{(n)}                                               Eq. 18
at an element processor in charge of i-th row;
(G) fundamental back transmission step of transmitting xi to the memory of the cluster to which element processors in charge of the first to (i-1)-th components of the unknown vector belong;
(H) fundamental back calculation step of calculating Eq. 19 for components in charge at element processors in charge of first to (i-1)-th components;
b_i^{(r+1)} = b_i^{(r)} - a_{i,j}^{(i)} x_j                   Eq. 19
and
(I) repetition back procession step of calculating Eq. 20 by said fundamental back-substitution step in an element processor in charge of (n-l+1)-th component of each of the known and unknown vectors;
x_{n-l+1} = b_{n-l+1}^{(n)}                                   Eq. 20
repeating, for l from 1 to (n-1), a series of operations executing calculation by said fundamental back calculation step at the respective element processors in charge of the first to (n-l)-th components, after transmitting xn-l+1 to the memory of each cluster to which element processors in charge of the first to (n-l)-th components of each of the known and unknown vectors belong; and
finally setting Eq. 21 by said fundamental back-substitution step
x_1 = b_1^{(n)}                                               Eq. 21.
9. The parallel elimination method as claimed in claim 7 wherein, upon choosing a pivot, the following steps are executed:
searching, at the element processor in charge of the row to which a zero diagonal element belongs, for a non-zero element in the direction of increasing row number from said zero diagonal element when said zero diagonal element is found;
announcing the row number of the non-zero element found at the above step to other element processors;
interchanging the non-zero element of the coefficient matrix having the announced row number with the element having a row number equal to that of said zero diagonal element; and
interchanging the component of the unknown vector having a component number equal to the announced row number with the component of the unknown vector having a component number equal to the row number of said zero diagonal element.
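Because the components of the unknown vector are interchanged along with the matrix elements, the zero-pivot repair of claim 9 behaves like an interchange within the pivot row together with a matching permutation of the unknowns. A hedged sketch under that column-interchange reading (function name and the `perm` bookkeeping list are ours):

```python
def fix_zero_pivot(A, perm, d):
    """If diagonal element A[d][d] is zero, search onward in the pivot row
    for the next non-zero element, announce its index, and swap that column
    with column d in every row; `perm` records which original unknown sits
    in each column.  Assumes such a non-zero element exists (A nonsingular)."""
    n = len(A)
    if A[d][d] != 0:
        return                                    # nothing to repair
    j = next(j for j in range(d + 1, n) if A[d][j] != 0)   # search step
    for row in A:                                 # interchange, row by row
        row[d], row[j] = row[j], row[d]
    perm[d], perm[j] = perm[j], perm[d]           # matching unknown-component swap
```

After elimination, `perm` tells which solution component corresponds to which original unknown.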
10. The parallel elimination method as claimed in claim 7, wherein, upon choosing a pivot, the following steps are executed:
searching for an element having a maximum absolute value in the direction of increasing row number from a given diagonal element of the coefficient matrix at the element processor in charge of the row to which said given diagonal element belongs;
announcing the row number of the element found at the above searching to element processors other than said element processor;
interchanging the element having the announced row number with the element having the row number of said given diagonal element for each row at each element processor in charge of said each row;
interchanging a component of the unknown vector having a component number equal to the row number announced with another component of the unknown vector having a component number equal to the row number of the given diagonal element at element processors in charge of the above two components of the unknown vector, respectively.
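Claim 10 differs from claim 9 only in the search criterion: the element of maximum absolute value is taken instead of the first non-zero element, which improves numerical stability. A sketch under a column-interchange reading of the interchange steps (names are ours):

```python
def partial_pivot(A, perm, d):
    """Find the element of largest absolute value from A[d][d] onward in
    the pivot row, announce its index, and interchange that column with
    column d in every row; `perm` records the matching permutation of the
    unknown-vector components."""
    n = len(A)
    j = max(range(d, n), key=lambda c: abs(A[d][c]))   # search step
    if j != d:
        for row in A:                                  # interchange, row by row
            row[d], row[j] = row[j], row[d]
        perm[d], perm[j] = perm[j], perm[d]            # unknown-component swap
```

Selecting the largest pivot keeps the multipliers of the subsequent elimination bounded by 1 in magnitude.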
11. Parallel elimination method for numerical solution of linear equations represented by Ax=b wherein A=(ai,j)(1≦i≦n, 1≦j≦n, and n is an integer larger than 1) is a coefficient matrix of n columns and n rows, x=(x1, x2, . . . , xn)Trans is an unknown vector and b=(b1, b2, . . . , bn)Trans is a known vector with use of a parallel computer consisting of first to C-th clusters (C is an integer larger than 1) connected by a network, each cluster consisting of first to Pc -th element processors (Pc is an integer larger than 1) and a memory common to said first to Pc -th element processors, comprising
(A) data allocation step for allocating every Pc rows of a coefficient matrix A.sup.(r) =(aij.sup.(r)) and every Pc components of each of known vector b.sup.(r) and unknown vector x.sup.(r), component numbers of said Pc components corresponding to row numbers of said Pc rows one to one, to respective memories of said clusters in turn wherein said coefficient matrix A.sup.(r), known vector b.sup.(r) and unknown vector x.sup.(r) denote coefficient matrix, known vector and unknown vector obtained by eliminating first to r-th columns of the coefficient matrix A=(aij), respectively;
repeating said data allocation step until all rows of the coefficient matrix A.sup.(r) and all components of each of the known vector b.sup.(r) and unknown vector x.sup.(r) have been allocated, and, further, allocating said Pc rows of the coefficient matrix A.sup.(r) and Pc components of each of the known and unknown vectors b.sup.(r) and x.sup.(r) to Pc element processors in each cluster;
(B) fundamental pre-elimination step for repeating a series of following operations from l=3 to l=Pc ;
choosing a pivot represented by Eq. 1 at the first element processor of the corresponding cluster
a_{P_c k+1, P_c k+1}^{(P_c k)}                                Eq. 1
wherein {n/Pc} denotes a maximum integer not exceeding n/Pc; if n-{n/Pc}Pc>0, k is an integer satisfying 0≦k≦{n/Pc}, and, if n-{n/Pc}Pc=0, k is an integer satisfying 0≦k≦{n/Pc}-1; calculating Eq. 2 and Eq. 3
a_{P_c k+1, j}^{(P_c k+1)} = a_{P_c k+1, j}^{(P_c k)} / a_{P_c k+1, P_c k+1}^{(P_c k)}          Eq. 2
b_{P_c k+1}^{(P_c k+1)} = b_{P_c k+1}^{(P_c k)} / a_{P_c k+1, P_c k+1}^{(P_c k)}                Eq. 3
and transmitting the calculation results to the respective memories of the clusters to which element processors in charge of the (Pc k+2)-th to n-th rows of the coefficient matrix belong, other than the cluster to which the element processor in charge of the (Pc k+1)-th row belongs,
calculating Eq. 4 for the i-th row at the element processor in charge of the i-th row, wherein Pc k+2≦i≦n;
t_i^{(1)} = a_{i, P_c k+2}^{(P_c k)} - a_{i, P_c k+1}^{(P_c k)} a_{P_c k+1, P_c k+2}^{(P_c k+1)}          Eq. 4
calculating Eq. 5 and Eq. 6 at the second element processor of the cluster
a_{P_c k+2, j}^{(P_c k+1)} = a_{P_c k+2, j}^{(P_c k)} - a_{P_c k+2, P_c k+1}^{(P_c k)} a_{P_c k+1, j}^{(P_c k+1)}          Eq. 5
b_{P_c k+2}^{(P_c k+1)} = b_{P_c k+2}^{(P_c k)} - a_{P_c k+2, P_c k+1}^{(P_c k)} b_{P_c k+1}^{(P_c k+1)}          Eq. 6
choosing a pivot represented by Eq. 7;
a_{P_c k+2, P_c k+2}^{(P_c k+1)}                              Eq. 7
calculating Eq. 8 and Eq. 9;
a_{P_c k+2, j}^{(P_c k+2)} = a_{P_c k+2, j}^{(P_c k+1)} / a_{P_c k+2, P_c k+2}^{(P_c k+1)}          Eq. 8
b_{P_c k+2}^{(P_c k+2)} = b_{P_c k+2}^{(P_c k+1)} / a_{P_c k+2, P_c k+2}^{(P_c k+1)}              Eq. 9
transmitting the calculation results of Eq. 8 and Eq. 9 to the memories of the clusters to which element processors in charge of the (Pc k+3)-th to n-th rows of the coefficient matrix belong, other than the cluster to which the element processor in charge of the (Pc k+2)-th row belongs,
calculating Eq. 10 for each of the (Pc k+1)-th to n-th rows at each of element processors in charge of (Pc k+1)-th to n-th rows, respectively; ##EQU29## calculating Eq. 11 and Eq. 12 at the l-th element processor of the cluster; ##EQU30## choosing a pivot represented by Eq. 13 and calculating Eq. 14 and Eq. 15;
a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}                            Eq. 13
a_{P_c k+l, j}^{(P_c k+l)} = a_{P_c k+l, j}^{(P_c k+l-1)} / a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}          Eq. 14
b_{P_c k+l}^{(P_c k+l)} = b_{P_c k+l}^{(P_c k+l-1)} / a_{P_c k+l, P_c k+l}^{(P_c k+l-1)}                Eq. 15
transmitting the results of calculation of Eq. 14 and Eq. 15 to the memories of the clusters to which element processors in charge of the (Pc k+l+1)-th to n-th rows belong, other than the cluster to which the element processor in charge of the (Pc k+l)-th row belongs;
(C) multi-pivot elimination step of calculating Eq. 16 and Eq. 17 for each of the ((k+1)Pc +1)-th to n-th rows at each of the element processors in charge of the ((k+1)Pc +1)-th to n-th rows; ##EQU31## (D) fundamental post-elimination step of calculating Eq. 18 and Eq. 19 at each element processor;
a_{i,j}^{(r+1)} = a_{i,j}^{(r)} - a_{i,i+1}^{(r)} a_{i+1,j}^{(r+1)}          Eq. 18
b_i^{(r+1)} = b_i^{(r)} - a_{i,i+1}^{(r)} b_{i+1}^{(r+1)}                    Eq. 19
(E) post-elimination procession step of repeating the following operation at respective element processors in charge of (Pc k+1)-th to (Pc k+q)-th rows of the coefficient matrix from q=1 to q=Pc -1, said operation executing said fundamental post-elimination step for (Pc k+1)-th to (Pc k+q)-th rows of the coefficient matrix simultaneously after setting l=-w+q+1 in each of Eq. 18 and Eq. 19 for (Pc k+w)-th row (1≦w≦q);
(F) repetition elimination judgment step of judging whether or not a series of operations have been repeated by {n/Pc } times, said series of operations executing said fundamental pre-elimination step for every Pc rows and, then, executing said multi-pivot elimination procession step and post-elimination procession step at each cluster;
(G) remainder elimination step of executing, if n-{n/Pc }Pc >0 at the time when it is judged at said repetition elimination judgment step that said series of operations have been repeated by {n/Pc } times, said fundamental pre-elimination step, said multi-pivot elimination step and post-elimination procession step for the remaining ({n/Pc }Pc +1)-th to n-th rows of the coefficient matrix at the respective element processors in charge of them; and
(H) unknown vector generation step for obtaining said unknown vector using results of steps (A) through (G).
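Steps (D) and (E) extend the forward sweep of claim 7 with post-elimination above each pivot, so the coefficient matrix is driven all the way to the identity and the solution appears directly in the known vector, with no separate back-substitution pass. A sequential Python sketch of the combined arithmetic (names are ours; the blocking again affects only scheduling, and no pivot interchange is shown):

```python
import numpy as np

def jordan_eliminate(A, b):
    """Sketch of the combined pre-/multi-pivot/post-elimination of claim 11:
    normalize each pivot row, then eliminate the pivot column both below
    (cf. Eq. 16/17) and above (cf. Eq. 18/19).  A becomes the identity and
    b ends up holding the unknown vector."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for r in range(n):
        piv = A[r, r]                  # pivot (assumed non-zero)
        A[r] /= piv                    # normalize the pivot row
        b[r] /= piv
        for i in range(n):             # clear the pivot column in all other rows
            if i != r and A[i, r] != 0.0:
                m = A[i, r]
                A[i] -= m * A[r]
                b[i] -= m * b[r]
    return b                           # the solution x
```

For A = [[2, 1], [4, 3]] and b = [3, 7] this returns [1.0, 1.0], the solution of the 2x2 system.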
12. The parallel elimination method as claimed in claim 11 wherein, upon choosing a pivot, the following steps are executed:
searching, at the element processor in charge of the row to which a zero diagonal element belongs, for a non-zero element in the direction of increasing row number from said zero diagonal element when said zero diagonal element is found;
announcing the row number of the non-zero element found at the above step to other element processors;
interchanging the non-zero element of the coefficient matrix having the announced row number with the element having a row number equal to that of said zero diagonal element; and
interchanging the component of the unknown vector having a component number equal to the announced row number with the component of the unknown vector having a component number equal to the row number of said zero diagonal element.
13. The parallel elimination method as claimed in claim 12 wherein, upon choosing a pivot, the following steps are executed:
searching for an element having a maximum absolute value in the direction of increasing row number from a given diagonal element of the coefficient matrix at the element processor in charge of the row to which said given diagonal element belongs;
announcing the row number of the element found at the above searching to element processors other than said element processor;
interchanging the element having the announced row number with the element having the row number of said given diagonal element for each row at each element processor in charge of said each row;
interchanging a component of the unknown vector having a component number equal to the row number announced with another component of the unknown vector having a component number equal to the row number of the given diagonal element at element processors in charge of the above two components of the unknown vector, respectively.
US07/912,180 1991-07-12 1992-07-13 Data processing method and apparatus employing parallel processing for solving systems of linear equations Expired - Fee Related US5490278A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP17216891A JPH0520348A (en) 1991-07-12 1991-07-12 Parallel-arithmetic operation device
JP3-172168 1991-07-12
JP17361691A JPH0520349A (en) 1991-07-15 1991-07-15 Linear calculation device
JP3-173616 1991-07-15

Publications (1)

Publication Number Publication Date
US5490278A true US5490278A (en) 1996-02-06

Family

ID=26494618

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/912,180 Expired - Fee Related US5490278A (en) 1991-07-12 1992-07-13 Data processing method and apparatus employing parallel processing for solving systems of linear equations

Country Status (3)

Country Link
US (1) US5490278A (en)
EP (1) EP0523544B1 (en)
DE (1) DE69232431T2 (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5887186A (en) * 1994-03-31 1999-03-23 Fujitsu Limited Method of solving simultaneous linear equations in a memory-distributed parallel computer
US6144932A (en) * 1997-06-02 2000-11-07 Nec Corporation Simulation device and its method for simulating operation of large-scale electronic circuit by parallel processing
US6360190B1 (en) * 1997-11-28 2002-03-19 Nec Corporation Semiconductor process device simulation method and storage medium storing simulation program
US20020077789A1 (en) * 2000-09-16 2002-06-20 Hitachi, Ltd. Method and apparatus for solving simultaneous linear equations
US20020091909A1 (en) * 2000-11-24 2002-07-11 Makoto Nakanishi Matrix processing method of shared-memory scalar parallel-processing computer and recording medium
US20030097390A1 (en) * 2001-11-16 2003-05-22 Walster G. William Solving systems of nonlinear equations using interval arithmetic and term consistency
US20030105789A1 (en) * 2001-11-30 2003-06-05 Walster G. William Solving systems of nonlinear equations using interval arithmetic and term consistency
US20030130970A1 (en) * 2002-01-08 2003-07-10 Walster G. William Method and apparatus for solving an equality constrained global optimization problem
US6601080B1 (en) * 2000-02-23 2003-07-29 Sun Microsystems, Inc. Hybrid representation scheme for factor L in sparse direct matrix factorization
US20030182518A1 (en) * 2002-03-22 2003-09-25 Fujitsu Limited Parallel processing method for inverse matrix for shared memory type scalar parallel computer
US20030212723A1 (en) * 2002-05-07 2003-11-13 Quintero-De-La-Garza Raul Gerardo Computer methods of vector operation for reducing computation time
US20040122882A1 (en) * 2002-04-11 2004-06-24 Yuriy Zakharov Equation solving
US20050071526A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for virtual devices using a plurality of processors
US20050071814A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for processor thread for software debugging
US20050071513A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for processor dedicated code handling in a multi-processor environment
US20050071828A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for compiling source code for multi-processor environments
US20050071404A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for solving a large system of dense linear equations
US20050071578A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for manipulating data with a plurality of processors
US20050071651A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for encrypting data using a plurality of processors
US20050081202A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for task queue management of virtual devices using a plurality of processors
US20050081201A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for grouping processors
US20050081203A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for asymmetric heterogeneous multi-threaded operating system
US20050081182A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for balancing computational load across a plurality of processors
US20050086655A1 (en) * 2003-09-25 2005-04-21 International Business Machines Corporation System and method for loading software on a plurality of processors
US20050091473A1 (en) * 2003-09-25 2005-04-28 International Business Machines Corporation System and method for managing a plurality of processors as devices
US7146529B2 (en) 2003-09-25 2006-12-05 International Business Machines Corporation System and method for processor thread acting as a system service processor
US7392511B2 (en) 2001-03-22 2008-06-24 International Business Machines Corporation Dynamically partitioning processing across plurality of heterogeneous processors
US20090292511A1 (en) * 2008-05-22 2009-11-26 Aljosa Vrancic Controlling or Analyzing a Process by Solving A System of Linear Equations in Real-Time
US8417755B1 (en) 2008-05-28 2013-04-09 Michael F. Zimmer Systems and methods for reducing memory traffic and power consumption in a processing environment by solving a system of linear equations
CN105045768A (en) * 2015-09-01 2015-11-11 浪潮(北京)电子信息产业有限公司 Method and system for achieving GMRES algorithm
US20180373673A1 (en) * 2017-06-23 2018-12-27 University Of Dayton Hardware accelerated linear system solver
US20190101877A1 (en) * 2017-10-02 2019-04-04 University Of Dayton Reconfigurable hardware-accelerated model predictive controller

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8098778B2 (en) 2007-11-27 2012-01-17 Infineon Technologies Ag Controlled transmission of data in a data transmission system
KR102096365B1 (en) 2019-11-19 2020-04-03 (주)브이텍 Vacuum multi-sensing unit

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4787057A (en) * 1986-06-04 1988-11-22 General Electric Company Finite element analysis method using multiprocessor for matrix manipulations with special handling of diagonal elements
US5113523A (en) * 1985-05-06 1992-05-12 Ncube Corporation High performance computer system
US5274832A (en) * 1990-10-04 1993-12-28 National Semiconductor Corporation Systolic array for multidimensional matrix computations
US5301342A (en) * 1990-12-20 1994-04-05 Intel Corporation Parallel processing computer for solving dense systems of linear equations by factoring rows, columns, and diagonal, inverting the diagonal, forward eliminating, and back substituting


Non-Patent Citations (18)

* Cited by examiner, † Cited by third party
Title
A. El-Amawy et al, "Efficient Linear and Bilinear Arrays for Matrix Triangularisation with Partial Pivoting," IEE Proceedings, vol. 137, No. 4, Jul. 1990, pp. 295-300.
B. Smith et al, "Sparse Matrix Computations on an FFP Machine," IEEE, Oct. 1988, pp. 215-218.
IEEE Transactions On Computers, vol. 32, No. 12, Dec. 1983, New York, US, pp. 1109-1117, Mandayam A. Srinivas, "Optimal Parallel Scheduling of Gaussian Elimination DAG's".
Parallel Computing, vol. 11, No. 4, Aug. 1989, Amsterdam, NL, pp. 201-221, Gita Alaghband, "Parallel Pivoting Combined with Parallel Reduction and Fill-In Control".
Parallel Computing, vol. 13, No. 3, Mar. 1990, Amsterdam, NL, pp. 289-294, Hyoung Joong Kim et al., "A Parallel Algorithm Solving Tridiagonal Toeplitz Linear System".
Proceedings Of The 1986 IBM Europe Institute Seminar On Parallel Computing, North-Holland, Amsterdam, NL, 11 Aug. 1986, Oberlech, Austria, pp. 99-106, Iain S. Duff, "Parallelism in Sparse Matrices".
Proceedings Of The 1989 Power Industry Computer Application Conference, IEEE Press, New York, US, 1 May 1989, Seattle, Washington, US, pp. 9-15, D. C. Yu et al., "A New Approach for the Forward and Backward Substitution of Parallel Solution of Sparse Linear Equations Based on Dataflow Architecture".
Proceedings Of The IEEE 1983 International Symposium On Circuits And Systems, IEEE Press, New York, US, vol. 1/3, 2 May 1983, Newport Beach, California, US, pp. 214-217, R. M. Kieckhafer et al., "A Clustered Processor Array for the Solution of the Unstructured Sparse Matrix Equations".
Transaction Of The Institute Of Electronics, Information And Communications Engineers Of Japan, vol. 72, No. 12, Dec. 1989, Tokyo, Japan, pp. 1336-1343, Nobuyuki Tanaka et al., "Special Parallel Machine for LU Decomposition of a Large Scale Circuit Matrix and Its Performance".

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5887186A (en) * 1994-03-31 1999-03-23 Fujitsu Limited Method of solving simultaneous linear equations in a memory-distributed parallel computer
US6144932A (en) * 1997-06-02 2000-11-07 Nec Corporation Simulation device and its method for simulating operation of large-scale electronic circuit by parallel processing
US6360190B1 (en) * 1997-11-28 2002-03-19 Nec Corporation Semiconductor process device simulation method and storage medium storing simulation program
US6601080B1 (en) * 2000-02-23 2003-07-29 Sun Microsystems, Inc. Hybrid representation scheme for factor L in sparse direct matrix factorization
US20020077789A1 (en) * 2000-09-16 2002-06-20 Hitachi, Ltd. Method and apparatus for solving simultaneous linear equations
US6826585B2 (en) * 2000-11-16 2004-11-30 Hitachi Software Engineering Co., Ltd. Method and apparatus for solving simultaneous linear equations
US20020091909A1 (en) * 2000-11-24 2002-07-11 Makoto Nakanishi Matrix processing method of shared-memory scalar parallel-processing computer and recording medium
US6907513B2 (en) * 2000-11-24 2005-06-14 Fujitsu Limited Matrix processing method of shared-memory scalar parallel-processing computer and recording medium
US7392511B2 (en) 2001-03-22 2008-06-24 International Business Machines Corporation Dynamically partitioning processing across plurality of heterogeneous processors
US20080250414A1 (en) * 2001-03-22 2008-10-09 Daniel Alan Brokenshire Dynamically Partitioning Processing Across A Plurality of Heterogeneous Processors
US8091078B2 (en) 2001-03-22 2012-01-03 International Business Machines Corporation Dynamically partitioning processing across a plurality of heterogeneous processors
US20030097390A1 (en) * 2001-11-16 2003-05-22 Walster G. William Solving systems of nonlinear equations using interval arithmetic and term consistency
US6859817B2 (en) * 2001-11-16 2005-02-22 Sun Microsystems, Inc. Solving systems of nonlinear equations using interval arithmetic and term consistency
US20030105789A1 (en) * 2001-11-30 2003-06-05 Walster G. William Solving systems of nonlinear equations using interval arithmetic and term consistency
US20030130970A1 (en) * 2002-01-08 2003-07-10 Walster G. William Method and apparatus for solving an equality constrained global optimization problem
US6961743B2 (en) * 2002-01-08 2005-11-01 Sun Microsystems, Inc. Method and apparatus for solving an equality constrained global optimization problem
US20040093470A1 (en) * 2002-03-22 2004-05-13 Fujitsu Limited Parallel processing method for inverse matrix for shared memory type scalar parallel computer
US7483937B2 (en) 2002-03-22 2009-01-27 Fujitsu Limited Parallel processing method for inverse matrix for shared memory type scalar parallel computer
US20030182518A1 (en) * 2002-03-22 2003-09-25 Fujitsu Limited Parallel processing method for inverse matrix for shared memory type scalar parallel computer
US20040122882A1 (en) * 2002-04-11 2004-06-24 Yuriy Zakharov Equation solving
US20030212723A1 (en) * 2002-05-07 2003-11-13 Quintero-De-La-Garza Raul Gerardo Computer methods of vector operation for reducing computation time
US7065545B2 (en) 2002-05-07 2006-06-20 Quintero-De-La-Garza Raul Gera Computer methods of vector operation for reducing computation time
US7318218B2 (en) 2003-09-25 2008-01-08 International Business Machines Corporation System and method for processor thread for software debugging
US20080301695A1 (en) * 2003-09-25 2008-12-04 International Business Machines Corporation Managing a Plurality of Processors as Devices
US20050081203A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for asymmetric heterogeneous multi-threaded operating system
US20050081182A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for balancing computational load across a plurality of processors
US20050086655A1 (en) * 2003-09-25 2005-04-21 International Business Machines Corporation System and method for loading software on a plurality of processors
US20050091473A1 (en) * 2003-09-25 2005-04-28 International Business Machines Corporation System and method for managing a plurality of processors as devices
US20050081202A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for task queue management of virtual devices using a plurality of processors
US20050071651A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for encrypting data using a plurality of processors
US20050071578A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for manipulating data with a plurality of processors
US7146529B2 (en) 2003-09-25 2006-12-05 International Business Machines Corporation System and method for processor thread acting as a system service processor
US7236998B2 (en) * 2003-09-25 2007-06-26 International Business Machines Corporation System and method for solving a large system of dense linear equations
US20050071404A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for solving a large system of dense linear equations
US7389508B2 (en) 2003-09-25 2008-06-17 International Business Machines Corporation System and method for grouping processors and assigning shared memory space to a group in heterogeneous computer environment
US20050071828A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for compiling source code for multi-processor environments
US20080155203A1 (en) * 2003-09-25 2008-06-26 Maximino Aguilar Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
US20080162834A1 (en) * 2003-09-25 2008-07-03 Daniel Alan Brokenshire Task Queue Management of Virtual Devices Using a Plurality of Processors
US20080168443A1 (en) * 2003-09-25 2008-07-10 Daniel Alan Brokenshire Virtual Devices Using a Plurality of Processors
US7415703B2 (en) 2003-09-25 2008-08-19 International Business Machines Corporation Loading software on a plurality of processors
US20080235679A1 (en) * 2003-09-25 2008-09-25 International Business Machines Corporation Loading Software on a Plurality of Processors
US20050071513A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for processor dedicated code handling in a multi-processor environment
US7444632B2 (en) 2003-09-25 2008-10-28 International Business Machines Corporation Balancing computational load across a plurality of processors
US20080271003A1 (en) * 2003-09-25 2008-10-30 International Business Machines Corporation Balancing Computational Load Across a Plurality of Processors
US20080276232A1 (en) * 2003-09-25 2008-11-06 International Business Machines Corporation Processor Dedicated Code Handling in a Multi-Processor Environment
US20050081201A1 (en) * 2003-09-25 2005-04-14 International Business Machines Corporation System and method for grouping processors
US7475257B2 (en) 2003-09-25 2009-01-06 International Business Machines Corporation System and method for selecting and using a signal processor in a multiprocessor system to operate as a security for encryption/decryption of data
US7478390B2 (en) 2003-09-25 2009-01-13 International Business Machines Corporation Task queue management of virtual devices using a plurality of processors
US20050071814A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for processor thread for software debugging
US7496917B2 (en) 2003-09-25 2009-02-24 International Business Machines Corporation Virtual devices using a pluarlity of processors
US7516456B2 (en) 2003-09-25 2009-04-07 International Business Machines Corporation Asymmetric heterogeneous multi-threaded operating system
US7523157B2 (en) 2003-09-25 2009-04-21 International Business Machines Corporation Managing a plurality of processors as devices
US7549145B2 (en) * 2003-09-25 2009-06-16 International Business Machines Corporation Processor dedicated code handling in a multi-processor environment
US8549521B2 (en) 2003-09-25 2013-10-01 International Business Machines Corporation Virtual devices using a plurality of processors
US7653908B2 (en) 2003-09-25 2010-01-26 International Business Machines Corporation Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
US7694306B2 (en) 2003-09-25 2010-04-06 International Business Machines Corporation Balancing computational load across a plurality of processors
US7748006B2 (en) 2003-09-25 2010-06-29 International Business Machines Corporation Loading software on a plurality of processors
US7921151B2 (en) 2003-09-25 2011-04-05 International Business Machines Corporation Managing a plurality of processors as devices
US20050071526A1 (en) * 2003-09-25 2005-03-31 International Business Machines Corporation System and method for virtual devices using a plurality of processors
US8219981B2 (en) 2003-09-25 2012-07-10 International Business Machines Corporation Processor dedicated code handling in a multi-processor environment
US8204925B2 (en) * 2008-05-22 2012-06-19 National Instruments Corporation Controlling or analyzing a process by solving a system of linear equations in real-time
US20090292511A1 (en) * 2008-05-22 2009-11-26 Aljosa Vrancic Controlling or Analyzing a Process by Solving A System of Linear Equations in Real-Time
US8417755B1 (en) 2008-05-28 2013-04-09 Michael F. Zimmer Systems and methods for reducing memory traffic and power consumption in a processing environment by solving a system of linear equations
CN105045768A (en) * 2015-09-01 2015-11-11 浪潮(北京)电子信息产业有限公司 Method and system for achieving GMRES algorithm
US20180373673A1 (en) * 2017-06-23 2018-12-27 University Of Dayton Hardware accelerated linear system solver
US10713332B2 (en) * 2017-06-23 2020-07-14 University Of Dayton Hardware accelerated linear system solver
US20190101877A1 (en) * 2017-10-02 2019-04-04 University Of Dayton Reconfigurable hardware-accelerated model predictive controller
US10915075B2 (en) * 2017-10-02 2021-02-09 University Of Dayton Reconfigurable hardware-accelerated model predictive controller

Also Published As

Publication number Publication date
EP0523544A2 (en) 1993-01-20
DE69232431T2 (en) 2002-09-19
EP0523544A3 (en) 1994-08-10
DE69232431D1 (en) 2002-04-04
EP0523544B1 (en) 2002-02-27

Similar Documents

Publication Publication Date Title
US5490278A (en) Data processing method and apparatus employing parallel processing for solving systems of linear equations
JP3749022B2 (en) Parallel system with fast latency and array processing with short waiting time
CN111461311B (en) Convolutional neural network operation acceleration method and device based on many-core processor
US5590066A (en) Two-dimensional discrete cosine transformation system, two-dimensional inverse discrete cosine transformation system, and digital signal processing apparatus using same
US5717621A (en) Speedup for solution of systems of linear equations
Orcutt Implementation of permutation functions in Illiac IV-type computers
Eberlein On one-sided Jacobi methods for parallel computation
Pavel et al. Integer sorting and routing in arrays with reconfigurable optical buses
US6128639A (en) Array address and loop alignment calculations
WO2021036729A1 (en) Matrix computation method, computation device, and processor
Harper III Increased memory performance during vector accesses through the use of linear address transformations
US5900023A (en) Method and apparatus for removing power-of-two restrictions on distributed addressing
EP1076296A2 (en) Data storage for fast fourier transforms
US20230244600A1 (en) Process for Generation of Addresses in Multi-Level Data Access
Lin et al. Efficient histogramming on hypercube SIMD machines
Gertner et al. A parallel algorithm for 2-d DFT computation with no interprocessor communication
Swarztrauber et al. A comparison of optimal FFTs on torus and hypercube multicomputers
US5654910A (en) Processing method and apparatus for performing 4×4 discrete cosine transformation or inverse discrete cosine transformation
Hu Parallel eigenvalue decomposition for toeplitz and related matrices
US5999958A (en) Device for computing discrete cosine transform and inverse discrete cosine transform
Chuang et al. Efficient computation of the singular value decomposition on cube connected SIMD machine
Hou et al. Optimal processor mapping for linear-complement communication on hypercubes
US20230289287A1 (en) Programmable Multi-Level Data Access Address Generator
JPH09212483A (en) Processor and method for parallel processing of simultaneous equation that can used various matrix storing methods
CN117493748A (en) Low-bit-width data matrix vector multiplication implementation method and device of vector processor

Legal Events

Date Code Title Description
AS Assignment

Owner name: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:MOCHIZUKI, YOSHIYUKI;REEL/FRAME:006253/0507

Effective date: 19920715

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
FP Lapsed due to failure to pay maintenance fee

Effective date: 20040206

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362