Massively parallel is the term for using a large number of computer processors (or separate computers) to perform a set of coordinated computations simultaneously, in parallel.
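As a simple illustration of the idea, not tied to any particular machine, the following Go sketch divides a summation across several concurrent workers and then combines their partial results; the worker count and data are arbitrary placeholders.

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum divides a slice into one chunk per worker, sums each chunk
// concurrently, and then combines the partial results — the basic shape
// of a coordinated parallel computation.
func parallelSum(data []int, workers int) int {
	partial := make([]int, workers)
	chunk := (len(data) + workers - 1) / workers
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(data) {
			lo = len(data)
		}
		if hi > len(data) {
			hi = len(data)
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w, lo, hi)
	}
	wg.Wait()

	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]int, 1_000_000)
	for i := range data {
		data[i] = 1
	}
	fmt.Println(parallelSum(data, 8)) // prints 1000000
}
```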
One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system, in which the grid provides power only on a best-effort basis.[2]
Another approach is grouping many processors in close proximity to each other, as in a computer cluster. In such a centralized system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[3]
The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing many processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
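The channel-based, message-passing style of MPPA programming can be sketched only by analogy; in the following Go example, goroutines stand in for individual processors and channels stand in for the interconnect, so the stages communicate exclusively by passing work to one another rather than sharing memory.

```go
package main

import "fmt"

// stage models one processor in an MPPA-style array: it receives work on
// an input channel, applies its function, and forwards the result on an
// output channel.
func stage(in <-chan int, out chan<- int, f func(int) int) {
	for v := range in {
		out <- f(v)
	}
	close(out)
}

func main() {
	src := make(chan int)
	mid := make(chan int)
	dst := make(chan int)

	go stage(src, mid, func(v int) int { return v * v }) // first "processor": square
	go stage(mid, dst, func(v int) int { return v + 1 }) // second "processor": increment

	go func() {
		for i := 1; i <= 5; i++ {
			src <- i
		}
		close(src)
	}()

	for v := range dst {
		fmt.Println(v) // 2, 5, 10, 17, 26
	}
}
```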
The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures were the second most common supercomputer implementation, after clusters.[4]
Data warehouse appliances such as Teradata, Netezza or Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.
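The way such systems parallelize work can be illustrated, in simplified form, by a scatter-gather sketch: each node aggregates its own horizontal partition of a table, and a coordinator merges the partial results. The node layout and data below are hypothetical and do not reflect any particular product.

```go
package main

import (
	"fmt"
	"sync"
)

// node holds one horizontal partition of a table; partialSum computes a
// local aggregate, as an MPP node would for a query like
// SELECT SUM(amount) FROM sales.
type node struct{ amounts []float64 }

func (n node) partialSum() float64 {
	s := 0.0
	for _, a := range n.amounts {
		s += a
	}
	return s
}

func main() {
	// Hypothetical data, already distributed across three nodes.
	nodes := []node{
		{amounts: []float64{10, 20, 30}},
		{amounts: []float64{5, 15}},
		{amounts: []float64{100}},
	}

	results := make([]float64, len(nodes))
	var wg sync.WaitGroup
	for i, n := range nodes {
		wg.Add(1)
		go func(i int, n node) { // each node aggregates its partition in parallel
			defer wg.Done()
			results[i] = n.partialSum()
		}(i, n)
	}
	wg.Wait()

	total := 0.0
	for _, r := range results { // the coordinator merges the partial aggregates
		total += r
	}
	fmt.Println("SUM(amount) =", total) // 180
}
```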
https://en.wikipedia.org/wiki/Massively_parallel