Wednesday, February 8, 2012

Parallel Computer Memory Architectures [Lab]

As all of my class knows, software has traditionally been written for sequential computation: it runs on a single computer with one CPU, and instructions are executed one by one. In parallel computing, you can write code for simultaneous computation, so it can run on multiple CPUs and instructions can be executed at the same time on different CPUs.

One topic that I think is important is knowing the different memory architectures in parallel computing, so I made a concept map to see the differences between memory architectures more clearly.

Shared Memory
  • Programming is friendlier because we have a global address space.
  • Data sharing is fast and uniform because memory is close to the CPUs.

  • Scalability between memory and CPUs is limited: adding more CPUs increases traffic on the shared memory-CPU path, so the system cannot easily be enlarged.
  • We are responsible for synchronizing access to memory correctly.
  • It is more expensive to design and produce.
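These two points (global address space, and our responsibility for synchronization) can be sketched with Python threads, which share the process's memory. The shared `counter`, the `add_many` helper, and the count of 4 threads are all made up for the example:

```python
import threading

counter = 0              # lives in one global address space, visible to all threads
lock = threading.Lock()  # the programmer is responsible for synchronization

def add_many(times):
    global counter
    for _ in range(times):
        with lock:       # without the lock, concurrent updates could be lost
            counter += 1

threads = [threading.Thread(target=add_many, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # every thread saw and updated the same variable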
Distributed Memory
  • Scalability between memory and CPUs is better: we can increase the number of processors and the size of memory proportionately.
  • Each processor accesses its own memory rapidly, without interference and without overhead.
  • It is less expensive than shared memory because it uses commodity processors and networking.

  • We are responsible for many details of the data communication between processors.
  • It is difficult to map existing data structures that are based on a global memory.
  • Memory access times are not uniform.

1 comment:

  1. Nice. Just remember that "I" is with capital i. 7 points for the lab, week #2.