Course Code | CSC320
Course Title | Parallel and Distributed Computing
Credit Hours | 3+0
Prerequisites by Course(s) and Topics | Operating Systems
Assessment Instruments with Weights (homework, quizzes, midterms, final, programming assignments, lab work, etc.) | Sessional (quizzes, assignments, presentations) = 25%; Midterm Exam = 25%; Final Exam = 50%
Course Coordinator | Mr. Muhammad Arsalan Raza
URL (if any) | NA
Current Catalog Description | NA
Textbook (or Laboratory Manual for Laboratory Courses) | Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd Edition, 2007
Reference Material | Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, K. Hwang, J. Dongarra, and G. C. Fox, Elsevier, 1st Edition
Course Goals | This course provides a combined applied and theoretical background in Parallel and Distributed Computing to improve students' learning outcomes:
• Articulate and compute core concepts of parallel and distributed models along with their applications.
• Correlate the performance metrics with parallel programs and diagnose multiprocessor parameters.
• Construct programs based on the OpenMP and MPI programming libraries, adapted to shared and distributed memory.
Course Learning Outcomes (CLOs): |
At the end of the course the students will be able to: | Domain | BT Level*
Articulate core concepts of parallel and distributed models along with their applications. | C | 3
Correlate the performance metrics with parallel programs and diagnose multiprocessor parameters. | C | 4
Construct programs based on the OpenMP and MPI programming libraries, adapted to shared and distributed memory. | P | 4
* BT = Bloom's Taxonomy, C = Cognitive domain, P = Psychomotor domain, A = Affective domain
Topics Covered in the Course, with Number of Lectures on Each Topic (assume 15-week instruction and one-hour lectures) |
Week | Lecture | Topics Covered |
Week 1 | 1 | Distributed systems and their types
| 2 | Parallel computing in distributed systems
Week 2 | 3 | Asynchronous communication/computation
| 4 | Synchronous communication/computation
Week 3 | 5 | Transactions in distributed data stores and their types
| 6 | Write-ahead logging and serializability in transactions
Week 4 | 7 | Fault tolerance and dependable systems
| 8 | Faults, errors, and failures in distributed systems
Week 5 | 9 | Load balancing and application delivery controllers
| 10 | Flynn's taxonomy and computation models
Week 6 | 11 | Modern CPU and GPU architectures
| 12 | General-purpose GPUs, special-purpose GPUs, and the IBM Cell Broadband Engine
Week 7 | 13 | AMD and NVIDIA general-purpose GPUs: the AMD 7000-series HD 7970 and the NVIDIA GTX 480
| 14 | GPU development libraries
Week 8 | 1 hour | Midterm Exam
Week 9 | 15 | Threading and multithreading
| 16 | Multithreading in C#
Week 10 | 17 |
| 18 |
Week 11 | 19 | Heterogeneity: goals and forms of heterogeneity
| 20 | Parallel Virtual Machine (PVM) and its fault-tolerance scheme
Week 12 | 21 | Interconnection topologies: 3D hypercubes, a 16-node 2D mesh with direct memory access, and the diminishing role of topology
| 22 | Multicore programming; concurrency vs. parallelism; types of parallelism; parallel systems and their performance metrics: runtime, speedup, efficiency, and cost
Week 13 | 23 | Scalability of parallel systems; Amdahl's Law (1967) and Gustafson's Law (1988)
| 24 | Iso-efficiency metric of scalability; sources of parallel overhead
Week 14 | 25 | Memory models (shared and distributed); the Message Passing Interface (MPI); sockets interface and primitives
| 26 | MPI programming interface; a C/C++ MPI program
Week 15 | 27 | MPI C# programming: the Communicator namespace, its Size and Rank properties, and the World and Self types
| 28 | MPI C# communication between processes: point-to-point and collective
Week 16 | 29 | OpenMP: UMA and NUMA models, goals, and essentials
| 30 | Parallel programming models: the data-parallel model and the task graph
Week 17 | 2 hours | Final Exam
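A worked example tied to the Week 13 topics (added here for reference; it is not part of the original outline): Amdahl's Law gives the speedup attainable on p processors when a serial fraction s of the work cannot be parallelized:

S(p) = 1 / (s + (1 - s) / p)

With s = 0.1 and p = 8, S(8) = 1 / (0.1 + 0.9/8) ≈ 4.7, and no processor count can push the speedup past 1/s = 10. Gustafson's Law instead scales the problem size with p and gives S(p) = p - s(p - 1), i.e., 7.3 for the same s and p.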
|
Laboratory Projects/Experiments Done in the Course | No laboratory is associated with this course.
Programming Assignments Done in the Course | Implementation of parallel-processing concepts in multithreaded and Message Passing Interface (MPI) based applications, covering both the shared-memory and message-passing models.
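For reference, a minimal sketch of the message-passing side of these assignments, in C as used in Week 14. It assumes a standard MPI implementation (e.g., MPICH or Open MPI); the values and process count are illustrative only.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int value = 42;                    /* point-to-point send: rank 0 -> rank 1 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Process %d of %d sent %d\n", rank, size, value);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process %d of %d received %d\n", rank, size, value);
    }

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

A program like this is typically compiled with mpicc and launched with, e.g., mpirun -np 2 ./a.out; the collective operations covered in Week 15 (e.g., MPI_Bcast, MPI_Reduce) replace such explicit send/receive pairs.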