Course Code | CSC320
Course Title | Parallel and Distributed Computing
Credit Hours | 3+0
Prerequisites by Course(s) and Topics | Operating Systems
Assessment Instruments with Weights (homework, quizzes, midterms, final, programming assignments, lab work, etc.) | Sessional (Quizzes, Assignments, Presentations) = 25%
Midterm Exam = 25%
Final Exam = 50%
Course Coordinator | Jawad Hassan
URL (if any) | -
Current Catalog Description | -
Textbook (or Laboratory Manual for Laboratory Courses) | • Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd Edition, 2007.
Reference Material | • Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, K. Hwang, J. Dongarra, and G. C. Fox, Elsevier, 1st Edition, 2011.
Course Goals | This course provides a combined applied and theoretical background in parallel and distributed computing to improve students' learning outcomes:
• Understand parallel and distributed computers.
• Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library (a minimal example is sketched below).
• Perform analytical modelling and performance analysis of parallel programs.
• Analyze complex problems with shared-memory programming using OpenMP.
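As an illustration only (not part of the official outline), the following minimal C sketch shows the kind of portable MPI program the second goal refers to; the build and run commands assume an MPICH-style toolchain (mpicc/mpiexec), one of the tools listed for Week 16.

    /* hello_mpi.c: each process reports its rank within the communicator. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down the runtime */
        return 0;
    }

Build and run, e.g.: mpicc hello_mpi.c -o hello_mpi && mpiexec -n 4 ./hello_mpi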
Course Learning Outcomes (CLOs):
At the end of the course the students will be able to: | Domain | BT Level*
1. Understand parallel and distributed computers. | |
2. Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library. | |
3. Perform analytical modelling and performance analysis of parallel programs. | |
4. Analyze complex problems with shared-memory programming using OpenMP. | |
* BT = Bloom's Taxonomy, C = Cognitive domain, P = Psychomotor domain, A = Affective domain
|
|
|
Topics Covered in the Course, with Number of Lectures on Each Topic (assume 15-week instruction and one-hour lectures) |
Week | Lecture | Topics Covered
Week 1 | 1 | Asynchronous Communication/Computation
Week 1 | 2 | Synchronous Communication/Computation
Week 2 | 3 | Concurrency Control
Week 2 | 4 | Fault Tolerance
Week 3 | 5 | GPU Architecture
Week 3 | 6 | GPU Programming
Week 4 | 7 | Heterogeneity
Week 4 | 8 | Interconnection Topologies
Week 5 | 9 | Load Balancing
Week 5 | 10 | Memory Consistency Model
Week 6 | 11 | Memory Hierarchies
Week 6 | 12 | Message Passing Interface (MPI)
Week 7 | 13 | MIMD/SIMD
Week 7 | 14 | Multithreaded Programming
Week 8 | 1 hour | Midterm Exam
Week 9 | 15 | Parallel Algorithms
Week 9 | 16 | Parallel Architectures
Week 10 | 17 | Parallel Input and Output
Week 10 | 18 | Performance Analysis
Week 11 | 19 | Performance Tuning
Week 11 | 20 | Power consumption and power-saving techniques in distributed systems
Week 12 | 21 | Programming Models (Data Parallel, Task Parallel)
Week 12 | 22 | Programming Models (Process Centric)
Week 13 | 23 | Programming Models (Shared/Distributed Memory)
Week 13 | 24 | Scalability and Performance Studies
Week 14 | 25 | Scheduling (Process scheduling schemes in distributed computing)
Week 14 | 26 | Storage Systems for Distributed Computing
Week 15 | 27 | Synchronization in Communication
Week 15 | 28 | Tools (CUDA, Swift, Globus, Amazon AWS, OpenStack)
Week 16 | 29 | Tools (Cilk, gdb, threads, MPICH, OpenMP, Hadoop, FUSE)
Week 16 | 30 | Revision
Week 17 | 2 hours | Final Exam
|
Laboratory Projects/Experiments Done in the Course | No laboratory is associated with this course.
Programming Assignments Done in the Course | Shared-memory programming using OpenMP (an illustrative sketch follows).
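As a hedged illustration of this kind of assignment (the actual assignment specification may differ), the C sketch below sums a series in parallel using an OpenMP reduction; build with, e.g., gcc -fopenmp sum_omp.c -o sum_omp.

    /* sum_omp.c: shared-memory parallel summation with OpenMP. */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int N = 1000000;
        double sum = 0.0;
        /* Each thread accumulates a private partial sum;
           the reduction clause combines them safely at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= N; i++) {
            sum += 1.0 / i;                     /* harmonic series term */
        }
        printf("Harmonic sum of %d terms: %f (max threads: %d)\n",
               N, sum, omp_get_max_threads());
        return 0;
    }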