COURSE DESCRIPTION

NAME OF INSTITUTION Lahore Garrison University
PROGRAM(S) TO BE EVALUATED Computer Science, Spring 2022
Course Description: This is an introductory course in Parallel and Distributed Computing. It studies how to build computer systems in which the state of a program is divided across more than one machine (or "node").
Course Code CSC320
Course Title Parallel and Distributed Computing
Credit Hours 3+0
Prerequisites by Course(s) and Topics Operating Systems
Assessment Instruments with Weights (homework, quizzes, midterms, final, programming assignments, lab work, etc.) Sessional (Quizzes, Assignments, Presentations) = 25%
Midterm Exam = 25%
Final Exam = 50%
Course Coordinator Mr. Muhammad Arsalan Raza
URL (if any) NA
Current Catalog Description NA
Textbook (or Laboratory Manual for Laboratory Courses) Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd Edition, 2007
Reference Material Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, K. Hwang, J. Dongarra, and G. C. Fox, Elsevier, 1st Edition.
Course Goals This course provides a combined applied and theoretical background in Parallel and Distributed Computing to improve students' learning outcomes: 1. Learn about parallel and distributed computers. 2. Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library. 3. Perform analytical modelling and performance analysis of parallel programs. 4. Analyze complex problems with shared-memory programming using OpenMP.
Course Learning Outcomes (CLOs):
At the end of the course the students will be able to: (Domain, BT Level*)
1. Learn about parallel and distributed computers. (C, 2)
2. Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library. (P, 3)
3. Perform analytical modelling and performance analysis of parallel programs. (C, 3)
4. Analyze complex problems with shared-memory programming using OpenMP. (C, 4)
* BT= Bloom’s Taxonomy, C=Cognitive domain, P=Psychomotor domain, A= Affective domain
Topics Covered in the Course, with Number of Lectures on Each Topic (assume 15-week instruction and one-hour lectures)
Week | Lecture | Topics Covered
Week 1 | 1 | Distributed systems and types
Week 1 | 2 | Parallel computing in distributed systems
Week 2 | 3 | Asynchronous communication/computation
Week 2 | 4 | Synchronous communication/computation
Week 3 | 5 | Transactions in distributed data stores and types
Week 3 | 6 | Write-ahead log and serializability in transactions
Week 4 | 7 | Fault tolerance and dependable systems
Week 4 | 8 | Faults, errors, and failures in distributed systems
Week 5 | 9 | Load balancing and Application Delivery Controllers
Week 5 | 10 | Flynn's taxonomy and computation models
Week 6 | 11 | Modern CPU and GPU architectures
Week 6 | 12 | General-purpose GPUs, special-purpose GPUs, and the IBM Cell Broadband Engine
Week 7 | 13 | AMD and NVIDIA general-purpose GPUs: AMD 7000-series HD7970 and NVIDIA GTX 480
Week 7 | 14 | GPU development libraries
Week 8 | - | Midterm Exam (1 hour)
Week 9 | 15 | Heterogeneity: goals and forms of heterogeneity
Week 9 | 16 | Parallel Virtual Machine (PVM), fault-tolerance scheme of PVM
Week 10 | 17 | Interconnection topologies, 3D hypercubes
Week 10 | 18 | 2D mesh with 16 nodes and Direct Memory Access, diminishing role of topology
Week 11 | 19 | Multicore programming, concurrency vs. parallelism, types of parallelism
Week 11 | 20 | Parallel systems; performance metrics of parallel systems: runtime, speedup, efficiency, and cost
Week 12 | 21 | Scalability of parallel systems, Amdahl's Law (1967), Gustafson's Law (1988) (formulas sketched after this table)
Week 12 | 22 | Iso-efficiency metric of scalability, sources of parallel overhead
Week 13 | 23 | Memory models, shared and distributed; the Message Passing Interface (MPI), sockets interface, primitives
Week 13 | 24 | Programming interface, C/C++ MPI program
Week 14 | 25 | MPI C# programming, Communicator namespace, properties: Size and Rank, types: World and Self
Week 14 | 26 | MPI C# communication between processes, point-to-point and collective
Week 15 | 27 | OpenMP models: UMA, NUMA; goals, essentials
Week 15 | 28 | Parallel programming models: data-parallel model, task graph
Week 16 | 29 | Parallel programming models: work pool, master-slave, producer-consumer, hybrid
Week 16 | 30 | Brainstorming scenarios
Week 17 | - | Final Exam (2 hours)
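For reference, the standard formulas behind the Week 11-12 performance and scalability topics are sketched below. The notation is the usual textbook convention assumed here rather than taken from the lecture slides: T_1 is the serial runtime, T_p the runtime on p processors, and f the parallelizable fraction of the work.

\[
  S(p) = \frac{T_1}{T_p}, \qquad
  E(p) = \frac{S(p)}{p}, \qquad
  \mathrm{Cost}(p) = p \, T_p
\]
\[
  \text{Amdahl's Law:}\; S(p) \le \frac{1}{(1-f) + f/p}
  \qquad
  \text{Gustafson's Law:}\; S(p) = (1-f) + f\,p
\]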
Laboratory Projects/Experiments Done in the Course No laboratory is associated with this course.
Programming Assignments Done in the Course Implementation of parallel-processing concepts using multi-threaded and Message Passing Interface (MPI) based applications, covering both the shared-memory and message-passing models.
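The two sketches below illustrate, for reference only, the two models named above; they are minimal, hypothetical examples written for this course description (file and variable names are invented), not code handed out in the course. The first is a message-passing example in C using MPI (compile with, e.g., mpicc and run with mpirun -np 4):

/* token.c - illustrative MPI sketch: rank 0 sends a token to every other rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank              */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes        */

    if (rank == 0) {
        /* Point-to-point sends from the master to each worker. */
        for (int dest = 1; dest < size; dest++) {
            int token = 100 + dest;
            MPI_Send(&token, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
        printf("Rank 0 of %d sent tokens to %d worker(s)\n", size, size - 1);
    } else {
        int token;
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank %d of %d received token %d\n", rank, size, token);
    }

    MPI_Finalize();                       /* shut down the MPI runtime        */
    return 0;
}

The second is a shared-memory counterpart in C using OpenMP (assuming a compiler with OpenMP support, e.g. gcc -fopenmp):

/* harmonic.c - illustrative OpenMP sketch: the loop is split across threads and
 * private partial sums are combined with a reduction clause. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread gets a private copy of sum; the copies are added together
     * when the parallel loop finishes. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++) {
        sum += 1.0 / i;                   /* partial harmonic series          */
    }

    printf("H(%d) = %f (max threads: %d)\n", n, sum, omp_get_max_threads());
    return 0;
}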
Instructor Name Mr. Muhammad Arsalan Raza
Instructor Signature
Date