COURSE DESCRIPTION

NAME OF INSTITUTION Lahore Garrison University
PROGRAM(S) TO BE EVALUATED Computer Science, Fall 2021
Course Description: This is an introductory course in Parallel and Distributed Computing: the study of how to build computer systems in which the state of a program is divided across more than one machine (or "node").
Course Code CSC320
Course Title Parallel and Distributed Computing
Credit Hours 3+0
Prerequisites by Course(s) and Topics Operating Systems
Assessment Instruments with Weights (homework, quizzes, midterms, final, programming assignments, lab work, etc.) SESSIONAL (Quizzes, Assignments, Presentations) = 25%
Midterm Exam = 25%
Final Exam = 50%
Course Coordinator Muhammad Ali Dildar
URL (if any) NA
Current Catalog Description NA
Textbook (or Laboratory Manual for Laboratory Courses) Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. van Steen, Prentice Hall, 2nd Edition, 2007
Reference Material Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet, K. Hwang, J. Dongarra and G. C. Fox, Elsevier, 1st Edition
Course Goals This course provides a combined applied and theoretical background in Parallel and Distributed Computing to improve students’ learning outcomes: 1. Learn about parallel and distributed computers. 2. Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library. 3. Perform analytical modelling and performance analysis of parallel programs. 4. Analyze complex problems with shared-memory programming with OpenMP.
Course Learning Outcomes (CLOs):
At the end of the course the students will be able to: (Domain / BT Level*)
1. Learn about parallel and distributed computers.
2. Write portable programs for parallel or distributed architectures using the Message-Passing Interface (MPI) library (see the MPI sketch after this list).
3. Perform analytical modelling and performance analysis of parallel programs.
4. Analyze complex problems with shared-memory programming using OpenMP.
* BT= Bloom’s Taxonomy, C=Cognitive domain, P=Psychomotor domain, A= Affective domain
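To give a concrete flavour of CLO 2, the following is a minimal MPI sketch in C (a hypothetical illustration, not part of the prescribed course material): each process reports its rank, and rank 0 gathers the sum of all ranks with MPI_Reduce. It can be built with mpicc and launched with mpirun.

/* Hypothetical illustrative example; not prescribed course material.
 * Build: mpicc hello_mpi.c -o hello_mpi
 * Run:   mpirun -np 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime         */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes     */

    printf("Hello from process %d of %d\n", rank, size);

    /* Combine every process's rank into a single sum on rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of all ranks: %d\n", sum);

    MPI_Finalize();                          /* shut down the MPI runtime     */
    return 0;
}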
Topics Covered in the Course, with Number of Lectures on Each Topic (assume 15-week instruction and one-hour lectures)
Week Lecture Topics Covered
Week 1 1 Asynchronous Communication/Computation
2 Synchronous Communication/Computation
Week 2 3 Concurrency Control
4 Fault Tolerance
Week 3 5 GPU Architecture
6 GPU Programming
Week 4 7 Heterogeneity
8 Interconnection Topologies
Week 5 9 Load Balancing
10 Memory Consistency Model
Week 6 11 Memory Hierarchies
12 Message Passing Interface (MPI)
Week 7 13 MIMD/SIMD
14 Multithreaded Programming
Week 8 1 hour Midterm Exam
Week 9 15 Midterm Exams
16 Midterm Exams
Week 10 17 Parallel Algorithms
18 Parallel Architectures
Week 11 19 Parallel Input and Output
20 Performance Analysis
Week 12 21 Performance Tuning
22 Power consumption and power-saving techniques in distributed systems
Week 13 23 Programming Models (Data Parallel, Task Parallel)
24 Programming Models (Process centric)
Week 14 25 Programming Models (Shared/Distributed memory)
26 Scalability and Performance Studies
Week 15 27 Scheduling (Process scheduling schemes in distributed computing)
28 Storage Systems for Distributed computing
Week 16 29 Synchronization in communication
30 Tools (CUDA, Swift, Globus, Amazon AWS, OpenStack)
Week 17 2 hours Final Exam
Laboratory Projects/Experiments Done in the Course No laboratory is associated with this course.
Programming Assignments Done in the Course Implementation of parallel processing concepts using multi-threaded and Message Passing Interface (MPI) based applications, covering both the shared-memory and message-passing models.
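As an illustration of the shared-memory side of these assignments, below is a minimal OpenMP sketch in C (a hypothetical example, not the prescribed assignment): a parallel for loop sums an array using a reduction clause, so each thread accumulates a private partial sum that is combined when the loop finishes.

/* Hypothetical illustrative example; not the prescribed assignment.
 * Build: gcc -fopenmp sum_omp.c -o sum_omp
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Initialise the shared array. */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Threads split the iterations; reduction(+:sum) gives each thread a
     * private partial sum and adds them together at the end of the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (computed with up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}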
Instructor Name Muhammad Ali Dildar
Instructor Signature
Date