
Title: Report I of Concurrent Programming
Course: Bachelor of Computer Science
Institution: Swinburne University of Technology



Description

COS40003 Concurrent Programming

CONTENTS:

Introduction
Week 1
Week 2
Week 3
Week 4
Week 5
Week 6
Acknowledgement/Resources/References

Introduction:

In Report I, I have included my learnings from Week 1 to Week 6. This report reflects what I have learnt so far in the unit "Concurrent Programming – COS40003". It covers the lecture materials and the lab tasks, briefly explained in my own words. I have also presented the areas that I have personally explored beyond the expectations of the unit, as well as an indication of the areas where I plan to learn further on my own.

WEEK 1: In the first week we came across the basic concepts of concurrent programming, i.e. the following "computing paradigms":

Concurrent computing: Concurrent computing is an approach in which multiple computations are executed within overlapping time frames. Its advantage is that one computation can make progress without waiting for the others to finish. We find this technique in operating systems that support threading and pre-emptive multitasking. Concurrency is applied at many different levels: in the CPU, in the software running on the system, and in the control flow of networks. NOTE: Concurrent computing is different from sequential computing, where calculations are performed one after the other, and from parallel computing, where calculations are performed simultaneously.

Main advantages:
- Maintains data consistency.
- Avoids deadlocks and livelocks.
- Clearer programming of individual tasks.
- Allows more than one program to run at a time.


Parallel computing: Parallel computing is a computing approach in which several processors work on an application or computation simultaneously. It helps in performing large computations by dividing the workload between multiple processors, all of which work through the computation at the same time. The objective of parallel computing is to increase the available computing power for faster application processing or task resolution. Most supercomputers employ parallel computing principles, and parallel processing is generally used in operational environments that require massive computation or processing power. Parallel computing is also called parallel processing.

Main advantages:
- Improves runtime of individual programs.
- On multi-core systems, improves system performance by executing in parallel.
- Partitions problems.
- Minimises dependencies among the computation units.

Distributed computing: Distributed computing is an approach in which multiple computer systems work on a single problem: the problem is broken into several parts, and each part is computed by a different computer. The objective of distributed computing is to improve performance and efficiency while connecting users and IT resources.

Main advantages:
- Fault tolerance
- Cost effectiveness
- Transparency
- Reliability

Cluster computing: Cluster computing is one logical unit consisting of many computers linked together through a LAN. The interlinked computers act as a single powerful machine, which provides a number of advantages.

Main advantages:
- Faster processing speed
- Large storage capacity
- Better data integrity
- More reliable and flexible

Disadvantages:
- Expensive

Grid computing: Grid computing is an approach in which multiple computers on a network work on a task together, behaving like a supercomputer. Basically, it combines computing resources from various domains to reach a common objective: a large number of computers are connected to solve a complex problem.

Main advantages:
- Solves problems that are too big for a single machine while keeping the flexibility to process numerous smaller problems.

Used in: ATM banking, back-office infrastructures, and scientific or marketing research.

Cloud computing: Cloud computing is an approach of using services such as servers, storage or software over the internet, which we refer to as "the cloud".

WEEK 2: In week 2, I read, understood and summarized "process", which I have described as follows:

PROCESS: A program which is running on our computer is known as a process; in other words, it is a program in execution. It can be considered the basic unit of our program as scheduled by the system. Simply put, if we write a computer program in a text file and execute it, the program is loaded into memory and becomes a process. During its execution, a process passes through the following states:

1. Start/Initial/Created: This is the state when a process is first started/created.

2. Ready: The process is waiting to be assigned to a processor so that it can run. A process may come into this state after the Start state, or while running, when it is interrupted by the scheduler so that the CPU can be assigned to some other process.

3. Running: When the process has been assigned to a processor, the process state is set to running and the processor starts executing its instructions.

4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input or waiting for a file to become available.

5. Terminated/Final/Exit: When the process finishes its execution, or has been exited by the operating system, it is moved to the terminated state, where it waits to be cleaned from main memory. (In UNIX-based systems, this is called the zombie state.)

A process can initiate a subprocess, which is considered a child process. A child process is a copy of the parent process and shares some of its features and resources, but cannot exist if the parent process is terminated. The system call fork() is used to create new processes; it takes no arguments and returns a process ID. The system call wait() blocks the calling process until one of its child processes exits or it receives a signal. The exec() family of system calls replaces the current process image with a new process image.
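The fork()/wait() pattern above can be sketched in Python, whose os module exposes the same POSIX calls (a minimal, POSIX-only illustration; the function name run_in_child is mine, not part of the unit material):

```python
import os

def run_in_child(code):
    """Fork a child that exits immediately with `code`; the parent waits for it."""
    pid = os.fork()                  # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        os._exit(code)               # child terminates with the given exit status
    _, status = os.waitpid(pid, 0)   # parent blocks until the child exits
    return os.WEXITSTATUS(status)    # extract the child's exit code from the status word
```

Calling run_in_child(7) forks a child, blocks in waitpid() until the child exits, and returns 7 to the parent, mirroring the fork-then-wait structure described above.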

Process Scheduling: This is an activity of the process manager that removes the running process from the CPU and selects another process to run, based on a specific strategy.

WEEK 3: In week 3, I read, understood, and summarized the following scheduling approaches:

CPU scheduling: CPU scheduling is an approach that allows one process to make use of the CPU while another process's execution is on hold due to the unavailability of a resource such as I/O. The objective of CPU scheduling is to make the system efficient, fast, flexible and fair. There are six popular process scheduling algorithms that we have discussed in this unit so far.

1. First come, first served (FIFO) scheduling: First come, first served, or FIFO (i.e. first in, first out), is the easiest process scheduling algorithm. It follows the simple approach in which the process that arrives first is dealt with first.

Advantages:
- Easy
- Useful
- Simple
- First come, first served

Disadvantages:
- This scheduling method is non-preemptive, thus a process will run until it finishes.
- Short processes at the back of the queue have to wait for the long process at the front to finish.
- Average wait time is high.
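To see why the average wait time is high, here is a small Python sketch (the function name and sample burst times are mine): under FCFS, each process waits for the sum of all the bursts that arrived before it.

```python
def fifo_wait_times(burst_times):
    """Waiting time of each process under FCFS, in arrival order:
    each process waits for the total burst time of everyone ahead of it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this process waits for everything run so far
        elapsed += burst        # then occupies the CPU for its whole burst
    return waits
```

With bursts [24, 3, 3], the waits come out as [0, 24, 27], an average of 17 time units; the two short jobs spend most of their time stuck behind the long one at the front.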

2. Shortest job first (SJF): Shortest job first (SJF), or shortest job next, is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJF is a non-preemptive algorithm.


Algorithm:
- Sort all the processes in increasing order of burst time.
- Then follow the FIFO approach.

Advantages:
- Best approach for minimizing the waiting time.
- Easy to implement.
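The sort-then-FIFO algorithm above can be sketched directly (function name and sample bursts are mine): sort the processes by burst time, then accumulate waits exactly as FCFS would.

```python
def sjf_wait_times(burst_times):
    """Waiting time of each process (indexed by arrival order) under
    non-preemptive SJF: run jobs in increasing burst-time order."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, elapsed = [0] * len(burst_times), 0
    for i in order:             # FIFO over the sorted order
        waits[i] = elapsed
        elapsed += burst_times[i]
    return waits
```

For bursts [6, 8, 7, 3] the shortest job (3) runs first and waits 0, and the longest (8) runs last and waits 16, giving an average wait of 7 versus 10.25 under plain FCFS.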

3. Preemptive shortest job first (PSJF): Preemptive shortest job first is a scheduling policy that selects the process with the smallest amount of time remaining until completion to execute next. (The preemptive version of SJF is also known as Shortest Remaining Time First, SRTF.)

Advantages:
- Short processes are handled quickly.

Disadvantages:
- Potential for process starvation.
- Impractical in interactive systems, where the required CPU time is not known in advance.

4. Round robin scheduling: Round robin is a preemptive process scheduling algorithm in which every process gets a fixed time slice for execution (known as the quantum). Once a process has executed for the given quantum, it is preempted and another process executes for its quantum. The efficiency of this scheduling depends on:
- The number of jobs
- The size of the quantum (a smaller quantum improves responsiveness but increases context-switch overhead)
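The quantum-by-quantum rotation can be simulated in a few lines of Python (function name and example values are mine; context-switch cost is ignored for simplicity):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; return each process's finish time."""
    remaining = list(burst_times)
    finish = [0] * len(burst_times)
    ready = deque(range(len(burst_times)))   # ready queue of process indices
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])     # run one quantum, or less if nearly done
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                  # preempted: back to the end of the queue
        else:
            finish[i] = t                    # done: record completion time
    return finish
```

With bursts [3, 5] and quantum 2, the CPU alternates P0, P1, P0, P1, P1, so P0 finishes at time 5 and P1 at time 8.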

5. Lottery scheduling: This is a scheduling approach (it can be preemptive or non-preemptive) in which processes are scheduled in a randomized manner. Every process holds some tickets; the scheduler picks a random ticket, and the process holding that ticket is the winner. It executes for some time, after which another ticket is picked by the scheduler. The lottery tickets represent each process's share of the CPU: a process holding more tickets has a higher chance of being chosen for execution.
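The ticket draw itself is simple to sketch in Python (names are mine; a real scheduler would redraw on every quantum and the `rng` parameter is only there to make the draw testable):

```python
import random

def lottery_pick(tickets, rng=random.random):
    """tickets: dict mapping process name -> number of tickets held.
    Draw one winning ticket uniformly at random and return its holder."""
    total = sum(tickets.values())
    winner = int(rng() * total)    # winning ticket number in [0, total)
    for proc, n in tickets.items():
        if winner < n:             # winning ticket falls in this process's block
            return proc
        winner -= n                # otherwise skip past this process's tickets
```

If B holds 3 of 4 tickets, B wins roughly 75% of the draws over time, which is exactly the proportional CPU share the tickets are meant to express.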

6. Multi-level feedback queue: This scheduling algorithm makes use of other existing algorithms to group and schedule jobs with similar features. Each queue has its own scheduling algorithm and its own assigned priority.

WEEK 4: In week 4, I read, understood and summarized "thread" and related material, which I have described as follows:

What is a thread? The smallest unit of processing that can be performed in an operating system is known as a thread. A thread can also be regarded as the smallest sequence of programmed instructions that can be managed independently by a scheduler. Generally, a thread exists within a process, i.e. one process may contain multiple threads. Just as multitasking allows processes to run concurrently, multithreading allows the threads within a process to run concurrently (seemingly at the same time).

For example, we can download a video at the same time as we play it. Various programming languages allow developers to work with threads, such as Java, .NET languages and Python.

Why do we need threads? Threads do multiple things at the same time and hence are used to make a Java application faster. Multithreading is one way to exploit the huge computing power of the CPU in a Java application. Beyond this, threads allow us to do multiple tasks simultaneously.

For example, in a GUI application we may want to draw the screen while also capturing key presses and downloading something from the network. If we do all these tasks in one thread, they execute sequentially: first we draw on the screen, then we handle the input, and finally we upload our high score to the network. This is a problem for our application because the GUI appears frozen while we are doing another task. Multithreading in Java lets us execute each of these tasks on its own thread.

Thread Pool: A thread pool is a group of pre-instantiated threads which stand ready to be given work. Pools are preferred because otherwise we would have to instantiate a new thread for each task, which is wasteful when there are many short tasks to be done rather than a small number of long ones. A pool prevents threads from being created and destroyed over and over.
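Python's standard library ships a ready-made thread pool, which illustrates the idea (a Python sketch rather than Java; the worker function `square` and the sizes are mine):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# four worker threads are created once up front and reused for all five tasks,
# instead of spawning a fresh thread per task
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))   # map preserves input order
```

Submitting five tasks to a four-thread pool queues the surplus task until a worker frees up, which is exactly the "stand ready to be given work" behaviour described above.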

Multithreading in Java: Every Java application contains at least one thread, referred to as the main thread, which executes our main method; the JVM uses more threads internally. Here are some common reasons and scenarios for using multiple threads in Java: 1) To make a task run in parallel with another task, e.g. drawing and event handling. 2) To take full advantage of CPU power, improving the throughput of the application by utilizing the full CPU. 3) To reduce response time, by breaking a big problem into smaller chunks and processing the chunks quickly through multiple threads.

For example, a multithreaded server processes multiple requests at the same point in time.

4) To serve multiple clients at the same time.

Issues with threading:
1. Deadlock and livelock
2. Memory inconsistency errors
3. Race conditions
4. Starvation
5. It is very difficult to test a Java program which involves multiple threads.

Creating threads:
- Implement the run() method; it contains the thread's work.
- The thread ends when run() finishes.
- Use start() to get the thread running.
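The run()/start() model above is Java's, but Python's threading.Thread follows the same shape, so the three bullet points can be sketched there too (the class name Summer and the loop are mine):

```python
import threading

class Summer(threading.Thread):
    """A thread whose body is defined by overriding run()."""
    def __init__(self):
        super().__init__()
        self.total = 0

    def run(self):                 # the thread's work; the thread ends when run() returns
        for i in range(5):
            self.total += i

t = Summer()
t.start()   # start() launches the new thread, which invokes run()
t.join()    # block until run() has finished
```

Note that the program calls start(), never run() directly: calling run() would just execute the loop in the current thread with no concurrency at all.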

WEEK 5: In week 5, I read, understood and summarized "lock" and related material, which I have described as follows:

Key concurrency terms. Critical section: Concurrent accesses to shared resources can lead to unpredictable behaviour or results; hence, the part of the program where the shared resources are protected is referred to as the critical section (or critical region). In other words, a critical section is a group of instructions, or a part of a program's code, which must be executed atomically.

Race condition: This happens when more than one thread tries to access shared data and edit it at the same time. Because the thread scheduling algorithm can swap between threads at any time, we cannot be sure of the order in which the threads will access the shared data, which can result in a wrong outcome. For example:

for (int i = 0; i < 1000; i++)
    counter++;   // read counter, add 1, write back: these steps can interleave across threads
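A runnable illustration of the fix, sketched in Python (the names are mine): four threads increment a shared counter inside a lock-protected critical section, which makes the final value deterministic; deleting the `with lock:` line reintroduces the race and the total can come up short.

```python
import threading

def add_many(counter, lock, n=100_000):
    """Increment counter['value'] n times, taking the lock around each
    read-modify-write so it behaves as one atomic critical section."""
    for _ in range(n):
        with lock:                  # enter the critical section
            counter['value'] += 1   # safe: no other thread can interleave here

lock = threading.Lock()
counter = {'value': 0}
threads = [threading.Thread(target=add_many, args=(counter, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the lock held, the final value is always 4 * 100_000
```

The lock serializes only the increment itself, so the threads still run concurrently between increments; this is the critical-section discipline described above.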
