Synchronization
Synchronization is a key concept in computer science that refers to the coordination of multiple processes or threads to ensure their correct and efficient operation. In a concurrent system, where multiple processes or threads run simultaneously, their activities must be synchronized to avoid conflicts and ensure correctness. One of the most common problems in synchronization is the n-process critical-section problem, in which n processes share a critical section of code and at most one process may execute it at any given time. This problem can be solved using various synchronization techniques, such as semaphores.
Semaphores
Semaphores are a synchronization mechanism that provides a way to control access to shared resources. A semaphore is a simple integer variable used to signal between processes. Semaphores support two main operations: wait and signal. The wait operation decrements the semaphore value, blocking the calling process if the value is already zero; the signal operation increments the value and wakes one waiting process, if any.
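To make the semantics of wait and signal concrete, here is a minimal sketch of a counting semaphore built from a POSIX mutex and condition variable. The type and function names (semaphore_t, sem_wait_simple, sem_signal_simple) are illustrative, not a standard API; real operating systems implement semaphores in the kernel with atomic operations and a queue of blocked processes.

```c
#include <pthread.h>

/* A counting semaphore sketched with a mutex and a condition variable. */
typedef struct {
    int value;                    /* current semaphore value */
    pthread_mutex_t lock;         /* protects value */
    pthread_cond_t  nonzero;      /* signaled when value becomes positive */
} semaphore_t;

void sem_init_simple(semaphore_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

/* wait (P): block while the value is zero, then decrement it. */
void sem_wait_simple(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* signal (V): increment the value and wake one waiting thread, if any. */
void sem_signal_simple(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```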
Semaphores can be used to deal with the n-process critical-section problem in the following way:
Initialization
Initialize the semaphore to 1 to ensure that only one process can access the critical section at a time.
Entry
Before entering the critical section, a process needs to acquire the semaphore by performing a wait operation. If the semaphore value is greater than zero, the process can decrement the value and enter the critical section. If the semaphore value is zero, the process must wait until another process signals the semaphore.
Exit
When a process exits the critical section, it releases the semaphore by performing a signal operation. This increments the semaphore value, allowing another waiting process to enter the critical section.
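The three steps above map directly onto the POSIX semaphore API (sem_init, sem_wait, sem_post). The outline below is a sketch of the entry/exit protocol; the function names setup and worker and the comment standing in for the critical section are illustrative.

```c
#include <semaphore.h>

static sem_t s;                 /* semaphore guarding the critical section */

void setup(void)
{
    sem_init(&s, 0, 1);         /* Initialization: value 1 => one process at a time */
}

void worker(void)
{
    sem_wait(&s);               /* Entry: decrements; blocks while the value is 0 */
    /* ... critical section: use the shared resource ... */
    sem_post(&s);               /* Exit: increments; wakes one waiting process */
}
```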
Let's consider an example of how semaphores can be used to solve the n-process critical-section problem.
Example: Suppose we have three processes, P1, P2, and P3, that need to access a shared resource (e.g., a printer) protected by a critical section of code. Only one process may access the critical section at a time to avoid conflicts and ensure correctness.
We can use semaphores to implement the following solution:
Initialization: Initialize the semaphore to 1.
Process P1:
a. Wait for the semaphore.
b. Enter the critical section and access the shared resource.
c. Release the semaphore.
Process P2:
a. Wait for the semaphore.
b. Enter the critical section and access the shared resource.
c. Release the semaphore.
Process P3:
a. Wait for the semaphore.
b. Enter the critical section and access the shared resource.
c. Release the semaphore.
This solution ensures that only one process can access the critical section at a time. When a process enters the critical section, it acquires the semaphore by performing a wait operation. If the semaphore value is zero (i.e., another process is already in the critical section), the process waits until the semaphore is signaled by the other process. When a process exits the critical section, it releases the semaphore by performing a signal operation, allowing another waiting process to enter the critical section.
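The walkthrough above can be expressed directly in code. The sketch below uses POSIX threads and an unnamed semaphore, with three threads standing in for the processes P1, P2, and P3 for simplicity (across separate processes a named or shared-memory semaphore would be used instead); the name printer_sem and the sleep that simulates using the printer are illustrative.

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t printer_sem;                       /* guards the shared printer */

void *process(void *arg)
{
    const char *name = arg;

    sem_wait(&printer_sem);              /* Entry: wait for the semaphore */
    printf("%s: entering critical section\n", name);
    sleep(1);                            /* simulate using the printer */
    printf("%s: leaving critical section\n", name);
    sem_post(&printer_sem);              /* Exit: signal the semaphore */

    return NULL;
}

int main(void)
{
    pthread_t t[3];
    const char *names[3] = {"P1", "P2", "P3"};

    sem_init(&printer_sem, 0, 1);        /* Initialization: value 1 */

    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, process, (void *)names[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);

    sem_destroy(&printer_sem);
    return 0;
}
```

Built with cc -pthread, each thread's "entering"/"leaving" pair prints without another thread's messages in between, since only one thread holds the semaphore at a time.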
OS-2022
Q-1.
(a) What are the functions of operating systems? Write a note on multiprogrammed operating systems.
(b) Distinguish between the client-server and peer-to-peer models of distributed systems.
Q-2.
(b) What is meant by Storage Structure? Discuss Storage Hierarchy.
Q-3.
(a) What are the criteria for evaluating CPU scheduling algorithms? Why do we need them?
Q-4.
(a) What is synchronization? Explain how semaphores can be used to deal with the n-process critical-section problem.
Q-5.
Q-6.
(b) What are the disadvantages of single contiguous memory allocation? Explain.
Q-7.
(a) Briefly explain single-level, two-level, and tree-structured directories.
(b) What is disk scheduling? Explain C-SCAN scheduling with an example.
Q-8.
(b) UNIX file system.