Sunday, March 25, 2012

System Software And Operating System Important Descriptive Questions 1



DESCRIPTIVES



Q.1. Discuss in detail table management techniques.                                       (7)

Ans:
An assembler uses the following tables:
OPTAB (Operation Code Table): contains the mnemonic operation codes and their machine language equivalents.
SYMTAB (Symbol Table): maintains the symbolic labels and operands along with their corresponding machine addresses.
LITTAB (Literal Table): a table of the literals used in the program.
For efficiency reasons SYMTAB must remain in main memory throughout passes I and II of the assembler. LITTAB is not accessed as frequently as SYMTAB; however, it may be accessed sufficiently frequently to justify its presence in memory. If memory is at a premium, only a part of LITTAB can be kept in memory. OPTAB should be in memory during pass I.
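As a rough illustration of how such a table can be managed, the following C sketch shows a SYMTAB that is built during pass I and consulted during pass II. The entry layout, array size and routine names are assumptions made for this example and are not part of the original answer.

#include <stdio.h>
#include <string.h>

/* One SYMTAB entry: a symbolic label and the address assigned to it. */
struct symtab_entry {
    char label[9];        /* symbolic name, e.g. "LOOP"          */
    unsigned int address; /* machine address assigned in pass I  */
};

static struct symtab_entry symtab[100];
static int symtab_count = 0;

/* Pass I: enter a label and its address into the table. */
void symtab_add(const char *label, unsigned int address) {
    strncpy(symtab[symtab_count].label, label, 8);
    symtab[symtab_count].label[8] = '\0';
    symtab[symtab_count].address = address;
    symtab_count++;
}

/* Pass II: look up the address of a label used as an operand.
   Returns -1 if the symbol is undefined. */
int symtab_lookup(const char *label) {
    for (int i = 0; i < symtab_count; i++)
        if (strcmp(symtab[i].label, label) == 0)
            return (int)symtab[i].address;
    return -1;
}

int main(void) {
    symtab_add("LOOP", 0x103C);                          /* pass I  */
    printf("%04X\n", (unsigned)symtab_lookup("LOOP"));   /* pass II */
    return 0;
}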

Q.2 Define the following:
(i) Formal language grammars. (ii) Terminal symbols.
(iii) Alphabet and string.                                                                     (9)

Ans:
(i) A formal language grammar is a set of formation rules that describe which strings formed from the alphabet of a formal language are syntactically valid within the language. A grammar only addresses the location and manipulation of the strings of the language. It does not describe anything else about a language, such as its semantics.
As proposed by Noam Chomsky, a grammar G consists of the following components:
     A finite set N of non-terminal symbols.
     A finite set Σ of terminal symbols that is disjoint from N.
     A finite set P of production rules, each rule of the form
          (Σ ∪ N)* N (Σ ∪ N)*  →  (Σ ∪ N)*
where * is the Kleene star operator and ∪ denotes set union. That is, each production rule maps from one string of symbols to another, where the first string contains at least one non-terminal symbol.
     A distinguished non-terminal symbol from the set N that is the start symbol.
(ii) Terminal symbols are literal strings forming the input of a formal grammar and cannot be broken down into smaller units without losing their literal meaning. In simple words, terminal symbols cannot be changed using the rules of the grammar; that is, they are the end of the line, or terminal. For example, if the grammar rules are that x can become xa and x can become ax, then a is a terminal symbol because it cannot become something else. These are the symbols which can appear as they are in the program.
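For illustration (a toy example assumed here, not part of the original answer): take N = {S}, Σ = {a, b}, start symbol S and productions S → aSb and S → ab. Every symbol on the right-hand sides other than S is terminal, and the grammar derives exactly the strings of the form a^n b^n for n ≥ 1, so ab and aabb are valid while ba is not.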
(iii) A finite set of symbols is called an alphabet. An alphabet is often denoted by Σ (sigma), yet it can be given any name.
B = {0, 1} says B is an alphabet of two symbols, 0 and 1.
C = {a, b, c} says C is an alphabet of three symbols, a, b and c.





Sometimes space and comma are in an alphabet, while other times they are meta-symbols used for descriptions. A language is defined over an alphabet; for example, the binary language is defined over the alphabet B.
A finite sequence of symbols from an alphabet is called a string or word.
01110 and 111 are strings from the alphabet B above; aaabccc and b are strings from the alphabet C above.
A null string is a string with no symbols; it is usually denoted by ε (epsilon) and has zero length.
Q.3. What is parsing? Write down the drawbacks of top-down parsing with backtracking.  (7)
Ans:
Parsing is the process of analyzing a text, made of a sequence of tokens, to determine its grammatical structure with respect to a given formal grammar. Parsing is also known as syntactic analysis, and a parser is used for analyzing a text. The task of the parser is essentially to determine if and how the input can be derived from the start symbol of the grammar. The input is valid with respect to a given formal grammar if it can be derived from the start symbol of the grammar.
The following are the drawbacks of top-down parsing with backtracking:
(i) Semantic actions cannot be performed while making a prediction. The actions must be delayed until the prediction is known to be a part of a successful parse.
(ii) Precise error reporting is not possible; a mismatch merely triggers backtracking. A source string is known to be erroneous only after all predictions have failed. A small sketch illustrating this behaviour is given below.
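The following C sketch (an assumed illustration, not part of the original answer) shows a backtracking top-down parser for the toy grammar S -> aSb | ab. A failed alternative simply resets the input position and tries the next one, so no precise error can be reported at the point of mismatch and no semantic action can be committed until an alternative is known to succeed.

#include <stdio.h>

static const char *input;   /* the token string being parsed */

/* Try to derive S starting at position pos; return the position just
   past the derived substring, or -1 on failure. */
static int parse_S(int pos) {
    int p;
    /* Alternative 1: S -> a S b */
    if (input[pos] == 'a' && (p = parse_S(pos + 1)) != -1 && input[p] == 'b')
        return p + 1;
    /* A mismatch merely triggers backtracking: restart at pos and
       try alternative 2: S -> a b */
    if (input[pos] == 'a' && input[pos + 1] == 'b')
        return pos + 2;
    return -1;   /* only now is this position known to be erroneous */
}

int main(void) {
    const char *tests[] = { "aabb", "ab", "aab" };
    for (int i = 0; i < 3; i++) {
        input = tests[i];
        int end = parse_S(0);
        printf("%-5s : %s\n", tests[i],
               (end != -1 && input[end] == '\0') ? "accepted" : "rejected");
    }
    return 0;
}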

Q.4. Give the schematic of interpretation of an HLL program and execution of a machine language program by the CPU.                                                              (8)



Ans:
[Figure: (a) interpretation of an HLL program: the interpreter, with its own PC, works on the source program and data held in memory and reports errors; (b) execution of a machine language program: the CPU, using its PC, executes the machine language program and data held in memory.]

The CPU uses a program counter (PC) to note the address of the next instruction to be executed. This instruction is subjected to the instruction execution cycle, consisting of the following steps:
1. Fetch the instruction.
2. Decode the instruction to determine the operation to be performed, and also its operands.
3. Execute the instruction.
At the end of the cycle, the instruction address in the PC is updated and the cycle is repeated for the next instruction. Program interpretation can proceed in a similar manner. The PC can indicate which statement of the source program is to be interpreted next. This statement is subjected to the interpretation cycle, which consists of the following steps:
1. Fetch the statement.
2. Analyse the statement and determine its meaning, viz. the computation to be performed and its operands.
3. Execute the meaning of the statement.
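A minimal C sketch of this interpretation cycle is given below. The statement set (ADD/SUB/PRINT/HALT) and the single accumulator are assumptions made for the example rather than any real source language.

#include <stdio.h>

enum op { ADD, SUB, PRINT, HALT };
struct stmt { enum op op; int operand; };

int main(void) {
    struct stmt program[] = {                 /* the stored source program */
        { ADD, 5 }, { ADD, 7 }, { SUB, 2 }, { PRINT, 0 }, { HALT, 0 }
    };
    int acc = 0;     /* a single accumulator acting as the data area   */
    int pc  = 0;     /* PC: which statement is to be interpreted next  */

    for (;;) {
        struct stmt s = program[pc++];        /* 1. fetch the statement  */
        switch (s.op) {                       /* 2. analyse its meaning  */
        case ADD:   acc += s.operand; break;  /* 3. execute the meaning  */
        case SUB:   acc -= s.operand; break;
        case PRINT: printf("%d\n", acc); break;
        case HALT:  return 0;
        }
    }
}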
Q.5. Give the difference between multiprogramming and multiprocessing.                  (5)
Ans:
A multiprocessing system is a computer hardware configuration that includes more than one independent processing unit. The term multiprocessing is generally used to refer to large computer hardware complexes found in major scientific or commercial applications. A multiprocessor system is characterized by increased system throughput and application speedup through parallel processing. The main feature of this architecture is to provide high speed at low cost in comparison to a uniprocessor.
A multiprogramming operating system is a system that allows more than one active user program (or part of a user program) to be stored in main memory simultaneously. Multiprogrammed operating systems are fairly sophisticated. All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on mass storage awaiting allocation of main memory. If several jobs are ready to be brought into memory, and there is not enough room for all of them, then the system must choose among them. A time-sharing system is a multiprogramming system.
Q.6. Write down different system calls for performing different kinds of tasks.          (4)
Ans:
A system call is a request made by any program to the operating system for performing tasks -- picked from a predefined set -- which the said program does not have the required permissions to execute in its own flow of execution. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user level process; e.g. I/O performed with a device present on the system, or any form of communication with other processes, requires the use of system calls. (A small sketch of file-management calls is given after the list below.)
The main types of system calls are as follows:
Process Control: These types of system calls are used to control the processes. Some examples are end, abort, load, execute, create process, terminate process etc.
File Management: These types of system calls are used to manage files. Some examples are Create file, delete file, open, close, read, write etc.
Device Management: These types of system calls are used to manage devices. Some examples are Request device, release device, read, write, get device attributes etc.
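As a rough illustration of file-management calls on a POSIX system (an assumed environment, not specified in the original answer), the open, write and close services below are reached through thin library wrappers over the corresponding system calls; the file name is arbitrary.

#include <fcntl.h>      /* open          */
#include <unistd.h>     /* write, close  */
#include <stdio.h>      /* perror        */

int main(void) {
    /* ask the OS to create/open a file: a privileged operation that
       must go through a system call */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");             /* the kernel refused the request */
        return 1;
    }
    const char msg[] = "written via system calls\n";
    if (write(fd, msg, sizeof msg - 1) == -1)
        perror("write");
    close(fd);                      /* release the file descriptor */
    return 0;
}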
Q.7. Differentiate between pre-emptive and non-pre-emptive scheduling.                 (4)
Ans:
In a pre-emptive scheduling approach, the CPU can be taken away from a process if there is a need, while in a non-pre-emptive approach, once a process has been given the CPU, the CPU cannot be taken away from that process unless the process completes or leaves the CPU to perform Input/Output.
Pre-emptive scheduling is more useful for high priority processes which require an immediate response, for example in a real time system. In non-pre-emptive systems, short jobs are made to wait by longer jobs, but the treatment of all processes is fairer.

Q.8. CPU burst time indicates the time for which the process needs the CPU. The following are the set of processes with their respective CPU burst times (in milliseconds).
Processes                                        CPU-burst time
P1                                     10
P2                                       5
P3                                       5
Calculate the average waiting time if the processes arrive in the following order:
(i) P1, P2 & P3                                (ii) P2, P3 & P1                                        (6)

Ans:
Considering FCFS scheduling
Process                                        Burst Time
P1                                                       10
P2                                                        5
P3                                                         5
(i) Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

|---------- P1 ----------|---- P2 ----|---- P3 ----|
0                        10           15           20

Waiting time for P1 = 0; P2 = 10; P3 = 15
Average waiting time: (0 + 10 + 15)/3 = 8.33 units of time

(ii) Suppose that the processes arrive in the order: P2, P3, P1.
The Gantt chart for the schedule is:

|---- P2 ----|---- P3 ----|---------- P1 ----------|
0            5            10                       20

Waiting time for P1 = 10; P2 = 0; P3 = 5
Average waiting time: (10 + 0 + 5)/3 = 5 units of time.
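The same figures can be checked with a short C sketch (assumed here for illustration): under FCFS each process waits for the combined burst time of all processes that arrived before it.

#include <stdio.h>

/* Average waiting time under FCFS for bursts given in arrival order. */
static double avg_wait(const int burst[], int n) {
    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;    /* waiting time of the i-th arrival     */
        wait += burst[i];      /* later arrivals also wait behind it   */
    }
    return (double)total_wait / n;
}

int main(void) {
    int order1[] = { 10, 5, 5 };   /* P1, P2, P3 */
    int order2[] = { 5, 5, 10 };   /* P2, P3, P1 */
    printf("P1, P2, P3 : %.2f\n", avg_wait(order1, 3));   /* prints 8.33 */
    printf("P2, P3, P1 : %.2f\n", avg_wait(order2, 3));   /* prints 5.00 */
    return 0;
}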
Q.9. What is a semaphore? Explain busy waiting semaphores.                                     (6)
Ans:
A semaphore is a protected variable or abstract data type which constitutes the classic method for restricting access to shared resources, such as shared memory, in a parallel programming environment.
Weak, busy-wait semaphores:
The simplest way to implement semaphores.
Useful when critical sections last for a short time, or when we have lots of CPUs.
S is initialized to a positive value (to allow someone in at the beginning).
S is an integer variable that, apart from initialization, can only be accessed through two atomic and mutually exclusive operations:
wait(s):
    while (s.value <= 0)
        ;            /* busy wait until the semaphore becomes available */
    s.value--;
signal(s):
    s.value++;

All of this happens atomically; wait and signal wrap the critical section as its pre- and post-protocols.
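For illustration, a C11 version of the same idea (assuming a compiler that provides <stdatomic.h>; the type and function names are made up for this sketch) makes the required atomicity explicit by spinning on an atomic compare-and-swap:

#include <stdatomic.h>

typedef struct { atomic_int value; } semaphore;

void sem_wait_busy(semaphore *s) {
    for (;;) {
        int v = atomic_load(&s->value);
        /* spin while the semaphore is unavailable; otherwise try to
           decrement it atomically */
        if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;
    }
}

void sem_signal(semaphore *s) {
    atomic_fetch_add(&s->value, 1);    /* increment atomically */
}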
Q.10. What are the four necessary conditions of deadlock prevention?                       (4)
Ans:
Deadlock prevention works by attacking the four necessary conditions for deadlock:
1. Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
2. The "hold and wait" condition may be removed by requiring processes to request all the resources they will need before starting up. Another way is to require processes to release all their resources before requesting all the resources they will need.
3. The "no preemption" (lockout) condition may also be difficult or impossible to avoid, as a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, the inability to enforce preemption may interfere with a priority algorithm. Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
4. The circular wait condition: algorithms that avoid circular waits include "disable interrupts during critical sections", "use a hierarchy to determine a partial ordering of resources" and Dijkstra's solution. A sketch of the resource-ordering approach is given below.
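As a rough illustration of the hierarchy/partial ordering idea (the lock names and ranks are assumptions made for this example), every thread that needs both resources acquires them in the same fixed order, so a cycle in the wait-for graph can never form:

#include <pthread.h>

pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;   /* rank 1 */
pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;   /* rank 2 */

void use_both_resources(void) {
    pthread_mutex_lock(&lock_A);     /* always take the lower rank first */
    pthread_mutex_lock(&lock_B);
    /* ... work with both resources ... */
    pthread_mutex_unlock(&lock_B);   /* release in reverse order */
    pthread_mutex_unlock(&lock_A);
}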
