Question: Is isoefficiency metric/analysis still part of the curriculum?
Answer: No. (It used to be in previous semesters, but not any more.)
Question: Which state, private or shared, will a variable attain if it is declared inside an OpenMP parallel region?
Answer: Private; each thread gets its own copy.
Question: If not otherwise specified, which state (private or shared) will a variable declared outside a parallel region attain inside that region?
Answer: If not otherwise specified, a variable declared outside a parallel region is shared inside the parallel region. (Note: if the "default(none)" clause is used, then every such variable referenced inside the region must be explicitly assigned a state, shared or private, via a data-sharing clause.)
Question: If the iterations of a for-loop are parallelized by the "#pragma omp for" directive, which iterations will be executed by each thread?
Answer: This depends on the chosen schedule clause (either explicitly specified or implied as "static" by default) and the accompanying (default) chunk size. Please read pages 316-319 of the textbook.
Question: Where should I put the "#pragma omp for" directive when parallelizing a nested for-loop with several levels?
Answer: First, check which levels of the nested for-loop are parallelizable. The "#pragma omp for" directive should be placed immediately above the outermost parallelizable loop level (all the inner loop levels should also be parallelizable), in order to limit the parallelization overhead. At the same time, make sure that all the inner for-loops use private loop index variables. (Nesting parallel directives, described on pages 321-322 of the textbook, is another, more involved approach.)
Question: If multiple threads simultaneously read the same shared variable, will a race condition arise?
Answer: No. A race condition arises when multiple threads simultaneously access the same shared variable and at least one of the accesses is a write (update).