Thread Locking in C#

Thread locking is used to ensure that only one thread can enter particular sections of code at a time.

The main exclusive locking constructs are lock and Mutex; the main nonexclusive locking construct is the semaphore.

class ThreadUnsafe
{
  static int _val1 = 1, _val2 = 1;

  static void Go()
  {
    if (_val2 != 0) Console.WriteLine (_val1 / _val2);
    _val2 = 0;
  }
}

This class is not thread-safe: if Go was called by two threads simultaneously, it would be possible to get a division-by-zero error, because _val2 could be set to zero in one thread right as the other thread was in between executing the if statement and Console.WriteLine.

Here’s how lock can fix the problem:

class ThreadSafe
{
  static readonly object _locker = new object();
  static int _val1, _val2;

  static void Go()
  {
    lock (_locker)
    {
      if (_val2 != 0) Console.WriteLine (_val1 / _val2);
      _val2 = 0;
    }
  }
}

Only one thread can lock the synchronizing object (in this case, _locker) at a time, and any contending threads are blocked until the lock is released. If more than one thread contends for the lock, they are queued on a “ready queue” and granted the lock on a first-come, first-served basis.

 


http://www.albahari.com/threading/part2.aspx#_Locking

All about Threading in C#

  • Threading means parallel code execution.
  • C# supports parallel execution of code through multithreading.
  • Threading helps us execute one or more tasks in parallel or asynchronously.

.NET applications such as Console, WPF, or Windows Forms start in a single thread created automatically by the CLR and operating system (the main thread); we can make them multithreaded by creating additional threads.

  • A thread cannot be restarted once it has completed.
  • A thread, while blocked, doesn’t consume CPU resources.


How Threading Works?

Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (for instance, on an exclusive lock or on user input)  do not consume CPU time.

  • On a single-processor computer, a thread scheduler performs time-slicing — rapidly switching execution between each of the active threads.
  • On a multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run code simultaneously on different CPUs. It’s almost certain there will still be some time-slicing, because of the operating system’s need to service its own threads — as well as those of other applications.

Threads vs Processes-

A thread is analogous to the operating system process in which your application runs. Just as processes run in parallel on a computer, threads run in parallel within a single process. Processes are fully isolated from each other; threads have just a limited degree of isolation. In particular, threads share (heap) memory with other threads running in the same application. This, in part, is why threading is useful: one thread can fetch data in the background, for instance, while another thread can display the data as it arrives.

Share Data-

Threads share data if they have a common reference to the same object instance.

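As a minimal sketch (the class and field names are illustrative), two threads that call an instance method on the same object share that object's fields:

```csharp
using System;
using System.Threading;

class SharedDataDemo
{
  bool _done;   // instance field — shared by every thread using this instance

  public void Go()
  {
    // Both threads read and write the same _done field.
    if (!_done) { _done = true; Console.WriteLine ("Done"); }
  }

  public static void Main()
  {
    SharedDataDemo demo = new SharedDataDemo();  // one common instance
    new Thread (demo.Go).Start();                // worker thread calls demo.Go
    demo.Go();                                   // main thread calls demo.Go
  }
}
```

Because both threads share _done, “Done” is typically printed only once (though without locking there is a small race in which it could print twice).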

Passing Data to a Thread-

  1. The easiest way to pass arguments to a thread’s target method is to execute a lambda expression that calls the method with the desired arguments.
  2. Pass an argument into Thread’s Start method (the target method must accept a single object parameter).
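Both approaches can be sketched as follows (the method names here are illustrative):

```csharp
using System;
using System.Threading;

class PassingData
{
  public static void Main()
  {
    // 1. A lambda expression that calls the target method with arguments:
    Thread t1 = new Thread (() => Print ("Hello from lambda", 2));
    t1.Start();
    t1.Join();

    // 2. Passing an argument into Thread's Start method —
    //    the target must take a single object parameter:
    Thread t2 = new Thread (PrintObject);
    t2.Start ("Hello from Start");
    t2.Join();
  }

  static void Print (string message, int count)
  {
    for (int i = 0; i < count; i++) Console.WriteLine (message);
  }

  static void PrintObject (object message)   // matches ParameterizedThreadStart
  {
    Console.WriteLine ((string) message);
  }
}
```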

Naming Threads-

Each thread has a Name property that you can set for the benefit of debugging. This is particularly useful in Visual Studio, since the thread’s name is displayed in the Threads Window and Debug Location toolbar. You can set a thread’s name just once; attempts to change it later will throw an exception.

The static Thread.CurrentThread property gives you the currently executing thread. In the following example, we set the main thread’s name:

class ThreadNaming
{
  static void Main()
  {
    Thread.CurrentThread.Name = "main";
    Thread worker = new Thread (Go);
    worker.Name = "worker";
    worker.Start();
    Go();
  }
  static void Go()
  {
    Console.WriteLine ("Hello from " + Thread.CurrentThread.Name);
  }
}

 

Thread Types-

  1. Foreground Thread
  2. Background Thread

Foreground threads keep running to complete their work even if the main thread quits. In simple words, a foreground worker thread keeps running (to finish its work) even after the main thread has ended; its lifespan is not tied to the main thread, so it can stay alive without it.


Background threads quit when the main application thread quits; here the worker thread’s lifespan is tied to the main thread. To make a worker a background thread, set its “IsBackground” property to true.

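A minimal sketch of the difference, based on the classic example from the Albahari threading reference:

```csharp
using System;
using System.Threading;

class BackgroundDemo
{
  public static void Main()
  {
    Thread worker = new Thread (() => Console.ReadLine());

    // As a foreground thread (the default), worker would keep the process
    // alive after Main ends, until ReadLine returns.
    // As a background thread, the process exits as soon as Main ends:
    worker.IsBackground = true;

    worker.Start();
  }
}
```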

 

Thread Priority-

A thread’s Priority property determines how much execution time it gets relative to other active threads in the operating system, on the following scale:

enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

This becomes relevant only when multiple threads are simultaneously active.
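For example, a worker can be hinted to run at lower priority than its peers (a sketch; the priority is only a hint to the scheduler):

```csharp
using System;
using System.Threading;

class PriorityDemo
{
  public static void Main()
  {
    Thread worker = new Thread (() => Console.WriteLine ("low-priority work"));
    worker.Priority = ThreadPriority.BelowNormal;  // a hint, not a guarantee
    worker.Start();
    worker.Join();
  }
}
```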

Exception Handling-

Any try/catch/finally blocks in scope when a thread is created are of no relevance to the thread when it starts executing. Consider the following program:

public static void Main()
{
  try
  {
    new Thread (Go).Start();
  }
  catch (Exception ex)
  {
    // We'll never get here!
    Console.WriteLine ("Exception!");
  }
}
 
static void Go() { throw null; }   // Throws a NullReferenceException

The try/catch statement in this example is ineffective, and the newly created thread will be encumbered with an unhandled NullReferenceException. This behavior makes sense when you consider that each thread has an independent execution path.

The remedy is to move the exception handler into the Go method:

public static void Main()
{
   new Thread (Go).Start();
}
 
static void Go()
{
  try
  {
    // ...
    throw null;    // The NullReferenceException will get caught below
    // ...
  }
  catch (Exception ex)
  {
    // Typically log the exception, and/or signal another thread
    // that we've come unstuck
    // ...
  }
}

You need an exception handler on all thread entry methods in production applications. An unhandled exception causes the whole application to shut down.

 

Thread Pooling-

Whenever you start a thread, a few hundred microseconds are spent organizing such things as a fresh private local variable stack. Each thread also consumes (by default) around 1 MB of memory. The thread pool cuts these overheads by sharing and recycling threads, allowing multithreading to be applied at a very granular level without a performance penalty. This is useful when leveraging multicore processors to execute computationally intensive code in parallel in “divide-and-conquer” style.

The thread pool also keeps a lid on the total number of worker threads it will run simultaneously. Too many active threads throttle the operating system with administrative burden and render CPU caches ineffective. Once a limit is reached, jobs queue up and start only when another finishes. This makes arbitrarily concurrent applications possible, such as a web server.

There are a number of ways to enter the thread pool:

  • Via the Task Parallel Library (e.g. Task.Run)
  • By calling ThreadPool.QueueUserWorkItem
  • Via asynchronous delegates
  • Via BackgroundWorker
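A minimal sketch of two common entry points, Task.Run (TPL) and ThreadPool.QueueUserWorkItem:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class PoolDemo
{
  public static void Main()
  {
    // 1. Via the Task Parallel Library:
    Task<bool> task = Task.Run (() => Thread.CurrentThread.IsThreadPoolThread);
    Console.WriteLine (task.Result);   // True — the work ran on a pooled thread

    // 2. Via ThreadPool.QueueUserWorkItem:
    using (var done = new ManualResetEvent (false))
    {
      ThreadPool.QueueUserWorkItem (_ =>
      {
        Console.WriteLine ("Hello from the pool");
        done.Set();
      });
      done.WaitOne();                  // wait for the queued item to finish
    }
  }
}
```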

 

What are Thread.Join(), Thread.Sleep() and Thread.Abort() Methods-

  1. Thread.Join() – Join waits for a thread to end: calling another thread’s Join method blocks the current thread until that thread finishes executing. A timeout can be supplied as milliseconds or a TimeSpan.
  2. Thread.Sleep() – Sleep suspends (pauses) the current thread for a specified time, given in milliseconds or as a TimeSpan. A sleeping thread does not consume CPU resources.
  3. Thread.Abort() – Abort attempts to terminate a thread by raising a ThreadAbortException in it; the exception can be caught in application code. Complete termination can be ensured by calling Join after Abort. (Thread.Abort is considered unsafe and is not supported on .NET Core and later.)
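Join and Sleep can be sketched together:

```csharp
using System;
using System.Threading;

class JoinSleepDemo
{
  public static void Main()
  {
    Thread t = new Thread (Work);
    t.Start();
    t.Join();                          // block here until t has finished
    Console.WriteLine ("Thread t has ended!");
  }

  static void Work()
  {
    for (int i = 0; i < 3; i++)
    {
      Console.WriteLine ("Working…");
      Thread.Sleep (100);              // pause this thread for 100 ms
    }
  }
}
```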

 

 

References-

https://www.onlinebuff.com/article_understand-threading-and-types-of-threading-in-c-using-an-example_56.html

http://www.albahari.com/threading/

 

Multithreading and Thread Safety-

If we are using multithreading in our application, we should keep in mind that code which can be executed by multiple threads simultaneously must be thread safe, to avoid abnormal behavior.

As in the division example shown earlier, code that is not thread safe can throw a divide-by-zero exception, because one thread may set the value to zero at the same moment a second thread is trying to divide by it.

To handle this kind of situation we should make the code thread safe. There are three techniques to handle this: lock, Mutex, and semaphore.
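As a sketch of the nonexclusive case, a SemaphoreSlim with a capacity of 2 lets at most two threads into the protected section at once (the scenario here is illustrative):

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
  // Capacity 2: at most two threads may be inside at any moment.
  static readonly SemaphoreSlim _gate = new SemaphoreSlim (2);

  public static void Main()
  {
    for (int i = 1; i <= 4; i++)
    {
      int id = i;
      new Thread (() => Enter (id)).Start();
    }
  }

  static void Enter (int id)
  {
    _gate.Wait();                      // blocks if two threads are already in
    Console.WriteLine (id + " is in");
    Thread.Sleep (200);                // simulate work
    Console.WriteLine (id + " is leaving");
    _gate.Release();
  }
}
```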

Task Parallel Library (TPL)?

The TPL scales the degree of concurrency dynamically to most efficiently use all the processors that are available. In addition, the TPL handles the partitioning of the work, the scheduling of threads on the ThreadPool, cancellation support, state management, and other low-level detail.

A big benefit of the TPL over raw threading is that it makes maximum use of all the available processors, whereas with raw threads the developer must manage processor usage manually.

With raw threads, you can write code to run a thread on a particular core, but then you as the developer are responsible for checking which core is less loaded, dividing your logic into pieces, running them on those cores, and aggregating the results from all cores back into the program. That is a lot of work: deciding which cores are less loaded, partitioning and running the logic, synchronizing the results, and so on.

TPL encapsulate all these for us. So TPL benefits are-

  1. TPL encapsulates multi-core execution
  2. Thread pooling

// Sequential version            
foreach (var item in sourceCollection)
{
    Process(item);
}

// Parallel equivalent
Parallel.ForEach(sourceCollection, item => Process(item));

https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/data-parallelism-task-parallel-library

“VAR” keyword in .NET and c#

It is an implicit (indirect) way of defining data types: by looking at the data on the RHS, the compiler determines the LHS data type while generating the IL code. The actual data type, as inferred from the data, is present in the IL code.

The var keyword defines the data type statically, not at runtime, i.e. the variable’s type cannot change at runtime.

When to use VAR ?

  • In a LINQ query where we are not sure what kind of columns the query will return, i.e. an anonymous type. Without var we would have to create a POCO class to store the LINQ query result, which would be an overhead.
  • Rarely, when a class name is long enough that writing it out is a burden.

Ex.

var i = 10;

i = "Jogendra"; // Compile-time error, which proves that var's data type is set statically

What is the use of checked and unchecked keyword ?

In C#, by default there is no overflow checking (except for constant expressions) when performing arithmetic operations and conversions on integral values. The checked and unchecked keywords exist to control overflow behavior.

Checked :-

It ensures that the result fits the target type: if the computed value exceeds the type’s maximum value, an OverflowException is thrown.


class Program
{
  public short a = 30000;
  public short b = 20000;
  public short c;

  public int Add()
  {
    try
    {
      c = checked ((short)(a + b));   // 50000 exceeds short.MaxValue → throws
    }
    catch (System.OverflowException e)
    {
      System.Console.WriteLine (e.ToString());
    }
    return c;
  }
}

The unchecked keyword is used to bypass the overflow checking that is otherwise applied to constant expressions, which are checked for overflow at compile time.
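For example, wrapping a constant expression in unchecked suppresses the compile-time overflow check and lets the value wrap around:

```csharp
using System;

class UncheckedDemo
{
  public static void Main()
  {
    // int bad = int.MaxValue + 1;             // constant expression: compile-time error

    int wrapped = unchecked (int.MaxValue + 1); // allowed: silently wraps around
    Console.WriteLine (wrapped);                // -2147483648 (int.MinValue)
  }
}
```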

Delegates and Anonymous functions in C# 

Delegates and anonymous functions are closely connected. Here are simple definitions for each:

Delegates:- A delegate is a function pointer, i.e. it can hold references to one or more functions.

Anonymous Functions:- Anonymous functions help us reduce the overhead of creating named functions when we want to assign a few simple lines of code to a delegate.

We should use an anonymous function when we want to use the delegate (with a few lines of code) within the same function.

Anonymous functions used with delegates can also perform well, since the compiler can cache delegate instances for non-capturing anonymous functions.
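The two concepts can be sketched side by side (Transformer is an illustrative delegate type):

```csharp
using System;

class DelegateDemo
{
  delegate int Transformer (int x);    // a delegate type: a "function pointer"

  static int Square (int x) { return x * x; }

  public static void Main()
  {
    // A delegate referencing a named method:
    Transformer t = Square;
    Console.WriteLine (t (3));         // 9

    // The same delegate assigned an anonymous function — no named method:
    Transformer a = delegate (int x) { return x * x; };
    Console.WriteLine (a (4));         // 16

    // Or, equivalently, a lambda expression:
    Transformer l = x => x * x;
    Console.WriteLine (l (5));         // 25
  }
}
```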

ERROR: Cannot determine the location of the VS Common Tools folder

If you are getting the error “ERROR: Cannot determine the location of the VS Common Tools folder”, it can be fixed with the steps below:

  1. Go to My Computer -> Right Click->Properties
  2. Advanced system settings
  3. Advanced tab-> Environment Variables
  4. System Variables -> Path-> double click on it
  5. Add “;c:\windows\system32;” to the variable value.
  6. You are done 🙂