Saturday, January 11, 2014

Chapter:15 Smart Pointers

The smart pointer concept is not something newly introduced in C++11. However, it has changed significantly from the previous standard, so I thought to cover this topic as well. If we read about smart pointers on Wikipedia, we get the following:
"In computer science, a smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic memory management or bounds checking. These additional features are intended to reduce bugs caused by the misuse of pointers while retaining efficiency. Smart pointers typically keep track of the memory they point to."


In the near future, it will become even more popular because it provides a garbage-collector-like feature in C++. In addition, it is a very important tool for writing 'RAII (Resource Acquisition Is Initialization)' based code. RAII is a programming idiom invented by Bjarne Stroustrup. It is a very important idiom because it ensures that destructors are called for all objects declared in the scope where an exception occurs. This avoids resource leaks under any circumstances.


The above two points make smart pointers a very useful tool. So we should try to avoid raw pointers as much as possible and start using smart pointers. Of course this will not be easy, as there is a lot of existing code written around raw pointers, and sometimes it is simply not possible to express something using these tools. The C++11 standard supports three types of smart pointer:


1. 'unique_ptr'
2. 'shared_ptr'
3. 'weak_ptr'

The following sections describe how we can use these tools in our program.


The current program demonstrates the usage of 'unique_ptr'. Here I have written a 'myvector' class which has a 'std::vector' as a member. I have also written a few methods of this class which simply delegate the work to 'std::vector'. The main intention behind writing this class was that I wanted to write a constructor and destructor so that I could put appropriate messages in these functions. This helps us understand the object lifecycle better than 'std::vector' would.


In this program, first of all I have created an object (pmyvec) of this class on the free store. After that I have inserted two elements into the 'myvector' object.
Then I have tried to fetch a value from the 'myvector' object for a given index. We can see that this is not correct, as we are passing the size of the object
as the index (it should be size - 1), due to which it throws the 'out-of-range' exception.


I wrote this line deliberately, as I wanted to show how RAII takes care of calling the destructor of this class whenever an exception occurs. And if everything goes fine, then at the end of the scope the 'myvector' object's destructor gets called without the explicit 'delete' which we do with raw pointers. So, in short, it handles both scenarios very well and always ensures that the destructor gets called so that there is no resource leak in your program. By the way, the above code is not so unusual in real-world programs. Such mistakes do exist in production code.


Now that we have verified that the destructor gets called under the exception scenario, we should also see what happens in the normal case when everything works fine. We can comment out the source line which causes the exception and run the program. It works as expected, and whenever the 'myvector' object goes out of scope, the destructor gets called without an explicit 'delete' call.


If we had used a normal raw pointer instead of a smart pointer, there would be no automatic destructor call under an exception. We would have to write the code explicitly in each 'catch()' block to achieve that. And to handle the normal scenario we would have to call 'delete' at the end of the function. We can see that things become a bit complicated and a lot of repetitive code needs to be written to handle both scenarios even in such a simple case. This is not good, and the chances of mistakes are high, so ideally we should avoid such a style of code. Try it yourself, check the behavior, and feel the difference between raw pointers and smart pointers.



//Code Begins

#include<memory>
#include<vector>
#include<iostream>
#include<exception>

template<typename T>
class myvector {
public:
    myvector() {std::cout<<"myvector::myvector\n";}
    ~myvector(){std::cout<<"myvector::~myvector\n";}
    void insert_element(T val) {
        vec.push_back(val);
    }
    size_t size() {
        return vec.size();
    }
    T getat(size_t index) {
        return (vec.at(index));
    }
private:
    std::vector<T> vec;
};


void learn_unique_ptr(void) {
    try{
        std::unique_ptr<myvector<int>> pmyvec {new myvector<int>};
        /* no need to bother about deleting the pointer */
        pmyvec->insert_element(int{5});
        pmyvec->insert_element(int{20});
        auto sz = pmyvec->size();
        /* Comment out the below line to see the behavior in the normal scenario */
        auto val = pmyvec->getat(sz);
        auto x = pmyvec.get();           /* raw pointer, still owned by pmyvec */
        auto del = pmyvec.get_deleter(); /* copy of the deleter object */
        std::cout<<sz<<"\n";
    }

    catch(std::exception& e) {
        std::cout<<e.what()<<", Exception"<<std::endl;
    }
    catch(...) {
        std::cout<<"Unhandled Exception"<<std::endl;
    }
}


int main(int argc, const char* argv[]) {
    learn_unique_ptr();
    return 0;
}

//Code Ends



When I run the above program, I get the following output, which is expected. Under both scenarios, the destructor is executed.


//Console Output Begins

$ ./test (Under exception scenario)
myvector::myvector
myvector::~myvector
vector::_M_range_check, Exception

$ ./test (Under Normal scenario)
myvector::myvector
2
myvector::~myvector

//Console Output Ends



In this section, we will learn how a smart pointer ('unique_ptr') helps to return a pointer safely from a function. No component needs to bother about releasing the memory once it is done with its usage. We can return the pointer from one function to another, and the other function can keep on using that particular pointer. Once the last function using the smart pointer is about to go out of scope, the destructor gets called automatically.


In this program, one function returns the smart pointer object to its caller. The caller then uses the smart pointer object in the normal fashion. Once the caller function is about to finish, the destructor gets called, as there are no more active users of this object.


This is great; it makes the program more robust and less error-prone. It also does automatic resource management even when smart pointer objects are returned from one scope to another. The user of the returned smart pointer object does not need to worry about 'delete'-ing this memory. The system does it automatically for us.


So we should use 'unique_ptr' in these scenarios. Its performance is comparable to raw pointer usage, it removes a lot of burden from the programmer, and it makes the system safer.



//Code Begins

#include<memory>
#include<vector>
#include<iostream>
#include<exception>

struct x {
public:
    x(){
        std::cout<<"x::x()"<<std::endl;
    }
    ~x() {
        delete[] length;
        delete[] width;
        std::cout<<"x::~x()"<<std::endl;
    }

    std::size_t size{5};
    int* length{new int[size]};
    int* width{new int[size]};
};


std::unique_ptr<x> learn_unique_ptr(void) {
    std::unique_ptr<x> ux{new x{}};
    return ux;
}

int main(int argc, const char* argv[]) {
    std::unique_ptr<x> uy{};
    uy = learn_unique_ptr();
    auto val = uy->size;
    std::cout<<"Size Member Of:"<<val<<std::endl;
    return 0;
}

//Code Ends



When I run the above program, I get the following output.


//Console Output Begins

$ ./test
x::x()
Size Member Of:5
x::~x()

//Console Output Ends



In this section, we will learn how we can use the 'shared_ptr' smart pointer. As the name suggests, it should be used when we want to share objects between two components and there is no explicit owner who should delete that particular resource.


'shared_ptr' manages a 'reference count' for a particular object. It increments the count whenever the object is shared/passed to another component. Whenever the count becomes 0, which indicates that there is no user of this object, it calls the destructor. This is the basic philosophy of 'shared_ptr'. We can see that it does more work and is more complicated than 'unique_ptr'. Hence there is extra overhead associated with 'shared_ptr', so we should use it only when a particular scenario cannot be handled using the simpler 'unique_ptr'.


In the current program, I have created a 'shared_ptr' object (sx) and then passed it by value to two different threads. Inside the thread functions, I have used the 'sx' object in the normal fashion. Once both threads finish their execution, the parent code moves ahead. Once it is about to come out of the current function scope, the count value becomes 0. At this point the system calls the destructor for this object.


As we can see, with 'shared_ptr' we can safely pass a smart pointer object to different threads and components. The system takes care of increasing the reference count of this object. Once everyone is done with the usage of this object, the reference count becomes 0. At this point the system calls the destructor of the class.


The rest of the program is self-explanatory and has been written for completeness. 'shared_ptr' is a great tool and it takes care of very difficult scenarios for us. Hope this makes sense to the reader.




//Code Begins

#include<memory>
#include<iostream>
#include<thread>
#include<chrono>
using namespace std::chrono;

class x {
public:
    x() {std::cout<<"x::x()"<<std::endl;}
    ~x() {std::cout<<"x::~x()"<<std::endl;}
    void display(void) {
        std::cout<<msg<<std::endl;
    }
private:
    std::string msg{"Sachin Is SuperMan"};
};

void thread_one(std::shared_ptr<x> sptr) {
    std::this_thread::sleep_for(seconds{5});
    sptr->display();
}

void thread_two(std::shared_ptr<x> sptr) {
    sptr->display();
    std::this_thread::sleep_for(seconds{5});
}


void learn_shared_ptr(void) {
    std::shared_ptr<x> sx{new x};
    std::cout<<"Object Reference Count: "<<sx.use_count()
         <<std::endl;
    std::thread tx{thread_one, sx};
    std::cout<<"Object Reference Count: "<<sx.use_count()
         <<std::endl;
    std::thread ty{thread_two, sx};
    std::cout<<"Object Reference Count: "<<sx.use_count()
         <<std::endl;
    tx.join();
    ty.join();
    std::cout<<"Object Reference Count: "<<sx.use_count()
         <<std::endl;
}

int main(int argc, const char* argv[]) {
    learn_shared_ptr();
    return 0;
}

//Code Ends



When I run the above program, I get the following output. Watch how the reference count value changes depending on the usage.




//Console Output Begins

$ ./test
x::x()
Object Reference Count: 1
Object Reference Count: 2
Object Reference Count: 3
Sachin Is SuperMan
Sachin Is SuperMan
Object Reference Count: 1
x::~x()

//Console Output Ends

Chapter:14 Chrono Support


'std::chrono' has been added in the C++11 standard. It basically provides time-related operations in a more flexible way. Again, the key to this library is that it encapsulates all relevant information in one place. Frankly speaking, I did find this part a bit difficult to understand. Nevertheless, it lets us write logic that can measure any time-related behavior in our code. Prior to this, there was no easy way to achieve this and we had to use some system-specific library/system calls. Now all of this can be done using just the standard library interface. This is something which I always wanted to have in the standard.


It also provides the interface to convert from C++ style types to C style types. This enables us to use the good parts of both worlds (i.e. C and C++). The following two programs demonstrate the basic usage of the 'std::chrono' library. Beyond that, this library can be used in various places, as time-manipulation operations are very common in non-trivial software. Once we understand the programs below, we can check the other facilities of this library and accordingly start using them in our work.



This program explains how we can write a safer and better version of the 'time.h' functions. We all know that these functions return pointers to static objects, which may be changed/overwritten by any subsequent call.


To improve this situation, we can write wrapper functions which, after the call to these functions, make a deep copy of the information into a local copy. Now we can provide the local copy to our caller, which will always have valid and consistent information.


The below program basically finds the current time using the 'std::chrono' interface. After that we again use the 'std::chrono' interface to convert the C++ style type to the C style data type. Now we can pass this data as an argument to our wrapper functions. These wrapper functions basically provide us the current time in a more human-readable format. All of this can be done with a complete C style interface as well, and people have been using those interfaces for years. However, here I have tried to write better code in complete C++11 style. The choice is yours, but personally I think the C++11 style is safer and better in this case. Hope it makes sense to the reader.


//Code Begins

#include<iostream>
#include<thread>
#include<chrono>
#include<ctime>
#include<string>
using namespace std::chrono;

std::string get_ctime_to_ltime(const time_t* tp) {
    char* tmp=::ctime(tp);
    std::string output{tmp};
    return output;
}

std::string get_gmtime_asctime(const time_t* tp) {
    struct tm* gt=::gmtime(tp);
    char* tmp=::asctime(gt);
    std::string output{tmp};
    return output;
}

void chrono_component(void) {
    auto tp=std::chrono::system_clock::now();
    time_t cstyle_t=system_clock::to_time_t(tp);
    std::cout<<"Local Time: "<< get_ctime_to_ltime(&cstyle_t)
         <<std::endl;
    std::cout<<"GMT Time: "<<get_gmtime_asctime(&cstyle_t)
         <<std::endl;
}

int main(int argc,const char* argv[]) {
    chrono_component();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.


//Console Output Begins

$ ./test
Local Time: Mon Nov 25 20:54:36 2013

GMT Time: Mon Nov 25 15:24:36 2013

//Console Output Ends





This is one of the better programs which I have written using the complete C++11 standard, so I could not resist adding it to the current book. As I have mentioned in our discussion, time measurement is a very important part of our day-to-day activity. If we are working on performance improvement, or we want to find out how efficient some code is, we should always measure our code instead of just going by intuition.


The following 'timemeasure' class is a small helper class written by me using the facilities provided by 'std::chrono'. Whenever we want to measure a time duration, we should create an object of this class and call its 'start_point()' method. This starts the time measurement. Once we are done, we should call the 'end_point()' method of this class. The 'end_point()' method also returns the number of units of time elapsed to its caller.


The 'timemeasure' class is a template on the type 'Tprecision'. The default here is 'milliseconds'. If the user wants to measure in any other precision, he should pass that while creating the object of this class. I have also provided a typedef for 'unit' for the user's convenience, the way standard classes usually do. However, if we want to avoid these, we can always use 'auto' instead of the specific type. It works without any problem.


In the current example, I have measured the time at three different places and with different precisions. As usual, we need to create the object before the line from which we want to measure. Once we are done with our measurement, we stop it, which gives us the number of units in the specified precision. I use this program extensively wherever I want to measure my code's performance. Hope the reader too will find this useful.



//Code Begins

#include<iostream>
#include<thread>
#include<chrono>
using namespace std::chrono;

template<typename Tprecision=milliseconds>
class timemeasure {
private:
    steady_clock::time_point tbegin{};
    steady_clock::time_point tend{};
    steady_clock::duration diff{};
public:
    typedef steady_clock::duration::rep unit;
    timemeasure() {}
    ~timemeasure() {}
    void start_point(void){
        tbegin=steady_clock::now();
    }
    unit end_point() {
        tend = steady_clock::now();
        diff = tend-tbegin;
        Tprecision  dur{};
        dur=duration_cast<Tprecision>(diff);
        unit val=dur.count();
        return val;
    }
};

void function_one(void) {
    std::this_thread::sleep_for(seconds(2));
}

void function_two(void) {
    std::this_thread::sleep_for(seconds(2));
    std::this_thread::sleep_for(microseconds(1000));
}

void chrono_component(void) {
    timemeasure<>  timer1;
    timer1.start_point();
    function_one();
    timemeasure<>::unit val1 = timer1.end_point();
    std::cout<<"Val1:"<<val1<<"\n";

    timemeasure<microseconds>  timer2;
    timer2.start_point();
    function_two();
    timemeasure<microseconds>::unit val2 = timer2.end_point();
    std::cout<<"Val2:"<<val2<<"\n";

    timemeasure<nanoseconds>  timer3;
    timer3.start_point();
    std::this_thread::sleep_for(microseconds(10));
    timemeasure<nanoseconds>::unit val3 = timer3.end_point();
    std::cout<<"Val3:"<<val3<<"\n";
}

int main(int argc, const char* argv[]) {
    chrono_component();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.



//Console Output Begins

$ ./test
Val1:2000
Val2:2001242
Val3:80697

//Console Output Ends



Chapter:13 Task Based Concurrency

The C++11 standard has also introduced the concept of task-based concurrency. The main intention behind this concept is to provide an alternative for launching different tasks concurrently without the use of 'std::thread'. Task-based concurrency also allows us to write lock-free code wherever possible.


Creating a thread to do simple work has substantial overhead. In addition, the user also needs to take care of certain things, like waiting for the completion of the child thread in the parent thread's context. This is a purely blocking call, and until the child thread finishes its work, the parent does nothing useful except wait.


If we think carefully, does it look good to just wait in the parent thread's context? Normally our intention behind creating a thread is to execute tasks concurrently. However, the parent thread usually does not need the result of its child thread immediately. The parent thread can do something useful in its own context instead of just waiting, and the moment it needs the result, it can fetch it and use it.


With the 'std::thread' interfaces, we can start executing a thread function, and once the thread function finishes, the thread eventually finishes. However, with this model it is not possible to get the return value from the thread function back to its caller (parent). There are ways to achieve this, like passing an output variable into the thread function itself, but that is not very natural. When we write a function/task we pass some arguments into it, and once the function finishes it returns some value to its caller. This way the caller code can easily utilize the result of the task done.


This sort of mechanism has been provided in task-based concurrency, and the main interface for it is 'std::async'. It is lightweight and should be used to perform basic tasks. It also provides a higher level of abstraction to the user: the user does not need to bother about whether the system creates a new thread to run the task concurrently or executes it some other way.



The current program explains the basic usage of 'std::async'. The user needs to pass the function name and arguments to 'std::async', the same as we do while creating a 'std::thread'. 'std::async' returns a handle to its caller, which the caller uses to get the result whenever he wants. Here 'whenever' is very important, as the parent can do its own useful work until it wants the result of this particular task. If the task has not finished by the time the parent tries to get the result, it becomes a blocking call, as happens with 'std::thread'. However, if the task has already finished when the parent tries to get the result, there is no waiting/blocking in the parent thread. This is the key concept of this mechanism. We can think of this handle as the communication medium between the caller and its callee.


Just to simulate this behavior, I have written a task/function which opens a file, reads the first line, and returns it to its caller. However, if the file is not available in the working directory, then the task goes to sleep for 2 seconds and retries opening the file.


Here I have launched two tasks to open two different files and read their first lines. 'std::async' returns a handle to its caller. After this, I have displayed a message on the console in the parent context before fetching the results from both handles. During this time, the parent can do its own useful work.


The parent thread fetches the result from the handle using the 'get()' interface. As I have mentioned, if the task has not finished, the parent has to wait for the result. To verify this, we can remove one input file before running the program. Once the program waits on the line where the parent thread has called 'get()', we can put the input file back in the program directory. The moment we do this, the program prints the concatenation of both strings onto the console and completes successfully. Hope this makes sense to the reader.



//Code Begins

#include<iostream>
#include<fstream>
#include<string>
#include<thread>
#include<future>
#include<chrono>
using namespace std::chrono;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

std::string readline_from_file(std::string fname) {
    std::string output;
    for(;;) {
        std::fstream fs{fname, std::ios_base::in};
        if(!fs.is_open()) {
            /* file not present yet: wait and retry */
            std::this_thread::sleep_for(seconds(2));
        }
        else {
            std::getline(fs,output);
            return output;
        }
    }
}

void learn_task_based_concurrency(void)
{
    std::string file_x="input_x";
    std::string file_y="input_y";
    auto fd_x=std::async(readline_from_file,file_x);
    auto fd_y=std::async(readline_from_file,file_y);
    display("Waiting For the Result...");

    std::string out_x=fd_x.get();
    std::string out_y=fd_y.get();

    std::string output=out_x +" "+ out_y;
    display(output);
}


int main(int argc, const char* argv[])
{
    learn_task_based_concurrency();
    return 0;
}

//Code Ends




When I run the above program, I get the following output.


//Console Output Begins

$ ./test
Waiting For the Result...
input_x input_y

//Console Output Ends




In this program, I will explain the usage of 'std::packaged_task'. Practically, we can think of 'std::async' as a specialization of this interface. We can think of 'std::packaged_task' as a complete object which contains all the information the caller would need in future to get the result from the task. This way it encapsulates all the important information in one object.


In addition, it also breaks initialization and actual execution into separate steps, which gives more flexibility to the user. I think this is a more object-oriented approach, as against the 'std::async' interface which looks more functional.


In this program, I have created two tasks in the parent thread's context. First we need to create the 'std::packaged_task' objects. Watch out for the syntax on the left-hand side of the expression: it is templated on the type of the function, so we need to pass that information. Once the object has been created, we can pass the arguments, which invokes the task's execution. The rest of the interface is much the same as 'std::async'.


In my view, we can also think of 'std::packaged_task' as a better version of a raw function pointer. It provides better abstraction and it also encapsulates all the relevant information (like arguments and result) in one place.



//Code Begins

#include<iostream>
#include<string>
#include<thread>
#include<future>
#include<exception>

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

template<typename Tout, typename Targs>
Tout task_sum(Targs i1, Targs i2) {
    Tout tmp{};
    tmp = i1 + i2;
    return tmp;
}

template<typename Tout, typename Targs>
Tout task_sub(Targs i1, Targs i2) {
    Tout tmp{};
    tmp = i1 - i2;
    return tmp;
}


void learn_task_based_concurrency(void) {
    std::packaged_task<int (int,int)> pt_x {task_sum<int,int>};
    std::packaged_task<int (int,int)> pt_y {task_sub<int,int>};
    pt_x(20,10);
    pt_y(20,10);
    auto f_x = pt_x.get_future();
    auto f_y = pt_y.get_future();
    display(f_x.get());
    display(f_y.get());
}


int main(int argc, const char* argv[]) {
    learn_task_based_concurrency();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.




//Console Output Begins

$ ./test
30
10

//Console Output Ends




Chapter:12 Std::Thread Support

Support for multi-threading was incorporated in C++11. This is the best feature of C++11 in my opinion. It has a very clean interface and is very easy to use. We all know how difficult it is to write multi-threaded code using system-specific libraries. To pass an argument to a thread function, we have to typecast it to some system-specific type. We cannot directly pass more than one argument to the thread function; if we have to, we must create a structure, club all those arguments together, and then pass that to the function.


All those steps are a source of error, and we sometimes forget to do these things in our program. On the other hand, C++11 threads have a very clean and easy interface. It does not impose any constraints on how, or how many, arguments we can pass to the thread function. In addition, 'std::thread' is a well-written class, so it encapsulates a lot of useful information related to a particular thread in a very elegant manner.


'std::thread' also provides an interface through which you can use the system-specific libraries we have been using until now. So this is a kind of two-layer abstraction, where you have the option to write code which does not depend on any system-specific calls, and whenever you do want to make system-specific calls, there is provision for that as well.


The C++11 standard has also incorporated tools for synchronization. There is a rich set of classes available which we can use to synchronize our code/data. There are also features for writing lock-free code, and task-based concurrency support is available in the new standard.


These features are not just good, they are great. Once you start using them, you will realize the benefits of this standard library feature.


This example demonstrates all the key concepts required to use 'std::thread'. If we want to execute any function on a particular thread, all we need to do is pass its address and arguments to the constructor of 'std::thread'. Here my thread function takes a 'string' data type so that it can use it inside the function.


Once we create the thread object, the thread function starts executing as soon as the system schedules that particular thread. However, we do not know how much time the new thread will take to complete its execution, so as usual the parent thread needs to wait for the completion of the thread it launched. I feel this is absolutely necessary for a couple of reasons. First, if you have launched something, ideally you would like to make use of that work; otherwise, in normal circumstances, it does not make much sense. The second reason is more technical. All the necessary infrastructure, like the variables and the thread object itself, is created on the stack of the parent thread. So if the parent thread does not wait, continues its execution, and finishes before the child thread finishes its work, we are in big trouble; sooner or later something nasty is bound to happen. To avoid this, the system terminates the program if a still-joinable thread object is destroyed because the parent did not wait for its child thread. This way it maintains the integrity of the system and also surfaces the failure close to the problem. Hopefully the user will be able to identify the problem and fix it.


The 'join()' member function of the thread class does this for us. So always remember to use it in the parent thread's context to ensure that the parent does not finish before the child thread completes.


'hardware_concurrency()' is another useful static member function of the thread class, which provides the number of cores our machine has. This is a very important function provided by the standard, as we do not have to write any system-specific code to fetch such low-level information. The number of cores is crucial information in a multi-core, multi-threading environment. We can make use of it while maintaining a thread pool, in some sort of load balancing, or to decide how many threads should be created in a particular program. It is priceless information and we get it for free.



//Code Begins

#include<vector>
#include<iostream>
#include<thread>
#include<string>
#include<chrono>
using namespace std::chrono;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

int tfunction(std::string n) {
    display(n);
    auto sz=n.size();
    return sz;
}

void learn_std_thread(void) {
    std::string arg_x{"Sachin 10Dulkar"};
    std::thread t_x{tfunction,arg_x};
    t_x.join();
}

int main(int argc, const char* argv[]) {
    std::size_t ncore=std::thread::hardware_concurrency();
    display("Number of Core On This Machine Is:");
    display(ncore);
    learn_std_thread();
    return 0;
}

//Code Ends



When I run the above program, I get the following output, which is expected. Please remember that the output may vary as we run this program on different machines. On my machine, the number of cores is 4.


//Console Output Begins
$ ./test
Number of Core On This Machine Is:
4
Sachin 10Dulkar
//Console Output Ends


The below example looks complicated at first glance. However, the complete program can be broken into the following sections:

How do we find out the number of cores on our machine?
The thread library's 'hardware_concurrency()' provides the number of cores.

How do we create an empty thread pool with size equal to the number of cores?
We can create a vector of 'std::thread' of size equal to the number of cores.

How can we check whether a particular thread is associated with a function or not?
The standard provides a default-constructed 'std::thread::id{}' value which denotes an invalid id/handle. If your thread's id matches this value, it
indicates that the thread is not associated with any task.

How can we assign a task to a particular thread from the thread pool?
Once we find an empty thread slot in our thread pool, we can create one local thread object and then 'std::move' it into that particular slot. This associates a particular task with a thread. Due to the above call, the thread pool now has a valid thread handle, so we can fetch whatever we want from that handle.

How can we use a system specific thread library via the native handle of a thread?
The standard provides the 'native_handle()' member function in the 'std::thread' class. We all know that system specific libraries work on some kind of handle. So we can fetch the 'native_handle()' of our 'std::thread' and pass this information to the system specific library, which would start working with it. This is a big bonus with 'std::thread' as it provides us a two-layer abstraction. However, unless it is absolutely necessary, I think we should avoid mixing the two usages as it leads to non-portable code. So think before using the 'native_handle()' information in your code.

How can a parent thread wait for its child's completion?
As usual, 'join()' does this for us. It blocks the parent thread until its child finishes its task and returns.



//Code Begins

#include<vector>
#include<iostream>
#include<thread>
#include<string>
#include<pthread.h>
std::size_t ncore{};

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

int tfunction(std::string n) {
    display(n);
    auto sz=n.size();
    return sz;
}

void learn_std_thread(void) {
    std::vector<std::thread> tpool(ncore);
    for(const auto& x: tpool) {
        if(x.get_id()==std::thread::id{}) {
            display("Current Thread is Not Running");
        }
    }
    std::string arg_x{"Hello, C++11"};
    std::thread t_x{tfunction,arg_x};
    tpool[0] = std::move(t_x);
    if(tpool[0].get_id()==std::thread::id{}) {
        display("It should not happen at this point");
    }
    tpool[0].join();

    std::string arg_y{"C++11, Thread Library"};
    std::thread t_y{tfunction,arg_y};
    tpool[1] = std::move(t_y);
    auto nhandle = tpool[1].native_handle();
    void* value_ptr{nullptr};
    /* Note: joining via the native handle does not mark the */
    /* std::thread object as joined, so tpool[1] stays       */
    /* joinable and its destructor will call std::terminate. */
    auto ret = pthread_join(nhandle, &value_ptr);
    if(ret != 0) {
        display("pthread_join call is unsuccessful");
    }
}


int main(int argc, const char* argv[]) {
    ncore= std::thread::hardware_concurrency();
    display("Number of Core On This Machine Is: ");
    display(ncore);
    learn_std_thread();
    return 0;
}

//Code Ends




When I run the above program, I get the following output. The program throws an exception at the point where I have used the system specific library call. The likely cause is that joining through the native handle does not mark the 'std::thread' object as joined, so tpool[1] is still joinable when it is destroyed, and the destructor of a joinable 'std::thread' calls 'std::terminate()'. Nevertheless, I wanted to show how we can use a system specific call on the 'std::thread' native handle to execute such calls. Hope this program fulfills that intention.




//Console Output Begins

$ ./test
Number of Core On This Machine Is:
4
Current Thread is Not Running
Current Thread is Not Running
Current Thread is Not Running
Current Thread is Not Running
Hello, C++11
C++11, Thread Library
terminate called without an active exception
Aborted (core dumped)

//Console Output Ends






In this section, I will cover a somewhat controversial interface provided by 'std::thread': 'detach()'. With 'thread::detach()' we can release the parent thread from waiting for its child thread. I mean, once we detach the child thread from the parent thread, both are free to run independently.


However, there is a potential problem in this approach. If there is some resource sharing between the child and parent threads, then there could be a scenario where the child thread keeps using a resource already released by its parent thread. This is very much possible as both threads can execute and finish without waiting for each other. This can lead to the deadliest kinds of bugs in your program, in the form of memory corruption, data races and many other unknown failures. The problem with these bugs is that they are mostly non-deterministic, so it is quite possible that the program works 99.99 percent of the time.


I do not know when we can use this functionality with complete safety and correctness. The real problem is that we can never know completely whether the parent thread shares some resource with its child thread. Even if there is no explicit sharing of resources between these threads, there could be implicit sharing of some resources by the system which a user-mode program cannot know about.


The program below creates two child threads. The parent thread allocates heap memory where it stores the 'index' value. This value is shared by value with thread1, whose thread function just reads it. The address of 'index' is shared with thread2, whose thread function writes into it. Both threads execute a loop, do some work, and then go to 'sleep()'.


Now in the parent thread, we have a 'join()' call for thread1, which is correct. However, we have called 'detach()' for thread2, so the parent thread does not wait for thread2's completion. Just after that, I release the memory as the parent thread is about to finish. Clearly, this is a real problem and we get nasty heap corruption, as thread2 keeps writing into memory that has already been released by the parent thread.


Just to aggravate this scenario, I have written an infinite loop in the main thread context. Because of this, the program never terminates and hopefully we can see the real problem for ourselves. This program has been written to show the reader the impact of using such deadly interfaces. Never write such a program in your production environment.




//Code Begins

#include<iostream>
#include<thread>
#include<string>
#include<chrono>
using namespace std::chrono;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

void tfunction_one(std::size_t counter) {
    for(std::size_t index{0};index<counter;++index) {
        display("Inside tfunction_one");
        std::this_thread::sleep_for(seconds(2));
    }
}


void tfunction_two(std::size_t* counter){
    for(;;) {
        display("Inside tfunction_two");
        std::this_thread::sleep_for(seconds(5));
        *counter = 100;
        display(*counter);
    }
}


void learn_std_thread(void){
    std::size_t* index = new std::size_t;
    *index = 2;
    std::thread t_x{tfunction_one,*index};  /* shared by value   */
    std::thread t_y{tfunction_two,index};   /* shared by address */
    t_x.join();
    t_y.detach();
    delete index;  /* t_y may still be writing into this memory! */
}


int main(int argc, const char* argv[]) {
    learn_std_thread();
    for(;;) {
        std::this_thread::sleep_for(seconds(5));
    }
    return 0;
}

//Code Ends



When I run the above program, I get the following output on my machine. It could vary depending on the system load and scheduler. From the output, it looks like everything is running fine. However, that is not really the case.



//Console Output Begins
$ ./test
Inside tfunction_two
Inside tfunction_one
Inside tfunction_one
100
.....................
.....................
//Console Output Ends




However, let us run the same program using the very powerful dynamic analysis tool 'valgrind'. This tool captures almost all memory and thread related problems in the best possible manner. Here we get the problem information. As we discussed, thread2 keeps writing into the released memory, which leads to a heap corruption scenario. The same is reported by 'valgrind'.



//Console Output Begins

$ valgrind ./test
...........................
==5552== Thread 3:
==5552== Invalid write of size 8
==5552==    at 0x4010F1: tfunction_two(unsigned long*)
==5552==    by 0x5353E99: start_thread (pthread_create.c:308)
==5552==    by 0x565D3FC: clone (clone.S:112)
==5552==  Address 0x5c25040 is 0 bytes inside a block of size 8 free'd
==5552==    at 0x4C2A4BC: operator delete(void*) (in /usr/lib/valgrind/..)
==5552==    by 0x40117D: learn_std_thread() (test_3.cpp:37)
==5552==    by 0x4011DF: main (test_3.cpp:42)
..............................

//Console Output Ends






In this program we will discuss 'thread local' data. In its simplest form, we can think of a 'thread local' variable as private data which lives for the course of the complete thread execution. Private here means no other thread can access it unless the owner thread shares the address of this particular variable with the other thread.


This is completely different from a global variable. The following program uses both a global and a 'thread local' variable to show that the two are different. We create two child threads and each thread executes the same function 'tfunction_one'. Inside the function, the program first updates the global variable and prints its address as well. I have also written 'function', where I have declared the 'count' variable to be of 'thread local' type. Here also we print the address of 'count'. Hopefully this explains the difference between these two kinds of data.



//Code Begins

#include<iostream>
#include<thread>
#include<string>
#include<chrono>
using namespace std::chrono;
int counter{0};

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

void function(void){
    thread_local int count{0};
    std::cout<<"Count(Thread Local) Address:"<<&count<<std::endl;
}

void tfunction_one(void) {
    display("Inside tfunction_one");
    counter++;
    std::cout<<"Counter(Global) Address:"<<&counter<<std::endl;
    std::this_thread::sleep_for(seconds(2));
    function();
}


void learn_std_thread(void){
    std::thread t_x{tfunction_one};
    std::thread t_y{tfunction_one};
    t_x.join();
    t_y.join();
}


int main(int argc, const char* argv[]) {
    learn_std_thread();
    return 0;
}

//Code Ends



When I run the above program, I get the following output. We see that the global variable address is the same when the function is executed in the two threads. However, the 'thread local' variable address is different in each thread's execution. This was expected as per the definition of 'thread local' storage.




//Console Output Begins

$ ./test
Inside tfunction_oneInside tfunction_one
Counter(Global) Address: 0x6052bc
Counter(Global) Address: 0x6052bc
Count(Thread Local) Address:0x7f85b5df46fc
Count(Thread Local) Address:0x7f85b55f36fc

//Console Output Ends



This is the last program we will discuss in this chapter. Till now, if you have noticed, the output of different threads to the console was somewhat mixed up. This is not unusual for the previous programs, as we have not used any form of synchronization mechanism in them. If there is a resource which can be used by different threads at the same time, we must synchronize access to it.


The standard threading library also provides 'std::mutex' and many of its variants which we can use in our programs. The standard also provides a recursive mutex ('std::recursive_mutex'), which allows one thread to acquire the same mutex more than once. Of course, that thread needs to release the mutex the same number of times it acquired it.


The following program synchronizes updates to the global variable 'count' and also writes to the console. So we have defined two objects of the 'std::mutex' class and used them to handle these two synchronizations. The rest is self explanatory and has been written for completeness. 'std::mutex' also provides 'native_handle()' functionality, like 'std::thread', which can be used in case you want to use a system specific library. I have not used it in the current program; however, it can be used quite easily if you want to.





//Code Begins

#include<iostream>
#include<thread>
#include<mutex>
#include<string>
#include<chrono>
using namespace std::chrono;

int count{0};
std::mutex cout_mutx;
std::mutex count_mutx;

template<typename T>
void display(const T& val) {
    cout_mutx.lock();
    std::cout<<val<<std::endl;
    cout_mutx.unlock();
}

void tfunction_one(std::size_t counter) {
    for(std::size_t index{0};index<counter;++index) {
        display("Inside tfunction_one");
        count_mutx.lock();
        count++;
        count_mutx.unlock();
        std::this_thread::sleep_for(seconds(2));
    }
}

void tfunction_two(std::size_t counter) {
    for(std::size_t index{0};index<counter;++index) {
        display("Inside tfunction_two");
        count_mutx.lock();
        count++;
        count_mutx.unlock();
        std::this_thread::sleep_for(seconds(4));
    }
}

void tfunction_three(std::size_t counter) {
    for(std::size_t index{0};index<counter;++index) {
        display("Inside tfunction_three");
        count_mutx.lock();
        count++;
        count_mutx.unlock();
        std::this_thread::sleep_for(seconds(8));
    }
}

void learn_std_thread(void){
    std::size_t index{2};
    std::thread t_x{tfunction_one,index};
    std::thread t_y{tfunction_two,index*index};
    std::thread t_z{tfunction_three,index*index*index};
    t_x.join();
    t_y.join();
    t_z.join();
    display(count);
}


int main(int argc,const char* argv[]) {
    learn_std_thread();
    return 0;
}

//Code Ends






When I run the above program, I get the following output.




//Console Output Begins

$ ./test
Inside tfunction_two
Inside tfunction_one
Inside tfunction_three
Inside tfunction_one
Inside tfunction_two
Inside tfunction_three
Inside tfunction_two
Inside tfunction_two
Inside tfunction_three
Inside tfunction_three
Inside tfunction_three
Inside tfunction_three
Inside tfunction_three
Inside tfunction_three
14

//Console Output Ends



Chapter:11 Hashing Support

The hash table is a very popular data structure which has been in use in almost all domains of programming. It stores '(key, value)' pairs. Typically a hash table uses a hash function to compute an index into an array of buckets during insertion and lookup of a key. This gives almost O(1) time complexity when a well behaved hash function is used. This fast lookup makes the data structure so popular.


C++11 has now introduced support for hash tables. There are four associative container classes which directly support hash table functionality. As usual there are two variants each of 'map' and 'set', which have the same meaning as their ordered counterparts.

1. unordered_map
2. unordered_set
3. unordered_multimap
4. unordered_multiset


If a reader is not familiar with the hash table data structure, I suggest reading about it in any standard algorithms book. Here we mainly focus on how we can use the STL version of the hash table implementation. Hence, in the following sections, I assume that the reader is aware of the hash table concept.


We all know that 'std::map/std::set' insert keys based on the less-than operator. These containers use a red-black tree in their internal implementation. As elements are kept in sorted order on insertion, the red-black tree guarantees 'O(log(n))' time complexity for element lookup.


However, this does not hold true for the hash table based container classes. The elements are not guaranteed to be stored in any particular order. In fact, the order depends on many internal factors of the hash table and may vary between different runs of the same program. Basically, the hash table needs to provide 'O(1)' complexity for key lookup, hence it uses a very different way of storing the elements in the container.


If you want to search for random key entries in a sequence, you should ideally go for the hash table variants of the container classes (unordered). If you want the elements to be kept in order, and you can compromise somewhat on lookup time, then you should use the previous variants of the container classes (ordered).


You may not observe the desired performance benefit of a hash table container over an ordered container while the container is small. However, once the container size is large, we start to observe significant benefits. As usual, do not trust anyone: write the code and measure the time. We should always measure the performance of any code ourselves before adopting it for our requirement.


The following example demonstrates that we should not rely on the order of elements in the sequence for hash table container classes. The order could vary even between different runs of the same program.




//Code Begins

#include<map>
#include<unordered_map>
#include<iostream>
#include<string>

typedef std::map<std::string, int> mymap;
typedef std::unordered_map<std::string, int> myunordererdmap;

void learn_hash_support(void) {
    mymap m_a{ {"Mantosh",7},{"Kumar",5},{"Sharma", 6},
        {"Ram",3},{"Chandra",7},{"Aryan",5},
        {"Deepansu",8},{"Shakuntala",10},
        {"Phool",5},{"Parbati",7}};

    myunordererdmap m_b{ {"Mantosh",7},{"Kumar",5},{"Sharma", 6},
        {"Ram",3},{"Chandra",7},{"Aryan",5},
        {"Deepansu",8},{"Shakuntala",10},
        {"Phool",5},{"Parbati",7}};
    auto itr1=m_a.begin();
    auto itr2=m_b.begin();
    for(;(itr1 != m_a.end()&& itr2 != m_b.end()); ++itr1, ++itr2) {
        std::cout<<"("<<itr1->first<<","<<itr1->second<<")"<<"\t\t\t"
                << "("<<itr2->first<<","<<itr2->second<<")"<<std::endl;
    }
}

int main(int argc, const char* argv[]) {
    learn_hash_support();
    return 0;
}

//Code Ends



When I run the above program, I get the following output. Here we can see that the iteration orders are different for the ordered and unordered versions for the same set of input.


//Console Output Begins

$ ./test
(Aryan,5)            (Phool,5)
(Chandra,7)            (Deepansu,8)
(Deepansu,8)        (Chandra,7)
(Kumar,5)            (Aryan,5)
(Mantosh,7)            (Ram,3)
(Parbati,7)            (Parbati,7)
(Phool,5)            (Sharma,6)
(Ram,3)                (Shakuntala,10)
(Shakuntala,10)        (Kumar,5)
(Sharma,6)            (Mantosh,7)

//Console Output Ends



The C++11 standard provides a standalone way of using the hash function. This is good, as now we can experiment with the hash function on a given data set. The basic requirement of a hash function is that it should always give the same hash value for a given input. The second requirement is that it should produce different hash values even for similar keys, so that keys are spread over the table as uniformly as possible. These properties play a significant role in key lookup in the table.


In the following example, I have stored words, one per line, in the file 'input.txt'. The words are not unique and have multiple entries in the file. First I store them in a 'std::set', which ensures that it contains only unique words after reading from the file is done.


Then I define the 'hash_fun' function object from the standard library. The idea here is to calculate the hash value for each of the words which we have already stored in the 'std::set' container. I store the hash values in another 'std::set' container. Ideally both containers should end up the same size, which would confirm that the hash function is behaving well, as it generated a different hash value for every distinct key.


At the end of the program, I print the '(key, hash value)' pairs. We can see that even for similar strings the hash values are very different, which is also an indication of a very good hash function. Hope it makes sense to the reader.



//Code Begins

#include<iostream>
#include<vector>
#include<string>
#include<fstream>
#include<functional>
#include<set>

std::set<std::string> words;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

void read_and_store(void) {
    std::string node;
    std::string file = "input.txt";
    std::ifstream ifstr{file};

    if(ifstr.is_open()){
        while(std::getline(ifstr,node)) {
            words.insert(node);
            node.clear();
        }
        ifstr.close();
    }
}

void do_some_hash_stuff(void) {
    std::set<size_t> hash_val;
    std::hash<std::string> hash_fun;

    for(const auto& i: words) {
        auto val = hash_fun(i);
        hash_val.insert(val);
    }

    display(words.size());
    display(hash_val.size());

    auto itr1 =words.begin();
    auto itr2=hash_val.begin();
    for(;(itr1 != words.end()&& itr2 != hash_val.end())
    ; ++itr1,++itr2) {
        std::cout<<*itr1<<"\t\t\t"<<*itr2<<std::endl;
    }
}

int main(int argc, const char* argv[]) {
    read_and_store();
    do_some_hash_stuff();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.


//Console Output Begins

$ ./test
114
114

've            35910839017159818
And            45719090434930041
If            257679905247666261
Let's        728709960516072112
So            828576016503041753
The            891443442213174483
There        1053890543232546217
They        1083224364347218984
We            1778721970047496877
..................................
..................................

//Console Output Ends




The following is a fairly complete example of hash table usage for a particular problem. First I read the words and calculate their lengths so that I can store them in a 'std::unordered_multimap' as '(key, value)' pairs. After that I fetch various pieces of information using the bucket interface of this particular hash table.


You can visualize the hash table as a vector where each element of the vector is itself a kind of list. Each index of the vector is known as a bucket, and ideally the values should be distributed uniformly over the entire vector. Here I fetch the total number of buckets in the current hash table. I also print the count of keys in each bucket. The output is fairly large; there are many buckets with count value 0 and there are many words which land in the same bucket.


This is expected with the given input which I am using in these programs. There are multiple entries of the same key, which leads to a situation where one bucket has more than one element. Ideally the list of key elements at a particular index of the vector should not grow very large, or else there is no guarantee of 'O(1)' complexity for the lookup operation. To maintain this, the hash table sometimes increases its size and recalculates the hash value of all elements so that they can be positioned in their new buckets. This is known as rehashing; it is expensive and ideally should not happen very frequently.


In the later part of the program, I find the bucket number for a particular key. After this, I wanted to print all elements which have been hashed to this bucket number. The standard provides 'begin/end' style interfaces (local iterators) to walk through the list of keys in a particular bucket. These 'begin(n)/end(n)' overloads are different from the ones we use to walk through the elements of the entire container.


In this program, I have also calculated the 'load factor' of this particular hash table. The standard provides an interface to get it, but here I have computed it from the definition of the load factor: size() divided by bucket_count(). At the end of the program, I also fetch the load factor of this hash table using the standard interface so that we can verify that both values are the same. As we know, the 'load factor' is a very important piece of information which the hash table uses to decide when the entire table should be rehashed.




//Code Begins
#include<iostream>
#include<string>
#include<fstream>
#include<utility>
#include<unordered_map>

typedef std::unordered_multimap<std::string,int>myunorderdmulitmap;
typedef myunorderdmulitmap::size_type sztype;
typedef myunorderdmulitmap::local_iterator uniterator;

myunorderdmulitmap words;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

void read_and_store(void) {
    std::string str;
    std::pair<std::string, int> node;
    std::string file = "input.txt";
    std::ifstream ifstr{file};

    if(ifstr.is_open()){
        while(std::getline(ifstr, str))    {
            node.first = str;
            node.second = str.length();
            words.insert(node);
            str.clear();
        }
        ifstr.close();
    }
}

void learn_bucket(void) {
    sztype bucketcnt=words.bucket_count();
    sztype maxbuckercnt=words.max_bucket_count();

    /* Think The below output as the grid and non zero  */
    /* elements means there are so many members.Ideally */
    /* it should not be very non-uniform                */
    for(sztype index = 0; index < bucketcnt; ++index) {
        std::cout<<    words.bucket_size(index)<<"\t";
    }
    std::cout<<std::endl;

    /* Now,lets assume you want to know whether particular*/
    /* key exists(in which bucket)                       */
    std::string key{"mathematical"};
    sztype bucketno = words.bucket(key);
    display(bucketno);

    /*If we want to print all elements which is stored in the*/
    /* same bucket number.         */
    for(uniterator itr = words.begin(bucketno);
            itr != words.end(bucketno); ++itr) {
        std::cout<<itr->first<<"\t"<<itr->second<<std::endl;
    }

    /* Now, lets calculate the load factor manually */
    sztype sz = words.size();
    float ldfactor = (float(sz)/float(bucketcnt));
    display(ldfactor);
}


void learn_load_factor(void) {
    float ldfactor = words.load_factor();
    float maxldfactor = words.max_load_factor();

    display(ldfactor);
    display(maxldfactor);
}


void learn_hash_stuff(void) {
    read_and_store();
    learn_bucket();
    learn_load_factor();
}

int main(int argc, const char* argv[]) {
    learn_hash_stuff();
    return 0;
}


//Code Ends



When I run the above program, I get the following output.


//Console Output Begins
$ ./test
0    0    0    0    0    0    0    0    6    0    0    0    0    6    0    0
0    0    00    0    0    0    0    0    0    0    0    0    0    0    0    0
0    6    0    0    00    0    0    0    0    0    0    0    0    0    6    0
24    0    0    0    0    0    00    0    0    0    0    0    0    0    0    0
.............................................................
.............................................................
13
mathematical    12
mathematical    12
mathematical    12
mathematical    12
mathematical    12
mathematical    12
0.630672
0.630672
1

//Console Output Ends



This is a continuation of the previous example. I had to break it into two parts so that it would be easy to understand. In this program, I describe how we can influence when a rehash happens.


First I store the 'load factor' of the current hash table. I also save the bucket number for a particular key. Then I change the 'max load factor' of the hash table to twice its current value. As we have discussed, rehashing typically depends on the 'load factor': when it goes beyond some threshold, a rehash of the entire table happens.


I fetch the bucket number of the same key again. As we can see, its bucket number has changed. Also, the total bucket count has increased. This confirms that, on my implementation, a rehash of the entire table was indeed triggered once I changed its 'max load factor'.


In this program, I have also shown how we can fetch the hash function object of a particular hash table. This is not hugely important, but if we need to use the hash function for some purpose, we can do it. It is the same idea as getting the allocator object from a particular container class. I have given very similar inputs to this hash function; however, we can see that their hash values are very different. I mentioned earlier that this is the sign of a good hash function. Of course, the same behavior should hold for all kinds of input, not just a few of them.



//Code Begins

#include<iostream>
#include<string>
#include<fstream>
#include<utility>
#include<unordered_map>

typedef std::unordered_multimap<std::string, int> hash_multimap;
typedef hash_multimap::size_type sztype;
typedef hash_multimap::hasher hashobject;

hash_multimap words;

template<typename T>
void display(const T& val) {
    std::cout<<val<<std::endl;
}

void read_and_store(void) {
    std::string str;
    std::pair<std::string, int> node;
    std::string file = "input.txt";
    std::ifstream ifstr{file};

    if(ifstr.is_open()){
        while(std::getline(ifstr, str)) {
            node.first = str;
            node.second = str.length();
            words.insert(node);
            str.clear();
        }
        ifstr.close();
    }
}

void adjust_loadfactor_rehash(void) {
    float ldfactor = words.load_factor();
    float maxldfactor = words.max_load_factor();

    std::cout<<"Bucket Count:"<<words.bucket_count()<<"\t"
            <<"Max Bucket Count:"<<words.max_bucket_count()
            <<std::endl;
    std::cout<<"Current Load Factor is: "
            <<words.load_factor()<<std::endl;

    std::string key{"mathematical"};
    sztype bucketno1 = words.bucket(key);
    std::cout<<key<<",word was found on the bucket before hash: "
            <<bucketno1<<std::endl;

    ldfactor=2.0*maxldfactor;
    words.max_load_factor(ldfactor);

    std::cout<<"Bucket Count:"<<words.bucket_count()<<"\t"
            <<"Max Bucket Count:"<<words.max_bucket_count()
            <<std::endl;

    sztype bucketno2=words.bucket(key);
    std::cout<<key<<",word was found on the bucket after rehash: "
            <<bucketno2<<std::endl;
    std::cout<<"Current Load Factor is: "
            <<words.load_factor()<<std::endl;
}

void use_hash_function(void) {
    hashobject  hobj=words.hash_function();
    std::size_t hval_a=hobj("Mantosh");
    std::size_t hval_b=hobj("Macintosh");
    display("hval_a = ");
    display(hval_a);

    display("hval_b = ");
    display(hval_b);
}


void learn_hash_stuff(void) {
    read_and_store();
    adjust_loadfactor_rehash();
    use_hash_function();
}


int main(int argc, const char* argv[]) {
    learn_hash_stuff();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.



//Console Output Begins

$ ./test
Bucket Count:1741    Max Bucket Count:576460752303423487
mathematical,word was found on the bucket before hash: 13

Bucket Count:2357    Max Bucket Count:576460752303423487
mathematical,word was found on the bucket after rehash: 2188

Current Load Factor is: 0.465846

hval_a =
5777427744200277933

hval_b =
10830944572245689348

//Console Output Ends



Chapter:10 Forward List

Forward list ('std::forward_list') is another container class introduced in C++11. If we read what the ISO C++ standard says about this container class, we get the following:

"It is intended that forward list has zero space or time overhead relative to a hand-written C-style singly linked list. Features that would conflict with that goal have been omitted."


As the standard mentions zero space overhead, no additional bookkeeping information (such as the element count) can be stored in the forward list object. This is the reason why there is no size() member function for this particular class. If we need the size of this container, we have to compute it ourselves.


The other fundamental difference between this container class and 'std::list' is that their iterator types are different. Forward list has forward iterators, compared to the bidirectional iterators of 'std::list'. In fact, this is why the class is named forward list. Due to this, there are some limitations on which standard STL algorithms can be used with forward list.


So do not think that we can replace all 'std::list' usage with forward list just because it is more optimized and occupies less memory for its internal bookkeeping. They are fundamentally not the same, and any logic which requires bidirectional iterators would break immediately with forward list. They have been designed to serve different purposes rather than to replace each other. Think of using the standard forward list where your code uses a C-style singly linked list. We get many advantages, as it is a standard container class and has a very nice and elegant interface for doing many things.


With the above concepts in mind, we can start using forward list without any issue. The usage is like 'std::list', 'std::vector' or any other sequential container class.


The current program explains the basic usage of the forward list container class. It is pretty simple, although a few interfaces may look strange to some readers. Forward list has some fundamental limitations compared to the other sequential container classes; however, these so-called limitations follow from its design philosophy. Forward list does not provide any interface to insert an element at the end of the sequence (push_back). This makes complete sense: forward list does not store any extra information such as an end node, it stores only the start node. Starting from the start node with only forward iterator support, such an operation cannot be done efficiently. However, interfaces are provided to insert elements at the beginning of the sequence, as that is more natural for a forward list.



//Code Begins

#include<iostream>
#include<forward_list>
#include<initializer_list>


template<typename C>
void display_containers(const C& cnt) {
    // print every element of the container, tab separated
    for (const auto& index : cnt)
        std::cout<<index<<"\t";
    std::cout<<std::endl;
}

void learn_forward_list(void){
    std::forward_list<int> flist_x{1,2,3,4,5};
    display_containers(flist_x);

    // emplace_front inserts at the head, so 9,8,7,6 appear reversed
    std::initializer_list<int> ilist_x{9,8,7,6};
    for(const auto& ii:ilist_x)
        flist_x.emplace_front(ii);

    display_containers(flist_x);
    flist_x.push_front(0);
    display_containers(flist_x);
    flist_x.reverse();   // member function, not std::reverse
    display_containers(flist_x);
    flist_x.sort();      // member function, not std::sort
    display_containers(flist_x);
}

int main(int argc, const char* argv[]) {
    learn_forward_list();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.



//Console Output Begins
$ ./test
1    2    3    4    5   
6    7    8    9    1    2    3    4    5   
0    6    7    8    9    1    2    3    4    5   
5    4    3    2    1    9    8    7    6    0   
0    1    2    3    4    5    6    7    8    9   
//Console Output Ends




This is another example of how to use the forward list container class, very much similar to our earlier program. In this case, however, I have shown how having only forward iterator support actually imposes some limitations. Reversing the elements of a sequence is a very common task we perform with container classes. If we try to use the 'std::reverse' STL algorithm on a forward list, the program does not compile: if we check the manual, we can see that 'std::reverse' requires bidirectional iterators, while forward list supports only forward iterators.


However, the standard provides a member 'reverse' for the forward list class. In general, the standard provides all the efficient and meaningful interfaces for this class. If you need something beyond what forward list offers, you may want to use 'std::list' or 'std::vector'. For a small number of elements, though, it makes sense to use forward list, as it consumes less memory than its 'std::list' counterpart.



//Code Begins

#include<iostream>
#include<list>
#include<forward_list>
#include<algorithm>

template<typename C>
void display_containers(const C& cnt) {
    // print every element of the container, tab separated
    for (const auto& index : cnt)
        std::cout<<index<<"\t";
    std::cout<<std::endl;
}

void learn_forward_list(void){
    std::forward_list<int> flist_x{1,2,3,4,5};
    display_containers(flist_x);
    // does not compile: std::reverse needs bidirectional iterators
    /* std::reverse(flist_x.begin(), flist_x.end()); */
    flist_x.reverse();
    display_containers(flist_x);

    std::list<int> list_y{6,7,8,9,10};
    display_containers(list_y);
    std::reverse(list_y.begin(), list_y.end());
    display_containers(list_y);

    // does not compile: forward_list has no size() member
    /* auto flist_size = flist_x.size(); */
    auto list_size = list_y.size();
    std::cout<<"list size:"<<list_size<<std::endl;
}


int main(int argc, const char* argv[]) {
    learn_forward_list();
    return 0;
}

//Code Ends



When I run the above program, I get the following output. If I uncomment the two lines, I get the compilation error shown below, which was expected. The message is a bit verbose, as we can expect from an STL interface; here I have truncated it to show just the meaningful part. All it is trying to convey is that this class does not support the bidirectional iterators required by the algorithm, and that it has no 'size()' interface, which we have tried to use in our program.


//Console Output Begins
$ ./test
1    2    3    4    5   
5    4    3    2    1   
6    7    8    9    10   
10    9    8    7    6   
list size:5

// With uncomment the both line
$ g++ -g -gdwarf-2 -std=c++11 -Wall test_2.cpp -o test_2
test_2.cpp: In function ‘void learn_forward_list()’:
test_2.cpp:25:28: error: ‘class std::forward_list<int>’
has no member named ‘size’
  auto flist_size = flist_x.size();
                            ^
In file included from /usr/include/c++/4.8/algorithm:62:0,
                 from test_2.cpp:4:
/usr/include/c++/4.8/bits/stl_algo.h: In instantiation of
‘void std::reverse(_BIter, _BIter)
[with _BIter = std::_Fwd_list_iterator<int>]’:
test_2.cpp:16:45:   required from here
...............
‘__reverse(std::_Fwd_list_iterator<int>&,
std::_Fwd_list_iterator<int>&,
std::__iterator_traits<std::_Fwd_list_iterator<int>,
 true>::iterator_category)’
std::__reverse(__first, __last, std::__iterator_category(__first));

//Console Output Ends




Chapter:09 Delegating Constructors

Delegating constructors are a mechanism where one constructor can delegate its work to another constructor. This mechanism supports the reuse of already implemented code. Normally we should write one constructor in the most generic form; once we define such a constructor, we can use it when defining the other constructors. This is known as the delegation mechanism.


Prior to this feature, we normally had to write one common function and call it from almost every constructor body. Ideally, however, we should not call any member function inside a constructor body, because the object may not be in a valid state until construction completes. In a way, in-class initializers, 'default/delete' functions, and delegating constructors are closely related features and are normally used together.


A delegating constructor must be invoked from the member-initializer list, outside the constructor body. If someone tries to call another constructor from inside the body, it does not delegate at all: it merely creates and discards a temporary object, which can cause ill effects in your program. As I have mentioned, ideally we should not use any other member functions inside a constructor body, since the object is yet to be fully created; this holds true for constructors too, and hence they should not be invoked inside the body of any of your constructors. This is the key point of this concept.



//Code Begins

#include<iostream>
#include<cstring>
#include<algorithm>

class x {
public:
    // most generic constructor: 'length' is set first, then the
    // in-class initializer for 'ptr' allocates 'length' bytes
    x(int len): length(len) {std::fill(ptr,ptr+length,'\0');}

    // delegates to x(int) from the member-initializer list
    x(): x{1} {}

    // delegates to x(int) to allocate, then copies the string;
    // the cast avoids a narrowing error in the braced delegation
    x(const char* n):x{static_cast<int>(strlen(n))+1}{strncpy(ptr,n,length);}

    ~x() {delete[] ptr;}

    void display(void) {std::cout<<ptr<<std::endl;}
private:
    int length{20};
    char* ptr{new char[length]};
};


void learn_delegating_constructor(void)
{
    x o_a;
    o_a.display();

    x o_b(10);
    o_b.display();

    // string literals are const in C++11, so 'const char*' is required
    const char* input{"Mantosh"};
    x  o_c(input);
    o_c.display();
}
}

int main(int argc, const char* argv[]) {
    learn_delegating_constructor();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.



//Console Output Begins
$ ./test
Mantosh
//Console Output Ends

Chapter:08 In Class Initializers

In-class initializers are a basic and relatively easy concept to use. The fundamental idea is to write the default values of your member variables once, in the class definition, where they are shared by all constructors. A constructor then only needs to override the members for which it receives new values; all other members keep their default in-class initializer values.


This has a great advantage over the older way of doing this. Suppose we have written a particular class with all its constructors. If in the future we add a new member variable, we would have to go to each constructor and initialize the new member with its appropriate default value. However, if we use an in-class initializer for that member, we do not have to change any of the constructors: the member will hold the value from its in-class initializer in all objects of the class.


This is important because constructors establish the class invariants with appropriate values, so there must not be any mistake here; otherwise the result might be different from what we expect. As I said, I found this a very simple and very useful concept, so I use it wherever it is applicable.



Whenever we write a class, we would like to initialize all the member variables with some appropriate default values. This is very important, as once the constructor has been called for an object, any member function can potentially be used. To ensure this, we normally write a default constructor which simply initializes all member variables with their default values.


The same thing can be done in a more elegant way with C++11. We can use in-class initializers for all member variables and then use the 'default' version of the constructor. Now when client code creates an object of this class, all member variables have their default values. This is also useful when writing parametrized constructors: in those, we simply initialize the members for which we have received values from the client code, while all other members still have their default values. To appreciate the real benefit of this method, the reader should start using it.




//Code Begins

#include<iostream>
#include<string>

struct information {
    information()=default;
    ~information()=default;
    void display_info(void){
        std::cout<<height<<"\t"<<age<<"\t"<<money<<std::endl;
    }
    // empty braces value-initialize: all three members become zero
    int height{};
    int age{};
    float money{};
};


void learn_inclass_initializer(void){
    information oi_x;
    oi_x.display_info();
}

int main(int argc, const char* argv[]) {
    learn_inclass_initializer();
    return 0;
}

//Code Ends



When I run the above program, I get the following output.



//Console Output Begins
$ ./test
0    0    0
//Console Output Ends




The example below is very similar to the one above, but this class has five member variables, so there are potentially many combinations in which they can be initialized. With in-class initializers we give every member a default value; each constructor then only updates the members it receives, and the remaining members keep their defaults. Hopefully this program helps the reader understand the usefulness of this concept.




//Code Begins

#include<iostream>
#include<string>

class x {
public:
    x()=default;
    x(std::string n):name(n){}
    x(std::string n,bool pflag):name(n),printflag(pflag){}
    x(int a,int h,std::string n,std::string c,bool pflag):
        age(a),height(h),name(n),country(c),printflag(pflag) {}
    ~x()=default;

    void display_info(void){
        if(printflag) {
            std::cout<<age<<"\t"<<height<<"\t"
                    <<name<<"\t"<<country<<"\t"<<printflag<<std::endl;
        }
    }
private:
    int age{};
    int height{};
    std::string name;
    std::string country{"India"};
    bool printflag{true};
};


void learn_inclass_initializer(void){
    x o_a;
    o_a.display_info();
    std::string name{"Mantosh"};
    x o_b(name);
    o_b.display_info();

    int a{62};
    int h{184};
    std::string n{"Bjarne"};
    std::string c{"USA"};
    bool pflag{true};
    x o_c(a,h,n,c,pflag);
    o_c.display_info();
}

int main(int argc, const char* argv[]) {
    learn_inclass_initializer();
    return 0;
}

//Code Ends




When I run the above program, I get the following output.



//Console Output Begins
$ ./test
0    0           India    1
0    0    Mantosh    India    1
62    184    Bjarne    USA        1
//Console Output Ends