Speeding up Model Training with Multithreading and GSFRS

written by Rahat Ahmed Talukder, Notre Dame University Bangladesh


We live in a multicore world where great things can happen in parallel, and parallel processing can deliver enormous performance gains. Even our own bodies work through the organized, parallel activation of billions of individual neurons. In the machine learning (ML) lifecycle, different workloads are commonly parallelized across a large VM, letting you take full advantage of the machine and maximize the use of your notebook session. Python's Global Interpreter Lock (GIL) normally prevents threads from running Python code in parallel, but many of the scientific libraries used by data scientists (NumPy, Pandas, scikit-learn, ...) release the GIL inside their heavy routines, so those routines can run effectively on multiple threads. Keep in mind that when a dataset is large, threads are often more practical than processes, because each separate process may need its own copy of the data in memory.
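To make the point concrete, here is a minimal sketch of thread-based parallelism over a NumPy array. The function and variable names are illustrative, not part of GSFRS; the example only shows that, because NumPy releases the GIL in its numeric kernels, a `ThreadPoolExecutor` can process chunks of one shared array in parallel without copying the data into separate processes.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def column_norms(block):
    # Sum of squares down each column, then square root.
    return np.sqrt((block ** 2).sum(axis=0))

rng = np.random.default_rng(0)
data = rng.standard_normal((8_000, 400))

# Split the rows into chunks; each thread works on one chunk.
# Threads share the same memory, so no copy of `data` is made.
chunks = np.array_split(data, 4, axis=0)
with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(lambda b: (b ** 2).sum(axis=0), chunks))

# Combine the per-chunk partial sums into the final column norms.
norms = np.sqrt(np.sum(partial, axis=0))
```

Because the heavy arithmetic happens inside NumPy (with the GIL released), the four workers genuinely overlap, and the result is identical to the single-threaded computation.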


GSFRS stands for Giant Signal File Random Sampler: a tool that lets us access any specific portion of a huge file at any time, with low resource consumption, without reading the whole file. In the real world, dataset sizes are often very large, which is a challenge for every data science practitioner: simply scanning such a file end to end takes a long time, so we need a technique that speeds the work up. This is where GSFRS helps. By sampling exactly the records we need, it accomplishes the same task in far less time, and it can do so while other data is being processed concurrently.



Since hardware parallelism is expected to keep growing, these techniques will become ever more integral to solving data science problems quickly. A natural next step is to combine GSFRS with parallel processing during model training: because GSFRS pulls specific records out of a giant file without loading or visiting the whole file, each worker thread can fetch its own portion of the data independently, making GSFRS a strong fit for multi-core model training.
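The core idea (reading records at arbitrary offsets instead of scanning the whole file) can be sketched in a few lines. This is an illustrative sketch of the concept only, not the actual GSFRS API: the function name `sample_records`, the fixed-size record layout, and all parameters here are assumptions made for the example.

```python
import os
import random

def sample_records(path, record_size, k, seed=42):
    """Read k fixed-size records at random offsets without loading the file.

    Hypothetical helper illustrating GSFRS-style random sampling;
    the real tool's interface and file format may differ.
    """
    n_records = os.path.getsize(path) // record_size
    rng = random.Random(seed)
    picks = sorted(rng.sample(range(n_records), k))
    out = []
    with open(path, "rb") as f:
        for i in picks:
            f.seek(i * record_size)      # jump straight to the record
            out.append(f.read(record_size))
    return out
```

Because each call touches only `k` records, the cost depends on the sample size rather than the file size, which is what makes this style of access practical for files too large to load into memory.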

To learn more about GSFRS and how it works, you can visit our previous blog posts.
Thanks for your time, and we would love to hear your thoughts in the comments section. 😊
