Here is a trick for optimizing a neural network training pipeline: it gives roughly a 4x speedup on the data transfer from CPU to GPU.

Let's consider an image classification task.

We define the model, load and transform the data.

In the training loop, we transfer data to the GPU and train the network.
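
A minimal sketch of this baseline, assuming PyTorch with a torchvision dataset (the dataset, model, and hyperparameters here are placeholders, not from the original post):

```python
import torch
import torchvision
from torchvision import transforms

device = torch.device("cuda")

# ToTensor() converts uint8 pixels to float32 in [0, 1] on the CPU.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

dataset = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                       transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True)

model = torchvision.models.resnet18(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:
    # float32 tensors are copied to the GPU: 4 bytes per pixel value.
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```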

What's the problem:

If you look at a profiler trace,

- most of the time goes to the compute kernels (i.e., the training itself),
- but a noticeable share is also spent transferring data from CPU to GPU (cudaMemcpyAsync).
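
As a sketch, here is one way to see this with torch.profiler (the post doesn't say which profiler was used; the names are reused from the baseline sketch above):

```python
from torch.profiler import profile, ProfilerActivity

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for step, (images, labels) in enumerate(loader):
        images, labels = images.to(device), labels.to(device)
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        if step >= 10:  # a few steps are enough for a trace
            break

# Host-to-device copies (cudaMemcpyAsync / Memcpy HtoD) show up in the table
# next to the compute kernels.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))
```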

That transfer time can easily be reduced.

Initially, the dataset stores pixels as 8-bit integers (uint8), but the CPU-side transform converts them to 32-bit floats.
We then send these float tensors to the GPU, so every value has grown from 1 byte to 4 bytes and the transfer moves 4 times more data than it needs to.
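
For a sense of scale, take a hypothetical batch of 256 RGB images at 224x224 (the post doesn't give batch or image sizes):

```python
n = 256 * 3 * 224 * 224      # pixel values in one batch
print(n / 2**20)             # ~36.8 MiB as uint8 (1 byte per value)
print(n * 4 / 2**20)         # ~147.0 MiB as float32 (4 bytes per value)
```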

The solution:

Move the float conversion to after the transfer. That is, first send the 8-bit integers to the GPU, then convert them to floats there.
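
A minimal sketch of the reordered pipeline, under the same assumptions as the baseline above (transforms.PILToTensor keeps pixels as uint8; the division by 255 and the normalization constants are placeholders for whatever preprocessing the model expects):

```python
# Keep the data as uint8 on the CPU side.
transform = transforms.PILToTensor()   # uint8 tensor, shape (C, H, W)
dataset = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                       transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=True,
                                     pin_memory=True)

mean = torch.tensor([0.5, 0.5, 0.5], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.5, 0.5, 0.5], device=device).view(1, 3, 1, 1)

for images, labels in loader:
    # Copy 1 byte per pixel instead of 4.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # Convert to float and normalize on the GPU.
    images = images.float().div_(255).sub_(mean).div_(std)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The pin_memory=True / non_blocking=True pair isn't part of the trick itself, but pinned host memory is what lets cudaMemcpyAsync overlap with compute, so the two optimizations are usually combined.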

As a result, the data transfer step speeds up significantly.

Of course, this doesn't work everywhere; for example, in NLP we initially deal with float embeddings.
But in cases where it applies, the speedup is very noticeable.

👉  @DataScienceM