TensorFlow: MultiWorkerMirroredStrategy performance is poor (2 GPUs, 2 nodes), only ~1.3x speedup

์— ๋งŒ๋“  2019๋…„ 12์›” 02์ผ  ยท  3์ฝ”๋ฉ˜ํŠธ  ยท  ์ถœ์ฒ˜: tensorflow/tensorflow

์‹œ์Šคํ…œ ์ •๋ณด

Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes (script below)
OS Platform and Distribution: Ubuntu 18.04
TensorFlow installed from (source or binary): pip install tensorflow-gpu
TensorFlow version (use command below): 2.0
Python version: 3.6.9
CUDA/cuDNN version: 10/7.6.4.38
GPU model and memory: Tesla P4, 8 GB

ํ˜„์žฌ ํ–‰๋™ ์„ค๋ช…
์•„๋ž˜์— ์„ค๋ช… ๋œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค.

ํ…Œ์ŠคํŠธ 1 : (2 ๋Œ€์˜ ๊ธฐ๊ณ„)

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["server1:12345", "server2:12345"]
    },
    'task': {'type': 'worker', 'index': 0}
})

๋‹ค๋ฅธ ๊ธฐ๊ณ„์—์„œ

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["server1:12345", "server2:12345"]
    },
    'task': {'type': 'worker', 'index': 1}
})

์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ฒซ ๋ฒˆ์งธ ์—ํฌํฌ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ์ถฉ๋Œํ•ฉ๋‹ˆ๋‹ค.

์˜ˆ์ƒ๋˜๋Š” ๋™์ž‘ ์„ค๋ช…

15 ์ดˆ / epoch๊ฐ€ ๋„ˆ๋ฌด ๋Š๋ฆฝ๋‹ˆ๋‹ค.

ๅ›พ็‰‡

ํ…Œ์ŠคํŠธ 2 : (ํ•œ ๋Œ€์˜ ์ปดํ“จํ„ฐ)

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["server1:12345"]
    },
    'task': {'type': 'worker', 'index': 0}
})

์˜ˆ์ƒ๋˜๋Š” ๋™์ž‘ ์„ค๋ช…

์‚ฌ์šฉ ์ „๋žต๊ณผ ๋™์ผํ•œ 5s / epoch = tf.distribute.MirroredStrategy () GPU ์นด๋“œ 1 ๊ฐœ

ๅ›พ็‰‡

์•”ํ˜ธ

import ssl
import os
import json
import argparse
import time

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

ssl._create_default_https_context = ssl._create_unverified_context


def configure_cluster(worker_hosts=None, task_index=-1):
    """Set multi-worker cluster spec in TF_CONFIG environment variable.
    Args:
      worker_hosts: comma-separated list of worker ip:port pairs.
    Returns:
      Number of workers in the cluster.
    """
    tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
    if tf_config:
        num_workers = len(tf_config['cluster'].get('worker', []))
    elif worker_hosts:
        workers = worker_hosts.split(',')
        num_workers = len(workers)
        if num_workers > 1 and task_index < 0:
            raise ValueError('Must specify task_index when number of workers > 1')
        task_index = 0 if num_workers == 1 else task_index
        os.environ['TF_CONFIG'] = json.dumps({
            'cluster': {
                'worker': workers
            },
            'task': {'type': 'worker', 'index': task_index}
        })
    else:
        num_workers = 1
    return num_workers


parser = argparse.ArgumentParser(description='TensorFlow Benchmark',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--num-epochs', type=int, default=5, help='number of training epochs')
parser.add_argument('--batch-size-per-replica', type=int, default=32, help='per-replica batch size')
parser.add_argument('--worker-method', type=str, default="NCCL")
parser.add_argument('--worker-hosts', type=str, default="localhost:23456")
parser.add_argument('--worker-index', type=int, default=0)

args = parser.parse_args()

worker_num = configure_cluster(args.worker_hosts, args.worker_index)
batch_size = args.batch_size_per_replica * worker_num
print('Batch Size: %d' % batch_size)

gpus = tf.config.experimental.list_physical_devices('GPU')
print("Physical GPU Devices Num:", len(gpus))
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

if args.worker_method == "AUTO":
    communication = tf.distribute.experimental.CollectiveCommunication.AUTO
elif args.worker_method == "RING":
    communication = tf.distribute.experimental.CollectiveCommunication.RING
else:
    communication = tf.distribute.experimental.CollectiveCommunication.NCCL
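# NCCL performs the all-reduce with GPU collectives and is usually the
# fastest choice for GPU workers; RING uses gRPC-based ring all-reduce.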

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=communication)


# logical_gpus = tf.config.experimental.list_logical_devices('GPU')
# print("Logical GPU Devices Num:", len(gpus))


def resize(image, label):
    image = tf.image.resize(image, [128, 128]) / 255.0
    return image, label


# with as_supervised=True, tfds returns (image, label) pairs
dataset, info = tfds.load("tf_flowers", split=tfds.Split.TRAIN, with_info=True, as_supervised=True)
dataset = dataset.map(resize).repeat().shuffle(1024).batch(batch_size)

# options = tf.data.Options()
# options.experimental_distribute.auto_shard = False
# dataset = dataset.with_options(options)
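# Note: with MultiWorkerMirroredStrategy, Keras auto-shards this tf.data
# input across the workers by default; disabling auto-sharding (as in the
# commented-out options above) makes every worker read the full dataset.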

def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
        tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax')
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        metrics=[tf.keras.metrics.sparse_categorical_accuracy]
    )
    return model


with strategy.scope():
    multi_worker_model = build_and_compile_cnn_model()
print("Now training the distributed model")


class TimeHistory(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.times = []
        self.totaltime = time.time()

    def on_train_end(self, logs={}):
        self.totaltime = time.time() - self.totaltime

    def on_epoch_begin(self, batch, logs={}):
        self.epoch_time_start = time.time()

    def on_epoch_end(self, batch, logs={}):
        self.times.append(time.time() - self.epoch_time_start)


time_callback = TimeHistory()
steps_per_epoch = 100
print('Running benchmark...')
multi_worker_model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=args.num_epochs, callbacks=[time_callback])
per_epoch_time = np.mean(time_callback.times[1:])
print("per_epoch_time:", per_epoch_time)
img_sec = batch_size * steps_per_epoch / per_epoch_time
print("Result:  {:.1f} pic/sec".format(img_sec))


Test 2: 1 worker only, 440 pic/sec (batch_size = 128)

ํ…Œ์ŠคํŠธ 1 : ์ž‘์—…์ž 2 ๋ช…, 610 pic / sec ๏ผˆbatch_szie = 128 * 2๏ผ‰ [์˜ˆ์ƒ 440 * 2 = 800+]

์งˆ๋ฌธ 1:
dist MultiWorkerMirroredStrategy ์ž‘์—…์ž ์ˆ˜> 1, ๊ต์œก์ด ์™œ ๊ทธ๋ ‡๊ฒŒ ๋Š๋ฆฐ ์ง€

๋ฐฐ๊ณ  ์žˆ๋‹ค

Labels: TF 2.0, dist-strat, bug

๋ชจ๋“  3 ๋Œ“๊ธ€

๋ชจ๋ธ์ด ๋Š๋ฆฐ ์ด์œ ๋Š” ๋„คํŠธ์›Œํ‚น, ๋ฐ์ดํ„ฐ ์ฝ๊ธฐ, ์Šค๋ ˆ๋“œ ๊ฒฝํ•ฉ ๋“ฑ ์—ฌ๋Ÿฌ ๊ฐ€์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ”„๋กœ๊ทธ๋žจ์„ ํ”„๋กœํŒŒ์ผ ๋งํ•˜์—ฌ ๋ณ‘๋ชฉ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•œ ๋ถ€๋ถ„์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. https://www.tensorflow.org/tensorboard/tensorboard_profiling_keras

๋ฌธ์ œ ํ•ด๊ฒฐ์— ๋งŒ์กฑํ•˜์‹ญ๋‹ˆ๊นŒ?
์˜ˆ
์•„๋‹ˆ

์ง€๊ธˆ ์ข…๋ฃŒํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœํ•„์— ๋ช…๋ฐฑํ•œ ๋ฌธ์ œ๊ฐ€ ์žˆ์œผ๋ฉด ์–ธ์ œ๋“ ์ง€ ๋‹ค์‹œ ์—ด๊ฑฐ ๋‚˜ ์ƒˆ๋กœ์šด ๋ฌธ์ œ๋ฅผ ์ œ์ถœํ•˜์‹ญ์‹œ์˜ค.

์ด ํŽ˜์ด์ง€๊ฐ€ ๋„์›€์ด ๋˜์—ˆ๋‚˜์š”?
0 / 5 - 0 ๋“ฑ๊ธ‰