2017-01-23

Partway through an epoch, and then again once the epoch ends, I see this warning about an empty matrix being copied. What usually causes this warning? "The same matrix with dim [0, 0] has been transferred between different devices for 20 times."

01/23/2017 13:06:49: Epoch[ 1 of 50]-Minibatch[691301-691400]: ce = 0.06757763 * 9404; errs = 1.595% * 9404; time = 14.3775s; samplesPerSecond = 654.1
01/23/2017 13:07:04: Epoch[ 1 of 50]-Minibatch[691401-691500]: ce = 0.08411693 * 9784; errs = 1.962% * 9784; time = 15.1554s; samplesPerSecond = 645.6
01/23/2017 13:07:18: Epoch[ 1 of 50]-Minibatch[691501-691600]: ce = 0.07443892 * 9847; errs = 1.696% * 9847; time = 14.1284s; samplesPerSecond = 697.0
WARNING: The same matrix with dim [0, 0] has been transferred between different devices for 20 times.
01/23/2017 13:07:33: Epoch[ 1 of 50]-Minibatch[691601-691700]: ce = 0.07692308 * 9815; errs = 1.854% * 9815; time = 14.4867s; samplesPerSecond = 677.5
WARNING: The same matrix with dim [0, 0] has been transferred between different devices for 20 times.
01/23/2017 13:07:48: Epoch[ 1 of 50]-Minibatch[691701-691800]: ce = 0.08028341 * 9809; errs = 1.906% * 9809; time = 14.7772s; samplesPerSecond = 663.8
01/23/2017 13:08:03: Epoch[ 1 of 50]-Minibatch[691801-691900]: ce = 0.09192892 * 10073; errs = 2.214% * 10073; time = 14.8481s; samplesPerSecond = 678.4
01/23/2017 13:08:17: Epoch[ 1 of 50]-Minibatch[691901-692000]: ce = 0.07414725 * 9616; errs = 1.841% * 9616; time = 14.9059s; samplesPerSecond = 645.1
01/23/2017 13:08:32: Finished Epoch[ 1 of 50]: [Training] ce = 0.08177092 * 67573150; errs = 1.962% * 67573150; totalSamplesSeen = 67573150; learningRatePerSample = 0.0020000001; epochTime=104968s
WARNING: The same matrix with dim [0, 0] has been transferred between different devices for 20 times.
[the warning line above is repeated 21 more times]

Answer


I have never seen this with an empty matrix, but when the matrix has nonzero dimensions it is caused by an operation that is not supported on the GPU (typically a computation involving sparse matrices, in either the forward or the backward pass). The message is usually triggered by long recurrences, where at every step the offending operation adds yet another GPU-CPU-GPU round trip for the matrix in question.
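The mechanism behind the message is easy to picture: a per-matrix counter is incremented on every cross-device move, and a warning fires once the count hits a threshold. Here is a minimal plain-Python sketch of that idea; the class, method, and threshold names are all hypothetical and do not reflect CNTK's actual internals:

```python
# Illustrative sketch of a per-matrix device-transfer counter, similar in
# spirit to the check behind CNTK's warning. All names are hypothetical.

TRANSFER_WARN_THRESHOLD = 20  # assumed threshold, matching the log text

class TrackedMatrix:
    def __init__(self, rows, cols, device="gpu0"):
        self.dim = (rows, cols)
        self.device = device
        self.transfer_count = 0

    def to(self, device, log=print):
        """Move the matrix to another device, counting every transfer."""
        if device == self.device:
            return self  # no move, no count
        self.transfer_count += 1
        self.device = device
        if self.transfer_count == TRANSFER_WARN_THRESHOLD:
            log("WARNING: The same matrix with dim [%d, %d] has been "
                "transferred between different devices for %d times."
                % (self.dim[0], self.dim[1], self.transfer_count))
        return self

# An op with no GPU support forces a CPU round trip on every recurrence
# step, so the counter climbs by two per step:
messages = []
m = TrackedMatrix(0, 0)
for _ in range(10):                    # 10 recurrence steps
    m.to("cpu", log=messages.append)   # fall back to the CPU for the op
    m.to("gpu0", log=messages.append)  # move the result back to the GPU
# After 20 transfers the warning has fired exactly once.
```

This also explains why the warning can mention a [0, 0] matrix: the counter belongs to the matrix object, so even a matrix that is currently empty accumulates transfers if it keeps bouncing between devices.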
