20.2. Deep Convolutional Generative Adversarial Networks

In Section 20.1, we introduced the basic ideas behind how GANs work. We showed that they can draw samples from some simple, easy-to-sample distribution, like a uniform or normal distribution, and transform them into samples that appear to match the distribution of some dataset. And while our example of matching a 2D Gaussian distribution got the point across, it is not especially exciting.
In this section, we will demonstrate how you can use GANs to generate photorealistic images. We will be basing our models on the deep convolutional GANs (DCGAN) introduced in Radford et al. (2015). We will borrow the convolutional architecture that has proven so successful for discriminative computer vision problems and show how, via GANs, it can be leveraged to generate photorealistic images.
import warnings
import torch
import torchvision
from torch import nn
from d2l import torch as d2l
from mxnet import gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()
import tensorflow as tf
from d2l import tensorflow as d2l
20.2.1. The Pokemon Dataset

The dataset we will use is a collection of Pokemon sprites obtained from pokemondb. First download, extract and load this dataset.
#@save
d2l.DATA_HUB['pokemon'] = (d2l.DATA_URL + 'pokemon.zip',
                           'c065c0e2593b8b161a2d7873e42418bf6a21106c')

data_dir = d2l.download_extract('pokemon')
pokemon = torchvision.datasets.ImageFolder(data_dir)
Downloading ../data/pokemon.zip from http://d2l-data.s3-accelerate.amazonaws.com/pokemon.zip...
#@save
d2l.DATA_HUB['pokemon'] = (d2l.DATA_URL + 'pokemon.zip',
                           'c065c0e2593b8b161a2d7873e42418bf6a21106c')

data_dir = d2l.download_extract('pokemon')
pokemon = gluon.data.vision.datasets.ImageFolderDataset(data_dir)
Downloading ../data/pokemon.zip from http://d2l-data.s3-accelerate.amazonaws.com/pokemon.zip...
#@save
d2l.DATA_HUB['pokemon'] = (d2l.DATA_URL + 'pokemon.zip',
                           'c065c0e2593b8b161a2d7873e42418bf6a21106c')

data_dir = d2l.download_extract('pokemon')
batch_size = 256
pokemon = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir, batch_size=batch_size, image_size=(64, 64))
Downloading ../data/pokemon.zip from http://d2l-data.s3-accelerate.amazonaws.com/pokemon.zip...
Found 40597 files belonging to 721 classes.
We resize each image into $64 \times 64$. The ToTensor transformation will project the pixel values into $[0, 1]$, while our generator will use the tanh function to obtain outputs in $[-1, 1]$. Therefore we normalize the data with $0.5$ mean and $0.5$ standard deviation to match the value range.
batch_size = 256
transformer = torchvision.transforms.Compose([
    torchvision.transforms.Resize((64, 64)),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(0.5, 0.5)
])
pokemon.transform = transformer
data_iter = torch.utils.data.DataLoader(
    pokemon, batch_size=batch_size,
    shuffle=True, num_workers=d2l.get_dataloader_workers())
batch_size = 256
transformer = gluon.data.vision.transforms.Compose([
    gluon.data.vision.transforms.Resize(64),
    gluon.data.vision.transforms.ToTensor(),
    gluon.data.vision.transforms.Normalize(0.5, 0.5)
])
data_iter = gluon.data.DataLoader(
    pokemon.transform_first(transformer), batch_size=batch_size,
    shuffle=True, num_workers=d2l.get_dataloader_workers())
def transform_func(X):
    X = X / 255.
    X = (X - 0.5) / (0.5)
    return X

# For TF>=2.4 use `num_parallel_calls = tf.data.AUTOTUNE`
data_iter = pokemon.map(lambda x, y: (transform_func(x), y),
                        num_parallel_calls=tf.data.experimental.AUTOTUNE)
data_iter = data_iter.cache().shuffle(buffer_size=1000).prefetch(
    buffer_size=tf.data.experimental.AUTOTUNE)
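As a quick sanity check, the normalized pixel values should now lie in $[-1, 1]$. The following is a minimal sketch assuming the PyTorch data_iter defined above:

X, _ = next(iter(data_iter))
print(X.min().item(), X.max().item())  # Expected to be close to -1.0 and 1.0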
Let's visualize the first 20 images.
warnings.filterwarnings('ignore')
d2l.set_figsize((4, 4))
for X, y in data_iter:
    imgs = X[:20,:,:,:].permute(0, 2, 3, 1) / 2 + 0.5
    d2l.show_images(imgs, num_rows=4, num_cols=5)
    break
d2l.set_figsize((4, 4))
for X, y in data_iter:
    imgs = X[:20,:,:,:].transpose(0, 2, 3, 1) / 2 + 0.5
    d2l.show_images(imgs, num_rows=4, num_cols=5)
    break
d2l.set_figsize(figsize=(4, 4))
for X, y in data_iter.take(1):
    imgs = X[:20, :, :, :] / 2 + 0.5
    d2l.show_images(imgs, num_rows=4, num_cols=5)
20.2.2. The Generator

The generator needs to map the noise variable $\mathbf{z} \in \mathbb{R}^d$, a length-$d$ vector, to an RGB image with width and height of $64 \times 64$. In Section 14.11 we introduced the fully convolutional network, which uses transposed convolution layers (refer to Section 14.10) to enlarge the input size. The basic block of the generator contains a transposed convolution layer followed by batch normalization and ReLU activation.
class G_block(nn.Module):
    def __init__(self, out_channels, in_channels=3, kernel_size=4, strides=2,
                 padding=1, **kwargs):
        super(G_block, self).__init__(**kwargs)
        self.conv2d_trans = nn.ConvTranspose2d(in_channels, out_channels,
                                               kernel_size, strides, padding,
                                               bias=False)
        self.batch_norm = nn.BatchNorm2d(out_channels)
        self.activation = nn.ReLU()

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d_trans(X)))
class G_block(nn.Block):
    def __init__(self, channels, kernel_size=4, strides=2, padding=1,
                 **kwargs):
        super(G_block, self).__init__(**kwargs)
        self.conv2d_trans = nn.Conv2DTranspose(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.Activation('relu')

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d_trans(X)))
class G_block(tf.keras.layers.Layer):
    def __init__(self, out_channels, kernel_size=4, strides=2,
                 padding="same", **kwargs):
        super().__init__(**kwargs)
        self.conv2d_trans = tf.keras.layers.Conv2DTranspose(
            out_channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = tf.keras.layers.BatchNormalization()
        self.activation = tf.keras.layers.ReLU()

    def call(self, X):
        return self.activation(self.batch_norm(self.conv2d_trans(X)))
By default, the transposed convolution layer uses a $k_h = k_w = 4$ kernel, $s_h = s_w = 2$ strides, and $p_h = p_w = 1$ padding. With an input shape of $n_h \times n_w = 16 \times 16$, the generator block will double the input's width and height.
$$\begin{aligned}
n_h' \times n_w' &= [n_h k_h - (n_h-1)(k_h-s_h) - 2p_h] \times [n_w k_w - (n_w-1)(k_w-s_w) - 2p_w]\\
&= [k_h + s_h(n_h-1) - 2p_h] \times [k_w + s_w(n_w-1) - 2p_w]\\
&= [4 + 2 \times (16-1) - 2 \times 1] \times [4 + 2 \times (16-1) - 2 \times 1]\\
&= 32 \times 32.
\end{aligned} \tag{20.2.1}$$
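To make the arithmetic in (20.2.1) easy to check, here is a minimal sketch in plain Python (the helper name trans_conv_out is ours, not part of d2l) that evaluates the simplified form $k + s(n-1) - 2p$ along one dimension:

def trans_conv_out(n, k=4, s=2, p=1):
    """Output size of a transposed convolution along one dimension."""
    return k + s * (n - 1) - 2 * p

print(trans_conv_out(16))           # 32: the default block doubles 16x16
print(trans_conv_out(1, s=1, p=0))  # 4: the configuration used further below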
x = torch.zeros((2, 3, 16, 16))
g_blk = G_block(20)
g_blk(x).shape
torch.Size([2, 20, 32, 32])
x = np.zeros((2, 3, 16, 16))
g_blk = G_block(20)
g_blk.initialize()
g_blk(x).shape
(2, 20, 32, 32)
x = tf.zeros((2, 16, 16, 3))  # Channel last convention
g_blk = G_block(20)
g_blk(x).shape
TensorShape([2, 32, 32, 20])
If we change the transposed convolution layer to a $4 \times 4$ kernel, $1 \times 1$ strides and zero padding, then with an input size of $1 \times 1$, the output will have its width and height increased by 3 respectively.
x = torch.zeros((2, 3, 1, 1))
g_blk = G_block(20, strides=1, padding=0)
g_blk(x).shape
torch.Size([2, 20, 4, 4])
x = np.zeros((2, 3, 1, 1))
g_blk = G_block(20, strides=1, padding=0)
g_blk.initialize()
g_blk(x).shape
(2, 20, 4, 4)
x = tf.zeros((2, 1, 1, 3))
# `padding="valid"` corresponds to no padding
g_blk = G_block(20, strides=1, padding="valid")
g_blk(x).shape
TensorShape([2, 4, 4, 20])
The generator consists of four basic blocks that increase the input's width and height from 1 to 32. At the same time, it first projects the latent variable into $64 \times 8$ channels, and then halves the channels each time. At last, a transposed convolution layer is used to generate the output. It further doubles the width and height to match the desired $64 \times 64$ shape, and reduces the channel size to 3. The tanh activation function is applied to project output values into the $(-1, 1)$ range.
n_G = 64
net_G = nn.Sequential(
    G_block(in_channels=100, out_channels=n_G*8,
            strides=1, padding=0),                  # Output: (64 * 8, 4, 4)
    G_block(in_channels=n_G*8, out_channels=n_G*4), # Output: (64 * 4, 8, 8)
    G_block(in_channels=n_G*4, out_channels=n_G*2), # Output: (64 * 2, 16, 16)
    G_block(in_channels=n_G*2, out_channels=n_G),   # Output: (64, 32, 32)
    nn.ConvTranspose2d(in_channels=n_G, out_channels=3,
                       kernel_size=4, stride=2, padding=1, bias=False),
    nn.Tanh())  # Output: (3, 64, 64)
n_G = 64
net_G = nn.Sequential()
net_G.add(G_block(n_G*8, strides=1, padding=0),  # Output: (64 * 8, 4, 4)
          G_block(n_G*4),  # Output: (64 * 4, 8, 8)
          G_block(n_G*2),  # Output: (64 * 2, 16, 16)
          G_block(n_G),    # Output: (64, 32, 32)
          nn.Conv2DTranspose(
              3, kernel_size=4, strides=2, padding=1, use_bias=False,
              activation='tanh'))  # Output: (3, 64, 64)
n_G = 64
net_G = tf.keras.Sequential([
    # Output: (4, 4, 64 * 8)
    G_block(out_channels=n_G*8, strides=1, padding="valid"),
    G_block(out_channels=n_G*4),  # Output: (8, 8, 64 * 4)
    G_block(out_channels=n_G*2),  # Output: (16, 16, 64 * 2)
    G_block(out_channels=n_G),    # Output: (32, 32, 64)
    # Output: (64, 64, 3)
    tf.keras.layers.Conv2DTranspose(
        3, kernel_size=4, strides=2, padding="same",
        use_bias=False, activation="tanh")])
Generate a 100 dimensional latent variable to verify the generator's output shape.
x = torch.zeros((1, 100, 1, 1))
net_G(x).shape
torch.Size([1, 3, 64, 64])
x = np.zeros((1, 100, 1, 1))
net_G.initialize()
net_G(x).shape
(1, 3, 64, 64)
x = tf.zeros((1, 1, 1, 100))
net_G(x).shape
TensorShape([1, 64, 64, 3])
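Before moving on, it can help to trace how each block transforms the tensor. The following is a minimal sketch assuming the PyTorch net_G defined above (nn.Sequential is iterable, so we can step through its layers):

x = torch.zeros((1, 100, 1, 1))
for layer in net_G:
    x = layer(x)
    print(layer.__class__.__name__, 'output shape:', x.shape)

This should print the per-block shapes annotated in the comments above, ending with (1, 3, 64, 64).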
20.2.3. The Discriminator

The discriminator is a normal convolutional network except that it uses a leaky ReLU as its activation function. Given $\alpha \in [0, 1]$, its definition is
$$\textrm{leaky ReLU}(x) = \begin{cases}x & \textrm{if}\ x > 0\\ \alpha x & \textrm{otherwise}\end{cases}. \tag{20.2.2}$$
As can be seen, it is a normal ReLU if $\alpha = 0$, and an identity function if $\alpha = 1$. For $\alpha \in (0, 1)$, leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem, where a neuron might always output a negative value and therefore cannot make any progress since the gradient of ReLU is 0.
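One quick way to see the "dying ReLU" problem in action is to compare gradients at a negative input. The following is a small illustrative check in PyTorch (matching the imports above):

x = torch.tensor([-1.0], requires_grad=True)
nn.ReLU()(x).sum().backward()
print(x.grad)  # tensor([0.]): no gradient flows, so the neuron cannot recover

x.grad = None  # Reset the accumulated gradient
nn.LeakyReLU(0.2)(x).sum().backward()
print(x.grad)  # tensor([0.2000]): a small gradient still flows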
alphas = [0, .2, .4, .6, .8, 1]
x = torch.arange(-2, 1, 0.1)
Y = [nn.LeakyReLU(alpha)(x).detach().numpy() for alpha in alphas]
d2l.plot(x.detach().numpy(), Y, 'x', 'y', alphas)
alphas = [0, .2, .4, .6, .8, 1]
x = np.arange(-2, 1, 0.1)
Y = [nn.LeakyReLU(alpha)(x).asnumpy() for alpha in alphas]
d2l.plot(x.asnumpy(), Y, 'x', 'y', alphas)
alphas = [0, .2, .4, .6, .8, 1]
x = tf.range(-2, 1, 0.1)
Y = [tf.keras.layers.LeakyReLU(alpha)(x).numpy() for alpha in alphas]
d2l.plot(x.numpy(), Y, 'x', 'y', alphas)
The basic block of the discriminator is a convolution layer followed by a batch normalization layer and a leaky ReLU activation. The hyperparameters of the convolution layer are similar to those of the transposed convolution layer in the generator block.
class D_block(nn.Module):
    def __init__(self, out_channels, in_channels=3, kernel_size=4,
                 strides=2, padding=1, alpha=0.2, **kwargs):
        super(D_block, self).__init__(**kwargs)
        self.conv2d = nn.Conv2d(in_channels, out_channels, kernel_size,
                                strides, padding, bias=False)
        self.batch_norm = nn.BatchNorm2d(out_channels)
        self.activation = nn.LeakyReLU(alpha, inplace=True)

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d(X)))
class D_block(nn.Block):
    def __init__(self, channels, kernel_size=4, strides=2, padding=1,
                 alpha=0.2, **kwargs):
        super(D_block, self).__init__(**kwargs)
        self.conv2d = nn.Conv2D(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.LeakyReLU(alpha)

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d(X)))
class D_block(tf.keras.layers.Layer):
    def __init__(self, out_channels, kernel_size=4, strides=2,
                 padding="same", alpha=0.2, **kwargs):
        super().__init__(**kwargs)
        self.conv2d = tf.keras.layers.Conv2D(out_channels, kernel_size,
                                             strides, padding,
                                             use_bias=False)
        self.batch_norm = tf.keras.layers.BatchNormalization()
        self.activation = tf.keras.layers.LeakyReLU(alpha)

    def call(self, X):
        return self.activation(self.batch_norm(self.conv2d(X)))
A basic block with default settings will halve the width and height of the inputs, as we demonstrated in Section 7.3. For example, given an input shape $n_h = n_w = 16$, with a kernel shape $k_h = k_w = 4$, a stride shape $s_h = s_w = 2$, and a padding shape $p_h = p_w = 1$, the output shape will be:
$$\begin{aligned}
n_h' \times n_w' &= \lfloor (n_h - k_h + 2p_h + s_h)/s_h \rfloor \times \lfloor (n_w - k_w + 2p_w + s_w)/s_w \rfloor\\
&= \lfloor (16 - 4 + 2 \times 1 + 2)/2 \rfloor \times \lfloor (16 - 4 + 2 \times 1 + 2)/2 \rfloor\\
&= 8 \times 8.
\end{aligned} \tag{20.2.3}$$
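As with the generator, this formula can be verified numerically; the helper below (conv_out is our own name, not part of d2l) is a one-line sketch of (20.2.3) along one dimension:

def conv_out(n, k=4, s=2, p=1):
    """Output size of a strided convolution along one dimension."""
    return (n - k + 2 * p + s) // s

print(conv_out(16))  # 8: the default discriminator block halves 16x16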
x = torch.zeros((2, 3, 16, 16))
d_blk = D_block(20)
d_blk(x).shape
torch.Size([2, 20, 8, 8])
x = np.zeros((2, 3, 16, 16))
d_blk = D_block(20)
d_blk.initialize()
d_blk(x).shape
(2, 20, 8, 8)
x = tf.zeros((2, 16, 16, 3))
d_blk = D_block(20)
d_blk(x).shape
TensorShape([2, 8, 8, 20])
The discriminator is a mirror of the generator.
n_D = 64
net_D = nn.Sequential(
    D_block(n_D),  # Output: (64, 32, 32)
    D_block(in_channels=n_D, out_channels=n_D*2),    # Output: (64 * 2, 16, 16)
    D_block(in_channels=n_D*2, out_channels=n_D*4),  # Output: (64 * 4, 8, 8)
    D_block(in_channels=n_D*4, out_channels=n_D*8),  # Output: (64 * 8, 4, 4)
    nn.Conv2d(in_channels=n_D*8, out_channels=1,
              kernel_size=4, bias=False))  # Output: (1, 1, 1)
n_D = 64
net_D = nn.Sequential()
net_D.add(D_block(n_D),    # Output: (64, 32, 32)
          D_block(n_D*2),  # Output: (64 * 2, 16, 16)
          D_block(n_D*4),  # Output: (64 * 4, 8, 8)
          D_block(n_D*8),  # Output: (64 * 8, 4, 4)
          nn.Conv2D(1, kernel_size=4, use_bias=False))  # Output: (1, 1, 1)
n_D = 64
net_D = tf.keras.Sequential([
    D_block(n_D),                 # Output: (32, 32, 64)
    D_block(out_channels=n_D*2),  # Output: (16, 16, 64 * 2)
    D_block(out_channels=n_D*4),  # Output: (8, 8, 64 * 4)
    D_block(out_channels=n_D*8),  # Output: (4, 4, 64 * 8)
    # Output: (1, 1, 1)
    tf.keras.layers.Conv2D(1, kernel_size=4, use_bias=False)])
It uses a convolution layer with output channel 1 as the last layer to obtain a single prediction value.
x = torch.zeros((1, 3, 64, 64))
net_D(x).shape
torch.Size([1, 1, 1, 1])
x = np.zeros((1, 3, 64, 64))
net_D.initialize()
net_D(x).shape
(1, 1, 1, 1)
x = tf.zeros((1, 64, 64, 3))
net_D(x).shape
TensorShape([1, 1, 1, 1])
20.2.4. Training

Compared to the basic GAN in Section 20.1, we use the same learning rate for both the generator and the discriminator since they are similar to each other. In addition, we change $\beta_1$ in Adam (Section 12.10) from 0.9 to 0.5. It decreases the smoothness of the momentum, the exponentially weighted moving average of past gradients, to take care of the rapidly changing gradients because the generator and the discriminator fight with each other. Besides, the randomly generated noise Z is a 4-D tensor and we are using a GPU to accelerate the computation.
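The training function below calls d2l.update_D and d2l.update_G, which were defined and saved in Section 20.1. For reference, a minimal PyTorch sketch of those two helpers (following the same logistic-loss convention; consult Section 20.1 for the authoritative versions) looks like this:

def update_D(X, Z, net_D, net_G, loss, trainer_D):
    """Update the discriminator on one batch."""
    batch_size = X.shape[0]
    ones = torch.ones((batch_size,), device=X.device)
    zeros = torch.zeros((batch_size,), device=X.device)
    trainer_D.zero_grad()
    real_Y = net_D(X)
    # Detach the fake images so no gradient flows back into the generator
    fake_X = net_G(Z)
    fake_Y = net_D(fake_X.detach())
    loss_D = (loss(real_Y, ones.reshape(real_Y.shape)) +
              loss(fake_Y, zeros.reshape(fake_Y.shape))) / 2
    loss_D.backward()
    trainer_D.step()
    return loss_D

def update_G(Z, net_D, net_G, loss, trainer_G):
    """Update the generator on one batch."""
    batch_size = Z.shape[0]
    ones = torch.ones((batch_size,), device=Z.device)
    trainer_G.zero_grad()
    # This time gradients do flow through `net_G`: the generator is
    # trained to make the discriminator label its output as real
    fake_X = net_G(Z)
    fake_Y = net_D(fake_X)
    loss_G = loss(fake_Y, ones.reshape(fake_Y.shape))
    loss_G.backward()
    trainer_G.step()
    return loss_G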
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim,
          device=d2l.try_gpu()):
    loss = nn.BCEWithLogitsLoss(reduction='sum')
    for w in net_D.parameters():
        nn.init.normal_(w, 0, 0.02)
    for w in net_G.parameters():
        nn.init.normal_(w, 0, 0.02)
    net_D, net_G = net_D.to(device), net_G.to(device)
    trainer_hp = {'lr': lr, 'betas': [0.5, 0.999]}
    trainer_D = torch.optim.Adam(net_D.parameters(), **trainer_hp)
    trainer_G = torch.optim.Adam(net_G.parameters(), **trainer_hp)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['discriminator', 'generator'])
    animator.fig.subplots_adjust(hspace=0.3)
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer = d2l.Timer()
        metric = d2l.Accumulator(3)  # loss_D, loss_G, num_examples
        for X, _ in data_iter:
            batch_size = X.shape[0]
            Z = torch.normal(0, 1, size=(batch_size, latent_dim, 1, 1))
            X, Z = X.to(device), Z.to(device)
            metric.add(d2l.update_D(X, Z, net_D, net_G, loss, trainer_D),
                       d2l.update_G(Z, net_D, net_G, loss, trainer_G),
                       batch_size)
        # Show generated examples
        Z = torch.normal(0, 1, size=(21, latent_dim, 1, 1), device=device)
        # Normalize the synthetic data to N(0, 1)
        fake_x = net_G(Z).permute(0, 2, 3, 1) / 2 + 0.5
        imgs = torch.cat(
            [torch.cat([
                fake_x[i * 7 + j].cpu().detach() for j in range(7)], dim=1)
             for i in range(len(fake_x)//7)], dim=0)
        animator.axes[1].cla()
        animator.axes[1].imshow(imgs)
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
          f'{metric[2] / timer.stop():.1f} examples/sec on {str(device)}')
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim,
          device=d2l.try_gpu()):
    loss = gluon.loss.SigmoidBCELoss()
    net_D.initialize(init=init.Normal(0.02), force_reinit=True, ctx=device)
    net_G.initialize(init=init.Normal(0.02), force_reinit=True, ctx=device)
    trainer_hp = {'learning_rate': lr, 'beta1': 0.5}
    trainer_D = gluon.Trainer(net_D.collect_params(), 'adam', trainer_hp)
    trainer_G = gluon.Trainer(net_G.collect_params(), 'adam', trainer_hp)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['discriminator', 'generator'])
    animator.fig.subplots_adjust(hspace=0.3)
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer = d2l.Timer()
        metric = d2l.Accumulator(3)  # loss_D, loss_G, num_examples
        for X, _ in data_iter:
            batch_size = X.shape[0]
            Z = np.random.normal(0, 1, size=(batch_size, latent_dim, 1, 1))
            X, Z = X.as_in_ctx(device), Z.as_in_ctx(device)
            metric.add(d2l.update_D(X, Z, net_D, net_G, loss, trainer_D),
                       d2l.update_G(Z, net_D, net_G, loss, trainer_G),
                       batch_size)
        # Show generated examples
        Z = np.random.normal(0, 1, size=(21, latent_dim, 1, 1), ctx=device)
        # Normalize the synthetic data to N(0, 1)
        fake_x = net_G(Z).transpose(0, 2, 3, 1) / 2 + 0.5
        imgs = np.concatenate(
            [np.concatenate([fake_x[i * 7 + j] for j in range(7)], axis=1)
             for i in range(len(fake_x)//7)], axis=0)
        animator.axes[1].cla()
        animator.axes[1].imshow(imgs.asnumpy())
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
          f'{metric[2] / timer.stop():.1f} examples/sec on {str(device)}')
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim,
          device=d2l.try_gpu()):
    loss = tf.keras.losses.BinaryCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.SUM)
    for w in net_D.trainable_variables:
        w.assign(tf.random.normal(mean=0, stddev=0.02, shape=w.shape))
    for w in net_G.trainable_variables:
        w.assign(tf.random.normal(mean=0, stddev=0.02, shape=w.shape))
    optimizer_hp = {'lr': lr, 'beta_1': 0.5, 'beta_2': 0.999}
    optimizer_D = tf.keras.optimizers.Adam(**optimizer_hp)
    optimizer_G = tf.keras.optimizers.Adam(**optimizer_hp)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['discriminator', 'generator'])
    animator.fig.subplots_adjust(hspace=0.3)
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer = d2l.Timer()
        metric = d2l.Accumulator(3)  # loss_D, loss_G, num_examples
        for X, _ in data_iter:
            batch_size = X.shape[0]
            Z = tf.random.normal(mean=0, stddev=1,
                                 shape=(batch_size, 1, 1, latent_dim))
            metric.add(d2l.update_D(X, Z, net_D, net_G, loss, optimizer_D),
                       d2l.update_G(Z, net_D, net_G, loss, optimizer_G),
                       batch_size)
        # Show generated examples
        Z = tf.random.normal(mean=0, stddev=1, shape=(21, 1, 1, latent_dim))
        # Normalize the synthetic data to N(0, 1)
        fake_x = net_G(Z) / 2 + 0.5
        imgs = tf.concat([tf.concat([fake_x[i * 7 + j] for j in range(7)],
                                    axis=1)
                          for i in range(len(fake_x) // 7)], axis=0)
        animator.axes[1].cla()
        animator.axes[1].imshow(imgs)
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
          f'{metric[2] / timer.stop():.1f} examples/sec on '
          f'{str(device._device_name)}')
We train the model with a small number of epochs just for demonstration. For better performance, the variable num_epochs can be set to a larger number.
latent_dim, lr, num_epochs = 100, 0.005, 20
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
loss_D 0.030, loss_G 7.203, 1026.4 examples/sec on cuda:0
latent_dim, lr, num_epochs = 100, 0.005, 20
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
loss_D 0.224, loss_G 6.386, 2260.7 examples/sec on gpu(0)
latent_dim, lr, num_epochs = 100, 0.0005, 40
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
loss_D 0.112, loss_G 4.952, 1968.2 examples/sec on /gpu:0
20.2.5. Summary

The DCGAN architecture has four convolutional layers for the Discriminator and four "fractionally-strided" convolutional layers for the Generator.
The Discriminator is a 4-layer strided convolutional network with batch normalization (except for its input layer) and leaky ReLU activations.
Leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem and helps the gradients flow more easily through the architecture.
20.2.6. Exercises

1. What will happen if we use standard ReLU activation rather than leaky ReLU?
2. Apply DCGAN on Fashion-MNIST and see which category works well and which does not.