Learning Deep Learning Model Development the Way You Learn Programming

As programmers, we can approach deep learning model development the same way we learn a programming language. We will use Keras as the example.
The whole workflow can be summarized as a 5-4-9 model: 5 steps + 4 basic elements + 9 basic layer types.
The 5 steps:
1. Construct the network model
2. Compile the model
3. Train the model
4. Evaluate the model
5. Use the model to make predictions
The 4 basic elements:
1. Network structure: assembled from the 9 basic layer types plus other layers
2. Activation functions: e.g. relu, softmax. Rule of thumb: softmax for the final output, relu almost everywhere else
3. Loss functions: categorical_crossentropy (multi-class log loss), binary_crossentropy (binary log loss), mean_squared_error, mean_absolute_error
4. Optimizers: e.g. sgd (stochastic gradient descent), rmsprop, adagrad, adam, adadelta
The 9 basic layer types
Three main model layers:
1. Fully connected layer: Dense
2. Convolutional layers: e.g. Conv1D, Conv2D
3. Recurrent layers: e.g. LSTM, GRU
Three auxiliary layers:
1. Activation layer
2. Dropout layer
3. Pooling layers
Three adapter layers for connecting heterogeneous networks:
1. Embedding layer: used as the first layer, converting the input data for the rest of the network
2. Flatten layer: the bridge between convolutional layers and fully connected layers
3. Permute layer: the interface between RNNs and CNNs
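To build intuition for what the two most common adapter layers do, here is a minimal numpy sketch (not Keras code) of the lookup an Embedding layer performs and the reshape a Flatten layer performs; all the array shapes and values below are made up for illustration:

```python
import numpy as np

# An Embedding layer is essentially a trainable lookup table that maps
# integer token ids to dense vectors. Toy table: 5 tokens, 3-dim vectors.
embedding_table = np.arange(15, dtype=np.float32).reshape(5, 3)

token_ids = np.array([2, 0, 4])        # a hypothetical input sequence
embedded = embedding_table[token_ids]  # shape (3, 3): one vector per token

# A Flatten layer collapses all non-batch dimensions into one, so that a
# convolutional feature map can feed a fully connected (Dense) layer.
feature_map = np.zeros((2, 4, 4, 8))   # (batch, height, width, channels)
flattened = feature_map.reshape(feature_map.shape[0], -1)

print(embedded.shape)   # (3, 3)
print(flattened.shape)  # (2, 128)
```

In Keras the table entries and layer shapes are learned or inferred for you; the sketch only shows the data movement.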
A diagram makes the relationships among them easier to grasp.
▌The Five-Step Method
The five-step method comprises the five steps for solving a problem with deep learning:
1. Construct the network model
2. Compile the model
3. Train the model
4. Evaluate the model
5. Use the model to make predictions
Among these five steps, the only truly critical one is the first: once the network structure is determined, the remaining parameters can all be set accordingly.
Constructing a network model procedurally
We start with the easiest approach to understand: constructing a network model procedurally.
Keras provides the Sequential container for procedural construction: simply add layer structures with Sequential's add method. The 9 basic layer types are covered in detail later.
Example:
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(units=64, input_dim=100))
model.add(Activation('relu'))
model.add(Dense(units=10))
model.add(Activation('softmax'))
Which layer structures suit which kinds of problems is covered in the examples later.
Compiling the model
Once the model is constructed, the next step is to compile it by calling Sequential's compile method.
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
Compilation requires specifying two basic elements: loss, the loss function, and optimizer, the optimization function.
If the basic behavior is enough, a string name suffices. To configure more parameters, instantiate the corresponding class instead. For example, to equip stochastic gradient descent with Nesterov momentum, just create an SGD object:
from keras.optimizers import SGD
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True))
lr is the learning rate.
Training the model
Call the fit function with the input values x, the labeled values y, the number of training epochs, and the batch_size:
model.fit(x_train, y_train, epochs=5, batch_size=32)
Evaluating the model
How well the model trained cannot be judged on the training data; it must be evaluated on test data:
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
Using the model for prediction
Prediction is the whole point of training:
classes = model.predict(x_test, batch_size=128)
▌The 4 Basic Elements
Network structure
Networks are mainly assembled from the layer structures described later. How do you design the structure? Consult the literature: whether it is a 19-layer VGG-19 or a 34-layer ResNet, you only need to implement what the paper's diagram shows.
Activation functions
For multi-class classification, the last layer uses softmax.
Most other deep learning layers use relu.
Binary classification can use sigmoid.
Shallow neural networks can also use tanh.
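These four activation functions are just simple formulas. Here is a numpy sketch of relu, sigmoid, softmax, and tanh (the sample input values are arbitrary) so you can see what each one does:

```python
import numpy as np

def relu(x):
    # max(0, x): negative values are clipped to zero
    return np.maximum(0.0, x)

def sigmoid(x):
    # squashes each value into (0, 1); suits binary classification outputs
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # turns a vector into a probability distribution; subtracting the max
    # first is the usual trick for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))      # [0. 0. 3.]
print(sigmoid(x))   # each entry in (0, 1)
print(softmax(x))   # entries sum to 1, usable as class probabilities
print(np.tanh(x))   # each entry in (-1, 1)
```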
Loss functions
categorical_crossentropy: multi-class log loss
binary_crossentropy: binary log loss
mean_squared_error: mean squared error
mean_absolute_error: mean absolute error
For multi-class classification, categorical_crossentropy is the main choice.
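To make the loss names concrete, here is a numpy sketch computing three of them by hand, using made-up labels and predictions; these follow the standard formulas that Keras averages over a batch:

```python
import numpy as np

# categorical_crossentropy: -sum(y_true * log(y_pred)) over the classes
y_true = np.array([0.0, 1.0, 0.0])   # one-hot label: the sample is class 1
y_pred = np.array([0.1, 0.7, 0.2])   # made-up predicted probabilities
cce = -np.sum(y_true * np.log(y_pred))   # = -log(0.7) ≈ 0.357

# mean_squared_error and mean_absolute_error, typical for regression
t = np.array([1.0, 2.0, 3.0])        # made-up regression targets
p = np.array([1.5, 2.0, 2.0])        # made-up predictions
mse = np.mean((t - p) ** 2)          # (0.25 + 0 + 1) / 3 ≈ 0.417
mae = np.mean(np.abs(t - p))         # (0.5 + 0 + 1) / 3 = 0.5

print(cce, mse, mae)
```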
Optimizers
sgd: stochastic gradient descent
adagrad: adaptive gradient descent
adadelta: a further refinement of adagrad
rmsprop
adam
The examples later in this article mainly use the latter optimizers.
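As a sketch of what an optimizer actually does, here is plain SGD and SGD with momentum written out by hand for the toy objective f(x) = x², whose gradient is 2x; the learning rate and momentum values are arbitrary, and this is the textbook update rule rather than Keras internals:

```python
def grad(x):
    # gradient of f(x) = x**2
    return 2.0 * x

lr = 0.1

# Plain SGD: repeatedly step against the gradient
x = 5.0
for _ in range(100):
    x -= lr * grad(x)

# SGD with momentum: a velocity term accumulates past gradients,
# the idea the momentum/nesterov options of Keras' SGD build on
x_m, v, momentum = 5.0, 0.0, 0.9
for _ in range(100):
    v = momentum * v - lr * grad(x_m)
    x_m += v

print(x)    # very close to the minimum at 0
print(x_m)  # also converges toward 0
```

The adaptive optimizers (adagrad, adadelta, rmsprop, adam) refine this same loop by scaling the step per parameter from gradient history.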
Functional programming in deep learning
The basic layers introduced above can be add-ed into a Sequential container to form a chain, but they are also callable objects themselves, and calling one returns another callable object. They can therefore be treated as functions and composed by calling them.
Here is an official example:
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)  # starts training (data and labels not shown here)
Why use a functional style?
Because complex network structures are not all linear chains that can simply be add-ed into a container: there are parallel branches, reused layers, and all sorts of topologies. This is where callables show their strength.
For example, the Google Inception model below contains parallel branches.
Our code naturally mirrors that parallelism with parallel towers: one input, input_img, is reused by three of them:
import keras
from keras.layers import Conv2D, MaxPooling2D, Input

input_img = Input(shape=(256, 256, 3))

tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)

output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)
▌Case Studies
Handwritten digit recognition on MNIST with a CNN
Talk is cheap; let's walk through an MNIST example that follows the five-step method.
First, the core model code. Since the model is linear, we again use the Sequential container:
model = Sequential()
The core is two convolutional layers:
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
To guard against overfitting, we add a max pooling layer and a Dropout layer:
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
Next, the data flows into fully connected layers for output; bridging the two requires a Flatten layer:
model.add(Flatten())
Then comes a fully connected layer with a relu activation.
Still worried about overfitting? Add another Dropout layer!
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
Finally, the output goes through a fully connected layer with a softmax activation:
model.add(Dense(num_classes, activation='softmax'))
Next we compile the model: the loss function is categorical_crossentropy (multi-class log loss), and the optimizer is Adadelta.
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
Here is the complete runnable code:
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Now for a surprise example: translation between languages.
Machine translation: translating among multiple languages
Did translating English to Chinese and back torment you in your school days?
No need to worry anymore: as long as we have a parallel corpus of two languages, we can train a model that acts like a machine translator.
First, download a bilingual dataset from http://www.manythings.org/anki/
Then, as usual, we look at the core code first. No surprises here: this kind of sequence-processing problem always calls for an RNN, usually an LSTM.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
The optimizer is rmsprop, and the loss function is again categorical_crossentropy.
validation_split randomly splits a dataset into a training part and a validation part.
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
Finally, since training a model takes real effort, we save the result:
model.save('s2s.h5')
To close, here is the complete machine translation code, a bit over 100 lines including comments and blank lines, for anyone who wants it:
from __future__ import print_function

from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

batch_size = 64  # Batch size for training.
epochs = 100  # Number of epochs to train for.
latent_dim = 256  # Latent dimensionality of the encoding space.
num_samples = 10000  # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'fra-eng/fra.txt'

# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text = line.split('\t')
    # We use tab as the start sequence character
    # for the targets, and '\n' as end sequence character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)

input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)

input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
# Save model
model.save('s2s.h5')

# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)

# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())


def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
           len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [h, c]

    return decoded_sentence


for seq_index in range(100):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)
