
How to save the best training model in Keras

  1. Save only the best model
  2. Save every model that improves
  3. Load the model
  4. Parameter reference

Save only the best model

from keras.callbacks import ModelCheckpoint
from keras import optimizers

filepath = 'weights.best.hdf5'
# Each time the monitored metric improves, the previous file is overwritten.
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max', period=2)
callbacks_list = [checkpoint]

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=2e-6, decay=1e-7),
              metrics=['acc'])

history1 = model.fit_generator(
          train_generator,
          steps_per_epoch=100,
          epochs=40,
          validation_data=validation_generator,
          validation_steps=100,
          callbacks=callbacks_list)

Part of the output:

Epoch 2/40
100/100 [==============================] - 24s 241ms/step - loss: 0.2715 - acc: 0.9380 - val_loss: 0.1635 - val_acc: 0.9600
 
Epoch 00002: val_acc improved from -inf to 0.96000, saving model to weights.best.hdf5
Epoch 3/40
100/100 [==============================] - 24s 240ms/step - loss: 0.1623 - acc: 0.9575 - val_loss: 0.1116 - val_acc: 0.9730
Epoch 4/40
100/100 [==============================] - 24s 242ms/step - loss: 0.1143 - acc: 0.9730 - val_loss: 0.0799 - val_acc: 0.9840
 
Epoch 00004: val_acc improved from 0.96000 to 0.98400, saving model to weights.best.hdf5

Save every model that improves:

from keras.callbacks import ModelCheckpoint

# checkpoint
filepath = "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
# Whenever the validation metric improves during training, a new file is saved.
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

history1 = model.fit_generator(
          train_generator,
          steps_per_epoch=100,
          epochs=40,
          validation_data=validation_generator,
          validation_steps=100,
          callbacks=callbacks_list)

Since I only wanted the single best model, I did not try saving every improved model myself; run it and see what the output looks like.
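As a rough idea of what to expect: the filename pattern above produces one file per improvement (for example weights-improvement-04-0.98.hdf5). Below is a minimal sketch, not from the original post, that picks the file with the highest val_acc encoded in its name; the helper name best_checkpoint is my own.

import glob
import re

# Hypothetical helper: scan checkpoints saved with the pattern
# "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5" and return the
# path whose embedded val_acc value is highest.
def best_checkpoint(pattern="weights-improvement-*.hdf5"):
    best_path, best_acc = None, -1.0
    for path in glob.glob(pattern):
        match = re.search(r"-(\d+\.\d+)\.hdf5$", path)
        if match and float(match.group(1)) > best_acc:
            best_acc, best_path = float(match.group(1)), path
    return best_path

print(best_checkpoint())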

Load the best model

# load weights: load the saved weights into an already-built model
model.load_weights('weights.best.hdf5')
# To load the entire saved model (architecture + weights) instead, use
# keras.models.load_model('weights.best.hdf5') rather than load_weights.
# compile
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
print("Created model and loaded weights from hdf5 file")

# estimate on the validation generator
scores = model.evaluate_generator(validation_generator, steps=30)
print("{0}: {1:.2f}%".format(model.metrics_names[1], scores[1]*100))

ModelCheckpoint parameter reference

keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)

filepath: string, path where the model file is saved; it may contain formatting placeholders such as {epoch} and the monitored metric.

monitor: the quantity to monitor.

verbose: verbosity mode, 0 or 1. With verbose=1 the callback prints a message each time it saves, e.g. "Epoch 00001: saving model to ..."; with verbose=0 nothing is printed.

save_best_only: when True, the model is only saved when the monitored quantity improves, so the latest best model according to the monitored quantity is never overwritten by a worse one.

mode: one of 'auto', 'min', 'max'. With save_best_only=True this decides what counts as an improvement: for val_acc the mode should be 'max', for val_loss it should be 'min'. In 'auto' mode the direction is inferred from the name of the monitored quantity.

save_weights_only: if True, only the model's weights are saved; otherwise the full model (architecture, configuration, etc.) is saved.

period: number of epochs between checkpoints.
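To illustrate the less commonly used parameters, here is a sketch of my own (not from the original post) that saves weights only, lets mode='auto' infer the direction from val_loss, and only considers saving at the end of every fifth epoch:

from keras.callbacks import ModelCheckpoint

# Weights-only checkpoints produce smaller files, but the model
# architecture must be rebuilt before load_weights() can be called.
# mode='auto' infers that a lower val_loss is better; period=5 means
# the callback only checks the metric every 5 epochs.
checkpoint = ModelCheckpoint('weights-epoch{epoch:02d}.hdf5',
                             monitor='val_loss',
                             verbose=1,
                             save_best_only=True,
                             save_weights_only=True,
                             mode='auto',
                             period=5)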
