Train the model:
chexnet_classifier.train(
    input_fn=lambda: input_fn(
        True, FLAGS.data_dir, FLAGS.batch_size, FLAGS.epochs_per_eval))
Evaluate the model and print the results:
eval_results = chexnet_classifier.evaluate(
    input_fn=lambda: input_fn(False, FLAGS.data_dir, FLAGS.batch_size))
lr = reduce_lr_hook.update_lr(eval_results['loss'])
print(eval_results)
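The reduce_lr_hook used above is defined earlier in the training script and lowers the learning rate when the evaluation loss plateaus. As a rough, minimal sketch of what such a helper's update_lr method can look like (the class name and the factor/patience defaults below are illustrative assumptions, not the paper's actual implementation):

class ReduceLROnPlateau(object):
    """Sketch only: track eval loss and shrink the LR when it stops improving."""

    def __init__(self, initial_lr, factor=0.1, patience=2):
        self.lr = initial_lr          # current learning rate
        self.factor = factor          # multiplier applied on a plateau
        self.patience = patience      # evals to tolerate without improvement
        self.best_loss = float('inf')
        self.wait = 0

    def update_lr(self, eval_loss):
        if eval_loss < self.best_loss:
            self.best_loss = eval_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr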
3.3 Save the Trained Model with TensorFlow Serving for Inference
Export the trained model as a SavedModel with the Estimator method export_savedmodel, which exports the inference graph as a SavedModel into the given directory [9][10]:
def export_saved_model(chexnet_classifier):
    # Serving input receiver matching the training image shape
    shape = [_DEFAULT_IMAGE_SIZE, _DEFAULT_IMAGE_SIZE, _NUM_CHANNELS]
    input_receiver_fn = export.build_tensor_serving_input_receiver_fn(
        shape, batch_size=FLAGS.batch_size)
    chexnet_classifier.export_savedmodel(FLAGS.export_dir, input_receiver_fn)
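A quick way to sanity-check the export is to load the SavedModel back and run a dummy batch through it. A minimal sketch using TF 1.x's tf.contrib.predictor, reusing FLAGS and the shape constants from the script above (the timestamped-subdirectory lookup and the 'input' feed key are assumptions that depend on how the serving signature was built):

import os
import numpy as np
import tensorflow as tf

# export_savedmodel writes each export to a new timestamped subdirectory
export_base = FLAGS.export_dir
latest = os.path.join(export_base, sorted(os.listdir(export_base))[-1])
predictor = tf.contrib.predictor.from_saved_model(latest)

# Feed a dummy batch shaped like the serving input receiver expects
dummy_batch = np.zeros(
    [FLAGS.batch_size, _DEFAULT_IMAGE_SIZE, _DEFAULT_IMAGE_SIZE, _NUM_CHANNELS],
    dtype=np.float32)
print(predictor({'input': dummy_batch}))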
3.4 Freeze the Saved Model (optional)
Convert the SavedModel to a frozen graph, in which the model's variables are folded into constants:
def convert_savedmodel_to_frozen_graph(savedmodel_dir, output_dir):
    # Look up the default serving signature in the SavedModel's meta graph
    meta_graph = get_serving_meta_graph_def(savedmodel_dir)
    signature_def = tf.contrib.saved_model.get_signature_def_by_key(
        meta_graph,
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY)
    # ... freeze the graph and import it as g with return_tensors ...
    # Run the imported frozen graph to verify the conversion
    output = return_tensors[0].outputs[0]
    with tf.Session(graph=g, config=get_gpu_config()) as sess:
        result = sess.run([output])
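The excerpt above elides the freeze step itself, between the signature lookup and the final session run. As a rough sketch of that step under TF 1.x (the helper name freeze_savedmodel and the output_node_names parameter are assumptions for illustration; the paper's own helper may differ):

import tensorflow as tf

def freeze_savedmodel(savedmodel_dir, output_node_names):
    # Reload the SavedModel, then fold its variables into constants so the
    # whole model becomes a single self-contained GraphDef.
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], savedmodel_dir)
        return tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), output_node_names)

# Import the frozen GraphDef into a fresh graph for the verification run
frozen_graph_def = freeze_savedmodel(savedmodel_dir, output_node_names)
g = tf.Graph()
with g.as_default():
    return_tensors = tf.import_graph_def(
        frozen_graph_def, return_elements=output_node_names, name='')

Because return_elements is given as operation names here, tf.import_graph_def returns Operation objects, which is why the snippet above reads the output tensor via return_tensors[0].outputs[0].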