White Papers

CheXNet Inference with Nvidia T4 on Dell EMC PowerEdge R7425
--image_file=image.jpg \
--int8 \
--output_dir=/home/chest-x-ray/output_tensorrt_chexnet_1541777429/ \
--batch_size=1 \
--input_node="input_tensor" \
--output_node="chexnet_sigmoid_tensor"
Where:
--savedmodel_dir: The location of a saved model directory to be converted into a Frozen Graph
--image_file: The location of a JPEG image that will be passed in for inference
--int8: Benchmark the model with TensorRT™ using int8 precision
--output_dir: The location where output files will be saved
--batch_size: Batch size for inference
--input_node: The name of the graph input node where the float image array should be fed for prediction
--output_node: The name of the graph output node
Sample script output:
On completion, the script prints overall metrics and timing information for the inference session.
==========================
network: tftrt_int8_frozen_graph.pb, batchsize 1, steps 100
fps median: 284.6, mean: 304.3, uncertainty: 5.5, jitter: 4.4
latency median: 0.00351, mean: 0.00337, 99th_p: 0.00383, 99th_uncertainty: 0.00053
==========================
Throughput (images/sec): 304
Latency (ms): 3.37
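Summary statistics like those above (median, mean, and 99th-percentile latency, plus the corresponding frames-per-second figures) can be derived from per-step latency measurements. The sketch below shows one plausible way to compute them; the function name, the percentile-index convention, and the omission of the uncertainty/jitter terms are assumptions, not the benchmark script's actual implementation.

```python
# Hedged sketch: derive median/mean/99th-percentile latency and fps
# from a list of per-step latencies (seconds). FPS per step is simply
# batch_size divided by that step's latency.
import statistics


def summarize(latencies, batch_size=1):
    lat_sorted = sorted(latencies)
    n = len(lat_sorted)
    # Nearest-rank style 99th percentile over n samples.
    p99_index = min(n - 1, int(round(0.99 * (n - 1))))
    fps = [batch_size / t for t in latencies]
    return {
        "latency_median": statistics.median(latencies),
        "latency_mean": statistics.fmean(latencies),
        "latency_99th": lat_sorted[p99_index],
        "fps_median": statistics.median(fps),
        "fps_mean": statistics.fmean(fps),
    }
```

Note that mean fps and 1/mean-latency generally differ (as they do in the output above, 304.3 vs. roughly 297), because the mean of reciprocals is not the reciprocal of the mean.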