API
 
TensorRTEngine Class Reference

The TensorRTEngine class implements a generic TensorRT model. More...

#include <trt_engine_syh.hpp>


Public Member Functions

 TensorRTEngine (const samplesCommon::OnnxSampleParams &params)
 
bool build ()
 Builds the network engine.
 
bool infer ()
 Runs the TensorRT inference engine for this sample.
 

Private Member Functions

bool constructNetwork (SampleUniquePtr< nvinfer1::IBuilder > &builder, SampleUniquePtr< nvinfer1::INetworkDefinition > &network, SampleUniquePtr< nvinfer1::IBuilderConfig > &config, SampleUniquePtr< nvonnxparser::IParser > &parser)
 Parses an ONNX model for MNIST and creates a TensorRT network.
 
bool processInput (float *filedata, const samplesCommon::BufferManager &buffers)
 Reads the input and stores the result in a managed buffer.
 

Private Attributes

samplesCommon::OnnxSampleParams mParams
 The parameters for the sample.
 
nvinfer1::Dims mInputDims
 The dimensions of the input to the network.
 
nvinfer1::Dims mOutputDims
 The dimensions of the output of the network.
 
int mNumber {0}
 The number to classify.
 
std::shared_ptr< nvinfer1::IRuntime > mRuntime
 The TensorRT runtime used to deserialize the engine.
 
std::shared_ptr< nvinfer1::ICudaEngine > mEngine
 The TensorRT engine used to run the network.
 

Detailed Description

The TensorRTEngine class implements a generic TensorRT model.

It creates the network from an ONNX model.

Definition at line 54 of file trt_engine_syh.hpp.
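
A minimal usage sketch, assuming the header above and a hypothetical helper getOnnxSampleParams() that fills in the ONNX sample parameters (model path, tensor names, precision flags); per the "Referenced by main()" notes below, main() drives build() and infer() in this way, but the real argument handling may differ:

#include <cstdlib>

#include "trt_engine_syh.hpp"

int main(int argc, char **argv)
{
    // Hypothetical helper: populate samplesCommon::OnnxSampleParams from the command line.
    samplesCommon::OnnxSampleParams params = getOnnxSampleParams(argc, argv);

    TensorRTEngine engine(params);

    // Parse the ONNX model and build the TensorRT engine (mEngine).
    if (!engine.build())
    {
        return EXIT_FAILURE;
    }

    // Run inference with the built engine.
    if (!engine.infer())
    {
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}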

Constructor & Destructor Documentation

◆ TensorRTEngine()

TensorRTEngine::TensorRTEngine(const samplesCommon::OnnxSampleParams &params)
inline

Definition at line 57 of file trt_engine_syh.hpp.
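
The constructor only takes the parameter struct. A sketch of filling it, assuming the field names of samplesCommon::OnnxSampleParams from the TensorRT samples' common utilities; the MNIST file and tensor names below are purely illustrative:

samplesCommon::OnnxSampleParams params;
params.dataDirs.push_back("data/mnist/");                // search path(s) for the model and input files
params.onnxFileName = "mnist.onnx";                      // ONNX model to parse (illustrative)
params.inputTensorNames.push_back("Input3");             // input binding name (illustrative)
params.outputTensorNames.push_back("Plus214_Output_0");  // output binding name (illustrative)
params.fp16 = false;                                     // optional FP16 builder flag

TensorRTEngine engine(params); // the inline constructor presumably just stores params in mParams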

Member Function Documentation

◆ build()

bool TensorRTEngine::build ( )

Builds the network engine.

Creates the network, configures the builder and creates the network engine.

This function creates the ONNX MNIST network by parsing the ONNX model and builds the engine (mEngine) that will be used to run MNIST.

Returns
true if the engine was created successfully and false otherwise

Definition at line 101 of file trt_engine_syh.hpp.

Referenced by main().
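
The listing below is an illustrative sketch of a build() that follows the standard TensorRT ONNX-sample pattern (builder, network, config, parser, serialized plan deserialized through mRuntime). It assumes the samples' SampleUniquePtr, sample::gLogger and samplesCommon::InferDeleter helpers and is not the verbatim implementation from trt_engine_syh.hpp:

bool TensorRTEngine::build()
{
    auto builder = SampleUniquePtr<nvinfer1::IBuilder>(
        nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger()));
    if (!builder) { return false; }

    // Explicit-batch network, as required for ONNX models.
    const auto explicitBatch =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = SampleUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch));
    if (!network) { return false; }

    auto config = SampleUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    if (!config) { return false; }

    auto parser = SampleUniquePtr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, sample::gLogger.getTRTLogger()));
    if (!parser) { return false; }

    // Parse the ONNX model and apply the builder configuration.
    if (!constructNetwork(builder, network, config, parser)) { return false; }

    // Build a serialized plan, then deserialize it into an engine via the runtime.
    SampleUniquePtr<nvinfer1::IHostMemory> plan{builder->buildSerializedNetwork(*network, *config)};
    if (!plan) { return false; }

    mRuntime = std::shared_ptr<nvinfer1::IRuntime>(
        nvinfer1::createInferRuntime(sample::gLogger.getTRTLogger()));
    if (!mRuntime) { return false; }

    mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
        mRuntime->deserializeCudaEngine(plan->data(), plan->size()), samplesCommon::InferDeleter());
    if (!mEngine) { return false; }

    // Record the input/output dimensions for later use by processInput() and infer().
    mInputDims = network->getInput(0)->getDimensions();
    mOutputDims = network->getOutput(0)->getDimensions();
    return true;
}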

◆ constructNetwork()

bool TensorRTEngine::constructNetwork(SampleUniquePtr< nvinfer1::IBuilder > &builder,
                                      SampleUniquePtr< nvinfer1::INetworkDefinition > &network,
                                      SampleUniquePtr< nvinfer1::IBuilderConfig > &config,
                                      SampleUniquePtr< nvonnxparser::IParser > &parser)
private

Parses an ONNX model for MNIST and creates a TensorRT network.

Uses an ONNX parser to create the ONNX MNIST network and marks the output layers.

Parameters
network   Pointer to the network that will be populated with the ONNX MNIST network
builder   Pointer to the engine builder

Definition at line 179 of file trt_engine_syh.hpp.

Referenced by build().
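
A hedged sketch of the typical body, following the TensorRT ONNX-sample pattern: the parser consumes the ONNX file located via the sample's data directories, and the builder config picks up the optional precision flags. locateFile and sample::gLogger come from the samples' common utilities; the real code at line 179 of trt_engine_syh.hpp may differ in detail:

bool TensorRTEngine::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder> &builder,
                                      SampleUniquePtr<nvinfer1::INetworkDefinition> &network,
                                      SampleUniquePtr<nvinfer1::IBuilderConfig> &config,
                                      SampleUniquePtr<nvonnxparser::IParser> &parser)
{
    // Parse the ONNX model; the parser populates `network` and reports
    // problems through the logger at the requested severity.
    const bool parsed = parser->parseFromFile(
        locateFile(mParams.onnxFileName, mParams.dataDirs).c_str(),
        static_cast<int>(sample::gLogger.getReportableSeverity()));
    if (!parsed)
    {
        return false;
    }

    // Optional precision modes taken from the sample parameters.
    if (mParams.fp16)
    {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
    // (An INT8 path would additionally set BuilderFlag::kINT8 and provide
    //  calibration data or per-tensor dynamic ranges.)

    return true;
}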

◆ infer()

bool TensorRTEngine::infer ( )

Runs the TensorRT inference engine for this sample.

This function is the main execution function of the sample. It allocates the buffers, sets the inputs, and executes the engine.

Definition at line 215 of file trt_engine_syh.hpp.

Referenced by main().
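
A sketch of the usual flow, assuming the samples' BufferManager utilities (getHostBuffer / copyInputToDevice / copyOutputToHost / getDeviceBindings) and executeV2; the zero-filled input here is only a stand-in for whatever data the real sample loads:

bool TensorRTEngine::infer()
{
    // The BufferManager allocates host and device buffers for every engine binding.
    samplesCommon::BufferManager buffers(mEngine);

    auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());
    if (!context)
    {
        return false;
    }

    // Stand-in input: a zero-filled buffer sized from the recorded input dimensions.
    std::vector<float> fileData(samplesCommon::volume(mInputDims), 0.0F);
    if (!processInput(fileData.data(), buffers))
    {
        return false;
    }

    // Copy inputs to the device, run the network, copy outputs back to the host.
    buffers.copyInputToDevice();
    if (!context->executeV2(buffers.getDeviceBindings().data()))
    {
        return false;
    }
    buffers.copyOutputToHost();

    // The classification result (mNumber) would be read from the output host buffer here.
    return true;
}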

◆ processInput()

bool TensorRTEngine::processInput(float *filedata,
                                  const samplesCommon::BufferManager &buffers)
private

Reads the input and stores the result in a managed buffer.

Definition at line 263 of file trt_engine_syh.hpp.

Referenced by infer().
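
A minimal sketch of the copy into the managed buffer; the NCHW input layout and the absence of any normalisation are assumptions, and the real implementation at line 263 of trt_engine_syh.hpp may preprocess the data differently:

bool TensorRTEngine::processInput(float *filedata, const samplesCommon::BufferManager &buffers)
{
    // Assume an NCHW input of shape 1x1xHxW, as recorded in mInputDims by build().
    const int inputH = mInputDims.d[2];
    const int inputW = mInputDims.d[3];

    // getHostBuffer returns the host staging area for the named input tensor.
    float *hostDataBuffer = static_cast<float *>(buffers.getHostBuffer(mParams.inputTensorNames[0]));

    // Copy the pixel data into the managed buffer (a real sample might also
    // normalise or invert the values here).
    for (int i = 0; i < inputH * inputW; ++i)
    {
        hostDataBuffer[i] = filedata[i];
    }
    return true;
}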

Member Data Documentation

◆ mEngine

std::shared_ptr<nvinfer1::ICudaEngine> TensorRTEngine::mEngine
private

The TensorRT engine used to run the network.

Definition at line 77 of file trt_engine_syh.hpp.

Referenced by build(), and infer().

◆ mInputDims

nvinfer1::Dims TensorRTEngine::mInputDims
private

The dimensions of the input to the network.

Definition at line 72 of file trt_engine_syh.hpp.

Referenced by build(), and processInput().

◆ mNumber

int TensorRTEngine::mNumber {0}
private

The number to classify.

Definition at line 74 of file trt_engine_syh.hpp.

◆ mOutputDims

nvinfer1::Dims TensorRTEngine::mOutputDims
private

The dimensions of the output of the network.

Definition at line 73 of file trt_engine_syh.hpp.

Referenced by build().

◆ mParams

samplesCommon::OnnxSampleParams TensorRTEngine::mParams
private

The parameters for the sample.

Definition at line 70 of file trt_engine_syh.hpp.

Referenced by constructNetwork(), infer(), and processInput().

◆ mRuntime

std::shared_ptr<nvinfer1::IRuntime> TensorRTEngine::mRuntime
private

The TensorRT runtime used to deserialize the engine.

Definition at line 76 of file trt_engine_syh.hpp.

Referenced by build().


The documentation for this class was generated from the following file:
trt_engine_syh.hpp