
Learner reviews and feedback for Optimize TensorFlow Models For Deployment with TensorRT by Coursera Project Network

4.6 stars
59 ratings

About the Course

This is a hands-on, guided project on optimizing TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour-long project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput. Prerequisites: To successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API. Note: This course works best for learners based in the North America region. We're currently working on providing the same experience in other regions.
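For orientation, a minimal sketch of the TF-TRT conversion workflow the project covers, assuming a TensorFlow 2.x build with TensorRT support and a SavedModel exported to the hypothetical directory 'resnet_saved_model' (directory names here are illustrative, not from the course materials):

```python
# Minimal TF-TRT conversion sketch (assumes TensorFlow 2.x with TensorRT support
# and a SavedModel at the hypothetical path 'resnet_saved_model').
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Start from the default conversion parameters and request FP16 precision;
# 'FP32' and 'INT8' are the other precision modes explored in the project.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet_saved_model',
    conversion_params=conversion_params)

# Build the TensorRT-optimized graph; INT8 mode additionally requires a
# calibration_input_fn that yields representative input batches.
converter.convert()

# Save the optimized SavedModel, ready for inference benchmarking.
converter.save('resnet_saved_model_tftrt_fp16')
```

Comparing throughput of the saved FP32, FP16, and INT8 variants against the original SavedModel is the kind of parameter-tuning exercise the project description refers to.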

Top Reviews

LS

Jun 3, 2021

Great workshop, all the concepts were very well explained.

AA

Mar 14, 2022

The first to introduce such a rare and important topic.


1 - 10 of 10 Reviews for Optimize TensorFlow Models For Deployment with TensorRT

by Awais A

Mar 28, 2021

by Jorge G

Feb 25, 2021

by Luis S

Jun 4, 2021

by Abdelrahman A

Mar 15, 2022

by Fabian I M N

Apr 20, 2021

by Nusrat I

Apr 16, 2021

by Chandra S

Dec 13, 2020

by Maftuna E

Sep 10, 2020

by Vignesh R

Jul 8, 2021

by Yilber R

Oct 1, 2020