Deep learning explained

Deep neural networks can solve the most challenging problems, but require abundant computing power and massive amounts of data

While all of the frameworks mentioned above are primarily Python, Deeplearning4j (DL4J), originally from Skymind and now an Eclipse Foundation project, is written primarily in Java and Scala. DL4J is compatible with Apache Spark and Hadoop.

ONNX was originally proposed as an open ecosystem for interchangeable AI models; it now provides a runtime, ONNX Runtime, in addition to the interchange file format.

TensorRT, from Nvidia, is another runtime for AI models, built specifically to take advantage of Nvidia GPUs. ONNX Runtime can use TensorRT as a plug-in.
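As a minimal sketch of how the plug-in arrangement looks in practice (assuming the `onnxruntime` Python package built with the TensorRT execution provider, and a hypothetical exported file named `model.onnx`), you hand ONNX Runtime an ordered list of execution providers and it falls back down the list when one is unavailable:

```python
import os

# Execution providers in priority order: ONNX Runtime tries each in turn
# and falls back to the next when one is unavailable on this machine.
providers = [
    "TensorrtExecutionProvider",  # Nvidia TensorRT, for supported GPUs
    "CUDAExecutionProvider",      # plain CUDA fallback
    "CPUExecutionProvider",       # always available
]

# "model.onnx" is a hypothetical exported model; the guard lets this
# sketch degrade gracefully when the file or package is absent.
if os.path.exists("model.onnx"):
    import onnxruntime as ort
    session = ort.InferenceSession("model.onnx", providers=providers)
```

The same session API is used regardless of which provider ends up executing the graph, which is what makes the TensorRT acceleration a drop-in choice rather than a code rewrite.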

Deep transfer learning

Transfer learning is the process of adapting a model trained on one data set to another data set. It is much faster than training a model from scratch, and it requires far less training data.
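To make the idea concrete, here is a toy plain-Python illustration (a sketch of the principle, not how any deep learning framework implements it): a one-parameter linear model is first fit to task A, and its learned weight is then reused as the starting point for a closely related task B, which converges in fewer gradient steps than starting from zero.

```python
def steps_to_fit(xs, ys, w0, lr=0.05, tol=0.01, max_steps=10_000):
    """Gradient-descend w on mean squared error for the model y = w * x;
    return (steps taken, final w) once every residual is within tol."""
    w = w0
    for step in range(max_steps):
        if all(abs(w * x - y) <= tol for x, y in zip(xs, ys)):
            return step, w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return max_steps, w

xs = [1.0, 2.0, 3.0]

# "Pretrain" on task A: y = 2.0 * x, starting from scratch.
_, w_pretrained = steps_to_fit(xs, [2.0 * x for x in xs], w0=0.0)

# Task B: y = 2.1 * x, a closely related task.
scratch_steps, _ = steps_to_fit(xs, [2.1 * x for x in xs], w0=0.0)
warm_steps, _ = steps_to_fit(xs, [2.1 * x for x in xs], w0=w_pretrained)

# Warm-starting from task A reaches task B's fit in fewer steps.
assert warm_steps < scratch_steps
```

In real deep transfer learning the reused part is the trained feature-extraction layers of a large network, with only the final layers retrained, but the payoff is the same: fewer steps and less data for the new task.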

Google Cloud AutoML implements deep transfer learning for vision, translation, and natural language. Azure Machine Learning Service offers similar deep transfer learning services: custom vision, customizable speech and translation, and custom search.

Distributed deep learning training

While TensorFlow has its own scheme for coordinating distributed training with parameter servers, a more general approach uses MPI (the Message Passing Interface). Horovod, a distributed training framework for TensorFlow, Keras, and PyTorch created at Uber, uses Open MPI along with Nvidia's NCCL communication library. Horovod achieves between 68 percent and 90 percent scaling efficiency, depending on the model being trained.
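Conceptually, Horovod's core primitive is ring-allreduce: each worker reduces one chunk of the gradient vector, then every worker gathers all the reduced chunks, so no central parameter server is needed. The toy, sequential Python simulation below (my own sketch of the outcome, not Horovod's actual parallel MPI/NCCL implementation) shows the averaging that results:

```python
def ring_allreduce_average(worker_grads):
    """Sequentially simulate the result of a ring-allreduce average.

    worker_grads: one equal-length gradient list per worker. In a real
    ring-allreduce, worker i sums chunk i during the scatter-reduce phase
    and every worker receives all summed chunks during the allgather
    phase; here we simply compute that final outcome chunk by chunk.
    """
    n = len(worker_grads)
    length = len(worker_grads[0])
    bounds = [length * i // n for i in range(n + 1)]  # chunk boundaries
    averaged = []
    for i in range(n):  # chunk i is "owned" (reduced) by worker i
        lo, hi = bounds[i], bounds[i + 1]
        averaged.extend(
            sum(g[j] for g in worker_grads) / n for j in range(lo, hi)
        )
    return averaged

# Two workers with different local gradients end up with the same average.
grads = [[1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0]]
print(ring_allreduce_average(grads))  # [2.0, 3.0, 4.0, 5.0]
```

Because each worker only ever sends 1/n of the gradient per step, bandwidth per worker stays roughly constant as workers are added, which is what makes the high scaling efficiencies quoted above possible.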

Deep learning books and resources

You can learn a lot about deep learning simply by installing one of the deep learning packages, trying out its samples, and reading its tutorials. For more depth, consider one or more of the following resources.

This story, "Deep learning explained" was originally published by InfoWorld.
