This tutorial describes how to use ONNX to convert a model defined in PyTorch into the ONNX format and then convert it into Caffe2. However, if you follow the tutorial's instructions to install onnx, onnx-caffe2, and Caffe2, you may run into some errors. Here I provide a solution to this problem.
The latest version of CUDA is 9.1. However, some deep learning frameworks are not yet ready for CUDA 9.1. The installation script for CUDA 9.1 is very similar to this one.
When I started programming in C/C++, one particular variable naming convention, Hungarian notation, was pretty popular. Even nowadays it is still being used. But based on my experience, I have found it very annoying. Here are some reasons I am against it:
If you want to compile the Windows version of FFmpeg, you may find it is not an easy task. Thanks to Roger Pack for his turnkey script, which makes this task much easier: ffmpeg-windows-build-helpers
Many deep learning libraries use Nvidia GPUs to accelerate computation. The CUDA Toolkit needs to be installed to make use of the GPU. The NVIDIA CUDA Deep Neural Network library (cuDNN), a GPU-accelerated library of primitives for deep neural networks, is also worth installing. The current version of CUDA is 9, and the current version of cuDNN is 7. But many deep learning libraries have yet to upgrade to these versions. So here I present a way to install CUDA 8 and cuDNN 6.
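As a rough sketch, the installation boils down to running the CUDA runfile and copying the cuDNN files into the CUDA directory. The exact filenames below are examples of what NVIDIA distributed for CUDA 8 and cuDNN 6; use the ones you actually downloaded from NVIDIA's site:

```shell
# Install CUDA 8 from the runfile downloaded from NVIDIA's website
# (filename is an example; substitute the one you downloaded)
sudo sh cuda_8.0.61_375.26_linux.run

# Install cuDNN 6 by unpacking the archive and copying its files
# into the CUDA installation directory
tar -xzvf cudnn-8.0-linux-x64-v6.0.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
```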
Here is a link to Docker images of Caffe on Ubuntu 16.04 along with Anaconda (Python 3.6 version):
Anaconda = Package Manager + Environment Manager + Additional Scientific Libraries. Based on my experience, it makes Python package management much easier. Moreover, conda is a general-purpose package management system: it is designed to manage packages and dependencies for software written in any language.
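A minimal sketch of conda's environment management (the environment name `caffe-env` and the packages are just examples):

```shell
# Create an isolated environment with a specific Python version
conda create --name caffe-env python=3.6

# Activate it and install packages into it without touching the system Python
source activate caffe-env
conda install numpy scipy

# Leave the environment when done
source deactivate
```

Newer conda releases also accept `conda activate` / `conda deactivate` in place of the `source` forms.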
If you want to install Caffe on Ubuntu 16.04 along with Anaconda (Python 3.6 version), here is an installation guide:
Installing multiple deep learning frameworks on a single system can quickly become dependency hell. Docker provides a solution to this issue.
A receiver operating characteristic (ROC) curve is “a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied” (see Wikipedia).
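A minimal sketch of that idea in plain Python: sweep the decision threshold over the classifier's scores and record one (false positive rate, true positive rate) point per threshold. The function name `roc_points` and the scores/labels below are made up for illustration:

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs, one per distinct threshold, high to low."""
    pos = sum(labels)            # number of positive examples
    neg = len(labels) - pos      # number of negative examples
    points = []
    # Each threshold classifies score >= t as positive, yielding one point.
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Made-up scores from a binary classifier, with true labels
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]
labels = [1,   1,   0,   1,   0,    0]
print(roc_points(scores, labels))
```

Plotting these points with FPR on the x-axis and TPR on the y-axis gives the ROC curve; lowering the threshold moves you up and to the right.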
I use the Disqus comment system. One thing that puzzled me was that the comments sometimes did not show up, but sometimes they did. Finally I figured out that it is because GitHub Pages treats “http” and “https” requests as the same page, but Disqus treats them as two different URLs. So comments posted under one are not visible under the other.
If you want to run a Python script on a remote server, you can run it through Screen or Byobu. The only problem is that displaying figures is very slow when the network connection is slow. The solution is to run the script in IPython remotely using Jupyter Notebook.
If you want to install Caffe on Ubuntu 16.04 along with Anaconda, here is an installation guide:
In the last post, I wanted to show an HTML file hosted on GitHub, but the HTML source code was displayed instead. The solution is to create a gh-pages branch in your project:
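A minimal sketch of creating such a branch (run inside your project's repository; the `index.html` content is just a placeholder):

```shell
# Create an empty (orphan) branch named gh-pages with no history
git checkout --orphan gh-pages

# Remove the files carried over from the previous branch
git rm -rf .

# Add the page you want GitHub Pages to serve
echo "Hello" > index.html
git add index.html
git commit -m "Initial gh-pages commit"
git push origin gh-pages
```

GitHub then serves the contents of this branch as rendered pages instead of raw source.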
If you install different versions of Ubuntu packages, there is a chance that you will get the following error:
When I run a program on a remote Linux server, I used to run it in the background with nohup, such as:
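A typical invocation looks like the following (the script name `train.py` is just an example):

```shell
# Run the script in the background, immune to hangups when the
# SSH session closes; stdout and stderr both go to train.log
nohup python train.py > train.log 2>&1 &
```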
There are two Nvidia graphics cards installed on my machine. When I execute:
To make TensorFlow run on Ubuntu 16.04 with a GTX 1080, first follow the instructions in the previous post to install the Nvidia drivers, CUDA 8RC, and cuDNN 5. Then follow the instructions from here to install TensorFlow.
I got an Nvidia GTX 1080 last week and wanted to make it run Caffe on Ubuntu 16.04. After some trial and error, I finally made it work. The speed is very fast, the price of the card is reasonable ($699), and the power consumption is low (180 W maximum).