Randomized.ME Posts

Q1: What’s wrooong with you recently?

Well, I’ve been wondering which desktop environment can meet all three of the following requirements:

1. Programming Friendly;
2. UI & UE Friendly;
3. Budget Friendly.

“Do you want to train an LSTM structured prediction task while having some tea with a soap opera on the other screen?”

“Yes, I do. And I may have to switch to another desktop to debug if the program crashes…”

So, let’s run a survey to narrow down the choices. Basically, because I’m so lazy that I don’t know how to properly tune an OS like Solaris, IBM AIX, or even Chrome OS, I let the CUDA official website boil the list down to three alternatives for me: Windows, Linux, and macOS.

So, from the perspective of a lazy person, I choose macOS, because the CUDA-related configuration is the easiest (you’ll see below) and the first two requirements are met comfortably. But in fact I can’t afford a Mac Pro… and what’s more, the latest Mac Pro and other recent Macs use AMD graphics cards (yes, AMD…).

What about a PC with macOS installed? That’s called a hackintosh, and I guess that fits all three requirements!

Q2: Is it even practical to configure a hackintosh?

Hi there, I’m trying a new type of article – reading a paper (RAP for short). In each of these articles, I will read a paper and ask myself some questions about it, such as its general ideas and implementation, and finally I will try to draw a conclusion and give some of my opinions.

The new category for these articles is called Biscuits, food for my teatime. There’s no fixed schedule; I will write RAPs from time to time.

Title: Rationalizing Neural Predictions

From: EMNLP 2016

PDF: http://aclweb.org/anthology/D/D16/D16-1011.pdf

It’s been several days since the release of Ubuntu 16.04, and since I’ve bought a new graphics card, I thought it was time to have some fun with it. Obviously I underestimated the difficulty of getting Keras with a Theano backend running on the new system. It took me a day to figure out what went wrong and to search for a solution, for I am so green on the practical mechanisms behind the library (I only know how to use it).

Generally, the problem I ran into looks like this:

ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return status', 1, 'for cmd', 'nvcc -shared -O3 -m64 -Xcompiler -DCUDA_NDARRAY_CUH=c72d035fdf91890f3b36710688069b2e,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,-fPIC,-fvisibility=hidden -Xlinker -rpath,/home/joseph/.theano/compiledir_Linux-4.4--generic-x86_64-with-Ubuntu-16.04-xenial-x86_64-2.7.11+-64/cuda_ndarray -I/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/theano/gof -o /home/joseph/.theano/compiledir_Linux-4.4--generic-x86_64-with-Ubuntu-16.04-xenial-x86_64-2.7.11+-64/cuda_ndarray/cuda_ndarray.so mod.cu -L/usr/lib -lcublas -lpython2.7 -lcudart')
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available  (error: cuda unavailable)


when I launched Theano on some tasks with the GPU enabled and specified.

If you are in a hurry, click here to jump directly to the solution.
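For context, errors like the one above usually mean Theano can see the CUDA toolkit but nvcc cannot finish compiling its kernel module, often because the toolkit paths or the device selection are not configured. Theano reads its settings from a ~/.theanorc file; a minimal sketch for a setup like this one might look as follows (the CUDA root path is an assumption for a default install, so adjust it to wherever your toolkit actually lives):

```ini
# ~/.theanorc -- a sketch, assuming CUDA is installed under /usr/local/cuda
[global]
device = gpu
floatX = float32

[cuda]
root = /usr/local/cuda
```

The same options can also be passed per-run through the THEANO_FLAGS environment variable instead of the config file.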

2D Matrix Multiplication on GPUs with Limited Memory

There was a time when I imagined feeding my big, big data to the graphics card smoothly, but Mom told me that was impossible, because my data stream is much too big and the memory of a graphics card is limited. Memory is to a graphics card what capacity is to a human brain.

But we have hard drives and CPU memory, which are a lot larger than GPU memory. Using CUDA, one can slice big matrices into parts, compute the sub-matrix multiplications one at a time on the GPU, and compose the partial results into the final matrix. This eases the burden when the matrices are too big to fit into GPU memory at once, which is just what this article introduces.
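The slicing idea can be sketched in plain Python, independent of CUDA. The sketch below splits A and B into tiles, multiplies one pair of tiles at a time (only those tiles would need to live in GPU memory at any moment), and accumulates the partial products into the result. The function name and tile size are my own illustrative choices, not from the article:

```python
def blocked_matmul(A, B, tile=2):
    """Multiply two square matrices (lists of lists) tile by tile.

    A sketch of the blocked scheme: only one (tile x tile) block of
    A, B, and C needs to be resident at a time, which is the point
    of the GPU technique described above.
    """
    n = len(A)
    # C starts as the zero matrix; partial tile products accumulate into it.
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):          # row block of A / C
        for j0 in range(0, n, tile):      # column block of B / C
            for k0 in range(0, n, tile):  # block along the shared dimension
                # On a real GPU, these three tiles are what you would
                # copy onto the device before launching a kernel.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + tile, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C
```

In a real CUDA implementation the inner three loops would be a kernel launch (or a cuBLAS GEMM call on the tiles), and the tile size would be chosen so that three tiles fit comfortably in device memory.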