750Ti

Available Packages

These two web pages provide a range of R packages for GPU parallel computation:

  1. Parallel computing: GPUs
  2. R and GPU

Installation of gputools

I chose gputools, but some errors occurred during my installation. I pasted them on GitHub.

Later, I realized that what I had posted were actually warnings; the real error was ld: library not found for -licuuc. Here is how I solved it.
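In short, the fix is to point the build at Homebrew's keg-only icu4c before installing. The transcript below does this with shell exports; the same idea from inside R would look roughly like the sketch here (the paths assume the default Homebrew prefix, and whether the package's configure script honours these variables is worth double-checking):

## Point the linker and preprocessor at Homebrew's keg-only icu4c.
## Paths assume the default Homebrew prefix; adjust if yours differs.
Sys.setenv(LDFLAGS  = "-L/usr/local/opt/icu4c/lib",
           CPPFLAGS = "-I/usr/local/opt/icu4c/include")
install.packages("gputools")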

> brew info icu4c
icu4c: stable 56.1 (bottled), HEAD [keg-only]
C/C++ and Java libraries for Unicode and globalization
http://site.icu-project.org/
/usr/local/Cellar/icu4c/56.1 (244 files, 63.7M)
Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/icu4c.rb
==> Options
--c++11
	Build using C++11 mode
--universal
	Build a universal binary
--HEAD
	Install HEAD version
==> Caveats
This formula is keg-only, which means it was not symlinked into /usr/local.

OS X provides libicucore.dylib (but nothing else).

Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:

LDFLAGS: -L/usr/local/opt/icu4c/lib
CPPFLAGS: -I/usr/local/opt/icu4c/include
> export LDFLAGS=-L/usr/local/opt/icu4c/lib
> export CPPFLAGS=-I/usr/local/opt/icu4c/include
> R
> install.packages("gputools")
...
installing to .../R/library/gputools/libs
** R
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded

Welcome at Tue Mar 8 20:24:13 2016

Goodbye at Tue Mar 8 20:24:13 2016
* DONE (gputools)

Goodbye at Tue Mar 8 20:24:13 2016

The downloaded source packages are in
‘...’
>
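After installation, a quick smoke test confirms the library can actually talk to the card. getGpuId() and chooseGpu() are gputools helpers, if I recall the API correctly; the snippet itself is mine, not from the original posts:

library(gputools)

## Report the CUDA device currently used by gputools.
getGpuId()

## Explicitly select device 0 (relevant only on multi-GPU machines).
chooseGpu(0)

## Tiny multiplication as a sanity check; should return a 2 x 2 identity matrix.
gpuMatMult(diag(2), diag(2))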

Benchmark Test

I used a piece of code from the second page mentioned above.

> library(gputools)
> gpu.matmult <- function(n) {
+   A <- matrix(runif(n * n), n, n)
+   B <- matrix(runif(n * n), n, n)
+   a <- system.time(C <- A %*% B)          # CPU (BLAS) matrix product
+   b <- system.time(C <- gpuMatMult(A, B)) # GPU matrix product via gputools
+   cat("CPU: ", a, "\nGPU: ", b)
+ }
> gpu.matmult(1e3)
CPU: 0.055 0.001 0.016 0 0
GPU: 0.151 0.064 0.058 0 0
> gpu.matmult(1e3)
CPU: 0.055 0.001 0.015 0 0
GPU: 0.135 0.065 0.055 0 0
> gpu.matmult(1e3)
CPU: 0.055 0.001 0.015 0 0
GPU: 0.145 0.061 0.056 0 0
> gpu.matmult(5e3)
CPU: 5.607 0.162 1.497 0 0
GPU: 5.718 0.122 5.681 0 0
> gpu.matmult(1e4)
Error in gpuMatMult(A, B) : device memory allocation failed
3 gpuMatMult(A, B)
2 system.time(C <- gpuMatMult(A, B))
1 gpu.matmult(10000)
Timing stopped at: 0.003 0.002 0.002
> gpu.matmult(5e3)
CPU: 5.624 0.199 1.518 0 0
GPU: 5.705 0.111 5.675 0 0

I don’t know why, but something really weird must be going on: the GPU timings are no better than the CPU’s. The author of that blog was using a 720M in a laptop, whereas mine is a 750 Ti in a desktop.
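Two hedged guesses of my own, not from the original posts: the failure at n = 1e4 is plausibly just device memory (three 10000 x 10000 double matrices take about 2.4 GB, while a 750 Ti usually ships with 2 GB), and much of the remaining GPU time is likely host-to-device transfer. The sketch below, my own code rather than the blog's, prints a rough memory estimate and compares only the averaged elapsed times:

library(gputools)

## Rough device-memory footprint of A, B and the result C,
## all doubles (8 bytes each); ignores any cuBLAS workspace.
matmult.bytes <- function(n) 3 * n * n * 8

## Compare mean elapsed seconds over a few repetitions.
bench.matmult <- function(n, reps = 3) {
  A <- matrix(runif(n * n), n, n)
  B <- matrix(runif(n * n), n, n)
  cpu <- mean(replicate(reps, system.time(A %*% B)["elapsed"]))
  gpu <- mean(replicate(reps, system.time(gpuMatMult(A, B))["elapsed"]))
  cat(sprintf("n = %g (~%.1f GB on device): CPU %.3fs, GPU %.3fs\n",
              n, matmult.bytes(n) / 1e9, cpu, gpu))
}

bench.matmult(1e3)
bench.matmult(5e3)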
