The R Parallel Programming Blog


This is a personal weblog. The opinions expressed here represent my own and not those of my employer. Further, the opinions expressed by the ParallelR bloggers and those providing comments are theirs alone and do not reflect the opinions of ParallelR.

Today, parallel computing truly is a mainstream technology. Stock R, however, remains single-threaded and limited by main memory (RAM), which restricts its usability and efficiency in the face of very complex model architectures, dynamically configurable analytics models, and big data inputs with billions of parameters and samples.

Therefore, ParallelR is dedicated to accelerating R with parallel technologies, and our blog will deliver parallel programming techniques and tips drawn from real cases in machine learning, data analysis, and finance. We will cover a rich set of topics, from data vectorization and the use of parallel packages (snow, doParallel, Rmpi, SparkR) to parallel algorithm design and implementation with OpenMP, OpenACC, CPU/GPU-accelerated libraries, CUDA C/C++, and Pthreads in R.
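As a small taste of the package-based approach mentioned above, here is a minimal sketch using base R's parallel package, which bundles the snow-style cluster API; the cluster size of 2 workers is an arbitrary choice for illustration:

```r
# Base R's parallel package provides the snow-style cluster API
library(parallel)

# Spin up a small socket cluster; 2 workers is an arbitrary choice here
cl <- makeCluster(2)

# Square each element, with the work distributed across the workers
res <- parLapply(cl, 1:4, function(x) x^2)
print(unlist(res))  # 1 4 9 16

# Always release the workers when done
stopCluster(cl)
```

The same pattern scales from a laptop's cores to the multi-node setups covered later on this blog, since snow-style clusters can also be backed by MPI.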

At the ParallelR Blog you will find useful information about productive, high-performance programming techniques for commodity computer architectures, ranging from multicore CPUs, GPUs, Intel Xeon Phi, and FPGAs to HPC clusters. You will also learn how to use your existing R skills in new ways, expressing your R code with structured computational models.

The ParallelR Blog is created by Peng Zhao. Peng has extensive experience in heterogeneous and parallel computing, spanning multi-core, multi-node, and accelerator (GPGPU, Intel Xeon Phi) platforms, with a focus on parallel algorithm design, implementation, debugging, and optimization.

This is Peng. Handsome, Right?