Language Zealots and Pragmatists

There is a long-running debate in the programming language community. People quarrel endlessly over a language (or a family of languages), either one they use fluently or one they admire, and the whole thing may start from nothing more than someone's small complaint. Each side fights for its own champion, swords drawn, as if waging a grand holy war.

Although languages cannot all be lumped together, we can roughly divide the debaters into two camps: language zealots and pragmatists, the two ends of a spectrum. Language zealots care about the language itself; they are fond of new, modern language features and judge a language accordingly. Pragmatists focus on engineering practice, and often counter with a language's ecosystem and industry adoption. Of course, there are also people in between, holding some mixture of both views.

Read More


Demystify the randomness in CUDA kernels

You might have heard that many CUDA operators contain some kind of non-determinism, and that to eliminate the randomness, one must pay with degraded performance. The warning appears many times in blog posts and framework documentation, but few of them give a detailed explanation of where the randomness comes from. To this end, this post is going to explore the problem.

When talking about GPU computation, one might picture some super-fast hardware. The surprising speed comes from the intensive parallelism of the architecture, which allows users to run thousands of routines in parallel (compared to dozens on ordinary CPUs). These routines are called threads, and similar to the concept of the same name in operating systems, they suffer from non-deterministic execution order and data race conditions.

Non-deterministic execution order means that, if we arrange all instructions of different threads into a sequence ordered by their occurrence time, the sequence could vary greatly across invocations. If two threads run in parallel, each with a single instruction, we cannot tell which one is executed first. This is the fundamental origin of the randomness, and it is inevitable.

A data race condition is one of the consequences of non-deterministic execution order. When threads manipulate some shared variable, and the manipulation is not atomic, i.e. consists of an interruptible instruction sequence, the program might yield undesired results. Programs should be carefully designed to avoid race conditions, with the help of locks or atomic operations. To help with this, CUDA provides atomic arithmetic routines like atomicAdd() or atomicMax() for safe access to shared memory.
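For instance, a plain read-modify-write on a shared counter is a race, while atomicAdd() makes it safe. A minimal sketch (hypothetical kernels, purely for illustration):

__global__ void count_racy(int *counter) {
    // load + add + store can interleave across threads, losing updates
    *counter = *counter + 1;
}

__global__ void count_safe(int *counter) {
    // one indivisible read-modify-write, so no update is lost
    atomicAdd(counter, 1);
}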

So far we have seen that there does exist some kind of randomness inside GPUs, and that if it is not handled properly, our program will give incorrect results when working with shared variables. But one may argue: we have atomic operations like atomicAdd(), so if a program correctly sums up the same collection of numbers, it should always return the same result, even if the order is messed up. Sadly this is wrong, since some arithmetic operations DO rely on the order of operands! Let's take the following CUDA program as an example:
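A minimal sketch in that spirit (hypothetical code, not necessarily the exact program; assumes a GPU supporting managed memory): every thread folds one element into a single accumulator with atomicAdd(), so each addition is atomic, yet the accumulation order is decided by the scheduler, and floating-point addition is not associative:

#include <cstdio>
#include <cuda_runtime.h>

// Each atomicAdd() is indivisible, but the ORDER of the additions is up
// to the scheduler, and float addition does not associate under rounding.
__global__ void sum_kernel(const float *data, float *result, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(result, data[i]);
}

int main() {
    const int n = 1 << 20;
    float *data, *result;
    cudaMallocManaged(&data, n * sizeof(float));
    cudaMallocManaged(&result, sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f / (i + 1);  // mixed magnitudes

    for (int run = 0; run < 3; ++run) {
        *result = 0.0f;
        sum_kernel<<<(n + 255) / 256, 256>>>(data, result, n);
        cudaDeviceSynchronize();
        printf("run %d: %.8f\n", run, *result);  // last digits may differ per run
    }
    cudaFree(data);
    cudaFree(result);
    return 0;
}

Summing in a fixed order (e.g. a deterministic tree reduction) removes the run-to-run variation, at some cost in speed, which is exactly the trade-off mentioned above.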

Read More


Performant Bulk Mutations in IndexedDB

IndexedDB seems to be inefficient at bulk mutations, such as dumping a huge list of items into an object store, or at least that was my first impression from the MDN docs. It provides no explicit API for the job as SQL does, so all we can do is loop on the client side, which cannot benefit from any database-internal optimization (if there is any). The mutation requests, in addition, appear to be spawned sequentially: the tutorial recommends a paradigm of raising a request within the success event callback of the previous request, which is in fact sequential execution. Such code is bound to be slow.

We may conduct a quick benchmark on the above approach:

;(async () => {
  await new Promise((resolve) => {
    const r = indexedDB.deleteDatabase("test")
    r.onsuccess = r.onerror = resolve
  })
  const items = Array.from({ length: 100000 }, (_, i) => ({ id: i }))
  const store = await new Promise((resolve) => {
    indexedDB.open("test", 1).onupgradeneeded = (event) => {
      const db = event.target.result
      const store = db.createObjectStore("store", { keyPath: "id" })
      store.createIndex("id", "id")
      resolve(store)
    }
  })
  console.time("bulkAdd")
  await bulkAdd(store, items)
  console.timeEnd("bulkAdd")
})()

function bulkAdd(store, items) {
  const failures = []
  return new Promise((resolve) => {
    function _perform(idx) {
      const req = store.add(items[idx])
      req.onsuccess = () => {
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
      req.onerror = (event) => {
        // prevent the error from aborting the transaction, record the
        // failed id, and keep going instead of stalling the whole chain
        event.preventDefault()
        failures.push(items[idx].id)
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
    }
    _perform(0)
  })
}

Practically, we care more about the failed records than the ones inserted successfully, so we record only the ids of the failures, which improves efficiency at least a little.

The timing is rather unstable, but on average it takes 30~40 seconds to insert 100k records, i.e. 2000~3000 records per second, which is not promising.
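Much of that cost comes from waiting for each request's success event before issuing the next one. IndexedDB is happy to queue many requests within a single transaction, so one remedy (a sketch of the idea, assuming a database handle and the same object store as above) is to fire all add() calls up front and only wait for the transaction to complete:

function bulkAddPipelined(db, items) {
  const failures = []
  return new Promise((resolve, reject) => {
    const tx = db.transaction("store", "readwrite")
    const store = tx.objectStore("store")
    for (const item of items) {
      const req = store.add(item)
      req.onerror = (event) => {
        // stop a failed record from aborting the whole transaction
        event.preventDefault()
        failures.push(item.id)
      }
    }
    // fires once every queued request has been processed
    tx.oncomplete = () => resolve(failures)
    tx.onabort = () => reject(tx.error)
  })
}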

Read More


Auto Rebuild .pyx Files with pyximport

Modules written in Cython usually come with a setup.py script that compiles the Cython sources into a native shared library. For those not so familiar with Python's packaging and distribution toolchains, this step can be scary, and it turns out to be a stumbling block for Cython freshmen. Moreover, the workflow, "run setup.py -> debug -> edit .pyx files -> run setup.py", is inconvenient and troublesome for fast-iterating projects.

pyximport is a handy official tool from Cython that addresses the above problem. The module enables users to "directly import" .pyx files, with no explicit setup.py required. Let's start with an example. Say we have two files residing in the same directory:

# main.py
import pyximport

pyximport.install(language_level=3)

import foo

print(foo.sqr(3))  # prints 9

# foo.pyx
cpdef int sqr(int x):
    return x * x

The magic lies in the pyximport.install() call, which registers import hooks to let Python recognize .pyx files. When a .pyx file is imported for the first time, or modified later, pyximport compiles or re-compiles it behind the scenes automatically.
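install() also accepts build options. As a sketch (assuming a hypothetical module that needs NumPy headers), extra compilation settings can be forwarded through setup_args, and reload_support lets a modified .pyx be re-imported within the same session:

import numpy
import pyximport

pyximport.install(
    language_level=3,
    # forwarded to the distutils build, just as setup.py would pass them
    setup_args={"include_dirs": numpy.get_include()},
    reload_support=True,
)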

Read More


Cython and Threads

Pure Python sucks at parallel computing, due to the existence of the Global Interpreter Lock (aka GIL). The GIL prevents accessing or manipulating the interpreter from different threads concurrently. The mechanism alleviates the risk of race conditions, but serializes multi-threaded programs as well. Sadly, there's no way to release the lock from pure Python.

Alright. So what about beyond pure Python? Can we bypass the mechanism within an extension? The answer is yes, and that's what most scientific computing libraries do.

Cython is a good choice for writing extensions: it is less verbose and syntactically closer to Python. In Cython, one can temporarily release the GIL for a code block using the with nogil: syntax. Will it unleash the true power of multi-core CPUs? We should give it a try.
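As a minimal sketch (a hypothetical busy_sum.pyx), a CPU-bound loop may drop the GIL as long as it touches no Python objects inside the block:

# busy_sum.pyx (hypothetical): a CPU-bound loop that releases the GIL
cpdef double busy_sum(long n):
    cdef double acc = 0.0
    cdef long i
    with nogil:  # no Python objects may be touched inside this block
        for i in range(n):
            acc += i
    return acc

Calling busy_sum from several threading.Thread instances can then keep multiple cores busy, since the threads no longer queue on the GIL while inside the loop.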

Read More


Obtain a Random Available TCP Port with Bash

On Linux, we might sometimes want to choose an unused TCP port randomly. This occurs from time to time on a server, when the administrator wants to expose an HTTP port for a user. Or, you just need an available port for IPC. Let’s make it happen with pure bash scripting.

function unused_port() {
    N=${1:-1}
    comm -23 \
        <(seq "1025" "65535" | sort) \
        <(ss -Htan |
            awk '{print $4}' |
            cut -d':' -f2 |
            sort -u) |
        shuf |
        head -n "$N"
}
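For example, grab one port, or several at once:

port=$(unused_port)
echo "$port"      # e.g. 41627; the actual output is random
unused_port 3     # print three distinct unused ports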

We will take the function apart step by step in the following paragraphs.

Read More


Information Theory: KL Divergence

Assume there are two hypotheses $H_1$ and $H_2$, and an r.v. $X$ ranging over the alphabet $\{a_1,\ldots,a_k\}$. Under hypothesis $H_i$, $X$ has pmf $p(X=a_j|H_i)=p_i(a_j)$. By Bayes' theorem (with the denominator expanded by the law of total probability), we have:

$$ p(H_i|a_k) = \frac{p(H_i)p_i(a_k)}{p_1(a_k)p(H_1)+p_2(a_k)p(H_2)} $$

The formula can be transformed into:

$$ \log \frac{p_2(a_k)}{p_1(a_k)} = \log \frac{p(H_2|a_k)}{p(H_1|a_k)} - \log \frac{p(H_2)}{p(H_1)} $$
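To see the transformation, write the formula for $i=1,2$ and take the ratio, so the common denominator cancels:

$$ \frac{p(H_2|a_k)}{p(H_1|a_k)} = \frac{p(H_2)p_2(a_k)}{p(H_1)p_1(a_k)} $$

then take logarithms and rearrange the terms.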

which implies that $\log \frac{p_2(a_k)}{p_1(a_k)}$ equals the difference between the log likelihood ratios after and before conditioning on $X=a_k$. We define $\log \frac{p_2(a_k)}{p_1(a_k)}$ to be the discrimination information for $H_2$ over $H_1$ when $X=a_k$. The expectation of the discrimination information (under $p_2$) is the KL divergence, denoted as:

$$D_{KL}(P_2||P_1) = \sum_k p_2(a_k) \log \frac{p_2(a_k)}{p_1(a_k)} $$

which is sometimes denoted as $I(p_2,p_1;X)$, or simply $I(p_2,p_1)$ when there is no ambiguity.

KL divergence can be interpreted as a measure of the expected information about $X$ gained after the distribution shifts from $p_1$ to $p_2$, where $p_1$ and $p_2$ are regarded as the prior and posterior distributions.
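As an illustrative computation, take a binary alphabet with $p_1=(\frac{1}{4},\frac{3}{4})$ and $p_2=(\frac{1}{2},\frac{1}{2})$:

$$ D_{KL}(P_2||P_1) = \frac{1}{2}\log\frac{1/2}{1/4} + \frac{1}{2}\log\frac{1/2}{3/4} = \frac{1}{2}\log\frac{4}{3} \approx 0.144 \text{ nats} $$

so shifting from $p_1$ to $p_2$ yields about $0.144$ nats of discrimination information on average.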

Read More


Information Theory: Entropy and Mutual Information

Given a discrete r.v. $X$ ranging over $\{a_1, \ldots, a_n\}$ with $\mathbb{P}(X=a_k)=p_k$, the entropy $H(X)$ is defined as:

$$H(X)= - \sum_k p_k \log p_k$$

When regarded as a function of $\{p_k\}$, entropy satisfies the following properties:

  1. $H(p_1,\ldots,p_n)$ is continuous and non-negative;
  2. $H(p_1,\ldots,p_n)$ is concave w.r.t. $(p_1,\ldots,p_n)$;
  3. $H(p_1,\ldots,p_n)$ attains its unique maximum at $(\frac{1}{n},\ldots,\frac{1}{n})$;
  4. $H(n):=H(\frac{1}{n},\ldots,\frac{1}{n})$ increases with $n$;
  5. $H(p_1,\ldots,p_n)=H(p_1+\ldots+p_k,p_{k+1},\ldots,p_n)+(p_1+\ldots+p_k)H(p_1',\ldots,p_k')$, where $p_j'=p_j/(p_1+\ldots+p_k)$.

Property 5 is the so-called additivity. That is, if we observe $X$ in two steps, first obtaining a value from $\{\hat{a},a_{k+1},\ldots,a_n\}$ and then, if $\hat{a}$ was selected, another value from $\{a_1,\ldots,a_k\}$, the entropy of the whole system should be the sum of the entropies of the two steps, the second weighted by $p(\hat{a})=p_1+\ldots+p_k$.
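As a quick numeric check, take $p=(\frac{1}{4},\frac{1}{4},\frac{1}{2})$ and merge the first two outcomes into $\hat{a}$ with $p(\hat{a})=\frac{1}{2}$:

$$ H(\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{2}) = H(\tfrac{1}{2},\tfrac{1}{2}) + \tfrac{1}{2}H(\tfrac{1}{2},\tfrac{1}{2}) = \tfrac{3}{2}\log 2 $$

which agrees with the direct computation $-2\cdot\frac{1}{4}\log\frac{1}{4}-\frac{1}{2}\log\frac{1}{2}=\frac{3}{2}\log 2$.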

Note that a function satisfying properties 1, 4 and 5 must be of the form $H(\vec{p})= - C \sum_k p_k \log p_k$, which shows that the entropy function is unique up to the constant factor $C$.

Entropy measures the uncertainty of a random variable. Intuitively, entropy reaches its maximum $\log n$ when all symbols occur with the same probability, and likewise attains its minimum $0$ if $p_k=1$ for some $k$.

Entropy also represents the smallest average length needed to encode a message. Say we have a message consisting of alphabet symbols $a_1,\ldots,a_n$, occurring with probabilities $p_1,\ldots,p_n$. Now we want to assign a code (an $N$-ary string) to each symbol, with no two codes sharing the same prefix. Denote the lengths of the codes by $l_1,\ldots,l_n$. Shannon's source coding theorem states that the average code length $\sum_k p_k l_k$ cannot be less than $H(p_1,\ldots,p_n)$ (taking $N$ as the logarithm base).
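As an illustrative instance, for binary codes ($N=2$) over probabilities $(\frac{1}{2},\frac{1}{4},\frac{1}{4})$, the prefix-free code $\{0, 10, 11\}$ achieves

$$ \sum_k p_k l_k = \tfrac{1}{2}\cdot 1 + \tfrac{1}{4}\cdot 2 + \tfrac{1}{4}\cdot 2 = 1.5 = H(\tfrac{1}{2},\tfrac{1}{4},\tfrac{1}{4}) \text{ bits} $$

meeting Shannon's bound with equality.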

Read More


Teppanyaki

Recently I have become fond of that teppanyaki place. In Beijing, where every inch of land is gold, the shop occupies a small corner of a food street. An open kitchen is almost all there is to it: a ring of narrow marble counter, tightly surrounded by chairs. People sit down there and order, which feels rather Japanese. Through the low glass, you can watch three chefs busying themselves on the other side of the counter. The sizzle of ingredients touching the iron plate, the occasional crackle, the curling white steam mixed with a tempting aroma: what a lively scene.

Since winter set in, the food street has grown even more crowded. After sunset the outdoor temperature drops sharply, and there is no better time to eat something hot in a bustling place. I order a plate of meat, a plate of vegetables, and a heaping dish of fried rice all for myself, and eat until my face shines with grease, extravagant and satisfied. Pushing the door open on my way out, the day's unpleasantness, or the weariness of the overtime still ahead, is somewhat relieved. Yet every time I finish, my clothes and hair carry a smell: the smell of teppanyaki, the smell of cooking fumes.

Some people don't mind the smell. They call it the smoke and fire of life, the soul of such cuisine, the scent of living. I used to hate that kind of talk. When you eat hotpot or barbecue, the molecules escaping from the hot food seep in everywhere, clinging to clothes and hair as if you had been splashed with oil. Every time, I had to scrub myself in the shower and change my clothes from the inside out to get rid of it. I could not stand the smell, and so I disliked the cuisines associated with it. In recent years, though, that feeling seems to be slowly fading, and I have begun to accept the smell. I still cannot take barbecue, but this kind of teppanyaki is fine, and I have even slowly grown to like it.

Hating the smell of cooking fumes, like insisting on a shower before bed, seems to have some curious mechanism behind it. Childhood experiences, adults' lectures, or one's own bad encounters appear to drive such notions: those afraid of chicks once stepped on one; those who dare not eat fish once got a bone stuck in their throat. But my aversion seemed to arise on its own. Nobody ever preached such a view to me; I simply hated the smell from the bottom of my heart.

How, then, was such a deep-rooted notion changed? Probably by being too busy, too busy to care about such things. The smoke-shrouded teppanyaki counter is a rare slice of leisure for me. Living well sits above mere survival; when survival itself is not yet satisfied, what room is there for nitpicking on a higher level?

The change of a habit, or perhaps the loss of a way of life.


The Xijiao Line

The Xijiao line is a real tram, the kind that trails two pigtail poles overhead and doesn't go fast. The stations along the way are small stations too. The platform, less than two meters wide, even narrower than a BRT stop in Guangzhou, holds a row of benches and nothing more. The half-open-air station has a rain canopy but no wall in the true sense. Through the head-high glass, the edge of the platform is overgrown with foxtail grass, and beyond it stretch meadows and birch woods, reaching all the way to the Fragrant Hills in the distance and a sky that looks freshly washed. The late-morning autumn sun tints everything before my eyes a cheerful warm color.

Read More