Cython and Threads


Pure Python is a poor fit for parallel computing, thanks to the Global Interpreter Lock (aka GIL). The GIL prevents multiple threads from accessing or manipulating the interpreter concurrently. The mechanism wards off race conditions inside the interpreter, but it also serializes multi-threaded programs. Sadly, there is no way to release the lock from pure Python.
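To see the effect concretely (a tiny sketch of mine, not from the post), a CPU-bound function spread over several threads gains nothing on CPython, because only one thread can execute Python bytecode at a time:

import threading
import time

def count_down(n):
    # pure-Python, CPU-bound work; it never releases the GIL voluntarily
    while n > 0:
        n -= 1

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(5_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# on CPython this takes roughly 4x the single-thread time, not 1x
print(f"elapsed: {time.perf_counter() - start:.2f}s")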

Alright, so what about going beyond pure Python? Can we bypass the mechanism inside an extension? The answer is yes, and that is what most scientific computing libraries do.

Cython is a good choice for writing extensions: it is less verbose than the C API and syntactically much closer to Python. In Cython, one can temporarily release the GIL for a code block using the with nogil: syntax. Will it unleash the true power of a multi-core CPU? Let's give it a try.
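As a minimal sketch of the syntax (my own toy example, to be compiled as a Cython module, not the post's benchmark), a typed loop can run with the GIL released; the payoff comes when several Python threads call such a function at the same time:

# cython: boundscheck=False, wraparound=False

def busy_sum(double[::1] data):
    """Sum a buffer while the GIL is released inside the hot loop."""
    cdef double acc = 0.0
    cdef Py_ssize_t i
    with nogil:                      # no Python objects may be touched in here
        for i in range(data.shape[0]):
            acc += data[i]
    return acc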

READ MORE

Obtain a Random Available TCP Port with Bash


On Linux, we might sometimes want to choose an unused TCP port randomly. This occurs from time to time on a server, when the administrator wants to expose an HTTP port for a user. Or, you just need an available port for IPC. Let’s make it happen with pure bash scripting.

function unused_port() {
    N=${1:-1}
    comm -23 \
        <(seq "1025" "65535" | sort) \
        <(ss -Htan |
            awk '{print $4}' |
            cut -d':' -f2 |
            sort -u) |
        shuf |
        head -n "$N"
}

We will take the function apart step by step in the following paragraphs.
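For instance, assuming the function above has been sourced into the current shell, it might be used like this (the server command is just an illustration):

PORT=$(unused_port)             # grab one random unused port
python3 -m http.server "$PORT"  # e.g. expose an ad-hoc HTTP server on it

unused_port 3                   # or print three distinct unused ports at once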

READ MORE

Information Theory: KL Divergence


Assume there are two hypotheses $H_1$ and $H_2$, and a r.v. $X$ taking values in the alphabet $\{a_1,\ldots,a_k\}$. Under hypothesis $H_i$, $X$ has pmf $p(X=a_j|H_i)=p_i(a_j)$. By Bayes' theorem (with the Law of Total Probability in the denominator), we have:

$$ p(H_i|a_k) = \frac{p(H_i)p_i(a_k)}{p_1(a_k)p(H_1)+p_2(a_k)p(H_2)} $$

The formula can be transformed into:

$$ \log \frac{p_2(a_k)}{p_1(a_k)} = \log \frac{p(H_2|a_k)}{p(H_1|a_k)} - \log \frac{p(H_2)}{p(H_1)} $$

which says that $\log \frac{p_2(a_k)}{p_1(a_k)}$ equals the change in the log odds of $H_2$ against $H_1$ after observing $X=a_k$. We define $\log \frac{p_2(a_k)}{p_1(a_k)}$ to be the discrimination information for $H_2$ over $H_1$ when $X=a_k$. The expectation of the discrimination information (taken under $p_2$) is the KL divergence, denoted as:

$$D_{KL}(P_2||P_1) = \sum_k p_2(a_k) \log \frac{p_2(a_k)}{p_1(a_k)} $$

which is sometimes denoted $I(p_2,p_1;X)$, or simply $I(p_2,p_1)$ when there is no ambiguity.

KL divergence can be interpreted as the expected information about $X$ gained when the distribution shifts from $p_1$ to $p_2$, where $p_1$ and $p_2$ are regarded as the prior and posterior distributions.
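As a quick numeric sanity check (my own toy numbers, not from the post), take a binary alphabet with $p_2=(\frac{1}{2},\frac{1}{2})$ and $p_1=(\frac{3}{4},\frac{1}{4})$, using natural logarithms:

$$ D_{KL}(P_2||P_1) = \tfrac{1}{2}\log\tfrac{1/2}{3/4} + \tfrac{1}{2}\log\tfrac{1/2}{1/4} = \tfrac{1}{2}\log\tfrac{4}{3} \approx 0.144 \text{ nats} $$

Swapping the roles of $p_1$ and $p_2$ gives a different value ($\approx 0.131$ nats), so KL divergence is not symmetric.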

READ MORE

Information Theory: Entropy and Mutual Information


Given a discrete r.v. $X$ taking values in $\{a_1, \ldots, a_n\}$ with $\mathbb{P}(X=a_k)=p_k$, the entropy $H(X)$ is defined as:

$$H(X)= - \sum_k p_k \log p_k$$

When regarded as a function of $\{p_k\}$, entropy satisfies the following properties:

  1. $H(p_1,\ldots,p_n)$ is continuous and non-negative;
  2. $H(p_1,\ldots,p_n)$ is concave w.r.t. $(p_1,\ldots,p_n)$;
  3. $H(p_1,\ldots,p_n)$ attains its unique maximum at $(\frac{1}{n},\ldots,\frac{1}{n})$;
  4. $H(n):=H(\frac{1}{n},\ldots,\frac{1}{n})$ is increasing in $n$;
  5. $H(p_1,\ldots,p_n)=H(p_1+\ldots+p_k,\,p_{k+1},\ldots,p_n)+(p_1+\ldots+p_k)\,H(p_1',\ldots,p_k')$, where $p_i'=p_i/(p_1+\ldots+p_k)$.

Property 5 is the so-called additivity (or grouping) property. That is, if we observe $X$ in two steps, first obtaining a value from $\{\hat{a},a_{k+1},\ldots,a_n\}$ and then, if $\hat{a}$ is selected, another value from $\{a_1,\ldots,a_k\}$, the entropy of the whole system should be the sum of the entropies of the two stages, with the second stage weighted by the probability $p_1+\ldots+p_k$ of reaching it.
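For a concrete check with base-2 logarithms (my own toy numbers), take $(p_1,p_2,p_3)=(\frac{1}{4},\frac{1}{4},\frac{1}{2})$ and group the first two outcomes:

$$ H\left(\tfrac{1}{4},\tfrac{1}{4},\tfrac{1}{2}\right) = H\left(\tfrac{1}{2},\tfrac{1}{2}\right) + \tfrac{1}{2}\,H\left(\tfrac{1}{2},\tfrac{1}{2}\right) = 1 + \tfrac{1}{2} = 1.5 \text{ bits} $$

which matches the direct computation $\frac{1}{4}\cdot 2+\frac{1}{4}\cdot 2+\frac{1}{2}\cdot 1=1.5$ bits.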

Note that any function satisfying properties 1, 4 and 5 must have the form $H(\vec{p})= - C \sum_k p_k \log p_k$ for some constant $C>0$, which shows that the entropy function is essentially unique.

Entropy measures the uncertainty of a random variable. Intuitively, entropy reaches its maximum $\log n$ when all symbols occur with the same probability, and attains its minimum of $0$ when $p_k=1$ for some $k$.

Entropy also gives the smallest average length needed to encode a message. Say we have a message consisting of symbols $a_1,\ldots,a_n$, occurring with probabilities $p_1,\ldots,p_n$. Now we want to assign a codeword (an $N$-ary string) to each symbol, such that no codeword is a prefix of another. The codeword lengths are denoted $l_1,\ldots,l_n$. Shannon's source coding theorem states that the average code length $\sum_k p_k l_k$ cannot be less than $H(p_1,\ldots,p_n)$ (taking $N$ as the logarithm base).
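As a small illustration of the bound (my own example, binary codes, $N=2$): for the distribution $(\frac{1}{2},\frac{1}{4},\frac{1}{4})$, the prefix-free code $a_1\mapsto 0$, $a_2\mapsto 10$, $a_3\mapsto 11$ has average length

$$ \sum_k p_k l_k = \tfrac{1}{2}\cdot 1+\tfrac{1}{4}\cdot 2+\tfrac{1}{4}\cdot 2 = 1.5 = H\left(\tfrac{1}{2},\tfrac{1}{4},\tfrac{1}{4}\right) \text{ bits} $$

so here the bound is attained exactly.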

READ MORE

Teppanyaki


Lately I have grown fond of that teppanyaki place. In Beijing, where every inch of land is worth gold, the shop occupies a small corner of the food street. An open kitchen is almost all there is to it: a long, narrow ring of marble counter, packed tightly with chairs. People sit down right there and order, which feels rather Japanese. Through a low pane of glass you can watch three chefs busying themselves on the other side of the counter. The sizzle of ingredients hitting the iron plate, the occasional crackle, wisps of white steam curling up and mingling with the tempting aroma. What a lively scene.

Since winter set in, the food street has grown even more crowded. After sunset the temperature outside drops sharply, and there is no better time to come somewhere bustling and eat something hot. Ordering a plate of meat, a plate of vegetables and a heaping dish of fried rice all for myself, eating until my face shines with grease, feels extravagant and satisfying. Pushing the door open on the way out, the day's unpleasantness, or the weariness of the overtime still to come, eases a little. But every time I finish, my clothes and hair carry a smell: the smell of teppanyaki, the smell of cooking fumes.

Some people do not mind the smell. To them it is the smell of hearth and smoke, the soul of this kind of cooking, the scent of everyday life. I used to hate that way of putting it. At hot pot or barbecue, the molecules escaping from the searing food seep into everything, clinging to clothes and hair as if you had been doused in oil. Each time it took a thorough shower and a complete change of clothes, inside and out, to get rid of the smell. I could not stand it, and so I came to dislike the cuisines that produce it. In recent years, though, that feeling seems to be slowly fading, and I have started to accept the smell. I still cannot accept barbecue, but this kind of teppanyaki is fine, and I have even slowly come to like it.

Hating the smell of cooking smoke, like insisting on a shower before bed, seems to have some curious mechanism behind it. Childhood experiences, grown-ups' lecturing, or one's own bad encounters appear to drive such notions: the person afraid of chicks once trampled one to death; the person who dares not eat fish once got a bone stuck in their throat. But my aversion seems to have arisen on its own. I never heard anyone voice such an opinion; I simply hated the smell from the bottom of my heart.

How, then, does such a deep-rooted notion get changed? Probably by being too busy, too busy to spare a thought for this sort of thing. The smoke-filled teppanyaki counter is one of my rare moments of leisure. Living sits above surviving; when surviving is not yet taken care of, what room is there for nitpicking on a higher plane?

The change of a habit, or perhaps the loss of a way of life.

READ MORE

The Xijiao Line


The Xijiao Line really is a tram, the kind that trails two pigtails overhead and does not move fast. The stations along the way are tiny too. The platforms are less than two meters wide, even narrower than the BRT stops in Guangzhou, with nothing but a row of stools. The stations are half open-air, with rain canopies but no walls in any real sense. Looking through the glass, a little taller than a person, the edge of the platform is overgrown with foxtail grass, and beyond it stretch wide meadows and birch woods, reaching all the way to Xiangshan in the distance and a sky that looks freshly washed. The late-morning autumn sun tints everything in view with a cheerful warmth.

READ MORE

Proof of the Gumbel Max Trick


Statement

Assume that $\alpha_1, \alpha_2, \ldots, \alpha_n > 0$ satisfy $\sum_k\alpha_k=1$. Define

$$Z=\arg\max_k\{\log\alpha_k+G_k\}$$

where $G_1,\ldots,G_n \text{ i.i.d.}\sim Gumbel(0,1)$, whose PDF and CDF are defined as

$$\begin{align} f(x)&=e^{-(x+e^{-x})} \\ F(x)&=e^{-e^{-x}}\end{align}$$

Then $\mathbb{P}(Z=k)=\alpha_k$.

Proof

Set $u_k=\log{\alpha_k}+G_k$. We prove the claim by direct calculation.

$$\begin{align} \mathbb{P}(Z=k)&=\mathbb{P}(u_k \geq u_j,\forall j \neq k) \\ &=\int_{-\infty}^\infty \mathbb{P}(u_k \geq u_j, \forall j \neq k|u_k)\,p(u_k)\, du_k \\ &=\int_{-\infty}^\infty \prod_{j\neq k}\mathbb{P}(u_k \geq u_j|u_k)\,p(u_k)\, du_k \\ &=\int_{-\infty}^\infty \prod_{j\neq k}e^{-e^{-u_k+\log \alpha_j}} e^{-(u_k-\log\alpha_k+e^{-(u_k-\log\alpha_k)})} du_k \\ &=\int_{-\infty}^\infty e^{-\sum_{j\neq k}\alpha_je^{-u_k}} \alpha_k e^{-(u_k+\alpha_k e^{-u_k})} du_k \\ &=\alpha_k \int_{-\infty}^\infty e^{-u_k-(\alpha_k+\sum_{j\neq k}\alpha_j)e^{-u_k}} du_k \\ &= \alpha_k \end{align}$$

The last step uses $\sum_j \alpha_j = 1$: the remaining integrand is exactly the $\mathrm{Gumbel}(0,1)$ density $e^{-(u_k+e^{-u_k})}$, so the integral equals $1$.

Application

The trick is commonly used in deep learning (e.g., via its softmax relaxation) to make sampling from a discrete distribution differentiable.
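As an empirical check (a sketch of mine, not from the post), one can draw many samples via the trick and compare the observed frequencies against $\alpha$:

import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.1, 0.2, 0.3, 0.4])          # any categorical distribution
n_samples = 100_000

# Gumbel(0, 1) noise, one row per sample
g = rng.gumbel(loc=0.0, scale=1.0, size=(n_samples, alpha.size))
z = np.argmax(np.log(alpha) + g, axis=1)        # Z = argmax_k (log alpha_k + G_k)

freq = np.bincount(z, minlength=alpha.size) / n_samples
print(freq)   # should be close to [0.1, 0.2, 0.3, 0.4]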

References

READ MORE

Option::as_ref


Let’s consider the following function:

use std::ptr::NonNull;

fn transform<T>(option: &Option<NonNull<T>>) -> Option<&T> {
    option.map(|x| unsafe { x.as_ref() })
}

The function transform takes a reference to an Option<NonNull<T>> and, if the option is Some, converts the inner pointer into an immutable reference &T. The method NonNull::as_ref() is marked unsafe, so we need an unsafe block. The snippet causes a compilation error:

READ MORE

Rc, RefCell and Interior Mutability


Say we need a type Cursor<T>, which holds a mutable reference to T. A method .dup() duplicates the internal reference, wraps it in a new instance of Cursor<T>, and returns it. Such a pattern is common in database driver libraries: users can hold multiple cursors simultaneously, each owning a (mutable) reference to the same connection object.

One might implement it with a plain mutable reference:

struct Cursor<'a, T> {
    obj: &'a mut T,
}

impl<'a, T> Cursor<'a, T> {
    fn new(t: &'a mut T) -> Cursor<'a, T> {
        Cursor { obj: t }
    }

    fn dup(&mut self) -> Cursor<T> {
        Cursor { obj: self.obj }
    }
}

fn main() {
    let mut i = 1;
    let mut cursor_a = Cursor::new(&mut i);
    let _cursor_b = cursor_a.dup();
}

Neat and tidy, and luckily the Rust compiler does not complain. Fresh Rustaceans usually have to work hard to quiet the compiler, especially when fighting with references.

The invocations of ::new() and .dup() are on separate lines. Now what if we chain the constructor and .dup() together? This time the compilation fails:

READ MORE

Visualizing Correlation


Say we have a matrix A of shape N x M, which can be viewed as a collection of N vectors of shape 1 x M. The code below (assuming numpy is imported as np and matplotlib.pyplot as plt) gives us the correlation matrix of A:

A_corr = np.corrcoef(A)  # shape: (N, N)

To visualize it, just use plt.matshow(A_corr).

If N is so large that the figure no longer provides a clear picture, we can alternatively use a histogram like this:

def corr_matrix_to_array(corr_mat):
    # collect the strict upper triangle (off-diagonal entries) into a flat array
    N = corr_mat.shape[0]
    return np.array([corr_mat[i][j] for i in range(N - 1) for j in range(i + 1, N)])

plt.hist(corr_matrix_to_array(A_corr), bins=np.linspace(-1, 1, N_bins))  # N_bins: number of histogram bins
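A vectorized equivalent (a sketch of mine, with the same intent of keeping only the strict upper triangle) avoids the Python-level loop:

iu = np.triu_indices_from(A_corr, k=1)  # indices of the strict upper triangle
plt.hist(A_corr[iu], bins=np.linspace(-1, 1, N_bins))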
READ MORE