Performant Bulk Mutations in IndexedDB

IndexedDB seems inefficient for bulk mutations, such as dumping a huge list of items into an object store – at least that was my first impression from the MDN docs. It provides no explicit API for the job as SQL does, so all we can do is loop on the client side, which cannot benefit from any internal optimization of the database (if there is any). The mutation requests, in addition, appear to run sequentially: the tutorial recommends raising each request within the success event callback of the previous one, which is in fact sequential execution. Such code is bound to be slow.

We may conduct a quick benchmark on the above approach:

;(async () => {
  // Start from a clean database.
  await new Promise((resolve) => {
    const r = indexedDB.deleteDatabase("test")
    r.onsuccess = r.onerror = resolve
  })
  const items = Array.from({ length: 100000 }, (_, i) => ({ id: i }))
  // Open the database, creating the object store on first run.
  const db = await new Promise((resolve) => {
    const r = indexedDB.open("test", 1)
    r.onupgradeneeded = (event) => {
      const store = event.target.result.createObjectStore("store", { keyPath: "id" })
      store.createIndex("id", "id")
    }
    r.onsuccess = (event) => resolve(event.target.result)
  })
  // A fresh readwrite transaction: the store handle from onupgradeneeded
  // cannot be used once the versionchange transaction has finished.
  const store = db.transaction("store", "readwrite").objectStore("store")
  console.time("bulkAdd")
  await bulkAdd(store, items)
  console.timeEnd("bulkAdd")
})()

function bulkAdd(store, items) {
  const failures = []
  return new Promise((resolve) => {
    function _perform(idx) {
      const req = store.add(items[idx])
      req.onsuccess = () => {
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
      req.onerror = (event) => {
        event.preventDefault() // keep the transaction alive after a failed add
        failures.push(items[idx].id)
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
    }
    _perform(0)
  })
}

In practice, we care more about the records that fail than about those inserted successfully, so we record only the ids of the failed ones, which improves efficiency at least a little.

The timing is rather unstable, but on average it takes 30~40 seconds to insert 100k records, i.e. 2000~3000 records per second, which is not promising.
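Before reading on, one natural thing to try (a sketch of my own, not necessarily the approach the rest of the post takes) is to stop chaining and queue every add() request up front within a single transaction, resolving once the transaction completes:

function bulkAddEager(store, items) {
  const failures = []
  return new Promise((resolve) => {
    for (const item of items) {
      const req = store.add(item)
      req.onerror = (event) => {
        event.preventDefault() // don't abort the transaction on a failed add
        failures.push(item.id)
      }
    }
    // All requests are queued; the transaction completes after the last one.
    store.transaction.oncomplete = () => resolve(failures)
  })
}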

Read More


Auto Rebuild .pyx Files with pyximport

Modules written in Cython usually come with a setup.py script that compiles the Cython sources into a native shared library. For those not so familiar with Python’s packaging and distribution toolchain, this step can be scary, and it turns out to be a stumbling block for Cython freshmen. Moreover, the workflow of “run setup.py -> debug -> edit .pyx files -> run setup.py” is inconvenient and troublesome for fast-iterating projects.

pyximport is a handy tool from the Cython project, provided to address the above problem. The module enables users to “directly import” .pyx files, with no explicit setup.py required. Let’s start with an example. Say we have two files residing in the same directory:

# main.py
import pyximport

pyximport.install(language_level=3)

import foo

print(foo.sqr(3))

# foo.pyx
cpdef int sqr(int x):
    return x * x

The magical pyximport.install() line registers import hooks that let Python recognize .pyx files. When a .pyx file is imported for the first time, or modified later, pyximport compiles or recompiles it behind the scenes automatically. Running python main.py therefore builds foo.pyx on the fly and prints 9.

Read More


Cython and Threads

Pure Python sucks at parallel computing, due to the existence of the Global Interpreter Lock (aka GIL). The GIL prevents different threads from accessing or manipulating the interpreter concurrently. The mechanism alleviates the risk of race conditions, but it serializes multi-threaded programs as well. Sadly, there’s no way to release the lock from pure Python.

Alright. So what about going beyond pure Python? Can we bypass the mechanism within an extension? The answer is yes, and that’s what most scientific computing libraries do.

Cython is a good choice for writing extensions: it is less verbose and syntactically closer to Python. In Cython, one can release the GIL temporarily for a code block using the with nogil: syntax. Will it unleash the true power of a multi-core CPU? Let’s give it a try.
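As a minimal sketch of what that looks like (my own example; the names heavy.pyx and heavy() are made up for illustration):

# heavy.pyx
# A CPU-bound loop that releases the GIL, so several Python threads
# can execute it truly in parallel.
cpdef double heavy(long n):
    cdef double acc = 0.0
    cdef long i
    with nogil:  # no Python objects may be touched inside this block
        for i in range(n):
            acc += (i % 7) * 0.5
    return acc

Once compiled (e.g. via pyximport from the previous post), heavy() can be dispatched to multiple threads without them serializing on the GIL.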

Read More


Obtain a Random Available TCP Port with Bash

On Linux, we sometimes want to pick an unused TCP port at random. This occurs from time to time on a server, when the administrator wants to expose an HTTP port for a user; or you may just need an available port for IPC. Let’s make it happen with pure Bash scripting.

function unused_port() {
    N=${1:-1}
    comm -23 \
        <(seq "1025" "65535" | sort) \
        <(ss -Htan |
            awk '{print $4}' |
            cut -d':' -f2 |
            sort -u) |
        shuf |
        head -n "$N"
}
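For instance, a hypothetical usage (not part of the post’s walkthrough):

port=$(unused_port)      # one random unused port
ports=$(unused_port 3)   # three of them, newline-separated
python3 -m http.server "$port"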

We will take the function apart step by step in the following paragraphs.

Read More


Information Theory: KL Divergence

Assume there are two hypotheses $H_1$ and $H_2$, and a r.v. $X$ ranging over the alphabet $\{a_1,\ldots,a_K\}$. Under hypothesis $H_i$, $X$ has pmf $p(X=a_j|H_i)=p_i(a_j)$. By Bayes’ theorem (with the Law of Total Probability in the denominator), we have:

$$ p(H_i|a_k) = \frac{p(H_i)p_i(a_k)}{p_1(a_k)p(H_1)+p_2(a_k)p(H_2)} $$

The formula can be transformed into:

$$ \log \frac{p_2(a_k)}{p_1(a_k)} = \log \frac{p(H_2|a_k)}{p(H_1|a_k)} - \log \frac{p(H_2)}{p(H_1)} $$

which implies that $\log \frac{p_2(a_k)}{p_1(a_k)}$ equals the difference between the log-likelihood ratios after and before conditioning on $X=a_k$. We define $\log \frac{p_2(a_k)}{p_1(a_k)}$ to be the discrimination information for $H_2$ over $H_1$ when $X=a_k$. The expectation of the discrimination information under $p_2$ is the KL divergence, denoted as:

$$D_{KL}(P_2||P_1) = \sum_k p_2(a_k) \log \frac{p_2(a_k)}{p_1(a_k)} $$

which is sometimes denoted as $I(p_2,p_1;X)$, or simply $I(p_2,p_1)$ when there is no ambiguity.

KL divergence can be interpreted as the expected information about $X$ gained when the distribution shifts from $p_1$ to $p_2$, where $p_1$ and $p_2$ are regarded as the prior and posterior distributions.
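As a concrete instance (my own example, not from the original text): for two Bernoulli distributions $P_1=\mathrm{Bernoulli}(q)$ and $P_2=\mathrm{Bernoulli}(p)$,

$$D_{KL}(P_2\|P_1)=p\log\frac{p}{q}+(1-p)\log\frac{1-p}{1-q}$$

which vanishes iff $p=q$, and blows up as $q$ approaches $0$ or $1$ while $p$ stays fixed.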

Read More


Information Theory: Entropy and Mutual Information

Given a discrete r.v. $X$ ranging over $\{a_1, \ldots, a_n\}$ with $\mathbb{P}(X=a_k)=p_k$, the entropy $H(X)$ is defined as:

$$H(X)= - \sum_k p_k \log p_k$$

When regarded as a function of $\{p_k\}$, entropy satisfies the following properties:

  1. $H(p_1,\ldots,p_n)$ is continuous, and non-negative;
  2. $H(p_1,\ldots,p_n)$ is concave w.r.t. $(p_1,\ldots,p_n)$;
  3. $H(p_1,\ldots,p_n)$ attains its unique maximum at $(\frac{1}{n},\ldots,\frac{1}{n})$;
  4. $H(n):=H(\frac{1}{n},\ldots,\frac{1}{n})$ increases with $n$;
  5. $H(p_1,\ldots,p_n)=H(p_1+\cdots+p_k,p_{k+1},\ldots,p_n)+(p_1+\cdots+p_k)H(p_1',\ldots,p_k')$, where $p_i'=p_i/(p_1+\cdots+p_k)$.

Property 5 is the so-called additivity. That is, if we observe $X$ in two steps – first obtaining a value from $\{\hat{a},a_{k+1},\ldots,a_n\}$, and then, if $\hat{a}$ was selected, another value from $\{a_1,\ldots,a_k\}$ – the entropy of the whole system should be the entropy of the first step plus the expected entropy of the second.

Note that any function satisfying properties 1, 4 and 5 must take the form $H(\vec{p})= - C \sum_k p_k \log p_k$, which shows that the entropy function is unique up to a constant factor.

Entropy measures the uncertainty of a random variable. Intuitively, entropy reaches its maximum $\log n$ when all symbols occur with the same probability, and likewise attains its minimum of $0$ if $p_k=1$ for some $k$.
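As a quick numeric check (my own example): for a binary alphabet,

$$H(\tfrac{1}{2},\tfrac{1}{2})=\log_2 2=1 \text{ bit}, \qquad H(0.9,0.1)=-0.9\log_2 0.9-0.1\log_2 0.1\approx 0.469 \text{ bits},$$

so the biased coin, being more predictable, carries less entropy.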

Entropy also represents the smallest average length needed to encode a message. Say we have a message consisting of symbols $a_1,\ldots,a_n$, occurring with probabilities $p_1,\ldots,p_n$. Now we want to assign a code (an $N$-ary string) to each symbol, such that no code is a prefix of another. The code lengths are denoted $l_1,\ldots,l_n$. Shannon’s source coding theorem states that the average code length $\sum_k p_k l_k$ cannot be less than $H(p_1,\ldots,p_n)$ (taking $N$ as the logarithm base).
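As an illustration (my own example): with binary codes ($N=2$) and probabilities $(\frac{1}{2},\frac{1}{4},\frac{1}{4})$, the prefix-free code $\{0,10,11\}$ has average length $\frac{1}{2}\cdot 1+\frac{1}{4}\cdot 2+\frac{1}{4}\cdot 2=1.5$, exactly matching $H(\frac{1}{2},\frac{1}{4},\frac{1}{4})=1.5$ bits, so Shannon’s bound is attained here.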

Read More


Proof of the Gumbel Max Trick

Statement

Assume that $\alpha_1, \alpha_2, \ldots, \alpha_n$ are positive and satisfy $\sum_k\alpha_k=1$. Define

$$Z=\arg\max_k\{\log\alpha_k+G_k\}$$

where $G_1,\ldots,G_n \overset{\text{i.i.d.}}{\sim} \mathrm{Gumbel}(0,1)$, whose PDF and CDF are

$$\begin{align} f(x)&=e^{-(x+e^{-x})} \\ F(x)&=e^{-e^{-x}}\end{align}$$

Then $\mathbb{P}(Z=k)=\alpha_k$.

Proof

Set $u_k=\log{\alpha_k}+G_k$. We prove the claim by direct calculation.

$$\begin{align} \mathbb{P}(Z=k)&=\mathbb{P}(u_k \geq u_j,\ \forall j \neq k) \\ &=\int_{-\infty}^\infty \mathbb{P}(u_k \geq u_j,\ \forall j \neq k\mid u_k)\,p(u_k)\, du_k \\ &=\int_{-\infty}^\infty \prod_{j\neq k}\mathbb{P}(u_k \geq u_j\mid u_k)\,p(u_k)\, du_k \\ &=\int_{-\infty}^\infty \prod_{j\neq k}e^{-e^{-u_k+\log \alpha_j}}\, e^{-(u_k-\log\alpha_k+e^{-(u_k-\log\alpha_k)})}\, du_k \\ &=\int_{-\infty}^\infty e^{-\sum_{j\neq k}\alpha_je^{-u_k}}\, \alpha_k e^{-(u_k+\alpha_k e^{-u_k})}\, du_k \\ &=\alpha_k \int_{-\infty}^\infty e^{-u_k-(\alpha_k+\sum_{j\neq k}\alpha_j)e^{-u_k}}\, du_k \\ &= \alpha_k \end{align}$$

The last step holds because $\sum_j\alpha_j=1$, so the integrand reduces to $e^{-(u_k+e^{-u_k})}$, the $\mathrm{Gumbel}(0,1)$ density, which integrates to one.

Application

The trick underlies the Gumbel-Softmax estimator, commonly used in deep learning to make sampling from a discrete distribution (approximately) differentiable.
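A quick numerical check of the statement (my own sketch, using NumPy; not part of the original post):

import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.2, 0.5, 0.3])  # target categorical distribution
n = 100_000

# Gumbel(0, 1) via inverse CDF: G = -log(-log(U)), U ~ Uniform(0, 1)
g = -np.log(-np.log(rng.random((n, alpha.size))))
z = np.argmax(np.log(alpha) + g, axis=1)  # Z = argmax_k { log(alpha_k) + G_k }

print(np.bincount(z) / n)  # should be close to [0.2, 0.5, 0.3]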



Option::as_ref

Let’s consider the following function:

use std::ptr::NonNull;

fn transform<T>(option: &Option<NonNull<T>>) -> Option<&T> {
    option.map(|x| unsafe { x.as_ref() })
}

The function transform takes a reference to an Option<NonNull<T>> as input, and converts the inner pointer into an immutable reference &T if possible. The method NonNull::as_ref() is marked unsafe, so we need an unsafe block. The snippet causes a compilation error:
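The error message itself sits behind the link, but the post’s title hints at the eventual fix: presumably borrowing the option first via Option::as_ref, along these lines (a sketch under that assumption):

fn transform<T>(option: &Option<NonNull<T>>) -> Option<&T> {
    // Option<&NonNull<T>> keeps the borrow anchored to `option`,
    // so the returned &T can outlive the closure.
    option.as_ref().map(|x| unsafe { x.as_ref() })
}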

Read More


Rc, RefCell and Interior Mutability

Say we need a type Cursor<T>, which holds a mutable reference to a T. A method .dup() duplicates the internal reference, wraps it in a new instance of Cursor<T> and returns it. Such a pattern commonly appears in database driver libraries: users can hold multiple cursors simultaneously, each owning a (mutable) reference to the same connection object.

One might implement it with a primitive mutable reference:

struct Cursor<'a, T> {
    obj: &'a mut T,
}

impl<'a, T> Cursor<'a, T> {
    fn new(t: &'a mut T) -> Cursor<'a, T> {
        Cursor { obj: t }
    }

    fn dup(&mut self) -> Cursor<T> {
        Cursor { obj: self.obj }
    }
}

fn main() {
    let mut i = 1;
    let mut cursor_a = Cursor::new(&mut i);
    let _cursor_b = cursor_a.dup();
}

Perfect and neat, and luckily the Rust compiler did not complain. Fresh Rustaceans often have to work hard to silence the compiler, especially when fighting with references.

The invocations of ::new() and .dup() above are on separate lines. Now what about chaining up the constructor and .dup()? This time the compiler complains:
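Presumably the chained form looks like this (my sketch; the actual snippet and compiler output are behind the link):

fn main() {
    let mut i = 1;
    // The temporary returned by ::new() is dropped at the end of the
    // statement while _cursor_b still borrows from it.
    let _cursor_b = Cursor::new(&mut i).dup();
}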

Read More


Haskell Notes: State Monad

A pseudo-function f' :: a -> b that depends on some external state s can be rewritten as f :: a -> s -> (b, s) to make it well-defined; that is, we pass the state s explicitly through the input and output. Now we want to use a Monad to hide this state-passing.

Note that in the output (b, s), the final state s depends not only on the input state, but also on the logic of every state-modifying function executed before. Hence we cannot simply define the Monad over something shaped like (a, s); otherwise the composition of two functions via >=> would be independent of their logic, which is not what we expect.

Consider the following definition:

newtype State s a = State { runState :: s -> (a, s) }

Thanks to the right associativity of ->, f :: a -> s -> (b, s) is equivalent to f :: a -> State s b. With s fixed, State s can be made a Monad. A value of type State s a is usually called a state processor.

Now let’s define (>>=) :: State s a -> (a -> State s b) -> State s b. In p >>= f, p encapsulates all the state-processing logic so far; we want to fuse the logic of p and f into a new state processor and return it.

p >>= f = State $ \s ->
  let (a, s')  = runState p s
      p2       = f a          -- :: State s b
      (b, s'') = runState p2 s'
  in  (b, s'')

return is trivial:

return a = State $ \s -> (a, s)

fmap can be defined as follows:

fmap :: (a -> b) -> State s a -> State s b
fmap f p = State $ \s ->
  let (a, s') = runState p s
  in  (f a, s')

In this way, we can chain a series of state-dependent functions into a single state-dependent function; feed it an initial state, and we obtain the result.
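As a small usage sketch (my own example; Control.Monad.State from the mtl package implements exactly this idea, so we borrow it here instead of finishing our own Monad instance):

import Control.Monad.State

-- A counter: return the current state, then increment it.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)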