The Unfree Internet

zh

The Internet is becoming unfree, and this has become a consensus. The unfreedom here is seen from the perspective of content access: it is getting harder for us to acquire knowledge and information on our own terms. "The Chinese Internet is dead." Quality resources are monopolized by the big Internet companies; people can only access them from inside the apps, constantly preached at by recommendation algorithms. Outside the giants' territory, the public web is a wasteland, flooded with low-quality, duplicated content that chills those who come exploring of their own free will. In other parts of the world the situation may not be as severe, but similar phenomena, such as the giants' monopolies, exist as well. The Internet is not what we envisioned a dozen years ago.

READ MORE

Retrieve Contents over HTTP without curl or wget

en

I came across an interesting yet suspicious script in a post 1) on V2EX. A bash function named __curl inside the file retrieves contents over HTTP, serving as a simple alternative to curl or wget in scenarios where no such utilities are available.

#!/bin/bash
function __curl() {
    # Split the URL on "/": "http://host:port/a/b" becomes "http: host:port a b"
    read proto server path <<<$(echo ${1//// })
    DOC=/${path// //}    # re-join the path segments with "/"
    HOST=${server//:*}   # drop ":port" to get the host
    PORT=${server//*:}   # drop "host:" to get the port
    [[ x"${HOST}" == x"${PORT}" ]] && PORT=80   # no port given: default to 80

    # Open a TCP connection to ${HOST}:${PORT} as file descriptor 3
    exec 3<>/dev/tcp/${HOST}/$PORT
    # Compose and send a minimal HTTP/1.0 request
    echo -en "GET ${DOC} HTTP/1.0\r\nHost: ${HOST}\r\n\r\n" >&3
    # Skip the response headers (everything up to the blank line), then dump the body
    (while read line; do
        [[ "$line" == $'\r' ]] && break
    done && cat) <&3
    exec 3>&-   # close the descriptor
}
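
A hypothetical invocation (example.com and the file name are placeholders of my own) could then look like:

__curl http://example.com/index.html > index.html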

The function makes use of some lesser-known features of Linux and Bash.

The first is communicating over TCP through files. Linux employs the design philosophy of "everything is a file". One can find special device files under the directory /dev, through which the underlying devices can be manipulated. Specifically, a TCP socket connected to ${HOST}:${PORT} can be manipulated by accessing the file /dev/tcp/${HOST}/${PORT} (strictly speaking, this path is emulated by Bash itself rather than provided by the kernel, but the interface is file-like all the same). Since HTTP is a text-based protocol over TCP, working with it is no more difficult than reading / writing a text file. The line exec 3<>$FILENAME opens the file $FILENAME in read-write mode and binds it to descriptor 3. The next line then manually composes an HTTP payload and writes it out to &3, which in effect requests the URL http://${HOST}:${PORT}${DOC}. By reading the same file descriptor, we retrieve the response content from the server. The trick serves as a primitive workaround for retrieving contents from the web.
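
Stripped of the function around it, the trick looks like the following sketch (example.com and the request line are illustrative placeholders):

exec 3<>/dev/tcp/example.com/80                           # open the socket as fd 3
printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3  # write a minimal request
cat <&3                                                   # read the whole response
exec 3>&-                                                 # close the descriptor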

Another is parameter substitution in Bash. The expression ${var//PATTERN/REPL} substitutes all occurrences of PATTERN in var with REPL. If REPL is omitted, the matched substrings are deleted. For example, in this script, ${1//// } replaces every slash / in variable $1 with a space.
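
A few examples may make the substitutions used in __curl clearer (the URL is made up):

s="http://example.com:8080/index.html"
read proto server path <<< "${s//// }"   # every "/" turned into a space
echo "$proto"          # http:
echo "$server"         # example.com:8080
echo "$path"           # index.html
echo "${server//:*}"   # example.com  (pattern ":*" matched and deleted)
echo "${server//*:}"   # 8080         (pattern "*:" matched and deleted)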

References

  1. [收到条阿里云的告警,看不懂是做什么用的,请教一下 - V2EX](https://www.v2ex.com/t/811424)
  2. Parameter Substitution
READ MORE

[Unravelling mocona] Part 1 - Verbosity or Anti-Pattern

en

I was working as an intern at MSRA around two years ago, where I joined a research project and started developing upon a large codebase. It's common practice in ML research to adopt an existing code repository as the codebase, instead of crafting everything from scratch. Such codebases usually come with convenient "infrastructure", so researchers do not have to implement it all over again, which would be time-wasting and error-prone. All we need is to write our models and losses, and put them into experiments.

The flow works just fine if you are proposing minor improvements to algorithms. The codebase provides an easy way to prove and iterate on your idea. But things get worse once your work goes beyond that, especially when it touches the encapsulated infrastructure. Those convenient parts will constrain you and force your code into spaghetti.

READ MORE

[Unravelling mocona] Part 0 - Preface

en

The early idea of hsfzxjy/mocona came up in late April. It was not until July that I figured out a reasonable design for the project. I implemented most of the idea and released the first version as August approached. Nevertheless, there was no chance for me to share the story behind the library. Now, with another month gone, it's time to do some writing.

The series, as I plan it, will cover the motivation for creating mocona, some technical details and usage, along with some critical thinking on the creative process. For anyone interested in CPython internals, or who would like to extend the language, it is worth reading through.

Table of Contents

READ MORE

Understanding pickle in Python

en

The module pickle shipped with Python can be used for general-purpose object serialization and de-serialization. It's been widely adopted or recommended as a backend in scenarios like persisting state or IPC.

Though employed by many famous frameworks, the magic behind it still seems vague to everyday users, especially those fresh to the language. People come across "unpicklable" errors from time to time without knowing the reason, or re-invent state persistence by themselves even when pickle would be competent. People sometimes write error-prone code merely because they are afraid of, or unaware of, pickle.

This post thus attempts to clarify the usage of the pickle module in an easily understandable way, by answering three questions.
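
Before diving in, here is a minimal sketch of the round trip, plus a typical "unpicklable" failure (the object and names are made up for illustration):

import pickle

state = {"step": 42, "weights": [0.1, 0.2]}
blob = pickle.dumps(state)           # object -> bytes
assert pickle.loads(blob) == state   # bytes -> an equivalent object

# Lambdas are a classic "unpicklable" case: pickle stores functions by
# reference (module + qualified name), and a lambda has no importable name.
try:
    pickle.dumps(lambda x: x + 1)
except (pickle.PicklingError, AttributeError) as e:
    print("unpicklable:", e)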

READ MORE

Rough Notes on Deploying Vaultwarden & NextCloud Bookmarks

zh

I've been struggling for years with two things: synchronizing passwords and the blog posts I have read across devices. The problem bothers me a lot, since my devices, an Android phone, an Ubuntu laptop and an iPad, are poorly covered by the big app vendors. Besides, I want control over all my data, so a self-hosted solution would be preferable. The problems were recently partially solved by deploying Vaultwarden and NextCloud on a VPS. This blog post records the setup process and the problems I met, in case anyone is searching for this topic.

Install Vaultwarden and NextCloud on VPS

Luckily, the two services are both dockerized. Installing them is nothing more complicated than a command:
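
The exact commands are behind the cut; a sketch of what they might look like, with paths, ports and tags as placeholders of my own choosing, is:

# Vaultwarden, persisting its data under /srv/vaultwarden
docker run -d --name vaultwarden \
    -v /srv/vaultwarden/data:/data \
    -p 8081:80 \
    vaultwarden/server:latest

# NextCloud, persisting its data under /srv/nextcloud
docker run -d --name nextcloud \
    -v /srv/nextcloud/data:/var/www/html \
    -p 8082:80 \
    nextcloud:latest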

READ MORE

Language Zealots and Pragmatists

zh

The programming language world hosts a long-running debate. People quarrel bitterly with one another over a language (or a family of languages), either one they know well or one they admire. The debate may start from nothing more than someone's small complaint. Each side fights for its own champion, swords drawn, like a full-blown holy war.

Although languages can hardly be lumped together, we can roughly divide the debaters into two camps: language zealots and pragmatists, the two ends of a spectrum. Language zealots care about the language itself; they are drawn to new, modern language features and judge a language by them. Pragmatists stress a language's engineering practice, and often counter others with its ecosystem and industry adoption. Of course, there are also people with a bit of both, holding opinions from each side in some proportion.

READ MORE

Demystify the randomness in CUDA kernels

en

You might have heard that many CUDA operators contain some kind of non-determinism, and that eliminating the randomness costs performance. The warning appears many times in blog posts and framework documentation, but few of them give a detailed explanation of where the randomness comes from. To this end, this post explores the problem.

When talking about GPU computation, one might think of some super-fast hardware. The surprising speed comes from the intensive parallelism of the architecture, which allows users to run thousands of routines in parallel (compared to dozens on ordinary CPUs). The routines are called threads, and, similar to the concept with the same name in operating systems, they suffer from non-deterministic execution order and data race conditions.

Non-deterministic execution order means that if we arrange the instructions of different threads into a sequence, ordered by their occurrence in time, the sequence could vary greatly across invocations. If two threads run in parallel, each with a single instruction, we cannot tell which one executes first. This is the fundamental origin of the randomness, and it is inevitable.

A data race is one consequence of non-deterministic execution order. When threads manipulate some shared variable, and the manipulation is not atomic, i.e. consists of an interruptible instruction sequence, the program might yield undesired results. Programs should be carefully designed to avoid races, with the help of locks or atomic operations. To help with this, CUDA provides atomic arithmetic routines like atomicAdd() or atomicMax() for safe access to shared memory.
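
As a minimal sketch (the kernel is my own illustration, not taken from the post), safely accumulating into a shared variable looks like:

// Each thread adds its element into a single accumulator. atomicAdd
// makes the read-modify-write safe, but the order in which additions
// land still varies from launch to launch.
__global__ void sum_kernel(const float *data, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, data[i]);
}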

So far we have seen that there does exist some kind of randomness inside GPUs, and that, if not handled properly, our program will give incorrect results when working with shared variables. But one may argue that we have atomic operations like atomicAdd(): if a program correctly sums up the same collection of numbers, the order may be scrambled, but the result should always be the same. Sadly this is wrong, since some arithmetic operations DO rely on the order of their operands! Floating-point addition, for instance, is not associative: each addition rounds its intermediate result, so (a + b) + c may differ from a + (b + c). Let's take the following CUDA program as an example:
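
(The program itself sits behind the cut. As a standalone illustration of the order-dependence, with values I picked purely to make the rounding obvious, even plain host code shows it:)

#include <cstdio>

int main() {
    float a = 1e8f, b = -1e8f, c = 1.0f;
    printf("%.1f\n", (a + b) + c);  // 1.0: a and b cancel first
    printf("%.1f\n", (a + c) + b);  // 0.0: c is absorbed into a by rounding
    return 0;
}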

READ MORE

Performant Bulk Mutations in IndexedDB

en

IndexedDB seems inefficient for bulk mutations, such as dumping a huge list of items into an object store. At least that was my impression at first sight of the MDN docs. It provides no explicit API for the job as SQL does, so all we can do is loop from the client side, which cannot benefit from database-internal optimization (if there is any). The mutation requests, in addition, appear to be spawned sequentially: the tutorial recommends a paradigm of raising a request within the success event callback of the previous request, which is in fact sequential execution. Such code will definitely be slow.

We may conduct a quick benchmark on the above approach:

;(async () => {
  await new Promise((resolve) => {
    const r = indexedDB.deleteDatabase("test")
    r.onsuccess = r.onerror = resolve
  })
  const items = Array.from({ length: 100000 }, (_, i) => ({ id: i }))
  const store = await new Promise((resolve) => {
    indexedDB.open("test", 1).onupgradeneeded = (event) => {
      const db = event.target.result
      const store = db.createObjectStore("store", { keyPath: "id" })
      store.createIndex("id", "id")
      resolve(store)
    }
  })
  console.time("bulkAdd")
  await bulkAdd(store, items)
  console.timeEnd("bulkAdd")
})()

function bulkAdd(store, items) {
  const failures = []
  return new Promise((resolve) => {
    function _perform(idx) {
      const req = store.add(items[idx])
      req.onsuccess = (event) => {
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
      req.onerror = (event) => {
        // Record the failure, keep it from aborting the whole
        // transaction, and move on to the next record.
        event.preventDefault()
        failures.push(items[idx].id)
        if (idx === items.length - 1) resolve(failures)
        else _perform(idx + 1)
      }
    }
    _perform(0)
  })
}

In practice, we care more about the failed records than the ones inserted successfully. We thus take down only the keys of the failed records, which improves the efficiency at least a little.

The timing is rather unstable, but on average it takes 30~40 seconds to insert 100k records, i.e. 2000~3000 records per second, which is not promising.
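
For comparison, here is a sketch of an eager alternative (my own, not necessarily what the post settles on): fire every request up front within one transaction and let IndexedDB queue them, resolving when the transaction completes.

function bulkAddEager(store, items) {
  const failures = []
  return new Promise((resolve) => {
    for (const item of items) {
      const req = store.add(item)
      req.onerror = (event) => {
        event.preventDefault() // keep one failure from aborting the transaction
        failures.push(item.id)
      }
    }
    // Fires once every queued request has settled.
    store.transaction.oncomplete = () => resolve(failures)
  })
}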

READ MORE

Auto Rebuild .pyx Files with pyximport

en

Modules written in Cython usually come with a setup.py script that compiles the Cython sources into a native shared library. For those not so familiar with Python's packaging and distribution toolchain, this step can be scary, and turns out to be a stumbling block for Cython freshmen. Moreover, the workflow "run setup.py -> debug -> edit .pyx files -> run setup.py" is inconvenient and troublesome for fast-iterating projects.

pyximport is a handy tool shipped with Cython, provided to address the above problem. The module enables users to "directly import" .pyx files, with no explicit setup.py required. Let's start with an example. Say we have two files residing in the same directory:

# main.py
import pyximport

pyximport.install(language_level=3)  # register the .pyx import hooks

import foo

print(foo.sqr(3))

# foo.pyx
cpdef int sqr(int x):
    return x * x

The magical call pyximport.install() registers import hooks that let Python recognize .pyx files. When a .pyx file is imported for the first time, or modified later, pyximport compiles or re-compiles it behind the scenes automatically.

READ MORE