Move the Root Partition of Ubuntu

A few days ago, I decided to shrink the footprint of the Windows system on my laptop and reallocate the disk space to the Ubuntu system that lives next to it. Ubuntu is sufficient for my daily programming and web browsing, so I have hardly launched the OEM-shipped Windows since I bought the laptop. Windows takes up a not-so-small portion of my SSD space, which could be put to better use instead of sitting there wasted.

(before)
| --- Windows C: (256 GB) --- | --- Ubuntu / (256 GB) --- |
(after)
| --- Windows C: (120 GB) --- | --- Ubuntu / (392 GB) --- |

Read More


A New Programmer Kicks a Roadblock

My first program dates back to junior high school. It was the first day of the PC lesson, and everybody crowded into the computer classroom. We were told we would learn “programming” there. The talented kids would be selected and trained for OI. The others would instead go to an ordinary class and learn something more general.

I was anxious. Before then I had no concept of what “programming” was, nor had I ever gone through a real PC lesson. The PC lessons in my primary school barely taught anything; most of the time the teachers let us play games instead. I could type merely a dozen characters per minute, since I had never received thorough typing training. I had no idea what was inside the metal box. I was a complete computer idiot.

Read More


Reversy Naming

I have always been a dedicated fan of writing naturally readable code. By “naturally readable” I mean that one can read a line of code as if it were a sentence in English (or some other human language). This practice is believed to encourage more self-explanatory code, as the code reads more like a human-composed article instead of gibberish only recognizable by a machine.

The practice recommends naming functions or variables following the word order of a human language; for English, that means objects come after verbs, and adjectives go before the nouns they modify. The samples below showcase how it guides naming in a program, and a short sketch after the list shows them in use (please hold your opinions about the casing):

  • append_to_list(lst, item). A function that appends an item to a list, which can read as “append to the list (specified by name lst) with the item”.
  • register_service_notifier(func). A function that registers another function as a service notifier, which can read as “register a service notifier with the function func”.
  • UserFollowersListView. The name of a web component which is a list view to display followers for a user.
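
To make the point concrete, here is a minimal, hypothetical sketch (the definitions are mine, not from the original post) of how such names read at the call site:

_notifiers = []

def append_to_list(lst, item):
    # “append to the list (lst) with the item”
    lst.append(item)

def register_service_notifier(func):
    # “register a service notifier with the function func”
    _notifiers.append(func)

pending_tasks = []
append_to_list(pending_tasks, "review-pr")
register_service_notifier(lambda event: print("service changed:", event))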

It plays well and improves my development experience most of the time, but there is no silver bullet, just like with other practices or guidelines. Sometimes I found that readability even degraded: I kept skimming the lines and just couldn’t locate an item efficiently.

Read More


Invalid Golang Pointers Can Bite You Even If You Don't Dereference

In Golang, if you coerce a uintptr variable into unsafe.Pointer (or further, into some *T), the linter will warn with the message "possible misuse of unsafe.Pointer". This makes sense, because the uintptr variable may contain an address that points to a piece of invalid memory, and dereferencing such a pointer is catastrophic (it usually aborts the program).

I was always aware of the above discipline, but I thought it would be OK to hold such pointers as long as I did not dereference them. This is true in C/C++, but not in Golang, which I did not realize until recently.

In fact, the program can panic even if you just keep an invalid pointer on the stack!

Read More


A Flaw of Promoting Complex Trait Bounds in Rust

A few days ago, for some reason, I was trying to implement a function that could be polymorphic over its return type. The solution is simple, but my brain was jammed at the time, trapped in complicated typing tricks for hours.

During the struggle, I coincidentally ran into something that is, for now, a flaw in the current Rust compiler implementation: in some cases, the compiler is not smart enough to promote known trait bounds, and we have to replicate them again and again. Although the problem was afterwards proved to be a useless “X-Y problem”, I would still like to share the story.

Read More


Initialize Process Pool Worker with Individual Value

There are scenarios where you are using multiprocessing.pool.Pool and want to perform some initialization for each worker before tasks are scheduled via Pool.map() or the like. For example, you create a pool of 4 workers, one for each GPU, and expect tasks scheduled on Worker-i to use exactly GPU-i. In this case, Worker-i should be initialized with the env var CUDA_VISIBLE_DEVICES=<i> set.

To initialize spawned workers, the constructor of Pool provides two arguments for the job: initializer and initargs. initializer is expected to be a callable; if specified, each worker process will call initializer(*initargs) when it starts.

import multiprocessing as mp
import multiprocessing.pool as mpp

def worker(arg1):
    print(arg1)

mpp.Pool(processes=2, initializer=worker, initargs=(42, ))
# 42
# 42

This is, however, slightly different from what we expect. The initializer is called with the same arguments in every worker, while in our case the arguments are expected to differ, like value 0 for Worker-0 and value 1 for Worker-1. There are two approaches to do the trick.

Use a Queue

The Queue and SimpleQueue types in module multiprocessing implement multi-producer, multi-consumer FIFO queues for the multi-processing scenario. We may create a queue, share it between the parent and the worker processes, send individual values from the parent, and read them from the workers. Since the sending and receiving operations are synchronized, we won’t run into any race conditions.

def worker(q):
    print(q.get())

q = mp.SimpleQueue()
p = mpp.Pool(processes=2, initializer=worker, initargs=(q,))
for i in range(2):
    q.put(i)
p.close()
# 0
# 1

Use a Value

Alternatively, we may use a lighter shared object than a queue. The Value type in module multiprocessing allows sharing simple values across multiple processes. It can also synchronize access to the value to avoid race conditions if necessary. We can use a Value object to allocate an individual id for each worker process.

import ctypes

def worker(v):
    with v.get_lock():
        val = v.value
        v.value += 1
    print(val)

v = mp.Value(ctypes.c_int32, 0, lock=True)
p = mpp.Pool(processes=2, initializer=worker, initargs=(v,))
p.close()
# 0
# 1
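
Coming back to the GPU scenario at the beginning, the Value approach can be used to pin each worker to its own device. Below is a minimal sketch under that assumption (one GPU per worker; the environment variable must be set before the worker touches the GPU):

import ctypes
import multiprocessing as mp
import multiprocessing.pool as mpp
import os

def worker(v):
    # Atomically claim the next free worker id.
    with v.get_lock():
        worker_id = v.value
        v.value += 1
    # Restrict this worker process to GPU-<worker_id>; this must happen
    # before any CUDA context is created in the worker.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(worker_id)

v = mp.Value(ctypes.c_int32, 0, lock=True)
p = mpp.Pool(processes=4, initializer=worker, initargs=(v,))
# Tasks submitted via p.map() now run on workers that each see a single GPU.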

Rust - Python FFI From Scratch

I was recently working on a side project that involves communication between binaries written in Rust and web interfaces written in Python. Moving part of the project onto a language like Rust was based on several considerations: 1) the logic is all about manipulating byte arrays, where Python is weak and a systems language like Rust excels; 2) the logic happens to be complicated, so I need a static type system to ensure correctness, and Rust’s match expression also helps keep things concise; 3) I was planning to develop CLI tools in Rust that call this piece of functionality, and I don’t want to rewrite the stuff in the future.

Read More


[Extending Hexo For My Site] Part 1 - Better Mathjax Rendering

I am a heavy user of MathJax, a library that renders TeX-compatible syntax into pretty equations on the web, so I am always mixing Markdown and TeX snippets in my writing. The annoying part is that TeX snippets have low priority in my Markdown renderer and are sometimes incorrectly rendered into Markdown elements. For instance, $a_1, a_2$ becomes $a1, a2$, where the underscores within $...$ are mistakenly recognized as an emphasis element. A bunch of escaping is required to avoid the situation, which drives me mad. So I set out to seek a permanent solution.

Read More


Debug a 'torch.tensor(1).cuda()' hanging

Today a user of our GPU cluster ran into a problem where executing python -c 'import torch; torch.tensor(1).cuda()' would hang forever and could not be killed. The problem occurred on a rather old Docker image (with torch == 0.4.0) and would disappear if newer images were used. It was caused by a couple of little-known coincidences, which surprised me and which I want to share in this post.

The Problem

The hanging program was spawned by the following command:

/usr/bin/docker run --rm -u 1457:1457 \
--gpus '"device='0,1,2,3'"' \
-v /ghome/username:/ghome/username -v /gdata/username:/gdata/username \
-it --ipc=host --shm-size 64G \
-v /gdata1/username:/gdata1/username -v /gdata2/username:/gdata2/username \
-e HOME=/ghome/username \
-m 48G --memory-swap 48G --cpus 5 \
--name username2 \
bit:5000/deepo_9 \
python3 -c 'import torch; torch.tensor(1).cuda()'

The Docker image bit:5000/deepo_9 he used was built with CUDA 9, while the host has multiple 1080 Ti GPU cards and has had CUDA upgraded to 11.4. It looks like there is some binary incompatibility involved, considering the fact that the problem goes away with newer images.
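
As a sanity check (not from the original post, and assuming torch.version.cuda exists in the torch build shipped in the image), one can print the CUDA toolkit version the torch wheel was compiled against and compare it with the driver CUDA version that nvidia-smi reports on the host:

import torch

# Hypothetical check run inside the container.
print(torch.__version__)   # the torch release shipped in the image
print(torch.version.cuda)  # CUDA toolkit version torch was compiled against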

Read More


[Unravelling mocona] Part 1 - Verbosity or Anti-Pattern

I was working as an intern at MSRA around two years ago, where I joined a research project and started developing on top of a large codebase. It is a common practice in the ML research field to adopt an existing code repository as the codebase instead of crafting everything from scratch. Such codebases usually come with convenient “infrastructure”, so researchers do not have to implement it once again, which would be time-wasting and error-prone. All we need to do is write our models and losses and put them into experiments.

The flow works just fine if you are proposing a minor improvement to an algorithm. The codebase provides an easy way to prove and iterate on your idea. But things get worse if your work goes beyond that, especially when it touches the encapsulated infrastructure. Those convenient parts will constrain you and force your code into spaghetti.

Read More