
## CCS1 and CCS2 in the same country?

CCS, the Combined Charging System, is THE standard in EV charging. There are two plugs you can find out in the wild, called CCS1 and CCS2. The DC part of the plug looks exactly the same; the only difference is the top AC port, where CCS1 uses the US-style connector and CCS2 uses the EU-style plug.

The two plugs differ in whether they carry single-phase or three-phase power. In the US, where single-phase power is prevalent, we use the CCS1 plug; in the EU, where three-phase charging is ubiquitous, the CCS2 plug is used. Since the difference mostly comes down to how the electrical grid works, most countries use a single plug nationwide.

We would not be writing an article today if that were the case everywhere, would we? 😂 There is one country in the world where you can actually find both CCS1 and CCS2 everywhere, and that country is Taiwan. (If there are more oddities out there, let me know!)

▲A Tesla CCS2 cable plugged into the car, with the NA plug from the same Supercharger also shown

Before July 2021, all Teslas in Taiwan used the Tesla NA plug (Tesla's small proprietary plug). This is the same plug used on Teslas in the US, because Taiwan's power grid is very similar to the US's. However, the Taiwanese government announced its intention to standardize EV plugs on open standards. I'm not sure what exactly went wrong in the negotiations with Tesla here, but instead of going with CCS1 (as in South Korea, where a CCS1 adapter is on sale), Tesla announced that it would switch to CCS2 in Taiwan, probably because the parts are already readily available for the EU market. (Not sure if the two coincide, but Tesla currently ships cars to Taiwan from its Germany factory as well.)

This creates an interesting situation: you can actually see cars and charging stations in Taiwan with both CCS1 and CCS2 plugs. Non-Tesla cars in Taiwan still mostly use the CCS1 standard, while third-party charging networks are rushing to add CCS2 ports and stations as well.

Is it a good thing or a bad thing? Honestly, as a user, I love being able to access ALL charging stations. I rented a CCS2 Model 3 in Taiwan and had access to all kinds of third-party charging stations (one time I even got three-phase AC charging at a random farmer's market in Hualien!). I really don't mind the bulkier connector, because honestly, how many times are you plugging it in and out anyway? The only downside is that I sometimes need to check whether a station has a Tesla/CCS1 or CCS2 plug before backing in. While I wish Tesla had gone with the CCS1 plug in Taiwan, it turned out better than I expected, so all in all I guess it is indeed a good thing, and an interesting oddity in a small corner of the world.

▲Bonus pic: a gas station at a highway rest area that converted one of its islands into EV charging stalls.

## [rust] DB integration tests with Rocket + Diesel

Using Diesel with Rocket usually requires the rocket_sync_db_pools crate, which hides the complicated setup of initializing a DB connection pool and exposes it to handlers via an opaque type: you call run on it to get a Future back, and only inside that Future do you get a connection.
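
For context, a handler using the pool looks roughly like this. This is a minimal sketch following the rocket_sync_db_pools docs; DbConnection, the "db" pool name, and the route are placeholder names, not code from the project discussed below:

use rocket_sync_db_pools::{database, diesel};

#[database("db")]
pub struct DbConnection(diesel::SqliteConnection);

#[rocket::get("/health")]
async fn health(conn: DbConnection) -> &'static str {
    // The closure runs on a worker thread with the real Diesel connection;
    // `run` returns a Future that we await here in the async handler.
    conn.run(|_c| {
        // ... run Diesel queries with `_c` here ...
    })
    .await;
    "ok"
}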

This makes writing an integration test against a Rocket handler a bit more complicated, because the impl of the DB pool guard type is generated on the fly and not bound to a trait, so we can't just write a mock implementation of it.

After some trial and error, I realized that you can initialize Rocket up to the Ignite state (initialized but not launched/listening yet). We generate a new test DB on the fly for every test, so we also need to configure the Rocket instance to use the right test-specific DB URL. Here's the code doing so:

pub async fn rocket_test<F, Fut, R>(&self, f: F) -> anyhow::Result<R>
where
    F: FnOnce(DbConnection) -> Fut,
    Fut: Future<Output = R> + Send + 'static,
    R: Send + 'static,
{
    // Point the `db` pool at this test's freshly created database.
    let figment = rocket::Config::figment()
        .merge(("databases.db.url", self.db_url.clone()));
    // Ignite only: everything is initialized, but nothing is listening.
    let rocket = rocket::custom(figment)
        .attach(DbConnection::fairing())
        .ignite()
        .await?;
    let conn = DbConnection::get_one(&rocket)
        .await
        .ok_or(MyError::DbError(
            "unable to get db connection".to_string(),
        ))?;

    let result = f(conn).await;

    rocket.shutdown().notify();

    Ok(result)
}


With that helper in place, you can write tests like this (expected_error stands in for whatever error value your handler should produce):

rocket_test(|conn| async move {
    let ret = a_rocket_handler(
        conn,
        params,
    )
    .await
    .expect_err("handler should fail");

    assert_eq!(
        ret,
        expected_error,
    );
})
.await
.unwrap();


## Let's Learn to Fly, Part 2

This is an article about my experience learning how to fly an airplane. Given how rare it is in Taiwan to get a chance to learn general aviation, I'm going to write this article in Traditional Chinese instead; an English version might come later.

## Phase One Training (to Solo)

In theory SFO could let you land there too, but the landing and handling fees might run you a few hundred US dollars 😅

## Phase Two Training (to Solo XC)

FBO: Fixed Base Operator

## Let's Learn to Fly, Part 1

This is an article about my experience learning how to fly an airplane. Given how rare it is in Taiwan to get a chance to learn general aviation, I'm going to write this article in Traditional Chinese instead; an English version might come later.

## Process and Paperwork

• ForeFlight: a superbly useful app for all kinds of aviation information, from aircraft traffic (requires a separate ADS-B receiver, or at least network connectivity when flying low) to all the aeronautical and airport charts. Again not strictly required, but extremely handy! ForeFlight is best run on an iPad Mini Cellular: the mini is a better size for the cockpit, and the Cellular model has GPS, so it can serve as backup navigation in an emergency. ($199/yr, with an extra discount if you join SAFE.) If you don't want to spend the money, you can use SkyVector, but then there's no offline access (unless you print the charts out XD).

• Notebook, pens, and a checklist: small items that don't cost much, but since we're already talking money, they deserve a mention. The checklist depends on the aircraft type; flight schools very commonly train on the Cessna 172S with G1000.

### Special steps for aliens (foreigners)

Since US paperwork refers to foreigners as "Aliens", we'll call ourselves aliens below XD. An alien who wants to learn to fly in the US first needs to pass a background check called the Flight Training Security Program (formerly the Alien Flight Student Program XD); in practice it just adds a fingerprinting step. Approval comes back in about a week or two, quite fast, but hours flown before then cannot be logged in your logbook as official training time. (Side note: between the visa, the green card, and Global Entry, the US has already run background checks on me who knows how many times. Could these agencies share their results? I'd happily sign a data-sharing consent form XD)

### Medical

The medical exam turned out to be surprisingly annoying. The FAA does have an official website for looking up an AME (Aviation Medical Examiner), but you can only search by location, and then you still have to call offices one by one to find an appointment while cross-referencing other people's experiences on reddit. In short, it's a somewhat time-consuming step. For a PPL you generally only need to hold a Third-class Medical Certificate; barring anything unusual, you'll be allowed to fly.

I recommend starting this step early, so you don't spend all that flight-training money only to fail the medical in the end. That said, you don't need the medical results until you solo, so you can decide the timing yourself.

Special Issuance: some medical conditions will prevent you from passing the third-class medical right away, and you'll need to submit extra paperwork for additional FAA review before you can get a Special Issuance. If you need this, I suggest asking the AME roughly which documents the FAA will want, getting the proof from your own doctor first and mailing it straight to the FAA, then calling weekly to ask about progress. I don't know whether that actually speeds up the review, but in any case, after two or three months I did get mine. The annoying part is that my SI is only valid for one year, so I may need to reapply early next time or just switch to BasicMed.

If you've done everything up to this point, you're probably already taking lessons. In the next post I'll talk about how my own lessons went!

## Cross-compile for Raspberry Pi with Docker

I am a lazy person, so I've really just been compiling the code I want to run on the Raspberry Pi ... well, on the Raspberry Pi. It was slow, but it is super simple to set up. However, sometimes you want to compile something larger than the Raspberry Pi can handle. What now?

The first thing my lazy butt tried was simply running an ARMv7 image using qemu-system-arm, but that is sadly very slow on my computer because it emulates a different architecture altogether. I was also too lazy to set up a proper buildroot with all the toolchains and libraries properly cross-compiled for the ARMv7 architecture. So I decided to give another approach a try: use QEMU user-mode emulation to run an ARMv7 userspace directly, and wrap it in Docker so I don't need to worry about messing up my system. We should be able to get near full speed with this method.

Fortunately, someone already published an ARMv7 Docker image, agners/archlinuxarm-arm32v7. Now we just need to get our system to run ARMv7 binaries. To do this, we need to install binfmt-qemu-static from the AUR, which enables your system to run ELF files from other architectures.

If you just start running the container at this point, you will run into this weird problem:

[root@f19789b92d0d code]# cargo build
    Updating crates.io index
warning: spurious network error (2 tries remaining): could not read directory '/root/.cargo/registry/index/github.com-1285ae84e5963aae/.git//refs': Value too large for defined data type; class=Os (2)
warning: spurious network error (1 tries remaining): could not read directory '/root/.cargo/registry/index/github.com-1285ae84e5963aae/.git//refs': Value too large for defined data type; class=Os (2)

Value too large... for wat? I didn't dig into exactly what causes this, but someone hypothesized that it could be a filesystem compatibility issue between 32-bit and 64-bit systems (ARMv7 is 32-bit and my PC is 64-bit; if you run the ARM64v8 image, it should just work); my guess is that 64-bit inode numbers on the host filesystem overflow what the 32-bit userspace can represent. Either way, we need to mount a filesystem that works on a 32-bit system. I tried mkfs.ext4 -O ^64bit and even mkfs.ext3, but they all still produced the same problem. I decided to try another filesystem altogether, and JFS works! To create a JFS image, you can run:

fallocate -l 4G disk.img
sudo mkfs.jfs disk.img

and then you can mount it with:

mkdir mnt
mount -o loop disk.img mnt

That's it!
Once you have that JFS filesystem set up, you can run this command to start ARMv7 Arch Linux in Docker and compile whatever you need!

docker run -it --rm -v "$PWD/mnt:/work" agners/archlinuxarm-arm32v7
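
To sanity-check that the user-mode emulation is actually active, you can ask the container for its architecture; it should report armv7l even though the host is x86_64:

docker run --rm agners/archlinuxarm-arm32v7 uname -m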


## A Taiwanese viewpoint about #BlackLivesMatter

#BlackLivesMatter protests are happening in the US. It feels like a very, very distant event for people in Taiwan, and yet it is happening right beside me. I've seen a lot of viewpoints from the Asian American community, and that got me thinking: what am I feeling and thinking as a Taiwanese expatriate living in the US?

[This English version is translated and expanded from my original text in Traditional Chinese.]

To be honest, I knew next to nothing about racism when I came to the US years ago. I grew up in Taiwan all the way until I finished my master's degree. Racism wasn't talked about much in Taiwan (not that Taiwan doesn't have it), and I didn't have a deep understanding of US history; quite frankly, I still don't today. It could be that I'm lucky or insensitive to it, but I also never deeply felt that I was being discriminated against because of my racial background. The biggest discrimination I have felt since I left Taiwan is the oppression of my country. Almost no one recognizes Taiwan as a country, and we have to somehow navigate these gaps as Taiwanese individuals. #YourCountryIsNotACountry I can totally understand that some Taiwanese people, living 7,500 miles away from the US, probably don't have the context to build empathy towards what is going on here.

As I stay longer in the US, I get to know more people of different backgrounds. I hear more about my friends and about what's happening around me. And as I rebuild my identity now that I no longer live in my own identity bubble, I've been reading more. It's really hard not to start feeling and thinking more deeply about racism; it has become a problem I might encounter myself rather than some distant story. I read an article today, “Black Lives Matter, Taiwan’s ‘228 Incident,’ and the Transnational Struggle For Liberation”, that resonated with me deeply. Growing up in a country alienated by the international community, I had never thought that one day we would be drawing parallels between the 228 Incident and White Terror of Taiwan's dictatorship era and Black history and current events.

Taiwan has come a long way since the dictatorship era. We have grown into one of the most democratic and progressive countries in Asia. That did not happen without protests, so we should know this very well ourselves. More recently, we had the Sunflower Movement in 2014, and we had many same-sex marriage demonstrations over the years until we finally legalized it in 2019. We really should know what is going on. Taiwanese society cares a lot about being “polite”, and our movements put a lot of emphasis on projecting that image; everyone is very conscious of it. We would fight our way into the Legislative Yuan while self-patrolling to make sure no one was hurt and no cultural artifacts in the building were damaged, and protesters cleaned everything up afterwards. Yes, those are all great, but is that really everything? We felt it deeply when Hong Kongers were protesting for their freedom, and we saw the police brutality over there as well. It all got me thinking about what exactly a protest is and where I draw the line. In the face of the oppression and systematic discrimination the Black community has been going through, these niceties don't matter. Minnesota officials also found that some arrested looters were linked to white supremacist groups. We have seen this too: there were gangsters trying to blend into our movements to incite violence and escalate things. We should understand what is going on. We have always felt discriminated against by the international community, and we should have empathy here too, because what the Black community faces is far more personal than that.

I'm really glad that we have Taiwan. I may not have been living in Taiwan, but seeing us gain more momentum and visibility on the international stage really makes me happy. A few recent big policies are heading in a progressive direction. I feel really lucky and proud to be Taiwanese, but we are also far from perfect. We have not finished our own transitional justice for the 228 Incident, and we have our own racism problem towards migrant workers from Southeast Asian countries, not to mention the casual racism I still hear occasionally. I'm not saying every single Taiwanese person should care about all the things in the world; that is perhaps not necessary. However, the very least we can do is look at what is happening and try to prevent it from happening in Taiwan too. And if you do live in the US, we should care. It's unjust, and we are not protected from racism at all.

## Marking Helmet Cam Highlights while on a Motorcycle

I want to talk about one problem that has been bugging me as a motorcyclist for a while. I usually ride with a helmet cam. For example, I've been to Japan for some motorcycle road trips and collected hours and hours of footage of the road ahead and a few other angles. However, it is really hard to find the highlights in those videos.

Sometimes you notice something interesting happening on the road. To take one example from my recent trip to Napa: I saw two squirrels fighting on the road as I rode by (okay, it was both interesting and scary at the same time; luckily I managed to miss them). How do I recover these highlights from a long, boring video? The problem is that roads look very similar, and it is very easy to miss the exact moment when you are skimming through the footage.

I first thought GPS might work if I could just remember where things happened. It turns out it's really hard to remember at which corner you saw the fun stuff, and even if you do, synchronizing the video with recorded GPS tracks is usually a slow process, even when your helmet cam records a GPS track alongside the video. I then thought about making a hardware button that simply records a timestamp, but I would first need to figure out the right hardware to build, then mount it on the bike, and synchronize it with the video too.

Finally I had a really simple idea: what if I just use my hand to cover the camera? It's simple and easy to do, and now all I need to figure out is how to detect black frames in the video.

Here is an example of what a “marker” looks like on video when you cover the camera with your hand for a second. As long as you are covering the lens, it should produce a very dark frame compared to regular daytime riding footage.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, dist = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
dark_pixels = np.count_nonzero(dist == 0)
dark_percent = (float(dark_pixels) / size * 100)


We first convert the frame to grayscale for easier processing, since all we care about is detecting black pixels anyway. Then we run the frame through a threshold filter to map anything below gray level 30 to 0 (perfect black) and everything else to 255 (perfect white), and we count the pixels whose value equals zero.

Now we take this snippet and apply a bit more logic. Let's say we count a frame as a marker if more than 95% of its pixels are black. We might also get multiple marker frames while your hand moves in and out of view, so we want to merge nearby markers; let's say we keep at most one marker per five seconds. Now we can write out the final code!

import sys

import math
from datetime import datetime
import numpy as np
import cv2

MERGE_THRESHOLD_MS = 5000

def format_time(timestamp):
    msec = timestamp % 1000
    parts = [msec]

    secs = math.floor(timestamp / 1000)
    parts.append(secs % 60)

    mins = math.floor(secs / 60)
    parts.append(mins % 60)

    hrs = math.floor(mins / 60)
    parts.append(hrs)

    parts.reverse()
    return "%02d:%02d:%02d.%03d" % tuple(parts)

def main():
    src = cv2.VideoCapture(sys.argv[1])
    if not src.isOpened():
        print("Error opening file")
        sys.exit(0)
    length = int(src.get(cv2.CAP_PROP_FRAME_COUNT))
    width = src.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = src.get(cv2.CAP_PROP_FRAME_HEIGHT)
    size = width * height
    markers = []
    start_time = datetime.now()

    while src.isOpened():
        # Grab the next frame; stop at the end of the video.
        ret, frame = src.read()
        if not ret:
            break
        idx = int(src.get(cv2.CAP_PROP_POS_FRAMES))
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, dist = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
        dark_pixels = np.count_nonzero(dist == 0)
        dark_percent = (float(dark_pixels) / size * 100)
        frame_time = int(src.get(cv2.CAP_PROP_POS_MSEC))
        fps = idx / (datetime.now() - start_time).total_seconds()
        print("\033[0KFrame %d/%d [%s]: %.2f fps, %.2f%% black. %d black frames found.\r" %
              (idx, length, format_time(frame_time), fps, dark_percent, len(markers)),
              end='')
        if dark_percent > 95:
            markers.append(frame_time)

    # Keep at most one marker per MERGE_THRESHOLD_MS window.
    merged_markers = []
    for marker in markers:
        if not merged_markers or marker - merged_markers[-1] > MERGE_THRESHOLD_MS:
            merged_markers.append(marker)

    print()
    print("Markers:")
    for marker in merged_markers:
        print("  %s" % format_time(marker))

    src.release()

main()


To actually run this script, you will need to have opencv-python and numpy installed.
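
If you don't have them yet, a standard pip install covers both:

pip install opencv-python numpy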

One thing I have not figured out is how to improve the script's performance. It currently takes about 5 minutes to process a 26-minute video, and most of the work appears to be CPU-bound (decoding/analyzing). I wonder whether moving some of the processing onto the GPU would help with speed, but that's a topic for another time!
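
One cheap optimization I haven't tried yet (a sketch, not benchmarked): the black-frame ratio doesn't need full resolution, so the frame could be downscaled before thresholding to shrink the per-frame work considerably:

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Sample at 1/4 resolution; the dark-pixel percentage should barely change.
small = cv2.resize(gray, (0, 0), fx=0.25, fy=0.25, interpolation=cv2.INTER_NEAREST)
_, dist = cv2.threshold(small, 30, 255, cv2.THRESH_BINARY)
dark_percent = float(np.count_nonzero(dist == 0)) / dist.size * 100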

And this is the story of how I recovered that squirrel snippet from a 4-hour-long recording!

## Quick note on Crostini + Chinese IME

Crostini is the new Google project that brings Linux apps to ChromeOS. Input method support is on the roadmap, but it has not been implemented yet in the current preview version of Crostini. The situation is a little different from a regular Linux setup, because Crostini runs Wayland and uses Google's sommelier project to pass everything through to the ChromeOS host Wayland compositor.

To set it up, in your Crostini container do:

sudo apt install fcitx # and your IME engine, example for Taiwanese users: fcitx-chewing
sudo apt remove fcitx-module-kimpanel


Then you should use fcitx-config-gtk3 to set it up.

Now we need to set a few environment variables, and we want them to apply to applications started from the launcher menu too. I found that we can set them in the file /etc/systemd/user/cros-garcon.service.d/cros-garcon-override.conf. This file might be overwritten by future updates; suggestions for a better location are welcome. Add these extra lines to it:

Environment="GTK_IM_MODULE=fcitx"
Environment="QT_IM_MODULE=fcitx"
Environment="XMODIFIERS=@im=fcitx"


Finally, we need to start the fcitx daemon. I just put this one line in ~/.sommelierrc to do the work:

/usr/bin/fcitx-autostart


That's all! Now enjoy typing Chinese in Linux apps on ChromeOS!

## Implement Night Mode on Twitter Lite

We recently launched Night Mode on Twitter Lite. Night Mode is an exciting feature from an engineering perspective: it is highly demanded, visually pleasing, and was the primary driver of our effort to move our CSS to CSS-in-JS. Let's dive into what we did to bring this feature to life!

DISCLAIMER: This post was written and published after the end of my employment at Twitter. I tried to recall the details as best as I could, and I apologize in advance for any inaccuracies.

## What is it?

Night mode is an increasingly popular feature that is showing up on a lot of websites and apps. Most websites use a white background, which can cause eye strain in a dark environment. When users activate night mode, Twitter Lite switches to a dark color theme app-wide.

## Styling components

The core of this feature is the ability to dynamically switch the styling of every component on the screen. Our components were styled using CSS. To swap out styling, we would have had to build multiple CSS bundles based on a few factors: color theme and LTR/RTL text direction. That is not a very scalable solution, and it requires users to download new CSS when switching between combinations. The other option would be CSS variables, which, unfortunately, did not have enough support across the browsers Twitter Lite intended to support.

Our next option was a CSS-in-JS solution. We use react-native-web throughout our internal component library and the website, and it has a built-in API called StyleSheet that provides exactly this.

// A simplified example of using react-native-web StyleSheet
const styles = StyleSheet.create({
  root: {
    backgroundColor: theme.colors.red
  }
});

const Component = () => <View style={styles.root} />;


## Runtime-generated Style Sheet

To create a StyleSheet instance, you call StyleSheet.create and pass in a JSON object that looks very much like its CSS counterpart. The API returns an object with each class name mapped to a number representing the registered styles, while the styling engine works in the background to generate and deduplicate runtime CSS classes. To support themes, we would need to somehow allow it to:

1. Rerun the style creation every time we switch to a new theme
2. Pass in a reference to the next theme so we can use the new color palette

We designed a new API wrapping the StyleSheet API; instead of taking an object, it accepts a function (theme) => styleObject. We store references to all those functions and return an object with dynamic getters. Whenever the user switches themes, we re-run all the style creations with the new theme, and React components can keep using the same styles object returned from the first API call to render with the new style.
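
A sketch of how such a wrapper could be structured (the names here are mine, not the actual Twitter Lite source; a Proxy is one way to get the dynamic-getter behavior):

// Hypothetical wrapper: re-creates every registered sheet on theme switch.
// `themes` and RNStyleSheet (react-native-web's StyleSheet) are assumed in scope.
const sheets = [];
let currentTheme = themes.light;

function createThemedSheet(makeStyles) {
  const entry = { makeStyles, sheet: RNStyleSheet.create(makeStyles(currentTheme)) };
  sheets.push(entry);
  // Dynamic getters: callers always read from the most recently generated sheet.
  return new Proxy({}, { get: (_, name) => entry.sheet[name] });
}

function switchTheme(nextTheme) {
  currentTheme = nextTheme;
  for (const entry of sheets) {
    entry.sheet = RNStyleSheet.create(entry.makeStyles(nextTheme));
  }
}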

// Updated to support the new API
const styles = StyleSheet.create(theme => ({
  root: {
    // do not use color names directly but name colors by context
  }
}));

const Component = () => <View style={styles.root} />;


## Are we all on the same page?

Sounds perfect! New styles are generated, and all the references are updated. The page, however, is not updated. Well, not until some component receives new data. The components do not re-render on the spot, because we are updating an external variable instead of working with React component state. We need a way to signal components to re-render.

Theoretically, we would love this part to be as performant as possible to reduce the overhead of switching themes. For example, we could use a higher-order component to keep track of each component and its corresponding styles and use that information to update components on a smaller scale. That turned out to be hard: we would need to wrap many components, some of which have shouldComponentUpdate tricks to prevent themselves from updating, and their children might have shouldComponentUpdate functions too. It works 80% of the time; unfortunately, the remaining 20% stands out very much under a dark theme.

One hacky solution would be to recursively call forceUpdate() on every mounted component, but that would require meddling with React internals, and we eventually decided against it. In our first implementation, we manually unmounted the previous component tree entirely and remounted a new one; this caused a considerable delay in theme switching and worked outside React's lifecycle. We switched to using a React.Fragment with its key set to the theme name, which lets React optimize the operation better without any lifecycle hooking.

class AppThemeRoot extends React.Component {
  state = { theme: 'light' };

  componentDidMount() {
    StyleSheet.onThemeSwitch((theme) => this.setState({ theme: theme.name }));
  }

  render() {
    return (
      <React.Fragment key={this.state.theme}>
        {this.props.children}
      </React.Fragment>
    );
  }
}


## The final touch

Now that we have the basics working, we would like to make it better. Instead of swapping the content out directly, we want a smooth transition. We explored a few different options to implement this as well.

The first option that popped into my head was a cross-fade: fade out the old content while fading in the new content. We can create a copy of the old content with oldDomNode.cloneNode(true) and insert it back into the DOM. It looked absolutely beautiful, but sadly it screwed up our virtualized list implementation, so we had to explore other avenues. The next thing we tried was a plain fade-out-and-fade-in. It looks okay when done fast enough that the transition feels smooth; however, it shows a brief white flash because the default page background is pure white. We addressed the flash by also fading the document background color to the next theme's background color, which makes the whole thing feel much more like a cross-fade than a simple fade-out-and-in.
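
In spirit, the background part of the fix is tiny. An illustrative sketch, not the real code (the timing is made up, and nextTheme is assumed to be the theme object being switched to):

// Fade the document background alongside the content fade to avoid the white flash.
document.body.style.transition = 'background-color 150ms ease-in-out';
document.body.style.backgroundColor = nextTheme.colors.background;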

## Credit

I hope you enjoyed our journey exploring the implementation of Night Mode. Night Mode couldn't have been built without the team's collaboration. Thanks to Marius and Sidhu for finding the best solution to this problem with me, and a special call-out to Sidhu, who implemented the proposal. Thanks to the whole team for very efficiently migrating all of our components off CSS in two hack days, which in turn enabled us to switch the theme of the entire website!

## Infinite List and React

I have worked on Twitter’s new mobile website for the past year. We rebuilt the website using the latest web technologies: React, Redux, and Node.js/Express, to name a few. It is an absolutely exciting project to work on, since you rarely get a chance to rework a large-scale website from the ground up and experiment with the latest tools without having to worry about historical baggage.

One of the problems we realized early on is that our Tweet component is fairly complex in both the React tree and the DOM tree. A Tweet does not contain only the body text and metadata; it also involves processing #hashtags, @mentions, cards, and a lot of Unicode ordeals (emoji being one of the most prominent examples) to make sure we render everything correctly across all platforms.

This normally would not be a problem on a desktop browser, which has enough processing power to deal with a highly complex DOM tree. However, that is not the case on mobile browsers. We discovered that performance degrades as the user scrolls further down. What’s even worse, if we want to implement caching and pre-download, say, 200 Tweets for a user, the app would effectively render 200 Tweets at the same time and lock up for a few seconds. I started to look into this problem and realized that the solution is to keep only the visible portion of an infinite list in the DOM tree, rendering and removing the out-of-view parts as the user scrolls.

## How did we solve it?

In search of a component that supports both lazy rendering and dynamic item heights, we developed a component called LazyList. Supporting items of dynamic height makes the system much more complex, but unfortunately Tweets have non-deterministic heights due to variable content like cards, pictures, and text.

## The Basics

LazyList works by measuring each item’s height and calculating which slice of items should be displayed on the screen given the scroll position; this slice is called a projection. It also applies padding before and after the rendered slice to stand in for the out-of-view items, so the scroll bar pill keeps the correct size and position.

In addition to the items visible in the viewport, we need to render extra items above and below the visible region so the page scrolls smoothly. Typically this comes to one to one-and-a-half pages’ worth of items. It also gives us a bit of buffer to preload the next page of Tweets before the user hits the bottom of the scrollable area. Now that we have a strategy for how this component should work, we need to fit it into React’s lifecycle methods. Ideally, it would work just like a ListView component: give it items and a render function, and get lazy rendering for free.

## Lifecycle

The only thing LazyList needs in order to render is a projection of items; a projection is defined as the slice of input items visible in the viewport. To calculate the projection at any given moment, we need to know the height of each item. A typical approach on the web is to render an item off-screen, take a measurement, and re-render it on-screen with the cached measurement. However, this doubles the rendering cost, which is impractical for a product used by millions of users on lower-end mobile devices. We moved to an in-place measurement technique: we render items on screen first with a guesstimated average height, caching the actual heights of rendered items, and repeat this process until the estimated/cached heights match all the items on screen. In-place measurement also lets us accommodate cases where an item’s height changes after rendering, such as when loaded images change the overall height of a Tweet.

### Initial rendering (mount)

When the component is mounted for the first time, it has no knowledge of which items will fall within the viewport. It renders nothing and simply triggers a projection update.

### Update Projection

The projection is generated by adding up item heights sequentially until the running total reaches the scroll offset of the container; at that point, we know the items that follow will be in the viewport. We keep adding until the total exceeds the scroll offset plus the container height. If we are missing the height for any item along the way, we guesstimate one; the incorrect number gets corrected once we cache the real height and update the projection again.
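
A sketch of that walk over the height cache (illustrative names, not the actual LazyList code):

// Compute which items [start, end) overlap the viewport, plus the top padding
// that stands in for the skipped items above.
function computeProjection(heights, estimate, scrollTop, viewportHeight) {
  const heightOf = (i) => heights[i] || estimate; // fall back to the guesstimate

  let offset = 0;
  let start = 0;
  while (start < heights.length && offset + heightOf(start) < scrollTop) {
    offset += heightOf(start);
    start += 1;
  }
  const topPadding = offset;

  let end = start;
  while (end < heights.length && offset < scrollTop + viewportHeight) {
    offset += heightOf(end);
    end += 1;
  }
  return { start, end, topPadding };
}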

This step is also triggered by input events like resize and scroll.

### Render

Rendering is fairly straightforward once we've established the projection to use: we simply loop over it and call the renderer function supplied by the user to put the items on screen.

### Epilogue

After rendering, we update our internal cache of item heights. If we encounter any inconsistencies, it means the current projection is incorrect, and we repeat the process until it settles down. The height differences are also deducted from the scroll position so the list stays in a stable position.

### Resizing

Resizing the window changes all item widths, which effectively invalidates all cached item heights. However, we definitely do not want to invalidate the entire cache at once. Think of a user who has scrolled down five pages: if they resize the window, we want the app to adapt gradually instead of waiting for LazyList to remeasure every item. Fortunately, the in-place measurement technique handles this scenario: we write the new item heights into the cache and let the system correct itself as the user scrolls. The downside is that the scroll bar pill can be a bit jerky or resize suddenly, because the first pass renders with stale cached heights and corrects itself on the second pass. Still, this outcome is preferable to having the app lock up for several seconds.

## Scroll Position Stabilization & Restoration

{% img /images/posts/infinite-list-anchoring.gif Notice the first tweet is always in the viewport during resizing %}

Whenever there is a difference between the expected and actual item heights, the scroll position is affected. This problem manifests as the list jumping up and down randomly due to miscalculation, so we need an anchoring solution to keep the list stable.

LazyList originally used a top-aligning strategy, meaning it kept the first rendered item at the same position. This improves the symptom but does not fix it completely, because we are not necessarily aligning items within the viewport. We have since improved it with an anchor-based solution: we search for an anchor that is present in the projections both before and after an update, usually the first item within the viewport, and use it as a point of reference to adjust the scroll position so it stays in the same place. This strategy works pretty well. However, it is tricky to programmatically control the scroll position while inertia scrolling is still in effect: it stops the animation on Safari and causes a slight slowdown on Chrome for Windows, while working fine on Chrome for Mac and Android. We do not have a perfect solution for that yet.
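
The adjustment itself is conceptually simple (a sketch with made-up helper names, not the production code):

// Keep the anchor item at the same on-screen position across a projection update.
// offsetBefore/offsetAfter: the anchor's top offset in the old and new projections.
function stabilizeScroll(container, offsetBefore, offsetAfter) {
  const delta = offsetAfter - offsetBefore;
  if (delta !== 0) {
    container.scrollTop += delta;
  }
}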

Remembering your timeline position is a feature most Twitter users expect from a client. It is an interesting challenge, though, because each browser has its own slightly different strategy for restoring the scroll position when navigating back to a previously loaded page: some wait for the whole page to finish loading, and some wait a little extra to account for dynamically loaded data. To get a cross-browser solution, we take matters into our own hands. We give each infinite scrolling list a unique ID and persist the item-height cache and anchor candidates under it. When the user navigates back from another screen, we use that information to re-initialize the component and re-render the screen exactly as it was left. We take advantage of the scrollRestoration attribute of the history object to take over the restoration whenever it is available, and compensate accordingly when a manual takeover is not possible.
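
The takeover itself is a one-liner where the attribute is supported; this is the standard History API, not Twitter-specific code:

if ('scrollRestoration' in window.history) {
  // Opt out of the browser's automatic restoration; we restore the position ourselves.
  window.history.scrollRestoration = 'manual';
}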

## Onwards

Being central to our performance, this is still a component we work on from time to time; it even has a new name now, VirtualScroller. We have taken on refactoring and performance tuning (minimizing layout thrashing, optimizing for browser schedulers, etc.), thanks largely to Marius, Paul, the Google Chrome team (especially the article “Complexities of an Infinite Scroller”, which informed our improvement plan), and the Microsoft Edge team.