First Unreal World

After my first attempt at building my own world, I realized how complex and difficult it is to craft a detailed one.

The first issue I ran into was how to make the character's coordinates move along with the animation I got from Mixamo. It turns out to be important to check "In Place" when downloading an animation from Mixamo.

For my world, I built a dungeon-like level, so it looks like my character is on an adventure.

Another issue was rendering and exporting the video. No matter how many times I tried, the rendered video came out with the camera in a strange position, so in the end I could only screen-record what I had done.

I feel Unreal is so complex that it will take a lot of time to get used to. Since I want to learn more about VR, I still have plenty of work to do in Unreal.

Fuse and Mixamo

Before constructing a world in Unreal, this week I tried to make some characters in Fuse and Mixamo.

Fuse has an issue on macOS: it kept crashing whenever I tried to choose a character's features, so I had to switch to a Windows laptop. I made one character in Fuse and took a zombie character directly from Mixamo.

For the character I made in Fuse, I wanted a hunter with a modern look, so the clothes mix several styles. However, I am not good at drawing, and I couldn't figure out how to give the character a weapon, so it is bare-handed.

AfterFX Animation

Following our storyboard, we gathered some Avengers and Shiffman assets online and took some photos on the ITP floor to use as our stage.

At first we found a lot of Avengers assets that were from cartoons. However, since we are using Shiffman as our supervillain and ITP as the stage, we had to switch to movie-style characters instead.


For the animation, Heng and I worked separately. I mainly worked on the part where the Avengers go to ITP and become weaker and weaker.

My part

Heng did the intro and the closing, and modified the two parts so they would not look like two separate animations.

Full Video

The Data Viz of NYC Traffic


At first, Alan and I decided to do data visualization and reached an agreement pretty quickly. We both wanted to do something with a map, so we came up with visualizing the traffic in New York. As everyone knows, New York is always stuck in traffic jams and the subway trains are always delayed, so we wanted to draw the real conditions in the city.

Data Source:

Originally, we wanted to use realtime data as the data feed for the project. However, some difficulties came up.

Traffic Data

The road traffic data comes from NYC DOT (Department of Transportation). However, once we extracted the data, it only gave us highway speeds. Next, we tried the data from the NYC Taxi & Limousine Commission. That data is not realtime, and they changed its format so that it only marks the pickup and drop-off AREA of each taxi trip.

Due to the difficulty of getting realtime traffic data, we found an example from a visualization library made by Uber. One of the provided examples shows taxi routes over 30 minutes, and its taxi data includes route coordinates. Accordingly, we took the data from that example as our traffic data to represent traffic speed.

Subway Data

For the subway data, MTA provides realtime data for all the trains. Unfortunately, it is not in JSON format; it is in a format called GTFS instead. GTFS stands for General Transit Feed Specification, which was created by Google. It uses protocol buffers to encode the data feed into a form that is even lighter than JSON. However, decoding GTFS was more difficult than we thought. In the end, we found a solution on GitHub written in Python, and it was the only code that successfully decoded the data provided by MTA.

The decoded data looks like this:

To fully understand the data, MTA provides a document defining all the fields. Additionally, there are files that give coordinates for each route and station.

The GTFS data from MTA updates every 30 seconds. However, we didn't set up a server to grab the data, so we could only download it locally. For the project, we downloaded the data every 30 seconds over 30 minutes.


The main code can be divided into three parts: Setup, Analysis, and Draw.


Setup

This part loads every dataset and gets it ready.

The first thing is to set up the map. Our project uses Mapbox as the base, and we use Mappa to call Mapbox and connect it to p5. However, there is a small catch when doing additional manipulations to Mapbox through Mappa: anything added to the map via Mapbox has to be event-driven. So if I want to draw something with Mapbox rather than p5, I can only do it on a mouse press or a key press.

Next, load data for traffic, subway stations, subway routes, and subway GTFS data.


Analysis

For the input data, I create classes for each dataset: Traffic, MtaStation, MtaRoute, and Train.

Each Traffic object is simply one taxi, and each Train object is one train.

Each MtaStation object represents one station, and each MtaRoute represents one train route.

To convert the raw data into class objects, I use some simple regular expressions to match and extract what I want.

Moreover, since the GTFS data is chronological, I need to put the records for the same train into the same object, which means efficiently finding where an existing train sits in the array. To avoid looping through the array searching for a matching train id or route id, I create several hash tables to make the lookups easier and faster.
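The hash-table idea can be sketched like this in Python (the class and field names here are made up for illustration, not the project's actual ones): keep a dict from train id to the Train object, so each incoming record finds its train in O(1) instead of scanning the array.

```python
class Train:
    def __init__(self, train_id):
        self.train_id = train_id
        self.updates = []  # chronological position updates for this train

trains = []          # array of Train objects, in the order first seen
trains_by_id = {}    # hash table: train id -> Train object

def add_update(train_id, update):
    # O(1) dict lookup instead of an O(n) loop over `trains`
    train = trains_by_id.get(train_id)
    if train is None:
        train = Train(train_id)
        trains.append(train)
        trains_by_id[train_id] = train
    train.updates.append(update)

add_update("036150_1..S03R", {"stop": "103S", "time": 1510000000})
add_update("036150_1..S03R", {"stop": "104S", "time": 1510000090})
print(len(trains), len(trains[0].updates))  # 1 2 (one train, two updates)
```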

After getting every data array ready, I still need to do some sorting. For the traffic and GTFS data, I sort by time so that I know when to start drawing each train and taxi. For MtaRoute, I have to sort because the raw data records every route a train might run through, and most of them are pretty similar, so I need to distinguish which lines to draw and which to skip to reduce the amount of data handled in p5.
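The chronological sort is the simple part; a Python sketch with hypothetical field names: once each vehicle's updates are ordered by timestamp, the first element tells us when to start drawing it.

```python
# Records arrive unordered; sort each vehicle's updates by timestamp
# so the first element tells us when to start drawing it.
taxi_updates = [
    {"time": 1510000120, "lon": -73.990, "lat": 40.730},
    {"time": 1510000000, "lon": -74.000, "lat": 40.720},
    {"time": 1510000060, "lon": -73.995, "lat": 40.725},
]
taxi_updates.sort(key=lambda u: u["time"])
start_time = taxi_updates[0]["time"]  # when this taxi first appears
print(start_time)  # 1510000000
```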


Draw

The last part is to draw the data on the map.

I draw the 3D buildings and stations through Mapbox, so they only show up when a key is pressed.

The traffic data, train routes, and running trains I draw in p5.

If I drew everything on the map at once, performance would definitely suffer. Also, the data we are showing is time-based, so it should be drawn in time order.

I use the timestamps of two consecutive points and the "lerp" function to interpolate the line between them, so the trains and taxis run at different speeds. For example, if a taxi travels between two points in 10 seconds, I divide the line between those points into 10 pieces, so that only one section is drawn each frame; the longer a trip takes, the more frames it needs to traverse the whole line.
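The same interpolation logic, written here as a Python sketch rather than the project's p5 code (`lerp` below mirrors p5's built-in of the same name): the segment between two points is split into one piece per second of travel time.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b, t in [0, 1] (same as p5's lerp)."""
    return a + (b - a) * t

def segment_positions(p0, p1, t0, t1):
    """Split the segment p0 -> p1 into one piece per second between t0 and t1."""
    pieces = max(1, t1 - t0)  # e.g. a 10-second hop -> 10 pieces
    return [(lerp(p0[0], p1[0], i / pieces),
             lerp(p0[1], p1[1], i / pieces)) for i in range(pieces + 1)]

# A taxi takes 10 seconds between two points -> 11 interpolated positions,
# so drawing one new section per frame makes slower trips need more frames.
positions = segment_positions((0.0, 0.0), (10.0, 5.0), 100, 110)
print(len(positions), positions[5])  # 11 (5.0, 2.5)
```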



Short video demo: The Data Viz of NYC Traffic

The full code is here

Next Step:

The next thing we want to do is add a time display. Perhaps playback could also be paused or slowed down. Moreover, we want to add more interactive elements: for example, the user could choose which train routes to show with checkboxes, and popup windows could show information when the user clicks on a station.


By tuning the parameters in the code, I tried to find some trends among them.


Initial: mlp error: 0.016490, rbm error: 0.23327

1st: hidden size = 100, epoch size = 2000
mlp error: 0.011570, rbm error: 0.011254

2nd: learning rate = 0.01 (rbm), hidden size = 100, epoch size = 2000
mlp error: 0.011857, rbm error: 0.011254

3rd: learning rate = 0.03 (mlp), learning rate = 0.01 (rbm), hidden size = 100, epoch size = 2000
mlp error: 0.011570, rbm error: 0.011254

4th: hidden size = 200, epoch size = 2000, learning rate = 0.03 (mlp), learning rate = 0.01 (rbm)
mlp error: 0.010433, rbm error: 0.004448


It seems that increasing the hidden size reduces the error;

a larger epoch size apparently also reduces the error;

as for the learning rate, the results here suggest that the lower the learning rate, the lower the error. However, I think this also depends on the epoch size: if the epoch size is large enough, there is more room for the learning rate.

So the last trial increased the hidden size to 200, and its error is the lowest among the five trials.


This week, I struggled a bit with the input training data. Since a single-layer perceptron can only handle linearly separable data, I couldn't simply randomize the input. Therefore, I used the simplest inputs, {0, 1}, to do AND and OR.

As to the algorithm itself, I wondered about how to calculate the error. I found that if I compute E = Y(guessed) − Y(real), it is very easy for the two weights to diverge, but if I change it to E = Y(real) − Y(guessed), divergence happens far less often.
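The sign matters because of the update rule itself: with w += lr * E * x, the error must be E = Y(real) − Y(guessed) so each update moves the weights toward the target; flipping the sign pushes them away, which is why they diverge. A minimal sketch on the AND data (my own toy version, not the homework code):

```python
import random

def step(x):
    """Step activation for a classic perceptron."""
    return 1 if x > 0 else 0

def train(inputs, targets, lr=0.1, epochs=50):
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            guess = step(w[0] * x[0] + w[1] * x[1] + b)
            e = t - guess  # Y(real) - Y(guessed): a correction, not amplification
            w[0] += lr * e * x[0]
            w[1] += lr * e * x[1]
            b += lr * e
    return w, b

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_targets = [0, 0, 0, 1]  # AND is linearly separable, so this converges
w, b = train(inputs, and_targets)
print([step(w[0] * x[0] + w[1] * x[1] + b) for x in inputs])  # [0, 0, 0, 1]
```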

For the XOR perceptron, I failed. I couldn't figure out how to combine two perceptrons, and I found some posts that use sigmoid as the activation function. However, I didn't have enough time to try it; I will try it later.

Homework3: Perceptron

K-means Clustering

For the homework this week, since I have experience with Matlab, I am quite familiar with working on matrices.

However, there seem to be some nuances between Matlab and numpy in certain operations. For example, nothing seems to happen when transposing a 1-D array in numpy. Perhaps that is because numpy treats a 1-D array as a plain array rather than a matrix. Or maybe I just assigned it wrong.
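This can be checked directly (assuming numpy is installed): `.T` on a 1-D array really is a no-op, because numpy arrays are not Matlab-style matrices. To get an actual column vector you have to add an axis explicitly, e.g. with `reshape` or `np.newaxis`:

```python
import numpy as np

a = np.array([1, 2, 3])
print(a.T.shape)               # (3,) -- transposing a 1-D array changes nothing

col = a.reshape(-1, 1)         # explicit column vector
print(col.shape)               # (3, 1)

print(a[:, np.newaxis].shape)  # (3, 1) -- same effect with np.newaxis
```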

Another thing about Python: although it doesn't require a specific data type when declaring a variable, as C++ and Java do, Python itself still cares quite a bit about a variable's data type.

Here is the link of HW2:  K-means Clustering