NOC Final Project: Style Transfer Chrome Extension

Rather than making a webpage where users upload images, I thought it would be more fun to apply style transfer directly to Instagram images. So I combined what I learned from Hacking The Browser to make a Chrome extension.

This project can be separated into three parts:

The first part is training my own style base model. Originally I wanted to follow this repo to train my own model. However, training took too long and I could not produce my own style model, so instead I took three pretrained models from this repo.

The second part is the style transfer itself. Thanks to the wonderful ml5, it gets the job done easily. However, the style transfer function in ml5 seems to have a small problem with image width and height: if the input image is not square, the output becomes a mess, and the width and height of the output image appear to be swapped.
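Since the non-square issue was the main snag, one workaround sketch is to crop the image to a centered square before handing it to ml5 (`squareCrop` is a hypothetical helper for illustration, not part of my actual code):

```javascript
// Compute a centered square crop region, as a workaround for the
// non-square-output issue described above (hypothetical helper).
function squareCrop(width, height) {
  const side = Math.min(width, height);
  return {
    x: Math.floor((width - side) / 2),
    y: Math.floor((height - side) / 2),
    size: side,
  };
}

// In the browser, the cropped image would then go through ml5:
// const style = ml5.styleTransfer('models/wave', () => {
//   style.transfer(croppedImg, (err, result) => { outImg.src = result.src; });
// });
```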

The third part is making the Chrome extension itself. Since I am getting image data from Instagram, there is a cross-origin issue, so I had to find other ways to get the image data.

My first approach was to download the chosen image, so that the data I work with becomes local. At first I thought the extension would pack all its images into Chrome, so the web accessible resources (images) would be unchangeable. Luckily, it seems they still update if I download the image into my extension folder.


Downloading the image felt a bit hacky, so I tried another way after the presentation: take a screenshot of the page and cut out the part I want (the image I clicked). This time I send the image to the background script to do the style transfer. At first I thought doing the style transfer in the background wouldn't slow down what the user is doing in the foreground, but it turns out it slows down the whole browser. Fortunately, the image is not that large, so it only takes a few seconds. One remaining issue: when you hover over the image, a div pops up that I cannot get rid of, so the output image has the div baked into it.
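The cut-out step has one subtlety worth sketching: `captureVisibleTab` returns an image in device pixels, so the clicked element's CSS-pixel bounds have to be scaled by `devicePixelRatio`. This is a minimal sketch with a hypothetical helper, not my actual code:

```javascript
// Map a clicked element's CSS-pixel bounding box to screenshot pixels.
// (Hypothetical helper: the screenshot from captureVisibleTab is scaled
// by the device pixel ratio relative to getBoundingClientRect values.)
function cropRect(bounds, dpr) {
  return {
    x: Math.round(bounds.left * dpr),
    y: Math.round(bounds.top * dpr),
    w: Math.round(bounds.width * dpr),
    h: Math.round(bounds.height * dpr),
  };
}

// In the extension, the content script would send this rect along with a
// message, and the background script would crop the screenshot (browser only):
// chrome.runtime.sendMessage({
//   type: 'styleTransfer',
//   rect: cropRect(img.getBoundingClientRect(), window.devicePixelRatio),
// });
```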


The full code is here: StyleTransfer

RWET Final Project: PhonoCi 2.0

In a previous homework, I made a poem structure called “PhonoCi”, which is a combination of Phonogram and Ci.

Ci is a form of Classical Chinese poetry. A Ci uses a set of poetic meters derived from base patterns, with fixed rhythm, fixed tone, and variable line length, following model examples.

For my homework last time, I only restricted the word counts and filled in English words. However, the result somehow lost the rhythm, since some words have too many syllables.

In Chinese, words fall into four tone classes, and each Ci form gives a certain tone pattern for poets to fit words into.


So this time I restrict the number of syllables instead: most of my corpus is limited to one-syllable words, with only a few two-syllable words.

Also, to “visualize” the poem, I search Shutterstock for matching images and combine them into a GIF.

Some outputs:



The full code is here: PhonoCi 2.0

Hacking The Browser Final Project Thoughts

For the final project, I am going to build an image style transfer extension that can be applied to images on a webpage, Instagram in particular.
This project is combined with my Nature of Code project.
The project can be separated into 3 big parts:
1. Train the style model
2. Style Transfer the image
3. Swap the image

The first step will be done locally. Since training takes a long time,
it is impossible for a user to upload an arbitrary image to serve as the style base image.

For the second step, I want to do it in background.js.
Since the transfer process takes time depending on the size of the image,
running it behind the scenes should ideally keep the user from getting stuck while doing other things on the page.

For the last step, it is just showing the result on the page.

I think I will use a popup page for users to choose the style they want,
and I would like the user to pick which image to change by clicking on it,
so after they have chosen the style, the extension will probably block the original click behavior on that particular image element.

So I will definitely use a page action, a content script, and a background script. As for the APIs, I don't have a particular list yet.
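A minimal manifest sketch for this structure (manifest v2; the file names and match patterns are placeholders, not final choices):

```json
{
  "manifest_version": 2,
  "name": "Style Transfer Extension",
  "version": "0.1",
  "page_action": { "default_popup": "popup.html" },
  "content_scripts": [
    { "matches": ["https://www.instagram.com/*"], "js": ["content.js"] }
  ],
  "background": { "scripts": ["background.js"], "persistent": true }
}
```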

Hacking The Browser HW5

To briefly describe how the WSJ bypass works: delete the Cookie header on outgoing requests, and block cookies from being set when the response headers come back. Also, add or modify the Referer and User-Agent headers to Google and a specific browser name (or, in the updated version, make it look like you are coming from Twitter).

In manifest.json, you simply need the webRequest and webRequestBlocking API permissions to inspect and block requests, plus a list of the websites you want to work on.
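A sketch of the relevant part of manifest.json (the match pattern here is my assumption of what the WSJ entry would look like):

```json
{
  "permissions": [
    "webRequest",
    "webRequestBlocking",
    "*://*.wsj.com/*"
  ]
}
```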


background.js hooks in at two points: right before the browser sends the request headers, and right when the browser receives the response headers.


The first argument is a callback function: the action to take when the event fires.

The second argument is a filter, which limits the requests for which the event is triggered – in this example, it works on all URLs and the request type is “main_frame”.

The third argument is opt_extraInfoSpec, which decides what information is passed into the callback function – in this example, requestHeaders and responseHeaders for the two events. The “blocking” option tells Chrome to hold the request until the callback returns, so the headers have time to be modified.

The changeRefer callback keeps every header that is not Cookie: it checks whether the URL you are requesting is on the allowed list, and if not, deletes the cookie. It then checks for headers called Referer and User-Agent and, if present, changes them to Google/Twitter and the browser name.

The blockCookies callback deletes any header called Set-Cookie whenever the browser receives response headers.
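The two callbacks can be sketched as plain functions. This is a minimal reconstruction for illustration: the names changeRefer and blockCookies come from my code, but the bodies here are simplified (the Twitter values are assumptions, and the URL allow-list check is skipped):

```javascript
// Runs before request headers are sent: drop Cookie, rewrite Referer
// and User-Agent (values here are placeholder assumptions).
function changeRefer(details) {
  const headers = details.requestHeaders.filter((h) => h.name !== 'Cookie');
  for (const h of headers) {
    if (h.name === 'Referer') h.value = 'https://twitter.com/';
    if (h.name === 'User-Agent') h.value = 'Twitterbot/1.0';
  }
  return { requestHeaders: headers };
}

// Runs when response headers arrive: drop every Set-Cookie header.
function blockCookies(details) {
  const headers = details.responseHeaders.filter((h) => h.name !== 'Set-Cookie');
  return { responseHeaders: headers };
}

// In background.js the callbacks are registered like this (browser only):
// chrome.webRequest.onBeforeSendHeaders.addListener(
//   changeRefer, { urls: ['<all_urls>'], types: ['main_frame'] },
//   ['blocking', 'requestHeaders']);
// chrome.webRequest.onHeadersReceived.addListener(
//   blockCookies, { urls: ['<all_urls>'], types: ['main_frame'] },
//   ['blocking', 'responseHeaders']);
```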

Detourning The Web Final Project

Most of the time, when I see a Hollywood movie's English title, I have no idea which movie it is. The main reason is that when a movie is released in Taiwan, it gets a Chinese name, and most of the time, the Chinese name is not a direct translation of the English one.

So I want to grab the alternative titles from different countries and use Google Translate to translate them back into English to see how they differ. It is also fun to see what Google makes of the foreign languages.

After doing some research, I found that TMDB has an API for getting movie data. Accordingly, I made a page with Flask where the user enters keywords to search for movies; choosing one movie returns posters with translated titles.

I use requests to get data from the TMDB API, then use selenium to open a browser and do the translation. The final step is to use Pillow to put the title on a poster.


The full code is here: Movie Title Translation

Hacking The Browser HW3

For the first part, I put the code on GitHub: Bookmaklet2Extension

For the second part, I chose a bookmarklet called “MapThis”, which grabs the address you select and redirects directly to Google Maps. The code is shown below with comments.


For the third part, I chose an interesting Chrome extension called Giphy Tabs, which shows a GIF every time you open a new tab.

NOC Final Project Proposal

For the final project, I am interested in working on machine learning concepts, so I want to take this chance to make a machine learning / neural network project.

In Jabrils’ talk, he showed an image style transfer project:

I have been working on some image processing projects, and I am really interested in bringing machine learning techniques into them.

Also, I found some existing projects like Face Portraits Style Transfer, Style Transfer, and Colorizing Grayscale.

So I want to take images and transfer them into the styles of different kinds of paintings.

I don't know if this is too ambitious, or whether it even makes sense, since there are already developed projects and even mobile apps that can do this.

RWET HW6: Semantic Similarity

For the homework this week, I chose to apply semantic word vectors to poems I found on the Poetry Foundation.

So I went to the Cat Poem Collection page
to get all the poems on it, then randomly chose 10 sentences from those poems as my poem base.

I followed the tutorial Replacements and walks with word vectors. I use spaCy's word vectors to find words similar to those in my base, then give each word a 50% chance of being replaced.

Hacking The Browser HW2

To make a bookmarklet that is difficult to use, I tried to build in the “mouse not moving” event feature that I asked about last week.

So the bookmarklet has 2 parts: one for when the mouse is moving and one for when it is not. When the mouse is moving, I take common tags (p, li, h1~h6, span), grab their innerText, and make the text keep disappearing on each event trigger. Moreover, the user will find some links that seem to offer an escape from this weird webpage. When such a link is clicked, it redirects the page to a Google search for the keyword “antivirus”.

As for the not-moving part: when the mouse is still for more than 2 seconds, the opacity of the entire page keeps decreasing until the page finally becomes totally white.
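The disappearing-text part can be sketched like this (a simplified reconstruction, not the exact bookmarklet code; `vanishOne` is a hypothetical helper that removes one random character per trigger):

```javascript
// Remove one randomly chosen character from a string (hypothetical helper).
function vanishOne(text) {
  if (text.length === 0) return text;
  const i = Math.floor(Math.random() * text.length);
  return text.slice(0, i) + text.slice(i + 1);
}

// Browser wiring for the moving part (not runnable outside a page):
// document.addEventListener('mousemove', () => {
//   document.querySelectorAll('p, li, h1, h2, h3, h4, h5, h6, span')
//     .forEach((el) => { el.innerText = vanishOne(el.innerText); });
// });
```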

The code is here: hw2

The codepen is here: Codepen for HW2

NOC Week 9: Neural Network

For the exercise this week, I simply want to practice building a neural network, so I chose to follow the Doodle Classifier playlist.

I grabbed 4 datasets (mosquitoes, airplanes, birds, and dragons) from Google Quickdraw, because I wanted to see how the performance would be if my target classes share characteristics (they all have wings). Instead of using Processing to parse and reduce the data, I used Python to read the npy files and save them as txt files.

At first, I even wanted to write the neural network myself. Unfortunately, I got stuck on a problem while writing the testing function. With 4 outputs, the accuracy should be around 25% without training. However, every time I ran the testing function, the accuracy was always exactly 25% – and it stayed at the same 25% even after I added the training process, which is weird. (And strangely, as I write this blog, it works…)
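One common cause of a suspiciously constant 25%: if every output node produces the same score, an argmax using strict `>` always returns class 0, and a balanced test set then scores exactly 25% no matter what. A sketch of the testing logic (the `predict` method and sample shape are assumptions for illustration, not my actual code):

```javascript
// Index of the largest value; with all-equal scores this is always 0.
function argmax(arr) {
  let best = 0;
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > arr[best]) best = i;
  }
  return best;
}

// Fraction of samples whose predicted class matches the label.
function accuracy(model, samples) {
  let correct = 0;
  for (const s of samples) {
    if (argmax(model.predict(s.inputs)) === s.label) correct++;
  }
  return correct / samples.length;
}
```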

So I tried different numbers of hidden nodes to see the accuracy after 10 epochs.

It seems that adding nodes doesn't improve the accuracy. I wonder if the training data is insufficient, or if it is simply because these classes are too similar.

Here is the code: hw6