v0.1
Facebook's purchase of Oculus VR for 2 billion dollars is still quite shocking, particularly because the company is only two years old and its founder is only 21. For people who are more familiar with technology, there are two more reasons to be shocked. One, the legendary graphics programmer John Carmack joined the company. And two, Virtual Reality (VR), a once-hot technology, was widely considered a mistake and pretty much dead.
I still remember a talk given by Professor James A. Landay when I was an intern at Microsoft Research Asia in 2011. During the Q&A session, a student asked Dr. Landay about the prospects of VR. Dr. Landay said something like "many people believe VR is a mistake, and augmented reality is the right way". Indeed, in the first VR fad of the late 1990s and early 2000s, companies spent a lot of money on VR but the returns were frustrating [1]. The founder of Oculus VR, however, loved VR all along and collected most of the VR equipment available. This love gave him motivation, and this collection gave him inspiration. I guess most of us could regret the unique interests we have abandoned in order to cater to others, and the large amount of money we have lost as a result...
Let's wipe our tears and move on. Besides the lesson of perseverance, there are some other interesting things to find here. For me, a unique thing is never a mistake; every unique thing has a position and a purpose in the world. And this is why I am able to write this article and you are able to read it. We know that humans are mammals. But 65 million years ago, the lords of the earth, the dinosaurs, might not even have noticed mammals, which at that time were just tiny creatures eating bugs to survive. An alien visiting earth at that point could have called mammals "a mistake" compared with dinosaurs. However, we all know the story afterwards: the dinosaurs were wiped out in a disaster, and mammals came to rule the world. We could say that the mammals won this time, but the ultimate winner is Nature, who keeps the ecosystem diverse enough to withstand disasters.
The same principle applies to technology. VR may not have been the mainstream technology of the past 20 years, and it may not be the mainstream technology of the next 20 years either, as Oculus VR might fail. However, VR does have the value to exist, because it might play a critical role in the future. Every unique technology fits a technological niche and is prepared for its day.
References:
[1] http://time.com/39577/facebook-oculus-vr-inside-story/
[2] The picture. http://media.pcgamer.com/files/2013/04/Eve-Oculus-RIft.jpg
Saturday, April 19, 2014
Sunday, April 13, 2014
Resume an Old Project
v0.2
It is common for us to resume an old project (a research project, a programming project, etc.) after interruptions (e.g. a vacation) or because of multitasking. Multitasking in particular is necessary for graduate students and even more so for professors. The challenge is that we usually forget the details of the project, or even the motivation for doing it in the first place. Thus, we will have quite a lot of difficulty regaining what we once knew. How to reduce the difficulty of resuming an old project is what we will discuss here today.
First, I find it useful to have a warm-up period at the beginning. To recall the progress of an old research project, we could first warm up by reading a related paper. If you want to continue a suspended coding project, you can first warm up by adding some comments to the source code and refactoring it a little. In general, try to go slowly at the beginning and do not hurry. Maybe after the warm-up and a good sleep, your dormant memories of the project will start to revive and you will have more confidence to continue.
Another important thing is to take notes during a project. Write down as much detail as possible, as it might save a lot of time when we want to pick up where we left off. I find a simple daily note especially useful.
Finally, before suspending a project, we need to consider how we, or somebody else, can pick it up in the future. One tip is to make our work as automatic as possible, so the future person can run it quickly. For example, creating a script for the experiment or data analysis tasks will let a future person re-run what we've done in one command. It gives that person an instant feeling of accomplishment, which is much easier than having to study a lot of material before being able to run anything.
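As an illustration, here is a minimal sketch of such a one-command script (in Python; the file names, column names, and the summary computed are all hypothetical placeholders for whatever the real project does):

```python
#!/usr/bin/env python3
"""run_all.py - re-run the whole analysis with a single command.

Everything below (paths, columns, the summary computed) is a made-up
placeholder; the point is only that one top-level script reproduces
the full pipeline.
"""
import csv
from pathlib import Path

RAW = Path("data/measurements.csv")   # hypothetical raw data file
OUT = Path("results/summary.txt")     # hypothetical output file


def load_rows(path: Path) -> list:
    """Read the raw CSV file into a list of dictionaries."""
    with path.open(newline="") as f:
        return list(csv.DictReader(f))


def summarize(rows: list) -> dict:
    """Average the 'time' column for each 'condition' (illustrative only)."""
    totals, counts = {}, {}
    for row in rows:
        key = row["condition"]
        totals[key] = totals.get(key, 0.0) + float(row["time"])
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}


def main() -> None:
    summary = summarize(load_rows(RAW))
    OUT.parent.mkdir(parents=True, exist_ok=True)
    with OUT.open("w") as f:
        for condition, avg in sorted(summary.items()):
            f.write(f"{condition}: {avg:.3f}\n")
    print(f"Wrote {OUT}")


if __name__ == "__main__":
    main()
```

Whoever resumes the project later only needs to type "python run_all.py" to see the whole pipeline work before digging into any individual step.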
Monday, April 7, 2014
Writing Like Compiling
v0.1
There have been suggestions to write programs the way we write articles [2]. But I am thinking in the opposite direction: could we write articles the way we write programs? There is an interesting article that makes such an analogy for novels [3]. As a grinding PhD student, though, I am more interested in applying it to academic paper writing. In this article, I would like to discuss the connection between writing an academic paper and compiling a program.
When a programmer has written some source code, it needs to be compiled into a machine-understandable form by a program called a compiler. This is similar to writing a paper, in which you try to translate the thoughts in your mind into a form that is understandable to others. When compiling a program, there are typically many passes that process the source code and transform it step by step toward the final form. Each pass usually focuses on a specific task. Such an architecture simplifies the design of the compiler and enables future extensions. When writing a paper, we could do the same thing: first write an awful version, then improve it through multiple passes, each focused on one goal. Below is a simple example, followed by a toy code sketch of such a pass pipeline:
- Make sure that the paper does not miss any important information.
- Make sure that the story line of the whole paper makes sense.
- Make sure that the core concepts are correctly defined.
- Make sure that terminologies are consistent and sentences are correct.
- Improve line by line and make the paper readable (the draft might still contain a lot of redundant information).
- Revise the paper by removing redundant information.
- ...
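To push the compiler analogy a bit further, here is a toy sketch of how such a pass pipeline could look in code (Python; the two example passes are made up, and the real "passes" over a paper are of course manual rather than automatic):

```python
# A toy "compiler" for drafts: each pass is a function from text to text,
# and the driver applies the passes in order, one editing goal at a time.

def fix_terminology(text: str) -> str:
    # Hypothetical pass: enforce a single spelling for a recurring term.
    return text.replace("data set", "dataset")

def collapse_spaces(text: str) -> str:
    # Hypothetical pass: a tiny line-level cleanup.
    return " ".join(text.split())

PASSES = [
    fix_terminology,
    collapse_spaces,
    # ... one function per editing goal, in the order listed above
]

def compile_draft(draft: str) -> str:
    """Run every pass over the draft, one focused goal at a time."""
    for p in PASSES:
        draft = p(draft)
    return draft

if __name__ == "__main__":
    raw = "The data set is  described in Section 3."
    print(compile_draft(raw))  # -> "The dataset is described in Section 3."
```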
This method definitely cannot guarantee a good paper. After all, the quality of a paper is determined by the quality of the research. However, it can at least reduce the writer's anxiety. When looking at a ghastly draft, they won't feel panicked and overwhelmed. They can simply start from pass 1 :)
References:
[1] The picture. http://uploads3.wikipaintings.org/images/m-c-escher/drawing-hands.jpg
[2] Literate programming. http://en.wikipedia.org/wiki/Literate_programming
[3] Good Coding Style in Jin Yong's Novels (in Chinese). http://blog.sina.cn/dpool/blog/s/blog_6a55d6840101ek3y.html
Tuesday, April 1, 2014
Research Idea Forensics
v0.1
When reading a paper, I always wonder how the authors came up with the idea. Knowing this could help us better understand the essence of the paper. We could also learn how to find good ideas by studying our predecessors' paths. Finding a good idea is much harder than reading and understanding one. The history of science and technology shows that only a few creative minds were able to propose great ideas.
Since most papers do not describe how the authors got the idea, readers have to figure it out by themselves. We can call this activity research idea forensics. It is pretty much like a detective deducing a criminal's motivation and crime. In this article, I will just share some thoughts on idea forensics. For a more general and comprehensive discussion, [2] might be helpful.
How do we do research idea forensics? A few authors mention the story in some section of the paper, such as the introduction or related work. Or the reader can find a clue in the citations of the paper, since the authors might have been directly inspired by some existing work. The publication record of the authors defines their specialty and way of thinking, which are usually important factors in generating ideas. The inspiration might also come from industry, because a new technology can turn impractical ideas into practical ones.
Actually, these heuristics do not sound difficult to understand and use, so researchers in information retrieval, data mining, etc. could even try to create tools that automatically infer the idea-generation process.
However, I do find some works that might have a very interesting and unique idea path. Recently, I re-read the following classic paper:
A Sense of Self for Unix Processes, by Stephanie Forrest et al., 1996
It proposes a way of differentiating the intended execution of a process from maliciously injected execution (e.g. shellcode executed through a stack overflow attack) at runtime, so that intrusions into a system can be detected. The idea is to use short system call sequences to build a model of self (i.e. the intended execution of a process), and then apply the model to detect abnormal system call sequences. The closest prior work uses system calls as building blocks for a policy language that allows users to specify what is correct and incorrect. While that is an interesting approach to intrusion detection, humans might not be able to write a comprehensive policy. The work by Forrest et al., on the other hand, is automated. To date, this paper has received 1,863 citations on Google Scholar. It particularly inspired later intrusion detection work and, more recently, behavior-based software analysis work [3, 4].
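To make the core idea concrete, here is a minimal sketch of such a "model of self" (in Python; the window length of 3 and the toy system call traces are my own illustrative choices, not the authors' data or code):

```python
# Build a "model of self" as the set of all length-k system call windows
# seen in normal traces, then flag windows not in that set as anomalous.

from typing import Iterable, List, Set, Tuple

K = 3  # window length; the paper works with short windows like this


def windows(trace: List[str], k: int = K) -> Iterable[Tuple[str, ...]]:
    """Slide a window of length k over a system call trace."""
    for i in range(len(trace) - k + 1):
        yield tuple(trace[i:i + k])


def build_self_model(normal_traces: List[List[str]]) -> Set[Tuple[str, ...]]:
    """Collect every short sequence observed during normal runs."""
    model: Set[Tuple[str, ...]] = set()
    for trace in normal_traces:
        model.update(windows(trace))
    return model


def anomaly_rate(model: Set[Tuple[str, ...]], trace: List[str]) -> float:
    """Fraction of windows in a new trace never seen in the normal runs."""
    seen = list(windows(trace))
    if not seen:
        return 0.0
    unseen = sum(1 for w in seen if w not in model)
    return unseen / len(seen)


if __name__ == "__main__":
    # Hypothetical traces: the system call names are placeholders.
    normal = [["open", "read", "mmap", "read", "close"],
              ["open", "read", "read", "close"]]
    model = build_self_model(normal)

    benign = ["open", "read", "mmap", "read", "close"]
    injected = ["open", "read", "execve", "socket", "connect"]

    print("benign  :", anomaly_rate(model, benign))    # 0.0
    print("injected:", anomaly_rate(model, injected))  # > 0, flags the attack
```

A real detector would then threshold this rate of unseen windows (or count mismatches over a region of the trace) to decide when to raise an alarm.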
I am quite curious about how the authors discovered this idea. My current hypothesis is that the idea is a product of interdisciplinary research. The last sentence of the abstract is:
"This work is part of a research program aimed at building computer security systems that incorporate the mechanisms and algorithms used by natural immune systems."
It seems that this motivation pushed the authors to think about how to create an immune system for a computer system. There are many concepts in the immune system that might be useful in computer security, and the authors seem to focus on phagocyte cells, which "eat" foreign particles in the body. Phagocyte cells use chemical cues to identify foreign material, so to implement them in a computer system, you need to find the corresponding "chemical cues". These cues have to be simple (i.e. they do not require too much time to identify) and effective (e.g. they will not let the bad guys escape and will not kill the good ones). A system call trace is a very good candidate, because it captures the critical behaviors of a process and is much smaller than a raw instruction trace.
The lesson is that thinking from a new angle can make a difference :)
References:
[1] The picture. http://www.teamyeater.com/2011/09/phases-of-computer-forensics/computer-forensics-2/
[2] Where Good Ideas Come From, Steven Johnson
[3] Zhao, Bin, and Peng Liu. "Behavior Decomposition: Aspect-Level Browser Extension Clustering and Its Security Implications." Research in Attacks, Intrusions, and Defenses. Springer Berlin Heidelberg, 2013. 244-264.
[4] Wang, Xinran, et al. "Behavior based software theft detection." Proceedings of the 16th ACM conference on Computer and communications security. ACM, 2009.
[5] Data mining approaches for intrusion detection. Defense Technical Information Center, 2000.