Friday, May 22, 2009

A slightly better operating system philosophy than UNIX

Please note that this is a conceptual joke.

In the spirit of ideas like this, I'd like to outline an entirely new operating system philosophy.

It's based on the following concepts:

  • Everything is an email. All user data and peripherals are accessible only through POP3 and SMTP daemons built into the operating system, sorted into different accounts depending on their tasks.

  • No program has any particular task it does very well, but all of them can send or receive emails. The latter is in fact a prerequisite to be called a 'program'.

  • When you start a program, it gets an account in the mail system. This is the only means of IPC.

  • The GUI is also managed with emails. It is server based like X11, except it's wrapped in email protocols. A program wanting to change its title would, for example, send an email to the GUI server asking it to do so (see the sketch after this list).

  • The default command line shell is a mail client of the user's choice. The default is crude, reminiscent of a raw telnet session to an SMTP server, but other options such as mutt, pine or emacs (reduced to its mail-manipulation capabilities only) are also available.
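
For instance, asking the GUI server to retitle a window might look something like the sketch below. This is purely hypothetical and uses the same fictional mailos.h API as the Hello World example further down; mail_send() is made up here as a way of addressing a mail to another account.

#include <mailos.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    /* Register ourselves as a mail account, as every program must. */
    mail_client_initialize(argv[0]);

    /* Compose a polite request to the GUI server. */
    mail* m = mail_create("set-title");
    mprintf(m, "Please change my window title to \"Inbox (3 unread)\".\n");
    mail_send(m, "gui@system");   /* hypothetical addressed send */
    mail_destroy(m);

    return EXIT_SUCCESS;
}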



Example: How to write a Hello World program in C, compile it, and run it.


MAIL: Compose
TO: code@filesystem
TITLE: hello-world.c

#include <mailos.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    /* To be a program, it must by definition be able to deal with emails */
    mail_client_initialize(argv[0]);

    mail* m = mail_create("output");
    mprintf(m, "Hello World!\n");
    mail_inbox_add(m);
    mail_destroy(m);

    return EXIT_SUCCESS;
}
--END

MAIL: Compose
TO: process-starter@system
TITLE: compiler
--END

MAIL: LOGIN
SERVER: filesystem
USER: code
LOGGED IN

CODE: FORWARD
WHAT: hello-world.c
WHERE: compiler@processes
CODE: LOGOUT

MAIL: Compose
TO: process-starter@system
TITLE: a.out
--END
Hello World!

MAIL:


The possibilities are endless: Spam mail can be used to generate entropy for the random number generator, and system backups could rely on Gmail. Clusters could be managed with mailing lists.

Tuesday, May 19, 2009

Follow-up: Webcam touch screen

As a follow-up on the Making A Touch Sensitive Sidepanel post from a while back, I can gladly report that it is actually feasible to make the entire screen touch sensitive. In a feat of hobo engineering[1], I've crafted flaps out of cardboard to mount the webcams 20 centimeters (8-ish inches) away from the screen, thereby making it possible to cover the entire surface of the screen within the field of view. When I have the time, I'll make a proper mount for the webcams out of something sturdier than cardboard.

I actually have come up with the designs for a more elaborate version, but an actual assembly will have to wait until I have more time and resources (hopefully in a month or so). It requires a webcam, a light source (LEDs? perhaps infrared if your webcam can see those), a pane of glass (without scratches), and a box. The idea is to mount the glass at a 45 degree angle inside the box, and then the screen, upside down, at the top of the box. The box is painted black on the inside, and a row of LEDs is mounted on the top of the viewing side. When you look into the box, the glass will reflect the screen (which neatly enough will appear to hover in the air inside) so that it's viewable from the viewing hole. The LEDs on top of the hole will illuminate anything close to the surface (e.g. your finger touching the surface), making it possible to track such objects with a webcam on the far end of the box. It's also probably a good idea to add a fan somewhere. Here's a diagram:



Note that this design means that the aspect ratio of the viewing area will change by a factor of 1/sin(45°) = √2, so what was a 4:3 screen will be a 36:19 screen. ... yeah.
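
For the record, the arithmetic behind those numbers (taking the foreshortening factor above at face value, the reflected dimension shrinks by sin(45°) = 1/√2):

\[
4:3 \;\longrightarrow\; 4 : 3\sin 45^\circ \;=\; 4\sqrt{2} : 3 \;\approx\; 5.66 : 3 \;\approx\; 36 : 19
\]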

[1] It's even beyond redneck engineering. Even their mechanical shenanigans don't reach this level. Everything is made out of expectations, cardboard, and surgical tape. Nothing stays where it should for more than a few minutes. All that's missing is a burning barrel, a shopping cart, and some gloves with no fingertips, and the experience would be complete.

Friday, May 15, 2009

Browser javascript performance vs. Adoption

There was an article on Slashdot today about how browser javascript speed seemed to be inversely correlated to the adoption of the browser. The core of the article was this press release from Futuremark.

Sadly lacking was a hands-on analysis of the numbers. The sample size is pretty small (5 browsers), but some analysis is still called for, so here goes:

First a plot of the data set[1].



At a glance, it doesn't look terribly correlated, but there is a hidden outlier here. If you remove the third sample, Opera, you get a plot that looks more like a smooth function.



Indeed, if you take the logarithm of the adoption values in this dataset, you find an almost straight line (!)



If you look at the logarithm of the full dataset, it's pretty evident that something is wonky about the Opera point.



For the reduced data set, a linear regression of log(Adoption) against Performance, fitting log(Adoption) = a + b*Performance, yields

a = 0.619
b = -0.00447

A statistical analysis indicates that the probability of such a strong correlation arising by random chance is 0.062%. The standard deviation is 0.079, and the coefficient of determination R² is 0.999. For the full data set, R² is 0.89.
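
For anyone who wants to redo the fit without a TI-84+, here's a minimal C sketch of the same kind of least-squares fit of log(Adoption) against Performance. It reads whitespace-separated performance/adoption pairs from stdin, so the benchmark numbers (which, again, are in the press release) aren't baked in.

/* Fits log(adoption) = a + b * performance by least squares and reports
 * a, b and R^2. Compile with: cc fit.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    double perf, adopt;
    double n = 0, sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;

    while (scanf("%lf %lf", &perf, &adopt) == 2) {
        double x = perf, y = log(adopt);   /* natural log, matching the e^... model */
        n++; sx += x; sy += y;
        sxx += x * x; sxy += x * y; syy += y * y;
    }
    if (n < 2) { fprintf(stderr, "need at least two data points\n"); return 1; }

    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;

    /* R^2 is the squared correlation between performance and log(adoption) */
    double r = (n * sxy - sx * sy) /
               sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    printf("a = %g, b = %g, R^2 = %g\n", a, b, r * r);
    return 0;
}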

So, the adoption as a function of performance in the benchmark would, according to this hypothesis, be

Adoption(Performance) = e^(0.619 - 0.00447*Performance)


Comparing this with the data sets, it appears very likely that the hypothesis is true.



Of course, the data set is very small, so it's plausible that this is just a random coincidence, the same way one may get a descending sequence of numbers when throwing a die 4 times without some physical rule dictating that dice only generate descending sequences of numbers.

Some final thoughts: Even though correlation is not causation, my guess is that more popular browsers, being older, need more maintenance. Their large user base will also lead to more time being spent patching security flaws. On the other hand, the smaller upstarts need a competitive edge to stand a chance against the bigwigs, motivating the developers to create faster code.

As for Opera, you can either interpret it as having unusually small adoption for its speed, or as being unusually slow for its tiny adoption. The former makes more sense than the latter. A factor that should be taken into consideration is that almost all the other browsers either have massive campaigns surrounding them, or ship as the default with some operating system. So maybe Opera is simply under-advertised?



The analysis was performed with the statistical functions of a TI-84+, and the plots were generated with gnuplot.

[1] You can find the actual numbers in the press release, I don't want to steal their thunder.

Saturday, May 2, 2009

Making a touch sensitive sidepanel

I love coming up with new ways of controlling my computer. But I'm also a cheap bastard. So, combining these two characteristics, I bring to you probably one of the cheapest ways of capturing finger presses on a non-touch display. It won't let you track touches over the entire display or track multiple fingers, but it will let you create a touch sensitive sidepanel. What keeps you from covering the entire screen is the narrow field of view of most webcams, though you could hypothetically get around this problem with some form of lensing.

It's also worth noting that it's a better idea to experiment with this on CRT displays instead of flat displays, since CRTs are easier to clean and harder to damage with your fingers.

All you need is two webcams with a decent frame rate (most anything in the 10-20 buck range will do the trick, as long as they can do 30 fps or so), some black cardboard, and some adhesive tape.


Figure 1: The set-up. A. Vertical camera, B. Horizontal camera, C. Black screen, D. Section of screen where fingers can be tracked.


The black cardboard opposite the cameras makes finger identification much simpler. Instead of comparing against some complicated idle state, you just have to check which portions of the view aren't black.



Figure 2: What the cameras see. A. The area to be ignored. B. The area to be scanned for fingers (with black cardboard in the background). C. A finger. D. Portion of the screen at a steep angle (to be ignored as well)


Once you've got your hardware set up, the actual interaction with the hardware can be a bit of a headache, but I got it working by reading the Video4Linux documentation. You can probably settle for a cargo cult implementation of the actual driver interaction. There are also some limitations in your computer that you may run into. The USB bus only has so much bandwidth, and if you're unlucky, that might not be enough for two video feeds. I had to hack the kernel module to get around that problem with my drivers.
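
As a rough starting point (this is a sketch, not what touchcam.c actually does), a bare-bones V4L2 capture can look like the code below. It assumes the camera shows up as /dev/video0 and that the driver supports plain read() I/O; many webcam drivers only do mmap streaming, in which case you need the full streaming setup instead.

#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) { perror("VIDIOC_QUERYCAP"); return 1; }

    /* Ask for a small YUYV frame; the driver may adjust these values. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof fmt);
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 320;
    fmt.fmt.pix.height = 240;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    /* Grab one frame with read(); only works if the driver supports it. */
    unsigned char *frame = malloc(fmt.fmt.pix.sizeimage);
    ssize_t got = read(fd, frame, fmt.fmt.pix.sizeimage);
    printf("read %zd bytes of a %ux%u frame\n",
           got, fmt.fmt.pix.width, fmt.fmt.pix.height);

    free(frame);
    close(fd);
    return 0;
}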

The basic finger identification algorithm is pretty simple. You take a brightness-weighted average of the positions of all the pixels that are bright. Simply scan region B in Figure 2 for pixels with a color intensity stronger than some threshold value, and for each such pixel, increment one counter by the brightness, and another counter by the position of the pixel (along the axis being tracked) multiplied by its brightness. When you've iterated over all points, divide the position counter by the brightness counter.


int x, y;
float sum = 0, pos = 0;   /* total brightness and brightness-weighted position */

for (x = 0; x < x_max; x++) {
    for (y = y_min; y < y_max; y++) {
        if (ispixel(x, y)) {              /* pixel brighter than the threshold? */
            sum += intensity(x, y);
            pos += x * intensity(x, y);   /* weight the position by brightness  */
        }
    }
}
pos /= sum;   /* centre of mass of the bright pixels */


Besides the actual implementation, you'll need to do some form of calibration. It's mostly trial and error: first figure out how far off the zero position is, then estimate how far off the scale is, and compensate for both.
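
As a sketch of what that compensation can look like (the function name and the constants here are made up for illustration, not taken from touchcam.c):

/* Map a raw camera coordinate to a screen coordinate. The offset fixes
 * the zero position, the scale fixes the range; both are found by
 * trial and error. */
float calibrate(float raw)
{
    const float offset = 12.0f;  /* made-up zero correction */
    const float scale  = 3.2f;   /* made-up scale factor    */
    return (raw - offset) * scale;
}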

You can cut down on some of the noise by giving the values some "inertia", which is essentially the same as applying a low-pass filter to your data. Use the following algorithm:

X = X*(1-alpha) + X_new*alpha;
Y = Y*(1-alpha) + Y_new*alpha;

where alpha is a number in the 0 ... 1 range (I chose 0.25); with this, the position will stabilize a lot.

We can also use the fact that the position is relatively stable to identify a "press", i.e. when the difference in position between iterations is lower than some value, we decide that the user has pressed his/her finger. We may also want to wait a couple of frames before triggering a new press, since it's obviously not desirable to have the same action performed 30 times just because you didn't have the reflexes of a ninja and accidentally pressed down for longer than a single frame (about 33 ms).
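
A rough sketch of that logic is below; the movement threshold and the cooldown length are made-up example values, not the ones in touchcam.c.

/* Call once per frame with the current (filtered) finger position.
 * Returns 1 on the frame where a press is registered, 0 otherwise. */
#define MOVE_THRESHOLD  2.0f   /* max movement (pixels) that counts as "still" */
#define COOLDOWN_FRAMES 15     /* roughly half a second at 30 fps              */

static float last_x, last_y;
static int cooldown;

int detect_press(float x, float y)
{
    float dx = x - last_x, dy = y - last_y;
    int pressed = 0;

    if (cooldown > 0)
        cooldown--;                      /* still ignoring repeats */
    else if (dx * dx + dy * dy < MOVE_THRESHOLD * MOVE_THRESHOLD) {
        pressed = 1;                     /* finger held still: that's a press */
        cooldown = COOLDOWN_FRAMES;      /* don't fire again for a while      */
    }

    last_x = x;
    last_y = y;
    return pressed;
}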


Putting it all together, I made a touch sensitive side-bar for my secondary screen, borrowing some icons from the crystal project.



The code is a working prototype, and a lot of stuff is hard coded, but it should be possible to salvage a lot of the tricky parts for use in whatever project you're working on.

touchcam.c
Makefile

It should work on most x86 Linux systems with SDL present, but it requires a bunch of icons to be put in a resources/ subdirectory to actually run. I got mine from 64x64/apps and 32x32/actions in the crystal project icon tarball.

I'm actually surprised at how useful it is. It may not be 100% accurate, but it's still a really nice way of controlling your mp3 player or starting new programs.