Thursday, 29 September 2011

toughen up

Toughness in the workplace consists of in-fighting and bullying.
Companies encourage this behavior
  1. by not discouraging it
  2. by providing productivity incentives
  3. as a way of controlling their employees
  4. as a way of managing their customers' expectations
Some people pride themselves on being tough.

Toughness isn't only about people, by the way - being able to tolerate difficult situations is a skill, whether it's a property of your location, situation or the rules of the arena you're in.

Some crossword puzzles are tough.

It seems that, all things being equal, those who can dish it out as well as they can take it will succeed.

This takes a strange turn when this fashion becomes popular.

These fashion followers not only expect to be on both ends of this abuse cycle, they promote this fashion by their success, which they attribute to being tough.

Here's the rub. Any society, for example the society you find yourself in right now, depends on money moving around.

If that society finds itself populated predominantly by "tough" people, then they want to pay less and get more.

Less money moves around, meaning there's less money to make, and life becomes more difficult for everyone.

A tough society is a poor society.

Look at countries without even a working democracy - third world countries.

People can live without money, being able to possess only as much as they can carry or watch over.

Others form gangs and look for ways to exploit and dominate others.

How tough is that?

A society is a show. A performance we put on for each other.

A tough society is a poor show.

The fashion of toughness is just comical.

We depend indirectly on each other as customers for our businesses, just as others in turn depend indirectly on us.

All a society needs to be really successful is lots of people.

The fashion of toughness has a lot in common with a mullet.

The next time someone tells you to "toughen up", point them here!

Monday, 26 September 2011

Do you own an Android phone?

I recently got bitten by the Android bug and bought a Vodafone Smart.
I downloaded some apps just 'cause I could and life felt a bit better.

As a pay-as-you-go customer I don't get itemized bills.
Also, I don't use the phone itself much - I work online and was thinking of developing some Android applications.

So every six months I have to top up even if I don't need to, as Vodafone likes fresh money in my account.

Straight away I gift the top-up to someone else, leaving my ample existing credit as it is, minus 20 cents for the gift.

I recently had that old familiar notice about having to top up and dutifully did so - 5 Euros - with the intention of gifting it straight away.

That was last Thursday.

Later I was horrified to discover that the total amount of credit in my phone account was 5 Euros and 5 cents!

I made a note to contact Vodafone over this on Monday.

Now it's Monday and before calling I checked my credit again.

My credit is 5 cents!

I phoned Vodafone about this and, after at least 5 minutes of waiting, I was told:
  1. Vodafone charges me 8 cents per connection to the internet.
  2. Vodafone can't tell me which app is connecting to the internet because of the data protection act.
  3. When I install an app I am told which rights (such as connecting to the internet) I want to give an app.
  4. There's no way for me to discover which app is connecting to the internet.
  5. I can bring my phone along to a Vodafone shop where they can tell me which app might be connecting to the internet.
I tried to explain that giving an app the right to connect to the internet is not the same as saying that I'm happy for it to connect every 5 minutes (or however often it is), and that this constituted a misuse of terms - a kind of fraud.

I also pointed out that there was apparently a shell game in operation between Vodafone, the phone manufacturer and the Android Market, where none of them is responsible but Vodafone profits in the event of misuse - that they're benefiting from the fraud.

The customer services representative told me that I can't check on the phone which app is connecting to the internet.

He also disabled internet access via the phone network at their end, at my request.

Update: I went to the Vodafone shop to find out if they could tell me any more about this.

The Vodafone representative told me that
  1. 5 Euros over 5 days isn't unusual if you're using the internet
  2. Internet access is charged at 80 cents a day
  3. did I have an email account set up on the phone?
  4. An email account checking for new mail would cost that much
  5. You can turn off internet access when not on WiFi from (some obscure menu sequence).
  6. They couldn't tell me which application was accessing the internet

In other words,
  1. I chose to give the email program access to the internet whether I wanted it to cost me money or not
  2. I chose to allow it to access the internet via the phone network because I didn't disable an option I didn't know about
Some people don't trust banks because of unexpected charges.

Let's compare this situation to a direct debit.
  1. I give my bank details to a company because I want a service in return
  2. everything is made clear in advance
  3. both parties have the same expectations as to how much money gets transferred out of my account and in how many payments.
Vodafone missed out on the last two by
  1. Making it my responsibility without me knowing all the relevant information, like how much access to the internet costs per day and per connection.
  2. Not putting a big red warning on their shop window or on the phone box that money will basically fly out of my phone account if I use email (I'm still not certain it was email though)
  3. Not alerting me to unusual account activity
  4. Doing this again and again for all the other customers who fell into this trap, and making it their fault too
It cost me 35 Euros to find out this information - I'm certain that wasn't in the contract.

I hope those reading this won't fall into the same trap as I did, thereby making my 35 Euros a bargain in terms of the revenue Vodafone loses through misconduct, breach of contract, breach of trust, deception, inveiglement, confusion and fine print - or was getting ripped off in the fine print too?

I can hear them now - "weren't you happy to spend 35 Euros on a test account you had set up just because you could?"
It sounds a lot like roaming charges you can incur from the comfort of your own home.
    I think I'll report this to the Police and see if they have a policy in place for this situation.

    Sunday, 25 September 2011

    DIY glass free 3D! (part 2)

    You'll need to have Asymptote, ImageMagick and a C compiler to do the following.

    For you poor Windows users out there, look up Cygwin, which is sort of Linux on Windows, or look up Wubi to install Ubuntu alongside Windows.

    Trying to observe the result of a parallax barrier with a 3D picture/video is hit and miss without a hard reference - alternating vertical black and white lines, one pixel wide.

    When you've got a barrier with the right separation, you should see white with one eye and black with the other eye.

    You'll notice that the barrier needs to be positioned very carefully - we're trying to align it to the edge of a pixel!

    You'll need a "C" compiler for this one.

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        unsigned X, Y;
        if(argc != 3) {
            fprintf(stderr, "Usage: vertical-lines X Y\n");
            fprintf(stderr, "For example vertical-lines 1440 900\n");
            return 1;
        }
        if(sscanf(argv[1], "%u", &X) != 1) {
            fprintf(stderr, "Parameter X parse failed.\n");
            return 1;
        }
        if(sscanf(argv[2], "%u", &Y) != 1) {
            fprintf(stderr, "Parameter Y parse failed.\n");
            return 1;
        }
        /* Plain PBM header: "P1", then width and height. */
        printf("P1\n%u %u\n", X, Y);
        unsigned x, y;
        for(y = 0; y < Y; ++y) {
            for(x = 0; x < X; ++x)
                printf(x & 1 ? " 1" : " 0");
            printf("\n");
        }
        return 0;
    }

    Once you paste the above code into a text file called "vertical-lines.c",
    run the following commands
    cc -o vertical-lines vertical-lines.c
    ./vertical-lines 1440 900 > vertical-lines.pbm
    convert vertical-lines.pbm vertical-lines.png
    Specify the resolution of your laptop screen above. Mine is 1440 x 900.

    The "convert" program above is part of ImageMagick.

    Open up vertical-lines.png with an image viewer capable of showing the image full screen.

    Here's the source code for the swatch I made with Asymptote.

    Once you get this on your computer, you can create PDF documents with it.

    If you know your metric paper sizes you'll recognize the 210x297 in the script - A4.
    // I'm on a learning curve with Asymptote - my incomplete mental model results
    // in a few different ways to add padding to the print area.
    // Do let me know if you can clean it up!

    import plain;

    // Define the bounds of the page.

    // Landscape
    //int X = 297, Y = 210;

    // Portrait
    int X = 210, Y = 297;

    // Define our page.
    fill((0,0)--(X,0)--(X,Y)--(0,Y)--cycle, white);

    // Add a 1cm white border, leaving our print area.
    int X0 = 10, Y0 = 10, X1 = X - 10, Y1 = Y - 10;

    real lenX = X1 - X0;
    real lenY = Y1 - Y0;

    // See text for the formula.
    real GridWidth = 0.25793958;
    real GridDelta = 0.001;

    // The range of our calibration grill.
    real W0 = GridWidth - 10 * GridDelta;
    real W1 = GridWidth + 10 * GridDelta;

    // We want the black lines to be slightly thicker, so we pad each side.
    real pad = 0.005; // 5 microns!

    real result = 95.5; // The most promising result I got.

    if(true) { // for calibration.
    //if(false) { // for the lenticular barrier.
        // The calibration step.

        // I'll put the calculated value in the middle of the page.
        // Sizes get progressively larger above and progressively smaller below.
        // Pick a value where the horizontal line is all the same color.

        int n, c = 0;
        real x0, y0, x1, y1;

        // Draw a CONTINUOUS grid.
        y0 = Y0;
        y1 = Y1;
        x0 = X0;
        x1 = X0;
        while((x0 + W0) <= X1) {
            fill((x0, y0)--(x0 + W0, y0)--(x1 + W1, y1)--(x1, y1)--cycle, black);
            // It's 2 * c as we're effectively drawing black, then white, then black...
            ++c;
            // Avoid accumulating errors.
            x0 = X0 + 2 * c * W0;
            x1 = X0 + 2 * c * W1;
        }
        clip((X0, Y0)--(X1, Y0)--(X1, Y1)--(X0, Y1)--cycle);

        // White out the 1cm border.
        fill((0,0)--(10,0)--(10,Y)--(0,Y)--cycle, white);
        fill((0,0)--(X,0)--(X,10)--(0,10)--cycle, white);
        fill((X-10,0)--(X,0)--(X,Y)--(X-10,Y)--cycle, white);
        fill((0,Y-10)--(X,Y-10)--(X,Y)--(0,Y)--cycle, white);

        // Draw border around the print area.
        draw((X0,Y0)--(X1,Y0)--(X1,Y1)--(X0,Y1)--cycle, black);

        // Draw horizontal white lines at 1mm intervals to line up the transparency
        // and to locate the precise match.
        for(n = Y0 + 1; n < Y1; ++n) {
            draw((X0 + 0.4, n)--(X1 - 0.4, n), white);
            if((n % 10) == 0) {
                label(format("%d", n - Y0), (X0, n), align=E, Fill(white));
                label(format("%d", n - Y0), (X1, n), align=W, Fill(white));
            }
        }
    } else {
        // The lenticular barrier step.

        int c = 0;
        real t = result / lenY; // 0 <= t <= 1
        real W = W0 * (1.0 - t) + W1 * t;
        real xstart = X0 + W; // start after one white stripe
        real x = xstart;
        while((x + W) <= X1) {
            fill((x - pad, Y0)--(x + W + pad, Y0)--(x + W + pad, Y1)--(x - pad, Y1)--cycle, black);
            // It's 2 * c as we're effectively drawing black, then white, then black...
            ++c;
            x = xstart + 2 * c * W; // Avoid accumulating errors.
        }
    }
    Save the indented text above to a text file named "barrier.asy" then in the same directory run
    asy -f pdf -o calibrate.pdf barrier.asy
    and you'll get calibrate.pdf in the same directory in a second or two.

    This is the calibration file - use it with the grill to find the line which is one color all the way across. Each eye will see a different color, so use just one eye to find it.

    The numbers on the left are centered on centimeter boundaries and the white lines are on millimeter boundaries.

    Once you have the result, plug it into the script and swap comments for the lines starting with if(true) and if(false) to enable the final print.
    asy -f pdf -o barrier.pdf barrier.asy
    The PDF format is good to 1/100 of a millimeter - plenty for this purpose.

    I used VMware Player to install Windows XP Professional into an image on my Debian laptop (the license came from an old laptop whose hard disk died).

    This way, I could install all the software Canon supplies - Windows only, unfortunately.

    One of the little gems included is a manual print head alignment utility.

    After printing out three pages and filling in the on-screen forms based on the printouts, I got much better results.

    Once again, the swatch entry that appeared to work best was slightly smaller than the calculated value, leading me to conclude that the borderless printing option enlarges the print slightly, whether I want it to or not.

    My other disappointment was when I printed the same swatch twice.

    It seems the printer can't produce identical results, so you end up choosing a different swatch entry each time - a moving target. The first print of the day seemed to be the best one.

    Maybe I should get a printer with a "transparency" paper type.

    The inkjet transparencies I used are only 100 microns thick.

    I've achieved better results by sticking ordinary printer paper onto the back of the transparency with double-sided tape.

    You can remove the paper + tape from the transparency after the print.

    You'll need two strips along the length of the page, as close to the edge as you dare.

    If you didn't align the paper exactly with the transparency, trim the surplus paper off with scissors.

    The "paper type" printer setting that worked best for me with this setup was "matt photo paper".

    This is far from an exact science as it appears that printer pixels aren't exactly square at the precision we're dealing with here, so you'll have to do a different calibration print for portrait and landscape.

    Let me know how you get on!

    Monday, 19 September 2011

    New dental implant technology

    The day before my brother's wedding celebration, the junction between the crown and the tooth - my upper front right tooth - broke.

    I braved my way through the proceedings and started looking into my options.
    1. dentures
      I'm not crazy about putting a piece of plastic in my mouth, maybe it's just me
    2. implants
      from the research I've done, it seems that between 1 in 50 and 1 in 20 of these fail at some point.
      I don't like those odds.
    So I started looking into alternatives, and there are some:
    1. grow new teeth
      This should be available at the end of 2012 in the US at least.
    2. come up with some other way

    some other way

    The existing implant process ignores the tooth socket, drills a hole in the jaw/skull and screws a bolt into it, on top of which goes the implant crown.

    The reason that between 1 in 50 and 1 in 20 of these implants fail is that sometimes osseointegration doesn't happen - the bond fails and the bolt becomes loose.

    My idea was to reuse the socket and construct an implant from multiple interconnecting parts, assembled in the socket and secured by a pin, instead of a bolt.

    This lets the socket take the load as it did before, and any movement of the implant would be just like that of the old tooth.

    Yes, our teeth do move around slightly when we chew.

    The pin serves as no more than a tether. I'm not certain it would even be needed for molars as their shape inside the bone, in combination with atmospheric pressure, prevents the tooth from falling out on its own.

    I figured that the preservation of the tooth socket was vital, so the existing tooth root would have to be scanned to capture its 3D shape before being broken up in the socket with ultrasound and removed in pieces.

    Is what I'm proposing medically viable? The best answer I got was from Carsten Engel on LinkedIn, which was that he couldn't find anyone else who had tried this approach.

    I still don't know exactly what holds teeth in their sockets, but suspect it's a combination of gum tissue, the shape of the socket preventing the tooth working itself free, and the air seal.

    The air seal of the gum around the tooth acts like a cork in a bottle - you need to act against atmospheric pressure to extract it.

    I've also heard that our teeth have a coating that assists in the seal-forming process, but as a layman in this field, I need more to go on.

    If it is viable, then the pieces fall quickly into place:
    1. MRI scanning - how expensive is it? Are there alternatives that can achieve <50 microns of resolution?
    2. Software - process the scan to determine the dimensions of the tooth/teeth
    3. Software - fragment each tooth model so it can be assembled inside the socket. I don't know exactly what holds incisors in place - it's not the socket shape, which is a sort of cone - and fragmentation isn't always needed.
    4. Manufacture/rendering - there are plenty of commercial units that can do this and I'm working on my own as well.
    5. Existing tooth removal - I've heard that ultrasound can be used, but any approach that preserves the socket will do.
    6. In-situ replacement construction - the replacement will be composed of interlocking segments that, once in place, can be secured together using a locking bolt, ring or nut, depending on the fragmenting scheme used.
    I'm currently in discussions with the Local Enterprise Office.

    I met with Brian Davitt and he subsequently shared a link with me about regeneration of ligament tissue, which looks promising.

    Later on I thought that a material that could maintain the tooth's position while the ligament formed, and later be absorbed into the body, might be an avenue worth exploring - see this article about Dr Marion McAfee.

    Brian's going to Sligo soon anyway so there's hope that some networking might happen!

    DIY glass free 3D!

    First and foremost - you're not going to turn your laptop into a 1080p 3D theater using this process!

    Because this process directs odd pixel columns to one eye and even pixel columns to the other, your horizontal resolution is halved.

    My screen has a resolution of 1440x900, so I'll get 720x900 - that's about DVD quality in 3D!

    This involves printing a lenticular barrier on a transparency sheet and placing it over your laptop screen.

    A lenticular barrier is a repeating pattern of thin vertical lines that directs odd pixels to the left eye and even pixels to the right eye.

    By interleaving two photos or videos column by column, you have all you need to see 3D.

    YouTube 3D already allows you to play 3D videos this way, so you've got something to test with.

    I took a look at the parallax barrier documentation in Wikipedia and realized that it was a bit short on maths, so here goes.

    Some constants
    d : distance to viewer. This depends on one's preference and I've chosen 560mm.
    e : distance between eyes (pupil centres).
        Mine is 63 mm, measured with a tape measure and a mirror.
    p : pixel width/height. For Compaq A945EM, this is 0.259mm from the docs.
    Calculated variables
    g : grill distance, the distance separating the pixels from the grill.
        Note that HP/Compaq's own docs don't give the thickness of the screen cover,
        so trial and error is required here. Use clear acetate sheets or transparency sheets between the printed barrier and the display.
        g = p * d / e
    h : the grill line/space width. It's p but scaled down by d/(g+d)
        h = p * d / (g + d)
    expanding g,
        h = p * d / ((p * d / e) + d)
    factoring d out of the divisor,
        h = p * d / (d * (1 + p / e))
    cancelling d,
        h = p / (1 + p / e)
    multiplying above and below by e/p,
        h = e / (1 + e / p)
    or, multiplying above and below by p,
        h = p * e / (p + e)

    Yes, that's right - the line/space widths of the barrier depend only on the distance between your eyes and the size of the display pixels.

    If you want to view it from further away, add some clear acetate sheets between the barrier and the display.

    r : the printer resolution in pixels per millimetre
    r': the inverse of r: the size of each printer pixel
    x : the number of printer pixels per line/space(h)
        x = h / r'
        x = h * r

    From the above constants,
        g = p * d / e
          = 0.259 * 560 / 63
          = 145.04 / 63
          = 2.3022222

        h = p * d / (g + d)
          = 145.04 / (2.3022222 + 560)
          = 145.04 / 562.3022222
          = 0.25793958

    or, using the short form,
        h = p * e / (p + e)
          = 0.259 * 63 / (0.259 + 63)
          = 16.317 / 63.259
          = 0.25793958

    Printer res (r)
      (dpi)    (px/mm)          x
       300   11.811024   3.0465306
       600   23.622047   6.0930609
      1200   47.244094  12.186122

    You might think 3.0465306 - that's about 3. Can't I just print lines 3 printer pixels wide, spaced apart by 3 pixels?
    The problem is the fractional part: the 0.0465306-pixel error per line accumulates, and once it adds up to a full line width (about 3 pixels) the barrier is out of phase - a left-right reversal.
    So your image will go to the wrong eye about every (3 / 0.0465306) ≈ 65 lines.

    So 1200 dpi is the minimum I think is needed for it to work.

    Obviously the printer can only print whole pixels so you round up the error term when calculating filled pixels and round down when calculating spaces, so the wrong pixels are always blocked, at the expense of sometimes blocking parts of good pixels.

    Just to let you know what a "pixel" looks like, I took a picture.

    I think it should be possible to craft an image with the first row having the line/space pixels, repeating that row all the way down the image.

    The really tricky part is getting the printer to stay out of the way and just print it as is.

    I took the following images using a pocket microscope set to 100x and photographed with my mobile phone set to 2.8x - tricky at best.

    Firstly, 1mm

    Next, some display pixels at the same scale.

    Finally, I printed a "swatch" of different spacings. Below is subjectively the one that worked best for me.

    I put two clear acetate sheets between the screen and the filter with the filter print side closest to the screen.

    I printed this with my Canon Pixma MP560 which claims a 9600x1200 dpi print resolution. It looks a bit noisy and blotchy.

    As I printed horizontal lines running down the page, I had 1200dpi to work with, so I had to stagger the line/space widths to average to the required value.

    More in part II.

    Sunday, 18 September 2011

    QtJs - Converting Qt C++ to JavaScript

    I've approached this task from several angles.

    by hand

    QtJs is a SourceForge project I created to share some demos I made with JavaScript implementations of some of Nokia's open-source Qt classes.

    At the moment I've got a bare-bones JavaScript implementation on my web site, but I think I'm going to abandon it, as
    1. it takes ages
    2. every time Qt releases another version I'll have to manually update
    3. the Qt code base is huge
    These are good reasons to try to develop a semi-automated approach.
    When I say "semi-automated", I mean writing code that does a reasonable job most of the time, but handles corner cases using some kind of filter, be it location, name or scope, maybe a combination of these.
    These corner cases will have an action to deal specifically with that case.

    I can see this being applied first and foremost to the QObject / QWidget classes. Once I have them working, the rest should follow.

    That said, the existing implementation has taken me a long way, and contains some useful ideas.

    I had a new version in the works that chain-loads the separate JavaScript files.
    The main html file only has to reference "qtjs.js" and it gets the rest.
    I'll definitely be using that approach until I wrap the whole thing up using the Closure Compiler.

    I may also end up using the Closure Library.


    I've had mixed success in my previous encounters with clang/llvm, which discouraged me from "biting the bullet" and committing to this semi-automated approach to the conversion.


    I had a look at emscripten and I may yet give it a go, if only to compare the two approaches.

    Google chrome browser plugin

    Google's NaCl promises to allow me to write JavaScript that runs in the (Google Chrome) web browser but uses a native Qt implementation.

    Android plugin

    There's an Android app called Ministro that allows you to run compiled Qt apps on your Android phone. I'll have to research if it's possible to leverage this from within an Android web browser though. It may be more trouble than it's worth given that you can just go ahead and write Android programs with Qt directly.

    I'm a blogger!

    Well, after much self-deliberation, I finally won/lost and decided to start a blog.

    As I'd encountered it before, it seemed the obvious choice, so here we are!