Is it game night yet?

It’s Thursday night. Just one more day to plow through until you reach Friday night with all its movies and drinks. Well, we can’t tell you how to speed up time but we can tell you how to feel like it’s going faster. Play computer games. 

Now, we are talking about Genspace, and we do have a bit of a reputation to maintain. So as much as I would like to recommend that everyone get cracking on battle.net with Starcraft 2, we’ll have to make do with something different: a computer game with science in it.

It’s called Phylo, and you can find it here. Phylo is entirely browser-based (Flash-based, to be specific; sorry to disappoint all my iPad-toting readers) and doesn’t require any serious computing muscle on the player’s end. I’ve been playing it for the last hour or so, and it’s an odd piece of work. On the surface the game follows some of the basic rules of pattern-matching casual games you might be familiar with, like Bejeweled. Yet the experience of playing the game feels far more complex than that, and I don’t necessarily mean that in a bad way. Also, there’s a real benefit to playing this game in your spare time, other than gaining the l33t skills to pwn the n00bs with.

You see, Phylo is ‘a human computing framework for comparative genomics.’ Basically, it gives you real multiple sequence alignment problems represented by blocks of 4 colors scattered on a grid. And of course, budding bio-enthusiasts like us know what’s up when a science program gives us 4 of anything: they represent nucleotide sequences. As you match same-colored blocks with each other, you contribute some of your brain power to finding aligned sequences between different genes. If you misalign the blocks you lose a point, and if you create gaps between the blocks (which represent mutations) you lose lots of points. You gain points by aligning same-colored blocks in a vertical column, and you need to gain a certain number of points to pass a level or get another gene to align with your existing sequence. This is a very abstract presentation of an optimization process that is usually done with complex computer algorithms and lots of processing power, and which would be prohibitively expensive to brute-force. The authors of the program hope to use this human-computer interaction on a large scale to come up with optimized heuristic patterns.
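
To make the scoring idea concrete, here’s a minimal sketch in Python (the point values are my own toy numbers, not Phylo’s actual scheme) of how an alignment with match rewards, mismatch penalties, and heavier gap penalties might be scored:

```python
# Toy pairwise-alignment scorer; the point values are illustrative only,
# not Phylo's actual scoring scheme.
MATCH, MISMATCH, GAP = 1, -1, -5

def alignment_score(seq_a: str, seq_b: str) -> int:
    """Score two already-aligned sequences of equal length ('-' marks a gap)."""
    score = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            score += GAP        # gaps (insertions/deletions) cost the most
        elif a == b:
            score += MATCH      # same 'color' lined up in a column
        else:
            score += MISMATCH   # misaligned blocks lose a point
    return score

print(alignment_score("ACGT-A", "ACGTTA"))  # five matches, one gap -> 0
```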

 

This is how Phylo looks
The logic is sound. After all, the usefulness of the human ability to find patterns in complex biological problems has already been proven worthwhile by the fold.it protein folding puzzle game and the Nature paper that came out of it. Guess who’s a co-author of a Nature paper. 😉
 

This is how it might look on a scientist’s computer
I’ve played around with the DNA code responsible for idiopathic generalized epilepsy, and 160 other people had already attempted to solve the puzzle… and 146 of them failed. And therein lies the problem of biology-turned-game. You see, unlike regular puzzle games like Bejeweled or Tetris, not everything will fit together with perfect logical coherency. Granted, there are a few techniques you can use to treat this like any other game (for example, don’t waste your time moving around single blocks in the beginning stages; crush them together into a single group for maximum points in the shortest amount of time), but the fact is that not everything will fit together, and it can be rather jarring for a beginner to figure out what he/she is doing right, since there isn’t any satisfying feedback for a ‘correct’ sequence formation. It can’t be helped, though. This is science, and no one knows the correct answer with which to detect your success and give you feedback. Maybe that’s the whole reason why you should play this game. After all, would you play a match in Starcraft with a predetermined outcome?
I, for one, am looking forward to a future where all games contribute to scientific discovery in some shape or form.

Jaron Lanier and the Fall of Opensource

Jaron Lanier, one of the pioneers of the opensource movement and of virtual reality, thinks the opensource movement has been a total failure. He does point out that the opensource movement and web culture are two different things, and agrees that the latter has been a phenomenal success in demonstrating the capacity of unknown, average individuals out there to create beautiful, useful, and interesting things.

I don’t agree with everything he said, but I think he has some important points we should pay attention to.

1. The opensource movement is boring. Seriously, sitting down and writing Wikipedia entries (of often questionable accuracy), worrying about how to format text? Sure, it’s something you and I might do in our spare time, but we are geeks. Opensource is supposed to be about serving all of humanity, but as it stands it just serves the narrow interests of a very small portion of the population: geeks and nerds. As long as the grandma and primary school kids next door can’t use opensource products/projects/frameworks simply because it’s fun, the whole culture is just another outlet for the elitism and fascism most hackers are supposed to hate so much. Ever said something along the lines of ‘I hate being the tech support for the whole family’? That means the people who wrote those programs and services suck, not the users.

2. Major opensource products are built upon nostalgia for the ‘better times’, the golden age of hackerdom during the ’60s~’80s: Linux, the gcc toolchain the vast majority of major opensource projects are built upon, the vim vs. emacs war, and so on. Linux distros have been making some good strides in this department, but we still need to face facts. To anyone who didn’t string together shell scripts in high school, major opensource projects and the tools they are based upon look downright archaic. It isn’t just because they have bad user interface design (they do). It’s because they really are old and deprecated. I am continuously amazed by how many people tell fresh young minds entering hackerdom to go learn C. Kindergartners don’t start learning the English language by starting with Latin. Why is the whole darn culture based on a fast-but-bad programming language designed before many of us were born? Let’s be honest here: most people who recommend C to beginners started with BASIC. When a whole culture based on ideals of innovation and sharing begins to look outdated and conservative next to hulking multi-billion/trillion-dollar corporate entities, it is in trouble.

3. This is a repeat of the above point, but it bears some reiteration. There isn’t enough innovation in the opensource community. Again, large corporate entities that take three days to ship an empty box innovate a whole lot more than most of the opensource communities out there. Sure, there have been some interesting developments that are making the world a better place, like Ruby and Python. The same Ruby and Python people praise for finally getting around to implementing the great ideas of programming languages like Smalltalk and Lisp. Smalltalk and Lisp were invented back when the idea of a cheap personal computer was the stuff of science fiction. Linux is playing catch-up in terms of features and architecture with commercial operating systems, and in critical applications UNIX is king (guess how old it is). Meanwhile Microsoft is making strides with the .NET framework, and Google and Apple are on the cusp of the next era of personal computing. Based on real-world progress, the opensource community as a whole lacks a clear vision of what the future should be.

4. There is an inherent elitism within a lot of the opensource communities. Personally I have no issue with elitism on a personal level. It’s when such an attitude permeates entire communities that it begins to do real harm. Common sense dictates that any software targeted at Jane Doe should be easy enough for Jane Doe to use. Not so in a lot of opensource communities. If Jane Doe has a hard time using an obscure text editor with more commands than the usual operating system, it’s her fault for being so lazy and/or dumb. If a kid who can barely type can’t learn C and work with pointers, the kid must be stupid. If it’s too difficult for artists to use computer systems to create beautiful things without pre-packaged software, it’s because artsy types aren’t supposed to be good at computers. These problems are being addressed by a new wave of hackers and hacker-minded people, but they are still tragically present in many existing communities, even when those communities don’t specifically come out and say those things.

There are other interesting traits of opensource and opensource-oriented communities that Jaron Lanier pointed out as well, like how most of them are structured to shout down any voice of dissent out of fear of isolation, and how there is a culture of complacency among their leading members, but those things apply to almost any large group of people, so I felt no need to single them out and discuss them.

I’m an optimist. I think there are movements within the opensource community that are trying to address this problem. I think the prevalence of web platforms, the popularity of lightweight scripting languages, and the attention to web/user interface design are all in some form a reaction to the perceived stagnation of the opensource community. People are increasingly becoming aware of what a stupid idea it is to teach C in middle schools, and how much stupider it is to begin computer education in middle school instead of much, much earlier. I might go out on a limb and say that some people are beginning to realize that programming as an activity is not difficult at all, and that it is the teachers who don’t know what they are doing, not the students.

Yet I am still worried about the culture of opensource. Opensource as a framework of ideas, not of computing. How can we apply the ideas of opensource and innovation to fields outside computing, like CNC-based personal manufacturing, scientific research, and DIYbio, when it’s running into such problems on what should be the culture’s home ground? Are those open-manufacturers/scientists/biohobbyists/etc. about to run into unforeseen trouble inherent in the existing idea of opensource itself? Are we already in trouble?

edit: maybe I should say that the woe of the current opensource community (as a whole; there are many brilliant people and groups out there, can’t stress that enough) is that it doesn’t hack as much as smaller groups do?

Edit: Aug 28
Some people wrote me some valid (“you don’t seem to understand opensource in the post”), and some vitriolic (“what’s wrong with being a nerd?!” but with lots of swearing in it), rebuttals to this post. I refrained from replying to those responses individually and getting into arguments, since I think this post is terrible myself (like how I used opensource and web 2.0 interchangeably in parts). I must stress that I’m a student of all things Free software, and what I say or write here should never be taken as something it isn’t.

I personally like to consider myself someone with geek tendencies. I love emacs, and I love the idea of emacs. I think GCC is a huge thing that changed the course of humanity as much as the development of the steam engine did. And yet I think all of those tools are old, based on older ideas, and inaccessible to anyone who doesn’t subscribe to the lifestyle of people like you and me, the people who wouldn’t mind staring at a screen for hours on end.

I had a chance to talk to some ex-programmers turned artists at the ITP exhibition last year. One of them had a particularly interesting exhibit with a type of evolving display system. He did all of the graphic generation within the exhibition by hand, by putting together a library of hand-drawn images. It was rather obvious he could simply do some coding in the Processing language and get it done faster and more efficiently, so I asked him why he bothered with the manual labor… And he told me that he simply doesn’t like to program. I’m not sure how I can portray the eye-opening effect that had on me at the time. The artist was fine with studying algorithms and working them out on paper, using them to generate obviously computational results. He was a very logical guy with mathematical proficiency to spare, certainly more than I can say for myself. It’s only that he just couldn’t stand the whirring of the computer fans, the monitors, the endless clacking of keyboards, and always worrying about the battery life of one device or another. And I get the feeling that he is not alone in this. Maybe there are some people who are allergic to certain types of things used universally in building computers. Maybe there are some kids who just can’t handle the physical environment that comes with using a computer as we know it, due to some psychological trauma. Such cases aren’t unheard of in education circles, and there can be hundreds of thousands of reasons why someone would shy away from programming activities while possessing the logical acuity and vision that would normally lead to the act of programming.
I subscribe to the Alan Kay notion of describing computer use: every interaction with a computer is an act of programming, but programming activity isn’t exclusive to the use of computing devices. And that’s why I agree with his frequent statement that the computer revolution never really happened. The computer revolution was supposed to be a revolution of the mind-ware. It was supposed to be this awesome tool of abstraction that would elevate (for lack of a better term) all of humanity to a state of freedom through better understanding of subjects that were distant and foreign to them… It was supposed to make science easier, a goal that is near and dear to people like me interested in DIYbiology. Easier not as in being lazy, but as in being accessible, like how calculus was once considered the pinnacle of human knowledge but is now taught, even in some of the worst educational curricula in the world, as something every human being should know regardless of their intellectual rigor.
The original post was my attempt to address the inconsistency between the ideals that I believe should apply to the opensource community and the reality of the tools deployed. It’s called open source; purists call it Free software. Despite some differences between the two, they really are about openness and freedom, but only as long as their users and contributors subscribe to a certain type of lifestyle. Is there any way to change that? Can Free software be so free as to be no longer confined to the silicon and copper frameworks and the languages of C and (gasp) Fortran?

I know this all sounds like pie-in-the-sky talk right now, but I feel it’s a goal worth pursuing for those in the opensource community.

8bit tools of science

According to the founder of Playpower.org, more people in India have TVs at home than tap water. And there are $12 computers everywhere that use those TVs as monitors, like so many of the personal computers of old.

Now consider that this hardware, based on older 8-bit chip designs, and the software that runs on it are more or less in the public domain. We are looking at a significant portion of the entire human population poised on the verge of hackerdom. It’s not just typing education and language training. We could build an entirely new framework for education in third-world urban areas using existing tools of education and science. Imagine being able to design an 8-bit program for those machines (some of them can actually get on the internet) that pulls data from research institutions of all kinds (BLAST, Wolfram Alpha, etc.) and scales it down to a form those machines, and the people using them, can understand. We already have beta versions of synthetic biology CAD programs that undergraduates regularly use for their school assignments and private projects, so it’s not that far away in the future.
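
As a very rough sketch of the ‘pull data from research institutions’ half of that idea, here’s how a small client might fetch a nucleotide record from NCBI’s public E-utilities service. The accession number is just an example, and the downscaling for an 8-bit client is left entirely to the imagination:

```python
# Rough sketch: pull sequence data from a public repository (NCBI E-utilities).
# The accession below is only an example; a real 8-bit client would need the
# result chopped down to something far smaller.
from urllib.request import urlopen
from urllib.parse import urlencode

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(accession: str) -> str:
    """Fetch a nucleotide record in FASTA format from NCBI."""
    params = urlencode({
        "db": "nucleotide",
        "id": accession,
        "rettype": "fasta",
        "retmode": "text",
    })
    with urlopen(f"{EFETCH}?{params}") as response:
        return response.read().decode()

if __name__ == "__main__":
    fasta = fetch_fasta("NM_000518")  # human beta-globin mRNA, as an example
    print(fasta[:200])  # a low-powered client would only ever see small chunks
```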

Will a child capable of programming computers and pulling data on SNP variations to do his/her own genotyping, using soon-to-be widely available opensource PCR machines, still languish in poverty and despair? I don’t know. I’d sure like to find out though.

Alan Kay applied to synthetic biology, and other stuff.

This is something I wrote up a few days ago, probably around four or so in the morning. So take whatever it says with caution.

I know I should be writing about some other things as well, like how DIYbio NYC might be amazingly close to getting a real lab space, or how I’m prepping to stop by this year’s iGEM jamboree. I also have the pictures from this year’s major DIYbio NYC event, where we set up a stall at the NYC greenmarket and extracted DNA from natural produce with common household materials (with the passers-by, of course). Each of those things would probably make for some lengthy and interesting reading, and the list goes on (my life’s actually kind of exciting right now). Yet whenever I find the time to write something down, nada. Nothing. My mind just shuts down and nothing I can commit to paper or the keyboard seems good enough.

Tonight though, aided by my weird bout with insomnia, I’ll just write something down I’ve been meaning to say for a long time.

I’ve been looking into the history of computing and computer languages recently. I’ve always had some level of interest in computers, not just in the spiffy brand-new muscle machines but in what most people would refer to as ‘retrocomputing’ (I once ended up practicing some AIDA because of that. Ugh), which is a story for another time. It’s not that I think old ways of computing were better than what we have now (protected memory FTW). It’s just that it’s much easier to trace the evolution of the concept of computing when you see beyond the immediate commercial products.

Synthetic biology is effectively the pursuit of engineering biological organisms. Biological organisms are built on a somewhat unified information storage and processing system that has quite a few parallels to mechanical computerized systems. I’ve been wondering whether it would be possible to predict the future development of synthetic biology by looking at how computer programming languages evolved (because both deal with information processing systems applied to a physical medium). Maybe it’d be possible to predict some of the pitfalls inherent in developing complex programmable information processing systems that will apply to synthetic biology in the future. Maybe we can bring synthetic biology, within mere years, a conceptual framework that would have taken decades to mature if left to develop naturally.

While I was rummaging through texts both in real life and on the web (with many of the promising links on the web leading to dead ends and 404s), I ran into a programming paradigm and environment I was only superficially familiar with before: Smalltalk and Squeak, both brainchildren of the computing pioneer Alan Kay.

Here’s an excerpt from Alan Kay’s biography I found on the net (I can’t find the website right now. I swear I’ll edit it in later, when my brain’s actually working!)

“Alan Kay postulated that the ideal computer would function like a living organism; each “cell” would behave in accord with others to accomplish an end goal but would also be able to function autonomously. Cells could also regroup themselves in order to attack another problem or handle another function.”

This is the basic philosophy behind Smalltalk/Squeak and the object-oriented programming paradigm. It is no coincidence that Alan Kay’s vision of the ideal computer language and computing environment would take to a biological allegory, since he came from a molecular biology background.
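
A tiny illustration of that philosophy, written in Python rather than Smalltalk and entirely my own toy example: the system is nothing but ‘cells’ that hide their own state and cooperate only by sending each other messages.

```python
# A toy illustration of Kay's 'every object is a cell' idea, in Python rather
# than Smalltalk: each Cell keeps its own state, and the only way to influence
# it is to send it a message; it decides on its own how to respond.
class Cell:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message, sender):
        """Handle an incoming message; the cell chooses its own response."""
        self.inbox.append((sender.name, message))
        if message == "divide":
            return Cell(self.name + "'")   # autonomously spawn a daughter cell
        return None

    def send(self, other, message):
        return other.receive(message, self)

a, b = Cell("A"), Cell("B")
daughter = a.send(b, "divide")   # A asks B to divide; B obliges on its own terms
print(daughter.name, b.inbox)    # B' [('A', 'divide')]
```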

While reading through the history of different computing paradigms to figure out how they might be applied to synthetic biology, I found something else awesome and perhaps a little heartwarming. Throughout his life as a computing pioneer, Alan Kay held onto the belief that the ideal computing platform won’t be the platform capable of crunching numbers the fastest. It will be a platform that can be integrated into the educational function of the user through ease of manipulation and control. The ideal computing platform should be hackable because it makes logical sense for it to be so.

Can we say the same of synthetic biology? Perhaps not. The direct comparison of a complex biological system to computerized circuits can only take us so far. Yet I can’t shake the nagging feeling that synthetic biology might be looking at some very unique opportunities for change precisely because it is different from regular electronic systems, with documents from the early days of computers and programming already here for our perusal.

A good, elegant system that allows programmable extension must at the same time be easy to learn, since one thing inevitably leads to the other. And there are classes of systems that both run better and are learned more easily than other systems. This might become something of an issue in how synthetic biology parts/devices/systems are put together in the future, as the capacity of synthetic biologists to handle complex systems increases.

I think it might be possible to pursue this idea further. As it stands, this is nothing more than an interesting conceptual parallel without substantial scientific reasoning behind it.

Which is why I should get myself to learn Smalltalk/Squeak sometime in the future. Maybe I should knock on the doors of the hackerspaces in the city and see if anyone’s willing to mentor me.

How to change the world.

This is a bit of a rant post on something I thought of after watching a bunch of old hacker-themed movies from Hollywood. It continues to amaze me how I can participate in all sorts of crazy things even with the summer studies and jobs I need to keep up with. I guess that’s the benefit of living in a place like NYC.

I’ve been watching some old hacker movies lately. And I just can’t believe what kind of cool things those movie hackers were able to pull off with their now decades-old computers and laptops. Computers with interfaces and hardware that exude that retro feel even across the projector screen. I know a lot of people with brand-spanking-new computers with state-of-the-art hardware, and what they usually do, or can do, with those machines isn’t as cool as the stuff in the movies being pulled off with vastly inferior hardware and network access. Of course, like everything in life, it would be insane to compare the real with the imagined, and Hollywood movies have a bad tendency to exaggerate and blow things out of proportion (I’m just waiting for that next dumb movie with synthetic biology as the culprit, though it might not happen since Hollywood’s been barking about the indecency of genetic engineering for decades now). Even with that in mind, I can’t help but feel that modern computerized society is just way too different from the one imagined by the artists and technologists of old.

Ever heard a younger Steve Jobs talking in one of his interviews? He might have been a rather nasty person, but he certainly believed that ubiquitous personal computing would change the world for the better. Not one of those gradual, natural changes either. He actually believed that it was going to accelerate humanity itself, very much like how Kurzweil preaches about the end of modernity with the upcoming singularity. Well, personal computing is nothing new these days. It was actually quite stale until a few months ago, when people finally figured out that glut-ridden software with no apparent advantage in functionality was a bad thing, both in terms of user experience and economics. Ever since then they’ve been coming out with some interesting experiments like the Atom chipset for netbooks (as well as netbooks themselves), and the Nvidia Ion system for all sorts of stuff I can’t even begin to describe. And even with the deluge of personal computing in the world, we have yet to see the kind of dramatic and intense changes we were promised so long ago. Yeah sure, the world’s slowly getting better, or changing at least. It’s all there when you take some time off and run the real numbers. It’s getting a little bit better as time goes on, and things are definitely changing, like some slow-moving river. But this isn’t the future we were promised so long ago. This isn’t the future people actually wanted to create.

We have engines of information running in every household and in many cellphones right now. Engines of information, meaning all sorts of machinery that can be used to create and process information content. Not just client-side consumption devices where the user forks money over to some company to get little pieces of pixels or whatever, but real engines of information that are capable of creating as well as consuming, using all of their hardware capabilities. It’s as if this were the Victorian era and everyone had a steam engine built into everything they could think of, and nothing happened. No steam cars, no steam blimps, no nothing. The world’s rolling along at the same pace as before and most people still think in the same narrow-minded niches of their own. What’s going on here? Never before has such a huge number of the ‘engines’ responsible for creating an era in history been available to so many people at once. And that’s not all. Truly ubiquitous computing made available by advances in information technology is almost here, and it is very likely that it will soon spread to the poorer parts of the world and the remote parts of the globe traditionally cut off from conventional infrastructure.

But yet again, no change. No dice. Again, what’s happening here, and what’s wrong with this picture? Why aren’t we changing the world with computers at a vastly accelerated rate, the way we changed the world with rapid industrialization (not necessarily for the better, of course)? That’s right: even compared to the industrialization of old, with its relatively limited availability and utility of steam engines, we are falling behind on the pace of change. No matter what angle you take, there is something wrong in our world. Something isn’t quite working right.

So I began to think during the hacker movie screening, and by the time the movie finished I was faced with one possible answer to the question of how we’ll change the world using engines of information. How to take back the future from spambots, ‘social gurus’, and unlimited porn.

The answer is science. The only way to utilize the engines of information to change the world in its tangible form is science. We need to find a way to bring the sciences to the masses. We need to make them do it, participate in it, and maybe even learn it, as outlandish as the notion might sound to some people out there. We need to remodel the whole thing from the ground up, change what people automatically think of when they hear the term ‘science’. We also need the tools for the engines of information. We need software-based tools so that people can do science everywhere there is a computer, and do it better everywhere there is a computer and an internet connection. And we need to make it so that all of those applications/services can run on a netbook-spec’d computer. That’s right: unless you’re doing serious 3D modeling or serious number-crunching, you should be able to do scientific work on a netbook. Operating systems and applications that need 2GB of RAM to display a cool visual effect while scrolling text-based documents are the blight of the world. One day we will look back at those practices and gasp in horror at how far they held the world back from the future.

As for actual scientific applications, that’s where I have problems. I know there are already a plethora of services and applications out there catering to openness and science integrated with the web. OpenWetWare and other synthetic biology related computer applications and services come to mind. Synthetic biology is a discipline fundamentally tied to the use of computers, access to outside repositories and communities, and a large amateur community for beta-testing its biological programming languages. It makes sense that it’s one of the foremost fields of science open to the public, and that it offers a number of very compelling design packages for working with real biological systems. But we can do more. We can set up international computing support for amateur rocketry and satellite management, using low-cost platforms like the CubeSat. I watched the launch of a privately funded rocket into Earth orbit through a webcam embedded in the rocket itself. I actually saw space from the point of view of the rocket, sitting in my bedroom with my laptop, as it left the coils of the Earth and floated into space with its payload. And this is nothing new. All of this is perfectly trivial, and of such technical ease that it can be done by a private company instead of national governments. And most of the basic peripheral management for such operations could be done on a netbook, given a sufficient degree of software engineering and a reliable network connection. There are other scientific applications I could rattle on and on about without pause, and there are plenty of people out there much better versed in the sciences who can probably come up with even cooler ideas… So why isn’t this happening? Why aren’t we doing this? Why are we forcing people to live in an imaginary jail cell where the next big thing consists of scantily clad men/women showing off their multi-million dollar homes with no aesthetic value or ingenuity whatsoever? Am I the only one who thinks the outlook of the world increasingly resembles some massive crime against humanity? It’s a crime to lock up a child in a basement and force him/her to watch crap on TV, but when we do that to all of humanity suddenly it’s to be expected?

We have possibilities and opportunities just lying around for the next ambitious hacker-otaku to come along and take. But they will simply remain possibilities unless people get to work on them. We need software and people who write software. We need academics willing to delve into the mysterious labyrinths of the sciences and regurgitate them in a user-friendly format for the masses to consume, with enough nutrients in it that interested people can actually do something with it.

This should be a wake-up call to the tinkerers and hackers everywhere. Stop fighting over which programming language is better than others. Stop with the lethargic sarcasm and smell the coffee. Learn real science and hack it to pieces like any other system out there.

Get to work.

Change the world.

The Antikythera mechanism

 

An ex-senior curator finally succeeded in replicating all known features of the 2000-year-old Antikythera mechanism, the first known mechanical computer in human history. Technically it is in a similar spirit to a 19th-century clock. There is a strange notion among some people that humans somehow got smarter over time. Sometimes I feel like throwing the Antikythera mechanism in their faces. Or I could just tell them to go read a good history book instead. Yes, I could always do that.

All in all, an amazing mechanism. Perhaps there were even more amazing things lost to time in other ancient civilizations as well.

Life: Deciding on a laptop

As I’ve continuously whined about over the past few months in various places around the net, I need to buy a new laptop. Yes, I haven’t bought the darn thing yet. I’ve been doing all my computing on a school desktop (by remote connection) and the Asus Eee PC 701 ‘netbook’, which comes equipped with Xandros Linux (buggy as sin would be an understatement), a 7in screen, 512MB of RAM, and a 4GB SSD (which I complement with another 4GB SD card). The little laptop has been surprisingly useful, and I don’t know what I would have done without it by my side. If only the default OS were a bit more stable… The system is shakier than a vial of nitroglycerin on a centrifuge.

I’ve actually already ordered my laptop on the net: a Lenovo ThinkPad T400. It’s scheduled to ship sometime in the week of Nov 12th, so I will be receiving it near the end of November, roughly a month from now. Yes, while Lenovo builds some decent quality laptops, they certainly suck big time at customer service and shipping arrangements.

The problem is, Apple released their aluminum MacBook line a week or two ago. And from what I’m seeing, the performance on that machine is amazing. The integrated graphics on that machine trumps the dedicated graphics card on quite a few laptops of a similar class, and actually does slightly better than the T400 with dedicated video memory that I have on order. I stopped by the Apple Store on Broadway to take a look (at 11 PM; those guys are open 24hrs), and the weight/design impression is fantastic. Even better, if I decide to pick up the new MacBook, I don’t have to sit around sucking on my thumb for a month. Oh, and then there’s OS X. Aesthetics-wise, I hate OS X and its outdated brushed-aluminum look, but the system is built on top of UNIX, so it affords some unique advantages for someone in the sciences. The wealth of biology-oriented scientific software on OS X and the native Mathematica integration are staggering, and the user even has the option of using an apt-get-style software repository on OS X for installing some of the more obscure and specialized software and frameworks. An extensive software development environment, Xcode, is included free of extra charge, and you are allowed to reinstall the OS as many times as you want (learn from this, MS!!!). The rumors of an impending update to OS X that would allow users to utilize the GPU as a secondary (primary?) processor for calculation-intensive tasks don’t sound too bad either… If done properly, it might even be possible for a regular MacBook to have near-workstation-quality number-crunching capabilities.

There are several disadvantages to getting the MacBook/OS X, though. The first issue is software compatibility. The OS X library might have grown by leaps and bounds in the past few years, but it still pales in comparison to what is available on the Windows platform. Things get progressively worse when you try to use web services/programs in a foreign language, i.e. an entirely different software culture and financial ecosystem. Take, for example, QR codes. QR codes are almost universally available in Japan and used in some other East Asian countries to a lesser extent. Windows has hundreds of different scripts and programs for generating and reading QR codes. A quick Google search nets us three or so read-only programs for OS X, and it is not certain whether they are actively maintained. How about interactive fiction utilizing the Infocom Z-machine? (My secret passion…) The Gargoyle program on Windows runs nearly all the formats used in IF, while OS X needs about two, maybe three, such programs installed on the same machine for maximum compatibility. Some people would say that I can run Windows on a Mac using Boot Camp or virtualization software, but frankly I find the notion of running multiple OSes on a single computer to be unrealistic from a usability perspective. Theoretically it might sound like a great option, but the prospect of turning off the computer and ending all my working sessions just to use another program or two is certainly not attractive to me.
The price ratio is also something of an issue. In my configuration of the T400, I get a 1440×900 14in screen, a built-in 7-in-1 card reader, three USB ports, an ExpressCard slot, 6 hours of battery life with wifi on, a 2.2GHz processor, Bluetooth, and WiMAX/WWAN upgrade capacity, all for 1180 dollars. If I choose to go with the MacBook, I get two USB ports, Bluetooth, a 1280×800 13in screen, a 2.0GHz processor, and 3~4 hours of battery life with wifi on, all for a whopping 1400 dollars including taxes. That’s roughly a 200 dollar difference, with the machine obviously lacking in feature set costing more. Mac aficionados out there will tell me that OS X itself (with its unlimited reinstallation capabilities), the variety of built-in software tools, the iLife suite (which looks quite amazing), and the UNIX-based performance boost/stability offset the 200 dollar premium, and they might be right (build quality is stacked in favor of the ThinkPad, since ThinkPads already have an industry-approved build-quality record under their belt). But then I know a good number of free, open source programs available for the Windows platform that can do all of those things… Perhaps not better than the Mac software, but certainly adequately. Aesthetics-wise, as I’ve stated above, I am not very fond of the OS X design and its ‘Aqua’ theme, and I personally find how they shove the ‘dock’ interface down their users’ throats to be insulting and grotesque. Windows has such issues as well, but at least I am familiar with some very hard-core theme-patching under Windows. It doesn’t hurt that I know precisely how I want my computer/OS to look design-wise (and no, I don’t think the T400’s black-box look is ugly, contrary to popular opinion).

I guess for the time being, my ideal machine would be a T400 capable of running OS X out of the box. I am aware of projects like OSx86 that try to tune OS X so that it can run on non-native hardware, but they are just too darn clunky to be used on a mission-critical work laptop. Maybe I should install Ubuntu within the Windows partition of the T400?

Whatever the case, logic dictates that I should wait a month for my cheaper and faster T400 to arrive. It’s only that I keep getting the urge to cancel my order and just go pick up a MacBook, like some primal impulse beyond the reach of civilized consciousness… (insert witty H.P. Lovecraft reference here)

Science in Apple?

Like most people, I was tuned into the WWDC keynote address on Monday. Most of the stuff in the keynote was more or less expected, including the iPhone/dev kit and OS X 10.6. However, the way they were presented was intriguing, to say the least… To this scientist-in-training, at least.

First, the iPhone. The inclusion of medical applications in the presentation was the real eye-catcher of the show for me (other than the $199 price point for the iPhone, but that was expected). Why go through the trouble of including such specialist applications in a presentation aimed at developers and consumer-enthusiasts? Of course, it would be nice to present applications from a variety of fields to showcase the capacity of the iPhone and ‘grow the image,’ but something tells me that a medical imaging application and a med-school study guide are probably not the most interesting of the applications submitted to Apple in time for WWDC. Based on circumstantial evidence, I think Apple planned to have a presentation on medical applications included from the beginning, and I think they wanted more than one to showcase the professional academic muscle of the iPhone. The very fact that they took the trouble to include a testimony from Genentech regarding the iPhone’s enterprise functions seems to support this assumption.

Second, OS X 10.6, also known as Snow Leopard. The primary idea of the OS seems to be out-of-the-box utilization of the multi-core processors that are mainstream these days. Most of us run dual-core processors right now, and it wouldn’t be farfetched to think that we (and by we, I mean normal computer users; there are already quite a number of quad-core users in more specialized communities, I hear) might as well be running quad-core systems a year or two from now. It’s a reasonable move, considering that no OS of any flavor seems to be taking noticeable advantage of the 64-bit architecture that has been around forever. Apparently Apple is calling its own system for utilizing the expected slew of multi-core processors Grand Central (after the beautiful Grand Central in my hometown, no doubt), which will no doubt form the cornerstone of the new OS X 10.6 iteration when it is released a year or so from now. Is it pushing it too far to say that this might as well be a move on Apple’s part to appeal to the professional scientist community that actually has a real and pressing need for more computing power? Think of distributed computing projects like BOINC and folding@home, for example (in both of which I am an active participant; I urge you to join up if you think you have some CPU cycles to spare). My Intel Core 2 Duo 2.3GHz processor isn’t enough to complete complex work cycles in any reasonable amount of time. What if we could run more simulations and calculations on our own laptops/desktops for faster results? It’s no secret that Mathematica and Apple seem to be on something of a favorable footing. Apple’s ethos in this particular attempt will be simple: keep the computer out of the scientists’ way. Just plug in the numbers, get the results, no worries about 64-bit support or any complex refitting of scientific programs (contrary to what most people seem to think, studying physics or any other branch of science doesn’t make you good at computer science. Those are entirely different fields! Physicists are merely proficient at the limited skills needed for physics computing). Who wouldn’t want that?
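
To make the multi-core point concrete, here’s a minimal sketch of the kind of work Grand Central is supposed to make automatic, done by hand with Python’s standard multiprocessing module (the Monte Carlo workload is just a stand-in for a real simulation):

```python
# Hand-rolled multi-core parallelism: the kind of thing Grand Central is meant
# to make automatic. The Monte Carlo pi estimate is only a stand-in workload.
import multiprocessing as mp
import random

def estimate_pi(n_samples: int) -> float:
    """Monte Carlo estimate of pi from n_samples random points in the unit square."""
    hits = sum(1 for _ in range(n_samples)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    cores = mp.cpu_count()
    with mp.Pool(processes=cores) as pool:
        # Each core gets its own independent chunk of the workload.
        estimates = pool.map(estimate_pi, [1_000_000] * cores)
    print(f"pi ~= {sum(estimates) / cores:.5f} using {cores} cores")
```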

Third, OpenCL (which stands for Open Computing Language). This part might as well be a dead giveaway of Apple’s company-wide strategy to woo the scientific community. OpenCL is a framework Apple is developing that would allow developers to use the GPU of a computer for tasks normally done on the CPU. A few years ago, the PS3’s GPU being redirected toward mathematical calculation made some news. I believe there were other cases where conventional graphics chipsets were utilized for complex physics calculations and gave results that far surpassed what was possible using the conventional CPU alone. It’s been such a long time that I am somewhat surprised they are only now thinking of integrating it into the mainstream computer market. Mind you, this method of diverting the GPU to do CPU work was done at first to provide more muscle for physics simulations using conventional computer systems and components rather than specialized supercomputer systems. I do not foresee normal Apple-toting screenwriters and web surfers needing all that computing power anytime soon. If this is coming, it’s coming for us, the scientists, who need to crunch numbers most people haven’t even heard of.
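
For a feel of what GPU offloading looks like from the programmer’s side, here’s a minimal sketch using the third-party pyopencl bindings (an assumption on my part: pyopencl and a working OpenCL driver are installed; this is not Apple’s own API). The kernel just adds two large vectors element-wise, the sort of embarrassingly parallel arithmetic a GPU chews through:

```python
# Minimal GPU vector addition via the third-party pyopencl bindings
# (assumes pyopencl and an OpenCL driver are installed).
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)  # one work item per element
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```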

If we put the three together with the assumption that Apple might be shooting for the scientific computing community, we have a possible mobile computing platform with serious power (the MacBook Pro), able to run a variety of scientific programs (Mathematica, MATLAB, BLAST, etc.), with a built-in ability to sync with and be wirelessly controlled by a dedicated mobile phone with some serious computing power of its own (the iPhone plus community apps). So the actual computing can be done at home, while the user receives output and sends input from his iPhone. Would this work? I think there are plenty of people doing something similar already. But there may be significant differences between a device that has been essentially hacked together and a series of devices that were designed to work in conjunction from the beginning. I see this as a very exciting development on the part of Apple and the computing industry in general.
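
As a toy sketch of that ‘heavy computation at home, thin client in your pocket’ split (my own illustration, nothing Apple actually ships): a laptop runs a small HTTP service that does the number crunching, and any client that can make a web request, a phone browser included, sends input and reads back the result.

```python
# Toy 'compute at home, thin client elsewhere' split: a small HTTP service
# does the heavy lifting; a phone browser or script just sends parameters
# and reads back JSON. Purely illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def heavy_computation(n: int) -> int:
    """Stand-in for a real workload (a simulation, a sequence search, etc.)."""
    return sum(i * i for i in range(n))

class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        n = int(query.get("n", ["1000"])[0])
        body = json.dumps({"n": n, "result": heavy_computation(n)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A phone (or any client) would request http://<laptop-ip>:8000/?n=5000000
    HTTPServer(("0.0.0.0", 8000), ComputeHandler).serve_forever()
```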

Having a science-oriented Apple isn’t the only thing I’m excited about. Let me put it this way. The iPhone made people who didn’t use text messages on conventional phones text each other constantly. The iPhone also made people who never used the browsing capabilities of their conventional phones browse around the web. This is the problem, and the effect, of accessibility that I mentioned in some of the other posts on this blog. When people don’t do something, it might not be because they want it that way. It might be because there is an accessibility barrier between the individual and the activity. We complain about how people are no longer interested in the sciences and other higher academic pursuits. Maybe we’ve been unwittingly placing accessibility barriers on the paths to higher education? If this idea of an accessibility barrier between the public and the sciences has a grain of truth in it, maybe this new direction of Apple’s can do for the sciences what it did for telephony. Especially with community-based distributed computing projects and a DIY mentality on the rise across a variety of scientific, and especially biological, disciplines (the term synthetic biology itself isn’t even new anymore, despite the immaturity of the field), maybe I can hope for some sort of change in today’s somewhat disappointing state of affairs.

Hacker attitude

The ‘hacker’ culture has been around for so long, and been involved in so much of the substantial progress of the last half-century, that it has codified its own ethos and philosophy, somewhat like the Ten Commandments. Except that these rules are, as pertains to the hacker subculture itself, a matter of choice for the most part. If you find yourself agreeing with the code, then you are probably a hacker, regardless of whether you know about computers or not. Even if you regularly write in assembly language for a living, if you cannot agree with the code outlined by the hacker culture, you are probably not a hacker. In a way, calling it a ‘code’ and comparing it to the Ten Commandments is something of a misnomer. Think of it as something of an identification tag, to be used between people of similar disposition.

There are five fundamental common attitudes shared by most hackers, and they are as follows.

1. The world is full of fascinating problems waiting to be solved.
2. No problem should ever have to be solved twice.
3. Boredom and drudgery are evil.
4. Freedom is good.
5. Attitude is no substitute for competence.

It is rather interesting that all five attitudes go against the common beliefs and practices held by most public school education systems. At least for the inner-city schools I know of. Around those schools, teachers and administrators can say they are trying to teach children how to respect authority without even blushing in shame. That’s right folks: not respect for your fellow men/ladies, and not respect for yourself. The primary goal seems to be built around having kids in the middle and high school stages of education respect the person who has the right to call the police or security on them. Of course, I am being rather crass here, but this is the sentiment shared by most if not all urban youths, the same feeling I shared when I was their age. And who am I supposed to blame for the current less-than-fantastic state the public education system is in? Kids, or the experienced, supposed ‘professionals’ who get paid to study children and lead them to the best possible future?

As I grow older I’m finding that this ‘hacker’ mindset is not new at all. I believe it has been around since the very beginning of civilization, and that it is part of the natural instinct of being a human being. It is becoming increasingly clear that you don’t need to know about computers to hack things. What you need instead is the insight and wisdom to see through the systems of the world. It’s like applied cybernetics. As long as things affect each other in a certain way, they form a system. A system of human society is a system like any other, albeit fundamentally more complex, since such systems are usually evolved rather than designed. As long as something can be considered a system, it can be, and perhaps should be, hacked. A mudlark in a highly hierarchical society later becoming a shipping magnate, or a leader of a nation, is as much a hacker as the computer science major hacking with Python and C++ in pursuit of digital artificial life. A writer, a cook, a musician; the applicable list goes on and on. The field of synthetic biology, though fledgling at the moment, seems to be shaping up as the next contender for hackerdom’s primary pursuit, in the search for the ability to hack life as we know it. Who knows what we’ll be hacking some distant time into the future? Perhaps the very nature of space and time itself. Maybe even designer universes.

And from this standpoint of universal hackery, I must ask: would it be possible to hack the human world? Would it be possible to hack the public mind and the generational zeitgeist to nudge the rest of humanity toward some vision of the future? Is it possible to hack the origin of all the situations and motivations, the human being itself?

From virtual to real

I must admit, there was a time when I would play computer/video games late into the night. I was a wee lad back then, so impressionable and curious about the whole plethora of things in this universe. And the allure of virtual worlds to such a mind was just too sweet to resist. I gave a lot of thought to my condition during that phase of my life. Why would I be captivated by certain types of virtual reality? Is there something shared in common between the hundreds of different worlds constructed in a number of different mediums (written, visual, and aural) that composes the fundamental idea of what an enjoyable world should be? Would the impression of such an ‘idea’ of the mysteriously attractive world be common to all human beings? Or only to human beings with certain memories and experiences? I would spend many days just thinking about the nature of all possible virtual worlds imaginable by the human mind and their possible implications, while my hands went through the mechanical motions of controlling my representation on the display.

Deus Ex was a computer game created by the now-defunct Ion Storm that came out during the aforementioned impressionable period of my life. This game isn’t aesthetically pleasing by any stretch of the imagination. It’s gritty and ugly, in a very superficial and unintended kind of way. It is set in an imaginary near future where nanotechnology and artificial intelligence are just coming into full gear amid the financial and political turmoil of a new human age. Conspiracy theories based on some real-world conspiracy fads play an important role in the setting and the plot, and there is a lot of techno-jargon thrown around in the numerous conversations within the game world, which might add to its depth. Any way you look at it, Deus Ex is not a work of art, and it was never meant to be. Deus Ex as a game was designed to be immersive. Immersive as in realistic within the confines of the plot and the available technological means to execute that plot. Whatever Deus Ex was meant to be, it did its job, and it did its job fantastically. Deus Ex took itself just seriously enough to be immersive.

I have played and finished Deus Ex numerous times since the day it came out. The game had the semblance of a virtual world, just enough to be a better game, not enough to be a real virtual world, which was actually a good thing. I’d figure out a number of different ways to achieve the objective of specific stages and the game as a whole, each of those paths gradually beginning to encompass different processes that the designer of the game probably never intended in the first place: an early form of truly emergent gameplay in a digital medium. I can still remember a number of quotes and conversations from the game by heart, not through any diligent study, but simply through repeated exposure stemming from interest in the world itself. And to be perfectly honest, while I was aware of nanotechnology and its growing prominence before playing the game (I was a little precocious for my age), I began to truly comprehend what such technology could mean to the world and its people in the far future by seeing it applied within a virtual world built and maintained on fictional premises. It would not be far from the truth to say that my interest in the ‘industries’ of biology and other fields of science (with my current ‘official’ pursuit being plasma physics, which is an entirely different field altogether) began with my introduction to this game… I place much emphasis on the term ‘industry’ because it was through the application of the idea of technology within a virtual world (no matter how absurd it might be compared to the real one) that I began to grasp the requirements of science and its true impact on the modern human civilization of rapid prototyping and mass production. Yes, I’ve come to learn that science affects the human world as a whole, just as the hand of economy reaches into the deepest pockets of the remotest corners of the globe, and such permutation of ideas and information might have a reasonable pattern of causality behind it, forming a system of sorts. All this in the first year of high school, all this because I’d seen it applied in a limited virtual world whose goal was to entertain, perhaps mindlessly.

People talk of web 2.0 and web-based virtual reality (like Second Life) all the time, perhaps without grasping what it truly means. To me, the changes on the web and its technical and semantic updates are merely superficial effects of the real change that is taking place right now. The real change we are about to face at this moment is a change in the nature of the human network. I find that I’m using the term human network more often these days. The human network has been present since the very first moment of human civilization (perhaps even before, going back to the start of the human species) and has the mathematical and sociological properties of networks, properties that more or less remain the same on some compartmentalized level. The changes we are seeing in the emergence of web 2.0 ideas and virtual realities merely reflect technological advances applied to the same ever-present human network that has been in place for as long as anyone can remember. At the core of web 2.0 is the idea of user interactivity. What happens when there is freedom of interactivity between millions and billions of people? The medium providing the room for interaction itself begins to take on a closer resemblance to the concept we call ‘the world.’ Forget reality. What is a ‘world?’ What satisfies the definition of a ‘world?’ The core of a ‘world’ as it stands happens to be a place where people can interact with the very components of the world itself and with each other. In that sense, if our reality somehow forbade certain types of interaction between us and the ‘world’, it would cease to be real. The world, as seen from an information perspective, is a massive space/concept/thing for interactivity, and interaction between the ‘things’ within the world builds and evolves the form of the world itself.

Web 2.0, in that sense, is the beginning of a virtual world that builds upon human interactivity rather than a superficial (though still quite important) reliance on resembling the physical characteristics of the real. And the real change being brought to the general population by the advent of web 2.0 thought is the enlargement of perspectives on the real world brought on by interactions with other human nodes within the virtual world. I am not suggesting that people are somehow becoming more conscious. Just as in my old experience with the computer game Deus Ex, where seeing certain kinds of ideas applied to a virtual world left an impression of the impact of such ideas on a rapidly prototyping, global world, the population of this world is becoming increasingly aware of the true global consequences of their own and others’ actions and thoughts. It is the awareness that in this highly networked world, science, industry, economics, and politics all walk hand in hand as ‘ideas’ and their currencies, a single change in one sector of one corner of the world giving birth to other events on the opposite corner of the globe in an entirely different field of ideas. It is the beginning of an understanding of the malleability of the human world and its thought.

I started by remembering my experience with an old computer game, and ended up talking about virtual reality, the human network, and the changes of the world. I hope I didn’t confuse you too much. This is what I call ‘taking a walk’, where I begin with one thought and its conclusions and apply them to different yet related thoughts to arrive at interesting ideas. In case you are wondering about the game itself, it seems they are giving it away for free now. Go grab it and spend some time with it. It’s still fun after all these years.