Barcode the World

I’ve always been curious about DNA barcoding. Interest in wide-scale DNA barcoding exercises has been around for a long time, in part because of the potential for amateur scientists to contribute to the sciences using relatively minimal and easily obtained equipment and reagents. There have been some high-profile events and articles involving DNA barcoding applied to everyday life in recent memory, like the infamous ‘sushi-gate’ incident. Yet how many people really know what it is, and how many have a clear understanding of how to do it? I certainly was clueless for a long time.

It’s a little weird now that I think about it. Despite doing tons of PCR reactions day in and day out at the Genspace lab for one reason or another, I never tried to dig into what exactly DNA barcoding entails in its visceral, barcoded details. Well, recently some of our Genspace members, including yours truly, went on a sort of field trip to the Harlem DNA Lab (situated within a junior high school in Harlem) for a day-long DNA barcoding workshop in preparation for the upcoming NYC Urban Barcode Project.

And the process couldn’t be easier. In a nutshell, it just involves amplifying a specific segment of DNA from a sample organism and sending it to Genewiz for sequencing. The specific DNA segment to be amplified differs slightly depending on the kind of organism (is it a fish? A plant? An insect?), but in the case of most animals you use a portion of the mitochondrial genome called cytochrome c oxidase subunit I (COI) as the barcoding region. The mitochondrial genome (something I’ve been working with a lot for the past few months, ironically enough) is ideal for this sort of genetic species identification because it hits the sweet spot between homogeneity and differentiation within related branches of a phylum, thanks to its mutation rate and the fact that mtDNA is passed down only through the maternal line.

If you’re interested in performing your own DNA barcoding experiment outside a regular lab setting or an official competition, you can: PDFs of the requisite primer sequences are already online, and you can order the primers straight from places like IDT. Specific protocols for running the PCR and prepping samples differ from place to place (I’m still looking for that perfectly optimized protocol), but what you are doing is a basic PCR amplification of a specific part of the mitochondrial genome. So when push comes to shove, I’m sure a simple Chelex-based DNA extraction (crush the sample and pop it in with Chelex beads for 10 minutes at ~99 °C, centrifuge at 13,000 rpm for about a minute, and take the supernatant), combined with primers and a PCR master mix or GE PCR beads (which already contain pre-made Taq polymerase and buffer mixtures for optimal performance), will work just as well, provided the sample is fresh enough. I think I’m going to run some experiments with the materials we already have at the Genspace lab and post the results later on. Once we put together a library of verified barcoding primers, we should be able to do some very interesting projects and classes with the NYC biology community at large.
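For my own notes, here is the bench workflow written out as a minimal Python sketch. The Chelex steps come from the description above, but the thermocycler temperatures, times, and cycle count are illustrative placeholders rather than the workshop’s actual protocol, and the annealing temperature in particular depends on the primer pair.

```python
# Minimal sketch of the barcoding bench workflow written as plain data.
# NOTE: the thermocycler temperatures, times, and cycle count below are
# illustrative placeholders, NOT the workshop's actual protocol.

CHELEX_EXTRACTION = [
    "Crush a small piece of fresh tissue in a tube with Chelex beads",
    "Incubate ~10 minutes at ~99 C to lyse cells and release DNA",
    "Centrifuge ~1 minute at ~13,000 rpm",
    "Pipette off the supernatant to use as crude PCR template",
]

# Hypothetical COI amplification profile: (temperature in C, seconds)
PCR_PROFILE = {
    "initial_denature": (94, 60),
    "cycles": 35,
    "denature": (94, 30),
    "anneal": (52, 40),    # depends entirely on the primer pair
    "extend": (72, 60),
    "final_extend": (72, 600),
}

def describe_run(profile):
    """Print a human-readable summary of the PCR program."""
    print("Initial denature: %d C for %d s" % profile["initial_denature"])
    for step in ("denature", "anneal", "extend"):
        temp, secs = profile[step]
        print("  %s: %d C for %d s (x%d cycles)" % (step, temp, secs, profile["cycles"]))
    print("Final extension: %d C for %d s" % profile["final_extend"])

if __name__ == "__main__":
    for step in CHELEX_EXTRACTION:
        print("-", step)
    describe_run(PCR_PROFILE)
```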

During the barcoding workshop we had a chance to pick out our own samples and run through the barcoding process with the instructors. I picked regular house ants, some random plant Ellen brought from her garden, and a YFP-producing zebrafish that had been dead for some time (it’s a long story). I went through the DNA extraction, purification, and PCR process outlined briefly above, using the appropriate primers (for students participating in the competition, the Dolan DNA Learning Center & Harlem DNA Lab will provide the kits for free!). Here’s a picture of the gel we ended up with, stained with SYBR Green (thanks, Oliver!).

Now I seem to have misplaced the list of what’s in each lane, but the point is that all the barcoding amplifications worked except for the transgenic zebrafish. And it’s not just me; the transgenic fish samples prepared by everyone else failed as well, something I can only attribute to the condition of the sample at the time of the experiment. You see, when living things die, cells lose structural integrity and rupture all over the place, mixing the DNA within the cells with all kinds of junk and nucleases that degrade the sequence. Considering the fish was stinking to high heaven by the time we got it to the lab, that certainly sounds like a likely scenario to me.

All the other samples worked beautifully, and we prepared roughly 10 µl aliquots of each PCR product and sent them to Genewiz to get sequenced (the same Genewiz I got my mitochondrial DNA sequence from). They’ll be getting back to us within a few days with sequence data we can feed into public databases of DNA barcodes to determine what kind of organisms they are.
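The identification step itself is just a database search once the sequence comes back. Here is a minimal sketch of one way to do it with Biopython’s online BLAST interface, assuming the returned read is saved as a FASTA file (the filename is made up); the Barcode of Life Data System (BOLD) has its own web interface, so this is just one option.

```python
# Minimal sketch: identify a returned barcode sequence by BLASTing it
# against NCBI's nucleotide database with Biopython.
# Assumes the sequencing result is saved as "ant_coi.fasta" (made-up name).
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("ant_coi.fasta", "fasta")

# Submit the sequence to NCBI's online BLAST service (blastn vs. nt).
result_handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))
blast_record = NCBIXML.read(result_handle)

# Print the top few hits: the species names in the titles are the answer.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    print(f"{alignment.title[:70]}  e-value={hsp.expect:.2g}")
```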

People always talk about how the field of biotechnology is advancing by leaps and bounds, and how infrastructural developments like massive DNA sequencing centers offering cheap sequencing will change how most people view life and themselves. For a person not previously versed in biology like myself, this was a great opportunity to come face to face with the capacity of people outside traditional academia to contribute to the sciences, using largely off-the-shelf technologies and public databases. The entire process of obtaining a sample, amplifying a specific piece of its genome, and getting it sequenced probably cost me about $5 in materials. Think about that. Five dollars to gain some level of insight into the genetic makeup of an unknown organism, open to everyone. Although this is nowhere near the kind of thing we can do with true deep sequencing, that day is coming, and it will certainly make for a very interesting world.

If you’re interested in learning more about the NYC Urban Barcode Project or the DNA barcoding process in general, feel free to contact me at sung at genspace dot org. Genspace is one of the sponsors of the NYC Urban Barcode Project, and we are looking forward to input and participation from students and teachers around the city!

Is it game night yet?

It’s Thursday night. Just one more day to plow through until you reach Friday night with all its movies and drinks. Well, we can’t tell you how to speed up time, but we can tell you how to make it feel like it’s going faster: play computer games.

Now, we are talking about Genspace, and we do have a bit of a reputation to maintain. So as much as I would like to recommend that everyone get cracking on Battle.net with StarCraft 2, we’ll have to make do with something different: a computer game with science in it.

It’s called Phylo, and you can find it here. Phylo is entirely browser based (Flash based, to be specific; sorry to disappoint all my iPad-toting readers) and doesn’t require any serious computing muscle on the player’s end. I’ve been playing it for the last hour or so, and it’s an odd piece of work. On the surface the game follows the basic rules of the pattern-matching casual games you might be familiar with, like Bejeweled. Yet the experience of playing it feels far more complex than that, and I don’t necessarily mean that in a bad way. Also, there’s a real benefit to playing this game in your spare time, other than gaining the l33t skills to pwn the n00bs with.

You see, Phylo is ‘a human computing framework for comparative genomics.’ Basically, it gives you real multiple sequence alignment problems represented as four colors of blocks scattered on a grid. And of course, budding bio-enthusiasts like us know what’s up when a science program gives us four of anything: they represent the nucleotides. As you match same-colored blocks with each other, you contribute some of your brain power to finding aligned regions between different genes. If you misalign blocks you lose a point, and if you open gaps between blocks (which represent mutations) you lose a lot of points. You gain points by lining up same-colored blocks in a vertical column, and you need a certain number of points to pass a level or to get another gene to align against your existing sequence. This is an abstract optimization problem that is usually handled by complex computer algorithms and lots of processing power, and it becomes prohibitively expensive to brute-force. The authors of the program hope to use human-computer interaction on a large scale to come up with better alignment heuristics.
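To make the scoring idea concrete, here is a toy Python sketch of column-by-column alignment scoring in the same spirit: reward matches, penalize mismatches, and penalize gaps heavily. The point values are made up for illustration and are not Phylo’s actual scoring.

```python
# Toy multiple-sequence-alignment scorer in the spirit of Phylo's rules.
# The point values are illustrative, not Phylo's actual scoring.
MATCH, MISMATCH, GAP = +1, -1, -4  # hypothetical weights

def column_score(column):
    """Score one vertical column of an alignment ('-' marks a gap)."""
    score = 0
    bases = [b for b in column if b != "-"]
    score += GAP * (len(column) - len(bases))      # penalize every gap
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):         # compare all pairs in the column
            score += MATCH if bases[i] == bases[j] else MISMATCH
    return score

def alignment_score(rows):
    """Sum column scores over an alignment given as equal-length strings."""
    return sum(column_score(col) for col in zip(*rows))

if __name__ == "__main__":
    aligned = ["ACGT-A", "ACGTTA", "AC-TTA"]
    print(alignment_score(aligned))
```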

 

This is how Phylo looks
The logic is sound. After all, the usefulness of the human ability to find patterns in complex biological problems has already been proven with the fold.it protein-folding puzzle game and the Nature paper that came out of it. Guess who’s a co-author of a Nature paper. 😉
 

This is how it might look on a scientist’s computer
I’ve played around with the DNA sequence associated with idiopathic generalized epilepsy; 160 other people had already attempted the puzzle, and 146 of them failed. And therein lies the problem of biology turned into a game. You see, unlike regular puzzle games like Bejeweled or Tetris, not everything will fit together with perfect logical coherency. Granted, there are a few techniques you can use to treat this like any other game (for example, don’t waste your time moving around single blocks in the beginning stages; crush them together into a single group for maximum points in the shortest amount of time), but the fact is that not everything will fit, and it can be rather jarring for a beginner to figure out what he or she is doing right, since there isn’t any satisfying feedback for a ‘correct’ alignment. It can’t be helped, though. This is science, and no one knows the correct answer in advance to detect it and give you feedback. Maybe that’s the whole reason you should play this game. After all, would you play a match in StarCraft with a predetermined outcome?
I, for one, am looking forward to a future where all games contribute to scientific discovery in some shape or form.

8bit tools of science

According to the founder of Playpower.org, more people in India have TVs at home than have tap water. And there are $12 computers everywhere that use those TVs as monitors, like so many of the personal computers of old.

Now consider that this hardware, based on older 8-bit chip designs, and the software that runs on it are more or less in the public domain. We are looking at a significant portion of the entire human population poised on the verge of hackerdom. And it’s not just typing education and language training. We could build an entirely new framework for education in third-world urban areas using existing tools of education and science. Imagine being able to design an 8-bit program for those machines (some of them can actually get on the internet) that pulls data from research resources of all kinds (BLAST, Wolfram Alpha, etc.) and scales it down to a form those machines, and the people using them, can understand. We already have beta versions of synthetic biology CAD programs that undergraduates regularly use for their school assignments and private projects, so it’s not that far off in the future.
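As a rough sketch of the ‘scale it down’ idea, here is some Python that pulls a record from NCBI’s public E-utilities and reflows it into 40-character lines, which is roughly what an 8-bit machine driving a TV can display. In practice a proxy server would do the fetching and send only the trimmed text to the little computer; the accession used here (the human mitochondrial reference genome) is just an example.

```python
# Rough sketch of the "scale it down" idea: fetch data from a public
# resource and reflow it for a tiny 40-column display. A real deployment
# would run this on a proxy server, not on the 8-bit machine itself.
import textwrap
import urllib.request

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_fasta(accession):
    """Fetch a sequence record from NCBI as plain FASTA text."""
    url = f"{EFETCH}?db=nucleotide&id={accession}&rettype=fasta&retmode=text"
    with urllib.request.urlopen(url) as response:
        return response.read().decode()

def scale_down(text, width=40, max_lines=12):
    """Reflow and truncate text so it fits a 40-column TV display."""
    lines = textwrap.wrap(text.replace("\n", " "), width=width)
    return "\n".join(lines[:max_lines])

if __name__ == "__main__":
    # NC_012920 is the human mitochondrial reference genome; any record works.
    print(scale_down(fetch_fasta("NC_012920")))
```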

Will a child who can program computers and pull data on SNP variations to do his or her own genotyping, using soon-to-be widely available open-source PCR machines, still languish in poverty and despair? I don’t know. I’d sure like to find out, though.

First DIYBio rant of the year

I can’t believe I’m uploading the first post of the year in March. Still, better late than never to show people that I’m still alive and kicking. While I haven’t been able to think about personal writing due to the deluge of job- and school-related stuff, I’ll try to keep things more organized in the coming months. If half of what I hope for comes true, this coming year will be the most awesome one so far, for myself and for the other activities and organizations I believe in.

This post is, like it says in the title, a rant about what DIYBio ought to be and how I plan to do my part this year. It was also written on my BlackBerry and later copy-pasted into WordPress… I just hope half a year of writing boring technical stuff didn’t burn out the creative-writing part of my brain. I’ll be using it a lot from now on.

The year 2009 was a series of exciting experiences, with ISFF, DIYBio, and the iGEM jamboree. I’m trying to carry that into this year without losing momentum, through activities like a synthetic biology crash course for beginners, various internships, and private research projects. Hopefully I’ll have more time to write about them in the coming months.

I’ve been thinking a lot about DIYBio, about what it’s supposed to be and what it needs, and I think I’ve arrived at some sort of conclusion.

DIYBio must inevitably find a way to bridge the gap between enthusiastic members of the public and the tools and devices that make synthetic biology feasible. While there are many members out there who seem to be working toward specific gadgets and other physical tools of biological experimentation, I think we still need something more.

DIY or not, biology is a science. If we want to bring hard science to the public with the aid of ever-cheaper yet sophisticated lab equipment, we need to look beyond the hardware.

I’ve written quite a few times about Alan Kay (on this blog and elsewhere), the pioneer of the modern computer programming/interface paradigm, and his relationship with synthetic biology… There are mountains of information on him and his work relevant to the discussion of models in biology and how they might be used to organize information, with an emphasis on education as a sort of interface between data and the human mind… All of which is beyond the scope of this particular post.

The important point is this: I believe the true potential of DIYBio is to bridge the gap between the complexity of bleeding-edge science and the innate human ability to learn and tinker. And the main tool for making that happen is ideas, not low-cost lab tools (the costs of lab tools are coming down anyway; why DIY every single appliance when you can buy a used one that works just as well, oftentimes even better?). While low-cost lab implementations are important, the true future lies in the ability to abstract and package complexity into something much more manageable.

Some people seem to have difficulty understanding what I’m trying to say from the few times I’ve tried to talk about it… I’m talking about reviving and revising the notion of knowledge engineering, something that was supposed to be the cornerstone of a true computer revolution that never really took off (Google and Wikipedia are remnants of the original idea).

Synthetic biology is a good example of what knowledge engineering coupled with physical science might be able to achieve. None of the specific pieces forming what we perceive as synthetic biology are new. They’ve been around for quite a while in one form or another, following a course of gradual improvement rather than truly new scientific advances.

Synthetic biology at heart is about how dedicated professionals can organize scattered pieces of knowledge into something that potentially allows ambitious undergraduate students to undertake projects that would have been beyond their ability a decade ago. Never mind the actual success rate of their projects for now. The very fact that those students are able to plan for the future with a much broader sphere of possibility is significant enough.

And why stop with undergraduates? Wouldn’t it be possible to have motivated high school students design something that at least works on paper? Wouldn’t it be possible to build a conceptual framework so that those kids can at least discuss possible future projects on the back of a napkin without resorting to sci-fi?

If DIYBio is to do what it originally set out to do, we need to look beyond gadgets and tools. We need to think about ideas and how they come together… We need to make biology easier, not just cheaper. That is the mantra that will drive my DIYBio-related activities this year.

Alan Kay applied to synthetic biology, and other stuff.

This is something I wrote up a few days ago, probably around four or so in the morning. So take whatever it says with caution.

I know I should be writing about some other things as well, like how DIYBio NYC might be amazingly close to getting a real lab space, or how I’m prepping to stop by this year’s iGEM jamboree. I also have pictures from this year’s major DIYBio NYC event, where we set up a stall at the NYC Greenmarket and extracted DNA from fresh produce with common household materials (with passers-by, of course). Each of those things would probably make for some lengthy and interesting reading, and the list goes on (my life is actually kind of exciting right now). Yet whenever I find the time to write something down: nada. Nothing. My mind just shuts down and nothing I can commit to paper or the keyboard seems good enough.

Tonight though, aided by a weird bout of insomnia, I’ll just write down something I’ve been meaning to say for a long time.

I’ve been looking into the history of computing and computer languages recently. I’ve always had some level of interest in computers, not just the spiffy brand-new muscle machines but also in what most people would refer to as ‘retrocomputing’ (I once ended up practicing some Ada because of that. Ugh), which is a story for another time. It’s not that I think the old ways of computing were better than what we have now (protected memory FTW). It’s just that it’s much easier to trace the evolution of the concept of computing when you see beyond the immediate commercial products.

Synthetic biology is effectively the pursuit of engineering biological organisms. Biological organisms are built on a somewhat unified information storage and processing system that has quite a few parallels to mechanical, computerized systems. I’ve been wondering whether it would be possible to predict the future development of synthetic biology by looking at how computer programming languages evolved (since both deal with information processing systems applied to a physical medium). Maybe it would be possible to predict some of the pitfalls inherent in developing a complex programmable information processing system before synthetic biology runs into them. Maybe we could give synthetic biology, within mere years, a conceptual framework that would have taken decades to mature naturally.

While I was rummaging through texts both in real life and on the web (with many of the promising links on the web leading to dead ends and 404s), I ran into a programming paradigm and environment I was only superficially familiar with before: Smalltalk and Squeak, both brainchildren of the computing pioneer Alan Kay.

Here’s an excerpt from a biography of Alan Kay I found on the net (I can’t find the website right now. I swear I’ll edit it in later, when my brain’s actually working!):

“Alan Kay postulated that the ideal computer would function like a living organism; each “cell” would behave in accord with others to accomplish an end goal but would also be able to function autonomously. Cells could also regroup themselves in order to attack another problem or handle another function.”

This is the basic philosophy behind Smalltalk/Squeak and the object-oriented programming paradigm. It is no coincidence that Alan Kay’s vision of the ideal computer language and computing environment would take the form of a biological allegory, since he came from a molecular biology background.
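To make the allegory concrete, here is a tiny Python sketch (not Smalltalk, and not anything Kay wrote) of objects behaving like ‘cells’ that keep their own state and interact only by sending messages; all of the class and message names are made up for illustration.

```python
# Tiny illustration of the "objects as cells" idea: each object keeps its
# own state and reacts only to the messages it receives. Names are made up.
class Cell:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message, payload=None):
        """Everything happens through messages, never by reaching inside."""
        self.inbox.append((message, payload))

    def step(self):
        """Process queued messages autonomously, one at a time."""
        while self.inbox:
            message, payload = self.inbox.pop(0)
            if message == "greet":
                print(f"{self.name} received a greeting from {payload}")

class Tissue:
    """A group of cells that can be regrouped to tackle a new task."""
    def __init__(self, cells):
        self.cells = list(cells)

    def broadcast(self, message, payload=None):
        for cell in self.cells:
            cell.receive(message, payload)

    def step(self):
        for cell in self.cells:
            cell.step()

if __name__ == "__main__":
    tissue = Tissue([Cell("a"), Cell("b")])
    tissue.broadcast("greet", payload="the outside world")
    tissue.step()
```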

While reading through the history of different computing paradigms to figure out how they might be applied to synthetic biology, I found something else that is awesome and perhaps a little heartwarming. Throughout his life as a computing pioneer, Alan Kay held onto the belief that the ideal computing platform won’t be the platform that crunches numbers the fastest. It will be a platform that can be integrated into the educational life of the user through ease of manipulation and control. The ideal computing platform should be hackable because it makes logical sense for it to be so.

Can we say the same of synthetic biology? Perhaps not. The direct comparison of a complex biological system to computerized circuits can only take us so far. Yet I can’t shake the nagging feeling that synthetic biology might be looking at some very unique opportunities precisely because it is different from regular electronic systems, with the documents from the early days of computing and programming already here for our perusal.

A good, elegant system that allows programmable extension must at the same time be easy to learn, since the one inevitably leads to the other. And there are classes of systems that both run better and are learned more easily than other systems. This might become something of an issue in how synthetic biology parts, devices, and systems are put together in the future, as the capacity of synthetic biologists to handle complex systems increases.

I think it might be possible to pursue this idea further. As it stands, it is nothing more than an interesting conceptual parallel without substantial scientific reasoning behind it.

Which is why I should get myself to learn Smalltalk/Squeak sometime in the future. Maybe I should knock on the doors of the hackerspaces in the city and see if anyone’s willing to mentor me.

Lecture and presentation

Long time no see in the blogosphere. I’ve been busy during the summer with all the usual stuff, mostly learning and working. I’m glad to say that I’ve almost finished Exploring Complexity: An Introduction over the summer, and I was even able to get some of the mathematics out of the way. I think I managed to model a pretty neat animation based on some of the methods demonstrated in the book, and I’ll try to post it soon.

I’ve also been saving up to go skydiving before the summer’s over… I’ve always dreamed of the skies (my first choice in college was majoring in aeronautics; I never quite made it, though), so it’s only natural that I do something that involves full contact with the air up there. Living on a student budget means I have to work some extra jobs for that, though. Some a bit crazier than others.

And of course, there’s always DIYBio NYC. I’ve been trying to come up with some decent ideas, but everything I can think of at the moment mostly revolves around the kind of project that would require a dedicated lab space. All I can do for now is prepare, through independent study, for the inevitable day when we’ll get access to one. Some of the things I talked about with the members during a recent meeting, regarding the state of the group and the processes involved in constructing artificial vesicles, were very enlightening, and I intend to do a full-length post about that sometime in the near future.

On to the main post…

During today’s twitter and identi.ca browsing I happened upon some interesting resources for scientists and potential scientists.

The first one is a collection of links and documents on how to prepare a scientific presentation. I haven’t had time to read through it yet, but I know some of the posts on the list, and if the rest are like the ones I know, they are definitely worth a read, especially for an aspiring scientist like me. It’s amazing just how many things are involved in preparing a halfway decent presentation, and how most people are just plain terrible at it. I’ve sat through my share of lectures, symposiums, and conferences, and there’s nothing more painful than a horrible presentation with an irrational PowerPoint deck.

The second resource I want to share with you is OSgrid. It’s a virtual environment tool like Second Life, except that it’s open source. It’s relatively simple to download the environment and run it off your own servers, though that also means you ‘need’ to run it on your own server for the whole thing to work. I’m really interested in finding out how this environment could be used for scientific research. Perhaps virtual laboratories running off university computer clusters? Open education tools like a virtual university? A way for scientists to interact with their own 3D datasets in a clean and intuitive manner? There are plenty of possibilities out there.

… I can also think of a few ways to utilize some of the stuff for the DIYBio community.

Bioinformatics Misconceptions

I just read an interesting paper on three common misconceptions people have about the field of bioinformatics. I’ve been eyeing bioinformatics as a possible venue for bringing more people into DIY science, so I took some notes for future reference. It turns out that I’ve been suffering from the same hype and illusions about the field as the vast majority of non-specialists out there.

Simply put, the major misunderstandings about bioinformatics can be narrowed down to three myths permeating science culture, according to the author.

Myth #1: anybody can do this
- bioinformatics is inexpensive
- bioinformatics software is free

Myth #2: you’ll always need an experiment
- bioinformatics is a rapid-publication field
- all bioinformatics does is generate testable predictions

Myth #3: this is new technology, but technology nevertheless
- bioinformatics is a new field
- bioinformatics is an application discipline

*FYI the statements under the Myth headings are the ones the author refutes in his writing.

Myth #1 is that everybody can do bioinformatics, using only the cheap or open-source tools available off the net. The author admits that this is indeed the case to a certain extent. However, once you get into any serious large-scale research about or involving bioinformatics, those initial assumptions will prove to be a burden at the organizational level. As the author elaborates in later parts, bioinformatics is a field of scientific research in its own right, not one subservient to conventional wet-lab biology. Indeed, while reading the article I was under the impression that the main thrust of the whole piece is that people do not realize bioinformatics is a field of scientific research with its own goals and complications, very unlike the layman’s assumption that bioinformatics is just biology done with computers, or the application of computerized tools to wet-lab biological research the way researchers might use a word processor or LaTeX to type up their reports.

Personally, I found it a little disheartening that bioinformatics research is just as complicated as any other field of scientific research to implement in a DIY setting, possibly more so depending on what the amateur scientist is trying to do. But then I can only blame my naivety. The author also makes the point that bioinformatics can be very expensive to get into, due to a number of proprietary software packages and services that must be purchased (he never went into much detail on that; I guess it differs according to the theme of the research?) and the resources needed to write and maintain code for the project. It makes sense when you think about it. While it would be possible to come up with some bioinformatics applications in-house, past a certain level it becomes vastly cheaper to simply buy a number of components and use in-house resources to link them and tune them into giving the results the project needs (which isn’t easy to begin with).

Of course, I still think that we can, and maybe should, use some approaches from bioinformatics to provide an interesting DIY science framework to the public, like the Annotathon metagenome annotation project that has been open to the public for a while now. I’m just glad I got a chance to hear about some of the intricacies of the field from someone already working with the tools of the trade.

While I now understand a bit more about what the field of bioinformatics is about, I’m still unsure what kind of project idea I can come up with for a DIYBio curriculum using the technology… It’s a problem I’ve been running into a lot lately in doing DIYBio-related work. I know there are tools and tutorials out there, but I just can’t seem to put them together into a coherent whole. DIYBio needs some sort of project that turns knowledge into skill… More on that later.

Bruce Sterling on education

I’m taking a bit of a break today, which leaves me some time to indulge in all sorts of creative yet possibly meaningless ventures, like Mathematica visualization, studying the Processing language, and scrounging for interesting bits on the net.

While on my usual sojourn through the infosphere this morning, I found an interesting passage on the subject of education by someone I assume to be the Bruce Sterling (here’s the original website I found it on). The passage was written in response to a question, asked earlier on that webpage, about what he would do as ‘head honcho’ of the Ministry of Education. A little too close to the truth for comfort, I think. And people actually wonder why public education systems all over the world are hitting rock bottom.

If I were head honcho of the Ministry of Education,
my job would not be to make schools work as learning
environments.  Basically, my job would be to make
school-age children walk in straight lines and
salute the flag as I freed up the productive
capacity of their parents.

If schools were learning environments, all the smart
kids would clear out in half an hour.  Then they'd
go home and demand attention from Mom and Dad.
That just can't be allowed.

Science in Apple?

Like most people, I was tuned into the WWDC keynote address on Monday. Most of the stuff in the keynote was more or less expected, including the iPhone/dev kit and OS X 10.6. However, the way they were presented was intriguing, to say the least… to this scientist-in-training, at least.

First, the iPhone. The inclusion of medical applications in the presentation was the real eye-catcher of the show for me (other than the $199 price point for the iPhone, but that was expected). Why go through the trouble of including such specialist applications in a presentation aimed at developers and consumer enthusiasts? Of course, it would be nice to present applications from a variety of fields to showcase the capacity of the iPhone and ‘grow the image,’ but something tells me that a medical imaging application and a med-school study guide are probably not the most interesting of the applications submitted to Apple in time for WWDC. Based on circumstantial evidence, I think Apple planned to have a medical application presentation included from the beginning, and I think they wanted more than one in order to showcase the professional academic muscle of the iPhone. The very fact that they took the trouble to include a testimonial from Genentech regarding the enterprise functions of the iPhone seems to support this assumption.

Second, OS X 10.6, also known as Snow Leopard. The primary idea of the OS seems to be out-of-the-box utilization of the multi-core processors that are mainstream these days. Most of us run dual-core processors right now, and it wouldn’t be far-fetched to think that we (and by we, I mean normal computer users; I hear there are already quite a number of quad-core users in more specialized communities) might be running quad-core systems a year or two from now. It’s a reasonable move, considering that no OS of any flavor seems to be taking noticeable advantage of the 64-bit architecture that has been around forever. Apparently Apple is calling its system for utilizing the expected slew of multi-core processors Grand Central (after the beautiful Grand Central in my hometown, no doubt), which will no doubt form the centerpiece of OS X 10.6 when it is released a year or so from now.

Is it pushing it too far to say that this might as well be a move on Apple’s part to appeal to the professional scientific community, which has a real and pressing need for more computing power? Distributed computing projects like BOINC and Folding@home, for example (I’m an active participant in both; I urge you to join up if you think you have some CPU cycles to spare). My Intel Core 2 Duo 2.3 GHz processor isn’t enough to complete complex work cycles in any reasonable time frame. What if we could run more simulations and calculations on our own laptops and desktops for faster results? It’s no secret that Mathematica and Apple seem to be on favorable terms. Apple’s ethos in this particular attempt will be simple: keep the computer out of the scientist’s way. Just plug in the numbers and get the results, with no worries about 64-bit support or any complex refitting of scientific programs (unlike what most people seem to think, studying physics or any other branch of science doesn’t make you good at computer science; those are entirely different fields, and physicists are merely proficient at the limited skills needed for physics computing). Who wouldn’t want that?
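Grand Central is Apple’s own technology, but the underlying idea (farming independent chunks of work out to however many cores the machine has) is easy to sketch in plain Python with the standard multiprocessing module; the workload below is a made-up stand-in for a real simulation.

```python
# Minimal sketch of the multi-core idea: split an embarrassingly parallel
# workload across all available cores. The "simulate" function is a
# made-up stand-in for a real scientific computation.
from multiprocessing import Pool, cpu_count

def simulate(seed):
    """Pretend work unit: a crude numerical summation."""
    total = 0.0
    for i in range(1, 200_000):
        total += 1.0 / (i * i + seed)
    return total

if __name__ == "__main__":
    work_units = range(32)
    with Pool(processes=cpu_count()) as pool:   # one worker per core
        results = pool.map(simulate, work_units)
    print(f"{len(results)} work units finished on {cpu_count()} cores")
```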

Third, OpenCL (which stands for Open Computing Language). This part might as well be a dead giveaway of Apple’s company-wide strategy to woo the scientific community. OpenCL is a framework Apple is developing that would let developers use a computer’s GPU for tasks normally handled by the CPU. A few years ago, news of PS3 hardware being redirected toward mathematical calculation made the rounds, and I believe there were other cases where conventional graphics chipsets were used for complex physics calculations, with results that far surpassed what was possible using the CPU alone. It’s been such a long time that I’m somewhat surprised they are only now thinking of integrating this into the mainstream computer market. Mind you, this method of diverting the GPU to do CPU work was originally done to provide more muscle for physics simulations using conventional computer systems and components rather than specialized supercomputers. I do not foresee normal Apple-toting screenwriters and web surfers needing all that computing power anytime soon. If this is coming, it’s coming for us, the scientists, who need to crunch numbers most people haven’t even heard of.

If we put the three together with the assumption that Apple might be shooting for the scientific computing community, we have a possibly mobile computing platform with serious power (the MacBook Pro), able to run a variety of scientific programs (Mathematica, MATLAB, BLAST, etc.), with the built-in ability to sync with and be wirelessly controlled by a dedicated mobile phone with some serious computing power of its own (the iPhone plus community apps). So the actual computing can be done at home while the user receives output and sends input from the iPhone. Would this work? I think there are plenty of people doing similar things already. But there could be significant differences between a device that has essentially been hacked together and a series of devices designed from the beginning to work in conjunction. I see this as a very exciting development on the part of Apple and the computing industry in general.

Having a science-oriented Apple isn’t the only thing I’m excited about. Let me put it this way: the iPhone got people who never used text messages on conventional phones to text each other constantly. The iPhone also got people who never used the browsing capabilities of their conventional phones to browse the web. This is the problem, and the effect, of accessibility that I’ve mentioned in other posts on this blog. When people don’t do something, it might not be because they want it that way; it might be because there is an accessibility barrier between the individual and the activity. We complain about how people are no longer interested in the sciences and other higher academic pursuits. Maybe we’ve been unwittingly placing accessibility barriers on the path to higher education? If the idea of an accessibility barrier between the public and the sciences has a grain of truth in it, maybe this new direction of Apple’s can do for the sciences what it did for telephony. Especially with community-based distributed computing projects and the DIY mentality on the rise across a variety of scientific disciplines, biology in particular (the term synthetic biology itself isn’t even new anymore, despite the immaturity of the field), maybe I can hope for some sort of change in today’s somewhat disappointing state of affairs.

From virtual to real

I must admit, there was a time when I would play computer and video games late into the night. I was a wee lad back then, so impressionable and curious about the whole plethora of things in this universe, and the allure of virtual worlds to such a mind was just too sweet to resist. I gave a lot of thought to my condition during that phase of my life. Why would I be captivated by certain types of virtual reality? Is there something shared among the hundreds of different worlds constructed in different mediums (written, visual, and aural) that composes the fundamental idea of what an enjoyable world should be? Would the impression of such an ‘idea’ of the mysteriously attractive world be common to all human beings? Or only to human beings with certain memories and experiences? I would spend many days just thinking about the nature of all the virtual worlds imaginable by the human mind and their possible implications, while my hands went through the mechanical motions of controlling my representation on the display.

Deus Ex was a computer game created by the now-defunct Ion Storm that came out during the aforementioned impressionable period of my life. The game isn’t aesthetically pleasing by any stretch of the imagination; it’s gritty and ugly, in a very superficial and unintended kind of way. It is set in an imaginary near future where nanotechnology and artificial intelligence are just coming into full gear amid the financial and political turmoil of a new human age. Conspiracy theories based on some real-world conspiracy fads play an important role in the setting and the plot, and there is a lot of techno-jargon thrown around in the numerous conversations within the game world, which might add to its depth. Any way you look at it, Deus Ex is not a work of art, and it was never meant to be. Deus Ex was designed to be immersive: immersive as in realistic within the confines of the plot and the technological means available to execute that plot. Whatever Deus Ex was meant to be, it did its job, and it did it fantastically. It took itself just seriously enough to be immersive.

I have played and finished Deus Ex numerous times since the day it came out. The game had the semblance of a virtual world, just enough to be a better game, not enough to be a real virtual world, which was actually a good thing. I would figure out a number of different ways to achieve the objectives of specific stages and of the game as a whole, each of those paths gradually beginning to encompass approaches the designers of the game probably never intended in the first place: an early form of truly emergent gameplay in a digital medium. I can still remember a number of quotes and conversations from the game by heart, not through any diligent study, but simply through repeated exposure stemming from interest in the world itself. And to be perfectly honest, while I was aware of nanotechnology and its growing prominence before playing the game (I was a little precocious for my age), I began to truly comprehend what such technology could mean to the world and its people by seeing it applied within a virtual world built and maintained on fictional premises. It would not be far from the truth to say that my interest in the ‘industries’ of biology and other fields of science (my current ‘official’ pursuit being plasma physics, which is an entirely different field altogether) began with my introduction to this game… I place emphasis on the term ‘industry’ because it was through the application of the idea of technology within a virtual world (no matter how absurd it might be compared to the real one) that I began to grasp the requirements of science and its true impact on a modern human civilization of rapid prototyping and mass production. Yes, I came to learn that science affects the human world as a whole, just as the hand of economics reaches into the deepest pockets of the remotest corners of the globe, and that such permutations of ideas and information might have a reasonable pattern of causality behind them, forming a system of sorts. All this in my first year of high school, all this because I had seen it applied in a limited virtual world whose goal was to entertain, perhaps mindlessly.

People talk about Web 2.0 and web-based virtual reality (like Second Life) all the time, perhaps without grasping what they truly mean. To me, the changes on the web and its technical and semantic updates are merely superficial effects of the real change that is taking place right now: a change in the nature of the human network. I find I’m using the term ‘human network’ more often these days. The human network has been present since the very first moments of human civilization (perhaps even earlier, going back to the start of the human species), and it has the mathematical and sociological properties of networks that remain more or less the same at some compartmentalized level. The changes we are seeing in the emergence of Web 2.0 ideas and virtual realities merely reflect technological advances applied to the same ever-present human network that has been in place for as long as anyone can remember. At the core of Web 2.0 is the idea of user interactivity. What happens when there is freedom of interactivity between millions and billions of people? The medium providing the room for those interactions begins to take on a closer resemblance to the concept we call ‘the world.’ Forget reality. What is a ‘world’? What satisfies the definition of a ‘world’? The core of a ‘world,’ as it stands, happens to be a place where people can interact with the very components of the world itself and with each other. In that sense, if our reality somehow forbade certain types of interaction between us and the ‘world,’ it would cease to be real. The world, seen from an information perspective, is a massive space/concept/thing for interactivity, and interaction between the ‘things’ within the world builds and evolves the form of the world itself.

Web 2.0, in that sense, is the beginning of a virtual world that builds on human interactivity rather than a superficial (though still quite important) reliance on resembling the physical characteristics of the real. And the real change being brought to the general population by the advent of Web 2.0 thinking is the enlargement of their perspective on the real world through interactions with other human nodes within the virtual one. I am not suggesting that people are somehow becoming more conscious. Just as my old experience with Deus Ex showed how seeing certain kinds of ideas applied to a virtual world left an impression of the impact such ideas could have on a rapidly prototyping, global world, the population of this world is becoming increasingly aware of the true global consequences of their own and others’ actions and thoughts. It is the awareness that in this highly networked world, science, industry, economics, and politics all walk hand in hand as ‘ideas’ and their currencies, with a single change in one sector of one corner of the world giving birth to other events on the opposite corner of the globe, in an entirely different field of ideas. It is the beginning of an understanding of the malleability of the human world and its thought.

I started by remembering my experience with an old computer game, and ended up talking about virtual reality, the human network, and the changes in the world. I hope I didn’t confuse you too much. This is what I call ‘taking a walk’: I begin with one thought and its conclusions and apply them to different yet related thoughts to arrive at interesting ideas. In case you are wondering about the game itself, it seems they are giving it away for free now. Go grab it and spend some time with it. It’s still fun after all these years.