Joy of reading

As my acquaintances know well enough, I love reading. There is something peculiarly tempting about the activity of reading itself that appeals to me beyond the information, stories, and knowledge that can be gleaned from it, much like the way gaming attracts people of all ages beyond its aesthetic pleasures and possible brain-enhancing benefits.

Of course, my job as a full-time student makes reading a mandatory part of my life. And since my interests diverge across wildly different fields, the volumes I handle tend to be just as varied. Through all the time I’ve spent digging into these mazes of phantasm and ideas, I’ve come to notice something about the nature of my captivation with the act of reading. While I do enjoy working through information-dense texts, I much prefer well-written fiction with a somewhat classical setting and witty prose when I’m winding down. Strangely enough, the occasional bout of such ‘light reading’ helps me concentrate even better on academic texts and papers, and the performance boost is significant.

So I’m thinking of designing a reading schedule for myself, one that can satisfy my urge for reading and support my academic performance in one fell swoop. I remember reading through ‘Jonathan Strange and Mr. Norrell’ a few months ago. I was completely immersed in the story and characters of the mysterious yet whimsical world created by the gifted Susanna Clarke, and even now I can picture some of the book’s scenes in front of me as clear as daylight. I’ve already finished The City of Dreaming Books by Walter Moers, which I loved just as much. And I’m almost at the end of the Baroque Cycle, the three-volume series by Neal Stephenson, who I believe is a thoughtful yet humorous writer, with a certain charmingly irreverent attitude woven into every page of his books. His newest work, Anathem, is scheduled for release sometime in September, so I need to find a book to tide me over until then. Just what kind of book would satisfy the strange bibliophile in me? I prefer to hold a book in my hand while reading, absorbing the subtle shades of light and the texture of the pages, just as rich as the stories and characters themselves, the icing on the cake that is the activity of reading… So no ebooks or internet books at the moment. I get plenty of those from my school in the form of scientific papers I have to report on.

I’m burning through Amazon and LibraryThing pages right now, trying to find my next piece of leisure reading. I just hope I’ll find another amazing book soon. I feel like someone searching for water in the middle of a vast desert whose borders stretch on for the length of a lifetime.

Your name to the moon!

A little note on something I just came across.

The wonderful folks at NASA (despite what people say, I still believe in the dream!) have decided to send a bunch of names to the moon on their new Lunar Reconnaissance Orbiter. All you have to do is visit the site, click on the link titled ‘Send Your Name to the Moon,’ and enter your name when prompted. They also have a neat certificate of participation on the website, which you can print or download as a PDF to show off to family and friends.

Since a manned flight to the moon (at least for me) won’t be happening any time soon, we might as well whet our appetites with this little demo of the moon-going experience. Tell your friends and families. It doesn’t hurt to have your name sent up to the moon. And we don’t want to disappoint the nice people who thought to do this, do we?

As for me, I have to go get some green tea ice cream while humming along to ‘Fly Me to the Moon.’

Science in Apple?

Like most people, I was tuned into the WWDC keynote address on Monday. Most of the stuff in the keynote was more or less expected, including the iPhone and its dev kit and OS X 10.6. However, the way they were presented was intriguing, to say the least… at least to this scientist-in-training.

First, the iPhone. The inclusion of medical applications in the presentation was the real eye-catcher of the show for me (other than the $199 price point, but that was expected). Why go through the trouble of including such specialist applications in a presentation aimed at developers and consumer enthusiasts? Of course, it’s nice to present applications from a variety of fields to showcase the capacity of the iPhone and ‘grow the image,’ but something tells me that a medical imaging application and a med-school study guide are probably not the most interesting of the applications submitted to Apple in time for WWDC. Based on circumstantial evidence, I think Apple planned from the beginning to include a medical application in the presentation, and wanted more than one in order to showcase the professional academic muscle of the iPhone. The fact that they took the trouble to include a testimony from Genentech regarding the iPhone’s enterprise functions seems to support this assumption.

Second, OS X 10.6, also known as Snow Leopard. The primary idea of the OS seems to be out-of-the-box utilization of the multi-core processors that are mainstream these days. Most of us run dual-core processors right now, and it wouldn’t be far-fetched to think that we (and by ‘we’ I mean normal computer users; I hear there are already quite a number of quad-core users in more specialized communities) might well be running quad-core systems a year or two from now. It’s a reasonable move, considering that no OS of any flavor seems to be taking noticeable advantage of the 64-bit architecture that has been around forever. Apparently Apple is calling its system for harnessing the expected slew of multi-core processors Grand Central (after the beautiful Grand Central Terminal in my hometown, no doubt), and it will no doubt form the cornerstone of the new OS X 10.6 when it is released a year or so from now. Is it pushing it too far to say that this might be a move on Apple’s part to appeal to the professional scientific community, which has a real and pressing need for more computing power? Think of distributed computing projects like BOINC and Folding@home, for example (I am an active participant in both; I urge you to join up if you have some CPU cycles to spare). My 2.3 GHz Intel Core 2 Duo isn’t enough to complete complex work units in any reasonable frame of time. What if we could run more simulations and calculations on our own laptops and desktops for faster results? It’s no secret that Mathematica and Apple seem to be on friendly terms. Apple’s ethos in this particular attempt will be simple: keep the computer out of the scientists’ way. Just plug in the numbers and get the results, with no worrying about 64-bit support or complex refitting of scientific programs (contrary to what most people seem to think, studying physics or any other branch of science doesn’t make you good at computer science; those are entirely different fields! Physicists are merely proficient at the limited set of skills needed for physics computing). Who wouldn’t want that?
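Nobody outside Cupertino has seen Grand Central’s actual interface yet, so take this as nothing more than a sketch of the general idea: independent work units farmed out across every available core. Here it is in Python, with the simulate function standing in for a real work unit:

```python
import multiprocessing as mp

def simulate(seed):
    # Stand-in for one independent work unit; a real one might be
    # a Folding@home-style slice of a much larger simulation.
    x = seed
    for _ in range(500_000):
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
    return x

if __name__ == '__main__':
    # A pool sized to the machine spreads the work units across
    # every core it finds, whether two, four, or more.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(simulate, range(32))
    print(len(results), 'work units finished')
```

The promise of something like Grand Central, as I read it, is that the operating system rather than the programmer does this kind of bookkeeping.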

Third, OpenCL (which stands for Open Computing Language). This part might be a dead giveaway of Apple’s company-wide strategy to woo the scientific community. OpenCL is a framework Apple is developing that would allow developers to use the GPU of a computer for tasks normally handled by the CPU. A few years ago, news of the PS3’s GPU being redirected toward mathematical calculation made the rounds, and I believe there were other cases where conventional graphics chipsets were put to work on complex physics calculations, with results that far surpassed what was possible using the CPU alone. It has been such a long time that I am somewhat surprised they are only now thinking of bringing the idea to the mainstream computer market. Mind you, this method of diverting the GPU to do CPU work was originally devised to provide more muscle for physics simulations on conventional computer systems and components rather than specialized supercomputers. I do not foresee normal Apple-toting screenwriters and web surfers needing all that computing power anytime soon. If this is coming, it’s coming for us, the scientists, who need to crunch numbers most people haven’t even heard of.
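OpenCL itself hasn’t shipped, so I can’t show the real thing; but the GPGPU style it draws on is already well established, and a rough sketch of its flavor is possible. The following uses pyopencl, a third-party Python binding to this family of APIs, purely as an illustration: the tiny kernel runs once per array element, spread across however many cores the GPU offers.

```python
import numpy as np
import pyopencl as cl

# Grab whatever OpenCL-capable device is available (a GPU, ideally)
# and set up a command queue for it.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# A toy physics-flavored workload: a million values to transform.
a = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is written once and executed for every element in
# parallel; get_global_id(0) tells each instance which element it owns.
program = cl.Program(ctx, """
__kernel void scale(__global const float *a, __global float *out) {
    int i = get_global_id(0);
    out[i] = 2.0f * a[i] + 1.0f;
}
""").build()

program.scale(queue, a.shape, None, a_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```

Whether Apple’s version ends up looking anything like this is anyone’s guess; the point is the shape of the computation, not the letters of the API.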

If we put the three together under the assumption that Apple might be shooting for the scientific computing community, we get a mobile computing platform with serious power (the MacBook Pro), able to run a variety of scientific programs (Mathematica, MATLAB, BLAST, etc.), with a built-in ability to sync with, and be wirelessly controlled by, a dedicated mobile phone with some serious computing power of its own (the iPhone plus community apps). The actual computing could be done at home, while the user receives output and sends input from the iPhone. Would this work? I think plenty of people are doing something similar already. But there can be significant differences between a device that has essentially been hacked together and a series of devices designed from the beginning to work in conjunction. I see this as a very exciting development on the part of Apple and the computing industry in general.
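No such integrated pipeline exists yet, so the best I can offer is the rough shape of the idea: a tiny job endpoint running on the home machine, to which a phone-side client could post work and from which it could read back results. The port and job format below are invented purely for the sake of the sketch.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the job the phone posted: here, just a list of numbers.
        length = int(self.headers['Content-Length'])
        job = json.loads(self.rfile.read(length))

        # Stand-in for the heavy computation the desktop would run.
        result = sum(x * x for x in job['numbers'])

        # Send the result back for display on the phone.
        body = json.dumps({'result': result}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8000), JobHandler).serve_forever()
```

The phone side would then be nothing more than a single HTTP POST and a screenful of results; the difference Apple could make is in having the two halves designed for each other from the start.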

A science-oriented Apple isn’t the only thing I’m excited about. Let me put it this way: the iPhone made people who never sent text messages on conventional phones text each other constantly. The iPhone also made people who never used the browsing capabilities of their conventional phones browse around the web. This is the problem, and the effect, of accessibility that I mentioned in some other posts on this blog. When people don’t do something, it might not be because they want it that way; it might be because there is an accessibility barrier between the individual and the activity. We complain about how people are no longer interested in the sciences and other higher academic pursuits. Maybe we’ve been unwittingly placing accessibility barriers on the paths to higher education? If the idea of an accessibility barrier between the public and the sciences holds a grain of truth, maybe this new direction of Apple’s can do for the sciences what it did for telephony. Especially with community-based distributed computing projects and a DIY mentality on the rise across a variety of scientific disciplines, biological ones above all (the term ‘synthetic biology’ isn’t even new anymore, despite the immaturity of the field itself), maybe I can hope for some sort of change in today’s somewhat disappointing state of affairs.

Bach by the window

It’s been an unusual afternoon of relaxation and reading today. I’m listening to Bach’s concerto for two violins in D minor. The sound of the violin fits especially well with the calm afternoon sunlight streaming in through the windows… It is as if the light and the instrument were made for each other, sharing some fundamental secret amongst themselves that mortal men are not privy to.

This kind of experience always arouses an intense wave of curiosity in my mind. Just what makes a molecular system react this way to certain vibrations of the air? Is it complexity? Is it some innate characteristic of my components? Or is it purely dependent on the way the materials are arranged? Is it replicable? If this is a natural phenomenon, who’s to say that another phenomenon of a similar nature isn’t manifesting somewhere else in this universe, on entirely different materials and scales, like that of a whole galaxy? Imagine: a bow of energies striding across the strings of time and space, propagating a subtle echo of exuberant creation, ringing away toward eternity.

My favorite cup of green tea ice cream is beginning to melt away. I should go now and enjoy my life more.

Hacker attitude

The ‘hacker’ culture has been around for so long, and has been involved in so much of the substantial progress of the last half-century, that its ethos and philosophy have been codified into something like the Ten Commandments. Except that these rules are, in keeping with the hacker subculture itself, a matter of choice for the most part. If you find yourself agreeing with the code, then you are probably a hacker, regardless of whether you know anything about computers. Conversely, even if you write in assembly language for a living, if you cannot agree with the code outlined by the hacker culture, you are probably not a hacker. In a way, calling it a ‘code’ and comparing it to the Ten Commandments is something of a misnomer. Think of it instead as an identification tag, used between people of similar disposition.

There are five fundamental attitudes shared by most hackers, and they are as follows.

1. The world is full of fascinating problems waiting to be solved.
2. No problem should ever have to be solved twice.
3. Boredom and drudgery are evil.
4. Freedom is good.
5. Attitude is no substitute for competence.

It is rather interesting that all five of these attitudes go against the common beliefs and practices held by most public school systems, at least at the inner-city schools I know of. Around those schools, teachers and administrators can say they are trying to teach children to respect authority without even blushing in shame. That’s right, folks: not respect for your fellow men and women, and not respect for yourself. The primary goal seems to be getting kids in the middle and high school stages of education to respect the person who has the right to call the police or security on them. Of course, I am being rather crass here, but this is the sentiment shared by most if not all urban youths, the same feeling I shared when I was their age. And whom am I supposed to blame for the current less-than-fantastic state of the public education system? The kids, or the experienced, supposed ‘professionals’ who get paid to study children and lead them to the best possible future?

As I grow older, I’m finding that this ‘hacker’ mindset is not new at all. I believe it has been around since the very beginning of civilization, and that it is part of the natural instinct of being a human being. It is becoming increasingly clear that you don’t need to know about computers to hack things. What you need instead is the insight and wisdom to see through the systems of the world. It’s like applied cybernetics: as long as things affect each other in a certain way, they form a system. A human society is a system like any other, albeit fundamentally more complex, since such systems are usually evolved rather than designed. And as long as something can be considered a system, it can be, and perhaps should be, hacked. The mudlark in a highly hierarchical society who later becomes a shipping magnate, or the leader of a nation, is as much a hacker as the computer science major hacking away in Python and C++ in pursuit of digital artificial life. A writer, a cook, a musician: the list goes on and on. The field of synthetic biology, though fledgling at the moment, seems to be shaping up as the next contender for hackerdom’s primary pursuit, in the search for the ability to hack life as we know it. Who knows what we’ll be hacking some distant time in the future? Perhaps the very nature of space and time itself. Maybe even designer universes.

And from this standpoint of universal hackery, I must ask: would it be possible to hack the human world? Would it be possible to hack the public mind and the generational zeitgeist to nudge the rest of humanity toward some vision of the future? Is it possible to hack the origin of all these situations and motivations, the human itself?

From virtual to real

I must admit, there was a time when I would play computer and video games late into the night. I was a wee lad back then, impressionable and curious about the whole plethora of things in this universe, and the allure of virtual worlds to such a mind was just too sweet to resist. I gave a lot of thought to my condition during that phase of my life. Why would I be captivated by certain types of virtual reality? Is there something shared in common among the hundreds of different worlds constructed in a number of different mediums (written, visual, and aural) that composes the fundamental idea of what an enjoyable world should be? Would the impression left by such an ‘idea’ of the mysteriously attractive world be common to all human beings, or only to human beings with certain memories and experiences? I would spend many days just thinking about the nature of all the possible virtual worlds imaginable by the human mind, and their implications, while my hands went through the mechanical motions of controlling my representation on the display.

Deus Ex was a computer game created by the now-defunct Ion Storm that came out during the aforementioned impressionable period of my life. The game isn’t aesthetically pleasing by any stretch of the imagination; it’s gritty and ugly, in a very superficial and unintended kind of way. It is set in an imaginary near future where nanotechnology and artificial intelligence are just coming into full gear amid the financial and political turmoil of a new human age. Conspiracy theories based on some real-world conspiracy fads play an important role in the setting and the plot, and there is a lot of techno-jargon thrown around in the game world’s numerous conversations, which might add to its depth. Any way you look at it, Deus Ex is not a work of art, and it was never meant to be. Deus Ex was designed to be immersive: immersive as in realistic within the confines of the plot and the technological means available to execute that plot. Whatever Deus Ex was meant to be, it did its job, and it did its job fantastically. Deus Ex took itself just seriously enough to be immersive.

I have played and finished Deus Ex numerous times since the day it came out. The game had the semblance of a virtual world, just enough to be a better game, not enough to be a real virtual world, which was actually a good thing. I would figure out a number of different ways to achieve the objectives of specific stages and of the game as a whole, each of those paths gradually coming to encompass processes the designers of the game probably never intended in the first place: an early form of truly emergent gameplay in a digital medium. I can still recite a number of quotes and conversations from the game by heart, not through any diligent study, but simply through the repeated exposure that stemmed from my interest in the world itself. And to be perfectly honest, while I was aware of nanotechnology and its growing prominence before playing the game (I was a little precocious for my age), I began to truly comprehend what such technology could mean for the world and its people by seeing it applied within a virtual world built and maintained on fictional premises. It would not be far from the truth to say that my interest in the ‘industries’ of biology and other fields of science (my current ‘official’ pursuit being plasma physics, an entirely different field altogether) began with my introduction to this game… I place much emphasis on the term ‘industry’ because it was through the application of the idea of technology within a virtual world (no matter how absurd it might be compared to the real one) that I began to grasp the requirements of science and its true impact on a modern human civilization of rapid prototyping and mass production. Yes, I came to learn that science affects the human world as a whole, just as the hand of economy reaches into the deepest pockets of the remotest corners of the globe, and that such permutations of ideas and information might have a reasonable pattern of causality behind them, forming a system of sorts. All this in my first year of high school; all this because I had seen it applied in a limited virtual world whose goal was to entertain, perhaps mindlessly.

People talk about Web 2.0 and web-based virtual reality (like Second Life) all the time, perhaps without grasping what they truly mean. To me, the changes on the web and its technical and semantic updates are merely superficial effects of the real change taking place right now. The real change we face at this moment is a change in the nature of the human network. I find myself using the term ‘human network’ more often these days. The human network has been present since the very first moment of human civilization (perhaps even before, going back to the start of the human species), and it shares the mathematical and sociological properties of networks, properties that remain more or less the same at some compartmentalized level. The changes we are seeing in the emergence of Web 2.0 ideas and virtual realities merely reflect technological advances applied to the same ever-present human network that has been in place for as long as anyone can remember. At the core of Web 2.0 is the idea of user interactivity. What happens when there is freedom of interactivity among millions and billions of people? The medium providing the room for those interactions begins to bear a closer resemblance to the concept we call ‘the world.’ Forget reality: what is a ‘world’? What satisfies the definition of one? The core of a ‘world,’ as it stands, happens to be a place where people can interact with the very components of the world itself and with each other. In that sense, if our reality somehow forbade certain types of interaction between us and the ‘world,’ it would cease to be real. The world, seen from an information perspective, is a massive space/concept/thing for interactivity, and interaction between the ‘things’ within the world builds and evolves the form of the world itself.

Web 2.0, in that sense, is the beginning of a virtual world that builds upon human interactivity rather than upon a superficial (though still quite important) resemblance to the physical characteristics of the real. And the real change that Web 2.0 thought brings to the general population is an enlargement of their perspective on the real world, brought on by interactions with other human nodes within the virtual one. I am not suggesting that people are somehow becoming more conscious. Rather, just as my old experience with Deus Ex showed how seeing certain ideas applied to a virtual world left an impression of their impact on a rapidly prototyping, global world, the population of this world is becoming increasingly aware of the true global consequences of their own and others’ actions and thoughts. It is the awareness that in this highly networked world, science, industry, economics, and politics all walk hand in hand as ‘ideas’ and their currencies, with a single change in one sector of one corner of the world giving birth to other events on the opposite corner of the globe, in an entirely different field of ideas. It is the beginning of an understanding of the malleability of the human world and its thought.

I started by remembering my experience with an old computer game and arrived at talk of virtual reality, the human network, and the changes of the world. I hope I didn’t confuse you too much. This is what I call ‘taking a walk’: I begin with one thought and its conclusions and apply them to different yet related thoughts to arrive at interesting ideas. In case you are wondering about the game itself, it seems they are giving it away for free now. Go grab it and spend some time with it. It’s still fun after all these years.

Sketch: Creativity and the origin of creativity

I’ve been listening to Amy Tan’s TED talk, titled ‘Where Does Creativity Hide?’

Interesting stuff. I haven’t had enough time to mull it over properly yet, but listening to her gave me a few thoughts on the issue of the origin of creativity, an issue I am very passionate about.

Simulating the process of creativity is a relatively simple matter, I think. Plenty of mathematical constructs and randomly generated ‘events’ linked together can bear a resemblance to pure creative output, and despite the number of conflicts and arguments for and against such ‘engines of creation,’ I do believe that what we do might in essence be not so different from the simulated behavior of such random patterns and mechanizations.
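To make the point concrete, here is a minimal sketch of such an ‘engine of creation’: a toy Markov chain that links randomly chosen fragments into sentence-like output. The fragment table is invented purely for illustration; the point is only that chained random events can mimic the surface of creative output.

```python
import random

# Each fragment lists the fragments allowed to follow it; the table
# itself is made up for the sake of the example.
chain = {
    'START':     ['the light', 'a violin', 'the sea'],
    'the light': ['falls on', 'sings to'],
    'a violin':  ['sings to', 'remembers'],
    'the sea':   ['falls on', 'remembers'],
    'falls on':  ['the strings.', 'the window.'],
    'sings to':  ['the window.', 'the strings.'],
    'remembers': ['the light', 'the sea'],
}

def generate(max_steps=6):
    """Walk the chain from START, linking random 'events' until a
    fragment closes the sentence or the step budget runs out."""
    words, state = [], 'START'
    for _ in range(max_steps):
        state = random.choice(chain[state])
        words.append(state)
        if state.endswith('.'):
            break
    return ' '.join(words)

for _ in range(3):
    print(generate())
```

Run it a few times and the output has a whiff of the creative about it, which is exactly what makes the question of origin, below, the harder and more interesting one.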

However, the real problem, at least for me, lies in the origin of creativity rather than the process of it. Human beings are not machines or algorithms specifically designed to be creative. In fact, human beings as molecular machines might not have been built for anything (and for everything, in that sense), since evolution tends to be quite blind in matters of directionality in nature (there are theories and viewpoints arguing otherwise). I will not even consider the possibility that the wellspring of creativity emerges from some spiritual source; instead I will approach the problem from a purely materialistic and reproducible viewpoint.

As physico-chemical complex dissipative systems, what drives human beings to create and innovate throughout their duration of activity, i.e. life? What kind of mechanism underlies this strange anomaly emerging from entangled soups contained within chunks of chemicals? Moreover, how would we be able to replicate such behavior using components other than the usual ones? This, ladies and gentlemen, is the question of the ages, the true question behind the question of creativity.

This, I believe, is the true crossroads between the arts and the sciences, the significance of artificial life for science, society, and industry, and the zenith where artificial intelligence becomes simply intelligence.

More to follow.