New career path

I can’t believe I didn’t write here for so long. Welp, can’t help what’s done already; I’ll try to document all the cool stuff that’s happening right now at and around me, iGEM, and Genspace a bit more. I’m officially a member of the NYC iGEM team, and there’s plenty of real biology being done at Genspace now that we’re public and all. I just have so much to write about.

But first, let me share some news. This is the one that got me writing on this blog again after months of vacation:

Virgin Galactic is hiring astronauts! Not just any astronauts, but private astronauts, presumably for their SpaceShipTwo-based launch program at the Mojave spaceport.

You need all the qualifications usually associated with being hired as a pilot at any aerospace corporation, with preference given to those with real experience in spaceflight. I guess all those out-of-work astronauts from the Space Shuttle program can still get their flight on 🙂

I kept asking myself why I didn’t go to flight school instead of bothering with all this physics baloney. I think my friend who does aeronautical engineering feels the same way. I was more or less blaming my own shortsightedness before I hit upon a memory from a decade ago.

I wanted to go to space, to become an astronaut. That meant I had to enlist in the air force, go through officer’s training, and get really, really lucky. Now, the luck part I never really had a problem with. Don’t worry about things you can’t control, as they say. But enlisting and spending my life in the military just to get to space? Man, that just put me way off. It’s probably the same story with my engineer friend. And I’m not even sure what women go through when they want to become astronauts. I’m thinking it’s quite different from what men have to go through, whether we want to admit it or not.

I’m not really crazy about the idea of libertarian capitalism, but I can’t help but welcome this development of private space industries. I think years of treating space as if it were a special military domain killed a lot of initiatives that could have happened, and shelved decades’ worth of scientific progress under the guise of national security (for all nations with the capacity for spaceflight, really).


First DIYBio rant of the year

I can’t believe I’m uploading the first post of the year in March. Still, better late than never to show people that I’m still alive and kicking. While I haven’t been able to think about personal writing due to a deluge of job- and school-related stuff, I’ll try to keep things more organized in the coming months. If half of what I hope for comes true, this coming year will be the most awesome so far, for myself and for the other activities and organizations I believe in.

This post is, like it says in the title, a rant about what DIYBio ought to be and how I plan to do my part this year. It was also written on my BlackBerry and later copy-pasted into WordPress… I just hope half a year of writing boring technical stuff didn’t burn out the creative-writing part of my brain. I’ll be using it a lot from now on.

The year 2009 was a series of exciting experiences, with ISFF, DIYBio, and the iGEM jamboree. I’m trying to carry that into this year without losing momentum, through activities like a synthetic biology crash course for beginners, various internships, and private research projects. Hopefully I’ll have more time to write about them in the coming months.

I’ve been thinking a lot about DIYBio, about what it’s supposed to be and what it needs, and I think I’ve arrived at some sort of conclusion.

DIYBio must inevitably find a way to bridge the gap between enthusiastic members of the public and the tools and devices that make synthetic biology feasible. While there are many members out there who seem to be working toward specific gadgets and other physical tools of biological experimentation, I think we still need something more.

DIY or not, biology is a science. If we want to bring hard science to the public with the aid of ever-cheaper yet sophisticated lab equipment, we need to look beyond the hardware.

I’ve written quite a few times about Alan Kay (on this blog and elsewhere), the pioneer of the modern computer programming/interface paradigm, and his relationship with synthetic biology… There are mountains of information on him and his work that are relevant to the discussion of models in biology and how they might be used to organize information, with emphasis on education as a sort of interface between data and the human mind… All of which is beyond the scope of this particular post.

The important point is this. I believe the true potential of DIYBio is to bridge the gap between the complexity of bleeding-edge science and the innate human ability to learn and tinker. And the main tool in making that happen is ideas, not low-cost lab tools (the costs of lab tools are coming down anyway; why DIY every single appliance when you can buy a used one that works just as well, oftentimes even better?). While low-cost lab implementations are important, the true future lies with the ability to abstract and package/rebuild complexity into something much more manageable.

Some people seem to have difficulty understanding what I’m trying to say, from the few times I’ve tried to talk about it… I’m talking about reviving and revising the notion of knowledge engineering, something that was supposed to be the cornerstone of a true computer revolution that never really took off (Google and Wikipedia are some remnants of the original idea).

Synthetic biology is a good example of what knowledge engineering coupled with physical science might be able to achieve. None of the specific pieces forming what we perceive as synthetic biology are new. They’ve been around for quite a while in one form or another, following a course of gradual improvement rather than truly new scientific advances.
Synthetic biology at heart is about how dedicated professionals can organize scattered pieces of knowledge into something that can potentially allow ambitious undergraduate students to undertake projects that would have been beyond their ability a decade ago. Never mind the actual success rate of their projects for now. The very fact that those students are able to plan for the future with a much broader sphere of possibility is significant enough.
And why stop with undergraduates? Wouldn’t it be possible to have motivated high school students design something that at least works on paper? Wouldn’t it be possible to build a conceptual framework so that those kids can at least discuss possibilities for future projects on the back of a napkin without resorting to sci-fi?

If DIYBio is to do what it originally set out to do, we need to look beyond gadgets and tools. We need to think about ideas and how they come together… We need to make biology easier, not just cheaper. This is the mantra that will drive my DIYBio-related activities this year.

Synthetic Biology on KQED QUEST, and some comments on the DIYBio aspect

(((I was trying to embed the videos from the KQED site directly in the post, but apparently copy-pasting embed code into the HTML panel isn’t good enough for WordPress. I’ve linked to them instead. They are quite good. You should really check them out.)))

Here are two videos on synthetic biology. The first one is a short introduction to synthetic biology produced by the wonderful people at the KQED QUEST program, which goes into some detail on what synthetic biology is and what we are doing with it at the moment. Certainly worth some of your time if you’re interested in this exciting new field of science.

The first video is the original KQED QUEST video on synthetic biology.

The second video is the extended interview with Drew Endy, available off their website… While the field of synthetic biology in the form we now know and love probably began with the efforts of Tom Knight at MIT, Drew Endy is certainly one of its most active and clear-thinking proponents.

Here is the link to the second video, the extended interview with Drew Endy.

If you haven’t guessed yet, I’m really big on synthetic biology. I think it’s one of the most exciting things happening in the sciences today, not just for biologists but for mathematicians and physicists, in that synthetic biology might one day provide a comprehensive toolset for studying the most complex physical system known to humanity so far… that of complex life-like systems.

I also believe that abstraction-driven synthetic biology cannot manifest without a reasonably sized community of beta testers willing and able to use the new parts and devices within original systems of their own creation. Computer languages like Python and Ruby needed the efforts of hundreds of developers working in conjunction with each other over multiple years to get where they are today. A complete operating system like Linux took longer, with an even larger base of developers, and we still have usability issues. Synthetic biology must deal with systems that are even more complex than most computerized systems, so it’s not unreasonable to think that we’ll need even wider deployment of the technology to the public, and active community involvement, in order to make it work as an engineering-capable system.

So I am a little dismayed, along with legions of other people who were initially excited by the promise of synthetic biology in conjunction with the DIYBio community, to find that access to BioBrick parts and the iGEM competition is severely limited for any amateur biology group operating outside conventional academic circles.

You see, unlike computer programming, constructing synthetic biology systems requires BioBrick parts from the Registry of Standard Biological Parts. Right now it is next to impossible for a DIY biologist interested in synthetic biology to get his or her hands on BioBrick components through proper channels. The DIYBio-NYC group alone had quite a few people lose interest because of the uncertain prospect of being allowed access to BioBrick parts, and after talking to people from around the world about this issue, I’m beginning to think there are a lot more such cases. So far the major reasoning behind the restricted access seems to be safety, but considering that the regular chassis used to put together BioBrick parts is based on academic strains of E. coli that are even more harmless than your average skin cell, I can’t see much wisdom in restricting access to the parts on the basis of safety.

The bottom line is, the state of synthetic biology and the BioBricks Foundation at the moment is forcing a lot of people, some of them quite talented, who are enthusiastic about contributing to a new emerging field of science, to back down in either confusion or disappointment. Considering that the very structure of synthetic biology demands some level of public deployment to stress-test and demonstrate the effectiveness and stability of its individual parts and devices (with the creation of those parts and devices left to highly trained professionals at upscale laboratories), this is a highly unusual state of affairs that is not motivated by the science behind synthetic biology. I might even go as far as to say it has the distinct aftertaste of political calculation of the public-relations kind.

The field of synthetic biology will never achieve its true potential unless the BioBricks Foundation and iGEM administrators come up with some way for people outside traditional academic settings to participate in the real design and construction of synthetic biology systems.

Here’s a little bonus: the QUEST show producer’s notes on ‘Decoding Synthetic Biology.’

How to change the world.

This is a bit of a rant on something I thought about after watching a bunch of old Hollywood hacker movies. It continues to amaze me how I can participate in all sorts of crazy things even with the summer studies and jobs I need to keep up with. I guess that’s the benefit of living in a place like NYC.

I’ve been watching some old hacker movies lately. And I just can’t believe what kind of cool things those movie hackers were able to pull off with their now decades-old computers and laptops, computers with interfaces and hardware that exude that retro feel even across the projector screen. I know a lot of people with brand-spanking-new, state-of-the-art computers, and what they usually do, or can do, with those machines isn’t as cool as the stuff in the movies being pulled off with vastly inferior hardware and network access. Of course, like everything in life, it would be insane to compare the real with the imagined, and Hollywood movies have a bad tendency to exaggerate and blow things out of proportion (I’m just waiting for that next dumb movie with synthetic biology as the culprit, though it might not happen, since Hollywood’s been barking about the indecency of genetic engineering for decades now). Even with that in mind, I can’t help but feel that modern computerized society is just way too different from the one imagined by the artists and technologists of old.

Ever heard younger Steve Jobs talking in one of his interviews? He might have been a rather nasty person, but he certainly believed that ubiquitous personal computing would change the world for the better. Not one of those gradual, natural changes, either. He actually believed that it was going to accelerate humanity itself, very much like how Kurzweil preaches about the end of modernity with the upcoming singularity. Well, personal computing is nothing new these days. It was actually quite stale until a few months ago, when people finally figured out that bloat-ridden software with no apparent advantage in functionality was a bad thing, both in terms of user experience and economics. Ever since then they’ve been coming out with some interesting experiments, like the Atom chipset for netbooks (as well as netbooks themselves) and the Nvidia Ion system for all sorts of stuff I can’t even begin to describe. And even with the deluge of personal computing in the world, we have yet to see the kind of dramatic and intense changes we were promised so long ago. Yeah, sure, the world’s slowly getting better, or changing at least. It’s all there when you take some time off and run the real numbers. It’s getting a little bit better as time goes on, and things are definitely changing, like some slow-moving river. But this isn’t the future we were promised so long ago. This isn’t the future people actually wanted to create.

We have engines of information running in every household and many cellphones right now. By engines of information I mean all sorts of machinery that can be used to create and process information content: not just client-side consumption devices where the user forks money over to some company to get little pieces of pixels or whatever, but real engines of information capable of creating as well as consuming, using all of the hardware’s capabilities. It’s as if this were the Victorian era, everyone had a steam engine built into everything they could think of, and nothing happened. No steam cars, no steam blimps, no nothing. The world’s rolling along at the same pace as before, and most people still think within the same narrow-minded niches of their own. What’s going on here? Never has such a huge number of the ‘engines’ responsible for creating an era in history been available to so many people at once. And that’s not all. Truly ubiquitous computing, made available by advances in information technology, is almost here, and it is very likely that it will soon spread to the poorer parts of the world and the remote parts of the globe traditionally cut off from conventional infrastructure.

But yet again, no change. No dice. Again, what’s happening here, and what’s wrong with this picture? Why aren’t we changing the world using computers at a vastly accelerated rate, like how we changed the world with rapid industrialization (not necessarily for the better, of course)? That’s right: even compared to the industrialization of old, with its relatively limited availability and utility of steam engines, we are falling behind on the pace of changing the world. No matter what angle you take, there is something wrong in our world. Something isn’t quite working right.

So I began to think during the hacker movie screening, and by the time the movie finished I was faced with one possible answer to the question of how we’ll change the world using engines of information. How to take back the future from spambots, ‘social gurus’, and unlimited porn.

The answer is science. The only way to utilize the engines of information to change the world in a tangible form is science. We need to find a way to bring the sciences to the masses. We need to make people do science, participate in it, and maybe even learn it, as outlandish as that notion might sound to some people out there. We need to remodel the whole thing from the ground up, change what people automatically think of when they hear the term ‘science’. We also need tools for the engines of information: software tools so that people can do science everywhere there is a computer, and do it better everywhere there is a computer and an internet connection. And we need to make it so that all of those applications/services can run on a netbook-spec’d computer. That’s right. Unless you’re doing serious 3D modeling or serious number-crunching, you should be able to do scientific stuff on a netbook. Operating systems and applications that need 2 GB of RAM to display a cool visual effect on scrolling text-based documents are the blight of the world. One day we will look back at those practices and gasp in horror at how far they held the world back from the future.

As for actual scientific applications, that’s where I have problems. I know there is already a plethora of services and applications out there catering to openness and science integrated with the web; OpenWetWare and other synthetic biology related applications and services come to mind. Synthetic biology is a discipline fundamentally tied to the use of computers, access to outside repositories and communities, and a large amateur community for beta-testing its biological programming languages. It makes sense that it’s one of the foremost fields of science open to the public, offering a number of very compelling design packages for working with real biological systems. But we can do more. We can set up international computing support for amateur rocketry and satellite management, using low-cost platforms like the CubeSat. I saw the launch of a privately funded rocket into the Earth’s orbit through a webcam embedded in the rocket itself. I actually saw space from the point of view of the rocket, sitting in my bedroom with my laptop, as it left the coils of the Earth and floated into space with its payload. And this is nothing new. All of this is perfectly trivial, of such technical ease that it can be done by a private company instead of national governments. And most of the basic peripheral management for such operations can be done on a netbook, given a sufficient degree of software engineering and a reliable network connection. There are other scientific applications I could rattle on about without pause, and there are plenty of people out there much better versed in the sciences who can probably come up with even cooler ideas… So why isn’t this happening? Why aren’t we doing this? Why are we forcing people to live in an imaginary jail cell where the next big thing consists of scantily clad men/women showing off their multi-million dollar homes with no aesthetic value or ingenuity whatsoever?
Am I the only one who thinks the outlook of the world increasingly resembles some massive crime against humanity? It’s a crime to lock up a child in a basement and force him/her to watch crap on TV, but when we do that to all of humanity, suddenly it’s to be expected?

We have possibilities and opportunities just lying around for the next ambitious hacker-otaku to come along and take. But they will simply remain possibilities unless people get to work on them. We need software and people who write software. We need academics willing to delve into the mysterious labyrinths of the sciences and regurgitate them in a user-friendly format for the masses to consume, with enough nutrients in it that interested people can actually do something with it.

This should be a wake-up call to the tinkerers and hackers everywhere. Stop fighting over which programming language is better than others. Stop with the lethargic sarcasm and smell the coffee. Learn real science and hack it to pieces like any other system out there.

Get to work.

Change the world.

Ebook future

I just came across an article in Wired stating that Amazon will almost certainly unveil a new ebook reader with a larger screen size. While the article goes on to talk about a possible tablet device from Apple as heavy competition in the ebook market compared to text-centric ebook devices, my attention more or less stopped at the mention of the new ebook device on the horizon. It’s not just a new ebook device that’s about to come out. It’s a larger-screen ebook device specifically targeted at the academic textbook market. Apparently Amazon wants a share of the $9.8 billion textbook market (and that’s just the U.S.), and I say it’s about time. I can still feel the phantom pain imposed on my back by years of carrying around textbooks heavy enough to be used as decent weapons. It would be great to finally carry a bookbag that weighs less than the standard combat gear of most armed forces around the world.
I’ve been an avid ebook user ever since I learned about the existence of those wonderful devices and the myriad of texts available on the web for free use, like the extensive collections in Wikipedia, various blogs, and Project Gutenberg. I had my first encounter with ebook devices a long time ago, before the Kindle made it cool to carry one around. In fact, as far as I know, the ebook reader I use, the Sony Reader PRS-500, might be the first dedicated ebook reader in North America to use an e-ink display. It has been a trusted mobile library by my side for the past two or three years. Even before purchasing this dedicated ebook reader, however, I was using old discarded Palm Pilot devices to read ebooks on the go (so old that they still had ‘volatile memory,’ a storage scheme used in Palm devices before the advent of the all-too-familiar flash memory; if the device ever ran out of power, all the data stored on it would be lost, thus the term). Most of those ebooks were reformatted webpages I made using a handy Palm utility program called Plucker, which could turn any webpage/archive format into a Palm-ready ebook. Later on I also used my Nintendo DS as a dedicated ebook reader (instead of playing games like a good kid), burning a multitude of memory cards with whole repositories of text- and HTML-formatted ebooks I found through my sojourns on the net.
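The core of what Plucker automated, turning a webpage into device-ready reading material, starts with stripping the markup down to its visible text. Here is a minimal sketch of that first step using only Python’s standard library (this is an illustration of the idea, not Plucker’s actual pipeline, which also handled link-following and Palm-specific packaging):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of a page, skipping script/style contents."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text nodes outside script/style blocks.
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def webpage_to_text(html: str) -> str:
    """Reduce an HTML document to plain reading text, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

page = ("<html><head><style>p{}</style></head>"
        "<body><h1>Title</h1><p>Some article text.</p></body></html>")
print(webpage_to_text(page))
```

A real converter would then reflow this text and wrap it in whatever container the target device expects; the extraction step above is the part that stays the same across formats.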
I love my paper books as much as anyone, of course. And even now, with my extensive ebook collection (most of it surprisingly DRM-free), I always make a point of buying paper books now and then. Some people stock up on weapons and emergency supplies for the inevitable zombie apocalypse; I stock up on paper books for that one day when I won’t be able to recharge my digital reading devices anymore and my vast library is lost within the magnetic patterns etched upon my external hard drives. However, there is an unavoidable allure to being able to carry around twenty to thirty books of my choosing in a slim, light package that weighs as much as my hard-drive iPod. The fact that I’m a rather fast reader only adds to the attraction of ebooks and ebook readers. Before I came across ebooks, my luggage would be filled with books whenever I traveled far from home, and I happen to travel often. It really made for quite a workout, carrying those bags all over the place. With ebooks, I just need to carry the little device and its charger for my casual reading needs, with a hardcover or two for those tight spots when I need to study instead of read. Many people still debate the need for a dedicated device for reading digitally formatted books, and they have a point: having an ebook reader will not change your life if you don’t read in the first place. In that light, dedicated ebook readers are certainly niche devices, intended for the relatively small portion of the population who would buy books through digital distribution channels and who would be willing to pay hundreds of dollars for a device just to be able to read more.
The two things I’ve just mentioned might sound like insignificant hurdles to most people who consider themselves internet-savvy, but when we think of the reading population as a whole, whose members come from various walks and stages of life, those are significant barriers to entry into the ebook world. Yet Amazon’s Kindle demonstrated clearly what a few dedicated gadget-community members knew all along: people actually read, and many of them are willing to pay to support their habit, as the multi-billion dollar publishing industry would attest (and that’s just the U.S., which frankly isn’t the most reading-intensive country in the world).
Reading the article in Wired, and listening to conversations about ebooks on and off the net, the ebook question seems to be moving from ‘will people bother to read on machines’ to ‘will people bother to purchase dedicated reading machines.’ This is a good sign, I think. The market’s beginning to acknowledge that people are willing to take time to read things and even (gasp) pay for them, which means a larger selection of stuff to read, and things to read that stuff on, in the future. However, the answer to the question of whether we need ebook readers instead of making people read on their cellphones is a thorny one. It’s a question of how far people are willing to go to support their reading lifestyle. How many people are willing to cough up close to $400 for a dedicated ebook reader that they will later have to pay more to load content onto? When we look at the Kindle as the only ebook reader of choice, the answer is obvious: not so much. I’m a self-confessed ebook enthusiast who regularly digs through the net for that obscure script to translate Microsoft’s proprietary LIT format into Sony’s proprietary BBeB format. But even I am not willing (or rather, able) to pay more than a month’s living expenses on a student budget for an ebook device. So are dedicated ebook platforms doomed? Not quite. We must remember that there are still a myriad of companies out there that manufacture cheaper ebook devices, some of them higher-profile than others (Sony isn’t a low-profile company). Add to them the quirky yet ambitious entrepreneurs of the East, who seem to be jumping into any and all kinds of electronics markets with vigor and goods of varying quality. I got my own Sony PRS-500 for about $50 in a promotional offer. I get most of my reading materials through limited DRM-free channels or through the public domain, and they usually don’t cost much, certainly not as much as their printed cousins.
Contrary to what people think, ebook reading devices themselves aren’t really that expensive. A dedicated ebook device is basically an electronic device with two features: an e-ink display capable of rendering basic HTML-like formatting along with a few more conventional formats like PDF, and a cable to connect it to a computer so the end user can load content onto it. Simply put, it’s a glorified USB thumbdrive with a big e-ink screen and some buttons. While Amazon’s Kindle is a notch or two above the rest with its fancy Whispernet technology and over-the-air delivery system, those things are not absolutely necessary in an ebook device. I mean, these devices are capable of holding 20 to 30 ebooks, each running a few thousand pages. You probably don’t have to constantly buy new content before you get home from wherever you are at the moment (besides, if you can chug through that much content before you get to a computer with internet and a USB connection, you deserve to buy yourself a $400 reading device). The real issue that will either make or break the future of ebooks is not the introduction of newer devices with more features (though I would certainly like to see the existing feature set get better), but software: the DRM schemes and ebook formats. I can manage quite a number of different file formats and DRMed formats on my single PRS-500 device only because of the collective action of the volunteer ebook community, some of whom managed to code indispensable pieces of cross-format software like Calibre. Many people can’t. DRM leads to limited distribution, since investing in the DRM of a specific platform or corporation means that you trust that platform or corporation to exist ten or twenty years from the date of your book purchase. Which is preposterous to anyone with a working mind.
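To make the format fragmentation concrete, here is a toy sketch of the compatibility check every ebook buyer is implicitly forced to perform before purchase. The format and device tables below are illustrative assumptions for the formats mentioned in this post, not an exhaustive or authoritative specification:

```python
import os

# Illustrative mapping of file extensions to the (mutually incompatible)
# ebook formats discussed above. Real-world support varies by firmware.
FORMATS = {
    ".lit":  "Microsoft LIT",
    ".lrf":  "Sony BBeB",
    ".mobi": "Mobipocket MOBI",
    ".pdf":  "Adobe PDF",
    ".txt":  "Plain text",
}

# Hypothetical per-device support sets, for illustration only.
DEVICE_SUPPORT = {
    "Sony PRS-500":  {".lrf", ".pdf", ".txt"},
    "Amazon Kindle": {".mobi", ".txt"},
}

def needs_conversion(path: str, device: str) -> bool:
    """True if the file's format is not natively readable on the device."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in FORMATS:
        raise ValueError(f"unknown ebook format: {ext}")
    return ext not in DEVICE_SUPPORT[device]

# A LIT purchase must be converted before a Sony Reader can open it,
# while a BBeB file loads directly.
print(needs_conversion("novel.lit", "Sony PRS-500"))
print(needs_conversion("novel.lrf", "Sony PRS-500"))
```

The point of the sketch is that the burden of this lookup, and of the conversion step it implies, currently falls on the buyer; DRM then makes the conversion step legally and technically fraught on top of that.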
The average lifetime of a corporation in America is about ten or fifteen years, and that’s assuming it is successful, and that it will continue to maintain and support whatever DRM scheme it came up with until the very last moment. You can browse through your old paper books ten and twenty years from now, and your children and children’s children will be able to read those books or sell them secondhand, ensuring a certain degree of propagation of the written content. With DRMed books, it’s highly unlikely your own children will be able to access your books, and whether you yourself will be able to read your favorite passage years from now will be decided in a boardroom by people who don’t know you and quite possibly don’t care whether you want to read or not. Even setting aside faraway scenarios like this, the danger DRM poses to the general propagation of ebooks into the larger market is obvious, owing to the simple fact that DRMed ebooks impose limits upon their own market and distribution. The first thing most people encounter whenever they browse an ebook store that isn’t Amazon is this: the name of the book, followed by LIT, PDF, BBeB, MOBI, etc. When users somehow manage to find the book they want to purchase (despite the severely limited selection in most of those stores), they are faced with a multitude of options as to the format of the book, most of them incompatible with each other. From what I know of people who are not familiar with ebook formats, this is the step where most of them will just give up and go buy a paper book in a local bookstore for only slightly more, or maybe even less than the DRMed digital copy if they know how to shop around on eBay. Even a larger-scale distributor like Amazon, with its almighty capacity to push its own content onto its own platforms, is basically playing on an uneven field.
The reality is that people will inevitably ask questions about the future of their books, and all Amazon can do is cross their fingers and hope that doesn’t happen anytime soon. Limiting your own source of income and praying for only good things to happen in the future is not a valid business strategy.
The valid business strategy in the near future would be to get rid of the DRM scheme entirely. For everyone. Even a giant like Amazon is hedging an uncertain bet with the DRM restrictions on its ebooks; smaller distributors like the Sony ebook store don’t stand a chance. Just sell ebooks like you sell books. Let the market grow and let more people get hooked on using ebooks on ebook reader devices. There are cellphones and laptops, sure, but the reality is that they don’t compare to dedicated ebook readers in terms of providing a real reading experience. Cellphones are for making calls and laptops are for computing, and no one will burn out the batteries on those devices and risk their bill-paying work just to read more books. Once the quantity and quality of DRM-free ebooks reach a critical mass, there will be cheaper ebook readers on the market. That’s the time for Amazon to introduce their new and improved Kindles. Why go for a generic, cheap ebook reader when you can get the same content on a far better machine with an awesome battery, life-saving features, and an innovative interface? The only way to achieve this end with DRM still in the picture would be either to open Amazon’s DRM specifications to other manufacturers, which defeats the purpose of having a DRM in the first place, or to have a unified standard DRM for all publishers/distributors that’s compatible across a variety of devices. That would require deal-making and engineering of ungodly devotion, and I doubt even Amazon could pull it off on their own, especially considering that there are markets outside the U.S. as well when it comes to reading materials, both traditional books and ebooks.

I just came across an article in Wired stating that Amazon will almost certainly unveil a new ebook reader with a larger screen. While the article goes on to talk about a possible tablet device from Apple as heavy competition in the ebook market compared to text-centric ebook devices, my attention more or less stopped at the mention of the new ebook device on the horizon. It's not just a new ebook device that's about to come out. It's a larger-screen ebook device specifically targeted at the academic textbook market. Apparently Amazon wants a share of the $9.8 billion textbook market (and that's just the U.S.), and I say it's about time. I can still feel the phantom pain imposed on my back by years of carrying around textbooks heavy enough to be used as decent weapons. It would be great to finally carry a book-bag that weighs a lot less than standard combat gear.

I’ve been an avid ebook user ever since I learned about the existence of those wonderful devices and the myriad of texts available on the web for free use, like the extensive collections in Wikipedia, various blogs, and Project Gutenberg. I had my first encounter with ebook devices long before the Kindle made it cool to carry one around. In fact, as far as I know the ebook reader I use, the Sony Reader PRS-500, might be the first dedicated ebook reader in North America to use an e-ink display. This ebook reader has been a trusted mobile library by my side for the past two or three years. Even before purchasing this dedicated ebook reader, however, I was using old discarded Palm Pilot devices (so old that they still had ‘volatile memory,’ the storage scheme used in Palm devices before the advent of the all-too-familiar flash memory: if the device ever ran out of power, all the data stored on it would be lost) to read ebooks on the go. Most of them were reformatted webpages I made using a handy Palm utility program called ‘Plucker,’ which could turn any webpage or archive format into a Palm-ready ebook. Later on I also used my Nintendo DS as a dedicated ebook reader (instead of playing games like a good kid), burning a multitude of memory cards with whole repositories of text- and HTML-formatted ebooks I found through my sojourns on the net.

I love my paper books as much as anyone, of course. Even now, with my extensive ebook collection (most of it surprisingly DRM free), I always make a point of buying paper books now and then. Some people stock up on weapons and emergency supplies for the inevitable zombie apocalypse. I stock up on paper books for that one day when I won’t be able to recharge my digital book-reading devices anymore, and my vast library is lost within the magnetic patterns etched upon my external hard drives. However, there is an unavoidable allure to being able to carry around twenty to thirty books of my choosing in a slim and light package that weighs about as much as my hard-drive iPod. The fact that I’m a rather fast reader only adds to the attraction of ebooks and ebook readers. Before I came across ebooks, my luggage would be filled with books whenever I traveled far from home, and I happen to travel often. It really made for quite a workout, carrying those bags all over the place. With ebooks, I just need to carry the little device and its charger for my casual reading needs, with a hardcover or two for those tight spots when I’d need to study instead of read. Many people still debate the need for a dedicated device for reading digitally formatted books, and they are right: having an ebook reader will not change your life if you don’t read in the first place. In that light, dedicated ebook readers are certainly niche devices, intended for the relatively small portion of the population who would buy books through digital distribution channels and who would be willing to pay hundreds of dollars for a device just to be able to read more.
The two things I’ve just mentioned might sound like insignificant hurdles to most people who consider themselves internet savvy, but when we think of the reading population as a whole, whose members come from various walks of life and are at various stages of life, those are significant barriers to entry into the ebook world. Yet Amazon’s Kindle demonstrated clearly what the dedicated gadget community knew all along: people actually read, and many of them are willing to pay to support their habit, as the multi-billion dollar publishing industry would attest (and this is just in the U.S., and quite frankly, we aren’t the most reading-intensive country in the world).

Reading the article in Wired, and listening to conversations related to ebooks on and off the net, the ebook question seems to be moving from ‘will people bother to read on machines’ to ‘will people bother to purchase dedicated reading machines.’ This is a good sign, I think. The market is beginning to acknowledge that people are willing to take time to read things and even (gasp) pay for them, which means a larger selection of stuff to read, and of things to read that stuff on, in the future. However, the answer to the question of whether we need ebook readers instead of making people read on their cellphones is a thorny one. It’s a question of how far people are willing to go to support their reading lifestyle. How many people are willing to cough up close to $400 for a dedicated ebook reader that you will later have to pay more to load content onto? When we look at the Kindle as the only ebook reader of choice, the answer is obvious: not many. I’m a self-confessed ebook enthusiast who regularly digs through the net for that obscure script to translate Microsoft’s proprietary LIT format to Sony’s proprietary BBeB format. Yet even I am not willing (or rather, able) to pay more than a month’s living expenses on a student budget to buy an ebook device. So are dedicated ebook platforms doomed? Not quite. We must remember that there are still a myriad of companies out there that manufacture cheaper ebook devices, some of them higher profile than others (Sony isn’t a low-profile company). Add to them the quirky yet ambitious entrepreneurs of the East, who seem to be jumping into any and all kinds of electronics markets with vigor and goods of varying quality. I got my own Sony PRS-500 for about $50 in a promotional offer. I purchase most of my reading materials through limited DRM-free channels or find them in the public domain, and they usually don’t cost much, certainly not as much as their printed cousins.
Contrary to what people think, ebook reading devices themselves aren’t really that expensive. A dedicated ebook device is basically an electronic device with two features: an e-ink display capable of rendering basic HTML-like formatting along with a few more conventional formats like PDF, and a cable to connect it to a computer so the end user can load content onto it. Simply put, it’s a glorified USB thumbdrive with a big e-ink screen and some buttons. While Amazon’s Kindle is a notch or two above the rest with its fancy Whispernet technology and over-the-air delivery system, those things are not absolutely necessary in an ebook device. I mean, these devices are capable of holding twenty to thirty ebooks, each running a few thousand pages. You probably don’t have to constantly buy new content before you get home from wherever you are at the moment (besides, if you can chug through that much content before you get to a computer with internet and a USB connection, you deserve to buy yourself a $400 reading device). The real issue that will either make or break the future of ebooks is not the introduction of newer devices with more features (though I would certainly like to see the existing feature set get better), but software: the DRM schemes and the ebook formats. I can manage quite a number of different file formats and DRMed formats on my single PRS-500 device only because of the collective action of the volunteer ebook community, some of whom managed to code indispensable pieces of cross-format software like calibre. Many people can’t. DRM leads to limited distribution, since investing in the DRM of a specific platform or corporation means that you trust that platform or corporation to exist ten or twenty years from the date of your book purchase. That is preposterous to anyone with a working mind.
The average lifetime of a corporation in America is about ten or fifteen years, and that’s assuming it will continue to maintain and support whatever DRM scheme it came up with until the very last moment. You can browse through your old paper books ten or twenty years from now, and your children and children’s children will be able to read those books or sell them second hand, ensuring a certain degree of propagation of the written content. With DRMed books, it’s highly unlikely that your own children will be able to access your books, and whether you yourself will be able to read your favorite passage years from now will be decided by a boardroom composed of people who don’t know you and quite possibly don’t care whether you get to read or not. Even if we don’t consider faraway scenarios like this, the danger posed by DRM to the general propagation of ebooks into the larger market is obvious, owing to the simple fact that DRMed ebooks impose limits upon their own market and distribution. The first thing most people encounter whenever they browse to an ebook store that isn’t Amazon is this: name of the book, followed by LIT, PDF, BBeB, MOBI, etc. When users somehow manage to find the book they want to purchase (despite the severely limited selection in most of those stores), they are faced with a multitude of options as to the format of the book, most of them incompatible with each other. From what I know of people who are not familiar with ebook formats, this is the step at which most of them will just give up and go buy a paper book at the local bookstore for only slightly more than the DRMed digital copy, or maybe even less if they know how to shop around on eBay. Even a larger-scale distributor like Amazon, with its almighty capacity to push its own content onto its own platforms, is basically playing on an uneven field. The reality is that people will inevitably ask questions about the future of their books, and all Amazon can do is cross its fingers and wish that doesn’t happen anytime soon. Limiting your own source of income and praying for only good things to happen in the future is not a valid business strategy.

The valid business strategy in the near future would be to get rid of the DRM scheme entirely. For everyone. Even a giant like Amazon is hedging an uncertain bet with the DRM restrictions on its ebooks. Smaller distributors like the Sony ebook store don’t stand a chance. Just sell ebooks like you sell books. Let the market grow and let more people get hooked on using ebooks on ebook reader devices. There are cellphones and laptops, sure. But the reality is that they don’t compare to dedicated ebook readers in terms of providing a valid reading experience. Cellphones are supposed to make calls and laptops are for computing, and no one will burn out the batteries on those devices and risk their bill-paying work just to read more books. Once the quantity and quality of DRM-free ebooks reach a critical mass, there will be cheaper ebook readers on the market. That’s the time for Amazon to introduce its new and improved Kindles. Why go for a generic, cheap ebook reader when you can get the same content on a far better machine with an awesome battery, life-saving features, and an innovative interface? The only way to achieve this end with DRM still in the picture would be either to open Amazon’s DRM specifications to other manufacturers, which defeats the purpose of having DRM in the first place, or to have a unified standard DRM for all publishers and distributors that’s compatible across a variety of devices. That would require deal-making and engineering of ungodly devotion, and I doubt even Amazon would be able to pull it off on its own, especially considering that there are markets outside the U.S. as well when it comes to reading materials, both traditional books and ebooks. The market is moving on, and publishers should move along with it instead of trying to hold back the tide.

Internet intelligence

So here’s an interesting short article on the possibility of the internet gaining some type of consciousness due to its network-based, emergence-friendly structure. The author is the famous Ben Goertzel, one of the foremost minds of the futurist/AI school. If you’ve got time you should check out his blog for other articles as well; I’ve found a number of them to be quite compelling. I’ve always been interested in artificial intelligence, though my concentration is artificial life. In time I’ve come to view the two as the same type of system manifesting in different mediums, and I’ve come to think that intelligence is a trait that naturally comes along with the collection of characteristics called life. Intelligence is life and life is intelligence. In that sense I consider even minuscule bacteria to be intelligent, though not in the way we usually think about intelligence. The very fact that a certain collection of molecular machines can work in conjunction to behave in a way that allows it to feed, evade harm, and propagate, even in an evolution-aided, unconscious manner, means that such a system should be considered intelligent. Of course, this is merely my personal view, not backed by evidence-based professional study. It’s more of a personal impression with reasonable causes, something on its way to becoming a hypothesis but not quite there yet as things stand. Considering that I find our current definition of intelligence lacking in many ways, I would draw the ire of many neurobiologists should I proclaim such opinions carelessly. And for some reason there are a lot of neurobiologists around me, so I try to keep my mouth shut most of the time regarding that issue.

Ben Goertzel’s answer to whether the net can become an intelligent construct is somewhat vague, but then he probably can’t help it. The question itself is a bit on the vague side when you think about it, including the whole uncertainty around the definition of intelligence that I just wrote about above. He briefly mentions the pervading ethos of neurobiologists in recent years: many of them believe that intelligence/consciousness is a property that will inevitably emerge from any complex system that has the right sort of internal dynamics. I definitely agree with him on that point, since when you think about it, it’s about the only scientifically feasible explanation of the emergence of intelligence/consciousness that doesn’t attribute the trait to some specific part of the brain (like how René Descartes designated the pineal gland as the seat of the soul). I also suspect that life arises in a very similar manner, and whether that pattern of internal dynamics can be an abstraction applicable to different types of physical systems is a major part of my current research as a fledgling science student (the research that’s helping pay my rent). Hopefully I’ll be able to come up with something in my lifetime, since I view the possibility of such a universal theoretical platform as a big game changer in the upcoming human century, something that might well change the world we live in along with applications of nanotechnology and modular biology.

Will the internet itself become intelligent at some point? I’m sure it will. Dr. Ben Goertzel points out that the internet is way too fragmented to display a coherent vision of an artificial intelligence, and instead suggests that there might be a way to construct a sort of unifying backbone using the network infrastructure of the internet itself as a raw data feed and complexity provider for that central structure. It makes sense, in the way that no one really thinks about it before someone else says it first. Most complex emergent systems, when laid out using elements of graph theory (graph theory the mathematical discipline; we are not talking about bar graphs and such nonsense here, for those who haven’t been keeping tabs on mathematics), display an inexplicable tendency to form central clusters around a limited number of nodes instead of distributing indefinitely. And the change usually isn’t gradual or predictable. It happens rapidly past a certain critical threshold, as Stuart Kauffman put it very succinctly in his book “At Home in the Universe.” The internet is very obviously following that pattern. The last graphic map of the internet I saw displayed a small number of nodes (websites/services) with an overwhelming number of links, alongside a great many nodes with only a handful of links each. A similar pattern is also observed in the growth of neural pathways and the formation of galactic clusters, and who knows what other phenomena in this universe have escaped our notice, considering that complexity science is still a new field. Now, I don’t have a very clear idea of what form that central structure would have to take to make the internet intelligent to an observable degree. I assume it would be something on par with designing a CNS for the distributed system that is the internet, possibly with a hint of recursive structure à la Douglas Hofstadter, but this is all just ideas bouncing around, and I have no idea what physical/informational form such a construct would take.
I’d assume it is something far past the simple matter of linking a lot of links within network nodes or providing raw processing power (that would be like saying any game of go can be won with a large enough number of stones, which is just dumb; this isn’t chess, kiddo). I should definitely give some more thought to this; ideas on the nature of the ‘central structure’ might well be the catalyst I’ve been looking for.
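Incidentally, the hub-forming tendency described above is easy to demonstrate. Here is a minimal sketch (my own toy model in the spirit of preferential attachment, not anything from Goertzel's article): a network grows one node at a time, each new node linking to an existing node with probability proportional to that node's current degree, and a handful of heavily linked hubs emerge on their own.

```python
import random

def grow_network(n_nodes, seed=42):
    """Grow a network by preferential attachment: each new node links to an
    existing node chosen with probability proportional to its current degree
    (the 'rich get richer' dynamic behind hub formation)."""
    rng = random.Random(seed)
    # 'stubs' lists each node once per link it has, so a uniform pick from
    # this list is exactly degree-proportional sampling.
    stubs = [0, 1]          # start with two nodes joined by one link
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = rng.choice(stubs)
        degree[new] = 1
        degree[target] += 1
        stubs.extend([new, target])
    return degree

degrees = grow_network(5000)
avg = sum(degrees.values()) / len(degrees)
print(f"average degree: {avg:.2f}, largest hub degree: {max(degrees.values())}")
```

Even though every node enters the game with a single link (so the average degree stays near 2), the oldest and luckiest nodes accumulate dozens of connections, a lopsided distribution much like the internet maps I mentioned.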

The problem that continues to bother me whenever I think of artificial intelligence is the vague definition of intelligence we seem to share. Just how can we tell what is intelligent and what is not? Most definitions at the moment seem to be about figuring out how human-like other organisms/systems are, without regard to the actual ‘intelligence’ of that organism/system. I may not be a professional, but I smell a very anthropocentric perception whenever I read something that pertains to the nature of intelligence. If intelligence is about being able to communicate with other beings, then antisocial foreigners are not intelligent. If intelligence is about being able to react to the environment so that you can find sources of food and multiply, then bacteria are intelligent. Maybe even viruses. Neither of them has any sort of nervous system like we ‘higher organisms’ do, which makes the problem of intelligence a bit more complex.

The internet may become intelligent someday. This is the year the internet will have as many hyperlinks as there are synapses in a human brain, or more. The real question is, how will we be able to tell if it is intelligent? Are we looking for intelligent traits or are we looking for human traits? How would we be able to tell the difference when the time comes? Maybe the first machine intelligence that blossoms on the world wide web will be trampled by us as a mere bug in the system. After all, we do it to each other all the time.

On a little side note, the DIYbio NYC group had its second meeting this Monday. We made a gel box, extracted DNA, and had a jolly good time. More on that later.

Tweeting the future

A quick post in the morning before going off to school/work. There’s nothing like a little freewrite-ish post to prepare for a day of hard work ahead. I keep meaning to do one of the more ‘serious’ posts on this blog, but for some reason my fingers stop typing whenever the topic gets a little too professional in any way. When I do spur-of-the-moment writing, however, I can write for hours on end, on all sorts of topics from the personal to the somewhat more academic, something that’s really beginning to piss me off. I can write when I want to but I can’t write when I should be writing.

Most of you have heard of Twitter by now. I’ve been on it a lot lately, jotting down little notes and thoughts via SMS and sometimes even having a small conversation on it from time to time. The amount of theme-specific information I can get from Twitter, on topics from Android development to synthetic biology, is second only to FriendFeed, except that Twitter has the added benefit of being mobile and more active (I can’t remember the last time someone actually used the physicist room on FriendFeed. What gives?). Most of all, Twitter provides a tool to create a constant thought-stream from my brain to the net that can be indexed and searched later, by myself or others. Twitter is one of those things that doesn’t sound like much on paper but turns out to be really handy once you figure out how to use it properly. I’m willing to bet that if some sort of ubiquitous connection to the net is ever implemented in human beings (like the Clatter system imagined by Warren Ellis), it will take the form of Twitter rather than IM protocols.

The real-life examples of Twitter being put to good use are too numerous to list here in their entirety. Lots of people heard about the Mumbai attacks the moment they happened from people standing at the actual ground zero, streaming messages to the net as the events came to pass. I heard about the Russian/US satellite collision faster through Twitter than through the local news. Now, these examples are at best gonzo journalism that may or may not appeal to some people out there. How about this? It’s a PLoS article on the benefits of a microblogging tool like Twitter in conference reporting. Twitter provides access for an enthusiastic public of scientific bent to gather insight into major academic events, and into the concise key points that might otherwise be lost in the bustle of a person-to-person conference. I myself tried to do a little microblogging during Synthetic Biology 4.0 in Hong Kong, something I didn’t get to do much owing to the difficulty I had with my laptop during the event (like trying to find a suitable power converter). My understanding was that a lot of people were still very interested in the venue, from both the professional and hobbyist sectors of the public. Twitter provides an efficient networking tool for fellow professionals to share information and insight over the net and beyond.

All of this might mean nothing yet. The medium of Twitter is new. The very nomenclature of microblogging is quite new to most of us, and the bubble we are experiencing may someday die out, perhaps even with Twitter itself. However, I suspect that the format of microblogging itself will only mature as time goes on, doing what it does best: providing a human-to-network interface, where everyone becomes a broadcasting center with their stream of thoughts encoded into digital information regardless of physical location, accessible by the net as a whole. There will be setbacks, and most of the content on the thought-streams will be useless. I mean, who really cares whether someone in Brussels had pizza for lunch or not? We must keep in mind, however, that in any form of media, worthwhile content is a mere fraction of the total output of said media (I think someone came up with a math for this, but I can’t quite recall it in the morning rush). There are probably thousands of new books published per day. How many of them are actually worth reading? How many do you actually get to read during your lifetime? The same can be said for movies, or even academic papers in printed journals.

People are still looking for ways to define what microblogging is and how to use it properly, both in its physical usage and in integrating its results into the conventional infosphere, like data mining for information within the thought-streams provided by people all over the place. This isn’t some random text cloud we are talking about. This is information already filtered once or more by living, thinking human beings according to their interests. Google and other information-based corporations are probably eyeing the Twitter-verse and other potential microblogging services as if they were goldmines.

The potential of Twitter and Twitter-like microblogging services as a sort of radio station of the future-present is really interesting to me. The information people stream into Twitter can be channeled through cellphone SMS, providing ubiquitous access to information. Say you like the works of Bruce Sterling and are interested in hearing more of his thoughts. You can set your Twitter account so that you receive his updates via SMS, wherever you are. That’s basically a radio station, isn’t it? Except that Twitter isn’t censored or regulated by conventional authority, as is the case with a normal or even a pirate radio station. Twitter, it turns out, is the result of the abstraction of modern technology and infrastructure into simple little pieces that can be integrated with each other.

The question of how best to use Twitter still remains a great unknown for me. I admit that I am a moderate Twitter user, doing everything from complaining about some daily event to jotting down notes or thoughts on artificial life and such when I am on the move. I even set up my cellphone so that I can receive updates from some of the more eccentric personalities across the globe at my convenience. What I can’t figure out at the moment is how to use all this ‘properly.’ Every time I use Twitter I am surprised by its potential, and at the same time I am enveloped by a certain uneasy feeling that I still do not understand Twitter, and that there must be some way of using it properly. I feel as though there is some arcane method to Twitter that escapes my notice every time I send or receive an SMS update on my cellphone. And that empty feeling makes it impossible for me to predict the future of Twitter, and the future of a world with microblogging.

Microblogging is a natural evolution of blogging for people who don’t like to write much. Such a statement might sound like a bad thing, but it isn’t. Some people don’t want to write stories. They just want to write down ‘something’ without spending a large portion of their life doing it. Not everyone can end up writing ‘In Search of Lost Time.’ Microblogging combined with ubiquitous communications technology gives people an outlet for whenever they feel like doing something. It plays on the basic human instinct to be doing something all the time instead of lying on our backs with dead-fish eyes. And the result of the medium of ubiquitous microblogging is a continuous stream of thoughts posted on the net, numbering in the millions and counting. All of them minable for information, all of them capable of being broadcast to any cellphone and any machine with internet access, instantaneously. This makes the nomenclature of Web 2.0 look old and grumpy. I can’t even begin to imagine what kind of future this picture will evolve into, because I don’t understand what’s happening. I don’t think anyone has a clear picture of what’s happening at this moment.
I guess I should tweet more for the time being.

Japan: Robot Nation

Here’s the link to the documentary in full at current.com. It’s about twenty or so minutes long (why is it so hard to embed video on WordPress?).

Unlike some other (let’s be honest: most other) Japan/robot documentaries, this one focuses on the social conditions behind Japan’s apparent love of robotics. It casts a harsh yet realistic light on the state of Japanese society and its labor market, something I am somewhat familiar with on an indirect level through the experiences of those close to me.

I only wish the documentary were longer. They had a lot of avenues to explore in depth, Japanese society being one of the most complex human organizations around these days (but then, aren’t all human organizations complex?).

They briefly mention the difficulty foreign immigrants (even those of foreign heritage native to Japan) face in mainstream Japanese society. Caucasians get an easier time, though, especially if you’re rich and hold a professional job. It must be noted that while Japanese society has its issues, individual folks are pleasant, friendly people.

I wonder what the venerable leaders of the United States are planning in preparation for the incoming onslaught of robotic workforces?

Plenty of room

Just a quick note before I drift off to study for my exams.

I re-read the famous ‘There’s Plenty of Room at the Bottom‘ speech by Richard Feynman recently. Aside from being inspired by his genius and foresight (as usual), I think I hit on an interesting idea.

At the end of the speech, Feynman half-jokingly proposes a contest for high school students with the goal of writing smaller than anyone else. I think we have enough industrial infrastructure and technical expertise to make that contest come true, albeit with a possibly different goal than simply ‘writing small,’ and perhaps geared toward undergraduate students.

Those of you who have been following this blog or any other of my web presences know that I am deeply interested in synthetic biology, to the extent that I ventured into the recent Synthetic Biology 4.0 conference in Hong Kong armed with my meager knowledge of genetics and molecular biology. In fact, I’ve been so interested in the discipline that I’ve been driving my professors crazy with questions, delving deep into molecular biology texts and courses outside my proclaimed field of expertise (which is plasma physics), even trying my hand at a bit of crude wet work.

The reason I became aware of the field of synthetic biology, and began taking its possibilities and my involvement with it seriously, was the International Genetically Engineered Machine competition, or iGEM. It is an international competition for high school to undergraduate students to build the best synthetic organism (or genetically engineered machine) using open-source biological parts termed BioBricks, which can be pieced together like a puzzle to form a working genetic system complete with a chassis (usually E. coli or yeast). The quality of the competition entries has been phenomenal so far. The winning entry in this year’s iGEM competition actually prototyped a whole new vaccine against gastritis. It took undergraduates six months to come up with that (with the help of graduate-level faculty). Just imagine what people will be able to do once we streamline the whole process and work out some of the kinks inherent in dealing with biological systems!

Now, let’s imagine something similar with nanotechnology. I believe it is possible to put together some minimal nanotech components/chassis in the fashion of the BioBricks, open-source them, and build a high school/undergraduate-level competition around them. Of course, the things we can come up with using today’s technology won’t be as vibrant as the projects pursued by the iGEM teams, but I still believe we have enough room for ingenuity and improvisation in constructing minimal nanotechnological systems and parts. With suitable industrial support, an international nanoengineered machine competition (iNEM?) might lend the field of nanotechnology the accessibility and interest it rightly deserves.

Cory Doctorow excerpt and musings

In answer to a question posed by an interviewer at the end of the comic “Futuristic Tales of The Here and Now”:

TW: 

Many people in your story suffer from a disease you term as “Zombiism.” Is this comparable to, say, the horrendously extreme amount of AIDS cases in Africa, a continent also rife with warfare?

CD: 

Yeah, and all the other diseases – like malaria, which kills one person every second – that our pharma companies can’t even be bothered with because boner-pills are so much more profitable.
We grant global monopolies to these companies over the reproduction of chemical compounds. They argue that they need these patents because otherwise, no one would do the core research they do and we’d all be dead of disease without them.
But what do they spend their regulatory windfall on? Figuring out how to reformulate heartburn pills that are going public domain so that they can be re-patented, cheating the system and the world out of twenty more years of low-cost access to their magic potions; marketing budgets that beggar the imagination; lobbyists arguing for stricter rules. 
Meanwhile, people are actually dying, in great numbers, of diseases treatable by drugs that Roche and Pfizer and the rest of the dope-mafia won’t sell them at an accessible price, and won’t let them make themselves.

This reminds me, there were quite a number of people representing pharmaceutical interests at the Synthetic Biology 4.0 conference in Hong Kong… The possibility of building or reconfiguring microbial organisms to produce novel chemicals is certainly an attractive prospect, and it is fast becoming an industrially viable production method for rare chemicals. A case in point: the winning entry in the recent iGEM 2008 competition was a synthetically designed vaccine against Helicobacter pylori, which causes gastritis, built using “immunobricks,” biological components designed in-house by undergraduates (albeit with the support of graduate-level faculty and facilities). The BioBricks Foundation (upon which most synthetic biology practice today is based) runs on open source principles, much like many of the server-side technologies and programming languages in use today, and the possible social and economic ramifications of the growing field of synthetic biology are promising even at this early stage of development.

Are we seeing the beginning of the end for the current generation of the pharmaceutical industry? Vaccines and pills developed by relatively small-scale biotech developers, perhaps even run by some of the poorer nations to counter endemic diseases? Perhaps in such a universe, intellectual property rights could truly be something that protects the interests of the public instead of being a noose around their necks.

I’ve been going through a number of Cory Doctorow’s works lately (thank goodness for DRM-free ebook readers). He has released many of his works under Creative Commons licenses, freely available on the net, and I can’t recommend them highly enough. Visit his blog for a list.