Petaflop Desktop Computing

December 24th, 2008 | Posted by paul in Uncategorized

[Image: graphene chip]

Sometime in the next 10 to 15 years, personal computers could be running up to a million times faster than they are today. Until recently the computer industry had no clear idea of how such speeds would be possible, since silicon is rapidly hitting its limits as a microprocessor material.

The first of these limits was reached in 2004, when chip manufacturers found they could not push clock speeds past 4 GHz without overheating the chips.

To get more performance out of new chips, manufacturers have taken advantage of shrinking process sizes to squeeze multiple processing cores into the space a single core occupied before.  With multiple cores, tasks can be shared among them, allowing more computation to be done per clock cycle.

The current king of chip design is Intel's Core i7, which is built on a 45nm process and has four cores.  Running at full capacity, it can reach speeds of about 100 gigaflops (100 billion calculations per second). Next year, Intel plans to move its chip designs to a 32nm process, allowing a doubling of transistor density and eight cores on a chip. The next two process nodes are 22nm and 14nm.  However, at 14nm quantum tunneling begins to interfere with a chip's ability to function.  It is now widely acknowledged that 14nm represents the end of silicon for use in computers. With graphene, feature sizes can be reduced to 0.5nm, resulting in several trillion transistors per chip.
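To see roughly why 0.5nm features imply trillions of transistors, here is a back-of-envelope sketch. It assumes (idealized) that transistor density scales with the inverse square of the feature size, and takes roughly 731 million transistors as the 45nm quad-core baseline; both figures are approximations, not a real process model:

```python
# Rough back-of-envelope sketch: assume transistor density scales with the
# inverse square of the feature size (an idealized assumption), starting from
# a ~731-million-transistor quad-core die on today's 45nm process.
BASELINE_NM = 45
BASELINE_TRANSISTORS = 731e6

for node_nm in (32, 22, 14, 0.5):
    scale = (BASELINE_NM / node_nm) ** 2
    transistors = BASELINE_TRANSISTORS * scale
    print(f"{node_nm:>4} nm: ~{scale:7.0f}x density, ~{transistors:.1e} transistors per chip")
```

The 32nm row comes out at roughly double today's density, and the 0.5nm row at several trillion transistors, which is where the numbers in this post come from.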

Carbon in a Post-Silicon World

Ever since the discovery of carbon nanotubes, companies have recognized the huge potential carbon holds for future computers.  As a computing material, carbon has two main advantages.  First, it does not suffer from silicon's heat limitation, allowing chips to run at terahertz frequencies. This represents more than a thousandfold increase in raw computing speed. Second, carbon transistors operate better the smaller you make them. Ray Kurzweil, in his book The Age of Spiritual Machines, talks about carbon nanotubes being used to create very dense 3D computational structures running millions of times faster than today's computers.

The challenge for researchers is that they still don't know how to mass-produce carbon nanotubes with the consistent size and precise placement needed to build microprocessors.  In 2006, Georgia Tech professor Walt de Heer demonstrated a proof-of-principle transistor made of graphene. Graphene is the same material as a nanotube, just flattened out into an atomic version of chicken wire instead of wrapped into a tube. The basic physics of graphene remain the same – and in some ways its electronic properties actually improve – in pieces smaller than a single nanometer. Graphene transistors start showing advantages and good performance at sizes below 10 nanometers – the miniaturization limit at which silicon technology is predicted to fail.

This equates to a raw speed increase of more than 1,000-fold and a density increase of more than 8,000-fold over the 45nm process we have today.  That works out to dozens of teraflops per core, and between four and eight thousand cores on a single chip!  Assuming software developers could utilize even a fraction of that capacity, speeds far in excess of a petaflop (a quadrillion calculations per second) would be possible in everyday computing devices.
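Here is a minimal sketch of that arithmetic. The per-core throughput, clock speedup, core counts, and utilization fraction are all rough assumptions drawn from the estimates above, not measurements:

```python
# Back-of-envelope check on the petaflop claim (all figures are rough
# assumptions drawn from the estimates in this post, not measurements).
per_core_gflops_today = 100 / 4          # ~100 gigaflops spread across 4 cores
clock_speedup = 1000                     # gigahertz -> terahertz frequencies
per_core_tflops = per_core_gflops_today * clock_speedup / 1000   # ~25 teraflops per core

for cores in (4000, 8000):
    peak_pflops = per_core_tflops * cores / 1000
    usable_pflops = 0.05 * peak_pflops   # assume software taps only 5% of peak
    print(f"{cores} cores: ~{peak_pflops:.0f} petaflops peak, "
          f"~{usable_pflops:.0f} petaflops at 5% utilization")
```

Even at 5% utilization the sketch lands at several petaflops, which is why the claim above doesn't depend on software using the whole chip.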

When asked in 2006, Professor Walt de Heer said it would take at least a couple of decades before graphene reached its potential.  However, just two weeks ago IBM announced the creation of a 50nm graphene transistor running at 26 GHz.  For comparison, the best commercial chips today are built at 45nm and run at a paltry 3.6 GHz.  Given this breakthrough, there is every reason to believe graphene processors will be ready to take over from silicon as it runs out of steam.

The Roaring 2020s: Ubiquitous Petaflop Computing

It's hard to imagine what the world will look like with cheap and pervasive petaflop computing. The degree of advancement it makes possible is far greater than anything we've seen in the last 50 years.  At the very least, computers will become invisible and blend seamlessly into our environments.  Our interaction with them will be completely intuitive and natural.  They will anticipate our needs and adjust themselves to maximize an intuitive, seamless connection. They will respond to touch, gesture, voice and emotional states as if they were our best friend.  It will be the beginning of intimate, symbiotic computing. Computer graphics will be as good as reality, allowing a seamless blend of the digital and analog worlds.  Fully realistic virtual worlds generated on the fly will allow immersion in vast and endlessly novel virtual realities.

Although I believe artificial intelligence's human-level reasoning abilities will continue to be overrated, believable agents will take on many of the characteristics of a human personality, with near-human-level reasoning in games, and will act as digital “servants” that augment our own intelligence and our interaction with the vast amounts of information that would overwhelm us today.  In other words, we won't suffer from information overload; we'll interact with information seamlessly, in a much more fun and natural way than we do today.

By the 2020s, scientists will probably figure out how to finally use carbon nanotubes to make 3D computer cores with speeds in the exaflop range (a quintillion operations per second).  By then we'll probably see the beginnings of true human-computer symbiotic intelligence far exceeding our own.


Photosynth

April 18th, 2007 | Posted by paul in Uncategorized

Photosynth, the brainchild of Blaise Aguera y Arcas, takes image interaction to a whole new level.

According to the website, “Photosynth takes a large collection of photos of a place or an object, analyzes them for similarities, and then displays the photos in a reconstructed three-dimensional space, showing you how each one relates to the next.”
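This isn't Photosynth's actual code, but a minimal sketch of the first step such a system performs: detecting local features in two photos and matching them to judge how strongly the images relate. It assumes the OpenCV library, and the file names are placeholders:

```python
import cv2  # OpenCV, assumed installed; file names below are placeholders

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive local features in each photo.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match features between the two photos; many good matches means the photos
# overlap and can be placed relative to each other in a reconstructed scene.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matching features between the two photos")
```

Repeat that across a whole collection and you have the raw relationships from which a three-dimensional layout of the photos can be reconstructed.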

Here’s Blaise giving his “Jaw-Dropping” demo at TED2007.

Imagine using this software along with Jeff Han's multi-touch display. Wow.  Add the rapidly growing amount of imagery from aerial photographers, and Google Earth will become increasingly realistic and real-time.   Another step towards hypermediation.


Multitouch Interactive Display

March 29th, 2007 | Posted by paul in Uncategorized

This is very cool.

Perceptive Pixel, a new company started by Jeff Han, has created the first multi-touch, liquid, fun and beautifully interactive display.

Here’s a quick demo:

And here’s a fully detailed demo at TED2007. Simply amazing, and definitely the future of how we’ll be able to interact with computers:


Exceeding Moore’s Law

January 6th, 2003 | Posted by paul in Uncategorized

In the most recent IEEE survey, many respondents think Moore's Law will be exceeded later this decade:

Among the suggested candidates were various forms of nanotechnology (carbon nanotubes, resonant tunneling diodes, molecular transistors), photonics, giant magnetoresistance-based memories, and three-dimensional IC stacking. “I think that smart developers are investing in biologically based computing, quantum computing, and other alternatives to the IC transistor,” wrote one Fellow. “This area is ripe for a disruptive technology.”

I think Ray Kurzweil was correct in predicting that doubling times will decrease to 12 months from the current 18. Originally, Moore's Law was based on a doubling every two years. These increases in computing power not only bring a corresponding increase in productivity, but also enable entirely new ways of doing things. I expect to see a lot of technological disruption, with plenty of applications we never thought possible before. The dot-com bubble may have burst, but the true power of a networked world is only just beginning.
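For a rough sense of what the doubling period means in practice, here is a quick calculation (my own illustration, not from the survey) of growth over a decade under each assumption:

```python
# Rough illustration: how much computing power multiplies over ten years
# for 24-, 18-, and 12-month doubling periods.
YEARS = 10
for months_per_doubling in (24, 18, 12):
    doublings = YEARS * 12 / months_per_doubling
    growth = 2 ** doublings
    print(f"doubling every {months_per_doubling} months: ~{growth:,.0f}x after {YEARS} years")
```

Moving from an 18-month to a 12-month doubling takes a decade of progress from roughly a hundredfold to roughly a thousandfold.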


From DNA Storage to Exopaleontology

January 6th, 2003 | Posted by paul in Uncategorized

Appearing on Slashdot, Using Bacterial DNA For Data Storage:

This is an attempt to achieve the ultimate in archival storage (one of the modified bacteria can tolerate 1000X more radiation than a human being). Now just suppose that the “junk DNA” in the human genome is the documentation package for the machine code. Who wrote that manual?
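The encoding scheme used in the actual research isn't described in the post, but the basic idea behind storing data in DNA can be sketched in a few lines: map every two bits of data onto one of the four bases. This is a toy illustration of my own, not the researchers' method:

```python
# Toy illustration of DNA data storage: map every two bits to one base.
# (Not the encoding used in the actual research.)
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"archive"
dna = encode(message)
print(dna)                      # the synthetic base sequence to be written into the organism
assert decode(dna) == message   # and it round-trips back to the original bytes
```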

Reminds me of the Star Trek episode “The Chase,” in which Dr. Galen, Captain Picard's old archaeology professor, found genetic data blocks from various species around the galaxy stored in the junk portion of each species' DNA, including our own. When a sufficient number of these data blocks were put together, they completed a stellar map identifying the precise location of the origin of life on our planet and countless others. The jury is still out on the panspermia theory, but my own hunch is that there is plenty of intelligence out there vastly older and greater than we are.
