LSG (Graphene) flexible capacitor

A remarkable technology breakthrough, made using a $50 DVD burner.

Courtesy Extreme Tech:

A team of international researchers has created graphene supercapacitors using a LightScribe DVD burner. These capacitors are highly flexible and have energy and power densities far beyond those of existing electrochemical capacitors, possibly within reach of conventional lithium-ion and nickel-metal hydride batteries.

The team, led by Richard Kaner of UCLA, started by smearing films of graphite oxide — a cheap and very easily produced material — onto blank DVDs. These discs are then placed in a LightScribe drive (a consumer-oriented piece of gear that costs less than $50), where a 780nm infrared laser reduces the graphite oxide to pure graphene. The laser-scribed graphene (LSG) is peeled off, placed on a flexible substrate, and cut into slices to become the electrodes. Two electrodes are sandwiched together with a layer of electrolyte in the middle — and voilà, a high-density electrochemical capacitor, or supercapacitor as they’re more popularly known.

Now, beyond the novel manufacturing process — the scientists are confident it can be scaled for commercial applications, incidentally — the main thing about LSG capacitors is that they have very desirable energy and power characteristics. Power-wise, LSG supercapacitors are capable of discharging at 20 watts per cm³, some 20 times higher than standard activated-carbon capacitors, and three orders of magnitude higher than lithium-ion batteries. Energy-wise, we’re talking about 1.36 milliwatt-hours per cm³, about twice that of activated carbon, and comparable to a high-power lithium-ion battery.
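As a rough sanity check (this calculation is mine, not the article's), the two densities together imply how long a cubic centimeter of LSG could sustain its peak output:

```python
# Implied full-power discharge time for 1 cm^3 of LSG supercapacitor,
# derived from the energy and power densities quoted above.
energy_wh = 1.36e-3   # 1.36 mWh/cm^3, converted to Wh
power_w = 20.0        # 20 W/cm^3 peak discharge

discharge_s = energy_wh / power_w * 3600  # hours -> seconds
print(round(discharge_s, 2))  # -> 0.24
```

In other words, a cell this dense empties in about a quarter of a second at full power — exactly the high-power, modest-energy trade-off that distinguishes supercapacitors from batteries.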

These characteristics stem from the fact that graphene is the most conductive material known to man — the LSG produced by the scientists showed a conductivity of 1738 siemens per meter (yes, that’s a real unit), compared to just 100 siemens per meter for activated carbon. The performance of capacitors is almost entirely reliant on the surface area of the electrodes, so it’s massively helpful that one gram of LSG has a surface area of 1520 square meters (a third of an acre). As previously mentioned, LSG capacitors are highly flexible, too, with no effect on their performance.
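Those figures check out with a little unit arithmetic (my quick verification, using the standard acre-to-square-meter conversion):

```python
# Verify the quoted figures: 1520 m^2/g surface area ("a third of an acre")
# and the LSG vs. activated-carbon conductivity ratio.
ACRE_M2 = 4046.86              # square meters in one acre

area_m2_per_g = 1520
print(round(area_m2_per_g / ACRE_M2, 2))   # -> 0.38 acre, roughly a third

lsg_conductivity_s_per_m = 1738
ac_conductivity_s_per_m = 100
print(round(lsg_conductivity_s_per_m / ac_conductivity_s_per_m, 1))  # -> 17.4x
```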

These graphene supercapacitors could really change the technology landscape. While computing power roughly doubles every 18 months, battery technology is almost at a standstill. Supercapacitors, which suffer virtually zero degradation over 10,000 cycles or more, have been cited as a possible replacement for low-energy devices, such as smartphones. With their huge power density, supercapacitors could also revolutionize electric vehicles, where huge lithium-ion batteries really struggle to strike a balance between mileage, acceleration, and longevity. It’s also worth noting, however, that lithium-ion batteries themselves have had their capacity increased by 10 times thanks to the addition of graphene. Either way, then, graphene seems like it will play a major role in the future of electronics.



Graphene Goes Green

August 2nd, 2010 | Posted by paul in Uncategorized

From Smart Planet:

Two papers published this week in ACS Nano detail the two ends of graphene’s life cycle. While many methods have been proposed for making graphene and graphene oxide (GO) in bulk, one paper demonstrates how to do so without emitting toxic gas.

The other study addresses how graphene will decompose in the environment. It finds that the nearly ubiquitous Shewanella bacteria are capable of breaking GO down into graphene. Stacks of graphene (single-atom-thick carbon layers) form graphite, which is considered ecologically benign.

This is good news as graphene’s potential is enormous.


Petaflop Desktop Computing

December 24th, 2008 | Posted by paul in Uncategorized


Sometime in the next 10-15 years, personal computers could be running up to a million times faster than they are today. Until recently the computer industry had no clear idea of how such speeds would be possible, as silicon is rapidly hitting its limits for use in microprocessors.

The first of these limits was reached in 2004, when chip manufacturers were unable to get chips running faster than 4GHz without melting them.

To get more performance out of new chips, manufacturers have taken advantage of shrinking die sizes to squeeze more processing cores into the same space a single core took before.  With multiple cores, tasks can be shared between cores, allowing more computation to be done per clock cycle.

The current king of chip design is Intel’s Core i7, which is built on a 45nm process and has four cores. Speeds of 100 gigaflops (100 billion calculations per second) are possible running at full capacity. Next year, Intel plans to move its chip designs to a 32nm process, allowing a doubling of transistor density and 8 cores on a chip. The next two process nodes are 22nm and 14nm. However, at 14nm, quantum tunneling begins interfering with a chip’s ability to perform. It is now widely acknowledged that 14nm represents the end of silicon for use in computers. With graphene, feature sizes can be reduced to 0.5nm, resulting in several trillion transistors per chip.
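The density claims follow from the usual rule of thumb that transistor density scales with the inverse square of the feature size. Here is a quick sketch of my own (not from the article) running the nodes above through that rule:

```python
# Transistor-density scaling between the process nodes mentioned above,
# using the rule of thumb: density ~ 1 / (feature size)^2.
nodes_nm = [45, 32, 22, 14, 0.5]   # 0.5nm is the graphene figure from the text

for old, new in zip(nodes_nm, nodes_nm[1:]):
    print(f"{old}nm -> {new}nm: ~{(old / new) ** 2:.1f}x density")

# Cumulative jump from 45nm silicon straight to 0.5nm graphene:
print(f"45nm -> 0.5nm: ~{(45 / 0.5) ** 2:.0f}x")  # -> ~8100x
```

Note that the 45nm-to-32nm step gives almost exactly the 2x doubling mentioned above.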

Carbon in a Post-Silicon World

Ever since the discovery of carbon nanotubes, companies have recognized the huge potential carbon can play in future computers.  Carbon as a computing material has two main advantages.  First, it does not suffer from the heat limitation that silicon does, allowing chips to run at terahertz frequencies. This represents more than a 1000-fold increase in raw computing speed. Second, carbon transistors operate better the smaller you make them. Ray Kurzweil, in his book The Age of Spiritual Machines, talks about carbon nanotubes being used to create very dense 3D computational structures running millions of times faster than today’s computers.

The challenge for researchers is that they still don’t know how to mass-produce carbon nanotubes with the consistent size and accurate placement necessary to build microprocessors.  In 2006, Georgia Tech professor Walt de Heer developed a proof-of-principle transistor constructed of graphene. Graphene is the same material as nanotubes, just flattened out like an atomic version of chicken wire instead of wrapped up into a tube. The basic physics of graphene remains the same — and in some ways its electronic properties actually improve — in pieces smaller than a single nanometer. Graphene transistors start showing advantages and good performance at sizes below 10 nanometers, the miniaturization limit at which silicon technology is predicted to fail.

This equates to raw speed increases of over 1,000x, and density increases of more than 8,000x over the 45nm process we have today.  That works out to dozens of teraflops per core, and between four and eight thousand cores on a single chip!  Assuming software developers could utilize even a fraction of that capacity, speeds far in excess of a petaflop (a quadrillion floating-point operations per second) would be possible in everyday computing devices.
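Those headline numbers can be reproduced from the figures earlier in the post. A back-of-the-envelope sketch of my own, under the post's assumptions of a 1,000x clock speedup and the quoted core counts:

```python
# Back-of-the-envelope check of the petaflop claim, using figures from the post.
i7_total_gflops = 100          # quoted for the 4-core Core i7
i7_cores = 4
per_core_gflops = i7_total_gflops / i7_cores     # 25 Gflops per core today

clock_speedup = 1000           # gigahertz -> terahertz (from the text)
graphene_core_tflops = per_core_gflops * clock_speedup / 1000  # 25 Tflops/core

cores = 4000                   # lower bound quoted in the text
total_pflops = graphene_core_tflops * cores / 1000
print(total_pflops)  # -> 100.0 petaflops, comfortably "far in excess of" one
```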

When asked about graphene’s potential in 2006, Professor Walt de Heer said it would take at least a couple of decades to realize.  However, just two weeks ago, IBM announced the creation of a 50nm graphene transistor running at 26GHz.  For comparison’s sake, the best transistor in commercial use today is 45nm and runs at a paltry 3.6GHz.  Given this breakthrough, there is no reason to believe graphene processors won’t be available to take over from silicon as it runs out of steam.

The Roaring 2020s: Ubiquitous Petaflop Computing

It’s hard to imagine what the world will look like with cheap and pervasive petaflop computing. The degree of advancement made possible by petaflop computing is far greater than anything we’ve seen in the last 50 years.  At the very least, computers will become invisible and blend seamlessly into our environments.  Our interaction with them will be completely intuitive and natural.  They will anticipate our needs and adjust themselves to maximize an intuitive, seamless connection. They will be able to respond to touch, gesture, voice, and emotional states as if they were our best friend.  This will mark the beginning of intimate symbiotic computing. Computer graphics will be as good as reality, allowing a seamless blend of digital and analog worlds.  Fully realistic virtual worlds generated on the fly will allow immersion in vast and endlessly novel virtual realities.

Although I believe artificial intelligence will continue to be overrated in its human-reasoning abilities, believable agents will be created that have many of the characteristics of a human personality, with near-human-level reasoning in games, and as digital “servants” that augment our own intelligence and our interaction with the vast amounts of information that would overwhelm us today.  In other words, we won’t suffer from information overload, but will interact with it seamlessly, in a much more fun and natural way than we do today.

By the 2020s, scientists will probably figure out how to finally use carbon nanotubes to make 3D computer cores with speeds in the exaflop range (a quintillion operations per second).  By then we’ll probably see the beginnings of true human-computer symbiotic intelligence far exceeding our own.