Monday, December 29, 2008
Graphene memory device at Rice University
James Tour and colleagues at Rice University have demonstrated a switch (described in Nature Materials) composed of a layer of graphite about ten atoms thick. An array of such switches can be built in three dimensions, offering very high densities of storage volume, far exceeding what we now see in hard disks and flash memory USB widgets. The switch has been tested over 20,000 switching cycles with no apparent degradation. The abstract of the Nature Materials article reads:
Transistors are the basis for electronic switching and memory devices as they exhibit extreme reliabilities with on/off ratios of 10^4–10^5, and billions of these three-terminal devices can be fabricated on single planar substrates. On the other hand, two-terminal devices coupled with a nonlinear current–voltage response can be considered as alternatives provided they have large and reliable on/off ratios and that they can be fabricated on a large scale using conventional or easily accessible methods. Here, we report that two-terminal devices consisting of discontinuous 5–10 nm thin films of graphitic sheets grown by chemical vapour deposition on either nanowires or atop planar silicon oxide exhibit enormous and sharp room-temperature bistable current–voltage behaviour possessing stable, rewritable, non-volatile and non-destructive read memories with on/off ratios of up to 10^7 and switching times of up to 1 μs (tested limit). A nanoelectromechanical mechanism is proposed for the unusually pronounced switching behaviour in the devices.
It will be several years before memories based on these switches are available for laptops and desktops, but it's a cool thing. To my knowledge, the mechanism is not yet known, so there may be some interesting new science involved as well.
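As a rough illustration of why such a large on/off ratio makes read-out so robust, here is a toy Python model of a two-state, two-terminal cell: a write pulse flips the state, and a small read bias senses it without disturbing it. The resistance, read voltage, and threshold values are my own assumptions for the sketch, not figures from the paper.

```python
# Toy model of a two-terminal bistable memory cell (illustrative only;
# the numbers are assumptions, not values from the Nature Materials paper).

ON_RESISTANCE = 1e3      # ohms, assumed low-resistance ("on") state
OFF_RATIO = 1e7          # on/off ratio quoted in the abstract
READ_VOLTAGE = 0.1       # volts, small non-destructive read bias (assumed)

class GraphiticCell:
    """Idealized two-state cell: a write pulse flips the state,
    a low-voltage read leaves it unchanged (non-volatile, non-destructive)."""

    def __init__(self):
        self.state = "off"

    def write(self, bit):
        # A suitable voltage pulse sets or resets the cell.
        self.state = "on" if bit else "off"

    def read(self):
        resistance = ON_RESISTANCE if self.state == "on" else ON_RESISTANCE * OFF_RATIO
        current = READ_VOLTAGE / resistance
        # With a 1e7 ratio, any threshold between the two read currents has
        # enormous margin, so a simple comparator suffices.
        threshold = READ_VOLTAGE / (ON_RESISTANCE * 1e3)  # roughly midway on a log scale
        return current > threshold

cell = GraphiticCell()
cell.write(1)
assert cell.read()       # "on" state reads back as 1
cell.write(0)
assert not cell.read()   # "off" state reads back as 0
```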
Tuesday, December 23, 2008
Encouraging news about mechanosynthesis
Yesterday there was a very encouraging posting (by guest blogger Tihamer Toth-Fejel) on the Responsible Nanotechnology blog, regarding recent goings-on with mechanosynthesis. What the heck is mechanosynthesis? It is the idea that we will build molecules by putting atoms specifically where we want, rather than leaving them adrift in a sea of Brownian motion and random diffusion. Maybe not atoms per se, maybe instead small molecules or bits of molecules (a CH3 group here, an OH group there) with the result that we will build the molecules we really want, with little or no waste. The precise details about how we will do this are up for a certain amount of debate. We used to talk about assemblers, now we talk about nanofactories, but the idea of intentional design and manufacture of specific molecules remains.
The two items of real interest in the CRN blog posting are these.
First, Philip Moriarty, a scientist in the UK, has secured a healthy chunk of funding to do experimental work to validate the theoretical work done by Ralph Merkle and Rob Freitas in designing tooltips and processes for carbon-hydrogen mechanosynthesis, with the goal of being able to fabricate bits of diamondoid that have been specified at an atomic level. If all goes well, writes Toth-Fejel:
Four years from now, the Zyvex-led DARPA Tip-Based Nanofabrication project expects to be able to put down about ten million atoms per hour in atomically perfect nanostructures, though only in silicon (additional elements will undoubtedly follow, probably taking six months each).
Second, people are now starting to use small machines to build other small machines, and to do so at interesting throughputs. An article at Small Times reports:
Dip-pen nanolithography (DPN) uses atomic force microscope (AFM) tips as pens and dips them into inks containing anything from DNA to semiconductors. The new array from Chad Mirkin’s group at Northwestern University in Evanston, Ill., has 55,000 pens - far more than the previous largest array, which had 250 pens.
So there are two take-home messages here. First, researchers are getting ready to work with the large numbers of atoms needed to build anything of reasonable size in a reasonable amount of time. Second, this stuff is actually happening rather than remaining a point of academic discussion.
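To get a feel for what those two numbers mean together, here is a back-of-envelope calculation in Python for building a one-micrometre cube of diamond. The diamond atom density is a standard figure, but the assumption that every tip in a 55,000-tip array could run in parallel at the quoted silicon deposition rate is mine, purely for illustration.

```python
# Back-of-envelope: how long to build a 1-micrometre cube of diamond,
# assuming (hypothetically) that each tip places atoms at the rate quoted
# for the silicon work and that all tips in the array run in parallel.

ATOMS_PER_HOUR_PER_TIP = 10e6          # "about ten million atoms per hour"
TIPS_IN_ARRAY = 55_000                 # Mirkin group's DPN array size

DIAMOND_ATOM_DENSITY = 1.76e23         # carbon atoms per cm^3 in diamond
cube_volume_cm3 = (1e-4) ** 3          # a 1 um cube, expressed in cm^3
atoms_needed = DIAMOND_ATOM_DENSITY * cube_volume_cm3   # ~1.8e11 atoms

hours_single_tip = atoms_needed / ATOMS_PER_HOUR_PER_TIP
hours_full_array = hours_single_tip / TIPS_IN_ARRAY

print(f"Atoms in a 1 um cube of diamond: {atoms_needed:.2e}")
print(f"One tip: {hours_single_tip:.0f} hours (~{hours_single_tip/8766:.1f} years)")
print(f"55,000 tips in parallel: {hours_full_array*60:.0f} minutes")
```

With those assumptions, one tip would need about two years, while the full array would finish in roughly twenty minutes, which is why the jump from single probes to large arrays matters so much.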
Toth-Fejel writes:
What happens when we use probe-based nanofabrication to build more probes? ...What happens when productive nanosystems get built, and are used to build better productive nanosystems? The exponential increase in atomically precise manufacturing capability will make Moore’s law look like it’s standing still.
Interesting stuff.
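To see the shape of the argument, here is a toy comparison under assumed doubling times: if a probe array could build a copy of itself in a week, fabrication capacity would double weekly, while transistor counts under Moore's law double only every couple of years. Both doubling times are assumptions chosen purely to illustrate the curves, not projections.

```python
# Toy comparison: self-replicating fabrication capacity vs. Moore's law.
# Assumptions (mine, for illustration): a probe array builds a copy of
# itself in one week; transistor counts double every 24 months (~104 weeks).

WEEKS = 104                      # simulate two years
probe_capacity = 1.0             # arbitrary starting units of capacity
moore_capacity = 1.0

for week in range(WEEKS):
    probe_capacity *= 2.0                # doubles every build cycle (1 week)
    moore_capacity *= 2.0 ** (1 / 104)   # doubles every ~104 weeks

print(f"After two years, probe-built capacity grew by 2^{WEEKS} ~ {probe_capacity:.1e}x")
print(f"Moore's-law capacity grew by ~{moore_capacity:.1f}x")
```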
Friday, December 05, 2008
Adventures in protein engineering
Proteins are a good material to consider for an early form of rationally designed nanotechnology. They are cheap and easy to manufacture, thoroughly studied, and they can do a lot of different things. Proteins are responsible for the construction of all the structures in your body, the trees outside your window, and most of your breakfast.
Why don't we already have a busy protein-based manufacturing base? Because the necessary technologies have arisen only in the last couple of decades, and because older technologies already have a solid hold on the various markets that might otherwise be interested in protein-based manufacturing. Finally, most researchers working with proteins aren't thinking about creating a new manufacturing base. But people in the nanotech community are thinking about it.
One of the classical scientific problems involving proteins is the "protein folding problem". Every protein is a sequence of amino acids. There are 20 different amino acids, which are strung together by a ribosome to create the protein. As the amino acids are strung together, the protein starts folding up into a compact structure. The "problem" with folding is that, given an arbitrary sequence of amino acids, it's not generally possible to predict how it will fold up, or even whether it will fold up the same way each time.
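A couple of quick numbers show why brute force doesn't work here. The figure of three conformations per residue below is a common textbook simplification (the Levinthal argument), not a measured value.

```python
# Why brute force fails: the space of sequences and foldings is astronomical.

AMINO_ACIDS = 20                 # the standard amino-acid alphabet
CHAIN_LENGTH = 100               # a modest-sized protein

possible_sequences = AMINO_ACIDS ** CHAIN_LENGTH
print(f"Sequences of length {CHAIN_LENGTH}: 20^{CHAIN_LENGTH} ~ {possible_sequences:.1e}")

# Levinthal-style estimate: assume (roughly) 3 backbone conformations per
# residue -- a textbook simplification, not a measured value.
CONFORMATIONS_PER_RESIDUE = 3
possible_folds = CONFORMATIONS_PER_RESIDUE ** (CHAIN_LENGTH - 1)
print(f"Conformations to search per sequence: ~{possible_folds:.1e}")
```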
But maybe you don't need a solution for all possible sequences. Maybe you can limit yourself to just the sequences that are easy to predict. People have been studying proteins for a long time and it's easy to put together a much shorter list of proteins whose foldings are known. Discard any proteins that sometimes fold differently, to arrive at a subset of proteins whose foldings are well known and reliable.
The next issue is extensibility. Having identified a set of proteins whose foldings are easily predictable, would it be possible to use that knowledge to predict the foldings of larger novel amino acid sequences? A trivial analogy would be that if I know how to pronounce "ham" and I know how to pronounce "burger", then I should know how to pronounce "hamburger". A better analogy would be Lego bricks or an Erector set, where a small alphabet of basic units can be used to construct a vast diversity of larger structures.
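Here is a minimal Python sketch of that building-block idea: keep a small library of sequence blocks whose folds we assume are known and reliable, and accept a new design only if it reads as a concatenation of those blocks. The block names and sequences are invented for illustration, and a real design tool would also have to verify that each block still folds the same way in its new context.

```python
# Toy sketch of the "building block" idea: a design is considered
# predictable only if it decomposes into blocks whose folds we trust.
# Block names and sequences below are invented for illustration.

KNOWN_BLOCKS = {
    "helix_A":  "AEELKKAEELKK",
    "turn_G":   "GPNG",
    "strand_B": "TVTVTV",
}

def decomposes_into_known_blocks(sequence, blocks=KNOWN_BLOCKS):
    """Greedy check: can the sequence be read left to right as a
    concatenation of known blocks?"""
    i = 0
    while i < len(sequence):
        for block in blocks.values():
            if sequence.startswith(block, i):
                i += len(block)
                break
        else:
            return False   # no known block fits at this position
    return True

design = KNOWN_BLOCKS["helix_A"] + KNOWN_BLOCKS["turn_G"] + KNOWN_BLOCKS["strand_B"]
print(decomposes_into_known_blocks(design))           # True
print(decomposes_into_known_blocks(design + "WXYZ"))  # False: unknown tail
```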
If we can build a large diversity of big proteins and predict their foldings correctly, we're on to something. Then we can design things with parts that move in predictable ways. Some proteins (like the keratin in your fingernails or a horse's hooves) have a good deal of rigidity, and we can think about designing with gears, cams, transmissions, and other such stuff.