Friday, July 17, 2009

The Big Smart Grid Challenges

Regulations, privacy and security concerns, and other issues could hold back developments.

A smarter electricity grid could fundamentally change the way people pay for and manage their electricity use. In theory, the technology could help reduce demand, save money, and improve reliability and efficiency. But implementing the necessary changes will be difficult, according to experts attending a symposium on the smart grid at GE Global Research in Niskayuna, NY, this week. They expect resistance from regulators and consumers alike, citing the complexity of the proposed system as well as concerns about privacy and security.

Smarter meter: Possible strategies for reducing energy consumption rely on devices that can send and receive information from utilities and communicate wirelessly with appliances.
Credit: Kevin Bullis, Technology Review

The smart grid will incorporate new networking technology, including sensors and controls that make it possible to monitor electricity use in real time and make automatic changes that reduce energy waste. Furthermore, grid operators should be able to instantly detect problems that could lead to cascading outages, like the ones that cut power to the northeastern United States in 2003. And the technology ought to allow energy companies to incorporate more intermittent, renewable sources of electricity, such as wind turbines, by keeping the grid stable in the face of minute-by-minute changes in output.

For consumers, the smart grid could also mean radical changes in the way they pay for electricity. Instead of a flat rate, they could be charged much more at times of high demand, encouraging them to reduce their energy use during these periods. Companies such as GE are developing refrigerators, dryers, and other appliances that can automatically respond to signals from the utility, shutting off or reducing energy consumption to allow consumers to avoid paying the peak prices. Such strategies could allow utilities to put off building new transmission lines and generators to meet peak demand--savings that could be important as proposed regulations on carbon dioxide emissions force them to switch to more expensive sources of electricity.
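
To make the idea concrete, the following is a minimal sketch, in Python, of the kind of price-responsive logic such an appliance might run. The price threshold, signal format, and appliance behavior here are hypothetical illustrations, not GE's actual design.

```python
# Hypothetical sketch of appliance-side demand response. This is not GE's
# protocol or product logic, just an illustration of reacting to a price signal.
from dataclasses import dataclass

PEAK_PRICE_THRESHOLD = 0.30  # dollars per kWh; illustrative value only


@dataclass
class Appliance:
    name: str
    deferrable: bool          # can this load wait until prices drop?
    running: bool = True

    def on_price_signal(self, price_per_kwh: float) -> str:
        """Decide what to do when the utility broadcasts a new price."""
        if self.deferrable and price_per_kwh >= PEAK_PRICE_THRESHOLD:
            self.running = False
            return f"{self.name}: deferring until off-peak (${price_per_kwh:.2f}/kWh)"
        self.running = True
        return f"{self.name}: running normally (${price_per_kwh:.2f}/kWh)"


if __name__ == "__main__":
    appliances = [Appliance("dryer", deferrable=True),
                  Appliance("refrigerator", deferrable=False)]
    for price in (0.12, 0.42):          # an off-peak price, then a peak price
        for appliance in appliances:
            print(appliance.on_price_signal(price))
```

In this toy version, only loads marked as deferrable respond to a high price, which mirrors the article's distinction between appliances that can safely pause (a dryer) and those that cannot (a refrigerator).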


But the necessary changes could prove difficult for consumers to adjust to, says Garry Brown, chairman of the New York State Public Service Commission, a utility regulator. Industrial and commercial electricity customers already have variable electricity rates that change with the time of day, but "they have the ability and expertise and wherewithal to figure out what to do with this," Brown says. "They have a manager that spends their life trying to react to it." Ordinary consumers don't have that advantage. Indeed, in the 1990s the New York state legislature blocked mandatory variable pricing amid concerns about the impact it could have on customers who couldn't avoid peak prices, such as people who must use electric-powered medical equipment around the clock. We have to be "slow and cautious" about introducing the technology, Brown says.

The grid upgrade may also face resistance from regulators because some of the benefits are difficult to measure. Regulators are responsible for ensuring that utilities make wise investments that restrain the price of electricity. But improved efficiency and reliability can't easily be quantified, says Bryan Olnick, a senior director at the major utility Florida Power and Light. He says that regulators need to start considering long-term societal benefits in addition to electricity costs. Ultimately, regulators will need proof that the systems can deliver the promised benefits, which is why there are now smart-grid demonstration projects in places including Boulder, CO; Maui; and Miami.

Beyond the challenge of measuring results, the smart grid raises questions about national security, says Bob Gilligan, GE's vice president for transmission and distribution. "We hear a lot of concerns about cyberterrorism and attacks on our energy infrastructure," he says. "As we talk about bringing more technology into the grid, providing more connections to the energy infrastructure, there are escalating concerns about protecting that infrastructure."

Gilligan adds that the technology raises serious privacy concerns as well. "The major concern is that folks don't want to be inundated with telemarketing calls associated with their usage behavior," he says. "There's also some concern about what they're doing being known minute by minute."

The massive amount of data generated by smart-grid technology could itself pose a practical problem. Right now, a utility with five million meters has about 30,000 devices for monitoring the grid. As the smart grid develops, that number could increase a thousandfold, with each device conveying a thousand times as much information as one of its counterparts does now, says Erik Udstuen, a general manager at GE Fanuc Intelligent Platforms. Though so much data may be difficult to process, it could also create opportunities for entrepreneurs to develop new monitoring applications, especially if open standards are developed.
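
Udstuen's figures can be multiplied out to get a rough sense of the scale involved; the short calculation below uses only the numbers quoted above and is not based on any additional data.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
devices_today = 30_000          # monitoring devices for a five-million-meter utility
device_growth = 1_000           # "could increase a thousandfold"
data_growth_per_device = 1_000  # each device conveys a thousand times as much data

devices_future = devices_today * device_growth
relative_data_volume = device_growth * data_growth_per_device

print(f"Devices: {devices_today:,} -> {devices_future:,}")              # 30,000 -> 30,000,000
print(f"Total data volume: roughly {relative_data_volume:,}x today's")  # ~1,000,000x
```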

Consumers needn't brace themselves for changes right away: the smart meters and appliances that make variable pricing possible will cost billions of dollars to develop and could take a decade to install. In the meantime, the grid can be improved in ways that won't affect customers directly, such as reducing the amount of energy wasted in getting power from generators to consumers: 7 to 10 percent is often lost, and that figure can reach 20 or 30 percent during periods of peak demand.

Eventually, however, the smart grid could make the supply of electricity more efficient and reliable, and it could help reduce greenhouse-gas emissions by promoting renewable technologies and reducing overall power consumption. "In the long run," says James Gallagher, a senior vice president at the New York City Economic Development Corporation, "it will lead to lower rates."

Co-opting a Cancer Treatment to Spur Fat Loss

Drugs that block blood-vessel growth could tackle obesity.

Both cancer and obesity kill hundreds of thousands of patients each year, but they have more than the Grim Reaper in common. Tumors and excess fat are both unhealthy accumulations of tissue that require elaborate networks of blood vessels to feed them. Now Zafgen, a biopharmaceutical startup based in Cambridge, MA, is attacking obesity the way that cancer researchers have been attacking tumors for decades: using drugs that interfere with its blood supply.

"It's a very interesting and exciting concept," says Rakesh Jain, director of the Edwin L. Steele Laboratory for Tumor Biology, at Massachusetts General Hospital, who has no ties to Zafgen. However, anti-angiogenic drugs such as Avastin, used to treat breast, lung, and colon cancer, have unpleasant side effects--especially when used long term--including problems with the reproductive, cardiovascular, and immune systems. "Their toxicity is manageable, but they are not innocuous agents," says Jain.

Most pharmacological treatments for obesity have focused on controlling food intake. They attack weight gain centrally--in the brain--by trying to reduce appetite or encourage a feeling of satiety. But the neural mechanisms that regulate food intake also influence other physiological processes, says Zafgen president and CEO Thomas Hughes, meaning that this strategy is prone to producing side effects. Past weight-loss drug candidates have been discarded for their unwanted effects on mood, wakefulness, and reproductive function, and because their efficacy can wear off over time. "It's kind of like a whack-a-mole game," says Hughes. "You push down one thing, but something else pops up. That seems to be the nature of the way that circuits are wired in our brain."

Instead, Zafgen aims to attack weight gain peripherally--in the fat tissue--which researchers hope will circumvent the side effects and rebound associated with more traditional approaches. "Conventional wisdom is that people become obese because they overeat," says Hughes. "But the fact is that in an environment where people are exposed to the same food supply and lifestyle, some will gain weight and others will not." In animals, those discrepancies seem to correlate with genetically determined differences among individuals' fat tissue, he says. Animals with so-called hungry adipose--fat tissue with a strong propensity to expand--show different expression of genes that regulate blood-vessel formation than animals that are naturally lean.

Zafgen aims to alter those natural differences, effectively converting hungry adipose into its more benign cousin, thereby shrinking existing fat stores and preventing the accumulation of new ones. To do so, the company is investigating a class of small molecules originally designed to stop blood-vessel growth in tumors but abandoned because of their poor performance. These agents attach to receptors in the lining of blood vessels, preventing the binding of factors that normally spur those vessels to proliferate. While the drugs proved ineffective for treating cancer, they might still work for obesity, where simply shrinking fat tissue rather than completely eradicating it would be sufficient.

In animal trials, obese mice treated with these repurposed drugs began to slim down after a few days and continued to shed fat at a precipitous rate until they reached a normal body weight, usually about three weeks later. This process was associated with a dramatic decrease in food consumption. But unlike drugs that cause weight loss by reducing food intake, these compounds seemed to reduce food intake by causing weight loss. As the fat cells shrank, they released free fatty acids that acted as a source of energy for the body, seeming to partially supersede the need for food calories. As soon as the animals reached a healthy weight, their food consumption returned to normal or even elevated levels, even though they continued to receive the drug. Nonetheless, the mice retained their new lean physiques for the remainder of the study--about six months total.

Not only did the mice lose weight, but they also became healthier overall. Their metabolic rate increased, their insulin sensitivity improved, and the fat content of their livers diminished. Within the fat tissue itself, there was a marked change in the number and architecture of blood vessels. Hughes says that all of these changes were highly reminiscent of those seen with extreme calorie restriction, which has long been known to improve health and extend life span in rodents. That makes sense, he says, because while the mice are actively losing weight, their calorie consumption plummets by as much as 80 percent.

Zafgen plans to start clinical trials on an anti-angiogenic molecule later this year, to determine whether the weight loss and health improvements seen in mice will translate to humans. Meanwhile, the company is working to better understand why the drugs it has tested are so potent in mice, and to discover new molecules with similar effects.

The rodent studies suggest that the doses sufficient for fat loss are lower than those required for tumor suppression, which might reduce the potential for side effects.

Hughes emphasizes that Zafgen intends its drugs to be used by the morbidly obese, and not by those trying to shed a pesky 15 pounds. "It's serious medicine," he says--not a lifestyle drug.

NASA's New Crew Escape System

The next vehicle to carry humans to space will let astronauts safely abort in case of an emergency.

In building a successor to the space shuttle, NASA has made one component a necessity: a system to let the crew escape should a catastrophe occur on the launch pad or during the first few seconds of flight.

For this reason, a completely new launch escape system is being developed for the Orion crew exploration vehicle, which NASA plans to send into space aboard the Ares rockets in 2015. Both are part of NASA's Constellation Program to send humans to the moon and, eventually, to Mars.

The new escape system would separate the crew module from the launch rocket in a fraction of a second with a small, controlled explosion. Almost simultaneously, a solid rocket motor would fire, providing a million pounds of thrust to accelerate the module from 0 to 600 miles per hour in 3.5 seconds, pulling the astronauts to a safe distance before the module's parachutes deploy.
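
Those figures work out to an average acceleration of roughly eight times the force of gravity; the short calculation below simply multiplies out the numbers quoted in the preceding paragraph.

```python
# Rough check of the quoted abort performance: 0 to 600 mph in 3.5 seconds.
MPH_TO_MS = 0.44704   # meters per second per mile per hour
G = 9.80665           # standard gravity, in meters per second squared

delta_v = 600 * MPH_TO_MS      # about 268 m/s
accel = delta_v / 3.5          # about 77 m/s^2
print(f"Average acceleration: {accel:.0f} m/s^2, or about {accel / G:.1f} g")
```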

An escape system was judged an unnecessary addition to the space shuttle, which was originally designed to fly frequently, carrying huge payloads such as large satellites into orbit. "There were so many safety elements designed into the shuttle, people thought the safest thing was to just make sure the shuttle could always get back to the runway in case of engine shutdown," says Jeffrey Hoffman, a former astronaut and currently a professor of aeronautics and astronautics at MIT. "In retrospect, people would agree we need an escape system."

This point was proved tragically in 1986, when the Space Shuttle Challenger broke apart 73 seconds into flight due to a failure in one of its solid rocket boosters. "If the crew had a launch abort system, there may have been an opportunity for them to escape," says Henri Fuhrmann, program manager of the new launch abort system at Orbital Sciences, an aerospace company that has partnered with NASA to design and develop the escape system. The space agency has also partnered with Lockheed Martin, Aerojet, and Alliant Techsystems (ATK) on the project.

The design of the new system is based on the launch escape system built for the Apollo capsule; it also has similarities to Russia's abort system on the Soyuz spacecraft. The Russian system was used successfully in 1983 when a fuel spill caused a fire on the launch pad seconds before liftoff. But NASA's new system will also feature novel technologies, including a motor for steering the crew module and nozzles to reverse the flow of hot gases. The system is the "first of its kind," says Kevin Rivers, project manager at NASA's Langley Research Center, in Hampton, VA. Unlike its predecessors, the system will function at an altitude of up to 91,440 meters during phases of the flight when the rocket is most susceptible to failures.

It will be possible for an abort command to be initiated by the crew, by ground-control personnel, or by the flight computer. Once the crew module and launch abort tower, which sits on top of the module, have been detached from the rocket, a second motor will steer the vehicle into a safe orientation. If activated on the launch pad, the crew module and abort tower will fly one mile into the air and three miles downrange relative to the rocket; during ascent, these distances would vary depending on flight conditions. Once the vehicle is oriented so that the heat shield is facing forward, a third motor fires to separate the launch abort tower from the crew module, parachutes deploy, and the capsule safely splashes down in the ocean for recovery.

The abort motor, the first motor to fire, has a unique design: its four nozzles turn the flow of the hot gases it produces away from the crew module. The second motor, which is located at the very top of the tower and used to control and steer the vehicle, is the most complex and consists of eight small thrusters that fire differentially to point the nose of the launch abort system in the direction that is determined the safest.

Apollo used a simple system that was passively controlled like a large dart, says NASA's Rivers. "But because of the mass properties of the [new system], using a passive system was deemed to be aerodynamically unstable," says David McGowan, lead engineer at Langley. "Without attitude control, the vehicle would just flip over."

"The steering thrusters are pretty fantastic," says Scott Uebelhart, a postdoctoral associate at MIT who studies human spaceflight. "And no one has tested a new rocket engine like this in almost 40 years. It's a big leap forward."

Last week, NASA tested an alternative launch abort system called the max launch abort system, which is based on some of the original concepts studied for the Constellation Program. The test demonstrated a stable trajectory, reorientation, and separation of the crew module from the abort system, and parachute recovery of the crew module simulator, but it was mostly designed for gathering data. It did not have to follow the same criteria as the newer system. "It was just a quick try and turn-around approach for research," says Rivers.

The launch abort system for Orion will undergo its first flight test later this year and several more tests before it is ready for launch by 2015.

"We know we are building a system that is going to save lives," says Fuhrmann. "It is something that we hope we never have to operate, but if it is called upon, it has to function flawlessly."

Dell shares dive as PC market still looks rough

SEATTLE (AP) -- Dell Inc. said Tuesday that the U.S. personal computer market has reached its low point but that the timing of a global turnaround in the technology industry remains anyone's guess.

The pessimism sent Dell shares plummeting $1.04, or 8 percent, to $11.98 in afternoon trading.

At a meeting with Wall Street analysts, the world's No. 2 PC maker elaborated on guidance it issued Monday, when it said it expects slightly stronger sales in the current quarter than in the last one. Despite these signs of improvement, Dell executives said Tuesday that many of the conditions that hurt the PC industry over the last several quarters aren't easing.

Businesses have clamped down on technology spending and put off new computer purchases as the economic crisis persists. Consumers are more eager to buy new computers but are choosing cheaper models such as "netbooks," which are smaller and less powerful than regular laptops.

"Certainly customers are elongating the life cycle" of their machines, Chief Executive Officer Michael Dell said.

Before the economic downturn, PCs were replaced after about three years, but now, the CEO said, laptops are being kept for 3½ years and desktops for four to five.

The CEO said he expects a wave of replacements for aging computers to come in 2010, provided the economy has improved. By then Microsoft Corp. will have released its next operating system, Windows 7, which Michael Dell said should accelerate new PC sales.

"Large numbers of commercial customers completely skipped Vista," the CEO said. He expects more interest in Windows 7, and not only because the cost of maintaining old computers will be rising.

"Windows 7 is a great product at this point, I'd say even a better product than Vista was at this stage," he said. For instance, he pointed to the upcoming software's improved power management and its "Windows XP compatibility mode," which should reduce fears that specialty programs won't work on a new system.

For now Dell Inc., which is based in Round Rock, Texas, and trails Hewlett-Packard Co. in worldwide PC sales, is advising analysts that improvement in its business still varies significantly by region and product type.

Chief Financial Officer Brian Gladden said U.S. sales are "not necessarily getting a lot better," but they're "finding a bottom" in the quarter that ends July 31.

China is pushing revenue in Asia higher and Latin American sales appear to be improving, but Western Europe is weak and even deteriorating, Gladden said.

Worldwide sales to large and small businesses alike are "still very weak," similar to the first quarter, when Dell saw revenue overall sink 23 percent to $12.3 billion.

The division that sells to educational institutions and the government is picking up, Gladden said, as schools prepare for the upcoming academic year, and Dell's consumer PC business is also expected to post better sales than in the previous quarter.

Michael Dell told analysts that his company is chasing higher profits rather than increased market share in the consumer PC business, which has slim margins.

In the short term, at least, Dell may struggle on both counts. Gladden said higher costs for LCD screens and memory are cutting into Dell's profits and will continue for at least three more months. The company is also having to slash prices just to maintain its market share in some areas.

"That's a bit of a new dynamic," Gladden said.

Analysts were looking for evidence that Dell plans to expand beyond PCs into other consumer electronics. Ron Garriques, president of Dell's consumer business, indicated a Dell-branded smart phone is likely, saying that customers want devices with screens of many sizes. But he did not give details on timing or specific devices.

Thursday, July 16, 2009

Who's Typing Your Password?

By watching how passwords are entered, a company hopes to make log-ins more secure.

Passwords can be one of the weakest links in online security. Users too often choose one that's easily guessed or poorly protected; even strong passwords may need to be combined with additional measures, such as a smart card or a fingerprint scan, for extra protection.


Delfigo Security, a startup based in Boston, has a simpler solution to bolstering password security. By looking at how a user types each character and by collecting other subtle clues as to her identity, the company's software creates an additional layer of security without the need for extra equipment or user actions.

The software, called DSGateway, can be combined with an existing authentication process. As a user enters her name and password, JavaScript records her typing pattern along with other information, such as her system configuration and geographic location. When the user clicks "submit," her data is sent to the Web server and, provided that the username and password are correct, the additional information is passed on to Delfigo. The company's system then evaluates how well this information matches the behavior patterns of the appropriate authorized user.

Delfigo's algorithms build up a profile of each user during a short training period, combining 14 different factors. The company's president and CEO, Ralph Rodriguez, developed the necessary algorithms while working as a research fellow at MIT. Rodriguez notes that recording multiple factors is crucial to keeping the system secure without making it unusable. If the user types a password with one hand, for example, while holding coffee in the other, the system must turn to other factors to decide how to interpret the variation, he says. If she does this every morning, the system will learn to expect to see this behavior at that time of day.
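
Delfigo has not disclosed its algorithms, but keystroke-dynamics systems in general work by comparing the timing of a user's keystrokes against a stored statistical profile. The Python sketch below illustrates that general idea with a simple z-score comparison; the profile format, sample values, and acceptance threshold are hypothetical and are not DSGateway's.

```python
# Illustrative keystroke-dynamics check (not Delfigo's actual algorithm):
# compare the intervals between keystrokes against a stored per-user profile.
import statistics


def build_profile(training_samples: list[list[float]]) -> list[tuple[float, float]]:
    """Mean and standard deviation of each inter-key interval across training sessions."""
    return [(statistics.mean(col), statistics.stdev(col))
            for col in zip(*training_samples)]


def match_score(profile: list[tuple[float, float]], sample: list[float]) -> float:
    """Average absolute z-score; lower means the typing rhythm fits the profile better."""
    zs = [abs(x - mean) / std for (mean, std), x in zip(profile, sample) if std > 0]
    return sum(zs) / len(zs)


if __name__ == "__main__":
    # Inter-key intervals (in seconds) recorded while the user typed her password.
    training = [[0.21, 0.15, 0.30, 0.18],
                [0.23, 0.14, 0.28, 0.20],
                [0.20, 0.16, 0.31, 0.17]]
    profile = build_profile(training)

    genuine = [0.22, 0.15, 0.29, 0.19]
    imposter = [0.40, 0.35, 0.10, 0.45]
    for label, sample in (("genuine", genuine), ("imposter", imposter)):
        score = match_score(profile, sample)
        print(f"{label}: score {score:.2f} -> {'accept' if score < 2.0 else 'step-up auth'}")
```

A real deployment would combine a score like this with the other signals the article mentions, such as system configuration and geographic location, rather than relying on typing rhythm alone.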


Trying to strengthen authentication without forcing users to change their behavior is a promising approach, says Bill Nagel, an analyst at Forrester Research, who covers security and risk management. "People want ease of use without losing any security, and that's a tough balance for a lot of IT departments," he says.

Ben Adida, a fellow at Harvard University's Center for Research on Computation and Society, who studies security and privacy, notes that other companies have tried to find ways to improve authentication without inconveniencing users. Some banks, for example, install a cookie in a user's browser after he answers several security questions correctly. The cookie serves as another identifying token. "That's easier than having a physical token, but it's also not as secure," Adida says, since an attacker could trick the user into giving up the information needed to re-create the cookie.

Adida adds that the strength of Delfigo's product will depend on how hard it is for an attacker to re-create the additional factors that it uses. For example, an attacker may be able to trick a user into typing her username and password into a dummy site, in order to collect keystroke patterns and other information, Adida says.


Coping with Bad Genetic News

New research suggests that most people can cope with learning that they are at high genetic risk for disease.

As direct-to-consumer genetic testing spreads, a major concern expressed by ethicists and physicians has been whether the average person will be able to understand the results of these somewhat subtle tests. Rather than giving an answer in black and white, the tests predict whether someone has an elevated risk for developing common diseases, such as Alzheimer's. Even if consumers do understand the results, it has been unknown how they might react to news that they have a sequence of DNA that raises their risk of developing a disease.

Two new studies suggest that most patients cope easily with such negative genetic information. People who learn that they carry a high-risk genetic variant for Alzheimer's disease, called APOE4, have no greater anxiety over their long-term prospects than do those who don't know their risk, according to research published today in the New England Journal of Medicine. Another recent study of smokers revealed that those who found out that they had a lower genetic risk for developing lung cancer were just as interested in stopping smoking as those determined to be at higher cancer risk.

"The findings may help us to subdue paternalistic concerns that we have to protect people from this information," says Colleen McBride, chief of the Social and Behavioral Research Branch at the National Human Genome Research Institute, in Bethesda, MD, and senior author on the smoking study. "People given the option to take these tests can protect themselves, and they find it useful to know the results, even if the test hasn't been proven to make a difference in what they do."

In the past few years, a number of companies have sprung up to offer genetic testing directly to consumers. "Studies like this are important because we are clearly going to see testing like this make its way routinely into mainstream medicine," says Michael Christman, president of the Coriell Institute for Medical Research. Because the results of this type of testing are much more complex than the genetic tests currently used most commonly in medicine--largely single-gene testing for rare, severe disorders, such as cystic fibrosis--physicians worry about how people will react. Some have speculated that someone at high risk for neurological disease might give up on long-term relationships, or someone at low genetic risk for type 2 diabetes might indulge in a diet of doughnuts and cheeseburgers.

To date, most sociological studies of genetic testing have focused on rare inherited diseases rather than on more common ones, such as Alzheimer's. Robert Green and his colleagues at Boston University are among just a handful of researchers examining this issue: Green's team has spent the past decade studying the impact of genetic testing for APOE4, which raises the risk of developing Alzheimer's disease threefold in those who inherit one copy and tenfold in those who have two copies. No proven treatments exist to reduce Alzheimer's risk in APOE4 carriers, and testing for the risk variant is not currently recommended. But surveys indicate that 15 percent of primary-care physicians who treat patients with Alzheimer's have already been asked about the test.


In the newly published study, Green and his colleagues offered APOE4 testing to adult children of people with Alzheimer's disease and then revealed the results to half of the group. The team found that people clearly understood their results, and that six weeks after learning them, those who were told that they had the high-risk variant seemed more stressed than the other participants. But that spike in anxiety had faded by the time participants were tested again both six months and one year afterward.

"We were astounded by how many people wanted to know: more than 20 percent wanted to receive it," says Green. "Even though patients clearly understood there was nothing they could do to stave off the disease, they had nonmedical reasons to learn about it: to prepare their children, to think about the longevity of careers."

For example, "people do in fact change insurance purchasing behavior based on this information," says Green. "We should be cautious as medical professionals not to dismiss those personal reasons, as long as we can convince ourselves it's not harmful to offer this information."

Green and his collaborators have also found that people who know they have the high-risk gene are more likely to take vitamins. "That's fine, except that some types of supplements are highly unregulated and can be harmful," he says. "You can easily imagine people trying to link results of genetic tests to the purchase of unproven vitamins that could at best take their money and distract them, and at worst could be harmful."

Researchers caution that results from the study are not necessarily indicative of the general population. For example, Green's team weeded out people who scored high on measures of anxiety and depression at the start of the study. And the study does not examine all of the potential drawbacks of testing. In an editorial accompanying the paper, Rosalie Kane, a public-health specialist, and Robert Kane, a physician, both from the University of Minnesota, in Minneapolis, say that people who test positive for high-risk genetic variants might be denied some types of insurance. The Genetic Information Nondiscrimination Act, passed last year, prohibits such discrimination in employment and health insurance, but not in life, disability, or long-term care insurance.

One of the other major concerns for the new generation of genetic testing is how best to deliver the results. In Green's APOE4 study, participants learned of their risk through genetic counselors--but this may not always be possible as genetic testing becomes more widespread. "I would be interested going forward to see how people who received this information without counseling deal with it," says Christman. "Some of the direct-to-consumer companies are doing this right now."

In the lung-cancer study, McBride and her collaborators offered smokers who had a family member with lung cancer a genetic screen for a variant associated with a higher risk of developing the disease. Both the information about the test's risks and benefits, provided to help people decide whether to take it, and the results themselves were delivered online.

The researchers found that all of the people in the study who tested high risk understood the meaning of the results, while only about 60 percent of those who scored low risk understood them. "That kind of defies expectation," says McBride. "Psychological theories predict that people protect themselves from threatening information, and one way to do that is by not understanding it."

The researchers found no difference between the high- and low-risk participants' interest in getting additional tools to quit smoking. "Telling someone they are low risk doesn't undermine their motivation to seek out cessation materials, and being told you are high risk didn't increase motivation," McBride says. "All smokers were motivated enough to log on and consider testing and availed themselves of cessation materials."

McBride says that she doesn't think genetic testing itself will motivate people to quit smoking or lose weight or make whatever changes might help their health. Instead, she says that the tests' utility may be to motivate people to take initial steps--"to get someone engaged in a smoking-cessation program or dietary-change intervention."

McBride is now studying the impact of genetic tests that analyze many spots on the genome and assess risk for multiple diseases, such as those offered by a number of online gene-testing companies. "There the story is much more complicated," she says. "The results might conflict with each other, and people might be at risk for many conditions."

Tuesday, July 14, 2009

Getting More out of Crude

An improved catalyst could help oil refineries get more gasoline out of a barrel of crude petroleum.

In an effort to make gasoline production cleaner and more efficient, Rive Technology of Cambridge, MA, is developing a catalyst that can help turn a greater percentage of crude petroleum into gasoline and other usable products. The company, which is testing the catalyst in its pilot plant in South Brunswick, NJ, believes that the technology will be able to process lower-grade fossil fuels and reduce the amount of energy that goes into the refining process.

Holey catalyst: Rive Technology is designing a zeolite catalyst with pores larger than those found in conventional zeolites, which are widely used in petroleum and petrochemical production. The larger pores allow the catalysts to handle a wide range of compounds.
Credit: Rive Technology

Andrew Dougherty, vice president of operations at Rive, says that the catalyst could increase the proportion of petroleum processed by as much as 7 to 9 percent. "We're going to need liquid, fossil-fuel-based transportation fuels for the foreseeable future," he says. "We help make the production of those fuels much more efficient."

The company's technology is based on zeolites--tiny pore-studded particles made of a mix of aluminum, oxygen, and silicon that are a mainstay of the petroleum and petrochemical industries. Heated and mixed in with crude petroleum, zeolites act as a catalyst, breaking apart the complex hydrocarbon molecules of crude into simpler hydrocarbons that make gasoline, diesel, kerosene, and other desirable products in the process known as fluid catalytic cracking. By making zeolites with pores larger than those in conventional ones, Rive hopes to create catalysts that handle a higher proportion of hydrocarbons.

Typically, the openings of pores in zeolites are less than a nanometer wide, which limits the range of hydrocarbons that can get into the porous catalysts. But Javier Garcia Martinez, a cofounder of Rive and now a professor at the University of Alicante, in Spain, came up with a way to control the size of the openings while working as a postdoctoral fellow at MIT's Nanostructured Materials Research Laboratory. He mixes the constituents of the zeolites in an alkaline solution, then adds a surfactant--a soaplike liquid. The surfactant makes bubbles, and the zeolites form around the bubbles. Then he burns away the surfactant, leaving behind zeolites with openings two to five nanometers wide--big enough to let in larger hydrocarbon molecules. By varying the chemistry of the surfactant, Garcia Martinez can control the size of the pore openings.

Part of the improved yield will come from perfecting the catalyst itself, which must be mixed with clay and other inert materials and spray-dried to create microspheres about 0.10 millimeters in diameter. The pilot plant is testing different combinations of materials to get the best properties. "By the end of the year, we hope to have hit upon the optimum mix of these things," says Dougherty. "We hope to be in commercial refineries in the second half of 2011." The plan is to license the recipe to commercial manufacturers of petroleum catalysts, such as BASF or W.R. Grace.


Dougherty also sees Rive's zeolites being used in hydrocracking, a refining technique that employs high-pressure hydrogen to create a low-sulfur diesel. Hydrocracking is a small market, but with the U.S. Environmental Protection Agency trying to reduce sulfur emissions, it's a growing one, he says. With its ability to choose pore size, the company might also make catalysts for processing tar sands, which contain extremely dense petroleum. Further down the road, the material might also be used to process biofuels, according to the company.

Rive, which licenses the technology from MIT, is operating on $22 million of venture financing, which should carry it into 2010. "The economy hasn't been a big factor for us, and we don't expect it to be as long as the fund-raising markets come back by next year," Dougherty says.


Hints of How Google's OS Will Work

Google isn't saying how its new operating system will function, but the clues lie in its browser.


Soon after Google announced plans for its own operating system (OS), called Google Chrome OS, on Tuesday night, the Web giant clammed up about technical details, saying that the project is still at too early a stage. The first netbook devices running Chrome OS won't be released until the second half of 2010, so most users will have to wait until then to find out precisely how the software will work. But that doesn't mean there aren't hints out there already, and the biggest clues can be found in Google's Chrome browser, which the company says will be a key part of the new OS.

According to a post written by Sundar Pichai, a vice president of product management at Google, and Linus Upson, the company's engineering director, the open-source Chrome OS will consist of a Linux kernel with the Google Chrome browser running on top inside an entirely new desktop environment.

The Chrome browser was released nine months ago and is Google's effort to reinvent the browser completely: it's designed from scratch with Web applications in mind and is meant to be the only application that a Web-savvy user needs on her computer.

In an interview in March, Darin Fisher, an engineer on the Google Chrome team, said that in early sessions, the engineers decided to "take a page out of the operating system book" when they built the browser. Notably, the Chrome team decided to treat the browser as a launchpad from which the user can start different Web applications. Each application operates independently so that if one crashes, it doesn't affect the others. OSes, Fisher said, had to take the same approach to allow a single application to crash without requiring a user to reboot the whole system. This change in browser design helps give Web applications the stability that desktop applications enjoy.
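
The isolation principle Fisher describes can be illustrated outside of any browser: run each task in its own operating-system process, and a crash in one leaves the others untouched. The toy Python sketch below demonstrates the concept only; it is not Chrome's architecture or code.

```python
# Toy demonstration of process isolation: each "web app" runs in its own
# OS process, so one crashing does not take the others down. Illustration
# of the general concept only, not Chrome's actual design.
import multiprocessing as mp
import time


def web_app(name: str, should_crash: bool) -> None:
    if should_crash:
        raise RuntimeError(f"{name} crashed")
    time.sleep(0.5)
    print(f"{name} is still running fine")


if __name__ == "__main__":
    apps = [mp.Process(target=web_app, args=("mail", True)),
            mp.Process(target=web_app, args=("docs", False))]
    for p in apps:
        p.start()
    for p in apps:
        p.join()
    print([p.exitcode for p in apps])   # e.g. [1, 0]: the crash stayed contained
```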

The concept is easily extended back to the OS. Provided that the user relies on Web applications, such as Gmail, Google Docs, and the like, this simplifies the OS a great deal. It vastly reduces the number of applications that need to be installed and the amount of data that must be stored and processed on the computer itself.

With Chrome OS, Google will blur the line between the browser and the OS completely, says Ramesh Iyer, head of worldwide business development for mobile computing at Texas Instruments, which is one of Google's partners on the project. "The browser is your operating system," Iyer says. "The browser is your user interface. The browser is the mechanism from which you launch applications."

Streamlining the OS to focus on the Web, Iyer says, will allow devices to run more powerful programs with less powerful processors. By keeping processor requirements low, new devices could use less battery power and stay lighter. Texas Instruments, for example, is working with Google to integrate the Chrome OS software with its OMAP 3 multimedia applications processors, creating a system that could be easily installed in netbooks and other devices.

Giving Web applications deeper access to the underlying kernel could make it easier for Web developers to provide better functionality and a better user experience, says Jared Spool, founding principal of User Interface Engineering, a consulting firm based in North Andover, MA. When Web applications such as Gmail and Google Maps first appeared, Spool says, the software engineers who built them had to do a lot of hacking to create the appropriate levels of interaction. "When we went from the desktop to the browser, we took a huge step backward," Spool says.

With an OS tied closely to the Web, Google can introduce sophisticated resource-management tools that allow Web applications to run much more smoothly. A major role for any OS is allocating memory to applications and adjusting it as their needs change, and a big problem with interactive Web applications to date has been that browsers lacked efficient ways to adjust the memory assigned to different Web pages. The Chrome browser has already improved the situation, Spool says, and he expects the OS to go even further, enabling more powerful Web applications.

But building the Chrome OS won't be as simple as sticking a browser on top of the Linux kernel, Spool says. The browser version of Chrome relies on the underlying OS's user interface, for example. Features such as scrollbars come from the OS, not the browser, so Google will need to build all of this from scratch, and even simple things will require significant time and effort.

The Chrome browser also lacks the drivers needed to power any external devices, such as printers or iPods. Texas Instruments' Iyer envisions a new way that Chrome OS could address this problem. "Wouldn't you rather have a printer connected in the cloud?" he says. As more devices, including cameras, printers, GPSes, and so on, become able to connect to the Internet in their own right, the concept of a Web interface between a user's computer and the device comes closer to reality. "This is the holy grail of the Internet," Iyer says.

Pichai and Upson have also said that Chrome OS will support all Web-based applications automatically, and that new applications written for Chrome OS will run "on any standards-based browser on Windows, Mac, and Linux." While Web application development has exploded in recent years, this might also introduce limitations, preventing the user from accessing interesting applications developed in programming languages not intended for the Web.

However, Google may have a solution for that too. The company is working on an experimental project called Google Native Client that would allow code written in non-Web languages such as C and C++ to run securely in the browser.

Chris Rohlf, a senior security consultant for Matasano Security, which has been involved in testing the implementation of Native Client, says, "It could be Google's secret weapon when it comes to Chrome OS, because it would allow developers to extend that platform with things like video and graphics without having to wait for Google to implement any of that."

Portable DNA Purifier for Poor Countries

A new handheld device isolates DNA from human fluid without the use of electrical power.

A standard bicycle pump is all that's required to power a DNA purifying kit, designed by Catherine Klapperich and her students at Boston University. The thermos-size device, dubbed SNAP (System for Nucleic Acid Preparation), extracts genetic material from blood and other bodily fluids by pumping fluid through a polymer-lined straw designed to trap DNA. A user can then pop the straw out and mail it to the nearest lab, where the preserved DNA can be analyzed for suspicious bacteria, viruses, and genetic diseases.

DNA pump: A new portable device extracts DNA from human fluids without using electricity.
Credit: Catherine Klapperich

A DNA extraction device that requires no power, such as the SNAP prototype, would have tremendous value in rural communities, says Paul Yager, a professor and acting chair of the University of Washington's Department of Bioengineering, who was not involved in the research. "This would be the front end for a lot of potential instruments people could use," he says.

To test for diseases like HIV, clinicians typically take blood samples from patients, which then must be refrigerated and transported to the nearest laboratory. Technicians then extract and analyze the DNA. In areas where electricity is scarce, blood may not be adequately refrigerated, potentially degrading a sample's quality. Isolated DNA, on the other hand, remains relatively stable at room temperature, so extracting DNA from blood before shipping it to a laboratory may eliminate the need for expensive refrigeration.

"Instead of taking blood samples and keeping them cold, with our technology, they would be able to prepare all the samples at the point of care," says Klapperich, an assistant professor of mechanical and biomedical engineering at Boston University. "They would also have a longer period of time to get a much more preserved sample to a central lab someplace else."

The conventional method of extracting DNA from blood involves a number of instruments: researchers first break open blood cell walls, either with chemicals or by shaking the blood, in order to get at genetic material inside cells. They then add a detergent to wash away the fatty cell walls, and spin the DNA out of solution with a centrifuge. The SNAP prototype performs a similar series of events with a bicycle pump, some simple chemicals, and a specialized straw lined with a polymer designed to attract and bind DNA.


A clinician first takes a fluid sample, such as blood or saliva from a patient, and injects it into the disposable straw within the device. A large cap on the device contains two small packets: a lysis buffer and an ethanol wash. Pressure from the pump releases the lysis buffer, which breaks open cells in the fluid, releasing DNA. A second pump of air releases ethanol, which washes out everything but the DNA.

So far, Klapperich has used the prototype to isolate DNA from nasal wash samples infected with influenza A. Compared with traditional DNA extraction kits in the laboratory, Klapperich says, the SNAP prototype isolates less DNA. "But in general, our data show that the nucleic acid we get back is cleaner," she says. The DNA can also be amplified using the polymerase chain reaction, or PCR, one of the most common methods of amplifying DNA in the lab. In the near future, the group plans to experiment with other human fluids that contain different viruses; DNA from various bacteria and viruses may behave differently at room temperature.

Jose Gomez-Marquez, program director for the Innovations in International Health Initiative at MIT, first learned of Klapperich's invention at a recent meeting about medical technology for the developing world. Since then, he and Klapperich have worked together to refine the prototype. Gomez-Marquez will soon be bringing a model to Nicaragua, where he hopes to get feedback on its effectiveness and user friendliness from local clinicians and patients. "This device doesn't wait for a cold system to be in place for diagnostic samples to be transferred from one place to another," says Gomez-Marquez. "You can take five days or two weeks to get a sample out there--you don't have to worry about refrigerating it."

Tracking the Evolution of a Pandemic

Understanding how viruses evolve could help predict the next outbreak.

A close examination of the genetic evolution of the three major influenza epidemics of the 20th century concludes that all of the viruses involved evolved slowly, through interspecies genetic exchange, and that genes from the catastrophic 1918 pandemic may have been circulating as many as seven years earlier. If true, this means that widespread genetic surveillance methods should have ample time to detect the next pandemic strain, and possibly even vaccinate against it before it gets out of control.

Birth of a bug: New research on the emergence of the 1918 influenza virus suggests that it may have evolved in a manner similar to that of the current H1N1 strain (shown here).
Credit: Centers for Disease Control and Prevention

Prior research suggested that the 1918 influenza strain was the result of an avian virus introduced into humans just before the epidemic began. But the latest study, published today in the Proceedings of the National Academy of Sciences, suggests that all three influenza pandemics--1918, 1957, and 1968--were the result of stepwise genetic integrations of both avian and mammalian genes over a number of years, ultimately creating the more virulent virus strains.

And although the research was done before the emergence of the current H1N1 "swine flu" strain, the scientists' conclusions are relevant, showing that the current virus follows the same historical pattern. For each pandemic, "our results argued that there was at least one intermediate host that was most likely to be pigs, and that they're involved in the emergence of these pandemic strains," says Gavin Smith, the paper's lead author and a viral-evolution researcher at the State Key Laboratory of Emerging Infectious Diseases, at the University of Hong Kong.

The researchers collected all available genetic sequences of the influenza virus--human, bird, and pig variants--then plugged the data into a computer program that uses genetic information to build evolutionary trees, dating species' divergence back to their most recent common ancestor. But there are no known precursor viruses to the 1918 strain, so the computational results can only infer the time of interspecies transmission, based on known patterns of genetic evolution. The genetic data itself was derived from virus strains that have evolved since 1918.
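
The dating step rests on the molecular-clock idea: given the genetic distance between two lineages and an assumed substitution rate, the time back to their most recent common ancestor can be estimated. The sketch below shows that core calculation in highly simplified form; the distance and rate values are invented for illustration and are not taken from the study, whose methods were far more elaborate.

```python
# Simplified molecular-clock estimate (illustrative numbers, not the study's data).
# Two lineages diverging from a common ancestor each accumulate substitutions at
# roughly `rate` per site per year, so pairwise distance ~= 2 * rate * time.

def divergence_time(pairwise_distance: float, rate_per_site_per_year: float) -> float:
    """Years back to the most recent common ancestor under a strict molecular clock."""
    return pairwise_distance / (2 * rate_per_site_per_year)


if __name__ == "__main__":
    distance = 0.028   # substitutions per site between two flu gene sequences (made up)
    rate = 2e-3        # substitutions per site per year, a typical order of magnitude for influenza
    print(f"Estimated divergence: {divergence_time(distance, rate):.0f} years ago")
```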


Such studies have only become possible in the past few years, with the advancement of computational techniques that can incorporate known rates of various species' evolution--techniques that are proving to be quite accurate when tested against known relationships. But the results are still, as Smith notes, "all just inference," working backward from known relationships and based on estimated dates.

According to the virus's updated family tree, the 1918 strain was not newly minted but actually a slightly modified version of a mild flu strain already in the human population. In fact, according to the new analysis, some genes of the virus may have been circulating as early as 1911. "It was certainly different in terms of severity of the actual pandemic," Smith says. "But our results show that, in terms of how the virus emerged, it looks like much the same mechanism of the 1957 and 1968 pandemics, where the virus gets introduced into the human population over a period of time and reassorts with the previous human strain."

Each of the pandemics appears to have the same pattern when emerging in humans, with different genetic components floating around in people for a few years before a pandemic strain is detected. And the detailed computational analysis showed that different component genes of the viruses seemed to be different ages. "What this suggests is that it's not one virus coming in and mixing with the human seasonal strain to produce a pandemic strain," Smith says. "Rather, there are a number of reassortment events, where one gene comes in and mixes with the human strain, and then another gene comes in and mixes with the human in a stepwise pattern."

If the researchers are right, the 1918 flu may have even more in common with the current swine flu virus than scientists previously believed. And finding such a pattern among known pandemic strains holds implications for future surveillance. By looking backward, at which genes have caused prior influenza strains to turn lethal, the research may one day enable researchers to look forward too. "What this paper is saying is that we're actually in a position now to get hints about these viruses even years in advance," says Greg Poland, a vaccine and infectious-disease expert at the Mayo Clinic, who was not involved in the research. "I think it will inform surveillance efforts, I think it will inform vaccine development efforts, and I think it will eventually inform policy-making decisions."

In addition to keeping an eye out for influenza variants in humans, Poland and Smith believe that it's just as important to start doing deeper surveillance in birds and pigs, and on a much more extensive basis. And, Poland notes, knowing that the strains emerge slowly could help inform vaccine efforts as well.

"There's no reason we can't move away from [creating] a vaccine against what we think we know will circulate this year, toward including proteins from variants we suspect might become problematic in the future," says Poland.

Smith hopes that more full-genome sequencing will provide advance warning of which genes might show up in humans, and that a deeper look at the genomes will provide clues about where and why the animal-to-human transmission occurs. He also hopes that one day, the team's research could help change governmental approaches from pandemic preparedness to pandemic prevention. "But the problem is that we still don't know what it is about a virus that makes it pandemic," Smith says. "Is it mutation? Is it a certain combination of genes? These are things that we still need to look at."

Injured Racehorses Can Save Your Knees

Orthopedic stem-cell therapies are moving into human trials.

A runner with a torn tendon has reason to envy a racehorse with the same affliction: horses have treatment options not available to human patients--most notably, injections of adult stem cells that appear to spur healing in these animals with shorter recovery time than surgical treatments. Now the same stem-cell therapies used routinely in competitive horses and increasingly in dogs are beginning to make their way into human testing.

Tendon repair: These ultrasound images show the tendon in a horse’s front leg. An area of damage (circle in yellow, top) has healed (bottom) after the injection of stem cells derived from the animal’s fat.
Credit: Vet-Stem

Human stem-cell treatments are advancing quickly in many areas: therapies using adult stem cells derived from both fat and bone marrow are currently being tested for a variety of ailments, including Crohn's disease, heart disease, and diabetes. (Bone-marrow-derived stem-cell transplants have been used for decades to treat blood diseases and some cancers.) But when it comes to orthopedic injuries, such as torn tendons, fractures, and degenerating cartilage, veterinary medicine has outpaced human care.

Veterinarians and private companies have aggressively tested new treatments for the most common injuries in racehorses, in large part because these animals are so valuable and can be so severely incapacitated by these wounds. "Soft-tissue injury is the number-one injury competitive horses will suffer and can end a thoroughbred horse's career," says Sean Owens, a veterinarian and director of the Regenerative Medicine Laboratory, at the University of California, Davis. Veterinary medicine also has much more lax regulations when it comes to treating animals with experimental therapies, allowing these treatments to move rapidly into routine clinical use without clinical trials. "Regulatory oversight of veterinary medicine is minimal," says Owens. "For the most part, the USDA [U.S. Department of Agriculture] and the FDA [Food and Drug Administration] have not waded into the regulatory arena for us."

Owens's newly created research center aims to move both animal and human stem-cell medicine forward by conducting well-controlled trials not often performed elsewhere. "Part of our mission is to do basic science and clinical trials and also improve ways of processing cells," says Owens. The center has a number of ongoing clinical trials in horses--one for tendon tears and one for fractured bone chips in the knee--that are run in a similar way to human clinical trials. The goal is to develop better treatments for horses, as well as to leverage the results to support human studies of the same treatments. Owens is partnering with Jan Nolta, director of the Stem Cell Program at UC Davis, who will ultimately oversee human testing.

A handful of studies in animals have shown that these stem-cell therapies are effective, allowing more animals to return to racing, reducing reinjury rates, and cutting healing times. VetCell, a company based in the United Kingdom that derives stem cells from bone marrow, has used its therapy on approximately 1,700 horses to date. In a study of 170 jumping horses tracked through both treatment and rehabilitation, researchers found that nearly 80 percent of them could return to racing, compared with previously published data showing that about 30 percent of horses given traditional therapies could return to racing. After three years, the reinjury rate was much lower in stem-cell-treated animals--about 23 percent compared with the published average of 56 percent, says David Mountford, a veterinary surgeon and chief operating officer at VetCell.

While scientists still don't know exactly how the cells aid repair of the different types of injuries, for tendon tears, initial studies show that stem cells appear to help the tissue regenerate without forming scar tissue.

Mountford says that the company chose to focus on tendon injuries in horses in part because they so closely resemble injuries in humans, such as damage to the Achilles tendon and rotator cuff. For both people and horses, tendon tears trigger the formation of scar tissue, which has much less tensile strength and elasticity than a healthy tendon. "It becomes a weak spot and prone to injury," says Owens.

Next year, VetCell plans to start a human clinical trial of its stem-cell treatment for patients with degeneration or damage of the fibers of the Achilles tendon. As in the horse therapy, stem cells will be isolated from a sample of the patient's bone marrow, then cultured and resuspended in a growth medium also derived from the patient. Surgeons will then inject the solution into the area of damage, using ultrasound imaging to guide the needle to the correct location. "Our long-term goal is to use it to treat a number of tendon injuries," says Mountford.

Stem-cell therapies also show promise for arthritis. Vet-Stem, a California-based company that uses stem cells isolated from fat rather than bone marrow, has shown in a placebo-controlled trial that the treatment can help arthritic dogs. "About 200,000 hip replacements are done every year in humans," says Robert Harman, a veterinarian and founder of the company. "That's a very good target for someone to look at cell therapy."

For osteoarthritis, the stem cells seem to work not by regenerating the joint, but by reducing inflammation. "But in the last couple of years, evidence has come out that the cells we use reduce inflammation and pain, and help lubricate the joint," says Harman.

While Vet-Stem does not plan to move into human testing, Cytori, a company based in San Diego, has developed a device for isolating stem cells from fat in the operating room. (Vet-Stem does the procedure manually: veterinarians collect a fat sample from the animal and then send it to the company for processing.) Cytori's device is currently approved for use in reconstructive surgeries in Japan, but not yet in the United States.

Sunday, July 12, 2009

Innovation: Smarter phone calls for your smart phones



Two slick new handsets launched last week continue the trend for phones to become ever more powerful and multi-functional computing devices. Gadgets like these could make technological novelties like augmented reality commonplace.

But while hardware manufacturers are finding ever more things for us to do with our phones, their most basic function – to help us receive and manage calls – hasn't changed much in years. For most of us, call management remains a matter of basic redirection and voicemail services.

Rather than enhancing these core services, network operators have made their calling packages attractive by tying them to coveted gadgets, and in some cases to third-party services such as Twitter and Skype. Now, however, a number of recent developments mean that smarter call management is on its way – though it won't be the telecom companies that deserve the credit.

What's your number?

It will probably come as little surprise that Google, a serial innovator when it comes to communications, is one of the prime movers in this area. Two years ago, it acquired GrandCentral, a company whose service allowed customers to integrate their various telephone numbers and mailboxes into a single, web-accessible account.

The service, now dubbed Google Voice, provides users with a single number that is transferred to different combinations of devices according to who is calling and what time it is. So you might, for example, send a call from a business contact to your work voicemail after office hours, while routing one from a friend to your home line and cellphone, even though both would have dialled the same number.
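To make that routing behaviour concrete, here is a minimal sketch, in Python, of the kind of caller- and time-based rules involved. The rule table, group names, and function below are hypothetical illustrations of the idea, not Google Voice's actual implementation.

    from datetime import time

    # Hypothetical routing rules: (caller group, start, end, destinations).
    RULES = [
        ("business", time(9, 0), time(17, 30), ["office phone", "cellphone"]),
        ("business", time(17, 30), time(23, 59), ["work voicemail"]),
        ("friends", time(0, 0), time(23, 59), ["home line", "cellphone"]),
    ]

    def route_call(caller_group, now):
        """Return the devices that should ring for this caller at this time."""
        for group, start, end, destinations in RULES:
            if group == caller_group and start <= now <= end:
                return destinations
        return ["voicemail"]  # default when no rule matches

    # Both callers dial the same public number but are routed differently.
    print(route_call("business", time(19, 0)))  # ['work voicemail']
    print(route_call("friends", time(19, 0)))   # ['home line', 'cellphone']

The point is simply that a single public number can fan out to different devices according to rules the user controls.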

This is the kind of service that established network operators are in the best position to offer. But they're currently being left behind by an upstart from a different sector altogether.

Google has recently added features that could yank more control away from the networks, including centralised voicemail, automated voicemail transcription, and caller-specific voicemail. And this week, it was reported that users will be able to take their existing numbers with them to Google Voice – overcoming one frequent obstacle to adopting new telecom services. There's no launch date for Google Voice as yet, though, and for the moment, it remains invitation-only.

Reviews from users who have access to the service suggest it can radically change a person's relationship with their phone and phone number. And in the Android cellphone operating system, Google has a powerful platform to support and promote its new product.

White spaces

Google Voice sits on top of the existing network infrastructure; you still need to subscribe to a network for calls to reach your cellphone, for example. But a patent filed by Google last year has the potential to shake up the industry much more directly. It envisages that your device would switch to the cheapest provider every time a connection was needed, rather than being tied to a single network.
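A bare-bones sketch of that idea, with hypothetical carrier names and made-up per-minute rates; the real mechanism would of course be far more involved than picking the lowest quote.

    # Sketch of per-connection carrier selection; quotes are hypothetical
    # per-minute rates (in cents) gathered just before a call is placed.
    def cheapest_carrier(quotes):
        """Return the provider offering the lowest rate right now."""
        return min(quotes, key=quotes.get)

    quotes = {"CarrierA": 4.2, "CarrierB": 3.1, "CarrierC": 5.0}
    print(cheapest_carrier(quotes))  # CarrierB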

That's not all. Last week, the US switched off all analogue television signals, freeing up large swathes of wireless spectrum. That virtual real estate will be made available to a new class of "super Wi-Fi" mobile devices thanks to successful campaigning by the White Spaces Coalition, a group of eight technology firms (including Google).

Although prototype white-space devices have been submitted to the Federal Communications Commission for testing, it's too early to say exactly what they might offer. Nonetheless, they clearly have the potential to cause major headaches for purveyors of traditional phone connections: by creating a national voice-over-broadband system that could stand entirely apart from the conventional telecom networks, for example.

Bill per byte?

These new possibilities are now starting to emerge because of the shift away from the point-to-point principle of telephony – a concept that hasn't changed much since it was pioneered by Alexander Graham Bell. Today, we're moving towards a world where every person has a computer in their pocket that is able to communicate in a variety of ways over a network that also offers multiple ways to make connections.

That undermines the traditional way networks have charged their customers. A UK space scientist last year calculated that the per-byte cost of text messages far exceeds that of data sent from space. That means texting is an expensive business once a quota of free messages runs out – but a smartphone user can send any number of email messages, which are functionally very similar, without additional charges.
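The arithmetic behind that comparison is simple, and a rough calculation with assumed prices (illustrative figures, not numbers from the scientist's study) makes the point:

    # Back-of-the-envelope comparison of SMS pricing with ordinary data pricing.
    # The prices below are illustrative assumptions, not quoted tariffs.
    SMS_PAYLOAD_BYTES = 140      # maximum payload of a single text message
    PRICE_PER_SMS = 0.10         # assumed out-of-bundle price, in dollars
    PRICE_PER_MB_DATA = 2.00     # assumed mobile-data price per megabyte

    messages_per_mb = (1024 * 1024) / SMS_PAYLOAD_BYTES   # about 7,490 messages
    cost_per_mb_sms = messages_per_mb * PRICE_PER_SMS     # about $749 per megabyte

    print(f"SMS:  ~${cost_per_mb_sms:,.0f} per MB")
    print(f"Data:  ${PRICE_PER_MB_DATA:.2f} per MB")
    print(f"Ratio: ~{cost_per_mb_sms / PRICE_PER_MB_DATA:,.0f}x")

Even with generous assumptions, sending a megabyte of data as text messages works out hundreds of times more expensive than sending it as ordinary mobile data.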

Then there's Skype, now probably the world's biggest carrier of international voice calls. While some networks remain hostile to Skype, prohibiting its use over their mobile broadband networks, others have embraced it, allowing free Skype calls between handsets. Again, that raises uncomfortable – for the network operators, anyway – questions about the cost of traditional voice calls.

Ultimately, these distinctions are hard to justify in a digital world, full of devices that can readily switch between communications media. They imply that the 1s and 0s which make up a voice call, a Skype call or a webpage are somehow worth different amounts. Perhaps it's time to address that anomaly and switch to pricing according to the amount of data carried, and to open up to new ideas about carrier-side services that would allow communications gadgets to realise their full potential. Then we would really have smart phones.