Computer science student on gesture control: Start experimenting
Back in 2012, authors from Microsoft Research and the UbiComp Lab at the University of Washington prepared their paper, "SoundWave: Using the Doppler Effect to Sense Gestures," for the Proceedings of the Association for Computing Machinery's Conference on Human Factors in Computing Systems. Their work created a lot of interest in the way they managed to implement motion sensing using only a speaker and a mic. Sidhant Gupta, Dan Morris, Shwetak Patel and Desney Tan wrote that "Gestures are becoming an increasingly popular means of interacting with computers," but that "it is still relatively costly to deploy robust gesture-recognition sensors in existing mobile platforms." Enter their SoundWave, a technique that leveraged the speaker and microphone for sensing in-air gestures and motion around a device.
They wrote that SoundWave could directly control existing applications without requiring a user to wear any special sensors. They used the existing speakers on commodity devices to generate tones between 18 and 22 kHz, then used the existing microphones on those same devices to pick up the reflected signal and estimate motion and gestures from the observed frequency shifts. In explaining their preference for tones above 18 kHz, they said the higher the frequency, the greater the shift for a given velocity, which makes it computationally easier to estimate motion at a given resolution. The gesture range included scrolling and a "two-handed seesaw," in which both hands move simultaneously in opposite directions.
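To make that frequency-versus-shift tradeoff concrete, here is a minimal sketch of the underlying Doppler arithmetic (our illustration, not the authors' code; the function name and example velocity are invented):

```python
# Minimal sketch of the Doppler arithmetic behind SoundWave (not the
# authors' code). A tone at pilot_hz reflected off a hand moving at
# velocity v returns shifted by roughly delta_f = 2 * v * pilot_hz / c.

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def reflected_shift_hz(pilot_hz: float, hand_velocity_mps: float) -> float:
    """Approximate frequency shift of a tone reflected off a moving hand."""
    return 2.0 * hand_velocity_mps * pilot_hz / SPEED_OF_SOUND

# The same 0.5 m/s hand gesture produces a larger, easier-to-detect
# shift at higher pilot frequencies:
for pilot in (6_000.0, 18_000.0, 22_000.0):
    print(f"{pilot / 1000:.0f} kHz pilot -> {reflected_shift_hz(pilot, 0.5):.1f} Hz shift")
```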
Their technique has again made news thanks to Daniel Rapp, a Swedish computer science student, who read their research paper, reimplemented the technique, and offers Doppler demos right on a web page. John Wenz in Popular Mechanics described how Rapp's version controls a computer via hand gestures, based on the paper: "First, the speakers emit sounds at particular frequencies. When a hand passes through the waves, it changes them in subtle ways, and the microphone senses those changes when it picks up the sound. The app is programmed to use them to scroll the screen. Rapp has also designed the sound to modulate like a theremin." (Rapp had written that his demo is "reminiscent of a Theremin, in that you can control sound by moving your hands in free space, yet different in that the Theremin is able to measure absolute distance from the base, whereas we can only measure relative motion.")
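In outline, detecting a gesture amounts to watching the spectrum around the emitted pilot tone. The sketch below is our assumption of how such a detector can work, not Rapp's actual code; the guard gap, band width, and noise threshold are invented for illustration:

```python
# Sketch of a pilot-tone motion detector (our assumption, not Rapp's code):
# FFT a block of microphone samples and check whether energy has spread
# above or below the pilot tone, indicating motion toward or away.

import numpy as np

def motion_direction(samples: np.ndarray, sample_rate: int, pilot_hz: float) -> int:
    """Return +1 for motion toward the mic, -1 for motion away, 0 for none."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    band = 200.0  # Hz inspected on each side of the pilot (invented width)
    above = spectrum[(freqs > pilot_hz + 30.0) & (freqs < pilot_hz + band)].sum()
    below = spectrum[(freqs > pilot_hz - band) & (freqs < pilot_hz - 30.0)].sum()

    if max(above, below) < 10.0 * spectrum.mean():  # invented noise threshold
        return 0
    return 1 if above > below else -1
```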
The Doppler Effect, said Rapp, "is a physical phenomenon which affects waves in motion. The standard example is the effect on a fire engine siren as it quickly drives past. When it moves towards you the sound waves are compressed, and so the frequency becomes higher, and when it moves away from you the frequency becomes lower." Rapp also remarked that "It's important to realize that the doppler effect would also occur if you were to run towards the siren, rather than the siren moving towards you."
"Recently I stumbled upon an interesting paper for implementing motion sensingrequiring no special hardware, only a speaker and mic! Unfortunately the paper didn't include code to test it, so I decided to reproduce it here on the web!" said Rapp.


Quantum experiment verifies Einstein's 'spooky action at a distance'
Professor Howard Wiseman, Director of Griffith University's Centre for Quantum Dynamics. Credit: Griffith University
An experiment devised in Griffith University's Centre for Quantum Dynamics has for the first time demonstrated Albert Einstein's original conception of "spooky action at a distance" using a single particle.
In a paper published in the journal Nature Communications, CQD Director Professor Howard Wiseman and his experimental collaborators at the University of Tokyo report their use of homodyne measurements to show what Einstein did not believe to be real, namely the non-local collapse of a particle's wave function.
According to quantum mechanics, a single particle can be described by a wave function that spreads over arbitrarily large distances, but is never detected in two or more places.
This phenomenon is explained in quantum theory by what Einstein disparaged in 1927 as "spooky action at a distance", or the instantaneous non-local collapse of the wave function to wherever the particle is detected.
Almost 90 years later, by splitting a single photon between two laboratories, scientists have used homodyne detectors—which measure wave-like properties—to show the collapse of the wave function is a real effect.
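In standard textbook notation (ours, not the paper's), the split single photon occupies a superposition of being in laboratory A and laboratory B:

```latex
% Textbook notation (ours, not the paper's): one photon shared between
% laboratories A and B after the beamsplitter.
\[
  |\psi\rangle = \frac{1}{\sqrt{2}}\Bigl( |1\rangle_A |0\rangle_B + |0\rangle_A |1\rangle_B \Bigr)
\]
% Detecting the photon at A instantaneously collapses B's share of the
% wave function to the vacuum state, and vice versa -- the non-local
% collapse at issue.
```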
This is the strongest evidence yet for the entanglement of a single particle, an unusual form of quantum entanglement that is being increasingly explored for quantum communication and computation.
"Einstein never accepted orthodox quantum mechanics and the original basis of his contention was this single-particle argument. This is why it is important to demonstrate non-local wave function collapse with a single particle," says Professor Wiseman.
"Einstein's view was that the detection of the particle only ever at one point could be much better explained by the hypothesis that the particle is only ever at one point, without invoking the instantaneous collapse of the wave function to nothing at all other points.
"However, rather than simply detecting the presence or absence of the particle, we used homodyne measurements enabling one party to make different measurements and the other, using quantum tomography, to test the effect of those choices."
"Through these different measurements, you see the wave function collapse in different ways, thus proving its existence and showing that Einstein was wrong."

Sharper nanoscopy
Illustration of the interference between light from the quantum dot (black sphere) and radiation from the mirror dipole (black sphere on the wire). This interference will slightly distort the perceived location of the diffraction spot as imaged on a black screen at the top. The distortion is different depending on whether the quantum dot dipole is oriented perpendicular (red) or parallel (blue) to the wire surface, a difference that can be visualized by imaging the diffraction spot along different polarizations. Credit: Ropp
The 2014 chemistry Nobel Prize recognized important microscopy research that enabled greatly improved spatial resolution. This innovation, resulting in nanometer resolution, was made possible by making the source (the emitter) of the illumination quite small and by moving it quite close to the object being imaged. One problem with this approach is that in such proximity, the emitter and object can interact with each other, blurring the resulting image. Now, a new JQI study has shown how to sharpen nanoscale microscopy (nanoscopy) even more by better locating the exact position of the light source.
Diffraction limit
Traditional microscopy is limited by the diffraction of light around objects. That is, when a light wave from the source strikes the object, the wave will scatter somewhat. This scattering limits the spatial resolution of a conventional microscope to no better than about one-half the wavelength of the light being used. For visible light, diffraction limits the resolution to no better than a few hundred nanometers.
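In its standard textbook (Abbe) form, which the article does not spell out, the limit reads:

```latex
% Standard Abbe diffraction limit (textbook form, not from the article):
% d is the smallest resolvable feature, lambda the wavelength, and NA
% the numerical aperture of the objective.
\[
  d \approx \frac{\lambda}{2\,\mathrm{NA}}
\]
% For green light (lambda ~ 550 nm) and NA ~ 1, d is roughly 275 nm,
% consistent with "a few hundred nanometers."
```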
How, then, can microscopy using visible light attain a resolution down to several nanometers? By using tiny light sources that are no larger than a few nanometers in diameter. Examples of such light sources include fluorescent molecules, nanoparticles, and quantum dots. The JQI work uses quantum dots, tiny crystals of a semiconductor material that can emit single photons of light. If such tiny light sources are close enough to the object being mapped or imaged, nanometer-scale features can be resolved. This type of microscopy, called super-resolution imaging, surmounts the standard diffraction limit.
Image-dipole distortions
JQI fellow Edo Waks and his colleagues have performed nanoscopic mappings of the electromagnetic field profile around silver nanowires by positioning quantum dots (the emitters) nearby (previous work: http://phys.org/news/2013-02-quantum-dots-probe-nanowires.html ). They discovered that sub-wavelength imaging suffered from a fundamental problem: an "image dipole" induced in the surface of the nanowire was distorting knowledge of the quantum dot's true position. This uncertainty in the position of the quantum dot translates directly into a distortion of the electromagnetic field measurement of the object.
The distortion results from the fact that an electric charge positioned near a metallic surface produces an electric field as if a ghostly charge of opposite sign were located as far beneath the surface as the original charge is above it. This is analogous to the image you see when looking at yourself in a mirror; the mirror image appears to be as far behind the mirror as you are in front. The quantum dot does not have a net electrical charge, but it does have a net electrical dipole, a slight displacement of positive and negative charge within the dot.
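For the idealized case of a point dipole at height h above a perfect conductor (a textbook simplification; the experiment involves a real silver nanowire), the method of images gives:

```latex
% Method-of-images rule for a dipole above a perfect conductor
% (textbook idealization, not the paper's full treatment): a source
% dipole at z = +h produces an image at z = -h with its component
% parallel to the surface reversed and its perpendicular component
% preserved.
\[
  \mathbf{p}_{\mathrm{image}} = \bigl(-p_{\parallel},\; +p_{\perp}\bigr)
  \quad\text{at } z = -h
\]
```

This orientation dependence is why the caption above distinguishes dipoles oriented perpendicular versus parallel to the wire surface.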
Thus, when the dot approaches the wire, the wire develops an "image" electrical dipole of its own. Since the measured light from the dot is the substance of the imaging process, light coming from the image dipole can interfere with light coming directly from the dot. This distorts the perceived position of the dot by an amount roughly 10 times larger than the expected spatial accuracy of the imaging technique (as if the nanowire were acting as a sort of funhouse mirror).
The JQI experiment measured the image-dipole effect and showed that, under appropriate circumstances, it can be corrected. The resulting work provides a more accurate map of the electromagnetic fields surrounding the nanowire.
The JQI scientists published their results in the journal Nature Communications.
Lead author Chad Ropp (now a postdoctoral fellow at the University of California, Berkeley) says that the main goal of the experiment was to produce better super-resolution imaging: "Any time you use a nanoscale emitter to perform super-resolution imaging near a metal or high-dielectric structure, image-dipole effects can cause errors. Because these effects can distort the measurement of the nano-emitter's position, they are important to consider for any type of super-resolved imaging that performs spatial mapping."
"Historically scientists have assumed negligible errors in the accuracy of super-resolved imaging," says Ropp. "What we are showing here is that there are indeed substantial inaccuracies and we describe a procedure on how to correct for them."
Google selects Ruth Porat as its new chief financial officer, replacing Patrick Pichette
Google on Tuesday named as its new chief financial officer Ruth Porat, a Morgan Stanley banker considered one of the most powerful women on Wall Street.
Porat will start at Google on May 26, moving from a similar position at the investment banking giant Morgan Stanley.
She will replace Patrick Pichette, who announced his retirement this month after nearly seven years as one of the Internet titan's top executives.
Her work to help rescue the insurance giant AIG during the 2008 financial crisis earned her the "most powerful" moniker in several media reports.
During that crisis, she led the Morgan Stanley teams advising the US Treasury on mortgage finance giants Fannie Mae and Freddie Mac as well as AIG.
Porat has been at Morgan Stanley since 1987 and has been vice chairman of investment banking, global head of the financial institutions group and co-chief of technology investment banking.
She has been the lead banker for big tech firm financing, including for Amazon, eBay, Netscape, Priceline and VeriSign.
Porat has a bachelor's degree from Stanford University, an MBA from the Wharton School of the University of Pennsylvania and a master's degree from the London School of Economics.
"We're tremendously fortunate to have found such a creative, experienced and operationally strong executive," said Google chief executive Larry Page in a statement.
"I look forward to learning from Ruth as we continue to innovate in our core—from search and ads, to Android, Chrome and YouTube—as well as invest in a thoughtful, disciplined way in our next generation of big bets."
Two weeks ago, Canada-born Pichette laid out his reasons for retiring in a heartfelt post on the Google+ social network, explaining in touching detail how, after more than 25 years of non-stop work, he is shifting gears to explore the world with his wife, Tamar.
Porat, who grew up in Silicon Valley, said she was "delighted to be returning to my California roots and joining Google."
"Growing up in Silicon Valley, during my time at Morgan Stanley and as a member of Stanford's board, I've had the opportunity to experience first hand how tech companies can help people in their daily lives," she added. "I can't wait to roll up my sleeves and get started."

Chemists discover temporary phases of chemical structures
The research group of Prof. Tomislav Friščić in McGill's Department of Chemistry has made a name for itself in the little-known, but growing field of "mechanochemistry," in which chemical transformations are produced by milling, grinding or shearing solid-state ingredients – brute force, in other words, rather than fancy liquid agents. "Your coffee maker grinds things," and grinding molecules in the lab involves much the same principle, Friščić notes. Using mechanical force also has the significant advantage of avoiding the use of environmentally harmful bulk solvents.
In late 2012, a trans-Atlantic team of researchers co-led by Friščić reported they had been able to observe a milling reaction in real time, by using highly penetrating X-rays to observe the rapid chemical transformations as a mill mixed, ground, and transformed simple ingredients into a complex product.  Now, the researchers have used this technique to discover a short-lived, structurally unusual metal-organic material created during the milling process.
In a paper published March 23 in Nature Communications, the scientists dub the material "katsenite," after the first author of the article, Athanassios D. Katsenis. Now a postdoctoral fellow at McGill, Katsenis was a visiting student in Friščić's group when the research was conducted. He analyzed the topology of the material—the arrangement of and connections between the structural 'nodes' of its crystal structure—and realized that it didn't correspond to anything previously seen.
The discovery provides the first concrete evidence of something that has long been suspected, the researchers conclude:  milling creates temporary phases with structures that are not achievable under conventional conditions.
"While this particular katsenite-type structure is unlikely to be of any practical use, the discovery represents a breakthrough that impacts our understanding of large-scale processing of materials and opens a new environment to generate previously inaccessible structures," Friščić says. Besides all that, he adds, "It is just great to have a chemical structure type named after a researcher at McGill!"

3-D satellite, GPS earthquake maps isolate impacts in real time
Satellite radar image of the magnitude 6.0 South Napa earthquake. The "fringe" rainbow pattern appears where the earthquake deformed the ground's surface, with one full cycle of the color spectrum (magenta to magenta) showing 3 centimeters of change. Satellite data like this can now be used to give researchers an understanding of an earthquake and its impacts within days. Credit: the European Space Agency.
When an earthquake hits, the faster first responders can get to an impacted area, the more likely infrastructure—and lives—can be saved.
New research from the University of Iowa, along with the United States Geological Survey (USGS), shows that GPS and satellite data can be used in a real-time, coordinated effort to fully characterize a fault line within 24 hours of an earthquake, ensuring that aid is delivered faster and more accurately than ever before.
Earth and Environmental Sciences assistant professor William Barnhart used GPS and satellite measurements from the magnitude 6.0 South Napa, California earthquake on August 24, 2014, to create a three-dimensional map of how the ground surface moved in response to the earthquake. The map was made without using traditional rapid response instruments, such as seismometers, which may not afford the same level of detail for similar events around the globe.
"By having the 3D knowledge of the earthquake itself, we can make predictions of the ground shaking, without instruments to record that ground shaking, and then can make estimates of what the human and infrastructure impacts will be— in terms of both fatalities and dollars," Barnhart says.
The study, "Geodetic Constraints on the 2014 M 6.0 South Napa Earthquake" published in the March/April edition of Seismological Research Letters, is the first USGS example showing that GPS and satellite readings can be used as a tool to shorten earthquake response times.
And while information about an earthquake's impact might be immediately known in an area such as southern California, Barnhart says the technique will be most useful in the developing world. The catastrophic magnitude 7.0 earthquake that hit Haiti in 2010 is the perfect example of the usefulness of this kind of tool, Barnhart says. The earthquake struck right under the capital city of Port-au-Prince, killing up to 316,000 people, depending on estimates, and costing billions of dollars in aid.
"On an international scale, it dramatically reduces the time between when an earthquake happens, when buildings start to fall down, and when aid starts to show up," Barnhart says.
To accurately map the South Napa earthquake for this study, Barnhart and a team of researchers created a complex comparison scenario.
They first used GPS and satellite readings to measure the very small (millimeter- to centimeter-scale) displacements of the ground's surface caused by the earthquake. They then fed those measurements into a mathematical inversion that relates how much the ground moved to the degree of slip on the fault plane. Slip describes the amount, timing, and distribution of fault-plane movement during an earthquake.
This allowed the group to determine the location, orientation, and dimensions of the entire fault without setting foot on the ground near the earthquake. The inversion gave the researchers predictions of how much the ground might be displaced, and they compared those results to their observations, bit by bit, until predictions and observations matched. The resulting model is a 3D map of fault slip beneath the Earth's surface. The entire procedure takes only a few minutes to complete.
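Schematically, this is a linear least-squares problem: surface displacements d relate to fault slip m through a matrix G of elastic Green's functions. The sketch below is our illustration of that step, not the USGS code; the random stand-in for G and all problem sizes are invented:

```python
# Schematic slip inversion (our illustration, not the USGS code).
# Surface displacements d relate linearly to fault slip m via Green's
# functions G, so damped least squares recovers the slip distribution.

import numpy as np

rng = np.random.default_rng(0)
n_obs, n_patches = 200, 40                    # observations, fault patches

# Stand-in Green's functions; real ones come from an elastic half-space model.
G = rng.normal(size=(n_obs, n_patches))

true_slip = np.zeros(n_patches)
true_slip[15:25] = 0.5                        # 0.5 m of slip on central patches
d = G @ true_slip + rng.normal(scale=0.001, size=n_obs)  # mm-level noise

# Damped least squares: minimize ||G m - d||^2 + alpha^2 ||m||^2.
alpha = 0.1
m_hat, *_ = np.linalg.lstsq(
    np.vstack([G, alpha * np.eye(n_patches)]),
    np.concatenate([d, np.zeros(n_patches)]),
    rcond=None,
)
print(f"peak recovered slip: {m_hat.max():.2f} m")
```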
Nationally, there is a push to create an earthquake early-warning system, which is already being tested internally by the USGS in coordination with the University of California, Berkeley; the California Institute of Technology; and the University of Washington. The system worked in testing for the Bay Area during the Napa earthquake, although only researchers, first responders, and other officials received the early-warning message. Individuals in Berkeley received nearly 10 seconds of advance warning before the ground began shaking. The information contained in Barnhart's study could be used to create further tools for predicting the economic and human tolls of earthquakes.
"That's why this is so important. It really was the chance to test all these tools that have been put into place," Barnhart says. "It happened in a perfect place, because now we're much more equipped for a bigger earthquake."

Medical residues purified from wastewater with new techniques
Wastewater purified using membrane filtration. Credit: Teemu Leinonen, LUT
Contaminants such as medical residues and pesticides pass through the traditional wastewater purification process and return to the environment, and concern over their volume in waste and drinking water is growing globally. Research performed at Lappeenranta University of Technology (LUT) shows that improving the efficiency of the current wastewater purification process can remove more than 95 percent of contaminants, such as drug residues and pesticides, from water.
Researchers tested the removal of drug residues using membrane filtration and oxidation. The results show that these technologies remove 95%, and in some cases up to 99%, of contaminants and nutrients. Even very small concentrations of contaminants entering the water system can damage aquatic ecosystems, for example through hormonal changes in fish.
The research results show that more efficient water cleaning technologies, such as membrane filtration and oxidation, effectively remove substances that are normally difficult or impossible to remove in a biological cleaning process. Such substances include drugs for depression and epilepsy and pain gels that are applied externally.
"Generally speaking, drugs that act on the mind and the heart are the most difficult to remove," says LUT Professor Mika Mänttäri, who was responsible for the study.  
The research also demonstrated that increasing the efficiency of water treatment can filter certain nutrients from the water. For example, phosphorus and nitrogen can be almost completely removed. Membrane filtration reduced the amount of phosphorus to one tenth or even one hundredth of the emissions currently permitted. The water cleaning technologies that were tested can thus reduce emissions significantly in comparison to the traditional water treatment process.
The wastewater cleaning process used today in nearly all municipalities only removes the substances that are readily biodegradable or which bind to the activated sludge in the process. Such substances include pesticides and basic pain medications that contain ibuprofen, paracetamol and ketoprofen. The process allows a lot of other non-biodegradable contaminants to pass through.
At this time, Finland does not have any wastewater treatment plants that are designed especially to filter drug residues out of the water. The situation is complicated by the fact that Finland does not have any general limit values for contaminants in the water and, subsequently, no obligations to monitor or remove such substances. This means that existing wastewater treatment plants have no obligations or incentives to invest in new technologies.
In the near future, the EU will make large treatment plants subject to monitoring requirements for certain priority substances. A monitoring requirement was already set for some substances last autumn. One of the substances being monitored is the anti-inflammatory drug diclofenac, which is used as the active ingredient in pain gels. Once limit values are implemented, the current biological process will no longer be adequate.
However, many municipalities are already discussing how wastewater treatment should be arranged in the future. The LUT researchers estimate that the next 5–10 years in Finland will be a transition phase to more efficient cleaning, as the environmental permits for many municipal treatment plants expire and the permit conditions become stricter over the next decade.