Technologies
Looks like msft is finally making the right bet. If I were the new chief in town, I would make Dropbox an offer so huge they couldn't refuse to sell. It's time to load up on msft!
Is this the main reason 馬雲 can't do an IPO in the USA?
Give it up, 馬雲! Twitter will suck all the oxygen out of the IPO market in the USA for the foreseeable future. 馬雲 needs either an IPO to raise capital or a bridge loan to pay off the existing loan due in the coming months. There is no chance for 馬雲 to have a successful IPO in the USA in the near future. It's absolutely unnecessary for Hong Kong to change any rule because 馬雲 has nowhere to go!
If I were the regulator in Hong Kong, I would make a counter-offer to 馬雲 as follows:
- set up a holding company in Hong Kong according to the laws and regulations in Hong Kong
- make his 合伙人制度 (partnership system) a wholly owned subsidiary of the holding company
- 2/3 of the board members of the holding company would be elected according to the laws and regulations in Hong Kong as well as the by-laws of the holding company
- 1/3 of the board members of the holding company would be elected by 馬雲's partners
- the entire board would have full power to hire and fire 馬雲's partners.
PS: it's not even necessary to give the above accommodation because 馬雲 is backed into a corner! Also, if Hong Kong changes its laws for 馬雲, only God knows who will be next in line... sooner or later, Hong Kong will be erased from the map of the world's major financial centers!
Raspberry Pi
“Final evolution” of original Raspberry Pi gains micro-SD and lower power consumption by David Meyer
Jul. 14, 2014 - 1:42 AM PDT
There’s a new iteration of the open-source Raspberry Pi computer kit: the Model B+. According to the Raspberry Pi Foundation, it’s the “final evolution” of the original Raspberry Pi design, before a move to a future full version 2.
The changes are mostly in the connector layout, meaning cases for the existing Model B may not be compatible. A couple of parts and kits also won't work with the new design anymore, such as the Wolfson audio card and the Adafruit Cobbler prototyping kit (at least, not out of the box).
In a blog post on Monday, the U.K.-based Foundation detailed the new features of its maker-friendly kit – including the ability to power memory sticks and so on through the USB port:
• More GPIO. The GPIO header has grown to 40 pins, while retaining the same pinout for the first 26 pins as the Model B.
• More USB. We now have 4 USB 2.0 ports, compared to 2 on the Model B, and better hotplug and overcurrent behaviour.
• Micro SD. The old friction-fit SD card socket has been replaced with a much nicer push-push micro SD version.
• Lower power consumption. By replacing linear regulators with switching ones we’ve reduced power consumption by between 0.5W and 1W.
• Better audio. The audio circuit incorporates a dedicated low-noise power supply.
• Neater form factor. We’ve aligned the USB connectors with the board edge, moved composite video onto the 3.5mm jack, and added four squarely-placed mounting holes.
The Model B+ costs $35, the same as its predecessor. Apart from the changes listed above, it uses the same 700MHz processor and also has half a gig of RAM. However, because “industrial customers” might still want to continue with the Model B layout, production of that model will continue “for as long as there’s demand for it.”
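Because the first 26 pins of the new 40-pin GPIO header keep the same pinout as the Model B, existing GPIO scripts should carry over unchanged. As a rough illustration only (this is not from the Foundation's post), here is a minimal blink sketch assuming the common RPi.GPIO Python library and an LED wired to physical pin 7, which sits inside the original 26-pin block:

```python
# Minimal blink sketch for the 40-pin header (first 26 pins match the Model B).
# Assumes the RPi.GPIO library and an LED wired to physical pin 7.
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)      # use physical pin numbers, unchanged on the B+
GPIO.setup(7, GPIO.OUT)       # pin 7 sits inside the original 26-pin block

try:
    for _ in range(10):
        GPIO.output(7, GPIO.HIGH)   # LED on
        time.sleep(0.5)
        GPIO.output(7, GPIO.LOW)    # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()                  # release the pins on exit
```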
StepUp
StepUp Lets You Easily Chop YouTube Videos Into Bite-Size Chunks
Online video is limbering up. The volume of video content already being watched online is massive. Here’s just one stat to boggle the mind: ComScore reported close to 50 billion online videos watched by Americans alone in just the month of January this year. But really you ain’t seen nothing yet.
As cameras find their way into more connected devices, and more people around the world adopt smartphones with a lens and a data connection as their personal mobile device, orders of magnitude more video content is going to be produced, uploaded and consumed.
There’s no doubt the digital future will be filmed and streamed from myriad mobile devices. So there’s a growing problem. Namely: discoverability.
With so much video content fighting for eyeballs, finding the best stuff is only going to get trickier. And that’s likely to drive demand for tools that help condense videos into highlights packages to make content quicker and easier to consume. Digital video is both increasingly plentiful and ripe for remixing.
So step forward UK-based startup StepUp, which has built a platform for turning existing digital video content into shorter snippets that can be looped for repeat viewing, or watched in sequence — one segment after another.
Founder Makoto Inoue dubs his creation a ‘Vine for YouTube’. The basic idea is to give the average online consumer the ability to chain together and tag/annotate video snippets — cutting a single longer original video down to size as a highlight snippet. Or combining multiple highlights into a sequence of easily digestible chunks that can be used to structure and navigate the video content to aid learning.
“Online video itself is a huge market but many people are focusing on how to let people create more video contents, but there are not many companies focusing on how to help people consume. So the more videos get generated there’s more [scope] for us to help people consume videos much easier,” says Inoue.
Pro video editing tools already offer the ability to edit video content, of course, but Inoue wants to democratize the process to make it easy enough for the mainstream online consumer to do, not just video editing professionals. The resulting edit may be less polished but it’s more accessible.
He describes StepUp as a “video bite-sized service”. While that might immediately make you think of Vine, Twitter’s looping micro video snippet format, it’s not a like-for-like rival because Vine is focused on helping people film new video, while StepUp wants to let people remix existing content in new ways.
“Vine is all about video creation,” says Inoue. “[StepUp is] more about curation. Vine actually made my life a lot easier because when I say ‘Vine for YouTube video’ more people understand why a small size make sense.”
Animated GIF tools are perhaps a better comparison but StepUp aims to be a much broader platform than the GIF’s one trick pony. It’s about remixing longer video content into useful — or entertaining — highlights packages that can be used as a learning aid (by letting people re-watch individual segments until a tidbit of knowledge sticks, for instance), not just a medium for condensed video buffoonery.
As well as giving people basic and accessible video editing tools, StepUp is obviously also aiming to become a video content platform in its own right — where people can seek, find and consume videos on particular topics that others have curated into its segmented montage format. (Another startup with some crossover is Coursmos, which is using short videos as a format to support mobile e-learning.)
Inoue argues that identifying a good bit in a video doesn’t require a video editor or any specialist training. “You don’t really need a specific skill or design skill to augment video [in that way]. All you need to know is from where to where is important. So I focus on that specific bit,” he tells TechCrunch.
“And, the biggest win for us is, assuming [the video] on YouTube you don’t have to download and upload which takes lots of time,” he adds.
There’s no limit on the length a StepUp video segment can be (although it defaults to a Vine-esque six seconds). Once a source video has been added to StepUp, the user can press a clip button to grab a particular segment, changing the time stamp after the fact if needed and then, once happy with their clip, they add a category, tags or other notes and upload their remix to StepUp’s platform.
Currently source video for splicing and dicing can be grabbed from YouTube, or users can search for existing content uploaded to StepUp to use. On the viewer side, those watching StepUp videos can like individual segments, and comment on the whole video. StepUp videos can also be embedded on other sites as well as watched on its own platform.
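In practice, a remix segment like the one described above boils down to little more than a source video ID plus start and end offsets. The sketch below is a hypothetical illustration, not StepUp's actual code or data model; it relies only on the fact that YouTube's standard iframe embed accepts start and end parameters in seconds:

```python
# Hypothetical sketch of how a StepUp-style "clip" might be modelled:
# a YouTube video ID plus start/end offsets, rendered as a standard
# YouTube iframe embed (which accepts start/end in seconds).
from dataclasses import dataclass, field

@dataclass
class Clip:
    video_id: str                      # YouTube video ID of the source
    start: int                         # segment start, in seconds
    end: int                           # segment end, in seconds
    tags: list = field(default_factory=list)  # category/tags the curator attaches

    def embed_html(self) -> str:
        return (
            f'<iframe width="560" height="315" '
            f'src="https://www.youtube.com/embed/{self.video_id}'
            f'?start={self.start}&end={self.end}" '
            f'frameborder="0" allowfullscreen></iframe>'
        )

# A six-second default segment, Vine-style (placeholder video ID):
clip = Clip(video_id="SOME_VIDEO_ID", start=42, end=48, tags=["music", "chorus"])
print(clip.embed_html())
```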
In terms of direct competitors Inoue names Russian startup Coub as the closest, but points out that unlike Coub there’s no time limit on video segments on StepUp — meaning it can be used to create remixes that offer the viewer something more substantial than can be conveyed in just 10 seconds.
StepUp sidesteps rights issues about using others' content by providing a link back to the original video/s — much like Pinterest turning existing online photos into pins. Inoue believes this could be a selling point for StepUp, pointing to online news' and community sites' penchant for using embedded animated GIFs as a bit of a grey area when it comes to rights issues.
“For these purposes it’s much easier if they embed my tool. It’s almost like animated GIF but with sounds and also with one click you can go back to the original video. That’s one area I wish people would use [StepUp for],” he adds.
In terms of particular video content he sees as a good fit for StepUp he suggests longer talks and music videos offer especially ripe raw material for stepping up.
For instance he says panel discussions at conferences might lend themselves to a best bits cut, while music videos tend to be about three to four minutes long on average — so he argues there’s scope to offer a condensed, bite-size teaser version. Or for fans to splice together their favourite moments.
The wider point again is that StepUp can accommodate both e-learning and entertainment use-cases. And if it can build a big enough user-base it could become a video discovery platform in its own right. A sort of YouTube digest, if you will. Or that’s the grand vision.
Inoue’s original idea for StepUp was called Benkyo Player, which still exists as a separate video learning toolset offering things like subtitle search for the video libraries of massively open online courses (MOOCs).
That e-learning angle got Inoue and Benkyo Player backing from the social good focused Bethnal Green Ventures (BGV). The accelerator then remained on board, despite the pivot from dedicated e-learning video player to the more expansive StepUp video platform.
Content categories on the site currently include various topics that lend themselves to step-by-step learning and instruction, such as cooking, fitness, martial arts, musical instruments and languages. So, although StepUp is a pivot, it hasn't moved a million miles away from Inoue's original e-learning focus.
The name StepUp actually comes from a film of the same name about a hip hop dancer and a classical ballet dancer trying to teach each other their respective dance moves — and that step by step learning process is what StepUp aims to facilitate, says Inoue.
StepUp in its current form launched around April this year and, at this early stage, is averaging between 10,000 and 30,000 monthly page views.
As well as the original £15,000 backing from BGV, Inoue also pulled in a grant and mentoring from Nominet, bringing StepUp's pre-seed funding so far to £65,000. He's now looking for seed funding of between £100,000 and £500,000 to expand on what he's built so far.
If he's able to pull in the larger amount — either in Europe or over the pond, where the money generally flows more freely for these types of big-platform UGC content startups — Inoue says the priority would be building StepUp for mobile. Currently the product doesn't work properly on mobile devices because of browser restrictions. Fixing that would be a key priority, he says.
In terms of business model, he sees the platform supporting a Pinterest-style model once it has enough users — offering native ads in a spliced snippets format, giving brands a way to tease ad content and potentially spice it up or make it less annoying to view (by turning a three minute pre-roll advert into a more condensed and consumable snippet, for example).
Other monetization ideas include a freemium model for using StepUp’s tools — e.g. charging for things like adding non-public video content for editing, or offering more control over the tagging process. Doing a degree of automation for the step/clip or caption process for a fee is another idea.
Inoue also sees potential offering video analytics for brands — since the platform can be used to identify the particular segments of video content that viewers really like (based on things like how many times they loop a segment or which snippets of a video they actively like).
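That analytics idea is essentially event counting per segment. A hypothetical sketch follows, with event names and fields that are assumptions rather than StepUp's actual schema:

```python
# Hypothetical sketch of the kind of per-segment analytics described above:
# count how often viewers loop or like each clip to surface the "best bits".
from collections import Counter

events = [
    {"clip_id": "intro",  "type": "loop"},
    {"clip_id": "chorus", "type": "loop"},
    {"clip_id": "chorus", "type": "loop"},
    {"clip_id": "chorus", "type": "like"},
    {"clip_id": "outro",  "type": "like"},
]

loops = Counter(e["clip_id"] for e in events if e["type"] == "loop")
likes = Counter(e["clip_id"] for e in events if e["type"] == "like")

for clip_id in sorted(set(loops) | set(likes)):
    print(f"{clip_id}: {loops[clip_id]} loops, {likes[clip_id]} likes")
```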
All those ideas are just snippets of potential right now, though. Inoue is a sole founder driving StepUp, and needs funding to step the current desktop-only product up to a more accessible, mobile-friendly next level. But the core idea at least stands on a solid foundation — so he'll be hoping the money follows.
Quantum Computing
Atom optics to help detect the imperceptible by Lori Keesey
Mon, 10/22/2012 - 8:43am
A pioneering technology capable of atomic-level precision is now being developed to detect what so far has remained imperceptible: gravitational waves or ripples in space-time caused by cataclysmic events including even the Big Bang itself.
A team of researchers at NASA's Goddard Space Flight Center in Greenbelt, Md., Stanford University in California, and AOSense, Inc., in Sunnyvale, Calif., recently won funding under the NASA Innovative Advanced Concepts (NIAC) program to advance atom-optics technologies. Some believe this emerging, highly precise measurement technology is a technological panacea for everything from measuring gravitational waves to steering submarines and airplanes.
"I've been following this technology for a decade," said Bernie Seery, a Goddard executive who was instrumental in establishing Goddard's strategic partnership with Stanford University and AOSense two years ago. "The technology has come of age and I'm delighted NASA has chosen this effort for a NIAC award," he said.
The NIAC program supports potentially revolutionary, high-risk technologies and mission concepts that could advance NASA's objectives. "With this funding and other support, we can move ahead more quickly now," Seery said, adding that the U.S. military has invested heavily in the technology to dramatically improve navigation. "It opens up a wealth of possibilities."
Although the researchers believe the technology offers great promise for a variety of space applications, including navigating around a near-Earth asteroid to measure its gravitational field and deduce its composition, so far they have focused their efforts on using Goddard and NASA Research and Development seed funding to advance sensors that could detect theoretically predicted gravitational waves.
Predicted by Albert Einstein's general theory of relativity, gravitational waves occur when massive celestial objects move and disrupt the fabric of space-time around them. By the time these waves reach Earth, they are so weak that the planet expands and contracts less than an atom in response. This makes their detection with ground-based equipment more challenging because environmental noise, like ocean tides and earthquakes, can easily swamp their faint murmurings.
Although astrophysical observations have implied their existence, no instrument or observatory, including the ground-based Laser Interferometer Gravitational-Wave Observatory, has ever directly detected them.
[Image caption: Cataclysmic events, such as this artist's rendition of a binary-star merger, are believed to create gravitational waves that cause ripples in space-time.]
Should scientists confirm their existence, they say the discovery would revolutionize astrophysics, giving them a new tool for studying everything from inspiralling black holes to the early universe before the fog of hydrogen plasma cooled to give way to the formation of atoms.
The team believes atom optics or atom interferometry holds the key to directly detecting them.
Atom interferometry works much like optical interferometry, a 200-year-old technique widely used in science and industry to obtain highly accurate measurements. It obtains these measurements by comparing light that has been split into two equal halves with a device called a beamsplitter. One beam reflects off a mirror that is fixed in place; from there, it travels to a camera or detector. The other shines through something scientists want to measure. It then reflects off a second mirror, back through the beamsplitter, and then onto a camera or detector.
Because the path that one beam travels is fixed in length and the other travels an extra distance or in some other slightly different way, the two light beams overlap and interfere when they meet up, creating an interference pattern that scientists inspect to obtain highly precise measurements.
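In quantitative terms, the extra path length shows up as a phase difference between the two beams, and the brightness at the detector follows directly from that phase. Here is a minimal sketch with purely illustrative numbers (a common 633 nm laser line and a 100 nm path difference, neither taken from the article):

```python
# Illustrative sketch: the interference pattern encodes the extra path length
# one beam travels as a phase shift between the two arms.
import math

wavelength = 633e-9   # metres, a common HeNe laser line (assumed)
delta_L = 100e-9      # extra path length on the measurement arm (assumed)

delta_phi = 2 * math.pi * delta_L / wavelength   # phase difference between arms
intensity = math.cos(delta_phi / 2) ** 2         # normalised two-beam intensity

print(f"phase difference: {delta_phi:.3f} rad")
print(f"relative intensity at the detector: {intensity:.3f}")
```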
Atom interferometry, however, hinges on quantum mechanics, the theory that describes how matter behaves at sub-microscopic scales. Just as waves of light can act like particles called photons, atoms can be cajoled into acting like waves if cooled to near absolute zero. At those frigid temperatures, which scientists achieve by firing a laser at the atom, its velocity slows to nearly zero. By firing another series of laser pulses at laser-cooled atoms, scientists put them into what they call a "superposition of states."
In other words, the atoms have different momenta permitting them to separate spatially and be manipulated to fly along different trajectories. Eventually, they cross paths and recombine at the detector—just as in a conventional interferometer. "Atoms have a way of being in two places at once, making it analogous to light interferometry," said Mark Kasevich, a Stanford University professor and team member credited with pushing the frontiers of atom optics.
The power of atom interferometry is its precision. If the path an atom takes varies by even a picometer, an atom interferometer would be able to detect the difference. Given its atomic-level precision, "gravitational-wave detection is arguably the most compelling scientific application for this technology in space," said physicist Babak Saif, who is leading the effort at Goddard.
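One way to see where that sensitivity comes from is the standard phase-shift formula for a three-pulse (Mach-Zehnder) light-pulse atom interferometer, delta_phi = k_eff * a * T^2, where k_eff is the effective laser wavevector, a the acceleration being sensed and T the time between pulses. The numbers below are illustrative assumptions (rubidium's 780 nm line and a one-second pulse spacing), not the team's design values:

```python
# Back-of-the-envelope sketch of the sensitivity of a three-pulse
# (Mach-Zehnder) atom interferometer: delta_phi = k_eff * a * T**2.
import math

wavelength = 780e-9                 # rubidium D2 line, metres (assumed)
k_eff = 4 * math.pi / wavelength    # two-photon effective wavevector, rad/m
T = 1.0                             # time between laser pulses, seconds (assumed)
a = 9.81                            # acceleration being sensed, m/s^2

delta_phi = k_eff * a * T**2
print(f"phase shift for g over 1 s: {delta_phi:.3e} rad")

# Even a one-part-in-a-billion change in the acceleration still shifts the phase by:
print(f"phase shift for a 1e-9 change in g: {delta_phi * 1e-9:.3e} rad")
```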
[Image caption: Shown here is the Goddard-designed breadboard laser system critical to advancing atom-optics instruments. The device will be tested in the Stanford University drop tower. Credit: NASA/Pat Izzo]
Since joining forces, the team has designed a powerful, narrowband fiber-optic laser system that it plans to test at one of the world's largest atom interferometers—a 33-foot drop tower in the basement of a Stanford University physics laboratory. Close scientifically to what the team would need to detect theoretical gravitational waves, the technology would be used as the foundation for any atom-based instrument created to fly in space, Saif said.
During the test, the team will insert a cloud of neutral rubidium atoms inside the 33-foot tower. As gravity asserts a pull on the cloud and the atoms begin to fall, the team will use its new laser system to fire pulses of light to cool them. Once in the wave-like state, the atoms will encounter another round of laser pulses that allow them to separate spatially. Their trajectories then can be manipulated so that their paths cross at the detector, creating the interference pattern.
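The tower's height sets the longest interrogation time available, since the split atom cloud can only evolve for as long as it is in free fall. A rough back-of-the-envelope check (not a figure from the article):

```python
# Rough sketch of the interrogation time a 33-foot drop tower allows:
# the free-fall time t = sqrt(2*h/g) bounds how long the split atom cloud
# can evolve before it reaches the bottom.
import math

h = 33 * 0.3048        # tower height, feet converted to metres (~10 m)
g = 9.81               # m/s^2

t = math.sqrt(2 * h / g)
print(f"maximum free-fall time: {t:.2f} s")   # roughly 1.4 seconds
```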
The team also is fine-tuning a gravitational-wave mission concept it has formulated. Similar to the Laser Interferometer Space Antenna (LISA), the concept calls for three identically equipped spacecraft placed in a triangle-shaped configuration. Unlike LISA, however, the spacecraft would come equipped with atom interferometers and they would orbit much closer to one another—between 500 and 5,000 kilometers apart, compared with LISA's five-million-kilometer separation. Should a gravitational wave roll past, the interferometers would be able to sense the minuscule movement.
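To get a feel for what "minuscule" means here, a passing gravitational wave of strain h changes a baseline of length L by something of order h times L. With an illustrative strain of 10^-21 (an assumption, not a number from the article), the two proposed separations give:

```python
# Order-of-magnitude sketch: a gravitational wave of strain h changes a
# baseline L by roughly delta_L ~ h * L.  The strain value is an assumption.
h_strain = 1e-21           # illustrative target strain amplitude
for L_km in (500, 5000):   # the spacecraft separations mentioned for the concept
    L = L_km * 1e3                      # metres
    delta_L = h_strain * L
    print(f"baseline {L_km} km -> length change ~ {delta_L:.1e} m")
```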
"I believe this technology will eventually work in space," Kasevich said. "But it presents a really complicated systems challenge that goes beyond our expertise. We really want to fly in space, but how do you fit this technology onto a satellite? Having something work in space is different than the measurements we take on Earth."
That's where Goddard comes in, Saif said. "We have experience with everything except the atom part," he said, adding that AOSense already employs a team of more than 30 physicists and engineers focused on building compact, ruggedized atom-optics instruments. "We can do the systems design; we can do the laser. We're spacecraft people. What we shouldn't be doing is reinventing the atomic physics. That's our partners' forte."
Source: NASA's Goddard Space Flight Center