A robot with a view—how drones and machines can navigate on their own [video]
2017-05-12 17:43
Robots have captured our imaginations for more than 70 years. When you think about a robot, you might picture Rosie from the Jetsons or a robotic arm in a manufacturing facility. However, the next generation of robots will be very different. These robots and
drones will be seamlessly integrated into our everyday lives. Think small flying cameras that will change the way we take selfies, or home-assistant robots that will simplify the tedium of everyday, time-consuming tasks.
Why now? Historically, robots have been limited to industrial settings. But thanks to the same technology powering our smartphones, robots are poised to evolve into intelligent, intuitive machines capable of perceiving their environments, understanding our
needs, and helping us in unprecedented ways.
For years Qualcomm Research, the R&D division of Qualcomm Technologies, Inc., has been at the forefront of sensor fusion, computer vision and machine learning technologies—innovations that will enable smarter robots and drones to “see” in 3D, sense their surroundings,
avoid collision, and autonomously navigate their environments.
How we’re enabling robots to move autonomously
For a robot to navigate autonomously, it must accurately estimate its position and orientation as it moves through an unknown environment. Known as position and orientation tracking with 6 degrees of freedom (6-DOF pose tracking), this capability is essential not only for robotics, but also for many other applications, such as virtual reality (VR), augmented reality (AR) gaming, and indoor navigation.
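To make 6-DOF pose tracking concrete, the sketch below represents a pose as a translation vector plus a unit quaternion and chains two poses together. The helper names are illustrative only, not Qualcomm's API:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate vector v by unit quaternion q (computes q * (0, v) * q_conjugate)."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def compose(pose_a, pose_b):
    """Chain two 6-DOF poses: apply pose_b expressed in pose_a's frame."""
    t_a, q_a = pose_a
    t_b, q_b = pose_b
    return (t_a + rotate(q_a, t_b), quat_mul(q_a, q_b))

# Start at the origin with identity orientation, then step 1 m along x:
identity = (np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]))
step = (np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0]))
t, q = compose(identity, step)
```

The three translation components plus the rotation encoded by the quaternion are exactly the 6 degrees of freedom the tracker must estimate at every instant.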
To solve this challenge, Qualcomm Research uses a technique called Visual-Inertial Odometry (VIO). This fuses information from a camera and inertial sensors, specifically gyroscopes and accelerometers, to estimate device pose without relying on GPS or other GNSS (Global Navigation Satellite System) signals.
VIO takes advantage of the complementary strengths of the camera and inertial sensors. For example, a single camera can estimate relative position, but it cannot provide absolute scale—the actual distances between objects, or the size of objects in meters or
feet. Inertial sensors provide absolute scale and take measurement samples at a higher rate, thereby improving robustness for fast device motion. However, inertial sensors, particularly low-cost MEMS varieties, are prone to substantial drift in position estimates
when compared with cameras. So VIO blends together the best of both worlds to accurately estimate device pose.
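The scale argument can be made concrete with a toy sketch (not Qualcomm's algorithm): a monocular camera recovers only the direction of motion between frames, while double-integrating accelerometer samples gives a metric, if drifty, displacement. Projecting the inertial displacement onto the camera's motion direction recovers the missing scale:

```python
import numpy as np

def recover_scale(cam_direction, accel_samples, dt):
    """Metric scale along a unit camera motion direction, from IMU samples.

    Double-integrates accelerometer readings with simple Euler steps to get
    a metric displacement, then projects it onto the camera direction.
    """
    v = np.zeros(3)
    p = np.zeros(3)
    for a in accel_samples:
        v = v + a * dt   # integrate acceleration -> velocity
        p = p + v * dt   # integrate velocity -> position
    return float(np.dot(p, cam_direction))

# Constant 1 m/s^2 acceleration along x for 1 s gives ~0.5 m of travel,
# which the camera alone could report only as "some motion along x".
samples = [np.array([1.0, 0.0, 0.0])] * 1000
scale = recover_scale(np.array([1.0, 0.0, 0.0]), samples, dt=0.001)
```

In a real filter this integration drifts quickly with MEMS noise and bias, which is precisely why the camera's drift-free landmark observations are fused back in rather than trusting the IMU alone.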
Technology designed for the mobile environment
At Qualcomm Research, we’ve designed VIO from the ground up for power-efficient operation on mobile and embedded devices, and we’ve achieved a very high level of accuracy across a wide variety of device motions and scene environments. All this was made possible
through our breakthrough algorithmic innovations and optimizations using the Qualcomm Snapdragon processor’s vector processing and parallel computation abilities. The result? Faster execution time and lower memory consumption.
Our optimizations also made VIO work across a wide range of smartphones, despite several impairments, including rolling shutter, inaccurate sensor time stamps, and limited field of view (FOV) lenses.
Qualcomm Research’s joint work with the University of Pennsylvania’s GRASP Lab is a testament to what’s possible using only a common smartphone. We recently demonstrated in a video the world’s first smartphone autonomously flying a quadcopter, with all processing, including our VIO, done onboard the phone.
Additionally, the video below illustrates the level of robustness our VIO solution was able to maintain across a wide variety of device motions and scene environments, including walking, running, biking, and VR-like head motions both indoors and outdoors.
The demo video uses a global shutter camera with a wide FOV lens and accurate time stamps. By using VIO to combine landmark measurements from the camera with inertial sensor measurements in an extended Kalman filter (EKF) framework, we were able to accurately estimate not only the device pose, but also inertial calibration parameters (biases, scale factors, and so on).
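As a hedged illustration of the EKF idea (the production filter's state is far richer, holding orientation, velocity, and the calibration parameters mentioned above), a one-dimensional Kalman filter alternating high-rate inertial-style predictions with occasional camera-style corrections looks like this:

```python
class KalmanFilter1D:
    """Minimal 1-D Kalman filter: predict with odometry, correct with fixes."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x = x0   # position estimate
        self.p = p0   # estimate variance
        self.q = q    # process (inertial) noise variance per step
        self.r = r    # measurement (camera) noise variance

    def predict(self, u):
        """Propagate the state with a motion increment u; uncertainty grows."""
        self.x += u
        self.p += self.q

    def update(self, z):
        """Correct with a position measurement z; uncertainty shrinks."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

# Noise-free toy input: 100 inertial steps of 0.1 m, with a camera-style
# fix at the true position every 10th step to keep uncertainty bounded.
kf = KalmanFilter1D()
for step in range(100):
    kf.predict(0.1)
    if step % 10 == 9:
        kf.update((step + 1) * 0.1)
```

The key behavior is visible even in one dimension: between camera fixes the variance grows with every inertial step, and each landmark-style update pulls it back down, which is how the fused estimate stays accurate over long trajectories.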
What we see is the trajectory of the device projected on a horizontal plane. The device tracks visual landmarks, such as corners, and displays the estimated depth (distance) and associated uncertainty of this estimation (shown as orange-colored numbers). At
the end of the video, the device returns to its starting location with a less than 1 percent residual error in computed trajectory—end-to-end drift—for the total trajectory length. This highlights the accuracy and robustness of our VIO solution across different
user motions.
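The drift figure quoted above is a standard VIO accuracy metric; assuming the usual definition, it is the start-to-end closure error of a returning trajectory expressed as a percentage of the total path length:

```python
import numpy as np

def end_to_end_drift_pct(trajectory):
    """Closure error as a percentage of path length.

    trajectory: sequence of (x, y, z) estimated positions for a path that
    physically returns to its starting point.
    """
    traj = np.asarray(trajectory, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    closure_error = np.linalg.norm(traj[-1] - traj[0])
    return 100.0 * closure_error / path_length

# Example: a ~40 m square loop whose estimate ends 0.2 m from the start.
loop = [(0, 0, 0), (10, 0, 0), (10, 10, 0), (0, 10, 0), (0.2, 0, 0)]
drift = end_to_end_drift_pct(loop)   # about 0.5 percent
```

A sub-1-percent figure on trajectories spanning walking, running, and biking means the accumulated error stays within centimeters per meter traveled, without any GPS correction.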
By taking advantage of the heterogeneous compute capabilities of Snapdragon, we are further optimizing VIO to enable breakthrough experiences at lower power consumption. VIO is just one way Qualcomm Research is bringing
the future forward faster. In upcoming blog posts, we’ll walk you through other technologies we’re developing to build smarter and safer robots, drones, cars, and many other machines and devices.
Source: https://www.qualcomm.com/news/onq/2015/12/16/robot-view-how-drones-and-machines-can-navigate-their-own