Source: https://news.mit.edu/2021/deep-learning-helps-predict-traffic-crashes-1012
The world is a busy, confusing place. We have the technology to make navigation easier, yet we still rely on traffic lights and steel guardrails for safety, because there is no telling when something will go wrong in the maze of concrete and asphalt roads that carries us from point A to point B.
In the United States, traffic accidents cost about 3% of GDP and are a leading cause of death among children. Accident risk maps can reveal where crashes are most likely to occur, helping authorities decide whether action is needed in a given area.
Researchers at MIT CSAIL and the Qatar Center for Artificial Intelligence developed a deep learning model that can predict crash risk over time in high-risk areas. Trained on historical crash data, road maps, satellite imagery, and GPS traces, the deep neural network produces highly accurate crash risk maps for roads where accidents might happen.
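The paper's actual architecture is more involved, but the shape of the pipeline, several rasterized map channels convolved and combined into a per-cell risk score, can be sketched in plain NumPy. Everything below (the channel contents, the smoothing kernels, the mixing weights) is illustrative, not the authors' model.

```python
import numpy as np

# Hypothetical input: four raster channels over a 64x64 cell grid
# (satellite features, road type, GPS-trace density, past crash counts).
rng = np.random.default_rng(0)
channels = rng.random((4, 64, 64))

def conv2d_same(x, kernel):
    """Naive 'same' 2D convolution for one channel (zero padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def risk_map(channels, kernels, weights):
    """Mix per-channel convolution responses and squash to a [0, 1] risk score."""
    mixed = sum(w * conv2d_same(c, k)
                for c, k, w in zip(channels, kernels, weights))
    return 1.0 / (1.0 + np.exp(-mixed))   # sigmoid -> per-cell risk

kernels = [np.ones((3, 3)) / 9.0] * 4      # toy smoothing filters
risk = risk_map(channels, kernels, weights=[0.5, 0.2, 0.2, 0.1])
```

A real model would learn the kernels and weights from data; the point here is only that the output has the same spatial layout as the input map, one risk value per cell.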
Risk maps are typically produced at much lower resolutions, which blurs neighboring roads together. Maps built from 5×5 meter grid cells bring new clarity: the researchers found that highways carry a much higher crash risk than residential streets, and that ramps merging onto or exiting a highway are riskier still.
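Building such a map starts with binning crash locations into 5×5 meter cells. A minimal sketch of that rasterization step, assuming crash coordinates have already been projected into local meters (the coordinates below are made up):

```python
import numpy as np

CELL_M = 5.0  # 5x5 meter grid cells, as in the paper

def rasterize_crashes(xy_meters, extent_m):
    """Count crashes per 5x5 m cell; xy_meters is an (N, 2) sequence of
    local coordinates (meters east/north of the map origin)."""
    n_cells = int(extent_m // CELL_M)
    grid = np.zeros((n_cells, n_cells), dtype=int)
    cells = (np.asarray(xy_meters) // CELL_M).astype(int)
    for cx, cy in cells:
        if 0 <= cx < n_cells and 0 <= cy < n_cells:
            grid[cy, cx] += 1   # row = northing, column = easting
    return grid

crashes = [(2.0, 3.0), (4.9, 4.9), (12.0, 7.5)]  # toy coordinates
grid = rasterize_crashes(crashes, extent_m=25.0)
```

At this cell size, two crashes a few meters apart land in the same cell while a crash on the opposite carriageway lands in a different one, which is exactly the road-level detail coarser maps blur away.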
Source: https://openaccess.thecvf.com/content/ICCV2021/papers/He_Inferring_High-Resolution_Traffic_Accident_Risk_Maps_Based_on_Satellite_Imagery_ICCV_2021_paper.pdf
Even at this resolution, crashes are sparse: the annual odds of a crash in any given 5×5 meter cell are only about one in 1,000, even in the highest-risk cells on the map. The old approach to predicting these risks was purely historical, looking only at crashes that had already happened nearby; an area was considered risky only if another incident had occurred close to it.
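The historical baseline can be sketched as simple smoothing of past crash counts. The radius and normalization below are illustrative choices, not the paper's, but the sketch shows the baseline's core weakness: a cell with no recorded crash nearby gets exactly zero risk.

```python
import numpy as np

def historical_risk(crash_grid, radius=2):
    """'Historical' baseline: a cell is risky only in proportion to past
    crashes within `radius` cells of it (a box-filter smoothing)."""
    h, w = crash_grid.shape
    padded = np.pad(crash_grid.astype(float), radius)
    risk = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            risk[i, j] = padded[i:i + 2 * radius + 1,
                                j:j + 2 * radius + 1].sum()
    total = risk.sum()
    return risk / total if total else risk

grid = np.zeros((10, 10))
grid[5, 5] = 1.0                      # one past crash
risk = historical_risk(grid)
# cells with no crash within 2 cells get exactly zero risk:
# the limitation the learned model is meant to overcome
```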
The proposed deep learning model, by contrast, identifies high-risk locations from GPS trajectory patterns, which carry information about the density, speed, and direction of traffic. It can flag places with few or no recorded crashes as high risk on the basis of their road topology alone.
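Turning raw GPS traces into per-cell density, speed, and direction features might look like the sketch below. The feature set and the circular-mean "alignment" score are my illustrative choices, not the paper's exact features; the sample values are made up.

```python
import math

def cell_features(points):
    """Summarize GPS samples falling in one grid cell.
    Each point is (speed_mps, heading_rad); returns sample density, mean
    speed, and a 0-1 direction-agreement score (1 = all headings agree)."""
    n = len(points)
    if n == 0:
        return {"density": 0, "mean_speed": 0.0, "alignment": 0.0}
    mean_speed = sum(s for s, _ in points) / n
    # Circular mean resultant length: measures heading agreement.
    cx = sum(math.cos(h) for _, h in points) / n
    cy = sum(math.sin(h) for _, h in points) / n
    return {"density": n,
            "mean_speed": mean_speed,
            "alignment": math.hypot(cx, cy)}

# Toy samples: three cars in one lane, all heading roughly due east.
samples = [(13.0, 0.02), (14.5, -0.03), (12.8, 0.00)]
feats = cell_features(samples)
```

A cell on a one-way highway scores near 1.0 on alignment; a cell covering an intersection, where headings scatter, scores much lower, which is one way topology shows up in the traces.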
The scientists were thereby able to predict crashes in locations with no prior crash history. They trained the model on data from 2017 and 2018, and its predictions for 2019 and 2020 proved accurate as well.
The dataset used in this research covered 7,500 square kilometers across Los Angeles, New York City, Boston, and Chicago. LA had the highest crash density of the four cities, followed by New York City, then Chicago and Boston.
Paper: https://openaccess.thecvf.com/content/ICCV2021/papers/He_Inferring_High-Resolution_Traffic_Accident_Risk_Maps_Based_on_Satellite_Imagery_ICCV_2021_paper.pdf
Source: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
For many people, stacking one object on top of another seems a simple job. Even the most advanced robots, however, struggle with it. Stacking demands a mix of motor, perceptual, and analytical abilities, plus the ability to interact with a variety of objects. This complexity has elevated a simple human task into a "grand challenge" of robotics, spawning a small industry dedicated to creating new techniques and approaches.
DeepMind researchers think that advancing the state of the art in robotic stacking requires a new benchmark. As part of DeepMind's mission, and as a step toward more generalizable and useful robots, they are investigating ways to let robots better understand the interactions of objects with varied geometries. In a research paper presented at the Conference on Robot Learning (CoRL 2021), the team introduces RGB-Stacking, a new benchmark for vision-based robotic manipulation that challenges a robot to learn how to grasp various objects and balance them on top of one another. While benchmarks for stacking tasks already exist in the literature, the researchers argue that the range of objects used and the evaluations performed to validate their findings set this work apart. According to the researchers, the results show that a mix of simulation and real-world data can be used to learn "multi-object manipulation," providing a solid foundation for the open problem of generalizing to novel objects.
The objective of RGB-Stacking is to teach a robotic arm to stack objects of various shapes using reinforcement learning, a machine learning approach in which a system, in this case a robot, learns by trial and error, receiving feedback from its own actions and experiences. RGB-Stacking positions a gripper attached to a robot arm above a basket containing three objects, one red, one green, and one blue (hence the name RGB). The robot must stack the red object on top of the blue object within 20 seconds, while the green object serves as an obstacle and a distraction.
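The trial-and-error loop described above can be sketched in a few lines. The environment below is a deliberately trivial stand-in for the real benchmark (its one-dimensional "state", the control rate, and the always-lower policy are all my assumptions); only the episode structure, act, observe a sparse reward, stop at success or at the 20-second timeout, mirrors the task.

```python
import random

EPISODE_SECONDS = 20          # RGB-Stacking's time limit
STEPS_PER_SECOND = 10         # assumed control rate, for illustration only

class ToyStackingEnv:
    """Stand-in for the real benchmark: the 'state' is just the height of
    the red object above the blue one; success when they touch."""
    def reset(self):
        self.red_height = random.uniform(0.2, 0.4)
        return self.red_height

    def step(self, action):
        self.red_height = max(0.0, self.red_height + action)
        success = self.red_height == 0.0
        reward = 1.0 if success else 0.0     # sparse stacking reward
        return self.red_height, reward, success

def run_episode(env, policy):
    """Trial-and-error loop: act, observe reward, stop at success or timeout."""
    obs = env.reset()
    total = 0.0
    for _ in range(EPISODE_SECONDS * STEPS_PER_SECOND):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

def lower(obs):
    return -0.01              # trivial policy: always lower the red object

ret = run_episode(ToyStackingEnv(), lower)
```

An RL agent would replace the fixed `lower` policy with one updated from the rewards collected across many such episodes.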
Source: https://arxiv.org/pdf/2110.06192.pdf
According to DeepMind researchers, the learning method ensures that a robot develops general skills by training on numerous object sets. RGB-Stacking deliberately varies the grasp and stack properties that determine how a robot can grasp and stack each object, forcing the robot to exhibit behaviors more complex than a basic pick-and-place strategy.
Each triplet presents the agent with its own set of challenges: Triplet 1 requires a precise grasp of the top object; Triplet 2 frequently requires using the top object as a tool to flip the bottom object before stacking; Triplet 3 requires balancing; Triplet 4 requires precision stacking (the object centroids must align); and Triplet 5's top object can easily roll off if not placed gently. Measuring the difficulty of the task, the researchers found that their hand-coded scripted baseline succeeded at stacking only 51 percent of the time.
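A success rate like that 51 percent figure is just an empirical average over many evaluation episodes. A minimal sketch of the measurement, with a coin-flip trial standing in for a real scripted pick-and-place attempt (the probability and episode count are illustrative):

```python
import random

def success_rate(run_trial, n_episodes=1000, seed=0):
    """Estimate a policy's stacking success rate: run many episodes
    and average the binary outcomes."""
    random.seed(seed)
    wins = sum(1 for _ in range(n_episodes) if run_trial())
    return wins / n_episodes

# Toy stand-in for a scripted baseline that fails on hard triplets
# (the 0.51 probability is chosen only to echo the reported figure).
rate = success_rate(lambda: random.random() < 0.51)
```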
The researchers said that their RGB-Stacking benchmark comprises two task variants of differing difficulty. In "Skill Mastery," the aim is to train a single agent that can stack a specified set of five triplets. In "Skill Generalization," the same triplets are used for evaluation, but the agent is trained on an extensive collection of training objects, over a million potential triplets, that excludes the family of objects from which the test triplets were drawn, in order to test for generalization. In both variants, the learning pipeline is decoupled into three stages.
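The Skill Generalization protocol, hold out an entire object family, train on triplets built from everything else, can be sketched as a split function. The family names and object counts below are toy placeholders, not the benchmark's actual object set.

```python
from itertools import product

def generalization_split(families, test_family):
    """Skill Generalization sketch: training triplets are drawn from every
    object family EXCEPT the one the held-out test triplets use."""
    train_objs = [o for fam, objs in families.items()
                  if fam != test_family for o in objs]
    train_triplets = list(product(train_objs, repeat=3))
    test_triplets = list(product(families[test_family], repeat=3))
    return train_triplets, test_triplets

# Toy object families (names are illustrative, not the benchmark's).
families = {"cubes": ["c1", "c2"], "wedges": ["w1", "w2"], "stars": ["s1"]}
train, test = generalization_split(families, test_family="stars")
# no training triplet contains a "stars" object
```

With realistic object counts the cross product grows fast, which is how a modest catalog of shapes yields the "over a million potential triplets" the researchers describe.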
According to the researchers, their RGB-Stacking techniques produce "surprising" stacking strategies and "mastery" of stacking a subset of objects. Even so, they admit that they have only scratched the surface of what is possible and that the generalization problem has yet to be solved.
“As researchers continue to work on solving the open challenge of true generalization in robotics, we hope that this new benchmark, along with the environment, designs, and tools we’ve released, contribute to new ideas and methods that can make manipulation even easier and robots more capable,” the researchers concluded.
To help other researchers, DeepMind is also open-sourcing a version of its simulated environment, the blueprints for building the real-robot RGB-Stacking environment, and the RGB-object models with the information needed to 3D-print them. It is additionally releasing a variety of the libraries and tools used in its robotics research.
Paper: https://openreview.net/pdf?id=U0Q8CrtBJxJ
Github: https://github.com/deepmind/rgb_stacking
Reference 1: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
Reference 2: https://venturebeat.com/2021/10/11/deepmind-proposes-new-benchmark-to-improve-robots-object-stacking-abilities/