Until autonomous vehicles are better drivers than humans, we likely won’t see widespread uptake of the technology. Luckily, MIT engineers are on the case. Funded by the Toyota Research Institute, the researchers have developed a new method for effectively seeing around corners.
Using off-the-shelf cameras, the system, dubbed “ShadowCam,” runs computer-vision algorithms to detect and classify changes in shadows on the ground. The system tracks light intensity on a frame-by-frame basis as video rolls continuously; a change in light can indicate that an object is approaching from a blind spot. This allows self-driving vehicles, or future robots (say, autonomous hospital attendants), to make preemptive decisions based on potential future events, much like humans can (when they’re paying close attention, at least).
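The core idea, flagging frame-to-frame intensity changes in a patch of ground, can be sketched in a few lines of Python. This is a simplified illustration, not MIT's actual pipeline; the `shadow_change_signal` function, the region-of-interest format, and the threshold value are all hypothetical:

```python
import numpy as np

def shadow_change_signal(prev_frame, frame, roi, threshold=2.0):
    """Return True if the mean absolute intensity change inside a
    ground region of interest exceeds a threshold.

    A crude stand-in for ShadowCam's shadow-change detection: a rising
    signal in an otherwise static scene suggests something is moving
    in the blind spot. roi is (y0, y1, x0, x1) in pixel coordinates.
    """
    y0, y1, x0, x1 = roi
    before = prev_frame[y0:y1, x0:x1].astype(float)
    after = frame[y0:y1, x0:x1].astype(float)
    return float(np.abs(after - before).mean()) > threshold
```

In practice the threshold would need tuning per scene, and consecutive frames would first have to be stabilized so that the camera's own motion is not mistaken for a moving shadow.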
The benefit of teaching a machine this type of detection and decision-making is that a camera can be set to track changes so minute they might be invisible to a human driver. The whole point of autonomous driving is to take human error out of the equation, but that only makes sense if the machines can operate with greater attention to detail.
MIT is working with some seriously advanced technology to accomplish this: the team is using a “visual-odometry technique” employed by the Mars Rover. This technique allows the computer to “[estimate] the motion of a camera in real-time by analyzing pose and geometry in sequences of images.” That means the vehicle can understand a 3D environment without having scanned or mapped it beforehand. This is mission-critical when attempting to track a shadow while the vehicle is in motion and simultaneously watching for subtle changes in light intensity.
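To make the odometry idea concrete, here is a toy version of image-based motion estimation using phase correlation, which recovers the pixel shift between two frames from their Fourier spectra. This is only a sketch of the general concept, assuming pure planar translation; real visual odometry recovers full camera pose from feature geometry, and this `estimate_shift` function is not part of MIT's system:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the (dy, dx) translation of frame_b relative to frame_a
    by phase correlation: the normalized cross-power spectrum of two
    shifted images inverse-transforms to a peak located at the shift."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                          # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Chaining per-frame motion estimates like this over a video sequence yields a trajectory, which is the essence of odometry: the vehicle tracks its own movement from images alone, with no prior map.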
What’s really impressive about MIT’s ShadowCam is that it’s already outperforming current systems. The researchers tested their tech in an autonomous car inside a parking garage, pitting it against a current lidar-based system. When it came to detecting a car turning around a pillar, the ShadowCam was able to react about 0.72 seconds faster than the lidar.
This is all very new tech, still very much in the early testing phase. However, MIT is working towards enabling the system to work in different indoor and outdoor lighting scenarios. As the technology advances, the system should only get faster at detecting changes in shadows.
When it comes to turning over our freedom and putting our trust in the machines, it’s not a bad idea to hedge our bets with as much technological redundancy as possible. In the future of fully autonomous, self-driving vehicles, the machines will likely use a combination of technologies and sensors to monitor their environment. Hopefully, this is a step towards programming AVs with the equivalent of your above-average driver’s common sense.