By IANS,
New York : Two Indian students in the US are among a group of researchers who have developed ZunaVision, a technology that lets users embed or replace a still photo inside an existing video.
The Stanford University researchers, computer science graduate students Ashutosh Saxena and Siddharth Batra, and assistant professor Andrew Ng, say the technology will have many uses, Science Daily reported.
For instance, suppose a favourite video of a birthday party is marred by a small, troubling detail – in it is a photograph of your ex-wife. ZunaVision will allow you to “take down” the photo – and replace it with your current wife’s photo.
The researchers said the software can place an image on almost any planar surface in a video – it can even insert a video inside a video.
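ZunaVision's own pipeline has not been published, but the core step of pinning an image to a planar surface can be illustrated with a standard perspective warp. The sketch below is a minimal, hypothetical example: it assumes the user has marked the four corners of the target surface in a frame (the `corners` argument is an assumption, not part of the researchers' tool).

```python
import cv2
import numpy as np

def embed_on_plane(frame, photo, corners):
    """Warp `photo` onto the quadrilateral `corners` (four [x, y] points,
    clockwise from top-left) marked on a planar surface in `frame`.
    Illustrative sketch only, not ZunaVision's actual implementation."""
    h, w = photo.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(corners)
    H = cv2.getPerspectiveTransform(src, dst)   # homography: photo -> frame plane
    warped = cv2.warpPerspective(photo, H, (frame.shape[1], frame.shape[0]))
    # Mask of where the warped photo lands, used to composite it over the frame.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]
    return out
```

In a full system this warp would be recomputed for every frame as the camera moves, which is where the tracking described below comes in.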
This opens up the possibility, for instance, of singing karaoke side-by-side with your favourite singer, or of livening up those dull vacation videos.
The researchers said that anyone with a video camera might now earn some spending money by agreeing to have unobtrusive corporate logos placed inside their videos before they are posted online.
The person who shot the video, and the company handling the business arrangements, would be paid per view, in a fashion analogous to Google AdSense, which pays websites to run small ads.
The embedding technology is driven by an algorithm that first analyses the video, with special attention paid to the section of the scene where the new image will be placed.
The colour, texture and lighting of the new image are subtly altered to blend in with the surroundings. Shadows present in the original video fall across the added image as well. The result is a photo or video that appears to be an integral part of the original scene, rather than a sticker pasted artificially onto the video.
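The article does not spell out how the colour and lighting adjustment works. One common way to achieve this kind of blending is a Reinhard-style statistics transfer: shift the inserted patch's mean and spread in Lab colour space toward those of the surrounding region. The sketch below assumes that approach purely for illustration.

```python
import cv2
import numpy as np

def match_colour_and_lighting(patch, surround):
    """Nudge `patch` toward the colour cast and brightness of `surround`
    by matching per-channel mean and standard deviation in Lab space.
    A generic Reinhard-style transfer, not ZunaVision's actual method."""
    patch_lab = cv2.cvtColor(patch, cv2.COLOR_BGR2LAB).astype(np.float32)
    surr_lab = cv2.cvtColor(surround, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        p_mean, p_std = patch_lab[..., c].mean(), patch_lab[..., c].std() + 1e-6
        s_mean, s_std = surr_lab[..., c].mean(), surr_lab[..., c].std()
        patch_lab[..., c] = (patch_lab[..., c] - p_mean) / p_std * s_std + s_mean
    patch_lab = np.clip(patch_lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(patch_lab, cv2.COLOR_LAB2BGR)
```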
The algorithm (“3D Surface Tracker Technology”) used to produce these realistic results is also capable of dealing with what researchers call “occluding objects” in the video – a guest walking in front of the newly hung photo.
It achieves this by first building a model, pixel by pixel, of the area of interest in the video.
“If the lighting begins to change with the motion of the video or the sun or the shadows, we keep a belief of what it will look like in the next frame. This is how we track with very high sub-pixel accuracy,” Batra said.
“It’s as if the embedded image makes an educated guess of where the wall is going next, and hurries to keep up.”
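The quotes describe keeping a per-pixel "belief" of how the target region should look in the next frame, and treating pixels that break from that belief as occluders. A toy stand-in for that idea is a running-average appearance model, sketched below under those assumptions; the class name and thresholds are hypothetical.

```python
import numpy as np

class RegionAppearanceModel:
    """Running per-pixel 'belief' of how the tracked region should look.
    Pixels that deviate strongly from the belief are flagged as occluded
    (e.g. a guest walking in front of the virtual photo) and are left
    unpainted. A toy illustration, not the researchers' tracker."""

    def __init__(self, first_region, learn_rate=0.05, occlusion_thresh=30.0):
        self.belief = first_region.astype(np.float32)
        self.learn_rate = learn_rate
        self.thresh = occlusion_thresh

    def update(self, region):
        region = region.astype(np.float32)
        diff = np.abs(region - self.belief).mean(axis=-1)   # per-pixel error
        visible = diff < self.thresh                        # True where unoccluded
        # Adapt the belief only where the surface is visible, so gradual
        # lighting changes are absorbed but passing occluders are not.
        self.belief[visible] += self.learn_rate * (region[visible] - self.belief[visible])
        return visible                                      # mask for compositing
```

The returned mask would then restrict where the warped, colour-matched image is composited, so an occluding guest stays in front of the embedded photo.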
Other technologies can perform these tricks – witness the spectacular special effects in movies and the virtual first-down lines on televised football games – but the Stanford researchers say the existing systems are expensive and time-consuming, and require considerable expertise.
Some of the recent Stanford work grew out of an earlier project, Make3D, a website that converts a single still photograph into a brief 3D video. It works by finding planes in the photo and computing their distance from the camera, relative to each other.
“That means, given a single image, our algorithm can figure out which parts are in the front and which parts are in the background,” said Saxena. “Now we have extended this technology to videos.”
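Make3D's learning machinery is not reproduced here, but the "which parts are in front" step can be illustrated with the plane parameterisation commonly used in such single-image reconstruction work, where depth along a viewing ray is recovered as d = 1 / (alpha . ray). The sketch below assumes plane parameters have already been fitted for each image segment; it is a toy ordering routine, not the Make3D code.

```python
import numpy as np

def order_segments_by_depth(plane_params, ray_dirs):
    """Given fitted plane parameters `alpha` for each image segment and the
    unit viewing ray through each segment's centre, recover the depth along
    the ray (assumed parameterisation: d = 1 / dot(alpha, ray)) and return
    segment indices sorted nearest-first. Illustrative only."""
    depths = [1.0 / float(np.dot(alpha, ray))
              for alpha, ray in zip(plane_params, ray_dirs)]
    order = np.argsort(depths)     # smallest depth = closest to the camera
    return order, depths
```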