Tel Aviv University researchers have made a breakthrough in photography technology that could revolutionize sports and event photography. The new technique from TAU is a computational photography process based on an optical element that encodes motion information, paired with a corresponding digital image processing algorithm. This, they say, enables clear, sharp photographs of moving objects without motion blur.
So the world-changing innovations coming out of Startup Nation are not just coming from high-tech startups. They are also coming from dedicated scientists at universities and their students.
Don’t you hate it when your picture comes out all blurry? Even new digital cameras have this problem. A camera’s shutter speed can only be so fast, so a picture comes out either blurred or too dark.
This new integrated processing method was developed by PhD student Shay Elmalem from the School of Electrical Engineering in the Iby and Aladar Fleischman Faculty of Engineering, under the joint guidance of Prof. Emanuel Marom and Dr. Raja Giryes.
Shay Elmalem explains that, “If you photograph a racing car, even an exposure of a tenth of a second could be too long, and if you’re photographing a person walking, long exposure could be a second or longer.”
Camera lenses are designed to produce the best possible image, attempting to reproduce what the human eye sees. Unfortunately, this does not always work, especially with moving objects.
So how does this new technology work?
Basically, it uses an integrated design of optical components and image post-processing algorithms to encode motion cues in the raw optical image (the minimally processed sensor data). These cues are then decoded by the image processing algorithm, which uses them to remove the blur from the image.
The cues are encoded using two optical components integrated into a conventional lens: a clear phase plate developed by the researchers, and a commercial electronic focusing lens. The phase plate contains a micro-optical structure designed to introduce a color-focus dependency, while the focusing lens is synchronized to shift focus gradually during the exposure. As a result, moving objects take on different colors as they move. This color encoding enables the algorithm to decode the direction and velocity of the object’s movement, which in turn lets it correct the motion blur and restore the image’s sharpness.
“In every split second of exposure, our lens generates a slightly different image,” Elmalem explains. “Thus, the blur of a moving object will not be uniform, but rather change gradually with its movement. In order to understand where and how fast the object in the image is going, we use color.”
“So, for example, a white ball suddenly thrown into the frame will be colored with different colors over the course of its movement, like passing light through a prism. According to these colors, our algorithm knows where the ball has been thrown from and at what velocity. It will thus know how to correct the blur. With a regular camera we’d see a white wake that would compromise the sharpness of the whole picture, whereas with our camera the final image will be a clear focused white ball.”
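The decoding idea can be illustrated with a toy simulation. The sketch below is not the TAU team’s algorithm — all names and numbers are illustrative assumptions — but it captures the principle described above: if each color channel is effectively sharp at a different moment of the focus sweep, then a moving object lands at a slightly different position in each channel, and comparing those positions reveals its velocity.

```python
import numpy as np

# Toy 1-D sketch of color-coded motion estimation (illustrative only).
# A point object moves during the exposure. The focus sweep means each
# color channel is "in focus" at a different moment, so the object's
# recorded position differs slightly per channel.

exposure_frames = 30          # sub-intervals of one exposure
velocity = 2.0                # true motion in pixels per sub-interval
width = 200                   # 1-D "sensor" length in pixels

# Toy model of the color-focus coding: each channel's sensitivity peaks
# at a different time in the sweep (red early, green middle, blue late).
peak_t = {"r": 5, "g": 15, "b": 25}
sigma_t = 3.0                 # temporal width of each channel's focus window

channels = {}
for name, tp in peak_t.items():
    img = np.zeros(width)
    for t in range(exposure_frames):
        pos = int(round(10 + velocity * t))        # object position at time t
        weight = np.exp(-0.5 * ((t - tp) / sigma_t) ** 2)
        img[pos] += weight                         # accumulate weighted exposure
    channels[name] = img

# Decode: each channel's intensity centroid tracks the object's position
# near that channel's peak time, so centroid differences give the velocity.
def centroid(img):
    x = np.arange(len(img))
    return float((x * img).sum() / img.sum())

c_r, c_b = centroid(channels["r"]), centroid(channels["b"])
est_velocity = (c_b - c_r) / (peak_t["b"] - peak_t["r"])
print(f"estimated velocity: {est_velocity:.2f} px/frame")  # roughly 2 px/frame
```

With the velocity and direction recovered, a deblurring step can shift the accumulated energy back along the motion path; the real system performs this decoding on full 2-D color images rather than this simplified 1-D model.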
According to Elmalem, the computational imaging technique they developed can enhance any camera, and at minimal cost. “The potential is very broad: from basic uses like smartphone cameras to research, medical and industrial uses such as production line controllers, microscopes and telescopes. They all suffer from the same smearing problem, and we offer a systemic solution to it.”
So soon we may see this new photographic technology built into every smartphone camera. Professional photographers, beware.