Image Processing at the Global TCE Conference

Professor Peyman Milanfar from the University of California, Santa Cruz, at the Technion conference:
To improve the quality of digital photography in small cameras, it will be necessary to take a number of pictures and merge them into one good image – Google Glass is the first device to do so.

Professor Peyman Milanfar wearing Google Glass. Photo: Yossi Shram, the Technion Spokesperson’s Office

 

“Those of you with a keen eye, and even those of you without, can distinguish between a photo taken with a high-quality camera and one taken with a cell phone, but this will not be the case for long,” asserted Professor Peyman Milanfar of the University of California, Santa Cruz, an expert in image processing and computer vision who has been working at Google over the past year. He spoke at the Fourth Annual International Conference of the Technion Computer Engineering Center (TCE), named after Henry Taub, at the Technion. Professor Milanfar works with the team developing the Google Glass software.

Professor Oded Shmueli, the Technion’s Executive Vice President for Research, said at the opening of the conference: “We are on the brink of a process that will usher in a new era. The areas of research discussed at the conference, such as artificial intelligence, computer vision and image processing, affect all aspects of our lives. Within a decade, cars will travel on roads equipped with computers, sensors, and navigation and radar systems that allow them to drive on their own, without the intervention of a driver.”

“The TCE Technion Computer Engineering Center was inaugurated three years ago and has since become a leading center of excellence in groundbreaking research,” said Professor Assaf Schuster, the head of TCE. “We have succeeded in creating here a new model of collaboration between academia and industry.”

According to Professor Milanfar, it will be nearly impossible to reach next-generation camera quality with the simple cameras installed today in cell phones and tablets, which in the near future may also serve in wearable computing devices. They lack the moving parts and the complex, heavy lenses that professional cameras have. The need to avoid overburdening the user, which has prompted designers to make devices ever lighter and smaller, also keeps them from competing with large cameras without running into physical limits: miniaturization makes it very difficult to bring enough light into the device. What remains is to use sophisticated algorithms to compensate for the reduction in size.

“My job at Google is to develop the field of computational photography, which merges a number of older disciplines such as image processing, photography, computer graphics and computer vision. It includes the development of algorithms, hardware, optics and rendering techniques,” explained Professor Milanfar. “The principle is quite simple – instead of taking only one image, you shoot a series of images and then merge them into one. The result can be a higher-resolution picture, obtained by combining the information in multiple photos, or it can use other ‘tricks’, such as shooting several pictures from different angles and calculating the distance to objects, so that you can decide which area of the picture to keep in focus and which to leave blurred to achieve a sense of depth. Another ‘trick’ is to capture what cannot be detected by the naked eye, such as night vision (using infrared sensors), changes that occur very quickly or very slowly, and fine details (for example, the motion of a baby’s breathing, through cameras installed in a child’s bedroom).”
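The burst-merging principle Milanfar describes can be illustrated with a toy sketch. This is not Google’s actual pipeline (real systems also align, weight and sharpen the frames); it simply averages a simulated burst of aligned noisy exposures, assuming NumPy arrays, to show why merging many shots beats a single one:

```python
import numpy as np

def merge_burst(frames):
    """Merge a burst of aligned frames by averaging.

    Averaging N frames reduces zero-mean sensor noise by roughly
    a factor of sqrt(N) -- one way a small camera can compensate
    for the little light its tiny optics gather.
    """
    return np.stack(frames, axis=0).mean(axis=0)

# Simulate a burst: one true scene, eight noisy exposures of it.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
burst = [np.clip(scene + rng.normal(0.0, 0.1, scene.shape), 0.0, 1.0)
         for _ in range(8)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
```

The merged frame lands measurably closer to the true scene than any single noisy exposure, which is the whole point of shooting a series instead of one picture.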

Scientists (and high school students alike) who use microscopes are surely familiar with the phenomenon that occurs when looking at a sample: only the central portion of the image appears sharp, while the rest remains blurred. By photographing the sample at several focus settings and merging the images, one photo can be produced in which all parts of the specimen are sharp and clear. “Google Glass is the first device whose camera, at every snapshot, photographs a series of pictures and merges them,” added Professor Milanfar.
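The microscope example is a case of focus stacking. A minimal sketch, assuming aligned grayscale NumPy images; the Laplacian-based sharpness measure and all function names here are illustrative, not the method used in any actual product:

```python
import numpy as np

def laplacian_mag(img):
    """Local contrast: absolute 4-neighbour discrete Laplacian."""
    p = np.pad(img, 1, mode="edge")
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1]
                  + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)

def focus_stack(images):
    """Naive focus stacking: for each pixel, keep the value from
    whichever image is locally sharpest there."""
    sharp = np.stack([laplacian_mag(im) for im in images])
    best = sharp.argmax(axis=0)          # index of sharpest image per pixel
    stack = np.stack(images)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Demo: a fine checkerboard "specimen", out of focus on opposite
# halves in two shots; stacking should recover a sharp image overall.
def box_blur(img):
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

pattern = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
blurred = box_blur(pattern)
shot_a = pattern.copy(); shot_a[:, 8:] = blurred[:, 8:]  # right half soft
shot_b = pattern.copy(); shot_b[:, :8] = blurred[:, :8]  # left half soft

restored = focus_stack([shot_a, shot_b])
err_restored = np.abs(restored - pattern).mean()
err_single = np.abs(shot_a - pattern).mean()
```

Per-pixel selection by local sharpness is the simplest possible merge rule; practical stackers blend smoothly across the seam instead of switching abruptly.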

Professor Amnon Shashua from the Hebrew University of Jerusalem, co-founder, chairman of the board and CTO of Mobileye and of the startup OrCam, described another approach to camera-based wearable computing. OrCam has developed a system, comprising a camera and a microphone that fasten onto regular eyeglasses, which allows the visually impaired to point at objects such as street signs, traffic lights, buses or restaurant menus and have them read back (the menu, the color of the traffic light, the street sign, and so on).

“The OrCam concept differs from Google Glass: it does not shoot a photo each time the user requests one, but rather captures continuous video and processes it immediately. This requires a completely different deployment in terms of hardware, and particularly in terms of energy consumption,” said Professor Shashua.
