In recent years, there has been major progress in the world of artificial intelligence, driven largely by deep neural networks (DNNs): sets of algorithms, loosely inspired by the human brain, that are designed to recognize patterns.
These DNNs have had unprecedented success in dealing with complex tasks such as autonomous driving, natural language processing, image recognition, and the development of innovative medical treatments, all achieved through the machine's self-learning from a vast pool of examples, often represented by images. The technology is developing rapidly in academic research groups, and leading companies such as Facebook and Google are applying it to their own needs.
Learning by example requires large-scale computing power and is therefore carried out on computers equipped with graphics processing units (GPUs) suited to the task. Yet these units consume considerable amounts of energy, and their speed lags behind the learning rate the neural networks require, slowing the learning process. “In fact, we are dealing with hardware originally intended mostly for graphics purposes, and it fails to keep up with the fast-paced activity of the neural networks,” explains Prof. Shahar Kvatinsky of the Technion. “To solve this problem, we need to design hardware that is compatible with deep neural networks.”
Prof. Kvatinsky and his research group have developed a hardware system designed specifically to work with these networks, enabling the neural network to perform the learning phase faster and with lower energy consumption. “Compared to GPUs, the new hardware’s calculation speed is 1,000 times faster, and it reduces power consumption by 80%.”
This novel hardware represents a conceptual change: rather than focusing on improving existing processors, Kvatinsky and his team developed the structure of a three-dimensional computing machine that integrates memory. “Rather than splitting between the units that perform calculations and the memory responsible for storing information, we perform both tasks within the memristor, a memory component with enhanced calculation capabilities designed to work with deep neural networks.”
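To give a flavor of the compute-in-memory idea, the sketch below simulates the way a generic memristor crossbar can perform a matrix-vector multiplication in a single analog step: weights are stored as device conductances, input voltages are applied to the rows, and the current summed on each column is a dot product, per Ohm's and Kirchhoff's laws. This is an illustration of the general principle, not the team's actual design; the conductance range and array sizes are invented for the example.

```python
import numpy as np

# Illustrative simulation of a memristor crossbar computing y = x @ W.
# The weights live in the memory cells themselves (as conductances), so
# the multiply-accumulate happens where the data is stored, with no
# shuttling between a separate processor and memory.

G_MIN, G_MAX = 1e-6, 1e-4   # assumed programmable conductance range (siemens)

def weights_to_conductance(W):
    """Map a weight matrix linearly into the device's conductance range.
    Real designs typically use pairs of devices to encode signed weights;
    this sketch keeps the mapping deliberately simple."""
    w_min, w_max = W.min(), W.max()
    return G_MIN + (W - w_min) * (G_MAX - G_MIN) / (w_max - w_min)

def crossbar_mvm(G, v):
    """One analog step: apply row voltages v, read column currents.
    Each column current is sum_i G[i, j] * v[i] (Kirchhoff's current law),
    i.e. a dot product computed inside the memory array."""
    return v @ G

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))     # a small weight matrix
x = rng.standard_normal(4)          # input activations, applied as voltages

G = weights_to_conductance(W)
currents = crossbar_mvm(G, x)       # one column current per output neuron
print(currents)
```

Because every column computes its dot product simultaneously, an entire layer's multiply-accumulates finish in one read cycle, which is where the speed and energy advantages over shuttling data to a GPU come from.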
Although the research is still at the theoretical stage, the team has already demonstrated the implementation in simulation. “Currently, our development is designed to work with momentum learning algorithms, but our intention is to continue developing the hardware so that it will be compatible with other learning algorithms as well. We may be able to develop dynamic, multi-purpose hardware that can adapt to various algorithms, instead of requiring a number of different hardware components,” Kvatinsky added.
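For reference, “momentum” here refers to the standard momentum variant of stochastic gradient descent, in which each weight update blends the current gradient with a running velocity accumulated over previous steps. The snippet below shows the textbook update rule that such hardware would accelerate; the learning rate and momentum coefficient are illustrative placeholders, not values from the paper.

```python
import numpy as np

def momentum_sgd_step(w, v, grad, lr=0.01, beta=0.9):
    """Textbook SGD-with-momentum update:
        v <- beta * v + grad    (accumulate a velocity from past gradients)
        w <- w - lr * v         (step along the smoothed direction)
    lr and beta are arbitrary example values."""
    v = beta * v + grad
    w = w - lr * v
    return w, v

# Tiny usage example: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = momentum_sgd_step(w, v, grad=2 * w)
print(w)   # approaches the minimum at the origin
```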
***
Prof. Shahar Kvatinsky and doctoral student Tzofnat Greenberg-Toledo, together with students Roee Mazor and Ameer Haj-Ali of the Technion’s Andrew and Erna Viterbi Faculty of Electrical Engineering, recently published their research in the journal IEEE Transactions on Circuits and Systems.