Deci is an Israeli deep learning company that uses artificial intelligence (AI) to build AI. The company has now announced a new set of image classification models, dubbed DeciNets, for Intel Cascade Lake CPUs. Cascade Lake is Intel's codename for a 14-nanometer server, workstation, and enthusiast processor microarchitecture launched in April 2019.
Founded in 2019 by Yonatan Geifman, Jonathan Elial, and Professor Ran El-Yaniv, Deci enables deep learning to “live up to its true potential by using AI to build better AI.” With the company’s end-to-end deep learning development platform, AI developers can build, optimize, and deploy faster and more accurate models for any environment, including cloud, edge, and mobile, allowing them to revolutionize industries with innovative products.
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example.
Last October, Deci raised $21 million in a Series A round led by Insight Partners, bringing its total funding to $30.1 million. In March 2021, Deci and Intel announced a broad strategic collaboration to optimize deep learning inference on Intel Architecture (IA) CPUs.
Deci now boasts that its proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated the new image classification models, which it says significantly outperform all published models, delivering more than a 2x improvement in runtime alongside improved accuracy compared with the most powerful publicly available models, such as Google’s EfficientNets.
While GPUs have traditionally been the hardware of choice for running convolutional neural networks (CNNs), Deci explains that CPUs, which are already widely used for general computing tasks, would serve as a much cheaper alternative. Although it is possible to run deep learning inference on CPUs, they are generally far less powerful than GPUs for this workload; consequently, deep learning models typically run 3x to 10x slower on a CPU than on a GPU.
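The article does not describe Deci's benchmarking setup, but the kind of CPU latency comparison being discussed can be made concrete with a simple measurement. The sketch below is a generic illustration, not Deci's methodology: it times repeated forward passes of a standard CNN on a CPU using PyTorch, and the choice of model (ResNet-50), batch size, input resolution, and run counts are all illustrative assumptions.

```python
# Minimal sketch: measure CNN inference latency on a CPU with PyTorch.
# Model, batch size, and input size are illustrative assumptions, not Deci's benchmark.
import time

import torch
import torchvision.models as models

# A standard CNN classifier in evaluation mode on the CPU (randomly initialized here,
# since we only care about latency, not accuracy).
model = models.resnet50(weights=None).eval()

# One 224x224 RGB image, the typical ImageNet input resolution.
batch = torch.randn(1, 3, 224, 224)

with torch.inference_mode():
    # Warm-up passes so one-time costs (memory allocation, kernel selection)
    # don't skew the timing.
    for _ in range(5):
        model(batch)

    # Time repeated forward passes and report the average latency per image.
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"Average CPU inference latency: {1000 * elapsed / runs:.1f} ms per image")
```

Running the same loop on a GPU (by moving the model and input with `.cuda()`, if one is available) gives the other side of the comparison; the gap observed in practice depends heavily on the model, batch size, and hardware.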
Deci says its technology significantly narrows the gap between GPU and CPU performance for CNNs, making it possible to run tasks on a CPU that were previously too resource-intensive for it.
“As deep learning practitioners, our goal is not only to find the most accurate models, but to uncover the most resource-efficient models which work seamlessly in production – this combination of effectiveness and accuracy constitutes the ‘holy grail’ of deep learning,” said Yonatan Geifman, co-founder and CEO of Deci. “AutoNAC creates the best computer vision models to date, and now, the new class of DeciNets can be applied and effectively run AI applications on CPUs.”
“There is a commercial, as well as academic, desire to tackle increasingly difficult AI challenges. The result is a rapid increase in the complexity and size of deep neural models that are capable of handling those challenges,” said Prof. Ran El-Yaniv, co-founder and Chief Scientist of Deci and Professor of Computer Science at the Technion – Israel Institute of Technology. “The hardware industry is in a race to develop dedicated AI chips that will provide sufficient compute to run such models; however, with model complexity increasing at a staggering pace, we are approaching the limit of what hardware can support using current chip technology. Deci’s AutoNAC creates powerful models automatically, giving users superior accuracy and inference speed even on low-cost devices, including traditional CPUs.”