
Top Deep Learning Based Time Series Methods


BEGIN ARTICLE PREVIEW:


The components of a time series can be as complex and sophisticated as the data itself. With every passing second, the volume of data multiplies and modelling becomes trickier.

On social media platforms, for instance, data-handling chores get worse with increasing popularity. Twitter stores 1.5 petabytes of logical time series data and handles 25K query requests per minute. There are even more critical applications of time series modelling, such as in IoT and on various edge devices. Sensors in smart buildings, factories, power plants, and data centres generate vast amounts of multivariate time series data, and conventional anomaly detection methods are inadequate for the dynamic complexities of these systems.

Today, most state-of-the-art methods leverage deep learning for time-series modelling. In this article, we take a look at a few of the top works on deep learning-based time series methods published in the past couple of years.

Multivariate LSTM-FCNs

Year: 2018

The researchers transformed the univariate models, the Long Short-Term Memory Fully Convolutional Network (LSTM-FCN) and its attention-based variant (ALSTM-FCN), into multivariate time series classification models. The proposed models work efficiently on various complex multivariate time series classification tasks such as activity recognition or action …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
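
To make the architecture concrete, here is a minimal PyTorch sketch of the general LSTM-FCN idea for multivariate input: a recurrent branch and a fully convolutional branch process the series in parallel, and their features are concatenated before the classifier. The kernel sizes and filter counts below are illustrative choices, and the attention and squeeze-and-excite refinements of the published models are omitted; treat it as a sketch of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    """Rough sketch of an LSTM-FCN style classifier for multivariate series.

    Input shape: (batch, channels, time_steps). Hyperparameters are
    illustrative, not the values from the paper.
    """
    def __init__(self, n_channels, n_classes, hidden=128):
        super().__init__()
        # Recurrent branch: an LSTM over the time axis.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        # Convolutional branch: stacked 1D conv blocks with global average pooling.
        self.fcn = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=8, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding="same"), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding="same"), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        _, (h, _) = self.lstm(x.transpose(1, 2))   # LSTM expects (batch, time, channels)
        conv = self.fcn(x).squeeze(-1)             # (batch, 128)
        return self.head(torch.cat([h[-1], conv], dim=1))

# Example: 8-sensor series, 120 time steps, 6 activity classes.
model = LSTMFCN(n_channels=8, n_classes=6)
logits = model(torch.randn(4, 8, 120))
print(logits.shape)  # torch.Size([4, 6])
```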

Best of arXiv.org for AI, Machine Learning, and Deep Learning


BEGIN ARTICLE PREVIEW:

In this recurring monthly feature, we filter recent research papers appearing on the arXiv.org preprint server for compelling subjects relating to AI, machine learning and deep learning – from disciplines including statistics, mathematics and computer science – and provide you with a useful “best of” list for the past month. Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. arXiv contains a veritable treasure trove of statistical learning methods you may use one day in the solution of data science problems. The articles listed below represent a small fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Links to GitHub repos are provided when available. Especially relevant articles are marked with a “thumbs up” icon. Consider that these are academic research papers, typically geared toward graduate students, post docs, and seasoned professionals. They generally contain a high degree of mathematics so be prepared. Enjoy!

An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

While the Transformer architecture has become the de-facto standard for natural language processing …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
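
The featured paper's core idea is to treat an image as a sequence of fixed-size patches and feed that sequence to a standard Transformer encoder. The toy PyTorch sketch below illustrates that patch-embedding-plus-encoder pattern; the embedding dimension, depth, head count, and classification-from-CLS-token details are placeholder assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Illustrative Vision-Transformer-style classifier: split an image into
    fixed-size patches, linearly embed each patch, and run a standard
    Transformer encoder over the patch sequence. Sizes are placeholders."""
    def __init__(self, image_size=224, patch=16, dim=192, depth=4, heads=3, n_classes=1000):
        super().__init__()
        n_patches = (image_size // patch) ** 2
        # Patch embedding as a strided convolution: one token per 16x16 patch.
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                                        # x: (batch, 3, H, W)
        tokens = self.to_patches(x).flatten(2).transpose(1, 2)   # (batch, n_patches, dim)
        cls = self.cls.expand(x.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        return self.head(self.encoder(tokens)[:, 0])             # classify from the CLS token

model = TinyViT()
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 1000])
```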

Shrinking massive neural networks used to model language


BEGIN ARTICLE PREVIEW:

You don’t need a sledgehammer to crack a nut.
Jonathan Frankle is researching artificial intelligence — not noshing pistachios — but the same philosophy applies to his “lottery ticket hypothesis.” It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those “lucky” subnetworks, dubbed winning lottery tickets.
In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots. In computational terms, BERT is bulky, typically demanding supercomputing power unavailable to most users. Access to BERT’s winning lottery ticket could level the playing field, potentially allowing more users to develop effective NLP tools on a smartphone — no sledgehammer needed.
“We’re hitting the point where we’re going to have to make these models leaner and more efficient,” says Frankle, adding that this advance could one day “reduce barriers to entry” for NLP.
Frankle, a PhD student in Michael Carbin’s group at the MIT Computer Science and Artificial Intelligence Laboratory, co-authored the study, which will be …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
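
The search for winning tickets in this line of work is built on iterative magnitude pruning: train the network, remove the smallest-magnitude weights, rewind or retrain the survivors, and repeat. Below is a rough sketch of just the pruning step using torch.nn.utils.prune on a toy two-layer network standing in for BERT; the training and rewinding steps are only indicated in comments, and none of this is the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a large model; BERT itself is not loaded here.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

def magnitude_prune(model, amount=0.2):
    """Zero out the smallest-magnitude weights in every Linear layer."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)

# Several rounds of the prune / retrain loop; the lottery-ticket procedure also
# rewinds surviving weights to their early-training values, which is omitted here.
for _ in range(3):
    magnitude_prune(model, amount=0.2)   # prunes 20% of the remaining weights
    # ... fine-tune or rewind the surviving weights here ...

zeros = sum(int((m.weight == 0).sum()) for m in model.modules() if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Linear))
print(f"sparsity: {zeros / total:.1%}")   # roughly 1 - 0.8**3, about 49%
```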

RSNA 2020: AI highlights from an all-virtual annual meeting


BEGIN ARTICLE PREVIEW:

RSNA 2020, the annual meeting of the Radiological Society of North America, showcases the latest research advances and product developments in all areas of radiology. Here’s a selection of studies presented at this year’s all-virtual event, all of which demonstrate the increasingly prevalent role played by artificial intelligence (AI) techniques in diagnostic imaging applications.
Deep-learning model helps detect TB
Early diagnosis of tuberculosis (TB) is crucial to enable effective treatments, but this can prove challenging for resource-poor countries with a shortage of radiologists. To address this obstacle, Po-Chih Kuo, from Massachusetts Institute of Technology, and colleagues have developed a deep-learning-based TB detection model. The model, called TBShoNet, analyses photographs of chest X-rays taken by a phone camera.
Deep-learning-based diagnosis: original chest X-ray (left); smartphone-captured chest X-ray photo (centre); TB detection by TBShoNet (right). (Courtesy: Radiological Society of North America)
The researchers used three public datasets for model pre-training, transferring and evaluation. They pretrained the neural network on a database containing 250,044 chest X-rays with 14 pulmonary labels, which did not include TB. The model was then recalibrated for chest X-ray photographs by using simulation methods to augment the dataset. Finally, the team built TBShoNet by connecting the pretrained model to an …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
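
The preview sketches a three-step recipe: pretrain on a large chest X-ray corpus, adapt to phone-camera photographs through augmentation, and attach a TB classification head. The hypothetical PyTorch/torchvision snippet below illustrates that general recipe; the backbone choice (DenseNet-121 with ImageNet weights standing in for the X-ray pretraining) and the specific augmentations are our assumptions, not the TBShoNet pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical sketch of the general recipe: start from a pretrained backbone,
# add augmentations that mimic phone-camera photos of X-ray films, and attach
# a small binary TB head. None of this is the authors' code.
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 1)  # TB vs. no TB

# Augmentations approximating smartphone capture: perspective, lighting, blur.
photo_like = transforms.Compose([
    transforms.RandomPerspective(distortion_scale=0.3),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=5),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()
# for images, labels in loader:   # loader would yield augmented photos + TB labels
#     loss = criterion(backbone(images).squeeze(1), labels.float())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```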

Deci Collaborates with Intel to Achieve 11.8x Accelerated Inference Speed at MLPerf


BEGIN ARTICLE PREVIEW:

TEL AVIV, Israel, Dec. 1, 2020 /PRNewswire/ — Deci, the deep learning company building the next generation of AI, announced the inference results it submitted to the open division of the MLPerf v0.7 inference benchmark (full results here). On several popular Intel CPUs, Deci’s AutoNAC (Automated Neural Architecture Construction) technology accelerated the inference speed of the well-known ResNet-50 neural network, reducing the submitted models’ latency by a factor of up to 11.8x and increasing throughput by up to 11x, all while preserving the model’s accuracy to within 1%.
“Billions of dollars have been spent on building dedicated AI chips, some of which are focused on computer vision inference,” says Yonatan Geifman, CEO and co-founder of Deci. “At MLPerf we demonstrated that Deci’s AutoNAC algorithmic acceleration, together with Intel’s OpenVino toolkit, enables the use of standard CPUs for deep learning inference at scale.”
According to MLPerf rules, Deci’s goal was to reduce latency, or increase throughput, while staying within 1% of the accuracy of ResNet-50 trained on the ImageNet dataset. Deci’s optimized models improved latency by factors of 5.16x to 11.8x compared with vanilla ResNet-50, and achieved throughput per core three times higher than that of other submitters’ models.
“Intel’s …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
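
Latency and throughput figures like these come from timing repeated forward passes at a fixed batch size while keeping accuracy within the allowed 1% margin. The snippet below shows a bare-bones way to take such measurements for a stock ResNet-50 in plain PyTorch on CPU; it uses neither Deci's AutoNAC models nor the OpenVINO runtime from the actual submission, so the absolute numbers will differ.

```python
import time
import torch
from torchvision import models

# Simple latency/throughput measurement in the spirit of the setup described
# above (plain PyTorch on CPU, not the optimized models from the submission).
model = models.resnet50(weights=None).eval()
batch = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(10):                 # warm-up runs, excluded from timing
        model(batch)
    times = []
    for _ in range(50):
        start = time.perf_counter()
        model(batch)
        times.append(time.perf_counter() - start)

latency_ms = 1000 * sum(times) / len(times)
print(f"mean latency: {latency_ms:.1f} ms, throughput: {1000 / latency_ms:.1f} img/s")
```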

Krones employs deep learning for empty bottle inspection


BEGIN ARTICLE PREVIEW:

For the first time, the new Linatronic AI employs deep learning technology for automatic image detection. Photo – Krones

Anyone who works with empty bottle inspectors knows that not every bottle that the inspector rejects has a defect. In most cases, it might simply be water droplets or a bit of foam still clinging to the bottle after cleaning. Since conventional systems can’t always distinguish these from contaminants or damage with 100% certainty, they tend to err on the side of caution and reject the container. As a result, countless entirely usable bottles land in the trash in every production shift, never to be seen again.
To change that, Krones has taken the evolution of its inspection technology to the next level. According to Krones, the new Linatronic AI employs deep learning software to automatically detect and classify anomalies, making it much smarter and more efficient than its conventional peers.

Artificial neural networks
Deep learning is a technology that enables machines to do what we humans do naturally — learn from example. But there is one big difference – a machine can use this ability many times more efficiently than humans can.
The foundation for deep learning is an artificial neural network (ANN). …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
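
To make the "learn from example" idea concrete: a classifier of this kind is, at its core, a network trained on labelled images to separate harmless artefacts such as droplets and foam from genuine contamination or damage. The toy CNN below illustrates only that principle; its architecture is invented for illustration and has no connection to the Linatronic AI internals.

```python
import torch
import torch.nn as nn

# Toy defect classifier: a small CNN mapping a grayscale inspection image to
# "harmless" (droplet/foam) vs. "reject" (contamination/damage).
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),            # two classes: harmless vs. reject
)

image = torch.randn(1, 1, 64, 64)          # one 64x64 grayscale inspection image
probs = torch.softmax(classifier(image), dim=1)
print(probs)                               # class probabilities for the image
```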

Automatic deep-learning AI tool measures volume of cerebral ventricles on MRIs in children


BEGIN ARTICLE PREVIEW:

IMAGE: Deep learning model (blue) and ground truth manual (green) segmentation of representative control (left) and hydrocephalus (right) T2-weighted MR images. (Credit: Copyright 2020 AANS.)

CHARLOTTESVILLE, VA (DECEMBER 1, 2020). Researchers from multiple institutions in North America have developed a fully automated, deep-learning (DL), artificial-intelligence clinical tool that can measure the volume of cerebral ventricles on magnetic resonance images (MRIs) in children within about 25 minutes. The ability to track ventricular volume over time in a clinical setting will prove invaluable in the treatment of children and adults with hydrocephalus. Details on the development of the tool and its validation are reported today in a new article, “Artificial intelligence for automatic cerebral ventricle segmentation and volume calculation: a clinical tool for the evaluation of pediatric hydrocephalus,” by Jennifer L. Quon, MD, and colleagues, in the Journal of Neurosurgery: Pediatrics .
Hydrocephalus is a pathological condition caused by an excessive amount of cerebrospinal fluid (CSF) in chambers of the brain known as ventricles. The condition results from an imbalance between the production and absorption of CSF. Hydrocephalus is called “communicating” when CSF can pass from one ventricle to another and “obstructive” when passage from one ventricle to another is blocked. The prevalence of …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
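
Once a segmentation mask has been produced, the volume calculation itself is simple: count the voxels labelled as ventricle and multiply by the physical voxel size stored in the image header. The short NumPy sketch below shows that generic step; it is not the authors' pipeline, and the example mask and voxel spacing are made up.

```python
import numpy as np

def ventricle_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary segmentation mask in millilitres.

    `mask` is a 3D array where 1 marks ventricle voxels; `spacing_mm` is the
    voxel size along each axis, as stored in the MRI header.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

# Example: a synthetic 128^3 mask with a 20 x 25 x 40 voxel block at 1x1x1 mm.
mask = np.zeros((128, 128, 128), dtype=np.uint8)
mask[50:70, 50:75, 50:90] = 1
print(f"{ventricle_volume_ml(mask):.1f} mL")   # 20000 voxels -> 20.0 mL
```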

Using Algorithms Derived From Neuroscience Research, Numenta Demonstrates 50x Speed Improvements on Deep Learning Networks


BEGIN ARTICLE PREVIEW:

Source: https://numenta.com/assets/pdf/research-publications/papers/Sparsity-Enables-50x-Performance-Acceleration-Deep-Learning-Networks.pdf

Numenta is a machine intelligence company focused on developing cohesive theory, core software, technology, and applications based on principles of the neocortex. Its scientists and engineers work on one of the most significant challenges humanity faces: understanding how the brain works. Numenta recently announced that it had achieved dramatic performance improvements in the inference tasks of deep learning networks without any loss in accuracy.

Numenta made these advances by applying a principle of the brain called sparsity. It compared sparse and dense networks by running its algorithms on Xilinx FPGAs (Field Programmable Gate Arrays) for a speech recognition task using the Google Speech Commands (GSC) dataset, with the number of words processed per second as the measure of efficiency. The results show that sparse networks yield more than 50x acceleration over dense networks on a Xilinx Alveo board.

Numenta also demonstrated the GSC network running on a Xilinx Zynq chip (a smaller chip that is not powerful enough to run dense networks), enabling a new set of applications based on low-cost, low-power solutions. Using the metric of the number of words per …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE
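
The underlying principle is that a network whose weights are mostly zeros needs far fewer multiply-accumulate operations, provided the hardware or kernels can actually skip the zeros. The toy NumPy/SciPy comparison below stores the same 95%-sparse weight matrix densely and in compressed sparse row (CSR) form and times a batch of matrix products; it is only a back-of-the-envelope illustration of the principle, nothing like Numenta's FPGA implementation.

```python
import time
import numpy as np
from scipy import sparse

# Build a weight matrix that is 95% zeros, then time matrix products with it
# stored densely vs. in CSR form. Real gains like those reported above come
# from hardware/kernels designed to exploit sparsity (e.g. FPGAs).
rng = np.random.default_rng(0)
dense_w = rng.standard_normal((4096, 4096)).astype(np.float32)
dense_w[rng.random(dense_w.shape) < 0.95] = 0.0        # zero out 95% of weights
sparse_w = sparse.csr_matrix(dense_w)
x = rng.standard_normal((4096, 64)).astype(np.float32)

for name, w in [("dense", dense_w), ("sparse", sparse_w)]:
    start = time.perf_counter()
    for _ in range(20):
        _ = w @ x
    print(f"{name}: {time.perf_counter() - start:.3f} s for 20 matmuls")
```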

DeepMind’s protein-folding AI has solved a 50-year-old grand challenge of biology


BEGIN ARTICLE PREVIEW:

In this year’s CASP, AlphaFold predicted the structure of dozens of proteins with a margin of error of just 1.6 angstroms—that’s 0.16 nanometers, or atom-sized. This far outstrips all other computational methods and for the first time matches the accuracy of experimental techniques to map out the structure of proteins in the lab, such as cryo-electron microscopy, nuclear magnetic resonance and x-ray crystallography. These techniques are expensive and slow: it can take hundreds of thousands of dollars and years of trial and error for each protein. AlphaFold can find a protein’s shape in a few days. The breakthrough could help researchers design new drugs and understand diseases. In the longer term, predicting protein structure will also help design synthetic proteins, such as enzymes that digest waste or produce biofuels. Researchers are also exploring ways to introduce synthetic proteins that will increase crop yields and make plants more nutritious. “It’s a very substantial advance,” says Mohammed AlQuraishi, a systems biologist at Columbia University who has developed his own software for predicting protein structure. “It’s something I simply didn’t expect to happen nearly this rapidly. It’s shocking, in a way.” “This really is a big deal,” says David Baker, …

END ARTICLE PREVIEW

READ MORE FROM SOURCE ARTICLE