The AI landscape is dynamic and ever changing. New models are published every month, machine learning tools appear and disappear before you know it, and the democratization of artificial intelligence through cloud resources opens up a whole new array of possibilities for practitioners in the field. Developments arise so quickly that keeping up with the evolution of the AI landscape can be challenging. Professionals who do not work in artificial intelligence and machine learning on a daily basis are often unaware of the speed at which the sector moves forward. Even some large corporate clients we have worked for acknowledge that they cannot keep up with all of the latest developments that could be of interest to them. Nonetheless, we always try to follow the latest developments and trends so that we maintain a clear overview of the models, tools and infrastructure that could be useful for our clients. In this blog post we summarize some of the latest trends whose importance we expect to rise in the not too distant future.

Models
Deep reinforcement learning
Today, machines can teach themselves based on the results of their own actions. This advancement in artificial intelligence, known as deep reinforcement learning, seems like a promising technology through which we can explore more of AI's innovative potential.
Deep reinforcement learning is a subcategory of machine learning and artificial intelligence in which intelligent machines learn from their actions, similar to the way humans learn from experience. Inherent in this type of learning is that an agent is rewarded or penalized based on its actions: actions that move it towards the target outcome are rewarded, hence reinforced.
Through a series of trials and errors, the algorithms underlying reinforcement learning keep learning in this way, which makes the technology well suited to dynamic environments that keep changing. Although reinforcement learning has been around for decades, it was only much more recently combined with deep learning, which yielded phenomenal results. The "deep" part of deep reinforcement learning refers to the multiple layers of artificial neural networks, loosely inspired by the structure of the human brain. Deep learning requires large amounts of training data and significant computing power. Over the last few years, data volumes have exploded while the cost of computing power has dropped dramatically, enabling an explosion of deep learning applications. Deep reinforcement learning gathered widespread attention when DeepMind's AlphaGo defeated a Go grandmaster.
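To make the reward-driven learning loop concrete, below is a minimal sketch of tabular Q-learning on a tiny, hypothetical grid world; the environment, states and hyperparameters are illustrative choices, not taken from any of the toolkits mentioned below. Deep reinforcement learning replaces the Q-table with a neural network, but the trial-and-error update is essentially the same.

import random

# Minimal sketch: tabular Q-learning on a 1-D corridor of 5 cells.
# The agent starts at cell 0 and is rewarded only for reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: expected future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only when the goal cell is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise act greedily on current knowledge.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Reinforce the action based on the reward it produced.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should point every cell towards the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})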
Apart from games, AI toolkits such as OpenAI Gym, DeepMind Lab, and Psychlab provide training environments that drive large-scale innovation in deep reinforcement learning.
Other practical use cases include intelligent robots that are being used in manufacturing plants or warehouses to sort out millions of products and deliver them to the right people. In these use cases, when a robot picks an item to put in a container, deep reinforcement learning helps it gain knowledge based on whether it succeeded or failed in this task. It uses this knowledge to perform this operation more efficiently in the future.
Generative adversarial networks
With recent improvements in computing power and the availability of large-scale datasets, many interesting studies have been presented based on discriminative models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for various classification problems. These models have achieved state-of-the-art results in almost all applications, computer vision being a prime example. They will remain important in the future, but an exciting new kid on the block has arrived: pioneers of the deep learning community have described generative adversarial training as the most exciting topic in computer vision today. Influenced by these views and by the potential uses of generative models, a growing body of research has been built around generative models, especially Generative Adversarial Network (GAN) and Autoencoder (AE) based models.
Generative modelling using neural networks has its origins in the 1980s, with the aim of learning about data without supervision, potentially providing benefits for standard classification tasks. Collecting training data for unsupervised learning naturally takes much less effort and is cheaper than collecting labelled data. Generative models are also able to synthesize new data, making it possible to counter the shortage of crucial training data in many applications.
Beyond this, generative modelling has a plethora of direct applications. Recent work spans a wide variety of domains: image generation (super-resolution, text-to-image and image-to-image conversion, inpainting, attribute manipulation, pose estimation), video synthesis and retargeting, speech and audio synthesis, text generation and translation, reinforcement learning, computer graphics (fast rendering, texture generation, character movement, liquid simulation), medical applications (drug synthesis, modality conversion), density estimation, data augmentation, and feature generation. In short, we expect to hear a lot about exciting applications of generative adversarial networks in the coming years.
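The adversarial training loop itself is compact: a generator tries to produce samples that a discriminator cannot tell apart from real data, and both networks are updated in turn. Below is a minimal, illustrative PyTorch sketch that trains a tiny GAN to mimic a one-dimensional Gaussian; the architecture and hyperparameters are arbitrary demonstration choices, not a recipe from any particular paper.

import torch
import torch.nn as nn

# Toy "real" data: samples from a Gaussian with mean 4 and std 1.25.
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator maps random noise to fake samples; discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
batch = 64

for step in range(2000):
    # --- Train the discriminator: real samples -> 1, generated samples -> 0.
    real = real_batch(batch)
    fake = G(torch.randn(batch, 8)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Train the generator: try to fool the discriminator into outputting 1.
    fake = G(torch.randn(batch, 8))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())

If training behaves, the generated mean and standard deviation drift towards those of the real data; even on toy problems, however, GAN training can be unstable, which is part of what makes this an active research area.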
Pre-trained models
Pre-training is now ubiquitous in natural language understanding (NLU). Regardless of the target application (e.g., sentiment analysis, question answering, or machine translation), models are first pre-trained on vast amounts of free-form text, often hundreds of gigabytes. The intention is to initialize models with general linguistic knowledge that can later be leveraged in multiple contexts. A pre-trained model that is linguistically well-versed can then be fine-tuned on a much smaller dataset to perform the target application. The computer vision field already uses pre-trained models to a great extent, for example by taking networks pre-trained on ImageNet as the starting point for a range of tasks. We expect usage of pre-trained models to keep increasing in the coming years, given the many benefits this approach offers.
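As an illustration of how little code the fine-tuning step can take, here is a sketch using torchvision: a ResNet-18 pre-trained on ImageNet is loaded, its backbone is frozen, and only a new classification head is trained on a hypothetical target dataset. The weights argument follows recent torchvision versions; older releases use pretrained=True instead.

import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 with weights pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so its general visual features are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for our own task (say, 5 classes).
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head is optimized; a small target dataset is enough for this.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One training step on a batch from the (much smaller) target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()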
Architecture
Edge computing
In the last few years, artificial intelligence implementations have changed at companies around the world. As more enterprise-wide efforts dominate, cloud computing has become an essential component of the evolution of AI. And as customers spend more time on their devices, businesses increasingly realize the need to bring essential computation onto the device itself in order to serve more customers. This is why the edge computing market will continue to accelerate in the next few years; it is forecast to reach 1.12 trillion in value by 2023.
To prepare for this, large cloud companies are offering edge computing services, and Intel and Udacity have just launched a joint program to train 1 million developers worldwide.
According to Gartner, 91% of today’s data is processed in centralized data centers. But by 2022, about 74% of all data will need analysis and action on the edge.
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the devices where the data is generated. It originated from content delivery networks; today, companies use virtualization to extend its capabilities.
There is a misconception that edge computing will replace the cloud. On the contrary, it works in conjunction with the cloud: big data workloads will continue to run in the cloud, while instant data that is generated by users and relates only to those users can be processed and acted on at the edge.
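The split between instant processing on the device and heavier analysis in the cloud can be illustrated with a small sketch. Everything here is hypothetical: the threshold check stands in for a lightweight on-device model, and upload_to_cloud is a stub for whatever cloud endpoint a real deployment would use.

import json
import time

CLOUD_BATCH_SIZE = 100      # ship aggregated data to the cloud in batches
buffer = []

def edge_inference(sensor_value: float) -> bool:
    """Instant decision made on the device itself (stand-in for a small
    on-device model): flag the reading if it crosses a threshold."""
    return sensor_value > 0.8

def upload_to_cloud(batch: list) -> None:
    """Hypothetical stub for a call to a cloud endpoint, where heavyweight
    analytics and model (re)training on the full dataset would happen."""
    payload = json.dumps(batch)
    print(f"uploading {len(batch)} readings ({len(payload)} bytes) to the cloud")

def handle_reading(sensor_value: float) -> None:
    # Act locally and immediately: no round trip to the cloud is needed.
    if edge_inference(sensor_value):
        print(f"{time.time():.0f}: local alert for value {sensor_value:.2f}")
    # Buffer the raw data and send it to the cloud periodically for big-data use.
    buffer.append({"ts": time.time(), "value": sensor_value})
    if len(buffer) >= CLOUD_BATCH_SIZE:
        upload_to_cloud(buffer)
        buffer.clear()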
Cloud AI adoption
Artificial intelligence and cloud computing have merged to improve the lives of millions. Digital assistants like Siri, Google Home, and Amazon's Alexa blend AI and cloud computing in our lives every day. With a quick verbal cue, users can make a purchase, adjust a smart home thermostat, or hear a song played over a connected speaker. A seamless flow of AI and cloud-based resources makes those requests a reality. Most users never even realize that it is a customized blend of these two technology spheres, artificial intelligence and cloud computing, that makes these connected, intuitive experiences possible.
On a larger scale, AI capabilities are working in the business cloud computing environment to make organizations more efficient, strategic, and insight-driven. Cloud computing offers businesses more flexibility, agility, and cost savings by hosting data and applications in the cloud. Artificial intelligence capabilities are now layering with cloud computing and helping companies manage their data, look for patterns and insights in information, deliver customer experiences, and optimize workflows.
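To give a concrete flavour of this layering, the snippet below uses AWS Comprehend (via the boto3 SDK) as one example of consuming a cloud-hosted AI capability from a few lines of application code; the other major cloud providers offer equivalent services, and the region and the example text here are placeholders.

import boto3

# A cloud-hosted NLP service: no model training or hosting on our side.
comprehend = boto3.client("comprehend", region_name="eu-west-1")

def review_sentiment(text: str) -> str:
    """Send a piece of customer feedback to the cloud AI service and
    return the detected sentiment (POSITIVE, NEGATIVE, NEUTRAL or MIXED)."""
    response = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    return response["Sentiment"]

if __name__ == "__main__":
    print(review_sentiment("The delivery was fast and the product works great."))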