[Translation] How It Works: Automating Image Data Augmentation with a Reinforcement Learning Algorithm

Translated by: 宗諭
Reviewed by: 阿吉老師
Images: Google AI Blog
Header image: Designed by Zivile_z
Note: This article originally appeared on the Google AI Blog and is reposted here with Google's permission, for which we are grateful!

 

The success of deep learning in computer vision can be partially attributed to the availability of large amounts of labeled training data — a model’s performance typically improves as you increase the quality, diversity and the amount of training data. However, collecting enough quality data to train a model to perform well is often prohibitively difficult. One way around this is to hardcode image symmetries into neural network architectures so they perform better or have experts manually design data augmentation methods, like rotation and flipping, that are commonly used to train well-performing vision models. However, until recently, less attention has been paid to finding ways to automatically augment existing data using machine learning. Inspired by the results of our AutoML efforts to design neural network architectures and optimizers to replace components of systems that were previously human designed, we asked ourselves: can we also automate the procedure of data augmentation?
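For readers who want to see what such hand-designed augmentations look like in practice, here is a minimal sketch of horizontal flipping and 90-degree rotation with NumPy; the (batch, height, width, channels) array layout and the toy sizes are assumptions made only for illustration.

    import numpy as np

    def flip_horizontal(image: np.ndarray) -> np.ndarray:
        # Mirror the image along its width axis; assumes an (H, W, C) array.
        return image[:, ::-1, :]

    def rotate_90(image: np.ndarray, times: int = 1) -> np.ndarray:
        # Rotate by multiples of 90 degrees in the (H, W) plane.
        return np.rot90(image, k=times, axes=(0, 1))

    # Toy usage: double a small batch by appending mirrored copies.
    images = np.random.rand(8, 32, 32, 3)
    augmented = np.concatenate([images, images[:, :, ::-1, :]], axis=0)
    print(augmented.shape)  # (16, 32, 32, 3)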


 

In “AutoAugment: Learning Augmentation Policies from Data”, we explore a reinforcement learning algorithm which increases both the amount and diversity of data in an existing training dataset. Intuitively, data augmentation is used to teach a model about image invariances in the data domain in a way that makes a neural network invariant to these important symmetries, thus improving its performance. Unlike previous state-of-the-art deep learning models that used hand-designed data augmentation policies, we used reinforcement learning to find the optimal image transformation policies from the data itself. The result improved the performance of computer vision models without relying on the production of new and ever-expanding datasets.
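The outer search loop described here can be sketched as follows. This is only an illustration of the idea, not the authors' implementation: the paper trains an RNN controller with reinforcement learning to propose policies, whereas the stub below samples them uniformly, and train_child_model is a hypothetical placeholder for training a small network with the sampled policy and returning its validation accuracy as the reward.

    import random

    # Candidate operations; the real search space is larger (see the paper).
    OPS = ["ShearX", "TranslateY", "Rotate", "Invert", "Color", "Contrast"]

    def sample_policy(rng: random.Random):
        # A candidate policy: a list of (operation, probability, magnitude) triples.
        return [(rng.choice(OPS), rng.randint(0, 10) / 10, rng.randint(0, 9))
                for _ in range(10)]

    def train_child_model(policy) -> float:
        # Placeholder reward: in the real system, a "child" network is trained
        # with this augmentation policy and its validation accuracy is returned.
        return random.random()

    def search(num_trials: int = 100, seed: int = 0):
        rng = random.Random(seed)
        best_policy, best_reward = None, float("-inf")
        for _ in range(num_trials):
            policy = sample_policy(rng)
            reward = train_child_model(policy)  # reward signal for the controller
            if reward > best_reward:
                best_policy, best_reward = policy, reward
        return best_policy, best_reward

    best_policy, best_reward = search(num_trials=5)
    print(best_reward, best_policy[:2])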


 

Augmenting Training Data
The idea behind data augmentation is simple: images have many symmetries that don’t change the information present in the image. For example, the mirror reflection of a dog is still a dog. While some of these “invariances” are obvious to humans, many are not. For example, the mixup method augments data by placing images on top of each other during training, resulting in data which improves neural network performance.
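Since mixup is easy to state precisely, here is a minimal sketch of it, assuming a NumPy batch of images and one-hot labels; random pairs of examples are blended with a Beta-distributed coefficient, following Zhang et al.

    import numpy as np

    def mixup_batch(images, labels, alpha=0.2, rng=None):
        # mixup: blend random pairs of examples and their one-hot labels with a
        # single Beta(alpha, alpha) mixing coefficient per batch.
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)
        perm = rng.permutation(len(images))
        mixed_images = lam * images + (1.0 - lam) * images[perm]
        mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
        return mixed_images, mixed_labels

    # Toy usage with random data and one-hot labels for a 10-class problem.
    x = np.random.rand(4, 32, 32, 3).astype(np.float32)
    y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, size=4)]
    x_mix, y_mix = mixup_batch(x, y)
    print(x_mix.shape, y_mix.shape)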


Figure 1

 

AutoAugment is an automatic way to design custom data augmentation policies for computer vision datasets, e.g., guiding the selection of basic image transformation operations, such as flipping an image horizontally/vertically, rotating an image, changing the color of an image, etc. AutoAugment not only predicts what image transformations to combine, but also the per-image probability and magnitude of the transformation used, so that the image is not always manipulated in the same way. AutoAugment is able to select an optimal policy from a search space of 2.9 × 10^32 image transformation possibilities.
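As a worked check of the 2.9 × 10^32 figure, the sketch below reproduces the counting under the discretisation reported in the paper (16 operations, 11 probability values, 10 magnitude values, two operations per sub-policy, five sub-policies per policy) and shows one way a policy might be encoded as plain data and sampled per image; the encoding and the example values are illustrative assumptions, not the released policies.

    import random

    # Search-space size under the paper's discretisation: 16 operations,
    # 11 probability values (0.0 to 1.0 in steps of 0.1), 10 magnitude values,
    # two operations per sub-policy, five sub-policies per policy.
    choices_per_operation = 16 * 11 * 10            # 1,760
    choices_per_sub_policy = choices_per_operation ** 2
    policy_search_space = choices_per_sub_policy ** 5
    print(f"{policy_search_space:.1e}")             # ~2.9e+32

    # One possible encoding: each sub-policy is a pair of
    # (operation, probability, magnitude) triples; the values are made up.
    example_policy = [
        [("ShearX", 0.9, 4), ("Invert", 0.2, 3)],
        [("Color", 0.7, 8), ("Rotate", 0.5, 6)],
    ]

    def transforms_for_image(policy, rng=None):
        # Per image: pick one sub-policy at random, then keep each of its
        # operations with its stored probability, so the same image is not
        # always manipulated in the same way.
        rng = rng or random.Random()
        return [(name, magnitude) for name, prob, magnitude in rng.choice(policy)
                if rng.random() < prob]

    print(transforms_for_image(example_policy))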


 

AutoAugment learns different transformations depending on what dataset it is run on. For example, for images involving street view of house numbers (SVHN), which include natural scene images of digits, AutoAugment focuses on geometric transforms like shearing and translation, which represent distortions commonly observed in this dataset. In addition, AutoAugment has learned to completely invert colors, which naturally occur in the original SVHN dataset, given the diversity of building and house number materials in the world.
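To make the operations named here concrete, here is a minimal sketch of shearing, translation, and color inversion with Pillow (version 9.1 or newer for the Transform enum); the magnitudes and the blank placeholder image are arbitrary assumptions, not values learned by AutoAugment.

    from PIL import Image, ImageOps

    def shear_x(img: Image.Image, shear: float = 0.3) -> Image.Image:
        # Horizontal shear via an affine transform; 0.3 is an arbitrary magnitude.
        return img.transform(img.size, Image.Transform.AFFINE, (1, shear, 0, 0, 1, 0))

    def translate_y(img: Image.Image, pixels: int = 5) -> Image.Image:
        # Shift the image content vertically by a fixed number of pixels.
        return img.transform(img.size, Image.Transform.AFFINE, (1, 0, 0, 0, 1, pixels))

    def invert_colors(img: Image.Image) -> Image.Image:
        # Invert every channel, e.g. dark digits on a light plate become light on dark.
        return ImageOps.invert(img.convert("RGB"))

    # Stand-in for an SVHN crop; a real pipeline would load actual images.
    img = Image.new("RGB", (32, 32), color=(255, 255, 255))
    augmented = invert_colors(translate_y(shear_x(img)))
    print(augmented.size)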


Figure 2

 

On CIFAR-10 and ImageNet, AutoAugment does not use shearing because these datasets generally do not include images of sheared objects, nor does it invert colors completely as these transformations would lead to unrealistic images. Instead, AutoAugment focuses on slightly adjusting the color and hue distribution, while preserving the general color properties. This suggests that the actual colors of objects in CIFAR-10 and ImageNet are important, whereas on SVHN only the relative colors are important.


Figure 3

 

Results
Our AutoAugment algorithm found augmentation policies for some of the most well-known computer vision datasets that, when incorporated into the training of the neural network, led to state-of-the-art accuracies. By augmenting ImageNet data we obtain a new state-of-the-art top-1 accuracy of 83.54%, and on CIFAR-10 we achieve an error rate of 1.48%, which is a 0.83% improvement over the default data augmentation designed by scientists. On SVHN, we improved the state-of-the-art error from 1.30% to 1.02%. Importantly, AutoAugment policies are found to be transferable — the policy found for the ImageNet dataset could also be applied to other vision datasets (Stanford Cars, FGVC-Aircraft, etc.), which in turn improves neural network performance.


 

We are pleased to see that our AutoAugment algorithm achieved this level of performance on many different competitive computer vision datasets and look forward to seeing future applications of this technology across more computer vision tasks and even in other domains such as audio processing or language models. The policies with the best performance are included in the appendix of the paper, so that researchers can use them to improve their models on relevant vision tasks.


 


 
