In a little over half a month, Maker Faire Taipei 2018 will be here! Exhibitors and maker friends will gather to show off their carefully crafted work. RS DesignSpark will also take part in this year's Maker Faire Taipei, demonstrating applications built on Arduino, Raspberry Pi, Intel Movidius, and more, including a hovering fish, a compact Pidentifier, a Pi-Top object recognizer, an Arduino air guitar, and a presentation trainer. To give readers a little taste of what's coming, here is a preview of one of the projects. Check it out:
AI and related information technologies are evolving fast: what takes enormous effort today may be a few button clicks tomorrow. IoT education over the past two years is a good example. With a LinkIt 7697 board and the MCS cloud service, even elementary school students can build simple IoT projects, monitoring sensor data or controlling the board from a web page or phone. This is not to say that network protocols are unimportant, but for non-specialists this kind of design helps them focus on what matters most: the data. If the data is meaningful or important to the developer, starting from the data itself is an excellent point of departure.
We are excited to introduce a new optimization toolkit in TensorFlow: a suite of techniques that developers, both novice and advanced, can use to optimize machine learning models for deployment and execution.
While we expect that these techniques will be useful for optimizing any TensorFlow model for deployment, they are particularly important for TensorFlow Lite developers who are serving models on devices with tight memory, power, and storage constraints. If you haven’t tried out TensorFlow Lite yet, you can find out more about it here.
The first technique for which we are adding support is post-training quantization in the TensorFlow Lite conversion tool. For relevant machine learning models, it can deliver up to 4x compression and up to 3x faster execution.
By quantizing their models, developers will also gain the additional benefit of reduced power consumption. This can be useful for deployment in edge devices, beyond mobile phones.
Enabling post-training quantization
The post-training quantization technique is integrated into the TensorFlow Lite conversion tool. Getting started is easy: after building their TensorFlow model, developers can simply enable the ‘post_training_quantize’ flag in the TensorFlow Lite conversion tool. Assuming that the saved model is stored in saved_model_dir, the quantized tflite flatbuffer can be generated:
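The snippet below is a minimal sketch of that flow. Note that the converter API has changed across TensorFlow releases: the `post_training_quantize` attribute from the announcement-era API has since been superseded by `tf.lite.Optimize.DEFAULT`, which the sketch uses, and the tiny Keras model here is only a stand-in for your real trained model.

```python
import tensorflow as tf

# Tiny stand-in model; substitute your own trained model here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For a SavedModel on disk, use instead:
# converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)

# Weight-only post-training quantization. Older releases spelled this
# `converter.post_training_quantize = True`; current releases use:
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# The result is the quantized TFLite flatbuffer as a bytes object.
tflite_quant_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
```

The flatbuffer written here can be loaded directly by the TensorFlow Lite interpreter on device.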
Our tutorial walks you through how to do this in depth. In the future, we aim to incorporate this technique into general TensorFlow tooling as well, so that it can be used for deployment on platforms not currently supported by TensorFlow Lite.
These speed-ups and model size reductions occur with little impact on accuracy. In general, models that are already small for the task at hand (for example, MobileNet v1 for image classification) may experience more accuracy loss. For many of these models we provide pre-trained, fully quantized models.
Under the hood, we are running optimizations (also referred to as quantization) by lowering the precision of the parameters (i.e. neural network weights) from their training-time 32-bit floating-point representations into much smaller and more efficient 8-bit integer ones. See the post-training quantization guide for more details.
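As an illustrative sketch of the arithmetic (not TensorFlow's actual implementation), a symmetric 8-bit quantization maps a float32 weight tensor onto int8 values plus a single shared scale:

```python
import numpy as np

def quantize_symmetric_int8(w):
    """Map float32 weights onto int8 so that w is approximately scale * q."""
    scale = np.abs(w).max() / 127.0       # one float scale for the tensor
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # pretend layer weights

q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes // q.nbytes)              # 4: int8 storage is 4x smaller
print(np.abs(w - w_hat).max() <= scale)  # True: rounding error is bounded
```

The 4x storage reduction falls directly out of the 32-bit to 8-bit change, which is where the "up to 4x compression" figure above comes from.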
These optimizations pair the reduced-precision operation definitions in the resulting model with kernel implementations that use a mix of fixed- and floating-point math. The heaviest computations execute fast in lower precision, while the most sensitive ones retain higher precision, typically resulting in little to no accuracy loss for the task, yet a significant speed-up over pure floating-point execution. For operations that lack a matching “hybrid” kernel, or where the toolkit deems it necessary, the parameters are converted back to the higher floating-point precision for execution. Please see the post-training quantization page for a list of supported hybrid operations.
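A rough sketch of what such a hybrid kernel does for a dense layer, using hypothetical helper names: the weights stay stored in int8 and are expanded through their scale on the fly, while the activations and the accumulation stay in float32.

```python
import numpy as np

def hybrid_dense(x, q_w, scale, b):
    """Hybrid dense layer: int8 weights expanded via their scale on the fly,
    while activations and the accumulation stay in float32."""
    return x @ (q_w.astype(np.float32) * scale) + b

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 32)).astype(np.float32)    # float activations
w = rng.normal(size=(32, 16)).astype(np.float32)   # float reference weights
b = np.zeros(16, dtype=np.float32)

# Quantize the weights once, offline (as the conversion tool does).
scale = np.abs(w).max() / 127.0
q_w = np.round(w / scale).astype(np.int8)

y_hybrid = hybrid_dense(x, q_w, scale, b)
y_float = x @ w + b
print(np.max(np.abs(y_hybrid - y_float)))  # tiny relative to the outputs
```

In a real hybrid kernel the matrix multiply itself runs in fixed point for speed; this float-side sketch only shows why the result tracks the full-precision one so closely.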
We will continue to improve post-training quantization as well as work on other techniques which make it easier to optimize models. These will be integrated into relevant TensorFlow workflows to make them easy to use.
Remember when Google's DeepMind team used AlphaGo to defeat the world Go champion? (a quick AlphaGo recap: link) That achievement was built on neural network techniques, with TensorFlow as the tool. The same goes for Gmail's spam filtering, face recognition in Google Photos, and Google Translate. Google has released TensorFlow as open source, so anyone can collect sample data for the AI use case they have in mind and train a model to make predictions.
The third stop was the MIT Media Lab, probably the most memorable experience of the whole day for the students. We had specially arranged for 謝宗翰, an outstanding student from Taiwan working in the Biomechatronics Leg Lab at the MIT Media Lab under Hugh Herr, principal investigator of this world-renowned cross-disciplinary biomechatronic prosthetics lab, to share his research on bionic prostheses, assistive devices, biomechanics, musculoskeletal systems, and neuroscience, and to lead the whole group on a tour of the lab. Even more moving, after the tour he shared the story of his educational journey and his outlook on life.