Machine Learning DEMOs with iOS
This repo was moved from the @motlabs group. Thanks to @jwkanggist, the leader of the motlabs community.
Awesome Machine Learning DEMOs with iOS
We tackle the challenge of using machine learning models on iOS via Core ML and ML Kit (TensorFlow Lite).
Contents
- Machine Learning Framework for iOS
- Baseline Projects
- Application Projects
- Create ML Projects
- Performance
- See also
Machine Learning Framework for iOS
- Core ML
- TensorFlow Lite
- Pytorch Mobile
- fritz
- etc.
- Tensorflow Mobile (DEPRECATED)
Flow of Model When Using Core ML
The overall flow is very similar across most mobile ML frameworks, but each framework has its own compatible model format. A model created in TensorFlow must first be converted into the appropriate format for the mobile ML framework you are targeting.
Once the compatible model is prepared, you can run inference with that framework. Note that pre- and post-processing must be performed manually.
If you want more explanation, check this slide (in Korean).
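A minimal sketch of this flow with Vision and Core ML is shown below. The `MobileNetV2` class name is only a placeholder for whatever class Xcode generates from your converted model; Vision handles the image preprocessing (resize/crop), while interpreting the observations is the manual postprocessing step.

```swift
import CoreML
import CoreGraphics
import Vision

// Sketch: run an image-classification model that has been converted to Core ML.
// `MobileNetV2` stands in for the Xcode-generated class of your own model.
func classify(_ image: CGImage) throws {
    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    // Vision performs the preprocessing; picking the top label is the postprocessing.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```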
Flow of Model When Using Create ML
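With Create ML the conversion step disappears: you train the model in Swift on a Mac, and the output is already a Core ML `.mlmodel` file that can be dropped into an Xcode project. Below is a minimal sketch of training and exporting an image classifier; the dataset path and output file name are placeholders.

```swift
import CreateML
import Foundation

// Sketch: train an image classifier from a folder of labeled images
// (one sub-directory per class). Paths and names are placeholders.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/training-images"))

let classifier = try MLImageClassifier(trainingData: trainingData)

// Export as .mlmodel so Core ML can load it on-device.
try classifier.write(to: URL(fileURLWithPath: "/path/to/SimpleClassifier.mlmodel"))
```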
Baseline Projects
DONE
- Using a built-in model with Core ML
- Using a built-in on-device model with ML Kit
- Using the custom model for Vision with Core ML and ML Kit
- Object Detection with Core ML
TODO
- Object Detection with ML Kit
- Using built-in cloud model on ML Kit
- Landmark recognition
- Using the custom model for NLP with Core ML and ML Kit
- Using the custom model for Audio with Core ML and ML Kit
- Audio recognition
- Speech recognition
- TTS
Image Classification
Name | DEMO | Note |
---|---|---|
ImageClassification-CoreML | – | |
MobileNet-MLKit | – |
Object Detection & Recognition
Name | DEMO | Note |
---|---|---|
ObjectDetection-CoreML | – | |
TextDetection-CoreML | – | |
TextRecognition-MLKit | – | |
FaceDetection-MLKit | – |
Pose Estimation
Name | DEMO | Note |
---|---|---|
PoseEstimation-CoreML | – | |
PoseEstimation-TFLiteSwift | – | |
PoseEstimation-MLKit | – | |
FingertipEstimation-CoreML | – |
Depth Prediction
Name | DEMO | Note |
---|---|---|
DepthPrediction-CoreML | – | |
Semantic Segmentation
Name | DEMO | Note |
---|---|---|
SemanticSegmentation-CoreML | – |
Application Projects
Name | DEMO | Note |
---|---|---|
dont-be-turtle-ios | – | |
WordRecognition-CoreML-MLKit (preparing…) | – | Detects characters, finds the word being pointed at, and then recognizes it using Core ML and ML Kit. |
Annotation Tool
Name | DEMO | Note |
---|---|---|
KeypointAnnotation | – | Annotation tool for your own custom estimation dataset |
Create ML Projects
Name | Create ML DEMO | Core ML DEMO | Note |
---|---|---|---|
SimpleClassification-CreateML-CoreML | – | – | A simple classification using Create ML and Core ML |
Performance
Execution time = inference time + postprocessing time
Model (on iPhone X) | Inference Time (ms) | Execution Time (ms) | FPS |
---|---|---|---|
ImageClassification-CoreML | 40 | 40 | 23 |
MobileNet-MLKit | 120 | 130 | 6 |
ObjectDetection-CoreML | 100 ~ 120 | 110 ~ 130 | 5 |
TextDetection-CoreML | 12 | 13 | 30(max) |
TextRecognition-MLKit | 35~200 | 40~200 | 5~20 |
PoseEstimation-CoreML | 51 | 65 | 14 |
PoseEstimation-MLKit | 200 | 217 | 3 |
DepthPrediction-CoreML | 624 | 640 | 1 |
SemanticSegmentation-CoreML | 178 | 509 | 1 |
WordRecognition-CoreML-MLKit | 23 | 30 | 14 |
FaceDetection-MLKit | – | – | – |
Measure module
The measured inference/execution latency and FPS are displayed at the top of the screen.
If you have a more elegant way to measure performance, please suggest it in an issue!
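The measurement idea is simple: timestamp the start of inference, the end of inference, and the end of postprocessing, then derive latency and FPS from those marks. A minimal sketch of that idea is below; the type and method names are illustrative, not the repo's actual API.

```swift
import QuartzCore

// Sketch of a measurement helper: call startInference()/endInference() around the
// model call and endFrame() after postprocessing to get latency and FPS.
final class Measure {
    private var inferenceStart: CFTimeInterval = 0
    private var inferenceEnd: CFTimeInterval = 0
    private var lastFrameEnd: CFTimeInterval = 0

    func startInference() { inferenceStart = CACurrentMediaTime() }
    func endInference() { inferenceEnd = CACurrentMediaTime() }

    // Returns (inference ms, execution ms, fps) for the frame that just finished.
    func endFrame() -> (inferenceMS: Double, executionMS: Double, fps: Double) {
        let now = CACurrentMediaTime()
        let inferenceMS = (inferenceEnd - inferenceStart) * 1000
        let executionMS = (now - inferenceStart) * 1000
        let fps = lastFrameEnd > 0 ? 1.0 / (now - lastFrameEnd) : 0
        lastFrameEnd = now
        return (inferenceMS, executionMS, fps)
    }
}
```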
Implements
Name | Measure | Unit Test | Bunch Test |
---|---|---|---|
ImageClassification-CoreML | O | X | X |
MobileNet-MLKit | O | X | X |
ObjectDetection-CoreML | O | O | X |
TextDetection-CoreML | O | X | X |
TextRecognition-MLKit | O | X | X |
PoseEstimation-CoreML | O | O | X |
PoseEstimation-MLKit | O | X | X |
DepthPrediction-CoreML | O | X | X |
SemanticSegmentation-CoreML | O | X | X |
See also
- Core ML | Apple Developer Documentation
- Machine Learning – Apple Developer
- ML Kit – Firebase
- Apple’s Core ML 2 vs. Google’s ML Kit: What’s the difference?
- Machine Learning on iOS slide materials (Korean)
- MoT Labs Blog
WWDC
Core ML
- WWDC2020
- WWDC2019
  - WWDC2019 256 Session – Advances in Speech Recognition
  - WWDC2019 704 Session – Core ML 3 Framework
  - WWDC2019 228 Session – Creating Great Apps Using Core ML and ARKit
  - WWDC2019 232 Session – Advances in Natural Language Framework
  - WWDC2019 222 Session – Understanding Images in Vision Framework
  - WWDC2019 234 Session – Text Recognition in Vision Framework
- WWDC2018
- WWDC2017
Create ML and Turi Create
- WWDC2020
- WWDC2019
- WWDC2019 424 Session – Training Object Detection Models in Create ML
- WWDC2019 426 Session – Building Activity Classification Models in Create ML
- WWDC2019 420 Session – Drawing Classification and One-Shot Object Detection in Turi Create
- WWDC2019 425 Session – Training Sound Classification Models in Create ML
- WWDC2019 428 Session – Training Text Classifiers in Create ML
- WWDC2019 427 Session – Training Recommendation Models in Create ML
- WWDC2019 430 Session – Introducing the Create ML App
- WWDC2018
Common ML
- WWDC2020
- WWDC2019
- WWDC2018
- WWDC2016
Metal
- WWDC2020
- WWDC2020 10632 Session – Optimize Metal Performance for Apple Silicon Macs
- WWDC2020 10603 Session – Optimize Metal apps and games with GPU counters
- TECH-TALKS 606 Session – Metal 2 on A11 – Imageblock Sample Coverage Control
- TECH-TALKS 603 Session – Metal 2 on A11 – Imageblocks
- TECH-TALKS 602 Session – Metal 2 on A11 – Overview
- TECH-TALKS 605 Session – Metal 2 on A11 – Raster Order Groups
- TECH-TALKS 604 Session – Metal 2 on A11 – Tile Shading
- TECH-TALKS 608 Session – Metal Enhancements for A13 Bionic
- WWDC2020 10631 Session – Bring your Metal app to Apple Silicon Macs
- WWDC2020 10197 Session – Broaden your reach with Siri Event Suggestions
- WWDC2020 10615 Session – Build GPU binaries with Metal
- WWDC2020 10021 Session – Build Metal-based Core Image kernels with Xcode
- WWDC2020 10616 Session – Debug GPU-side errors in Metal
- WWDC2020 10012 Session – Discover ray tracing with Metal
- WWDC2020 10013 Session – Get to know Metal function pointers
- WWDC2020 10605 Session – Gain insights into your Metal app with Xcode 12
- WWDC2020 10602 Session – Harness Apple GPUs with Metal
AR
- WWDC2020
Examples
- Training
- Keras examples: https://keras.io/examples/
- Pytorch examples: https://github.com/pytorch/examples
- Inference
- TFLite examples: https://github.com/tensorflow/examples/tree/master/lite
- Pytorch Mobile iOS example: https://github.com/pytorch/ios-demo-app
- FritzLabs examples: https://github.com/fritzlabs/fritz-examples
- Models
- TensorFlow & TFLite models: https://tfhub.dev/
- Pytorch models: https://pytorch.org/hub/
- CoreML official models: https://developer.apple.com/machine-learning/models/