Simplifies machine learning by providing a framework to easily detect objects, text, or poses in an image.
Machine learning integrations can be difficult and time-consuming to set up, requiring a lot of boilerplate code and domain-specific knowledge. With SimpleML you just choose what you want to detect, and detect it.
Add to your build.gradle:
repositories {
    maven { url 'https://maven.jakebarnby.com' }
}

dependencies {
    implementation 'com.jakebarnby.simpleml:simpleml-objects:1.1.0-beta01'
    implementation 'com.jakebarnby.simpleml:simpleml-text:1.1.0-beta01'
    implementation 'com.jakebarnby.simpleml:simpleml-poses:1.1.0-beta01'
}
You can choose to detect:
- Objects
- Text
- Poses
There is a separate dependency for each so you can include one or all depending on your needs.
- TensorFlow Lite integration
- TensorFlow Lite Tasks integration
- Custom model support
There are three easy ways to use SimpleML:
- As a View

Add the view to your XML layout (all simpleml attributes are optional):
<com.jakebarnby.simpleml.objects.view.LocalObjectAnalyzerView
android:id="@+id/view"
android:layout_width="200dp"
android:layout_height="200dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:simpleml_analysisLocation="device"
app:simpleml_classificationEnabled="true"
app:simpleml_detectMultiple="true"
app:simpleml_detectionDispatcher="io"
app:simpleml_detectorMode="frame_stream"
app:simpleml_minimumConfidence="0.7" />
Attach a listener to the detector:

val objectAnalyzerView = findViewById<LocalObjectAnalyzerView>(R.id.view)
objectAnalyzerView.setOnNextDetectionListener { results: List<DetectedObject> ->
    // Handle the detected objects here
}
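For example, a listener body might log each result's labels and bounding box (a minimal sketch; the DetectedObject fields used here are described further below, and the log tag is arbitrary):

objectAnalyzerView.setOnNextDetectionListener { results: List<DetectedObject> ->
    results.forEach { detected ->
        // Each result carries label/confidence pairs and an optional bounding box
        detected.labels.forEach { (label, confidence) ->
            Log.d("SimpleML", "Saw $label at ${(confidence * 100).toInt()}% in ${detected.boundingBox}")
        }
    }
}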
- As a Fragment
val objectAnalyzerFragment = ObjectAnalyzerFragment.newInstance { results: List<DetectedObject> ->
    // Handle the detected objects here
}
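The returned instance attaches like any other fragment (a standard Android fragment transaction; R.id.fragment_container is a placeholder for a container in your own layout):

supportFragmentManager.beginTransaction()
    .replace(R.id.fragment_container, objectAnalyzerFragment)
    .commit()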
- As an Activity
ObjectDetector().stream(this) { results: List<DetectedObject> -> }
PoseDetector().stream(this) { results: List<DetectedPose> -> }
TextDetector().stream(this) { result: DetectedText -> }
Each detectable type has a corresponding result model that is returned from detection:
class DetectedObject(
// Pairs of label to confidence
var labels: List<Pair<String, Float>> = listOf(),
// The detected object's position
var boundingBox: Rect? = null,
)
class DetectedText(
// The raw detected text
var text: String? = null,
// The languages of the detected text
var detectedLanguages: List<String>? = null,
// The detected text broken into boxes, which contain lines, which contain words
var textBoxes: List<TextBox>? = null,
)
class DetectedPose(
// The detected landmark
var landmark: PoseLandmark? = null,
// The position of the detected landmark
var position: PointF? = null,
// Confidence that the landmark is in frame
var inFrameLikelihood: Float? = null,
)
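For instance, the streaming TextDetector shown above surfaces these fields directly (a minimal sketch; the log tag and formatting are arbitrary):

TextDetector().stream(this) { result: DetectedText ->
    // Raw text plus the languages it was detected in
    Log.d("SimpleML", "Read \"${result.text}\" in languages ${result.detectedLanguages}")
}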
Each detectable type has a corresponding Options type extending OptionsBase that can be used to configure detection parameters. All option sets are pre-configured with sane defaults, so you can safely ignore them if you want to detect and forget.
open class OptionsBase(
val analysisType: AnalysisType,
val analysisMode: AnalysisMode = AnalysisMode.FRAME_STREAM,
val analysisDispatcher: AnalysisDispatcher = AnalysisDispatcher.IO,
val analysisLocation: AnalysisLocation = AnalysisLocation.DEVICE,
) : Serializable
class ObjectOptions(
val minimumConfidence: Float = 0.5f,
val classificationEnabled: Boolean = true,
val detectMultiple: Boolean = true,
analysisMode: AnalysisMode = AnalysisMode.FRAME_STREAM,
analysisDispatcher: AnalysisDispatcher = AnalysisDispatcher.IO,
analysisLocation: AnalysisLocation = AnalysisLocation.DEVICE
) : OptionsBase(
AnalysisType.OBJECT,
analysisMode,
analysisDispatcher,
analysisLocation
)
class TextOptions(
val minimumConfidence: Float = 0.5f,
analysisMode: AnalysisMode = AnalysisMode.FRAME_STREAM,
analysisDispatcher: AnalysisDispatcher = AnalysisDispatcher.IO,
analysisLocation: AnalysisLocation = AnalysisLocation.DEVICE
) : OptionsBase(
AnalysisType.TEXT,
analysisMode,
analysisDispatcher,
analysisLocation
)
class PoseOptions(
analysisMode: AnalysisMode = AnalysisMode.FRAME_STREAM,
analysisDispatcher: AnalysisDispatcher = AnalysisDispatcher.IO,
analysisLocation: AnalysisLocation = AnalysisLocation.DEVICE
) : OptionsBase(
AnalysisType.POSE,
analysisMode,
analysisDispatcher,
analysisLocation
)
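For example, object detection can be restricted to a single, high-confidence match by overriding just two defaults (a sketch; how the options instance is handed to a detector or view is not shown here):

val options = ObjectOptions(
    minimumConfidence = 0.8f, // only report detections at 80%+ confidence
    detectMultiple = false,   // track one object at a time
)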
SimpleML is built with extensibility in mind and, as such, makes heavy use of generics. All functionality is extensible via the included base classes:
class YourAnalyzer : Analyzer<TDetector, TOptions, TInput, TResult>

class YourView : Camera2View<
    TDetector,
    TOptions,
    TInput,
    TResult,
    TOutResult>

class YourFragment : Camera2Fragment<
    YourAnalyzer,
    TDetector,
    TOptions,
    TInput,
    TResult,
    TOutResult>
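As a purely hypothetical sketch, a face-detection extension might bind concrete types into those parameters; FaceDetector, FaceOptions, and DetectedFace are placeholder types you would define yourself, and only the type-parameter shape comes from the signatures above:

// Hypothetical placeholder types; only the generic shape is taken from SimpleML
class FaceAnalyzer : Analyzer<FaceDetector, FaceOptions, Bitmap, DetectedFace>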