### Major Changes
- 83b7e6e: Standardize API patterns and coordinate structures across ML Kit modules

  - Separates model operations into three hooks with simpler APIs:
    - Loading the models (`useObjectDetectionModels`, `useImageLabelingModels`)
    - Initializing the provider (`useObjectDetectionProvider`, `useImageLabelingProvider`)
    - Accessing models for inference (`useObjectDetector`, `useImageLabeling`)
  - Implements consistent naming patterns to make the APIs more legible:
    - Removes the "RNMLKit" prefix from non-native types
    - Uses specific names for hooks (`useImageLabelingModels` instead of `useModels`)
    - Model configs are now `Configs` instead of `AssetRecords`
  - Moves base types into the `core` package to ensure consistency
  - Fixes an issue with bounding box placement on portrait/rotated images on iOS
  - Improves error handling and state management
  - Updates documentation to match the new API
#### Breaking Changes

##### Image Labeling

- Renamed `useModels` to `useImageLabelingModels` for clarity
- Renamed `useImageLabeler` to `useImageLabeling`
- Introduced a new `useImageLabelingProvider` hook for cleaner context management
- Added type-safe configurations with `ImageLabelingConfig`
- Renamed the model context provider from `ObjectDetectionModelContextProvider` to `ImageLabelingModelProvider`
Here's how to update your app:
**Fetching the provider**
```diff
- const MODELS: AssetRecord = {
+ const MODELS: ImageLabelingConfig = {
    nsfwDetector: {
      model: require("./assets/models/nsfw-detector.tflite"),
      options: {
        maxResultCount: 5,
        confidenceThreshold: 0.5,
      },
    },
  };

  function App() {
-   const { ObjectDetectionModelContextProvider } = useModels(MODELS)
+   const models = useImageLabelingModels(MODELS)
+   const { ImageLabelingModelProvider } = useImageLabelingProvider(models)

    return (
-     <ObjectDetectionModelContextProvider>
+     <ImageLabelingModelProvider>
        {/* Rest of your app */}
-     </ObjectDetectionModelContextProvider>
+     </ImageLabelingModelProvider>
    )
  }
```
**Using the model**
```diff
- const model = useImageLabeler("nsfwDetector")
+ const detector = useImageLabeling("nsfwDetector")

  const labels = await detector.classifyImage(imagePath)
```
##### Object Detection

- `useObjectDetectionModels` now requires an `assets` parameter
- `useObjectDetector` is now `useObjectDetection`
- Introduced a new `useObjectDetectionProvider` hook for context management
- Renamed and standardized type definitions:
  - `RNMLKitObjectDetectionObject` → `ObjectDetectionObject`
  - `RNMLKitObjectDetectorOptions` → `ObjectDetectorOptions`
  - `RNMLKitCustomObjectDetectorOptions` → `CustomObjectDetectorOptions`
- Added new types: `ObjectDetectionModelInfo`, `ObjectDetectionConfig`, `ObjectDetectionModels`
- Moved model configuration to typed asset records
- The default model is now included in the models type union
Here's how to update your app:
**Fetching the provider**
```diff
- const MODELS: AssetRecord = {
+ const MODELS: ObjectDetectionConfig = {
    birdDetector: {
      model: require("./assets/models/bird-detector.tflite"),
      options: {
        shouldEnableClassification: false,
        shouldEnableMultipleObjects: false,
      },
    },
  };

  function App() {
-   const { ObjectDetectionModelContextProvider } = useObjectDetectionModels({
-     assets: MODELS,
-     loadDefaultModel: true,
-     defaultModelOptions: DEFAULT_MODEL_OPTIONS,
-   })
+   const models = useObjectDetectionModels({
+     assets: MODELS,
+     loadDefaultModel: true,
+     defaultModelOptions: DEFAULT_MODEL_OPTIONS,
+   })
+
+   const { ObjectDetectionProvider } = useObjectDetectionProvider(models)

    return (
-     <ObjectDetectionModelContextProvider>
+     <ObjectDetectionProvider>
        {/* Rest of your app */}
-     </ObjectDetectionModelContextProvider>
+     </ObjectDetectionProvider>
    )
  }
```
**Using the model**
```diff
- const { models: { birdDetector } } = useObjectDetectionModels({
-   assets: MODELS,
-   loadDefaultModel: true,
-   defaultModelOptions: DEFAULT_MODEL_OPTIONS,
- })
+ const birdDetector = useObjectDetection("birdDetector")

  const objects = birdDetector.detectObjects(imagePath)
```
##### Face Detection

- Changed option naming conventions to match the ML Kit SDK patterns:
  - `detectLandmarks` → `landmarkMode`
  - `runClassifications` → `classificationMode`
- Changed the default `performanceMode` from `accurate` to `fast`
- Renamed the hook from `useFaceDetector` to `useFaceDetection`
- Renamed the context provider from `RNMLKitFaceDetectionContextProvider` to `FaceDetectionProvider`
- Added comprehensive error handling
- Added new state management with the `FaceDetectionState` type
Here's how to update your app:
**Using the detector**
```diff
  const options = {
-   detectLandmarks: true,
+   landmarkMode: true,
-   runClassifications: true,
+   classificationMode: true,
  }
```
**Using the provider**
```diff
- import { RNMLKitFaceDetectionContextProvider } from "@infinitered/react-native-mlkit-face-detection"
+ import { FaceDetectionProvider } from "@infinitered/react-native-mlkit-face-detection"

  function App() {
    return (
-     <RNMLKitFaceDetectionContextProvider>
+     <FaceDetectionProvider>
        {/* Rest of your app */}
-     </RNMLKitFaceDetectionContextProvider>
+     </FaceDetectionProvider>
    )
  }
```
**Using the hooks**
```diff
- const detector = useFaceDetector()
+ const detector = useFaceDetection()

  // useFacesInPhoto remains unchanged
  const { faces, status, error } = useFacesInPhoto(imageUri)
```
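The release introduces a `FaceDetectionState` type but doesn't list its members here. As a rough sketch of how status handling could look, assuming a string-union state (the values and the `statusMessage` helper below are illustrative assumptions, not part of the library API):

```typescript
// Hypothetical sketch only: the changelog adds a FaceDetectionState type
// but does not enumerate its values, so this union is an assumption.
type FaceDetectionState = "init" | "ready" | "detecting" | "error";

// Illustrative helper mapping each state to a user-facing message.
function statusMessage(status: FaceDetectionState, error?: string): string {
  if (status === "error") {
    return `Face detection failed: ${error ?? "unknown error"}`;
  }
  const messages: Record<Exclude<FaceDetectionState, "error">, string> = {
    init: "Loading face detector...",
    detecting: "Detecting faces...",
    ready: "Ready",
  };
  return messages[status];
}
```

In a component, the `status` and `error` values returned by `useFacesInPhoto` could be fed through a helper like this to drive loading and error UI.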
##### Core Module

- Introduced shared TypeScript interfaces: `ModelInfo<T>` and `AssetRecord<T>`
- Standardized the frame coordinate structure
- Implemented consistent type patterns
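The shared core shapes aren't spelled out in this release note. A minimal sketch of how `ModelInfo<T>` and `AssetRecord<T>` plausibly fit together, with field names inferred from the migration snippets above (the exact fields are assumptions, not copied from the `core` package):

```typescript
// Hypothetical sketch: shapes inferred from the migration examples above,
// not taken from the core package source.
interface ModelInfo<T> {
  model: number; // in React Native, require("./model.tflite") yields an asset id
  options?: T;
}

// A named map of models, parameterized by each module's options type.
type AssetRecord<T> = Record<string, ModelInfo<T>>;

interface ImageLabelerOptions {
  maxResultCount?: number;
  confidenceThreshold?: number;
}

// Mirrors the image-labeling config example above, with a placeholder asset id.
const MODELS: AssetRecord<ImageLabelerOptions> = {
  nsfwDetector: {
    model: 0, // placeholder for require("./assets/models/nsfw-detector.tflite")
    options: { maxResultCount: 5, confidenceThreshold: 0.5 },
  },
};
```

Per-module config types such as `ImageLabelingConfig` and `ObjectDetectionConfig` would then be specializations of `AssetRecord<T>` over their respective options types.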
- b668ab0: Upgrade to Expo 52
### Minor Changes
- b668ab0: Align podspec platform requirements with the Expo version
### Patch Changes
- Updated dependencies [b668ab0]
- Updated dependencies [83b7e6e]
- Updated dependencies [b668ab0]
- Updated dependencies [213f085]
  - @infinitered/[email protected]