At the previous working group, the driver for change to the standard was to build out support for additional biometrics (e.g. face and DNA). With the expansion to multiple biometrics in a NIST-based search request, the current v6.0 has no mechanism for describing which search modalities should be applied when a transaction such as a CPS is submitted. This matters where transactions are used in a B2B context: a capture taken for border purposes may contain face and fingers, but the sender may want a lights-out response and therefore expect a fingerprint-to-fingerprint search only.

We also need to cater for a multi-modal search, where the outputs of the different search modalities are combined to improve the search outcome (and confidence). This implies potential new data fields in the User-defined Descriptive Text Record (Type-2) or its equivalent area. It would also be prudent to clarify the default behaviour if this field is not included.

Given that future use cases may also follow the current Prüm model (i.e. verification done by the sender), this would support that too: not all organisations have a facial matching system, but many still hold facial stores, so they can share a facial image yet cannot handle a facial search response containing multiple candidates of different identities.
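To make the proposal concrete, here is a minimal sketch of how a "requested search modalities" field and its absent-field default might behave. Everything here is hypothetical: the field, the `Modality` codes, and the default rule (search every modality present in the transaction) are illustrative assumptions, not part of v6.0 or any agreed draft.

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    # Hypothetical modality codes for illustration only
    FINGERPRINT = "FGP"
    FACE = "FAC"
    DNA = "DNA"

@dataclass
class SearchRequest:
    """Sketch of a CPS-style request carrying a hypothetical
    Type-2 'requested search modalities' field."""
    # Empty tuple models the field being absent from the transaction
    requested_modalities: tuple = ()

    def effective_modalities(self, records_present: set) -> list:
        """Resolve which searches to run.

        If the field is present, search only the requested modalities
        that actually have records in the transaction. If absent, fall
        back to a default; here we assume 'search everything present',
        but that default is exactly what the standard would need to
        pin down.
        """
        if self.requested_modalities:
            return [m for m in self.requested_modalities if m in records_present]
        return list(records_present)

# Border-capture example: transaction contains face and fingers,
# but the sender asks for a fingerprint-only lights-out search.
req = SearchRequest(requested_modalities=(Modality.FINGERPRINT,))
print(req.effective_modalities({Modality.FINGERPRINT, Modality.FACE}))
```

Under this sketch, the fingerprint-only request above resolves to a fingerprint search even though a face record is present, while an unset field would trigger searches on both modalities, which shows why the default needs to be stated explicitly.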