The E2Data Biometric Security service will meet the dual challenge of intensive computational workloads and differentiated quality of service to deliver a highly available and performant anti-spoofing service. A typical use case is a third-party secure access controller for high-risk online transactions, such as setting up additional payees on a bank account. The liveness (anti-spoofing) component verifies that an actual human is present and that a real face is being shown to the camera. It is therefore capable of detecting mask attacks (in which the attacker wears a realistic mask of someone else) and replay attacks (in which the attacker presents a video of someone else to the camera).
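As an illustration only, the sketch below shows one way a liveness decision could be wired up: a per-frame score is produced by a pluggable model and aggregated over the streamed clip. The class names (`LivenessCheck`, `FrameScorer`, `Frame`) and the threshold value are hypothetical placeholders, not the actual E2Data anti-spoofing implementation.

```java
import java.util.List;

// Illustrative sketch only: per-frame liveness scoring aggregated over a clip.
// FrameScorer, Frame and the 0.85 threshold are hypothetical placeholders,
// not the E2Data anti-spoofing implementation.
public class LivenessCheck {

    /** A single decoded video frame (raw pixel buffer). */
    public record Frame(byte[] pixels, int width, int height) {}

    /** Pluggable per-frame model: returns a score in [0,1], higher = more likely live. */
    public interface FrameScorer {
        double score(Frame frame);
    }

    private static final double LIVENESS_THRESHOLD = 0.85; // assumed cut-off

    private final FrameScorer scorer;

    public LivenessCheck(FrameScorer scorer) {
        this.scorer = scorer;
    }

    /**
     * Scores every frame of the streamed clip and accepts the session only if the
     * average score clears the threshold. A replayed video or a static mask would be
     * expected to produce consistently low per-frame scores.
     */
    public boolean isLive(List<Frame> clip) {
        double average = clip.stream()
                .mapToDouble(scorer::score)
                .average()
                .orElse(0.0);
        return average >= LIVENESS_THRESHOLD;
    }
}
```

The per-frame scoring loop is exactly the part whose cost grows with video length and resolution, which is why offloading it to accelerators matters for this use case.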
Real-world adoption of such a service has been impeded by the high compute cost of processing video streamed from user sessions. The amount of data and computation required is orders of magnitude greater than for previous methods. The E2Data Biometric Security service will be able to support this workload.
Anti-spoofing, Security
Biometric authentication using facial recognition is fast becoming a mainstream method of authenticating customers for high-value transactions, such as the creation of bank accounts, the issuing of travel visas and unmanned border crossing by pre-registered users. Such processes are coupled with tight SLAs to ensure the best possible user experience. E2Data will both optimize the cost base of the platform and automate the performance optimization of code, something that until now has required skilled and expensive developers.
Lower deployment costs for advanced security features will lead to increased security for customers. Furthermore, the significant acceleration offered by the E2Data platform will reduce the time needed to authenticate customers, improving the user experience.
E2Data changes the de-facto scale-out or homogeneous scale-up model, in which applications are partitioned and sent for execution on CPU nodes. Instead, E2Data intelligently identifies which parts of an application can be hardware accelerated and dynamically, based on the currently available hardware resources, sends tasks for execution on the appropriate nodes. The scheduling, compilation, and execution of the tasks take place "on the fly", without requiring Big Data practitioners to write non-portable, low-level code for each specific device or accelerator. This ultimately translates to: a) higher performance for Big Data execution, b) energy-efficient execution, c) significant cost improvements for cloud providers and end-users, and d) enhanced scalability and performance portability of Big Data applications.
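The following sketch is a minimal illustration of this idea under stated assumptions, not the E2Data scheduler or the TornadoVM API: a task is described once in plain Java, and a runtime picks a target device (GPU, FPGA or CPU) at execution time based on what is currently free. The class names (`TaskScheduler`, `Device`) and the selection policy are hypothetical.

```java
import java.util.List;
import java.util.function.Supplier;

// Minimal illustration of dynamic device selection; class names are hypothetical
// and this is not the actual E2Data scheduler or TornadoVM API.
public class TaskScheduler {

    /** A hardware resource that may or may not be free at scheduling time. */
    public record Device(String name, boolean accelerator, boolean available) {}

    private final List<Device> cluster;

    public TaskScheduler(List<Device> cluster) {
        this.cluster = cluster;
    }

    /**
     * Runs the task on the first available accelerator; if none is free, it falls
     * back transparently to any available CPU node. The task itself is written once,
     * with no device-specific low-level code.
     */
    public <T> T execute(String taskName, Supplier<T> task) {
        Device target = cluster.stream()
                .filter(d -> d.available() && d.accelerator())
                .findFirst()
                .orElseGet(() -> cluster.stream()
                        .filter(Device::available)
                        .findFirst()
                        .orElseThrow(() -> new IllegalStateException("no device available")));

        System.out.printf("Scheduling task '%s' on %s%n", taskName, target.name());
        return task.get(); // in a real system the kernel would be JIT-compiled for the target
    }

    public static void main(String[] args) {
        TaskScheduler scheduler = new TaskScheduler(List.of(
                new Device("gpu-0", true, false),  // busy GPU
                new Device("fpga-0", true, true),  // free FPGA -> chosen
                new Device("cpu-0", false, true)));

        int[] result = scheduler.execute("face-histogram",
                () -> new int[256]); // placeholder for an accelerated kernel
    }
}
```

The point of the sketch is the separation of concerns: the application code supplies only the task body, while the decision about where it runs is made by the runtime from the live view of the cluster, which is what removes the need for hand-written, device-specific code.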