SmartDataLake
Data lakes are raw-data ecosystems in which large amounts of diverse data are retained and coexist. They facilitate self-service analytics for flexible, fast, ad hoc decision making.
I-BiDaaS: Industrial-Driven Big Data as a Self-Service Solution
The I-BiDaaS project aims to empower end-users to utilize and interact with big data technologies more easily.
HistoGrapher
HistoGrapher is a software platform for building holistic solutions that integrate multimodal histopathology data.
Specification of an end-to-end Big Data as-a-self-service platform
This innovation corresponds to the specification of an end-to-end Big Data as-a-self-service platform, developed within the I-BiDaaS project.
Parallelization of Constraint Satisfaction Problems (CSP) solver
The CSP solver is the heart of IBM's Data Fabrication Platform technology. The generated data is the solution of a constraint satisfaction problem that the Platform engine creates from the user-defined data model.
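This entry describes a mechanism compact enough to sketch. Below is a minimal, hypothetical illustration in plain Python (not IBM's engine): field domains and cross-field rules stand in for a user-defined data model, and every solution of the resulting constraint satisfaction problem becomes one fabricated record. All names and constraints are invented for illustration.

```python
import itertools

# Hypothetical user-defined data model: synthetic customer records with
# three fields, plus constraints that tie the fields together.
AGES = range(18, 100)
PLANS = ["basic", "standard", "premium"]
DISCOUNTS = [0, 5, 10, 15]

def satisfies(age, plan, discount):
    # Invented constraints standing in for the user's data model.
    if plan == "premium" and age < 21:     # premium requires age >= 21
        return False
    if plan == "basic" and discount > 10:  # basic plans cap the discount at 10%
        return False
    return True

def fabricate(n):
    """Enumerate CSP solutions; each solution is one consistent synthetic record."""
    solutions = (rec for rec in itertools.product(AGES, PLANS, DISCOUNTS)
                 if satisfies(*rec))
    return list(itertools.islice(solutions, n))

print(fabricate(5))
```

The brute-force enumeration is only meant to show why a CSP solution is, by construction, a record that respects every rule of the model; the actual Platform compiles the model into constraints for an industrial-strength solver.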
euBusinessGraph – EBG Marketplace
The EBG “company data marketplace” is a place where company data providers can advertise their data, and data consumers can search, analyse, and compare data from the data providers, all in a unified way.
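As a rough sketch of what "all in a unified way" can mean in practice, the hypothetical Python snippet below maps records from two differently shaped provider feeds onto one common schema so they can be searched and compared side by side; every provider, field, and value here is invented for illustration.

```python
# Hypothetical provider payloads, each with its own native schema.
provider_a = [{"companyName": "Acme GmbH", "country_code": "DE", "employees": 120}]
provider_b = [{"name": "Umbrella SL", "country": "Spain", "staff_count": 45}]

COUNTRY_NAMES = {"DE": "Germany"}

def normalise_a(rec):
    return {"name": rec["companyName"],
            "country": COUNTRY_NAMES[rec["country_code"]],
            "employees": rec["employees"],
            "source": "provider_a"}

def normalise_b(rec):
    return {"name": rec["name"],
            "country": rec["country"],
            "employees": rec["staff_count"],
            "source": "provider_b"}

# One unified catalogue over both providers, ready to search and compare.
catalogue = [normalise_a(r) for r in provider_a] + [normalise_b(r) for r in provider_b]
print([c for c in catalogue if c["employees"] > 40])
```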
Multidimensional Storage with Efficient Sampling (MuSES)
Our technology organizes data according to their multidimensional attributes while building stratified samples at high speed.
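A minimal sketch of the idea, assuming nothing about MuSES internals: records are bucketed by a multidimensional key into strata, and each stratum is sampled at the same rate, so even rare attribute combinations stay represented in the sample.

```python
import random
from collections import defaultdict

def stratified_sample(records, key, rate):
    """Group records by their multidimensional attributes (the stratum key),
    then sample every stratum at the same rate."""
    strata = defaultdict(list)
    for rec in records:
        strata[key(rec)].append(rec)
    sample = []
    for stratum in strata.values():
        k = max(1, round(len(stratum) * rate))
        sample.extend(random.sample(stratum, k))
    return sample

# Invented records with two dimensions: region and device type.
records = [{"region": r, "device": d, "value": random.random()}
           for r in ("EU", "US") for d in ("mobile", "desktop") for _ in range(100)]
sample = stratified_sample(records, key=lambda r: (r["region"], r["device"]), rate=0.1)
print(len(sample))  # ~40: 10% of each of the four strata
```

MuSES builds such samples while organizing the storage itself; the sketch only shows the sampling logic, not the high-speed storage layout.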
Blockchain network for exchanging 3D personal data extracted from medical images and body scanners
A private blockchain network that includes a metrics catalogue. Data customers are connected to data providers and can configure SQL queries to obtain individual anonymised data or aggregated data.
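As an illustration of the two query modes, the hypothetical sketch below runs against an in-memory SQLite table; the schema, column names, and values are assumptions, not the network's actual metrics catalogue.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE body_metrics "
             "(subject_id TEXT, country TEXT, height_cm REAL, chest_cm REAL)")
conn.executemany("INSERT INTO body_metrics VALUES (?, ?, ?, ?)", [
    ("a1", "ES", 178.2, 98.5),
    ("b2", "ES", 165.0, 91.0),
    ("c3", "DE", 181.4, 102.3),
])

# Individual anonymised data: pseudonymous subject IDs, no direct identifiers.
individual = conn.execute(
    "SELECT subject_id, height_cm, chest_cm FROM body_metrics WHERE country = 'ES'"
).fetchall()

# Aggregated data: statistics over a cohort instead of single subjects.
aggregated = conn.execute(
    "SELECT country, COUNT(*), AVG(height_cm) FROM body_metrics GROUP BY country"
).fetchall()

print(individual)
print(aggregated)
```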
COVID-19 Data Portal
The aim of the COVID-19 Data Portal is to facilitate data sharing and analysis, and to accelerate coronavirus research.
DEEPaaS - Deep Learning for everybody
The key concept proposed in the DEEP Hybrid DataCloud project is the need to support intensive computing techniques that require specialized HPC hardware, such as GPUs or low-latency interconnects, to explore very large datasets.
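As a minimal sketch of that concept (not DEEP project code), the PyTorch snippet below runs one and the same training step on a commodity CPU or, when one is available, on a GPU; the model and data are placeholders.

```python
import torch

# Pick the specialized hardware when present, fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-ins for one shard of a very large dataset.
batch = torch.randn(64, 128, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(batch), labels)
loss.backward()
optimizer.step()
print(f"one step on {device}: loss={loss.item():.3f}")
```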