Cloud and edge-based systems have a growing appetite for privacy-preserving computation over distributed sensitive data. Most existing cryptographic solutions perform poorly when executing complex inference tasks while hiding both the input data and the logic of the evaluated functions. This shortcoming is especially serious in domains such as healthcare analytics and financial modeling, where data privacy and model protection are paramount. Although secure multiparty computation (SMPC) and functional encryption (FE) each hold promise, current implementations are often either not scalable or not auditable end to end under adversarial models. This work presents a hybrid framework that fuses FE with SMPC to enable private function evaluation (PFE) in decentralized environments. The architecture supports encrypted expert inference through a trust-weighted cryptographic consensus layer, dynamic key management, and function-specific policy enforcement. An adaptive fusion of secure execution and traceable audit logging ensures both privacy and compliance without sacrificing computational tractability. Experimental validation shows that the system reduces decision latency by up to 18% relative to standard FE baselines, improves leakage resistance under semi-honest and collusion-based attacks by 23%, and achieves auditability scores of 87% in real-world simulation settings. By enabling the execution of confidential functions with built-in explainability and regulatory transparency, the proposed system lays a foundation for secure AI-as-a-service platforms. Its compatibility with edge deployments and its extensibility toward zero-knowledge and post-quantum cryptography make it a strong candidate for the next generation of trust-aware decentralized computation.