HTTPS MITM Attacks Based on Shared TLS Certificates: the HTTPS Context Confusion Attack (SCC Attack)
Measurement of CFI solutions’ security
FuzzGuard: Filtering out Unreachable Inputs in Directed Grey-box Fuzzing through Deep Learning
Poison Over Troubled Forwarders: A Cache Poisoning Attack Targeting DNS Forwarding Devices
A cache poisoning attack targeting DNS forwarders.
Improve fuzzing efficiency with lightweight data flow analysis.
Fuzzing Android Binder services with automated interface analysis.
AI-based Side Channel and Covert Channel Detection.
Generate adversarial Chinese texts with Glyph and Pinyin mutation.
Abstract: Recently, AI security has drawn significant attention from both academia and industry, and a wide variety of adversarial attacks and defenses have emerged in quick succession. Now that more and more AI systems have been, or are being, deployed, it is time to comprehensively understand how well these attacks perform against real-world systems. Furthermore, in security-critical applications, beyond empirical evaluation, it is also important to understand and quantify the security space of deep models. In this talk, based on our previous research, I will introduce several AI security projects, as well as some recent interesting results on adversarial example transferability and robustness quantification.
A Large-Scale Empirical Study on Vulnerability Distribution within Projects and the Lessons Learned
Empirical Study on Vulnerability Distribution within Projects.
Amplification Attacks Based on HTTP Range Requests
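For context, an HTTP range request asks a server for only part of a resource via the Range header; amplification can arise when an intermediary (e.g. a CDN) fetches far more from the origin than the client requested. The sketch below is purely illustrative (the header-building helper, sizes, and amplification calculation are assumptions for exposition, not the paper's methodology):

```python
# Illustrative sketch of why HTTP range requests can amplify traffic
# (hypothetical helper functions and example sizes, not the paper's method).

def build_range_header(start: int, end: int) -> dict:
    """Build a Range header requesting bytes start..end (inclusive)."""
    return {"Range": f"bytes={start}-{end}"}

def amplification_factor(full_size: int, start: int, end: int) -> float:
    """Ratio of bytes an intermediary might fetch upstream (the full
    object) to the bytes the client actually asked for."""
    requested = end - start + 1
    return full_size / requested

# Client asks for 1 KiB; if the intermediary fetches a 10 MiB object
# from the origin, origin-side traffic is 10240x the client request.
headers = build_range_header(0, 1023)
factor = amplification_factor(10 * 2**20, 0, 1023)
print(headers["Range"], factor)  # bytes=0-1023 10240.0
```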