Recently, AI security has drawn significant attention from both academia and industry, and a wide variety of adversarial attacks and defenses have sprung up like bamboo shoots. Now that more and more AI systems have been, and are being, deployed, it is time to comprehensively understand how well these attacks perform against real-world systems. Furthermore, in security-critical applications, beyond empirical evaluation, it is also important to understand and quantify the security space of deep models. In this talk, drawing on our previous research, I will introduce several AI security projects, as well as some recent results on adversarial example transferability and robustness quantification.